Saturday 10 December 2011

UNDERGROUND HELL IN RUSSIA

Is hell a literal place, or symbolic? The Bible talks about hell several times. But I would like to share with you this true-to-life experience of a scientist named Dr. Azzacove from Russia, who did not believe in God, the Bible, or hell. The news appeared in the well-respected Finnish newspaper Ammenusastia.

Here's what happened: a group of geological scientists was drilling a hole 14.4 kilometers deep in Siberia, … then, after using their super-sensitive microphones to listen for sounds underground, they were shocked by what they heard. First they heard a high-pitched sound, which they thought came from their drilling equipment. After some adjustments, they heard terrifying screams… not the screams of a single human, but the screams of millions of human voices, screams of pain.
Dr. Azzacove, the project manager of the scientists, said that aside from the screams, they were also surprised by the very high temperature, around 1,100 degrees Celsius. They were afraid to continue the project, and about half of the scientists quit out of fear.
Dr. Azzacove was a communist who did not believe in God or the Bible, but after this event he now believes that hell really exists! If you want to download the hell screaming sounds in MP3 format… click here. You can put that MP3 file on your cellphone or iPod so you can share it with others.

NEW DISCOVERY BY SALAMI TAIWO

1) The tooth is the only part of the human body that can't repair itself.
2) In an average adult, the volume of blood is one eleventh of the body weight, or between 5 and 6 quarts (a quick check of this arithmetic follows the list).
3) The feet account for one quarter of all the bones in the human body: that is 52 bones in the feet alone.
4) The first toy balloon, made of vulcanized rubber, was thought of by someone in the J.G. Ingram company in London, England in 1847.
5) A cough releases an explosive charge of air that moves at speeds up to 60 mph.
6) Do not cut or push back your toenail cuticles; cuticles protect you against bacterial infection.
7) Saturn's rings are about 500,000 miles in circumference but only about a foot thick.
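
Fact 2 is easy to sanity-check with a few lines of arithmetic. Here is a minimal Python sketch, assuming a hypothetical 60 kg adult and treating blood density as roughly 1 kg per litre:

```python
# Quick check of fact 2: blood volume is about one eleventh of body weight.
# The 60 kg adult is a made-up example; 1 US quart is about 0.946 litres.

body_weight_kg = 60
blood_kg = body_weight_kg / 11        # one eleventh of body weight
blood_litres = blood_kg               # blood density is roughly 1 kg per litre
blood_quarts = blood_litres / 0.946   # convert litres to US quarts

print(f"Blood volume: {blood_litres:.1f} L, about {blood_quarts:.1f} quarts")
# Blood volume: 5.5 L, about 5.8 quarts -- within the quoted 5-6 quart range
```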

Friday 18 November 2011

WHY WE YAWN

Yawning, which opens the sinuses located to the left and right of the nose, acts to cool the brain when it gets too hot.
Excessive yawning, the researchers argue, appears to be a symptom of conditions that increase brain or core temperature, such as damage to the central nervous system. In addition, fits of yawning often precede epileptic seizures and migraines. Therefore, the authors say, understanding the physiological purpose of the reflex could have medical relevance.
Earlier work by the authors showed that the brains of mice increased in temperature just before a yawn and decreased directly after. The authors propose that the mucus within the sinuses constantly evaporates and, like a refrigerator, cools the surrounding blood vessels and cerebrospinal fluid. A yawn, they suggest, would amplify this process by stretching the jaw, which flexes the walls of the sinuses, bringing new air into them and rapidly cooling an overheated brain.

Tuesday 15 November 2011

How Blood Cells Thwart Malaria

Two hemoglobin mutations, including one that causes sickle cell anemia, may protect people from severe malaria by gumming up the cellular machinery the parasite uses to transmit deadly proteins to the cell surface. The findings, published today (November 10) in Science Express, suggest potential ways to fight the deadly disease.
“It’s a great study,” said Rick Fairhurst, who studies malaria pathogenesis and immunity at the National Institute of Allergy and Infectious Diseases, and was not involved in the study. “It really takes us a huge, giant leap forward.” By showing what mutations enable cells to avoid the deadliest consequences of malaria, the research may also point to potential treatment targets, he added.
For decades, researchers have known that people who carry a gene for sickle cell anemia are highly resistant to dying from malaria. Research has shown that it is a mutation in one of their hemoglobin genes (the genes that code for the oxygen-transporting protein in red blood cells) that is responsible for the fortuitous effect, and that other mutations in that same gene are also protective. But despite years of research, no one knew why.
Fairhurst and his colleagues published a paper showing that compared to normal cells, red blood cells carrying two hemoglobin mutations had lower surface concentrations of a virulent, sticky protein produced by the malaria parasite, Plasmodium falciparum. The protein, which is normally shuttled from the parasite's protein-making factory inside the cell to the cell surface, prevents the body from clearing infected blood cells. But in the blood cells with mutated hemoglobin, that didn't seem to happen: the parasite proteins never made it to the surface of the cells.
To find out why, parasitologist Michael Lanzer of Heidelberg University in Germany and his colleagues flash froze red blood cells with and without the mutations and used electron microscopy to visualize the cells before and after infection.
They found that uninfected cells contained short pieces of actin that help keep the membrane skeleton rigid. After infection, in normal cells, long actin filaments appeared, linking a cellular component made by the parasite called the Maurer’s cleft, which looks a bit like a stack of pancakes, to the cell membrane. “We concluded that the parasite mines that actin from the membrane skeleton of the host, and uses this mined actin to generate actin filaments of its own design,” Lanzer said—namely to transmit the proteins to the outside surface of the red blood cell.
By contrast, in cells with mutated hemoglobin, the Maurer’s cleft looked more like a big blob and the disordered actin filaments did not connect the cleft to the cell membrane. Somehow, it seemed, the mutations were preventing the parasite from setting up its protein factory effectively, thereby reducing protein transport outside the cell.
The team also took a closer look at the hemoglobin, and found that the difference between the mutated and wild-type cells was that mutated forms were more easily oxidized. When they placed actin filaments in the presence of both the normal and mutated copies of hemoglobin, the researchers found that mutated forms of hemoglobin led to shorter actin chains than wild-type hemoglobin. Actin placed with oxidized hemoglobin similarly failed to form long chains, suggesting that oxidation was indeed responsible for the difference in the parasite's protein machinery seen in cells with mutated and normal hemoglobin.
Taken together, the findings suggest that the hemoglobin mutations blunt the deadliness of malaria because the oxidized hemoglobin inhibits actin reorganization, thereby preventing the malaria parasite from shuttling its proteins to the surface of red blood cells, Lanzer said.
Future work should confirm that the same effects are seen in cells that carry only one copy of the mutated hemoglobin, since the vast majority of people who carry these mutations and have a survival advantage against malaria are heterozygous for the mutated hemoglobin, Fairhurst said.
Understanding how the body gets around malaria’s deadliness could guide the design of drug therapies, he added, for instance by finding small molecules that inhibit the parasite’s protein trafficking machinery.
“Mother Nature had 10,000 years or so to mutate the genome in ways that would actually protect against death,” Fairhurst said. “We want to use these mutations to teach us something about how you protect children.”

Thursday 27 October 2011

EVOLUTION OF MAN

Evolution Of Man - What is it?
The modern theory concerning the evolution of man proposes that humans and apes derive from an apelike ancestor that lived on earth a few million years ago. The theory states that man, through a combination of environmental and genetic factors, emerged as a species to produce the variety of ethnicities seen today, while modern apes evolved on a separate evolutionary pathway. Perhaps the most famous proponent of evolutionary theory is Charles Darwin (1809-82) who authored The Origin of Species (1859) to describe his theory of evolution. It was based largely on observations which he made during his 5-year voyage around the world aboard the HMS Beagle (1831-36). Since then, mankind's origin has generally been explained from an evolutionary perspective. Moreover, the theory of man's evolution has been and continues to be modified as new findings are discovered, revisions to the theory are adopted, and earlier concepts proven incorrect are discarded.

Evolution Of Man - Concepts in Evolutionary Theory
The currently accepted theory of the evolution of man rests on three major principles. These principles hinge on the innate ability of all creatures to pass on their genetic information to their offspring through the reproductive process. (An alternative explanation for the similarities between species, a common designer, is discussed under the scientific evidence below.)

The first tenet is microevolution, the occurrence and build-up of mutations in the genetic sequence of an organism. Mutations are predominantly random and can occur naturally through errors in the reproductive process or through environmental impacts such as chemicals or radiation.

The second tenet of evolution is natural selection. Natural selection is a natural mechanism by which the fittest members of a species survive to pass on their genetic information, while the weakest are eliminated (die off) because they are unable to compete in the wild. Natural selection is often termed "survival of the fittest" or "elimination of the weakest."

The third tenet is speciation, which occurs when members of a species mutate to the point where they are no longer able to breed with other members of the same species. The new population becomes a reproductively isolated community that is unable to breed with its former community. Through speciation, the genes of the new population become isolated from the previous group.

Evolution Of Man - Scientific Evidence
The theory of the evolution of man is supported by a set of independent observations within the fields of anthropology, paleontology, and molecular biology. Collectively, they depict life branching out from a common ancestor through gradual genetic changes over millions of years, commonly known as the "tree of life." Although accepted in mainstream science as altogether factual and experimentally proven, a closer examination of the evidence reveals some inaccuracies and reasonable alternative explanations. This has led a growing number of scientists to dissent from the Darwinian theory of evolution for its inability to satisfactorily explain the origin of man.

One of the major evidences for the evolution of man is homology, that is, the similarity of either anatomical or genetic features between species. For instance, the resemblance between the skeletal structures of apes and humans, together with the homologous genetic sequences in each species, is cited as strong evidence for common ancestry. This argument rests on the major assumption that similarity equals relatedness; in other words, the more alike two species appear, the more closely they are related to one another. This is known to be a poor assumption. Two species can have homologous anatomy even though they are not related in any way. This is called "convergence" in evolutionary terms. It is now known that homologous features can be generated from entirely different gene segments within different, unrelated species. The reality of convergence implies that anatomical features arise because of the need for specific functionality, which is a serious blow to the concept of homology and ancestry. An alternative explanation for homology is a common designer: according to this reasoning, the similarities in anatomical features between species point to a blueprint used by a Creator/Designer.

Additionally, the evolution of man from ape-like ancestors is often argued on the grounds of comparative anatomy within the fossil record. Yet the fossil record indicates more stability in the forms of species than the slow or even drastic changes that would indicate intermediate stages between modern species. The "missing links" are missing. And unfortunately, the field of paleoanthropology has been riddled with fraudulent claims of finding the missing link between humans and primates, to the extent that fragments of human skeletons have been combined with those of other species, such as pigs and apes, and passed off as legitimate. Although genetic variability is seen across all peoples, the process of natural selection leading to speciation is disputed. Research challenging the accepted paradigm continues to surface, raising significant questions about the certainty of evolution as the origin of man.


Evolution Of Man - The Scrutiny
The theory concerning the evolution of man is under increased scrutiny due to the persistence of gaps in the fossil record, the inability to demonstrate "life-or-death" advantageous genetic mutations, and the lack of experiments or observations to truly confirm the evidence for speciation. Overall, the evolution of man prevails as the accepted paradigm on the origin of man within the scientific community. This is not because it has been proven scientifically, but because alternative viewpoints bring with them metaphysical implications which go against the modern naturalistic paradigm. Nevertheless, a closer examination of the evidence reveals evolution to be increasingly less scientific and more reliant upon beliefs, not proof.

Thursday 20 October 2011

A BRIEF HISTORY OF THE INTERNET

The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later UCLA developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line's circuit switching was inadequate. Kleinrock's packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries and many more left unnamed here are the real founders of the Internet.
When the late Senator Ted Kennedy heard in 1968 that the pioneering Massachusetts company BBN had won the ARPA contract for an "interface message processor (IMP)," he sent a congratulatory telegram to BBN for their ecumenical spirit in winning the "interfaith message processor" contract.
The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA) which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.

Who was the first to use the Internet?

Charley Kline at UCLA sent the first packets on ARPANET as he tried to connect to Stanford Research Institute on Oct 29, 1969. The system crashed as he reached the G in LOGIN!
The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes.
The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.

Did Al Gore invent the Internet?

According to a CNN transcript of an interview with Wolf Blitzer, Al Gore said, "During my service in the United States Congress, I took the initiative in creating the Internet." Al Gore was not yet in Congress in 1969 when ARPANET started, or in 1974 when the term Internet first came into use. Gore was elected to Congress in 1976. In fairness, Bob Kahn and Vint Cerf acknowledge in a paper titled Al Gore and the Internet that Gore has probably done more than any other elected official to support the growth and development of the Internet from the 1970s to the present.
E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFCs were available electronically to anyone who had use of the ftp protocol.
Libraries began automating and networking their catalogs in the late 1960s independent from ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the '60s and '70s. In the mid 1970s more regional consortia from New England, the Southwest states, and the Middle Atlantic states, etc., joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270 and only many years later, through the web. See The History of OCLC
Ethernet, a protocol for many local networks, appeared in 1974, an outgrowth of Harvard student Bob Metcalfe's dissertation on "Packet Networks." The dissertation was initially rejected by the University for not being analytical enough. It later won acceptance when he added some more equations to it.
The Internet matured in the 70's as a result of the TCP/IP architecture first proposed by Bob Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout the 70's. It was adopted by the Defense Department in 1980 replacing the earlier Network Control Protocol (NCP) and universally adopted by 1983.
The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.
Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.
In 1986, the National Science Foundation funded NSFNet as a cross country 56 Kbps backbone for the Internet. They maintained their sponsorship for nearly a decade, setting rules for its non-commercial government and research uses.
As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for non-technical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people in universities in particular. Other departments besides the libraries, computer, physics, and engineering departments found ways to make good use of the nets--to communicate with colleagues around the world and to share files and resources.
While the number of sites on the Internet was small, it was fairly easy to keep track of the resources of interest that were available. But as more and more universities and organizations--and their libraries-- connected, the Internet became harder and harder to track. There was more and more need for tools to index the resources that were available.
The first effort, other than library catalogs, to index the Internet was created in 1989, when Peter Deutsch and his crew at McGill University in Montreal created an archiver for ftp sites, which they named Archie. This software would periodically reach out to all known openly available ftp sites, list their files, and build a searchable index of the software. The commands to search Archie were unix commands, and it took some knowledge of unix to use it to its full capability.
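
To make the idea concrete, here is a toy Python sketch of what an Archie-style indexer does: gather file listings from known sites and build a searchable index from filenames to locations. The site names and files below are hypothetical; the real Archie polled public ftp servers.

```python
# Toy Archie-style index: map filenames to the sites and paths where they live.
listings = {
    "ftp.example.edu": ["gnu/emacs-18.59.tar.Z", "pub/rfc/rfc791.txt"],
    "archive.example.org": ["pub/games/tetris.tar.Z", "pub/rfc/rfc793.txt"],
}

index = {}
for site, files in listings.items():
    for path in files:
        name = path.rsplit("/", 1)[-1]              # index by bare filename
        index.setdefault(name, []).append(f"{site}:/{path}")

def search(term):
    """Return every site:path whose filename contains the search term."""
    return [loc for name, locs in index.items() if term in name for loc in locs]

print(search("rfc"))    # finds rfc791.txt and rfc793.txt
```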

McGill University, which hosted the first Archie, found out one day that half the Internet traffic going into Canada from the United States was accessing Archie. Administrators were concerned that the University was subsidizing such a volume of traffic, and closed down Archie to outside access. Fortunately, by that time, there were many more Archies available.
At about the same time, Brewster Kahle, then at Thinking Machines Corp., developed his Wide Area Information Server (WAIS), which would index the full text of files in a database and allow searches of the files. There were several versions with varying degrees of complexity and capability developed, but the simplest of these were made available to everyone on the nets. At its peak, Thinking Machines maintained pointers to over 600 databases around the world which had been indexed by WAIS. They included such things as the full set of Usenet Frequently Asked Questions files, the full documentation of working papers such as RFCs by those developing the Internet's standards, and much more. Like Archie, its interface was far from intuitive, and it took some effort to learn to use it well.
Peter Scott of the University of Saskatchewan, recognizing the need to bring together information about all the telnet-accessible library catalogs on the web, as well as other telnet resources, brought out his Hytelnet catalog in 1990. It gave a single place to get information about library catalogs and other telnet resources and how to use them. He maintained it for years, and added HyWebCat in 1997 to provide information on web-based catalogs.
In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. A debate followed between mainframe adherents and those who believed in smaller systems with client-server architecture. The mainframe adherents "won" the debate initially, but since the client-server advocates said they could put up a prototype very quickly, they were given the go-ahead to do a demonstration system. The demonstration system was called a gopher after the U of Minnesota mascot, the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. It took no knowledge of unix or computer architecture to use: in a gopher system, you typed or clicked on a number to select the menu item you wanted.
Gopher's usability was enhanced much more when the University of Nevada at Reno developed the VERONICA searchable index of gopher menus. It was purported to be an acronym for Very Easy Rodent-Oriented Netwide Index to Computerized Archives. A spider crawled gopher menus around the world, collecting links and retrieving them for the index. It was so popular that it was very hard to connect to, even though a number of other VERONICA sites were developed to ease the load. Similar indexing software was developed for single sites, called JUGHEAD (Jonzy's Universal Gopher Hierarchy Excavation And Display).
Peter Deutsch, who developed Archie, always insisted that Archie was short for Archiver, and had nothing to do with the comic strip. He was disgusted when VERONICA and JUGHEAD appeared.
In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics, more popularly known as CERN, proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext--a system of embedding links in text to link to other text, which you have been using every time you selected a text link while reading these pages. Although started before gopher, it was slower to develop.
The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Later, Andreessen moved to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.

MICHAEL DERTOUZOS
1936-2001

The early days of the web were a confused period, as many developers tried to put their personal stamp on the ways the web should develop. The web was threatened with becoming a mass of unrelated protocols that would require different software for different applications. The visionary Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser.
Read Tim Berners-Lee's tribute to Michael Dertouzos.
Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 90's, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government funded NSFNet Internet backbone.
Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs.
Today, NSF funding has moved beyond supporting the backbone and higher educational institutions to building the K-12 and local public library accesses on the one hand, and the research on the massive high volume connections on the other.
Microsoft's full-scale entry into the browser, server, and Internet Service Provider market completed the major shift to a commercially based Internet. The release of Windows 98 in June 1998, with the Microsoft browser well integrated into the desktop, showed Bill Gates' determination to capitalize on the enormous growth of the Internet. Microsoft's success over the past few years has brought court challenges to its dominance. We'll leave it up to you whether you think these battles should be played out in the courts or the marketplace.
During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that worked. Free services supported by advertising shifted some of the direct costs away from the consumer--temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales grew rapidly for products such as books, music CDs, and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, which try to provide everything for everybody, and live auctions. AOL's acquisition of Time-Warner was the largest merger in history when it took place, and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout and search for better business models took place among the survivors.
A current trend with major implications for the future is the growth of high-speed connections. 56K modems and the providers who supported them spread widely for a while, but this is the low end now; 56K is not fast enough to carry multimedia, such as sound and video, except in low quality. New technologies many times faster, such as cable modems and digital subscriber lines (DSL), are predominant now.
Wireless has grown rapidly in the past few years, and travellers search for the wi-fi "hot spots" where they can connect while they are away from the home or office. Many airports, coffee bars, hotels and motels now routinely provide these services, some for a fee and some for free.
A next big growth area is the surge towards universal wireless access, where almost everywhere is a "hot spot." Municipal wi-fi or city-wide access, WiMAX (offering broader range than wi-fi), EV-DO, 4G, and other formats will joust for dominance in the USA in the years ahead. The battle is both economic and political.
Another trend that is rapidly affecting web designers is the growth of smaller devices to connect to the Internet. Small tablets, pocket PCs, smart phones, ebooks, game machines, and even GPS devices are now capable of tapping into the web on the go, and many web pages are not designed to work on that scale.
As the Internet has become ubiquitous, faster, and increasingly accessible to non-technical communities, social networking and collaborative services have grown rapidly, enabling people to communicate and share interests in many more ways. Sites like Facebook, Twitter, Linked-In, YouTube, Flickr, Second Life, delicious, blogs, wikis, and many more let people of all ages rapidly share their interests of the moment with others everywhere.
As Heraclitus said around 500 BC, "Nothing is permanent, but change!"
May you live in interesting times! (ostensibly an ancient Chinese curse)

Friday 12 August 2011

THE HUMAN HEART

The Human Heart

The heart is one of the most important organs in the entire human body. It is really nothing more than a pump composed of muscle, which pumps blood throughout the body, beating approximately 72 times per minute throughout our lives. The heart pumps the blood, which carries all the vital materials that help our bodies function, and removes the waste products that we do not need. For example, the brain requires oxygen and glucose and, if it does not receive them continuously, will lose consciousness. Muscles need oxygen, glucose and amino acids, as well as the proper ratio of sodium, calcium and potassium salts, in order to contract normally. The glands need sufficient supplies of raw materials from which to manufacture their specific secretions. If the heart ever ceases to pump blood, the body begins to shut down and after a very short period of time will die.
The heart is essentially a muscle (a little larger than the fist). Like any other muscle in the human body, it contracts and expands. Unlike skeletal muscles, however, the heart works on the "all-or-nothing law": each time the heart contracts, it does so with all its force. In skeletal muscles, the principle of "gradation" is present. The pumping of the heart is called the cardiac cycle, which occurs about 72 times per minute. This means that each cycle lasts about eight-tenths of a second. During this cycle the entire heart actually rests for about four-tenths of a second.

Make-up of the Heart

The walls of the heart are made up of three layers, while the cavity is divided into four parts. There are two upper chambers, called the right and left atria, and two lower chambers, called the right and left ventricles. The right atrium receives blood from the upper and lower body through the superior vena cava and the inferior vena cava, respectively, and from the heart muscle itself through the coronary sinus. The right atrium is the larger of the two atria, having very thin walls. The right atrium opens into the right ventricle through the right atrioventricular valve (tricuspid), which only allows the blood to flow from the atrium into the ventricle, and not in the reverse direction. The right ventricle pumps the blood to the lungs to be reoxygenated. The left atrium receives blood from the lungs via the four pulmonary veins. It is smaller than the right atrium, but has thicker walls. The valve between the left atrium and the left ventricle, the left atrioventricular valve (bicuspid), is smaller than the tricuspid. It opens into the left ventricle and again is a one-way valve. The left ventricle pumps the blood throughout the body. It is the aorta, the largest artery in the body, which originates from the left ventricle.
The heart works as a pump, moving blood around our bodies to nourish every cell. Used blood, that is, blood that has already been to the cells and given up its nutrients to them, is drawn from the body by the right half of the heart and then sent to the lungs to be reoxygenated. Blood that has been reoxygenated by the lungs is drawn into the left side of the heart and then pumped into the bloodstream. It is the atria that draw the blood from the lungs and body, and the ventricles that pump it to the lungs and body. The output of each ventricle per beat is about 70 ml, or about 2 tablespoons; in a trained athlete this amount is about double. With the average heart rate of 72 beats per minute, the heart will pump about 5 litres per ventricle, or about 10 litres total per minute. This is called the cardiac output. In a trained athlete the total cardiac output is about 20 litres. If we multiply the normal, non-athlete output per ventricle by the average lifespan of 70 years, we see that each ventricle of the average human heart would pump roughly 185 million litres, or nearly 50 million US gallons, over a lifetime.
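
The lifetime figure follows directly from the per-beat numbers quoted above; here is a quick Python check of the arithmetic:

```python
# Lifetime cardiac output from the figures quoted above:
# 70 ml per beat per ventricle, 72 beats per minute, a 70-year lifetime.

stroke_volume_l = 0.070            # 70 ml per beat (per ventricle)
heart_rate = 72                    # beats per minute
minutes_per_year = 60 * 24 * 365

per_minute = stroke_volume_l * heart_rate          # ~5 L/min per ventricle
lifetime_litres = per_minute * minutes_per_year * 70
lifetime_gallons = lifetime_litres / 3.785         # litres -> US gallons

print(f"{per_minute:.1f} L/min; lifetime about {lifetime_litres/1e6:.0f} million L "
      f"({lifetime_gallons/1e6:.0f} million US gallons)")
# 5.0 L/min; lifetime about 185 million L (49 million US gallons)
```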

Friday 5 August 2011

HIV/AIDS

Acquired immune deficiency syndrome or acquired immunodeficiency syndrome (AIDS) is a disease of the human immune system caused by the human immunodeficiency virus (HIV).[1][2][3] This condition progressively reduces the effectiveness of the immune system and leaves individuals susceptible to opportunistic infections and tumors. HIV is transmitted through direct contact of a mucous membrane or the bloodstream with a bodily fluid containing HIV, such as blood, semen, vaginal fluid, preseminal fluid, and breast milk.[4][5] This transmission can involve anal, vaginal or oral sex, blood transfusion, contaminated hypodermic needles, exchange between mother and baby during pregnancy, childbirth, breastfeeding or other exposure to one of the above bodily fluids.
AIDS is now a pandemic.[6] As of 2009, the World Health Organization (WHO) estimated that there are 33.3 million people worldwide living with HIV/AIDS, with 2.6 million new HIV infections per year and 1.8 million annual deaths due to AIDS.[7] In 2007, UNAIDS estimated that 33.2 million people worldwide had AIDS that year, that AIDS killed 2.1 million people in the course of that year, including 330,000 children, and that 76% of those deaths occurred in sub-Saharan Africa.[8] According to the UNAIDS 2009 report, worldwide some 60 million people have been infected since the epidemic began, with some 25 million deaths, and 14 million orphaned children in southern Africa alone.[9]
Genetic research indicates that HIV originated in west-central Africa during the late nineteenth or early twentieth century.[10][11] AIDS was first recognized by the U.S. Centers for Disease Control and Prevention in 1981 and its cause, HIV, identified in the early 1980s.[12]

Although treatments for AIDS and HIV can slow the course of the disease, there is no known cure or vaccine. Antiretroviral treatment reduces both the mortality and the morbidity of HIV infection, but these drugs are expensive and routine access to antiretroviral medication is not available in all countries.[13] Due to the difficulty in treating HIV infection, preventing infection is a key aim in controlling the AIDS pandemic, with health organizations promoting safe sex and needle-exchange programmes in attempts to slow the spread of the virus.

Anatomy, Physiology & Pathology of the Human Eye

The human eye is the organ which gives us the sense of sight, allowing us to observe and learn more about the surrounding world than we do with any of the other four senses.  We use our eyes in almost every activity we perform, whether reading, working, watching television, writing a letter, driving a car, and in countless other ways.  Most people probably would agree that sight is the sense they value more than all the rest.
The eye allows us to see and interpret the shapes, colors, and dimensions of objects in the world by processing the light they reflect or emit.  The eye is able to detect bright light or dim light, but it cannot sense objects when light is absent.

process of vision

Light waves from an object (such as a tree) enter the eye first through the cornea, which is the clear dome at the front of the eye.  The light then progresses through the pupil, the circular opening in the center of the colored iris.
Fluctuations in incoming light change the size of the eye’s pupil.  When the light entering the eye is bright enough, the pupil will constrict (get smaller), due to the pupillary light response.
The light waves are bent, or converged, first by the cornea, and then further by the crystalline lens (located immediately behind the iris and the pupil), to a nodal point (N) located immediately behind the back surface of the lens.  At that point, the image becomes reversed (turned backwards) and inverted (turned upside-down).
The light continues through the vitreous humor, the clear gel that makes up about 80% of the eye’s volume, and then, ideally, back to a clear focus on the retina, behind the vitreous.  The small central area of the retina is the macula, which provides the best vision of any location in the retina.  If the eye is considered to be a type of camera (albeit, an extremely complex one), the retina is equivalent to the film inside of the camera, registering the tiny photons of light interacting with it.
Within the layers of the retina, light impulses are changed into electrical signals.  Then they are sent through the optic nerve, along the visual pathway, to the occipital cortex at the posterior (back) of the brain.  Here, the electrical signals are interpreted or “seen” by the brain as a visual image.
Actually, then, we do not “see” with our eyes but, rather, with our brains.  Our eyes merely are the beginnings of the visual process.

myopia, hyperopia, astigmatism

If the incoming light from a far away object focuses before it gets to the back of the eye, that eye’s refractive error is called “myopia” (nearsightedness).  If incoming light from something far away has not focused by the time it reaches the back of the eye, that eye’s refractive error is “hyperopia” (farsightedness).
In the case of “astigmatism,” one or more surfaces of the cornea or lens (the eye structures which focus incoming light) are not spherical (shaped like the side of a basketball) but, instead, are cylindrical or toric (shaped a bit like the side of a football).  As a result, there is no distinct point of focus inside the eye but, rather, a smeared or spread-out focus.  Astigmatism is the most common refractive error.
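
Spherical refractive errors like myopia and hyperopia are quantified in diopters, and the number has a concrete meaning: an uncorrected myope's "far point," the farthest distance at which things are still sharp, is one over the error in metres. Here is a small illustrative Python sketch (the example prescriptions are hypothetical, and astigmatism, which needs an extra cylinder term, is left out):

```python
# The sign of the spherical refractive error tells you where focus falls;
# for a myope, the far point is 1/|error| metres from the eye.

def describe(refractive_error_d):
    if refractive_error_d == 0:
        return "emmetropia: distant objects focus on the retina"
    if refractive_error_d < 0:                  # myopia
        far_point_m = 1 / abs(refractive_error_d)
        return (f"myopia: focus falls in front of the retina; "
                f"clear vision only out to {far_point_m:.2f} m")
    return "hyperopia: focus would fall behind the retina"

for rx in (0, -2.0, -4.0, 1.5):                 # hypothetical prescriptions
    print(f"{rx:+.2f} D -> {describe(rx)}")
# -2.00 D gives a far point of 0.50 m; -4.00 D gives 0.25 m.
```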

presbyopia (“after 40” vision)

After age 40, and most noticeably after age 45, the human eye is affected by presbyopia.  This natural condition results in greater difficulty maintaining a clear focus at a near distance with an eye which sees clearly far away.
Presbyopia is caused by a lessening of flexibility of the crystalline lens, as well as a weakening of the ciliary muscles which control lens focusing.  Both are attributable to the aging process.
An eye can see clearly at a far distance naturally, or it can be made to see clearly artificially, such as with the aid of eyeglasses or contact lenses, or else following a photorefractive procedure such as LASIK (laser-assisted in situ keratomileusis).  Nevertheless, presbyopia eventually will affect the near focusing of every human eye.

eye growth

The average newborn’s eyeball is about 18 millimeters in diameter, from front to back (axial length).  In an infant, the eye grows slightly to a length of approximately 19½ millimeters.
The eye continues to grow, gradually, to a length of about 24-25 millimeters, or about 1 inch, in adulthood.  A ping-pong ball is about 1½ inch in diameter, which makes the average adult eyeball about 2/3 the size of a ping-pong ball.
The eyeball is set in a protective cone-shaped cavity in the skull called the “orbit” or “socket.”  This bony orbit also enlarges as the eye grows.

extraocular muscles

The orbit is surrounded by layers of soft, fatty tissue.  These layers protect the eye and enable it to turn easily.
Traversing the fatty tissue are three pairs of extraocular muscles, which regulate the motion of each eye: the medial & lateral rectus muscles, the superior & inferior rectus muscles, and the superior & inferior oblique muscles.

eye structures

Several structures compose the human eye.  Among the most important anatomical components are the cornea, conjunctiva, iris, crystalline lens, vitreous humor, retina, macula, optic nerve, and extraocular muscles.

Thursday 4 August 2011

ANATOMY OF THE BRAIN

The anatomy of the brain is complex due to its intricate structure and function. This amazing organ acts as a control center by receiving, interpreting, and directing sensory information throughout the body. There are three major divisions of the brain: the forebrain, the midbrain, and the hindbrain.

Anatomy of the Brain: Brain Divisions

The forebrain is responsible for a variety of functions, including receiving and processing sensory information, thinking, perceiving, producing and understanding language, and controlling motor function. There are two major divisions of the forebrain: the diencephalon and the telencephalon. The diencephalon contains structures such as the thalamus and hypothalamus, which are responsible for such functions as motor control, relaying sensory information, and controlling autonomic functions. The telencephalon contains the largest part of the brain, the cerebrum. Most of the actual information processing in the brain takes place in the cerebral cortex.

The midbrain and the hindbrain together make up the brainstem. The midbrain is the portion of the brainstem that connects the hindbrain and the forebrain. This region of the brain is involved in auditory and visual responses as well as motor function.

The hindbrain extends from the spinal cord and is composed of the metencephalon and myelencephalon. The metencephalon contains structures such as the pons and cerebellum. These regions assist in maintaining balance and equilibrium, movement coordination, and the conduction of sensory information. The myelencephalon is composed of the medulla oblongata, which is responsible for controlling such autonomic functions as breathing, heart rate, and digestion.

ESTROGEN'S NEW ROLE

Neuroscientists at the University of Massachusetts, Amherst (UMass), and the University of California, Los Angeles, have found that estrogen can act as a neurotransmitter, in addition to its usual role as a hormone in the bloodstream, according to a study published in the Journal of Neuroscience last month.
Estradiol, the type of estrogen that is most prevalent in the body during a female's reproductive years, is produced by the ovaries and then enters the bloodstream, where it takes hours or days to bring about changes in the cortex region of the brain. But in the brains of zebra finches, neurons also produced estradiol directly inside the presynaptic terminal. Within a matter of seconds, the hormone then crossed the synapses of the auditory forebrain, the area of the brain that responds to sound.
This is "similar to the way neurotransmitters are controlled," Luke Remage-Healey, a neuroscientist at UMass, said in a press release.
This is the first time that scientists have directly measured estrogen levels over a short period of time in the brain of a live animal to determine how estradiol is produced and transmitted between neurons. Because of the brain’s ability to produce it quickly and in a precise location, Remage-Healey and his colleagues believe that estradiol, which is known to play a role in memory, cognition, and neuroplasticity, may someday be a target to improve brain function.


DARWIN'S THEORY OF EVOLUTION

Darwin's Theory of Evolution - The Premise
Darwin's Theory of Evolution is the widely held notion that all life is related and has descended from a common ancestor: the birds and the bananas, the fishes and the flowers -- all related. Darwin's general theory presumes the development of life from non-life and stresses a purely naturalistic (undirected) "descent with modification". That is, complex creatures evolve from more simplistic ancestors naturally over time. In a nutshell, as random genetic mutations occur within an organism's genetic code, the beneficial mutations are preserved because they aid survival -- a process known as "natural selection." These beneficial mutations are passed on to the next generation. Over time, beneficial mutations accumulate and the result is an entirely different organism (not just a variation of the original, but an entirely different creature).

Darwin's Theory of Evolution - Natural Selection
While Darwin's Theory of Evolution is a relatively young archetype, the evolutionary worldview itself is as old as antiquity. Ancient Greek philosophers such as Anaximander postulated the development of life from non-life and the evolutionary descent of man from animal. Charles Darwin simply brought something new to the old philosophy -- a plausible mechanism called "natural selection." Natural selection acts to preserve and accumulate minor advantageous genetic mutations. Suppose a member of a species developed a functional advantage (it grew wings and learned to fly). Its offspring would inherit that advantage and pass it on to their offspring. The inferior (disadvantaged) members of the same species would gradually die out, leaving only the superior (advantaged) members of the species. Natural selection is the preservation of a functional advantage that enables a species to compete better in the wild. Natural selection is the naturalistic equivalent to domestic breeding. Over the centuries, human breeders have produced dramatic changes in domestic animal populations by selecting individuals to breed. Breeders eliminate undesirable traits gradually over time. Similarly, natural selection eliminates inferior species gradually over time.
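
To make the mechanism described above concrete, here is a toy Python simulation of mutation plus selection. It is entirely schematic (the mutation sizes and population numbers are made up), but it shows how repeatedly keeping the fitter half of a population accumulates small random advantages over generations:

```python
import random

random.seed(1)
population = [0.0] * 20                  # each value = an individual's accumulated advantage
for generation in range(50):
    # mutation: small random changes, some helpful, some harmful
    population = [f + random.gauss(0, 0.1) for f in population]
    # selection: the fitter half survives and reproduces
    population.sort(reverse=True)
    population = population[:10] * 2     # each survivor leaves two offspring
print(f"mean advantage after 50 generations: "
      f"{sum(population) / len(population):.2f}")
# The mean climbs steadily (roughly +0.08 per generation with these settings),
# even though each individual mutation is as likely to be harmful as helpful.
```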

Darwin's Theory of Evolution - Slowly But Surely...
Darwin's Theory of Evolution is a slow gradual process. Darwin wrote, "…Natural selection acts only by taking advantage of slight successive variations; she can never take a great and sudden leap, but must advance by short and sure, though slow steps." [1] Thus, Darwin conceded that, "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." [2] Such a complex organ would be known as an "irreducibly complex system". An irreducibly complex system is one composed of multiple parts, all of which are necessary for the system to function. If even one part is missing, the entire system will fail to function. Every individual part is integral. [3] Thus, such a system could not have evolved slowly, piece by piece. The common mousetrap is an everyday non-biological example of irreducible complexity. It is composed of five basic parts: a catch (to hold the bait), a powerful spring, a thin rod called "the hammer," a holding bar to secure the hammer in place, and a platform to mount the trap. If any one of these parts is missing, the mechanism will not work. Each individual part is integral. The mousetrap is irreducibly complex. [4]

Darwin's Theory of Evolution - A Theory In Crisis
Darwin's Theory of Evolution is a theory in crisis in light of the tremendous advances we've made in molecular biology, biochemistry and genetics over the past fifty years. We now know that there are in fact tens of thousands of irreducibly complex systems on the cellular level. Specified complexity pervades the microscopic biological world. Molecular biologist Michael Denton wrote, "Although the tiniest bacterial cells are incredibly small, weighing less than 10⁻¹² grams, each is in effect a veritable micro-miniaturized factory containing thousands of exquisitely designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machinery built by man and absolutely without parallel in the non-living world." [5]

And we don't need a microscope to observe irreducible complexity. The eye, the ear and the heart are all examples of irreducible complexity, though they were not recognized as such in Darwin's day. Nevertheless, Darwin confessed, "To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree." [6]

Wednesday 3 August 2011

THE HUMAN BRAIN


Human brain
Figure: the human brain and skull; the cerebral lobes shown are the frontal lobe (pink), parietal lobe (green) and occipital lobe (blue).
Latin: Cerebrum
Gray's subject: #184, p. 736
System: central nervous system
Arteries: anterior communicating artery, middle cerebral artery
Veins: cerebral veins, external veins, basal vein, terminal vein, choroid vein, cerebellar veins
The human brain is the center of the human nervous system. Enclosed in the cranium, the human brain has the same general structure as that of other mammals, but is over three times larger than the brain of a typical mammal with an equivalent body size.[1] Most of the spatial expansion comes from the cerebral cortex, a convoluted layer of neural tissue which covers the surface of the forebrain. Especially expanded are the frontal lobes, which are associated with executive functions such as self-control, planning, reasoning, and abstract thought. The portion of the brain devoted to vision, the occipital lobe, is also greatly enlarged in human beings.
Brain evolution, from the earliest shrew-like mammals through primates to hominids, is marked by a steady increase in encephalization, or the ratio of brain to body size. Estimates vary for the number of neuronal and non-neuronal cells contained in the brain, ranging from 80 or 90 billion (~85 × 10⁹) non-neuronal cells (glial cells) and an approximately equal number (~86 × 10⁹) of neurons,[2] of which about 10 billion (10¹⁰) are cortical pyramidal cells, to over 120 billion neuronal cells, with an approximately equal number of non-neuronal cells.[3] These cells pass signals to each other via as many as 1,000 trillion (10¹⁵, one quadrillion) synaptic connections.[4] Due to evolution, however, the modern human brain has been shrinking over the past 28,000 years.[5][6]
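
Those two figures, roughly 86 billion neurons and roughly 10¹⁵ synapses, imply on the order of 10,000 connections per neuron. A one-line sanity check in Python (the per-neuron count is a ballpark assumption, not a measured value):

```python
neurons = 86e9                 # ~86 billion neurons, per the estimate above
synapses_per_neuron = 1.2e4    # ballpark assumption for illustration
print(f"{neurons * synapses_per_neuron:.1e} synapses")   # ~1.0e+15
```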
The brain monitors and regulates the body's actions and reactions. It continuously receives sensory information, and rapidly analyzes this data and then responds accordingly by controlling bodily actions and functions. The brainstem controls breathing, heart rate, and other autonomic processes that are independent of conscious brain functions. The neocortex is the center of higher-order thinking, learning, and memory. The cerebellum is responsible for the body's balance, posture, and the coordination of movement.
Despite being protected by the thick bones of the skull, suspended in cerebrospinal fluid, and isolated from the bloodstream by the blood-brain barrier, the human brain is susceptible to many types of damage and disease. The most common forms of physical damage are closed head injuries, such as a blow to the head, and stroke.

Saturday 30 July 2011

THE CELL

The cell is the functional basic unit of life. It was discovered by Robert Hooke and is the functional unit of all known living organisms. It is the smallest unit of life that is classified as a living thing, and is often called the building block of life.[1] Some organisms, such as most bacteria, are unicellular (consisting of a single cell). Other organisms, such as humans, are multicellular. Humans have about 100 trillion (10¹⁴) cells; a typical cell size is 10 µm and a typical cell mass is 1 nanogram. The longest human cells are about 135 µm, in the anterior horn of the spinal cord, while granule cells in the cerebellum, the smallest, can be some 4 µm, and the longest cells can reach from the toe to the lower brain stem (pseudounipolar cells).[2] The largest known cells are unfertilised ostrich egg cells, which weigh 3.3 pounds.[3][4]
In 1835, before the final cell theory was developed, Jan Evangelista Purkyně observed small "granules" while looking at the plant tissue through a microscope. The cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that all cells come from preexisting cells, that vital functions of an organism occur within cells, and that all cells contain the hereditary information necessary for regulating cell functions and for transmitting information to the next generation of cells.[5]

The word cell comes from the Latin cellula, meaning "a small room." The descriptive term for the smallest living biological structure was coined by Robert Hooke in a book he published in 1665, when he compared the cork cells he saw through his microscope to the small rooms monks lived in.[6]

Wednesday 27 July 2011

Reproductive system

The reproductive system or genital system is a system of organs within an organism which work together for the purpose of reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system.[1] Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring.[2]
The major organs of the human reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs including the gamete producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted diseases.[3]
Most other vertebrate animals have generally similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates.


Wednesday 20 July 2011

TIGER

The tiger (Panthera tigris), a member of the Felidae family, is the largest of the four "big cats" in the genus Panthera.[4] The tiger is native to much of eastern and southern Asia, and is an apex predator and an obligate carnivore. The larger tiger subspecies are comparable in size to the biggest extinct felids,[5][6] reaching up to 3.3 metres (11 ft) in total length, weighing up to 300 kilograms (660 pounds), and having canines up to 4 inches (100 mm) long.[7] Aside from their great bulk and power, their most recognisable feature is a pattern of dark vertical stripes that overlays near-white to reddish-orange fur, with lighter underparts. The most numerous tiger subspecies is the Bengal tiger, while the largest is the Siberian tiger.
Tigers have a lifespan of 10–15 years in the wild, but can live longer than 20 years in captivity.[8] They are highly adaptable and range from the Siberian taiga to open grasslands and tropical mangrove swamps.
They are territorial and generally solitary animals, often requiring large contiguous areas of habitat that support their prey demands. This, coupled with the fact that they are indigenous to some of the more densely populated places on earth, has caused significant conflicts with humans. Three of the nine subspecies of modern tiger have gone extinct, and the remaining six are classified as endangered, some critically so. The primary direct causes are habitat destruction, fragmentation, and hunting.
Historically, tigers have existed from Mesopotamia and the Caucasus throughout most of South and East Asia. Today, the range of the species is radically reduced. All surviving species are under formal protection, yet poaching, habitat destruction, and inbreeding depression continue to threaten the tigers.
Tigers are among the most recognisable and popular of the world's charismatic megafauna. They have featured prominently in ancient mythology and folklore, and continue to be depicted in modern films and literature. Tigers appear on many flags and coats of arms, as mascots for sporting teams, and as the national animal of several Asian nations, including India.[9]

THE TURTLE

The painted turtle is the only species of Chrysemys, a genus of pond turtles. It lives in slow-moving freshwaters, from southern Canada to the Louisiana Gulf Coast and northern Mexico, and from the Atlantic to the Pacific. Four U.S. states name the painted turtle as their official reptile. Fossils show that the painted turtle existed 15 million years ago, but four regionally based subspecies (the eastern, midland, southern, and western) evolved during the last ice age.
The turtle's skin is olive to black with distinctive red, orange, or yellow stripes. Reliant on warmth from its surroundings, the painted turtle can frequently be seen basking on logs. Crayfish and dragonflies are among the turtle's preferred prey. Many predators eat the turtle eggs or hatchlings, but the adult's shell protects it from most enemies except for raccoons, alligators and humans. Turtles in the wild can live for more than 55 years.

ADRENOCORTICOTROPIC HORMONE

An adrenocorticotropic hormone test measures the level of adrenocorticotropic hormone (ACTH) in the blood to check for problems with the pituitary gland or the adrenal glands.
ACTH is made in the pituitary gland in response to the release of another hormone, called corticotropin-releasing hormone (CRH), by the hypothalamus. ACTH in turn stimulates the adrenal glands to make a hormone called cortisol, which helps your body manage stress. Cortisol is needed for life, so its levels in the blood are closely controlled: when cortisol levels rise, ACTH levels normally fall, and when cortisol levels fall, ACTH levels normally rise.
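
The rise-and-fall relationship between cortisol and ACTH is a classic negative feedback loop. The toy Python simulation below is a minimal sketch of that idea only; the setpoint, gains, and units are invented for illustration and have no physiological meaning:

# Toy negative-feedback loop: low cortisol raises ACTH, and ACTH in
# turn raises cortisol. All numbers are arbitrary illustration values.
SETPOINT = 1.0   # hypothetical target cortisol level (arbitrary units)
GAIN = 0.5       # hypothetical feedback strength

acth, cortisol = 1.0, 0.2   # arbitrary starting levels
for step in range(15):
    acth = max(acth + GAIN * (SETPOINT - cortisol), 0.0)  # feedback
    cortisol += 0.3 * (acth - cortisol)  # ACTH drives cortisol production
    print(f"step {step:2d}: ACTH={acth:.2f}  cortisol={cortisol:.2f}")
# Both values settle near the setpoint: a disturbance in cortisol is
# automatically countered by an opposite change in ACTH.
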
Both ACTH and cortisol levels change throughout the day. ACTH is normally highest in the early morning (between 6 a.m. and 8 a.m.) and lowest in the evening (between 6 p.m. and 11 p.m.). ACTH levels may be tested in the morning or evening if your doctor thinks that they are abnormal. Cortisol levels are often measured at the same time as ACTH.
ACTH is released in bursts, so its levels in the blood can vary from minute to minute. Interpretation of the test results is difficult and often requires the skill of an endocrinologist.

Why It Is Done

A test to measure ACTH is done to check for:
  • A problem with the adrenal glands or pituitary gland. A high level of ACTH and a low level of cortisol (or low ACTH and high cortisol levels) could be caused by a problem with the adrenal glands. Low levels of ACTH and cortisol could be caused by a problem with the pituitary gland. (A rough interpretation sketch follows this list.)
  • Overproduction of ACTH. This may be caused by an overactive pituitary gland. In response, the adrenal glands release too much cortisol (one form of Cushing's syndrome).
  • The correct dose of corticosteroid medicine.
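
Taken together, the first two bullet points form a small decision table. Purely as an illustration (not clinical logic; the "high"/"low" labels are assumed to come from a lab's reference ranges, which are not given here), it could be sketched like this in Python:

# Illustrative only: maps high/low ACTH and cortisol patterns from the
# list above to the gland most likely involved.
def likely_site(acth: str, cortisol: str) -> str:
    patterns = {
        ("high", "low"): "adrenal glands (underactive)",
        ("low", "high"): "adrenal glands (overactive)",
        ("low", "low"): "pituitary gland (underactive)",
        ("high", "high"): "pituitary gland (overactive, e.g. Cushing's)",
    }
    return patterns.get((acth, cortisol), "indeterminate")

print(likely_site("high", "low"))  # adrenal glands (underactive)
print(likely_site("low", "low"))   # pituitary gland (underactive)

As the text notes, real interpretation is far subtler than this and often requires an endocrinologist.
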

How To Prepare

You may not be able to eat or drink for 10 to 12 hours before an ACTH test. Your doctor may ask you to eat low-carbohydrate foods for 48 hours before the test. Be sure to ask your doctor if there are any foods that you should not eat.
Many medicines can change the results of this test. Be sure to tell your doctor about all the nonprescription and prescription medicines you take. If you take a medicine, such as a corticosteroid, that could change the test results, you will need to stop taking it for up to 48 hours before the test. Your doctor will tell you exactly how long, depending on which medicine you take.
Do not exercise for 12 hours before this test.
Try to avoid emotional stress for 12 hours before the test.
Collecting the blood sample at the right time is often important. Your blood will be drawn in the morning if your doctor wants a peak ACTH level. Your blood will be drawn in the evening if your doctor wants a low (trough) ACTH level.
Talk to your doctor about any concerns you have about the need for the test, its risks, how it will be done, or what the results will mean.

Tuesday 19 July 2011

Hormone

A hormone (from Greek ὁρμή "impetus") is a chemical released by a cell or a gland in one part of the body that sends out messages that affect cells in other parts of the organism. Only a small amount of hormone is required to alter cell metabolism. In essence, it is a chemical messenger that transports a signal from one cell to another. All multicellular organisms produce hormones; plant hormones are also called phytohormones. Hormones in animals are often transported in the blood. Cells respond to a hormone when they express a specific receptor for that hormone. The hormone binds to the receptor protein, resulting in the activation of a signal transduction mechanism that ultimately leads to cell type-specific responses.
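
The receptor-binding step described here is often summarised with a simple law-of-mass-action formula: the fraction of receptors occupied is [L] / ([L] + Kd), where [L] is the hormone concentration and Kd is the receptor's dissociation constant. The sketch below uses invented, purely illustrative numbers to show why a high-affinity receptor (low Kd) lets a tiny amount of hormone have a large effect:

# Fraction of receptors bound under simple law-of-mass-action kinetics.
# The concentrations below are illustrative, not measured values.
def fraction_bound(hormone_nM: float, kd_nM: float) -> float:
    return hormone_nM / (hormone_nM + kd_nM)

# A receptor with Kd = 0.1 nM is already half-occupied at just 0.1 nM
# of hormone -- one reason only a small amount of hormone is required
# to alter cell metabolism.
print(fraction_bound(hormone_nM=0.1, kd_nM=0.1))  # 0.5
print(fraction_bound(hormone_nM=1.0, kd_nM=0.1))  # ~0.91
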
Endocrine hormone molecules are secreted (released) directly into the bloodstream, whereas exocrine hormones (or ectohormones) are secreted directly into a duct, and, from the duct, they flow either into the bloodstream or from cell to cell by diffusion in a process known as paracrine signalling.
It has recently been found that a variety of exogenous modern chemical compounds have hormone-like effects on both humans and wildlife. Their interference with the synthesis, secretion, transport, binding, action, or elimination of natural hormones in the body can alter homeostasis, reproduction, development, and/or behavior in the same way as endogenously produced hormones.[1]

Monday 18 July 2011

Human evolution

Human evolution is the phenotypic history of the genus Homo, including the emergence of Homo sapiens as a distinct species and as a unique category of hominids ("great apes") and mammals. The study of human evolution uses many scientific disciplines, including physical anthropology, primatology, archaeology, linguistics and genetics.[1]
The term "human" in the context of human evolution refers to the genus Homo, but studies of human evolution usually include other hominids, such as the Australopithecines, from which the genus Homo had diverged by about 2.3 to 2.4 million years ago in Africa.[2][3] Scientists have estimated that humans branched off from their common ancestor with chimpanzees about 5–7 million years ago. Several species and subspecies of Homo evolved and are now extinct, introgressed or extant. Examples include Homo erectus (which inhabited Asia, Africa, and Europe) and Neanderthals (either Homo neanderthalensis or Homo sapiens neanderthalensis) (which inhabited Europe and Asia). Archaic Homo sapiens evolved between 400,000 and 250,000 years ago.
The dominant view among scientists concerning the origin of anatomically modern humans is the hypothesis known as "Out of Africa", recent African origin of modern humans, ROAM, or recent African origin hypothesis,[4][5][6] which argues that Homo sapiens arose in Africa and migrated out of the continent around 50,000 to 100,000 years ago, replacing populations of Homo erectus in Asia and Neanderthals in Europe.
Scientists supporting an alternative multiregional hypothesis argue that Homo sapiens evolved as geographically separate but interbreeding populations stemming from a worldwide migration of Homo erectus out of Africa nearly 2.5 million years ago. Evidence suggests that an X-linked haplotype of Neanderthal origin is present among all non-African populations, and Neanderthals and other hominids, such as the Denisova hominin, may have contributed up to 6% of their genome to modern humans.[7][8] Such archaic genetic contributions contradict a strict replacement version of the recent African origin model.

Tuesday 12 July 2011

science

Science (from Latin: scientia meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the world.[1][2][3][4] An older and closely related meaning still in use today is that of Aristotle, for whom scientific knowledge was a body of reliable knowledge that can be logically and rationally explained (see "History and etymology" section below).[5]
Since classical antiquity science as a type of knowledge was closely linked to philosophy. In the early modern era the two words, "science" and "philosophy", were sometimes used interchangeably in the English language. By the 17th century, "natural philosophy" (which is today called "natural science") had begun to be considered separately from "philosophy" in general.[6][7] However, "science" continued to be used in a broad sense denoting reliable knowledge about a topic, in the same way it is still used in modern terms such as library science or political science.
In modern use, science is "often treated as synonymous with ‘natural and physical science’, and thus restricted to those branches of study that relate to the phenomena of the material universe and their laws, sometimes with implied exclusion of pure mathematics. This is now the dominant sense in ordinary use."[8] This narrower sense of "science" developed as a part of science became a distinct enterprise of defining "laws of nature", based on early examples such as Kepler's laws, Galileo's laws, and Newton's laws of motion. In this period it became more common to refer to natural philosophy as "natural science". Over the course of the 19th century, the word "science" became increasingly associated with the disciplined study of the natural world including physics, chemistry, geology and biology. This sometimes left the study of human thought and society in a linguistic limbo, which was resolved by classifying these areas of academic study as social science. Similarly, several other major areas of disciplined study and knowledge exist today under the general rubric of "science", such as formal science and applied science.

Saturday 9 July 2011

The skin

Skin is the soft outer covering of an animal, in particular a vertebrate. Other animal coverings, such as the arthropod exoskeleton or the seashell, have a different developmental origin, structure and chemical composition. The adjective cutaneous means "of the skin" (from Latin cutis, skin). In mammals, the skin is the largest organ of the integumentary system, made up of multiple layers of ectodermal tissue, and guards the underlying muscles, bones, ligaments and internal organs.[1] Skin of a different nature exists in amphibians, reptiles, and birds.[2] All mammals have some hair on their skin, even marine mammals that appear to be hairless. Because it interfaces with the environment, skin plays a key role in protecting the body against pathogens[3] and excessive water loss.[4] Its other functions are insulation, temperature regulation, sensation, the synthesis of vitamin D, and the protection of vitamin B folates. Severely damaged skin will try to heal by forming scar tissue, which is often discoloured and depigmented.
Hair with sufficient density is called fur. Fur mainly serves to augment the insulation the skin provides, but can also serve as a secondary sexual characteristic or as camouflage. On some animals the skin is very hard and thick, and can be processed to create leather. Reptiles and fish have hard protective scales on their skin, and birds have hard feathers, all made of tough β-keratins. Amphibian skin is not a strong barrier to the passage of chemicals and is often subject to osmosis; a frog sitting in an anesthetic solution, for example, could quickly go to sleep.

Tuesday 10 May 2011

skull

The skull is a bony structure in the head of many animals that supports the structures of the face and forms a cavity for the brain.
The skull is composed of two parts: the cranium and the mandible.

Saturday 7 May 2011

The weather

Weather is the state of the atmosphere, to the degree that it is hot or cold, wet or dry, calm or stormy, clear or cloudy. Most weather phenomena occur in the troposphere, just below the stratosphere. Weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for average atmospheric conditions over longer periods of time. When used without qualification, "weather" is understood to mean the weather of Earth.
Weather is driven by density (temperature and moisture) differences between one place and another. These differences can occur due to the sun angle at any particular spot, which varies with latitude from the tropics. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow.
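
To see how strongly temperature alone changes air density, the ideal gas law gives density as rho = p / (R * T) for dry air. The constants in the sketch below are standard textbook values, and the calculation is purely illustrative:

# Air density from the ideal gas law: rho = p / (R * T).
R_DRY_AIR = 287.05     # specific gas constant for dry air, J/(kg*K)
PRESSURE_PA = 101325   # standard sea-level pressure, Pa

def air_density(temp_celsius: float) -> float:
    return PRESSURE_PA / (R_DRY_AIR * (temp_celsius + 273.15))

print(f"{air_density(0):.3f} kg/m^3 at 0 C")    # ~1.292
print(f"{air_density(30):.3f} kg/m^3 at 30 C")  # ~1.164
# At equal pressure, 30 C air is about 10% less dense than 0 C air --
# the kind of density contrast that drives winds and weather systems.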

Acid rain

The corrosive effect of polluted, acidic city air on limestone and marble was noted in the 17th century by John Evelyn, who remarked upon the poor condition of the Arundel marbles.[2] Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides to the atmosphere have increased.[3][4] In 1852, Robert Angus Smith was the first to show the relationship between acid rain and atmospheric pollution in Manchester, England.[5] Though acidic rain was discovered in 1852, it was not until the late 1960s that scientists began widely observing and studying the phenomenon.[6] The term "acid rain" was coined in 1872 by Robert Angus Smith.[7] Canadian Harold Harvey was among the first to research a "dead" lake. Public awareness of acid rain in the U.S. increased in the 1970s after The New York Times published reports from the Hubbard Brook Experimental Forest in New Hampshire on the myriad deleterious environmental effects shown to result from it.[8][9]
Occasional pH readings in rain and fog water of well below 2.4 have been reported in industrialized areas.[3] Industrial acid rain is a substantial problem in China and Russia[10][11] and areas downwind from them. These areas all burn sulfur-containing coal to generate heat and electricity.[12] The problem of acid rain has not only increased with population and industrial growth but has also become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation.[13][14] Deposition often occurs a considerable distance downwind of the emissions, with mountainous regions tending to receive the greatest deposition (simply because of their higher rainfall). An example of this effect is the low pH of the rain (relative to the local emissions) that falls in Scandinavia.
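
Because pH is a logarithmic scale, rain "well below 2.4" is far more acidic than the number alone suggests. A quick illustrative calculation, using the standard definition pH = -log10[H+] and taking unpolluted rain as roughly pH 5.6 (its natural acidity from dissolved CO2):

# Each whole pH unit is a tenfold change in hydrogen ion concentration.
def h_concentration(ph: float) -> float:
    return 10 ** (-ph)   # mol/L

clean_rain_ph = 5.6  # approximate pH of unpolluted rain
acid_rain_ph = 2.4   # the extreme readings mentioned above

ratio = h_concentration(acid_rain_ph) / h_concentration(clean_rain_ph)
print(f"pH {acid_rain_ph} rain is about {ratio:.0f}x more acidic")  # ~1585x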