
Unprecedented Control of Genome Editing in Flies Promises Insight Into Human Development, Disease

 In an era of widespread genetic sequencing, the ability to edit and alter an organism's DNA is a powerful way to explore the information within and how it guides biological function.
A paper from the University of Wisconsin-Madison in the August issue of the journal Genetics takes genome editing to a new level in fruit flies, demonstrating a remarkable level of fine control and, importantly, the transmission of those engineered genetic changes across generations.
Both features are key for driving the utility and spread of an approach that promises to give researchers new insights into the basic workings of biological systems, including embryonic development, nervous system function, and the understanding of human disease.
"Genome engineering allows you to change gene function in a very targeted way, so you can probe function at a level of detail" that wasn't previously possible, says Melissa Harrison, an assistant professor of biomolecular chemistry in the UW-Madison School of Medicine and Public Health and one of the three senior authors of the new study.
Disrupting individual genes has long been used as a way to study their roles in biological function and disease. The new approach, based on molecules that drive a type of bacterial immune response, provides a technical advance that allows scientists to readily engineer genetic sequences in very detailed ways, including adding or removing short bits of DNA in chosen locations, introducing specific mutations, adding trackable tags, or changing the sequences that regulate when or where a gene is active.
The approach used in the new study, called the CRISPR RNA/Cas9 system, has developed unusually fast. First reported just one year ago by scientists at the Howard Hughes Medical Institute and University of California, Berkeley, it has already been applied to most traditional biological model systems, including yeast, zebrafish, mice, the nematode C. elegans, and human cells. The Wisconsin paper was the first to describe it in fruit flies and to show that the resulting genetic changes could be passed from one generation to the next.
"There was a need in the community to have a technique that you could use to generate targeted mutations," says Jill Wildonger, a UW-Madison assistant professor of biochemistry and another senior author of the paper. "The need was there and this was the technical advance that everyone had been waiting for."
"The reason this has progressed so quickly is that many researchers -- us included -- were working on other, more complicated, approaches to do exactly the same thing when this came out," adds genetics assistant professor Kate O'Connor-Giles, the third senior author. "This is invaluable for anyone wanting to study gene function in any organism and it is also likely to be transferable to the clinical realm and gene therapy."
The CRISPR RNA/Cas9 system directs a DNA-clipping enzyme called Cas9 to snip the DNA at a targeted sequence. This cut then stimulates the cell's existing DNA repair machinery to fill in the break while integrating the desired genetic tweaks. The process can be tailored to edit down to the level of a single base pair -- the rough equivalent of changing a single letter in a document with a word processor.
The broad applicability of the system is aided by a relatively simple design: a short RNA sequence is created to target a specific site in the genome where the desired changes will be made. Previous genome-editing methods have relied on making custom proteins, which is costly and slow.
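To make the idea of targeting with a short RNA concrete, the sketch below shows how candidate Cas9 target sites might be located in a DNA sequence. It assumes the well-known "NGG" PAM requirement of S. pyogenes Cas9 and is purely illustrative; it is not the design pipeline used in the Wisconsin study.
```python
# Illustrative only: scan a DNA string for 20-nt protospacers followed by an
# "NGG" PAM (the motif S. pyogenes Cas9 requires). A guide RNA matching the
# protospacer would direct Cas9 to cut at that site.

def candidate_targets(dna, protospacer_len=20):
    """Yield (position, protospacer, PAM) for each NGG PAM on the forward strand."""
    dna = dna.upper()
    for i in range(protospacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # any base followed by GG
            yield i - protospacer_len, dna[i - protospacer_len:i], pam

example = "ATGCTAGCTAGGATCGATCGATCGATCGTAGCTAGCTGGCTAGCTA"  # made-up sequence
for pos, protospacer, pam in candidate_targets(example):
    print(pos, protospacer, pam)
```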
"This is so readily transferable that it's highly likely to enable gene knockout and other genome modifications in any organism," including those that have not previously been used for laboratory work, says O'Connor-Giles. "It's going to turn non-model organisms into genetic model organisms."
That ease may also pay off in the clinic. "It can be very difficult and time-consuming to generate models to study all the gene variants associated with human diseases," says Wildonger. "With this genome editing approach, if we work in collaboration with a clinician to find [clinically relevant] mutations, we can rapidly translate these into a fruit fly model to see what's happening at the cellular and molecular level."
The work, led by genetics graduate student Scott Gratz, was the joint effort of three UW-Madison labs -- each, Harrison notes, in a different department and headed by a female assistant professor. "This has been an amazing collaboration," she says. "It wouldn't have worked if any one of us had tried it on our own."

New Data Reveal Extent of Genetic Overlap Between Major Mental Disorders

The largest genome-wide study of its kind has determined how much five major mental illnesses are traceable to the same common inherited genetic variations. Researchers funded in part by the National Institutes of Health found that the overlap was highest between schizophrenia and bipolar disorder; moderate for bipolar disorder and depression and for ADHD and depression; and low between schizophrenia and autism. Overall, common genetic variation accounted for 17-28 percent of risk for the illnesses.
"Since our study only looked at common gene variants, the total genetic overlap between the disorders is likely higher," explained Naomi Wray, Ph.D., University of Queensland, Brisbane, Australia, who co-led the multi-site study by the Cross Disorders Group of the Psychiatric Genomics Consortium (PGC), which is supported by the NIH's National Institute of Mental Health (NIMH). "Shared variants with smaller effects, rare variants, mutations, duplications, deletions, and gene-environment interactions also contribute to these illnesses."
Dr. Wray, Kenneth Kendler, M.D., of Virginia Commonwealth University, Richmond, Jordan Smoller, M.D., of Massachusetts General Hospital, Boston, and other members of the PGC group report their findings August 11, 2013, in the journal Nature Genetics.
"Such evidence quantifying shared genetic risk factors among traditional psychiatric diagnoses will help us move toward classification that will be more faithful to nature," said Bruce Cuthbert, Ph.D., director of the NIMH Division of Adult Translational Research and Treatment Development and coordinator of the Institute's Research Domain Criteria (RDoC) project, which is developing a mental disorders classification system for research based more on underlying causes.
Earlier this year, PGC researchers -- more than 300 scientists at 80 research centers in 20 countries -- reported the first evidence of overlap between all five disorders. People with the disorders were more likely to have suspect variation at the same four chromosomal sites. But the extent of the overlap remained unclear. In the new study, they used the same genome-wide information and the largest data sets currently available to estimate the risk for the illnesses attributable to any of hundreds of thousands of sites of common variability in the genetic code across chromosomes. They looked for similarities in such genetic variation among several thousand people with each illness and compared them to controls -- calculating the extent to which pairs of disorders are linked to the same genetic variants.
The overlap in heritability attributable to common genetic variation was about 15 percent between schizophrenia and bipolar disorder, about 10 percent between bipolar disorder and depression, about 9 percent between schizophrenia and depression, and about 3 percent between schizophrenia and autism.
The newfound molecular genetic evidence linking schizophrenia and depression, if replicated, could have important implications for diagnostics and research, say the researchers. They expected to see more overlap between ADHD and autism, but the modest schizophrenia-autism connection is consistent with other emerging evidence.
The study results also attach numbers to molecular evidence documenting the importance of heritability traceable to common genetic variation in causing these five major mental illnesses. Yet this still leaves much of the likely inherited genetic contribution to the disorders unexplained -- not to mention non-inherited genetic factors. For example, common genetic variation accounted for 23 percent of the risk for schizophrenia, but evidence from twin and family studies estimates its total heritability at 81 percent. The corresponding figures are 25 percent vs. 75 percent for bipolar disorder, 28 percent vs. 75 percent for ADHD, 14 percent vs. 80 percent for autism, and 21 percent vs. 37 percent for depression.
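As a rough back-of-the-envelope illustration, the share of total heritability captured by common variants can be computed directly from the figures quoted above; the snippet below simply restates those numbers and is not an additional result of the study.
```python
# (common-variant heritability %, total heritability % from twin/family studies)
figures = {
    "schizophrenia":    (23, 81),
    "bipolar disorder": (25, 75),
    "ADHD":             (28, 75),
    "autism":           (14, 80),
    "depression":       (21, 37),
}

for disorder, (common, total) in figures.items():
    print(f"{disorder}: common variants capture ~{common / total:.0%} of total heritability")
```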
Among other types of genetic inheritance known to affect risk and not detected in this study are contributions from rare variants not associated with common sites of genetic variation. However, the researchers say that their results show clearly that more illness-linked common variants with small effects will be discovered with the greater statistical power that comes with larger sample sizes.
"It is encouraging that the estimates of genetic contributions to mental disorders trace those from more traditional family and twin studies. The study points to a future of active gene discovery for mental disorders" said Thomas Lehner, Ph.D., chief of the NIMH Genomics Research Branch, which funds the project.

Key Protein That Modulates Organismal Aging Identified

Scientists at Sanford-Burnham Medical Research Institute have identified a key factor that regulates autophagy, a kind of cleansing mechanism for cells in which waste material and cellular debris are gobbled up to protect cells from damage, and in turn modulates aging. The findings, published in Nature Communications today, could lead to the development of new therapies for age-related disorders that are characterized by a breakdown in this process.
Malene Hansen, Ph.D., associate professor in Sanford-Burnham's Del E. Webb Center for Neuroscience, Aging and Stem Cell Research, and her team, together with collaborators, found a transcription factor -- an on/off switch for genes -- that induces autophagy in animal models, including the nematode C. elegans, the primary model organism studied in the Hansen lab. This transcription factor, called HLH-30, coordinates the autophagy process by regulating genes with functions in different steps of the process. Two years ago, researchers discovered a similar transcription factor, or orthologue, called TFEB that regulates autophagy in mammalian cells.
"HLH-30 is critical to ensure longevity in all of the long-lived C. elegans strains we tested," says Hansen. "These models require active HLH-30 to extend lifespan, possibly by inducing autophagy. We found this activation not only in worm longevity models, but also in dietary-restricted mice, and we propose the mechanism might be conserved in higher organisms as well."
HLH-30 is the first transcription factor reported to function in all known autophagy-dependent longevity paradigms, strengthening the emerging concept that autophagy can contribute to long lifespan. In a previous study, Hansen and her colleagues discovered that increased autophagy has an anti-aging effect, possibly by promoting the activity of an autophagy-related, fat-digesting enzyme. With these findings, scientists now know a key component of the regulation of autophagy in aging.
Hansen's team is now working to find therapeutic targets, particularly upstream kinases -- enzymes that modify other proteins -- that might phosphorylate the transcription factor and alter its activity. "We already have a clue about the protein TOR, a master regulator that influences metabolism and aging in many species, but there might be other kinases that regulate HLH-30 or TFEB activity as well," says lead study author Louis René Lapierre, Ph.D., a postdoctoral fellow in Hansen's laboratory and a recent recipient of a K99/R00 Pathway to Independence career award from the National Institutes of Health.
Autophagy has become the subject of intense scientific scrutiny over the past few years, particularly since the process -- or its malfunction -- has been implicated in many human diseases, including cancer, Alzheimer's disease, cardiovascular disease and other neurodegenerative disorders. HLH-30 and TFEB may represent attractive targets for the development of new therapeutic agents against such diseases.

New Knowledge About Permafrost Improving Climate Models

New research findings from the Centre for Permafrost (CENPERM) at the Department of Geosciences and Natural Resource Management, University of Copenhagen, document that thawing permafrost may release substantial amounts of carbon dioxide into the atmosphere and that the soil's future water content is crucial for predicting the effect of thawing. The findings may lead to more accurate climate models in the future.
The permafrost is thawing and thus contributes to the release of carbon dioxide and other greenhouse gases into the atmosphere. But the rate at which carbon dioxide is released from permafrost is poorly documented and is one of the most important uncertainties of the current climate models.
The knowledge available so far has primarily been based on measurements of carbon dioxide release in short-term studies of up to 3-4 months. The new findings are based on measurements carried out over a 12-year period, including studies with different soil water contents. Professor Bo Elberling, Director of CENPERM (Centre for Permafrost) at the University of Copenhagen, is behind the new findings, which are now published in the scientific journal Nature Climate Change.
"From a climate change perspective, it makes a huge difference whether it takes 10 or 100 years to release, e.g., half the permafrost carbon pool. We have demonstrated that the supply of oxygen in connection with drainage or drying is essential for a rapid release of carbon dioxide into the atmosphere," says Bo Elberling.
Water content in the soil crucial to predict effect of permafrost thawing
The new findings also show that the future water content in the soil is a decisive factor for correctly predicting the effect of permafrost thawing. If the permafrost remains water-saturated after thawing, the carbon decomposition rate will be very low and the release of carbon dioxide will take place over several hundred years, although methane will also be produced under such waterlogged conditions. The findings can be used directly to improve existing climate models.
The new studies are mainly conducted at the Zackenberg research station in North-East Greenland, but permafrost samples from four other locations in Svalbard and in Canada have also been included and they show a surprising similarity in the loss of carbon over time.
"It is thought-provoking that microorganisms are behind the entire problem -- microorganisms which break down the carbon pool and which are apparently already present in the permafrost. One of the critical decisive factors -- the water content -- is in the same way linked to the original high content of ice in most permafrost samples. Yes, the temperature is increasing, and the permafrost is thawing, but it is, still, the characteristics of the permafrost which determine the long-term release of carbon dioxide," Bo Elberling concludes.

Interactions Between Species: Powerful Driving Force Behind Evolution?

Scientists at the University of Liverpool have provided the first experimental evidence showing that evolution is driven most powerfully by interactions between species, rather than by adaptation to the environment.
The team observed viruses as they evolved over hundreds of generations to infect bacteria. They found that when the bacteria could evolve defences, the viruses evolved at a quicker rate and generated greater diversity, compared to situations where the bacteria were unable to adapt to the viral infection.
The study shows, for the first time, that the American evolutionary biologist Leigh Van Valen was correct in his 'Red Queen Hypothesis'. The theory, first put forward in the 1970s, was named after a passage in Lewis Carroll's Through the Looking Glass in which the Red Queen tells Alice, 'It takes all the running you can do to keep in the same place'. This suggested that species were in a constant race for survival and have to continue to evolve new ways of defending themselves throughout time.
Dr Steve Paterson, from the University's School of Biosciences, explains: "Historically, it was assumed that most evolution was driven by a need to adapt to the environment or habitat. The Red Queen Hypothesis challenged this by pointing out that actually most natural selection will arise from co-evolutionary interactions with other species, not from interactions with the environment.
"This suggested that evolutionary change was created by 'tit-for-tat' adaptations by species in constant combat. This theory is widely accepted in the science community, but this is the first time we have been able to show evidence of it in an experiment with living things."
Dr Michael Brockhurst said: "We used fast-evolving viruses so that we could observe hundreds of generations of evolution. We found that for every viral strategy of attack, the bacteria would adapt to defend itself, which triggered an endless cycle of co-evolutionary change. We compared this with evolution against a fixed target, by disabling the bacteria's ability to adapt to the virus.
"These experiments showed us that co-evolutionary interactions between species result in more genetically diverse populations, compared to instances where the host was not able to adapt to the parasite. The virus was also able to evolve twice as quickly when the bacteria were allowed to evolve alongside it."
The team used high-throughput DNA sequencing technology at the Centre for Genomic Research to sequence thousands of virus genomes. The next stage of the research is to understand how co-evolution differs when interacting species help, rather than harm, one another.
The research is published in Nature and was supported by funding from the Natural Environment Research Council (NERC); the Wellcome Trust; the European Research Council and the Leverhulme Trust.

Evolution On the Inside Track: How Viruses in Gut Bacteria Change Over Time

Humans are far more than merely the sum total of all the cells that form the organs and tissues. The digestive tract is also home to a vast colony of bacteria of all varieties, as well as the myriad viruses that prey upon them. Because the types of bacteria carried inside the body vary from person to person, so does this viral population, known as the virome.
By closely following and analyzing the virome of one individual over two-and-a-half years, researchers from the Perelman School of Medicine at the University of Pennsylvania, led by professor of Microbiology Frederic D. Bushman, Ph.D., have uncovered some important new insights on how a viral population can change and evolve -- and why the virome of one person can vary so greatly from that of another. The evolution and variety of the virome can affect susceptibility and resistance to disease among individuals, along with variable effectiveness of drugs.
Their work was published in the Proceedings of the National Academy of Sciences.
Most of the virome consists of bacteriophages, viruses that infect bacteria rather than directly attacking their human hosts. However, the changes that bacteriophages wreak upon bacteria can also ultimately affect humans.
"Bacterial viruses are predators on bacteria, so they mold their populations," says Bushman. "Bacterial viruses also transport genes for toxins, virulence factors that modify the phenotype of their bacterial host." In this way, an innocent, benign bacterium living inside the body can be transformed by an invading virus into a dangerous threat.
At 16 time points over 884 days, Bushman and his team collected stool samples from a healthy male subject and extracted viral particles using several methods. They then isolated and analyzed DNA contigs (contiguous sequences) using ultra-deep genome sequencing.
"We assembled raw sequence data to yield complete and partial genomes and analyzed how they changed over two and a half years," Bushman explains. The result was the longest, most extensive picture of the workings of the human virome yet obtained.
The researchers found that while approximately 80 percent of the viral types identified remained mostly unchanged over the course of the study, certain viral species changed so substantially over time that, as Bushman notes, "You could say we observed speciation events."
This was particularly true in the Microviridae group, which are bacteriophages with single-stranded circular DNA genomes. Several genetic mechanisms drove the changes, including base substitutions; diversity-generating retroelements, in which reverse transcriptase enzymes introduce mutations into the genome; and CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), in which pieces of the DNA sequences of bacteriophages are incorporated as spacers in the genomes of bacteria.
Such rapid evolution of the virome was perhaps the most surprising finding for the research team. Bushman notes that "different people have quite different bacteria in their guts, so the viral predators on those bacteria are also different. However, another reason people are so different from each other in terms of their virome, emphasized in this paper, is that some of the viruses, once inside a person, are changing really fast. So some of the viral community diversifies and becomes unique within each individual."
Since humans acquire the bacterial population -- and its accompanying virome -- after birth from food and other environmental factors, it's logical that the microbial population living within each of us would differ from person to person. But this work, say the researchers, demonstrates that another major explanatory factor is the constant evolution of the virome within the body. That fact has important implications for the ways in which susceptibility and resistance to disease can differ among individuals, as well as the effectiveness of various drugs and other treatments.
The research was supported by the Human Microbiome Roadmap Demonstration Project (UH2DK083981), the Penn Genome Frontiers Institute, and the University of Pennsylvania Center for AIDS Research (CFAR; P30 AI 045008).


 Mars Had Oxygen-Rich Atmosphere 4,000 Million Years Ago

Differences between Martian meteorites and rocks examined by a NASA rover can be explained if Mars had an oxygen-rich atmosphere 4000 million years ago -- well before the rise of atmospheric oxygen on Earth 2500 million years ago.
Scientists from Oxford University investigated the compositions of Martian meteorites found on Earth and data from NASA's 'Spirit' rover that examined surface rocks in the Gusev crater on Mars. The fact that the surface rocks are five times richer in nickel than the meteorites was puzzling and had cast doubt on whether the meteorites are typical volcanic products of the red planet.
'What we have shown is that both meteorites and surface volcanic rocks are consistent with similar origins in the deep interior of Mars but that the surface rocks come from a more oxygen-rich environment, probably caused by recycling of oxygen-rich materials into the interior,' said Professor Bernard Wood, of Oxford University's Department of Earth Sciences, who led the research reported in this week's Nature.
'This result is surprising because while the meteorites are geologically 'young', around 180 million to 1400 million years old, the Spirit rover was analysing a very old part of Mars, more than 3700 million years old.'
Whilst it is possible that the geological composition of Mars varies immensely from region to region, the researchers believe that it is more likely that the differences arise through a process known as subduction -- in which material is recycled into the interior. They suggest that the Martian surface was oxidised very early in the history of the planet and that, through subduction, this oxygen-rich material was drawn into the shallow interior and recycled back to the surface during eruptions 4000 million years ago. The meteorites, by contrast, are much younger volcanic rocks that emerged from deeper within the planet and so were less influenced by this process.
Professor Wood said: 'The implication is that Mars had an oxygen-rich atmosphere at a time, about 4000 million years ago, well before the rise of atmospheric oxygen on Earth around 2500 million years ago. As oxidation is what gives Mars its distinctive colour, it is likely that the 'red planet' was wet, warm and rusty billions of years before Earth's atmosphere became oxygen rich.'

Studying Meteorites May Reveal Mars' Secrets of Life

 In an effort to determine if conditions were ever right on Mars to sustain life, a team of scientists, including a Michigan State University professor, has examined a meteorite that formed on the Red Planet more than a billion years ago.
And although this team's work is not specifically solving the mystery, it is laying the groundwork for future researchers to answer this age-old question.
The problem, said MSU geological sciences professor Michael Velbel, is that most meteorites that originated on Mars arrived on Earth so long ago that they now have characteristics that tell of their life on Earth, obscuring any clues they might offer about their time on Mars.
"These meteorites contain water-related mineral and chemical signatures that can signify habitable conditions," he said. "The trouble is by the time most of these meteorites have been lying around on Earth they pick up signatures that look just like habitable environments, because they are. Earth, obviously, is habitable.
"If we could somehow prove the signature on the meteorite was from before it came to Earth, that would be telling us about Mars."
Specifically, the team found mineral and chemical signatures on the rocks that indicated terrestrial weathering -- changes that took place on Earth. The identification of these types of changes will provide valuable clues as scientists continue to examine the meteorites.
"Our contribution is to provide additional depth and a little broader view than some work has done before in sorting out those two kinds of water-related alterations -- the ones that happened on Earth and the ones that happened on Mars," Velbel said.
The meteorite that Velbel and his colleagues examined -- known as a nakhlite meteorite -- was recovered in 2003 in the Miller Range of Antarctica. About the size of a tennis ball and weighing in at one-and-a-half pounds, the meteorite was one of hundreds recovered from that area.
Velbel said past examinations of meteorites that originated on Mars, as well as satellite and rover data, prove water once existed on Mars, the fourth planet from the sun and one of Earth's nearest Solar System neighbors.
"However," he said, "until a Mars mission successfully returns samples from Mars, mineralogical studies of geochemical processes on Mars will continue to depend heavily on data from meteorites."
Velbel is currently serving as a senior fellow at the Smithsonian Institution's National Museum of Natural History in Washington D.C.
The research is published in Geochimica et Cosmochimica Acta, a bi-weekly journal co-sponsored by two professional societies, the Geochemical Society and the Meteoritical Society.

'International Beam Team' Solves Martian Meteorite-Age Puzzle

By directing energy beams at tiny crystals found in a Martian meteorite, a Western University-led team of geologists has proved that the most common group of meteorites from Mars is almost 4 billion years younger than many scientists had believed -- resolving a long-standing puzzle in Martian science and painting a much clearer picture of the Red Planet's evolution that can now be compared to that of habitable Earth.
In a paper published today in the journal Nature, lead author Desmond Moser, an Earth Sciences professor from Western's Faculty of Science, Kim Tait, Curator of Mineralogy at the Royal Ontario Museum, and a team of Canadian, U.S., and British collaborators show that a representative meteorite from the Royal Ontario Museum (ROM)'s growing Martian meteorite collection started as a 200-million-year-old lava flow on Mars and contains an ancient chemical signature indicating a hidden layer deep beneath the surface that is almost as old as the solar system.
The team, composed of scientists from ROM, the University of Wyoming, UCLA, and the University of Portsmouth, also discovered crystals that grew while the meteorite was launched from Mars towards Earth, allowing them to narrow down the timing to less than 20 million years ago while also identifying possible launch locations on the flanks of the supervolcanoes at the Martian equator.
More details can be found in their paper titled, "Solving the Martian meteorite age conundrum using micro-baddeleyite and launch-generated zircon."
Moser and his group at Western's Zircon & Accessory Phase Laboratory (ZAPLab), one of the few electron nanobeam dating facilities in the world, determined the growth history of crystals on a polished surface of the meteorite. The researchers combined a long-established dating method (measuring radioactive uranium/lead isotopes) with a recently developed gently-destructive, mineral grain-scale technique at UCLA that liberates atoms from the crystal surface using a focused beam of oxygen ions.
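For readers unfamiliar with uranium/lead geochronology, the textbook relation behind the method is shown below (written for the 238U decay chain); the team's actual analysis may involve different isotope pairs and additional corrections, so this is background only.
```latex
\[
  \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} = e^{\lambda_{238} t} - 1
  \quad\Longrightarrow\quad
  t = \frac{1}{\lambda_{238}}
      \ln\!\left(1 + \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}}\right),
  \qquad
  \lambda_{238} \approx 1.55 \times 10^{-10}\ \mathrm{yr}^{-1},
\]
% where Pb* denotes radiogenic lead accumulated since the crystal formed.
```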
Moser estimates that there are roughly 60 Mars rocks dislodged by meteorite impacts that are now on Earth and available for study, and that his group's approach can be used on these and a much wider range of heavenly bodies.
"Basically, the inner solar system is our oyster. We have hundreds of meteorites that we can apply this technique to, including asteroids from beyond Mars to samples from the Moon," says Moser, who credits the generosity of the collectors that identify this material and make it available for public research.

Warm-Blooded Dinosaurs Worked Up A Sweat


Were dinosaurs "warm-blooded" like present-day mammals and birds, or "cold-blooded" like present day lizards? The implications of this simple-sounding question go beyond deciding whether or not you'd snuggle up to a dinosaur on a cold winter's evening.
In a study published this week in the journal PLoS ONE, a team of researchers, including Herman Pontzer, Ph.D., assistant professor of anthropology in Arts & Sciences, has found strong evidence that many dinosaur species were probably warm-blooded.
If dinosaurs were endothermic (warm-blooded) they would have had the potential for athletic abilities rivalling those of present day birds and mammals, and possibly similar quick thinking and complicated behaviours as well. Their internal furnace would have enabled them to live in colder habitats that would kill ectotherms (cold-blooded animals), such as high mountain ranges and the polar regions, allowing them to cover the entire Mesozoic landscape. These advantages would have come at a cost, however; endothermic animals require much more food than their ectothermic counterparts because their rapid metabolisms fatally malfunction if they cool down too much, and so a constant supply of fuel is required.
Pontzer worked with colleagues John R. Hutchinson and Vivian Allen from the Structure and Motion Laboratory at the Royal Veterinary College, UK, to bring a combination of simple measurements, rigorous computer modeling techniques and their knowledge of physiology in present-day animals to bear in a new study on this hot topic. Using their combined experience, the authors set out to determine whether a variety of dinosaurs and closely related extinct animals were endothermic or ectothermic, and when, where and how often in the dinosaur family tree this important trait may have evolved.
"It's exciting to apply our studies of living animals back to the fossil record to test different evolutionary scenarios," Pontzer said. "I work on the evolution of human locomotion, using studies of living humans and other animals to figure out the gait and efficiency of our earliest fossil ancestors. When I realized this approach could be applied to the dinosaur record, I contacted John Hutchinson, an expert on dinosaur locomotion, and suggested we collaborate on this project. Our results provide strong evidence that many dinosaur species were probably warm-blooded. The debate on this issue will no doubt continue, but we hope our study will add a useful new line of evidence."
Studies of present-day animals have shown that endothermic animals are able to sustain much higher rates of energy use (that is, they have a higher "VO2max") than ectothermic animals can. Following this observation, the researchers reasoned that if the energy cost of walking and running could be estimated in dinosaurs, the results might show whether these extinct species were warm- or cold-blooded. If walking and running burned more energy than a cold-blooded physiology can supply, these dinosaurs were probably warm-blooded.
But metabolism and energy use are complex biological processes, and all that remains of extinct dinosaurs are their bones. So, the authors made use of recent work by Pontzer showing that the energy cost of walking and running is strongly associated with leg length -- so much so that hip height (the distance from the hip joint to the ground) can predict the observed cost of locomotion with 98% accuracy for a wide variety of land animals. As hip height can be simply estimated from the length of fossilized leg bones, Pontzer and colleagues were able to use this to obtain simple but reliable estimates of locomotor cost for dinosaurs.
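A minimal sketch of how such an estimate might be set up is shown below. The power-law coefficients are placeholders (the fitted values from Pontzer's published regression are not reproduced here), and the fraction used to convert leg-bone lengths to hip height is an assumption.
```python
# Illustrative sketch, not the published model: locomotor cost estimated from
# hip height via a generic power law with placeholder coefficients a and b.

def locomotor_cost(hip_height_m, a=1.0, b=-0.8):
    """Relative mass-specific cost of transport (arbitrary units)."""
    return a * hip_height_m ** b

# Hip height approximated from fossil leg-bone lengths (fraction is assumed).
femur, tibia, metatarsal = 1.2, 1.1, 0.5  # metres, hypothetical dinosaur
hip_height = 0.9 * (femur + tibia + metatarsal)

print(f"hip height ~ {hip_height:.2f} m, relative cost ~ {locomotor_cost(hip_height):.2f}")
```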
To back up these estimates, the authors used a more complex method based on estimating the actual volume of leg muscle dinosaurs would have had to activate in order to move, using methods Hutchinson and Pontzer had previously developed. Activating more muscle leads to greater energy demands, which may in turn require an endothermic metabolism to fuel. Estimating active muscle volume in an extinct animal is a great deal more complicated than measuring the length of the legs, however, and so the authors went back to basic principles of locomotion.
First, how large would the forces required from the legs have to be to move the animal? In present-day animals, this is mainly determined by how much the animal weighs and what sort of leg posture it uses -- straight-legged like a human or bent-legged like a bird, for example. Second, how much muscle would be needed to supply these forces? Experiments in biological mechanics have shown that this depends mainly on the limb muscles' mechanical advantage, which in turn depends strongly on the size of the bony levers they are attached to.
To apply these principles to extinct dinosaurs, Pontzer and colleagues examined recent anatomical models of 13 extinct dinosaur species, using detailed measurements of the fossilized bony levers that limb muscles attached to. From this, the authors were able to reconstruct the mechanical advantage of the limb muscles and calculate the active muscle volume required for each dinosaur to walk or run at different speeds. The cost of activating this muscle was then compared to similar costs in present-day endothermic and ectothermic animals.
The results of both the simple and complex method were in very close agreement: based on the energy they consumed when moving, many dinosaurs were probably endothermic, athletic animals because their energy requirements during walking and running were too high for cold-blooded animals to produce. Interestingly, when the results for each dinosaur were arranged into an evolutionary family tree, the authors found that endothermy might be the ancestral condition for all dinosaurs. This pushes the evolution of endothermy further back into the ancient past than many researchers expected, suggesting that dinosaurs were athletic, endothermic animals throughout the Mesozoic era. This early adoption of high metabolic rates may be one of the key factors in the massive evolutionary success that dinosaurs enjoyed during the Triassic, Jurassic and Cretaceous periods, and continue to enjoy now in feathery, flying form.
Their methods add to the many lines of evidence, from bone histology to lung ventilation and insulatory "protofeathers," that are all beginning to support the fundamental conclusion that dinosaurs were generally endothermic. Ironically, indirect anatomical evidence for active locomotion in dinosaurs was originally some of the first evidence used by researchers John Ostrom and Robert Bakker in the 1960s to infer that dinosaurs were endothermic.
Pontzer and his colleagues provide a new perspective on dinosaur anatomy, linking limb design to energetics and metabolic strategies. The debate over dinosaur physiology will no doubt continue to evolve, and while the physiology of long-extinct species will always remain a bit speculative, the authors hope the methods developed in this study provide a new tool for researchers in the field. 

New Evidence for Warm-Blooded Dinosaurs

University of Adelaide research has shown new evidence that dinosaurs were warm-blooded like birds and mammals, not cold-blooded like reptiles as commonly believed.
In a paper published in PLoS ONE, Professor Roger Seymour of the University's School of Earth and Environmental Sciences argues that cold-blooded dinosaurs would not have had the required muscular power to prey on other animals and dominate over mammals as they did throughout the Mesozoic era.
"Much can be learned about dinosaurs from fossils but the question of whether dinosaurs were warm-blooded or cold-blooded is still hotly debated among scientists," says Professor Seymour.
"Some point out that a large saltwater crocodile can achieve a body temperature above 30°C by basking in the sun, and it can maintain the high temperature overnight simply by being large and slow to change temperature.
"They say that large, cold-blooded dinosaurs could have done the same and enjoyed a warm body temperature without the need to generate the heat in their own cells through burning food energy like warm-blooded animals."
In his paper, Professor Seymour asks how much muscular power could be produced by a crocodile-like dinosaur compared to a mammal-like dinosaur of the same size.
Saltwater crocodiles reach over a tonne in weight and, being about 50% muscle, have a reputation for being extremely powerful animals.
But drawing from blood and muscle lactate measurements collected by his collaborators at Monash University, University of California and Wildlife Management International in the Northern Territory, Professor Seymour shows that a 200 kg crocodile can produce only about 14% of the muscular power of a mammal at peak exercise, and this fraction seems to decrease at larger body sizes.
"The results further show that cold-blooded crocodiles lack not only the absolute power for exercise, but also the endurance, that are evident in warm-blooded mammals," says Professor Seymour.
"So, despite the impression that saltwater crocodiles are extremely powerful animals, a crocodile-like dinosaur could not compete well against a mammal-like dinosaur of the same size.
"Dinosaurs dominated over mammals in terrestrial ecosystems throughout the Mesozoic. To do that they must have had more muscular power and greater endurance than a crocodile-like physiology would have allowed."
His latest evidence adds to that of earlier work he did on blood flow to leg bones, which concluded that dinosaurs were possibly even more active than mammals.

Tropical Cyclones in the Arabian Sea Have Intensified Due to Earlier Monsoon Onset

The tropical cyclones in the Arabian Sea during the pre-monsoon season (May-June) have intensified since 1997 compared to 1979-1997. This has been attributed to decreased vertical wind shear caused by the dimming effects of increased anthropogenic black carbon and sulfate emissions in the region. The decrease in vertical wind shear, however, is not the result of these emissions, but of an on-average 15-day earlier occurrence of tropical cyclones, according to a study spearheaded by Bin Wang at the International Pacific Research Center, University of Hawaii at Manoa and published in "Brief Communications Arising" in the September 20, 2012, issue of Nature.
"About 90% of the pre-monsoon tropical cyclones occur during a small widow in late spring. The mean date during which the cyclones with maximum intensity occur has advanced from June 8 in the earlier period to May 24 in the second period," explains Bin Wang. "This advance has been accompanied by a significant decrease in vertical wind shear, which leads to tropical cyclone intensification, because large vertical wind shear is most destructive to intensification."
"The ultimate reason for this earlier occurrence of storms and their intensification is the tendency we have noticed for the southwesterly monsoon to begin earlier in recent years," says Wang. "This earlier monsoon onset is related to the greater warming of the Asian landmass than the ocean and thus an increased temperature ocean-land contrast over the last years. This greater temperature difference may strengthen the monsoon and create more favorable conditions for the formation of tropical cyclones."
"All the changes that we see in the pre-monsoon storms and the earlier monsoon onset since the late 90s, can be the result either of natural variability, namely the Interdecadal Pacific Oscillation, or of warming effects due to greater greenhouse gas emissions, but not the effect of increased aerosols. Only time and more research will tell."

'Brown Ocean' Can Fuel Inland Tropical Cyclones

In the summer of 2007, Tropical Storm Erin stumped meteorologists. Most tropical cyclones dissipate after making landfall, weakened by everything from friction and wind shear to loss of the ocean as a source of heat energy. Not Erin. The storm intensified as it tracked through Texas. It formed an eye over Oklahoma. As it spun over the southern plains, Erin grew stronger than it ever had been over the ocean.

Erin is an example of a newly defined type of inland tropical cyclone that maintains or increases strength after landfall, according to NASA-funded research by Theresa Andersen and J. Marshall Shepherd of the University of Georgia in Athens.
Before making landfall, tropical storms gather power from the warm waters of the ocean. Storms in the newly defined category derive their energy instead from the evaporation of abundant soil moisture -- a phenomenon that Andersen and Shepherd call the "brown ocean."
"The land essentially mimics the moisture-rich environment of the ocean, where the storm originated," Andersen said.
The study is the first global assessment of the post-landfall strength and structure of inland tropical cyclones, and the weather and environmental conditions in which they occur.
"A better understanding of inland storm subtypes, and the differences in the physical processes that drive them, could ultimately improve forecasts," Andersen said. "Prediction and earlier warnings can help minimize damage and loss of life from severe flooding, high winds, and other tropical cyclone hazards."
The study was published March 2013 in the International Journal of Climatology.
To better understand tropical cyclones that survive beyond landfall, Andersen and Shepherd accessed data archived by the National Oceanic and Atmospheric Administration's National Climatic Data Center for tropical cyclones from 1979 to 2008. Storms had to meet the criteria of retaining a measurable central pressure by the time they tracked at least 220 miles (350 kilometers) inland, away from the maritime influence of the nearest coast. Next, they obtained atmospheric and environmental data for before and after the storms from NASA's Modern Era Retrospective-Analysis for Research and Applications.
Of the 227 inland tropical cyclones identified, 45 maintained or increased strength, as determined by their wind speed and central pressure. The researchers show, however, that not all such storms are fueled equally.
In October 2012, Hurricane Sandy demonstrated the destructive power of extratropical cyclones -- a well-studied storm type that undergoes a known physical and thermal transition. These systems begin as warm-core tropical cyclones that derive energy from the ocean. Over land, the storms transition to cold-core extratropical cyclones that derive energy from clashes between different air masses. Of the study's 45 inland storms that maintained or increased strength, 17 belonged to this category.
Tropical Storm Erin, however, is among the newly described storm category that accounted for 16 of the 45 tropical cyclones. Instead of transitioning from a warm-core to a cold-core system, these storms maintain their tropical warm-core characteristics. This storm type, which Andersen and Shepherd call tropical cyclone maintenance and intensification events, or TCMIs, has the potential to deliver much more rainfall than its extratropical counterparts.
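A minimal sketch of the selection and classification logic described above, using an invented table of storm tracks (the column names and values are hypothetical; the study drew on NOAA National Climatic Data Center tracks for 1979-2008 and NASA MERRA reanalysis fields):
```python
import pandas as pd

# Hypothetical post-landfall storm records (illustrative values only).
storms = pd.DataFrame({
    "name":             ["Erin", "StormA", "StormB"],
    "inland_km":        [400.0, 500.0, 360.0],   # distance from nearest coast
    "central_pressure": [995.0, 1002.0, 998.0],  # hPa, measurable inland
    "intensity_change": [+1, 0, -1],             # + intensified, 0 maintained, - weakened
    "core_type":        ["warm", "cold", "warm"],
})

# Keep storms at least 350 km inland with a measurable central pressure.
inland = storms[(storms.inland_km >= 350) & storms.central_pressure.notna()]
kept = inland[inland.intensity_change >= 0]

# Warm-core storms that maintain or intensify correspond to TCMIs;
# cold-core ones correspond to extratropical transitions.
tcmi = kept[kept.core_type == "warm"]
extratropical = kept[kept.core_type == "cold"]
print(len(tcmi), "TCMI-like;", len(extratropical), "extratropical-like")
```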
"Until events like Erin in 2007, there was not much focus on post-landfall tropical cyclones unless they transitioned," Andersen said. "Erin really brought attention to the inland intensification of tropical cyclones."
"This is particularly critical since a study by former National Hurricane Center Deputy Director Ed Rappaport found that 59 percent of fatalities in landfalling tropical cyclones are from inland freshwater flooding," Shepherd said.
While most inland tropical cyclones occur in the United States and China, the hotspot for TCMIs during the 30-year study period turned out to be Australia. The uneven geographic distribution led Andersen and Shepherd to investigate the environment and conditions surrounding the brown ocean phenomenon that gives rise to the storms.
Andersen and Shepherd show that a brown ocean environment consists of three observable conditions. First, the lower level of the atmosphere mimics a tropical atmosphere with minimal variation in temperature. Second, soils in the vicinity of the storms need to contain ample moisture. Finally, evaporation of the soil moisture releases latent heat, which the team found must average at least 70 watts per square meter. For comparison, the latent heat flux from the ocean averages about 200 watts per square meter.
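Expressed as a simple check, the three conditions might look like the sketch below; only the 70 W/m2 latent-heat threshold comes from the study, while the other two cut-offs are illustrative assumptions.
```python
def brown_ocean_conditions(temp_variation_c, soil_moisture_fraction, latent_heat_wm2):
    """Return True if all three 'brown ocean' conditions hold (thresholds partly assumed)."""
    tropical_like_atmosphere = temp_variation_c < 5.0   # minimal low-level temperature variation (assumed cut-off)
    ample_soil_moisture = soil_moisture_fraction > 0.3  # "ample" soil moisture (assumed cut-off)
    sufficient_latent_heat = latent_heat_wm2 >= 70.0    # threshold reported in the study
    return tropical_like_atmosphere and ample_soil_moisture and sufficient_latent_heat

# Example roughly in the range described for Erin's 2007 track (values invented):
print(brown_ocean_conditions(temp_variation_c=3.0, soil_moisture_fraction=0.4, latent_heat_wm2=90.0))
```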
Indeed, all three conditions were present when Erin tracked across the U.S. Gulf Coast and Midwest. Still, questions remain about the factors -- such as variations in climate, soil and vegetation -- that make Australia the region where brown ocean conditions most often turn up.
The research also points to possible implications for storms' response to climate change. "As dry areas get drier and wet areas get wetter, are you priming the soil to get more frequent inland tropical cyclone intensification?" asked Shepherd.

Solar Tsunami Used to Measure Sun's Magnetic Field

A solar tsunami observed by NASA's Solar Dynamics Observatory (SDO) and the Japanese Hinode spacecraft has been used to provide the first accurate estimates of the Sun's magnetic field.
Solar tsunamis are produced by enormous explosions in the Sun's atmosphere called coronal mass ejections (CMEs). As the CME travels out into space, the tsunami travels across the Sun at speeds of up to 1000 kilometres per second.
Similar to tsunamis on Earth, the shape of solar tsunamis is changed by the environment through which they move. Just as sound travels faster in water than in air, solar tsunamis have a higher speed in regions of stronger magnetic field. This unique feature allowed the team, led by researchers from UCL's Mullard Space Science Laboratory, to measure the Sun's magnetic field. The results are outlined in a paper soon to be published in the journal Solar Physics.
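The underlying physics is the standard magnetohydrodynamic link between wave speed and field strength; the textbook relations below show why a faster tsunami implies a stronger field (the team's actual inversion may include further geometric and density corrections):
```latex
\[
  v_{f}^{2} \approx c_{s}^{2} + v_{A}^{2},
  \qquad
  v_{A} = \frac{B}{\sqrt{\mu_{0}\rho}}
  \quad\Longrightarrow\quad
  B \approx \sqrt{\mu_{0}\,\rho\,\left(v_{f}^{2} - c_{s}^{2}\right)},
\]
% where v_f is the measured tsunami (fast-mode) speed, c_s the sound speed,
% rho the plasma density (here from Hinode/EIS spectra), and mu_0 the
% permeability of free space.
```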
Dr David Long, UCL Mullard Space Science Laboratory, and lead author of the research, said: "We've demonstrated that the Sun's atmosphere has a magnetic field about ten times weaker than a normal fridge magnet."
Using data obtained using the Extreme ultraviolet Imaging Spectrometer (EIS), a UK-led instrument on the Japanese Hinode spacecraft, the team measured the density of the solar atmosphere through which the tsunami was travelling.
The combination of imaging and spectral observations provides a rare opportunity to examine the magnetic field which permeates the Sun's atmosphere.
Dr Long noted: "These are rare observations of a spectacular event that reveal some really interesting details about our nearest star."
Visible as loops and other structures in the Sun's atmosphere, the Sun's magnetic field is difficult to measure directly and usually has to be estimated using intensive computer simulations. The Hinode spacecraft has three highly sensitive telescopes, which use visible, X-ray and ultraviolet light to examine both slow and rapid changes in the magnetic field.
The instruments on Hinode act like a microscope to track how the magnetic field around sunspots is generated, shapes itself, and then fades away. These results show just how sensitive these instruments can be, measuring magnetic fields that were previously thought too weak to detect.
The explosions that produce solar tsunamis can send CMEs hurtling towards the Earth. Although protected by its own magnetic field, the Earth is vulnerable to these solar storms as they can adversely affect satellites and technological infrastructure.
Dr Long said: "As our dependency on technology increases, understanding how these eruptions occur and travel will greatly assist in protecting against solar activity."

Modern Dog Breeds Genetically Disconnected from Ancient Ancestors 

Cross-breeding of dogs over thousands of years has made it extremely difficult to trace the ancient genetic roots of today's pets, according to a new study led by Durham University.
An international team of scientists analyzed data of the genetic make-up of modern-day dogs, alongside an assessment of the global archaeological record of dog remains, and found that modern breeds genetically have little in common with their ancient ancestors.
Dogs were the first domesticated animals and the researchers say their findings will ultimately lead to greater understanding of dogs' origins and the development of early human civilization.
Although many modern breeds look like those depicted in ancient texts or in Egyptian pyramids, cross-breeding across thousands of years has meant that it is not accurate to label any modern breeds as "ancient," the researchers said.
Breeds such as the Akita, Afghan Hound and Chinese Shar-Pei, which have been classed as "ancient," are no closer to the first domestic dogs than other breeds due to the effects of lots of cross-breeding, the study found.
Other effects on the genetic diversity of domestic dogs include patterns of human movement and the impact on dog population sizes caused by major events, such as the two World Wars, the researchers added.
The findings were published May 21 in the scientific journal Proceedings of the National Academy of Sciences USA (PNAS). The Durham-led research team was made up of scientists from a number of universities including Uppsala University, Sweden, and the Broad Institute, in the USA.
In total the researchers analysed genetic data from 1,375 dogs representing 35 breeds. They also looked at genetic data from wolves; recent genetic studies suggest that dogs are exclusively descended from the grey wolf.
Lead author Dr Greger Larson, an evolutionary biologist in Durham University's Department of Archaeology, said the study demonstrated that there is still a lot we do not know about the early history of dog domestication including where, when, and how many times it took place.
Dr Larson added: "We really love our dogs and they have accompanied us across every continent.
"Ironically, the ubiquity of dogs combined with their deep history has obscured their origins and made it difficult for us to know how dogs became man's best friend.
"All dogs have undergone significant amounts of cross-breeding to the point that we have not yet been able to trace all the way back to their very first ancestors."
Several breeds, including Basenjis, Salukis and Dingoes, possess a differing genetic signature, which previous studies have claimed to be evidence for their ancient heritage, the research found.
However, the study said that the unique genetic signatures in these dogs were not present because of a direct heritage with ancient dogs. Instead these animals appeared genetically different because they were geographically isolated and were not part of the 19th Century Victorian-initiated Kennel Clubs that blended lineages to create most of the breeds we keep as pets today.
The study also suggested that within the 15,000 year history of dog domestication, keeping dogs as pets only began 2,000 years ago and that until very recently, the vast majority of dogs were used to do specific jobs.
Dr Larson said: "Both the appearance and behavior of modern breeds would be deeply strange to our ancestors who lived just a few hundred years ago.
"And so far, anyway, studying modern breeds hasn't yet allowed us to understand how, where and when dogs and humans first started this wonderful relationship."
The researchers added that DNA sequencing technology is faster and cheaper than ever and could soon lead to further insights into the domestication and subsequent evolution of dogs. 

Asian Origins of Native American Dogs Confirmed

Once thought to have been extinct, native American dogs are on the contrary thriving, according to a recent study that links these breeds to ancient Asia.
The arrival of Europeans in the Americas has generally been assumed to have led to the extinction of indigenous dog breeds; but a comprehensive genetic study has found that the original population of native American dogs has been almost completely preserved, says Peter Savolainen, a researcher in evolutionary genetics at KTH Royal Institute of Technology in Stockholm.
In fact, American dog breeds trace their ancestry to ancient Asia, Savolainen says. These native breeds have 30 percent or less modern replacement by European dogs, he says.
"Our results confirm that American dogs are a remaining part of the indigenous American culture, which underscores the importance of preserving these populations," he says.
Savolainen's research group, in cooperation with colleagues in Portugal, compared mitochondrial DNA from Asian and European dogs, ancient American archaeological samples, and American dog breeds, including Chihuahuas, Peruvian hairless dogs and Arctic sled dogs.
They traced the American dogs' ancestry back to East Asian and Siberian dogs, and also found direct relations between ancient American dogs and modern breeds.
"It was especially exciting to find that the Mexican breed, Chihuahua, shared a DNA type uniquely with Mexican pre-Columbian samples," he says. "This gives conclusive evidence for the Mexican ancestry of the Chihuahua."
The team also analysed stray dogs, confirming them generally to be runaway European dogs; but in Mexico and Bolivia they identified populations with high proportions of indigenous ancestry.
Savolainen says that the data also suggests that the Carolina Dog, a stray dog population in the U.S., may have an indigenous American origin.
Savolainen works at the Science for Life Laboratory (SciLifeLab www.scilifelab.se), a collaboration involving KTH Royal Institute of Technology, Stockholm University, the Karolinska Institutet and Uppsala University. 

Climate Changes Faster Than Species Can Adapt, Rattlesnake Study Finds

The ranges of species will have to change dramatically as a result of climate change between now and 2100 because the climate will change more than 100 times faster than the rate at which species can adapt, according to a newly published study by Indiana University researchers.
The study, which focuses on North American rattlesnakes, finds that the rate of future change in suitable habitat will be two to three orders of magnitude greater than the average change over the past 300 millennia, a time that included three major glacial cycles and significant variation in climate and temperature.
"We find that, over the next 90 years, at best these species' ranges will change more than 100 times faster than they have during the past 320,000 years," said Michelle Lawing, lead author of the paper and a doctoral candidate in geological sciences and biology at IU Bloomington. "This rate of change is unlike anything these species have experienced, probably since their formation."
The study, "Pleistocene Climate, Phylogeny, and Climate Envelope Models: An Integrative Approach to Better Understand Species' Response to Climate Change," was published by the online science journal PLoS ONE. Co-author is P. David Polly, associate professor in the Department of Geological Sciences in the IU Bloomington College of Arts and Sciences.
The researchers make use of the fact that species have been responding to climate change throughout their history and their past responses can inform what to expect in the future. They synthesize information from climate cycle models, indicators of climate from the geological record, evolution of rattlesnake species and other data to develop what they call "paleophylogeographic models" for rattlesnake ranges. This enables them to map the expansion and contraction at 4,000-year intervals of the ranges of 11 North American species of the rattlesnake genus Crotalus.
Projecting the models into the future, the researchers calculate the expected changes in range at the lower and upper extremes of warming predicted by the Intergovernmental Panel on Climate Change -- between 1.1 degree and 6.4 degrees Celsius. They calculate that rattlesnake ranges have moved an average of only 2.3 meters a year over the past 320,000 years and that their tolerances to climate have evolved about 100 to 1000 times slower, indicating that range shifts are the only way that rattlesnakes have coped with climate change in the recent past. With projected climate change in the next 90 years, the ranges would be displaced by a remarkable 430 meters to 2,400 meters a year.
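To see where the "more than 100 times faster" figure comes from, the numbers above can be compared directly. The short Python sketch below is purely illustrative and uses only the rates quoted in this article; the variable names are ours, not the authors'.

    # Back-of-the-envelope comparison of the range-shift rates quoted above.
    historical_rate = 2.3            # metres per year, average over the past 320,000 years
    projected_rates = [430, 2400]    # metres per year, low and high warming scenarios

    for projected in projected_rates:
        ratio = projected / historical_rate
        print(f"{projected} m/yr is roughly {ratio:.0f} times the historical rate")

Both scenarios come out at well over one hundred times the historical pace, consistent with the study's headline comparison.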
Increasing temperature does not necessarily mean expanded suitable habitats for rattlesnakes. For example, Crotalus horridus, the timber rattlesnake, is now found throughout the Eastern United States. The study finds that, with a temperature increase of 1.1 degree Celsius over the next 90 years, its range would expand slightly into New York, New England and Texas. But with an increase of 6.4 degrees, its range would shrink to a small area on the Tennessee-North Carolina border. C. adamanteus, the eastern diamondback rattlesnake, would be displaced entirely from its current range in the southeastern U.S. with a temperature increase of 6.4 degrees.
The findings suggest snakes wouldn't be able to move fast enough to keep up with the change in suitable habitat. The authors suggest the creation of habitat corridors and managed relocation may be needed to preserve some species.
Rattlesnakes are good indicators of climate change because they are ectotherms, which depend on the environment to regulate their body temperatures. But Lawing and Polly note that many organisms will be affected by climate change, and their study provides a model for examining what may happen with other species. Their future research could address the past and future effects of climate change on other types of snakes and on the biological communities of snakes.



Climate Change Evident Across Europe, Confirming Urgent Need for Adaptation

Climate change is affecting all regions in Europe, causing a wide range of impacts on society and the environment. Further impacts are expected in the future, potentially causing high damage costs, according to the latest assessment published by the European Environment Agency this week.
The report, 'Climate change, impacts and vulnerability in Europe 2012', finds that higher average temperatures have been observed across Europe as well as decreasing precipitation in southern regions and increasing precipitation in northern Europe. The Greenland ice sheet, Arctic sea ice and many glaciers across Europe are melting, snow cover has decreased and most permafrost soils have warmed.
Extreme weather events such as heat waves, floods and droughts have caused rising damage costs across Europe in recent years. While more evidence is needed to discern the part played by climate change in this trend, growing human activity in hazard-prone areas has been a key factor. Future climate change is expected to add to this vulnerability, as extreme weather events are expected to become more intense and frequent. If European societies do not adapt, damage costs are expected to continue to rise, according to the report.
Some regions will be less able to adapt to climate change than others, in part due to economic disparities across Europe, the report says. The effects of climate change could deepen these inequalities.
Jacqueline McGlade, EEA Executive Director said: "Climate change is a reality around the world, and the extent and speed of change is becoming ever more evident. This means that every part of the economy, including households, needs to adapt as well as reduce emissions."

Evolution Too Slow to Keep Up With Climate Change

Many vertebrate species would have to evolve about 10,000 times faster than they have in the past to adapt to the rapid climate change expected in the next 100 years, a study led by a University of Arizona ecologist has found.
Scientists analyzed how quickly species adapted to different climates in the past, using data from 540 living species from all major groups of terrestrial vertebrates, including amphibians, reptiles, birds and mammals. They then compared their rates of evolution to rates of climate change projected for the end of this century. This is the first study to compare past rates of adaptation to future rates of climate change.
The results, published online in the journal Ecology Letters, show that terrestrial vertebrate species appear to evolve too slowly to be able to adapt to the dramatically warmer climate expected by 2100. The researchers suggested that many species may face extinction if they are unable to move or acclimate.
"Every species has a climatic niche which is the set of temperature and precipitation conditions in the area where it lives and where it can survive," explained John J. Wiens, a professor in UA's department of ecology and evolutionary biology in the College of Science. "For example, some species are found only in tropical areas, some only in cooler temperate areas, some live high in the mountains, and some live in the deserts."
Wiens conducted the research together with Ignacio Quintero, a postgraduate research assistant at Yale University.
"We found that on average, species usually adapt to different climatic conditions at a rate of only by about 1 degree Celsius per million years," Wiens explained. "But if global temperatures are going to rise by about 4 degrees over the next hundred years as predicted by the Intergovernmental Panel of Climate Change, that is where you get a huge difference in rates. What that suggests overall is that simply evolving to match these conditions may not be an option for many species."
For their analysis, Quintero and Wiens studied phylogenies -- essentially evolutionary family trees showing how species are related to each other -- based on genetic data. These trees reveal how long ago species split from each other. The sampling covered 17 families representing the major living groups of terrestrial vertebrates, including frogs, salamanders, lizards, snakes, crocodilians, birds and mammals.
They then combined these evolutionary trees with data on the climatic niche of each species to estimate how quickly climatic niches evolve among species, using climatic data such as annual mean temperature and annual precipitation as well as high and low extremes.
"Basically, we figured out how much species changed in their climatic niche on a given branch, and if we know how old a species is, we can estimate how quickly the climatic niche changes over time," Wiens explained. "For most sister species, we found that they evolved to live in habitats with an average temperature difference of only about 1 or 2 degrees Celsius over the course of one to a few million years."
"We then compared the rates of change over time in the past to projections for what climatic conditions are going to be like in 2100 and looked at how different these rates are. If the rates were similar, it would suggest there is a potential for species to evolve quickly enough to be able to survive, but in most cases, we found those rates to be different by about 10,000-fold or more," he said.
"According to our data, almost all groups have at least some species that are potentially endangered, particularly tropical species."
Species can respond to climate change by acclimating without evolutionary change or by moving over space to track their preferred climate. For example, some species might be able to move to higher latitudes or higher elevations to remain in suitable conditions as the climate warms. In addition, many species could lose many populations due to climate change but might still be able to persist as a species if some of their populations survive. Barring any of these options, extinction is the most likely outcome.
He explained that moving to more suitable climatic conditions may not always be an option for many species.
"Some studies suggest many species won't be able to move fast enough," he said. "Also, moving may require unimpeded access to habitats that have not been heavily disturbed by humans. Or consider a species living on the top of a mountain. If it gets too warm or dry up there, they can't go anywhere."
In an earlier study, Wiens and co-authors asked what might actually cause species to go extinct. They showed that species extinctions and declines from climate change are more often due to changes in interactions with other species rather than inability to cope with changing conditions physiologically.
"What seemed to be a big driver in many species declines was reduced food availability," Wiens said. "For example, bighorn sheep: If it gets drier and drier, the grass gets sparse and they starve to death.

New Stretchable Solar Cells Will Power Artificial Electronic 'Super Skin'

 "Super skin" is what Stanford researcher Zhenan Bao wants to create. She's already developed a flexible sensor that is so sensitive to pressure it can feel a fly touch down. Now she's working to add the ability to detect chemicals and sense various kinds of biological molecules. She's also making the skin self-powering, using polymer solar cells to generate electricity. And the new solar cells are not just flexible, but stretchable -- they can be stretched up to 30 percent beyond their original length and snap back without any damage or loss of power.
Super skin, indeed.
"With artificial skin, we can basically incorporate any function we desire," said Bao, a professor of chemical engineering. "That is why I call our skin 'super skin.' It is much more than what we think of as normal skin."
The foundation for the artificial skin is a flexible organic transistor, made with flexible polymers and carbon-based materials. To allow touch sensing, the transistor contains a thin, highly elastic rubber layer, molded into a grid of tiny inverted pyramids. When pressed, this layer changes thickness, which changes the current flow through the transistor. The sensors have from several hundred thousand to 25 million pyramids per square centimeter, corresponding to the desired level of sensitivity.
To sense a particular biological molecule, the surface of the transistor has to be coated with another molecule to which the first one will bind when it comes into contact. The coating layer only needs to be a nanometer or two thick.
"Depending on what kind of material we put on the sensors and how we modify the semiconducting material in the transistor, we can adjust the sensors to sense chemicals or biological material," she said.
Bao's team has successfully demonstrated the concept by detecting a certain kind of DNA. The researchers are now working on extending the technique to detect proteins, which could prove useful for medical diagnostics purposes.
"For any particular disease, there are usually one or more specific proteins associated with it -- called biomarkers -- that are akin to a 'smoking gun,' and detecting those protein biomarkers will allow us to diagnose the disease," Bao said.
The same approach would allow the sensors to detect chemicals, she said. By adjusting aspects of the transistor structure, the super skin can detect chemical substances in either vapor or liquid environments.
Regardless of what the sensors are detecting, they have to transmit electronic signals to get their data to the processing center, whether it is a human brain or a computer.
Having the sensors run on the sun's energy makes generating the needed power simpler than using batteries or hooking up to the electrical grid, allowing the sensors to be lighter and more mobile. And having solar cells that are stretchable opens up other applications.
A recent research paper by Bao, describing the stretchable solar cells, will appear in an upcoming issue of Advanced Materials. The paper details the ability of the cells to be stretched in one direction, but she said her group has since demonstrated that the cells can be designed to stretch along two axes.
The cells have a wavy microstructure that extends like an accordion when stretched. A liquid metal electrode conforms to the wavy surface of the device in both its relaxed and stretched states.
"One of the applications where stretchable solar cells would be useful is in fabrics for uniforms and other clothes," said Darren Lipomi, a graduate student in chemical engineering in Bao's lab and lead author of the paper.
"There are parts of the body, at the elbow for example, where movement stretches the skin and clothes," he said. "A device that was only flexible, not stretchable, would crack if bonded to parts of machines or of the body that extend when moved." Stretchability would be useful in bonding solar cells to curved surfaces without cracking or wrinkling, such as the exteriors of cars, lenses and architectural elements.
The solar cells continue to generate electricity while they are stretched out, producing a continuous flow of electricity for data transmission from the sensors.
Bao said she sees the super skin as much more than a super mimic of human skin; it could allow robots or other devices to perform functions beyond what human skin can do.
"You can imagine a robot hand that can be used to touch some liquid and detect certain markers or a certain protein that is associated with some kind of disease and the robot will be able to effectively say, 'Oh, this person has that disease,'" she said. "Or the robot might touch the sweat from somebody and be able to say, 'Oh, this person is drunk.'"
Finally, Bao has figured out how to replace the materials used in earlier versions of the transistor with biodegradable materials. Now, not only will the super skin be more versatile and powerful, it will also be more eco-friendly.

Breakthrough Could Lead to 'Artificial Skin' That Senses Touch, Humidity and Temperature

 Using tiny gold particles and a kind of resin, a team of scientists at the Technion-Israel Institute of Technology has discovered how to make a new kind of flexible sensor that one day could be integrated into electronic skin, or e-skin. If scientists learn how to attach e-skin to prosthetic limbs, people with amputations might once again be able to feel changes in their environments.
The findings appear in the June issue of ACS Applied Materials & Interfaces.
The secret lies in the sensor's ability to detect three kinds of data simultaneously. While current kinds of e-skin detect only touch, the Technion team's invention "can simultaneously sense touch, humidity, and temperature, as real skin can do," says research team leader Professor Hossam Haick. Additionally, the new system "is at least 10 times more sensitive in touch than the currently existing touch-based e-skin systems."
Researchers have long been interested in flexible sensors, but have had trouble adapting them for real-world use. To make its way into mainstream society, a flexible sensor would have to run on low voltage (so it would be compatible with the batteries in today's portable devices), measure a wide range of pressures, and make more than one measurement at a time, including humidity, temperature, pressure, and the presence of chemicals. In addition, these sensors would also have to be able to be made quickly, easily, and cheaply.
The Technion team's sensor has all of these qualities. The secret is the use of monolayer-capped nanoparticles that are only 5-8 nanometers in diameter. They are made of gold and surrounded by connector molecules called ligands. In fact, "monolayer-capped nanoparticles can be thought of as flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it," says Haick.
The team discovered that when these nanoparticles are laid on top of a substrate -- in this case, made of PET (flexible polyethylene terephthalate), the same plastic found in soda bottles -- the resulting compound conducts electricity differently depending on how the substrate is bent. (The bending motion brings some particles closer to others, increasing how quickly electrons can pass between them.) This electrical property means that the sensor can detect a large range of pressures, from tens of milligrams to tens of grams. "The sensor is very stable and can be attached to any surface shape while keeping the function stable," says Dr. Nir Peled, Head of the Thoracic Cancer Research and Detection Center at Israel's Sheba Medical Center, who was not involved in the research.
And by varying how thick the substrate is, as well as what it is made of, scientists can modify how sensitive the sensor is. Because these sensors can be customized, they could in the future perform a variety of other tasks, including monitoring strain on bridges and detecting cracks in engines.
"Indeed," says Dr. Peled, "the development of the artificial skin as biosensor by Professor Haick and his team is another breakthrough that puts nanotechnology at the front of the diagnostic era."
The research team also included Meital Segev-Bar and Gregory Shuster, graduate students in the Technion's Russell Berrie Nanotechnology Institute, as well as Avigail Landman and Maayan Nir-Shapira, undergraduate students in the Technion's Chemical Engineering Department.

Origin Of Life On Earth: Simple Fusion To Jump-Start Evolution

 With the aid of a straightforward experiment, researchers have provided some clues to one of biology's most complex questions: how ancient organic molecules came together to form the basis of life.
Specifically, this study demonstrated how ancient RNA joined together to reach a biologically relevant length.
RNA, the single-stranded precursor to DNA, normally expands one nucleic base at a time, growing sequentially like a linked chain. The problem is that in the primordial world RNA molecules didn't have enzymes to catalyze this reaction, and while RNA growth can proceed naturally, the rate would be so slow the RNA could never get more than a few pieces long (for as nucleic bases attach to one end, they can also drop off the other).
Ernesto Di Mauro and colleagues examined whether some mechanism could overcome this thermodynamic barrier by incubating short RNA fragments in water at different temperatures and pH levels.
They found that under favorable conditions (an acidic environment and temperatures below 70 degrees Celsius), pieces ranging from 10 to 24 bases in length could naturally fuse into larger fragments, generally within 14 hours.
The RNA fragments came together as double-stranded structures then joined at the ends. The fragments did not have to be the same size, but the efficiency of the reactions was dependent on fragment size (larger is better, though efficiency drops again after reaching around 100) and the similarity of the fragment sequences.
The researchers note that this spontaneous fusing, or ligation, would be a simple way for RNA to overcome initial barriers to growth and reach a biologically important size; at around 100 bases long, RNA molecules can begin to fold into functional, 3D shapes. 

How Some Unusual RNA Molecules Home in On Targets

The genes that code for proteins -- more than 20,000 in total -- make up only about 1 percent of the complete human genome. That entire thing -- not just the genes, but also genetic junk and all the rest -- is coiled and folded up in any number of ways within the nucleus of each of our cells. Think, then, of the challenge that a protein or other molecule, like RNA, faces when searching through that material to locate a target gene.
Now a team of researchers, led by newly arrived biologist Mitchell Guttman of the California Institute of Technology (Caltech) and Kathrin Plath of UCLA, has figured out how some RNA molecules take advantage of their position within the three-dimensional mishmash of genomic material to home in on targets. The research appears in the current issue of Science Express.
The findings suggest a unique role for a class of RNAs, called lncRNAs, which Guttman and his colleagues at the Broad Institute of MIT and Harvard first characterized in 2009. Until then, these lncRNAs -- short for long, noncoding RNAs and pronounced "link RNAs" -- had been largely overlooked because they lie in between the genes that code for proteins. Guttman and others have since shown that lncRNAs scaffold, or bring together and organize, key proteins involved in the packaging of genetic information to regulate gene expression -- controlling cell fate in some stem cells, for example.
In the new work, the researchers found that lncRNAs can easily locate and bind to nearby genes. Then, with the help of proteins that reorganize genetic material, the molecules can pull in additional related genes and move to new sites, building up a "compartment" where many genes can be regulated all at once.
"You can now think about these lncRNAs as a way to bring together genes that are needed for common function into a single physical region and then regulate them as a set, rather than individually," Guttman says. "They are not just scaffolds of proteins but actual organizers of genes."
The new work focused on Xist, a lncRNA molecule that has long been known to be involved in turning off one of the two X chromosomes in female mammals (something that must happen in order for the genome to function properly). Quite a bit has been uncovered about how Xist achieves this silencing act. We know, for example, that it binds to the X chromosome; that it recruits a chromatin regulator to help it organize and modify the structure of the chromatin; and that certain distinct regions of the RNA are necessary to do all of this work. Despite this knowledge, it had been unknown at the molecular level how Xist actually finds its targets and spreads across the X chromosome.
To gain insight into that process, Guttman and his colleagues at the Broad Institute developed a method called RNA Antisense Purification (RAP) that, by sequencing DNA at high resolution, gave them a way to map out exactly where different lncRNAs go. Then, working with Plath's group at UCLA, they used their method to watch in high resolution as Xist was activated in undifferentiated mouse stem cells, and the process of X-chromosome silencing proceeded.
"That's where this got really surprising," Guttman says. "It wasn't that somehow this RNA just went everywhere, searching for its target. There was some method to its madness. It was clear that this RNA actually used its positional information to find things that were very far away from it in genome space, but all of those genes that it went to were really close to it in three-dimensional space."
Before Xist is activated, X-chromosome genes are all spread out. But, the researchers found, once Xist is turned on, it quickly pulls in genes, forming a cloud. "And it's not just that the expression levels of Xist get higher and higher," Guttman says. "It's that Xist brings in all of these related genes into a physical nuclear structure. All of these genes then occupy a single territory."
The researchers found that a specific region of Xist, known as the A-repeat domain, which is vital for the lncRNA's ability to silence X-chromosome genes, is also needed to pull in all the genes that it needs to silence. When the researchers deleted the domain, the X chromosome did not become inactivated, because the silencing compartment did not form.
One of the most exciting aspects of the new research, Guttman says, is that it has implications beyond just explaining how Xist works. "In our paper, we talk a lot about Xist, but these results are likely to be general to other lncRNAs," he says. He adds that the work provides one of the first direct pieces of evidence to explain what makes lncRNAs special. "LncRNAs, unlike proteins, really can use their genomic information -- their context, their location -- to act, to bring together targets," he says. "That makes them quite unique." 

How the Brain Creates the 'Buzz' That Helps Ideas Spread

 How do ideas spread? What messages will go viral on social media, and can this be predicted?
UCLA psychologists have taken a significant step toward answering these questions, identifying for the first time the brain regions associated with the successful spread of ideas, often called "buzz."
The research has a broad range of implications, the study authors say, and could lead to more effective public health campaigns, more persuasive advertisements and better ways for teachers to communicate with students.
"Our study suggests that people are regularly attuned to how the things they're seeing will be useful and interesting, not just to themselves but to other people," said the study's senior author, Matthew Lieberman, a UCLA professor of psychology and of psychiatry and biobehavioral sciences and author of the forthcoming book "Social: Why Our Brains Are Wired to Connect." "We always seem to be on the lookout for who else will find this helpful, amusing or interesting, and our brain data are showing evidence of that. At the first encounter with information, people are already using the brain network involved in thinking about how this can be interesting to other people. We're wired to want to share information with other people. I think that is a profound statement about the social nature of our minds."
The study findings are published in the online edition of the journal Psychological Science, with print publication to follow later this summer.
"Before this study, we didn't know what brain regions were associated with ideas that become contagious, and we didn't know what regions were associated with being an effective communicator of ideas," said lead author Emily Falk, who conducted the research as a UCLA doctoral student in Lieberman's lab and is currently a faculty member at the University of Pennsylvania's Annenberg School for Communication. "Now we have mapped the brain regions associated with ideas that are likely to be contagious and are associated with being a good 'idea salesperson.' In the future, we would like to be able to use these brain maps to forecast what ideas are likely to be successful and who is likely to be effective at spreading them."
In the first part of the study, 19 UCLA students (average age 21) underwent functional magnetic resonance imaging (fMRI) brain scans at UCLA's Ahmanson-Lovelace Brain Mapping Center as they saw and heard information about 24 potential television pilot ideas. Among the fictitious pilots -- which were presented by a separate group of students -- were a show about former beauty-queen mothers who want their daughters to follow in their footsteps; a Spanish soap opera about a young woman and her relationships; a reality show in which contestants travel to countries with harsh environments; a program about teenage vampires and werewolves; and a show about best friends and rivals in a crime family.
The students exposed to these TV pilot ideas were asked to envision themselves as television studio interns who would decide whether or not they would recommend each idea to their "producers." These students made videotaped assessments of each pilot.
Another group of 79 UCLA undergraduates (average age 21) was asked to act as the "producers." These students watched the interns' videotaped assessments of the pilots and then made their own ratings of the pilot ideas based on those assessments.
Lieberman and Falk wanted to learn which brain regions were activated when the interns were first exposed to information they would later pass on to others.
"We're constantly being exposed to information on Facebook, Twitter and so on," said Lieberman. "Some of it we pass on, and a lot of it we don't. Is there something that happens in the moment we first see it -- maybe before we even realize we might pass it on -- that is different for those things that we will pass on successfully versus those that we won't?"
It turns out, there is. The psychologists found that the interns who were especially good at persuading the producers showed significantly more activation in a brain region known as the temporoparietal junction, or TPJ, at the time they were first exposed to the pilot ideas they would later recommend. They had more activation in this region than the interns who were less persuasive and more activation than they themselves had when exposed to pilot ideas they didn't like. The psychologists call this the "salesperson effect."
"It was the only region in the brain that showed this effect," Lieberman said. One might have thought brain regions associated with memory would show more activation, but that was not the case, he said.
"We wanted to explore what differentiates ideas that bomb from ideas that go viral," Falk said. "We found that increased activity in the TPJ was associated with an increased ability to convince others to get on board with their favorite ideas. Nobody had looked before at which brain regions are associated with the successful spread of ideas. You might expect people to be most enthusiastic and opinionated about ideas that they themselves are excited about, but our research suggests that's not the whole story. Thinking about what appeals to others may be even more important."
The TPJ, located on the outer surface of the brain, is part of what is known as the brain's "mentalizing network," which is involved in thinking about what other people think and feel. The network also includes the dorsomedial prefrontal cortex, located in the middle of the brain.
"When we read fiction or watch a movie, we're entering the minds of the characters -- that's mentalizing," Lieberman said. "As soon as you hear a good joke, you think, 'Who can I tell this to and who can't I tell?' Making this judgment will activate these two brain regions. If we're playing poker and I'm trying to figure out if you're bluffing, that's going to invoke this network. And when I see someone on Capitol Hill testifying and I'm thinking whether they are lying or telling the truth, that's going to invoke these two brain regions.
"Good ideas turn on the mentalizing system," he said. "They make us want to tell other people."
The interns who showed more activity in their mentalizing system when they saw the pilots they intended to recommend were then more successful in convincing the producers to also recommend those pilots, the psychologists found.
"As I'm looking at an idea, I might be thinking about what other people are likely to value, and that might make me a better idea salesperson later," Falk said.
By further studying the neural activity in these brain regions to see what information and ideas activate these regions more, psychologists potentially could predict which advertisements are most likely to spread and go viral and which will be most effective, Lieberman and Falk said.
Such knowledge could also benefit public health campaigns aimed at everything from reducing risky behaviors among teenagers to combating cancer, smoking and obesity.
"The explosion of new communication technologies, combined with novel analytic tools, promises to dramatically expand our understanding of how ideas spread," Falk said. "We're laying basic science foundations to addressimportant public health questions that are difficult to answer otherwise -- about what makes campaigns successful and how we can improve their impact."
Just as we may like particular radio DJs who play music we enjoy, the Internet has led us to act as "information DJs" who share things that we think will be of interest to people in our networks, Lieberman said.
"What is new about our study is the finding that the mentalizing network is involved when I read something and decide who else might be interested in it," he said. "This is similar to what an advertiser has to do. It's not enough to have a product that people should like."

Farming Started in Several Places at Once: Origins of Agriculture in the Fertile Crescent

 For decades archaeologists have been searching for the origins of agriculture. Their findings indicated that early plant domestication took place in the western and northern Fertile Crescent. In the July 5 edition of the journal Science, researchers from the University of Tübingen, the Tübingen Senckenberg Center for Human Evolution and Paleoenvironment, and the Iranian Center for Archaeological Research demonstrate that the foothills of the Zagros Mountains of Iran in the eastern Fertile Crescent also served as a key center for early domestication.
Archaeologists Nicholas Conard and Mohsen Zeidi from Tübingen led excavations at the aceramic tell site of Chogha Golan in 2009 and 2010. They documented an 8 meter thick sequence of exclusively aceramic Neolithic deposits dating from 11,700 to 9,800 years ago. These excavations produced a wealth of architectural remains, stone tools, depictions of humans and animals, bone tools, animal bones, and -- perhaps most importantly -- the richest deposits of charred plant remains ever recovered from the Pre-Pottery Neolithic of the Near East.
Simone Riehl, head of the archaeobotany laboratory in Tübingen, analyzed over 30,000 plant remains of 75 taxa from Chogha Golan, spanning a period of more than 2,000 years. Her results show that the origins of agriculture in the Near East can be attributed to multiple centers rather than a single core area and that the eastern Fertile Crescent played a key role in the process of domestication.
Many pre-pottery Neolithic sites preserve comparatively short sequences of occupation, making the long sequence from Chogha Golan particularly valuable for reconstructing the development of new patterns of human subsistence. The most numerous species from Chogha Golan are wild barley, goat-grass and lentil, which are all wild ancestors of modern crops. These and many other species are present in large numbers starting in the lowest deposits, horizon XI, dating to the end of the last Ice Age roughly 11,700 years ago. In horizon II, dating to 9,800 years ago, domesticated emmer wheat appears.
The plant remains from Chogha Golan represent a unique, long-term record of cultivation of wild plant species in the eastern Fertile Crescent. Over a period of two millennia the economy of the site shifted toward the domesticated species that formed the economic basis for the rise of village life and subsequent civilizations in the Near East. Plants including multiple forms of wheat, barley and lentils, together with domestic animals, later accompanied farmers as they spread across western Eurasia, gradually replacing the indigenous hunter-gatherer societies. Many of the plants that were domesticated in the Fertile Crescent form the economic basis for the world population today. 

Live Fast, Die Young: Long-Lived Mice Are Less Active, Biologists Find

Female mice with a high life expectancy are less active and less explorative. They also eat less than their fellow females with a lower life expectancy. Behavioral biologists from the University of Zurich reveal that there is a correlation between longevity and personality for female house mice, and a minimum amount of boldness is necessary for them to survive.
Risky behavior can lead to premature death -- in humans. Anna Lindholm and her doctoral student Yannick Auclair investigated whether this also applies to animals by studying the behavior of 82 house mice. They recorded boldness, activity level, exploration tendency and energy intake of female and male house mice with two different allelic variants on chromosome 17, thereby testing predictions of "life-history theory" on how individuals invest optimally in growth and reproduction. According to this theory, individuals with a greater life expectancy will express reactive personality traits and will be shy, less active and less explorative than individuals with a lower survival expectation.
Is personality reflected in life expectancy?
Female mice of the t haplotype, one of the two genetic variants on chromosome 17, are known to live longer. The t haplotype in house mice is a naturally occurring selfish genetic element that is transmitted to 90 percent of the offspring by t carrying males. Embryos that inherit a t copy from both parents, however, die before birth. With his experiment, Yannick Auclair wanted to investigate whether there was a correlation between this selfish genetic element and the personality of the mice.
Live fast, die young -- even in mice
The researchers reveal that the longer-lived t haplotype females are less active than the shorter-lived non-carrier females. They also consume less food, are less explorative and thus express reactive personality traits favouring cautiousness and energy conservation, as predicted by theory. "For the first time, we report personality traits associated with a selfish genetic element that influences life expectancy," says Auclair. According to the research team, female mice with a longer life expectancy follow the strategy "live slow, die old" whereas those with a shorter life expectancy live according to the principle "live fast, die young."
In contrast to the predictions of the "life-history" theory, however, there are no extremely cautious individuals among t haplotype female mice. The researchers suppose that selection does not favor mice that are too cautious. "In order for a mouse to find food and be able to reproduce, clearly a minimum level of boldness is required," explains Auclair. "In such a situation, large variation will not develop."

Single Men, Smokers at Higher Risk for Oral Human Papillomavirus Infection

Smokers and single men are more likely to acquire cancer-causing oral human papillomavirus (HPV), according to new results from the HPV Infection in Men (HIM) Study. Researchers from Moffitt Cancer Center, the National Cancer Institute, Mexico and Brazil also report that newly acquired oral HPV infections in healthy men are rare and when present, usually resolve within one year.
The study results appeared in the July issue of The Lancet.
HPV infection is known to cause virtually all cervical cancers, most anal cancers and some genital cancers. It has recently been established as a cause of the majority of oropharyngeal cancers, a malignancy of the tonsils and base of tongue.
HPV-related oropharyngeal cancer is rare, but rates have been increasing rapidly, especially among men. To determine the pattern of HPV acquisition and persistence in the oral region, researchers evaluated the HPV infection status in oral mouthwash samples collected as part of the HIM Study, which was originally designed to evaluate the natural history of genital HPV infections in healthy men.
"Some types of HPV, such as HPV16, are known to cause cancer at multiple places in the body, including the oral cavity," said study lead author Christine M. Pierce Campbell, Ph.D., M.P.H., a postdoctoral fellow in Moffitt's Center for Infection Research in Cancer. "We know that HPV infection is associated with oropharyngeal cancer, but we don't know how the virus progresses from initial infection to cancer in the oral cavity. One aspect of the HIM Study is to gather data to help us understand the natural history of these infections."
During the first 12 months, nearly 4.5 percent of men in the study acquired an oral HPV infection. Less than 1 percent of men in the study had an HPV16 infection, the most commonly acquired type, and less than 2 percent had a cancer-causing type of oral HPV.
Their findings are consistent with previous studies showing a low prevalence of oral HPV cancers. However, this study shows that acquisition of cancer-causing oral HPV appears to be greater among smokers and unmarried men.
"Additional HPV natural history studies are needed to better inform the development of infection-related prevention efforts," said Anna R. Giuliano, Ph.D., director of Moffitt's Center for Infection Research in Cancer. "HPV16 is associated with the rapid increase in incidence of oropharyngeal cancer, most noticeably in the United States, Sweden and Australia, where it is responsible for more than 50 percent of cases. Unfortunately, there are no proven methods to prevent or detect these cancers at an early stage."
The researchers note that persistent oral HPV16 infection may be a precursor to oropharyngeal cancer, similar to how persistent cervical HPV infection leads to cervical pre-cancer.

Scientists Help Explain Visual System's Remarkable Ability to Recognize Complex Objects

How is it possible for a human eye to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?
It seems easy to us -- the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex that no one has been able to write computer code that translates these distorted letters the same way that the brain's neural networks can. That's why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.
Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is to the brain.
The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.
Their deep and detailed research -- involving recordings from hundreds of neurons -- may also have future clinical and practical implications, say the studies' senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.
"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways -- -such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain -- -not the eye, but the cortex -- -- processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."
Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.
"The reason that machines are limited in their capacity to recognize things in the world around us is that we don't really understand how the brain does it as well as it does," he says.
The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.

Integrating parts into wholes

In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.
"Neurons in the visual system are sensitive to regions of space -- -- they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows -- -known as receptive fields -- -are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space -- -they respond to tiny, simple elements of an object such as edge oriented in space, or a little patch of color."
Neurons in V4 have larger receptive fields and can also compute more complex shapes, such as contours. They accomplish this by integrating inputs from earlier visual areas in the cortex -- areas nearer the retina, which provides the input to the visual system. Those earlier areas have small receptive fields and pass their information on for the higher-level processing that allows us to see complex images, such as faces, he says.
Both new studies investigated the issue of translation invariance -- the ability of a neuron to recognize the same stimulus no matter where it happens to fall within its receptive field.
The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.
Dogma in the field is that V4 neurons all exhibit translation invariance.
"The accepted understanding is that individuals neurons are tuned to recognize the same stimulus no matter where it was in their receptive field," says Sharpee.
For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance -- the ability to recognize any stimulus, no matter where it is in space -- increases as an image moves up through the visual processing hierarchy.
"But what both studies show is that there is more to the story," she says. "There is a trade off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."

A deeper mystery to be solved

The Salk researchers found that neurons that respond to more complicated shapes -- like the curve in a 5 or in a rock -- demonstrated decreased translation invariance. "They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning," Reynolds says. "Cells that prefer that complex shape don't yet have the capacity to recognize that shape everywhere."
On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. "They don't care where the stimulus they are tuned to is, as long as it is within their receptive field," Sharpee says.
"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object's image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.
"It is important that results from the two studies are quite compatible with one another, that what we find studying just lines and curves in one first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.
"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."
Minjoon Kouh, a former postdoctoral fellow at Salk, participated in the PNAS study. Salk postdoctoral researcher Anirvan Nandy and senior staff scientist Jude Mitchell, of the Salk Systems Neurobiology Laboratory, were co-authors of the Neuron paper.
Both studies were funded by grants from the National Institutes of Health (R01EY019493), the McKnight Scholarship and the Ray Thomas Edwards and W. M. Keck Foundations. In addition, the PNAS study received a grant from the Searle Funds. The Neuron study was additionally funded by grants from the Alfred P. Sloan Foundation, the National Institutes of Health (EY0113802), the Gatsby Charitable Foundation and the Schwartz Foundation, and a Pioneer Fund postdoctoral fellowship.

Type 2 Diabetes Gene Predisposes Children to Obesity, Study Finds

 Pediatric researchers have found that a gene already implicated in the development of type 2 diabetes in adults also raises the risk of being overweight during childhood. The finding sheds light on the genetic origins of diabetes and may present an avenue for developing drugs to counteract the disease, which has been on the upswing in childhood and adolescence.
Researchers from The Children's Hospital of Philadelphia and the University of Pennsylvania School of Medicine published the study Nov. 23 in the online version of the journal Diabetes.
"It has been a bit of a mystery to scientists how or even if these adult diabetes genes function during childhood," said study leader Struan F.A. Grant, Ph.D., a researcher and associate director of the Center for Applied Genomics of The Children's Hospital of Philadelphia. "This finding suggests that there may be genetic activity during childhood that lays the foundation for the later development of type 2 diabetes."
Type 2 diabetes occurs either when the pancreas produces too little insulin, or when the body cannot efficiently use the insulin that is produced because the cells have become resistant. Formerly called adult-onset diabetes and still most common in adults, type 2 diabetes has been increasing sharply among children and teenagers.
Grant and study co-leader Hakon Hakonarson, M.D., Ph.D., director of the Center for Applied Genomics at Children's Hospital, investigated 20 gene variants, known as single nucleotide polymorphisms (SNPs), previously reported to be associated with type 2 diabetes. The researchers drew on a cohort of nearly 7,200 Caucasian children, aged 2 to 18 years, in an ongoing genome-wide association study of childhood obesity at Children's Hospital. Dividing the cohort randomly in half allowed the team to follow their discovery study with a replication study.
Researchers continue to unravel the complicated role of different diabetes-related genes in influencing body weight toward both lower and higher ends of the scale. The risk of developing type 2 diabetes in adulthood is often influenced by factors in the first year of life, including lower birth weight, as well as by higher body mass index (BMI) during childhood. Obesity is a well-known risk factor for type 2 diabetes.
A previous study earlier this year by the same study team found that another type 2 diabetes gene, CDKAL1, affects fetal growth and increases the likelihood that a baby will be underweight at birth.
The current study found that the gene HHEX-IDE does not affect birth weight, but makes it more likely that a child will become obese during childhood. The gene does not appear to predispose to obesity in adults, although by contributing to childhood obesity, it may set the stage for type 2 diabetes in adulthood.
Grant cautioned that HHEX-IDE accounts for only a small proportion of the genetic contribution to the risk of type 2 diabetes, so many other gene variants remain to be discovered. However, he adds, HHEX-IDE may represent an important underpinning of the disease. "Previously we thought that this gene affects insulin production during adulthood, but we now see that it may play an early role in influencing insulin resistance through its impact on body size during childhood," said Grant. "One implication is that if we can develop medicines to target specific biological pathways in childhood, we may be able to prevent diabetes from developing later in life."

Early Childhood Respiratory Infections May Be Potential Risk Factor for Type 1 Diabetes

 Respiratory infections in early childhood may be a potential risk factor for developing type 1 diabetes mellitus (T1D), according to a study published by JAMA Pediatrics, a JAMA Network publication.
The incidence of T1D is increasing worldwide, although its etiology is not well understood. Infections have been discussed as an important environmental determinant, according to the study background.
Andreas Beyerlein, Ph.D., from the Institute of Diabetes Research, Munich, Germany, and colleagues sought to determine whether early, short-term or cumulative exposures to episodes of infection and fever during the first three years of life were associated with the initiation of persistent islet autoimmunity (development of antibodies against the islet cells of the pancreas) in children at increased risk for T1D.
"Our study identified respiratory infections in early childhood, especially in the first year of life, as a risk factor for the development of T1D. We also found some evidence for short-term effects of infectious events on development of autoimmunity, while cumulative exposure alone seemed not to be causative," the authors note.
The study included 148 children at high risk for T1D with 1,245 documented infectious events during 90,750 person-days during their first three years of life.
According to the results, an increased hazard ratio (HR) of islet autoantibody seroconversion was associated with respiratory infections during the first six months of life (HR=2.27) and ages 6 to almost 12 months (HR=1.32). During the second year of life, no meaningful associations were detected for any infectious category. A higher number of respiratory infections in the six months prior to islet autoantibody seroconversion was also associated with an increased HR (1.42).
"Potential prevention strategies against T1D derived from studies like this might address early vaccination against specific infectious agents. Unfortunately, we were not able to identify a single infectious agent that might be instrumental in the development of T1D. Our results point to a potential role of infections in the upper respiratory tract and specifically of acute rhinopharyngitis (inflammation of the mucous membranes)," the authors conclude.


Curious Mix of Precision and Brawn in a Pouched Super-Predator

 A bizarre, pouched super-predator that terrorised South America millions of years ago had huge sabre-like teeth but its bite was weaker than that of a domestic cat, new research shows.
Australian and American marsupials are among the closest living relatives of the extinct Thylacosmilus atrox, which had tooth roots extending rearwards almost into its small braincase.
"Thylacosmilus looked and behaved like nothing alive today," says UNSW palaeontologist, Dr Stephen Wroe, leader of the research team.
"To achieve a kill the animal must have secured and immobilised large prey using its extremely powerful forearms, before inserting the sabre-teeth into the windpipe or major arteries of the neck -- a mix of brute force and delicate precision."
The iconic North American sabre-toothed 'tiger', Smilodon fatalis, is often regarded as the archetypal mammalian super-predator.
However, Smilodon -- a true cat -- was just the end point in one of at least five independent 'experiments' in sabre-tooth evolution through the Age of Mammals, which spanned some 65 million years.
Thylacosmilus atrox is the best preserved species of one of these evolutionary lines -- pouched sabre-tooths that terrorised South America until around 3.5 million years ago.
For its size, its huge canine teeth were larger than those of any other known sabre-tooth.
Smilodon's killing behaviour has long attracted controversy, but scientists now mostly agree that powerful neck muscles, as well as jaw muscles, played an important role in driving the sabre-teeth into the necks of large prey.
Little was known about the predatory behaviour in the pouched Thylacosmilus.
To shed light on this super-predator mystery, Dr Wroe's team of Australian and US scientists constructed and compared sophisticated computer models of Smilodon and Thylacosmilus, as well as a living conical-toothed cat, the leopard.
These models were digitally 'crash-tested' in simulations of biting and killing behaviour. The results are published in the journal PLoS ONE.
"We found that both sabre-tooth species were similar in possessing weak jaw-muscle-driven bites compared to the leopard, but the mechanical performance of the sabre-tooths skulls showed that they were both well-adapted to resist forces generated by very powerful neck muscles," says Dr Wroe.
"But compared to the placental Smilodon, Thylacosmilus was even more extreme."
"Frankly, the jaw muscles of Thylacosmilus were embarrassing. With its jaws wide open this 80-100 kg 'super-predator' had a bite less powerful than a domestic cat. On the other hand -- its skull easily outperformed that of the placental Smilodon in response to strong forces from hypothetical neck muscles."
"Bottom line is that the huge sabres of Thylacosmilus were driven home by the neck muscles alone and -- because the sabre-teeth were actually quite fragile -- this must have been achieved with surprising precision."
"For Thylacosmilus -- and other sabre-tooths -- it was all about a quick kill."
"Big prey are dangerous -- even to super-predators -- and the faster the kill the less likely it is that the predator will get hurt -- or for that matter attract unwanted attention from other predators."
"It may not have been the smartest of mammalian super-predators -- but in terms of specialisation -- Thylacosmilus took the already extreme sabre-tooth lifestyle to a whole new level," says Dr Wroe.


DNA Particles in the Blood May Help Speed Detection of Coronary Artery Disease

 DNA fragments in your blood may someday help doctors quickly learn if chest pain means you have narrowed heart arteries, according to a new study published in the American Heart Association journal Arteriosclerosis, Thrombosis, and Vascular Biology.
The study involved 282 patients, ages 34 to 83, who reported chest pain and were suspected of having coronary artery disease. Researchers used computed tomography imaging to look for hardened, or calcified, buildup in the blood vessels that supply the heart. Blood samples also were tested for bits of genetic material. Release of small DNA particles in the blood occurs during chronic inflammatory conditions such as coronary artery disease.
Higher levels of DNA particles in the blood were linked to high levels of coronary artery calcium deposits. These particles are potential markers of disease and may eventually help identify patients with severely narrowed coronary arteries, predict how many coronary vessels are affected, and even indicate whether a patient is likely to suffer a serious heart problem or heart-related death.
"If those markers are proven to be effective -- specific and sensitive -- they may improve medical care in terms of identifying patients at risk sooner," said Julian Borissoff, M.D., Ph.D., lead author of the study and postdoctoral research fellow at Boston Children's Hospital and Harvard Medical School. "And so the patients may go on treatment sooner."
The scientists noted that larger studies, following more patients for longer periods, are needed to see how precisely these markers might identify patients at risk for developing coronary artery disease. Almost half of the patients studied were followed for a year and a half or longer.
If the markers do pan out, they have the potential to help doctors efficiently pinpoint which patients with chest pain are likely to have coronary artery disease rather than some other problem causing the discomfort, Borissoff said. Currently, a time-consuming and costly battery of tests is used to determine whether the heart is at risk, he said.
It is plausible to think that the DNA particles themselves might contribute to the progression of atherosclerosis and the risk of dangerous blood vessel blockages, the study's authors wrote. "The more the ongoing cell death, which is normal with inflammation, the more DNA enters the circulation and more plaque builds up," Borissoff said. "Cells get damaged, and the products released from the damaged cells can cause even more damage and inflammatory responses."
The researchers are testing the DNA particle components further, he said, to see which ones are most sensitive and to understand more about how their levels might vary -- for instance, during different stages of progression of atherosclerosis, or during a treadmill test, or after treatment for a heart attack.
Co-authors are Ivo A. Joosen, M.D., Mathijs O. Versteylen, M.D.; Alexander Brill, M.D., Ph.D.; Tobias A. Fuchs, Ph.D.; Alexander S. Savchenko, M.D., Ph.D.; Maureen Gallant, B.S.; Kimberly Martinod, B.A., B.S.; Hugo ten Cate, M.D., Ph.D.; Leonard Hofstra, M.D., Ph.D.; Harry J. Crijns, M.D., Ph.D.; Denisa D. Wagner, Ph.D.; Bas L.J.H. Kietselaer, M.D., Ph.D. 


Link Between Fear and Sound Perception Discovered

Anyone who's ever heard a Beethoven sonata or a Beatles song knows how powerfully sound can affect our emotions. But it can work the other way as well -- our emotions can actually affect how we hear and process sound. When certain types of sounds become associated in our brains with strong emotions, hearing similar sounds can evoke those same feelings, even far removed from their original context. It's a phenomenon commonly seen in combat veterans suffering from post-traumatic stress disorder (PTSD), in whom harrowing memories of the battlefield can be triggered by something as common as the sound of thunder. But the brain mechanisms responsible for creating those troubling associations remain unknown. Now, a pair of researchers from the Perelman School of Medicine at the University of Pennsylvania has discovered how fear can actually increase or decrease the ability to discriminate among sounds depending on context, providing new insight into the distorted perceptions of victims of PTSD.
Their study is published in Nature Neuroscience.
"Emotions are closely linked to perception and very often our emotional response really helps us deal with reality," says senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at Penn. "For example, a fear response helps you escape potentially dangerous situations and react quickly. But there are also situations where things can go wrong in the way the fear response develops. That's what happens in anxiety and also in PTSD -- the emotional response to the events is generalized to the point where the fear response starts getting developed to a very broad range of stimuli."
Geffen and the first author of the study, Mark Aizenberg, PhD, a postdoctoral researcher in her laboratory, used emotional conditioning in mice to investigate how hearing acuity (the ability to distinguish between tones of different frequencies) can change following a traumatic event, known as emotional learning. In these experiments, which are based on classical (Pavlovian) conditioning, animals learn to distinguish between potentially dangerous and safe sounds -- called "emotional discrimination learning." This type of conditioning tends to result in relatively poor learning, but Aizenberg and Geffen designed a series of learning tasks intended to create progressively greater emotional discrimination in the mice, varying the difficulty of the task. What really interested them was how different levels of emotional discrimination would affect hearing acuity -- in other words, how emotional responses affect perception and discrimination of sounds. This study established the link between emotions and perception of the world -- something that has not been understood before.
The researchers found that, as expected, fine emotional learning tasks produced greater learning specificity than tests in which the tones were farther apart in frequency. As Geffen explains, "The animals presented with sounds that were very far apart generalize the fear that they developed to the danger tone over a whole range of frequencies, whereas the animals presented with the two sounds that were very similar exhibited specialization of their emotional response. Following the fine conditioning task, they figured out that it's a very narrow range of pitches that are potentially dangerous."
When pitch discrimination abilities were measured in the animals, the mice with more specific responses displayed much finer auditory acuity than the mice who were frightened by a broader range of frequencies. "There was a relationship between how much their emotional response generalized and how well they could tell different tones apart," says Geffen. "In the animals that specialized their emotional response, pitch discrimination actually became sharper. They could discriminate two tones that they previously could not tell apart."
Another interesting finding of this study is that the effects of emotional learning on hearing perception were mediated by a specific brain region, the auditory cortex, long known as an important area for auditory plasticity. Surprisingly, Aizenberg and Geffen found that the auditory cortex did not play a role in the emotional learning itself; the specificity of emotional learning is likely controlled by the amygdala and sub-cortical auditory areas. "We know the auditory cortex is involved, we know that the emotional response is important so the amygdala is involved, but how do the amygdala and cortex interact together?" says Geffen. "Our hypothesis is that the amygdala and cortex are modifying subcortical auditory processing areas. The sensory cortex is responsible for the changes in frequency discrimination, but it's not necessary for developing specialized or generalized emotional responses. So it's kind of a puzzle."
Solving that puzzle promises new insight into the causes and possible treatment of PTSD, and the question of why some individuals develop it and others subjected to the same events do not. "We think there's a strong link between mechanisms that control emotional learning, including fear generalization, and the brain mechanisms responsible for PTSD, where generalization of fear is abnormal," Geffen notes. Future research will focus on defining and studying that link.

Has Evolution Given Humans Unique Brain Structures?

Humans have at least two functional networks in their cerebral cortex not found in rhesus monkeys. This means that new brain networks were likely added in the course of evolution from primate ancestor to human.
These findings, based on an analysis of functional brain scans, were published in a study by neurophysiologist Wim Vanduffel (KU Leuven and Harvard Medical School) in collaboration with a team of Italian and American researchers.
Our ancestors evolutionarily split from those of rhesus monkeys about 25 million years ago. Since then, brain areas have been added, have disappeared or have changed in function. This raises the question, 'Has evolution given humans unique brain structures?'. Scientists have entertained the idea before but conclusive evidence was lacking. By combining different research methods, we now have a first piece of evidence that could prove that humans have unique cortical brain networks.
Professor Vanduffel explains: "We did functional brain scans in humans and rhesus monkeys at rest and while watching a movie to compare both the place and the function of cortical brain networks. Even at rest, the brain is very active. Different brain areas that are active simultaneously during rest form so-called 'resting state' networks. For the most part, these resting state networks in humans and monkeys are surprisingly similar, but we found two networks unique to humans and one unique network in the monkey."
"When watching a movie, the cortex processes an enormous amount of visual and auditory information. The human-specific resting state networks react to this stimulation in a totally different way than any part of the monkey brain. This means that they also have a different function than any of the resting state networks found in the monkey. In other words, brain structures that are unique in humans are anatomically absent in the monkey and there no other brain structures in the monkey that have an analogous function. Our unique brain areas are primarily located high at the back and at the front of the cortex and are probably related to specific human cognitive abilities, such as human-specific intelligence."
The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to a particular task, thus allowing activity to be tracked.

Social Networks Shape Monkey 'Culture' Too

 Of course Twitter and Facebook are all the rage, but the power of social networks didn't start just in the digital age. A new study on squirrel monkeys reported in Current Biology, a Cell Press publication, on June 27 finds that monkeys with the strongest social networks catch on fastest to the latest in foraging crazes. They are monkey trendsters.
The researchers, led by Andrew Whiten of the University of St Andrews, made the discovery by combining social network analysis with more traditional social learning experiments. By bringing the two together, they offer what they say is the first demonstration of how social networks may shape the spread of new cultural techniques. It's an approach they hope to see adopted in studies of other social animals.
"Our study shows that innovations do not just spread randomly in primate groups but, as in humans, are shaped by the monkeys' social networks," Whiten said.
Whiten, along with Nicolas Claidière, Emily Messer, and William Hoppitt, traced the monkeys' social networks by recording which monkeys spent time together in the vicinity of "artificial fruits" that could be manipulated to extract tempting food rewards. Sophisticated statistical analysis of those data revealed the monkeys' social networks, with some individuals situated at the heart of the network and others more on the outside. The researchers rated each of the monkeys on their "centrality," or social status in the network, with the highest ratings going to monkeys with the most connections to other well-connected individuals.
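The centrality measure described here, in which a monkey scores highly for being connected to other well-connected individuals, is the idea behind eigenvector centrality. The sketch below is a minimal illustration using the networkx library and an invented toy network; it is not the authors' actual analysis.

```python
import networkx as nx

# Toy co-presence network: an edge links two monkeys that were observed
# together near the "artificial fruit" boxes. Names are invented.
G = nx.Graph()
G.add_edges_from([
    ("alpha", "beta"), ("alpha", "gamma"), ("beta", "gamma"),
    ("gamma", "delta"), ("delta", "epsilon"),  # epsilon sits on the periphery
])

# Eigenvector centrality rewards connections to other well-connected nodes,
# matching the "centrality" rating described in the article.
centrality = nx.eigenvector_centrality(G)
for monkey, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{monkey}: {score:.2f}")
```

In the study, such scores were derived from observed co-presence around the artificial fruits and then related to how quickly each monkey adopted the new foraging technique.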
The artificial fruits could be opened in two different ways, either by lifting a little hatch on the front or by pivoting it from side to side. The researchers trained the alpha male in one group of monkeys on the lift technique, while the leader in another group was trained on the pivot method. They then sent them back to their groups and watched to see how those two methods would catch on in the two groups.
More central monkeys with the strongest social ties picked up the new methods more successfully, the researchers found. They were also more likely than peripheral monkeys to learn the method demonstrated by their trained alpha leaders.
Whiten said the squirrel monkeys are a good species for these studies because of their natural inquisitiveness. They also lead rather intense social lives.
The researchers now hope to extend their studies to focus on the squirrel monkeys in different contexts -- while foraging, moving, and resting, for example -- and how those contexts might influence the spread of innovations. They suspect they might even find evidence for different monkey subcultures.
"If there are subgroups within the network, then what appear to be mixed behaviors at the group level could in fact be different behaviors for different subgroups -- what could be called subcultures," Whiten said.
Current Biology, "Diffusion dynamics of socially learned foraging techniques in squirrel monkeys."

Study Links Cardiac Hormone-Related Inflammatory Pathway With Tumor Growth

A cardiac hormone signaling receptor abundantly expressed both in inflamed tissues and cancers appears to recruit stem cells that form the blood vessels needed to feed tumor growth, reports a new study by scientists at the University of South Florida Nanomedicine Research Center.
The research may lead to the development of new drugs or delivery systems to treat cancer by blocking this receptor, known as natriuretic peptide receptor A (NPRA).
The findings appeared online recently in the journal Stem Cells.
"Our results show that NRPA signaling by cancer cells produces some molecular factors that attract stem cells, which in turn form blood vessels that provide oxygen and nutrients to the tumor," said the study's principal investigator Subhra Mohapatra, PhD, associate professor in the Department of Molecular Medicine. "We showed that if the NPRA signal is blocked, so is the angiogenesis and, if the tumor's blood supply is cut off it will die."
Using both cultured cells and a mouse model, Dr. Mohapatra and her team modeled interactions to study the association between gene mutations and exposure to an inflammatory tissue microenvironment.
The researchers demonstrated that the cardiac hormone receptor NPRA played a key role in the link between inflammation and the development of cancer-causing tumors. Tumors failed to develop in mice lacking NPRA signaling. However, co-implanting tumor cells with mesenchymal stem cells, which can turn into cells lining the inner walls of blood vessels, promoted the sprouting of blood vessels (angiogenesis) needed to promote tumor growth in NPRA-deficient mice, the researchers found. Furthermore, they showed that NPRA signaling appears to regulate key inflammatory cytokines involved in attracting these stem cells to tumor cells.
Dr. Mohapatra's laboratory is testing an innovative drug delivery system using special nanoparticles to specifically target cancer cells like a guided missile, while sparing healthy cells. The treatment is intended to deliver a package of molecules that interferes with the cardiac hormone receptor's ability to signal.
Dr. Mohapatra collaborated with Shyam Mohapatra, PhD, and Srinivas Nagaraj, PhD, both faculty members in the Nanomedicine Research Center and Department of Internal Medicine, on genetic and immunological aspects of the study.
The study was supported by the National Institutes of Health and a Florida Biomedical Research Grant.

NASA's IRIS Mission Readies for a New Challenge

  
 NASA is getting ready to launch a new mission to observe a largely unexplored region of the sun's lower atmosphere, the region that powers its dynamic million-degree outer atmosphere and drives the solar wind.
In late June 2013, the Interface Region Imaging Spectrograph, or IRIS, will launch from Vandenberg Air Force Base, Calif. IRIS will advance our understanding of the interface region, a region in the lower atmosphere of the sun where most of the sun's ultraviolet emissions are generated. Such emissions impact the near-Earth space environment and Earth's climate.
The interface region lies between the sun's 11,000-degree Fahrenheit, white-hot, visible surface, the photosphere, and the much hotter multi-million-degree upper corona. Interactions between the violently moving plasma and the sun's magnetic field in this area may be the source of the energy that heats the corona to some hundreds and occasionally thousands of times hotter than the sun's surface.
IRIS will orbit Earth and use its ultraviolet telescope to obtain high-resolution solar images and spectra. IRIS observations along with advanced computer models will deepen our understanding of how heat and energy move through the lower atmosphere of the sun and other sun-like stars.



NASA Launches Satellite to Study How Sun's Atmosphere Is Energized

 June 28, 2013 -- NASA's Interface Region Imaging Spectrograph (IRIS) spacecraft launched Thursday at 7:27 p.m. PDT (10:27 p.m. EDT) from Vandenberg Air Force Base, Calif. The mission to study the solar atmosphere was placed in orbit by an Orbital Sciences Corporation Pegasus XL rocket.
"We are thrilled to add IRIS to the suite of NASA missions studying the sun," said John Grunsfeld, NASA's associate administrator for science in Washington. "IRIS will help scientists understand the mysterious and energetic interface between the surface and corona of the sun."
IRIS is a NASA Explorer Mission to observe how solar material moves, gathers energy and heats up as it travels through a little-understood region in the sun's lower atmosphere. This interface region between the sun's photosphere and corona powers its dynamic million-degree atmosphere and drives the solar wind. The interface region also is where most of the sun's ultraviolet emission is generated. These emissions impact the near-Earth space environment and Earth's climate.
The Pegasus XL carrying IRIS was deployed from an Orbital L-1011 carrier aircraft over the Pacific Ocean at an altitude of 39,000 feet, off the central coast of California about 100 miles northwest of Vandenberg. The rocket placed IRIS into a sun-synchronous polar orbit that will allow it to make almost continuous solar observations during its two-year mission.
The L-1011 took off from Vandenberg at 6:30 p.m. PDT and flew to the drop point over the Pacific Ocean, where the aircraft released the Pegasus XL from beneath its belly. The first stage ignited five seconds later to carry IRIS into space. IRIS successfully separated from the third stage of the Pegasus rocket at 7:40 p.m. At 8:05 p.m., the IRIS team confirmed the spacecraft had successfully deployed its solar arrays, has power and has acquired the sun, indications that all systems are operating as expected.
"Congratulations to the entire team on the successful development and deployment of the IRIS mission," said IRIS project manager Gary Kushner of the Lockheed Martin Solar and Atmospheric Laboratory in Palo Alto, Calif. "Now that IRIS is in orbit, we can begin our 30-day engineering checkout followed by a 30-day science checkout and calibration period."
IRIS is expected to start science observations upon completion of its 60-day commissioning phase. During this phase the team will check image quality and perform calibrations and other tests to ensure a successful mission.
NASA's Explorer Program at Goddard Space Flight Center in Greenbelt, Md., provides overall management of the IRIS mission. The principal investigator institution is Lockheed Martin Space Systems Advanced Technology Center. NASA's Ames Research Center will perform ground commanding and flight operations and receive science data and spacecraft telemetry.
The Smithsonian Astrophysical Observatory designed the IRIS telescope. The Norwegian Space Centre and NASA's Near Earth Network provide the ground stations using antennas at Svalbard, Norway; Fairbanks, Alaska; McMurdo, Antarctica; and Wallops Island, Va. NASA's Launch Services Program at the agency's Kennedy Space Center in Florida is responsible for the launch service procurement, including managing the launch and countdown. Orbital Sciences Corporation provided the L-1011 aircraft and Pegasus XL launch system.

Largest Carnivorous Dinosaur Tooth Ever Found In Spain

 Researchers from the Teruel-Dinópolis Joint Palaeontology Foundation have compared an Allosauroidea tooth found in deposits in Riodeva, Teruel, with other similar samples. The palaeontologists have concluded that this is the largest tooth of a carnivorous dinosaur to have been found to date in Spain.
The features and size of the 9.83cm tooth provide key information needed to identify its former owner. The researchers are in no doubt – it was a large, predatory, carnivorous dinosaur (theropod) belonging to the Allosauroidea clade (one of the branches of the phylogenetic tree), a group that contains large carnivorous dinosaurs measuring between six and 15 meters.
"Given the great variations between the teeth of different kinds of allosauroids, it would be prudent for us to assign this fossil to an indeterminate Allosauroidea", Luis Alcalá, one of the researchers involved in the study to be published in the upcoming issue of Estudios Geológicos and managing director of the Teruel-Dinópolis Joint Palaeontology Foundation, tells SINC.
The tooth, found by local residents in Riodeva, Teruel, in the Villar del Arzobispo Formation, has been compared with other Allosauroidea samples from the Iberian Peninsula, in particular a large tooth from Portugal (measuring 12.7cm) and another from Spain belonging to an indeterminate Allosauroidea, until now described as the largest found in Spain at 8.27cm.
Working towards a complete faunal record of Riodeva
The palaeontologists say that "the presence of a large Allosauroidea is a great addition to the faunal record of the dinosaurs described in the Villar del Arzobispo Formation in Riodeva".
Plant-eating dinosaur groups (phytophages) discovered in the deposit to date have been identified as sauropods, stegosaurids and basal ornithopods (from tooth remains and a complete rear leg). "Now the carnivorous dinosaurs are also represented, at least by two medium-sized theropods and a large predator belonging to the Allosauroidea clade", adds Alcalá.
Carnivorous dinosaurs grew new teeth throughout their lifetimes, which increases the likelihood of finding them. In this case, the condition of the crown of the tooth found (without any reabsorption surfaces) indicates that it was not a discarded tooth. The palaeontologists hope to discover the remains of this large predator, which could have attacked Turiasaurus riodevensis, the 'European giant'.

How 'Parrot Dinosaur' Switched from Four Feet to Two as It Grew

Tracking the growth of dinosaurs and how they changed as they grew is difficult. Using a combination of biomechanical analysis and bone histology, palaeontologists from Beijing, Bristol, and Bonn have shown how one of the best-known dinosaurs switched from four feet to two as it grew.
Psittacosaurus, the 'parrot dinosaur', is known from more than 1000 specimens from the Cretaceous of China and other parts of east Asia, dating to around 100 million years ago. As part of his PhD thesis at the University of Bristol, Qi Zhao, now on the staff of the Institute for Vertebrate Paleontology in Beijing, carried out the intricate study on bones of babies, juveniles and adults.
Dr Zhao said: "Some of the bones from baby Psittacosaurus were only a few millimetres across, so I had to handle them extremely carefully to be able to make useful bone sections. I also had to be sure to cause as little damage to these valuable specimens as possible."
With special permission from the Beijing Institute, Zhao sectioned two arm and two leg bones from 16 individual dinosaurs, ranging in age from less than one year to 10 years old, or fully-grown. He did the intricate sectioning work in a special palaeohistology laboratory in Bonn, Germany.
The one-year-olds had long arms and short legs, and scuttled about on all fours soon after hatching. The bone sections showed that the arm bones were growing fastest when the animals were ages one to three years. Then, from four to six years, arm growth slowed down, and the leg bones showed a massive growth spurt, meaning they ended up twice as long as the arms, necessary for an animal that stood up on its hind legs as an adult.
Professor Xing Xu of the Beijing Institute, one of Dr Zhao's thesis supervisors, said: "This remarkable study, the first of its kind, shows how much information is locked in the bones of dinosaurs. We are delighted the study worked so well, and see many ways to use the new methods to understand even more about the astonishing lives of the dinosaurs."
Professor Mike Benton of the University of Bristol, Dr Zhao's other PhD supervisor, said: "These kinds of studies can also throw light on the evolution of a dinosaur like Psittacosaurus. Having four-legged babies and juveniles suggests that at some time in their ancestry, both juveniles and adults were also four-legged, and Psittacosaurus and dinosaurs in general became secondarily bipedal."

People Prefer 'Carrots' to 'Sticks' When It Comes to Healthcare Incentives

 To keep costs low, companies often incentivize healthy lifestyles. Now, new research suggests that how these incentives are framed -- as benefits for healthy-weight people or penalties for overweight people -- makes a big difference.
The research, published in Psychological Science, a journal of the Association for Psychological Science, shows that policies that carry higher premiums for overweight individuals are perceived as punishing and stigmatizing.
Researcher David Tannenbaum of the Anderson School of Management at the University of California, Los Angeles wanted to investigate how framing healthcare incentives might influence people's attitudes toward the incentives.
"Two frames that are logically equivalent can communicate qualitatively different messages," Tannenbaum explains.
In the first study, 126 participants read about a fictional company grappling with managing its employee health-care policy. They were told that the company was facing rising healthcare costs, due in part to an increasing percentage of overweight employees, and were shown one of four final policy decisions.
The "carrot" plan gave a $500 premium reduction to healthy-weight people, while the "stick" plan increased premiums for overweight people by $500. The two plans were functionally equivalent, structured such that healthy-weight employees always paid $2000 per year in healthcare costs, and overweight employees always paid $2500 per year in healthcare costs.
There were also two additional "stick" plans that resulted in a $2400 premium for overweight people.
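Since the "carrot" and "stick" plans described above are constructed to be logically equivalent, a short worked example makes the equivalence explicit. The dollar figures are those reported in the article; the code itself is purely illustrative.

```python
def carrot_plan(overweight: bool) -> int:
    # Framed as a benefit: a $500 premium reduction for healthy-weight employees.
    return 2500 if overweight else 2500 - 500

def stick_plan(overweight: bool) -> int:
    # Framed as a penalty: a $500 premium increase for overweight employees.
    return 2000 + 500 if overweight else 2000

for overweight in (False, True):
    # Both framings yield the same cost: $2,000 healthy-weight, $2,500 overweight.
    assert carrot_plan(overweight) == stick_plan(overweight)

# The two additional "stick" plans charged overweight employees $2,400, i.e. $100 less
# than the equivalent "carrot" plan, yet participants still judged them as punitive.
```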
Participants were more likely to see the "stick" plans as punishment for being overweight and were less likely to endorse them.
But they didn't appear to differentiate between the three "stick" plans despite the $100 premium difference. Instead, they seemed to evaluate the plans on moral grounds, deciding that punishing someone for being overweight was wrong regardless of the potential savings to be had.
The data showed that framing incentives in terms of penalties may have particular psychological consequences for affected individuals: People with higher body mass index (BMI) scores reported that they would feel particularly stigmatized and dissatisfied with their employer under the three "stick" plans.
Another study placed participants in the decision maker's seat to see if "stick" and "carrot" plans actually reflected different underlying attitudes. Participants who showed high levels of bias against overweight people were more likely to choose the "stick" plan, but provided different justification depending on whether their bias was explicit or implicit:
"Participants who explicitly disliked overweight people were forthcoming about their decision, admitting that they chose a 'stick' policy on the basis of personal attitudes," noted Tannenbaum. "Participants who implicitly disliked overweight people, in contrast, justified their decisions based on the most economical course of action."
Ironically, if they were truly focused on economic concerns they should have opted for the "carrot" plan, since it would save the company $100 per employee. Instead, these participants tended to choose the strategy that effectively punished overweight people, even in instances when the "stick" policy implied a financial cost to the company.
Tannenbaum concludes that these framing effects may have important consequences across many different real-world domains:
"In a broad sense, our research affects policymakers at large," says Tannenbaum. "Logically equivalent policies in various domains -- such as setting a default option for organ donation or retirement savings -- can communicate very different messages, and understanding the nature of these messages could help policymakers craft more effective policy."
Co-authors on this research include Chad Valasek of the University of California, San Diego; Eric Knowles of New York University; and Peter Ditto of the University of California, Irvine.

Parents' Behavior Linked to Kids' Videogame Playing

Children who think their parents are poor monitors or nag a lot tend to play videogames more than other kids, according to a study by Michigan State University researchers.
The study, funded by the National Science Foundation, is one of the first to link parental behavior to kids' videogame playing. The researchers surveyed more than 500 students from 20 middle schools and found that the more children perceived their parents' behavior as negative (e.g., "nags a lot") and the less monitoring parents did, the more the children played videogames.
The next step, said lead researcher Linda Jackson, is to find out what's fueling children's videogame behavior -- a topic Jackson and her team plan to examine.
"Does a parent's negative interactions with their child drive the child into the world of videogames, perhaps to escape the parent's negativity?" said Jackson, professor of psychology. "Or, alternatively, does videogame playing cause the child to perceive his or her relationship with the parent as negative?"
There also could be another characteristic of the child that's responsible for the relationship between perceptions of parent negativity and videogame playing, she said.
Jackson said an equally interesting question is the relationship between videogame playing and actual rather than perceived behavior of parents. Perceptions don't always mirror reality, she said, and this may be the case in the child-parent relationship.
The study appears in the Proceedings of the 2011 World Conference on Educational Multimedia, Hypermedia & Telecommunications.
The study is part of a larger project in which Jackson and colleagues are exploring the effects of technology use on children's academic performance, social life, psychological well-being and moral reasoning.

'Nerdy' Mold Needs Breaking to Recruit Women Into Computer Science

The 'computer nerd' is a well-known stereotype in our modern society. While this stereotype is inaccurate, it still has a chilling effect on women pursuing a qualification in computer science, according to a new paper by Sapna Cheryan from the University of Washington in the US, and colleagues. However, when this image is downplayed in the print media, women express more interest in further education in computer science. The work is published online in Springer's journal, Sex Roles.
Despite years of effort, it has proven difficult to recruit women into many fields that are perceived to be masculine and male-dominated, including computer science. The image of a lone computer scientist, concerned only with technology, is in stark contrast to a more people-oriented or traditionally feminine image. Understanding what prevents women from entering computer science is key to achieving gender parity in science, technology, engineering and mathematics.
Cheryan and team sought to prove that the shortage of women in computer science and other scientific fields is not only due to a lack of interest in the subject matter on the part of women. In a first study, 293 college students from two US West Coast universities were asked to provide descriptions of computer science majors. The authors wanted to discover what the stereotypical computer scientist looks like in students' minds.
Both women and men spontaneously offered an image of computer scientists as technology-oriented, intensely focused on computers, intelligent and socially unskilled. These characteristics contrast with the female gender role, and are inconsistent with how many women see themselves.
The way a social group is represented in the media also influences how people think about that group and their relation to it. In a second study, the researchers manipulated the students' images of a computer scientist, using fabricated newspaper articles, to examine the influence of these media on women's interest in entering the field. A total of 54 students read articles about computer science majors that described these students as either fitting, or not fitting, the current stereotype. Students were then asked to rate their interest in computer science.
Exposure to a newspaper article claiming that computer science majors no longer fit current preconceived notions increased women's interest in majoring in computer science. These results were in comparison to those of exposure to a newspaper article claiming that computer science majors do indeed reflect the stereotype. Men, however, were unaffected by how computer science majors were represented.

Giving Children Non-Verbal Clues About Words Boosts Vocabularies

The clues that parents give toddlers about words can make a big difference in how deep their vocabularies are when they enter school, new research at the University of Chicago shows.
By using words to reference objects in the visual environment, parents can help young children learn new words, according to the research. It also explores the difficult-to-measure quality of non-verbal clues to word meaning during interactions between parents and children learning to speak. For example, saying, "There goes the zebra" while visiting the zoo helps a child learn the word "zebra" faster than saying, "Let's go to see the zebra."
Differences in the quality of parents' non-verbal clues to toddlers (what children can see when their parents are talking) explain about a quarter (22 percent) of the differences in those same children's vocabularies when they enter kindergarten, researchers found. The results are reported in the paper, "Quality of early parent input predicts child vocabulary three years later," published in the current issue of the Proceedings of the National Academy of Sciences.
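The "(22 percent)" figure is a statement about variance explained; for a single predictor it implies a correlation of roughly the square root of 0.22. A one-line check, purely arithmetic and using no study data:

```python
# R-squared of 0.22 corresponds, for a single predictor, to a correlation
# of about sqrt(0.22) between quality of non-verbal clues and later vocabulary.
r_squared = 0.22
print(f"implied correlation ~ {r_squared ** 0.5:.2f}")  # ~0.47
```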
"Children's vocabularies vary greatly in size by the time they enter school," said lead author Erica Cartmill, a postdoctoral scholar at UChicago. "Because preschool vocabulary is a major predictor of subsequent school success, this variability must be taken seriously and its sources understood."
Scholars have found that the number of words youngsters hear greatly influences their vocabularies. Parents with higher socioeconomic status -- those with higher income and more education -- typically talk more to their children and accordingly boost their vocabularies, research has shown.
That advantage for higher-income families doesn't show up in the quality of input, however, the research found.
"What was surprising in this study was that social economic status did not have an impact on quality. Parents of lower social economic status were just as likely to provide high-quality experiences for their children as were parents of higher status," said co-author Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in Psychology at UChicago.
Although scholars have amassed impressive evidence that the number of words children hear -- the quantity of their linguistic input -- has an impact on vocabulary development, measuring the quality of the verbal environment -- including non-verbal clues to word meaning -- has proved much more difficult.
To measure quality, the research team reviewed videotapes of everyday interactions between 50 primary caregivers, almost all mothers, and their children (14 to 18 months old). The mothers and children, from a range of social and economic backgrounds, were taped for 90-minute periods as they went about their days, playing and engaging in other activities.
The team then showed 40-second vignettes from these videotapes to 218 adults with the sound track muted. Based on the interaction between the child and parent, the adults were asked to guess what word the parent in each vignette used when a beep was sounded on the tape.
A beep might occur, for instance, in a parent's silenced speech for the word "book" as a child approaches a bookshelf or brings a book to the mother to start storytime. In this scenario, the word was easy to guess because the mother labeled objects as the child saw and experienced them. In other tapes, viewers were unable to guess the word that was beeped during the conversation, as there were few immediate clues to the meaning of the parent's words. Vignettes containing words that were easy to guess provided high-quality clues to word meaning.
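A simple way to think about the quality score this procedure yields is as the fraction of muted-video viewers who correctly guess the beeped word. The function below is a simplified stand-in for that idea, with invented example data; it is not the authors' exact scoring method.

```python
def guessability(guesses, target):
    """Fraction of viewers whose guess matches the word the parent actually said."""
    matches = sum(1 for g in guesses if g.strip().lower() == target.lower())
    return matches / len(guesses)

# Hypothetical vignette: most viewers infer "book" from the child reaching for the shelf.
print(guessability(["book", "book", "toy", "book"], "book"))  # 0.75
```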
Although there were no differences in the quality of the interactions based on parents' backgrounds, the team did find significant individual differences among the parents studied. Some parents provided non-verbal clues about words only 5 percent of the time, while others provided clues 38 percent of the time, the study found.
The study also found that the number of words parents used was not related to the quality of the verbal exchanges. "Early quantity and quality accounted for different aspects of the variance found in the later vocabulary outcome measure," the authors wrote. In other words, how much parents talk to their children (quantity), and how parents use words in relation to the non-verbal environment (quality) provided different kinds of input into early language development.
"However, parents who talk more are, by definition, offering their children more words, and the more words a child hears, the more likely it will be for that child to hear a particular word in a high-quality learning situation," they added. This suggests that higher-income families' vocabulary advantage comes from a greater quantity of input, which leads to a greater number of high-quality word-learning opportunities. DMaking effective use of non-verbal cues may be a good way for parents to get their children started on the road to language.
Joining Cartmill and Goldin-Meadow as authors were University of Pennsylvania scholars Lila Gleitman, professor emerita of psychology; John Trueswell, professor of psychology; Benjamin Armstrong, a research assistant; and Tamara Medina, assistant professor of psychology at Drexel University.

Why Are Gull Chicks Murdered, Especially On Sundays?

 Why are gull chicks murdered especially on Sundays? How does man influence the size of gull populations? These and many other questions are answered in the doctoral thesis of Kees Camphuysen from the Royal Netherlands Institute for Sea Research (NIOZ). Camphuysen will defend his thesis at the University of Groningen on 21 June 2013.
Kees Camphuysen has been doing research on seabirds since 1973, and since 2006 his focus has been on seagulls in the Texel dunes. There, he focused special attention on the European herring gull and the lesser black-backed gull. Since the sixties, the number of herring gulls first increased enormously, then stabilized and subsequently declined strongly. The lesser black-backed gull, which established itself in the Netherlands around 1930, later became much more numerous and has ultimately eclipsed the herring gull in numbers. Has the lesser black-backed gull supplanted the herring gull by taking up the best spots, or did it win the competition for food? Or does the decline in herring gulls and the increase of lesser black-backed gulls perhaps have nothing to do with each other?
From Camphuysen's research it appears that both gull species have benefited from an expanded food offering caused by people. Herring gulls have learned to tap food in landfills, while lesser black-backed gulls in particular were attracted by the fish waste that was put overboard at sea. Now that the majority of landfills are covered and the fishing fleets are shrinking, both gull species are finding it more difficult to find food. It seems that the increase and decrease of both species is not directly influenced by each other.
Camphuysen also discovered a remarkable rhythm in the growth of the chicks and also that much more cannibalism took place over the weekend than on weekdays (gull chicks that were pecked to death by adult gulls and sometimes eaten). It turned out that gulls, especially during chick care, rely heavily on fish waste thrown overboard from fishing boats. Bad luck for these birds: at the weekend, the fishing fleet is largely in the harbour. This weekly rhythm is a problem, especially in the second half of the chick care period (in July), when there is barely enough food to be found for the hungry chicks.
The fleet is expected to shrink even more in the coming years. The problem of food shortage will continue to increase as a result, but then not only at the weekend. European policy, wherein by-catch may no longer be put overboard, will clearly have consequences for both gull species. How they will react to this is difficult to predict. A strong inland increase of gulls, looking for alternative food sources, is one of the likely effects.
Camphuysen has also equipped gulls with GPS data loggers, so as to be able to see where they look for food. It emerged that lesser black-backed gulls foraged much further afield than herring gulls. Lesser black-backed gulls also went more often and further out onto the North Sea, following fishing boats. One lesser black-backed gull, which had three youngsters that were not growing well, took a desperate measure and flew via Hoorn to Amsterdam, in order to hang out in the Leidsestraat there. Who knows whether she ate chips there, or a kebab roll? She then flew to the North Sea in order to follow a fishing boat far out at sea. The next day, her young had grown properly again.

Is TV Becoming a Regular Babysitter for Busy Parents?

New research out of the University of Cincinnati finds that young children are watching TV, videos and other screen media while parents are trying to take care of other tasks in the home. The research by Sue Schlembach, a recent master's degree graduate from the UC educational studies program, also found that, although parents believed screen media could be used as an important learning tool for their young children, parents may rarely use it for that purpose.
The findings come from a questionnaire, answered by 21 people, that explored parents' beliefs, attitudes and behaviors regarding children -- aged six months to five years -- and screen media. The respondents to the questionnaire were overwhelmingly women.
Schlembach was examining four areas concerning children and screen time:
  • Did parents believe screen media could be an important educational tool?
  • Did they believe it was important to watch programs together with their child?
  • Did parents use screen time for instructional purposes, set rules or restrictions on screen use, or mainly use it as a monitoring or recreational activity?
  • Did parents have a positive, neutral or negative attitude toward children's screen media use?
Schlembach says her research supported previous national studies suggesting that parents may be doing other tasks while young children are watching TV. Furthermore, over half of respondents said they left the TV on during meals, and 48 percent indicated that the TV was often on when no one was really watching.
Schlembach says she was interested in exploring parental attitudes about kids and screen time because she was interested in early childhood development -- specifically, young children and implications surrounding the contextual nature of screen use and learning. She says the study also was motivated by American Academy of Pediatrics (AAP) recommendations that there should be no screen-media viewing at all for children under age 2, and that for older children, parents should engage in viewing and interacting with their children about the program material.
"This is a study that is certainly not meant to judge people, but rather to educate people about what's going on at home," says Schlembach. "For young children, meal time is a really important part of the day. It's a time for parents to engage in conversation with their children, serve as role models for dining behavior and also build on language and social skills.
"Even when parents say the TV is on when no one is watching, the TV is usually set up in the most central gathering location of the home," says Schlembach. "In that regard, a child could be playing with his or her toys in the living room, as a TV program is producing disruptive background noise."
Schlembach says ultimately, she hopes health care providers will talk with parents about screen media time as part of their health checklist, and as part of efforts to educate parents about child development and AAP recommendations.

Advantages of Imaginary Friends

 Most parents do not worry if their young child has an imaginary friend and even see advantages in such an invisible companion.
These are the findings to be presented 10 January 2013, at the Annual Professional Event of the British Psychological Society's Division of Educational and Child Psychology. The event is being held at the Grand Thistle Hotel, Bristol.
Dr Karen Majors, Chartered Educational Psychologist at the Barking and Dagenham Educational Psychology Service, and Dr Ed Baines, Senior Lecturer in Psychology from the Institute of Education, collected 265 questionnaires from parents about their children's imaginary friends.
The great majority of the parents (88 per cent) answered that they did not think that there were disadvantages for their child in having an imaginary friend. Parents saw the main reasons for having invisible friends as supporting fantasy play and as a companion to play and have fun with. Parents also gave numerous examples of how invisible friends helped their children process and cope with life events.
Younger children also used their interactions with invisible friends to test their parents' reactions to behaviour that might be disapproved of, thus helping them learn to regulate their behaviour.
The results also showed that children were more likely to have same-sex imaginary friends, with boys particularly likely to have other boys as invisible companions.
Dr Karen Majors says: "Our results showed that imaginary friends provided an outlet for children's imagination and story making, facilitating games, fun and companionship. These versatile friends also enabled them to cope with new life events like moving house or going on holiday. Above all, these findings remind us just how imaginative children are, which is something we should be pleased about."

Learning Disabilities Affect Up to 10 Percent of Children

Up to 10 per cent of the population are affected by specific learning disabilities (SLDs), such as dyslexia, dyscalculia and autism, translating to 2 or 3 pupils in every classroom, according to a new article.
The review -- by academics at UCL and Goldsmiths -- also indicates that children are frequently affected by more than one learning disability.
The research, published today in Science, helps to clarify the underlying causes of learning disabilities and the best way to tailor individual teaching and learning for affected individuals and education professionals.
Specific learning disabilities arise from atypical brain development with complicated genetic and environmental causes, causing such conditions as dyslexia, dyscalculia, attention-deficit/hyperactivity disorder, autism spectrum disorder and specific language impairment.
While these conditions in isolation already present a challenge for educators, an additional problem is that specific learning disabilities also co-occur far more often than would be expected. For example, among children with attention-deficit/hyperactivity disorder, 33 to 45 per cent also suffer from dyslexia and 11 per cent from dyscalculia.
Lead author Professor Brian Butterworth (UCL Institute of Cognitive Neuroscience) said: "We now know that there are many disorders of neurological development that can give rise to learning disabilities, even in children of normal or even high intelligence, and that crucially these disabilities can also co-occur far more often than you'd expect based on their prevalence.
"We are also finally beginning to find effective ways to help learners with one or more SLDs, and although the majority of learners can usually adapt to the one-size-fits-all approach of whole class teaching, those with SLDs will need specialised support tailored to their unique combination of disabilities."
As part of the study, Professor Butterworth and Dr Yulia Kovas (Goldsmiths) have summarised what is currently known about the neural and genetic basis of SLDs to help clarify what causes these disabilities to develop, helping to improve teaching for individual learners, and also training for school psychologists, clinicians and teachers.
The team hopes that developing an understanding of how individual differences in brain development interact with formal education, and adapting learning pathways to individual needs, will produce more tailored education for learners with specific learning disabilities.
Professor Butterworth said: "Each child has a unique cognitive and genetic profile, and the educational system should be able to monitor and adapt to the learner's current repertoire of skills and knowledge.
"A promising approach involves the development of technology-enhanced learning applications -- such as games -- that are capable of adapting to individual needs for each of the basic disciplines."

Fiber-Optic Pen Helps See Inside Brains of Children With Learning Disabilities

 For less than $100, University of Washington researchers have designed a computer-interfaced drawing pad that helps scientists see inside the brains of children with learning disabilities while they read and write.
The device and research using it to study the brain patterns of children will be presented June 18 at the Organization for Human Brain Mapping meeting in Seattle. A paper describing the tool, developed by the UW's Center on Human Development and Disability, was published this spring in Sensors, an online open-access journal.
"Scientists needed a tool that allows them to see in real time what a person is writing while the scanning is going on in the brain," said Thomas Lewis, director of the center's Instrument Development Laboratory. "We knew that fiber optics were an appropriate tool. The question was, how can you use a fiber-optic device to track handwriting?"
To create the system, Lewis and fellow engineers Frederick Reitz and Kelvin Wu hollowed out a ballpoint pen and inserted two optical fibers that connect to a light-tight box in an adjacent control room where the pen's movement is recorded. They also created a simple wooden square pad to hold a piece of paper printed with continuously varying color gradients. The custom pen and pad allow researchers to record handwriting during functional magnetic resonance imaging, or fMRI, to assess behavior and brain function at the same time.
Other researchers have developed fMRI-compatible writing devices, but "I think it does something similar for a tenth of the cost," Reitz said of the UW system. By using supplies already found in most labs (such as a computer), the rest of the supplies -- pen, fiber optics, wooden pad and printed paper -- cost less than $100.
The device connects to a computer with software that records every aspect of the handwriting, from stroke order to speed, hesitations and liftoffs. Understanding how these physical patterns correlate with a child's brain patterns can help scientists understand the neural connections involved.
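As a rough sketch of the kinds of features such recording software can extract, here is a minimal Python example. The sample format, field names, and functions are assumptions made for illustration; they are not taken from the UW system.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class PenSample:
    t: float        # seconds since the start of the task
    x: float        # pen position (arbitrary units)
    y: float
    on_paper: bool  # False while the pen is lifted

def stroke_speeds(samples):
    """Instantaneous pen speed between consecutive on-paper samples."""
    speeds = []
    for a, b in zip(samples, samples[1:]):
        if a.on_paper and b.on_paper and b.t > a.t:
            speeds.append(hypot(b.x - a.x, b.y - a.y) / (b.t - a.t))
    return speeds

def count_liftoffs(samples):
    """Number of times the pen leaves the paper."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a.on_paper and not b.on_paper)

# Tiny invented trace: write, lift briefly, write again.
trace = [
    PenSample(0.00, 0.0, 0.0, True),
    PenSample(0.05, 1.0, 0.5, True),
    PenSample(0.10, 1.5, 1.0, False),
    PenSample(0.15, 2.0, 1.0, True),
]
print(stroke_speeds(trace), count_liftoffs(trace))
```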
Researchers studied 11- and 14-year-olds with either dyslexia or dysgraphia, a handwriting and letter-processing disorder, as well as children without learning disabilities. Subjects looked at printed directions on a screen while their heads were inside the fMRI scanner. The pen and pad were on a foam pad on their laps.
Subjects were given four-minute blocks of reading and writing tasks. Then they were asked to simply think about writing an essay (they later wrote the essay when not using the fMRI). Just thinking about writing caused many of the same brain responses as actual writing would.
"If you picture yourself writing a letter, there's a part of the brain that lights up as if you're writing the letter," said Todd Richards, professor of radiology and principal investigator of the UW Integrated Brain Imaging Center. "When you imagine yourself writing, it's almost as if you're actually writing, minus the motion problems."
Richards and his staff are just starting to analyze the data they've collected from about three dozen subjects, but they have already found some surprising results.
"There are certain centers and neural pathways that we didn't necessarily expect" to be activated, Richards said. "There are language pathways that are very well known. Then there are other motor pathways that allow you to move your hands. But how it all connects to the hand and motion is still being understood."
Besides learning disorders, the inexpensive pen and pad also could help researchers study diseases in adults, especially conditions that cause motor control problems, such as stroke, multiple sclerosis and Parkinson's disease.
"There are several diseases where you cannot move your hand in a smooth way or you're completely paralyzed," Richards said. "The beauty is it's all getting recorded with every stroke, and this device would help us to study these neurological diseases.

More Than One in Five Parents Believe They Have Little Influence in Preventing Teens from Using Illicit Substances

 A new report indicates that more than one in five parents of teens aged 12 to 17 (22.3 percent) think what they say has little influence on whether or not their child uses illicit substances, tobacco, or alcohol. This report by the Substance Abuse and Mental Health Services Administration (SAMHSA) also shows that one in ten parents said they did not talk to their teens about the dangers of using tobacco, alcohol, or other drugs -- even though 67.6 percent of those parents believed that talking with their children would influence whether their child uses drugs.
In fact, national surveys of teens aged 12 to 17 show that teens who believe their parents would strongly disapprove of their substance use are less likely to use substances than other teens. For example, current marijuana use was less prevalent among youth who believed their parents would strongly disapprove of their trying marijuana once or twice than among youth who did not perceive this level of disapproval (5.0 percent vs. 31.5 percent).
"Surveys of teens repeatedly show that parents can make an enormous difference in influencing their children's perceptions of tobacco, alcohol, or illicit drug use," said SAMHSA Administrator Pamela S. Hyde. "Although most parents are talking with their teens about the risks of tobacco, alcohol, and other drugs, far too many are missing the vital opportunity these conversations provide in influencing their children's health and well-being. Parents need to initiate age-appropriate conversations about these issues with their children at all stages of their development in order to help ensure that their children make the right decisions."
Parents can draw upon a number of resources to help them talk with their children about substance use. One resource is SAMHSA's "Navigating the Teen Years: A Parent's Handbook for Raising Healthy Teens," available at http://store.samhsa.gov/product/Navigating-the-Teen-Years-A-Parent-s-Handbook-for-Raising-Health-Teens/PHD1127.
"Talk. They Hear You." is SAMHSA's new national media campaign encouraging parents with ideas and resources to promote conversations with children ages nine and older about the dangers of underage drinking. The campaign features a series of TV, radio, and print public service announcements in English and Spanish showing parents how to seize the moment to talk with their children about alcohol. Information about the campaign is available at: www.underagedrinking.samhsa.gov.
The SAMHSA report, "1 in 5 Parents Think What They Say Has Little Impact on Their Child's Substance Use," is available at http://www.samhsa.gov/data/spotlight/Spot081-parents-think.pdf. It is based on the findings of SAMHSA's National Survey on Drug Use and Health -- an annual nationwide survey of 67,500 Americans aged 12 or older.

Parents Talking About Their Own Drug Use to Children Could Be Detrimental

 Parents know that one day they will have to talk to their children about drug use. The hardest part is deciding whether talking about one's own drug use will be useful in communicating an antidrug message. Recent research, published in the journal Human Communication Research, found that children whose parents did not disclose their own drug use, but delivered a strong antidrug message, were more likely to exhibit antidrug attitudes.
Jennifer A. Kam of the University of Illinois at Urbana-Champaign and Ashley V. Middleton of MSO Health Information Management published their findings in Human Communication Research, based on surveys of 253 Latino and 308 European American students in the sixth through eighth grades. The students reported on the conversations they had had with their parents about alcohol, cigarettes, and marijuana. Kam and Middleton were interested in how certain types of messages related to the students' substance-use perceptions and, in turn, their behaviors.
Past research found that teens reported they would be less likely to use drugs if their parents told them about their own past drug use. In Kam and Middleton's study, however, Latino and European American children who reported that their parents talked about the negative consequences of, or regret over, their own past substance use were actually less likely to report anti-substance-use perceptions. In other words, when parents share stories of their own past substance use, even stories framed as lessons learned, the message may have unintended consequences for early adolescent children.
Kam and Middleton's study identifies specific messages that parents can relay to their children about alcohol, cigarettes, and marijuana that may encourage anti-substance-use perceptions, and in turn, discourage actual substance use. For example, parents may talk to their kids about the negative consequences of using substances, how to avoid substances, that they disapprove of substance use, the family rules against substance use, and stories about others who have gotten in trouble from using substances.
"Parents may want to reconsider whether they should talk to their kids about times when they used substances in the past and not volunteer such information, Kam said. "Of course, it is important to remember this study is one of the first to examine the associations between parents' references to their own past substance use and their adolescent children's subsequent perceptions and behaviors."

Parents Key to Preventing Alcohol, Marijuana Use by Kids

New research from North Carolina State University, Brigham Young University and Pennsylvania State University finds that parental involvement is more important than the school environment when it comes to preventing or limiting alcohol and marijuana use by children.

"Parents play an important role in shaping the decisions their children make when it comes to alcohol and marijuana," says Dr. Toby Parcel, a professor of sociology at NC State and co-author of a paper on the work. "To be clear, school programs that address alcohol and marijuana use are definitely valuable, but the bonds parents form with their children are more important. Ideally, we can have both."

The researchers evaluated data from a nationally representative study that collected information from more than 10,000 students, as well as their parents, teachers and school administrators. Specifically, they looked at how "family social capital" and "school social capital" affected the likelihood and frequency of marijuana and alcohol use by children. Family social capital can essentially be described as the bonds between parents and children, such as trust, open lines of communication and active engagement in a child's life. School social capital captures a school's ability to serve as a positive environment for learning, including measures such as student involvement in extracurricular activities, teacher morale and the ability of teachers to address the needs of individual students.

The researchers evaluated marijuana use and alcohol use separately. In both cases, they found that students with high levels of family social capital and low levels of school social capital were less likely to have used marijuana or alcohol, and used those substances less frequently, than students with high levels of school social capital but low levels of family social capital.
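To make that kind of comparison concrete, here is a minimal Python sketch using scikit-learn on synthetic placeholder data. The item counts, variable names, and coefficients are illustrative assumptions only; the actual study used a nationally representative dataset and its own measures and modeling choices.

```python
# Minimal sketch (not the authors' analysis): relate composite "family social
# capital" and "school social capital" scores to whether a student reports any
# marijuana use, using logistic regression on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000  # synthetic sample size, not the study's ~10,000 students

# Assume each construct is the mean of several 1-5 survey items.
family_items = rng.integers(1, 6, size=(n, 4))   # e.g., trust, communication, engagement
school_items = rng.integers(1, 6, size=(n, 4))   # e.g., extracurriculars, teacher morale
family_capital = family_items.mean(axis=1)
school_capital = school_items.mean(axis=1)

# Synthetic outcome: higher family capital lowers the odds of any use
# (the direction reported in the study); the coefficients here are made up.
logit = 1.0 - 0.8 * family_capital - 0.2 * school_capital
any_use = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([family_capital, school_capital])
model = LogisticRegression().fit(X, any_use)
print("coefficients (family, school):", model.coef_[0])
```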

Student Engagement More Complex, Changeable Than Thought

 A student who shows up on time for school and listens respectfully in class might appear fully engaged to outside observers, including teachers. But other measures of student engagement, including the student's emotional and cognitive involvement with the course material, may tell a different story -- one that could help teachers recognize students who are becoming less invested in their studies, according to a new study coauthored by a University of Pittsburgh researcher.
More importantly for educators, the study, published online in the professional journal Learning and Instruction, suggests that student engagement -- essential for success in school -- is malleable, and can be improved by promoting a positive school environment. The result paves the way for future work to offer teachers diagnostic tools for recognizing disengagement, as well as strategies for creating a school environment more conducive to student engagement.
"Enhancing student engagement has been identified as the key to addressing problems of low achievement, high levels of student misbehavior, alienation, and high dropout rates," said Ming-Te Wang, assistant professor of psychology in education in the School of Education and of psychology in the Kenneth P. Dietrich School of Arts and Sciences at Pitt, who coauthored the study with Jacquelynne S. Eccles, the Wilbert McKeachie and Paul Pintrich Distinguished University Professor of Psychology and Education at the University of Michigan.
"When we talk about student engagement, we tend to talk only about student behavior," Wang added. "But my coauthor and I feel like that doesn't tell us the whole story. Emotion and cognition are also very important."
Wang and Eccles' study is among the first attempts by researchers to use data to explore a multidimensional approach to student engagement. In the past, only behavioral measures -- such as class attendance, turning in homework on time, and classroom participation -- had been used to gauge engagement. By conducting a study linking students' perceptions of the school environment with their behavior, the authors have provided one of the first pieces of empirical research supporting the viability of the multidimensional perspective, which had previously been largely theoretical.
The researchers designed a 100-question survey that includes the evaluation of emotional engagement and cognitive engagement. Sample survey questions that tested emotional engagement in classes across all subject areas asked students to agree or disagree with statements such as "I find schoolwork interesting" and "I feel excited by the work in school." Sample questions concerning cognitive engagement asked students to provide ratings to questions like "How often do you make academic plans for solving problems?" and "How often do you try to relate what you are studying to other things you know about?"
Using the survey, Wang and Eccles conducted a two-year longitudinal study, tracking approximately 1,200 Maryland students from seventh through eighth grade. The authors also measured students' perceptions of their environment by having them answer questions in five areas: school structure support, which gauged the clarity of teacher expectations; provision of choice, which assessed students' opportunities to make learning-related decisions; teaching for relevance, which evaluated the frequency of activities deemed relevant to students' personal interests and goals; students' perceptions of the emotional support offered by teachers; and students' perceptions of how positive their relationships were with fellow students.
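To illustrate how multidimensional survey data like this might be scored, the sketch below averages Likert-style responses into per-student subscale scores for behavioral, emotional, and cognitive engagement. The item names and the 1-5 scale are hypothetical examples, not the actual 100-item instrument used in the study.

```python
# Hypothetical scoring sketch: average Likert responses (1-5) into engagement
# subscales per student. Item names and scale are illustrative, not the
# actual survey used by Wang and Eccles.
from statistics import mean

# Each subscale maps to the survey items assumed to measure it.
SUBSCALES = {
    "emotional": ["schoolwork_interesting", "excited_by_work"],
    "cognitive": ["makes_academic_plans", "relates_material_to_knowledge"],
    "behavioral": ["attends_class", "turns_in_homework", "participates"],
}

def engagement_scores(responses):
    """Return {subscale: mean score} for one student's item responses,
    skipping subscales with no answered items."""
    scores = {}
    for subscale, items in SUBSCALES.items():
        answered = [responses[i] for i in items if i in responses]
        if answered:
            scores[subscale] = mean(answered)
    return scores

# Example: one student's (made-up) responses.
student = {
    "schoolwork_interesting": 4, "excited_by_work": 3,
    "makes_academic_plans": 2, "relates_material_to_knowledge": 3,
    "attends_class": 5, "turns_in_homework": 4, "participates": 3,
}
print(engagement_scores(student))
```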
The authors found that students who felt that the subject matter being taught and the activities provided by their teachers were meaningful and related to their goals were more emotionally and cognitively engaged than were their peers. Adding measures of emotional and cognitive engagement could broaden researchers' perspectives on student engagement in future work in this area.
Also among the paper's main findings is that the school environment can and, indeed, should be changed if it is impeding student engagement. A positive and supportive school environment is marked, Wang said, by "positive relationships with teachers and peers. Schools must provide opportunities for students to make their own choices. But they also must create a more structured environment so students know what to do, what to expect, from school." Wang also noted, however, that there is no "one size fits all" strategy to the problem of student engagement.
"Usually people say, 'Yes, autonomy is beneficial. We want to provide students with choices in school,'" Wang said. "This is the case for high achievers, but not low achievers. Low achievers want more structure, more guidelines."
As a result, Wang said, teachers must take into account individual variation among students in order to fulfill the needs of each student.
Wang's current work, undertaken in partnership with six Allegheny County school districts, focuses on developing a diagnostic tool that teachers can use to identify students who are disengaged from school, with a specific emphasis on math and science classes.
