Wednesday, June 9, 2010

life turns left

'Ancestral Eve' crystal may explain origin of life's left-handedness | e! Science News
Molecules of aspartic acid with a left-handed orientation, shown in crystal form, could be the "ancestral Eve" of all amino acids -- the building blocks of proteins -- in life on Earth.
Published: Wednesday, April 21, 2010 - 12:21
 
American Chemical Society
Scientists are reporting discovery of what may be the "ancestral Eve" crystal that billions of years ago gave life on Earth its curious and exclusive preference for so-called left-handed amino acids. Those building blocks of proteins come in two forms — left- and right-handed — that mirror each other like a pair of hands. Their study, which may help resolve one of the most perplexing mysteries about the origin of life, is in ACS' Crystal Growth & Design, a bi-monthly journal. Tu Lee and Yu Kun Lin point out that conditions on the primordial Earth held an equal chance of
forming the same amounts of left-handed and right-handed amino acids. Nevertheless, when the first forms of life emerged more than 3 billion years ago, all the amino acids in the proteins had the left-handed configuration. That pattern continued right up to modern plants and animals.
The scientists used mixtures of both left- and right-handed aspartic acid (an amino acid) in laboratory experiments to see how temperature and other conditions affected formation of crystals of the material. They found that under conditions that could have existed on primitive Earth, left-handed aspartic acid crystals could have formed easily and on a large scale. "The aspartic acid crystal would then truly become a single mother crystal: an ancestral Eve for the whole left-handed population," the article notes.
Source: American Chemical Society



Sunday, June 6, 2010

herb's a gift from the earth

Not Just A High - Science News

Not just a high
Scientists test medicinal marijuana against MS, inflammation and cancer

Not just a high: Cannabis compounds show their stuff against a host of medical problems, relieving symptoms far beyond pain and nausea. KatsgraphicsLV/iStockphoto
In science’s struggle to keep up with life on the streets, smoking cannabis for medical purposes stands as Exhibit A.
Medical use of cannabis has taken on momentum of its own, surging ahead of scientists’ ability to measure the drug’s benefits. The pace has been a little too quick for some, who see medicinal joints as a punch line, a ruse to free up access to a recreational drug.
But while the medical marijuana movement has been generating political news, some researchers have been quietly moving in new directions — testing cannabis and its derivatives against a host of diseases. The scientific literature now brims with potential uses for cannabis that extend beyond its well-known abilities to fend off nausea and block pain in people with cancer and AIDS. Cannabis derivatives may combat multiple sclerosis, Crohn’s disease and other inflammatory conditions, the new research finds. Cannabis may even kill cancerous tumors.
Many in the scientific community are now keen to see if this potential will be fulfilled, but they haven’t always been. Pharmacologist Roger Pertwee of the University of Aberdeen in Scotland recalls attending scientific conferences 30 years ago, eager to present his latest findings on the therapeutic effects of cannabis. It was a hard sell.
“Our talks would be scheduled at the end of the day, and our posters would be stuck in the corner somewhere,” he says. “That’s all changed.”
Underlying biology
The long march to credibility for cannabis research has been built on molecular biology. Smoking or otherwise consuming marijuana — Latin name Cannabis sativa — has a medical history that dates back thousands of years. But the euphoria-inducing component of cannabis, delta-9-tetrahydrocannabinol, or THC, wasn’t isolated until 1964, by biochemist Raphael Mechoulam, then of the Weizmann Institute of Science in Rehovot, Israel, and his colleagues. Within two decades, other researchers had developed synthetic THC to use in pill form.
The secrets of how THC worked in the body lay hidden until the late 1980s, when researchers working with rats found that the compound binds to a protein that pops up on the surface of nerve cells. Further tests showed that THC also hooks up with another protein found elsewhere in the body. These receptor proteins were dubbed CB1 and CB2.
A bigger revelation came in 1992: Mammals make their own compound that binds to, and switches on, the CB1 receptor. Scientists named the compound anandamide. Researchers soon found its counterpart that binds mainly to the CB2 receptor, calling that one 2AG, for 2-arachidonyl glycerol. The body routinely makes these compounds, called endocannabinoids, and sends them into action as needed.
“At that point, this became a very, very respectable field,” says Mechoulam, now at Hebrew University of Jerusalem, who along with Pertwee and others reported the anandamide discovery in Science. “THC just mimics the effects of these compounds in our bodies,” Mechoulam says. Although the receptors are abundant, anandamide and 2AG are short-acting compounds, so their effects are fleeting.
In contrast, when a person consumes cannabis, a flood of THC molecules bind to thousands of CB1 and CB2 receptors, with longer-lasting effects. The binding triggers so many internal changes that, decades after the receptors’ discovery, scientists are still sorting out the effects. From a biological standpoint, smoking pot to get high is like starting up a semitruck just to listen to the radio. There’s a lot more going on.
Sanctioned smoking: Though smoked cannabis has not been approved by the Food and Drug Administration, its use for medical purposes has been sanctioned by law in 14 states (shown in green, year given). Different states apply their own restrictions, some of which are highlighted. Kelly Ann McCann; source: D.E. Hoffmann and E. Weber/NEJM 2010
Though the psychoactive effect of THC has slowed approval for cannabis-based drugs, the high might also have brought on a serendipitous discovery, says neurologist Ethan Russo, senior medical adviser for GW Pharmaceuticals, which is based in Porton Down, England. “How much longer would it have taken us to figure out the endocannabinoid system if cannabis didn’t happen to have these unusual effects on human physiology?”
Beyond the pain

Today smoked cannabis is a sanctioned self-treatment for verifiable medical conditions in 14 U.S. states, Canada, the Netherlands and Israel, among other places. It usually requires a doctor’s recommendation and some paperwork.
People smoke the drug to alleviate pain, sleep easier and deal with nausea, lack of appetite and mood disorders such as anxiety, stress and depression. Patients not wanting to smoke cannabis can seek out prescriptions for FDA-approved capsules containing cannabis compounds for treatment of some of these same problems.
Research now suggests that multiple sclerosis could join the growing list of cannabis-treated ailments. More than a dozen medical trials in the past decade have shown that treatments containing THC (and some that combine THC with another derivative called cannabidiol, or CBD) not only ease pain in MS patients but also alleviate other problems associated with the disease. MS results from damage to the fatty sheaths that insulate nerves in the brain and spinal cord.
“MS patients get burning pain in the legs and muscle stiffness and spasms that keep them awake at night,” says John Zajicek, a neurologist at the Peninsula College of Medicine and Dentistry in Plymouth, England. Patients can take potent steroids and other anti-inflammatory drugs, but the effects of these medications can be inconsistent.

now is all there is -but how big is now?

Law & Disorder - Science News

Physicists keep trying to explain why time flows one way
Before the Bang: Some theories propose that the known universe is just a baby bubble of spacetime that budded off a preexisting space. Other baby universes might have formed the same way, but some with time flowing in the opposite direction, preserving time symmetry for the entire cosmos. Kelly Ann McCann
In a famous passage from his 1938 book The Realm of Truth, the Spanish-American philosopher George Santayana compared time to a flame running along a fuse. The flame’s position marked the present moment, speeding forward but never backward as the fuse disappeared behind it. “The essence of nowness,” Santayana remarked, “runs like fire along the fuse of time.” Each spark along the fuse represents one of the “nows” that transform the future into the past and “combine perfectly to form the unchangeable truth of history.”
It’s far from a perfect analogy. A flame flitting along a wire doesn’t fully capture the quirky features of time that perplex physicists pondering relativity and quantum mechanics, for example. But Santayana’s sparks do illustrate one of time’s most enduring and puzzling properties — its irreversibility.
Time always, always marches forward into the future. You can travel into the future just by breathing, but the past is accessible only in memories and other records. Time flies in one direction — like an arrow — and never makes a U-turn. Popcorn never unpops, eggs never unscramble and you can’t put an exploded stick of dynamite back together again.
“This difference between the past and the future shows up in physics, it shows up in philosophy, it shows up in biology and psychology and all these different things,” says theoretical physicist Sean Carroll of Caltech. “The arrow of time absolutely pervades the way that we think about the universe.”
But peculiarly, the physical laws governing the universe do not recognize this temporal imperative. Equations describing the forces that guide matter in motion work just as well going backward in time as forward. A microworld video of bouncing molecules would need time stamps to distinguish forward from reverse — on a molecular scale, time has no direction. In the big world of bouncing basketballs, though, the clock is always running, and its hands never reverse the direction of their rotation.
For more than a century, the emergence of time’s arrow from time-blind laws of nature has confused physicists and philosophers alike. And even though it seems that more solutions for this mystery have been proposed than apps for the iPhone, new attempts at explanation continue to appear as regularly as clockwork. Some of the latest proposals suggest that time’s mystery may be an essential subplot in an even grander drama involving the origin of the cosmos itself.
Time gets messy
Although there’s no complete agreement on the precise source of time’s arrow, most experts concur that it has something to do with entropy, the ever-increasing disorder of things required by the second law of thermodynamics. As time goes by, disorder increases (or at least stays the same) in any system isolated from external influences.
Sadly, though, explaining time's arrow by appeal to the second law alone doesn't solve the puzzle. Sure, rising entropy defines a direction of time, but only until everything is in a state of equilibrium — in technical terms, all messed up. And as the Austrian physicist Ludwig Boltzmann explained in the 19th century, "all messed up" is by far the most probable way for things to be. It should be an enormously lucky break, like drawing a royal flush on every hand in an all-night poker game, for the entropy of the universe to be perceptibly less than the maximum amount possible. So by all odds, everything should already be all messed up — and there should therefore be no arrow of time.
But that’s not the way the universe is. As messy as things are, they aren’t as messy as they could be, and so the fuse of cosmic time can continue to burn. In other words, entropy in the universe was low enough in the past to have plenty of room to keep getting higher, and it is that quest toward disarray that drives time’s arrow in its singular direction. Explaining time’s arrow requires not only the second law, then, but also some reason why entropy used to be so much lower — specifically, why it was low when the universal clock began ticking with the lighting of the cosmic fuse in the Big Bang.
“Trying to understand why you can mix cream into coffee but not unmix them takes us back to the Big Bang, takes us back to questions of the origin of our observable universe,” Carroll said in February in San Diego at the annual meeting of the American Association for the Advancement of Science.
Before the Bang
From the instant of the Big Bang, about 13.7 billion years ago, space has been expanding. Invoking this expansion to explain the flow of time in daily life has become a standard strategy for solving time’s mystery. That approach dates to half a century ago, when astronomer Thomas Gold was apparently the first to link the thermodynamic arrow of time defined by the second law to the cosmic arrow defined by the Big Bang–induced expansion. In various forms, this approach argues that expanding space allows entropy to increase however low or high it started. Even if entropy starts high, expansion permits it to grow even higher. Consequently it continues to rise, and the universal clock keeps on ticking.
Carroll, though, in his new book From Eternity to Here, points out (as others have before him) that this solution simply assumes the existence of time’s direction without explaining it. Basically it just defines the Big Bang as a point in the “past” from which time flows in one direction. That scenario does not preserve the parity between the two time directions found in the universe’s basic equations. Finding a complete explanation, Carroll proposes, will require reaching even farther back into time, to before the Big Bang.
“You often hear cosmologists say that the Big Bang is the moment when space and time began, there’s no such thing as before the Big Bang,” Carroll said at the AAAS meeting. “The truth is the Big Bang is the moment where our understanding ends. We don’t know what happened before the Big Bang, but it’s absolutely possible that something did.”
In fact, many cosmologists today seriously study the possibility that all sorts of things happened before the Big Bang, and that the universe it created is just one among a multitude of distinct spacetime bubbles, coating the surface of eternity like the froth on a mug of beer (SN: 6/6/09, p. 26). This complex "multiverse" could contain countless individual universes, each born in a Big Bang of its own in the form of a baby bubble that then severed the umbilical wormhole linking it to a primordial emptiness.
That emptiness, Carroll suggests, would be a high-entropy environment technically known as de Sitter space. "Empty," however, is not a precisely correct description. Because of quantum physics — specifically, the Heisenberg uncertainty principle — an utterly empty space is impermissible. Fluctuations of energy are unavoidable, and on rare occasion one such fluctuation will be huge enough to burst a whole new spacetime bubble into existence — a baby universe. That baby could expand into just the sort of thing that human physicists see in the one bubble they can examine from within.
“Every so often a fluctuation will make a little dollop of universe here, dominated by energy that makes it expand really, really fast,” Carroll explained. “That energy can stick around for a while before it turns into ordinary matter and radiation, and the whole scenario would look just like our Big Bang.”
In this way, the high-entropy empty spacetime that existed before the Big Bang can always increase its entropy even more — by giving birth to a baby universe. Although the baby would have low entropy, the total entropy of the system (mother de Sitter space plus baby) would be higher, preserving the second law. After pinching itself away from the mother space, the low-entropy baby will expand and the second law will drive a direction of time as the baby’s entropy rises. Eventually, the baby universe’s entropy will reach a maximum, becoming just like its timeless de Sitter space parent. And then it could give birth to baby universes of its own.

looking for gravity waves? come to my house.

Gravity waves 'around the corner' : Nature News

Published online 19 August 2009 | Nature | doi:10.1038/news.2009.844

News

Gravity waves 'around the corner'

Sensitive search fails to find ripples in space, but boosts hopes for future hunts.

Crab Nebula: Supernovas, such as the one which created the Crab Nebula, should send out bursts of gravity waves. NASA

The hunt for gravitational waves may not have found the elusive ripples in space-time predicted by Albert Einstein, but the latest results from the most sensitive survey to date are providing clear insight into the origins and fabric of the Universe.

General relativity predicts that gravitational waves are generated by accelerating masses. Violent yet rare events, such as a supernova explosion or the collision of two black holes, should make the biggest and most detectable waves.

A more pervasive yet weaker source of waves should be the stochastic gravitational wave background (SGWB) that was mostly created in the turmoil immediately after the Big Bang, and which has spread unhindered through the Universe ever since.

The Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, based in Washington state and Louisiana, look for these cosmic gravitational waves by measuring any slight disturbance to laser beams that shuttle between heavy mirrors held kilometres apart. Whereas the gravitational wave signal from a distinct event, such as a black-hole merger, would appear as a spike in the LIGO data, the SGWB is a murmur that is more difficult to detect.
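The spike-versus-murmur contrast is easy to see in a toy numerical sketch (a generic signal-processing illustration, not LIGO's actual analysis pipeline; all numbers here are invented): a transient burst sticks out of a single noisy record, while a weak stochastic signal common to two detectors only emerges once their outputs are cross-correlated and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# a weak stochastic "background" common to both detectors,
# buried under much larger independent instrument noise
background = 0.1 * rng.standard_normal(n)
det1 = background + rng.standard_normal(n)
det2 = background + rng.standard_normal(n)

# a loud transient "burst" (think black-hole merger) in one detector only
det1_with_burst = det1.copy()
det1_with_burst[250_000] += 8.0

# the burst is an obvious spike in a single record...
peak = np.max(np.abs(det1_with_burst))
print(f"burst peak is {peak / det1_with_burst.std():.1f} x the noise level")

# ...but the murmur is invisible in either record alone; cross-
# correlating the detectors averages away their independent noise
# and leaves only the common signal's power, 0.1**2 = 0.01
cross = np.mean(det1 * det2)
print(f"cross-correlation estimate: {cross:.4f} (true common power 0.01)")
```

The longer the two records, the smaller the common power that can be dug out of the noise — which is why detector upgrades and longer observing runs steadily tighten the limits on the SGWB.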

Working with the Virgo Collaboration, which runs a gravitational wave detector near Pisa, Italy, the LIGO team has now analysed what their own detector saw between November 2005 and September 2007. Although LIGO did not find any waves, the teams conclude in Nature that the SGWB is even smaller than LIGO can currently detect. This result rules out some theoretical models of the early Universe that would generate a relatively large background of gravitational waves.

Cosmic predictions

"This is the first time that an experiment directly searching for gravitational waves is essentially going and making a statement about cosmology, about the evolution of the Universe," says Vuk Mandic, an astrophysicist at the University of Minnesota in Minneapolis, and part of the LIGO team. The data also exclude certain cosmological models involving cosmic strings — hypothetical cracks in the fabric of space that are thinner than an atom but have immense gravitational fields.

The LIGO results reduce the upper limit for the size of the SGWB, which had previously been set by indirect measurements. A relatively large SGWB in the very early Universe, for example, would have had a measurable effect on both the cosmic microwave background radiation left over from that time, and the relative amounts of light elements — such as hydrogen, helium and lithium — created within minutes of the Big Bang.

The LIGO and Virgo collaborations are in the process of merging their scientific efforts, and the teams plan to include data and collaborative work from both experiments in all of their future papers. Detector improvements should help Virgo to match LIGO's sensitivity in the next few years, and a series of upgrades to both experiments should increase their sensitivity to the SGWB by more than a thousand times by 2014 — which astrophysicists say is almost certain to be enough to pin down its quarry at last.

"For some 40 years they've been saying that gravity waves are around the corner," says Michael Turner, an astrophysicist at the University of Chicago in Illinois, who was not involved in the research. "And I think for the first time in 40 years that's actually a true statement."

the twisted sex lives of muscovy ducks

The sex wars of ducks : Nature News

Published online 23 December 2009 | Nature | doi:10.1038/news.2009.1159

News

The sex wars of ducks

An evolutionary battle against unwanted fertilization.

The twisted sex life of Muscovy ducks is the result of an evolutionary battle between males and females. iStockphoto

Scientists have elucidated the mechanism by which female ducks thwart forced copulations.

Unwanted sex is an unpleasant fact of life for many female ducks. After carefully selecting a mate, developing a relationship and breeding, a female must face groups of males that did not find mates and want nothing more than a quick fling.

Now a team led by Patricia Brennan, an evolutionary biologist at Yale University in New Haven, Connecticut, has described the morphology of the duck penis and found how the physiology and behaviour of female ducks can help to prevent unwanted sperm from being deposited far inside the oviduct.

Birds of both sexes have a single reproductive and excretory opening — the cloaca. Usually, sperm is transferred from male to female in a brief 'cloacal kiss'. Waterfowl, however, are different. They, like ostriches, have penises. In ducks, these appear through the cloaca very quickly and can be longer than 40 centimetres. Making things more complicated, the male and female genitalia spiral in corkscrew fashion rather than being straight.

Fowl play

"I have long had a fantasy about working out where these enormous penises actually went inside the female and mentioned to [Brennan] years ago that it was too bad we didn't have a perspex female for males to mount," says Tim Birkhead, an avian biologist at the University of Sheffield, UK.

Brennan tried the next best thing. Working with a commercial breeder of Muscovy ducks (Cairina moschata), she observed more than 50 male ducks that were conditioned to ejaculate when shown a stimulating female. After being exposed to attractive females, the males were presented with clear containers of different shapes, and as their penises everted into the containers (the duck equivalent of an erection), the team used high-speed video to capture details of how the penises worked and to determine how the shape of the container affected eversion.

The containers were straight, anticlockwise corkscrews that followed the shape of the Muscovy duck's penis, clockwise corkscrews, or bent at 135° to better mimic the shape of the vagina. "We wanted to know whether the shape that we were seeing in females was an adaptation that was helping them to respond to unwanted sex," says Brennan.

Out for a duck

The team reports in Proceedings of the Royal Society B that eversion of the Muscovy duck penis, to a length of up to 20 centimetres, took about 0.3 seconds in air and 0.5–0.8 seconds in the female-mimicking glass tubes. Ejaculation happened at the moment of maximum eversion.

When penises everted into the clockwise-corkscrew shape mimicking the female vagina, they could not get nearly as far down the tube as they could in the anticlockwise-corkscrew and straight containers. In all cases the males released semen, but their inability to penetrate as far in the clockwise-corkscrew and acute-angled containers suggested that in such environments males would have a lower chance of reproductive success.

Everted male ducks can have penises longer than 40 centimetres. Patricia Brennan

The work backs up earlier research in which Brennan and her colleagues hypothesized that the sexual organs of ducks have evolved as a result of sexual conflict, to prevent sperm from unwanted males fertilizing eggs and to help females maintain control of reproduction even as they endure unwanted sexual encounters.

Indeed, in the latest work, the team observed that sexually receptive females contract and relax their cloacal muscles in a way that could help the male achieve full penetration. But during forced copulations, the females struggle violently, which would reduce the likelihood of fertilization.

"The female presumably relaxes her vagina to allow access. This is telling us a lot about forced copulation. Clearly there is an evolutionary battle of the sexes taking place," says Birkhead.

universal blues

Does a minor key give everyone the blues? : Nature News

Published online 8 January 2010 | Nature | doi:10.1038/news.2010.3

Column: Muse

Does a minor key give everyone the blues?

Can a link between speech patterns and downbeat music prove that minor keys are intrinsically sad, asks Philip Ball.

Why do Handel's Water Music and The Beatles' 'Here Comes The Sun' sound happy, while Albinoni's Adagio and 'Eleanor Rigby' sound sad?

Some might say it's because the first two are in major keys, while the second two are in minor keys. But are the emotional associations of major and minor intrinsic to the notes themselves, or are they culturally imposed? Many music psychologists suspect the latter, but a new study now suggests that there's something fundamentally similar about major or minor keys and the properties of happy or sad speech, respectively.

Nigel Tufnel of spoof rock band Spinal Tap famously called D minor "the saddest of all keys". But are minor keys intrinsically sad? SPINAL TAP PRODUCTION / THE KOBAL COLLECTION

Neuroscientist Daniel Bowling and colleagues at Duke University in Durham, North Carolina, compared the sound spectra — the profiles of different acoustic frequencies — of speech with those in Western classical music and Finnish folk songs. They found that the spectra in major-key music are close to those in excited speech, while the spectra of minor-key music are more similar to subdued speech.
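The comparison turns on frequency ratios, and a synthetic sketch gives a feel for what "spectra" means here (illustrative only — not the study's method; the 220 Hz root and pure sine tones are assumptions): build major and minor triads from pure tones and read their defining ratios off the Fourier spectrum. In just intonation, a major triad stacks frequencies as 4:5:6 and a minor triad as 10:12:15.

```python
import numpy as np

SAMPLE_RATE = 8000   # Hz; one second of audio gives 1-Hz-wide spectral bins
DURATION = 1.0       # seconds

def triad_spectrum_peaks(root_hz, ratios):
    """Synthesize a chord of pure tones at root_hz * ratios and
    return the frequencies of the strongest spectral peaks."""
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    chord = sum(np.sin(2 * np.pi * root_hz * r * t) for r in ratios)
    spectrum = np.abs(np.fft.rfft(chord))
    freqs = np.fft.rfftfreq(len(chord), d=1 / SAMPLE_RATE)
    top = np.argsort(spectrum)[-len(ratios):]   # indices of the strongest bins
    return sorted(freqs[i] for i in top)

# just-intonation triads on A (220 Hz):
major = triad_spectrum_peaks(220, (1, 5/4, 3/2))   # ratio 4:5:6
minor = triad_spectrum_peaks(220, (1, 6/5, 3/2))   # ratio 10:12:15
print("major triad peaks (Hz):", major)
print("minor triad peaks (Hz):", minor)
```

The only difference between the two spectra is the middle peak — the major third (5/4 above the root) versus the minor third (6/5) — which is the kind of interval ratio at issue in comparing music with speech.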

Most cultures share the same acoustic characteristics of happy or sad speech, the former being relatively fast and loud, and the latter slower and quieter. There's good reason to believe that music mimics some of these universal emotional behaviours, supplying a universal vocabulary that permits listeners sometimes to deduce the intended emotion in unfamiliar music.

For example, Western listeners can judge fairly reliably — based largely on tempo — whether pieces of Kyrgyzstani, Hindustani and Navajo Native American music were meant to be joyous or sad. A study of the Mafa people of Cameroon, who had never heard Western music, also found that they could guess whether extracts were intended to be happy, sad or fearful. So although it's simplistic to suppose that all music is happy or sad, these crude universal indicators of emotion do seem to work across cultural boundaries.

The minor fall and the major lift

So is musical key another of these universal indicators, as Bowling's study suggests? The idea that the minor key is intrinsically anguished, while the major is joyful, is so deeply ingrained in Western listeners that many have deemed this to be a natural principle of music. This notion was influentially argued by musicologist Deryck Cooke in his 1959 book The Language of Music.

Cooke pointed out that musicians throughout the ages have used minor keys for vocal music with an explicitly sad content, and major keys for happy lyrics. But he failed to acknowledge that this might simply be a matter of cultural convention rather than an innate property of the music. And when faced with the fact that some cultures, such as Spanish and Slavic, use minor keys for happy music, he offered the patronizing suggestion that such rustic people were inured to a hard life and didn't expect to be happy.


No such chauvinism afflicts the latest work from Bowling and colleagues. But their conclusions are still open to question. For one thing, they don't establish that people actually hear in music the characteristic acoustic features that they identify. Also, they assume that the ratios of frequencies sounded simultaneously in speech can be compared with the ratios of frequencies sounded sequentially in music. And most troublingly, major-type frequency ratios dominate the spectra of both excited and subdued speech, but merely less so in the latter case.

This work also faces the problem that some cultures — including Europe before the Renaissance, not to mention the ancient Greeks — don't link minor keys to sadness. Western listeners sometimes misjudge the emotional quality of Javanese music that uses a scale with similarities to the minor mode yet is deemed happy by the musicians. So even if a fundamental sadness is present in the minor mode, it seems likely to be weak and easily overwritten by acculturation. It's possible even in the Western idiom to write happy minor-key music — Van Morrison's 'Moondance', for example — or sad major-key music, such as Billie Holiday's 'No Good Man'. It seems too soon to conclude that minor keys give everyone the blues.

Philip Ball's book The Music Instinct is published in February by Bodley Head.


ya think?

Sex bias blights drug studies : Nature News

Published online 16 March 2010 | Nature 464, 332-333 (2010) | doi:10.1038/464332b

News

Sex bias blights drug studies

Omission of females is skewing results.

Diseases such as depression can disproportionately affect women. M. Constantini/Ès Photography/Corbis

The typical patient with chronic pain is a 55-year-old woman — the typical chronic-pain study subject is an 8-week-old male mouse. To pain researcher Jeffrey Mogil at McGill University in Montreal, Canada, that discrepancy is a telling example of the problem that he and other neuroscientists discussed last week at a workshop held in San Francisco, California: the serious under-representation of females in biomedical studies.

The lack of female participation, which extends from basic research in animals to clinical trials in humans, has obvious consequences for women, not least a paucity of effective drug treatments for diseases that predominantly affect them. But it also affects men, for example when drug candidates fail to get regulatory approval because they don't work in women in late-stage clinical trials, or have severe side effects in women that are not seen in men. Such failures might be spotted earlier if more preclinical work were done in female model animals, according to researchers at the Workshop on Sex Differences and Implications for Translational Neuroscience Research, convened on 8–9 March by the US Institute of Medicine's Forum on Neuroscience and Nervous System Disorders.

"This is an issue of enormous importance," says biologist Irving Zucker at the University of California, Berkeley. "In a number of disciplines, researchers simply don't study females, and there is so much evidence for sex differences at all levels of biological organization that to only study males, and assume the results apply to females, is just wrong."

Many diseases disproportionately affect one sex: chronic pain, depression and autoimmune disease, for instance, more often affect women, with addiction and cardiovascular disease disproportionately affecting men. The NIH Revitalization Act of 1993 attempted to take account of this by requiring clinical trials funded by the National Institutes of Health to include adequate numbers of women to detect differing treatment effects.

The US Government Accountability Office reported in 2000 that participation of women in NIH-funded trials had increased substantially (see www.gao.gov/archive/2000/he00096.pdf). The following year, it noted similar progress in late-stage drug trials overseen by the Food and Drug Administration (see www.gao.gov/new.items/d01754.pdf).

But data from clinical trials are often not analysed separately on the basis of sex. Such analyses could reveal whether a drug has different adverse effects in men and women, for example.
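
Such a sex-stratified analysis is conceptually simple. The sketch below is purely illustrative (the records, names and numbers are invented, not drawn from any trial discussed here): it tallies adverse-event rates separately for each sex, the kind of breakdown the workshop participants say is often missing.

```python
# Illustrative sketch: comparing adverse-event rates by sex in trial data.
# All records and numbers here are invented for demonstration.

def adverse_event_rates(records):
    """Return the adverse-event rate for each sex in a list of
    (sex, had_adverse_event) records."""
    counts = {}  # sex -> (events, total)
    for sex, event in records:
        events, total = counts.get(sex, (0, 0))
        counts[sex] = (events + int(event), total + 1)
    return {sex: events / total for sex, (events, total) in counts.items()}

# Hypothetical trial records: (sex, adverse event observed?)
trial = [("F", True), ("F", True), ("F", False), ("F", True),
         ("M", False), ("M", False), ("M", True), ("M", False)]

rates = adverse_event_rates(trial)
print(rates)  # → {'F': 0.75, 'M': 0.25}
```

A pooled analysis of the same invented data would report a single 50% rate and hide the disparity entirely, which is the point the researchers are making.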

And the accountability office also noted in 2001 that although women made up more than half of full-scale safety and efficacy trials, they formed just 22% of the participants in initial, small-scale safety studies.

Back at the bench, lab animals are still predominantly male, even in studies of diseases that disproportionately affect women. Investigators tend to prefer male animals because it is thought that females might introduce variability through factors such as their oestrous cycles. But Mogil has reported that in common tests used to measure responses to pain, data from female mice are no more variable than those from males (J. S. Mogil and M. L. Chanda Pain 117, 1–5; 2005).
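
Mogil's point is that variability can be measured rather than assumed. A toy sketch (the scores are invented for illustration; they are not Mogil's data) of comparing spread between groups using the coefficient of variation, a scale-free measure of variability:

```python
# Toy comparison of measurement variability between two groups.
# The scores below are invented for illustration, not Mogil's data.
from statistics import mean, stdev

def coefficient_of_variation(values):
    """Standard deviation relative to the mean: a scale-free spread measure."""
    return stdev(values) / mean(values)

# Hypothetical pain-response scores for male and female mice.
males   = [4.1, 5.0, 4.6, 5.3, 4.8]
females = [4.3, 4.9, 4.7, 5.1, 4.5]

print(round(coefficient_of_variation(males), 3))
print(round(coefficient_of_variation(females), 3))
# If the two values are similar, sex adds little extra variability.
```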

The problem is particularly acute in neuroscience, in which the ratio of male- to female-only studies is 5.5 to 1. But under-representation of females occurs in most fields of basic research, according to an unpublished analysis by Zucker and postdoctoral researcher Annaliese Beery. They investigated the use of female and male animals in research published during 2009 in 10 fields across 42 journals. They found that studies in eight of the fields used only male animals more often than only females, and that the data were often not analysed by sex. In two journals that the researchers investigated back to 1909, the proportion of animal studies using only males had actually grown since the early twentieth century. The authors speculate that this is because oestrous cycles in guinea pigs, rats and mice were first clearly characterized only in the 1920s.

Researchers such as Karen Berkley of Florida State University in Tallahassee have been trying to boost female representation for decades, and Berkley feels that some progress has been made, citing, for example, the launch of an online journal in the field, Biology of Sex Differences, and an overall growth in research on sex differences.

But at the workshop, Berkley and other scientists agreed that further steps were needed to address the problem. Many endorsed the idea that journals should require their authors to report the sex of animals used in the research. The London-based National Centre for the Replacement, Refinement and Reduction of Animals in Research is developing a set of research reporting guidelines that will call for authors to disclose the sex of animals. Seán Murphy, chief editor of the Journal of Neurochemistry, says that there is "interest" in the guidelines among journal editors at its publisher, Wiley-Blackwell.

Researchers at the workshop said that funding agencies, too, could ask grant applicants to specify the sex of the animals they propose studying and justify their decision whenever they chose only male animals. Funders could also change the status quo, the researchers claimed, by supporting more work on sex differences, although the NIH would not comment on whether this was under consideration.


The metabolic secrets of good runners : Nature News

Published online 26 May 2010 | Nature | doi:10.1038/news.2010.266

The metabolic secrets of good runners

Chemical changes in runners linked to physical fitness.

Runner winning a marathon: the metabolic changes that take place in marathon runners have been pinned down. (B. Magnus / iStockphoto)

A healthy heart and svelte physique are not the only physical changes wrought by exercise: researchers have also identified a host of metabolic changes that occur during exercise in physically fit athletes.

These changes, described today in Science Translational Medicine [1], suggest that exercise revs up the pathways that break down stored sugars, lipids and amino acids, as well as improving blood-sugar control. The results might eventually lead to dietary supplements that boost athletic performance or invigorate patients suffering from debilitating diseases, says study author Robert Gerszten, a clinician scientist at Massachusetts General Hospital in Boston.

Metabolic profiling has lagged behind large-scale studies of gene and protein expression, in part because the collection of metabolites in the human body — sometimes referred to as the metabolome — is so complex. "The alphabet for the DNA world is four letters long, and for the protein world it's twenty amino acids," says David Wishart, a metabolomics researcher at the University of Alberta in Canada who was not involved with the new work. "The alphabet for metabolites is about 8,000 different compounds, so it's a tough language to learn."

Gerszten and his colleagues assayed 210 of these metabolites to fill a gap in our understanding of the effects of exercise. "It's well known that exercise protects against cardiovascular and metabolic diseases and predicts long-term survival," says Gerszten. "But how exercise confers its salutary effects is less understood."

Feel the burn

The team used mass spectrometry to measure changes in the metabolic profiles of 70 people before and after a ten-minute run on a treadmill. They detected changes in 21 compounds, including several not previously linked to exercise.
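
At its core, such a profiling study is a paired before/after comparison for each metabolite. A minimal sketch of the idea (the metabolite names and plasma values below are invented for illustration, not the study's data): compute each subject's fold change for each compound, then flag compounds whose average change departs notably from one.

```python
# Illustrative sketch of a paired before/after metabolite comparison.
# Metabolite names and values are invented, not taken from the study.

def mean_fold_change(before, after):
    """Average per-subject fold change (after / before) for one metabolite."""
    ratios = [a / b for b, a in zip(before, after)]
    return sum(ratios) / len(ratios)

# Hypothetical plasma levels per subject, pre- and post-exercise.
profiles = {
    "glycerol": ([1.0, 1.2, 0.9], [2.1, 2.4, 1.9]),
    "metab_x":  ([0.8, 1.1, 1.0], [0.8, 1.0, 1.1]),
}

# Flag metabolites whose mean fold change departs notably from 1.
changed = {name: mean_fold_change(b, a)
           for name, (b, a) in profiles.items()
           if abs(mean_fold_change(b, a) - 1.0) > 0.5}
print(changed)  # glycerol roughly doubles; metab_x is unchanged
```

A real analysis would add a paired statistical test and a multiple-comparison correction across all 210 assayed compounds, but the basic shape is the same.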

Some of these changes tended to be more extreme in study participants who had a greater peak oxygen intake — an indicator of physical fitness. Running the Boston Marathon also brought on some of these changes, particularly in those runners who finished the 26.2-mile course within four hours.

And samples from 302 participants in a large health study called the Framingham Heart Study indicated that the concentration of two metabolites associated with fitness — glycerol and glutamine — also varied with resting heart rate, another measure of physical fitness.

It is too soon to say whether these changes actually contribute to physical stamina — so far the study has shown only that the levels of these metabolites change. Gerszten says that his team is now addressing this lingering question using animal studies. But the team did feed muscle cells grown in culture a mixture of five metabolites that were all increased by running — glycerol, niacinamide, glucose-6-phosphate, pantothenate and succinate. In response, the cells immediately upped their expression of a protein called NUR77, which regulates the use of glucose and lipids in skeletal muscle.

The study was remarkably thorough, says Gary Siuzdak, senior director of the Center for Mass Spectrometry at the Scripps Research Institute in La Jolla, California. A valuable follow-up, he adds, would be to profile metabolites in an untargeted way rather than restricting the assay to the initial set of two hundred specific compounds. This approach is more difficult: researchers perform less-specific initial assays to look for differences in the chemical composition of their samples, and must then double back to identify the chemicals, a process that Siuzdak describes as both "tedious" and "really quite daunting".

But the approach could be worth the trouble, he says. Siuzdak and his colleagues recently found that the balance of oxidized and reduced metabolites regulates stem-cell differentiation [2] — an observation that, he says, they would have missed had they not taken an unbiased approach to looking for metabolic differences.

Meanwhile, Wishart says the work raises the bar for metabolic profiling because of the sheer number of metabolites assayed. The study should also serve as a reminder of the need to focus on metabolites, he adds. "In our hunt to find genes and proteins to explain everything, we often forget the importance of small molecules."

  • References

    1. Lewis, G. D. et al. Sci. Transl. Med. 2, 33ra37 (2010).
    2. Yanes, O. et al. Nature Chem. Biol. doi:10.1038/nchembio.364 (2010).

deflating crazy brains

Antipsychotic deflates the brain : Nature News

Published online 6 June 2010 | Nature | doi:10.1038/news.2010.281

Antipsychotic deflates the brain

Drug for schizophrenia causes side effects by shrinking part of the brain.

MRI brain scan: a commonly prescribed antipsychotic drug shrinks the brain within hours of administration. (Jim Wehtje / Photodisc / Getty Images)

A leading antipsychotic drug temporarily reduces the size of a brain region that controls movement and coordination, causing distressing side effects such as shaking, drooling and restless leg syndrome.

Just two hours after injection with haloperidol, an antipsychotic commonly prescribed to treat schizophrenia, healthy volunteers experienced impaired motor abilities that coincided with diminished grey-matter volume in the striatum [1] — a brain region that mediates movement.

"We've seen changes in the brain before, but to see significant remodelling of the striatum within a couple of hours is staggering," says Clare Parish at the Howard Florey Institute for brain research in Melbourne, Australia, who was not involved in the study. "Our viewpoint was that only chemical changes would happen in such a short time."

In functional magnetic resonance imaging (fMRI) scans, the authors observed the participants' striatal volume diminish, along with changes to the structure of the motor circuitry in their brains. Furthermore, their reaction times slowed in a computer test taken after the treatment, indicating the onset of the lapses in motor control that affect many patients on antipsychotics.

Dopamine downsizing

Haloperidol has a number of side effects, although many of these are minor and recede within weeks of starting treatment. With few better alternatives, psychiatrists have prescribed the drug for more than 40 years to treat people suffering from hallucinations, delirium, delusions and hyperactivity.

Like most antipsychotics, haloperidol blocks the D2 receptor, which is sensitive to dopamine. The drug stifles the elevated dopamine activity that is thought to underlie psychosis. D2 receptors are abundant in the striatum, where their activity regulates gene expression. But, until now, no one knew that blocking the receptors would rapidly alter the brain's physical structure.

Brain striata: haloperidol shrank volunteers' striata in two hours, but they bounced back within a day. (Tost, H. et al.)

"This is the fastest change in brain volume ever seen," says Andreas Meyer-Lindenberg, professor of psychiatry and psychotherapy at the University of Heidelberg in Mannheim, Germany, and a lead author on the report in Nature Neuroscience1. "Studies have found that the volume of brain regions changes over a number of days, but this is in one to two hours, and in half that time it bounces back."

Within a day, volunteers' brains returned to almost their original size as the effects of the single haloperidol dose subsided. Meyer-Lindenberg says this result should alleviate fears that the drug destroys brain cells. "We know it's not killing neurons because the brain rebounds," he says.

Instead, the team suggests that haloperidol downsizes synapses, the junctions through which neurons communicate. Meyer-Lindenberg speculates that the change is mediated by the protein BDNF, which is involved in synapse growth and diminishes in response to antipsychotic treatments in mice.

The bigger picture

"I think this study will cause worry to some," says Shitij Kapur, dean of the Institute of Psychiatry at King's College London. To counteract those fears, he notes that the brain changes caused by the drug seem to be reversible and that the dose used in the study was a little higher than that usually given to patients who had not taken the drug before.

The findings may also hint at why people with bipolar disorder have reduced grey matter in parts of their brains after manic mood swings [2]. Andrew McIntosh at the University of Edinburgh, UK, says that the connection between the brain-shrinking effects of an antipsychotic reported in this study and the grey-matter reduction he and others have observed in schizophrenic and bipolar patients is "a bit uncertain but this paper definitely makes this worthy of further investigation".

Furthermore, D2 receptors in parts of the striatum have been associated with addiction. This has led Parish to ponder whether structural changes underlie reward-seeking behaviours. "You wonder what sort of acute changes happen through those receptors in the addicted brain because you hear of cases where addiction happens after just one exposure," she says.

  • References

    1. Tost, H. et al. Nature Neurosci. doi:10.1038/nn.2572 (2010).
    2. Moorhead, T. et al. Biol. Psychiatry 62, 894-900 (2007).

so you think you can design life?

Synthetic-biology competition launches : Nature News

Published online 2 June 2010 | Nature | doi:10.1038/news.2010.271

Synthetic-biology competition launches

Genome-design contest aims to engineer cress for commercial uses.

Thale cress could be genetically redesigned to munch air pollutants. (A. Jagel / Blickwinkel / Still Pictures)

A Japanese competition launched last week is aiming to help the burgeoning science of synthetic biology to deliver commercial applications.

Last month's unveiling of the first fully functioning cell with a synthetic genome (see 'Researchers start up cell with synthetic genome') marked a milestone in scientists' ability to manipulate the code of life. But efforts to engineer specific genetic sequences and integrate them with bacteria or plant genomes so that they perform useful functions have faced a variety of hurdles.

These genetic sequences can give the host organisms the ability to make proteins with useful properties: producing useful chemicals such as biofuels or drugs; acting as biochemical sensors; or breaking down environmental pollutants, for example. But when these genes are integrated into living cells, they are often frustratingly unpredictable and sometimes incompatible with the host organism (see 'Five hard truths for synthetic biology').

The International Rational Genome-Design Contest (GenoCon), launched by the Yokohama-based Bioinformatics and Systems Engineering (BASE) division of Japan's RIKEN research institute, is now hoping that its participants will optimize genetic sequences so that they can be used practically. No prize money has been offered; honour, it seems, should be sufficient reward.

The inaugural challenge asks contestants to genetically redesign thale cress (Arabidopsis thaliana) so that it can metabolize the airborne pollutant formaldehyde. Researchers have previously inserted genes into thale cress that give it a limited ability to absorb formaldehyde, and GenoCon's goal is to improve on this.

Commercial tack

GenoCon is not the first synthetic-biology contest. Since 2004, the annual International Genetically Engineered Machine (iGEM) competition, run by the Massachusetts Institute of Technology, Cambridge, has asked teams of undergraduates to use genetic components that give bacteria novel features. In the past, contestants have produced microbes that act as biosensors for arsenic, smell of bananas or wintergreen, or come in a rainbow of different hues.

Many of the genetic parts that confer these properties are developed anew by competitors, and must be registered in a growing, open-access repository.

But GenoCon is taking a different tack. Whereas iGEM's participants get to choose how they transform bacteria, GenoCon will focus on specific challenges that have a clear environmental application. BASE director Tetsuro Toyoda hopes that this will spark wider interest among scientists, the public and industry, and prove the value of synthetic biology to funders.

The competition also hopes to attract researchers familiar with bioinformatics who perhaps lack the experimental resources to build what they design. Participants have until the end of September to assemble genetic code — within a 'virtual laboratory' on BASE's website — that will make thale cress an effective formaldehyde detoxifier. Judges will pick the most promising 20 or 30 sequences, which RIKEN and affiliated research institutes will use to create plants with the given sequences integrated into the cress genome. The plants will be housed together in a formaldehyde-rich environment — normally toxic to the plants — and tested for their ability to survive. Next year, the prize will go to the design of the best drought-resistant cress, and participants will also be invited to improve on winning designs from previous contests.

Masayuki Yamamura, a bioinformatics researcher at the Tokyo Institute of Technology, whose group received gold medals from iGEM in 2007, 2008 and 2009, believes that GenoCon will make a huge contribution to synthetic biology, both in Japan and internationally. He points out that because iGEM's parts registry is open access, the sequences cannot be used in patents. This has deterred the biotech industry from getting deeply involved in iGEM, argues Yamamura: "Those from industry have mostly just been looking on from the sidelines."

By contrast, GenoCon is "expecting small-scale business groups and university people with patented DNA sequences to use our platform to find much more optimized versions of the sequences claimed in the patent", Toyoda says. Results will normally be made public, but participating companies will have the option to keep sequences secret if they are negotiating joint patent or licensing agreements with other businesses. "This framework is what we call open-optimization research," says Toyoda. Randy Rettberg, director of iGEM, declined to comment on the GenoCon competition.

Talent scouting

The organizers hope that GenoCon will attract budding scientists through its separate category for high-school students. Yutaka Mizokami, a biology teacher at the Yokohama Science Frontier High School (see 'Reading, writing and nanofabrication'), says that he expects several teams from his school to join, and thinks that most of the other 125 Super Science High Schools in Japan (which are given extra funding to accelerate science teaching) will also put up teams.

Adam Arkin, a bioengineering expert based at the University of California, Berkeley, and at the Lawrence Berkeley National Laboratory, says that GenoCon "beautifully refocuses students and their mentors on the design aspects of synthetic biology".

Arkin is also co-director of the International Open Facility Advancing Biotechnology (BIOFAB), based at the Joint BioEnergy Institute in Emeryville, which bills itself as "the world's first biological design-build facility". He and other BIOFAB scientists are helping to coordinate another nascent synthetic-biology competition to improve biological parts, called the Critical Assessment of Genetically Engineered Networks (CAGEN), which will see its first competition run until 2012.

"This constellation of competitions — iGEM, CAGEN and GenoCon — will drive a great deal of international conversation and collaboration, and this can only be stimulating for the field as a whole," says Arkin.