Tuesday, September 30, 2008

believe me -it's worth it

Frugal nature: Euler and the calculus of variations
by Phil Wilson



Aeneas tells Dido about the fall of Troy. Baron Pierre-Narcisse Guérin.

Denied by her brother, the killer of her husband, a share of the golden throne of the ancient Phoenician city of Tyre, Dido convinces her brother's servants and some senators to flee with her across the sea in boats laden with her husband's gold. After a brief stop in Cyprus to pick up a priest and to "acquire" some wives for the men, the boats continue, rather lower in the water, to the Northern coast of Africa. Landing in modern-day Tunisia, Dido requests a small piece of land to rest on, only for a little while, and only as big as could be surrounded by the leather from a single oxhide. "Sure," the locals probably thought, "We can spare such a trifling bit of land."

Neither history nor legend recalls who wielded the knife, but Dido arranged to have the oxhide cut into very thin strips, which tied together were long enough to surround an entire hill. "That'll do nicely," we can imagine Dido thinking, and I'm sure we can all make a pretty good guess as to what the locals were thinking too. The city of Carthage was founded on this hill named Byrsa ("Oxhide"), and the civilisation it fostered became a major centre of culture and trade for 668 years until its destruction in 146 BCE, although the city lives on as a suburb of Tunis.

Celebrated in perishable poems and paintings, Queen Dido has been given more durable fame by mathematicians, who have named the following problem after her:

Given a choice from all planar closed curves [curves in the plane whose endpoints join up] of equal perimeter, which encloses the maximum area?

The Dido, or isoperimetric, problem is an example of a class of problems in which a given quantity (here the enclosed area) is to be maximised. But the Dido problem is also equivalent to asking which of all planar closed curves of fixed area minimises the perimeter, and so is an example of the more general problem of finding an extremal value — a maximum or minimum. Extremising problems have been an obsession among physicists and mathematicians for at least the last 400 years. A first mention comes from much further back: around 1700 years ago Pappus of Alexandria noted that the hexagonal honeycombs of bees hold more honey than squares or triangles would.
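If you want to see the circle emerge as the answer, here is a minimal numerical sketch (the code and the choice of perimeter are mine, purely for illustration): among regular polygons of a fixed perimeter, the enclosed area grows with the number of sides and creeps up towards the circle's value of P²/(4π).

```python
import math

P = 1.0  # a fixed perimeter, chosen arbitrarily for the illustration

def regular_polygon_area(n, perimeter=P):
    """Area of a regular n-gon with the given perimeter: P^2 / (4 n tan(pi/n))."""
    return perimeter**2 / (4 * n * math.tan(math.pi / n))

for n in (3, 4, 6, 12, 100, 1000):
    print(n, regular_polygon_area(n))

print("circle:", P**2 / (4 * math.pi))  # the limit, and the largest possible area
```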

Extremising principles exhibit an apparent universality and deductive power that has led many otherwise rational minds to take a few nervous steps backwards and invoke a god or other mystical unifying animistic force to account for them. One such mind belonged to Leonhard Euler, the Swiss mathematician whose 300th anniversary we celebrate this year. This reaction is perhaps less surprising in Euler's case, since he entered the University of Basel at the age of 14 to study theology. Luckily for maths he spent his Saturdays hanging out with the great mathematician Johann Bernoulli.

Bernoulli was great, but Euler was greater, and his lifetime output of over 800 books and papers included the foundations of still-vital research fields today, including fluid dynamics, celestial mechanics, number theory, and topology. Another Plus article this year has documented the colourful life of this father of 13 who would knock out a paper before dinner while dangling a baby on his knee. We will focus on Euler's calculus of variations, a method applicable to solving the entire class of extremising problems.

Extreme answers to tricky questions

The front page of Euler's 1744 book. Image courtesy Posner Collection, Carnegie Mellon University Libraries, Pittsburgh PA, USA.



Mathematicians and scientists had been playing with the ideas which Euler systematised in his 1744 book Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, sive solutio problematis isoperimetrici lattissimo sensu accepti (A method for finding curved lines enjoying properties of maximum or minimum, or solution of isoperimetric problems in the broadest accepted sense). Indeed, Euler himself first published on this topic in 1732, when he wrote the article De linea brevissima in superficie quacunque duo quaelibet puncta jungente (On the shortest line joining two points on a surface) based on an assignment given to him by Bernoulli — persuasive encouragement to always do your homework. It was in his 1744 book, though, that Euler transformed a set of special cases into a systematic approach to general problems: the calculus of variations was born.

Euler coined the term calculus of variations, or variational calculus, based on the notation of Joseph-Louis Lagrange, whose work formalised some of the underlying concepts. In their joint honour, the central equation of the calculus of variations is called the Euler-Lagrange equation. But why call it a calculus at all? What has it to do with the Newton-Leibniz differential calculus we encounter at school?
Figure 1: If the curve is flat, as the one shown on the top, then a small change in x corresponds to a small change in y. The steep curve below has the same change in x corresponding to a larger change in y. To find the slope at a given point, you calculate the limit of the ratios of changes in x and y as the size of the interval on the x-axis tends to zero. At the highest point of both curves — the maximum — the slope is zero.
Differential calculus is concerned with the rates of change of quantities. Take, for example, the slope of a graph representing a function of one independent variable. We might call the variable x and the function y(x). To work out the slope at a given point x, we move a little to the left of x and then a little to the right of x and measure how the value of the function changes from the leftmost position to the rightmost. The ratio of the change in function value to the change in variable value gets closer and closer to the actual slope of the function as the little bit we move to the left and right shrinks to zero. Applying this process to every point x, you end up with the function's derivative, usually written as dy/dx or y'(x).

Differential calculus enables you to find the stationary points of functions, locations at which the slope is zero; these are either extrema (maxima or minima) or points of inflection of the curve. The stationary points are the values of x which satisfy the equation dy/dx = 0.
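As a rough sketch of both ideas (the curve and the step size below are invented for the example, not taken from the article), you can approximate the slope by a central difference and then look for the point where it crosses zero, the maximum in figure 1:

```python
import numpy as np

def y(x):
    return -(x - 1.0)**2 + 3.0        # an illustrative curve with its maximum at x = 1

def slope(x, h=1e-5):
    """Central-difference approximation to dy/dx."""
    return (y(x + h) - y(x - h)) / (2 * h)

xs = np.linspace(-2.0, 4.0, 601)
i = np.argmin(np.abs(slope(xs)))      # the grid point where the slope is closest to zero
print(xs[i])                          # -> approximately 1.0, the stationary point
```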

The calculus of variations also deals with rates of change. The difference is that this time you are looking at functions of functions, which are called functionals. As a simple example, think of all the non-self-intersecting curves connecting two given points in the plane. For each curve c, you can work out its length l(c) (if you know your calculus, you'll remember that this is done using integrals). If you want to know how length varies over the different curves, you treat c as a variable and the length l(c) as a function of the variable. But now c isn't as simple as our x above — it's not just a placeholder for a number, but a curve. Mathematically, a curve can be written as a function; the straight line in figure 2, for example, consists of all points with co-ordinates (x,2x). In this example the y coordinate of each point (x,y) on the line is equal to 2x, so we can express the curve as the function
y(x) = 2x. In general the length l(c) is a function of a function — a functional.

Figure 2: The line y = 2x.
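To make the idea of a functional concrete, here is a small sketch (the wiggly comparison curve is invented for the example): the length functional assigns a single number to each curve joining (0, 0) and (1, 2), and the line y = 2x of figure 2 gets a smaller number than a wobblier route between the same endpoints.

```python
import numpy as np

def curve_length(f, a=0.0, b=1.0, n=10_000):
    """Approximate the length of the curve y = f(x) for x in [a, b]."""
    x = np.linspace(a, b, n)
    return np.sum(np.hypot(np.diff(x), np.diff(f(x))))

straight = lambda x: 2 * x                                   # the line of figure 2
wiggly = lambda x: 2 * x + 0.2 * np.sin(6 * np.pi * x)       # same endpoints, longer route

print(curve_length(straight))   # ~2.236, i.e. sqrt(5)
print(curve_length(wiggly))     # noticeably larger
```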

The calculus of variations enables you to find stationary points of functionals and the functions at which the extrema occur, the extremising functions. (Mathematically, the process involves finding stationary points of integrals of unknown functions.) In our example, an extremising curve would be one that maximises or minimises curve length.

It turns out that the extremising functions are those which satisfy an ordinary differential equation, the Euler-Lagrange equation. Euler and Lagrange established the mathematical machinery behind this in a formal and systematic way and in full generality for all extremising problems. Importantly, the new systematic theory allowed for the inclusion of constraints, meaning that a whole swathe of problems in which something must be extremised while something else is kept fixed could be solved. Dido, who maximised the area for a fixed perimeter, would have been proud.
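Here is a hedged numerical sketch of that machinery at work (the discretisation, optimiser and endpoints are my own choices, using NumPy and SciPy rather than anything Euler had to hand): minimising the length functional over discretised curves with fixed endpoints recovers the straight line, which is precisely the solution of the Euler-Lagrange equation for this functional.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 41)                      # sample points along the curve

def length(y_interior):
    """Length of the polygonal curve through the fixed endpoints (0,0) and (1,2)."""
    y = np.concatenate(([0.0], y_interior, [2.0]))
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

y0 = np.random.uniform(0.0, 2.0, size=len(x) - 2)  # start from a random wobbly curve
best = minimize(length, y0, method="L-BFGS-B").x

print(np.allclose(best, 2 * x[1:-1], atol=1e-2))   # True: the minimiser is the line y = 2x
```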


Lazy nature



Euler's foundational 1744 book is one of the first (along with the works of Pierre Louis Maupertuis) to present and discuss the physical principle of least action, indicating a deep — and controversial — connection between the calculus of variations and physics. The principle of least action can be stated informally as "nature is frugal": all physical systems — the orbiting planets, the apple that supposedly fell on Newton's head, the movement of photons — behave in a way that minimises the effort required. The word "effort" here is used in a rather vague sense — the quantity is more properly termed the action of the system. So what is this action, and what (or who, we might once have asked) does the minimising? Why should the universe be run along this parsimonious principle anyway?

Euler formulated a precise definition of the action for a body moving without resistance. Once you know what the action of a particular system is, Euler and Lagrange's calculus of variations becomes a powerful tool for deducing the laws of nature. If you know where and when the body started out and where and when it ended up, then you can use variational calculus to find the path between these endpoints that minimises action. According to the least action principle, this is the path the body must take, so the method should give you information on the fundamental laws governing the motion. If you do the calculations, you'll find out that the well-known equations of motion do indeed pop out in the end. Since Euler's time appropriate actions have been defined for all sorts of physical systems. Euler and Lagrange contributed substantially to this work, but we should also mention William Rowan Hamilton, who built on Euler's work to bring the least action principle to its modern form. Least action principles play an important role in modern physics, including the theory of relativity and, as we shall see, in quantum mechanics.
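As a concrete, if simplified, illustration (the mass, time interval and discretisation are invented for the example, and SciPy does the minimising), take a single body moving vertically under gravity with its start and end fixed: write the action as a sum of kinetic minus potential energy over small time steps, minimise it over all intermediate heights, and the path that comes out is the familiar parabola, exactly what the ordinary equations of motion give.

```python
import numpy as np
from scipy.optimize import minimize

m, g = 1.0, 9.81                              # illustrative mass (kg) and gravity (m/s^2)
t = np.linspace(0.0, 1.0, 51)                 # fixed start time and end time
dt = t[1] - t[0]

def action(y_interior):
    """Discretised action: sum of (kinetic - potential) energy over the time steps."""
    y = np.concatenate(([0.0], y_interior, [0.0]))   # fixed start and end heights
    v = np.diff(y) / dt
    y_mid = 0.5 * (y[1:] + y[:-1])
    return np.sum((0.5 * m * v**2 - m * g * y_mid) * dt)

path = minimize(action, np.zeros(len(t) - 2), method="L-BFGS-B").x
exact = 0.5 * g * t * (1.0 - t)               # the textbook parabola for these endpoints

print(np.max(np.abs(path - exact[1:-1])))     # tiny: least action reproduces the parabola
```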
Maths and god: the limits of knowledge

The principle of least action raises two deep and unanswered issues which bump up against the limits of our knowledge and of what is knowable. It's these that have caused scientists to invoke a god as a default answer, a hypothesis which, arguably, explains nothing at best, and at worst raises more difficult questions than it answers.

The first issue is why our universe should be parsimonious. Stopping short of invoking a god as an explanation, how can we address such an issue? One way would be to ask what a universe without a least action principle would be like. Could life arise? Could explanatory theories and accurate predictions be made in such a universe? This possible line of reasoning invokes the anthropic principle, which states that naturally the universe seems explainable and cosy for life, since, if it were not, we wouldn't be around to argue about it. Unfortunately, this answer leaves our current science behind, because there is as yet no way we can test whether it is true or not.

Why do sunbeams travel along optimal paths?

The second issue is how the universe achieves being parsimonious. The principle of least action and variational calculus seem to suggest that the behaviour of everything in the universe is dictated by the future. A particle can only take a path of least action if it "knows" where it is going to end up — different endpoints will yield different paths. But how can that be the way the universe actually operates? How can the universe depend on knowing where the system ends up in order to work out how it got there? It doesn't make any sense ... yet the method of variational calculus works. This issue goes beyond the teleological notion that there is a plan or design in the universe. It invokes perfect knowledge of the future for the entire universe — everything is done and dusted and things like free will, morality and scientific endeavour become meaningless.

One possible solution to this problem may lie in a future understanding of how the laws of quantum mechanics — the unbelievably accurate but non-commonsensical theory of the interactions of subatomic particles — give rise to the everyday laws and concepts we see around us. Since we don't operate at the quantum mechanical length scales or time scales, we see only approximations of underlying reality. Our laws may arise from statistical correlations smoothed over innumerable interactions at the smallest scales. There is indeed a least action principle of quantum mechanics and, promisingly, it presents no teleological problems.

Putting aside unresolved philosophical questions, the calculus of variations is a technique of great power used every day by scientists and mathematicians around the globe to solve real questions posed by the natural, industrial, and biomedical worlds. We will end with an event early in the life of its founder and birthday boy of the year, Leonhard Euler. In 1727, at the age of 20 and before formulating the calculus of variations, Euler caught the world's attention by writing a maths essay (there's hope for me yet) which received an honourable mention in the annual Grand Prix of the Paris Academy of Sciences, certainly the Nobel Prize of the day. Euler's essay was motivated by a real-world problem: how optimally to position the masts on a ship — and this from a man who had never left land-locked Switzerland, nor seen a big sailing ship. He wrote, "I did not find it necessary to confirm this theory of mine by experiment because it is derived from the surest and most secure principles of mechanics, so that no doubt whatsoever can be raised on whether or not it be true and takes place in practice." Even after Newton the dangerous idea that mathematics could faithfully reproduce reality was still startling. To this day the implications of this idea are changing the world.

About the author
Phil Wilson

Phil Wilson is a lecturer in mathematics at the University of Canterbury, New Zealand. He applies mathematics to the biological, medical, industrial, and natural worlds. You can read more of his inspired writing on his blog.

http://plus.maths.org/issue44/features/wilson/index.html

http://tinyurl.com/3fxgcb

told you so

Do We Live in a Giant Cosmic Bubble? | LiveScience

By Clara Moskowitz, Staff Writer

posted: 30 September 2008 06:48 am ET


If the notion of dark energy sounds improbable, get ready for an even more outlandish suggestion.

Earth may be trapped in an abnormal bubble of space-time that is particularly void of matter. Scientists say this condition could account for the apparent acceleration of the universe's expansion, for which dark energy currently is the leading explanation.

Dark energy is the name given to the hypothetical force that could be drawing all the stuff in the universe outward at an ever-increasing rate. Current thinking is that 74 percent of the universe could be made up of this exotic dark energy, with another 21 percent being dark matter, and normal matter comprising the remaining 5 percent.

Until now, there has been no good way to choose between dark energy or the void explanation, but a new study outlines a potential test of the bubble scenario.

If we were in an unusually sparse area of the universe, then things could look farther away than they really are and there would be no need to rely on dark energy as an explanation for certain astronomical observations.

"If we lived in a very large under-density, then the space-time itself wouldn't be accelerating," said researcher Timothy Clifton of Oxford University in England. "It would just be that the observations, if interpreted in the usual way, would look like they were."

Scientists first detected the acceleration by noting that distant supernovae seemed to be moving away from us faster than they should be. One type of supernova (called Type Ia) is a useful distance indicator, because the explosions always have the same intrinsic brightness. Since light gets dimmer the farther it travels, that means that when the supernovae appear faint to us, they are far away, and when they appear bright, they are closer in.
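A quick back-of-the-envelope sketch of that logic (the luminosity and flux figures here are invented purely for illustration): if every Type Ia blast has the same intrinsic brightness, the inverse-square law turns a measured brightness directly into a distance.

```python
import math

L_INTRINSIC = 1.0e36   # assumed luminosity in watts, an illustrative stand-in value

def distance(flux, luminosity=L_INTRINSIC):
    """Invert the inverse-square law  flux = L / (4 pi d^2)  to get the distance d."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

# a supernova that appears 100 times fainter than another is 10 times farther away
print(distance(1.0e-12) / distance(1.0e-10))   # -> 10.0
```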

But if we happened to be in a portion of the universe with less matter in it than normal, then the space-time around us would be different than it is outside, because matter warps space-time. Light travelling from supernovae outside our bubble would appear dimmer, because the light would diverge more than we would expect once it got inside our void.

One problem with the void idea, though, is that it negates a principle that has reigned in astronomy for more than 450 years: namely, that our place in the universe isn't special. When Nicolaus Copernicus argued that it made much more sense for the Earth to be revolving around the sun than vice versa, it revolutionized science. Since then, most theories have to pass the Copernican test. If they require our planet to be unique, or our position to be exalted, the ideas often seem unlikely.

"This idea that we live in a void would really be a statement that we live in a special place," Clifton told SPACE.com. "The regular cosmological model is based on the idea that where we live is a typical place in the universe. This would be a contradiction to the Copernican principle."

Clifton, along with Oxford researchers Pedro G. Ferreira and Kate Land, say that in coming years we may be able to distinguish between dark energy and the void. They point to the upcoming Joint Dark Energy Mission, planned by NASA and the U.S. Department of Energy to launch in 2014 or 2015. The satellite aims to measure the expansion of the universe precisely by observing about 2,300 supernovae.

The scientists suggest that by looking at a large number of supernovae in a certain region of the universe, they should be able to tell whether the objects are really accelerating away, or if their light is merely being distorted in a void.

The new study will be detailed in an upcoming issue of the journal Physical Review Letters.

but can it detect bs?

Invention: Universal detector - tech - 26 September 2008 - New Scientist Tech

* 16:51 26 September 2008
* NewScientist.com news service
* Justin Mullins


Zap a metal with light and the electrons on the surface ripple into waves – known as plasmons – which emit light of their own. The frequency of that light reflects the electronic nature of the surface and is highly sensitive to contamination.

Kevin Tetz and colleagues in the Ultrafast and Nanoscale Optics Group at the University of California, San Diego, have designed a system that exploits this effect to test for contamination on the surface of, well, anything.

Their idea uses a thin layer of metal drilled with nanoscale holes, laid onto the surface being tested. When the perforated plate is zapped with laser light, the surface plasmons that form emit light with a frequency related to the materials touching the plate. A sensitive light detector is needed to measure the frequency of light given off.

The team says devices using this approach can be small and portable, will work on very low power, and could detect everything from explosives to bacteria. All that needs to be done now is build a system able to decode the light signatures.

Read the full universal detector patent application

Hybrid Nanoparticles Image and Treat Tumors


(PhysOrg.com) -- By combining a magnetic nanoparticle, a fluorescent quantum dot, and an anticancer drug within a lipid-based nanoparticle, a multi-institutional research team headed by members of the National Cancer Institute’s (NCI) Alliance for Nanotechnology in Cancer has created a single agent that can image and treat tumors. In addition, this new nanoparticle is able to avoid detection by the immune system, enabling the particle to remain in the body for extended periods of time.

“The idea involves encapsulating imaging agents and drugs into a protective ‘mothership’ that evades the natural processes that normally would remove these payloads if they were unprotected,” said Michael Sailor, Ph.D., an Alliance member at the University of California, San Diego, who led this research effort. Other Alliance members who participated in this study include Sangeeta Bhatia, M.D., Ph.D., Massachusetts Institute of Technology, and Erkki Ruoslahti, M.D., Ph.D., Burnham Institute for Medical Research at the University of California, Santa Barbara. The researchers published the results of their work in the journal Angewandte Chemie International Edition.

“Many drugs look promising in the laboratory but fail in humans because they do not reach the diseased tissue in time or at concentrations high enough to be effective,” added Dr. Bhatia. “These drugs don’t have the capability to avoid the body’s natural defenses or to discriminate their intended targets from healthy tissues. In addition, we lack the tools to detect diseases such as cancer at the earliest stages of development, when therapies can be most effective.”

The researchers designed the hull of their motherships to evade detection by constructing them of lipids modified with poly(ethylene glycol) (PEG). The researchers also designed the material of the hull to be strong enough to prevent accidental release of the mothership’s cargo while circulating through the bloodstream. Tethered to the surface of the hull is a protein called F3, a molecule that sticks to cancer cells. Prepared in Dr. Ruoslahti’s laboratory, F3 was engineered to specifically home in on tumor cell surfaces and then transport itself into their nuclei.

The researchers loaded their mothership nanoparticles with three payloads before injecting them in mice. Two types of nanoparticles, superparamagnetic iron oxide and fluorescent quantum dots, were placed in the ship’s cargo hold, along with the anticancer drug doxorubicin. The iron oxide nanoparticles allow the ships to show up in a magnetic resonance imaging (MRI) scan, and the quantum dots can be seen with another type of imaging tool, a fluorescence scanner.

“The fluorescence image provides higher resolution than MRI,” said Dr. Sailor. “One can imagine a surgeon identifying the specific location of a tumor in the body before surgery with an MRI scan, then using fluorescence imaging to find and remove all parts of the tumor during the operation.”

To its surprise, the team found that a single mothership can carry multiple iron oxide nanoparticles, which increases their brightness in the MRI image. “The ability of these nanostructures to carry more than one superparamagnetic nanoparticle makes them easier to see by MRI, which should translate to earlier detection of smaller tumors,” said Dr. Sailor. “The fact that the ships can carry very dissimilar payloads—a magnetic nanoparticle, a fluorescent quantum dot, and a small molecule drug—was a real surprise.”

This work, which is detailed in the paper “Micellar Hybrid Nanoparticles for Simultaneous Magnetofluorescent Imaging and Drug Delivery,” was supported by the NCI Alliance for Nanotechnology in Cancer. (http://dx.doi.org/doi:10.1002/anie.200801810)

Provided by National Cancer Institute






It's nothing a case of beer won't fix...

Forget black holes, could the LHC trigger a “Bose supernova”?
September 29th, 2008 | by KFC |


The fellas at CERN have gone to great lengths to reassure us all that they won’t destroy the planet (who says physicists are cold-hearted?).

The worry was that the collision of particles at the LHC’s high energies could create a black hole that would swallow the planet. We appear to be safe on that score but it turns out there’s another way in which some people think the LHC could cause a major explosion.

The worry this time is about Bose Einstein Condensates, lumps of matter so cold that their constituents occupy the lowest possible quantum state.

Physicists have been fiddling with BECs since the early 1990s and have become quite good at manipulating them with magnetic fields.

One thing they’ve found is that it is possible to switch the force between atoms in certain kinds of BECs from positive to negative and back using a magnetic field, a phenomenon known as a Feshbach resonance.

But get this: in 2001, Elizabeth Donley and buddies at JILA in Boulder, Colorado, caused a BEC to explode by switching the forces in this way. These explosions have since become known as Bose supernovas.

Nobody is exactly sure how these explosions proceed which is a tad worrying for the following reason: some clever clogs has pointed out that superfluid helium is a BEC and that the LHC is swimming in 700,000 litres of the stuff. Not only that but the entire thing is bathed in some of the most powerful magnetic fields on the planet.

So is the LHC a timebomb waiting to go off? Not according to Malcolm Fairbairn and Bob McElrath at CERN who have filled the back of a few envelopes in calculating that we’re still safe. To be doubly sure, they also checked that no other superfluid helium facilities have mysteriously blown themselves to kingdom come.

“We conclude that there is no physics whatsoever which suggests that Helium could undergo any kind of unforeseen catastrophic explosion,” they say.

That’s comforting and impressive. Ruling out foreseen catastrophes is certainly useful but the ability to rule out unforeseen ones is truly amazing.

Ref: arxiv.org/abs/0809.4004: There is no Explosion Risk Associated with Superfluid Helium in the LHC Cooling System

Sunday, September 28, 2008

financial phi

How Fractals Can Explain What's Wrong with Wall Street: Scientific American
Editor's Note: This story was originally published in the February 1999 edition of Scientific American. We are posting it in light of recent news involving Lehman Brothers and Merrill Lynch.

Individual investors and professional stock and currency traders know better than ever that prices quoted in any financial market often change with heart-stopping swiftness. Fortunes are made and lost in sudden bursts of activity when the market seems to speed up and the volatility soars. Last September, for instance, the stock for Alcatel, a French telecommunications equipment manufacturer, dropped about 40 percent one day and fell another 6 percent over the next few days. In a reversal, the stock shot up 10 percent on the fourth day.

The classical financial models used for most of this century predict that such precipitous events should never happen. A cornerstone of finance is modern portfolio theory, which tries to maximize returns for a given level of risk. The mathematics underlying portfolio theory handles extreme situations with benign neglect: it regards large market shifts as too unlikely to matter or as impossible to take into account. It is true that portfolio theory may account for what occurs 95 percent of the time in the market. But the picture it presents does not reflect reality, if one agrees that major events are part of the remaining 5 percent. An inescapable analogy is that of a sailor at sea. If the weather is moderate 95 percent of the time, can the mariner afford to ignore the possibility of a typhoon?

The risk-reducing formulas behind portfolio theory rely on a number of demanding and ultimately unfounded premises. First, they suggest that price changes are statistically independent of one another: for example, that today’s price has no influence on the changes between the current price and tomorrow’s. As a result, predictions of future market movements become impossible. The second presumption is that all price changes are distributed in a pattern that conforms to the standard bell curve. The width of the bell shape (as measured by its sigma, or standard deviation) depicts how far price changes diverge from the mean; events at the extremes are considered extremely rare. Typhoons are, in effect, defined out of existence.

Do financial data neatly conform to such assumptions? Of course, they never do. Charts of stock or currency changes over time do reveal a constant background of small up and down price movements—but not as uniform as one would expect if price changes fit the bell curve. These patterns, however, constitute only one aspect of the graph. A substantial number of sudden large changes—spikes on the chart that shoot up and down as with the Alcatel stock—stand out from the background of more moderate perturbations. Moreover, the magnitude of price movements (both large and small) may remain roughly constant for a year, and then suddenly the variability may increase for an extended period. Big price jumps become more common as the turbulence of the market grows—clusters of them appear on the chart.

According to portfolio theory, the probability of these large fluctuations would be a few millionths of a millionth of a millionth of a millionth. (The fluctuations are greater than 10 standard deviations.) But in fact, one observes spikes on a regular basis—as often as every month—and their probability amounts to a few hundredths. Granted, the bell curve is often described as normal—or, more precisely, as the normal distribution. But should financial markets then be described as abnormal? Of course not—they are what they are, and it is portfolio theory that is flawed.
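That first figure is easy to check under the bell-curve assumption (a one-line sketch using SciPy's normal distribution, nothing more):

```python
from scipy.stats import norm

# probability of a move more than 10 standard deviations above the mean,
# if price changes really followed the normal distribution
print(norm.sf(10))   # ~7.6e-24: a few millionths of a millionth of a millionth of a millionth
```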

Modern portfolio theory poses a danger to those who believe in it too strongly and is a powerful challenge for the theoretician. Though sometimes acknowledging faults in the present body of thinking, its adherents suggest that no other premises can be handled through mathematical modeling. This contention leads to the question of whether a rigorous quantitative description of at least some features of major financial upheavals can be developed. The bearish answer is that large market swings are anomalies, individual “acts of God” that present no conceivable regularity. Revisionists correct the questionable premises of modern portfolio theory through small fixes that lack any guiding principle and do not improve matters sufficiently. My own work—carried out over many years— takes a very different and decidedly bullish position.

get down with your funky self

Why do we like to dance--And move to the beat?: Scientific American
THE THRILL OF TANGO: Scientists believe that dancing combines two of our greatest pleasures: movement and music. © ISTOCKPHOTO/Guillermo Perales Gonzalez

Many things stimulate our brains' reward centers, among them, coordinated movements. Consider the thrill some get from watching choreographed fight or car chase scenes in action movies. What about the enjoyment spectators get when watching sports or actually riding on a roller coaster or in a fast car?

Scientists aren't sure why we like movement so much, but there's certainly a lot of anecdotal evidence to suggest we get a pretty big kick out of it. Maybe synchronizing music, which many studies have shown is pleasing to both the ear and brain, and movement—in essence, dance—may constitute a pleasure double play.

Music is known to stimulate pleasure and reward areas like the orbitofrontal cortex, located directly behind one's eyes, as well as a midbrain region called the ventral striatum. In particular, the amount of activation in these areas matches up with how much we enjoy the tunes. In addition, music activates the cerebellum, at the base of the brain, which is involved in the coordination and timing of movement.

So, why is dance pleasurable?

First, people speculate that music was created through rhythmic movement—think: tapping your foot. Second, some reward-related areas in the brain are connected with motor areas. Third, mounting evidence suggests that we are sensitive and attuned to the movements of others' bodies, because similar brain regions are activated when certain movements are both made and observed. For example, the motor regions of professional dancers' brains show more activation when they watch other dancers compared with people who don't dance.

This kind of finding has led to a great deal of speculation with respect to mirror neurons—cells found in the cortex, the brain's central processing unit, that activate when a person is performing an action as well as watching someone else do it. Increasing evidence suggests that sensory experiences are also motor experiences. Music and dance may just be particularly pleasurable activators of these sensory and motor circuits. So, if you're watching someone dance, your brain's movement areas activate; unconsciously, you are planning and predicting how a dancer would move based on what you would do.

That may lead to the pleasure we get from seeing someone execute a movement with expert skill—that is, seeing an action that your own motor system cannot predict via an internal simulation. This prediction error may be rewarding in some way.

So, if that evidence indicates that humans like watching others in motion (and being in motion themselves), adding music to the mix may be a pinnacle of reward.

Music, in fact, can actually refine your movement skills by improving your timing, coordination and rhythm. Take the Brazilian folk art, Capoeira—which could be a dance masquerading as a martial art or vice versa. Many of the moves in that fighting style are choreographed, taught and practiced, along with music, making the participants more adept—and giving them the pleasure from the music as well as from performing the movement.

Adding music in this context may cross the thin line between a killing machine and a dancing machine.


reboot yourself

Cell 'rebooting' technique sidesteps risks : Nature News

Virus reprograms cells without disrupting genome.

Erika Check Hayden
Pluripotent stem cells induced by transient gene delivery. Image: Mathias Stadtfeld and Konrad Hochedlinger


Scientists today announce a major advance in a technology for engineering cells with broad regenerative powers.

Writing in Science, a team led by biologist Konrad Hochedlinger of Harvard Medical School in Boston, Massachusetts, describes how it transformed mouse tail and liver cells into an embryonic-like state. And unlike other scientists who had previously made such 'induced pluripotent stem cells', or iPS cells, Hochedlinger's team did not use a virus that integrates itself into a cell's genome to 'reboot' the cells. Instead, the team used an adenovirus, which keeps out of a cell's own DNA, avoiding the potential for serious side-effects such as cancer, which might result from viral disruption of a cell's DNA.

The work brings the science of iPS cells one step closer to clinical reality. But it also answers some key biological questions about the cells: "This is a potentially exciting advance, since it gets around the dangers of viral integration into the genome," says Martin Pera, director of the University of Southern California's Institute for Stem Cell and Regenerative Medicine in Los Angeles.
Tailor-made tissues

Scientists are excited about iPS cells because they can theoretically be made from any of an individual's cells, and might therefore be used to tailor-make tissues that match a patient. They are also being used for the study of disease, and avoid the ethical and practical issues surrounding the use of embryonic stem (ES) cells harvested from blastocysts, the hollow balls of cells that form during the early stages of human development. They are also much more readily available than human eggs, which are also being used in one technique to try to reprogram adult cells.

However, many questions remain about the adenovirally reprogrammed cells. Some have claimed that the study eliminates the need for work on human ES cells, but Hochedlinger disagrees: "It's much too early to say that — at this point we clearly still need ES cells," he said.

The issue has been a contentious one in the US presidential campaign. Barack Obama has said that iPS cells do not eliminate the need for ES cell research. John McCain's stance has been more vague, while his running mate, Sarah Palin, opposes ES cell work (see 'US election: Questioning the candidates'). Yet Hochedlinger points out that he has not been able to use his technique to reprogram human cells yet, and that even if his lab was able to do so, researchers still do not know whether iPS cells share all of ES cells' powers.

"It is unclear to what extent ES cells and iPS cells are really equivalent to each other, and showing this will require much more work," says Hochedlinger.

Saturday, September 27, 2008

Tuesday, September 9, 2008

walking the walk

Observers of Walking Figures See Men Advancing, Women in Retreat: Scientific American Podcast
One signature detail we use when we recognize people we know—one that is often overlooked—is their walk. Past studies show that we can discern gender, mood, and personality traits just by watching simple animated point-light figures, meaning points of light marking the joint positions of walkers, such as knees, elbows, and hips.

A group from Southern Cross University in Australia published a fascinating result in the journal Current Biology, after they manipulated the light points of walkers, making figures appear more feminine or masculine.

If they made the point-form body appear male, subjects perceived that figure as walking toward them—regardless of its actual direction. If the walker seemed female, the subjects reported it was walking away from them.

Curiously, gender-neutral figures tended to appear to be moving toward the viewer. "It was only when walkers had characteristics consistent with being female did the observers begin to perceive them more often as facing away," the researchers reported.

Further, even when the researchers included perspective cues that enhance a stroller's directionality, they found subjects saw males as coming toward them more than half the time, and nearly always viewed females as in retreat.

The researchers speculate that these misperceptions may signal deeper evolutionary factors: "…a male figure that is otherwise ambiguous might best be perceived as approaching to allow the observer to prepare to flee or fight. Similarly…especially for infants, the departure of females might signal a need to act…"

Hm. Not sure, but that’s the thing in science… fascinating proven results are often left waiting for their counterpart explanations to catch up.

- Christie Nicholson