Tuesday, June 1, 2010

Creative people are just high-functioning schizophrenics

New research suggests a possible explanation for the link between mental health and creativity. By studying receptors in the brain, researchers at Karolinska Institutet have managed to show that the dopamine system in healthy, highly creative people is similar in some respects to that seen in people with schizophrenia.

High creative skills have been shown to be somewhat more common in people who have mental illness in the family. Creativity is also linked to a slightly higher risk of schizophrenia and bipolar disorder. Certain psychological traits, such as the ability to make unusual or bizarre associations, are also shared by schizophrenics and healthy, highly creative people. And now the correlation between creativity and mental health has scientific backing.

"We have studied the brain and the dopamine D2 receptors, and have shown that the dopamine system of healthy, highly creative people is similar to that found in people with schizophrenia," says associate professor Fredrik Ullen from Karolinska Institutet's Department of Women's and Children's Health, co-author of the study that appears in the journal PLoS ONE.

Just which brain mechanisms are responsible for this correlation is still something of a mystery, but Dr Ullen conjectures that the function of systems in the brain that use dopamine is significant; for example, studies have shown that dopamine receptor genes are linked to ability for divergent thought. Dr Ullen's study measured the creativity of healthy individuals using divergent psychological tests, in which the task was to find many different solutions to a problem.

"The study shows that highly creative people who did well on the divergent tests had a lower density of D2 receptors in the thalamus than less creative people," says Dr Ullen. "Schizophrenics are also known to have low D2 density in this part of the brain, suggesting one explanation for the link between mental illness and creativity."
The thalamus serves as a kind of relay centre, filtering information before it reaches areas of the cortex, which is responsible, amongst other things, for cognition and reasoning.

"Fewer D2 receptors in the thalamus probably means a lower degree of signal filtering, and thus a higher flow of information from the thalamus," says Dr Ullen. He explains that this could be a possible mechanism behind both the ability of healthy, highly creative people to see numerous uncommon connections in a problem-solving situation and the bizarre associations found in the mentally ill.

"Thinking outside the box might be facilitated by having a somewhat less intact box," says Dr Ullen about his new findings.

Tuesday, May 11, 2010

Record Labels and Human Psychology

Record Labels Waged War On Human Psychology…And Lost
The major record labels and the RIAA could have averted a PR nightmare and saved themselves millions of dollars by hiring a few behavioral economists instead of lawyers to advise them. The basis of all human economic transactions is psychological. Our animal spirits animate markets. The lawsuits the RIAA waged against consumers several years ago were designed to prevent people from downloading free, illegal music. They largely had the opposite effect. The obvious stupidity of the strategy is that you’re beating people over the head, then saying “Now, buy my product!”

The more profound point is that the RIAA could spend every last penny they’re worth and they still wouldn’t stop the inevitable death of the CD. Not because people don’t want to pay $17 for a CD, but because $17 doesn’t make sense anymore in the context of how human beings make rational choices. The whole file sharing phenomenon (and legal music downloading) is largely driven by a powerful psychological aversion to being cheated.

It turns out that free is so powerful not because it’s free, but because it allows us to minimize the risk of being cheated. Duke University behavioral economist Dan Ariely conducted an interesting experiment to understand “free”, which he writes about in his book Predictably Irrational. First, he and his colleagues sold random college students two kinds of chocolates. One was Lindt truffles from Switzerland. The second was Hershey’s Kisses. The truffles were 15 cents and the Kisses were 1 cent. The students reasoned that the difference in price between the two chocolates was due to quality. 73% chose the truffles and 27% chose the Kisses.

Then Ariely did something interesting. He introduced free into the experiment. He lowered the price of each chocolate by 1 cent, so the truffles were now 14 cents and the Kisses were free. All of a sudden, preference for the Kisses skyrocketed.

Ariely concluded that free is so enticing because it eliminates the risk of buyer’s remorse, or what I like to call the “Oh, crap!” factor. Nobody wants to buy something and then discover that it’s not what they expected. Even if the price of that thing is just a few cents, the psychological aversion still exists. When something is free, that risk is eliminated entirely. It may still not be what you expected, but at least you didn’t lose anything by paying for it.

The world of marketing is largely based on the concept of convincing people to overcome their natural aversion to being cheated. Companies hire high-priced consultants and agencies to help them craft brands that people will trust. Money-back guarantees are another popular tactic to get people to buy risk-free.

You can see examples of the “Oh, crap!” factor in everyday life. In his spectacular book Free, Wired Magazine editor Chris Anderson gives the example of zappos.com. Most people still prefer to buy shoes in stores because they want to try them on and make sure they fit and look good on their feet before buying. Zappos became a multibillion-dollar company by eliminating the psychological barrier to buying shoes online: it lets people try out and return shoes as many times as they want. In some ways that’s even better than buying shoes in the real world, because other costs associated with buying shoes (getting in your car, dealing with traffic, talking to a rude cashier, and so on) are eliminated.

The record industry has fared so poorly largely because it sells a high risk product that competes with copies of the product that carry far fewer risks. Everybody knows that labels fill CD albums with fluff. And everybody hates it. Sites like Amazon that let you sample albums for free before buying them take some of the risk out of buying. But why would you buy the entire album if you only like a few songs on it? That’s where the whole buy-one-song iTunes model has become a powerful force. Buy what you want, ignore the rest. Or even better, download it for free on some file sharing site.

But free isn’t completely free. Just because something doesn’t cost money doesn’t mean there aren’t other hidden costs. If you download a free mp3, you could be dealing with poor quality, viruses, problematic file formats or maybe the wrong song. And then finding a good version takes time. That’s a cost too.

Steve Jobs and Apple have been so spectacularly successful at reinventing the music business because they’re the only ones who’ve managed to invent a hardware and software platform that mitigates the risk factors involved in owning music. And they made it sexy and stylish. Apple didn’t win on technology. Nobody does, ultimately. They won on business smarts. And business is based largely on manipulating psychology.

Many younger people get all their music from free file sharing sites. Part of the reason is that they have less money and more time than older people, whose busier lives encourage them to pay for things that save them time. Younger people are willing to spend time to understand technology and deal with the costs that come with free music. I think free music has also become an expectation for younger people. There’s an entire group of people under the age of about 30 who believe that music should be free, because that’s the world they’ve grown up in. So that’s where the future lies.

The labels lost because they waged war on human psychology. Apple won because it adapted to human psychology.

Why We Lie So Well

Survival of the Fibbest: Why We Lie So Well


Your child tells you he didn't eat a cookie despite the tell-tale crumbs all over his mouth. You call your boss to say you're taking "a sick day," feigning a cough while on the phone. You're both lying, but is it the same?

Whether we're 2 years old or 62, our reasons for lying are mostly the same: to get out of trouble, for personal gain and to make ourselves look better in the eyes of others. But a growing body of research is raising questions about how a child's lie is different from an adult's lie, and how the way we deceive changes as we grow.
The Lying Life

Research suggests we begin lying as toddlers and keep on as adults, but how we deceive changes as we age.

Developmental psychologists are trying to understand lying through behavior. Neuroscientists are tracking which regions of the brain are activated when we spin lies. Their results could shed light on issues from why a tween lies to your face about breaking a vase to whether young children can be trusted to give eye-witness testimony in court. One intriguing new study suggests that lying may spring from a completely different part of the brain in children compared with adults.

What has become clear from studies including the work of Kang Lee, a professor at the University of Toronto and director of the Institute of Child Study, is that lying is a sign of normal maturation.

Parents and teachers who catch their children lying "should not be alarmed—and their children are not going to turn out to be pathological liars," says Dr. Lee, who has spent the last 15 years studying how lying changes as kids get older, why some people lie more than others as well as which factors can reduce lying. "The fact that their children tell lies is a sign that they have reached a new developmental milestone."

Dr. Lee and Victoria Talwar, a colleague he often collaborates with at McGill University, conducted a series of studies in which they bring children into a lab with hidden cameras. Children and young adults ages 2 to 17 are enticed to lie by being told not to peek at a toy—often a plush purple Barney dinosaur—that is put behind the child's back. The test giver then leaves the room for one minute, ostensibly to answer a phone call, giving the child ample time to peek at the toy. Whether or not the child sneaks a look is caught on tape.
For Parents: the Truth About Lying

* Lying is normal and isn't a problem unless kids lie frequently and consistently.
* Ask a child to promise to tell the truth. Children who promise are much more likely to tell the truth than kids who aren't asked to promise.
* Lying shouldn't be ignored. When a lie comes to light, be explicit with children that it is wrong to lie.

* Don't set up children to lie. If you know they committed a transgression, don't ask if they did it. Instead, ask why they did it.
* If a child confesses, thank them for telling you the truth. If kids are only punished for lying, they will be more likely to lie in the future, according to studies.

* Stories with an ending that shows truth-telling as a good thing appear more effective at discouraging lying than fear- or punishment-based stories (e.g., Pinocchio's nose grows longer when he lies).


For young kids, the temptation to cheat is "tremendous" and 90% peek in these experiments. Even adolescents and adults are tempted in similar situations, says Dr. Lee.

When the test giver returns to the room, the child is asked if he or she peeked. At age 2, about a quarter of children will lie and say they didn't. By 3, half of kids will lie, and by 4, that figure is 90%, studies show.

This trend continues until kids are about 15. By that age, nearly everyone who cheated in the experiment will lie about it. The good news: The number of liars begins to decline beyond this age. By 17, the percentage that lies drops to about 70%.

Researchers have also examined why some kids lie more than others, and have found that it isn't related to better moral values or religious upbringing. Rather, it's kids with better cognitive abilities who lie more. That's because to lie you also have to keep the truth in mind, which involves multiple brain processes, such as integrating several sources of information and manipulating that information, according to Shawn Christ, a neuropsychologist at the University of Missouri-Columbia.

The ability to lie—and lie successfully—is thought to be related to development of brain regions that allow so-called "executive functioning," or higher order thinking and reasoning abilities. Kids who perform better on tests that involve executive functioning also lie more.

To get a clearer picture of potential differences between adult and child lying, Markus Kruesi and colleagues at the Medical University of South Carolina recently scanned the brains of a couple dozen children ages 10 to 16 and adults ages 19 to 40 while they were telling lies and telling the truth.

As the children and adults lied, the researchers expected to see increased blood flow due to neural activity in the frontal regions of the brain, where executive functioning is thought to be carried out. That happened in adult scans, but none of the frontal regions in the children's brains showed the activity.

While it is too early to know why these differences exist, Dr. Kruesi is looking into whether other areas of the brain, such as those tied to emotion, might be more active when children lie.

When it comes to covering up their lies, studies show that kids learn quite young that they need to disguise their lying, and very quickly adopt truthful-looking behaviors—like not looking away when questioned. Dr. Talwar's work has shown that it's hard even for a young child's own parent to detect when the child is lying just by looking at the child's behaviors.

But young kids often give themselves away verbally, according to recent research by Drs. Lee and Talwar. Kids may say they didn't peek at the Barney doll, but when the experimenter asks, "What do you think the toy is?" the children blurt out, "Barney." When asked how they knew, many children then confess.

Starting around five, children begin to understand that such an answer gives their deception away, so they pretend to guess or come up with better reasons for why they knew the answer. Even so, the logic may be flawed. Dr. Lee recounted how one little girl asked to place her hand underneath a blanket that was over the toy before she answered the question. After feeling the toy but not seeing it, she said, "It feels purple, so it's Barney!"

By seven, the majority of kids can conceal their lying and cheating very well. "The time to catch a liar is before eight years of age," says Dr. Lee.

So what's a parent to do after that? Some studies suggest there is no long-term effect of parenting on lying behavior, but the work of Dr. Talwar and her colleague Angela Crossman at John Jay College at the City University of New York shows that a certain type of parenting style seems to discourage lying. They suggest parents discuss why there are rules against lying. Also, parents who point out when kids lie—and also acknowledge when children come clean—can foster more truth-telling, says Dr. Talwar.

Saturday, May 8, 2010

The Moral Life of Babies

The Moral Life of Babies

Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.

This incident occurred in one of several psychology studies that I have been involved with at the Infant Cognition Center at Yale University in collaboration with my colleague (and wife), Karen Wynn, who runs the lab, and a graduate student, Kiley Hamlin, who is the lead author of the studies. We are one of a handful of research teams around the world exploring the moral life of babies.

Like many scientists and humanists, I have long been fascinated by the capacities and inclinations of babies and children. The mental life of young humans not only is an interesting topic in its own right; it also raises — and can help answer — fundamental questions of philosophy and psychology, including how biological evolution and cultural experience conspire to shape human nature. In graduate school, I studied early language development and later moved on to fairly traditional topics in cognitive development, like how we come to understand the minds of other people — what they know, want and experience.

But the current work I’m involved in, on baby morality, might seem like a perverse and misguided next step. Why would anyone even entertain the thought of babies as moral beings? From Sigmund Freud to Jean Piaget to Lawrence Kohlberg, psychologists have long argued that we begin life as amoral animals. One important task of society, particularly of parents, is to turn babies into civilized beings — social creatures who can experience empathy, guilt and shame; who can override selfish impulses in the name of higher principles; and who will respond with outrage to unfairness and injustice. Many parents and educators would endorse a view of infants and toddlers close to that of a recent Onion headline: “New Study Reveals Most Children Unrepentant Sociopaths.” If children enter the world already equipped with moral notions, why is it that we have to work so hard to humanize them?

A growing body of evidence, though, suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.

Smart Babies
Babies seem spastic in their actions, undisciplined in their attention. In 1762, Jean-Jacques Rousseau called the baby “a perfect idiot,” and in 1890 William James famously described a baby’s mental life as “one great blooming, buzzing confusion.” A sympathetic parent might see the spark of consciousness in a baby’s large eyes and eagerly accept the popular claim that babies are wonderful learners, but it is hard to avoid the impression that they begin as ignorant as bread loaves. Many developmental psychologists will tell you that the ignorance of human babies extends well into childhood. For many years the conventional view was that young humans take a surprisingly long time to learn basic facts about the physical world (like that objects continue to exist once they are out of sight) and basic facts about people (like that they have beliefs and desires and goals) — let alone how long it takes them to learn about morality.

I am admittedly biased, but I think one of the great discoveries in modern psychology is that this view of babies is mistaken.

A reason this view has persisted is that, for many years, scientists weren’t sure how to go about studying the mental life of babies. It’s a challenge to study the cognitive abilities of any creature that lacks language, but human babies present an additional difficulty, because, even compared to rats or birds, they are behaviorally limited: they can’t run mazes or peck at levers. In the 1980s, however, psychologists interested in exploring how much babies know began making use of one of the few behaviors that young babies can control: the movement of their eyes. The eyes are a window to the baby’s soul. As adults do, when babies see something that they find interesting or surprising, they tend to look at it longer than they would at something they find uninteresting or expected. And when given a choice between two things to look at, babies usually opt to look at the more pleasing thing. You can use “looking time,” then, as a rough but reliable proxy for what captures babies’ attention: what babies are surprised by or what babies like.

The studies in the 1980s that made use of this methodology were able to discover surprising things about what babies know about the nature and workings of physical objects — a baby’s “naïve physics.” Psychologists — most notably Elizabeth Spelke and Renée Baillargeon — conducted studies that essentially involved showing babies magic tricks, events that seemed to violate some law of the universe: you remove the supports from beneath a block and it floats in midair, unsupported; an object disappears and then reappears in another location; a box is placed behind a screen, the screen falls backward into empty space. Like adults, babies tend to linger on such scenes — they look longer at them than at scenes that are identical in all regards except that they don’t violate physical laws. This suggests that babies have expectations about how objects should behave. A vast body of research now suggests that — contrary to what was taught for decades to legions of psychology undergraduates — babies think of objects largely as adults do, as connected masses that move as units, that are solid and subject to gravity and that move in continuous paths through space and time.

Other studies, starting with a 1992 paper by my wife, Karen, have found that babies can do rudimentary math with objects. The demonstration is simple. Show a baby an empty stage. Raise a screen to obscure part of the stage. In view of the baby, put a Mickey Mouse doll behind the screen. Then put another Mickey Mouse doll behind the screen. Now drop the screen. Adults expect two dolls — and so do 5-month-olds: if the screen drops to reveal one or three dolls, the babies look longer, in surprise, than they do if the screen drops to reveal two.

A second wave of studies used looking-time methods to explore what babies know about the minds of others — a baby’s “naïve psychology.” Psychologists had known for a while that even the youngest of babies treat people differently from inanimate objects. Babies like to look at faces; they mimic them, they smile at them. They expect engagement: if a moving object becomes still, they merely lose interest; if a person’s face becomes still, however, they become distressed.

But the new studies found that babies have an actual understanding of mental life: they have some grasp of how people think and why they act as they do. The studies showed that, though babies expect inanimate objects to move as the result of push-pull interactions, they expect people to move rationally in accordance with their beliefs and desires: babies show surprise when someone takes a roundabout path to something he wants. They expect someone who reaches for an object to reach for the same object later, even if its location has changed. And well before their 2nd birthdays, babies are sharp enough to know that other people can have false beliefs. The psychologists Kristine Onishi and Renée Baillargeon have found that 15-month-olds expect that if a person sees an object in one box, and then the object is moved to another box when the person isn’t looking, the person will later reach into the box where he first saw the object, not the box where it actually is. That is, toddlers have a mental model not merely of the world but of the world as understood by someone else.

These discoveries inevitably raise a question: If babies have such a rich understanding of objects and people so early in life, why do they seem so ignorant and helpless? Why don’t they put their knowledge to more active use? One possible answer is that these capacities are the psychological equivalent of physical traits like testicles or ovaries, which are formed in infancy and then sit around, useless, for years and years. Another possibility is that babies do, in fact, use their knowledge from Day 1, not for action but for learning. One lesson from the study of artificial intelligence (and from cognitive science more generally) is that an empty head learns nothing: a system that is capable of rapidly absorbing information needs to have some prewired understanding of what to pay attention to and what generalizations to make. Babies might start off smart, then, because it enables them to get smarter.

Nice Babies
Psychologists like myself who are interested in the cognitive capacities of babies and toddlers are now turning our attention to whether babies have a “naïve morality.” But there is reason to proceed with caution. Morality, after all, is a different sort of affair than physics or psychology. The truths of physics and psychology are universal: objects obey the same physical laws everywhere; and people everywhere have minds, goals, desires and beliefs. But the existence of a universal moral code is a highly controversial claim; there is considerable evidence for wide variation from society to society.

In the journal Science a couple of months ago, the psychologist Joseph Henrich and several of his colleagues reported a cross-cultural study of 15 diverse populations and found that people’s propensities to behave kindly to strangers and to punish unfairness are strongest in large-scale communities with market economies, where such norms are essential to the smooth functioning of trade. Henrich and his colleagues concluded that much of the morality that humans possess is a consequence of the culture in which they are raised, not their innate capacities.

At the same time, though, people everywhere have some sense of right and wrong. You won’t find a society where people don’t have some notion of fairness, don’t put some value on loyalty and kindness, don’t distinguish between acts of cruelty and innocent mistakes, don’t categorize people as nasty or nice. These universals make evolutionary sense. Since natural selection works, at least in part, at a genetic level, there is a logic to being instinctively kind to our kin, whose survival and well-being promote the spread of our genes. More than that, it is often beneficial for humans to work together with other humans, which means that it would have been adaptive to evaluate the niceness and nastiness of other individuals. All this is reason to consider the innateness of at least basic moral concepts.

In addition, scientists know that certain compassionate feelings and impulses emerge early and apparently universally in human development. These are not moral concepts, exactly, but they seem closely related. One example is feeling pain at the pain of others. In his book “The Expression of the Emotions in Man and Animals,” Charles Darwin, a keen observer of human nature, tells the story of how his first son, William, was fooled by his nurse into expressing sympathy at a very young age: “When a few days over 6 months old, his nurse pretended to cry, and I saw that his face instantly assumed a melancholy expression, with the corners of his mouth strongly depressed.”

There seems to be something evolutionarily ancient to this empathetic response. If you want to cause a rat distress, you can expose it to the screams of other rats. Human babies, notably, cry more to the cries of other babies than to tape recordings of their own crying, suggesting that they are responding to their awareness of someone else’s pain, not merely to a certain pitch of sound. Babies also seem to want to assuage the pain of others: once they have enough physical competence (starting at about 1 year old), they soothe others in distress by stroking and touching or by handing over a bottle or toy. There are individual differences, to be sure, in the intensity of response: some babies are great soothers; others don’t care as much. But the basic impulse seems common to all. (Some other primates behave similarly: the primatologist Frans de Waal reports that chimpanzees “will approach a victim of attack, put an arm around her and gently pat her back or groom her.” Monkeys, on the other hand, tend to shun victims of aggression.)

Some recent studies have explored the existence of behavior in toddlers that is “altruistic” in an even stronger sense — like when they give up their time and energy to help a stranger accomplish a difficult task. The psychologists Felix Warneken and Michael Tomasello have put toddlers in situations in which an adult is struggling to get something done, like opening a cabinet door with his hands full or trying to get to an object out of reach. The toddlers tend to spontaneously help, even without any prompting, encouragement or reward.

Is any of the above behavior recognizable as moral conduct? Not obviously so. Moral ideas seem to involve much more than mere compassion. Morality, for instance, is closely related to notions of praise and blame: we want to reward what we see as good and punish what we see as bad. Morality is also closely connected to the ideal of impartiality — if it’s immoral for you to do something to me, then, all else being equal, it is immoral for me to do the same thing to you. In addition, moral principles are different from other types of rules or laws: they cannot, for instance, be overruled solely by virtue of authority. (Even a 4-year-old knows not only that unprovoked hitting is wrong but also that it would continue to be wrong even if a teacher said that it was O.K.) And we tend to associate morality with the possibility of free and rational choice; people choose to do good or evil. To hold someone responsible for an act means that we believe that he could have chosen to act otherwise.

Babies and toddlers might not know or exhibit any of these moral subtleties. Their sympathetic reactions and motivations — including their desire to alleviate the pain of others — may not be much different in kind from purely nonmoral reactions and motivations like growing hungry or wanting to void a full bladder. Even if that is true, though, it is hard to conceive of a moral system that didn’t have, as a starting point, these empathetic capacities. As David Hume argued, mere rationality can’t be the foundation of morality, since our most basic desires are neither rational nor irrational. “ ’Tis not contrary to reason,” he wrote, “to prefer the destruction of the whole world to the scratching of my finger.” To have a genuinely moral system, in other words, some things first have to matter, and what we see in babies is the development of mattering.

Moral-Baby Experiments
So what do babies really understand about morality? Our first experiments exploring this question were done in collaboration with a postdoctoral researcher named Valerie Kuhlmeier (who is now an associate professor of psychology at Queen’s University in Ontario). Building on previous work by the psychologists David and Ann Premack, we began by investigating what babies think about two particular kinds of action: helping and hindering.

Our experiments involved having children watch animated movies of geometrical characters with faces. In one, a red ball would try to go up a hill. On some attempts, a yellow square got behind the ball and gently nudged it upward; in others, a green triangle got in front of it and pushed it down. We were interested in babies’ expectations about the ball’s attitudes — what would the baby expect the ball to make of the character who helped it and the one who hindered it? To find out, we then showed the babies additional movies in which the ball either approached the square or the triangle. When the ball approached the triangle (the hinderer), both 9- and 12-month-olds looked longer than they did when the ball approached the square (the helper). This was consistent with the interpretation that the former action surprised them; they expected the ball to approach the helper. A later study, using somewhat different stimuli, replicated the finding with 10-month-olds, but found that 6-month-olds seem to have no expectations at all. (This effect is robust only when the animated characters have faces; when they are simple faceless figures, it is apparently harder for babies to interpret what they are seeing as a social interaction.)

This experiment was designed to explore babies’ expectations about social interactions, not their moral capacities per se. But if you look at the movies, it’s clear that, at least to adult eyes, there is some latent moral content to the situation: the triangle is kind of a jerk; the square is a sweetheart. So we set out to investigate whether babies make the same judgments about the characters that adults do. Forget about how babies expect the ball to act toward the other characters; what do babies themselves think about the square and the triangle? Do they prefer the good guy and dislike the bad guy?

Here we began our more focused investigations into baby morality. For these studies, parents took their babies to the Infant Cognition Center, which is within one of the Yale psychology buildings. (The center is just a couple of blocks away from where Stanley Milgram did his famous experiments on obedience in the early 1960s, tricking New Haven residents into believing that they had severely harmed or even killed strangers with electrical shocks.) The parents were told about what was going to happen and filled out consent forms, which described the study, the risks to the baby (minimal) and the benefits to the baby (minimal, though it is a nice-enough experience). Parents often asked, reasonably enough, if they would learn how their baby does, and the answer was no. This sort of study provides no clinical or educational feedback about individual babies; the findings make sense only when computed as a group.

For the experiment proper, a parent will carry his or her baby into a small testing room. A typical experiment takes about 15 minutes. Usually, the parent sits on a chair, with the baby on his or her lap, though for some studies, the baby is strapped into a high chair with the parent standing behind. At this point, some of the babies are either sleeping or too fussy to continue; there will then be a short break for the baby to wake up or calm down, but on average this kind of study ends up losing about a quarter of the subjects. Just as critics describe much of experimental psychology as the study of the American college undergraduate who wants to make some extra money or needs to fulfill an Intro Psych requirement, there’s some truth to the claim that this developmental work is a science of the interested and alert baby.

In one of our first studies of moral evaluation, we decided not to use two-dimensional animated movies but rather a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record not the babies’ looking time but rather which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual. This wasn’t a subtle statistical trend; just about all the babies reached for the good guy.

(Experimental minutiae: What if babies simply like the color red or prefer squares or something like that? To control for this, half the babies got the yellow square as the helper; half got it as the hinderer. What about problems of unconscious cueing and unconscious bias? To avoid this, at the moment when the two characters were offered on the tray, the parent had his or her eyes closed, and the experimenter holding out the characters and recording the responses hadn’t seen the puppet show, so he or she didn’t know who was the good guy and who the bad guy.)

One question that arose with these experiments was how to understand the babies’ preference: did they act as they did because they were attracted to the helpful individual or because they were repelled by the hinderer or was it both? We explored this question in a further series of studies that introduced a neutral character, one that neither helps nor hinders. We found that, given a choice, infants prefer a helpful character to a neutral one; and prefer a neutral character to one who hinders. This finding indicates that both inclinations are at work — babies are drawn to the nice guy and repelled by the mean guy. Again, these results were not subtle; babies almost always showed this pattern of response.

Does our research show that babies believe that the helpful character is good and the hindering character is bad? Not necessarily. All that we can safely infer from what the babies reached for is that babies prefer the good guy and show an aversion to the bad guy. But what’s exciting here is that these preferences are based on how one individual treated another, on whether one individual was helping another individual achieve its goals or hindering it. This is preference of a very special sort; babies were responding to behaviors that adults would describe as nice or mean. When we showed these scenes to much older kids — 18-month-olds — and asked them, “Who was nice? Who was good?” and “Who was mean? Who was bad?” they responded as adults would, identifying the helper as nice and the hinderer as mean.

To increase our confidence that the babies we studied were really responding to niceness and naughtiness, Karen Wynn and Kiley Hamlin, in a separate series of studies, created different sets of one-act morality plays to show the babies. In one, an individual struggled to open a box; the lid would be partly opened but then fall back down. Then, on alternating trials, one puppet would grab the lid and open it all the way, and another puppet would jump on the box and slam it shut. In another study (the one I mentioned at the beginning of this article), a puppet would play with a ball. The puppet would roll the ball to another puppet, who would roll it back, and the first puppet would roll the ball to a different puppet who would run away with it. In both studies, 5-month-olds preferred the good guy — the one who helped to open the box; the one who rolled the ball back — to the bad guy. This all suggests that the babies we studied have a general appreciation of good and bad behavior, one that spans a range of actions.

A further question that arises is whether babies possess more subtle moral capacities than preferring good and avoiding bad. Part and parcel of adult morality, for instance, is the idea that good acts should meet with a positive response and bad acts with a negative response — justice demands the good be rewarded and the bad punished. For our next studies, we turned our attention back to the older babies and toddlers and tried to explore whether the preferences that we were finding had anything to do with moral judgment in this mature sense. In collaboration with Neha Mahajan, a psychology graduate student at Yale, Hamlin, Wynn and I exposed 21-month-olds to the good guy/bad guy situations described above, and we gave them the opportunity to reward or punish either by giving a treat to, or taking a treat from, one of the characters. We found that when asked to give, they tended to choose the positive character; when asked to take, they tended to choose the negative one.

Dispensing justice like this is a more elaborate conceptual operation than merely preferring good to bad, but there are still-more-elaborate moral calculations that adults, at least, can easily make. For example: Which individual would you prefer — someone who rewarded good guys and punished bad guys or someone who punished good guys and rewarded bad guys? The same amount of rewarding and punishing is going on in both cases, but by adult lights, one individual is acting justly and the other isn’t. Can babies see this, too?

To find out, we tested 8-month-olds by first showing them a character who acted as a helper (for instance, helping a puppet trying to open a box) and then presenting a scene in which this helper was the target of a good action by one puppet and a bad action by another puppet. Then we got the babies to choose between these two puppets. That is, they had to choose between a puppet who rewarded a good guy versus a puppet who punished a good guy. Likewise, we showed them a character who acted as a hinderer (for example, keeping a puppet from opening a box) and then had them choose between a puppet who rewarded the bad guy versus one who punished the bad guy.

The results were striking. When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior.

All of this research, taken together, supports a general picture of baby morality. It’s even possible, as a thought experiment, to ask what it would be like to see the world in the moral terms that a baby does. Babies probably have no conscious access to moral notions, no idea why certain acts are good or bad. They respond on a gut level. Indeed, if you watch the older babies during the experiments, they don’t act like impassive judges — they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events (remember the toddler who smacked the bad puppet). The babies’ experiences might be cognitively empty but emotionally intense, replete with strong feelings and strong desires. But this shouldn’t strike you as an altogether alien experience: while we adults possess the additional critical capacity of being able to consciously reason about morality, we’re not otherwise that different from babies — our moral feelings are often instinctive. In fact, one discovery of contemporary research in social psychology and social neuroscience is the powerful emotional underpinning of what we once thought of as cool, untroubled, mature moral deliberation.

Is This the Morality We’re Looking For?
What do these findings about babies’ moral notions tell us about adult morality? Some scholars think that the very existence of an innate moral sense has profound implications. In 1869, Alfred Russel Wallace, who along with Darwin discovered natural selection, wrote that certain human capacities — including “the higher moral faculties” — are richer than what you could expect from a product of biological evolution. He concluded that some sort of godly force must intervene to create these capacities. (Darwin was horrified at this suggestion, writing to Wallace, “I hope you have not murdered too completely your own and my child.”)

A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”

The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which baby-making just isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)

The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense then to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.

But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.

The notion at the core of any mature morality is that of impartiality. If you are asked to justify your actions, and you say, “Because I wanted to,” this is just an expression of selfish desire. But explanations like “It was my turn” or “It’s my fair share” are potentially moral, because they imply that anyone else in the same situation could have done the same. This is the sort of argument that could be convincing to a neutral observer and is at the foundation of standards of justice and law. The philosopher Peter Singer has pointed out that this notion of impartiality can be found in religious and philosophical systems of morality, from the golden rule in Christianity to the teachings of Confucius to the political philosopher John Rawls’s landmark theory of justice. This is an insight that emerges within communities of intelligent, deliberating and negotiating beings, and it can override our parochial impulses.

The aspect of morality that we truly marvel at — its generality and universality — is the product of culture, not of biology. There is no need to posit divine intervention. A fully developed morality is the product of cultural development, of the accumulation of rational insight and hard-earned innovations. The morality we start off with is primitive, not merely in the obvious sense that it’s incomplete, but in the deeper sense that when individuals and societies aspire toward an enlightened morality — one in which all beings capable of reason and suffering are on an equal footing, where all people are equal — they are fighting with what children have from the get-go. The biologist Richard Dawkins was right, then, when he said at the start of his book “The Selfish Gene,” “Be warned that if you wish, as I do, to build a society in which individuals cooperate generously and unselfishly toward a common good, you can expect little help from biological nature.” Or as a character in the Kingsley Amis novel “One Fat Englishman” puts it, “It was no wonder that people were so horrible when they started life as children.”

Morality, then, is a synthesis of the biological and the cultural, of the unlearned, the discovered and the invented. Babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness. Regardless of how smart we are, if we didn’t start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest. But our capacities as babies are sharply limited. It is the insights of rational individuals that make a truly universal and unselfish morality something that our species can aspire to.

Paul Bloom is a professor of psychology at Yale. His new book, “How Pleasure Works,” will be published next month.

Saturday, March 27, 2010

Psychopaths’ brains wired to seek rewards no matter what

March 15, 2010
Courtesy Vanderbilt University
and World Science staff
“Psycho.” The very word conjures images of cold, remorseless criminality. But scientists don’t fully understand how the brains of psychopaths—people with antisocial, empathy-short and sometimes criminal personalities—work.

A study has now found that the brains of psychopaths seem to be wired to keep seeking a reward at any cost. Scientists say the research clarifies the role of the brain’s reward system in psychopathy and opens a new area of study for understanding what drives these twisted minds.

The study, from Vanderbilt University in Nashville, Tenn., is published in the March 14 issue of the research journal Nature Neuroscience.

Image caption: Abnormalities in how a brain structure called the nucleus accumbens processes dopamine have been found in people with psychopathic traits, scientists say. (Credit: Gregory R. Samanez-Larkin and Joshua W. Buckholtz)
“Psychopaths are often thought of as cold-blooded criminals who take what they want without thinking about consequences,” Joshua Buckholtz, a graduate student in psychology and lead author of the new study, said. “We found that a hyper-reactive dopamine reward system may be the foundation for some of the most problematic behaviors associated with psychopathy, such as violent crime, recidivism and substance abuse.”

Dopamine is the brain chemical most closely associated with pleasure and excitement.

Previous research on psychopathy has focused on what these people lack—fear, empathy and interpersonal skills. The new research, however, examines what they have in abundance—impulsivity, heightened attraction to rewards and risk taking, said Buckholtz and his co-authors. Importantly, the latter traits are those most closely linked with the violent and criminal aspects of psychopathy, researchers said.

“There has been a long tradition of research on psychopathy that has focused on the lack of sensitivity to punishment and a lack of fear, but those traits are not particularly good predictors of violence or criminal behavior,” said Vanderbilt psychologist David Zald, co-author of the study.

“Our data is suggesting that something might be happening on the other side of things. These individuals appear to have such a strong draw to reward—to the carrot—that it overwhelms the sense of risk or concern about the stick.”

The researchers used a brain imaging technique called positron emission tomography, or PET, to measure dopamine release, in concert with a probe of the brain’s reward system using functional magnetic resonance imaging, or fMRI. “The really striking thing is with these two very different techniques we saw a very similar pattern—both were heightened in individuals with psychopathic traits,” Zald said.

Volunteers for the study took a personality test to gauge their level of psychopathic traits. These traits lie on a spectrum: violent criminals fall at its extreme end, but a normally functioning person can also have psychopathic traits to some degree. These traits include manipulativeness, egocentricity, aggression and risk taking.

The researchers gave the volunteers a dose of amphetamine, or speed, and then scanned their brains using PET to view dopamine release in response to the stimulant. Substance abuse has been shown in the past to be associated with alterations in dopamine responses. Psychopathy is strongly associated with substance abuse.

“Our hypothesis was that psychopathic traits are also linked to dysfunction in dopamine reward circuitry,” Buckholtz said. “Consistent with what we thought, we found people with high levels of psychopathic traits had almost four times the amount of dopamine released in response to amphetamine.”

The research subjects were later told they would receive some money for completing a simple task. Their brains were scanned with fMRI while they were performing the task. The researchers found that in those participants with more psychopathic traits, the dopamine reward area of the brain, the nucleus accumbens, was much more active while they were anticipating the reward.

“It may be that because of these exaggerated dopamine responses, once they focus on the chance to get a reward, psychopaths are unable to alter their attention until they get what they’re after,” Buckholtz said. Added Zald, “It’s not just that they don’t appreciate the potential threat, but that the anticipation or motivation for reward overwhelms those concerns.”

Thursday, February 11, 2010

Miryachit, the Mysterious Siberian Mental Disorder

You know how kids will “copy” one another just to be annoying? This usually leads to whines of protestation: “Mom! Tell Jimmy to quit copying me!” Well, if Jimmy were a Siberian Russian around the turn of the last century, chances are he would’ve been diagnosed with Miryachit — a bizarre condition the description of which I recently stumbled across.

The only definitive article on the subject of miryachit seems to have been written by a 19th century surgeon named William Hammond, who based his theories on a report written by the captain of a Navy ship sailing past Siberia to Europe in the summer of 1882. I’ve heard about some strange psychological disorders, but I’ve never heard of anything like miryachit. What follows is a pitiful account of a Siberian ship’s steward being tormented by his crewmates in what amounts to the opposite of the “make him quit copying me!” scenario:

“It seemed that he was afflicted with a peculiar mental or nervous disease, which forced him to imitate everything suddenly presented to his senses. Thus, when the captain slapped the paddle-box suddenly in the presence of the steward, the latter instantly gave it a similar thump; or, if any noise were made suddenly, he seemed compelled against his will to imitate it instantly, and with remarkable accuracy. To annoy him, some of the passengers imitated pigs grunting, or called out absurd names; others clapped their hands and shouted, jumped, or threw their hats on the deck suddenly, and the poor steward, suddenly startled, would echo them all precisely, and sometimes several consecutively. Frequently he would expostulate, begging people not to startle him, and again would grow furiously angry, but even in the midst of his passion he would helplessly imitate some ridiculous shout or motion directed at him by his pitiless tormenters. Frequently he shut himself up in his pantry, which was without windows, and locked the door, but even there he could be heard answering the grunts, shouts, or pounds on the bulkhead outside. He was a man of middle age, fair physique, rather intelligent in facial expression, and without the slightest indication in appearance of his disability.

“We afterward witnessed an incident which illustrated the extent of his disability. The captain of the steamer, running up to him, suddenly clapping his hands at the same time, accidentally slipped and fell hard on the deck; without having been touched by the captain, the steward instantly clapped his hands and shouted, and then, in powerless imitation, he too fell as hard and almost precisely in the same manner and position as the captain.”

More fascinating still, it seems that this particular condition is (or was) widely known in Siberia, and yet had rarely if ever been seen outside of it.

“In speaking of the steward’s disorder, the captain of the general staff stated that it was not uncommon in Siberia; that he had seen a number of cases of it, and that it was commonest about Yakutsk, where the winter cold is extreme. Both sexes were subject to it, but men much less than women. It was known to Russians by the name of ‘miryachit.’”

Other reports of the era compare miryachit to a similar condition noted in Java, called “Lata” (more commonly spelled “latah” today), and to a condition peculiar to a group known as “The Jumping Frenchmen of Maine,” which “was characterized by a marked and violent jump in response to sudden noise or startle.” But I can’t find anything debunking or really expanding on the condition — or very much written about it at all after the turn of the 20th century — and it makes me wonder, A) how many other “regional” diseases/disorders might be out there, and B) how many other bizarre conditions were described a century or more ago without anyone ever bothering to follow up?

In any case, the mind is a strange place, and the science of the mind is — I think it goes without saying — far from settled.