

Wednesday, November 27, 2019

Succeeding Without Brilliance


I recently got a note from a fan of this blog who was depressed because her IQ scores were not much above average. Her sadness deepened when she read a research paper from a group of 16 researchers at several prestigious universities who asserted that education after you are 20 years old doesn't much improve your intelligence. My reader said, "That is devastating for me, as it makes it clearer that I'm pretty stuck in my 'average' position. Do you think that this analysis is clearly conclusive? Or is there still some way to improve myself?"

She continued, “I dream of finishing my degree in Electronical Engineering and then going for Physics, but after seeing that analysis and lots of IQ charts for job positions and careers, I’m pretty disappointed.”

Before addressing her concerns, I need to summarize the paper that dismayed her. The report, published in the prestigious Proceedings of the National Academy of Sciences (PNAS), was based on what the authors called General Cognitive Ability (GCA), which they defined as any IQ-like summary or principal-component index of overall cognitive function. They acknowledged and referenced some studies that have found that additional education increases intelligence, but their hypothesis was the opposite.

One thing the researchers did was conduct a basic meta-analysis of seven studies (10 datasets) with pre- and post-comparisons. The basic finding was that each additional year of education accounted for an average of 1.20 additional later-life IQ points.

My blog fan apparently missed the good news. That is easy to do, because the paper was one of the most poorly written and confusing research reports I have read in decades. This is what you might expect from anything written by 16 academics. You can see the meta-analysis result as a glass half empty or half full. The half-full view is that four years of formal post-high-school education raises your IQ almost 5 points on average. Don't bet the farm on this conclusion, though: these studies had an excessive number of uncontrolled variables.
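To make the glass-half-full arithmetic concrete, here is a trivial sketch of my own (the linear extrapolation is an assumption on my part; the meta-analysis reports only an average per-year effect, not that it scales indefinitely):

```python
# Back-of-the-envelope projection from the meta-analytic estimate of
# ~1.20 later-life IQ points per additional year of education.
POINTS_PER_YEAR = 1.20  # average effect reported in the PNAS meta-analysis

def projected_iq_gain(years_of_education: float) -> float:
    """Naively projects a later-life IQ gain, assuming the per-year
    effect is linear and uniform (a simplifying assumption)."""
    return POINTS_PER_YEAR * years_of_education

print(projected_iq_gain(4))   # four years of college -> 4.8 points
print(projected_iq_gain(10))  # a long academic path -> 12.0 points
```

Of course, as noted above, the uncontrolled variables in those studies mean this arithmetic is an average tendency, not a personal guarantee.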

The PNAS paper did cite a study reporting that completing a university education led to a midlife gain of 6 to 22.4 IQ points over adolescent cognitive ability, compared with individuals who did not attend university. The students in that study were tested for IQ at age 15. That means that an average IQ of 100 could have jumped to 122, which is definitely adequate for most intellectually challenging careers. And this assumes just four years of ordinary college, without regard to major or rigor of intellectual challenge. Trust me, all college education is not equal.

These authors also conducted their own study and found little effect of education on IQ in an all-male, predominantly white, non-Hispanic sample at ages 56 to 66. For example, averaging data across a large pool of subjects, they report that age-20 GCA accounted for 40% of GCA variance in late midlife and approximately 10% of variance in each of seven other cognitive domains. Averaging obscures the detection of individuals who could have had large GCA gains from education and life experience. Moreover, the kind of education and life experience must surely have varied widely and was not accounted for in the study. Even so, 60% of the variance in GCA did NOT depend on the test scores the men had earned when they were 20 years old. Don't forget that 90% of the late-life variance in each of the other cognitive domains was influenced by something other than those age-20 scores.
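As a quick numerical aside (my own sketch, not from the paper): "accounts for 40% of the variance" is an r-squared statement, so the implied correlation between age-20 scores and late-midlife GCA is the square root of the shared variance:

```python
import math

# "Accounts for X% of the variance" means r-squared = X;
# the implied correlation is the square root of the shared variance.
def implied_correlation(variance_explained: float) -> float:
    return math.sqrt(variance_explained)

print(round(implied_correlation(0.40), 2))  # GCA: r of about 0.63
print(round(implied_correlation(0.10), 2))  # other domains: r of about 0.32
print(f"unexplained GCA variance: {1 - 0.40:.0%}")  # 60%
```

A correlation of 0.63 is substantial, but it still leaves most of the variance open to other influences, which is the half-full reading I argue for above.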

The IQ-like test they used was a military qualification test (AFQT), known to correlate well with established IQ tests. All their subjects, military veterans, took the test around age 20 and again about three decades later. Their data were interpreted to indicate that education does not make one much smarter. One result seemed especially clear: individuals with higher intellectual capacity tend to attain more education, achieve higher occupational status, and engage more in cognitive-intellectual activities.

There was an association of education, occupational complexity, and cognitive-intellectual activities with better later-life cognitive functioning, but the authors argue that these factors are not the cause of late-life ability. In other words, smart people are smarter when they are older because they were smarter to begin with. They became educated because they were already smart enough to seek education, not because education made them smart. The authors did concede that they were unable to definitively confirm their hypotheses regarding possible sensitive periods for brain development and the age of baseline testing. Such confirmation would require testing at multiple time points before the completion of education, all within the same study.

One clear take-home message is that most intellectual gains occur before the age of 20. This is why elementary and secondary school education are crucial for creating optimal intelligence. As a professor for over 50 years, I am convinced that public schools today are not doing as much to make youngsters smarter as was the case in previous decades. That does not mean that further gains cannot be obtained after age 20. Education and intellectually challenging life experience do produce intelligence gains, just not as much as they do in youngsters.

The preference of researchers for averaging data obscures finding out what happens for a particular person. Is a person age 20 with low IQ more or less able to benefit from education than a person with a higher initial IQ? Or is it the other way around? Would the effect of education be different for women or minorities?

The kind of education and intellectual life challenge surely matter. For example, do we really expect the same mental benefit from four years as a college physics major as from four years as an education major? Think also about the still greater benefit from a rigorous (emphasize rigorous) PhD program. Do we expect the same results from someone with little post-college training as from a life-long learner?

IQ scores are influenced by many things besides education, and these can confound how we interpret any effects of education. What about the age at which IQ is first tested? Brain development occurs throughout youth and extends past age 20. Obviously, IQ tests in elementary school are less valid than test results obtained after puberty.

Other variables affect IQ scores as well, particularly the mental state of the individual when the test was taken. Factors that will surely decrease scores, independently of actual cognitive ability, include sleep deficiency, emotional stress, and persistent mental distraction.

Consider especially stress. The persistent release of cortisol in chronic stress shrinks neuronal synapses and surely diminishes cognitive ability. The pool of veterans in this study must surely have varied widely in the amount of stress the men experienced during their military years. Some surely had combat-related PTSD, while others had non-stressful jobs.

One other thing: IQ tests measure how well you can figure things out, but only certain kinds of things, especially analogies. They also measure how fast you can solve a problem. Sometimes it doesn't matter how long it takes to solve a problem. Einstein worked on special relativity for at least 10 years, despite claims by some that it was a lightning-flash eureka moment.

What is my advice to my blog follower, and all those others, including me, with unimpressive IQs? First, do what you love that is helpful to you and others. But do not allow your reach to exceed your grasp. As the Army says, “Be all you can be.” The turtle sometimes beats the hare. But accept that the hare usually wins. Do not obsess or become stressed over your limitations, for that is counterproductive.

You should be happy and bring happiness to others. That should suffice. You don’t need the ability to invent relativity to be happy or make a meaningful contribution to others.

Source:
Kremen, William S. et al. (2019). Influence of young adult cognitive ability and additional education on later-life cognition. PNAS. 116(6), 2021-2026.




Sunday, May 27, 2018

IQ Changes in Teenagers


Common wisdom asserts that your IQ is fixed. Of course, the various “multiple intelligences” change with personal life experiences and growth, but we usually consider the standard IQ score to be inherent and unchangeable. But even the standard IQ measure changes during different life stages. Clearly, the IQ of young children changes as they mature. Several studies even show that working-memory training can raise the IQ of elementary-school children. More than one analyst claims that a rigorous PhD program can raise IQ in adults. Most obvious is the decline of IQ in those elderly who do not age well because of disease.

A neglected segment along the age spectrum is the teenage years. Now, evidence indicates that this age group experiences IQ changes ranging from a decline to an increase. A study of this issue shows that both verbal and non-verbal IQ scores in teenagers relate closely to the developmental changes that occur in brain structure during the teenage years. Longitudinal brain-imaging studies in the same individuals reveal that either increases or decreases in IQ occur coincident with structural changes in cerebral grey matter that occur in teenagers.

The study conducted MRI brain scans and IQ tests on 33 normal adolescents in their early teenage years and then again in their late teenage years. A wide range of IQs was noted: 77 to 135 at the early testing and 87 to 143 at the late testing. For a given individual, IQ scores changed by -20 to +23 points for verbal IQ and -28 to +17 points for non-verbal IQ. Correlation analysis revealed that increases in IQ were associated with increases in cortical density and volume in brain regions involved in verbal and movement functions.

The implications are profound, especially as they relate to the local environment of a given teenager. What happens during the teenager years apparently changes brain structure and mental ability. Many influences likely damage the brain, such as drug abuse, or social stress, or poor education and intellectual stimulation. Conversely, the data indicate that positive benefits to both brain structure and mental capability can result from a mentally healthy environment and rich educational experience.
The data suggest that all the emphasis on pre-school and “Head Start” initiatives may diminish our attention to the key role played by middle school and early high school. This confirms what many of us always suspected, namely that our society tends to insufficiently nurture “late bloomers.” Maybe the early high achievers who fail to live up to their promise do so, because we wrongly assume they can manage without much help. Parents, educators, and education policy makers need to take notice.
Few books can change a person's future. One of them could be my book, Better Grades, Less Effort, which explains the learning tips and tricks that I used to become valedictorian, when a high school teacher said my modest IQ did not justify the high grades I was making. Teachers predicted I "would have trouble with college." Really? I went on to be an Honors student in three universities -- including graduating early with a D.V.M. degree and securing a PhD in two-and-a-half years. My IQ documented that I was not so smart. I believe that poor learning skills are what hold back most students from superior achievement. This book can change a person's life, as my own experiences with learning how to learn have changed my life. I suspect it helped my brain development as well.

Source:

Ramsden, Sue, et al. (2011). Verbal and non-verbal intelligence changes in the teenage brain. Nature. May 17. doi:10.1038/nature10514.

Saturday, August 26, 2017

Do We See the World Like a Movie?

We have the feeling that we experience the world as a continuously sampled data stream. If we perceive multiple objects or events seemingly at the same time, we may actually be multiplexing the several data streams; that is, we take a sample from one data stream, switch to take a sample from the next stream, and so on, all on a millisecond time scale.

But another possibility is that we perceive objects and events like a movie frame, where the brain takes working-memory snapshots and plays them in succession. Like still frames in a movie, if played at a high-enough speed, the frames will blend in our mind to give the illusion of continuous monitoring.

In either case, we have to account for working memory. That is, we can only hold a small amount of information in our working memory at any one instant, as when dialing a seven-digit phone number you just looked up. In the phone number case, does our brain accumulate and buffer the representation of each digit until reaching the working-memory holding capacity and then report it to consciousness as a set? Or is each digit transferred to consciousness and concatenated until the working-memory capacity is filled?

A profound recent model of perception addresses the issue of continuous versus movie-like perception, but unfortunately it did not take working memory into consideration. The model did address the issue of how consciousness integrates the static and dynamic aspects of the object of attention. For example, when viewing a white, moving baseball, consciousness apparently tracks both the static white color and shape of the ball and its movement at the same time. Are these two visual features bundled together and made available to consciousness on a continual basis or as a batch frame?
A related issue is the so-called flash-lag illusion. Displaying a moving object and a stationary light flash at the same time and location creates the illusion that the flash is lagging. There is some debate over why this happens, but it does argue against continuous monitoring of linked objects.

Another phenomenon that argues against continuous monitoring is the “color phi” phenomenon. Here, if two differently colored disks are shown at two locations in rapid succession, a viewer perceives just one disk that moves from the first location to the second, and the color of the first disk changes along the illusory path of movement. But the viewer cannot know in advance what the color and location of the second disk is. The brain must construct that perception after the fact.
Another way of studying fusion phenomena is to show two differently colored disks in rapid succession at the same location. In this case, an initial red disk followed by a green disk will be perceived as a single yellow disk. A viewer cannot consciously recognize the individual properties if there is not enough time between the two disks. This suggests that information is batch-processed unconsciously and later made available to conscious awareness. Transcranial magnetic stimulation can disrupt the fusion, but only for about 400 milliseconds after the first stimulus, when presumably the processing is unconscious. Since the presentation of the two disks takes only about 60 milliseconds, unconscious processing of the fusion must take some 340 milliseconds before the results become available for conscious recognition.

Similar fusion can occur with other sense modalities. For example, the “cutaneous rabbit” effect is a somatosensory fusion illusion in which touch stimulation of first the wrist followed quickly by stimulation near the elbow produces the feeling of touch along the nerve pathway between the two points, as if a rabbit was hopping along the nerve. There is no way for conscious mind to know the pathway without the second touch near the elbow actually occurring. Perception of that pathway information is delayed until the information has been processed unconsciously.

So while these examples argue against continuous conscious monitoring of sensation, they don't fit well with the movie-frame idea either. We can distinguish two visual stimuli only 3 milliseconds apart, but a snapshot model that samples stimuli, say, every 40 milliseconds would miss the second stimulus. To reconcile these conflicting possibilities, the authors advance a two-step model in which sensations are processed unconsciously at high speed, but the conscious percept is reported periodically, or is read out when unconscious activity reaches a certain threshold or when there is top-down demand. This fits the data from others showing that conscious awareness is delayed after the actual sensory event. For visual stimuli, this delay can be as long as 400 milliseconds.
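For readers who like a concrete toy model, here is a minimal sketch of my own (not the authors' code, and every number in it is arbitrary) of the two-step idea: fast "unconscious" evidence accumulation at millisecond resolution, with a conscious readout only when accumulated activity crosses a threshold.

```python
# Toy illustration of the two-step model: continuous millisecond-scale
# unconscious processing, intermittent threshold-triggered conscious frames.
def two_step_perception(samples, threshold=5.0, decay=0.9):
    """Leakily accumulate evidence each millisecond; emit a conscious
    'frame' whenever the accumulator crosses the threshold, then reset."""
    evidence = 0.0
    readouts = []  # (time_ms, accumulated_evidence) pairs
    for t, s in enumerate(samples):
        evidence = evidence * decay + s  # fast, continuous accumulation
        if evidence >= threshold:
            readouts.append((t, round(evidence, 2)))
            evidence = 0.0  # frame reported; start a new batch
    return readouts

# A 1-ms-resolution stimulus stream: weak noise, a strong event, weak noise.
stream = [0.2] * 30 + [1.5] * 10 + [0.1] * 20
print(two_step_perception(stream))  # frames appear only after the strong event
```

The point of the toy is only to show how a system can sample input at millisecond resolution yet report to "consciousness" intermittently and with a delay, just as the model proposes.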

Here the question of interest is why sensory awareness might require a mixture of continuous monitoring and periodic reporting of immediately prior data segments. Continuous monitoring and processing permits high temporal resolution. Snapshot reporting conserves neural resources because information accumulates as a batch (a few bytes) before becoming available to consciousness. The really interesting question is what, if anything, happens to that string of movie-like snapshots captured in consciousness. How do these frames affect subsequent unconscious processing in the absence of further sensory input? Can unconscious processes capture and operate on the frames of conscious data? Or can successive frames of conscious data be processed batchwise in consciousness? A useful analogy might be whole-word reading. A beginning reader must sound out each letter in a word, which is comparable to the high-resolution time tracking of sensory input. However, whole-word reading allows more efficient capture of meaning because the meaning has been batch pre-processed.

How do these ideas fit with the claim of other scholars that consciousness is just an observer witnessing the movie of life as it occurs? That assumption ignores the role that consciousness might have in reasoning, making decisions, and issuing commands. I argue this point elsewhere in my books, Mental Biology and Making a Scientific Case for Conscious Agency and Free Will.

Research claimed to show that free will is illusory needs reinterpretation in light of this two-step model of perception. Those experiments typically involved asking subjects to make a simple movement, like pressing a button, whenever they "freely" wanted to do so. They were to note when they made the decision by looking at a large, high-resolution clock. At the same time, their brain activity was monitored before, during, and after the chain of events.

The first event is the intention to press the button. Intention is a conscious event. Was it preceded by unconscious high-resolution processing? If so, what was the need for high resolution? Or maybe this is just the way the brain is built to operate. The button-press decision-making is a slow, deliberative process, which perhaps could be handled consciously as a slow progression of successive frames of conscious thought. Critics may say that there is no such thing as conscious processing, but there is no evidence for that conjecture. Once an intent is consciously realized, the subject is now thinking about when to make the press. This decision may well be determined unconsciously, but again there is no need for high temporal resolution. Moreover, there are intervening conscious steps, where the subject may think to himself, "I just did a press. Shouldn't I wait? Is there any point in making many presses with short intervals? Or with long intervals? Or with some random mixture?" Is each of these questions answered by the two-step model of sensory processing? However the decision developed, corresponding brain electrical activity is available to be measured.

Then there is the actual button press, the conscious realization that it has occurred, and the conscious registration of the time on the clock when the subject thought the decision to press was made. Does the two-step model apply here? If so, there must be considerable timing delays between what actually happened consciously in the brain and when the subject eventually became aware of those conscious thoughts.

The point is that the two-stage model of perception may have profound implications beyond sensation that involve ideation, reasoning, decision-making, and voluntary behavior. I have corresponded with the lead author to verify that I have a correct understanding of the publication. He said that his group does plan to study the implications for working memory and for free will.

Source:

Herzog, M. H., Kammer, T., and Scharnowski, F. (2016). Time slices: What is the duration of a percept? PLOS Biology. April 12. https://doi.org/10.1371/journal.pbio.1002433


Monday, April 10, 2017

Victim of Biology and Circumstance?

An area of controversy in the life sciences relates to the relative roles of genetics and the environment. Confusion commonly afflicts politics. For example, early Communists glommed on to the discredited genetic theory of "inheritance of acquired characteristics." This theory holds that changing a person's attitude and behavior would somehow result in changes to his or her genes, which would allow for genetic transmission of the changed attitudes and behavior to his or her children. For this idea to be true, outside influences on the brain would have to change the genes not only in brains but also in the sex cells (sperm and egg cells). The idea was held in ancient times by Hippocrates and Aristotle, but it gained scholarly imprimatur with formal publication in 1809 by Jean-Baptiste Lamarck. In the 1930s, Trofim Lysenko, president of the Soviet Academy of Agricultural Sciences, applied the doctrine to Soviet agriculture with disastrous results. At the same time, Soviet political leaders extended the mistaken doctrine to inheritance of educational and social experiences; that is, changing human nature by government policy. They expected that indoctrinating the current generation in collectivism would genetically transfer collectivist attitudes and behavior to all future generations. Cuba, North Korea, and China showed that collectivism can be transferred culturally but not biologically.
In the United States, much political angst arises from disputes over whether more effective educational and social policies will succeed in lifting people out of poverty and dysfunctional behaviors. When I was a child, I often heard the axiom, “You can take the boy out of the country, but you can’t take the country out of the boy.” Today, the corresponding axiom would seem to be, “You can take the boy out of the ghetto, but you can’t take the ghetto out of the boy.” The reality is that you can take the country or ghetto out of the boy, but this won’t transfer to his children by his genes.
What we are now discovering is that environment and experience affect the expression of genes. Whether or not genes are accessible for readout often depends on the environment. People have underestimated their capacity to sculpt their own brains, attitudes, and behavior by controlling experiences that affect gene expression. Though people may control to some extent how their own genes are expressed, there won’t be any biological transfer to their heirs. Environmental and cultural influences do of course transfer, so one’s heirs can be taught how to likewise exert control over how their genes are expressed.
Having the right chemicals in the right environment at the right time is believed by most scientists to be all that is needed for creating life and shaping the mental life of the individual. To them, life seems like a highly improbable occurrence. But it did happen, and, even more improbably, there may be a life force that sustains it.
Many scientists also think of the brain’s conscious mind as an emergent property of brain function. Emergent properties follow the rule that the whole is greater than the sum of its parts. Another way of saying this is that the properties of the whole cannot be predicted from what you know about the properties of the contributing parts. Yet, paradoxically, most scientists believe that as they learn more and more about less and less, they will somehow explain the whole.
Emergent properties apply both to molecules in a primordial soup that generate simple living organisms and to the 87 billion or so neurons of a human brain that generate a conscious mind. A physical world that can generate emergent properties is a mysterious and magical world indeed.
What gets left out in such consideration is the capacity for personal control over one’s biology, which is an important theme. I contend that at the level of the individual person, mind itself—especially conscious mind—is a major force of natural selection that drives creation of mental capacity and character. The implications for daily living could not be more profound. Accepting one’s biology and circumstance breeds helplessness and fatalism. So, it boils down to one’s belief system. Either you are “captain of your own ship, master of your own fate,” or you are shackled by the belief that change is not possible. What you think and do shapes your brain's function.

Excerpted from Mental Biology: The New Science of How the Brain and Mind Relate, by W. R. Klemm. New York: Prometheus. See rave reviews at WRKlemm.com; click on "author."


Friday, October 04, 2013

Landmark Research: Why We Need to Get Enough Sleep

In other blog posts I have explained why sleep is good for the brain in general and memory formation in particular. Now a new discovery provides another reason for people to get enough sleep. The study examined a type of support cell in the brain, oligodendrocytes (let's call them oligos for short). These cells wrap their membranes around nerve cells to form what is called myelin, an electrical insulation that speeds up the propagation of nerve impulses through neural networks. You may have heard about oligos in reading about multiple sclerosis, a disease that impairs nerve communication because oligos die and the myelin insulation degrades.

Speed of transmission is important; it influences IQ, for example. As you know from buying a new computer, a faster processor gives it capabilities your old clunker could not manage. A similar idea applies to the brain.

Anyway, this new study, from the University of Wisconsin, focused on oligos because other research had shown that sleep promotes the expression of several genes involved in the synthesis of cell membranes in general and those of oligos in particular. Unlike neurons, oligos die and are replaced in the brain. Thus, anything that affects their turnover is important for brain function. Sleep has been implicated in this turnover because a common neurotransmitter in the brain, glutamate, is known to increase in wakefulness and decline during sleep. Glutamate suppresses maturation of oligo precursor cells into myelin-forming cells.

In this particular study, investigators examined a genome-wide profile of oligo gene expression in mice after 6-7 hour periods of sleep or spontaneous wakefulness, or after four hours of forced wakefulness (sleep deprivation). They found that 357 genes were expressed differently depending on the time of day, in response to normal daily rhythms. More dramatic was the observation that 714 genes changed expression in conjunction with the sleep/wakefulness cycle, independent of the time of day. Of these genes, 310 were "sleep" genes that were selectively activated during sleep.

Many of the sleep genes contribute to maturation of oligos into myelin-forming cells. In follow-up experiments, mice were injected with a radiolabeled tag that marks the birth of new cells. Injection occurred eight hours before the mice spent a long period of either wakefulness or sleep. The number of newly born oligos was almost double in the sleep group compared to the wake group. More detailed analysis showed that this increase was specifically correlated with the amount of REM sleep (dream sleep in humans).

This REM effect may have particular importance in humans. Most REM sleep occurs in the early morning hours and only after substantial time has been spent in non-REM stages of sleep. Thus, cutting a night’s sleep short by getting up early may decrease the amount of REM time and thus the beneficial effects on oligo proliferation. So don’t feel guilty about “sleeping in” from time to time.

We might also think about how these findings could have special relevance to children, whose brains are incompletely myelinated. Getting children up early in the morning to start school at 8 AM may not be such a good idea. Until school districts get around to changing school hours, you might tell your kids about my learning and memory improvement e-book, Better Grades, Less Effort, available at Smashwords.com.

Source:


Bellesi, M., et al. (2013) Effects of sleep and wake on oligodendrocytes and their precursors. J. Neuroscience. 33 (36), 14288-14300.

Tuesday, September 10, 2013

Brain Exercise Works

Most people by now have been told that mental activity is good for the brain. I have even posted information that it can build "cognitive reserve" that can delay or reduce the symptoms of Alzheimer's disease. Therefore, it would be no surprise if popularity increased for mentally stimulating games like crossword puzzles, Sudoku, bridge, dominoes, chess, and the like.

In addition to these traditional games, another form of mental stimulation is to learn mnemonic techniques, such as creating associations with mental images, acrostics, acronyms, the method of loci, mental imaging of peg-words, and the like, which I explain in my books, Memory Power 101 and Better Grades, Less Effort. While these techniques are task specific, mastering them can produce benefits that last beyond the time when you are using these mnemonics. For example, when I was in high school, I used to give memory demonstrations using a well-known image-word peg system. Even when I quit doing that, my general capacity for remembering remained better than before because my brain had been trained to be more agile and imaginative in generating images that I could use in making memory associations. My mind was also probably more disciplined.

The scientific basis for such claims is solid. Numerous research reports confirm that even older people can improve their memory skills with instruction and practice.[1] Even with traditional memory training, research has shown that by teaching people multiple strategies, the training benefit can be seen immediately, can endure for up to five years, and even transfer to everyday learning tasks.

The scientific explanation is straightforward. When the brain is challenged to solve problems and enhance memory capability, the neurons have to grow new contact points among neurons. This process requires new protein synthesis, growth of neuron terminals, and boosting of neurotransmitter systems. In other words, mental challenge changes the brain physically. Through training, you can sculpt a more alert, focused, and smarter brain.

As a result of this understanding, a host of mental training options has become available. The hype often sounds like snake oil, but some training programs are demonstrably effective. For example, we know from published research that I have described before that working-memory capacity can be extended by formal training and that IQ increases as a result.

A new emphasis seems to be emerging on creating training platforms that are cost-effective, self-administered, flexible, and easily distributed to wide segments of the population. CD, audiotape, and web-based approaches can reduce the need for trainers who work one-on-one or with small groups. Web-based training seems the most feasible, except for the current crop of elderly, many of whom do not use the Internet.

Effective training need not specifically address memory. Non-specific mental stimulation can improve memory capability, because whatever affects the brain affects the brain's ability to remember things. Especially promising are training programs that train people to be more attentive, to have more positive attitudes about their memory ability, to reduce anxiety and stress, and to apply memory techniques to everyday mental tasks.[1] When benefits from memory training persist after the training, researchers assume it is because the trainees are still using the techniques they have learned. Method-of-loci and peg-word systems are extremely powerful, but it is hard to get people to create new habits of thinking and memorization. Even so, memory training produces other lasting effects that benefit memory irrespective of the explicit use of techniques. One of these effects is actual re-wiring of the brain, which intense learning is known to produce.

Many sites on the Web focus on teaching people about mental fitness in general, which as I just said, has collateral benefit on memory capability. One site I recommend, and have posted to, is Sharp Brains (http://sharpbrains.com/). Among the better known Web training programs are Brainware Safari and Lumosity (I have no conflict of interest here). Using “brain fitness” as search words in Google or Bing will identify many other sites that I am not familiar with.

Recently, a new three-dimensional video game, "NeuroRacer," that reportedly works even for older adults has been developed at the University of California, San Francisco.[2] In this game, a user navigates a race car along a winding track and hits a button on a controller whenever a green circle appears, making the response as quickly as possible. This task forces concentration and trains the brain to switch operations rapidly and accurately.

In a recently published test of NeuroRacer's effects on older adults, people aged 60 to 85 were trained on the game for 12 hours, spread over a month. Without training, the researchers found a clear age-related decline in performance in the game. After training, the seniors performed better on the game than untrained 20-year-olds, and the benefit lasted at least six months.

Popular press reports and numerous blogs of this study have attributed the benefit to the value of multi-tasking. I contend that multi-tasking is harmful for memory and, moreover, that the benefit of NeuroRacer is not multi-tasking training as such but rather the training it provides for attentiveness and executive control.  It is perhaps not surprising that such good effects were seen in older folks. A typical problem in aging is a loss in ability to focus, and thus training that increases attentiveness would be likely to have conspicuously beneficial effects.



[1] Rebok, G. W., Carlson, M. C., and Langbaum, J. B. S. (2007). Training and maintaining memory abilities in healthy older adults: traditional and novel approaches. J. Gerontology. 62B (Special Issue): 53-61.

[2] Anguera, J. A. et al. (2013). Video game training enhances cognitive control in older adults. Nature 501: 97-101.