Thursday, November 20, 2014

HI-BAR: More benefits of Lumosity training for older adults?

HI-BAR (Had I Been A Reviewer)

For more information about HI-BAR reviews, see my post introducing the acronym

Ballesteros S, Prieto A, Mayas J, Toril P, Pita C, Ponce de León L, Reales JM and Waterworth J (2014) Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial. Front. Aging Neurosci. 6:277. doi: 10.3389/fnagi.2014.00277

Update: I posted a brief summary of this HI-BAR as a comment on the paper at Frontiers, and Dr. Ballesteros responded to indicate that this was the same training study. Her response quoted the one-sentence citation described below and noted that they did not intend to give the impression that this was a new intervention. Her reply did not address the overlap in presentation of methods and results.

Update 2: Dr. Ballesteros sent me a response to my questions [pdf]. I have just posted a new blog post in which I quote from her response and add new comments.  

This paper testing the benefits of Lumosity training for older adults just appeared in Frontiers. The paper is co-authored by two of the same people (Ballesteros and Mayas) as a recent PLoS One paper on Lumosity training that I critiqued in a HI-BAR in April (note that the new paper includes six additional authors who were not on the PLoS paper, and the PLoS paper includes two who weren't on the Frontiers paper). 

I was hopeful that this new paper would address some of the shortcomings of the earlier one. Unfortunately, it didn't. In fact, it couldn't have, because the "new" Frontiers paper is based on the same intervention as the PLoS paper.

And, when I say, "the same intervention," I don't mean that they directly replicated the procedures from their earlier paper. The Frontiers paper reports data from the same intervention with the same participants. It would be hard to know that from reading just the Frontiers paper, though, because it does not mention that these are additional measures from the same intervention. 

As my colleagues and I have noted (ironically, in Frontiers), this sort of partial reporting of outcome measures gives the misleading impression that there are more interventions demonstrating transfer of training than actually exist. If a subsequent meta-analysis treats these reports as independent, it will produce a distorted estimate of the size of any intervention benefits. Moreover, whenever a paper does not report all outcome measures, there's no way to appropriately correct for multiple comparisons. At a minimum, all publications derived from the same intervention study should state explicitly that it is the same intervention and identify all outcome measures that will, at some point, be reported. 

The Frontiers paper cites the PLoS paper only twice. The first citation appears in the method section in reference to the oddball outcome task that was the focus of the PLoS paper:
Results from this task have been reported separately (see Mayas et al., 2014).
That is the only indication in the paper that there might be overlap of any kind between the study presented in Frontiers and the one presented in PLoS. It's a fairly minimalist way to indicate that this was actually a report of different outcome measures from the same study. 

It's also not entirely true: The results section of the Frontiers paper describes the results for that oddball task in detail as well, and all of the statistics for that task reported in the Frontiers paper were also reported in the PLoS paper. The results section does not acknowledge the overlap. Figure 3b in the Frontiers paper is nearly identical to Figure 2b in the PLoS paper, reporting the same statistics from the oddball task with no mention that the data in the figure were published previously.

After repeating the main results of the oddball task from the earlier paper, the Frontiers paper states:
"New analyses were conducted on distraction (novel vs. standard sound) and alertness (silence vs. standard sound), showing that the ability to ignore relevant sounds (distraction) improved significantly in the experimental group after training (12 ms) but not in the control group. The analyses of alertness showed that the experimental group increased 26 ms in alertness (p < 0.05) but control group did not (p = 0.54)."
I thought "new analyses" might mean that these analyses were new to this paper, but they weren't. Here's the relevant section of the PLoS paper (which reported the actual statistical tests):
"In other words, the ability to ignore irrelevant sounds improved significantly in the experimental group after video game training (12 ms approximately) but not in the control group....Pre- and post-training comparisons showed a 26ms increase of alertness in the experimental group..."
Stating that "results from this task have been reported separately" implies that any analyses of that task reported in Frontiers are new, but they're not.

The Frontiers paper also reports the same training protocol and training performance data without reference to the earlier study. The Frontiers paper put the training data in a figure whereas the PLoS paper put them in a table. Why not just cite the earlier paper for the training data rather than republishing them without citation?

The same CONSORT diagram appears as Figure 1 in both papers. The Frontiers paper CONSORT diagram does report that the experimental condition had data from 17 participants rather than the 15 participants reported in the PLoS paper. From the otherwise identical figures, it appears that those two additional participants were the ones the PLoS paper noted were lost to follow-up due to "personal travel" and "no response from participant." I guess both responded after the PLoS paper was completed, although the method sections of the two papers noted that testing took place over the same time period (January through July of 2013). The presence of data from these two additional participants did not change any of the statistics for the oddball task that were reported in both papers. 

One other oddity about the subject samples in the CONSORT diagram: The PLoS paper excluded one participant from the control condition analyses due to an "Elevated number of errors," leaving 12 participants in that analysis. The Frontiers paper did not exclude that participant. If they made too many errors for the PLoS paper, why were they included in the Frontiers paper?

Other than the one brief note in the method section about the oddball task, the only other reference to the PLoS paper appears in the discussion, and it gives the impression that the PLoS paper provided independent evidence for the contribution of frontal brain regions to alertness and attention filtering:
We also found that the trainees reduced distractibility by improving alertness and attention filtering, functions that decline with age and depend largely on frontal regions (Mayas et al., 2014).
That citation and the one in the method section are the only mentions of the earlier paper in the entire article. 

Given that I was already familiar with the PLoS paper, had I been a reviewer of this paper, here's what I would have said:

  1. The paper suffers from all of the same criticisms I raised about the intervention in the PLoS paper (e.g., an inadequate control group, among many other issues), and in my view, those shortcomings should preclude publication of this paper. See my earlier HI-BAR for details.
  2. The paper should have explained that this was the same intervention study, with the same participants, as the earlier PLoS One paper.
  3. The paper should have noted explicitly which data and results were reported previously and which were not.
It's possible that the authors notified the editor of the overlap between these papers upon submission. If so, it is surprising that the paper was even sent out for review before these issues were addressed. If the editor was not informed of the overlap, I cannot fault the editor and reviewers for missing the fact that the intervention, training results, and some of the outcome results were reported previously. Unless they happened to know about the PLoS One paper, they could be excused for missing the one reference to that paper in the method section. Still, the many other major problems with this intervention study probably should have precluded publication, at least without major corrections and qualifications of the claims.