
The dilemma of neuroimaging and cognitive science

☕ 2 min read · ✍️ Wei-Hsiang

Recently, the journal Nature published an article titled “Cognitive neuroscience at the crossroads,” which briefly discusses the problems facing cognitive neuroscience. Below, I summarize its content and add my own thoughts.



Cognitive neuroscience is a complex discipline that combines psychology, biology, neuroscience, information science, and other fields. One common approach is to use a range of techniques to link human behavior to its underlying neural mechanisms. EEG and fMRI are among the most widely used neuroimaging techniques because they detect signals inside the brain non-invasively, which makes them popular with researchers. Given their usefulness, it is no surprise that they are used extensively. But how trustworthy are the results they produce? In a now-famous demonstration, Bennett et al. (2009) placed a dead salmon in an MRI scanner and presented images for the deceased fish to “look at.” The scan showed significant activity in certain areas, which should be impossible! The real point of the study, of course, is that neuroimaging analyses must undergo rigorous statistical testing, including correction for multiple comparisons, to eliminate false-positive results; the first sketch below shows how easily uncorrected voxel-wise tests manufacture “activation” out of pure noise.

This raises doubts about whether these brain imaging techniques are a good approach for studying cognitive neuroscience at all. Marek et al. (2022) conducted a series of simulations to explore the stability of brain-wide association results, particularly how the number of participants affects the final outcome. Their main conclusion is that samples of thousands of participants are needed to ensure the observed associations are not due to chance (the second sketch below illustrates why). This finding brings both good news and bad news. The good news is that researchers can save time by abandoning studies that cannot reach those numbers and quickly escape the suffering of their PhD journey. The bad news is that well-funded supervisors may ask you to recruit those thousands of participants; after all, they have the money to do so.
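First, a minimal sketch of the multiple-comparisons problem. This is not Bennett et al.’s actual pipeline; the voxel and scan counts are invented for illustration. It runs one t-test per voxel on pure noise, exactly as a mass-univariate fMRI analysis would, with and without Bonferroni correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_scans = 50_000, 20  # hypothetical voxel and scan counts

# Pure noise: by construction, no voxel truly responds to the stimulus.
condition_a = rng.normal(size=(n_voxels, n_scans))
condition_b = rng.normal(size=(n_voxels, n_scans))

# One t-test per voxel, as in a mass-univariate fMRI analysis.
t, p = stats.ttest_ind(condition_a, condition_b, axis=1)

alpha = 0.05
print(f"Uncorrected 'active' voxels:  {(p < alpha).sum()}")            # roughly 2,500
print(f"Bonferroni-corrected voxels: {(p < alpha / n_voxels).sum()}")  # almost always 0
```

Run enough tests and about 5% will cross p < .05 by chance alone; that is all the dead salmon’s “brain activity” was.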

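Second, a toy illustration of the sample-size problem. This is an assumption-laden sketch, not Marek et al.’s analysis: the true brain-behavior correlation is assumed to be a weak r = 0.1, and we watch how wildly the sample estimate swings at small n:

```python
import numpy as np

rng = np.random.default_rng(1)
true_r = 0.1  # assumed weak brain-behavior correlation
cov = [[1, true_r], [true_r, 1]]

for n in (25, 100, 1_000, 10_000):
    # Repeatedly sample n (brain feature, behavior score) pairs and re-estimate r.
    estimates = np.array([
        np.corrcoef(*rng.multivariate_normal([0, 0], cov, size=n).T)[0, 1]
        for _ in range(2_000)
    ])
    print(f"n = {n:>6}: estimated r in [{estimates.min():+.2f}, {estimates.max():+.2f}], "
          f"wrong sign {(estimates < 0).mean():.0%} of the time")
```

At n = 25 the estimated correlation can even come out with the opposite sign; only in the thousands does it settle near its true value, which is exactly the instability Marek et al. quantify.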


In addition, Greene et al. (2022) published a paper examining methods that use machine learning to predict behavioral performance from patterns of brain activity. The prediction target can be a group’s behavioral performance, or which subjects in a sample are patients and which are healthy, based on brain response data; such methods often report good results. Greene et al. (2022) collected multiple datasets of fMRI scans from subjects performing various tasks (memory, executive function, and so on), then used machine learning to predict from the fMRI data whether each subject belonged to the high-performing or the low-performing group. They found that a small number of subjects were consistently mispredicted by the model, regardless of which task was used. More importantly, the main reason for the mispredictions was that some variable in this subgroup ran against the overall trend. For example, cognitive function is generally negatively correlated with age: the older the person, the poorer the cognitive function. A minority of people do not follow this trend; some older adults have good cognitive function, while some young people have poor cognitive function. For these individuals, predicting cognitive function from brain connectivity data can produce systematically wrong answers, showing that the approach is not stable. These results also suggest that we should not only consider variables tied to the complex cognitive task itself but also pay attention to other hidden variables, such as age and medical history, which may matter more for prediction accuracy. The sketch below reproduces this failure mode on synthetic data.
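This is a hedged sketch, not Greene et al.’s pipeline; every number and feature here is invented. Most synthetic subjects’ cognition declines with age, a minority defy the trend, and a classifier trained on age-driven “connectivity” features misses the defiant subgroup far more often:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_subjects = 400

age = rng.uniform(20, 80, n_subjects)
typical = rng.random(n_subjects) < 0.9  # 10% defy the age-cognition trend
cognition = np.where(typical, -age, age) + rng.normal(0, 10, n_subjects)

# Synthetic 'connectivity' features: 50 noisy copies of age, standing in
# for connections whose strength tracks age.
features = age[:, None] + rng.normal(0, 15, size=(n_subjects, 50))

labels = cognition > np.median(cognition)  # 'good' vs 'poor' performers
predicted = cross_val_predict(LogisticRegression(max_iter=1000),
                              features, labels, cv=10)

errors = predicted != labels
print(f"Error rate, typical subjects:  {errors[typical].mean():.0%}")
print(f"Error rate, atypical subjects: {errors[~typical].mean():.0%}")
```

Because the features only encode age, subjects whose cognition contradicts their age are mispredicted no matter how the model is trained, mirroring Greene et al.’s finding.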



These studies tell us that research in cognitive neuroscience faces challenges of many different sizes, and those challenges may well force the results of many studies to be re-examined. Even so, there are straightforward measures that protect a study’s validity: maintaining rigorous research procedures, conducting a thorough literature review and analysis to establish a solid research hypothesis, performing a power analysis to determine the required sample size (a quick example follows this paragraph), executing experiments and analyses correctly, and considering possible confounding variables along with control conditions that can further clarify the question. In addition, recruiting more diverse participants and using multiple tasks (a task battery) in a single study can broaden the scope of the discussion, ensuring the results are at least reasonable and debatable. By contrast, maneuvers aimed solely at producing statistically significant results (optional stopping, p-hacking, and so on) are a main driver of decline in a scientific field. Taking the Marek et al. (2022) study mentioned in the first paragraph as an example, I believe the point is not simply to recruit more participants; the key is to establish a solid research hypothesis and a rigorous experimental procedure. Only when variables are correlated arbitrarily, with no hypothesis behind them, does a study end up requiring tens of thousands of data points.
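For the power-analysis step, here is a minimal sketch using statsmodels; the effect sizes are illustrative assumptions, not values from any of the cited papers:

```python
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per group for 80% power at alpha = .05, two-sample t-test.
for d in (0.8, 0.5, 0.2):  # conventional large / medium / small effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"Cohen's d = {d}: ~{math.ceil(n)} participants per group")
```

Even a small effect of d ≈ 0.2 already demands hundreds of participants per group; the near-zero effects typical of exploratory brain-wide correlations push the requirement into the thousands, which is Marek et al.’s conclusion seen from the other direction.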




References

  1. Marek, S. et al. Reproducible brain-wide association studies require thousands of individuals. Nature 603, 654–660 (2022).
  2. Bennett, C., Miller, M. & Wolford, G. Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: an argument for multiple comparisons correction. NeuroImage 47, S125 (2009).
  3. Cognitive neuroscience at the crossroads. Nature 608, 647 (2022).
  4. Greene, A. S. et al. Brain–phenotype models fail for individuals who defy sample stereotypes. Nature 609, 109–118 (2022).