- a) his educational background.
- b) anecdotes.
- Given his educational background in neuroscience.
Question 3: Which criteria for the quality of scientific research does Andrew Huberman rely on? In the episode he remarks that the study is not peer reviewed, and in other episodes he often discusses whether a study appeared in a peer-reviewed journal (and sometimes whether the journal is considered prestigious). Do you think this is a good criterion of scientific quality? Which aspects make this a good criterion? Which aspects do not make this a good criterion?
- a) I believe the following aspects make this a good criterion: published in a prestigious scientific journal
- b) I believe the following aspects do not make this a good criterion: single experimental paper
- c) My overall evaluation about whether a study being peer reviewed or not is a good criterion for scientific quality is:
Question 4: Another criterion Andrew Huberman uses to evaluate whether a finding can be trusted is if there are multiple published articles that show a similar effect. Which aspects make this a good criterion? Which aspects do not make this a good criterion? The section in the textbook on publication bias might help to reflect on this question: https://lakens.github.io/statistical_inferences/12-bias.html#sec-publicationbias
- a) I believe the following aspects make this a good criterion:
- b) I believe the following aspects do not make this a good criterion: publication bias
- c) My overall evaluation about whether the presence of multiple studies in the literature is a good criterion for scientific quality is:
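One way to see why publication bias undermines the "multiple published studies" criterion is with a small simulation. The sketch below (illustrative only; the function name and parameter values are my own, not from the textbook chapter) simulates many small studies of a modest true effect and then "publishes" only the statistically significant ones. The average published effect size comes out much larger than the true effect, which is exactly the distortion Lakens describes in the chapter on publication bias.

```python
import math
import random
import statistics

def simulate_publication_bias(true_effect=0.2, n=20, n_studies=5000, seed=42):
    """Simulate studies of a small true effect; 'publish' only significant ones.

    Each study draws n observations from a normal distribution with mean
    true_effect and standard deviation 1, and runs a two-sided z-test
    (sigma known) at alpha = 0.05.
    Returns (mean effect across all studies, mean effect across published
    studies, number of published studies).
    """
    rng = random.Random(seed)
    all_effects, published = [], []
    for _ in range(n_studies):
        sample = [rng.gauss(true_effect, 1.0) for _ in range(n)]
        observed = statistics.mean(sample)
        z = observed * math.sqrt(n)  # standard error is 1/sqrt(n)
        all_effects.append(observed)
        if abs(z) > 1.96:  # significant at alpha = 0.05 -> gets published
            published.append(observed)
    return statistics.mean(all_effects), statistics.mean(published), len(published)

avg_all, avg_published, k = simulate_publication_bias()
print(f"true effect: 0.2, all studies: {avg_all:.2f}, "
      f"published only: {avg_published:.2f} ({k} studies)")
```

With these settings a study only reaches significance when its observed effect exceeds roughly 0.44, more than double the true effect of 0.2, so the published literature systematically overestimates the effect even though every individual study was run honestly.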
Question 5: a) Which criticisms do Christopher Kavanagh and Matthew Browne raise of the study Huberman discusses? b) Which criticisms do the podcast hosts raise about how Huberman presents the study? c) Which warning signs of the past studies by the same lab do the podcast hosts raise?
- A small study demonstrating the placebo effect with extreme results should be met with more criticism.
- Huberman seems more focused on overhyping the studies than on evaluating them critically.
Question 6: The podcast hosts discuss the ‘dead salmon’ study. I agree with podcast host Christopher Kavanagh that people interested in metascience should know about this study. It led to lasting changes in the data analysis of fMRI studies. A similar point was made in a full paper, which you can read here. The title of the paper is “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition”. The original title of this paper when it was submitted to the journal was “Voodoo Correlations in Social Neuroscience”. The peer reviewers did not like this title, and the authors had to change it before publication, but it is still often referred to as the ‘voodoo correlations’ paper, together with the ‘dead salmon’ poster. Read through the study (which was presented as a poster at a conference, not as a full paper; it is not intended as a serious paper). What is the main point of the poster? A high-resolution version is available here.
- fMRI methodology should be scrutinized more critically: standard analysis methods can produce many false positives, including apparent brain activity in a dead fish.
Question 8: a) Do you think Andrew Huberman is overclaiming at the end of the podcast about possible applications of this effect? Is he overhyping it? b) How do you think the studies should have been communicated to a general audience?
Question 9: It is not possible to ask the following question in any other way than to make it a loaded question. It is clear what I think about this topic, as I chose to make this assignment. Nevertheless, feel free to disagree with my beliefs. a) Is Andrew Huberman’s understanding of statistics (and of red flags when reading the results of a study) strong enough to adequately weigh the evidence in studies? b) How well should science communicators be able to interpret the evidence underlying scientific claims in the literature, for example through adequate training in research methods and statistics? c) How well should you be trained in research methods and statistics to be able to weigh the evidence in research yourself?