Methodology, scientific life, and bad language. Co-hosted by Dr. Dan Quintana (University of Oslo) and Dr. James Heathers (Cipher Skin)
Hosted on Acast. See acast.com/privacy for more information.
We discuss the recent retraction of a paper that reported the effects of rigour-enhancing practices on replicability. We also cover James' new estimate that 1 out of 7 scientific papers is fake.
Links:
- The story about data integrity concerns in 130 women's health papers
- James' new preprint with the estimate that 1 out of 7 scientific papers is fak…
Open access articles have democratized the availability of scientific research, but are author-paid publication fees undermining the quality of science?
The preprint by Morgan and Smaldino - https://osf.io/preprints/osf/3ez9v
Paul Smaldino's textbook - Modeling social behavior
Main episode takeaways (AI-assisted summary): There is a wide variabilit…
183: Too beautiful to be true
45:05
Dan and James discuss a paper describing a journal editor's efforts to obtain data from authors who submitted papers with results that seemed a little too beautiful to be true.
Main episode takeaways (AI-generated summary): This editorial on the reproducibility crisis emphasizes the importance of providing raw data in scientific publications and hig…
182: What practices should the behavioural sciences borrow (and ignore) from other research fields?
51:09
Dan and James answer a listener question on which practices the behavioural sciences should borrow (and ignore) from other research fields. Here are the main takeaways:
- Keeping laboratory records and using electronic lab management software are practices from biology that would benefit the behavioural sciences
- The rate of pre-registration of met…
We discuss how following citation chains in psychology can often lead to unexpected places, and how this can contribute to unreplicable findings. We also discuss why team science has taken longer to catch on in psychology than in other research fields. Here is the preprint we mentioned, authored by Andrew Gelman and Nick Brown - https://osf…
180: Consortium peer reviews
50:14
Dan and James discuss why innovation in scientific publishing is so hard, an emerging consortium peer review model, and a recent replication of the 'refilling soup bowl' study.
Other things they cover and links:
- Which studies should we spend time replicating?
- The business models of for-profit scientific publishers
- How many tacos can you buy with th…
179: Discovery vs. maintenance
48:38
Dan and James discuss how scientific research often neglects the importance of maintenance and long-term access for scientific tools and resources.
Other things they cover:
- Should there be an annual limit on publications (even if this were somehow possible)?
- The downsides of PhD by publication
- The Gates Foundation's new Open Access policy
Other lin…
178: Alerting researchers about retractions
49:45
Dan and James discuss the Retractobot service, which emails authors about papers they've cited that have been retracted. What should authors do if they discover that a paper they've cited was retracted after they published their own paper?
Other things they chat about:
- A listener question about including examiners' comments in a thesis
- The different types…
We discuss two recent plagiarism cases, one you've probably heard about and another that you probably haven't heard about if you're outside Norway. We also chat about the parallels between plagiarism and sports doping—would people reconsider academic dishonesty if they were reminded that future technology may catch them out? Here are some of the ta…
176: Tracking academic workloads
36:12
We chat about a paper on the invisible workload of open science and why academics are so bad at tracking their workloads. This episode was originally recorded in May 2023 in a hotel room just before our live recording of Episode 169, which is why we refer to the paper as a 'new' paper near the start of the episode.
Links:
- The paper on the invisible …
175: Defending against the scientific dark arts
38:10
We chat about a recent blog post from Dorothy Bishop, in which she proposes a Master's course that would provide training in fraud detection: what should such a course specifically teach, and where would its graduates work to apply their training? We also discuss whether open science is a cult that has trouble seeing outward.
Links:
- The blog post on the Ma…
174: Smug missionaries with test tubes
53:21
James proposes a new type of consortium paper that could provide collaborative opportunities for researchers from countries that are underrepresented in published research papers. We also talk about computational reproducibility and paper publication bonuses.
Links:
- The paper from Steve Lindsay on computational reproducibility: A Plea to Psy…
173: How do science journalists evaluate psychology papers?
35:07
Dan and James discuss a recent paper that investigated how science journalists evaluate psychology papers. To answer this question, the researchers presented science journalists with fictitious psychology studies and manipulated sample size, sample representativeness, p-values, and institutional prestige.
Links:
- The paper on how science journalists e…