Posts

Don't bar barplots, but use them cautiously

Should we outlaw the commonest visualization in psychology? The hashtag #barbarplots has been introduced as part of a systematic campaign to promote a ban on bar graphs. The argument is simple: barplots mask the distributional form of the data, and all sorts of other visualization forms exist that are more flexible and precise, including boxplots, violin plots, and scatter plots. All of these show the distributional characteristics of a dataset more effectively than a bar plot. Every time the issue gets discussed on Twitter, I get a little bit rant-y; this post is my attempt to explain why. It's not because I fundamentally disagree with the argument. Barplots do mask important distributional facts about datasets. But there's more we have to take into account. Here's my basic argument: "Hey #barbarplots folks: I agree with you that plotting variability is important, but the world of data is big! /1" (Michael C. Frank, @mcxfrank, August 10, 2016) ...
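To make the masking point concrete, here's a minimal sketch (my own illustration, not code from the post) using numpy and matplotlib with made-up data: two hypothetical groups have the same mean but very different distributions, so they look identical as bars, while a violin plot with the raw points overlaid reveals the difference.

```python
# A sketch of how a bar plot of means can hide distributional shape.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=0.5, size=100)        # tight, unimodal
group_b = np.concatenate([rng.normal(3.0, 0.5, 50),       # bimodal, same mean as A
                          rng.normal(7.0, 0.5, 50)])

fig, (ax_bar, ax_violin) = plt.subplots(1, 2, figsize=(8, 3))

# Bar plot of means: the two groups look essentially identical.
ax_bar.bar(["A", "B"], [group_a.mean(), group_b.mean()])
ax_bar.set_title("Bar plot of means")

# Violin plot plus jittered raw points: the bimodality of B is visible.
ax_violin.violinplot([group_a, group_b], positions=[1, 2], showmeans=True)
for pos, data in [(1, group_a), (2, group_b)]:
    jitter = rng.uniform(-0.05, 0.05, data.size)
    ax_violin.scatter(np.full_like(data, pos) + jitter, data, s=5, alpha=0.4)
ax_violin.set_xticks([1, 2])
ax_violin.set_xticklabels(["A", "B"])
ax_violin.set_title("Violin + raw data")

plt.tight_layout()
plt.show()
```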

Preregister everything

Which methodological reforms will be most useful for increasing reproducibility and replicability? I've gone back and forth on this blog about a number of possible reforms to our methodological practices, and I've been particularly ambivalent in the past about preregistration, the process of registering methodological and analytic decisions prior to data collection. In a post from about three years ago, I worried that preregistration was too time-consuming for small-scale studies, even if it was appropriate for large-scale studies. And last year, I worried whether preregistration validates the practice of running (and publishing) one-offs, rather than running cumulative study sets. I think these worries were overblown, and resulted from my lack of understanding of the process. Instead, I want to argue here that we should be preregistering every experiment we do. The cost is extremely low and the benefits, both to the research process and to the credibility of our results, a...

Minimal nativism

(After blogging a little less in the last few months, I'm trying out a new idea: I'm going to write a series of short posts about theoretical ideas I've been thinking about.) Is human knowledge built using a set of perceptual primitives combined by the statistical structure of the environment, or does it instead rest on a foundation of pre-existing, universal concepts? The question of innateness is likely the oldest and most controversial in developmental psychology (think Plato vs. Aristotle, Locke vs. Descartes). In modern developmental work, this question so bifurcates the research literature that it can often feel like scientists are playing for different "teams," with incommensurable assumptions, goals, and even methods. But these divisions have a profoundly negative effect on our science. Throughout my research career, I've bounced back and forth between research groups and even institutions that are often seen as playing on different teams from one another...

Reproducibility and experimental methods posts

In celebration of the third anniversary of this blog, I'm collecting some of my posts on reproducibility. I didn't initially anticipate that methods and the "reproducibility crisis" in psychology would be my primary blogging topic, but it's become a huge part of what I write about on a day-to-day basis. Here are my top four posts in this sequence: A moderate's view of the reproducibility crisis - part 1 of a sequence, in part responding to the release of the Open Science Collaboration reproducibility project paper. The slower, harder ways to increase reproducibility - part 2 of the sequence. Estimating p(replication) in a practical setting - a report on the results from my graduate methods course, in which students replicate previously published papers. Shifting our cultural understanding of replication - a plea for changes in practices and incentives. Then I've also written substantially about a number of other topics, including publication incentives...

An adversarial test for replication success

(tl;dr: I argue that the only way to tell if a replication study was successful is by considering the theory that motivated the original.) Psychology is in the middle of a sea change in its attitudes towards direct replication. Despite their value in providing evidence for the reliability of a particular experimental finding, incentives for direct replications have typically been limited. Increasingly, however, journals and funding agencies value these sorts of efforts. One major challenge, though, has been evaluating the success of direct replication studies. In short, how do we know if the finding is the same? There has been limited consensus on this issue, so many projects have used a diversity of methods. The RP:P 100-study replication project reports several indicators of replication success, including 1) the statistical significance of the replication, 2) whether the original effect size lies within the confidence interval of the replication, 3) the re...
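As a concrete illustration of the first two indicators, here is a small sketch in Python (my own illustration, not the RP:P analysis code), assuming a simple two-group design, simulated data, and a standard large-sample approximation for the standard error of Cohen's d. It checks whether the replication effect is statistically significant and whether the original effect size falls inside the replication's 95% confidence interval.

```python
# A sketch of two common replication-success indicators for a two-group design.
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def replication_indicators(rep_x, rep_y, original_d, alpha=0.05):
    nx, ny = len(rep_x), len(rep_y)
    # Indicator 1: statistical significance of the replication effect.
    t, p = stats.ttest_ind(rep_x, rep_y)
    # Indicator 2: does the original effect size lie in the replication's CI?
    d_rep = cohens_d(rep_x, rep_y)
    # Approximate standard error of d (large-sample formula).
    se_d = np.sqrt((nx + ny) / (nx * ny) + d_rep**2 / (2 * (nx + ny)))
    z = stats.norm.ppf(1 - alpha / 2)
    ci_low, ci_high = d_rep - z * se_d, d_rep + z * se_d
    return {"rep_p": p,
            "rep_significant": p < alpha,
            "rep_d": d_rep,
            "original_in_rep_ci": ci_low <= original_d <= ci_high}

# Hypothetical replication sample and a hypothetical original effect size.
rng = np.random.default_rng(1)
print(replication_indicators(rng.normal(0.3, 1, 80), rng.normal(0.0, 1, 80), original_d=0.6))
```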

Misperception of incentives for publication

There's been a lot of conversation lately about negative incentives in academic science. A good example of this is Xenia Schmalz's nice recent post. The basic argument is that professional success comes from publishing a lot and publishing quickly, but scientific values are best served by doing slower, more careful work. There's perhaps some truth to this argument, but it overstates the misalignment between the incentives for scientific and professional success. I suspect that people think that quantity matters more than quality, even if the facts are the opposite. Let's start with the (hopefully uncontroversial) observation that number of publications will be correlated at some magnitude with scientific progress. That's because, for the most part, if you haven't done any research you're not likely to be able to publish, and if you have made a true advance it should be relatively easy to publish.* So there will be some correlation between publication record and th...

Was Piaget a Bayesian?

tl;dr: Analogies between Piaget's theory of development and formal elements in the Bayesian framework. Intro I'm co-teaching a course with Alison Gopnik at Berkeley this quarter. It's called "What Changes?" and the goal is to revisit some basic ideas about what drives developmental change. Here's the syllabus, if you're interested. As part of the course, we read the first couple of chapters of Flavell's brilliant book, "The Developmental Psychology of Jean Piaget." I had come into contact with Piagetian theory before, of course, but I had never spent that much time engaging with the core ideas. In fact, I don't actually teach Piaget in my intro to developmental psychology course. Although he's clearly part of the historical foundations of the discipline, to a first approximation, a lot of what he said turned out to be wrong. In my own training and work, I've been inspired by probabilistic models of cognition and cognitive ...