Are we in a canoe or a row boat?

· by Brian Anderson · Read in about 4 min · (732 words) ·

Two entrepreneurship/innovation papers published this year in top management journals caught my attention: Chen & Nadkarni (2017) and Liu et al. (2017).

To be clear, I am not criticizing these papers specifically; rather, I think of them as illustrative of the broader challenge facing entrepreneurship data science, and it’s this…

Are we facing the replication and credibility crisis in social science in a row boat, or in a canoe?

Both boats move—or carry—you forward. In the canoe, though, you focus your attention on where you are going, while in the row boat you look only to where you have been.1

Each of the above papers is published in an unequivocal “A” journal in management, and both journals set a substantially high bar for “making a theoretical contribution” (see here and here). I’ve written before that I’m in the camp that a strong theoretical contribution is largely in the eye of the beholder, so I’m not going to comment on whether I think the papers are theoretically interesting. What I will note are the statistical analyses used to test the theory presented in each paper. In both cases, I would argue that the paper is a row boat, at least methodologically speaking.

In the case of the Chen & Nadkarni (2017) paper, the authors chose to use the classic mediation approach suggested by Baron and Kenny over thirty years ago. While the authors also report the popular bootstrapping method to estimate the standard errors, the weaknesses in both approaches have been well known for some time. In the case of the Liu et al. (2017) paper, the authors test and visualize moderation using only a simple slopes comparison, without confidence intervals or marginal effects, and with a small multilevel sample.
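To make the moderation point concrete, here is a minimal sketch of what a marginal effects analysis with a confidence interval looks like. It uses simulated data, Python, and statsmodels; the variable names and the data-generating model are hypothetical, not taken from either paper, but the sketch shows the kind of evidence a bare simple slopes comparison leaves out.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data, for illustration only; variable names are hypothetical
rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)   # focal predictor
z = rng.normal(size=n)   # moderator
y = 0.4 * x + 0.2 * z + 0.3 * x * z + rng.normal(size=n)

# Fit y = b0 + b1*x + b2*z + b3*(x*z)
X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()
b, V = fit.params, fit.cov_params()

# The marginal effect of x given z is b1 + b3*z; its standard error
# comes from the delta method using the coefficient covariance matrix
z_grid = np.linspace(z.min(), z.max(), 50)
effect = b[1] + b[3] * z_grid
se = np.sqrt(V[1, 1] + (z_grid ** 2) * V[3, 3] + 2 * z_grid * V[1, 3])
lower, upper = effect - 1.96 * se, effect + 1.96 * se  # 95% band

# Report the effect of x, with its interval, across the range of z
for zv, m, lo, hi in zip(z_grid[::10], effect[::10], lower[::10], upper[::10]):
    print(f"z = {zv:+.2f}: effect of x = {m:.2f} [{lo:.2f}, {hi:.2f}]")
```

Plotting the effect and its band against the moderator shows where the effect of x is, and is not, distinguishable from zero—which is precisely what a simple slopes comparison alone cannot tell you.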

Are the empirical conclusions wrong in either paper? Well, that’s hard to say: absent a replication, posted code and data, and a disclosure of researcher degrees of freedom, they could be, or they could not be. Unfortunately, dated methodologies make evaluating the efficacy of a model even more difficult. The broader concern is that if the methodologies themselves make assumptions unlikely to be satisfied in practice, combined with concerns over researcher degrees of freedom, p-hacking, HARKing, and so on, what is the value of a paper that is “interesting” if its results must be accompanied by a rather large caveat?

What’s the solution going forward? Well, beyond changing mindsets, I think it’s for reviewers—and editors—to expect more from an empirical paper. More importantly, it’s to make sure that editors and reviewers are themselves up to speed on advances in research design and statistical methodology. I’m not sure that imposing some type of continuing education requirement on reviewers is the way to go, but for editors, it might be valuable. Ensuring that doctoral training emphasizes current methods and statistical software—relegating the “classics” to further reading for those interested—would also help, given the limited time most PhD students have for research methods and statistics training.

Now, lest I be accused of living in a glass house, I have published previous work that suffers from the limitations mentioned in both papers. Further, the substantial lead time in management publishing means that the data and analyses for the papers above likely predate their “official” publication by several years. I generally do not use the same methods today that I used five years ago. Nonetheless, these are papers published in 2017—and in our “top” journals—and so they serve as recent methodological templates for future scholars. That makes the impact of row boat empirics more concerning for our field, and it is another reason why data/code sharing, pre-registration, and post-publication peer review have substantial value.

Science builds cumulatively, and we build our research on work that has gone before. But part of advancing science—and entrepreneurship data science in particular—is to acknowledge the methodological limitations of prior work as we construct the new. As a field, and given the credibility gap in social science research, we had all better get into the canoe, or we will ride our row boats right over the waterfall.


  1. Special thanks to Tobias Gilk for the row boat vs. canoe analogy