This post challenges the assumption that an academic paper must be interesting to be relevant, and that to be interesting a paper needs only appropriate empirics rather than a rigorous research design and empirical treatment.
An easy critique of this assumption is to say that I’ve set up a straw man; to publish, you need rigorous empirics AND a compelling story that makes a contribution. I don’t think that’s the case. I think as a field (management and entrepreneurship specifically), we are too willing to favor interesting studies over less interesting ones, even when the less interesting paper has a stronger design and stronger empirics. The term ‘interesting’ is, without question, subjectively determined by journal editors and reviewers—what is interesting to one scholar may or may not be interesting to another.
Generally, we think of interesting in terms of making a theoretical contribution; the standard for publication at most of our top empirical journals is that a paper must offer a novel insight—or insights—to be publishable. The problem with this standard, as others have amply covered, is that it encourages, or forgives, researcher degrees of freedom that weaken statistical and causal inference in order to maximize the ‘interesting-ness’ factor. The ongoing debate over the replicability of power posing is a notable case in point.
My hypothesis is that the willingness to trade rigorous research design for ‘novel’ insights is the root cause of the very real gap between academic management research and management practice. The requirement to make a novel insight encourages poor research behavior while minimizing the critical role that replicability plays in the trustworthiness of scientific research. In entrepreneurship research, we have also been late to embrace concepts like counterfactual reasoning and appropriate techniques for dealing with endogeneity, which diminishes the causal inference of our research and hence its usefulness.
In short, managers are less likely to adopt practices borne out of academic research not because such findings are unapproachable—although, true, studies aren’t easy reads—but because most academic research simply isn’t trustworthy. I’m not suggesting most research is the result of academic misconduct; far from it. But I am suggesting that weak designs and poorly executed analyses lower the trustworthiness of study results and their usefulness to practice.
To be clear, a well-done study that maximizes causal inference AND is theoretically novel is, certainly, ideal. But the next most important consideration should be a rigorously designed and executed study of a simple main-effect relationship that maximizes causal inference and understanding. It may not be interesting, but at least it will be accurate.
The best way to be relevant is to be trustworthy, and the best way to be trustworthy is to be rigorous. You can’t have external validity without first maximizing internal validity.