Bad statistics and theoretical looseness

· by Brian Anderson · Read in about 2 min · (412 words) ·

I’m actually a big fan of theory—I’m just not wild about the ways in which we (management and entrepreneurship scholars) test it. The driving reason is theoretical looseness: the ability to offer any number of theoretical explanations for a phenomenon of interest.

What concerns me most about theoretical looseness is that researchers often become blind to questioning results that don’t align with the preponderance of evidence in the literature. The race for publication, combined with the ability to offer a logically consistent explanation, even one that contradicts most published research, makes it all too easy to slip studies with flimsy results into the conversation.

In EO research, we see this often in studies purporting to find a null, or even a negative, effect of entrepreneurial behavior on firm growth. Is it possible? Sure. A good Bayesian will always allow for a non-zero prior, however small it might be. But is it logical? Well, therein lies the problem. Because our theories are generally broad, and because we can pull from a plethora of possible theoretical explanations that rarely provide specific estimates of causal effects and magnitudes, it is easy to take a contradictory result and offer an argument for why being entrepreneurial decreases a firm’s growth.

The problem is that researchers often don’t take the extra steps to evaluate the efficacy of the model they estimated. Even basic checks, like examining distributional assumptions and outliers, are forgone in the race to write up the results and send the paper out for review. As estimators have become easier to use thanks to point-and-click software and macros, it is easier than ever for researchers to throw data into the black box, get three asterisks, and then find some theoretical rationale to explain seemingly inconsistent results. It’s just too easy for bad statistics paired with easy theorizing to get published.
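To be concrete, the kind of basic due diligence I have in mind takes only a few lines of code. The sketch below is purely illustrative, not anyone’s actual workflow: the `basic_ols_diagnostics` function, the simulated data, and the z-score cutoff of 3 are all assumptions I’m making for the example. It fits an OLS model, runs a Shapiro–Wilk test on the residuals, and flags observations with large standardized residuals.

```python
import numpy as np
from scipy import stats

def basic_ols_diagnostics(x, y, z_cutoff=3.0):
    """Fit OLS and run two pre-submission sanity checks:
    a normality test on the residuals and a standardized-residual
    outlier flag. Illustrative only; z_cutoff = 3 is a convention,
    not a rule."""
    X = np.column_stack([np.ones(len(y)), x])   # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Shapiro-Wilk: a small p-value suggests non-normal residuals
    _, shapiro_p = stats.shapiro(resid)
    z = (resid - resid.mean()) / resid.std(ddof=1)
    outliers = np.flatnonzero(np.abs(z) > z_cutoff)
    return {"beta": beta, "shapiro_p": shapiro_p, "outliers": outliers}

# Simulated data with one planted gross outlier at index 0
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)
y[0] += 15.0
report = basic_ols_diagnostics(x, y)
print(report["outliers"])    # the planted outlier should appear here
print(report["shapiro_p"])   # should be small, signaling trouble
```

Ten minutes with checks like these, before writing a word of discussion, would catch many of the flimsy results that currently get explained away theoretically.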

The answer, as others have noted, is to slow the process down. Here I think preprints are particularly valuable, and they are one reason why I’ll be starting to use them myself. Ideas and results need time to percolate, to be looked at and challenged by the community. Once a paper is published, it is simply too hard to ‘correct’ the record, and one-off studies can, tragically, become influential simply because they are ‘interesting’. In short, take the time to get it right, and avoid the temptation to pull a theoretical rabbit out of the hat when the results don’t align with the majority of the conversation.