Testing the usefulness theory

· by Brian Anderson ·

I’m testing my theory that a better way to judge a paper is by its usefulness to other scholars. Here’s the abstract…

In this paper, I discuss common endogeneity problems found in entrepreneurship research. I show how entrepreneurship researchers, particularly those working with observational data, typically face several specific endogeneity threats in a given study. Using a series of simulated datasets, I then demonstrate instrumental variable approaches that effectively address endogeneity in a variety of research designs. I discuss best-practice recommendations for integrating instrumental variables into entrepreneurship research at the study design phase, and outline potential sources of instruments. To facilitate replication and to provide a guide for scholars, I posted all data and code, in R and in Stata, online at: https://osf.io/d453n/
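
To give a flavor of the basic problem and the fix, here's a toy sketch in R that I put together for this post (the names and effect sizes are made up; this isn't the code from the paper):

```r
# Toy simulation: an unobserved u drives both x and y, so x is endogenous.
set.seed(42)
n <- 5000
u <- rnorm(n)                      # unobserved confounder
z <- rnorm(n)                      # instrument: shifts x, excluded from y
x <- 0.8 * z + 0.6 * u + rnorm(n)  # endogenous regressor
y <- 0.5 * x + u + rnorm(n)        # true effect of x on y is 0.5

coef(lm(y ~ x))                    # naive OLS: biased upward (about 0.8)

# Manual 2SLS: regress x on z, then regress y on the fitted values.
# Fine for point estimates; for correct standard errors use a real
# IV routine (e.g., ivreg() in the AER package).
x_hat <- fitted(lm(x ~ z))
coef(lm(y ~ x_hat))                # recovers roughly 0.5
```

The manual two-stage version keeps the logic visible; in practice you'd want a proper IV routine so the standard errors come out right.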

This paper just got rejected from a good journal on the basis of its limited usefulness. In part, it could be that I focused too narrowly on a target audience of entrepreneurship researchers who mostly use primary data. It could also be that I didn't make it complicated enough; perhaps I should have focused on more sophisticated methods for dealing with endogeneity.

So I thought I would go another route and try the preprint option at the Open Science Framework (OSF), where I've included code and examples, in R and in Stata, for dealing with endogeneity in…

  • Direct effect models
  • Latent variable (SEM) models
  • Mediation models (see the sketch just after this list)
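
For the mediation case, the core idea is to instrument the mediator when something unobserved sits between the mediator and the outcome. Another deliberately minimal sketch with made-up numbers, rather than the preprint's actual examples:

```r
# Mediation with an endogenous mediator: u confounds med and y, so the
# naive estimate of the b path (med -> y) comes out too big.
set.seed(7)
n <- 5000
u   <- rnorm(n)                             # unobserved confounder
x   <- rnorm(n)                             # exogenous treatment
z   <- rnorm(n)                             # instrument for the mediator
med <- 0.5 * x + 0.7 * z + 0.6 * u + rnorm(n)
y   <- 0.4 * med + 0.2 * x + u + rnorm(n)   # b = 0.4, direct effect = 0.2

coef(lm(y ~ med + x))                       # naive b path is inflated

# 2SLS for the b path; the exogenous x stays in the first stage.
med_hat <- fitted(lm(med ~ x + z))
coef(lm(y ~ med_hat + x))                   # close to 0.4 and 0.2
a <- coef(lm(med ~ x + z))["x"]             # a path, close to 0.5
# The indirect effect is then a * b, built from two consistent pieces.
```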

I’ve also included a discussion about 2SLS methods in moderation models.
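
The gist there: with an endogenous x and an exogenous moderator m, both x and the x × m interaction are endogenous, so both get instrumented, using z and z × m. One more toy sketch of my own, not the preprint's code:

```r
# 2SLS in a moderation model: x and x*m are both endogenous, so both get
# instrumented, with z and z*m as the instrument set.
set.seed(13)
n <- 5000
u <- rnorm(n); z <- rnorm(n); m <- rnorm(n)  # m is an exogenous moderator
x <- 0.8 * z + 0.6 * u + rnorm(n)
y <- 0.4 * x + 0.3 * m + 0.25 * x * m + u + rnorm(n)

# First stages: project x and x*m on the full instrument set.
# Note: fit x*m directly; don't just multiply x_hat by m afterward
# (that's the classic "forbidden regression" trap).
x_hat  <- fitted(lm(x ~ z + m + I(z * m)))
xm_hat <- fitted(lm(I(x * m) ~ z + m + I(z * m)))

# Second stage (point estimates only; use an IV routine for inference)
coef(lm(y ~ x_hat + m + xm_hat))             # near 0.4, 0.3, and 0.25
```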

My hope is to get some feedback and to see just how useful the paper really is. It’s also my first time doing a preprint, but I’m liking it already!