Informing an uninformed reviewer

· by Brian Anderson · Read in about 10 min · (1979 words) ·

I’ve written before about dealing with jerk reviewers. This is a different issue: how do you inform an uninformed reviewer? An uninformed reviewer is not a jerk. He or she generally wants the author to address an issue he or she thinks is important. Unfortunately, the reviewer suggests a method or approach that is no longer appropriate, or worse, one that is incorrect.

This is tricky terrain. Over the years, I’ve seen three basic approaches authors take to address an uninformed reviewer:

  • Fight back: “Reviewer X, you are wrong because…”
  • Acquiescence: “Of course Reviewer X, thank you for pointing out our oversight. We now use…”
  • Education: “Thank you for your perspective Reviewer X, but we opted not to use…”

Unhappy reviewers correlate with rejection, and the first approach reliably produces unhappy reviewers. I’m going to argue for the third approach, and offer some tips about engaging in a constructive dialogue with an uninformed reviewer.

It doesn’t always work

Approach three doesn’t always work. You are telling the reviewer that he or she is wrong, but you are trying to be constructive about it. You are also telling the editor that the reviewer is wrong and that the editor shouldn’t trust the reviewer’s perspective on this point. That’s a fine line.

The problem with approach one is that it stops the dialogue. A reviewer does not want to be called ignorant or stupid any more than an author does. When vitriol meets genuine, but misguided, criticism, the reviewer is likely to recommend rejection. The author is also likely to lose the editor’s favor, because to allow the ‘fight back’ is to devalue the reviewer’s time and due care, which editors are loath to do. Approach one leaves no room for the reviewer to come back and say “thank you for educating me.”

I also discourage the second approach, acquiescence. Yes, doing what the reviewer wants improves your publication likelihood, and that holds even when the editor did not address the reviewer’s flawed counsel in the decision letter (or perhaps does not know it is flawed!). But adopting a flawed or outdated method is bad for science and bad for your career. It takes courage to challenge an uninformed reviewer, but the field is better off for it. Have courage: good papers always find good homes.

Guiding principles

So you’ve received a comment from an uninformed reviewer. Now what? Before writing your response, do these three things:

Make sure you are right. If the reviewer asked you to use method X to test your model but you are not sure about method X, get educated on it before arguing that your method Y is better. If you still aren’t sure, seek an expert opinion from another researcher.

Make the distinction between fact and opinion. Data science requires judgement calls. Sometimes the line blurs between an incorrect approach and a reasonable judgement call. The challenge is to determine where the reviewer’s recommendation falls on this spectrum. You do not have to resort to acquiescence, but “there is a flaw in method X” is a different argument than “method Y is better than method X” because method X may still be appropriate in other cases.

When in doubt, ask the editor. The editor should provide guidance on whether to use the reviewer’s suggestion in a revision. But decision letters are often short, either by the journal’s custom or the editor’s proclivities, and you may wonder if you really need to use method X. When in doubt, ask the editor for guidance. But when you do, I suggest trying out the argumentation you would use in the reviewer response. Rather than asking the editor “Do we need to use method X?”, provide an argument for using method Y instead of method X. Most editors appreciate a constructive conversation, and are happy to oblige if they feel it will improve the review process.

Writing the response

Start by imagining you are having a conversation with the reviewer. The reviewer is another scholar; he or she may be junior or senior to you, but is a professional. Speak to him or her as a professional. Be polite and respectful. But remember you are a professional too, so there is no need for sycophantic language like “Thank you for pointing out this amazing/terrific/insightful/penetrating issue we were completely ignorant of…”.

Next, be willing to acknowledge that the reviewer’s recommendation may have been appropriate at some time, or may be appropriate under a different set of assumptions than those of your study or data. Almost all professors like to learn if you approach them as professionals. Acknowledge their knowledge base, but help put it in context.

Next, if possible, show what would happen if you used method X. You do not have to include this analysis in the paper or in an appendix, but you are showing the reviewer that you considered his or her recommendation.

Next, provide references to appropriate literature discussing why method X is no longer accepted, or why method Y solves the problem in a better way than method X. You are giving the reviewer a literature base to update his or her knowledge—that’s helpful.

Then show why method Y yields better estimates, or solves your analysis problem in a better way, and offer a tutorial explaining what is happening and why you are doing what you are doing.

An example

Here’s an example from one of my papers. We are using structural equation modeling to test a mediation hypothesis. One thing I still run into as an author and now as an editor is reviewers recommending the Baron and Kenny (1986) three-step method to test mediation. I also run into reviewer recommendations about splitting or modifying the entrepreneurial orientation construct in a particular way that is inconsistent with the paper’s theoretical development. In the case of the former, this approach is no longer appropriate; with the latter, this is a judgement call.

After the first-round review, the reviewer questioned our decision to split the EO construct, and our choice of analytical method. Here is how we answered the reviewer, drawing on the guiding principles and steps outlined above. Yes, it is a bit on the long side, but that is usually better than a terse response.

Similar to our answer to the preceding comment, this was a difficult question to address, as we will explain. We are very cognizant of your concern that we may have modeled the data in such a way as to generate our expected results. In addition to our answer below, we would like to extend this offer. Working with the Editor to ensure confidentiality, we would be most happy to provide the data for Study 1 and Study 2, along with our code, and invite you to explore our models on your own. We are confident that you will be able to replicate our results, and that by using alternative estimators and measurement model constructions, you will find similar nomological conclusions to what we report.

To illustrate, the simplest way to model our hypothesis would be with an OLS estimator, using the mean value of the focal constructs, by far the dominant approach in the EO literature (Rauch et al., 2009). Here, we’re using the full nine-item EO scale, modeled as a single, unidimensional construct, and we tested the model with the data from Study 1. Using the well-known Baron and Kenny (1986) method, the results from the steps are as follows. In Step 1 (Firm Growth → EO), we find a positive and significant effect (β = 0.25; p < .001). In Step 2 (Firm Growth → Adaptive Capability), we again find a positive and significant effect (β = 0.26; p < .001). In Step 3 (Adaptive Capability → EO, controlling for Firm Growth), we again find a positive and significant effect of the mediating path (total indirect effect = 0.12; p < .001).

From this classic modeling approach, and using the full EO scale, we would say that we find support for our hypothesis that firm growth influences EO through its indirect effect on adaptive capability. The Sobel and Goodman tests for the strength of the indirect effect support this conclusion. We also replicated this model with the more recent Preacher and Hayes (2004) bootstrapping method and reached a substantially similar conclusion (minor differences in the coefficient values and standard errors but not in significance).

Notably, by both the Baron and Kenny method and the Preacher and Hayes method, two common approaches in the management literature, we reached a similar conclusion to what we report in the paper. Unfortunately, neither of these methods corrects for measurement error; both assume no correlation between the disturbance term(s), and both estimate each equation in isolation instead of taking into consideration their systematic interdependencies (e.g., the theoretical expectation of disturbance term covariance).

Estimating a structural equation model with the full nine-item EO scale loading on a single, unidimensional construct gives a similar conclusion. Here we correct for measurement error but not for endogeneity, and estimate the model as a system. Again, we see positive and significant paths with only marginal differences in the coefficient values from the two preceding methods (values omitted for parsimony, but we are happy to provide them). Nomologically, Firm Growth influences EO through its effect on Adaptive Capability. However, this model is misspecified (χ² = 340.73; p < .001), although the fit indices are roughly in line with commonly accepted standards (CFI = .91; RMSEA = .91; SRMR = 0.05). Again, this model is ‘wrong’ but nomologically consistent.

What we are very respectfully attempting to convey is that if we relax the stricter assumptions used in the paper, our nomological conclusions are substantially similar to those we report. The parameter estimates, however, in all of these other models are inconsistent, and hence uninterpretable. The model we report in the paper, corrected for measurement error, using instruments to correct for correlated disturbances, and without evidence of global model misspecification, gives us confidence that our parameter estimates are consistent with respect to the underlying covariance structure of the data.

To your observation about splitting the lower-order dimensions, we followed the Anderson et al. (2015) reconceptualization, which we discuss in the paper. If we may, we very politely refer you to that paper for a more detailed discussion of the conceptual reason for splitting entrepreneurial behaviors and managerial attitude towards risk. Briefly, however, the core logic, as we discuss on page 5, is that as originally conceptualized, EO contains both attitudinal and behavioral constituent elements. When modeling antecedent relationships, if you use the single unidimensional construction, you are effectively suggesting that the focal antecedent predicts both a behavior and an attitude with the same direction and magnitude. As Anderson et al. (2015) note, and we concur, this is conceptually tenuous.

However, and consistent with Reviewer #1’s comments, we conducted a comparison of the structural paths from adaptive capability to EO’s two lower-order dimensions, but found no significant difference between the two. As we now discuss further in the discussion section, adopting the Anderson et al. (2015) EO conceptualization allows for modeling flexibility when specifying antecedent relationships, but that does not necessarily imply that there will be a significant difference in these relationships. One implication, as we mention on page 27, is that additional work on EO’s conceptual domain would be a valuable contribution to the literature to reconcile Anderson et al.’s observations with the original, behavioral approach of the Miller/Covin and Slevin conceptualization.

We sincerely hope that we addressed your concern about the efficacy of our results. If you have any further questions or concerns here, we are most happy to continue working with you to better clarify our approach. Thank you again for your comment.

Fortunately, approach three worked. I’ve tried approach three other times with different results. Still, it is my default approach for dealing with uninformed reviewers, and I hope it helps you in your reviewer responses.