In the Fall 2006 issue of Dialogue—the original print version of the SPSP member newsletter—the editors reached out to the editors of SPSP's journals at the time, along with a group of former journal editors, and asked them to describe, in their own words, what makes for good (and bad) practice in peer-review writing. As the depth and breadth of the responses demonstrated (participants included Harry Reis, John Cacioppo, Brenda Major, Chet Insko, Cindy Pickett, John Jost, and Sonja Lyubomirsky), there were many insights and suggestions to share, not only for current and future reviewers but also for the authors who aspire to have their research elevated and shared through the publication process.

It has been nearly 15 years since Dialogue posed that question to its journal experts, so eDialogue wanted to see how much had changed and how much had stayed the same. We reached out to a group of current and recent journal editors and asked them to answer the same question in 2021: What makes an excellent review?

Tessa West

Past Co-Editor (2017 – 2020), Personality and Social Psychology Bulletin

The best reviews are the ones that hit on a few main themes: What is the contribution? How solid is the method? Is there a disconnect between what the authors say they are testing and what they actually test? In my experience as an editor, papers usually get rejected for one of these three reasons. Maybe the method is solid but the contribution is incremental. Or perhaps the ideas are grand and exciting, but the method doesn't actually test them. Broad-strokes comments are fine, but as an editor and an author I love detail. And if the authors are leaving out a big, important chunk of the literature, references help too!

Chris Crandall

Past Editor (2016 – 2020), Personality and Social Psychology Bulletin

What makes an excellent review? Is it different in the “era of open science”? It certainly depends on who is judging the excellence. Authors are biased evaluators of the quality of a review, and they vary in their receptivity to negative decisions. The editor's decision letter communicates clearly to the author, explains and justifies the decision, and helps improve the authors' manuscript; the reviewer's job is to help the editor do that. So what makes the editor think a review is “excellent”?

1. Editors want good judgment—there are no simple recipes for an excellent review. At PSPB I've read hundreds of reviews over the last four years. So many of them were excellent! But most were merely good or adequate. Keep in mind that a handful of good or adequate reviews, combined with the editor's own independent reading and synthesis, can make for an “excellent” decision process even if the individual contributions are not excellent in themselves. Truly insufficient reviews were rare, but they did happen. If an editor gets two insufficient reviews, they are in trouble.

2. A good review must be readable and clear. It should state what the reviewer thinks is in the paper. It should make clear what they think is good about the manuscript and what they think needs improvement. A good reviewer states their standards and compares the manuscript to them. It is often helpful to state whether the manuscript can be improved to a publishable standard, and what that improvement might look like.

2a. Go ahead and number your main points. Indicate which are main issues, and which are minor issues.

3. A good review is not a decision letter. In our field, the action editor has near-complete authority in making acceptance/rejection/revision decisions. At PSPB, in four years and over 2,500 manuscripts, I think we changed the decision on two papers (after resubmission, re-review, and re-revision). The reviewer has two audiences: one writes directly to the action editor, while knowing the author will be “reading over the editor's shoulder.”

4. Should a reviewer sign the review? It's an open question, and some people are committed to the practice. If signing would lead the reviewer to edit themselves, worry about what the author(s) will think of them, soften their tone, or shade their recommendations, then they should feel free to review anonymously. A reviewer should never be a jerk, and never be insulting; it undermines their legitimacy with the author, and it's never necessary. A strongly negative review can be written kindly, and it should be, virtually every time.

5. Judge a manuscript on its own merits—does it do what it sets out to do? Then judge whether that task is worthwhile. No one really cares that the manuscript isn't the one you would have written.

6. What’s different in the era of open science? Not very much. There is now more information, and a reviewer can read a preregistration and compare it to the finished manuscript. Discrepancies are far more common than explicit acknowledgments of them. Reviewers sometimes re-analyze data or check code; this is very uncommon at the moment, though certainly welcome. It is hard to require this burden of time, effort, and skill—until we pay such reviewers or make it their specific role, it is too much to expect, even as a standard of excellence. The largest change I've observed so far is that the open-science registration of manuscripts, data, and code tends to undo the double-blind nature of review, which increases biased evaluation due to gender, age, institutional status, reputation, and so on. I cannot say that this is A Good Thing.

Colin Wayne Leach

Past Co-Editor and Associate Editor, Personality and Social Psychology Bulletin; Editor, Journal of Personality and Social Psychology

Peer review is yet another cardinal skill—like ethics, writing, and teaching—that most of us are expected to intuit or learn on the job. This is likely why there is so much idiosyncrasy in the style and content of peer reviews. Of course, it is now all the more clear that our field’s continuing lack of consensual standards for the conduct, reporting, and interpretation of research further fuels idiosyncrasy in peer reviews. The ongoing discussion of best practice for open science has likely introduced new, and perhaps greater, idiosyncrasy in the standards by which peers review research.

Until we can all agree on basic standards for conducting, reporting, and interpreting research, we must at least agree on how to productively discuss our (idiosyncratic) views. Thus, as a journal editor, my main wish is that reviewers engage in principled and civil evaluation grounded in standards that they make explicit with specific details and supporting references (see Leach, 2020). Making explicit one's standards of evaluation in a review empowers all involved to cooperate in an informed exchange regarding the legitimacy of those standards and whether they have been met by the argument and evidence presented.

Firstly, principled and civil evaluation based in explicit standards empowers the handling editor to weight a review more appropriately and to judge whether the evaluation is in line with those of the other reviewers and with the stated aims and standards of the journal. Reviews that offer a great deal of (idiosyncratic) evaluation with little explicit statement of the standards behind it amount to statements of liking. Liking is a poor basis for principled and civil scholarly discourse and can instead invite distrust and rancor.

Secondly, principled and civil evaluation based in explicit standards empowers the author(s) to better understand reviewers' evaluations. This can ultimately translate into more honest self-reflection and improvement by the author(s). It also disseminates information to all involved and is thus a step toward more consensual standards. Importantly, principled and civil evaluation conveys greater procedural justice to authors, which can make it easier for them to accept reviews and to gain from them.

Lastly, principled and civil evaluation based in explicit standards empowers reviewers themselves to be more self-reflective about their evaluations, especially their legitimacy and the weight they deserve. Having to substantiate one's evaluation by making one's standards explicit should make reviewers more accountable to all involved, and thus more careful and conscientious. I am sure that many of us would withdraw at least some of our most damning evaluations if we had to root them in explicit standards in order to voice them.

The APA Publication Manual is an under-utilized source of basic standards for the conduct and reporting of research. The related, publicly accessible APA reporting standards are another: the Journal Article Reporting Standards for Quantitative Research in Psychology, and the Journal Article Reporting Standards for Qualitative Primary, Qualitative Meta-Analytic, and Mixed Methods Research in Psychology. I've listed some helpful guides to reviewing below.

Nature

https://www.nature.com/articles/d41586-020-03394-y
https://www.nature.com/articles/d41586-018-06991-0

APA

https://www.apa.org/pubs/journals/resources/how-to-review-manuscripts

JPSP: IRGP

https://www.apa.org/pubs/journals/features/psp-pspi0000226.pdf
https://www.apa.org/pubs/highlights/editor-spotlight/psp-irgp-leach