Reviewer Coordination and the Score Divergence Conference

Rob and I started the Reviewer Coordination Program a few months into the 2017 season.  Since it's still a new and perhaps unfamiliar program, I thought I'd share some thoughts about what it is and why we're doing it.

Reviewer Coordination Process

Initial Actions

  • Once three reviewers have been identified for a book, the Awards Directors send an email to the author letting him/her know it's time to send us review copies.
  • At this time, we also send the three reviewers a "Reviewer Coordination" email notification.  In this way, reviewers...
    • ...know that a new review book will be on the way soon
    • ...are reminded whether they are reviewing and scoring the book (Reviewer #1) or just scoring the book (Reviewers #2 and #3)
    • ...learn who the other two reviewers are
  • No action is required (other than saving the email for possible future use)... it's FYI only!
    • Please do not contact the other reviewers before you do your own scoring!
    • Initial scoring of books should be an individual process--free from outside influence and/or collaboration.
  • However... IF any reviewer discovers a problem with the book, they can contact the other reviewers AND the Awards Directors so that we can take action quickly.
    • What type of problem should I surface with the other reviewers and Awards Directors?
      • Not suitable for our website (X-rated, bashes the military, etc.)
      • Wrong genre/subcategory
      • Desperately in need of editing
      • Possible copyright violations
      • Anything that might reflect unfavorably on MWSA!

Draft Review Stage

  • Once the #1 reviewer finishes reading and scoring the book, they are encouraged to share a draft of their review with the other two reviewers.
    • Why share the draft?
      • Mostly for editing/quality control by two other reviewers who've read the book.
      • A side benefit is that the review might have a better chance of representing a consensus among three reviewers if it's shared beforehand.
    • When should I share my draft review?
      • Ideally, this coordination/editing would happen after all three reviewers have finished reading and scoring--thus avoiding the above-mentioned "influence and/or collaboration."
      • However, no one but the Awards Directors has visibility into who's done what and when, so the #1 reviewer should feel free to share a draft review at any time after finishing reading and scoring the book.
    • Bottom line: it's the #1 reviewer's review!
      • Fellow reviewers are free to suggest changes, additions or subtractions... but the #1 Reviewer has the final say among the three reviewers!

Score Divergence Process/Conference

Why have a Divergence Conference?

  • In most cases, there's no need to share/discuss scores among reviewers.
  • However, this changes when an Awards Director notes a significant divergence in scoring and convenes a "Divergence Conference" to try to explain and resolve it.

Goal of Conference: Consensus on Change or Acceptance of Average Score Results

  • The "conference" is initiated by the Awards Directors as a group email correspondence among the three reviewers.
    • The initial email kicking off the Divergence Conference will highlight the area(s) of divergence.
      • Scoring divergences often happen in the Technical Section.
      • Recall that the Tech Section...
        • ...overrides the overall scoring (e.g., an overall average score of "Gold" combined with a Tech score of "No Medal" results in "No Medal" overall!).
        • ...should be the most objective part of our scoring process... and therefore (theoretically) the least conducive to divergence!
    • Please use "Reply All" and include the Awards Directors.
    • If necessary (it normally isn't), we can convene a video or phone conference call.
  • Once a divergence is identified, it's appropriate--and in fact vital--that the three reviewers discuss and share their scoring with each other.
  • IF, after discussion, there's a consensus on changing an individual score, we can and should do this.
  • During the initial stages, we're looking for responses like these:
    • "Wow, I didn't find those spelling/grammatical errors on pages 25, 67, 103 and 125.  I need to change my tech score."
    • "I marked 5-10 typos, but after our discussion I went back and counted.  I only found four misspelled words."
    • "After reading your email, I now agree that the character 'Bob,' was shallow and undeveloped."
  • If agreement/consensus is reached on a specific score change, there's no need to submit another score sheet (unless a reviewer would like to do so); the Awards Directors can change scores/medal results and use the "email trail" to document their actions.
  • Our goal is NOT to encourage (much less browbeat) an "outlying reviewer" to join the other two... nor are we looking for "two vote Gold, one votes Bronze; the Golds have it!"
    • Doing this would force us away from our goal of "individual objectivity."
  • The goal is consensus on whether or not a book meets our scoring criteria, not necessarily on what medal it should get!
  • If a divergence conference does NOT result in a consensus on any changes to scoring, then we're done. 
    • Unless a reviewer is intentionally disregarding our scoring criteria (and yes, that's already happened a few times this year), whatever medal results from the AVERAGE of all three reviewers' scores is the "final verdict!"