I’m happy to report that over the last week we transitioned all of our writing projects at Servio to a modified workflow.
Before explaining the change and why I’m happy (and how it relates to incentives and expectations), it’s important to understand how all work gets approved on CloudCrowd, Servio’s crowdsourcing platform.
Luckily it’s quite simple: all work is peer reviewed a minimum of one time, and frequently up to three or even four times, depending on context or work complexity. This means that if I, as a worker, am instructed to “go find an image of a camera on a white background measuring 300 by 300 px” then another worker will verify that the image I’ve submitted meets all the criteria outlined in the project instructions.
The peer worker reviewing my submission makes a determination to approve my work or reject it. If approved, I get paid. If rejected, I do not.
Here we can see that workers have a very strong economic incentive to provide quality, accurate work, because failure to do so means any time devoted to the project is wasted. I think of this process as creating and leveraging social accountability to ensure quality. The peer review process is one way Servio guarantees quality, and it’s extremely powerful. I discuss a few other quality control techniques here and here.
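In code terms, the economics of the review loop are simple. Here’s a hypothetical sketch (the function and parameter names are my own, not CloudCrowd’s actual implementation):

```python
def process_submission(work, reviewer_approves, pay_rate):
    """One round of CloudCrowd-style peer review (illustrative only):
    a peer worker checks the submission against the project
    instructions; the submitter is paid only if the work is approved."""
    if reviewer_approves(work):
        return pay_rate  # approved: the worker earns the task's pay
    return 0.0           # rejected: the time spent is unpaid
```

The key point the sketch makes explicit: a rejection pays zero, so the worker’s entire incentive rides on passing review.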
Prior to the change in workflows, the writing process at Servio followed this same two-step process:
1st — a writer would receive instructions on what topic and length to write on. There would likely be structural and tone requirements too. (You can view a fairly extensive set of options for custom content writing on Servio.)
2nd — the completed article would go through the standard peer-review step, except the “peer” in this case would be an editor, not a writer.
On the surface this seems fine. It even mimics the real world in many ways: writers write and editors edit. Mimicking known social forms is one of my (personal) requirements for how crowdsourcing workflows should flow.
So where’s the problem, you ask?
The editors reviewing written content were instructed to approve a document if it contained fewer than two grammatical errors. If there were two or more grammatical errors, or any spelling errors, the document should be rejected. In this case, the writer would receive no payment for the work.
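The old rule reduces to a single decision function. This is a hypothetical illustration of the rule as described, not Servio’s actual code:

```python
def old_editor_decision(grammar_errors, spelling_errors):
    """Sketch of the old review rule: approve (and pay the writer)
    only when there are fewer than two grammatical errors and no
    spelling errors. Returns True for approve, False for reject."""
    return grammar_errors < 2 and spelling_errors == 0
```

Notice the all-or-nothing shape: a third comma splice costs the writer the entire payment, which is exactly the incentive problem described below.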
At first glance, this may seem totally acceptable. Of course the writer shouldn’t be paid for submitting writing that has grammatical or spelling errors. However, we have to remember that the perspectives of a writer and an editor are very different. Editors know all sorts of nitty-gritty grammar details, frequently details the average writer, myself included, won’t remember (or even care to). Editors on Servio are pretty hardcore. You have to really know your stuff if you want to keep up.
This situation created an expectation imbalance in which the editors were holding writers to unrealistic expectations. I’m not trying to say that writers shouldn’t know as much about grammar as possible. I am saying that small slips in grammar rules should not determine whether you get paid for the bulk of your work.
Remember, if the submitted writing is rejected because of a couple of comma splices and a misuse of parallelism in a series, then the writer gets nothing. If the writing project was for a 2,000-word article on the risks of OxyContin/oxycodone addiction or lithium withdrawal, subjects that would require fairly extensive research to write on, then a writer may have two hours of work thrown out for fairly minor reasons. Talk about a negative incentive to participate! The result is low worker morale, low throughput, and frustration between editors and writers.
I’ll go into detail on other techniques Servio uses to avoid wasted work like this another time—we’ve released a series of really, really awesome feature/workflow enhancements starting in 2010 to address this overall problem.
For now, we can contrast the prior workflow (the editors reviewing writer work) to the new workflow.
The New Workflow
Rather than having editors reviewing writer work directly, writers review writer submissions. After a writer’s article is approved (by another writer during the peer-review process), the article is automatically routed to an editor who is responsible for addressing deeper facets of the writing. Here’s a portion of what we expect from editors:
Proofreading: checking a written text for errors in spelling and grammar only.
Editing: the process of selecting and preparing language. This means editors are required to make improvements to word choice and sentence structure; otherwise it’s merely proofreading.
Note that we don’t expect trivial proofreading of the content. We expect the editors to focus on improving the writing, from sentence structure to word choice.
By isolating these project expectations—that is, by allowing writers to remain responsible exclusively for adherence to topic/focus/tone requirements and routing completed articles to an edit-only work queue—we’re able to achieve much higher quality in the final article while also better aligning expectations for both writers and editors.
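The new routing can be sketched in a few lines. Everything here (the queue, the function names, the article fields) is invented for illustration; it is not Servio’s actual API:

```python
# Illustrative sketch of the new two-stage writing workflow.
EDIT_ONLY_QUEUE = []

def peer_writer_approves(article):
    """Stage 1: another writer checks adherence to the topic,
    focus, and tone requirements (hypothetical fields)."""
    return article["on_topic"] and article["tone_ok"]

def route_article(article):
    """Peer-approved articles flow to an edit-only queue, where an
    editor improves word choice and sentence structure rather than
    rejecting the piece over minor grammar slips."""
    if not peer_writer_approves(article):
        return "rejected"  # no payment; writer may revise and resubmit
    EDIT_ONLY_QUEUE.append(article)
    return "queued_for_editing"
```

The design point is the separation of concerns: the pay-or-no-pay decision now depends only on criteria writers can reasonably be held to, while grammar polish happens downstream and never voids payment.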
The result is a lot less stress for the writers (no longer held to unrealistic expectations) and better pay (fewer rejections for small errors means average earnings per submitted article go up). Another result is faster completion of each writing assignment (primarily due to lower rejection rates). In a business that sells on quality and scale, this is a pretty big deal.
For business reasons, I cannot explain why we originally started with the first flawed workflow, but I can say that I’m extremely happy with where we are now. (Feedback on the worker support forums indicates workers are happy with this too!)
The Important Lesson
When designing workflows and writing instructions for workers, it is important to properly align expectations and incentives (pay, points, rewards, etc.). And as I illustrated above, small changes in a workflow can have major impacts on quality, worker satisfaction (hugely important), and throughput.