How many times have you heard some version of “the quickest way to get any job done is to do it right the first time”? The notion may seem counterintuitive in document review, where speed and quality are often treated as opposing variables. Common wisdom says a quality review must take a long time, and a quick review will produce poor quality.
As I enter my twentieth year leading document reviews, speed and cost still dominate the conversation, and clients remain hyper-focused on cost. Yet it has become evident to me that the key to controlling cost and increasing speed is quality. There are two primary approaches:
- finding the highest-quality team of reviewers possible, and
- implementing early and consistent quality control (QC) protocols for that team.
Naturally, if you take the time to identify the highest-quality performers and use them on every review, the burden and expense associated with QC are reduced.
The Early Gold Standard
In some of the first big reviews I managed, when the industry was shifting from paper-based review to electronic discovery, a common approach emerged. The accepted “gold standard” was to use a large team to get through the data as quickly as possible. Following this “first-pass” review, the team was winnowed down to a much smaller subset of the best and brightest, who then performed QC on every responsive document from the initial pass.
Sometimes a second quality-control review would be done of everything that survived the first QC, “just to be sure.” We would then pat ourselves on the back for going above and beyond for the sake of a “perfect” work product.
As eDiscovery volumes exploded, we were engaged by a client facing a steady stream of large-scale eDiscovery matters. It quickly became evident that this “gold standard” was not up to the task. Our client, well-acquainted with manufacturing processes, asked us to help improve the efficiency of the entire review process.
The New Gold Standard
We set up a system to track and report on docs-per-hour rates and coding overturns across the entire operation (reviewers, projects, technologies, etc.). We tracked all QC activities and assigned costs to the effort of finding and correcting coding errors. We developed price-per-doc figures adjusted for error rates so we could assess each reviewer's efficiency and value, and we used all of this data to quantify the overall value of our review methods and project management (PM) teams. The results were not only instructive, they were career-changing, and we swiftly adjusted our business practices accordingly.
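The error-adjusted price-per-doc idea can be sketched in a few lines. This is a minimal illustration, not the actual model used on the engagement; the function name and every figure below are invented for the example.

```python
# Hypothetical sketch: error-adjusted price per document.
# All names and numbers are illustrative assumptions, not the
# actual figures from the engagement described above.

def error_adjusted_price_per_doc(
    docs_reviewed: int,
    review_cost: float,
    error_docs: int,
    fix_cost_per_error: float,
) -> float:
    """Spread the first-pass review cost plus the cost of finding
    and fixing coding errors across every document reviewed."""
    total_cost = review_cost + error_docs * fix_cost_per_error
    return total_cost / docs_reviewed

# A reviewer who codes 10,000 docs for $5,000 but leaves 500 errors,
# each costing $1.00 to find and fix, is effectively more expensive
# per doc than the sticker price suggests.
rate = error_adjusted_price_per_doc(10_000, 5_000.0, 500, 1.00)
print(f"${rate:.3f} per doc")
```

Comparing reviewers on this adjusted figure, rather than raw throughput or raw rate, is what makes a slower but more accurate reviewer's value visible.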
Quantifying Success
It turns out that each document coded incorrectly cost almost twice as much as one coded correctly the first time, once the effort to find and fix the error was included. The most eye-opening realization was that a small fraction of low performers was responsible for 80% of the total errors.
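The 80% finding is a classic Pareto pattern, and checking for it in your own overturn data is straightforward. The sketch below is illustrative only; the reviewer names and error counts are invented.

```python
# Hypothetical sketch: how few reviewers account for a given share
# of total errors. Data below is invented for illustration.

def smallest_group_covering(errors_by_reviewer: dict[str, int],
                            share: float = 0.80) -> list[str]:
    """Return the smallest set of reviewers (worst first) whose
    combined errors reach the given share of total errors."""
    total = sum(errors_by_reviewer.values())
    group, covered = [], 0
    for name, errs in sorted(errors_by_reviewer.items(),
                             key=lambda kv: kv[1], reverse=True):
        group.append(name)
        covered += errs
        if covered >= share * total:
            break
    return group

errors = {"r1": 120, "r2": 95, "r3": 10, "r4": 8, "r5": 7}
# Two of five reviewers account for over 80% of the 240 errors.
print(smallest_group_covering(errors))
```

If the returned group is a small slice of the roster, remediating or replacing just those reviewers removes most of the downstream QC cost.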
Using these key performance indicators (KPIs), we were able to assemble a team of high-performing reviewers with a coding overturn rate of less than 2% – the crème de la crème of reviewers. We performed rigorous early QC on any newly engaged reviewers, taking quick action to remediate performance that fell short of this benchmark. Furthermore, our core team of expert reviewers moved seamlessly from one project to the next for the more than three years we supported the client.
By leveraging KPIs for improved QC and project management, and by recognizing and retaining top-performing reviewers, we never again fielded a team of more than 50 for this client, no matter how many documents were slated for review. We also achieved coding overturn rates far below the industry-standard error rate across all subsequent projects.
The Takeaway
The primary lever for effective document review is to tightly define quality and closely monitor it across all reviewers and projects.
The result is higher quality, increased speed, and lower cost.