User Design Myths: Pre-Population For Data Keying

July 28, 2021


Looking for Improvements, Testing for Impact 

Data keyers are always looking to increase their speed and improve their output. When they work with the Hyperscience Platform, we want to ensure it’s a seamless, easy-to-use tool whenever confirmation of a transcription is required. It’s a common request from our customers to show the ML/AI’s best guess even when the model isn’t certain.

Interested in the potential productivity gains, our Product Design team tested whether pre-populating fields with that best-guess transcription would actually improve the speed of data keyers using the platform. Here’s what we found:

UX Methodology: Human-In-The-Loop

At Hyperscience, we work to ensure that our intelligent automation platform empowers humans to do their best work rather than replacing human input entirely. Our ML/AI gets the vast majority of answers right; when it’s uncertain, however, we ask data keyers to step in to ensure accuracy and fine-tune the models. Working at the bleeding edge of Machine Learning, our Design team members are experts in crafting a seamless relationship between machines and humans.

For this experiment, we had data keyers transcribe two sets of fields: 100 fields pre-populated with the machine’s best guess, 50 of which contained errors of varying severity, and 100 empty fields. We timed both conditions and counted the errors in each for comparison, as in the sketch below.
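To make the comparison concrete, here is a minimal sketch of how timing and error data from a study like this could be summarized. The numbers below are hypothetical illustrations, not our actual measurements; the approach is simply to compare the mean and spread of per-field completion times and the error rate in each condition.

# Illustrative summary of a pre-population study
# (hypothetical data, not the results from our experiment).
from statistics import mean, stdev

# Seconds to complete each field, per condition (made-up numbers).
prepopulated_times = [4.1, 3.9, 4.4, 4.0, 4.2, 4.3, 3.8, 4.1]
empty_times        = [3.2, 5.1, 3.0, 5.4, 2.9, 5.0, 3.1, 4.9]

# Errors observed in each condition (made-up numbers).
prepopulated_errors, empty_errors = 24, 22
fields_per_condition = 100

for label, times in [("pre-populated", prepopulated_times),
                     ("empty", empty_times)]:
    print(f"{label}: mean {mean(times):.2f}s, stdev {stdev(times):.2f}s")

# A wider spread for empty fields would suggest a higher speed ceiling
# for pure typing than for checking and correcting a best guess.
rate_pre = prepopulated_errors / fields_per_condition
rate_empty = empty_errors / fields_per_condition
print(f"error rates: pre-populated {rate_pre:.0%}, empty {rate_empty:.0%}")
print(f"relative increase: {(rate_pre - rate_empty) / rate_empty:.0%}")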

Hyperscience Transcription Task

Mythbusting Pre-Population UX 

Myth 1: Pre-population makes keyers faster – BUSTED

In fact, it took keyers the same amount of time to transcribe fields with and without pre-population. We also found a higher standard deviation for typing into empty fields, indicating that the ceiling for keying speed is higher when typing from scratch than when checking and correcting fields pre-populated with the machine’s best guess.

Myth 2: Pre-population would decrease error rates – BUSTED

Our study showed that keyers made 9% more errors when checking and correcting pre-populated fields. Keyers were especially prone to missing small errors, like the difference between a “1” and an “l.” This was an important finding, since those are exactly the types of errors the ML/AI is most likely to make when it’s uncertain. Furthermore, keyers made twice as many errors when checking and correcting longer fields, like addresses.
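To illustrate how subtle these near-misses can be, here is a small sketch that surfaces character-level differences between a pre-populated best guess and the ground truth using Python’s difflib. The example strings are hypothetical, not tooling or data from our study.

# Highlighting the kind of small errors keyers overlooked
# (hypothetical strings, not data from our study).
import difflib

pairs = [
    ("Apt 1l, 42 Elm Street", "Apt 11, 42 Elm Street"),  # "l" vs "1"
    ("O'Brien", "0'Brien"),                               # "O" vs "0"
]

for guess, truth in pairs:
    # Keep only the characters that differ between the two strings.
    diff = [op for op in difflib.ndiff(guess, truth) if op[0] != " "]
    print(f"{guess!r} vs {truth!r} -> {diff}")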

Mistakes in fields pre-populated by our ML/AI are more dangerous, too. Because the pre-populated value is the machine’s best guess, validating a guess that is actually wrong reinforces the behavior and makes the model more likely to make the same mistake in the future.

The Importance of Research and Testing

Research is a necessary element of creating user-oriented design. Our Designers apply this practice across all of our applications to secure better customer outcomes. For example, we enable our users to identify fields in just one click, create new layouts in a few simple steps with our new layout editor, and intuitively identify nested tables.

We work closely with our customers to understand their needs as users. However, we also understand that it is tough to predict how changing the relationship between humans and our models will affect business outcomes. A trust-but-verify approach is crucial.

We test before building so we can ensure we are only delivering the most valuable features to drive the best outcomes. In this case, adding pre-population features had no upside and may have actually hurt performance.

Gregg Tourville is a Lead Product Designer based out of our New York office. Connect with Gregg on LinkedIn.