
Optical Character Recognition
Index incoming documents and automate workflows using optical character recognition
Situation: Insurance agencies and carriers still receive enormous amounts of paper mail that must be uploaded to the system, cataloged by a human, and assigned to the appropriate workgroups and tasks.
Task: Design a feature for an existing app that utilizes optical character recognition (OCR) capabilities to capture information from forms that are scanned into a user's system, or that come in via an email receiver, and assign them to the appropriate workflow and team based on form type.
Result: OCR became a key selling point for this product and allowed us to secure a deal with United Healthcare. I also discovered my passion for designing productivity tools.
Actions:
Competitive research
User research & testing
Task flows
Wireframes
Prototypes
Beginning
The first step was to sit down with the product manager and lead developer, our "team of three," to understand the basic requirements of the project and set scope. OCR tools like this exist, but our requirement was to integrate the output with our content management software. Therefore, understanding those intersection points was key to identifying where this feature would add value. We began by recruiting a core group of customers who were interested in this technology to be part of our design partner program. I interviewed them to understand their current bottlenecks in getting content into the system and assigned to workflows. Next, I researched competitive products to see what capabilities they had and identified areas where integration would cut down on manual processes.
Task Flows
With input from the PM, I developed a set of user stories and requirements that I used to create flows and rough wireframes to start planning out initial interactions. We held several meetings with our design partners to validate our understanding and requirements, and walked them through these initial interactions. We met regularly with them throughout this project, getting solid feedback we could incorporate and unearthing new points to consider.
The First Screen
This is the first step in the process: teaching the software the form type, where the data is located on the form, and how to identify certain key fields such as name and address. Additionally, it allows keywords to be indexed for later search, making content much easier to find.
Users also assign each form to a workflow and a specific step within that workflow, and designate which person is responsible for that step.
Selection & Focus States
Selection and focus states turned out to be one of my bigger decisions on this project. Users click the appropriate tool icon in order to draw a map over a field on the form in view. After drawing the map, should the tool remain selected so the user can continue drawing or should focus move to the first corresponding field so the user can type the required data?
I checked out some other drawing tools, created a basic prototype to try each method, and then realized allowing users to map fields without filling in the data would introduce tricky error handling scenarios. Therefore, I specified the action to be: map field > type data.
Editing & Versions
While on a design partner call, I was showing an early version of how form mapping would work. One of our participants asked, "So what happens if Tom and I are working on this form at the same time?" Excellent question! We talked through some options, including merging changes, and ultimately decided it would be best to only allow one person to edit at a time. I added a step that requires the user to first put the form in edit mode, which then locks it so no one else can make changes. Additional customer input around editing also led us to design version tracking, which would be included in a later release.
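The one-editor-at-a-time locking described above can be sketched in a few lines. This is a minimal illustration, not the product's actual implementation; the class and method names here are hypothetical.

```python
# Illustrative sketch of the edit-mode lock: a form template can be
# edited by only one user at a time. All names here are hypothetical.

class FormTemplate:
    def __init__(self, name):
        self.name = name
        self.locked_by = None  # user currently in edit mode, if any

    def enter_edit_mode(self, user):
        """Lock the template for a single editor; fail if someone else holds it."""
        if self.locked_by is not None and self.locked_by != user:
            raise PermissionError(
                f"{self.name} is being edited by {self.locked_by}"
            )
        self.locked_by = user

    def save_and_exit(self, user):
        """Release the lock so another user can edit."""
        if self.locked_by != user:
            raise PermissionError("Only the current editor can release the lock")
        self.locked_by = None

form = FormTemplate("Claim Intake Form")
form.enter_edit_mode("Sara")
# While Sara holds the lock, form.enter_edit_mode("Tom") raises PermissionError.
form.save_and_exit("Sara")
```

The key design choice, mirroring the decision in the text, is that merging concurrent changes is avoided entirely: the second user is simply blocked until the lock is released.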
The Rest of the Story
1. Some forms had fields containing protected personal information (PPI), such as social security numbers or contact info. Not everyone at a company should be able to view all information on a form, so admins could designate PPI fields for encryption. Non-permissioned users would be able to view the form, but the encrypted info would be hidden.
2. Naturally, there were lots of places where things could go wrong for the user. I identified 15 error and warning scenarios at the outset and crafted messaging that explained what went wrong and what to do next.
3. Form templates can exist in the system before they are ready to be used for ingesting documents, so I needed a way to make them "live" or "not live." A simple toggle switch turns the template on or off in the system.
4. I had already created a robust set of keyboard shortcuts for the app this functionality lives in. When the user enters this feature area, the keyboard shortcuts adjust to represent the actions power users can take while using this functionality. The end-users who will do the mapping will have hundreds of forms to teach the system, and I wanted to make it as fast and easy for them as possible.
5. I turned my wires over to my fantastic teammate to provide the visual design and she knocked it out of the park.
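The field-level masking in point 1 above could work along these lines. This is a hedged sketch of the behavior, not the shipped code; the field structure and function names are assumptions for illustration.

```python
# Hypothetical sketch of PPI masking from point 1: admins flag fields as
# PPI, and non-permissioned viewers see a masked value instead.

MASK = "•••••"

def render_field(field, viewer_can_see_ppi):
    """Return the value a given viewer should see for one form field."""
    if field.get("is_ppi") and not viewer_can_see_ppi:
        return MASK
    return field["value"]

fields = [
    {"name": "Claimant", "value": "Jane Doe", "is_ppi": False},
    {"name": "SSN", "value": "123-45-6789", "is_ppi": True},
]

# A non-permissioned user can still view the whole form,
# but the PPI values are hidden.
visible = [render_field(f, viewer_can_see_ppi=False) for f in fields]
# → ["Jane Doe", "•••••"]
```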
Conclusion
Overall, this was a very detailed, multi-layered, and meaty feature to design - my favorite kind of work. It released in Q2 2018 (after my departure) and has been a key selling point, allowing Vertafore to secure deals with large companies such as United Healthcare.