When customers view the video of a completed test, several features speed up analysis. These include the ability to adjust playback speed, jump to any part of the session, create Notes & Clips, and immediately access Transcripts.
We know that machine transcription is subject to errors. So we worked with our transcription vendor to identify ways to improve accuracy, and in the process we learned a lot.
Most transcription vendors are either general purpose or optimized for use cases such as call centers, voice assistants, or conference transcription.
However, most UserTesting videos record feedback on ease of use and difficulty, satisfaction and delight, expectations and requests, and other ideas related to UX and the creation of better experiences. This means that words, phrases, and concepts that our customers are used to hearing may sound unfamiliar and downright foreign to an algorithm that is primed to transcribe customer support calls. And a mismatch between training and test domains can negatively impact transcription accuracy.
To help the Automatic Speech Recognition (ASR) system learn our domain language, we worked with our vendor to create a custom model. This reflects a core tenet of machine learning: given a dataset to learn from, a model improves as it processes more information. The resulting domain adaptation helps bridge the difference, or mismatch, between the training data and the test data.
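To make the idea of domain adaptation concrete, here is a minimal, purely illustrative sketch of one classic technique: linearly interpolating a general-purpose language model with a small domain language model. The corpora, vocabulary, and interpolation weight below are all hypothetical toy values, not our vendor's actual method, which operates on far larger models and data.

```python
from collections import Counter

# Hypothetical corpora: the kind of speech a general-purpose recognizer was
# trained on, versus the UX-research language heard in usability test videos.
GENERAL_CORPUS = "please hold while we transfer your call to an agent".split()
DOMAIN_CORPUS = "select the menu then click the button and select checkout".split()

def unigram_lm(tokens, vocab):
    """Add-one-smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def adapt(general_lm, domain_lm, weight):
    """Linearly interpolate two models; a higher `weight` favors the domain."""
    return {w: (1 - weight) * general_lm[w] + weight * domain_lm[w]
            for w in general_lm}

vocab = set(GENERAL_CORPUS) | set(DOMAIN_CORPUS)
general = unigram_lm(GENERAL_CORPUS, vocab)
domain = unigram_lm(DOMAIN_CORPUS, vocab)
adapted = adapt(general, domain, weight=0.7)

# After adaptation, UX words like "click" and "select" become more probable,
# so the recognizer is less likely to prefer an acoustically similar non-domain word.
assert adapted["click"] > general["click"]
assert adapted["select"] > general["select"]
```

Real ASR adaptation works on acoustic models and large n-gram or neural language models, but the principle is the same: shift probability mass toward the vocabulary the system will actually hear.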
Using proprietary video content, part of which was manually transcribed, we worked with our vendor to create a dataset that was then used to train the custom ASR model. Now the software better adapts to the ambient background noise common in our recordings. When it hears a word that sounds like “slick,” it knows the word is likely “select.” If it hears something resembling “cook,” it knows the word is probably “click.” We taught the model the language of UX, and in turn it generates text that more closely matches the words spoken during a test.
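One way to picture how a domain-aware model turns “slick” into “select” is n-best rescoring: the recognizer produces several candidate transcripts with acoustic scores, and a domain bias nudges the ranking toward in-domain words. The hypothesis list, lexicon, and bonus value below are hypothetical, chosen only to illustrate the effect; they are not our vendor's implementation.

```python
# Hypothetical n-best list from a recognizer: (transcript, acoustic log-score).
# On acoustics alone, the misheard "slick" hypothesis would win.
N_BEST = [
    ("slick the button", -4.1),
    ("select the button", -4.3),
]

# Words a UX-adapted model has learned to expect in usability-test speech.
DOMAIN_LEXICON = {"select", "click", "tap", "scroll", "checkout"}

def rescore(hypotheses, lexicon, bonus=0.5):
    """Add a log-score bonus per in-domain word, then return the best transcript."""
    def score(item):
        text, acoustic = item
        hits = sum(1 for word in text.split() if word in lexicon)
        return acoustic + bonus * hits
    return max(hypotheses, key=score)[0]

best = rescore(N_BEST, DOMAIN_LEXICON)
# "select the button" (-4.3 + 0.5 = -3.8) now beats "slick the button" (-4.1).
```

Production systems integrate this bias inside decoding rather than as a post-processing step, but the intuition carries over: domain knowledge breaks ties that acoustics alone would get wrong.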
We’ve deployed this new model and we can already see that our Transcripts are more accurate. Premium subscription customers can view Transcripts from self-guided video recordings as well as Live Conversation sessions (English only).
Transcripts are a great time saver when analyzing completed tests.
If you haven’t yet tried Transcripts, we’d love for you to give our updated feature a spin and then let us know your thoughts.
This post was written with assistance from Huiying Li (Machine Learning Engineer) and Krystal Kavney (Product Manager).