Automatic Transcription Software (2/6) – Synote

This is the second post in this series about the project Evaluation of Transcription Software, looking at Synote. You can read the previous post in this series: Automatic Transcription Software (1/6) – Zoom.

I tried automatic transcription with eight voice recordings from a purposely selected sample of UCEM employees, each reading a 1,000-word script containing paragraphs relating to various subject disciplines in the built environment.

Synote is transcription software that can be used collaboratively: if all students have accounts, they can correct any errors in the transcripts themselves. However, at UCEM we used Synote in a slightly different way and did not provide all students with accounts. This model removes the collaborative correction of transcripts that Synote is built around, so an intermediary, an admin, has to sit between students and Synote to correct any reported errors.

As I mentioned in my previous blog post, I used Word Error Rate (WER) as a measure of accuracy:

WER = (Substitutions + Deletions + Insertions) / N

where N is the total number of words in the reference transcript (Apone, Botkin, Brooks, & Goldberg, 2011).

However, WER is not a perfect measure of accuracy, as it treats all words as equally important. Here I have calculated the WER and taken one minus WER, expressed as a percentage, as the accuracy rate.
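For readers who want to reproduce this kind of calculation, the WER formula above can be computed with a word-level edit distance. This is an illustrative sketch rather than the exact tool used in the project; the example reference and hypothesis strings are made up for demonstration.

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + deletions + insertions) / N,
    where N is the number of words in the reference transcript.
    Computed via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # substitution (or match)
                          d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1)        # insertion
    return d[len(ref)][len(hyp)] / len(ref)

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown box jumped over lazy dog"
error = wer(reference, hypothesis)        # 2 substitutions + 1 deletion over 9 words
accuracy = (1 - error) * 100              # accuracy rate as a percentage
```

Note that this treats every word equally, which is exactly the limitation mentioned above: a dropped "the" counts the same as a dropped subject-specific term.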

Synote worked really well for some users but not so for others.

The highest accuracy rate was recorded for a native British speaker's recording. However, for non-native speakers the accuracy rate was much lower than we had hoped for. At just above 30% accuracy for some transcripts, this was not going to be accurate enough to serve as an accessibility aid.

[Graph: Synote Transcription Accuracy Rate, varying from about 86% down to just above 30% across the eight recordings]

I will blog about the other automatic transcription software in due course.


Apone, T., Botkin, B., Brooks, M., & Goldberg, L. (2011). Caption Accuracy Metrics Project: Research into Automated Error Ranking of Real-time Captions in Live Television News Programs. Retrieved from