Tuesday, 8 September 2020

Closed Captioning – Everything You Need to Know

This article has been contributed by Mildred Austria.

When planning video content, creating closed captions may not be the first thing that springs to mind. But it’s just as important for your brand or marketing campaign as professional filming equipment, strong lighting and sharp editing.

Approximately 15% of American adults report some trouble hearing. Any brand seeking to generate engagement, loyalty and interest must make its content accessible, and the best way to ensure that viewers gain the full benefit of your efforts – and truly recognise the strength of your brand – is to apply closed captioning to your content.

It took more than four decades from the invention of television for closed captioning to be added to programs. The technology first appeared on TV in 1972, and by the turn of the 21st century captions had become a legal requirement for most television programming.

What is Closed Captioning?

Captions are a text representation of the audio track within a media file. Traditionally, they have been used to make video much more accessible to the deaf and the hard of hearing.

Captions are also essential for viewers watching video on mute, whether they are travelling on public transport, don’t want others to hear what they’re watching, or have sleeping children nearby!

This is achieved by providing a time-synchronised text track that supplements, or sometimes substitutes for, the audio. While the text in a caption file consists mostly of speech, captions also include non-speech elements such as speaker IDs and sound effects (like “calming music” or “loud explosion”). These are all critical if you want to understand the plot, or what is happening in the video.

When Should Marketers Use Closed Captioning?

Consider any content you produce that could put viewers who are deaf or hard of hearing at a disadvantage: videos posted to social media, video-sharing platforms and your own website all risk losing a significant audience if you have not taken the time to create accurate captions. This is not only incredibly frustrating for viewers, but it can also limit the ROI you see on your content creation.

These sites will require you to upload your closed captions as an SRT (SubRip Subtitle) file, which details exactly when – and for how long – each caption appears in your video, helping to ensure a good level of accuracy. Once the video has been uploaded, you can attach the SRT file as a sidecar track, which lets viewers turn the captions on or off.
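
For illustration, a minimal SRT file might look like the sketch below; the timings and text are invented for this example. Each cue has a sequence number, a start and end time in hours:minutes:seconds,milliseconds, and the caption text itself, which can include non-speech elements and speaker IDs.

    1
    00:00:01,000 --> 00:00:04,000
    [calming music]

    2
    00:00:04,500 --> 00:00:07,200
    NARRATOR: Welcome to this season's collection.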

There are various types of captions available – some are better for online video and others for social media. Make sure that you look into the different options properly so you can reap the maximum rewards and benefits.

Types of Captions

By now, you probably have a good idea of what closed captions are. Not many people know how they differ from subtitles though. It’s time to explore that, right here.

Although the two terms are often used interchangeably, there is a real difference between subtitles and closed captions. Captions assume that the viewer cannot hear the audio; they are indicated by the CC icon on the video player or on the remote. Subtitles, however, are for hearing viewers who might not understand the language of the audio; their purpose is to translate the spoken word into the viewer’s language.

Subtitles do not include the non-speech elements found in the audio, such as sound effects or speaker identification. For that reason, they are not considered an appropriate accommodation for viewers who are deaf or hard of hearing.

Closed Captions vs Subtitles

Subtitle and language buttons on remote control

Closed captions assume that the viewer cannot hear. They are time-synchronised, and they include non-speech elements such as sound effects and laughter. Subtitles assume that the viewer can hear but does not understand the language.

Note that subtitles translate the audio into another language and do not include any non-speech elements.

In the US, there is a very clear distinction between subtitles and captions; in other regions, such as Latin America, closed captions are simply called subtitles.

Subtitles for the Deaf and Hard of Hearing (SDH) assume that the viewer cannot hear the audio and, in some cases, may not understand the language either. SDH combine the information conveyed by captions and subtitles, including non-speech elements.

Closed Captions vs Open Captions

The difference between closed captioning and open captioning is the amount of control the viewer has. Open captions are burned into the video, which means that as a user you don’t have the option to toggle them on or off. Closed captions, on the other hand, are added to the video in a sidecar file and are completely separate from the video. When you order a file for encoding, you choose between closed or open.

Closed captions are normally used for online video, but this can vary. Open captions are burned into the video itself, which means that whether the video is published offline or online, you cannot turn them off.
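
To make the distinction concrete, here is a minimal sketch of one common way each kind of output is produced, assuming ffmpeg is installed with subtitle support; the file names are placeholders, and a professional service or dedicated captioning tool would normally handle this step for you.

    import subprocess

    # Placeholder file names for illustration only.
    VIDEO = "input.mp4"
    CAPTIONS = "captions.srt"

    # Open captions: the text is rendered into the video frames and cannot be turned off.
    subprocess.run(
        ["ffmpeg", "-i", VIDEO, "-vf", f"subtitles={CAPTIONS}", "open_captions.mp4"],
        check=True,
    )

    # Closed captions: the SRT is muxed in as a separate, toggleable subtitle track.
    subprocess.run(
        ["ffmpeg", "-i", VIDEO, "-i", CAPTIONS, "-c", "copy", "-c:s", "mov_text",
         "closed_captions.mp4"],
        check=True,
    )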

The best and easiest way to create open captions is to hire a professional service. They can work with you to make sure you get the result you need, and they will also time your captions perfectly. Doing it yourself can be difficult and time-consuming, and you may have to invest in expensive video software, which adds to the cost even more. If you want some help, look into closed captioning software at Verbit.ai.

Now, you may be wondering why on earth someone would want to use open captions. The answer is that open captions are very useful when your video player doesn’t accept sidecar files. They are also very helpful on social media.

Platforms such as Twitter, Snapchat and even Instagram don’t let users upload a caption file, which means companies often end up using open captions instead.

This means the captions are always accessible, and every viewer gets the same experience.

Caption Quality

If you turn on the automatic captions in a video and compare them to what is being said, you may find a lot of discrepancy between what you are hearing and what you are seeing. In fact, you may sometimes find that the captions just don’t make sense at all.

If you are deaf or hard of hearing, this can be frustrating to say the least. Caption quality matters because captions are meant to be an accurate alternative to the audio for individuals with hearing loss. When captions are inaccurate, the entire video becomes inaccessible.

What is 99% Accuracy?

It’s important to know that there is an industry-standard accuracy level you need to meet: 99%. Accuracy measures spelling, grammar and punctuation. A 99% accuracy rate means a 1% margin of error, or a leniency of around 15 errors per 1,500 words spoken.

What if the Accuracy Rate is Lower?

Studies have shown time and time again that even a 95% accuracy rate is sometimes not enough to convey complex material. At 95% accuracy, roughly one word in every 20 is wrong, so with an average sentence length of 8 words you would have an error, on average, every two and a half sentences. Unfortunately, most Automatic Speech Recognition (ASR) technology has an accuracy rate of just 80%.
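
To see where those figures come from, here is a small illustrative calculation; it is only a sketch, with the word count and sentence length taken from the examples above.

    def caption_error_stats(accuracy, total_words, words_per_sentence):
        """Rough arithmetic behind caption accuracy rates."""
        error_rate = 1 - accuracy                   # share of words expected to be wrong
        expected_errors = error_rate * total_words  # errors across the whole transcript
        sentences_per_error = (1 / error_rate) / words_per_sentence
        return round(expected_errors), round(sentences_per_error, 1)

    # 99% accuracy over 1,500 spoken words: about 15 errors in total.
    print(caption_error_stats(0.99, 1500, 8))   # (15, 12.5)

    # 95% accuracy with 8-word sentences: an error roughly every 2.5 sentences.
    print(caption_error_stats(0.95, 1500, 8))   # (75, 2.5)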

Knowing how a captioning vendor determines their accuracy rate is also important. Some vendors don’t take punctuation into account when calculating accuracy, yet a misplaced punctuation mark can completely change the meaning of a sentence.

Woman watching TV and ironing

About Automatic Speech Recognition (ASR)

ASR is technology that automatically converts the spoken words in a video into text, without any human assistance. ASR transcripts tend to be riddled with inconsistencies, and they also fall short of important quality standards.

ASR is fast and cheap, and it can in fact give you a good first draft, but if you want your viewers to actually understand what your videos are about, you should not rely on it as your final caption option.

The Cost of Inaccurate Captions

Beyond the accessibility problem, inaccurate captions cost you in other ways. If you don’t have good captions, you may find that they create far more work for you in the future.

Instead of focusing on other projects, you have to spend far more time fixing your captions, which is the last thing you need. On top of this, your viewers may find the content harder to comprehend. Misspellings can mislead your users, and in some cases this can even open you up to lawsuits.

A study by Global Lingo found that 59% of respondents would not even consider using a company that had spelling or grammatical mistakes on its site or in any of its marketing material. Grammar mistakes hurt your credibility, and they can cost you far more in sales than you might think.

It’s imperative that you make a good impression, so make sure you are not overlooking any part of this when planning out your branding material.

Captioning Standards

The DCMP (Described and Captioned Media Program), the FCC (Federal Communications Commission) and the WCAG (Web Content Accessibility Guidelines) have all outlined captioning standards to make content as accessible as possible for those who are deaf or hard of hearing.

DCMP Caption Standards

The DCMP publishes a set of guidelines for captioning any kind of media, and it has a clear philosophy: captions should include as much of the original language as possible, including any words or phrases the audience might not understand, rather than replacing them with simpler synonyms. Some editing of the text may still be required to keep the captions readable once synchronisation is taken into account.

Captions should be accurate and error-free; that is the goal for every production. They should also be consistent, since uniformity in style and presentation is crucial for viewers to have a good level of understanding.

Lastly, captions have to be clear. The text representation of the audio, including speaker identification, needs to provide clarity. If these guidelines are not met, viewers will not get the best experience from your material, and your users will suffer as a result.

Man holding remote watching TV with captions

FCC Caption Standards

FCC caption quality standards are based on accuracy, timing, placement and completeness, and they apply to live, near-live and pre-recorded programming. The FCC states that captions need to match the spoken words of the audio to the fullest extent possible, including preserving accents and, in some instances, slang.

For live captioning, of course, some leniency applies. Under the FCC rules, captions have to be synchronised: they need to align with the audio track, and each caption frame should stay on the screen for between 3 and 7 seconds. Completeness matters too, as captions need to run from the beginning to the end of the program.
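
As a rough sketch of how you might check an SRT file against that 3-to-7-second guideline, the snippet below parses the timestamps and flags cues that stay on screen too briefly or too long; the file name is a placeholder and this is an assumption of mine, not an official FCC tool.

    import re

    # Matches SRT timestamp lines such as "00:00:01,000 --> 00:00:04,000".
    TIMESTAMP = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
    )

    def to_seconds(h, m, s, ms):
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

    def check_caption_durations(srt_text, min_s=3.0, max_s=7.0):
        """Return (cue number, duration) for cues outside the min_s..max_s window."""
        problems = []
        for i, match in enumerate(TIMESTAMP.finditer(srt_text), start=1):
            start = to_seconds(*match.groups()[:4])
            end = to_seconds(*match.groups()[4:])
            duration = end - start
            if not (min_s <= duration <= max_s):
                problems.append((i, duration))
        return problems

    # Hypothetical usage: flag cues shown for less than 3 or more than 7 seconds.
    with open("captions.srt", encoding="utf-8") as f:
        print(check_caption_durations(f.read()))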

There is plenty you can do to make your captions the best they can be. Follow this guide and you will find it much easier not only to meet the relevant regulations, but also to give your users the best experience possible.

Conclusion

It’s obvious that captions are vital for the accessibility of your videos and for effectively engaging your deaf and hard of hearing viewers. But it’s also important to consider them for the sake of the audience that chooses, or needs, to watch your video content on mute or can’t use sound on their device.

If you need help with your captions, work with a reputable captioning service. When you do, you will find it far easier to get a great result and to feel confident in the service you are receiving.

_

About the author: Mildred Austria works with Ocere, a content creation and SEO agency.
