Captioning the future; or are we already there?
Captioning has always been a challenge in post-production. It doesn’t make sense to tie up an edit suite and editor to do it, so sending it to a specialist subtitling department is the usual option.
In Europe, subtitling used to be handled by the broadcaster and carried in Teletext: data in the top few lines of the TV signal. However, with the advent of digital broadcasting and On Demand, this has disappeared and a lot of the responsibility has now been moved to the production company. Broadcasters and content aggregators will usually specify the requirements for subtitling, including languages and subtitles for the hard of hearing, which also carry descriptions of important audio content, like when a door creaks open or a person sighs.
In America, the situation is even more complex: FCC regulations implementing the CVAA make subtitling (or closed captions) a legal requirement, not only for broadcast but also for online content. This makes it even more important to build closed captioning into the standard post-production workflow.
The first step in any subtitling workflow is obviously to transcribe the speech into text. Computer speech recognition has improved dramatically over the last few years, but it is still not 100% accurate. You will still need a human to go through and check the results, which gives a better outcome, but it takes longer and can cost the production a lot more.
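One common way to quantify how far an automatic transcript falls short of 100% is word error rate (WER): the number of word-level edits needed to turn the machine output into the human-checked reference, divided by the reference length. The metric and the sample sentences below are illustrative, not something Forscene exposes; this is a minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("creaks" vs "creeks") in a four-word reference: WER = 0.25
print(word_error_rate("the door creaks open", "the door creeks open"))
```

Even a seemingly low WER can mean several wrong words per minute of dialogue, which is why the human review pass remains part of the workflow.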
In recent years, cloud-based transcription and subtitling services have been developed that allow you to use native speakers of the target language wherever they are based. This is the ideal scenario from a time- and money-saving perspective, as they can work remotely to produce high-quality results. One of the capabilities of Forscene that you may not be aware of is the ability to add subtitles to rushes and edits. As Forscene is completely cloud-based, you can hire translators from anywhere to work on your media and add subtitles. We have even had clients subtitling their rushes so that editors and producers can work with content that has been shot in another language.
We have also partnered with Take1, a specialist transcription company, so that media requiring transcription can be uploaded from Forscene directly to Take1. They can then transcribe the speech and the resulting text is loaded back into Forscene and added to the media. The client also gets a hard copy of the transcription so they can go through the content and can then find the relevant clips in Forscene by searching the text.
Once you have the transcription, you have the option to add subtitles to edits automatically. They will appear on the timeline as blocks of media on dedicated subtitle tracks. You can easily edit the position and duration of subtitles, and if you plan to export your subtitled clip directly from Forscene, you can also choose the screen position, font and colour of your titles. You can have multiple subtitle tracks on the timeline, so it is easy to subtitle a sequence in different languages. If you are going to conform and finish your edit in Avid, then the AAF export from Forscene will also carry the subtitle data and add it to the Avid SubCap track in the timeline.
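Under the hood, a subtitle track is just a list of timed text blocks, which is easy to see in a plain-text format like SubRip (SRT). The segment data below is invented for illustration, and no particular Forscene export format is assumed; this sketch simply shows how timed transcript segments map onto numbered, timestamped subtitle blocks:

```python
def to_srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Build an SRT document from (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

# Note the second cue: subtitles for the hard of hearing also describe
# important non-speech audio, as mentioned above.
segments = [
    (0.0, 2.5, "Welcome back to the studio."),
    (2.5, 5.0, "[door creaks open]"),
]
print(segments_to_srt(segments))
```

Each block on the timeline corresponds to one numbered cue, which is why adjusting a subtitle's position and duration is just a matter of moving and trimming that block.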
Subtitles and closed captions are more widely used in video production today, and the ways in which the industry has evolved to produce them demonstrate their importance. No doubt new technology will surface in time that perfects the delivery of subtitles and closed captions (so you don’t end up with a closed caption fail!) but for now, cloud-based offerings are the way to go.
We will be exhibiting our subtitling and closed caption capabilities at this year’s NAB Show in Las Vegas, where you can find Forscene in booth SL5305. Click here now to book a demo of these features and more.