FCC vs. ADA Caption Requirements

Are your videos in compliance with FCC and ADA requirements?

Both the FCC (Federal Communications Commission) and the ADA (Americans with Disabilities Act) work to protect and assist individuals with disabilities, including the right of people who are deaf or hard-of-hearing to have full access to video programming. To ensure that access, the FCC and ADA have set standards and requirements for closed captioning on live and pre-recorded programming.

Does your programming meet their standards and requirements?

Let’s find out:

FCC Requirements for Closed Captioning on Television –

FCC rules apply to all television programming with captions. The commission requires that captions be accurate, synchronous, complete, and properly placed.

  • The program’s captions must match the spoken words while also conveying background noises accurately.
  • Captions must be synchronized with the program’s audio, appearing at the same time and pace as the corresponding speech and sounds.
  • Captions must run from the beginning of the program to the end.
  • Captions must not block important visuals on the screen, overlap in ways that make them hard to read, or run off the screen.
  • It’s important to note that these rules also apply to internet video programming if the “video programming was broadcast on television in the U.S. with captions.”

ADA Compliance Laws for Closed Captioning –

The ADA closed captioning guidelines target government institutions, public schools and universities, and businesses and non-profit organizations that serve the public. The requirements, which cover both television and online video content, are designed to ensure that captions are created correctly (a sample caption file follows the list below).

  • Each caption should display one to three lines of text onscreen at a time, never more than three.
  • Captions should be at least 99% accurate.
  • The captioning font should be similar to Helvetica.
  • Background noises, or non-speech sounds, should be enclosed in square brackets.
  • Both punctuation and mixed upper and lower case should be used.
  • Captions should reflect any slang words used in the audio.
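
To make these guidelines concrete, here is a short sample caption file in the common SRT format (the timing and dialog are invented for illustration). Note the synchronized timing cues, the three-line maximum, the mixed case and punctuation, and the non-speech sound in square brackets:

```
1
00:00:01,000 --> 00:00:04,000
[door slams]
Sorry I'm late. Traffic was
a nightmare this morning.

2
00:00:04,200 --> 00:00:07,500
No worries. We're just
getting started.
```

Because captions must also run from the beginning of a program to the end, a complete file would contain cues covering the entire runtime.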

You can find more information about the ADA’s captioning regulations on the ADA’s website.

Are there any exclusions?

It’s important to understand that these captioning rules apply only to videos that have aired on television. If content has never aired on television, the rules do not apply. Even when accurate captioning isn’t legally required, though, these videos should still include it. Why? Because videos with captions reach a much wider audience. Accurate captions let individuals who are deaf or have hearing loss enjoy the content, and they let viewers who can only watch with the sound off follow along. Captions also improve retention for videos filled with important information and deliver a better viewing experience.

Are you following FCC and ADA guidelines? If not, a complaint could be filed against you and legal action could be taken.

Stay safe: avoid unnecessary legal problems by giving your videos accurate, correctly placed captions.

To learn more about captioning software that complies with FCC and ADA guidelines, visit our Subtitling & Captioning page.

Guide to Adding Captions to Online Courses and eLearning Videos

Now, more than ever, online learning is a crucial component of education. Since the COVID-19 pandemic, learning institutions have been challenged to make their courses and learning materials accessible online in case students need to stay home instead of coming to the classroom. From university professors to elementary school teachers, everyone is uploading courses and teaching videos to online platforms so that students don’t miss out on material when they have to stay at home.

All of this material needs to be captioned.

Why?

First, students who are deaf or hard-of-hearing need to be able to access these videos as well. Under the Americans with Disabilities Act (ADA), most higher education institutions, both public and private, must offer closed captioning. Failure to comply with these regulations could lead to serious penalties for the institution.

Second, professors must understand that today’s students are not always able to study in perfectly quiet environments. Many are living at home with their families, constantly surrounded by noise and distractions. Captions help these students understand what is being said even when noisy surroundings keep them from hearing every word.

Another reason to add captions to eLearning courses is to improve students’ comprehension, accuracy, engagement, and retention. A study published on educause.edu examined students’ use of captions in online learning courses. It found that over 50% of students without any hearing disability use closed captions at least some of the time, and among the students who said they use closed captioning, “59.1 percent reported that the closed captions in course videos are very or extremely helpful, and an additional 29 percent said they were moderately helpful.”

Enhance your students’ learning experience and improve their success rates through the use of closed captioning.

Here’s how to add closed captioning to your online and eLearning courses:

Step 1: Have both your video file and its audio file ready to upload. You can use websites such as vlchelp to create an audio-only file from any video.
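
If you prefer a scriptable route, the free FFmpeg tool can also extract an audio-only file from a video. Here is a minimal Python sketch of that approach (the file names are placeholders, and it assumes FFmpeg is installed and on your PATH):

```python
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    """Extract the audio track from a video file by calling FFmpeg."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,        # input video
            "-vn",                   # drop the video stream
            "-acodec", "pcm_s16le",  # uncompressed 16-bit WAV audio
            "-ar", "16000",          # 16 kHz sample rate, common for speech tools
            audio_path,
        ],
        check=True,  # raise an error if FFmpeg exits with a failure code
    )

extract_audio("lecture.mp4", "lecture.wav")
```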

Step 2: Purchase and download WinCaps Q4.

WinCaps Q4 is BroadStream’s software solution for closed caption and subtitle creation. It takes any video or audio file and creates an accurate closed caption file containing both the text and the timing for each caption. These captions can be easily edited to change any wording, punctuation, or spelling, and a separate caption file can be created for translation into a foreign language.

If you are an educational institution, ask about WinCaps Q4 Educational, which provides a special license lasting the duration of your course.

Step 3: Export the caption file and upload it to your learning platform alongside the course video. Students can then choose whether or not to display the captions, giving them the best personal learning experience.
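
Most learning platforms handle the upload and caption toggle for you, but if you host course videos on your own web page, the standard HTML5 track element is how a caption file (typically WebVTT) is attached so viewers can switch captions on or off. A minimal sketch with placeholder file names:

```html
<video controls width="640">
  <source src="lecture.mp4" type="video/mp4" />
  <!-- Attaches a WebVTT caption file that the viewer can toggle on or off -->
  <track src="lecture.vtt" kind="captions" srclang="en" label="English" />
</video>
```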

If you have any questions, reach out to our team online through our contact page. We’ll be happy to answer any questions about how to best add closed captioning to your online courses.

For more information about WinCaps Q4 and its various features, visit our WinCaps page.

To learn about our closed captioning services that start at just $0.25/minute, visit our closed captioning services page.

Captions vs. Subtitles

What’s the difference between captions and subtitles?

Many people confuse captions and subtitles. Both appear as text on a television, computer screen, or mobile phone while a video plays, and both help viewers better understand the speech. So what is the actual difference between the two?

The basic difference is that captions are in the same language as the spoken audio, while subtitles are in a different language.

Captions take the speech and display it as text in the same language. Subtitles, on the other hand, are a translation of the speech into a different language. This means that with captions, what you read is what you hear; with subtitles, what you read is in a different language than what you hear.
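
A quick illustration, using an invented line of English dialog in SRT format. The caption transcribes what is heard; the subtitle translates it:

```
As a caption (same language as the audio):

1
00:00:05,000 --> 00:00:08,000
Welcome back, everyone.

As a Spanish subtitle (a translation of the same line):

1
00:00:05,000 --> 00:00:08,000
Bienvenidos de nuevo a todos.
```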

Captions were originally developed to make television programs more accessible to the deaf community. Individuals with hearing impairments may not be able to fully understand the audio but can follow along with the closed captions to understand what is being spoken.

Closed captions prevent discrimination against people with disabilities and are required by law in many countries, including the United States and much of Europe. Not only do captions benefit the deaf community, but they also make multimedia videos more engaging and accessible. With captions, videos can be played on silent in public areas or noisy rooms. Captions also help viewers better retain information from university lectures, training videos, conference meetings, live events, and so much more.

Subtitles, on the other hand, were originally developed to make television viewing accessible to viewers who don’t speak the language of the program’s audio. With subtitles, videos and TV programs can now be shared across the world. Although the speech remains in one language, viewers can select subtitles in their own language, if available, to better understand what is being said. Subtitles not only make multimedia accessible across languages but also help people who are learning a new language: studies suggest that watching content with foreign-language subtitles aids learning as the words and phrases appear on screen. Subtitles also benefit deaf and hard-of-hearing viewers who want to access videos in foreign languages.

Both subtitles and captions make multimedia videos and television programs accessible across the world. Video content now dominates social media platforms, and video is becoming ever more important in education and business environments.

Every video you create should have captions or subtitles to improve engagement, accessibility and retention for all viewers regardless of their hearing situation.

Check out our Captioning & Subtitling Software to learn more about what these technologies can do for you.

How VoCaption Took WVIA’s Live Captioning Workflow from Aggravation to a Single Mouse Click

WVIA, a PBS station operating 3 channels in Northeastern Pennsylvania and the Central Susquehanna Valley, has been using BroadStream’s OASYS Integrated Playout solution and recently added VoCaption, our Automated Speech Recognition solution for live, automated captioning.

Joe Glynn and his staff at WVIA had been providing live captions on a limited basis, but getting live captions on-air took too many steps and had multiple points of failure, raising stress and anxiety beyond what the team needed.

The weekly live captioning workflow required someone to call the captioning company to schedule a captioner, provide all the details, including the date, time, and length of the broadcast, and arrange for testing prior to air.

On the day of the show, an engineer needed to be available at least 15 minutes before the scheduled start to connect the call to the audio bridge and test the outbound audio, which often took a back seat to microphone checks from the studio. The captioner was required to call in prior to air to confirm they were available, which didn’t always happen, and to connect to the caption encoder so the station could verify that captions would actually appear. Any issues had to be resolved quickly. If the captioner was late, things became much more difficult and stressful, and on several occasions shows aired without captions. WVIA felt the live captioning process was too complicated, with too many steps and points of failure, and that there had to be a better solution.

Looking to resolve these issues, Joe decided to evaluate VoCaption to see if automation would help smooth things out. Accuracy was, of course, a concern, but if VoCaption came anywhere close to their current accuracy level, the switch would be worth it.

The installation and commissioning hit a few bumps along the way, but they were resolved, and after a few tests WVIA formally selected BroadStream’s VoCaption Automated Live Captioning Solution.

Joe Glynn, Chief Technology Officer at WVIA commented, “All of the problems and multiple extra steps necessary to put closed captions on-air for a live show were reduced to one single mouse click with VoCaption.” Glynn continued, “While VoCaption did offer us some cost savings, by far the biggest benefit was we now had full control of the process and no longer depended on an outside, 3rd party provider. This change reduced a previously frustrating and complex workflow to a single mouse click.”

With VoCaption, WVIA is saving time, reducing frustration, and improving its workflow by fully controlling the entire live captioning process. Follow Joe Glynn’s lead and install VoCaption to reduce your live captioning frustrations and simplify the process to a single mouse click.

For more information please visit our website to learn more about VoCaption or our Contact Us page to arrange a call or demo.

 

A Brief History of ASR Technology

Did you know that the first ASR Technology was invented in 1952?

ASR stands for Automated Speech Recognition. This technology uses machines (computers) instead of humans to convert speech to text for captions, subtitles, transcripts and other documentation.
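
As a rough illustration of what ASR does, here is a minimal Python sketch using the open-source SpeechRecognition package (the audio file name is a placeholder, and this particular recognizer sends audio to a free web API rather than using broadcast-grade ASR):

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

# Load an audio file (WAV/AIFF/FLAC) and capture its contents
with sr.AudioFile("clip.wav") as source:
    audio = recognizer.record(source)

# Convert the speech to text, here via Google's free web recognizer
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible")
```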

One of the earliest projects that can be considered ASR technology was developed in 1952 by researchers at Bell Laboratories. They called it “Audrey,” and it could recognize only spoken numerical digits. A few years later, in the 1960s, IBM engineered a new technology called Shoebox which, unlike Audrey, could recognize arithmetic commands as well as digits.

In the 1970s, a new statistical approach to ASR emerged: the Hidden Markov Model (HMM). In brief, this model uses probability functions to determine the most likely sequence of words behind a stretch of audio. Although the original technology was neither efficient nor especially accurate, about 80% of the ASR technology in use today derives from this model.
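
To give a feel for the idea, here is a toy version of the Viterbi algorithm, the dynamic-programming procedure used with HMMs to pick the most probable sequence of hidden states (words, in the case of ASR) behind a sequence of observations. The two-word model and all probabilities below are invented purely for illustration:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for the observations."""
    # best[t][s] = probability of the best path that ends in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]

    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            # Pick the predecessor state that maximizes the path probability
            prev = max(states, key=lambda p: best[t - 1][p] * trans_p[p][s])
            best[t][s] = best[t - 1][prev] * trans_p[prev][s] * emit_p[s][observations[t]]
            back[t][s] = prev

    # Trace the best path backwards from the most probable final state
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

# Invented toy model: two "words" that emit acoustic features "a" and "b"
states = ["hello", "world"]
start_p = {"hello": 0.6, "world": 0.4}
trans_p = {"hello": {"hello": 0.3, "world": 0.7},
           "world": {"hello": 0.4, "world": 0.6}}
emit_p = {"hello": {"a": 0.8, "b": 0.2},
          "world": {"a": 0.1, "b": 0.9}}

print(viterbi(["a", "b", "b"], states, start_p, trans_p, emit_p))
# -> ['hello', 'world', 'world']
```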

So how did these technologies evolve into the ASR software that we know today?

In the 1970s, various groups began to take speech recognition technology more seriously. The U.S. Department of Defense’s ARPA, for example, launched the Speech Understanding Research program, which funded various research projects and led to the creation of new ASR systems. In the 1980s, engineers began taking the Hidden Markov Model seriously, which led to a huge leap forward in the commercial production of more accurate ASR technologies. Instead of trying to get computers to copy the way humans digest language, researchers began using statistical models to let computers interpret speech.

This led to highly expensive ASR technologies being sold during the 1990s, which thankfully became more accessible and affordable during the technology boom of the 2000s.

Today, ASR technologies continue to grow and develop, constantly improving in accuracy, speed, and affordability. The need for humans to check their output is decreasing, and ASR is spreading across industries. No longer is ASR considered useful only for broadcast TV; its value is being explored by universities, school systems, businesses, houses of worship, and many others.

What first began as a technology to recognize numerical digits has now developed into a highly advanced system of recognizing hundreds of languages and accents in real-time. BroadStream continues to innovate and improve upon ASR products to create systems that are accurate, easy to install and run, and affordable across various industries.

Our VoCaption and SubCaptioner solutions provide real-time live captioning and on-premise, file-based captioning that save time and money compared to human captioners while increasing video accessibility and engagement. To learn more about these solutions, please visit our Captioning & Subtitling page!


WVIA Praises Workflow Improvements with VoCaption Automated Live Captioning

Duluth, GA, USA – November 08, 2021: PBS station WVIA in Pittston, PA went live recently with BroadStream’s VoCaption Automated Live Captioning Solution.

WVIA operates 3 channels using BroadStream’s OASYS Integrated Playout solution and serves Northeastern Pennsylvania and the Central Susquehanna Valley area. Broadcasting since 1966, WVIA has a long history of service to the community and surrounding area with local programming, educational programming, outreach and services.

In the broadcast world, workflows matter. If there are too many extra steps, friction or points of failure, both stress and anxiety levels ratchet up beyond the normal levels broadcasters face daily. This was especially true for Joe Glynn and his staff at WVIA when live captioning was needed. The amount of live captioning the station needed each week was limited but the work associated with getting live captions on-air was not.

The weekly live captioning workflow required someone to call the captioning company to schedule a captioner, provide all the details, including the date, time, and length of the broadcast, and arrange for testing prior to air.

An engineer needed to be available at least 15 minutes before the show’s scheduled start to connect the call to the audio bridge and test the outbound audio, which often took a back seat to microphone checks from the studio. The captioner was required to call in prior to air to confirm they were available, which didn’t always happen, and to connect to the caption encoder so the station could verify that captions would actually appear. Any issues had to be resolved quickly. If the captioner was late, things became much more difficult and stressful, and on several occasions shows aired without captions. WVIA viewed the live captioning process as too complicated, with too many steps and points of failure, and felt there must be a better solution.

To smooth out the workflows and problems associated with live captioning, WVIA selected BroadStream’s VoCaption Automated Live Captioning Solution. Joe Glynn, Chief Technology Officer at WVIA, commented, “All of the problems and multiple extra steps necessary to put closed captions on-air for a live show were reduced to one single mouse click with VoCaption.” Glynn continued, “While VoCaption did offer us some cost savings, by far the biggest benefit was we now had full control of the process and no longer depended on an outside, 3rd party provider. This change reduced a previously frustrating and complex workflow to a single mouse click.”

Follow Joe Glynn’s lead: use VoCaption to reduce your live captioning workflow from a process involving multiple steps and people to a single mouse click. WVIA is saving time and has reduced frustration and anxiety while making things much easier for everyone, because the station fully controls its entire live captioning process.

For more information, please visit https://broadstream.com/vocaption-live/ to learn more about VoCaption, or use our Contact Us page at https://broadstream.com/contact-us to arrange a call or demo.

###

 

About WVIA

WVIA Public Media is a catalyst, convener and educator, using media, partnerships, powerful ideas and programs to improve lives and advance the best attributes of an enlightened society. Learn more at wvia.org.
 

About BroadStream Solutions

BroadStream Solutions (www.broadstream.com) acquired Screen Subtitling Systems in 2018 and specializes in the playout of linear television channels, live automated captioning, automated file-based captioning, and subtitling tools for broadcasters, networks, cable, and satellite operations, as well as non-broadcast applications, around the globe. Our broadcast expertise is focused on developing software-based solutions that consistently deliver flexibility, dependability, improved workflows, operational efficiency, and reliability for our clients and customers.

 


What’s the Impact of AI on Subtitling?

Captioning and subtitling are similar but distinct; rather than dive into the differences, we’ll use the word subtitling generically in this article to refer to both subtitles and captions.

The beginnings of a wave of change with subtitling are happening in television. You may not have felt the tremors yet, but you will.

Subtitles on television began in the early 1970s. Live subtitling, however, did not begin until 1982, when the National Captioning Institute developed it using court reporters trained to write 225 words per minute on a stenograph machine. This put subtitles on screen within two to three seconds of the word being spoken, and it has worked that way ever since. But…

Stenography, as a profession, has been in decline since 2013. As senior professionals retire, they are not being replaced by younger candidates, who are passing on this field as a career in favor of other professions. One reason is the demanding nature of the job: candidates must be able to type at least 225 to 250 words per minute with limited mistakes, which requires discipline and focus for many hours at a time. Schools that taught stenography typically graduated only around 4% of students due to the high standards, and a gradual decline in enrollment ultimately forced many schools to close. In the US alone, there are over 5,000 open stenography positions and few candidates to fill them.

At the same time the stenography profession began trending downward, demand for live subtitling was rising, driven by new government mandates in multiple countries, the exponential growth of live and breaking news, 24-hour cable news cycles, and more live sports broadcasts. Additional competition for live captioners is coming from corporate events, government briefings, meetings, and increased use in the legal system for depositions and trials, all of which is creating resource shortages and rising prices for human captioning.

So how do we fill the gap between a shrinking pool of human subtitlers and rising market demand? Technology; specifically, Artificial Intelligence (AI). The technology has been around for many years. In fact, voice recognition dates back to the late 1800s and early 1900s. It began to show significant improvement in 1971 and continued to evolve until 2014, when it became commercially available. Early efforts suffered from accuracy problems and limitations.

With the acquisition of Screen Subtitling Systems, BroadStream began working to incorporate AI speech engines to create an automated solution we call VoCaption. VoCaption delivers live subtitling for your live broadcast that is more accurate than previous AI implementations and in many cases equal to or better than humans.

VoCaption can be used in multiple applications including:

  1. Emergency Subtitling – for those occasions when subtitles are expected in a program but, for some technical reason, are missing. VoCaption can help reduce angry viewer calls because it can be activated in a matter of seconds.
  2. Supplemental Subtitling – your news may provide subtitles using a script and teleprompter, but for weather, sports, traffic, and unscripted field reports, VoCaption can be turned on and off as needed rather than keeping a human captioner available and “on the clock” the entire time.

The two biggest benefits you can expect are:

  1. Improvements in accuracy – A frequent comment we hear is “we tried AI several years ago and it wasn’t very accurate.” That was true then, but the technology has made excellent progress, and accuracy is up significantly depending on the program genre and the audio quality. In addition, it’s easy to regionalize or localize the technology by using custom dictionaries to import regional or local names, geography, schools, sports teams, and more. Utilizing these dictionaries will substantially increase accuracy and improve pronunciations, and we can help you with that process during commissioning (a conceptual sketch follows this list).
  2. Substantial savings – Human subtitlers are expensive, and as the pool of qualified subtitlers continues to shrink, you will likely see rates increase. Current estimates range from a low of $60 per hour to a high of $900 per hour depending on your needs. VoCaption is available when you need it and sleeps when you don’t; you only pay for the time you use, so you will see significant cost savings versus human captioners.
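
The exact dictionary format is product-specific and something we configure with you during commissioning, but conceptually a custom dictionary maps likely mis-recognitions to the correct regional spellings. Here is a purely hypothetical Python sketch of the idea (the entries and function are invented for illustration):

```python
# Hypothetical custom dictionary: likely ASR mis-hearings -> correct local names
CUSTOM_DICTIONARY = {
    "Susque Hanna": "Susquehanna",
    "pits ton": "Pittston",
}

def apply_dictionary(caption_text: str) -> str:
    """Replace known mis-recognitions with the correct regional spellings."""
    for wrong, right in CUSTOM_DICTIONARY.items():
        caption_text = caption_text.replace(wrong, right)
    return caption_text

print(apply_dictionary("Flooding along the Susque Hanna River near pits ton"))
# -> "Flooding along the Susquehanna River near Pittston"
```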

VoCaption is available in both hardware and software versions for 3rd-party caption inserters, or as part of OASYS Integrated Playout and Polistream, our leading subtitle inserter. For more information, you can contact us here and a representative will be happy to answer any questions, arrange a demonstration, or provide a quotation.