Do company training videos have to be captioned?

Many companies use video to train their employees. Topics range from new-employee onboarding to workplace conduct and safety procedures. Video can also be used for continuing training to ensure that employees are up to date on the latest policies and procedures, as well as the product and service information needed for customer interactions.

Using video for training is a great way to educate employees when a live trainer is not available or not needed to cover basic information. To make sure your video content is understood and retained, it’s important that each video includes accurate captions.

Do you legally have to caption training videos?

Yes. According to the ADA (Americans with Disabilities Act), private companies must caption videos, including training videos, video tutorials, and videos used for internal communications, in order to be compliant.

Specifically, Title I and Title II of the ADA prohibit employers from discriminating against employees on the basis of disability (Title I covers private employers, Title II state and local government). This includes job training and providing any aids necessary to ensure equal access to information for all individuals.

Providing captions for all internal videos is one way to ensure your company is compliant.

Failure to comply with ADA regulations can lead to unnecessary lawsuits. Take FedEx as an example: the EEOC sued FedEx in 2014 for failing to provide “closed-captioned training videos during the mandatory initial tour of the facilities and new-hire orientation for deaf and hard-of-hearing applicants” (https://www.eeoc.gov/newsroom/eeoc-sues-fedex-ground-package-system-inc-nationwide-disability-discrimination).

So how can your business ensure it is following ADA regulations?

To start, make sure any company videos include accurate closed captioning. If you have any videos that do not include captions, we can help!

Fill out the form to get a quote for captioning your video content and send us your files. Using advanced ASR technology, our team will create accurate closed captioning files for your videos with fast turnaround times.

Using our captioning service, we create highly accurate captions for your training and informational videos for $0.25 per minute of content.

To make your company videos even more inclusive, consider adding translated subtitles. Adding Spanish captioning is a great way to ensure that all employees, especially those who speak English as a second language, are able to fully retain company information and training guidelines.

Translated subtitles for your company videos are created using a combination of our ASR technology and professional human translators. This way, we can provide Spanish video subtitles that are both accurate and affordable, starting at an unbeatable price of $1.67 per minute. To learn more about adding Spanish subtitles to your training videos, visit our subtitling page.

What is ‘Respeaking’?

Respeaking is a common method used to create captions and subtitles in many countries, but what does it actually mean?

Respeaking is done by a professional “respeaker” who creates captions or subtitles for live and pre-recorded programming. The respeaker listens to the program’s audio and repeats what is said into a special microphone (also known as a speech silencer), taking care to add punctuation and labels that identify speakers and sounds. Speech recognition software then converts the respoken audio to text, which is used to create a caption or subtitle file for the program. The speech silencer helps improve the accuracy of the captions by shutting out background noise and other confusing sounds.

This method requires highly trained professionals who speak clearly, quickly, and accurately. Respeakers, or speech-to-text reporters, must listen to the audio, respeak it, and then check the output to make any necessary corrections. All of this must be done quickly, especially for live programming, where the captions must appear in time with the live audio.

Because of the vocal strain involved, respeakers are advised to work in 15-minute stints. For live programming, broadcasters must have a team of respeakers ready to rotate throughout the program so that subtitle accuracy doesn’t decline as a respeaker’s voice becomes strained.

ASR Technology & Respeaking

Advanced Automated Speech Recognition (ASR) technology is quickly becoming an innovative partner to respeaking. Combined with the respeaking method, ASR helps improve productivity and the speed at which captions can be created.

Many broadcasters today use a combined ASR-respeaker method of creating captions to make sure they are fully utilizing their respeakers’ time. Respeakers can work faster and caption more content when ASR serves as a supplemental tool. This change in workflow improves overall productivity: content producers can spread their respeakers across more projects while ASR speeds up certain tasks or takes over when a respeaker needs a break.

Our team works together with broadcasters to help them combine our advanced ASR technology, such as WinCaps or VoCaption Live, with their current method of respeaking to achieve higher productivity levels.

If you’re interested in our captioning and subtitling software and solutions and how they can benefit your operation, learn more here or contact our team directly.

FCC vs. ADA Caption Requirements

Are your videos in compliance with FCC and ADA requirements?

Both the FCC (Federal Communications Commission) and the ADA (Americans with Disabilities Act) aim to protect and assist individuals with disabilities, including the right of individuals who are hard of hearing to have full access to video programming. To ensure that access, the FCC and the ADA set standards and requirements for closed captioning on live and pre-recorded programming.

Does your programming meet their standards and requirements?

Let’s find out:

FCC Requirements for Closed Captioning on Television –

FCC rules apply to all television programming with captions. The Commission states that captions must be accurate, synchronous, complete, and properly placed.

  • The program’s captions must match the spoken words and convey background noises accurately.
  • Captions must be synchronized with the programming’s audio: the text must coincide with the spoken words and sounds, appearing at the same time and speed.
  • Captions must run from the beginning of the programming to the end.
  • Captions should not block important visuals on the screen, overlap one another in ways that make them difficult to read, or run off the screen.
  • It’s important to note that these rules also apply to internet video programming if the “video programming was broadcast on television in the U.S. with captions.”
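
As a purely illustrative sketch (not an FCC compliance tool), the timing-related rules above, synchronization from start to finish without overlaps, can be spot-checked over a list of cues. The cue representation and the 5-second thresholds here are our own assumptions for the example:

```python
# Toy timing checks inspired by the FCC's "synchronous" and "complete"
# requirements above. Illustrative only -- not an FCC compliance tool.

def check_timing(cues, program_end):
    """cues: list of (start_sec, end_sec) pairs, sorted by start time."""
    warnings = []

    # "Complete": captions should run from the beginning to the end.
    if cues and cues[0][0] > 5.0:
        warnings.append("captions start more than 5 s into the program")
    if cues and cues[-1][1] < program_end - 5.0:
        warnings.append("captions stop more than 5 s before the program ends")

    # "Synchronous": consecutive cues should not overlap one another in time.
    for (s1, e1), (s2, e2) in zip(cues, cues[1:]):
        if s2 < e1:
            warnings.append(f"cue starting at {s2}s overlaps the previous cue")

    return warnings

# Flags the early cutoff and the overlapping second cue.
print(check_timing([(0.0, 3.0), (2.5, 6.0)], program_end=60.0))
```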

ADA Compliance Laws for Closed Captioning –

The ADA closed captioning guidelines are targeted towards government institutions, public schools and universities, as well as businesses and non-profit organizations that serve the public. The closed captioning requirements for both television and online internet video content are designed to ensure that captions are being created correctly.

  • Each caption should hold one to three lines of text onscreen at a time, never more than three.
  • Captions should be at least 99% accurate.
  • The captioning font should be similar to Helvetica.
  • Background noises, or non-speech sounds, should be added in square brackets.
  • Punctuation and both lower- and upper-case letters should be used.
  • Captions should reflect slang words used in the audio.
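
As another illustrative sketch (again, not an official compliance tool), a few of the mechanical conventions above can be spot-checked in code; the cue format and heuristics here are our own assumptions for the example:

```python
# Toy checker for a few of the ADA-style caption conventions listed above.
# Illustrative sketch only -- not an official ADA compliance test.

def check_cue(text: str) -> list[str]:
    """Return a list of warnings for a single caption cue."""
    warnings = []
    lines = text.split("\n")

    # Guideline: a cue should hold 1-3 lines of text onscreen at a time.
    if not 1 <= len(lines) <= 3:
        warnings.append(f"cue has {len(lines)} lines; expected 1-3")

    # Guideline: non-speech sounds belong in square brackets, e.g. [door slams].
    if "(" in text and "[" not in text:
        warnings.append("non-speech sound may be in parentheses; use [brackets]")

    # Guideline: use normal punctuation and mixed case, not all caps.
    if text.isupper():
        warnings.append("cue is all caps; use mixed case")

    return warnings

print(check_cue("HELLO THERE"))           # flags the all-caps cue
print(check_cue("[door slams]\nHello."))  # passes the checks above
```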

You can find more information about the ADA’s captioning regulations from their website linked here.

Are there any exclusions?

It’s important to understand that these FCC captioning rules only apply to videos that have aired on television. If content has never aired on television, the rules do not apply. However, even videos that aren’t legally required to have captions should still include accurate ones. Why? Because captioned videos reach a much wider audience. Accurate captions let individuals who are deaf or experiencing hearing loss enjoy the content, and they let viewers who can only watch with the sound off follow along. Captions also improve retention of videos filled with important information and deliver a better viewing experience.

Are you following FCC and ADA guidelines? If not, a complaint could be filed against you and legal actions may be taken.

Be safe: avoid potential and unnecessary legal problems by giving your videos accurate, correctly placed captions.

To learn more about captioning software that complies with FCC and ADA guidelines, visit our Subtitling & Captioning page.

Why Do Our Customers Recommend OASYS Integrated Playout?

There are several reasons our customers love us and here’s what they say:

1. OASYS has given their station engineers peace of mind with a software solution that’s reliable and efficient.

2. OASYS combines multiple processes into one, meaning broadcasters save themselves the headache of working with various vendors and support contracts. Everything works together seamlessly in an efficient process that removes unnecessary third parties.

3. Our support team has a reputation for being available around-the-clock to help clients make changes and fix problems immediately. A great support team behind the software helps our customers feel confident that someone has their back.

4. The software is flexible. As your station grows and your needs change, OASYS can be easily reconfigured or expanded to meet your new requirements. Need to add a channel? No problem. Want to step up your graphics game? OASYS’ Graphics package can assist with dynamically updated graphics for news, sports, weather or other situations. Need to begin providing live closed captioning that fits your workflow and costs less than human captioners with similar accuracy? VoCaption has you covered.

Don’t just take our word for it. Read our customers’ testimonials to see why they recommend OASYS Integrated Playout.

Interested in learning more about the OASYS Integrated Playout system? Contact us!

 

Guide to adding captions to online courses and eLearning Videos

Now, more than ever, online courses are a crucial component of education. Since the COVID-19 pandemic, learning institutions have been challenged to make their courses and learning materials accessible online in case students need to stay home instead of coming to the classroom. From university professors to elementary school teachers, everyone is uploading courses and teaching videos to online platforms so that students don’t miss out on material when they have to stay home.

All of this material needs to be captioned.

Why?

First, students who are deaf or hard of hearing need to be able to access these videos as well. According to the Americans with Disabilities Act (ADA), most higher education institutions, both public and private, must offer closed captioning. Failure to comply with these regulations could lead to serious penalties for the institution.

Second, professors must understand that students in the modern world are not always able to study in perfectly quiet environments. Many students are living at home with their families and are constantly surrounded by noise and distractions. Captions help these students to understand what is being said even if they can’t hear every word perfectly due to noisy surroundings.

Another reason for adding captions to eLearning courses is to improve students’ comprehension, accuracy, engagement, and retention. A study published on educause.edu sought to better understand students’ use of captions in online learning courses. Among the students who said they use closed captioning, “59.1 percent reported that the closed captions in course videos are very or extremely helpful, and an additional 29 percent said they were moderately helpful.” The same survey found that over 50% of students without any hearing disabilities use closed captions at least some of the time.

Enhance your students’ learning experience and improve their success rates through the use of closed captioning.

Here’s how to add closed captioning to your online and eLearning courses:

Step 1: Have both your video file and its audio file ready to upload. You can use tools such as vlchelp to create an audio-only file from any video.
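
As one hedged example of creating that audio-only file, the free ffmpeg command-line tool can strip the video stream. The sketch below assumes ffmpeg is installed and uses placeholder file names:

```python
# Extract an audio-only WAV from a video using ffmpeg (must be installed).
# File names are placeholders; adjust for your own course video.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "lecture.mp4",   # input video
        "-vn",                 # drop the video stream
        "-ac", "1",            # mono audio
        "-ar", "16000",        # 16 kHz sample rate, common for speech tools
        "lecture.wav",         # audio-only output
    ],
    check=True,
)
```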

Step 2: Purchase and download WinCaps Q4.

WinCaps Q4 is BroadStream’s software solution for closed caption and subtitle creation. It takes any video or audio file and creates an accurate closed caption file that provides both the text and the timing for each caption. Captions can be easily edited to change wording, punctuation, or spelling, and a separate caption file can be created to translate the captions into a foreign language.

If you are an educational institution, ask about our special WinCaps Q4 Educational license, which lasts the duration of your course.

Step 3: Export the caption file and upload it to your learning platform alongside the course video. Students can then choose whether or not to display the captions, giving each of them the best personal learning experience.
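
For context, most learning platforms accept a sidecar caption file in a standard format such as SRT. The sketch below is purely illustrative (hand-written cues with placeholder timings, not WinCaps Q4 output), showing the cue number, timing line, and text that make up each entry in an exported file:

```python
# Write a minimal SRT caption file by hand to show what an exported
# sidecar file contains: cue number, "start --> end" timing, then text.
cues = [
    ("00:00:01,000", "00:00:04,000", "Welcome to the course."),
    ("00:00:04,500", "00:00:07,000", "Today we cover chapter one."),
]

with open("lecture.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(cues, start=1):
        f.write(f"{i}\n{start} --> {end}\n{text}\n\n")
```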

If you have any questions, reach out to our team online through our contact page. We’ll be happy to answer any questions about how to best add closed captioning to your online courses.

For more information about WinCaps Q4 and its various features, visit our WinCaps page.

Have you evaluated your Master Control operations recently?

What’s going on…

  • more problems than you need;
  • a system you can’t trust to be reliable, or one that’s aging out;
  • a need for more efficient workflows, or to make things work with fewer people?

You, more than anyone else, know how important it is to trust your Master Control software and systems. If they fail, you miss programs and commercials or go off the air completely. And if you experience any on-air issues, you hear about it from viewers and advertisers.

So, are you feeling at peace with your Master Control system, or stressed out? Let’s evaluate:

  • During the week, do you and your team spend too many hours dealing with third-party vendor support or troubleshooting problems?
  • Can you leave Master Control unmanned overnight, or for days at a time, and be confident your phone won’t ring with new problems?
  • Do you feel confident that your live captions will work for your next live broadcast?
  • During the holiday season, does your stress increase due to lower staffing levels or slower response times from third-party vendor support?
  • Do you have a redundant back-up solution that can take over automatically when the system detects a problem?

Many chief engineers will probably say no. Their systems are overly complex, require more manpower than they can budget, and are woefully out of date. They worry about the impact on viewers and staff should the system crater, which it will, when you least expect it.

You are not alone. We commonly hear that Master Control is out of date and causes too many problems, that program prep workflows are slow, and that too much time is spent with multiple support vendors who point fingers or don’t seem to care whether your issue is solved today or next week.

Did we just describe your situation? Then maybe it’s time for a change.

If your Master Control system doesn’t bring you and your team joy and peace of mind, then you need to learn more about OASYS Integrated Playout. OASYS delivers confidence and peace of mind by using standard IT hardware and fully integrated software that:

  • Reduces your overall dependence on specific boxes because software replaces the need for purpose-built hardware.
  • Reduces the number of support contracts you need to maintain.
  • Reduces your overall footprint and saves money on utilities.
  • Future-proofs your path forward: software is easier to upgrade than hardware and saves money in the long run.

So, if your current system isn’t flexible, doesn’t automate your workflows, and doesn’t eliminate your need for purpose-built devices and multiple vendors, then you need OASYS.

Learn more about OASYS Integrated Playout here. 

Captions vs. Subtitles

What’s the difference between captions and subtitles?

Many people confuse captions and subtitles. Both appear as text on a television, computer screen, or mobile phone while a video plays, and both help viewers better understand the speech. So what is the actual difference between the two?

The basic difference is captions are in the same language as the spoken word on the screen and subtitles are in a different language.

Captions take the speech and display it as text in the same language. Subtitles, on the other hand, are a translation of the speech into different languages. This means that with captions, what you read is what you also hear. With subtitles, what you read is a different language than what you hear.

Captions were originally developed to make television programs more accessible to the deaf community. Individuals with hearing impairments may not be able to fully understand the audio but can follow along with the closed captions to understand what is being spoken.

Closed captions prevent discrimination against people with disabilities and are required by law in many countries, including the United States and much of Europe. Not only do captions benefit the deaf community, they also make multimedia videos more engaging and accessible. With captions, videos can be played silently in public areas or noisy rooms. Captions also help viewers better retain information from university lectures, training videos, conference meetings, live events, and much more.

Subtitles, on the other hand, were originally developed to make television viewing accessible to viewers who don’t speak the same language as the program’s audio. Videos and TV programs can now be shared across the world with the help of subtitles. Although the speech remains in one language, viewers can add subtitles in their own language, if available, to better understand what is being said. Subtitles not only make multimedia more accessible across languages, they also help people who are trying to learn a new language: studies show that watching words and phrases appear on screen in a foreign language supports language learning. Subtitles also benefit deaf and hard-of-hearing viewers who want to access videos in foreign languages.

Both subtitles and captions make multimedia videos and television programs accessible across the world. Video content is quickly spreading across social media platforms, and videos are becoming much more important in education and business environments.

Every video you create should have captions or subtitles to improve engagement, accessibility and retention for all viewers regardless of their hearing situation.

Check out our Captioning & Subtitling Software to learn more about what these technologies can do for you.

How VoCaption Cut WVIA’s Live Program Workflow from Aggravation to a Single Mouse Click

WVIA, a PBS station operating 3 channels in Northeastern Pennsylvania and the Central Susquehanna Valley, has been using BroadStream’s OASYS Integrated Playout solution and recently added VoCaption, our Automated Speech Recognition solution for live, automated captioning.

Joe Glynn and his staff at WVIA had been providing live captions on a limited basis, but getting live captions on-air took too many steps and had multiple points of failure, which pushed stress and anxiety levels beyond what the team needed.

The weekly workflow for launching live captioning required someone to call the captioning company to schedule a captioner, provide all the details (including the date, time, and length of the broadcast), and arrange for testing prior to the show airing.

On the day of the show, an engineer needed to be available at least 15 minutes before the scheduled start to connect the call to the audio bridge and test the outbound audio, which often took a back seat to microphone checks from the studio. The captioner was required to call in prior to air to confirm they were available, which didn’t always happen, and to connect to the caption encoder so the station could verify that captions would actually appear. If there were issues, they had to be resolved quickly. If the captioner was late, everything became more difficult and stressful, and on several occasions shows aired without captions. WVIA felt the live captioning process was too complicated, with too many steps and points of failure, and believed there had to be a better solution.

Looking to resolve the issues associated with live captioning, Joe decided to evaluate VoCaption to see if automation would help smooth things out. Accuracy was, of course, a concern, but if VoCaption came anywhere close to their current accuracy level, it would be worth making the switch.

The installation and commissioning hit a few bumps along the way, but they were resolved, and after a few tests WVIA formally selected BroadStream’s VoCaption automated live caption solution.

Joe Glynn, Chief Technology Officer at WVIA commented, “All of the problems and multiple extra steps necessary to put closed captions on-air for a live show were reduced to one single mouse click with VoCaption.” Glynn continued, “While VoCaption did offer us some cost savings, by far the biggest benefit was we now had full control of the process and no longer depended on an outside, 3rd party provider. This change reduced a previously frustrating and complex workflow to a single mouse click.”

With VoCaption, WVIA is saving time, reducing frustration, and improving their workflow by fully controlling their entire live captioning process. Follow Joe Glynn’s lead and install VoCaption to reduce your live captioning frustrations and simplify the process to a single mouse click.

For more information please visit our website to learn more about VoCaption or our Contact Us page to arrange a call or demo.

 

The Importance of Workflow in Broadcast Media

In the broadcast world, the process of taking a program and broadcasting it to viewers can be quite complicated. Lots of things can go wrong.

Video tape is no longer used; it has been replaced by digital video, which in the beginning made things more complicated for broadcasters. Video file formats, audio, captioning, graphics, and the scheduling of commercial breaks and programs must all go smoothly to create a full day of programming and get it ready to broadcast. Now multiply that process over 24 hours per day and 168 hours per week, and the workload can be staggering.

To maintain quality, TV stations use specific routines and processes, called workflows, managed by highly qualified technical staff, to ensure the correct programs air at their scheduled time.

Each workflow can have multiple steps, and content may go through multiple forms of processing or editing to become broadcast-ready.

To make the overall process as efficient as possible, TV stations constantly look for ways to improve their workflows, which means reducing complexity and cutting the number of required steps to the bare minimum. With so many steps and potential friction points, the work can be stressful and anxiety-producing. If a workflow isn’t simplified, clearly outlined, and properly managed, broadcasters can find it difficult to do their jobs to the best of their abilities.

That’s why workflow matters. A well-constructed workflow can improve efficiency and productivity while raising quality, and at the same time it helps create a more stress-free environment.

Let’s use WVIA as an example.

Joe Glynn and his staff at WVIA provided live captioning to their viewers. The amount of live captioning the station needed each week was limited but the work associated with getting live captions on-air was not. The station was using human-generated live captions through an outside vendor.

Their workflow to produce the live captions was inefficient. It involved working with an outside company that provided captioners for live shows. This meant that every week, someone from the station needed to call the captioning company, schedule the captioner, provide all the details for the show, and arrange for testing prior to the show airing.

Before the show began, an engineer or operator was required to connect the call to the audio bridge and test the outbound audio. The captioner was required to call in and confirm they were present before the show began, which didn’t always happen. If the captioner did call in to confirm their presence, then the station had to connect to the captioning encoder to ensure that captions would actually show up on the screen.

After all that planning, the station often ran into multiple issues that needed to be addressed at the last minute. No pressure, right?

They often weren’t able to perform a necessary audio test with the captioner before the show because microphone checks for the show took priority. And if the caption encoder wouldn’t connect before air, a second engineer was needed to fix the issue while another continued getting the system ready for the live broadcast.

This workflow was full of stress and constant problems, with little efficiency. Worse, the workflow for adding live captions actually hindered quality and success in other areas of the broadcast.

That’s why workflow is so important – a good workflow improves quality and success while also ensuring that no other workflows are hindered in the process.

To find a solution and eliminate the workflow issues surrounding live closed captioning, Joe reached out to our team at BroadStream to see how we could help. We recommended VoCaption to provide an Automated Live Captioning Solution and helped his team with the installation and training. Once in place, their workflow problems were eliminated.

Today, with one simple mouse click, live captions are triggered and on-screen. No planning, no worry. Hours of work were reduced to that one mouse click. Simple, easy and at a cost savings as well.

The team now has more control. They no longer rely on a third-party for their live captioning because a complicated workflow was eliminated by new and proven technology.

If you want to know more or see a demo, reach out to us on our Contact page.

A Brief History of ASR Technology

Did you know that the first ASR Technology was invented in 1952?

ASR stands for Automated Speech Recognition. This technology uses machines (computers) instead of humans to convert speech to text for captions, subtitles, transcripts and other documentation.
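
To make the idea concrete, here is a minimal sketch of machine speech-to-text using the open-source Python SpeechRecognition package. This is a generic public example, not the engine behind any BroadStream product, and the file name is a placeholder:

```python
# Generic speech-to-text illustration with the open-source
# "SpeechRecognition" package (pip install SpeechRecognition).
# Not BroadStream's engine -- just a minimal public example.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:   # placeholder WAV file
    audio = recognizer.record(source)      # read the whole file

text = recognizer.recognize_google(audio)  # send audio to a free web API
print(text)                                # the transcribed speech
```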

One of the earliest projects that can be considered ASR technology was developed in 1952 by researchers at Bell Laboratories. They called it “Audrey,” and it could only recognize spoken numerical digits. A few years later, in the 1960s, IBM engineered a new technology called Shoebox which, unlike Audrey, could recognize arithmetic commands as well as digits.

Later, in the 1970s, a new ASR model was developed: the Hidden Markov Model. In brief, this model uses probability functions to decide which words were most likely spoken and transcribes those. Although the early implementations were neither very efficient nor accurate, about 80% of the ASR technology in use today derives from this model.
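
In the standard textbook formulation (a general summary, not specific to any product), an HMM-based recognizer chooses the word sequence that best explains the audio it observed:

```latex
% Classic HMM / noisy-channel decoding rule for speech recognition:
% pick the word sequence W that best explains the acoustic observations O.
\hat{W} = \arg\max_{W} P(W \mid O)
        = \arg\max_{W} \, P(O \mid W) \, P(W)
% P(O | W): acoustic model -- how likely this audio is, given the words.
% P(W):     language model -- how likely the word sequence is on its own.
```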

So how did these technologies evolve into the ASR software that we know today?

In the 1970s, various groups began to take speech recognition technology more seriously. The U.S. Department of Defense’s ARPA, for example, launched the Speech Understanding Research program, which funded various research projects and led to the creation of new ASR systems. In the 1980s, engineers began taking the Hidden Markov Model seriously, which led to a huge leap forward in the commercial production of more accurate ASR technologies. Instead of trying to get computers to copy the way humans digest language, researchers began using statistical models that let computers interpret speech.

This led to highly expensive ASR technologies being sold during the 1990s, which thankfully became more accessible and affordable during the technology boom of the 2000s.

Nowadays, ASR technologies continue to grow and develop, constantly improving in accuracy, speed, and affordability. The need for humans to check the output of these systems is decreasing, and ASR technology is spreading across all industries. ASR is no longer considered useful only for broadcast TV; its potential is being explored by universities, school systems, businesses, houses of worship, and many more.

What first began as a technology to recognize numerical digits has now developed into a highly advanced system of recognizing hundreds of languages and accents in real-time. BroadStream continues to innovate and improve upon ASR products to create systems that are accurate, easy to install and run, and affordable across various industries.

Our VoCaption and SubCaptioner solutions provide real-time live captioning and on-premise, file-based captioning that save time and money compared to using human captioners while increasing video accessibility and engagement. To learn more about these solutions, please visit our Captioning & Subtitling page!