WCAG Guideline 1.2: Time-based Media
Today, we explore WCAG Guideline 1.2, which covers time-based media.
An in-depth discussion of this guideline is available on the WAI website at https://www.w3.org/WAI/media/av/
Purpose of the guideline
This guideline was developed to make time-dependent content (i.e., audio and/or video) accessible by providing alternative formats.
Time-based media include the following:
- Audio-only media (e.g., podcasts)
- Video-only media (e.g., silent showcases)
- Audio-video media (e.g., movies)
- Interactive audio and/or video
Motive of the guideline
The guideline was created to address the experiences and needs of people with disabilities that impede their ability to process certain kinds of media. Examples of such situations are:
- People who are deaf but can read can make use of captions and transcripts; others might prefer sign language.
- People who have difficulty processing information flowing at a predetermined pace might prefer transcripts, which they can consume at their own pace.
- People who are blind can use audio descriptions to access visual information not conveyed in a video’s original soundtrack.
Methods for providing alternatives
There are four primary ways in which time-based media can be made accessible to people with varying disabilities. These are:
- Captions
- Transcripts
- Audio descriptions
- Sign language
Let’s take a look at these in detail.
Captions (also known as subtitles)
Captions enable access to audio content for people who are deaf or hard-of-hearing. They are text versions of speech and non-speech information needed to understand the content. They are synchronised with the audio, and can usually be turned on or off.
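On the web, captions are typically supplied as a timed text file attached to the video player. Below is a minimal TypeScript sketch, assuming an HTML5 video element and a WebVTT captions file; the file name and label are placeholders for illustration, not part of the guideline itself.

    // Minimal sketch: attaching a WebVTT caption track to an HTML5 video.
    // The file name and label below are placeholders.
    const video = document.querySelector<HTMLVideoElement>("video");

    if (video) {
      const captions = document.createElement("track");
      captions.kind = "captions";             // speech plus important non-speech sounds
      captions.src = "lecture-captions.vtt";  // WebVTT file containing timed caption cues
      captions.srclang = "en";
      captions.label = "English captions";
      captions.default = true;                // shown by default; viewers can still turn them off
      video.appendChild(captions);
    }

Most players then expose a captions toggle, which keeps the "turn on or off" choice in the viewer's hands.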
Transcripts
Transcripts are also text alternatives to audio or video content. Unlike captions, however, they present all of the information upfront, which lets users consume the content at their own pace rather than in real time. A degree of synchronisation can still be achieved by highlighting the portion of the transcript that matches the current playback position.
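To illustrate that synchronised-highlighting idea, here is a minimal TypeScript sketch, assuming the transcript has already been split into timed segments; the segment structure and the "is-current" CSS class are assumptions for illustration.

    // Minimal sketch: highlight the transcript segment that matches the
    // video's current playback position. The segment data and the
    // "is-current" CSS class are assumptions for illustration.
    interface TranscriptSegment {
      start: number;        // segment start time, in seconds
      end: number;          // segment end time, in seconds
      element: HTMLElement; // the element holding this part of the transcript
    }

    function syncTranscript(video: HTMLVideoElement, segments: TranscriptSegment[]): void {
      video.addEventListener("timeupdate", () => {
        const t = video.currentTime;
        for (const segment of segments) {
          const isCurrent = t >= segment.start && t < segment.end;
          segment.element.classList.toggle("is-current", isCurrent);
        }
      });
    }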
Audio Descriptions
Audio Descriptions are narrations of unspoken actions and events in a video, for the benefit of people who cannot see what is happening. They are usually included where there are gaps in the original dialogue.
Here’s an example of what an Audio Description could be:
“Pat opens a small box, looks at a diamond engagement ring, and cries.”
Such information might not be present in the dialogue, but it is important for understanding the scene.
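On the web, one option is to supply descriptions as a timed text track that assistive technology can voice. Here is a minimal TypeScript sketch, assuming a separate WebVTT file of description cues; note that browser support for rendering "descriptions" tracks varies, so many publishers instead offer a separately described version of the video.

    // Minimal sketch: adding an audio description text track to a video.
    // The file name is a placeholder. Browsers vary in how (or whether)
    // they render "descriptions" tracks, so a separately described version
    // of the video is a common alternative.
    const describedVideo = document.querySelector<HTMLVideoElement>("video");

    if (describedVideo) {
      const descriptions = document.createElement("track");
      descriptions.kind = "descriptions";             // timed descriptions of on-screen action
      descriptions.src = "lecture-descriptions.vtt";  // WebVTT cues, e.g. "Pat opens a small box..."
      descriptions.srclang = "en";
      descriptions.label = "English audio descriptions";
      describedVideo.appendChild(descriptions);
    }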
Sign language
Sign language uses hand shapes, facial expressions, and body movements to convey information, primarily to people who are deaf or hard of hearing. Video and audio content can incorporate visuals of a sign language interpreter translating the audible content for these users.
A note on automatic captions
Some tools allow creators to automatically generate captions or transcripts for media. However, automatic captions are often inaccurate and, depending on the kind of content, the errors can be serious. It is important to proofread and correct them before publishing.
Here’s an example from the W3C of an inaccurate automatic caption that could cause a fire:
Spoken audio: “Broil on high for 4 to 5 minutes. You should not preheat the oven.”
Automatic caption: “Broil on high for 45 minutes. You should know to preheat the oven.”
Stay tuned for more!
Thanks for reading! Found this helpful? Be sure to leave a like and share!
This article was written as an accessible alternative to the post shared on my LinkedIn account.