Building a Whisper YouTube transcription generator for automated captioning

Published on
Mar 2024

With over 500 hours of video uploaded to YouTube every minute, providing accurate captions and transcripts is essential for creators to make their content engaging and accessible. However, manually transcribing long videos is tedious and time-consuming.

YouTube does automatically generate captions for uploaded videos, but it can take hours for captions to appear on a new video, and the quality often disappoints. For creators who need high-quality captions immediately, an API-based solution is a better alternative.

In this step-by-step guide, we’ll show you how to easily build your own Whisper YouTube transcription generator using Gladia's optimized Whisper API. With just a few lines of code, you can leverage the power of cutting-edge ASR and large language models to automatically generate captions and transcripts for your YouTube videos.

This guide will walk you through how to tap into Gladia’s Whisper-based AI transcription API to easily generate captions for any video. Let's get started!

Overview

Tools like yt_dlp let you download video and audio content from YouTube and other sites. Combined with a transcription API, this makes it possible to generate subtitles for any video automatically. First, we'll use yt_dlp to download the video file. Next, we'll send it to the Gladia API to generate the transcription. We'll then format the resulting text into a subtitle file such as SRT. Finally, we'll use ffmpeg to insert the subtitles back into the original video.
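The workflow can be sketched end to end as a small pipeline. The function names and return values below are illustrative placeholders only; each stage is implemented for real in the sections that follow:

```python
# Illustrative stubs for the four pipeline stages; the real
# implementations are developed step by step in this guide.
def download_video(url):
    return 'video.mp4'                    # 1. yt_dlp download

def transcribe(video_path):
    return [{'transcription': '...'}]     # 2. Gladia API call

def to_srt(segments):
    return 'subtitles.srt'                # 3. format segments as SRT

def add_subtitles(video_path, srt_path):
    return 'video_subtitled.mp4'          # 4. ffmpeg overlay

def generate_subtitled_video(url):
    video_path = download_video(url)
    segments = transcribe(video_path)
    srt_path = to_srt(segments)
    return add_subtitles(video_path, srt_path)
```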

Prerequisites

  • Python 3.7+
  • Python packages:
    - requests
    - yt_dlp

Downloading a YouTube video

Step 1: Install dependencies

To get started, we first need to download the YouTube video we want to transcribe. We'll use the popular `yt_dlp` library to handle this.

First, install `yt_dlp` if you don't already have it:


pip install yt_dlp

With the dependencies installed, we can now use the following Python code to download a video.

Step 2: Download a video


import yt_dlp

def download_video(url):
  # Save as <video id>.<extension>, preferring MP4 video with M4A audio
  ydl_opts = {
    'outtmpl': '%(id)s.%(ext)s',
    'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4'
  }

  with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download([url])

download_video('https://www.youtube.com/watch?v=VIDEO_ID')

This will save the video to your current directory. We can now use this file as input for the Whisper transcription.

When we call the `download_video()` function with the YouTube URL of Rick Astley's 'Never Gonna Give You Up':


download_video('https://www.youtube.com/watch?v=dQw4w9WgXcQ')

When the download completes, yt_dlp will save the video with the name `dQw4w9WgXcQ.mp4`.

The `.mp4` extension is automatically added because we set the format to download the best available MP4 file.

This results in the rickroll video being saved as `dQw4w9WgXcQ.mp4` in our working directory, which we can then pass to Gladia's API to transcribe.

Transcribing videos with the Gladia API

Step 1: Retrieve your API key

Gladia provides an AI-powered API for transcribing and analyzing audio and video files. To get started using the Gladia API, you first need to create an account at app.gladia.io. You can register with an email and password or using your Google account. After signing up, you will be provided with an API key that is required to authenticate when making API requests.
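The code later in this guide reads the key from the `GLADIA_API_KEY` environment variable (the variable name is this guide's convention, not a Gladia requirement), so export it in your shell first, replacing the placeholder with your actual key:

```shell
export GLADIA_API_KEY="your-api-key-here"
```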

Step 2: Import Python modules

We'll use the requests library to make HTTP requests and the os module to read our API key from the environment. If you don't have requests installed, run:


pip install requests

Then import both modules:


import requests
import os

Step 3: Code Integration

The easiest way to use the Gladia API is to provide a URL to a video:


apiKey = os.getenv("GLADIA_API_KEY")

headers = {
  'x-gladia-key': apiKey
}

We can specify the YouTube URL directly in the `files` dictionary via the `audio_url` field:


files = {
  'audio_url': (None, 'https://www.youtube.com/watch?v=dQw4w9WgXcQ')
}

API_URL = "https://api.gladia.io/audio/text/audio-transcription/"
response = requests.post(API_URL, headers=headers, files=files)

This can be useful if you don't want to manage downloading and storing the audio files yourself. The tradeoff is the transcription may take slightly longer as the audio has to be downloaded first.

Next, we will explore uploading a file downloaded directly from YouTube.


apiKey = os.getenv("GLADIA_API_KEY")

headers = {
  'x-gladia-key': apiKey, 
}

We specify the file path to the video file we want to transcribe:


file_path = "dQw4w9WgXcQ.mp4"

Then we open the file in binary read mode:


with open(file_path, 'rb') as f:

We create a dictionary called `files` to hold the data we want to send in the request:


  files = {
    'audio_file': (file_path, f, 'audio/mp4'),
    'toggle_diarization': (None, 'true')
  }

This includes the audio file itself, and a parameter to enable speaker diarization in the transcription output.

We make the API request, passing the headers and files. Note that the request must happen inside the `with` block, while the file handle is still open:


  API_URL = "https://api.gladia.io/audio/text/audio-transcription/"
  response = requests.post(API_URL, headers=headers, files=files)

data = response.json()

This allows us to send the video file and process the transcription results from Gladia's API.
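Before converting anything to subtitles, it helps to inspect the segments. The `data` dictionary below is a hypothetical mock that mirrors only the fields this guide relies on (`prediction`, `time_begin`, `time_end`, `transcription`); it is a sketch for exploring the structure, not Gladia's full response schema:

```python
# Hypothetical response mirroring the fields used in this guide
data = {
    'prediction': [
        {'time_begin': 0.0, 'time_end': 4.2,
         'transcription': "We're no strangers to love"},
        {'time_begin': 4.2, 'time_end': 8.0,
         'transcription': "You know the rules and so do I"},
    ]
}

# Print each transcribed segment with its timing
for segment in data['prediction']:
    print(f"[{segment['time_begin']:.1f}s -> {segment['time_end']:.1f}s] "
          f"{segment['transcription']}")
```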

Generating subtitles

Step 1: Understanding subtitle formats

A .srt file is a subtitle file that contains the timing and text for subtitles in a video. It consists of sequential blocks like this:


1
00:00:01,000 --> 00:00:05,000
This is the subtitle text for the first segment

2 
00:00:05,001 --> 00:00:10,000
Subtitle text for the second segment

We can use the following Python code to take the JSON response from Gladia's API and convert it into .srt format.

Step 2: Converting the API response to .srt format


def seconds_to_timestamp(seconds):
  # Format a time in seconds as the SRT timestamp HH:MM:SS,mmm
  ms = int(round(seconds * 1000))
  hours, ms = divmod(ms, 3_600_000)
  minutes, ms = divmod(ms, 60_000)
  secs, ms = divmod(ms, 1_000)
  return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

segments = data['prediction']

lines = []
for i, segment in enumerate(segments):

  start = seconds_to_timestamp(segment['time_begin'])
  end = seconds_to_timestamp(segment['time_end'])

  text = segment['transcription']

  lines.append(str(i + 1))
  lines.append(f"{start} --> {end}")
  lines.append(text)
  lines.append('')

srt = '\n'.join(lines)

with open('subtitles.srt', 'w') as f:
  f.write(srt)

This generates a standard .srt file that can be added to video editing software like Premiere Pro or uploaded to YouTube to add captions.

Step 3: Inserting Subtitles

This ffmpeg command burns the subtitles from the SRT file we just generated into the video using the subtitles filter. Note that the output filename must differ from the input, since ffmpeg cannot write over the file it is reading:


ffmpeg -i dQw4w9WgXcQ.mp4 -vf subtitles=subtitles.srt dQw4w9WgXcQ_subtitled.mp4

The subtitles filter overlays the subtitles from the SRT file on top of the input video.

This provides a convenient one-step process to overlay subtitles without encoding them separately. The subtitles filter handles overlaying the SRT on the video as needed.
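To tie this final step back into the Python workflow, the same command can be launched with `subprocess`. This is a sketch: the output filename is our choice, and the actual ffmpeg invocation is left commented out so the command can be inspected first:

```python
import subprocess

def burn_subtitles(video_path, srt_path, output_path):
    # Build the ffmpeg command: the subtitles filter burns srt_path
    # into the frames of video_path and writes the result to output_path
    cmd = ['ffmpeg', '-y', '-i', video_path,
           '-vf', f'subtitles={srt_path}', output_path]
    return cmd

cmd = burn_subtitles('dQw4w9WgXcQ.mp4', 'subtitles.srt',
                     'dQw4w9WgXcQ_subtitled.mp4')
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
print(' '.join(cmd))
```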

YouTube subtitles final preview

Conclusion

Transcribing and subtitling video content opens up a world of possibilities. The Gladia API powered by Whisper ASR makes it simple to transcribe an audio file to text. This transcription can then be formatted as subtitles and added to the video.

The end result is a subtitled video with minimal effort. While the technical details may seem complex at first, the overall workflow is straightforward. Automated transcription paves the way for increased accessibility and discoverability online, and with the right tools and knowledge, anyone can now easily add subtitles for their videos using Gladia.

About Gladia

At Gladia, we built an optimized version of Whisper in the form of an enterprise-grade API, adapted to real-life professional use cases and distinguished by exceptional accuracy, speed, extended multilingual capabilities and state-of-the-art features.

Contact us

