
FFmpeg in React: Parallel Processing With Web Workers


Introduction: 

Recently, I was tasked with solving an interesting problem: combining multiple video files into one seamless video and allowing the user to download the video as a single file—all on the client side.

I tried searching on Google and could not find a good resource that walks through FFmpeg in React with Web Workers, so I took this as an opportunity to help the community and learn a bit more about FFmpeg in general.


The project was built with React, and the backend was sending video files in chunks. While my primary goal was to merge these videos and enable downloading, I saw an opportunity to dive deeper into the library on the client side.

For the sake of learning (and some added fun), I decided to explore additional features like extracting audio from a video file, cutting specific parts of a video, and, of course, using Web Workers to handle all these resource-intensive tasks without freezing the React app’s main thread.

This blog will guide you through the process of combining videos in React using FFmpeg and Workers. We’ll also touch on these extra features for some fun!



Problem Statements


1. Combine multiple split videos so they play seamlessly as a single video. (For example, a 10-minute video and a 15-minute video combined into one 25-minute video.)


2. Be able to cut a video into parts if needed.

3. Extract an MP3 from an MP4 video.


First things first! We will start by setting up the libraries we need:

npm install @ffmpeg/ffmpeg @ffmpeg/util

After installing the FFmpeg libraries, we want the following structure in our code:

  1. ffmpegWorker.js: This file will handle all the video-processing tasks and the functions behind them.
  2. Video.jsx: This file will call the worker and render the necessary UI (buttons, placeholders for the output, etc.).
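
For reference, one possible file layout (an assumption on my part; the only real requirement is that the relative import './ffmpegWorker.js' used below resolves to the worker file):

src/
  Video.jsx        (React component with the UI)
  ffmpegWorker.js  (Web Worker that runs FFmpeg)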

Step 1: Setting up Web Workers

import { FFmpeg } from '@ffmpeg/ffmpeg';

onmessage = async (event) => {
  const ffmpeg = new FFmpeg();
  await ffmpeg.load();
};

This is the worker's onmessage handler, which will receive requests from our React component.
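
Depending on your bundler and hosting setup, the default ffmpeg.load() may fail to fetch the WebAssembly core on its own. Here is a minimal sketch of loading the core explicitly with toBlobURL from @ffmpeg/util (the CDN URL and @ffmpeg/core version are assumptions, adjust them to your project):

import { FFmpeg } from '@ffmpeg/ffmpeg';
import { toBlobURL } from '@ffmpeg/util';

const ffmpeg = new FFmpeg();
// Assumed CDN location of the single-threaded core build; swap in a self-hosted copy if needed.
const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.6/dist/esm';
await ffmpeg.load({
  coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
  wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
});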

Now, in our React component that contains the UI elements, we will add the following:

import { useState } from 'react';

// Create the worker once at module scope so every call reuses the same instance.
const worker = new Worker(new URL('./ffmpegWorker.js', import.meta.url), {
  type: 'module',
});

function OurCustomComponent() {
  // ...

  const [videoUrls, setVideoUrls] = useState([
    'http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4',
  ]);
  const [audioSrc, setAudioSrc] = useState(null);
  const [videoSrc, setVideoSrc] = useState(null);
  const [isProcessing, setIsProcessing] = useState(false);

  const handleAddUrl = () => {
    if (videoUrls.length < 2) {
      setVideoUrls([...videoUrls, '']);
    }
  };

  const handleRemoveUrl = (index) => {
    if (videoUrls.length > 1) {
      const updatedUrls = videoUrls.filter((_, idx) => idx !== index);
      setVideoUrls(updatedUrls);
    }
  };

  const handleUrlChange = (index, value) => {
    const updatedUrls = videoUrls.map((url, idx) => (idx === index ? value : url));
    setVideoUrls(updatedUrls);
  };

  const combineVideos = () => {
    setIsProcessing(true);
    worker.postMessage({
      videoUrls,
      action: 'combine',
    });
    worker.onmessage = (event) => {
      const { videoData } = event.data;
      const url = URL.createObjectURL(new Blob([videoData], { type: 'video/mp4' }));
      setVideoSrc(url);
      setIsProcessing(false);
    };
  };

  const cutVideo = (cutStart, cutEnd) => {
    setIsProcessing(true);
    worker.postMessage({
      videoUrls: [videoUrls[0]],
      action: 'cut',
      options: { cutStart, cutEnd },
    });
    worker.onmessage = (event) => {
      const { videoData } = event.data;
      const url = URL.createObjectURL(new Blob([videoData], { type: 'video/mp4' }));
      setVideoSrc(url);
      setIsProcessing(false);
    };
  };

  const extractAudio = () => {
    setIsProcessing(true);
    worker.postMessage({
      videoUrls: [videoUrls[0]],
      action: 'extractAudio',
    });
    worker.onmessage = (event) => {
      const { audioData } = event.data;
      const url = URL.createObjectURL(new Blob([audioData], { type: 'audio/mp3' }));
      setAudioSrc(url);
      setIsProcessing(false);
    };
  };

  return (
    // basic UI elements
    <div>
      <div>
        {videoUrls.map((url, index) => (
          <div key={index}>
            <input
              type="text"
              value={url}
              onChange={(e) => handleUrlChange(index, e.target.value)}
              placeholder="Enter video URL"
            />
            <button onClick={() => handleRemoveUrl(index)} disabled={videoUrls.length <= 1}>
              Remove
            </button>
          </div>
        ))}
        {videoUrls.length < 2 && (
          <button onClick={handleAddUrl} disabled={isProcessing}>
            Add Video URL
          </button>
        )}
      </div>
      <button onClick={combineVideos} disabled={isProcessing}>
        {isProcessing ? 'Processing...' : 'Combine Videos'}
      </button>
      <button onClick={() => cutVideo('00:00:10', '00:00:30')} disabled={isProcessing}>
        {isProcessing ? 'Processing...' : 'Cut Video'}
      </button>
      <button onClick={extractAudio} disabled={isProcessing}>
        {isProcessing ? 'Processing...' : 'Extract Audio'}
      </button>
      {videoSrc && (
        <div>
          <video controls width="600" src={videoSrc} />
        </div>
      )}
      {audioSrc && (
        <div>
          <audio controls src={audioSrc} />
        </div>
      )}
    </div>
  );
}

export default OurCustomComponent;

Let’s go through what we have just added.


State:
1. videoUrls: currently a hardcoded URL of an MP4 sample video
2. videoSrc: blob URL for the processed video output
3. audioSrc: blob URL for the extracted audio output
4. isProcessing: a simple loading flag

Functions:

combineVideos:
1. Initiates the concatenation.
2. Sends the video URLs to the worker, receives the video data back, converts it into a blob URL, and uses it in the video tag.
3. Action === "combine"

cutVideo:
1. Initiates cutting a video.
2. Sends the worker the first video URL along with options containing the start and end times, receives the video data back, converts it into a blob URL, and uses it in the video tag.
3. Action === "cut"

extractAudio:
1. Initiates an MP4-to-MP3 conversion.
2. Sends the worker the first video URL, receives the audio data back, converts it into a blob URL, and uses it in the audio tag.
3. Action === "extractAudio"
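
One thing to note: each of these functions reassigns worker.onmessage, so only the most recently registered handler is active, which works here because we run a single operation at a time. If you would rather tie the worker to the component's lifecycle instead of module scope, here is a minimal sketch (my own variation, using useRef and useEffect from React, not part of the code above):

const workerRef = useRef(null);

useEffect(() => {
  // One worker per mounted component; stop it when the component unmounts.
  const instance = new Worker(new URL('./ffmpegWorker.js', import.meta.url), {
    type: 'module',
  });
  workerRef.current = instance;
  return () => instance.terminate();
}, []);

The handler functions would then call workerRef.current.postMessage(...) instead of the module-level worker.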


Communication Between Main Thread and Worker:


Now to the fun stuff !!


We will update our previous worker setup by adding code that checks for the different actions defined in our component.
  

import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile } from '@ffmpeg/util';

onmessage = async (event) => {
  const { videoUrls, action, options } = event.data;
  const ffmpeg = new FFmpeg();
  await ffmpeg.load();

  // Pick the output name up front; readFile below has to match what exec writes.
  let outputFileName = 'output.mp4';
  if (action === 'extractAudio') {
    outputFileName = 'output.mp3';
  }

  if (videoUrls.length === 1) {
    // Single input: either cut a segment or extract the audio track.
    await fetchAndWriteFile(ffmpeg, videoUrls[0]);
    if (action === 'cut') {
      await cutVideos(ffmpeg, options.cutStart, options.cutEnd);
    }
    if (action === 'extractAudio') {
      await extractAudio(ffmpeg);
    }
  } else if (videoUrls.length > 1 && action === 'combine') {
    // Multiple inputs: concatenate them into one video.
    await combineVideos(ffmpeg, videoUrls);
  }

  const data = await ffmpeg.readFile(outputFileName);
  if (action === 'extractAudio') {
    postMessage({ audioData: data.buffer });
  } else {
    postMessage({ videoData: data.buffer });
  }
};



We have defined the output file as output.mp4 for video and output.mp3 for audio. The name passed to ffmpeg.readFile has to match what the exec commands actually write; otherwise readFile fails and throws an exception.
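
Because readFile throws when the requested file does not exist, it can also be worth wrapping the worker body in a try/catch and reporting failures back to the component. A minimal sketch (the error field name is my own choice, not part of the worker above):

onmessage = async (event) => {
  const { action } = event.data;
  const ffmpeg = new FFmpeg();
  try {
    await ffmpeg.load();
    // ...fetch, write, and run the exec calls exactly as above...
    const outputFileName = action === 'extractAudio' ? 'output.mp3' : 'output.mp4';
    const data = await ffmpeg.readFile(outputFileName);
    postMessage(action === 'extractAudio' ? { audioData: data.buffer } : { videoData: data.buffer });
  } catch (err) {
    // Report the failure so the component can stop showing "Processing...".
    postMessage({ error: err.message });
  }
};

On the React side, the onmessage handlers would then check event.data.error before building a blob URL.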

Once this is set up, our component on the main UI thread and the worker can send data back and forth. Let's get to the FFmpeg part now.

We will be updating our worker file one last time. 

async function fetchAndWriteFile(ffmpeg, url, index = 0) {
  // Download the video and place it in FFmpeg's virtual file system.
  const videoData = await fetchFile(url);
  await ffmpeg.writeFile(`input${index}.mp4`, videoData);
}

async function combineVideos(ffmpeg, videoUrls) {
  // Write every input into FFmpeg's virtual file system.
  for (let i = 0; i < videoUrls.length; i++) {
    const videoData = await fetchFile(videoUrls[i]);
    await ffmpeg.writeFile(`input${i}.mp4`, videoData);
  }
  // Build the list file expected by FFmpeg's concat demuxer.
  const concatFile = 'concat.txt';
  const concatContent = videoUrls.map((_, i) => `file 'input${i}.mp4'`).join('\n');
  await ffmpeg.writeFile(concatFile, new TextEncoder().encode(concatContent));
  await ffmpeg.exec(['-f', 'concat', '-safe', '0', '-i', concatFile, '-c', 'copy', '-y', 'output.mp4']);
}

async function cutVideos(ffmpeg, cutStart, cutEnd) {
  // Trim input0.mp4 between cutStart and cutEnd, re-encoding video and audio.
  await ffmpeg.exec(['-i', 'input0.mp4', '-ss', cutStart, '-to', cutEnd, '-c:v', 'libx264', '-c:a', 'aac', '-y', 'output.mp4']);
}

async function extractAudio(ffmpeg) {
  // Extract the audio stream of input0.mp4 as an MP3 file.
  await ffmpeg.exec(['-i', 'input0.mp4', '-q:a', '0', '-map', 'a', '-y', 'output.mp3']);
}

Breakdown

Let's break down each of these functions, the arguments they use, and how they are implemented.


1. fetchAndWriteFile(ffmpeg, url, index = 0)

Purpose:

Fetches a video file from a given url and writes it into FFmpeg’s virtual file system.

• Code Breakdown:

• const videoData = await fetchFile(url);: Fetches the video file from the given URL and converts it into a format FFmpeg can process.

• await ffmpeg.writeFile(`input${index}.mp4`, videoData);: Saves the fetched video data as a virtual file named input${index}.mp4 in FFmpeg’s file system.

2. combineVideos(ffmpeg, videoUrls)

Purpose:

Combines multiple videos listed in videoUrls into a single video (output.mp4).

• Code Breakdown:

• Loops through each videoUrls entry, fetches the video data using fetchFile, and writes each video as input${i}.mp4.

• Creates a concat.txt file that FFmpeg uses to concatenate videos:

• file 'input0.mp4' on each line, the format expected by FFmpeg’s concat demuxer (see the example after this list).

• Runs the FFmpeg command to concatenate videos.
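
For two inputs, the generated concat.txt would therefore contain:

file 'input0.mp4'
file 'input1.mp4'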

FFmpeg Command (exec):

-f concat -safe 0 -i concat.txt -c copy -y output.mp4

• Explanation of Arguments:

• -f concat: Specifies that the input format is a concatenation file.

• -safe 0: Allows the file paths listed in concat.txt without FFmpeg’s extra path-safety checks.

• -i concat.txt: Specifies the concat.txt file as input.

• -c copy: Copies the video and audio streams without re-encoding for faster processing.

• -y: Ensures the output file is always overwritten.

• output.mp4: The name of the output file.

3. cutVideos(ffmpeg, cutStart, cutEnd)

Purpose:

Extracts a portion of a video (input0.mp4) based on the provided cutStart and cutEnd times and saves it as output.mp4.

FFmpeg Command (exec):

-i input0.mp4 -ss {cutStart} -to {cutEnd} -c:v libx264 -c:a aac -y output.mp4

• -i input0.mp4: Specifies the input file.

• -ss {cutStart}: Starts trimming the video from the specified start time (cutStart in HH:MM:SS).

• -to {cutEnd}: Stops trimming at the specified end time (cutEnd in HH:MM:SS).

• -c:v libx264: Encodes the video using the H.264 codec (commonly used for compatibility).

• -c:a aac: Encodes the audio using the AAC codec.

• -y: Ensures the output file is always overwritten.

• output.mp4: The name of the output file.


4. extractAudio(ffmpeg)

Purpose:

Extracts the audio from a video file (input0.mp4) and saves it as an MP3 file (output.mp3).

FFmpeg Command (exec):

-i input0.mp4 -q:a 0 -map a -y output.mp3


• Explanation of Arguments:

• -i input0.mp4: Specifies the input file.

• -q:a 0: Sets the audio quality to the best possible (highest quality).

• -map a: Selects only the audio stream (a) from the input file.

• -y: Ensures the output file is always overwritten.

• output.mp3: The name of the output file.

FFmpeg-Specific Functions:

• writeFile: Saves a file (e.g., a fetched video or text file) into FFmpeg’s in-memory virtual file system for processing.

• exec: Executes a given FFmpeg command with specified arguments, processing the files in the virtual file system.
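
To tie these together, the round trip inside the worker is always write → exec → read. A minimal sketch of that flow (the log listener is an optional debugging aid I have added, and videoUrl is assumed to be in scope; neither is part of the worker above):

const ffmpeg = new FFmpeg();
// Surface FFmpeg's own log output while developing; useful when a command fails silently.
ffmpeg.on('log', ({ message }) => console.log(message));
await ffmpeg.load();

// 1. Put the input into the virtual file system.
await ffmpeg.writeFile('input0.mp4', await fetchFile(videoUrl));
// 2. Run an FFmpeg command against it.
await ffmpeg.exec(['-i', 'input0.mp4', '-q:a', '0', '-map', 'a', '-y', 'output.mp3']);
// 3. Read the result back and hand it to the main thread.
const data = await ffmpeg.readFile('output.mp3');
postMessage({ audioData: data.buffer });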


Conclusion: FFmpeg

We now have a fully functional FFmpeg setup running inside a Web Worker, allowing us to perform advanced video and audio processing directly in the browser without overloading the main UI thread. Along the way, we explored how the library can combine videos, extract audio, and cut specific segments with precision, showcasing its versatility and efficiency, and we saw how workers keep these heavy operations off the main UI thread.
The FFmpeg library is capable of much more advanced operations, such as creating GIFs, applying video filters, and adding subtitles, but those would extend beyond the scope of this blog.

This was a fun project and I am looking forward to learning more about FFmpeg in the future.
Happy hacking!!  🚀

