⚡ Low-Level Media API

WebCodecs API
Reference

Direct, hardware-accelerated access to browser-native audio and video codecs — without WebAssembly overhead.

W3C Candidate Rec Chrome 94+ Edge 94+ Firefox 130+ GPU Accelerated Streams API
Introduction

What is WebCodecs?

The WebCodecs API gives web developers low-level access to the individual frames of a video stream and chunks of audio. It is useful for web applications that need full control over the way media is processed — video editors, conferencing tools, live streamers, and transcoding pipelines.

Many Web APIs (Web Audio, WebRTC, MSE) use media codecs internally, but expose no direct access to raw frames or encoded chunks. Developers previously had to ship entire codec implementations via WebAssembly — wasting bandwidth, battery and latency. WebCodecs solves this by exposing the codecs already inside the browser.

🎞️

Video Encoding

Encode raw VideoFrames to H.264, VP8, VP9, AV1 and more with fine-grained bitrate control.

📼

Video Decoding

Decode EncodedVideoChunks into VideoFrame objects ready for Canvas, WebGL, or WebGPU.

🎵

Audio Encoding

Encode AudioData to Opus, AAC, FLAC using the browser's native codec implementation.

🔊

Audio Decoding

Decode EncodedAudioChunks into AudioData for rendering via AudioWorklet or further processing.


Architecture

Core Concepts

WebCodecs uses an asynchronous, queue-based processing model. Encoders and decoders each maintain an internal processing queue. Calls to configure(), encode(), decode(), and flush() append control messages to this queue and return immediately; the browser processes the queue in the background and delivers results through the output callback.
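A minimal sketch of working with this queue, assuming an already-configured decoder: decode() only enqueues work, and the "dequeue" event fires whenever the queue length drops, which makes it a natural signal for feeding more chunks. The depth limit of 8 here is an arbitrary illustration.

```javascript
// Sketch: feed chunks while respecting the decoder's internal queue.
// decode() only enqueues; frames arrive later via the output callback.
function feedChunks(decoder, chunks, maxQueueDepth = 8) {
  let i = 0;
  return new Promise((resolve) => {
    const pump = () => {
      // Enqueue until we hit our self-imposed depth limit.
      while (i < chunks.length && decoder.decodeQueueSize < maxQueueDepth) {
        decoder.decode(chunks[i++]);
      }
      if (i < chunks.length) {
        // "dequeue" fires each time decodeQueueSize decreases.
        decoder.addEventListener("dequeue", pump, { once: true });
      } else {
        resolve();
      }
    };
    pump();
  });
}
```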

[Diagram: WebCodecs pipeline — source (MediaStream / file / camera) → VideoEncoder / AudioEncoder → encoded chunk stream → VideoDecoder / AudioDecoder]

Codec States

Every codec instance cycles through these states:

| State | Description |
| --- | --- |
| unconfigured | Initial state. Must call configure() before use. |
| configured | Ready to process. Call encode() / decode(). |
| closed | Permanently closed. No further operations allowed. |
ℹ️

flush() vs reset(): flush() waits for all pending work to complete. reset() synchronously aborts and clears the queue. close() is permanent — the instance cannot be reused.
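For example (a sketch; the decoder and config stand in for your own instances):

```javascript
// Drain everything, then retire the codec for good.
async function finishStream(decoder) {
  await decoder.flush(); // resolves once all queued work has produced output
  decoder.close();       // permanent: a closed codec cannot be reused
}

// On seek, pending frames are stale: drop them and reconfigure.
function seekTo(decoder, config) {
  decoder.reset();           // synchronously aborts; state is now "unconfigured"
  decoder.configure(config); // required again before the next decode()
}
```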


Compatibility

Browser Support

| Browser | Version | Support |
| --- | --- | --- |
| 🟡 Chrome | 94+ | ✓ Full Support |
| 🔵 Edge | 94+ | ✓ Full Support |
| 🦊 Firefox | 130+ | ✓ Full Support |
| 🧭 Safari | 18+ | ~ Video only |
💡

Always feature-detect using the static isConfigSupported() method before using a codec. Not all browsers support all codecs (e.g., H.264 support varies by platform license).
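A sketch of that feature detection, probing a list of candidate configs in order (the candidate list in the comment is an illustration):

```javascript
// Return the first config this browser can decode, or null.
async function firstSupportedConfig(candidates) {
  for (const config of candidates) {
    const { supported } = await VideoDecoder.isConfigSupported(config);
    if (supported) return config;
  }
  return null; // caller falls back, e.g. to a WASM decoder
}

// firstSupportedConfig([
//   { codec: "hev1.1.6.L93.B0", codedWidth: 1280, codedHeight: 720 },
//   { codec: "avc1.42001f",     codedWidth: 1280, codedHeight: 720 },
// ]);
```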


Video

VideoDecoder

Decodes EncodedVideoChunk objects into VideoFrame objects. The decoded frames are delivered via the output callback.

Constructor

javascript
const decoder = new VideoDecoder({
  output: (frame) => {
    // Paint frame to canvas
    ctx.drawImage(frame, 0, 0);
    frame.close(); // ⚠️ Always release memory
  },
  error: (err) => console.error("Decode error:", err),
});

Methods & Properties

| Name | Type | Description |
| --- | --- | --- |
| configure(config) | method | Configure the decoder with codec string, dimensions, description. |
| decode(chunk) | method | Enqueue an EncodedVideoChunk for decoding. |
| flush() | method | Returns a Promise that resolves when all pending frames are output. |
| reset() | method | Synchronously abort all pending work and reset state to "unconfigured". |
| close() | method | Permanently close and free all resources. |
| state | getter | Current state: "unconfigured", "configured", or "closed" |
| decodeQueueSize | getter | Number of pending decode requests in the queue. |
| isConfigSupported(config) | static | Returns a Promise resolving to a support object. |

VideoDecoderConfig

javascript
decoder.configure({
  codec: "avc1.42001f",  // H.264 Baseline Level 3.1
  codedWidth: 1280,
  codedHeight: 720,
  description: avcDecoderConfigRecord, // Optional: ArrayBuffer
  hardwareAcceleration: "prefer-hardware", // or "prefer-software"
  optimizeForLatency: true,
});

Video

VideoEncoder

Encodes VideoFrame objects into EncodedVideoChunk objects, delivered via the output callback along with optional EncodedVideoChunkMetadata.

javascript — VideoEncoder setup
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // chunk: EncodedVideoChunk
    // metadata: { decoderConfig?, svc?, alphaSideData? }
    muxer.addVideoChunk(chunk, metadata);
  },
  error: (e) => console.error(e),
});

encoder.configure({
  codec: "vp09.00.10.08", // VP9 Profile 0
  width: 1280,
  height: 720,
  bitrate: 2_000_000,
  framerate: 30,
  bitrateMode: "variable",
  hardwareAcceleration: "prefer-hardware",
  latencyMode: "realtime",  // for conferencing
});

// Encode a frame — force keyframe every 150 frames
encoder.encode(videoFrame, { keyFrame: frameCount % 150 === 0 });
videoFrame.close();

Supported Codecs

| Codec | Codec String Example | Notes |
| --- | --- | --- |
| H.264 / AVC | avc1.42001f | Widely supported; patent-encumbered on some platforms |
| VP8 | vp8 | Royalty-free; common in WebRTC |
| VP9 | vp09.00.10.08 | Better compression than VP8; royalty-free |
| AV1 | av01.0.04M.08 | Best compression; hardware support growing |
| H.265 / HEVC | hev1.1.6.L93.B0 | Platform-specific; check isConfigSupported() |
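To choose among these at runtime, one approach (a sketch; the preference order is an assumption for illustration, not a recommendation from the spec) is to probe the table's codec strings best-first:

```javascript
// Probe encoder support for the codec strings above, in preference order.
const CODEC_PREFERENCE = ["av01.0.04M.08", "vp09.00.10.08", "avc1.42001f"];

async function pickEncoderCodec(width, height, bitrate) {
  for (const codec of CODEC_PREFERENCE) {
    const { supported } = await VideoEncoder.isConfigSupported({
      codec, width, height, bitrate,
    });
    if (supported) return codec;
  }
  throw new Error("No supported video codec found");
}
```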

Video

VideoFrame

Represents a single decoded video frame. Implements CanvasImageSource, so it can be drawn directly to a canvas, used in WebGL, or passed to WebGPU.

⚠️

Always call frame.close() when done. VideoFrames hold GPU/CPU memory and system resources. Failing to close them causes memory leaks and degraded performance.
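One way to make the close() call hard to forget is a small try/finally wrapper (a sketch; withFrame is a hypothetical helper, not part of the API):

```javascript
// Run fn with the frame, closing the frame no matter what fn does.
function withFrame(frame, fn) {
  try {
    return fn(frame);
  } finally {
    frame.close(); // released even if fn throws
  }
}

// In a decoder:
// output: (frame) => withFrame(frame, (f) => ctx.drawImage(f, 0, 0))
```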

javascript — Creating a VideoFrame
// From a canvas element
const frame = new VideoFrame(canvasElement, {
  timestamp: performance.now() * 1000, // microseconds
  duration: 33333, // ~30fps in µs
});

// From raw pixel data (RGBA)
const init = {
  format: "RGBA",
  codedWidth: 640, codedHeight: 480,
  timestamp: 0,
};
const frame2 = new VideoFrame(rgbaBuffer, init);

// Read pixel data back
await frame.copyTo(outputBuffer, {
  format: "I420",
  rect: { x: 0, y: 0, width: 640, height: 480 },
});

frame.close(); // Always!

Pixel Formats

| Format | Description |
| --- | --- |
| I420 | YUV planar 4:2:0 — most common decoded format |
| I420A | YUV 4:2:0 with alpha plane |
| I444 | YUV planar 4:4:4 — full chroma resolution |
| NV12 | YUV semi-planar — common GPU-native format |
| RGBA | 8-bit RGBA interleaved |
| BGRA | 8-bit BGRA interleaved |
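When copying pixels out with copyTo(), the destination buffer must be large enough; VideoFrame.allocationSize() reports the exact size for a given format. As a sketch of the arithmetic behind it for I420 (4:2:0 subsampling):

```javascript
// I420: full-resolution Y plane plus two quarter-resolution chroma planes.
function i420ByteLength(width, height) {
  const luma = width * height;               // Y plane
  const chroma = (width / 2) * (height / 2); // U or V plane (4:2:0)
  return luma + 2 * chroma;                  // = width * height * 3 / 2
}

// In practice, let the API size the buffer:
// const dest = new ArrayBuffer(frame.allocationSize({ format: "I420" }));
// await frame.copyTo(dest, { format: "I420" });
```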

Audio

AudioDecoder

Decodes EncodedAudioChunk objects into AudioData objects for playback via AudioWorklet or further DSP processing.

javascript
const audioDecoder = new AudioDecoder({
  output: (audioData) => {
    // Route to AudioWorklet or Web Audio API
    workletNode.port.postMessage({ audioData }, [audioData]);
  },
  error: (e) => console.error(e),
});

audioDecoder.configure({
  codec: "opus",
  sampleRate: 48000,
  numberOfChannels: 2,
});

Audio

AudioEncoder

javascript
const audioEncoder = new AudioEncoder({
  output: (chunk, meta) => muxer.addAudioChunk(chunk, meta),
  error: (e) => console.error(e),
});

audioEncoder.configure({
  codec: "opus",
  sampleRate: 48000,
  numberOfChannels: 2,
  bitrate: 128_000,
});

Supported Audio Codecs

| Codec | String | Notes |
| --- | --- | --- |
| Opus | opus | Royalty-free, excellent quality — preferred for WebRTC |
| AAC-LC | mp4a.40.2 | Widely supported; common in MP4 containers |
| FLAC | flac | Lossless; large output files |
| PCM (u8) | pcm-u8 | Uncompressed 8-bit |
| PCM (s16) | pcm-s16 | Uncompressed 16-bit signed |

Audio

AudioData

Represents a block of decoded audio samples. Like VideoFrame, always call audioData.close() after use to free resources.

| Property/Method | Type | Description |
| --- | --- | --- |
| format | getter | Sample format: "u8", "s16", "s32", "f32", "u8-planar", etc. |
| sampleRate | getter | Samples per second (e.g., 44100, 48000) |
| numberOfFrames | getter | Number of audio frames in this chunk |
| numberOfChannels | getter | Number of audio channels (1 = mono, 2 = stereo) |
| duration | getter | Duration in microseconds |
| timestamp | getter | Presentation timestamp in microseconds |
| copyTo(dest, options) | method | Copy audio samples into a provided ArrayBuffer |
| clone() | method | Create a copy with an independent resource lifetime |
| close() | method | Release resources immediately |
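For example, extracting one channel of planar float samples with copyTo() (a sketch; copyChannel is a hypothetical helper):

```javascript
// Copy a single channel out of an AudioData as a Float32Array.
function copyChannel(audioData, channel) {
  const opts = { planeIndex: channel, format: "f32-planar" };
  const bytes = audioData.allocationSize(opts); // exact size for this plane
  const dest = new Float32Array(bytes / Float32Array.BYTES_PER_ELEMENT);
  audioData.copyTo(dest, opts); // planeIndex selects the channel for planar formats
  return dest;
}
```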

Chunks

EncodedVideoChunk & EncodedAudioChunk

Containers for compressed media data. Both share the same interface shape.

javascript
// Creating manually (e.g., from network/file)
const chunk = new EncodedVideoChunk({
  type: "key",      // "key" | "delta"
  timestamp: 0,     // microseconds
  duration: 33333,  // microseconds
  data: nalUnitBuffer,
});

// Reading the data back
const buf = new ArrayBuffer(chunk.byteLength);
chunk.copyTo(buf);

| Property | Type | Value |
| --- | --- | --- |
| type | getter | "key" (keyframe/intra) or "delta" (inter/dependent frame) |
| timestamp | getter | Presentation timestamp in microseconds |
| duration | getter | Duration in microseconds (may be undefined) |
| byteLength | getter | Size of encoded data in bytes |
| copyTo(dest) | method | Copy encoded bytes into an ArrayBuffer or TypedArray |
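The copyTo() pattern above is common enough to wrap in a helper (a sketch; chunkBytes is hypothetical):

```javascript
// Copy a chunk's compressed payload into a fresh Uint8Array,
// e.g. before writing it to a file or sending it over the network.
function chunkBytes(chunk) {
  const dest = new Uint8Array(chunk.byteLength);
  chunk.copyTo(dest);
  return dest;
}
```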

Rendering

Rendering VideoFrames

VideoFrame implements CanvasImageSource, making it directly usable with Canvas 2D, WebGL, and WebGPU.

javascript — Canvas 2D
const canvas = document.getElementById("output");
const ctx = canvas.getContext("2d");

// In the decoder output callback:
output: (frame) => {
  ctx.drawImage(frame, 0, 0);
  frame.close();
}
javascript — WebGL texture
// VideoFrame as a WebGL texture source
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
  gl.TEXTURE_2D, 0, gl.RGBA,
  gl.RGBA, gl.UNSIGNED_BYTE,
  frame  // VideoFrame is a valid TexImageSource
);
frame.close();

Integration

ReadableStream Integration

Use MediaStreamTrackProcessor to bridge a live MediaStream into WebCodecs-ready VideoFrame or AudioData streams.

javascript — Camera → Encoder pipeline
const stream = await navigator.mediaDevices
  .getUserMedia({ video: true, audio: true });

const [videoTrack] = stream.getVideoTracks();
const processor = new MediaStreamTrackProcessor({ track: videoTrack });
const reader = processor.readable.getReader();

let frameCount = 0;
while (true) {
  const { value: frame, done } = await reader.read();
  if (done) break;
  encoder.encode(frame, { keyFrame: frameCount++ % 150 === 0 });
  frame.close();
}

Guide

Guide: Decoding a Video File

A complete, minimal example of fetching a video file, demuxing it (using mp4box.js), and rendering decoded frames to a canvas.

javascript — Full Decode Pipeline
// 1. Feature detection
const { supported } = await VideoDecoder.isConfigSupported({
  codec: "avc1.42001f",
  codedWidth: 1280, codedHeight: 720,
});
if (!supported) throw new Error("Codec not supported");

// 2. Set up decoder
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");

const decoder = new VideoDecoder({
  output: (frame) => {
    canvas.width  = frame.displayWidth;
    canvas.height = frame.displayHeight;
    ctx.drawImage(frame, 0, 0);
    frame.close();
  },
  error: console.error,
});

decoder.configure({
  codec: "avc1.42001f",
  codedWidth: 1280, codedHeight: 720,
  description: avcConfig, // from demuxer
});

// 3. Feed encoded chunks from demuxer
for (const sample of videoSamples) {
  decoder.decode(new EncodedVideoChunk({
    type: sample.is_sync ? "key" : "delta",
    timestamp: sample.cts * 1e6 / sample.timescale,
    duration: sample.duration * 1e6 / sample.timescale,
    data: sample.data,
  }));
}

await decoder.flush();
decoder.close();
console.log("Done decoding!");

Guide

Guide: Encoding Camera Feed to WebM

javascript — Encode + Mux to WebM
// Uses webm-muxer (npm) for containerization
import { Muxer, ArrayBufferTarget } from "webm-muxer";

const target = new ArrayBufferTarget();
const muxer = new Muxer({ target, video: { codec: "V_VP9", width: 1280, height: 720 }});

const encoder = new VideoEncoder({
  output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
  error: console.error,
});
encoder.configure({ codec: "vp09.00.10.08", width: 1280, height: 720, bitrate: 2e6, framerate: 30 });

// Feed frames...
let frameIndex = 0;
function encodeFrame(bitmap) {
  const frame = new VideoFrame(bitmap, {
    timestamp: frameIndex * 33333,
    duration: 33333,
  });
  encoder.encode(frame, { keyFrame: frameIndex % 150 === 0 });
  frame.close();
  frameIndex++;
}

// Finalize
async function finish() {
  await encoder.flush();
  muxer.finalize();
  const blob = new Blob([target.buffer], { type: "video/webm" });
  const url = URL.createObjectURL(blob);
  document.getElementById("download").href = url;
}

Guide

Guide: Microphone Audio Pipeline

javascript — Mic → Opus Encoder
const stream = await navigator.mediaDevices
  .getUserMedia({ audio: true });
const [track] = stream.getAudioTracks();

const proc = new MediaStreamTrackProcessor({ track });
const reader = proc.readable.getReader();

const enc = new AudioEncoder({
  output: (chunk) => sendOverNetwork(chunk),
  error: console.error,
});
enc.configure({ codec: "opus", sampleRate: 48000, numberOfChannels: 1, bitrate: 64000 });

while (true) {
  const { value: audioData, done } = await reader.read();
  if (done) break;
  enc.encode(audioData);
  audioData.close();
}

Errors

Error Handling

Errors arrive via the error callback. A codec that encounters an error transitions to "closed" state and cannot be reused — create a new instance.

javascript
let decoder = createDecoder();

function createDecoder() {
  return new VideoDecoder({
    output: handleFrame,
    error: (e) => {
      console.error("Decoder error:", e.message);
      // Re-create if needed
      if (decoder.state === "closed") {
        decoder = createDecoder();
        decoder.configure(lastConfig); // lastConfig: the config from the previous configure()
      }
    },
  });
}

// Common error types:
// EncodingError — codec-level encode/decode failure
// InvalidStateError — calling methods on a closed codec
// NotSupportedError — unsupported config (check isConfigSupported first!)
⚠️

NotSupportedError: Always call isConfigSupported() before configure() to avoid runtime errors from unsupported codec configurations.


Performance

Performance Tips

🚀

Prefer Hardware Acceleration

Use hardwareAcceleration: "prefer-hardware" in the config. This is a hint: some browsers fall back to software, while others reject the config when hardware is unavailable, so verify with isConfigSupported().

♻️

Close Frames Immediately

Call frame.close() and audioData.close() as soon as you're done. GPU memory is finite.

📊

Monitor Queue Size

Watch encoder.encodeQueueSize. If it grows large, back-pressure by awaiting before submitting more.

🔁

Avoid Frequent flush()

flush() is expensive. Call it only once all desired work is queued, not as a polling mechanism.

🧵

Use Web Workers

VideoEncoder/Decoder work in Web Workers. Offload encoding to a worker to keep the main thread responsive.

realtime Latency Mode

For live streaming/conferencing, use latencyMode: "realtime" in VideoEncoder config.
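The tips above can be combined: a sketch of offloading encoding to a worker, assuming a hypothetical "encoder-worker.js" file that owns the VideoEncoder and posts encoded chunks back. VideoFrame is transferable, so it crosses the thread boundary without a copy.

```javascript
// Sketch: wrap a Web Worker that owns the VideoEncoder.
function startEncoderWorker(config) {
  const worker = new Worker("encoder-worker.js"); // hypothetical worker script
  worker.postMessage({ type: "configure", config });
  return {
    // Transfer the frame (second argument) rather than cloning it.
    encode(frame) { worker.postMessage({ type: "frame", frame }, [frame]); },
    finish() { worker.postMessage({ type: "finish" }); },
  };
}

// const enc = startEncoderWorker({ codec: "vp8", width: 640, height: 480, bitrate: 1e6 });
// enc.encode(videoFrame);
```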

💡

Back-pressure pattern: Check encoder.encodeQueueSize before submitting work and, when it grows, wait for the "dequeue" event (or throttle frame capture) instead of calling flush(), to prevent unbounded memory growth in high-throughput pipelines.
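A sketch of that pattern for an encoder (waitForQueueBelow is a hypothetical helper; the limit of 8 is arbitrary):

```javascript
// Resolve once the encoder's queue drops below the given limit.
function waitForQueueBelow(encoder, limit) {
  return new Promise((resolve) => {
    const check = () => {
      if (encoder.encodeQueueSize < limit) return resolve();
      // "dequeue" fires whenever encodeQueueSize decreases.
      encoder.addEventListener("dequeue", check, { once: true });
    };
    check();
  });
}

// In a capture loop (nextFrame() is hypothetical):
// await waitForQueueBelow(encoder, 8);
// encoder.encode(await nextFrame());
```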

WebCodecs API Documentation · Based on W3C Candidate Recommendation · Last reviewed March 2026