Before CMAF, streaming a single video to every device meant maintaining two separate sets of encoded files: one for Apple devices using HLS streaming with MPEG-TS containers, and another for Android, smart TVs, and browsers using MPEG-DASH with fragmented MP4. That duplication doubled your encoding bill, doubled your storage costs, and doubled the work for your CDN.
CMAF eliminates that duplication. By standardizing on fragmented MP4 as a single container format that both HLS and MPEG-DASH can read, CMAF lets you encode your video once and deliver it everywhere. You also get a clear path to sub-3-second live latency through chunked transfer encoding — without switching to WebRTC.
This guide covers how CMAF works technically, what it means for latency, DRM, and storage costs, and how to decide whether it belongs in your streaming stack.
What Is CMAF?
CMAF (Common Media Application Format) is an international standard for packaging, storing, and delivering segmented media over HTTP. Published by ISO/IEC in January 2018 as ISO/IEC 23000-19, it defines a single media container format — fragmented MP4 (fMP4) — that works with both HLS and MPEG-DASH.
Instead of maintaining separate .ts files for HLS and .mp4 files for DASH, you package your content once into CMAF and serve it through either protocol. Both manifests point to the same media files.
CMAF is not a streaming protocol. It doesn’t define how media is requested or delivered across the network. It defines how media is structured and packaged. Delivery still happens through HLS or MPEG-DASH manifests — CMAF just standardizes what those manifests point to.
The standard was proposed in February 2016 by Apple and Microsoft, who brought the idea to the MPEG working group. Their goal was to cut the industry-wide cost and complexity of delivering video across a fragmented device landscape.
The Problem CMAF Solves
To understand why CMAF matters, you need to understand what streaming looked like before it.
The Two-Format Problem
HLS — Apple’s HTTP Live Streaming protocol — originally required MPEG-TS (.ts) container files for video segments. The MPEG-TS vs HLS comparison shows the key technical differences, but the core issue is this: MPEG-DASH, the international standard backed by most non-Apple devices, used fragmented MP4 containers.
If you wanted to reach both Apple and non-Apple devices with the same content, you had to:
- Encode your video once
- Package it into MPEG-TS segments for HLS
- Package the same video into fMP4 segments for DASH
- Store both sets of files on your CDN
- Maintain two separate manifest files
That doubles your storage footprint and CDN bandwidth for no technical reason — the video data is identical, just wrapped in different containers. According to Bunny.net, CMAF can reduce encoding, packaging, and storage demand by over 70% compared to maintaining both formats.
The Latency Ceiling
Traditional HLS was also limited in latency. The protocol buffered complete video segments — typically 6–10 seconds each — before delivering them to the player. That meant end-to-end latency of 15–30 seconds for standard HLS streams.
CMAF introduced chunked transfer encoding, which breaks each segment into smaller chunks that the CDN and player can start processing before the full segment is complete. This brings latency down to 2–5 seconds with CMAF HLS, and under 2 seconds with Low-Latency HLS (LL-HLS).
How CMAF Works
CMAF structures media in a layered hierarchy:
- CMAF Track: A single media stream (audio, video, or subtitles) encoded in fMP4 format. Each track contains a CMAF Header (decoder configuration data) and one or more CMAF Fragments.
- CMAF Fragment: An independently decodable unit within a track. Typically 1–6 seconds of media, aligned across audio and video tracks.
- CMAF Segment: A grouping of one or more fragments, referenced by the manifest.
- CMAF Switching Set: A group of tracks with the same content encoded at different bitrates — the foundation for adaptive bitrate streaming.
- CMAF Selection Set: A group of switching sets representing different content choices (audio language, subtitle track, video angle, etc.).
The manifest file — either an HLS .m3u8 playlist or a DASH .mpd file — references the CMAF segments and tells the player which URLs to request and when.
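The hierarchy above can be sketched as a simple data model. This is an illustration of the spec’s terminology only — the class names are hypothetical, not a real library API:

```python
from dataclasses import dataclass, field

# Hypothetical model of the CMAF hierarchy — names mirror the spec's
# terminology (Track, Fragment, Switching Set, Selection Set), not any real API.

@dataclass
class Fragment:
    duration_s: float          # typically 1–6 seconds, independently decodable

@dataclass
class Track:
    kind: str                  # "video", "audio", or "subtitles"
    codec: str                 # e.g. "avc1" for H.264 inside fMP4
    header: bytes              # CMAF Header: decoder configuration data
    fragments: list[Fragment] = field(default_factory=list)

@dataclass
class SwitchingSet:
    # Same content at different bitrates — the basis for adaptive bitrate.
    tracks: list[Track] = field(default_factory=list)

@dataclass
class SelectionSet:
    # Alternative content choices: audio languages, subtitle tracks, angles.
    switching_sets: list[SwitchingSet] = field(default_factory=list)

# A 1080p and a 720p rendition of the same video form one switching set.
hi = Track("video", "avc1", b"\x00", [Fragment(2.0), Fragment(2.0)])
lo = Track("video", "avc1", b"\x00", [Fragment(2.0), Fragment(2.0)])
abr = SwitchingSet([hi, lo])
print(len(abr.tracks))  # 2
```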
Supported Codecs
CMAF specifies which audio and video codecs are valid inside the fMP4 container:
- Video: H.264 (AVC), H.265 (HEVC)
- Audio: AAC-LC, HE-AAC, AC-3, EC-3, Opus
- Subtitles: WebVTT, TTML/IMSC1
This covers the major codecs used in production streaming today. H.264 is the safe default for broad device compatibility; H.265/HEVC offers better compression at the same quality but requires more recent devices.
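In the manifest, these codecs show up as RFC 6381 codec strings in the HLS `CODECS` attribute (or the DASH `codecs` attribute). The values below are common representative examples — the profile/level suffix depends on your actual encoding settings:

```python
# Representative RFC 6381 codec strings for the CODECS attribute in an HLS
# playlist or the codecs attribute in a DASH MPD. The profile/level parts
# (e.g. 64001f) depend on your encode — treat these as illustrative defaults.
CODEC_STRINGS = {
    "h264":   "avc1.64001f",      # H.264 High Profile, Level 3.1
    "hevc":   "hvc1.1.6.L93.B0",  # HEVC Main Profile
    "aac-lc": "mp4a.40.2",
    "he-aac": "mp4a.40.5",
    "ec-3":   "ec-3",
}

def codecs_attr(video: str, audio: str) -> str:
    """Build the CODECS value for a variant combining one video and one audio codec."""
    return f"{CODEC_STRINGS[video]},{CODEC_STRINGS[audio]}"

print(codecs_attr("h264", "aac-lc"))  # avc1.64001f,mp4a.40.2
```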
How Chunked Transfer Enables Low Latency
Standard CMAF delivery still waits for a complete segment before publishing it to the CDN. Chunked transfer encoding changes that.
With chunked CMAF:
- The encoder outputs video in small chunks (100–500ms each) immediately after encoding each chunk
- The origin server starts transferring chunks to the CDN edge before the full segment is complete
- The CDN uses HTTP chunked transfer encoding to stream chunks to the player as they arrive
- The player buffers and starts playing chunks as they come in, without waiting for the full segment
This pipeline cuts end-to-end latency from 15–30 seconds (standard HLS) to 2–5 seconds (CMAF HLS), and as low as 1–2 seconds with LL-HLS partial segments.
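A back-of-the-envelope comparison makes the difference concrete. The buffer depths below are assumptions for illustration (a standard HLS player commonly buffers about three full segments before playback; a chunked player needs only a few sub-second chunks), not values from any spec:

```python
# Rough startup-latency estimates, ignoring encode and network time.
# Assumed buffer depths: three full segments for standard HLS, a handful
# of small chunks for chunked CMAF — illustrative numbers only.

def segment_latency(segment_s: float, buffered_segments: int = 3) -> float:
    # Player waits for N complete segments before playback starts.
    return segment_s * buffered_segments

def chunked_latency(chunk_s: float, buffered_chunks: int = 4) -> float:
    # Player starts after a few sub-second chunks; segment size no longer
    # dominates, because chunks flow through before the segment completes.
    return chunk_s * buffered_chunks

print(segment_latency(6.0))  # 18.0 — inside the 15–30s band of standard HLS
print(chunked_latency(0.5))  # 2.0  — inside the 2–5s band of chunked CMAF
```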
CMAF vs. HLS vs. MPEG-DASH
| Feature | Traditional HLS | CMAF with HLS | MPEG-DASH | Low-Latency HLS |
|---|---|---|---|---|
| Container | MPEG-TS (.ts) | fMP4 (.m4s) | fMP4 (.m4s) | fMP4 (.m4s) |
| Protocol | HLS | HLS | DASH | HLS |
| Standard latency | 15–30s | 5–10s | 3–8s | 1–3s |
| Low-latency mode | No | Yes (chunked) | Yes (chunked) | Yes (partial segments) |
| Apple device support | Yes (all) | iOS 12+ | Limited | iOS 15+ |
| Android/browser support | Limited | Yes | Yes | Yes |
| DRM support | FairPlay (SAMPLE-AES) | CENC / CBCS | CENC / CBCS | CENC / CBCS |
| Single encode for all | No | Yes | Yes | Yes |
| Industry standard | De facto | ISO/IEC 23000-19 | ISO/IEC 23009-1 | Apple LL-HLS spec |
For a detailed breakdown, see HLS vs DASH — CMAF with HLS gives you the widest device coverage from a single encode, while MPEG-DASH remains common in markets where Apple devices are less dominant.
CMAF and Low-Latency Streaming
CMAF’s chunked transfer approach is the leading solution for broadcast-quality latency without the infrastructure complexity of WebRTC. Here’s where CMAF fits in the latency spectrum:
| Method | Typical Latency | Scale | Best For |
|---|---|---|---|
| WebRTC | Under 500ms | Hundreds–thousands of viewers | Real-time interaction, video calls |
| CMAF with LL-HLS | 1–3 seconds | Millions of viewers | Live sports, news, live events |
| CMAF with HLS | 2–5 seconds | Millions of viewers | General live streaming |
| Standard HLS (MPEG-TS) | 15–30 seconds | Millions of viewers | VOD, delay-tolerant live |
For ultra-low-latency streaming at CDN scale, CMAF is the right packaging choice when you need to serve more than a few thousand concurrent viewers. WebRTC’s peer-to-peer or SFU architecture doesn’t scale to CDN-level audiences.
CMAF and DRM
One of CMAF’s most practical benefits is its approach to content protection.
Before CMAF, protecting content with multiple DRM systems required encoding and encrypting separate files — one for FairPlay (Apple), one for PlayReady (Microsoft), and one for Widevine (Google). That meant storing three encrypted versions of every piece of content.
CMAF standardizes on CENC (Common Encryption), specifically the CBCS encryption mode (AES-128 Cipher Block Chaining with pattern encryption). CBCS is supported by:
- FairPlay — Apple’s DRM for iOS, macOS, tvOS
- PlayReady — Microsoft’s DRM for Windows, Xbox, some smart TVs
- Widevine — Google’s DRM for Android, Chrome, Chromecast
A single CMAF-encrypted file carries initialization data for all three DRM systems. Your DRM-protected video is stored once, encrypted once, and served to any device — cutting DRM-related storage overhead by roughly 66% compared to system-specific encryption.
One caveat: FairPlay originally used a different encryption scheme (SAMPLE-AES with AES-128) for older HLS streams. Make sure your packaging tools specifically support CBCS for CMAF, not just the legacy FairPlay format.
Advantages of CMAF
Single Encode, Universal Delivery
Package your video once as CMAF and serve it to Apple devices via HLS manifests and to Android, smart TVs, and browsers via DASH manifests. No duplicate encoding runs, no duplicate storage. The storage savings exceed 70% compared to maintaining separate MPEG-TS and fMP4 segment libraries at scale.
Lower Latency Than Standard HLS
CMAF’s chunked transfer encoding brings live streaming latency below 5 seconds — well below the 15–30 seconds of traditional HLS. With LL-HLS, you can reach 1–3 seconds on CDN-based delivery.
Adaptive Bitrate Streaming
CMAF supports adaptive bitrate streaming through its switching set structure. The player automatically switches between quality levels based on available bandwidth, keeping playback smooth even when network conditions fluctuate.
CDN-Compatible Scale
Because CMAF uses standard HTTP delivery, any CDN that supports chunked transfer encoding can distribute CMAF streams. You get the full scale of global CDN infrastructure rather than WebRTC’s point-to-point architecture. CDN delivery for video applies directly to CMAF.
Standardized DRM
As covered above, CBCS encryption lets you protect content once and serve it with FairPlay, PlayReady, and Widevine from the same encrypted file. This cuts storage costs and removes per-DRM packaging complexity.
Simplified Encoding Pipeline
With one container, one encryption pass, and two lightweight manifests, your video encoding and packaging pipeline has fewer moving parts. Fewer format conversions means less processing overhead and fewer failure points during live streams.
Better CDN Cache Efficiency
Because CMAF segments are identical whether referenced by HLS or DASH manifests, CDN caches don’t need to store duplicate copies. One cached file serves requests from both HLS and DASH players, improving cache hit rates and reducing origin load.
Disadvantages of CMAF
Limited Support on Older Devices
CMAF’s fMP4 container in HLS requires iOS 12 or later and tvOS 12 or later. Devices running older Apple OS versions fall back to MPEG-TS-based HLS. If your user base has significant numbers of older Apple devices, you may need to maintain a fallback MPEG-TS stream alongside CMAF.
Not All CDNs Support Chunked Transfer
CMAF’s low-latency mode depends on end-to-end support for HTTP chunked transfer encoding — from origin server to CDN edge to player. Not all CDN configurations support this by default. Check your CDN’s documentation before assuming LL-HLS or chunked CMAF delivery is available.
DRM Encryption Complexity
While CBCS is the converging standard, some legacy DRM implementations still use CTR mode. If you’re supporting older PlayReady or Widevine clients, test for compatibility. CMAF technically supports both CBCS and CTR modes, but CBCS is the practical standard for multi-DRM CMAF deployments.
Tooling Still Maturing
CMAF is newer than HLS, and not all encoding, packaging, and player tools support it with equal quality. FFmpeg, Bento4, and Shaka Packager handle CMAF packaging well, but test your full pipeline — encoder to CDN to player — before deploying to production.
Can’t Match WebRTC Latency
Even with chunked transfer, CMAF cannot reach the sub-500ms latency of WebRTC. If your application requires real-time interaction — video calls, live auction bidding, interactive gaming — CMAF is not the right choice. WebRTC live streaming handles those use cases.
Choosing a packaging format is only one part of building a streaming application. You still need encoding infrastructure, CDN delivery, and a player that handles fMP4 segments. Whether you build that yourself or use a streaming API, the CMAF format sits in the middle of your pipeline.
How to Implement CMAF in Your Streaming Stack
Here’s a practical path to CMAF delivery.
Step 1: Choose an Encoder That Outputs fMP4
Your encoder or video transcoder needs to output fragmented MP4 segments, not MPEG-TS. Tools that support CMAF packaging:
- FFmpeg — open-source, supports CMAF-compatible HLS output via `-f hls -hls_segment_type fmp4`
- Shaka Packager — Google’s open-source packager, supports CMAF segments for HLS and DASH
- AWS Elemental MediaConvert / MediaPackage — managed service with CMAF packaging for both HLS and DASH output
- Bitmovin Encoder — commercial encoder with CMAF and LL-HLS support
For live streams, your encoder should support chunked output at the ingest level so you’re not adding latency in the packaging step.
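As a sketch, the FFmpeg invocation for CMAF-compatible HLS can be assembled like this. The flags are real FFmpeg options (`-hls_segment_type fmp4` is what switches segments from MPEG-TS to fragmented MP4); the file names are placeholders:

```python
# Build an FFmpeg command line for fMP4 (CMAF-compatible) HLS output.
# Flags are real FFmpeg options; input/output names are placeholders.
def cmaf_hls_command(src: str, playlist: str, segment_s: int = 4) -> list[str]:
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-c:a", "aac",  # H.264 + AAC-LC: safe defaults
        "-f", "hls",
        "-hls_segment_type", "fmp4",       # fMP4 segments instead of MPEG-TS
        "-hls_time", str(segment_s),       # target segment duration in seconds
        playlist,
    ]

cmd = cmaf_hls_command("input.mp4", "out.m3u8")
print(" ".join(cmd))
```

Run it with `subprocess.run(cmd, check=True)` once FFmpeg is installed; building the argument list separately keeps the transcode step easy to test and log.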
Step 2: Configure Dual-Manifest CMAF Packaging
Your packager creates two manifests pointing to the same CMAF media files:
- An HLS `.m3u8` master playlist referencing `.m4s` segment files
- A DASH `.mpd` manifest referencing the same `.m4s` files
Both manifests point to the same fMP4 segments — no duplicate files needed. For low-latency CMAF, configure 2–6 second segments with chunk sizes of 100–500ms. Smaller chunks reduce latency but increase HTTP request overhead.
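To make the dual-manifest idea concrete, here is a minimal sketch that emits an HLS media playlist and a skeletal DASH MPD referencing the same segment names. Both outputs are heavily simplified — production manifests carry codec, bandwidth, and timing attributes this sketch omits:

```python
# Minimal sketch: one list of .m4s segment names, two manifests.
# Simplified for illustration — not production-complete manifests.

def hls_playlist(segments: list[str], duration_s: float) -> str:
    lines = ["#EXTM3U", "#EXT-X-VERSION:7",          # fMP4 HLS needs version 7+
             f"#EXT-X-TARGETDURATION:{int(duration_s)}",
             '#EXT-X-MAP:URI="init.mp4"']            # init segment for fMP4
    for seg in segments:
        lines += [f"#EXTINF:{duration_s:.1f},", seg]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

def dash_mpd(segments: list[str], duration_s: float) -> str:
    urls = "".join(f'<SegmentURL media="{s}"/>' for s in segments)
    return ("<MPD><Period><AdaptationSet><Representation>"
            f'<SegmentList duration="{int(duration_s)}">'
            f'<Initialization sourceURL="init.mp4"/>{urls}'
            "</SegmentList></Representation></AdaptationSet></Period></MPD>")

segs = ["seg_000.m4s", "seg_001.m4s"]
print(hls_playlist(segs, 4.0))
print(dash_mpd(segs, 4.0))   # same .m4s files, two manifests
```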
Step 3: Configure Your CDN for Chunked Transfer
For standard CMAF (5–10 second latency), most CDNs work without special configuration. For low-latency CMAF with chunked transfer, you need a CDN that supports:
- HTTP chunked transfer encoding at the edge
- Partial object caching — so the CDN can forward incomplete segments to clients
CDN providers with documented LL-HLS and CMAF support include Akamai, Fastly, and Cloudflare.
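The wire format of HTTP chunked transfer encoding itself is simple: each chunk is its byte length in hex, CRLF, the bytes, CRLF, with a zero-length chunk terminating the body. That framing is what lets a CDN forward pieces of a CMAF segment before the segment is complete:

```python
# Encode a sequence of byte chunks as an HTTP/1.1 chunked transfer body
# (per RFC 9112): hex length, CRLF, data, CRLF, ending with a zero chunk.
# Each encoder output (e.g. a moof+mdat pair) can be sent the moment it exists.

def to_chunked(parts: list[bytes]) -> bytes:
    out = b""
    for part in parts:
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    return out + b"0\r\n\r\n"   # zero-length chunk ends the response body

body = to_chunked([b"moof+mdat #1", b"moof+mdat #2"])
print(body)
```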
Step 4: Update Your Player
Most modern players handle CMAF fMP4 natively:
- hls.js — supports fMP4 segments in HLS (needed for CMAF playback in browsers without native HLS support)
- Shaka Player — full CMAF support for HLS and DASH in browsers
- ExoPlayer (Android) — native CMAF / DASH support
- AVPlayer (iOS/tvOS) — native CMAF / HLS support from iOS 12+
For LL-HLS playback, use hls.js 1.0+ or Shaka Player 3.0+. Older player versions fall back to standard HLS segment behavior.
Step 5: Add DRM If Required
If your content needs protection, configure CBCS encryption in your packager. Generate a single encrypted CMAF stream with DRM initialization data for FairPlay, PlayReady, and Widevine embedded in the manifest. Your DRM license server handles per-system key delivery at playback time.
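As a hedged sketch, a Shaka Packager invocation for CBCS encryption can be assembled like this. The flags follow Shaka Packager’s documented raw-key mode; the key ID and key below are obvious placeholders, and a production setup would fetch real keys from your DRM provider rather than hard-coding them:

```python
# Build a Shaka Packager command for CBCS-encrypted CMAF output.
# Flags follow Shaka Packager's raw-key encryption mode; the hex key_id
# and key values used below are placeholders, not usable keys.
def cbcs_package_command(src: str, out: str, key_id: str, key: str) -> list[str]:
    return [
        "packager",
        f"in={src},stream=video,output={out}",
        "--protection_scheme", "cbcs",      # pattern encryption for multi-DRM
        "--enable_raw_key_encryption",
        "--keys", f"key_id={key_id}:key={key}",
    ]

cmd = cbcs_package_command("video.mp4", "video_enc.mp4", "00" * 16, "11" * 16)
print(" ".join(cmd))
```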
Using a Streaming API
Building and maintaining this pipeline — encoder, packager, CDN, player — takes significant engineering time. A live streaming API like LiveAPI handles the full pipeline: ingest via RTMP or SRT, automatic adaptive bitrate encoding, HLS output via Akamai, Cloudflare, and Fastly CDNs, and an embeddable player — so you’re shipping features instead of managing packaging infrastructure.
Is CMAF Right for Your Project?
Use CMAF if:
- You serve both Apple and non-Apple devices and want a single encode
- You need sub-5-second live latency without WebRTC infrastructure
- You’re implementing DRM and want one encrypted asset for FairPlay, PlayReady, and Widevine
- You’re optimizing storage and CDN costs at scale (the 70%+ savings add up fast above a few terabytes)
- You’re building an OTT platform or VOD service where wide device compatibility matters
Skip CMAF if:
- Your audience is exclusively on Apple devices and you already use MPEG-TS HLS — the migration may not be worth it
- You need sub-500ms latency — WebRTC is the right choice for that
- Your audience has significant numbers of pre-iOS 12 devices and you can’t run a fallback stream
- Your CDN doesn’t support chunked transfer and low latency is a hard requirement
CMAF FAQ
What does CMAF stand for?
CMAF stands for Common Media Application Format. It’s an ISO/IEC standard (23000-19) that defines how to package segmented media in fragmented MP4 containers for HTTP-based streaming across HLS and MPEG-DASH delivery.
Is CMAF a protocol?
No. CMAF is a container and packaging format, not a streaming protocol. It works with existing delivery protocols — HLS and MPEG-DASH — by standardizing the structure of the media files those protocols reference. The protocol layer (how requests are made, how playlists are served) stays the same.
What’s the difference between CMAF and HLS?
The two operate at different layers. HLS is a delivery protocol that defines how clients request and play back media. CMAF vs HLS explains the full picture: CMAF defines the container format of the media segments that HLS serves. Traditional HLS uses MPEG-TS containers; CMAF HLS uses fMP4 containers. CMAF doesn’t replace HLS — it replaces the MPEG-TS container inside HLS.
What latency can CMAF achieve?
Standard CMAF with HLS achieves 2–5 seconds end-to-end latency. With chunked transfer encoding and Low-Latency HLS (LL-HLS), you can reach 1–2 seconds. For sub-second latency, WebRTC is the only scalable option — CMAF can’t match that ceiling regardless of chunk size.
Does CMAF support DRM?
Yes. CMAF uses CENC (Common Encryption) with CBCS mode, which is supported by FairPlay (Apple), PlayReady (Microsoft), and Widevine (Google). A single CMAF-encrypted file can serve all three DRM systems from the same bytes, reducing storage requirements significantly compared to separate per-DRM encryption.
What container format does CMAF use?
CMAF uses fragmented MP4 (fMP4), also called ISOBMFF (ISO Base Media File Format). This is the same container MPEG-DASH has always used, which is why CMAF content can serve both HLS and DASH from a single file without re-encoding.
What devices support CMAF?
CMAF with HLS runs on iOS 12+, tvOS 12+, macOS (Safari), and all major browsers when using hls.js or Shaka Player. CMAF with DASH runs on Android, Chromecast, most smart TVs, and modern browsers. Legacy Apple devices running pre-iOS 12 require a separate MPEG-TS fallback stream.
How is CMAF different from MPEG-DASH?
CMAF is a packaging format; MPEG-DASH is a streaming protocol. MPEG-DASH has always used fMP4 containers, making CMAF content structurally compatible with DASH. What CMAF adds is the standardized track structure, switching sets, selection sets, and common encryption scheme — building a consistent layer on top of raw fMP4 that both HLS and DASH manifests can reference.
Is CMAF used by major streaming services?
Yes. Netflix, Disney+, and most major CDN providers have adopted CMAF. Apple added fMP4 support to HLS in 2017, and the ISO formally published the CMAF standard in January 2018. Akamai and other CDN providers have documented best-practice guides for ultra-low-latency CMAF delivery.
What tools support CMAF packaging?
FFmpeg, Shaka Packager, Bento4, AWS Elemental MediaConvert, Bitmovin, and most commercial encoding platforms support CMAF packaging. Confirm your tool specifically supports fMP4 segments for HLS output (`-hls_segment_type fmp4` in FFmpeg), not just DASH output.
Closing
CMAF solves a real infrastructure problem: the cost and complexity of maintaining separate media files for HLS and DASH delivery. By standardizing on fragmented MP4 as the container for both protocols, it cuts storage costs by over 70%, reduces packaging complexity, and opens the door to sub-5-second live streaming latency through chunked transfer encoding.
If you’re building a streaming application that needs to reach a broad device base — iOS, Android, browsers, smart TVs — with a single encode, CMAF is the right packaging format for your stack.
To build on top of CMAF-compatible delivery infrastructure without managing encoders, packagers, and CDN configurations yourself, get started with LiveAPI. LiveAPI handles RTMP and SRT ingest, HLS output across Akamai, Cloudflare, and Fastly CDNs, adaptive bitrate encoding, and an embeddable player — so you can focus on your product.


