If you’ve configured a media server, set up an encoder, or built any kind of live streaming pipeline, you’ve seen both MPEG-TS and HLS listed as output or ingest options. They appear side by side in encoder settings, CDN configurations, and streaming APIs — which raises a reasonable question: what’s actually different, and which one should you use?
The short answer: they solve different problems. MPEG-TS (MPEG Transport Stream) is a packet-based container format built for broadcast and managed network delivery. HLS (HTTP Live Streaming) is a segment-based streaming protocol built for internet-scale delivery with adaptive bitrate support. They don’t directly compete in most real-world workflows — but choosing the wrong one for your use case causes real problems, whether that’s buffering, incompatible hardware, or latency that misses your target.
This guide breaks down how each format works, where they differ on latency, device compatibility, and DRM, and which to choose for broadcast, IPTV, OTT, or developer-built streaming apps.
What Is MPEG-TS?
MPEG Transport Stream (MPEG-TS) is a digital container format for transmitting and storing audio, video, and data. It was standardized as ISO/IEC 13818-1 (MPEG-2 Part 1, Systems) in 1995 and designed specifically for environments where transmission reliability cannot be guaranteed — terrestrial broadcast, satellite, cable, and IPTV over managed telco infrastructure.
The defining characteristic of MPEG-TS is its fixed 188-byte packet structure. Every piece of data — video frames, audio samples, subtitles, metadata — is broken into 188-byte packets. Each packet carries a 4-byte header with a Packet Identifier (PID) that tells the receiver which elementary stream the packet belongs to. Multiple streams (video, audio, subtitle tracks) are multiplexed together into a single MPEG-TS stream, each identified by its unique PID.
MPEG-TS Packet Structure
Each 188-byte packet contains:
- 4-byte header: sync byte (0x47), 13-bit Packet Identifier (PID), continuity counter (4 bits for detecting lost packets), and control flags
- Optional adaptation field: carries PCR (Program Clock Reference) timestamps for audio/video sync, splice point metadata, and private data
- 184 bytes of payload: the actual audio, video, or data content
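The header layout above can be sketched in a few lines of Python. This is an illustrative parser, not a full demuxer: field names follow ISO/IEC 13818-1, and the sample packet bytes are made up for the demo.

```python
# Sketch: parse the 4-byte header of a single MPEG-TS packet.
# Field layout per ISO/IEC 13818-1; adaptation fields are not unpacked here.

def parse_ts_header(packet: bytes) -> dict:
    """Extract header fields from one 188-byte MPEG-TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet (wrong length or sync byte)")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "pid": ((b1 & 0x1F) << 8) | b2,            # 13-bit Packet Identifier
        "payload_unit_start": bool(b1 & 0x40),     # set at start of a PES/PSI unit
        "adaptation_field": (b3 >> 4) & 0x03,      # 01=payload, 10=AF only, 11=both
        "continuity_counter": b3 & 0x0F,           # 4-bit counter, per PID
    }

# Fake packet: sync byte, PID 0x0100, payload-only, continuity counter 7
pkt = bytes([0x47, 0x41, 0x00, 0x17]) + bytes(184)
hdr = parse_ts_header(pkt)
print(hdr["pid"], hdr["continuity_counter"])  # 256 7
```

A receiver runs exactly this kind of extraction on every packet, routing each one by PID and checking that the continuity counter increments to detect loss.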
The 188-byte size wasn’t arbitrary — it was chosen to align with ATM (Asynchronous Transfer Mode) cell sizes. One TS packet maps onto four ATM cells: each 48-byte cell payload carries 47 bytes of TS data plus one byte of AAL1 overhead, and 4 × 47 = 188, making MPEG-TS compatible with the telecom infrastructure of the mid-1990s.
Key MPEG-TS Components
- PAT (Program Association Table): always on PID 0x0000; lists all programs in the stream and points to their PMTs
- PMT (Program Map Table): describes which PIDs carry video, audio, and data for each program
- PCR (Program Clock Reference): a 42-bit synchronization timestamp (a 33-bit base at 90 kHz plus a 9-bit extension at 27 MHz), sent at least every 100ms with ±500ns maximum jitter; keeps audio and video in sync at the decoder
- Null Packets (PID 0x1FFF): padding packets inserted to maintain constant bitrate (CBR) in broadcast environments
MPEG-TS transmits as a continuous push stream, typically over UDP — either multicast (for IPTV and cable distribution) or unicast. The receiver demultiplexes the stream in real time, following PAT → PMT to identify the relevant PIDs.
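The demultiplexing step can be sketched as grouping packets by PID. This is a simplified illustration, assuming the stream is packet-aligned and skipping adaptation-field handling that a real demuxer must do:

```python
from collections import defaultdict

def demux_by_pid(stream: bytes) -> dict:
    """Split a raw TS byte stream into per-PID payload lists.
    Simplified sketch: assumes 188-byte packet alignment and
    ignores adaptation fields."""
    pids = defaultdict(list)
    for i in range(0, len(stream) - 187, 188):
        pkt = stream[i:i + 188]
        if pkt[0] != 0x47:
            continue  # lost sync; a real demuxer would hunt for the next 0x47
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        pids[pid].append(pkt[4:])  # payload after the 4-byte header
    return pids

# Two fake packets: one on PID 0 (PAT), one null packet on PID 0x1FFF
stream = (bytes([0x47, 0x00, 0x00, 0x10]) + bytes(184)
          + bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184))
print(sorted(demux_by_pid(stream)))  # [0, 8191]
```

A real receiver first reads PID 0 to get the PAT, follows it to the PMT, and only then knows which PIDs carry the program it should decode.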
Where MPEG-TS is used today:
– Digital broadcast TV (DVB-T, DVB-S, DVB-C, ATSC, ISDB)
– Satellite television
– IPTV over managed telco networks
– Cable television head-end systems
– Professional broadcast contribution feeds (camera to encoder, encoder to uplink)
– IP camera streams over managed LAN
What Is HLS?
HTTP Live Streaming (HLS) is an adaptive bitrate streaming protocol developed by Apple and released in 2009. Unlike MPEG-TS’s continuous packet stream, HLS breaks video into small downloadable file segments — typically 2–10 seconds each — and delivers them over standard HTTP.
A client player downloads an .m3u8 playlist file that references available quality variants and individual segment URLs. The player fetches segments sequentially via HTTP GET requests, assembles them into continuous playback, and switches quality levels based on available bandwidth — without interrupting the stream. HLS was standardized as RFC 8216 in August 2017 and as of 2022 is consistently ranked the most-used streaming format in annual video industry surveys.
How HLS Works
- Video is encoded at multiple bitrates and resolutions (for example: 1080p at 5 Mbps, 720p at 3 Mbps, 480p at 1.5 Mbps)
- Each encoded stream is segmented into short files stored as .ts or .fmp4
- A master playlist (.m3u8) references all variant streams with their bitrates and resolutions
- Individual media playlists list the segment URLs for each quality level
- The client downloads the master playlist, selects a quality variant, and fetches segments in order
- The player monitors bandwidth continuously and switches quality variants when needed
- A CDN caches and serves segments globally — only standard HTTP GET requests are required
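The playlist layer in the steps above is plain text. As a sketch, here is a master playlist rendered from a bitrate ladder; the tag syntax follows RFC 8216, while the rendition URIs and the exact ladder are placeholder values:

```python
def master_playlist(variants) -> str:
    """Render an HLS master playlist (.m3u8) for a list of
    (bandwidth_bps, resolution, uri) variants. Tag syntax per RFC 8216."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for bw, res, uri in variants:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bw},RESOLUTION={res}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

# Example ladder matching the bitrates mentioned earlier in this section
ladder = [
    (5_000_000, "1920x1080", "1080p/index.m3u8"),
    (3_000_000, "1280x720", "720p/index.m3u8"),
    (1_500_000, "854x480", "480p/index.m3u8"),
]
print(master_playlist(ladder))
```

The client parses exactly this text, picks a variant based on measured bandwidth, then fetches that variant's media playlist to get segment URLs.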
The pull-based, HTTP-native delivery model is what makes HLS well-suited for internet delivery. CDNs cache individual segments efficiently. Firewalls pass HTTP traffic without special configuration. Any standard web server can deliver HLS content without specialized infrastructure.
How MPEG-TS and HLS Actually Relate
Here’s something most comparison articles skip: HLS traditionally uses MPEG-TS as its segment container format.
When you generate an HLS stream, the segment files (those 2–10 second chunks) have historically been MPEG-TS files with .ts extensions. The HLS protocol wraps MPEG-TS segments with an .m3u8 playlist layer to create the adaptive, HTTP-deliverable experience.
Since WWDC 2016, Apple has also supported fragmented MP4 (fMP4 / CMAF) as the HLS segment container. Modern HLS implementations increasingly use fMP4 because a single encode can serve both HLS and MPEG-DASH players, cutting packaging and storage costs roughly in half.
The real relationship looks like this:
- MPEG-TS is a transport container — a format for packaging audio, video, and data bits into a stream
- HLS is a delivery protocol — a system for segmenting, indexing, and adaptively serving video over HTTP
- HLS can use MPEG-TS as its segment format, or it can use fMP4/CMAF
- MPEG-TS exists independently as a broadcast delivery format, without any HLS wrapping
They’re not pure alternatives. They coexist in many production pipelines.
MPEG-TS vs HLS: Side-by-Side Comparison
| Feature | MPEG-TS | HLS |
|---|---|---|
| Type | Transport container format | Adaptive streaming protocol |
| Data unit | 188-byte fixed packets | 2–10 second segment files (.ts or .fmp4) |
| Transport | UDP (multicast or unicast) | HTTP (TCP, CDN-cacheable) |
| Delivery model | Push (continuous, real-time) | Pull (client requests segments on demand) |
| Latency | Sub-second possible | 6–30 sec standard; 2–5 sec (LL-HLS) |
| Adaptive bitrate | No | Yes |
| Browser support | Limited (requires MSE + JS library) | Excellent (native Safari; hls.js elsewhere) |
| Mobile support | Rare | Excellent (iOS and Android native) |
| CDN compatibility | Not applicable (UDP-based) | Excellent (HTTP caching) |
| DRM/Encryption | DVB CA (hardware-dependent); limited | AES-128/256, FairPlay, Widevine, PlayReady |
| Analytics | Basic | Advanced (client-side metrics) |
| Infrastructure | Specialized broadcast hardware | Standard HTTP servers + CDN |
| Primary use case | Broadcast, IPTV, contribution feeds | OTT, web, mobile, consumer streaming |
MPEG-TS vs HLS: Latency Deep Dive
Latency is the most important practical difference for many teams choosing between the two.
MPEG-TS Latency
MPEG-TS can deliver sub-second latency on managed networks. Because it’s a continuous push stream over UDP, there’s no segmentation delay — data flows from encoder to decoder in near real time. This is why broadcast engineers favor it for live sports, news, and IPTV on private telco infrastructure. The trade-off: you need a UDP-capable managed network, which means the public internet is largely off the table.
Standard HLS Latency
Standard HLS adds 6–30 seconds of latency, built in by design. The server must complete a full segment before any client can request it. The player then buffers several segments before starting playback. With a 6-second segment size and a 3-segment player buffer, you’re already at roughly 18 seconds of end-to-end latency before accounting for encoding and CDN hop time.
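The arithmetic in that example is simple enough to make explicit. A back-of-envelope helper, where segment length and player buffer depth are the assumed inputs:

```python
def hls_latency_estimate(segment_s: float, buffered_segments: int) -> float:
    """Rough player-side latency: the client starts playback only after
    buffering several complete segments. Encoding time and CDN hops
    add further delay on top of this estimate."""
    return segment_s * buffered_segments

# 6-second segments with a 3-segment buffer, as in the example above
print(hls_latency_estimate(6.0, 3))  # 18.0
```

Shrinking segments reduces latency linearly, but below roughly 2 seconds per segment the playlist-polling and per-request overhead grows, which is the problem LL-HLS was designed to solve.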
Low Latency HLS (LL-HLS)
LL-HLS, announced at WWDC 2019, closes much of this gap — reducing latency to 2–5 seconds through three mechanisms:
- Partial segments: 200–400ms chunks within each full segment, letting the client start downloading before the full segment completes
- Blocking playlist reloads: the server holds the client’s playlist request open until new content is available, cutting polling overhead
- Playlist delta updates: only changed entries are sent in playlist refreshes, reducing request payload size
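The first two mechanisms show up directly in the media playlist. Below is an illustrative generator for an LL-HLS playlist excerpt; the tag names come from the LL-HLS extension of the HLS spec, while the segment numbering and part URIs are invented for the example:

```python
def ll_hls_excerpt(seg_num: int, part_target: float, parts: int) -> str:
    """Illustrative LL-HLS media-playlist fragment. Partial segments are
    advertised as they finish, so clients can fetch them before the full
    segment exists; CAN-BLOCK-RELOAD=YES enables blocking playlist reloads."""
    lines = [
        "#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0",
        f"#EXT-X-PART-INF:PART-TARGET={part_target}",
    ]
    for p in range(1, parts + 1):
        lines.append(
            f'#EXT-X-PART:DURATION={part_target},URI="seg{seg_num}.part{p}.mp4"'
        )
    return "\n".join(lines)

print(ll_hls_excerpt(seg_num=42, part_target=0.333, parts=3))
```

Each `EXT-X-PART` line is downloadable the moment that roughly 333ms chunk is encoded, which is how LL-HLS cuts the full-segment wait out of the latency budget.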
LL-HLS requires HTTP/2 support from your CDN and origin server. Cloudflare, Akamai, and Fastly all support LL-HLS as of 2023.
For ultra low latency video streaming requirements below 1 second — interactive auctions, live betting, real-time co-watching — WebRTC is the appropriate technology. WebRTC vs HLS is a separate decision with different infrastructure requirements and trade-offs.
Device and Browser Compatibility
HLS has a clear advantage for internet-facing delivery.
MPEG-TS Compatibility
MPEG-TS was designed for managed broadcast infrastructure — hardware decoders, set-top boxes, satellite receivers, and professional monitoring tools. Browsers cannot natively play MPEG-TS. Some players use the Media Source Extensions (MSE) API with a JavaScript library to demultiplex MPEG-TS and feed decoded data to the browser’s media pipeline, but this requires specific player support and adds client-side complexity.
Native MPEG-TS support exists in:
– Hardware IRDs (Integrated Receiver Decoders)
– DVB-compliant set-top boxes
– Professional broadcast monitors
– IPTV hardware clients
– VLC and some specialized desktop media players
HLS Compatibility
HLS has native support in:
– Safari on all platforms (iOS 3.0+, macOS 10.6+)
– Chrome on Android 4.1+
– All modern smart TVs, Roku, Amazon Fire TV, Apple TV
– Any browser using hls.js (the open-source JavaScript HLS player)
– Every major streaming service — Netflix, Twitch, YouTube Live, Hulu, BBC iPlayer
For internet-facing streaming, HLS covers the full device spectrum with native or JS-player-based support. For broadcast hardware delivery, MPEG-TS is the standard and HLS support is typically limited.
Security and DRM
If you’re protecting premium content for internet delivery, HLS is the only practical choice.
HLS encryption options:
– AES-128: full segment encryption (CBC mode with PKCS7 padding) — the most common, widely supported
– SAMPLE-AES: per-sample encryption — lower overhead, more flexible
– AES-256: stronger encryption for high-value content
– Multi-DRM: FairPlay (Apple), Widevine (Google), PlayReady (Microsoft) — all supported via CMAF/CENC packaging
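On the playlist side, the AES-128 option above amounts to a single tag telling the player where to fetch the key. A sketch, where the tag syntax follows RFC 8216 and the key URI is a placeholder:

```python
import os

def aes128_key_tag(key_uri: str) -> str:
    """Render the EXT-X-KEY tag for AES-128 segment encryption: the player
    fetches the 16-byte key from key_uri and decrypts each segment in CBC
    mode with the advertised IV. The key URI here is a placeholder."""
    iv = os.urandom(16)  # random 128-bit initialization vector
    return f'#EXT-X-KEY:METHOD=AES-128,URI="{key_uri}",IV=0x{iv.hex()}'

print(aes128_key_tag("https://keys.example.com/stream1.key"))
```

Note that plain AES-128 protects the transport but not the key itself; anyone who can fetch the key URI can decrypt. That is why paid-content workflows layer a DRM license server (FairPlay, Widevine, PlayReady) over the key exchange.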
MPEG-TS encryption options:
– DVB Conditional Access (CA) systems — hardware-dependent, used in closed broadcast environments
– Some IPTV middleware implementations add proprietary encryption layers
– No standardized cross-platform DRM for public internet delivery
If you need DRM for video on public internet delivery — protecting paid content on iOS, Android, and browsers at once — HLS with CMAF and Common Encryption (CENC) is the only practical path. MPEG-TS DRM options are hardware-bound and incompatible with standard web DRM systems like Widevine or FairPlay.
Infrastructure and Cost
The infrastructure requirements reflect the different network environments each format was built for.
MPEG-TS infrastructure:
– UDP-capable networks with multicast support
– Hardware encoders and IRDs
– Broadcast routers, modulators, and monitoring equipment
– Private managed networks (telco, cable headend, satellite uplink)
– Higher upfront capital investment in specialized broadcast equipment
HLS infrastructure:
– Any standard HTTP web server (nginx, Apache, cloud object storage like S3)
– A CDN for video streaming to handle global distribution
– A video transcoding layer to produce multiple quality renditions
– No specialized hardware required
– Pay-per-use pricing from CDN and cloud providers
For internet delivery at scale, HLS is significantly more cost-efficient. CDN HTTP caching means segment distribution costs decrease as a percentage of total cost as audience size grows. MPEG-TS multicast over managed networks is bandwidth-efficient for IPTV subscribers, but the hardware investment and network requirements are much higher.
When to Use MPEG-TS
MPEG-TS is the right choice when:
- You’re operating on a closed, managed network — cable plant, satellite link, private telco infrastructure, or IPTV over dedicated bandwidth
- You need sub-second latency and can guarantee a high-bandwidth connection (80 Mbps or more)
- Your delivery endpoints are broadcast hardware — set-top boxes, IRDs, professional monitors, satellite receivers
- You’re running contribution feeds between professional broadcast equipment — camera to encoder, encoder to uplink, or encoder to media server over LAN
- You’re building for DVB, ATSC, or ISDB terrestrial or satellite broadcast systems
- You need to carry multiple programs in a single stream (MPTS for IPTV channel lineups)
- Your encoders and decoders are hardware units that natively output and accept MPEG-TS over UDP
MPEG-TS is not a good fit for public internet delivery, mobile apps, or any scenario requiring adaptive bitrate or broad browser compatibility.
When to Use HLS
HLS is the right choice when:
- You’re delivering over the public internet to web or mobile users
- You need adaptive bitrate streaming to handle variable network conditions automatically
- You’re targeting broad device compatibility — iOS, Android, browsers, smart TVs
- You’re using a CDN for global distribution
- You need content protection via standard DRM — FairPlay, Widevine, or PlayReady
- You’re building a consumer-facing OTT platform or streaming service
- You need player-side analytics — buffer events, bitrate switches, segment load times, rebuffering ratios
- You want to run on standard web infrastructure without specialized hardware investment
HLS is the default for internet-scale live streaming. Every major consumer streaming service — Netflix, Hulu, Apple TV+, Twitch, BBC iPlayer — uses HLS for OTT delivery.
Live Streaming Workflow: Where Each Protocol Fits
In a real production pipeline, MPEG-TS and HLS often appear at different stages — not as alternatives, but as complements in the same workflow.
Here’s a typical live streaming production chain and where each format appears:
Stage 1 — Capture and Encoding
A camera feeds a hardware or software encoder. The encoder compresses video (H.264 or HEVC) and sends the ingest signal to a media server via RTMP, SRT, or directly as MPEG-TS over UDP. Broadcast hardware encoders commonly output MPEG-TS natively to professional receiving equipment on-site.
Stage 2 — Ingest and Transcoding
The media server receives the stream and transcodes it into multiple quality renditions. This video encoding step produces the bitrate ladder needed for adaptive streaming downstream.
Stage 3 — Packaging and Distribution
This is where the split typically happens:
– For broadcast/IPTV: the packager outputs MPEG-TS streams over UDP multicast to set-top boxes or satellite uplinks
– For OTT/internet: the packager segments video into .ts or .fmp4 files, generates .m3u8 playlists, and pushes to CDN origin for HLS streaming delivery
Stage 4 — Playback
– MPEG-TS playback: hardware decoders, IPTV middleware apps, professional monitoring tools
– HLS playback: web browsers, iOS/Android apps, smart TV streaming apps
A major sports broadcaster might run both paths at once — MPEG-TS over satellite for cable subscribers (sub-second latency on dedicated infrastructure) while serving HLS over CDN for mobile and web OTT viewers, all from the same source encode. The RTMP to HLS conversion step is a standard part of these pipelines.
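The dual-path split at the packaging stage can be sketched as two ffmpeg invocations from the same source. The flag names are real ffmpeg options, but the ingest URL, multicast address, and output paths are placeholders:

```python
# Sketch: one source feed, two packaging paths (broadcast + OTT).
# Commands are built as argument lists; run them with subprocess if desired.

SOURCE = "srt://encoder.example.com:9000"  # placeholder ingest URL

broadcast_cmd = [
    "ffmpeg", "-i", SOURCE,
    "-c", "copy",                      # pass the contribution encode through
    "-f", "mpegts",                    # MPEG-TS container...
    "udp://239.0.0.1:1234",            # ...pushed over UDP multicast
]

ott_cmd = [
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx264", "-c:a", "aac",  # re-encode for the OTT rendition
    "-f", "hls",                       # HLS packager
    "-hls_time", "6",                  # 6-second segments
    "-hls_list_size", "5",             # rolling live playlist window
    "/var/www/live/index.m3u8",
]

print(" ".join(broadcast_cmd))
print(" ".join(ott_cmd))
```

In production the OTT path would produce a full bitrate ladder rather than a single rendition, but the shape of the split is the same: one ingest, MPEG-TS out for the managed network, HLS out for the CDN.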
If you’re building on top of a streaming API rather than managing your own media server, platforms like LiveAPI accept MPEG-TS as an ingest format alongside RTMP and SRT, and handle transcoding and HLS packaging automatically — giving you HLS output URLs with adaptive bitrate for all your internet delivery endpoints without building the packaging layer yourself.
MPEG-TS vs HLS vs MPEG-DASH
While this article focuses on the MPEG-TS vs HLS comparison, MPEG-DASH comes up in most streaming protocol discussions.
HLS vs DASH is a separate comparison, but the key distinctions relative to MPEG-TS are:
- Both HLS and DASH are HTTP-based segment-and-playlist delivery protocols — they’re in the same category as each other, not the same category as MPEG-TS
- DASH uses MPD manifest files instead of M3U8 playlists, and is codec-agnostic
- DASH has limited native iOS/Safari support, which constrains adoption for consumer apps targeting Apple devices
- CMAF (Common Media Application Format) allows a single encode to serve both HLS and DASH players, making the HLS-vs-DASH packaging choice less critical from a cost standpoint
- MPEG-TS sits in a different category entirely: transport container, not delivery protocol
For IPTV comparisons specifically, the question of “HLS or MPEG-TS” usually comes down to whether the system runs over a managed network (MPEG-TS multicast over UDP) or over the public internet (HLS or DASH over HTTP). The network type drives the protocol choice.
Choosing between MPEG-TS and HLS is really a question about your delivery environment. If you control the network — closed infrastructure, dedicated bandwidth, hardware endpoints — MPEG-TS gives you sub-second latency and is the broadcast industry standard. If you’re delivering over the public internet to web and mobile users, HLS gives you adaptive bitrate, broad compatibility, and CDN scalability that MPEG-TS can’t match in that context.
The harder question for most development teams is how to build the ingest and transcoding pipeline that feeds HLS output efficiently, without months of infrastructure work. That’s where a live streaming SDK or API platform handles the heavy lifting.
MPEG-TS vs HLS FAQ
What is the difference between MPEG-TS and HLS?
MPEG-TS is a packet-based transport container that sends audio and video as a continuous stream of 188-byte packets over UDP, designed for broadcast and managed networks. HLS is an HTTP-based adaptive streaming protocol that delivers video as short file segments with M3U8 playlists, designed for internet delivery. They serve different network environments and often appear at different stages of the same production pipeline.
Does HLS use MPEG-TS?
Yes — traditionally, HLS segments are stored as MPEG-TS files (.ts extension). Since WWDC 2016, HLS also supports fragmented MP4 (fMP4/CMAF) as an alternative segment container. Modern HLS pipelines increasingly use fMP4 because it enables a single encode to serve both HLS and DASH players.
Which has lower latency: MPEG-TS or HLS?
MPEG-TS delivers sub-second latency on managed networks because it’s a continuous push stream over UDP with no segment buffering. Standard HLS adds 6–30 seconds of latency from segmentation and player buffering. Low Latency HLS (LL-HLS) reduces this to 2–5 seconds, but MPEG-TS still wins for real-time broadcast on dedicated infrastructure.
Which is better for IPTV: HLS or MPEG-TS?
It depends on the IPTV architecture. Traditional IPTV over telco-managed networks (dedicated bandwidth, hardware set-top boxes) typically uses MPEG-TS multicast for low latency and bandwidth efficiency. Modern IPTV services delivered over the public internet increasingly use HLS for adaptive bitrate and broad device compatibility. Many IPTV providers support both.
Does MPEG-TS support adaptive bitrate?
No. MPEG-TS streams at a fixed bitrate. If network conditions change, the stream either plays or drops packets — there’s no mechanism to switch quality. Adaptive bitrate (ABR) is a feature of HLS and MPEG-DASH.
What is the MPEG-TS packet size?
Each MPEG-TS packet is exactly 188 bytes: a 4-byte header plus 184 bytes of payload. This fixed size was chosen to align with ATM cell sizes used in telecom infrastructure.
Can MPEG-TS be delivered over the public internet?
Technically yes, but it’s not practical for consumer delivery. MPEG-TS over UDP doesn’t traverse firewalls consistently, doesn’t work with standard CDN caching, doesn’t adapt to bandwidth changes, and isn’t natively supported in browsers. For internet delivery to consumer devices, HLS is the correct choice.
What is Low Latency HLS (LL-HLS)?
LL-HLS is an extension to HLS announced by Apple in 2019 that reduces latency from 6–30 seconds to 2–5 seconds. It uses partial segments (200–400ms chunks), blocking playlist reloads, and playlist delta updates to cut end-to-end delay while maintaining CDN compatibility and adaptive bitrate support.
What is M3U8?
M3U8 is the playlist file format used by HLS. A master playlist lists all available quality variants; media playlists list the individual segment URLs for each quality level. For a full breakdown, see the guide to M3U8 files.
What is the difference between MPEG-TS, HLS, and RTMP?
RTMP is a real-time streaming protocol used primarily as a live ingest protocol — from encoder to media server. MPEG-TS is a broadcast transport container for managed network delivery. HLS is an adaptive streaming delivery protocol for internet and OTT. In a typical live workflow: the encoder sends RTMP or SRT to the media server, the server transcodes and packages, then outputs MPEG-TS for broadcast or HLS for internet delivery. For a detailed comparison of the ingest options, see SRT vs RTMP.
Closing
MPEG-TS and HLS aren’t alternatives in the same lane — they’re formats built for different networks and different problems. MPEG-TS is the standard for broadcast and managed network delivery where you control the infrastructure and need sub-second latency. HLS is the standard for internet-scale delivery where you need adaptive bitrate, broad device support, and CDN-backed distribution.
Most production live streaming systems that serve both broadcast and internet audiences use both. The split happens at the packaging stage: MPEG-TS out for broadcast, HLS out for OTT.
If you’re building an internet-facing streaming application, HLS is almost certainly your delivery format. The remaining question is how to build the ingest and transcoding pipeline that feeds it without months of infrastructure work.
LiveAPI accepts MPEG-TS as an ingest format alongside RTMP and SRT, transcodes to multiple bitrates automatically, and outputs HLS-ready streams with adaptive bitrate built in. Get started with LiveAPI and go from ingest to HLS delivery in days.

