
RTMP to HLS: How to Convert Live Streams for Any Device


When you push a live stream from OBS, Wirecast, or a hardware encoder, it leaves the encoder over RTMP. That’s the standard ingest protocol — fast, reliable, and natively supported by every encoder on the market. The problem is on the delivery side: browsers, iPhones, Android devices, and smart TVs can’t play RTMP directly. They need HLS.

Converting an RTMP stream to HLS is one of the most common tasks in live streaming infrastructure. It’s the bridge between how you capture video and how your viewers watch it. How you handle it — with FFmpeg, NGINX, or a cloud API — directly affects latency, scalability, and how much infrastructure you have to manage.

This guide covers what RTMP-to-HLS conversion is, why it’s necessary, the difference between transmuxing and transcoding, three implementation methods with working commands, how to build an ABR ladder, and how LL-HLS cuts latency to under 5 seconds.


What Is RTMP to HLS Conversion?

RTMP to HLS conversion is the process of taking an incoming live stream delivered via RTMP and repackaging or re-encoding it as HLS segments for delivery to viewers.

RTMP and HLS are two different protocols built for different stages of the live streaming pipeline:

  • RTMP (Real-Time Messaging Protocol) is a TCP-based protocol originally developed by Macromedia for Flash Player. It maintains a persistent connection between the encoder and the server, giving it low latency — typically 0.8–3 seconds. It runs on port 1935 and carries video in an FLV container.
  • HLS (HTTP Live Streaming) is an HTTP-based delivery protocol developed by Apple. It splits a video stream into short .ts (MPEG Transport Stream) segments — typically 2–6 seconds each — and publishes an M3U8 playlist that players use to request those segments in sequence.
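What a live media playlist actually looks like helps make this concrete. A simplified example, with illustrative segment names and durations:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
stream120.ts
#EXTINF:4.000,
stream121.ts
#EXTINF:4.000,
stream122.ts
```

The player polls this file, sees new segments appear at the bottom, and fetches each one over plain HTTP.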
                   RTMP                         HLS
Protocol           TCP                          HTTP
Role               Ingest (encoder → server)    Delivery (server → viewer)
Container          FLV                          MPEG-TS or fMP4
Default latency    0.8–3 seconds                6–30 seconds (standard)
Low-latency mode   N/A                          2–5 seconds (LL-HLS)
Browser support    None (Flash removed)         Native on all modern browsers
CDN-friendly       No                           Yes
ABR support        No                           Yes

Converting RTMP to HLS means your media server — or a cloud API — receives the RTMP stream, processes it in real time, and outputs HLS segments that any device can play over a standard web server or CDN.


Why Convert RTMP to HLS?

RTMP still dominates as an ingest protocol because every encoder — OBS, Wirecast, vMix, Telestream, hardware encoders from Blackmagic and Teradek — outputs RTMP natively. It’s stable, battle-tested, and fast. The problem is on the delivery side.

Browser compatibility. Adobe ended Flash support in 2020. Chrome, Firefox, Safari, and Edge no longer play RTMP natively. There’s no way to play an RTMP stream directly in a browser today without a plugin, and most enterprise environments block plugins entirely. HLS plays natively in every modern browser.

Mobile delivery. iOS has supported HLS natively since iOS 3. Android handles it through ExoPlayer and the native media player. Neither platform plays RTMP.

CDN caching. CDNs are built around HTTP. MPEG-TS segment files cache naturally at edge nodes, which means your HLS stream scales to thousands of concurrent viewers without hammering your origin server. RTMP is a persistent TCP connection — CDNs can’t cache it, and you’d need one server connection per viewer.

Adaptive bitrate delivery. HLS supports multiple quality renditions in a single stream. Your server generates separate M3U8 playlists for 1080p, 720p, 480p, and 360p, and the player switches between them automatically based on the viewer’s bandwidth. RTMP delivers one fixed quality to everyone.

DVR and catch-up. HLS segment files sit on a server or storage bucket. Keep them around and viewers can rewind during a live broadcast or watch a replay after it ends. RTMP streams don’t have this option without additional recording infrastructure.

If you want to stream live video to any device — web, mobile, or connected TV — RTMP-to-HLS conversion is part of the foundation.


Transmuxing vs. Transcoding: Which One Do You Need?

Before choosing an implementation method, you need to understand the difference between transmuxing and video transcoding. They’re often confused, but they’re not the same thing.

Transmuxing changes the container format without touching the encoded video data. For RTMP to HLS, that means taking the H.264 video and AAC audio from the FLV container and repackaging them into MPEG-TS segments (.ts files) with an M3U8 index. The video is never decoded or re-encoded — it’s moved from one wrapper to another.

This is fast and CPU-efficient. A single server can transmux dozens of concurrent streams with minimal overhead. The trade-off: you get one output quality. Whatever resolution and bitrate the encoder sent is what viewers receive.

Transcoding decodes the incoming video and re-encodes it at different resolutions and bitrates. You start with a 1080p RTMP stream and output separate renditions: 1080p at 6 Mbps, 720p at 3 Mbps, 480p at 1.5 Mbps, 360p at 800 Kbps. Those renditions feed into a master M3U8 playlist, and the player handles adaptive bitrate streaming automatically.

Transcoding is CPU-heavy. Each rendition requires a separate encode pass in real time, which means significant processing power. GPU-accelerated encoding (NVIDIA NVENC, Apple VideoToolbox) helps, but the infrastructure cost is still much higher than transmuxing.

                    Transmuxing                Transcoding
What changes        Container only             Container + codec + quality
Output renditions   One (source quality)       Multiple (ABR)
CPU usage           Low                        High
Latency added       Near-zero                  0.5–2 seconds
Best for            Single-quality delivery    Adaptive bitrate delivery

If your encoder already outputs multiple renditions (some hardware encoders support this), you can transmux each one separately. If you have a single RTMP feed and need to reach viewers on different connections and devices, you need transcoding.


How to Convert RTMP to HLS: 3 Methods

Method 1: FFmpeg

FFmpeg is a free, open-source command-line tool that handles both transmuxing and transcoding. It’s the fastest way to test an RTMP-to-HLS conversion pipeline and works well for development or simple single-stream setups.

Basic transmux (single quality, no re-encode):

ffmpeg -i rtmp://localhost/live/stream \
  -c:v copy \
  -c:a copy \
  -f hls \
  -hls_time 4 \
  -hls_list_size 5 \
  -hls_flags delete_segments \
  /var/www/html/hls/stream.m3u8

What each flag does:
-c:v copy -c:a copy — copies video and audio without re-encoding (transmux only)
-hls_time 4 — creates 4-second segments
-hls_list_size 5 — keeps the last 5 segments in the playlist (about 20 seconds of buffer)
-hls_flags delete_segments — removes old .ts files automatically to avoid filling the disk

This writes .ts files and a stream.m3u8 playlist to /var/www/html/hls/. Serve that directory from NGINX or Apache and you have a working HLS stream.

Transcode to a single output (with re-encode):

ffmpeg -i rtmp://localhost/live/stream \
  -c:v libx264 -preset veryfast -b:v 3000k -maxrate 3500k -bufsize 6000k \
  -c:a aac -b:a 128k \
  -f hls \
  -hls_time 4 \
  -hls_list_size 5 \
  /var/www/html/hls/stream.m3u8

Use -preset veryfast or ultrafast for real-time encoding. Slower presets produce smaller files but won’t keep up with a live feed — FFmpeg will start dropping frames and falling behind.

FFmpeg is good for development, quick tests, and proof-of-concept work. For production with multiple concurrent streams, you want a media server that manages ingest, segmentation, and playlist updates without a separate FFmpeg process per stream.


Method 2: NGINX with the RTMP Module

The nginx-rtmp-module turns NGINX into a media server. It accepts RTMP ingest on port 1935 and outputs HLS segments directly — without requiring a separate FFmpeg process for each stream.

This is the most common self-hosted approach for RTMP-to-HLS in production. One NGINX instance can handle multiple live RTMP streams with minimal configuration.

Install NGINX with the RTMP module:

sudo apt-get install nginx libnginx-mod-rtmp

Or build from source (the nginx version below is an example; use a current release). The module is compiled into NGINX itself, so you need the NGINX source tree:

git clone https://github.com/arut/nginx-rtmp-module.git
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar -xzf nginx-1.24.0.tar.gz && cd nginx-1.24.0
./configure --add-module=../nginx-rtmp-module
make && sudo make install

nginx.conf — complete RTMP + HTTP configuration:

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;

            hls on;
            hls_path /var/www/html/hls;
            hls_fragment 4s;
            hls_playlist_length 30s;
        }
    }
}

http {
    server {
        listen 80;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /var/www/html;
            add_header Cache-Control no-cache;
            add_header Access-Control-Allow-Origin *;
        }
    }
}

Once NGINX is running with this config, push an RTMP stream from OBS or any encoder:
RTMP ingest URL: rtmp://your-server-ip/live
Stream key: anything (e.g., mystream)

NGINX automatically generates HLS segments and a playlist at /var/www/html/hls/mystream.m3u8. Your HLS playback URL is http://your-server-ip/hls/mystream.m3u8.

The Cache-Control: no-cache header prevents CDN and browser caching of the M3U8 playlist, which changes every few seconds. As written, the header also applies to the .ts segment files; those never change once written, so in production it’s common to let them be cached.
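If you want segments cached but playlists always fresh, one approach is to split the rules by extension. This is a sketch, not a drop-in config: keep the types mapping from the config above at server level, and adjust paths to your setup.

```nginx
location ~ ^/hls/.+\.m3u8$ {
    root /var/www/html;
    add_header Cache-Control no-cache;          # playlists change every few seconds
    add_header Access-Control-Allow-Origin *;
}

location ~ ^/hls/.+\.ts$ {
    root /var/www/html;
    add_header Cache-Control "max-age=60";      # segments are immutable once written
    add_header Access-Control-Allow-Origin *;
}
```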

The NGINX + rtmp-module approach is solid for self-hosted setups, but you’re responsible for server provisioning, scaling, health monitoring, and disk management. For a one-server setup handling a handful of streams, it works well. For a product feature handling variable concurrent streams, the operational overhead adds up.


Method 3: Cloud API

Both FFmpeg and NGINX require you to manage the underlying infrastructure. For teams building live streaming as a product feature, that’s engineering time spent on server operations instead of your application.

Cloud-based video APIs handle the full pipeline: receive your RTMP stream, transcode it to multiple HLS renditions, distribute segments through a global CDN, and give you an HLS playback URL. You push one RTMP stream and get back a playback URL that works on any device.

With LiveAPI, the setup takes a few lines of code:

const sdk = require('api')('@liveapi/v1.0#5pfjhgkzh9rzt4');

sdk.post('/live-streams', {
  record: true,
  name: 'My Live Stream'
})
.then(({ data }) => {
  console.log('RTMP ingest URL:', data.rtmpUrl);
  console.log('HLS playback URL:', data.hlsUrl);
})
.catch(err => console.error(err));

Your encoder pushes to the rtmpUrl. Viewers play the hlsUrl. LiveAPI handles the conversion, ABR transcoding, CDN delivery (Akamai, Cloudflare, Fastly), and server failover automatically.

This approach makes sense when:
– You’re building an app and want to ship a streaming feature in days, not months
– You need global delivery without managing your own CDN
– You need live-to-VOD — recordings available immediately after the stream ends
– You want to stream to multiple platforms from the same RTMP feed


Building an ABR Ladder for HLS Delivery

A single-quality HLS stream works for controlled environments where you know every viewer’s bandwidth. For consumer-facing products, you want multiple quality renditions — so viewers on slow connections get smooth 360p while viewers on fiber get 1080p.

Here’s a standard ABR ladder for live streaming:

Rendition   Resolution   Video Bitrate   Audio Bitrate
1080p       1920×1080    6,000 Kbps      192 Kbps
720p        1280×720     3,000 Kbps      128 Kbps
480p        854×480      1,500 Kbps      128 Kbps
360p        640×360      800 Kbps        96 Kbps
240p        426×240      400 Kbps        64 Kbps

Multi-rendition FFmpeg command (3 outputs):

ffmpeg -i rtmp://localhost/live/stream \
  -filter_complex \
    "[0:v]split=3[v1][v2][v3]; \
     [v1]scale=w=1280:h=720[v720p]; \
     [v2]scale=w=854:h=480[v480p]; \
     [v3]scale=w=640:h=360[v360p]" \
  -map "[v720p]" -c:v libx264 -b:v 3000k -preset veryfast \
    -map 0:a -c:a aac -b:a 128k \
    -f hls -hls_time 4 -hls_list_size 5 \
    /var/www/html/hls/720p/stream.m3u8 \
  -map "[v480p]" -c:v libx264 -b:v 1500k -preset veryfast \
    -map 0:a -c:a aac -b:a 128k \
    -f hls -hls_time 4 -hls_list_size 5 \
    /var/www/html/hls/480p/stream.m3u8 \
  -map "[v360p]" -c:v libx264 -b:v 800k -preset veryfast \
    -map 0:a -c:a aac -b:a 96k \
    -f hls -hls_time 4 -hls_list_size 5 \
    /var/www/html/hls/360p/stream.m3u8

After generating the individual rendition playlists, create a master M3U8 by hand (or script it):

#EXTM3U
#EXT-X-VERSION:3

#EXT-X-STREAM-INF:BANDWIDTH=3128000,RESOLUTION=1280x720
720p/stream.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1628000,RESOLUTION=854x480
480p/stream.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=896000,RESOLUTION=640x360
360p/stream.m3u8

The player fetches the master M3U8, reads the BANDWIDTH values, measures actual throughput, and switches between renditions during playback.

One critical detail: your keyframe interval (GOP size) must match your segment duration. For 4-second HLS segments, configure your encoder to output a keyframe every 4 seconds. Misaligned keyframes cause segment boundary issues and buffering. Check your streaming bitrates and keyframe interval settings before going live.
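As a concrete example: at 30 fps with 4-second segments, the GOP is 30 × 4 = 120 frames. With libx264, these flags (added to the encode commands above) pin the keyframe cadence:

```
-g 120 -keyint_min 120 -sc_threshold 0
```

-sc_threshold 0 disables scene-change keyframe insertion, so every GOP stays exactly 120 frames and segment boundaries line up.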


Low-Latency HLS (LL-HLS)

Standard HLS has 10–30 seconds of end-to-end latency. That delay comes from several sources: encoder buffer (1–2s), segment duration (4–6s), playlist update interval (one segment), and player buffering (2–3 segments). Each adds up.

Low-Latency HLS (LL-HLS), introduced by Apple in 2019, cuts end-to-end latency to 2–5 seconds using partial segments. Instead of waiting for a full 4-second segment to finish, the server publishes 200ms partial segments as they’re generated. Players load partials incrementally, which means they start playing sooner and stay closer to the live edge.
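In playlist terms, partial segments appear as EXT-X-PART entries ahead of the full segment. A simplified excerpt, with illustrative file names and timing:

```
#EXT-X-PART-INF:PART-TARGET=0.2
#EXTINF:4.000,
segment120.mp4
#EXT-X-PART:DURATION=0.200,URI="segment121.0.mp4"
#EXT-X-PART:DURATION=0.200,URI="segment121.1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment121.2.mp4"
```

The EXT-X-PRELOAD-HINT tag lets the player request the next partial segment before it has finished being written.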

FFmpeg’s HLS muxer can’t produce true LL-HLS partial segments on its own, but it can generate the low-latency-friendly output that LL-HLS delivery builds on:

ffmpeg -i rtmp://localhost/live/stream \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -c:a aac \
  -f hls \
  -hls_time 1 \
  -hls_segment_type fmp4 \
  -hls_flags independent_segments+program_date_time \
  -master_pl_name master.m3u8 \
  /var/www/html/hls/stream.m3u8

Key changes for LL-HLS:
-hls_segment_type fmp4 — required; LL-HLS uses fragmented MP4 instead of MPEG-TS
-hls_time 1 — shorter segments reduce latency
-tune zerolatency — reduces encoder look-ahead buffer in x264

LL-HLS also requires HTTP/2 and a server that supports chunked transfer encoding to deliver partial segments before they’re complete. Standard NGINX needs additional configuration for this. Most managed video APIs handle LL-HLS automatically.

Use LL-HLS for sports, live auctions, interactive events, or any use case where a 10+ second delay creates a broken experience. For lectures, webinars, or broadcast-style events where delay isn’t noticeable, standard HLS is fine.


Essential Infrastructure for RTMP-to-HLS Pipelines

Whether you’re self-hosting or using a cloud API, these are the components your pipeline needs.

Media Server (RTMP Ingest)

The media server receives your RTMP push, manages connections, and runs the HLS conversion. Your options:

  • NGINX + nginx-rtmp-module — free, open source, good for small to medium setups
  • SRS (Simple Realtime Server) — newer, supports LL-HLS natively, active development
  • Wowza Streaming Engine — commercial, enterprise-grade with built-in transcoding
  • Managed API (LiveAPI) — no server to maintain, scales automatically

Your RTMP server configuration determines ingest stability, reconnection handling, and maximum concurrent stream capacity.

CDN for Segment Delivery

HLS .ts and .fmp4 segments need to reach viewers fast, from edge nodes close to them. A CDN for live streaming caches segments geographically, which reduces latency for distant viewers and keeps your origin server load flat even during traffic spikes.

For DIY setups, Cloudflare’s free tier works for low-traffic streams. For high-concurrency events with SLA requirements, Akamai or Fastly are better options. Managed APIs include CDN delivery in the platform cost.

HLS Player

Any HLS-compatible player works for playback. Common choices for web:

  • hls.js — JavaScript library, works in any browser including Chrome on desktop, most widely used
  • Video.js with HLS plugin
  • Shaka Player — Google’s open-source player, supports both HLS and DASH
  • Native <video> tag — works on Safari and mobile browsers without any library

For mobile, ExoPlayer (Android) and AVPlayer (iOS) handle HLS natively. If you want to embed a live stream on a website, hls.js with a <video> tag is the path of least resistance.
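A minimal sketch of that embed (the playback URL is a placeholder):

```html
<video id="player" controls muted autoplay></video>
<script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
<script>
  const video = document.getElementById('player');
  const src = 'https://example.com/hls/stream.m3u8'; // placeholder URL

  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari and iOS play HLS natively through the video element
    video.src = src;
  } else if (Hls.isSupported()) {
    // Everywhere else, hls.js parses the playlist and feeds Media Source Extensions
    const hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(video);
  }
</script>
```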

Storage and Live-to-VOD

One advantage of HLS is that segments are just files. If you store them in object storage (Amazon S3, Google Cloud Storage, Cloudflare R2) instead of deleting them, you get an instant VOD archive of every live stream. Add #EXT-X-ENDLIST to the M3U8 when the stream ends and it becomes a standard on-demand video file — no extra processing needed.
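The finalization itself is a one-line append. A sketch with a stand-in playlist path; in a real pipeline you’d trigger this when the encoder disconnects:

```shell
# Append the end-of-stream tag; players then treat the playlist as on-demand video.
# 'stream.m3u8' stands in for the live playlist path on your server or storage bucket.
printf '#EXT-X-ENDLIST\n' >> stream.m3u8
```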


At this point, you have a complete picture of the RTMP-to-HLS pipeline: what’s happening technically, how to implement it three different ways, and how to add ABR and low-latency delivery. The remaining question is which approach fits your stack.


Is RTMP-to-HLS Right for Your Project?

Self-hosted (FFmpeg or NGINX) makes sense if:
– You have DevOps resources to maintain streaming infrastructure
– You’re running 1–5 concurrent streams at most
– Your audience is concentrated in a single region (no CDN needed)
– You want full control over the transcoding pipeline
– Budget is the main constraint

Managed API makes sense if:
– Your team wants to ship a streaming feature without owning the infrastructure
– You need global delivery with multi-CDN redundancy
– You’re expecting unpredictable or variable stream volume
– You need live-to-VOD out of the box

Consider upgrading your ingest protocol if:
– You’re building a new platform from scratch (SRT handles packet loss better than RTMP over unreliable networks — see SRT vs RTMP)
– You need end-to-end encryption at the transport layer
– You’re working with broadcast equipment that supports SRT or NDI natively

For most teams building streaming features into an application, a managed API is the right call. Running your own RTMP-to-HLS stack at scale — handling failover, segment storage, CDN integration, and monitoring — is a substantial operational commitment.


RTMP to HLS FAQ

What is the difference between RTMP and HLS?
RTMP is an ingest protocol — it carries a live stream from your encoder to a media server over TCP. HLS is a delivery protocol — it serves the stream to viewers as short HTTP segments. RTMP is fast but browser-incompatible; HLS works on every modern device. Most live streaming pipelines use RTMP for ingest and HLS for delivery.

Can FFmpeg convert RTMP to HLS directly?
Yes. FFmpeg connects to an incoming RTMP stream, transmuxes or transcodes it, and writes HLS segments to disk in real time. Use -i rtmp://server/app/key as the input and -f hls as the output format. For transmuxing without re-encoding, add -c:v copy -c:a copy. For video encoding to multiple ABR renditions, use the filter_complex split approach shown above.

What is transmuxing, and when should I use it?
Transmuxing means changing the container format (FLV to MPEG-TS) without re-encoding the video or audio. It’s fast and CPU-efficient but produces only one output quality. Use it when your encoder already outputs the right resolution and bitrate for your audience, or when you’re optimizing for minimal server-side processing. Use transcoding when you need multiple quality renditions for ABR delivery.

What is the latency of RTMP to HLS conversion?
Standard HLS adds 10–30 seconds of end-to-end latency. The main contributors are: encoder buffer (1–2s), segment duration (4–6s), playlist update interval (one segment), and player buffer (2–3 segments). With Low-Latency HLS, you can cut this to 2–5 seconds using partial segments and fMP4 containers.

Do I need an RTMP server to convert to HLS?
Yes — something needs to receive the RTMP push and run the conversion. That can be a self-hosted media server (NGINX + rtmp-module, SRS), a standalone FFmpeg process, or a managed cloud API. A plain NGINX web server without the RTMP module can’t accept RTMP ingest. You need the rtmp {} block in your NGINX config or a separate media server process.

What is the nginx-rtmp-module?
The nginx-rtmp-module is an open-source extension for NGINX that adds RTMP ingest and HLS output capability. It listens on port 1935, accepts encoder push streams, and writes HLS segments to disk automatically. It’s the most commonly used open-source solution for self-hosted RTMP-to-HLS conversion. Configuration goes in the rtmp {} block of nginx.conf.

Is RTMP still used in 2025?
Yes, but only for ingest — not delivery. RTMP remains the standard push protocol for encoders and broadcasting software because it’s fast, stable, and universally supported. YouTube, Twitch, and Facebook still accept RTMP push streams and convert them to HLS on the delivery side. Newer protocols like SRT are gaining adoption for production use cases where packet loss recovery and encryption matter.

What is the best segment duration for live HLS?
For standard HLS, 4–6 second segments are a good balance between latency and connection resilience. Shorter segments (1–2s) reduce latency but increase HTTP request overhead and playlist update frequency. Longer segments (8–10s) reduce overhead but add latency. For LL-HLS, use 1-second segments with partial segment delivery. Always match your encoder’s keyframe interval to your segment duration.

Can I push an RTMP stream to HLS and broadcast to multiple platforms at the same time?
Yes. You can configure NGINX to output HLS segments locally and push copies of the RTMP stream to YouTube, Twitch, or other destinations using the push directive in the rtmp block. Alternatively, use a live streaming API that handles multistreaming — LiveAPI’s multistream feature lets you push once and rebroadcast to 30+ destinations without managing each connection separately.


Start Building Your RTMP-to-HLS Pipeline

For a quick test, FFmpeg gives you a working HLS stream in minutes. For a production self-hosted setup, NGINX with the rtmp-module handles multi-stream ingest cleanly. For teams building streaming into an application, a managed video API handles the full pipeline — RTMP ingest, ABR transcoding, CDN delivery, and live-to-VOD — without any infrastructure to maintain.

Get started with LiveAPI and push your first RTMP stream to a global HLS delivery network in minutes.
