{"id":846,"date":"2026-04-04T14:13:31","date_gmt":"2026-04-04T07:13:31","guid":{"rendered":"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/"},"modified":"2026-04-03T11:57:05","modified_gmt":"2026-04-03T04:57:05","slug":"rtmp-to-hls","status":"publish","type":"post","link":"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/","title":{"rendered":"RTMP to HLS: How to Convert Live Streams for Any Device"},"content":{"rendered":"<span class=\"rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\">12<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span><p>When you push a live stream from OBS, Wirecast, or a hardware encoder, it leaves your encoder over RTMP. That&#8217;s the standard ingest protocol \u2014 fast, reliable, and natively supported by every encoder on the market. The problem is on the delivery side: browsers, iPhones, Android devices, and smart TVs can&#8217;t play RTMP directly. They need HLS.<\/p>\n<p>Converting an RTMP stream to HLS is one of the most common tasks in live streaming infrastructure. It&#8217;s the bridge between how you capture video and how your viewers watch it. 
How you handle it \u2014 with FFmpeg, NGINX, or a cloud API \u2014 directly affects latency, scalability, and how much infrastructure you have to manage.<\/p>\n<p>This guide covers what RTMP-to-HLS conversion is, why it&#8217;s necessary, the difference between transmuxing and transcoding, three implementation methods with working commands, how to build an ABR ladder, and how LL-HLS cuts latency to under 5 seconds.<\/p>\n<hr \/>\n<h2>What Is RTMP to HLS Conversion?<\/h2>\n<p>RTMP to HLS conversion is the process of taking an incoming live stream delivered via RTMP and repackaging or re-encoding it as HLS segments for delivery to viewers.<\/p>\n<p><a href=\"https:\/\/liveapi.com\/blog\/what-is-rtmp\/\" target=\"_blank\" rel=\"noopener\">RTMP<\/a> and HLS are two different protocols built for different stages of the live streaming pipeline:<\/p>\n<ul>\n<li><strong>RTMP (Real-Time Messaging Protocol)<\/strong> is a TCP-based protocol originally developed by Macromedia for Flash Player. It maintains a persistent connection between the encoder and the server, giving it low latency \u2014 typically 0.8\u20133 seconds. It runs on port 1935 and carries video in an FLV container.<\/li>\n<li><strong><a href=\"https:\/\/liveapi.com\/blog\/what-is-hls\/\" target=\"_blank\" rel=\"noopener\">HLS (HTTP Live Streaming)<\/a><\/strong> is an HTTP-based delivery protocol <a href=\"https:\/\/developer.apple.com\/documentation\/http-live-streaming\">developed by Apple<\/a>. 
It splits a video stream into short <code>.ts<\/code> (MPEG Transport Stream) segments \u2014 typically 2\u20136 seconds each \u2014 and publishes an <a href=\"https:\/\/liveapi.com\/blog\/what-is-m3u8\/\" target=\"_blank\" rel=\"noopener\">M3U8 playlist<\/a> that players use to request those segments in sequence.<\/li>\n<\/ul>\n<table>\n<thead>\n<tr>\n<th><\/th>\n<th>RTMP<\/th>\n<th>HLS<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Protocol<\/strong><\/td>\n<td>TCP<\/td>\n<td>HTTP<\/td>\n<\/tr>\n<tr>\n<td><strong>Role<\/strong><\/td>\n<td>Ingest (encoder \u2192 server)<\/td>\n<td>Delivery (server \u2192 viewer)<\/td>\n<\/tr>\n<tr>\n<td><strong>Container<\/strong><\/td>\n<td>FLV<\/td>\n<td>MPEG-TS or fMP4<\/td>\n<\/tr>\n<tr>\n<td><strong>Default latency<\/strong><\/td>\n<td>0.8\u20133 seconds<\/td>\n<td>6\u201330 seconds (standard)<\/td>\n<\/tr>\n<tr>\n<td><strong>Low-latency mode<\/strong><\/td>\n<td>N\/A<\/td>\n<td>2\u20135 seconds (LL-HLS)<\/td>\n<\/tr>\n<tr>\n<td><strong>Browser support<\/strong><\/td>\n<td>None (Flash removed)<\/td>\n<td>Native on all modern browsers<\/td>\n<\/tr>\n<tr>\n<td><strong>CDN-friendly<\/strong><\/td>\n<td>No<\/td>\n<td>Yes<\/td>\n<\/tr>\n<tr>\n<td><strong>ABR support<\/strong><\/td>\n<td>No<\/td>\n<td>Yes<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Converting RTMP to HLS means your media server \u2014 or a cloud API \u2014 receives the RTMP stream, processes it in real time, and outputs HLS segments that any device can play over a standard web server or CDN.<\/p>\n<hr \/>\n<h2>Why Convert RTMP to HLS?<\/h2>\n<p>RTMP still dominates as an ingest protocol because every encoder \u2014 OBS, Wirecast, vMix, Telestream, hardware encoders from Blackmagic and Teradek \u2014 outputs RTMP natively. It&#8217;s stable, battle-tested, and fast. The problem is on the delivery side.<\/p>\n<p><strong>Browser compatibility.<\/strong> Adobe ended Flash support in 2020. Chrome, Firefox, Safari, and Edge no longer play RTMP natively. 
There&#8217;s no way to play an RTMP stream directly in a browser today without a plugin, and most enterprise environments block plugins entirely. HLS plays natively in every modern browser.<\/p>\n<p><strong>Mobile delivery.<\/strong> iOS has supported HLS natively since iOS 3. Android handles it through ExoPlayer and the native media player. Neither platform plays RTMP.<\/p>\n<p><strong>CDN caching.<\/strong> CDNs are built around HTTP. MPEG-TS segment files cache naturally at edge nodes, which means your HLS stream scales to thousands of concurrent viewers without hammering your origin server. RTMP is a persistent TCP connection \u2014 CDNs can&#8217;t cache it, and you&#8217;d need one server connection per viewer.<\/p>\n<p><strong>Adaptive bitrate delivery.<\/strong> HLS supports multiple quality renditions in a single stream. Your server generates separate M3U8 playlists for 1080p, 720p, 480p, and 360p, and the player switches between them automatically based on the viewer&#8217;s bandwidth. RTMP delivers one fixed quality to everyone.<\/p>\n<p><strong>DVR and catch-up.<\/strong> HLS segment files sit on a server or storage bucket. Keep them around and viewers can rewind during a live broadcast or watch a replay after it ends. RTMP streams don&#8217;t have this option without additional recording infrastructure.<\/p>\n<p>If you want to <a href=\"https:\/\/liveapi.com\/blog\/how-to-stream-live-video\/\" target=\"_blank\" rel=\"noopener\">stream live video<\/a> to any device \u2014 web, mobile, or connected TV \u2014 RTMP-to-HLS conversion is part of the foundation.<\/p>\n<hr \/>\n<h2>Transmuxing vs. Transcoding: Which One Do You Need?<\/h2>\n<p>Before choosing an implementation method, you need to understand the difference between transmuxing and <a href=\"https:\/\/liveapi.com\/blog\/what-is-video-transcoding\/\" target=\"_blank\" rel=\"noopener\">video transcoding<\/a>. 
They&#8217;re often confused, but they&#8217;re not the same thing.<\/p>\n<p><strong>Transmuxing<\/strong> changes the container format without touching the encoded video data. For RTMP to HLS, that means taking the H.264 video and AAC audio from the FLV container and repackaging them into MPEG-TS segments (<code>.ts<\/code> files) with an M3U8 index. The video is never decoded or re-encoded \u2014 it&#8217;s moved from one wrapper to another.<\/p>\n<p>This is fast and CPU-efficient. A single server can transmux dozens of concurrent streams with minimal overhead. The trade-off: you get one output quality. Whatever resolution and bitrate the encoder sent is what viewers receive.<\/p>\n<p><strong>Transcoding<\/strong> decodes the incoming video and re-encodes it at different resolutions and bitrates. You start with a 1080p RTMP stream and output separate renditions: 1080p at 6 Mbps, 720p at 3 Mbps, 480p at 1.5 Mbps, 360p at 800 Kbps. Those renditions feed into a master M3U8 playlist, and the player handles <a href=\"https:\/\/liveapi.com\/blog\/adaptive-bitrate-streaming\/\" target=\"_blank\" rel=\"noopener\">adaptive bitrate streaming<\/a> automatically.<\/p>\n<p>Transcoding is CPU-heavy. Each rendition requires a separate encode pass in real time, which means significant processing power. 
GPU-accelerated encoding (NVIDIA NVENC, Apple VideoToolbox) helps, but the infrastructure cost is still much higher than transmuxing.<\/p>\n<table>\n<thead>\n<tr>\n<th><\/th>\n<th>Transmuxing<\/th>\n<th>Transcoding<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>What changes<\/strong><\/td>\n<td>Container only<\/td>\n<td>Container + codec + quality<\/td>\n<\/tr>\n<tr>\n<td><strong>Output renditions<\/strong><\/td>\n<td>One (source quality)<\/td>\n<td>Multiple (ABR)<\/td>\n<\/tr>\n<tr>\n<td><strong>CPU usage<\/strong><\/td>\n<td>Low<\/td>\n<td>High<\/td>\n<\/tr>\n<tr>\n<td><strong>Latency added<\/strong><\/td>\n<td>Near-zero<\/td>\n<td>0.5\u20132 seconds<\/td>\n<\/tr>\n<tr>\n<td><strong>Best for<\/strong><\/td>\n<td>Single-quality delivery<\/td>\n<td>Adaptive bitrate delivery<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>If your encoder already outputs multiple renditions (some hardware encoders support this), you can transmux each one separately. If you have a single RTMP feed and need to reach viewers on different connections and devices, you need transcoding.<\/p>\n<hr \/>\n<h2>How to Convert RTMP to HLS: 3 Methods<\/h2>\n<h3>Method 1: FFmpeg<\/h3>\n<p><a href=\"https:\/\/ffmpeg.org\/\">FFmpeg<\/a> is a free, open-source command-line tool that handles both transmuxing and transcoding. 
It&#8217;s the fastest way to test an RTMP-to-HLS conversion pipeline and works well for development or simple single-stream setups.<\/p>\n<p><strong>Basic transmux (single quality, no re-encode):<\/strong><\/p>\n<pre><code class=\"language-bash\">ffmpeg -i rtmp:\/\/localhost\/live\/stream \\\r\n  -c:v copy \\\r\n  -c:a copy \\\r\n  -f hls \\\r\n  -hls_time 4 \\\r\n  -hls_list_size 5 \\\r\n  -hls_flags delete_segments \\\r\n  \/var\/www\/html\/hls\/stream.m3u8\r\n<\/code><\/pre>\n<p>What each flag does:<br \/>\n&#8211; <code>-c:v copy -c:a copy<\/code> \u2014 copies video and audio without re-encoding (transmux only)<br \/>\n&#8211; <code>-hls_time 4<\/code> \u2014 creates 4-second segments<br \/>\n&#8211; <code>-hls_list_size 5<\/code> \u2014 keeps the last 5 segments in the playlist (about 20 seconds of buffer)<br \/>\n&#8211; <code>-hls_flags delete_segments<\/code> \u2014 removes old <code>.ts<\/code> files automatically to avoid filling the disk<\/p>\n<p>This writes <code>.ts<\/code> files and a <code>stream.m3u8<\/code> playlist to <code>\/var\/www\/html\/hls\/<\/code>. Serve that directory from NGINX or Apache and you have a working HLS stream.<\/p>\n<p><strong>Transcode to a single output (with re-encode):<\/strong><\/p>\n<pre><code class=\"language-bash\">ffmpeg -i rtmp:\/\/localhost\/live\/stream \\\r\n  -c:v libx264 -preset veryfast -b:v 3000k -maxrate 3500k -bufsize 6000k \\\r\n  -c:a aac -b:a 128k \\\r\n  -f hls \\\r\n  -hls_time 4 \\\r\n  -hls_list_size 5 \\\r\n  \/var\/www\/html\/hls\/stream.m3u8\r\n<\/code><\/pre>\n<p>Use <code>-preset veryfast<\/code> or <code>ultrafast<\/code> for real-time encoding. Slower presets produce smaller files but won&#8217;t keep up with a live feed \u2014 FFmpeg will start dropping frames and falling behind.<\/p>\n<p>FFmpeg is good for development, quick tests, and proof-of-concept work. 
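<\/p>\n<p>One quick sanity check is to read the playlist FFmpeg writes. The sketch below builds a hypothetical sample playlist (segment names and sequence numbers are illustrative, not real output) and sums the <code>#EXTINF<\/code> durations; with <code>-hls_list_size 5<\/code> and <code>-hls_time 4<\/code>, a live playlist should hold roughly 20 seconds of rolling buffer:<\/p>\n<pre><code class=\"language-bash\"># Write a small sample playlist shaped like FFmpeg's live output\r\n# (hypothetical segment names; a real playlist rolls as old segments are deleted)\r\ncat &gt; \/tmp\/stream.m3u8 &lt;&lt;'EOF'\r\n#EXTM3U\r\n#EXT-X-VERSION:3\r\n#EXT-X-TARGETDURATION:4\r\n#EXT-X-MEDIA-SEQUENCE:12\r\n#EXTINF:4.000,\r\nstream12.ts\r\n#EXTINF:4.000,\r\nstream13.ts\r\n#EXTINF:4.000,\r\nstream14.ts\r\n#EXTINF:4.000,\r\nstream15.ts\r\n#EXTINF:4.000,\r\nstream16.ts\r\nEOF\r\n\r\n# Sum the EXTINF durations: 5 segments x 4s = the ~20-second window\r\nawk -F'[:,]' '\/^#EXTINF\/ { total += $2 } END { print total }' \/tmp\/stream.m3u8  # prints 20\r\n<\/code><\/pre>\n<p>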
For production with multiple concurrent streams, you want a media server that manages ingest, segmentation, and playlist updates without a separate FFmpeg process per stream.<\/p>\n<hr \/>\n<h3>Method 2: NGINX with the RTMP Module<\/h3>\n<p>The <a href=\"https:\/\/github.com\/arut\/nginx-rtmp-module\">nginx-rtmp-module<\/a> turns NGINX into a media server. It accepts RTMP ingest on port 1935 and outputs HLS segments directly \u2014 without requiring a separate FFmpeg process for each stream.<\/p>\n<p>This is the most common self-hosted approach for RTMP-to-HLS in production. One NGINX instance can handle multiple <a href=\"https:\/\/liveapi.com\/blog\/live-rtmp-stream\/\" target=\"_blank\" rel=\"noopener\">live RTMP streams<\/a> with minimal configuration.<\/p>\n<p><strong>Install NGINX with the RTMP module:<\/strong><\/p>\n<pre><code class=\"language-bash\">sudo apt-get install nginx libnginx-mod-rtmp\r\n<\/code><\/pre>\n<p>Or build from source (the module compiles into NGINX, so you need the NGINX source tree as well):<\/p>\n<pre><code class=\"language-bash\">git clone https:\/\/github.com\/arut\/nginx-rtmp-module.git\r\nwget https:\/\/nginx.org\/download\/nginx-1.26.2.tar.gz\r\ntar -xzf nginx-1.26.2.tar.gz\r\ncd nginx-1.26.2\r\n.\/configure --add-module=..\/nginx-rtmp-module\r\nmake &amp;&amp; sudo make install\r\n<\/code><\/pre>\n<p><strong>nginx.conf \u2014 complete RTMP + HTTP configuration:<\/strong><\/p>\n<pre><code class=\"language-nginx\">rtmp {\r\n    server {\r\n        listen 1935;\r\n        chunk_size 4096;\r\n\r\n        application live {\r\n            live on;\r\n            record off;\r\n\r\n            hls on;\r\n            hls_path \/var\/www\/html\/hls;\r\n            hls_fragment 4s;\r\n            hls_playlist_length 30s;\r\n        }\r\n    }\r\n}\r\n\r\nhttp {\r\n    server {\r\n        listen 80;\r\n\r\n        location \/hls {\r\n            types {\r\n                application\/vnd.apple.mpegurl m3u8;\r\n                video\/mp2t ts;\r\n            }\r\n            root \/var\/www\/html;\r\n            add_header Cache-Control no-cache;\r\n            add_header Access-Control-Allow-Origin *;\r\n   
     }\r\n    }\r\n}\r\n<\/code><\/pre>\n<p>Once NGINX is running with this config, push an RTMP stream from OBS or any encoder:<br \/>\n&#8211; <strong>RTMP ingest URL:<\/strong> <code>rtmp:\/\/your-server-ip\/live<\/code><br \/>\n&#8211; <strong>Stream key:<\/strong> anything (e.g., <code>mystream<\/code>)<\/p>\n<p>NGINX automatically generates HLS segments and a playlist at <code>\/var\/www\/html\/hls\/mystream.m3u8<\/code>. Your <strong>HLS playback URL<\/strong> is <code>http:\/\/your-server-ip\/hls\/mystream.m3u8<\/code>.<\/p>\n<p>The <code>Cache-Control: no-cache<\/code> header prevents CDN or browser caching of the M3U8 playlist, which changes every few seconds. Segment files never change once written, so in production you&#8217;d scope <code>no-cache<\/code> to <code>.m3u8<\/code> responses only and let the <code>.ts<\/code> files cache normally; as written above, the header applies to both.<\/p>\n<p>The NGINX + rtmp-module approach is solid for self-hosted setups, but you&#8217;re responsible for server provisioning, scaling, health monitoring, and disk management. For a one-server setup handling a handful of streams, it works well. For a product feature handling variable concurrent streams, the operational overhead adds up.<\/p>\n<hr \/>\n<h3>Method 3: Cloud API<\/h3>\n<p>Both FFmpeg and NGINX require you to manage the underlying infrastructure. For teams building <a href=\"https:\/\/liveapi.com\/live-streaming-api\/\" target=\"_blank\" rel=\"noopener\">live streaming<\/a> as a product feature, that&#8217;s engineering time spent on server operations instead of your application.<\/p>\n<p>Cloud-based video APIs handle the full pipeline: receive your RTMP stream, transcode it to multiple HLS renditions, distribute segments through a global CDN, and give you an HLS playback URL. 
You push one RTMP stream and get back a playback URL that works on any device.<\/p>\n<p>With <a href=\"https:\/\/liveapi.com\/features\/\" target=\"_blank\" rel=\"noopener\">LiveAPI<\/a>, the setup takes a few lines of code:<\/p>\n<pre><code class=\"language-javascript\">const sdk = require('api')('@liveapi\/v1.0#5pfjhgkzh9rzt4');\r\n\r\nsdk.post('\/live-streams', {\r\n  record: true,\r\n  name: 'My Live Stream'\r\n})\r\n.then(({ data }) =&gt; {\r\n  console.log('RTMP ingest URL:', data.rtmpUrl);\r\n  console.log('HLS playback URL:', data.hlsUrl);\r\n})\r\n.catch(err =&gt; console.error(err));\r\n<\/code><\/pre>\n<p>Your encoder pushes to the <code>rtmpUrl<\/code>. Viewers play the <code>hlsUrl<\/code>. LiveAPI handles the conversion, ABR transcoding, CDN delivery (Akamai, Cloudflare, Fastly), and server failover automatically.<\/p>\n<p>This approach makes sense when:<br \/>\n&#8211; You&#8217;re building an app and want to ship a streaming feature in days, not months<br \/>\n&#8211; You need global delivery without managing your own CDN<br \/>\n&#8211; You need live-to-VOD \u2014 recordings available immediately after the stream ends<br \/>\n&#8211; You want to <a href=\"https:\/\/liveapi.com\/blog\/stream-to-multiple-platforms\/\" target=\"_blank\" rel=\"noopener\">stream to multiple platforms<\/a> from the same RTMP feed<\/p>\n<hr \/>\n<h2>Building an ABR Ladder for HLS Delivery<\/h2>\n<p>A single-quality HLS stream works for controlled environments where you know every viewer&#8217;s bandwidth. 
For consumer-facing products, you want multiple quality renditions \u2014 so viewers on slow connections get smooth 360p while viewers on fiber get 1080p.<\/p>\n<p>Here&#8217;s a standard ABR ladder for live streaming:<\/p>\n<table>\n<thead>\n<tr>\n<th>Rendition<\/th>\n<th>Resolution<\/th>\n<th>Video Bitrate<\/th>\n<th>Audio Bitrate<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>1080p<\/td>\n<td>1920\u00d71080<\/td>\n<td>6,000 Kbps<\/td>\n<td>192 Kbps<\/td>\n<\/tr>\n<tr>\n<td>720p<\/td>\n<td>1280\u00d7720<\/td>\n<td>3,000 Kbps<\/td>\n<td>128 Kbps<\/td>\n<\/tr>\n<tr>\n<td>480p<\/td>\n<td>854\u00d7480<\/td>\n<td>1,500 Kbps<\/td>\n<td>128 Kbps<\/td>\n<\/tr>\n<tr>\n<td>360p<\/td>\n<td>640\u00d7360<\/td>\n<td>800 Kbps<\/td>\n<td>96 Kbps<\/td>\n<\/tr>\n<tr>\n<td>240p<\/td>\n<td>426\u00d7240<\/td>\n<td>400 Kbps<\/td>\n<td>64 Kbps<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Multi-rendition FFmpeg command (3 outputs):<\/strong><\/p>\n<pre><code class=\"language-bash\">ffmpeg -i rtmp:\/\/localhost\/live\/stream \\\r\n  -filter_complex \\\r\n    \"[0:v]split=3[v1][v2][v3]; \\\r\n     [v1]scale=w=1280:h=720[v720p]; \\\r\n     [v2]scale=w=854:h=480[v480p]; \\\r\n     [v3]scale=w=640:h=360[v360p]\" \\\r\n  -map \"[v720p]\" -c:v libx264 -b:v 3000k -preset veryfast \\\r\n    -map 0:a -c:a aac -b:a 128k \\\r\n    -f hls -hls_time 4 -hls_list_size 5 \\\r\n    \/var\/www\/html\/hls\/720p\/stream.m3u8 \\\r\n  -map \"[v480p]\" -c:v libx264 -b:v 1500k -preset veryfast \\\r\n    -map 0:a -c:a aac -b:a 128k \\\r\n    -f hls -hls_time 4 -hls_list_size 5 \\\r\n    \/var\/www\/html\/hls\/480p\/stream.m3u8 \\\r\n  -map \"[v360p]\" -c:v libx264 -b:v 800k -preset veryfast \\\r\n    -map 0:a -c:a aac -b:a 96k \\\r\n    -f hls -hls_time 4 -hls_list_size 5 \\\r\n    \/var\/www\/html\/hls\/360p\/stream.m3u8\r\n<\/code><\/pre>\n<p>After generating the individual rendition playlists, create a master M3U8 by hand (or script 
it):<\/p>\n<pre><code>#EXTM3U\r\n#EXT-X-VERSION:3\r\n\r\n#EXT-X-STREAM-INF:BANDWIDTH=3128000,RESOLUTION=1280x720\r\n720p\/stream.m3u8\r\n\r\n#EXT-X-STREAM-INF:BANDWIDTH=1628000,RESOLUTION=854x480\r\n480p\/stream.m3u8\r\n\r\n#EXT-X-STREAM-INF:BANDWIDTH=896000,RESOLUTION=640x360\r\n360p\/stream.m3u8\r\n<\/code><\/pre>\n<p>The player fetches the master M3U8, reads the <code>BANDWIDTH<\/code> values, measures actual throughput, and switches between renditions during playback.<\/p>\n<p>One critical detail: your keyframe interval (GOP size) must match your segment duration. For 4-second HLS segments, configure your encoder to output a keyframe every 4 seconds. Misaligned keyframes cause segment boundary issues and buffering. Check your <a href=\"https:\/\/liveapi.com\/blog\/streaming-bit-rates\/\" target=\"_blank\" rel=\"noopener\">streaming bitrates<\/a> and keyframe interval settings before going live.<\/p>\n<hr \/>\n<h2>Low-Latency HLS (LL-HLS)<\/h2>\n<p>Standard HLS has 10\u201330 seconds of end-to-end latency. That delay comes from several sources: encoder buffer (1\u20132s), segment duration (4\u20136s), playlist update interval (one segment), and player buffering (2\u20133 segments). Each adds up.<\/p>\n<p>Low-Latency HLS (LL-HLS), introduced by Apple in 2019, cuts end-to-end latency to 2\u20135 seconds using partial segments. Instead of waiting for a full 4-second segment to finish, the server publishes 200ms partial segments as they&#8217;re generated. 
Players load partials incrementally, which means they start playing sooner and stay closer to the live edge.<\/p>\n<p><strong>To prepare a stream for LL-HLS with FFmpeg:<\/strong><\/p>\n<pre><code class=\"language-bash\">ffmpeg -i rtmp:\/\/localhost\/live\/stream \\\r\n  -c:v libx264 -preset ultrafast -tune zerolatency \\\r\n  -c:a aac \\\r\n  -f hls \\\r\n  -hls_time 1 \\\r\n  -hls_segment_type fmp4 \\\r\n  -hls_flags independent_segments+program_date_time \\\r\n  -master_pl_name master.m3u8 \\\r\n  \/var\/www\/html\/hls\/stream.m3u8\r\n<\/code><\/pre>\n<p>Key changes for LL-HLS:<br \/>\n&#8211; <code>-hls_segment_type fmp4<\/code> \u2014 required; LL-HLS uses fragmented MP4 (CMAF) instead of MPEG-TS<br \/>\n&#8211; <code>-hls_time 1<\/code> \u2014 shorter segments reduce latency<br \/>\n&#8211; <code>-tune zerolatency<\/code> \u2014 reduces encoder look-ahead buffer in x264<\/p>\n<p>Note that FFmpeg&#8217;s HLS muxer produces the short fMP4 segments LL-HLS builds on, but it doesn&#8217;t publish the partial-segment playlist tags itself. Full LL-HLS also requires HTTP\/2 and an origin or packager that can deliver partial segments before the parent segment is complete. Standard NGINX needs additional components for this. Most managed video APIs handle LL-HLS automatically.<\/p>\n<p>Use LL-HLS for sports, live auctions, interactive events, or any use case where a 10+ second delay creates a broken experience. For lectures, webinars, or broadcast-style events where delay isn&#8217;t noticeable, standard HLS is fine.<\/p>\n<hr \/>\n<h2>Essential Infrastructure for RTMP-to-HLS Pipelines<\/h2>\n<p>Whether you&#8217;re self-hosting or using a cloud API, these are the components your pipeline needs.<\/p>\n<h3>Media Server (RTMP Ingest)<\/h3>\n<p>The media server receives your RTMP push, manages connections, and runs the HLS conversion. 
Your options:<\/p>\n<ul>\n<li><strong>NGINX + nginx-rtmp-module<\/strong> \u2014 free, open source, good for small to medium setups<\/li>\n<li><strong>SRS (Simple Realtime Server)<\/strong> \u2014 newer, supports LL-HLS natively, active development<\/li>\n<li><strong>Wowza Streaming Engine<\/strong> \u2014 commercial, enterprise-grade with built-in transcoding<\/li>\n<li><strong>Managed API (LiveAPI)<\/strong> \u2014 no server to maintain, scales automatically<\/li>\n<\/ul>\n<p>Your <a href=\"https:\/\/liveapi.com\/blog\/rtmp-server\/\" target=\"_blank\" rel=\"noopener\">RTMP server<\/a> configuration determines ingest stability, reconnection handling, and maximum concurrent stream capacity.<\/p>\n<h3>CDN for Segment Delivery<\/h3>\n<p>HLS <code>.ts<\/code> and <code>.fmp4<\/code> segments need to reach viewers fast, from edge nodes close to them. A <a href=\"https:\/\/liveapi.com\/blog\/cdn-for-live-streaming\/\" target=\"_blank\" rel=\"noopener\">CDN for live streaming<\/a> caches segments geographically, which reduces latency for distant viewers and keeps your origin server load flat even during traffic spikes.<\/p>\n<p>For DIY setups, Cloudflare&#8217;s free tier works for low-traffic streams. For high-concurrency events with SLA requirements, Akamai or Fastly are better options. Managed APIs include CDN delivery in the platform cost.<\/p>\n<h3>HLS Player<\/h3>\n<p>Any HLS-compatible player works for playback. Common choices for web:<\/p>\n<ul>\n<li><strong>hls.js<\/strong> \u2014 JavaScript library, works in any browser including Chrome on desktop, most widely used<\/li>\n<li><strong>Video.js<\/strong> with HLS plugin<\/li>\n<li><strong>Shaka Player<\/strong> \u2014 Google&#8217;s open-source player, supports both HLS and DASH<\/li>\n<li><strong>Native <code>&lt;video&gt;<\/code> tag<\/strong> \u2014 works on Safari and mobile browsers without any library<\/li>\n<\/ul>\n<p>For mobile, ExoPlayer (Android) and AVPlayer (iOS) handle HLS natively. 
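<\/p>\n<p>For a quick browser smoke test, you can generate a bare-bones player page from the shell. This is a minimal sketch: the playback URL is a placeholder for your own stream, and the script tag pulls hls.js from the jsDelivr CDN:<\/p>\n<pre><code class=\"language-bash\"># Write a minimal test page: hls.js where needed, native HLS on Safari\/iOS.\r\n# Replace the placeholder URL, then copy the file into your web root.\r\ncat &gt; \/tmp\/player.html &lt;&lt;'EOF'\r\n&lt;!doctype html&gt;\r\n&lt;video id=\"video\" controls muted playsinline&gt;&lt;\/video&gt;\r\n&lt;script src=\"https:\/\/cdn.jsdelivr.net\/npm\/hls.js@latest\"&gt;&lt;\/script&gt;\r\n&lt;script&gt;\r\n  var video = document.getElementById('video');\r\n  var src = 'http:\/\/your-server-ip\/hls\/mystream.m3u8';\r\n  if (window.Hls &amp;&amp; Hls.isSupported()) {\r\n    var hls = new Hls();\r\n    hls.loadSource(src);\r\n    hls.attachMedia(video);\r\n  } else {\r\n    video.src = src; \/\/ Safari and iOS play HLS natively\r\n  }\r\n&lt;\/script&gt;\r\nEOF\r\n<\/code><\/pre>\n<p>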
If you want to <a href=\"https:\/\/liveapi.com\/blog\/embed-live-stream-on-website\/\" target=\"_blank\" rel=\"noopener\">embed a live stream<\/a> on a website, hls.js with a <code>&lt;video&gt;<\/code> tag is the path of least resistance.<\/p>\n<h3>Storage and Live-to-VOD<\/h3>\n<p>One advantage of HLS is that segments are just files. If you store them in object storage (Amazon S3, Google Cloud Storage, Cloudflare R2) instead of deleting them, you get an instant VOD archive of every live stream. Add <code>#EXT-X-ENDLIST<\/code> to the M3U8 when the stream ends and it becomes a standard on-demand video file \u2014 no extra processing needed.<\/p>\n<hr \/>\n<p><em>At this point, you have a complete picture of the RTMP-to-HLS pipeline: what&#8217;s happening technically, how to implement it three different ways, and how to add ABR and low-latency delivery. The remaining question is which approach fits your stack.<\/em><\/p>\n<hr \/>\n<h2>Is RTMP-to-HLS Right for Your Project?<\/h2>\n<p><strong>Self-hosted (FFmpeg or NGINX) makes sense if:<\/strong><br \/>\n&#8211; You have DevOps resources to maintain streaming infrastructure<br \/>\n&#8211; You&#8217;re running 1\u20135 concurrent streams at most<br \/>\n&#8211; Your audience is concentrated in a single region (no CDN needed)<br \/>\n&#8211; You want full control over the transcoding pipeline<br \/>\n&#8211; Budget is the main constraint<\/p>\n<p><strong>Managed API makes sense if:<\/strong><br \/>\n&#8211; Your team wants to ship a streaming feature without owning the infrastructure<br \/>\n&#8211; You need global delivery with multi-CDN redundancy<br \/>\n&#8211; You&#8217;re expecting unpredictable or variable stream volume<br \/>\n&#8211; You need live-to-VOD out of the box<\/p>\n<p><strong>Consider upgrading your ingest protocol if:<\/strong><br \/>\n&#8211; You&#8217;re building a new platform from scratch (SRT handles packet loss better than RTMP over unreliable networks \u2014 see <a 
href=\"https:\/\/liveapi.com\/blog\/srt-vs-rtmp\/\" target=\"_blank\" rel=\"noopener\">SRT vs RTMP<\/a>)<br \/>\n&#8211; You need end-to-end encryption at the transport layer<br \/>\n&#8211; You&#8217;re working with broadcast equipment that supports SRT or NDI natively<\/p>\n<p>For most teams building streaming features into an application, a managed API is the right call. Running your own RTMP-to-HLS stack at scale \u2014 handling failover, segment storage, CDN integration, and monitoring \u2014 is a substantial operational commitment.<\/p>\n<hr \/>\n<h2>RTMP to HLS FAQ<\/h2>\n<p><strong>What is the difference between RTMP and HLS?<\/strong><br \/>\nRTMP is an ingest protocol \u2014 it carries a live stream from your encoder to a media server over TCP. HLS is a delivery protocol \u2014 it serves the stream to viewers as short HTTP segments. RTMP is fast but browser-incompatible; HLS works on every modern device. Most live streaming pipelines use RTMP for ingest and HLS for delivery.<\/p>\n<p><strong>Can FFmpeg convert RTMP to HLS directly?<\/strong><br \/>\nYes. FFmpeg connects to an incoming RTMP stream, transmuxes or transcodes it, and writes HLS segments to disk in real time. Use <code>-i rtmp:\/\/server\/app\/key<\/code> as the input and <code>-f hls<\/code> as the output format. For transmuxing without re-encoding, add <code>-c:v copy -c:a copy<\/code>. For <a href=\"https:\/\/liveapi.com\/blog\/what-is-video-encoding\/\" target=\"_blank\" rel=\"noopener\">video encoding<\/a> to multiple ABR renditions, use the filter_complex split approach shown above.<\/p>\n<p><strong>What is transmuxing, and when should I use it?<\/strong><br \/>\nTransmuxing means changing the container format (FLV to MPEG-TS) without re-encoding the video or audio. It&#8217;s fast and CPU-efficient but produces only one output quality. 
Use it when your encoder already outputs the right resolution and bitrate for your audience, or when you&#8217;re optimizing for minimal server-side processing. Use transcoding when you need multiple quality renditions for ABR delivery.<\/p>\n<p><strong>What is the latency of RTMP to HLS conversion?<\/strong><br \/>\nStandard HLS adds 10\u201330 seconds of end-to-end latency. The main contributors are: encoder buffer (1\u20132s), segment duration (4\u20136s), playlist update interval (one segment), and player buffer (2\u20133 segments). With Low-Latency HLS, you can cut this to 2\u20135 seconds using partial segments and fMP4 containers.<\/p>\n<p><strong>Do I need an RTMP server to convert to HLS?<\/strong><br \/>\nYes \u2014 something needs to receive the RTMP push and run the conversion. That can be a self-hosted media server (NGINX + rtmp-module, SRS), a standalone FFmpeg process, or a managed cloud API. A plain NGINX web server without the RTMP module can&#8217;t accept RTMP ingest. You need the <code>rtmp {}<\/code> block in your NGINX config or a separate media server process.<\/p>\n<p><strong>What is the nginx-rtmp-module?<\/strong><br \/>\nThe nginx-rtmp-module is an open-source extension for NGINX that adds RTMP ingest and HLS output capability. It listens on port 1935, accepts encoder push streams, and writes HLS segments to disk automatically. It&#8217;s the most commonly used open-source solution for self-hosted RTMP-to-HLS conversion. Configuration goes in the <code>rtmp {}<\/code> block of <code>nginx.conf<\/code>.<\/p>\n<p><strong>Is RTMP still used in 2025?<\/strong><br \/>\nYes, but only for ingest \u2014 not delivery. RTMP remains the standard push protocol for encoders and broadcasting software because it&#8217;s fast, stable, and universally supported. YouTube, Twitch, and Facebook still accept RTMP push streams and convert them to HLS on the delivery side. 
Newer protocols like SRT are gaining adoption for production use cases where packet loss recovery and encryption matter.<\/p>\n<p><strong>What is the best segment duration for live HLS?<\/strong><br \/>\nFor standard HLS, 4\u20136 second segments are a good balance between latency and connection resilience. Shorter segments (1\u20132s) reduce latency but increase HTTP request overhead and playlist update frequency. Longer segments (8\u201310s) reduce overhead but add latency. For LL-HLS, use 1-second segments with partial segment delivery. Always match your encoder&#8217;s keyframe interval to your segment duration.<\/p>\n<p><strong>Can I push an RTMP stream to HLS and broadcast to multiple platforms at the same time?<\/strong><br \/>\nYes. You can configure NGINX to output HLS segments locally and push copies of the RTMP stream to YouTube, Twitch, or other destinations using the <code>push<\/code> directive in the rtmp block. Alternatively, use a <a href=\"https:\/\/liveapi.com\/blog\/best-live-streaming-apis\/\" target=\"_blank\" rel=\"noopener\">live streaming API<\/a> that handles multistreaming \u2014 LiveAPI&#8217;s multistream feature lets you push once and rebroadcast to 30+ destinations without managing each connection separately.<\/p>\n<hr \/>\n<h2>Start Building Your RTMP-to-HLS Pipeline<\/h2>\n<p>For a quick test, FFmpeg gives you a working HLS stream in minutes. For a production self-hosted setup, NGINX with the rtmp-module handles multi-stream ingest cleanly. 
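<\/p>\n<p>One detail the FAQ mentions without showing: the <code>push<\/code> directive that rebroadcasts an incoming stream to other RTMP destinations while NGINX writes HLS locally. A sketch of the relevant <code>application<\/code> block (the destination stream keys are placeholders):<\/p>\n<pre><code class=\"language-nginx\">application live {\r\n    live on;\r\n\r\n    # Local HLS output, as configured earlier\r\n    hls on;\r\n    hls_path \/var\/www\/html\/hls;\r\n    hls_fragment 4s;\r\n\r\n    # Relay the same RTMP feed to other platforms\r\n    push rtmp:\/\/a.rtmp.youtube.com\/live2\/YOUR-STREAM-KEY;\r\n    push rtmp:\/\/live.twitch.tv\/app\/YOUR-STREAM-KEY;\r\n}\r\n<\/code><\/pre>\n<p>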
For teams building streaming into an application, a <a href=\"https:\/\/liveapi.com\/blog\/live-video-streaming-platform\/\" target=\"_blank\" rel=\"noopener\">managed video API<\/a> handles the full pipeline \u2014 RTMP ingest, ABR transcoding, CDN delivery, and live-to-VOD \u2014 without any infrastructure to maintain.<\/p>\n<p><a href=\"https:\/\/liveapi.com\/\" target=\"_blank\" rel=\"noopener\">Get started with LiveAPI<\/a> and push your first RTMP stream to a global HLS delivery network in minutes.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\">12<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span> When you push a live stream from OBS, Wirecast, or a hardware encoder, it leaves over RTMP. That&#8217;s the standard ingest protocol \u2014 fast, reliable, and natively supported by every encoder on the market. The problem is on the delivery side: browsers, iPhones, Android devices, and smart TVs can&#8217;t play RTMP directly. They need HLS. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":847,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_title":"RTMP to HLS: How to Convert Live Streams for Any Device %%sep%% %%sitename%%","_yoast_wpseo_metadesc":"Learn how to convert RTMP to HLS for browser and mobile delivery. Covers transmuxing vs. 
transcoding, FFmpeg commands, NGINX config, ABR ladders, and LL-HLS.","inline_featured_image":false,"footnotes":""},"categories":[5,13],"tags":[],"class_list":["post-846","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-rtmp","category-hls"],"jetpack_featured_media_url":"https:\/\/liveapi.com\/blog\/wp-content\/uploads\/2026\/04\/rtmp-to-hls.jpg","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v15.6.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<meta name=\"description\" content=\"Learn how to convert RTMP to HLS for browser and mobile delivery. Covers transmuxing vs. transcoding, FFmpeg commands, NGINX config, ABR ladders, and LL-HLS.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"RTMP to HLS: How to Convert Live Streams for Any Device - LiveAPI Blog\" \/>\n<meta property=\"og:description\" content=\"Learn how to convert RTMP to HLS for browser and mobile delivery. Covers transmuxing vs. 
transcoding, FFmpeg commands, NGINX config, ABR ladders, and LL-HLS.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/\" \/>\n<meta property=\"og:site_name\" content=\"LiveAPI Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-04T07:13:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-03T04:57:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/liveapi.com\/blog\/wp-content\/uploads\/2026\/04\/rtmp-to-hls.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"940\" \/>\n\t<meta property=\"og:image:height\" content=\"625\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"17 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/liveapi.com\/blog\/#website\",\"url\":\"https:\/\/liveapi.com\/blog\/\",\"name\":\"LiveAPI Blog\",\"description\":\"Live Video Streaming API Blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/liveapi.com\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/liveapi.com\/blog\/wp-content\/uploads\/2026\/04\/rtmp-to-hls.jpg\",\"width\":940,\"height\":625,\"caption\":\"Photo by Wisam Alazawi on Pexels\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/#webpage\",\"url\":\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/\",\"name\":\"RTMP to HLS: How to Convert Live Streams for Any Device - LiveAPI 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/liveapi.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/#primaryimage\"},\"datePublished\":\"2026-04-04T07:13:31+00:00\",\"dateModified\":\"2026-04-03T04:57:05+00:00\",\"author\":{\"@id\":\"https:\/\/liveapi.com\/blog\/#\/schema\/person\/98f2ee8b3a0bd93351c0d9e8ce490e4a\"},\"description\":\"Learn how to convert RTMP to HLS for browser and mobile delivery. Covers transmuxing vs. transcoding, FFmpeg commands, NGINX config, ABR ladders, and LL-HLS.\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/liveapi.com\/blog\/rtmp-to-hls\/\"]}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/liveapi.com\/blog\/#\/schema\/person\/98f2ee8b3a0bd93351c0d9e8ce490e4a\",\"name\":\"govz\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/liveapi.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ab5cbe0543c0a44dc944c720159323bd001fc39a8ba5b1f137cd22e7578e84c9?s=96&d=mm&r=g\",\"caption\":\"govz\"},\"sameAs\":[\"https:\/\/liveapi.com\/blog\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/846","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/comments?post=846"}],"version-history":[{"count":2,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/846\/revisions"}],"predecessor-version":[{"id":849,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/846\/revisions\/849"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/media\/847"}],"wp:attachment":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/media?parent=846"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/categories?post=846"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/tags?post=846"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}