SRT Protocol: What It Is, How It Works, and Why It Matters


Live streaming over the public internet has always been a problem of reliability. RTMP, the protocol that carried most live video for over a decade, was built for controlled local networks — not the unpredictable paths between a remote broadcast truck and a data center on the other side of the world.

The SRT protocol (Secure Reliable Transport) was designed to solve exactly that. Developed by Haivision and open-sourced in 2017, SRT delivers broadcast-quality video over any IP network by combining UDP’s low-latency transport with intelligent error recovery. It has since become the go-to ingest protocol for professional live streaming, winning an Emmy Award and being adopted by more than 600 technology companies.

This guide covers what SRT is, how it works under the hood, how it compares to RTMP and other protocols, and when it makes sense for your live streaming stack.


What Is the SRT Protocol?

SRT (Secure Reliable Transport) is an open-source video transport protocol that delivers low-latency, high-quality live streams over unpredictable IP networks, including the public internet. It uses UDP as its underlying transport layer but adds TCP-like reliability, end-to-end AES encryption, and automatic error recovery on top.

Unlike RTMP, which relies on a stable TCP connection and degrades significantly on high-latency or lossy paths, SRT is built to handle packet loss, jitter, and network fluctuation without dropping stream quality.

Key facts at a glance:

| Property | Value |
| --- | --- |
| Developed by | Haivision (open-sourced 2017) |
| Underlying transport | UDP |
| Default latency | 120ms (configurable) |
| Encryption | AES 128-bit and 256-bit |
| Error recovery | ARQ (Automatic Repeat reQuest) |
| License | Mozilla Public License (MPL) |
| Governance | SRT Alliance (600+ members) |
| Emmy Award | 2018 Technology & Engineering Emmy |

The name breaks down the three core guarantees the protocol was built around:

  • Secure — End-to-end AES-256 encryption with passphrase-based key exchange
  • Reliable — Packet retransmission and error correction over lossy networks
  • Transport — A general-purpose video transport layer for any codec, any resolution, any bitrate

A Brief History of SRT

Haivision first developed SRT internally around 2012–2013 to solve a specific problem: delivering professional broadcast feeds from remote locations to production centers over standard internet connections without satellite uplinks.

The protocol was open-sourced at NAB 2017 under the LGPL v2.1 license. In March 2018, Haivision relicensed it under the Mozilla Public License, making it more business-friendly for commercial integrations. That same year, SRT earned a Technology & Engineering Emmy Award from the National Academy of Television Arts and Sciences — the first time an open-source streaming protocol received the honor.

Today, the SRT Alliance — founded by Haivision and Wowza — counts more than 600 member organizations, including AWS, Cloudflare, Google Cloud, Microsoft, NVIDIA, Sony, and YouTube. Over 450 technology vendors have built SRT into their products.

In Haivision's 2024 Broadcast Transformation Report, 68% of broadcasters named SRT as their top protocol of choice — ahead of RTMP (56%) and raw UDP (45%).


How Does the SRT Protocol Work?

SRT sits at the application layer and uses UDP as its transport, but it adds a control layer on top that gives it properties normally associated with TCP: sequencing, acknowledgment, and retransmission.

Here’s how a live stream flows through SRT from encoder to decoder:

1. Connection Establishment

SRT supports three connection modes:

  • Caller mode — The encoder initiates the connection (like a client connecting to a server). Most common for RTMP-to-SRT migration setups.
  • Listener mode — The server waits for incoming connections. A media server like Wowza or Nginx listens on a port; the encoder calls in.
  • Rendezvous mode — Both endpoints call each other at the same time, resolving NAT traversal without a dedicated listener. Used when both sides are behind firewalls.

During connection setup, SRT performs a handshake that includes latency negotiation and key exchange for encryption.
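The three modes above show up in practice as parameters on the SRT URL. Here is an illustrative Python helper for composing such URLs; the query parameter names (`mode`, `latency`, `passphrase`) follow common libsrt URL conventions, but check your encoder's documentation for its exact syntax and units (FFmpeg's libsrt wrapper, for example, takes latency in microseconds, while srt-live-transmit takes milliseconds).

```python
# Illustrative helper for composing SRT URLs. Parameter names follow
# libsrt URL conventions; verify units and syntax against your tool.
from urllib.parse import urlencode

def build_srt_url(host, port, mode="caller", latency_ms=None, passphrase=None):
    params = {"mode": mode}
    if latency_ms is not None:
        params["latency"] = latency_ms
    if passphrase is not None:
        params["passphrase"] = passphrase
    return f"srt://{host}:{port}?{urlencode(params)}"

# Encoder (caller) dials the server; server (listener) waits on the port.
print(build_srt_url("203.0.113.10", 9998, mode="caller", latency_ms=300))
print(build_srt_url("0.0.0.0", 9998, mode="listener"))
```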

2. Latency Buffer and Timing

SRT uses a configurable latency buffer — the default is 120ms, but real-world deployments often set it to 300–500ms for transcontinental streams. This buffer is the “window” SRT uses to recover lost packets before the decoder needs them.

The latency setting should be at least 3–4 times the round-trip time (RTT) between sender and receiver. For a transatlantic stream with 80ms RTT, a latency setting of 240–320ms is appropriate.
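The sizing rule is simple enough to express directly. A minimal sketch (the 120ms floor matches SRT's default; the multiplier is the rule of thumb above, not a protocol constant):

```python
# Rule-of-thumb sizing for the SRT latency buffer: at least 3-4x the
# measured round-trip time, so lost packets can be NAK'd and resent
# before the decoder needs them.
def recommended_latency_ms(rtt_ms, multiplier=4):
    # Never go below SRT's 120 ms default, even on very fast links.
    return max(120, rtt_ms * multiplier)

print(recommended_latency_ms(80))  # transatlantic-ish path -> 320
print(recommended_latency_ms(10))  # LAN-like path, floored at 120
```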

3. Error Recovery via ARQ

SRT’s reliability mechanism is ARQ (Automatic Repeat reQuest). When packets arrive out of order or are missing, the receiver sends a NAK (Negative Acknowledgment) back to the sender. The sender then retransmits only the missing packets.

This is a fundamentally different approach from TCP, which retransmits from a fixed point and introduces head-of-line blocking. SRT retransmits only what’s needed, keeping latency predictable even during packet loss events.

SRT can maintain stream quality with up to 10% packet loss — a significant threshold that covers most real-world network conditions short of a complete outage.
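The selective-repeat idea behind ARQ can be sketched in a few lines. This toy simulation only illustrates the principle — real SRT runs NAK timers and periodic loss reports inside the latency window, none of which is modeled here:

```python
# Toy simulation of SRT-style selective retransmission (ARQ): the
# receiver tracks sequence numbers, NAKs only the gaps, and the sender
# resends just those packets -- no head-of-line blocking.
def receive(arrived, sent_store):
    received = {seq: payload for seq, payload in arrived}
    expected = range(min(sent_store), max(sent_store) + 1)
    naks = [seq for seq in expected if seq not in received]
    for seq in naks:                  # sender retransmits only the gaps
        received[seq] = sent_store[seq]
    return [received[seq] for seq in sorted(received)], naks

sent = {1: "a", 2: "b", 3: "c", 4: "d"}
arrived = [(1, "a"), (4, "d")]        # packets 2 and 3 lost in transit
stream, naks = receive(arrived, sent)
print(naks)    # -> [2, 3]
print(stream)  # -> ['a', 'b', 'c', 'd']
```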

4. AES Encryption

Encryption in SRT is applied at the payload level using AES-128 or AES-256 — the same standard used by government and financial networks. Key exchange happens during the handshake via passphrase authentication; neither endpoint transmits the passphrase itself.

This makes SRT secure on untrusted networks without requiring a separate VPN or TLS tunnel.

5. Decoding and Playout

Once packets are received, reordered, and decrypted within the latency window, they are passed to the decoder as a clean MPEG-TS stream. The protocol is codec-agnostic — it transports any codec (H.264, H.265/HEVC, AV1, etc.) without modification.
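To make the MPEG-TS framing concrete: the payload SRT carries is a sequence of fixed 188-byte packets, each starting with sync byte 0x47 and carrying a 13-bit PID that identifies the elementary stream. A minimal, illustrative header parse (the test packet here is fabricated, not captured from a real stream):

```python
# Minimal MPEG-TS packet-header parse, showing what SRT actually
# transports: 188-byte packets, sync byte 0x47, 13-bit PID.
def parse_ts_header(packet: bytes):
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not an MPEG-TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    payload_unit_start = bool(packet[1] & 0x40)  # PUSI flag
    return pid, payload_unit_start

# Fabricated packet: sync byte, PUSI set, PID 0x0100, padded to 188 bytes.
pkt = bytes([0x47, 0x41, 0x00, 0x10]) + bytes(184)
print(parse_ts_header(pkt))  # -> (256, True)
```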


SRT vs RTMP: The Key Differences

RTMP (Real-Time Messaging Protocol) was the dominant live ingest protocol for most of the 2010s. SRT has largely replaced it for professional broadcast contribution, though RTMP is still widely used for last-mile delivery to social platforms.

| Feature | SRT | RTMP |
| --- | --- | --- |
| Transport layer | UDP | TCP |
| Default latency | 120ms | 1–3 seconds |
| Packet loss handling | ARQ retransmission | Head-of-line blocking |
| Encryption | Native AES-128/256 | None (RTMPS adds TLS) |
| Firewall traversal | Yes (rendezvous mode) | Limited |
| Max practical bitrate | 20+ Mbps globally | ~2 Mbps over long distances |
| Codec support | Any (codec-agnostic) | H.264 only (by convention) |
| Open source | Yes (MPL) | No |
| CDN/platform support | Growing | Still dominant for social platforms |

In side-by-side performance tests, SRT handles up to 20 Mbps over intercontinental paths without issues — compared to RTMP, which typically fails or degrades above 2 Mbps at long distances. In hardware encoding tests, SRT throughput has been measured at 5–12x faster than RTMP on equivalent hardware.

For first-mile contribution (encoder to media server), SRT is the better choice on any network path with meaningful distance, latency, or packet loss. RTMP remains useful for studio-to-platform delivery over stable, short-distance connections, and for ingest APIs that haven’t yet added SRT support.


SRT vs Other Streaming Protocols

SRT isn’t the only modern alternative to RTMP. Here’s how it compares to WebRTC, HLS, and RIST:

| Protocol | Latency | Use Case | Reliability over Internet |
| --- | --- | --- | --- |
| SRT | 120ms–500ms | Broadcast contribution, first-mile ingest | High (ARQ) |
| RTMP | 1–3s | Social platform ingest, legacy contribution | Moderate |
| WebRTC | Under 500ms | Interactive video, video calls | Moderate |
| HLS | 4–30s | Last-mile delivery to viewers | High (HTTP CDN) |
| RIST | 100ms–500ms | Broadcast contribution (SRT competitor) | High (ARQ) |
| RTSP | Variable | IP camera ingest, local networks | Moderate |

SRT vs WebRTC: WebRTC targets sub-500ms interactivity for two-way video calls and browser-based streaming. SRT is optimized for one-way, high-bitrate broadcast contribution. For publishing a 4K stream from a hardware encoder to a media server, SRT is the right tool. For building a video call app, WebRTC is. The article WebRTC vs HLS covers that distinction in depth.

SRT vs RIST: RIST (Reliable Internet Stream Transport) is SRT’s closest technical competitor for contribution workflows — also UDP-based with ARQ error recovery. RIST is an IETF standard with formal governance; SRT is open-source with Alliance governance. Both are valid for professional contribution; SRT currently has broader vendor support and wider ecosystem adoption.

SRT vs HLS: These protocols serve different parts of the streaming chain. SRT handles contribution (encoder → server); HLS streaming handles delivery (server → viewers). They work together: ingest via SRT, transcode to HLS, deliver to viewers. They’re not competing protocols.


Advantages of SRT Protocol

Low Latency Over Any Network

SRT’s default 120ms latency is configurable down to near-zero for controlled networks. Real-world contribution streams typically run at 300–500ms end-to-end — well below RTMP’s 1–3 second baseline. This makes SRT the standard choice for live news, sports, and event production where timing matters. For a deeper look at achieving sub-second end-to-end delivery, see the guide on ultra low latency video streaming.

Resilient to Packet Loss and Network Jitter

ARQ retransmission lets SRT maintain broadcast-quality streams through packet loss events that would visibly degrade or drop an RTMP connection. SRT operates cleanly with up to 10% packet loss, making it reliable on satellite links, cellular connections, and congested public internet paths.

Built-In AES Encryption

Most protocols treat encryption as an add-on. SRT bakes AES-128/256 encryption into every connection at the protocol level — no separate VPN, no TLS overhead, no third-party certificate management. Passphrase authentication makes it straightforward to configure in encoder and server software.

Open Source and Free to Use

SRT is released under the Mozilla Public License with no royalties, no licensing fees, and no per-stream costs. Any vendor can integrate it into hardware or software products. This has led to rapid adoption: OBS Studio, vMix, FFmpeg, GStreamer, and VLC all include native SRT support.

Firewall Traversal

The rendezvous connection mode lets two SRT endpoints establish a connection even when both are behind NAT firewalls. This matters for remote production workflows where the encoder is in a venue or field location without a static IP or open ports.

Codec Agnostic

SRT transports MPEG-TS as its payload container, which means it works with any codec the encoder produces — H.264, HEVC, AV1, or anything else. You’re not locked into H.264 the way legacy RTMP workflows often are.

High-Bitrate Support

RTMP degrades above 2 Mbps over long distances due to TCP’s congestion behavior. SRT handles 20 Mbps or more over transcontinental paths, making it viable for 4K HDR contribution feeds where bandwidth is not the bottleneck.

Broad Ecosystem Adoption

600+ SRT Alliance members and Emmy recognition mean SRT is supported across the hardware and software ecosystem — from Haivision and Teradek encoders to cloud platforms and media servers. Integration is straightforward with any compliant endpoint.


Disadvantages of SRT Protocol

Higher Configuration Complexity Than RTMP

RTMP “just works” in most encoder software. SRT requires configuring the connection mode (caller/listener/rendezvous), the latency buffer, the port, and optionally a passphrase. For teams new to broadcast contribution, this adds setup time.

Last-Mile Delivery Is Not Its Role

SRT is a contribution protocol, not a delivery protocol. You can’t send an SRT stream directly to a browser or most CDNs. The stream must be received by a media server, transcoded to HLS or DASH, and delivered over HTTP to viewers. If you want a single protocol from encoder to viewer, SRT is not that.

Latency Trade-offs on Very Long Paths

SRT’s latency is configurable, but the minimum practical setting depends on RTT. For a stream from Sydney to London (RTT ~280ms), you need a latency buffer of at least 840ms — which is better than RTMP but not sub-200ms. Very long-distance paths have a physical ceiling on achievable latency.

Platform Ingest Still Largely Requires RTMP

YouTube, Facebook, Twitch, and most social platforms still require RTMP for their ingest APIs. SRT adoption at the platform level is growing but not universal. Teams delivering to multiple social platforms via multistreaming often need both SRT (for contribution to their own media server) and RTMP (for distribution to platforms).

UDP May Be Blocked in Some Environments

SRT runs over UDP, and some enterprise firewalls block outbound UDP traffic on non-standard ports by default. RTMP over TCP port 1935 often has fewer firewall issues in enterprise environments. Rendezvous mode helps but doesn’t fully resolve all network policy restrictions.


SRT solves the core problem it was built for — reliable, low-latency video over unpredictable IP networks. If you’re building a live streaming workflow that handles contribution, remote production, or high-bitrate ingest, SRT belongs in your stack. The practical question is how to put it to work.


How to Set Up SRT Streaming

Setting up SRT requires two components: an SRT-capable encoder (the sender) and an SRT-capable media server or ingest endpoint (the receiver). Here’s a practical walkthrough for both.

Step 1: Choose Your Connection Mode

Decide which end initiates the connection:

  • Use Listener mode on the server and Caller mode on the encoder in most setups. The server waits; the encoder calls in.
  • Use Rendezvous mode when both sides are behind NAT and you can’t open an inbound port on either end.

Step 2: Configure the Server

If you’re running your own media server, configure it to listen on a UDP port. Ports in the range 1024–49151 are conventional; common choices are 9998 and 4200. Example FFmpeg listener that receives SRT and repackages it as HLS:

ffmpeg -i "srt://0.0.0.0:9998?mode=listener" \
  -c:v libx264 -preset fast -b:v 4000k \
  -c:a aac -b:a 128k \
  -f hls -hls_time 2 -hls_list_size 5 /var/www/stream/output.m3u8

Step 3: Configure the Encoder

In OBS Studio (version 25+), set the stream type to “Custom” and use an SRT URL:

srt://YOUR_SERVER_IP:9998?latency=300000&passphrase=yourpassword

Note: OBS uses microseconds for the latency parameter — 300000 = 300ms.

In vMix, select “SRT” from the Output / Streaming settings and enter the server IP, port, and optional passphrase.

In FFmpeg as encoder:

ffmpeg -re -i input.mp4 \
  -c:v libx264 -b:v 4000k \
  -c:a aac -b:a 128k \
  -f mpegts "srt://SERVER_IP:9998?passphrase=yourpassword"

Step 4: Set the Latency Buffer

Set latency to at least 3–4x the RTT between encoder and server. To measure RTT:

ping YOUR_SERVER_IP

If RTT is 100ms, set latency to 400ms or higher. If you’re unsure, 500ms is a safe default for most cross-region streams.

Step 5: Enable Encryption

Add a passphrase parameter to both endpoints (minimum 10 characters). Both the encoder and server must use the same passphrase. SRT negotiates AES-256 key exchange automatically during the handshake.
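Conceptually, the passphrase never crosses the wire — each side derives key material from it locally. SRT uses PBKDF2 for this derivation; the sketch below illustrates the idea only, and the salt, iteration count, and key length are illustrative placeholders, not SRT's exact wire-format parameters.

```python
# Conceptual sketch of passphrase-based key derivation. SRT derives its
# AES keys from the shared passphrase with PBKDF2 rather than sending
# the passphrase itself; parameters below are illustrative.
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, key_len: int = 32) -> bytes:
    if len(passphrase) < 10:
        raise ValueError("SRT passphrases must be at least 10 characters")
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), salt, 2048, key_len)

salt = os.urandom(16)          # in real SRT this comes from the handshake
key = derive_key("yourpassword", salt)
print(len(key))                # -> 32 bytes of key material (AES-256 size)
```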

Step 6: Test the Connection

Use FFplay to verify the stream is arriving (include the passphrase parameter if you enabled encryption):

ffplay "srt://SERVER_IP:9998?passphrase=yourpassword"

Check for packet loss stats with the -v verbose flag or via your media server’s stream health dashboard.


SRT and Live Streaming Infrastructure: What You Need

A complete SRT-based live streaming stack requires several components working together:

SRT-Capable Encoder Software

  • OBS Studio — Free, open-source, native SRT support since v25 (2020). Best for desktop streaming.
  • vMix — Professional Windows production software with SRT ingest and output.
  • FFmpeg — Command-line encoder/transcoder with full SRT support via the libsrt library.
  • GStreamer — Pipeline-based framework with SRT plugin for Linux-based production systems.

SRT-Capable Hardware Encoders

Haivision, Teradek, Matrox, and LiveU all make hardware encoders with SRT output built in. These are common in broadcast trucks, remote production kits, and stadium installations.

Media Server or Ingest Endpoint

Your encoder sends an SRT stream to a media server that receives it, unpacks the MPEG-TS payload, and re-packages it for delivery. Options include Wowza Streaming Engine, SRT-capable Nginx builds, or a managed ingest API. The SRT encoder guide covers hardware and software encoder options in detail.

If you’d rather skip the infrastructure setup entirely, LiveAPI’s live streaming API accepts SRT ingest directly — no media server configuration required. You point your SRT encoder at a LiveAPI ingest endpoint, and LiveAPI handles transcoding, adaptive bitrate packaging, CDN delivery via Akamai, Cloudflare, or Fastly, and automatic live-to-VOD recording. Teams typically go from SRT encoder to a working HLS stream in less than a day of integration work.

CDN Delivery

SRT terminates at your media server. From there, the stream is transcoded to HLS and delivered via CDN for live streaming. SRT handles the first mile; the CDN handles the last mile to viewers at scale.

Monitoring and Analytics

Production SRT setups should monitor RTT, packet loss percentage, retransmit rate, and bitrate in real time. A spike in retransmits or rising RTT indicates network degradation before it becomes a visible quality drop.
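Those thresholds translate directly into alerting rules. A hypothetical health check — the field names (`rtt_ms`, `loss_pct`, `retransmit_pct`) and thresholds are placeholders to map onto whatever your media server or libsrt stats API actually reports:

```python
# Hypothetical health check over the metrics named above; field names
# and thresholds are placeholders, not a real stats API.
def stream_alerts(stats, baseline_rtt_ms):
    alerts = []
    if stats["rtt_ms"] > 1.5 * baseline_rtt_ms:
        alerts.append("RTT rising: network path degrading")
    if stats["loss_pct"] > 5.0:
        alerts.append("packet loss approaching SRT's ~10% recovery ceiling")
    if stats["retransmit_pct"] > 15.0:
        alerts.append("retransmit spike: consider raising the latency buffer")
    return alerts

print(stream_alerts({"rtt_ms": 130, "loss_pct": 6.2, "retransmit_pct": 4.0},
                    baseline_rtt_ms=80))
```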


Who Uses SRT Protocol?

SRT’s adoption spans broadcast, enterprise, and cloud infrastructure:

  • NASA uses SRT to distribute live video across control rooms for real-time mission monitoring.
  • The 2020 NFL Draft delivered 600+ live feeds via SRT during the virtual event — one of the largest known SRT deployments.
  • Fox News and Al Jazeera use SRT for live contribution from field correspondents.
  • AWS, Google Cloud, and Microsoft Azure have all integrated SRT into their media infrastructure products.
  • YouTube supports SRT ingest in limited configurations for broadcast partners.

The SRT Alliance’s 600+ member companies represent the full broadcast chain: encoders, decoders, media servers, CDNs, and streaming platforms — all with interoperable SRT implementations.


Is SRT Right for Your Project?

SRT is the right choice if:

  • You’re building first-mile contribution infrastructure — studio to cloud, remote location to production center, or IP camera to media server.
  • Your streams travel over unpredictable networks — public internet, cellular, satellite, or cross-region paths.
  • You need sub-second contribution latency — broadcast news, live sports, auction platforms, or any workflow where timing matters.
  • You need encryption without a separate security layer — regulated industries, secure enterprise environments, or any case where streams traverse untrusted networks.
  • You’re replacing satellite uplinks — SRT over fiber or broadband is significantly cheaper than satellite contribution with comparable reliability.

SRT may not be the best fit if:

  • You’re delivering directly to browser viewers — use HLS or DASH via a CDN for that.
  • You’re only streaming to social platforms — RTMP is still required for YouTube, Twitch, and Facebook ingest APIs.
  • You need under-200ms viewer latency — for interactive use cases, WebRTC is the right choice.

For a deeper comparison, the SRT vs RTMP breakdown covers the technical trade-offs with performance data.


SRT Protocol FAQ

What does SRT stand for?

SRT stands for Secure Reliable Transport. Each word describes a core design property: the protocol encrypts streams natively (Secure), recovers from packet loss without dropping quality (Reliable), and acts as a general-purpose transport layer for any video codec or resolution (Transport).

What port does SRT use?

SRT doesn’t have an officially assigned port number. It can use any UDP port from 1 to 65535. Ports in the range 1024–49151 are recommended to avoid conflicts with well-known services. Common choices include 9998, 4200, and 5000. Both endpoints must be configured to use the same port.

Is SRT better than RTMP?

For professional broadcast contribution over the public internet, yes. SRT handles packet loss via ARQ retransmission, supports bitrates above 20 Mbps over long distances, and includes native AES encryption — none of which RTMP provides. RTMP still has an edge in ecosystem reach for social platform ingest, where it remains the dominant protocol.

What encryption does SRT use?

SRT uses AES-128 or AES-256 encryption applied at the payload level. Encryption keys are negotiated during connection setup via passphrase authentication. Neither side transmits the passphrase itself — only a derived key. The same AES standard is used by government and financial networks globally.

What is the SRT Alliance?

The SRT Alliance is an industry consortium founded by Haivision and Wowza to promote adoption of the SRT protocol. It has more than 600 member organizations including AWS, Cloudflare, Google Cloud, Microsoft, NVIDIA, Sony, and YouTube. Members agree to implement SRT in a compatible way, ensuring interoperability across products.

Does OBS Studio support SRT?

Yes. OBS Studio has native SRT support since version 25.0 (released 2020). Under Stream Settings, select “Custom” as the service and enter an srt:// URL with your server IP, port, and optional latency/passphrase parameters.

What is the difference between SRT and SRTP?

These are unrelated protocols that share a similar acronym. SRT (Secure Reliable Transport) is a video contribution protocol designed for live streaming over IP networks. SRTP (Secure Real-time Transport Protocol) is a profile of RTP used in VoIP and video conferencing (and internally in WebRTC). If you’re building a live streaming ingest workflow, you want SRT. If you’re building a video call app, SRTP is part of the stack — but you’ll interact with it through WebRTC, not directly.

Can SRT handle packet loss?

Yes. SRT’s ARQ mechanism retransmits missing packets within the latency buffer window. Streams remain visually clean with packet loss up to approximately 10%. Above that threshold, the retransmission queue may overflow the latency buffer and cause visible artifacts or stream interruption — depending on the buffer setting and network conditions.

What is caller vs listener mode in SRT?

In caller mode, the encoder initiates the connection to the server (similar to a TCP client). In listener mode, the server waits for incoming connections (similar to a TCP server). In rendezvous mode, both endpoints call each other at the same time — used for NAT traversal when neither side can accept inbound connections. Most setups use caller (encoder) and listener (media server).

Is SRT open source?

Yes. SRT is open source under the Mozilla Public License (MPL) and maintained at github.com/haivision/srt. The license allows commercial use, modification, and distribution. There are no royalties or licensing fees.


Getting Started with SRT

SRT is now the default choice for professional live streaming contribution. It’s faster, more secure, and more resilient than RTMP — and free to use in any stack. The main work is setting up an ingest endpoint that accepts SRT connections and routes the stream to your delivery infrastructure.

If you want the encoder-to-viewer pipeline without managing media servers, transcoding, or CDN configuration yourself, get started with LiveAPI — it accepts SRT ingest out of the box, packages streams for HLS delivery via multiple CDNs, and includes automatic live-to-VOD recording.
