{"id":965,"date":"2026-04-23T10:02:31","date_gmt":"2026-04-23T03:02:31","guid":{"rendered":"https:\/\/liveapi.com\/blog\/what-is-video-latency\/"},"modified":"2026-04-23T10:03:03","modified_gmt":"2026-04-23T03:03:03","slug":"what-is-video-latency","status":"publish","type":"post","link":"https:\/\/liveapi.com\/blog\/what-is-video-latency\/","title":{"rendered":"What Is Video Latency? Causes, Types, and How to Reduce It"},"content":{"rendered":"<span class=\"rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\">11<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span><p>Video latency is the time between when a video is captured and when a viewer sees it on screen. For a football match, it&#8217;s the seconds between a goal being scored and the celebration reaching your audience. For a live auction, it&#8217;s the gap between a price changing and a bidder acting on outdated information.<\/p>\n<p>For developers building live streaming applications, video latency determines whether your product feels real-time or falls flat. A viewer engaging with chat who receives responses to questions that were answered 20 seconds ago, or a betting platform where odds on screen no longer reflect the actual game state \u2014 these aren&#8217;t edge cases. They&#8217;re product failures.<\/p>\n<p>This guide covers what video latency is, how it&#8217;s measured across your streaming pipeline, what causes it at each stage, and how to bring it down based on your specific use case.<\/p>\n<hr \/>\n<h2>What Is Video Latency?<\/h2>\n<p><strong>Video latency<\/strong> is the total time delay between when a camera captures a video frame and when a viewer&#8217;s screen displays that same frame. 
This end-to-end delay is commonly called <strong>glass-to-glass latency<\/strong> \u2014 referring to the journey from the camera lens (one pane of &#8220;glass&#8221;) to the viewer&#8217;s display (the other).<\/p>\n<p>Glass-to-glass latency spans every stage in the streaming pipeline:<\/p>\n<ol>\n<li><strong>Camera capture<\/strong> \u2014 The camera sensor records a frame<\/li>\n<li><strong>Encoding<\/strong> \u2014 Raw video is compressed into a streamable format (H.264, H.265, AV1)<\/li>\n<li><strong>Ingest<\/strong> \u2014 The encoded stream is sent from the encoder to an origin server<\/li>\n<li><strong>Processing<\/strong> \u2014 The server segments and packages the stream<\/li>\n<li><strong>CDN delivery<\/strong> \u2014 Packaged content travels to edge servers worldwide<\/li>\n<li><strong>Network transit<\/strong> \u2014 Data crosses the internet to the viewer&#8217;s network<\/li>\n<li><strong>Decoding<\/strong> \u2014 The player decompresses the video<\/li>\n<li><strong>Rendering<\/strong> \u2014 The frame is displayed on screen<\/li>\n<\/ol>\n<p>Each stage adds time. Glass-to-glass latency is the sum of all of them. Understanding which stages contribute most gives you a clear target for where to focus your efforts.<\/p>\n<hr \/>\n<h2>How Video Latency Is Measured<\/h2>\n<p>The most direct way to measure video latency is the clock comparison method: display a running clock or timestamp on the source device, then photograph both the source screen and the playback screen at the same moment. The difference between the two displayed times is your glass-to-glass latency.<\/p>\n<p><strong>Wall-clock time<\/strong> is the reference point \u2014 the actual real-world time when content was captured. 
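The arithmetic is simple, but logging it continuously is what makes it useful in production. A minimal sketch in JavaScript (the function and variable names here are illustrative, not from any specific SDK):

```javascript
// Glass-to-glass latency: wall-clock time at render minus wall-clock
// time at capture (e.g. from a timestamp overlay or timed metadata).
function glassToGlassMs(captureEpochMs, renderEpochMs) {
  return renderEpochMs - captureEpochMs;
}

// Average the last few samples so a single jittery reading
// doesn't trigger a false alert.
function rollingLatencyMs(samples, windowSize = 10) {
  const recent = samples.slice(-windowSize);
  return recent.reduce((sum, s) => sum + s, 0) / recent.length;
}
```

A frame stamped at 12:00:00.000 and rendered at 12:00:04.200 gives 4,200ms of glass-to-glass latency.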
Comparing wall-clock time at capture versus wall-clock time at playback gives you a reliable latency measurement without specialized hardware.<\/p>\n<p>For production monitoring, you can log ingest timestamps via API and compare them to the player&#8217;s reported playback position. This gives per-session latency data you can track over time and alert on when it drifts.<\/p>\n<p>A practical shortcut during development: if you&#8217;re delivering via <a href=\"https:\/\/liveapi.com\/blog\/adaptive-bitrate-streaming\/\" target=\"_blank\">adaptive bitrate streaming<\/a>, most player SDKs expose a <code>latency<\/code> or <code>liveDelay<\/code> property you can read programmatically and log alongside other session metrics.<\/p>\n<hr \/>\n<h2>Types of Video Latency<\/h2>\n<p>Video latency exists on a spectrum. The right target depends on your application \u2014 not every use case requires sub-second delivery.<\/p>\n<table>\n<thead>\n<tr>\n<th>Latency Type<\/th>\n<th>Delay Range<\/th>\n<th>Protocol Examples<\/th>\n<th>Typical Use Cases<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Real-time<\/td>\n<td>&lt; 300ms<\/td>\n<td>WebRTC, WHIP\/WHEP<\/td>\n<td>Video calls, remote control<\/td>\n<\/tr>\n<tr>\n<td>Ultra-low latency<\/td>\n<td>300ms\u20131s<\/td>\n<td>WebRTC SFU, SRT<\/td>\n<td>Interactive streaming, gaming<\/td>\n<\/tr>\n<tr>\n<td>Low latency<\/td>\n<td>1\u20136s<\/td>\n<td>LL-HLS, LL-DASH, SRT<\/td>\n<td>Sports betting, live shopping<\/td>\n<\/tr>\n<tr>\n<td>Standard latency<\/td>\n<td>6\u201330s<\/td>\n<td>HLS, MPEG-DASH<\/td>\n<td>Broadcast TV, live events<\/td>\n<\/tr>\n<tr>\n<td>High latency<\/td>\n<td>30s+<\/td>\n<td>HTTP progressive<\/td>\n<td>VOD, archived content<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Real-Time Latency (&lt; 300ms)<\/h3>\n<p>Above 300\u2013400ms, two-way conversations become uncomfortable \u2014 speakers overlap because they can&#8217;t gauge real-time feedback from the other side. 
<a href=\"https:\/\/liveapi.com\/blog\/what-is-webrtc\/\" target=\"_blank\">WebRTC<\/a> is the standard protocol for sub-300ms delivery, transmitting media over UDP with minimal buffering. The tradeoff is scalability: WebRTC is designed for small-group sessions and requires a Selective Forwarding Unit (SFU) to reach large audiences without a massive server footprint.<\/p>\n<h3>Ultra-Low Latency (300ms\u20131s)<\/h3>\n<p>Achievable with WebRTC SFU architectures or SRT ingest paired with aggressive player buffer tuning. Suitable for live events where viewers need to act on real-time information \u2014 live betting markets, real-time social interactions, or interactive gaming streams where a one-second lag breaks the experience.<\/p>\n<h3>Low Latency (1\u20136s)<\/h3>\n<p>The sweet spot for most live streaming applications. <a href=\"https:\/\/liveapi.com\/blog\/ultra-low-latency-video-streaming\/\" target=\"_blank\">LL-HLS<\/a> (Low-Latency HLS) and LL-DASH achieve 2\u20135 seconds in real-world deployments. This range supports large audience scales, works with standard CDN infrastructure, and covers a wide range of devices without requiring special player builds.<\/p>\n<h3>Standard Latency (6\u201330s)<\/h3>\n<p>Traditional <a href=\"https:\/\/liveapi.com\/blog\/what-is-hls-streaming\/\" target=\"_blank\">HLS streaming<\/a> operates here by default, with 6\u201310 second segments and 2\u20133 segment buffers. Sufficient for broadcast news and live events where viewer interaction isn&#8217;t the focus \u2014 the stream is live, but the exact timing gap doesn&#8217;t matter to most viewers.<\/p>\n<h3>High Latency (30s+)<\/h3>\n<p>HTTP progressive download or large-segment HLS. Appropriate for VOD and pre-recorded content where real-time delivery is irrelevant.<\/p>\n<hr \/>\n<h2>What Causes Video Latency?<\/h2>\n<p>Video latency builds up at every stage of your pipeline. Here are the five main sources.<\/p>\n<h3>1. 
Encoding and Compression<\/h3>\n<p>Before a stream travels anywhere, the encoder compresses raw video frames into a format that can be transmitted over the internet. This takes time.<\/p>\n<p>Two common encoding approaches:<\/p>\n<ul>\n<li><strong>Frame-based encoding<\/strong>: The encoder waits for a complete video frame before compressing it. Better compression efficiency, but adds 100\u2013200ms of encoding delay.<\/li>\n<li><strong>Slice-based encoding<\/strong>: The encoder compresses portions of a frame as they arrive, cutting encoding latency to 10\u201330ms at the cost of slightly lower compression efficiency.<\/li>\n<\/ul>\n<p>Your choice of <a href=\"https:\/\/liveapi.com\/blog\/what-is-video-encoder\/\" target=\"_blank\">video encoder<\/a> and codec settings directly controls this stage. Hardware encoders (GPU-accelerated) typically process frames faster than software encoders and support low-latency encoding modes. Disabling B-frames (bidirectional prediction frames) and reducing keyframe intervals are the two most impactful codec settings for cutting encoding latency.<\/p>\n<h3>2. Streaming Protocol and Segment Size<\/h3>\n<p>The streaming protocol is one of the biggest variables in end-to-end latency. Traditional HLS segments are 6\u201310 seconds long. A player buffers 2\u20133 segments before starting playback, setting a baseline latency of 12\u201330 seconds before network transit is even counted.<\/p>\n<p>A useful rule of thumb: <strong>latency \u2248 3\u00d7 segment duration<\/strong>. Shorter segments reduce latency in proportion.<\/p>\n<p><a href=\"https:\/\/liveapi.com\/blog\/srt-protocol\/\" target=\"_blank\">SRT protocol<\/a> operates at the transport layer with 150ms\u20133s latency depending on network conditions. 
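The segment-duration rule of thumb above can be sketched as a quick planning helper (illustrative only; real players vary in how many segments they buffer):

```javascript
// latency ≈ segment duration × number of buffered segments (typically ~3),
// before network transit and encoding delay are added on top.
function baselineLatencySeconds(segmentSeconds, bufferedSegments = 3) {
  return segmentSeconds * bufferedSegments;
}
// 6s segments → ~18s baseline; 2s segments → ~6s baseline.
```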
<a href=\"https:\/\/liveapi.com\/blog\/webrtc-live-streaming\/\" target=\"_blank\">WebRTC live streaming<\/a> bypasses the HTTP segment model entirely, sending media over UDP for sub-500ms delivery.<\/p>\n<p>LL-HLS and LL-DASH break segments into partial chunks (200\u2013500ms) and use HTTP\/2 push to deliver content before a full segment is complete \u2014 achieving 2\u20135 seconds while staying compatible with standard CDN infrastructure.<\/p>\n<h3>3. Network Transmission and Distance<\/h3>\n<p>Once encoded and ingested, the stream crosses the internet to reach viewers. Physical distance adds measurable delay \u2014 data travels at roughly two-thirds the speed of light through fiber, meaning a stream traveling from New York to Tokyo adds at least 80\u2013100ms of one-way transit time before any processing overhead.<\/p>\n<p>Network congestion, packet loss, and retransmission add further unpredictable delays. A single retransmitted packet on a TCP-based protocol like RTMP can stall the stream for 100\u2013500ms while the missing packet is re-requested and delivered.<\/p>\n<h3>4. CDN and Edge Delivery<\/h3>\n<p>CDNs reduce latency by serving content from edge servers geographically closer to viewers. But CDN architecture matters for live streaming. Traditional CDNs optimize for throughput and cache hit rates, which can introduce buffering when content arrives in segments rather than complete files.<\/p>\n<p>For low-latency streaming, you need a <a href=\"https:\/\/liveapi.com\/blog\/cdn-for-video-streaming\/\" target=\"_blank\">CDN for video streaming<\/a> that supports chunked transfer encoding and HTTP\/2 push to deliver partial segments in real time. Without this, the CDN itself adds seconds of delay even when your ingest pipeline is fast. 
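The propagation figures from the network-transmission section above can be sanity-checked with a back-of-envelope calculation (the speed and distance values are approximations):

```javascript
// Light in fiber travels at roughly 2/3 the speed of light in vacuum:
// ~200,000 km/s, i.e. about 200 km per millisecond, one way.
const FIBER_KM_PER_MS = 200;

function propagationDelayMs(distanceKm) {
  return distanceKm / FIBER_KM_PER_MS;
}
// 1,000 km → 5ms; New York to Tokyo (~10,800 km great-circle) → ~54ms.
// Real routes are longer and add switching hops, hence 80-100ms in practice.
```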
Akamai, Cloudflare, and Fastly all support LL-HLS delivery \u2014 if you&#8217;re building on a streaming API like LiveAPI, this CDN routing is <a href=\"https:\/\/liveapi.com\/features\/\" target=\"_blank\">handled for you<\/a> across all three providers based on viewer geography.<\/p>\n<p>Without any CDN, your origin server handles every viewer request directly, adding latency as traffic grows and the server becomes the bottleneck.<\/p>\n<h3>5. Player Buffering<\/h3>\n<p>Even after data reaches the viewer&#8217;s device, the player holds content in a buffer before rendering it. This protects against jitter: if the network pauses for 2 seconds, a 3-second buffer means the viewer sees uninterrupted playback rather than a freeze.<\/p>\n<p>The tradeoff is direct: larger buffers mean higher latency. <a href=\"https:\/\/liveapi.com\/blog\/buffering-when-streaming\/\" target=\"_blank\">Buffering when streaming<\/a> is a quality-of-experience problem, and the instinct to fix frequent pauses by increasing the buffer size makes your latency worse.<\/p>\n<p>Low-latency players use 1\u20132 segment buffers and implement adaptive buffering \u2014 increasing buffer depth only when network conditions degrade, then reducing it again as conditions improve.<\/p>\n<hr \/>\n<h2>Video Latency by Streaming Protocol<\/h2>\n<p>Different streaming protocols make different trade-offs between latency, scalability, and device compatibility. 
Here&#8217;s how the major options compare:<\/p>\n<table>\n<thead>\n<tr>\n<th>Protocol<\/th>\n<th>Typical Latency<\/th>\n<th>Scalability<\/th>\n<th>Device Support<\/th>\n<th>Best For<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>WebRTC<\/td>\n<td>100\u2013500ms<\/td>\n<td>Low\u2013Medium (SFU required)<\/td>\n<td>Browsers, native apps<\/td>\n<td>Video calls, interactive streams<\/td>\n<\/tr>\n<tr>\n<td>SRT<\/td>\n<td>150ms\u20133s<\/td>\n<td>Medium<\/td>\n<td>Servers, encoders<\/td>\n<td>Contribution, low-latency ingest<\/td>\n<\/tr>\n<tr>\n<td>RTMP<\/td>\n<td>2\u20135s<\/td>\n<td>Medium<\/td>\n<td>Encoders, servers<\/td>\n<td>Ingest to streaming servers<\/td>\n<\/tr>\n<tr>\n<td>LL-HLS<\/td>\n<td>2\u20135s<\/td>\n<td>High<\/td>\n<td>All HLS-compatible devices<\/td>\n<td>Large-scale live streaming<\/td>\n<\/tr>\n<tr>\n<td>LL-DASH<\/td>\n<td>2\u20135s<\/td>\n<td>High<\/td>\n<td>Most browsers, devices<\/td>\n<td>Large-scale live streaming<\/td>\n<\/tr>\n<tr>\n<td>HLS (standard)<\/td>\n<td>10\u201330s<\/td>\n<td>Very high<\/td>\n<td>All devices<\/td>\n<td>Broadcast, VOD, live events<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>For most developer use cases \u2014 live streaming to web and mobile audiences at scale \u2014 the recommended path is <a href=\"https:\/\/liveapi.com\/blog\/srt-vs-rtmp\/\" target=\"_blank\">SRT or RTMP<\/a> for ingest into your streaming server, with LL-HLS or standard HLS for viewer delivery. This balances low latency with broad device compatibility and standard CDN support.<\/p>\n<p>If you need sub-second interactivity, WebRTC is the right choice \u2014 but requires a <a href=\"https:\/\/liveapi.com\/blog\/webrtc-server\/\" target=\"_blank\">WebRTC server<\/a> or SFU to scale beyond small group sessions. 
If you&#8217;re ingesting from an <a href=\"https:\/\/liveapi.com\/blog\/what-is-rtmp\/\" target=\"_blank\">RTMP<\/a> encoder, your streaming server handles the protocol conversion to HLS for viewer delivery.<\/p>\n<hr \/>\n<h2>When Video Latency Actually Matters<\/h2>\n<p>Latency requirements vary significantly by use case. Not every application needs to hit the low end of the spectrum.<\/p>\n<h3>Live Sports and Sports Betting<\/h3>\n<p>High latency creates spoilers \u2014 social media posts about a goal reach viewers before the stream catches up. For betting platforms, a 10-second gap between different viewers creates arbitrage opportunities that undermine fairness. <strong>Target: 3\u20136 seconds.<\/strong><\/p>\n<h3>Video Conferencing and Remote Collaboration<\/h3>\n<p>Above 300\u2013400ms, conversations feel unnatural \u2014 people talk over each other because they can&#8217;t read real-time cues. Video conferencing platforms target under 200ms for interactions that feel natural. <strong>Target: &lt; 300ms.<\/strong><\/p>\n<h3>Live Shopping and Interactive Commerce<\/h3>\n<p>Live shopping platforms need real-time synchronization between presenter actions and viewer responses. Viewers acting on product information that&#8217;s 15 seconds old leads to inventory errors and frustrated purchases. <strong>Target: 3\u20136 seconds.<\/strong><\/p>\n<h3>Broadcast and Linear TV Events<\/h3>\n<p>Traditional broadcast has 5\u20137 seconds of inherent production latency. Online streaming that matches or beats this feels live to most viewers. <strong>Target: 6\u201315 seconds.<\/strong><\/p>\n<h3>Surveillance and Remote Operations<\/h3>\n<p>Security cameras, traffic monitoring, and industrial remote control all require low latency for accurate situational awareness. A 2-second lag on a security feed can mean missing an event entirely. 
<strong>Target: &lt; 500ms to 2 seconds.<\/strong><\/p>\n<h3>E-Learning and Webinars<\/h3>\n<p>Presenter-to-audience delivery is acceptable at 3\u201310 seconds since the interaction is mostly one-directional. Live Q&amp;A benefits from lower latency, but it&#8217;s rarely the priority for educational content. <strong>Target: 3\u201310 seconds.<\/strong><\/p>\n<hr \/>\n<p>Building low-latency streaming requires decisions across every layer of your stack. Now that you understand where latency originates, the practical question is what you can actually do to reduce it.<\/p>\n<hr \/>\n<h2>How to Reduce Video Latency<\/h2>\n<p>Here are the most effective techniques for reducing video latency across a live streaming pipeline.<\/p>\n<h3>1. Choose the Right Protocol First<\/h3>\n<p>Protocol selection is your highest-impact decision. If your application needs sub-second delivery, standard HLS won&#8217;t get you there regardless of other tuning. Switch to WebRTC or SRT. If you need 2\u20135 seconds at scale, move from standard HLS to LL-HLS. Review the <a href=\"https:\/\/liveapi.com\/blog\/hls-vs-dash\/\" target=\"_blank\">HLS vs DASH<\/a> trade-offs to pick the right low-latency delivery format for your stack before configuring anything else.<\/p>\n<h3>2. Reduce Segment Duration<\/h3>\n<p>For HLS-based delivery, shorter segments directly reduce latency. Moving from 6-second segments to 2-second segments cuts latency roughly in proportion while increasing HTTP request volume. LL-HLS takes this further with partial segments of 200\u2013500ms, cutting latency under 3 seconds without abandoning standard HTTP-based infrastructure.<\/p>\n<h3>3. Configure Your Encoder for Low Latency<\/h3>\n<p>Encoder settings have a major impact on how much delay you introduce before the stream even hits the network. Key settings to adjust:<\/p>\n<ul>\n<li><strong>Disable B-frames<\/strong>: B-frames use future frames as references, requiring the encoder to buffer ahead. 
Disabling them removes this source of delay at a modest quality cost.<\/li>\n<li><strong>Reduce keyframe interval<\/strong>: Use 1\u20132 seconds instead of the default 10. This increases the number of random-access points in the stream, letting players tune in and start playback faster.<\/li>\n<li><strong>Switch to slice-based encoding<\/strong>: Reduces encoding latency from 100\u2013200ms to 10\u201330ms if your encoder supports it.<\/li>\n<li><strong>Use hardware acceleration<\/strong>: A dedicated <a href=\"https:\/\/liveapi.com\/blog\/srt-encoder\/\" target=\"_blank\">SRT encoder<\/a> with GPU encoding can bring encoding delay under 100ms for most resolutions.<\/li>\n<\/ul>\n<h3>4. Use a CDN That Supports LL-HLS<\/h3>\n<p>Not all CDNs handle LL-HLS correctly. You need a CDN that supports chunked transfer encoding and blocking playlist requests to deliver partial segments before they&#8217;re complete. Confirm that your origin server outputs LL-HLS manifests correctly and that your CDN is configured to pass through chunked responses rather than waiting for a full segment before caching.<\/p>\n<h3>5. Deploy Infrastructure Close to Your Audience<\/h3>\n<p>Reduce the physical distance between your streaming infrastructure and your viewers. For contribution, choose an ingest server or cloud region geographically near your broadcaster. For distribution, use a CDN with global edge locations or deploy regional origin servers near your highest-traffic geographies. Every 1,000 km of additional network path adds roughly 5\u201310ms of one-way transit time \u2014 small individually, but significant when compounded with other delays.<\/p>\n<h3>6. Tune Player Buffer Settings<\/h3>\n<p>Work with your player configuration to reduce the target buffer length. Standard players buffer 3\u20135 seconds; for low-latency streams, configure a 1\u20132 second target. 
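As a hedged sketch of what that tuning looks like, using hls.js-style option names (these follow hls.js v1 and are an assumption, not a universal API; check your player&#8217;s documentation):

```javascript
// Player tuning sketch. Option names mirror hls.js v1; other players
// expose equivalents under different names.
const lowLatencyConfig = {
  lowLatencyMode: true,         // request LL-HLS partial segments
  liveSyncDurationCount: 1,     // target ~1 segment behind the live edge
  maxLiveSyncPlaybackRate: 1.1, // allow up to 1.1x speed when catching up
};

// The underlying behavior, roughly: speed playback up while over target.
function catchUpRate(latencySec, targetSec, maxRate = 1.1) {
  return latencySec > targetSec ? maxRate : 1.0;
}
```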
Enable catch-up mode \u2014 a feature most modern players support that gradually increases playback speed (1.05\u20131.1\u00d7) when latency drifts above target, closing the gap without perceptible audio pitch changes.<\/p>\n<h3>7. Use Wired Connections for Critical Broadcast Sources<\/h3>\n<p>For the broadcaster side, a wired ethernet connection removes 10\u2013200ms of Wi-Fi jitter from the encoding-to-ingest leg. This is especially relevant for high-stakes live events where a brief Wi-Fi dropout could cause visible stuttering. For viewer-side connectivity, CDN edge servers that terminate connections close to viewers reduce the last-mile variability that causes jitter and rebuffering.<\/p>\n<hr \/>\n<h2>Low-Latency Streaming with LiveAPI<\/h2>\n<p>For teams building low-latency live streaming into their applications, the infrastructure decisions above represent significant configuration work if handled from scratch: ingest servers, CDN configuration, protocol conversion, encoder compatibility, and failover logic.<\/p>\n<p><a href=\"https:\/\/liveapi.com\/live-streaming-api\/\" target=\"_blank\">LiveAPI&#8217;s live streaming API<\/a> handles this pipeline \u2014 from RTMP and SRT ingest to multi-CDN delivery via Akamai, Cloudflare, and Fastly \u2014 with the streaming infrastructure managed for you.<\/p>\n<p>Relevant capabilities for latency-sensitive applications:<\/p>\n<ul>\n<li><strong>SRT and RTMP ingest<\/strong>: SRT&#8217;s built-in error correction handles packet loss without the head-of-line blocking that affects RTMP over congested networks. 
Both protocols are fully supported.<\/li>\n<li><strong>Multi-CDN delivery<\/strong>: Traffic routes through Akamai, Cloudflare, or Fastly based on viewer geography, reducing the physical distance component of latency.<\/li>\n<li><strong>HLS output<\/strong>: HLS URLs are generated automatically from your ingest stream, compatible with all major playback devices across web and mobile.<\/li>\n<li><strong>Global server redundancy<\/strong>: Regional infrastructure reduces the server-to-viewer distance that a single-origin setup adds.<\/li>\n<li><strong>Live-to-VOD<\/strong>: Streams are recorded automatically, so tuning your live product for low latency doesn&#8217;t affect your VOD archive quality.<\/li>\n<\/ul>\n<p>This means your team focuses on the application layer \u2014 your player UI, viewer experience, and business logic \u2014 rather than managing ingest servers and CDN routing rules.<\/p>\n<hr \/>\n<h2>Video Latency FAQ<\/h2>\n<h3>What is a good latency for live streaming?<\/h3>\n<p>It depends on your use case. For broadcast-style events where viewers aren&#8217;t interacting in real time, 6\u201315 seconds is workable. For sports, live shopping, or audience engagement features, target 3\u20136 seconds with LL-HLS. For two-way interactive experiences, you need sub-500ms latency via WebRTC.<\/p>\n<h3>What is glass-to-glass latency?<\/h3>\n<p>Glass-to-glass latency is the total delay from when a camera captures a frame to when that frame appears on a viewer&#8217;s screen. It covers every stage: capture, encoding, ingest, CDN delivery, network transit, decoding, and rendering. It&#8217;s the most complete single measure of end-to-end streaming delay, unlike metrics that only measure part of the pipeline.<\/p>\n<h3>What&#8217;s the difference between latency and buffering?<\/h3>\n<p>Latency is the fixed delay from capture to display. 
<a href=\"https:\/\/liveapi.com\/blog\/streaming-video-buffering\/\" target=\"_blank\">Buffering<\/a> is the temporary pause in playback that happens when the player runs out of data before the stream delivers the next segment. Buffering is a symptom of network instability or a player configured with too small a buffer for current conditions \u2014 it doesn&#8217;t directly indicate high latency, though the two are related.<\/p>\n<h3>Does WebRTC always have lower latency than HLS?<\/h3>\n<p>WebRTC delivers 100\u2013500ms vs. LL-HLS&#8217;s 2\u20135 seconds, so yes in raw numbers \u2014 but the comparison involves real trade-offs. <a href=\"https:\/\/liveapi.com\/blog\/webrtc-vs-hls\/\" target=\"_blank\">WebRTC vs HLS<\/a> comes down to scalability and infrastructure: WebRTC requires a server-side SFU to scale beyond small groups, adding meaningful complexity. LL-HLS scales to large audiences using standard CDN infrastructure that most teams already know.<\/p>\n<h3>How does SRT reduce latency compared to RTMP?<\/h3>\n<p>The key difference is the underlying transport. RTMP uses TCP, which retransmits every lost packet and can stall the stream for hundreds of milliseconds while waiting for the retransmit to arrive. SRT uses UDP with selective retransmission \u2014 it only retransmits packets that have time to arrive before their playback deadline, dropping those that would arrive too late. This makes SRT more predictable under packet loss and better suited for contribution over congested or lossy links.<\/p>\n<h3>Does the video codec affect latency?<\/h3>\n<p>Both H.264 and H.265 can be configured for low-latency encoding. <a href=\"https:\/\/liveapi.com\/blog\/hevc-vs-h264\/\" target=\"_blank\">H.264 vs H.265<\/a> is primarily a quality-per-bitrate trade-off, not a latency trade-off. H.265 achieves better compression, but hardware encoder support for H.265 low-latency mode is less consistent than H.264. 
For live streaming where encoding latency matters most, H.264 is typically the safer choice.<\/p>\n<h3>How do you measure video latency without specialized equipment?<\/h3>\n<p>Display a running clock or timer as an overlay on your source \u2014 either through your encoding software or a clock app on the camera device. View the stream on a second device. Take a photo of both screens at the same moment. The difference in displayed times is your glass-to-glass latency. For automated monitoring, most player SDKs expose a <code>liveDelay<\/code> or <code>latency<\/code> property you can log programmatically alongside other session metrics.<\/p>\n<h3>Why does latency increase over time during a long stream?<\/h3>\n<p>Latency drift happens when playback falls gradually behind the live edge \u2014 each brief stall or jitter-induced rebuffer pauses playback while the live source keeps moving, and the accumulated delay never recovers on its own. Good low-latency players implement catch-up mode, speeding playback to 1.05\u20131.1\u00d7 to close the gap when latency drifts above target. Without this, latency grows steadily during long broadcasts until the viewer refreshes the page.<\/p>\n<hr \/>\n<h2>Wrapping Up<\/h2>\n<p>Video latency accumulates at every stage: encoding, protocol selection, network transit, CDN delivery, and player buffering. There&#8217;s no single fix, but there&#8217;s a clear priority order \u2014 start with protocol selection, then encoder settings, then CDN configuration, then player tuning.<\/p>\n<p>For most live streaming applications, LL-HLS gives you 2\u20135 seconds of video latency at full CDN scale. WebRTC gives you sub-500ms when your use case genuinely requires it. SRT covers the ingest side for both. 
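That guidance condenses into a simple lookup (the thresholds are illustrative, not hard rules):

```javascript
// Toy summary of the protocol guidance above, keyed on target
// end-to-end latency in seconds.
function suggestDeliveryProtocol(targetLatencySeconds) {
  if (targetLatencySeconds < 1) return 'WebRTC';
  if (targetLatencySeconds <= 6) return 'LL-HLS';
  return 'HLS';
}
```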
Understanding where latency comes from in your specific pipeline is the first step toward hitting your target \u2014 and knowing which stage to fix first.<\/p>\n<p><a href=\"https:\/\/liveapi.com\/\" target=\"_blank\">Get started with LiveAPI<\/a> to build low-latency live streaming into your application without building the underlying pipeline from scratch.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\">11<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span> Video latency is the time between when a video is captured and when a viewer sees it on screen. For a football match, it&#8217;s the seconds between a goal being scored and the celebration reaching your audience. For a live auction, it&#8217;s the gap between a price changing and a bidder acting on outdated information. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":966,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_title":"What Is Video Latency? Causes, Types, and How to Reduce It %%sep%% %%sitename%%","_yoast_wpseo_metadesc":"Video latency is the delay between capture and playback. Learn what causes it, how to measure it, and how to reduce it for live streaming apps.","inline_featured_image":false,"footnotes":""},"categories":[2],"tags":[],"class_list":["post-965","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-live-streaming-api"],"jetpack_featured_media_url":"https:\/\/liveapi.com\/blog\/wp-content\/uploads\/2026\/04\/what-is-video-latency.jpg","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v15.6.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<meta name=\"description\" content=\"Video latency is the delay between capture and playback. 
Learn what causes it, how to measure it, and how to reduce it for live streaming apps.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What Is Video Latency? Causes, Types, and How to Reduce It - LiveAPI Blog\" \/>\n<meta property=\"og:description\" content=\"Video latency is the delay between capture and playback. Learn what causes it, how to measure it, and how to reduce it for live streaming apps.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/\" \/>\n<meta property=\"og:site_name\" content=\"LiveAPI Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-23T03:02:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-23T03:03:03+00:00\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\">\n\t<meta name=\"twitter:data1\" content=\"16 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/liveapi.com\/blog\/#website\",\"url\":\"https:\/\/liveapi.com\/blog\/\",\"name\":\"LiveAPI Blog\",\"description\":\"Live Video Streaming API Blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/liveapi.com\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/liveapi.com\/blog\/wp-content\/uploads\/2026\/04\/what-is-video-latency.jpg\",\"width\":940,\"height\":627,\"caption\":\"Photo by Jakub Zerdzicki on Pexels\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/#webpage\",\"url\":\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/\",\"name\":\"What Is Video Latency? Causes, Types, and How to Reduce It - LiveAPI Blog\",\"isPartOf\":{\"@id\":\"https:\/\/liveapi.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/#primaryimage\"},\"datePublished\":\"2026-04-23T03:02:31+00:00\",\"dateModified\":\"2026-04-23T03:03:03+00:00\",\"author\":{\"@id\":\"https:\/\/liveapi.com\/blog\/#\/schema\/person\/98f2ee8b3a0bd93351c0d9e8ce490e4a\"},\"description\":\"Video latency is the delay between capture and playback. 
Learn what causes it, how to measure it, and how to reduce it for live streaming apps.\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/liveapi.com\/blog\/what-is-video-latency\/\"]}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/liveapi.com\/blog\/#\/schema\/person\/98f2ee8b3a0bd93351c0d9e8ce490e4a\",\"name\":\"govz\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/liveapi.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ab5cbe0543c0a44dc944c720159323bd001fc39a8ba5b1f137cd22e7578e84c9?s=96&d=mm&r=g\",\"caption\":\"govz\"},\"sameAs\":[\"https:\/\/liveapi.com\/blog\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/965","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/comments?post=965"}],"version-history":[{"count":1,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/965\/revisions"}],"predecessor-version":[{"id":967,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/posts\/965\/revisions\/967"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/media\/966"}],"wp:attachment":[{"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/media?parent=965"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/categories?post=965"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/liveapi.com\/blog\/wp-json\/wp\/v2\/tags?post=965"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}