
Building a Node.js Video Streaming Server: From Basic Implementation to Production-Ready Streaming


You’re building a feature that requires video streaming—maybe a course platform, an internal training portal, or a media application. The question hits: should you build the video streaming server yourself or integrate an existing solution?

Node.js can hold tens of thousands of concurrent connections on a single instance without degradation, making it architecturally well-suited for video streaming thanks to its non-blocking I/O model. Large-scale Node.js deployments routinely sustain very high connection counts reliably, which is why it is a common choice for real-time and streaming applications.

A Node.js video streaming server is a backend application that delivers video content to clients using streaming protocols, taking advantage of Node.js’s event-driven, non-blocking I/O model to efficiently handle concurrent video delivery through techniques like HTTP range requests and chunked transfer encoding.

This guide walks you through building a functional video streaming server from scratch. You’ll implement HTTP range requests for video seeking, chunked transfer encoding for real-time delivery, buffer optimization for handling concurrent viewers, and HLS segment delivery for modern streaming. Along the way, you’ll understand exactly what happens under the hood when video data flows from your server to a user’s browser.

Here’s the reality: building a basic streaming server is achievable and educational. Building production-grade streaming infrastructure—with video transcoding, adaptive bitrate streaming, CDN distribution, and global scale—takes months of development and significant ongoing maintenance. By the end of this guide, you’ll know how to do both: implement the fundamentals yourself and make an informed decision about when to use a video streaming API like LiveAPI that handles the heavy lifting.

Understanding Video Streaming Fundamentals in Node.js

Node.js handles I/O-heavy workloads like video streaming markedly more efficiently than traditional thread-based servers. This efficiency comes from its event-driven architecture—instead of creating a new thread for each video request (which consumes memory and CPU), Node.js handles all requests on a single thread using an event loop.

When a user requests a video file, a traditional server might load the entire file into memory before sending it. A 500MB video file means 500MB of RAM consumed for that single request. Multiply that by 100 concurrent viewers, and your server needs 50GB of RAM—clearly unsustainable.

Node.js solves this through its Streams API. Instead of loading entire files, streams process data in chunks. The video file is read piece by piece (default 64KB chunks), sent to the client, and then the memory is freed for the next chunk. This approach keeps memory usage constant regardless of file size.

The data flow works like this: Video file on disk → Readable Stream (fs.createReadStream) → Processing (optional transforms) → Writable Stream (HTTP response) → Client browser. Each arrow represents data flowing in chunks, never accumulating in memory.

How Node.js Streams Work for Video Data

The fs.createReadStream() method with pipe() streams video in chunks using a configurable highWaterMark (default 64KB), preventing memory overload even for large files. Here’s the simplest possible video streaming server:

const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  const videoPath = './video.mp4';
  const stream = fs.createReadStream(videoPath);
  
  res.writeHead(200, { 'Content-Type': 'video/mp4' });
  stream.pipe(res);
  
  stream.on('error', (err) => {
    console.error('Stream error:', err);
    res.end();
  });
}).listen(3000);

console.log('Server running on port 3000');

The pipe() method connects the readable stream (file) to the writable stream (response) and automatically manages backpressure—the mechanism that prevents fast readers from overwhelming slow writers. When the client can’t receive data fast enough, pipe() automatically pauses the file reading until the client catches up.

This handles concurrent viewers efficiently because each request gets its own stream instance, but memory per stream stays constant. You’re not loading files into memory; you’re creating a pipeline from disk to network.

Progressive Download vs. Adaptive Streaming

The basic Node.js streaming we just implemented is actually progressive download—not true adaptive streaming. Understanding this distinction is critical before building further.

Feature                   | Progressive Download                         | Adaptive Bitrate Streaming
Quality                   | Single quality level                         | Multiple qualities (360p, 720p, 1080p, 4K)
Network adaptation        | None; buffers or fails on slow connections   | Switches quality based on bandwidth
File format               | Single MP4 or WebM file                      | Segmented files (.ts) with manifest (.m3u8)
Implementation complexity | Simple: basic file streaming                 | Complex: requires transcoding, segmenting, manifests
Infrastructure needs      | Single Node.js server                        | Encoding pipeline, CDN, origin servers

Progressive download serves a single-quality file sequentially. If your user has a slow connection, they experience buffering. If they have a fast connection but you’re serving a low-quality file, they get a subpar experience.

HTTP Live Streaming (HLS) and similar adaptive protocols break video into small segments at multiple quality levels. The video player monitors bandwidth and switches qualities dynamically—delivering smooth playback regardless of network conditions.

Node.js can serve progressive downloads easily, as we’ve shown. For adaptive bitrate (ABR) streaming, Node.js can serve pre-generated HLS content, but generating that content requires transcoding pipelines—typically using FFmpeg. Production ABR at scale requires encoding infrastructure, segment storage, manifest management, and CDN distribution—infrastructure that takes months to build reliably.

Implementing HTTP Range Requests for Video Streaming

HTTP Range requests enable seeking—the ability to jump to any point in a video without downloading everything before it. Without range request support, users must wait for the entire video to download before watching, and seeking is impossible.

When a video player needs to seek to the 2-minute mark, it sends a request with a Range header: Range: bytes=12000000-. Your server responds with HTTP 206 Partial Content, sending only the requested bytes instead of the entire file.
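The 12 MB figure isn’t arbitrary: a player can estimate a byte offset from the file’s average bitrate before it knows anything else about the file. A back-of-envelope sketch (the 800 kbps bitrate is an assumed example value; real players use the MP4 index for exact offsets):

```javascript
// Rough byte-offset estimate a player might make before issuing a
// Range request: offset ≈ seekSeconds × (avgBitrateBits / 8).
function estimateByteOffset(seekSeconds, avgBitrateBits) {
  return Math.floor(seekSeconds * (avgBitrateBits / 8));
}

// Seeking to the 2-minute mark in an ~800 kbps file:
const offset = estimateByteOffset(120, 800000);
console.log(`Range: bytes=${offset}-`); // Range: bytes=12000000-
```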

Here’s a complete implementation:

const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  const videoPath = './video.mp4';
  
  fs.stat(videoPath, (err, stat) => {
    if (err) {
      res.writeHead(404);
      res.end('Video not found');
      return;
    }
    
    const fileSize = stat.size;
    const range = req.headers.range;
    
    if (range) {
      // Parse the range header (e.g. "bytes=12000000-")
      const parts = range.replace(/bytes=/, '').split('-');
      const start = parseInt(parts[0], 10);
      // Clamp an over-long end to the last byte, per RFC 7233,
      // rather than rejecting the request
      const end = parts[1]
        ? Math.min(parseInt(parts[1], 10), fileSize - 1)
        : fileSize - 1;
      
      // Validate range
      if (isNaN(start) || start >= fileSize || start > end) {
        res.writeHead(416, {
          'Content-Range': `bytes */${fileSize}`
        });
        res.end();
        return;
      }
      
      const chunkSize = end - start + 1;
      
      const headers = {
        'Content-Range': `bytes ${start}-${end}/${fileSize}`,
        'Accept-Ranges': 'bytes',
        'Content-Length': chunkSize,
        'Content-Type': 'video/mp4'
      };
      
      res.writeHead(206, headers);
      
      const stream = fs.createReadStream(videoPath, { start, end });
      stream.pipe(res);
      
      stream.on('error', (streamErr) => {
        console.error('Stream error:', streamErr);
        res.end();
      });
      
    } else {
      // No range header - send entire file
      const headers = {
        'Content-Length': fileSize,
        'Content-Type': 'video/mp4',
        'Accept-Ranges': 'bytes'
      };
      
      res.writeHead(200, headers);
      fs.createReadStream(videoPath).pipe(res);
    }
  });
}).listen(3000);

console.log('Video streaming server with range support running on port 3000');

The Accept-Ranges: bytes header tells the browser that your server supports range requests. Modern browsers and video players check for this header before attempting partial requests.

Parsing Range Headers Correctly

Range headers come in several formats that your server must handle:

  • Standard range: bytes=0-999 – Bytes 0 through 999 (first 1000 bytes)
  • Open-ended range: bytes=500- – Byte 500 to end of file
  • Suffix range: bytes=-500 – Last 500 bytes of file

Here’s a robust parsing function:

function parseRangeHeader(range, fileSize) {
  if (!range || !range.startsWith('bytes=')) {
    return null;
  }
  
  const rangeValue = range.replace('bytes=', '');
  const parts = rangeValue.split('-');
  
  let start, end;
  
  if (parts[0] === '') {
    // Suffix range: bytes=-500
    const suffixLength = parseInt(parts[1], 10);
    start = Math.max(0, fileSize - suffixLength);
    end = fileSize - 1;
  } else if (parts[1] === '') {
    // Open-ended: bytes=500-
    start = parseInt(parts[0], 10);
    end = fileSize - 1;
  } else {
    // Standard: bytes=0-999
    start = parseInt(parts[0], 10);
    end = parseInt(parts[1], 10);
  }
  
  // Validate
  if (isNaN(start) || isNaN(end) || start > end || start >= fileSize) {
    return { invalid: true };
  }
  
  // Clamp end to file size
  end = Math.min(end, fileSize - 1);
  
  return { start, end };
}

Invalid ranges should return HTTP 416 Range Not Satisfiable with a Content-Range: bytes */[fileSize] header indicating the valid range.

Setting Response Headers for Partial Content

Correct headers are essential for 206 Partial Content responses:

Header         | Required    | Example Value         | Purpose
Content-Range  | Yes         | bytes 0-999/5000      | Indicates which bytes are being sent and total size
Accept-Ranges  | Yes         | bytes                 | Confirms server supports range requests
Content-Length | Yes         | 1000                  | Size of this response (not total file size)
Content-Type   | Yes         | video/mp4             | MIME type of the content
Cache-Control  | Recommended | public, max-age=86400 | Enables caching of video segments

Common video MIME types: video/mp4 for MP4 files, video/webm for WebM, video/ogg for Ogg video. For cross-origin playback (video hosted on different domain), add CORS headers: Access-Control-Allow-Origin: *.
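A small helper makes those header choices explicit (the extension map below covers only the types named above; the fallback to a generic binary type is a deliberate choice over guessing wrong):

```javascript
// Map file extensions to video MIME types, falling back to a
// generic binary type for anything unrecognized.
const VIDEO_MIME_TYPES = {
  '.mp4': 'video/mp4',
  '.webm': 'video/webm',
  '.ogg': 'video/ogg'
};

function videoHeaders(ext, allowCrossOrigin) {
  const headers = {
    'Content-Type': VIDEO_MIME_TYPES[ext] || 'application/octet-stream',
    'Accept-Ranges': 'bytes'
  };
  if (allowCrossOrigin) {
    // Needed when the video is embedded from a different domain
    headers['Access-Control-Allow-Origin'] = '*';
  }
  return headers;
}

console.log(videoHeaders('.webm', true));
// → Content-Type: video/webm, plus Accept-Ranges and the CORS header
```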

Chunked Transfer Encoding for Real-Time Video Delivery

Range requests work great for pre-recorded video files where you know the exact file size. But what about live streaming or on-the-fly transcoding where the content length is unknown? That’s where chunked transfer encoding comes in.

When you omit the Content-Length header, Node.js automatically uses Transfer-Encoding: chunked, allowing you to send data incrementally:

const http = require('http');

http.createServer((req, res) => {
  // Omitting Content-Length triggers Transfer-Encoding: chunked;
  // setting it explicitly here just documents the intent
  res.writeHead(200, {
    'Content-Type': 'video/mp4',
    'Transfer-Encoding': 'chunked'
  });
  
  // In a real server, your video source (e.g. FFmpeg's stdout) calls
  // res.write() as data arrives and res.end() when the stream ends.
  // Simulate that here with a timer sending placeholder chunks:
  let chunksSent = 0;
  const timer = setInterval(() => {
    res.write(Buffer.alloc(64 * 1024)); // stand-in for encoded video data
    if (++chunksSent === 10) {
      clearInterval(timer);
      res.end();
    }
  }, 100);
  
  // Stop producing data if the viewer disconnects mid-stream
  req.on('close', () => clearInterval(timer));
}).listen(3000);

Use chunked encoding when:

  • Streaming live video where total duration is unknown
  • Transcoding video on-the-fly (piping FFmpeg output to response)
  • Generating video content dynamically

Use Content-Length (and range requests) when:

  • Serving pre-recorded files with known sizes
  • Users need seeking capability
  • You want browsers to show download progress

For production live streaming, chunked encoding alone isn’t enough. You need ingest servers accepting RTMP or SRT streams, real-time transcoding, HLS segment generation, and edge distribution. Video streaming APIs like LiveAPI handle this entire pipeline—you get RTMP/SRT ingest URLs and HLS playback URLs without building the infrastructure.

Buffer Optimization and Memory Management

A video streaming server lives and dies by its memory management. One memory leak or misconfigured buffer can bring down your server when traffic spikes. Node.js provides predictable memory behavior for long-running streams, but you need to configure it correctly.

The highWaterMark option controls how much data is buffered before backpressure kicks in. The default 64KB works for small files, but video benefits from larger chunks:

// Default (64KB) - may cause more disk reads for video
const defaultStream = fs.createReadStream(videoPath);

// Better for video - fewer disk operations, smoother playback
const videoStream = fs.createReadStream(videoPath, {
  highWaterMark: 256 * 1024  // 256KB chunks
});

// For high-bandwidth scenarios
const highBandwidthStream = fs.createReadStream(videoPath, {
  highWaterMark: 1024 * 1024  // 1MB chunks
});

Larger chunks mean fewer disk I/O operations but more memory per stream. For a server handling 100 concurrent streams with 1MB highWaterMark, expect roughly 100MB of buffer memory in the worst case.
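That worst-case figure is simple multiplication; a small helper (hypothetical, for capacity planning) makes the trade-off concrete:

```javascript
// Worst-case buffer memory if every stream's highWaterMark buffer is
// full at once. Real usage is usually lower, because backpressure
// keeps most buffers partially drained.
function worstCaseBufferBytes(concurrentStreams, highWaterMark) {
  return concurrentStreams * highWaterMark;
}

const mb = (bytes) => (bytes / 1024 / 1024).toFixed(0) + 'MB';

console.log(mb(worstCaseBufferBytes(100, 1024 * 1024))); // 100MB
console.log(mb(worstCaseBufferBytes(100, 256 * 1024)));  // 25MB
```

Running both configurations side by side like this is a quick way to decide whether the smoother playback of 1MB chunks is worth four times the buffer headroom.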

Handling Backpressure in Video Streams

Backpressure occurs when data is being read faster than it can be written. Without proper handling, data accumulates in memory until your server crashes.

The pipe() method handles backpressure automatically—it’s the recommended approach for most cases. When the writable stream’s internal buffer fills up, pipe() pauses the readable stream. When the buffer drains, reading resumes.

For manual control (needed in some advanced scenarios):

const readable = fs.createReadStream(videoPath);
const writable = res;

readable.on('data', (chunk) => {
  const canContinue = writable.write(chunk);
  
  if (!canContinue) {
    // Buffer is full - pause reading
    readable.pause();
    
    // Resume when buffer drains
    writable.once('drain', () => {
      readable.resume();
    });
  }
});

readable.on('end', () => {
  writable.end();
});

readable.on('error', (err) => {
  console.error('Read error:', err);
  writable.end();
});

Signs you’re ignoring backpressure: memory usage grows continuously during streaming, server becomes unresponsive under load, streams complete but memory isn’t freed.

Monitoring Memory Usage During Streaming

Add memory monitoring to catch issues before they crash your server:

function logMemoryUsage() {
  const usage = process.memoryUsage();
  console.log({
    heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + 'MB',
    heapTotal: Math.round(usage.heapTotal / 1024 / 1024) + 'MB',
    rss: Math.round(usage.rss / 1024 / 1024) + 'MB',
    external: Math.round(usage.external / 1024 / 1024) + 'MB'
  });
}

// Log every 30 seconds
setInterval(logMemoryUsage, 30000);

// Track concurrent streams
let activeStreams = 0;

function startStream() {
  activeStreams++;
  console.log(`Active streams: ${activeStreams}`);
}

function endStream() {
  activeStreams--;
  console.log(`Active streams: ${activeStreams}`);
}

Key metrics to watch:

  • heapUsed: Should stay relatively stable during streaming
  • rss (Resident Set Size): Total memory allocated to the process
  • activeStreams: Correlate memory growth with stream count

Warning signs: heapUsed grows continuously without returning to baseline, rss exceeds expected limits (activeStreams × highWaterMark × 2), garbage collection pauses cause playback stutters.

At production scale with hundreds or thousands of concurrent streams, even optimized single-server approaches become unreliable. Memory optimization buys you headroom, but eventually you need distributed infrastructure—CDN edge servers that handle delivery while your origin focuses on source content.

HLS Segment Delivery with Node.js

HLS (HTTP Live Streaming), developed by Apple, has become the dominant streaming protocol. It works by breaking video into small segments (typically 2-10 seconds each) and delivering them via standard HTTP. A manifest file (.m3u8) tells the player which segments to request and in what order.

Why HLS dominates:

  • Works over standard HTTP/HTTPS (no special protocols or ports)
  • Passes through firewalls and proxies
  • Supported by all modern browsers and devices
  • Enables adaptive bitrate switching for smooth playback
  • Easy to cache on CDNs

Here’s how to serve pre-generated HLS content:

const http = require('http');
const fs = require('fs');
const path = require('path');

const HLS_DIR = './hls-content';

http.createServer((req, res) => {
  // path.join normalizes "../" segments; reject any request that
  // resolves outside HLS_DIR (path traversal)
  const filePath = path.join(HLS_DIR, req.url);
  if (!filePath.startsWith(path.normalize(HLS_DIR + path.sep))) {
    res.writeHead(403);
    res.end('Forbidden');
    return;
  }
  const ext = path.extname(filePath);
  
  // Determine content type
  let contentType;
  if (ext === '.m3u8') {
    contentType = 'application/vnd.apple.mpegurl';
  } else if (ext === '.ts') {
    contentType = 'video/MP2T';
  } else {
    res.writeHead(404);
    res.end('Not found');
    return;
  }
  
  // Check file exists
  fs.stat(filePath, (err, stat) => {
    if (err) {
      res.writeHead(404);
      res.end('File not found');
      return;
    }
    
    res.writeHead(200, {
      'Content-Type': contentType,
      'Content-Length': stat.size,
      'Access-Control-Allow-Origin': '*',
      'Cache-Control': ext === '.m3u8' ? 'no-cache' : 'public, max-age=31536000'
    });
    
    fs.createReadStream(filePath).pipe(res);
  });
}).listen(3000);

console.log('HLS server running on port 3000');

Note the caching strategy: manifest files (.m3u8) should not be cached (or cached briefly) since they may update for live streams. Segments (.ts) can be cached aggressively since they never change once created.

Understanding HLS Manifest Files

HLS uses two types of manifests:

Master Playlist – Lists available quality levels:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
1080p/playlist.m3u8

Media Playlist – Lists actual video segments:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment000.ts
#EXTINF:10.0,
segment001.ts
#EXTINF:10.0,
segment002.ts
#EXT-X-ENDLIST

The player fetches the master playlist first, selects an appropriate quality based on bandwidth, then fetches that quality’s media playlist, and finally downloads segments sequentially as needed.
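To make that selection step concrete, here is a minimal sketch of parsing a master playlist into quality options. It handles only the BANDWIDTH and RESOLUTION attributes shown above, not the full M3U8 grammar:

```javascript
// Minimal master-playlist parser: pairs each #EXT-X-STREAM-INF line
// with the variant URI on the following line. Not a full M3U8 parser.
function parseMasterPlaylist(text) {
  const lines = text.split('\n').map((l) => l.trim());
  const variants = [];
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].startsWith('#EXT-X-STREAM-INF:')) {
      const attrs = lines[i].slice('#EXT-X-STREAM-INF:'.length);
      const bandwidth = parseInt((attrs.match(/BANDWIDTH=(\d+)/) || [])[1], 10);
      const resolution = (attrs.match(/RESOLUTION=([\dx]+)/) || [])[1] || null;
      variants.push({ bandwidth, resolution, uri: lines[i + 1] });
    }
  }
  return variants;
}

const master = [
  '#EXTM3U',
  '#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360',
  '360p/playlist.m3u8',
  '#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720',
  '720p/playlist.m3u8'
].join('\n');

console.log(parseMasterPlaylist(master));
```

A player would pick the variant with the highest bandwidth below its measured throughput, then fetch that variant’s media playlist.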

Generating HLS Segments with FFmpeg

Node.js serves HLS content, but generating it requires transcoding. FFmpeg is the industry standard:

# Basic HLS output
ffmpeg -i input.mp4 -hls_time 10 -hls_list_size 0 output.m3u8

# Multiple quality renditions
ffmpeg -i input.mp4 \
  -filter:v:0 scale=640:360 -b:v:0 800k \
  -filter:v:1 scale=1280:720 -b:v:1 1400k \
  -filter:v:2 scale=1920:1080 -b:v:2 2800k \
  -map 0:v -map 0:a -map 0:v -map 0:a -map 0:v -map 0:a \
  -f hls -hls_time 10 -hls_list_size 0 \
  -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
  -master_pl_name master.m3u8 \
  stream_%v/playlist.m3u8

Integrating FFmpeg with Node.js:

const { spawn } = require('child_process');
const path = require('path');

function transcodeToHLS(inputPath, outputDir) {
  return new Promise((resolve, reject) => {
    const outputPath = path.join(outputDir, 'playlist.m3u8');
    
    const ffmpeg = spawn('ffmpeg', [
      '-i', inputPath,
      '-hls_time', '10',
      '-hls_list_size', '0',
      '-hls_segment_filename', path.join(outputDir, 'segment%03d.ts'),
      outputPath
    ]);
    
    ffmpeg.stderr.on('data', (data) => {
      console.log(`FFmpeg: ${data}`);
    });
    
    ffmpeg.on('close', (code) => {
      if (code === 0) {
        resolve(outputPath);
      } else {
        reject(new Error(`FFmpeg exited with code ${code}`));
      }
    });
  });
}

The catch: FFmpeg transcoding is CPU-intensive. A single 1080p transcode can use 100%+ CPU. Real-time transcoding for live streams requires dedicated encoding hardware or cloud encoding services. Generating multiple quality levels multiplies the processing requirements.

For VOD, you can transcode files ahead of time. For live streaming, you need real-time encoding infrastructure that most development teams shouldn’t build from scratch. This is exactly where video streaming APIs provide value—they handle encoding pipelines so you don’t have to provision and manage transcoding servers.

Production Considerations: When DIY Isn’t Enough

Let’s take stock of what we’ve built: a Node.js server that streams video files with seeking support, handles chunked transfers, manages memory efficiently, and serves HLS content. For learning, prototyping, or small-scale internal tools, this is a good starting point.

For production streaming—serving real users at scale—here’s the infrastructure gap:

Requirement                    | DIY Status | Production Need
Progressive video playback     | ✓ Built    | Basic requirement met
Video seeking (range requests) | ✓ Built    | Basic requirement met
HLS segment serving            | ✓ Built    | Can serve pre-generated content
Multi-bitrate transcoding      | ✗ Missing  | Encoding pipeline needed
Live stream ingest             | ✗ Missing  | RTMP/SRT servers needed
CDN distribution               | ✗ Missing  | Global edge servers needed
Auto-scaling                   | ✗ Missing  | Handle traffic spikes
Redundancy/failover            | ✗ Missing  | No single points of failure
DRM protection                 | ✗ Missing  | Content protection for premium video
Analytics                      | ✗ Missing  | Viewer metrics, quality monitoring

The bandwidth math is stark: streaming 2Mbps video to 1,000 concurrent viewers requires 2Gbps of outbound bandwidth. A typical server has 1Gbps. Beyond roughly 100 concurrent viewers at reasonable quality, single-server approaches become unreliable without CDN distribution.
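A quick sanity check you can run against your own numbers (the 1 Gbps link is the assumption from the paragraph above; real servers saturate well below the theoretical ceiling):

```javascript
// Theoretical viewer ceiling for a single network link: how many
// streams of a given bitrate fit before the link saturates.
function maxConcurrentViewers(linkBitsPerSec, streamBitsPerSec) {
  return Math.floor(linkBitsPerSec / streamBitsPerSec);
}

const GBPS = 1e9;
const MBPS = 1e6;

// 2 Mbps video on a 1 Gbps NIC:
console.log(maxConcurrentViewers(1 * GBPS, 2 * MBPS)); // 500
```

In practice, protocol overhead, traffic bursts, and other processes on the box mean you should plan for a fraction of that ceiling, which is how a 1 Gbps server ends up comfortable only around 100 viewers at reasonable quality.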

Infrastructure Requirements Checklist

Building production video streaming infrastructure requires:

  • Transcoding servers: CPU-intensive encoding for multiple quality levels. Estimate: 2-3 months to build reliable encoding pipeline.
  • CDN integration: Contracts with Akamai, Cloudflare, Fastly, or similar. Configuration, origin shields, cache invalidation. Estimate: 1-2 months.
  • Ingest servers: For live streaming, accept RTMP/SRT from encoders. Requires always-on infrastructure in multiple regions. Estimate: 1-2 months.
  • Origin servers: Store and serve source content to CDN. Redundancy across regions. Estimate: 2-4 weeks.
  • Monitoring and analytics: Stream health, viewer counts, quality metrics, alerting. Estimate: 1 month.
  • Video protection: DRM integration, geo-blocking, domain restrictions. Estimate: 1-2 months.

Total timeline to production-ready: 6-12 months for a dedicated team. Ongoing maintenance: 1-2 full-time engineers.

Scaling Challenges and Solutions

As viewer counts grow, specific challenges emerge:

10-100 viewers: Single server may suffice. Monitor memory and bandwidth closely.

100-1,000 viewers: Need CDN. Single origin can handle segment requests from CDN edges, but you’re now dependent on CDN configuration and costs.

1,000-10,000 viewers: Multi-region CDN required. Origin redundancy becomes critical. Transcoding must happen ahead of live events—real-time encoding at this scale requires dedicated hardware.

10,000+ viewers: Enterprise-grade infrastructure. Multi-CDN strategies for reliability. Dedicated encoding farms. 24/7 operations team. This is where building from scratch rarely makes business sense unless video IS your business.

Horizontal scaling with load balancing helps, but video streaming has unique challenges. Viewers need consistent access to the same segments, live streams need coordinated encoding, and CDN cache warming must happen before viewer spikes.

The decision framework: If video streaming is your core product (you’re building Twitch), building infrastructure may be justified. If video is a feature in your application (course platform, event streaming, internal communications), the months spent building infrastructure are months not spent on your actual product.

Integrating Video Streaming APIs: The Production-Ready Approach

Video streaming APIs provide the infrastructure we just discussed—transcoding, CDN distribution, ingest servers, analytics—through simple API calls. Instead of building months of infrastructure, you integrate in days.

Here’s how it works with LiveAPI:

const sdk = require('api')('@liveapi/v1.0#5pfjhgkzh9rzt4');

sdk.post('/videos', {
    input_url: 'http://example.com/source-video.mp4'
})
.then(res => {
    console.log('Video processing started');
    console.log('Playback URL:', res.data.playback_url);
})
.catch(err => console.error(err));

That’s it. The API handles:

  • Transcoding to multiple quality levels (up to 4K)
  • HLS packaging with proper manifests
  • CDN distribution via Akamai, Cloudflare, and Fastly
  • Instant encoding—videos playable in seconds regardless of length

Compare this to our DIY approach: we built basic file streaming, then range requests, then HLS serving, and still needed FFmpeg pipelines and CDN integration for production. The API provides production capabilities with fewer lines of code than our basic streaming server.

Video-on-Demand Integration Example

Building a video course platform? Here’s the complete flow:

const express = require('express');
const sdk = require('api')('@liveapi/v1.0#5pfjhgkzh9rzt4');

const app = express();
app.use(express.json());

// Upload a new course video
app.post('/api/courses/:courseId/videos', async (req, res) => {
  try {
    const { videoUrl, title } = req.body;
    
    // Upload to LiveAPI for processing
    const result = await sdk.post('/videos', {
      input_url: videoUrl,
      title: title
    });
    
    // Store video ID and playback URL in your database
    const videoData = {
      liveApiId: result.data.id,
      playbackUrl: result.data.playback_url,
      title: title,
      courseId: req.params.courseId,
      status: 'processing'
    };
    
    // Save to your database
    await saveVideoToDatabase(videoData);
    
    res.json({ success: true, video: videoData });
  } catch (err) {
    console.error('Upload error:', err);
    res.status(500).json({ error: 'Failed to process video' });
  }
});

// Webhook endpoint for processing completion
app.post('/webhooks/liveapi', async (req, res) => {
  const { event, video_id, playback_url } = req.body;
  
  if (event === 'video.ready') {
    // Update video status in database
    await updateVideoStatus(video_id, 'ready', playback_url);
  }
  
  res.sendStatus(200);
});

app.listen(3000);

The video is processed, transcoded to multiple qualities, and distributed via CDN. Your users get adaptive bitrate playback with zero buffering, and you wrote about 30 lines of integration code.

Live Streaming Integration Example

For live events, webinars, or fitness classes:

const express = require('express');
const sdk = require('api')('@liveapi/v1.0#5pfjhgkzh9rzt4');

const app = express();
app.use(express.json());

// Create a new live stream
app.post('/api/streams', async (req, res) => {
  try {
    const { title } = req.body;
    
    const result = await sdk.post('/livestreams', {
      title: title
    });
    
    // Return ingest credentials and playback URL
    res.json({
      streamKey: result.data.stream_key,
      rtmpUrl: result.data.rtmp_url,
      srtUrl: result.data.srt_url,
      playbackUrl: result.data.playback_url,
      status: 'ready'
    });
  } catch (err) {
    console.error('Stream creation error:', err);
    res.status(500).json({ error: 'Failed to create stream' });
  }
});

// Get stream status and viewer count
app.get('/api/streams/:streamId/status', async (req, res) => {
  try {
    const result = await sdk.get(`/livestreams/${req.params.streamId}`);
    
    res.json({
      isLive: result.data.is_live,
      viewerCount: result.data.viewer_count,
      duration: result.data.duration
    });
  } catch (err) {
    res.status(500).json({ error: 'Failed to get status' });
  }
});

app.listen(3000);

The broadcaster connects using any RTMP encoder (OBS, Wirecast, hardware encoders) or SRT for better quality over unreliable networks. LiveAPI handles real-time transcoding and global delivery. Viewers get an HLS URL that works everywhere.

Bonus: LiveAPI supports multistreaming to 30+ platforms simultaneously. One stream to your API, rebroadcast to YouTube, Twitch, Facebook, and LinkedIn automatically.

For live-to-VOD, streams are automatically recorded and available for replay immediately after the live event ends—no additional processing or manual steps required.

Complete Node.js Video Streaming Server: Putting It All Together

Here’s a complete, production-aware video streaming server that combines everything we’ve covered:

const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const PORT = 3000;

// Configuration
const VIDEO_DIR = './videos';
const HLS_DIR = './hls';

// CORS middleware for cross-origin playback
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET, HEAD, OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Range');
  res.header('Access-Control-Expose-Headers', 'Content-Range, Content-Length');
  
  if (req.method === 'OPTIONS') {
    return res.sendStatus(200);
  }
  next();
});

// Health check endpoint
app.get('/health', (req, res) => {
  const memUsage = process.memoryUsage();
  res.json({
    status: 'ok',
    memory: {
      heapUsed: Math.round(memUsage.heapUsed / 1024 / 1024) + 'MB',
      rss: Math.round(memUsage.rss / 1024 / 1024) + 'MB'
    },
    uptime: process.uptime()
  });
});

// Progressive video streaming with range request support
app.get('/videos/:filename', (req, res) => {
  // path.basename strips any directory components a client might
  // smuggle into the param (path traversal)
  const videoPath = path.join(VIDEO_DIR, path.basename(req.params.filename));
  
  // Validate file exists
  fs.stat(videoPath, (err, stat) => {
    if (err) {
      console.error(`Video not found: ${videoPath}`);
      return res.status(404).json({ error: 'Video not found' });
    }
    
    const fileSize = stat.size;
    const range = req.headers.range;
    
    // Determine content type from extension
    const ext = path.extname(videoPath).toLowerCase();
    const contentTypes = {
      '.mp4': 'video/mp4',
      '.webm': 'video/webm',
      '.ogg': 'video/ogg',
      '.mov': 'video/quicktime'
    };
    const contentType = contentTypes[ext] || 'video/mp4';
    
    if (range) {
      // Parse range header
      const parts = range.replace(/bytes=/, '').split('-');
      const start = parseInt(parts[0], 10);
      // Clamp an over-long end to the last byte rather than rejecting it
      const end = parts[1]
        ? Math.min(parseInt(parts[1], 10), fileSize - 1)
        : fileSize - 1;
      
      // Validate range
      if (isNaN(start) || start >= fileSize || start > end) {
        res.writeHead(416, {
          'Content-Range': `bytes */${fileSize}`
        });
        return res.end();
      }
      
      const chunkSize = end - start + 1;
      
      console.log(`Streaming ${req.params.filename}: bytes ${start}-${end}/${fileSize}`);
      
      res.writeHead(206, {
        'Content-Range': `bytes ${start}-${end}/${fileSize}`,
        'Accept-Ranges': 'bytes',
        'Content-Length': chunkSize,
        'Content-Type': contentType,
        'Cache-Control': 'public, max-age=86400'
      });
      
      const stream = fs.createReadStream(videoPath, { 
        start, 
        end,
        highWaterMark: 256 * 1024 // 256KB chunks for video
      });
      
      stream.on('error', (streamErr) => {
        console.error('Stream error:', streamErr);
        res.end();
      });
      
      stream.pipe(res);
      
    } else {
      // No range - send entire file
      console.log(`Streaming full file: ${req.params.filename} (${fileSize} bytes)`);
      
      res.writeHead(200, {
        'Content-Length': fileSize,
        'Content-Type': contentType,
        'Accept-Ranges': 'bytes',
        'Cache-Control': 'public, max-age=86400'
      });
      
      fs.createReadStream(videoPath, {
        highWaterMark: 256 * 1024
      }).pipe(res);
    }
  });
});

// HLS manifest serving
app.get('/hls/:stream/:filename', (req, res) => {
  const filePath = path.join(HLS_DIR, req.params.stream, req.params.filename);
  const ext = path.extname(filePath);
  
  // Validate file type and reject anything that escapes HLS_DIR
  if (!filePath.startsWith(HLS_DIR) || (ext !== '.m3u8' && ext !== '.ts')) {
    return res.status(400).json({ error: 'Invalid file type' });
  }
  
  fs.stat(filePath, (err, stat) => {
    if (err) {
      return res.status(404).json({ error: 'File not found' });
    }
    
    const contentType = ext === '.m3u8' 
      ? 'application/vnd.apple.mpegurl' 
      : 'video/MP2T';
    
    // Manifests: no cache (may update for live)
    // Segments: cache aggressively (immutable)
    const cacheControl = ext === '.m3u8'
      ? 'no-cache'
      : 'public, max-age=31536000';
    
    res.writeHead(200, {
      'Content-Type': contentType,
      'Content-Length': stat.size,
      'Cache-Control': cacheControl
    });
    
    fs.createReadStream(filePath).pipe(res);
  });
});
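For reference, the playlist.m3u8 this route serves looks something like the following. This is a hand-written illustration with three 10-second segments; in practice your packager (FFmpeg or a video API) generates it:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
```

The player fetches this manifest first, then requests each .ts segment in order—which is why the manifest is served with no-cache while segments cache aggressively.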

// Simple HTML player page
app.get('/player/:filename', (req, res) => {
  const filename = req.params.filename;
  
  res.send(`
    <!DOCTYPE html>
    <html>
      <head><title>${filename}</title></head>
      <body>
        <video controls width="640" src="/videos/${filename}"></video>
      </body>
    </html>
  `);
});

// Error handling
app.use((err, req, res, next) => {
  console.error('Server error:', err);
  res.status(500).json({ error: 'Internal server error' });
});

app.listen(PORT, () => {
  console.log(`Video streaming server running on port ${PORT}`);
  console.log(`Progressive video: http://localhost:${PORT}/videos/[filename]`);
  console.log(`HLS streams: http://localhost:${PORT}/hls/[stream]/playlist.m3u8`);
  console.log(`Player: http://localhost:${PORT}/player/[filename]`);
});

// Graceful shutdown
process.on('SIGTERM', () => {
  console.log('Shutting down gracefully...');
  process.exit(0);
});

To run this server:

  1. Create videos/ and hls/ directories
  2. Place an MP4 file in the videos directory
  3. Run npm install express, then node server.js
  4. Open http://localhost:3000/player/yourfile.mp4

Test range requests with curl:

curl -I -H "Range: bytes=0-999" http://localhost:3000/videos/test.mp4

You should see a 206 response with Content-Range header.

This server handles the fundamentals well—progressive download, range requests, HLS serving, memory management. For production with real users, you’ll want to add a CDN in front of it, implement proper logging and monitoring, add authentication for protected content, and consider using a video streaming API for encoding and global distribution.

Conclusion

You now understand how video streaming works at a fundamental level—from Node.js streams and backpressure to HTTP range requests, chunked transfer encoding, and HLS delivery. This knowledge transfers regardless of whether you build streaming infrastructure yourself or integrate an existing solution.

Use the DIY approach when: you’re learning video streaming concepts, building prototypes or internal tools, serving video to a small, known audience, or video streaming IS your core product and you’re investing in proprietary infrastructure.

Use a video streaming API when: you need production reliability from day one, scaling to hundreds or thousands of viewers, time-to-market matters more than building from scratch, video is a feature in your application rather than the application itself.

Ready to add production video streaming to your Node.js application? LiveAPI lets you integrate live streaming, video hosting, and adaptive bitrate delivery in minutes instead of months. Try it free and see how the same functionality we built manually becomes a few lines of code with production-grade infrastructure behind it.

Either way, you now have the foundational knowledge to make informed decisions about video streaming architecture—and the code to build a functional streaming server when that’s the right choice.

Frequently Asked Questions

Can Node.js handle video streaming?

Yes, Node.js handles video streaming effectively using its built-in Streams API and event-driven architecture. It excels at serving video content through HTTP range requests and chunked transfer encoding, managing multiple concurrent connections efficiently without blocking. Node.js can handle tens of thousands of concurrent connections on a single instance. For production scale with thousands of viewers, pair Node.js with CDN infrastructure or video streaming APIs for global delivery.

How do I stream large video files in Node.js without running out of memory?

Stream large videos using fs.createReadStream() instead of fs.readFile() to process files in chunks rather than loading entirely into memory. Configure appropriate highWaterMark values (256KB-1MB for video), implement proper backpressure handling using the drain event, and use pipe() which automatically manages flow control. For very large files, implement HTTP range requests to serve only the requested portions rather than entire files.

What is the best video streaming protocol to use with Node.js?

HLS (HTTP Live Streaming) is the most widely supported protocol for Node.js video streaming, working across all browsers and devices including iOS, Android, and smart TVs. Node.js serves HLS content by delivering .m3u8 manifest files and .ts video segments over standard HTTP. For live streaming at production scale, consider video streaming APIs that handle real-time transcoding and HLS packaging automatically.

How do I implement adaptive bitrate streaming in Node.js?

Implementing true adaptive bitrate streaming requires transcoding videos into multiple quality renditions (360p, 720p, 1080p), segmenting each rendition into small chunks, and generating HLS manifests pointing to appropriate segments. While Node.js can serve pre-generated ABR content, the transcoding and packaging typically requires FFmpeg pipelines or video encoding APIs for production efficiency. The player then switches between quality levels based on available bandwidth.
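While the transcoding itself needs FFmpeg or an encoding API, serving pre-generated renditions from Node.js only requires a master playlist that lists them. A minimal sketch—the bitrates, resolutions, and paths here are illustrative assumptions about your rendition layout:

```javascript
// Build an HLS master playlist so the player can pick a rendition
// based on measured bandwidth. BANDWIDTH is in bits per second.
function buildMasterPlaylist(renditions) {
  const lines = ['#EXTM3U', '#EXT-X-VERSION:3'];
  for (const r of renditions) {
    lines.push(
      `#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`
    );
    lines.push(r.uri); // relative path to that rendition's media playlist
  }
  return lines.join('\n') + '\n';
}

const master = buildMasterPlaylist([
  { bandwidth: 800000, resolution: '640x360', uri: '360p/playlist.m3u8' },
  { bandwidth: 2800000, resolution: '1280x720', uri: '720p/playlist.m3u8' },
  { bandwidth: 5000000, resolution: '1920x1080', uri: '1080p/playlist.m3u8' }
]);
```

Serve this as master.m3u8 with the same Content-Type used in the HLS route above, and the player handles quality switching on its own.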

What’s the difference between video streaming and video downloading?

Video streaming delivers content in chunks while the user watches, enabling immediate playback without waiting for complete download. Video downloading transfers the entire file before playback begins. Node.js implements streaming using HTTP range requests that return 206 Partial Content responses, allowing video players to request specific byte ranges and enabling seeking without downloading everything. Streaming uses less client storage and provides faster time-to-first-frame.

How do I add live streaming to my Node.js application?

Adding live streaming requires RTMP or SRT ingest servers to accept encoder input, real-time transcoding for adaptive bitrate output, HLS segment packaging, and CDN distribution for viewer delivery—infrastructure beyond basic Node.js capabilities. Most developers integrate live streaming APIs that handle this complexity. LiveAPI, for example, provides RTMP/SRT ingest URLs with automatic HLS output, requiring just a few lines of integration code while handling global delivery infrastructure.
