If you’re building a mobile app that needs real-time video, voice, or data, you’ve almost certainly run into react-native-webrtc. It’s the de facto bridge between Google’s WebRTC stack and React Native — the same underlying engine that powers video calls in Chrome, Firefox, and Safari, wrapped in a JavaScript API you can call from your mobile app.
But “install a library and call getUserMedia” hides a lot of complexity. You still need to handle iOS and Android permissions, set up a signaling server, configure STUN and TURN, manage peer connections across screens, and figure out what happens when your app needs to scale past two users on a Wi-Fi network.
This guide walks through the full picture: what React Native WebRTC is, how the architecture works, how to install and configure it on both platforms, a basic video-call code walkthrough, and the production patterns most demos skip. By the end you’ll know exactly when to ship a peer-to-peer mobile call with react-native-webrtc and when to put a server in the middle.
What Is React Native WebRTC?
React Native WebRTC is a native module that exposes the WebRTC protocol stack to React Native apps through a JavaScript API that mirrors the browser WebRTC spec. The package is published on npm as react-native-webrtc and is maintained by the open-source react-native-webrtc organization on GitHub.
The library wraps Google’s libwebrtc — the same C++ implementation used by Chromium — and surfaces it on iOS, Android, and tvOS. Your React Native code calls familiar APIs like mediaDevices.getUserMedia(), RTCPeerConnection, and RTCIceCandidate, and the module forwards those calls down to native iOS or Android WebRTC bindings. Recent releases track upstream WebRTC milestones (M124 at the time of writing) and support unified-plan SDP, simulcast, and both software and hardware encode/decode.
Here’s how it compares to other ways of adding real-time video to a mobile app:
| Approach | Latency | Cross-platform | Custom UI | Server cost | Best for |
|---|---|---|---|---|---|
| react-native-webrtc | <500ms | iOS + Android + tvOS | Full control | Signaling + STUN/TURN | One-to-one and small-group calls |
| Native iOS/Android SDKs | <500ms | Per-platform builds | Full control | Same as above | Teams with deep mobile expertise |
| Commercial video SDK | <500ms | iOS + Android + Web | Limited | Bundled per-minute | Faster shipping, less control |
| RTMP/HLS streaming | 2–30s | Anywhere a player runs | Full control | Encoding + CDN | One-to-many broadcasts |
The trade-off is straightforward: react-native-webrtc gives you sub-500ms latency and full control over the call experience, but you own the signaling, the TURN servers, and the scaling story. For a deeper comparison of the underlying protocols, see WebRTC vs HLS and WebRTC vs RTMP.
How Does React Native WebRTC Work?
Under the hood, a React Native WebRTC call is the same dance every browser-based WebRTC app performs — your mobile device just plays the role of the browser. There are three moving parts: media capture, peer-to-peer negotiation, and a signaling channel that brokers the handshake.
1. Media capture
Your app calls mediaDevices.getUserMedia({ audio, video }). The native module asks iOS or Android for camera and microphone access, opens the hardware, and returns a MediaStream object holding one or more MediaStreamTrack instances. You attach those tracks to an on-screen RTCView to show local preview.
2. Peer connection setup
Each side of the call creates an RTCPeerConnection and adds the local tracks. The peer connection is the object responsible for everything network-related — encoding, packetization, congestion control, encryption (DTLS-SRTP), and NAT traversal.
To find a path through firewalls and home routers, the peer connection uses ICE (Interactive Connectivity Establishment). It contacts a STUN server to learn its public IP, gathers candidate addresses, and falls back to a TURN server to relay traffic when direct connectivity fails. If you want to go deeper here, STUN and TURN are non-negotiable for any real-world deployment — most calls outside a single Wi-Fi network need them.
3. Signaling
WebRTC deliberately doesn’t define how peers find each other. You build a signaling server — usually a small WebSocket or Socket.IO service — that ferries SDP offers, SDP answers, and ICE candidates between the two clients. Once the offer/answer/ICE exchange completes, the peer connection’s ontrack event fires on both sides and media starts flowing directly between devices.
The whole flow looks like:
Caller                     Signaling Server                    Callee
  |                                |                               |
  |-- getUserMedia()               |                               |
  |-- new RTCPeerConnection        |                               |
  |-- createOffer ---------------->|---------- offer ------------->|
  |                                |                 setRemoteDesc |
  |                                |                 createAnswer  |
  |<---------- answer -------------|<--------- answer -------------|
  |-- onicecandidate ------------->|------ ICE candidates -------->|
  |<------------------------ ICE candidates -----------------------|
  |========= media (SRTP, peer-to-peer or via TURN) ==============|
This is the same pattern a WebRTC server coordinates at scale. For one-to-one mobile calls you can keep the signaling layer tiny — a Node.js process holding a WebSocket connection per client is enough.
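To make that concrete, here is a minimal relay sketch using Node.js and the ws package. The two-client room model, message shapes, and port are assumptions for illustration; real signaling needs authentication and room management on top.

// Minimal signaling relay sketch (npm install ws).
// Assumes exactly two clients; every message is forwarded to the other peer.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const clients = new Set();

wss.on('connection', socket => {
  clients.add(socket);

  socket.on('message', data => {
    // Relay offers, answers, and ICE candidates to the other connected peer
    for (const peer of clients) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on('close', () => clients.delete(socket));
});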
Key Features and Platform Support
The react-native-webrtc package isn’t a thin wrapper. It exposes most of the modern browser WebRTC API surface and adds a few mobile-specific helpers.
Supported features:
- Audio and video tracks — full duplex over SRTP with Opus and VP8/VP9/H.264 codecs
- Data channels — reliable or unreliable peer-to-peer messaging without going through your server (see the sketch after this list)
- Screen capture — share the device screen on supported platforms (iOS 11+ via ReplayKit, Android via MediaProjection)
- Simulcast — send multiple resolutions of the same stream so an SFU can deliver the right one to each viewer
- Unified Plan SDP — the standardized, modern transceiver model
- Software and hardware encoders — H.264 hardware acceleration on iOS and Android where the device supports it
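Data channels in particular need almost no extra plumbing once a call is up. A minimal sketch, assuming pc is an established RTCPeerConnection like the one built later in this guide:

// Create a channel on one side; the other side receives it via ondatachannel
const channel = pc.createDataChannel('chat', { ordered: true });

channel.onopen = () => channel.send('hello from the caller');
channel.onmessage = event => console.log('Received:', event.data);

// On the remote peer:
pc.ondatachannel = event => {
  event.channel.onmessage = e => console.log('Remote said:', e.data);
};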
Supported platforms:
| Platform | Status | Notes |
|---|---|---|
| iOS | Supported | iOS 13+, arm64 and x86_64 simulators |
| Android | Supported | API 24+, armeabi-v7a, arm64-v8a, x86, x86_64 |
| tvOS | Supported | Same APIs as iOS |
| macOS | Not supported | Use the upstream Google WebRTC build |
| Windows | Not supported | No native module |
| Web | Via shim | Use react-native-webrtc-web-shim for React Native Web |
Expo Go does not bundle native WebRTC, so the package only works in projects using a development build (EAS Build or expo prebuild). The community-maintained @config-plugins/react-native-webrtc config plugin handles the iOS and Android setup automatically when you run a prebuild.
How to Install React Native WebRTC
The install is straightforward in a bare React Native project, with extra steps for permissions and a few platform-specific tweaks. Here’s the path most teams take.
Step 1: Add the package
npm install react-native-webrtc
# or
yarn add react-native-webrtc
If you’re on Expo, you also need the dev-client and the config plugin:
npx expo install expo-dev-client
npx expo install @config-plugins/react-native-webrtc
Then add the plugin to your app.json:
{
"expo": {
"plugins": [
[
"@config-plugins/react-native-webrtc",
{
"cameraPermission": "Allow $(PRODUCT_NAME) to access your camera",
"microphonePermission": "Allow $(PRODUCT_NAME) to access your microphone"
}
]
]
}
}
Step 2: Configure iOS permissions
WebRTC on iOS requires camera and microphone access at runtime. Open ios/YourApp/Info.plist and add:
<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) needs camera access for video calls</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) needs microphone access for calls</string>
Then run cd ios && pod install to link the native module. Minimum iOS deployment target is 13.0.
Step 3: Configure Android permissions
In android/app/src/main/AndroidManifest.xml, add:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
In android/app/build.gradle, set the min SDK to 24 and enable Java 8:
android {
defaultConfig {
minSdkVersion 24
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
You also need to request CAMERA and RECORD_AUDIO at runtime through PermissionsAndroid, since they’re dangerous permissions on Android 6+.
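A minimal runtime-permission sketch; the helper name is ours, but PermissionsAndroid is React Native's standard API:

import { PermissionsAndroid, Platform } from 'react-native';

const requestCallPermissions = async () => {
  if (Platform.OS !== 'android') return true; // iOS prompts via Info.plist strings
  const result = await PermissionsAndroid.requestMultiple([
    PermissionsAndroid.PERMISSIONS.CAMERA,
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
  ]);
  // Only proceed if every requested permission was granted
  return Object.values(result).every(
    status => status === PermissionsAndroid.RESULTS.GRANTED,
  );
};

Call this before getUserMedia; if it returns false, show your own explanation UI instead of letting the stream come back silently empty.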
Step 4: Verify the install
A quick sanity check is to import the module and log the device list. If you see the camera and microphone, the install worked:
import { mediaDevices } from 'react-native-webrtc';
mediaDevices.enumerateDevices().then(devices => {
console.log('Devices:', devices);
});
If the call returns an empty array, you’re probably running in Expo Go and need a development build instead. The same install pattern (permission strings plus native pod/Gradle linking) applies across most real-time mobile video SDKs.
Building a Basic Video Call: Code Walkthrough
Below is a minimal one-to-one video call using react-native-webrtc, a WebSocket signaling server, and Google’s public STUN servers. It’s stripped down to the essentials so the call flow is visible. In production you’d add a TURN server, error handling, and a call-state machine.
Capture the local stream
import {
mediaDevices,
RTCPeerConnection,
RTCSessionDescription,
RTCIceCandidate,
RTCView,
} from 'react-native-webrtc';
const startLocalStream = async () => {
const stream = await mediaDevices.getUserMedia({
audio: true,
video: {
facingMode: 'user',
width: 640,
height: 480,
frameRate: 30,
},
});
return stream;
};
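Inside your call screen component you'd typically hold the stream in state and release it on unmount. A minimal hook-based sketch (the state names are ours):

import { useEffect, useState } from 'react';

const [localStream, setLocalStream] = useState(null);

useEffect(() => {
  let stream;
  startLocalStream().then(s => {
    stream = s;
    setLocalStream(s);
  });
  // Release the camera and microphone when the screen unmounts
  return () => stream?.getTracks().forEach(track => track.stop());
}, []);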
Create the peer connection
const config = {
  iceServers: [
    // Public STUN server to discover this device's public address
    { urls: 'stun:stun.l.google.com:19302' },
    // Your TURN server, used as a relay when no direct path exists
    {
      urls: 'turn:turn.example.com:3478',
      username: 'user',
      credential: 'pass',
    },
  ],
};

const pc = new RTCPeerConnection(config);

// Send the local camera and microphone tracks to the remote peer
localStream.getTracks().forEach(track => {
  pc.addTrack(track, localStream);
});

// Fires when the remote peer's media arrives; setRemoteStream is React state
pc.ontrack = event => {
  setRemoteStream(event.streams[0]);
};

// Ship each ICE candidate to the other side over the signaling channel
pc.onicecandidate = event => {
  if (event.candidate) {
    socket.send(JSON.stringify({ type: 'ice', candidate: event.candidate }));
  }
};
Exchange offer and answer
The caller side:
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
socket.send(JSON.stringify({ type: 'offer', sdp: offer }));
The callee side, listening on the signaling socket:
socket.onmessage = async event => {
const message = JSON.parse(event.data);
if (message.type === 'offer') {
await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
const answer = await pc.createAnswer();
await pc.setLocalDescription(answer);
socket.send(JSON.stringify({ type: 'answer', sdp: answer }));
} else if (message.type === 'answer') {
await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
} else if (message.type === 'ice') {
await pc.addIceCandidate(new RTCIceCandidate(message.candidate));
}
};
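One race this stripped-down handler ignores: ICE messages can arrive over the socket before setRemoteDescription has run, and addIceCandidate fails in that window. A common fix is to queue candidates until the remote description is set. A sketch, reusing the same pc from above (the helper names are ours):

const pendingCandidates = [];

const handleRemoteCandidate = async candidate => {
  if (pc.remoteDescription) {
    await pc.addIceCandidate(new RTCIceCandidate(candidate));
  } else {
    pendingCandidates.push(candidate); // hold until the offer/answer lands
  }
};

const flushPendingCandidates = async () => {
  while (pendingCandidates.length) {
    await pc.addIceCandidate(new RTCIceCandidate(pendingCandidates.shift()));
  }
};

Call flushPendingCandidates right after each setRemoteDescription.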
Render the streams
<View style={{ flex: 1 }}>
{localStream && (
<RTCView
streamURL={localStream.toURL()}
style={{ width: 120, height: 160 }}
mirror={true}
objectFit="cover"
/>
)}
{remoteStream && (
<RTCView
streamURL={remoteStream.toURL()}
style={{ flex: 1 }}
objectFit="cover"
/>
)}
</View>
That’s the whole loop. The same pattern scales to audio-only calls (drop video from the constraints), data-channel messaging (pc.createDataChannel('chat')), and screen sharing (use mediaDevices.getDisplayMedia() on supported platforms). The logic on the WebRTC signaling server side is mostly relaying these JSON messages between the two clients.
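Screen sharing follows the same pattern. A sketch of swapping the camera track for a display-capture track, assuming the library's getSenders/replaceTrack follow the browser API (on iOS you'll also need a ReplayKit broadcast extension configured):

const shareScreen = async () => {
  const screenStream = await mediaDevices.getDisplayMedia();
  const screenTrack = screenStream.getVideoTracks()[0];

  // Swap the outgoing camera track in place; no full renegotiation needed
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  await sender.replaceTrack(screenTrack);
};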
Common Challenges and How to Handle Them
Most teams build a working react-native-webrtc demo in a day. Shipping it to production takes longer, because mobile WebRTC has a handful of edge cases the browser doesn’t.
Permissions that fail silently. On Android, you must request CAMERA and RECORD_AUDIO at runtime via PermissionsAndroid.request(). Forgetting this returns an empty stream from getUserMedia with no useful error.
NAT traversal. Roughly 15–20% of mobile calls can’t establish a direct peer-to-peer path because of symmetric NATs or strict carrier firewalls. You need a TURN server (typically coturn) to relay media for those calls. STUN alone is not enough.
Background and locked-screen calls. iOS aggressively suspends WebRTC when the app backgrounds. You need CallKit integration via react-native-callkeep and VoIP push notifications to keep audio flowing. On Android, a foreground service does the same job.
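The react-native-callkeep setup is roughly this; the option values are placeholders, so check the library's docs for the full list:

import RNCallKeep from 'react-native-callkeep';

await RNCallKeep.setup({
  ios: { appName: 'MyCallApp' },
  android: {
    alertTitle: 'Permissions required',
    alertDescription: 'This app needs phone account access for calls',
    cancelButton: 'Cancel',
    okButton: 'OK',
  },
});

// Show the native incoming-call UI when a VoIP push arrives.
// callUUID, handle, and callerName come from your push payload.
RNCallKeep.displayIncomingCall(callUUID, handle, callerName);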
Audio routing. Switching between earpiece, speakerphone, and Bluetooth headsets is fiddly. The community uses react-native-incall-manager to handle proximity sensor wake-locks and audio mode changes consistently across both platforms.
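react-native-incall-manager's surface is small; a typical call lifecycle looks something like this sketch:

import InCallManager from 'react-native-incall-manager';

// On call start: set call audio mode, proximity sensor, and keep-awake
InCallManager.start({ media: 'video' });

// Toggle speakerphone from your UI
InCallManager.setForceSpeakerphoneOn(true);

// On call end: restore normal audio routing
InCallManager.stop();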
Expo Go. WebRTC is not available in Expo Go. You must use a development build with the config plugin or eject to bare workflow.
Memory and battery. Video encoding is expensive. Always release the peer connection (pc.close()) and stop tracks (track.stop()) when a call ends, or you’ll see memory grow on every reconnect.
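A teardown sketch covering both, assuming the localStream, pc, and socket variables from the walkthrough:

const endCall = () => {
  // Stop capture first so the camera indicator turns off immediately
  localStream?.getTracks().forEach(track => track.stop());

  // Release the native peer connection and its encoder resources
  pc.close();

  // Tear down signaling so the server can clean up the session
  socket.close();
};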
More than two participants. A pure peer-to-peer mesh stops working past three or four users — every client uploads video to every other client, which kills mobile bandwidth and CPU. For group calls you need a media server doing SFU (Selective Forwarding Unit) routing.
Production Architecture: When Peer-to-Peer Stops Being Enough
Up to this point we’ve been building two-person calls. That’s where most React Native WebRTC tutorials stop, and it’s also where most production apps need more infrastructure: the line between “I have a working demo” and “I have a video product.”
The pattern shifts depending on what you’re actually building.
Group video calls. A peer-to-peer mesh of N participants requires every device to send N-1 outgoing streams. By the time you have four people on a call, mobile devices are uploading several megabits per second and burning through battery. Production apps route through an SFU — a server that receives one stream from each client and forwards it to the others. Open-source options include mediasoup, Janus, and LiveKit; each adds operational complexity but solves the bandwidth problem. A managed video conferencing API hides this complexity if you’d rather not run an SFU yourself.
One-to-many broadcasts. If 1,000 people need to watch one host, WebRTC stops being the right transport. The economics of low-latency streaming flip — you ingest from the host (over RTMP or WebRTC), transcode, and deliver via HLS or LL-HLS over a CDN. That’s effectively what every live-shopping, sports, and events app does. LiveAPI’s live streaming API handles the ingest-to-CDN path: RTMP/SRT/RTSP in, HLS out, multi-CDN delivery via Akamai, Cloudflare, and Fastly, with adaptive bitrate streaming so viewers on weak mobile connections still get a watchable feed.
Recording and replay. Two peers in a react-native-webrtc call have no recording by default — there’s no server in the middle. To record, you either pipe one peer’s track to a server-side recorder or route the whole call through an SFU that supports recording. If you also need viewers to catch up after the live segment ends, look at live-to-VOD workflows that automatically convert the recording to an on-demand asset.
Multistreaming. Hosts who want to broadcast a React Native WebRTC stream to YouTube, Twitch, and Facebook simultaneously typically push the stream from a server-side encoder to a multistreaming API. LiveAPI’s Multistream API sends the same source to 30+ destinations from a single ingest URL, which is hard to replicate by adding peers.
Hybrid architecture. The most common production setup is: peer-to-peer or SFU for the interactive call, with an optional RTMP push to LiveAPI for broadcast viewers. The interactive participants get sub-500ms latency, the broader audience gets reliable HLS playback at scale, and the host runs one app.
When to Use React Native WebRTC vs Alternatives
The right choice depends on the call shape and how much infrastructure you want to own.
Use react-native-webrtc directly when:
- Calls are one-to-one or small group (≤4 participants)
- You need full control over the UI and call logic
- You can run a signaling server and TURN cluster
- Sub-500ms latency is non-negotiable
- You want to avoid per-minute SDK pricing
Use a commercial video SDK when:
- You need group calls without operating an SFU
- You want call recording, transcription, or moderation features out of the box
- Your team doesn’t have WebRTC experience
- Time-to-market matters more than control
Use a streaming API like LiveAPI when:
- The shape is one-to-many broadcast (≥10 viewers)
- You need HLS playback on TVs, browsers, and players you don’t control
- You want multistreaming to social platforms
- Recording and on-demand replay are part of the product
Plenty of apps mix the three: react-native-webrtc for host-to-co-host interaction, an SFU for guest panelists, and a streaming API for the audience.
React Native WebRTC FAQ
Does react-native-webrtc work with Expo?
Not in Expo Go, because Expo Go doesn’t ship native WebRTC binaries. You can use it in any Expo project that runs a development build via EAS Build or npx expo prebuild, with the @config-plugins/react-native-webrtc config plugin handling the iOS and Android native setup.
Can I build a video call without a signaling server?
No. WebRTC requires SDP offer/answer and ICE candidate exchange before a peer connection can be established, and that exchange has to happen over a channel WebRTC doesn’t provide. Most teams use a small WebSocket service (Socket.IO, plain ws, or Pusher/Ably for managed signaling) to broker the messages.
Do I need a TURN server?
For local-network or testing scenarios, STUN alone often works. For production, yes — somewhere between 15% and 20% of real-world calls fail without TURN because of symmetric NATs, mobile carriers, or corporate firewalls. coturn is the standard open-source TURN server. Hosted TURN providers like Twilio and Xirsys exist if you don’t want to operate it yourself.
How many people can join a react-native-webrtc call?
Pure peer-to-peer mesh is practical up to about 4 participants on mobile before bandwidth and CPU become limiting. Beyond that, route the call through an SFU (mediasoup, Janus, LiveKit) so each client sends one upstream and receives N-1 downstreams from the server.
Is react-native-webrtc the same WebRTC as the browser?
Yes — the package wraps Google’s libwebrtc, the same C++ library Chromium ships. The JavaScript API mirrors the W3C WebRTC spec, so peer connections established between a browser and a React Native client work without protocol changes.
Can I record a react-native-webrtc call?
Not out of the box, because there’s no server in a pure peer-to-peer call. Options include using MediaRecorder-style libraries on one peer, routing the call through an SFU that supports server-side recording, or pushing one peer’s stream via RTMP to a video infrastructure provider for cloud recording and on-demand playback.
How do I handle background calls on iOS?
Combine react-native-callkeep (for CallKit integration), VoIP push notifications via PushKit, and a background audio mode so the OS keeps the WebRTC peer connection alive when the app is locked or backgrounded. Without CallKit, iOS will kill the connection within seconds of backgrounding.
What’s the difference between react-native-webrtc and react-native-twilio-video-webrtc?
react-native-webrtc is the raw WebRTC binding — you control the signaling, TURN, and SFU. react-native-twilio-video-webrtc is Twilio’s SDK that abstracts all of that behind their video infrastructure, in exchange for per-minute pricing and less customization. Pick the first if you want control, the second if you want fewer moving parts.
Ship Faster with the Right Backend
Building real-time video on mobile is a lot of moving parts: native modules, signaling, TURN, SFUs, recording, and a CDN for anyone watching from outside the call. react-native-webrtc solves the client side beautifully — what’s left is the infrastructure underneath.
LiveAPI gives you the rest: RTMP, SRT, and RTSP ingest from your React Native client; instant encoding to HLS; multi-CDN delivery via Akamai, Cloudflare, and Fastly; multistreaming to 30+ social destinations; and automatic live-to-VOD recording. You write the call experience, LiveAPI handles the path from “stream leaves the device” to “stream plays everywhere.”
Get started with LiveAPI and ship your video product in days, not months.