WebRTC Input and Outputs
This guide is a continuation of the Input and Outputs article. It shows, with code examples, how to publish your source stream to a WebRTC room for translation and how to subscribe to the translated tracks published by Palabra.
Note: There are two sections: WebRTC input and WebRTC output. They are independent - you can use either without the other. For example, you can publish your source broadcast via RTMP (or another input) and output via WebRTC, or publish via WebRTC and output to RTMP/HLS/SRT. You don't need to use both WebRTC input and output if one meets your requirements.
WebRTC Input: Publishing the source tracks
If you specified the webrtc_push protocol as the INPUT for your broadcast, the Palabra API responds with a `url` and `token` in the `input` section of the response:
{
  "ok": true,
  "data": {
    "id": "c1236d4d-73eb-409d-b8f5-3780fb0d8a10",
    // ...
    "input": {
      "protocol": "webrtc_push",
      "url": "<WEBRTC_SERVER>",
      "token": "<ACCESS_TOKEN>"
    }
  }
}
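For reference, here is a minimal TypeScript sketch of reading those values from the parsed response. The response shape follows the JSON above; how you obtain `sessionResponse` in your app is up to you, so treat that variable as a placeholder.
// Sketch: extract the WebRTC connection details from the session-creation
// response shown above. `sessionResponse` is a placeholder for however you
// obtain that parsed JSON in your app.
interface WebRtcInput {
  protocol: "webrtc_push";
  url: string;   // <WEBRTC_SERVER>
  token: string; // <ACCESS_TOKEN>
}

interface CreateSessionResponse {
  ok: boolean;
  data: {
    id: string;
    input: WebRtcInput;
    // ...
  };
}

const getWebRtcInput = (sessionResponse: CreateSessionResponse): WebRtcInput =>
  sessionResponse.data.input;

// Usage (hypothetical): const { url, token } = getWebRtcInput(sessionResponse);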
Use the `url` and `token` values to connect to the Palabra-hosted WebRTC room and publish your source MediaStream audio/video tracks using LiveKit:
Install the LiveKit client:
npm install livekit-client
Connect to the Palabra WebRTC room:
import { Room } from "livekit-client";

// Connect to the Palabra-hosted WebRTC room
const connectTranslationRoom = async (url: string, token: string): Promise<Room> => {
  try {
    const room = new Room();
    await room.connect(url, token, { autoSubscribe: true });
    return room;
  } catch (e) {
    console.error(e);
    throw e;
  }
};
Publish the source MediaStream audio and video tracks:
import {
  LocalAudioTrack,
  LocalVideoTrack,
  Room,
} from "livekit-client";

// 1. Get both audio + video from the device
const media = await navigator.mediaDevices.getUserMedia({
  audio: { channelCount: 1 },
  video: true,
});

// 2. Split tracks
const [rawAudioTrack] = media.getAudioTracks();
const [rawVideoTrack] = media.getVideoTracks();

// 3. Wrap them in LiveKit track classes
const localAudioTrack = new LocalAudioTrack(rawAudioTrack);
const localVideoTrack = new LocalVideoTrack(rawVideoTrack);

// 4. Publish audio to the LiveKit WebRTC room
const publishAudioTrack = async (room: Room, track: LocalAudioTrack) => {
  try {
    await room.localParticipant.publishTrack(track, {
      dtx: false, // Important: keep dtx: false
      red: false,
      audioPreset: {
        maxBitrate: 32000,
        priority: "high",
      },
    });
  } catch (e) {
    console.error("Error while publishing audio track:", e);
    throw e;
  }
};

// 5. Publish video to the LiveKit WebRTC room
const publishVideoTrack = async (room: Room, track: LocalVideoTrack) => {
  try {
    await room.localParticipant.publishTrack(track, {
      videoCodec: "h264", // Important: keep videoCodec: "h264"
    });
  } catch (e) {
    console.error("Error while publishing video track:", e);
    throw e;
  }
};

// 6. Usage example
await publishAudioTrack(room, localAudioTrack);
await publishVideoTrack(room, localVideoTrack);
Notes
- Video is optional; you can publish audio‑only to start translation.
- Ensure `dtx` is disabled when publishing your local audio track.
- Ensure `videoCodec` is set to `"h264"` when publishing your local video track.
After you publish your source audio track, Palabra starts the translation pipeline and re‑streams the translated audio/video to your configured outputs.
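Putting it together, here is a sketch of a complete publish flow. It reuses the connectTranslationRoom, publishAudioTrack, and publishVideoTrack helpers (and the imports) defined above; the `url` and `token` arguments are the values from the `input` section of the session response.
// Sketch: end-to-end publish flow using the helpers defined above.
const startPublishing = async (url: string, token: string): Promise<Room> => {
  // Connect to the Palabra-hosted WebRTC room
  const room = await connectTranslationRoom(url, token);

  // Capture mono audio (and, optionally, video) from the device
  const media = await navigator.mediaDevices.getUserMedia({
    audio: { channelCount: 1 },
    video: true, // optional: audio-only is enough to start translation
  });

  const [rawAudioTrack] = media.getAudioTracks();
  const [rawVideoTrack] = media.getVideoTracks();

  await publishAudioTrack(room, new LocalAudioTrack(rawAudioTrack));
  if (rawVideoTrack) {
    await publishVideoTrack(room, new LocalVideoTrack(rawVideoTrack));
  }

  return room;
};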
WebRTC Output: Subscribing to translated tracks
If you selected webrtc_push as an OUTPUT, do the following to get your translated stream:
- Call the Get WebRTC room data request to get the server `url` and access `token`.
- Then connect to the WebRTC server using LiveKit:
npm install livekit-client
import { Room } from "livekit-client";

// Connect to the Palabra-hosted WebRTC room
const connectTranslationRoom = async (url: string, token: string): Promise<Room> => {
  try {
    const room = new Room();
    await room.connect(url, token, { autoSubscribe: true });
    return room;
  } catch (e) {
    console.error(e);
    throw e;
  }
};
- Add a handler for the `TrackSubscribed` event:
import {
  RemoteParticipant,
  RemoteTrack,
  RemoteTrackPublication,
  RoomEvent,
  Track,
} from "livekit-client";

// Handler to play the subscribed translated audio in the browser (example)
const playTranslationInBrowser = (
  track: RemoteTrack,
  pub: RemoteTrackPublication,
  participant: RemoteParticipant
) => {
  // Track names have the form "<type>_<langCode>"
  const trackName = pub.trackName;
  const [trackType, trackLangCode] = trackName.split("_"); // trackType: "original" | "translation"

  if (track.kind === Track.Kind.Audio && trackType === "translation") {
    const mediaStream = new MediaStream([track.mediaStreamTrack]);
    // Your HTML audio element
    const audioElement = document.getElementById("remote-audio") as HTMLAudioElement | null;
    if (audioElement) {
      audioElement.srcObject = mediaStream;
      audioElement.play();
    } else {
      console.error("Audio element not found!");
    }
  }
};

// Add a handler for the TrackSubscribed event
room.on(RoomEvent.TrackSubscribed, playTranslationInBrowser);
Palabra publishes translated tracks to the WebRTC room. With { autoSubscribe: true }, you are automatically subscribed to new tracks as they appear. Handle RoomEvent.TrackSubscribed to access each track (for example, to play its audio in the browser).
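As a final sketch, the pieces above can be wired together on the subscriber side. This reuses connectTranslationRoom and playTranslationInBrowser from above; the `<video id="remote-video">` element and the presence of translated video depend on your page and session configuration, so treat those parts as assumptions.
import { RemoteTrack, Room, RoomEvent, Track } from "livekit-client";

// Sketch: subscriber-side wiring, reusing the helpers defined above.
const startListening = async (url: string, token: string): Promise<Room> => {
  const room = await connectTranslationRoom(url, token);

  // Translated audio: played through the <audio id="remote-audio"> element
  room.on(RoomEvent.TrackSubscribed, playTranslationInBrowser);

  // Translated video (if your session outputs video): attach to a <video> element.
  // The "remote-video" element id is an assumption about your page.
  room.on(RoomEvent.TrackSubscribed, (track: RemoteTrack) => {
    if (track.kind === Track.Kind.Video) {
      const videoElement = document.getElementById("remote-video") as HTMLVideoElement | null;
      if (videoElement) {
        track.attach(videoElement);
      }
    }
  });

  // Detach media elements when tracks go away
  room.on(RoomEvent.TrackUnsubscribed, (track: RemoteTrack) => {
    track.detach();
  });

  return room;
};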