livekit-client



    JavaScript/TypeScript client SDK for LiveKit

    livekit-client is the official client SDK for LiveKit. With it, you can add real-time video and audio to your web apps.


    Docs and guides: https://docs.livekit.io

    SDK reference



    yarn add livekit-client


    npm install livekit-client --save


    Examples below are in TypeScript. If you're using JS/CommonJS, replace the import statements with require:

    const livekit = require('livekit-client');
    const room = new livekit.Room(...);
    await room.connect(...);

    Connecting to a room and publishing video & audio

    import {
      LocalParticipant,
      LocalTrackPublication,
      Participant,
      RemoteParticipant,
      RemoteTrack,
      RemoteTrackPublication,
      Room,
      RoomEvent,
      Track,
      VideoPresets,
    } from 'livekit-client';

    // creates a new room with options
    const room = new Room({
      // automatically manage subscribed video quality
      adaptiveStream: true,
      // optimize publishing bandwidth and CPU for published tracks
      dynacast: true,
      // default capture settings
      videoCaptureDefaults: {
        resolution: VideoPresets.h720.resolution,
      },
    });

    // set up event listeners
    room
      .on(RoomEvent.TrackSubscribed, handleTrackSubscribed)
      .on(RoomEvent.TrackUnsubscribed, handleTrackUnsubscribed)
      .on(RoomEvent.ActiveSpeakersChanged, handleActiveSpeakerChange)
      .on(RoomEvent.Disconnected, handleDisconnect)
      .on(RoomEvent.LocalTrackUnpublished, handleLocalTrackUnpublished);

    // connect to room
    await room.connect('ws://localhost:7800', token);
    console.log('connected to room', room.name);

    // publish local camera and mic tracks
    await room.localParticipant.enableCameraAndMicrophone();

    function handleTrackSubscribed(
      track: RemoteTrack,
      publication: RemoteTrackPublication,
      participant: RemoteParticipant,
    ) {
      if (track.kind === Track.Kind.Video || track.kind === Track.Kind.Audio) {
        // attach it to a new HTMLVideoElement or HTMLAudioElement
        const element = track.attach();
        // add the element to the page, e.g. to a container of your choosing
        document.body.appendChild(element);
      }
    }

    function handleTrackUnsubscribed(
      track: RemoteTrack,
      publication: RemoteTrackPublication,
      participant: RemoteParticipant,
    ) {
      // remove tracks from all attached elements
      track.detach();
    }

    function handleLocalTrackUnpublished(publication: LocalTrackPublication, participant: LocalParticipant) {
      // when local tracks are ended, update UI to remove them from rendering
      publication.track?.detach();
    }

    function handleActiveSpeakerChange(speakers: Participant[]) {
      // show UI indicators when participant is speaking
    }

    function handleDisconnect() {
      console.log('disconnected from room');
    }

    In order to connect to a room, you first need to create an access token.

    See the access token docs for details.
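
    Tokens are minted server-side. As a rough sketch (using the companion livekit-server-sdk package; the key, secret, identity, and room name below are placeholder values):

    import { AccessToken } from 'livekit-server-sdk';

    // never expose your API secret to the browser; generate tokens on your server
    const at = new AccessToken('api-key', 'api-secret', {
      identity: 'user-identity',
    });
    // grant permission to join a specific room
    at.addGrant({ roomJoin: true, room: 'room-name' });
    // hand this JWT to the client, which passes it to room.connect()
    const token = await at.toJwt();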

    Handling common track types

    While LiveKit is designed to be flexible, we've added a few shortcuts that make working with common track types simple. For a user's camera, microphone, and screen share, you can enable them with the following LocalParticipant methods:

    const p = room.localParticipant;
    // turn on the local user's camera and mic, this may trigger a browser prompt
    // to ensure permissions are granted
    await p.setCameraEnabled(true);
    await p.setMicrophoneEnabled(true);
    // start sharing the user's screen, this will trigger a browser prompt to select
    // the screen to share.
    await p.setScreenShareEnabled(true);
    // disable camera to mute them, when muted, the user's camera indicator will be turned off
    await p.setCameraEnabled(false);

    Similarly, you can access these common track types on the other participants' end.

    // get a RemoteParticipant by their sid
    const p = room.participants.get('participant-sid');
    if (p) {
      // if the other user has enabled their camera, attach it to a new HTMLVideoElement
      if (p.isCameraEnabled) {
        const track = p.getTrack(Track.Source.Camera);
        if (track?.isSubscribed) {
          const videoElement = track.videoTrack?.attach();
          // do something with the element
        }
      }
    }

    Creating a track prior to creating a room

    In some cases, it may be useful to create a track before creating a room. For example, when building a staging area so the user may check their own camera.

    You can use our global track creation functions for this:

    import { createLocalTracks } from 'livekit-client';

    const tracks = await createLocalTracks({
      audio: true,
      video: true,
    });
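
    Once you've joined the room, the same tracks can be published. A brief sketch (assuming the tracks array above, an existing <div id="preview"> element, and the Track import from the earlier example):

    // preview the local camera before joining
    const videoTrack = tracks.find((t) => t.kind === Track.Kind.Video);
    if (videoTrack) {
      document.getElementById('preview')?.appendChild(videoTrack.attach());
    }

    // later, after room.connect(...), publish the pre-created tracks
    for (const track of tracks) {
      await room.localParticipant.publishTrack(track);
    }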

    Publish tracks from any source

    LiveKit lets you publish any track as long as it can be represented by a MediaStreamTrack. You can specify a name on the track in order to identify it later.

    const pub = await room.localParticipant.publishTrack(mediaStreamTrack, {
      name: 'mytrack',
      simulcast: true,
      // if this should be treated like a camera feed, tag it as such
      // supported known sources are .Camera, .Microphone, .ScreenShare
      source: Track.Source.Camera,
    });

    // you may mute or unpublish the track later
    await pub.mute();
    await room.localParticipant.unpublishTrack(mediaStreamTrack);

    Device management APIs

    Users may have multiple input and output devices available. LiveKit will automatically use the one that's deemed as the default device on the system. You may also list and specify an alternative device to use.

    We use the same deviceId values as those returned by MediaDevices.enumerateDevices().

    Example: listing and selecting a microphone device

    // list all microphone devices
    const devices = await Room.getLocalDevices('audioinput');
    // select last device
    const device = devices[devices.length - 1];
    // in the current room, switch to the selected device and set
    // it as default audioinput in the future.
    await room.switchActiveDevice('audioinput', device.deviceId);
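
    The same pattern applies to the other device kinds, 'videoinput' and 'audiooutput'. For instance, a sketch of switching to a specific camera (same APIs as above; which device to pick is up to you):

    // list all camera devices
    const cameras = await Room.getLocalDevices('videoinput');
    const camera = cameras[0];
    if (camera) {
      // switch the published camera track to this device
      await room.switchActiveDevice('videoinput', camera.deviceId);
    }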

    You can also switch devices given a constraint. This could be useful on mobile devices to switch to a back-facing camera:

    // videoTrack is a LocalVideoTrack you've created or obtained from a local publication
    await videoTrack.restartTrack({
      facingMode: 'environment',
    });

    Handling device failures

    When creating tracks using LiveKit APIs (connect, createLocalTracks, setCameraEnabled, etc.), it's possible to encounter errors with the underlying media device. In those cases, LiveKit will emit RoomEvent.MediaDevicesError.

    You can use the helper MediaDeviceFailure.getFailure(error) to determine the specific reason for the error:

    • PermissionDenied - the user disallowed capturing devices
    • NotFound - the particular device isn't available
    • DeviceInUse - device is in use by another process (happens on Windows)

    These distinctions enable you to provide more specific messaging to the user.

    You could also retrieve the last error with LocalParticipant.lastCameraError and LocalParticipant.lastMicrophoneError.
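
    A minimal sketch of acting on these failures (the imports come from livekit-client; the UI reactions are placeholders):

    import { MediaDeviceFailure, RoomEvent } from 'livekit-client';

    room.on(RoomEvent.MediaDevicesError, (e: Error) => {
      const failure = MediaDeviceFailure.getFailure(e);
      if (failure === MediaDeviceFailure.PermissionDenied) {
        // ask the user to grant camera/microphone permissions
      } else if (failure === MediaDeviceFailure.NotFound) {
        // the requested device isn't available, offer a device picker
      } else if (failure === MediaDeviceFailure.DeviceInUse) {
        // another process holds the device (common on Windows), suggest closing it
      }
    });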

    Audio playback

    Browsers can be restrictive with regards to audio playback that is not initiated by user interaction. What each browser considers as user interaction can vary by vendor (for example, Safari on iOS is very restrictive).

    LiveKit will attempt to autoplay all audio tracks when you attach them to audio elements. However, if that fails, we'll notify you via RoomEvent.AudioPlaybackStatusChanged. Room.canPlaybackAudio indicates whether audio playback is permitted. LiveKit takes an optimistic approach, so it's possible for this value to change from true to false when we encounter a browser error.

    In cases where user interaction is required, LiveKit provides Room.startAudio to start audio playback. This function must be triggered by an onclick or ontap event handler. Once audio playback succeeds in a session, additional audio tracks can be played without further user interaction.

    room.on(RoomEvent.AudioPlaybackStatusChanged, () => {
      if (!room.canPlaybackAudio) {
        // UI is necessary, e.g. a button prompting the user to enable audio
        button.onclick = () => {
          // startAudio *must* be called in a click/tap handler.
          room.startAudio().then(() => {
            // successful, UI can be removed now
            button.remove();
          });
        };
      }
    });

    Configuring logging

    This library uses loglevel for its internal logs. You can change the effective log level with the logLevel field in ConnectOptions. The setLogExtension method lets you hook into LiveKit's internal logs and send them to a third-party logging service:

    import { LogLevel, setLogExtension } from 'livekit-client';

    setLogExtension((level: LogLevel, msg: string, context: object) => {
      const enhancedContext = { ...context, timeStamp: Date.now() };
      if (level >= LogLevel.debug) {
        console.log(level, msg, enhancedContext);
      }
    });


    SDK Sample

    example/sample.ts contains a demo webapp that uses the SDK. Run it with yarn sample.

    Browser Support

    Browser          Desktop OS             Mobile OS
    Chrome           Windows, macOS, Linux  Android
    Firefox          Windows, macOS, Linux  Android
    Safari           macOS                  iOS
    Edge (Chromium)  Windows, macOS

    We aim to support a broad range of browser versions by transpiling the library code with Babel. You can have a look at the "browserslist" section of package.json for more details.

    Note that the library requires some specific browser APIs to be present. You can check general compatibility with the helper function isBrowserSupported(). Support for more modern features like adaptiveStream and dynacast can be checked for with supportsAdaptiveStream() and supportsDynacast().
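
    For example, a capability check before creating a room could look like this (Room, isBrowserSupported, supportsAdaptiveStream, and supportsDynacast are all exported by livekit-client):

    import { Room, isBrowserSupported, supportsAdaptiveStream, supportsDynacast } from 'livekit-client';

    if (!isBrowserSupported()) {
      // show an unsupported-browser message instead of connecting
    } else {
      const room = new Room({
        // only enable these optimizations where the browser supports them
        adaptiveStream: supportsAdaptiveStream(),
        dynacast: supportsDynacast(),
      });
    }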

    If you are targeting legacy browsers but still want adaptiveStream functionality, you'll likely need to use polyfills for ResizeObserver and IntersectionObserver.



