    Leia Capture

    Leia Capture lets you perform liveness challenges, take pictures, and record videos in the browser.

    Installation

    Via npm:

    npm install leia-capture

    Via script tags:

    <script src="https://unpkg.com/@tensorflow/tfjs-core@3.3.0/dist/tf-core.js"></script>
    <script src="https://unpkg.com/@tensorflow/tfjs-backend-cpu@3.3.0/dist/tf-backend-cpu.js"></script>
    <script src="https://unpkg.com/@tensorflow/tfjs-backend-webgl@3.3.0/dist/tf-backend-webgl.js"></script>
    <script src="https://unpkg.com/@tensorflow/tfjs-backend-wasm@3.3.0/dist/tf-backend-wasm.js"></script>
    <script src="https://unpkg.com/@tensorflow/tfjs-layers@3.3.0/dist/tf-layers.js"></script>
    <script src="https://unpkg.com/@tensorflow/tfjs-converter@3.3.0/dist/tf-converter.js"></script>
    <script src="https://unpkg.com/@tensorflow-models/face-landmarks-detection@0.0.3/dist/face-landmarks-detection.js"></script>
    <script src="https://unpkg.com/leia-capture@1.0.1/umd/leia-capture.umd.js"></script>

    Usage

    For npm:

    import * as LeiaCapture from 'leia-capture'

    To avoid errors when using Angular, add this to your package.json:

    "browser": {
      "os": false
    }

    Basic face challenge (type can be 'TURN_LEFT', 'TURN_RIGHT', 'OPEN_MOUTH'):

    // Create a camera
    const camera = new LeiaCapture()
    
    // Register event listeners
    window.addEventListener("cameraReady", () => {
      // Set your overlay and start a challenge once the camera is ready
      camera.setOverlay(overlayDiv)
      camera.startFaceChallenge("TURN_LEFT", "challenge01")
    })
    
    // If you record challenges, the resulting video is returned through this event
    window.addEventListener("videoProcessed", event => {
      const video = event.detail.blob
      const name = event.detail.name
      // Do something with the video
    })
    
    // Start the camera once the Facemesh model is ready
    // Loading can take a while depending on the device, so it's better not to load the model while the camera is running
    camera.loadFacemeshModel().then(() => {
      // A container div is needed to host the camera
      camera.start(containerDiv, "front")
    })

    Basic document capture:

    // Create a camera
    const camera = new LeiaCapture()
    
    // Callback invoked with the captured picture as a blob
    function onTakePicture(blob) {
      // Do something with the picture
    }
    
    // Add a callback to your capture button so the user can take a picture
    myOverlayCaptureButton.onclick = function() {
      camera.takePicture(onTakePicture)
      // You can also record a video
      camera.startRecording("document01")
    }
    
    // Register event listeners
    window.addEventListener("cameraReady", () => {
      // Set your overlay once the camera is ready
      camera.setOverlay(overlayDiv)
    })
    
    window.addEventListener("videoProcessed", event => {
      const video = event.detail.blob
      const name = event.detail.name
      // Do something with the video
    })
    
    // Start the camera once the Facemesh model is ready
    // Loading can take a while depending on the device, so it's better not to load the model while the camera is running
    camera.loadFacemeshModel().then(() => {
      // A container div is needed to host the camera
      camera.start(containerDiv, "back")
    })

    API

    start(container, facingMode, videoWidth, videoHeight, frameRate, drawFaceMask)

    Start the camera in a given container

    Params:

    • container - the HTML element to insert the camera into
    • facingMode - the sensor to use. Can be 'front' or 'back' (default: 'front')
    • videoWidth - the video width. Cannot be below 1280 (default: 1280)
    • videoHeight - the video height. Cannot be below 720 (default: 720)
    • frameRate - the frame rate. Cannot be below 25 (default: 25)
    • drawFaceMask - if true, the detected face mask is drawn (default: true)
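
    A minimal sketch calling start with every parameter spelled out, reusing the camera from the Usage examples; containerDiv is assumed to be an element that already exists in your page:

    // Assumes an existing element in your page
    const containerDiv = document.getElementById("camera-container")
    
    // Back camera, 1280x720 at 25 fps, without drawing the detected face mask
    camera.start(containerDiv, "back", 1280, 720, 25, false)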

    stop()

    Stop the camera and remove it from its container

    setOverlay(overlay)

    Display an overlay on top of the video

    Params:

    • overlay - an HTML element

    startFaceChallenge(type, videoOutputName, record)

    Start a face challenge

    Params:

    • type - a challenge type. Can be 'TURN_LEFT', 'TURN_RIGHT', 'OPEN_MOUTH'
    • videoOutputName - a name for the recorded video, if record is set to true (default: 'challenge')
    • record - if true, the current challenge will be automatically recorded (default: true)
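
    For example, a challenge can be run without recording a video; a sketch reusing the camera from the examples above:

    // Run an 'OPEN_MOUTH' challenge without recording
    camera.startFaceChallenge("OPEN_MOUTH", "challenge02", false)
    
    // The end of the challenge is signaled by the 'challengeComplete' event
    window.addEventListener("challengeComplete", () => {
      // React to the completed challenge
    })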

    startRecording(videoOutputName)

    Start recording a video. Note: during challenges, you don't need to call this method if you call 'startFaceChallenge' with 'record' set to true

    Params:

    • videoOutputName - a name for the recorded video

    stopRecording(processVideo)

    Stop recording a video. Note: during challenges, you don't need to call this method if you call 'startFaceChallenge' with 'record' set to true

    Params:

    • processVideo - if true, the recorded video is processed and the 'videoProcessing' and 'videoProcessed' events are dispatched
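
    A sketch of recording manually outside of a challenge, assuming the camera has already been started:

    // Start recording (not needed when a challenge records for you)
    camera.startRecording("document01")
    
    // ...later, stop recording and process the video
    camera.stopRecording(true)
    
    // The processed video arrives through the 'videoProcessed' event
    window.addEventListener("videoProcessed", event => {
      const video = event.detail.blob
      const name = event.detail.name
      // Upload or store the video
    })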

    takePicture(callback, quality, area)

    Take a picture

    Params:

    • callback - a callback invoked when the picture is returned as a blob. It must have the signature nameOfYourMethod(pictureBlob)
    • quality - quality of the returned picture, from 0.0 to 1.0 (default: 1.0)
    • area - (optional) an area of capture. Must be in this format [x, y, width, height]

    getVideoDimensions()

    Get video dimensions in this format: [width, height]
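
    A sketch combining getVideoDimensions and takePicture to capture the left half of the frame at reduced quality; previewImage is a hypothetical <img> element:

    // [width, height] of the running video
    const [width, height] = camera.getVideoDimensions()
    
    // Capture the left half of the frame at 80% quality
    camera.takePicture(
      pictureBlob => {
        // previewImage is assumed to be an <img> element in your page
        previewImage.src = URL.createObjectURL(pictureBlob)
      },
      0.8,
      [0, 0, width / 2, height]
    )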

    detectAndDrawFace()

    Manually start face detection
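
    The interplay between drawFaceMask and detectAndDrawFace isn't documented here; the hedged sketch below only assumes that detectAndDrawFace can be called once the camera is running:

    // Start the camera without automatic face mask drawing
    camera.start(containerDiv, "front", 1280, 720, 25, false)
    
    // Later, trigger face detection manually
    camera.detectAndDrawFace()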

    Events

    cameraReady

    Triggered when the camera is ready to capture

    videoProcessing

    Triggered when a video is being processed

    videoProcessed

    Triggered when a video has been processed

    Params:

    • blob - a video blob
    • name - the video output name (available as event.detail.name, see the examples above)

    loadingModel

    Triggered when the Facemesh model is loading

    loadedModel

    Triggered when the Facemesh model has been loaded

    faceCentered

    Triggered during a challenge when a face is centered and in front of the camera

    faceIn

    Triggered when a face is in the center area

    faceOut

    Triggered when a face is out of the center area

    faceStartedChallenge

    Triggered when a face starts moving as required by the challenge

    challengeComplete

    Triggered when a challenge has been completed

    screenOrientationChanged

    Triggered when the device screen changes orientation (portrait <--> landscape)

    deviceOrientation

    Triggered when the device moves on the z, x, or y axis

    Params:

    • z - rotation left or right
    • x - tilt up or down
    • y - tilt left or right

    noFaceDetected

    Triggered when no face was detected
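
    A sketch wiring up the model-loading and challenge events; apart from videoProcessed and deviceOrientation, the event payloads are not documented here, so the listeners below ignore event.detail. showSpinner, hideSpinner and showHint are hypothetical UI helpers:

    // Model loading lifecycle
    window.addEventListener("loadingModel", () => showSpinner())
    window.addEventListener("loadedModel", () => hideSpinner())
    
    // Guide the user into the center area
    window.addEventListener("faceOut", () => showHint("Center your face"))
    window.addEventListener("noFaceDetected", () => showHint("No face detected"))
    window.addEventListener("faceIn", () => showHint("Hold still"))
    
    // Challenge progress
    window.addEventListener("faceCentered", () => showHint("Start the movement"))
    window.addEventListener("faceStartedChallenge", () => showHint("Keep going"))
    window.addEventListener("challengeComplete", () => showHint("Done!"))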

    License

    MIT
