
    NativeScript Speech Recognition


This is the plugin demo in action: while recognizing Dutch 🇳🇱, and after recognizing American English 🇺🇸.

    Installation

    From the command prompt go to your app's root folder and execute:

    NativeScript 7+:

    ns plugin add nativescript-speech-recognition

    NativeScript < 7:

    tns plugin add nativescript-speech-recognition@1.5.0
    

    Testing

You'll need to test this on a real device, as simulators and emulators don't have speech recognition capabilities.

    API

    available

Depending on the OS version, a speech engine may not be available.

    JavaScript

    // require the plugin
    var SpeechRecognition = require("nativescript-speech-recognition").SpeechRecognition;
     
    // instantiate the plugin
    var speechRecognition = new SpeechRecognition();
     
    speechRecognition.available().then(
      function(available) {
        console.log(available ? "YES!" : "NO");
      }
    );

    TypeScript

    // import the plugin
    import { SpeechRecognition } from "nativescript-speech-recognition";
     
    class SomeClass {
      private speechRecognition = new SpeechRecognition();
      
      public checkAvailability(): void {
        this.speechRecognition.available().then(
          (available: boolean) => console.log(available ? "YES!" : "NO"),
          (err: string) => console.log(err)
        );
      }
    }

    requestPermission

You can let startListening handle permissions when needed, but if you want more control over when the permission popups are shown, use this function:

    this.speechRecognition.requestPermission().then((granted: boolean) => {
      console.log("Granted? " + granted);
    });

    startListening

    On iOS this will trigger two prompts:

The first prompt asks the user to allow Apple to analyze the voice input. The user will see a consent screen, which you can extend with your own message by adding a fragment like this to app/App_Resources/iOS/Info.plist:

    <key>NSSpeechRecognitionUsageDescription</key>
    <string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>

    The second prompt requests access to the microphone:

    <key>NSMicrophoneUsageDescription</key>
    <string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>

    TypeScript

    // import the options
    import { SpeechRecognitionTranscription } from "nativescript-speech-recognition";
     
    this.speechRecognition.startListening(
      {
        // optional, uses the device locale by default
        locale: "en-US",
        // set to true to get results back continuously
        returnPartialResults: true,
        // this callback will be invoked repeatedly during recognition
        onResult: (transcription: SpeechRecognitionTranscription) => {
          console.log(`User said: ${transcription.text}`);
          console.log(`User finished?: ${transcription.finished}`);
        },
        onError: (error: string | number) => {
          // because of the way iOS and Android differ, this is either:
          // - iOS: A 'string', describing the issue. 
          // - Android: A 'number', referencing an 'ERROR_*' constant from https://developer.android.com/reference/android/speech/SpeechRecognizer.
          //            If that code is either 6 or 7 you may want to restart listening (see the sketch after this example).
        }
      }
    ).then(
      (started: boolean) => { console.log(`started listening`) },
      (errorMessage: string) => { console.log(`Error: ${errorMessage}`); }
    ).catch((error: string | number) => {
      // Same as the 'onError' handler above, but note this 'catch' won't fire if the error
      // occurs after listening has successfully started (that already resolved the promise),
      // which is why the 'onError' handler exists.
    });
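
For illustration, here's a minimal sketch of that restart suggestion. The listen wrapper below is hypothetical (not part of the plugin's API); it simply re-invokes startListening with the same options when a recoverable Android error code is reported:

    import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

    const speechRecognition = new SpeechRecognition();

    // hypothetical wrapper so listening can be restarted with the same options
    const listen = () => speechRecognition.startListening({
      returnPartialResults: true,
      onResult: (transcription: SpeechRecognitionTranscription) => console.log(transcription.text),
      onError: (error: string | number) => {
        // Android's SpeechRecognizer error codes 6 (SPEECH_TIMEOUT) and 7 (NO_MATCH)
        // are usually recoverable by simply listening again
        if (typeof error === "number" && (error === 6 || error === 7)) {
          listen();
        }
      }
    });

    listen();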
    Angular tip

If you're using this plugin in Angular, note that the onResult callback is not part of Angular's lifecycle, so either update the UI inside an ngZone as shown here, or use ChangeDetectorRef as shown here.
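
As a rough sketch of the ngZone approach (the component, its recognizedText property, and the Label binding are made up for this example, not part of the plugin):

    import { Component, NgZone } from "@angular/core";
    import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

    @Component({
      selector: "speech-demo",
      template: `<Label [text]="recognizedText"></Label>`
    })
    export class SpeechDemoComponent {
      recognizedText: string = "";
      private speechRecognition = new SpeechRecognition();

      constructor(private zone: NgZone) {}

      public listen(): void {
        this.speechRecognition.startListening({
          returnPartialResults: true,
          onResult: (transcription: SpeechRecognitionTranscription) => {
            // onResult fires outside Angular's zone, so wrap the UI update in zone.run()
            // to make sure change detection picks it up
            this.zone.run(() => this.recognizedText = transcription.text);
          }
        });
      }
    }

The ChangeDetectorRef alternative works the same way: inject it in the constructor and call detectChanges() after updating the bound property.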

    stopListening

    TypeScript

    this.speechRecognition.stopListening().then(
      () => { console.log(`stopped listening`) },
      (errorMessage: string) => { console.log(`Stop error: ${errorMessage}`); }
    );

    Demo app (Angular)

    This plugin is part of the plugin showcase app I built using Angular.

    Angular video tutorial

    Rather watch a video? Check out this tutorial on YouTube.
