Simple, extensible React HOC for interop with the Web SpeechRecognition API.
```shell
$ npm install react-speak
# or
$ yarn add react-speak
```
react-speak is pretty simple because it's designed to do one thing well: allow your React/Redux components to work with your browser's native Web SpeechRecognition API and give you access to a user's microphone.
Under the hood, withSpeech is a function that takes a component and returns a wrapped component with the following PropTypes:

```js
WithSpeech.propTypes = {
  startListening: PropTypes.func.isRequired,
  stopListening: PropTypes.func.isRequired,
  addToRegister: PropTypes.func
}
```
The package was written to be used with Redux, so all three props were designed to be action creators that return action objects (see the Redux section below).
Here's a simple setup using Redux:
```jsx
import withSpeech from 'react-speak'
import React from 'react'
import { compose } from 'redux'
import { connect } from 'react-redux'
import {
  startListening,
  stopListening,
  addToRegister,
  clearRegister
} from '../actions/speech'

// Whatever state you care about:
const mapStateToProps = state => ({
  isListening: state.isListening,
  register: state.register
})

const mapDispatchToProps = {
  startListening,
  stopListening,
  addToRegister,
  clearRegister
}

const YourComponentWithSpeech = props => (
  <div className="ComponentWithSpeech">
    {props.isListening ? null : (
      <button onClick={props.startListening}>Start speaking</button>
    )}
    <div className="transcript">{props.register}</div>
  </div>
)

export default compose(
  connect(mapStateToProps, mapDispatchToProps),
  withSpeech
)(YourComponentWithSpeech)
```
withSpeech returns a component with the following props: startListening, stopListening, and addToRegister. All three are action creators that return action objects.
startListening and stopListening are similar in that they
don't receive any particular payload from the withSpeech
component; your action creator could be as simple as:
```js
const startListening = () => ({ type: INIT_LISTEN })
```
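On the reducer side, a minimal sketch of how these actions might be consumed (the action type constants and the STOP_LISTEN counterpart are illustrative; react-speak doesn't export them):

```javascript
// Illustrative action types: define your own constants.
const INIT_LISTEN = 'INIT_LISTEN'
const STOP_LISTEN = 'STOP_LISTEN'

// Action creators wired to the startListening/stopListening props.
const startListening = () => ({ type: INIT_LISTEN })
const stopListening = () => ({ type: STOP_LISTEN })

// Reducer tracking whether the microphone is currently active.
const isListening = (state = false, action) => {
  switch (action.type) {
    case INIT_LISTEN:
      return true
    case STOP_LISTEN:
      return false
    default:
      return state
  }
}
```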
addToRegister, on the other hand, receives a transcript or "register",
which is just an array of strings from your user's microphone. You can pass
this as a payload to your reducers and do whatever you want with it next.
An example action creator:
```js
const sendToAlexa = transcript => ({
  type: TRANSCRIPT_SENT,
  payload: transcript
})
```
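A reducer can then pick that payload up; here's a sketch, assuming you simply want to store the latest register (the TRANSCRIPT_SENT constant is illustrative, not part of the package):

```javascript
// Illustrative action type: define your own constant.
const TRANSCRIPT_SENT = 'TRANSCRIPT_SENT'

// Action creator that wraps the transcript array as a payload.
const sendToAlexa = transcript => ({
  type: TRANSCRIPT_SENT,
  payload: transcript
})

// Reducer storing the latest register (an array of strings).
const register = (state = [], action) => {
  switch (action.type) {
    case TRANSCRIPT_SENT:
      return action.payload
    default:
      return state
  }
}
```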
Q: Do I have to use Redux?
A: Currently this only officially supports a Redux or Flux-type model, where you have reducers that listen for the actions that withSpeech returns and manage the logic for how this component actually updates state.
Q: How can I contribute?
A: Contributions are totally welcome! See the section on contributing below.
PRs that abstract this component's functionality to React in general are absolutely welcome! Also, drop me a line if you're interested in helping me work with the SpeechSynthesis interface, as withSpeech currently only implements the SpeechRecognition interface.