A beautiful, production-ready voice transcription package for React applications that leverages the Web Speech API to convert speech to text with real-time processing capabilities.
- 🎯 Real-time Speech Recognition - Convert speech to text as you speak
- 🌍 Multi-language Support - 14+ languages including English, Spanish, French, German, and more
- 🎨 Beautiful UI Components - Polished, responsive design with smooth animations and micro-interactions
- 📚 Transcription History - Save and manage your transcription sessions with timestamps
- 💾 Export Functionality - Download transcriptions as text files
- 🔊 Audio Playback - Text-to-speech functionality to hear your transcriptions
- 📊 Confidence Scoring - See how confident the AI is in its transcription
- 🌐 Browser Compatibility - Works in Chrome, Edge, and other modern browsers
- 📱 Responsive Design - Optimized for desktop, tablet, and mobile devices
- 🔧 TypeScript Support - Full TypeScript definitions included
- 🪝 Custom Hooks - Flexible hooks for building your own UI
- ⚡ Zero Runtime Dependencies - The only peer dependencies are React and Lucide React icons
npm install react-speech-recognition-ui lucide-react
yarn add react-speech-recognition-ui lucide-react
pnpm add react-speech-recognition-ui lucide-react
🛠️ Tailwind CSS Setup (Required)
This library uses Tailwind CSS for its UI components. You must configure Tailwind in your project to ensure the styles are applied properly.
npm install -D tailwindcss
Or include Tailwind via its Play CDN in your `index.html` (intended for development and prototyping, not production):
<head>
  ...
  <script src="https://cdn.tailwindcss.com"></script>
</head>
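If you build Tailwind yourself (the npm route above), make sure your `content` globs cover both your app and this library's published files so its utility classes are not purged. A minimal sketch, assuming Tailwind v3.3+ for the TypeScript config; the `node_modules` path is an assumption about where react-speech-recognition-ui ships its compiled output, so adjust it to the package's actual layout:

```ts
// tailwind.config.ts (Tailwind v3.3+ supports a TypeScript config)
import type { Config } from 'tailwindcss';

export default {
  content: [
    './index.html',
    './src/**/*.{ts,tsx}',
    // Assumed location of the library's prebuilt components; adjust as needed.
    './node_modules/react-speech-recognition-ui/dist/**/*.{js,mjs}',
  ],
  theme: {
    extend: {},
  },
  plugins: [],
} satisfies Config;
```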
import React from 'react';
import { VoiceTranscriber } from 'react-speech-recognition-ui';

function App() {
  const handleTranscriptChange = (transcript: string) => {
    console.log('New transcript:', transcript);
  };

  // result is a TranscriptionResult (see Type Definitions below)
  const handleResult = (result) => {
    console.log('Recognition result:', result);
  };

  return (
    <div className="p-8">
      <VoiceTranscriber
        onTranscriptChange={handleTranscriptChange}
        onResult={handleResult}
        language="en-US"
        continuous={true}
        interimResults={true}
      />
    </div>
  );
}

export default App;
The main transcription component with a complete UI for recording and displaying transcriptions.
import { VoiceTranscriber } from 'react-speech-recognition-ui';

<VoiceTranscriber
  onTranscriptChange={(transcript) => handleTranscriptChange(transcript)}
  onResult={(result) => handleResult(result)}
  language="en-US"
  continuous={true}
  interimResults={true}
  maxHeight="400px"
  className="custom-class"
/>
| Prop | Type | Default | Description |
|---|---|---|---|
| `onTranscriptChange` | `(transcript: string) => void` | - | Called when transcript updates |
| `onResult` | `(result: TranscriptionResult) => void` | - | Called when recognition result is available |
| `language` | `string` | `'en-US'` | Language code for recognition |
| `continuous` | `boolean` | `true` | Keep listening after user stops speaking |
| `interimResults` | `boolean` | `true` | Show partial results while speaking |
| `maxHeight` | `string` | `'400px'` | Maximum height of transcript area |
| `className` | `string` | `''` | Additional CSS classes |
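For short voice-command input (see the performance tips further down), the documented `continuous` and `interimResults` props can simply be turned off. A minimal sketch; the command handler is hypothetical:

```tsx
// Configure VoiceTranscriber for short, one-shot commands:
// stop listening after the user pauses and skip interim updates.
<VoiceTranscriber
  language="en-US"
  continuous={false}
  interimResults={false}
  onTranscriptChange={(transcript) => {
    // Hypothetical command routing; replace with your own logic.
    if (transcript.trim().toLowerCase() === 'clear') {
      console.log('Command recognized: clear');
    }
  }}
/>
```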
A powerful custom hook for integrating speech recognition into your own components.
import { useVoiceTranscription } from 'react-speech-recognition-ui';

function MyComponent() {
  const {
    transcript,
    interimTranscript,
    isListening,
    isSupported,
    error,
    confidence,
    start,
    stop,
    reset,
  } = useVoiceTranscription({
    continuous: true,
    interimResults: true,
    language: 'en-US',
    onResult: (result) => console.log('Result:', result),
    onError: (error) => console.error('Error:', error),
  });

  if (!isSupported) {
    return <div>Speech recognition not supported</div>;
  }

  return (
    <div>
      <button onClick={start} disabled={isListening}>
        Start Recording
      </button>
      <button onClick={stop} disabled={!isListening}>
        Stop Recording
      </button>
      <button onClick={reset}>Reset</button>

      <div>
        <p><strong>Final:</strong> {transcript}</p>
        {interimTranscript && (
          <p><em>Interim:</em> {interimTranscript}</p>
        )}
        {error && <p style={{ color: 'red' }}>Error: {error}</p>}
        {confidence > 0 && (
          <p>Confidence: {Math.round(confidence * 100)}%</p>
        )}
      </div>
    </div>
  );
}
| Option | Type | Default | Description |
|---|---|---|---|
| `continuous` | `boolean` | `true` | Keep listening after user stops speaking |
| `interimResults` | `boolean` | `true` | Return partial results while speaking |
| `language` | `string` | `'en-US'` | Language code for recognition |
| `onResult` | `(result: TranscriptionResult) => void` | - | Called when result is available |
| `onError` | `(error: string) => void` | - | Called when error occurs |
| `onStart` | `() => void` | - | Called when recognition starts |
| `onEnd` | `() => void` | - | Called when recognition ends |
| `onTranscriptChange` | `(transcript: string) => void` | - | Called when transcript changes |
| Property | Type | Description |
|---|---|---|
| `transcript` | `string` | Final transcribed text |
| `interimTranscript` | `string` | Partial transcript while speaking |
| `isListening` | `boolean` | Whether recognition is active |
| `isSupported` | `boolean` | Whether the browser supports speech recognition |
| `error` | `string \| null` | Current error message |
| `confidence` | `number` | Confidence score (0-1) |
| `start` | `() => void` | Start recognition |
| `stop` | `() => void` | Stop recognition |
| `reset` | `() => void` | Reset all state |
Manage and display transcription history with export capabilities.
import { TranscriptionHistory } from 'react-speech-recognition-ui';

<TranscriptionHistory
  history={historyItems}
  onDelete={(id) => handleDelete(id)}
  onClear={() => handleClear()}
  className="my-history"
/>
| Prop | Type | Description |
|---|---|---|
| `history` | `HistoryItem[]` | Array of transcription history items |
| `onDelete` | `(id: string) => void` | Called when an item is deleted |
| `onClear` | `() => void` | Called when all history is cleared |
| `className` | `string` | Additional CSS classes |
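Below is a minimal sketch of keeping the history array in React state and wiring it to the component. It assumes the `HistoryItem` and `TranscriptionResult` types are exported (they are documented under Type Definitions below); the `id` and `duration` handling is purely illustrative:

```tsx
import { useState } from 'react';
import {
  TranscriptionHistory,
  VoiceTranscriber,
  // Assumed to be exported alongside the components (see Type Definitions below).
  HistoryItem,
  TranscriptionResult,
} from 'react-speech-recognition-ui';

function HistoryDemo() {
  const [history, setHistory] = useState<HistoryItem[]>([]);

  // Append each final recognition result to local history.
  const handleResult = (result: TranscriptionResult) => {
    if (!result.isFinal) return;
    setHistory((prev) => [
      ...prev,
      {
        id: crypto.randomUUID(),       // any unique id scheme works
        transcript: result.transcript,
        timestamp: result.timestamp,
        confidence: result.confidence,
        duration: 0,                   // fill in if you time each recording
      },
    ]);
  };

  return (
    <div>
      <VoiceTranscriber onResult={handleResult} />
      <TranscriptionHistory
        history={history}
        onDelete={(id) => setHistory((prev) => prev.filter((item) => item.id !== id))}
        onClear={() => setHistory([])}
      />
    </div>
  );
}
```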
A beautiful language selection component for multi-language support.
import { LanguageSelector } from 'react-speech-recognition-ui';

<LanguageSelector
  selectedLanguage={language}
  onLanguageChange={(lang) => setLanguage(lang)}
  className="language-selector"
/>
| Prop | Type | Description |
|---|---|---|
| `selectedLanguage` | `string` | Currently selected language code |
| `onLanguageChange` | `(language: string) => void` | Called when the language changes |
| `className` | `string` | Additional CSS classes |
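A common pattern is to feed the selected code straight into the transcriber. A minimal sketch using only the documented props; any code from the table below works:

```tsx
import { useState } from 'react';
import { LanguageSelector, VoiceTranscriber } from 'react-speech-recognition-ui';

function MultiLanguageDemo() {
  // Default to US English.
  const [language, setLanguage] = useState('en-US');

  return (
    <div>
      <LanguageSelector
        selectedLanguage={language}
        onLanguageChange={setLanguage}
      />
      {/* Recognition follows whatever language is currently selected. */}
      <VoiceTranscriber language={language} />
    </div>
  );
}
```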
| Language | Code | Native Name |
|---|---|---|
| English (US) | `en-US` | English (US) |
| English (UK) | `en-GB` | English (UK) |
| Spanish | `es-ES` | Español |
| French | `fr-FR` | Français |
| German | `de-DE` | Deutsch |
| Italian | `it-IT` | Italiano |
| Portuguese (Brazil) | `pt-BR` | Português (Brasil) |
| Russian | `ru-RU` | Русский |
| Japanese | `ja-JP` | 日本語 |
| Korean | `ko-KR` | 한국어 |
| Chinese (Simplified) | `zh-CN` | 中文 (简体) |
| Chinese (Traditional) | `zh-TW` | 中文 (繁體) |
| Arabic | `ar-SA` | العربية |
| Hindi | `hi-IN` | हिन्दी |
interface TranscriptionResult {
  transcript: string;   // The transcribed text
  confidence: number;   // Confidence score (0-1)
  isFinal: boolean;     // Whether result is final
  timestamp: number;    // When result was created
}

interface HistoryItem {
  id: string;           // Unique identifier
  transcript: string;   // Transcribed text
  timestamp: number;    // When transcription was created
  confidence: number;   // Confidence score
  duration: number;     // Recording duration in seconds
}

interface Language {
  code: string;         // Language code (e.g., 'en-US')
  name: string;         // English name
  nativeName: string;   // Native language name
}
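These types appear directly in the callbacks documented above. As an illustrative sketch (the 0.8 threshold and the `TranscriptionResult` export are assumptions), you can gate behavior on `isFinal` and `confidence`:

```tsx
import { useVoiceTranscription, TranscriptionResult } from 'react-speech-recognition-ui';

// Collect only high-confidence final phrases from the recognizer.
function useReliablePhrases(onPhrase: (text: string) => void) {
  return useVoiceTranscription({
    interimResults: false,
    onResult: (result: TranscriptionResult) => {
      // Skip interim results and anything the recognizer is unsure about.
      if (result.isFinal && result.confidence >= 0.8) {
        onPhrase(result.transcript);
      }
    },
  });
}
```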
| Browser | Support | Notes |
|---|---|---|
| ✅ Chrome | Full | Recommended browser |
| ✅ Edge | Full | Chromium-based versions |
| ⚠️ Safari | Limited | Basic functionality |
| ❌ Firefox | None | Web Speech API not supported |
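Because support varies, it is worth gating the UI on the hook's `isSupported` flag, or checking the underlying browser API directly. A minimal sketch of the direct check:

```tsx
import { VoiceTranscriber } from 'react-speech-recognition-ui';

// Feature check for the Web Speech API (standard and webkit-prefixed names).
const speechRecognitionAvailable =
  typeof window !== 'undefined' &&
  ('SpeechRecognition' in window || 'webkitSpeechRecognition' in window);

function TranscriberWithFallback() {
  if (!speechRecognitionAvailable) {
    return <p>Speech recognition is not supported in this browser; try Chrome or Edge.</p>;
  }
  return <VoiceTranscriber />;
}
```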
The components come with beautiful default styles using Tailwind CSS classes. You can customize the appearance by:
- Adding custom CSS classes:
<VoiceTranscriber className="my-custom-styles" />
- Overriding Tailwind classes:
.my-custom-styles {
  @apply bg-purple-100 border-purple-300;
}
- Using CSS-in-JS or styled-components:
const StyledTranscriber = styled(VoiceTranscriber)`
  background: linear-gradient(45deg, #667eea 0%, #764ba2 100%);
`;
const handleError = (error: string) => {
  switch (error) {
    case 'not-allowed':
      alert('Please allow microphone access');
      break;
    case 'no-speech':
      console.log('No speech detected');
      break;
    default:
      console.error('Recognition error:', error);
  }
};

<VoiceTranscriber onError={handleError} />
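The error strings above come from the Web Speech API's `SpeechRecognitionErrorEvent`. Assuming the library forwards them unchanged (as the example suggests), a fuller mapping to user-facing messages might look like this:

```ts
// Standard SpeechRecognitionErrorEvent error codes mapped to friendly messages.
const ERROR_MESSAGES: Record<string, string> = {
  'not-allowed': 'Microphone access was denied. Please allow it in your browser settings.',
  'no-speech': 'No speech was detected. Try speaking closer to the microphone.',
  'audio-capture': 'No microphone was found or it could not be used.',
  'network': 'A network error interrupted speech recognition.',
  'aborted': 'Recognition was stopped before it finished.',
  'service-not-allowed': 'The speech recognition service is not allowed in this context.',
};

const describeError = (error: string) =>
  ERROR_MESSAGES[error] ?? `Recognition error: ${error}`;
```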
const processTranscript = (transcript: string) => {
  // Send to API for analysis
  if (transcript.includes('urgent')) {
    sendNotification('Urgent message detected');
  }

  // Auto-save every 10 words
  const wordCount = transcript.split(' ').length;
  if (wordCount % 10 === 0) {
    saveToDatabase(transcript);
  }
};

<VoiceTranscriber onTranscriptChange={processTranscript} />
const ContactForm = () => {
  const [message, setMessage] = useState('');

  return (
    <form>
      <textarea
        value={message}
        onChange={(e) => setMessage(e.target.value)}
        placeholder="Type or speak your message..."
      />
      <VoiceTranscriber
        onTranscriptChange={setMessage}
        className="mt-4"
      />
      <button type="submit">Send Message</button>
    </form>
  );
};
- Use `continuous: false` for short commands
- Disable `interimResults` if you don't need real-time updates
- Implement debouncing for `onTranscriptChange` callbacks (see the example below)
- Clean up resources when components unmount (see the cleanup sketch after the debounce example)
// Debounced processing (debounce from lodash or an equivalent helper)
import { useMemo } from 'react';
import debounce from 'lodash/debounce';

const debouncedProcess = useMemo(
  () => debounce((transcript: string) => processTranscript(transcript), 500),
  []
);

<VoiceTranscriber onTranscriptChange={debouncedProcess} />
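For the last tip, here is a minimal cleanup sketch built on the hook's documented `stop()`. Stopping on unmount is an assumption about how you want to release the microphone; the library may already handle this internally:

```tsx
import { useEffect } from 'react';
import { useVoiceTranscription } from 'react-speech-recognition-ui';

function Dictation() {
  const { transcript, isListening, start, stop } = useVoiceTranscription({
    continuous: true,
    language: 'en-US',
  });

  // Stop recognition if the component unmounts while still listening.
  useEffect(() => {
    return () => {
      stop();
    };
  }, [stop]);

  return (
    <div>
      <button onClick={isListening ? stop : start}>
        {isListening ? 'Stop' : 'Start'}
      </button>
      <p>{transcript}</p>
    </div>
  );
}
```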
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with the Web Speech API
- Icons provided by Lucide React
- Styled with Tailwind CSS