 # 🔊 Speech Synthesis Voices

 Detect installed text-to-speech (TTS) voices in your browser. Test voices interactively, explore voice fingerprinting, and understand how voices reveal your OS and language packs.

 

 

The live stats panel summarizes the detection results:

- **Total Voices** - number of TTS voices found
- **Local Voices** - voices synthesized on-device
- **Network Voices** - voices that require a network/cloud service
- **Languages** - distinct language tags across all voices
- **Default Locale** - language of the system's default voice
- 🇷🇺 **Russian Voice Detected** - shown when a Russian-localized voice is present

 

 

###  Test Speech Synthesis

Select a voice and click "Speak" to test how it sounds.

 

###  Voice Fingerprint Hash

This hash uniquely identifies your voice configuration (click to copy).

 

####  Privacy & Fingerprinting

 **Speech synthesis voices are a powerful fingerprinting vector.** Different operating systems and language packs have unique voice sets. Combined with other browser properties, your voice configuration can create a unique identifier for tracking.

  **Russian voices detected!** Russian TTS voices are typically only present on Russian-localized systems or machines with Russian language packs installed, making this a strong fingerprinting signal.

 

      

 

   

 The **Web Speech API** provides speech synthesis (text-to-speech) capabilities to web applications. The `window.speechSynthesis` interface allows JavaScript to:

- **List available voices** - Get all TTS voices installed on the system
- **Speak text** - Convert text to speech using a selected voice
- **Control speech** - Pause, resume, or cancel speech
- **Customize voice** - Adjust rate, pitch, volume
 
##### Voice Properties

- `voiceURI` - Unique identifier for the voice
- `name` - Human-readable voice name
- `lang` - BCP 47 language tag (e.g., "en-US", "fr-FR")
- `localService` - true if local, false if network/cloud
- `default` - true if this is the default voice for its language
 
```javascript
// Get all available voices
const voices = window.speechSynthesis.getVoices();
console.log(`Found ${voices.length} voices`);

voices.forEach(voice => {
  console.log(`${voice.name} (${voice.lang})`);
  console.log(`  Local: ${voice.localService}`);
  console.log(`  Default: ${voice.default}`);
});
```
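One caveat worth knowing: in Chromium-based browsers `getVoices()` often returns an empty array on the first call, because the voice list is populated asynchronously; the `voiceschanged` event fires once it is ready. A minimal sketch of a wait-for-voices helper (the promise wrapper and the injectable `synth` parameter are our own additions, not part of the API):

```javascript
// getVoices() can return [] before the browser has finished loading
// its voice list; in that case, wait for the 'voiceschanged' event.
function loadVoicesAsync(synth = globalThis.speechSynthesis) {
  return new Promise((resolve) => {
    const voices = synth.getVoices();
    if (voices.length > 0) {
      resolve(voices);
      return;
    }
    // Resolve once the browser has populated the voice list
    synth.addEventListener('voiceschanged', () => {
      resolve(synth.getVoices());
    }, { once: true });
  });
}

// In a browser:
// loadVoicesAsync().then(voices => console.log(`Found ${voices.length} voices`));
```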

 

 

 

  

 Speech synthesis voices create a **unique fingerprint** because:

##### 1. Operating System Detection

- **Windows** - Microsoft David, Zira, Mark voices
- **macOS** - Alex, Samantha, Victoria voices
- **Linux** - eSpeak voices, or none if TTS not installed
- **iOS** - Siri voices in multiple languages
- **Android** - Google TTS voices
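The mapping above can be turned into a crude OS guesser. The name fragments below are illustrative heuristics only (voice names vary by OS version, and desktop Chrome also ships Google voices), not a complete or stable mapping:

```javascript
// Rough OS guess from installed voice names.
// The patterns below are illustrative heuristics, not a stable mapping.
function guessOSFromVoices(voices) {
  const names = voices.map(v => v.name).join(' ');
  if (/Microsoft|David|Zira/i.test(names)) return 'Windows';
  if (/Samantha|Alex(?!a)|Victoria/i.test(names)) return 'macOS';
  if (/eSpeak/i.test(names)) return 'Linux';
  if (/Google/i.test(names)) return 'Android/Chrome';
  return 'unknown';
}
```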
 
##### 2. Language Pack Detection

 Installed voices reveal which language packs the user has installed. For example, if Japanese, Arabic, and Hindi voices are present, it suggests the user works with multiple languages or is from a multilingual region.

 **Special case: Russian voices** are particularly revealing. Russian TTS voices are typically only present on systems with Russian localization or Russian language packs explicitly installed. This makes Russian voices a strong fingerprinting signal, as they indicate either a Russian-speaking user or someone who works extensively with Russian content.
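A detection sketch: matching on the BCP 47 `lang` tag is more robust than matching voice names, since names vary across platforms. The anchored pattern avoids false positives on unrelated tags:

```javascript
// Detect Russian voices by BCP 47 language tag ("ru" or "ru-RU" etc.).
// The anchored regex avoids matching unrelated tags that start with "ru".
function hasRussianVoice(voices) {
  return voices.some(v => /^ru(-|$)/i.test(v.lang || ''));
}
```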

##### 3. System Customization

 Users who install additional voice packs (like premium voices or language-specific TTS) create more unique fingerprints. The combination of voices is highly distinctive.

##### 4. Voice Count & Combinations

 The number of voices and their specific combinations create thousands of possible fingerprint values. Combined with other browser properties, this makes tracking very effective.

```javascript
// Example: Create voice fingerprint hash
function generateVoiceFingerprint() {
  const voices = window.speechSynthesis.getVoices();
  const voiceSignature = voices
    .map(v => `${v.name}|${v.lang}|${v.localService}`)
    .sort()
    .join(',');

  // Simple hash function (SHA-256 recommended for production)
  let hash = 0;
  for (let i = 0; i < voiceSignature.length; i++) {
    const char = voiceSignature.charCodeAt(i);
    hash = ((hash << 5) - hash) + char;
    hash = hash & hash;
  }
  return Math.abs(hash).toString(16).padStart(8, '0');
}

console.log('Voice Fingerprint:', generateVoiceFingerprint());
```
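For the SHA-256 variant mentioned above, one option is the Web Crypto API's `crypto.subtle.digest` (available in secure contexts; it is asynchronous). A sketch that takes the voice array as a parameter so it can be tested outside a browser:

```javascript
// SHA-256 voice fingerprint via the Web Crypto API.
// crypto.subtle is only available in secure (HTTPS) contexts.
async function sha256VoiceFingerprint(voices) {
  const signature = voices
    .map(v => `${v.name}|${v.lang}|${v.localService}`)
    .sort()
    .join(',');
  const data = new TextEncoder().encode(signature);
  const digest = await crypto.subtle.digest('SHA-256', data);
  // Convert the digest bytes to a lowercase hex string
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

// In a browser:
// sha256VoiceFingerprint(window.speechSynthesis.getVoices()).then(console.log);
```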

 

 

 

  

Here's how to use the Speech Synthesis API in your own applications:

##### Basic Text-to-Speech

```javascript
// Create an utterance
const utterance = new SpeechSynthesisUtterance('Hello, world!');

// Speak the text
window.speechSynthesis.speak(utterance);
```

##### Select Specific Voice

```javascript
// Get voices
const voices = window.speechSynthesis.getVoices();

// Find a specific voice
const selectedVoice = voices.find(v =>
  v.name === 'Google US English' || v.lang === 'en-US'
);

// Create utterance with selected voice
const utterance = new SpeechSynthesisUtterance('Testing voice selection');
utterance.voice = selectedVoice;
utterance.rate = 1.0;   // Speed (0.1 to 10)
utterance.pitch = 1.0;  // Pitch (0 to 2)
utterance.volume = 1.0; // Volume (0 to 1)

window.speechSynthesis.speak(utterance);
```

##### Event Listeners

```javascript
const utterance = new SpeechSynthesisUtterance('Hello!');

utterance.addEventListener('start', () => {
  console.log('Speech started');
});

utterance.addEventListener('end', () => {
  console.log('Speech ended');
});

utterance.addEventListener('pause', () => {
  console.log('Speech paused');
});

utterance.addEventListener('error', (event) => {
  console.error('Speech error:', event.error);
});

window.speechSynthesis.speak(utterance);
```

##### Control Speech

```javascript
// Pause speech
window.speechSynthesis.pause();

// Resume speech
window.speechSynthesis.resume();

// Cancel all speech
window.speechSynthesis.cancel();

// Check if speaking
const isSpeaking = window.speechSynthesis.speaking;
console.log('Currently speaking:', isSpeaking);
```

 

 

 




