Hello to all! In my project I use both the "normal" (non-streaming) speech recognition and the streaming (live) recognition, and speaker diarization is fundamental for me.
Unfortunately, diarization (whose result should contain the wordInfo array) doesn't seem to work with streaming recognition: the array comes back empty.
Has anyone faced the same problem?
Here is my initialization (Node.js):
```
const config = {
  encoding: 'LINEAR16',
  sampleRateHertz: 16000,
  languageCode: languageCode,
  enableAutomaticPunctuation: true,
  enableSpokenEmojis: { enabled: true },
  diarizationConfig: {
    enableSpeakerDiarization: true,
  },
} as GoogleRecognitionConfig;

// https://cloud.google.com/speech-to-text/docs/streaming-recognize
const streamingRecognitionConfig = {
  config: config,
  singleUtterance: true,
  interimResults: true,
} as GoogleStreamingRecognitionConfig;

const stream = this.googleSpeechClient.streamingRecognize(streamingRecognitionConfig);
```
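For context, here is a simplified sketch of how the results are consumed (the actual stream handling and logging in my project are more involved). According to the docs, with `diarizationConfig.enableSpeakerDiarization` set, each `WordInfo` entry in the final result should carry a `speakerTag`, but in my case the `words` array is always empty:

```
// Minimal sketch of consuming the streaming responses.
stream.on('data', (data: any) => {
  const result = data.results?.[0];
  if (!result || !result.alternatives?.length) {
    return;
  }

  const alternative = result.alternatives[0];
  console.log('transcript:', alternative.transcript);

  // Expected with diarization enabled:
  // alternative.words = [{ word, startTime, endTime, speakerTag }, ...]
  // In practice this array comes back empty on the streaming path.
  for (const wordInfo of alternative.words ?? []) {
    console.log(`  ${wordInfo.word} -> speaker ${wordInfo.speakerTag}`);
  }
});

stream.on('error', (err: Error) => console.error('streaming error:', err));
```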
Hello, the problem is still present in the latest version of the API.
Has anyone else run into the same issue?