Good morning,
I have noticed that CCAI can transcribe what a user says (and run subsequent operations) regardless of the language they speak, by performing real-time language detection and speech-to-text (STT) without any prior indication of the input language.
However, Google's STT service allows specifying at most four input languages for real-time detection, so a truly comprehensive multilingual STT is not possible.
Why is that? Is there a way to have a truly multilingual STT without needing to specify the language code (or by specifying more than four)?
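For context, here is a minimal sketch of where this limit appears in the Cloud Speech-to-Text API: you give one primary language_code plus at most three alternative_language_codes, i.e. four candidate languages in total. The bucket URI and language choices below are just placeholders:

```python
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",  # primary language (required)
    # Up to three alternatives -- four candidate languages in total.
    alternative_language_codes=["es-ES", "fr-FR", "de-DE"],
)

# Placeholder audio file; replace with a real Cloud Storage URI.
audio = speech.RecognitionAudio(uri="gs://my-bucket/call-audio.wav")
response = client.recognize(config=config, audio=audio)

for result in response.results:
    # Each result reports which of the candidate languages was detected.
    print(result.language_code, result.alternatives[0].transcript)
```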
Thank you
Hello piaR,
Welcome to Google Cloud Community!
As for the features of CCAI, Live Transcription Speech-to-Text model adaptation is now in public preview. CCAI Live Transcription currently supports 11 locales (en-US, en-AU, en-GB, es-US, fr-CA, pt-BR, fr-FR, es-ES, it-IT, de-DE, ja-JP), while model adaptation in Cloud Speech currently supports only en-US.
To support multiple markets, you can currently customize languages, recordings, and messages; multiple languages can be activated and used in the channels you make available.
I hope the above information is helpful.