Support for Custom Voice Models and Local Hosting Options
Andrew
Would it be possible to add support for custom voice models, especially cloud models?
It would be awesome if we could use an OpenAI API token to access the Whisper API. It's super fast and accurate – I use it with a Chrome extension for ChatGPT, and it's fantastic.
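To give a concrete idea of what I mean, the integration could just forward the recorded audio to OpenAI's transcription endpoint using the user's own token. A rough sketch with the OpenAI Python SDK (the file name and key are placeholders):

```python
from openai import OpenAI

# User-supplied API token (placeholder value)
client = OpenAI(api_key="sk-...")

# Send a recorded clip to the hosted Whisper model
with open("recording.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```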
One more thing: it would be great to give users the option of pointing the app at an external, locally hosted voice model, deployed via a container app for example (see the sketch below).
I think it would result in much more accurate transcription, with top-model accuracy running on a local GPU and the benefit of being completely private and off the cloud.
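To make that concrete, here's a rough sketch of how the app might talk to a self-hosted container that exposes an OpenAI-compatible transcription endpoint. The URL, port, field names, and model name are all placeholders, just to illustrate the shape of the request:

```python
import requests

# Hypothetical locally hosted transcription service
# (URL/port and model name are placeholders)
LOCAL_ENDPOINT = "http://localhost:8000/v1/audio/transcriptions"

with open("recording.wav", "rb") as audio_file:
    response = requests.post(
        LOCAL_ENDPOINT,
        files={"file": ("recording.wav", audio_file, "audio/wav")},
        data={"model": "whisper-large-v3"},
    )

response.raise_for_status()
print(response.json()["text"])
```

Since the request format mirrors the hosted API, supporting one could cover both cases with just a configurable base URL.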
Thanks for considering this! I think it would take SuperWhisper to the next level.