Yeah, no basement-dweller devs are gonna be messing with that API until the costs drop by at least 100x, which I honestly just see as a near-term incentive for Meta to get a Llama voice model cookin'
I'll use it, but I can't wait for an uncensored open-source version. Text-only is too boring; I don't have the patience to stick with it for the tasks I want, like learning languages.
The Realtime API cost is high, but there's a cheaper way:
1. Use Google STT to transcribe the user's speech to text.
2. Send the text to GPT.
3. Get the response from GPT.
4. Send the response to Google TTS.
5. The user gets the AI's reply as both text and voice.
The response time is longer, but it costs less.
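For anyone curious, the five-step chain above can be sketched roughly like this. The three service calls are stubbed out here; in a real app they'd be Google Cloud Speech-to-Text, the OpenAI Chat Completions API, and Google Cloud Text-to-Speech (none of those SDK calls are shown, so treat the stub bodies as placeholders, not real API usage):

```python
# Minimal sketch of the chained STT -> GPT -> TTS pipeline described above.
# All three service calls are stubs; swap in the real SDK calls yourself.

def speech_to_text(audio: bytes) -> str:
    # Step 1: Google STT would turn audio into a transcript (stubbed).
    return audio.decode("utf-8")  # pretend the audio "is" its transcript

def ask_gpt(prompt: str) -> str:
    # Steps 2-3: send the transcript to GPT, get a text reply (stubbed).
    return f"Echo: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Step 4: Google TTS would turn the reply into audio (stubbed).
    return text.encode("utf-8")

def voice_turn(user_audio: bytes) -> tuple[str, bytes]:
    """Run one user turn through the chain; return (text, audio) reply."""
    transcript = speech_to_text(user_audio)   # step 1
    reply_text = ask_gpt(transcript)          # steps 2-3
    reply_audio = text_to_speech(reply_text)  # step 4
    return reply_text, reply_audio            # step 5: text and voice back

text, audio = voice_turn(b"hello")
print(text)  # -> Echo: hello
```

The trade-off is exactly as described: three sequential round trips instead of one streaming connection, so latency goes up while the per-minute cost goes down.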
Could you achieve these results in an app using just the native iOS text-to-speech and speech-to-text features alongside OpenAI's non-realtime APIs?
Great video, thanks Kris! I'm interested in the function calling and structured output from the voice websocket return. Can you use agents or agentic flows with constrained and structured outputs in voice mode 🤔
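On the function-calling question: the Realtime API does accept function tools, registered via a `session.update` event over the websocket. Here's a rough sketch of building that event; the `get_weather` tool is a made-up example, and the exact field names should be double-checked against the current API reference:

```python
# Sketch: registering a function tool on a Realtime API session.
# The event shape follows the documented session.update format, but
# verify field names against the current OpenAI Realtime API reference.
import json

def make_tool_session_update() -> str:
    event = {
        "type": "session.update",
        "session": {
            "tools": [{
                "type": "function",
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up the current weather for a city.",
                "parameters": {  # JSON Schema constrains the arguments
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }],
            "tool_choice": "auto",
        },
    }
    return json.dumps(event)

# This string would be sent over the open websocket connection.
payload = make_tool_session_update()
```

The JSON Schema in `parameters` is what gives you the constrained, structured arguments; when the model decides to call the tool, the server streams back a function-call event whose arguments conform to that schema, and your app executes the function and returns the result.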
Happy to be the first to comment. Kris, you are always up to date. Once again, cool stuff from you. Spaghetti code... 🤣. Great that you talked about the costs as well. I like your creative and often really funny ideas. Please keep up the great work! Regarding your phone call: I saw a video from a guy in the US a few weeks ago (no Realtime API) where he had his AI order a pizza, and it worked great. Latency even back then was good enough, so it should work perfectly. Maybe try it with an Italian accent 😉. Thx from Tom!
Can't you just prompt it to be less talkative so you don't have to interrupt its responses so often? That would make a big difference and everything more seamless :)
No one is even going to be able to develop at these prices other than those with deep pockets. Just testing and figuring things out would be too expensive to even try.