I am using a combination of "faster-whisper" and "whisper.cpp" offline. In my project, "faster-whisper" will run on fast machines or servers with a GPU, and "whisper.cpp" will run on a regular laptop on CPU. Thanks for sharing; your demo was crystal clear; keep it up. New subscriber!
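For anyone curious what that GPU/CPU split can look like in code, here is a minimal sketch using the `faster-whisper` Python package. The model size (`"base"`), the audio path, and the hardware check are placeholders you would adapt; this is one reasonable setup, not the commenter's actual project.

```python
# Sketch: choose faster-whisper settings per machine (GPU server vs CPU laptop).
# Assumes the faster-whisper package; "base" and "audio.wav" are placeholders.

def pick_settings(has_gpu: bool) -> dict:
    """Device/precision for WhisperModel: float16 on GPU, int8 on CPU."""
    if has_gpu:
        return {"device": "cuda", "compute_type": "float16"}  # fast server path
    return {"device": "cpu", "compute_type": "int8"}          # laptop-friendly

def transcribe(path: str, has_gpu: bool = False) -> list:
    """Transcribe an audio file with settings matched to the hardware."""
    from faster_whisper import WhisperModel  # imported here so the helper
                                             # above works without the package
    model = WhisperModel("base", **pick_settings(has_gpu))
    segments, _info = model.transcribe(path)
    return [seg.text for seg in segments]

# Usage (uncomment on a machine with the package and an audio file):
# print(transcribe("audio.wav", has_gpu=False))
```

`int8` on CPU is the usual laptop-friendly choice because it cuts memory and speeds up inference with little accuracy loss.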
Thanks for your content, greetings from the West Indies in the Caribbean. Guadeloupe :-) I am curious what kind of machine you are working on. Is there a big GPU? I saw Metal mentioned, so a normal Apple laptop?
How is this different from the speech_recognition library? As far as I know, speech_recognition supports engines like Whisper, watsonx, and Google Speech, but for offline use it uses Vosk by default.
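To make the comparison concrete: the SpeechRecognition package is a front-end that records audio once and dispatches it to whichever engine you pick. A hedged sketch, assuming a recent version of that package; `recognize_whisper` (local) and `recognize_google` (online API) are real methods on its `Recognizer`, but the file path below is a placeholder.

```python
# Sketch: SpeechRecognition wraps many engines behind one Recognizer object.
# recognize_whisper runs locally; recognize_google calls an online API.
# Assumes the SpeechRecognition package; "speech.wav" is a placeholder.

def engine_method(offline: bool) -> str:
    """Name of the Recognizer method for the chosen mode."""
    return "recognize_whisper" if offline else "recognize_google"

def transcribe(path: str, offline: bool = True) -> str:
    import speech_recognition as sr  # imported here so the helper above
                                     # works without the package installed
    r = sr.Recognizer()
    with sr.AudioFile(path) as source:
        audio = r.record(source)     # one AudioData; any engine can consume it
    return getattr(r, engine_method(offline))(audio)

# Usage (uncomment with the package and a WAV file present):
# print(transcribe("speech.wav", offline=True))
```

The point is that the audio capture step is shared; only the final `recognize_*` call decides whether you stay offline.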
How do you get the `make` command to work on Windows? I got `make` installed, but I just get an error saying `cc not found`. Someone said gcc = cc, but I don't know how to proceed from there.
Your English is excellent. May I make a suggestion: Python is not pronounced "pie-ton" but "pie-thon", with the 'th' being the same as the 'th' in 'this'.
The guide to install and get it working was not clearly captured in this video. In between, there was only voice and no screen recording visible to us. I appreciate your effort, but you need to cover the content for a wide audience, from beginner to advanced, in a step-by-step procedure. The `make` command still doesn't work. The problem with all these AI YouTubers is that they don't provide solutions to an issue and keep moving on to other AI tools with new content. Try to follow up and provide solutions to your audience in order to get more followers.
@@contactmebaba I reckon the screen went black at a point; sincere apologies for that. That was an editing error. I will try my best to do a better job of double-checking before publishing.
Can you pair this offline Whisper with a local LLM, let's say Phi-3, to get a reply based on the Whisper transcript? I mean, let's see how fast it can actually output the LLM's reply. That way you could build an offline AI assistant with low-latency responses, 100% local.
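A sketch of what that Whisper-to-LLM hand-off could look like, assuming the transcript is already a plain string and that a local Ollama server is serving `phi3` on its default port. The model name, endpoint, and prompt wording are all assumptions for illustration, not something from the video.

```python
# Sketch: feed a Whisper transcript to a local LLM for an offline assistant.
# Assumes an Ollama server at localhost:11434 serving "phi3" (an assumption);
# only the prompt-building part runs without it.
import json
import urllib.request

def build_prompt(transcript: str) -> str:
    """Wrap the raw transcript in a short instruction for the assistant."""
    return ("You are an offline voice assistant. Reply briefly.\n"
            f"User said: {transcript}")

def ask_local_llm(transcript: str, model: str = "phi3") -> str:
    """POST the transcript to Ollama's /api/generate and return the reply."""
    body = json.dumps({
        "model": model,
        "prompt": build_prompt(transcript),
        "stream": False,                      # one JSON object back, no streaming
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (uncomment with Ollama running and phi3 pulled):
# print(ask_local_llm("what's the weather like for a picnic?"))
```

End-to-end latency would then be transcription time plus LLM generation time, so the small-model choice (Phi-3 class) matters as much as the Whisper backend.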
@@codewithbro95 Cool, nice job, keep it up. Can you also add a way to use the Phi-3 LLM with phidata for local RAG, with options for reading CSV, PDF, and Word documents as well? That will get you a lot of views too; we're talking about an actually useful AI assistant with these abilities!
@@snatvb Definitely agree with you, inference with TTS is very slow at the moment. Though I recently stumbled on a really promising project called ChatTTS; apparently it's being built specifically for this purpose. I haven't tried it yet, but maybe I will and make a video on it.
@@codewithbro95 Yep, I've seen it recently. I tried "bark" from Suno and it works pretty slowly (I have an RTX 3070), and sometimes it voices the LLM's imagined text instead of what I gave it :D
Hi, noob here. Trying to figure out how to get `make` working from the VSCode terminal on Windows. So far I installed MSYS2 and added C:\msys64\usr\bin and C:\msys64\mingw64\bin to the PATH environment variable, but it still says the command is not recognized.
@@codewithbro95 I had to install Visual Studio and build the C code from there or something, but it didn't build the microphone example, and I don't know how to add it to the build step, so I kind of gave up. I was also trying to get my AMD GPU to work with ZLUDA, which is a library that should make CUDA code run on AyyMD, but no luck there either, even with AI helping with the troubleshooting.