Was the delayed wake word response time bothering you? We have solved the issue and updated the documentation. Read more here if interested: github.com/FutureProofHomes/wyoming-enhancements/discussions/4#discussioncomment-8225110
Happy to see my scripts living their best life lol. Good job explaining how to set it up, though. HA is really getting there. I like your videos, keep up the good work
@JasonWeyermars, were you the inventor of the awake and done script? I’m so sorry I forgot to call you out! Sincere apologies! Do the modifications get your stamp of approval? Many thanks.
@@FutureProofHomes no credits required - other than there is a great community, doing great things, and we're all learning and helping each other out where we can! Voice satellite, although a WIP, is really making my Star Trek fantasies come true :) Keep up these videos, they're great to see.
I added another command. When there was a false detection of the wake word and no command was sent, the audio would not return to normal. Adding this command fixed it: --error-command '/home/loftsatellite/wyoming-enhancements/snapcast/scripts/done.sh' \
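For context, that flag slots into the satellite's run command alongside the other event hooks. A sketch, assuming the paths and script names used in this thread (your install directory, satellite name, and hook scripts may differ):

```shell
# Sketch of a wyoming-satellite launch with the extra error hook.
# Paths and names below are assumptions from this thread; adjust to your setup.
script/run \
  --name 'loft-satellite' \
  --uri 'tcp://0.0.0.0:10700' \
  --detection-command '/home/loftsatellite/wyoming-enhancements/snapcast/scripts/awake.sh' \
  --tts-played-command '/home/loftsatellite/wyoming-enhancements/snapcast/scripts/done.sh' \
  --error-command '/home/loftsatellite/wyoming-enhancements/snapcast/scripts/done.sh'
```

The idea: awake.sh ducks the music on wake word detection, and done.sh restores it; pointing --error-command at the same done.sh ensures the audio is restored even when the pipeline errors out after a false detection.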
You would use the media player entities created by the Music Assistant integration, the same way you would for other devices that act as media players, like a Google Home Mini or Amazon Echo. If Brad doesn't get to it first, I could do this.
This is what I was hoping all those components would "someday" be able to provide. I didn't realise that "someday" was today. Great job explaining everything and showing us what Home Assistant, Music Assistant and everything else can do NOW.
Your work is NEXT LEVEL. I sincerely appreciate the time you are putting into this. I would "LOVE" a scaled-back tutorial with maybe some scripts to get this going for just "audio" playback (i.e. a SONOS replacement)... I know, I can do that myself :)
(Update) It's currently in progress on Music Assistant. Quote: "OzGav commented 2 days ago: It is being worked on. There is a desire to have this done by the end of the month but it depends on workload." (Old comment) How would I go about having announcements via TTS play through the Wyoming satellite? Music Assistant doesn't support it at the moment. I hope there's a way; it would be a total game changer
@8:28 you said to set autospawn to "no" but you set it to yes. I'm assuming this was a mistake and you should have changed the value to "no". Good work on the video, perfect pace as always. Keep it up
Wow!! I want to thank you and the community for the hard work you've put into these projects! I'm not even running HA yet, but am incredibly amazed by what people have accomplished with it. I'm looking forward to a time when so much of this is just plug 'N play and EVERYBODY can ditch Alexa, Google, Home Kit, and any other spy devices.
You must break down your dashboard for us dummies! The yaml and miles of documentation beat me every time I think it's time to attempt home assistant again! 😂
Just followed this method to a 'T', and still have no sound output from the device when playing music or using the voice wake word command. It's listening, because when using the wake word, the LEDs light up and the command is executed. However, snapclient/Snapcast music playback does not work, and the sound output does not work. The errors I get are:

Jul 07 09:59:04 wyoming pulseaudio[510]: Failed to open cookie file '/var/run/pulse/.pulse-cookie': No such file or directory
Jul 07 09:59:04 wyoming pulseaudio[510]: Failed to load authentication key '/var/run/pulse/.pulse-cookie': No such file or directory
Jul 07 09:59:04 wyoming systemd[1]: Started pulseaudio.service - PulseAudio system server.
Jul 07 09:59:07 wyoming pulseaudio[510]: Failed to find a working profile.
Jul 07 09:59:08 wyoming pulseaudio[510]: Failed to load module "module-alsa-card" (argument: "device_id="0" name="platform-3f902000.hdmi" card_name="alsa_card.platform-3f902000.hdmi" namereg_fail=false tsched=yes fixed_latency_range=no >
Jul 07 09:59:14 wyoming pulseaudio[510]: Failed to find a working profile.
Jul 07 09:59:14 wyoming pulseaudio[510]: Failed to load module "module-alsa-card" (argument: "device_id="0" name="platform-3f902000.hdmi" card_name="alsa_card.platform-3f902000.hdmi" namereg_fail=false tsched=yes fixed_latency_range=no >
Jul 07 09:59:14 wyoming pulseaudio[510]: ALSA woke us up to write new data to the device, but there was actually nothing to write.
Jul 07 09:59:14 wyoming pulseaudio[510]: Most likely this is a bug in the ALSA driver 'snd_bcm2835'. Please report this issue to the ALSA developers.
Jul 07 09:59:14 wyoming pulseaudio[510]: We were woken up with POLLOUT set -- however a subsequent snd_pcm_avail() returned 0 or another value < min_avail.

But when using the command paplay /usr/share/sounds/alsa/Front_Center.wav the sound outputs fine. I ran the command pactl set-default-sink alsa_output.platform-soc_sound.stereo-fallback to set the output to the 2mic pi hat. Any ideas?
❓16:43 I also created player groups. Is it somehow possible to change how the group volume influences the individual volumes?

Example of how it behaves right now (addition/subtraction of the volume):
Group = vol. 60 -> vol. 30 (60-30)
Player1 = vol. 80 -> vol. 50 (80-30)
Player2 = vol. 40 -> vol. 10 (40-30)

❗Example of how I want it to behave (multiplication/division of the volume):
Group = vol. 60 -> vol. 30 (60/2)
Player1 = vol. 80 -> vol. 40 (80/2)
Player2 = vol. 40 -> vol. 20 (40/2)

With multiplication/division, the players' volumes relative to each other stay equal. So if I have two speakers in opposite corners, I sit closer to Player2, and I have adjusted their volumes so they sound equally loud from that position, they still sound equally loud with multiplication/division but NOT with addition/subtraction. I hope it's understandable what I mean. Can someone give me some ideas or help on how to achieve this? Thank you all very much!
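To make the difference concrete, here is a small sketch (plain Python, function names made up for illustration) comparing the two scaling modes when the group volume drops from 60 to 30:

```python
def additive(volumes, old_group, new_group):
    """Current behavior: every player moves by the same offset."""
    delta = new_group - old_group
    return {name: v + delta for name, v in volumes.items()}

def multiplicative(volumes, old_group, new_group):
    """Desired behavior: every player is scaled by the same ratio,
    so the players' loudness relative to each other is preserved."""
    ratio = new_group / old_group
    return {name: round(v * ratio) for name, v in volumes.items()}

players = {"Player1": 80, "Player2": 40}
print(additive(players, 60, 30))        # {'Player1': 50, 'Player2': 10}
print(multiplicative(players, 60, 30))  # {'Player1': 40, 'Player2': 20}
```

Note the additive version collapses Player2 toward silence (10) while Player1 is barely affected in relative terms, which is exactly the complaint above.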
Unfortunately, after a second try at this, I have to admit that PulseAudio makes the delay between wake word detection and voice capture huge, which renders the satellite unusable. I'm on a Pi Zero 2 W, and it's awful. Also, for some reason pamixer isn't available for my OS (Lite, legacy)... I'll ditch this, since I already have good working setups on ESP32-S3. I really wanted it to work...
Watch my other video about stealth voice assistants in the ceiling; it discusses all of that. For this video though, I just temporarily plugged the voice assistant into an M-Audio audio controller so I could record its response.
Awesome deep dive on getting all the components and tweaks together in a working state and sharing it with us all. Huge thanks. Now I just need to find some time to replicate it on my HA. I did implement your GPT integration with my existing Piper/Whisper and it's working flawlessly. Kudos to you and all the other people dedicating time and work to the HA integrations and community.
Fantastic, well-organized tutorial! One thing though: since you made this tutorial, Snapcast has started splitting the snapclient releases into two versions, one with PulseAudio and one without. I wasted a day trying to figure out why my snapclient refused to work with pulse. Installed the "with pulse" version and it worked perfectly.
I followed your tutorial, but I can't get the snapclient to output sound. I tried setting the output device, but it can't seem to find the mic hat. Also, during the tutorial I was able to execute the paplay command, but after completing the steps it errors with: Failed to drain stream: Timeout. Could you help me with that?
I’m very excited to give this a try- even more so the ChatGPT pipeline stuff from your earlier video. I’m trying to get a handle on realistic costs to use the ChatGPT API. Any thoughts?
Have played with it for a few hours and it works out at fractions of a cent per exchange. I don’t think it’s a major concern. You will have to add on $10 every few months probably. That’s at current prices anyway. They could well start screwing people to boost their share prices, as per any of these services.
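As a rough sanity check on "fractions of a cent per exchange", the arithmetic is simple. The token counts and per-1K-token prices below are assumptions for illustration only; check OpenAI's current pricing page for real numbers:

```python
def exchange_cost(prompt_tokens, completion_tokens,
                  price_in_per_1k, price_out_per_1k):
    """Dollar cost of a single voice exchange with the ChatGPT API."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# Hypothetical figures: a ~1,500-token prompt (exposed HA entities add up
# fast) and a short ~50-token reply, at $0.0005 in / $0.0015 out per 1K.
cost = exchange_cost(1500, 50, 0.0005, 0.0015)
print(f"${cost:.6f} per exchange")  # $0.000825 per exchange
```

At those assumed rates, even a hundred exchanges a day stays well under $3/month, which lines up with the "fractions of a cent" observation above.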
I’m a heavy user. I’ve managed to get my costs to approx $8/month. Still watching the data though and I hear cost optimizations are coming. Consider running the LLM locally so you don’t pay OpenAI at all. I’ll have a detailed video on this shortly.
You could use Home Assistant to send TTS from the Pi to Home Assistant and call the Sonos! Or am I wrong? I saw something like that on GitHub but couldn't do it. I am building my house and I am looking to build something like you did. Could you tell me what is the best hardware to use for this? Do you think a Pi and the HAT mic is better than an ESP32 with a mic?
There are quite a few ways I can think of to hack the satellite to output its TTS response to an external speaker/media_player. I/we should really think about the best solution here. Also, I bet Nabu Casa will build this at some point. It would be uber powerful for many folks.
I love all your videos, and I can't wait to try this myself when all my parts arrive! I have a quick question though. Have you found a way to reduce the delay for the response to the wake word? It's a pretty minor thing, but I can see that driving me crazy since I'm used to the near instant response from my google homes haha
Track the open discussion here. I think/hope we'll solve this in the next few days. Not gonna try tonight cause I gotta get some sleep. :) Human after all. github.com/FutureProofHomes/wyoming-enhancements/discussions/4
Well, if you are truly excited that I have watched your excellent, well-produced, and well-executed video, I'll just tell you I am so excited I just wet my pants! Seriously, keep it coming, when I can get caught up to your level I will be implementing this complete system in my small condo here in Mexico! Much love, happiness, and success to you!
Yes - Yes - this is SO COOL TO ME! I am focusing on the same stuff you have been providing and you are doing the heavy lifting. Please know how much this is ENORMOUSLY appreciated. I have been trying to come up with a solid SONOS replacement (I won't buy those $$$ items) but my workarounds have never been quite what I want. This is very much what I am after. TREMENDOUS WORK!
It would be cool to build this service to look at the room in Home Assistant and link the voice assistant and speakers in the same room with each other - so that going forward all you do is add the two 'devices' in the room and get this functionality.
I came to this comment section to ask/suggest exactly this. I’m wondering if HA can tell which Area the wyoming-satellite that provided the commands is in. If so, you can target the media player that is in that same area…
Thanks for your dedication to helping us keep our smart homes on the cutting edge of technology, and thanks for your detailed tutorials. Your videos are appreciated.
At 8:29 you left autospawn as YES instead of NO. I assume the right value is NO, as you said, and not yes... but just FYI :) And thanks for the video! It's awesome how you made it work. I've been trying to do this myself at my own pace and time but damn I was getting nowhere :D
You’re right. I goofed there, but the documentation is accurate. I wish I could easily fix that, but YT won’t let me edit the video to correct that footage. Hopefully people will see my mistake and use common sense. Apologies!
@@FutureProofHomes No need to apologize! It's all good :) Just wanted to raise to your awareness as I don't know if you can drop a small "annotation" over the video or something to refer that... You're doing an excellent job there! Keep it up!
I followed this, currently having problems with the audio jumping and skipping when playing music and just general voice commands. Increased the SNAP Buffer and still get it.
Running into some issues installing the Snapcast client. "The following packages have unmet dependencies: snapclient:armhf : Depends: libflac8:armhf (>= 1.3.0) but it is not installable." This is a fresh install (just installed this all following the Wyoming satellite tutorial from your last video on Sunday).
Hi. Noob question. I have Sonos Amps in a rack in my utility room and ceiling speakers throughout my house, and have Ethernet w/ PoE in every room I would need voice assist. Is this a good solution for me? Also, I've built my own inference server with a 4090 running Oobabooga and am using the LLaMA Conversation integration with very limited success in affecting my Lutron lights and shades. I'm thinking of rebuilding using LocalAI with Extended OpenAI Conversation but would like some reassurance that it is actually working before I go through the effort. It seems like it's more miss than hit for the local LLM for me so far.
I've built 2 of these. I've come across something that I'm having trouble with (feel silly for posting this question here), but I'm not sure where or what part/software this affects... I can't seem to add items to the Home Assistant shopping list with the Wyoming satellite, but I have no issue with Assist itself. Any ideas where I'd start to tweak this? 😊😊
Edit: I've written a function for this, in case anyone wants it:
- spec:
    name: add_item_to_shopping_list
    description: Add item to the shopping list
    parameters:
      type: object
      properties:
        item:
          type: string
          description: The item to be added to shopping_list
      required:
        - item
  function:
    type: script
    sequence:
      - service: todo.add_item
        data:
          item: '{{ item }}'
        target:
          entity_id: todo.shopping_list
This is great stuff. I have it working fine except I can't get the script to work; ChatGPT says "template list object has no element 0". Any advice on that? Ah, just realised it needs to be a playlist. You can't just say "play Elton John"; it must be the name of a playlist.
Hello, can you make it talk/play songs wirelessly from Sonos so you dont have to plug in physical cable into this Wyoming Satellite ? I want to use the satellite for speaking and for playing Sonos. TIA.
@@FutureProofHomes I did also watch that video (and subbed at that time) but you didn't show the actual response time. You did however indicate it was way too slow to be useful with the cut video and the "an Eternity Later" still.
Ah, yes. Well, I now have an LLM in the lab hooked into HA that is producing 2-3 second response times and controlling my home. The next vid should go deep into showing how to REALLY build the holy grail. Getting closer and closer!
I'm at the point of installing Music Assistant and in my version of Home Assistant, they have removed the HACS explorer and you are not able to select any Beta options. All I see is the download button and it's installing the current version. Does anyone have any advice on how to request the Beta versions in HACS now with the new UI?
Thanks for the video. I found keeping the old arecord line instead of using parecord gave me a lot better performance with speed and voice accuracy. Also disabling the activation sound seems to speed things up. It feels like the voice stream isn't sent to the whisper server until a few seconds after the sound finished playing
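For anyone wanting to try the same tweak, the mic capture path is just a flag on the satellite's run script. A sketch assuming the stock wyoming-satellite options (verify flag names against the repo's docs; the ALSA device name is an assumption, list yours with `arecord -L`):

```shell
# Keep the classic ALSA capture path (arecord) instead of routing the
# microphone through PulseAudio via parecord.
script/run \
  --name 'my-satellite' \
  --uri 'tcp://0.0.0.0:10700' \
  --mic-command 'arecord -D plughw:CARD=seeed2micvoicec,DEV=0 -r 16000 -c 1 -f S16_LE -t raw'
```

The 16 kHz / mono / 16-bit raw format matches what the Wyoming pipeline expects, so only the `-D` device needs adapting to your hardware.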
Question. What can I modify in this setup to point the TTS output to play through my Sonos / AMP speakers in my ceiling rather than using the audio output of the Wyoming Satellite?
Thank you for these great videos. I'm getting the music out of the Pi's 3.5 mm jack and not the ReSpeaker 2-Mic Pi HAT like the voice does. What have I missed here?
Does snapcast allow you to remove a client from a group while keeping the audio playing on the others, or to resume a previous session if it's stopped? Ooh and is there a way to use multiple mics in the same room with the same functionality, either with more rpis or with esp32 satellites that just handle audio input? Also, it would be very nice to have a prebuilt Wyoming satellite image with pulseaudio for peeps! Maybe I'll make one... Though I'd love to use pipewire instead if that's usable.
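On the first question: Snapcast does expose a JSON-RPC control API (TCP on port 1705, or HTTP POST on port 1780), and its Group.SetClients method rewrites a group's membership, so dropping one client should leave audio playing on the rest. A sketch; the group/client IDs and hostname below are placeholders, and you'd pull real IDs from a Server.GetStatus call first:

```shell
# Build a Group.SetClients request that keeps only two clients in the group.
# GROUP_ID, client names, and the server hostname are placeholders.
PAYLOAD='{"id":1,"jsonrpc":"2.0","method":"Group.SetClients","params":{"id":"GROUP_ID","clients":["kitchen-pi","office-pi"]}}'

# Uncomment to send it to your snapserver (hostname is an assumption):
# curl -s http://snapserver.local:1780/jsonrpc -d "$PAYLOAD"
echo "$PAYLOAD"
```

Multiple mics and a prebuilt image are beyond this sketch, but the RPC side of regrouping clients is at least already there.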
When I'm using wyoming satellite with wake word service enabled in it, it looks like chatgpt is invoked without any previous context. When I use the same "assistant" pipeline through HA web page ("Assist" chat) it keeps context. Any fixes?
Fantastic video. Thank you for all your efforts. Having got it working, I am left with three issues. 1. Jarvis talks on my HDMI port speaker and music plays on my 2-Mic card audio jack; I've tried to see why but can't work it out. 2. I get a fair few dropouts on the audio. 3. I tried adding some other pre-made wake words (that work on the HA server), but they don't work (the error is that it can't find the wake word, but they are in the same place as the ones that work). I'd appreciate any tips or advice. Cheers, Bill
Still didn't try the ReSpeaker with the Raspberry Pi 4 as I commented in your other video. The ReSpeaker is out of stock everywhere. Anyway, just to be clear, can you clarify your setup? Because I see two issues with this that I still want to solve and I think you agree. You are still using non-local solutions, both Nabu Casa and ChatGPT. I know you want to replace ChatGPT, but I think at this moment it's almost impossible to use Home Assistant with local voice instead of Nabu Casa. From my personal experience, it's way too slow. I just want to be sure that for you this is still the case and that it is not a problem only on my side. Thanks for another great video, and waiting for the one about completely replacing cloud ChatGPT :P
@@FutureProofHomes I found one that says "ReSpeaker 2-Mic Pi HAT V1.0 4b Zero/Zero W/B+ Raspberry Pi Keyestudio". I'm not sure if this is the same because of the version, but I will buy it and see :D
Excellent video. Could install as shown in the video. Everything works fine except for the last part to control volume. Is pamixer still around to be installed? Can I use pulsemixer instead?
What OS version do you have? I am running the 64bit latest raspberry pi os for the pi zero 2 w on mine and having a number of issues related to it not finding dependencies and the like.
Really cool. I would love to see it working with all the sounds off and just working. Side note, to call the Sonos speaker expensive is not fair. The current small speaker, the "Era 100" is really a very cost effective speaker. The amplification, the speaker, the technology of putting it all together in a great sounding speaker for its size, can't be beat. There is no way you can make something DIY to compete with it.
Great video, thanks! However saying you don't want Google or Amazon listening in but then sending all your interactions to ChatGPT seems to nullify your goal? It didn't look like you were running a local hosted LLM.
Looking forward to trying this out! Haven’t seen your tutorial on doing the initial setup just yet. Hoping maybe these additional steps can be combined somehow to make things slightly more automatic to get up and running.
@@FutureProofHomes I love watching the bleeding edge get developed, but I get even more excited when the community decides on the “good defaults” like these features and they get baked into simplified installations just waiting to be turned on.
Awesome work! I managed to get it all through to the end on my Raspberry Pi 3 with the ReSpeaker 4-Mic HAT. But I have not yet managed to get the voice assistant to control the music. Maybe it's because I am trying to use it in a language other than English?
This is awesome! I've followed your other tutorials and now this one but seems to have not quite got it right. When I say "Hey Jarvis, play jazz on the living room satellite" it spits back this error in the voice assistant debug - "Something went wrong: Service script.play_music not found." Not entirely sure why as there definitely is a script.play_music entity and it has been exposed to Assist. Any ideas what I could be doing wrong?
Intro: 1. We don't want big companies to listen to us. 2. We hooked our custom microphones to ChatGPT (which states not to put private stuff on it). (Hypocrisy aside, I do know that it's for a local LLM in the end)
I am so excited about this but have a couple of questions... 1. What is your audio setup? Looks like you are using in-ceiling speakers and some kind of amp connected to the headphone jack. Is this right? What are the downstream components? Amp? Speakers? etc... 2. Most Sonos are portable speakers. Can you recommend an audio solution that would be portable?
Very cool stuff! I got it to work somewhat, but adding the PulseAudio ducking isn't working for me. Using PulseAudio vs ALSA seems to add a significant lag to the wake word detection, and it often stops listening to me before I even finish speaking. I also have not been able to wake the assistant while music is playing, even at very low volumes. Any ideas?
Add --debug onto the end of your enhanced-wyoming-satellite.service and restart. Utter the wake word and share those logs; that will help me determine your issue. Open a ticket or Q&A discussion in the GH repo, it's easier to debug there. PulseAudio causes a delay; I'm discussing that here: github.com/FutureProofHomes/wyoming-enhancements/discussions/4 The whole "stops listening before finishing or when audio is playing" sounds like you need to tune your noise cancellation and sensitivity levels. You can play with those levels in Settings -> Integrations -> Wyoming -> device settings in the Home Assistant UI.
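Concretely, grabbing those logs looks something like this (the unit name is taken from this thread; match it to whatever your service file is actually called):

```shell
# In /etc/systemd/system/enhanced-wyoming-satellite.service,
# append --debug to the end of the ExecStart line, then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart enhanced-wyoming-satellite.service

# Follow the logs live while you utter the wake word:
journalctl -u enhanced-wyoming-satellite.service -f
```

The --debug output shows each pipeline stage (detection, streaming, TTS playback) with timestamps, which is what makes the lag and early-cutoff issues diagnosable.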
Fantastic series of videos. As a newbie to HA, your instructions and explanations are just brilliant. I had been planning to use Logitech Media Server, until Music Assistant is out of beta, and a bunch of piCorePlayers (each with an amplifier HAT to drive some speakers) for the whole-house audio. Is there any way to integrate the Wyoming voice assistant functionality you have created with the piCorePlayers such that they would go quiet when a voice command was spoken etc.? Basically to achieve the same end result as you have, but using LMS and piCorePlayers for the audio distribution?
Built it with a Raspberry Pi 3, a USB microphone off a USB webcam, and headphones! It sounds so much better than the ESP32 ones! Wow! Pretty easy to get the basics going! Adding all your enhancements now! 👍
Thank you for the tremendous work that went into this! I'm now a subscriber. That said, Jarvis seemed slow to respond. Any way to improve that, or to use Siri and Google Assistant until better hardware/software happens? Another commenter said Jarvis trips randomly every 20 minutes, which is a deal breaker. Lastly, how might we tie in existing ceiling speakers? I have an old Nuvo whole-house audio system with control panels that has died, which I need to replace with a new amp and control system and tie into HA.
1. Yes, the extra PulseAudio layer seems to add a 1 second delay. I bet there is a solution and I haven’t found it yet. I do think more powerful hardware could help. 2. Try different wakewords and different mic gain/noise cancelling to avoid false positives. I’ve found Okay Nabu wakes more often than Jarvis. We need to train these wakewords even more to really solve this issue. 3. Watch my “stealth voice assistant” video. It will answer all your questions.
It works for my use case. But I do want to go further with those scripts and have ability to play songs, albums, artists too. Keep an eye on the repo and I’ll share those scripts if someone else doesn’t commit them before I do.
I've been following this as I'd like to do something similar. I notice that there's a pretty substantial delay between your wake word and the command. Is there no record-buffer capability with the Pi and HAT setup you use?
Cool project! Gonna try it. But at the beginning you mention that Sonos voice control is powered by google or Amazon. Sonos Voice is actually local to the device. No cloud
You’re right. Sonos music streaming is local (although they love keeping all your logs in their cloud, go read about it). But, the built-in Google and Amazon voice assistant in their speakers very much rely on the internet to control your Sonos.
Looking forward to that presence detection tutorial! Maybe can we get a tutorial on running scripts/SSH commands on the Pi through the voice assistant? Would greatly appreciate it!
Silly question, but first... your videos are SOOOOOOOOOOOOOOOOOOO helpful, thank you!!! Okay, so I have my 'Jarvis' working in HA. Is there a way to funnel the output into a Sonos speaker instead of a speaker plugged into that Pi HAT LOL... If you can help, that would be appreciated, but if not, STILL much appreciated for your work!!
Yes. There is a way to do this. But nothing I've found that isn't a major hack and that is stable. I'll keep hunting for the right solution and share as soon as I can.
Wait, I have Sonos speakers in my living room but Google Hubs and Nest speakers in all other rooms. How are you setting it to play onto the speakers?
Watch this video. It shows you how I'm doing it. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-eN0_8GCsZm0.html There is a more convenient/wireless way to do this, but I/we haven't built a tutorial for it yet.
Thank you so much for this! In my journey to remove Alexa, this was one of my concerns with my Sonos integration. You have built in all of the nuances that I was concerned about losing - dimming the music while talking, etc. I have already followed your ChatGPT integration and I'm loving it! Thank you for continuing this path in your videos!
I would just like to say, Thank you for making this fantastic video. You made it so easy to follow. The explanation really helps. Thank you again for your time and help.
Can you turn an ESP32-S3-BOX-3 into a media player like this? Or does it have to be a Raspberry Pi? Edit: based on watching more of the video, it seems like it has to be a Pi 👍 So pumped to build this! Thank you again!
PulseAudio, openWakeWord, Snapcast... I think these services are beyond what an ESP32 can handle, thus requiring a Pi. Please, anyone, correct me if I'm wrong here.
@@FutureProofHomes I am the Snapcast developer for MA, and just running Snapcast on an ESP32 is problematic. There are several implementations, but none is good enough for something like this.
Great content, thank you for the detailed instructions. New subscriber here. I've ordered a Raspberry Pi to try and get this done. Question: do you know if it is possible to have the response from Assist play on another device? I know how to do this on an ESPHome satellite, but I would like to do it on a Wyoming satellite.
They’re included in the Rhasspy Wyoming satellite repo. Just use the wave files included in the project. Check out my other vids; they show you how to use them.
Absolutely amazing content, I'm definitely going to try this out. I do have one question though and I'm not sure if it will work, but maybe you can shed some light on it. I have existing Sonos speakers. Would it be possible to use the Sonos speakers (mapped as mediaplayers in Home Assistant) as output via Home Assistant or do I need to hard wire the pi + hat into a speaker?
@@FutureProofHomes If possible both, I'm using the cheap Ikea Sonos speakers throughout my house and if I could combine this, I don't have to get separate speakers just for this project.
Another question: when you activate the wake word, you don't speak until it lights up. Is that a requirement? I'm really looking for a more natural engagement where you start with the wake word and just speak normally.
Kinda, yes. Gotta take a breath so it begins recording. It’s still kinda like riding a bike. Takes a moment to get used to. It will get better and I plan on experimenting with better hardware soonish.
@@FutureProofHomes Have you replaced your Alexa with this new setup yet? Watching all your other videos, you have a lot of stuff built around Alexa. I've been on a journey of removing Alexa and using HomePod as a media player. I can't seem to find the middle ground or best of all worlds. It does seem the Wyoming and Home Assistant stuff is getting real close.