Hi. So glad I stumbled onto this tutorial. Can you confirm whether the battery should still be out of the board while the new config is flashed? I don't have a kill switch.
How do I add Kyberphonics blade styles to my proffie board when there are no blade styles included? I used those files on my snv4 and they worked great.
You would need to add a blade style to the config file by getting one from a website like Fett263. I think I might have a Darksaber video that covers it. It's been a long time since I've made it, but if you search for something along the lines of "adding blade styles from Fett263" on YouTube, you should find something that shows how it works.
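For anyone following along: a blade style goes into a preset entry in the Proffie config file. A rough sketch of what one looks like, based on the default ProffieOS configs (the font folder, track, and style here are placeholders, not from any particular saber):

```cpp
// In the CONFIG_PRESETS section of the config file.
// Each preset pairs a font folder, a track, one style per blade,
// and a display name:
Preset presets[] = {
  { "MyFontFolder", "tracks/mytrack.wav",
    // A basic audio-flickering blue blade with white clash flashes
    // and 300ms/800ms ignition/retraction times:
    StylePtr<InOutHelper<SimpleClash<Lockup<Blast<Blue>,
        AudioFlicker<Blue, White>>, White>, 300, 800>>(),
    "my blue" },
};
```

Styles copied from a generator site get pasted in place of the `StylePtr<...>()` expression, then the config is recompiled and flashed.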
Dang it. I'm looking for one where I can clone my voice to narrate my books. A 200-character limit ;( If I'm running it on my own system, that limit should be unlocked. Hopefully I can find something.
I just got a Kybersaber saber, and when I go to the SD card I don't have a config file like yours, so I can't add blade styles in the config file; the blade styles are in the font folder. Is it different for all sabers?
My Realistic Vision model doesn't show up in the refiner tab on the right side. I saved Realistic Vision in D:\Fooocus\models\loras. Please let me know if I am making any mistake.
I work with solder and test electronics, so almost everything in electronics has something to do with solder. Before, I only saw a motherboard as a PC component. Nowadays when I see a motherboard, I see the caps, resistors, transistors, PCB traces, and much more. A whole world was hidden right in front of my eyes.
Note: The sponge does indeed expand upon contact with water. Personally, I have a preference for using a brass wire solder tip cleaner, but I still wanted to share this update for those interested.
So we cannot robustly train this model? I can easily narrate for an hour or so if need be in order to produce an actually good model, because the samples given here are pretty bad.
I don't really care about cloning a voice or doing it in real time. I care about a good text-to-speech capability with a voice of a chosen sex, age, and accent, plus some emotional state (normal, panic, etc.) and speed/pitch controls, that would make the voice sound more realistic. For content creation where, for example, you need to produce a voice for a device to inform the user "Door open, door open" or "Warning, warning, obstacle behind," such a TTS is beneficial because you don't need to hire a voice actor or manipulate your own voice to get it sounding the way you want.
I love you bro, that was well said: show how to leverage AI to make your workflows, lives, and overall creativity more streamlined and enjoyable! Shit gave me goosebumps! Also, it's the model. There are already models out there that sound really, really damn good. You've got to learn how to fine-tune them, which I'm trying to learn right now.
Definitely not claiming to have a lot of experience. I'm just not particularly fond of the one that came with the unit, based on first impressions, though I might totally change my mind after actually using it. I'm just super interested in the whole world of electronics, and my old one died on me. I had the little brass sponge in my last unit and really got used to it. I'll still probably use that guy, but I'll give the sponge a go as well.
Yeah I’ve never really used them in the past either! Unless it was for a really blunt ended iron for how would I say…”non delicate or precise work” like plastics or other non soldering projects. 😂 ain’t that the truth tho! 🥶
Hello. If you know the creator of Fooocus, please ask him to allow the option to create custom image sizes. It is a bit annoying to be limited to the preset measurements. Fooocus is a great tool, but there is that problem with image sizes.
I’ll try and figure out a way to modify the code for the program to allow for custom image sizing and ratios. If I can sort something out I’ll make a video on the update.
@@wingnut_labs Thank you so much. I think the image size parameters are in the configuration file but it would be clearer if you put a video to understand it. For example, generating 1152 x 1152 images, etc.
You can definitely add different image size preset values to the configuration file. The problem is not knowing what image sizes were used to train the particular model you're using; sometimes you get freakish-looking outputs.
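For anyone who wants to try it, recent Fooocus builds read size presets from `config.txt` in the Fooocus folder. A sketch of the relevant keys (exact key names can vary between versions, and the `width*height` values below are just examples):

```json
{
  "available_aspect_ratios": [
    "896*1152",
    "1152*896",
    "1152*1152"
  ],
  "default_aspect_ratio": "1152*896"
}
```

After editing, restart Fooocus and the new sizes should appear in the aspect-ratio list under Advanced settings.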
Agreed 👍🏻 especially with my voice it seems to do a poor job. It gets some samples I give it better than others, but overall I’d say I’m not super impressed with the state of the tech.
@@wingnut_labs Like the LLM leaderboards, we should have a TTS leaderboard for best quality/efficient to run list to compare self-hosted TTS models. Nothing I've seen yet comes close to 11ai yet unfortunately.
You still can't fully appreciate it until you draw up a schematic for one, price out all the parts, and note that it's built by hand, so it comes out to be about the size of an oven for the power of a $5 Arduino. I love triggered spark gaps, vacuum-tube logic gates, and core rope memory. But a logic gate is a logic gate regardless; I just like that I can build logic gates rather than buy them.
Unfortunately it’s an analyze-then-synthesize type of model, so it doesn’t modulate any voice input in real time. I do want to make a video on some of the AI voice-changing programs I’ve been messing around with. Those ones do in fact change voices in real time.
I'm more than a little concerned using a "public" system for free. Does this mean if I upload my voice, that my voice is now available for anyone to access? Gee...what could possibly go wrong?
If you want one that's more uncanny you should use RVC, although it's mildly impressive what OpenVoice can do with just voice audio and not needing to craft an entire voice model around it.
The model 1 version of the Python program would execute without too much fiddling. Model 1 is pretty horrible for cloning, though the effort is made with a few exposed parameters. There is a hard limit of 200 characters for TTS in model 1. Model 2 requires an OpenAI API key with a funded account.
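One way to sidestep a hard per-request character limit like model 1's is to split the text on sentence boundaries into chunks under the limit and synthesize each chunk separately. The chunking itself is plain Python; the actual TTS call is whatever API you're using (a minimal sketch, not tied to any specific library):

```python
import re

def chunk_text(text: str, limit: int = 200) -> list[str]:
    """Split text into chunks no longer than `limit` characters,
    preferring to break at sentence boundaries.
    (A single sentence longer than the limit is kept whole here.)"""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        if current and len(current) + 1 + len(sentence) > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be fed to a TTS call and the audio files concatenated.
```

This keeps requests under the cap without cutting words mid-sentence, which tends to sound much less jarring in the stitched-together audio.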
From what I see, the first link is just a demo. I can't use that, as it's too short. And the second has 2 ipynb demos in it, but none of them work. Any way around this?
Thanks for sharing this. I've been trying to figure out how to clone my own voice, but I don't have programming experience, and other options have been complicated and hard to find support for. The music you use during your intro has some sort of popping effect that makes it sound like you're having microphone issues. Also, it's overwhelming your voice.
Well, I'm pretty sure this video was probably made years back and only uploaded 4 days ago, because VS Code no longer supports such a thing as Black, autopep8, yapf, or Prettier. The best thing to do is to stop using VS Code.
Isn't bracket pair colorization a built-in VS Code setting since v1.60? The Text Editor > Bracket Pair Colorization option? Anyway, still a great tip for productivity. About the Better Comments extension: in a project with more than one dev, all of them must have the extension installed to see the colors too, right? Great video btw, thanks for sharing these tips.
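For anyone looking for it, the built-in version (available since VS Code 1.60) can be toggled in `settings.json` without any extension:

```json
{
  "editor.bracketPairColorization.enabled": true,
  "editor.guides.bracketPairs": "active"
}
```

The second setting additionally draws colored guide lines for the bracket pair the cursor is currently inside.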
@@wingnut_labs Great to hear that! I went through every popular theme available, defaults included, and tested them out to see which would be best for me, finally settling on Monokai Vibrant. The Spirited Away theme and Tokyo Night Alt are honorable mentions. I changed some things manually within settings, such as the blue button to a gray one to fit better, as well as bracket coloring to traffic-light colors so I know how deep I am with nesting. Finally, I made my terminator punctuation (";" for example) red so I never miss adding it before compiling, and changed the storage.type color, as it could be confusing for a class declaration and a type to have a similar shade of blue. With these additions, I finally have a completed IDE UI customization B)
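Tweaks like the red terminator punctuation described above go under `editor.tokenColorCustomizations` in `settings.json`. The scope names are TextMate scopes; the exact colors here are just example values, not the ones from the comment:

```json
{
  "editor.tokenColorCustomizations": {
    "textMateRules": [
      {
        "scope": "punctuation.terminator",
        "settings": { "foreground": "#FF3333" }
      },
      {
        "scope": "storage.type",
        "settings": { "foreground": "#66D9EF" }
      }
    ]
  }
}
```

You can find the scope under any token via Command Palette > "Developer: Inspect Editor Tokens and Scopes".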