Definitely a cool AI. I definitely cannot afford this, but it's awesome that it exists. Sadly, I still cannot get local Piper TTS training running; there is always this error when trying to train. Also, do you know if it's possible to install Stable Video on Automatic1111?
Hey natlamir, I was wondering, only if you want to of course, whether you could add a Piper TTS extension to the oobabooga WebUI? There is one for Coqui TTS, and one for Bark (though it doesn't show up on the WebUI as an option to enable). The reason I am asking is that when doing a voice clone in Piper, it sounds closest to the original audio samples, and it is also simply the fastest of all the AI TTS systems I know (and obviously I know this from you). But thanks either way even if you don't do it; your content is awesome and easy to understand.
Probably the worst-programmed install of any of these AI programs. (Not you, I mean the original creator.) I tried for so long to get this running and just have so many issues. I got as far as running the Gradio demo, and now I get this error: ImportError: cannot import name 'ADDED_KV_ATTENTION_PROCESSORS' from 'diffusers.models.attention_processor'. I think I'll just skip it, as I've spent 24 hours trying to get Magic Animate to run without any luck.
I'm sorry you're having trouble with the installation. The error suggests a compatibility issue with the diffusers library. Try updating it to the latest version or downgrading to a known compatible version.
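As a rough sketch, pinning diffusers to an older release inside the Magic Animate environment is one way to try resolving the missing-symbol error. The specific version number below is an assumption for illustration only; check the version actually pinned in Magic Animate's requirements file before installing:

```shell
# See which diffusers version is currently installed
pip show diffusers

# Downgrade to an older release (0.21.4 is only a guess at a
# compatible version; verify against Magic Animate's requirements)
pip install diffusers==0.21.4
```

If the import error names a symbol that was added in a newer release, updating instead of downgrading would be the fix, so checking the requirements file first saves a reinstall cycle.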
@Natlamir Thank you for the reply! Good to see you active. I have notifs on for your channel. Looking forward to seeing what we're doing next. Be well, Brother 😎
Good call. I updated the page with a link to this GitHub issue, which might be sufficient for downloading just the pretrained model files needed rather than the full directory: github.com/magic-research/magic-animate/issues/13
For my voiceovers, I use a combination of manual recording and AI-assisted text-to-speech, depending on the content, through Eleven Labs or Piper TTS. I'll consider making a video on my process in the future. I think I remember creating a video a while back on the process I used for that.
I know that when you are making the video, it tells you in the top left how long it has currently been processing, but I render these overnight because it took quite a while when I was making my first video. Is there a way to see how long the video took after it has been rendered?
Can you make a video about ImageDream and DreamCraft3D? They are both image-to-3D-model AIs, and they require tinycudann. I can't install them because tinycudann isn't installing on Windows for some reason. Thanks!
Thanks for the suggestion. I'll look into ImageDream and DreamCraft3D for potential future videos. Regarding TinyCudaNN, Windows installation can be tricky. Consider using WSL2 or a virtual environment as alternatives.
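For anyone trying the WSL2 route, a minimal sketch of installing the tiny-cuda-nn PyTorch bindings from the official NVlabs repository looks like this (it assumes a CUDA toolkit, a matching PyTorch build, and build tools are already set up inside the WSL2 distro; those prerequisites are usually where Windows-native installs fail):

```shell
# Inside WSL2 (e.g. Ubuntu), with CUDA and PyTorch already installed,
# build the tiny-cuda-nn PyTorch bindings from source:
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

The build compiles CUDA kernels for your GPU, so it can take several minutes and will fail with a clearer error message than on Windows if the CUDA toolkit version doesn't match the installed PyTorch build.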
I'm not aware of one specifically for Piper, but I'll keep an eye out. RVC to ONNX sounds interesting; I'll need to do some research to see if that is possible.