Thank you so much for your hard work! ❤ Looking forward to the release of the tutorial on model training for EmotiVoice! 😊 Natlamir, if you read this comment, can you please 🙏 answer whether you are going to do this release? If yes, when? I would be very grateful for an answer ✊
@Mavrik9000 I will keep an eye out for when that functionality is implemented. They created a milestone for it: github.com/netease-youdao/EmotiVoice/milestone/3
The emotions are in the data\youdao\text\emotion file. Open that in a text editor and you should see several lines containing Chinese characters. Each line is a different emotion. Copy/paste into the prompt field in the web UI.
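If you would rather list them from a script than scroll through a text editor, a few lines of Python will print them. This is just a sketch, assuming you run it from the EmotiVoice folder, that the file path is the one above, and that the file is UTF-8 encoded:

# print_emotions.py - hypothetical helper, assumes the data\youdao\text\emotion file mentioned above
with open(r"data\youdao\text\emotion", encoding="utf-8") as f:  # assumes UTF-8 encoding
    for line in f:
        emotion = line.strip()
        if emotion:
            print(emotion)  # each non-empty line is one emotion you can paste into the prompt field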
Gather up, children, and I'll tell you a story from long, long ago... about a man who dared to make a text thumbnail for YouTube and mock the great Greta. A renegade. A pioneer. A rebel with a cause. They don't make them like that anymore...
🤣🤣🤣 When I first heard the quote from the DINet audio samples, I thought it might be a quote from a Harry Potter movie or something. But now it is engraved into my brain after using it for the comparison between wav2lip, videoretalking, and DINet, so it was the great Greta who said this and not Harry Potter! 🤣
@Natlamir I thought you had mastered all these things, but now it seems I just have to wait for a new creator or a new video on YouTube. Sorry for wasting your time.
Might be able to use that with Segment Anything or inpainting. There is probably some Automatic1111 extension that does that, or it might be built in; I haven't explored Automatic1111 much, but that might be a way to do it.
Too bad it only supports English and Chinese and they don't have a training procedure yet. Do you know if there are any other high-quality, multilingual emotive voice TTS projects like this one (maybe Coqui)?
Yeah, I think the training procedure is on their TODO list, so we might get that soon. They have listed some projects at the bottom of their GitHub page, with credits to what they used; those linked projects may provide similar functionality, perhaps with multilingual support, but that would need some research.
No problem, you would just need to run these 3 commands:
1. First activate the environment: conda activate emotivoice
2. cd to the folder it is installed in: cd c:\ai\emotivoice
3. Run the web app: streamlit run demo_page.py
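If typing those every time gets tedious, a small launcher script is one option. This is only a sketch, assuming conda is on your PATH and that the env name and install folder are the same as in the steps above:

# run_emotivoice.py - hypothetical launcher, not part of the EmotiVoice repo
import subprocess

subprocess.run(
    ["conda", "run", "-n", "emotivoice", "--no-capture-output",  # --no-capture-output needs a fairly recent conda
     "streamlit", "run", "demo_page.py"],
    cwd=r"c:\ai\emotivoice",  # folder EmotiVoice is installed in
    check=True,  # raise an error if the command fails
)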
Hi! Thanks for your work, it goes a long way in helping people dive into the world of neural networks. I would like to propose something: I am a blogger from Russia who also talks about neural networks, but I make portable builds, so that people who are far from Python and everything like that can run and use these programs. How about doing a collaboration? If you're interested, let me know. Thanks.
Thanks, that is great. Portable builds without needing to go through the Python package installation process sound like a great idea. I just do this for fun and share what I learn while picking up new machine learning concepts along the way. If you have a GitHub that I can contribute to in any way, feel free to share it and we can see how to streamline the installation process.
streamlit run demo_page.py results in: ModuleNotFoundError: No module named 'yacs'
File "C:\Users\derpy5\miniconda3\envs\EmotiVoice\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "C:\Users\derpy5\EmotiVoice\demo_page.py", line 18, in <module>
from yacs import config as CONFIG
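That error means the yacs package is not installed in the environment you are running from, so the import at the top of demo_page.py fails. Installing it should fix it, something like: 1. activate the environment: conda activate EmotiVoice 2. install the missing package: pip install yacs (or pip install -r requirements.txt if you skipped the requirements step) 3. run the web app again: streamlit run demo_page.py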