@@latestmusic1575 In this link, the script files will install the latest versions of Rope Pearl and the Rope development fork (Alucard24 fork): www.patreon.com/posts/most-advanced-1-105123768
Hey, great tutorial. I've been using this tool for a while. You can also create embeddings by holding Shift and selecting several images of the same subject; this greatly increases the quality of the swap.
I wish I had found this video before going through the install that I did! Awesome work, man! I had no idea what I was doing but got it installed, and would have loved to have found your video first, haha.
This is really useful. Thanks. The only issue with DeepFakes/FaceSwaps is the face/head shape: since it doesn't replace the entire head, in many instances the subject resembles, but doesn't fully match, the target face.
It would be cool to have a tutorial combining this with LivePortrait for vid2vid reenactment, focused on maximizing quality while also avoiding warping of anything that is not the face (which means the rotation in the driving video is not taken into account).
If you don't use the 1-click installer and install Rope manually, will it download the needed models like you showed in your video, or do you have to install them manually from another site?
How does this compare to DeepFaceLab? I tried using DeepFaceLab and got stuck when the character's head is behind someone else's, and the face glitches when the camera pans. The unobscured parts worked somewhat. Not entirely accurate, but more training will fix that.
This is the most accurate one, but I can't say it will work 100% in every scenario. Also, on Patreon we have a development fork installer as well, with more features.
Thank you very much for such a detailed video. I have a question for everyone: I have a prompt video where a person is standing and waving their hand, and another photo with a different background and a different person. How can I turn the photo into a video so that the background stays the same but the person from the photo exactly repeats the movements and poses of the person from the prompt video? What AI tool should I use for this task?
I just learned that by holding Shift you can select multiple images of the subject for a multi-image swap. It generates embeddings automatically, which may improve quality; try it.
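For context, a multi-image embedding is usually just the selected faces' recognition embeddings combined into one vector. Here is a minimal NumPy sketch of that idea; this is an illustration of the general technique, not Rope's actual code, and the toy vectors below stand in for what a face recognizer would produce:

```python
import numpy as np

def average_embedding(embeddings):
    """Combine several face embeddings into one identity vector by
    L2-normalizing each vector, averaging, and re-normalizing."""
    embs = np.asarray(embeddings, dtype=np.float64)
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Toy example: three 4-dimensional "embeddings" of the same subject.
faces = [[1.0, 0.0, 0.1, 0.0],
         [0.9, 0.1, 0.0, 0.0],
         [1.0, 0.0, 0.0, 0.1]]
combined = average_embedding(faces)
```

Averaging several views of the same person tends to cancel per-image noise (lighting, pose), which is plausibly why multi-image selection improves swap quality.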
It is here in the attachments: www.patreon.com/posts/most-advanced-1-105123768 — Rope_V4.zip, Rope_LandMarks_V4.zip, Rope_Live_Stream_v4.zip (this one is the most advanced version).
I would like to understand better how to download the application you mentioned for generating videos and images. The explanation wasn't very clear to me, and I'm unsure if it's necessary to pay to access it. Could you please provide more details on how to install the application and if there are any associated costs? I appreciate your help in advance!
Yes, if you want the installers, they are on my Patreon ($5 monthly), but you can keep using the downloaded scripts forever. You can also use the author's original repo and install manually.
Firstly, thank you for putting out such amazing videos. I have been playing around with Rope AI and it is fantastic. I do however have one particular issue: there is this one particular face that keeps turning up as a black box. Why is that? Is it like a security thing? I tried using a high-res image but still got the same result, and I can only assume it has to do with the face. P.S. This is just me fiddling around, and while I don't care if I can't use that face, I am curious as to why that black box shows up.
@@SECourses I downloaded all the models, tried different ones, and also used another image, but same issue. Ah, it's alright, I was just curious about when that happens. I can only assume it's when the AI can't detect the face correctly.
@@SECourses Ah, I just joined their Discord. Apparently it is a known issue, specifically with East Asian faces. lol. Not really sure how to fix that one, but at least I know why it is happening. :)
In this link, the script files will install the latest versions of Rope Pearl and the Rope development fork (Alucard24 fork): www.patreon.com/posts/most-advanced-1-105123768
Mac with CPU? Is a GPU a must? It uses inswapper? Inswapper only supports 128 (very sad), so what are the 256 and 512 options in the UI? That's not inswapper, I know that. Enhancers and upscaling?
One problem with this one and, for example, FaceSwapLab (they use the same model, inswapper_128): as you mentioned, they cannot correctly capture mouth changes (as I remember, Roop didn't have such an issue). So it's almost useless for "talking" videos, because it just ignores some pronunciations and the mouth movements do not match the sound. Face parser is a strange fix that replaces the target mouth with the original mouth, which immediately destroys the desired result. For non-talking videos and static pictures, the mouth is not a problem.
Roop also had the same issue; both use that old inswapper_128, but this guy made some improvements to the resolution. Until someone releases a better model than that 128 px one, this is the best we've got.
@@SECourses He uses some upscaling tricks, but the results can often be worse, and the 512 mode is close to unusable quality. They also do not work well with the Strength value. They're just faster to process than the prior upscaling methods.
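For anyone wondering why the 256/512 modes can disappoint: the swap model itself still outputs 128 px, so anything above that is interpolation, which cannot recover detail that was never generated. A toy NumPy sketch of the idea (nearest-neighbor upscaling, purely illustrative; the tool's real pipeline uses proper interpolation plus enhancer models):

```python
import numpy as np

def upscale_nn(face, factor):
    """Nearest-neighbor upscale of an HxWxC image: every source pixel
    is repeated factor x factor times, so the result is bigger but
    carries no more information than the original 128 px output."""
    return np.repeat(np.repeat(face, factor, axis=0), factor, axis=1)

# A fake 128x128 RGB "swap output" upscaled to 512x512.
swap_128 = np.zeros((128, 128, 3), dtype=np.uint8)
swap_512 = upscale_nn(swap_128, 4)
```

This is why a face enhancer (GFPGAN, GPEN, etc.) is usually applied on top: it hallucinates plausible detail rather than merely stretching pixels.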
That would most likely be caused by a missing FFmpeg. Can you verify that? Please watch this tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE--NjNy7afOQ0.html
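If you want to rule FFmpeg out quickly, a small standard-library check works (assuming the usual binary name `ffmpeg`; adjust if yours differs):

```python
import shutil

def have_ffmpeg():
    """Return True if an 'ffmpeg' executable is found on PATH."""
    return shutil.which("ffmpeg") is not None

if not have_ffmpeg():
    print("FFmpeg not found on PATH - install it, then reopen your terminal.")
```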
Can I ask: when I am recording the video to export, the VRAM seems full (red color) and it gets a lot slower. Should I clear the VRAM during recording? I'm still not clear on how to use this function.
Can anyone comment on the differences between Rope, Roop, and FaceFusion? Roop is great for photos, but not so great when things move in front of the face in video. FaceFusion does better masking for video, but you can only convert a single pic at a time. I can't find any decent video showing the end results of a Rope-swapped video, just a few frames, less than 5 seconds of video.
Rope has the best face-tracking features. In my Patreon post I included the experimental development fork as well, which has even more features. By the way, the entrance is a 21-second clip that I changed with Rope.
I am using a 3060 12GB and when I turn on Restorer only and press the playback button, it slows down. Are there any possible improvements? It remains slow even if I set the thread to 1 and the swapper resolution to 128. Thanks.
How much VRAM does your system use before starting the app? 12 GB should work fairly well; even 8 GB works single-threaded. If you need speed, on Massed Compute 20 threads works at the best settings: m.ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-HLWLSszHwEc.html
@@SECourses The task manager shows that the CPU is always at 100% while the GPU has room to spare. Is the CPU the cause of the slowdown? Of course, Python, Git, CUDA and FFmpeg have been tested. :'(
@@SECourses An Intel i5-12400 is used. The RAM is 16 GB and only 50% is used when playing with the restorer on. The VRAM shown in the top right-hand corner of the screen is also less than half. The video plays at normal speed for scenes where no faces are shown, but slows down when faces are shown and a face swap is performed...
@@mainevvv If you have used my installer and followed the Python tutorial, I guess that is the limit. If you are a gold Patreon supporter, I can connect to your PC and check it out.
Maybe with a good template, but not that I know of. However, I just published a Massed Compute tutorial, which is many times better: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-HLWLSszHwEc.html
Hi! I just subscribed to your channel as a member. I have used this program only for image swapping, but the result is not really good enough. I just wondered, is there another way to get a better result — only for images, not video? Thank you in advance!
Hi. Thank you for subscribing. We now have 2 installer files; the landmark version has new experimental features. Have you tried it? Also, to get better results, your source faces need to be from a similar-looking person. And can you tell me more about why the image is bad? I mean which part.
@@SECourses Thank you for the really quick response. Firstly, I just used the file in the video; I had no idea about the landmark one. (Bad image: compared to the face in the high-definition original image, the output image is not refined and the color is very awkward.)
@@SECourses Now testing the new version with the same settings. As of now, GPEN1024 is producing much better results than any other option! For making images, GPEN1024 is best. Thank you for the help.
@@behrampatel4872 This can run perfectly locally. In this video I run it locally on my PC. Cloud tutorial here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-HLWLSszHwEc.html
@@SECourses Thank you for the clarity. From now on I will assume there will always be 2 videos showing both workflows. Local and remote. Cheers and thanks for such detailed videos.
@@harsh_108 Massed Compute is a cloud platform like RunPod. You can watch the first 10 minutes of this tutorial to see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LeHfgq_lAXU.html
@@SECourses I am using your Google Colab 1_click_deep_fake_for_free_by_SECourses.ipynb, but I found an error when trying the Colab: the resulting face is shaking or swaying. Please fix it.
Then I found this in your Google Colab:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
pandas-stubs 2.1.4.231227 requires numpy>=1.26.0; python_version < "3.13", but you have numpy 1.24.3 which is incompatible.
tensorflow-metadata 1.15.0 requires protobuf=3.20.3; python_version < "3.11", but you have protobuf 4.23.4 which is incompatible.
tensorstore 0.1.64 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.
tf-keras 2.17.0 requires tensorflow=2.17, but you have tensorflow 2.14.0 which is incompatible.
torchaudio 2.4.0+cu121 requires torch==2.4.0, but you have torch 2.0.1+cu118 which is incompatible.
Successfully installed addict-2.4.0 albumentations-1.3.1 basicsr-1.4.2 customtkinter-5.2.0 darkdetect-0.8.0 facexlib-0.3.0 filterpy-1.4.5 gfpgan-1.3.8 google-auth-oauthlib-1.0.0 insightface-0.7.3 keras-2.14.0 lit-18.1.8 lmdb-1.5.1 ml-dtypes-0.2.0 numpy-1.24.3 onnx-1.14.0 onnxruntime-gpu-1.15.1 opencv-python-4.8.0.74 opennsfw2-0.14.0 pillow-10.0.0 protobuf-4.23.4 qudida-0.0.4 tb-nightly-2.18.0a20240908 tensorboard-2.14.1 tensorflow-2.14.0 tensorflow-estimator-2.14.0 tk-0.1.0 tkinterdnd2-0.3.0 torch-2.0.1+cu118 torchvision-0.15.2+cu118 tqdm-4.65.0 triton-2.0.0 wrapt-1.14.1 yapf-0.40.2
WARNING: The following packages were previously imported in this runtime: [PIL,numpy] You must restart the runtime in order to use the newly installed version.
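When conflicts like these appear, a quick standard-library way to confirm which versions actually ended up in the runtime after the restart is the following sketch (illustrative; the package names checked are just examples from the log):

```python
from importlib import metadata

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it
    is not installed in the current environment."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

# Print what the runtime is really using after the restart.
for pkg in ("numpy", "protobuf", "torch"):
    print(pkg, "->", installed_version(pkg))
```

Note that pip resolver warnings are often harmless for this tool as long as the versions the installer pinned (e.g. numpy 1.24.3, torch 2.0.1+cu118) are the ones that survive the runtime restart.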
We have a development fork installer on Patreon, and it has way more features. Sadly it wasn't ready when I published the video, so it wasn't shown. Also, this is extremely fast with 20 threads on Massed Compute, at 31 cents per hour.
I'm an English teacher. I used to have an accent like your current accent 😅 but you're the teacher here on YouTube. Thanks for this amazing tutorial, man. Really appreciated!
@@SECourses Bro, it's paid, but I thought it was free. So what if I make it myself? I know that it's possible, but it has too many files and I can't do it on my own. So what do I do, bro? Can I buy this from you?
What is better out there? I just deleted FaceFusion; this tool here is light-years ahead. On my 4090, videos with face swap run almost in real time, lol. Incredible.
I'm going to tell you guys the truth about this very misleading video. This tool is nowhere near this good — not even 20% as good as they try to make it out. It is easier than Roop and similar tools, but it does not produce better results. They literally cherry-picked the example in the video just to make it look this good; it will not typically work remotely close to this well. Imagine making a competent video, only to outright lie in the presentation of the results, the claims of value, and the title, ruining an otherwise informative video.
@@SECourses Hey, but the question is about Kaggle's policy: it's simply not against the rules if we import our own personal AI tool notebook, so I think we could upload this code there too that way, right?