Not dangerous at all. It won't work for most people; it takes too long and the results are very poor, to be honest. You need professional software and knowledge to pull off anything remotely good.
@@hayleeeeeee Dude, in a matter of a year it went from needing thousands of high-quality pictures of a person in all different kinds of lighting, plus days of training, to a single video and one hour of training. Soon it will take one picture of subject A to perfectly replace the face in video B in one minute. Then we'll have audio deepfakes and bam.
I don't get it either. They just skip right over really important and basic stuff while adding in all sorts of useless info. It's like they expect you to know how to do it before even watching the tutorial.
This is probably the best walkthrough explanation of deepfakes I've ever seen. Quick question: how long does it take to merge? Will it depend on the type of PC you're using?
Merging should go pretty quickly, at least a few FPS. Adding extra options like color transfer and super resolution will slow it down, sometimes dramatically. I believe the process is highly CPU-bound.
When I click on train Quick96, it initializes models at 100%, then loads samples at 100%, but then the second "loading samples" doesn't start. I have a Ryzen Threadripper 1920 with 12 cores and 24 threads and an Nvidia GeForce RTX 2070. Do you think the problem is because I'm running DeepFaceLab from an external hard disk? I will copy it to my internal hard disk and see what happens.
@@brockphillips6411 You need 2 videos, but any free video-editing software can put pictures in sequence using a user-friendly interface and export the result as an mp4.
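To expand on that: if you already have ffmpeg on your PATH (DeepFaceLab ships a copy in its _internal folder), you can skip the video editor entirely. A minimal sketch in Python; the `stills/%05d.png` pattern, output name, and frame rate are assumptions for illustration:

```python
# Build the ffmpeg command that joins numbered stills (00001.png,
# 00002.png, ...) into an mp4 that DFL's extraction step accepts.
def images_to_video_cmd(pattern, out_path, fps=30):
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate
        "-i", pattern,            # e.g. "stills/%05d.png"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # widest player compatibility
        out_path,
    ]

# To actually run it:
# import subprocess
# subprocess.run(images_to_video_cmd("stills/%05d.png", "data_src.mp4"), check=True)
```

The `yuv420p` pixel format is there because some players and tools choke on the 4:4:4 output ffmpeg defaults to for PNG input.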
They're right, it's a different piece of software called First Order Motion. You could make a kind of Dame Da Ne meme using DeepFaceLab, but it would look weird. Your best bet is to use the other method...
Honestly, that janky method is better left out of the deepfake side of things. It's more of an "obviously bad fake to annoy your friends with" kind of thing... lol
*IMPORTANT: CHOOSING THE RIGHT BUILD*
Download Here: mega.nz/folder/Po0nGQrA#dbbttiNWojCt8jzD4xYaPw
DeepFaceLab is designed to run on Windows 10 and Linux.
DFL 2.0 NVIDIA RTX3000 series build - NVIDIA 3000-series GPU required.
DFL 2.0 NVIDIA up to RTX2080Ti build - NVIDIA GPU with CUDA compute capability 3.5, 5.0, 6.0, 7.0, 7.5, 8.0 and higher. Check your GPU here: developer.nvidia.com/cuda-gpus. CPU training requires the AVX instruction set.
DFL 2.0 DirectX12 build - AMD, Intel, and NVIDIA devices supporting DirectX 12.
DFL 1.0 OpenCL build - Devices supporting OpenCL. This version is no longer maintained.
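If you're unsure which NVIDIA build applies, check your card's compute capability on the page above; recent NVIDIA drivers can also print it locally via `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` (the field name is worth verifying against your driver version). A small sketch mapping that number to a build, based only on the list above and not on any official guidance:

```python
def pick_build(compute_cap):
    # Rough mapping from CUDA compute capability to a DFL 2.0 build,
    # following the list above (an assumption, not official guidance).
    major, minor = (int(x) for x in str(compute_cap).split("."))
    if (major, minor) >= (8, 6):     # RTX 3000 series cards are 8.6
        return "DFL 2.0 NVIDIA RTX3000 series build"
    if (major, minor) >= (3, 5):     # 3.5 and up per the list above
        return "DFL 2.0 NVIDIA up to RTX2080Ti build"
    return "DFL 2.0 DirectX12 build"  # fallback for everything else
```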
I'm trying to help all the people who are in the same situation I was. It took a long time to figure out how all this works and I'm still learning each day.
Having issues with the sample-loading part of training: it just stops after all the samples load, then says "press any key to continue", and pressing a key does nothing.
First of all, great video. You're a great teacher. Second, I know deepfake was invented for creating "adult" content and I shouldn't do it, but I can't resist the urge to put some of my favorite celebrities into my favorite "adult" videos.
Hi there! I really enjoyed the video but I came across a little issue... I get the following error when doing the step at 3:09: Error: No training data provided. Did I miss a step? Please could you help me? Thanks.
Wow, really good and straight to the point. Thanks for the video, man; I've been interested in this for a while. Are there any guidelines? For example, if I want to attach my own face to a video and I'm recording a source vid, what would be the fastest way to capture all my face angles while keeping the source as small as possible? Just look around in all directions?
That sounds like a good start for a general faceset, but you also want many facial expressions, so maybe try reciting something during the recording or have someone interview you. If you’re doing a specific video then anything you can do to match the angles and lighting will be a great help. You can delete unnecessary src images during the View Facesets step.
The same software has a more capable model called SAEHD; Quick96 is more meant for tests or fun. Warning: I recommend learning everything you can about deepfakes to get good results. You ***NEED*** a powerful computer, and more importantly a powerful GPU. If you want highly realistic models, I can't recommend anything less than a 3090.
Hi again. I'm almost at 2000 iterations in my training but I still have just weird pixels next to the faces. There is no progress. Is this to do with my drivers?
2000 iterations isn't much when using Quick96. Let it go overnight at least. If you're just seeing weird colors and no face shape, it's possible you're using the wrong build for your card.
On Step 5, when I do the training, my training preview window doesn't open. Please help me; I've been looking for a solution everywhere and no one can help me :/
See this deepfake at 1 Million Iterations! - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-lnUbEPFlgKA.html DeepFaceLab 2.0 Quick96 Deepfake Video Example
I tried training Quick96 with my GPU (RTX 3060) but it stops at "Press any key to continue". I had to use my iGPU (AMD Radeon Graphics) to render instead. Do you have any solutions to fix this?
I can't extract the facesets. If I extract, even with default settings (src or dst), it extracts just 1 image and then shows a bunch of errors. What do I do?
Hey man, this is a great video! I just have one question. After I did everything in the video the result I got was just a still image. What can I do to fix this?
@@Deepfakery Faceset extraction is not working properly on command prompt... Extracting faces ... Error.. I didn't get any options to type CPU or GPU . . . Unable to start subprocess . Press any key to continue 😞
There's a new version for DirectX 12 available. There's also DFL 1.0 OpenCL in the /2020 folder. If you're using the NVIDIA build, try running '10) make CPU only.bat'
@@kaoe145 I have a nvidia 1060 6GB max q, I only did about 3000 iterations and it took probably less than an hour. Let me know if you want to know the other specs but I'm pretty sure gpu is the most important
@@ronanm4418 Thanks for replying. I have a GTX 970 and I've been training on a 5 min video; it's been running for 6 days and the preview is still blurry. How long was your video?
So I'm having a problem with step 4. When I go to check the faceset extracts from data_dst and data_src, it doesn't show me anything; it just pops up some sort of browser that only shows the images extracted from the video, and they aren't aligned. I followed all the steps correctly. Do you have a solution?
You should be seeing a file browser with square images of all the faces that have been pulled. Those are the aligned faces, meaning they’ve been rotated upright and cropped to a square. Are you seeing something else?
I ran into another problem. When I go to merge the face onto the video after finishing training, it says "no faces found for 00001.png, copying without faces". Did I miss a step after the training?
After I launch the training file and it initializes models and loads samples, I hit any key to continue and nothing happens. You didn't say what this could be caused by or how to fix it.
ok, that was cool. So we only need to rename the images to this format if we are using custom data for the process. Is there any specific size ratio in which the data should be made?
I don't think so. I took a look at the code and I think it automatically converts the pixels and aspect ratio for you. The video sources need to be mp4 or a similar video format, though.
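If you do want to rename a custom image set into that numbered pattern yourself, here's a minimal sketch (the folder names are just placeholders; it copies rather than renames so a bad run can't destroy the originals):

```python
import os
import shutil

def make_sequential_copies(src_folder, dst_folder, ext=".png"):
    # Copy custom images into dst_folder as 00001.png, 00002.png, ...
    # in sorted filename order, matching DFL's extracted-frame naming.
    os.makedirs(dst_folder, exist_ok=True)
    files = sorted(f for f in os.listdir(src_folder)
                   if f.lower().endswith(ext))
    for i, name in enumerate(files, start=1):
        shutil.copy(os.path.join(src_folder, name),
                    os.path.join(dst_folder, f"{i:05d}{ext}"))
    return len(files)
```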
@@abrahamnunez9761 In my case the interactive merger is not working properly, so I use it only to play around with the settings. After that I rerun the merger and just say no to the interactive merger, and everything goes smoothly.
I'm running a 4080 Super on the 30-series executable, as that's the most modern, but I'm only getting 1.46 it/s when the 1080 Ti in the video gets 3.71 it/s. How do I increase the iteration speed? My GPU isn't being utilized at all. Thanks.
Thanks for the support! Try the SAEHD model next; it's a similar process and you can even use the same files from this tutorial. The only things I would mention are to turn off 'flip faces randomly', and to use random warp for a good while before turning it off. There are a lot more options in SAEHD, but it doesn't have to be complicated either.
SO cool, thanks for the tutorial. Will this method handle applying the fake face onto a target face that is turning to a full-on side profile? What about being partially in shadow? Or does it need to be a pretty well-lit target face? Thanks!
Yes, it will work for almost any face with enough effort. However it will require using an XSeg mask to really get the tough angles and dark faces. Also a good faceset of course. Profiles are kind of difficult to do well; you can see some examples in my latest videos.
Hello, great video!! But I have a problem with step 7: it doesn't open up the merger window. I tried everything, but it just won't work! Any ideas what I could do?
Hello! I was following your tutorial, and man, it's very detailed, but I do have a question/problem. When using face extraction, some faces are either upside down, blurred, out of place, or not even there. Do I just delete them and re-run the extraction, or can I just delete them and continue from there?
Check out this Faceset Extraction Tutorial - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q44LPygdMxU.html If you still have questions feel free to ask!
When I arrive at the merge step, I don't see the commands. If I press Tab I see the first frame, then if I press W or S or Shift + > etc., everything freezes. What is the error?
Could be a few things. First off you can reset the merger settings by deleting the merger session file or selecting not to use it when you run the merger. If your frames are very high resolution you may have trouble loading them. Also make sure you still have all the destination frames and didn't move or delete any of them.
If you remove upside-down photos of the face in dst, will it cause the mask to cut out in the video? What about getting rid of unwanted faces? I know it won't affect src, but does it affect dst?
Upside down photos are usually a result of false face detection. You will need to remove them, and they will be skipped in the merger. There can be more than one face per person sometimes, so the frames with those upside down photos might have good ones in the previous or next file. Check the filenames for possible duplicates. If you have to remove those faces then you can either fix it in post by duplicating a nearby frame, or skip that section of the video altogether.
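To automate that filename check, a small sketch; it assumes aligned files are named with the source frame number followed by an optional _N face index (e.g. 00123_0.jpg, 00123_1.jpg), which can differ between DFL versions:

```python
import os
from collections import Counter

def frames_with_multiple_faces(aligned_folder):
    # Count aligned images per source frame; frames that produced
    # more than one face are candidates for duplicate/false detections.
    counts = Counter()
    for name in os.listdir(aligned_folder):
        stem = os.path.splitext(name)[0]
        counts[stem.split("_")[0]] += 1
    return sorted(frame for frame, n in counts.items() if n > 1)
```

Each frame number this returns is worth opening by hand: one of its faces may be the real face and the other a false detection.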
Hi, just a question: how did you get so many it/s? Mine is running at 1.20. Does it depend on the hardware, or on the software I downloaded? I use the DX12 one.
I think you will have to use CPU. I have just checked, and DeepFaceLab requires your GPU to have CUDA compute capability 3.0 and above. Yours appears to be 2.1.
Hello, I followed these steps, but when I try to train the model with Quick96 it gets stuck at "Initializing models: 100%". I have a 3060 Ti; I tried with CPU and got the same issue. Do you know how to proceed?
Faceset extraction is not working properly on command prompt... Extracting faces ... Error.. . . . Unable to start subprocess . Press any key to continue 😞
@@gustavkirchoff4633 Do I have to hold down P or is there a way to make it automatic? Or can I just leave the program, come back in two hours, press P and I'll get 90k?
I've got 603037 iterations and some of my result previews are still blurry. Do I have too much src/dst (9k and 30 respectively)? Do I need more iterations, a better faceset, or to fix some settings?
Merging doesn't work for me, even when I choose NO on "use interactive merger". Nothing works; it stops at like 4, 6, or 8 percent and never finishes. Can anyone help me please?
It's from Nuance. They have a business solution, but I just used their web demo page to generate it for free! Some people like it, some hate it. Sadly the voice I used is no longer public. I had this idea of doing all my videos by making my own TTS voices, but I haven't gotten a decent enough result yet.
Hello, I'd like to make my own DFM model with SAEHD. I tried based on the instructions but it failed. Can you explain how to do that? I want to make a new DFM model that will work for any similar face in DeepFaceLive, like the pretrained Tom Cruise DFM model.
If I have a frame where the face is at a weird angle, do I delete it in Aligned Results? Will I still get to manually extract it? Not sure what deleting a frame means for masking in that frame.
Is there a way to run multiple instances? So I can 'prep' a video while another one is training. I've noticed Face Extract won't work if there is another process running. Is there a way around this?
Hi... I love this video, thank you for making it... I want to ask: is the training preview supposed to be fast or slow? Because in the video it goes fast, but on my PC it's slow.
Heyy! I'm getting an error in the first step itself, where it says the system cannot find the specified path, and ' "" ' is not recognized as an internal or external command.
Nice tutorial. But I sometimes have a frame issue; I don't know exactly, but I'd guess it happens when I have 8000-10000 frames: some of the frames get intertwined. Then I have no option but to delete or quit that project. Do you have a solution? I checked the name sorting of both the extracted original frames and the aligned faces (the file names and order were not intertwined).
When I loaded the merging window, it crashed once and froze the next time; now it won't stop crashing. It won't let me fix any of the issues going on with merging at all. The result is broken as well.
Does a 2060 work? If so, are there any tutorials on how to fully utilize it? Otherwise: the one I downloaded from DeepFaceLab's git doesn't recognize it and just goes to Quick96 :( I don't want to use the CPU...
Once my merging process is complete, the line about the session being saved and "press any key" doesn't pop up in the prompt. Any leads on what I can do?
I have exactly the same GPU (GTX 1080 Ti) as you, but training takes a lot of time. Did I miss something? Should I install some CUDA library to accelerate the training process? Thanks a lot and congratulations!
I have 2 x 1080 Ti so with 1 it will be significantly slower. CPU may affect the speed as well; I'm using an i5-9600K at around 4.30GHz. You cannot easily change the speed of training with the Quick96 model. You can remove some data_src/aligned images to lower the overall package size. The SAEHD trainer, while a bit more complicated, allows you to tune various settings to your system.
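If you want to script the data_src/aligned thinning mentioned above, here's a sketch that keeps every Nth aligned image and deletes the rest (run it on a backup copy of the folder first; the keep ratio is an arbitrary choice):

```python
import os

def thin_faceset(aligned_folder, keep_every=2):
    # Delete all but every Nth image in sorted filename order to
    # shrink the faceset; returns how many files were removed.
    files = sorted(os.listdir(aligned_folder))
    removed = 0
    for i, name in enumerate(files):
        if i % keep_every != 0:
            os.remove(os.path.join(aligned_folder, name))
            removed += 1
    return removed
```

Because extracted frames are consecutive, neighboring faces are nearly identical, so dropping every other one loses little variety while halving the set.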
Hey, this is a good video, but I have a question: how long should you train it? And can you, for example, let it train too long so that it no longer works?
'"D:\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\python-3.6.8\python.exe"' is not recognized as an internal or external command I am getting this error
Some NVIDIA GPUs, even among the newest ones (but for notebooks), will not do any of the processes. You also can't activate hardware scheduling for these types of GPUs on Windows 10. Because of this I had to do everything with CPU only, slow but worked, thanks a lot.
It seems like a lot of their notebook GPUs do not support the CUDA Compute version needed for DeepFaceLab, even if they have enough VRAM and other resources
HELP! When I try to extract images it shows this, even though it's the same data videos: 0%, Images found: 566, faces detected: 0, Done. Specs: RTX 3060, AMD Ryzen 7 5800, 8GB RAM