I want to APOLOGIZE for something. A friendly viewer of the above video reminded me of the following: in English, there is a HUGE difference between referring to a non-white person as "person of color" vs "colored person". "Person of color" (PoC) is the preferred terminology. "Colored person" is an extremely racist term popularized in pre-1970s America and is found extremely offensive in modern English-speaking culture. I wasn't aware of that, and I feel really bad that I used the term "colored person" :-( This wasn't intentional; I thought I was doing the right thing and that in this way I wasn't discriminating against anybody. If anybody who has watched this video feels discriminated against by me, then I am very sorry for that. I am not a racist at all; I worked at a school for migrant children, and I have good feelings towards all people all over the world.
Don't worry, it's an easy fix and you already did it. This is why it is so important for us (yes, I'm a Black American) to stop allowing all these new terms for our ethnicities. We're all human, and there really are no so-called "races". If I'm a Doberman and you're a German Shepherd, aren't we both dogs? Many people were indoctrinated into this line of "racial thinking", but there is no real biological science to back it up at all. I grew up in New York City as a kid, and James Brown still sings "Say It Loud, I'm Black and I'm Proud!" So how am I now African American when I haven't even been to Africa yet? (But I plan to visit in a few months.) From what I can see, Elon Musk and Charlize Theron are really "African American", but there is no pathway for them to say that either. "Person of color" is OK, I guess, but even when I hear that I look for blue Smurfs or a yellow Bart Simpson. The US has everyone totally confused. How come South America is called that, and not America? And how come professional American baseball claims to crown the "World Champions" when the only other team is in Canada? Makes absolutely no sense at all. And don't get me started on American football, where they rarely even kick the ball LOL! Sorry to belabor the point, but I wanted you to know that we all understand you loud and clear! We're simply Black people, and that carries universally. Thank you @digital_magic
Take the AUTOMATIC1111 Stable Diffusion roop extension for nearly perfect face swaps. This also works for grid images: all you have to do is input "0, 1, 2, 3" into the face number option to face swap all four faces. This works great. Thank you for this video
Thanks for your great comment. I was thinking about doing something like this and making a tutorial about it, but I didn't know about the "0, 1, 2, 3" option. Thanks for the tip; hopefully I'll find time to make a tutorial about it
I also thought about using roop for the faces. But somehow they always look like perfect masks sitting on top of the real face. You know what I mean? They stick out too much most of the time.
@@chrisbraeuer9476 That's interesting; does that also happen if you use the 'swap in source image' option instead of 'swap in generated image'? (I don't actually know what they do since I've never tinkered with it. Guess I'll do that now!)
@littlered6340 You can get really good results in roop, but only for the face. For turns, or if things cover the face even for a short time, it becomes visible. At the moment I have several routes, but none of them gives perfect results; there are always little pieces that don't work 100%. That EbSynth method gave me the clearest and best quality results so far. I am sure that if one put enough effort into it (cutting backgrounds and changing stuff by hand), I could get it perfect, but that's not an option since it would take too long. Yesterday's video failed because my initial starting video was a bad choice. I tried to bulk remove the backgrounds, but around 150 frames needed editing: her trousers, the background, and a table nearby had nearly the same color, so the background removal got confused. But that's fixable. The character itself, the sharpness, and the fluid movement were there; just the seams were visible. I need to play around with the EbSynth settings more; yesterday was the first time I used it. I used txt2img with ControlNet.
These AI tools are getting so crazy that if you don't use them to your own advantage, you will be left far behind everyone else. To be fair, it's really good for content creators (and even better if they use it along with Famester). Create some content and make it go popular straight away; I like that.
I am glad you liked it :-) and am delighted by your comment :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
@@digital_magic Yes, we can wait for the right time for the update. Thanks, by the way, for the good explanation, but it needs to be more detailed to clear up the myths and concepts about img2img, as you used txt2img
You're so welcome! I am glad you liked it :-) I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment. And thanks for your great comment :-)
@@digital_magic I wish you well amigo 🙏 I'm sorry to hear about your health situation, and I hope you get better soon. The only thing I can suggest to fight this, since I have Crohn's, is to eat as many anti-inflammatory foods as possible and be careful with processed foods. I truly wish you well 🙏
@@digital_magic Yes, chayote and asparagus are a winning team. For soups I would cut some chayote with potatoes, meat (fish, chicken, or beef), spinach, and garlic. For hot plates I would do mashed sweet potatoes 🍠 and steamed asparagus with a little bit of spices for flavor. Adding turmeric to the dishes helps a lot too. I also do veggie protein shakes with different fruits like shaved apples, pineapple, and banana with raspberries, ginger, and blueberries. Milk-wise I use coconut milk, as cow milk can actually hurt the gut, so most often I use coconut milk for my protein milkshakes too.
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 10 days, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
I am glad you liked it :-) I will start working on the 2nd tutorial today. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
Yes, Tokyojab is a legend :-) I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
I don't know why people are having the AI generate the background in every frame. Animators do not draw the background over and over; we have a BG plate, and the character animation is an overlay. Instant flicker-free.
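The "background plate" idea from the comment above can be sketched in a few lines of Python, assuming the character frames have already been cut out with an alpha channel. The file names, sizes, and the synthetic demo images here are placeholders, not part of the original workflow:

```python
# Sketch of the background-plate idea: render the static background once,
# then composite each transparent (RGBA) character frame on top of it.
from PIL import Image

def composite_over_plate(plate, character_frames):
    """Paste each transparent character frame over the same background plate."""
    out = []
    for frame in character_frames:
        merged = plate.copy()
        merged.paste(frame, (0, 0), frame)  # third arg: use the alpha channel as mask
        out.append(merged)
    return out

# Tiny synthetic demo: a solid blue plate and one frame containing a red square.
plate = Image.new("RGB", (64, 64), (0, 0, 255))
char = Image.new("RGBA", (64, 64), (0, 0, 0, 0))
for x in range(16, 32):
    for y in range(16, 32):
        char.putpixel((x, y), (255, 0, 0, 255))

frames = composite_over_plate(plate, [char])
```

In practice the plate would be one clean render of the background, and each stylized frame would go through a background-removal pass before compositing, so the background itself can never flicker.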
Thanks for your comment; I understand what you mean. Years ago I created many videos for kids, and I filmed myself on a green screen, so I always worked with a background plate. I probably should have done that with this video as well, but as I'm currently suffering from an autoimmune illness, I can only work for 2 hours per day on the computer because I have inflammation in both elbows and shoulders. So I skipped it because that saved me time in creating the tutorial. I hope you understand, and here is the link to my old channel where I created the kids' videos: www.youtube.com/@ZupalandFunLearn
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
Looking forward to video 2 also... still experimenting with this method. Got some OK results with 1 frame, but it's a little tricky getting the 2x2 frames to diffuse without border changes... and still experimenting with the number of keyframes to avoid blurry interpolation between them... but hey, it's an interesting approach. Looking forward to the next video
I hope to knock the universe over and do some scenes from the book 'The Exorcist' that aren't in the movie. There are some great scenes I would love to bring to life... and in the audiobook the author does the dialogue perfectly...
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
@@digital_magic I sincerely hope that your immune illness improves soon, allowing you to recover fully. Your dedication and perseverance, despite the challenges you're facing, truly inspire me. Take care and prioritize your health during this time
Thanks, I am glad you liked it :-) Hahahaha... yeah, I wish that was ready already as well 🙂 I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment 😞 But I am going to start working on it on Monday...
@@digital_magic Sorry to hear that. Hope you feel better soon. I had another quick question: can you talk a little bit more about how you dealt with the blinks? Eye blinking is causing me a big headache in EbSynth right now. Thanks, and best wishes for a speedy recovery!
@@831digital Thanks, my homeopathic doctor said that it will get better within 5 months 🙂 I guess I was a bit lucky with the eyes, because at frame 11 it went wrong in the 1st keyframe, and then in the 2nd keyframe I was lucky that frame 12 started with a good shot of the eye. In general, I think it matters how close your keyframes are to each other. I would suggest starting with a short sequence of at most 100 frames to learn the method. What you could also do is choose a new keyframe at the point where the eye is open again. Something else that matters: if the head turns, EbSynth can lose track. Hope this helps 🙂
@@digital_magic Hmm I had a comment here that I think got auto-removed. Just inquiring if you had heard of Wim Hof method re. immune illnesses. Cheers and thanks for the great video.
@@digital_magic Oh amazing!!! Look forward to seeing it. I'm working on something that this would be super useful for. I'll set an alarm for then!!! Thank you!!!
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 5-10 days, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
Yes, it is very helpful; I also have 8 GB of VRAM at the moment. I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
@@digital_magic No, no, no, no. It's not offensive by any means; I was just joking. Maybe some overly sensitive people would think it's offensive, but I think it's perfectly fine, because the black woman's appearance in the video is not a caricature or a mockery of black people. There's obviously no negative intention behind it. No need to apologize 😄
@@bruhmoment3731 Thanks for your kind comment :-) And that is exactly how it is: there is no negative intention behind it at all. I am glad you see this :-) Thanks, my friend
Yes, over time I think it will all get less complicated; that's always how it is with new techniques, in the beginning it is very hard. But my goal is to create tutorials that are easier to follow in the future. I'm also depending on how the software develops
I'm kind of new to AI, but in the past months I've had much better results just using the EbSynth extension. Here I got a messed-up eye and mouth and still lots of flickering, from a video that has almost no motion. And all you did was change the color (can I still say color?). And why exactly did I have to download 3 models? Anyway, thanks a lot for your time and effort
Do you have a tutorial on how to do this with DaVinci using the free version from the beginning? I'm not familiar with the software, and your initial steps in DaVinci are hard to follow/duplicate in the free version... Where can I go to get this portion sorted out? Thanks for the amazing videos!
I followed these instructions, but I don't see the lineart model after choosing lineart realistic in the pre-processor - any suggestions? I want to try this so badly
This is the one you should see in the model dropdown: control_v11p_sd15_lineart [43d4be0d]. If it is not there, then you should probably update the ControlNet extension and Stable Diffusion
Hi, in DaVinci Resolve, after running the saver, I get 240 frames as output, but you have only 75 in the 10-second video. Could you please tell me if I am missing something?
My video is not 10 seconds long, so don't worry about it. If your video is 10 seconds, then 250 frames is perfect. I wish you good luck with creating, and enjoy it. Always feel free to ask more questions
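For anyone puzzled by the numbers in this exchange: the frame count is simply the clip duration multiplied by the frame rate, so a 10-second clip yields 240 frames at 24 fps and 250 frames at 25 fps. A trivial sketch of the check (the function name is just for illustration):

```python
# Frame count = duration (seconds) x frame rate (fps).
def expected_frames(duration_seconds, fps):
    return round(duration_seconds * fps)

print(expected_frames(10, 24))  # -> 240, matching the commenter's output
print(expected_frames(10, 25))  # -> 250, matching the reply
```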
Is there any method to do this with an anthropomorphic animal, like a bear, shark, etc.? I tried different settings, and I can't control the movements with the original frames... and it's also difficult to maintain consistency...
Yes it is definitely possible to also do it with an animal, here is an example from Tokyojab: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3_fb2y9NrAE.htmlsi=NgjCwPeqTmD7iVS8
@@digital_magic So, I only managed to go from animal to animal. When I try to use a video of myself and transform it into a bear doing basic movements like moving the head and mouth, it just doesn't work?
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will still take about 10 days, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment. What kind of DaVinci tutorials do you mean?
@@digital_magic Get well soon! About DaVinci: I was trying to learn how to extract frames and such, but I managed to do it after searching on the net :)
I'm confused. I've got 30 keyframes, and when I drag the 30-keyframe folder into EbSynth, it doesn't add anything to "Stop:" in the bottom section. Then, when I click generate, it says I'm missing keyframe 0001, but my first frame is 0000?
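One possible fix for the 0000-vs-0001 mismatch described above is to renumber the frames so the sequence starts at the number EbSynth is asking for. The folder layout, the NNNN.png naming pattern, and the +1 offset here are all assumptions, not something from the video:

```python
# Hypothetical fix: shift every frame's number up by one so the
# sequence starts at 0001 instead of 0000.
import os
import re
import tempfile

def renumber_frames(folder, offset=1):
    """Rename NNNN.png files in `folder`, adding `offset` to each number."""
    names = sorted(n for n in os.listdir(folder) if re.fullmatch(r"\d{4}\.png", n))
    # Rename from the highest number down so we never overwrite an existing file.
    for name in reversed(names):
        num = int(name[:4])
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, f"{num + offset:04d}.png"))

# Demo on a throwaway folder with three empty placeholder files 0000-0002.
demo = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(demo, f"{i:04d}.png"), "w").close()
renumber_frames(demo)
renamed = sorted(os.listdir(demo))
```

Whether the keyframes or the video frames should be renumbered depends on which sequence EbSynth treats as the reference; the main point is that both numbering schemes have to agree.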
Sorry, no, it's only on my computer as a storyboard. But on RU-vid you can download the script; just search on RU-vid and you'll find out how to do it, it is very easy
Yes he has, @THEJABTHEJAB. He didn't want me to mention it in the video because he just uses it as a dumping place for his videos. And I'm thankful for your great comment, but it's not me who is generous; it is very generous of him that he shared everything with people on Reddit and also with me. He has been very helpful in creating this tutorial; I owe him a lot
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
I don't believe that you meant it to be offensive however, no portion of the African diaspora is "colored", as if -- were there a default for human skin tone -- it would be something other than that of the original homo sapiens, from whom we all descend and owe our existence. Our skin isn't "colored" any more than your skin was erased. When unsure as to what to call a Korean you would identify them by the known general region of origin, Asian. The depiction is of an African woman, not a "colored", not a "woman of color", or some other branding that serves to remove the notion of Africans as being from a place on Earth. This planet is our home too. Fair tidings.
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't 😞
@@digital_magic, I hadn't read through your comment section, but again, I don't believe you meant to offend; in fact, quite the opposite. Your desire is inclusion. That's a great thing; you just had some dust on your shoulder. Navigating these things -- specifically the reality that there exist populations of people whom the various majorities are so very accustomed to shitting on, when YOU aren't a person who is trying to shit on them -- is just tough. I can't imagine there NOT being someone else in your comments who is also of African descent but is telling you that they prefer to be called "black". But certainly you understand English well enough (and even your own language) to know that the "n-word" means everything that "black" does -- so, yeah, I can understand (and appreciate) your not wanting to brand someone with such a negative word, as it's pretty obvious that you are a person who ISN'T trying to shit on anyone. It's a conversation that needs to be had, but it only matters to the people for whom it matters (again, I have zero doubt that there are plenty of people who replied to your post {which wasn't pinned, by the way} with something to the effect of "why do you care what they think?"). And certainly your channel isn't geared for all that. Nor should it have to be... you know? I took no offense, but only because it's obvious to me that absolutely none was meant. I appreciate your response.
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
I tried a few variations, but I cannot get this to work at all. The 4x4 grid never outputs 4 frames, but a chimera of sorts with everything mixed together. I tried different checkpoints and VAEs, but it only made it worse.
@@digital_magic I have A1111 v1.6 and ControlNet 1.1.415. I did a test, and I can make it work with that VAE and model; however, when I tried a different VAE or model, it failed. I am working with an XYZ plot to check what the cause may be: the VAE, the checkpoint, or something else.
@@digital_magic So I did an XYZ plot using various VAEs and checkpoints. NOW it works somehow... because of course it would! The detailing, however, is very difficult to change. I tried a walking animation, and unfortunately I cannot turn a stick figure into a person of proportion; either that, or the details are missed. It's an interesting technique, and to be fair it makes sense. However, your best bet would be to create a LoRA that generates these frames and then use that in EbSynth.
One question: whenever I try the tile technique for generating, the images end up very wacky, even with the same settings and a higher number of steps. What is the solution to this?
@@digital_magic No worries! I think it's a matter of the checkpoint I used, since I tried a different one and it works better. But I just saw you uploaded a new video with a better technique, so I'm checking it out!!!
@@digital_magic Thanks for the video! I subscribed. Could you include a quick step for doing animation in your new video? I think a lot of people are looking for an animation style.
EbSynth makes my frames look terrible. Where your example messed up maybe 1 or 2 frames, mine ended up with most of the video being unusable: a lot of trails and bad encoding, across many frames.
If there is too much motion in the video, this technique has its limitations, unfortunately. Could you send me a link to your video so I could have a look?
No, not at all. You could also use a 512 x 1024 resolution, and you could add up to 16 or maybe even more images to the grid. In the second tutorial I made about this, I show how it works. Hope this helps; wish you a nice day
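The grid-splitting step discussed here (what the sprite sheet cutter tool automates) can be sketched in a few lines. The row/column counts and the synthetic demo image are assumptions; for instance, a 512 x 1024 grid with 2 columns and 4 rows would yield eight 256 x 256 frames:

```python
# Sketch of cutting a generated grid image back into individual keyframes.
from PIL import Image

def slice_grid(grid_image, rows, cols):
    """Split a grid image into rows*cols equally sized tiles, row-major order."""
    w, h = grid_image.size
    tile_w, tile_h = w // cols, h // rows
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            tiles.append(grid_image.crop(box))
    return tiles

# Synthetic 2x2 demo grid (128x128 -> four 64x64 tiles).
grid = Image.new("RGB", (128, 128), (0, 0, 0))
tiles = slice_grid(grid, rows=2, cols=2)
```

Diffusing the keyframes as one grid is what keeps them stylistically consistent; the slicing afterwards just recovers the individual frames for EbSynth.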
I'm back. Thanks again for this. I have a question, as I'm now super stuck and would be so grateful for help :-) When I do txt2img with my grid, the resulting image doesn't see the grid; it's like it's not seeing ControlNet. Have you come across this? I'm guessing there's a box somewhere deep in the settings I've missed. Thanks again! Dave
Thanks again for replying. So I'm definitely no coder. I looked in the cmd window and it told me this: Launching Web UI with arguments: --ckpt-dir D:\AUTOMATIC1111\SDMODELS\models\Stable-diffusion no module 'xformers'. Processing without... Could that be the problem? I'm based in Amsterdam, just in case that's a Dutch accent. :-) Thanks again.
@@davewaldmancreative Yes, I am Dutch 🙂 Yeah, I think that could be the problem. I would suggest asking in one of the Stable Diffusion Discord groups, like the Deforum group for example. Here is the link: discord.com/invite/deforum or ask in the Reddit Stable Diffusion group: www.reddit.com/r/StableDiffusion/
@@digital_magic 1:10 Because now they can do their favorite thing (which is pointlessly turning all white canon characters into black characters) 10 times faster!
@@Satan_said_DRAWdude Acting like whitewashing every character didn't occur. Many Japanese characters are whitewashed in live action, and Black and Hispanic ones too. Now when it happens to you, y'all are mad. I thought whites "don't see colour", as they always say.
Hopefully someone will answer my question. I have installed all this on a MacBook M1, all good so far. I am doing exactly the same thing with ControlNet, but when I press generate, it doesn't generate the 4 images I selected, just a random one. It's like my ControlNet doesn't work, and I don't know what to do. Please help!
I have had many other comments where people had the same problem, and this helped them in the end. I hope it will help you as well. I wish you a very nice day
Thanks for answering. The problem was solved: I didn't have the model file next to the preprocessor. I downloaded it, and it works. But now there's another problem. I have a MacBook Air M1, and it's generating slowly. When I try 1024x1024, at the end the terminal says: "MPS backend out of memory (MPS allocated: 5.32 GB, other allocations: 3.02 GB, max allowed: 9.07 GB). Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations"
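For reference, the environment variable that error message suggests has to be set before PyTorch is imported, e.g. at the very top of the launch script or exported in the shell before starting the webui. A minimal sketch; note that a ratio of 0.0 removes the MPS memory cap entirely, so the machine can still run out of real memory, and lowering the output resolution is the safer fix:

```python
# Workaround suggested by the MPS out-of-memory message on Apple Silicon.
# Must run BEFORE `import torch`, because the allocator reads this variable
# once at startup.
import os

os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

# import torch  # only import torch after the variable is set
```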
I don't understand what is revolutionary about this method if it's just EbSynth, and Stable Diffusion is only used to create stylized keyframes. It seems to me to be a very limited tool that doesn't allow me to realize all my fantasies.
Yeah, you are right that it won't realize all your fantasies, but it will realize a lot of them. I think with this method you can create amazing videos, but I see what you mean; artificial intelligence will develop further, and then in the end we can realize all our dreams. I wish you a very nice day
Nice job. Kindly share the second part. I have worked on a similar video on my channel; instead of using Stable Diffusion, I used a faceswap method to get a realistic picture. However, it will not apply to a full-body style. I would like to know if there is a possibility to use a Midjourney picture and get an exact pose from a reference image
I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 2-3 days, I am afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer at the moment.
Hi! Love the videos. I have a question: what can I do if EbSynth is just giving me "ugly" results? It looks like it's melting; it's just not working for me.
If EbSynth is giving you ugly results, there's probably too much motion in your video, or you didn't use enough keyframes. Unfortunately this technique is not very good with fast motion; in that case it's better to use a technique like the one I showed in my last video using Deforum. That one is a bit less consistent, but it normally doesn't give ugly results; it gives nice results to look at, although they aren't 100% consistent
Thanks for your message. I haven't tried it myself yet, and I didn't know it is possible to also use Temporal Kit with txt2img. In my contact with Tokyojab, he said that he gets much better results without using Temporal Kit, but I've never tested it myself. Have you already combined this with Tokyojab's technique?
@@digital_magic Thank you for sharing Tokyojab's method. You can apply that method with Temporal Kit: you can skip some of the workflow in Temporal Kit and just let it handle the image-to-grid alignment and keyframe separation, etc. If you try it out, you can easily adapt Tokyojab's method into it and have a 50% automated solution.
Yes, in my new tutorials I'm trying to speak a bit more slowly; I know I can sometimes talk a bit too fast. Thanks for letting me know; I'm working on it. At the moment I just have to figure out how to use SDXL, because I just bought a new graphics card that can cope with it
Hey there, have you solved the problem already? After choosing lineart realistic as the pre-processor, in the model dropdown you should choose this: control_v11p_sd15_lineart [43d4be0d]. There is no special "lineart realistic" model. I hope this helps, and I wish you a very nice day
@@digital_magic Lol I'm from the United States. It seems like every RU-vid tutorial I watch on AI is made by somebody from Northern Europe. It's almost like AI is a parasitic alien that landed in your area of the world and now it's using you all to make it stronger.
I am sorry for that; they will be there in the 2nd tutorial as well, but after that I will figure out something more elegant :-) Thanks for letting me know 🙂
Hello there! I hope you are doing better! I just did everything step by step, but I'm getting single odd images and not 4 images as you are. Do you have any idea what could be happening here? Thank you very much!
@@digital_magic Oh, I don't think so! I'm trying to look into this, but I can't seem to find it. Also, I keep getting deformed people in the images, so strange! Thank you very much for the fast answer; I super appreciate it!
@@natniszakov_ Hey there, did you find the solution already? Are you working in txt2img or img2img? Are you using exactly the same prompts as I do? And are you using the same model that I used?
@@digital_magic Thank you so much again for your response! I'm doing everything exactly as you do in the video; that's why I can't figure out what's going on. The only different thing is that I'm using a Mac M1; do you think this could affect the whole process? Otherwise I'll just keep trying! Thank you very much again for the concern!
@@digital_magic Also, no, I'm using the same prompts but with my own image; the image follows the process you explained in the first part of the video
Hi, my sequence created 240 images, and EbSynth is saying "missing files". I researched it. Initially my dimensions were wrong: there was something wrong with the Sprite Cutter, so I had to slice the images using Photoshop, which resulted in dimensions of 256px x 256px, and I had to correct them to 512px x 512px manually. Should I try shortening the sequence to 74 frames and try again, since EbSynth is not recognizing the size?
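Rather than re-correcting the 256x256 slices by hand, they could be batch-resized with a short script like this. The folder, the .png extension, and the in-place overwrite are assumptions, so it would be wise to work on a copy of the frames:

```python
# Hypothetical bulk fix: resize every PNG in a folder to 512x512 in place.
import os
import tempfile
from PIL import Image

def batch_resize(folder, size=(512, 512)):
    """Resize every .png in `folder` to `size`, overwriting the originals."""
    for name in os.listdir(folder):
        if name.lower().endswith(".png"):
            path = os.path.join(folder, name)
            Image.open(path).resize(size, Image.LANCZOS).save(path)

# Demo on a throwaway folder containing one 256x256 image.
demo = tempfile.mkdtemp()
Image.new("RGB", (256, 256)).save(os.path.join(demo, "0001.png"))
batch_resize(demo)
resized = Image.open(os.path.join(demo, "0001.png")).size
```

Note that upscaling 256px slices to 512px cannot restore lost detail; if possible, it is better to fix the cutter settings so the slices come out at 512x512 in the first place.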
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't 😞
I am sorry if it is too fast for you; it's what YouTube pushes creators into 😞 Make the grid with the sprite sheet cutter tool; the link is in the description
You obviously put an enormous amount of work into this video; thanks very much. But the outcome is very flawed, half-baked in terms of professionalism. You did the best you could within the constraints, so it's not your fault, and well done for that, but the result is poor: distortion of the mouth, et cetera. I feel like this is one point where AI videos especially are overhyped. I also think that, for good or bad, content creators on a tighter budget are the pawns in the experimentation process (just like with social media algorithms) that enables the big guys to pull in the profits. I guess within a year, though, we may have affordable and *easy* AI video creation for many people that is actually of good quality.
Hey there, and thanks for your comment. I agree with most of it: AI video quality is still on the rise, and it will take some time before it really reaches something like the level of deepfakes. But I love playing with it, and I really love how it evolves and develops; that's why I'm working with it. I hope you can enjoy it as well and have fun creating
@@digital_magic It would seem that everything is in order, everything neatly laid out, but damn, there's so much filler, and nothing is clear =))
I can't find the video, but I guess they used roop or something like Swapface. I have a tutorial about Swapface on my channel if you are interested; it's the one with the 1-click thumbnail...
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
I appreciate this tutorial, but let's try not to use the word "colored" in the future; "woman of color" is a much better way to describe someone with a darker skin tone. Otherwise, amazing!
@@KINGLIFERISM Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
@@KINGLIFERISM Yeah, that's sort of what I mean. As I know that in Holland it is very offensive to say "zwart", I thought I shouldn't say "black" in my video. I really didn't know that the word "colored" was offensive in countries like America. I really feel sorry about it, but I can't really change it anymore; I think most people know that I wasn't being discriminatory
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
@@digital_magic Nah bro, you didn't offend me. I just know that people, especially here in America, are super butt-hurt about race-swapping any person of color to anything white. Don't listen to the haters and do what makes you happy artistically. Though if you're super self-conscious about not wanting to offend people, you could probably just stick to swapping white people to other races, because in today's culture that's more acceptable. Great video! I hope to see more awesome content from you going forward!
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't :-(
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't 😞
Thanks for your comment; I wasn't aware of that, which is why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands, and I didn't want to say "black woman". I thought I was doing it right, but I clearly wasn't 😞
@@digital_magic It's OK, my guy, no harm no foul, but GEEZ! Your information on us is as outdated as the first computer... you need to upgrade and immerse yourself in the culture, NOT through music but somehow organically, and I as well would love to learn more about the Netherlands. BTW keep making content, but pronounce "black" with a soft b 😂