@@theAIsearch if AI can generate deepfakes then surely it can be taught to distinguish between real and fake also. That's if there aren't already easier ways to tell.
@@dailydoseofshitpost751 Isn't it the least incel take to not trust "cute" streamers pointing to their OnlyFans on Twitch? Better not watch that. Also, there are a couple of guys posing as chicks to get freebies (in online games, as the cute girl with the skin, name, and/or behaviour).
@@antman7673 Yes, it's the least incel thing to not trust cute "streamers" pointing to their OF link, but the original comment never mentioned streamers; it only mentioned all of the cute/pretty girls who use social media in general.
I don’t think people caught this, but it seems like he is using the AI to hide his glasses in his webcam. You can see the bottom rim of his glasses occasionally appearing through the video.
It is the part I was most impressed by. Imagine being able to filter out your glasses so you look cooler. (Well I guess they usually hide the bags under my eyes, so it needs a little more AI adjustments before I look like a decent person without glasses)
@@linusgustafsson2629 You: Imagine being able to filter out your glasses so you look cooler. Me: Let me introduce you to a cool invention called "disposable contact lenses". 1-800-contacts
It's actually just another filter that's becoming more mainstream in conferencing software (another one makes you always look at the camera, for instance). I expect more to be baked into other face-cam software if it isn't already.
I can see this being used for online tabletop games like Dungeons and Dragons as well. You could have your orc, elf, or dragonborn character while also using an AI-altered voice.
Most people see this as a malicious tool, failing to realize its full potential. It's like sculpting 3D models for 3D printing; once we achieve AGI/ASI, our level of biotechnology and nanotechnology, as well as plastic surgery, will very likely and very quickly catch up to it.
It is a malicious tool. I haven't seen any benefit of AI other than being used for malice and silly things, and to curb creativity. And when all that AI biotech you mention comes along, I'm pretty sure it will only be in the hands of a few powerful people. Just like big pharma today.
The Turing Test of this will be when you see several simultaneous vids running in real time, of various ethnicities and genders, and you have to guess which one is the human that is originating them. Maybe put a 4 ms lag on the original to make it fair for the computers.
I think the VTubers will have competition, because now showing a face while keeping anonymity is possible in another way. And yeah, please, it would be nice to have a video on how to download these AI tools, or maybe on how to change your voice and face at the same time (that's not very complicated, but having an example like that could be really cool).
I weep for this generation. My wife and I have been married for multiple decades now and my late 20's boys (4 of them) are going to have a Hell of a time figuring out what's REAL from what's FAKE! I WEEP for them! YWHW doesn't make mistakes and telling an AI (or a plastic surgeon) what you WANT to look like is not the answer to your self worth! YWHW and His Son YASHEWA ARE! Shalom from TN, USA.
They have had to dumb down all the public AI tools for at least a decade and a half. But as more people become skilled and the hardware becomes more accessible, the gap is going to close very quickly.
Imagine this little girl runs up to you because she's lost, then the dude's voice comes out: "Hey bro! I am so effin' lost! My parents were just up ahead, but I got stuck staring at the photos at the ramen bar because I keep telling them I'm hungry, even though my parents say I eat too effin' much."
It's only the beginning; however, there is a ceiling, a limit. Not much more can be done. The face-changing technology is as good as it can get, or almost; I don't see much more that can be done. It's weird how, if you compare now to when we were kids, say the 90s, we wouldn't even have thought of this technology, and 20 years later here we are looking at it. Will there be this much of a difference in 20 years? I don't think so. I think we will hit a wall and technology will stop progressing. Look at my laptop: this thing has 32 GB of RAM; that's like 100 laptops from the 90s combined. It is scary, though.
I don't think DeepFaceLive is the tool being used for the cheek pinching etc. It may be an Apple proprietary filter or similar. DFLive doesn't typically work at that level with eating food or, again, grabbing portions of the face and having it remain flawless. It may be possible to create a custom-trained model where the source face library includes the person doing all those things, and it may carry over to the finalized model, but I haven't tested that personally (and I've made ~6 dozen models at this point).
In a very long stream the cracks would start to show, but still only subtly, especially at the bottom of the chin and the double chin. In a real usage scenario there is connection speed to contend with. Chinese, Japanese, Korean, and some Vietnamese cam models already use these, but mostly the ones that make their faces look thinner and lighter-skinned.
I can see a temporary way to verify streamers: have them mark their face with a marker, because the filter would most likely try to cover whatever they put on their face. But I say temporary because I'm sure someone will add a feature to circumvent that.
AI would never talk with its mouth full. If I were an AI model, I'd be like, "That's the last time I'm imitating one of those meat sacks again, so uncouth."
@@TheRatlord74 Fair point, but if you had to depend on it, you'd have to make sure you have complete control over it. Lots of this advanced tech, unfortunately, is controlled by people we don't know and/or trust. If the individual can keep full control of it the same way the individual controls a simple tool like a hammer, then I'm all for it. This tech is only as dangerous as the people controlling it and what they choose to use it for.
Could you show us the source of this video? And is it possible that this guy just recorded a talking video, processed it offline, edited it together, and posted it afterward to pretend it's a "realtime" deepfake?
The thing that's even crazier is that in China (since he is speaking Chinese), EVERYONE uses filters to make themselves look like that, with flat tones, so the little issues of the realtime deepfake aren't going to raise suspicion. The viewer is just going to assume she is using a Meitu美图 filter to slim her face, enlarge her eyes, or sharpen her chin. So it is easy to fool them. For Westerners who are not used to this stuff, it might fool them if they are not looking closely or if they are watching a low-bitrate stream. The danger is that people are either too comfortable or too trusting. While it would be really cool to do these things, and the positive side is limitless, there are people who will use it for negative things.
This could be your friends, people you know and trust, fooling you or using your entrusted data to scam and trick others around you. It could be your trusted YouTubers. It could even be a "journalist" for some news channel. And it can be so realistic you couldn't tell the difference. In fact, within China they have laws and regulations stipulating that every social media account in the country must have a real face and real data attached to it.
Luckily it's still so easy to see these are AI; they've got the stereotypical AI face, with clearly visible blurring and fuzziness in the movements and so on. I see so many AI videos all over social media now that are obvious to me, maybe not to people unfamiliar with it, but as someone who's played with AI a lot myself, it's so easy to tell it's AI. I think with current tech it's not going to be that hard to identify deepfakes. It'll take an entirely new AI built from scratch, with some major innovations, to create truly believable animations in real time, and I think we are maybe 15 years off from that point, just based on how much more powerful PCs need to get first, plus additional time for more advancements in AI.
@6:25 Some beauty filters from 5 years ago (Snapchat) didn't use AI or ML but simply overlaid a transparent mask and some pixel distortion over the detected face. DeepFace also does the same (overlays a mask), but the mask itself is AI-generated based on the facial features.
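For what it's worth, the old-style non-AI overlay described above is easy to sketch: detect a face box, then alpha-blend a semi-transparent RGBA mask onto that region. Here's a minimal NumPy sketch (illustrative only; the face box is passed in directly, whereas a real filter would get it from a face detector and warp the mask to facial landmarks):

```python
import numpy as np

def overlay_mask(frame, mask_rgba, face_box):
    """Alpha-blend a semi-transparent RGBA mask onto a detected face region."""
    x, y, w, h = face_box
    # Resize the mask to the face box by nearest-neighbour sampling
    # (a real filter would warp it to facial landmarks instead).
    ys = np.linspace(0, mask_rgba.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, mask_rgba.shape[1] - 1, w).astype(int)
    m = mask_rgba[ys][:, xs].astype(float)
    alpha = m[..., 3:4] / 255.0               # per-pixel opacity
    region = frame[y:y+h, x:x+w].astype(float)
    blended = alpha * m[..., :3] + (1.0 - alpha) * region
    frame[y:y+h, x:x+w] = blended.astype(np.uint8)
    return frame
```

An AI-generated mask (as the comment says DeepFace uses) would just be produced per frame by a model; the compositing step stays the same.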
I always express it as "An AI Solution that works at scale in a production environment is...just a solution." We don't think anything of the fact that we can (not that we DO, but we CAN) call our bank or cable company or whatever and we don't have to input our account number or telephone number, or press 1 for yes and 2 for no anymore. We can just SAY yes or no, or read off our phone number and (while The Solution reads the number back to us for verification) The Solution gets it right... We don't think of it as "An AI Solution" - it's just a thing that happens when you call your service provider - it's "just a solution."
I just ran into this problem in AIgencover: when I connect and run it, the public link doesn't show anymore. Could you please look into it? Also, the usage limit didn't show yet, so it's a different error.
I remember a Japanese man who broadcasted playing online and pretended to be a very pretty and young girl. One day the software failed and the show ended.
Interesting how the performance of these tools diminishes the greater the difference between the original and the target images. In most of the examples presented in this video, the difference between the original and target image is small. Not overly convinced thus far; more work is necessary.
Well, I think we can just conclude you're not sure which tool this guy used. I thought I'd discover a new tool with your vid; not the case. I've known about DeepFaceLive for a while, and I'm honestly not sure it's good enough to drive such a high-quality live swap (with all the gestures, pinches, etc.) as we saw in your reference vid; I have my doubts about it. It could be another tool, or maybe it evolved/improved enough for it; can't say.
The movie "Ready Player One" in action. It's all there: you might be talking to a pretty girl, but it's actually a guy named Chuck who lives in his mom's garage. 😁 The boundaries between real and fictional are blurring more and more. It's scary.
The biggest noticeable difference to me is the overall image is dulled and not as clear compared to the original. Especially, the eyes. They're not shiny/reflective anymore.
Wow. I have never been so relieved I am not into "beautiful girls online". The one thing I do not get videos recommended for is "turning stupid sentences into intelligent ones". That is of more concern to me. But wouldn't that be great to finally eliminate hate speech and teach people to talk properly again? We could even leave the whole internet discussion to AI and stand up from the computer and walk in the park - or like those cool people do: look up from their phones. Beyond the edges.
I am reminded of something from years ago. Someone with the nick "bigal", I don't remember the full story but that is both "big al" and "bi gal". 😂 With this tech "why not both?"
I'm 2/3rds of the way through this video, and the question I have is: who says this isn't a fake demo? I mean, we could just take two people, record them doing the same thing, and then claim it's an AI tool. You might think this would be extremely hard to pull off, but not in comparison to the tool being this good in reality. If you look frame by frame, you can see the ears, for example, being occluded on the guy but present on the girl. Also the face angle is off. If it's AI, why would it choose different angles, or show features that don't form part of the input reference image?
Who said it is in "realtime"? It could be a pre-recorded video that he fed into a Stable Diffusion model with a face swap / FaceID applied to each frame of the video. Or maybe he deliberately made a LoRA model of the girl from pictures to instruct the AI to be consistent on the face. The movement/facial expression can be tracked with ControlNet DWPose, a depth map, maybe both or more (there's a Canny preprocessor too that can help). But you can also do it live if you know ComfyUI, with some custom nodes that capture live frames and generate each image to switch the person. If you are thinking "how did he prompt so fast?", no need: ComfyUI can integrate an LLM directly inside the workflow to describe what it sees and write the prompt for him on every single frame.
This is why we need AI to discern who someone is, using metrics that are collectively probably at least twenty years away from being maskable even by someone who focuses on them until it's done. Then AI can be trained to estimate the probability that everything is natural: how someone speaks, breathing rhythm and heaviness, involuntary body-movement patterns. I wouldn't be surprised if these methods of pattern recognition have already been taught for forensic analysis and identification for decades. It's already next to impossible, if not impossible, to hide your location while online if national intel, or anyone with sufficient insiders, is involved. Don't trigger a need to investigate you.
Those AI image-processing tools produce really cool videos. The only thing missing is the voice processor. When such a pretty girl speaks with a man's voice, it looks scary. (Uncanny valley.)
I wonder how it would react to a mirror being introduced into the scene? If it was unable to process the “real” face and the mirrored image that MIGHT be a way of proving you are who you say you are. If not, well, we’re screwed.
I downloaded this thing on GitHub and extracted the zip file, and no program pops up; it's just a bunch of files. How do I turn this into a usable program? Sorry, I've never used GitHub before.
I notice that the point of view is not the same between the two videos. In the small frame, you see more of their left ear; but in the large frame, more of their right ear. At first I thought it was another of those side-to-side flips, but the gestures are on the same side. Thus, it seems that something is fishy with this demo.
@@theAIsearch I figured that the point was to have a raw video, run it through a process, and see a new (modified) video. As someone mentioned, the smaller picture has some artifacts of glasses being present, or added. So it seems that either BOTH videos are modified, or the larger one is the unmodified one and the smaller one the modified one.
Eventually AI, along with computer graphics and sound, will hopefully awaken enough people to the fact that although the internet draws data from reality the internet itself is not reality, just an easily manipulated version of it. That hopefully people will then start treating the internet as a novelty not worthy of so much human attention.
Is one positive side of this that the market for actors and presenters has opened up to everyone, because they no longer need to find a particular look in the person they're hiring? Lots more people could pursue their dream job in media. We just need to crack down on it being used without consent to imitate other people. Pretending to be people who never existed is fine.
Can I also do this the other way around? I want my image to talk, but in Avatarify it starts looking really weird. And I don't have the right PC for DeepFaceLive.
For realtime, you can also do it in DeepFaceLive; see the later part of this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-w0Wkhz4G6OA.html. For non-realtime, you can use this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-rlnjcRP4oVc.html or this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-uyjSTAOY7yI.html
Rather than a deepfake, maybe it's an advanced version of those TikTok filters? Two different things: a deepfake has a target you need to transform the input into, while a filter alters the input based on a set of parameters.
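A toy NumPy sketch of that distinction (purely illustrative, not how either tool actually works: a brightness tweak stands in for the parametric filter, and a blend toward a target image stands in for the target-driven swap):

```python
import numpy as np

def apply_filter(frame, brightness=1.1):
    """Filter: output depends only on the input plus fixed parameters."""
    return np.clip(frame.astype(float) * brightness, 0, 255).astype(np.uint8)

def apply_swap(frame, target, blend=0.8):
    """Deepfake-style: output is pulled toward a specific target identity."""
    mixed = (1.0 - blend) * frame.astype(float) + blend * target.astype(float)
    return np.clip(mixed, 0, 255).astype(np.uint8)
```

The key difference is in the signatures: the filter never sees a target image, while the swap cannot run without one.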
So if you go onto some platform, and a girl who looks 23 sings or plays the piano, and you gift her US$1 or US$3 per song, and 5 or 6 people are doing it, it could actually be a guy singing, or a woman in her 50s pretending to be 25.