"MKBHD can sleep soundly tonight knowing he doesn't have to kill another company." Haha, too funny and too true (unless of course he's talking about Apple, whom he love-loves)
It's such a cool idea. And I'll be the first to say the "It's not quite there" line here, but I think it is such an excellent first step in terms of taking a swing!
Not gonna lie, the first time they came around with the editing workstation (featured at the end of this video) I thought it was a prank. But nope: it was a legit device. I didn't mention it in the video, but you can also edit on the camera itself. Choose in and out points, etc.
The one sentence that got it right: the company works on real-world implementations of AI. They are probably desperate to do that -- to find real-world uses for the most useless and broken, yet overfunded, technology that has ever existed.
So let me get this straight. This big ass box does the same thing that some of the apps on my iPhone already do, in a smaller form factor and at higher resolution? Am I sure today isn't April 1? This kind of seems like a really sophisticated practical AI joke.
@@PhilAndersonOutside I've messed around a little bit with Runway. I still find most of them disappointing for their inability to hold onto a face or environment longer than a few seconds (not worth a subscription for me yet). Although that quirky quality is fun, it runs its course as a novelty soon enough. That said - to have something in-camera interacting with the subject for minutes on end is what appeals to me. A few years ago there was a fun free phone app that changed your face in real time, and it was pretty good, I have to say. I can't even find that particular one anymore because there are so many of them, and so many are just garbage. That's why I was asking.
Total waste of time… Why would anyone want to shoot with this when you could create the same thing from your normal everyday footage, keeping the ability to manipulate your footage later? It's just pointless.
I don’t know about pointless. Novelty? Sure, but there are tons of weird cameras that are used for effect. To be honest, I consider this more of a proof of concept for what we might see in the future.
Love that he calls it out: corporations weaponizing him for preferential treatment and to shit on other people's products, thanks to his larger-than-ever fanbase of people who can't think for themselves.
MKBHD's take was based - R1 was a scam from day one and he was right to call it out and shut down trash products that fleece customers on false promises.
Haha, I get ya. I mean, that said, most DSLRs do it now. Different breed, but totally get you. I do think you can probably capture and then upload. At least I'd imagine that to be the case.
The FLIR sensor captures infrared. If you've ever seen its output, it looks like a depth map, which is the kind of image used for image-to-image generation. Infrared requires special optics, generally made from germanium, silicon, zinc sulfide, or zinc selenide. 12 fps is probably a limit of the FLIR sensor itself: the US government restricts the export of thermal imaging hardware with frame rates above 9 fps.
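To illustrate the "looks like a depth map" point: a raw thermal frame is typically high-bit-depth, and turning it into a conditioning image for image-to-image generation is mostly a normalization step. This is a minimal sketch (the frame here is simulated random data, and the 160x120 resolution is just an assumption typical of small thermal sensors), not the camera's actual pipeline:

```python
import numpy as np

def thermal_to_control_image(frame_16bit: np.ndarray) -> np.ndarray:
    """Normalize a raw 16-bit thermal frame into an 8-bit grayscale
    image, similar in spirit to the depth-map-style conditioning
    images fed to image-to-image models."""
    f = frame_16bit.astype(np.float32)
    lo, hi = f.min(), f.max()
    if hi == lo:  # flat frame: avoid divide-by-zero
        return np.zeros_like(f, dtype=np.uint8)
    norm = (f - lo) / (hi - lo)  # scale to [0, 1]
    return (norm * 255).astype(np.uint8)

# Simulated 160x120 thermal frame (raw sensor counts)
rng = np.random.default_rng(0)
frame = rng.integers(27000, 32000, (120, 160), dtype=np.uint16)
ctrl = thermal_to_control_image(frame)
```

The resulting 8-bit image spans the full 0-255 range, which is roughly what depth-style preprocessors hand to an image-to-image model.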
But with that sensor, can you get “normal” video as well? We do see in the camera monitor that we aren’t getting thermal vision. Unless they’re splitting it?
@@TheoreticallyMedia It is used to provide more metadata that can be used for different kinds of video enhancement. For example, it can be used to color grade the picture to look more lifelike, or to get a better sense of depth (especially in darker videos). It's all about combining metadata to enhance the video and/or image you're taking.
I believe that's an Angenieux C-mount lens they're putting on the camera in the video. I own a couple of those 16mm lenses, and it looks virtually identical to the ones that I own.
Oh cool! A few others have name dropped the Angenieux as well, so I think we've got a confirm there. I actually just reached out to them for a possible follow up so we can shine some light on those other blind spots!
I really don't see a use for this. There's so much value and quality in post that it doesn't seem like a real need. It would be like a camera company releasing a camera that can color correct, or a camera that can edit. Some things should just be left for post. And with the ability to shoot on set with CGI capabilities - that would seem like the route it should or will go.
Don’t disagree. I see this more as an interesting experiment, or maybe even a niche gimmick camera. Remember the video camera that shot onto analog tape? Kind of a weird effect that someone like David Lynch would have a lot of fun with.
@@TheoreticallyMedia David Lynch would find a use case for anything, haha. I'm willing to bet a camera like this will fetch a huge price 20 years from now - there will be a handful released to the public and it will become some sort of cultish AI collectable. Which reminds me, I think I have a ton of short ends from the films I did back in the 2000s. I always struck a deal with the AC after wrap to grab short ends. Shot some cool shorts on those! Oh, the days!! haha
AI image generation or manipulation may be very difficult to predict. Being able to preview while you’re shooting and adjust your shooting session at the moment may be very useful.
@@TheoreticallyMedia But this really adds nothing that you couldn't accomplish with a separate Stable Diffusion box and whatever camera you choose... seems really limiting and really gimmicky for what it is
@@ValdemarDeMatos Oh, I agree, but this can be done without the camera and the AI generator being one device. Just like when shooting with CGI - that's done with a separate setup. I see the tech being useful the same way CGI is used today - on set.
The first? I've been a full-time professional photographer for my entire adult life (40 years pro and counting). And I've never used a camera that didn't have "intelligent" means of manipulating images.
Literally NOBODY wants this garbage, and they made a whole camera around it. Some investors will throw money at useless stuff like this. Like the AI pins: who on earth thought that was a good idea worth a $200 million investment… some dumb people
It's basically an art project. I'll hazard a guess that the box is a PC with an RTX in there, running an interface into SD 1.5 with AnimateDiff. Clever stuff and just a fun project.
I somehow missed it in the edit, but it's running a Snapdragon processor. And yeah, this isn't a commercial project or anything. Totally an interesting and fun project, but I'm hoping it will spark some innovation for the future!
This company shouldn't exist. They think they're finding solutions to problems that don't exist. The investors who put money into this could just as easily have burned it. This makes me so mad!
Are you trolling us Tim? As you said, we can already do all of this out in the field with our phones (or a pro camera) and no keyboard required. Shoot the video, process it through your browser on your phone, tablet, laptop, done. This is ridiculous. Fun, but far from practical. Maybe these guys should have shot all their footage while riding a penny farthing.
Haha, not trolling, I promise! Agreed on the phone point (and mentioned at the back half of the video)-- but I also kind of love the dedicated device approach. I don't think this will have mass-appeal, but I do think there are aspects of it that might eventually find their way into pro-cameras. At the end of the day, I saw this thing and was like: This is so weird I have to make a video about it! The Penny Farthing comment had me rolling!! SO GOOD!
Okay, this is neat, but I think I have to agree with a past comment. The only value I see in this is being able to use the stylization effects, which are not super necessary because what it's doing is just post-production work. I'm trying to think of a use case where this sort of thing would be needed, and the only thing that comes to mind is shooting conceptual videos, not any high-level serious productions. If this were to go commercial, I think it would find its home among indie filmmakers and may even carve out a niche of its own. As weird as it is with AI doing all that morphing, it's also actually kind of interesting. Wait... I got it. Horror. This is going to find a home in horror. Especially with all the morphing. I am calling it now. It would do well. If you thought Eldritch abominations were freaky, just wait until they change color, shape, size, etc. from frame to frame lol.
Oh, man-- I want to see that movie now! Totally agree as well. I can see it being something like the Fisher Price PXL-2000-- which was a video camera that recorded onto analog audio tape. The results were weird, distorted, and hella dark. BUT-- also super cool! Totally relegated to experimental filmmaking, but there's also nothing wrong with that! Sometimes things just need to exist because they can.
Defending Marques...sort of. As soon as the Rabbit and Humane Pin were announced, I wondered why anyone would be interested in them. They added nothing to what I could already do with my phone except obvious problems and limitations. I did not understand the hype or the widespread appeal. I guess the masses don't know the potential of their current devices and are easily swayed by interesting industrial design?
Oh, the Marques thing was a total gag on my part. I followed that whole thing from the Rabbit Hype launch, to MKBHD's takedown, to Coffeezilla straight up committing murder. It was one of the best rollercoasters I've ever been on!
This is such a profoundly stupid idea. It takes two cool things, AI style transfer and retro-inspired industrial design, and combines them into something that is smaller than the sum of its parts. I get why you would add analog buttons or fake cartridges to something, but from a user experience standpoint the execution makes it complicated, clunky, and not fun, for something that could achieve the same results with a better experience on any modern smartphone. You're getting an unergonomic, low-quality camera and post effects that don't need the camera to work and need time to process, when you could just use a good camera and process in post. Am I too harsh?
I mean, it's all fair to be sure. And maybe if they continue to iterate on the idea, we'll see different forms. I totally agree (and I think mention in the video) that all this stuff could be done on a phone-- but, there is something about using the physicality of a real camera that has merit. Like, one of the best feeling cameras of all time (imo) was the Canon XL-1. I mean, it is totally obsolete at this point-- and nowhere near modern specs, but man-- do I miss holding and shooting with that thing.
Hm. I can see this as an app for my phone or some sort of toolbox to connect to my HDMI. But a whole concept around a physical camera? I don't know... I wouldn't shoot that much stylized footage, so I wouldn't invest in a new camera just for the sake of this. It's cool, though. And it will find its uses if we could apply it to the gear we already have.
There are already a handful of apps that will do similar things-- For example, Skyglass will do roto/background replace on your phone in real time (although it does a final render in the cloud)-- and potentially, since you can attach pro lenses to your phone, you could build out something similar. That said, there is still something cool about a dedicated form factor like this.
It's plausible that the NFC cards aren't actually being used to store the LoRAs themselves, just a reference to which one (stored on the main device's internal drive) should be selected. The cards then act as a form of user interface, equivalent to having many magically updatable LoRA-selection buttons on the device, but ones you can keep adding to. Some sections of the retro gaming scene have been doing this too - the NFC cards don't store the games, they just provide an old-school way to select which game to play, giving people some of the physicality of inserting a cartridge without the actual data living on it. It's probably not practical to store the actual LoRAs on an NFC card, but even cheap NFCs have plenty of capacity to store the path to one.
Ahhh. Brilliant. Didn’t know that about the retro gaming crew either. It’s funny, my immediate thought on that side is that it’s almost a spiritual successor to putting quarters in a machine. And I guess in the era of ROMs, where you can put literally every game made in the 80s/90s onto one flash drive, that idea makes sense! Thanks for the comment!
Your idea is correct. NFC chips cannot hold LoRas due to memory limitations (512 bytes). Another downside is the speed of NFC communications for large data. It's interesting they want to go this route, but highly impractical. Also, the GPU processing needed to run locally is huge so local processing would be a problem.
The reason they use NFC instead of SIM is probably that NFC does not have physical contact points that are prone to corrosion and other kinds of trouble. Also it is cheaper on both the card and reader side.
Maybe? But also better in ways. For example, I think the Insta360 has a time limit cap on shot length (last I saw was like, 4 seconds...but I think that was a bug at launch)-- and with the Insta, you're locked into their looks-- whereas with this, you can add in your own LoRas. (Basically, you can make your own styles)
Haha, see, I go through phases of collecting weird contraptions. I think that's the musician side of me. I'll have tons of stupid guitar pedals and MIDI controllers. And then realize I don't use half of them because I do most everything in software, sell them, and then cycle back through the process of collecting because I miss turning knobs. I know... it's stupid. I've accepted it.
This has potential, provided that everything is also built-in on-device, without the need for an internet connection. And, of course, it needs professional models such as SORA or Stable Diffusion 4 along with excellent temporal coherence. Also precise generative AI control like ControlNet. For example segmentation is a must for this thing for quick background removal, character swap or easy editing.
I get you. I mean, I think the benefit here is just: "we made it"-- Again, this isn't a commercial project, and more of a prototype/experiment. To that, I think it's a worthy first step.
His name is pronounced "Mar-kez," not "Mar-keys." Also, you should edit out all of your "ya knows" or work at avoiding filler words when presenting. I counted 24 in the transcript. "Like" 25 times and "I mean" 11. Ooh, you also said "first foray."
Haha, yeah, I caught that Mar-Keys in the edit. Eh, I do filler and flubs all the time. This is more like hanging out with me than a professional newscast. I'm just a guy producing 30 to 45 minutes of content a week-- stopping down for every flub isn't attainable, so I just focus on trying to get the informational details right. Also, people yell at me for not looking directly at the camera the whole video, but that always makes me feel like a serial killer.
Seems like a lot of money spent on achieving AI augmented captured video footage that seems could have been done for much, much, much, much, much, much, much less money with tools that already exist. 🤔
Haha, totally agree. But, hey-- fun experiment. Sometimes someone needs to do a thing just to do it. (and...maybe burn a lot of money in the process...)
Might as well record at a higher speed to get 24fps out of the AI footage. 48fps should fix that problem: slow it down to 24fps, which would leave the AI footage at 12fps, then speed it back up to 24fps. I mean, it's a no-brainer.
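The arithmetic behind that suggestion can be sketched out. This is an illustration of the commenter's idea, assuming (as the video suggests) that the AI pass is fixed at 12 fps: overcrank at capture time, then speed the processed result back up to real time.

```python
def retimed_output_fps(capture_fps: float, delivery_fps: float, ai_fps: float) -> float:
    """Overcrank at capture_fps, run the fixed-rate AI pass (ai_fps),
    then speed the result back up to real time. The speed-up factor
    is capture_fps / delivery_fps."""
    speedup = capture_fps / delivery_fps
    return ai_fps * speedup

# Shoot at 48 fps for 24 fps delivery; the AI emits 12 fps,
# and the 2x speed-up brings the output back to 24 fps.
effective = retimed_output_fps(48, 24, 12)
```

Whether the camera's pipeline actually allows this kind of retiming is an open question; the math just shows why 48 fps is the number the comment lands on.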
I do think this direction, especially for film stock emulation, is great. However, this is the same quality, if not worse, than on a PC, but with less control. I guess it's like a point-and-shoot in a way, but AI progresses so much and so quickly that it's foolish to hardwire the current generation into a camera.
The lens is an Angenieux Retrofocus. They are a highly regarded maker. I’m guessing they are using the FLIR to measure depth, not heat. Interesting ideas. Thanks for the great channel
Thank you so much!! So, I did also reach out to them for a possible follow-up on the other blind spots: the FLIR sensor and the NFC card. Hopefully we can arrange something and I'll get us answers to those questions as well!
Glitching and morphing Stable Diffusion video-to-video in a big expensive retro hardware box... that's going to age like milk, especially now that state-of-the-art video generation tools are way ahead of what SD can do.
Why not just use an AI video generator for the same result? Feels like a scam, just like those other AI gadgets. Honestly, I bet there are plenty of gullible folks out there buying this stuff, just like with NFTs
I get it now. It is the precursor to an iPhone with the following feature... a menu with choices like: "MAKE A FEATURE LENGTH MOVIE ABOUT YOUR PET AS A SUPERHERO." Instructions: "PRESS RECORD AND KEEP YOUR PET IN FRAME FOR AT LEAST 10 SECONDS." Then select "MAKE MOVIE." Wait 30 seconds while it says "DREAMING... DREAMING." You are now ready to watch your very own feature length film - enjoy!
Was this company started by someone who always puts "First" as a comment under videos? Because what this company seems to keep doing is making a completely unnecessary device to do something just so they can say they were the first to do it. What is the benefit of hardware that isn't as good as what people can shoot on their phones, when all it does is send that low quality video to a cloud that people could send things to from their phone?
I hope I don't need to explain why a FLIR mid-infrared microbolometer sensor will see absolutely nothing whatsoever with conventional GLASS lenses in front of it.... vaporware / scam / fake / you pick the descriptor here.
oh, not at all. Just being jokey! I followed that whole saga from the Rabbit Hype Launch, to MKBHD's take down, to Coffeezilla straight up committing murder. It was an amazing ride.
Great video as usual. At 2:57 you make reference to 'the door where Sam Altman keeps the thing that Ilia saw'. Given the context of Sam Altman and Ilia Sutskever both being associated with OpenAI, can you tell me what you are referring to. Was it something like: A breakthrough or innovation in AI: Ilia Sutskever might have discovered or envisioned a significant development in AI that Sam Altman is ensuring remains secure or is being developed under strict confidentiality. or A strategic decision or plan: Ilia may have identified a strategic opportunity or risk that Sam Altman is keeping under wraps for the benefit or strategy of OpenAI. or A metaphorical reference: Could it be more figurative, implying that Sam Altman is safeguarding an idea, vision, or a piece of knowledge that Ilia Sutskever brought to light. Or was it something else completely that I've missed in any AI news. Thanks.
'Wild!', 'pretty crazy', 'weird' - summarized that way, these keywords also make sense. But 'This (...) Camera Is Wild!' isn't really true, especially if you rate it as probably not useful. The potential of a number of cameras was wild in their context, but they weren't there yet. And there's always a reason for a design and form factor, but as a reminiscence of an old product (which certainly had less scope), the reasoning is nostalgia rather than handling.
Could we please stop calling it AI? ChatGPT, OpenAI, Stable Diffusion, etc. are all still at the level of machine learning, not AI. A general AI would be revolutionary, but there's a good reason it can only regurgitate atm.
This has no future. You can compare this 1:1 with today's cameras and post-processing. Today's cameras could have strong post-processing settings built in, but they don't, because it's always better to capture the RAWest footage and edit it in post later. Having AI mounted in your device will make it more expensive, and you probably won't even use 99% of the available settings. The only place real-time AI processing will find a use on the device itself might be phone apps or something for your webcam.
Ergonomic disasterpiece. However, shooting stuff with this thing could definitely inspire some weird post VFX ideas kinda like you mentioned with the "lighting in-camera" comment. Strange times. Great work talking through it all though :) thanks man
That quote seems like something I would say, so if they're anything like me, I get it. To design the actual tech of the future, creating a prototype to play with lets you find design and usage needs by actually doing the thing you're designing the tech for. By going out and filming with this camera, I might come up with new ideas on how AI could be integrated into filmmaking in the future. The weird design would just be because prototypes can be ugly and boring, but if you have enough money, why not make an art project out of it and do something you think is cool? And again, that's just the way I think. I wouldn't build this yet, because the specs would just be too low for me to have fun creating with it, so I probably wouldn't get many new ideas out of using it. "Sometimes to imagine what the future might be like you have to prototype it... The CMR-M1 is a prototype for how hands-on creators will use AI."
I'm giving this a break because it seems more concept-art futurism than a real AI camera. There wouldn't be any point in doing AI on, or streaming to, the camera. What you'd really want the camera to be doing is recording data with lidar, depth perception, luminosity, etc., so that when you apply AI later you have as much data from the scene as possible as a canvas to work on. I think the reason it's using IR is that it's lower res, and the output of IR is similar to image preprocessors like Canny or depth mapping, so I assume it speeds up the process because it can probably work without an image preprocessor, which is an interesting hack. The use of NFC chip cards is just a style choice - much less practical than an SD card, but it looks cool. Everyone just needs to chill a bit, it's a fun device. 🤣
They look pretty cheap and tacky like the first airplanes from the Wright brothers, but I rather think they're onto something very, very big if a deep-pocketed developer takes over and truly pushes the capabilities, precision and utility of these a.i. devices to produce jumbo-jetliners of a.i. devices for big budget commercial use.
The one thing I would like to see from Apple without a 3rd party app is video recording a depth map as a separate track. And also NeRF 3D tracking data.
Haha, I mean, can’t argue, other than: someone had to be the first one to try it. To be honest, I’m glad it’s these guys. For sure there were probably others who would have tried to sell it as a Kickstarter or some other crowdfunded plane crash. These guys seem to have decided to do it just for the experimentation and novelty of it all.
@@TheoreticallyMedia If it was giving the footage a fantastic, cinematic look, I would be screaming "take my money". But making it look like stable diffusion just left me scratching my head. 😕
@@icvideoservices haha, well- they gotta start somewhere. I do think that down the road, we’ll get the camera you’re looking for. It’ll be interesting- possibly image referencing a scene and then having that style applied to your footage in real time. I mean, you can almost do it with stills currently, so it’s only a matter of time.
Yes, the new internet "this & that subscription, pay forever" formula messed up the world a bit for creators; it seems like we can't own any product these days. If a company decides tomorrow to shut down for whatever reason... we're fkd! I must mention that I'm really tired of seeing all these codes, it's so ugly for an artistic person. However, the reality is that AI is here to stay.
Great video, but the camera is utter dogshit. I would be more impressed if it filmed stuff and put segmentation maps onto everything it filmed; that way you could put it into something like ComfyUI and get mapped footage for editing, controlled with AI morphing. Considering it still uses NFC chips it just feels silly, and the fact that it comes preloaded with like 5 LoRAs is really stupid, because I can't imagine slotting in LoRA after LoRA when I use like 30 on a daily basis for different tasks. The camera films in infrared because it creates a depth map, which is a pretty interesting method, but I think it's inaccurate: infrared reads temperature, so some objects won't be defined well depending on the material. Overall the idea seems promising, but the way they engineered it seems really stupid and uncomfortable. The only cool part is using FLIR to create a basic depth map.
This is going where no movie camera has ventured before. Remember, the first video cam was the size of a ghetto blaster cassette player and had to be carried on your shoulder. From that pioneering effort we got today's digital video SLRs and iPhones. As the old world crumbles, a new one rises to take movie making to new frontiers. The dream merchants never fail to surprise and entertain, come what may.
Why should the post-production process of video, which is a trial-and-error process guided by the director's instructions (whether or not AI technology is used), have to be built into a particular camera and hauled all the way out to the shooting location? To me, that seems like an inconvenience and a disadvantage.
You’ve always been so well spoken in your videos, and now the editing feels cleaner and more in vibe with your online personality. I’m loving each new upload because of it. Keep it up, good sir; always a pleasure to have you on alert 🫡
Sounds like nothing you couldn't do in post; and if it doesn't also save the original untouched footage, you're stuck with whatever first result you get. Seems like just one of those artsy-fartsy machines that do things in a weird, unideal way on purpose and get away with it by saying it's art.
Yeahhhh... We're definitely too early for this. This product is not ready for any level of commercial use. This thing is basically just a phone that sends the video to the cloud to get AI processed.
What a stupid pointless item. It's just a fancy box for software. This is not an "AI camera" whatever that might mean. It's a camera attached to a shitty computer that runs an AI app.
This camera’s quality can be compared to early glass plate cameras on tripods, which ultimately turned into the iPhone. I will wait till it becomes as easy as an iPhone.