Video Tech Explained is a show dedicated to explaining the technology, concepts, and standards that make up the modern world of digital video production. From the smallest YouTuber up to big-budget Hollywood productions, digital video has revolutionized the way we create and share content with the world.
Come along for the ride as I share what I've learned about this fascinating topic.
== My gear ==
Cameras:
- Sony a7S III
- Sony a6400
- GoPro Hero 8 Black
Lenses:
- Sigma 24-70mm F/2.8 DG DN Art
- Sigma 30mm F/1.4
- Sony 16-50mm F/3.5-5.6
- Sony 55-210mm F/4.5-6.3
I was actually in my editing application and found many options for color spaces. When I changed the color space, it said it wasn't supported, so I got curious, searched, and found your video. Thank you so much, I understood everything.
Thanks for the video. I used it to check whether I made my DCP in Resolve correctly. I have one question though, as my video file is Rec.709 with gamma 2.4: do I need to change the gamma at all, or does Resolve also do that automatically?
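For anyone wondering what that gamma change involves: a DCP is mastered with the DCI gamma-2.6 transfer, so a gamma-2.4 source has to be linearized and re-encoded, which Resolve's DCP pipeline normally handles when color management is set up correctly. A minimal sketch of just the gamma remap, assuming pure power-law transfers and ignoring the RGB-to-XYZ matrix step a real DCP conversion also needs:

```python
import numpy as np

def gamma24_to_dci26(encoded):
    """Re-encode a gamma-2.4 signal with the DCI gamma-2.6 transfer.

    Assumes pure power-law transfer functions on normalized [0, 1]
    code values. A real Rec.709 -> DCP conversion also needs the
    RGB -> XYZ colorimetric matrix step, which is omitted here.
    """
    encoded = np.clip(np.asarray(encoded, dtype=float), 0.0, 1.0)
    linear = encoded ** 2.4           # decode gamma 2.4 to linear light
    return linear ** (1.0 / 2.6)      # re-encode with DCI's gamma 2.6

# Mid-tones shift slightly; black and white stay pinned at 0 and 1.
print(gamma24_to_dci26([0.0, 0.18, 0.5, 1.0]))
```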
I wish you could've shown more of the difference between the color spaces, and the benefit of the gained range, rather than just "it'd be more expensive." Great video otherwise.
Well done. I've been a professional colorist for years... Now I teach color at a university in the film department... I was racking my brain for a good way to explain color spaces without overwhelming my students. This has helped exponentially.
It's kinda cool that metamerism isn't actually a phenomenon of physical light, but a limitation of our 'three cone' based measurement tools :) While it makes sense that we use RGB sensors to account for this, I kinda wonder how useful it would be to use a different architecture to capture the light spectrum, so the data could actually differentiate a spectral 'yellow' yellow from a yellow created by red/green pollution. Things like scene light pollution could then be targeted directly in colour finishing without affecting something that is actually supposed to be that colour.
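That difference between a spectral yellow and a red+green yellow is easy to demonstrate numerically. A toy sketch, where the Gaussian sensor sensitivities and both spectra are made-up illustrative values rather than measured data:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)  # wavelengths in nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy RGB sensor sensitivities (illustrative Gaussians, not any real
# camera's measured curves).
sens = {"R": band(600, 25), "G": band(550, 25), "B": band(450, 25)}

# Spectrum A: a narrow "spectral yellow" line near 575 nm.
spectrum_a = band(575, 8)
# Spectrum B: separate red and green lines with no energy near 575 nm,
# weights hand-tuned so the sensor responses roughly match spectrum A.
spectrum_b = 0.629 * band(620, 10) + 0.536 * band(540, 10)

def response(spectrum):
    # Crude numerical integration of spectrum x sensitivity (5 nm step).
    return {c: (spectrum * s).sum() * 5.0 for c, s in sens.items()}

print("spectral yellow:", {c: round(v, 2) for c, v in response(spectrum_a).items()})
print("red + green mix:", {c: round(v, 2) for c, v in response(spectrum_b).items()})
# Two physically different spectra, nearly identical RGB triplets --
# the sensor alone can't tell them apart. That is metamerism.
```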
There's one more complication to add to the concept as well. A video stream contains audio and visual data, and the visual information is made up of not just luminance data (luma) but also saturation and hue data (chromaticity). So choosing a gamma curve (or more accurately, a set of OETFs and EOTFs) such as Sony's S-Log3 will certainly improve the /luma/ range you can capture, but will not necessarily guarantee colour /depth/ at the extremes of the luma range, where the response curve is compressed.

This is why bit depth and chroma subsampling are also important: bit depth gives you a larger 'box' to store these extreme values in without having to clip or scale them down to fit a smaller 'box', while chroma subsampling determines how much of that chromaticity data is actually recorded alongside the luma. This is often why during shadow recovery you can certainly recover the spatial information in the scene's shadows, but it often comes back with markedly lower saturation or even skewed/inaccurate hue values.

Every camera sensor at every price point has an 'optimal' dynamic range where a minimal amount of this distortion happens, which is often a lot smaller than the maximum 'dynamic range' possible. So keep that in mind when comparing spec sheets on camera systems ;)
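To make the OETF part concrete, here is the encode side of that curve, following the formula in Sony's published S-Log3 white paper; a sketch where `x` is scene-linear reflectance with 0.18 as mid grey:

```python
import numpy as np

def slog3_oetf(x):
    """Sony S-Log3 OETF: scene-linear reflectance -> normalized code value.

    Piecewise curve per Sony's S-Log3 white paper: logarithmic above a
    small linear toe. Mid grey (x = 0.18) lands around code value ~0.41,
    leaving headroom for highlights many stops above mid grey.
    """
    x = np.asarray(x, dtype=float)
    log_part = (420.0 + np.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
    lin_part = (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0
    return np.where(x >= 0.01125, log_part, lin_part)

# Equal steps in code value near the top of the curve span far more
# linear light than equal steps near the bottom -- which is exactly
# where bit depth matters: with too few code values, those compressed
# highlight and shadow regions band or fall apart when graded.
for stops in (-6, -3, 0, 3, 6):
    x = 0.18 * 2.0 ** stops
    print(f"{stops:+d} stops from grey -> code {slog3_oetf(x):.3f}")
```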
As a person who's been working with film since the 1990s, I appreciate seeing the film grain on 4K scans of Super 8 and 16mm films, some slight scratches, splices, and artifacts from optical printing. Aggressive denoising really gets under my skin. Can't wait for 8K and more, maybe a VR experience where you can put your head directly into the celluloid and emerge covered in it like a fat baby having a milk bath.
Being a gamer as well as someone who does video stuff, I landed on a Samsung Odyssey Neo G7 with 1,196 backlight zones, 4K resolution, 165 Hz, 10-bit color, "2000" nits peak brightness, and HDR support. It does what I need for both, though I can't seem to get DaVinci Resolve to recognize it as an HDR display.
A 2K digital scan of film will resolve more detail than digital video shot in 2K. Just as 8K video downscaled to 2K resolves more detail than video shot in 2K. That's why the final resolution isn't as important as people think, and why 1080p Blu-ray movies still look so good.
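One piece of why that's true is easy to simulate: downscaling averages neighbouring photosites, so random sensor noise shrinks while real detail, which neighbours agree on, survives. A toy sketch with a made-up flat synthetic "capture" and a naive 4x4 box downscale standing in for a proper resampling filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-res capture: a flat grey scene plus per-pixel sensor noise.
scene = 0.5
noise_std = 0.05
high_res = scene + rng.normal(0.0, noise_std, size=(512, 512))

# Naive 4x4 box downscale (a real pipeline would use a better filter,
# but the averaging is what drives the noise improvement).
low_res = high_res.reshape(128, 4, 128, 4).mean(axis=(1, 3))

print(f"noise at capture resolution: {high_res.std():.4f}")
print(f"noise after 4x downscale:    {low_res.std():.4f}")
# Averaging n independent samples divides random noise by sqrt(n): a
# 4x4 average (16 pixels) cuts the standard deviation by ~4, which is
# why oversampled footage downscaled to delivery resolution looks
# cleaner than footage captured natively at that resolution.
```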
I used to work on the ground in movie production, and he is partially right. In 2007, film was scanned on the ARRISCAN at 2K. In 2010, the ARRI Alexa was introduced, which had a 2K-resolution sensor. By 2015, most movies were shot on the Alexa Mini or Alexa SXT, our new standard was 4K LogC3 (Rec.2020), and film got scanned at 4K or even 6K. Then I stopped working and am not aware of the latest standards in movie production. If he had made his video in 2014 it would have been somewhat accurate, but it's definitely outdated for 2024.
A digital video shot at 2K with a three-chip CCD setup will have the same quality as a 2K scan of film. A video shot using a 2K Bayer sensor will obviously have ~30% less resolution. So video shot on an ARRI Alexa at 3.5K will resolve 2K perfectly. This is also why the FX30 (6K sensor downscaled to 4K video) gets cleaner images (if the subject is well lit) than the FX3 (native 4K sensor). That said, this only matters in a movie theater or for a home cinema enthusiast watching on Blu-ray. Most people are streaming, often on a phone or laptop, and the bandwidth limitations can barely resolve 480p in moving shots. So unless your entire movie is locked-off shots in front of a static background, most people won't even be getting proper 2K.
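The ~30% figure is a rule of thumb, but the mechanism behind it is easy to demonstrate: a Bayer sensor records only one colour per photosite and interpolates the rest, while a three-chip camera records all three everywhere. A toy sketch with a synthetic detail image and a naive bilinear demosaic, both stand-ins for real hardware and real demosaic algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv3(a, k):
    # 3x3 convolution with edge padding, pure NumPy.
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

# Ground-truth achromatic scene with fine random detail: a three-chip
# camera would record it in full in all of R, G and B.
truth = rng.random((256, 256))
h, w = truth.shape

# RGGB Bayer mosaic: each photosite records only one channel.
r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
g_mask = 1 - r_mask - b_mask

kernel = np.ones((3, 3))

def demosaic(mask):
    # Keep recorded samples; fill the gaps with a neighbour average.
    filled = conv3(truth * mask, kernel) / conv3(mask, kernel)
    return np.where(mask > 0, truth, filled)

for name, mask in [("R", r_mask), ("G", g_mask), ("B", b_mask)]:
    err = np.abs(demosaic(mask) - truth).mean()
    print(f"{name}: mean reconstruction error {err:.3f}")
# Interpolation can't recover detail between samples (R and B, with a
# quarter of the sites each, fare worse than G with half), which is
# where the "a Bayer sensor resolves well under its photosite count"
# rule of thumb comes from. Oversampling then downscaling fills that
# gap, which is the FX30-style advantage described above.
```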