Very much appreciate the time and energy you took putting this very understandable video together. I'm just diving into gamma, color spaces, etc. and have yet to shoot a video, but that will happen in the future. From what I've learned in the last week or so, one statement caught my ear (although I know what you mean): "In real life there are colors out there. There's green, etc." Again, sorry about this, but from what I have learned, there are no colors out there, just spectra. Colors are names humans give to spectra to describe them. Silly technical point, but the video is so good that at least I'm listening closely. I can honestly say, with the reading and watching I've done over the last few days, this entire video made sense even though I have no "learned" experience with any of this. Really appreciate the detailed and simplified view. Very helpful. I hope you have some videos on getting started with DaVinci Resolve and how to find the values needed (i.e. Nikon Z9 video, LUTs, etc.). Very nicely done. If you don't have one already, a video on minimum system requirements to work on such video would be sweet!

I do have one basic question (if you come back here and check some day): Is the thing you suggest at the end, working in a bigger color space and then squeezing the information down into a smaller one (i.e. Rec. 709), a desirable thing? The reason I ask is two-fold. 1st: As a new photographer, I've frequently heard that picking a color space like Adobe RGB when your output devices are sRGB can lead to undesired color shifts (i.e. math rounding errors) and is not worth the hassle. This point was also made with respect to the gamuts of the camera, the software (i.e. Photoshop), and the monitor potentially not overlapping even when you are all using the same space, even sRGB. So, in photography and video, is working in the bigger space worth it? Does it depend on the intended output display? 2nd: Does working in a larger color space benefit from a higher bit depth? Lastly, can you recommend any of your videos (or others') for a beginner getting started with video?
There are a lot of questions in here and I'll try to answer what I can recall. 1. Colors are a thing we see, and I'm using the term in a colloquial sense. I'm not going to turn this video into a philosophical debate about blue actually being only a reflection of a certain goobadygoob, and the object actually being every color except the one we see, etc., as that's not the point of the vid. 2. Shooting and working in a large color space has advantages, since capture retains lots of information that you can then edit in a program later. Working in larger color spaces tends to allow more room to make micro adjustments to dial in the exact look you want for the final output. When you view the image while working, it should be presented in the actual output color space, and all of the large-gamut math happens "under the hood," meaning you don't see it... you only see the output. Make sure your output file is in sRGB for web. You can output Adobe RGB for prints if your monitor is correctly calibrated and able to display it, and if the printer can properly print Adobe RGB. 3. Higher bit-depth capture is always desirable if you're looking for the most flexibility in post-production. In video, 12-bit log-encoded raw is usually more than enough data, as it's practically the same as 16-bit linear. In photo, 14-bit linear is enough for the vast majority of photographers.
@@TonyDae Tony, thanks for the response and getting back to me. Really appreciate it. One last quick (yeah, right) question: Is shooting video in 8-bit H.264 or equivalent like shooting in JPEG? That is, are the color space and gamma "baked" into the codec (i.e. compared to working in video RAW, S-RAW, C-RAW, N-RAW)? Again, thank you for your kind response and time. I'll make sure to follow the other videos as they apply.
Yes, that's a good way to think about it. 8-bit gives far less room to edit than 10-bit, 12-bit far more room than 10, etc. The number of values doubles with every added bit (2^bits), so for example 8-bit has 256 values per color channel while 10-bit has 1024. The difference is absolutely huge. But also, real raw files are different from H.264 because they don't have an established color balance baked in. A 12-bit raw should always provide more room to correct than a theoretical 12-bit H.264 because of how the data is stored.
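A quick sketch of the arithmetic behind that reply (the values-double-per-bit relationship is standard; the function name here is just illustrative):

```python
# Tonal values available per color channel at a given bit depth.
# Each extra bit doubles the count: levels = 2 ** bits.
def levels_per_channel(bits: int) -> int:
    return 2 ** bits

for bits in (8, 10, 12):
    print(f"{bits}-bit: {levels_per_channel(bits)} values per channel")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096
```

So going from 8-bit to 10-bit is not a 25% bump but a 4x jump in values per channel, which is why the editing headroom difference feels so large.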
Hello, I have a question about calculating lens focal length (mm) in relation to the m4/3 crop factor and the Rokinon lenses. Should I multiply the mm by 2x (for example, a 24mm becomes 48mm) and then by 0.71? In other words, is a 24mm equivalent to a 35mm, a 35mm to a 50mm, a 50mm to a 70mm, and an 85mm to a 120mm?
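The multiplication in the question can be checked directly. This sketch assumes a 2.0x crop factor for Micro Four Thirds and a 0.71x focal reducer (e.g. a Speed Booster); the function name is illustrative:

```python
# Full-frame-equivalent focal length on Micro Four Thirds
# when a 0.71x focal reducer sits between lens and body.
CROP_FACTOR = 2.0   # assumed m4/3 crop factor
REDUCER = 0.71      # assumed focal reducer factor

def equivalent_focal(native_mm: float) -> float:
    # The reducer shortens the focal length first, then the crop applies.
    return native_mm * REDUCER * CROP_FACTOR

for mm in (24, 35, 50, 85):
    print(f"{mm}mm -> ~{equivalent_focal(mm):.0f}mm full-frame equivalent")
# 24mm -> ~34mm, 35mm -> ~50mm, 50mm -> ~71mm, 85mm -> ~121mm
```

Since 0.71 × 2.0 ≈ 1.42, the equivalents in the question (24→~35, 35→~50, 50→~70, 85→~120) come out about right.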
Really helpful, thanks. I am getting the same problem with a DaVinci Wide Gamut timeline, after importing Canon Log from a Canon EOS R. Any suggestions?
Thanks for the video, not many people talk about this feature. I also have a fairly important question about this: I had discovered "bypass color management" and did all the color grading on a roughly 30-minute project. The problem that came up is the export; it is not possible to export the project with all the corrections and color grading that I have done. Is there a way to do this even when the "bypass color management" function is active for each clip? Inside Resolve everything looks very beautiful and well done, but the exported video is faded and lifeless, as if I hadn't done anything. Thanks in advance 🙏
Thanks for the video, so helpful! The only thing is I can't find output tone mapping in DaVinci 18.6; the settings are slightly different. Any suggestions? Thank you 🙏🏼
Beautiful... and you are correct... newbies like me would easily think wrong, as I did recently... I made a mistake and too much white was recorded... but I did correct myself and all is well now... but yes, the tools need to be used correctly... thanks... good one...
Missed this when you first published it. Agree with your findings. I went with the GM f1.4 primarily for size and weight. Love the f1.2GM, it's special but I know I will use the smaller GM more just because it's easier to handle and carry.
It's a way to convert, but it is not the same as a color space transform. LUTs are an individual's interpretation of the color given by a specific camera model, lenses, etc., and generally the goal is to provide nice colors. Sometimes the result isn't what you expect, because your camera and lenses will be different from whatever was used for the LUT's creation. A conversion using color space transforms is purely mathematical, without any personal interpretation or beautification of the color. Neither is necessarily better than the other, just different.
I work a lot with BRAW because of its smaller size compared to ProRes; it's also the only way to get access to the full 12-bit dynamic range of the BMPCC4K sensor. I'm also using ACES with OpenEXR as an intermediate for my CGI; it's well implemented in Blender and Substance Painter and Designer. The conversion process was painful in the beginning, but once your final workflow is fully ACES, it becomes easier to export for Dolby Vision, or even Netflix deliverables, or regular DCP and Rec. 709.
I do not want my images processed or stored online; AIs will end up stealing them to get "smarter." Also, the faces of people in my images should not be on the internet. We have to stop giving big tech our private information.
Hi! My DaVinci is 18.6 and I can't find ARRI Alexa (output color space) or ARRI LogC (output gamma). When I add a Color Space Transform, there are only ARRI Wide Gamut 4 and ARRI LogC4 (gamma).
What if I shoot at ISO 1000 to expose for the highlights, leave it overexposed, and bring the gain down later? Won't that give me the highlight details and clean up the noise? Essentially a combo of high ISO and ETTR?