Funny how the development so closely mirrors the development of AV1. I wonder in what way H.266 could differentiate itself from AV1. Also, why is H.265 barely used at this moment? Is it purely because of the difficulty of licensing the codec?
One of the differences is the size of the video: H.266 targets roughly 50% smaller files at the same perceived quality. For example, a Full HD movie could be 700 MB instead of 1.4 GB with no visible loss in quality. I love compression. One day big video files will be tiny with amazing quality. Better quality at lower sizes, baby!
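The size arithmetic behind "700 MB instead of 1.4 GB" can be sketched directly; the bitrates below are illustrative assumptions picked to land near those figures, not measured H.265/H.266 values:

```python
# Rough sketch of file size from bitrate and duration.
# Bitrates are hypothetical, chosen only to illustrate the halving claim.

def file_size_gb(bitrate_mbps: float, duration_min: float) -> float:
    """File size in GB for a constant-bitrate stream."""
    bits = bitrate_mbps * 1e6 * duration_min * 60
    return bits / 8 / 1e9

h265_size = file_size_gb(bitrate_mbps=1.6, duration_min=120)  # ~1.44 GB
h266_size = file_size_gb(bitrate_mbps=0.8, duration_min=120)  # ~0.72 GB at half the bitrate
print(round(h265_size, 2), round(h266_size, 2))
```

Halving the bitrate halves the file size for the same runtime; the codec's job is to do that without a visible drop in quality.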
Hi! Thanks for the shared knowledge! Do you know of a community where I can find more people to talk to about this? I am working on a project involving this content, and it will be very important for me to learn more about the world of video coding.
I remotely edit video in Apple Final Cut Pro. Of the tools I tried, I liked Splashtop and Acronis Cyber Protect Connect best. Ordinary mobile internet is enough, but traffic could be reduced even further. For that, it would be enough to develop a synchronization protocol not only for the mouse and keyboard but also for the operating system and the active application, in my case the video editor. Then only what is in the Viewer window would need to be captured as video. This should greatly reduce traffic and improve the image quality from the remote computer. For streaming games there is the Parsec app, so no worries there. The biggest pain is screen capture for a training course, when you need to record work in a spreadsheet editor with a fairly small font or a lot of information on screen. For such purposes I hope H.266 will be applied, because an 8-bit signal in all screen-recording applications is "digital VHS", which greatly strains the eyes...
Why save a few cents and at the same time get bad access times on rewind? Internet traffic is very cheap now. Does it matter whether I pay 0.5 or 0.3 cents per hour?
It's not so much about the consumer. At the scale of Netflix, a 40% bandwidth saving on 4K could mean tens of millions in savings, and that is already with their appliances sitting locally with ISPs.
@@ksec6631 But at the same time billions are wasted on the consumer side through bad access times. It does not matter to the consumer: one consumer does not pay for a million other consumers, he pays only for his own traffic, which is a fraction of a cent per hour. But he does pay for the wasted time, $0.20 to $50 depending on his wage and how much he rewinds.
On a much larger scale? Yes. Billions of devices connected and streaming video content, and everyone reducing data bandwidth by 40%? You don't need math to understand that the cost will be cut tremendously.
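The scale argument can still be put in rough numbers; every input below (subscriber count, monthly traffic, delivery cost per GB) is a hypothetical assumption for illustration, not a real figure from any streaming service:

```python
# Back-of-envelope estimate of annual delivery-cost savings from a bitrate cut.
# All inputs are made-up illustrative values.

def annual_savings_usd(subscribers: int, gb_per_sub_per_month: float,
                       cost_per_gb_usd: float, savings_fraction: float) -> float:
    yearly_gb = subscribers * gb_per_sub_per_month * 12
    return yearly_gb * cost_per_gb_usd * savings_fraction

# 200M subscribers, 50 GB/month each, $0.001/GB effective delivery cost,
# 40% bitrate saving -> tens of millions of dollars per year
print(annual_savings_usd(200_000_000, 50, 0.001, 0.40))
```

Even with a delivery cost of a fraction of a cent per gigabyte, the 40% cut lands in the tens of millions per year, which matches the order of magnitude mentioned above.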
Longer encode time, longer decode time... meaning more time on the processor... meaning more power draw... all the advancements in CPUs and SoCs to keep battery life up become null. lol
@@morfx9911 1.8x is nearly double the decode time, which means nearly double the time on the processor and nearly double the energy drawn. If 1x equals 30 seconds, then 1.8x equals 54 seconds. CPUs and SoCs are usually advertised as about 20% faster than the previous generation, rarely 50% faster, never 80% faster, and at most with 50% lower power draw. An 80% longer decode time means roughly 80% more energy per clip, so a whole generation of CPU/SoC advancement is cancelled out. You may not need to read what is below, it's just an example.

Take two chips from two generations. The 1st-gen chip decodes an H.265 clip in 1 second at 1 W, so 1 J per clip; the same clip in H.266 takes 1.8 seconds, so 1.8 J. The 2nd-gen chip is 20% faster and draws 50% less power: the H.265 clip takes 0.8 seconds at 0.5 W (0.4 J), and the H.266 clip takes about 1.44 seconds at 0.5 W (about 0.72 J). If both machines have the same battery and the 1st-gen machine lasts 20 hours decoding H.265, the 2nd-gen machine lasts about 50 hours (2 days 2 h) on H.265, but only about 28 hours (1 day 4 h) on H.266. 50 hours sure is more useful than 28.
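The energy arithmetic in that example fits in a short sketch; the 1 s / 1 W baseline, the 1.8x decode-time ratio, and the "20% faster, 50% less power" generation step are all hypothetical numbers from the example, not measurements:

```python
# Sketch of energy per decoded clip and the resulting battery-life scaling.
# All figures are the hypothetical ones from the example above.

def energy_per_clip_joules(decode_seconds: float, power_watts: float) -> float:
    return decode_seconds * power_watts

gen1_h265 = energy_per_clip_joules(1.0, 1.0)        # 1.0 J
gen1_h266 = energy_per_clip_joules(1.8, 1.0)        # 1.8 J
gen2_h265 = energy_per_clip_joules(1.0 * 0.8, 0.5)  # 0.4 J (20% faster, half the power)
gen2_h266 = energy_per_clip_joules(1.8 * 0.8, 0.5)  # ~0.72 J

# Battery life scales inversely with energy per clip. From a 20 h baseline
# (1st gen decoding H.265), the new chip gives ~50 h on H.265 but only ~28 h
# on H.266: the heavier codec eats most of the generational gain.
baseline_hours = 20.0
print(baseline_hours * gen1_h265 / gen2_h265)  # ~50
print(baseline_hours * gen1_h265 / gen2_h266)  # ~27.8
```

The design choice in the sketch is to compare energy per clip rather than raw power draw, which keeps the generational speedup and the longer decode time in the same units.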
Does the world need yet another patented, license-encumbered video codec? I hope not. I hope nobody in the industry will keep working against the adoption of open-standard, royalty-free codecs. We really need this business model to become completely irrelevant ASAP. There are equally good open-standard codecs now, both lossy and lossless, so nobody should continue to promote license-encumbered codecs.
It's about steering development efforts with funding as well as long-term expectations. Standardized codecs achieve higher overall quality of experience across dimensions thanks to (hopefully) extreme end-to-end optimization. The price to pay is low relative to the productivity output; it's like a road tax, but lower. Open-source efforts tend to be short-term and not fully optimized across all dimensions. Open source lives alongside patented stuff, and this model seems to be OK. How would you sponsor high-complexity work with real R&D costs in open source?