Little secret of JPEG: it actually supports two entropy coders, Huffman coding and arithmetic coding. Arithmetic coding is superior in performance, yet it is almost never used and supported by almost no software. The reason is historical: back when JPEG was new, arithmetic coding was subject to multiple patents, mostly held by IBM, but not all. That made it very difficult for any program to use arithmetic coding legally, so all the early JPEG implementations were Huffman-only. Once the patents expired, it became the classic chicken-and-egg problem: no one wants to make software that saves JPEGs with arithmetic coding because all of the existing software wouldn't be able to display them, and no one has a reason to make their software able to display arithmetic-coded JPEGs because there are none in use to display. So even today, we are all using JPEG in the low-performance mode. If it were practical to use the arithmetic option, JPEG files could be about 10% smaller while maintaining exactly the same quality.
Yes, this is a great tidbit of history that most people don't know! Fun fact you may already know: in video codecs such as H.264 and H.265, where compression ratios really matter for saving bandwidth, most entropy coding is done with context-adaptive binary arithmetic coding (CABAC). The gains of arithmetic coding over Huffman coding were enough of an incentive for most developers of video codecs to implement this logic on both the encoding and decoding side.
@pyropulse Looks like one to me. A circular dependency: No one will use a feature that has no software support, and no-one will make software to support a feature that is never used. The problem cannot be solved because of a condition that can only be altered by solving the problem.
@@katiebarber407 no, Serenity is a different project to TempleOS. If TempleOS feels like an 80s OS, Serenity feels like a 90s OS. It's basically a Unix-style system with a Windows 95-style desktop environment.
I did a degree in electrical/computer engineering. This is BY FAR one of the best explanations I've seen about this. Doing the math is nothing compared to the understanding this video gives you. Thank you!
I agree. One thing I would have added, though, is why we use cosine instead of sine. But perhaps if this is of interest to you, then you already know the answer why😉
If you've ever studied differential equations or Fourier series in general as a mathematics student, then the concepts also make a lot more sense intuitively than I expect they might as an EE student alone. 3Blue1Brown has some great videos on this.
@@Reducible why is there only one coefficient in the DCT output if most of the cosine wave values were positive? Is it because the others were too low? But you'd think some would still be positive, just with a lower positive value, no?
It should probably be phased out, really. JPEG's compression was cutting-edge when it came out, in 1992. There have been several attempts to replace it since then with more sophisticated compression that can achieve higher quality for the same size, but they've all failed because they can't compete with JPEG's universal support. The latest is WebP, which is making some progress because it has the giant of Google to promote it. JPEG2000 was a big flop. Though amusingly to me, every web browser today /does/ support it sort-of... not as a JPEG2000 file, but because it's one of the image compression methods supported within PDF files.
All/most of the image formats that followed are based on the basic idea of stacking waves, and even more so for the video formats. So in a way you are already paying homage to this genius design by watching this video! The JPEG people didn't stop working on pictures after good ol' JPEG either. Their latest JPEG XL comes with tricks to make it much more efficient in terms of beauty/fidelity-per-bit. It is able to go toe to toe with video-based image formats like HEIC and AVIF in terms of efficiency while staying easy on the CPU to encode and decode.
This was *really* good. Well paced, well explained with great visuals. I have a much greater appreciation for what JPEGs do now. I'd love to see a video outlining some of the other various transformations used in signal processing or some more neat applications of them!
Other interesting compression algorithms for people to look up: - Opus, the successor to MP3/AAC that powers audio on the internet these days. - QOI, an amazingly fast and simple to understand image format (1-page specification!) - JPEG XL, the cutting-edge expansion of the original JPEG format shown in this brilliant video :-)
@@Dorumin It could be though. Opus performs really well at any bitrate, though it does excel especially at the low end. Outperforming MP3 isn't that impressive a performance though. There are lots of codecs that can make that claim. MP3 is just /old/.
I'm going to have to agree with the discussion above. Opus isn't a successor to MP3/AAC. AAC is reasonably considered a successor to MP3, but Opus is more of a peer to AAC.
I’ve actually been studying a lot of control theory and signal processing on my own time, continuous and discrete. The moment you said to look at the brightness component, and how if you move along it it’s like a signal in a way, I put my phone down and went “Oh. My. GOD.” I immediately knew exactly what was about to happen: pick out the lower frequencies and just store those, and reconstruct the signal later. That is absolutely INCREDIBLE. Incredible video. I’m also very proud of myself for recognizing that so naturally
It is no exaggeration to say that the quality of this video's presentation of the subject is beyond superb. Utterly fascinating and presented with outstanding clarity and insight. Left me wanting more, more, more of this content, please! Thank you for the effort and care you put into its creation.
I would just like to take the time to say: thank you for making these. As a mathematical engineer, I really appreciate these types of videos, which go into something that is extremely interesting but that I don't have time to explore myself.
This is one of the best videos I have seen about how the Fourier Transform is used in JPG compression. The amount of effort, time, and money put into it is incredible. Thank you for sharing.
Thank you for this! I've watched and read many explanations of JPEG and they all talk about the DCT like that is the part that makes JPEGs smaller. Your video finally made it clear that the DCT doesn't reduce the size of the data, but does put it in a form where the less important information is easier to identify and remove, and why it's okay to get rid of the high frequency content. The explanation is great and the visualizations were clear and helped a lot. Excellent work!
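To see that point concretely, here is a minimal Python sketch (the 8 sample values are made up, and scipy's DCT-II with orthonormal scaling stands in for the JPEG math): the transform still produces 8 numbers, nothing shrinks yet, but the energy piles up in the first few coefficients, which is exactly what the later quantization step exploits.

```python
import numpy as np
from scipy.fft import dct, idct

# A made-up row of 8 brightness samples from one block (0-255 range).
samples = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

# DCT-II with orthonormal scaling: still 8 numbers, nothing is smaller yet.
coeffs = dct(samples, norm='ortho')
print(np.round(coeffs, 1))  # most of the energy sits in the first few coefficients

# Zero the high-frequency half and invert: the reconstruction roughly tracks the original.
truncated = coeffs.copy()
truncated[4:] = 0
print(np.round(idct(truncated, norm='ortho'), 1))
```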
I always knew JPEG had some interesting maths going on behind the scenes, but man, this is super impressive. And it actually sounds like a great coding challenge to create a functioning JPEG encoder/decoder. Btw, I can't overstate the quality of the work done to bring this video to us; I just love it. Thank you a lot for what you are doing, your videos are fascinating as always.
I remember coding the DCT/IDCT functions in Borland Pascal two decades ago... it was a day of work. Not the full JPEG scheme, just playing with the coefficients (e.g. erasing them and seeing what it does), but quite some fun.
I imagine it's not a good coding challenge, since there's not much elegance you can bring here; coding math-related stuff is often tiresome and ugly, unless you use languages like Julia.
@@VivekYadav-ds8oz Well, although this might actually be very true for some, I personally feel kind of OK with coding applied math stuff. After all, coding a JPEG encoder/decoder is not only about coding the math part of it, it's also about engineering a piece of software, because that's what we, as programmers, do.
This video is by far one of the best-explained videos on JPEG compression. Not only does it present an intuitive explanation, it also includes the right amount of mathematical detail for any brain to comprehend. Kudos!!
I've been working with multimedia encoders and decoders for most of my professional life, and I've watched many videos that try to explain what is going on behind the scenes. This is the first video I've seen that touches important technical details like chroma subsampling 4:2:0, which is literally the second thing any decoding software like FFmpeg will report to you, right after the encoder (e.g. H.264). Good job.
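For anyone wondering what 4:2:0 looks like in code, here is a rough numpy sketch (the array shape and simple block averaging are my own simplification, not any particular decoder's behaviour): the luma plane is kept at full resolution, while each chroma plane is reduced so that every 2x2 block of pixels shares a single chroma sample.

```python
import numpy as np

def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane: half the width, half the height."""
    h, w = chroma.shape          # even dimensions assumed for simplicity
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 chroma plane.
cb = np.arange(16, dtype=float).reshape(4, 4)
print(subsample_420(cb).shape)   # (2, 2): a quarter of the original chroma samples
```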
As a designer, I very often go through these concepts and terms without understanding what they actually mean. And I have to say it: this video has already helped me in an artistic experiment that translates images to audio. Beautiful work, thank you!
I appreciate how this video devotes great length to the broad overview of compression, and then very quickly runs through the specific details of the JPEG system... Very meta
32:14 There's also the progressive stuff like spectral selection and successive approximation, which breaks every assumption of your logic and makes you question why you even want to write your own JPEG decoder.
Theoretically, you have to give up some fidelity in order to compress any sort of information this aggressively. The difficult part is deciding how much to give up while keeping the data as faithful and comprehensible as possible. But yeah, nowadays we have bigger and cheaper digital storage, so the shortcomings of JPEG are gradually getting noticed.
There are better alternatives to DCT available nowadays. For example, JPEG2000 uses wavelets: when you push the compression too far on these, instead of getting blocky like DCT, they become fuzzy, which is generally less objectionable.
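For a flavour of the wavelet idea without the JPEG2000 machinery, here is a minimal one-level Haar transform in Python (my own toy illustration; JPEG2000 actually uses longer, smoother biorthogonal filters, which is part of why its artifacts look fuzzy rather than blocky): each pass splits the signal into coarse averages and fine details, and compression works by shrinking or discarding the details.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass half (coarse signal)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass half (details)
    return avg, diff

def haar_inverse(avg, diff):
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + diff) / np.sqrt(2)
    x[1::2] = (avg - diff) / np.sqrt(2)
    return x

signal = np.array([9, 7, 3, 5, 6, 10, 2, 6], dtype=float)
avg, diff = haar_step(signal)
# Dropping all detail coefficients leaves only the pairwise averages.
print(haar_inverse(avg, np.zeros_like(diff)))   # [8. 8. 4. 4. 8. 8. 4. 4.]
```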
Perception is such a massively important field in IT. This is why they teach cognitive psychology as part of a software engineering degree (or at least they did when I studied).
It depends on what exactly you're majoring in. Computer science and software engineering are vast fields. I touched on those things because I liked doing usability stuff, but I could just as easily have avoided them if I hadn't.
What I find incredible about this excellent video is that it helped me understand something I never expected to find here. I’m currently in an Electrical Engineering program, and for months, I’ve had a very limited understanding of the Fourier series concept we covered a few months back. The way you explained the DCT so clearly and concisely somehow crystallized the concept in my head. I deeply envy your ability to keep an audience so engaged with all this math I previously thought boring. Thank you so much for the well done video!
You, sir, managed to hit all your goals, in my humble opinion: a very clear explanation of a fairly complex algorithmic pipeline, very visual examples/demos, and inspiring awe of how people can be immensely creative to problem-solve. 11/10, GREAT video!
When people ask if a computer science degree makes any sense in the modern world, I should point them to this video. I don't have a degree myself and work in web development, and I never ever come across a problem as localized and deep as this. Makes me think about going back to school, honestly. The hardest problems I need to solve, while definitely difficult, are always about managing lots of data, managing lots of network failures, managing large code bases, managing race conditions and synchronization issues; it's all just trying to solve these large, messy code management problems. No doubt there are thousands of people in web development working on really deep problems like this, but they're all working for the big 5 and making large sums of money for it. Most developers in my field just don't need to interact with code as a mathematical problem. The math has been solved, the tools have been built, and we need to figure out how to use them as best we can. It's definitely a different job entirely.
Most of the things people interact with during their typical day are brought to them by electrical engineering and computer science. The fact that people take this for granted is a compliment to the fields, though it can be frustrating at times. This also applies to many other fields of science, including chemistry, biology, math, and physics, and all fields composed of those base sciences. Just think about the things you use, own, and touch, all created thanks to materials science bred from those sciences.
In one of my university courses we made a few image filters using the SIMD instruction set in assembler. Now I understand the horror of the professor when someone said they wanted to make a JPEG encoder in ASM.
Great video, I never took the time to understand the JPEG algorithm but this video really explains it efficiently, with relevant illustrations. Well done! The only remark I would make is about the curve you plot over the frequency coefficients (when you explain the DCT). I think it kills the idea that it is a discrete sequence of coefficients. The interpolated values have absolutely no meaning, whereas the curve on the left (the signal) is relevant because it represents the "real" signal that was sampled.
Yeah, very good point! Now that I think about it, you are right. I think I wanted some visual symmetry when I made it, but truth be told, it serves no purpose. Sometimes, when you are so deep into a project, you can forget how something so superficial can possibly lead to some confusion. Thanks for the feedback!
It's not only a discrete sequence (as the sequence of samples would be); it stays discrete even when you consider the extension to continuous signals. In the time domain you can interpolate using the cosines, so on the left side I think the continuous line is helpful. In the frequency domain, you would still see discrete delta impulses, because the DCT requires (assumes) your signal to be periodic. Other than that, great video! Thanks
@@Reducible I think the continuous line does serve some purpose. It makes it easier to understand why the discrete values are the way they are when you slowly shift the frequency between integers.
Though the interpolated values have no meaning, they serve as a reminder that cosines are, in fact, continuous and not discrete. Also, the signal transformation is easier to visualize and understand with this in mind. This is ultimately what the step of quantization gets rid of, as it samples this continuous interval back into discrete space. In my mind it was definitely not in vain to have it included and visualized.
@@Reducible I think one problem is that it's a bit hard to spot the actual coefficient points because they're the same colour as the curve, which is why making the curve less bright is a valid solution.
I did a degree in Electrical Engineering but I do software engineering and this video is awesome! I love the visualizations and the explanations of signal processing concepts. If they taught signal processing like this in school I would've been MUCH more interested! Really well done!
I’m usually not a huge fan of JPEGs and prefer highly compressed PNGs but this video made me respect the file format more. I’m mind blown by how cleverly designed this is.
This reminds me of the way it felt when I first saw 3blue1brown's video showing how the Fourier transform works. I *got it*. It was miraculous. Reducible, you're up there with the best of them.
As someone who hasn't touched math since High School "special needs" classes, it's insane how intuitive you made this. Of course I don't understand some stuff, like *how* a signal gets transformed with a DCT in the middle of the video and why the transformed values look so weird at first glance, but otherwise... I've got a vague understanding of how this works now, even how you can use a collection of "fixed" cosine waves to roughly represent values. And I can see how the large-scale luma / chroma simplification leads to the sort of splotchy patches you see in heavily compressed JPEGs.
As a pixel artist, I admit that I somehow hate JPEG, mostly because of its qualities. It's a lossy image format that is decent at what it does in most cases. The main issue is that pixel art is one of those rare cases where JPEG is the worst option; it's all sharp, sudden transitions from one pixel to another in terms of colour or contrast, just what JPEG "hates". A lot of websites automatically convert your image to JPEG if it's not animated or not transparent, which can absolutely ruin your work. So there is this old trick of leaving a single pixel transparent on your image to keep it as a PNG instead. So, now I still hate JPEG, but at least I understand a bit more why.
It's annoying that everything is forced through lossy photo compression, especially when pixel art is already so insanely compressible. A detailed 320x240 32-color piece can be 20KB, but it must be upscaled and converted to a fuzzy JPEG that's an order of magnitude larger than the original.
Lossy encoders were designed with mainstream usage in mind. You, as a professional with strict requirements and technical knowledge, are responsible for finding another medium and a suitable file format to carry your information. Although JPEG was somewhat forced onto Internet users as digital hardware and software grew in usability and prevalence, there was never a point in computer history when you couldn't use a lossless format or find another lossless workaround, mostly because raw solutions are FAR EASIER to implement and far more robust and cost effective, and rarely have anything to do with fashion or industry trends.

I am a DTP professional and a graphic designer from the early 90's; I still remember the IFF and PCX file formats on the Amiga. Don't mingle 'technology for the masses' with 'technology as is'. Since the 2000's I remember people struggling to find a good carrier for print-ready photography in certain workflows. TIFFs with ZIP compression were widely available and offered superior lossless compression in both CMYK and RGB. We also had EPS DCS2, which would natively store grayscale color separations for high-resolution film development.

Video and audio were something else due to their monstrous memory demands for the time, but pixel art? Man. It all started from indexed palettes and simple pixel art. Why would it ever devolve into media intended for megapixels and high-frequency noise? Vector graphics took more than 10 years to develop fully and it's still quite a niche technique if we look outside DTP, but pixel art was there from the very beginning.

Though, to be fair, I remember one historical gap. Thanks to the holders of the LZW patent (used by GIF) on one side, and Apple pushing for high color palettes on the other, browsers were caught between a rock and a hard place, but only browsers! It was some time before PNG finally got running throughout the ecosystem, in the late 90's. Microsoft always had the Bitmap format, the most native thing one can imagine, but it was completely discouraged on the Internet.

In any case, since the 2000's, *having* to use JPEGs for anything other than what it was made for (high-res photos and common image interchange) was definitely not a thing, if it ever was. Whoever had to mess around with upscaled JPEGs was someone who figured out stuff very wrongly.
That's because pixel art is pretty much the opposite of natural vision. It's also the reason why you can never use chroma subsampling on a PC monitor (it will screw up the GUI and text) and why desktop recordings without zoom look so horrible.
I know some basic computer science, but this is way over my head. I finished the video amazed at how complex the JPEG compression method is. I don't like the blocky image artifacts, but I'm still impressed. Thank you.
Thank you for such wonderful visuals. Even though I honestly don't get all these concepts at all, I find it super interesting to watch these concepts explained visually. I hope this video will be a vital complement to my upcoming signal processing course.
This is a great explanation. In undergrad we actually implemented, on an FPGA, a simplified image decompressor for an image format based on JPEG but without 2D downsampling and without Huffman decoding.
These same concepts (run-length encoding, bandwidth compression) work pretty well with radar images as well for certain classes of radars. Some radar signals need to be captured, compressed, transmitted over long distances, and reconstructed to its original form (warts and all) for further processing. Thinking of an image as a signal processing problem is very logical. Excellent video.
I'm out of words for the quality of this content. Really, wtf is happening. How can this be free and always-available knowledge? I'm a chemical engineer btw; I will probably have no use for this ever in my life, but I'm deeply interested, especially in the math part. I just love the internet, man. You sir are a hero.
This is an excellent video; I really appreciate your putting so much effort into both covering the actual math and giving a visual run-through of its implications. So often this stuff is explained with a page of equations and maybe a single figure featuring the 2D DCT basis functions. This was way better than that! It's a great example of how a well-done video with good illustrations and animations can explain concepts far better than a textbook can, though at the cost of making it much harder to skim ahead when part of the information presented is already understood.
Such an amazing explanation!! As someone who started watching the video without any knowledge of the topic but still understood everything in detail, I must say this was a great video.
I have worked for years on JPEG decoding IP, especially the Huffman decoding algorithm, and I swear that JPEG is wonderful. Another little secret is that JPEG also has a header which contains the information needed for decoding while still keeping the entire file small, and JPEG header analysis is also a very interesting topic. I wish I could have watched this video in the early years of my career in JPEG codecs. Nowadays I just refer people who ask me about JPEG to this video.
Not only the content of the video is interesting and well explained, but also the animations are incredible. I dream of one day being 10% as good as you are with Manim.
It is incredible how much complexity and careful thought goes into technology we use without thinking about it. This video really makes you appreciate the hard work and genius ideas that have been put into what we use today. It always amazes me to understand and learn how things work. Thank you for explaining it.
You clarified a lot of the math involved in JPEG compression so that now I think I understand it, or at least the most important parts. Great explanation!
Holy crap, that was a ride. I knew JPEG used DCTs and I dabbled in signal compression a while back, but the other details putting it all together were very illuminating. ngl I was not expecting that the JPEG quality slider I see in some software actually comes from a set of quantization tables set by the standard. Or that stuff like Huffman coding is used to clean up after eliminating the high frequencies.
Had a course in signals and systems where we learned Fourier transforms. Decided to try using a 2D FFT transform of an image, then essentially cropping or removing high frequency components, then doing an inverse FFT to make a very crude image compression algorithm. It still achieved a filesize reduction to about 20-30% of the original before obvious artifacts became visible. Not bad for not doing any block operations or other data compression. Got real interesting applying matrix operations to create filters like blur, sharpen, edge detection, and color shifting.
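A minimal reproduction of that experiment might look like the sketch below (grayscale only, and the fraction of frequencies kept is arbitrary): take the 2D FFT, zero everything outside a low-frequency window, and invert.

```python
import numpy as np

def crude_compress(image, keep=0.5):
    """Keep only the lowest `keep` fraction of frequencies in each dimension."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # low frequencies in the centre
    h, w = image.shape
    mask = np.zeros_like(spectrum)
    dh, dw = int(h * keep / 2), int(w * keep / 2)
    mask[h // 2 - dh: h // 2 + dh, w // 2 - dw: w // 2 + dw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Toy "image": a smooth gradient plus a little noise.
rng = np.random.default_rng(0)
img = np.linspace(0, 255, 64 * 64).reshape(64, 64) + rng.normal(0, 5, (64, 64))
approx = crude_compress(img)
# Mean absolute error: modest relative to the 0-255 range, despite discarding 75% of the coefficients.
print(np.abs(img - approx).mean())
```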
Thanks for this. How video compression actually works is something that's not easy to explain to most people. I can certainly see myself referring people to this video instead of trying to explain it myself. You've done a much better job explaining it than I could have.
This topic has honestly been stuck in my head for so long, but I found no content satisfying my need to learn it until now!! This has been greatly explained, awesome job!
I recently started saving my photos as HEIC/HEIF and am incredibly impressed after playing around with some really low-bitrate video encoding, cause it sort-of reveals the inner workings of particular codecs/settings. (same with listening to low bitrates on, say, MP3 vs. HE-AAC v2) I thought I was impressed by HEVC before I saw AV1 at comparable settings. It's incredibly effective in my amateur, compression-loving POV and I'm excited for and already in disbelief of even further advancements in the future. This kinda leads into my appreciation of the Demoscene. I have absolutely no idea how real-time graphics and a full soundtrack like in Conspiracy's "Clean Slate" (premiered at Revision 2021) fit into a 64KB file...
I understood several of the words in this explanation. Seriously though, this was very useful. I have an extremely hard time learning math without a real-world example to tie it to (e.g., FM radio is a real-world example of calculus derivatives), and I made it to age 39 before finally finding a real-world example of linear algebra in this video. I still don't understand what linear algebra _is,_ but now I at least have a starting point.
The amount of work that goes into our display-systems really staggers my mind when I try to think that we just take a 4K 8 bit per channel stream of 60fps as not that big of a deal. Even expecting increases to 8K at 10 bpc 120fps as something we should have by now dammit. And here I am with my 1080p 8bpc 24fps home cinema projector and I say... "that's good enough, friend... that's good enough."
I've been lurking around DFT/FFT explanations for the past couple of days, both on your channel and 3b1b's, among others. I understood many of the underlying concepts, but why the cosine functions themselves pull the contribution info out of the input was a mystery. This video solved it for me with the part on vector similarity based on the dot product. I was blown away by the simplicity of the concept. Next I plan to read more on orthogonality. Great video as well, mate! My utmost respect!
You sir, just earned a new sub, absolutely brilliant content quality! I'm loving how many new channels are adapting the 3blue1brown style of teaching; I honestly find these videos so clear I'm learning faster than ever before!
Awesome video and great explanations! The only part that wasn't quite clear was the 2D DCT part, which felt rushed. It wasn't clear to me why you can just apply the DCT twice, in orthogonal directions on top of each other, and what the resulting matrix entries actually mean. But after thinking about it some more, I think I got it.
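For what it's worth, the reason the double application works is that the 2D DCT is separable: transforming every row and then every column of the result is exactly the full 2D transform, and each output entry measures how strongly the block matches one pair of horizontal and vertical cosine frequencies. A quick numerical check with scipy (random 8x8 block):

```python
import numpy as np
from scipy.fft import dct, dctn

block = np.random.default_rng(1).uniform(0, 255, (8, 8))

# 1D DCT along every row, then along every column of the result...
two_passes = dct(dct(block, axis=1, norm='ortho'), axis=0, norm='ortho')

# ...matches the 2D DCT applied in one go.
print(np.allclose(two_passes, dctn(block, norm='ortho')))  # True
```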
I added this to watch later and really wanted to watch it after work. But I was hooked, I couldn't stop the video even if I wanted to. Awesome video and amazing engagement!
Your explanation and the quality of the video are amazing. It really reflects the amount of hard work you have put into this. Thanks a lot for your service to humanity.
29:00 Well, this is clever. I thought you'd just store the matrix of the DCT and then change everything but the top left to zeros, but the actual way means that if a block has a lot of high-frequency signal, it is retained after the rounding.
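Spelling that step out: each DCT coefficient is divided by the corresponding entry of the quantization table and rounded, so a strong high-frequency coefficient can survive a large divisor while the weak ones collapse to zero. A toy sketch (the block and the table below are invented purely to show the effect; real encoders use the 8x8 tables from the standard or their own tuned versions):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Invented smooth 8x8 block, roughly level-shifted to be centred around zero.
i, j = np.indices((8, 8))
block = 16.0 * i + 4.0 * j - 78

coeffs = dctn(block, norm='ortho')

# Invented stand-in for a quantization table: small divisors for low frequencies
# (top left), large divisors for high frequencies (bottom right).
qtable = 10 + 8 * (i + j)

quantized = np.round(coeffs / qtable)        # most high-frequency entries become 0
print(int(np.count_nonzero(quantized)), "of 64 coefficients survive")

# Decoder side: multiply back by the table and invert the DCT.
reconstructed = idctn(quantized * qtable, norm='ortho')
print(float(np.abs(block - reconstructed).mean()))   # small compared to the pixel range
```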
I finally understood the concept after watching a few Computerphile videos on it, and I've also had some information theory classes at university since then. This video does a great job explaining all the details, and I believe I have found another one of those gem channels. I'll look for a video on JPEG2000 wavelet compression, if it exists.
Hi, in the "Playing around with the DCT" section, you were changing the value on y-axis. I felt like it was confusing when you were changing both the amplitude and the y-axis scale, i think it'll be better if you first change the scale and then change the amplitude.
@@Reducible I also thought this was slightly confusing, but I tried to see it from the vantage of someone who is far less knowledgeable. I personally understood your point just fine (it was actually mildly annoying how slowly you were getting to the point, but please don't take this as criticism). The reason I even mention this in parentheses is that you then suddenly jumped off a cliff and went full-on linear algebra. I was still able to understand everything you said because I'm professionally versed in linear algebra and matrices, but I'm guessing it has to be some sort of curriculum you all take for granted, if that's the obvious stuff, while the basic mapping of amplitude/frequency with the DCT had to be cautiously and meticulously explained as if to a 3rd grader. I'm just saying it out loud; I thought that was weird, but it might be just me.
@@milanstevic8424 yeah I was well and truly on board, but "dot product" is well and truly into "complex scary maths" for me. I can handle the concepts just fine, I just haven't been educated that deeply in what certain math things mean.
@@patrickhector well I don't blame you because it definitely sounds scary. However, let me demystify it in two simple statements:

1) A vector is a mere set of independent real values, typically used to represent coordinates (or a direction), where each individual component's index correlates with a particular spatial dimension, i.e. a three-dimensional vector would usually map as (x, y, z).

2) A dot product between two vectors is an operation used to multiply the vectors together. Because you can't tell what I mean by this exactly: algebraically, you multiply the components with each other, index-wise, then add the results into one final number.

It is beyond my ability and deep within the realm of magic as to why this operation makes any sense whatsoever, but when you apply a dot product you literally get the cosine relationship between two vectors, multiplied by their individual lengths. So this almost childish mumbo-jumbo of mixing stuff together ends up being

a . b = ||a|| ||b|| cos(angle)

where the ||x|| brackets denote the (primitive value aka "scalar") length of x. When used with unit vectors (vectors with a length of exactly 1, which lets us ignore the lengths), this operation has the very convenient effect of telling you the actual spatial relationship between two directions (in however many dimensions you need), in terms of whether they're parallel/antiparallel or orthogonal to each other, or anything in between.

Look, let's say we had unit vectors a = (0.6, 0.8, 0), b = (-0.8, 0.6, 0) and c = (0, 0, 1). Imagine a and b lying down on the table surface like clock hands, and c sticking upward. (c is obviously of length 1, but you can use the Pythagorean theorem to prove that both a and b are of unit length, i.e. ||a|| = √(0.6² + 0.8² + 0²) = √(0.36 + 0.64) = √1 = 1.)

a . b = 0.6*-0.8 + 0.8*0.6 + 0*0 = 0
a . c = 0.6*0 + 0.8*0 + 0*0 = 0

which happens to be the same as cos(90°), and indeed, if you drew these vectors, you would see that they are truly orthogonal to each other. However, if we had d = (-0.6, -0.8, 0), this vector would be antiparallel to a, and so

a . d = 0.6*-0.6 + 0.8*-0.8 + 0*0 = -0.36 - 0.64 = -1

which happens to be cos(180°). As you can see, it's really dumb math, kids can do it, but one that has profound consequences that are still investigated and heavily used in many disciplines of math, physics, and science in general.
@@milanstevic8424 See, here's the thing: I am perfectly fine with the concept of vectors; they're pretty ubiquitous in high-school ATAR physics in Australia. Dot product? OK, you're combining two vectors; not quite sure how that works exactly, but that basic concept I can definitely handle. Everything else following is a haze of words I need to scour my memory to find any semblance of an intuitive understanding of, combined with a jumble of numbers I might be able to parse were they not sprinkled with symbols I was never told the meaning of, or have simply failed to remember. Maybe I could work it out given half an hour and free access to Wikipedia, but I'm guessing it'd be more like a couple of hours at least to properly wrap my head around it. Either way, it's not great that a beginner-friendly YouTube video suddenly switches to requiring at least 90% of your memory of an advanced mathematics course you did in high school.
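The arithmetic in the explanation above is easy to check in a couple of lines of Python (same vectors, numpy's dot product):

```python
import numpy as np

a = np.array([0.6, 0.8, 0.0])
b = np.array([-0.8, 0.6, 0.0])
c = np.array([0.0, 0.0, 1.0])
d = np.array([-0.6, -0.8, 0.0])

print(np.dot(a, b), np.dot(a, c))  # both ~0: orthogonal, like cos(90°)
print(np.dot(a, d))                # ~-1: antiparallel, like cos(180°)
```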
This is extremely amazing! I loved the technology, the maths, the cleverness, the explanation. This is one of the best ways I have invested my time in weeks. Thank you for this video!!
Signal processing, including image/video compression and coding is mostly studied by electrical and computer engineers. Computer scientists might study this as well sometimes, but that's rare and certainly not the norm.
Your videos are really great. Your delivery is quite similar to 3blue1brown's. I mean that as a compliment. You take time to explain things very clearly, your cadence is rhythmic, and your voice is pleasing to the ears.
So pixel art as PNG and photographs as JPEG. Got it. Also it's so cool to see easily graspable applications for vector and matrix mathematics like this.
I know you said you wouldn't wish it on your worst enemy, but learning about all this really makes me want to write my own implementation of JPEG, just as a challenge. Probably a great educational experience if nothing else!
There are so many interesting algorithms in this space, from run-length encoding to LZW, Huffman coding, and arithmetic coding, and even looking at some newer implementations like LZ4 would be pretty cool. Probably more than one video's worth, though.
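Of those, run-length encoding is the friendliest place to start; a toy version (my own sketch, not the exact JPEG variant, which only run-length codes the zeros between nonzero coefficients) fits in a few lines:

```python
def rle_encode(data):
    """Collapse runs of repeated values into (value, count) pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return [(v, c) for v, c in encoded]

def rle_decode(pairs):
    return [v for v, c in pairs for _ in range(c)]

# The zigzag-ordered tail of a quantized block is mostly zeros, so RLE shines here.
tail = [5, -2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(rle_encode(tail))                      # [(5, 1), (-2, 1), (0, 3), (1, 1), (0, 6)]
assert rle_decode(rle_encode(tail)) == tail
```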