Dithering is still used in printing. In fact, modern-day printers use a clever combination of computed dithering (like Floyd-Steinberg) and seamless tiling (like Penrose tiles).
^ As serb says, some cheap panels use 6-bit color, exploiting the fact that we're less receptive to high-frequency spatial color changes. Also, CG graphics sometimes use dithering to reduce banding, which even at 8 bits can be noticeable in dark gradients; it looks a bit like the quantized image, just not as exaggerated. Dithering is basically just free color precision, which I believe is similar to why printers do it as well, but I'm not too knowledgeable about them.
@@Smittel Printers have the problem of having only a very limited set of colors, usually 6 in your typical inkjet printer. Fortunately printers today have an extremely high density of “pixels”, so the dithering doesn’t affect the end result; you only see it under a microscope. Straight dithering is actually not used, as the end result can make the image muddy, because of how ink droplets flow and mix with each other. I use it as a first-stage filter, but only for photos, since it’s relatively CPU-expensive.
What I love about this channel is that you can just watch the video and learn something without actually having to code along. You certainly can if you want to, but just watching and learning some new algorithms is really nice too.
As always, a great tutorial. I like that you start with the demo, that's a nice hook to keep watching. I'm curious if I could use this algorithm to emulate old 16-bit art using high-res pictures. I'll have to try it. One more project in my bucket list hahaha
lol thanks! Yeah it's a great way to make pictures look retro. Most art software will have an equivalent "filter". In fact I tested a version of my implementation against Affinity Photo, and got exactly the same results, so we know how they're doing it :D
My thought on that would be to take VGA output, dither it, and connect it to an EGA 64-color monitor. That could be interesting. Something that back in the day video cards weren't capable of (at least not 60 times a second).
Very nice! I've wanted to learn about color quantization and dithering for a long time and this video explained them in a very understandable way! Thank you!
Dithering still has uses in "modern" applications. You can still get some more dynamic range out of a specific display using it. Floyd-Steinberg is rarely used for this nowadays, but the principle remains. Think of things like displaying higher bit-depth images or videos on "normal" 24bpp monitors, or display-stream compression, etc. EDIT: Also, dithering is not a scanline algorithm. Floyd-Steinberg is, but not all dithering algorithms are; most of them are simple matrix operations!
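To illustrate the "simple matrix operation" point: an ordered (Bayer) dither is just a per-pixel comparison against a tiled threshold matrix, with no error propagation, so every pixel can be processed independently. A minimal 1-bit grayscale sketch (the function name and threshold scaling are my own choices):

```cpp
#include <cstdint>
#include <vector>

// Ordered (Bayer) dithering: compare each pixel against a fixed, tiled
// 4x4 threshold matrix. No error diffusion, so it's trivially parallel.
std::vector<uint8_t> bayerDither(const std::vector<uint8_t>& src, int w, int h)
{
    static const int bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 }
    };
    std::vector<uint8_t> out(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            // Map the matrix entry (0..15) to a threshold in 0..255
            int threshold = (bayer4[y % 4][x % 4] * 255 + 8) / 16;
            out[y * w + x] = src[y * w + x] > threshold ? 255 : 0;
        }
    return out;
}
```

Because each pixel only reads the source and a constant matrix, this maps straight onto a GPU shader as well.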
If you want to see some impressive dithering, have you seen the presentation by the LucasArts guy who only used 16 colours but could get crazy colour ranges, and even cycle the palette to make them seem animated?
I used this technique to display pictures on a white/black/yellowish-brown E-ink display. I limited the palette to RGB values mimicking the three colors of the display and then dithered the picture. Works great for photos.
This was another superb tutorial. Old (Robert) Floyd was certainly one of the giants of 20th century computer science (e.g. Floyd's algorithm for finding cycles in lists, Floyd-Warshall shortest path algorithm, correctness, work with Knuth, etc.). BTW, with modern C++ and template argument deduction, if you're creating a std::array you can leave out the item count - and even the type, if the elements are all the same - if you're initializing from an initializer list. e.g. std::array a{1, 2, 3, 4, 5}; // creates a 5 element array of type int
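For anyone who wants to see that deduction in action, here's a tiny self-checking example (C++17 or later):

```cpp
#include <array>
#include <type_traits>

// C++17 class template argument deduction: both the element type and
// the count are deduced from the braced initializer list.
std::array a{1, 2, 3, 4, 5};   // deduced as std::array<int, 5>

static_assert(std::is_same_v<decltype(a), std::array<int, 5>>);
```

Note the deduction guide requires all elements to be the same type; `std::array a{1, 2.0};` won't compile.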
Hey javid, you're a great guy! I'm currently at university and I look up to you! It's amazing that you share all your knowledge with all of us for free, and you're an excellent teacher!
Years ago I developed dithering algorithms for printing and Floyd Steinberg is one of the algorithms we used. There was a similar algorithm called Stucki which worked the same way but distributed the error to more pixels using different weights and produced a more pleasing image. There's another problem that arises in printing in that often your pixels are not square and a printed pixel will overlap the neighboring white pixels so you have to weight them differently. We had one printer where this was so bad that if you printed a 50% gray by painting pixels like a checker board, the black pixels completely overlapped the white pixels and you got black. For that we ended up using a completely different algorithm.
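For reference, the Stucki distribution mentioned above spreads the error over twelve neighbours instead of Floyd-Steinberg's four. A minimal 1-bit sketch on a float image in [0,1] (my own illustrative code under those assumptions, not the commenter's production algorithm; the weights divide by 42):

```cpp
#include <vector>

// Stucki error diffusion: same left-to-right, top-to-bottom scan as
// Floyd-Steinberg, but the quantization error is spread over twelve
// neighbours with these weights (all divided by 42):
//
//             X   8   4
//     2   4   8   4   2
//     1   2   4   2   1
std::vector<float> stuckiDither(std::vector<float> img, int w, int h)
{
    const int   dx[12] = { 1, 2, -2, -1, 0, 1, 2, -2, -1, 0, 1, 2 };
    const int   dy[12] = { 0, 0,  1,  1, 1, 1, 1,  2,  2, 2, 2, 2 };
    const float wt[12] = { 8, 4,  2,  4, 8, 4, 2,  1,  2, 4, 2, 1 };

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float old = img[y * w + x];
            float q   = old < 0.5f ? 0.0f : 1.0f;   // quantize to 1 bit
            img[y * w + x] = q;
            float err = old - q;
            for (int i = 0; i < 12; ++i)
            {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < w && ny < h)
                    img[ny * w + nx] += err * wt[i] / 42.0f;
            }
        }
    return img;
}
```

The wider kernel smooths out the worm-like artifacts Floyd-Steinberg can produce, at the cost of roughly three times the diffusion work per pixel.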
I've always wondered how to implement this algorithm. Thank you OLC. You always have the answers I've always needed. Console game engines and pixel game engines for example
So great to have you back! Really awesome for people new to image processing. If you had linked dithering to printing you would have completed the circle; I can imagine the AHA moment, especially if you mentioned the CMYK color space. Awesome stuff! Keep it up! Really, this video opens many interesting topics regarding signal processing. Reducing a dithered image shows the limits of nearest neighbour/bilinear filtering; this could be the starting point of an image sampling video. All the very best for 2022!
I just did my own Floyd-Steinberg dithering for photographs displaying on an ePaper screen :D (Connected photograph frame, where you can choose an image on your smartphone to be displayed on the ePaper, sent to the Arduino with Bluetooth) Can't wait to see how you did it !
Dithering is still everywhere, in every conversion for audio/image resampling (e.g. RGB 32-bit float -> RGB 8-bit). I also used Floyd-Steinberg to distribute a service fee applied to a whole document across its individual positions, so the rounding error never accumulates.
The best video I've seen since the year started. I am working on a program to produce pixel art from high def images, and this is super useful for that! Thank you javid!
David, your content is awesome. The information that you present is pure gold; keep up the good stuff. I have a similar background to you (minus the game development and 10 years of automotive development). However, I find a lot of the content carries over to auto dev for experimentation. I would like to know if you have some book recommendations.
I love your videos, man. Even though your channel is based around C++, and I know nothing about it, I in fact watch a lot of these tutorials and program in Lua with my own pixel engine. That's the great thing about your videos: you visualise everything. Keep up the great work!
I love your videos. :) I always learn something interesting and can't wait for you to release more. On another note, I've always found old pixel art using Bayer dithering to look very nice.
25:30: CMYK! And sure enough, it looks like a photo from a color newspaper! What's amazing to think about is that back in the 1930s, the first fax machines did a similar operation with vacuum tubes, using capacitors to hold the error values.
Very interesting video. I once wrote an algorithm to apply a weight to a pixel (related to A*) where it would look at its neighbours. One of the problems I had was that iterating from top left to bottom right affects the results, and I needed to do it again in the reverse direction, which was quite inefficient. This algorithm reminded me of that.
Another really good video about a subject that I've always found really intriguing. I remember the first time I came across dithering, playing around with The GIMP to convert full colour to true black and white images, and I thought it was some magic voodoo algorithm that must be beyond mere mortal levels of comprehension. That's why it's so satisfying to find out that the algorithm is very accessible and intuitive to understand, but there is still a solid level of mathematical thinking and nuance behind it. I think there's probably a natural follow-on video surrounding the generation of 'optimised' palettes (where the computer decides what colours will best approximate the source image) too, if you're so inclined to do so. :)
A big factor in the brightening of shadows might be not the dithering algorithm itself, but using it on sRGB (I assume) data with a linear distance function. Two pixels, one set to 0 and the other to 64, will emit more light than two both set to 32.
Exactly. And to test that, you can take pictures of your screen from some distance while it displays either a solid (127,127,127) color or a pattern of alternating black and white pixels.
I want to point something out about pointers. In C++, the star is part of the declarator. int *i, j; will create one int pointer and one int. This is why we put the star on the right. It is not a style choice, since the star is not part of the type specification, but the declarator. The same goes for references. This is in contrast to unsafe C# code, where the above snippet would create two pointers. Hope this helps!
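A tiny self-checking example of the declarator rule described above (variable names are arbitrary):

```cpp
#include <type_traits>

// The * binds to the declarator, not to the type specifier:
// p is a pointer to int, but j is a plain int.
int *p = nullptr, j = 42;

static_assert(std::is_same_v<decltype(p), int*>);
static_assert(std::is_same_v<decltype(j), int>);
```

This is also why many style guides recommend declaring only one pointer per line: it makes the binding impossible to misread.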
The brightening effect is caused by doing arithmetic with gamma-compressed values instead of linear ones. E.g. middle gray (128) actually encodes a brightness of about 22%, not 50%. See sRGB on Wikipedia.
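The decode step being described is the standard sRGB transfer function; a small sketch (the function name is just illustrative) that shows why a 50/50 black/white dither, which emits ~0.5 in linear light, looks much brighter than solid gray 128, which emits only ~0.22:

```cpp
#include <cmath>

// Standard sRGB electro-optical transfer function: convert an
// sRGB-encoded value in [0,1] to linear light in [0,1].
double srgbToLinear(double c)
{
    return c <= 0.04045 ? c / 12.92
                        : std::pow((c + 0.055) / 1.055, 2.4);
}
```

Doing the error diffusion on linearized values (and re-encoding at the end) removes the shadow-brightening bias.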
Amazing video, I really like your explanation it is so clear and is very easy to understand ! Thanks for the nice content. I was looking for cache optimization videos but couldn't find a good one, maybe you can make a video about it, that would be awesome !
Am I correct in thinking that when clamping to 0 and 255 you lose some of the error propagation? Of course it's better than wrapping around, but wouldn't storing the unclamped value, and only clamping when actually assessing the pixel, result in better dithering?
You are! In fact I was intrigued by this too, and created a version where this doesn't happen. What I observed was the error propagation goes out of control and quickly saturates, so the bottom right of the image is garbled. I thought about including it in the video, but then I'd have to explain the new custom pixel type required, and it didn't really fit. I would guess that the clamping is required to keep things under control - this could probably be achieved by other means however, if you're prepared to go beyond just the basic Floyd-Steinberg algorithm.
@@javidx9 We'd probably have to pause at the point where the clamping would kick in, to investigate what is actually happening: whether the error is overflowing, being miscalculated, or propagating unexpectedly. Perhaps partial remainders accumulate a discrepancy from rounding, or something like that. Debug when "clamp" is used and peek at the memory values of the variables.
@@javidx9 That makes sense, since errors would propagate and propagate diagonally, and since the algorithm's bias is towards "brightness" it would go out of control. It seems that the clamping was a fortuitous side effect that the algorithm needs. Did you try altering the bias constants too, to see if you could produce something more interesting?
RGB error dithering... but after you said there's no cross-dithering between colours, all I can think is: what happens if you dither Hue Saturation Brightness (HSB, or HSL/HSI/HSV if you prefer)? Wonder what sort of funky things would happen... In theory it should still work, but with the possibility of rotating all the way round to a complementary colour. But then converting to HSB and back after dithering might just be too much of a pain. Still, you'd get some funky results... Very much something that would have been tried on TV signals, maybe games consoles like the Sega Genesis, Super Nintendo or Amiga (assuming you're using composite out). Edit: wait, no, that's Y'CbCr... So much technology that's mostly gone and I've forgotten about.
nice one; I've been meaning to go back and have a look at the optical flow video.. try and figure something out for horizon tracking on the ESP32 Cam, a nice refresher 😁
The serial nature of Floyd-Steinberg dithering isn't the only problem with it. It's also not a good fit for animations. The way FS dithering propagates the error through the image means that if you change a single pixel, everywhere after it will change as well, resulting in shimmering noise in an animation, which looks pretty bad. An animation-safe form of dithering needs to be localised and keep its pattern still relative to the screen. A Bayer-matrix ordered dither works quite nicely. Well enough that the software renderer for the original Unreal from 1998 uses a 2x2 version of it on environmental textures to fake bilinear filtering. Interestingly, it's not dithering between colour values, but texture coordinates. Which makes sense as a way to save on performance. Much easier to add offsets to the coordinates of the texel to look up than to do bilinear filtering. Note that it only applies to the environment. Objects such as enemies and your weapon models are unaffected. Those just use nearest-neighbour texture filtering.
This dithering method would be amazing for hardware or software that has limited color support, for example NES, GB, GBC, etc. For people that create games or other software for those systems, I believe this information is very useful. Personally I'm not that good with code and algorithms, but I appreciate the video; the only way I'm able to create video games at all is because game engines exist.
Been using SDL2 with my follow-alongs, as it's a tried and true frontend to the standard graphical APIs, with a few additional goodies such as a render scaling function
I wish dithering was used these days, even with 24bit palettes. For example, the splash screen on the Hulu app (on Samsung TVs) uses a teal gradient, but has a lot of posterization banding. Dithering would eliminate that.
In general, I think it would be easy to have shaders (I am talking about GLSL here; if you don't know what that is, this comment is going to make little to no sense) output colors in a much broader range. Everything is calculated not with integers between 0 and 255, but with "reals" between 0 and 1. The precision of those "reals" is invisible to the programmer, so it could be very high. Then, in the last step, when the time comes to display it on the screen with only 24 bits per pixel, the GPU could dither the whole result (it would have the real result of what the color should be in those "reals", and the transformation to 24 bits would be the sampling). Invisible to the programmer (maybe a boolean to set to true), retro-compatible with a lot of games, and improving the result a lot.
Hello there. I'm using plain C at the moment for basic graphics programming, so all the C++ lambda goodies feel like some sort of black magic, ha ha ha! But still, from your good explanations I understood most of your video and the Floyd-Steinberg dithering, and it's very interesting. While watching the B&W dithering at 18m15s when you add the clamping, I started to wonder... I understand pixel values must not wrap around when diffusing the error. But isn't the clamping a little problematic, for potentially two reasons? #1) a slight decrease in dithering quality when we delete part of the error to be diffused in the next steps. #2) a significant amount of branching is now being added to perform the clamping of all four adjacent pixels for every pixel we scan. I thought perhaps it could be avoided, by simply computing Floyd-Steinberg in a signed buffer? Or even for in-place dithering, maybe one could add a simple preprocessing step to halve the intensity of the input buffer, and then cast that pixel array to a signed type during the processing. I dunno? Maybe it sounds too "hacky"? But it's an idea I'd like to explore.
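The signed-buffer idea above might look something like this: copy the 8-bit image into a float working buffer so intermediate sums can go below 0 or above 255 without wrapping, and quantize only when a pixel is finally assessed. A 1-bit sketch of that idea (illustrative names, not the video's implementation):

```cpp
#include <vector>

// Floyd-Steinberg on a signed float working buffer: no per-step clamping,
// overshoot is simply carried in the buffer until quantization.
std::vector<unsigned char> fsDither(const std::vector<unsigned char>& src,
                                    int w, int h)
{
    std::vector<float> buf(src.begin(), src.end());
    std::vector<unsigned char> out(src.size());

    // Add a share of the error to a neighbour, if it's inside the image.
    auto spread = [&](int x, int y, float e) {
        if (x >= 0 && x < w && y < h) buf[y * w + x] += e;
    };

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float old = buf[y * w + x];
            float q   = old < 128.0f ? 0.0f : 255.0f;  // quantize to 1 bit
            out[y * w + x] = (unsigned char)q;
            float err = old - q;
            spread(x + 1, y,     err * 7.0f / 16.0f);
            spread(x - 1, y + 1, err * 3.0f / 16.0f);
            spread(x,     y + 1, err * 5.0f / 16.0f);
            spread(x + 1, y + 1, err * 1.0f / 16.0f);
        }
    return out;
}
```

Note javid's observation in the replies above: without clamping, the accumulated error can saturate towards the bottom right on some images, so this trade-off is worth testing on real content rather than assuming it's an improvement.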
No, you need gamma correction, and the more bits per channel you have, the less gamma correction you need. sRGB to linear color space is already too much for even one bit per channel.
Thanks! It entirely depends on how many bits per pixel you filter down to; for an uncompressed memory surface of 100x100x3x8 bits, you can get it down to 100x100x3xN, where N is the number of bits.
Looking at the very distinct artifacts, it does look like the dithering algorithm used with the GIF format, which also uses indexed colors to reduce file size. I had no idea this algorithm could be that efficient! By the way, I noticed the use of sqrt and pow to find the closest match by minimizing the Euclidean distance... it would be far more efficient to completely ditch the square root (useless if you just want to find the shortest distance, since minimizing the squared distance is enough) and replace the pow with a good old multiplication :D
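The suggested optimization is simply comparing squared distances, since sqrt is monotonic and preserves the ordering. A small sketch of the palette-matching helper (illustrative name):

```cpp
// Squared Euclidean distance in RGB: no sqrt needed when distances are
// only compared, and plain multiplication replaces pow(x, 2).
int distSq(int r1, int g1, int b1, int r2, int g2, int b2)
{
    int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return dr * dr + dg * dg + db * db;
}
```

Finding the nearest palette entry then just means keeping the candidate with the smallest distSq value.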
Dithering isn't part of the GIF format. If you've got a GIF file with dithering, that was done by whatever program saved it. Also, it really should be avoided as it doesn't play well with the format's compression. Data compression is all about finding patterns that can be represented more simply. Noise doesn't contain any patterns, so you can't compress it. That's why it ruins compression for images in PNG and GIF format. Dithering makes a trade-off between spatial resolution and colour resolution. It fakes there being more colour by using patterns or noise that average out when viewed from a distance. And there's why it doesn't play well with compressed image formats: many dithering approaches rely on noise. One form of dithering that isn't terrible for use in compressed images is ordered dither, which relies on a regular pattern. Of course, if anything ends up resizing the image, that'll introduce a whole new set of problems to ruin things.
@@Roxor128 Although dithering isn't part of the GIF format, color indexing is. If you take an image originally encoded using some 24 bit format, and convert it down to 8 bits, you'd probably want some form of dithering if you want it to look anything like the original. As for compression, I would argue that if you're using dithering, your goal is to reduce the size of the total pixel space you're using, so you're ALREADY kind of compressing the image in some way. As to whether it's the best way to compress, that's besides the point
@@eddiebreeg3885 Indexed colour was designed for saving memory by separating the colour representation from the pixels while still allowing good colour accuracy. Yes, it does have the downside of limiting the number of colours you can have, but you can usually pick _which_ colours you'll have in your palette, and those will be the full accuracy of the display device (18-bit for VGA). When you only have 256KB of memory for the framebuffer, you're not going to waste it directly specifying the RGB values for every pixel. For some numbers, 640*480 with 16 colours takes about 150KB of memory (and was the highest resolution supported by plain old VGA). If you wanted to directly encode the 18-bit RGB values VGA uses for it, you'd need nearly 700KB (and it would have been a pain to program because 18-bit values do not fit neatly into byte-addressed memory). Compressed file formats are for saving disk space given a certain kind of image data. GIF was designed in the late 1980s when everyone was using indexed colour for everything. If you had an image saved as a GIF back then, it would almost certainly have been created from scratch with an artist-chosen palette, not converting it from an RGB form. That's what GIF's compression was designed to work with. Dithering undermines that. Just tried an experiment. Starting with a photograph resized to 640*480, I reduced the colour depth to 256 colours, generating the palette by the same method, but mapping the colours differently. One image used nearest neighbour, the other used error diffusion. The uncompressed image was 301KB. The nearest neighbour match was 157KB. The error-diffused dither was 192KB. Okay, that's not really fair, given GIF wasn't designed with photographic content in mind. So I tried another experiment with an image downloaded from FurAffinity that'd be a closer fit, even though it needed conversion. Same process but left at original size. Uncompressed 256-colour version was 1.1MB. 
Nearest neighbour was 339KB, error-diffused was 519KB. Also tried 16-colour versions. Uncompressed was 605KB, nearest neighbour 80KB, and error-diffused was 199KB. As for how they look, while the banding is a lot worse in the 16-colour version than the 256-colour one for nearest-neighbour, it really doesn't look too bad. I could buy an artist producing something similar from scratch (though obviously they'd do a better job). In all three cases, error diffusion makes the compression significantly worse. 22%, 53% and 148% larger than nearest neighbour, respectively for each of the test cases. You really have to ask yourself "Is it really worth potentially more than doubling the file size for the sake of some nice dithering?" EDIT: Realised that Paint Shop Pro can do ordered dither if you limit your results to a standard web-safe palette. Results: Photograph: Uncompressed: 301KB, Nearest: 46KB, Ordered: 70KB, Diffused: 107KB Drawn image: Uncompressed: 1.1MB, Nearest: 97KB, Ordered: 209KB, Diffused: 492KB. While ordered dither isn't as compression-friendly as nearest-neighbour, it looks a hell of a lot better, and compresses significantly better than error-diffusion, with error-diffusion coming out 52% bigger for the photograph and 135% bigger for the drawing.
@@Roxor128 This is the part where I have to concede I am no expert on dithering; I haven't looked at all possible algorithms, although I know there are a few. The only thing I wanted to point out in my original comment was that the look of this specific algorithm strongly resembles what you see in *modern* GIFs that have been converted from RGB formats, even though it wasn't designed for that originally. Thank you for taking the time with the experiments, I did learn from them :)
I've noticed that the quality on a 9th gen 2021 standard edition front facing ipad camera is quite poor, this video has shown me that it is because it dithers quite a lot.
I got my first IBM-compatible PC in the early 90's, with my monitor only able to display 256 colors in Windows 3.1. I remember that when I wanted to save images or videos, I would play around with quantization and dithering options in whatever graphics program to make them look right on my display. After watching this video, it really makes me appreciate what dithering does (approximate with far less information and still get the idea of the image across!). I think it would make for a cool post-processing effect for games based on the pixel game engine, but I'm not sure if it is speedy enough to keep the FPS up?
If you have something CPU-rendered, then you can make Floyd-Steinberg work, it's fine, but it also looks terrifyingly bad in motion. When you have moving and non-moving parts of the image, every little movement causes a ripple of value changes to the right of and below it (assuming you process from the top left), while everything above and to the left stays static; it distracts you from the actually moving parts of the image and pulls your attention towards noise at the bottom right. You can use blue noise instead to achieve a similar-looking dither effect. With precomputed blue noise, diffusion-style dither is insanely fast on the GPU (or CPU), trivially parallelisable, and you can control the behaviour: you can make it stable frame-to-frame or vary it uniformly between frames; there are even 3D or spatiotemporal blue noises specifically for this purpose. Computing optimised noise is extremely slow, but it can be precomputed such that it wraps around seamlessly and just shipped as a texture or array.
Some thoughts after seeing this video: I am curious about how we should handle the borders of the image. I mean, you assume it's always possible to delegate the error to the 4 pixels you mention, and that's not always the case. I also strongly disagree with your sentence saying that the number of colors is greater than what we can see. I think it's in general false (can you provide a pair of colors which, put side by side over an immense area, would be indistinguishable?), and I know that it is false for special yet frequently encountered cases: dark shades. There are only 256 variations of hueless colors, and that's quite a small number. Create a picture made of bands of those shades, put it on any screen, and you'll see bands. There is a reason why so many games have banding issues (at least old games). I also do not understand what you said about storing negative values. Yes, if you try to do IN PLACE dithering, with a fixed amount of memory, you may face this problem. But in your case you are already creating a whole new array anyway. Nothing stops you from having an array of signed shorts that will contain the errors. Finally, I am not an expert in dithering at all, but I am quite sure there must be alternatives that allow for parallel dithering (with a random function based on _blue_ noise to avoid artifacts?). And dithering is still useful today (for printing for example, and, as I mentioned, to deal with dark areas in movies or games). Last but not least, your approach seems to assume linearity of the brightness. But if you have a field made of gray pixels of, let's say, (127, 127, 127), it's incorrect to approximate it with a field of alternating black and white pixels. Doing so will result in a much brighter image, and it has nothing to do with negative values or anything like that.
(And to check that, while avoiding sampling issues that would merge the black and white pixels, you can display your gray image full screen, stand away from your screen and take a picture of it. Then do the same with the black and white pattern.) I banged my head on this problem for quite a long time.
Forgot to mention: is it possible to create an image that would look like a grid of black and white (like a chess board), but with slightly modified pixels, so that when you dither with the right colors you actually obtain a totally different result? In general: is it possible to hide an image inside another one, so you can only see the hidden picture with the right dithering algorithm?
Sorry if I write to you here. Could you do a tutorial on how to write a program in Visual Studio C++ 2022 to connect to a Firebird 4.0.1 database (maybe using Boost.Asio or another library like SOCI or IBPP)?
I'm looking at my Odroid Go Advance, and it could actually benefit from dithering; the colour resolution is very low. Maybe temporal dithering though, aka FRC. Today, you feed an 8-bit image to your PC monitor, but the panel resolution is only 6-bit with a nonlinear transfer curve, so temporal dithering is how they restore some of the requisite quality; or alternatively displays with 10-bit input and an 8-bit panel. I feel modern rendering also has something similar, where you add a noise function to your sample vector together with multiple samples or filtering to try to fight aliasing in the shaders or raytracing, so you dither/vary the sampling point instead of the colour value. A very unpleasant trait of Floyd-Steinberg is that it looks good in a static picture, but absolutely horrendous in motion: with minimal movement, parts of the image near the top left are stable, but near the bottom right they are increasingly flickery, especially disturbing in parts of the image that haven't really changed. Blue noise was often the accepted substitute here, just like in shaders, until low-discrepancy noise functions were invented recently. As to its tendency to lighten up dark areas, wouldn't it make sense to convert the intermediate values to linear space from gamma space and then back? You also don't need to clamp the intermediate values then, just leave them as floats. Then the small-value diffusion that occurs would be more careful not to visibly brighten the image, I would think.
Javidx9, hi. Do you want to touch on the topic of neural networks and artificial intelligence? I think with your teaching skills, I and other viewers could easily understand this topic.
Thanks, but sadly no. My academic background is actually in machine learning and network construction/simulation... I'm done with it. I find it quite dull.
Quote from the Linux kernel coding style: "Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged - the compiler knows the types anyway and can check those, and it only confuses the programmer. No wonder MicroSoft makes buggy programs."
Hello, I'm trying to make my own pixel game engine, but I encountered compiler error C2089 ("class too large") on the big PixelGameEngine class. Did you encounter the same error? If so, how did you solve it? Hope you will find the time to answer. Great video btw, I'm learning so much from you!
Thanks Samuele, sounds like you are allocating too much memory on the stack. Big areas of memory need to be allocated on heap and accessed via pointers or other appropriate interfaces.