I've recently discovered this channel and you're doing a really great job. There's a lack of low-level stuff like that on YT because everyone wants to create "Yet another React tutorial". Keep it up :)
The idea that IEEE 754 is a compression algorithm is a really profound way of looking at the world of computation. It brings a new level of thinking to the implementations of the world and helped me think more critically about these systems. Thanks!
Isn’t the key point that JS numbers are encoded with base 2 and that base 2 can’t represent 1/10 and 2/10 precisely (similar to how decimal numbers can’t precisely represent 1/3, even with lots of finite storage)? If JS numbers were encoded with base 10, then 0.1 and 0.2 could be represented precisely.
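For anyone who wants to see this directly in JS, a small sketch using nothing but the standard toPrecision method:

```javascript
// Neither 0.1 nor 0.2 has a finite base-2 expansion, so the stored
// doubles are only the nearest representable values. The default
// printing hides this; asking for more digits exposes it.
console.log((0.1).toPrecision(20));       // "0.10000000000000000555"
console.log((0.2).toPrecision(20));       // "0.20000000000000001110"
console.log((0.1 + 0.2).toPrecision(20)); // "0.30000000000000004441"
```

The two rounding errors don't cancel, which is why the sum compares unequal to the (also inexact) stored value of 0.3.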
You're completely right. Though I like to think there are many key points that come with IEEE 754 system. There is also a ton of nuance in how the operations work, and how rounding is handled. And many interesting things that happen when you start dealing with denormalised numbers - which is almost like a secondary system embedded in the specification. The main understanding I wanted people to come away with from this video was how the representation, encoding, and precision parts work, and the non-representability of 0.1 was left as more of an implication. I hope to come back to this topic in the future and dive deeper. In particular I'm interested in exploring the famous "Carmack" fast inverse square root hack. Also I'm a big fan of your blog. Thanks for watching!
Thank you so much for the detailed walkthrough of the specification. Though, one thing that's still bugging me is the fact that neither 0.3 nor 0.30000000000000004 can be represented in binary without recurring digits. 0.3 would actually be stored as 0.299999999999999988897769753748... , and 0.30...04 would be stored as 0.300000000000000044408920985006... . I understand that the discrepancy is caused by the fact that 0.1 and 0.2 also can't be represented accurately. So, my point is, why does Javascript show the first digit where the imprecision takes effect, instead of leaving it out entirely, or showing more, maybe even all decimal digits? Is it to prevent possible errors where the check for 0.1 + 0.2 === 0.3 would fail? Was the number of digits chosen so as to uniquely identify any number with the least amount of digits? Thanks in advance :)
I think it's a case of a standard amount of precision letting the little error sneak in. Your last guess is actually right: JS prints the shortest string of digits that uniquely identifies the underlying double, and 0.30000000000000004 happens to need all 17 significant digits. If it grabbed one less, 0.1 + 0.2 would for all intents and purposes print as 0.3.
Thank you so much for this! I learned floating point before but completely forgot how it worked but it just came up in another class where my professor's explanation didn't make any sense. Now I'm actually understanding it from your video. :)
Appreciate the very detailed and hands on tutorial! One question, doesn't the mantissa also need a (10-bit) bitmask in your encode function? I.e., function encode(n) { ... const mantissa = 1025 * percentage; mantissa = mantissa & 0b1111111111; ... } That way, in case of an overprecise mantissa, we don't clobber the sign and exponent bits in the return value.
0:11 I think you missed a zero EDIT: Jokes aside (there really is a missing zero though) great stuff as always! Was kind of hoping you would actually namedrop posits at some point after you mentioned you got intrigued by them last week, but otoh that's definitely a bit too deep into the hypothetical weeds for now
Haha I was hoping someone would count the zeros! But I have been looking into posits and unums quite a bit. Once I've wrapped my head around them enough to get a software implementation up and running, I think I'll make a video.
Yeah, there's a weird kind of nerdy fun in figuring out how they work and about what kind of special optimised use-cases you can have for them, no? Also just to become aware that all these systems for representing numbers have design trade-offs and aren't as "finished" as we might think. PICO-8 for example uses its own 16:16 fixed-point number system because Joseph White (its creator) thought it would make for a more interesting fantasy console, and it creates some interesting limitations.
For sure. There is so much to enjoy there - figuring out the system and contrasting it with IEEE 754, the fact that it's one guy just coming up with stuff like a mad scientist/evil genius, the multiple iterations, the controversy with William Kahan, and actually just reading the Wikipedia discussion page and seeing these bitter debates. It's amazing - so actually thank you for introducing me! I didn't know that about PICO-8 either. That's a project I've been watching from the outside a bit - I really enjoy the crazy procedural animations people are able to crack out of it.
Awesome, great video! Are there still plans to make a follow-up video with the operations? I've been looking for a video explaining how the operations work on IEEE 754 numbers but couldn't find much.
Dumb question. Why not represent numbers as something that takes up more bits if more accuracy is needed, or takes up less if less is required. 0.5 vs 0.39201329 just inherently have different amounts of information in them right?
Yazeed, I assure you this is definitely not how the "big" microphone sounds. The desk mic would be even clearer and crisper; imagine radio recordings on YouTube without music or anything, just voice. That's how it's going to be with a decently priced mic.
pppfff. It IS a js fault. js was supposed to be easy for scripts, why didn't they use a human notation? 0.1 + 0.2 = 0.3 =( That, and arrays starting at zero - no need in high level langs. Day 1, Month 1... February 1? =( Interesting video. And channel. Intense. =)
The VM series is making use of typed arrays - I think so far only Uint8Array has been used (to place raw bytes into an ArrayBuffer) but the essence is the same. I'm sure they'll be used there later as well.
Thanks very much. This was one of the best videos I found on this subject. Does this mean the following: in a 16-bit floating point representation, I can represent a maximum of 1024 unique values in each range of numbers (0-2), (2-4), (4-8), (8-16)? And if yes, does it imply that we get a better representation in the smaller exponent ranges, as we get the same number of unique values spread over a significantly smaller range? Please clarify. Thank you very much for the informative video.
Yes that's exactly right - in floating point, the closer you are to zero, the better the approximation can be, and the further you travel from zero, the worse the approximation gets.
@@LowByteProductions Thanks for the quick response. Can you please shed some light on this too? I read that the max positive number that can be represented in IEEE 754 32-bit floating point is 3.403E38. But as I understand it, there are only 2^32 values that can be uniquely represented with 32 binary bits. In that case, how do we even reach a number as huge as 3.403E38? I have difficulty inferring this - can you please help me decode it?
@@reddyharishkannapu1850 If you're still interested: the further you travel along the number line, the more floating point numbers get skipped. For example, the next possible value after 32.768 might be 32.770 (made-up numbers), but for 2837.768 the next possible value might already be 2837.794. And the bigger the number gets, the bigger the gap.
This video is about building a model of floating point, not necessarily in the same way it happens in hardware. The implementation uses floats internally, but we're not trying to bootstrap a system from the ground up; we're trying to learn how the algorithm works.
In JS, unfortunately not. It might be possible in node by writing a C++ extension that could actually examine the bit pattern of the NaN and pass the result back to JS.
Really good video! But how would this work when you don't have floating point calculation available? Math.log (as does Math.pow / **) returns a float. I kinda doubt this would be possible in JS, or at least not easily doable.
IEEE 754 is fully implementable in hardware (and, or, not, xor, shift), so these operations are definitely possible in js without falling back on the standard library. Interestingly, if you simply cast a floating point number to an integer, it acts as a crude, out of scale logarithm. This is the basis for the famous "fast inverse square root".
@@LowByteProductions Thank you! I think I'd already read about the fast inverse square root somewhere, but I looked it up and it's pretty cool.
I don't think most people say the imprecision of floating point numbers is a fault of JS. I believe they say it's a fault of JS to force all numbers into being floats, without giving programmers appropriate tools to tackle the imprecision as the given domain requires.
Which would be wrong anyway, since JS has ArrayBuffers, Uint{8, 16, 32}Arrays, Int{8, 16, 32}Arrays, and BigInts - for when specific or even arbitrary integer precision is required.
@@LowByteProductions I wonder how well-known they are in practice? I don't remember seeing them in the wild but I didn't look at too much JS anyway.
On this channel they are very well known 😁 If you're an everyday web developer making landing sites in react then you might not come across them, but if you do any work with audio, webgl, pixel pushing on the canvas, or transferring and/or parsing binary data then you'll be familiar. Most people that have worked with node will also be familiar with the idea of a Buffer object - which these days is just an abstraction built on the ArrayBuffer/TypedArray standards.
@@LowByteProductions good 🙂, I did mostly simple stuff although I came across ArrayBuffer. So it seems I misunderstood those JS critics.
Sounds great 👍 let me know when you've made the video that explains AND implements floating point numbers, which somehow fits in 5 minutes and makes sense to people. I'm sure it will be fantastic.
@@LowByteProductions Please brother, I have watched your video 3 times and I have been learning about floating points for the last 2 or 3 months. I was just trying to make a joke about his comment. Great content by the way - I would love to watch a 5 hour video from you about floating points 🙂
5:05 idk, but shouldn't the negative range be a mirror of the positive range, rather than fractions of the positive? I'm no math person, but the way it's defined here seems off to me.