Funnily, at 10:10 there might be a mistake: the number "1 0000 111 0000", after adding "00000 111", lengthens to "1 0000 111 00 111", so it "feels" like there is an additional 0 between the two triples of "1". And it doesn't feel like we needed to expand because the number got too big (we are between 256 and 512). But I didn't have time to check it.
Somebody else also noticed that the condition in the C version of the algorithm is wrong. `str[i] < '0' && str[i] > '9'` will always return false, since it's checking whether str[i] < 48 and str[i] > 57, which can never both be true. The condition should be `str[i] < '0' || str[i] > '9'`. My apologies for these mistakes.
@@zionmelson7936 I was formatting the 1s and 0s separately so one could see there was an additional digit there. I didn't go for the actual formatting it should have.
It's funny that right now at my job, I am dealing with serializing ASCII characters and you are making this video. I'm really glad I'm here George. Nicely done.
While on the topic, I know it's a bit early for the channel to explain it now, but whenever you get to architectures, please don't forget an endianness explanation; there are always explanations of how but not of why. Great video as always!!
This is not casting, this is converting. Casting is a grammatical operation (it forces the compiler to treat data as having a certain type, without actually doing any conversion).
Another way to do it:
1. Take the string as an argument
2. Access every character
3. Use fixed values with switch cases for every character from '0' to '9', like switch(str[i]) case '1': 001
4. Do bit shifting to build a BCD value containing all the characters
5. Convert BCD to binary
6. Return the binary value
It may or may not be faster
Amazing. I’m literally addicted to learning through your videos. They’re awesome! I can’t wait for the next one, and yes, I would love a video on converting the binary values back to strings to understand how the print function works!
Just want to say that you are the one I was searching for. You answer the same questions as mine, and in the way I wanted. I hope you become more well known.
I'm really happy I found this channel... I somewhat knew how it worked, but this just makes it really clear. You are great at explaining things. I am eagerly waiting for more videos
Using the IEEE-754 binary floating point 32 or 64 format, you would have to manually decode the floating point: first bit-cast the floating-point value to an unsigned integer of the same size, i.e. float -> ui32 or double -> ui64, then use the encoding specification to extract the sign, exponent and mantissa from the integer.
Great video! I would really like to see a video explaining the problem with null values in languages and how to avoid them; that would be very educational!
This channel is perfect to watch alongside taking CS50 to start my programming journey. Pretty excited about understanding everything in this video and learning more. Thanks for the quality videos.
Another video! I'm glad I checked your channel, since there was no notification. Typical of YouTube, sadly. Though it probably has to do with the delay between the last part and this video; YouTube deprioritizes notifications if you normally have a one-week cadence and then suddenly release a video a month later. Honestly, being a YouTuber is a ton of work.
When it gets to converting decimal fractions as strings to floats things get a lot more complicated. Looking forward to seeing a new video about this case in the future!
Literally: str(digit) - 0x30 for 0-9, str(uppercase letter) - 0x41 for A-Z, str(lowercase) - 0x61 for a-z. Converting between the two cases is as simple as char(lower) = char(upper) ^ 0x20.
ASCII allows for the use of a bitmask to get the number itself. The probably preferred way to convert these BCD numbers to an integer is reverse double dabble; there's a wiki article about it. This algorithm gets rid of multiplications, which are expensive on a CPU and area-intensive on an FPGA or custom silicon, and relies instead on fast/small shifts and add/sub operations.
My man your videos are awesome. Can you do an explanation on how the clock is used to move the process forward from the transistor level? For example, how do transistor gates use the clock to take the next instruction into the instruction register at the right time?
Great, that's a perfect illustration of what happens internally with the atoi() function. Ah, I noticed there is a minor difference between converting a numeric string to a binary integer vs converting it to a BCD number: multiplying by 10 vs shifting by 4 bits (since BCD represents each decimal digit in 4 bits). I find it rather interesting that on the IBM mainframe there exists a single machine instruction (CVD) which can convert a numeric string (up to 31 digits) to a BCD number. Likewise, there's another instruction (CVB) which can convert that BCD number into an integer.
0:07 Yes... Just yes. Maybe this will be SUPER slow, but yes :) I have this in mind:
1. Represent each character in the string with a 4-bit binary number (using its Unicode value)
2. Make a BCD number from all the characters
3. Convert BCD to binary. Now you have a number.
For example, "532":
1. "5" = 0101, "3" = 0011, "2" = 0010
2. 0101 0011 0010 (BCD-to-binary algorithm)
3. "532" = 1000010100
Now I'll watch the video :)
PS: Subtracting 48 is a very clever solution!! Then we can do the same thing as I did. Initially I just wanted to use a table like | Unicode number | number in binary | to convert each symbol to a number, but yeah, we can just subtract the '0' encoding to get the digit!
I would like a future video about converting an int to a string, but I am more interested in the much more complicated process of converting a float to a string.
This is actually easy, the way I think of it: since "0" is 48, we first subtract 48 to get the real digit value, then multiply by the correct power of 10. So once "1234" is input, turn the digits into binary (1, 10, 11, 100), then multiply and add (the computer does need to know which place value to start with, but that isn't hard), and we get the number before the next input arrives. The process happens so fast we can't notice it. I mean, we could even start backwards if we told the computer how long the number is ourselves, but that means passing a length parameter, so the other way is better.
Well, actually, there is a limit for integer numbers (as well as floats), at least in C. And there are also negative numbers. So a more proper function is a little bit more complex. I wrote mine like this:
```
int64_t StrToNum(char *Str) {
    int64_t Result = 0;
    uint32_t Index = 0;
    bool IsNegative = false;
    if (Str[0] == '-') {
        IsNegative = true;
        Index = 1;
    }
    while ((Str[Index] != '\0') && (Str[Index] >= '0') && (Str[Index] <= '9')) {
        Result = Result * 10 + (Str[Index] - '0');
        Index++;
    }
    return IsNegative ? -Result : Result;
}
```
"Shipping to Alaska, Hawaii, Puerto Rico, and International addresses is currently not available." -> pity I was actually looking for a new chair Anyway, good video, it's nice to see easier topics now and then.
I work on a PHP application where someone in the past reimplemented the string-to-number conversion... And if you have questions... Yes, it involved a loop with a bunch of ifs to check each digit. Yes, they messed it up. Yes, changing the usages of the function to "(int)$value" fixed a lot of bugs. Yes, the person who did it (according to git blame) still works there, but was promoted to manager. No, we don't do code reviews or anything like that.
The conditionals you add at 11:06 are incorrect, the C code should have || instead of &&, and the Python code should have a ‘or’ and check both ends the same way the C code does; the way you wrote the C condition can never possibly trigger to raise the error you intend, because a character can't possibly be below 0 and above 9 at the same time, and the Python condition will behave completely differently than the way you intend, because first the “‘0’ < char” will evaluate to a boolean, and thus will never trigger the “char > ‘9’” because, just like in C, booleans are either 0 or 1. And even if the Python code behaved the way you intended, it's still missing a ‘not’, so it would trigger when the char IS numeric, not when it's NOT. I believe it's also a better idea to return null in C in this case, because -1 is a valid integer and is thus much more difficult to detect as an error value. Overall, still a great video! You explain the computer science concept very well, which is ultimately the value this video provides, and I'm perfectly happy to overlook erroneous code examples because this is not a programming tutorial. I've learned an incredible amount about computer science from your videos already, and this video has been no exception.
Before watching the response, this was the algorithm I came up with (note the parentheses around the exponent, since ** binds tighter than -):
```
base = 10
str = "1030"
println(string_to_int(str, base))

fn string_to_int(str: string, base: int) {
    let number = 0
    each (index, char) of str {
        let digit = lookup_from(char)
        let exp = base ** (len(str) - index - 1)
        number += digit * exp
    }
    return number
}
```
Please make a video about big and little endianness, I always forget the order and don't understand the order of bits itself in comparison to the byte order.
Great video, and it is a very good introductory version of the algorithm. However, it is not an efficient one: the ALU can't parallelize the multiplications and the additions. You should see Andrei Alexandrescu's lecture on this! It could be a cool continuation of this video.
Thanks for the advice, I'll take a look at the lecture as soon as I get some free time. I'm assuming it is related to SIMD but if not I'm sure I'll enjoy it anyways.
I think it's more intuitive to multiply the digits by their powers of 10 first and then add them up. After that, the better algorithm you showed in the video would have been clearer, I think.
How to convert a number to a string: the key instrument is integer division. Let's consider the number 4327. Dividing by 10, we obtain 432 with remainder 7. We already know how to convert a single digit to its corresponding ASCII code: just add 48, i.e. ord('0'). So in this one step we obtained the so-called least significant digit (7) and are left with 432. Now we just repeat the same procedure until we are left with no more digits (when the last division yields 0 as the quotient). PS: Integer division is just a single processor instruction and actually gives both the quotient and the remainder in one go, so it's pretty fast.
I've done string to float/double/int myself, but with a different approach. Stuff skipped in this video:
- the sign of the value: to apply the sign, multiply the output value by -1 if a '-' is found at the start of the string
- decimal parsing: the same as string to int, but done in two phases; once a '.' is found, instead of multiplying the value by 10, divide each new digit's contribution by an extra 10 per iteration, and check that the value doesn't get too large
Yes please, make those 2 videos that you talked about in the video! Great job!! And may I give you a suggestion? Why don't you also make videos on DSA? Your animations are great, so everyone would be able to understand it completely. And one more thing: can you please make the next video on recursion?
I'm guessing that in order to convert an integer to a string you have to do the reverse process: instead of multiplying, you divide the number, take the remainder and add '0'.
I've always found it rather beautiful that ASCII encodes the decimal digits as 0x30 to 0x39 in hex, so mentally you can just drop the leading 0x3 and know what the number is.