
How can Computers Calculate Sine, Cosine, and More? | Introduction to the CORDIC Algorithm  

Bobater
3.5K subscribers
145K views

In this video, I'll explain the motivation for an algorithm to calculate sine, cosine, inverse tangent, and more in a fast and efficient way. I'll cover topics like geometry, calculus, and computer science to explain where and how these ideas are developed.
Music by Vincent Rubinetti
Download the music on Bandcamp:
vincerubinetti...
Stream the music on Spotify:
open.spotify.c...

Published: Sep 5, 2024

Comments: 315
@bobater · 1 year ago
I figured I'd make this comment to address some common questions I've been seeing:

EDIT: I made a mistake in saying that we don't need to calculate the angles a_n = tan^-1(2^-n). We do need the angles in order to keep track of the total angle we've rotated by, but we don't need to calculate tan(a_n) since we know it's 2^-n. Apologies for any confusion this may have caused.

1. Is this algorithm the main way to calculate these functions today?
Today, this algorithm is rarely used. Its main advantage is how few transistors it needs, at least compared to implementing a floating-point unit to use a polynomial method. For this reason, the main use of CORDIC today is in circuits where hardware is limited, not in more powerful modern computers. I think it's still a useful algorithm to learn because of how interesting the ideas behind it are, and because it gives you insight into how computers function.

2. If CORDIC is used to calculate sin, cos, and arctan, how do we get the cos and arctan values we need for the rotation matrices in the first place?
For the angles we use, which are tan^-1(2^-n), since we know the tangents of those angles are 2^-n, -we don't need to actually know the angle on the computer- we don't actually have to compute tan. That's because the rotation matrix only has terms that are tan(a_n), which will always be some 2^-n. -We're not actually using the values of the angles a_n when we do the computation.- We can calculate a_n = tan^-1(2^-n) in advance using something like a Taylor series; we need those angles to keep track of the total rotation we've done, by adding or subtracting each angle from the total depending on the direction of rotation. The cos(a_n) constant can likewise be computed in advance with a Taylor polynomial or some other approximation besides CORDIC.
As @Demki noted, we could also use CORDIC itself to calculate the product of all the cos(a_n) factors: start with a vector of length 1 on the x-axis, rotate it onto the y-axis, and find the final length of the vector, L. Multiplying by 1/L re-scales the vector to length 1 again, meaning the cos scalar = 1/L.

3. Why is the cos product K always the same? Wouldn't it change each time we run the algorithm?
The cos scalar K remains constant because the series of angles we rotate by, +/- a_n, stays constant in magnitude; only the signs change. cos(x) is an even function, meaning cos(-x) = cos(x), so the sign of a_n doesn't affect the cos(a_n) value. Since every cos(a_n) value is the same every time we run the algorithm, the constant K we scale the vector by is the same each time.

4. Why not use something like a Taylor series to approximate the values instead?
Polynomial approximations like Taylor series are pretty simple in their computations, but they require a lot more multiplication operations. The main advantage of CORDIC is how easy it is to implement with very few transistors, at least compared to a floating-point unit.

5. Why use the repeated-addition algorithm for multiplication? Aren't there faster ways to do it?
The main reason for using that algorithm was that it was simple to showcase without having to first introduce binary. I know there are much faster algorithms for computer multiplication, but the goal was to show something simple that motivated the bit-shifting optimization while still giving a general overview of how everything works. I do regret not putting a note in the video that explained this, but hopefully people will read this and not be misled.
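For anyone who wants to play with the ideas in this comment, here is a minimal floating-point sketch of rotation-mode CORDIC. The function name and iteration count are my own choices, and real hardware would use fixed-point arithmetic with bit shifts instead of multiplying by 2^-n:

```python
import math

# Precomputed once, as the comment describes: the angles a_n = atan(2^-n)
# and the scaling constant K = product of cos(a_n). In hardware these
# would live in a small ROM.
N = 24
ANGLES = [math.atan(2.0 ** -n) for n in range(N)]
K = 1.0
for n in range(N):
    K *= math.cos(ANGLES[n])

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for theta roughly in [-pi/2, pi/2].
    Returns (sin(theta), cos(theta))."""
    x, y = 1.0, 0.0      # start on the x-axis
    z = theta            # residual angle still to rotate by
    for n in range(N):
        d = 1.0 if z >= 0 else -1.0
        # Un-normalized rotation: only tan(a_n) = 2^-n appears,
        # so in hardware these products are just bit shifts.
        x, y = x - d * y * 2.0 ** -n, y + d * x * 2.0 ** -n
        z -= d * ANGLES[n]   # track the total rotation via the stored angles
    # Each step stretched the vector by 1/cos(a_n); undo it all at once.
    return K * y, K * x
```

Each iteration decides its direction by comparing the residual angle z to zero, which is exactly why the stored arctan table is needed even though tan(a_n) never has to be computed.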
@r-prime · 1 year ago
Hey I'm feeling really stupid rn but how do you keep track of whether or not you've overshot the angle without knowing what the individual angles are? Surely you would either need arctan(2^-n) or tan(theta) to be able to draw a comparison? But then you would need to calculate tan(theta) which obviously defeats the purpose...
@bobater · 1 year ago
@@r-prime Sorry for the confusion, I updated the comment. You are correct, we do need the angles stored on the computer, arctan(2^-n), to keep track of the total rotation that we've done. We still don't need to calculate the tan(a_n) though, since we know it will always be 2^-n.
@FrankHarwald · 1 year ago
CORDIC is indispensable for when you have a very simple or low-power computer, no transcendental functions available, and look-up tables are also too big, which is why no usual computer uses it; nowadays it is only used in a tiny special-case niche. Mostly computers don't calculate these functions at all: programs mostly multiply by normal vectors if you need rotation, or, heaven forbid, use a built-in floating-point sine/cosine (which internally uses a combination of function symmetry and periodicity, polynomial approximation, and maybe table lookup). The same is true for exponential functions, logarithms, and the inverses of the trig functions.
@dimastus · 1 year ago
​@@bobater So, for the N-bit angle we need to precalculate N arctans?
@domenickeller2564 · 1 year ago
@@dimastus Yes, basically each CORDIC micro-rotation gives you ca. 1 bit of precision. But there are newer versions of CORDIC that work a little differently.
@rizalardiansyah4486 · 1 year ago
I was expecting lookup tables and interpolation. Boy, how wrong I was...
@genehenson8851 · 1 year ago
I was expecting wizard. I was wrong in the first thirty seconds.
@Jono98806 · 1 year ago
That wouldn't be very accurate, and it's not necessary. Trigonometric functions have well-defined Taylor series, which would be the easiest way to compute them.
@davidhand9721 · 1 year ago
I've seen lookup tables in older machines.
@domenickeller2564 · 1 year ago
@@davidhand9721 In very old machines, and depending on the precision needed. CORDIC scales really well for high precision. But if you don't care too much, for example in machine learning stuff, you use look-up tables.
@lisamariefan · 1 year ago
Isn't that what some games on the SNES do with mode 7 though?
@nanamacapagal8342 · 1 year ago
I was expecting an infinite series expansion but this is so much more clever and optimized for computers
@NumbToons · 1 year ago
me too, I have been thinking this for so many years.
@NumbToons · 1 year ago
But do any calculators use the taylor series expansion? I mean, before CORDIC, were taylor series used?
@FabienAurejac · 1 year ago
@@NumbToons If I understand correctly, CORDIC was made for calculators with inadequate registers, in 1959, using techniques similar to ones described by Briggs much earlier... So I think, like WarpRulez's comment on this page says, that techniques like CORDIC were used earlier than equivalents of Taylor series, in the case of computing. You can also find on the internet the PDFs of publications by William E. Egbert, which describe the algorithms behind most of the elementary operations in old calculators.
@flameofthephoenix8395 · 1 year ago
I was thinking it might be a binary-style algorithm, where the angle you want to find the sine of would be converted to a binary decimal number, where 1 is 180 degrees, .1 is 90 degrees, .01 is 45 degrees, and so on. Then you can find the sine/cosine pairings through this method. Start with A = {Angle:180,X:0,Y:-1}, B = {Angle:90,X:1,Y:0}, C = {Angle:0,X:0,Y:1}. Then to find D = {Angle:45} you would do D={Angle:(B.Angle+C.Angle)/2, X:Normalize((B.X+C.X)/2,(B.Y+C.Y)/2).X, Y:Normalize((B.X+C.X)/2,(B.Y+C.Y)/2).Y}. Then with your previous binary decimal, you would use rotation matrices to find the exact angle: if the bit signifying 180 degrees is active, you use a rotation matrix with A; if the bit signifying 90 degrees is active you do the same with B, and so on.
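If I'm reading this suggestion right, the key step is that the normalized midpoint of two unit vectors bisects the angle between them, which is easy to check numerically. A hypothetical sketch, using the comment's convention of storing (X, Y) = (sin(angle), cos(angle)):

```python
import math

def normalize(x, y):
    """Scale a vector to unit length."""
    L = math.hypot(x, y)
    return x / L, y / L

# Using the comment's convention: an entry stores (sin(angle), cos(angle)).
B = (1.0, 0.0)   # 90 degrees
C = (0.0, 1.0)   # 0 degrees

# Bisection step: the normalized midpoint of two unit vectors
# bisects the angle between them.
D = normalize((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # should be 45 degrees
```

Repeating this halving builds the table of angles 180, 90, 45, 22.5, ... that the binary decimal indexes into.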
@diobrando7642 · 1 year ago
@@NumbToons Taylor series expansion is valid; 10 iterations are enough to calculate sine and cosine accurately.
@baconeko · 1 year ago
Amazing that you managed to put in an intuitive explanation of L’Hopital’s rule in the middle of it!
@LinesThatConnect · 1 year ago
Awesome! I've been curious about CORDIC for a while, but I never had the motivation to trudge through technical papers about it.
@bobater · 1 year ago
Thanks for watching! Your videos are part of what inspired me to do SoME this year. Love your content
@NithinJune · 1 year ago
lines literally one of my favorite math yt
@Nik-dz1yc · 1 year ago
I've always wanted to figure out how CORDIC worked. I ended up using a basic Taylor series with 3-4 terms while using a repeated Chebyshev polynomial to make it way more accurate
@AquesticYT · 1 year ago
I was thinking Taylor series with an if statement to just repeat every 2pi radians
@Nik-dz1yc · 1 year ago
@@AquesticYT Yeah you're correct, modulo is used because the Taylor series only covers -pi to pi. The problem with just using a Taylor series is that even with quite a few terms, it's quite inaccurate near the edges. One thing we can exploit, though, is that near 0, even with just 2-3 terms, it's very accurate. This is why I mentioned a Chebyshev polynomial: if you take the cosine of a number x, so that z = cos x, then you can calculate cos(2x) using 2z^2 - 1. This simple polynomial lets you calculate cos(2x) using cos x, and in fact there are infinitely many of these, but only the first few are practical, and you can repeat them too. I repeat the 2z^2 - 1 one 2 or 4 times, which allows for very high accuracy
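A toy version of what this thread describes (my own sketch, not Nik's actual code): evaluate a short Taylor series near 0, where it is very accurate, then apply the double-angle identity cos(2t) = 2cos²(t) − 1 repeatedly to get back to the original argument:

```python
import math

def cos_doubling(x, k=4, terms=4):
    """Approximate cos(x): evaluate a short Taylor series at x / 2^k,
    where it converges fast, then apply cos(2t) = 2*cos(t)^2 - 1
    a total of k times to undo the scaling."""
    t = x / 2 ** k
    # Taylor series of cos around 0: 1 - t^2/2 + t^4/24 - t^6/720 ...
    z = sum((-1) ** i * t ** (2 * i) / math.factorial(2 * i)
            for i in range(terms))
    for _ in range(k):
        z = 2 * z * z - 1
    return z
```

Each doubling step can amplify the Taylor error, which is why the series is evaluated with a margin of accuracy to spare.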
@NXTangl · 1 year ago
​@@Nik-dz1yc You don't even need to be accurate everywhere. Combining wrapping, mirroring, and the Pythagorean identity will get you anything as long as you can cover the first pi/4 radians, although for stability and to avoid sqrt(x) a full pi/2 radians is recommended...
@DjVortex-w · 1 year ago
If I understand correctly, CORDIC is useful when there's no support for hardware multiplication (or if multiplication is extremely slow), which is the case with more primitive or simple CPUs. However, if the processor supports fast multiplication (which modern processors do, as 1-clock-cycle multiplication is very common), then a power series (optimized with some lookup tables) is faster than CORDIC.
@Nik-dz1yc · 1 year ago
@@NXTangl you're right, I just proposed a simpler process here but there are a lot of optimizations that can be made to work on smaller spaces than 2pi, in fact I've heard people talk about very tiny spaces like pi/6 but I never looked into the math
@thechiliman500 · 1 year ago
I've been a casual math enjoyer for a long time (having only really done a small bit of maths up to DE and Linear Alg.) and I have to say your intuitive description of L'Hopital's rule was one of the best I've ever seen. Short and to the point, and without a lot of the messiness that you sometimes get when describing some facts in math. Thanks :)
@prwf · 1 year ago
Very nice presentation. The one thing that I found a bit cloudy was the K constant but after thinking it through I realized that K will always be equal to the product over the same angles, because cosine(-a)=cosine(a). This product looks like it should converge quite quickly so I’m assuming that we just use the constant (roughly 0.607) every time as within just 4 angle adjustments/rotations, K is within about 0.002 of this constant (K). We can safely assume that for almost every angle we are using to calculate the sin and cos of that it will take 4 or more rotations, so this constant is justified to be used for every calculation, and doesn’t need to be recalculated for each use of the algorithm. Thought I would explain my thought process just in case others were a bit confused.
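That convergence is easy to see numerically; a quick sketch (the function name is mine, and it uses cos(atan(t)) = 1/sqrt(1 + t²) so no trig calls are needed):

```python
import math

def cordic_gain(n_rotations):
    """K = product of cos(atan(2^-n)) over the first n rotations.
    Since cos(atan(t)) = 1 / sqrt(1 + t^2), each factor is
    1 / sqrt(1 + 4^-n)."""
    k = 1.0
    for n in range(n_rotations):
        k *= 1.0 / math.sqrt(1.0 + 4.0 ** -n)
    return k
```

After 4 rotations the product is already within about 0.002 of its limit of roughly 0.6072529, matching the numbers in the comment above.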
@bobater · 1 year ago
Spot on explanation! That’s one thing I regret not making as clear in the video now seeing people’s reactions to it. Glad you liked the video though!
@cdunku · 1 year ago
I was looking for resources on implementing my own trigonometry functions in C just for practice and this is perfect!
@spegee5332 · 1 year ago
oh man, this was my idea for SoME3! guess I gotta find a new one :P awesome to see this cool concept well explained though. Hope you keep making videos, you def know what you're talking about and the visualizations are great too.
@bobater · 1 year ago
Haha that's a cool coincidence! I was surprised to see that not many people had done videos on this topic when I was first thinking of making this video, maybe because it's somewhat more obscure and less important in the modern day. Thanks for the support!
@thingthingthingthingthingthing · 1 month ago
I love channels that explain complicated thing with small steps and examples that are understandable
@vari1535 · 1 year ago
Beautiful presentation! I especially liked how you built a visual intuition for the sine and cosine angle addition formulas and for L'Hôpital's rule rather than just presenting them as established facts (hell, you didn't even mention their traditional names--and there was no need!).
@byronwatkins2565 · 1 year ago
Presenting L'Hospital's work as his own is called Plagiarism; it is illegal.
@huvarda · 1 year ago
@@byronwatkins2565 It's L'Hopital not hospital. Also, L'Hopital didn't discover that formula, his teacher Bernoulli did. Also also he didn't present it as if he discovered it, he just described how it works. This reply has to be one of the silliest things I've read today
@wesleytaylor-rendal5648 · 10 months ago
I'm really impressed. I'm currently implementing a CORDIC calculation in an FPGA, both HLS and HDL implementations for two separate projects, but I have to admit this was a great refresher. Please keep it up. You're doing god's work.
@anon_y_mousse · 1 year ago
I think you've just given me the insight I needed to figure out how the equations were derived in the first place. Taking the simple ratios of a unit circle and making them more complicated by stepping makes so much sense I don't know why I didn't think of it before, but hopefully I'll build up a better understanding of math because of this.
@literallyjustayoutubecomme1591
Thank you for this video. I remember seeing a knowledgeable person on Reddit mention some techniques for trigonometric function computations, which included the CORDIC algorithm as well as Padé approximants. I had not seen an explainer on CORDIC aimed at non-experts yet.
@AjitSharma-km6ev · 15 days ago
Was always curious about this! Excellent explanation and visualization!
@herbie_the_hillbillie_goat · 1 year ago
Thanks for making this. I've been trying to understand CORDIC for quite some time now.
@marcfruchtman9473 · 1 year ago
When I first started watching this video, I didn't expect that I would also learn some interesting pieces of knowledge re: Limits and series convergence. Thanks for the video.
@OmegaQuark · 1 year ago
Glad to be here before this channel blows up. Such amazing visuals and intuitions are a scarce thing
@ingenuity23 · 1 year ago
you just dropped an intuitive proof of l'hôpital's rule and an explanation of cordic together, loved the video
@artsmith1347 · 1 year ago
I thought what he did at least rhymed with l'hôpital's rule -- which has always seemed to be magic. This deserves a re-watch because this may be the best explanation anywhere on how and why l'hôpital's rule works. Likewise, the explanation of the CORDIC Algorithm is beyond excellent -- in both words and graphics. Subscribed.
@ingenuity23-yg4ev · 1 year ago
@@artsmith1347 i think 3blue1brown also did a video on l'hôpital's and its a similarly intuitive approach
@Nim2 · 1 year ago
Kudos. This is one of my favorites of this SoMe so far.
@kaustubhpandey1395 · 1 year ago
Really cool entry. P.S. love the effort in the 8-bit computer you made. I have seen Ben's series on it.
@calvindang7291 · 1 year ago
i've never seen this proof for the trig identity before, that's cool
@gunhasirac · 1 year ago
very impressive video. Every college-level concept is explained in simple words and graphs. Incredible work sir.
@someone_who_is_in_this_wor4536
Now this is what I would like to actually like!
@mr.fishfish570 · 1 year ago
You surprised me when you whipped out that breadboard computer. I actually made one following Ben's tutorial too! Although I have bugs to squash...
@squ1dd13 · 1 year ago
fantastic video. i was really surprised to see you don’t have that many subscribers! you’ll be pleased to know you gained my subscription, and hopefully more people will find your channel soon. ❤ keep it up
@Murderbot2000 · 1 year ago
I really liked this. The solution that I arrived at (and a solution that I’ve used for other transcendental functions) is to use a Taylor series expansion with just enough terms to be within the precision of the LSB of the data type that I’m calculating with.
@sloshy1840 · 1 year ago
Such a beautiful presentation! I've always been curious about this
@oceannuclear · 1 year ago
Thanks for this! I've been meaning to understand CORDIC for a while but there's just too much fluff/ computer science background that I couldn't be bothered to read into. This cuts right through all the fluff (especially the (1/2)^n series at the beginning of the video) and helps me understand it!
@bobater · 1 year ago
That was exactly my goal in making this video, to just present the ideas and motives behind CORDIC without going too far into the technicalities of it. Glad you liked it!
@ximin_sleepy · 1 year ago
what a channel, please do a series on directed search algorithms.
@yolamontalvan9502 · 6 months ago
This is cool stuff. Highly advanced for rocket engineers.
@sdspivey · 1 year ago
Your computer is not multiplying, it is repeatedly adding. That's why it is slow. Using a shift-add or similar algorithm, it should be much faster. At a minimum, adding "16" seven times would be twice as fast as adding "7" sixteen times.
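For reference, the shift-and-add idea in a few lines (a generic sketch, not the video's actual code):

```python
def shift_add_multiply(a, b):
    """Multiply non-negative integers with one addition per set bit of b:
    scan b's bits, and for each 1 bit add `a` shifted to that bit's
    place value. This takes ~log2(b) steps instead of b repeated additions."""
    result = 0
    shift = 0
    while b > 0:
        if b & 1:
            result += a << shift
        b >>= 1
        shift += 1
    return result
```

This is the same bit-shifting idea the video uses as motivation: multiplying by a power of two costs only a shift.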
@bobater · 1 year ago
You are correct. Modern computers have multiply instructions that can complete in just a few clock cycles. The main purpose of showcasing the slower repeated addition algorithm was to show how bit-shifting is faster than multiplication by numbers that aren’t powers of 2, and to give motivation for that optimization
@Pence128 · 1 year ago
@@bobater Not the point. A computer with a parallel adder can multiply in linear time using grade 4 math. Your code runs in _exponential_ time.
@dsdsspp7130 · 1 year ago
@@Pence128 That most certainly isn't true. By that logic, computers have parallel processors so everything parallelizable can be done in linear time. Obviously there is a finite number of these parallel adders, so the time complexity can never be linear. And what do you mean his code runs in exponential time? Multiplication by repeated addition runs in polynomial time. Plus, that is exactly the point: the comment says "Using a Shift-Add or similar algorithm, it should be much faster", which is literally explained in the video and used to explain the CORDIC algorithm. What's the problem here exactly?
@yewo.m · 3 months ago
Back when I was first learning to program (using C++), I was so enthusiastic that I wanted to make my own command-line calculator. At first I made it using just basic arithmetic operations. And then later on I added trig functions (along with the other typical math functions). But at the time I didn't know the standard library already provided built-in sine/cosine/tangent functions, so I ended up writing my own rudimentary ones using Taylor's series approximations 😅
@chrisd561 · 7 months ago
I've been trying to find this. Thanks!
@katefick4246 · 11 months ago
This is awesome and so are you!!! Excellent graphics and clear explanation!
@martinnguyen9720 · 1 year ago
Pretty good video and lots of good feedback in the comments. A few expanded explanations (now covered in the comments but would be better in the video itself) and slowing down the pacing a tad would be greatly beneficial. Great job!
@bobater · 1 year ago
Thanks for the feedback! This is only my 2nd video so obviously there is still a lot to learn and improve, but all things I can do better on in my next video.
@medazizmhenni2249 · 2 months ago
Even if the terms of a series are gradually getting smaller when n is getting bigger, the series can diverge. For example, the harmonic series (1/n) diverges even though the sequence 1/n converges to 0. In fact, if a series converges, then the corresponding sequence converges to 0. But, if a sequence converges to 0, the corresponding series can either converge or diverge.
@osama_zn · 1 year ago
Wonderful video!! please keep going!
@arijeetsarkar1512 · 1 year ago
What an amazing channel Hope that you grow to exponential heights. You help in visualization so well❤
@loszhor · 1 year ago
Thank you for the information.
@sreyanchakravarty7694 · 1 year ago
Thank you for this video. It is over 10 years I wanted to know this.
@hydropage2855 · 1 year ago
My calculus teacher last year taught us about Taylor series expansions, and he based the explanation off wondering how calculators get trig values, implying they use Taylor series. I spoke with him later in private to explain that there exists this algorithm and calculators really don’t use series, which he was surprised about
@mattiaviola7152 · 1 year ago
Maybe your teacher was not totally wrong. In the video the author says that this method is only one option, used in old computers.
@hydropage2855 · 1 year ago
@@mattiaviola7152 I just didn’t appreciate that he didn’t point out that it was an outdated or primitive way of doing it, he asserted it was how all calculators did it, which is misinformation. I spoke to him in private about CORDIC later
@walkingturtle6646 · 1 year ago
excellent video, glad I discover your channel. Hope to see more videos in the same style.
@gigantopithecus8254 · 9 months ago
i found my own method for calculating cos(x):
1. choose accuracy n
2. divide the input number by 2^(n-1)
3. square the number and subtract 2 from that
4. repeat step 3 n times
5. divide the answer by two
this works due to the double angle formula and Chebyshev polynomials
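If I'm reading the recipe right, it leans on the identity (2cos t)² − 2 = 2cos(2t), with the small-angle approximation 2cos t ≈ 2 − t² kicking things off on the first squaring. A sketch of my reading of it:

```python
import math

def cos_squaring(x, n=12):
    """The recipe above: z = x / 2^(n-1); apply z -> z*z - 2, n times;
    halve the result. After the first step z ~ -2*cos(x / 2^(n-1))
    (small-angle approximation), and every later step uses
    (2*cos t)^2 - 2 = 2*cos(2t) to double the angle back up to x."""
    z = x / 2 ** (n - 1)
    for _ in range(n):
        z = z * z - 2
    return z / 2
```

Larger n shrinks the starting angle and with it the small-angle error, at the cost of more squaring steps.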
@yolamontalvan9502 · 6 months ago
Thank you for this lesson. I learned something beyond my capability. You forgot to mention the software you use to make those circles and triangles.
@MrSilversMathSheets · 11 months ago
I was disappointed that this did not receive a mention in the final announcement. I hope you continue to make videos though.
@tobysuren · 1 year ago
I'm not sure if I just missed it, but if to calculate inverse tangent you need to use this method and you need inverse tangent to use this method, how are the inverse tangents of 2^-n calculated?
@bobater · 1 year ago
Good question! There are actually other ways besides this algorithm to calculate these functions. They are usually more complex hardware-wise and might take more operations, which is the advantage of using cordic over one of those. One way to calculate those values in advance though would be to use a Taylor series, which is a way to approximate functions with polynomials. Another thing to note is that modern more powerful computers probably don’t use the cordic method, but some of the ideas from it are still used
@aleksandersabak · 1 year ago
​@@bobater so to actually use the method you presented we need precomputed values of all invers tangents of powers of two we're going to use and the final product of cosines?
@bobater · 1 year ago
@@aleksandersabak -actually we just need the cos constant, since we know the tangent of the nth angle is 2^-n, therefore we don’t need the angles in advance since we already have the cos constant and the tan(a_n) values- Yes, we need both the cos constant and each angle, that way we can scale the vector correctly at the end and know what total angle we've rotated by.
@3snoW_ · 1 year ago
@@bobater Then how do we know whether to add or subtract the next angle? How do we iterate to the desired angle if we don't know the values of the angles we're adding/subtracting to reach it?
@bobater · 1 year ago
@@3snoW_ You're correct, we do need the angles, we just don't need to actually calculate their tangents. Sorry for the confusion.
@rudypieplenbosch6752 · 5 months ago
In the old days, I calculated the arctan in assembler using the CORDIC. There is a mathematical proof of the CORDIC somewhere.
@NithinJune · 1 year ago
There’s a #SOME3 ??? i’m so excited?!!
@dyld921 · 1 year ago
A quicker way to prove convergence of the inverse tangent series: Note that tan^-1(x) < x for x > 0, therefore the sum of tan^-1(2^-n) is less than the sum of 2^-n, which is finite.
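Spelled out, the comparison in this comment is (using that arctan has slope 1 at 0 and is concave on [0, ∞)):

```latex
\tan^{-1}(x) < x \ \text{ for } x > 0
\quad\Longrightarrow\quad
\sum_{n=0}^{\infty}\tan^{-1}\!\left(2^{-n}\right)
\;<\; \sum_{n=0}^{\infty} 2^{-n} \;=\; 2 \;<\; \infty.
```

Since the terms are positive and the partial sums are bounded, the series converges.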
@Speak4Yourself2 · 1 year ago
Excellent video.
@gabedarrett1301 · 1 year ago
12:08 Each term in the harmonic series is smaller than the one before, but it doesn't converge
@loganpockrus6882 · 1 year ago
However, taking the limit as n approaches infinity of [1/(n+1)] / [1/n], which is equivalent to [n / (n+1)], leads to 1, and 1 is not less than 1. Hence, the ratio test does not say the harmonic series converges.
@MrSilversMathSheets · 1 year ago
This is a really nice video. I am excited about the content on this channel because it corresponds well to the content on my new channel. Your video is very professional and easy to understand - a feat considering the content. One quibble: the part about convergence misses a really important point - you need to cover all possible values. It turns out that the atan(.5^n) basis works, but not because the rate of convergence is 0.5. For example, tan(pi/4*(.5^n)) would have the same asymptotic rate of convergence but leaves the range of computable values a Swiss cheese of holes. The property you need is: the sum of all remaining basis numbers must equal or exceed the last included basis number.
@bobater · 1 year ago
Good point. I'll admit this is something I missed when researching the video, but it makes intuitive sense as to why that property needs to be satisfied. Thanks for bringing this up.
@kodirovsshik · 1 year ago
Everybody's gangsta until bro casually pulls up a home-made CPU to calculate the algorithm
@zandrillon4764 · 1 year ago
Very awesome video! Educational and funny!
@CMRTUBE · 1 year ago
This is insane. Love it!!
@piman406 · 1 year ago
You deserve more subscribers
@inyobill · 1 year ago
"How CAN computers compute (trig functions)" vs. "How DO computers ...". The hard real-time systems I worked on used Taylor Series approximation. 3 or 4 terms are adequte to achieve required accuracy (usually LSB).
@Tynach · 1 year ago
I'd be curious if there are any hardware-based optimizations for this. For example, with multiplication, you don't have to add one number over and over again. You can instead AND the two values together several times simultaneously, each time at a different offset, which gives you 'partial products' in O(1) time. Then, you can have a series of 'reduction stages', where the partial products with the same place value are added together. Each reduction stage runs in O(1) as well, but you do need O(log(n)) reduction stages, and they do need to run in series. Finally, after the reduction stages, you're left with two numbers that you simply add together to get the final value. Because 1, generating the partial products is O(1); 2, you only need O(log(n)) reduction stages; and 3, the final adder can be implemented as a carry-lookahead adder (or other fast adder), which also runs in roughly O(log(n)), it's said that this brings multiplication's time complexity down to O(log(n)). As for how many gates it takes, that depends on how you build your reduction stages. If you use the Wallace method, you'll end up with fewer bits to add at the very end (making even a ripple-carry adder viable). However, you'll have a significantly higher number of total gates than if you use the Dadda method, which instead has more bits at the end to add up. A compromise is the 'Reduced Complexity Wallace' (or RCW) method. It's worth noting, however, that this doesn't mean that O(log(n)) computations are performed with this method, just that so many are done in parallel that the duration of time it takes to perform these computations is O(log(n)). However, using multidimensional wizardry, Joris van der Hoeven and David Harvey finally (announced in 2019, published in 2021) devised an algorithm for binary multiplication using only O(n log(n)) binary operations. This leads me to suspect they may not be human, but instead advanced transdimensional aliens who perceive time and space non-linearly. 
This would explain why their method isn't feasible with current manufacturing techniques, and would require a substrate with more than 3 spatial dimensions.
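The partial-product generation described above can be sketched in software (my own sketch; the log-depth Wallace/Dadda reduction tree is simulated here by a plain sum, since software has no notion of gate delay):

```python
def partial_product_multiply(a, b, bits=32):
    """Hardware-style multiplication sketch: form every partial product
    at once -- `a` taken or not depending on one bit of b, shifted to that
    bit's place value -- then reduce them by addition. In hardware the
    partials appear in O(1) and the reduction tree has O(log n) depth."""
    partials = [(a << i) if (b >> i) & 1 else 0 for i in range(bits)]
    return sum(partials)
```

The interesting part in hardware is purely the reduction: carry-save adders compress three partials to two per stage, so the depth grows only logarithmically.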
@aguyontheinternet8436 · 1 year ago
I wish you had made this video a year ago, when I searched for this in my first year of high school geometry. At the start, you joked about being told that the answer was "wizard magic", but that is genuinely what I was told in school. Why does youtube have a better curriculum than the education system for math anyway? oh, this is for SoME3. So I guess the answer to my question is 3B1B.
@nalat1suket4nk0 · 1 year ago
lets go SoME3!!!!
@RealMusicHype · 1 year ago
00:24 Very good!
@tantuz1128 · 1 year ago
Nice and informative video. However, this is not how modern computers compute trig functions. Multipliers (even floating point) are cheap and abundant. A few layers of small lookup tables combined with complex multiplication will give you a lot of angle resolution and precision. Or a small LUT + Chebyshev polynomial.
@TrickyNekro · 3 months ago
Watch until the end. And sure, unless your FPU has explicit trig functions, if you have an FPU at all, this method is definitely being used today, albeit in hardware-restricted environments.
@vikingthedude · 1 year ago
These graphics are beautiful. Were they made in Manim?
@MissPiggyM976 · 1 year ago
Very well done, thanks !
@loweffortdev · 1 year ago
🥲 Very cool video. You explain well. Best wishes man.
@yashshah7066 · 1 year ago
Incredible video!
@user-pd2lx8fb3n · 8 months ago
This video is AWESOME
@ddognine · 11 months ago
Warning: unless you've had a Calculus II class (infinite series, product sums, l'Hopital's Rule, etc.), this will be over your head. Otherwise, this is a good video.
@Stakodron
@Stakodron 1 year ago
Very good video, really. But from the perspective of a newbie to math and computer science, I would like to know why we need to do it this way. Why is this such a big problem for computers in the first place? ;)
@bobater
@bobater 1 year ago
Well, think about it. How would you go about computing sin(x) if you had to do it yourself? Since sin(x) is a transcendental function, we need to use something like an infinite series or some approximation to actually calculate it. We can’t actually add infinite terms though, and often the terms of the series are expensive to compute. This is where the challenge lies and what led to the development of algorithms like CORDIC and other approximations.
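To make the "infinite series" idea concrete, here is a truncated Maclaurin series for sin(x), as an illustrative sketch (my own function name, not the video's implementation). Note how every term costs multiplications and divisions, which is exactly the expense CORDIC avoids:

```python
import math

def sin_taylor(x: float, terms: int = 10) -> float:
    """Truncated Maclaurin series: sin x = x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))  # next odd-power term
    return total

print(sin_taylor(1.0), math.sin(1.0))  # the two values agree closely
```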
@RisetotheEquation
@RisetotheEquation 1 year ago
Hey Bobater, great job! I was wondering if any of this might help explain the super cool phenomenon of tan 89, tan 89.9, tan 89.99, etc. Why they keep going up by factors of 10 has always been something I've been curious about.
@Noname-67
@Noname-67 11 months ago
It's easier to see when you convert them to cotangents using the fact that cot x = tan(90 - x). This gives cot 1, cot 0.1, cot 0.01, and so on. They scale by approximately a factor of 10 because cot x is close to 1/x for small x, using sin x ≈ x and cos x ≈ 1.
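That cot x ≈ 1/x explanation is easy to check numerically (an illustrative sketch):

```python
import math

# tan(90deg - x) = cot(x) ~ 1/x for small x, so each extra 9 in 89.9...
# shrinks x by a factor of 10 and grows the tangent by about the same factor.
for k in range(4):
    deg = 90 - 10.0 ** -k                   # 89, 89.9, 89.99, 89.999
    x = math.radians(10.0 ** -k)            # the small complementary angle
    print("tan(%g deg) = %.3f  vs  1/x = %.3f" % (deg, math.tan(math.radians(deg)), 1 / x))
```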
@angeldude101
@angeldude101 1 year ago
I think my main question at the end is how you determine if the current angle is more or less than the desired one. Do you have pre-computed arctan values for each power of two to add or subtract from the running total angle? P.S. While it wouldn't make any difference in the code since eliminating the identical expressions is an obvious optimization, the not-quite-rotation matrix used is pretty clearly a complex number on the Re(z) = 1 line. I just find it kind of funny when sine and cosine systems of equations show up and people immediately put them into a matrix rather than using the much shorter but equivalent complex number representation. It's also pretty easy to remember the angle-sum formula if Euler's formula is already provided just by using the standard power rule of a^(x+y) = (a^x)(a^y).
@tunafllsh
@tunafllsh 1 year ago
While the complex representation is the same, computationally you still do the matrix multiplication.
@bobater
@bobater 1 year ago
Yep, arctan angle values are pre-computed. After that, we just add or subtract angles depending on rotation direction.
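The pre-computed table and the add/subtract bookkeeping described here can be sketched in floating point (variable names are mine; a real implementation would use fixed-point integers and bit shifts instead of multiplying by 2^-i):

```python
import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]  # precomputed arctan table
K = math.prod(math.cos(a) for a in ANGLES)         # scale-correction constant

def cordic_sin_cos(theta: float) -> tuple[float, float]:
    """Rotation-mode CORDIC for |theta| <= pi/2, in radians."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0                 # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]                          # running total of rotation
    return K * y, K * x                             # (sin, cos)

s, c = cordic_sin_cos(0.5)
print(s, math.sin(0.5))
```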
@damyankorena
@damyankorena 11 months ago
13:10 technically this statement is incorrect, and the harmonic series is a popular counterexample. Awesome video though!
@CesarGrossmann
@CesarGrossmann 4 months ago
Very interesting. Is that how scientific calculators do this kind of calculations? Like the venerable TI-30?
@Antagon666
@Antagon666 3 months ago
7:30, well, multiplication speed is relative. On x86, floating point multiplication takes the same number of cycles as addition.
@arielfuxman8868
@arielfuxman8868 1 year ago
Nice Vid
@EPMTUNES
@EPMTUNES 1 year ago
Nice video! Awesome song choice.
@yoyobom1130
@yoyobom1130 1 year ago
This video is really really really mind-blowing
@nikita_x44
@nikita_x44 1 year ago
In practice, if you already have a microcomputer capable of simple operations, then you should use a polynomial approximation of sin/cos, up to the 5th power for sine (3 terms) and the 4th for cosine (constant + 2 terms). It is pretty fast and uses only a couple of multiplications and additions.
@bobater
@bobater 1 year ago
Yes if you have access to fast hardware multiplication it’s usually better to use a polynomial based method rather than CORDIC. CORDIC is a better method on computers without access to fast hardware multiplication.
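The low-degree polynomials mentioned in this thread can be nested Horner-style so they really are just a few multiplies and adds. A sketch using the plain Taylor coefficients (my own function names; production code would typically use minimax coefficients for tighter error bounds):

```python
import math

def sin_poly(x: float) -> float:
    """Degree-5 polynomial, 3 terms: x - x^3/6 + x^5/120, nested."""
    x2 = x * x
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0))

def cos_poly(x: float) -> float:
    """Degree-4 polynomial, constant + 2 terms: 1 - x^2/2 + x^4/24, nested."""
    x2 = x * x
    return 1.0 - x2 / 2.0 * (1.0 - x2 / 12.0)

print(sin_poly(0.5), math.sin(0.5))  # close for small arguments
```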
@lcdvasrm
@lcdvasrm 1 year ago
Be careful. It is basic math knowledge that a series whose terms converge to 0 does not always converge. Sum(1/x) does not converge.
@rafaelschipiura9865
@rafaelschipiura9865 1 year ago
They didn't calculate the limit of the terms; they calculated the ratio between consecutive terms as n goes to infinity. For 1/n, that ratio limit comes out to 1, not less than 1, so the test doesn't imply convergence there.
@bpark10001
@bpark10001 1 month ago
You missed one last trick to eliminate that last multiplication you did to correct for the cosine terms from all the iterations: you set the initial vector length to that number (0.6...) rather than unity. As the iterations proceed, the vector length grows, ultimately reaching unity as the last iteration is done. Actually, only a finite number of iterations is done (this number must be fixed in order for the cosine correction term to be fixed), and the cosine correction is calculated for that number.

You also do not mention memory requirements. You need working registers to store the present vector's X and Y terms. Also required in ROM are the cosine correction value and a table of one angle for each iteration number. So ROM must contain N + 1 values: one for the cosine correction and N for the iteration angles. This can get large; for example, for a 16-iteration routine, you need storage for 17 constants, each long enough to support the precision of the calculation.

You can save one iteration by reducing the angle range to one octant and folding the input angle into that. This also permits eliminating negative numbers from the calculation of the constants.
@mattiaviola7152
@mattiaviola7152 1 year ago
Thanks, really very nice, I love Manim. One more clarification needed: I would expect K to depend on the number of iterations, not be constant. Do you store in memory a vector of, say, the first 10 K-values?
@bobater
@bobater 1 year ago
Thank you! Yes, the K value changes slightly depending on the number of iterations used to calculate it. However, it converges very quickly, so storing the calculated value for something like 10 iterations is sufficient as long as we do about that many iterations when we're running the algorithm.
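That convergence is easy to check numerically. A short illustrative sketch, printing K as a running product of cos(arctan(2^-i)) per iteration:

```python
import math

k = 1.0
for i in range(12):
    k *= math.cos(math.atan(2.0 ** -i))  # scale factor of pseudo-rotation i
    print(i + 1, k)
# after about 10 iterations k has settled near 0.6072529...
```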
@zxborg9681
@zxborg9681 6 months ago
Super interesting, I always wondered how CORDIC worked. Question, though, at 12:19, I'm not sure that the ratio of a[n+1]/a[n] is really sufficient to prove convergence, since the later terms of the harmonic series (1/2 + 1/3 + 1/4 + ...) seem to satisfy that test but the series still doesn't converge. Maybe I'm missing something though. Anyways, great video!
@bobater
@bobater 6 months ago
Thanks for watching! The key with the ratio test is that we’re evaluating the ratio as a limit as we approach infinity. For the harmonic series, that ratio approaches 1 as we approach infinity. Since the ratio isn’t less than 1, the test does not tell us that the terms of the harmonic series are shrinking relative to each other at the limit, and therefore doesn’t imply that the harmonic series converges, which is why there is no contradiction in using the test
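Computing both ratios side by side makes the distinction concrete (an illustrative sketch):

```python
import math

for n in (1, 5, 10, 20):
    cordic_ratio = math.atan(2.0 ** -(n + 1)) / math.atan(2.0 ** -n)
    harmonic_ratio = (1.0 / (n + 1)) / (1.0 / n)
    print(n, cordic_ratio, harmonic_ratio)
# the arctan ratio tends to 1/2 (strictly below 1, so the ratio test applies);
# the harmonic ratio tends to 1, where the test is inconclusive
```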
@RyanFick1
@RyanFick1 1 year ago
Good video 👍
@RayanMADAO
@RayanMADAO 1 year ago
So magic
@umedmaru4445
@umedmaru4445 4 months ago
Quick question. I was wondering if computers could use the complex plane to rotate points (by multiplying complex numbers) instead of using a rotation matrix. Is there a reason this isn't used?
@LarsDonner
@LarsDonner 1 year ago
While the CORDIC algorithm is undeniably neat, it is not the most efficient way to implement trigonometric functions on modern processors. Instead repeated fused-multiply-add instructions are used to evaluate polynomials that approximate the given function to very high accuracy. If you're using radians, the biggest challenge for sine and cosine, both with and without CORDIC, is the argument reduction, aka modulo π.
@bobater
@bobater 1 year ago
You are correct. The main use of CORDIC today is in FPGA chips with fewer transistors to work with, since it's a very hardware-efficient algorithm. As you said, though, on modern processors better methods have been found for trigonometric functions.
@CHEpachilo
@CHEpachilo 1 year ago
@@bobater I suppose this is because a floating point processing unit is extremely impractical in an FPGA. But when you have an FP unit on your chip, it's hard to beat a good old Taylor series approximation, or some other cheap polynomial stuff. I know one system that used a sin(pi/2 * x) approximation based on Krogh interpolation between two points (0 and 1) with fixed values (0 and 1) and derivatives (pi/2 and 0).
@vineboomsoundeffect5395
@vineboomsoundeffect5395 1 year ago
12:40 so that's why L'Hôpital's rule works!
@kindlin
@kindlin 1 year ago
I left this video pretty confused, and the example at the end left me mystified. The point of this whole thing is that every matrix shown from 15:00 to 16:00 is actually a matrix of just 1's, 2's, and other powers of 2. So this allows the bit shifting talked about earlier to come into play. I'm still not sure exactly what is getting bit shifted, or what is adding to what, but I'm closer.
@bobater
@bobater 1 year ago
To clarify, it might be easier to separate the matrix multiplication into two separate equations for x’ and y’. When you do that, you can see that the x’ = x - y*tan(a_n) and y’ = x*tan(a_n) + y. We know tan(a_n) = +/- 2^-n, so when we calculate the new x we subtract the old y bitshifted n times to the right, and for the new y we add the old x bitshifted n times to the right. Note that this is if a_n is positive. If it were negative, we would just replace adding with subtracting and vice versa. Basically, you can think of it as storing variables for a_n, x, y, x’, y’, and z, which is the total angle we’ve rotated by. Then, we do the calculations from the equations I wrote which just involve bitshifts and additions with x and y, in order to get our new x’ and y’.
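In integer fixed-point arithmetic, one such update step might look like this (an illustrative sketch with my own names; `>> n` is the n-bit right shift standing in for multiplication by 2^-n):

```python
def cordic_step(x: int, y: int, n: int, a_n_positive: bool) -> tuple[int, int]:
    """One pseudo-rotation in fixed point: tan(a_n) = 2**-n becomes a shift.

    x and y are fixed-point integers (e.g. with 16 fractional bits);
    the signature here is just for illustration.
    """
    if a_n_positive:
        return x - (y >> n), y + (x >> n)   # x' = x - y*2^-n, y' = y + x*2^-n
    else:
        return x + (y >> n), y - (x >> n)   # signs flip for a negative angle

print(cordic_step(1 << 16, 0, 0, True))  # (65536, 65536)
```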
@kindlin
@kindlin 1 year ago
@@bobater OK, thank you, that helped me a lot. Maybe it was just one bit too abstracted for me to fully grasp the first time through.
@cparks1000000
@cparks1000000 1 year ago
12:32 To actually explain the ratio test, you need to appeal to the geometric series. 13:30 While I appreciate the attempt at motivating L'Hôpital's rule, you don't explain why both of the functions need to be zero at the limit point. Plus, dy/dy doesn't really make sense in your context. It's probably best just to quote the theorem.
@bobater
@bobater 1 year ago
Yes for a more rigorous yet still intuitive explanation you would need to show convergence with a geometric series. I originally had a segment explaining that, but I decided to cut it for clarity, since I didn’t want to put too much focus on figuring out the convergence of the atan series
@artsmith1347
@artsmith1347 1 year ago
It was masterfully done without even mentioning the theorem or noting that it is applicable elsewhere. He wanted it for one thing, got it done, and moved on, without going down the mysterious rabbit hole where one might get lost trying to understand L'Hôpital's rule.
@zyklos229
@zyklos229 1 year ago
sin(x) = x for small x. Done 😅
@Lavamar
@Lavamar 11 months ago
Ahhhh this video was so confusing! 1. At 7:18, you say that because we have the rotation matrix, "we have a way to rotate vectors and get their x and y coordinates after we rotate them" (correct) "...that gives us our sine and cosine" BUT the rotation matrix requires that we calculate sine and cosine! I don't understand why we're any better off at this point. 2. Was the motivation behind why we're adding successively smaller angles to an initial 45° ever explained? I don't see what doing that is giving us and I feel like it's probably a crucial part of this algorithm. Is it that you precalculate the terms of the summation AND their sines and cosines? Then you can just store those in a list and easily plug them into the rotation matrix? I appreciate the video but feel that this key point could have been explained a little more in depth.
@bobater
@bobater 11 months ago
Really appreciate your feedback. This was my first real attempt at making a video explaining a topic not many people had seen before. When I made this video, I had about 6 subscribers, so the amount of people I expected it to reach and the quality standards I set for myself were somewhat low. If I have time, I might make another version of this video with all of the feedback I got and mistakes I made in mind, but I am pretty busy right now so I’m not sure if/when that will happen.
@Astr0B
@Astr0B 1 year ago
Great video
@sifodyas_
@sifodyas_ 1 year ago
Awesome, what do you use for the animation?
@bobater
@bobater 1 year ago
Thanks! This video was made with Manim CE
@nomzz1
@nomzz1 11 months ago
Can you make one explaining how a logarithm is calculated? I've always wanted to know.
@Wolf-if1bt
@Wolf-if1bt 1 year ago
How do you handle the accumulation of errors at each step? And how do you compute sin(z) where z is complex?
@wafikiri_
@wafikiri_ 11 months ago
For your second question, I have an answer. A complex number is a sum of a real and an imaginary part; therefore, its sine is a sum of products of sine and cosine functions of those two parts, via the formulas shown in the video for the sine and cosine of a sum of angles. The sine and cosine of the real part are no problem. But what are the sine and cosine of imaginary values? They reduce to the hyperbolic functions of the corresponding real value (the imaginary value divided by i): cos(ib) = cosh(b) and sin(ib) = i·sinh(b). So you need a way of computing hyperbolic functions in order to compute the sine (or cosine) of a complex number.
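That identity is easy to verify numerically (an illustrative sketch; Python's `cmath.sin` serves as the reference):

```python
import cmath
import math

def sin_complex(z: complex) -> complex:
    """sin(a + ib) = sin(a)*cosh(b) + i*cos(a)*sinh(b)."""
    a, b = z.real, z.imag
    return complex(math.sin(a) * math.cosh(b), math.cos(a) * math.sinh(b))

z = 1 + 2j
print(sin_complex(z), cmath.sin(z))  # the two values agree
```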
@Petch85
@Petch85 1 year ago
Great video
@AveryCarlisle-uf3sz
@AveryCarlisle-uf3sz 1 year ago
No. Math video
@redf1sh
@redf1sh 1 year ago
I didn't understand how we should get the tan(x) values. Do we have a precalculated table of tan(45), tan(22.5), ...?
@sunnyarora7177
@sunnyarora7177 1 year ago
It would have been easier to use a Taylor series expansion for the trig functions.
@bobater
@bobater 1 year ago
Yes, but using a Taylor series requires a lot of multiply operations, while this method is better with less powerful hardware