Hello, mathematician here. Your method is awesome, but you only need to check for divisors up to the floor of sqrt(p). What I mean is: if you want to check whether one thousand is prime, you only need to check if 1000 is divisible by the numbers 2 to 31 (the floor of the square root of 1000). Hope this helps, it makes the check a lot faster.
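For anyone curious, the sqrt cutoff looks roughly like this in Python (my own minimal sketch, not the video's actual code):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division, testing divisors only up to floor(sqrt(n))."""
    if n < 2:
        return False
    # If n = a * b with a <= b, then a <= sqrt(n), so any composite n
    # must have a divisor in this range.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(1000))  # False: 1000 = 2 * 500
print(is_prime(997))   # True
```

For n = 1000 this loop checks only 2..31 instead of 2..999.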
Question: Does the respective script exclude numbers ending in 0, 2, 4, 5, 6, 8? Because (apart from 2 and 5 themselves) these are certainly not prime numbers. Where can I get the scripts?
Printing to stdout took way more time than the actual computation itself. Also, there are algorithms that can find all primes under 10^9 in less than a second (in C++).
Thank you for the video! I love quick little algorithm comparison showcases. A quick note: on my machine, my own version of the trial division algorithm (up to 1 million) took 12 seconds to run while printing the numbers and just over one second without printing them. This is because system calls (which are required to print to the terminal) are around a thousand times (or more!) slower than typical math operations, depending on the language and environment. This also explains why the sieve and trial division algorithms took about the same time for the first million primes: both spent most of their time printing instead of actually doing math. It's worth keeping this in mind for future comparisons. Again, thank you for the video; I would love to see more naive vs. advanced algorithm implementation comparisons!
Hi! Really glad to hear that you enjoyed the video😊 I've also tried the algorithms without printing and realised they were quicker; however, I felt it would be a little more visually interesting if the prime numbers were printed out in the video 😅
Good catch, this was an oversight when I made the video. The trial division algorithm actually had the square root as the limit; I just forgot to explain it during the video. Hope that answers your question!
The algorithm explanation is genuinely super easy to follow. Great job as always :D Also a quick question: if we use a faster, compiled language instead of Python, will the time change be noticeable, or will it stay the same?
Thanks!🙏🏻 Yes, the time difference should be noticeable if we use a faster language (e.g. C/C++)! I just used Python because it's the language I'm most familiar with 😁
I think the main problem with faster algorithms here is outputting the results. Most of the time, this code would be busy printing rather than computing. If that's true, the way you handle I/O matters more (in terms of performance) than the choice of programming language.
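One common trick (just a sketch of the idea, not anyone's actual code here): build the output in memory and hand stdout a single string, instead of one print call per number.

```python
import sys

def render(numbers) -> str:
    """Join all values into one string so stdout gets a single write."""
    return "\n".join(map(str, numbers)) + "\n"

numbers = list(range(2, 10))
# Instead of:  for n in numbers: print(n)   (one write per value)
sys.stdout.write(render(numbers))
```

One big write amortises the per-call (and per-syscall) overhead across all the values.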
Yes, I actually did this a while back with 3 languages and this sieve: Node.js, C++, and Python. A quick note: these tests were automated back to back, mixing the order, and I did not print each number to the console, because printing, especially on Windows, is slow.
Going to 1M, averaged over 5 runs:
Python: 1797.4 ms
Node: 149.8 ms (12x faster)
C++: 46.7 ms (38.4x faster)
Then for 10M, again averaged over 5 runs:
Python: 23752.1 ms
Node: 1214.1 ms (19.6x faster)
C++: 699.8 ms (33.9x faster)
Peak RAM usage (1M run, then 10M run):
Python: 86.7 MB, 772.9 MB
Node: 36.6 MB, 119.7 MB
C++: 4.9 MB, 15.6 MB
C++ does so much better here btw because I used bit arrays (std::vector&lt;bool&gt;): each number's flag takes up a single bit in the array. (This is how the sieve stores its marks, btw.)
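For reference, here's roughly what the sieve looks like in Python (my own sketch, not the benchmarked code; Python has no built-in bit array, so this uses a bytearray with one byte per flag instead of the std::vector&lt;bool&gt; one-bit trick):

```python
def sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes: mark multiples of each prime as composite."""
    flags = bytearray([1]) * (limit + 1)  # flags[n] == 1 means "maybe prime"
    flags[0] = flags[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # Clear every multiple of p starting at p*p in one slice write.
            flags[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, f in enumerate(flags) if f]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The compact flag array is also why the C++ version's memory use stays so low: one bit per candidate instead of a full object.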
I really enjoyed the video. Could you please share the source code? It would be really helpful to see how you implemented it. Also, if possible, could you provide a brief demonstration of how to run the code? Keep up the good work, thanks a lot!
Hi, glad to hear that you enjoyed the video! The source code is quite messy, so I don't think I'll show it 😅 However, if you're interested, perhaps some time in the future I'll make a tutorial on how to get started with ml-agents for AI reinforcement learning 😄
What input change is it waiting for in Level 3? Surely if it's standing still it would continue to output the stand-still command? (Unless there's a timer input, or the raycasts are semi-random? Maybe the outputs are only allowed to change every so often and it's waiting for the next output cycle?)
One thing I did was set the AI to only request a decision every 20 academy steps. However, I think the 'waiting' mostly came from lag: sometimes the program would freeze for a second or two while I was recording, because I had many simultaneous environments running in the background. I changed this from level 4 onwards so it would lag less. Hope this helps! :D
The Monty Hall problem as stated is a one-time event. If you want to simulate it, you first need to make further assumptions. For this simulation, you assumed the host would always offer the switch. If the host only sometimes offered the switch, the simulation would look different and so would the result.
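Under the always-offer assumption, the simulation can be sketched like this (my own illustration in Python, not the video's code):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round where the host ALWAYS opens a goat door and offers a switch."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(wins / trials)  # ≈ 2/3 under the always-offer assumption
```

Change the host's behaviour (e.g. only offering the switch when the first pick is the car) and the 2/3 result no longer holds, which is exactly the point about unstated assumptions.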
What the hell are you talking about? These events aren't connected; you're just running the same "one-time event", where you ARE offered a switch, 1000 times. That's literally how probabilities work. The situation where you only get to swap "sometimes" (when?) is a different thing entirely that you haven't even described properly.
@@AbiSaysThings No, it's not, because there's obviously a human being involved (the host) whose behavior isn't specified. Unlike a die, which will predictably roll 1 through 6 on average 1/6 of the time each, with a human there's no telling what will happen next time. To assume that the same thing will and must happen is rather naive.