@@Nohomosapien the joke was that the whole point was to speed test it, not to actually have "1000000000" in the console. So it makes no sense to write n = 1 billion lol
@@TheBuilder the only available optimisation is just to skip the iterations and add a billion at once. Compiling with an optimisation flag will either change nothing or make the test meaningless, I think.
@@MrMeaty6 Well, @filipburcevski9566 is right because it's going to take you more than one second to name each number... So yeah, it's going to take a few centuries.
Fun fact: the -O2 flag on the gcc compiler will most likely optimize the C++ program to the point that the result is immediate…because it generates machine code that just puts the number 1,000,000,000 directly into the CPU register rather than actually counting anything. At least this is what happened when I tried it with plain C.
You see, Python is actually good since it gives you time to get a glass of water, do something, play a game, graduate and get a diploma, get a job, witness the year 3000, all while the compiler is doing its thing.
I told one of my professors that python was probably the best language to learn about parallelism and concurrency. He looked like I had just admitted to supporting dog fighting. I then told him that python was so slow and inefficient, you would be able to visibly see the time difference from running on multiple threads. He laughed.
@@croma2068 It does have some niche uses. And I honestly believe it should be taught to children instead of cursive. That being said, it is pretty slow. And when it's not slow, it's because it's using some other language.
Python is honestly the best. Sure, if you're working on a large product or in a competitive setting, you'll need C++, Java, or sometimes C. But languages are just syntax; after a while, using a new language is just deciding on the best language for the task and then a few Google searches. Python has so many uses, not just for learning. It's very heavy in ML, has many useful libraries (graphing, numpy), and can do straight up magic compared to other languages. Whenever I want to automate or check something, I just open IDLE. Edit: people replying about the "magic" just being simple pseudocode, yes that's exactly right lol. In just a few seconds I can reformat data with nested dict/list comprehensions into a structure that would take a dozen lines in other languages. I know lots of the stuff is written in C/C++ anyway; that's like saying go use assembly because everything ends up like that anyway. My whole point is that python is extremely useful, obviously there's a time and place.
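The comprehension trick mentioned above can be sketched like this (the record layout here is made up purely for illustration):

```python
# Group a flat list of (category, value) records into {category: [values]}
# using nested comprehensions -- a one-expression version of what would be
# an explicit loop with setdefault/append in many other languages.
records = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]

grouped = {
    cat: [v for c, v in records if c == cat]
    for cat in {c for c, _ in records}
}

print(grouped)
```

The inner list comprehension collects values per category, and the outer dict comprehension runs it once per distinct category.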
@@noornasri5753 "straight up magic" aka it can do what other languages do in slightly shorter syntax, but it by no means has an advantage over other languages library-wise. The only reasons python is so heavily used are: 1. It's installed by default on all major linux distros, so it's a prime choice for automation without further setup on fresh boxes. 2. Due to its simple syntax, it's been picked up by data scientists for ML, but mind you, ALL the heavy lifting is done in C++, so you could easily take say javascript, which is also very easy to use (and has equivalent packages for everything) but is also much faster (using this shitty benchmark, written in node I get performance that's just a few ms slower than C++). Python is not the best, never will be. It's just a very conveniently placed language. Want a language that actually does magic that no other language can? Learn rust.
I cut my teeth on C and only came to Python for work purposes fairly recently. I'm often told in code reviews that I'm getting "paren happy" in my conditionals and a few other places. Not so much an issue with semicolons, weirdly.
You might think that the major time difference was because he used different coding languages, but python would’ve been way faster if he just removed a few zeros.
I had never delved into programming before, but this seemed so straightforward that I couldn't resist giving it a try. Using VS Code, I successfully executed it! Thank you; it was an enjoyable experience!
We're studying FEM/FEA by implementing it in Python code atm. And while it sits there forever using 26GB of RAM and 96% CPU, I think to myself, how fast would it be if written in C++? (Maybe it wouldn't be that big of a difference since Numpy is written in C, though. Idk really...) However, I wouldn't stand a chance of implementing the ideas/concepts we are using it for in such a short time if it were written in C++... It would at least double if not triple the time spent on the project, is my guess 😅 Of course such an application would be implemented in C++ for an actual program. But each tool has its place. For us it is having an easy language, in order to learn FEA by implementing it in code, the coding part being secondary to the concepts we're coding, at least within the course we're taking. That's where python shines, I guess. It minimizes the "barrier" between the scientific/engineering concept and the code implementation, and it's a great language for just that reason. Different tools for different purposes.
"the coding part being secondary to the concepts" - the code always comes second. It doesn't matter what you're doing, whether it's a store, business rules in DDD, or a game. In our company, we model the business in DDD using C#. (Python is too slow, and its object-orientation is weak, enforced only by convention.)
C++ and Fortran are best if you're solving CFD or FEA problems because, as far as I know, C++ and Fortran are fastest when handling multidimensional arrays. OpenFOAM has its entire library written in C++ and is a great open source CFD solver.
I studied C++ and C# at university, but before that I was learning python (through online courses) for an exam. Now it is funny to remember how the teacher said that if we didn't optimise our code, it would run longer than the exam lasted.
When I was at university (ok, it was Thames Poly. It only became a university when I left), if you wanted to add a gigabyte of memory to your computer you'd first have to apply for planning permission to build an extension. Now, 32GB comes in those handy blister packs, sometimes near the till for 'impulse purchases'.
@@herseem I know the feeling. I wrote my first programs in 1978 in high school. I wrote my first professional programs in Fortran in 1984 on a VAX computer shared by about 20 people. It only had about 256 MB of RAM to support all the users.
@Game Plays 1230 Avoid dynamic features in inner loops, cache results, and use aggregate functions like the array functions in numpy, since they are written in C and use vector instructions
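A rough sketch of that aggregate-function advice, using the builtin sum here (which, like numpy's reductions, runs in C); the timings are illustrative and will vary by machine:

```python
import time

xs = list(range(1_000_000))

# Manual Python loop: the interpreter executes bytecode for every element
t0 = time.perf_counter()
total = 0
for x in xs:
    total += x
t_loop = time.perf_counter() - t0

# Builtin aggregate: the whole reduction runs inside CPython's C code
t0 = time.perf_counter()
total_builtin = sum(xs)
t_builtin = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  sum(): {t_builtin:.3f}s")
```

On typical CPython builds the builtin is several times faster; numpy's vectorized reductions widen the gap further.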
This is so true. I had to rewrite an entire program during an exam because the way I wrote it the first time was so slow that I couldn't iterate it the number of times needed before running out of time.
@@TheBuilder My own, though I'm going to use LLVM as a backend because I don't want to lose my sanity. It's gonna support both high-level and low-level features. Also, it's gonna be gradually typed. I can tell you more about it if you want
You can get even faster c++ code if you enable optimizations with the -O2 flag. Although it's possible that the compiler optimizes the loop and removes it 😅
There are ways to ensure that loop is not removed:

#include <stdio.h>
#include <time.h>
#include <stdint.h>

uint32_t n;

int main() {
    clock_t begin = clock();
    volatile int deopt;
    for (n = 0; n != 1000000000; ++n) {
        (void) deopt;
    }
    clock_t end = clock();
    double spent = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("%f\n", spent);
}

0.350250 with -O3, 0.98 with -O0.
Sometimes you don’t need the program to run quickly. You just need it to run. That’s why I love Python. Although it’s certainly not the fastest language, its ease of use is great for beginners, and I can write a program quicker in Python than I can in c++. I don’t think I would be a programmer if Python didn’t exist, quite honestly. And keep in mind I love C++ too just for different reasons.
You said everything right. However, there are people who are sure that there is only one language for all tasks in the world, and that is python. I say this because I know a "hacker" who brute-forces passwords in python (brute-force to find a hash collision) and wondered why it took so long
Jokes aside, the level of draft code you can make in python is unmatched by any other language. And honestly, for most stuff nowadays python is truly enough.
@@ruynobrega6918 That depends on the application. There are a lot of languages that are MUCH easier to use in particular applications. I have done a lot of scripting programs in Perl and it is much easier than Python for that. And MATLAB is much easier for many engineering applications.
Is there a reason why that's not done by default? What would the assembly code look like? Would it just be an add instruction a billion times? Wouldn't that binary be absurdly large?
@@morgard211 sometimes optimization gets in the way of debugging, so it's an option. Enabling optimizations would probably result in the compiler computing the loop at compile time and giving you a binary that just prints the value
@@morgard211 also optimization is not perfect and sometimes breaks functionality in complex programs, requiring additional time to figure out what the optimizer did and how to tell it not to do that.
@@morgard211 Assembly is also capable of loops bro (jumps/calls). So about 5 lines are enough for this (if you actually loop through it unoptimized). If you compile with -O3, the compiler will recognize the loop and set the variable to 1000000000 instantly instead of adding 1 a billion times
@@U20E0 Optimisation only breaks code if there is UB in it (assuming that you have a good compiler). If it breaks a complex program, then that is because someone wrote wrong code somewhere in it. But if the faulty code is in the wrong place, then it is extremely hard to figure out how to fix it, maybe harder than it's worth.
@@TheBuilder it's not even that; the range pseudo-iterator is implemented in C, not in python. You spend much more time running C code when using range compared to a manual while loop
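A minimal sketch of that difference, timing both loop styles (the exact numbers will vary by machine and Python version):

```python
import time

N = 5_000_000

# Manual while loop: the comparison and increment are Python bytecode,
# executed by the interpreter on every single iteration
t0 = time.perf_counter()
i = 0
while i < N:
    i += 1
t_while = time.perf_counter() - t0

# for-over-range: advancing the iterator happens inside CPython's C code,
# so only the loop body costs interpreter time
t0 = time.perf_counter()
count = 0
for _ in range(N):
    count += 1
t_for = time.perf_counter() - t0

print(f"while: {t_while:.2f}s  for-range: {t_for:.2f}s")
```

Both loops end at the same count; the for-over-range version is usually noticeably faster because the iteration bookkeeping runs in C.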
Python enjoyers: NOOOOOO you're not doing it right. Rust enjoyers: Let me try my best to show why Rust is faster than C++, better than this video does, and why it should be the best! C++ Enjoyers: I wonder which libraries written in C++ they're going to use...
As far as I know, most of the libraries used by python or rust are written in C, not C++. Edit: just to clarify, rust mostly uses libraries written in rust, but there still are a lot of C libraries used in rust, mostly because they are already high performance, well established and well documented.
It would be really nice to include javascript timings. On my machine: C++ is 0.66s, Python is 42.37, and JS (Node) is (amazingly) 0.53s. Presumably V8 is doing some crazy optimization, so I added a Math.random() check in the loop, and it went up to 6.3 seconds. Still very impressive.
To be honest, Python was my first language, before I went on to Java, C and C++. It's very good for learning, very high level and general, but when you dig deeper it falls short, except for certain tasks like machine learning.
Similar story with me, except I started with C, switched to Python and thought it was the best thing ever since I could write 5 lines of code to do practical things....then I started working and realized lacking a strict type system makes Python and JavaScript prone to errors especially on larger projects
I think there are some settings you can tweak for cout speed. I saw it in an article written by GeeksforGeeks or something; it's about competitive programming
Now process data on each iteration and you'll see how python performance decreases dramatically. Depending on the case, I've noticed that python is about 500× slower. For example, try calling a method within a custom class.
@@ILubBLOfficial that's OK. However, not everything can be run with those libraries. I'd prefer integrating python with C++ using a wrapper such as Pybind11, SIP, Shiboken, boost::python, etc. That approach to optimisation is much more powerful, as it brings the best of python and C/C++ together. By the way, Numpy is a decent approach. However, Eigen, which is its C++ counterpart, is much faster and more optimised. Pandas is a huge package. Too many things there that aren't always necessary.
@@themystic5935 Realistically speaking, many companies, especially in embedded systems, prototype in python and rewrite everything in C/C++ for production. I still consider that Python and C++ work fine together. They are just tools, not religion.
@@ILubBLOfficial Eigen, which is the C++ equivalent of numpy, is about 20 or 30 times faster than numpy. And you can't always avoid Python bottlenecks. However, I don't criticise Python for that. I think it is still a great scripting tool for a range of applications. What I criticise is that there seems to be a new generation of developers, most of them beginners, who frenetically love a specific programming language as if it were a sort of religion. They wrongly think they can do everything in a single programming language, and they don't admit the weaknesses of their favourite tool. They are just tools, not religions. There are also many areas in which Python is a better choice than other tools, but when it comes to optimisation, performance, and concurrency, Python performs very poorly. Probably worse than the vast majority of its competitors, and that's why it is recommended to learn more than just python.
@@arthur1112132 But it's C/C++, so the compiler gives you the power to choose whether you, the creator of your own code, want to let the compiler do that or not.
The C++ integer needs to be volatile, otherwise the compiler can optimise its value away, see that the function is basically just "print one billion", and bypass the time counting
@@jebbi2570 The great thing about Python is that whenever you truly want/need C's speed, you can just write a shared library providing that functionality in C, then call it from Python. Of course this too has its limits, but it's a nice workaround for when there's one significant bottleneck.
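A tiny sketch of that workaround using the stdlib's ctypes against the platform's C library. How the library name resolves varies by platform, and strlen here is just a stand-in for whatever function your own shared library would provide:

```python
import ctypes
import ctypes.util

# Locate and load the platform's C standard library; fall back to the
# current process's symbols if find_library can't resolve a name
libc_path = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_path) if libc_path else ctypes.CDLL(None)

# Declare strlen's signature so ctypes marshals arguments correctly
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# This call runs compiled C code, not Python bytecode
print(libc.strlen(b"hello, world"))
```

The same pattern works for a library you compile yourself: build a .so/.dll, load it with CDLL, declare the signatures, and call it from Python.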
Perfect example of why there are optimized languages for different use cases. This comparison is like comparing a kitchen knife and a Leatherman by chopping onions with both... or by turning a screw into wood with both.
I had my “road to Damascus” moment in 1981 when I wrote and ran a similar routine on the original IBM PC... first in interpreted BASIC, then compiled BASIC, then in C. I was showing my kids. Well, I was blown away. Even though I had written some “pretty good” code (worth about $1,000,000) for my employer, I NEVER wrote another line in BASIC.
My first programming experience was with ZX Basic. The Speccy allowed you to do assembly (or rather literal machine code) by "poking" memory (i.e. storing literal values in it and then calling the desired memory address to execute), but sadly my child brain couldn't comprehend it. Basic was slow, of course, and I had to scratch my head for a while trying to understand how all the other programs ran so fast. At some point I learned there was a C editor, but I couldn't find the tape anywhere.
If I run this raw C with no optimization flags on my asus potatobook from 2016, then I get similar times as this guy. But just using the -O2 flag drops it to less than 10ms. It's an incredibly weird "flex" anyway, considering python also takes well under a second to do this if you wrap the thing in a function and use a JIT compiler
Can someone explain this to a non-programmer? Wouldn't the code to the CPU be the same? Don't compilers, even if they're run in real time, change the code into assembly instructions for the processor? I would assume that such a simple program would result in the same instructions being seen by the CPU.
I know I am two months late to answer, but I think your question is interesting. Your mistake is to consider Python a "compiled at runtime" language, which is not true. Compiled languages are transformed into CPU instructions during compilation, as you seem to know. Python is an interpreted language: that means for each line of code, the python program (python.exe or /bin/python) reads the line (reading a string is itself a set of CPU instructions) and updates its state in memory according to the python code. It is as if there is a layer between the CPU and the code; the python code is never translated into instructions. The python program is to python what the CPU is to C/C++: the executor. However, the python program is itself made of CPU instructions, and running it to execute each code line takes CPU time.

The same applies in hardware: you can run an algorithm as instructions on a CPU, but you can also design a chip specialised for that same algorithm, and it will be quite a bit faster than the CPU (if it is well designed) because there is one layer fewer (no instructions translating the algorithm). However, you lose the versatility of a CPU, which can execute different algorithms. It is the same with python: it is a language where programs can be run and debugged quickly because there is no compilation step, but with less efficiency than a compiled language (some tools try to avoid that by compiling python code at runtime, like Numba, and get better performance).
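You can actually watch that interpretation layer at work with the stdlib dis module. Every bytecode instruction printed below is dispatched by the interpreter loop, which is where the per-iteration overhead comes from (a minimal sketch, mirroring the video's counting loop):

```python
import dis

def count(n):
    # A deliberately simple loop, like the one benchmarked in the video
    i = 0
    while i < n:
        i += 1
    return i

# Show the bytecode the interpreter must dispatch on each pass of the loop;
# each printed instruction costs at least one round of the interpreter loop
dis.dis(count)

print(count(10))
```

In C, the equivalent loop compiles to a handful of machine instructions executed directly by the CPU, with no dispatch layer in between.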
@@vladimirarnost8020 Thanks, that's interesting. Is there ever a danger this type of 'optimisation' happens in the wrong situation?...where it is absolutely not wanted?
@@ChrisM541 Good question. The compiler is usually doing a good job not breaking code by over-optimising it. However, when the goal of such a loop is to wait for a certain amount of time, e.g. in embedded code touching hardware directly, the loop removal might break such code as it would simply run too fast.
I ran it in Common Lisp and Racket (translated by ChatGPT because I still suck at them) on Linux Mint with a 5600X CPU:
- Python took 39 seconds.
- Common Lisp with SBCL took 0.222 seconds.
- I gave up on GNU CLISP after 6 minutes.
- Racket took 0.845 seconds.
I altered them to print the results to make sure they actually did the task, and they printed one billion. The Racket script is relatively complicated and might not be optimized well. But these results are crazy!
True story, my friend and I went to a coding interview where they treated all languages the same and had a run-time limit. We both got the question, I used python w/ DP, he used C w/ brute force :)
@@Hephasto Nowadays if you use some better C compiler it generates like 95% perfect asm code. For "more realistic" = more complicated programs it's even faster to write in C unless you REALLY know how to optimize in assembly just because there is already 50 years of optimization experiences in the compiler
@@donovan6320 You can try it; just do the same loop but compile with the /O2 flag (or -O2/-O3 on gcc). It won't compute values that aren't read. Now, a better way to test how quickly a computer can increment would be to start a timer and use a time limit as the condition to exit the loop. The result should be in the hundreds of millions per second, but I'll test it myself in a sec
A modern compiler, such as the GNU C++ compiler you use in the video, does many passes and optimisations. There is a possibility the compiler changed the code to something other than 1 billion increment instructions, which would make this comparison unfair. However, the point still stands that python will always be much slower for operations like this; it's not the right tool for this job, just as you wouldn't use a screwdriver to punch a nail into a wall, though the screwdriver has uses of its own
I actually came across this recently. I was trying out MicroPython for the first time on an RP2040 Raspberry Pi Pico W. The embedded version of "Hello world" is "Blinky": you blink the LED on and off. Wow! I figured out the gpio library, but I couldn't be bothered to find the "delay" function/method/module, so I just counted to 100,000. I was expecting to have to ramp that up much higher, and was thankful I didn't need to think about the datatype in python... but no. The LED blinked at about 1Hz. Counting to 100,000 took 500ms! To be honest, I haven't switched it back on again. I don't think it's the RP2040's fault, I think it's python's fault. If I did that in C on the same board, the LED would just look a little dim and flickery. If I did it in ASM, it would just look like it was on.
"counting up to a number" is a very bad baseline for comparisons. For one, these languages have extremely different performance characteristics. (Micro)Python is naively interpreted. It absolutely doesn't have the advanced optimization passes that would let it reason about this. A C++ compiler will be able to make sense of your counting, and is likely to throw your loop away: 1. because it can figure out that you're just counting up to 100,000, and 2. because you're probably not even using the value... so it can just optimize it away entirely. Though, if you need anything reasonably fast, then MicroPython is not a good idea, for sure. I heavily use Python and I dislike the idea of using Python in embedded programming. Assembly is not a magic bullet. If you know your C/C++ and optimization, then dropping down to assembly is rather unlikely to let you write faster code.
Bear in mind that the raspberry pi foundation's goal is to teach people programming. Python is their standard language for everything they do (hence raspberry "pi"), because it's easy to learn. Having said that, you can program the pico in cpp as well.
Ported an NQUEENS algorithm to both J2ME (SPH-M330) and an arduino (ATMEGA328P). The arduino was ~20% faster. Arduino UNO clones usually clock the ATMEGA328P at 16MHz. The Samsung has a full 32-bit ARM processor at (probably) 192MHz. It might even utilize the "Jazelle" instruction set. That's honestly really fucking pathetic for Java. Needless to say, I was incredibly pleased to find out that my stupid Conway GOL demo ported to C (Qualcomm BREW) was so fast that blinkers are a blur on screen.
The cool thing about python is that there are so many ways to optimize things. Basic code like this may be really slow, but there are ways to 100x things like this
@@F14_Tomcatter the fact that the code runs in about 4 seconds when you run it, not 1 minute. Because he ran a precompiled C file, but the python file wasn't precompiled, so it compiled and then ran. If you run it a second time, it is about 4 seconds, as tested
@@EWILD99 I remember implementing the Dutch flag problem for large suffix arrays for an algorithms course I had, and running one large sample set that would take like 3.0s in Java would take the python code like an hour.
Python code’s 1st execution is also its compiling stage. On the 2nd run, it is way faster. You should have either calculated c++ compiling time + run time or compare python’s 2nd run
@@catfan5618 And if you think it is compiled, then where does the binary go? Where is the executable file? Because when you run a python script, it does not change its form. It's still editable. Even if it were compiled to RAM, there would be no reason not to keep the compiled files.
@@sinoichi Compiled does not mean we get a native binary. Look at Java: we compile our code but don't get a native binary. Python compiles its code into bytecode and stores it in .pyc files.
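A quick stdlib-only sketch of that source-to-bytecode step (the throwaway module name and contents here are made up for the demo):

```python
import os
import py_compile
import tempfile

# Write a throwaway module to a temp directory
src = os.path.join(tempfile.mkdtemp(), "demo_module.py")
with open(src, "w") as f:
    f.write("ANSWER = 41 + 1\n")

# py_compile performs the same source->bytecode step the interpreter does
# implicitly on import, and writes the result out as a .pyc file
pyc_path = py_compile.compile(src)

print(pyc_path.endswith(".pyc"), os.path.exists(pyc_path))
```

On a normal import, CPython does this automatically and caches the result under __pycache__, which is why a second run of an imported module skips the compile step.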
every data type in Python is a lot more complicated compared to C or C++, for example, every time you interact with a number in Python, the program will call various other functions while in C or C++ the operation is only a few assembly instructions
Idk whether it's still the case but Pypy (a python interpreter designed for speed) used to beat C++ on some regular expression benchmark (basically C++ had to redo all the work for every iteration whereas the JIT could hardwire the regular expression and optimize it).
if the comparison was against std::regex, then that's no surprise, because std::regex is hilariously slow. like, "outperformed by literally any other option" slow. CTRE would be a different story, though :) tracing JITs are interesting tech though! there are definitely cases where it would be totally expected to trash an AOT compiler's optimizations.
@@asuasuasu JIT shine when you have data that is constant in practice but variable in theory. Like, matching repeatedly against a fixed regex. A precompiled regex lib has to analyse the regex every time. A JIT can compile a short program that represents the regex.
@@cmilkau C++'s std::regex is not a good choice (nor a good way to benchmark C++ vs anything else); it is slow due to its design, and almost anything outperforms it.
@@cmilkau you can harness similar optimizations on compiled languages using PGO with a modern compiler to generate more optimized machine code for your input space.
I think gcc has some optimisations enabled by default. Is the result the same without optimisations? It could just be that the compiler sees what the code is doing and simply sets n to a billion right off the bat
That is not the difference. Your change is because of the optimization the compiler did. I got 30 microseconds for the code above with -O3 optimization. The compiler realises the final value of n does not depend on anything unknown at compile time, so it can just set n to the final value and remove the while loop completely. Try adding the volatile keyword to n, or test both print methods with the same compiler
@@target844 Good catch, execution time was significantly slower once I used volatile. I've had bad experiences with std::cout in the past, to the point where almost 1/3 of my execution time was spent on basic I/O. I rewrote the backend to use `printf` and performance increased quite a bit.
Nobody just counts- except to understand the overhead. The things I would do at each increment are probably written in C, and likely designed to run on a GPU. I use python to execute functions in libraries. These are likely to be faster implementations of code than I could write in any language simply because they are typically open source and have lots of eyes on them. Was that the point- python is slow at loops?
@@uwirl4338 no, with O2 optimization, the compiler will just set n to 1 billion before the while loop. To truly optimize the code, he can add a register keyword when declaring n, that will make the program runs 4 times faster on my pc
@@wrnlb666 Oh really? I was under the impression the register keyword didn't even really do much on modern C/C++ compilers and was more a relic of the past, since modern compilers are so much better at optimizing now.
@@uwirl4338 -O2 will make an enormous difference. The optimizer can and will realize that the end value of n does not depend on anything unknown at the time of compilation that could change, so it can be precalculated. If you look at the generated assembly, -O2 eliminates the loop completely. I tested the code on godbolt with that option: -O0 takes 3 seconds compared to 30 microseconds at -O2. That is an optimization factor of 100,000, so an enormous difference. That is runtime measured in the program around all of the code shown in the video, so any startup time is not included. If in the optimized variant I measure the time up to just before the value is printed, I get 0 µs. So the 30 microseconds is the time to print the result. If you add the volatile keyword to n, the loop can't be optimized away and you get 3s regardless of -O0 or -O2.
@@taragnor compiler optimization only optimizes things in ways that don't make the result look different. In this case, setting n to 1 billion before the while loop doesn't make the result different, because n is not volatile and the compiler knows it will not be changed outside of the program.
Was this comparing just the initial compile times or the finished program? How did the c++ do 2.4 seconds? I ran the same thing in visual studio 15 minutes ago and it's still counting. You must have a beast of a workstation or I messed up somewhere.
That's an important trade-off. Has anyone here experimented with compiling Python code with Cython or Numba? I wonder how it would perform in this situation. Also, now with Python 3.11 it got a bit faster. But still much slower than C++, of course.
Numba needs to JIT compile your loop. It compiles the code to machine code on the fly (via LLVM). That's why the first execution is slow. If you use the function a lot, it's as fast as C. And with the right decorator, it can even put your vectorizable code onto the GPU; it's crazy good 😂
@@marcotroster8247 Really? I was not expecting that; with numba the time is about 1/3 of a plain C++ loop... I was expecting something similar in performance, so about 6s...
Recently I did a project at my uni that focused on testing the performance of a few languages, in essence C, Rust, Java and Python. I wrote an MLP neural network (same architecture: same number of layers, each layer having the same number of neurons) in each of these languages. The goal was to measure the time to classify 10,000 data samples from the MNIST dataset. The results looked like this: C ~ 2 sec. Rust ~ 3 sec. Java ~ 4.3 sec. Python ~ 8 minutes
The thing is that the C++ compiler will understand the loop and optimise on its own. -O3 does the trick. The loop will not even be generated as machine code. check compiler explorer.
The overhead here is the Python standard print function. If that function could be optimized, maybe with Cython or similar, I wonder what the results would be.
C++ has the advantage of using register variables. That reduces variable memory accesses to zero while in the loop. Python must push and pop n on and off the stack.
@@blip666 Compiler explorer can show the assembly output of compiled programs and you can see it directly interacting with cpu registers (eax, edx, etc.). Cpu registers are far faster than memory accesses. I don't use python so I don't know how it works internally but it's physically impossible for a language to optimize with cpu registers without compiling to binary. There are way more reasons why c/cpp/rs are magnitudes faster than python. 1, yes, it is a compiled language so any "actions" are just one instruction to the cpu. 2 Static typing also makes it faster because memory is constant size and no need for extra memory to store types. 3. Static typing and other restrictions allow optimizing compilers, esp. llvm to analyze data flow and other stuff to inline/remove/optimize assembly. 4. garbage collectors do add overhead to the code, i don't think by much but languages without gc, especially rust, know exactly what parts of memory to deallocate.
@@Fl4shback it would do that if any optimizations were enabled for the compilation in the video; the author said in another comment that it felt unfair to do since python can't do that, lol
Nice comparison. However, I think a lot of people are a bit too invested in efficient code. In many environments, computation time is much less expensive than coding time. So if you only have to solve a given task once or a few times, the computation cost is easily outweighed by the saving in coding time.
Why do you think python coding time is less than C++ coding time? Maybe that's true for C, if one codes things like data structures from scratch (instead of including libraries), but it's definitely not true for C++.
@@sitting_nut I can’t speak for all disciplines. I am doing data analysis and measurement and control for experimental physics. And for that purpose Python is the faster and more flexible option. Could I write the same thing in C++ and have it be faster? Sure. But I don’t need to and using python my code will also be more accessible to people following up on my work. I don’t need an F1 car that only the cool kids know how to drive. I need a Ford that benefits more people.
@@Hooverdreng In other words, you are supporting your bias with personal anecdotes instead of objective data (and you say all that unironically while saying your work is about data analysis, measurement, and experimental physics). Anyway, once again, my point was that your claim in the OP, that there is a coding-time increase for using C++ over python, is false. That point had nothing to do with F1 cars used as metaphors.
@@sitting_nut Seriously? Idk, maybe he got that impression with a simple hello world? lol. In c++ you have to import/include a std library, use a bitshift operator and put it in a main function. In python you type "print".
@@cagefury3789 If you know anything, you know you don't actually need to include the std lib or io. And why do you need a bitshift operator in hello world? The fact that you had to make up such absurdities to back your claim indicates that even you, ignorant as you obviously are, know that the claim that there is a coding-time increase for using C++ over python is false. Typically, you pretend you don't know that python scripting mostly consists of importing other libraries. And how is "__name__ == "__main__"", and other such nonsense to structure any significant program, an improvement on the main function in C++?
I got the same results in C and Python on huge, random, complex tasks with optimized code in Cython / CPython, and better results with PyTorch. Python gives me the interpreted and compiled worlds with simple code and a few hours of programming.
Tried the same thing on my PC with C++ and C and found something weird. With the exact same C++ code, my PC finishes in 1.27 seconds. But if I replace the 'size_t' with an 'int', it finishes in 1.1 seconds, which is kind of to be expected since size_t is bigger than int. But when I tried the same in C, there was NO DIFFERENCE in speed between int and size_t??? Meaning that C++ is slower with size_t than C for some reason. C did both in 1.1 seconds btw
@@noapoleon_ C and C++ performance is identical. That's why I suppose one of the programs got compiled for 64-bit and the other for 32-bit, because on 64-bit the size_t type is twice as large and takes more time to process
Well, you should always use the for loop with range in python if you can, since that is C under the hood (in python 3). So it's kinda misleading. For more info, look at mCoding's video
@@Ruchunteur That is true, but it is faster than the while loop. You can speed it up even more in pure python by using built-in functions instead of the for loop: max(range(1_000_000_000)) brings it from 1m30s down to 19s. You can speed this up more with numpy, but also with just math: seeing that the highest value will be 1_000_000_000, you can simply print 1_000_000_000, making it run in 0.5 sec. I'd recommend the mCoding video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Qgevy75co8c.html for an explanation. But yeah, that is the danger with python: it can get really, really slow if you don't know what you are doing
Python is a relevant language for some tasks, but its main claim is that it's a modern interpreted language that's easy to teach. A few years ago, it would have been BASIC, compiled BASIC, then Pascal. Java tried to slide in, but was pigeonholed for slow web applications. Things have to move forward, just like C, C++ and so on.