Being hard on code and neutral on LLMs is not a contradiction. If someone submits terrible code, it doesn't matter whether they used AI or not; if they submit good code, it also doesn't matter. The point is that you judge code on the merit of the code, not the source. It's honestly really strange to try so hard to apply a moral or even value judgement to something which is just an inanimate tool that can be used or misused.
Yeah, it's just a tool, but knowing how much Linus hated C++, it's kinda surprising that he is so neutral towards AI when there is so much garbage code produced by it.
It's because if it's shit you don't really improve anything and have no idea where to even look to make it not shitty. It's fine if your end game is immediate satisfaction. But if you're aiming to exceed the average, then relying on a "tool" that averages isn't the way to go.
You have to understand the context of the situation. This was a live interview. Linus couldn't afford to be as much of a keyboard warrior as he naturally is. I'm completely convinced that he would blow a fuse if someone submitted a patch to the kernel containing LLM code.
I think Linus is ambivalent or neutral about LLM coding because he doesn't direct his anger towards unconscious, inanimate agents. What he gets upset about is when a human, who should know better, tries to merge garbage code generated by an LLM without understanding what they are attempting to merge.
I think that you are correct, but I'd also like to point out that getting worked up over LLMs is fruitless at its core. It's an exercise for people who don't understand the world.
Yeah. There's a lot of money to be made in capturing attention by selling outrage stories of LLMs and just AI in general, when it's really just repackaging people problems into a new shiny exterior. That's not to say skepticism is unwarranted, but you're gonna have a more focused discussion once you isolate human decision making elements.
AI for coding is (currently) basically a replacement for Stack Overflow and Google. If you just plug AI-generated code into your system, you're gonna have problems, just like you would if you copy code from SO as-is. If you consult the AI, learn from it, and review what it produces and how it stacks up to your needs, then it becomes a net positive force that can both help you with trivial boring tasks and also teach you things.
Re: LLMs - you should read/review the recent paper on LLMs and learning outcomes for students. They basically found that although LLMs helped students improve *while* they had access to them the overall learning outcomes were poorer when access was taken away vs when access was never provided. Basically students who learned with LLMs became poorer at learning things in general or at least didn't improve at learning things compared to their peers who didn't use LLMs.
@@-weedle that's the weird bit. Just saw it reported on yesterday and now I can't find it. Actually have the video paused at ~7:30 because I'm off in another tab looking for the darned thing 🤣 Will update as soon as I find it.
I am objectively a great programmer (as judged by my peers over the years during my career), and I like Copilot very much. I don't think it made me better, quality wise, but it made me faster on the boring tasks.
The reason he's so chill about LLMs is because he trusts his review process. He nitpicks everything and takes it seriously. Therefore it doesn't matter if the code came from LLM or from a human, it would still need to go through him or his trusted review body.
@@atiedebee1020 Block them. If someone submits bad code and doesn't correct it, I am very confident they just block them, or throw them in spam, or whatever. And maybe, just maybe, if AI leads to more new contributors who feel confident submitting code for the first time because of AI, create a learning channel for them if they need additional guidance on how to review LLM output before submitting it. You know, assuming ignorance but good faith.
And in 3 years time, we have neural networks in our compilers warning you about your novel solution because it deviates from the average quality of all the code it was trained on.
Why would data scientists make average-quality code datasets? You have to assume that data scientists are complete imbeciles for them to purposely train LLMs to make dumb suggestions. If it's overly suggestive, then the dataset will be changed to make it stop suggesting so much, probably through RLHF.
Kernighan's Law suggests that debugging is twice as hard as writing code. Letting the LLM write the code and then debugging the result is a direction with subtle issues. It probably means that you can crank out your low-end work even faster than before, but you may not be able to improve the quality of your high-end work at all. And Amdahl's Law would suggest that making your low-end work easier to do may not free much if any time up to put more hours into your hard jobs. The problem in that case isn't in having time to do the actual hard work, it's that your job involves grinding through boilerplate.
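The Amdahl's Law point can be made concrete with toy arithmetic (the percentages below are invented purely for illustration):

```python
# Hypothetical time split: 30% of the week is boilerplate, 70% is hard design work.
boilerplate_frac = 0.30
hard_frac = 0.70

# Suppose an LLM makes the boilerplate 3x faster but doesn't touch the hard part.
speedup_boilerplate = 3.0

# Amdahl's Law: overall speedup = 1 / (serial fraction + accelerated fraction / speedup)
overall = 1 / (hard_frac + boilerplate_frac / speedup_boilerplate)
print(f"{overall:.2f}x")  # prints 1.25x
```

Even tripling your speed on the easy 30% only buys about a 1.25x overall speedup, which is the point: the hard work still dominates.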
My general rule of thumb is to use LLMs to learn how to *approach* a problem, then go figure out the details myself. If I am ever asking it about specific numbers in a problem, I have strayed too far from its purpose (in my opinion). Something like: "I want to make ____ kind of project, how might I start that?", or even: "I am stuck on ____ step, what might be a few good things to try?" are both fine. But nowadays, as soon as I search anything like: "will this loop go out of bounds of this array?", I start a new chat because I shifted its focus too far in the original. Once the numbers are wrong, I don't think I've ever seen them correct themselves. In rare cases, I'll ask it to explain what some code will do, but that's only if the documentation is truly abysmal, which to be fair, sometimes it is. I just see it as a way to sift through all the more niche or hidden code discussions online.
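To make the "specific numbers" failure mode concrete, here's an invented example of the kind of bounds question I mean, small enough that just running it settles the matter faster than any chat:

```python
a = [10, 20, 30]

def sum_all(xs):
    """Sum a list, with a classic off-by-one in the loop bound."""
    total = 0
    for i in range(len(xs) + 1):  # bug: should be range(len(xs))
        total += xs[i]            # raises IndexError once i == len(xs)
    return total

try:
    sum_all(a)
except IndexError:
    print("loop went out of bounds at index", len(a))
```

A question like this is trivially checkable by execution, which is exactly why asking a language model about it instead is straying from its purpose.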
I agree, it's the best rubber duck short of an actual human subject matter expert, which often times you may not have access to depending on what kind of problem you are working on. Bouncing ideas off your significant other for instance, is probably not going to be useful if you are trying to write something like an inverse fast Fourier transform, but the LLM will have the context needed to plan an approach. Using it to actually write code is iffy, it can get you like 50%-70% there often but you may end up spending more time fixing the output than it would take you to just write it.
yeah this is how I've used it. I wanted to make a music app and all the code it gave me didn't work because the library had been updated but it suggested patterns I could choose to adopt or dismiss
@@gljames24 Basically, though it's a great rubber duck that LIES. And if someone doesn't know enough, they are literally incapable of spotting the lies. And way too many treat these LLMs as sources of truth. In part maybe because a subconscious misconception views everything the LLM generates as based 100% on exact words a person has written in that sequence, like it's a search engine and not an ad-libs slot machine.
I'm not a great or experienced coder, but one issue I already see with LLMs is that they break an important aspect of coding, which is dissecting an idea in order to implement it. A huge benefit I get from coding is that it forces me to really think about what it is that I am trying to do.
AI is going to be the corporate equivalent of buying a $3000 Gibson Les Paul guitar thinking it's gonna make them a better player without learning how to actually play.
Even though I believe in the potential of AI, I am against corporations having a stranglehold on access to it. The future should be a place where we can all develop AI the same way we develop applications. Corporations apply the classic tactic of turning people into helpless consumers so they keep paying for whatever services that are being peddled. Independently assembled AI should be the direction to move towards.
Actually it is more like buying a $3000 collectors edition of a Harley Davidson bike. In 1:24 scale. With "real working engine", that is literally just a translucent engine block that gyrates the pistons if you turn the wheel.
@@TheSulross As it should be; CRUD has been done for almost three decades, it would be weird if AI couldn't learn it with that amount of data. But last time I checked it's kinda shit at visual-to-code tasks (like asking it to generate HTML+CSS to a certain visual specification): it will do the bare minimum and then be unable to expand it into something that you want.
@@TheSulross there are already a lot of projects for this, and it's super optimistic to even refer to the output of such toolchains as "prototypes" considering the amount of unmaintainable garbage, nonsense code, and refactoring they need.
@@ch3nz3n It never will; the whole technology is a step in the wrong direction as far as real AI goes. All LLMs by their very nature can only rearrange and regurgitate that which already exists. They literally can't come up with something new because the underlying algorithms just take what they've been trained on (stuff that already exists) and try to rearrange it in ways that best fit the prompt. That's a massive oversimplification, but at its core that's what it's doing.
I have been programming in C++ for 30+ years. I use LLMs in all their forms for coding. Using an LLM for coding successfully involves breaking off chunks of functionality that it can handle... and it usually involves defining function signatures for it. You'll only know what an LLM can handle by using it a lot. More complicated uses can only be tackled by providing it with extensive guidance in the form of pseudocode. Also, I never "trust" an LLM. I have to maintain the code, so I MUST understand it. Yes, they do make mistakes... but given the size of the functions I'm asking it to write, those mistakes are usually easily spotted.
@@censoredeveryday3320 I pay for and use ChatGPT & Claude mostly for technical discussions and exploring ideas (though I sometimes use them for code generation as well). I use GitHub Copilot in vscode... and as of last night I use Cursor Pro.
It's ok for self isolated functions, usually, but it falls apart when it needs to interface with multiple systems already designed. And I would NOT use it in memory unsafe languages.
that just means you don't know how to use them or are bad at natural language. It's like being given freshman college interns and giving them tasks too hard for them
If I tell the LLM to write something, it's usually bad (even GPT-4o struggles with regular expressions). But something like Codeium in vscode auto-completing a line I've started is almost always correct, saving on keystrokes.
Totally agreed. I tried an LLM stress test where I asked Mixtral 8x7b to make a console Hello World but with a WinMain entrypoint in C++, without using any function from the standard library, only functions from the Windows API. In my requirements, the code had to work properly whether the UNICODE macro was defined or not, and there had to be no #ifdef UNICODE in the LLM answer. Let's say that it was an abject failure. There are exemplars of how to do each of the specific tasks I asked for on GitHub and on Stack Overflow, but they are few and far between. To code it, you're better off just using the MSDN docs 😂
Then don't write prompts that produce mistakes... It's not hard to guesstimate the ability of an LLM and decide to only ask it questions within its range of ability.
A lot of coding is repetitive work, and GPT is really good at repetitive stuff; that's where it helps. If you manage to build your code well enough that it is composable and reusable, the LLM will see the pattern and suggest ways to compose it correctly.
@@specy_ But that isn't making my code 10x better, it's just getting it done faster. If all it's doing is recognizing the pattern of what I'm coding and completing it, then the AI isn't making me do my job better, it's just letting me do the same job but slightly faster.
@@kaijuultimax9407 yeah ofc it won't help u make better code, but it helps u make faster code. If u can save 50% of your time when writing one feature, you can use that time to make it better yourself.
LLMs turn all bugs into subtle bugs. They turn compilation errors into syntactically correct code with logical flaws, where it takes ages to discover what went wrong.
@@alexandrecolautoneto7374 User asks "make this" > LLM outputs > user copy-pastes > compiler fails > user gives LLM negative feedback > LLM model evolves to avoid negative feedback > stealth code. Currently reminds me of so many web performance "services" that just insert JavaScript to trick auditing tools into spitting out higher scores. Mission accomplished for everyone that doesn't understand the actual code.
This is a skill issue. In 10 years, the mark of a good programmer will be their ability to debug LLM code. Prime & others are coping because the introduction of LLMs caused a total paradigm shift in regards to writing good code quickly. NVIM and all this other DX shit they obsessed over is a brick compared to Cursor AI + Claude. These guys jerked each other off over their WPMs, but are trapped in their old ways when something better comes out.
@@exec.producer2566 I loved the theory, but the reality is that LLMs are just not the right model for coding. No matter how we improve them, they will always hallucinate; it's just how they work.
As someone who is a novice with code, I concur with your opinion. It's vital to have a strong foundation of understanding, and an LLM should supplement this, not replace it. If you always take shortcuts you will never build up the knowledge and skills to do anything well, and I think this is true for everything, not just coding
@@Karurosagu Linking the entire documentation and then asking your specific query is faster, as the "getting started" might not answer the thing you need.
@@HRRRRRDRRRRR Most of them? IDK, I think the quality depends on a lot of factors. And by the way, English is not my main language and I've read many docs without problems, whether they are poorly written or not. Most problems I've had have been with: very new libraries and frameworks, very specific topics within existing docs that haven't been updated after a new release, misinterpreted features that turned out to be hotfixes and then got removed, and so on. TL;DR: Skill issues.
I used Gemini for a short while while writing my thesis, but after it didn't help and instead I had to give it the answer, I stopped using it. I don't have much experience with them, but I still think they can be useful for simple tasks or as a starting point for figuring something out; as a replacement, though, or for trusting the output without confirmation, I don't think they're good enough.
LLMs are a great replacement for searching for obscure methods in API documentation. When string matching doesn't cut it, I always resort to LLMs. And they find stuff that I couldn't find in a few minutes of searching, with ~70% accuracy.
For me personally, as low/mid-level IT support, LLMs help me a lot, because I was always a quieter person and somewhat shy about asking the seniors at work questions... With LLMs there are no such issues, and I excel at tasks quicker ^_^
I am with Prime here, in the sense that having GPT write parts of the code is no problem and is fast, which is great, but eventually you don't know what is going on in the codebase. Especially when you return to a project a month later.
Nah, I use Copilot all the time to create little scripts for me. They are often wrong in some places, so I have to either modify the code or tell Copilot to correct it. But it still saves me a lot of time, as I just have to tell it which language, which input I have, and which output I expect, and it writes code that mostly works... the rest is some discussion, or the 10% bug fixing. It also helps a lot if you are not really familiar with the language, e.g. scripting for PowerShell while you are more of a Linux guy ;) Still, the code is more readable than most developer code I deal with. Maybe a few too many comments in the code :P but you can remove those as well.
His view is probably not negative because the code that he reviews is from developers that know how to use AI. Meaning they don't just tell a LLM to "make a driver in rust" they just use it for tedious repetitive code tasks.
5:45 I interpret Linus's opinion here as "LLMs can be a great code linter, but you should treat their output as an opinion about the code and then decide for yourself whether to actually change it". Though this obviously assumes that the developer's skill issues are more about the accuracy of the implementation than about the overall algorithm, misunderstanding data structures, or thread locking.
For me it's pretty simple. The day I meet a developer, software engineer, sysadmin, network admin, cloud admin, QA tester, data analyst, data engineer, devops engineer, or cybersecurity professional who lost their job because a random person who is not an IT/CS professional can do their job using LLMs, I'll be like "God exists, and it's AI."
I think it'll be either domain professionals using LLMs to do their job, or it will just be entirely automated. If you have a random person who knows nothing about the domain using an LLM to do the job, then you can likely just automate the job at that point.
AI is great for remembering syntax with context. You don't ask it to build a house, which it's terrible at; you ask it to build a wall 4 times, a floor, a roof... etc., which it's actually pretty good at.
Handwritten code and AI-tool code are the same if you have the patience to test/debug the code. Don't blame AI tools for your own limitations, for not knowing how to properly test code, or for just being plain lazy. If you don't know how the code works, it is your responsibility to learn it; otherwise it is like a dagger above your head once it is deployed in production.
It seems like he sees it like a logical consequence of C -> compiler magic -> assembly. Now it is AI -> some black box magic -> C -> compiler magic -> assembly
16:32 not negative, but he has very high standards. He's like Gordon Ramsay, who can be an asshole in the kitchen, but is a sweetheart when you see him outside the kitchen or interacting with children
@12:00 This is the crux of the problem. Using the 'always blow on the pie' quote. Always create and run the tests even on LLM created/guided work. This isn't just field specific.
LLM-assisted development is worse than StackOverflow-assisted development. I am not worried. There will be more work fixing the LLM fallout for the professionals who know what they are doing.
As a freelancer with 30+ years of experience, mostly working on projects for a few consulting companies, I’ve found LLMs incredibly useful. Switching between languages like Python, React, C++, and C# means I often forget specific details, especially with new updates. Recently, I was assigned to a next.js project. Yet another js-framework, 😞 , and LLMs have been a lifesaver for quickly getting up to speed with syntax. They also help generate solid code with good comments on common code stuff, including unit tests, which keeps my workflow pretty efficient if I can say so.
Linus talked about LLMs in future tense. He never said anything about using them for programming here and now. I think he's just optimistic about their potential in the future.
My problems with LLMs currently:
- For my hobby projects they are a useful help to bootstrap and do boilerplate. For example: create me an ORM for this database schema.
- At work: large codebases in TypeScript, Dart, Go, and Rust. Copilot especially is useless. With recent Gemini and Claude Sonnet I had slightly better experiences, but it's still awful.
My long-term concern is the production -> training-data loop without much of a feedback mechanism in between. It has already been shown that the quality of LLMs declines the more you feed them LLM-produced input data. So I currently won't rely on them much. At least not for code where it matters.
Best use I have for LLMs is as a user, integrating very basic features into websites through Greasemonkey. It doesn't take long to change a website that requires hovering on an icon to show a picture into one that shows them by default. It's not hard, and it's not code that will see reuse; it's just fiddly normally. And with ChatGPT you can just roughly ask it with the right info and receive a good-enough result.
15:50 maybe not dumber, but out of practice. I'm a full stack independent developer that employed a frontend dev for a few years. I find that using LLMs is the same if you don't "take in" all the code that is produced
That article about the dude telling the LLM that it isn't even answering his questions and is just stating untruths repeatedly, while it prefaces everything with "Sorry,"... that's happened loads of times in almost every LLM I've tested, on so many topics and uses.
I have been arguing since GPT3 that AI will be amazing static analysis tools one day, awesome to see linus agree. It makes perfect sense as a good 30% of the bugs we catch in code reviews at work probably could have been caught by an AI (although maybe not a large language model with the current design)
LLMs are very useful to people who know enough to check their work. They're a productivity multiplier. Additionally, they serve as a decent second set of eyes. They do catch bugs.
I really like whatever Microsoft has baked into the new Visual Studio where, when you're refactoring a project and you make a similar change 2 or 3 times, VS will give you the red "press tab to update" the next time you move to a similar line. It sure beats trying to come up with a regular expression for search and replace. Sometimes.
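For comparison, this is the kind of search-and-replace regex that feature saves you from writing by hand (the method names here are made up for illustration):

```python
import re

code = "user.get_name(); order.get_total(); cart.get_total();"

# Repeated refactor: rename every get_total() call to total_cents().
# \b keeps the pattern from matching names like budget_total().
renamed = re.sub(r"\bget_total\(\)", "total_cents()", code)
print(renamed)  # user.get_name(); order.total_cents(); cart.total_cents();
```

Fine for a one-off, but getting the word boundaries and escaping right every time is exactly the fiddly part the editor suggestion replaces.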
I have been using LLMs for quite some time. I am not a full-time developer, but I have to write some Python code from time to time. It really helps me when I start a script. You just tell it: I need a class named XYZ with methods a, b, c which take the following parameters and return that; the script has the following command line parameters... etc. And it will do that perfectly. Then you have to kick in and write what you need your script to do. From time to time you ask how to get this and that done. At the end you can let it help you optimize your code in a short time, if you for instance have a little too much if-then-else stuff in your code. But in the end you have to understand each line and judge what is a good recommendation and what is not.
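As a rough sketch of the kind of scaffold described above, with all class, method, and flag names invented for illustration:

```python
import argparse

class ReportBuilder:
    """Skeleton of the sort an LLM can stub out reliably; the bodies are yours."""

    def __init__(self, source: str):
        self.source = source

    def load(self) -> list:
        # Left for the human: this is where the real work starts.
        return []

    def summarize(self, rows: list) -> str:
        return f"{len(rows)} rows from {self.source}"

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Toy report script")
    parser.add_argument("--source", default="data.csv")
    parser.add_argument("--verbose", action="store_true")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    builder = ReportBuilder(args.source)
    print(builder.summarize(builder.load()))
```

The LLM gets you the boilerplate shell in seconds; understanding and filling in each method still falls to you.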
it might also be because even though you are using an LLM for help, the PR you produce will always depend on the person doing it. Whether they rely fully on it, or not.
You should do the recent François Chollet ARC Prize talk at some point. Getting takes on LLMs from engineers like Linus is more of a personality test than anything else at this point. You should listen to what actual AGI researchers think about LLMs.
Fear of "LLM collapse" feels to me like fear of Y2K. Engineers saved us from that, and engineers will save us from this. I'm happy to contribute to making that statement true.
@@riakata Engineers usually help by identifying enough technical issues that their managers start yelling at the right rich people to quit being idiots. Once the right rich people quit being idiots, the technologies will begin to be more targeted and we'll see less random crap everywhere. I'm not buying the apocalypse scenario yet. None of the numbers I've seen have felt convincing. However, I'd appreciate you providing some if you think it's worth your time.
There is a Chinese term, 盲人摸象 (blind men and an elephant). This is what I feel like when I ask an LLM to generate code I have no prior experience working with.
What's bad is LLMs help me remember things I've forgotten, the really rare things I use once every few months, and they speed that up. WHICH MEANS I never memorize those little things. An example: when building the initial code for a program, you run through all the logic a thousand times, loops and so on. But things like opening a file or grabbing a different library, which you only do once every few weeks, are much harder to memorize.
Your brain is a very efficient cache. If it doesn't retain something, it's because you don't need to retain it. Something you do once in a blue moon is not something worth remembering off the top of your head, since the extra time taken to re-familiarize with the concept isn't that big compared to everything else.
@@ruukinen Except I've found that it also means that you get less ideas that involve the things you rarely do. And some of my best ideas combine my deep understanding of the current system with smaller rarer things I encountered in the past. But when I use LLMs, my brain got rusty at doing that. So I backed off, and it feels a lot like rebuilding cardiovascular endurance after not exercising for a few months. Maybe that's just me...
I've found LLMs mixed. The line-based auto-complete is 80-90% useful, especially writing similar repeated code. The other times it has got in the way, but on balance I generally prefer to have this functionality. Using LLMs to ask questions, I found it helpful when trying to identify a Bootstrap class -- my Google searching didn't find the class, but asking an LLM helped me find the class name that I then looked up in the docs. Some other approaches I've asked for I've ended up adapting the code to the way I wanted to write it, using the LLM code as a basis. In some other instances it didn't help me solve the problem so I used different approaches.
The difference is this: He cares about the results that land in his inbox. What tools people use is kind of beside the point. He will rake you over the coals if the patch you submit sucks, regardless of how you got there.
AI, by definition, will never be smarter than the code on which it was trained. It can only hope to accurately match your prompt to something already written by a human.
Which is why I find it funny seeing people claim AI has 10x'd their workflow... and their workflow was a copy-paste chat webapp that has been made a million times before. Of course the AI helped out a lot: your project has already been done and repeated countless times, it's not anywhere near novel, you're not solving a real problem.
There is a line with LLMs: helpful on one side, problematic on the other, and today they exist on both sides. You can use an LLM as a tool to help you work; some might like Copilot, others may like using LLMs to scope out a problem, and in other cases they may not be useful at all. I don't think that they will be replacing devs anytime soon, but they will be alleviating us of some tasks. Using an LLM is like being part of a team and having to read the code of some other developer: you can read it for structure, or you can read it to solve the problem it is trying to address.
I've always resonated with Linus, but when he said 'we all are autocorrects on steroids' it was solidified. This man is my spirit animal! 'AI isnt actually intelligent' neither are most of the people I meet, but it works the same way as those people and gives better results. If we're gonna refer to humans as an intelligent animal, we shouldn't have a problem with calling AI 'intelligent'. It's all just pattern recognition, VERY few people apply logic instead of pattern recognition.
On unit tests: they may be useful for simple systems but they have nothing on functional and integration tests. As soon as you have to introduce a mock for a unit test, you're using the wrong kind of test
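A minimal Python illustration of that smell, with invented names: the mocked "unit test" below passes while exercising none of the real gateway's behavior, which is exactly what a functional or integration test would actually cover.

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Toy checkout logic that delegates the charge to a payment gateway."""
    if cart_total <= 0:
        return False
    return gateway.charge(cart_total)

def test_checkout_charges_gateway():
    # The gateway is a Mock, so this "passes" regardless of what a real
    # gateway would do with this amount, currency, auth, timeouts, etc.
    gateway = Mock()
    gateway.charge.return_value = True
    assert checkout(100, gateway) is True
    gateway.charge.assert_called_once_with(100)

test_checkout_charges_gateway()
print("passed, but the real gateway was never exercised")
```

The test mostly verifies that the mock was wired up the way the test author expected, not that checkout works end to end.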
It’s interesting what you said about software development and understanding the problem domain, and bugs due to edge cases or things you’d not quite fully understood. Because that sounds like you’re coming up with a solution to some kind of problem within a set of constraints (or trying to understand what the problem actually is about and what the constraints are.) It’s like a level higher than a particular programming language. It’s more like designing and being able to understand certain types of problems. So say if you were using a language that just didn’t allow certain classes of bug (like memory errors), so it was high level and the LLM didn’t need to generate that kind of code, and it became more about expressing solutions to known problems (I want to say applying patterns, but I don’t mean design patterns; something probably more high level), and if an LLM was working at this level, then I think they’d be really useful. “I can see you’re trying to do this kind of software, have you thought about applying technique / approach / algorithm X.” If you could somehow turn that knowledge about problems and their solutions into some abstract model that an LLM could use to spot patterns and suggest techniques, to help you understand the problem space, then that’d be good. I genuinely don’t know if LLMs work like that at the moment. 🤷♂️
I kinda feel like Linus is like that friend who tells you when your code is bad, but in a nice way 😄. It’s nice to see someone not just blindly hating on LLMs. Makes me think there's hope for the coding world! 🤖💻
100% it still takes SMEs to be effective. The LLM still depends on the inputs being perfect to be right. Copilot depends upon inputs that are not always right, or not always relevant, or not always applicable. Code that is older version, etc. Still takes insights and SMEs to be useful
LLMs have come a long way for coding even in the last few months. Sometimes it's about identifying the use cases its good at. One of the best use cases I use it for is dumping in an entire project, often times with lots of spaghetti code into context (Which now get as big as 2 million tokens and can be cached on repeated calls to save money) and asking it to locate the parts of the code that do X. It will surface the code, I can do a quick find and I'm on my way. It probably grabs what I need 80-90% of the time on the first shot with modern models. It seems kind of like common sense but it's a good idea to use LLMs for the aspects of coding they are good at and probably a bad idea to use them for coding tasks they underperform on. Unfortunately, things emerge and change so fast what an LLM is good or bad at coding wise is shifting quite a bit and not obvious out of the box.
LLMs writing code is my dream come true. I remember writing code in BASIC as a child on a Commodore 64; now I can just ask the LLM for a program and it produces awesome results, and it's evolving constantly. (I usually use ChatGPT; I remember ChatGPT's first steps in writing code, from the Codex playground sandbox. It was so awesome for me.)
LLMs use natural language, which can be ambiguous. Programming languages are unambiguous: each command does exactly one thing. That's why LLMs can't fully replace developers.
It's like asking a developer to recite an algorithm from memory without even writing it down, because LLMs can't think before they speak. They have to immediately generate the next token, one after another.
@@JasminUwU at least the LLM has context which is always there. Our memory is far bigger and much deeper, but not as precise (we misremember, and such)
In the early days of linux I downloaded slack, IIRC. But I got a kernel panic because I was supposed to change over from the boot floppy to the root floppy (something like that, maybe someone will remember how this worked better -- basically this was the first problem anyone trying it for the first time would likely have, and surely well documented). So, I e-mailed Linus. He explained via private e-mail that you had to change the floppy or whatever. Yeah, problem solved, I was "in!". Can't remember if I thought to thank him.
Linus isn't negative - abrasive, maybe, but he's realistic and tells it how he sees it. Based on this interview, he sees the LLM as a tool that the programmer can use - making poor use of a tool, or not accounting for issues the tool may have, is on the developer - not the tool. I agree 100% that the current generation of LLMs work best when you know exactly what you're after, and can verify the output from the LLM as being what you want. I use them daily and they are a massive time saver, particularly when context switching between front end, back end and infra and all the different languages and tooling and headspace that entails; LLMs in that context help maintain flow
He is a traitor who let Rust into the family, and they started rewriting everything. Linus Torvalds also expressed his disappointment that Rust adoption is not happening faster, partly because many old kernel developers are used to C and do not want to learn a new language.
Of course that's what would happen. Progress is slow; it takes a lot of time. Rust is still a "new" language too, but it's a step in the right direction.
I've pasted in code that doesn't work into an LLM that was non-trivial and it spotted a bug for me, where I got the memory ordering wrong on some atomic operations. I feel like this is the sort of thing where the right answer exists out there a multitude of times and the LLM can pull together all these resources and explain why your code is broken. It's super useful for that stuff, and is way better than just trying to only absorb this from the docs. Also it's a great rubber duck.
That's what I've been saying!! Even if you had a "perfect" AI that was always right, if you just use cheat codes for everything you never learn. That's like half the fun in life: learning.