Yeah, in his latest video Carl also has some audio issues (mostly he doesn't maintain a constant sound level). It looks like Carl is just learning how to use his new Shure MV7.
I've spent the past 15 months comparing models for one of the large companies and, to be honest, in the past few months they've mostly gotten worse. There is no way they will ever replace anyone who is competent in any domain.
They are getting worse, because they are running out of code generated by humans. Pretty soon LLMs will only have sh** code generated by other LLMs to train on...
@netherportals AI won't get better at coding with agents, not while agents are powered by LLMs. LLMs have no reasoning built into them; they are just sequence-completion engines. Reasoning is key to coding, because coding is problem solving.
About the "old way vs new way" problem that Carl brings up: I noticed something similar, which is that these models have trouble doing things that are similar to something common but slightly different. In one case, I asked ChatGPT about the Nebulabrot/Buddhabrot fractal. It correctly explained that the Nebulabrot is an alternate method of rendering the Mandelbrot set, and even described the algorithm accurately. But when prompted to write code implementing that algorithm, it would only produce code for the conventional Mandelbrot fractal, not the Nebulabrot. When I pointed out the problem, it would apologise and refactor the code, but only ever produce the conventional Mandelbrot. The problem here is that rendering a Mandelbrot is a popular programming project, with countless examples available in many different languages, while the Nebulabrot is much rarer, so it kept falling into the Mandelbrot "happy path".
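The distinction the commenter describes is easy to make concrete. A minimal sketch of the Buddhabrot idea (plain Python; the function name, parameters, and sample region are my own choices, not from any particular implementation): unlike a plain Mandelbrot render, which colours each pixel by how fast its own c escapes, the Buddhabrot samples random c values and accumulates a histogram of every point visited by the escaping orbits.

```python
import random

def buddhabrot(width=80, height=80, samples=20000, max_iter=200):
    """Accumulate a histogram of orbit visits for escaping points of z -> z^2 + c."""
    hist = [[0] * width for _ in range(height)]
    for _ in range(samples):
        # Sample a random c from a box covering the Mandelbrot set.
        c = complex(random.uniform(-2.0, 1.0), random.uniform(-1.5, 1.5))
        z = 0j
        orbit = []
        escaped = False
        for _ in range(max_iter):
            z = z * z + c
            orbit.append(z)
            if abs(z) > 2.0:
                escaped = True
                break
        if not escaped:
            continue  # points that stay bounded contribute nothing
        # The key difference from a plain Mandelbrot render: replay the
        # escaping orbit and increment every histogram cell it passed through.
        for p in orbit:
            x = int((p.real + 2.0) / 3.0 * width)
            y = int((p.imag + 1.5) / 3.0 * height)
            if 0 <= x < width and 0 <= y < height:
                hist[y][x] += 1
    return hist
```

The Nebulabrot variant is then just three such histograms rendered with different `max_iter` values as RGB channels, which is presumably the part the model kept dropping.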
Similar story building UIs. There's an old way and a new way to write JS in Nuxt.js, for example: the Options API and the Composition API, the latter being the recommended way nowadays. Unfortunately, ChatGPT and Claude both often recommend the former, and even mix the two up, which anyone (or anything) with any common sense would never do. So anyone who asks the LLM how to start their project (instead of reading the docs!) ends up starting off on completely the wrong foot.
I think once AI can actually replace a software engineer, we'll have bigger problems to worry about, such as it being able to self-improve and having the potential to replace every digital job ever. In other words, I think we'll have reached AGI once that's possible. It's been a while now since we've seen major improvements in the LLM space. In their current state they're only really useful for spewing out templates to work from, explaining code, and acting as a general aid for programmers. Even with the current top models operating inside software like Devin, it's very likely they'll just get stuck in a loop or dead end and require hand-holding and general guidance with the code from an actual programmer.
My company wants to do just that. The problem is, we all hate it, and secondly, the AI is not that smart. It's actually really not fun to fix its code afterwards. You feel more like a bugfixer than an actual developer. It gets to the point where you would have been more effective doing it yourself from the start. It's still like copy-pasting from Stack Overflow, but much worse: from Stack Overflow you would get a method for something and adapt it to your needs, whereas the AI hands you an entire microservice that you now need to fix.
Totally agree. Possibly programmers who claim to use LLMs to write code are just bragging to appear mentally tough. In their private coding they are likely not using LLMs.
If AI can code, and code well, why is Windows still so crappy and insecure? Isn't Microsoft OpenAI's biggest investor? My second question is: if Windows is eventually rewritten by AI and eventually a bug occurs that the AI can't fix… no human will be able to fix the AI-written code, nor re-code the AI. What then?
It will definitely replace copywriters and maybe designers. Will it replace devs? Maybe it can, by using building blocks and making solutions modular and out of the box... who knows. It's the lure of cheap code vs newly created, more bespoke work. At the end of the day, if companies need the same solution again and again... maybe AI can deliver it? But I suspect our coding skills will just end up migrating into the machine-learning sphere as that industry expands instead.
First it was going to be offshored to India. Then it was going to be taken over by desperate immigrants. Now it's apparently going to be automated away by AI. Meanwhile I continue to earn 6 figures and regularly get multiple job offers. 🤷
Your argument is that it took too long and used the wrong algorithm? Well, those things can be improved and learned. What it can do now was unthinkable years ago; you think it stops here? If it can do something in principle, then it can also do it better, faster, more efficiently, and completely. So yes, it can and it will eat everyone's lunch.
They are glossing over the fact that AI is pretty good at saving developers time on some annoying menial tasks. I regularly use an LLM for autocompleting config files and regular expressions. I'd never hire anyone just to do that work, but it does save time, and I often make fewer errors using it than I would without it. LLMs may take the jobs of some junior engineers by making the jobs of more senior engineers slightly easier. Many companies outside of Silicon Valley keep okay or bad engineers because a good or great engineer is either too expensive or won't apply.
Wrong. One, that's not the marketing. That's not what managers are reading. Managers are reading "AI will soon replace all your programmers, so go ahead and fire them now". But even ignoring that: maybe you're a thoughtful programmer who carefully checks all your PRs, but the average developer - as in 80+% of people using AI coding tools - just blindly accepts suggestions. Multiple reviews of code bases show vastly increased churn from AI code hurting maintenance. How do you know that the unit test the AI wrote covers all the edge cases? You haven't even thought through the problem. You just see that the AI did something vaguely decent looking and hit accept. Related problem: what if the AI is being endlessly tuned on getting people to accept its suggestions, and not at all on the suggestions being correct? Imagine if you start seeing PRs that don't even have code, just a politically biased statement like "elonMusk = sux", and people keep accepting them because they like the result. That's way more likely than properly curated results, the way the industry is going.
@@KevinJDildonik How am I wrong for stating how I use it and my opinion about the future? I don't use AI in the way you are describing, and most people I know don't use it that way either. I use it as an additional tool in my arsenal, like linting or autocorrect. I agree with this video that software development jobs aren't going away, but I also don't believe the tools that take advantage of LLMs' strengths are going away.
Devin is overstated, but let's admit it: a lot of software work, maybe 99%, is repetitive and lower quality than what Claude can generate. There's no point just criticizing Devin; software engineering needs to shrink and we need to reinvent ourselves, fast.