Frankly Developing is your channel for becoming a better developer. Dive deep into our coding adventures and continue mastering the craft of software development in small, focused sessions. Your host, Frank Raiser (PhD), has been developing professionally for decades and has also taught thousands of developers in person. Now you can profit from his experience too - check out the videos and you might just learn something that helps you on your next success.
I doubt it. There's a theoretical point at which AI is so good that we humans no longer need to look into any code at all. At that point, it is clearly irrelevant what that code looks like. I don't see us anywhere near that, though, as detailed in my other video.
I am at the bleeding edge of AI and I can tell you, your comments are dangerously ill-informed. This is not a time when people should be burying their heads in the sand. You sound reminiscent of those who said smartphones or the internet were a fad. I can now do the same work I would have employed 10 devs to do with 2 - what happens to the other 8? Most of the time, if you say the LLM didn't work or made an error, it means you don't yet understand how to use it. It is deceptive in that it speaks like a human, but it cannot be prompted that way. My full-time devs now have a share in the company because that is the right thing to do in this present moment. People need to be doing everything they can to adapt to the change. The minute a digital decision was made without using any type of logic, that set us down a path that will change everything, and the quicker you see it the better off you will be.
Thank you for your insights. I wouldn't go as far as saying anything about burying my head in the sand. I use AI, I even like some parts of it, and yes, I may just not know how to use it well. But at this point, I just have a different opinion and will continue to watch closely how it develops.
The facts are that there are already economic changes, with large companies firing thousands of developers globally. At this point a single developer can do the work of 10, and even people with almost no knowledge of software development who have a working brain can make incredible stuff in just a few hours. I've used Cursor to create entire apps for personal use without writing a single line of code, just by talking to an LLM and having it create the entire thing by itself (and you don't even need any knowledge of the field to make it understand; it is already better than humans at understanding what you want). Even current LLMs are capable enough to make more than 90% of all developers useless (o1 is better than almost all developers, maybe except the top 1% who are the ones driving innovation in the field). AI is just too hard for you to understand, and you don't even know the facts about the current state of the field. You also clearly can't comprehend the rate of progress in the field: whatever they can't do now, they will be able to do in a few months. edit: I hope to see you just 2 years from now saying how blind and wrong you were, it will be fun (if you have what it takes to admit your mistake).
@@JohnMcclaned my advice is to not use any LLM at all while learning a new language, and also to write out all code yourself (the only copy-paste allowed is of your own code). Once you master your language/platform, LLMs are fine and can improve your performance (but be wary of hard-to-spot bugs).
This was a very realistic and reasonable POV imo. People who don't know software engineering might not understand however. The current AI is just another tool for fast searching and other use cases, but surely not building software. This however might change with the next big leaps of AI.
Which one - the AI we've had for 60 years, since ELIZA? 🤣 LLMs are vectorial databases with generative capabilities, no more, no less. BTW, I am from the other side of the madness, what is known as weak AI... we never believed those crazy statements. However, we enjoyed the spectacle. As I told my friend Bob: "pass the popcorn, this is going to be quite interesting... many go in... none come out"
I will tell you honestly: old heads who stand on past accomplishments will be brushed aside by a flood of newbies with "ideas" that, for now, the programming world looks down upon. Very soon, just knowing CS won't suffice; creativity and applied knowledge from other fields will be the largest distinguishing factor. Purists will be left out in the next cycle.
AI is just overhyped; I've heard so many times that it "speeds up development by 50%" or by 2x, while in reality it is only good sometimes (mostly for basic use cases, or just as a shortcut to Stack Overflow answers / better search). And if you account for the time it slows you down by generating nonsense / non-working code, then overall it actually speeds things up by more like 5%, MAYBE.
Disagree. AI is much more than what AI is today. It is also what AI is tomorrow, next week, next month, next year and next decade. AI in the near future will make your statements entirely wrong.
you're really sleeping on how much frosted flakes are going to revolutionize computer science. I mean, sure, they're not doing anything _now_ but you gotta think about what they'll be doing next month!
The only reason any programmer has a job is because someone needs them for a job. Technically they don't need them; they could do things themselves. Hopefully AI helps people realise that.
No, they can't. They can't even articulate their problem/idea, nor do they have analytical thinking. Programming is not only converting ideas into code; that's why the term software engineer was born. Anyone could build a house by themselves, yet you have construction workers, electricians, plumbers...
@@FranklyDeveloping sorry man, ostrich effect, and I hate to say it but you've buried your head in the dirt. I'm getting my master's in software engineering right now, and though I'm sympathetic to your position, it's avoidant.
I was in a small startup with an AI wizard CEO; he was constantly telling us to "use LLMs to move faster!" It just doesn't work for senior devs. When I need help it's a novel problem, and since LLMs can't actually think, they hallucinate answers to these problems. Boss man offered to write some code that would take, in his words, "a couple hours using ChatGPT". It ended up taking him the entire weekend, and he expressed how much harder it was than he expected.
You can get pretty good and solid results by investing time and effort into instructing them. I suppose it depends on what you want them to make easier for you. And we are still at the LLM stage of AI, not actual AI yet, if I understand right?
"Actual AI".. depends on what you mean. The usual term is AGI, explained at the disclaimer part of the video. And that we definitely haven't reached at all yet.
@@FranklyDeveloping Yeah, that in itself seems a bit paradoxical, and it reveals that there are still some pages left to be written in that chapter. It's one of many topics I like to thought-experiment with, using my own model, which I recently managed to load successfully for the first time. I have been familiar with and used some of Google's models now and then over the last 2-3 years, but for the last 8 months I have been very immersed in it, only using open source, and have learned a lot - just not about actual coding from the ground up. But I am recognizing patterns here and there and am considering a career adjustment, adding what automation and the current possibilities can help improve. Coming from a background as a biology/fitness instructor and nutritionist / nutrition-structuring assistant for others, I see great potential for streamlining a lot of processes, as well as for ease of access to relevant information on the go. My project in that context would only be about a third of the way along without the large language models. But I also get what you are saying, and I don't think you are wrong - more like a counterbalance to the overhype it has at the moment.
@@zilezia Yes I did, do you understand why I mentioned trajectories? The video was about the limitations of the state of the art, with absolutely no reference to where things are heading. AI cannot replace programmers, that's true, but as a programmer of 38 years who now works exclusively in AI, I'm extremely confident that AI will replace programmers in just about every corner of the industry. It's not going to happen this year or even this decade, but it will happen, and it will happen completely.
@@zilezia It is already better than the vast majority of people in most fields. Have you even tried Claude 3.5, or even better, OpenAI's o1 Preview? The latter is much better than almost every developer, maybe except the top 1% of humans who are the ones driving progress in the field; the rest are already useless.
Good video. I've used Cursor a bit, mainly on fresh projects, and it seems very nice for smaller projects like this. I may just be sceptical, but I am really having a hard time imagining stuff like this used in the backend of large enterprise applications - at least reliably. I've noticed that when I use tools like Cursor and just 'apply' code suggestions, I quickly lose "touch" with the codebase, if that makes sense... and that might make it harder for me to track down the bugs that will inevitably be introduced at some point by the AI as a byproduct of hallucinations etc.
Thanks a lot Frank for giving your view on the SOFR - we are currently working on Part II - so which methods the companies use in which context --> then it will get even more interesting!
This material seems rather basic. I actually can't think of another way to debug. At the same time, this video essay makes sense of and aligns the ad-hoc process of debugging with the scientific method, which I've never thought of. I think this is useful. I would be helped by a brief example of applying the method to a specific bug.
Hi I just want to say this is interesting content and people will start watching this soon. Not really a C# guy myself, but this can definitely be helpful to someone using different tech too.
The usefulness of SP is in communicating opinions during estimation. Like one person can say that a task is small and another one says it's large. When each of them explains what their opinions are based on, they share knowledge.
If it works for you, great! I think "small" doesn't need discussion and if someone notices a "large" then just discuss it. No need for story points really 🤷♂️
Hey, your expertise is absolutely top-notch! 👏 I really appreciate the depth you bring to explaining the 'Tell Don't Ask' principle and how it affects dependency management. I'm excited to implement your advice and refine my code using these insights. Can you share any additional tips for avoiding unwanted dependencies during the early stages of development? What's your go-to approach for maintaining clean code from the start? Thanks so much for responding to my previous comment. I took your advice, and it made a huge difference! 🙌 You've genuinely transformed my approach to coding, and I'd love to share my story with you - I've sent you a DM on YouTube, and my profile picture matches this one. I'm also interested in learning more about your services and possibly working with you. Could you provide some details about your coaching or other offerings? Thanks again for everything!
Thank you for the kind words 🙏 my approach is to simply try your best and embrace imperfection. There will always be situations where you end up with unwanted dependencies. It's not about avoiding them entirely, but being able to quickly address them.
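For readers who haven't seen the video: a minimal, hypothetical sketch of the 'Tell, Don't Ask' idea mentioned above. The Account type and its members are made up for illustration and are not taken from the video.

```csharp
// "Ask" style couples the caller to the data and the rule:
// if (account.Balance >= amount) { account.Balance -= amount; }
//
// "Tell" style keeps the rule next to the data it guards, so callers
// don't need to know about balances or overdraft rules at all.
public class Account
{
    private decimal _balance;

    public Account(decimal openingBalance) => _balance = openingBalance;

    // Callers tell the account what to do; it decides whether it can.
    public bool TryWithdraw(decimal amount)
    {
        if (amount <= 0 || amount > _balance)
            return false;

        _balance -= amount;
        return true;
    }
}
```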
This was really cool. Refactoring real code with a real worked-through and evolving solution instead of these sterile and contrived examples. Keep it up.
In game dev it's not uncommon, and typically much more performant, to use switch statements instead of a vtable to accomplish polymorphism. Sometimes this is even taken to extremes to improve readability and usability, by using an enum per class or implementation and following a pattern commonly called enum dispatch. When the number of classes, methods, or functions is relatively small, this often outperforms the compiler's generated vtable that is used to accomplish polymorphism, as the access pattern is far more predictable.
Absolutely! These are trade-offs you make when you need that kind of performance. But even though this is about a small game, I still wouldn't consider the menu to be in need of such massive speeds. And if one doesn't need that speed, it would be a bad deal.
If I recall correctly, this doesn't apply here at all. If a C# class isn't declared "sealed" AND doesn't inherit from other classes, all method calls get called via a vtable. There are only a couple of specific cases in C# where a vtable isn't used. Point being, a switch with an enum will not help, because the "Update" method call is invoked via vtable no matter what.
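For illustration, here is a minimal sketch of the enum-dispatch pattern described above; the entity types and fields are hypothetical and not taken from the video's code.

```csharp
// Enum dispatch: instead of a virtual Update() resolved through a vtable,
// each entity carries an enum tag and a switch selects the behaviour.
public enum EntityKind { Player, Enemy, Projectile }

public struct Entity
{
    public EntityKind Kind;
    public float X, Y;
}

public static class EntityUpdater
{
    public static void Update(ref Entity e, float dt)
    {
        // A switch over a small enum is a predictable branch,
        // whereas a virtual call goes through an indirect jump.
        switch (e.Kind)
        {
            case EntityKind.Player:
                e.X += 2.0f * dt;
                break;
            case EntityKind.Enemy:
                e.X -= 1.0f * dt;
                break;
            case EntityKind.Projectile:
                e.Y += 10.0f * dt;
                break;
        }
    }
}
```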
Very nice, I think you have a nice balance of leaving the code in a good readable shape but not perfect. There are many devs that don't know when to quit :) So you have good hand-washing hygiene (we say that in Sweden, I think you understand what I am after).
Thank you for the kind words 🙏 I'd say in this video there are still a few rough edges left. You can look forward to the next video, in which I'll show my approach to the final cleanups before doing a PR. Again, not perfect, but presentable to a maintainer.
I've never used story points as an estimate of how long something will take, but of how complex a piece of work is. This can then be used as an indicator of the quality of your code: if something of equal complexity is taking longer and longer, then your software is becoming hard to change. The main reason for this is that more and more logic is just being piled onto existing code without it being continuously refactored to keep it easy to change. Decreasing velocity means your code base is getting smelly.
Good points, though I prefer not to measure these things. Usually everyone on the team knows quite well when complexity is reaching that bad point. So I think the story points are not so much the means to detect the problem, but instead a possible, but maybe not the best, way to communicate it.
Clickbaity title. You cannot effortlessly write bug-free code of any non-trivial complexity. And then there's the question of how you test it, which often consumes more effort than writing the code in the first place.
I grant you the maybe clickbaity... but there is no need to test! The point of using the type system is that successful compilation already gives you a far stronger correctness guarantee than any automated test could (limited to what you can express in the types, of course).
@@FranklyDeveloping I'm sorry, and with respect, but I have to disagree with you. I wrote software professionally (mainly C) for 20 years. Proving that a function or program or system meets its requirement is what I mean by testing. If we take the old chestnut of a sort function, I don't see that a type system addresses typical bugs such as "out by one" on an integer (e.g. a loop count) that might leave an unsorted number at the end of the list. Easy enough to test - just give random input data and check it's all in order. I don't see how your type system addresses this kind of programming bug.
@@greyshopleskin2315 Ever heard of Lean? Agda? Idris? Type systems can be so strong that you can prove the correctness of just about anything. The main reason (I think) why this is not used widely is that it's optimized for proving mathematical theorems (which are expressed as types!), not real-world applications. Also, at this point, it would require tons of people to learn something rather complex and rebuild parts of their infrastructure from scratch (because obviously this is only useful if the libraries have stated and proven properties). And there's of course still no guarantee that the properties you encoded in the type system match the expectations of the actual users, but that's true of tests too.
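Within what a mainstream language like C# can express (far short of Lean/Agda/Idris proofs), a hedged sketch of the weaker version of this idea is making an invalid state impossible to construct, so that case needs neither a runtime check nor a test. The NonEmptyList type below is hypothetical, not from the video:

```csharp
using System.Collections.Generic;
using System.Linq;

// C# cannot prove a sort is correct, but it can rule out whole classes of
// bugs at compile time: this list is never empty by construction, so
// First() is total and the "empty list" bug simply cannot occur.
public sealed class NonEmptyList<T>
{
    public T Head { get; }                 // always present by construction
    public IReadOnlyList<T> Tail { get; }

    public NonEmptyList(T head, IEnumerable<T>? tail = null)
    {
        Head = head;
        Tail = (tail ?? Enumerable.Empty<T>()).ToList();
    }

    // No empty case exists, so no check and no failure mode.
    public T First() => Head;
}
```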
Feels like, in the last example with RocketWithFuel and RocketWithO2, if we bring this approach to a more complex example with more complex invariants, it will get messy (not to speak of performance). I think we need some kind of compile-time asserts for this, but I'm not sure what that would look like.
If you mean messy as in many classes for combinatorial variants, then yes. What we usually do in those cases is something like the builder pattern to ensure invariants on the final objects only.
Yes of course. You may not always have the compiler to check it automatically, but a good API similarly tries to limit the number of invalid or inconsistent requests that are possible. Less checking needed, fewer errors possible.
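A minimal sketch of that builder idea, assuming a hypothetical Rocket type rather than the video's actual code: instead of one class per combination of invariants, the invariants are enforced in a single place, on the final object only.

```csharp
using System;

public sealed class Rocket
{
    public double Fuel { get; }
    public double Oxygen { get; }

    // Private constructor: the only way to get a Rocket is via the builder,
    // so every Rocket that exists satisfies the invariants.
    private Rocket(double fuel, double oxygen)
    {
        Fuel = fuel;
        Oxygen = oxygen;
    }

    public sealed class Builder
    {
        private double? _fuel;
        private double? _oxygen;

        public Builder WithFuel(double fuel) { _fuel = fuel; return this; }
        public Builder WithOxygen(double o2) { _oxygen = o2; return this; }

        public Rocket Build()
        {
            // Invariants checked once, on the final object only.
            if (_fuel is not > 0) throw new InvalidOperationException("Rocket needs fuel.");
            if (_oxygen is not > 0) throw new InvalidOperationException("Rocket needs oxygen.");
            return new Rocket(_fuel.Value, _oxygen.Value);
        }
    }
}

// Usage: new Rocket.Builder().WithFuel(100).WithOxygen(50).Build();
```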
Old school C/C++/Java developer looking at dynamic language coders suddenly finding utility in a "type system" be like 🤭 you just figured that out huh?
Indeed... pretty much well known since the 70s, like most things in development. But then again, some weren't even born at that time, so it's maybe worth repeating.
IMHO all these arguments are not arguing against story points themselves, but describe bad usage of them. IMHO you are not supposed to carry your SP estimates outside of the team. You don't communicate them and you don't use them to compare to other teams. They're also not useful information for stakeholders; they should rather be interested in when the feature they care about is going to be delivered, which usually corresponds to a certain sprint. Story points are a tool to estimate work inside the team and to prioritize tasks based on those estimates. They are a tool to compare task complexity, not the time required to do the task.
That's the theory at least. I've never encountered a team that could keep them internal though. You still have to reason with stakeholders, and if it works for you to base that discussion on story points without exposing them, awesome! Personally, I find that too hard and instead opt for the alternatives.
@@FranklyDeveloping I didn't get any alternative from that video. Not estimating? That's just chaos. How does that even work? You just pull a Blizzard and say it's done when it's done? How does that even fly with stakeholders? How would you know when a task spirals out of control, effort-wise? I am curious how this is supposed to work. From the video I only get some, imho, very weak excuses for not using story points, but not really an alternative.
Also, how would you estimate costs for a customer without doing even a rough estimation? How would you reprioritize tasks when you don't know if the effort/value ratio is bad for a todo?
I've planned to make another video on this. But yes, not estimating is a controversial topic. I stopped doing it years ago and actually find it a lot less chaotic. That may not be true for everyone though.
Ideally I wouldn't use estimates, and would instead build trust with the customer with frequent releases that deliver value. Estimates are so BS at my current workplace. Deadlines are completely artificial. The client often takes months, even a year, to complete their own testing on a feature before going live with it. So when internal product people put pressure on us to deliver it's nonsense 😂
There's something to a healthy amount of pressure, but your team should preferably be in control of it. With no pressure you'll go too lax; a little bit of pressure is fine, but if it comes at the cost of quality, burnout, etc., it's too costly.
As a developer, story points are dysfunctional in my team. Yet the tech lead and product owner insist on using them. Any tips on dealing with that? (1SP = 1 day, and I'm held accountable to that as a due date even if i estimated differently during planning poker because the majority estimate wins. And we never go back and look at how difficult tickets actually were to what was estimated. Etc etc...)
Address the root cause: you should not be held accountable for someone else's estimate. And if you don't give an estimate, the discussion has to move to a more meaningful place. I think I'll do a video on how to argue these points soon 🤔
Back when I was a PO I would explicitly leave the discussion when story points or any other estimation was brought up. I told my teams that I just wanted an answer to the question: is this goal achievable in the next sprint? If not, then we reworked the goal. If yes, we're done; planning is over. I didn't care how teams estimated whether or not a sprint goal was achievable, I just cared that they gave me an answer. My plannings usually lasted only 10-15 minutes because I always made sure to include investigations for future sprints as part of the expected work, which meant that when we moved into the next planning we already knew how to technically solve the next set of potential goals without having any boring refinement meetings.
@@JohnDoe-sq5nv But then the quality of the feature depends on whether it could be done in a sprint, not on whether it delivered what the customer actually desired (which could take longer)?
@@magne6049 Depends. And this is also a place where Scrum itself breaks against reality, and why it might not always be a suitable framework with its consistent sprints. Scrum states that every sprint should result in a potentially releasable increment; in practice that might not be possible, but a Scrum-breaking workaround is to simply communicate this to the stakeholders/customers and split the feature into as many steps as needed, where the steps themselves at least provide value.
I am a web app developer working in an established company. We have a couple of SaaS products, both of which have long-running customers, and one also has one-time or repeating projects. The company is pretty small and there is only one dev team. We continuously try to improve our process while dealing with accumulated work debt (not only technical debt), fixing bugs, and adding features. We usually have deadlines against which we need to estimate the required work, and then negotiate priorities and scope to fit in as much as possible, where later refinements and proper solutions can come later. And it's working out pretty well so far. As the product is very stable and the requirements are straightforward, it's not out of the question to give a rough estimate with the everlasting caveat that it's a guess based on prior experience, and we also add our degree of confidence in it to dictate tolerances. If the requirements are too uncertain, the confidence in the estimation is very low and the tolerances are very large, so we have time to explore the problem space and get a better idea of the issues, which we continuously iterate on.
Cynefin is a useful model for considering context, but some of the details are hard to fathom. I have difficulty with understanding how coupling fits into the definitions of the domains. For example, a system that has everything tightly coupled could be regarded as a single unit. That seems to fit the Simple category. Yet a system that has everything decoupled would allow each part to be changed independently. That seems very Simple too. (I'm not sure if this even qualifies as a "system" any more.) But it isn't chaotic in the sense that Snowden seems to suggest. I would understand a chaotic system to be one where there are *too many* (binding) constraints, and they are interacting in ways difficult to predict long-term. A system with *no* constraints should be easy to move in any direction, to an infinite degree. Perhaps I am misunderstanding the terms here. Perhaps "chaotic", "complexity" and even "constraint" have become buzz-words. Or do they just have a different meanings in different contexts?
Not sure if it helps, but I tend not to think of these systems in terms of their structure (like you said regarding the different couplings) but instead focus on the cause-effect relations or constraints.
@@FranklyDeveloping Interesting. I tend to agree, but again - terminology! What do you see as different between a coupling and a cause-and-effect relationship?
Coupling has many interpretations, some of them purely static, for example service A talking to service B. Cause and effect says more about how such a coupling behaves, for example a very bad coupling in which every change to service A needs a corresponding change to be made to service B.
All models are wrong, but some are useful. Cynefin is useful because it gives us a way of discussing context. Which is what you and Dave Farley are doing. Great stuff!
That Farley book is terrible. His work is extremely mid at this stage. He needs to stop the public speaking and go work on stuff again for a few years to recharge.
Too bad you didn't like it. I agree that it doesn't really contain new revelations, but I still think it's a great summary of where we're at as an industry.
@@FranklyDeveloping I don't think the internal logic holds together. Farley seems to be in the business of being famous rather than contributing value.
Interesting point, although I don't like going as far as making assumptions about his intentions. But that aside, would you mind sharing what exactly you think doesn't hold together?
@@FranklyDeveloping I think his anecdotal evidence is presented as proof. In some places he articulates that claims are only correlative but then his rhetoric makes it sound causative.
@@FranklyDeveloping I'm a simple man: I see a smart Chess metaphor, "I thumb up". If the next stuff turns out to involve blindfold Chess and Chess Boxing, it's gonna be wild!