Me: Know what’s _not_ my favourite thing? Debugging other people’s code. Copilot: Hey, how would you like to spend most of your dev time debugging other people’s code?
@@shawnington Even trying to use it to learn something totally new, you can catch obvious mistakes, and sometimes it will continue to repeat them ad nauseam.
@user-lw5wm2hg7s Ironically, it seems that marketing people will be the first in line to be replaced by AI. Both work on numbers and statistics anyway.
@@MatichekRU-vid The difference between visual composers and text code is that a single visual node is ~equivalent to a single line of code. If visual was so much better, programmers would have used it themselves.
Got my first car around '95. It had a manual transmission. It was awesome; my car felt like an extension of my body. Bought my first car with an automatic transmission in 2012. While the experience wasn't as fun, it was still awesome because I no longer dreaded my 45m+ commute in stop-and-go traffic. Bought my first car with some AI self-driving in 2022. It was awesome because after a long drive I didn't feel drained. I strongly think I'm a better AI supervisor because I have decades of experience driving without it. I worry about people getting into any skill or craft using automation without first understanding the older, manual processes. A few months ago I almost got into a head-on collision in a wonky intersection. By the time I knew what had happened, I had already overridden the AI, swerved out of the path of the oncoming car, and come to a safe stop, all on instinct. I don't think a teenager using my car would've been able to do the same. I think developers will see the same.
Absolutely! There is no replacement (yet) for years of reasoning through code. I use Copilot to help me with coding, but find that it occasionally takes me down the wrong path. My years of experience coding allow me to notice this and redirect it; were I less experienced, I would end up letting it write code that runs but has dangerous corner cases.
Exactly my thoughts on it. It's similar to the argument about resources during exams and stuff in schools: students are going to have those resources in the workplace, why not give them those during tests? Problem is now they rely on regurgitating from the resources and have no genuine knowledge for themselves. The resources act as a crutch, rather than a reference. An AI programming assistant is great when you know what you're looking for: it can hopefully get something close enough that you just need to tweak it a bit to make it work for your specific situation. But if it's wrong? Well, you better be able to tell.
8:52 Don't worry about that, only like 40% of the students in my just-completed Computer Science class used ChatGPT and Copilot exclusively to crawl through the coursework.
If you work in a large legacy system, with not much documentation, and a ton of business rules enforced through copied and pasted code, and need to interact with it respecting existing rules, you do end up reading 10x more than you write, because no function or other abstraction sums up what you need done, separated from details of the most user facing part of the old system. (could be UI forms, could be http controllers etc). It takes even more reading to decide the least damaging place (or if you're lucky it feels like a search for the most appropriate place) to make changes.
Man, do you work with me?! Also, finding the best place to put your change to ensure the least amount of testing resources are needed... even more important than how it performs lol. got to love it.
He also read that line wrong, ironically. It didn't say "people spend more time reading", it said "code spends more time being read", because multiple people will need to read that code.
I work in consulting world and I spent 90% of my time reading code. On a good day, I might spend 3 productive hours coding if there isn't a ton of meetings. If there's 2-4 meetings that day, I might get 1 hour to code. More important than writing code is making code easy to maintain and well documented.
As a learner, AI mainly serves the role of docs + Stack Overflow. It immediately gives an answer without belittling me, and it knows utility functions off-hand that I didn't know existed or couldn't find. I don't let it write code that I don't understand. If it does, I either delete it or look up whatever function I don't understand. It can also explain what's wrong with my code. I feel the follow-up is important here. If you ask questions until your problem is resolved, it's a crutch that hampers learning. If you prompt it until you understand, it speeds along learning.
Not my experience with Copilot in my limited time using it, but my experience with GPT-4 is close to what you said. Albeit I don't think it teaches very well - it often hallucinates ideas that fit your bias, rather than actually teaching you valuable insights like a senior could (or tinkering with a debugger, or reading good documentation). It's very dogmatic and can often waste your time leading you astray - and, worst case, let your problem-solving skills become rusty. Mind you, so can SO very often. With experience, I've found SO to be an incredibly unreliable source except for quick help with a poor library needed for prototyping in something terrible like Android.
Yeah, I think the most helpful tool I would need is something where I say the language I'm using and what I'm trying to do, and it just lists the libraries I need to import.
I've never had an AI teach me something above college-level. I tried, but I simply haven't been able to make it useful. Maybe I'm a bad "prompt engineer", but I just don't see how it can come close to replacing reading real documentation and mentorship from real experts. I mostly use it for boilerplate stuff.
17:07 For every line of code written at my office at least 2 people need to read it during review. Then an additional 2 people need to read it for testing and review. So I kinda believe that in general a lot more time is spent on reading code compared to writing it.
Yeah, and I'd be surprised if you didn't re-read your code and the surrounding code it interacts with as you're writing. Maybe 10 to 1 is too high, but it is undeniable that you read more than you write.
So nobody puts an LGTM on code reviews? Nobody reads code. Nobody spends time thinking "what if this implementation is wrong?" or "what if I have to maintain this code in 6 months, when the requirements change?" Doesn't matter if your branch protections require 4 code reviewers and an affidavit from management that this code is free and clean of bugs.
Exactly. I have seen someone demonstrate generating a spectrogram to find some audio anomaly: instead of analysing the spectrogram's values in a normal way and coming to some conclusion, he passed the image through a convolutional neural network (analysing pixels) and tried to explain how great his solution was... He has no clue what he is doing. I'm afraid we will see more of these "solutions" in the future.
Writing code vs reading code: When I'm in the middle of developing something, I can write hundreds of lines a day. That does not happen often. But then I have to come back to that code, say a month later, and need to fix some bug. In that situation I'll spend an hour or two reading code (and maybe writing a bunch of _printf_ commands to figure out the edge-cases), and at the end of the day I'll write maybe 10 lines of code. The bug fix was a *lot* of reading and relatively little writing. And the nature of my job is I'll spend much more of my time fixing bugs than writing brand new code.
I am a TA for software development for business informatics students, and the quality of assignment submissions has skyrocketed while the exam quality has become abysmal. Copilot is a real hazard when you are learning the basics. The stuff we teach goes up to basic patterns in Java, so it's pretty much only the groundwork you need to later be able to read code. If you can't produce a simple Java class, some loops, etc. without resorting to pseudo code, then reading anything at all will be much more of a struggle. AI won't replace senior devs for quite some time, but for juniors and beginners it is a real danger. Not only in terms of employment opportunities, but also when it comes to learning the fundamental skills.
Aren't you saying here that it's not good enough to teach but it is good enough to replace junior devs? Not sure both can be true. Why are we so quick to assume LLMs can replace devs? Devs who get fired, sure...
@@nickwoodward819 Having a tool do a thing for you doesn't mean you also learn how to do it yourself. A calculator is great at replacing someone whose main task was crunching numbers. It is awful at teaching someone how to do that number crunching in their head. Similarly an AI might be great at comprehending and working on simple assignment code for you, but not good at teaching you how to do so yourself. And you'll need those basic skills once you work on code that's too complex for the AI to handle.
@@cameron7374 Lucky then that I didn't say that :) Like you said, they're good tools - I never said they weren't. But ultimately they can't teach juniors for the exact same reason they aren't good enough to be juniors: They're pretty rigid, frequently wrong and need constant supervision. Granted that's *similar* to a junior, but not the same. For some reason we've been superficially wow'd into thinking otherwise.
I'd like to mention that, for me, churn is much higher at the start of a project when I haven't established the patterns I'd like to use. Often I'll implement things one way, then realize it won't fully fit my requirements, so I refactor. As a project matures, the established patterns have proven effective, and existing code doesn't need to be changed to fit new ones as frequently. So maybe there are just more new projects entering the GitHub space, causing an increase in churn that's probably typical of new projects.
Sometimes, it's blindingly obvious what you're about to write. Sometimes, nobody except you knows what you're about to write. Based on this, Copilot's suggestion will be somewhere in the range of convenient to not at all useful. As long as the obvious lines are frequent, and as long as reading a suggestion is faster than typing it, Copilot is useful. If you're a zombie and you just complete everything, you have a problem.
It feels like something I finally have an advantage on. People seem to be trying to use it the wrong way. My code is still my code, just accelerated in being complete
That "55% faster" is somewhat true for me, although I'd say only for specific cases. In most cases, Copilot autocomplete felt like a distraction because it infers too much context that might not be available at the time we're writing the code. One way it's valid (without even considering typing speed) is that for some intermediate/common algorithms, Copilot can "feed" you a good boilerplate based on publicly available implementations. E.g. I couldn't remember the imperative details of some case-specific quicksort variant (quickselect, mo3, pivot+cache), but with proper hinting I could get a starting point to work with. The tricky part, however, is that if you don't really understand the underlying algorithm, you might end up botching the implementation.
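For reference, the quickselect variant mentioned above can be sketched in a few lines. This is a hand-written illustration (not Copilot output) of the kind of "intermediate/common algorithm" boilerplate the comment describes - the point being that you need to understand the partitioning logic to spot a botched version:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs.
    Average-case O(n) via three-way partitioning around a random pivot."""
    pivot = random.choice(xs)
    lt = [x for x in xs if x < pivot]   # strictly smaller than pivot
    eq = [x for x in xs if x == pivot]  # equal to pivot
    gt = [x for x in xs if x > pivot]   # strictly larger than pivot
    if k < len(lt):
        return quickselect(lt, k)            # answer is in the left part
    if k < len(lt) + len(eq):
        return pivot                         # answer is the pivot itself
    return quickselect(gt, k - len(lt) - len(eq))  # shift k into the right part

# quickselect([5, 1, 4, 2, 3], 2) -> 3 (the median)
```

A classic botch here is forgetting to subtract `len(lt) + len(eq)` from `k` on the recursive call into `gt` - exactly the kind of off-by-context bug that's easy to miss in generated code if you don't know the algorithm.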
The "55% faster" number is actually not really representative. This number is based on a short study where people were asked to implement an HTTP server in JavaScript. The group with AI assistance completed this task 55% faster. arxiv.org/pdf/2302.06590.pdf
We definitely read much more code than we write, because:
1. Most work is around existing code, e.g. bug fixes, writing tests, small changes in behavior, refactors, enhancements (e.g. adding a new field to an API).
2. Before adding new code we have to look at existing code for reference, or at a minimum find the right place for our code.
3. In any reasonable professional context (job or open source) someone has to review the code we wrote - this alone means half our code interaction is spent reading.
Consider a team setting where half the work is maintenance and six devs have to review all code submitted just to stay on top of their team codebase - that's already over 90% reading.
Sadly I am in the season of "spend hours reading some guy's massive project to find one weird bug and then make a one-line change." I hope to get back to adding features soon because it is more satisfying for me.
Another confounding factor is that over the time period in question Covid happened. A lot of devs would have changed to work from home. It may be the case that junior devs who would ask another dev questions weren't able to as easily, so were more likely to check in code and ask someone to look over it or for other advice.
who would've thought that stealing code from the web and automating a plagiarism bot would result in garbage code, amiright? /s modern techbro world is truly backwards 😂
"stealing". Despite basically all open source licenses prior to 2023 permitting it. Despite highly transformative usage of the data. Despite it being explicitly legal in countries like Japan. Etc.
You're only getting a pass on calling it "stealing" because it's Copilot, which *_does,_* for some reason, have a memory of its training data. You're above water, but not by much.
As a general rule, statistics on any company's sales page shouldn't be taken seriously, especially when one of them is an abstract thing like "fulfilled"
The read/write ratio depends on the code base. Tools, greenfield, etc. is mostly writing; a legacy application that has turned into a big ball of mud, where the devs are new, is mostly reading.
Uncle Bob said that the influx of new developers was, for years, such that the number of devs doubled every 5 years. So the average experience level of devs is 2.5 years - a new "senior". This, and the quick change of tools/technologies, is constantly eating away at any mastery the older devs are building up. Btw: yes, you also read more code than you write. Every time you go and check what a method you call actually does, or how something handles something, or where to place a new piece of code - all that time you are reading the existing code. Still, you feel like you're actively editing/adding code, but mostly you need to find the right spots. So clear/clean and understandable code is important.
Not being DRY has consequences, especially in JS and CSS shipped to the client - but not only; the server side can also be affected at scale: more code to parse, more files to read from disk. That said, as you've stated many times, being too DRY is the issue of premature bad abstractions.
As a web developer I really only use Copilot for basic frontend shit, to quickly implement design ideas and build rough layouts. But 90% of the generated code never makes production.
Copilot is pretty bad outside of boilerplate code, but I was pleasantly surprised by GPT-4 last night. I still had to make some suggestions which significantly improved the code quality, but it was definitely faster than writing it myself.
I can certainly relate to the reading 10x more code thing, at least when I do Don't Starve Together modding, which, as a hobby coder, is probably the closest thing to working on a big project I've done. Certainly tons of legacy code, with the bonus that you can't even modify the code directly, and also no debugger - just a dev console to type print(whatever) into.
This and GPT-4 only make me write my code 100% slower. The amount of time I have spent explaining the problem to it, with examples, and then cross-checking it against the documentation is actually more than what I could have looked up myself. It sometimes adds unexplained complexity and doesn't even consider nulls in data unless I tell it to while making code snippets.
I am doing some CS studies right now, because I needed a line on my resume to actually get a dev job in France, and I swear I am worried about my fellow students. I haven't spoken a word about Copilot due to this worry, and I still see some of them using ChatGPT, and I wonder if they'll ever even get half the knowledge and competency they're supposed to acquire. To top it off, they're about 4 years younger than me, and they're the generation that got not only the new high school reform (thus went through a fresh, unproven system) but also Covid during their high school time. Some of them are smart, capable of things, but I don't know if they'll reach half of their capacities. 16:40 One of the first things I told them was to train to type faster; I don't think they did even a minute, and it's been around 6 months.
I feel very similarly. Every day in CS classes I'm overwhelmed by the number of people who so openly talk about how their code was all AI-generated. I had a group partner on a lab where we were doing WiFi socket programming, and he generated code for the server in a few seconds during the class, while it took me a dozen minutes to write code for the client. Then we spent an hour debugging his code to make it work, while it became clear he had no idea how sockets work. All he tried to do was paste code errors into ChatGPT, while I went looking for documentation. It makes me seriously worried about the IT field.
I describe Copilot as a senior dev with an eidetic memory who is an outstanding teammate as long as he's sober, but you gotta watch it when he's on the sauce.
I'll spend days writing 1.5k net new lines and get a rubber-stamp approval in like 30 seconds... how many shops out there actually budget for devs to do quality reviews? Because the push for new features is constant, from what I've seen in 7 years as a dev.
I don't know how I stumbled upon this channel, but I am fascinated by this view into the world of computers and math, which I am very bad at but envy in the skilled. I am curious as to how software translates to what the hardware is actually doing. Like, I am guessing all code translates to zeros and ones, where a zero or one means an open or closed circuit?
From someone with 4 years of experience: Copilot is very good at repetitive tasks, like writing something similar to what you've already written (i.e. the DB access layer of a program); it's also good at writing base documentation to build upon. For actual code it's: 5% amazing code, 10% serviceable, 20% a little bug in there that you won't notice until you've read it 10 times, 50% not really what I want, and 15% "go home, you're drunk".
Haven't watched yet, but I don't agree with the headline. It's no different than saying "Stack Overflow ruined code" or something of that kind. The truth is, devs who blindly copy other people's code ruined code. If you blindly accept code from anywhere, that is a conscious decision as a programmer. Bad code, being just a series of Unicode characters, doesn't have any power unless you allow it into a source file.
Not really. At least you needed to read around and figure out how to fit the stackoverflow answer to your specific use case. Now you can just get it done without a whole lot of thinking.
My fav thing about copilot is not the code. I'm writing a game mod and sometimes it will just know IDs for mobs and NPCs, saving me writing out a long list where I have to look up each and every one. If someone made a copilot which only autocompleted constants from various games I would be the happiest man alive.
About the code being read 10x more than being written, imagine what percentage of code you change when you open a file. Sometimes it's going to be 100%, but usually it's going to be very little. You open a file to see what that function does. You open a file to add a line to it. I write a lot of code too, sometimes, but usually I'll be editing a file, not creating a file, and in those existing files I'll be scanning a lot of names and structure.
Remember just a few months ago when devs without any understanding of AI thought AI would take their dev jobs and I was like the only person in the world that said the opposite because AI mostly produces trash because there's no ACTUAL intelligence in there?
GitHub's CEO basically valued each line of code written at more than $13,500 per line to reach a valuation of $1.5 trillion being added to the economy by the use of Copilot. I'm willing to bet that the total value of the entire volume of code written by Copilot is worth far less than $110 million, a rough valuation of $1 for every line of code added, updated, deleted, copy/pasted, find/replaced, moved, and no-op. I further assume that most of this code had absolutely no commercial value, so the valuation is so far-fetched that no rational person could ever take it seriously.
I think a recent paper has shown that LLMs suffer from fixed-point problems: i.e. an LLM is less accurate when its input data was generated by another LLM. So as the internet gets full of non-human text, this text is used to feed LLMs whose performance is worse than before... I am trying to find the reference.
The Pearson correlation coefficient is between -1 and 1, so 0.98 is an insane correlation if there's enough (reliable) data. It basically means increasing the % of Copilot use increases the amount of mistake code, in almost a straight line. It doesn't say how steep that line is, though - just that, e.g., the median Copilot-usage data point is also (very close to) the median mistake-code point. The actual increase could be tiny, and again it's correlation not causation, or whatever.
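The "doesn't say how steep" point is easy to demonstrate with a minimal sketch of the sample Pearson coefficient (covariance over the product of standard deviations) - a steep and a nearly flat relationship both give r = 1.0:

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation: covariance / (stdev_x * stdev_y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Steep line: y grows by 10 per unit of x -> r = 1.0
pearson_r([1, 2, 3], [10, 20, 30])
# Almost flat line: y grows by 0.001 per unit of x -> still r = 1.0
pearson_r([1, 2, 3], [1.001, 1.002, 1.003])
```

Both calls return 1.0: perfect correlation, wildly different slopes - so an r of 0.98 alone says nothing about the size of the effect.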
16:42 I think the quote is about the code that’s written, not the person writing it. I can easily imagine if you write a new feature, you spent 90%+ of your time writing rather than reading. But over the lifetime of the code itself, I would totally believe 10x more hours were spent *by others* reading over it as needed.
Keep in mind, we are in the honeymoon period where the people using & overseeing the code that Copilot is generating, are usually competent & experienced devs themselves. They can catch the bugs & security problems because they recognize them. What happens in 1 generation when we have vastly fewer senior devs who understand code in a deep way? And what happens when Copilot trains on older copilot generated code? Quality is going to degrade over time & the ability to debug & catch mess in review will be a fading skill. Yikes.
As far as percentage of time reading vs writing code... It's just a difference of style! When I code, I will think very deeply about what I'm doing and not really start writing until I know pretty much exactly what I am going to do - then it takes like, 2 seconds to actually write out (assuming all my assumptions are correct, which they often aren't 😉). While others (like yourself, no doubt) are more about getting in and writing code as soon as possible to experiment and tinker and find the solution - I have a colleague who is the exact same way. It's a whole spectrum, I imagine This is a good thing, though! Different styles of coding are better suited to different kinds of tasks and having a diversity of coding styles available to a team will better ensure they are always best suited to deal with whatever issues might come up :)
17:00 - "code is read 10x more than it is written" is not about you, it's about the code. If you write code that lives for 10 years, then people read that code for 10 years, but it is written only once. If your code lives for 30 years, oh boy, people will read it a million times more than it was written.
The future is looking sad for people learning to program, imo. I'm not that old, but I remember learning to program 10 years ago by reading books and written tutorials online, and it was amazing.
Personally, I really dislike having to read learning material. Most of the time I'm not actually interested in the learning part itself, I'm interested in building stuff and having the skills I need for that. So if people are able to do that, I don't think it'll be all that bad.
Possible reasons you read 10x more code than you write:
- Bad modularity: no matter what you change, even the most distant and seemingly unrelated parts of a big project could break because of it.
- You are using a game engine or some other huge proprietary piece of code. Some game companies even have teams to write the tools everyone else uses, and sometimes they never pick back up any bugfixes, so every team around the world keeps fixing the same bugs again and again.
- You've joined a very old product and you need to learn a lot about it before you can extend it. The older it is, the higher the chances the other devs have already left the company, or maybe even already died of natural causes. Also, the older it is, the more bugs you inherit.
- Even a newer project can be mismanaged to such an extent that it's a mess and nothing can be added without needing to fix half the project first.
To me, AI assistance is like a drunk senior friend with a bit of bad memory, but it is very good at seeking out references in documentation and reading large sets of output for debugging.
When I first saw it in my second year of college, I thought "that's good, but not until I get enough experience to know what it is typing." A friend of mine, on the other hand, was just installing any cool extension and using Copilot, and when I asked him what it was doing, he had no answer.
I started to learn Go three days ago, and Copilot in Go is amazing. In TypeScript it gives you shit code all the time, but in Go it's like the opposite. Perhaps it's because Go code is usually very repetitive and simple.
Students where I live, at least, aren't allowed to use Copilot while learning, whatever it would be. But then on bigger projects they're allowed to turn on Copilot; Copilot is also not allowed during exams. So all it does is help them create bigger projects where I live, which I think is kind of nice. Could be different at other schools, of course.
A professor of mine who also teaches in the CS faculty told me that projects now are better than ever but people are eating shit in the exams at a rate never before seen. He believed it was because of AI assistance when doing projects.
@@andresmartinezramos7513 Interesting. I would have thought it would help the students, since you have to learn the basics and understand everything before you are allowed to use AI. Well, I guess you find out a lot of problems during a project where AI solves them for you. But I honestly thought it helped the learning process, not hurt it.
@@zivkobabz The thing is that he suspected that his students were using the AI without knowing the basics. There is nothing to stop them from using AI at home, but there is in an exam setting.
@@andresmartinezramos7513 Ah, that is correct - they could be cheating themselves on purpose, even if the teacher says it's going to hurt them if they use it while learning the basics.
Honestly, I use it for "advanced rubber ducking" at work. Maybe it generates some worthwhile code or boilerplate, but ultimately I fall back to reading vendor docs or peeking at the class. Sometimes it's good for quickly summarizing your code in a comment... sometimes!
AI at code generation: meh results. AI at code explanation (and reverse engineering): really good! That has been my experience so far. However, if you beat it long enough it also produces good code - but in that same time you would probably have written it yourself.
Btw: when using feature branches, committing and pushing in small steps means modifying already-pushed code. This could lead to false-positive "churn", I guess.
I would like to add to the discussion that you might think Copilot code is good code when learning, and once you have learned wrong patterns it might be hard to get rid of them. Therefore, for beginners, I would suggest that seeing well-written code is really important. It's like the paper where they tested how much code from bad or outdated tutorials/Stack Overflow articles went into some project. Those multiply with each new developer seeking information.
As far as fulfillment, the thing that makes me want to stop and work on something else is consistently the most boring 20% of a project: either boilerplate or uninteresting glue that should've been a library function. Copilot does great with that (because I'm probably the 10,000th person to ask...), which is huge. It also does great if you need a minimum implementation of something unrelated to what you're working on to make things compile and run - Copilot makes it cheap to slap some throwaway code there while you work, code that you fully know you're going to blow away later.
DRY should be a promotion process:
* Start by DRYing within the same file that uses the logic.
* Keep copy/pasting until there are at least 3+ use cases.
* Promote to DRY in a common file after enough files are using it.
* Keep copy/pasting between packages/modules/projects until at least 3+ distinct packages/modules/projects are using it.
* Promote to DRY in an importable package/module/project after enough packages/modules/projects are using it.
* Keep promoting up the hierarchy (e.g. organization repo).
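The first promotion step above can be sketched with a trivial example (all names here are hypothetical, just for illustration): two call sites duplicate the same normalization inline, and only when a third appears does the logic get extracted into a helper in the same file:

```python
# Rule-of-three sketch: extract a helper only once a third caller appears.

def normalize_username(raw: str) -> str:
    # Promoted helper - lives in the same file as its callers for now.
    # It would only move to a shared module once 3+ files need it.
    return raw.strip().lower()

def register(raw_name: str) -> str:
    return normalize_username(raw_name)  # was: raw_name.strip().lower() inline

def login(raw_name: str) -> str:
    return normalize_username(raw_name)  # was: duplicated inline copy

def invite(raw_name: str) -> str:
    return normalize_username(raw_name)  # third use case triggered the promotion
```

The point of waiting for the third use is that with only two copies you don't yet know which parts of the duplication are essential and which are coincidental, so extracting early risks a premature bad abstraction.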
lol a coworker just today told us (and the boss) that we should get Copilot because it's just so darn helpful. The part about junior developers made me laugh out loud because it makes so much more sense now.
Regarding the statement about code spending 10x more time being read, it's talking about the code itself, not individual devs. Code is going to be read by many people who are familiarizing themselves with the codebase, while usually only being written once with an occasional refactoring. A lot of a code in old projects has spent far more time being read than it took to write it.
No, the 55%-faster claim can easily be inferred from typing speed and how much code you're letting Copilot write for you instead. That one's not hard at all. If they wanted to account for how much less R&D or how many fewer rewrites there were, that number would probably be much higher, actually.
The more copilot is used, the more code in github is made by copilot, so it learns on worse and worse data set, producing worse code and becoming worse with each cycle. At least there is a risk that this could happen.
15:55 well, the problem is most devs don't code at full speed cause they go slow to jog their memory. And CoPilot makes that bit faster by suggesting shit. Just like how regular autoComplete/intellisense helps you when you actually don't remember the exact function name.
I'm a junior web dev. I use it for writing annoying things like CSS utility styles. But I generally turn it off: by the time I type just enough, then wait, then check if it's correct, then tab, I could've written it myself and not felt a weird limbo and a forced decision of checking the accuracy.
Our company has copilot and it's total garbage. Am I expecting incorrectly that it should recognize a function I have just written and auto complete it in the function I'm using it in? I've never had it successfully do that.
In year 9 I had my first introduction to programming with my year 9 IT class being to build a site in Weebly and then a game in gameMaker... I hated it. My next real introduction was year 11 IT which I completed in year 10 - html, css, javascript (no advanced text editor allowed) and Visual Basic... Suffice to say at this point I thought programming was shit and went back to my calculus, but my third and final "intro" to programming was the coding train, and I can say with confidence that is the reason I have a programming job today. I hope the future generations also find passion from a youtuber or person they look up to and find joy in the creation, rather than being bogged down by the countless libraries and deployment pipelines that are required to even qualify for a job nowadays.
0:19 yesn't - these sorts of papers should really have a DOI number that would outlive whatever medium they were distributed through and its corresponding URL, whether blog posts or PDFs.
I mean, I think most of the problem comes from "new programmers" - the full-JS-stack ones, the ones that learn to write code to be a React/Node.js developer for the rest of their life. They come in the "easy way" (I'm not insulting anyone, but it surely is easier to learn this stuff from a Udemy course instead of going to a proper university) and probably have a tendency toward "getting the stuff done" more than thinking about it. If we cut those outliers from those graphs, the numbers are probably going to be really different.
I learned how to program in 80x86 assembly without attending university. In fact, I learned C and C++ long before I went on to get my CS degree. So not everyone out there who is self-taught just writes JavaScript.
@@alexaneals8194 I never said that everyone who didn't go to uni is like that; in fact, two friends of mine are backend developers and great developers overall, and they couldn't go to uni. I'm just saying that people whose whole programming life is JavaScript are the biggest part of this "problem".
From what I see in my own experience, the downward pressure on code quality also comes from management being more eager to see functionality shipped; we as developers can only work faster by reducing the time allotted to quality. But maybe that's more my personal experience than a real trend.
If ChatGPT's primary use is producing boilerplate code, then there is a problem with the programming paradigm. We should be striving for ZERO boilerplate code. Boilerplate code is a symptom of a flawed development platform. We are severely overdue for a massive revamp of how we write code. We keep banging the same two sticks together expecting better results. We need a game-changing disruptor on the level of General Relativity disrupting Newtonian physics. I'm talking Star Trek Next Generation futuristic shit.
Well, when I was contracting for two years on one huge project, my whole task was to go through hundreds of bug reports and fix the ones that were actually bugs. I spent orders of magnitude more time searching around and reading the code to figure out the best way to fix a bug than actually writing the code for the fix. And that's after having run the code and observed the symptoms. Only after all that time was I trusted to actually build new features for that thing.
I am in College right now and they heavily advertise copilot and how it is free to use. These brain dead kids in class don't know how to code, and they think they know something when they make a half baked website with copilot. On the software engineering project, my team was the only one to actually deliver on what we promised. The reason why is because I have been coding since 2017. These kids probably started coding last year. 😂
It speeds up boilerplate, but it also speeds up producing all repeated code. Making it so easy to write repeated code is a serious downside. I hope it doesn't take long to add first-order refactoring assistance that extracts common code and pushes back on copypasta; that would be a big improvement.
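The kind of extraction that comment is asking for can be shown by hand. A minimal sketch in Python, with entirely made-up handler names: the same validation logic that copypasta would duplicate across handlers gets pulled into one helper, so a fix lands in one place.

```python
# Before: the same email check pasted into every handler (copypasta).
def create_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    return {"action": "create", "email": payload["email"]}

# After: the repeated check extracted into one shared helper.
def validate_email(payload: dict) -> str:
    email = payload.get("email", "")
    if "@" not in email:
        raise ValueError("invalid email")
    return email

def update_user(payload: dict) -> dict:
    # Any future fix to the validation rule now lands once, in the helper.
    return {"action": "update", "email": validate_email(payload)}
```

The point is exactly the one above: an assistant that makes the second shape as cheap to produce as the first would push back on repeated code instead of accelerating it.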
"10x more time reading code than writing code" 17:12 - this might be a definitional quirk around "reading code". I spend a good bit of time looking at code and planning what I'm going to write before I actually write it, but those looking and planning phases often blend together, especially on more complex systems. A lot of it is also debugging, and whether that counts as reading code or reading logs is tricky. Basically, where "reading code" ends and "planning what you're going to write" begins is blurry, and I think that's the source of the confusion.
So, some time ago, I read a blog or something that said that when you download VS Code from the official installer, you implicitly agree to share some data, while if you build it from the open-source repo, there is no such data gathering. Don't know if it was really true, but it seems to kinda make sense?
Yes, telemetry is the sole reason we have VSCodium as a "clean" alternate build. Even the name is a hint (an homage to Chrome vs Chromium, which strips Google's telemetry as much as possible). You can disable extended telemetry in the settings, but I can't remember whether that stops all telemetry or still sends context-free data.
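For anyone who wants to try the in-app switch that comment mentions rather than moving to VSCodium: official VS Code builds expose a `telemetry.telemetryLevel` setting in `settings.json` (a JSONC file, so comments are allowed). Note this governs Microsoft's telemetry in the official build; whether individual extensions honour it is up to each extension.

```jsonc
{
  // "off" disables usage, error, and crash telemetry in official builds.
  // Other accepted values are "crash", "error", and "all" (the default).
  "telemetry.telemetryLevel": "off"
}
```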