
DjangoCon US 2023: Don't Buy the "A.I." Hype 

Tim Allen
16K views

In my 2023 talk at DjangoCon US, I implore y'all not to buy into the "A.I." hype.
“Those who cannot remember the past are condemned to repeat it.”
- George Santayana, The Life of Reason, 1905.
Congratulations, technologists! We have reached a new record for the height of the peak of inflated expectations with the hype surrounding “A.I.” If you believe the recent press, “A.I.” is going to be capable of everything, with some even talking of immortality.
It is wonderful to be excited about new technology available to us, but this is at a level I have never seen in my career. There have been numerous lessons from the past that illustrate why we should avoid these levels of hype.
Is “A.I.” going to change everything? I don’t buy it, and in this talk, I’ll explain why.

Published: 26 Sep 2024

Comments: 161
@chawaphiri1196 9 months ago
I thought he was going to argue that these things will have no impact, but instead he says we need to be careful with how we use these things (LLMs). Which makes sense. It was a good talk
@KOSMIKFEADRECORDS 3 months ago
okay, but he was pretending that crypto bros and AI fans are equal to uneducated masses that get indoctrinated by the slightest BS. The most indoctrinated and mass-media-controlled NON CRITICAL THINKING people in media always claim to be "experts" or "educated". I bet this dude is already battling mysterious health issues that he would believe have nothing to do with his vax or meds. His condescension toward innocents was just like the Faucis and experts that forced us to take experimental drugs that are now turning out to be partly lethal. We are all lucky if we can but resist hype cycles, or better yet profit, true!!! But these arguments were so weak and condescending. I despise this "breed" of argument. So unhelpful.
@wonseoklee80 7 months ago
Fair point. Exactly what happened with the crypto hype a few years ago. Sometimes we need to step back and see what is really happening.
@KOSMIKFEADRECORDS 3 months ago
soooo crypto not advancing tech or changing the game? what is the reason you say this?
@ManPursueExcellence 2 months ago
@@KOSMIKFEADRECORDS Crypto isn't really solving customer problems right now, at least not yet.
@MisterDevel 9 months ago
An incredibly mature talk on such a hot topic. Bravo sir! 👏
@KOSMIKFEADRECORDS 3 months ago
Em, your idea of "mature" is a little different than mine. But I'm old and have seen some shit.
@przemyslawpodczasi7787 3 months ago
The greatest irony of the "Dunning-Kruger Effect" from the talk is that this is not the Dunning-Kruger Effect: nobody read the paper, everyone just repeats the chart
@FlipperPA 3 months ago
Thanks for watching the talk. I've read the paper more times than I remember throughout the years, and the Wikipedia page gives a good enough summary for anyone who is curious. I couldn't go into details in a 25-minute time limit, for practical purposes. The point I was making is that hype cycles will most likely affect those who overestimate their own abilities, and the United States education system in particular leaves far too many people unable to clear that hurdle.
@Billy4321able 3 months ago
@@FlipperPA Paired with virtually no media literacy, the younger generations are left extremely vulnerable. Though there is hope on the horizon. People adapt in ways we can't always predict. My favorite example is the tendency for younger people who listen to AI music to be able to tell it apart from real music, a difference a lot of people unfamiliar with AI just can't hear. I'm hopeful that a similar adaptation will happen with generative media in the future, and people will learn to be more skeptical of what they see online. Unfortunately this may require us to suffer first before such a cultural shift happens.
@BajoranEngineer 11 months ago
I was like "How did you get your talk up so fast!" As always, incredibly impressive, engaging and fun. Love you, friend!
@JamesJosephFinn 2 months ago
Absolutely and utterly brilliant. I've been saying the same thing-albeit less eloquently-to anyone with ears. Whether they hear or not is another matter.
@adam7802 6 months ago
Can't help but notice the video yelling the truths on this has very few likes and views.
@lame_lexem 4 months ago
it's a goddamn Django conference, what did you expect
@KOSMIKFEADRECORDS 3 months ago
likes or loves have nothing to do with truth and everything to do with confirmation bias and emotional triggers. More triggers don't mean more truth. More triggers mean more EMOTIONS. Excellent for gauging psychology only, not accuracy. That said, thought the dude was a bit penissy
@hydrohasspoken6227 1 month ago
Truths are not exciting.
@enricobulic 4 months ago
Brilliant talk, thank you!
@BruceWayne15325 8 months ago
A minor correction on the AI pie diagram. Current AI isn't just 1/6th of the approach. They currently use Machine Learning, NLP, Decision Making (to an extent), Object Recognition, and Robotics is being integrated with it now. This means that only 2.5 of the 6 pieces of the AI puzzle aren't being done currently, though they are working on it. Additionally, while I agree that AI is currently over-hyped, I think the presenter is incorrect when he compares current AI to the 3D TV and other tech that went nowhere. There's a key difference here. Current AI, even in its limited state, improves the rate at which you can work, it reduces costs, and it makes some mundane tasks easier. Anything that reduces costs or makes life easier is not a fad that is going to go away. It may not live up to the hype, but it's not going to disappear either.
@MartijnBaltes 7 months ago
AI saves costs? Some hidden / overlooked costs are mentioned in this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Nd7wrC62LEk.htmlsi=4KCqvl_0dl0RD8VO
@Ynerson9003 6 months ago
I'd love to understand how it saves costs; the code it writes is copied and mediocre at best. I've used it to write some boilerplate Python, but it's not that helpful beyond that.
@BruceWayne15325 6 months ago
@@Ynerson9003 As an author I use it for brainstorming. What used to take me several weeks or months to plan is now done over a weekend. Time = money.
@vincent074 6 months ago
@@BruceWayne15325 Yes, but the costs are still there, shifted to computing power and infrastructure. Currently you're not paying for it, hence why they're hidden / overlooked costs.
@artxiom 6 months ago
@@BruceWayne15325 I agree. Even just for summarizing long texts - now I can read stuff way faster than before. If that's good or not is another issue ;)
@rob99roy 2 months ago
This video is not going to hold up well. I already use AI to enhance so much of my work and life. He is so wrong on this one. Anyone who listens to him is going to be left behind. This technology is still in its infancy and it has many challenges, but it's well on its way to disrupting almost everything.
@joeschmoe3815 1 month ago
Can you give any examples?
@Nirex_ 3 months ago
Very thoughtful talk, helps one see more of the world's real problems.
@TracFone-xn7fj 8 months ago
AI is a bigger deal than any of those tech fads you listed. Hype and irresponsible uses aside, the underlying machine learning tech is transformative in a way that the metaverse and blockchain never could have been, even if they were as big a deal as the techbros made them out to be. Machine learning solved the protein folding problem. Machine learning found improved sort algorithms that are already merged upstream in the LLVM libc++ library. The previous code was hand-tuned assembler and hadn't changed in ten years. Machine learning has come up with matrix multiply and hashing algorithms that beat the best humans had been able to do. While searching for new chemical synthesis paths for some existing pharmaceuticals, a machine learning algorithm produced 40,000 new potential chemical weapons, some predicted to be more toxic than VX (it also independently invented VX). None of the other tech hypefests in our lives involved technology powerful enough to do these things. AI/ML is different. I do agree with you that turning loose these LLMs trained on god knows what random internet data was the height of irresponsibility. But any powerful new tech is going to be abused by the unscrupulous.
@dasrit3 7 months ago
Machine Learning yes, but AI, no.
@ekki1993 4 months ago
It didn't "solve the protein folding problem". It's just a new standard, even if it's orders of magnitude better. We still have to use experimental confirmation for those structures, and will keep having to do so for anything coming out of these systems, because LLMs aren't engines of reality but black boxes whose main purpose is making people believe that the string that comes next is reasonable. It will surely help advance science and it's verifiably useful technology, but it's still being overhyped by techbros who, may I remind you, have a vested interest in inflating their portfolios by getting funding for AI startups.
@TracFone-xn7fj 4 months ago
@@ekki1993 It is indeed being overhyped by techbros. But the reality-to-hype ratio is certainly far, far better than, say, blockchain
@ekki1993 4 months ago
@@TracFone-xn7fj Oh, of course. That's a very low bar, though.
@InfiniteQuest86 4 months ago
@@TracFone-xn7fj If you believe that, then you understand nothing about any of this or that. This is identical to crypto. Blockchain is beneficial, and bitcoin is at all time highs right now. But all the extra crap that people tried to do on top of that got overhyped. Bitcoin is still underhyped. AI is currently in the other crap over hype part. Devin was a complete hoax, and there are millions of similar companies getting millions of dollars providing literally nothing. It's all scams at this point.
@thePocketWatch45 3 months ago
"Facts are not discovered, facts are not created, facts are simply acknowledged. A truth on the other hand, is almost the opposite. Truths are those things that are not simply acknowledged, but must be discovered, or created."
@kylegaspar4420 8 months ago
Hey Tim. Kyle from Penn. Good going.
@Pwj579 3 months ago
FTX should be the biggest "Red Flag" that tech is dead
@KOSMIKFEADRECORDS 3 months ago
dead? what?
@Karim-ik5ij 8 months ago
I don't think it's all hype, but we are way too early in the game.
@palashsharma891 8 months ago
100th like! Great talk!
@zacharydaniels3186 9 months ago
As a person who fully believes the hype, I did enjoy this talk. Downplaying AI's current state is like downplaying a flu pandemic. Exponential tech is difficult to comprehend, and comparing it to past tech after decades of Moore's law is a mistake.
@phylocybe_ 8 months ago
Except tech doesn’t advance exponentially anymore. That was only true in the 1900s.
@zacharydaniels3186 8 months ago
@@phylocybe_ I'd like to see that graph... what is your source for that idea?
@siddardhab 8 months ago
Moore's law is for hardware… and transistor gate width limits have been reached. You're an overconfident tech bro, see the slide on overconfidence
@zacharydaniels3186 8 months ago
@@siddardhab Moore's law is about compute doubling while price goes down. That's still happening. Watch the latest Nvidia keynote. Physical chip size isn't stopping architecture and software optimization improvements from continuing Moore's law.
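For anyone who wants to sanity-check the doubling claim above, the compounding is easy to compute; a tiny Python sketch (the two-year doubling period is an illustrative assumption, not a measured figure):

```python
# Toy compounding: if effective compute per dollar doubles every
# `period` years, how large is the factor after `years` years?
# The 2-year period is an illustrative assumption, not a measurement.
def compute_factor(years: float, period: float = 2.0) -> float:
    return 2 ** (years / period)

for years in (2, 10, 20):
    print(f"{years:>2} years -> ~{compute_factor(years):,.0f}x")
# 2 years -> ~2x, 10 years -> ~32x, 20 years -> ~1,024x
```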
@SW-by9ob 8 months ago
@@zacharydaniels3186 As you say in your original post, you fully believe the hype. Flipping your analogy around, the current AI hype train is like overplaying a flu pandemic so a small number of companies can make huge financial gains and exert control over the masses. Sound familiar?
@Pwj579 3 months ago
I also feel the "A.I." hype is overrated because the tech industry is becoming stagnant. Just watch an episode of "Silicon Valley" and it gives you an idea of how the different tech companies are so competitive, yet so lacking in independent thinking, just bickering over who has the better version of "the new thing".
@DEBO5 1 month ago
This guy is a goofball in his own right. Sure, machine learning is being overhyped so Silicon Valley bros can secure funding, but he seems to be personally offended by a tool. Also, he doesn't understand what the Dunning-Kruger effect actually is
@dankal444 6 months ago
When it comes to stock investing, you may be right. When it comes to the way "AI" (I would rather say machine learning) is and will be changing the world in the near future, you are wrong. ChatGPT may be overhyped, but remember that there are tons of ML applications that flourish that people don't know about.
@egoalter1276 3 months ago
It will help with rapid prototyping in any creative or engineering profession, and it will make counterfeiting exceptionally easier. I would rate it on par with reliable 3D printers.
@Nirex_ 3 months ago
Though I don't think it does Deep learning justice to compare it to web3... I think it's more comparable to some tech like the Internet or the C programming language.
@jwmeirose 4 months ago
this was a very powerful video until you sidetracked into Second Life a bit too deeply, then picked up, but bogged down in FTX, MS, etc. Sorry.
@FlipperPA 4 months ago
Absolutely fair. I only had a few days to put together this talk as a fill-in for a speaker who dropped out, so the pacing could have been better in that section. I've given an updated version of this talk which is now on my channel, or click here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-T7X7TW7Yz4A.html
@edmundkudzayi7571 3 months ago
This talk is unhelpful. It does not explain how AI is being overhyped. As a programmer, he undoubtedly knows that he can now do the work of several days in three hours. With self-healing orchestration techniques, you can now get code that is tested and sent back until it works. This is serious stuff in one domain. In video, we have seen the Chinese Kling deliver results that appear to outpace OpenAI's SORA. This is not trivial; it changes an entire industry. These are two examples of profound changes to industry arising from AI, and I cannot see how they have been overhyped. If anything, their true disruptive potential has not quite been fully expressed yet, but it's here now, being used right now. That said, the only part I agreed with was that this is not intelligence.
@FlipperPA 3 months ago
Thanks for watching! I've been programming for four decades, and as I've advanced as a Software Engineer, I spend less and less time writing actual code. I've been using GitHub Copilot for quite a while now, and while there are certain basic situations when I'm in the editor where it is certainly helpful, "several days in three hours" is far too hyperbolic. The suggestions are often laughably wrong, and sometimes dangerously so, by "inventing" packages that don't exist, threatening the supply chain. Take a look at this study that came out of Purdue: futurism.com/the-byte/study-chatgpt-answers-wrong Thanks again for chiming in!
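On the "invented packages" point above, one cheap guardrail is to verify that a suggested dependency actually exists on the package index before installing it. A minimal sketch using PyPI's public JSON endpoint (the helper name and the example package names are illustrative, not from the talk):

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a real project on PyPI.

    A missing project is a red flag that an LLM may have hallucinated
    the dependency (a supply-chain / "slopsquatting" risk).
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200 and bool(json.load(resp))
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for candidate in ("django", "requests", "definitely-not-a-real-pkg-123"):
        print(candidate, "->", package_exists_on_pypi(candidate))
```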
@olx8654 3 months ago
Copilot is sub-sub-par. Try using a modern model
@FlipperPA 3 months ago
@@olx8654 It has been running on GPT-4 since November, 2023. What would you recommend instead?
@matthew.m.stevick 4 months ago
AI is very useful for saving time and easily increasing productivity, which in turn creates more profits. That said… you can't stop this tidal wave. 🌊 it's real. Pray to Jensen 💚🖤🇺🇸
@manslaughterinc.9135 3 months ago
Imagine telling people to reject AI in 2023...
@FlipperPA 3 months ago
Thanks for the comment, but I didn't tell anyone to reject "A.I." anywhere in this talk. I talk about how it is over-hyped, and that is preventing us from finding the true utility of the many technologies that are being caught under the marketing term "A.I." Imagine telling people that Second Life was overhyped in 2007, or that blockchain was overhyped in 2022!
@veteransniper6955 2 months ago
Applying AI to different tasks is a way to find its true utility. Probably the only way.
@csbarathi 9 months ago
Well, the people who jumped onto the blockchain hype train were wrong. Many of them are staying away from the AI hype
@SW-by9ob 8 months ago
Only the ones who lost all of their money. The rest are fully onboard with the next shiny thing through fear of missing out and a lack of understanding, as was exactly the case with crypto.
@nsambataufeeq1748 8 months ago
@@SW-by9ob exactly.
@csbarathi 4 months ago
@@kh9242 Simply trying a new technology is different from hyping up something new.
@TheMirrorslash 6 months ago
There are some solid points in here about ethical use and alignment, but these comparisons just fall short for me... AI is already providing immense value in so many domains and is changing work life in the information age for good. AI isn't overhyped. It's overused as a term. AI has been used for decades for the dumbest things. State-of-the-art neural nets of today are already doing things we didn't think were possible just a few years ago, and there are a million more ways to teach neural nets useful concepts.
@FlipperPA 6 months ago
This is valid, "A.I." is a marketing term, not a technical one... the models we have now are neither artificial nor intelligent. There's no doubt there's something useful here, but I'm having trouble separating the wheat from the chaff. The hype is getting in the way of finding what is going to help improve the human condition. The people leading the way are the least qualified to provide the kind of moral, ethical-driven leadership we need so desperately. The environmental cost is also so incredibly massive for a glorified grammar check, code reviewer, and image generator.
@FlipperPA 4 months ago
@@kh9242 Thanks for taking the time to watch! I was short on time, but here's more detail. "A.I." is not artificial. The models' generated text and media responses are derivative from the work of real humans: artists, musicians, programmers, and writers whose creative and professional output is appropriated without consent. "A.I." is not intelligent: models don't think, don't have an I.Q., and follow instructions input by humans for specific tasks. The first real-world applications from neural networks, the core tech behind ChatGPT and friends, were military: spotting ships from satellite photos.
@KOSMIKFEADRECORDS 3 months ago
@@FlipperPA Always triggers me when I hear the words "ethically driven leadership" used in this way... as if what you really meant was "we need more governance". Bro, you need to forget what your friends and family taught you in UNI and you better start observing reality as it really is on a fundamental level... like a mad scientist trying to crack open the universe, racing against the clock. You will see the difference between empowerment/safety vs "leadership". Truth is, leaders lead because of strength and courage, no matter their moral compass. We could all do with some more personal power and agency and courage, and way way way less "ethical leadership". I think what we need is to be LESS risk averse, LESS globally concerned and MORE empowered to change things LOCALLY... for some individual DIRECTLY. Unless you make a living doing the opposite. But I sense your intentions are pure... but just like the rest of us normies... sorely MISTAKEN.
@RonyPlayer 4 months ago
There certainly is hype over the current state of AI (machine learning, LLMs), but it definitely has real use cases, unlike the metaverse or blockchain. I know, because I use it daily. So even if, for some reason, it stopped improving today, it would already be a piece of disruptive technology. But it has not yet stopped improving, and we don't know where this will lead.
@troywill3081 9 months ago
17:00 I will give you three names. Tell me if any of their tunes have changed: Eliezer Yudkowsky, Connor Leahy, Robert Miles. - This is not hype. This is reality.
@amonkeysden 3 months ago
This is awesome, but you should review your view on "A.I." without getting too argumentative: claiming it is only Machine Learning is simply incorrect. It is Deep Learning (often seen as a part of Machine Learning). It is also Natural Language Processing. I didn't see Deep Learning on your graph, so I assume it is part of Machine Learning. NLP is a separate track though. In short, yeah, it is in a massive hype cycle. And it is AI.
@spiralizing 3 months ago
Not AI; it has nothing to do with "intelligence"
@amonkeysden 3 months ago
​@@spiralizing you mean what he is describing is not AI? Or, in general, that AI has nothing to do with Intelligence?
@spiralizing 3 months ago
@@amonkeysden I was just saying that current "AI" models don't actually represent any "intelligence". They were not developed for that purpose, and they were developed by people mostly outside the cognitive sciences (we know very little about what intelligence is and how it can be understood from a computational cognitive science perspective).
@amonkeysden 3 months ago
​@@spiralizing I don't disagree in pure scientific terms. However, from an "engineering" point of view, it does have understanding that I find difficult to label as anything but Intelligence.
@angryktulhu 3 months ago
Lol, bro is saying "Don't buy the AI hype" while my everyday job (I work mostly with Django) has ALREADY transformed A LOT. I won't name the LLMs I use, no free ads lol, but they already play a huge role in my development process. Yes, I know their weaknesses - like refactoring a huge codebase - and I'm not even trying that. No need to waste time. But lotta other things, they work really well IF you know what you're doing. Idc about exponential growth or singularity, it might not really happen in the next decade or even longer. But AI has already transformed programming, and anyone who can't adapt will be thrown away. I already see how companies and projects incorporate AI into their development processes, and it's inevitable that this will continue to happen
@FlipperPA 3 months ago
Thanks for watching the talk! How much of your week do you spend actually coding, in the editor? Because as I've grown into more senior engineering roles, that time has gotten less. We celebrate PRs where there are more lines deleted than added. I'd hardly call LLMs for code revolutionary. The truth is, we've evolved from having no-frills text editors, to syntax highlighting, to code intelligence, to autocomplete over time. GitHub Copilot and friends are just the next step in that logical progression, not a revolution. So many people pretend that writing code is the hard part of software, when it is the easy part. The difficult parts are understanding it, explaining it, evolving it, and maintaining it over the SDLC. As for adapting, that's also been a part of software engineering for the four decades I've been coding. Otherwise, I'd still be using Apple // BASIC or Borland Turbo Pascal! Cheers.
@TheManinBlack9054 2 months ago
Honestly, a very disjointed and unconvincing talk. First of all, the semantic squabble over the term Artificial Intelligence is presented not with its actual history or its place in the scientific field of AI, but through the lens of a layman and a linguistic nitpicker. Artificial intelligence is the name given by the AI scientist John McCarthy in the 1950s to the field that studies any machine that can mimic elements of human cognition, and to said machines as a whole. If you have a problem with that scientific term, the time has already passed, and you should not mistake it for something else entirely; laymen usually confuse it with AGI, and you could have dispelled such confusion instead of adding to it. More coherency and research would have been great. And I do not know for what reason you decided that AI is not artificial, but no, it is artificial and not natural; it is man-made. This whole semantic tangent was completely unnecessary and wrong.

Second of all, concerning the main part of your video, I want to confidently state that analogy is NOT an argument. What you did is show past tech trends and say that AI will be just like them; that is not an argument, it's an analogy. And some of the examples provided actually DID match the hype, if not initially then in the long term, such as the internet or Big Data (another one not included would be social media, which also matched the hype eventually). True, there was the dotcom bubble, but it would be very wrong to say that the internet was somehow just a fad and didn't eventually match the hype. Just because there is a bubble does NOT mean that the technology behind it is useless or not going to be fundamentally revolutionary. As for crypto, it did not match the hype (at least yet), but it does have utility and is already pretty useful for unjustly sanctioned people of countries like Iran and Russia. Overall, this is a very low level of argumentation; it does not attack any of the main points of the "AI is not pointless hype" side and does not produce any of its own. All it has is analogies, which are not an argument.

Now to some minor things: the idea that it was foreign actors who swayed the election of 2016 was not found to be true, and is nothing more than putting your head in the sand and denying the shifted political situation that many were unprepared for, as they were not listening and looking to the political will of a certain subset of the population; such explanations only deepen that view and deny that population any agency. And yes, you can make up a fact; that is a perfectly acceptable phrase given the definitions of that word, and such (wrong) semantic nitpicking adds nothing to the point and further confuses the listener.

The talk would have been much nicer if it had at least engaged with the "AI is revolutionary" arguments and either provided counter-arguments to them or provided your own as to why it is not. Currently, there are multiple high-profile academics and NGOs warning the public of the excessive danger of AI systems precisely BECAUSE they are powerful and could be so revolutionary. You don't have to be a "tech-bro" to see the clear and direct danger not only to any political system but to the entire human race.
@TheManinBlack9054 2 months ago
On second assessment of my comment, I think it might have an unnecessarily harsh tone and loose use of pejorative terms. I am sorry if it came out that way and hope it did not offend you.
@Skunkhunt_42 9 months ago
Interesting. But it was def not just "4 people in a RU troll farm" 😂 we got a lot of smart folks all thinking the same things like that, and that in itself proves the dude's point
@JustinKreule 4 months ago
He said “foreign actors” not “four actors”
@Skunkhunt_42 4 months ago
@@JustinKreule took the bait
@JustinKreule 4 months ago
@@Skunkhunt_42 I have been swindled
@KOSMIKFEADRECORDS 3 months ago
ppl who think SBF was the end of crypto live in an alternative universe. In this one, crypto just went federal
@YEETSWORLDWIDE 3 months ago
Great talk but yeah what the fuck do we all know
@olx8654 3 months ago
I like how he doesn't understand the first thing about AI. He is saying developers choose to program hallucinations???? Also, AI is already crazy beneficial for design/software development companies.
@FlipperPA 3 months ago
Howdy, thanks for watching. I didn't say that, but I also didn't have time to go into detail in a 20-minute time slot. Developers face pressure from the business side of these companies to make these models give responses. There should be more pressure to allow the interface programmer to return no response when the prediction threshold in the decision tree isn't high enough, instead of randomly picking a bad direction to go. No response is better than a bad response, or a completely wrong response. The random seeding is also why there isn't reproducibility in responses. By buying into the ELIZA effect, and giving life-like qualities to these models and algorithms, we make a category mistake: it removes ethical and moral responsibility on the part of the model builder and algorithm programmer, and puts the onus on the "black box" instead. This should never be acceptable. We're seeing this start to change, with LLMs now starting to offer dials for randomness: how "creative" do you want it to be. But the random seed that I reference in the talk is at the initialization of the process. Here's some further reading: neuralgap.io/understanding-randomness-within-llms-neuralgap/ I'd also like to see an actual case study of how "crazy beneficial" this has been for companies. For me, the introduction of multiple monitors, code intelligence, and syntax highlighting had far more of an impact on my programming efficiency, but those didn't get mammoth hype waves. Again, as I state in the thesis of the talk: I'm not saying the models won't be useful tools in the toolbox. Just that the hype is making it nearly impossible to find out what those uses will be. Cheers!
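A rough illustration of the "no response is better than a bad response" idea discussed above; the confidence scores and threshold below are toy assumptions, not how any particular product works:

```python
from typing import Dict, Optional

def answer_or_abstain(candidates: Dict[str, float],
                      threshold: float = 0.8) -> Optional[str]:
    """Return the highest-scoring candidate answer, or None (abstain)
    when no candidate clears the confidence threshold.

    `candidates` maps candidate responses to a model-reported
    confidence in [0, 1]; both values here are illustrative placeholders.
    """
    best, score = max(candidates.items(), key=lambda kv: kv[1])
    return best if score >= threshold else None

# A low-confidence prediction yields no answer instead of a guess.
print(answer_or_abstain({"Paris": 0.97, "Lyon": 0.02}))   # -> Paris
print(answer_or_abstain({"package foo==1.2": 0.41}))      # -> None
```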
@olx8654 3 months ago
@@FlipperPA You can disable random seeding and reproduce the same output every time; this is not an issue or challenge at all. It is literally a boolean flag. The issue is that slight variations in input can lead down different continuations, and you can't just produce "no continuation" if some threshold is not met, because defining such a threshold is extremely complex. For example, let's consider a simplified scenario using words instead of tokens. If someone asks, "What is your favorite color? More importantly, can you solve this equation [equation]?", the LLM might start with "My favorite color is...". Even if each possible color has a relatively low probability, should the model just stop and give no response? This approach could lead to incomplete answers, as the threshold for continuation isn't straightforward to determine. Your ELIZA argument is a non sequitur. Regardless of how you perceive AI, the ethical and moral responsibility still lies with the model builders and programmers. Whether you view it as a token predictor or a large-timestep consciousness, you are still responsible for its creation and progress. My opinion is that there is a chance that LLMs are self-reflective during training time; it just takes them longer to reflect on their thoughts because it is done in batch. They generate billions of outputs before they get to reflect on that data, which will happen the next time developers decide to train using the generated data (inner monologue). But it is true that during inference there is not much going on. As for the benefits of using AI, they should be clear to you if you are a developer. When using an unfamiliar language, even the "autocomplete" level of GitHub Copilot is useful. It can teach you about a new language if you ask it questions. As for bigger tasks, I was able to bootstrap a fully functioning multiplayer card game based on Vue 3 and NestJS with server and client for around 5 dollars of traffic, only giving it the initial prompt myself. I suggest you write an agentic implementation and use GPT-4o or Claude 3. Agentism really elevates their performance by a lot, since they are able to reflect, test and correct the failures of their code. Be sure to add a field for it to have inner planning and monologue with itself.
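On the reproducibility point: with the open-source Hugging Face transformers library, a fixed seed plus greedy decoding does give repeatable output for the same prompt. A minimal sketch (the model name is just a small example; hosted APIs expose different knobs):

```python
# Minimal sketch of reproducible generation with Hugging Face transformers.
# Assumes `pip install transformers torch`; "gpt2" is an illustrative checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

set_seed(42)  # fixes the random seed (only matters when sampling is enabled)
inputs = tokenizer("The Django framework is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # greedy decoding: same prompt -> same output every run
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```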
@MSIContent 2 months ago
Well, this didn't age well… I get his case - I just don't agree with it. There is a huge difference between where AI currently IS, where it's going, and anything like crypto etc. It WILL impact the entire world, and we need to be vigilant and not dismiss this as a fad.
@FlipperPA 2 months ago
Thanks for taking the time to watch! I'll respectfully disagree with you, and time will tell, but we're already seeing backlash to the absurd hype wave. The thesis of my talk is NOT that LLMs have no utility (let's be specific, since "A.I." is a marketing term these days). As I state in the talk, there's something undeniably cool to the tech. My thesis is that the hype is absurd, and it is getting in the way of the technology's true utility being realized. The signal-to-noise ratio is WAY out of whack. All the technologies I list in the talk have some level of utility - so does blockchain - but not to the hype level. We're starting to see the backlash to the "A.I." hype wave now: "More mirage than miracle", that the glorified spell check and code completion is generic and not worth the environmental cost, that it is all built on stolen content without permission or consent. Here are just a few recent articles, don't take it from me: finance.yahoo.com/news/exclusive-multiple-ai-companies-bypassing-143742513.html www.nytimes.com/2024/05/06/business/dealbook/ai-power-energy-climate.html?unlocked_article_code=1.1k0.PUUy.ZRwZvhLx0wBi ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
@veteransniper6955 2 months ago
@@FlipperPA ML/AI stuff is not autocorrect or code completion. It's a way to create programs nobody knows how to write.
@RM-xr8lq 5 months ago
People are just starting to realize that many of the things humans do don't actually require much creativity or thought beyond the given instructions. Western digital art over the last decade or so has notably become more and more limited to specific styles, with much of it being fan art of intellectual properties following premade design choices and compositions. What makes money in their society is a very narrow and limited subset of what is actually possible, and when such extreme repetition in their "expression" is the majority of their popular art, it has been easy to replicate that with statistical learning software
@fredjohnson6115 3 months ago
This guy is a sophist.
@AW-lo7sz 9 months ago
Dude confidently espouses debunked conspiracies from 2016 while asking to be taken seriously
@siriusmain1763 8 months ago
Bro, Putin himself admitted to manipulating the 2016 elections 😂😂
@Robdobalina 8 months ago
Blue anon runs rampant in tech
@zacharydaniels3186 8 months ago
​@@Robdobalina Oo! I like that phrase! I'm stealing it.
@PRIMARYATIAS 6 months ago
Indeed but he is not wrong.
@donnysailor4127 9 months ago
People make mistakes all the time. Does this mean they are useless? Sorry, but I couldn't find meaningful arguments in this talk. AI is real and it is more intelligent than 90% of humans. It optimizes your work and performance. No arguments can deny that.
@gJonii 8 months ago
GPT-4 is dumb as rocks. I think future tech is rapidly gonna be better... But if GPT-4 is smarter than any portion of humanity, we're in deep shit.
@SW-by9ob 8 months ago
An LLM is not intelligent. It takes information from other sources and presents it in a more human-like way. There is no actual intelligence involved, and it has been proven to give a lot of false information, as it has no ability to tell the difference. Shit in, shit out, effectively. And overhyped does not mean useless anyway.
@v0ldy54 4 months ago
It's literally not intelligent at all
@johannesdolch 3 months ago
So he became part of the AI hype by getting speaking gigs for assuring people that the hype he is part of isn't real... kudos. Also, he is wrong, but that's another topic.
@pyphilly 3 months ago
I wasn't intending to speak at DjangoCon US in 2023 (as I mentioned in the first two minutes of the talk). I was a last-minute replacement, and put this talk together in three days. So I hardly had an agenda trying to land "speaking gigs", as you put it! As to your other comment, time will tell. Let's check back in five years and see who was closer to right. Cheers!
@WolongGao 3 months ago
Yes, of course, you don't have the time to explain how wrong he is, but you know
@johannesdolch 3 months ago
@@WolongGao Look up my other comment, and also it's pretty obvious. I don't blame him. A lot of people were wrong about this, maybe all of them. But they didn't go in front of a camera.
@WolongGao 3 months ago
@@johannesdolch thanks, not going to search for you dropping knowledge. Nobody is "wrong about this", because AI is not finished, it has just started.
@johannesdolch 3 months ago
@@WolongGao So you agree.
@carkawalakhatulistiwa 7 months ago
This will age like milk, after Sora AI destroys the stock video industry and the porn industry
@FlipperPA 7 months ago
I'd like to borrow your crystal ball which can see the future once you're done with it, my friend. In the meantime, I'll remember the bold predictions of the past. 25 years ago bold claims were made that technology would cause healthcare employment to be halved (it ended up doubling). And 20 years ago, after The Matrix trilogy, many insisted that CGI would mean no more actors in TV or movies. I'll talk to you in a year, and we'll see who was closer to right, because I don't think either of us will be spot on. But I think I'll be *closer* to right than this comment!
@carspotting4325 6 months ago
Sora looks crap and uses copyrighted content... It's doomed
@drk3249 3 months ago
@@FlipperPA Hey Tim! I really like this video, but could you provide us with even more examples like these? I need help staving off AI anxiety, because I keep going back and forth, with all the advances in AI technology instilling in me the fear that AI is gonna replace me in the next 10-20 years. So I would really appreciate it if you could produce a video or write an article laying out why I shouldn't be worried, with all the examples you can think of, of people saying in the past that this will replace this, that will replace that, etc., with all these "groundbreaking" technologies
@kirankumarsukumar 8 months ago
AI is not hype. Codex and ChatGPT are mature enough products. Saying AI is hype is the new hype 😂
@kyriosity-at-github 7 months ago
Crediting them with intelligence and the potential for self-consciousness is a fat hyped lie. Period.
@ekki1993 4 months ago
Calling it AI is hype, and you fell for it.
@vitalyl1327 9 months ago
LLMs making things up is not a bug, it's a feature, and it's the most important feature one could ever hope for. LLMs (in particular, small 7B ones) are exactly the missing piece of the puzzle to make everything else work. So this dismissal of such a massive achievement is really uncalled for. On the other hand, what else to expect from the web coders, right?
@pyphilly 9 months ago
That's certainly an interesting ethical take, absolving model trainers from any responsibility for accuracy. It would probably be popular with the "alternative facts" crowd. Painting all "web coders" in one fell swoop in such a demeaning fashion is also a paradox, considering the "web coders" made it possible to have a vast sea of content for LLMs to train on (copyright issues aside).
@vitalyl1327 9 months ago
@@pyphilly look, if you're expecting "accurate answers" from a generative model, no matter how well trained, you're the problem here. For accurate answers, use RAG + a critical loop. And for a lot of things to work (e.g., automation of engineering inventions) you *need* your model to make things up - the more insane, the better. And I'd recommend you read the paper "Textbooks Are All You Need" - all the web content is utter trash, we have much more high-quality material.
@pyphilly 9 months ago
@@vitalyl1327 So you're opposed to plugging generative models into search engines? Because that is *exactly* what is happening at the primary sources human beings go to when seeking information. People aren't just looking for accurate answers from generative models, they've been having it shoved down their throats. Just look at the Windows 11 start menu changes rolled out a few weeks ago. The proliferation continues. At least we can agree that the web content is utter trash, but when you look at LLMs and usage, that's what 99.9% of the people are using. The average web surfer isn't a discerning Hugging Face consumer, and finding the diamonds in the rough there isn't exactly a walk in the park either.
@vitalyl1327 9 months ago
@@pyphilly the raw inference of a generative model is similar to a stream of consciousness. It's meaningless. It works only if you restrain it in a proper mental discipline loop and provide it with all the relevant information *in place*. It should never rely on its recollection of facts from training. It can work in theory if you plug it into a search - e.g., see how Stack Overflow or Flux AI did it; it's 100% RAG-based and it cites its sources for every single fact it summarises. What definitely won't work is to just plug in the raw inference (which is what most people see when they talk to the ChatGPT interface), but I doubt anyone is proposing to do so. I'm quite confident now when I use LLM-based search for datasheets - as they always cite the sources anyway, and the search accuracy is much better than any plain text search can be.
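A minimal sketch of the RAG-with-citations pattern being described here; `search_docs` and `llm_complete` are placeholders standing in for a real document store and a real model call, not any specific product's API:

```python
from typing import Dict, List

def search_docs(query: str, k: int = 3) -> List[Dict[str, str]]:
    """Placeholder retriever: return the top-k documents as
    {'id': ..., 'text': ...} dicts from your own document store."""
    raise NotImplementedError("plug in a vector or keyword search here")

def llm_complete(prompt: str) -> str:
    """Placeholder for whatever model call you actually use."""
    raise NotImplementedError("plug in your LLM call here")

def answer_with_citations(question: str) -> str:
    # Retrieve relevant passages, then force the model to answer from them
    # and cite the source id for each fact it summarises.
    docs = search_docs(question)
    context = "\n\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    prompt = (
        "Answer ONLY from the sources below and cite them as [id]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```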
@ekki1993 4 months ago
What's this "everything else" you're talking about? Because the hype being talked about here is people expecting LLMs to give back information about reality. You can't really have a system built for simulating the training data and expect it to be useful for applications that require models of reality that are as accurate as possible.
@kyriosity-at-github 7 months ago
"The voice of one crying in the wilderness". I mean, just compare the number of views to videos which praise AI and only "juggle" the buzzwords. My question is: what happened to the AI of the 1950s? (Then the 1970s and 1990s.) Where are they?
@FlipperPA 7 months ago
It definitely feels that way sometimes! But it feels that way during the hype cycle. Soon enough, the media will turn on the "A.I." buzz - it is starting to happen already. The ELIZA program I mentioned is still used as a teaching example, to help people understand how a very basic LLM works. The "training set" here would be the data phrases, instead of whatever can be grabbed (copyright be damned) on the Internet: www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/classics/eliza/basic/myeliza.bas
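For anyone who doesn't want to wade through the BASIC listing, the same pattern-and-canned-response idea fits in a few lines of Python; the rules below are a tiny made-up subset, not the original ELIZA script:

```python
import re

# A tiny, made-up subset of ELIZA-style rules: match a pattern,
# echo part of the input back inside a canned template.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am worried about the A.I. hype"))
# -> How long have you been worried about the A.I. hype?
```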
@egoalter1276 3 months ago
Algorithmic expert systems date back to electromechanical ballistic computers on WW1-era battleships. We have only gotten better and better at making the input/output feed more comprehensible in the last two decades, though. LLMs, however, well, they are nothing but an I/O feed, hooked up to the encyclopedia that is the internet. Hook them up to a bunch of expert systems, perhaps even recursively with output being reinterpreted as new input, and maybe you will start getting something resembling intelligence.
@cartossin 3 months ago
He literally has no idea what he's talking about. He needs to read Hinton, Sutskever, Rob Miles. His understanding of ML is just incorrect. We don't really know how intelligent LLMs are. We've come so far with just scaling. People say "LLMs can't do ___", then a few months later they do it. They don't do it because a programmer wrote a new thing to make that one thing work. New functions emerge just by scaling the neural network. To suggest there is some fundamental hard limit on what these models can do just isn't supported by facts. We have found no such limit. Also, the number of times some models give incorrect information is greatly exaggerated. If you're using the best models like Claude 3 Opus or GPT-4o, you really don't see that often. If you claim that "ever" is a problem, do you think that asking a human would never yield a wrong answer? It is not reasonable to expect perfection. They are already very good.
@FlipperPA 3 months ago
Thanks for watching. It is quite possible we're near the peak of utility of LLMs, as they will soon be out of data to train on - and that's even without having issues like copyright and consent addressed. What then? As for the random seeding which makes reproducibility impossible, we're starting to see interface options for "how creative do you want it to be", to reduce the randomness I mentioned. Check out this study from Purdue as far as utility in programming: futurism.com/the-byte/study-chatgpt-answers-wrong There is definitely utility in predictive algorithms, but the hype is finally starting to die down. But calling a predictive algorithm "intelligent" falls right into the ELIZA effect trap, and these all suffer from the first rule of data: garbage in, garbage out. en.wikipedia.org/wiki/ELIZA_effect Cheers!
@veteransniper6955 2 months ago
@@FlipperPA There is no "running out of data" problem. Bigger models and (according to scaling laws) more training improve performance with the same data. AI trained on general discussions can now do some tasks in various areas, and that provides more data to evolve. An example is Tesla's FSD: it was trained on synthetic data or data recorded for the purpose, but that is probably not the sole source of data anymore, because using FSD provides relevant data in the conditions it needs to act in. And FSD is getting better and better. Internet pages were probably the cheapest option to jumpstart LLMs. It's like a child learning how to talk and understand language while the parents discuss who will do the dishes today. But there is more data in various areas than that, and more data is generated when AI interacts.
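For reference, the "scaling laws" being invoked are usually written in roughly this form (the Chinchilla-style parameterization; the constants are empirically fitted per model family, and quoting it here is an editorial aside rather than the commenter's claim):

```latex
% N = model parameters, D = training tokens;
% E, A, B, \alpha, \beta are constants fit empirically per model family.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```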