
Jonathan Blow on the "exponential" growth of AI (also Gemini fiasco) 

Blow Fan · 13K subscribers
23K views

Jonathan Blow's Twitch: twitch.tv/j_blow
Tip me: ko-fi.com/blowfan
Jonathan Blow doesn't believe that the current AI tech can scale up to get that much better. It's amazing how much AI has improved but we need new tech to reach the next level, he explains. Blow disagrees with Chamath Palihapitiya in this regard. The Google Gemini AI fiasco is also mentioned in the video. The game on the stream is Balatro.

Science

Published: Feb 28, 2024

Comments: 171
@Evilanious 4 months ago
"could be a really long low exponent" Lmao!
@gnull 4 months ago
recircling their own farts LOL
@cold_static 4 months ago
"Don't get high on your own supply"
@necuz 4 months ago
AI progress looks impressive when you compare it to the typical Silicon Valley rate of delivering features.
@MenkoDany 4 months ago
This is exactly why I keep telling Jon to stop schizoposting on twitter. He's so much more reasonable and articulate when talking out loud
@timothyvandyke9511 4 months ago
His Twitter is a good time, if not always the best look
@MenkoDany 4 months ago
@@timothyvandyke9511 The other day he insinuated there's a global conspiracy to taint the food supply and that's why everyone is fat. He's schizoposting. At least he deletes the worst offenders after..
@danielwalley6554 4 months ago
That's true of almost everyone. It's rare that someone ends up looking good in short form text.
@dimtool4183 4 months ago
he is often schizoposting in these clips too.
@HairyPixels 4 months ago
what's his twitter handle? can't find it.
@stokaty 4 months ago
These are awesome. Can you also link to the original streams or full videos?
@Titere05 4 months ago
Even an exponential increase wouldn't change the fact that LLMs can't innovate; they can only recycle what they know. Also, in programming, approximation isn't enough. You need to be 100% precise or the whole thing won't compile, or weird bugs will be waiting to happen.
@cosmiclounge 4 months ago
My simplistic intuition is that genuinely novel insights probably necessitate some level of embodied cognition. When your entire ontological reference-frame consists of a textual abstraction/projection of the material world, you can brute-force permutations of that referential substrate to a degree which might be confused for innovation and even solve certain NP-hard problems that (by definition) defy classical computation, but new paradigms ultimately require access to the 'ground truth' which no amount of pattern-matching recycled data can possibly impart.
@La0bouchere 4 months ago
I don't think that's accurate. Human brains innovate by making things up and then checking them with some sort of refutational process (plus generating a cool story to tell others about how clever you are, for social cohesion, but that isn't necessary here). That type of cycle seems completely possible with LLMs or something very similar. Getting the refutational process to be human-level accurate seems really hard though.
@sadkurtable 2 months ago
Ok, make me a new color, a new sound, a new taste. You are human, you can "innovate", and surely as a musician you always use your own self-made tone system, you never look at other people's work, and everything you produce is totally unique and has never been done before.
@tipplewick 4 months ago
The thing is that the AI problems also scale exponentially the more you try to refine LLMs in general. That means exponential scaling of computing will make progress linear if both grow at the same pace. What would make it exponential is another way of approaching things, as JB more succinctly said.
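(A back-of-the-envelope way to see this point, not something from the stream: if the compute needed to reach capability level $q$ grows like $C_{\text{need}}(q) = e^{kq}$ while available compute grows like $C_{\text{avail}}(t) = e^{rt}$, then the best capability you can afford at time $t$ is only $q(t) = (r/k)\,t$, i.e. linear. Exponential inputs chasing exponentially harder problems net out to roughly linear progress.)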
@meow2646 4 months ago
anyone know where i can read his argument
@samuelbucher5189 4 months ago
I hate how thick his taskbar is.
@uioup7453 3 months ago
AI has become a marketing buzzword to push unimpressive technology and products, like chatbots, which were being done 10+ years ago. Most people think that these "AI" systems will just suddenly improve on their own. AI will never mean anything unless the underlying systems are well thought out and designed; it's a broader design issue that people don't seem to mind. No one seems to be upset at the fact that whenever I prompt for an image of a cartoon cat, I always seem to get a practically identical style, color palette and lighting.
@tapwater424 2 months ago
There are also fields where AI has been used widely in industry since the 90s, yet some people seem to think AI is going to be a revolution there as well. Look at investing, for example.
@Bhanukamax 4 months ago
Where do these clips come from? I don't see anything related on Jonathan Blow's main channel.
@Seacle14 4 months ago
Twitch Livestream
@boggledeggnoggler5472 4 months ago
How are you accepting tips for Jon Blow's content? Aren't you worried about him coming after you?
@pinlo 4 months ago
If AI keeps getting better and better, but you still "can't really control it," shouldn't that be precisely the main cause for alarm?
@principleshipcoleoid8095 4 months ago
Depends on what it does. If it just generates images or videos... that's not dangerous. Fake code that's full of bugs, dragging down GitHub code quality, on the other hand...
@pinlo 4 months ago
@@principleshipcoleoid8095 Generating images and videos seems innocent until you consider that it might be altering our perception of reality in ways that we may not even be aware of.
@BinaryDood 4 months ago
@@pinlo I saw a model which can generate 1000 pictures in 1 second (though still limited to cats). You need only a few bad actors for something like that to cause some serious damage.
@chrisc7265 4 months ago
@@nonamenolastname8501 if politicians can make a small percentage of people surgically amputate parts of themselves to be "cool", we should not underestimate the power of propaganda and belief
@Contemplative05 4 months ago
My understanding is that AI is simply a program with a *large* collection of labeled data which it uses to approximate a desired output. That mimics human brains a lot in how we gather our schemas through life. Yeah, AI will seemingly continue having the issue of not truly "knowing" what it's really doing. The thought is that AGI will be closer to that, because I'd assume AGI entails AI + understanding, which is far more similar to actual human intelligence. We still don't know if AGI is achievable or if it will happen in our lifetime (if ever), but that is the eventual fix to the problem.
@neerajkashyap3963 4 months ago
Can someone tell me what game he is playing?
@neerajkashyap3963 4 months ago
Never mind, he shows it towards the end of the video: Balatro.
@MisterFanwank 26 days ago
I have yet to see an example of exponential growth in nature that didn't wind up just being a sigmoid function in the end.
@kemalatayev 4 months ago
When was this broadcast? I see his last broadcast on twitch is his convo with Casey 2 months ago.
@CsharpPreza 4 months ago
It seems like he turned off the feature where Twitch saves broadcasts as VODs.
@More_Row 4 months ago
He deletes his broadcasts.
@kemalatayev 4 months ago
@@CsharpPreza Oh man, I used to watch his past broadcasts. Here's hoping that the UNOFFICIAL Jonathan Blow livestreams channel is able to get them.
@james-s-smith 4 months ago
@@kemalatayev Jon and the UNOFFICIAL archive had come to a mutual agreement some years ago where UNOFFICIAL won't upload VODs that Jon has deleted.
@heroclix0rz 4 months ago
Seems very possible that we "run out of levels" and that's when the bubble bursts.
@MatiasKiviniemi 4 months ago
Everyone needs to remember that a) your savings account interest is also exponential growth, it's all about the coefficient and the number of iterations; b) Tesla self-driving has been climbing local maxima for years now, and just growing your training data (and Tesla has the best one) does not guarantee constant progress; c) the hard part of machine learning is that when it fails at something, you can't fix it with more if-statements like the rest of software development. There might be ceilings you can't punch through with your current approach.
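(Putting a number on point a), my own arithmetic rather than the commenter's: with annual rate $r$, a balance grows as $A(t) = A_0(1+r)^t$, so the doubling time is $t_2 = \ln 2 / \ln(1+r) \approx 0.69/r$ for small $r$. At 1% interest that is roughly 70 years per doubling: "exponential" says nothing about "fast".)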
@xevious4142 4 months ago
Tesla self driving is considered a joke in the self driving space
@La0bouchere 4 months ago
@@xevious4142 This isn't true; it's recognized as the best vision-based driving system in the industry. People just try to downplay it on the internet because of biases/Elon hate.
@xevious4142 4 months ago
@@La0bouchere It is true. I regularly speak to many actual engineers who work on self driving. Tesla only uses cameras and is a fucking joke. I've met several engineers who refuse to get into Teslas unless the driver turns "full self driving" off. Real self driving requires a fuck load more redundancy than Tesla has. You need multiple redundant radars, lidars, cameras, and a lot more local compute than is available on a Tesla. Don't accuse me of bandwagoning on some internet trend when you clearly have no fucking clue what you're talking about.
@K9Megahertz 4 months ago
Current AI (ChatGPT and LLMs) cannot program something it has not seen before. I've given it problems from game programming that have been somewhat lost to time, or just aren't really used all that much, or whose documentation has somewhat rotted. I've not tested this on Gemini or GPT-4, but GPT-3 couldn't nail it. It was reasonably close, but unfortunately close doesn't cut it in programming: either the code is correct or it's not.
The basic idea is to take an unsorted list of co-planar 3D points and sort them into either a clockwise or counter-clockwise order. If you've ever taken the Quake map format and parsed the brushes into 3D faces, depending on the method used, this is something you will need to do. While the method is straightforward, there is an edge case you have to take care of in order for things to work 100% of the time. GPT-3 never accounted for this edge case.
LLMs (unless something has changed recently) are nothing more than probabilistic text prediction engines. You give it a series of tokens and it'll tell you the next token in the sequence. It doesn't understand or care what the code is doing. If it did, you could have this stuff recursively self-improve, but it can't... It cannot create new content. Will things get to that point? Maybe, but it would be drastically different than what we have now.
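For readers who haven't met this routine, here is a minimal sketch of the usual centroid-and-angle approach (my own illustration, not the commenter's code or anything an LLM produced; the exact edge case he hit isn't spelled out, so the comments only flag a common one):

    import math

    def sort_coplanar_points_ccw(points, normal):
        # Sort coplanar 3D points counter-clockwise around their centroid,
        # as seen from the side the plane normal points toward.
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        cz = sum(p[2] for p in points) / len(points)
        c = (cx, cy, cz)

        def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
        def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
        def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                                 a[2]*b[0] - a[0]*b[2],
                                 a[0]*b[1] - a[1]*b[0])

        # Two orthogonal axes (u, v) lying in the face's plane.
        u = sub(points[0], c)
        v = cross(normal, u)

        def angle(p):
            d = sub(p, c)
            # atan2 over both in-plane axes covers the full 0..360 degree range.
            # A dot-product/acos-only version folds angles past 180 degrees back
            # on themselves - one classic edge case in this kind of routine.
            return math.atan2(dot(d, v), dot(d, u))

        return sorted(points, key=angle)

    # Example: an axis-aligned unit square in the z = 0 plane, given out of order.
    square = [(1, 1, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0)]
    print(sort_coplanar_points_ccw(square, normal=(0, 0, 1)))
    # [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 0)]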
@tx7300 4 months ago
GPT-3 is very mediocre compared to GPT-4 and Gemini Advanced, but yeah, they're generally good for copy-pasting code that you know is easy to write but you can't for the life of you easily find online. For actually complex problems requiring creativity it generally falls flat on its face. LLMs won't be able to solve that fundamental problem; it needs dedicated problem solving abilities. Something you could maybe call... ARTIFICIAL INTELLIGENCE... (I'm pretty sure OpenAI & co are making stuff that actually tries to generate code by "thinking" rather than plain prediction though, there's a reason they specifically promote improvements in code generation abilities. Certainly not improving at an exponential rate though.)
@fupopanda 4 months ago
Why the fck are you using GPT3 for anything?
@denisblack9897 4 months ago
Dude, you haven't tried GPT-4 but have a lot of fantasy philosophy to tell) Go try it right now! Don't stay in those fantasy places, it'll 100% lead you to doom.
@JohnCena-te9mi 4 months ago
What it does with images (and soon video) is way more impressive than text. Cat astronauts riding dinosaurs underwater weren't in training data.
@K9Megahertz 4 months ago
@@JohnCena-te9mi Well, I have no use for a picture of cat astronauts riding dinosaurs underwater.
@KimTiger777 3 months ago
When the AI starts reinventing its own source code, then exponential growth is exactly what is going to happen.
@thomas-hall 4 months ago
Yet another S curve hyped as exponential. Sad!
@JohnCena-te9mi 4 months ago
Many such cases
@slurmworm666 3 months ago
Right, and when has any exponential curve in the real world not turned out to be an S-curve at best? AI failure mechanisms are mysterious, and with the self-pollution of public training data we might even see their effectiveness decay. These people really think their machine god will overcome the laws of thermodynamics.
@TrueSaintly 4 months ago
Nice to see Balatro being played everywhere, it's a cool game.
@adamhenriksson6007 4 months ago
My thoughts on why computers cannot truly understand valid program syntax:
1. Encoded strings are kind of a bad programming format to begin with. The format relies on our ability to visually parse structural patterns, but it's just a sequence of characters, which is a far cry from ASTs, which is what code actually is. Creating a general AST representation file format for code would probably help a lot. Too bad the DION project has seemingly been put on ice. Yes, I know that this point is an unrealistic, impractical pipe dream.
2. Current LLMs are pretty bad at reading things that are not a sequence of words, bad at understanding context, and bad at reading a lot of it. For more information, see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-SbmETE7Ey20.html.
I think AI agent programming has a future, but current tools definitely cannot get us there.
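Point 1 can be made concrete with Python's standard ast module (just an illustration of "code is a tree, not a string"; it has nothing to do with DION or any LLM tooling in particular):

    import ast

    source = "total = price * quantity + tax"

    # What a language model is fed by default: a flat character/token sequence.
    print(list(source)[:8])   # ['t', 'o', 't', 'a', 'l', ' ', '=', ' ']

    # What the code actually is: a tree of typed nodes.
    print(ast.dump(ast.parse(source), indent=2))
    # Module -> Assign -> BinOp(Add) over BinOp(Mult)(price, quantity) and tax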
@rubberducky5990 4 months ago
Mysterious generic exponential increase = deep learning
@rt1517 4 months ago
So true. The current tech is a dead end. LLMs are text completion software; they are missing the logic side of brains. They seem to understand what they do, but they understand nothing. We could see a jump in AI capabilities in the coming years, but it would require a breakthrough, not just a scaling up. And it is entirely possible that we won't see this breakthrough in our lifetime, if it ever happens.
@IronFire116 4 months ago
What? My dude. I'm sorry, you don't know anything about neural networks. 100% not text completion algorithm. Take a class on NN.
@rt1517 4 months ago
@@IronFire116 I was not talking about neural networks, but about LLMs (a kind of neural network), which are used for text generation according to Wikipedia. And I don't see much difference between text generation and text completion. I have a basic knowledge of how neural networks work, and I have used LLMs every day for both professional and recreational use for a bit more than a year. These things have absolutely no logic and can contradict themselves in the same sentence without realizing it. They can give you the correct (or wrong) answer to a riddle, with a reasoning that makes absolutely no sense. All they do is replicate and adapt the text they have been trained on. No matter how many neurons they have, we cannot even say that they are stupid, because they are beyond stupidity: they don't think.
@0x5DA 4 months ago
NN != LLMs.
@slurmworm666 3 months ago
​​@@IronFire116 I wouldn't call myself an AI expert but as someone who has taken a few such classes and even built nns and other ml models commercially since 2016, I'm getting pretty tired of seeing people posture like experts by saying stuff like this. OP makes valid points. You're the one who doesn't know what he's talking about, and you are arrogantly condescending to make matters worse.
@Saganist420 4 months ago
he is right
@joseoncrack 3 months ago
Exponential fiasco.
@returncode0000 4 months ago
AI is currently plateauing hard. We had our iPhone moment, but as with everything in technology, they try to scale it up and it ends up like folding smartphones. I assume there will be no crucial jump for quite some time.
@minhuang8848 4 months ago
I'm sorry what? Plateauing hard by what metric? I definitely see people exaggerating its current capabilities, but the amount of optimization that went into the pretty slow pre-3.5 requests to those pretty quick and dirt cheap gpt4 tokens or really, really fast 3.5ers... I don't see it at all. Gains in output quality and capabilities are still as tractable as before, if not more so, and the speed is keeping pace really handsomely. Never mind over the years, look at how well suno performs with v3 alpha against wavenet and friends: things barely have picked up and already people are calling end of the road. And about the crucial jump lurking in hiding for a certain time: well, what certain time? Releases aren't really continuous, and the guesswork surrounding this topic is already so fuzzy, this is just cold reading at this point. I'll buy it when I see actually the slightest hint of AI plateauing, until then, everything points to it starting to pick up some serious speed. That's mostly before we devise some ingenious way of properly scaling self-improving nonsense. As much as it is the number one sci-fi trope, it's what's really going to bake people's noodles. We'll see soon enough, I bet.
@xviii5780 4 months ago
​@@minhuang8848 no major release in the last two weeks = plateau
@agenticmark 4 months ago
Just got through the "intro" and you are "dead on". The people making these claims are looking for praise, or not informed. This is still incremental and will be until "take off", which might not ever happen. AI will take your job, but it's unlikely it can "escape" or "steal compute" - only people who don't know how hard it is to build and deploy these systems say this shit.
@JohnCena-te9mi 4 months ago
Upvote if YouTube deletes your comments.
@DRKSTRN 4 months ago
The framing is off in this video. The majority of work, especially in web and computer software, is technically solved. The issue preventing the replacement of work is having access to each company's proprietary code base. So even in the framing of auto-completion, the only true barrier is an artificial one. But since the main barrier is artificial, the evolutionary factor likewise means some method of logic that is able to reproduce work without knowing the internals.
What Google's Genie accomplished was the recreation of a video game based on video input rather than a code base. Meaning there is a workaround to the artificial barrier that does represent some intelligence, or patterned understanding, capable enough to recreate gameplay from screen capture. If we are speaking of exponential improvement, the change from generating video to a live stream would be an exponential improvement.
Lastly, what LLMs constitute is some pattern recognition that can likewise be refined by a logical thinking process. Therefore, what we have in comparison to a human mind is one half of the equation. The next step beyond pattern recognition filtered via a logical process would be a moment-to-moment improvement of the underlying network.
Seeing as 2^2 is 4, and 4^2 is 16, perhaps being in the second year of a certain something may be too early to call what is available just an interesting tool. If we do get streaming generation next year, that would just be 16^2, 256. If we are speaking of bits in a computer, this is when computers get interesting. 256^2, 65,536, would be some network of models capable of this generative streaming process. So when we get to 4,294,967,296, that would be a network capable of this streaming process that can likewise have that moment-to-moment improvement.
The real issue here is figuring out the curve fit alongside the time step for this to truly be an exponential process; the process itself would merely be coexisting with AI that are improving moment by moment, capable of any job. That halving time step may not be that far away, especially if you throw out perfection and expect an AI capable enough to bumble around on a problem, then give the output when it has determined a solution. What we have isn't capable of that time series yet, but it may sample high-quality output from the Nth output to place back into the next training run.
Most likely when we get to that exponential, it will be far more mundane than anyone could imagine. What comes after, though - that is where things get interesting.
@SkeleTonHammer 4 months ago
IMO we've been seeing exponential decrease across the board with Stable Diffusion, ChatGPT, etc. Every single release is more stagnant, less impressive, more the same as the last. There's something fundamental in the marrow of these things that is preventing them from improving beyond a fairly low ceiling. Look at Stable Diffusion. 1.5 was its cresting point. 2.0 was a resolution bump, but did not improve image coherence. All releases after 2.0 have been them fiddling with making it output faster. These systems are dead ends. They're already hitting their limits.
@SwimmingBird13 4 months ago
based
@johnpwn 1 month ago
It's so over for non-STEM people
@BinaryDood 4 months ago
As bad as Gemini was, the underlying ramifications of generative AI are so immense, considering the current structures of power and socioeconomic incentives, that it surprises me that this is what catches the attention. And I said this in another Jon Blow vid, but it doesn't have to be exponential growth for us to see unfathomable results. For the ways it impacts the world across most fields of study/crafts/jobs, the rate of adoption is far more significant than the actual rate of progress of the tech.
@c0xb0x 4 months ago
I'm mostly worried that malicious state actors will be able to use mass troll AIs to essentially kill democracy and herd the western world off a cliff
@BinaryDood 4 months ago
@@nonamenolastname8501 use your head a little bit
@chrisc7265 4 months ago
I'm not sure if this is your point, but it seems to me that while AI is harmless in and of itself, if the right people prop up AI as a god, that can lead to catastrophic results
@BinaryDood 4 months ago
@@chrisc7265 Things aren't released into a vacuum, and the current economic model seems like it'll self-cannibalize should AI run its course unrestricted. Job loss, misinformation, saturation, etc. - lots of stuff coming too fast in all directions. I think it will only take a few bad actors to wreak havoc once models get good enough.
@josephp.3341 3 months ago
@@BinaryDood Imagine how slow your phone will be when the apps are written by people who not only don't understand how dumb it is that all objects are fucking maps but don't even know what the fuck an object is. I'm really excited for it personally! I didn't know software could get worse but then I'm consistently proven wrong
@nexovec 4 months ago
Hahaaa, now you can't train your AI in a feedback loop with a compiler, because all your training data is in Python and JavaScript and that's so slow to even start up that it's essentially impossible. And no one sees this. We could be progressing further, and now we don't, because people didn't care about their tools. Again.
@thomassynths 4 months ago
I can’t believe this channel has patron donors. You literally upload someone else’s stream without any additional commentary
@Kknewkles 4 months ago
"I do not forget" Nor do I forgive :D This crap about "it will scale"... which one would you choose if you had to bust up a wall - a crowbar or a trillion q-tips? There's your scaling argument.
@JohnCena-te9mi 4 months ago
Lemme whip out a calculator for you. A trillion q-tips is approximately 500,000 metric tons. Berlin wall wouldn't stand a chance.
@Kknewkles 4 months ago
@@JohnCena-te9mi And how exactly are you going to use them in a way where total compound mass is even remotely a factor, pray tell?
@unaliveeveryonenow 4 months ago
@@Kknewkles I do have an answer, but that account got shadow-banned 🤖🙊
@fennecbesixdouze1794 4 months ago
I've never seen anything approaching "reasoning" done by ChatGPT or any LLM. Whenever you spot something that looks like reasoning, you can slightly modify the inputs without logically changing the problem and it will fall on its face, which proves that no sort of "reasoning" capacity is emergent: it just looked like it was reasoning because you stumbled on something that was already in its training data. I think that wedding an LLM with some sort of logical feedback based on rule-governed behavior might end up working. It would certainly fit well with the "system 1/system 2" model of cognition, where what the LLM provides would be the system 1 non-rational fast intuition and the system 2 would be the as-yet-undeveloped rule-based system imposed on top.
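The "LLM as fast system 1, rule-based checker as system 2" idea described above reads as a propose-and-verify loop. A toy sketch of that shape, with everything hypothetical: propose_answer stands in for any model call, and the checker is plain arithmetic rather than a real reasoning system:

    import random

    def propose_answer(question):
        # Stand-in for an LLM: fast, fluent, occasionally wrong ("system 1").
        a, b = question
        return a + b + random.choice([0, 0, 0, 1, -1])

    def check_answer(question, answer):
        # Stand-in for a rule-governed verifier ("system 2"): slow but exact.
        a, b = question
        return answer == a + b

    def answer_with_verification(question, max_attempts=20):
        # Keep generating candidates until one survives the logical check.
        for _ in range(max_attempts):
            candidate = propose_answer(question)
            if check_answer(question, candidate):
                return candidate
        return None  # refuse rather than return an unverified guess

    print(answer_with_verification((17, 25)))  # 42, once some candidate passes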
@LeEnnyFace 4 months ago
Isn't reasoning, in a sense, the articulation of information from training data?
@IronFire116 4 months ago
Yeah, you're just wrong. Read about what neural networks are please.
@simonl1938 4 months ago
"logical feedback based on rule-governed behavior" is pretty much what google did with their math olympiad winning system.
@La0bouchere 4 months ago
@@IronFire116 Actually, they're completely right. LLMs don't perform corrective reasoning at all, and current work at OpenAI and presumably other companies is directly focused on adding corrective (read: logical) feedback loops to emulate reasoning.
@slynt_ 3 months ago
I have a background in philosophy and I usually test LLMs out with philosophical reasoning. GPT makes blunders more often than a 1st-year college student would. Even when you correct it, it usually proceeds to make the same or a similar logical or conceptual error very soon after.
@Sarmachus 4 months ago
Most cases of exponential growth aren't truly exponential. Logistic growth is more common.
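(For reference, standard textbook material rather than anything from the stream: logistic growth $\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$, with solution $N(t) = \frac{K}{1 + \frac{K - N_0}{N_0} e^{-rt}}$, looks exactly exponential while $N \ll K$, since then $dN/dt \approx rN$; the S-shaped bend only appears as $N$ approaches the carrying capacity $K$, which is why early data can't tell the two apart.)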
@JohnCena-te9mi 4 months ago
Guess what, "being good at bullshitting" is leaps ahead of "not understanding anything". There's an emergent property of modelling the world and humans. Heck, it learned to do photorealistic rendering just by looking at photos. How many humans can paint at that level?
@SaHaRaSquad 4 months ago
Also recently a paper showed that image generation models can produce accurate depth maps, which means they have a basic understanding of 3d shapes and scenes. The limitations are still very visible though. And bullshitting already gets you quite far in this world.
@JohnCena-te9mi 4 months ago
@@SaHaRaSquad Haha, they approximate refraction and stuff while genius first-principles artist-philosopher-programmer people write Blinn-Phong lighting for the 100th time.
@BinaryDood 4 months ago
Humans don't think in the same way at all. And calling it "thinking" for a neural network is a stretch, as it has no understanding of what it is doing. It interpolates from billions of data sources; humans extrapolate from even small, isolated experiences.
@danielwalley6554 4 months ago
It didn't learn to do photo-realistic rendering. It learned to copy the photo-realistic rendering that humans learned to do. Significant difference.
@JohnCena-te9mi 4 months ago
@@danielwalley6554 Sorry, but no. For photorealistic results it references photos, not people's work.
@lucarossi8442 4 months ago
Exponential what? Do you know how much energy and water ChatGPT consumes to seem just slightly competent? Everybody continues to pump up this "revolution" while what we have now is just a glorified search engine plus statistical optimization functions.
@lisandroCT 2 months ago
AI is just a buzzword now.
@alexvisan7622 4 months ago
Those who believe AI will solve any problem should learn some computability theory.
@JohnCena-te9mi 4 months ago
Did you invent it?
@cjjb 4 months ago
A parrot can talk but it doesn't know what it's saying.
@dimtool4183 4 months ago
compared to AI, you're that parrot though.
@henriquemarques6196 4 months ago
@@dimtool4183 Yeah, people forget that 99% of the time we are just repeating things that some programmer already made months or years ago. That kind of job is easy for AIs to do. It is very rare, especially for web devs, to build something actually new, something that no one has ever made before. I work with PHP and Vue; 99% of the things I do is basically "connect" Laravel methods to our endpoints/routines and place JavaScript components into the front-end. It is nothing new.
I don't think that AI will 100% replace every programmer. In the near future a single programmer will be capable of doing the same job as 50 programmers do nowadays. If we don't have an increase in demand for programmers, the most likely thing to happen is fewer job positions. Of course we can't predict the future, but it is crystal clear to me: most of us won't be working in programming in like 10-20 years. I don't expect to retire as a programmer. The future is bleak for humanity.
@grygry12345 4 months ago
This man really has a god complex.
@cold_static 4 months ago
Well, he is god.
@grygry12345 4 months ago
@cold_static No, he is a narcissist.
@Bigdaddymittens 4 months ago
Why do people value what this man has to say? He's only shipped like 3 games. IF THAT. Why are people acting like he's a voice worth hearing on game development when his actual real-world experience is so minimal?
@Megalevel95 4 months ago
Perfect example of why experts in one field shouldn't make the mistake of thinking their opinion holds as much value in other fields... Jon needs to stay on the topic of software and tech (where his ACTUAL qualifications reside). His takes on society and culture are... cringe, at best, and are generally a discredit to himself and his other well-informed takes on tech.
@Desstromath 4 months ago
Take your own advice and stop commenting. It's prudent that you defer to "experts".
@digitalspecter 4 months ago
I watch for the cringe but would be delighted if he managed to broaden his horizons.. but not holding my breath :D TBF, his knowledge about tech and software is also quite game-oriented.
@Razumen 4 months ago
What's with all these posts fellating this mediocre game dev?
@JohnCena-te9mi 4 months ago
He has more things to say than you.
@Razumen 4 months ago
@@JohnCena-te9mi None of them insightful 🤣
@JohnCena-te9mi 4 months ago
@@Razumen Interesting, tell more about it.
@Razumen 4 months ago
@@JohnCena-te9mi I would, if you had anything interesting to say. 🤣
@JohnCena-te9mi 4 months ago
@@Razumen So yeah, you are like 2/10 calling someone mediocre.