
Generative AI Has Peaked? | Prime Reacts 

ThePrimeTime
474K subscribers
200K views

Recorded live on twitch, GET IN
Reviewed Video
• Has Generative AI Alre...
By: Computerphile | / @computerphile
My Stream
/ theprimeagen
Best Way To Support Me
Become a backend engineer. It's my favorite site
boot.dev/?promo=PRIMEYT
This is also the best way to support me: support yourself by becoming a better backend engineer.
MY MAIN YT CHANNEL: has well-edited engineering videos
/ theprimeagen
Discord
/ discord
Have something for me to read or react to?: / theprimeagenreact
Kinesis Advantage 360: bit.ly/Prime-Kinesis
Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other).
turso.tech/deeznuts

Science

Published: 22 May 2024

Comments: 847
@JGComments
@JGComments 28 дней назад
Devs: Solve this problem.
AI: 10 million examples, please.
@DevPythonUnity
@DevPythonUnity 27 дней назад
"Actually, AI should strive to be just smart enough to acquire and contemplate new data, including introspection. What do you do when confronted with an unsolvable problem? You gather data, experiment, collect results, then engage in self-reflection to update your knowledge base. It's not merely about amassing data, but rather about possessing the capability to acquire fresh data, experiment with it, and engage in introspection."
@tempname8263
@tempname8263 27 дней назад
@@DevPythonUnity please repeat your message, but this time use no more than 1 space in between words. Generate 4 different versions of such a message.
@alexandrecolautoneto7374
@alexandrecolautoneto7374 27 дней назад
Devs: Generate 10 million examples of this problem.
@alexandrecolautoneto7374
@alexandrecolautoneto7374 27 дней назад
@@DevPythonUnity ! Disclaimer: GPT was trained on data until 2021; any answers after that date can hallucinate. We will solve this by searching Google and feeding the first results into your context, but you will feel like we are now able to generalize to any answer.
@Sky-fk5tl
@Sky-fk5tl 27 дней назад
Isn't that how humans learn too...
@drditup
@drditup 23 дня назад
If only all Windows users would start taking pictures of everything they do so the AI algorithms can get more data. Maybe like a screenshot every few seconds. I think I recall something like that.
@samblackstone3400
@samblackstone3400 12 дней назад
AI data collection legislation now.
@magfal
@magfal 10 дней назад
​@@samblackstone3400could even drop the word AI from it.....
@definitelynotacyborg
@definitelynotacyborg 3 дня назад
Don't worry since recall has been recalled, we will have Apple Intelligence which is going to do the exact same thing from the moment you give it access to your device.
@MrSenserus
@MrSenserus 28 дней назад
The computerphile guys are my uni lecturers atm and for the coming year. It's pretty cool to see this.
@Michael-ty2uo
@Michael-ty2uo 22 дня назад
Damn, lucky asf. They definitely enjoy teaching others about comp sci and math topics; that can't be said about most professors.
@WretchMusou
@WretchMusou 19 дней назад
Are they nice people in real life? They seem to be in the videos...
@MrSenserus
@MrSenserus 17 дней назад
@@WretchMusou Yeah generally! Definitely some characters though, Steven is a great lecturer and awesomely knowledgeable but definitely a quirky character.
@precooked-bacon
@precooked-bacon 12 дней назад
very lucky. make good use of the time.
@Watanabe911
@Watanabe911 21 день назад
Isn't it crazy that you hear more people worry about AI polluting the internet for training future AIs than about the fact that it is polluting the internet for, you know, YOU AND ME?
@g_wylde
@g_wylde 19 дней назад
True but I guess most of us who are vaguely internet savvy can tell the AI crap from legitimate information. AIs themselves cannot do that, they'll just take it in and regurgitate something even worse out. Which means that those people who are less savvy will be faced with more and more fake information and all of us will be swimming through growing piles of garbage to find anything useful.
@jakke1975
@jakke1975 8 дней назад
Environmental pollution by AI is even a lot worse and honestly, for what? An advanced chat toy for adults that operates with the "intelligence" of a dog?
@Afro__Joe
@Afro__Joe 28 дней назад
AI is becoming like ice cream to me, good every once in a while, but I get sick of too much of it. With Samsung trying to shove it into everything in my phone, MS trying to shove it into everything PC-related, Google pushing it at every turn, and so on... ugh.
@DJWESG1
@DJWESG1 24 дня назад
That's the same Samsung who can't even get its spellchecker and auto correct to work efficiently for ppl with poor spelling and grammar.
@the0ne809
@the0ne809 24 дня назад
Google using AI for its search engine is wild to me.
@TheManinBlack9054
@TheManinBlack9054 23 дня назад
@@the0ne809 every search engine uses it
@Overt_Erre
@Overt_Erre 21 день назад
They're pushing it because they want to collect more data from you. AI will seem free and useful as long as they think more data will improve their efficacy. Once they see the diminishing returns suddenly you'll be asked to pay and the usage rates will plummet
@denysolleik9896
@denysolleik9896 28 дней назад
It can do anything except tell you that it doesn’t know how to do something.
@Vlad-qr5sf
@Vlad-qr5sf 28 дней назад
If it can do anything then it doesn’t need to tell you that it can’t do something. Your statement is contradictory.
@shafferfs
@shafferfs 28 дней назад
​@@Vlad-qr5sfshut up nerd
@denysolleik9896
@denysolleik9896 28 дней назад
@@Vlad-qr5sf someone always thinks they’re smarter than me.
@hootmx198
@hootmx198 28 дней назад
Just like your average internet user haha
@JGComments
@JGComments 28 дней назад
Right, it doesn’t actually fundamentally understand what anything is, like what a cat is versus what a dog is.
@granyte
@granyte 28 дней назад
"steer me into my own bad ideas at an incredible speed" LMAO this is perfect it's exactly what it does when it even works at all. I don't know if my skills have improved that much since gpt-4 came out or what but it feel like copilot and chat-gpt have become way dumber since launch.
@allansmith350
@allansmith350 28 дней назад
I use all of them and I kind of agree, but I will say, I've cowboyed into some small project solutions VERY fast with ai. They're surely not robust or maintainable though
@AndrasBuzas1908
@AndrasBuzas1908 28 дней назад
It breaks down the moment you try to do something complex that it hasn't seen before. Even then with small problems, it can completely miss the point. It's only really good for the occasional auto complete suggestions.
@rngQ
@rngQ 27 дней назад
Engineers at OpenAI have talked about how the quality of generation scales with compute. So as more people use GPTs, I can imagine the compute pool being more divided which lowers the quality of the output. Look at how drastically it scales with Sora for example
@elPresidente650
@elPresidente650 27 дней назад
@@allansmith350 I've been using it for a while, and honestly, I can't complain too much. I don't ask it to do anything fancy, though. It comes in handy when writing documentation based on my layman's prompts. It needs to be edited, of course, but it does a good job at organizing my ideas.
@TheManinBlack9054
@TheManinBlack9054 27 дней назад
Use Claude 3 Opus, it's far better for coding. Seriously. Opus is really better.
@SL3DApps
@SL3DApps 28 дней назад
It’s crazy how OpenAi’s only way to stay relevant in this market vs big tech such as Google and MS is to sell the hype that Ai will not peak in the near future. Yet, they are the company everyone is relying on to say if Ai has or has not peaked… why would they ever want to admit to anything that can be damaging to their own company?
@furycorp
@furycorp 28 дней назад
Altman just needs everyone to hand over more personal data and private/internal documents from businesses so he can live out the megalomaniac fantasies that he talks about in interviews
@alexandrecolautoneto7374
@alexandrecolautoneto7374 28 дней назад
AI Trains on the internet -> AI filled the internet with garbage -> AI doesn't have good training data anymore...
@hughmanwho
@hughmanwho 28 дней назад
@@furycorp I'd be curious to see these interviews you are referring to
@hughmanwho
@hughmanwho 28 дней назад
My guess is that ChatGPT 5 will be better quality. 4 definitely has some issues.
@dixztube
@dixztube 28 дней назад
@@furycorp he isn't trustworthy at all
@amesasw
@amesasw 28 дней назад
One major problem: if I ask a person how to do something that none of us know the solution to, they may be able to theorize a solution, but they will often tell you they are guessing and not 100% sure about some parts of their proposed solution. ChatGPT can't really theorize for me or tell me that it is not sure of an answer but is theorizing a solution based on its understanding or internal model.
@doctorgears9358
@doctorgears9358 27 дней назад
It will theorize and be confidently wrong. Which is honestly worse than it just admitting a lack of knowledge.
@BHBalast
@BHBalast 26 дней назад
There is a compute-intensive method to check for model confidence. As LLMs are statistical models, one might prompt it multiple times and check if the answers are the same. The second step can also be done by an LLM. This method works and was used in some paper associated with medical use of LLMs, but I don't remember the name.
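A minimal sketch of that idea, often called self-consistency sampling. This is only an illustration: `ask_llm` is a hypothetical stand-in for whatever chat-completion client is being used, and agreement between samples is just a rough confidence proxy, not the method from any specific paper.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around your chat-completion API of choice."""
    raise NotImplementedError  # plug in your client here

def answer_with_confidence(prompt: str, n_samples: int = 10):
    # Sample the same prompt several times (temperature > 0 assumed),
    # then use agreement between samples as a crude confidence score.
    answers = [ask_llm(prompt).strip().lower() for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_samples  # e.g. ("32 bytes", 0.8)
```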
@reboundmultimedia
@reboundmultimedia 23 дня назад
If you give a human a new problem, they will often use tools, research, test things out, etc. to find the solution. There are very few humans who can simply solve a new problem without some kind of pretraining involved. There is no reason that a very, very good LLM can't do the same thing. They will be able to use tools the same way a human can.
@therealjezzyc6209
@therealjezzyc6209 14 дней назад
​@reboundmultimedia while what you're saying isn't wrong, it isn't accurate to say that humans and LLMs learn the same way, or that they learn the same relationships. First of all, humans learn faster than LLMs do with less training data. Second, when faced with a challenging problem, a human will go off and collect new information; an LLM will not go and find new textbooks and put them into its training data and retrain itself to learn new correlations. Humans can actively acquire new knowledge that they haven't seen or trained to work with, LLMs cannot acquire knowledge that wasn't implicit in the representations of the data they were trained on.
@justinwescott8125
@justinwescott8125 11 дней назад
It will tell you it's not sure if you ask it to. But you're right that it's not a built in behavior. "Hey ChatGPT, for this conversation, if you give me an answer that you're not very sure about, I want you to tell me. In fact, for every answer you give, please give me a percentage that represents how sure you are, and explain how you arrived at that percentage."
@GigaFro
@GigaFro 27 дней назад
Just last year, I was sitting in a makeshift tent in an office in downtown San Francisco, attending a Gen AI meetup. The event was a mix of investors and developers, each taking turns to share their projections on the future progress of AI. Most of the answers were filled with exponential optimism, and I found myself dumbfounded by the sheer enthusiasm. When it was my turn, I projected that we were peaking in terms of model performance, and I was certain I was about to be ostracized for my view. That day I learned that as soon as hype enters the room, critical thinking goes out the window - even for the most intelligent minds.
@sp123
@sp123 27 дней назад
People go into tech because it's the last gold rush of easy money.
@TheManinBlack9054
@TheManinBlack9054 23 дня назад
Great! You seem to have found your audience here, but if I may ask, what were your projections based on?
@Danuxsy
@Danuxsy 23 дня назад
But you would have been wrong? GPT-4o is clearly a step up from GPT-4, and OpenAI have stated themselves that we are far from the limit of generative models.
@justahamsterthatcodes
@justahamsterthatcodes 23 дня назад
We certainly are plateauing. Compare GPT-2 to GPT-3: wild difference. Now compare GPT-3 to GPT-4: much less difference. Or GPT-4 to GPT-4o.
@skyrimax
@skyrimax 23 дня назад
Attended an ML day type event last year, had a similar experience. But what dumbfounded me even more was the complete disregard for the social implications of ChatGPT-type programs, like the new Google Overview telling depressed people to jump off a bridge. I think that's a similar observation to the one you had about critical thinking, but on the social side.
@MrKlarthums
@MrKlarthums 28 дней назад
There's plenty of software that has simultaneously improved while having an entirely degraded user experience. If companies feel that it makes them more money, they'll absolutely do it. LLMs will probably be part of that.
@monad_tcp
@monad_tcp 28 дней назад
Windows 11, for example: structurally the thing is actually better than the previous ones. But in user experience, it has degraded so far from Windows 7. Even though Windows 11 is prettier than Windows 10, which was ugly as hell, it's far from the simple beauty of Windows 7's glass, and it's barely usable.
@Forty8-Forty5-Fifty8
@Forty8-Forty5-Fifty8 27 дней назад
@@monad_tcp To be fair, isn't that just the Microsoft development cycle: just alternating between releasing a good product and then releasing a shitty one? At least that is what I've been told since I was a kid, and my only experience is W7(good), W8(dogshit), W10(good), and then W11(dogshit, but improving).
@monad_tcp
@monad_tcp 27 дней назад
@@Forty8-Forty5-Fifty8 probably, it's the tick-tock cycle from old Intel
@Forty8-Forty5-Fifty8
@Forty8-Forty5-Fifty8 27 дней назад
@@monad_tcp lol I was just having a conversation yesterday with my grandfather about how I have a conspiracy theory that intel pretends to release a new generation every year when in reality it takes like 2-4+ generations for any noticeable performance differences because my motherboard just died and I was in the market for an upgrade, but it didn't seem like there was anything worthwhile. I guess there is something to that theory
@monad_tcp
@monad_tcp 27 дней назад
@@Forty8-Forty5-Fifty8 I think Intel died on 14nm; nothing got better after that
@jameshickman5401
@jameshickman5401 28 дней назад
Every exponential curve is secretly a sigmoid curve.
@zyansheep
@zyansheep 28 дней назад
So far...
@AndrasBuzas1908
@AndrasBuzas1908 28 дней назад
Sigmoid grindset
@Forty8-Forty5-Fifty8
@Forty8-Forty5-Fifty8 27 дней назад
what about the exponential curve
@kevin.malone
@kevin.malone 27 дней назад
@@AndrasBuzas1908 I wanted to say that
@MikkoRantalainen
@MikkoRantalainen 27 дней назад
I would say that every exponential curve of *naturally occurring events* is secretly a sigmoid curve. You can have pure exponential curves in pure mathematics without any problems, but real-world events are limited by real-world physical limits, and those curves seem to follow a sigmoid in the big picture even though short-term results point to exponential behavior.
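A minimal sketch of that point in formulas (my own illustration, not from the video): a logistic curve is indistinguishable from an exponential while you are still far below its ceiling, and only the ceiling term reveals the difference later.

```latex
% Logistic (sigmoid) growth with carrying capacity K:
f(t) = \frac{K}{1 + e^{-r(t - t_0)}}
% Far below the ceiling (t \ll t_0) the denominator is dominated by the
% exponential term, so the curve looks purely exponential:
f(t) \approx K e^{\,r(t - t_0)}
% Near and past t_0 the same curve flattens toward K, which is why
% early extrapolation of the "exponential" phase overshoots badly.
```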
@MikkoRantalainen
@MikkoRantalainen 27 дней назад
Modern image generators can do surprisingly well even with slightly weird prompts such as "Minotaur centaur with unicorn horn in the head, steampunk style, award winning photograph" or "Minotaur centaur with unicorn horn in the head, transformers style, arc reactor, award winning photograph". Even "A transformers robot that looks like minotaur centaur, award winning photograph, dramatic lighting" outputs acceptable results. However, ask it for "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras" and it will totally fail. The latter case has far fewer possible implementations, and this exactness makes it fail.
@pureheroin9902
@pureheroin9902 22 дня назад
I need to see your search history 🤣🤣🤣
@MikkoRantalainen
@MikkoRantalainen 21 день назад
@@pureheroin9902 🤭 My search history is actually pretty boring. Right now it looks like this:
- phpunit assertequals github
- css properties selectors sanitizer whitelist
- sanitize css whitelist functions
- phpunit assertequals clipped string
- webp vs avif vs jpeg xl
- what is intel ark
- seagate exos helium
- max fps cs
- eu legislation consumer battery replacement
- how Automatic Activation Device works
- song of myself nightwish
@TomNook.
@TomNook. 28 дней назад
I hate how AI has been forced into everything, just like crypto and NFTs a couple of years ago
@MasterOfM1993
@MasterOfM1993 28 дней назад
somehow feels like the people who used to talk about web3 all the time now talk about AGI all the time
@Slashx92
@Slashx92 28 дней назад
Sadly, this is somewhat useful for the corporate world, so it will stay, unlike NFTs, which just died on their own.
@francisco444
@francisco444 28 дней назад
AI is in everything because it's a universal translator so it makes sense to put it everywhere. Crypto is great but limited use.
@thewhitefalcon8539
@thewhitefalcon8539 27 дней назад
​@@MasterOfM1993some people running NFT companies are running AI companies now
@marceljouvenaz257
@marceljouvenaz257 27 дней назад
Elon is investing $10 bln in AI this year. YMMV, but that is my high water mark.
@tequilasunset4651
@tequilasunset4651 28 дней назад
We didn't even go "from nothing to something" - current LLMs are just a marked spike/ breakthrough in capability of machine learning that's been around for ages. I think we'll still see huge improvement in the technology that enabled that breakthrough but doubt there will be a "next level" - that's not just a tech company branding a new product as such - for a good few years.
@TheNewton
@TheNewton 23 дня назад
the breakthrough of course being just throw more resources at the problem
@techsuvara
@techsuvara 27 дней назад
I like to say "AI accelerates you in the direction you're going, pray it's not the wrong one"...
@BaruyrSarkissian
@BaruyrSarkissian 13 дней назад
It's still good to reach the end of a wrong road faster.
@techsuvara
@techsuvara 12 дней назад
@@BaruyrSarkissian that's the problem with wrong roads, if you're asking AI to take you somewhere, it doesn't know it's the wrong road. However if you do things yourself, you can reason you're down the wrong path much earlier.
@BaruyrSarkissian
@BaruyrSarkissian 10 дней назад
@@techsuvara your initial statement is "AI accelerates you in the direction your going" you will go on wrong roads with and without AI.
@bwhit7919
@bwhit7919 26 дней назад
Most people misunderstand when they hear AI follows a "power law". If you read OpenAI's paper on the scaling laws, you need a 10x increase in both compute and data to get a 0.3 reduction in the loss function. In other words, you need exponentially more data to keep making the models better. It's not that the models are getting exponentially better.
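For reference, the scaling laws in Kaplan et al.'s paper have roughly this power-law shape; the exact constants vary by setup, and the 0.3-per-10x figure above is the commenter's paraphrase, not something confirmed here.

```latex
% Loss as a power law in compute C (similar forms hold for data D and parameters N):
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
% Taking logs shows why gains only look linear on a log axis:
\log L \approx \alpha_C \left(\log C_c - \log C\right)
% Every constant decrement in \log L therefore costs a constant *factor*
% (e.g. 10x) more compute: exponentially growing resources for steady gains.
```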
@DJWESG1
@DJWESG1 24 дня назад
No, they just haven't figured out how to utilise small amounts of data.
@hamm8934
@hamm8934 28 дней назад
Read up on the "frame problem" and "grounding problem". This is old news and has been known for decades. Engineers and venture capital just don't care because it's not in their interest. Edit: also Wittgenstein's work on family resemblance and language games. Edit 2: I should clarify that I am referring to the epistemological interpretation of the frame problem, not the classical AI interpretation. That is, the concern of an infinite regress from an inability to explicitly account for non-effects when defining a problem space; this is specifically at the level of computation, not representation. For example, if an agent is told "the spoon is next to the plate", how are all of the other non-effects, like a table, placemat, chair, room, etc., successfully transmitted and understood, while irrelevant inaccuracies like a swimming pool, cows, cars, etc. are omitted and not included in the transmission of information? Fodor, Dennett, McDermott, and Dreyfus have plenty of canonical citations and works articulating this problem.
@InfiniteQuest86
@InfiniteQuest86 28 дней назад
As long as you profit before anyone figures it out, you win.
@abdvs325
@abdvs325 28 дней назад
Those problems don't seem like limits at all. The frame problem is just about understanding relevant context, and there is no definitive evidence that it can't be reproduced in AI. Neither has the grounding problem, which is just about understanding the real world rather than statistical relationships between words, been given any strong evidence that it is a limit on AI progress. This is laziness.
@hamm8934
@hamm8934 28 дней назад
​@@abdvs325 Those are extremely surface-level strawman understandings of both. Far greater minds than anyone watching this video have debated and formulated both of these critiques. You can hand-wave all you want, but the white papers have been left undisputed for decades. Here are a few points you are missing/oversimplifying: - The frame problem argues that in principle there is no deterministic - or probabilistic - way to determine relevant context in a logical framework. That is the problem. It shows that an infinite regress emerges when trying to determine relevance and irrelevance following deduction or induction. These systems axiomatically dissolve into intractability. - The grounding problem is not about determining the real world from a word. It cuts to the very root of deductive and symbolic systems. It again shows that there must, in principle, be external dimensions/modalities that allow humans to deduce meaning from symbols. Symbols themselves are not sufficient. For instance, one's understanding of the symbol "food" is multimodal and multidimensional. You don't understand the word food because you read the definition of the symbol. You've smelt food. You've tasted food. You've felt food. You've prepared food. You've thrown away food. You've remembered food. Etc. Read up on the Chinese Room example and it might make it clearer. Or read some of Wittgenstein's work on the meaning of a word. I'm rambling at this point. Again, read up on these and don't be so naïve as to reject them after having a super basic understanding of them. These problems are real and ever present. These problems are very much open.
@hamm8934
@hamm8934 28 дней назад
​@@abdvs325 You're oversimplifying and strawmanning both. YouTube deleted my response, but read more. Also, "For which there is no definitive evidence that it can't be reproduced in AI." This is a fallacy. You cannot prove a negative. There is no definitive evidence that there are not fairies. Exactly. No one is saying there is. The point is that there is no evidence in favor of positing the existence of fairies, therefore we just don't say there are fairies, but we can never say there aren't. There are no serious rebuttals to the frame or grounding problem, and as such, there is no reason to think they are wrong. They might be, but they've stood strong since at least the 80s when the terms themselves were coined, even though the concepts go all the way back to far earlier with Hume. You need positive evidence to say they are wrong. Until then, they stand as the null hypothesis.
@clubpenguinfan1928
@clubpenguinfan1928 28 дней назад
Finally someone mentions philosophy of language. When the video mentioned the idea of mapping text/images to their meaning in some embedding space, it set off some alarms for me. If some hypothetical AGI can grasp meaning (like we do) via this architecture, then we might as well describe the "x means M" relation as just this embedding map. Wouldn't this have huge implications for the semantic problem? In a way it feels like an implementation for a referential-like theory of meaning, and those are the very first theories you "debunk" in an intro Phil of Lang class.
@CristianGarcia
@CristianGarcia 28 дней назад
Numberphile but Primagen talks from time to time
@virior
@virior 22 дня назад
Yeah! That's called a react, I've been enjoying the format.
@kallekula84
@kallekula84 20 дней назад
@@virior he usually lets the guy finish a sentence, how often did he even let the guy finish a sentence here?
@tonym4953
@tonym4953 28 дней назад
8:20 OpenAI is doing the same thing with the consumer version of ChatGPT. They are essentially charging users to train their model. Genius and very cheeky!
@MrSnivvel
@MrSnivvel 28 дней назад
LaTeX formatted papers (the research paper in the video) are gigachad. You cannot prove me wrong.
@-book
@-book 28 дней назад
LaTeX is such good software, puts Word to shame
@sahasananth987
@sahasananth987 28 дней назад
I love LaTeX, it's awesome; I have thrown Word and Google Docs in the trash lol. I use LaTeX for assignments at school too.
@AJewFR0
@AJewFR0 28 дней назад
I went to a good CS college with a slightly math-heavy emphasis. I was the kid who started learning LaTeX for homework in multivariable calculus. It is such a useful tool to know for all my math, CS, and engineering classes that required PDF submission. I still use the basic LaTeX formatting in markdown docs at work.
@xplorethings
@xplorethings 27 дней назад
So.. every paper outside of social sciences?
@MrSnivvel
@MrSnivvel 27 дней назад
@@xplorethings **whoosh** The use of LaTeX is rare outside of academia and research papers/publications, and those who do use it outside of that scope set themselves far ahead from the rest. I know last month was Autism Awareness month, but you'll still get a freebie this time for missing the point.
@snarkyboojum
@snarkyboojum 28 дней назад
The main issue is that the people responsible for the fundamental approaches being used in deep learning today have never wrestled with the problem of induction. They need to read the classic treatment by Hume and then follow up with Popper. Humans don't use induction to reason about the world. It's shocking to me that otherwise highly educated people have never read basic philosophy of epistemology. Narrow education is to blame, really.
@ea_naseer
@ea_naseer 28 дней назад
Induction has a formula, Solomonoff induction; yes, it's intractable, but it's there. But there's no formula for deduction, not even an intractable one, not even an NP-hard one.
@specy_
@specy_ 27 дней назад
This is a cool topic; why would you say humans don't use induction in their daily life? Exclude the scientific world, which we can say doesn't always use it, but induction is probably the simplest and most used prediction technique used by humans. I guess ML models can't really do much other than use induction to get a prediction, unless you are exhaustive with your possible inputs. What's your idea for not using induction in ML?
@xCheddarB0b42x
@xCheddarB0b42x 23 дня назад
The young ones may not remember the VR craze of the late 90s and early 00s, but us oldkins do. AI feels like that to me.
@rh906
@rh906 16 дней назад
The difference between that and now is that LLMs are at least useful if you understand their limitations and don't plop out your brain thinking they're a replacement. Can't fix lazy and stupid people, I suppose.
@tlz124
@tlz124 11 дней назад
VR in the 90's?
@justinwescott8125
@justinwescott8125 11 дней назад
Yup. Nintendo gave it a try in the 90s with a little product called the Virtual Boy. By the way, even though VR was a failed craze back in the 90s and 00s, it actually happened eventually. I use my Meta Quest like every day to play games and stay in touch with faraway friends. Some of the games are incredible, like Pistol Whip and Arizona Sunshine.
@FarnhamJ07
@FarnhamJ07 4 дня назад
Yep yep, the Virtual Boy didn't come completely out of left field! I'd say the hype was really more about 3D graphics than VR itself, but it didn't take long for them to start pushing the idea that those 3D graphics could then be used to generate an entire 3D virtual world around you. Everyone knew the 3D graphics part was coming at least; I think a lotta people forget that the Virtual Boy and original PlayStation came out within a few months of each other!
@DeusGladiorum
@DeusGladiorum 28 дней назад
I didn’t appreciate Prime making those Kakariko girl noises while I was outside and without headphones
@derekcahill1775
@derekcahill1775 28 дней назад
Jeff Bezos said it best, but I think it's telling that AI needs so much data to form a basic model. For example, humans don't need to know everything about driving or have hundreds of thousands of miles in order to start driving a car. The other problem is that AI doesn't perceive opportunity cost like a human, so there's no incentive for it to problem-solve the same way a human would. AI is definitely the future, but it's nowhere near where people think it is, unfortunately.
@monad_tcp
@monad_tcp 28 дней назад
It's funny, I learned to drive my car in one week after a mere 500 km of training data.
@monad_tcp
@monad_tcp 28 дней назад
I also don't remember needing to read the entire internet to be able to write and understand text.
@Slashx92
@Slashx92 28 дней назад
Yeah, but we have 20 years of experience living in reality (or 16 or w/e) when driving. You already have eye-hand coordination, you have seen cars all your life, you get a rough idea of how the road works in children's shows and books. There is an immense amount of data you are not acknowledging.
@cauthrim4298
@cauthrim4298 27 дней назад
​@@Slashx92people learned to drive when cars first came about all the same, it also didn't take extraordinarily long too.
@jackoplumkin6412
@jackoplumkin6412 27 дней назад
​@@cauthrim4298because there were other manual vehicles at the time that used to do the job of cars. and it's not like the earlier models of cars were much different from the carts people were used to when it was first invented
@PasiFourmyle
@PasiFourmyle 27 дней назад
If the next step is to figure out the training problem, what if the dumb "AI Pins" and "Windows Copilot +Plus ++..." are actually just attempts at having new training data sources?
@PasiFourmyle
@PasiFourmyle 27 дней назад
I don't know why I said "what if.." like there's an impending doom🤣
@ImDGreat
@ImDGreat 17 дней назад
@@PasiFourmyle Not an attempt, they're actually doing it for that; also Meta, Twitter, Discord, Telegram, WeChat, even games like Valorant and League.
@benwintraub558
@benwintraub558 28 дней назад
The XY problem (or the "ex-wife" problem) is the "how do you dynamically name variables in a loop?" problem. I've heard newbie programmers ask this before when what they are really looking for is an array/list.
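A small before/after illustration of that exact question (names invented for the example): the asker's stated problem is "create value1, value2, ... dynamically", but what they actually need is a list indexed by position.

```python
# What the newbie asks for (X): "how do I create value1, value2, ... in a loop?"
# globals()[f"value{i}"] = compute(i)   # technically possible, fragile, hard to use later

# What they actually need (Y): a list (or dict) indexed by position.
def compute(i: int) -> int:             # stand-in for whatever each iteration produces
    return i * i

values = [compute(i) for i in range(10)]
print(values[3])                         # the "value3" they wanted, without dynamic names
```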
@MasamuneX
@MasamuneX 27 дней назад
I think LLMs as a foundation for AGI make sense, but I also think there needs to be REASONING ability: the ability to hold two concepts in its metaphorical head and then determine which one is better for the task, not just a fire hose of text spewing out. The token cost will be wild though.
@ERICROJO156
@ERICROJO156 26 дней назад
AI bros are crying now because they're gonna have to take responsibility for their own laziness, since their AI god isn't gonna happen ❤
@thisbridgehascables
@thisbridgehascables 28 дней назад
I agree, I believe we are going to hit a plateau on AI very soon. We'll make small improvements, but the next jump won't be possible until the very foundation changes. I don't think we would need to advance in other areas of computing to keep constant growth in AI.
@Photoshop729
@Photoshop729 24 дня назад
Netflix - why have 10 or 12 genre experts making recommendations when you can spend a billion developing an AI to recommend Adam Sandler movies to paying customers because the movie was produced by Netflix.
@ci6516
@ci6516 23 дня назад
The Netflix AI was incredibly revolutionary and effective. Same with YouTube's. How many hours are you on here?
@chrisfrank5991
@chrisfrank5991 23 дня назад
@@ci6516 I'm here for the comments. You are implying that AI is responsible for the watch time on YouTube and Netflix. I'm saying, what would the watch time be if instead there was some dude named "Tom" who picked out and ranked the AI-type videos that we are watching, or which comedies on Netflix get top placement? More interestingly, I wonder if that isn't already the case, that 1000 AIs ("A" bunch of "I"ndians) are actually tagging and ranking a lot of this so-called AI content - I'm not making this up; this was revealed to be the "AI" behind the Amazon stores with no checkout counters. It's hilarious to think about!
@KrisRogos
@KrisRogos 27 дней назад
1885: Benz Patent-Motorwagen (first practical automobile) has a top speed of 10 mph / 16 km/h.
1908: Ford Model T (first mass-produced automobile) has a top speed of 42 mph / 68 km/h.
That is 23 years to gain 32 mph; assuming exponential growth, by the year 2024 our cars should be going 1817 mph / 2924 km/h.
To be fair, linear growth would be "only" 204 mph, which is far more realistic, and you can cherry-pick other "cars" to fit the model even better. However, the point is that this is not a reasonable way to estimate future technological progress.
@TheManinBlack9054
@TheManinBlack9054 23 дня назад
True, but cars have practical limitations; you won't need your car to drive 204 km/h.
@TheNewton
@TheNewton 23 дня назад
In 1997 Andy Green's Thrust SSC set the land speed record of 1,228 km/h (763 mph). The capability is there, but the "should be" part is that they should not go that fast on purpose for general usage. A better analogy is probably flight versus manned space distance, i.e. we should already be doing manned Mars missions or humans leaving the solar system.
@KrisRogos
@KrisRogos 23 дня назад
@@TheNewton Just like that was a heavily specialised car, I don't doubt we will have extremely sophisticated models running solutions for cutting-edge problems in medicine, physics, or even just breaking records. Future space missions may even require AGI instead of a 10+ minute Earth delay. But there is a huge gap between the practically unlimited time and money of moonshot projects and the idea that LLMs will run every detail of our lives and be on every device. Even if 1000 mph jet cars are theoretically feasible, and even if you could technically get a 300 mph Bugatti, you are not going to do a school run in either.
@Gamez4eveR
@Gamez4eveR 21 день назад
@@TheNewton the problem is that the SSC was not a production vehicle
@arexxuru5022
@arexxuru5022 28 дней назад
Where will ChatGPT train now that Stack Overflow is filled with ChatGPT answers? Amiright?
@trappedcat3615
@trappedcat3615 28 дней назад
There is no end in sight if they train on Github user data or Copilot workspaces in VS Code
@dahahaka
@dahahaka 28 дней назад
It's already being intentionally trained on synthetic data, it's a non issue
@GrumpyGrebo
@GrumpyGrebo 28 дней назад
@@dahahaka Yeah you missed the point. Training a generative AI on AI generated data. Human in, human out.
@c0smoslive391
@c0smoslive391 28 дней назад
@@dahahaka Yep, and the results are worse: garbage in, garbage out
@AR-ym4zh
@AR-ym4zh 28 дней назад
Press x to doubt​@@dahahaka
@apexphp
@apexphp 28 дней назад
It's even simpler than that. They've simply run out of training data. They've trained the LLMs on literally every piece of data ever generated by humans since the dawn of mankind, from every word written to tons of satellite images, to every movie produced and song recorded. There is no more training data, and the LLMs still get things wrong all the time (the other day Meta AI was adamant that a SHA-256 hash is 64 bytes in length; it's not, it's 32 bytes). And you can't just have these things train on synthetic data they create, because that just makes them dumber. Plus, with the sheer amount of AI-generated garbage content and spam that now exists in the world, these LLMs are probably as smart as they're going to get for a long time. I read a report a while ago that estimates the volume of text that was generated by humans from the dawn of mankind until recently is now being generated by AI every two weeks and pushed to the internet. So the pool of training data for LLMs is now of lower quality overall. I don't know, I'm rambling now.
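The SHA-256 point is easy to verify locally; a minimal check using only the Python standard library shows the digest is 32 bytes, while the 64 figure matches its hex representation.

```python
import hashlib

digest = hashlib.sha256(b"hello").digest()
print(len(digest))          # 32 bytes = 256 bits
print(len(digest.hex()))    # 64 hex characters, the likely source of the mix-up
```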
@DeepThinker193
@DeepThinker193 28 дней назад
The obvious solution to this is to go back to the drawing board, actually figure out and understand how the AI works, improve it, and recreate the AI from scratch.
@BB-uy4bb
@BB-uy4bb 28 дней назад
You're missing a huge point: data quality. I would estimate that 90% of the internet is wrong/garbage data; there could be huge improvements if you simply let the AI only see the quality data and filter out the garbage. Chances are the AI only makes so many mistakes because it saw that many in the training data. The next thing is we always expect the AI to be correct on its first try, but if you give a human only one chance, he'll most likely be wrong. We learn, create ideas, and get to the correct solution iteratively, yet we expect the AI to give the correct answer in one shot; not a fair comparison. If you give AIs more time to think, they get better as well.
@MrMeltdown
@MrMeltdown 28 дней назад
You mean the AI is getting distracted by pron….
@dragoon347
@dragoon347 28 дней назад
Overall, data needs to be marked for tokenization into LLMs. Previously there were only x amount of pictures with descriptions; then the vision multimodal models came out, and now you can describe the images with a better dataset: more descriptive, more in-depth, and multi-dimensional... i.e. it's a dog, a yellow dog, a yellow Jack Russell terrier, a dog in the canine family, etc. So the data may shrink, but the richness of the data will be far better, and now, with GPT-4o, you have hear/see/NLP datasets, giving at least 3 vectors to provide descriptions of tokens.
@LiveType
@LiveType 28 дней назад
This. When GPT-4 came out I was blown away because it had signs that it could reason and plan (although very, very poorly past 2 iteration steps), but it could do it. I then thought about whether you could make it complete. Can you make one of these LLMs able to have near-perfect hierarchical planning like a team of humans can? The answer I came to was no. The fundamental design of how an LLM works does not allow that to occur. The path-planning Q-star technique OpenAI experimented with, embedded into the vector space, looks promising and is similar to what I had envisioned to solve that issue, but it seems frighteningly difficult to implement successfully on any large model due to just how massive the models are. The search space is enormous. Like mind-bogglingly large. The other issue was data. GPT-4 was trained using just about all of the data available on the internet. Other models that use a similar amount of data with similar architecture perform very similarly, further validating the meme "just add more layers/data and line goes up". I then made a prediction that LLMs would cap out in 2025, maybe 2026, because at that point you would have completely exhausted all available data. We're not quite there yet, but we are VERY close. What I didn't predict is that we'd start poisoning the models with their own data at the pace we are. Very soon AI-generated info will exceed human training info in the datasets used, unless you hire thousands upon thousands of people to sift through it working 14-hour days. Like, by the end of the decade it'll be nontrivial to find data not poisoned by LLMs. TLDR: LLMs are not the answer but are likely part of the answer. We also seem to be shooting ourselves in the foot by how much we're using these LLMs.
@quachhengtony7651
@quachhengtony7651 27 дней назад
Let's goooooooooooo we're not losing our jobs after all
@nonyabusiness3619
@nonyabusiness3619 18 дней назад
Don't celebrate too early.
@Jabberwockybird
@Jabberwockybird 16 дней назад
Yes, forget the AI doomers. Doomer porn is popular everywhere. Politics, economics, etc.
@nickwoodward819
@nickwoodward819 28 дней назад
Fuck, tried to get Midjourney to put a kiwi on a snowboard. It had no fucking clue.
27 дней назад
Your prompting sux
@nickwoodward819
@nickwoodward819 27 дней назад
No mate, it's exactly as the video states, it's shit at niche subjects. It wasn't even remotely like a kiwi. But please, tell me what 'prompt' would have got it to understand what a kiwi looks like?
@isodoubIet
@isodoubIet 26 дней назад
I just asked copilot (== gippity + dalle 3) and it did it perfectly
@nickwoodward819
@nickwoodward819 26 дней назад
@@isodoubIet don't know what to tell you bud, midjourney couldn't do it late last year. not sure how much prompting it needed to get a kiwi looking like an actual kiwi
@isodoubIet
@isodoubIet 26 дней назад
@@nickwoodward819 You don't have to tell me anything. You can try it yourself. The prompt I used was literally just a kiwi on a skateboard, nothing special. The first time it thought I meant the bird, which is understandable. The second time I specified a kiwi fruit. I once tried to get stable diffusion to make a classic grey alien and it just wouldn't. Probably a weird hole in the training data. Definitely no fundamental issue in making it generate "an X on a Y", no matter how unrelated X and Y may be.
@MikkoRantalainen
@MikkoRantalainen 27 дней назад
23:45 I really hate when a publication renders graphs next to each other and clips the vertical axis differently for every graph. For example, the Retrieval graph for LAION-400M would practically render three nearly horizontal lines instead of a strong linear correlation if you used a vertical scale that went from zero to one instead of 0.73 to 0.87.
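For anyone who wants to see the effect, here is a small matplotlib sketch with made-up numbers in the same 0.73-0.87 range; the only difference between the two panels is the y-axis limits.

```python
import matplotlib.pyplot as plt

x = [1e6, 1e7, 1e8, 4e8]          # dataset size (made-up, log-spaced)
y = [0.74, 0.78, 0.83, 0.86]      # retrieval score (made-up, in the clipped range)

fig, (ax_clipped, ax_full) = plt.subplots(1, 2, figsize=(8, 3))
for ax, ylim, title in [(ax_clipped, (0.73, 0.87), "clipped axis"),
                        (ax_full, (0.0, 1.0), "full 0-1 axis")]:
    ax.plot(x, y, marker="o")
    ax.set_xscale("log")
    ax.set_ylim(*ylim)            # the only difference between the two panels
    ax.set_title(title)
plt.tight_layout()
plt.show()
```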
@mrraptorious8090
@mrraptorious8090 28 дней назад
20:13 indeed, flip took it out
@Frostbytedigital
@Frostbytedigital 28 дней назад
Seems like he's chewing it or something later so I just wonder what the non-Prime behavior was.
@XDarkGreyX
@XDarkGreyX 28 дней назад
A lotta wife and food cameo
@blakeingle8922
@blakeingle8922 28 дней назад
Your Kakariko girl impression really sold me on your opinions around Chat-GPT.
@JoshZigler-kr9mg
@JoshZigler-kr9mg 28 дней назад
Every hype cycle people can't help themselves but diverge to the extremes. No, we probably won't have AGI in 12 months. Yes, generative AI still has a long runway and will continue to make steady improvements.
@jk-pc1iv
@jk-pc1iv 28 дней назад
But it’s oveeeeerrrr! Wake up every one! 🤡
@marceljouvenaz257
@marceljouvenaz257 27 дней назад
Nuclear fusion is just 20 years into our future, and has been since the 1960s.
@U_Geek
@U_Geek 27 дней назад
I think in order for LLMs to get smarter they will need to be able to have internal loops (yes, I know this makes the math really hard) and/or the ability to change their weights and biases slightly based on context, so that they can focus more on the given conversation.
@stephanreiken9912
@stephanreiken9912 28 дней назад
Peaked isn't really the right word unless you are talking about acceleration. AI development speed has slowed down quite a bit, but it's still getting better.
@Jamsaladd
@Jamsaladd 23 дня назад
100% true about what you said with Copilot. Generative AI will gladly help you make the thing you want to make, regardless of whether or not it will actually work or is a bad idea for various reasons.
@Rohinthas
@Rohinthas 28 дней назад
Honestly, very nice video, Computerphile usually puts out bangers on their own, but you really added to it
@cagnazzo82
@cagnazzo82 28 дней назад
This Computerphile take will age like milk.
@sajadmalik9097
@sajadmalik9097 27 дней назад
I was really really waiting for this video.. I already saw it, was a great video
@MarcinP2
@MarcinP2 28 дней назад
When shoe did room reviews there were a couple that just broke my brain. I saw shapes but did not know what I was looking at.
@thomasgrasha
@thomasgrasha 27 дней назад
The Primeagen references CS Lewis' The Space Trilogy. I just started watching recently, now I feel a kinship.
@bartek...
@bartek... 28 дней назад
19:24 I don't know what bottle is this, however, could you drop from it on a sugar candies or a4? ☮
@PieJee1
@PieJee1 27 дней назад
There are several problems with AI in the long run:
- laws catch up, probably adding more restrictions on AI: for example copyright laws and censorship of what AI can say
- it learns from AI-generated text
- power usage
@The_IW
@The_IW 28 дней назад
You have a screen tearing issue... are you using X11 with fractional scaling?
@shadeblackwolf1508
@shadeblackwolf1508 27 дней назад
I think generalized intelligence is a pipedream that must die... where I think the next evolution is gonna come from is easy-to-deploy AI that is easy to train yourself for your specialized task.
@petersuvara
@petersuvara 28 дней назад
LLM chatbots cannot do spreadsheets with any reasonable accuracy. The thing companies are going for is agents that interact with LLMs… For instance, a spreadsheet agent would be able to work with natural language to generate spreadsheets. However, why not just write directly to the spreadsheet, since it's a different language from natural language anyway?
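Writing directly to the spreadsheet is indeed trivial; a hypothetical "spreadsheet agent" would just emit calls like these. This is only a sketch, with openpyxl shown as one option and the file name, sheet, and cells made up.

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Expenses"
ws.append(["Item", "Cost"])      # header row
ws.append(["Laptop", 1200])
ws.append(["Monitor", 300])
ws["B4"] = "=SUM(B2:B3)"         # formulas are just strings to openpyxl
wb.save("expenses.xlsx")
```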
@wstam88
@wstam88 22 дня назад
The problem with solving problems is that there are no fundamental problems to solve.
@arcaneminded
@arcaneminded 27 дней назад
30:00 LMAO RIP FLIP
@LongJourneys
@LongJourneys 28 дней назад
I use AI for stupid repetitive stuff I'm too lazy to do myself; but I've noticed in recent months the stuff it cranks out seems to be getting worse and worse.
@personzorz
@personzorz 22 дня назад
Or it has lost its novelty and you are noticing
@taragnor
@taragnor 19 дней назад
@@personzorz Yeah the first time you see AI code or do something it was this big "wow" moment. Then you start to have it actually do productive stuff to help you and you kinda realize you have to constantly review its work and you're just putting in a ton of effort to get a mediocre job from a rather stupid employee.
@YaroslavFedevych
@YaroslavFedevych 28 дней назад
A breakthrough will be if you can bootstrap an "AI" on the amount of material sufficient to raise a human child and it gets curious all on its own.
@lorenzowang7933
@lorenzowang7933 27 дней назад
On "inverse tangent", I love the saying that "every exponential curve is just a sigmoid in disguise".
@s.dotmedia
@s.dotmedia 12 дней назад
I personally believe that most people underestimate the power of properly architected and engineered autoregressive language models. You have to pair them with rule-based engineering and have them work in tandem. Hive mind is the concept, but when you pull that all together, the capability for a level of general intelligence is absolutely there. It is not the level of general intelligence that a 50-year-old corporate executive living in the real world would have, but it is the general intelligence of an entity bound to a server, self-aware of what it is and the role it plays in the world, along with its blind spots. Knowing what it excels at, which are the things that you would ask about. Narrow AGI?
@nickwoodward819
@nickwoodward819 28 дней назад
The legend that is Mike Pound
@JackDespero
@JackDespero 8 дней назад
There is another massive problem that is going to cap AI, at least in the near future: current AI databases are based on stolen data. This has legal implications (countries, esp. in the EU, are going to start banning that type of forgiveness-instead-of-permission approach). But more importantly, there are two massive practical implications that will happen regardless of whether governments take action:
- Poisoning the well: tools like Nightshade, designed specifically to confuse LLMs and ML while causing as little disturbance to humans as possible, are becoming more popular and more sophisticated, and they are being used by the top artists that you want to copy. I am sure similar tools will appear for other fields.
- Cannibalism: we are already seeing it. If you google important historical figures, AI images of them are often the first results. The more AI is used and shared over the internet, the more it will enter new AI training databases, causing it to believe that humans have, in fact, six fingers and two heads. AI is transforming into a European royal family: so inbred that it starts to cause serious problems. And this also happens to code (code generated by Copilot then used to train Copilot), fanfics, literature, even scientific papers (esp. in lower-tier publications).
@iPankBMW
@iPankBMW 18 дней назад
The video you're watching - is it filmed onto VHS? :D
@brod515
@brod515 28 дней назад
@ 0:11 Wow, so it's not just me; YouTube (and browsers in general) is so freaking laggy. I've been wondering whether it's my PC. What's going on with browsers the past few years? I can barely load a page without watching it churn and then do nothing.
@SMmania123
@SMmania123 16 дней назад
It's a great play, lets see how it closes. What shall the finale be I do wonder...
@ISKLEMMI
@ISKLEMMI 27 дней назад
What was the thing about mice that got cut??? lmao
@SashaExcelsior
@SashaExcelsior 16 дней назад
The voice to voice conversations you can have with it are a breakthrough. It’s all built on algorithms and all that I get it. The psychological effect of being able to talk with it so smoothly is something new though.
@Koroistro
@Koroistro 28 дней назад
I am fairly sure that yes, the generative part of AI has peaked. The "return to the mean" issue is very big in current systems; however, we are just scratching the surface in how to use LLMs, and models in general, more effectively.
@MrDgf97
@MrDgf97 28 дней назад
Yeah, while their capabilities have peaked, the products/services that use them are just getting started. It's safe to assume that we'll be hearing about more and more people from multiple fields being replaced by AI. It's probably going to be a slowly incrementing wave that's going to peak sooner or later, depending on how cost-effective it is for each industry to adopt generative AI.
@n00bma5ter69
@n00bma5ter69 28 дней назад
Very much agree
@strakammm
@strakammm 28 дней назад
How are you certain that the capabilities have peaked? There are already new models coming out that are beating transformers on multiple benchmarks and there is still potential for a nice growth in upcoming years. Claiming that the capabilities have peaked has literally no backing in current developments
@MrDgf97
@MrDgf97 28 дней назад
​@@strakammm Could you please elaborate on any of these new models? At least a link to an article or paper? I'm ignorant to what you're claiming, and the wording is pretty vague, so there's not much to go from.
@Forty8-Forty5-Fifty8
@Forty8-Forty5-Fifty8 27 дней назад
If a future AGI, that mimics a human brain 1:1, can generate its own content, does that also make that AGI also a GAI? And wouldn't AGI be able to understand topics more deeply(aka at all) thereby allowing it to generate the desired content more accurately? Therefore making an AGI algorithm a GAI algorithm also?
@jlaviews
@jlaviews 28 дней назад
for people who do not know how models work, it seems like magic. it will certainly repeat "regress" much faster
@DJAdalaide
@DJAdalaide 25 дней назад
Once it's learned everything, all the knowledge, there isn't really any more it's going to learn - apart from current events like news and someone creating yet another programming framework.
@DJWESG1
@DJWESG1 24 дня назад
It's at that point we all go to war over its answers.
@matthewdouglas2373
@matthewdouglas2373 28 дней назад
Can you do an interview / conversation with the guy who runs the AI Explained youtube channel? I would love to see steel man arguments from both sides.
@andrewvoss8491
@andrewvoss8491 27 дней назад
The way they seem to be solving the problem of the internet being comprised of data generated by gpt and being an average of an average is by integrating it into Windows. The next steps seem to be that training data will be collected by user interactions through the OS or applications collecting data.
@AA-gl1dr
@AA-gl1dr 27 дней назад
It peaked months ago and has only deteriorated since
@jameslay6505
@jameslay6505 11 дней назад
I think it shows good character that he re-broadcasts the sponsorship from the videos
@testsubjectzero8918
@testsubjectzero8918 17 дней назад
Given these models are no longer really LLMs but multi-modal, isn't the amount of data they can train on vastly larger, i.e. a picture is worth a thousand words?
@15MinuteWellness
@15MinuteWellness 27 дней назад
It's so easy to get it to hallucinate and flat out lie to you.
@balduin_b4334
@balduin_b4334 28 дней назад
this was the first, easy, step. it is only getting better, but harder to reach. getting the first engine going was easy, but we are still enhancing the idea, structure, ... everything around an engine
@kkiimm009
@kkiimm009 14 дней назад
If you go one step out, things typically grow as a log, but often there is a new bump with a new starting point for a new log growth as some new idea in the field starts growing. AI as a whole has had multiple growth spurts; LLMs are just the latest.
@keyboard_g
@keyboard_g 28 дней назад
Computerphile is a solid channel.
@squamish4244
@squamish4244 26 дней назад
Quick gains from LLMs may be ending, but the situation we are in is like we have built an assembly line and have barely used it yet.
@justinkassinger8238
@justinkassinger8238 17 дней назад
With absolutely zero resources to create the infrastructure. Ain't gonna happen in our Lifetime. They ain't replacing sht this century
@8darktraveler8
@8darktraveler8 21 день назад
11:58 How your mate hypes everyone up before heading to the clubs.
@Griffolion0
@Griffolion0 28 дней назад
Seeing Out of the Silent Planet get mentioned in a CS video is not something I had on my bingo card today.
@Rob-gx7rx
@Rob-gx7rx 28 дней назад
The difference between accuracy and precision is interesting. A precise process is very detailed, involves large volumes of data and an exhaustive effort. If the instrumentation is calibrated incorrectly, you will get a very inaccurate answer, but it is still a precise answer. Accuracy is simply a process/result/statement (or whatever) which is a correct interpretation of reality. In theory someone can make an accurate statement without any precision (someone blurting out "the universe is a cheese sandwich" - and if it then turns out to be true, with no experimentation involved whatsoever, you have an accurate yet imprecise statement). It was great to hear someone discussing this with regard to a realm I know nothing about (computing). I don't actually know shit about any realm, but I like to dabble in a lot of worlds.
@MrMeltdown
@MrMeltdown 28 дней назад
At university our lecturer asked us to do some tests on circuits whoever got the most correct results would win something. We all went up and grabbed the multimeters with the most digits being incredibly miffed if we only got the cheap 3 digit ones…. Of course no one picked the ancient analogue meter still sat on the desk. Of course the analogue one won. Not as precise but far more accurate…. Precision is not equal to accuracy. Everything needs to be calibrated and there is a limit to how close that can match the supposed precision.
@Rob-gx7rx
@Rob-gx7rx 28 дней назад
@@MrMeltdown then you have the whole vinyl/mp3 argument. there is a lot to be said for analogue and old school mechanisms instead of the digital world. yes, modern computers etc. but what is quality of life? subjective question i guess. this is by no means a pro-unabomber argument, but i do often wish we lived in a more simple world!
@Oler-yx7xj
@Oler-yx7xj 28 дней назад
The Napster curve, am I right
@orthodox_gentleman
@orthodox_gentleman 4 дня назад
Man, you really have it all-highly intelligent, great hairline, thick and full facial hair, very handsome (no homo, not that it matters), competent, funny, well-spoken, and down-to-earth. With 468k subscribers, you clearly resonate with a lot of people. You seem kind, probably have good friends and reliable people around you, and likely a beautiful girl and you are probably well hung based on your disposition (I know my kind). You come across as peaceful, a true man’s man. I could go on, but just keep up the great work! It’s inspiring to see good men striving for genuine masculinity. It’s also refreshing that you don’t talk about sports teams or gym routines, showing you’re not following the typical adult male programming in this country! Peace, brother.
@tan.nicolas
@tan.nicolas 28 дней назад
Mike Pound is really cool
@vsanden
@vsanden 2 дня назад
It would only need a small adjustment in some bits of information to improve massively. All it needs can be on a memory stick....
@MrMeltdown
@MrMeltdown 28 дней назад
Sounds like a classic signal-to-noise ratio problem. A single tone in a noisy transmission… it's relatively easy to discriminate the note. Now put in two tones and play them through the same transmission channel. Can you tell both tones? Probably. Now try playing six strings of a guitar through heavy distortion (think Jesus and Mary Chain through a glass blower). Can you tell what notes are playing? Nope. The generalisation cannot cope with the same level of noise…
@DeviantFox
@DeviantFox 27 дней назад
Linear on a log scale was fucking hilarious
@CalebAyrania
@CalebAyrania День назад
"Novelty blidness", read Umberto Ecos the "Island the day before "
@todd.mitchell
@todd.mitchell 25 дней назад
Out of the Silent Planet! Just finished my annual reading of the space trilogy.
@Window4503
@Window4503 23 дня назад
Annual? I read it for the first time this year! Couldn’t get behind the second book (not because of the theology but because it felt like it should have just been a nonfiction work) but the first and third were interesting.
@user-ow2im7os8k
@user-ow2im7os8k 28 дней назад
How bad was that Jerky?
@avram202
@avram202 10 дней назад
Isn't that "bit more data" just creativity? Like an in-context randomizer/combiner functionality?
@KenterU2010
@KenterU2010 3 дня назад
The XY problem is very common in data science, people expect a very precise answer to the wrong question. They don't actually like an approximate answer to the right question.
@MrSnivvel
@MrSnivvel 28 дней назад
We need a "Flip's Secret Stash of Cuts". Habanero beef jerky should not be lost to the cutting room floor, per se.
@emonizaz
@emonizaz 14 дней назад
Imagine they have finished training the new version of chatgpt and it suddenly became conscious.
@jm.101
@jm.101 20 дней назад
We’re all just gonna be lost in the wilderness asking chat gpt for directions, when we could just look at a map.
@sasakanjuh7660
@sasakanjuh7660 28 дней назад
The second I saw that grass-fed beef jerky I expected to hear the sponsor ad.. Consequence of watching too many tech channels, I guess..
@Jabberwockybird
@Jabberwockybird 16 дней назад
1:39 good argument against Macroevolution
@uaQt
@uaQt 27 дней назад
I think one reason that AI art could possibly never be the same as real art is that it's not like humans, when they do art, are just projecting a visualization in their head onto paper. I mean, that's kinda the goal, but it's not how it works.
@spacemole
@spacemole 28 дней назад
Keanu Reeves, the famous Canadian with a British accent
@XoStefan
@XoStefan 28 дней назад
classic non-slippery escalator fallacy