
Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024 

GOTO Conferences
102K views

Published: 5 Sep 2024

Comments: 155
@tiger0jp • A month ago
Excellent talk. The USING LLMs section was great, though quite a few people would already be familiar with it given the massive focus on GenAI... it was the first 30 minutes that was really useful, especially regarding our responsibility as technologists. Hearing the categorisation of intelligence, and where current LLMs sit, set the context for where they should be used and where they shouldn't, e.g. fighting wars.
@etunimenisukunimeni1302 • 25 days ago
This is the best talk I've seen on any subject in months. Very well put together, super informative even in this short time. You can tell there would've been more where that came from, and the knowledge and experience of the speaker shows. No over/underhyping, just how, where and why LLMs work the way they do.
@TechTalksWeekly • A month ago
This is an excellent introduction to LLMs and Jodie is a brilliant speaker. That's why this talk has been featured in the last issue of Tech Talks Weekly newsletter 🎉 Congrats!
@dwiss2556 • A month ago
This has been one of the best if not the best demonstration of what AI is actually capable of. Thank you for a great talk and most of all keeping it on a level that is understandable even for non-Gurus!
@Crawdaddy_Ro • A month ago
She isn't taking exponential growth into consideration. It's the same reason many researchers didn't see the current AI boom coming yet others did. If you understand exponential growth, you'll also understand that, even though true AGI is a long way off, it will only take a few more years. Look up the Law of Accelerating Returns.
@dwiss2556 • A month ago
@Crawdaddy_Ro Exponential growth is as limited as any other growth. Just having more transformers does not automatically increase the actual quality of the outcome, which is still the major reason the label 'hype' is very apt in this context. We can make cars drive insanely fast, but that doesn't mean we can actually get them to be driven safely on public roads at that speed. This very much translates to AI: the energy consumption at current stages doesn't correlate with the outcome it provides. Any advantage in time saving is currently eaten up by other negative factors.
@desrochesf • 2 days ago
@Crawdaddy_Ro Which exponential growth is she not considering? Even training counts haven't been exponential. Transistor counts / Moore's law haven't been exponential since 2010, if not sooner, and are about to run out as the gains from transistor size shrinks come to an end. Current LLMs aren't much different from the first 'big L' version pre-2000. The AI "boom" currently taking place is largely consumer grift and investor marketing.
@seanendapower • A month ago
This is the clearest explanation of how this works I’ve come across
@alaad1009 • A month ago
Jodie, if you're reading this, you're amazing !
@mikemaldanado6015 • A month ago
ChatGPT measured its performance on the Bar exam against a set of people who had taken and failed the exam at least once. Research shows that people who have failed once have a high probability of failing again, i.e., do not trust vendor-funded results; independent research disproves a lot of the GPT claims. For example, independent research found that GPT-3.5 gave better answers than 4.0, just not as fast. Thank you for this "no hype" talk; it should be the norm when it comes to discussing LLMs.
@samsonabanni9562 • 4 days ago
She's a great teacher
@aishni6851 • A month ago
Jodie you are a great speaker! Amazing talk, very insightful ❤
@ankurbrdwj • 7 days ago
Thank you for such a great talk, the best I have seen for an introduction to the current state. Really great, no-BS, no-hype talk. That's why everyone should study psychology, not moral science.
@apksriva • 24 days ago
Woah...! Brilliant talk. Very well constructed; I was immersed in the talk till the end.
@prasad_yt • A month ago
Great presentation: concise and loaded. It removes the hype and captures the essence.
@yeezythabest • 29 days ago
The bare mention of hallucinations is the weak point of this presentation, especially in the RAG part, but it was very interesting.
@ManuelBasiri • A month ago
I wish we could mandate watching this talk for all of those over-excited business decision makers.
@VahidMasrour • A month ago
Great talk! The best introduction to LLMs I've seen so far.
@MaciejHajduk • 24 days ago
She makes it so clear. Best introduction to LLMs and "AI" I have seen ❤
@dp29117 • 21 days ago
Thank you, Jodie, for the very nice explanations.
@fabiodeoliveiraribeiro1602 • A month ago
Last year I created an ancient philosophy quiz ("To pithanon ecypyrosis apeiron") and submitted it to ChatGPT. OpenAI's AI did very badly, because it calculated the answer by giving too much weight to one word in the sentence, which led it to attribute the sentence to the philosopher Heraclitus, whose work emphatically rejects another philosophical concept present in the sentence ("apeiron" was coined by Anaximander and used by Xenophanes, a philosopher ridiculed by Heraclitus). Some time later, I applied the same test to another AI and the result was surprising. The same mistake was made, but this time the AI cited the ChatGPT test result that I had previously published on the Internet. People in the field of philosophy do not make similar mistakes, especially if they intend to maintain their credibility. And yes, generative text AIs don't just infringe copyright; they do so by mixing good content with bullshit they invent themselves and inappropriate responses provided by other AIs.
@antonystringfellow5152 • A month ago
Yes, this is what's referred to as "contamination". It's a growing problem for models that are trained on publicly available data from the internet.
@trinleywangmo • 20 days ago
@@antonystringfellow5152 And in a day and age when facts and truth mean so little.
@sayanmukherjee1216 • 9 days ago
Loved it! She is such a wonderful narrator and presenter. I have a question regarding #generalintelligence #ai #agi #llms: what if we link AIs, and one with "regional agency" forms a network with the others?
@GregoryMcCarthy123 • A month ago
Excellent talk! Would like to see more from Jodie
@kehoste • A month ago
Excellent talk, thanks for recording and sharing!
@santhanamss • 18 days ago
Excellent talk, very concise.
@Anna-mc3ll • 26 days ago
Thank you very much for sharing this interesting and detailed presentation! Kind regards, Anna
@InsolentDrummer • 24 days ago
17:04 Jodie, your remark about scientists being well established in physics, mathematics, computer science etc. but not in psychology is important, but a bit incorrect. I've been following the development of LLMs loosely, and still, everyone seems to be missing the most important point: how many linguists were involved in such endeavours? Natural language does not boil down to just learning strings of characters by heart and generating new strings from them. Unless we consult those who study natural language as it is, LLMs are doomed to be just T9 on steroids.
@mortenthorpe • 21 days ago
You are correct in what you write, but this is a mere single subset of the main issue with AI... it needs to know the context if it is required to actually solve a problem (the outcome being useful, somewhat correct, and repeatable). Since no one can communicate context exhaustively when tasked with inputting it into an AI generator, this is impossible... AI for generating solutions remains impossible.
@stevensvideosonyoutube • 13 days ago
That was very interesting. Thank you.
@rflorian86 • A month ago
God damn... I will have to listen multiple times, thank you.
@miketag4499 • A month ago
Absolutely lovely talk
@AnthatiKhasim-i1e • 22 days ago
"As a curious AI enthusiast, I'm fascinated by the potential of SmythOS to make collaborative AI accessible to businesses of all sizes. The ability to visually design and deploy teams of AI agents is a game-changer. What use cases are you most excited about?"
@Flylikea • 14 days ago
15:51 I genuinely believe this part is very clear and eloquent, but it will still confuse a lot of people (partially due to these people's difficulty coping with the notion of raw ability in other humans). Train an LLM on dictionaries and grammar books and then ask it to write The Odyssey. If that cannot help people understand intelligence and how it differs from how ML/AI models (statistical models on steroids) work, I don't think we can help anyone here. It's a great tech. It's not revolutionary (though I can see how it can be used to trigger a revolution), and it is helpful for moving faster through repetitive or repeatable components of a task.
@jmonsch • 27 days ago
Great talk! Thank you!
@andrewprahst2529 • 13 days ago
I like when she says "so". I would be sad if Australia stopped existing.
@trapkat8213 • 24 days ago
Great presentation.
@mioszdaek1583 • A month ago
Great talk, thanks Jodie.
@jamesreilly7684 • A month ago
All of this can be summarized in the statement that AGI will not exist until AI systems can learn by the Socratic method as well as they can teach it.
@NostraDavid2 • A month ago
Note that chatgpt-3.5-turbo has been replaced by chatgpt-4o-mini, its successor. The latter was likely not live when this talk was given.
@NuncNuncNuncNunc • 22 days ago
As long as LLMs get questions like "Three towels dry on a line in three hours. How long will it take for nine towels to dry on three lines?*" wrong, I am not too worried about AGI. LLMs are basically cliché machines that happen to know a lot of clichés in a lot of different domains. * Gemini provides this reasoning: "If it takes 3 hours to dry 3 towels, it means it takes 1 hour to dry 1 towel (assuming consistent drying conditions). If you have 9 towels, and each towel takes 1 hour to dry, then it will take 9 hours to dry all 9 towels."
@dimitriostragoudaras8682 • A month ago
OK, the content is superb and the presentation is top notch, but OMG I would like to be able to replicate her accent (especially when she says DATA).
@BasudevChaudhuri • A month ago
That was a fantastic presentation! Absolutely loved it!
@davidporter6041 • A month ago
Jodie rocks generally, but this is also exactly the kind of talk we need.
@jodieburchell • A month ago
Thanks so much David!
@ytdlgandalf • 14 days ago
Such a clear talker!
@serakshiferaw • A month ago
Fantastic speech. Now I think AI is at a stage where kids do what parents do without knowing why: just imitating.
@samvirtuel7583 • A month ago
Disagree. Humans also simply obey the functioning of their network of neurons. Reflection and understanding are emergent properties; these properties also emerge from LLMs and will become more and more precise.
@sUmEgIaMbRuS • A month ago
@samvirtuel7583 Counter-disagree. Human neurons are non-linear, which makes them way more versatile than digital neurons. And a human brain also constantly evolves and adapts its own structure to the problems it encounters. These are both fundamental properties that will never emerge from simply scaling up the number of parameters in linear pre-trained NNs.
@samvirtuel7583 • A month ago
@sUmEgIaMbRuS This is why I talk about precision; this precision makes it possible to make the LLM less myopic. Hallucination is linked to this myopia, which is due to the lack of precision. But I remain convinced that LLMs 'understand' in the same way that we understand.
@sUmEgIaMbRuS • A month ago
@@samvirtuel7583 GPTs are pre-trained, i.e. they never learn, they're completely static. Reasoning is sequential. You can't get sequentiality out of a static system by just making it bigger (or more "precise" as you prefer to say). They are also transformers, i.e. their entire thing is taking some text as input and pushing out some other text as output. Compilers do the same, they transform C code to x86 assembly for example. They even "optimize" their output by applying certain transformations that don't affect the observable behavior of the program. But this doesn't mean they "understand" the program in any way. I'm not saying we'll never make an AGI. I'm saying that if we do, it will probably be very different from today's LLMs.
@samvirtuel7583 • A month ago
@sUmEgIaMbRuS This is precisely the magic of these systems, and this is why even scientists do not fully understand them; we simply know that properties or behaviors emerge that go beyond what the system is supposed to do. It is not a question of a programmed expert system based on statistics and a knowledge base; it is more of a sort of holographic database formed by a network of information fragments. If you understand what these systems actually do, you will understand that LLMs will soon be able to reason like us. I agree with Geoffrey Hinton and Ilya Sutskever: it's just a question of scale.
@emralcanan9556 • 19 days ago
Nice talk
@shikida • 17 days ago
Great presentation
@seenox_ • A month ago
Very comprehensive and informative, thanks.
@shoeshiner9027 • A month ago
Agree. This video's contents match what I have thought before. 😊
@hcubill • A month ago
Jodie is awesome. Really cool presentation!
@soulsearch4077 • A month ago
I really enjoyed this. I actually gained some extra knowledge, and it kind of aligned with my suspicions about the current state of AGI.
@Hank-ry9bz • A month ago
42:50 Still, it's impressive in its own way. Admittedly not AGI, though (whatever that means).
@sirishkumar-m5z • 22 days ago
Exciting news about META's new open-source model! SmythOS is perfect for integrating and experimenting with the latest AI models. #SmythOS #OpenSourceAI
@toreon1978 • A month ago
30:17 Did you forget embedding, context and prompt engineering?
@campbellmorrison8540 • 22 days ago
Excellent talk, thank you. My primary concern is the suggestion that extreme generalization is at the human level. While I'm sure it is, I'm equally sure it doesn't apply to a very large percentage of humanity. A very large share of humanity needs to be trained, and they too would fail given problems they have not seen before. That suggests to me that, in employment terms, a very large percentage of the population is only at or below the level of current LLMs, and hence very likely to be replaced by AI. As a result, I don't think it's unreasonable to think that AI as it stands is going to be a threat to human work and hence income.

However, my real fear is that as the models get larger and the volume of information input becomes wider, the less anybody will be able to predict what the output from AI will be. While that may not be a problem in a fixed role, the more we give AI control of infrastructure, especially in military and scientific realms, the less we will actually be able to control it, as we will not be able to predict problems before they happen. From what I am seeing, LLMs are not my real concern, as I agree they are about natural language and so suit applications that require language manipulation. But what about systems that appear to have little to do with language, such as things that manipulate images along with geographic data, the sort of thing you might need for a self-driving car or a missile? Tell me I'm wrong, please.
@davidsault9698 • 23 days ago
She's scarily intelligent, combined with the ability to speak, which is not necessarily intelligence-based.
@musicbuff81 • A month ago
Really wonderful talk. Thank you!
@jonchicoine • 19 days ago
If you're new to AI and Python, good luck getting the example notebook working on Windows. It appears to me that more than one package doesn't support Windows.
@AIhyp • 20 days ago
Just wow.
@lancemarchetti8673 • A month ago
Facts
@charlessmyth • 22 days ago
Good talk :-)
@SimonHuggins • 25 days ago
Hmmm. But you can encode lots of data as though it were language; more efficient tokenization of symbolic representations of different modalities will most likely get us a long way too. And finding generalizations from these may well help LLMs speed further towards AGI. I think the origins of LLMs hide their potential outside this space. But yeah, there's a lot of... ahem, attention on this problem.
@generischgesichtslosgeneri3781 • 21 days ago
Don't call it hallucinating; it confabulates.
@BoomTechnology • 14 days ago
42:49 AI solving problems that are irrelevant to us. Like bad humans: "delete human genetic error" 🤖
@ahmedeldeeb6893 • 13 days ago
Good talk all in all, but I found the section on "are LLMs intelligent" less than coherent. The placement of current LLMs on the chart is completely subjective, and the classification of generalization levels was relevant but wasn't really brought to bear. The method of generating a "skill program" is only one preferred way of designing a system and by no means the only way, so why bring it up?
@pmiddlet72 • A month ago
Generating the ultra-generalized model-of-everything doesn't even match the variation in humans that could remotely constitute these vague notions of AGI, so more domain-specific models would appear, to an extent, more solve-worthy. Generating the 'Renaissance Superintelligence' isn't IMO a reasonable goal, for many reasons, a large part of which are philosophical/ethical. What's the point, unless what we're generating is a reasonable model for better understanding ourselves, specifically how the human brain works and processes the world around it (a notoriously complicated area of study)? Conducting scientific research simply 'because we can' (or, more likely, because it brings in the Benjamins), while it may generate some new insights, has more often than not driven historical bad actors (and I'm being QUITE nice here) to engage in some ethically horrendous activities. So the hype in this regard, and the droves of misunderstanding created around it, are important to look at with healthy skeptical eyes. Sometimes 'excitement' over an idea, and over-promising its various facets of value, can run rampant enough to drive this seeming dichotomy between 'accelerationists' and 'doomers', as if there's NO SUCH thing as a spectrum of thought or a middle ground. Is it important to have watchdogs over big tech? You bet. We wouldn't expect any less of watchdogs over big finance, big oil, big watermelon growers; you get the idea.
@MaxMustermann-vu8ir • A month ago
Today I asked MS Copilot, aka GPT-4, to return a list of African capitals starting with a vowel. It returned 20 results, some of them repetitions, and 13 out of 20 did NOT start with a vowel. I'm sure AGI is near 😀
@jan7356 • A month ago
I asked GPT-4o to do the same. It gave back 7, all starting with a vowel, all different. Only one of them wasn't a capital (though it is a former capital and the biggest city in its country). It needed 2 seconds for something I couldn't have done. I'm sure AGI is near 😀
@MaxMustermann-vu8ir • A month ago
@jan7356 I will try it out. But it's still not correct. And you could have done it yourself: not in 2 seconds, but you would have checked whether the result provided by the LLM was correct.
@arnavprakash7991 • A month ago
@MaxMustermann-vu8ir It doesn't break words down into individual letters like we do, so it will struggle on tasks like that. Copilot's GPT-4 is also not as good as the normal ChatGPT GPT-4, and normal GPT-4 is now surpassed by Claude 3.5 Sonnet.
@TheRealUsername • A month ago
Yeah, of course AGI is near, thanks to mathematical algorithms that don't have any of the components required for intelligence. But I'm sure statistical patterns within a data distribution are sufficient to outperform the human brain; never mind that the data has to be mathematically readable (tokenization). You can surely get AGI from text and (encoded) images, even though such a model isn't building a unified representation of the world, isn't capable of extrapolation, abstraction or extreme generalization (and is therefore unable to create novel patterns or be creative), and can't even detect its own mistakes while inferencing. AGI is near? Thanks to mathematical algorithms.
@markmonfort29 • 25 days ago
It doesn't do math, so getting an AI model to do math or counts or sums etc. is not great. That's why it can't properly count how many Rs there are in the word "strawberry". However, it could if it's told to use function calling; that's how ChatGPT can pull in an Excel file and work on it. It turns your query into code and then runs that. Not sure if Copilot can do function calling, but if you type the following into ChatGPT: "Using function calling, tell me all the African capitals that start with vowels", the response is: Abuja, Accra, Addis Ababa, Algiers, Antananarivo, Asmara, Ouagadougou.
@Darhan62 • A month ago
I think there is reasoning going on in LLMs, what can only be called a form of reasoning, and by that token (no pun intended) a form of intelligence. It may be reasoning based only on language, but it is reasoning. Also, what about multi-modal models that can look at a photo and give you a text description of what's in it, or analyze a piece of music and tell you the genre?
@vasvalstan • 28 days ago
They should have added the new research from DeepMind and what they did, not just Kasparov and chess.
@millax-ev6yz • A month ago
Excellent video... although I had to stop my mind from wondering what would happen if I mixed Foster's and Victoria Bitter together, because of the accent. That's my own neural net working against me.
@kevinamiri909 • 19 days ago
GPT-3.5 is not 355B parameters.
@seanys • 20 days ago
“FORTY TWO!” Universal intelligence solved.
@InfiniteQuest86 • A month ago
Thank you. She's one of the few rational people left on Earth. You would not believe the hateful, pure-rage angry arguing I get when I mention that a language model should be used for language, and that if we want to do something else we should use whatever tool is suited to that task. How are we living in a time where people think this is a controversial idea?
@tsilikitrikis • A month ago
Bro, saying that GPT-4, a human-level language understanding system, should only work on translation-like tasks is like saying a man has to work only as a translator 🤣🤣
@TheRealUsername • A month ago
@tsilikitrikis "Human-level"?? Do you think GPT-x can think like you? Can it reason?
@tsilikitrikis • A month ago
@TheRealUsername Why do you ignore the rest of the sentence? It can understand language at a human level; the other things are the results of this understanding. If you cannot distinguish it from a human and it can do work like a human... what is it?
@richardnunziata3221 • A month ago
Test data in the training data is a rookie mistake... I have to wonder whether that is true or whether there is a misunderstanding here.
@aaabbbccc176 • A month ago
This is what I think: you might try asking GPT-4o who won the gold medal RIGHT AFTER the 100m dash at the Paris Olympic Games. If it answers "I do not know", or answers wrong, then you know whether there is a rookie mistake. The answers to MOST of the questions people ask GPT are indeed in the training data, somewhere. GPT is just smart enough to extract (probabilistically) the right answer and put it in well-organized natural-language sentences.
@suisinghoraceho2403 • A month ago
@richardnunziata3221 When you have bots automatically crawling internet data, and OpenAI being very opaque about how their models are trained, this is actually quite difficult to avoid. You can only try to validate whether the test data is in the training data after the fact.
@Theodorus5 • A month ago
OK for folks who know something about the subject, but a woefully inadequate introduction for those who may not.
@NostraDavid2 • A month ago
GOTO is a software development conference, so the target audience was developers, which makes sense.
@sdmarlow3926 • A month ago
Is anyone else distracted by the person taking a pic of every slide? Pro tip: announce where the slides can be found online before starting a talk.
@mortenthorpe • 21 days ago
Notice that you can literally substitute the term AI with statistics, and the content and message remain the same... What does this mean, semantic fun aside? For starters, it means that generative AI delivers statistically predictable results, which is the crux, and reason enough to reject AI for generation: it will never deliver correct solutions! The solutions rely completely on the quality of the input data and on knowing the context; neither is trivial, or achievable in any meaningful sense... You don't even have to be a programmer or technical to know this; the very foundation of AI as a concept relies on these factors. In brief, for anything truly meaningful, AI is and remains useless, forever!
@stratfanstl • 20 days ago
@mortenthorpe EXACTLY. There is no "intelligence" in such systems, only statistical probabilities concerning the next likely token given the "context" of a set of prior tokens ("the prompt"), based on everything the model has been supplied with to calculate its statistics ("its training"). If you prompt such a system for "energy as a function of mass," it might spit out e = mc^2. But since it only represents the likelihood of next tokens given prior tokens, if the world were filled with a million idiots who all believed e = mc^3, blogged about it 20 times per day, and responded to other blogs on the topic five more times each day reiterating their belief, that scientifically INCORRECT content would eventually distort the probabilities in an LLM to the point where the INCORRECT formula would become increasingly likely to appear as output. These models have zero means to weight probabilities based on TRUTH. They are solely capable of weighting based on frequency of appearance. That's not intelligence.
@dennisestenson7820 • A month ago
Artificial general intelligence will be built from components that are algorithmic, systematic, and not intelligent at all. Same as us.
@TheRealUsername • A month ago
I think you should learn some biology; our brain is incredibly complex, even more so the neocortex, and it couldn't be farther from mathematical algorithms. All ML models are statistical pattern learners: they can only learn patterns, rather than the actual data, and they require the whole dataset to be mathematically readable.
@marccawood • A month ago
Sorry? At 11:20 she says you can use an LLM to generate training data?? I'm gonna call BS on that claim. If you're not learning from real-world data you're pissing in the wind.
A month ago
@marccawood What she meant was that NLP models are really good at extracting the parameters you need from raw texts/reports, something that can be very time-consuming when setting up and training ML models.
@pristine_joe • A month ago
Just thinking: LLMs have been a subject of research for the last few decades and are limited by the computing capacity our technology has so far produced. Human intelligence is backed by training through evolution, which has perfected the art of passing itself down to the next generation through DNA, the community, etc. Could we be in the very early stages of trying to replicate our consciousness, and maybe it will eventually emerge if we overcome the limitations we currently face 🐝🌻
@TheRealUsername • A month ago
LLMs and all ML models are pattern learners and only work when the data is mathematically readable (tokenization), whereas biological intelligence is firstly omnimodal and secondly relies mostly on abstraction, constant reasoning, intuition and continual learning (neuroplasticity). LLMs aren't a form of intelligence; they're sophisticated mathematical algorithms.
@afterthesmash • 15 days ago
"I actually got my PhD in psychology." Immediate translation, from her next comment: I'm a widget produced by the Concern Industrial Complex. It's sad that the world we now live in automatically translates "watching with a lot of concern" into "oh, you must have a recent humanities degree", but there it is.
@afterthesmash • 15 days ago
Having now finished the video: she did a good job factoring the world as a practicing data scientist after this brief but worrying moment.
@MarkArcher1 • A month ago
I enjoyed the talk but a bit of a red flag that the speaker isn't familiar with the difference between AGI and ASI.
@nikjs • A month ago
Sentience is not a prerequisite for the destruction of civilization; for that, the more primitive the better.
@afterthesmash • 15 days ago
Intelligence is _not_ controversial. It's divergent. It's the same as religion: there is no controversy between Christianity and Islam, but there are definitely major points of divergence. To call perspectives on IQ controversial rather than divergent gives far too much voice to the squeaky wheels.
@afterthesmash • 15 days ago
What I just did there is a mode of generalization, that of rising above brainless cliché, that I would dearly _love_ to see manifest in my future chatbot companions.
@janicewolk6492 • 3 days ago
Do you think the emergence of these ideas is leading to the significant drop in birth rates worldwide? As in, who wants to expose their child to neural nets? Maybe this partially explains the rise of right-wing anti-intellectual political movements? As in, who benefits from all of this? It certainly seems as if this is a lovely intellectual game that, like social media, has significantly serious consequences. I have a Master's degree in Slavic Linguistics. Am I now extraneous? Am I just supposed to say, oh well? Is the speed worth the social upheaval? It is no doubt the response of espousers of these ideas that determining outcomes isn't their responsibility. By the way, who watches computer chess games? I love the way the speaker refers to "humans". Who is her constituency?
@askurdija • A month ago
GPT's performance was measured on various intelligence benchmarks and tasks outside its training data. She doesn't explain what's wrong with these measurements, and she doesn't give a concrete proposal for how to measure intelligence better. Human intelligence is measured with tests that are similar or identical to the ones given to LLMs. Instead of engaging with this body of research, she just shows an anecdotal counterexample (the Codeforces problems).
@afterthesmash
@afterthesmash 15 days ago
"it's been an overwhelming flood" Really? An overwhelming flood of sensationalist headlines that you could tune out completely if you wished, and that barely impacted anything in your day-to-day life? At least not yet. And possibly not ever.
@afterthesmash
@afterthesmash 15 days ago
Having now finished the video, to Jodie's credit, this was a passing turn of phrase and the rest of the talk never went here again.
@irasthewarrior
@irasthewarrior 18 days ago
AI is sophisticated degeneracy.
@rob99roy
@rob99roy 16 days ago
This presentation is going to age very badly. Let's not underestimate how quickly AI will progress. I suggest you revisit this talk in a year.
@Klayhamn
@Klayhamn 1 month ago
Anyone who has spent enough time with GPT-4 and GPT-4o would easily know the presenter is wrong. Good enough LLMs are capable of REASONING and not just "text generation". I have myself crafted arbitrary problems that require math and logic to solve, and had GPT-3.5 fail at them and GPT-4 SUCCEED in solving them, and there is no way it has "seen this problem before" because I made it up on the spot. So I think the title of "emerging AGI" is perfectly fine. You have no idea what the path to AGI would be, and just like we "unintentionally" discovered LLMs by trying to solve something else, we might end up creating AGI out of LLMs without directly trying to instill some kind of "general intelligence" into them. The main thing I believe LLMs are currently missing is the ability to LEARN, i.e. dynamic plasticity in real time. They also lack memory, for the very same reason. So I think the KEY to achieving AGI would be to grant them memory and plasticity (in some form), and this would probably be the stepping stone that takes us to AGI levels. Even if they don't START with AGI capabilities from the outset, they might EVOLVE to have AGI capabilities, just like babies grow up and learn more about the world and gain skills and mental faculties.
@muhammeryesil3331
@muhammeryesil3331 1 month ago
Definitely agree with you
@ASmith2024
@ASmith2024 1 month ago
lol
@jpphoton
@jpphoton 26 days ago
hmmm
@sblowes
@sblowes 19 days ago
Wow, this is so off! AGI didn’t become ASI because it’s sexier, it’s a different class of AI. LLMs have a base level of reasoning properties already at the ChatGPT-4 level, and the _general consensus_ among current leading AI researchers is that we expect a higher level of reasoning once we multiply the size again.
@nikjs
@nikjs 1 month ago
Where's the code?
@tsilikitrikis
@tsilikitrikis 1 month ago
It got 10 problems from the training period and got them all right, and then got 10 same-difficulty problems from after the training cutoff and got them... all wrong?? No way this is right, guys
@tsilikitrikis
@tsilikitrikis 1 month ago
Also, the understanding of language is much broader than cracking the game of chess. You learn something about the real world, so it brings you closer to a general entity!
@jan7356
@jan7356 1 month ago
I am sure this is completely outdated. This was done on the first version of GPT-4. Coding abilities and generalization abilities have gotten much better with later versions.
@InfiniteQuest86
@InfiniteQuest86 1 month ago
Lol I hope you are being sarcastic. That's always been my experience. These companies are lying about training on the test data. It's actually pretty sad that they don't score perfectly having trained on it. That's pretty pathetic actually.
@InfiniteQuest86
@InfiniteQuest86 1 month ago
@@jan7356 On anything I've asked GPT-4 and GPT-4o, the original 4 was far better. So this statement doesn't really hold.
@tsilikitrikis
@tsilikitrikis 1 month ago
Bro, you understand nothing of this technology. Read the paper "Sparks of AGI". You say that they train them on test data 🤣🤣. I hope you have no relation to software
@ggrthemostgodless8713
@ggrthemostgodless8713 16 days ago
GROK will rule them all. ... Elon has been at it for two years only... and look!!
@PACotnoir1
@PACotnoir1 16 days ago
It's interesting to see that she can't figure out that compression of information into a trillion parameters constitutes an elaborate form of intelligence, alien to us but still with "cognitive abilities", and that reducing it to simple math correlations is like reducing human intelligence to chemical reactions. It just forgets that emergent properties arise in complex systems.
@tyc00n
@tyc00n 24 days ago
First half was great, second half was just terrible
@odiseezall
@odiseezall 1 month ago
The speaker presented 0 (zero) evidence regarding the timeline of increasing generalization of AI. She's saying "there's a long way to go" but there is no proof to support that conclusion.
@larsfaye292
@larsfaye292 1 month ago
@@odiseezall Because it's self-evident...
@TheRealUsername
@TheRealUsername 1 month ago
I believe LLMs can mimic a certain form of understanding of certain parts and aspects of their training data. It won't achieve AGI, but it can still be useful for certain tasks.
@ASmith2024
@ASmith2024 1 month ago
bananums.
@Peter.F.C
@Peter.F.C 23 days ago
What we have here is a lazy person who doesn't even do basic research and doesn't know what they are talking about. Take for example her description of the 1997 Kasparov match against Deep Blue. In that match, Kasparov did lose the second game. At least she got that right. But he did not lose the third game and he did not lose the fourth game. Both those games were draws. He did lose the match and that was despite at that point in time still being stronger than the chess engine. This is information on the match that she could have easily checked if she'd bothered. The chess engine's play had had a psychological effect on him which is why he lost the match despite being a stronger player. But it was very well understood at the time that the chess engine that had defeated him had the intelligence of a cockroach. As it is now, chess engines are far beyond the strongest humans, but they still only possess a cockroach level of intelligence. And are irrelevant anyway in discussion of the capabilities of these LLMs. This talk sheds no light on the subject matter.
@user-wr4yl7tx3w
@user-wr4yl7tx3w 1 month ago
Basic info
@raiumair7494
@raiumair7494 1 month ago
Nothing new at all, in fact boring to some degree, except a short bit on why current LLMs are not on their way to AGI, which I agree with
@antikras666
@antikras666 1 month ago
babka