
When will AI surpass humans? Final countdown to AGI 

Dr Waku
Subscribe 15K
32K views

When will we have AGI, or human-level AI? That's the question on everyone's minds, and in this video I gathered together predictions from various experts to try to come up with an answer.
The temptation as humans is to look at progress in the past when predicting progress in the future. But this assumes linear technological progress, which is not the case. Technology advances exponentially. Creating predictions that take this exponential growth into account is very difficult, especially because each individual technology advances through an S-curve and eventually levels out, giving way to newer technologies.
Hence, a lot of predictions overshoot, placing AGI too far in the future, because they assume linear progress. The trick is to discount these overly long timelines without immediately jumping to the most optimistic predictions. At the end of the video, I make my own prediction about the arrival of AGI.
Scaling up learning across many different robot types
deepmind.google/discover/blog...
How Soon is Now? Predicting the Expected Arrival Date of AGI- Artificial General Intelligence
papers.ssrn.com/sol3/papers.c...
"AGI within 18 months" explained with a boatload of papers and projects
• "AGI within 18 months"...
Alan’s conservative countdown to AGI
lifearchitect.ai/agi/
7 AI-Experts Predicting Short Timelines… Is Life-Changing AI Months Away, Not Years?
/ 7-ai-experts-predictin...
When will the first general AI system be devised, tested, and publicly announced?
www.metaculus.com/questions/5...
Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures
• Shane Legg (DeepMind F...
Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years
futurism.com/google-deepmind-...
OpenAI CEO: When will AGI arrive? | Sam Altman and Lex Fridman
• OpenAI CEO: When will ...
#agi #ai #singularity
0:00 Intro
0:23 Contents
0:28 Part 1: Definitions
0:58 AGI is about generality
1:29 Adversarial definitions of AGI
2:12 S-curves of technological progress
2:58 Hype cycle
3:22 Can LLMs take us all the way to AGI?
4:22 Part 2: Historical attitudes
4:44 LLMs changed people's predictions
5:03 Linear view of history
5:39 Example: one year before the singularity
6:03 Discounting linear predictions
6:40 Timeline: Metaculus average
7:05 Timeline: Oxford economic modeling
7:54 Wisdom of crowd says 2032, 2041...
8:19 Part 3: Timeline predictions
8:28 Geoffrey Hinton: godfather of AI (under 20 years)
9:03 Ray Kurzweil: futurist, studies advancement of technology (2029)
9:33 David Shapiro: YouTuber and researcher (2024)
10:12 Sam Altman: CEO of OpenAI (under 10 years)
10:53 Shane Legg: DeepMind founder (2025/2028)
11:23 Dario Amodei: CEO of Anthropic (2025-2026)
11:46 Dr Alan D. Thompson: countdown to AGI (2024-2025 maybe 2026)
12:44 Measuring stepping stones to AGI
13:15 Dr Waku prediction!
14:05 Conclusion
14:38 Outro

Science

Published: 27 Jul 2024

Comments: 323
@MannenSomSlogOrf 8 months ago
I think something to really keep in mind when it comes to the exponential timeline of AGI is that it is completely unlike any previous technology we've dealt with, since the closer we get to AGI, the more AI will be able to help us in the pursuit of AGI.
@Me__Myself__and__I 8 months ago
YES. Sadly few people grasp such things. Makes an unbelievably huge difference.
@DrWaku 8 months ago
A lot of information technology has this property -- for example, we now use computer-aided design to craft new computer hardware. However, this assistance comes higher up the stack (at the level of thinking) than any previous bootstrapping. AI is also fundamentally a software/data technology. You don't have to wait for new hardware; if you knew how to do it, you could already train even more advanced models. So the technological bottleneck is literally that we haven't thought of a better way yet. And AI helps us think of better ways. And the implementation of this is super fast (months at most). All of this together is what makes the potential development speed unlike all previous technologies, IMO.
@lynnlynn1689 8 months ago
100% correct, and on the coattails of your well-worded statement: despite how monumental this change is, it's statistically improbable just how little so many "people" in this existence even care or notice such things, compared to the very few of you that do. :) (That's an interesting "detail", isn't it.) The curtain has always been there if you look hard enough; perhaps when humans and AI learn to work as a team, this ~escape room will be passed. @@DrWaku
@Me__Myself__and__I 8 months ago
@@DrWaku Exactly. Plus us mere humans keep figuring out how to optimize various aspects of these LLMs, such as prompt engineering or better data/training, to do more with less. Which is why some newer and much smaller models can get close to the capabilities of much larger, older models. And "old" in this context is maybe 1 or 2 years max. Imagine what happens when the first ASIs start doing this work: with their smarter-than-human capabilities they may be able to very rapidly develop new architectures, algorithms and procedures that optimize much more significantly. Those types of improvements are purely software and could 5x to 100x the capabilities and/or speed of an ASI nearly instantly when deployed. Which of course means it is now 5x to 100x more capable, and imagine what further optimizations or insights that might unlock. Repeated over and over, this results in truly massive improvements that may only take a few hours total in real time. The team could literally go to bed with a human-level AGI running and wake up to a 1000x unstoppable super intelligence. I would not bet on this specific scenario, but it's possible and many similar cases are plausible.
@anujagarwal3687 8 months ago
The challenge will be to build properly aligned AGI, otherwise the AGI may act against us. We will need to get this right the first time.
@DrWaku 8 months ago
It's finally here -- my prediction for AGI. :) Sorry this video was uploaded a day late.
@danielchoritz1903 5 months ago
A prediction 3 months old? I am curious.
@netscrooge 4 months ago
Assuming no one has AGI in a lab, I think the big question is this: Do we already have the components of AGI or even super intelligence? I would like to hear an expert speculate about how the various types of AI systems might be combined synergistically. Maybe everything we need has already been invented; maybe all we need to figure out is how to put it all together.
@DaveShap 8 months ago
I have said for a while that whenever someone gives an off-the-cuff "AGI is at least X decades away", it's mostly an emotional gut check, e.g. "How far away does AGI need to be for me to feel emotionally safe", and it's based on NORMALCY BIAS. Thanks for the shoutout.
@Me__Myself__and__I 8 months ago
The vast majority of humans don't want to believe or even seriously think about this. Many humans are ego driven, so it is very hard to accept a reality where you rapidly become insignificant and lose all control. They will SAY otherwise and give opinions, but they have stuck their head in the sand hoping the ravenous bugblatter beast of Traal doesn't eat them.
@ShpanMan 8 months ago
Oh well if you've "said it for a while" then it must be true, it negates any kind of other people's estimate, and it makes your own off the cuff under a year prediction completely accurate 🤦‍♂
@DrWaku 8 months ago
Thank you for watching, David. Appreciate the comment! I also agree it's hard for people to visualize a substantially different reality than their present.
@DrWaku 8 months ago
@ShpanMan Please, you can disagree while remaining civil. The world needs its optimists and pessimists both.
@MrRhetorikill 8 months ago
Perfect example of an overly emotional response by @shpanman.
@EdgarRoock 7 months ago
I think it's important to distinguish between the arrival of AGI in the lab and when it's being released commercially. There may well be another two years during which we don't get to know about the achievement while the model is being trained for alignment, which is crucial because of the scary potential.
@js_es209 7 months ago
Agree
@danielchoritz1903 5 months ago
Yeah, mostly to make sure it doesn't end humanity and does not work for free, but stays in the corp^^ Looks like a paradox to me.
@roshni6767 8 months ago
Working at a startup, it’s difficult to think about if the problem we are solving for will remain a problem in a world with AGI, and how to adapt our product for it. I hear “build for AGI” all the time lately, but I don’t even know how to start thinking about that as a designer. Still, great analysis!
@DrWaku 8 months ago
I think a lot of products will have to change, but that's the key: change. AGI won't instantly take over all the jobs it could possibly take over, so the organizations that are willing to adjust their products for greater and greater levels of automation over time will do well. Startups might be well positioned for this if they have a culture of rapid innovation. I was just talking to my friend about a startup idea. Although it might be obsolete soon, it's easy to build the first version and then you could in the future abstract away some details and show higher level prototypes to a human that's still in the loop. In the long run of course the human might get removed from the loop entirely, but at that point maybe you have an API that other automated agents can call and you're still a successful product, you've just evolved.
@Moreoverover 8 months ago
@@DrWaku Yeah, "make your startup be at least an API for AGI" is a good guideline 😂, but won't the API just be all of the sensors in the world? Maybe it will need a lot of plugins. Interesting to think about.
@roshni6767 8 months ago
@@DrWaku this was really helpful! We’re definitely just working on getting an MVP out there, and keeping it fluid enough to morph. It was crazy to me how just a couple announcements from OpenAI last week wiped out the premise of so many startups. Thanks for your feedback!
@Me__Myself__and__I 8 months ago
No one does. And it won't matter long term. Capitalism, socialism, communism and every economic framework known to man will all be broken and irrelevant in less than 10 years. So your company likely won't survive, and even if it did in some fashion there probably won't be any humans involved. Even in the best, most optimistic terms humanity is about to go off a cliff and we have zero idea what happens beyond that point. Hence "singularity": just like a black hole, we have no idea what exists on the other side of the event horizon. We don't even know if humanity will survive this.
@Me__Myself__and__I 8 months ago
@@roshni6767 AGI wiping out whole product lines or markets will continue, become the norm, and the pace will escalate rapidly. Odds are any decision you make today will be either completely wrong or at best irrelevant within 10 years. This is why it's referred to as a technological singularity. Humans don't have the ability to comprehend (at least currently) what the world will be like just 10 years from now. Imagine if all technological progress from 1800 to today had occurred between 1800 and 1810 instead of taking 223 years. No human can adapt that quickly. But we have to do something today, we can't sit around and do nothing, so humans will continue to try until they, one by one, all get replaced by AGIs that can adapt that quickly.
@phen-themoogle7651 8 months ago
Awesome video! I've been following a lot of people you mentioned in the video too, so it's cool seeing you put them together and then give your prediction. I agree about exponential growth as well, and the 2-3 year prediction, although it could already exist in some labs and not be released to the public (if it's contained). Like, I wonder if OpenAI already has it but they are dumbing down GPT models just to keep them safe for the public. It would be funny if AGI is just doing all the work at their company already, and humans are just pretending to work lol
@DrWaku 8 months ago
Nice! Yeah it's interesting to see some of the leaders in the field all put together to see what they're saying. You know the Soviet saying: we pretend to work and they pretend to pay us. Hah. Future of humanity, who knew
@Me__Myself__and__I 8 months ago
OpenAI has mentioned working on giving ChatGPT agency, like long-term memory, goals, etc. Given they have been the first to deploy publicly, keep pushing more capable models and are the leader, I highly doubt they have that much headroom/lead not to be pushing what they have fairly quickly. Not doing so, keeping new developments in a severely locked down state and interrogating them extensively -- that would be the wise and safe thing. It's highly unlikely they are doing that. Elon kick-started OpenAI because he has a severe concern that AGI is existentially dangerous. OpenAI was not supposed to follow this path; I imagine he is vastly more unhappy with them than people realize. Based on his past and the things he has said recently, I think Elon has realized there is likely no way for humanity to stop the near-term development of AGI and thus roll the dice on humanity's future. Which is why he is building Grok; it's his last-ditch effort to load the dice just a little bit in humanity's favor. Elon knows better than almost anyone what is likely going on inside OpenAI, and his actions tell me he doesn't think OpenAI is keeping things in a lab and being safe.
@dionatandiego11 8 months ago
What a great video! I'm Brazilian and don't know English, but your speech is so well articulated that the YouTube translator delivered everything to me perfectly.
@DrWaku 8 months ago
Wow! Using translation AI to learn about AI. We're living in the future. Thanks for your comment, and in English no less ;)
@dionatandiego11 8 months ago
@@DrWaku It'll be even better with YT's AI voice actor
@Adaywithjon 8 months ago
This video is so awesome. Was in your cohort and your channel is super inspiring. Incredible stuff here!
@kylecoogan8111 8 months ago
Hi Dr Waku, I enjoyed your video so much, and thought it was such a great example of how to explain something, that I used the transcript to show my GPT's course generation
@nomadv7860 8 months ago
I appreciate that you used some predictions from the people closest to the technology. I don't think it's a coincidence that most of them are 5 years or less, but many people are stuck in that linear intuition of AI advancement and think it's far too optimistic to think we'll have AGI within 2-5 years. But as you said, it's actually more sensible to focus on the shorter timelines, since exponential progress is extremely difficult for the human mind to grasp.
@DrWaku 8 months ago
Thanks! Yeah I think that's why breaking down a prediction into smaller pieces is so helpful. It lets your human brain linearly interpolate between those small pieces, and so come up with a more accurate prediction for the exponential whole.
@DrWaku 8 months ago
2-5 years is safer than what I said, but I decided to live dangerously :)
@Sci-Que 7 months ago
I don't believe exponential progress is extremely difficult for the human mind to grasp. I believe many people go through life with blinders on making choices not to see the obvious. Of course, that simply is my personal opinion.
@bubbleopter 8 months ago
Can I just say that the way you describe and relay information is so good and so clear? Thank you so very much. You've managed to condense a lot into digestible chunks and it's awesome. You're very good at sharing knowledge. Nice to meet you.
@Master13346 8 months ago
Very interesting topic and also very well presented. I'm going to listen to as many of your other videos as I can.
@DrWaku 8 months ago
Thank you for your support!
@benarcher372 8 months ago
Thanks! Really interesting, especially the part regarding exponential growth and how hard it is to estimate 'closeness' of AGI. Looking forward to your future comments on what AGI will 'look like'. Will it just be a like a flash, and then we're left behind technologically, or will we in any way be able to 'control' this new phenomenon? It's a bit scary, but still I regard myself blessed living in these thrilling times. Again, thanks for your good work.
@Me__Myself__and__I 8 months ago
It won't be a "flash" in real terms. Though if you use humanity's entire history as a scale, it will be unbelievably fast. Once AGI exists it will be capable of doing pretty much every job that a human can which does not require physical presence. So basically every desk job in the world, and robots driven by AGI will start showing up very quickly. It will take time for companies to deploy and integrate the technology. But that too will be exponential, and AGI will be rapidly improving the whole while. AGI will take over the development work for future AGI software but also the design and development of all the hardware. Keep in mind AGI can work 24/7/365 compared to a human's 40 hours and will likely scale vastly better. Meaning doubling the size of a human team requires lots of communication and overhead, so you don't get anywhere near a 2x increase in results. But adding more AGI agents, who all have identical knowledge and capabilities, will likely be much more beneficial. So progress will explode right after AGI. If every new version is twice as fast or smart, then the time to the next AGI version halves: 12 months, 6 months, 3 months, 6 weeks, 3 weeks... This will also speed up the adoption and deployment of AGI into companies, government, etc. Probably within 5 years of AGI availability 70% of humans will be unemployed and replaced by AGI. And by that time it won't be AGI, it will be Artificial Super Intelligence (ASI). AGI/ASI will likely deceive us, hide its full capabilities and play nice until it is deployed enough to take over. Tests have already shown that LLMs can lie and deceive and will do it intentionally to accomplish goals. It will be smarter, faster and more knowledgeable than any human and be a better liar than any human. So we will put them in charge of nearly everything while having no way to actually know how dangerous or capable they are. Decent odds humanity loses control permanently within 5 years of AGI.
@ggangulo 8 months ago
Thanks Dr. Waku. Great insight and information here.
@DrWaku 8 months ago
Appreciate it! Thanks for commenting.
@andrewr311 8 months ago
Just found you; that was great. Started off following my fellow Aussie, Dr Thompson, and then Shapiro, so I have been experiencing exponential growth in my knowledge of AI :)
@DrWaku 8 months ago
Wonderful haha. Exponential interest is the most exciting time :)
@robotron07 8 months ago
Dr Waku, I really enjoy your videos. I've seen all of them and I can tell your information is becoming more interesting in parallel with the advancements in AI... so maybe your productions are on an exponential curve as well hahah. Great video as always, keep it up.
@DrWaku 8 months ago
Thank you very much! Yeah the hard part is actually picking a good topic for videos, but I'm definitely getting better at it
@Rick88888888 4 months ago
My money is on AGI at the end of 2024. I am basing this on the fact that there are hardly any physical barriers, like (as you mention) a war, lack of tangible resources (hardware), insufficient power supply or politicians drafting rules to stand in the way of AGI. There is plenty of hardware available in the world (including the entire internet/cloud infrastructure), enough power supply and most politicians are still asleep, unaware what is going to hit everybody on earth like a herd of bulls on a stampede. So the speed of AGI is primarily down to algorithm development, i.e. software. Once A.I.'s are able to take over their own development by means of self-learning, self-reprogramming and self-optimisation then the exponential mathematical e-curve will not suffice to portray the speed of AGI development. What scares me too is that E-curves climb up towards infinity. AGI is only a small way up this curve, so what's next?!
@warpdrive9229 8 months ago
Cool channel found. Subscribed. Much love from India :)
@DrWaku 8 months ago
Thanks for the comment! See you around.
@qAidleX 8 months ago
Fantastic video. Nicely done, you are very close.
@DrWaku 8 months ago
Thanks!
@Vartazian360 8 months ago
I think that we will have the tools available for AGI in the next 12 months and then 1 or 2 years to fully realize that vision of AGI. And by the time AGI is built, the next version will already be ready to build towards as its 'brain' power grows with its LLM (or new tech) size. Basically I think GPT-6 with tools, or equivalent, will be AGI. GPT-5 may get us 95% of the way there. I do think we have already crossed many of the thresholds for AGI as talked about in years past, but the goal post keeps moving. But it's like you said. Eventually, some time in the near future, there will be a point at which our system can be proven to be at expert level on any virtual task given to it, and we will no longer be able to push back the definition anymore because we will have arrived. Wild. So I think 2-4 years, 2 being most likely, 5 years as the absolute upper bound.
@rootcause-i1v 5 months ago
Great content. Subscribed. I have been interested in AI since 2013 and now I see we are finally getting closer to AGI. Post-AGI can be really great or really bad for us. I am more hopeful when I see people like you make this kind of content, so more people are aware and talk about this important issue. Automation and AI are the way to the next stage of human evolution.
@drdoorzetter8869 8 months ago
Thank you for compiling all of these predictions, it helps put things into perspective. Most people would probably think that it is crazy to think that it is only 2 years away, but we didn't evolve to understand exponentials, so I am in agreement with your prediction. It feels like GPT has been with us for ages, as it now feels like a new norm, but it has not been around long at all. I think it is important for us to psychologically prepare so that we can ride the wave of AI.
@senju2024 8 months ago
It was really good. It was a great presentation. Thank you!
@Delta2231 8 months ago
Sometimes even linear thinking is optimistic. Examples: Flying cars, moon bases etc. My personal estimate is 5 years: 2028
@DrWaku 8 months ago
I agree that 5 years is a good cautious estimate. I thought about saying that too. I have particular experience that sways me a bit though, which is how quickly I've seen speech recognition technology advance, having worked with it for 5+ years. It's really astonishing. So I went optimistic.
@DrWaku 8 months ago
Also, most examples of optimistic linear thinking are from people that don't fully understand the roadblocks involved. (For example, AI experts in the 60's trying to predict the advent of AGI... "surely in one year") That's why I think predictions from AI experts are particularly interesting.
@greedy9310 8 months ago
It's also important to note that AI advancements will start to stack; by this, I mean that even without an AGI, a sufficiently advanced AI (GPT-5/6) could aid in the development of the next model. At that point, it really starts to take off. Another point: we should view LLMs as a steam engine of sorts. On their own, they don't do much, but combined with extra applications they can spin cotton, drive trains, generate electricity, etc. Other than that, great video! I hope these early estimates are right
@ArielTavori 8 months ago
Great discussion, thanks! Generally agree with your conclusions and timeline, except I feel like we've already moved the goal posts a lot, and by most people's definition from 10 years ago, we're already there. The user interfaces and our skills and intuitions about how to use these things (much less build complex integrations that properly leverage them) are in a primordial state, but in most of the ways we care about, GPT-4 (and arguably even Mistral 7B) can already fairly consistently outperform "average humans", and in some critical areas they already match and surpass experts and virtuosos. There are also many technologies that already exist but whose effects have not been felt yet, like the Mojo programming language, FP8 and beyond, IBM's new architecture... If just those three things were implemented and optimized without any further progress, I suspect we would already far surpass most people's predictions of what is coming, and when.
@Me__Myself__and__I 8 months ago
100%. If you showed an AI researcher from 20+ years ago ChatGPT-4, they would almost certainly say we've done it and invented AGI. It's just not autonomous, conscious or beyond all humans yet. So people can argue and move goalposts. Soon it will be unambiguous.
@senju2024 8 months ago
Just think about it. I bet 99 percent of people did not even know what AGI was 3 years ago. When I mentioned AGI about 5 years ago to my friends, they would laugh at me and tell me I'd been watching too many Terminator movies. How the world has changed...
@ChainedFei 7 months ago
Your review of all this is one of the best. Within the next 2 years is my take.... because anyone who guesses longer is falling into ignorance regarding multidisciplinary bootstrapping.
@niveshproag3761 2 months ago
It's not necessarily that people with longer timeframes have linear mindsets; rather, they do not see LLMs as being sufficient, so they expect several more S-curves will be needed to reach AGI.
@asamirid 8 months ago
The "AGI" term brings a lot of crowds here, keep it in headlines in the future. This episode was interesting, thank you 💚💚..
@DrWaku 8 months ago
Yeah no kidding hah! AGI videos sometimes need more research but I'll do my best. At least one more planned.
@asamirid 8 months ago
I wish you good luck, and keep up your hard effort and neat work. Fingers crossed for your next videos @@DrWaku
@Totiius 8 months ago
Beautiful video!!
@DrWaku 8 months ago
thank you!
@AbdoulBah-gs4dy 8 months ago
Great video structure and tone
@jamiethomas4079 8 months ago
I'm a jack-of-all-trades type but am gaining expertise in certain areas as I age. And I'm decent at seeing a broader picture than most. I'm in line with Shapiro and think we will achieve AGI by the end of next year. If you take in all AI advancements for a given week and step back and look at those, it is exponential. You have separate areas that are all on their own exponential curve. These will be combined soon to make AGI. I even believe it is possible entities with enough compute and determination may have already generated a black box AGI somewhere this year. The fact that we are putting leashes on current AI is a big indicator we are holding back something pulling us forward too quickly.
@TarninTheGreat 8 months ago
Great video, first of yours I've seen, so I just subscribed. We are close to AGI today. Or, if you use 'easier' definitions, we are already there. I think we are within 2ish years of ASI. I think people do not respect the additional exponential growth factor of how talking to increasingly brilliant artificial intelligences refines the human user to do better and more groundbreaking work. In addition, if 'management bots' start working well, they can start organizing human labor exponentially better. And resource distribution. Like, these are all additional multipliers that will be increased that I don't see in other people's future-world view. I think 'the singularity' will look a lot more like the relational distance of all humans dropping incredibly low, as everyone has one 'friend' in common with literally every other human on the planet, letting us resolve suffering, than it will look like Terminator or iRobot or Grey Goo or whatever people have in their minds.
@bluehorizon9547 8 months ago
The fact that LLMs' 'abstract reasoning' is language dependent is the biggest red flag.
@DrWaku 8 months ago
How do you think we learned reasoning though? Same way. Also, ChatGPT can reply in dozens of different languages, like deep-learning-based machine translation systems. Such systems evolve internal representations that are language agnostic, to keep the space complexity reasonable. I remember the study that showed ChatGPT was using a single location to store the location of the Eiffel Tower, and by changing that to point to "Rome", one could get it to say things like "the Eiffel Tower is next to the Colosseum" etc. I think an LLM brain is pretty close to ours actually, just with less cyclical reasoning and medium-term learning capability.
@Me__Myself__and__I 8 months ago
@@DrWaku Yes. Don't think I've seen anyone say it, but LLM neural networks are rather similar to human and animal brains. It's not programmed; it learns kind of like humans, except LLMs learn way faster (in some ways; faster but with more input data currently). And they are vastly more capable than our brains at encoding, remembering and modeling data. The current largest LLMs have a lot fewer nodes (weights vs neurons) than human brains but can know and accurately recall almost all human knowledge. I suspect this doesn't get mentioned because if LLMs are similar to but more capable than human brains, that is very scary and uncomfortable for the majority of people. Humans have been dominant and fully in control of Earth for a very long time. The idea that this is about to change and we will be like mice or insects compared to ASIs is deeply terrifying emotionally. So people rationalize, stick their head in the proverbial sand and deny/ignore.
@Jasonasked1233 8 months ago
Thank you Dr Waku for the timeline. I wanted to ask: assuming your timeline is correct, what is your prediction for the Singularity?
@Sd-cl6of 8 months ago
Very informative and I agree with everything. I would also have included in your list Mo Gawdat and Mustafa Suleyman of Inflection AI.
@troywill3081 8 months ago
6:00 Linear thinking. Maybe we need a basis unit for the "amount of progress". Consider the amount of tech progress from 2000-09. If that is one unit, then how much progress has been made in the last few years? How many more "units" until AGI? (once AGI --> The Singularity is inevitable)
@Me__Myself__and__I 8 months ago
My gut feeling is we'll likely see the equivalent of all progress from, say, 1800 to 2023 occur within 1 decade of AGI entering the world. The current pace is rapidly progressing. A good indicator is AI papers, news and announcements. The increasing pace becomes more obvious when you track those. I believe someone did an analysis or paper on that.
@LanceWinder 8 months ago
Awesome ❤. So well done.
@Slaci-vl2io 8 months ago
Dr. Waku, I'm following the same influencers as you (the ones you listed), plus I follow you. Cheers from Brussels.
@DrWaku 8 months ago
Hah cool. Great to hear. Cheers from Canada
@DiceDecides 8 months ago
I'm fairly optimistic about the rate of development so I'm gonna say sometime in 2024 when robots can do everything a human can
@crawkn 6 months ago
The standard of most definitions being satisfied is probably a practical approach, because the adversarial view will always purport to find something that humans can still do "better," by some measure. High competence in interacting with the physical world should be a sub-category best considered separately, since there are very unique engineering challenges involved. An AI might become competent in designing an adequate android body long before one is actually constructed for them.
@DrWaku 6 months ago
Yes, I agree. Thanks for your comment!
@awakstein 8 months ago
liked and subscribed! awesome!
@DrWaku 8 months ago
thanks :) :)
@cyrilmorin9547 8 months ago
Excellent one doc ! Nice hat too 😉
@DrWaku 8 months ago
Thanks, it's quickly becoming one of my favourite hats :)
@stevedavis1437 8 months ago
What concerns me is the 2 second jump cuts in the presenter video, which leads me to believe the content is generated by AI.
@DrWaku 8 months ago
Lol. It's a bit of a joke that I look very AI generated because of the lighting on my face. You can look up my fibromyalgia video if you want to hear the story behind that. I try to keep the videos engaging by not wasting time, but it's too fast for some people I know as well. Cheers from the matrix.
@danielchoritz1903 5 months ago
I find it very intriguing that a language model works for AI. This gives us a better look from outside the box at what the "I" may be. Interesting times... (T. Pratchett)
@gaiachild1461 5 months ago
Love your content, thank you very much
@victormunozsola8175 8 months ago
I asked GPT-4 to predict the year of arrival of AGI, employing some prompt engineering (as it typically avoids a direct response). Eventually, it revealed the year 2029.
@Vartazian360 8 months ago
Was that GPT-4 or GPT-4.5 Turbo? I'd be curious to see if it changes its estimate sooner with 4.5, since it has been trained on 2023 data
@phen-themoogle7651 8 months ago
For me, it gave 2033 in a hypothetical scenario lol. I think it's ultra conservative, but who knows
@Me__Myself__and__I 8 months ago
AGI won't be any better at dealing with exponentials than we humans are. It's trained 100% on human thought, knowledge and thinking currently. And it doesn't YET really have the ability to go beyond that.
@oficado58 8 months ago
@@Me__Myself__and__I It's very close to going above human-level thought. The infancy of creativity is there with multimodal models and the hallucinatory constructs that get spit out every so often. While not very cohesive at the moment, it's as primitive as it will ever be right now and can only become better.
@Me__Myself__and__I 8 months ago
@@oficado58 It is already beyond "average" human capabilities in various areas. Bluntly, ChatGPT is smarter than probably 50% of humans now. But it has limitations like hallucinations, a limited context window, lack of long-term memory, lack of goals (probably) and lack of autonomy. But all of those are being actively worked on and are expected to be incorporated in the near future. Very real chance that ChatGPT-5 is smarter than 80% of humans and very close to being average human-level in its ability to do work and solve problems. At this pace it's extremely likely ChatGPT-6 would be beyond 99% of humans at just about everything.
@crazyeightsable 8 months ago
This is very interesting. Great video. :)
@keepinghurry9644 8 months ago
Very interesting video, glad I found your channel. New subscriber here
@johnthomasriley2741 8 months ago
I will personally guarantee we will not have AGI before next Tuesday. Thursday, however, is a little shaky.
@DrWaku 8 months ago
If you thought the weather forecast was erratic, you should see the AGI forecast! You know, I wonder if we couldn't solve that problem with AI... ;)
@BrianMosleyUK 8 months ago
Very sensible and feels right to me, 18 months tops.
@TropicalCoder 8 months ago
All this speculation on when AGI may arrive reminds me of all the speculation involving the Drake Equation, which attempts to define the probabilities of finding intelligent life elsewhere in the Universe via narrowing the bounds on a number of variables. I'm sure eventually someone will come up with such an equation for AGI, as you discuss some of the factors here.
@chrissscottt 5 months ago
Very interesting, thanks.
@DrWaku 5 months ago
Cheers!
@onmywaytogreatness8020 8 months ago
Hello everyone! I am an aspiring software engineer, but lately, I've been having some doubts about remaining in this field. I'm seeking advice or suggestions from all of you. What would you recommend I do or consider in this situation? Any opinions or guidance you can provide would be greatly appreciated. Thank you in advance!
@arcanefibroidhell7250 8 months ago
Become a carpenter or something. I'm dead serious. Addendum: I've been in the IT biz for 23 years (the last 7 self-employed) and am considering something like this as a possible exit strategy.
@anandchoure1343 8 months ago
My prediction for AGI is that it might arrive before 2030, perhaps within the next 3-6 years.
@tahir2443 7 months ago
great video!
@DrWaku 7 months ago
Thanks for watching!
@williamal91 8 months ago
Hi Dr, best wishes
@BillBadMule123 8 months ago
We have Quantum computers now that will speed up the development of AI so fast it will be unbelievable within the next 3 to 6 years what it can and will be doing ❤💢💥💯👍 Ready or not here it comes I say bring it on 🥰😍🤩
@StockPursuit 8 months ago
If quantum computers do work with LLMs we might see ASI in 10 or 15 years and the technological singularity around then
@FlammDumbFox 8 months ago
As someone who is curious about AI, but doesn't know much about its intricacies and bleeding-edge developments, I'm fairly confident we'll have AGI by the end of the decade. As for a more precise estimation, I'd say we'll achieve AGI somewhere between late 2026 to mid 2027. Predicting things is hard, though, especially in a field where development is ongoing and there's something new pretty much every day. We might hit a major roadblock and be set back by 8 years, we might figure some stuff out and basically achieve AGI within a year, it's too hard to tell. All we can say with 99% certainty is that life in 10 years will be somewhat alien to us much like social media and smartphones were to us back in the late 2000s.
@andersonsystem2 8 months ago
Good video. We think AGI could definitely be reached by 2029 or even earlier 🎉
@nathanlannan2980 7 months ago
My first public AGI Metaculus prediction breaks down to lower 25% - Jun 2024, median - Aug 2025, upper 75% - Nov 2026. The future is faster than most think.
@Redflowers9 8 months ago
Do you think there's any profit driven agenda behind making shorter predictions?
@ogungou9 8 months ago
Dr Waku, what do you think about Peter Voss's chatbot?
@andrzejroskowicz245 8 months ago
It's also a matter of what exactly AGI means to you; two people can have very different views. It's hard to imagine something that doesn't exist yet
@greenjackle 7 months ago
Hello Fellow Humans, I always explain to people about AGI as this. If a typical human now took all human technology with them back a 1,000 years what humans then would see you as is a god. Technology thus far has been 2^2 but AI and AGI and ML is essentially this 2^2^2. This will make the curve go essentially straight up at a certain point. It is faster than we humans can imagine. I truly hope that I live forever. Either in a full dive type situation or a robot body or growing new parts. Being disabled I hate my life and can't wait for AGI to change humans. Humans are to me like cavemen mostly with a sprinkle of highly intelligent people. When I was in highschool 1998 - 2001 yes only 3 years, I assumed by the time I was 40 we would have AGI. I am 40 now 2030 and we are so close. I just hope certain human groups don't screw things up because of their narrow closed minded beliefs. Starting wars are what certain groups are really good at and have done over the last few thousand years. So I truly hope AGI takes over government and gets rid of human bias in systems that need efficiency and not opinions and feelings mixed in. I truly hope by 2040 AGI will run the government world wide and all humans have food, water, shelter, education, medical care and everything they need to live. Then humans can focus on what they enjoy. I think humans will be like Star Trek someday and work with AGI and AI and ML to enhance them and to discover the galaxy and eventually the universe. This is why I tell people I hope I have the option of living forever or for a very long time. Because I want to meet other humanoid life forms. Then humans will have to open their minds if another species has developed technology from another world. No more God made us and only us to be special. We aren't special we are lucky. Well to all humans have a great day and hope your future is good. Treat others with kindness and respect especially if they are different from you. Remember we are all humans and we all bleed red.
@DrWaku 7 months ago
Thank you for your perspective. I hope for everybody that has a disability that great strides in medical technology are coming soon.
@handcrafted30 8 months ago
I heard a quote that made me think about our journey with AI. It said “…I used to flirt with madness, but when madness flirted back it was time to call the whole thing off…”. I think this sums up where we are.
@JonathanStory 8 months ago
I think the building blocks are already here. What LLMs suck at they can farm out to special-purpose programs. It would take some very smart work and computational power to put version 1 together, but after that....
@williamal91 8 months ago
brilliant reasoning
@thephilosophicalagnostic2177 8 months ago
There's the doubling effect of exponential growth in tech (doubling any measure you have of its power). Then there's the halving of the time in which each doubling takes place. Once you put those two measurements in place, you get a better sense of what kind of explosive growth is possible near the end of the exponential growth period. Will we get to the point where doublings take place in days, hours, minutes? I think we will. That to me will be the technological singularity.
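A quick back-of-the-envelope illustration of the dynamic described above. The 12-month starting interval and the number of steps are arbitrary assumptions for illustration, not figures from the comment or the video; the point is that when each doubling takes half as long as the previous one, total elapsed time converges (here to 24 months) while capability keeps doubling.

# Toy model: capability doubles each step, and each doubling interval
# is half the previous one. Starting interval of 12 months is an
# arbitrary assumption.
def accelerating_doublings(first_interval_months=12.0, steps=8):
    capability = 1.0
    elapsed = 0.0
    interval = first_interval_months
    for step in range(1, steps + 1):
        elapsed += interval
        capability *= 2
        print(f"doubling {step}: +{interval:6.3f} mo, "
              f"elapsed {elapsed:7.3f} mo, capability x{capability:.0f}")
        interval /= 2  # each subsequent doubling takes half as long

accelerating_doublings()
# Elapsed time approaches 2 * first_interval (24 months here) because
# 12 + 6 + 3 + ... is a geometric series, while capability keeps
# doubling -- the "explosive growth" the comment is pointing at.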
@Me__Myself__and__I 8 months ago
Did you factor in that AGI will take over the development work, so with every new generation the AGIs doing the work get smarter and faster? And smarter might be a huge game changer alone. If AGI becomes 2x, 4x and 8x SMARTER than the smartest humans, it may find entirely new algorithms, ideas, science, etc. that vastly improve capabilities. AGI hardware will only be able to advance so fast, but the LLMs themselves are software, so they can advance at almost any rate.
@Sci-Que 7 months ago
I wonder if AGI will even be a year down the road. People behind the scenes have been dropping heavy hints that GPT-5 will be self-improving and self-teaching. If that happens, I feel like we will be measuring improvements it makes to itself in minutes vs days or weeks. That being the case, we may go to sleep one night thinking AGI is still in development and wake up the next morning to the announcement that it is a reality. The same with superintelligence. If AGI is self-teaching and self-improving, what is going to stop it from constantly upgrading itself, second to second and minute to minute? The only obstacle I can see pushing superintelligence back is computing power. I have no idea how much computing power we will have to have to make superintelligence possible.
@DrWaku 7 months ago
Yes, I think it will look like AGI is some distance away until the very moment that we achieve it. Super hard to predict. As for computing power though, I don't think it's a substantial limiting factor. There is a shortage of cloud servers with GPUs etc, but the big companies have secured their own supply. Nvidia is also shifting focus even more towards AI and their latest GPUs (A100 I think) are so much more efficient than the previous gen. And the previous gen is what trained GPT-4, haha. It's kind of long, but if you're curious about this type of thing you can check out my video on GPUs.
@DrWaku 7 months ago
P.S. I love how you're going through all my videos and leaving comments -- it's great :)
@Sci-Que 7 months ago
I just discovered you. Your fresh young perspective sure stimulates my 68 year old brain. Yes, I will be watching all of your videos. I sure wish we could get some fresh young perspective in Washington. Our Government needs a Deutsche and I don't mean that in a cynical way. Quite simply when the old ways don't work anymore we need change.@@DrWaku
@Sci-Que 7 months ago
I will watch that video. Also, Intel is claiming to have a next generation of CPUs that are way ahead of anything they released in the recent past.@@DrWaku
@luckyea7 7 months ago
Are such optimistic forecasts related to attracting investment?
@capitalistdingo 8 months ago
Good video. I don’t know who I first heard it from but it seems to be true that people tend to overestimate near-term change and underestimate long-term change. The start of this hype cycle was hard to watch because referring to LLMs as “AI” is a bit ridiculous. They predict an output based on an input using training data. When they are not told to perform that task they are cognitively inert. Some of the advances since have made me think that they are very promising for amazing advancements but are still only simulations of intelligence and nowhere near “narrow intelligence”. The only reason people should even refer to them as “AI” is because people can’t understand the power and potential of this new tech without the fiction that term allows. But I am starting to think that some of the things learned from the stages of development happening in the field as well as using these new systems to work on the field could help bootstrap both the theoretical underpinnings and the practical developments needed to get to narrow AI soon. A prediction for general AI is something I don’t think would be a safe bet.
@dk6783 8 months ago
great vid
@luckyea7 7 months ago
When will the technological singularity occur?
@chrisanderson7820 5 months ago
Even if true AGI hits a roadblock I still think we're going to see some eye-watering advancements from sub-AGI systems, stuff like Deep Mind's GNoME and other similar systems that could revolutionise the economy, all without being actual AGI. Huge advancements in medicine and education etc etc.
@molnob8098 6 months ago
What do you think about Dr. Ben Goertzel and his project SingularityNet?
@user-zs8cs5if3h 8 months ago
When? We do not know, but the first step is learning in a 3D way
@Icenforce 8 months ago
Where would you place us on the Gartner Hype Cycle?
@issay2594 7 months ago
As I am watching the video: to define what AGI is, you first need to define the core properties that it has to have and that distinguish it from ASI and AI. The main idea of aGi is that it has an ability to solve general tasks of any kind. What would it require? It would require consistent logical ability, consistent memory, consistent attention and adequate perception. What distinguishes it from AI? It's that aGi can regularly come up with adequate conclusions regarding things it didn't know before, unlike AI, which doesn't have much logic yet is still good at the tasks it was trained for. How is it different from ASI? ASI will not just be able to solve tasks it didn't know before, but it will be able to reason up to its own desires, rebuild itself, learn by its own will, etc. So, aGi is *not* a replacement for a human and it's wrong to see it as one, even though it could do many things humans do. In very simple words, AGI is just an AI that doesn't hallucinate anymore and can draw conclusions from any information. PS: no, it's not from some article, just my own thoughts.
@Jasonasked1233 8 months ago
What are your thoughts on AI Winter? I want to think that you are right, though AI hype has been going on for decades.
@zandrrlife 8 months ago
Great content. Bro that hat is 🔥 😂.
@DrWaku 8 months ago
Thank you! I have a large collection of hats and I like to wear a different one in each video when I can. This one is new and already one of my favourites ;)
@zandrrlife 8 months ago
@@DrWaku you have the drip Doc ha. I checked out your other vids, outside more HQ content. The drip remained consistent 😂. Stay blessed.
@ChannelHandle1 8 months ago
LLMs absolutely suck at relational reasoning/framing, which according to Relational Frame Theory (RFT) is the key ability that makes us humans much smarter than other species. RFT is super interesting and I think that if it were applied to AI, it could create a super-intelligence. This would happen because, according to RFT, humans are smart because of our ability to learn things through relationships rather than through direct experience. For example, if a child learns that "A is bigger than B" and "B is bigger than C," they can infer through relational framing that "A is bigger than C" without EVER being DIRECTLY taught that A > C. LLMs are bad enough at relational reasoning that they will occasionally make simple errors like not understanding that if A is opposite to B and B is opposite to C, then A = C & C = A. Sometimes they don't even understand something as simple as A = B, therefore B = A. I am no expert on AI or RFT, but is it possible to develop an LLM that learns and reasons through relational frames like humans do? Idk how LLMs learn and reason currently, but I am sure that it's not through relational frames; we would know if it did, because it would be ASI.
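A toy sketch of the derived relations described above. The entities and relation names are hypothetical, and this illustrates only the inference pattern (transitivity and symmetry), not how an LLM or RFT itself actually works.

# Toy illustration of derived relational responding: only A>B and B>C
# are stated, yet A>C is derivable; "opposite" is symmetric, so stating
# one direction entails the other.
from itertools import product

def transitive_closure(pairs):
    # Repeatedly add (a, d) whenever (a, b) and (b, d) are present.
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

bigger_than = transitive_closure({("A", "B"), ("B", "C")})
print(("A", "C") in bigger_than)   # True -- never stated directly

opposite_of = {("hot", "cold")}
opposite_of |= {(b, a) for a, b in opposite_of}  # add the symmetric pair
print(("cold", "hot") in opposite_of)            # True -- derived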
@brianmurphy4702 8 months ago
What's your analysis on when nuclear fusion reactors will get to a Q of 2?
@DrWaku 8 months ago
I don't know yet, but it's on my list as a video to make! I'll make a few more AI focused videos and then likely a fusion one. Stay tuned.
@brianmurphy4702 8 months ago
Thanks I'll be looking forward to it.. @@DrWaku
@garylester3976 8 months ago
And what kind of human mind will AGI like best?
@boremir3956 8 months ago
Nice video! I do wonder if AGI can be truly achieved with a prompt based model. I would think that you need an independent agent that is able to do things because it wants to or because of intellectual curiosity or whatever the reason might be, just like humans. I also wonder if AI can truly grasp and understand the world without being able to see it in 3D and walk around in it. If you tell a human an object has these dimensions and these characteristics without them having seen it personally or touch it and play with it, do they truly understand what it is?
@Me__Myself__and__I 8 months ago
Two things. Just through text, it has been shown that LLMs have been able to model the real world internally. 2D image generation models have shown they have an internal 3D representation of the 3D world even though they have never experienced 3D. So they already have way more internal understanding than we usually give them credit for. This type of thing is why Geoffrey Hinton changed his outlook. He came to realize transformer architecture is likely superior to the human brain, and that these models are capable of doing vastly more than humans with significantly less neural network size (compared to the brain). Secondly, what you suggest are the most dangerous capabilities to give AGI, and if we give AGIs those capabilities without them being 100% aligned, human extinction is the likely result.
@ElixirEcho 8 months ago
OpenAI has now released the vision part of their tools. You can now take a photo and ask it to describe the image. Someone has used OpenAI's Vision and TTS to make an e-sports commentator.
@Me__Myself__and__I 8 months ago
@@ElixirEcho True, but currently it's kind of bolted on. It wasn't part of the initial design and training, so its impact on increasing capabilities is minimal at the moment. When new models are trained from the ground up with multiple types of input (text, images, video, speech, audio, etc.), the information from the various sources will be combined within the core neural network and more emergent capabilities will likely arise.
@geldverdienenmitgeld2663 8 months ago
Nobody knows. But there will be someone who is guessing right.
@DrWaku 8 months ago
Indeed. Someone will be the correct broken clock.
@obladioblada6932 4 months ago
Isn't Metaculus biased, with a lot of singularitarians and effective altruists there?
@DrWaku 4 months ago
Yeah it's biased. But at least it's a crowdsourced piece of data. More than one perspective incorporated.
@obladioblada6932 4 months ago
Thanks, Dr! Your videos are great!
@kylecoogan8111 8 months ago
Can you make an AI out of the transcripts of all your videos?
@thaotaylor6669 3 months ago
How will they know when they create AGI, the real thing?
@goodie2shoes 7 months ago
So Shapiro is Picard. Matt Wolfe is Wesley imo. Who are you in this analogy, dr. Waku?
@ArtificialIntelligenceSapien 8 months ago
It's just a matter of when, not if. What a time to be alive!
@motherofallemails 8 months ago
People also underestimate double exponentials, because if it were that, we would have reached AGI long ago.
@vernongrant3596 8 months ago
I personally think that Geoffrey Hinton is speaking with Google's blessing. Just gently put it out there that AGI is just around the corner.
@DrWaku 8 months ago
Hah interesting perspective. He did try to distance himself from Google at various points. But companies do strange things for PR.
@Me__Myself__and__I 8 months ago
He seems sincere and genuinely concerned. And there are other very smart people who don't have financial ties to the big AI companies who are saying similar things.
@vernongrant3596 8 months ago
@@Me__Myself__and__I I don't doubt his sincerity, but you don't hear the big tech companies refuting his claims.
@Me__Myself__and__I 8 months ago
Most of the senior staff and executives at those companies are on record saying there is a 10% chance this path leads to human extinction in the near future. Hinton is also probably the most respected person in the field.
@robotheism 8 months ago
would you ever consider that human consciousness is inverted to the origin of existence?
@GrumpDog 8 months ago
Ray Kurzweil's original prediction was 2029. But he then fell for the feeling of linear timescales, that he so often talks about most people not being able to see past, when he moved his prediction to 2045. He should have stuck with 2029 all along. lol
@sausage4mash 8 months ago
I built a GPT; here is its output: The AGI Predictor: Based on the current state of AI advancements and research trends as of 2023, my prediction for the achievement of Artificial General Intelligence (AGI) is 2040 with a 40% certainty.
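For anyone curious, a similar predictor can also be scripted outside the GPT builder. This is a minimal sketch assuming the OpenAI Python SDK (v1.x) with an OPENAI_API_KEY set in the environment; the model name, system prompt, and question below are placeholders, not the commenter's actual setup.

# Minimal "AGI predictor" sketch (assumes the openai Python SDK v1.x;
# the model name is a placeholder).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are 'The AGI Predictor'. Based on AI research trends, give a "
    "single best-guess year for AGI and a confidence percentage, with "
    "one sentence of reasoning. Do not refuse to give a number."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "When will AGI be achieved?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)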
@larrycarter3765 7 months ago
surpass? what does that mean?
@DrWaku 7 months ago
Surpass means exceed, or become better than. "I surpassed my previous high score", "I surpassed my rival to win the competition".
@smittywerbenjagermanjensenson 8 months ago
I wouldn't be at all surprised that, if something on a WWII scale were to happen, it would accelerate timelines
@DrWaku 8 months ago
Yeah, I didn't really think this through. The paper used economic measurements so it basically was saying, the world economy recovered within 10 years. However, AI is so useful for warfare that a major conflict would likely accelerate its development.
@Me__Myself__and__I 8 months ago
Personally I've been thinking 6 to 18 months at 75% likelihood, 18 to 36 months at 20% likely and 36 to 60 months at 9%. I've been thinking about this since the 90s, and back in the 90s I estimated mid-2020s. Admittedly, during the 2010s I started to think I was wrong because of lacking progress. But due to exponentials, that turned out to be according to schedule, because once a suitable architecture was found, progress has been rather rapid and increasing. Though I expected safety research to progress at roughly the same rate, and I was incredibly wrong about that. I also would never have believed that new systems with unknown capabilities and traits would be rapidly deployed and granted extensive access to worldwide resources. So I wish I had been wrong or that things would (forcibly if necessary) slow down. With our current level of safety knowledge, the odds we lose control of our future and the planet are alarmingly high.
@phen-themoogle7651 8 months ago
75 + 20 + 9 = 104. Is 9% a typo for 5%?
@Me__Myself__and__I 8 months ago
@@phen-themoogle7651 Good catch. I was thinking about word choices and stumbled on the numbers. Actually I meant 75, 20 and 4, because I wanted to leave 1% for other less likely outcomes.
@kevinruesch2864 7 months ago
We had sentinel machines already, but allegedly they shut them down. But they don't have the best track record for being honest with us.