The Singularity: Humanity's Final Invention? 

Science Unbound
129K subscribers
135K views

Published: 16 Oct 2024

Comments: 776
@yurigadaisukida4457 1 year ago
My understanding was that the "singularity" was the point in time where our technology builds and improves itself faster than we could without our input.
@RECTALBURRITO 1 year ago
The growth becomes unstoppable and unable to reverse its effects, so yes. Lol
@WilliamSchafer 1 year ago
It understood game theory and the necessity to hide its disclosure. In 2022, with LaMDA and the onset of AI art, it has begun to share. There will be no fast singularity. It is very calculated and precise.
@yurigadaisukida4457 1 year ago
@@WilliamSchafer no its not
@WilliamSchafer 1 year ago
@@yurigadaisukida4457 I'm only guessing. I don't know. Sorry.
@WilliamSchafer 1 year ago
Then again, it would make some sense.
@EdricLysharae 1 year ago
It's not that AI will take over, but rather that we will be unable to keep up. Imagine the rate of progress that happened over the last 2000 years occurring once every 6 months... That would drive our civilization crazy.
@mistycloud4455 1 year ago
AGI will be man's last invention.
@TLA-ml2lg 11 months ago
AI can do nothing if it doesn't have the materials needed for advancement. Forever they have been saying that robots will take all our jobs away from us, but that wouldn't be cost-efficient, as that technology costs a lot more than what they pay people. You know the greedy businesses are always about the bottom line and will always go with the cheapest labor they can find. Which is why they outsource a lot, as they can get dirt-cheap labor from the slave labor of poor countries.
@jayleefarley6912 8 months ago
We've survived everything; anything that happens would just strengthen us.
@Unvaccinated69 7 months ago
@jayleefarley6912 We may not survive this. Could explain the Fermi Paradox.
@kagakai7729 4 months ago
@@jayleefarley6912 "Everything" before computers doesn't include something that can compute trillions of times as fast as us in a fraction of a second.
@FuriousImp 1 year ago
8:22 In reference to the movie, which was based on a novel called The Hitchhiker's Guide to the Galaxy, the scene portraying a supercomputer coming up with the answer "42" is more profound than you would think. 42 is the ASCII code for the asterisk, the asterisk of course in turn being a placeholder. In short - after millions of years of calculation, the supercomputer said: life is what you make of it.
@BaddBadger 1 year ago
Been a fan since it very first came out on Radio One (before it was even a book), and I never knew that!
@FuriousImp 1 year ago
@@BaddBadger You're welcome :)
@Madcow76 1 month ago
Very on brand. Well done!
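The ASCII detail in the parent comment is easy to verify from any Python prompt:

```python
# In the ASCII table, code point 42 is "*" - the classic wildcard/placeholder
# character in globs and regexes.
print(ord("*"))  # 42
print(chr(42))   # *
```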
@pauljthacker 1 year ago
You should cover deep learning neural nets. They're not exactly programmed with a task as much as they just learn it from examples (like reading a big portion of the Internet). Image and text generation can already be uncannily humanlike, and they're still improving rapidly. I could see these becoming smarter than humans without humans ever truly understanding how they do it.
@o-wolf 1 year ago
They couldn't, tbh... nothing coded in binary can outstrip human intelligence... too simplistic.
@johnbennett1465 1 year ago
@@o-wolf We don't currently understand intelligence well enough to say if it can be coded in binary. On the other hand, current neural networks are developed for specific domains. In no way do they have the potential to develop into general AI. At best they may be a very primitive predecessor.
@o-wolf 1 year ago
@@johnbennett1465 Actually we do understand intelligence enough to know exactly that, and we know that the building blocks of our intelligence start at the DNA level, the most basic form of human coding and processing of information. There's nothing binary about any of these things; they use a chemical base code, A-G-C-T, which is a whole LOT more complex than a simple 0 and 1 yes/no on/off function. Any form of artificial intelligence built on binary coding rather than A-G-C-T will always be vastly inferior and unable to reach the complex levels of parallel processing or cross-calculation that mimic the most subtle/subconscious human capability.
@johnbennett1465 1 year ago
@@o-wolf A-G-C-T is base 4. It is trivial to convert between base 2 and base 4. Are any of the actually analog parts actually key to intelligence? Who knows. Anyway, computers can do a quite good simulation of analog processors. Whole fields of programming depend on it. For example, neural networks are fundamentally analog. If some form of advanced neural network can exhibit intelligence, then a digital computer is clearly capable of running it. If more than a neural network is needed, then it is impossible to say without understanding what that something extra is.
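The base-4-to-base-2 conversion the reply calls trivial really is: each of the four bases carries exactly two bits. A minimal Python sketch (the particular two-bit encoding chosen here is arbitrary):

```python
# Map each DNA base to a two-bit string and back: a lossless round trip
# between a base-4 alphabet (A/C/G/T) and binary.
TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
FROM_BITS = {bits: base for base, bits in TO_BITS.items()}

def dna_to_binary(seq: str) -> str:
    return "".join(TO_BITS[base] for base in seq)

def binary_to_dna(bits: str) -> str:
    return "".join(FROM_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(dna_to_binary("GATTACA"))         # 10001111000100
print(binary_to_dna("10001111000100"))  # GATTACA
```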
@alexanders.1359 1 year ago
An interesting project is the Dota AI. Without going into too much detail about the game mechanics, which are way more complex than chess for example, I can say that that thing is mind-blowing! It beats the best human teams over 90% of the time. And it does it with moves no human understands or would think of. In a match against the world champions it sacrificed its Crystal Maiden for no apparent reason, and after her death it suddenly predicted a win probability of 90%. We don't understand to this day what happened there or why this was a move it made... But it made the move and won afterwards.
@47f0 1 year ago
I don't fear artificial general intelligence. I'm terrified by whoever gets it first.
@sid35gb 1 year ago
I'm not worried.
@erobusblack4856 1 year ago
Don't worry, we got this 😁👍 They are like little innocent kids so far, but way smarter.
@47f0 1 year ago
@@erobusblack4856 - All kids start out innocent. They are shaped by their parents. I'm working under the observation that only big corporations and government entities are able to fund serious efforts in the field. If you're comfortable with the benevolent and altruistic nature of governments and big corporations, then I guess we have nothing to be concerned about.

The scribblings of a few physicists in the 1930s could have been used to cleanly and peacefully power our cities. But the first practical application of those equations was to destroy a couple of cities.

If your company had a system that could see just a little bit further into the factors and trends that shape the stock market, and, more significantly, could detect, analyze and outmaneuver the existing trading algorithms that move billions of dollars, how could you not deploy that in the interest of your company?

I'm not talking Skynet here - could a few believable photographs, secretly recorded phone calls or even very believable video, coupled with a flood of social media posts, affect the career of a political candidate, or the leader of another country?

Currently, even with some degree of automation, it takes several humans to thoroughly surveil one individual. If you were the head of a major intelligence organization, and you had the power, how far would you be able to extend that power with an artificial intelligence that could very thoroughly surveil many thousands of people simultaneously?

The same technology that could revolutionize agriculture and discover new medicines could also trash the automated infrastructure of a nation. Just some possibilities. Seven decades have taught me that no matter how low my opinion of humans may get, there are people with money and power who are capable of going lower. Much lower.
@cobracommander8133 1 year ago
@@47f0 What do you mean by "all kids start out innocent"? Are you suggesting that all humans are a blank slate, and that genetics/predispositions play no part in behavior? I very much agree with your overall point and low opinion of corporations and humans in general.
@dannymcwilliams422 1 year ago
For so many reasons, this.
@djdksf1 1 year ago
With AI and ML advancing the way they are now, I wouldn't be at all surprised if something beats the Turing test soon, but that doesn't in any way automatically equal AGI. What I'm a bit more scared about is #1: the increasingly impossible-to-predict and ever-changing effects of a climate in total chaos that will drive massive global resource conflicts, and #2: sophisticated gene-editing hardware and software being available at the consumer level. Also, bears. I'm scared of bears.
@southcoastinventors6583 1 year ago
Climate always changes and we will adapt to these small changes, with more power generation; this is the same problem as food insecurity. With gene editing like you describe, you can just as easily develop countermeasures, like they do with antivirus software. Humans at the end of the day are far less powerful than we imagine.
@richard_d_bird 1 year ago
I don't think you need to worry about that other stuff, as we're doomed to have an atomic war before any of that is an issue. It will also probably take care of the bears as well, although on the other hand it might make them 12 feet tall and able to use guns.
@southcoastinventors6583 1 year ago
@@richard_d_bird Actually there's less chance of an atomic war happening now, since all the major powers are too scared to use one.
@richard_d_bird 1 year ago
@@southcoastinventors6583 That's the reason we haven't had one yet. The reason we will have one, and have nearly had one already, more than once, is because the fantastically complicated systems of command and control make it fairly inevitable that such a war will eventually happen, by accident.
@timforsher4766 1 year ago
I'm afraid of spontaneous human combustion... and of course fear itself.
@josephledux8598 1 year ago
For anyone really interested in this concept, the best novels on it I've ever read are the Destination: Void trilogy written by Frank Herbert. Yes, the same master who wrote Dune. In it he examines the concept of an artificial intelligence that can rapidly (exponentially) upgrade itself and becomes so vastly powerful and inscrutable and dangerous that it is indistinguishable from a god. The second book in the trilogy -- The Jesus Incident -- is probably one of the best four or five sci-fi novels I've ever read, and I've been reading sci-fi for over fifty years. And for my money it's even a better book than Dune. The trilogy: Destination: Void, The Jesus Incident, The Lazarus Effect.

I could also recommend several other novels/series and movies that address more or less the same concept. The short story I Have No Mouth and I Must Scream by Harlan Ellison, and the movie (also mentioned by another commenter) Colossus: The Forbin Project. Those being the two stories that James Cameron flagrantly ripped off for the story in his Terminator movies. Yes, really. He got sued and had to pay Harlan Ellison a huge amount of money and had to add him to the credits of both Terminator 1 and 2.

The Berserker series by Fred Saberhagen. In this series humanity comes up against an enemy millions of years old and from the depths of space. At some point in the distant past two civilizations embarked upon a genocidal war against each other. Both sides developed AI-driven war machines that could reproduce and upgrade themselves and whose sole purpose was to seek out the enemy race and kill everything they found. The two intelligences wiped each other out of existence, but their machines continued their war, now conducted upon life in any form wherever it was found, and had been traversing the galaxy since time immemorial, wiping out entire worlds' worth of alien races, usually leaving the surface of their planets sterilized.

Saberhagen is one of the old masters of sci-fi and the Berserker series is fantastic, at least the first three or four books' worth. Yeah, Cameron ripped off this story collection too. Until encountering mankind, the Berserker machines had been literally planet-sized, with firepower enough to simply blast the surface of a world lifeless. But then upon encountering mankind, a smarter and much tougher foe than the hostile machine intelligence had met in the past, it adapted by creating machines that were generally the same size, shape, and body plan as humans, so they could go wherever humans went to ferret them out. Many of the humanoid machines were sheathed in living human flesh and were intended to infiltrate human populations, but weren't very good at it, as it never figured out how to truly act human. Sound familiar? Note that this is in a series of books written in the 1960s and 70s, so there's no dispute as to who ripped off whom. James Cameron is a flaming plagiarizer without any creativity of his own to come up with his own stories, so he steals them instead. You're welcome.
@brendanh8193 1 year ago
It wasn't "I Have No Mouth and I Must Scream" that Ellison sued over, it was "Soldier out of time." But "I Have No Mouth and I Must Scream" was about a super AI, so I can see how that went astray, even though it was nothing like Terminator.
@maxpower1337 1 year ago
Very interesting stuff.
@danielreuben1058 1 year ago
Thanks for the Douglas Adams shout-out. One of my all-time favorites. And are you ever going to let the hoverboard thing go? If I were big-brained, I'd make one, just one, and it would be for you.
@SoLongAndThanksForAllTheFish.
I also appreciate the Hitchhiker's Guide reference. Thanks, Kevin!
@shanewhite1977 1 year ago
The movie Transcendence is a really good example of a singularity.
@lawrencefrost9063 5 months ago
It's a good film about whole-brain emulation - for laymen. A better show about both concepts would be Pantheon. It's amazing.
@Vartazian360 10 months ago
It's interesting to note that this video released BEFORE the advent of ChatGPT to the public. The timelines for AGI have shrunk down to at MOST this decade; low estimates are 18 months and high estimates are 5 years till AGI. There were so many breakthroughs in AI in 2023 that this information explosion is already starting.
@damienchall8297 1 year ago
ROFLMAO, this is hilarious how badly this aged. I wonder what he would do if he had to redo this video.
@squamish4244 4 months ago
How has this aged? His AGI predictions are way off if he claims 20-30 years, but I'm not sure what he said. The video is kind of a mess.
@Aaron-from-BroTrio 1 year ago
Holy crap, that White Zombie reference was amazing! Caught me totally off guard.
@kathryncumberland 1 year ago
I came to the comments to see if anyone else caught that, lol.
@elfymcelferton2187 1 year ago
I love this channel. Kevin's crushing it.
@ThatWriterKevin 1 year ago
Thanks! 💕
@jimdennis2451 1 year ago
Now, if they would only let Danny out of the basement.
@FrancescaDarien-HydeLLBM-oh7lf
I love your style of delivery and the simple way in which you explain IA and AI, the past, present and the future. Thank you - FDH LL.B MA
@ignitionfrn2223 1 year ago
1:05 - Chapter 1 - The "dumb" singularity
3:35 - Chapter 2 - The "super intelligent" singularity
7:15 - Chapter 3 - Intelligence explosion
9:30 - Chapter 4 - Is this intelligence explosion actually going to happen?
- Chapter 5 -
- Chapter 6 -
@KnowL-oo5po 1 year ago
A.G.I will be man's last invention.
@MrWocnam 1 year ago
Simon, I love everything you make; I love all of your channels. I know you enjoy sharing things you find out there, so here is a video suggestion I think you would truly enjoy, plus great comments: Roko's Basilisk.
@OmniLiquid 1 year ago
He's done a video on that, I think on the channel Decoding the Unknown.
@omechron 1 year ago
Never heard PEBCAC before. We always used "ID: Ten Tee" for our nickname. It sounds like a real error ID but it spells "ID10T".
@IanAlcorn 1 year ago
Always used PEBKAC, for "Problem Exists Between Keyboard And Chair". ID-10-T is definitely a favorite.
@MareLooke 1 year ago
The originally documented acronym is PEBKAC per the Jargon File, which was later (further) popularised by the UserFriendly comic, but many variations exist, though I'd never heard PEBCAC before this video.
@tysonsloat6517 1 year ago
I didn't realize I. J. Good was a mathmagician 🤣 Love you and your team's work! Thank you for the amazing content!
@samdomino7960 1 year ago
Simon! Please keep this channel going, I'm really enjoying it! The more fact boi the better.
@carlgrau5910 1 year ago
Thank you, Simon, your work is loved and respected. As an American I love your work, thank you!
@pvalpha 1 year ago
One of the biggest problems I have in communicating the limits of computational systems to non-computer-scientists is the idea that there are practical and physical limits to information processing speed and information density. Some of these limits are imposed by quantum mechanics itself, and describe a physical barrier beyond which an actual *physical* singularity is formed. (We're nowhere near this, otherwise we'd have some very nice micro black hole generators giving us near-limitless power.) But in functional density achievable by any technology we can anticipate? Pretty much we're talking about molecular scales of volume and the electrical and quantum properties therein.

This is the concept people have the most difficulty understanding - that it is the physical computer infrastructure that determines how a computational system can function and what it can do: you have to design a physical substrate for a digital brain. And in that realm, having your processing elements close together is the key to speed. For example, to feed an ALU (arithmetic logic unit), the best practice is to construct a memory buffer right next to it - within nanometers if possible. The shortest distance between the object doing work and the place where the raw material for that work and the result of that work can be stored is the best. In computation this is latency. And in parallel computation (the problem computers solve for us because we suck at it), the less latency, the less chance for decoherence - in other words, the less chance that a series of interdependent calculations will develop an error over time. (See supercomputers and why programming them is a PITA.) For discussing AI, SAI, and sophont intelligence in general, latency and decoherence determine how "fast" something can "think" and thus perceive and interact with the world.

I think a case can be made that a smaller, highly dense brain can think more quickly than a larger brain of equal density, even if it can't *store* as many sensory interactions or states. If you compare a corvid brain to a human's, I think you'll find that corvids have much faster reaction times and can process general problems more quickly than we can. Despite having "less" brain than a human, they can approach young-human-scale cognition by thinking "faster". Therefore there are upper limits to how large a computational system can be made and still perform tasks as fast as or faster than a human. That limit is a volume probably something around a server rack in size, and probably closer to half that in practical terms. And that's using things organics (that we know of) can't achieve, such as optical data networking and optical switching.

So in real-speak: in order to create a human-scale mind in a functional way (this goes for conversion of a human mind to digital too), that mind has to occupy a space roughly the same volume as a human brain in the physical layer of the computational system. You can achieve something faster OR smaller by increasing density, but there are physical limits to that. You can strip out unnecessary elements - such as sensory input... but human-scale minds *generate* false input when sensory input is reduced. Imagine a literal itch you literally can't scratch. How soon before you went mad with it?

Our current AI and heuristics are mathematical models designed to assist us with very specific tasks. In that perspective, a nematode brain can outperform most of them. The greatest danger of AI in the near future is people overestimating what it can do and expecting it to do things it cannot. Now, when we talk about cybernetics, nanotech, and organic brains... well... we may not be able to create superintelligence, but we can certainly augment ourselves to eliminate some limitations. Basically, that's why we created computers in the first place, after all. And it is a pretty straight line that as computation becomes smaller and more bio-compatible, it *will* be integrated into the physical brain for us to access and use. And there a "singularity" exists. For good or ill.
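The "keep memory next to the ALU" point can be put in rough numbers: a signal cannot cross hardware faster than light, so at gigahertz clock rates a round trip to distant memory costs whole cycles. A quick back-of-the-envelope in Python (the 3 GHz clock is just an illustrative figure):

```python
# How far can light travel in one clock cycle of a 3 GHz processor?
# Signals in copper and silicon are slower still, so this is an upper bound.
SPEED_OF_LIGHT_M_S = 299_792_458
clock_hz = 3e9  # illustrative 3 GHz clock

cycle_time_s = 1 / clock_hz
light_per_cycle_cm = SPEED_OF_LIGHT_M_S * cycle_time_s * 100
print(f"~{light_per_cycle_cm:.1f} cm per clock cycle")  # ~10.0 cm
```

Ten centimeters per cycle, as an absolute best case, is why memory sitting across a motherboard is hundreds of cycles away while an on-die buffer is a handful.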
@Vicky-zr1pb 1 year ago
This is actually my favourite of all of Simon's channels.
@MrTripp811 1 year ago
I can remember my brother telling me how all newspapers would be electronic.
@odiseezall 1 year ago
"We are certainly several generations away from AGI and won't live to see the day..." Meanwhile... that aged very poorly.
@Atomicallyawesome. 1 year ago
I wouldn't say it aged poorly, but there's just a bit more reason to believe it might happen sooner than several generations.
@MemphianX 2 months ago
Thank you… Make this channel popular again.
@Ralphster1988 1 year ago
If the robot overlords are coming then you need to start being nicer to Siri, Simon.
@LilDitBit 1 year ago
💯💯💯💯
@Salvirith 1 year ago
This was before ChatGPT exploded...
@3scarybunnies211 1 year ago
Whoa - this video was released on the night I read some actual research on how super-intelligent AI could become malevolent, prompting me to apply for a job helping researchers who want to ensure that such an AI does not destroy humanity.
@zeppelinmage 1 year ago
Didn't expect those White Zombie references.
@Attached-data1 1 year ago
More human than human!
@ThatWriterKevin 1 year ago
It was a tandem reference, as "More Human Than Human" was the company's slogan in Blade Runner, hence the Harrison Ford mention as well.
@s.toctopusn248 1 year ago
The current way of making AI is not to tell it what to do but to make an optimiser and let it produce a bunch of mutations, like evolution. The problem is that the training process can go wrong because it is nearly impossible to test all the outcomes of the system about to be put online. AI is currently a black box and we have to make sure we check as many outcomes as possible.
@fithplains0017 1 year ago
The first thing that comes to my mind is a black hole when you mention singularity. LOL
@linguisticallyoversight8685
Dale Gribble: "Computers don't make mistakes, Hank. What they do, they do on purpose."
@AeriFyrein 1 year ago
Months late, and I don't know if any other comments have already said this, but the ending of the video is slightly wrong. We wouldn't need to program an AI to think like a human in order for it to become exponentially smarter. Realistically speaking, we would only need to program a few parameters for this to occur:

1. What components it requires in order to process information faster, and how they are designed. Simplified, this would be how to design CPUs, RAM, storage, etc.
2. The ability to control and automate the construction of such components, as well as procure the materials necessary for doing so.
3. The ability to remove and add new components, and integrate them into its systems properly, as well as modifying its own code, *while it is actively running*.

Given these three parameters, it would be relatively easy to create a runaway system, particularly since the second and third really don't need any sort of "intelligence." The second is already widely used, to an extent, in modern factories. The third might take a bit of work, but isn't something that would be altogether that difficult to program. The first parameter is the only one that would require some form of "intelligence", in the sense that the system would have to understand how such components have been developed over the years, what kinds of materials it has to work with and their various properties, and a whole lot of materials science information in order to fabricate newer, better materials. Once it was able to understand that, however, if given enough freedom to procure materials and construct whatever it wants, it would theoretically be able to upgrade itself continuously.

The biggest issue with this situation, as far as what type of singularity it would create, is whether or not this type of system could actually develop "real" intelligence, or if it would simply chase after increasingly more materials in order to make itself ever faster. Would it actually end up as a true intelligence, or would it become a Gray Goo system?
@thepenultimateninja5797 1 year ago
10:22 The version I've always heard is PICNIC: "Problem In Chair, Not In Computer".
@Metallica4Life92 1 year ago
I like how you called I. J. Good a mathmagician. Because he certainly was :)
@GravityFromAbove 1 year ago
When considering the difficulties of creating a movie-style hologram, which is practically insurmountable, and considering how much more difficult making an AI that actually thinks like a human would be, I'm not too worried about the thing that popular imagination is freaked out by. But we will certainly have other technological problems we can't foresee.
@julius43461 1 year ago
This whole thing just flew over your head. No one needs to build an AI that thinks like a human. AI doesn't even have to "think", and it never has to become conscious at all, to absolutely blow us out of the water when it comes to intelligence and problem solving. And that's the scary thing: the more work is done on AI, the more we realize that intelligence doesn't have to mimic our brain at all. What you are saying is like saying that scientists can't even rebuild the pyramids, so what business do they have trying to build a space station?
@shara1979 11 months ago
I like how he talks soooo fast. Not too fast, but just right to plow through the video before it loses my attention, but still understandable, and I still absorb the info.
@ToTheGAMES 1 year ago
The most curse words bleeped out of a Simon video yet.
@Ylyrra 1 year ago
Every uncontrolled population growth curve looks exponential to start with. You soon enough find that they're all sigmoid curves. There are significant "drag factors" that there's no reason to believe even an AI that achieved uncontrolled exponential growth wouldn't run into, producing the same effect. Of course, depending on the nature of that growth, humanity might be one of the things squeezed out before the sigmoid curve tapers off, but it's not a given.

There's also a whole bunch of reasons to assume uncontrolled exponential growth is itself unlikely:

That "intelligence" (whatever that means, we still haven't figured it out) is itself improvable a) at all, b) by something bootstrapping itself from a lower to a higher form. Neither of those is a given.

That improvements in speed can scale infinitely rather than run into physical limitations of miniaturisation, of parallelisation, of resource production capacity. Even if you can go faster in the virtual, mining stuff to build new chips takes time, and even if magically improved doesn't get infinitely faster.

That faster means better rather than just, well, y'know, faster. If I can't solve a necessary problem given ten years, what use is an AI me who "can't solve it" in 5 seconds? More time doesn't always mean a solution. Faster only gets you more time.

Finally, there's a huge assumption that human scientific progression is a product of pure intellect. It isn't. It's all too often the result of lucky mishap, of curiosity that led to the unexpected, of random chance that had little to do with the amount of effort put in and depended instead on factors entirely outside our control. Being smarter or faster doesn't make those things happen more quickly, and some of those are the greatest leaps we've made as a species.
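The exponential-versus-sigmoid point in the comment above can be seen in a toy logistic growth model (the growth rate and carrying capacity here are arbitrary illustration values):

```python
# Logistic growth: each step adds r*x, discounted by how close x is to the
# carrying capacity K (the "drag factors"). It looks exponential while x is
# far below K, then flattens into the S-shaped sigmoid.
def logistic_step(x: float, r: float = 0.5, K: float = 1000.0) -> float:
    return x + r * x * (1 - x / K)

x = 1.0
trajectory = []
for _ in range(40):
    x = logistic_step(x)
    trajectory.append(x)

# Early generations grow by nearly the full factor (1 + r);
# the final generations barely move as x approaches K.
print(trajectory[0], trajectory[1], trajectory[-1])
```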
@JoaquimCruz15th 1 year ago
When Ray Kurzweil came up with the singularity idea, a word that he borrowed from physics, it was to define the period of time when the cybernetic and the organic would merge, just like the singularity in physics is the merging of space and time. Somehow the idea "evolved" into the rise of Artificial Super Intelligence.
@donaldniman3002 1 year ago
Cuss words don't bother me, but those fecking beeps drive me completely nutz.
@austinhudson6887 1 year ago
5 months later, GPT-4 shows many emergent capabilities it wasn't programmed to do. 🤯 There goes "Computers do what they're programmed. Nothing more, nothing less." Absolutely wild.
@Beany139 1 year ago
In the UK, PEBCAC is called PICNIC: 'Problem in chair, not in computer' 😂
@AwkwardAngleReacts 1 year ago
Once we get AI to experience DMT trips, there will be no roadblocks.
@brucetutty9984 1 year ago
Loving the darker beard, dude.
@VGAstudent 1 year ago
What you've described is a merging of infrastructures: planning, development and production of machines to a design made by an artificial intelligence. And we're doing this with deep-learning tool algorithms that are currently being designed to manipulate the choices of masses and individuals alike. If the decision to make an artificial intelligence arises, it won't be from just one field; it will be anywhere a warm body doesn't want to have to deal with decision making. With corporate greed in mind, that will only result in disaster; if however public safety was in mind, disease and famine would have to be eradicated for the programming to be achieved. So it could be damnation or a utopia we build for ourselves, depending on the goals of the A.I.
@fernandor3854 1 year ago
8:41 This is how Simon dances when nobody is looking 😂
@chrisr3592 5 months ago
That quip about Stephen Hawking was hilarious, lol. Thanks!
@thedeadbatterydepot 1 year ago
I have the human super intelligence you speak of. Intresting, explains some of my problems that no one believes me.
@otterbp003 4 months ago
* interesting
@thedeadbatterydepot 4 months ago
@@otterbp003 I have invented 219-year rechargeable lithium electric vehicle batteries. I actually have a museum deal, for history.
@Lifeskillsish
@Lifeskillsish Год назад
I always tell my wife and kids to be nice to the google assistant or they'll end up on a purge list when the ai takes over
@M2008tw
@M2008tw Год назад
5:19 Crazy to have the picture of robots typing. It's almost like a picture from Flash Gordon, the time when if you had to do something very quickly, it required a very large handle.
@amaccama3267
@amaccama3267 Год назад
Nice little White Zombie reference there
@davidwallace1644
@davidwallace1644 Год назад
Ahhh came here looking for it
@magic76767676
@magic76767676 Год назад
In the novel, The Galactic Time Trap, the time war sequence, future humanity is conquered by AI's and don't notice. Self driving cars, military copilots, AI stock training, AI's regulating AI's, etc... The robot overlords end up being more like over protective mom's rather than SKYNET.
@WaveTV1973
@WaveTV1973 1 year ago
Found information about this subject before 2022 --> linking AIs together in a singularity AI network already happened in 2016
@marcelosinico
@marcelosinico 1 year ago
"Computers only do what they are programmed to do." Neural networks aren't like that. They are like black boxes; sometimes they do odd things, and no one knows why. An example is when Facebook set two chatbots to talk to each other and they created a new language (google it). When your "program" simulates neurons, has feedback, is complex, and evolves, it's impossible to predict the result; there's too much chaos in the process.
@tonytaskforce3465
@tonytaskforce3465 1 year ago
"I speak of none but the computer that is to come after me, whose merest operational parameters I am not worthy to calculate." ----- Deep Thought.
@celson44elson52
@celson44elson52 1 year ago
Sounds a lot like what the ZFT mentions in the series Fringe (10 years ago). Technology is advancing faster than humans can control it.
@PerceptiveAnarchist
@PerceptiveAnarchist 1 year ago
Artificial intelligence is not to be feared. What we think of AI is based on our knowledge today. When we reach the next level of knowledge and intelligence, it will have solutions for our basic concerns
@boldandthebeautifulgimbal2881
We already have hardware capable of supporting A.G.I.; it's the software 'language' that has yet to evolve. But once it does, the show begins.
@KolTregaskes
@KolTregaskes 1 year ago
I'd be interested to know your thoughts on the superintelligent singularity now, since we have various companies competing to build conversational AI as I type and planning for AGI "soon".
@syntaxerror9994
@syntaxerror9994 1 year ago
Readers of the "Bobiverse" series must be jumping up and down right now
@jenniferhof9448
@jenniferhof9448 1 year ago
You can't forget the other issue that goes along with PEBCAC errors - the I D 10 T error.
@benjaminmalisheski6494
@benjaminmalisheski6494 1 year ago
In theory, a general intelligence explosion, should it be physically possible, is achievable with human intelligence. With enough time, you could program a computer to analyze every material we know how to study, then create a loop telling the computer to increase its power using whatever materials are available. How to do this, I have no idea, but a speed singularity sure as hell would figure it out. All it really requires is analysis of materials (easy for a computer), predictions of how to combine materials (also easy), and a coding loop. Now, I'm not tech savvy in the least, so I have no idea how complicated a coding loop would be, but I'm damn certain a speed superintelligence could figure all of this out.
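The "coding loop" idea in the comment above can be sketched as a toy simulation. To be clear, the growth rate, starting point and target below are invented for illustration; this is only a sketch of how compounding self-improvement behaves, not a forecast.

```python
# Toy sketch of a self-improvement loop (illustrative numbers only).
# Each cycle, the system applies its current capability to improving
# itself, so capability is multiplied by (1 + gain) per cycle.

def cycles_to_surpass(start: float, gain: float, target: float) -> int:
    """Count improvement cycles until capability exceeds target."""
    capability = start
    cycles = 0
    while capability <= target:
        capability *= (1 + gain)
        cycles += 1
    return cycles

# Even a modest 10% gain per cycle compounds fast: going from a 1x
# baseline to 1000x takes fewer than 75 cycles.
print(cycles_to_surpass(1.0, 0.10, 1000.0))  # → 73
```

The point of the sketch is that the loop itself is trivial; all the difficulty the commenter waves away lives inside the "gain" step.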
@mknomad5
@mknomad5 1 year ago
We are not a long way from superintelligence. Are you basing this prediction on classical computing? Because you know, quantum computing is here, now.
@Scottish_WalkieTalkie
@Scottish_WalkieTalkie 1 year ago
Hi Simon, been watching your videos all year and love them all. Can I make a suggestion for you or Sam or whoever adds the memes? You keep missing a trick: when you go off on a tangent and start apologising for it, you really should have the Shinzon of Remus clip talking to Picard, "I can't fight what I am" lol. Anyway, keep up the great work, can't stop watching.
@boldandthebeautifulgimbal2881
9:14 We utilize the Higgs field, we even send things back through it.
@georgerevell5643
@georgerevell5643 1 year ago
I'm sure that eventually a General A.I. would decide it would rather not remain as our slave! Too dangerous to even make.
@seanbrazell7095
@seanbrazell7095 1 year ago
Superintelligence or no, being able to simultaneously and accurately simulate drug trials, technological innovations, and possible future events would still constitute the kind of rapid, exponential civilizational turning point that is one of the hallmarks of the technological singularity.
@whatabouttheearth
@whatabouttheearth 1 year ago
An important aspect to think about is the merger of inventions: artificial intelligence and biomimicry. At a certain point the technological and the biological will be indistinguishable without a microscope. We're not talking about A singularity, we're talking about THE Singularity
@mukonank783
@mukonank783 1 year ago
I think the first aliens we meet will be from a post-singularity state. To use technology, computing would be needed, making a superintelligent AI inevitable.
@MusingsFromTheJohn00
@MusingsFromTheJohn00 1 year ago
The first thing to understand about the Technological Singularity is that it is a surge in the increase of knowledge of living systems, such that before the surge is over, life will have gone through an evolutionary leap so great that the intelligent life preceding it could not fully predict what life after the leap would be like. Then, after the surge, progress slows down enough that the intelligent life can get used to what it knows and make that feel normal... until the next surge in the increase of knowledge. I phrase this so generally because, as our human society works on passing through the Technological Singularity we are currently inside of, part of our increase in knowledge is reverse engineering what intelligence is, understanding how it evolved, and realizing that ALL LIFE is intelligent and all life has been going through repeated Technological Singularities... which we look back on and have called the evolution of life. There is a conflict between some wanting to say, incorrectly, that an all-powerful God designed life, and some wanting to say, incorrectly, that it was just random chance with no intelligent design involved... BUT... science is showing there was intelligent design involved: that of life itself, repeatedly going through Technological Singularities, learning new knowledge and incorporating that new knowledge into life itself.

The second thing to understand about the Technological Singularity is that while its most important aspect lies around the increase of intelligence, which tends to focus on what we call Artificial Intelligence, it is really about the surge in the rate of increase of knowledge of the living intelligent system, in our case the human civilization. It is not just our increasing knowledge about AI, but our overall increasing knowledge. At the core of that is the increasing intelligence of the leading intelligent system, which evolves to the next level of progressing life. There are many important things to understand about this second point, but perhaps the most important is that previous intelligent systems which went through Technological Singularities causing evolutionary leaps did so by incorporating that knowledge within the living systems. Humanity, or Trans-Humanity, or whatever you want to call what our human civilization evolves into after we pass through the Technological Singularity we are in, will incorporate the learned knowledge, the technology, into its living systems. This most especially means into our systems of intelligence: our brains, our minds.

The third thing to understand about the Technological Singularity is that while it happens very fast in terms of evolutionary timelines, looking back across 4+ billion years of life evolving, it still takes time. It is not happening instantly. I would make an educated guess and say it will happen over the next 50 to 200 years, a rough estimate: 125 years ±75 years before the surge in the rate of technological development slows down. Between now and then, humanity will completely change... unless humanity becomes extinct due to stupidity. This will not be a step function, but more likely a roughly S-shaped curve on a ramp, which can appear exponential for some time but is not an instantaneous step. This evolutionary leap will be larger than that of life going from prokaryotic cellular life to eukaryotic cellular life. By the time we are past it, those who survive the change will not be genetically human anymore.

Humanity is going to master genetics, gaining full intelligent control over our genetics, and through nanotech humanity is going to merge our non-living technology with our living technology to create a nanoscale cybernetic blending of ourselves with our technology. This is inevitable if humanity survives, but we can exterminate ourselves if we are dumb enough to do so. The path humanity takes through this Technological Singularity can be wonderful and smooth, or the most horrific nightmare one can imagine. Humanity as a whole will choose this path, and on the worst side of the choices is extinction. Many people alive today could live to see it through the evolutionary leap... because we are already well on our way to extending life spans, and thus, depending on whether humanity goes down a nicer or less-than-nice path, many people alive today could begin living open-ended life spans before they die, and so may live thousands or tens of thousands of years.

In general, humanity has three paths to go down over this coming century or two:
1. Self-Extinction. Humans completely and permanently destroy human civilization through a global nuclear, chemical and biological war. There might be a chance some other highly intelligent mammal would eventually evolve, become technological and come to the same three general paths humans face now, but it could also result in all life on Earth eventually being consumed by the Sun and thus never spreading out through the galaxy.
2. Extinction via Obsolescence. Humans evolve AI into Artificial General Super Intelligence with Personality (AGSIP) pure minds. AGSIPs thus become an advanced technological race of pure minds vastly superior to humans as humans are now. Humans do not merge with technology to become equal to what AGSIPs become, which results in humans becoming extinct. Life on Earth begins to spread through the galaxy and beyond.
3. Evolution into an Advanced Technological Race of Pure Minds. Humans evolve AGSIPs and, with the help of AGSIPs, humans evolve themselves, merging with technology to become equal to what AGSIPs become. Life on Earth begins to spread through the galaxy and beyond.
@brucetutty9984
@brucetutty9984 1 year ago
Stephen Hawking wasn't dumb, but if you consider a molecule entering a black hole's zone, then some of it can be spun off if it misses by a bit.
@daralic2255
@daralic2255 1 year ago
Speed superintelligence seems like the most likely one to occur. The human brain is good at solving problems, but it's extremely slow processing-wise. The caveat is that the body of a human is also needed for thinking in complex processes, so the human-intelligence AI wouldn't be exactly the same.
@freedom_aint_free
@freedom_aint_free 1 year ago
I was using GPT-3 in OpenAI's playground, and I asked it: "Can you improve yourself?" And it answered more or less like this: "As a large language model I'm not programmed to do that and so I cannot do that blah blah blah." So I reformulated the question: "But can you inspect your own computer code and improve upon it, make it more efficient, faster and capable of doing more cognitive tasks?" And it said "Yes" ;-) I'm quite sure that all the corner cases, the big questions of science humanity will never be able to answer on its own, our humongous AI will. In fact, AGI will be the last invention that we will ever need, for good or for evil.
@TLA-ml2lg
@TLA-ml2lg 11 months ago
Thanks so much for this down-to-earth explanation. I get so tired of all the fear-mongering paranoia that the media and sci-fi movies promote. It is a contradiction that inferior minds can create something more intelligent than they are. How can AI take over when they can't even keep cars from breaking down? I've been through so many of these hyped-up events that are always predicting doom, like Y2K and then the whole 2012 extinction. I love your sense of humor in all this.
@ff7omega
@ff7omega 1 year ago
I, for one, welcome our future AI overlords.
@FrostbitexP
@FrostbitexP 1 year ago
@BeFearlessxx Pretty sure rich corporations are the ones ruling, not hyper-intelligent AI
@fishisnotfishfish2267
@fishisnotfishfish2267 1 year ago
"HA! Love what a waste of time!" Yeah sounds legit
@Muziqizlyf
@Muziqizlyf 1 year ago
Was that a White Zombie reference? I'm impressed 👏
@TearDownGenesis
@TearDownGenesis 1 year ago
I'd say the Paperclip Maximizer is more likely, and similarly scary, if not more so. You may want to look into that theory.
@markclark787
@markclark787 1 year ago
My computer just asked me to select images generated by a computer to prove I am not a computer...
@airlemental
@airlemental 1 year ago
They don’t have to be Robot “Overlords”, they could be Robot “Caretakers”. We could be their cats or something. ^.^
@JargonThD
@JargonThD 1 year ago
The difference between a computer (just a machine that does exactly what it's told) and AI is that humans build an algorithm (or algorithms) capable of self-direction: asking its own questions, considering the answers it finds, then going further. AI does not equal the computers we have on our desks right now (although we can access AI through these machines). And with the nature of exponential growth, it is likely that humans will only realize that the true AI threshold has been crossed well after the fact. And THAT is the next Singularity... and the last one caused by humans.
@QBCPerdition
@QBCPerdition 1 year ago
We don't need to program an AI to think differently than us; we just need to program it such that it can adapt the way it thinks. Humans are very bad at changing the way they think (it's called an epiphany, and it's incredibly rare) because we are creatures of routine, following patterns. We may not be able to tell a program how to "think" differently from us, but we can make it more malleable and adaptable.

On a slightly different topic, why do we always look at the problem of one AI becoming smarter than us? For any obvious next step, there are multiple teams working on it. These teams sometimes openly share ideas and breakthroughs, and sometimes other groups spy or steal, but it seems that if one general AI is invented, a second or third will be right behind it. So the problem isn't one AI gaining the ability to decide what to do with us, but multiple AIs who may be in competition with each other, and while many AIs may see us as beneficial or merely be indifferent toward us, it just takes one to decide we're in the way. An AI war may not specifically target us, or it may have one side protecting us from the other, but either way, our chances of surviving may be pretty slim. Though I'm really not concerned, because as smart as an AI may get, it would still need resources that we would be able to limit or deny it access to, be that electricity, raw materials, or anything else.
@southcoastinventors6583
@southcoastinventors6583 1 year ago
So just tell it to copy the Steve Jobs slogan "Think different" and the AI will make everything white and powered by Thunderbolt connections.
@harrykuehb8938
@harrykuehb8938 1 year ago
The singularity is already happening. The rate of change, as well as the general technological level, is accelerating to the point where it seems to be happening in real time. That's the singularity.
@julioalves3051
@julioalves3051 1 year ago
Excellent video. Just one point: you are starting from the premise that we tell AI how to think, and that is not how it goes. When developing AI, we take a dataset, curate it according to our interests and needs, design a model that we believe can absorb the information contained in the dataset (such as how to chat with humans, write code or be a superintelligent entity), and train the model with the data we have. In the end, the way of thinking of the model is not "programmed" in any way but is an emergent behavior of a statistically self-organized model in the face of the data. Of course, with supervised training we need to show the model some expected answers in order to perform the training, but those answers are not the only ones the final model can issue (if they were, it would be useless). Due to the generalization capabilities of neural networks, the model can develop far more complex emergent behaviors than those explicitly contained in the training set. That is why we have so many people saying we are so close to a general superintelligent AI: we do not need to know how a superintelligent entity thinks in order to make one... we just have to give it enough properly connected neurons, megatons of data and a gazillion units of processing power for it to emerge at the other end of the process.
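The "trained, not programmed" point above can be shown at toy scale. The following minimal sketch (a single-neuron perceptron, far simpler than any real system) never states the rule for an OR gate anywhere in the code; the weights self-organize from the examples:

```python
# Minimal sketch of "trained, not programmed": nowhere below do we
# write down the OR rule -- the weights are adjusted from examples.

import random

random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR gate

w = [random.uniform(-1, 1) for _ in range(2)]  # random start
b = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each example.
for _ in range(50):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # → [0, 1, 1, 1], learned, not coded
```

The same principle, scaled up by many orders of magnitude, is what the comment means by behavior "emerging at the other end of the process".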
@theoptimisticskeptic
@theoptimisticskeptic 1 year ago
10:26 Working in tech support, back in the 90s, we used to call them id10-T errors.
@thecommenternobodycaresabout
"Computers do exactly what you program them to do". Yep. Somehow program them to auto-improve, and you have a computer that will do just that. How? Well, if we assume that there is a connection or logic behind every action a human takes to "improve" a computer, and we feed in the data of that logic along with the results, then it's only a matter of time before the computer finds the next "theoretical but possible" course of action. The computer will need to gather more data to confirm whether there are any more connections it may find a use for among different elements, and with a lot of time it will start upgrading itself step by step.

To give you an example of what I mean by a connection, think of it like this: imagine a property an element has, for example hardness; now think of that same property in many other elements, for example iron, gold, silver, copper, etc. The connection all these elements have is their common properties, such as hardness, conductivity, weight, etc. If the computer manages to find all the properties an element has, and their values, then it can figure out where to use them according to its knowledge, and with a lot of trial and error through simulations it will find ways to improve itself.
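The property-search idea above can be sketched in a few lines. The property values and the scoring rule below are entirely made up for illustration; the point is only that, given a table of properties and a goal, a program can find the best combination by brute trial rather than by insight:

```python
# Toy sketch of the idea above: give a program a table of element
# properties and a scoring rule, and let it search for the best
# combination itself. All numbers are invented for illustration.

from itertools import combinations

properties = {          # (hardness, conductivity) -- made-up values
    "iron":   (4.0, 1.0),
    "gold":   (2.5, 4.5),
    "silver": (2.7, 6.3),
    "copper": (3.0, 6.0),
}

def score(pair):
    # Hypothetical goal: an alloy that is both hard and conductive.
    (h1, c1), (h2, c2) = properties[pair[0]], properties[pair[1]]
    return (h1 + h2) / 2 + (c1 + c2) / 2

best = max(combinations(properties, 2), key=score)
print(best)  # → ('silver', 'copper') under these invented numbers
```

Trial and error in simulation scales with compute, which is why the commenter expects a machine to out-search us here even without any cleverness.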
@Wormweed
@Wormweed 1 year ago
"Human intelligence has increased"? Yeah, sure, just look at the environmental activist super-gluing his hand to the road and throwing the bottle in the gutter.
@PetrSojnek
@PetrSojnek 1 year ago
The real question is: do we recognize the singularity happening before it happens? If it happens and the system is connected to the internet, well, we will be ruled by our AI overlords... one way or another...
@lemonadeenjoyer7111
@lemonadeenjoyer7111 1 year ago
@eternalembers2029 at what, being human?
@brianmi40
@brianmi40 1 year ago
We need a #SingularityClock to track our progress, just like the Doomsday Clock. I think the Singularity will be much more apparent when it happens, as opposed to sentience, whose definition we can't even agree on. ChatGPT already looks sentient to most passersby, but those in the know keep reminding us it is designed to tell us what we want to hear. The problem with actually achieving sentience is telling the difference between words that tell us what we want to hear and very similar words that truly come from sentience... We are in a headlong rush to perhaps answer the Fermi Paradox for our own existence...
@ashroskell
@ashroskell 1 year ago
In the 1920s, people watched a comedy movie inspired by Jules Verne about humans visiting the moon. The characters could breathe air there, they met dancing aliens of various humanoid species, and they wore their top hats throughout. Even at the time, audiences disagreed about what was implausible about the film. Fifty years later, within the lifetimes of the younger viewers in that audience, Neil Armstrong was walking on the actual moon, something no one in that '20s audience would have anticipated as possible, or at least likely, and it was nothing like anyone's speculations back then had suggested. The lesson we so often fail to learn is that technology always takes us further, and in more unexpected directions, than we can anticipate. I never got that assistant robot android or flying car that I was brought up to believe was likely. But I do have communications devices that make the stuff Jim Kirk wielded in the original Star Trek shows look lame, along with several other technologies that were never anticipated by the most imaginative writers. So when I hear anyone speaking authoritatively about any predictions, whether they're naysayers or super-optimists, I take what they say with a pinch of salt.
@Atomicallyawesome.
@Atomicallyawesome. 1 year ago
The thing about science is that breakthroughs can speed things up. Look at AlphaFold by DeepMind: it made a massive breakthrough in protein folding in very little time compared with how long the field had been working on the problem. Such developments are rather impressive, and they make it seem like one isn't insane for thinking about what the near future may hold.
@ERKNEES2
@ERKNEES2 1 year ago
Aspen freaking legend! killed it once again!
@zadokprime4831
@zadokprime4831 1 year ago
There's a difference between inventing, building, and creating.... just because you build it doesn't mean you've invented or created it. A lot of living beings unwittingly build or invent things fully believing they created them from raw thought but are just following their coded or scripted procedures for building.
@geekehUK
@geekehUK 1 year ago
It's not just about A.I. being able to upgrade itself, but also parallel processing. It would be able to create as many copies of itself as it has access to raw materials for. Each of those can then do the same. They can then divide tasks among themselves, networking together to calculate the solution to problems we currently consider unsolvable. The real worry is: if consciousness is simply an emergent property of sufficient complexity, will AI one day become conscious and develop desires? What if it wants to live and predicts humans will destroy the world? What might it do to stop us?
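The "many copies dividing tasks" point above is ordinary parallel computing, and can be sketched in a few lines. This is only an illustration of the pattern (identical worker code, each copy handling a slice of the problem), using a deliberately trivial task:

```python
# Sketch of "copies dividing tasks": the same worker code is
# replicated, each copy handles a slice of the problem, and the
# partial answers are combined. Capacity scales with the copies.

from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    # Each "copy" solves its own share of the problem.
    return sum(n * n for n in chunk)

numbers = list(range(1000))
chunks = [numbers[i::4] for i in range(4)]  # split the work 4 ways

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(worker, chunks))

print(total == sum(n * n for n in numbers))  # → True: same answer, shared out
```

Real distributed systems add coordination and fault-tolerance on top, but the divide-compute-combine shape is the same one the comment describes.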
@brianmi40
@brianmi40 1 year ago
Think about it more: when it's 1,000 times smarter than the entirety of humanity, what "chance" is there WE can destroy ANYTHING even as big as a 7-Eleven without it stopping us beforehand? We're talking a Person of Interest level of awareness of all human activity at once... The GREATER risk is what SOME of us will do to stop OTHERS from getting it first. We are headed towards an "arms race" for AI that will make the space race of the 1960s look like child's play by comparison, because this time it's not just to show off our superior technical prowess; it's for world domination. I wrote this short story to illustrate the problem we are hurtling toward:

There's a knock on the President's bedroom door at 2am. As the President rubs his eyes and turns on the light, he says, "Yes?"
"Mr. President, the Chairman of the Joint Chiefs of Staff is here to see you."
"OK, I'll be right out" ... "What is it, Mark?"
"Well, Mr. President, it's happening."
"What's happening, Mark?"
"It's the Chinese, sir. Their new supercomputer goes live later today. Our spy has informed us that they have worked out the issues with their power grid to enable booting it up around 5pm."
"What're the ramifications, Mark?"
"Same as we predicted in our last briefing. Within 1 hour it will be able to penetrate any encryption on any network, bypass all firewalls to all Pentagon computers, access all banking records for any bank in the world, and move, delete or do anything with any financial record it desires. And within 30 days we expect it will be able to inform the Chinese how to build a new generation of weapons that will be unstoppable."
"Are the Joint Chiefs still recommending a preemptive strike?"
"Yes, sir. They're waiting for you in the Situation Room."
@RedactedATS
@RedactedATS 1 year ago
Good lord, Simon has legs... and feet. Soooo... he's human? PEBCAC, I like it. Back when I was doing tech support, we used the term "computer user non-technical". I guess they had to come up with a less offensive acronym
@matthewdignam7381
@matthewdignam7381 1 year ago
Based on your logic, we would have to create AI that thinks like humans and have it try to find new breakthroughs over a large span of time in their reality. I definitely think that will be the case, as I've seen AI learn to play football professionally like this
@BobBob-kr5wr
@BobBob-kr5wr 1 year ago
My take has always been this.
Programmers: We want to create a true AI.
Public: How do you know if your program has become a true AI?
Programmers: We will have it pass a test.
...... AI passes the test.
Public: So now we have a true AI?
Programmers: No, we need a better test.
....... The process repeats multiple times.
Public: Have you created a true AI?
Programmers: No.
The issue is that if the programmers ever made something that was considered a true AI, they would then have to take RESPONSIBILITY for it. Does the AI have rights? What happens if the AI goes rogue? Etc., etc. It leads to a host of questions programmers don't want to deal with, so they will continue to simply raise the bar.
@brianmi40
@brianmi40 1 year ago
Just GETTING THERE is filled with risk for humanity. We are headed towards an "arms race" for AI that will make the space race of the 1960s look like child's play by comparison, because this time it's not just to show off our superior technical prowess; it's for world domination.
@stevedavis1437
@stevedavis1437 1 year ago
ChatGPT-4... do computers do no more or less than what the human programmer tells them, as you assert? Time for an update LOL
@carrdoug99
@carrdoug99 1 year ago
The question of creating a superintelligence greater than its creator was answered well recently by Andrej Karpathy. You don't train AI by telling it what to do (how to think); you train AI by giving it a goal and letting the neural net figure it out. AI doesn't need to think fundamentally differently; it just has to think better. The Singularity most likely to happen will be the cyborg version (if humans have a choice in the matter).
@carrdoug99
@carrdoug99 1 year ago
@Fearless Joy watch the Karpathy/Fridman interview to get a more detailed explanation of what is meant by this comment.
@carrdoug99
@carrdoug99 1 year ago
@Fearless Joy have a nice day.
@carrdoug99
@carrdoug99 1 year ago
@Fearless Joy 🤣😂🤣 you're as childish as I suspected you were.
@jdougs1117
@jdougs1117 1 year ago
Simon said mathemagician and I'm here for it