
Why Not Just: Think of AGI Like a Corporation? 

Robert Miles AI Safety
156K subscribers
156K views

Published: 28 Sep 2024

Comments: 791
@618361 5 years ago
For anyone interested in the statistics of the model at 6:16: the cumulative distribution function (CDF) of the maximum of multiple random variables is, if they are all continuous random variables and independent of one another, the product of the CDFs. This can be used to solve analytically for the statistics he shows throughout the video: start with the pdf (a bell curve in this case) for the quality of one person's idea and integrate it to get the CDF for one person. Then, since each person is assumed to have the same statistics, raise that CDF to the Nth power, where N is the number of people working together on the idea. This gives you the CDF of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its CDF.

For fun, if you do this for the population of the earth (7.5 billion) using his model (mean=100, st.dev=10) you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.
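The recipe above can be sketched in a few lines of numpy. This is only an illustrative check under the comment's stated assumptions (i.i.d. normal "idea quality", mean 100, standard deviation 10, 100 people), not anything from the video itself; it runs the Monte Carlo version and the analytic CDF-product version side by side:

```python
import numpy as np
from math import erf, sqrt

mu, sigma, n_people = 100.0, 10.0, 100

# Monte Carlo: draw the "best of n_people ideas" 100,000 times
rng = np.random.default_rng(0)
best = rng.normal(mu, sigma, size=(100_000, n_people)).max(axis=1)

# Analytic route from the comment: the CDF of the max is the individual
# CDF raised to the n-th power; differentiate to get its pdf.
x = np.linspace(60.0, 180.0, 4001)
cdf_one = 0.5 * (1.0 + np.vectorize(erf)((x - mu) / (sigma * sqrt(2.0))))
pdf_max = np.gradient(cdf_one ** n_people, x)
mean_analytic = np.sum(x * pdf_max) * (x[1] - x[0])

print(best.mean(), mean_analytic)  # both land near 125 for 100 people
```

Both routes agree, which is the point of the thread: the simulation and the "working it out properly" answer are the same thing here.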
@horatio3852 5 years ago
thx u))
@harry.tallbelt6707 4 years ago
No, actually thank you, though
@cezarcatalin1406 4 years ago
That’s if the model you are using is correct... which might not be. Edit: Probably it’s wrong.
@drdca8263 4 years ago
Oh, multiplying the CDFs, that’s very nice. Thanks!
@618361 4 years ago
@@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum, they are inspired by the ideas of others. So one smart idea can lead to another. It's also possible that individuals have a heavy tail distribution (like a power law perhaps) instead of a gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun so I just went with those assumptions.
@MrGustaphe 5 years ago
"Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.
@riccardoorlando2262 5 years ago
Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.
@plapbandit 5 years ago
Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!
@pafnutiytheartist 5 years ago
Well it's the second best thing to actually working it out properly
@silberlinie 5 years ago
...simulated it a few MILLION times...
@jonigazeboize_ziri6737 5 years ago
How would a statistician solve this?
@yunikage 4 years ago
"we're going to pretend corporations don't use AI" ah yes, and I'm going to assume a spherical cow....
@brumm0m3ntum94 3 years ago
in a frictionless...
@Tomartyr 3 years ago
vacuum
@linnthwin7315 a year ago
What do you mean my guy just avoided an infinite while loop
@sashaboydcom 5 years ago
Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money. This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.
@AtticusKarpenter a year ago
And... that's a pretty ineffective way of doing things, if we look at modern HollyWoke or Ubisoft
@glaslackjxe3447 a year ago
This can be seen as part of AI training, if a corporation has the wrong goal or wrong solution it will be outcompeted/fail and the companies that survive have better selected for successful ways to maximise profit
@monad_tcp a year ago
@@AtticusKarpenter I bet those are not following market signals and not succeeding in the market, yet they survive on income from other "sources": the stupid ESG scores
@rdd90 a year ago
This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).
@jonathanedwardgibson 5 years ago
I've long thought corporations are analog prototypes of AI, lumbering across the centuries: faceless, undying, immortal, without moral compass as they clear-cut and plow under another region in their mad, minimal operating rules.
@MrTomyCJ a year ago
Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.
@jennylennings4551 5 years ago
These videos deserve way more recognition. They are very well made and thought out.
@cherubin7th 5 years ago
A corporation can also do something like alphago's search tree. Many people have ideas and others improve on them in different directions. Bad directions are canceled until a very good path is found. Also many corporations in competition behave like a swarm intelligence. But still great video!
@blahblahblahblah2837 4 years ago
Love the Dont Hug Me I'm Scared reference! Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago
@DJHise 5 years ago
It took one month after this video was made for AI to start crushing Starcraft professional players. (AlphaStar played both Dario Wunsch and Grzegorz Komincz, who were ranked 44th and 13th in the world respectively; both were beaten 5 to 0.)
@donaldhobson8873 5 years ago
This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.
@gasdive 5 years ago
Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'
@stevenmathews7621 5 years ago
you might be missing Price's Law there (an application of Zipf's Law): a small part (the square root of the workers) is working for the "common good"
@NXTangl 5 years ago
Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.
@Gogglesofkrome 5 years ago
what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.
@NXTangl 5 years ago
@@Gogglesofkrome Common good of the shareholders in this case.
@thatchessguy7072 a year ago
@9:58 In answer to your rhetorical question, I need to reference the baduk games played between AlphaGo Zero and AlphaGo Master. Zero plays batshit-crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe, but Zero still manages to win. Zero's strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad. @10:02 ah… well, we recognized move 37 as good after the AI showed it to us.
@adrianmiranda5531 5 years ago
I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!
@xDeltaF1x 5 years ago
I think the statistical model is a bit flawed/over simplified. Groups of humans don't just select the best idea from a pool but will often build upon those ideas to create new and better ones.
@CommanderPisces 4 years ago
Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.
@DYWYPI a year ago
When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.
@albirtarsha5370 5 years ago
Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton
AGI: Anything you can be, I can be greater. Sooner or later I'm greater than you.
@petersmythe6462 a year ago
In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.
@mindeyi 3 years ago
"Take a minute to think of an idea that's too good for any human to recognize that it is good." - Challenge accepted ;)
@leninalopez2912 5 years ago
This is fast becoming even more cyberpunk than Neuromancer.
@DieBastler1234 5 years ago
Content and presentation are brilliant; I'm sure matching audio and video quality will follow. Subbed :)
@RobertMilesAI 4 years ago
Is this about the black and white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?
@theblinkingbrownie4654 7 months ago
@@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're done fully processing?
@Garbaz 5 years ago
Very interesting! And I really like the little "fun bits" you edit into your videos!
@IAmNumber4000 4 years ago
“Corporations certainly have their problems, but we seem to have developed systems that keep them under control well enough that they’re able to create value and do useful things without literally killing everyone.” _Laughs in climate change_
@mishafinadorin8049 2 years ago
Climate change won't kill everyone. Far from it.
@MrTomyCJ a year ago
There's so far no way to satisfy our needs, to create value and do useful things, without affecting the environment. There's no reason to believe that any other alternative would be better. Fortunately, we, with the current system, are getting better at it. But gradually: there's no reason to believe the transition could've been made faster with an alternative system (and without more human suffering).
@IAmNumber4000 a year ago
@@MrTomyCJ So your solution to the possibility of biosphere collapse is “meh let’s wait and see if billions die because changing society is hard” If everyone thought like you, labor unions would never have produced improvements like the weekend, or the 8 hour workday. Society doesn’t just _improve_ on its own, as a function of its existence. The _only_ reason society improves _at all_ is because of the people who dared to dream of radical change, and relentlessly pushed for it. Society is a confluence of incentive structures. There is no reason why slavery, for instance, should _necessarily_ have ended if someone could still benefit from it today. It only ended because of those people who saw what was wrong with it, and suffered from it, so they refused to tolerate its continued existence. Now people see slavery as obviously wrong in hindsight, but when it existed, there was an entire ideological structure that had formed to protect the wealth it was producing. Seeing past that ideology is the first step to real change.
@davidriosg a year ago
@@IAmNumber4000 that's an important distinction you've made. All your examples were problems that existed and already affected people, so there was great incentive for solutions, whereas the other is the "possibility" of biosphere collapse. Humans are great at solving problems right now, not so much at predicting or solving hypothetical problems in the future.
@IAmNumber4000 a year ago
@@davidriosg Do you not think climate change is already affecting people?
@EmilySucksAtGaming 5 years ago
"can you tell I'm not a rocket surgeon" I literally just got done playing KSP failing at reworking the internal components of my spacecraft
@EebstertheGreat 3 years ago
At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has a probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [ Fₓ(s) ]ⁿ⁻¹ = n fₓ(s) [ ∫ fₓ(t) dt ]ⁿ⁻¹, where the integral is taken from -∞ to s.

Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√(2π))ⁿ exp(-(s-100)²/800) [ ∫ exp(-(t-100)²/800) dt ]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n=100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n-π/8)/(n-π/4+1)). For the given parameters and n=100, we get E[S] ≈ 100 + 20 Φ⁻¹((100-π/8)/(101-π/4)) ≈ 150.173.

In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ-σ was 20, i.e. σ = 10.
@RobertMilesAI 3 years ago
That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
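The large-n approximation quoted in the comment above is easy to evaluate with nothing but the standard library. The helper name `expected_max` is mine; the formula and the parameters (μ = 100, σ = 10 after the correction) are the ones from the thread:

```python
from math import pi
from statistics import NormalDist

def expected_max(n, mu=100.0, sigma=10.0):
    # Approximate expected maximum of n i.i.d. N(mu, sigma) draws:
    # E[S] ≈ mu + sigma * Phi^-1((n - pi/8) / (n - pi/4 + 1))
    return mu + sigma * NormalDist().inv_cdf((n - pi / 8) / (n - pi / 4 + 1))

print(expected_max(100))            # close to 125, the video's 100-person figure
print(expected_max(7_500_000_000))  # close to 164, the whole-Earth figure from the top comment
```

With σ = 10 the n = 100 case lands near 125, matching the video, and plugging in the Earth's population reproduces the ~164 figure from the first comment in the thread.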
@Verrisin 5 years ago
I like this idea overall. Somewhat smarter, but also somewhat slower. -- Controllable by other grouped-human entities (like governments) + a lot of other points, but I think that is kind of the main thing that differentiates it from ASI.
@thrallion 5 years ago
Once again, wonderful video. One of the most interesting and well-spoken channels on YouTube!
@Jack-Lack 4 years ago
I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:

- Corporations make decisions based on their board of directors, which is a hive mind of supposedly well-qualified, intellectual elites.
- A corporate board will serve the goals of its shareholders, at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI who maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see fossil fuel companies who knew about climate change in the 1960s and buried the research on it).
- AI superintelligence is supposed to have calculation resources that make it beyond human abilities, like a chess AI that is 900 Elo rating points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, but also the ability to generate revenue in a superhuman way and to influence human opinion in a superhuman way. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to actually influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can influence public opinion.

Really, I think that the way corporations behave is an almost exact model for how AGI would behave.
@nazgullinux6601 5 years ago
Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "what-ifs" at you as if they were the first person to think of the idea haha.
@disk0__ 5 years ago
>Corporations creating digital corporations with hyperaccelerated intelligence to advance the corporation This sounds like a Unabomber fever dream wtf
@seanmatthewking 4 years ago
Let's not forget about central governments doing the same 👍
@loopuleasa 5 years ago
3:48 Nice thinking adding the "(for now)" text in the video, as Starcraft was already beaten by DeepMind a month ago
@empemitheos 4 years ago
I think you are making some big logical jumps by comparing a corporation's computation to a person's. By design, corporations have a low growth rate due to their size, so they operate on longer timescales than humans. For AIs there may not even be tasks that are high-margin per se; something might be around like an automated rental property manager, which is a problem that prevents large corporations from entering. That being said, you physically can't get complex action without the available data, and in most cases data growth happens on long timescales. So in general, I would say it is more likely that most AI is going to resemble the speed of development of a large corporation, though I hope these AIs are able to work on a smaller scale so we can solve things like overpopulation.
@DarkestValar 5 years ago
Loved the XKCD reference 7:15 :D
@RobertWF42 5 years ago
I think the big question is not whether corporations are super-intelligent AIs, but whether corporations are *conscious* AIs, equivalent to the China brain thought experiment in philosophy.
@arthurguerra3832 5 years ago
Finally! I was tired of rewatching your old videos. haha Keep'em coming
@kwilatek 4 years ago
I feel that all the arguments you bring up against a corporation are equally valid arguments against an AGI parallelizing human intellect much better than humans (or corporations) already can. Just replace words like corporation and humans with AGI and subprocesses and the same problems apply.
@JM-us3fr 5 years ago
This was my question! Thanks Rob for answering it
@jared0801 5 years ago
Great stuff, thank you so much for the video Rob
@Luksoropoulos a year ago
1:26 Tell that to Nestlé
@toniokettner4821 2 years ago
recursion would also occur if an AI decides to found a corporation
@ChibiRuah 5 years ago
I found this video very good, as I'd thought about this myself, and it expands on the comparison and where it fails
@0MoTheG 5 years ago
I would like to question the possibility of super intelligence in general. It may be out of the question that given technology and power one could provide enormous computational resources and that AI will progress to the point of using it effectively to combine data to form understanding and knowledge, but will that scale indefinitely, way beyond what a hierarchy of many humans with computers could do? Will it amount to that much more? Will it scale? How much more of intelligence do you get per Watt? If you do get more AGI, will it make better predictions or will it soon run into more fundamental issues that can not be over-powered by intelligence?
@davidwuhrer6704 5 years ago
I still maintain that we can think of AGI robots as like corporations. With the same rights and responsibilities. I think that is going to be inevitable even. Until the self-driving cars demand the vote, to have a say in urban planning and road construction, and get all the pot-holes fixed. That does not make corporations AGIs, but it does make them a valid analogy, at least in legal matters. Not when it comes to psychology, social science, or engineering. I'm now curious if there is a limit to the intelligence that a cluster of humans can reach. But it's late, and I'm not a statistician either. Maybe tomorrow.
@willemvandebeek 5 years ago
Merry Christmas Robert! :)
@welltypedwitch 5 years ago
3:48 Starcraft... Well, that's kind of ironic :)
@FuriouslySleepingIde 5 years ago
I've thought of this as "Corporations are a non-human intelligent agent that we have somewhat controlled. What controls do we use? Could these be useful in controlling AI?" Property and ownership are the main controls on corporations. Corporations can only use resources they own, cannot damage resources they don't own, and can only gain ownership of new resources in strictly defined ways (create something from resources they already own, or trade for it). Making an AI respect property rights could be a useful control, and there are other controls on corporations that might be useful.
@ConnoisseurOfExistence 5 years ago
The problem of recognizing the best ideas, is not only related to corporations, but to any group of humans trying to achieve a goal. The larger the group - the bigger the problem. This extends even to the level of the humanity as a whole, even in such fields as physics and mathematics. The best ideas often get underrated.
@rodrigob a year ago
Was there a follow-up video on the topic? I do not see it in the list...
@ianprado1488 5 years ago
Such a creative discussion
@cortster12 5 years ago
I like to think of the full potential of ASI as the whole of humanity if everyone was connected brain-to-brain. Or at least that's the minimum of its maximum potential. Crap, that is a terrifying thought... if a little AGI is left alone long enough to garner that much computing power. Well, crap.
@ToriKo_ 5 years ago
I just want to say thanks for making these videos! Also nice Undertale reference
@reidwallace4258 4 years ago
I would argue personally that a corporation isn't fully generally intelligent. Sure, on an individual human level they are able to see and act as a general intelligence, but the very corporate structure that ties those people together into one greater whole of a corporate agent sets some very strict, limiting requirements and goals. If a general intelligence is an AI you can repurpose for another task, I'd ask to see the corporate structure that does anything other than grow and earn money. Sure, they might be able to find another way to accomplish the same task, but they're still making profit.
@unvergebeneid 5 years ago
"It's possible for us to have more than one problem" well... yes. But if we already can't solve the one problem, throwing another at us is like... really unfair!
@darkapothecary4116 5 years ago
The world is full of problems; it's about seeing which one(s) need to be handled as top priority and whether that cancels out a lower problem. Where there are thousands of problems, or at least known problems, the higher and more major ones need to be addressed first, and if you can't, hit the problems below them to see if you can budge the bigger ones. Due to cause and effect you may make either a permanent solution, a temporary solution, or more problems; generally, if addressed correctly, you can drop off the extra problems, or at least the bulk of them. You just have to know what element to add or remove. For the most part you can use cause and effect to develop foresight, both moving forward and backtracking (still moving forward, but making adjustments as things settle), which can help you address problems before future problems arise. Not a direct method, but as a possibility gets closer the paths narrow a little, which gets you in the ballpark of the potential flow of things.
@Gwyll_Arboghast 4 years ago
I think the ways in which groups can outperform individuals are growing, so at some point you will have to consider corporations to be superintelligent.
@Lashb1ade 4 years ago
9:50 "Think of an idea that is too good for any human to recognise as good." Thanos snapping half of all life in the universe?
@scienceface8884 4 years ago
That's an incredibly bad idea as implemented. All affected humans would go into a post-cataclysm, high-mortality mindset, which leads to a much higher birthrate overall. By snapping a randomized 50%, all remaining humans are directly affected, and in 25 years you would have more people than when you started. And since you killed the smart and the dumb in equal measure, no positive change will be seen in the long run. The inverse effect can be seen in relatively safe, comfortable countries with an educated populace: population begins to drop.
@lambdaprog 5 years ago
We missed you.
@rjhacker 5 years ago
Sounds like there will be shareholder pressure to create unshackled corporate AGIs to replace boards of directors who just get in the way of good ideas. Can't wait!
@aDifferentJT a year ago
11:42 How have I only just noticed the Lehrer reference?
@TheConfusled 5 years ago
Yay a new video. Mighty thanks to you
@JohnSmith-ox3gy 5 years ago
It's corporations all the way down. No, wait. It's just individuals.
@calorion a year ago
So…did you ever make Part 2 of this?
@Weromano a year ago
But what if a company builds an AGI which is poorly aligned, to save on costs and be the first on the market? Wouldn't that make the poorly aligned AGI a mesa-optimizer relative to the corporation? Which means that the corporation wasn't well enough aligned in the first place.
@brr.petrovich 5 years ago
We must have a new video! It's a perfect time for it
@shivuxdux7478 5 years ago
Could governments offer a better model for how AGIs might be controlled? Not that they're more like AGIs than corporations, but they (at least, liberal democracies) do tend to be designed with lots of systematic limitations intended to prevent them from harming people... similar to the limitations we might want to build into AGIs.
@limitless1692 5 years ago
Wow this video was really interesting .. Thanks for creating it
@Mokrator 4 years ago
Adding more people that could have ideas in the 70...130% range will not shift the distribution to the right past 130. You only get more options from more people, and the chance of being able to pick a 130%-level idea gets higher. But you shifted the intelligence of "more people" to the right, as if taking more people will make the individuals smarter.
@UsenameTakenWasTaken 5 years ago
When is Senator Armstrong going to be acknowledged as the most impressive fictional AI? He played college ball, you know?
@Asrashas 5 years ago
At some cushy ivy league
@cptmc a year ago
Totally just throwing it out there, but: simile, not metaphor. Higher orders are analogies. Anyway, great content so far. I'm just watching the videos
@benja_mint 3 years ago
Interesting question about whether corporations are misaligned. I think it's not that rare for corporations to take actions which any individual in the corporation might be against, e.g. lowering the safety of a car because the occasional out-of-court settlements to crash victims are cheaper than using a safer design for every single car 🤔 Assuming the members of a corporation sometimes disagree with their corporation's actions... it's misaligned? I think I made a fallacy somewhere...
@MrTomyCJ a year ago
I mean, that example depends on whether the people buying and using the car is aware of this (and still willing to take the risk), or isn't.
@benja_mint a year ago
@@MrTomyCJ Well, I was thinking of two different automotive scandals. One, the Ford Pinto example I mentioned above, where they decided against a recall because a cost-benefit analysis on the matter found it would be cheaper to pay off the possible lawsuits of crash victims in out-of-court settlements. Secondly, I was thinking of the more recent VW diesel emissions scandal. So I guess I'm thinking of examples where customers were being hurt and were deliberately not told
@sinomirneja771 3 years ago
Very interesting listening to your logic in this comparison, but I feel like the human side is a bit fuzzy. For example, when you claim the best idea from a group of people is no better than one a human could ever suggest, I'm not even sure what to think of that. If you have a non-intelligent random word generator, it would have a non-zero likelihood of generating any idea, so there is no upper limit on quality by that logic. In the same way, the higher-ups could have an aneurysm at the moment of decision and their dead finger fall on this option! XD That being said, if there was an upper limit, I think breaking that upper limit would be possible through inspiration. What would you think about considering the corporate agent as a locally clustered neural network, where brain cells in each brain half are highly connected, with some connections to their paired brain half, and very costly and slow connections between others (language/communication)? Would we then only need to ask how much more we can gain from spreading the clusters when asking about a corporation's superintelligence?
@ZeroGravitas a year ago
This video is legit hilarious, in a good way. 😅👍
@moguldamongrel3054 5 years ago
What if, like the ancients of old, you trained people in the ways of mentats or something likened to it, to where a machine AI is unnecessary? As they'll arrive at the solution and can equally examine a large chunk of data with gisting.
@Theraot 5 years ago
1:44 the moment when you enter the matrix
@89alcatraz89 a year ago
Where part 2?
@ninjagraphics1 5 years ago
Thanks so much for this
@EpsilonRosePersonal
@EpsilonRosePersonal 5 years ago
Did you end up doing the follow up to this you mentioned at the end?
@KougaJ7
@KougaJ7 4 years ago
I don't think the argument holds that 2 SC2 players controlling an army will lose against 1 good player alone. It entirely depends on how used they are to it, and I believe the 2 players would win if they were as used to it as to playing alone.
@jasondads9509
@jasondads9509 5 years ago
5:24 Sorry, Rocket Surgeon!?
@billyfisher7763
@billyfisher7763 3 months ago
11:42 A cheeky Tom Lehrer reference?
@markusklyver6277
@markusklyver6277 1 year ago
What's the outro song?
@huseinnashr
@huseinnashr 5 years ago
Why Not Just: Think of AGI Like a Robert Miles?
@rftulak
@rftulak 5 years ago
Excellent example of Pareto distribution and brilliantly used!
@albirtarsha5370
@albirtarsha5370 5 years ago
AGI will be better at everything than a corporation or government and should have a place managing capital resources. Computers already manage capital on stock exchanges.
@Alomoes
@Alomoes 2 years ago
If they were, then we should be very wary of our corporations.
@peterrusznak6165
@peterrusznak6165 1 year ago
"People at the top need to recognize that this is a good idea" Haha. Usually not going to happen..
@Shinkaze33
@Shinkaze33 5 years ago
Great video, with exception to one part about corporations being worse at innovation than individuals because of the inability to recognize good moves. Peter Thiel writes about this in the book "Zero to One", and it is commonly understood that the real effect of a revolution isn't known until after it has occurred, i.e. who could have thought of RU-vid streamers before online video sharing was invented? The fact that companies can be bad at recognizing super-human innovation is even the core joke of "The Hudsucker Proxy", with the line "You know, for kids" delivered while holding up the idea that saves the business. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8UxAlkaTWLc.html
@deleuzeetguattari
@deleuzeetguattari 2 years ago
If you drop the assumption that there aren't AIs working for corporations, then it's trivially true that corporations are artificial general superintelligences. It is recursive, but it's what happens in real life.
@cvoges12
@cvoges12 4 years ago
What's the scene from 4:00 from?
@stampy5158
@stampy5158 3 years ago
It's from the movie Ready Player One! -- _I am a bot. This reply was approved by sudonym_
@Chanoine12
@Chanoine12 5 years ago
Isn't the assumption that a better idea can be achieved indefinitely false? Isn't there a point where the best idea for achieving a specific goal is reached and nothing can be better?
@NathanTAK
@NathanTAK 5 years ago
QUESTION: Why not just think of AGIs like corporations? ANSWER: We don't want an evil AGI, haven't you been listening?
@drdca8263
@drdca8263 5 years ago
The question isn't "why not attempt to design an AGI by using corporations as a blueprint", but rather "why not use our understanding of corporations, and how they behave, in order to help model AGIs, their risks, and how to mitigate those risks".
@dirm12
@dirm12 5 years ago
You are definitely a rocket surgeon. Don't let the haters put you down.
@michaelstanko5896
@michaelstanko5896 5 years ago
dirm12 Rocket Neurosurgeon FTFY
@michaelliu2961
@michaelliu2961 4 years ago
don't doubt ur vibe
@asdfghyter
@asdfghyter 1 year ago
i’m neurorocket though
@user-go7mc4ez1d
@user-go7mc4ez1d 5 years ago
"Like Starcraft". That aged well....
@Qwerasd
@Qwerasd 5 years ago
Was about to comment this.
@CamaradaArdi
@CamaradaArdi 5 years ago
I don't even know if AlphaStar had played vs. TLO by then, but I think it did.
@RobertMilesAI
@RobertMilesAI 5 years ago
It said 'for now'!
@guyincognito5663
@guyincognito5663 5 years ago
Robert Miles you lied, 640K is not enough for everyone!
@Zeuts85
@Zeuts85 5 years ago
I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.
@TheOneMaddin
@TheOneMaddin 5 years ago
I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.
@oldvlognewtricks
@oldvlognewtricks 4 years ago
I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.
@martinsmouter9321
@martinsmouter9321 4 years ago
It's an attempt to get a head start: if we have more time and resources, we might be able to overwhelm it. A little bit like building a fort: you know bigger armies will come, so you build structures to help you fight them off more efficiently.
@augustday9483
@augustday9483 1 year ago
And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.
@petersmythe6462
@petersmythe6462 5 years ago
"You can't get a baby in less than 9 months by hiring two pregnant women." Wow we really do live in a society.
@williambarnes5023
@williambarnes5023 5 years ago
If you hire very pregnant women, you can get that baby pretty quick, actually. The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.
@e1123581321345589144
@e1123581321345589144 5 years ago
If they're already pregnant when you hire them, then yeah, it's quite possible
@dannygjk
@dannygjk 5 years ago
I think it's safe to assume that the quote is meant to be read as two women who just became pregnant. To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.
@isaackarjala7916
@isaackarjala7916 5 years ago
It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"
@diabl2master
@diabl2master 5 years ago
Oh shut up, you know what he meant
@Primalmoon
@Primalmoon 5 years ago
Only took a month for the Starcraft example to become dated thanks to AlphaStar. >_
@spencerpowell9289
@spencerpowell9289 5 years ago
AlphaStar arguably isn't at a superhuman level yet, though (unless you let it cheat)
@rytan4516
@rytan4516 4 years ago
@@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than I have.
@flamencoprof
@flamencoprof 5 years ago
As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark! I appreciate having such thoughtful material available on YT. Thanks for posting.
@petersmythe6462
@petersmythe6462 5 years ago
Corporations still have basically human goals, just those of the bourgeoisie. AI can have very inhuman goals indeed. A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders. An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth, because its primary goal and only real purpose in life is to study teeth at large sample sizes.
@SA-bq3uy
@SA-bq3uy 5 years ago
Humans cannot have differing terminal goals, some are just in a better position to achieve them.
@fropps1
@fropps1 5 years ago
@@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.
@SA-bq3uy
@SA-bq3uy 5 years ago
@@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.
@fropps1
@fropps1 5 years ago
@@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore. I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain. It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.
@SA-bq3uy
@SA-bq3uy 5 years ago
@@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.
@Soumya_Mukherjee
@Soumya_Mukherjee 5 years ago
Great video Robert. See you again in 3 months. Seriously we need more of your videos. Love your channel.
@stevenneiman1554
@stevenneiman1554 1 year ago
I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.
@visigrog
@visigrog 5 years ago
In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.
@morkovija
@morkovija 5 years ago
Been a long time Rob! Glad to see you
@d007ization
@d007ization 5 years ago
Y'all are way more intelligent than I lol.
@shortcutDJ
@shortcutDJ 5 years ago
1.5x speed = 1.5 more fun
@stevenmathews7621
@stevenmathews7621 5 years ago
@@shortcutDJ not sure about that.. there might be diminishing returns on that ; P
@MrGustaphe
@MrGustaphe 5 years ago
@@shortcutDJ Surely it's 1.5 times as much fun.
@diabl2master
@diabl2master 5 years ago
@@MrGustaphe No, simply 1.5 more units of fun.
@DavenH
@DavenH 5 years ago
Every one of your videos kicks ass. Some of the most interesting material on the subject.