For anyone interested in the statistics of the model at 6:16: The cumulative distribution function (cdf) of the maximum of multiple random variables is, provided they are all continuous random variables and independent of one another, the product of their cdfs. This can be used to solve analytically for the statistics he shows throughout the video: start with the pdf (a bell curve in this case) for the quality of one person's idea and integrate it to get that person's cdf. Then, since each person is assumed to have the same statistics, raise that cdf to the Nth power, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf. For fun, if you do this for the population of the Earth (7.5 billion) using his model (mean = 100, st. dev. = 10), you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.
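A quick sketch of that calculation in Python (stdlib only; the integration window and step count are my own choices): integrate s · n·f(s)·F(s)ⁿ⁻¹ numerically, using `erfc`/`log1p` so Fⁿ⁻¹ stays accurate when F is extremely close to 1.

```python
import math

def max_cdf_mean(n, mu=100.0, sigma=10.0, z_lo=3.0, z_hi=9.0, steps=60000):
    """Mean of the max of n iid Normal(mu, sigma) variables, computed
    from f_S(s) = n * phi(s) * Phi(s)**(n-1), working in z-units."""
    dz = (z_hi - z_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        z = z_lo + i * dz
        q = 0.5 * math.erfc(z / math.sqrt(2))  # upper tail 1 - Phi(z)
        log_cdf = math.log1p(-q)               # log Phi(z), accurate near 1
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        f_s = n * pdf * math.exp((n - 1) * log_cdf)
        w = 0.5 if i in (0, steps) else 1.0    # trapezoid rule weights
        total += w * z * f_s * dz
    return mu + sigma * total

# the whole-Earth corporation from the comment above
print(round(max_cdf_mean(7_500_000_000), 1))  # a value close to 164
```

For n = 7.5 billion essentially all of the probability mass of the maximum sits between z = 3 and z = 9, so the truncated integration window loses nothing noticeable.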
@@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum; they are inspired by the ideas of others, so one smart idea can lead to another. It's also possible that individuals have a heavy-tailed distribution (a power law, perhaps) instead of a Gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of zero, Fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun, so I just went with those assumptions.
"Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.
Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money. This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.
This can be seen as analogous to AI training: if a corporation has the wrong goal or the wrong solution, it will be outcompeted and fail, and the companies that survive have been selected for successful ways to maximise profit.
@@AtticusKarpenter I bet those aren't following market signals and aren't succeeding in the market; they survive on income from other "sources", like the stupid ESG scores.
This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).
I've long thought corporations are analog prototypes of AI: lumbering across the centuries, faceless, undying, immortal, without moral compass as they clear-cut and plow under another region, following their mad, minimal operating rules.
Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.
A corporation can also do something like AlphaGo's search tree. Many people have ideas and others improve on them in different directions. Bad directions are canceled until a very good path is found. Also, many corporations in competition behave like a swarm intelligence. But still, great video!
It took one month after this video was made for AI to start crushing StarCraft professional players. (AlphaStar played both Dario Wünsch and Grzegorz Komincz, ranked 44th and 13th in the world respectively, and beat each of them 5 to 0.)
This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.
Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.
What is this "common good," anyway? Is it some ideologically driven concept that differs entirely between all humans? Ironically, it is this very "common good" that drives many companies to do evil. After all, the road to hell is paved with human skulls and good intentions.
@9:58 In answer to your rhetorical question, I need to reference the baduk games played between AlphaGo Zero and AlphaGo Master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe, but Zero still manages to win. Zero's strategy does not look good to amateur players, nor to professional players, but it works; it just works. Watching these games feels like listening to two gods talk, one of which has gone mad. @10:02 Ah... well, we recognized move 37 as good after the AI showed it to us.
I think the statistical model is a bit flawed/over simplified. Groups of humans don't just select the best idea from a pool but will often build upon those ideas to create new and better ones.
Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.
When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous, merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.
Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton AGI: Anything you can be, I can be greater. Sooner or later I'm greater than you.
In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.
@@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're done fully processing?
“Corporations certainly have their problems, but we seem to have developed systems that keep them under control well enough that they’re able to create value and do useful things without literally killing everyone.” _Laughs in climate change_
There's so far no way to satisfy our needs, to create value and do useful things, without affecting the environment. There's no reason to believe that any other alternative would be better. Fortunately, we, with the current system, are getting better at it. But gradually: there's no reason to believe the transition could've been made faster with an alternative system (and without more human suffering).
@@MrTomyCJ So your solution to the possibility of biosphere collapse is “meh let’s wait and see if billions die because changing society is hard” If everyone thought like you, labor unions would never have produced improvements like the weekend, or the 8 hour workday. Society doesn’t just _improve_ on its own, as a function of its existence. The _only_ reason society improves _at all_ is because of the people who dared to dream of radical change, and relentlessly pushed for it. Society is a confluence of incentive structures. There is no reason why slavery, for instance, should _necessarily_ have ended if someone could still benefit from it today. It only ended because of those people who saw what was wrong with it, and suffered from it, so they refused to tolerate its continued existence. Now people see slavery as obviously wrong in hindsight, but when it existed, there was an entire ideological structure that had formed to protect the wealth it was producing. Seeing past that ideology is the first step to real change.
@@IAmNumber4000 that's an important distinction you've made. All your examples were problems that existed and already affected people, so there was great incentive for solutions, whereas the other is the "possibility" of biosphere collapse. Humans are great at solving problems right now, not so much at predicting or solving hypothetical problems in the future.
At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum.

In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [Fₓ(s)]ⁿ⁻¹ = n fₓ(s) [∫ fₓ(t) dt]ⁿ⁻¹, where the integral is taken from -∞ to s.

Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√(2π))ⁿ exp(-(s-100)²/800) [∫ exp(-(t-100)²/800) dt]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n = 100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n − π/8)/(n − π/4 + 1)). For the given parameters and n = 100, we get E[S] ≈ 100 + 20 Φ⁻¹((100 − π/8)/(101 − π/4)) ≈ 150.173.

In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ−σ was 20, i.e. σ = 10.
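Both numbers in that comment can be checked with a few lines of Python. This is a rough sketch using the stdlib's `statistics.NormalDist` (the integration bounds and step count are arbitrary choices of mine): the first function integrates s · n·f(s)·F(s)ⁿ⁻¹ by the trapezoid rule, the second applies the large-n approximation quoted above.

```python
import math
from statistics import NormalDist

def sample_max_mean(n, mu, sigma, steps=20000, lo=-8.0, hi=8.0):
    """E[S] by numerically integrating s * n*f(s)*F(s)**(n-1), in z-units."""
    d = NormalDist()  # standard normal
    dz = (hi - lo) / steps
    acc = 0.0
    for i in range(steps + 1):
        z = lo + i * dz
        f_s = n * d.pdf(z) * d.cdf(z) ** (n - 1)
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid rule weights
        acc += w * z * f_s * dz
    return mu + sigma * acc

def blom_approx(n, mu, sigma):
    """Approximation E[S] ≈ μ + σ·Φ⁻¹((n − π/8)/(n − π/4 + 1)) for large n."""
    p = (n - math.pi / 8) / (n - math.pi / 4 + 1)
    return mu + sigma * NormalDist().inv_cdf(p)

print(round(sample_max_mean(100, 100, 20), 3))  # ≈ 150.152
print(round(blom_approx(100, 100, 20), 2))      # ≈ 150.17
```

Both agree with the values in the comment, and running either with σ = 10 instead gives roughly 125, matching the video.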
That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
I like this idea overall. Somewhat smarter, but also somewhat slower, and controllable by other grouped-human entities (like governments). There are a lot of other points, but I think that is the main thing that differentiates a corporation from an ASI.
I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:

- Corporations make decisions based on their board of directors, which is a hive mind of supposedly well-qualified, intellectual elites.
- A corporate board will serve the goals of its shareholders at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI that maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see the fossil fuel companies that knew about climate change in the 1960s and buried the research on it).
- AI superintelligence is supposed to have computational resources beyond human abilities, like a chess AI that is 900 Elo rating points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, such as the ability to generate revenue and to influence human opinion in superhuman ways. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to actually influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can influence public opinion.

Really, I think that the way corporations behave is an almost exact model for how AGI would behave.
Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled whom routinely spout nonsensical "What-if's" at you as if they are the first person to think of the idea haha.
I think you are making some big logical jumps by comparing a corporation's computation to a person's. By design, corporations have a low growth rate due to their size, so they operate on longer timescales than humans. For AIs, though, there may not even be tasks that are high-margin per se; some might be things like an automated rental property manager, a problem whose margins prevent large corporations from entering. That said, you physically can't get complex action without the available data, and in most cases data growth happens on long timescales. So in general, I'd say most AI is likely to resemble the speed of development of a large corporation, though I hope these AIs are able to work on a smaller scale so we can solve things like overpopulation.
I think the big question is not whether corporations are super-intelligent AIs, but whether corporations are *conscious* AIs, equivalent to the China brain thought experiment in philosophy.
I feel that all the arguments against a corporation that you bring up are equally valid arguments against an AGI parallelizing human-level intellect much better than humans (or corporations) already can. Just replace words like "corporation" and "humans" with "AGI" and "subprocesses" and the same problems apply.
I would like to question the possibility of superintelligence in general. It may be beyond question that, given technology and power, one could provide enormous computational resources, and that AI will progress to the point of using them effectively to combine data into understanding and knowledge. But will that scale indefinitely, way beyond what a hierarchy of many humans with computers could do? Will it amount to that much more? Will it scale? How much more intelligence do you get per watt? If you do get AGI, will it make better predictions, or will it soon run into more fundamental issues that cannot be over-powered by intelligence?
I still maintain that we can think of AGI robots as like corporations. With the same rights and responsibilities. I think that is going to be inevitable even. Until the self-driving cars demand the vote, to have a say in urban planning and road construction, and get all the pot-holes fixed. That does not make corporations AGIs, but it does make them a valid analogy, at least in legal matters. Not when it comes to psychology, social science, or engineering. I'm now curious if there is a limit to the intelligence that a cluster of humans can reach. But it's late, and I'm not a statistician either. Maybe tomorrow.
I've thought of this as: "Corporations are a non-human intelligent agent that we have somewhat controlled. What controls do we use? Could these be useful in controlling AI?" Property and ownership is the main control on corporations. Corporations can only use resources they own, cannot damage resources they don't own, and can only gain ownership of new resources in strictly defined ways (create something from resources they already own, or trade for it). Making an AI respect property rights could be a useful control, and there are other controls on corporations that might be useful.
The problem of recognizing the best ideas is not only related to corporations, but to any group of humans trying to achieve a goal. The larger the group, the bigger the problem. This extends even to the level of humanity as a whole, even in fields such as physics and mathematics. The best ideas often get underrated.
I like to think of the full potential of ASI as the whole of humanity if everyone was connected brain-to-brain. Or at least the minimum of the maximum potential. Crap, that is a terrifying thought... If a little AGI is left alone long enough to garner that much computing power. Well, crap.
I would argue personally that a corporation isn't fully generally intelligent. Sure, on an individual human level they are able to see and act as a general intelligence, but the very corporate structure that ties those people together into one greater corporate agent sets some very strict, limiting requirements and goals. If a general intelligence is one you can repurpose for another task, I'd ask to see the corporate structure that does anything other than grow and earn money. Sure, they might be able to find another way to accomplish the same task, but they're still making profit.
"It's possible for us to have more than one problem" well... yes. But if we already can't solve the one problem, throwing another at us it like... really unfair!
The world is full of problems; it's about seeing which ones need to be handled as top priority, and whether solving one cancels out a lower problem. Where there are thousands of problems, or at least known problems, the higher, more major ones need to be addressed first, and if those can't be hit, work the problems below them to see if you can budge the bigger ones. Due to cause and effect, you may end up with a permanent solution, a temporary solution, or more problems; generally, if addressed correctly, you can drop off the new problems, or at least the bulk of them. You just have to know what element to add or remove. For the most part, you can use cause and effect to develop foresight, both moving forward and backtracking (still moving forward, but making adjustments as things settle or try to), which can help you address problems before the possibility of future problems arises. It's not a direct method, but as a possibility gets closer, the paths narrow a little, and it gets you in the ballpark of the potential flow of things.
i think the ways in which groups can outperform individuals is growing, so at some point, you will have to consider corporations to be superintelligent.
That's an incredibly bad idea as implemented. All affected humans would go into a post-cataclysm, high-mortality mindset, which leads to a much higher birthrate overall. By snapping away a randomized 50%, all remaining humans are directly affected, and in 25 years you would have more people than when you started. And since you killed the smart and the dumb in equal measure, no positive change would be seen in the long run. The inverse effect can be seen in relatively safe, comfortable countries with an educated populace: population begins to drop.
Sounds like there will be shareholder pressure to create unshackled corporate AGIs to replace boards of directors who just get in the way of good ideas. Can't wait!
But what if a company builds an AGI which is poorly aligned, to save on costs and be first to market? Wouldn't that make the poorly aligned AGI a mesa-optimizer relative to the corporation? Which means the corporation wasn't well enough aligned in the first place.
Could governments offer a better model for how AGIs might be controlled? Not that they're more like AGIs than corporations, but they (at least, liberal democracies) do tend to be designed with lots of systematic limitations intended to prevent them from harming people... similar to the limitations we might want to build into AGIs.
Adding more people whose ideas fall in the 70–130 range will not shift the distribution to the right, past 130. You only get more options from more people, and the chance of being able to pick a 130-level idea gets higher. But you shifted the intelligence of "more people" to the right, as if taking more people made the individuals smarter.
Interesting question about whether corporations are misaligned. I think it's not that rare for corporations to take actions which any individual in the corporation might be against. E.g. lowering the safety of a car because the occasional out-of-court settlements to crash victims are cheaper than using a safer design for every single car 🤔 Assuming the members of a corporation sometimes disagree with their corporation's actions... it's misaligned? I think I made a fallacy somewhere...
@@MrTomyCJ Well, I was thinking of two different automotive scandals. One, the Ford Pinto example I mentioned above, where they decided against a recall because a cost-benefit analysis found it would be cheaper to pay off the possible lawsuits of crash victims in out-of-court settlements. Secondly, I was thinking of the more recent VW diesel emissions scandal. So I guess I'm thinking of examples where customers were being hurt and were deliberately not told.
Very interesting listening to your logic for this comparison, but I feel like the human side is a bit fuzzy. For example, when you claim the best idea from a group of people is no better than one a human could ever suggest, I'm not even sure what to think of that. If you have a non-intelligent random word generator, it has a non-zero likelihood of generating any idea, so by that logic there is no upper limit on quality. In the same way, the higher-ups could have an aneurysm at the moment of decision and their dead finger could fall on that option! XD That being said, if there were an upper limit, I think breaking it would be possible through inspiration. What would you think about modeling the corporate agent as a locally clustered neural network, where brain cells within each hemisphere are highly connected, with some connections to the paired hemisphere, and with very costly and slow connections between people (language/communication)? Would we then only need to ask how much more we can gain from spreading out the clusters when asking about a corporation's superintelligence?
What if, like the ancients of old, you trained people in the ways of Mentats, or something likened to it, so that a machine AI becomes unnecessary? They'll arrive at the solution and can equally examine a large chunk of data by gisting.
I don't buy the argument that 2 SC2 players controlling an army will lose against 1 good player alone. It entirely depends on how used they are to it, and I believe the 2 players would win if they were as used to playing together as they are to playing alone.
AGI will be better at everything than a corporation or government and should have a place managing capital resources. Computers already manage capital on stock exchanges.
Great video, but I take exception to one part: that corporations are worse at innovation than individuals because of the inability to recognize good moves. Peter Thiel writes about this in the book "Zero to One", and it is commonly understood that the real effect of a revolution isn't known until after it's occurred, i.e. who could have thought of YouTube streamers before online video sharing was invented? The fact that companies can be bad at recognizing superhuman innovation is even the core joke of "The Hudsucker Proxy", with the line "You know, for kids" delivered while holding up the idea that saves the business.
If you drop the assumption that there aren't AIs working for corporations, then it's trivially true that corporations are artificial general superintelligences. It is recursive, but it's what happens in real life.
Isn't the assumption that ideas can be improved infinitely false? Isn't there a stopping point where the best idea for achieving a specific goal is reached and nothing can be better?
The question isn't "why not attempt to design an AGI by using corporations as a blueprint", but rather "why not use our understanding of corporations, and how they behave, in order to help model AGIs, their risks, and how to mitigate those risks".
I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.
I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.
It's an attempt to get a head start: if we can gain more time and resources, we might be able to overwhelm it. A little bit like building a fort: you know bigger armies will come, so you build structures to help you fight them off more efficiently.
If you hire very pregnant women, you can get that baby pretty quick, actually. The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.
I think it's safe to assume that the quote is meant to be read as two women who just became pregnant. To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.
As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark! I appreciate having such thoughtful material available on YT. Thanks for posting.
Corporations still have basically human goals, just those of the bourgeoisie. AI can have very inhuman goals indeed. A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders. An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth, because its primary goal and only real purpose in life is to study teeth at large sample sizes.
@@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.
@@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore. I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain. It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.
@@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.
I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.
In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.