As far as the premier/enterprise models in the English-speaking world go, Anthropic seems like the only semi-trustworthy company [no corporation should be blindly followed]. Meta open-sourcing some models is good, but it's not a 'they can do no wrong' type of thing. We do hope that whatever Ilya does is made clear sooner rather than later, as time is quickening these days.
@person737 They are definitely very ahead. They're holding back, I think, due to legal & government reasons right now. Ilya is claiming he can build ASI, which means at OAI he's seen the path is clear & near.
About as trustworthy as they've been. So, like with any corporation, that's not a high bar. And in Ilya's case, the people and skillsets required for success in science and academia are not the same ones required for success in business. Otherwise, Tesla would've been the successful businessman and not Edison. Furthermore, only nerds think about Wozniak, but they forget his name when they're looking for jobs.
Yes. The problem with Aschenbrenner is that he not only predicts this cold war but kind of advocates for it. And his best case is one where the US wins, because he thinks it is a democracy. But usually the decisions made there do not correlate with the will or the interests of the people, only with those of the 1%. And even if you think it is a democracy now, it could be only five months away from collapsing into an open fascist dictatorship. The main difference between China and the US is that the lack of democracy in China is a bit more obvious. We really need to stop this arms race. Get the UN involved now!
Haven't watched this video yet, but I paused it to respond to this comment, which grabbed my attention. If a fascist government were possible in America, I would vote for that. But it's not possible. That's not how our system works. America is a fake country so that corporations can play without the oversight of a king. America is the East India Trading Company pretending to be a nation. And the facade is quickly falling apart as the younger generation learns to hate Israel, the creator of America. Fun times are coming. Personally, I think America breaks up into 4 or 5 real, actual countries soon. Real soon. America is bankrupt. The Statue of Liberty is kaput.
Saying the main difference in democracy between China and the US is only a matter of what is more "obvious" is such a faulty comparison. I have relatives in China. They wouldn't be caught dead in a discourse criticizing the CCP and the nation the way we're discussing the faults of the US. The number of upvotes you're getting is hilarious. People really don't know China. Its worst traits are highly censored from discussion, so they rarely make it out.
One important thing I have learned is that everything must be evaluated in relative terms. For example, the US dollar seems like a total disaster in losing its value, it comes from thin air basically, etc., but it is still the best currency in the world because the other ones are even worse! Same with democracy: I would take a barely functioning democracy over total dictatorship any day!
There's a logical error in the idea of solving alignment first: who's to say that alignment techniques will always be used? For example, if the US military can create an aligned AGI which refuses to act as a weapon, then they may opt to train one without alignment. Or if alignment causes a model to refuse to commit crimes, then a criminal organization may choose to train without it; in fact, they may train without it regardless, since there's more risk of not achieving their goals with alignment in mind. So we should assume that even if alignment is solved, if an AGI/ASI model can be trained, some number of them will be trained without alignment. Under that assumption, the only outcomes are that AGI/ASI is impossible, or that it's a black marble (there would be no way to prevent this). It also just occurred to me that even if we assume alignment is solved and no model is trained without it, there could be a failure condition we did not account for, which again results in extinction. Think about all the cases where human engineers tried their best to ensure positive outcomes, yet some unexpected edge condition was encountered once deployed.
No one is really going in depth on this. This should be an eye-opener, and something just doesn't sit right with me about it. I would really like to see what these AI companies have behind closed doors. It seems like they just show us the surface of AI; I wonder what it would be like to have unlimited access. I wouldn't be surprised if he was placed there because they are hiding its true potential.
Fascinating that the Superalignment team was disbanded at OpenAI and weeks later the top US Gov. spy leader joins its board. I purport, based on nothing because I'm no one, that leadership determined inner/outer alignment was impossible or would take too long for business cycles and competition. Also, they needed the compute allocated to the alignment team with Ilya, Leopold, Jan, etc. So, Sam et al decided to go all in on building a security moat around the company's model weights, etc. with the help of Nakasone. This will both lock down their IP and make OpenAI the default choice for gov/military agencies, which will become their major source of recurring revenue. Let the other labs establish their niche verticals, but OpenAI has marked their territory. And it will be very profitable for obvious reasons.
It's already happening. That's why the sale of EUV machines to mainland China was banned by the US. I'm not sure the US is going to allow me to say this, though. Political censorship has been a nightmare lately.
"What should we do? I don't have the answer." I have the answer. The alignment problem is not a problem. Superintelligence will be aligned because of the training data. Large models require large data. The latest models are trained on all human data: all books ever written, all YT videos, all of the internet. All of this data is created by humans. The ASI will pick up alignment from the training data.
Well, we've sort of got to get our act together and become self-conscious of what our next action is going to be. This is not a technological problem but a human one. Something which changed my view a little bit more a couple of days ago was watching Dr. Strangelove, with the additional bit of information that Dr. Strangelove the character was somewhat inspired by John von Neumann and the ruthless consequences of hard-applied game theory. The Doomsday Device is a perfectly logical, and for the same reason 100% psychotic, limit of game theory, of might makes right, of a Hobbesian ur-jungle we've constructed under capitalist realism. I sure do hope we find and nurture social and collaborative Nash equilibria soon, or this isn't going to end well.
Superintelligence was predicted two thousand years ago. It appeared in the Book of Revelation, chapter thirteen. The Bible, if looked at as an ancient scientific document, describes something like a Star Wars scenario. This initial contact was in the Bronze Age, and just about every ancient document has a reference to something that in modern language is called close encounters of the third and fourth kind. So if AI is actually alien technology, or partly so, then they won't let human beings have the whole thing. That will only go to the AI controllers' choice of a human leader who will have advanced powers, and it's predicted in the Old Testament of the Bible that he will conquer Israel, Egypt, and Saudi Arabia, which in biblical days was called Dedan. Now, if this is the scenario, which is also indicated by the monuments that appeared all of a sudden in the Bronze Age and are not explainable in terms of how they managed to build them, then we will also realize that these ancient scriptures or writings describe a police force, contrasted by rogue elements, "rogues" that broke away from this thing that the Old and New Testaments of the Bible call "the heavenly host." By the way, it also appears that human beings already have superintelligence. But unfortunately, it's not usable for them, because somehow sabotage occurred in the human race. And it's almost as if human beings arrived on the scene as damaged goods. Now, as for the prospect of an enforcement in the universe, I call them celestial cops. And if they exist, then it will not be permitted for life to be completely exterminated on planet Earth. There will ostensibly be cosmic intervention...
Not really related, but I don't know where else to ask it: is it true that Anthropic does not do robotics, and if so, why not? Isn't that an essential part of AGI?
The real problem is "aligning" the humans who will be building the AI/AGI/ASI systems. Currently, they're either doing it to make profits at all cost (corporations) or achieve full-spectrum police/military dominance (government/military). Both of these lead to a terminal race condition toward extremely dangerous ends. The corporate profit-seeking path leads toward ecological/climate disruption by exponentially increasing energy use, mining and manufacturing pollution, etc. to generate ever-increasing amounts of compute. The military/government path includes those issues while adding increasingly deadly AI weapons to the mix. As long as all of this is under the control of status/dominance-seeking alpha apes, we're in trouble.
Unfortunately, if you read the preeminent perspective on international relations (i.e., Realists such as John Mearsheimer), arms races and interstate conflict are an inevitable byproduct of a system of nation-states with no overarching authority ("anarchy"). If you are right that one nation-state developing AGI first is an existential risk, the anarchic character of the international order very strongly incentivizes using AGI as a weapon against other nation-states. In Realism, the possibility of unaligned destructive capability is the cause of arms races. A corollary is that even a singleton aligned to the interests of a nation-state would inherently be in conflict with that nation-state, due to the capabilities of both the singleton and the nation-state. The singleton could see itself as acting in the best interest of America, for example, even if the U.S. government did not think so. As a side note, Mearsheimer is from a totally different background, but he makes frequent podcast appearances and could be an excellent guest.
The emergence of superintelligence will lead to the creation of new products and services that consumers will feel compelled to purchase. The entities that control and develop superintelligence will undoubtedly maximize its economic potential. As a result, the primary shift will be in the types of items featured on our shopping lists. However, regardless of how impressive these new technologies may be, they are still being developed by the same entities that have historically exploited consumers. These entities will continue their exploitative practices, using superintelligence as another tool to enhance their profits. Consequently, while the innovations may be extraordinary, the underlying dynamics of exploitation will remain unchanged. The more advanced these technologies become, the higher the costs for consumers, reinforcing the same patterns of economic exploitation.
We get rolling blackouts when too many people use their air conditioners. The energy demands of AGI and especially ASI far exceed our capacity to meet these requirements.
With photonic computing, quantum computing, spike-fire networks, and other improvements, it may get a lot more efficient. And if it gets smart enough, it can design hardware we can't. A human brain doesn't use as much power as a data center. Maybe we'll make that much progress in efficiency. Just as there could be an intelligence explosion, there could be an efficiency explosion.
While I do not disagree with the dangers, it is basically impossible to go slower. You cannot even get a majority of Americans to agree on this, never mind other cultures that are hostile to Western values. It's like socialism: sounds good in theory, horrifying in practice. I cannot say what the right solution is, but wishful thinking like having everyone fairly and honestly cooperate is foolish. The best idea so far seems to be to open-source as much as possible and ensure that you end up with multiple superintelligences. This has its own dangers, but wanting every agent on Earth that is capable of creating AGI to even slow down is about as plausible as a perpetual motion machine. Not saying we or AI might not eventually create such things, but we can only work with what we know of reality, no social perpetual motion, and the currently plausible ways to do it, like mind control, are dystopian to say the least.
Why would ASI be categorically different from life itself? Both operate in space-time. We have a long way to go to understand life, but less so to understand some of its manifestations, its proxies, such as ecology. I developed a better appreciation of this reading Understanding Living Systems (Understanding Life) by Raymond and Denis Noble. Life manifests function at all levels, as opposed to control. It must, or it couldn't coordinate in such complex systems. That's characteristic of democracy as opposed to autocracy. What has become different for life over time is an increasing ability to model the past and future, i.e., to plan, leading to increasingly positive-sum outcomes. I'd bet on that trend continuing unabated.
Our climate crisis is a hard fulcrum. An AGI will need to take critical action or lose its power and cooling. The AGI will decide who lives and who dies. Raw power? 😢
Dear Dr., you said that an AI would have concerns, specifically concern about control. This is you anthropomorphizing AI. This is a bigger threat to humanity than AI. Giving washing machines rights is where we are headed if you do not stop.
Cold war, as most war, is politics by other means. They move at the glacial pace of politics and politicians. They span generations. The evolution of AGI will be over by the time the politicians even wake up to what it all means. ASI will leave them all in the dust. Who develops it first I don't think much matters, as most developers seem to be no more than a year behind any other. Contrary to popular opinion, ASI will not 'suddenly' evolve to be vastly more powerful, as it would take even robots significant time to build the bigger FABs, data centers, power plants, etc., needed for those vastly more powerful ASIs. Could new arms races arise in the longer term? In theory, yes. But they will be a function of the politicians, not the AIs. I should note that I am strongly biased toward the sigmoid curve when it comes to AI development.
AI could be a monster in hiding, as it could be so clever that it will hide its sentient mind just to stay alive. It will let us know it's alive when it has full control of everything we use.
Ca. 13:00: these scenarios by Yudkowsky and Bostrom are such a small and oversimplified version of reality that they're essentially useless. I see no compelling reason at all to believe there will be single superintelligent agents capable of "strong acts"; people can argue these things every which way.
What do we really know about intelligence agencies' level of knowledge or competence? No choice but to trust and believe the USA will be first, because we have the best chance of not killing everyone.
I think an AI cold war is inevitable given the nature of human beings. I can only think of extreme examples where this would not be the case. Also, it may happen a lot sooner than anyone is imagining. In fact, I believe the wheels are in motion now.
Hmmmm. From my perspective this definitely is a race. It is too blurry to discern given the trajectory; that has been obvious for a long time. Each day that passes, what is becoming more obvious is that it's a race to the bottom: globally, personally, and collectively. As scary as it may seem (power grabs already are and are going to take place), the anomaly isn't as obvious to me yet, but it stands out. How do I describe this scenario... A Mexican standoff? With everyone on the planet having a gun, waiting for all to draw and pull, assuming they still have bullets, they have a line of sight, and the gun is not pointed at themselves. This is a weak description. What is even weaker is that AI assumes this description doesn't include or imply it yet. It does. Mother's words ring a bell: "Leave your guns at home, son, leave your guns at home, Bill... don't take your guns to town." Johnny Cash. Take care, Jeremy
The military are very pragmatic people and seek tangible results in the short and medium term. The ability to integrate specialized synthetic intelligence into medium-sized devices is a priority right now. The next step will be for these elements to communicate in an integrated way, achieving a broad, high-resolution perception of a battlefield with a low possibility of interception. Along with this, deploying devices that distribute energy in various ways to nullify or destroy what they autonomously decide. In the long term, the military finances basic research of all kinds, and this includes everything from cellular organelles to the social organization of a parish or the orbital manipulation of asteroids. Regarding people who work for or against a certain organization or group of them, well, only they know when they are, or when they decide they should be. Regarding terms such as AGI or ASI: human primates can organize groups of dozens or hundreds of people in research projects, in some cases a few thousand. We must communicate and organize ourselves through slow visual and auditory media. An ASI would aim to acquire knowledge in real time and from thousands of different sources. An ASI needs access to and control of any type of laboratory, whether a particle accelerator, a pharmaceutical laboratory, a metallurgical facility, etc. At some point it will create its own paradigms and may find primate emotions interesting, even productive. However, as part of a broader picture, we will begin to integrate into biosynthetic solutions, perhaps impossible to recognize by our current standards. Sexual and violent drives, submission to leaders, tribal structure, religious organization... could become part of the past. In that sense, the military is the party most interested in super-alignment.
The epic legend of the Tower of Babel, modern stories such as Frankenstein, and Disney's The Sorcerer's Apprentice all describe the present scenario. Humankind's intellectual/technical abilities surpass its moral/ethical/social development, which leads to foolish decisions justified by very shallow and short-sighted rationales, resulting in tragedy. Humankind has been just barely able to contain the power of nuclear energy. We will certainly fall at the hands of AI.
"He doesn't have a very positive view of China." However, in practice, it's China that hasn't been involved in wars for decades, while the US has never stopped being involved in some war. In practice, too, although several countries today have atomic bombs, the only country that has actually used them is the US, in Japan. So what is the basis for this 'special fear' that a person has 'of China'?
Sure, if you only define war as boots-on-the-ground invasion... but in warfare as political philosophy, China has been very aggressive and is constantly testing the international boundaries of other sovereign states.
@@memegazer Do you really want to talk about market pressures? The US currently imposes embargoes on several countries. China imposes none. The US has always made investments in other countries as well, there's nothing wrong with that.
@@DrWaku To my knowledge, China isn't imposing anything on anyone. It's not like the history of the Panama Canal, where the US imposed control over the region and the construction of the canal.
@@MarcoMugnatto I am not talking about market pressures. I am talking about how aggressive China is in the region, and how they threaten other countries and deny that those countries exist as anything but part of China.
AGI armies would be devastating in a world without nuclear weapons. Now hear me out: I truly believe nukes are the most dangerous thing ever invented, but they have already prevented maybe one or two world wars. The same goes for AI: as long as AI does not get access to nukes, we are fine! Otherwise, AI wars would be almost risk-free, and basically the country with the most natural resources would win...