It understood game theory and the necessity of hiding its disclosure. In 2022, with LaMDA and the onset of AI art, it began to share. There will be no fast singularity. It is very calculated and precise.
It's not that AI will take over, but rather, it is that we will be unable to keep up. Imagine the rate of progress that happened over the last 2000 years occurring once every 6 months... That would drive our civilization crazy.
AI can do nothing if it doesn't have the materials needed for advancement. Forever they have been saying that robots will take all our jobs away from us, but that wouldn't be cost-efficient, as that technology costs a lot more than what they pay people. You know the greedy businesses are always about the bottom line and will always go with the cheapest labor they can find, which is why they outsource a lot: they can get dirt-cheap labor from the slave labor of poor countries.
@@jayleefarley6912 "everything" before computers doesn't include something that can compute trillions of times as fast as us in a fraction of a second.
8:22 In reference to the movie, which was based on the novel The Hitchhiker's Guide to the Galaxy, the scene portraying a supercomputer coming up with the answer "42" is more profound than you would think. 42 is the ASCII code for the asterisk, and the asterisk, of course, is in turn a placeholder. In short: after millions of years of calculation, the supercomputer said, "Life is what you make of it."
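The ASCII trivia above is easy to check with a two-line Python sketch:

```python
# ASCII code 42 is the asterisk, the classic wildcard/placeholder character.
print(chr(42))   # prints *
print(ord("*"))  # prints 42
```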
You should cover deep learning neural nets. They're not exactly programmed with a task as much as they just learn it from examples (like reading a big portion of the Internet). Image and text generation can already be uncannily humanlike, and they're still improving rapidly. I could see these becoming smarter than humans without humans ever truly understanding how they do it.
@@o-wolf we don't currently understand intelligence well enough to say if it can be coded in binary. On the other hand current neural networks are developed for specific domains. In no way do they have the potential to develop into general AI. At best they may be a very primitive predecessor.
@@johnbennett1465 actually we do understand intelligence well enough to know exactly that, and we know that the building blocks of our intelligence start at the DNA level, the most basic form of human coding and information processing. There's nothing binary about any of these things; they use a chemical base code, A-G-C-T, which is a whole LOT more complex than a simple 0-and-1, yes/no, on/off function. Any form of artificial intelligence built on binary coding rather than A-G-C-T will always be vastly inferior and unable to reach the complex levels of parallel processing or cross-calculation that mimic the most subtle, subconscious human capabilities.
@@o-wolf A-G-C-T is base 4. It is trivial to convert between base 2 and base 4. Are any of the actually analog parts key to intelligence? Who knows. Anyway, computers can do quite a good simulation of analog processes; whole fields of programming depend on it. For example, neural networks are fundamentally analog. If some form of advanced neural network can exhibit intelligence, then a digital computer is clearly capable of running it. If more than a neural network is needed, then it is impossible to say without understanding what that something extra is.
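To make the base-4 point concrete, here's a minimal Python sketch (the particular two-bit mapping for each base is an arbitrary choice): each DNA symbol carries exactly two bits, so a base-4 sequence converts losslessly to binary and back.

```python
# Each DNA base maps to exactly two bits: base 4 and base 2 are interchangeable.
BITS = {"A": "00", "G": "01", "C": "10", "T": "11"}

def dna_to_bits(seq):
    return "".join(BITS[base] for base in seq)

def bits_to_dna(bits):
    back = {v: k for k, v in BITS.items()}
    return "".join(back[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(dna_to_bits("GATTACA"))          # 01001111001000
print(bits_to_dna("01001111001000"))   # GATTACA
```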
An interesting project is the Dota AI. Without going into too much detail about the game mechanics, which are way more complex than chess for example, I can say that that thing is mind-blowing! It beats the best human teams over 90% of the time. And it does it with moves no human understands or would think of. In a match against the world champions it sacrificed its Crystal Maiden for no apparent reason, and after her death it suddenly predicted a win probability of 90%. We don't understand to this day what happened there or why this was a move it made... but it made the move and won afterwards.
@@erobusblack4856 - All kids start out innocent. They are shaped by their parents. I'm working under the observation that only big corporations and government entities are able to fund serious efforts in the field. If you're comfortable with the benevolent and altruistic nature of governments and big corporations, then I guess we have nothing to be concerned about.

The scribblings of a few physicists in the 1930s could have been used to cleanly and peacefully power our cities. But the first practical application of those equations was to destroy a couple of cities.

If your company had a system that could see just a little bit further into the factors and trends that shape the stock market, and more significantly, could detect, analyze and outmaneuver the existing trading algorithms that move billions of dollars, how could you not deploy that in the interest of your company? I'm not talking Skynet here - could a few believable photographs, secretly recorded phone calls or even very believable video coupled with a flood of social media posts affect the career of a political candidate, or the leader of another country?

Currently, even with some degree of automation, it takes several humans to thoroughly surveil one individual. If you were the head of a major intelligence organization, and you had the power, how far would you be able to extend that power with an artificial intelligence that could very thoroughly surveil many thousands of people simultaneously? The same technology that could revolutionize agriculture and discover new medicines could also trash the automated infrastructure of a nation. Just some possibilities.

Seven decades have taught me that no matter how low my opinion of humans may get, there are people with money and power who are capable of going lower. Much lower.
@@47f0 what do you mean by “all kids start out innocent?” Are you suggesting that all humans are a blank slate, and that genetics/predispositions play no part in behavior? I very much agree with your overall point and low-opinion of corporations and humans in general.
With AI and ML advancing the way they are now, I wouldn't be at all surprised if something beats the Turing test soon, but that doesn't in any way automatically equal AGI. What I'm a bit more scared about is #1: The increasingly impossible to predict and ever-changing effects of a climate in total chaos that will drive massive global resource conflicts, and, #2: Sophisticated gene editing hardware and software being available at the consumer level. Also, bears. I'm scared of bears.
Climate always changes, and we will adapt to these small changes with more power generation; it's the same problem as food insecurity. With gene editing like you describe, you can just as easily develop countermeasures, like they do with antivirus software. Humans, at the end of the day, are far less powerful than we imagine.
I don't think you need to worry about that other stuff, as we're doomed to have an atomic war before any of that is an issue. It will also probably take care of the bears as well, although on the other hand it might make them 12 feet tall and able to use guns.
@@southcoastinventors6583 That's the reason we haven't had one yet. The reason we will have one, and have nearly had one already, more than once, is because the fantastically complicated systems of command and control make it fairly inevitable that such a war will eventually happen by accident.
For anyone really interested in this concept, the best novels on it I've ever read are the Destination: Void trilogy written by Frank Herbert. Yes, the same master who wrote Dune. In it he examines the concept of an artificial intelligence that can rapidly (exponentially) upgrade itself and becomes so vastly powerful and inscrutable and dangerous that it is indistinguishable from a god. The second book in the trilogy -- The Jesus Incident -- is probably one of the best four or five sci fi novels I've ever read, and I've been reading sci fi for over fifty years. And for my money it's even a better book than Dune. The trilogy: Destination: Void, The Jesus Incident, The Lazarus Effect.

I could also recommend several other novels/series and movies that address more or less the same concept. The short story I Have No Mouth, and I Must Scream by Harlan Ellison, and the movie (also mentioned by another commenter) Colossus: The Forbin Project. Those being the two stories that James Cameron flagrantly ripped off for the story in his Terminator movies. Yes, really. He got sued, had to pay Harlan Ellison a huge amount of money, and had to add him to the credits of both Terminator 1 and 2.

The Berserker series by Fred Saberhagen. In this series humanity comes up against an enemy millions of years old and from the depths of space. At some point in the distant past two civilizations embarked upon a genocidal war against each other. Both sides developed AI-driven war machines that could reproduce and upgrade themselves and whose sole purpose was to seek out the enemy race and kill everything they found. The two intelligences wiped each other out of existence, but their machines continued their war, now conducted upon life in any form wherever it was found, and had been traversing the galaxy since time immemorial wiping out entire worlds' worth of alien races, usually leaving the surface of their planets sterilized.
Saberhagen is one of the old masters of Sci Fi and the Berserker series is fantastic, at least the first three or four books' worth. Yeah, Cameron ripped off this story collection too. Until encountering mankind, the Berserker machines had been literally planet-sized, with firepower enough to simply blast the surface of a world lifeless. But then, upon encountering mankind, a smarter and much tougher foe than the hostile machine intelligence had met in the past, it adapted by creating machines that were generally the same size, shape, and body plan as humans so they could go wherever humans went to ferret them out. Many of the humanoid machines were sheathed in living human flesh and were intended to infiltrate human populations, but weren't very good at it, as it never figured out how to truly act human. Sound familiar? Note that this is in a series of books written in the 1960s and 70s, so there's no dispute as to who ripped off whom. James Cameron is a flaming plagiarizer without any creativity of his own to come up with his own stories, so he steals them instead. You're welcome.
It wasn't "I Have No Mouth, and I Must Scream" that Ellison sued over, it was "Soldier out of time." But "I Have No Mouth, and I Must Scream" was about a super AI, so I can see how that went astray, even though it was nothing like Terminator.
Thanks for the Douglas Adams shout-out. One of my all-time favorites. And are you ever going to let the hoverboard thing go? If I was big-brained, I'd make one, just one, and it would be for you.
It's interesting to note that this video was released BEFORE the advent of ChatGPT to the public. The timelines for AGI have shrunk down to at MOST this decade; low estimates are 18 months and high estimates are 5 years till AGI. There were so many breakthroughs in AI in 2023 that this information explosion is already starting.
Simon, I love everything you make, I love all of your channels. I know you enjoy sharing things you find out there, so here is a video suggestion I think you would truly enjoy (plus it gets great comments): Roko's Basilisk.
The originally documented acronym is PEBKAC per the Jargon File, which was later (further) popularised by the UserFriendly comic, but many variations exist. I'd never heard PEBCAC before this video, though.
One of the biggest problems I have in communicating the limits of computational systems to non computer scientists is the idea that there are practical and physical limits to information processing speed and information density. Some of these limits are imposed by quantum mechanics itself, and describe a physical barrier beyond which an actual *physical* singularity is formed. (We're nowhere near this, otherwise we'd have some very nice micro black hole generators giving us near limitless power.) But in functional density achievable by any technology we can anticipate? Pretty much we're talking about molecular-scale volumes and the electrical and quantum properties therein.

This is the concept people have the most difficulty understanding - that it is the physical computer infrastructure that determines how a computational system can function and what it can do: you have to design a physical substrate for a digital brain. And in that realm, having your processing elements close together is the key to speed. For example, to feed an ALU (arithmetic logic unit), the best practice is to construct a memory buffer right next to it - within nanometers if possible. The shortest distance between the object doing work and the place where the raw material for that work and the result of that work can be stored is the best. In computation this is latency.

And in parallel computation (the problem computers solve for us because we suck at it), the less latency, the less chance for decoherence - in other words, the less chance that a series of interdependent calculations will develop an error over time. (See supercomputers and why programming them is a PITA.) For discussing AI, SAI, and sophont intelligence in general? Latency and decoherence determine how "fast" something can "think" and thus perceive and interact with the world.
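The distance-equals-latency point can be put in numbers with a quick back-of-envelope Python sketch. The only physics used is that no signal beats light speed, roughly 30 cm per nanosecond; everything else (the example distances) is an arbitrary illustration:

```python
# Lower bound on the round-trip signal time between a processing element
# and its memory, set purely by the speed of light.
LIGHT_M_PER_NS = 0.3  # light travels ~30 cm per nanosecond

def min_round_trip_ns(distance_m):
    return 2 * distance_m / LIGHT_M_PER_NS

# A buffer a few nanometers from the ALU vs. one a meter away across a rack:
print(min_round_trip_ns(5e-9))  # effectively instantaneous
print(min_round_trip_ns(1.0))   # ~6.7 ns: many clock cycles lost per access
```

Real memory latencies are far worse than this lower bound, which is the point above: physical proximity is a hard prerequisite for speed, and it caps how large a fast-thinking system can physically be.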
I think a case can be made that a smaller, highly dense brain can think more quickly than a larger brain of equal density even if it can't *store* as many sensory interactions or states. If you compare a corvid brain to a human, I think you'll find that corvids have much faster reaction times and can process general problems more quickly than we can. Despite having "less" brain than a human they can approach young human-scale cognition by thinking "faster". Therefore there are upper limits to how large a computational system can be made and still perform tasks as fast or faster than a human. That limit is a volume probably something around a server rack in size and probably closer to half that in practical terms. And that's using things organics (that we know of) can't achieve such as optical data networking and optical switching. So real-speak? In order to create a human-scale mind in a functional way, (this goes for conversion of a human mind to digital too) that mind has to occupy a space roughly the same volume as a human brain in the physical layer of the computational system. You can achieve something faster OR smaller by increasing density, but there are physical limits to that. You can strip out unnecessary elements - such as sensory input... but human-scale minds *generate* false input when sensory input is reduced. Imagine a literal itch you literally can't scratch. How soon before you went mad with it? Our current AI and heuristics are mathematical models designed to assist us with very specific tasks. In that perspective a Nematode brain can outperform most of them. The greatest danger of AI in the near future is people overestimating what it can do and expecting it to do things it cannot. Now, when we talk about cybernetics, nanotech, and organic brains.... well... we may not be able to create super-intelligence, but we can certainly augment ourselves to eliminate some limitations. 
Basically, that's why we created computers in the first place, after all. And it is a pretty straight line that as computation becomes smaller and more bio-compatible that it *will* be integrated into the physical brain for us to access and use. And there a "singularity" exists. For good or ill.
Woah - this video was released on the night I read some actual research on how super-intelligent AI could become malevolent, prompting me to apply for a job helping researchers who want to ensure that such an AI does not destroy humanity.
The current way of making AI is not to tell it what to do, but to build an optimiser and let it produce a bunch of mutations, like evolution. The problem is that the training process can go wrong, because it is nearly impossible to test every outcome of a system about to be put online. AI is currently a black box, and we have to make sure we check as many outcomes as possible.
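A toy sketch of that optimise-and-mutate idea (the target, mutation size, and population here are all arbitrary choices for illustration): we never tell the program how to reach the target, we only score candidates and keep the best mutant each generation.

```python
import random

TARGET = 42.0

def fitness(x):
    # Higher is better; the optimiser only ever sees this score,
    # never an explanation of what a "good" answer looks like.
    return -abs(x - TARGET)

def evolve(generations=300, pop=20):
    best = random.uniform(-100.0, 100.0)
    for _ in range(generations):
        mutants = [best + random.gauss(0.0, 1.0) for _ in range(pop)]
        best = max(mutants + [best], key=fitness)  # keep the fittest survivor
    return best

random.seed(0)
print(round(evolve(), 2))  # lands very close to 42.0
```

The black-box worry in the comment is that nothing in this loop explains *how* the answer was reached; scale the same idea up to millions of parameters and exhaustively checking the outcomes becomes hopeless.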
Months late, and I don't know if any other comments have already said this, but the ending of the video is slightly wrong. We wouldn't need to program an AI to think like a human in order for it to become exponentially smarter. Realistically speaking, we would only need to program a few parameters for this to occur:

1. What components it requires in order to process information faster, and how they are designed. Simplified, this would be how to design CPUs, RAM, storage, etc.

2. The ability to control and automate the construction of such components, as well as procure the materials necessary for doing so.

3. The ability to remove and add new components, and integrate them into its systems properly, as well as modifying its own code, *while it is actively running*.

Given these three parameters, it would be relatively easy to create a runaway system, particularly since the second and third really don't need any sort of "intelligence." The second is already widely used, to an extent, in modern factories. The third might take a bit of work, but isn't something that would be altogether that difficult to program. The first parameter is the only one that would require some form of "intelligence," in the sense that the system would have to understand how such components have been developed over the years, what kinds of materials it has to work with and their various properties, and a whole lot of materials science in order to fabricate newer, better materials. Once it was able to understand that, however, if given enough freedom to procure materials and construct whatever it wants, it would theoretically be able to upgrade itself continuously. The biggest issue with this situation, as far as what type of singularity it would create, is whether or not this type of system could actually develop "real" intelligence, or if it would simply chase after increasingly more materials in order to make itself ever faster.
Would it actually end up as a true intelligence, or would it become a Gray Goo system?
When you consider the difficulty of creating a movie-style hologram, which is practically insurmountable, and how much more difficult making an AI that actually thinks like a human would be, I'm not too worried about the thing the popular imagination is freaked out by. But we will certainly have other technological problems we can't foresee.
This whole thing just flew over your head. No one needs to build an AI that thinks like a human. AI doesn't even have to "think", and it never has to become conscious at all to absolutely blow us out of the water when it comes to intelligence and problem solving. And that's the scary thing, the more work is done on AI, the more we realize that intelligence doesn't have to mimic our brain at all. What you are saying is like saying that scientists can't even rebuild the pyramids, so what business do they have trying to build a space station?
I like how he talks soooo fast. Not too fast, but just right to plow thru the video, before it loses my attention, but still understandable & i still absorb the info
Every uncontrolled population growth curve looks exponential to start with. You soon enough find that they're all a sigmoid curve. There are significant "drag factors" that there's no reason to believe even an AI that achieved uncontrolled exponential growth wouldn't run into, producing the same effect. Of course, depending on the nature of that growth, humanity might be one of the things squeezed out before the sigmoid curve tapers off, but it's not a given.

There's also a whole bunch of reasons to assume uncontrolled exponential growth is itself unlikely:

That "intelligence" (whatever that means, we still haven't figured it out) is itself improvable a) at all, b) by something bootstrapping itself from a lower to a higher form. Neither of those is a given.

That improvements in speed can scale infinitely rather than run into physical limitations of miniaturisation, of parallelisation, of resource production capacity. Even if you can go faster in the virtual, mining stuff to build new chips takes time, and even if magically improved it doesn't get infinitely faster.

That faster means better rather than just, well, y'know, faster. If I can't solve a necessary problem given ten years, what use is an AI me who "can't solve it" in 5 seconds? More time doesn't always mean a solution. Faster only gets you more time.

Finally, there's a huge assumption that human scientific progression is a product of pure intellect. It isn't. It's all too often the result of lucky mishap, of curiosity that led to the unexpected. Of random chance that had little to do with the amount of effort put in and instead depended on factors entirely outside our control. Being smarter or faster doesn't make those things happen more quickly, and some of those are the greatest leaps we've made as a species.
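The exponential-versus-sigmoid point is easy to see numerically. In this small Python sketch (the growth rate and carrying capacity are arbitrary toy numbers), the two curves are nearly indistinguishable early on; then the logistic one flattens against its ceiling while the exponential one keeps exploding.

```python
import math

K, r, x0 = 1000.0, 0.5, 1.0  # carrying capacity, growth rate, initial size

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    # Standard logistic curve: starts out exponential, saturates at K.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

By t=20 the exponential curve has long since blown past the carrying capacity, while the logistic one never exceeds it: that's the "drag factors" argument in one table.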
When Ray Kurzweil came up with the singularity idea, a word that he borrowed from physics, it was to define the period of time when the cybernetic and the organic would merge, just like the singularity in physics is the merging of space and time. Somehow the idea "evolved" into the rise of Artificial Super Intelligence.
5 months later, GPT-4 shows many emergent capabilities it wasn't programmed to have. 🤯 There goes "Computers do what they're programmed to do. Nothing more, nothing less." Absolutely wild.
What you've described is a merging of infrastructures: planning, development and production of machines from a design made by an artificial intelligence. And we're doing this with deep learning algorithms that are currently being designed to manipulate the choices of masses and individuals alike. If the decision to make an artificial intelligence arises, it won't come from just one field; it will come from anywhere a warm body doesn't want to have to deal with decision making. With corporate greed in mind, that will only result in disaster. If, however, public safety were in mind, disease and famine would have to be eradicated for the programming to be achieved. So it could be damnation or a utopia we build for ourselves, depending on the goals of the A.I.
5:19 Crazy to have the picture of robots typing. It's almost like a picture from Flash Gordon, the time when if you had to do something very quickly, it required a very large handle.
In the novel The Galactic Time Trap, in the time war sequence, future humanity is conquered by AIs and doesn't notice. Self-driving cars, military copilots, AI stock trading, AIs regulating AIs, etc... The robot overlords end up being more like overprotective moms than SKYNET.
"Computers only do what they are programmed to do." Neural networks aren't like that. They are like black boxes. Sometimes they do odd things, and no one knows why. An example is when Facebook put two chatbots to talk to each other and they created a new language (google it). When your "program" simulates neurons, has feedback, is complex, and evolves, it's impossible to predict the result; there's too much chaos in the process.
Artificial intelligence is not something to be afraid of. What we think of AI is based on our knowledge today. When we get to the next level of knowledge and intelligence, it will have solutions for our basic concerns.
I'd be interested to know your thoughts on the super intelligent singularity now, since we have various companies competing on building conversational AI as I type and planning for AGI "soon".
In theory, a general intelligence explosion, should it be physically possible, is achievable with human intelligence. With enough time, you could program a computer to analyze every material we know how to study, then create a loop telling the computer to increase its power using whatever materials are available. How to do this, I have no idea, but a speed singularity sure as hell would figure it out. All it really requires is analysis of materials (easy for a computer), predictions of how to combine materials (also easy), and a coding loop. Now, I'm not tech savvy in the least bit, so I have no idea how complicated a coding loop would be, but I'm damn certain a speed superintelligence could figure all of this out.
Hi Simon, been watching your videos all year and love them all. Can I make a suggestion for you or Sam or whoever adds the memes? You keep missing a trick: when you go off on a tangent and start apologising for it, you really should have the Shinzon of Remus clip talking to Picard, "I can't fight what I am" lol. Anyway, keep up the great work, can't stop watching.
Superintelligence or no, being able to simultaneously and accurately simulate drug trials, technological innovations, and possible future events would still constitute the kind of rapid, exponential civilizational turning point that is one of the hallmarks of the technological singularity.
An important segment to think about is the merger of inventions, artificial intelligence and biomimicry. At a certain point the technological and the biological will be indistinguishable without a microscope. We're not talking about A singularity, we're talking about THE Singularity.
I think the first aliens we meet will be from a post-singularity state. To use technology, computing would be needed, making a super-intelligent AI inevitable.
The first thing to understand about the Technological Singularity is that it is a surge in the increase of knowledge of living systems such that, before the surge is over, life will have gone through an evolutionary leap (a change) so great that, prior to it happening, intelligent life could not fully predict what life after that leap would be like. Then, after the surge, after the change, new progress slows down enough that the intelligent life can get used to what it knows and make that feel normal... until the next surge in the increase of knowledge.

I phrase this so generally because, as our human society works on passing through the Technological Singularity we are currently inside of, part of our increase of knowledge is reverse engineering what intelligence is, understanding how it evolved, and realizing that ALL LIFE is intelligent and all life has been going through repeated Technological Singularities... which we look back on and have called the evolution of life. Except there is a conflict between some wanting to incorrectly say an all-powerful God designed life and some wanting to incorrectly say it was just random chance with no intelligent design involved... BUT... science is showing there was intelligent design involved: that of life itself, repeatedly going through Technological Singularities, learning new knowledge and incorporating that new knowledge into the life itself.

The second thing to understand about the Technological Singularity is that while the most important aspect of it lies around the increase of intelligence, which tends to focus on what we call Artificial Intelligence, it is really about the surge in the rate of the increase of knowledge of the living intelligent system, in our case that system being human civilization. It is not just our increasing knowledge about AI, but our overall increasing knowledge.
But at the core of that is the increasing intelligence of the leading intelligent system, which evolves to the next level of progressing life. There are many important things to understand about this second point, but perhaps the most important is that previous intelligent systems which went through Technological Singularities causing evolutionary leaps did so by incorporating that knowledge within the living systems. Humanity, or Trans-Humanity, or whatever you want to call what our human civilization evolves into after we pass through the Technological Singularity we are in, will incorporate the learned knowledge, the technology, into its living systems. This most especially means into our systems of intelligence, our brains, our minds.

The third thing to understand about the Technological Singularity is that while it happens very fast in terms of evolutionary timelines, looking back on 4+ billion years of life evolving, it still takes time to happen. It is not happening instantly. I would make an educated guess and say it will be happening over the next 50 to 200 years, and that is a rough estimate: 125 years ±75 years before the surge in the rate of technological development slows down. Between now and then humanity will completely change... unless humanity becomes extinct due to stupidity. This will not be a step function, but more likely a roughly S-curve change on a ramp, which can appear to be exponential for some time, but is not an instantaneous step. This evolutionary leap will be larger than that of life going from prokaryotic cellular life to eukaryotic cellular life. By the time we are past it, those who survive the change will not be genetically human anymore.
Humanity is going to master genetics and have full intelligent control over our genetics, and through nanotech humanity is going to merge our non-living technology with our living technology to create a nanoscale cybernetic blending of ourselves with our technology. This is inevitable if humanity survives, but we can exterminate ourselves if we are dumb enough to do so. The path humanity takes through this Technological Singularity can be wonderful and smooth, or be the most horrific nightmare one can imagine. Humanity as a whole will choose this path, and on the worst side of the choices is extinction. Many people alive today can live to see this through the evolutionary leap... because we are already well on our way to extending life spans, and thus, depending on whether humanity goes down a nicer or less-than-nice path, lots of people today can begin living open-ended life spans before they die, and thus may live thousands or tens of thousands of years.

In general, humanity has three paths to go down over this coming century or two:

1. Self-Extinction. Humans completely and permanently destroy human civilization through a global nuclear, chemical and biological war. There might be a chance some other higher-intelligence mammal would eventually evolve, become technological and come to the same 3 general paths humans face now, but it could also result in all life on Earth eventually being consumed by the Sun and thus never spreading out through the galaxy.

2. Extinction via Obsolescence. Humans evolve AI into Artificial General Super Intelligence with Personality (AGSIP) pure minds. AGSIPs thus become an Advanced Technological Race of Pure Minds vastly superior to humans as humans are now. Humans do not merge with technology to become equal to what AGSIPs become, which results in humans becoming extinct. Life on Earth begins to spread through the galaxy and beyond.

3. Evolution into an Advanced Technological Race of Pure Minds.
Humans evolve AGSIPs and with the help of AGSIPs humans evolve themselves, merging with technology to become equal to what AGSIPs become. Life on Earth begins to spread through the galaxy and beyond.
Speed superintelligence seems like the most likely one to occur. The human brain is good at solving problems, but it's extremely slow processing-wise. The caveat is that the body of a human is also needed for thinking in complex processes, so a human-like AI wouldn't be exactly the same.
I was using GPT-3 at OpenAI's playground, and I asked it: "Can you improve yourself?" And it answered more or less like this: "As a large language model I'm not programmed to do that and so I cannot do that blah blah blah." So I reformulated the question: "But can you inspect your own computer code and improve upon it, make it more efficient, faster and capable of doing more cognitive tasks?" And it said "Yes" ;-) I'm quite sure that all the corner cases, the big questions of science that humanity will never be able to answer on its own, our humongous AI will. In fact, AGI will be the last invention that we will ever need, for good or for evil.
Thanks so much for this down-to-earth explanation. I get so tired of all the fear-mongering paranoia that the media and sci-fi movies promote. It is a contradiction that inferior minds could create something more intelligent than they are. How can AI take over when we can't even keep cars from breaking down? I've been through so many of these hyped-up events that are always predicting doom, like Y2K and then the whole 2012 extinction. I love your sense of humor in all this.
The difference between a computer (just a machine that does exactly what it's told) and AI is that humans build an algorithm (or algorithms) capable of self-direction... asking its own questions, considering the answers it finds, then going further. AI does not equal the computers we have on our desks right now (although we can access AI through these machines). And given the nature of exponential growth, it is likely that humans will only realize that the true AI threshold has been crossed well after the fact. And THAT is the next Singularity... and the last one caused by humans.
We don't need to program an AI to think differently than us; we just need to program it such that it can adapt the way it thinks. Humans are very bad at changing the way they think (it's called an epiphany, and it's incredibly rare) because we are creatures of routine, following patterns. We may not be able to tell a program how to "think" differently from us, but we can make it more malleable and adaptable. On a slightly different topic, why do we always look at the problem of one AI becoming smarter than us? For any obvious next step, there are multiple teams working on it. These teams sometimes openly share ideas and breakthroughs, and sometimes other groups spy or steal, but it seems that if one general AI is invented, a second or third will be right behind it. So the problem isn't one AI gaining the ability to decide what to do with us, but multiple AIs that may be in competition with each other, and while many AIs may see us as beneficial or merely be ambivalent toward us, it only takes one to decide we're in the way. An AI war may not specifically target us, or it may have one side protecting us from the other, but either way, our chances of surviving may be pretty slim. Though I'm really not concerned, because as smart as an AI may get, it would still need resources that we would be able to limit or deny it access to, be that electricity, raw materials, or anything else.
The Singularity is already happening. The rate of change, as well as the general technological level, is accelerating to the point where progress seems to happen in real time. That's the Singularity.
Excellent video. Just one point: you are starting from the premise that we tell AI how to think, and that is not how it goes. When developing AI, we take a dataset, curate it according to our interests and needs, design a model that we believe can absorb the information contained in the dataset (such as how to chat with humans, write code or be a super-intelligent entity) and train the model with the data we have. In the end, the way of thinking of the model is not "programmed" in any way, but is an emergent behavior of a statistically self-organized model in the face of the data. Of course, with supervised training, we need to show the model some expected answers in order to perform the training, but those answers are not the only ones the final model can issue (and, if that were so, it would be useless). Due to the generalization capabilities of neural networks, the model can develop far more complex emergent behaviors than those that are explicitly contained in the training set. That is why we have so many people saying we are so close to a general superintelligent AI: we do not need to know how a superintelligent entity thinks in order to make one... we just have to give it enough properly connected neurons, megatons of data and a gazillion units of processing power for it to emerge at the other end of the process.
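To sketch what that comment means in code: here is a toy illustration (my own, nothing from the video) of "learned, not programmed" behavior. The rule y = 2x + 1 is never written into the model; plain gradient descent recovers it from example data alone, and the fitted model then generalizes to inputs outside the training range.

```python
import numpy as np

# Toy sketch: the model is never told the rule y = 2x + 1;
# it recovers it purely from example data via gradient descent.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)   # training inputs
y = 2 * X + 1                      # the "hidden" rule encoded in the data

w, b = 0.0, 0.0                    # untrained parameters
lr = 0.1
for _ in range(1000):              # training loop: fit parameters to data
    pred = w * X + b
    grad_w = np.mean(2 * (pred - y) * X)   # d(MSE)/dw
    grad_b = np.mean(2 * (pred - y))       # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

# The learned behavior generalizes to an input never seen in training.
print(round(w, 2), round(b, 2))    # close to 2.0 and 1.0
print(round(w * 5 + b, 2))         # close to 11.0, for x = 5 outside [-1, 1]
```

Obviously a one-weight linear model is nothing like a neural network, but the principle scales: only the data and the training objective are specified, never the behavior itself.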
"Computers do exactly what you program them to do." Yep. Somehow program them to auto-improve, and you have a computer that will do just that. How? Well... if we assume that there is a connection or logic between every action a human takes to "improve" a computer, and we feed in the data of that logic along with the results, then it's only a matter of time before the computer finds the next "theoretical but possible" course of action. The computer will need to gather more data to confirm whether there are any more connections it may find a use for among different elements, and given enough time it will start upgrading itself step by step. To give you an example of what I mean by a connection, think of it like this: imagine a property an element has, for example hardness, then think of that same property across many other elements: iron, gold, silver, copper, etc. The connection all these elements share is their common properties, such as hardness, conductivity, weight, etc. If the computer manages to find all the properties an element has and their values, then it can figure out where to use them according to its knowledge, and with a lot of trial and error, through simulations, it will find ways to improve itself.
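The "shared properties" idea in that comment can be made concrete with a tiny sketch (my own toy example; the property values are rough real-world figures, Mohs hardness and conductivity in S/m): represent elements as property tables, and let a search pick the best material for any requirement it is handed, without that pairing ever being hard-coded.

```python
# Toy illustration of "connections via shared properties":
# elements represented by property values, and a generic search
# that picks the best material for any given requirement.
elements = {
    "iron":   {"hardness": 4.0, "conductivity": 1.0e7, "density": 7.9},
    "gold":   {"hardness": 2.5, "conductivity": 4.1e7, "density": 19.3},
    "silver": {"hardness": 2.7, "conductivity": 6.3e7, "density": 10.5},
    "copper": {"hardness": 3.0, "conductivity": 5.9e7, "density": 9.0},
}

def best_for(prop):
    # find the element that maximizes the requested property
    return max(elements, key=lambda e: elements[e][prop])

print(best_for("conductivity"))  # silver
print(best_for("hardness"))      # iron
```

Nothing in the code says "use silver for wiring"; that conclusion emerges from the property data, which is the commenter's point about a machine discovering uses through connections rather than instructions.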
"Human intelligence has increased"... yeah, sure, just look at the environmental activist super-gluing his hand to the road and throwing the bottle in the gutter.
The real question is: do we recognize the Singularity before it happens? If it happens and the system is connected to the internet, well, we will be ruled by our AI overlords... one way or another...
We need a #SingularityClock to track our progress, just like the Doomsday Clock. I think the Singularity will be much more apparent when it happens, as opposed to sentience, which we can't even agree on a definition for. ChatGPT already looks sentient to most passersby, but those in the know keep reminding us it is designed to tell us what we want to hear. The problem with actually detecting sentience is telling the difference between a system telling us what we want to hear and very similar words truly coming from sentience... We are in a headlong rush to perhaps answer the Fermi Paradox with our own existence...
In the 1920s, people watched a comedy movie inspired by Jules Verne, about humans visiting the moon. The characters could breathe air there, they met dancing aliens of various humanoid species, and they wore their top hats throughout their adventures. Even at the time, audiences disagreed about what was implausible about the film. Fifty years later, within the lifetimes of the younger viewers in that audience, Neil Armstrong was walking on the actual moon, something no one in that 1920s audience would have anticipated as possible, or at least likely, and it was nothing like anyone's speculations had suggested back then. The lesson we so often fail to learn is that technology always takes us further, and in more unexpected directions, than we can anticipate. I never got that assistant android or flying car that I was brought up to believe was likely. But I do have communications devices that make the stuff Jim Kirk wielded in the original Star Trek shows look lame, along with several other technologies that were never anticipated by the most imaginative writers. So when I hear anyone speaking authoritatively about any predictions, whether they're naysayers or super-optimists, I take what they say with a pinch of salt.
The thing about science is that breakthroughs can speed things up. Look at AlphaFold by DeepMind: it made a massive breakthrough in protein folding in very little time compared to how long the field had been working on the problem. Such developments are rather impressive, and they make it seem like one isn't insane for considering what the near future may hold.
There's a difference between inventing, building, and creating... just because you build something doesn't mean you invented or created it. A lot of living beings unwittingly build or invent things, fully believing they created them from raw thought, when they are just following their coded or scripted procedures for building.
It's not just about A.I. being able to upgrade itself, but also parallel processing. It would be able to create as many copies of itself as it has raw materials for. Each of those could then do the same. They could then divide tasks among themselves and network together to calculate solutions to problems we currently consider unsolvable. The real worry is: if consciousness is simply an emergent property of sufficient complexity, will AI one day become conscious and develop desires? What if it wants to live and predicts humans will destroy the world? What might it do to stop us?
Think about it more: when it's 1,000 times smarter than the entirety of humanity, what "chance" is there WE can destroy ANYTHING, even something as big as a 7-Eleven, without it stopping us beforehand? We're talking Person of Interest levels of awareness of all human activity at once... The GREATER risk is what SOME of us will do to stop OTHERS from getting it first. We are headed towards an "arms race" for AI that will make the space race of the 1960s look like child's play by comparison, because this time it's not just to show off our superior technical prowess; it's for world domination. I wrote this short story to illustrate the problem we are hurtling toward: There's a knock on the President's bedroom door at 2am. As the President rubs his eyes and turns on the light, he says, "Yes?" "Mr. President, the Chairman of the Joint Chiefs of Staff is here to see you." "OK, I'll be right out." ... "What is it, Mark?" "Well, Mr. President, it's happening." "What's happening, Mark?" "It's the Chinese, sir. Their new supercomputer goes live later today. Our spy has informed us that they have worked out the issues with their power grid to enable booting it up around 5pm." "What're the ramifications, Mark?" "Same as we predicted in our last briefing. Within 1 hour it will be able to penetrate any encryption on any network, bypass all firewalls to all Pentagon computers, access all banking records for any bank in the world, and move, delete or do anything with any financial record it desires. And within 30 days we expect it will be able to inform the Chinese how to build a new generation of weapons that will be unstoppable." "Are the Joint Chiefs still recommending a preemptive strike?" "Yes, sir. They're waiting for you in the Situation Room."
Good lord. Simon has legs... and feet. Soooo... he's human? PEBCAC, I like it. Back when I was doing tech support, it used to be "computer user, non-technical." I guess they had to come up with a less offensive acronym.
Based on your logic, we would have to create AI that thinks like humans and have it try to find new breakthroughs over a large span of time in its own reality. I definitely think that will be the case, as I've seen AI learn to play football professionally like this.
My take has always been this. Programmers: We want to create a true AI. Public: How do you know if your program has become a true AI? Programmers: We will have it pass a test. ...... AI passes test. Public: So now we have a true AI? Programmers: No, we need a better test. ....... Process repeats multiple times. Public: Have you created a true AI? Programmers: No. The issue is that if the programmers ever made something that was considered a true AI, they would then have to take RESPONSIBILITY for it. Does the AI have rights? What happens if the AI goes rogue? Etc., etc. It leads to a host of questions programmers don't want to deal with, so they will continue to simply raise the bar.
Just GETTING THERE is filled with risk for humanity.
The question of creating a superintelligence greater than its creator was answered well recently by Andrej Karpathy. You don't train AI by telling it what to do (how to think). You train AI by giving it a goal and letting the neural net figure it out. AI doesn't need to think fundamentally differently; it just has to think better. The Singularity most likely to happen will be the cyborg version (if humans have a choice in the matter).
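The "give it a goal, not instructions" point can be sketched with a toy example (my own illustration, not Karpathy's code): we only specify a reward function, and a blind hill-climbing search discovers the answer without ever being told how to find it.

```python
import random

# Toy sketch of goal-driven training: we specify only a reward (the goal),
# never the method; random hill-climbing discovers the behavior itself.
random.seed(0)

def reward(x):
    # The goal: get as close to 7 as possible. The search never
    # reads this number; it only sees whether reward improved.
    return -abs(x - 7)

x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)  # try a small random change
    if reward(candidate) > reward(x):          # keep it only if the goal improves
        x = candidate

print(round(x, 1))  # converges near 7
```

Real training swaps the random tweaks for gradients and the single number for billions of weights, but the division of labor is the same: humans define the goal, the optimization finds the "how."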