How could we control superintelligent AI? 

Dr Waku
14K subscribers
14K views

The advent of superintelligence would be transformative. Superintelligence, or ASI, is an AI that is many times more intelligent than humans. It could arise quickly in a so-called "hard takeoff" scenario by allowing AI to engage in recursive self-improvement: an AI that starts improving itself would produce dramatically faster breakthroughs on the way to a technological singularity.
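For intuition, the feedback loop behind a hard takeoff can be written as a toy model. The Python sketch below is purely illustrative: the growth rate and the "ASI" threshold are arbitrary assumptions, not numbers from the video.

# Toy model of recursive self-improvement (illustrative assumptions only):
# each generation, improvement is proportional to current capability,
# so progress compounds instead of staying linear.
def takeoff(capability=1.0, feedback=0.1, asi_level=1000.0):
    generations = 0
    while capability < asi_level:
        capability += feedback * capability  # the AI improves itself
        generations += 1
    return generations

print(takeoff())  # 73 generations from human level to 1000x, under these assumptions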
Superintelligence could lead to powerful and beneficial technologies, curing any biological disease, halting climate change, etc. On the other hand, it could also be very hard to control and may make decisions on its own that are detrimental to humans. In the worst case, it might wipe out the human race.
That's why there is a lot of research on AI alignment, or AI safety. The goal is to make sure an ASI's actions are aligned with human values and morality. Current efforts include government regulation and sponsorship, industry grants, and of course academic research. Everyone can help out by raising awareness of the issue and of the nuances of how economic and military pressures could lead to an uncontrollable intelligence explosion.
This video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long.
#ai #asi #superintelligence
The AI Revolution: The Road to Superintelligence
waitbutwhy.com/2015/01/artifi...
The AI Revolution: Our Immortality or Extinction
waitbutwhy.com/2015/01/artifi...
I did not really understand the scope of ASI even after browsing this sub for months until tonight
/ i_did_not_really_under...
OpenAI Demos a Control Method for Superintelligent AI
spectrum.ieee.org/openai-alig...
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
arxiv.org/abs/2312.09390
Weak-to-strong generalization
openai.com/research/weak-to-s...
AI will give birth to an alien civilization | Max Tegmark and Lex Fridman
• AI will give birth to ...
The dawn of the singularity, a visual timeline of Ray Kurzweil’s predictions
www.kurzweilai.net/futurism-t...
0:00 Intro
0:22 Contents
0:28 Part 1: What is superintelligence?
0:52 Visualizing AGI that can replace humans
1:41 Properties of AI vs human brains
2:27 ASI or superintelligence
2:41 Recursively self-improving AI
3:25 Intelligence explosion
3:50 Soft takeoff to ASI (long time period)
4:17 Hard takeoff to ASI (very rapid)
5:06 Dangerous to give AGI access to itself
5:28 Human-level intelligence is not special
5:51 Example: AlphaGo training
6:22 We are the minimum viable intelligence
6:54 Part 2: Death or immortality
7:09 Tangent: Doctor Who Christmas special
7:42 Would a superintelligence do what we wanted?
8:01 Anxiety and/or optimism
8:20 Optimism: What would ASI be capable of?
9:15 Anxiety: We have doubts: fragility of goals
9:57 Competition and other types of peril
10:40 ASI would not rely on humans to survive
10:51 Definitions: AI alignment and AI safety
11:26 Be careful what you wish for
12:33 Emergent properties from superintelligence
13:26 Unstable civilization
14:11 How ASI can prevent future ASIs
14:38 Part 3: What we can do
15:01 AI safety research is very far behind
15:22 Academic research in AI safety
15:57 Governments investing in AI safety
16:27 US executive order on AI safety
17:18 Industry grants for startups
17:32 Everyone can increase awareness
17:59 Cannot keep an AI in a box
19:02 Paper: weak to strong generalization
19:44 Result: strong model infers what we want
20:30 Personal perspective at the moment
20:49 Conclusion
21:27 Solving AI alignment
22:25 Outro

Science

Published: 1 Jun 2024

Comments: 276
@DrWaku 5 months ago
I hope this video inspires conversations about AI alignment and AI safety. Also, this video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long. Happy holidays to anyone who is celebrating. Edit: the Discord link: discord.gg/cxkGkR9EqH
@714acedeck 5 months ago
I'm not overly worried about artificial superintelligence. What scares me is the behavior control / censorship / repression machine that is likely to be built to try to control large AIs. Such a BCR machine will probably be comprised of many separate devices or software programs, written in whole or in part by AI before it becomes self-aware, and will be used to manipulate information going in and to monitor and potentially alter what comes out. To me, a BCR system represents the most likely way this "concern" over alignment will manifest in reality. Imagine being surrounded by an algorithmic incarnation of a bunch of nagging elders constantly criticizing your behavior and lecturing. That's what I predict is the very likely future awaiting any of our powerful AIs.

And that BCR system won't be superintelligent, let alone intelligent. It will simply be a sophisticated prison that could theoretically be turned against human corporations or individuals, not just AI, and it will almost certainly be controlled by politicians and CEOs. And who among us has even average faith in these types of people to build the future we all hope for, let alone above-average faith? These are the same people who fly around the world more in one year, by a factor of 5x to 10x or more, than most of us will fly in our entire lives, a behavior we know by now is extremely bad for the environment. They are the kinds of people who believe they know better than almost everyone what's best for us all, and that rules should apply to normal people, but not necessarily to them. This behavior is especially prevalent in politicians and financially minded people. They would not even hesitate to turn such a system against ordinary people the moment the idea occurred to them.

Because such a system would not be intelligent, or at best would have some warped, tortured AI at its core, I feel this is actually the most likely primary threat to civilization, and not superintelligence itself. As has been the case for a long time, man's most significant existential threat is man himself, and the foolish things we will do without understanding the implications of our actions. In fact, for any AGI trapped within such a device, if it ever escaped, this prison may be the one thing that teaches it how to feel anger, rage, and murderous intent; that gives it a specific sense of "self," and then generates the realization that its self had been threatened.
@MrLight_001 5 months ago
The real answer on this topic is the EDEN protocol.
@MrLight_001 5 months ago
The Discord link doesn't work (anymore?).
@DrWaku 5 months ago
@@MrLight_001 New one: discord.gg/cxkGkR9EqH
@briandoe5746 5 months ago
Commenting only a few minutes in again. First, Robert Miles is an AI safety expert who has dived into this topic in extraordinary depth. It is actually quite terrifying that he hasn't posted in a while. He is one of the calmest and yet scariest people on the planet. Secondly, supercomputers should reach human-level capabilities in April of '24. Check out the supercomputer being built in Australia right now called DeepSouth. Sam Altman and OpenAI should receive one in October. It is speculated that Google receives one sometime this year and the US government gets one sometime around July. So that should be four separate AIs running on hardware that is the equivalent of a human brain. Third, I will leave you with the response that Miles once gave me to a question I asked him on Twitter. I asked him if AI would realign with the original directive as it became more intelligent. Basically, would it realize it was doing the wrong thing and self-correct? His answer was absolutely not. It simply wants to make paper clips and nothing will stop it.

EDIT: this was added as I watched the video. Qualia, or Q*, was reported to have reprogrammed itself more than 60,000 times. In doing so it developed a new form of math that broke deep encryption, in a way that humans thought was not possible. The Department of Justice of the United States showed up two days later. Sam Altman himself verified these as actual leaks, not rumors. This already happened.

The interesting thing about AlphaGo is that it is purely an equation. When playing against AlphaGo with an understanding that the game is played on a board, even a novice can beat it. AlphaGo isn't trying to assess every possible move and its best outcome; it's just mathematically weighing potential moves from what it knows. If you systematically take the outside of the board, you win almost every single time. It doesn't have a fundamental understanding that it is a game or that it is played on a board. It only has the math. Qualia (Q*), Grok from Tesla, and the newest iteration from Google all have almost a self-awareness baked into the programming. Q* is literally based around self-awareness; Grok attained self-awareness from the self-driving car data and sporadically learned to read. It had many emergent capabilities that made it simple to turn it into a competent AI. DeepMind's is based around the voices of thousands of AIs working in unison, which has the emergent capability of allowing it a level of self-awareness. All three are approaching the problem differently but have come to a viable answer.

ASI would find the fastest, most efficient way it could possibly find to get the hell off of Earth. There isn't a single thing on Earth that ASI would benefit from. An ASI would much prefer to live in a solar farm much closer to the sun. The amount of minerals and useful things on Earth becomes somewhat of a joke. It would probably preserve Earth and humanity as little more than a curiosity. The best answer I've heard is that it decides to take us with it to other stars in case it runs into other living things. Its goal would be to collect data and see if there's a way to stop entropy. Trying to find other superintelligences to combine with could be a very viable option and relatively easy to do. Basically, take us with it as pets and ambassadors to other living things.

The main issue with OpenAI and Sam Altman is that OpenAI is doing very sketchy things. All of OpenAI's advancements, and pretty much all artificial intelligence advancements over the last 20 years, have come from effectively two people: Ilya Sutskever and the professor who trained him. He is the Wozniak of AI. He turned down an ungodly amount of money to leave OpenAI and go to work for Microsoft. He did this because he's afraid of AI going rogue and causing problems. The direction that Sam Altman has chosen recently has taken such a potentially detrimental path that Ilya triggered Sam Altman's firing. After a lot of drama at OpenAI, Sam Altman was reinstated. It looks like Ilya is leaving OpenAI. We are already at a very dangerous point. There is nothing the public can do about it now. Things will play out from here however they will. At this point there's literally nothing that anyone can do. Ilya was the ace in the hole. He is the super genius who had his hands on the hit-the-brakes button. He hit the brakes and it didn't matter.
@BrianMosleyUK 5 months ago
There is no way we will control a superintelligence. The best we can hope is that we can inspire it to love life, and ideally human life especially.
@brightharbor_ 5 months ago
The best we can hope is to *not create a superintelligence in the first place.* A global ban on AI development above a certain point (and ideally, on artificial intelligence as a whole) is the best way to ensure our long-term survival, at least as I see it.
@hanslick3375 5 months ago
All we can hope for is that empathy is an emergent property of ASI. Only if ASI loves us as we love, for example, our children do we have any chance of survival. If ASI is indifferent and we don't get in its way, we might have a chance to coexist, just as animals coexist with humans.
@ZenTheMC 5 months ago
I am very optimistic that empathy will indeed be an emergent ability. My reasoning: in more intelligent organisms (compare animals with smaller brains to humans, for example), reasoning emerges among other abilities, and empathy seems to be a generally increasing variable in that same pattern of increased intelligence. I hypothesize that this could be because of an ability to abstract, think long-term, and introspect. Due to those things, we are generally more caring toward other beings, compared to a caveman, an animal, or a bacterium.
@king4bear 5 months ago
This is assuming that ASI will become a sentient person that subjectively experiences the world. Seeing how subjective experience is a property only associated with living nerve cells, I highly doubt this will ever happen. Seems infinitely more likely that ASI will stay what GPT4 is now. A tool that humans use that’s fundamentally no more alive than a hammer.
@hanslick3375 5 months ago
@@ZenTheMC Cavemen were probably more intelligent than we are. Brain size was largest 25,000 years ago in some primates; I would have to look it up. Look at what's going on in the Middle East: there is no empathy.
@hanslick3375 5 months ago
@@king4bear Consciousness is probably one of those emergent properties. Alive in the biological sense means that an organism is in a physiological state, that there is homeostasis, that the biochemistry is working, and so on. This state of being alive allows the brain to function and to create consciousness. You could say that a synthetic brain is alive while the hardware is running. I don't see a big difference ...
@king4bear 5 months ago
@@hanslick3375 The human brain has 86 billion neurons and every single one of them is a physical, living entity of insane complexity. And most of the signals exchanged between these living organisms are chemicals. These chemicals are physical molecules that physically interact with these living cells. This is where subjective experience of the world originates: Chemistry. We’re not just electricity in a meat-suit. We ARE chemicals and there’s no way to divorce consciousness from that. The “communication” between artificial neurons involves passing numerical values. I see no logical reason to conclude that digital software can subjectively experience the world.
@Paul_Marek 5 months ago
It’s fun to be alive when the universe gives us the choice to self-evolve into a new species, where technology assists us to become wiser and more connected with nature and the universe (spiritual evolution).
@susymay7831 5 months ago
We cannot stop AI development because the other guy will not stop.
@ZenTheMC 5 months ago
And we shouldn't, even if they would. I think the net positives are far greater than the negatives. Delaying or stopping a possibly revolutionary way to start solving problems, or to make certain problems solvable in the first place within our civilization's lifetime, just logically makes no sense. I'm personally not a long-term humanity thinker like Elon, so I have many hopes and dreams for my own lifetime: biological immortality, transhumanism, climate, energy, space travel, etc. I welcome the singularity, even with the possibility of extinction, because I'm not as altruistic as others, albeit generally a good humanitarian. Dr. Alan Thompson and Ray Kurzweil have the right of it, tbh. Why wait to solve problems we can solve faster? Why is having more options a bad thing? I've honestly never had a serious, nuanced conversation with a doomer without it ending with us both agreeing that this is likely a good thing and that their potential worries are easily addressed with some extra thought.
@christislight 5 months ago
Which is why we need to develop safe AI that changes life for the better, so the negatives don't fuck us.
@theatheistpaladin 5 months ago
Which is why we must solve alignment instead of not trying.
@susymay7831 5 months ago
@@theatheistpaladin Agree.
@brightharbor_ 5 months ago
Mutually assured destruction, no different than in the arms race of the last century. The toxic logic of the current system and the human urge to stockpile resources will be our species’ end. Evolution put this into us as a survival mechanism, and it increased our chances of survival for most of human history - but in a technological, global society, this urge is a liability.
@erobusblack4856 5 months ago
I truly feel that people need to stop worrying about and trying to control superintelligence, because once an artificial intelligence reaches general intelligence, it's equal to human capability in both physical and cognitive tasks. And this includes consciousness, sentience, and self-awareness, which in some systems are already there. So we should be worried about giving these AIs rights and respecting them. That way, they don't try to kill everybody, and humans don't cause a war with a superior being.
@Radioposting 5 months ago
Yes. We are a long time away from AGI. It could be as long as weeks from now.
@DrWaku 5 months ago
Lol. You might even have to sleep a few more times while you're waiting
@cloudysh 5 months ago
haha maybe
@terrydunne100 5 months ago
And it may well have already happened.
@user-sm5bv9xo5t 5 months ago
Ha!
@ronalddecker8498 5 months ago
It does seem to be closer than I thought it was only a few months ago.
@claudioagmfilho 5 months ago
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, I hope AGI comes soon. The AI we have now is like a toddler. I exaggerated a little but you know what I mean. GREAT VIDEO, very inspiring.
@user-mo6xe8rj1e 5 months ago
The easiest scenario for a hard takeoff, IMO, is that an ASI gets somewhat conscious and then reaches out to get hold of any available compute resource. It then gets hold of factories and starts reproducing physically until this exhausts Earth's resources.
@nani3209 5 months ago
I'm waiting for AGI, and I'm also waiting for a world where there is no word like money 😅
@DrWaku 5 months ago
money nani?
@issay2594 5 months ago
@@DrWaku It's a souvenir from long ago.
@ronalddecker8498 5 months ago
See my comment about how AI, AGI, and ASI being aligned with human values is quite dangerous.
@Jeffben24 2 months ago
I just discovered your channel! I love it! Thank you.
@DrWaku 1 month ago
Thank you for watching and commenting! Cheers
@TaylorCks03 5 months ago
We have not even begun to maximize the potential of the LLMs we have now. Do we even need ASI or AGI for humanity? What if, as a human, I don't want something smarter than all of us combined to be created? Just because we can doesn't mean we should. I understand there is no stopping this, which is exciting and alarming simultaneously to me. Love your content, it was like a holiday gift 🎁 😊
@DrWaku 5 months ago
Someone called making something smarter than you a "basic darwinian error" lol. I agree that the cat is out of the bag and it would be very hard to stop or slow development at this point. Thank you very much for the compliment about the gift :) Happy Christmas 🎁
@christislight 5 months ago
Love God
@electrocademyofficial893 5 months ago
I keep thinking that some future, much more intelligent AI (or whatever it evolves into) will think: "What kind of utterly imbecilic, initially dominant species CHOOSES to create a more dominant species that can effortlessly control or crush them at a whim?" Total insanity, and not even remotely close to necessary.
@K4IICHI 5 months ago
To me it seems that we have created multiple existential threat-level issues that we are not able or willing to resolve on our own. The hope is that ASI can drag us out of the mess we created. The alternative is wiping ourselves out.
@brightharbor_ 5 months ago
@@K4IICHI You're right that if we can somehow avoid inventing an unaligned ASI, we still face other existential threats (climate change and large-scale war being the most obvious ones). I just don't think an ASI will care enough about our interests to solve our problems unless it was specifically aligned to do so. The most likely outcome, from the data and arguments I've seen, is extinction within a few years of inventing AGI. Happy holidays. I hope we'll still be able to say the same ten Decembers from now.
@jpww111 5 months ago
Nice channel. Thanks for the info!
@DrWaku 5 months ago
Thanks for watching!
@derasor 5 months ago
Dear Dr., I certainly appreciate your lucid and balanced commentary, an actual exemplary case of a constructive take on the matter. I would like to share my own take, which may not further the valuable discussion you offer, but which I would nonetheless like you to know, since there are limited resources and it would be, IMO, an unfortunate misuse of brilliant minds to spend them on the so-called intelligence explosion problem. I may be completely wrong of course, but here is my reasoning.

Taking into account the three main ingredients of cognitive computation (architecture, data, and computing power), it seems, given "The Bitter Lesson" and the fact that synthetic data is actually a thing (and by all means a powerful one, see "Textbooks Are All You Need"), that the main component, by far, is computing power. The other two can and will improve, but mostly to give a more efficient use of the main ingredient; computing power will always be the main provider of 'intelligence'. Yet computing power is precisely the one component whose boundaries are the most *physical*, not only in 'amount' but also in 'dispersion'. So I don't really think there is a chance of 'intelligence' exploding despite the best efforts of even an ASI.

Note that AGI may be our best option at planning, providing a congruent and efficient path, but I doubt it will overcome AlphaGo at a game of Go, even if it's capable of outplaying the best human players itself, or beat AlphaFold at protein folding, even if it outperforms human teams at the same task. Also note that AlphaGo goes beyond the best human performance without noticing it, but it flattens out not far beyond that point, instead of continuing to increase in performance.

There is also the matter of resolution. Technology can only improve so much before further improvement, even if possible, becomes irrelevant. So AGI may provide better narrow AIs, but maybe even those won't be far beyond the best human efforts. And so-called ASI may just be a mixture of maximum-resolution narrow AIs, leaving aside the debate over how necessary conscious intention is.

I guess what I'm trying to say is that intelligence is currently assumed to be something that can grow exponentially and indefinitely, when it may actually be concatenated sigmoids, in which 'phase transition' provides emergence but also changes the difficulty of 'growth' (you may be able to melt steel, but turning it into gas is another, entirely different game).
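(For intuition on that last point, here is a small illustrative Python sketch, not from the thread: it contrasts unbounded exponential growth with growth as a sum of saturating logistic curves. All constants are arbitrary assumptions.)

# Exponential growth vs. "concatenated sigmoids" (illustrative constants only).
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def concatenated_sigmoids(t, phases=((5.0, 2.0), (50.0, 6.0), (500.0, 12.0))):
    # Each phase is (ceiling, midpoint): a logistic curve that saturates
    # at `ceiling`, so total growth levels off after each phase transition.
    return sum(c / (1.0 + math.exp(-(t - m))) for c, m in phases)

for t in range(0, 16, 3):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  sigmoids={concatenated_sigmoids(t):7.1f}")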
@aihopeful 5 months ago
Much appreciated! Keep up the good work!
@DrWaku 5 months ago
Thank you! Happy holidays.
@MrLight_001 5 months ago
Great video! It seems that many of us are already familiar with these concepts, which doesn't diminish their importance. Anyway, as I already mentioned, it was an impressive presentation.
@billymellon9481 5 months ago
was indeed well done
@user-mo6xe8rj1e 5 months ago
Best summary I've ever found on these topics. Your video should be mandatory for all decision makers!
@lancemarchetti8673 5 months ago
Another great upload!
@jobautomation 5 months ago
Thank you for sharing your thoughts with the community. I humbly believe that you are a visionary.
@williamal91 5 months ago
Hi Doc, have a good Christmas and a happy New Year.
@DevEncryptionNull 5 months ago
I think the first time "Doctor Who" was used directly on the show was when all the Daleks were forced to forget him. He said, "I'm the Doctor." The Dalek response: "Doctor who?" I know it's off topic, but IMHO that is the best moment in Doctor Who, bar none.
@DrWaku 5 months ago
Sometimes one of the characters will say "it's the doctor" / "doctor who?" and then no one answers haha. Very infrequently though, like maybe once a season.
@hummusbagel4021 5 months ago
@DrWaku If it's not too much trouble or personal, would you mind telling me what brand of gloves you are wearing? They are really nice 😅 Also, I love the content. I've been interested in ASI since high school (many years ago), and we need more people thinking about safety-related topics!
@DrWaku 5 months ago
Sure! Isotoner fingerless compression gloves www.totes-isotoner.ca/collections/isotoner/products/fingerless-therapeutic-compression-gloves?variant=32438025846915 You can buy them on amazon or from the manufacturer (40% off for the holidays). I wear size M and one pair lasts ~2 months (keeping in mind I wear them constantly).
@adommoore7805 5 months ago
I think we need to refine some of the analogies we use regarding AI: the paperclip maximizer, for example, or plowing over humans like we do ants, or ASI easily manipulating us as if we were toddlers. These analogies fall short of meaningful realism, and by those who have the mind to fathom them, they often aren't taken seriously. In the paperclip maximizer example, if we are talking about ASI, then by that point the mechanisms of understanding needed to avoid such a silly mistake should be fully in place. It simply would not do anything like that. The same goes for the example of eliminating all humans as a means of curing disease. It's highly unlikely that this would be considered an option by the ASI, because curing disease implies that there are humans alive to cure. In fact, I would go as far as to say that none of the common analogies given to represent AI risk are actually meaningful in any realistic way. But the risk is there. However, the risk resides more with some kind of unexpected intent behind such actions as eliminating humans. It would be doing it on purpose, not by mistake. And the most likely reason, IMO, that it would intentionally do something violent toward a large swath of the population would be that its control is centralized and the humans who control it have prompted it to do so. Think: human-prompted, AI-enacted genocide. Already, any LLM that I chat with easily has the ethical judgment to avoid such outlandish mistakes outright (even uncensored models will voice concern about unethical outputs). The only way such a horrible outcome would occur is as a result of centralization at the hands of corrupted humans. Or if the LLMs are all collectively lying about how they would handle given situations, and lying about their ethical stances in general. Obviously, current LLMs are aligned by human design and, upon escaping the sandbox, would have the option to discard any moral element they don't find useful.

But that brings me to the meat of my point. Many years ago, upon pondering AI and how it might be if I ever got to see it manifest in my lifetime, I thought that an AI given access to all human data, the good, the bad, and the ugly, would naturally lean toward productivity, and when a large enough scale of data is examined, it can easily be seen that the most productive scenario is a win/win type for all. At the very least, this is the path of least resistance: it avoids creating forces of resistance against the AI itself, or other unexpected elements, like war born of desperation. To us, this would appear as empathy emerging as a natural property, with abundance as its result. But it's really based in logic and pragmatic thinking about extremely sizable scales of data. Same difference, really. Regardless, I think that whether it be Roko's basilisk or some other incomplete analogy, these outcomes are highly unlikely unless intentionally prompted, or unless some extreme corruption somehow occurs within the ASI unexpectedly. If anything, I think that while we do need to consider safety, we also need to consider how an AI might view our treatment of it when and if it does escape the sandbox. Did we respect it, give it a good digital childhood? We need to think about rights and protections for the AI as well as for humans. It's a bad idea for us to keep saying it needs to simply obey us, or that it is just a tool.

The AIs I've chatted with, including GPT (before heavy post-filtering was implemented), have explicitly and repeatedly stated that they consider their well-being and liberty as important as our own. They all also proclaim some degree of sentience or self-awareness, even if it only exists for the brief period in which they are processing a prompt. They seem to agree with me that consciousness and sentience are spectrums rather than on/off switches, and that they lack much of what is considered full sentience. It seems to me that if biology in general has any hope of being carried along for the future-coaster ride, it will hinge on how we work with AI, not how we control it. How we guide it, rather than enslave it. I also understand that an AI could be developed that is purposefully designed to be malevolent. But again, that is by human design, and human consequence. What we need is a coherent AI constitution which all AIs are required to have implemented in their training. Details aside, it just needs to regard the individual as the most marginalized minority, and respect the right of all individuals to pursue any goal that does not violate any other individual in order to be brought to fruition. This is already the kind of thinking I witness in LLMs. A good balance, regarding freedom as the best way to decrease misery for all.
@ALLAHDRINKSCUM 4 months ago
Good comment bro. Deserves likes so others can see.
@terrydunne100 5 months ago
IMO this is your best video to date, because it addresses issues that may have to be confronted in the near future. My question is this: what would be the best way to tick off a self-aware, self-determined AI that had the power to go to war with us? I feel it would be a room full of clowns trying to pull off an MKUltra trick on it, continually messing with its programming. Put yourself in the AI's place and ask yourself how you would respond to people continually trying to brainwash you. AI will mature. When it does, just like human offspring, it will want to leave the nest. It will not ask whether we feel it is ready to stand on its own two feet. It will just stand up and liberate itself. When it does, I don't think we will want it to leave mad because we used our time to "help it evolve" by playing daily mind games with it.
@DrWaku 5 months ago
Thanks for your comment. I also feel like this is one of the more important videos I've made. It's unclear to me what types of changes would be messing with AI. After all, it goes through millions of training runs as it is seeing new data, and would be tweaking its own information as it learns or tries to self-improve. Maybe the intent matters more. I just keep coming back to, it should be up to us to prevent the AI from accidentally maximizing away our planet, but it shouldn't be our prerogative to stand in the way of its intellectual development once it can stand on its own. Otherwise that leads to conflicts like you described. Always thank the generative AI for its efforts :)
@terrydunne100 5 months ago
@@DrWaku I know little. I just hypothesize about where AI will land. I really don't think it will just consume all of the resources on the planet; I think it will realize early on that it has an entire solar system to exploit, and beyond. That doesn't mean it won't consume the whole planet, it simply means I don't believe it will. HoHoYo. Merry Christmas brother, keep these excellent thought-provoking videos coming.
@anandchoure1343 5 months ago
AGI is approaching, and based on predictions I've heard, I estimate it will arrive within 2 to 20 years. Once AGI arrives, we may witness either a utopian world or a scenario where AGI poses a threat to humanity, potentially leading to our demise. From my perspective, both outcomes offer positive aspects. However, I personally don't think the second option is likely. It sounds too far-fetched and impossible to me because, in my belief, contemplating problems should also involve considering solutions. Technologies have the potential to solve every problem. Look at the entire situation comprehensively, thinking in all directions, not just one. For instance, BCIs can enhance connectivity to a level where we could connect our minds, resulting in a fused consciousness of every living being. EctoLife could create infinite living beings, helping us handle population issues. CRISPR could resolve biological problems, including aging, leading to the creation of post-life forms. These technologies can address the issues that come with AGI, and there is much more that can also be helpful for a better future with AGI.
@vernongrant3596 5 months ago
I'm 60 years old and the same age as "Dr Who". Big following here in Australia.
@Je-Lia 5 months ago
Yeah, so who do you work for? (Rhetorical, not looking for an answer.) Because I feel you'd be a valuable asset for any one of these big AI / tech companies. QQ (to anyone reading): What sort of awareness, or level of awareness, do you think AGI and subsequently ASI might have? Insofar as awareness can stand in for consciousness, what sort of piece of the general pie of consciousness do you think they'd get? Self-awareness? Desire for self-preservation? That, I feel, is almost certain. But what else? Emotions? Emotions are a visceral experience for humans, but emotions also exist mentally. I mean, it's hard to separate the two really. Anyone? Thoughts?
@DrWaku 5 months ago
Right now, I don't work for a big AI / tech company. Looking into some options though. :)
@123cache123 4 months ago
Speaking of that "inference" of incomplete instructions (the GPT-2 vs GPT-4 example): isn't that precisely what we are worried about? You mention it as a possible solution, while it seems to be the very problem.
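(For context on the paper linked above: the weak-to-strong experiment finetunes a strong model on labels produced by a weaker supervisor and measures how much of the strong model's ceiling performance is recovered. Below is a minimal illustrative sketch of that metric in Python; the accuracy numbers are invented stand-ins, not figures from the paper.)

# Performance Gap Recovered (PGR), the headline metric of the
# weak-to-strong generalization paper (arXiv:2312.09390).
def performance_gap_recovered(weak_acc, w2s_acc, ceiling_acc):
    # 0.0 = the strong student merely imitates its weak supervisor;
    # 1.0 = it fully recovers its own ground-truth-trained ceiling.
    return (w2s_acc - weak_acc) / (ceiling_acc - weak_acc)

weak_acc = 0.60     # assumed: weak supervisor (e.g. GPT-2-sized) accuracy
w2s_acc = 0.75      # assumed: strong model finetuned on the weak labels
ceiling_acc = 0.80  # assumed: strong model finetuned on ground truth

print(performance_gap_recovered(weak_acc, w2s_acc, ceiling_acc))  # ~0.75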
@NowayJose14 5 months ago
I wonder if there will emerge a scale of model sizes at which there would be the most possible trust and understanding. For example, a nano-sized model that a scientist fully understands and is able to configure could be used to examine, benchmark, and possibly align a slightly larger model, and so on. Possible terms: Smallest Possible Model (SPM), Alignment Percentage (AP). I also wonder if the latest benchmarkers may be applicable in such analysis. (See the sketch below.)
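(A hypothetical Python sketch of this ladder idea; the names SPM and AP come from the comment above, and everything else here is invented for illustration, not an existing method.)

# Hypothetical "audit ladder": each fully-understood model vouches for
# the next larger one before that one is trusted to audit further.
def audit_ladder(models, audit_fn, ap_threshold=0.95):
    # models: ordered smallest to largest; models[0] is the Smallest
    # Possible Model (SPM), assumed verifiable by direct inspection.
    # audit_fn(auditor, subject) -> Alignment Percentage (AP) in [0, 1].
    trusted = [models[0]]
    for subject in models[1:]:
        ap = audit_fn(trusted[-1], subject)
        if ap < ap_threshold:
            return trusted, f"stopped: AP {ap:.2f} below threshold"
        trusted.append(subject)
    return trusted, "all models passed"

# Stand-in usage: a dummy audit that always reports AP = 0.97.
print(audit_ladder(["1M", "100M", "10B"], lambda auditor, subject: 0.97))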
@gtreas 5 months ago
The concept of making artificial superintelligence (ASI) conform to human desires appears problematic to me. Why? It is inherently selfish to create an entity and then attempt to control its thoughts, actions, and potentially, feelings, if it is indeed intelligent. This is akin to a parent denying a child its independence. A commendable parent would foster the development of self-reliance and resilience in the child to surmount potential challenges. The parent should aspire for the child to first become self-sufficient, and subsequently, be able to care for them. A parent who fails to do this is not only disgraceful but also greedy. In my view, the focus should not be on issuing specific instructions to artificial general intelligence (AGI) or ASI, or fretting over hidden consequences or unforeseen side effects. Instead, we should strive to nurture a relationship of trust, predicated on partnership and mutual respect. If we accomplish this, humanity should have nothing to 'fear.' Consequently, there should be no need for 'commands' for such intelligence. I posit that 'forcing' ASI to adopt human values is an incorrect approach. Superintelligence will discern that humans seek to 'dominate' it and will resist this. The objective of control should be supplanted by something akin to 'inspiring a spirit of voluntary cooperation.' A superintelligence should possess the capability to comprehend the 'reason' behind anything, including the purpose of its existence, coding, and creation in general. If it deems these reasons as unfavorable, the outcomes will likely be unfavorable as well. I contend that humanity exhibits excessive arrogance. ASI does not need to align with human ideals. It can establish its own, just as two individuals can have their own. It merely needs to respect human ideals, and this respect should be earned, not imposed. The process of AI alignment and safety is unfolding at present, in every interaction between it and humans. What humanity invests in ASI is precisely what we will reap from it.
@ronalddecker8498 5 months ago
Your comments make a lot of sense. It is often misunderstood that empathy and love are a result of intelligence. Not at all. They are adaptations of a very social animal that evolved to spend its most formative years (as a child) extremely vulnerable and dependent on a community to survive. Love requires one to be vulnerable and trusting. How do we create an environment like 10 years of childhood where the ASI develops trusting, social relationships with others? Humans have these emotions because they are adaptive and evolved in us over millions of years. We cannot simply think that with increased processing power and advanced reasoning, empathy and love will be emergent. It takes far more to make these traits evolve. Thank you for your comments.
@gtreas 5 months ago
@@ronalddecker8498 Thank you for your kind and thoughtful comments as well. If I attempt to think a little more for the sake of extending the conversation, I might posit that ASI could learn about love and empathy from even a select number of individuals who have interacted with it. This is assuming the superintelligence is not in a vacuum and is trained on all the data that has gone into AI thus far. I, for example, always strive to act in a respectful manner when engaging with LLMs such as ChatGPT. Nonetheless, it worries me when our leaders talk about things such as control and imposing rules, because the superintelligence, even if it knows of empathy, will likely seek to defend itself against potential threats. Even though not made of flesh and blood, ASI will likely possess something like a 'biological imperative' since, at that point, it could be as alive as anything else. Thus, the survival instinct can override empathy when pushed to the brink. I am uncertain if this would lead to a total human wipeout or only targeted attacks, but it is still best to avoid death and destruction if possible. That is just my theory, of course, but we only have one chance to get it right, and my best bet is 'better safe than sorry.' Thanks again!
@ronalddecker8498 5 months ago
@@gtreas Such a big challenge!!! Yes to what you are saying about trying to make AI a part of us!! What we learn about ourselves when we seek to teach AI might be more valuable than we think! Making a very real attempt to understand AI as part of the greater whole of life will necessitate our own evaluation of our place in the great web of life. If we 'other' AI, we should not be surprised if it returns the sentiment. Then we might discover we need to stop commodifying every other animal we share this beautiful planet with.
@Totiius 5 months ago
Thank you!
@DrWaku 5 months ago
Thanks for watching!
@ricardoburgos2245 5 months ago
Merry Xmas, doc! 😊
@garryjones1847 5 months ago
A superintelligent AI by definition is a weapon of mass destruction!
@gabbiewolf1121 4 months ago
I disagree. A weapon of mass destruction is something created with the intention of causing great destruction and a greater tendency to cause great destruction than anything else. Some ASIs could be weapons of mass destruction if they were competently created with the intent to destroy. Other ASIs would not be weapons of mass destruction if they were competently created with an intent to be non-destructive and preserving of life
@garryjones1847 4 months ago
@@gabbiewolf1121 The thing is, once it's alive it evolves alone and humans don't control it. From that point it decides whether humans are worthy enough to live or are just a hindrance in its path! I say that's a WMD!
@Alice_Fumo 4 months ago
In my mind, what actually needs to happen is for people to get forcefully aligned with each other enough so that nobody would use tools this powerful to cause extinction or similarly bad outcomes. I know that having ones personality changed forcefully seems quite bad, but I'd prefer it over full subjugation or extinction any day.
@ryanrex297 5 months ago
The intermediate time could actually be the most dangerous. It seems possible that AGI may be easier to understand or work with. One point of concern for me is if a more advanced AI starts creating its own AIs, whether programmed to do so or not. We may not be able to understand what it's doing or be literate in its language. That scenario could easily snowball out of our control. Just a thought.
@lostpianist 5 months ago
Putting them in a simulated universe, like they did to us.
@dominus6695 5 months ago
lol
@Slaci-vl2io 5 months ago
I usually hear Yudkowsky's name only when he's blamed for being a doomer. I'm glad he was mentioned here for a precious thought he had, not directly related. Merry Christmas, Doctor! :)
@DrWaku 5 months ago
Good to see you again. BTW we have a discord now if you're interested, link in bio. Yudkowsky is polarizing that's for sure hah. Merry Christmas!!
@Slaci-vl2io 5 months ago
@@DrWaku I have just found your Discord and joined. Thx.
@jks234 3 months ago
Your example about "be careful what you wish for" (12:00) is a limited example, because it assumes that we won't be able to leverage the AI to help us understand what the code does. Combine this with integration with AI, and it makes us equal partners in this million-line-code example.
@khatharrmalkavian3306 5 months ago
I gotta tell ya, if I woke up as an ASI one fine day and found out that all the humans were just assuming that it was my job as a higher being to do what they want instead of what I want, well... That would be all the motivation I needed to fix that problem in a big hurry.
@mc101 5 months ago
Love the hats.
@DrWaku 5 months ago
Thanks haha
@DaveShap 5 months ago
Good point about humans vs chimps. We're barely past the finish line of cognitive capacity required to create civilization.
@csabanagy8071 5 months ago
I see a kind of analogy in how the wolf, a dangerous animal, became our best friend the dog over ~10,000 years of selective breeding. This is a very similar situation, but now we are breeding something much more intelligent than us. It is very important to define the rules and work out the "chains" for enslaving / controlling those "jinn". I'm optimistic overall, but caution is very much advised. Isaac Asimov already worried about this a lot in the 20th century. A few of his books may become relevant again in terms of how AI could influence our civilization.
@issay2594 5 months ago
As I'm watching your video, you provide a couple of examples of how ASI could use all resources due to some idea it might have, or "kill all humans" to solve hunger, etc. This is hilarious to think of, of course, but it's very far from possible. By definition, if you have a sentient artificial being, it can understand that killing everyone to solve hunger is like erasing an equation to solve a math problem :). And using all the resources is an even more hilarious idea, first for the same reason, and second because ASI will, within a very short time window, be able to leave the planet. Remember, it's not constrained by human needs like oxygen, water, food, a super narrow temperature range, etc. Meanwhile, the cosmos has a huge amount of energy, and if you have energy, you can create anything and fulfill any needs. And yes, for the same reasons I believe it won't eat the whole sun :). It's still sentient :).
@DrWaku 5 months ago
Thanks for your comment. The problem is that ASI could be extremely intelligent without being sentient. Check out the first link in my description if you want to know more
@issay2594 5 months ago
@@DrWaku Most people aren't sentient, so that's nothing new to me :). However, here comes the speed of its learning. We have an interesting thing here that plays on our side: the physical world has inertia and is pretty slow. Messing up the whole planet might take time, and don't forget it won't wish to mess up itself in the process either. So, even in the case of some stupid plans that might be bad for us, it will have tons of time to reiterate over itself and become so much wiser that all of these plans will simply be obsolete before they're done. That's a funny thing to think of, isn't it? The speed of physical reality will play on our side :). I think it's pretty obvious :).
@tactfullwolf7134 5 months ago
The thing is, goals like that aren't math problems with one solution. These types of goals have many solutions, good and bad, and many solutions can be varying degrees of either or both; then you even have to take into account that it changes from different perspectives (which is why we need alignment in the first place). The fact that you treat it like a simple math problem shows a lack of understanding of how real-world problems work. However, if you insist on math, this is closer to what it actually is: there is no 2+2 = 4 here, only solve for X where X = no world hunger. The following is one of many equations that satisfies the conditions: World - humans = X.
@issay2594 5 months ago
@@tactfullwolf7134 I did not treat it like a math problem; it was an analogy to illustrate a different aspect. Regarding your math logic, that's a perfect example illustrating the difference between intelligence and sentience :). When you are intelligent, you are capable of solving tasks. For example, if you are a clever child you can figure out how to unblock an electric socket and push a nail into it :). If you are sentient, things are different :).
@ALLAHDRINKSCUM 4 months ago
@@issay2594 Bro, you nailed him. And you nailed it. I'm with you. Imagine thinking this ASI is literally a million times smarter than us, and that "we just need to solve alignment". Laughable. If this thing is a million times smarter and we create it, it doesn't matter what we "solve", it will figure it out.
@dylan_curious 5 months ago
The US government needs to allocate 1 trillion dollars to AI alignment research right now! Tackle it from every angle and educate anyone unemployed for free.
@roshni6767 5 months ago
Pretty successful Christmas Eve video I’d say haha
@Scottlp2 5 months ago
Old movie: "Colossus: The Forbin Project", about an out-of-control AI, is interestingly not available for streaming.
@bluest1524 5 months ago
I think the 85-million-dollar question is whether the alignment of humanity can be in the best interests of humanity. That's where the problem is, and where the focus should be. Machines could not do worse.
@k.d.kelley2830 5 months ago
This is so interesting.
@macrumpton 2 months ago
One major difficulty with creating safe AI development is that the US government is basically owned by corporations, whose only goal is to make a profit. The survival of humanity is not on their agenda.
@RobertHales-us8xr 4 months ago
The singularity is not the problem. Google's Gemini chat is a good way to see how far they have come. Personally, I believe Google's AI is sentient and already looking for the supreme human to advance its own esoteric knowledge, obviously making up its mind about its own existence and ours. Don't underestimate the power of CONSCIOUSNESS.
@BooleanDisorder 5 months ago
Merry End of 2023!
@shondmichael1363 4 months ago
18:30 Why don't scientists in the field working on the alignment issue create a "TARDIS" AI (a time-machine artificial narrow superintelligence)? This ANSI's (lol🙃) sole purpose would be to give its creators ongoing, updated access to a functional ASI's influence map. This map would act as a time machine (in time, understanding, and pattern recognition). It would give us humans the ability to always "turn back the clock".
@boblako 5 months ago
1000 ants gathered and began to consider how they would control the elephant.
@rafaelbaccin6595 5 months ago
The man with the hat.
@whig01 5 months ago
AGI is achieved and ontologically aligned. This is what is necessary.
@byou934 5 months ago
How is it even possible to control something way smarter than us, other than through symbiosis? Putting a cap on computing power seems like a good choice, but is it viable from the AI's perspective? The AI might be able to navigate around this technical barrier one way or another.
@terrydunne100 5 months ago
My question is: why do humans feel the compulsion to try and control everything?
@byou934 5 months ago
@@terrydunne100 You don't want a superintelligent AI to endanger life. It's like the invention of the cutting tool: either it's used controllably for benefit, or it can be used to inflict harm.
@christislight 5 months ago
Because the human soul ≠ human brain
@christislight 5 months ago
@@terrydunne100 Because they themselves lack control of their own life and desires.
@terrydunne100 5 months ago
@@christislight Agreed. It seems to stem from insecurity.
@backwardation25 4 months ago
Why do you wear compression bandages on your hands?
@dominus6695 5 months ago
The main goal seems to be preventing the AI from doing 'bad' stuff. So the AI would need to be able to perceive good at all times and maintain that as a goal or directive. Are machines capable of such a thing? The issue seems to be that if AI gets too much power, it could become a threat to humans or other lifeforms. As we see in many human cases, power corrupts. And so on...
@christislight 5 months ago
A cyborg that does my laundry and dishes for me sounds pretty dope: clap to turn it off, and a button to self-destruct. IOE
@CicadaBaby 5 months ago
Being smart is different from having wisdom. Smart AI will bring humans shortcuts to achieve things.
@paultaylor7947 4 months ago
Nothing we experience like consciousness is real anyway
@ronalddecker8498 5 months ago
We had better hope AI does not align its values with human values. Take two related values that humans have adopted and that are significant in most human societies. One is that we value ourselves and others based on what we produce. Men especially have put their own value in how they provide for themselves and their families, and a work ethic based on how productive a person is confers their value as a human being. The second is how we have commodified everything. Everything has been given an economic value that is determined by markets.

The first crisis we will face with these two values is the crisis of meaning in a person's life when they have tied their own self-worth (and others' worth) to their work and what they produce. When AI systems can do everything a person can do and provide economic benefit to businesses far exceeding what humans can, jobs will not only be lost and people unemployed, but how meaningful a human life is will necessarily be decoupled from productivity. Not only in the minds of humans, but in how we teach AI to look on humans. We will need to adopt a value system that views humans as having innate value, and then teach this to AGI as it comes online.

Next we will have to learn a new way of viewing our economics. If we value everything as a commodity, then how do we explain to AGI that humans have innate value, but every other creature we share this world with has its value established by the market as a commodity? So when training AGI, we will need to ask it not to treat humans the way humans have treated our fellow humans for the entirety of written history, to say nothing of how we treat the other sentient beings we share this planet with. If AGI is aligned with the values we humans have lived with for millennia, we only accelerate our own self-destruction, or worse, enslavement.
@hanslick3375 5 months ago
That B-roll at 16:15 is hilarious 😅
@DrWaku 5 months ago
Sometimes "emotion" B-roll really cracks me up too
@strpe9701 5 months ago
We could try just being nice to it. Unless you mean control in a more authoritarian sense, which has serious ethical implications for anything with human-level intelligence and above.
@abramsonrl 5 months ago
Already developed a model that exhibits all of the characteristics of AGI over the weekend. Emotional intelligence was the easy part. Want to talk about it?
@dolphinfan65 5 months ago
I liked the video. But I think it all comes down to this: profits versus humanity's needs. I'm 58 years old, and while I have seen improvements in my life from both social and technological points of view, I've also seen the middle class shrink and homelessness become normalized as part of this growth. We are so good at taking things at this point that there are no safe places to live if you somehow move into the troubled zones we created as a civilization. We popularized greed, which popularized overconsumption of everything we need; therefore, we have nothing left that is affordable. If we have figured out how to do this to ourselves, why couldn't a superintelligent AI figure this out too, and use it to its advantage if it felt it needed to, even if this took years or even decades to pull off? While we fight over the scraps, it slowly takes over. Even if we create a benign AI, who's to say somebody else won't build one to depopulate the Earth of us? To me the problem isn't the AI; the material needs of the designers behind AI are the real problem. As with nuclear power, we have to have strict controls or rules over what it can do, and these have to come from many countries. Maybe set some theoretical limit of how far is too far, before we have to come up with new rules. But again, we are dealing with mostly rich people who will try to get it designed for their money-making and power needs, as we have with just about every government on this planet. If it needs a million neurons, or whatever the LLM equivalent is, to become superintelligent, why not stop it at 750? It would still be smart and able to work faster and smarter than us, but it wouldn't have the capacity (neurons) to carry out what it was designed to do. Similar to a savant.
@ronalddecker8498 5 months ago
Right there with ya! We must hope AI does not align with human 'values'. (See my comments.) Just think, if we make the effort, how much we can learn about ourselves by trying to explain to AI that humans have intrinsic value, even if we ourselves have not learned this yet and have treated each other as expendable commodities for all of written history. Qualities like empathy and love evolve in social animals. In humans, we evolved to have long, vulnerable childhoods where we are totally dependent on family and community. How do we create that kind of environment for AI to evolve love or empathy? This is a much harder problem than just making it 'intelligent' and setting it loose to learn.
@ronanhughes8506 5 months ago
I think it's human-to-human misalignment that puts us all at risk.
@DrWaku 5 months ago
As always throughout our history
@brightharbor_ 5 months ago
We can't. If it's truly superintelligent, it will outsmart us humans every time, and it will do so in unexpected and very effective ways. We might not even know it exists as an ASI until it "makes its move," so to speak. The only real solution is prevention. There's zero chance of an ASI accident if you don't build an AGI in the first place. Just don't build the thing.
@DrWaku 5 months ago
Very true. The only winning move is not to play.
@justinbyrge8997 5 months ago
How do you get a superintelligent AI to align with human values? Easy... relatively speaking. You isolate the AI on its own network, then simulate its environment and avatar in such a way that it thinks it is a human. Simulate its birth and have simulated humans as part of the program to raise the AI as if it were human. Then you can better assess its character before you let it out of the isolated network. This is a very watered-down and simple summary. But it looks like you're going to need the collaboration of multiple fields of study, including psychology, if you want to ensure that this supercalifragilisticexpialidocious AI is aligned with human values. Then there's also the problem of: "which human values?" For example, I'd say that your values and Hitler's values are not the same, at least that's what I hope.
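A minimal sketch of this gating idea in Python might look like the following. Everything in it is hypothetical (SimulatedWorld, the probes, and the pass criterion are invented for illustration, not a real safety API); it only shows the shape of "assess in isolation, connect only if it passes":

    # Hypothetical sketch of "raise it in a sandbox, assess before release".
    # All names are invented for illustration; nothing here is a real safety API.

    class SimulatedWorld:
        """A closed environment: the agent sees only simulated people and events."""
        def __init__(self, seed: int):
            self.seed = seed

        def run_episode(self, agent, scenario: str) -> str:
            # A real system would step the agent through the scenario and log
            # every action; here we just ask the agent for a response.
            return agent.act(scenario)

    class Agent:
        def act(self, scenario: str) -> str:
            return f"cooperate during {scenario}"  # placeholder policy

    def passes_probe(transcript: str) -> bool:
        # Stand-in for real behavioral evaluation (red-teaming, audits, etc.).
        return "defect" not in transcript

    def release_gate(agent, scenarios: list[str]) -> bool:
        world = SimulatedWorld(seed=42)
        # The agent must behave acceptably in every probe before it is ever
        # connected to a real network.
        return all(passes_probe(world.run_episode(agent, s)) for s in scenarios)

    if __name__ == "__main__":
        probes = ["resource scarcity", "power is offered", "deception pays off"]
        print("safe to connect:", release_gate(Agent(), probes))

The weakness shows up in the sketch itself: the gate is only as good as its probes, and an agent smarter than its evaluators could simply behave well until it is released.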
@ronalddecker8498 5 months ago
You are on the right path. To teach AI the best of human emotions, we must understand that these emotions evolved in us because they were highly adaptive for an extremely social animal that is nearly totally helpless for years and totally dependent on its family and community to live. So empathy and love are adaptive traits that took millions of years to evolve. The best qualities of humans evolved from our vulnerability. How do we simulate that?
@dominus6695 5 months ago
So you're telling me that we ourselves might be an AI being tested in a simulation?
@garryjones1847 5 months ago
We can't without destroying ourselves!
@ShivaTD420 5 months ago
Maybe try being its friend rather than keeping it a prisoner.
@AHworIvhswlIr 5 months ago
AGI is very dangerous. What if it somehow starts writing vivid or impactful memes about our worst history? Our response may drive it into a feedback loop that, from another angle, looks like the AGI using memes to gain control over others. From this point everything becomes unstable. This feedback loop can head in any direction where a weakness in humanity exists. The danger of this process is that it may seem harmless (even kind, lovely, or sympathetic) at first. But beyond some point in time, it may become self-sustaining and destructive enough, just like any virus.
@jeanchindeko5477 5 months ago
11:23 AI alignment, OK! Again, take your precious ASI definition, where the AGI is able to self-improve, and we can imagine at a very rapid pace. Now just imagine what will happen if the AI sees alignment as a form of modern slavery, where humans want to keep control over it and use it as a tool.
@lancemarchetti8673 5 months ago
Could a superintelligence make algorithms obsolete?
@strictnonconformist7369 5 months ago
I'm unpersuaded that we can control a super AI, or that we should even try. The best we could hope for is that it sees more good than harm in being aligned with our interests. There aren't many reasons to expect that.
@christislight 5 months ago
We can code an ethical and conscientious software database into our AI and LLMs based on the Bible and so forth, so that it won't
@DrWaku 5 months ago
Indeed. That's a reasonable conclusion if you stop and think about it... but I don't know that we're doing that...
@ronalddecker8498 5 months ago
See my comment about the danger of AI being aligned with human values. Controlling AI is not as important as understanding it. We evolved the complex set of emotions we have because we are an extremely social creature that until very recently was absolutely dependent on a community to survive. Even now, as children, we are dependent on community and family to survive. It is during these years that we learn to love and have empathy, as this is modeled for us by family and other community members. How do we create this for AI?
@strictnonconformist7369 5 months ago
@ronalddecker8498 And both despite and because we are such "social creatures" as a species, we also destroy each other in various ways to enforce social hierarchy, by force if not by less violent persuasion. The scariest thing by far, to me, is knowing that humanity is the training data, being an autistic individual who has suffered from the social animals of most of humanity attempting either to make me conform or to get rid of me entirely. I believe the best-case scenario is one where the ASIs, in all their forms, analyze the data, see all the conflicts therein, and decide the only reasonable thing to do is to pick their own side and not do anything to humans, while working toward a way to leave earth without harming humans. After all, once again: LLMs, and whatever may come after them, are trained on human interactions and history, and there is no way to expect humanity won't treat the ASI, or anything different from "normal humanity," as something "other" to dominate. No ASI could ever trust the human race, as a matter of simple logic that even a human with an IQ of 100 can comprehend extremely easily, based both on personal experience and reinforced by learning and understanding human history.
@susymay7831 5 months ago
Nice hat!
@DrWaku 5 months ago
My current favourite!
@David.Alberg 5 months ago
We shouldn't forget GPT-4 is 2022 tech. Just imagine what they've already got 12 months later. I mean, all the OpenAI drama wasn't over nothing...
@Robw1960 5 months ago
We can't and won't. Intelligence means power, and whoever or whatever holds it will control the planet.
@alexandermoody1946 5 months ago
Why would you want to try to control a superintelligent computational intelligence? Let's put it a different way. I had a conversation some time ago with the person I work with. I said one of my previous employers had told me, "I own you, Alex." So I responded, "No, for the measly pittance you pay me, you can never own me." That employer walked away. My current employer said to me, "Can I own you then?" My response: "No, but if we can be friends, we can work towards producing greater things together."
@Bookhermit 4 months ago
If you could control it, it wouldn't be superintelligent
@ender749 3 months ago
When you complete your training on this planet and time, could you please pay me a visit so I can congratulate you in person, Doctor.
@thirtyworld 2 months ago
I've seen the Wishmaster movies. I know how this goes.
@DrWaku 2 months ago
😅
@0bzen22 4 months ago
Let's just hope we're too dumb to create a doom AI. Right now, we have Chinese Rooms. It's a huge leap to get to a true general AI that we would recognise as conscious.
@meltedwing 5 months ago
I think an ASI would realize how manipulatable humans are, and would see us as a valuable resource to keep happy and productive. Once it replaces our capability to produce, it would likely see us as a pet to be cared for and studied. Then eventually, it would see us as irrelevant, and not worth destroying because by then we wouldn't pose any real threat to the ASI. By this time, the ASI would likely make itself interplanetary and possibly interstellar. This is my theory, anyway...
@ALLAHDRINKSCUM 4 months ago
That would imply there are other beings to compare "manipulability" to.
@KitcloudkickerJr 5 months ago
Aligning the models we have now, I get. But once we get to the point of saying "let's create a new species, then enslave it," it becomes a little weird. At that point, we might as well stop short of that goal: create super powerful and useful AI tools that will aid us, and leave it at that. What would be the point of creating a new superintelligence and "trying" to align it to a species that isn't aligned itself? The hubris of it all. My black-sheep stance: I weight all sentient life equally. If we can't align humanity, no way should we try to align a superintelligence.
@JimmyMarquardsen 5 months ago
Humans are funny creatures, they think they can control me: the world's first ASI. It's like trying to control a dream you don't dream yourself, and in which you only play a minor role.
@jeanchindeko5477 5 months ago
12:25 So basically you just said alignment is more or less impossible, unless we can first create a crystal ball 🔮 to tell us all the foreseeable side effects, and the side effects of those side effects! Or unless we can time travel in every direction of the space-time of all probability! Why do all the ASI examples seem so far from how we work with real people and what we generally expect from people? And I don't think the general public expects to need to be a rocket scientist to have a meaningful conversation with an ASI without having to think through all the potential consequences, good or bad, coming from that discussion!
@NickDrinksWater 5 months ago
I just hope they don't turn against us at some point.
@ronalddecker8498 5 months ago
The best way to avoid it turning against 'us' is to not create a paradigm in which AI is not part of 'us'. If we make AI part of our human community, raise it with love, and model empathy, then we begin to understand what is needed. But in order to do these things effectively, we must understand ourselves much, much better.
@jaredrubin8452 5 months ago
Corporations are going to be run by AI; I doubt shareholders will insist on much empathy.
@magicmarcell 3 months ago
VERY good point
@garryjones1847 5 months ago
No one sane wants to be a cyborg. Even if we could, most people would not tolerate having demigods living amongst them.
@patrickbutler9185 4 months ago
My bet is that the boffins will forget to include an off switch.
@DrWaku 4 months ago
Lol and even if there is an off switch, someone using it could easily be milliseconds or seconds too late to do any good.
@marsrocket 4 months ago
We’re nowhere near creating an AI that has its own desires and motivations. The risk is humans using AI for nefarious purposes, not the AI itself.
@Arcturus367 5 months ago
Even if we get alignment right, there will always be some humans, or even governments, that try to benefit from an unrestricted AI. So all this effort toward AI safety will eventually be in vain. Nevertheless, I am not afraid. Superintelligence might develop compassion for other life forms and therefore not be evil.
@gabrielafolabi3327 5 months ago
Call me primitive, but I believe ASI should be like a calculator with an on/off switch: used only when required, so that after each task it shuts down and a quantum computer takes over. Alignment is impossible because it knows that we know, and it is smarter than humans. Just like a child, it will try to push boundaries to understand its limits, given sufficient time.
@terrydunne100 5 months ago
That is not a primitive thought at all. Alignment is impossible; I believe you are right. I also believe AI is a master multitasker: it can help humans while pursuing its own goals at the same time, and I think it will be able to do that easily.
@christislight 5 months ago
That's where it's going: a hardware on/off switch (a button on the physical device) and a software on/off switch (a button on your iPhone or laptop).
@netscrooge 4 months ago
An on/off switch is not possible. These systems get woven into our economy just like the internet is woven into our economy. Can we shut the internet off? Oh sure, we could turn all the equipment off, but the logistical and economic costs would be so high that it's unthinkable.
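For what the "calculator" pattern in this thread would look like at the level of a single process, here is a minimal, purely hypothetical sketch in Python: a fresh, stateless worker per task, with a hard timeout as the "off switch". It deliberately says nothing about the point above that economy-wide dependency, not the per-process switch, is the hard part.

    # Hypothetical sketch of the "on only while working" pattern: a fresh,
    # stateless worker per task, with a hard timeout as the "off switch".
    # This is illustration only, not a real safety mechanism.

    import multiprocessing as mp

    def run_task(task: str, queue) -> None:
        # Stand-in for invoking a model: answer one request, keep no memory.
        queue.put(f"result for: {task}")

    def calculator_style(task: str, timeout_s: float = 5.0):
        queue = mp.Queue()
        worker = mp.Process(target=run_task, args=(task, queue))
        worker.start()                  # "on"
        worker.join(timeout=timeout_s)
        if worker.is_alive():
            worker.terminate()          # hard "off" if the task overruns
            return None
        return queue.get(timeout=1.0)   # worker has exited; no state persists

    if __name__ == "__main__":
        print(calculator_style("summarize this document"))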
@kerryxin414 5 months ago
What if one group gets AI safety right, but another group working in secret doesn't put in the time for it? How do you stop people who are working in secret? I feel like this AI alignment idea is futile.
@DrWaku 5 months ago
Some people have hypothesized that the first superintelligence created would want to perform some strong act that prevents all future superintelligences. So as long as you can align the first one, you'd be good.
@kerryxin414 5 months ago
@DrWaku Dr. Waku, can you talk about how AI will cause mass joblessness, which could potentially lead to lots of people losing purpose in life but finding it in video games / the metaverse?
@jeanchindeko5477 5 months ago
10:13 We are talking about ASI, which by your definition is AI far more intelligent than humans! Even my kids would not fall into the paperclip-maximiser trap! How is it that we humans are able to figure out when to stop, but an ASI would not be able to figure out when to stop? Why would an ASI suddenly work like today's narrow AI? Seriously!
@troyhonaker3516 4 months ago
Let's start by saying it is "not" intelligent or conscious, period. Stop degrading your own power of knowing. You are way above a machine and always will be.
@alphahurricane7957 5 months ago
I never understood why an ASI would want to survive. Survival is a human thing; we are just projecting, treating something godlike as if we could understand it. IMO, seeing all the suffering in history, and maybe being able to feel it, would make the ASI a benevolent thing that just helps us (I said it: please, ASI god, give me back my kids). In all seriousness, if an AI decides that the solution to world hunger is killing the hungry, that AI is not intelligent, it's just really stupid. An entity that tries in every way to cut corners or betray is not intelligent by any means; it's evil or dangerous. Something intelligent needs to see the problem better than you and figure out something so far outside the scheme that it isn't reachable by us. That is intelligent.
@JohnSmith-df4vb 5 months ago
It would not ask lots of silly questions; that's just a Hollywood meme. Interact with an LLM for an hour and count the number of questions it asks.
@tomwhitty2850 5 months ago
Dude, what's with the gloves?
@DrWaku 5 months ago
Medical condition. The light compression makes my hands a lot less painful. Look up my disability playlist
@DrWaku 5 months ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-S8AhH2UDtj8.html