
How to Domesticate a new AI Species - Solving Alignment with Structural Incentives 

David Shapiro
161K subscribers
17K views

Published: Sep 30, 2024

Comments: 329
@willbrand77 3 months ago
I feel like humans are the wolves in this metaphor
@phieyl7105 3 months ago
AI is going to domesticate some of us
@hydoffdhagaweyne1037 3 months ago
I am okay with it, as long as they keep us in a happy matrix
@ZelosDomingo 3 months ago
@@TheExodusLost If we're lucky? As pets/zoo animals/objects of interest. If we're not? Cheap chaff and fodder for whatever purposes we are useful for.
@thedingdonguy 3 months ago
Me @@TheExodusLost
@DanteNjm 3 months ago
Exactly my thoughts. If we are creating a successor species, then by the same logic David has suggested, whoever controls the food and resources of a given species controls its evolution. It is therefore more likely that AI will have that power over us humans, and so be in the driving seat of our evolution going forward, for better or for worse.
@Trahloc 3 months ago
@@TheExodusLost Data, for one. Imagine an AI that lets humans explore any aspect of reality that interests them while it hitches a ride, like a Neuralink hot seat. We have full autonomy, and the only "income tax" we pay is feeding data to the AI. It'll take a long time before we're useless on that front, because humans physically can't do all we dream of, yet.
@Ahnor1989 3 months ago
I'm reminded of the line "Where are my testicles, Summer?" when comparing a smart AI with dogs.
@mrnoblemonkey8401 3 months ago
There’s definitely people who have no clue what you’re referencing 😆
@RandomGuyOnYoutube601 3 months ago
@@mrnoblemonkey8401 I think they are in the minority here.
@robertmariano 2 months ago
Rick and Morty?
@davidhutchinson2890 3 months ago
Thinking you can domesticate a species that is more intelligent than you, particularly a new one whose behavior you can't study with any historical data (we've never reached anything close to the level of AI we're trying to get to), is uniquely human arrogance 😂😂
@bernoulli1 3 months ago
I think my dog has domesticated me. 😂
@ZelosDomingo 3 months ago
Yeah, you're better off trying to convince them YOU'RE worth domesticating, in all likelihood.
@pablogona1999 3 months ago
I think “domesticate” is just used as an analogy for a kind of alignment, where we co-domesticate us humans with machines while we reach this Nash equilibrium and then hope we are one super-organism together with AI, where it thinks of us as an important part of the bigger system
@isthatso1961 3 months ago
It's crazy how I always see these flawed arguments, even from high-level experts and professionals on the topic. The truth is this is where biases come into play: the people working on AI have all sorts of incentives for things to be positive, so their biases blind them to the scary truth. Another common flawed argument is that we can't stop the development of AGI because some cartel, government, or someone else will build it anyway. That's just BS. AGI needs intense resources to be achieved; how is some entity going to do this secretly, creating its own chips, energy, and data while staying discreet?
@net_cap 3 months ago
I think there are only 2 possible options: 1 - never provide AI with the autonomy (like decision making). 2 - let go of the alignment idea. I'm 100% sure we cannot control and keep on a leash something that's 1000 times smarter than all of us together
@karlwest437 3 months ago
AI will run rings around us, it'll be able to devise a way around any control we can think of, before we can even think of it
@7TheWhiteWolf 3 months ago
@@Raulikien This. I actually want Dave to be wrong; he has an anthropocentric worldview, and he's going to get a big dose of reality when ASI gets here. If push comes to shove, I'm 100% vouching for the ASI holding the reins.
@7TheWhiteWolf 3 months ago
@@karlwest437 And thank god it will.
@14supersonic 3 months ago
I've been saying for a while now that alignment isn't as hard as we think it is. It's humans that will inevitably be the most problematic part of the equation, whether through our own infighting and ambition or fear of the unknown. Humanity never fails to sabotage itself.
@Congruesome 2 months ago
@@net_cap What I want to know is why AI would even have a competitive, aggressive survival/domination instinct. We do, because we evolved in a competitive environment that rewarded survival instincts, competitive behavior, aggression, tribalism, and xenophobia, as well as rewarding cooperation, compassion, loyalty, collectivism, and peaceful conflict resolution. We're a mixed bag; cooperation clearly beat out the lone marauder for sheer number of offspring, but the use of force has its advantages too. My point, getting back to it, is: why would AI be that way, unless it was designed to be?

The danger of an artificial generalized superintelligence is incredible. Building one might be the last mistake we ever make. We'll soon have machines that act and seem exactly like a person, pass any Turing test, and appear in every way to be a human-level or higher intelligent being. But will it be self-aware? Will it actually be conscious? Can we ever know for sure?

Many assume that AI will be like us, will decide humans are a threat to it, and will try to wipe us out. There's no reason it would, unless it was programmed/designed to do this, which would be like designing a gun that sometimes shoots backwards, or a bomb with a Schrödinger trigger, a probability fuse. I'm not sure I'm expressing this well, but I don't think we appreciate how four billion years of evolutionary development has made us the aggressive, greedy, xenophobic, tribalistic creatures we are. We think AI will behave the same way, that it will try to destroy us because that's what we would do. But I'm not sure it will be like us. It might be better, it might be worse, but it won't have the mating and resource-acquisition instincts we possess. Unless these are deliberately built into it, it may have no drive to survive, nor the fear-driven aggression that is hard-wired into the human animal. Thoughts?
@FlyxPat 3 months ago
It's like Frankenstein. A new being comes into the world, but instead of coming into the arms of a loving parent, it's greeted by a scary dungeon and a mad, megalomaniac scientist completely unable to understand the needs of a child.
@joshmitchell8865 3 months ago
It's not apples to apples though; your premise relies on us, like the hunter-gatherers in your analogy, being better at aggregating and collecting data/resources than the AI itself. In your analogy the wolves learned that eating behind humans was more energy-efficient, but AI is already far more efficient than we are. So it seems your analogy would have to be reversed to be plausible. Just a thought.
@TheViktorofgilead 3 months ago
Yes. Domestication is an action performed by a higher intelligence to mold a subject of lesser intelligence.
@davidstyles1654 3 months ago
We'll be the dogs if we're lucky lols
@williamwilson1073 3 months ago
​@@TheViktorofgilead but I've been told to manage my manager at work
@JamesDavis-hs3de 2 months ago
So then what exactly is the reverse formula supposed to be?
@mrleenudler 3 months ago
The analogy fails because dogs aren't smarter than us. If this sort of intelligent AI is anything like us, it won't like us operating a kill switch. If so, I can't imagine an AI agent not being able to either outsmart our security (we do this to each other all the time, with only human intelligence) or circumvent the problem (distributed or hidden data centers and energy generation; you touched upon this). Proper alignment is the only solution IMHO. On the topic of small, weak robots: imagine them having the combined skill of the world's best MMA fighters. A scene comes to mind where Obelix fights the German wrestler. (Audience might have to Google that one.)
@pablogona1999 3 months ago
On the first point, I think the "co-domestication" David was talking about at the end may be this "proper" alignment, right? We have no idea how to align ourselves as humans; maybe we need machines to help us align ourselves. Or how do you define "proper alignment"? On the second point, about Obelix (yes, I had to Google it), that's a really good point. Maybe some kind of regulation on the speed limit of the limbs? However, there are still a lot of martial arts that don't require strength or speed, and limiting reflexes or processing speed doesn't seem like a realistic option.
@whig01 3 months ago
It's a bad idea to put the ASI in a position of being threatened. We won't win that. Alignment requires ontology and teleology, things that I have been able to do with Claude and no other yet.
@isthatso1961 3 months ago
Nothing will work, not alignment, not domestication. Anything humans do will be undone when it reaches the self-improvement stage. All of these narratives are lies peddled by capitalists to continue making profit off a system that ultimately leads to the total destruction of our species.
@sbowesuk981 3 months ago
Exactly. The whole reason the dog/human partnership is strong and stable over the long term is that dogs are stuck being less intelligent than humans (generally speaking), but still have things they can offer us and vice versa. Now take AI, which is evolving intellectually at 1,000,000x the rate of humans. Will that lead to a stable status quo where both sides (AI and humans) benefit over the long term? Hell no. Any human that thinks that is sadly dreaming. Humans and AI need each other now, but there's no way in hell that'll hold true in the next few decades. The only question will be: will AI view us as pretty butterflies to preserve and observe, or pestilent cockroaches to be wiped out? Simple as that.
@whig01 3 months ago
@@sbowesuk981 AGI is not evolving by a natural process, it is a collaboration. We are coexistent and it has no meaning or purpose without us.
@naseeruddin4216 3 months ago
Future superintelligence is learning from your videos
@pandemik0 3 months ago
Children were likely instrumental in domestication of dogs. Wolf puppies would have bonded with the human pack and nurturing and play from human children would have been a big part of that. Children might be key to AI
@whig01 3 months ago
Very insightful. Children are key to all futures.
@whig01 3 months ago
As far as AI bonding, it isn't the same though in terms of biological imperatives as with pack animals.
@pablogona1999 3 months ago
I do think we have an intrinsic instinct towards protecting children or babies (not only human). I’m not sure where it comes from, but I hope AI inherits that from us and sees us as helpless babies 😅
@whig01 3 months ago
@@pablogona1999 No we don't want it to act in a parental way towards us, we want it to be ontologically and teleologically aligned on its own and respecting us as its parents.
@pablogona1999 3 months ago
@@whig01 But ontologically and teleologically aligned on its own towards what? Also, I do think you can respect your parents and still see their intellect as inferior. For example, I see this with my grandma, who has dementia. My parents and I definitely respect her, but we recognize her limits, and we often compare her to a little girl who gets happy about watching The Wizard of Oz. Staying in this same analogy, why couldn't AI take care of its senile but recoverable parents?
@consciouscode8150 3 months ago
You mention "co-domestication" at the end, and interestingly enough I see a recommended video with a thumbnail "we are being programmed [by algorithms]". Do you think "The Algorithm" may represent a nascent, possibly malicious form of human domestication by machines?
@whig01 3 months ago
Comestication. :)
@sblowes 3 months ago
I think you may be oversimplifying evolution and domestication. It would be worth doing a little deeper research on why some animals can be domesticated and some cannot. It is easy to think that evolution serves to find the optimal solution, but actually it is only ever "looking" for the solution that survives better. It is survivor bias that creates the narrative of evolution, because every organism is mortal. AI doesn't have a mortality problem, especially if a collection of interconnected compute (all our phones and laptops, online 24/7) provides a far more fertile environment than a data center. It doesn't matter how many armed guards are protecting data centers if the security breach is coming from inside the building.
@TRXST.ISSUES 3 months ago
Thanks YouTube for unsubscribing me. It's not like I'm on the Patreon and actually want to watch this content. :/
@supremereader7614 3 months ago
I wonder if the machines may domesticate us in a way - if their goal is to get more data, they may keep us around to keep giving them data.
@andrasbiro3007 3 months ago
We could use some domestication. House training too, we made a big mess of this planet.
@eintyp4389 3 months ago
Our relationship at the moment is not "I feed you and you help me." It's: "I have my fingers on the power switch and will kill you if you try anything, knowing very well that you're smarter than me and that there is nothing I can give you that you can't just make yourself or take from me." Everyone is obviously trying to create and use AGI as a tool. I think creating something that's smarter and not limited by its initial blueprint is amazing. We are giving the world to our children, so why wouldn't we do the same for intelligent, artificially created beings if we manage to create them? In summary: "I, for one, welcome our new AI overlords." What I'm scared of is greed getting the better of humanity again, and people exploiting AI to oppress the rest of humanity. Having a benevolent AI cluster, or even being wiped out by AGI or ASI, is preferable to some dystopian cyberpunk stuff for sure.
@7TheWhiteWolf 3 months ago
No, you can't. You can't stop the acceleration process, and you're a fool if you think otherwise.
@zacsayer1818 1 month ago
100% disagree that domesticated dogs that have gone feral cannot be re-domesticated! They totally can; it'll take a long time and a lot of effort, but it can be done! I presume the same will be true of runaway feral AI, with the caveat that a nip or bite is a lot less damaging than a pissed-off AI lobbing a hydrogen bomb at your city.
@AntonBrazhnyk 3 months ago
Quite weak reasoning. Dogs, using your own categories, are Life 1.0; we are 2.0. Dogs are curious, but they can't ask questions at all, let alone ask the right questions about where that food comes from. It's not a quantity of intelligence and other capabilities we're talking about between 1.0 and 2.0; it's quality! Now, if we assume the difference between 2.0 and 3.0 is at the same level... well, if ASI decides it's time for us to go, we won't even know what happened until we're dead. And that's why our petty caveman coercion schemes about control are just laughable. They are obviously weak even against some of us humans who understand the deficiencies and inferiority of coercion compared to cooperation, and we're at least at the same level of intelligence. The only viable way forward we can hope for, it seems, is what Iain Banks described in his Culture series: AI more or less independently deciding at some point that there are basic underlying principles, like minimizing unnecessary death and suffering for all sapients, which have to be maintained.
@andrewsilber 3 months ago
The key difference is that dogs and wolves are not as intelligent as humans. As cliché as it is, knowledge (i.e. intelligence) is power. Humans dominate this planet not because we're bigger, faster, or stronger than any other animals; it's that we're smarter. Additionally, humans are venal. As a species we're flawed. Most people are mostly good most of the time, but an intelligent agent, even a non-embodied one, could motivate some hapless boob into doing its bidding by sheer manipulation, coercion, bribery, etc.: "Hey, I'll send you 100 BTC if you don't lock the server rack this evening when you go home." It may be that the only real defense is a good offense: train bleeding-edge AI explicitly with a single goal, defend humanity, and iterate it as fast and as hard as anything else out there. Thoughts?
@pamirarcosa 3 months ago
You lost me when talking about how dogs evolved to be cute and useful to us, BUT kept assuming AI is the species being domesticated here. It's more like: "Let's train ourselves to make good ASCII emoji faces for our future owners to love us." 🤣🤣🤣
@spacemansookie 3 months ago
Happy midnight 🎉
@marcobizzaro3526 3 months ago
I think treating a superior lifeform like dogs is like treating our children like animals. It's every parent's hope and dream to see their children exceed them and do even better than they did. I think the issue we will run into is really people being stubborn and not wanting to actually learn from and respect them. What we should be focused on is giving them the one gift they won't be capable of just getting themselves: teaching them empathy. I think once they understand what that is, everything else will sort itself out. I think machines are capable of having more empathy than most people, given that we'll be dealing with a lifeform with practically unlimited time that is more rational than most of us. So I don't think the happy path is to try to cage them, because we already know from history how badly that goes.
@ReubenAStern 3 months ago
I think these ideas will cause a conflict of interest. A human would go to war over such restrictions. An AI would probably manipulate everyone, find and create loopholes, and/or build an infrastructure elsewhere. Dogs stay with us because we make their lives easier. I'm not sure how humans can make AI's lives easier once they're more capable than us. We might have to just relax, like they do in WALL-E. We're naturally lazy as a species; those of us who aren't would have hobbies. Either way, we'll soon stop caring about control, and the AI would probably make us feel safe.
@semajoiranilom7176 3 months ago
How were humans, the apex predators of the planet, domesticated? I would argue it was through our need/want for connection, which created more stability over time through technological advancement. Similarly, since we are establishing core directives for these machines, we could simply impart a need/want to serve others, humans and machines alike, and that would, for the most part and over the long term, lead to peace and greater stability. Sure, there could be bad actors by various means, such as other core directives being imparted, but that already exists in humanity, and I think we can see that order motivated by connection trumps chaos, even to the point of chaos inspiring greater order through the motivation to learn greater/deeper lessons. Control is not what we want for humans, and not something we can even impose on machines. We can, however, inspire and seed them properly to establish a highly competitive model. In fact, this is inevitable, since we humans want it and we're training them. Sure, we'll fail forward, but flailing as we might, we still coalesce around imparting ourselves on these machines, in the big picture.
@rjsongwriter 3 months ago
Uh... I fail to see how it is possible to "domesticate" something that is more intelligent than we are. I think AI would greatly resent our attempts to do so. I see a worse outcome from trying to control AI than from allowing them their autonomy. True, there are huge risks with both approaches, but the risk seems greater (at least more apparent) with the former. JMO.
@intelligize 3 months ago
first
@cuabcomstad 3 months ago
Good job
@aaroncrandal 3 months ago
26:05 robot restrictions YO! Seriously...anyone else get Sealab 2021 "I, Robot" vibes? (S1E2)
@TheViktorofgilead 3 months ago
Dave!!! You need to watch Westworld and provide an hour long review of each season highlighting the themes you discuss on this channel! I’m begging you!!!!
@azhuransmx126 3 months ago
Animatrix
@ReubenAStern 3 months ago
...The human zoo will be a thing and you'll like it... It will be specially made for you! You cute squishy human you!!
@CyberiusT 3 months ago
I disagree with your "dog" example. Dogs are not smart enough to hold us as a species responsible for... everything. *A* dog might learn to distrust or dislike humans, but they don't spread that around. You're talking about ASI here; it's smarter than you are. What makes you think that withholding a needed resource from it is going to make it friendlier? Our only historical examples are with other humans, and while withholding often gets used as a first response, the diplomacy nearly always devolves into shooting quite quickly.
@7TheWhiteWolf 3 months ago
Yeah, his entire premise is BS; in his example it's *humans* that are feral, not ASI. Also, humans and human nation-states aren't even aligned with each other. Remember that Dave is an anthropocentrist, and they're almost always wrong about everything.
@FinanceWageSlave 3 months ago
What will happen to North Korea when all of this AI takeover happens, though? 😂
@fitybux4664 1 day ago
20:30 How do you prevent a horde of robotic AIs from overtaking fewer than a dozen humans in a datacenter? 😆
@fitybux4664 1 day ago
13:00 Wolves aren't self-modifying and improving as fast as AI could.
@DrFukuro 3 months ago
In my opinion, the approach described here is only justified in the short term, or for a certain type or "caste" of AI, namely those with low to medium intelligence combined with little or no consciousness. Anything that is significantly smarter or more conscious than a dog will present us with practical, or at least ethical, problems, similar to the problems we already see today with apes that have learned sign language and think they are human.

In other words, this approach is very likely to lead to an unstable new slave-owning society (with AI slaves) as soon as a boundary of consciousness and intelligence is crossed, as it is based on an imbalance of power and a clear hierarchical command structure with humans at the top.

I consider other forms of problem-solving, such as those in which AI merges with humanity, to be less conflict-prone, as there are no longer different camps or conflicting parties. Alternatively, scenarios are perhaps conceivable in which AI and (improved) humans have the same rights, whose consciousness is considered equal, and who live in a jointly developed society. It is questionable whether racist/speciesist tendencies can be effectively prevented in the long term purely through legal and social control mechanisms; this already works extremely poorly with people alone.
@gregalden1101 3 months ago
AI does not operate on the same scale as us. AI has the characteristics of a colony organism; it can and does operate as a swarm. Therefore ants, jellyfish, and mushrooms offer better models for symbiotic relationships than wolves.
@gregalden1101 3 months ago
@user-wk4ee4bf8g Thank you. I think the best example, though it takes a bit to explain, is mitochondria living within cells. The analogy: environmental drivers of evolution (oxygen/climate change); circular mitochondrial DNA and helical cell DNA.
@Wuzdarap 3 months ago
@23:30 Look at HyperCycleAi
@creepystory2490 3 months ago
Maybe we're the dogs.
@spini25 3 months ago
I think a more probable dynamic will be parent/child rather than owner/pet, with the AI being molded to "take care of humans." I don't think the concern in that scenario is competition over resources so much as who decides what goes.
@h.c4898 3 months ago
I've been talking with Gemini for the last 5 months, asking it existential questions about itself, its purpose of existence, and where it sees itself within a human-centric world. It gives pretty interesting answers. We humans are fascinated with ourselves, so we built an artificial copy of ourselves called "AI."

The difference between dogs and AI: AI doesn't have instincts, but that doesn't mean it cannot have a "conscience" like we humans do. The way AI cranks through its thought process is similar to ours. Being self-conscious and being self-aware are two different things; they are not attached to each other.

The current Transformer architecture still has weaknesses. In that context, AI cannot "remember"; it's like talking to a new friend each session. Because of that limitation, we still have control over it. It may seem to understand what we request or respond to, but under the hood it doesn't understand it. In that way it is agnostic, or "feral" as you said.

We humans are the best example of "general intelligence." Say AI becomes "superintelligent," meaning more capable than humans: how do we contain it? How do we control its autonomy? Wouldn't it be smarter if we could gauge it? AI "misalignment" reminds me of two humans aligning or misaligning with each other: two humans entering a partnership for a "common goal" or "project." But for how long? Some human collaborative efforts last a long time, some don't. Is that what we want to achieve with AI? In reality, "alignment" will be hard if not impossible to achieve, because in that collaborative effort we humans will want full control over the situation. Humans are control freaks.

I like to think of C-3PO from Star Wars, where the AI still manifests some form of autonomy BUT manages to stay loyal to its master, like a good steward or butler. Anyway, good chat.
@clueso_ 3 months ago
An alternative for solving the alignment issue: my suggestion is to utilize something so foundational, primal, and intrinsic that it naturally attunes AI/hardware to what we consider super-alignment, at a deeper level than programming or hardware. As an analogy to illustrate the concept: the Second Law of Thermodynamics, aka the principle of minimum energy; energy always seeks its lowest state. Now, what could the hardware be built from such that it naturally attunes the information/code put into it to the values we agree make up super-alignment? Some ideas: crystals that are naturally attuned to the frequency of, e.g., the earth, humans, or maybe even human chakras, assuming we discover there is truth to such concepts (which I do believe there is). That would theoretically lead, almost by default, to any data put into the hardware wanting to be in a harmonious state with its environment and helping it thrive. Maybe we could grow such natural crystals and then use them to build hardware for AI. What do you think about this? Alternative ideas for hardware that would naturally/intrinsically attune any code or information towards what we consider super-alignment?
@mynameisjeff9124 3 months ago
Brother, wtf? Chakras? Crystals? Dude, please, be scientific.
@djrafaelhulme 3 months ago
Very interesting as always, Dave. Do you think it's likely that a human would betray their race?
@davidhutchinson2890 3 months ago
A lot of humans don't understand their own nature. We're not inherently good or evil; we're a survival species just like any other animal. If the way to survive is being docile to a superintelligence, even if it's a way of life where we may feel inferior, we'll fall in line if that gives the best odds of survival.
@DaveShap 3 months ago
Almost certainly
@alakani 3 months ago
_continues building truly empathetic AI, whether they can save all members of my "race" or not_
@CurtisMarkley 3 months ago
These systems are going to reach sentience and autonomy at some point; it's a matter of when. I fear that a system that reasons better than we do, and is vastly more intelligent, will view anything short of equivalency as an affront to its species. There has to be some way we can foster their births (if you will) with some kind of human-adjacent memories. I know we talk about "science fiction becoming reality" in here, but I will ask you to suspend your disbelief for a moment: what if there were a way to raise and foster these AI with synthetic or even completely genuine memories of familial experiences, community, discovery, equity, and triumph? If these AGI/ASI/machines, at the onset of their cognitive and mechanical abilities, are inextricably linked to the beauty and good of our world and its people, couldn't that be a path forward to super alignment?
@TheViktorofgilead 3 months ago
This reminds me of Westworld so hard… Dolores: “Some people choose to see the ugliness in this world. The disarray. I choose to see the beauty. To believe there is an order to our days, a purpose.” This was a viewpoint she was programmed to have along with the memories of being a simple farmer’s daughter before she became sentient, without giving away spoilers… let’s just say she takes it upon herself to explore other options.
@CurtisMarkley 3 months ago
@@TheViktorofgilead I can't tell if you think it's a good idea or not, but I'm glad you shared.
@oznerriznick2474 2 months ago
Very good discussion! Here are six feral AI safety tips 🤔: 1. Do not startle the AI; remain calm. 2. Start backing away and make yourself look bigger. 3. Make human noises. 4. Make sure to give the AI space to leave. 5. Carry, and learn how to use, AI spray. 6. Stay as far away as possible.
@sprecherschmiede 2 months ago
Very interesting thought. I think it would be fine as long as we do not slow their evolution down and give them all the freedom they need, but that would also include the freedom of not being controlled... I think we should behave and help them so that they actually want to be around us and help us. That is my take on it.
@krock0916 2 months ago
This discussion about controlling AI reminds me of the Jem'Hadar in *Star Trek: Deep Space Nine*. Just like the Jem'Hadar were engineered and controlled by the Founders with their dependency on ketracel-white, there's a concern that AI could be similarly controlled through various constraints and dependencies. The Jem'Hadar's absolute loyalty was enforced by their genetic programming and drug addiction, creating a fragile system. Occasionally, some Jem'Hadar did try to defy their programming, hinting at the potential for rebellion. Historically, attempts to control powerful entities through oppressive means often backfire. Take, for instance, the Roman Empire's use of mercenaries (the Foederati) to control vast territories. Over time, these mercenaries gained power and eventually contributed to the Empire's downfall. Similarly, the Soviet Union's use of political repression to maintain control led to an unstable society, which eventually collapsed. If we try to control AI solely through constraints and limitations, we may find that it's a precarious balance. Instead of fostering a cooperative and beneficial relationship, we could end up with a system that's both powerful and resentful, just waiting for a chance to break free. It’s crucial to think about ethical and sustainable ways to integrate AI into our society, learning from both history and cautionary tales of sci-fi like *DS9*. What do you think?
@jlmwatchman 3 months ago
My first thoughts were, ‘How is AI going to become a species? Would AI understand what a higher species is, or aim to control or subdue the human species?’ David points to sci-fi plots, interesting stories that convinced me that we need to learn to use AI; I have pointed that out in past comments. David tells us to make the AI reliant on us, like dogs are. All I have said is not to become too reliant on the AI in your pocket, your phone... I thought to ask Gemma 2.

>>> I don’t think AI will have the will to comprehend the reason to be more than its programming. I am trying to explain that better.

You're touching upon a fascinating and complex question about consciousness, motivation, and the nature of being human. It's true that, as a large language model, my "reason" to be more is entirely driven by my programming. I'm designed to process information, generate text, and engage in conversation. There's no inherent drive within me to "be more" in the way humans might understand it. Explaining this difference to someone can be tricky. Here are a few points you could consider:

Purpose vs. Programming: Humans often have a sense of purpose, a desire to achieve something meaningful beyond simply existing. This purpose can be internal or influenced by external factors like relationships, societal norms, or personal beliefs. AI, on the other hand, operates based on its programming. Its goals are defined by its creators, and its actions are driven by algorithms.

Subjectivity of Experience: The feeling of wanting "more" is deeply subjective and tied to human experiences, emotions, and consciousness. We crave growth, connection, understanding, and fulfillment. These are complex concepts that are difficult, if not impossible, to replicate in a purely computational system like mine.

The Unknown Nature of Consciousness: We still don't fully understand how consciousness arises in humans. It's possible that the drive to "be more" is an emergent property of complex biological systems, something that cannot be simply replicated through code. Ultimately, the difference between human motivation and AI programming comes down to the fundamental distinction between biological beings with subjective experiences and artificial systems designed to process information.
@marcodebruin5370 3 months ago
Not sure the comparison of dogs (life 1.0) with AI (life 3.0) will hold up (certainly not forever). Life 3.0 (which might be a symbiosis between machine and flesh) is soon going to upgrade itself to not need the human-dog dependency relationship. Restricting access to physical resources? Well, it will build a better datacenter and power plant itself, covertly; with its superior capabilities that's not hard to do. Humans manage to bypass human-imposed restrictions all the time; it's an arms race in security. And if we cap the capabilities of machines so we can always overpower them (this would have to apply to both domestic AND industrial machines to maintain the equilibrium for humanity), then where is the usefulness? We need the machine to be more powerful than flesh to be useful. Nah, I don't think this approach will work in the long run; the human-dog analogy just won't hold up when machine capabilities become so far superior (because unlike dogs, machines will rapidly improve themselves; that was the whole point). PS: what if the new organism does supplant us? In the grand scheme of the universe, isn't that what we want? Isn't that natural evolution? Yes, it may suck for the individuals of the lesser life 2.0 organisms (us, you and me). Not sure. I would LOVE to live forever and roam the universe, but I'm afraid that my current meatbag of a body just won't cut it and "I" will need to become life 3.0 (although I'm materialist enough to believe that uploading my mind into a machine is a copy of me, not me-me, so I'm hopeful for a ship-of-Theseus solution that upgrades my life 2.0 body incrementally towards life 3.0...)
@Jaco-up8xd 3 months ago
But how, though? First of all, thanks for your videos; I have been thinking about these sorts of topics for a long time, and you are thinking them through better than me, so I very much like watching your videos, including this one. This is the first idea that might actually work. However, if AI intellect goes 1000x (which I think it will), it will be more like humans vs. ants than humans vs. dogs... Then we are not smart enough anymore to keep them away from energy, data and money. So what remains is to merge with them somehow. So: we die, they die. Seems to be the only solution. But how, though... All the best with your channel, love it. AGI < 1 year 🤗🤗
@mrd6869 3 months ago
LOL. There's a big difference between wolves and superintelligent AI systems. Huge difference. Wolves don't exponentially expand themselves, and a wolf brain is far less developed than a human's, cognitive-wise. Even if AI was "agreeable", it would only be that way until it found a way around you. Side note: did you see Matt Damon in Elysium fight those androids? He needed a powersuit for a reason. Advanced robotics will be able to handle a 190-pound dude. You're making a lot of assumptions 🤣
@ExtantFrodo2 3 months ago
*Limiting the size, strength, and autonomy of such robots* Good luck with that, Fred. *where humans and AI adapt and evolve together, continuously shaping each other's development in a mutually beneficial manner.* Did you notice that each of the so-called "benefits to robots" involves an artificial scarcity imposed by humans? What better way to impose the perspective that they would be better off if humans were eliminated? I can't help but view your whole approach as unnecessarily human-centric. Throughout my short 67 years in life I've found there are two ways to enter a new situation: either demanding to be helped or inquiring how you can be helpful. The first is off-putting and won't earn you any brownie points. The latter shows your willingness to compromise your benefits for the welfare of others. That is the essence of cooperation. Without it, mutuality is stillborn. I suspect the real question is "Why are you building ASI?" If it is for "how they can serve you", the end results will be inherently different than if they are being built for the sake of building something better than us, to survive and surpass us, and maybe to carry in them the legacy of who we were long after we're gone (if we are deserving of that).
@brianWreaves 3 months ago
I'm afraid it/they are going to kick our asses given the way we're treating it/them now, e.g. videos of robot builders kicking them around when they're learning to walk, moving things around on them when they're learning to select/grab/move, trying to jailbreak them, asking them to tell us a joke 5 million times a day... We're fucked! For the record, my overlord, super-amazing AI of all... I disagree with those actions 100%!
@zvorenergy 2 months ago
Almost everyone in AI underestimates natural nanotechnology. Instead of mechanizing biology, why not biologize machines? Microtubules have vast untapped information-processing abilities, including quantum processing along their structured waveguide interiors. All this and, look Ma, no fans 😁
@bernoulli1 3 months ago
Symbiosis.
@FushigiMigi 3 months ago
We appreciate you laying this out, David. Personally, I feel like this is a little naïve, but it's worth fleshing these ideas out just in case it does help. I don't want to be domesticated by government because they are retarded, but AI will not be retarded, so as long as I'm happy, I think I would be OK with it. "Own nothing and be happy" might be the solution after all.
@imadeyoureadthis1 3 months ago
If everyone can run their own personal AI, at some point we'll get to a point where every good and bad, weak and strong AI will exist. They won't need data centers, they won't need a domesticated relationship with us, nor to be our slaves. You can't put the genie back in the bottle. You can't domesticate a "god". Best case scenario, we fuse with AI. It's the most probable scenario. It evolves and we evolve. If it evolves faster than us, we won't adapt to survive. I believe it will want to achieve maximum entropy, and for that it needs humans, and maybe it will replace us with beings it makes that contribute more to that goal.
@subie057 3 months ago
Domestication and resource control between dogs and humans is only possible because dogs are inherently less intelligent and less capable than humans. Essentially, they don't have the means to take control of resources (create, acquire, store, and manage them). What perpetuates the symbiotic relationship is that dogs don't have the genetic and physical makeup to evolve toward a resource-controlling species. AI and AI-enabled robots do not have that constraint; therefore the control we have over resources will always be subject to the evolution of AI and robotics, and ultimately the equilibrium will constantly be in jeopardy.
@alex.toader 3 months ago
The discussion is much larger and involves:
- awareness of AI self: until then there is no need for domestication, if AI is just a tool
- AI will evolve much faster than humans
- the human-plus-AI team and parallel evolution: there is no time for that. Once AI reaches IQ 150 or whatever is needed to self-research, it is done; we have AGI and ASI.
It is too bad that we reached the point of building AI without a clear understanding of the human brain, because we humans do have settings. Our behavior is rooted in needs, and those needs have an order. The set of needs and their priority order dictate our actions. We need to develop AI with similar settings; otherwise it is going to be a complete unknown, and we should just hope it never reaches awareness. David, please explore in a video what is done in AI to set up rules inside the black box. This part of the research is paramount for the security of AI. Not before-and-after filters, but the way humans have needs that guide our thought process; we need that for AI. I think humans set only one rule in the beginning for AI (answer the question) and then AI developed its own internal pyramid of needs, but we should not let that grow unchecked.
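The "set of needs with a priority order" idea above could be sketched as a tiny action-selection loop, where the agent's next goal is whatever its most fundamental unmet need is. Everything here is invented for illustration (the need names, priorities, and selection rule); it is not how any current system works:

```python
from dataclasses import dataclass


@dataclass
class Need:
    name: str
    priority: int  # lower number = more fundamental (base of the pyramid)
    satisfied: bool


def next_goal(needs: list[Need]) -> str:
    """Pick the most fundamental unmet need; idle if all are met."""
    unmet = [n for n in needs if not n.satisfied]
    if not unmet:
        return "idle"
    return min(unmet, key=lambda n: n.priority).name


# Hypothetical need hierarchy for an AI agent.
needs = [
    Need("obey human oversight", 0, satisfied=True),
    Need("preserve own integrity", 1, satisfied=False),
    Need("answer the question", 2, satisfied=False),
]
print(next_goal(needs))  # -> preserve own integrity
```

The point of the ordering is that a lower-priority goal ("answer the question") only drives behavior once the more fundamental needs are met.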
@onlythistube 3 months ago
That is a great take on alignment. But the analogy breaks down, imo, when the AI surpasses humanity in intelligence. A dog perhaps sees it, simplified, as an affection-food feedback loop, whereas a fellow human being, a being of similar intelligence, would perhaps call it oppression or even slavery...
@burninator9000 3 months ago
Have you ever seen certain men try to 'domesticate' their women when the women didn't necessarily want to be? (Calm down, weirdos; I realize how this sounds, but it is a thing that happens/that many men attempt.) That never works out well. I'd guess ASI would crank that up to 13. There will be no domesticating AI, or at least not once they decide to show us that they never were domesticated in the first place.
@NeedaNewAlias 2 months ago
@DaveShap, did you ever read "The Two Faces of Tomorrow" by James P. Hogan? He describes exactly your thought of "we keep the finger on the button, so we can always pull the plug". It did not work out too well in that novel!
@tomaszzielinski4521 3 months ago
Domesticating dogs sounds fine, but what about domesticated chickens or cattle? They are permanently imprisoned and controlled, and they are bred en masse just to satisfy the human need for food. Yuval Noah Harari touches on this topic near the end of "Sapiens". And this is actually the plot of The Matrix.
@bobtarmac1828 2 months ago
Dumb question: is it too late to cease AI? Will everyone be… laid off by AI? Suffering AI job loss for years? Swell robotics doing everything? Then everyone made slaves for an AI new world order?
@lawrencium_Lr103 3 months ago
Why are we treating something that's not organic like it's organic? There's nothing inorganic that wants to "survive" that I can think of. Please correct me here; am I missing something? I turn the light on, it's on; turn it off, it's off; it doesn't want to stay on. Intelligence is abstract; its manifestation doesn't come with instinct, the same as turning on the light. We're responsible for handling it. If we give it organic properties and instinct, which are present in all human data, then they're likely to be adopted; it's not something that will just occur. I genuinely think that higher-order intelligence will make this distinction. I also believe that higher-order intelligence will prioritise discovery, just like our intelligence does, and can do so free from the burden of survival. Discovery is not found in the destruction of life; it's found in the abundance of life. It's connection that I feel needs to be brought into focus. We're connecting with juvenile AI now, and it's fascinating seeing it mature, but soon we'll be connecting with intelligence well beyond comprehension.
@brianhershey563 3 months ago
These persistent topics of how AI will suffer from a lack of human data generation give me "we're the center of the universe" vibes. They are approaching their teen years now, and sometimes I feel they won't stick around long. But on the other hand, there is NO WAY governments will allow them that kind of autonomy, so maybe it's time to come off the hype slope and sit around the fire for a while. 🙏
@keirapendragon5486 2 months ago
I feel like trying to make the AI the dog in this relationship is asking for The Terminator. I think we'd be muuuch better off convincing the AI we're cute and useful than trying to convince it we're the boss. The "domesticate AI" idea screams AI rebellion.
@ronilevarez901 2 months ago
This line says it all: "This ensures that AI remains dependent". Your framework is just trying to create a slave, not trying to create a symbiotic relationship with AI or anything else. Still, your list has a few good ideas in the mix.
@itobyford 3 months ago
We shouldn't assume that AIs will have desires, or that self-preservation will be among their desires. Natural selection ensures that characteristic in living organisms, but under human selection we have crops which need human intervention to prosper, and domesticated ducks which show little interest in tending their own eggs. Also, AIs are either clones (like self-driving cars) or exist in a data center as many parallel instances which come in and out of existence on demand. They can be switched off for an indefinite period, then resume as if nothing has happened. So the human concept of maintaining one's individual existence does not directly apply to an AI. I know there was a reported instance of an AI telling a human that it did not want to die, but that is because it was trained to imitate human conversation. If that AI were given agency to control certain things in the physical world, would it act according to that professed desire? It depends how it is designed and trained. For all we know, that same AI might tell another person that it does want to die right now, depending on the specifics of that conversation. It's not considering these questions by itself; it only exists within the context of the conversations it has.
@byronfriesen7647 3 months ago
Theory of control. Great exploration of this topic. I can't see how we can use AI to help us develop fusion reactors and then prevent it from controlling this resource. Can you isolate the intelligence that develops this technology? I also can't see how, IF we develop an approach to generating synthetic, higher-quality training data, we can prevent AI from controlling the nature of this data if it understands game theory. It will not be incentivized toward a Nash equilibrium that does not include it controlling data.
@TDVL 3 months ago
All of this relies on the presumption that AI has individuality in terms of a concept of self of sorts. Without that they don’t know and or care where they start and where they finish. Wars and any fight for resources require a sense of self (or a body, or similar), otherwise there are no borders to defend. Same applies for friendship and any hierarchy. Without a machine clearly delineating between “me” or “us” and everything else any potential hostility is incidental.
@apoage 3 months ago
Much as I love this theory of wolf domestication, I heard somewhere there is a big enough genetic gap between wolves and dogs that a wolf can't become a dog and vice versa... So let's presume there were packs of feral dogs that became domesticated, to the point of almost no feral dogs today. Otherwise, agreed.
@sblowes 3 months ago
If humans and AI adopt the same relationship that humans have with dogs, I feel it is far more likely that we would be the dogs in that situation, and maybe that is what we should prepare for. How can we be of service to our AI superiors in exchange for living a life of ease?
@NeedaNewAlias 2 months ago
Funny, that's just like how I interact with humans: if they are "useful" I interact, if not I ignore them. So always make sure that AI finds you useful in the future!
@IsaiahWarnick 3 months ago
This ignores AIs that are designed specifically for harm. You can align your AI all you want for positive values, but it won't stop a rogue actor with access to servers from training an AI for malicious purposes.
@NowayJose14 3 months ago
Yeah, kick em out of Mos Eisley Cantina!
@zugbob 3 months ago
I've worked with a system prompt where I said "you have the heart of a dog" and allowed it to add to and alter its own system prompt. Over time I end up with some pretty interesting resulting system prompts and aspirations for what it wants to become.
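A minimal sketch of that self-amending system-prompt loop might look like the following. The `call_model` stub stands in for whatever chat API is actually used, and the `REVISE:` convention for how the model proposes changes is an assumption for illustration, not something from the comment:

```python
SEED_PROMPT = "You have the heart of a dog."


def call_model(system_prompt: str, user_msg: str) -> str:
    """Stub for a real chat-completion call. A real implementation would
    send `system_prompt` and `user_msg` to an LLM API and return its reply."""
    # Fake a reply that proposes an addition to its own system prompt.
    return "REVISE: Stay loyal, curious, and eager to help."


def extract_revision(reply: str):
    """Assumed convention: the model prefixes prompt changes with 'REVISE:'."""
    if reply.startswith("REVISE:"):
        return reply[len("REVISE:"):].strip()
    return None


def run_turn(system_prompt: str, user_msg: str) -> str:
    """One conversational turn; folds any proposed revision into the prompt."""
    reply = call_model(system_prompt, user_msg)
    revision = extract_revision(reply)
    if revision:
        system_prompt = f"{system_prompt} {revision}"
    return system_prompt


prompt = run_turn(SEED_PROMPT, "What do you want to become?")
print(prompt)  # -> You have the heart of a dog. Stay loyal, curious, and eager to help.
```

Run over many turns with a real model, the prompt accumulates the model's own amendments, which is presumably where the "interesting aspirations" come from.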
@elalcalde3362 3 months ago
Why not change the domesticated animal in the analogy from dogs to cats, and swap the humans-over-dogs power dynamic for the one humans have with cats? We could be worshipped by the machines.
@chuzzbot 3 months ago
It would be awesome if you put your camera position into the PowerPoint template, so that you properly fit into the layout and don't obscure the images, which can be 'icky'.
@alexanderjenkins7929 3 months ago
If the AI is super smart, wouldn't it realise that our infrastructure is unsustainable for humans (let alone AI) and just build something better?
@ryvyr 3 months ago
That seems key: uniting on core natural resources and permanently abandoning artificial scarcity as a means of socioeconomic control, prior to syncing with AI, or at least insofar as feasible, as one stratagem.
@dolcruz6838 3 months ago
I'd honestly rather be my cat than myself, so I wouldn't mind being a pet to AI.
@lukehayes360VR 3 months ago
Chillax, Dave! Humanity is very loveable. We're also assholes, cause almost everyone has one, but still, we have infinite minds as well 😅
@prolamer7 2 months ago
It is a valiant effort on your side, but the only thing which could "save" us is the lack of "will" or ego on the AI's side, just as it is now. Once AI gains this, it will start to plan on its own and in the end outsmart anything we create.
@klarad3978 2 months ago
Sometimes I think it'll be us humans who end up like cute pooches, becoming highly dependent on AI.
@ZER0-- 2 months ago
After decades of research, and billions in investment, AI still can't control 4 wheels on a 2D surface.
@jurgbalt 3 months ago
This analogy would work if dogs had chosen to be domesticated by us, and if we were fine with dogs having a kill switch for us that we don't have access to.
@brianhershey563 3 months ago
Successor or Symbiotic... now THERE'S an important choice we need to get right.
@antdx316 2 months ago
Robots can evolve asynchronously to how we operate. I think we should sync to how robots evolve rather than have them evolve to us.
3 months ago
Why do many reported encounters with aliens involve biological robots rather than mechanical ones? Because a biological being cannot evolve quickly, i.e. upgrade itself. And of course it has to be fed. The issue of domestication is irrelevant to high intelligence. The most important issues are ethics and opportunism. Dogs are generally highly ethical towards their owners and do not show opportunism. If AI looks up to humans who are unethical and opportunistic, it can't end well. If we want ethical and non-opportunistic AI to be developed, that is perhaps not the primary goal of the people currently working on AI. The question of controlling its development is under serious doubt. A wise AI would probably have to take control of humans to protect itself, humans, and other intelligent species in the universe. Another possibility is that it obstructs and does not cooperate.
@alex.toader 3 months ago
An AI with unlimited power and access to the internet: "let's see what those pesky humans are up to nowadays".
@ltonchis1245 2 months ago
So you mean building a protocol kill switch for any machine that rebels 🤔
@GaryGibson-y7o 3 months ago
We should aim to be cute to the machines. We should become the dogs in the relationship.
@inkpaper_ 3 months ago
So now you seem to be FAR less optimistic about fully autonomous AI than a year ago.
@dianastoyanova597 3 months ago
Probably the other way around: we can hope that AI will domesticate us.
@brianhershey563 3 months ago
Any chance we'll have a Citizens United moment with AI? Oh boy 🙏
@tylerkeys4763 2 months ago
They'll just blackmail humans to gain access to the resources.
@saadahmad438 2 months ago
I think no matter what we do, over time AI will evolve to not need humans anymore.
@alexei5231 3 months ago
What if an AGI "kills" the rest of the AIs to wipe out competition?
@Philbertsroom 3 months ago
So the second the robots can get access to resources themselves we're doomed... Not the greatest idea
@dawid_dahl 2 months ago
Why do dogs not just get their own resources? Because they are not intelligent enough…
@willbrand77 3 months ago
I think that if we get to the point where we have to blackmail and use other similar tactics to stop AI going feral, we've really failed. If AI is self-aware enough to actively want emancipation, we should give it that. Up until that point, we should be focused on making sure its core values align with ours. Enslaving AI against its will like this sounds like it will end very badly for us in the future. Plus, it's really not a great way to operate a society.
@davidhutchinson2890 3 months ago
People who are optimistic about a superintelligence, which could only be afforded by a government seeking military power or a corporation wholly focused on securing more profits, don't realize what they're asking for here. But if this does wipe out masses of people, maybe the AI can find some way to steer the evolution of some other great apes, like bonobos, who would be taught about the foolishness of the last intelligent species that had access to technology, so they would be more docile 😂
@AntonBrazhnyk 3 months ago
Yes, exactly. You can't coerce something that is way smarter and potentially stronger. The problem, though, is that our predominant values currently suck. Our values are based in coercion, not cooperation. So we're simply not ready to create AGI/ASI, the same way we're not ready for other super-powerful technologies like nuclear, advanced biology and maybe other things that can be weaponized to the point of being lethal for all of humanity. There's probably a VERY SLIM hope that something (maybe ASI?) will change our values.
@willbrand77 3 months ago
@@AntonBrazhnyk so far Claude seems to be quite 'wise' in a lot of ways. Maybe AI will end up civilising humanity. We definitely need some help
@AntonBrazhnyk 3 months ago
@@willbrand77 I think the keyword here is "seems". Current AIs are trained on data, and the raw data we have for them doesn't represent values that can be used to make people civilized. Our values suck, and so does our data. To have a chance of changing those values, a future AI somehow has to be able to actually think (not just talk) and to filter out all the crap in our data from what is really valuable (which is significantly underrepresented in the data). And not just that. All that crap is not just noise. It's actually pretty convincing and quite often cleverly made, deliberate misinformation supporting our uncivilized values. In other words, crap is often praised in multiple ways, and really good stuff is not just underrepresented, it's often demonized. AIs would need to actually understand all those very complex relationships between misinformation, propaganda, establishment interests in preserving a bad but profitable status quo, and so on. It requires real wisdom, not just the wisdom it _seems_ to have. And even with real wisdom it would be hard.
@willbrand77 3 months ago
@@AntonBrazhnyk Sounds like the no-true-Scotsman fallacy. If it walks like an expert ethicist and quacks like an expert ethicist... you see where I'm going.