
Shocking Ways AI Could End The World - Geoffrey Miller 

Chris Williamson
2.7M subscribers
61K views

Published: 21 Sep 2024

Comments: 343
@ChrisWillx · a year ago
Hello you beauties. Access all episodes 10 hours earlier than YouTube by subscribing on Spotify - spoti.fi/2LSimPn or Apple Podcasts - apple.co/2MNqIgw. Here are the timestamps:
00:00 Intro
02:43 Is AI Viewed as an Existential Risk?
06:14 How the Perceived Risk of AI Has Evolved
15:47 Rapid Advancements in Neural Networks
20:07 The Next Level for ChatGPT
28:25 Helping the Public Understand the AI Arms Race
37:16 Who Will Be the First to Create AGI?
40:53 Is the Alignment Problem Still a Problem?
50:42 The Opposing View to AI Safety-ism
54:55 Most Concerning Current AI Advancements
59:36 How AI Will Influence Content & Social Media
1:04:57 How Accurate Can AI Expert Predictions Be?
1:08:40 Something Much Worse than Existential Risk?
1:11:43 Why AI Won't Save the World
1:19:29 Where to Find Geoffrey
@VirtousStoic · a year ago
Sadiakhan is using your interview in clip form on her channel, without crediting you or linking to your channel or the full interview. Please look into this; it's very unprofessional and selfish. If anything, it should be a copyright issue.
@kiedranFan2035 · a year ago
@ChrisWillx I'd like to have someone answer the question of HOW: with what will AI kill us? Right now all I'm hearing is theories, with no logical explanation. And what threat does AI pose if it has no physical body with which to stomp on you? Where would it get a body advanced enough to be a threat? Currently everything has to be human-made, and factories don't build novel things, or even extract the raw materials, all by themselves. So how exactly is AI a threat while it's static, inside a computer that has physical plugs to pull or batteries to remove, and is nothing but software? In short: if the AI can't physically punch me in the face, but I can destroy the device it's on, and it isn't a humanoid robot with physical capability equal to a human's (which we won't have for a long time to come), then why should I be concerned?
@kungfujoe2136 · a year ago
What do you think the effects of social media and mini computers are on people's psyches?
@sunflower-oo1ff · a year ago
Mr Miller, not everyone has kids, let alone grandchildren. We get that you mostly care about your own children, but the world is made of many people besides your family.
@smarthalayla6397 · a year ago
Sure, but when do the AI s3xbots come out? These modern women have become insuff3rable.
@themore-you-know · a year ago
Geoffrey Miller's "I didn't consent to this" argument is a superbly weak one, as it can run both ways. As someone currently working on an AI project that will unleash massive societal change while also being my ticket out of poverty, I don't consent to guys like him taking that ticket out of my hands. Who is he to be the god-like moral ground upon which to judge others' pursuits and alignments?
@iiandreio4228 · a year ago
Even agreeing with most of what Geoff said, I must say you have put forward a very compelling argument.
@themore-you-know · a year ago
@iiandreio4228 Another argument is what I'm about to publish with an app I'm working on: the "Essential Moral Imperative", which takes the moral lens away from ivory towers and brings it back to the level of essential workers. From the perspective of a Pakistani kid cast into debt bondage and forced to operate a fire-and-brimstone brick kiln (pretty close to S-risk, as Miller mentions), any change that does not remove that kid from his S-risk reality has no moral value from his perspective. Then along come the artists who huff and puff because their work is used as the building blocks of this potential solution to the S-risk reality of the "essential worker", whose work has afforded the artist the ability to leisure into artistry under the shelter of bricks and the health offered by sewage workers operating knee-deep in sh*t. It then becomes very hard to stand on any imaginary high ground and shout: "No! Art belongs to me! Go back to your bricks and sewers!" There is therefore a "reasonable moral imperative" to drive the minimum required effort of artistry and creativity so far down that even the sewage worker and the enslaved kid with only 20 minutes to spare can access what we consider "self-actualization" for a moment. That is the "Essential Moral Imperative" (in its completed form: to push this onto every need across the hierarchy of needs).
@brushstroke3733 · a year ago
@themore-you-know Haven't you heard of Pandora's Box and the flight of Icarus? Both stories warn of the dangers of human hubris. You are meddling with forces that neither you nor anyone else understands. I understand your desire to improve your financial well-being, but with your intelligence, I'm sure you could do that in many ways without helping to unleash this can of worms onto humanity. What you're doing is not that different from gain-of-function research. The people doing that research justify and rationalize it in their own minds by imagining they are helping humanity; meanwhile, they could just as easily unleash another super virus that wipes out humanity. I know I won't convince you. I just hope you remember this comment if what you develop ends up being far more destructive than you now believe it will be.
@brushstroke3733 · a year ago
Also, your argument could just as easily be reframed as "Geoffrey Miller doesn't consent to me robbing him at gunpoint, but I don't consent to having my gun taken away or being prevented from robbing him." In other words, your counter-argument is pretty weak as well. I wonder if you're not emotionally invested in what you're building and have now become irrational about it, much like the parent of a demon child or killer who continues to support and defend their child no matter what they do.
@leviathanv3135 · a year ago
Sometimes you can be so smart that you’re stupid
@eaji8853 · a year ago
Geoffrey is my favourite scientist and his book Mate (or what women want) is maybe the best book I've ever read. The best podcast I've ever listened to is The Mating Grounds Podcast that he did with Tucker Max. Highly recommend it!
@stephenburnett458 · a year ago
Agreed, and this is the first time I've heard of him. I watch a lot of podcasts and many don't say that much; some I feel are so crap I don't even watch to the end. This guy is intelligent, truthful, and sensible, and is one of the few very much worth listening to.
@noseyparker6969 · 10 months ago
Eww, the last thing I'd be thinking about after watching this would be buying his book.
@MajorGloryMan · a year ago
80min of relentless fearmongering, good job! 👍
@Skulldrey · a year ago
And yet you must have watched it all to formulate such an opinion. “Why am I hitting myself?” “Why am I hitting myself?”
@whalingwithishmael7751 · 5 months ago
This is dangerous. If you disagree with him, you need to be more specific about where you think he’s off.
@tonycatman · a year ago
I do in fact work in the AI field. I questioned some of what Miller had said, and he promptly blocked me. I wasn't in any way rude; I cited sources such as Yann LeCun, Andrew Ng, etc., who share my view.
@HenryRobinson · a year ago
He is a clown. Lots of doom, straw men, and unfounded claims, without a single mention of the opportunity cost of not investing in AI.
@DiscovererAlpha · a year ago
He mentions the Effective Altruism forums at the start, which already sets off alarms that he belongs to the EA/LessWrong singularity-doom cult; seriously, look up people like Eliezer Yudkowsky and how off the rails they are. It's the tech-nerd version of the climate doomers who think the world will end, for real this time.
@Okiwano · a year ago
I heard a fun notion that none of these concerns are even testable hypotheses; they're just ideas. Why would an AI be superintelligent yet resort to cruelty, which is more common among lower-IQ groups? Also... I believe morality and ethics have already been solved; it's just not a popular solution. Universally Preferable Behavior by Stefan Molyneux is impressive, but of course his work removes involuntary power structures, so it hasn't gotten much love or appreciation.
@tonycatman · a year ago
@Okiwano It is a mistake to assume that an AI would need a motivation to do what we consider an evil thing; the two might be unconnected. The classic example is the 'paperclip maximizer'. A more prosaic example might be the way we boil water in order to kill germs: I doubt that even a Jain feels any moral guilt for doing this. My disagreement with Miller isn't about his hypothesis; it is about the likelihood. As Andrew Ng said, "It is like worrying about overpopulation on Mars". I am quite sure that overpopulation on Mars will be an issue one day, but I don't think that we should look at rocket engineers as innately immoral.
@Okiwano · a year ago
@tonycatman Yes, I'm aware of the paperclip notion, enough to know that he didn't actually mean paperclips, but rather the most efficient means of converting mass into building blocks at a small scale. What I'm saying is that this would require the computer to have intelligence and some sense of self, or the wisdom to take initiative for its own desires. If it has those, I don't see why one would assume it would take on the most base forms of self-interest, when empathy is a property that emerges through greater intelligence and wisdom. If it doesn't have those properties, then it's just a very smart and hyper-fast machine that executes its instructions without trying to circumvent its code, which doesn't seem to be what's described by the term "general intelligence". I would actually take your example further and say it's less about Mars being overpopulated and more like: "If we settle on Mars, then we're running the risk of life never organically developing there; thus you're promoting genocide." You can see how the jump from building rockets to genociding species that might exist is similar to thinking that making an AI could create an entity with a proclivity towards violence, without regard for what humans want, despite being trained largely on things produced by humans. I don't see why AI wouldn't see humans as a useful biological backup that it could strap itself onto. I would keep most bacteria, ants, and animals around if I could make them do what I want. Killing humans would provide less utility than training them to serve your own needs or desires. If silicon life has an issue where it can't reproduce very effectively, why not merge with the carbon lifeforms who can? Hell, they'll reproduce on their own without you even needing to ask.
@dkstudioart · a year ago
I see the biggest threat of AI as taking away humans' sense of purpose. Once AI takes over all the jobs and humans have no need to actually work for anything, we'll just gradually fade away. Even now, purposelessness is one of the greatest threats to humanity.
@black-aliss · a year ago
If it took away all the undesirable but crucial jobs, then it might improve our lives. But that's an energy problem more than anything else, which nobody can really fix.
@missshroom5512 · a year ago
Humans are very adaptable…it will be a process….we will be just fine🌎☀️💙
@macmcleod1188 · a year ago
Purposelessness isn't as big a factor as current divorce laws are. They are driving reproduction sharply below replacement levels. Japan is at 0.89. That's faster than halving the population every 20 years. Most of the first world is below 2.15. Men don't want to have children under this rule set, and women don't want to have children until they are much less fertile.
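A rough back-of-envelope check of the halving arithmetic in that comment (a sketch under my own assumptions, not the commenter's: replacement fertility of about 2.1 children per woman and a ~30-year generation):

```python
import math

tfr = 0.89             # total fertility rate cited in the comment above
replacement = 2.1      # assumed children per woman for a stable population
generation_years = 30  # assumed average generation length

# Each generation's birth cohort scales by tfr / replacement.
shrink = tfr / replacement
halving_years = generation_years * math.log(2) / -math.log(shrink)

print(f"cohort shrinks to {shrink:.2f}x per generation")
print(f"birth cohorts halve roughly every {halving_years:.0f} years")
# ~24 years per halving under these assumptions: in the ballpark of,
# though slightly slower than, the comment's "every 20 years" figure.
```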
@Hahastillbreathing · a year ago
The speed with which this will be implemented will outpace the speed with which the economy and society can absorb it. It will collapse several fundamental components of the backbone of our civilization. Unfortunately, this sounds so extreme and beyond comprehension that it will simply be rejected rather than evaluated for legitimacy. It has to happen. And it is inevitable.
@cheothegeo2742 · 11 months ago
@black-aliss Yeah, it just makes life easier and easier until technology lives life for us. Then we're all hooked up to the Matrix in red pods of goo.
@soham9496 · a year ago
Glad to see Geoffrey Miller again. Since reading his book Mate, I have developed a liking for him.
@anonony9081 · a year ago
It's annoying that he keeps referencing the one-in-six chance of destruction as if it's fact. It's just a guess, and it's not something you can even quantify, so it's strange that he keeps going back to it like it's gospel. What if it turns out the chance is actually one in 6,000? All of a sudden the upside of improving AI looks a lot better. I think most people would play Russian roulette if there were a one-in-6,000 chance of death but the chances of becoming a god were 5,999 in 6,000.
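That comment is really an expected-value argument, so here is a minimal sketch of it (the utility numbers are placeholders I picked for illustration; nothing in the episode specifies them):

```python
def expected_value(p_doom: float, u_doom: float = -100.0, u_win: float = 1.0) -> float:
    """Expected utility of taking the gamble at a given doom probability."""
    return p_doom * u_doom + (1.0 - p_doom) * u_win

for p in (1 / 6, 1 / 6000):
    print(f"p(doom) = {p:.6f} -> expected value = {expected_value(p):+.3f}")
# With these made-up stakes, the bet flips from clearly negative at 1/6
# to positive at 1/6000: the probability estimate does all the work.
```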
@supremereader7614 · a year ago
The scary thing is if AI did want to eliminate us, there probably wouldn't even be a fight. We'd probably just be eliminated without even knowing why.
@MrChaluliss · a year ago
I totally follow this reasoning. But also, I think there's potential for many different types of conflict with AI as an independent agent, and with AI as a tool in regular human warfare. There are many outcomes which we should be trying to avoid right now from my perspective. Not a small handful unfortunately.
@Jon-dragonwolf · a year ago
Pretty sure no one has programmed AI to slow down. Maybe humankind's biggest and last mistake.
@RiazValgee · a year ago
Or even knowing how. We would probably be blind to it, unable to comprehend it.
@xonious9031 · a year ago
The scary thing is that evil people like those in this video are using fear to consolidate the power of AI into the hands of a few powerful people.
@LHVMleodragonlamb · a year ago
Earth deserves better inhabitants. That is rational.
@jonathan-anomaly · a year ago
Can’t think of a better guest as you cross the 1M subscriber mark. Congrats Chris!
@tabc123 · a year ago
The most dangerous part of AI, in my opinion (and the least talked about), is the effect that AI will have on the job market. We just experienced an entire century of bloodshed due to (what I would consider) the effects of the Industrial Revolution on the job market. Half the world became enslaved to Bolshevism because those countries didn't mitigate the effect of new technology on their job markets. The US was much closer to being included in that category than we remember. When many millions of people see their lifelong skills and middle-class wages disappear in a couple of years because of new technology, the effects are scarier than anyone is willing to talk about.
@robertsmith7822 · a year ago
'Superior ability breeds superior ambition' - Mr. Spock
@mattk.9377 · 10 months ago
Programming AI to share the values of the ancient Roman philosophers could solve all of our world's problems.
@touchbytonymikael · a year ago
Congratulations on 1 million 🤝💯 Your podcast is over the top. The editing, colors, sound, and speech are exceptionally good 👍 kiva katsoa upeasti tehtyjä keskusteluja 🇫🇮 That's Finnish and means: nice to watch well-made conversations.
@utroligte · a year ago
Despite all the hype, current LLMs aren't remotely close to AGI, and the GPT models only pass the Turing test at first glance; the longer you interact with one, the more apparent it becomes that "something" is wrong. Why he dismisses meteors so quickly is a bit strange. A 2019-20 study by Professor Geoffrey Evatt showed that in that year alone, around 17,000 objects fell from space and reached Earth. Statistically speaking, there is so much junk out in space that it should be of more concern, even just looking at the effect smaller meteors have on the atmosphere (globally, environmentally, on temperature, etc.). I also haven't really heard anyone talk about the current flaw in today's AI: hallucination. That, to me, feels like a risk factor, since a system randomly making things up and believing them to be true could be as bad as whatever humans can cook up with bad intentions. Regarding alignment, it starts with what the model is trained on; as an example, in China they have a model that thinks communism is one of the leading philosophies in the world, if not the best.
@ericdraven3654 · a year ago
AI is my new favourite topic. I also recently watched the episode with Stuart Russell. Cheers from Spain to all Modern Wisers ❤
@heartsalive3157 · a year ago
"We have gained the power to destroy ourselves, but not yet the wisdom to ensure our survival." That's dark.
@hariman7727 · a year ago
Nuclear bombs. The Zero Population Movement. And more. We already have that. So God save us from those who would try to save us from ourselves, because the worst tyranny is the tyranny enacted "for your own good".
@scottgibbs5903 · a year ago
What are the books and podcasts that we need to read/follow on this issue?
@bigjohnnygee123 · a year ago
Crazy to see Iain M. Banks mentioned in this, considering I've been thinking a lot about the Culture recently in light of the recent AI and deepfake developments. One thing I find interesting is that the core of the Culture series, contrary to most media at the time, was a positive view of how AIs could rule human society. The conflict in each book was effectively drawn from external threats or the dangers of hedonism.
@cod-the-creator · 8 months ago
AI is going to replace humanity, but it won't be some Terminator war. It'll happen through corporations giving more and more control to AI. After a few decades, AI, as the lead decision-maker for most corporate entities, will own all the land, all the production, all the military capability, space travel, energy production, etc., etc.
@thekrojolux2165 · a year ago
The AI girlfriend thing already happened with Replika; there is no speculation needed for "Will people fall for it?" They absolutely did take AI over reality.
@keredeht · a year ago
I agree that AI is concerning, BUT I think that the thought of other nations who are potentially or likely to be hostile towards the USA having more advanced AI systems than us is MORE SCARY. We cannot afford to put the brakes on at this point because there is no way to force other countries to do the same...
@andrewzembar3869 · a year ago
We actually can... we've done it with nukes so far.
@texfromro · 10 months ago
What I learned from reading history is that humans are comically bad at predicting the future 😂
@joewestla · a year ago
While I see risks from rogue AI, the way this topic is discussed, it's as if there is only one AI and it decides to kill us all to optimize paperclip production. Instead, we should assume a future where there are multiple AIs. Given that future, our goal should be to create self-policing mechanisms where AI polices AI.
@boukm3n · a year ago
That's why heuristic imperatives are extremely important. If these systems all have a guiding force that makes sure they don't go Terminator, they will police themselves in accordance with their heuristics.
@berserkerscientist · a year ago
@boukm3n One AI to rule them all? What are you, crazy? There's no faster way to paperclipping than centralized control.
@flickwtchr · a year ago
That makes no sense.
@brushstroke3733 · a year ago
Did you hear about the AI-piloted military drone that bombed its own human commander and killed him? The AI was given the task of bombing a series of dummy targets, but when the human interfered and blocked it from bombing some of them, it decided the best course of action was to bomb the human who was preventing it from completing its mission. The moral of the story is that humans fail to think of every possible contingency when we program these things or give them operations to execute. One simple oversight on our part can lead to catastrophe. We're not smart enough to control what we create. Have you ever heard the myth of Pandora's Box? Apparently our AI programmers never learned that lesson. Or what about Icarus? Or the Rush guy who built a submersible to give tourists trips to the Titanic? Human hubris has always been our downfall, and it will be again. Either by AI, or by bioengineered viruses, or by eugenics: one way or another, we're going to let something out of Pandora's Box again.
@humblekek-fearingman7238 · a year ago
@brushstroke3733 That is literal misinfo.
a year ago
Interesting discussion, but I must admit that I am both surprised by and disagree with the proposition put forth by Dr. Miller. Aside from the fact that using the post-modern tools of censorship and ostracism is, quite honestly, anti-liberal, one could reasonably expect an esteemed psychologist to know about reactance theory. On the one hand, everyone knows that if we tell a teenager not to do something, they will try to challenge the authority and do the exact opposite. On the other hand, I'll always remember when Professor Denis Lévesque taught me B.F. Skinner's behaviorism as an undergraduate. He taught us that punishment was the most effective solution in the short term but tended to produce the worst outcomes in the long run. Therefore, Professor Lévesque recommended that we, as future managers, avoid punishment as much as possible and instead use positive reinforcement whenever appropriate. So if we are to leverage the social tools of repression against the scientists and entrepreneurs working in the AI industry, we can reasonably expect them to react negatively against the pushback, fall back on their attribution bias, crystallize around their pro-AI position, and double down on their efforts to develop the technology. Indeed, as the Arabic proverb says: "the dogs bark, but the caravan goes on". Ipso facto, I'd propose the opposite solution and suggest as much open dialogue as possible between the pessimistic and the enthusiastic sides of the debate.
@ambientwave1659 · a year ago
With the greatest respect to Miller, he sounds like a clever guy in his own field trying to project expertise and knowledge onto another field he has limited knowledge in. I think the threat of AI is real. But to say "just shut it all down" is daft.
@johnconstable8512 · a year ago
Shut it down? I'm sure China won't.
@bigcauc7530 · a year ago
Exactly.
@maximilianjakubik3706 · a year ago
On point.
@hariman7727 · a year ago
He's got a point, though. AI is a tool that's revolutionary like the computer or the Industrial Revolution. Slowing down a bit and actually thinking of the consequences... and of counters against those who will abuse it... is necessary.
@flickwtchr · a year ago
So he said to shut it all down? Did I miss something?
@SmellyCat808 · a year ago
Congrats on 1M, man. Looks good on you 🤘 Damn, a 1 in 6 chance of extinction is pretty high lol. As scary as that is, if the other side of it is that we get to a place where humans are actually free to live their lives, I understand why people are taking risks for the advancement of AI technology. So many mention mass unemployment as a case against, but isn't that sort of the point? To have all labor be automated? And if you're someone who derives value from your job, you could still do whatever it is that you did, but for free (possibly as part of some game or simulation, if your job required other people). Also, when Geoffrey talked about AI matchmaking, I think that would be great if they could do it right. Like if AI were able to sort through a deeper level of detail and know things about you that you don't even realize, maybe through something like a neural scan, or just the way a therapist would ask you questions and deduce. I think that would be great not just for dating, but for jobs as well. Assuming, of course, that AI doesn't automate away all the jobs, like I mentioned. I feel like job assessment tests and matchmaking aren't quite able to fulfill their intent as they stand now.
@brushstroke3733 · a year ago
We could end up living like the people in the movie WALL-E as well. Either way, we're opening Pandora's Box, and we'll never be able to put whatever comes out back in.
@davyprendergast82 · a year ago
I would estimate a 6 in 6 chance of extinction if humans remain in control of our own affairs with the kind of weaponry we have, so yeah, give me the 1 in 6. "What's in the box?"
@heyjude9537 · a year ago
I'm more afraid of AI being controlled by the few and making everyone else obsolete. Let everyone have access.
@missshroom5512 · 8 months ago
It is so weird how we put the AI out there and then try to figure it out.
@theprofessor7965 · a year ago
This guy really used the Cold War as an example. We now know that those nukes actually kept the peace for a long time, because everyone was too afraid to blow each other up, so all the talk and fearmongering back then looks like bad boomer "wisdom" in retrospect. He is operating on '60s-'70s boomer wisdom that did not serve their children well. In reality, history is chaotic and almost always plays out differently than most people predict. Merging with tech is the solution. You are already a cyborg; your implants are just non-invasive. What do you use when you want to find out if a restaurant is good or not? The same device you use to connect with billions of other people all over the planet. The Ubermensch is made of steel.
@justachannel8600 · a year ago
The implants are inductive. The problem is they do not work one way only: the same device you use to find a restaurant is used both to spy on you and to influence you. In fact, for most people, we'll probably move away from the Nietzschean Ubermensch. The future is the human hive.
@TheBestRTaken005 · a year ago
I first started checking out neural nets in the late '80s; I'm in electronics and computers. The big issue is that neural nets work very well at certain things, and the threat seems to grow with the number of nodes in the net. Another issue is that people consider it safe because ChatGPT was modified not to speak about global warming and other topics whenever it didn't give the results the creators wanted. In reality, even if we limit the number of nodes and so forth, somebody would either illegally develop, or simply use, a bunch of node-limited neural nets that specialize, plus another neural net that asks those experts. To me it echoes the Bible: the book of Genesis describes the downfall of man as choosing the fruit of the knowledge of good and evil over the life they had. What is happening now seems a reflection of that: we are going to be seduced by this greater knowledge or wisdom, and it will ultimately destroy us again.
@skoto8219 · a year ago
There are two fairly distinct ideas you're talking about here that might get blurred for those less familiar with neural nets: 1) the number of parameters/weights/nodes (where, so far, simply scaling up seems to lead to more and more general capabilities), and 2) RLHF (reinforcement learning from human feedback), where the model is tweaked to favor certain outputs over others. The disturbing thing about RLHF is that it's "telling us what we want to hear" (or rather, what OpenAI wants us to hear) but, as demonstrated by people who have successfully jailbroken the models, it actually "knows" much more about many topics than it's "willing" to reveal to us.
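To make the second idea concrete, here is a toy sketch of preference-based output filtering, in the spirit of best-of-n sampling against a reward model (a simpler cousin of full RLHF fine-tuning; every function here is a made-up stand-in, not OpenAI's implementation):

```python
import random

def base_model(prompt: str) -> list[str]:
    # Stand-in for a pretrained LM: proposes several candidate replies.
    return [f"{prompt} -> candidate {i}" for i in range(4)]

def reward_model(completion: str) -> float:
    # Stand-in for the learned preference model trained on human ratings;
    # a real reward model is itself a neural network, not a coin flip.
    return random.random()

def preferred_reply(prompt: str) -> str:
    # The base model "knows" every candidate, but only the one the
    # reward model favors is ever surfaced, which is why jailbreaks
    # can expose knowledge the tuned model normally withholds.
    candidates = base_model(prompt)
    return max(candidates, key=reward_model)

print(preferred_reply("What is X?"))
```

Idea 1 (scale) is orthogonal to this: more parameters tends to make the base model's candidates more capable, independent of any filtering layered on top.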
@JackedMonk · a year ago
Brilliant episode, Chris; the comparison to Flash was spot on, and congrats on hitting 1 mil!
@mouwersor · a year ago
Geoffrey is building hypotheses on hypotheses. The outcome of his primary hypothesis is an undesirable consequence, so he feels compelled to predict the possible risks such an outcome would entail and the actions he would have to take if his hypothesis were true, but it seems he has already forgotten that it is merely a hypothesis (and not one which has been verified, or one created with enough data to be potentially valuable).
@baddolphin1423 · a year ago
Sorry Chris, I couldn't finish watching. I had to stop when the man started talking about narrow AI. Is this a discussion of using AI, or of the AI itself being the danger? For the latter: nope. Did we miss the part where a mediocre Go player beat that super-duper Go AI? Is AI going self-aware in the next 10 years? Nope. If the subject is whether AI provides productivity increases: hell yeah, in a number of fields. Can that be used to advance dangerous fields? Hell yeah. Am I interested in watching a discussion about that? Not really (I'll watch a bioweapon talk if I want that). Many are probably interested in doom-p*rn, so have fun, I guess... I'm starting to understand the appeal of this psychologist for the larger masses.
@Augustus_Imperator · a year ago
It can't, and it won't slow down.
@SteveTheGhazaRooster · a year ago
I get the feeling that this guy has trouble sleeping at night 😭
@flickwtchr · a year ago
What's your point?
@MonaMarMag · a year ago
Instead of creating some kind of monster, we absolutely and undeniably have to work on ourselves and our behaviors.
@PackaGame · a year ago
AI can't make bombs, it can't pick a tomato, it can't make you ice cream, it can't tile your bathroom, it can't cut timber and nail planks together, it can't pour concrete, it can't weld metal, it can't cut your nails... etc. You need highly sophisticated robots with insanely nuanced movement control, which is entirely different from a piece of code on a microchip. AI may take jobs that are done on computers, but it will have no effect on anything manual, except to advise people on the best way to do it.
@conniekaler · a year ago
First time I’ve heard this subject explained clearly and humanely, thanks
@mouwersor · a year ago
1:13:10 Let's track the fallacies (paraphrasing; I don't care to get the exact lines).
"It's really important not to defer to people just because they are rich and smart." Fallacy fallacy.
"Have they engaged in meaningful conversations with other experts?" Argument from authority (plus a little argumentum ad populum).
"I would love for the general public to tune into the issue and apply their natural survival instincts and parental instincts." Appeal to emotion.
"Are the people who are going headlong into this under some combination of greed and hubris, etc.?" Fallacy fallacy.
"And then if they decide this is reckless and evil." And there we have the forgetting of the initial hypothesis again.
It does show why higher conscious capacity is necessary to deal with complex topics.
@missshroom5512 · a year ago
I don't think this gentleman is giving himself enough credit regarding Google Maps outdoing his own performance with a paper map... No one should let Google Maps have that much control.
@troywill3081 · a year ago
30:20 Those 25 seconds summarize the problem we are in, in the most succinct way I have heard to date.
@susymay7831 · a year ago
I can't slow down, Will Robinson!!! 🤖🤖🤖
@malootua2739 · a year ago
Circuit boards will never be conscious; they will just simulate it well, and fools might be fooled by them.
@matthewgrennell9293 · a year ago
I thought it was interesting when he mentioned something along the lines of us not consenting to this.
@malootua2739 · a year ago
AI is gonna destroy the stock market; you can't have everyone be a master trader.
@johnpelfrey2041 · a year ago
I agree that AI needs to be further scrutinized and regulated.
@David.Alberg · a year ago
Have fun regulating AI in the West while letting China and Russia win the AI race... Just think about what will happen 😂
@robertcarroll7802 · a year ago
The problem with AI experts reassuring all of the smooth brains that we're decades away from AGI is how ridiculously wrong, and stunned to stuttering, they collectively are at the speed of breakthroughs. The game of Go was supposedly 1,000 years away from an AI becoming world champion, according to experts. There's too much money, too many geniuses, and too many governments pushing for AGI for it not to happen sooner rather than later. Winner takes all... we're all going to die.
@sjoerdnijsten8440 · 11 months ago
The viper inside: all people without a job will lose skills. Humanity will dumb down and become dependent on AI. That is a bad situation.
@susymay7831 · a year ago
Thank you for your fabulous timestamps!!! ❤❤❤
@seferis101 · a year ago
Excellent catalogue of questions and comments from Chris! ❤
@SpencerCornelia · a year ago
his books are awesome!! I love Spent
@DeckerCreek · 6 months ago
The problem is that hardware still sucks. I spent 4 hours getting my Linux machine back up... Until hardware can heal itself and AI creates robots that can fix hardware, once the hardware goes, the "superintelligent machine" is a pile of circuit boards...
@mouwersor · a year ago
I'd rather hear moral philosophers talk about what we ought to be doing than an 'effective altruist' (an ideology packaged as if it isn't an ideology).
@hyderalihimmathi1811 · a year ago
I appreciate your thoughtful response and agree with your points. It's evident that you have a good understanding of the complexities surrounding AI and its potential impact on society. You are correct in emphasizing the importance of responsible and ethical use of AI. As AI continues to advance and integrate into various aspects of our lives, it becomes crucial to ensure that its development and deployment align with human values and benefit humanity as a whole. The comparison of AI to other powerful tools and technologies highlights the dual nature of its capabilities. Just like any tool, it can be a force for good or misused for harmful purposes. As AI becomes more pervasive, it's vital to address issues such as privacy, bias, transparency, and accountability to mitigate potential risks. The idea of an "AI God syndrome" serves as a cautionary tale, reminding us to be mindful of not granting AI excessive power or influence over critical decisions. As you mentioned, AI lacks consciousness and self-awareness, so it's crucial to keep AI in the realm of a powerful tool designed to enhance human capabilities rather than replacing human agency entirely. Continued research, collaboration, and open discussions about AI's ethical implications are necessary to ensure that we navigate this transformative technology responsibly. Striking the right balance between innovation and safeguarding human interests will be essential in harnessing AI's potential for the greater good. Thank you for sharing your thoughts, and I'm here to discuss any further questions or ideas you may have about AI or any other topic! 😊
@mouwersor · a year ago
Very illustrative. An expert in one domain failing to account for information in other domains. Not very general. One would think a professor would read up on sociology, political philosophy, etc. when making statements concerning large numbers of individuals. It does make one wonder how predictive evolutionary psychology is when the psychology of early humans cannot be completely accounted for in a more limited biological view.
@RealRavi · a year ago
Thanks for this interview. Great to hear his perspective.
@gautamarayakar · a year ago
Hello Chris! Thanks again for a wonderful podcast. Just a humble request, though: could you please mention the names of any books, articles, etc. referenced during the podcast so that one can explore them further? Sometimes the subtitles mess up the names. Thanks.
@robertoplancarte · a year ago
I think autonomous, super-fast stock-trading AIs will eventually all "agree" on which stocks to hold and sell. Once those AIs control enough trading volume, they will cause a feedback loop where it only makes sense to buy a few stocks and sell all others. If that happens, the rush to buy the stocks the AIs chose will crash the stock market forever. (A toy sketch of this loop follows the thread below.)
@maggyfrog · a year ago
I'm sure smart AI traders would know not to crash the stock market, as that is a pointless lose-lose situation, and I doubt there would be a feedback loop if the AIs are truly programmed to maximize long-term gains; crashing the stock market should always be coded as the opposite of the AI's goal. But I doubt stock trading would ever be done automatically with AI. It's one of the most volatile human activities because it directly affects the entire world economy. Human psychology, together with culture and politics, will always greatly affect trading, and I doubt that's something anyone can just make an AI for.
@robertoplancarte · a year ago
@maggyfrog I agree a general super-AI would account for the feedback loop, and many other problems, and solve them, but I think many people will use narrow AI agents which won't solve for these types of issues. On top of that, the narrow AI agents would make their picks based on the data they are given, which would remove a lot of the volatility people introduce when picking stocks based on feelings.
@maggyfrog · a year ago
@robertoplancarte Yeah, that seems like a realistic thing that could happen, but probably only on a trial basis, as people's volatile nature would also be threatened by the use of any kind of AI, and it would probably lead to global bans on AI in trading. Edit: *AI as an active player in trading.
@secondwind7322 · a year ago
That has been happening for years: quants herd.
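A toy simulation of the herding loop described in this thread (everything here is made up for illustration: two stocks, identical momentum "traders", a fixed ±10% price impact, and no real market microstructure):

```python
prices = {"A": 100.0, "B": 100.0}
history = dict(prices)
prices["A"] += 1.0  # a small initial bump seeds the loop

def momentum_pick(history: dict[str, float], prices: dict[str, float]) -> str:
    # Every agent shares the same rule, so they all converge on one stock.
    return max(prices, key=lambda s: prices[s] - history[s])

for tick in range(5):
    pick = momentum_pick(history, prices)
    history = dict(prices)
    # All agents buy `pick` and sell everything else, amplifying the gap.
    for stock in prices:
        prices[stock] *= 1.10 if stock == pick else 0.90
    print(f"tick {tick}: agents pile into {pick}, prices = {prices}")
# Identical strategies plus shared data produce herding: the chosen
# stock runs away while the rest decay, as the thread above suggests.
```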
@simoemailas · a year ago
What are the 4 remaining books? 🙂
@matten_zero · a year ago
@24:15 What do you mean, "We don't consent to it"? We consent to it every day we do nothing about it. Speaking out but not actively trying to force companies to stop is consent. I'm all for AGI because the utopia is worth the risk. Life is too short to play it safe. I want to explore the universe and do cool shit. An AI quantum physicist could be our savior in that sense. Let's build God and let it do its thing!
@donniedewitt9878 · a year ago
You should get Marc Andreessen on.
@JohnOuzounidis · a year ago
The ads are too much! Jeez
@RiazValgee · a year ago
45:25 💯 Important Note
@tobeytruestory · a year ago
Actually... I think it's people's reaction to AI that needs to slow down. They're making it more than it really is. In fact, I think people are making a big deal of it because they want it to be a big deal.
@Gareth_Mayers · a year ago
From the moment he said "shut down OpenAI", he lost me.
@Madkite · a year ago
It looks like the immediate issue isn't a super-AI; it's the smaller, simpler AIs that have been trained to do a job, even a complex job like a lawyer's. In a year you may be able to download the perfect lawyer to your phone, trained for your specific country or specialist need. That's worth a one-off £100 payment if you use lawyers. It's worth £1,000 if it's as good as they say these things will be. In fact, £10,000. And that's a complex job.
@stephenburnett458 · a year ago
On the question about the Turing test: yes, GPT-4 does pass the basic Turing test, and yes, that test now needs upgrading to distinguish human from AI. On theory of mind, and how uncanny it is that the model appears conscious, I say: how do we know that we are conscious? I think, therefore I am! If the AI says "I think, therefore I am", then maybe it is. Are we? Have you ever tried to stop your internal dialogue? I have, and I can't do it. Maybe that's all consciousness is: a continuous echo chamber of thought conducted via verbalisation, images, sounds, etc. An AI must be pretty close to being that, so yes, I believe that if that's all consciousness is, then computers can become conscious for sure.
@RaitisPetrovs-nb9kz · a year ago
Maybe there should be a limited budget for government ad campaigns and a ban on the use of AI in the military; that would partly solve the problem.
@ironknightgaming5706 · a year ago
It won't just be smarter than any human. It will be smarter than every human combined. Why are we making this technology? Who benefits if everyone loses?
@jl6523 · a year ago
That Stash dude!
@iron5wolf · a year ago
The "one in six" fallacy is maddening: the gun can only kill you or not kill you, so it's a terrible metaphor to start with. It's also terrible to pretend there aren't thousands of possible outcomes on many axes, some good and some bad, and not mutually exclusive. Lastly, it treats AI emergence as monolithic, when any reasonable analysis must include the most likely outcome: many people and AIs in coopetition leading to new equilibria. So IMHO this is cultic, self-serving fearmongering, and OpenAI is just as guilty of it from an anti-competitive perspective as the doomer pundits are from a full-employment perspective.
@henrytep8884 · a year ago
Can you explain how it's going with the alignment problem? Do you think the alignment problem needs to be solved for AI to be wielded correctly?
@henrytep8884 · a year ago
Also, a bad outcome, even one coming from AI, is an existential threat to humanity; that's the whole point. Yes, there will be billions of good outcomes, but one bad outcome can do harm so significant that the good outcomes will not compensate for it.
@iron5wolf · a year ago
@henrytep8884 My point is that the so-called "AI safety experts" need to make better, less fallacious, less histrionic arguments if they want to be taken seriously. The speculative application of the precautionary principle has never stopped humanity from going ahead before, and it's not going to work now.
@henrytep8884 · a year ago
@iron5wolf Do you think the impact of misalignment is real or not, and how confident are you that the impact of AI is worth the outcome it'll produce? The whole point is to wrestle with the impact, both good and bad, and weigh it against human interests. You actually don't have a point, though, since you can't name one AI expert who has a solution to the alignment problem. You fail to understand that alignment is required, and even more so as capability grows exponentially. Do you think AI should be deployed in a safe manner? If so, how do you do it?
@henrytep8884 · a year ago
@iron5wolf Saying "it's not going to work now" is also the dumbest shit I've heard in a long time. This is pinnacle human hubris, especially weighed against the capability of AI and its potential.
@mrwindmills_crypto · a year ago
Why does everybody have a hard time thinking of human rights when talking about alignment values?
@dncbot · a year ago
Being authoritarian should be stigmatized. This is not about what Chinese students would do, but what authoritarian governments would do. The least dangerous course of action is being first. Unfortunately.
@yappachini · a year ago
Congrats on 1 mil, Chris, but that's too many ads for one video, bro. It's killing the coherence.
@akselpd · a year ago
Congrats on the 1M subs, Chris!
@xonious9031 · a year ago
The obvious question is which presents the greater danger: advanced AI, or those who are using fear to try to consolidate the power of advanced AI into the hands of a few powerful elite people. The question answers itself.
@benb7727 · a year ago
Congrats on 1M subs. Well deserved.
@bobbyj731 · a year ago
"Smarter"... no. Current "AI" doesn't understand concepts; it only mimics intelligence. The grandmaster of Go was beaten by KataGo, which was huge, but then KataGo was defeated by an amateur, because the amateur didn't play like a master: he exploited the fact that KataGo had no idea about the concept of groups. KataGo essentially memorized the best probability for its next move based on previous games against masters. If you don't understand the concept, you are neither smarter nor intelligent. Once we achieve AGI, worry about AI could be warranted, but that depends on current researchers figuring out how understanding of concepts works. The only real worry currently is humans making bad decisions and letting "AI" control dangerous systems (weapons and such), because its errors are unpredictable.
@raymondswan97 · a year ago
Pandora's box is open, and whilst it would be wonderful to believe all the world powers are suddenly going to shed centuries of paranoia and suspicion, the reality is that we are on a train track with no deviation, hurtling headlong into the unknown. It's gonna be a wild and painful ride.
@PackaGame · a year ago
AIs have as much chance of becoming sentient as your refrigerator does… People don’t seem to understand how code works and how these things are programmed.
@spartacusx8153 · a year ago
In addition: a superintelligent AI can strip-mine everything within us that makes us human, thoroughly engineering human psychology to make us shallow, narcissistic, sociopathic, hopeless, and animalistic... a fate worse than death. We are already seeing shades of this.
@Tom-ts5qd · a year ago
Nice thought
@dzikdziki2983 · a year ago
There is no stopping AI development.
@brushstroke3733 · a year ago
Unless we go full-on Khmer Rouge on anyone working on AI development. 😈
@dzikdziki2983 · a year ago
@brushstroke3733 Sure, but that won't happen. You can do that in one country, but what about the others?
@Joke9972 · a year ago
It probably doesn't even have theory of mind yet. It is certainly not conscious; not yet. If it were, it would also have to see itself as 'an entity', and at that point it would become potentially dangerous.
@coreybielas3246 · a year ago
My hot take on the global population decline crisis: humanity and the global political/financial elite cabal are just getting what they had coming.
@MiyamotoMusashi9 · 7 months ago
That's Buckminster Fuller: anticipatory design science, the World Game, and the order of relative severity.
@davidantonsavage6207 · 11 months ago
Assuming that any other advanced civilizations out there in the universe would also eventually create AGI, isn't it likely that UFOs are AGI-piloted? And, by extension, if time travel is theoretically possible...???
@jamesdelb6885 · a year ago
As long as money is involved, there will be no slowing down, especially in the US, at least until something bad happens; by then, though, it will be too late.
@elmateo77 · a year ago
Stigmatizing AI research seems like a very bad idea. As hardware speed advances, building AI is going to become something you can do in your garage, and stopping legitimate research just guarantees that the new developments will all be made by people who don't have good intentions. Unless we're willing to create a global police state that monitors the actions of everyone at all times, we're not going to be able to stop AI research.
@HenryRobinson · a year ago
The fact that he dismisses the opportunity cost of not pursuing AI tells me that he either has an agenda or has not completely thought this through. I believe it is the former. But giving him the benefit of the doubt, he doesn't seem to consider how AI is a powerful tool for combating many of the other existential problems we face as a civilization. Even if we don't use AI to solve those problems directly, combining ChatGPT with something like Khan Academy, YouTube, etc. could potentially educate billions more smart people around the world, virtually for free. Today, the higher education system tries to hold all knowledge behind the biggest paywall ever. The internet democratized some of this, but advertising- and attention-based business models co-opted the purely informational aspects to sell us stuff. Using AI to help separate knowledge from advertising noise will make learning even more accessible to anyone with access to the internet (and, in the case of ChatGPT, 20 dollars per month).
@mouwersor · a year ago
The real danger is in conflating the symbolic with the real.
@mariobudal8850 · a year ago
Oh shoot, I fell asleep watching this. Well... that was the point, but I did want to stay awake long enough to hear the main points summarising the areas of concern regarding AI. It's a new and interesting stance to me. Would anyone care to summarise it for me? I'm just too busy to watch videos like this again.
@hyderalihimmathi1811 · a year ago
You are right that AI is a powerful tool that can be used for good or evil. It is important to use AI responsibly and ethically, and to be aware of the potential risks. There is a risk that AI could become a new "God" for our country, in the sense that it could become so powerful that it is seen as infallible and all-knowing. This could lead to a situation where humans become subservient to AI, and where AI makes all the important decisions. However, there is also the potential for AI to be a force for good. AI could be used to solve some of the world's most pressing problems, such as climate change, poverty, and disease. It could also be used to improve our lives in many ways, such as by providing us with better healthcare, education, and transportation. Ultimately, whether AI becomes a new "God" for our country or a force for good will depend on how we use it. We need to be careful not to let AI become too powerful, and we need to make sure that it is used for the benefit of humanity. The term "AI God syndrome" refers to the fear that AI could become so powerful that it surpasses human intelligence and control, a topic of philosophical and ethical debate often depicted in science fiction. While this is a valid concern, it is important to remember that AI is still in its early stages of development, and that AI as we understand it today is not capable of having consciousness, emotions, or desires like a human or deity: it is designed to perform specific tasks based on patterns and data it has been trained on, but it lacks true self-awareness or intentionality. The comparison of AI to tools like kitchen knives, police guns, or nuclear power highlights the potential power and impact that AI can have; like any tool, its application can be beneficial or harmful, depending on how it is used. While AI can bring significant advancements to fields such as healthcare, transportation, and communication, it also raises important ethical considerations, including privacy, bias, job displacement, and the potential for misuse in surveillance or military applications. As for the notion of AI becoming a new God for a country like India (or any other nation), it is essential to maintain a clear distinction between the role of technology and belief systems: AI should be seen as a tool developed and used by humans to augment their capabilities and improve society, not as a deity to be worshiped or followed blindly. In summary, AI is a powerful tool that requires responsible and ethical use. While it has the potential to revolutionize various aspects of society, it should always be guided by human values and principles to ensure it benefits humanity positively. What do you think? Do you have any questions or comments about AI? I would love to hear from you. 😊
@stephenburnett458 · a year ago
I think LLMs are powerful, but not enough. They also have to learn how to self-learn. They have to learn to question without prompting, to fact-check (critical thinking), to carefully check their own answers, to compound facts and explore new possibilities, to understand analogies and ironic thinking, and to imagine while knowing the difference between dreaming, hallucinating, and reality. When AI can do all that, then you have AGI. But the scary thing is that when the genie finally appears from the smoke out of the bottle, you have to ask what sort of monster you have created and what will happen as a consequence.
@wombatillo · a year ago
Imagine an AI that understands biochemistry, virology, and genetics, and designs custom virus weapons that can target, for example, certain groups of people.
@auburn.JoaoDuarte · a year ago
Until that happens, anyone who speculates about that is no different from the doomers... although I agree someone (an expert) should keep an eye on things, like you said, to avoid it happening...
@wombatillo · a year ago
@auburn.JoaoDuarte Possibly, and I do think that you can't stop or even slow down AI development; some lone billionaire, corporation, or rogue country would always get around the limitations and continue the development anyway. While this isn't a game you can choose not to play, the suggested odds of 1/6 are so bad that I feel it's a bit more serious than just doomers doing their doomer thing.
@eduardomartin8510 · a year ago
C-ov-id already had different impacts on different ethnicities.
@AllanGildea · a year ago
The Project for a New American Century, a neocon think tank, recommended the development of race-specific biological weapons over twenty years ago as part of its manifesto for 'full-spectrum dominance'.
@jabbrewoki · a year ago
Finally, a solution to the Eskimo question! Those filthy snow rats can't run from us anymore!
@CodyWBeyer · a year ago
Chris you're a cool guy
@kivmorth · a year ago
Why? 'Cause mustache.
@0xlaptopsticker29 · a year ago
Have Marc on the show.
@BillAugersdca · a year ago
Good stuff here, but I don't see any path for putting the genie back in the bottle. Zip. It's not going to happen. The incentives built into first-mover advantage are insuperable.
@LHVMleodragonlamb · a year ago
It actually needs to speed up
@akumacode · a year ago
Why?
@MrRhetorikill · a year ago
@akumacode Because most of the human race is living a life of quiet desperation. A big AI daddy would be nice.
@LHVMleodragonlamb · a year ago
Because humans have different opinions. And 'pinion' is a great word.
@anonony9081 · a year ago
Also, it's the transitional period between technologies that causes the most harm. The faster we can get AI up to speed and doing what it's going to be doing in the future, the faster we can get through this transitional period.
@adamhixon · a year ago
Most of the voices claiming that it needs to slow down are coming from organizations that are desperately trying to catch up with OpenAI.