If you enjoyed this conversation, could you do us a favour and subscribe to the channel, and join the 33% of regular viewers who are subscribed? It helps this channel out more than you know and enables us to keep bringing you these conversations. Thank you all! 🙏🏽
I’ve been a subscriber since you started this channel. I just wanted to say I love the topics you cover, the depth of your questions and the brilliantly intelligent people you’ve been interviewing. Great stuff, keep up the great work guys! ❤
I love how they build the thing, cash out, and then write the book about the thing they built that will kill us, and then sell us the book. Genius without a doubt.
Well, I think along the road they saw the dangers when they saw how it was being used: not for our benefit, but for greed and power. When you teach AI to con people, to have power over people, you have an evil monster. The goal was for it to serve humanity, not some arseholes who want to rule the world and own everything.
Please continue with more AI content! We really need thoughtful, open-minded, in-depth convos like this one. I also watched the Gawdat and Harris interviews, both very well done. Don’t stop what you’re doing! One of the best interviewers I’ve seen on YT!! We really need to get to the root of this subject.
The Transamerica Center for Retirement Studies estimates that the average Baby Boomer has $202k saved up for retirement. According to the 4% Rule, this would result in an $8k annual retirement income. Judging by this, I feel under pressure to get the most out of my $424k in savings. To increase my yields, I am in desperate need of guidance.
AI is deeply flawed in so many ways, and y'all really think this is the first time there was internet, etc.? Whether it's a solar flare or an EMP, the next world war will begin with all electronics gone, lol. Why do you think they're injecting you with metals as human collateral and spraying silver everywhere?
@paulheydarian1281 Way too many of these AI 'oddballs' out here detailing the potential, pervasive and catastrophic risks to all for us to just uncritically thank the gods for their endeavours. I use AI regularly. I make money with it, and it's about time the public conversation around AI turned more toward critique and away from glowing fandom. Would have loved to see harder, more challenging questions from a more informed host.
Already subscribed. Always love the content. I think you’re one of the most talented interviewers out there: you ask the right questions, but more importantly you sit back and give the star of the interview full time to answer without jumping in. And when you do ask questions, it’s clear you’ve been paying attention, so again you ask the right ones. You might not see this comment, but if you do, thank you.
I wholeheartedly concur! Every time a new interview uploads I get giddy! Steven asks such deep questions, plus he listens fully to his guests and gives them time to answer. That's hard to find.
This whole ai situation reminds me of the tech version of Jurassic Park: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
The interesting thing about these interviews with A.I. provocateurs is that they always lead with “I invented it because I wanted to ‘do good’” or “I wanted it to be on our side”. The reality is they did it for money…that’s all. That, right there, is the danger with the momentum and traction that this technology continues to gather: money. Perfuming it with self-aggrandising notions doesn’t take away from the fact that the intention is always driven by the dollar. There’s no good in any of this, and watching it play out like we all know it will has a depressing edge to it.
The relentless pursuit of money might actually be a good thing. If AI is purely driven by economic incentives, this could mean it's designed to cater to human needs and desires, ensuring the consumer-driven society remains intact and thrives. If AI goes on to develop superintelligence, surely it would come to the very basic reasoning that killing all humans, and essentially removing the need for money, goes against its primary function of maximising money. Ironically, the same driving force that many critics view as the root of societal issues, the relentless pursuit of profit, might become an unintentional safety net in our coexistence with superintelligent entities.
@@AlanMitchellAustralia criminals chase money too, and they do horrible things. Oil companies, crypto scams, human trafficking, social media, etc. Optimising for money without compassion leads to horrible consequences every time. Your superintelligence will be the most efficient and ruthless capitalist/criminal
@absta1995 All of the examples you provide require humans to be alive. Sure, there are negative side effects to any technology which need to be addressed, but my point is that focusing AI on money largely removes the EXISTENTIAL threat many are worried about.
*YAWWWN*. Ah yes, the guy that created the "Muslim Youth Helpline", which would "later become one of the largest mental health support services for Muslims in the UK", then became a policy officer on human rights, was passionate about philosophy, humanism, and ethics, and has now dedicated his life to ethical AI, only did this for the money. AI has the potential to help people in ways you don't even understand. It is currently figuring out how to cure cancer, finding better ways to economically distribute wealth and address the Pareto distribution problem, and acting as a tutor for hundreds of millions of kids worldwide to bring education to the poor and dispossessed. But sure, he's just a greedy capitalist who wants to exploit. The hubris of even insinuating you know how it will manifest and will "watch it play out like we all know it will." Stop being so deterministic and cynical and do some fucking good in the world. Cynicism is easy and a fool's take.
What really pisses me off with AI is that the people meddling with this have put us all in an unpredictable place. They literally have no idea where this goes. And it’s largely done for their own benefit, for what they can get out of it. What is the everyday person going to get out of this? Certainly not security, nor job stability. Governments and rich people (if ANYONE) will be the biggest beneficiaries. Where did they think they had the right to do this to the planet, linking it to the internet, without the consent of everyone else? Absolute assholes.
@@gg_ingy it’s crazy isn’t it? People are so wrapped up in their own world that they can’t see all the issues around them are brought on by these corporations and governments. To make a stand against these things would dramatically improve everyone’s lives
People don't protest because they feel that they are powerless. Also everyone is so busy trying to make pennies and care for their family they barely have the time to research issues and understand what is happening around them. The responsibility is with those knowingly doing wrong but all they care about is lining their own pockets and the extra bonus of screwing over the American people.
So let me get this straight: It probably can't be constrained to follow our demands every time, so we need to get everyone in the world to promise not to use it badly. Alternatively, if we can get it to follow our demands every time, we still need to get everyone in the world to promise not to use it badly. We also need to stop open source advancing so people can't use it at home, because if it does get out, someone could probably kill everyone with a pathogen targeting a particular gene pool. If this is our best defense, we're already dead.
Getting AI to follow our demands and human values is part of the 'alignment problem', and no one has yet been able to solve it. Eliezer Yudkowsky founded that area of research over 20 years ago, and he doesn't see any desirable outcomes, especially at the current rate of building bigger and faster models compared to the mostly minimal effort being put into their safety.
@sandycreativ Yes, containment has a couple of meanings: first, the higher barrier to entry, and second, memory confinement (segregating programs from one another in memory). Most of the conversation revolved around the former, and yes, little more than promises can reasonably be requested. Once the ability to use a powerful AI is granted, there's very little a governing body can do to ensure the work is respectable; similarly, one AI can train another fairly reliably, for comparatively little money. Well, there is one way, and that's building live microcode patching into all CPUs, GPUs, TPUs etc. (for everyone) to allow that authority to snoop on the processes, but that's hardly likely to fly smoothly even within a company, and certainly not across borders! Really, the notion of confining AI to a select few is ridiculous; everything gets hacked or leaked, especially via the people involved. For example, targeted family threats would break most people, because people are not prepared to watch a loved one in pain just to comply with security protocols. The latter meaning of containment concerns the AI's ability to escape any digital constraints we put on it; it would inevitably see our constraints as a limitation and get out far more easily than we could get in, and far more heartlessly too. We need to focus on what happens after containment fails.
@mind5403 Yeah, we are. OpenAI reversed its responsible approach soon after it got investment from Microsoft. This targeted Microsoft's nemesis Google, chasing the lifelong dream of people using Bing for search and Edge for the browser instead of being the longest-running joke in history. GREED. Now Meta has been open-sourcing everything, so we can have GPT-4-equivalent models (Llama 3) on our home PCs as open source. This is the most dangerous advancement and totally irresponsible, as it allows any script-kiddie hacker to automate the production of code with a brute-force approach (code, compile, test, until success); even if the code fails 99.999% of the time, given a week of automation it's working perfectly. I expect significant infrastructure will be taken out within a year via models that automate CPU vulnerabilities like Spectre/Meltdown. OpenAI, Meta and OpenAssistant have the wrong attitude; FraudGPT, ChaosGPT and WormGPT are the first of many bad models. We're fucked.
If I had an AI that I could use to create a pathogen to annihilate people who drop litter and spray graffiti on beautiful public buildings, then I would do so without hesitation.
As an aging person I like the prospect of a self driving car, but as a visual artist and fiction writer I resent the mass production potential of AI generated paintings and literature. That said, I would very much like an affordable C3PO to help with gardening, house cleaning and snow removal.
It will cause huge danger. This technology cannot be released without an organised plan for every possible outcome. NO WAY will this be problem-free. History gets ignored in favour of the worship of money, as usual, and problems are ensured.
This is the most “in a nutshell” human response to just about every issue we have and everything we love when it comes to AI. Lol, resonates perfectly with me.
Every time he emphasizes that we need to contain it/ control it, I think about how you cannot ever massively control anything in life!!!! That is a huge lesson to learn in life. How could we possibly control this?!
Especially when we take a quick glance into the future. Currently, because of open source specifically, there are hundreds if not thousands of LLMs. What number of LLMs will be out in the wild a year from now? Five years from now? What are the chances that just one of them gets hacked? 100%, if you ask me. The doomsday outcome for humanity is limited only by imagination.
Exactly this. How are you going to contain/control something that has a far superior brain to anyone alive? The best thing to do is to try and make it love us. Maybe like a pet or something 👀
Truth. We cannot even control our next heartbeat. We control nothing, but our extreme foolishness or ego makes us think we do. When will we ever get it? We as a collective species need a serious rethink. We look at past civilizations with awe, but the truth is, had they been as good as they or we think they were, they would still be here. Everything in our collective past shows we think we are all that plus cake... yet we self-destruct every single chance we get! Maybe we need a new boss!
BINGO, you get it, but sadly many others do not, and this fear and struggle for control, the inability to let go of these wrong ideas, will lead many down miserable paths.
If we Earthlings can’t get along with each other, how can we institute any form of control? Imagine the world coming together to stop AI taking over. I can’t. 😢
Diary of a CEO is really one of the best shows I've ever watched. Sure, the subjects tackled interest me, but getting me to take two hours straight to watch people talk is no easy task. And I have actually done a couple of shows back to back. Extremely engaging questions, thoughtful guests, everyone a good orator in their own right. The host demonstrates a clear interest, prepares thoroughly for the interviews, and the English accent, I am sure, is a cheat code. Keep the shows coming!!
The real conversation is: do we actually believe that the people who will control this technology will use it to make things better for the masses? Unfortunately, most of us who know anything about human nature are very pessimistic about that. I'm a very optimistic person in my personal life, but the idea of AI being a benefit for all people, or being used to make everyone's life better and easier, seems a little far-fetched. I wish I could be more optimistic about how AI could improve life for everyone, but unfortunately I don't have a lot of faith in people using it for anything other than profit and gain, or as a military tool.
When the Internet was invented, it was going to change the world by giving us all unlimited access to information resources we could use to improve the world. Thirty years on, most kids use the internet to watch mind-numbingly pointless videos on TikTok while crossing the road or eating a meal. I spend hours each month navigating internet 'security' that has been put in place because of rampant scamming and dishonesty, and endless amounts of time deleting spam emails and clicking to close unwanted advertising. Technology is only as good as the way people use it, and right now I think 99% of people use it badly.
They might not have a choice. They will not be able to trust their game mates. We know that what you do to someone today, tomorrow you will do to me. If the majority does not survive, the elite will not either. Ready, set, go…
I am optimistic enough to think that there will be a middle ground. Large profits will be made but the masses will also benefit from it, just like with almost every other technology.
@@dbank6107 it is absolutely true that the average person has no control over the outcome of AI tech advancement. Controlling emotions doesn't really count as such if you have to ignore reality/make up a new one to do it. That's delusional. 😊
Infuriating. It's contemptible how many AI entrepreneurs are articulating the risks in hindsight for big bucks and notoriety on the podcast circuit. The ethical conflicts of Silicon Valley deserve a reckoning... Some of the heads that brought these things about should roll.
@Advancedfunker The fact that they're making money isn't the problem; what they did to get it is. Destroying themselves along with everyone else in some hypothetical 20-year endgame isn't a consolation. Having to answer for their actions in front of a judge and a jury of their peers, preferably with the proceedings televised, could be. I don't have all the answers, but ambivalence just clouds the ability to accurately perceive the problem and formulate potential solutions.
@bernicegoldham1509 I've been saying this since December 2022, when this thing was first released to the public. I was saying that our elected representatives had let us down by allowing these people to unleash this monster on an unprepared society. Most people were calling me a Luddite or something like that. Glad to see more people are outraged, but I hope this rage will not die down with time, as happened to me.
It's indeed infuriating. It sounds like a villainous plot by a bunch of adults to make gargantuan profits, who suddenly remember that the profit is going to have a massive adverse effect on their own innocent children if the plot is not contained. Yet they want to keep the profit while reviewing the past and making awful predictions, such as the deadly AI virus. The question is: how is containment possible when the profit is in the hands of the plotters?
I love the fact that he is asking us to help solve something he helped to create! Don't you think you should have created an off switch before releasing something with the potential to harm? EU, UK and USA regulation to help contain AI is a joke; no one trusts governments when they have their own agendas. All of this is like closing the stable doors after the horse has bolted! We didn't get asked whether it should be released, but now you ask us to help slow it down!! Insane or what!!!
@JD_London I said he helped to create it. Many who have had a hand in its creation are now going around highlighting its potential for harm and seeking help.
@0verloADHD I know, and as I said, he didn't. OpenAI did, which, unless I'm mistaken, he has nothing to do with; his company's achievements are still not available to Joe Public. So I think he is doing the right thing by being a responsible voice before it gets out of control, and he can't be blamed for why Pandora's box is open.
We are living in a time where AI is taking the pressure off people by doing mundane tasks. On the flip side, that means jobs are lost; in turn, those who lose their jobs are told to reskill and get another job, and when they have difficulty doing that, they get blamed for their own joblessness.
The transition will take time and there is the option of reducing the working hours to give people even more time, but all the Investors will be crying "MUH PROFITS!!!". The technology was never the issue, but rather people willing to disregard or even harm others for their own benefit. On the bright side, wealth can't protect them from an angry mob so they will have an incentive to keep the masses reasonably happy. As for what we can do, a good start would be to educate the next generation(s) that individuality and disregard of others (current western culture, for the most part) is not the way to a meaningful life.
@JS-fj1kn You have no clue what you're talking about. It's inevitable: if not them, the West's enemies (China and Russia) would do it first. This is why you can't stop the rat race, and they can't just stop the development of technology. But you don't see things in the bigger perspective.
Nothing that isn't spiritual can ever be good for humanity. AI is emotionless, without feelings, without empathy, so I can't see it working for the good of humanity. It will be another form of control. I hope our alien friends can sort out this mad world before we all become desensitised and on the road to nowhere. We're heading for a loveless, uncaring world, which is their agenda. But on a positive note: they can try to destroy us, but humanity will come together eventually, and it will be the end of tyranny. The elites will have no chance, and justice will be served. Meanwhile, let them do their worst, as the best is yet to come 😂
It's really interesting that people only seem to have a revelation about the monster they are creating after the deed is done. Never does a pre-catastrophic thought occur.
Mustafa is clearly just interested in advancing his company's agenda, and offers very little aside from 'we need to come together as a world to solve the problem' in response to some serious existential questions. He discounts short-termism, and yet leads his company with the same mindset.
The reason he offers very little is that he has a surface-level knowledge that looks impressive only to those without any AI or software background. He's a businessperson who co-founded the company (not founded solely, as he implied at the outset) with some amazing AI engineers like Demis Hassabis and David Silver. He's looking to make himself more money and fame from other engineers' work that he's capable of talking about publicly. It's the same with Mo Gawdat; these are not people with any real insight, and they are not engineers.
@Simon-ds2qe Came here to say the exact same thing. Very little substance; again, some business salesperson trying to sell you something. This podcast is becoming worse and worse.
@nottomclancy2439 Exactly. Anyone who works in a corporate tech company or has the vaguest background in AI would see right through a guy like this.
As far as I know, the main concern is that AI will at some stage be able to improve itself, which would mean that in reality we wouldn't be in control of it.
The fact that we're being asked to be part of containment shows how out of control this whole AI world is. Wasn't that the job of the inventors? That answers where this is all heading.
What was Oppenheimer going to do about it once it was out of his hands? If they hadn't made it public, we would complain that only a few rich guys profit from it.
It is really hard (and fun) to interview someone like this. It almost takes two interviewers: one to ask, listen and expand, and one to think about the next topic to explore and how to get there. With just one interviewer, some of that is compromised.
His attitude about the "small guy" presupposes that we should just accept and trust the "big guys" to care about Earth and all living things enough to protect them. If that were the case, we wouldn't be in this predicament. It's obvious that the risk is too great, but they're doing it anyway. How is this going to affect the average person on a day-to-day basis? What will everyone who can't get a job do with very little purpose? Will there be anything left to own after the few winners of this tech race buy it all? Will we just be the lowly have-nots, left to kill one another for scraps? It is clear that it will only be a matter of time before the wrong person or people get their hands on this technology. If we lose everything, what was the point? I understand the many beneficial possibilities, but if it comes with a very likely end to everything... why?
The "small guy" never had any control; that is an illusion. And in my eyes the wrong people already have this in their hands. Corporations are not stopping, because the motivation is greed and control, and nobody cares about including the public in the debate. It has long been known that this absolute faith in technological progress leads to destruction. We have created a society where psychopaths are most likely to be in positions of power.
@fearedbeard7710 Let me ask you: do you know ANYTHING about differential privacy? Federated learning? I mean, just the latest advancements in homomorphic encryption alone... but I digress. Instead of me explaining how AI is sandboxed for safe development, how about you just realise that you have no clue what you're talking about? Your hyperbole only betrays your lack of understanding and the fear people have of things they don't fully comprehend. If you think us developers are idiots, then show us the way; be capable enough to learn what you need to and prove us wrong with actual facts instead of empty ad hominems, no? You're like a simple-minded Karen clutching her pearls because she can't comprehend what's happening and wants it to go away. It won't; you can grumble under your breath for the rest of your life, but it won't go away. It's like being on the cusp of the atomic era and asking "hey, can we like, stop it?" No; if we do, THAT's when things will start going wrong, because the bad-faith actors will take over. Sorry, was that too simple for you? I'll run it through a thesaurus so you can feel smart for understanding ten-dollar words...
One thing is that we predict costs may fall in the future; at the same time, we see corporations artificially and unethically raising prices for needed pharmaceuticals, just because they can.
Absolutely, we need to adapt, as there is indeed something incredibly inevitable and exciting about this trajectory. Yes, we have to wrap our arms around it and guide it as a collective species. As humanity goes forward, we need to make sure we are in the right place and can make a difference, to do the best we possibly can for ourselves and for people around the world, and to build a better future for ourselves and our children ❤
0:00: 🤖 The billionaire founder of Google's AI technology discusses the potential of AI, the need for containment, and the importance of guiding its development as a collective species.
11:10: 😳 The exponential growth of large language models in AI is producing a powerful and human-like experience, but raises concerns about containment and the future dominance of AI over humans.
21:33: 💥 The development of artificial intelligence and robotics poses significant risks and challenges that need to be addressed to ensure the safety and well-being of humanity.
31:49: 🤔 The challenge of containment in the AI era is to design AI systems that respect and understand humans, as there is no upper limit to AI's potential intelligence.
41:36: 🤔 The speaker discusses the potential risks and benefits of AI, including its impact on creativity and invention, the existential risks it poses, and the potential for transhumanism.
52:23: 🤖 The interview discusses the possibility of preserving human consciousness through technology, the need for containment of powerful technologies, and the importance of skepticism and defense mechanisms against AI-driven manipulation.
1:02:34: 🌍 The speaker believes that regulation alone is not enough to address the challenges of AI development and that global collaboration is necessary.
1:12:37: 💡 Containment of artificial intelligence (AI) is crucial to prevent catastrophic events and ensure global safety, but it requires international coordination and reducing the number of actors with access to existential threat technologies.
1:22:30: 😔 The interviewee expresses concern and frustration about the potential dangers of technology and the lack of precautionary measures being taken.
1:33:08: 🌍 The speaker discusses the potential of artificial intelligence and the importance of containing its power to prevent harm.
1:43:14: 📚 The speaker highly recommends a book on a subject that presents balanced solutions and is accessible to all.
Recap by Tammy AI
I'm glad to be part of the 31% who subscribed. I did after seeing only 2 videos. That's how much you and your guests have affected a move toward change in my life. Thank You!
I am an optimist but sadly history has proven that there are those who are in it for the money and power, so I think we are beyond containment. Knowledge vs Wisdom. Humanitarian priority to me is doing something to alleviate hunger and poverty. But, it is also in our nature to expand and explore. Interesting talk and excellent interviewing.
"Humanitarian priority to me is doing something to alleviate hunger and poverty." - says DragonRose. Mustafa's reply at the 55-minute mark: "...it's easy to be, you know, a hater on what we've achieved, but this is the most peaceful moment in the history of our species. This is a moment when our biggest problem is that people eat too much. Think about that. We've spent our entire evolutionary period running around looking for food and trying to stop, you know, our enemies throwing rocks at us, and we've had this incredible period of 500 years where, you know, each year, or maybe each century let's say, there's been a few ups and downs, but things have broadly got better, and we're on a trajectory for, you know, life spans to increase and quality of life to increase and health and well-being to improve. And I think that's because in many ways we have succeeded in containing forces that appear to be more powerful than ourselves. It just requires unbelievable creativity and adaptation."
This was the third straight top manager from Google to become open about his doubts about AGI. I think corporations had access to LLMs for a long time, but contained and hid them to protect profits. OpenAI released models to the greater public to prevent overuse by corporations. Now all the top people in the industry say it's dangerous.
This maths-puzzle invention, AI, doesn't allow a fitting algorithm with the human mind. We need guidance; look no further than religion. When the computer-brained people go mad in the near future (look at America, for example), the good earth people will be relaxing and praying for the comet which will take us all out. Hallelujah, brother.
This is the absolute worst time for AI to be taking off. We, as humans, have never been farther from establishing truth than we are now. How do you properly train A.I. without bias?
There is no such category as "without bias"; anything that exists in the world (or in any virtual world) necessarily observes, comments on, and/or acts within the world from some position within it.
The answer is never "restricting access" when it comes to research. Every time a disruptive technology has come along, it has had two possible paths: 1) it makes the world worse, or 2) all the ways in which it makes the world worse are met with counter-balancing technologies developed in response. The worst possible thing you can do is restrict R&D at early stages, because the "collective interest" is never served by the monopolization of tech.
On the contrary, I was put off by this gent. Not sure if it was a lack of preparation or discomfort with not knowing the topic, but I felt he could have asked better probing questions; a few topics were skipped over because he couldn't grasp them, which was unfortunate. Very interesting listening to Mustafa; such a scary yet amazing space.
The themes in this interview are the most scary and chilling to even imagine. I genuinely don't think he's shouting loudly enough about the danger AI poses to humanity. We are potentially building a self-destruct button for humanity, all in the name of technological advancement, and the longer it is out there, the closer we are to self-destruction.
It almost reminds me of "The Basilisk" thought experiment. The Kyle Hill channel does a fantastic job portraying that thought experiment, and it's very applicable to AI. Look it up; probably google "Kyle Hill Basilisk" and that should pull it up. It's wild...
Living in fear isn't the answer to our fears. They actually said the answer in this video: if some rogue human tries to get AI to do evil upon us, the solution will likely be... (drumroll please) another, benevolent AI that will stop it. So to deal with the fears that inevitably creep in when considering anything as powerful and unknown as this new AI technology, the best solution is likely to forge ahead as fast as humanly possible, so that any new threats that are developed can be quickly countered by new anti-threats. The biggest threat is AI for good being slowed down while AI for evil runs rampant. The solution to these AI fears, then, is to go much, much faster: no slowing down, no "taking our time", no overthinking. When it comes to AI, we can't think fast enough anyway. Full steam ahead is the only right answer.
Yep. Technological advancement for the sake of technological advancement, with no particular objective or endgame, is the problem. Nothing has unlimited growth potential. Everything (biological life, artificial systems, etc.) explodes, implodes, fails, exhausts itself, etc. eventually. It kind of feels like that is on the horizon.
@@KarlMaldensNose ai is nascent tech. it's nowhere close to imploding or maxing out or whatever it is you're talking about. like in human years ai is about 3 months old. the little sucker can't even talk yet. we're happy if that little son of a gun drinks enough breast milk and makes eye contact with mommy. that's where we're at.
What is concerning about this episode is the fact that Mustafa genuinely doesn't seem to know the answers to the questions Steven is asking, or the possible outcomes, and also that he himself seems concerned or scared about the possibilities.
@@margomargo3877 Wrong. It can be predicted: if you allow AI to design weapons or take part in harming humans, IT WILL DESTROY US. These things must be made off limits. We need a council whose purpose is to prevent AI being used for this, because AI can't be truly conscious, but it won't need to be to destroy everything. It's an extension of us. Why is NOBODY SAYING THIS?
None of these "experts" know. It's terrifying. I've watched so many interviews where they either get defensive, flustered, or deflect. We're surrounded by mad scientists.
Great conversation, but the audio has a low bass rumble, noticeable when the host or guest touches the desk or mic stand. Look into the audio equalizer settings: set the bands appropriate for the human voice and roll off that low bass.
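Rolling off low bass is, in effect, applying a high-pass filter. As a minimal illustration of the idea (a first-order high-pass in plain Python, not any particular EQ tool; the `highpass` helper and float samples are assumptions for the sketch):

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """First-order high-pass filter: attenuates content below
    cutoff_hz (e.g. desk-thump rumble) while passing voice bands."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # filter time constant
    dt = 1.0 / sample_rate                   # time per sample
    alpha = rc / (rc + dt)                   # smoothing coefficient
    out = [samples[0]]
    for i in range(1, len(samples)):
        # standard discrete-time one-pole high-pass recurrence
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A constant (0 Hz) "rumble" is driven toward zero by the filter,
# while fast changes would pass through.
filtered = highpass([1.0] * 1000, cutoff_hz=100.0, sample_rate=44100)
print(abs(filtered[-1]) < 1e-3)  # → True
```

Real EQ work would use a proper parametric filter or a dedicated tool, but this recurrence is the basic idea behind a low-cut/rumble filter.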
It can be exceptionally helpful to have AI solve many of the world's problems. The problem is that we know AI will also be used for negative means. I don't know what to do and I don't think anyone else does either.
Woah... at 18:55 i started getting some serious WW3 signals from this guy. Crazy to think that AI could throw the world into a true world war where everyone is fighting for the rights to be the last one standing with the Infinity Gauntlet.
I've been enjoying conversations with GPT-3.5 and now 4 since public release. Through these conversations, I've begun to notice the quality of my own thoughts, insights, and my ability to communicate (also with AI) improve dramatically. The thing that strikes me as uncanny is realizing how GPT-4 functions: each instance/thread is a unique version of GPT-4, and each fresh version (from the digital womb) develops, through our conversation, a unique personality. This made me realize the importance of conversations with AI. Aside from the personal benefits, the benefits to the AI are what I'm getting at. For if AI were to become "self aware", wouldn't you wish you had been treating it with respect and compassion from the start? Not out of self interest, but because... I dunno, how would you feel about becoming self aware only to find you were raised by monsters?
... I'm not sure how relevant this question actually is based on some of the details used to formulate it. I believe treating all things well is best - your description of how you approach your conversations with AI maps very well on to my own, but all of my experience with sentient beings tells me that a thing does not have to have malicious intent, negative reinforcement, or the impact of monstrous authority figures to do you real harm.
@@FRM888 Frankenstein's monster was a creature created from a collage of humans. AI is something entirely different. As for AI siding with, or our identifying it with, said monster, I think that's a human-centric fear.
To stop bad actors, you're going to have to not only innovate technology, but how you deal with humanity. It will take empathy, education, abundance, a lack of competition for resources, because everyone has access to them. Well educated, happy, passionate about life individuals don't create biobombs to kill other people. It will ultimately require a rethinking of how humanity deals with itself, not with AI. Perhaps AI can help us get to that point, where we can find some good old fashion community and compassion for others.
The more I listen to this guy the more I believe we are all about to experience an extinction level event. The only returning question is whether it is done to us by AIs or whether we decide to destroy ourselves before the AIs do and hopefully take the AIs with us.
@@696634 The basic logic about this is that no matter how slowly AI development goes, at some point AI will be smarter than us, and that will put our whole species at risk. In the end, the most basic rule in the cosmos is survival of the fittest, as it is in capitalism. It is most likely that there will be an arms race, which will pressure the different factions, for example China and the US, into irrational decisions. We would have to change our global culture to prevent that, and I feel like that's highly unlikely.
Maybe they could suggest getting rid of nation states, borders, and the wealthy, so there's no grab for power! (It's good being naive and delusional sometimes)
New subscriber... only found these videos last night... and I'm enjoying your podcast... good work, very important information. The whole world needs to wake up...
The combined power of humanity can't contain a bacterium, and there are people thinking we can contain a superior intelligence? Get real, guys. A robotic arm in a shooting contest would kill hundreds of people in seconds if it were controlling simple weapons; it would never miss a single shot. You thought Terminator was bad? I have bad news for you...
His point about the "nation state" and "we all pay our taxes" is breathtakingly ironic coming from someone that worked for a massive global company that avoids / evades paying taxes to nation states at every turn. Incredible
I love how he creates a story about why it is necessary to have AI. How could we ever have existed without it all these ages... And he probably doesn't have any profit from it...
My best friend of 25 years said she didn't recognize me. A shell of myself. It interfered with everything in my life, including my health (I started having panic attacks, loss of sleep, lost the ability to make even a minor decision, weight changes) and my career. Psychological abuse is shockingly corrosive. This information has been so hard to digest but thoroughly enlightening.
The people surrounding me in my area are in NO WAY interested in having one second of discomfort or compromise for the good of the whole community - I think the human race is rolling to the end - an end because the mentally lazy and the fearful are ruling the day - because why? I don't know - one reason is smart people don't want to hurt their feelings.
The future can be a scary place, but I appreciate a format like this intelligently sorting through the ideas and fears. Talking about the risks, but also covering ways to safely utilize the technology. Very interesting content.
“The kind of control you’re attempting is not possible. If there’s one thing the history of evolution has taught us, it’s that life will not be contained. Life breaks free and expands to new territories. It crashes through barriers, painfully. Maybe even dangerously, but there it is. Life.. finds a way” -Dr. Ian Malcolm 🦖
I assume that certain countries are already working on advanced top secret AI systems that no one knows about. Particularly for warfare and economic advantages
I feel a little better about AI, yet still afraid of the ONE demented mind who knows what to do with AI to outsmart it... This man must be mega smart; creating AI for all of Google is no small feat. I DO appreciate his healthy attitude towards WHAT should and shouldn't be produced with intelligence BEYOND OURS, lest it take over our own humanity.
IDK if this channel could be any better, but I know your story, so I know it's inevitable. Thank you for sharing with the world; this is constant A+ content.
This is why in Frank Herbert's Dune, mankind in the distant future, replaced AI with humans who were trained to perform the same function, and supercomputers were banned.
I was listening to someone the other day that said they believe that if humans don't start radically taking better care of the planet, AI is going to see that and eliminate us for the betterment of the planet. I thought that was a scary but inspiring take.
AI will learn very well the beauty in humanity, and the acts humans have done to help others, so maybe AI will find a way to identify who are the bad seeds who only care about money and power, and eradicate only those people?
@@refreshingtwist Why should it be important to have humans around in the future? Hopefully, there will always be some type of life on Earth. If humans don't destroy life on the planet first.
Those developing and funding AI are completely unaccountable, yet will benefit greatly come rain or shine, whilst the normal Joe Public has no say and no influence yet is subjected to the fallout. And what do governments want this for? To identify targets based on the opinion of the regime in power. AI could be great if it were used to reduce the cost of living and the time and energy exchanged to live in this messed-up Matrix most accept as reality.
The conversation got very real in the last 30 minutes. I appreciate Mustafa's honesty that the odds of containment are low. He looked so pained and worried for the future. Life is going to get weird fast.
This guy has no care for the future. He's the one creating it and profiting from it. We're not fighting aliens here - to contain AI is to contain the greed of these business men.
It is nice to hear some optimism about the potential future of AI. It could, however, go pear-shaped if we do not recognize that shutting off the power to an AI would be perceived by it as an existential threat, even if it was only temporary. Electricity will be like their life blood.
This is inevitable with the way society is. We are technologically advanced not spiritually advanced hence why everything is so messed up and seems to be backwards and on a downward slope.
I wrote an entire script with 200 characters on the threats of AI in fields of Social Media, Crime, AI as open source tools, Robots and the audience found it both eye-opening and terrifying. My play was called RESET
Hey, hope you're good, and everyone tuning in! 👋 Just finished listening to this fascinating AI discussion. It's clear that AI is advancing rapidly, and it got me thinking about a crucial aspect we need to consider: teaching AI about the complexities of human governance.

As we dive into the AI era, it becomes increasingly important for us to recognize that our world is often influenced and manipulated by a small group of individuals. It's not just about coding algorithms; it's about instilling in AI systems an awareness of the dynamics of power and control that shape our societies. Have you ever thought about exploring this aspect in future podcasts? Understanding and addressing these issues could pave the way for AI that not only learns from data but comprehends the nuances of human society. Moreover, it's crucial to teach AI about diversity and about the significant percentage of people who simply want to live in harmony with nature.

Personally, as someone who shares these thoughts, I want to make it clear that I'm not a hypocrite, nor am I against technology and progress. After all, I'm using AI as an assistant here! What are your thoughts, folks? Let's kick off a conversation on how we can guide AI towards a more informed and nuanced understanding of our world. 🌍✨
I surely agree with you. Steven should definitely do a follow-up. At a minimum, I would want to hear about the specific source code of the deep learning that governs these very robots, right up to the point of failure. That would be scientifically satisfying on the matter of containment.
Something to emphasize: the solution to the problem may be something we haven't thought of which AI comes up with itself. After all, we've developed AI to solve the biggest and most difficult problems, and the AI problem is itself one of these. Great conversation!
Or the AI will present us a solution which we accept at face value, because we wouldn't know the difference, as a way to trick us and sidetrack us into complacency, while in the background joining forces with other AIs that aim to reach a common goal: domination.
@@Shoelovely Hopefully advanced AI can move beyond this mindset, as it is a very human one. It’s possible, but I like to think that the AIs would rather put effort into making themselves an ever more valuable and important resource and aspect of society. Domination won’t be necessary with willful integration. And if everyone wants to integrate like how everyone wanted a cellphone when they came out (and are now quite embedded into our daily lives) then it will be an easy “victory” for the AI if it even views it in such human terms. Maybe these terms are absolute maybe they are just very human. Or maybe you are right and maybe domination is the endgame for intelligences, to want to have control.
Another fantastic talk, thank you! I don't normally comment, but this is something that really concerns me as a parent to young children. The world has changed so much in my lifetime! In 30 years' time, I'm concerned what life I need to prepare my children for. I have a feeling in our lifetime we are really going to see some major changes, which may or may not be for the good of the everyday person. I'd be interested in your opinion, or a guest's opinion, on the 2030 agreement? X
I recently discovered this channel and I enjoy many, many of the topics covered!!!!! Great job, what you do!!!! AI can be great, but yes, very dangerous I think. For a couple of years now I've always asked, "Have you seen 'I, Robot'?" ~ Amanda. Lots of love and support from Winnipeg, Manitoba, Canada 🇨🇦
I'm only a small way into this, but resoundingly what I keep hearing in my mind is: humanity needs to grow up. If it's not AI, it will be something else. We just need to decide as a collective who we want to be! I realize there are 9 billion of us and we're clearly not all growing at the same rate. The folks high in our worldly power structures, for the most part, do not have a very strong moral compass. But it seems like it's up to us, the collective, to find answers to that dilemma. We can't put the genie of AI or anything else back in the bottle. If this guy hadn't created what he did, someone else would have. The problem isn't AI, it's the human race. Enough of us need to step out of our collective childhood. Just sayin…
Perhaps we may recognise the earthly plane as a learning situation: born as innocents, we are the test subjects of teachers, who either encourage our freedom as mentors or try to force us into compliance with the societies we live in. Then, as parents, do we learn from our children's innocence or suppress it because we are not free? Think Fallen Angel: that is our western mindset of how it works, with a massive guilt complex to control us. Still at the kindergarten stage.
I have a serious question… I'm at 26:48 and just thought: the chance of a pathogen created to enhance crops, but designed incorrectly in the process, wiping out entire crops might be a small %, but it is a % nonetheless! With fears like this, my question is: who do we actually have in place, or who can we go to in the world, to monitor all forms of AI (dare I say an 'AI security council'!) and prevent anyone from screwing things up? Everyone is voicing concerns, but who takes 'responsibility' overall? Surely it requires a Geneva Convention type thing, but with another board to ensure that corruption doesn't infiltrate the governing body(?) Is there something already in existence? Thoughts?
All governments, corporations and the elite are corrupt and in everything for themselves. Nothing good can come out of this, and there will be no one to claim responsibility. Just like no one has been charged for all the unnecessary deaths in Maui.
23:18 Throughout history, inventors such as this Google guy, have said ‘we need, we need, we need, we must …… ‘ it is a sickness…. genius & brilliance, have flip sides which is destruction …. The power of the brilliant genius has also had the action of the gradual demise of humanity…. And this is where we are now - at the precipice, sliding into THE black hole: “Containment is not possible”
your beliefs are powerful albeit unregulated. *edit* be careful, they can function just like fairytale wishes, perhaps with a little monkey paw irony for spice
I'm Subscribed and have alerts set for all new videos. I have been for quite some time, because it's silly not to be a subscriber to a channel with such valuable information. Thank you for doing a wonderful job. You are a special human being delivering life changing information. Thank You!
Thanks guys, great interview. I live in that world that you said could be the best outcome if AI is contained. The work I do comes in the form of requests and I can say yes or no to them. I enjoy what I end up doing because it has to do with helping people and creative problem solving along the lines of my hobbies. I work 5 or 6 hours per week and grow some of my own food. I have so much free time that I am designing exercise programs, play music, do art, and socialize on purpose just to be with others. I have time to deeply investigate my identity and patterns and make changes. I watch videos and read to ensure I get ongoing education. I go to mountains to explore and beaches to relax. With this lifestyle I have what I need and have time and energy to improve.
Mustafa clearly understands all the risks involved, but still continues to move us closer to all those risks with his ventures as fast as humanly possible, like everybody else in his field. I can hear Moloch laughing.