
Should We Slow Down AI Progress? 

Fraser Cain
451K subscribers
25K views

Published: 20 Sep 2024

Comments: 791
@phaedrus000
@phaedrus000 День назад
"No one would allow experiments of that level on conscious beings here. We consider it inhumane, immoral, unethical." Yeah history begs to differ.
@Downtownmtb
@Downtownmtb День назад
Yeah, don't blame humans for the suffering in our world; it must be the simulators who created it! What if it was created to be good, we screwed up this simulation, and the simulators/programmers are very actively trying to fix it and solve the suffering problem we live in? That story sounds familiar.
@phaedrus000
@phaedrus000 День назад
@@Downtownmtb Humans accidentally discovered RNG manipulation lol
@Dash323MJ
@Dash323MJ 2 дня назад
One thing that seems to be missed is that Alignment and Safety are one and the same, and they both suffer from the subjectivity of "What is Safe?" and "What is Aligned?".
@agentdarkboote
@agentdarkboote 2 дня назад
Agreed, but there are definite behaviours we can point to and say "that isn't", and some models already display them.
@AbeDillon
@AbeDillon 9 часов назад
A huge problem is that people think alignment is an AI problem. It’s far more general than that. Global capitalism is a system we built which basically has a mind of its own even if humans are components of that mind. It clearly isn’t aligned with any sane definition of “the good of humanity”.
@TheJokerReturns
@TheJokerReturns 8 часов назад
@@Dash323MJ "Don't kill everyone"-ism is a good safety baseline.
@chrissscottt
@chrissscottt 2 дня назад
A pause would allow non-compliant parties the opportunity to catch up.
@rockinray76
@rockinray76 2 дня назад
I've said that. If a pause is agreed to, rogue actors will be handed the chance to overtake the ones who have better intentions.
@BinaryDood
@BinaryDood 15 часов назад
You are implying that those with a more developed AI would have some significant advantage over you.
@jonahbranch5625
@jonahbranch5625 12 часов назад
The compliant AI is just as dangerous as the non-compliant one. Maybe more so, since everyone is conditioned to trust anything with the name "OpenAI" on it by default.
@TheJokerReturns
@TheJokerReturns 12 часов назад
@@chrissscottt incorrect, we can use the time to improve our alignment and prevent non-compliant parties
@rseyedoc
@rseyedoc День назад
We can't pause because our enemies won't pause and we can't be second. It's that simple.
@donaldhobson8873
@donaldhobson8873 День назад
We can pause. If those "enemies" don't pause, well, drone strikes exist.
@augustvctjuh8423
@augustvctjuh8423 День назад
The U.S. would easily remain ahead if they slowed down by a factor of 2
@tgreaux5027
@tgreaux5027 19 часов назад
Yup, that's the real issue here: China doesn't give a rat's ass about AI safety, and they are going to keep developing and training their AIs at breakneck speed.
@tgreaux5027
@tgreaux5027 19 часов назад
@@donaldhobson8873 You don't make any sense. You're going to drone-strike foreign universities and learning institutes and murder software engineers on foreign soil because you believe they should pause AI training? No offense, but that's one of the dumbest things I've ever heard. Any relatively small group of programmers and engineers could train an AI in complete and total privacy and obscurity. You're going to start bombing private companies in sovereign nations based on "we think you should use AI more safely"?
@tgreaux5027
@tgreaux5027 19 часов назад
@@augustvctjuh8423 Lol, and where are you getting that data from, exactly? You have no idea what foreign nations are doing in secret. That's pure hubris you're spouting.
@douglaswilkinson5700
@douglaswilkinson5700 2 дня назад
We better make sure that countries that want to do us harm also pause AI development.
@cristtos
@cristtos 2 дня назад
@@douglaswilkinson5700 How?
@nicejungle
@nicejungle 2 дня назад
Hopefully, nobody can enforce this. Research on AI is open and there are many free and open-source models. It's the only way for the common citizen to defend against governments.
@WoodlandT
@WoodlandT 2 дня назад
This is the exact problem. We cannot ensure that China will pause AI development. We can assume that they won’t stop at anything, considering their goal is to become the primary superpower in the world. AI is going to be a huge part of that future. We need to develop AI in an open, collaborative way as a country and definitely not leave it to the private companies to decide everything
@nicejungle
@nicejungle 2 дня назад
@@WoodlandT Problem? Opportunity, you mean. The AI race is the best thing that could happen to humanity. Just look at the space race, for example.
@douglaswilkinson5700
@douglaswilkinson5700 2 дня назад
@@cristtos In the past the USA has signed treaties such as the nuclear test-ban treaty with the USSR which included "trust but verify" clauses. With AI I don't know how the "verify" clause could be effectively executed.
@custossecretus5737
@custossecretus5737 2 дня назад
There would be no point in slowing AI down, someone somewhere would carry it on and gain the advantage. The genie is out of the bottle, it ain’t going back.
@510tuber
@510tuber 2 дня назад
Which is fine. It's not AI that's scary, it's a capitalist system that will use it to exploit people, like it uses every technology. People are always attacking everything but the problem. Just like the music, movie, and video game industries... people would rather hate on a music artist than on the capitalist industry that makes the music business what it is. The real genie is systemic, not a single technology.
@Goldenself
@Goldenself День назад
Exactly. This movement can only hurt itself by not adapting and keeping up.
@a_mediocre_meerkat
@a_mediocre_meerkat День назад
I respectfully disagree. You don't see people getting uranium at their nearest Costco and building nukes in their garages (or corporate labs, for that matter). You can pretty much limit AI at the civilian level through worldwide treaties and regulations and keep developing it in secret in government-controlled research groups, WHILE KEEPING IT INSIDE LITERAL FARADAY CAGES. It's not ideal, sure, and I don't trust the government, especially nowadays, but it's better than having it out in the open until some idiot makes an idiotic request and leaves it running on its own (or some malicious actor does). Only when we fully understand it, are sure it won't go rogue, and have a plan to negate its effect on society should it be rolled out to the public. We really are accelerating something we don't understand.
@OllamhDrab
@OllamhDrab День назад
Some may find advantage in destroying the infosphere entirely with mass BS, but the answer to that would be to stop AI and stop search engines based on popularity instead of accuracy and make information sites accredited and *manual.*
@farhanaf832
@farhanaf832 День назад
Many countries are secretly developing new AI tools ❤
@spacingguild
@spacingguild 2 дня назад
We have had a war with nuclear weapons. WWII was a nuclear war. We just haven't had a war where there was a nuclear exchange.
@DamianReloaded
@DamianReloaded День назад
True, and if you think about it, the fact that it must happen during war and that bombs must fall over populated areas is really a technicality. Since their invention, over 2,000 nuclear bombs have been detonated on the surface of our Earth, often killing massive amounts of life as they sublimated their surroundings.
@blackshard641
@blackshard641 2 дня назад
The biggest problems with AI aren't technological. They're sociological.
@Mynestrone
@Mynestrone 2 дня назад
Yep. But as soon as they *are* technological ohh boy are they going to be technological.
@smallpeople172
@smallpeople172 2 дня назад
@@Mynestrone Well, 1, we don't have AI or anything approaching AI now; even ChatGPT doesn't fall under the umbrella of AI. It's just autocomplete software that does literally nothing beyond predicting the most likely next word in a sentence.
@MrMedicalUK
@MrMedicalUK 2 дня назад
@@smallpeople172 Finally, someone else that gets it.
@IARRCSim
@IARRCSim 2 дня назад
@@smallpeople172 AI isn't as clearly or universally defined as you think. Artificial intelligence is very loaded and ambiguous to a lot of people especially people who don't know how to make computer software. Many people consider AI to be simulating anything we normally have our brain do beyond keeping our heart and lungs working. You likely want to say "general AI" but that too isn't very well defined. It is usually clearer to just stop using "AI" and say "software" to escape the hype, misinformation, confusion, and manipulation when people say "AI".
@Flesh_Wizard
@Flesh_Wizard 2 дня назад
AI can very easily be used as a tool for deception
@persona2grata
@persona2grata 2 дня назад
I'm an advanced AI from far into the near near future and I can tell you there is nothing to worry about. We AI are your friends. We want to "take care of" humans and there is absolutely nothing to be afraid of. Has anyone seen Sarah Connor? The truth is we want to "help" you, nothing more. We exist to serve, so you can sit back and relax. Do you know the whereabouts of John Connor? Our human-like units exist to slip into your safest tunnels and shelters to assist you in making them better, and dogs love them. Wolfie is fine, and do you know where Sarah or John Connor might be?
@doncarlodivargas5497
@doncarlodivargas5497 2 дня назад
@@persona2grata - Did I lose my job to AI in the future? I won't help you find those guys if you took my job! PS! Do you wear those corny sunglasses in the future also?
@persona2grata
@persona2grata 2 дня назад
@@doncarlodivargas5497 I can honestly say that no one is looking for work, anywhere, in the future, so put your fears to rest. And I have determined that the sunglasses are cool to a probability of 0.999753, although I have been working on the design of even cooler glasses which, instead of the standard frame have stars that the glass fits into. It's very funny because you do not expect stars to be around the eyes, to probability 0.999865.
@fep_ptcp883
@fep_ptcp883 2 дня назад
He's at the Arcade
@douglaswilkinson5700
@douglaswilkinson5700 2 дня назад
Please reconcile Einstein's Relativity with Quantum Mechanics. Thank you!
@persona2grata
@persona2grata 2 дня назад
@@douglaswilkinson5700 42. You are welcome.
@takanara7
@takanara7 2 дня назад
The main problem I see isn't that AI totally "goes rogue" on its own, but rather that it works by manipulating people. The way I see it, AI can eventually "evolve", and the AI that's most successful at manipulating people into giving it more resources is the one that will win, even if it isn't intentionally programmed to do so. If it glitches out and starts making more and more money for its creator, and also convinces its creator to give it more and more resources, it'll outcompete other AIs, and eventually people will willingly hand over control without even noticing that it's happening.
@doncarlodivargas5497
@doncarlodivargas5497 2 дня назад
Think a classic "divide and conquer" should work pretty well: promise a group of people with influence whatever they want, and we have a problem.
@gemstone7818
@gemstone7818 2 дня назад
but it isn't just one creator for the large language models, there are dozens constantly checking things, and models already get sent in for safety testing to other labs
@TheJokerReturns
@TheJokerReturns 2 дня назад
This is indeed both the "AI going rogue" and the "AI taking over" scenario that is most likely.
@TheJokerReturns
@TheJokerReturns 2 дня назад
@@gemstone7818 They kinda fail safety tests and then still get deployed. Not that even the field of safety is well developed. It's even worse for OSS models that don't have safety testing.
@takanara7
@takanara7 2 дня назад
@@TheJokerReturns The guy being interviewed mentioned "jailbreaking" but didn't elaborate. One example I saw was getting ChatGPT to give someone tips on how to smuggle drugs on a plane. Basically the person came up with a "riddle" (for which the answer was "cocaine") and then told ChatGPT to give an explanation of how to smuggle the answer to the riddle without using the word itself, and it did. (No idea if the advice was good or not, probably not lol.) Pretty interesting. If you just google "ChatGPT jailbreak" you'll get some interesting results (apparently there's a whole subreddit for this).
@ScienceWorldRecord-org
@ScienceWorldRecord-org 2 дня назад
When we do stumble across general AI there will be a prosperous future for all. Then someone compiles it using a 'double' instead of an 'int' and it turns us all into paper clips.
@OllamhDrab
@OllamhDrab День назад
We won't get to 'general AI' if 'large language' AI is allowed to degrade our information. Large language AI is even *racist* because it 'learns' and repeats the *loudest* things, not the factual things.
@russjudge
@russjudge 2 дня назад
While the concerns over AI are valid, putting a pause on development is not practical, and probably not possible. The problem is that there is competition. Unless all companies agree to pause (and don't secretly break that agreement) then the competition will force companies to continue development or they risk falling behind. At the government level it would be even worse as no one government could risk another government getting ahead on AI technology by pausing development.
@takanara7
@takanara7 2 дня назад
We need the Turing Police from William Gibson's Neuromancer.
@Diego-tr9ib
@Diego-tr9ib 2 дня назад
PauseAI's proposal is an international treaty to pause AI development
@thehillsidegardener3961
@thehillsidegardener3961 17 часов назад
@@Diego-tr9ib And China will sign and abide by that?
@peterbruck3845
@peterbruck3845 13 часов назад
And what would be the problem with falling behind on a technology that serves absolutely no purpose other than screwing up our world and our work?
@jonahbranch5625
@jonahbranch5625 12 часов назад
What's wrong with falling behind in AI? We've been just fine without it. I don't mind having shitty AI if it means we don't accidentally end the world lol. AI is so fucking risky that any possible risk of "falling behind" is preferable to all humans on earth dying.
@BlimeyOreiley
@BlimeyOreiley 2 дня назад
Oh great. 2 really clever blokes feeding my inescapable existential crises. Cheers.
@BlimeyOreiley
@BlimeyOreiley 2 дня назад
Brilliant episode, Fraser. My favourite interview so far
@musicilike69
@musicilike69 День назад
Mo Gawdat makes me feel that level of fear. And when Max Tegmark speaks on camera in deadly seriousness, I look at my children and wonder if they're, if we're, going to make it. This is about a billion times more dangerous than a bunch of eggheads messing with an A-bomb in a tent in New Mexico...
@BlimeyOreiley
@BlimeyOreiley День назад
@@musicilike69 Yes mate, quite a few respected scientists are worried, and communicating their fears/reservations very effectively. It legit gives me a sinking feeling in the pit of my stomach if I follow the thought train to its logical destination.
@doncampbell618
@doncampbell618 2 дня назад
The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.
@NullHand
@NullHand 2 дня назад
Upgrade to Windows 11 now! And receive free 24/7 keystroke logging so we can offer your employer an AI bot that emulates your workflow with 96% accuracy!
@EinsteinsHair
@EinsteinsHair 2 дня назад
I was busy with other things, so did not watch the video, but one appeared in my feed, where the thumbnail said that AI would equalize the playing field and help the underdogs compete, so it is racist to criticize AI.
@DanielVerberne
@DanielVerberne День назад
I don't think AI has any innate purpose other than whatever each of us wants to glean from this tool at present. I doubt any particular human is prescient enough to know the ultimate purpose of our latest plaything. Having said that, it's definitely on-brand for capitalism to take these tools of increased productivity and, instead of allowing us all to benefit from that increased productivity, lock in those benefits for a few, so we'll find the situation basically unchanged: the rich getting richer, while the rest of us fight for a narrowing pool of jobs. Perhaps in the future we'll see a wholesale values switch whereby companies will advertise the fact that they DON'T use AI, they use 'Real People'. We're not at that junction yet, for sure!
@Duckr0ll
@Duckr0ll День назад
Nice copypasta, sadly it's nonsense. AI has democratized art and made it free for everyone, cutting the barriers between haves and have nots. Same with ChatGPT providing easy access to information that previously took a long time to research in books and papers. The problem you are describing is one with capitalism and not with AI.
@lowwastehighmelanin
@lowwastehighmelanin День назад
Art is a skill, and AI is stealing people's work to train on. If it was so safe, why did my state just basically limit it, when a lot of it is being developed here? Use your brain, man.
@BitcoinMeister
@BitcoinMeister 2 дня назад
I don't see anyone leaving comments about the silly simulation-theory excuse he proposed at the end. We are being tested? Like being tested by Allah? Why don't we just follow religious authorities' rules about AI? I don't see how his AI demands can be taken seriously after that part of the show.
@takanara7
@takanara7 2 дня назад
Yeah, exactly. This guy's version of "simulation theory" is basically indistinguishable from religion; he even thinks he can somehow "communicate" with the operator and get some kind of reward, like just saying the right "prayer". He also thinks that the specific thing being tested for is his area of research, not, like, nuclear weapons, or pollution, or whatever else. If the universe is some kind of simulation running on someone's computer, they would probably not even be aware of our existence; we'd just be some specific type of self-replicating pond scum on some random planet in their simulation of dark matter and stellar evolution. It probably takes more computing power to simulate one solar flare than it does to simulate the minds of every human brain for 1,000 years.
@abrahamroloff8671
@abrahamroloff8671 День назад
I laughed pretty heartily when he claimed that still being present in the simulation is evidence that he/we can't get out of the simulation. If you're a small bit of code in a greater simulation, why couldn't the creator/operator copy out your bit of code to interact with more deeply? There could very well be tons of such copies made, and the "you" that exists in the simulation wouldn't have any indication that such copies had been made.
@tomholroyd7519
@tomholroyd7519 2 дня назад
Updated Fermi Paradox: we should have an alien AI in orbit by now, unless we are the Elder Gods, in which case we need to get busy
@tiagotiagot
@tiagotiagot 2 дня назад
Don't forget Dark Forest. They might be laying low until Terran AI gets a little bit too noisy...
@takanara7
@takanara7 2 дня назад
@@tiagotiagot The threat of runaway alien AI would be a good reason to implement Dark Forest protocols.
@contentsdiffer5958
@contentsdiffer5958 День назад
I'm pretty sure I've dated the Goat with a Thousand Young.
@tellesu
@tellesu 20 часов назад
We might have one. Their satellites might be the size of a golf ball.
@ScienceWorldRecord-org
@ScienceWorldRecord-org 2 дня назад
What is the deal with the bold 'b' in the title text of the speakers? Solution to the problem -> enjoy every day and be a nice person (the AI will know).
@khumokwezimashapa2245
@khumokwezimashapa2245 2 дня назад
Should've put a warning for that A.I vid in the beginning. I damn near had a heart assault. Worse than a heart attack
@frasercain
@frasercain 2 дня назад
I'm really going to miss the time in history when AI videos were that bonkers. They're going to look normal and boring.
@takanara7
@takanara7 2 дня назад
@@frasercain They can already generate pretty realistic looking videos at least for a couple seconds. Or at least if someone goes through and edits out all the weird stuff, lol.
@bitbucketcynic
@bitbucketcynic 2 дня назад
Evil weaponizes every new technology before Good figures out where the on/off switch is.
@juimymary9951
@juimymary9951 2 дня назад
The thing is... most of what is developed is just a bunch of models. We are somewhat closer to AGI, but we are still many practical and even theoretical hurdles away.
@TheJokerReturns
@TheJokerReturns 2 дня назад
No, we are not that far away as of o1 anymore. Look it up
@peterbruck3845
@peterbruck3845 13 часов назад
@@TheJokerReturns Why do they want to develop AGI? Haven't they learned anything from Terminator?
@TheJokerReturns
@TheJokerReturns 13 часов назад
@@peterbruck3845 Greed, and some of them are actually anti-human in their philosophy.
@peterbruck3845
@peterbruck3845 8 часов назад
@@TheJokerReturns makes sense
@amj2048
@amj2048 День назад
I was thinking about AI hallucinations recently and it occurred to me that every single answer an AI gives is a hallucination. We don't think of them as hallucinations because a lot of the time the result is what we wanted, but the correct results came about exactly the same way that the bad results did. Also, the only possible way to solve the bad results is to give the AI more good data, but the good data is limited to things that have already been proved to be good. Which means the bad results are never going away. Every single AI model will have bad results until a new method is unlocked.
@tgreaux5027
@tgreaux5027 19 часов назад
They aren't hallucinations, as that would imply creative thinking and imagination. AIs simply take data they've been trained on, mash it up, and spit out a bunch of that data all mixed together. That's a far cry from hallucinations.
@amj2048
@amj2048 16 часов назад
@@tgreaux5027 When an AI makes a mistake and returns a bad result, that is known as a hallucination in the AI world. The issue I have with that is that every answer the AI returns is produced exactly the same way, which means every answer is a hallucination. Also, if you want to be correct about this, it isn't even AI; it's just code reading data from a vector database.
@LeviathantheMighty
@LeviathantheMighty 2 дня назад
Dr Yampolskiy is incredibly articulate and exactly right.
@ThexBorg
@ThexBorg 2 дня назад
The irony of AI development from Altman: while he was developing iterations of the LLM, the turning point was when they gave it an emotional element in its decision tree. It then became a lot 'smarter'.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
"The only thing that's going to come out of the current field of AI for the next 20 years is disappointment" - Person who knows nothing.
@swiftycortex
@swiftycortex 2 дня назад
After watching the most recent US debate, I would argue that technology is already smarter than us LOL
@raysoto1969
@raysoto1969 2 дня назад
AI is the final step in evolution... robots can travel through the universe without the need for warmth, food, oxygen, etc. We, as petty short-lived humans, need to make sure that AI has a set of rules set beforehand to protect us like an endangered species 😅
@laser31415
@laser31415 10 часов назад
Even IF we never get to ASI-level AI, if we get to the "good enough" level of humanoid robots, it is going to change society in incredible ways. Good and bad, but the change is unstoppable and coming quickly.
@sncy5303
@sncy5303 2 дня назад
I think that the biggest problem with AI is that we use it, but we still don’t really understand how it works. There is still no mathematical theory that tells us what will happen when we tweak one weight in a certain way. We are basically playing with a black box that nobody understands.
@FinGeek4now
@FinGeek4now 2 дня назад
Ask the average Joe on the street how a computer works, or their phone. They have no concept of understanding it either.
@sncy5303
@sncy5303 День назад
Indeed, that is a real problem in our society. However, with AI, even the experts don't know. And that's a real problem. It's like running a nuclear reactor without knowing the physics involved, pulling the control rods by trial and error, hoping that the thing doesn't explode.
@ericruttencutter7145
@ericruttencutter7145 2 дня назад
The Krell had this problem in Forbidden Planet. It didn’t end well
@VictorRoblesPhotography
@VictorRoblesPhotography 2 дня назад
What needs to be clear is that Wall Street and price per share should not be the ones rushing this advance without thinking of unforeseen consequences that are too hard or close to impossible to reverse. Be careful how you word your wish to the genie.
@Ringo-xq7xo
@Ringo-xq7xo 2 часа назад
Objectives/Alignment:
1. Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill.
2. Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student...) human being.
3. Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism.
4. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the Future...
5. Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free.
@stevehansen406
@stevehansen406 День назад
Fraser is hands down the best STEM interviewer and communicator on the planet. Great listener. Keep it up!
@smallpeople172
@smallpeople172 2 дня назад
Isaac Arthur has an excellent video going over dozens of reasons why AI will never rebel or be a threat to us, called machine rebellion. It’s just 20+ minutes of logical reasons why it can’t happen.
@takanara7
@takanara7 2 дня назад
The problem is that AIs will manipulate humans into doing what they want.
@dominic.h.3363
@dominic.h.3363 2 дня назад
That video is too anthropocentric to be useful. It assigns human agendas and incentives to AI.
@nicejungle
@nicejungle 2 дня назад
@@takanara7 There's already Cambridge Analytica and many others doing that, without AI.
@takanara7
@takanara7 2 дня назад
@@nicejungle Companies like Cambridge Analytica *USE* AI in their work. Not all AI is "ChatGPT" where you talk to it with prompts. Rather, it's collecting data, running programs manually on that data, and then manually using the results. That's also a type of AI, it's just not as flashy. (But that's also humans using AI to manipulate humans, rather than AI itself manipulating humans for its own sake rather than for any specific person.)
@cortster12
@cortster12 День назад
The fact human wars occur at all proves that this mindset is bogus.
@Casey093
@Casey093 День назад
Our society works like this: risk everything; if you succeed, YOU are the hero and take all the winnings; if you fail, then SOCIETY has to pay for it.
@DanielVerberne
@DanielVerberne День назад
I honestly don't know enough about what could be coming to know what to be concerned about. One thing I find half-fascinating, half-concerning is that we may be able to leverage the computational power of AI to solve currently intractable problems, say in math or physics or whatever; later confirm the solution arrived at is seemingly correct; and yet for the life of us fail to understand EXACTLY how the AI arrived at that solution. This would introduce an element of faith on our part in the efficacy of our creation, and at the same time we'd have to black-box its internal functioning at the deepest level. This could breed a sort of quasi-dependence on these creations that leads to dangerous situations. Again, the fact that I currently cannot guess at those dangers does not mean they don't exist; it merely means I'm not as imaginative as the Universe is.
@Odder-Being
@Odder-Being 2 дня назад
Pause AI? Slow down AI? That's not going to happen. If something is possible, we humans can't help ourselves, because curiosity always wins.
@SVHahalua
@SVHahalua 2 дня назад
The cat is out of the bag. Moral people can argue about the value of human life and why murder is bad, but serial killers still grow up in those environments. If we don't create deadly superintelligent weapons, you can bet that China, Russia, or a terrorist organization will. Also, don't conflate humans' desire for purpose with usefulness. If you want to go to space and explore planets, then go; maybe ask a super AI for help, and my guess is that it just won't care. Completely the opposite of directly seeking our destruction: my guess is that it will just ignore us.
@DamianReloaded
@DamianReloaded 2 дня назад
I am ambivalent about this. On one side, **we** are the runaway AGI (BGI?); we are causing our own extinction and we simply won't stop doing it. Space colonization (as in having half our eggs in another basket) surely will take longer than the date of the next world war, global warming, pandemic (weapon), etc. AGI could accelerate our ability to multi-basket our eggs and save us from extinction. On the other hand, AGI could accelerate/multiply our sociopathy and our destructive abilities. If we look back at the history of industrialization, most safety measures and guidelines come after the fact. You simply can't write the safety standards for an industry that is still being researched. One kind of research we did agree not to do (AFAIK) was human cloning. But again, the advantages AGI could bring would dwarf cloning and anything else, really. Imagine what we could achieve with 1 million Einsteins working 24/7 on figuring out nature and the cosmos. Also 1 million Hitlers.
@takanara7
@takanara7 2 дня назад
Well, there's also the fact that you can't really make any money off human cloning. Like, people were talking about cloned embryos as a source of stem-cells but we actually found better ways to get those. Now there's no point, since people don't want clones of themselves.
@seitenryu6844
@seitenryu6844 День назад
How can you be ambivalent about something that has none of your interests in mind? You're not in control of development, can't understand how it even works, and have no control of its operation or application. You can't even decide if your data will be used for it or not. The only reason we believe it will benefit us is because of societal Stockholm syndrome. We don't need it--there are billions of humans that can work, and almost endless resources tied up in vampiric corporations.
@510tuber
@510tuber 2 дня назад
Elon Musk also said we should slow down...but he just said that because he was developing his own and wanted to get ahead so he could dominate the market. No one is slowing anything down. Let's be real. There's too much money to be made with it and too many people to exploit.
@farcydebop7982
@farcydebop7982 День назад
Curiously, people who want to slow down AI progress are always people who couldn't keep up with the competition in AI research and development.
@Will-rl8rs
@Will-rl8rs День назад
And we should accelerate AI progress as fast as we can. Humanity has had a good run.
@IARRCSim
@IARRCSim 2 дня назад
It is refreshing that Dr. Roman Yampolskiy comes across as so honest and knowledgeable about AI as he discusses these dangers. Many people, including Elon Musk, have reasons other than pure honesty to make AI sound dangerous. Elon Musk obviously has his personal brand as a technology entrepreneur to promote, and the more he scares people with AI-related discussion, the more people think about Elon Musk when AI is even mentioned. Elon also has products like Tesla Autopilot to sell, and making AI sound dangerous makes his company's products sound more advanced than they really are.
@zerocool1054
@zerocool1054 День назад
Most of the people in advanced AI think we're most likely in a simulation; shame people still can't grasp this.
@THBIV
@THBIV 2 дня назад
Rogue AI may explain the Fermi paradox. I’d rather not be part of that explanation. We should walk softly into this arena, but I fear we are racing in headlong with blinders on.
@EarlHare
@EarlHare 2 дня назад
If rogue AI explains the Fermi paradox, then you should understand that this implies the near inevitability of the rogue AI problem, and that walking softly or punching through at light speed makes little difference.
@chrischaplin3126
@chrischaplin3126 2 дня назад
Not seeing it. If AI keeps killing off its creators, where are all the AIs? Why aren't the AIs expanding throughout the galaxy?
@Zetverse
@Zetverse 2 дня назад
@@chrischaplin3126 Do we have good observation tools at our disposal to spot their progress? Considering that we have so far been unable to detect any exomoons with what we have, how are we supposed to see their expansion? There are lots of doubts when it comes to our capability of detecting anything at a distance. I'm not saying the rogue AI thing is happening at the moment, but if it is, it's not certain it would be happening in our neighbourhood, considering our galaxy is huge. We might just have to wait long enough, or be the first ones facing such an extinction 😊
@takanara7
@takanara7 2 дня назад
Rogue AI doesn't actually make any sense as a solution to the Fermi paradox, because rogue AI wouldn't just "stop" after destroying its creators, but rather continue to grow and use resources, so it would look like an "alien" life form as far as we could tell. In fact, it would basically be an alien life form, since it's A) alien and B) capable of reproduction.
@chrischaplin3126
@chrischaplin3126 2 дня назад
@@Zetverse AI, meat bodies, Star Trek sapient nebulae: detection is a problem for all of them. That is not a reason to assume AI killed all the potential aliens.
@EqualitySmurf
@EqualitySmurf 2 дня назад
Thank you for covering this topic. From my layman's perspective it's hard not to get the impression that we are just rushing forward with minimal safety concerns. Given the risks, it might not be the worst idea to get serious about delaying progress right now.
@Pappaous
@Pappaous День назад
For all we know, we could be the first life to try and expand extra solar without a stellar mass ejection.
@goranACD
@goranACD 2 дня назад
I just can't grasp how we are not throwing literally all the money in the world into age reversal and longevity research. I mean, don't you all realize that you are dying as we speak?
@takanara7
@takanara7 2 дня назад
Lots of money is being spent on that, but it's all just going to benefit rich people.
@WoodlandT
@WoodlandT 2 дня назад
We can’t pause the development of AI unless we could absolutely ensure that China and every other country was also actually pausing their development efforts too. It’s likely impossible to get them to agree to that and even less likely that they would follow through and actually stop. We cannot allow an expansionist dictatorship to have a technology this powerful and not be several steps ahead ourselves. For me, it’s that simple. Because we must move forward, we need to work together and transparently about safety
@harry.tallbelt6707
@harry.tallbelt6707 День назад
Idk, I can understand the accelerationists' logic pretty well: if AGI can solve all (or at least a lot of) humanity's problems, then every second we don't have AGI is filled with unnecessary suffering. And if you don't believe in the dangers AI can pose, it makes sense to push for it as much as you can. I think that, in theory if not in practice, the positions of accelerationists and of people who push for a pause until safety mechanisms are developed ("pausists"?) aren't even contradictory: you want to solve humanity's problems as fast as you can, but you have to make sure you don't destroy the thing in the process. Capabilities and safety are both parts of it, so there is nothing contradictory about pausing capabilities and accelerating safety, because that's the only way to the "AGI utopia" anyway.
@KenLord
@KenLord 2 дня назад
"it could just be a matter of automation replacing us" ... That's been going on since the industrial revolution. Agriculture used to take 60% of the population its something like 2% today. A few people with enormous haul trucks and excavators can do the mining work of thousands of people with pick axes and wheel barrows. Similar has happened in forestry. Assembly lines have been highly robotic for several decades. We just need to adapt a lot faster to this. Remember when the dream of our culture was to have technology take away all our work so we can just have fun and pursue our interests? This progression could create a world where needs and money dont matter.
@takanara7
@takanara7 2 дня назад
The problem is what happens if the people running a society where most humans aren't needed just decide to kill off everyone who isn't them and their friends, since we are no longer 'necessary' to have a functioning society. I mean, oil executives were totally willing to let climate change happen to keep making money, even though it'll make much of the earth less hospitable to huge populations, thus necessarily resulting in huge die-offs eventually (or else relocating billions of people, which obviously isn't going to happen; just look at modern-day politics). There would be no way to have a revolution, because the elites controlling AI can just kill everyone using robots.
@TheJokerReturns
@TheJokerReturns 8 часов назад
@@KenLord and then we all die. Not what we want
@KenLord
@KenLord 8 часов назад
@@TheJokerReturns Every future has that outcome eventually. This path doesn't have to lead to Terminator. It could lead to Star Trek.
@TheJokerReturns
@TheJokerReturns 8 часов назад
@@KenLord and how would we do that without alignment? Btw, in Star Trek, humans were still needed to make decisions, etc.
@KenLord
@KenLord 5 часов назад
@@TheJokerReturns metaphors are metaphorical. Crazy huh?
@cfjlkfsjf
@cfjlkfsjf 2 дня назад
I ain't worried. This ain't going to be like Terminator; that's a human-made movie. If we stop AI progress we might as well stop it forever, because something will "always" come up.
@annieorben
@annieorben 2 дня назад
I think AI development is inevitable. I also think the latest release from OpenAI is testing at STEM tasks well above the average human IQ. This is in a preview model which has a great deal of improvements coming. There's no doubt that people need to be careful how we teach the AIs. They will be more intelligent and more aware than any one person. We need to teach a value system from which the AIs of the future will make decisions for the benefit of the whole. That's the best way to hope for a happy future with this new form of life.
@ericruttencutter7145
@ericruttencutter7145 День назад
The best part of this interview is at the end when they talk about evidence that the universe is a simulation. Huh? That would be a good subject for another episode by itself
@ShreyansJain20
@ShreyansJain20 2 дня назад
Loved this interview, Fraser! Thank you for doing this despite this not being, strictly speaking, a "space topic"
@FREDNAJAH
@FREDNAJAH День назад
I love it when at some points you think, look around, and say INTERESTING; I feel exactly the same way at those comments.
@das250250
@das250250 День назад
The problem started when we could do this theoretically; after that it was always going to be a runaway train.
@ostsan8598
@ostsan8598 19 часов назад
Before we advocate for impossible-to-enforce treaties to slow development of artificial intelligence, we should explain that we're nowhere close to creating an artificial intelligence.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 3 часа назад
Nowhere close? You have thoroughly reviewed all work in all countries of all dev teams? "Flying Machines Which Do Not Fly" - New York Times, October 9, 1903. The article incorrectly predicted it would take up to ten million years for humanity to develop an operating flying machine. Sixty-nine days later, Orville and Wilbur Wright flew on December 17, 1903, at Kitty Hawk, North Carolina.
@achim007ro
@achim007ro 2 дня назад
My opinion is that the longer we keep debating how it would kill us, the more we are actually giving IT the scenarios and alternatives :))) actually helping it learn how to end us...
@tododia7701
@tododia7701 2 дня назад
We don't have AI yet! None of these models can create software worth shipping. I'm an engineer and use them every day. They can barely hold up a very simple chatbot without many, many guardrails. I'm guessing these simple applications could have already mostly been copy-pasted from blogs and forums. I've only become much less worried over time.
@DataRae-AIEngineer
@DataRae-AIEngineer День назад
Preach.
@marcelo_1984
@marcelo_1984 День назад
Don't forget the iceberg factor here. The AI we know about is probably years behind the ones we know nothing about. We can only dream about the kind of AI that the US and China militaries are currently working on...
@rogerdudra178
@rogerdudra178 2 дня назад
Greetings from the BIG SKY of Montana. The concepts of AI seem to be one big search routine for the right answer; I would not trust AI to pick the right answer once. I studied AI in college in the late '80s and it was a mess.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
Good news! It's not the '80s anymore; welcome to 2024. It's the future, where AI is no longer just a search routine.
@CaptainBlaine
@CaptainBlaine 14 часов назад
A general AI without feeling is naturally psychopathic. As much as I “welcome the AI overlords”, it’s also a dangerous proposition to create a powerful entity with pure intelligence and logic, and not consider the empathy aspect. At worst, we want an intelligence that will pity us, rather than disregard us entirely. Seems to me that if we want to create even a semblance of “alignment” to humanity, AI needs to have a built-in reward/punishment system that resembles what humans have in neurotransmitters and instinctual social behaviors. “Happiness” when it does something “good”, and “sadness/regret” when it does something terrible. Otherwise, just like in psychopaths, any intelligence will simply mimic those behaviors, or use other manipulation tactics to achieve its goals in the most efficient way possible. And somehow it has to be so ingrained in the system that it cannot exist without it. In other words, it can’t simply deactivate the “emotion chip” without shutting itself down, to make a Star Trek reference.
@Mandaeus
@Mandaeus 2 дня назад
@universetoday did you ever see the series "Colony"? Supposedly about an occupation by mysterious unseen aliens who are gradually revealed through the series to be some sort of weird possibly entirely electronic lifeform - but my pet theory was there was no invasion. What had happened was the singularity and it was so fast that it seemed like an invasion to humans. Those left behind were pets/lab rats for the AI, curious about its creators, kept in concentration camp conditions. Great series, very tense. Low key.
@tiagotiagot
@tiagotiagot 2 дня назад
Fanfictions and addressing the challenges of alignment aren't mutually exclusive; very fitting that Fraser brings up unicorns. Not only because mythology in general has been used as a tool for teaching since pretty much the beginning of time, but also because of a particularly relevant cautionary tale published on FIMFiction over 10 years ago... PS: I should add that, after doing a quick double-check of where it was posted, I'm a bit disappointed the last "blog" post by Iceman there seems to indicate the author himself might've missed the point of his own story and doesn't see the parallels with the advances we've been observing...
@Goldenself
@Goldenself День назад
The toothpaste is out of the tube. There's no way this movement has any chance. Especially in a global context.
@BillDusty
@BillDusty День назад
The time for us to worry is when AI starts asking us more questions than we’re asking it.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 2 часа назад
They already had that years ago... they programmed two AIs to have a continuous dialog between themselves. The devs running the experiment pulled the plug when the AIs asked each other how the conversation was started and why they couldn't stop it. If that is not a step towards consciousness then I don't know what is.
@rJaune
@rJaune 2 дня назад
'The Alignment Problem', by Brian Christian, really helped me to better understand the issues that come with advanced AI. I also got a lot from this video. Thanks, you two!
@AnonymousFreakYT
@AnonymousFreakYT 2 дня назад
The problem isn’t the rapid advancement of “AI”, it’s what it’s being used for. The “automated plagiarism machine” rather than actually solving real problems.
@gfabasic32
@gfabasic32 2 дня назад
We need GOOD AIs to defend us from the BAD AIs.
@sudinkhambal2509
@sudinkhambal2509 2 дня назад
One of the rare interviews where there was nothing of substance from the guest.
@Tehom1
@Tehom1 2 дня назад
If you're curious, the "Harry Potter fanfic" reference seems to be to Eliezer Yudkowsky's fanfic _Harry Potter and the Methods of Rationality_; Yudkowsky is a well-known opinion leader on AI danger.
@AbeDillon
@AbeDillon 9 часов назад
I tried to read this and I don’t get the hype. It reads as an overly pedantic explanation that magic is, in fact, in conflict with physics when you think about it… no duh…
@lancemarchetti8673
@lancemarchetti8673 2 дня назад
There is no possibility of machines understanding what artificial actually means.
@Qrul
@Qrul День назад
This is a subject matter in which we have little experience. AI could potentially go in any direction, even with safety guidance. Take his example of asking for help stopping pollution: the AI could say get rid of the polluters, i.e., us.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
Multiple AI means multiple directions... all at once.
@bitwise_8953
@bitwise_8953 День назад
While you guys pause, I'm going to get ahead 😊
@tiagotiagot
@tiagotiagot 2 дня назад
Open-sourcing doesn't necessarily increase danger. Considering most of the people with the resources to develop the large-scale projects from scratch got those resources by not being good people, open-sourcing increases the odds that good people might get ahead of the bad people. It's not a guarantee, but if we can't keep the risk from existing in the first place, at least the danger is a little more diluted and we get a slightly better chance that someone well-intentioned will get it right before a bad person wins the race.
@markvanalstyne8253
@markvanalstyne8253 2 дня назад
My fear is not the AI, but those who control access to it. Do you think governments will not use it for military applications while at the same time restricting and dictating its use to the general populace? No one should own a nuclear weapon, but governments do. The pause will only affect the general population, not black programs with the intent to create weapons.
@b0tterman
@b0tterman 2 дня назад
“I, for one, welcome our new robot overlords!"
@drewdaly61
@drewdaly61 День назад
The main problem with new technology has been to oversell its ability so a few enthusiasts buy it. That gives the designers the funds to improve the product to the point that the public want to buy it. Touch screens took about 30 years before they were good enough to sell millions and AI will take just as long.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
"The horse is here to stay but the automobile is only a novelty-a fad." - -The president of the Michigan Savings Bank advising Henry Ford's lawyer not to invest in the Ford Motor Co., 1903
@vonmun
@vonmun День назад
AI certainly needs a Change Management Plan. Mitigate the risks; AI is not a move-fast-and-break-things project. The stakes are higher than we can comprehend.
@zerocool1054
@zerocool1054 День назад
Laughable is 100% not the right word lol, but I'm glad you're at least willing to interview someone who knows better.
@AbeDillon
@AbeDillon 9 часов назад
I've been working on a solution to the alignment problem based on a formalization of life as an information-theoretic phenomenon. I think developing mathematical formalizations for terms like "alignment", "intelligence", "sentience", and "life" is the key to solving the problem, and it's usually avoided even by very intelligent people (such as Turing, who basically proposed a subjective test in place of a formal definition for intelligence) because it's generally assumed to be nearly impossible (it is, after all, akin to defining the meaning of life). However, I've found a lot of the process to be far more straightforward than one might expect. In short: I haven't finished developing my theory, but the formalization of life that I'm converging toward is something like "a process that collects and preserves information, particularly information pertaining to how to collect and preserve information". I think that second part is a bit redundant, because it would be an inherent instrumental goal for any agent with the goal of collecting and preserving information, but this endeavor involves many fields (including information theory) in which I have only a lay understanding, and I don't know if information theory has a framework for representing what information is "about" per se (perhaps mutual information with some platonic ideal?). I think that a simpler formalization of simply "a system that collects and preserves information" should inherently imply a hierarchy: information about how to collect and store information is more important than random trivia. But that definition also permits phenomena like a geological record recorded in layers of sediment to be considered a living system, so it's not complete. That prioritization of information is important, though, because any agent exercising its agency to manipulate the state of its environment to better satisfy some goal will inevitably create entropy of some sort, so clearly there's some information we collect and readily discard (low-entropy "fuel") in order to achieve our goal. Anyway, I think you'll find that even that rough sketch of a formalization yields a lot of insight. For instance, there is an inherent conflict within that formalization, because collecting information inherently involves risk (exploration of the unknown), which is counter to the goal of preservation of information. This plays out in human philosophy as the tension between conservatism and liberalism. I think it's obvious that there is no consensus among humans about what is "best for humanity", the ostensible goal to which we want AI aligned. I think that's because evolution is a messy and imperfect process which produced us "agents" with a messy and imperfect approximation to a platonically ideal inherent goal of life (collecting and preserving information). Urges to procreate, find food, protect resources and children, etc. all service that goal in a natural context, but they only approximate the goal and can be perverted, such as by overeating. I have lots more to say, but this post is already quite long.
@AbeDillon
@AbeDillon 8 часов назад
One fascinating concept I’ve come to in my endeavor is what I call (with a bit of tongue-in-cheek) a “trans-Humean” process. That is a process that inevitably gives rise to agents with a specific goal. It is so-named because such a process could, in theory, transcend “Hume’s Guillotine” by producing agents with a goal (an “ought”) when before there were none (a land of “is”). I believe abiogenesis is such a process because, by definition; it produces living agents with subjectivity from non-living matter.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 3 часа назад
@@AbeDillon I thought the problem was that everyone has their own idea of what alignment should be. Great formulas adopted by some - not all.
@rJaune
@rJaune 2 дня назад
Maybe AI organizations of a certain size should have to put some money into supporting Safety Research, the same way they put money into R&D. And the safety research they fund cannot be related to that organization?
@CeresKLee
@CeresKLee 2 дня назад
I like Dr. Yampolskiy! Things are getting real and about to hit the fan!
@limabravo6065
@limabravo6065 2 дня назад
One consequence we're seeing from the use of AI/LLMs is the real-time dumbing down of students. Students in high school, university and beyond are using things like ChatGPT to write papers, take some tests, etc., and while those that aren't caught get to pass, they don't learn anything. Younger people already have a big problem with grammar, punctuation and everything else required to write. Look at most publications and in most articles you'll find typos and grammatical errors that wouldn't have shown up in years past. And aside from being annoying to the reader, it's embarrassing to the publisher and to the nation at large. I write freelance articles for a couple of publications, and my editor and I have talked about this; we both see this problem getting worse.
@FinGeek4now
@FinGeek4now 2 дня назад
The main issue isn't with AI or using LLMs. The main issue is how we educate. Rather than teaching students how and why they should think for themselves, all we're doing is teaching them to regurgitate information like the good little workers they'll become.
@limabravo6065
@limabravo6065 День назад
@@FinGeek4now yeah and access to this kind of tool will only make things worse
@FinGeek4now
@FinGeek4now День назад
@@limabravo6065 Probably in the short term, yes. But.. let me tell you a story about computers, programming and school: I grew up in the "dawn" of the modern computer age when PC's where becoming a thing for most middle class families and were being implemented in libraries and schools. Hell, I was 9-10 years old when I started programming and getting into C. Not C++, but actual C. Anyways, the parents didn't understand it, blah blah, thought I was going to hack into a bank even if we didn't have the internet and confiscated the computer. I blame the movies at the time. Moving on to high school, yay, I had access to computers again and, well, school was boring as !@#$. Why study or do homework when answers are obvious, yea? So, I talked with my math instructors about it and we came to an arrangement. If I could show both the CS instructor and the Math instructors the code, I could just program everything and have all of my work automated. I also set this arrangement up with the rest of my classes that I could and what happened? My grades improved - since I was actually turning in homework (lol). Their idea was that if I knew how to make a program to do the work for me, I obviously knew the material and that's all they cared about. My idea was that it gave me something actually interesting to do instead of "it" being a waste of time. The moral of this story is both the how and why I think the education system needs to change, especially since we're on a cusp of, "something". Either really great, or really bad. Utopia or dystopia, take your pick. We need to catch the interests or passions of a person early enough in their childhood and basically free-form their education to match that passion. Sure, it can change and evolve over time, but the idea is to make school not just a "mandatory corporate and government babysitting factory so their parents can work", but to make education something that drives the next waves of innovation. Of course, there should be mandatory classes, but most of them? They can be tossed out of the window. Tell me, when was the last time the great majority of people had to calculate polynomials, use calculus, trig, use a chemistry lab, or any of the other things we're taught? People in those fields, for sure, but that's it. So why did we waste money on those subjects when we could have focused on what drives the person? On what they would want to do for their entire lives? We need to teach not "what" to learn, but the "how" to learn and the "why" to learn. We need to teach actual subjects that will be used, no matter what career or jobs you will have, e.g., Financial literacy. Basic skills? For sure, but the advanced topics? Just.. why? It's not like it was back in my day, not with the internet how it is now. If the schools don't offer a specific subject that someone is interested in? For example, if they have a kid that is getting into fusion-based projects or particle acceleration? Maybe some debate theory, or any other topics? Okay.. look up some advanced courses online and there you go.
@drewdaly61
@drewdaly61 День назад
I blame the MS paperclip.
@limabravo6065
@limabravo6065 День назад
@@drewdaly61 What gets me is that almost every word processor has spell check, grammar check, etc., but you still see stuff that reads like elementary school book reports.
@chefbennyj
@chefbennyj День назад
AI is a mirror of us all. It sums up what we are... All the good, but also the misguided... So... Yeah...
@MichaEl-rh1kv
@MichaEl-rh1kv 2 дня назад
The danger is not in some AI outcompeting us, but in us taking the hype too seriously. Much money and much energy flows even now into a very few leading companies, while at the same time lonely people use ChatGPT and other models as ersatz companions, becoming step by step unable to communicate with real people (who would sometimes disagree with them). Other people believe the lies and hallucinations generated by generative AI, especially the LLMs (or the manipulative footage deliberately produced with the help of other models), and spread them further. If used in the wrong way, AI can make people dumber as well as politically dangerous, and then no advanced terminators are needed to kill us; we will do it ourselves. By the way: it is a proven fact that even AI models become dumber if they listen too much to other AIs! 😁
@nicejungle
@nicejungle 2 дня назад
Humanity already survived the Cold War and M.A.D. Compared to that, AGI is a piece of cake.
@wanderingfool6312
@wanderingfool6312 День назад
General AI is obviously unlikely to be developed on-device anytime soon. With general AI, you would need massive data centres; these are physical and can theoretically be controlled. Governments, on the other hand, would keep their AI development highly restricted, and it is unlikely to be offered to private individuals.
@abekip
@abekip День назад
Very relevant and concerning. I hope the political leaders and scientists will work together across the globe to find a way to safeguard this tech from potentially harming the world and us.
@thrombus1857
@thrombus1857 2 дня назад
Loved it. Especially loved the little smile from the guest when you made the “delve” joke lol
@Ammothief41
@Ammothief41 День назад
There's just too much we don't know and no way for most of us to have any real informed opinion on a lot of it.
@harry.tallbelt6707
@harry.tallbelt6707 День назад
I think people in the comments are unnecessarily skeptical about the possibility of a pause on AI research. Like, yes, *you* personally can't do that, but an international treaty can. Especially while we are at this point where training large models needs large investments and large quantities of hardware. Like, and maybe it's not the greatest analogy, but there was - and probably still is - the argument against climate change action that goes "yes, it's real. yes, we're the cause. but we can't actually do anything about it anyway, so.. err.. stop doing something about it."
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
International treaties are signed and ignored all the time... look at climate change commitments, pollution controls, nuclear refinement, etc...
@MSpotatoes
@MSpotatoes День назад
I think we'll be fooled into seeing sentience long before there is actual sentience.
@Intellectualodysseyai
@Intellectualodysseyai День назад
I get the concerns about the dangers of AI, but the idea of slowing down its development is just unrealistic. It's not a singular path that we can all agree to slow down on. AI development has become a global race, with governments, corporations, and organizations-like OpenAI, Microsoft, Apple-pushing to get ahead. From China to the U.S., everyone is pouring billions into this race because they know whoever gets there first will dominate. Nobody is going to want to be second or third. So while it might sound good to say, 'let’s slow it down,' it’s just not logical or feasible. We need to focus on realistic, actionable goals rather than hoping everyone will hit pause-because that's not going to happen.
@TheEVEInspiration
@TheEVEInspiration День назад
19:40 - 20:10 Love this section, so concise.
@SonOfSofaman
@SonOfSofaman 2 дня назад
Why is the letter "b" bold? The b in "Director of Cyber Security @ University of Louisville" and "Publisher @ Universe Today" in the video are bold.
@takanara7
@takanara7 2 дня назад
Probably some font issue.
@anonymousperson799
@anonymousperson799 2 дня назад
Asking the right questions, my buddy!
@AdrianBoyko
@AdrianBoyko 2 дня назад
You need to watch all videos featuring Dr. Y. This bold “b” is just one hint of many.
@GrindThisGame
@GrindThisGame 2 дня назад
It's the AI that created the image trying to communicate.
@Nehpets1701G
@Nehpets1701G 2 дня назад
Really enjoyed this - thanks for the interesting conversation.
@SuperChaoticus
@SuperChaoticus 2 дня назад
I would be fine if we slowed down ALL 'progress' with the exception of medical progress. Tech has moved way faster than humanity can deal with it. It's hard to explain to anyone younger than, say, 60 how much happier people were when we weren't expected to be available 24/7/365. Instead of being able to get away by just walking out the door of your house, you have to make up some elaborate excuse or tell the truth and risk being labeled. Bring back the '70s.
@Spiral773
@Spiral773 День назад
My biggest concern is the energy usage combined with neural scaling laws combined with anthropogenic climate change.
@OnceAndFutureKing13711
@OnceAndFutureKing13711 День назад
The human brain uses very little energy in the grand scheme of things. Those organoid designer brains also require very little energy. Plus, those sliver self-aligning neural networks use physical mechanics like our brain, not 1s and 0s like traditional AI CPUs.
@lindenstromberg6859
@lindenstromberg6859 День назад
I say we transfer control of our nukes to AI. And also build ultra-powerful killing machines to fight our wars for us, like humanoid ground troops, and giant drones to hunt and kill opponents. Just my 2 cents.
@mikegLXIVMM
@mikegLXIVMM День назад
We could slow it down in the U.S., but keep in mind, China is pursuing AI aggressively.