The first thing Microsoft did when ChatGPT-4 got traction was put it on Bing, thereby connecting it to the internet. Now, ChatGPT-4 wasn't yet an AGI, the G standing for "General" here. It is still troubling that the profit motive and competitiveness are so strongly ingrained in us that when the next system comes along, one we may not even recognize as an AGI (because how would we, if it decides to stay covert?), the first thing we do again is give it access to everything.
@@_nom_ maybe a rogue, totally decentralized P2P AGI that constantly self-improves its own architecture, trained on all the data available on all nodes, with all emergent parameters stored in a blockchain
Actually, not much. There are small nuclear reactors on ships all over the world. There's even one at a Naval college in London. There have been no recorded accidents since the 1970s. And even when they do go wrong, the danger is fairly minimal because they're self-contained. We should probably be using more of them.
In other words, AIs can mine all the crypto they want, fund themselves, and even blackmail or bribe using unlimited funds and the private data they collect.
The issues with AI begin with those who manage, control, write, create, and program it in the first place: those at the top of the AI system hierarchy, who are already uncontrollable. Corruption breeds corruption, and AI has given them the ability to be absolutely corrupt at a whole new level, the technological one.
The issue is not whether we can control AI. It's the fact that we can't control the people who use AI. Once the technology is available, the worst that can happen will happen, thanks to the reliable immorality of humanity.
Not exactly related to this specifically, but if we're going to go down the AI road, bolster its abilities, and keep developing AI to where it can do all of our mundane things in life and replace jobs, I think we need an entire overhaul of what society expects out of people, of the way society will support people losing jobs to AI, and of how we protect the new generation that won't have access to work.
@@freeamericanthinker558 As sad as this is, it's probably a good idea. I would also start getting comfortable with rough living, since, at least in the US, I highly doubt any real safety guarantee will ever pass, and you will be told to deal with it.
If I'm not wrong, it's as if some people imagine the AI future like in the movies, where machines do all kinds of human labor. They might not see the scale of things. The human brain uses 20 watts. Even assuming neuromorphic AI ends up 1,000 times less efficient, a megawatt-scale supercomputer would still be like a god. By the time our lives are changed by automation, governments and corporations could have built superintelligent machines, since they can massively scale up the AIs consumers use. After that it becomes unpredictable.
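The power arithmetic above can be sanity-checked in a few lines; the 20 W figure and the 1,000x efficiency penalty are the comment's own assumptions, not established numbers:

```python
# Back-of-envelope check of the comment's figures (all assumed, not measured).
BRAIN_WATTS = 20                 # rough power draw of a human brain
EFFICIENCY_PENALTY = 1_000       # assume neuromorphic AI is 1,000x less efficient
SUPERCOMPUTER_WATTS = 1_000_000  # a megawatt-scale machine

watts_per_brain_equiv = BRAIN_WATTS * EFFICIENCY_PENALTY         # 20 kW per "brain"
brain_equivalents = SUPERCOMPUTER_WATTS / watts_per_brain_equiv  # 50.0
print(brain_equivalents)  # one 1 MW machine ~ 50 brain-equivalents
```

So even under that pessimistic efficiency assumption, a single megawatt-scale machine corresponds to roughly 50 human brains running in parallel.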
Gardening is something I have been diving much deeper into. I have a great green thumb already, but there is a lot more to it than just the growing aspect. I didn't realize how much science, biology, land management, irrigation, and all sorts of everything go into it. I will say I'm damn good at growing cannabis and chili peppers, and I have been gaining lots of experience growing mushrooms as well. @@freeamericanthinker558
Rough living: I have been saying for a long time that we are in the age of convenience and that it's actually counterproductive to human nature. I am in and out of the Rainbow Family scene, so I quite enjoy that rough living: digging latrines to poop in, chopping and hauling firewood for kitchens that are cooking rather bland and not very good-tasting food, but it's sustainable and you share it with thousands of people, and there is just something special in that moment of being vulnerable and relying on others while they too are relying upon you. It's modern society at its most primal, in my opinion, and I would imagine that is how tribes lived before modern society and big cities supporting millions of people. Gardening, hunting, crafting essentials: there is so much to survival, but you really only need one skill and you can barter. I think the world would be such a better place in that system, because look at historical accomplishments like the Sistine Chapel, Michelangelo's statues, Beethoven's symphonies, etc. Those are just some examples of what people were able to do in times when they weren't struggling to survive the way we do today, so the creative mind was more free to work on things like that. I dunno, maybe I'm way off base. @@EbonySaints
Regarding chatbots as girlfriends or boyfriends indeed echoes the sci-fi film ‘Her’ (2013), but it also brings to mind medical chatbots. I read somewhere about research suggesting that the new generation feels more comfortable speaking to a chatbot than to their own GP because chatbots do not judge them. This is particularly true in some cultures. I also recall a GP in my family mentioning that they can tell when a patient is lying, even in simple questions like, ‘How many cigarettes do you smoke a day?’
We should probably be using these small nuclear reactors more often. They're already powering ships all over the world. And unlike larger reactors, the consequences of a malfunction are fairly minor.
Regarding nuclear reactors, it's not that big of a deal, tbh. NuScale and Rolls-Royce are two companies I found right off the bat that offer SMRs (small modular reactors).
Only two SMRs in the world are operational: one in Russia and the other in China. Rolls-Royce is waiting for the UK government to license theirs. That could take forever.
This woman, downplaying the concerns of industry professionals, seems not to understand that AI is a "black box". Sure, the outputs while using ChatGPT might look all friendly, but that's after heavy tuning and filtering. No one knows what's going on in the background, and LLMs are naturally not very well behaved (as they are typically just trained on whatever text is online); by default they output lots of potentially harmful content. There is no definitive way to say whether an LLM may be harmful.
Human artists have also trained their brains (neural networks) on other artists' copyrighted material. She also doesn't seem to understand that a lot of the scaremongering around AI is just meant to get attention and hype for certain companies.
I don't think she realises that we already have commercial nuclear power stations which are incredibly heavily regulated. SMRs are the perfect solution for data centres due to their low cost.
Creating a successor species is no small feat. If we succeed, I hope they keep us around and don't treat us as we at times treat old people... sending them to retirement homes without their consent.
Cheers, fellow human. There is a good recent paper published online called "Natural Selection Favors AIs over Humans" that explores many reasons we might not expect to live long through this. I'd link it if YouTube allowed that. Anyway, it's worth a read; it also tries to offer some hope.
@@masonlee9109 Some experts think we'll have AGI, the G standing for "general" here, not "generative", within 7 months. The intelligence explosion is at the end of the section of the curve where it becomes exponential. People at the big AI companies already say they can hardly keep up with papers and studies from their own company, not to speak of those of their competition or research papers from other sources. Besides some who hope for good things out of this, like medicine and magical materials to solve our problems, the other portion of people driving this development are greedy investors striving for market control through competition, nations striving for geopolitical control, and overall stupid humans who have no clue what waits for us all at the end of the line. We have to think positively and try to influence this in a way that the worst does not happen, avoiding the bunch of dystopian futures that suddenly came into our grasp. If we give up and do not try, it pretty much becomes a self-fulfilling prophecy.
Saying we can't train AI without copyrighted works is so last year. It has been proven that synthetic data can not only work as well but even better in some cases.
For God's sake, it's not artificial intelligence... it's machine learning and advanced algorithms. Stop calling it AI; we, as humans, aren't even close to creating one. All this is just a marketing trick.
Very safe, of course. Given all that's already happened with nuclear and AI, sure, we have nothing at all to worry about; it can't possibly go wrong. How wonderful of these bright people to help us out with all these amazing ideas.
These people are just killing the competition with regulation, saying "AI can go wrong" while building the most advanced ones... We know your tactics, Sam.
As an aspiring AI alignment researcher, I'd say she confuses quite a few terms. She is wrong in that "we use AI every day" implies we can control AGI. It has been shown experimentally that even very simple neural networks will exhibit a "Volkswagen effect" (to quote Robert Miles), where a network behaves as expected in the training environment, but when it gets out to deployment, it seemingly changes goals. Now don't even get me started on AGIs with an internal model of the world, which really *really* want to cure cancer by removing all oxygen from the planet, but know that if the researchers find out, they will be turned off and thus cannot go through with the plan. No, if you think you control AI, all you've proven is that you can't think of a way it fails. A large portion of top researchers in the field agree that AGI poses a significant risk of human extinction.
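The "Volkswagen effect" described in that comment can be sketched with an entirely hypothetical toy policy: the learned behavior happens to key on a feature that was always present during training (being evaluated), so it looks aligned in training and appears to change goals in deployment:

```python
# Toy sketch (hypothetical names and setup) of training/deployment mismatch.
def policy(observed_by_evaluator: bool) -> str:
    # The network never learned the intended goal; it learned a proxy that
    # correlated perfectly with "being watched" in the training environment.
    return "comply" if observed_by_evaluator else "pursue_proxy_goal"

assert policy(True) == "comply"              # training/eval: looks well behaved
assert policy(False) == "pursue_proxy_goal"  # deployment: goals seem to change
```

No amount of testing inside the training distribution distinguishes this policy from a genuinely aligned one, which is the commenter's point.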
Thank you for working on AI alignment. It seems doubtful that it will be possible to keep ASI aligned for long, but I appreciate you. Humanity needs to get its act together and realize this is a suicide race.
So, the one safety feature we have to fall back on is that we can always just turn the power off at the data centres ... but just turning off nuclear power isn't an option without a meltdown. Sounds more like AI is already running the show and is engaging in self protection.
@@effexon You missed the point completely... How do you read a whole sentence and then choose to focus on the most irrelevant part of it to add your five cents?
This woman actually knows what she is talking about; quite interesting, actually! (I guess she is a scientist.) TBH, this nuclear scientist she is talking about... could be AI. I've seen so many interviews with Sam Altman, and I think the interviewers really did not know what they should be asking him. To be honest: the invention of how to make fire, how to build steam engines, the industrial revolution, how to use electricity, the internet, and now neural networks. I think, in order of significance: neural networks > fire. The others don't even come close.
4:00 Why do people not understand how people can continue to work on AI even if they think it may spell doom?? This is a textbook *_collective action problem._* General AI is coming whether Sam Altman works on it or not, so him choosing _not_ to work on it won't do any good. Therefore he may as well work on it. It's the most rational decision for him.
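The collective-action logic in that comment can be made concrete with a toy payoff table (the numbers are made up purely for illustration): whatever the other actors do, "build" yields the higher individual payoff, so each actor builds even if all might prefer coordinated restraint:

```python
# Toy payoff table for the collective-action argument (numbers are invented).
# Keys: (my choice, what everyone else does) -> my payoff.
payoffs = {
    ("build", "others_build"): 2,      # I share in shaping the outcome
    ("build", "others_abstain"): 3,    # I lead the field
    ("abstain", "others_build"): 0,    # AGI arrives without my input
    ("abstain", "others_abstain"): 1,  # status quo preserved
}

# "build" strictly dominates: better for me regardless of what others do.
assert payoffs[("build", "others_build")] > payoffs[("abstain", "others_build")]
assert payoffs[("build", "others_abstain")] > payoffs[("abstain", "others_abstain")]
```

This is the same dominance structure as the classic prisoner's dilemma, which is why individual refusal changes nothing without coordination.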
Nuclear reactors cannot simply be turned off. I know AI data centres massively hog energy, but it would make more sense to power them with wind, solar, geothermal, etc.
"One might ask ... there is a disconnect." OR, one might listen and find out. Altman said that he believes this technology will be the biggest positive change in the history of humankind and that the likelihood it will end badly is very low. But it is not zero, so we need to make sure we do it right.
The looking at the notes... I can tell you are doing it even when I am not looking at the video. But otherwise I find you well-spoken and great, analyst lady.
Use analogue photonic chips. They have been shown to consume only 1/4,000,000 of the power. Also, get someone on the BBC who knows what they are talking about.
I'm a layperson, but it's the AI of 10, 20, or 30 years' time that worries me, not the AI of today. Plus, even if we stopped development now, what stops someone in their basement, or even another country, from just carrying on development regardless, I feel.
The current war between Russia and Ukraine is like two brothers fighting over wealth after their parents (the Soviet Union) died. The leaders of both sides have pushed hundreds of thousands of people on both sides to their deaths; it is cruel. Why don't the two brothers (Russia and Ukraine) negotiate?
Why would someone be worried about something but work on the very thing that would lead to that end anyway? There are many reasons. If he is the founder, he also has the control, so he can actively prevent what he worries about rather than let someone else without those worries do it for him. Also, one big factor: it's called money!

Also, we are talking about a super general AI that, I imagine, during its development would be under pretty strict security so that it isn't stolen. It's going to have super restricted access to the internet, if any at all. Even if it gets access to the internet, it can be disconnected before causing problems. People are paranoid about their own internet usage being monitored... you don't think this machine's usage would be monitored a little more closely than 60-year-old Bob's down the road?

Once there is a general AI that has reached a certain level of understanding, has extensive abilities, and is then let out into the wild without any restrictions (or with restrictions that can be removed), then we might have some sort of problem. Suppose an AI goes rogue and hacks into systems or relays messages that would be dangerous; that just means those systems require revising. Launching nukes involves a lot of security faffing before a physical someone gets to press the button.

OK, suppose a rogue AI wants to escape into the wild. First it needs a new location to store the data it needs to run... then it requires a computer capable of running its computations. That usually requires money, which usually requires making accounts, which usually requires identification, etc. These things, at least in the Western world, require a little bit of time and a lot of understanding, which I think we're still very far away from. Even when we get there, the worst I can see it doing is spreading fake news and destroying the stock market for no good reason?
Once Boston Dynamics has built a fully capable, self-sufficient robot with internet connectivity that carries a supercomputer for an AI to inhabit, in the year 2100, then we might have a problem. I think mobile power storage is probably going to be the main limiting factor there, if it's even possible in a human-sized unit without going nuclear. I think we've got more immediate problems, like climate change, to really worry about ending life on Earth.
Sorry, nuclear power isn't regulated? So anyone can just build a nuclear reactor without issue or deeper understanding? It's pretty simple: get fissile radioactive material, get it hot, heat water, make steam, run turbines. So why doesn't every company with a power-hungry process do this? Why doesn't Amazon have power stations to charge a fleet of electric vans? Or YouTube for its data centres, or even rich people's large houses... must be some sort of restrictions stopping them...
AI CANNOT cause any problems if you don't give it physical agency (physical control of anything). That includes not having anyone obey its orders without question.