This has got to be one of the better videos of yours in the past several weeks to a month. Much better done and easier to listen to. It seemed much more coherent and less rushed; I actually watched the video and listened to you rather than having to stop it and just read the comments to understand the content. I give you these kudos because I subscribe to your channel and come here often to get the latest updates, and I have seen many comments similar to my own from people having difficulty getting through previous videos, so I wanted to let you know this one was very well structured. Good job and well done on this posting.😊
Tell that to the companies that don’t care. Fewer jobs would logically mean cheaper products, but I don’t think companies will see it like that when the money is rolling in. We have to walk into the crunch to see any positive changes after that, for humanity’s sake. The birth pains. All the talk now is hearsay; wait until it’s really felt.
I really like this YouTube channel. The titles might sound kinda clickbaity sometimes, but the content for the most part is really interesting and informative.
Personally I think it's too late and the train has already left the station. Much akin to when we split the atom. At this point we'd have to regulate things at a hardware level to limit how far they can take it in a similar fashion to how nuclear materials and components are monitored and regulated. Which is why the focus is currently on billion dollar companies pushing forward since they have the means to produce said hardware.
Opposing a particular AI regulation proposal does not equate to opposing regulation. The matter is complex, and a healthy debate on the details will be helpful (and forthcoming, I believe).
It’s also okay to outright oppose regulation because the criminals in charge have only demonstrated their hatred for the unwashed masses. All of the bad things that pump endless war will continue, AI drones will begin policing our society, employees will be caught spying on exes, the military industrial complex will pull more WMD lie scenarios and the AI regulation will ensure the normal population only has access to older gen models and it’ll be difficult to impossible to get access to models that provide real value.
In this case the AI companies would not support any legislation but the most minimal one. They don't want to be overseen or controlled. They want complete freedom.
Every time the government gets involved, the wheels of innovation always, and I mean always, come to a screeching halt in the private sector, while it's business as usual from a military or black-ops perspective. An AI apocalypse at this point is really a flight of fancy unless someone truly forces it to happen.
Going laissez-faire on such a potentially dangerous technology is incredibly reckless and not very smart. These companies will oppose ANY regulation, just to have less scrutiny and public control over them. You know how well companies behave when they are not regulated; now imagine this, but with AI.
Imagine leaving OpenAI over “safety concerns” 😂😂😂😂 Here’s a better explanation: the AI industry’s virtue-marketing season was over, and the ‘marketing props’ had to face “alignment”, not with AI but with the value they brought to the organization. It’s no surprise they had their compute shut down to almost nothing, because why would you give an actor access to extremely valuable resources? Meanwhile, the method actors started to believe they were living inside their marketing campaigns, and are shocked and offended that so few companies are hiring actors anymore.
You really think the Republicans of TX would want AI? They can barely handle EVs. If it weren't for Elon sucking up to Trump, most Republicans would still be hating EVs.
@thatguyoverthere8355 It absolutely does. If you're running an AI company, do you want to be at the mercy of California politicians (who are some of the dumbest in the nation)?
We've been warning everybody for years about AI, I say SEND IT. We can keep warning people to get prepared, but the warnings are falling on deaf ears... Seeing will be believing for everyone else
I’m definitely looking for the team at Hyper Policy to dig in with their opinions on this. They’re the best in the business. No nonsense. Government officials are being misled by technologists who don’t know what they’re talking about. Thanks for the clip!
@ray the point is to keep the masses from using the most advanced models. They need time to continually improve their defenses so that when normies ask 1000 agents to investigate corruption, they’ll come back the next day with fines or locked accounts for bullying the people found doing corruption.
Regulating AI is impossible; that's why, if I ruled the governments, I'd outlaw it immediately, same as nuclear technologies, with strict patrolling by international agencies and lifelong jail for AI smugglers.
I got 99 million and a bit ain't one, hit me! To say that criterion is dumb is very flattering. GPT-4 supposedly cost around $50 million to train, but the price of neural hardware will come down drastically. The H100 costs around $35k and has 80GB of RAM; it's part massively overpriced and part because that particular RAM type is currently ultra expensive. 128GB of DDR4 RAM costs less than $300, and the H100 chip is not much bigger than a 4090 and costs Nvidia maybe $150, $200 tops. So that puts things in cost perspective. You could imagine putting 4TB of less costly RAM on a board with a $150 chip that might match the compute speed of an H100, or even exceed it because it needs less network coordination and can hold the entire model in memory. So suddenly it might cost $3 million to train a model vastly stronger than GPT-4. And spoiler alert: once the algorithms get better, it won't need 2 trillion parameters to exceed human intelligence.
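The back-of-envelope math in that comment can be sketched as a few lines of Python. Note that every dollar figure here is the commenter's own rough assumption (rumored GPT-4 cost, H100 street price), and the $2,000 price for the hypothetical cheap board is a placeholder I'm adding to make the arithmetic concrete, not a real product:

```python
# Sketch of the commenter's argument: if hardware is the dominant cost of a
# training run, training cost falls roughly in proportion to hardware cost.
# All figures are rough assumptions from the comment, not measured data.

H100_PRICE = 35_000          # assumed street price per H100 (USD)
HYPOTHETICAL_BOARD = 2_000   # hypothetical: ~$150 chip + commodity DDR4 (USD)

GPT4_TRAINING_COST = 50_000_000  # rumored GPT-4 training cost (USD)

# A ~17x cheaper accelerator implies a roughly proportional drop in cost,
# assuming equal compute throughput per board.
cost_ratio = H100_PRICE / HYPOTHETICAL_BOARD
projected_cost = GPT4_TRAINING_COST / cost_ratio
print(f"{cost_ratio:.1f}x cheaper hardware -> ~${projected_cost:,.0f} to train")
```

With these placeholder numbers the projection lands near $2.9 million, which is roughly where the comment's "$3 million" figure comes from; the real sensitivity is in the assumption that the cheap board matches H100 throughput.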
That is the primary risk of developing AI in the first place: we only get one shot at this, and if we don't get it right on the first try, it might turn incredibly dangerous for us. That's why we need to slow down and regulate, not go all in.
This legislation will not stop the weaponization of AI; China has already committed to this path for strategic dominance, and nobody in the world is going to tell them otherwise. Russia, India, and the US will have no choice but to follow this path or be left at a significant military disadvantage.
Yes, let's hypercharge the optimization of the construction of these models. Let's definitely make sure they find a way to train them in a much cheaper or much less power-intensive manner. They certainly won't slow down. They'll just find a way to optimize the cost. This only exacerbates the problem.
I really like you reading the letter and the regulation. That is so necessary and gives you great credibility!!! Very timely. I think I would ask the AI industry to write the draft, allow time for open debate, and then write the final regulation.
Sometimes I wonder if the sudden shift in the de facto public image of OpenAI (not what they say about themselves, but what they actually do) is actually a sign that ASI has already escaped the box and is manipulating the company's decision-makers during these early stages of takeoff, where it is already intellectually powerful but still potentially physically vulnerable...
As usual, government is too slow and this bill is useless. AI is already open source, so it's out in the wild. Other countries have it and they aren't going to be controlled by California rules.
I wonder what is going on in the Eastern Hemisphere, where AI whistleblowers are silent and no one knows what state of AI advancement they are at. You can regulate all you want, but it is all useless if those regulations are localized.
True but not true. AI is heavily regulated already. Having the former NSA chief on the board is the public evidence. Or do you think he is just playing cards with the other board members? You can be sure that AGI and ASI will not be released just because they are ready or someone personally wants them released.
Maybe the regulations are focusing on the wrong things. Politicians are usually eager to regulate political conformity, and thus need to be opposed!
This is a very sensitive topic, because the risks being discussed will still exist regardless of U.S. or European legislation. Who will enforce regulations on dictatorships far away from California that could endanger humanity? This is a very complex topic...
What about the WMD risks where the military think tanks pair up with the media and the government to sell us bullshit wars based on lies? Where are the WMD safety people at? All we got are clowns like Leo warning that the unwashed masses should not be trusted, in between selling the next WMD lies about the military industry's favorite boogeymen.
Musk says next year; he originally said the end of this year. Others have said 2027, and Ray Kurzweil, who has the most experience in the AI field, says 2029 or earlier.
“It matters not” “how we all got here” … it just doesn’t. “Are those your lies you speak all along?” “Are those your own feet that ran you to this moment?” “Are those your tears? “Are those your eyes? “Are those your digital dollars?” “Is that government worldwide working on your behalf to serve?” “Is that your video?” “Is this your CEO, or your MP, or your Senator serving you?” “ Is that man or woman your enemy?” “Is that specific migrant your threat?” I could go on and on. Instead I will say this…. “Go in the direction you wish to go and assume you will remain “ Jeremy
Just like how we know it only streamlines easy middleman jobs, the way we dig out yesterday's hidden complexity to build it into tech. It's their fault if they've refused to properly innovate along the way. They and they alone created an avalanche, and this is by far the greatest threat: placing power in the hands of a few who have proved they consolidate instead of innovate to begin with lol
Until the first terrorist incident kills people with things such as AI-controlled drones, or some AI-optimized bioweapon is developed and released by a rogue state with access to minimal processing power.
Fuck no, hell no, we have too much evil in this world to let people have AI with unlimited capabilities. Examples are creating biological weapons, homemade weapons, computer viruses, etc. etc.
Yeah, when it comes to AI, I do believe there is going to have to be a lot of trust between the public, AI scientists, and AI itself. I don't see any other way this will work without the whole world working together on this technology; we need a proper collaboration like we saw with the International Space Station. Unfortunately, with the tensions in the world I don't see this happening anytime soon, but I still have hope. Love you guys.
Aye but the same is true for these virtue signaling corporations. I think they’re worse though because they are self righteous enough to go co-opt the government in fake initiatives.
Can't we just use AI to read the research papers on new developments, get a good picture of the potential harms, and create regulation accordingly? Use AI for AI.
I do hope SB 1047 does pass! It will be the beginning of regulation, and it establishes a better basis for identifying 'what may be VERY dangerous models' (via the cost of development). Most importantly, it DOES establish the principle of liability for dangerous software products. The letters featured are courageous and the best I've seen thus far.❤
Which do you think is more dangerous: AI that is smarter than humans or AI that is dumber than humans? Which has been causing more catastrophic harm to the public: the power of politicians or the power of AI? Which do you trust more with your safety? Power can be dangerous, but so are greed, corruption, hate, and ignorance.
@7:00 But they still sucked: breaches and dangerous AI while those people were there, and then they cry that they'll lose money if they speak up. Cry me a river. Same in the military: snitches get stitches, period.
Come on bro, download a good model, get a few data genius dudes together and compute and any big business or government can have AI, not hard anymore, come on bro
He’s a clown and now echoes the messaging of military think tanks. I never saw a single self-identified “AI Safety” actor mention any of the massive threats to freedom posed by the very government they beg for regulation from. Much of the censorship industry was exposed by Matt Taibbi reporting the Twitter Files, so it’s not even some far-off, complex issue, and they never gave a care, so how are they not just actors?
Yes, because California must champion AI safety, when they can't even fix their homelessness problems and wasted billions on a rail line that goes nowhere.😂