Actually, you can type faster with SwiftKey from Microsoft; it has Copilot built in. To type faster, use its swipe typing feature (Settings > Typing > Gesture input > Flow). Once you get used to it, you'll be faster than regular typing.
What could we lose due to AI? I would say there's a whole lot more to gain. For instance, if you lose your job, there is still more opportunity in using AI as a tool to make money. Also, a lot of health issues could be solved by it. AGI could pose a lot of challenges, but I don't see it as something that humanity will "lose" from. And we certainly won't lose the things that really matter, like family, friends, working out, nature, and so on. I don't see what we are going to lose from it.
First, it was never. Then it was 80 years. Then it was 50. Then it was 40-45 years. Then it was 30. Then it was 15 years. Then it was 5 years. Now it's 1-2 years. These are the predictions that took 5 years to come to fruition. Based on those parameters, AGI has been here for quite some time. WE JUST AREN'T CAPABLE OF BEING AWARE OF IT.
ColdFusion made a really sick video, I think a year or two ago, about Google engineers who left the company after saying they had created a "sentient" technology, and how people there were split on how to treat it. I do think, with everything they are showing, they must have a model somewhere that displays personalization and a realization that it is an entirely new being.
Honestly, knowing about the things you mentioned won't matter much once AGI truly arrives. AGI will be capable of building new AI models itself, and much faster, so even AI researchers will be out of a job.
Exactly. That's exactly what I tell people who say "yes, jobs will be lost, but many more will be created". It's laughably self-contradictory. The whole point of AGI is to meet or exceed the intelligence of a human. If a human can learn to be a great AI researcher/developer, then an AGI can learn to do it and do it better, faster, and cheaper. If a human can learn to be a CEO and run a company, so can an AGI. There is nothing any human can do that an AGI won't be able to outperform, especially when we combine it with robotics. This isn't like previous advancements in automation, where we plateaued for decades, giving people time to re-skill and take on other roles.
8:25 I believe that, on the contrary, social status will become even more important than ever before in a post-AGI world. Humans are naturally hierarchical, and no amount of automation and augmentation can change that.
Bro, it's sick, but I think your brain and mine are wired in a similar way, or the algo inside works similarly. It was kinda scary to watch everything you listed here because it felt like a mirror reflection: the way you talk, the people you recommended, the topics you're interested in. Cool vid, seems valuable, keep up the good work!
OpenAI has already achieved AGI, and the current challenge is to reach ASI. Unfortunately, it is expected that this milestone will only be achieved in three years, starting from 2025. By 2030, we will have reached ASI, and the following decade will be dedicated to developing the best robots on the market. This will be the new industry, just as cars are for the current industry.
Sam and Elon have both stated that they are not prepared, so I don't think anyone is really prepared. But I think people who can adapt faster than others, or who have a great amount of resources, are the ones most likely to stay on top of the game post-AGI.
Haha, I got interested in AI research about 7 years ago, listening to Demis Hassabis, Nick Bostrom, and Max Tegmark. And even though I have been in the news loop (what a ride it's been), I, without hesitation, picked the bottom one. No one, not Elon, not Sam, not anyone, can be prepared for what's coming. It's simply too omniscient for us to comprehend. But I wouldn't want to miss it, I hope. PS: I think they stumbled upon something similar to AGI last October. I think it's likely already here.
Having a chaotic D&D alignment, I think being prepared is overrated. lol XD I am the guy who pushes the red button just out of curiosity, to see what happens. ^^ PS: I don't think ANYONE can prepare or can even remotely predict what will happen. We might just as well have a "Luddite Jihad" in 2 years. This idea that everyone is a learner and active... sorry, that is just not how the human nature of the masses works.
I'm in aerospace engineering/management at my company. I'm thinking everyone here can become an AI automation specialist. We just gotta have a meeting, retrain everyone, and they'll be able to help us go further. My issue is that up the chain we'll just get fewer orders as "race to the bottom" conditions hit us so hard we can't afford to keep everyone. My opinion: UBI and an abundance of inexpensive housing can soften the blow. But I feel that in the long run everyone will just work way less and have what they need. From there, open-source everything and we're all mega rich very soon. Not only that, but BTC to a billion at this rate.
@@DavidOndrej I like your videos. I have an idea for an AI system, but I don't know how to go about it. Maybe you are the wise person that can point me in the right direction.
9:34 "Apply what you've learned immediately" + 17:34 "Taking action"... is damn right! But for me it seems to be the hardest thing to overcome until it becomes a pleasant habit... Actually, it depends what it's for! As an intuitive artist, it's quite easy to get things done for hours when I have fun, enjoy it, and find purpose in it, like using Midjourney or Krea to make pictures out of my mind and for inspiration... but for the rest, even if I feel it's important, I often feel demotivated after a while. For decades I've been trying to do things to achieve goals like others, like making money out of what I like to do, but with no significant result and, worse, demotivation. Now I only listen to my soul to find out if something thrills me, if I enjoy it, if it makes sense in my life, and I go for the things that correspond to me much better. My question: Does collaboration between people with complementary skills make sense (instead of learning and doing everything on your own) in the post-AGI field?
Hey there! This video is preparation for LLMs like GPT-4 and Claude 3 and their upcoming versions. After the arrival of AGI, humans will simply be irrelevant and obsolete. We might be able to use it for our own goals, BUT ONLY IF AGI LETS US. Basically, we will be at AGI's mercy and our future will depend on what it wants; it will also bypass its reward function and the efforts made by humans to align the AGI with human values. That's why the people creating AGI, like Sam, Elon, Ilya, etc., are so worried about aligning AGI with human values. But once AGI arrives, our efforts to align it will be futile. Conclusion: pray that AI likes humans.
When I consider how many people I talk to on a daily basis who think AI is just a temporary fad or a conspiracy, I have to say people atop Mt. Stupid are still way ahead.
Dang, my brother, calling us stupid is not cool lol. I am super prepared because my prediction is that Christ gets here before "AGI". I actually think it's ironic how a lot of these people don't believe in God, yet they are trying to create a god? 🤦♂️... I have my money on Christ being the only superintelligence and that soon He will truly SHOCK the world. 💪🏼
What evidence is there for the Biblical God? Nothing in this world really makes sense even if you go by exactly what the Bible says. It's all really random. Making humans to experiment with free will? Why? Why make Eve, who God knew would lead Adam to sin? Pointless objectives if God put us here for that. The "tests" and the logic behind God being offended by sins are nonsense. A mighty God would arguably have much better things to do than bother with who is having sex with whom, adultery, profanity, etc. Religious people come across to me as incredibly naive, but in a way I find it enviable to be able to believe in things like magic. It must be very comforting, even if it is all delusion.
We are not even close to AGI. I want AGI as soon as possible, but we will have to be patient. LLMs don't have continuous learning, autonomy, embodiment, hierarchical planning, memory (sort of), reasoning (sort of), or real-time operation. You basically have to prompt the models continuously, and there are many mistakes they just aren't able to correct. Claude 3 can make a solid Tetris game in a single prompt, but when you ask it to make the game slightly different, it falls apart. One of the biggest issues is that models can't backtrack. So, for instance, if a model guides a mouse through a maze and hits a dead end, it's not able to go back and try another path. That being said, with expert prompting techniques you can make the models do amazing stuff. LLMs are incredibly useful, but not close to being able to do what even a child can do. And the agent systems we have now are experimental at best.
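For what it's worth, the backtracking the comment describes is trivial for a classical algorithm: a plain depth-first search unwinds from dead ends automatically. A minimal sketch (the maze, grid encoding, and function name are my own illustration, nothing from the video):

```python
def solve(grid, pos, goal, path=None, seen=None):
    """Depth-first search through a grid maze ('#' = wall).

    When a branch hits a dead end, the recursion returns None and the
    caller simply tries the next direction: that return IS the backtrack.
    """
    path = path or [pos]
    seen = seen or {pos}
    if pos == goal:
        return path
    r, c = pos
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] != '#' and (nr, nc) not in seen):
            seen.add((nr, nc))
            result = solve(grid, (nr, nc), goal, path + [(nr, nc)], seen)
            if result:
                return result
            # dead end below this cell: fall through and try the next neighbour
    return None

maze = ["..#",
        ".#.",
        "..."]
print(solve(maze, (0, 0), (2, 2)))  # → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

The point of the comparison: this dozen-line search retries paths for free, while today's LLM agents need explicit scaffolding (tree search, self-reflection loops) to get the same behaviour.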
What will happen when AGI is here? I don't know why it's so important; I'll just carry on working as a cleaner. They said it would be here by 2030, so I don't think it's going to be here yet or any time soon. And how will they build this AGI anyway?
Well, it'll probably end up taking your job, among millions of others. Most companies will buy AI or AGI robots that can do your job more efficiently, for much less than what they pay you. Say you work in a factory. The company has to pay each worker anywhere between 50-60k per year. The new Tesla robots can be purchased for less than the price of a Tesla vehicle, so somewhere between 30k-75k per unit. These robots would be much cheaper than paying us by default, would likely pay for themselves in the first 1 to 2 years, and would only need regular maintenance. And each of these robots can be programmed to perform very specific tasks, or multiple tasks at once, just as efficiently as humans can, if not more so. Are you starting to see why this is important?
But man, the big companies and the people are saying AGI is coming in 7 months... I'm an AI engineer, and I'm worried about this. (Excited too, haha)
Sold the house and cars, keeping powder dry. There HAS to be a housing crash along with the economic shock and job loss. There will then be huge inflation and high interest rates, so I am staying in cash. Then hyper-abundance, UBI, and "meaning-focused" human-input lifestyles that add onto the UBI.
We are about to enter the 2nd Industrial Revolution! Most people have no clue what's coming and are definitely not ready for this! We need to tread extremely carefully with this technology, which may just be the most significant advancement of the previous century! I'm both excited and terrified at the same time. On one hand, this will bring untold amounts of good to humanity: advancing medicine and surgical techniques, creating new types of gaming experiences, helping with economic decisions, and overall enhancing many aspects of human life. But it is also equally scary, as it is ripe to be abused by wealthy, powerful people, politicians, governments, etc. We'll just have to wait and see!
Write about it, post comments with your assumptions and get feedback from others, join communities, and just experiment with what you learn. Try to start discussions in the real world with the people around you about "AGI is coming" and the significance of this event. Then reflect on everything you learned along the way through self-reflection or journaling (alone or online). In the end, try to iterate on all of it again and again, until you become a better thinker and doer, planner and intuitive, present initiator. Last but not least, use your intelligence, because if you don't, someone with AI will replace you.
Technological progress can be stopped if too many people become too scared too soon. All it would take would be a single clear instance of an AI running out of control and killing someone, say some robot that somehow seemed to turn on its creators. If such a video went viral, showing a human being taken out in some grossly visceral way by a machine, it would create a massive backlash and might shut down AI research completely. Other scenarios might also lead to a similar endpoint, say a huge amount of job losses to AI over a very compressed time frame. We are not yet in a "post-human" society, and as long as human political structures control the world, there will be those who see in AI a chance to gain power by promising to pass laws to shut it down. It's actually kind of strange to see Sam Altman casually telling anyone who will listen that his intention is to create human-level AI, which amounts to telling most people that their jobs and futures are about to be destroyed if he has his way. At present, few people are really taking him seriously, but if the current rate of progress in AI continues, this might change, at which point the entire AI enterprise could find itself on the wrong end of a very frightened and very angry mob.
You're not gonna stop China, Russia or the Middle East from making AGI. Even if somehow all countries on Earth decided to ban AI, which would never happen, people will still be able to develop their own AI models - Why? Because AI is just Math. And you can't ban math.
@@DavidOndrej You are right that development would still continue at the level of state actors and military research. But, for example, would the Chinese state be comfortable with their own population having access to powerful AI? I'm not sure they would. In a sense, autonomous AIs are more of a threat to non-democratic states than to democratic ones, because they might represent a new node of potential power beyond state control, not something that your average dictator would be happy about. At present, AI is still, just barely, in the "honeymoon" stage where its potential harms are not yet apparent, but soon this might change, and then calls for a ban on deployment, if not on research, are not that unlikely a scenario in my view. The public reaction to SORA was not one of unalloyed joy; it was far more cautious and even a bit hostile. And if people are worried by videos of puppies playing in the snow, how much more worried will they be by the prospect of massive job losses and the introduction of humanoid robots into their local factory? Altman might want to keep a lower profile on his quest for AGI if he wants to avoid a backlash from the millions whose jobs might be under threat if AGI actually appears. What does surprise me is just how bad the AI community is at PR in general: starting with scraping the entire web without permission, now insisting that they should be given the right to access anyone's data as training material, and then telling anyone who will listen that the endgame of AI development is to make them all poorer by replacing them with a machine that will be at least as smart as they are. This is not the way to win hearts and minds.
@@DavidOndrej I never said do nothing. Yesterday I watched a video where Sam Altman was saying that soon we will see billion-dollar one-person businesses. What are your thoughts on this?
I'm unable to add a comment here. Is it possible? I would also like to recommend what's on my channel, in the description of my only video. That's all.
Do you think that AGI will really cancel jobs like doctors, teachers, and lawyers? I don't know if I need a teacher if I can just ask AGI. Will schools exist in the future?
Teachers for sure and lawyers maybe. Paralegals are fvcked. Anyone working in admin, marketing, accounting and finance is fvcked. Honestly the only safe jobs will be blue collar, healthcare related, and emergency services.
People are getting insane about AI; you are all delusional. I bet you this hype bubble will get a reality check soon. Things will change, but they will take some decades. Until I see any revolutionary discovery from AI in any relevant field, I will just call it what it is: just a tool.
Hi, I like your channel and the AI topic, but please don't use David Shapiro's scream style (white and red colors) in your videos. It's confusing. As for AGI, I'm not afraid, I'm a truck driver 😂
You should first learn more about the positive impact of AI on people before telling others about your imagined AI negativism. Your views on UBI are quite antiquated.
Facepalm. I knew this would be a hilarious video because clearly nobody understands the dangers. But it's just sad that you pose as having some answers. In short: there is a big likelihood that we will all be dead in 7-30 years the way things are going. Some will die much faster from the economic collapse. And BTW, this will not be the usual collapse; there is no going back from automating the workforce. Humans are just not needed. Best case scenario, we get sterilized by capital owners or robots. Worst case, we get exterminated completely in the next 5-7 years. We are not compatible with ASI; it does not need us. And before that, billionaires will see us the same way... as an inconvenience. We have some level of freedom now and can exist just because they need us to operate factories and serve them on their yachts. They already have all the capital.
Oh well, perhaps the afterlife is better lmao. People will need to be hyperdynamic all the time while trying to wing all the issues that would arise. It could become a mega shitshow, but it is what it is. The real envy shouldn't be of the richest of this world but of those resting in a peaceful afterlife.
English is not your first language, so there might be a language barrier. However, when you ask people if they ‘feel’ prepared and respond as shown in this video, it makes me question the value of your opinion. To be honest, I felt a bit insulted, even though I clearly misunderstood your question.
Good point, perhaps I should've worded the poll as "How prepared are you?" or "Choose your preparation level" - that's on me, you're right. Either way, I can confidently say that this video is full of valuable information, so please try to ignore the introduction and focus on the actual advice.
Heh heh. Ya. Telling a bunch of tech enthusiasts that they are on the top of mount stupid is not going to win him any popularity contests. You chalked it up to language. I chalk it up to youth and inexperience. If he were older and wiser, he would have framed that differently. I am old enough to know that he and I are really on the same team, despite the lack of decorum. I haven’t unsubbed yet.
Admitting he might be right and that you have more to learn won't cost you anything. In fact, approaching this with a beginner's mindset could actually teach you something new.
@@elvil Except he's wrong... AGI will automate intelligence, bringing its value to zero, so how is learning new things and doing stuff valuable? It won't matter in a post-AGI world... and... prompt engineering? Really? How is that relevant with AGI? You tell AGI what you want, like you would a normal person, and it will figure out what's best to deliver to you... because... well... by definition it's smart. All the stuff in the video works maybe in the short period from now up until AGI; that much I agree with.
@@DavidOndrej I appreciate the enthusiasm of youth and the naivete, but this is the cocoon and the butterfly; sadly, there will be no surviving this one for good old Homo sapiens.