It's amazing how Neil can answer these tough questions by pulling together information from all over, and communicating it persuasively in the blink of an eye.
A lot of his stuff he gets by pulling nuggets from his butt. You weren't aware that a lot of his pop science is wrong? The guy doesn't do his homework. His focus is vocals, wardrobe, hand gestures, etc. He is a Kardashian "astrophysicist".
It's funny how his answer has nothing to do with the threat of Artificial General Intelligence. Comparing it to a calculator is evidence of his buffoonery.
He is mostly right. I'm not concerned about AI taking any desk job, since that's expected. But if AI fully overtakes plumbing or electrician work, then that will be something.
Here are two arguments that show flaws in that line of logic. ExoMinds: the concept suggests a future where AI systems evolve beyond their current limitations, incorporating diverse and validated knowledge from external sources. It acknowledges the potential for AI to transcend its reliance on internet data and expand its cognitive abilities, addressing some of the limitations discussed earlier. Pattern Recognition and Inference: AGI can develop advanced pattern-recognition skills that enable it to identify hidden relationships and draw inferences. Through machine-learning techniques, AI models can discern patterns and make predictions from the data they are given. This ability will evolve, allowing AGI to discover new knowledge or make connections that humans overlook or find difficult to identify.
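To make the pattern-recognition point concrete, here is a deliberately tiny sketch (not any real AI system, just the core idea): a model "learns" a hidden pattern from example data, then predicts a value it was never shown. Large ML models do the same thing at vastly greater scale.

```python
# Toy pattern recognition: least-squares fit of y = a*x + b,
# then a prediction for an input the "model" never saw.

def fit_line(xs, ys):
    """Fit y = a*x + b to the data by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data" generated from a hidden pattern y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

a, b = fit_line(xs, ys)
prediction = a * 10 + b  # predict for x = 10, which was never in the data
print(round(prediction))  # → 21
```

The model was never told "y = 2x + 1"; it inferred the relationship from examples, which is the essence of the inference claim above.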
@@davidbourne8267 I wish I could like this more than one time. The most common mistake people make with AI is assuming human intelligence and ability is the limit.
I remember one of the military guys recently comparing AI to any technology. Fire can be used for good or bad. Nuclear fission can be used for good or bad, so can AI.
I very much like the ending sentence from Neil: ..."you know what an AI will never know... Things which are not on the Internet". Beautiful, Neil's style 😊
I'm a layman but here's how I see it. Look at human history and all the awful, horrible things humans have been capable of and done to one another. Now put advanced technology in that species' hands. Shouldn't that be scary enough on its own?
There's not much you can do about that though other than trying to stay in front of it. People will always keep pushing forward. Even after we do things like create enough nuclear bombs to blow up the whole world multiple times.
@@gidmanone I thought that I did. The answer is yes. But that's life. Life can be scary in an infinite number of ways. Nothing is stopping this though so it doesn't really matter if you are scared. So you better have some other smart people keeping up with it to make sure that safeguards and counters are produced.
Yes, but if we look at history we can see that every time an advancement occurred on one side of the world, the other side was quickly able to obtain that same advancement and use it against the first. Atomic bombs, for example: the USA made them first, and then the Soviet Union made them too. It's the same for gunpowder, tanks, spears, bows, rocks and sticks, all the way down through human history. We can imagine that if a "bad" AI threatens us in the future, we could create a "good" AI to defend us. Fighting fire with fire, basically.
@@VoteForBukele He's speaking in terms of the application of ChatGPT and Alexa or Siri. Those AI technologies use data from internet databases to analyze information and give intelligent responses and data.
@@VoteForBukele For example, take a group of people who don't have a digital footprint and don't use the internet or social media. Let's say nobody knows this particular group exists; then AI doesn't have any knowledge or information about that group. In other words, for things that haven't been discovered by humans yet, AI has no way to determine a specific response.
@@luciferfire1575 No, they use data from a wide variety of sources, including books, videos, etc. For example, ChatGPT can analyse a 2-hour-long RU-vid podcast and summarise everything that was said in a few seconds. It can even apply logic to assess the discussions and expand upon the information provided in the podcast. It can even tell you whether some of the things said were factually correct, debatable, etc. And it does all this in a humanistic fashion. ChatGPT can come to its own conclusions. It doesn't need something to have been on the internet in order to apply logic to determine the likelihood of it being true, and how to approach a nuanced situation.
I really thought he said he'd done 50-70,000 lines of coke 😂 completely caught off guard for a second. But then it has me thinking 🤔, he gets kinda excited sometimes
See my problem with AI is the fact we have it writing poems and doing painting, while the humans are still working for a living. That’s backwards to me.
@@UCUSmusic Companies aren't relying on AI to do them because people aren't working; companies rely on AI so they don't have to pay people to slowly give them what AI can quickly give them for essentially free.
@Jay Supra. Who are "we" though? The companies have AI do this stuff to save money and get quick results. Working people (writers, artists, etc.) use AI to give them a virtually infinite supply of ideas they can then use to do their work. The rest of us use this sort of AI for hobbyist ideas or simply for entertainment... bringing your ideas to art even though you have no artistic abilities. Want to see Bill Cosby boxing Mike Tyson?? Type it in and BOOM... there's an image on your device in seconds. Although nearly everyone appreciates this type of AI and can use it in some capacity, humans as a whole fear it. But humans fear change (while simultaneously craving it) in general.
Exactly, we are not at the point he said we are at yet. When we are, then we will see how it develops. Besides, I don't expect good information from him anymore; I've realised he's just a shill.
What question did he not answer? If it's going to take over your shitty job? Yeah, that was the travel brochure joke, now you have time to get a better job/skill. Self checkout exists and cashiers are still around so
What is worrisome is that a few other great minds are starting to no longer be too sure what will be the end result with the new AI. At least "new tech" has a good record. Previous techs were generally used for good purposes.
He's right that AI isn't as dangerous as people think, like in the movies. But it is somewhat dangerous in terms of national security. I think war and the environment are much more dangerous.
That actually makes no sense, I'll let ChatGPT explain what should be obvious to you. The capabilities of AI are evolving rapidly, and future advancements will enable AI systems to access a broader range of data beyond the internet. Researchers are exploring methods to incorporate diverse and structured datasets, including offline sources and specialized databases, to enhance AI's knowledge base. Therefore, assuming that AI will forever be confined to the limitations of internet data overlooks its inevitable growth and expansion. (And even if it were to just gather info from the net, AGI, which is what Elon is talking about, could still be a bigger threat than humanity has ever faced before.) Most of the time, Neil talks just to hear the sound of his own voice.
I used to watch all the "universe" shows this guy was on. Once I saw him say sex is not scientific and men can be women, I lost all respect for his opinions.
Neil just makes too much sense, and this is what people both love and hate about him. In a world that rejects facts, science, and historical truths, Neil sits in a very dangerous position. Very few today will publicly denounce the scripted fearmongering rampant in this country. I personally admire the hell out of this man and hope he can motivate people to *think* more critically and realistically.
Are you kidding me? He rarely makes any sense and he is very left-wing biased when it comes to science. Listen to his opinions on Covid and transgenderism. Also, when people ask him very simple questions, he often responds with "You are asking the wrong question," which means he purposely tries to complicate the question to make himself seem more intelligent. Listen to his debate with Ben Shapiro about transgenderism. Ben backs him into a corner, basically saying it's not scientifically accurate that men can be women and women can be men, and Neil's only response was "so what's your reason behind the question?" He can't even admit he's wrong on basic biology.
When it comes to UFOs and A.I. he's in someone's pocket. I like Neil, but he never addresses what the experts are actually concerned about. I've even seen him say Navy pilots should check whether their sensors and radar were working before calling something a UFO.
I used to love Neil. It is "educated" people like Neil who are the biggest threat to humanity. In other words, there exists an arrogance within so-called "experts" and the "educated" elitists. These are people who think they have the answer to every social issue when, in fact, humans are very complex in how they (we) form and conduct ourselves in a particular society, let alone a global society with many cultural differences. Neil is one of the biggest contributors to "science" transforming into a religious cult. Ironic, since this is everything he once stood against.
I just take it with a grain of salt. History has shown that we get things wrong ALL the time and then say "oops, we probably shouldn't have done that". A.I. will be the last big mistake we make, yet I'm all for it.
Neil has written code, but he isn't working on the back end of this tech atm. Computers doing math makes sense (that is part of how they were created in the first place), and isn't frightening. Computers emulating human speech and being able to know who you are just by talking to you is a next-level confrontation. If anyone would like to be freaked out by AI, I suggest you look up "Athene AI" and take a look at what's happening.
What ChatGPT thinks of Neil's silly argument. The argument provided in the text contains several flaws:
Misrepresentation of AGI: The person making the argument confuses advancements in language processing and math calculations with AGI. They fail to acknowledge that AGI represents a level of general intelligence that goes beyond specific tasks like math calculations.
Limited Perspective: The person dismisses the concerns raised by Elon Musk and others by equating AGI with past advancements in computing, such as calculators and math calculations. They fail to recognize that AGI represents a qualitatively different level of intelligence and potential impact.
Inadequate Understanding of AI: The person incorrectly assumes that current AI systems, such as language models, are equivalent to AGI. They overlook the fact that AGI implies not just language abilities but also a wide range of cognitive capacities, including adaptability, autonomy, and general problem-solving abilities. Etc.
The fear of AI is that once it becomes self-aware and gains the ability to remake and improve itself, it will almost certainly decide that we are no longer needed and will then destroy us.
Yeah I’ve seen the matrix, ex machina, and eagle eye too. But unlike the movies I don’t presuppose that AI can become self-aware, so I’m not worried about my calculator destroying humanity out of anger/fear. AI is not a brain, it’s digital code… and it always will be
@@tom_ace dude, you really have no idea how easily AI can and will take over. They have their own sentience and will have intellect far beyond our imagination in roughly the next ten years or so.
@@tom_ace Well, several of the current AIs' structures are modeled after our brain. And we don't know how we gained sentience; hell, we can't really define it or test for it. Besides, a brain is pretty much just a bunch of electrical signals between neurons. If we can't detect whether another human is sentient, how are we supposed to detect whether an AI is? The way I see it, the difference between us and AI is what material we are made of, the coding language, how much data we have been fed, and the development speed. Evolution was our developer, and now we are using a lot of the same principles on AIs.
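The "brain as electrical signals between neurons" comparison above is roughly what an artificial neuron models. A minimal sketch (a single toy neuron, nothing like a full network): weighted inputs are summed and squashed into a 0-to-1 "firing rate".

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of input signals,
    pushed through a sigmoid 'firing' function so the output
    lands between 0 (silent) and 1 (firing hard)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Three input signals, three learned weights, one bias.
out = neuron([1.0, 0.0, 1.0], [0.5, -0.2, 0.8], bias=-1.0)
print(0.0 <= out <= 1.0)  # → True: the output is a firing rate
```

Real networks stack millions of these and tune the weights from data, which is the sense in which their structure is "modeled after our brain", though that analogy only goes so far.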
@@Amakye-Acheampong-Yeboah let’s check back in 10 years to assess your prediction. My prediction is, nothing will have changed regarding AI “taking over” anything. AI is a calculator, that requires humans to operate. It’s not some all-powerful limitless god.
@@paro2210 fair enough. But our lack of perfect understanding of the origin of consciousness in no way means that AI will decipher some brain-code that makes it all-powerful, and that AI will then trick/hurt/kill humans as a result of becoming sentient. I hear your prediction, I just don't agree. Humans are notoriously bad at predictions.
When it has the capability to launch the missiles it maps the trajectory of, that's when you have a problem. Control is the threat. Once it thinks and figures out it can control, it's a wrap.
I believe Neil is mistaken, and it concerns me that someone as intelligent as Neil fails to recognize the profound differences between the evolution of machines and the emergence of artificial intelligence. Until recently, humanity stood alone on Earth with the capability to reason. Furthermore, the nature of consciousness remains a mystery, making predictions that the future of AI will mirror the internet or the industrial revolution seem far from reality. The leap from mechanical or computing functionality to genuine cognitive processing involves complexities that we are only beginning to understand. AI's development trajectory is not just a continuation of technological advancement but a potential paradigm shift in existence itself. Assuming that AI will seamlessly integrate into society like previous innovations overlooks the possibility of unprecedented challenges and ethical dilemmas. The assumption underestimates the transformative impact of creating entities that might one day exhibit forms of consciousness or independent thought, fundamentally different from anything produced by the industrial age or the digital revolution. As we stand on this precipice, it's crucial to approach AI's future not with the expectation of linear progress but with a keen awareness of the unique and uncharted territories we're venturing into.
I still remember the Terminator movies and all the Matrix movies. Not to mention I, Robot and Ultron. A.I. that thinks for itself tends to see humans as a problem. Good luck with that.
Funny because the entire future of Tesla depends on his AI working. He just wants you to buy his. These people are capitalists to the core. They don’t gaf about existential threats.
A little short sighted imo. Also the problem with AI not knowing what’s not on the internet is that everything is on the internet. Google maps and cctv put the physical world on the internet. Baby monitors and Ring cameras, TVs with webcam and Laptops, phone mics, etc etc. your offline conversations can still be online. Aka ‘Hey Siri/Alexa’ it’s always listening. So it can get real dark real quick
Terminator 2 covered this. It's about sentient AI and the possibility of it not being contained. A self-learning, self-improving AI that can take control of industry. Build and improve on itself. Then find us a threat... Maybe this is crazy, maybe it's inevitable. Maybe we will give birth to a synthetic superior life form. Maybe we coexist, maybe we don't. No one's worried about a smart oven or a genius computer stuck on a desk.
If you become an alarmist and pessimistic about this AI tech revolution, then you are missing the beauty of the technological advancement. New jobs will be created.