The future is coming fast; let's make sure we are smarter than the AI. The goal of IQByte is to penetrate the mindless social media machine and teach people to think critically. Our current objective is to teach people about AI and technology for the future.
I still have to find my most relaxed and focused time to take your test. I suffer from anxiety, and it often affects my performance. But coming from someone who can make videos as awesome as yours, I'm not even remotely surprised you scored above average.
@@IQByte Is it AI, or the greed of humans who want more instead of just living together and building each other up? I do that with everyone I meet, and I don't understand why it seems like an impossible task for so many. AI is currently only a tool. Larger corporations will absolutely take advantage of it; so much of HR is already digital.
Those are not predictions... that is just the world the elite want, but it will not happen. Sales of electric cars are in the dumpster, and sales of wind and solar power have cratered as well.
I will make a video dedicated to humanoid robots soon. That being said, I think these developments already take humanoid robots and their impact into account.
@@IQByte While the Dyson Sphere is possible, I'm thinking it won't happen. For me, the reason we don't see aliens anywhere is that it's more energy-efficient to go inside (meditation) than to go outside (space exploration). I think, with Neuralink, we will discover how to replicate advanced levels of meditation by monitoring advanced meditators and pasting their brainwaves into untrained brains, so that the skill is shared without spending decades to develop it. I'd like to think we'll be able to remote view planets in different galaxies without having to leave Earth. I'm also not sure Mars will work out, because I'm thinking humans need an active core for long-term survival: it protects us against space radiation as well as helps power our energetic systems. By that definition, we can say that Earth is alive and Mars is dead, because it doesn't have an active core. But I do still think we will have space stations and space factories in Earth orbit, and humanoid robots and AI will aid in that construction.
@@IQByte Alright, you asked for it. I understand this video can't be taken seriously, but since someone took the time to put it together, I thought it deserved some thoughtful commentary as well. While AI can provide valuable insights and predictions based on current data trends, that doesn't mean those predictions are accurate or without flaws. Actually, they are very flawed. Let me explain.

The 'Hypothetical Imperative Fallacy': This is a very common one. Just because an AI system suggests a certain technology will dominate in the future, or that some event might happen around a certain time, doesn't mean it will actually happen, nor that it's even close to being correct. There is a huge number of variables to consider, like human behavior, changing societal values, laws, governments, wars, and other external factors that could influence the direction we'll move in.

The 'Over-reliance on Technology Fallacy': While AI can analyze data trends and make predictions based on those patterns, those patterns only hold for today's situation; they don't account for human creativity or unpredictable events (as mentioned before) that could drastically change the course of technological development. We should never blindly trust AI to provide us with a complete, or even correct, picture of the future; we must also consider other factors like innovation, social impact, and global trends.

The 'Oversimplification Fallacy': This occurs when we assume that complex issues can be easily predicted by an AI system, because AI is great at dealing with complexity, right? Technology development is often influenced by various interconnected factors, such as political, economic, and societal changes. An AI system will not fully capture these intricacies in its predictions, leading to overly simplistic or misleading results.

The 'Self-fulfilling Prophecy Fallacy': This is when we assume that because an AI predicts a certain outcome, it will inevitably happen. This can lead us to make decisions based on those predictions without considering alternative scenarios or taking proactive measures to prevent negative outcomes. We must be cautious not to create self-fulfilling prophecies by acting solely on the basis of AI's predictions. Peace out! 🤓✌
@@I.Am.Nobody I agree that GPT may have fallen victim to these fallacies/biases, but I did not include them in my prompt; it happened all on its own. At the very end of the video I broke down some of my thoughts, and since I did not want to overload the outro, I kept it concise. There is a lot that I think ChatGPT is getting wrong.
If this is the "public" AI, imagine what type of AI the government has buried in some Top Secret facility; they are usually a couple of generations ahead of what the public has access to. They could probably accurately predict what you'll eat for dinner next week.
@@IQByte Just as I said. It seems to me that this AI thinks humanity will forever keep trying to mitigate the effects of climate change and the effects of human activities on the planet. There's a big difference between the two, and I don't think AI takes these differences into consideration. The only effects we will ever be able to do anything about are our own impact on the climate. What's happening in space, or with Earth's trajectory through space, or other effects such as solar storms or nearby stars collapsing, is not within our grasp. Yet there are many activities and research projects whose core documentation confuses what is what, and which assume that if we just electrify everything fast enough, all will be fine and dandy. This is why it's incredibly obvious that AI is trained on big dumps of social media activity: in many answers requiring a broader understanding of many sciences, it gives conflicting answers or even starts arguing against itself. And as to my final question in my previous comment: I wonder if AI thinks all the work humanity puts into reversing climate change is for nothing.
Claude agrees with me that the best solution is to deploy a fleet of semi-autonomous robots to the lunar surface, building mass drivers to deliver semi-finished materials to near-Earth orbit, and building NASA Integrated Symmetrical Concentrator solar power satellites, which would also host AI and the Internet, and cool the planet.
@@IQByte Yes. I've had fellow engineers tell me they get frustrated with GPT because it often doesn't learn from its mistakes, among other problems, such as, well, I don't want to say lying, but...
I already did a real IQ test (which cost like 200 €) because I couldn't concentrate in school, and I have 154 IQ... believe it or not... but it is what it is. BUT: this was like 2 years ago, when I was like 13, and now I think I can reach 160 IQ or something. I've gotten smarter xD
This was fun. I did a ChatGPT experiment to have it pick a possible identity for the woman described in the band Train's song Meet Virginia. Its best-guess comparison was Janis Joplin. You should do some celebrity dating experiments.
My question is: what did this poor schmuck do to damn him to Hell? It's crazy that Hitler and Genghis Khan just happened to meet him. Maybe they greet all newcomers to Hell. Great vid!
In fact this is possible, but what kind of signals are we talking about? How will Neuralinks communicate with each other? It makes me think of how two Arduinos talk to each other 😂 and I know
It is about as physically possible as being as delusional as this man is. It technically is physically possible; it's just so unlikely that the chances are one in 8 billion.
Yeah, I get frustrated with OpenAI. I ask it something and it provides a huge list of generated, erroneous information. I thought I could use it to replace my Reddit and Discord addiction, but I'd rather figure things out on my own than ask it science questions.