I might be missing something. If the rotors move, what's stopping someone from just pressing every letter until the machine loops back around? Then couldn't you build a table like: "pressing A four times gives a D on the fourth press, so a D in the fourth position must be an A"?
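A toy simulation makes the catch visible: even a single rotor that steps after every key press maps the same letter differently on each press, and the whole output sequence depends on the secret start position. This is a drastically simplified one-rotor sketch, not real Enigma (no reflector, no plugboard, no multi-rotor turnover), and the wiring is just a random permutation:

```python
import random

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
rng = random.Random(42)
WIRING = rng.sample(range(26), 26)  # the rotor's internal wiring (toy)

def encipher(text, start_pos):
    out = []
    pos = start_pos
    for ch in text:
        c = ALPHA.index(ch)
        # The signal enters offset by the rotor position and exits offset back.
        out.append(ALPHA[(WIRING[(c + pos) % 26] - pos) % 26])
        pos += 1  # the rotor steps after every key press
    return "".join(out)

# Pressing 'A' repeatedly gives a different letter each time...
print(encipher("AAAAAA", start_pos=0))
# ...and the whole pattern changes with the secret start position, so a
# lookup table built for one setting says nothing about another message.
print(encipher("AAAAAA", start_pos=5))
```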
The key flaw in this thinking is the idea of a single AI system for all AI needs. Our minds are a large collection of specialist, interconnected systems; you wouldn't ask your auditory system to identify colour. The best systems are small specialist ones that work in unison. Take Hippocratic AI, a nursing system, as an example: their paper outlines how specialist AI can work as a nurse over the phone.
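A minimal sketch of that "small specialists in unison" idea: a thin router dispatching each request to a narrow, purpose-built handler. All the names and intent keys below are hypothetical illustrations, not Hippocratic AI's actual architecture:

```python
from typing import Callable

def triage_symptoms(msg: str) -> str:
    return "Routing to symptom triage: " + msg

def medication_reminders(msg: str) -> str:
    return "Checking medication schedule: " + msg

# Each specialist does one narrow job well, like the mind's subsystems.
SPECIALISTS: dict[str, Callable[[str], str]] = {
    "triage": triage_symptoms,
    "meds": medication_reminders,
}

def route(intent: str, msg: str) -> str:
    # A real system would classify intent with a model; here it's just a key.
    handler = SPECIALISTS.get(intent)
    return handler(msg) if handler else "Escalating to a human nurse."

print(route("meds", "Did I take my statin today?"))
```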
Your understanding of LLMs is flawed. Prediction is not *what* LLMs learn to do, it is *how* LLMs learn; prediction is their mechanism of learning. Through the attempt to predict, the parameters of the network are established, and that configuration of parameters *is* the embedded knowledge.
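A minimal PyTorch sketch of that distinction, with toy sizes and random stand-in data (nothing here is a real LLM configuration): prediction is the training signal, and what remains afterwards is just the weight configuration that signal shaped.

```python
import torch
import torch.nn as nn

vocab, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab, (64,))    # stand-in corpus
inputs, targets = tokens[:-1], tokens[1:]  # task: predict the next token

for _ in range(100):
    logits = model(inputs)                 # (63, vocab) predictions
    loss = loss_fn(logits, targets)        # how wrong was each guess?
    opt.zero_grad()
    loss.backward()                        # prediction error...
    opt.step()                             # ...reshapes the parameters

# After training, the "knowledge" (here, just this tiny corpus's
# next-token statistics) is nothing but the configuration of weights.
```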
I don't think we need general intelligence. We have 8 billion units already that are very energy efficient. LLMs are very useful. If that's all we ever get, that's fine.
Costs are exponential and gains are logarithmic. Ultimately, the acquisition of measurable knowledge will be asymptotic, like it is for human knowledge. You can never really get there when your measurements and observations are based upon human perception. AI is little more than linear algebra being run by a ton of processors. The vast amount of information digested is unfathomable to humans, but it's also constrained by the same limitations. It's a sales pitch.
You're watching too many movies. This AI story is a story! Or a fairytale to be more precise. A lot of time will pass before this thing is even usable.
It's kind of obvious that general AI won't be reached this way: machine learning requires data to do anything, so it can never achieve something that requires knowledge it does not have. It cannot come up with new information without data, yet that is exactly what general AI would require. General AI would need a completely new process; it cannot be achieved with our current methods. The trajectory we are currently on does not lead to general AI.
Generative AI operates in the space of human awareness and reasoning capabilities… and if human reasoning is finite, then at some point AI would peak…
This basic principle applies to almost everything. Building muscle in the gym becomes harder and harder as time goes on: the beginning might be really fast, but it will plateau, and after years and years of lifting, progress becomes really slow or even starts to decline. The same applies to learning a subject. Reading about 20% of a book, you will learn 80% of its content. Every new percent that you want to learn requires exponentially more data, or more reps, or more whatever. And it plateaus and starts to decline at some point.
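A tiny numeric illustration of that plateau, assuming an exponential-saturation learning curve f = 1 − e^(−effort) (a common toy model, not a measured law): the first 20% is nearly free, while each extra "nine" of mastery costs a constant chunk more effort, i.e. exponentially more per remaining percent.

```python
import math

def effort(f: float) -> float:
    # Invert f = 1 - exp(-effort): effort needed to reach fraction f.
    return -math.log(1.0 - f)

for f in (0.20, 0.80, 0.90, 0.99, 0.999):
    print(f"{f:6.1%} learned -> {effort(f):5.2f} units of effort")
# 20.0% learned ->  0.22 units; 99.9% learned ->  6.91 units:
# each remaining slice of mastery costs far more than the last.
```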
WAIT, THERE'S BEEN A COMPUTERPHILE CHANNEL HERE THIS WHOLE TIME?!?!?! HOW AM I JUST NOW FINDING THIS????? WHY DIDN'T ANYONE TELL ME!!!!??????
So basically he is saying there is not enough expert data out there for gen AI to learn to distinguish anything more complicated than it already does today, e.g. cat vs. dog.
Recently wrote an N-dimensional N-ary search function. It's fun to try to scale these basic algorithms, fixing all the bugs that show up and improving your understanding along the way.
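For anyone curious, here is a hedged sketch of just the 1-D building block: a k-ary search that probes k−1 pivots per step instead of binary search's single midpoint. The original N-dimensional function isn't shown above, so this is only a guess at the core idea, not that implementation:

```python
from typing import Sequence

def k_ary_search(arr: Sequence[int], target: int, k: int = 3) -> int:
    """Generalised binary search over a sorted sequence (k >= 2).
    Each step probes k-1 pivots splitting the live range into k parts.
    Returns the index of target, or -1 if it is absent."""
    lo, hi = 0, len(arr)                       # half-open interval [lo, hi)
    while hi > lo:
        pivots = [lo + (hi - lo) * i // k for i in range(1, k)]
        for p in pivots:
            if arr[p] == target:
                return p
        new_lo, new_hi = lo, hi
        for p in pivots:
            if target < arr[p]:                # target lies left of pivot p
                new_hi = p
                break
            new_lo = p + 1                     # target lies right of pivot p
        lo, hi = new_lo, new_hi                # the range shrinks every pass
    return -1

data = list(range(0, 100, 3))                  # 0, 3, 6, ..., 99
assert k_ary_search(data, 42, k=4) == data.index(42)
assert k_ary_search(data, 43, k=4) == -1
```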
I will say, all of the hype is cool and all, but the reality is AI DOES offer a "new" outlook. An organic brainstorming session can be very beneficial: when I am stuck with a problem in life, mathematics, or logical decision making, instead of asking another human for advice it can (sometimes) be easier to play with ChatGPT for a bit, and it creates a thread in your brain that leads to something new. AI is not meant to do everything for the human; that's when it starts getting too much like AI.