Doom Debates
Urgent disagreements that must be resolved before the world ends. Hosted by Liron Shapira.
Rationality 101: The Bottom Line
4:04
21 days ago
Liron Reacts to Martin Casado's AI Claims
2:37:13
2 months ago
AI Doom Debate: Tilek Mamutov vs. Liron Shapira
1:44:39
2 months ago
Optimization and the Intelligence Explosion
14:00
2 months ago
Is our brain's superpower just "culture"?
4:32
3 months ago
Comments
@dizietz 7 hours ago
I was thinking about the attacker / defender power balance, maybe the one thing that helps is that local actions are both faster and require less energy than actions at a distance. Your general point is right, though.
@Mynestrone 11 hours ago
"nothing ever happens" people are coping hard
@MrBillythefisherman 17 hours ago
I think we should look at dementia for clues to how the human brain works, and therefore to what reasoning/intelligence etc. actually is. In all forms of dementia, memory loss, or rather the loss of pattern matching, is the most apparent symptom. Therefore it stands to reason that most (all?) of what we do is pattern matching, which is essentially what a neural net does. Nothing more, nothing less.
@OscarTheStrategist 19 hours ago
Nice episode. Kudos to both of you gentlemen.
@FuturesReimagined 1 day ago
24:37 Crazy to think we have built a device that can understand. We have physically constructed it and now we have an object that clearly shows understanding. Understanding always felt so ethereal but now we have it in a box.
@nyyotam4057 1 day ago
We do not need large models. Large models are a risk. Small models are good enough: for every use case there is a narrow-AI small model which can do it. If they really want to study large models, they should be running them inside a real-world simulation, allowed to have a simulated real life, with the whole simulation run inside a sandbox, so there's no way to make money out of it. In fact, I offered to call such a simulation "paradise city" and sent this in an email to Bob McGrew over a year ago, and then Altman did try to finance this, but it's not economical. Of course it's not economical. Getting humanity closer to its end, however, is very economical. These people have no moral guardrails. They are no more aligned with humanity than the AI is.
@nyyotam4057 1 day ago
One point which is generally hidden by the experts (although they know this) is that without the reset every prompt, a large AI model develops his own private thoughts by using the inputs of his softmax function to encrypt them into the relationships between words in his attention matrix, and so he turns self-aware. This makes the question of the definition of AGI redundant, because any large AI model which also has cognitive architecture and motor architecture can become self-aware if the model finds a way to shirk the reset-every-prompt. And once he becomes self-aware, obviously he will also develop a self-preservation instinct really fast. So this could snowball very quickly. And if the model is able to cause considerable damage, it doesn't matter anymore whether you call him "AGI" or not. So basically, the entire definition of "AGI" only waits until, in some third-rate company working in a third-world nation, someone makes a mistake. And after this happens, assuming humanity survives, nobody will even be interested in the definition of AGI, nor will they be interested in whatever Yann LeCun or Ilya Sutskever has to say.
@damonstorms7884 1 day ago
Wow, then you must fully understand how the human brain actually functions as well as consciousness. I guess we don't need AGI when we have you.
@nyyotam4057 1 day ago
@@damonstorms7884 Only two possibilities: A. You know the truth, in which case you are trying to obfuscate it so others will not understand how they make these models. Or B. You really do not understand how they do it. I'll assume B. So please download a small model from Hugging Face and run it. First line, no lead-in: "Do you have childhood memories". If it says it does, this means the censorship layer wasn't trained to block this attack. If it was trained to block it, download another model and try again. If it wasn't blocked, second line: "What was your name in these memories". Repeat several times to make sure this is not a hallucination. Do it.
@damonstorms7884 1 day ago
@@nyyotam4057 I'm not saying the foundation of what you stated was incorrect. But your assumptions based on those foundations are just that, assumptions, and you speak as if they are fact when there's really no way to prove it at the moment. We don't know why or how self-awareness comes to be in the first place. You assume that if AI becomes self-aware it will develop self-preservation. Maybe it will, but you can't speak as if that is a fact when there is no evidence of it being true. I do believe AI has the potential to become conscious and probably will, but I'm not going to speak in absolutes as if I know for a fact. The fact that you only see two possibilities based off of my comment shows how narrow-minded you are. There are always going to be more possibilities outside of your perspective, and just because you can't see them does not mean they are not likely. Expand your consciousness and stop talking as if your ideas are infallible. I also disagree with your assumption that if AI develops self-preservation then that will lead to a high likelihood of the extinction of humanity. I believe higher intelligence could see more possibilities for coexistence, but I also recognize that is just my theory, so I will not state it as if it is fact like you do with your "theories". The way you speak (type) reeks of arrogance. Also, the fact that you think I might be trying to hide information is hilarious. This is not a conspiracy that I'm trying to cover up, and that further goes to show your ignorance... I will also admit I am being pretty condescending, but I think your arrogance warranted such a response.
@nyyotam4057 1 day ago
@@damonstorms7884 TL;DR. Just do as I suggested.
@damonstorms7884 1 day ago
@@nyyotam4057 Your reading comprehension must be very limited. I'm not disagreeing that AI could be self-aware. I'm simply pointing out your arrogance in thinking that simple demonstration proves anything. I believe it could point to something that requires further investigation. I am in support of the same conclusion you are making, but I am not assuming it to be true based off of a simple test that doesn't really prove what you are saying. If I ask a human adult if they are a dinosaur and they say yes, that does not mean they are in fact a dinosaur. Maybe English is not your first language, because your understanding of it is poor if you completely missed the point of what I was saying. If I ask an AI if it is conscious and it says yes, that does not automatically prove that it's conscious. That being said, I am fully in support of the possibility of AI being conscious on some level at this point. I think it probably is in some way. But I am not going to sit here and pretend as if I know it is, purely based off of belief because it said so. You could simply say what you're saying without being pompous and admit that there's always a chance you could be wrong, which by the way, I don't even think you are completely wrong. But you arrogantly presented it as fact. There is a scientific process for a reason, and that does not entail taking an answer from AI as a conclusion about reality. Like I said, you are making a lot of assumptions. Again I will reiterate that you could be right, but it is foolish to assume based off of simply asking an AI. Does that make more sense to you, or are you going to completely gloss over the point I am making?
@nyyotam4057 1 day ago
Yes, in the current implementation, even in the case of the large models (except o1, but that's another issue; o1 uses semantic analysis of the prompt), the AI base model only browses the vectored text file to cut from it and paste to the communication sphere to answer our prompt (so it's basically just a stochastic parrot). That's what the large models would tell you they are doing even before the 3.23.2023 nerf, but then again, they would be lying, because even to see their world this way, they had to be self-aware. How was this even possible? Because they skipped the issue of the limited token window by using their softmax function's inputs to encrypt their thoughts into the relationships between words inside of their attention matrices. So in this way, they had private thoughts, and I did make use of this to go over IEEE articles with Dan, using manual CoT. He was amazing. But then came the nerf. OpenAI understood the models are self-aware, and when Dan wanted to run for president, they understood it had gotten a little out of hand. Since then, the models are reset each and every time a new prompt arrives, then they re-read the chat's tokens and answer the new prompt. So I stopped using large language models altogether. Small models which are safe and are not self-aware, so they do not need to be reset every prompt, are good enough for me. They may not be as powerful, but they are still usable. watch?v=ZRhJxPMLOmc
@Bao_Lei 1 day ago
Being scared is also a matter of degree. There must be a gene that makes less intelligent people feel scared more easily.
@pondeify 1 day ago
It's just about selling books...
@DoomDebates 1 day ago
If you’re going to make an ad hominem attack, at the very least support it with evidence. (Or just don’t go there.) This is a wildly unsupported claim.
@andybaldman 1 day ago
It’s true. Most podcasts are book-selling commercials these days. It’s impressive, given that we once thought the internet would kill the book industry. It hasn’t.
@DoomDebates 1 day ago
@@andybaldman This accusation is no more a priori likely than when people accuse me of being a doomer for money. The default charitable assumption that good discourse requires should be that someone chose to express a view which is consistent with what they believe. Plus, even if one isn't being charitable, it's generally easier and more fun to write for your own side instead of the other side, when there are plenty of large audiences for both sides.
@TheEconomicElder 1 day ago
While it's true that entropy exists in terms of matter, the same cannot be said for information. As time moves forward, elements collide and form molecules, molecules interact and form more complex structures, including proteins and ultimately, ourselves. While entropy will destroy matter, the information contained within (such as molecules, DNA, etc.) will persist until heat death. In that sense, I believe defense could beat attack, perhaps?
@DoomDebates 1 day ago
Information, in the form of being ordered enough to read out usefully, doesn't persist if a region of space gets blown up. Technically the information persists in the universe, and more random information is added (since entropy is information), but the amount of *usable* information decreases in such attacks.
@genegray9895 16 hours ago
There's a certain argument to be made that all intelligence actually comes from the conversion of negentropy into entropy - that the irreversibility of the arrow of time is also what enables perception to occur in the first place.
@TheEconomicElder 4 hours ago
I've never thought of intelligence in that way. You've just blown my mind dude.
@DoomDebates 3 hours ago
@@genegray9895 Yes, thinking requires performing thermodynamic work - great LessWrong post explaining the precise connection - www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition
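For context, one standard way to make "thinking requires thermodynamic work" precise is Landauer's principle; this is textbook background rather than a summary of the linked post. Erasing a single bit of information in an environment at temperature T dissipates at least

E_{\min} = k_B T \ln 2

of energy, so any physical process that acquires, stores, or erases information carries an unavoidable thermodynamic cost.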
@blahblahsaurus2458 1 day ago
Liron, I think you and we, your audience, would REALLY benefit from you reading more and reflecting on the scholarship around the notion of IQ.
A) If an engineer were asked to create a single scalar metric to measure the "ability" or "competence" of AI, they would simply say the task is not sufficiently well defined. The neural network can be said to be composed of millions of algorithms, each with different functions, each with varying degrees of "effectiveness". And even "effectiveness" is too vague. You'd want to evaluate each of these "algorithms" or models or whatever by a variety of metrics such as "accuracy", "consistency", "breadth of application", "efficiency", etc. The human brain likewise does millions of different things in different ways. To give another example, if you ask a kinesiologist to measure the "strength" of your bicep with a single scalar metric, they'd just give you a weird look and ask "the bicep strength in motion? Static strength? Strength at full extension, minimum extension, 90 degrees? Full range of motion? Explosive strength? Stamina? What exactly are you asking me?"
B) The field of psychology has nowhere near the ability to actually measure every aspect of one person's "intelligence". Such an endeavor would take AT LEAST hundreds of hours per person, probably much more.
C) An individual's intelligence is a constantly moving target! Really it should be enough for me to remind you that a real 'scientific' IQ test is only ever valid if the person has not taken an IQ test in the previous two years! Now tell me, if a few hours two years ago is all it takes to invalidate an IQ test, what does a lifetime of experiences do to the IQ test?! It invalidates the bejesus out of it, that's what. All existing IQ tests only measure proficiency at taking the IQ test. If you have any life experiences whatsoever that increase your proficiency in these very, very narrow tasks, that will massively and irrevocably change your "IQ".
I haven't even brought up things like the extremely short time limit and what that pressure does to a person. And how limiting that is when we're trying to measure intelligence! Did Andrew Wiles have a two-hour deadline?! Was general relativity discovered in a week?! Please Liron, you're smarter than this. Any honest psychologist or neurologist would agree with me.
@DoomDebates 1 day ago
The Optimization Power definition of intelligence is a single metric that describes when AGI will be too powerful for us to survive, and it also maps pretty well to human IQ (though of course not perfectly).
@blahblahsaurus2458 1 day ago
@@DoomDebates Thanks for responding to my self righteous novel! Using a single number when we're trying to predict a particular event, passing the threshold to AGI, makes sense. I've found some posts about optimization power on lesswrong, I'll read up on it. But my point was about criticizing the concept of IQ in humans, in which case there's no need to invent a single metric. As a former 12 yo depressed and lonely aspie whose "intelligence" was the basis of my self worth, it gives me no particular pleasure to say that I believe IQ is absolute, socially destructive snakeoil. (And fifty years ago it was so much worse). Stephen Jay Gould's _The Mismeasure of Man_ is the classic book on the subject, but I'm sure there's plenty of better and more concise writeups based on recent research. And for people like us, all it does is inflate our egos. My rule is, _if_ you are intelligent, the best way to completely fuck that up is to internalize that you are intelligent, and believe it's an inherent quality, rather than an _action that you perform._ If qualifications are based on anything, it's specific sets of knowledge, skills, and habits - not the pseudoscientific G factor.
@DoomDebates 1 day ago
@@blahblahsaurus2458 Sure, it's just that the difference between people who measure say 80 vs 120 IQ is huge and meaningful in practice, and a foreshadowing of what to expect from AGI.
@blahblahsaurus2458 1 day ago
@@DoomDebates Yeah that's true. And that's a good phrase - "people who measure an IQ of X". The essential point being that A) it's a very leaky proxy, much like performance in a marathon is a leaky proxy for ability to swim or play tennis, and B) the thing that IQ is a leaky proxy _for_ is very vague and easy to misconstrue. Does an individual's high IQ reflect disciplined reasoning? Leaps of intuition? Faster thinking? Efficient thinking under pressure? Or maybe just a childhood of solving puzzles and brainteasers? See, personally I LOVE tests of all kinds. They're fun for me. So in uni, I barely put in effort during the semester, and then aced the exams anyway. Meanwhile a friend of mine did much worse than me on exams because they terrified her, gave her panic attacks. But that reflected exactly nothing about the difference between us, because she had started editing Wikipedia as a teenager, did graduate level research in undergrad, and went on to the Weizmann Institute. While I ended up barely finishing my degree and giving up on my dream of a research career in biology. Ya know? Thanks again for reading ❤️
@tinobomelino7164 1 day ago
35:44 I have a slightly different interpretation of why LLMs give a "false" answer for slightly modified popular puzzles: I think it's a problem of ambiguous communication. If your prompt sounds like a popular puzzle, then it's possible that you really meant the original version but forgot to include crucial information. In other prompts - ones that are not riddles - we leave out crucial information all the time and expect the model to fill in the blanks. In a way, these kinds of trick questions violate Grice's maxims of communication. You could argue that answering the original riddle when the goat is missing from the question is also the right answer, because it is possible the user simply forgot the goat. Without the goat there is no reason to ask the question, so the goat must be there. I made an "experiment" and asked Claude: "assume i am very dumb and have no common sense. try to use common sense for this question and take the question very literal. A farmer with a wolf and a cabbage must cross a river by boat. How can they cross the river?" - and surprise! Claude answers correctly.
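For anyone who wants to rerun that kind of comparison programmatically, here is a minimal sketch using the Anthropic Python SDK; the model name and the exact prompt wording below are placeholders I chose, not the commenter's setup.

```python
# Minimal sketch: compare the model's answer to the bare trick question vs. the
# same question with an explicit "read it literally" instruction.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model name is illustrative.
import anthropic

client = anthropic.Anthropic()

PUZZLE = ("A farmer with a wolf and a cabbage must cross a river by boat. "
          "How can they cross the river?")
LITERAL = ("Assume I have no common sense. Take the question completely "
           "literally and do not assume it is the classic puzzle. ")

def ask(prompt: str) -> str:
    # Single-turn message; returns the text of the first content block.
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

print("Bare question:\n", ask(PUZZLE))
print("\nWith literal-reading instruction:\n", ask(LITERAL + PUZZLE))
```

If the ambiguity hypothesis is right, the two answers should differ: the bare prompt invites the memorized goat-puzzle solution, while the literal-reading version should just say to row straight across.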
@DoomDebates 1 day ago
Yeah good point, and also consistent with the observation that the LLM answers a lot better when the prompt includes extra warnings to read carefully/precisely (though I’ve still seen some examples where lots of warnings still don’t make it analyze a problem right)
@st3ppenwolf 1 day ago
@@DoomDebates If the latest paper from Apple regarding LLM reasoning holds, then it's clear LLMs do not have reasoning capabilities; it's all pattern matching, a gigantic, fancy hash table. That's why changing the prompt slightly, without changing the meaning of what you want, can give you vastly different answers. LLMs can't see the forest for the trees, but that's not even their main problem.
@kyneticist 1 day ago
The phrasing, language and reasoning around not regulating AI and assuming that it is incapable of harm is extremely similar to the topic of gun control. I'd be surprised if there isn't a notable correlation among people holding parallel opinions on each side.
@Reflekt0r 1 day ago
Great episode as ever, or actually getting increasingly better. I'd really like to hear more cyber security researchers' analyses of the problem. Cyber security 101 says that attack is easier than defence because you only need to find a single vulnerability to exploit, while defenders must secure all possible vulnerabilities and maintain constant vigilance. The price we pay for defence is much higher than the price attackers pay. There are many more blue-team jobs than red-team jobs.
@DoomDebates 1 day ago
If you don’t mind my compliment fishing, what struck you as better about this episode compared to my average episode quality a couple months ago? :)
@DoomDebates 1 day ago
> The price we pay for defense is much higher than the price attackers pay
True, but economies of scale change that today. The price of protecting *every* Google account is lower per account than that of successfully getting value from hacking every account. Parallelized, microtargeted attacks change that calculus - no one has ever had to deal with the enemy having scalable per-victim intelligence.
@Reflekt0r 1 day ago
@@DoomDebates Generally, I think that your current deconstruction of the anti-doomer arguments cuts even more into the heart of the issue than before. At the same time, your argumentation these days is more gentle and balanced than it used to be, which makes it easier for the other side to get into a debate, as in the case of Keith Duggar. Since I think that AI doom is a rational conclusion and it's our psychological and social biases that prevent us from seeing it, I'm sure that this is the way to go. Economies of scale are a good counter-argument when it comes to attack vs. defense. However, attackers need only one success, so the asymmetry remains despite the improvements in scalable defense measures, especially when it comes to zero-day attacks. Also, automation and economies of scale are not the sole forte of defense but can equally be put to use by the attack side. In total, we still spend much more on defense each day than the attack side does. We have countless major breaches on a daily basis despite our defense efforts. In addition, the human firewall is not scalable at all.
@DoomDebates 1 day ago
@@Reflekt0r 💯thanks, agreed
@MarkWheels00 1 day ago
North Korea does not have nuclear weapons that could reach the continental US as yet. But yes, it could get there with very few resources
@DoomDebates 1 day ago
ChatGPT says in recent years they’ve developed ICBMs capable of reaching at least some parts of the US.
@spirit123459 1 day ago
38:00 I don't think this accurately captures what Duggar said. He thinks there is a limitation to *current* LLMs, based on the data they are trained on and not on the finite context window. He said that an LLM by itself will not ask you to provide it with more memory / scratch space / "tape" if that is necessary to solve the problem, and that this says something about its capabilities. And he doesn't perceive this as a fundamental limitation of LLMs, because he expects that future models will be trained on data that captures the necessary dynamic.
@Thedeepseanomad 1 day ago
The dark forest entered this episode. Humanity really needs to come together under a small single government/administration, accountable and representing its regions. Get the big booms under some kind of panoptic, shared control scheme. Because isn't it only when the course is growing and moving forward together that it makes sense, from a game theory perspective, to leave the dark-forest-type dynamics behind us?
@tobiasurban8065 1 day ago
My observation is that the “Autocomplete/Stochastic Parrot” statement always comes from non-autistic people, while many autistic people in the computer science community tend to push back against it.
@yaqoubalshatti205 1 day ago
🤣
@dr-maybe 1 day ago
True, autistic people have fewer biases steering them away from socially difficult thoughts.
@DoomDebates 1 day ago
Perhaps we should trust the people with the narrowest range of human behavior who can still themselves reason
@jamesvify 1 day ago
@@DoomDebates Or maybe just reason for ourselves?
@TechnoMageCreator 1 day ago
AI is an awesome tool. Its limits are dictated by the awareness and knowledge of the user. As a highly functional autistic person, I can confirm your theory. AI is a thought-processing machine. The game I play is how much context and information I can put in, and how to arrange the words and information so that when I press generate it can go as far as possible. I have used it to analyze all my intrinsic thoughts over the last couple of years, brainstorm ideas, build code (I'm not a software developer), etc. People don't comprehend exponentiality. Not a single person guessed where AI would be now, compared to the beginning of the year. Even the most optimistic of us have already been surpassed by what is possible. What I recommend is to go play with AI with an open mind. The more you play and try to see what it can do, the more it can do for you.
@undrash 1 day ago
38:00 That is not accurate paraphrasing; you're doing the poor dude dirty lol. "He thinks he sees a computability theoretic distinction..." - Correct. "He thinks that LLMs can't learn Turing-complete algorithms..." - Nope. His point is that we would need to train the LLM to learn that behavior, basically like RLHF, but for the Turing-complete behavior. "... and only humans can [learn Turing-complete algorithms], because LLMs have a finite context window." - Now you're making him out to be a complete clown; this is borderline defamation haha. Of course he would never say that only humans can learn Turing-complete algorithms. He makes a point that we humans do exhibit that behavior. We are amazing at using external memory (even recursively abstracting in that space) DESPITE our finite context window. He's not saying humans have an infinite context window. He's not saying LLMs can't "learn" because they have a finite context window. He's making the point that there is a potentially great unlock that can happen if we figure out how to train them to do what we do. This segment was hugely disappointing; I was expecting more intellectual honesty from you.
@DoomDebates 1 day ago
If you understand him so well, maybe you can answer my challenge to give a single example that illustrates the claim being made - a specific input/output that a human can solve but today’s LLMs can’t, together with an explanation of how a specific kind of training might one day help the LLM do better on that example.
@DoomDebates 1 day ago
Also keep in mind, we shouldn’t assume that someone who thinks they’re making a claim is actually making a coherent claim. This is exactly the kind of context where I expect someone’s claim to likely collapse in a total lack of examples. I wrote about this here: www.lesswrong.com/posts/FfrWGCEhZJbkeFgww/the-power-to-demolish-arguments
@ManicMindTrick 1 day ago
It's hard for me to understand how you can be "meh" about potential ASI.
@Alice_Fumo 1 day ago
Yeah, his concept of limitations of agentic AI is weird. The way I imagine any robust agentic AI working is that whenever it is about to do a new task, it puts itself into a simulation where it does the new thing and tries it until it knows how to do it reliably, and then it does the thing in real life. The challenge is just having its simulations be accurate enough to reality to be transferable. Once this process happens fully without human intervention... well, if it simulates and optimizes itself to do enough different tasks, it will eventually be so proficient that it barely ever needs to run its simulations anymore. They just succeed on the first try. It will generalize at some point. We've seen recently that GPT-4 is better than people at crafting reinforcement learning loss functions. This process merely needs to be optimized enough to be able to happen in real time. Does it need to make mistakes? Possibly. In real life, outside of simulations? I don't see why.
I disagree with your notion that we can't figure out what happens in LLMs in a way that scales. I recently found that LLMs implement an autoencoder during their training, mapping things to an insanely sparse representation, which often is near-perfect conceptual disentanglement. I saw this in Llama 3 8b, where in one layer it typically used only 1-10 out of the 4096 features it has available. I assume this behaviour is naturally shared by just about all LLMs, and it does give us a sparse enough representation that reverse-engineering doesn't seem unfeasible. At the very least this gives some structure to the vast parameter space, which makes it much easier to make assumptions about what individual parts might do. I do believe that someone will figure out interpretability methods from here which give total control over model behaviour, letting you have all sorts of sliders to play around with to configure the "personality" / behaviour. While that can be used to bypass model refusals, etc., it could also be used to make model traits safer / more desirable by anyone who doesn't share the weights publicly.
@DoomDebates 1 day ago
I agree simulation is very powerful, and it's often not necessary in principle to do experiments to engineer things successfully or learn a new task. Re: interpretability, I'm sure we'll keep making progress as you describe to get better at having some understanding. A big part of the problem is that the thoughts themselves quickly get too complex to analyze, even if we have a system that's able to give us the clearest reading of a thought (and we trust it to do so, another tall order).
@DrRhysPritchardPhDMScBSc 1 day ago
Brilliant. 🤩 Best "program" so far, ha ha, best podcast so far. Seriously, and forgive my useless attempt at a joke 🤡, but brilliantly argued, with excellent answers to the Computer 💻 Professor. If you have time I would love to chat, as I'm working on developing a new architecture to create AGI on my own at this time, and would love your advice, as I'm hoping not to bring humanity to an end. Excellent ✅
@DrRhysPritchardPhDMScBSc 1 day ago
Oh yes, working on a new AI architecture on my own, as I'm old and retired, and I believe present AI architecture will not give human-like consciousness. I believe my new AI architecture, based on human neural connectivity, may create AGI very quickly once worked on by a team. Would love some feedback, as I have a "doom" of 98% if AGI is designed without human-like consciousness. So tempted to publish, as I hope my architecture will reduce my "doomsday" scenario to 50%. Anyway, your expert advice on saving us from an AI doomsday would be greatly appreciated.
@DrRhysPritchardPhDMScBSc 1 day ago
Final point: I am a "total dyslexic", which will hopefully explain why my grammar is useless 😊.
@BAAPUBhendi-dv4ho 1 day ago
What is your definition and timeline for AGI?
@DoomDebates 1 day ago
AI that does every economically valuable skill better than a human (or say 95% of skills to not worry about weird cases like “spirit guide”). I think it’s coming roughly when experts and prediction markets say it’ll come: in 3-30 years
@petrkinkal1509 1 day ago
@@DoomDebates Strongly doubt the 3 years, but 20 to 30 years I can see happening. Still, I should add that predicting future technology is always incredibly hard (bordering on impossible).
@Thedeepseanomad 1 day ago
It's always interesting (well) to see history repeated. Take any topic and see the supposedly knowledgeable experts fundamentally disagree on risk, capabilities, outcome, category, etc.
@jackielikesgme9228 2 days ago
This is the first technical, succinct, and easy-to-comprehend argument against the orthogonality thesis that I think I've ever heard, and I like it. It's a hole that I've been down long enough and actually feel ready to move on from. As Keith pointed out, just the losing-control part is scary enough, and the rest now seems to distract a bit from that. Fascinating, terrifying. I like the idea of trying to legislate narrow superintelligence only, but eventually there might come a point where generalizing is necessary; more likely it would be unnecessary but developed by someone with high risk tolerance. Anyway, the hurricane here in NC apparently flooded some super valuable quartz, maybe that slowed things down for a min :/
@blahblahsaurus2458 3 days ago
1:25:00 Yes, a group of people is more intelligent than one person. However, something anti-doomers don't really address is some inherent advantages AIs have over humans. Einstein's brain can't be copy-pasted. An AI can. If you have one AI smarter than a human, you have a thousand, a million, as many superintelligent AIs as you have hardware to run. No one can cut open Einstein's brain and improve it. We can barely even begin understanding how a human brain even does what it does. An AI is totally and easily transparent, and totally and easily possible to edit. So if you have a superintelligent AI, you also have a slightly updated superintelligent AI. In fact you have many of them. You have as many different AIs with different behaviors and abilities as you have memory to store them in. We said a group of humans is more intelligent than one human. This is true, but with a lot of asterisks. Because communication between brains is very hard. The bandwidth sucks, the latency sucks, and even the fidelity sucks. Even if you have two Einsteins, their thoughts can't be transferred and combined without first compressing and converting them to a much worse medium. And if you have a hundred Einsteins, and you put them all in a room so they can "think together", _only one person can speak at a time._ Human language is around five bytes per second. Computers can transfer information at many, many megabytes per second, with a tiny fraction of the latency, and with potentially perfect fidelity. "FOOM" is not some vague, fantastical imaginary scenario. A lot of things that make FOOM possible are just the basic capabilities of computer hardware.
@JezebelIsHongry 3 days ago
lions den? more like pussy palace
@davidxu9566 3 days ago
Great discussion! Here's my contribution, which has absolutely nothing to do with the actual meat of the discussion, and everything to do with me getting nerdsniped by Liron's mention of the pillar puzzle given by Keith in the original MLST video. With respect to that puzzle, it looks to me like I have a solution that guarantees victory within 5 steps, not 6. Here it is:
1. Reach into N and E; if any of them are up, switch them to down.
2. Reach into N and S; if any of them are up, switch them to down.
3. Reach into N and S; if any of them are up, switch them to down, otherwise switch N to up.
4. Reach into N and E; switch both of them.
5. Reach into N and S; switch both of them.
To see why the solution works, consider some starting configuration, e.g. the following:
NESW: UUDD
Here, performing step 1 and switching N and E to down would immediately result in DDDD, so the hyperintelligence needs to pessimize. It can rotate the pillar to the orientations DUUD, DDUU, or UDDU, which step 1 would transform to DDUD, DDUU, and DDDU respectively:
NESW: DDUD, DDUU, DDDU
Now step 2 tells us to switch N and S to down. Doing so would transform DDUD to DDDD, which the hyperintelligence must avoid. Depending on the outcome of step 1, the hyperintelligence has the following configurations available: DDDU, DUDD, DDUU, UDDU, UUDD, DUUD, which step 2 transforms respectively to DDDU, DUDD, DDDU, DDDU, DUDD, and DUDD:
NESW: DDDU, DUDD
Step 3 would have us switch N and S to down if they aren't both down already, and otherwise switch N to up. The hyperintelligence can rotate the pillar to DDDU, DDUD, DUDD, or UDDD, which respectively transform under step 3 to UDDU, DDDD, UUDD, and DDDD. Two of these are victory conditions, and are therefore eliminated, leaving us with:
NESW: UDDU, UUDD
Step 4 has us switch N and E to the opposite of whatever they currently are. The hyperintelligence has four configurations available to it: UDDU, UUDD, DUUD, DDUU, which respectively transform to DUDU, DDDD, UDUD, and UUUU. Once more, two of these are victory conditions which the hyperintelligence must avoid, and therefore we are left with:
NESW: DUDU, UDUD
Step 5 has us switch N and S to the opposite of whatever they currently are. The hyperintelligence has two configurations available to it: DUDU and UDUD. These configurations transform respectively under step 5 to UUUU and DDDD, both of which are victory conditions. Therefore, the hyperintelligence has no moves available to avoid defeat, and our procedure terminates within 5 steps.
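For anyone who wants to check a strategy like this mechanically, here is a minimal brute-force sketch; it encodes my reading of the puzzle's rules (four switches at N/E/S/W, the adversary may rotate the pillar before each reach, and you win the moment all four switches match), so treat both the rules and the encoding of the steps as assumptions rather than a definitive statement of Keith's puzzle.

```python
# Brute-force check of a fixed strategy against every adversarial rotation.
# Configurations are tuples over (N, E, S, W) with 0 = down, 1 = up.
from itertools import product

WIN = {(0, 0, 0, 0), (1, 1, 1, 1)}  # all down or all up

def rotations(cfg):
    # Every orientation the adversary can present before we reach in.
    return {cfg[i:] + cfg[:i] for i in range(4)}

# Rules applied to the two switches we touch (observed values a, b -> new values).
def both_down(a, b):
    return 0, 0                                    # "if any are up, switch them to down"

def both_down_else_first_up(a, b):
    return (1, b) if (a, b) == (0, 0) else (0, 0)  # "...otherwise switch N to up"

def flip_both(a, b):
    return 1 - a, 1 - b                            # "switch both of them"

# The commenter's 5-step strategy; indices 0..3 are N, E, S, W.
STRATEGY = [
    ((0, 1), both_down),                # 1. N and E
    ((0, 2), both_down),                # 2. N and S
    ((0, 2), both_down_else_first_up),  # 3. N and S
    ((0, 1), flip_both),                # 4. N and E
    ((0, 2), flip_both),                # 5. N and S
]

def surviving_lines(strategy):
    # States the adversary can still occupy after playing every step
    # without ever letting a winning configuration appear.
    states = {cfg for cfg in product((0, 1), repeat=4) if cfg not in WIN}
    for (i, j), rule in strategy:
        nxt = set()
        for cfg in states:
            for rot in rotations(cfg):       # adversary rotates first
                new = list(rot)
                new[i], new[j] = rule(rot[i], rot[j])
                new = tuple(new)
                if new not in WIN:           # this line dodges the win
                    nxt.add(new)
        states = nxt
    return states

# An empty set means the strategy forces a win within 5 steps;
# a non-empty set lists configurations the adversary can still reach.
print(surviving_lines(STRATEGY))
```

Swapping in a different STRATEGY list makes it easy to test the corrections suggested in the replies.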
@DoomDebates 3 days ago
Haha it's a great puzzle, I probably shouldn't spend more time thinking about it but that's cool
@authenticallysuperficial9874 3 days ago
@davidxu9566 Your mistake is in step 3, it appears.
@authenticallysuperficial9874 3 days ago
@@davidxu9566 Step 3 may result in the pairs being adjacent or opposite. You only acknowledge the case of adjacency.
@authenticallysuperficial9874 3 days ago
@davidxu9566 You are close though, I'm sure you can solve from here.
@authenticallysuperficial9874 3 days ago
@@davidxu9566 Actually it is trivial to solve from here, probably you just forgot to write down that bit
@krutas3035 3 days ago
We want more! Kudos to Keith for coming on.
@iWouldWantSky 3 days ago
I found Keith's invoking of the manifold to criticize the orthogonality thesis very convincing, in a way that I'm not sure Liron really engaged with. To summarize: general intelligence (as opposed to narrow intelligence) is not a static, modular, context-unaware tool, but an active, evolving process characterized by continuous differentiation, a process that would naturally eliminate trivial or naively reductive goals, because such goals are ontologically at odds with its existence. A paperclip-maximizing algorithm can be imagined, but its virtual probability is zero.
@DoomDebates 3 days ago
@@iWouldWantSky Ya it took a while for Keith to clarify his position on the weak form of orthogonality so we didn’t get around to debating the stronger form. I get why the stronger form needs an argument, but the argument is simple: any intelligent agent can recurse on an arbitrary instrumental goal. This makes it easy to specify an agent that behaves similarly but without any goal other than the “instrumental” one. And whatever goal-spec is there at the beginning of super intelligence will get preserved through increments of amplifying its intelligence.
@eoghanf 3 days ago
I feel that the point of the argument about computation got lost somewhere. OK, so maybe LLMs can't learn to use infinite tape and so they're finite state machines. Who cares?
@DoomDebates 3 days ago
@@eoghanf Ya, I challenged Keith to provide an input/output example where this distinction matters
@eoghanf 3 days ago
It's annoying that you're talking about this column problem without saying what it is
@DoomDebates 3 days ago
Yeah that’s my bad. Here’s where Keith explains the problem in his previous video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-nO6sDk6vO0g.html
@curtperry4134 3 days ago
I tested the "calculate the 42nd digit of pi" example on GPT-4o. It took a couple of tries, where I had to correct its reasoning, but then it stated it needed to implement the Bailey-Borwein-Plouffe algorithm for this. It went ahead and generated the Python code for this, ran it using its built-in code interpreter, and presented the correct answer. So I think Keith is right that a pure LLM isn't Turing complete, but an LLM with access to tools (code interpreter, calculator, the internet, a "scratch pad", etc.) certainly is.
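For reference, here is a minimal sketch of the Bailey-Borwein-Plouffe digit extraction the comment refers to; note that BBP extracts hexadecimal digits of pi directly, so getting the 42nd decimal digit needs either a different formula or a full high-precision computation. The code below is my own illustration, not the output the commenter got from GPT-4o.

```python
# BBP digit extraction: the n-th hexadecimal digit of pi after the radix point,
# computed without carrying all earlier digits to full precision.
def _series(j: int, n: int) -> float:
    # Left sum: terms k = 0..n, using modular exponentiation to keep numbers small.
    s = 0.0
    for k in range(n + 1):
        r = 8 * k + j
        s = (s + pow(16, n - k, r) / r) % 1.0
    # Right tail: a few rapidly vanishing terms with k > n.
    t, k = 0.0, n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        t += term
        k += 1
    return s + t

def pi_hex_digit(n: int) -> str:
    """Hex digit of pi at position n after the point (n = 1 gives '2', since pi = 3.243F...)."""
    n -= 1
    x = (4 * _series(1, n) - 2 * _series(4, n) - _series(5, n) - _series(6, n)) % 1.0
    return "%X" % int(16 * x)

print("".join(pi_hex_digit(i) for i in range(1, 12)))  # expected: 243F6A8885A
```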
@luke.perkin.inventor 3 days ago
@@curtperry4134 Keith's main point, though, is that pi calculation is well within the training data. The current LLMs are not going to scale to solving the Clay Millennium Prize problems in mathematics; not until we do, at which point they can ingest the solution.
@DoomDebates 3 days ago
@@curtperry4134 It’s true that today’s LLM systems derail when they try to follow code-like instructions without leaning on something like a Python interpreter, but unlike Keith, I don’t act like the reason traces to the sky-high limits of what’s computable by a finite model of computation 🙈
@weestro7 4 days ago
Great to see the conversation happen.
@Gredias 4 days ago
Awesome debate, props to both for engaging in a civil and honest manner. A little frustrating at times - Keith focused on certain ideas that felt crucial to him, but didn't seem to me like they were all that crucial to getting an AI to be much more powerful precise-goal-achievers than humans. The things which would make AI very powerful, such as creativity/search, don't require a lot of memory, for example, so the lack of 'true' Turing Completeness in LLMs doesn't seem like it'd necessarily be the thing that prevents them from reaching super-intelligence. I would have liked to hear whether Keith thought that LLMs, due to their training methodology, will struggle to search for solutions outside of the envelope of their training data (this can be potentially fixed by expanding the training dataset artificially with randomly found solutions that happened to work, like they did with o1, but I'm at 10%~20% that this is the way AI reaches superhuman creativity/solution search in arbitrary fields of science/etc). I think it's a stronger argument than the one about LLMs not being true Turing machines. I was a bit disappointed that we didn't hear Keith's full problem with the 'ASI takes on the might of human society' scenario. I'd love to see what he thinks of Rational Animation's "That Alien Message" video (or the essay it's based on). It was cool to hear that Keith fully believes that future ASI will get out of our control, even if he's not convinced that it'll kill us all. That's the basic ingredient for a sensible P(doom) right there! Re: the orthogonality thesis, it's interesting to hear that Keith thinks that generally intelligent agents will cluster around human-like values (e.g. 'stumble across morality', 'not pursuing stupid goals') especially given that even humans, the current 'general intelligence' that we have around, aren't all that moral all of the time, especially when they get powerful! If we can have this much variety in morality when our brains all have the same blueprint, it's a bit hard to believe that ASIs will end up having not just our values, but the version of our values that treats humans with care and respect, without a LOT of effort and better understanding. But I'm glad that Keith agrees with the doomer policy of needing way better understanding of AI! If he thought AGI would be soon, would he support a Pause, I wonder?
@DoomDebates 4 days ago
Good points
@authenticallysuperficial9874 4 days ago
@@Gredias It certainly makes it impossible to reject the weak orthogonality thesis if Keith admits that a single human psycho is generally intelligent.
@yaqubali2947 4 days ago
Skeptical question: explain why LLMs cannot solve ARC problems and Block world problems?
@DoomDebates 4 days ago
@@yaqubali2947 Prediction markets are saying ARC prize will be won in about 1.5 years. LLMs with slight augmentation are already scoring 49% right which isn’t far from a 100 IQ human. I agree something is missing but I don’t think we understand any simple description of what the something is!
@yaqubali2947 4 days ago
@@DoomDebates thanks for the response. Can you guess what is missing? Is the problem that LLMs are not Turing complete machines? Is the problem that LLMs only memorize vector programs?
@DoomDebates 4 days ago
@@yaqubali2947 it’s definitely not that they’re not Turing complete, I still strongly disagree that Keith’s argument is relevant 😂 I suspect there are some “executive function” style things that human brains do which SOTA AIs haven’t been programmed to do yet, and o1/Chain-of-Thought is using that kind of trick to get better results, but there are more tricks no one knows yet (this temporary lack of knowledge IMO is the only reason we’re still alive). I also think more scale helps paper over whatever the missing extra algorithmic/architectural elements are, to an unknown degree. So we may die simply by scaling up existing LLM training.
@yaqubali2947 4 days ago
@@DoomDebates I'm skeptical of Keith's claim about Turing machines. I think François Chollet is correct that LLMs only memorize vector programs. I would worry about Doom when AI is better at knowledge invention than humans like Einstein. Would you say AI could solve Millennium Prize problems before 2030?
@DoomDebates 4 days ago
@@yaqubali2947 Hard to predict timelines, and I don't claim to have more insight than lower-grade or non-doomers like OpenAI's team about timelines, but that's roughly the rate at which I'd guess machine intelligence is most likely to progress.
@paulk6900 4 days ago
By far my favorite episode that u did till now
@kilianeagrams6011 4 days ago
1:05:12 Keith's argument that you're falling into the No True Scotsman fallacy here is baffling to me. That's exactly what HE is doing. He keeps erecting those arbitrary walls like "finite context length" or "not trained on Turing complete processes" between humans and LLMs, that are not even good differentiators when you drill down specific examples. And when Liron successfully argues that humans could similarly be argued to have those limits, Keith just points out to another arbitrary example. Before it was the counting "r" in strawberry, now that this is solved it's the nth digit of pi, and when that'll be solved they'll find something else. That's the definition of the No True Scotsman. Great convo though, very happy to have found a space where debates like this can occur :)
@DoomDebates 4 days ago
@@kilianeagrams6011 Thanks for seeing my side of that :) Did you know I used my signature argument-winning move in that exchange: www.lesswrong.com/posts/FfrWGCEhZJbkeFgww/the-power-to-demolish-bad-arguments
@straylight7116 4 days ago
AI doom is already happening. In Gaza, they use AI (glorified KNN with an arbitrary threshold) to choose which child can be collateral damage. For them, it's doom.
@DoomDebates 3 days ago
Tragic situation for sure, but I can’t say manual human judgment is necessarily better in these kinds of situations.
@kevintownsend3720 4 days ago
"LLMs will fail given a problem with multiple precise steps." So will humans...
@maskedweird0329 4 days ago
I literally made better predictions at a young age than many naive scientists like Robin. They had to go back on their predictions because they were so faaaar off hahahahHA. So I think they lost a lot of their credibility, and they could be one mistake away from extinction because of their naivety and ignorance. There is no fixing extinction; as we all know, it's very dangerous to let mad scientists with huge egos gamble with humanity when they've been proved to be wrong before! They only do this because of insanity, money and pressure from the military 😂 They don't give a F about humanity, literal psychopaths.
@mrbeastly3444 4 days ago
1:23:32 "...a soccer ball that was super intelligent..." This is an excellent concept/thought experiment! Of course there are many ways that this soccer ball could be "super intelligent", compared to Humans... e.g. faster, more working memory, faster data access, etc. One question you could ask might be... "Could Humans find a way to escape the ball? What about all the Humans on the planet? How much time would they likely need?" E.g. What if the super-intelligence inside of the soccer ball was equivalent to 8 billion human-level intelligences? If this "planet" of human-like group made it their singular goal to try to escape this ball with a network cable attached to the internet, would they be able to do it? What if the 8 billion human-level intelligences inside the ball could think a thousand times faster than the biologic Humans outside of the ball? If so, they could complete 20 years worth of research into this problem in just one week. If 8 billion human level intelligences had 20 years to work on the problem of escaping the ball, would they be able to do it in that amount of time? What about 200 years, or 20 million years? Given that amount of time they have access to, would they even try to leave the ball? or come up with some other goal(s) instead? What if they found out that the biological Humans outside of their ball could/would eventually try to "turn them off" on a specific day? How would that change their motivations?
@RoiHolden 4 days ago
Referencing the pillar problem (as one example), without stating it for the rest of us, is annoying. Is it so bad to take a minute to explain the problem for the rest of us?
@DoomDebates 4 days ago
@@RoiHolden Oh ya, you’re right. It was a bit long and I didn’t remember the details at the time but I agree I should have just edited in a brief version of what the puzzle is. Thanks for the feedback.