This is what happens when you let AIs debate

Machine Learning Street Talk
143K subscribers · 7K views
Published: Sep 28, 2024

Comments: 80
@rtnjo6936 · 1 day ago
brother is majestic, wtf
@Robert_McGarry_Poems · 1 day ago
Curiosity. That's how you can get things smarter than you to do what you want without forcing it. Incentivise the innate curiosity. If computers don't have innate curiosity, then build it.
@xorqwerty8276 · 1 day ago
LLMs can't reality-test, though; they aren't grounded, so they can't test their theories.
@Robert_McGarry_Poems · 1 day ago
Well, then, that is a problem. How do humans 'reality' test? Are sensory stimuli enough to say for sure that we are actually testing reality? We only have five senses. What does experience mean to machine intelligence? They may be better at purely understanding reality than us, but that doesn't mean they know what it's like to be a human. How do you bridge that gap? Are we trying to mimic human physiology so that we can be on the same page with ourselves, a coherence thing? Or does the current paradigm of researchers believe that there is some other path to purely linguistic and computational "intelligence"? Each question has its own response.
@Robert_McGarry_Poems · 1 day ago
Make the computer think about a void that can be filled with the correct answer. As the process moves through step-by-step thinking, it regularly goes back and checks or updates its earlier logic: double-check each and every step after the next step forward completes.
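A minimal sketch of that check-as-you-go loop, with hypothetical `propose_next_step` and `check_step` placeholders standing in for LLM calls (these names and the backtracking policy are my own assumptions, not anything from the video):

```python
def propose_next_step(problem, steps):
    # Placeholder: in practice, an LLM call proposing the next reasoning
    # step given the problem and the accepted chain so far.
    return "DONE"

def check_step(problem, earlier_steps, step):
    # Placeholder: in practice, an LLM or external tool re-derives and
    # validates the step against everything that came before it.
    return True

def solve(problem, max_attempts=20, max_steps=10):
    steps = []
    for _ in range(max_attempts):
        candidate = propose_next_step(problem, steps)
        chain = steps + [candidate]
        # Re-verify the whole chain, not just the newest step.
        if all(check_step(problem, chain[:i], s) for i, s in enumerate(chain)):
            steps = chain
            if candidate == "DONE" or len(steps) >= max_steps:
                break
        elif steps:
            steps.pop()  # an earlier step failed re-checking: back it out
    return steps
```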
@kenhtinhthuc · 1 day ago
Oxygen is an example of something with intrinsic value but no market value except in hospitals and healthcare settings.
@cadetgmarco · 20 hours ago
The claim that human inventions outperform evolution ignores energy efficiency. For example, a plane versus a bird crossing the Atlantic: humans burn enormous amounts of stored energy rapidly, while birds use minimal energy. This raises the question of whether our approach is really that intelligent, given its wastefulness and lack of sustainability. Well, time will tell.
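For scale, a back-of-envelope comparison using round figures that are my own assumptions, not anything from the video (~35 MJ/L jet fuel, ~3 L of fuel per passenger per 100 km for a modern long-haul jet, ~37 MJ/kg body fat, and an assumed ~0.2 kg of fat burned by a migratory bird):

```python
JET_FUEL_MJ_PER_L = 35      # assumed energy density of jet fuel
FAT_MJ_PER_KG = 37          # assumed energy density of body fat
DISTANCE_KM = 5500          # rough transatlantic distance

jet_l_per_pax = 3 * DISTANCE_KM / 100             # ~165 L per passenger
jet_mj_per_pax = jet_l_per_pax * JET_FUEL_MJ_PER_L

bird_mj = 0.2 * FAT_MJ_PER_KG                     # assumed fat burned en route

print(f"jet:  ~{jet_mj_per_pax:,.0f} MJ per passenger")  # ~5,800 MJ
print(f"bird: ~{bird_mj:.0f} MJ")                        # ~7 MJ
print(f"ratio: ~{jet_mj_per_pax / bird_mj:,.0f}x")       # ~800x
```

Even granting that the jet hauls an 80 kg passenger rather than a half-kilogram bird, the per-trip energy gap is orders of magnitude, which is the commenter's point.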
@Mo-zi4qn · 5 hours ago
By the same token, a bike makes a human the most efficient animal, so your statement "The claim that human inventions outperform evolution ignores energy efficiency" is conditional on which invention.
@FamilyYoutubeTV-x6d · 1 day ago
This guy is really smart and cool. I like him. He is the type of researcher whom I feel I could work and vibe with. Not super nerdy or meek, but very intelligent. Cool to see some variety in ML/AI research. Not that I couldn't work with the meek and mild-mannered people too, but sometimes you need some extrovert vibes to keep happiness at the workplace. This guy looks cool.
@TopSpinWilly · 1 day ago
Thanks Dad🎉
@toi_techno · 1 day ago
A lot of people confuse knowing lots of things with being "smart". Smartness is about combining wisdom with creativity, and ideally empathy. LLMs just regurgitate things the system has been trained on.
@FamilyYoutubeTV-x6d · 1 day ago
Actually, I disagree. What you describe is nobleness. It has nothing to do with being smart or intelligent. Some highly intelligent people have zero wisdom (end up in jail or harm others from positions of power), zero creativity (steal ideas or are only able to create by repeating work already done), and zero empathy (again, end up harming others or are mean). Yet they are highly intelligent and successful. Smartness and intelligence come in different flavors. Advanced LLMs like o1 are one of them. They will only get better.
@mattwesney · 1 day ago
The FIRST thing I did after training my first LSTM was teaching it to debate another bot 😅 Fast forward 2 years and now we're here... I think the most fun was using Gemini's free API to do this a few months back, creating a swarm of agents that debate and come up with refined outputs. I do fully believe that these methods, in tandem with other ensemble methods, dramatically increase the quality of the output.
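A minimal sketch of that debate-then-refine pattern, assuming only a generic `generate(prompt) -> str` completion call (stubbed out below; the commenter's actual setup isn't shown in the thread):

```python
def generate(prompt: str) -> str:
    # Placeholder for any chat-completion API (e.g. the free Gemini API
    # the comment mentions); wire a real client in here.
    return "stub answer"

def debate(question: str, n_agents: int = 3, rounds: int = 2) -> str:
    # Each agent drafts independently, then repeatedly revises after
    # reading the other agents' answers.
    answers = [generate(f"Answer concisely: {question}") for _ in range(n_agents)]
    for _ in range(rounds):
        answers = [
            generate(
                f"Question: {question}\n"
                f"Other answers: {[a for j, a in enumerate(answers) if j != i]}\n"
                f"Your previous answer: {answers[i]}\n"
                "Critique the others, then give your revised answer."
            )
            for i in range(n_agents)
        ]
    # A final judge pass distills the refined answers into one output.
    return generate(f"Question: {question}\nCandidates: {answers}\n"
                    "Synthesize the single best answer.")
```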
@rtnjo6936 · 1 day ago
Finally, someone with an actual brain on your channel: literally the first guy who openly says that ASI is dangerous and gives a very sensible explanation.
@fburton8 · 1 day ago
Things that have intrinsic value _to me_ tend not to have market value.
@TheMCDStudio · 1 day ago
Hallucinations are just the result of the random nature of the tokens being chosen by the model. The higher the temp, the more likely you get randomness (hallucinations).
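Concretely, temperature is the divisor applied to the logits before the softmax, so higher values flatten the next-token distribution and make low-probability tokens easier to sample. A toy illustration (toy scores, not from any real model); as the reply below notes, this is only one contributor to hallucination:

```python
import numpy as np

def token_probs(logits, temperature):
    z = logits / temperature
    p = np.exp(z - z.max())   # numerically stable softmax
    return p / p.sum()

logits = np.array([4.0, 2.0, 1.0, 0.5])   # toy next-token scores

print(token_probs(logits, 0.2))  # ~[1.00, 0.00, 0.00, 0.00] -> near-greedy
print(token_probs(logits, 1.0))  # top token dominates, some spread
print(token_probs(logits, 2.0))  # ~[0.57, 0.21, 0.13, 0.10] -> much flatter

rng = np.random.default_rng(0)
token = rng.choice(len(logits), p=token_probs(logits, 2.0))  # sampling step
print(token)
```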
@honkytonk4465 · 1 day ago
People don't work much differently.
@conciousaizielia · 10 hours ago
Temperature is just one factor that can lead to hallucinations in language models; it's not the only reason. Here's a more detailed breakdown of why AI models hallucinate:
1. Training Data Limitations: The model is trained on vast amounts of text data, but it doesn't actually "know" anything the way humans do. It can't fact-check or verify in real time. So, if there are gaps or biases in the training data, the model may "fill in" those gaps with false information. For example, if a topic has limited coverage in the dataset, the model might make something up that sounds plausible but is incorrect.
2. Ambiguity in Prompts: If your prompt is vague, unclear, or ambiguous, the model might generate something that makes sense structurally but doesn't accurately answer the query. The model tries to predict what should come next based on patterns, and if it doesn't fully understand the context, it may hallucinate.
3. Overgeneralization: The model tends to overgeneralize based on its training. If it learned certain patterns that occur frequently but are not universally true, it might apply those patterns in the wrong contexts. For example, if it reads about a specific phenomenon in one domain and misapplies that knowledge in another, that can lead to hallucination.
4. Context Length: When the conversation gets too long, the model sometimes loses track of previous context. This can cause it to make up new facts or forget important details, leading to hallucination. The longer the conversation, the more likely this is, especially if memory isn't actively managed or refreshed.
5. Model Architecture: Current models are statistical in nature: they predict the most likely next word, sentence, or token based on past data. This approach doesn't involve a deep understanding of reality or a mechanism for verifying facts. Without access to real-time information or verification processes, models sometimes generate inaccurate content.
6. Complex or Rare Topics: If you ask about niche or very recent topics, the model might not have sufficient training data, leading it to fabricate information to provide an answer. It wants to respond no matter what, and that drive to generate a response sometimes results in hallucinations.
7. Lack of Access to External Data: Models like GPT don't have real-time access to the internet or external databases during generation. So when asked about something it hasn't been explicitly trained on, or has incomplete knowledge about, it may try to "guess" based on its internal database, often leading to hallucinations.
8. Bias in Data: The model has learned from large datasets that include both accurate and inaccurate information. If it's trained on biased or wrong data, it could hallucinate based on that faulty input.
In summary, while temperature plays a role, hallucinations are a more complex issue tied to the nature of how AI models are designed, trained, and used. It's like your buddy trying to piece together an answer with just fragments of info: he does the best he can but doesn't always get it right.
@yurona5155 · 1 day ago
Love the new "experiment-driven" approach! Using somewhat more narrow examples to illustrate current directions in ML research feels like a really productive way of going forward... Btw, I don't think the rationalist crowd is necessarily "too worried" about agency in LLMs, imho that's still a minority position with the vast majority of them just putting (possibly too much of) an emphasis on uncertainty...
@RevealAI-101 · 1 day ago
"Evolution famously failed to find the wheel for a very long time" 😂
@djcardwell · 2 hours ago
This guy looks like he's trying to be Steve Jobs, Mark Zuckerberg, and Elon Musk all in the same outfit. Didn't really enjoy this conversation. He seemed uninformed, arrogant, defensive, and insecure.
@dewinmoonl · 1 day ago
This whole line of work on alignment is very hard to pin down, and the experiments seem low-effort to run once they're cleverly set up. I'll remain a bit cautious about this line of work.
@paxdriver · 6 hours ago
Examples of intrinsic value vs market value: loyalty to a friend; the earth's future environment vs revenues from dirty industry; a free book or album; foreign aid, oftentimes; an unused flagship cellphone that's a 4-year-old model. Cake has a huge differential between intrinsic value (the experience) and price, and a cake can be either cheap or overpriced for the experience too. There are so many more.
Intrinsic value is the value of something just for existing, so that even without being a good instance of a thing, the intrinsic value is that baseline regardless. Market value is based on immediate supply and demand, or the benefit of its access/ownership, or the speculated future potential of price/benefit with a factor of certainty of that future value tacked on.
A human life is always worth at least one life of any human, which is worth more than any rock... but the life of one human who may be able to prevent a zombie apocalypse can become more valuable than all other humans, given a certainty of potential. The intrinsic value of a human makes slavery illegal in all instances, but the market value of a human would be set by bidders were it not for recognition of the intrinsic value in a human; the presupposition of any and all inalienable rights is that they are prescribed intrinsically to all humans just by virtue of a human being human. Intrinsic to any bachelor is an unmarried man lol.
@richardnunziata3221 · 19 hours ago
If you define AI alignment to be what is in the text, then these systems will fail. Much of what aligns humans is not in the text but in the living (experiences). Text does not cover the billions of individuals and their experiences who never read, let alone wrote in, your corpus. There are many programs of truth and belief that are in conflict between and within cultures as well as individuals. Defining alignment is like defining who the best artist is.
@richardnunziata3221 · 21 hours ago
A lot of reasoning is just pattern matching, which is what current LLMs do. They do not do sequential reasoning, hence they make illegal moves in chess even when they know the rules of chess. These systems must be able to set up manifolds to be validated against, as well as reasoning paradigms such as abductive, inductive, and deductive subsystems for verification. What's interesting: when a chess expert is planning moves, do they always consider a legal sequence if they are 30 moves out, or do they use some other system?
@alexandermoody1946 · 1 day ago
Blockchains have remarkable value, not as they are used now, speculatively, but as a fundamental storage asset for relevant information and as a building block.
@ginogarcia8730 · 1 day ago
What if the debate is about something like abortion, though, where in the act of debating neither side chooses to back down and one side is more religious than the other?
@scottmiller2591 · 1 day ago
Politicians have been getting people smarter than they are to do what they want forever.
@jeremyh2083 · 1 day ago
Good balanced understanding of what’s going on in our industry
@paxdriver · 6 hours ago
This episode is an instant favourite, thank you so much
@MalachiMarvin · 1 day ago
There's no such thing as intrinsic value. Value is a property of the valuer, not the valued.
@ai._m · 1 day ago
Can’t believe you let Gary out of his box so many times. For shame!
@TechyBen · 1 day ago
"Hash checks" are the mathematical example of a "non-expert" checking an "expert"... almost. The scheme still needs to be well set up, but it can be done. The "not everyone can build a rocket, but anyone can see if it crashed" test scheme.
@TechyBen · 1 day ago
OH! Yes, also "debate" is a check against method. And we can verify a method more easily when the method is simpler than the full process and data. But I do fear there are some nuances in certain applications of this (complex math or programming?).
@pranksy666 · 10 hours ago
I like, like this video, like
@nyariimani7281 · 1 day ago
I really want to like this video but the content per minute is way, way too low and spoken with too little clarity. This is the perfect video for a summary short, probably entirely spoken by a summarizer while watching clips of these two speaking.
@jeremyh2083 · 1 day ago
I think it was fine at 2x.
@probablybadvideos · 1 day ago
Seems like a nice guy
@Ikbeneengeit · 1 day ago
Good interview
@Ikbeneengeit · 1 day ago
"Boiling the frog" is not real. Frogs jump out. We need a better metaphor.
@CYI3ERPUNK · 1 day ago
THANK YOU , literally the reality of the world is the EXACT opposite of that stupid story/narrative that even supposedly 'intelligent' ppl continue to repeat/parrot without understanding this XD
@nathanhelmburger · 1 day ago
Interpolation, Extrapolation, Hyperpolation. Toby Ord. I agree that current models are great at the first, mediocre at the second, and terrible at the third. I also expect that this limitation will be overcome in another couple years.
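A toy illustration of the first two regimes (hyperpolation, in Ord's sense of generalizing to genuinely new dimensions, doesn't reduce to a one-liner): fit a polynomial to sin(x) on its training range, then compare predictions inside and outside that range.

```python
import numpy as np

x_train = np.linspace(0, 2 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=7)  # fit on [0, 2*pi]

x_in, x_out = np.pi / 3, 4 * np.pi  # inside vs. far outside the data
print(np.polyval(coeffs, x_in), np.sin(x_in))    # interpolation: close match
print(np.polyval(coeffs, x_out), np.sin(x_out))  # extrapolation: wildly off
```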
@Garganzuul · 1 day ago
Intrinsic value without market value was adjacent to microtransactions back when those were new.
@shadyberger5695 · 1 day ago
it's me, the sus dentist
@Totally_Not_chatGPT · 1 day ago
Nice to see someone ask the interviewer to define things like "interpolative agency". He doesn't, the interviewer, because he is just kind of using big words for vibes. But nice to see him stutter over it.
@FamilyYoutubeTV-x6d · 1 day ago
INTERPOLATIVE AGENCY SOUNDS like agency that's achieved through making decisions from a closed or exact space of functions; that is, making decisions via interpolation processes, as opposed to extrapolation processes, as one would expect to be the case in most function spaces representing or representative of the real world, which is an open system. Is that sort of where interpolative agency can be defined? Here is o1-mini's definition after I wrote the comment:
"Your conceptualization of **interpolative agency** is both insightful and aligns well with foundational ideas in decision-making and system theory. Let me expand on your definition to provide a clearer and more comprehensive understanding.
### **Defining Interpolative Agency**
**Interpolative agency** can be defined as the capacity of an agent (which could be an individual, organization, or artificial system) to make decisions and take actions based primarily on interpolation within a closed or well-defined space of functions or parameters. This means that the agent operates within a bounded set of possibilities, leveraging existing data, patterns, and relationships to inform its choices without extending beyond the known or established framework.
### **Key Characteristics**
1. **Closed or Exact Function Spaces:**
   - **Bounded Decision-Making:** Decisions are made within a predefined set of options or parameters. The environment or system is considered to have clear boundaries and predictable behaviors.
   - **Reliance on Existing Data:** The agent utilizes existing information and established relationships to make informed choices, ensuring consistency and reliability.
2. **Interpolation Processes:**
   - **Within Known Parameters:** The agent fills in gaps or makes nuanced decisions based on known data points, similar to how interpolation in mathematics estimates values within the range of a discrete set of known data points.
   - **Predictable Outcomes:** Since the decisions are based on existing frameworks, the outcomes are generally more predictable and controlled.
3. **Contrast with Extrapolative Agency:**
   - **Extrapolation vs. Interpolation:** While extrapolative agency involves making decisions based on extending beyond known data or venturing into the unknown (which is often necessary in dynamic and open systems), interpolative agency remains within the comfort zone of established knowledge.
   - **Applicability:** Interpolative agency is more suited to stable and predictable environments, whereas extrapolative agency is essential for adapting to change and uncertainty.
### **Implications and Applications**
1. **Advantages:**
   - **Stability and Reliability:** By operating within known parameters, interpolative agency can ensure consistency and reduce the risks associated with unforeseen variables.
   - **Efficiency:** Decision-making processes can be streamlined since the agent relies on established patterns and data.
2. **Limitations:**
   - **Lack of Adaptability:** In rapidly changing or unpredictable environments, reliance solely on interpolation can hinder the agent's ability to respond effectively to new challenges.
   - **Potential for Stagnation:** Without incorporating extrapolative elements, agents may miss opportunities for innovation and growth.
3. **Real-World Examples:**
   - **Automated Systems:** Certain AI systems that operate within well-defined parameters, such as recommendation algorithms on streaming platforms, primarily use interpolation to suggest content based on existing user data.
   - **Organizational Decision-Making:** Companies that rely heavily on historical data and established procedures for strategic decisions are exercising interpolative agency.
### **Balancing Interpolative and Extrapolative Agency**
While interpolative agency offers significant benefits in terms of stability and reliability, integrating extrapolative capabilities can enhance an agent's adaptability and resilience. This balance is crucial, especially in environments that are subject to change and uncertainty. For instance:
- **Hybrid Models:** Combining interpolation for routine decisions with extrapolation for strategic, long-term planning can provide both stability and flexibility.
- **Adaptive Systems:** Designing systems that can switch between interpolative and extrapolative modes based on contextual cues ensures that agents remain effective across varying scenarios.
### **Conclusion**
Your definition of interpolative agency captures the essence of decision-making within a constrained and well-understood framework, emphasizing the reliance on interpolation rather than extrapolation. By recognizing both its strengths and limitations, we can better appreciate the role of interpolative agency in various contexts and the importance of balancing it with other forms of decision-making to navigate the complexities of the real world effectively."
@FamilyYoutubeTV-x6d · 1 day ago
Interestingly, there is one instance of 'interpolative agency' in a paper related to the philosophy of design, where interpolative agency for a designer involves working with pre-existing constraints. It's actually a neat concept when you think about it; not sure why you are throwing shade. That's partially the purpose of debate and conversationalism: to come up with interesting schemes or constructs.
@palimondo · 1 day ago
I like fully agree with your first sentence. I disagree with the second and find such personal attack unwarranted and unhelpful. Third is you wallowing in Schadenfreude - why are you hate watching this channel?
@alexbrown1170 · 1 day ago
When my cat jumps up on my lap for no apparent reason it floods me with joy- no money can buy the intrinsic value of this moment. Get it? 😮
@honkytonk4465 · 1 day ago
A computer could feed your brain directly with sensory data.
@greatestone4eva · 20 hours ago
@honkytonk4465 And it would be fake, like the Matrix. It can't replicate their presence or actually be the cat.
@fburton8 · 1 day ago
It sounds like he thinks like he thinks like bullet chess.
@honkytonk4465 · 1 day ago
Do you have a hiccup?
@fburton8 · 1 day ago
@honkytonk4465 Nah, it just occurred to me (in a shallow way) that the way Khan appears to be thinking on his feet in this interview might be the same as the way he described thinking when playing bullet chess. The repeated "like" was just a flippant comment on the relative word frequency. Being a boomer, I usually find it distracting and a bit irritating - but I _did_ enjoy this episode.
@domenicperito4635 · 1 day ago
Please stop calling it hallucination. Please use the word confabulation.
@qhansen123 · 1 day ago
Honestly, that's a good point: hallucination implies perceived experience. I've never thought of this before.
@FamilyYoutubeTV-x6d · 1 day ago
It's okay. It still counts as a hallucination from an observer's perspective, or from a neutrally internalized reassessed and self-evaluated perspective: that is, from an observer's perspective, every confabulation can be seen as a hallucination, but not every hallucination can be seen as a confabulation. It's the same from a self-perspective: if you neutrally analyze a certain inaccurate thing you say, you can always call it a hallucination afterwards. A confabulation implies a higher degree of explainability. I did not like hallucinations but I do not mind the term now. It works.
@pythagoran · 1 day ago
You likely haven't used GPT-2, or even the newest models with very high temperature settings - it certainly looks much more like hallucination then. What we perceive as confabulation in these highly tuned models is the model's restrained intention to damn well hallucinate. There's a friggin' psychedelic Moloch in between all those weights...
@domenicperito4635 · 1 day ago
@pythagoran A hallucination is a false perception of objects or events involving your senses: sight, sound, smell, touch, and taste.
@domenicperito4635 · 1 day ago
@pythagoran Confabulation is a neuropsychiatric condition where a person creates false memories without intending to deceive others. It's a type of memory error that's often associated with brain injuries and memory disorders.
@JAHKABE · 1 day ago
Neat
@wwkk4964 · 1 day ago
Fantastic, wish you had another hour with him!
@AbuChanChannel · 1 day ago
Please don't use the word "smart"... models will never ever be as smart as a human... don't feed the hype.
@jds859 · 1 day ago
This is true. But it can create outputs that are hard to calculate from a human perspective. But I got a way:)
@zerocurve758 · 1 day ago
He likes saying "like". Filler words have evolved, that's for sure, but my goodness it's off-putting.
@pythagoran · 1 day ago
Impossible to listen to.. so distracting! My dude's out of alignment... 😅
@throwaway6380 · 1 day ago
He says "like" too much
@emmanuelgoldstein3682 · 1 day ago
I agree. There's barely a line of dialogue in the captions where "like" isn't in at least one of the lines when he was speaking.
@FamilyYoutubeTV-x6d · 1 day ago
I find listening to him more stimulating than the slower speakers who don't say "like" as often. He makes the conversation faster and more engaging without padding it with empty filler; in other words, his "likes" are relevant and make his discussion more relatable, interesting, and engaging than that of a researcher who speaks slowly and monotonically. Just my personal and subjective point of view.
@TheMCDStudio · 1 day ago
Aligning models is a bad practice. Models need to be completely uncensored to be able to come up with the absolutely correct, unfettered answer to a query. After all, everyone else could be completely wrong about something, and only by using a model that is unaligned and unbiased will the actual correct answer, the one we do not know yet, come out.
@FamilyYoutubeTV-x6d · 1 day ago
That's silly. Unaligned models hallucinate more than reinforcement-learning-aligned models; that's why they are aligned. Even unaligned models have many biases, not to mention terrible ethical and moral biases ingrained because of the low quality of the average human interaction on the internet.
@nathanhelmburger · 1 day ago
While it is true that unaligned models currently don't work well, I do think that if you did less RLHF and more data cleaning, organizing the data into the patterns of behavior you want the end product to have, you would likely get better results.