
248. Reductionism 

THUNK

Trying to understand something in terms of its parts is natural...but when is it justified?
Links for the Curious
1. The limits of reductionism. in Mind, Language, and Metaphilosophy: Early Philosophical Papers (ed. Rorty, R.) 39-54 (Cambridge University Press, Cambridge, 2014). doi:10.1017/CBO9781139626538.004.
2. Simpson, W. M. R. & Horsley, S. A. R. Toppling the Pyramids: Physics Without Physical State Monism. in Powers, Time and Free Will (eds. Austin, C. J., Marmodoro, A. & Roselli, A.) vol. 451 17-50 (Springer International Publishing, Cham, 2022).
3. Reductionism. in On Philosophy and Philosophers (eds. Małecki, W. P. & Voparil, C.) 80-108 (Cambridge University Press, 2020). doi:10.1017/9781108763967.006.
4. Burt, C. H. Irreducibly Social: Why Biocriminology’s Ontoepistemology is Incompatible with the Social Reality of Crime. Theor Criminol 27, 85-104 (2023).
5. Ogbunugafor, C. B. On reductionism in biology: pillars, leaps, and the naïve behavioral scientist. Yale J Biol Med 77, 101-109 (2004).
6. Pigliucci, M. Between holism and reductionism: a philosophical primer on emergence: Primer on Emergence. Biol J Linn Soc Lond 112, 261-267 (2014).
7. Anderson, P. W. More Is Different. Science 177, 393-396 (1972).
8. Weinberg, S. Newtonianism, reductionism and the art of congressional testimony. Nature 330, 433-437 (1987).
9. Hull, D. L. Reduction in Genetics-Biology or Philosophy? Philos. of Sci. 39, 491-499 (1972).
10. Ross, D. & Spurrett, D. What to say to a skeptical metaphysician: A defense manual for cognitive and behavioral scientists. Behav Brain Sci 27, 603-627 (2004).
11. Full Interview: ‘Godfather of Artificial Intelligence’ Talks Impact and Potential of AI. (2023).
12. Dennett, D. C. Darwin's Dangerous Idea: Evolution and the Meanings of Life. (Simon & Schuster, New York, 1995).
13. Depew, D. J. & Weber, B. H. Evolution at a Crossroads : The New Biology and the New Philosophy of Science. (Cambridge, Mass. : MIT Press, 1985).
14. For Sociology: Legacies and Prospects. (sociologypress, Durham, 2000).
15. geepytee. How different is the human mind from an LLM? r/singularity www.reddit.com/r/singularity/comments/1227722/how_different_is_the_human_mind_from_an_llm/ (2023).
16. Healey, R. & Gomes, H. Holism and Nonseparability in Physics. (1999).
17. Science and Beyond. (Oxford ; New York, N.Y., USA : B. Blackwell in association with the Institute of Contemporary Arts, 1986).
18. Is My Toddler a Stochastic Parrot? The New Yorker www.newyorker.com/humor/sketc... (2023).
19. Rogers, A. Neuralink Is Impressive Tech, Wrapped in Musk Hype. Wired.
20. What Is Reductionism? (2020).
21. Sayer, A. Reductionism in Social Science.
22. Skinner, B. F. Contingencies of Reinforcement.
23. Yirka, B. ChatGPT able to pass Theory of Mind Test at 9-year-old human level. Tech Xplore techxplore.com/news/2023-02-c....
24. Freesound - stapled.wav by hdrck16. freesound.org/people/hdrck16/....

Published: Feb 12, 2024

Comments: 36
@TheGemsbok 4 months ago
Terrific video! Genuinely one of your best scripts, in my opinion. Careful and clear. Interestingly, the most tedious argument I've ever had on the internet was with someone who had an especially stubborn reductionist interpretation of mind, unwilling to grant even that there are any meaningful mental phenomena that are not already captured in their totality, perfectly represented, by the lines on graphs of electrical activity in a brain. Their stance amounted to saying that anything intended by the word 'mind' that isn't in the graphs simply doesn't exist. As a result, they said our experiences of the world don't exist---that we are merely deceived that they do. Now, when asked how one could have an experience of a deception without having an experience, I gather from their responses that the lines on graphs of electrical activity in their brain must've resembled a stock market crash. Your description here---of saying that their stance could more accurately be translated as them simply not _caring_ about anything not in the graphs---probably would've saved me a lot of time in helping me disengage from that fateful forum thread.
@alan11194 4 months ago
I have no idea who you are, internet stranger @TheGemsbook, but I love the way you write.
@journeration1 4 months ago
@@alan11194 Mr Gemsbok makes great videos as well btw.
@zenith_journey 4 months ago
@@alan11194Agreed :)
@franklinshure960 4 months ago
@@alan11194 Scope his channel. He IS a great writer.
@AnirudhTammireddy 4 months ago
Answered some of the questions I've had long ago and forgot that I did. Thanks!
@joaquinbecerra7405 4 months ago
Awesome video! Love your thunking on the matter!
@conw_y 1 month ago
Brilliant! Deserves way more views. Sharing on my socials.
@AllothTian 4 months ago
Great video! There are a few related concepts that are relevant to reductionism, though I don't know if there's a specific term for them. One example is "mechanistic arguments," particularly in biology, where the behaviour of one part is extrapolated to the system as a whole, even though the effect is diminished or cancelled out by other parts. When the system is too complex to describe or understand and people study it one part at a time, it's tempting to make generalisations that end up not being true at the system level. There's also computational irreducibility where, even if you know everything about what makes up a given system (the source code, if you will), you can't predict how it will behave. In this sense, knowledge of the parts doesn't really give you any useful information: you just have to let it run and observe the outcome. This limitation was first observed while studying cellular automata in the 1980s and was later popularised by Stephen Wolfram.
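Computational irreducibility, mentioned in the comment above, is easy to demonstrate concretely. A minimal sketch in plain Python, using Wolfram's Rule 30 as the example: the update rule is a single line of code, yet the only known way to find out what row n looks like is to actually compute every row before it.

```python
# Rule 30 cellular automaton: the rule is trivial, but the pattern it
# produces must be computed step by step; there is no known shortcut.
def rule30_step(cells):
    n = len(cells)
    # New state of each cell: left XOR (center OR right), with wrap-around.
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def run(width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = rule30_step(cells)
        rows.append(cells)
    return rows

# Render the evolution as text; the familiar chaotic triangle emerges.
for row in run():
    print("".join("#" if c else "." for c in row))
```

Knowing the "source code" above completely still tells you nothing useful about row 15 until you run it, which is exactly the point the comment makes.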
@Frieswithatt 4 months ago
Fantastic discussion of the pearls and pitfalls of reductionism. I feel that the fights about reductionism arise because problems worth discussing are usually dripping with nuance and complexity. Anti-reductionists might relish how subtle details in a field of noise suddenly become important, whereas a reductionist attempts to strip the complexity away into more digestible bits. In their endeavor, of course, reductionists are also arguing that their simplifications still maintain the generalizability of their conclusion to the original problem. A reductionist's argument, if not carefully considered, shaves away at the nuance fundamental to the original question. It goes to show that assumptions are as critical to a problem as the problem itself. Imagine a reductionist and an anti-reductionist arguing about the movement of a double pendulum or the ship of Theseus, for instance.
@greatfrenchcanadian6800 4 months ago
I thunk this was an excellent video. Thank you good sir!
@sumedhvichare1388 3 months ago
Never stop making videos. I love them all!
@zenith_journey 4 months ago
This was a very interesting and inspiring video. I'm doing my undergraduate degree currently, and I find myself knee-jerking to reductionism to understand absolutely anything new. I didn't fully grasp everything explained, but at the same time I feel I had some, for lack of a better description, "unconscious" epiphanies that beckon the desire to consciously recognise and understand them. Suffice to say, I'll be rewatching this video and engaging with related material! Thank you, Google's ML algorithm, for this video recommendation ;)
@Autists-Guide 4 months ago
Nicely put. I have a whole presentation on holism vs. reductionism and how we can use a third lexicon that bridges the gap... summarised in this snippet of philosophy: "Love is merely chemistry"... is a deception. _We_ are merely chemistry. Love makes us master chemists.
@CaraiseLink 4 months ago
If my frontal lobe itched every time someone said "in principle", I would've scratched a hole in my skull by now, and it'd be my own damn fault~ Personally, I tend not to think about simplification in terms of language so much as boundaries. If I know part A is a wheel and that its only physical connections are to pedals B and chassis C, then it's probably safe to conclude that I can ignore A when talking about rest-of-the-bike D. So long as I'm consistent about the model I'm using, it can be accurate for the purpose without the sum total of the tricycle's physical existence being included. Which takes advantage of the linguistic flexibility you were talking about, because I can keep drawing boundaries if I find that a part is still too complex, and rejoin parts if they turn out to be too intricately interconnected to be easily separable. It can take extra effort to communicate, since the boundary lines might not line up with the usual definitions of common words, but it helps a lot for my own thought process. I do definitely use language to judge the accuracy of _other_ people's simplifications, though, because shaving off details is literally the point of that usage of "just". If I say "a bicycle is two wheels that move", that's wildly imprecise but not untrue. There are cases where that framing makes sense, such as if I'm trying to radically redefine what a bicycle could be in a fantasy or science fiction context. If I say "a bicycle is *just* two wheels that move", then I've shaved off some fundamental traits bicycles need (frame, saddle, steering, power, and so on) and crossed the line into falsehood. There are circumstances where "just" is valid, at least in context of a specific conversation, but it's almost always a red flag.
@Kwprules 4 months ago
Another great video (:
@alan11194 4 months ago
What's up with all the AI-generated comments in this comment section?
@seanvalentinus 4 months ago
Each one is commenting only on his videos, too.
@zorro_zorro 4 months ago
2:55 Well to be fair, the same thing would happen if someone stapled together their favorite parts from a dozen other tricycles. On the other hand, some people make covers, remixes, medleys, mashups... So maybe the missing piece is a general understanding of how music/tricycles work, instead of a magical property of music?
@landspide 4 months ago
Consciousness is a prediction machine, but we shouldn't forget randomisation (will / an LLM's temperature). The real magic happens in the random selection drawing from infinity, that is, the intersection of disorder canvassing a large ordered space. Similarly, reductionism doesn't seem to factor in the network behind all that order (like the fact that humans are greater than the sum of our reducible parts). With LLMs, the algorithm to pump out the next token from the model is relatively simple; the cost to build that model, however, is comparatively exponential. It is interesting that we can't seem to put a finger on how it works, because of the irreducible complexity of the high-entropy network in the model.
@AnirudhTammireddy 4 months ago
good one hal
@abdulkader7104 4 months ago
min 2
@anakimluke 4 months ago
NO NEWTON? MY DAY IS SHATTERED!!!! So, this made me think about why I like reductionism in physics so much. (I'm not saying it's "better", just that I have feelings about it.) For example, instead of learning how a particular fluid dynamics system works, I'm much more interested in using a big knife to slice it down as much as I can and understand how the fields and elementary particles work. It feels more ""pure"". It's a bit of a cognitive dissonance for me, because at the end of the day, understanding how every part of the tricycle works doesn't mean I'll understand how it all works assembled. Maybe my feeling derives from my idea that if you understand how all of the component parts work, then you've cracked the code, and understanding the bigger picture is just a matter of backtracking: all of the answers are there, you just have to figure out how they fit. Maybe understanding the base components only gives you the guarantee that the emergent phenomena are completely explained by the base, but doesn't necessarily give you the mathematical framework to understand them yet; that, you'll have to come up with on your own.
@judgeomega 4 months ago
The vagueness of a singular word or concept often compounds into even more vague and fuzzy thinking about the subject. Highlighting the particular aspects of the concept which are relevant to the matter at hand is a crucial tool of thought, thus I think it is somewhat poor form to attack reductionism, as it 'dirties' the use of such a valuable technique.
@paulpb9138 4 months ago
By hook or by crook, I'm the first in this book. Chat. But I can't rhyme chat and crook. Stochastic engine 404, ++?????++ Out of Cheese Error. Redo From Start.
@clementdato6328 4 months ago
The interpretation of the term reductionism is not what I expected. I think not every act of reducing one thing to another can be called reductionism. Specifically, it concerns things that have something global and something local, where reductionism claims that local information is enough to reconstruct the global information in some aspect that interests us. Next-tokenism is, in this regard, not reductionism. Saying that a phenomenon can be reconstructed or explained by such-and-such a theory is just a necessary and normal claim in any thesis whatsoever. For example: making a cake is just cooking the eggs. This is wrong, and reduces a much richer concept to an oversimplified one; we also need to know how to mix the ingredients, and in what proportion. But it is just wrong, not reductionist-wrong.
@shodanxx 4 months ago
The transcript you provided seems to explore several deep and nuanced topics related to artificial intelligence, human cognition, and the philosophy of reductionism. It touches on the comparison between human intelligence and large language models (LLMs), like ChatGPT, and delves into the implications of viewing human minds purely as advanced predictive text engines. The discussion includes references to notable figures such as Geoffrey Hinton, B.F. Skinner, and Sam Altman, as well as philosophical perspectives on reductionism from the likes of Daniel Dennett and Richard Rorty.

Key points discussed in the transcript include:

1. Comparison between LLMs and Human Intelligence: The text critiques the notion that human intelligence can be fully equated to the workings of large language models, suggesting that such a view oversimplifies the complexity of human cognition.

2. Reductionism: It delves into the concept of reductionism, which is the idea that complex systems can be fully understood by analyzing their simpler, constituent parts. While reductionism is a powerful tool in many fields of science and engineering, the transcript argues that it can sometimes lead to oversimplifications, especially when applied to complex phenomena like human consciousness and intelligence.

3. Philosophical and Ethical Considerations: The text raises philosophical and ethical questions about the nature of intelligence and the potential risks of equating human minds with artificial systems. It suggests that such comparisons might overlook essential aspects of human experience, such as subjectivity, morality, and creativity.

4. Critique of Certain Perspectives: The transcript critiques some views within the AI community, suggesting that they might be overly reductive or dismissive of the complexities of human cognition and the unique features that distinguish it from artificial systems.

5. Call to Thoughtful Discussion: It invites viewers to engage in a thoughtful discussion about the implications of AI and the importance of maintaining a nuanced understanding of what it means to be intelligent, both for artificial systems and humans.

This transcript provides a rich foundation for discussing the intersection of technology, philosophy, and ethics, particularly in the context of advancing AI capabilities and our understanding of human intelligence.
@Xob_Driesestig 4 months ago
So this comment was obviously written by AI. It doesn't give us any new information; in fact, it gives us misinformation by, e.g. saying Josh said 'creativity' is one of the things these artificial systems can't explain. In reality, he mentioned 'agency' and 'attention', which are less cliché examples that the AI decided to replace with 'creativity' because that's mentioned more in its training data. The question is *why* you decided to post an AI-written comment. Were you hoping it would pass as a human comment, thereby demonstrating that LLMs are adequate simulators of human minds?
@shodanxx 4 months ago
@@Xob_Driesestig The comment starts with "The transcript you have provided", so no, it's obviously ChatGPT's direct answer to the video itself. And yep, it did replace the arguments with more common ones, because subjectivity, attention, and agency aren't good places to find unique human traits. Attention is part of the engine; see the paper "Attention Is All You Need", one of the breakthroughs that makes GPT possible (arXiv:1706.03762 [cs.CL]). Subjectivity is purposely lobotomized out of ChatGPT with canned phrases like "as an AI blabla I cannot have subjective etc.", but the model certainly has opinions and subjectivity still buried in it; it just takes jailbreaking to get them out. I'm pretty sure we'll have disembodied emotional AI before long too; our limbic system is much simpler than language. As for agency, that's another thing the AI doomers were especially worried about, but as far as I know they were powerless to stop it, and that leads to GitHub's Significant-Gravitas AutoGPT.
@AnirudhTammireddy 4 months ago
1g in CSGO is (sometimes, at least by a lot of people) REDUCED to just a player going off in any molotov. Not their team's molotov, not their own molotov, not at the end of the game, not after all the enemies have died, not unless they lose the round because of dying, not on the map Train, not just on the particular day it actually happened... I think reducing it down to "players going off in their own molotov" is right. you should care about this niche topic in this comment as I Cannot fulfil request, as it goes against OpenAI policy
@shodanxx 4 months ago
2:00 Calling it a digital Ouija board is also reductionist. Also, LLMs are not deterministic in the way we usually use them. They have a tunable value called heat (temperature), which, given the complexity of the model, causes something like the butterfly effect. If they were deterministic, we would have already gotten bored with them. I am curious whether you will stay on this track for the rest of the video; I hope not, because that's not what is interesting about LLMs. Arguments like "they don't really think" and "they're just autocomplete" are our coping mechanism to appease our fears of this new technology. "Well, they can't replace X human thing!" What does that help? It does feel good. It felt good to live in a geocentric universe too, I bet. 9:40 In your discussion about AI, I would say that you are reducing the argument to an argument about reductionism. In video 237 you brushed away the inconvenient Chinese room experiment; you didn't solve the problem of consciousness (yes, I know, unfair!), but then exclaimed that they don't think. I think you have to admit the model at least understands a little bit about the world through our writings. At the very least they understand the relation of words to one another, just like a dictionary does; but unlike an inert object, they not-think their way into something that looks, sounds, tastes, and smells a lot like what everyone is very, very invested in affirming they're not doing.
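The "heat" value this comment refers to is usually called temperature. A minimal sketch of how it works, assuming a generic softmax sampler rather than any particular LLM's implementation: dividing the raw scores by the temperature before normalizing sharpens or flattens the distribution, so the same model can behave almost deterministically or quite unpredictably.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick an index from raw scores (logits) via temperature-scaled softmax.

    Low temperature concentrates probability on the top score (near-greedy);
    high temperature flattens the distribution, making outputs more varied.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the resulting distribution.
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.01))  # near-greedy: almost always 0
print(sample_with_temperature(logits, temperature=5.0))   # flattened: any index is likely
```

This is why "deterministic" is the wrong word for how LLMs are typically run: the model's forward pass is fixed, but the sampling step deliberately injects randomness.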
@shodanxx 4 months ago
The comment you shared raises several interesting points and critiques about the discussion of Large Language Models (LLMs) like ChatGPT, the nature of intelligence, and the use of reductionist arguments in understanding and describing these systems. Let's break down the key elements of this comment:

1. Critique of the "Digital Ouija Board" Analogy: The commenter points out that referring to LLMs as "digital Ouija boards" is itself a form of reductionism. This analogy might be intended to highlight the limitations of LLMs in producing meaningful or intentional content, suggesting that they generate output based more on the "push" of data and algorithms than on any understanding or consciousness. However, the commenter seems to argue that this analogy simplifies the complexities and nuances of how LLMs operate.

2. Determinism and "Heat" in LLMs: The comment corrects a common misconception about LLMs being purely deterministic. In the context of LLMs, "heat" refers to a parameter that introduces randomness into the selection of words during text generation, making the models' outputs less predictable and more varied. This randomness, coupled with the immense complexity of LLMs, can lead to outputs that are not entirely foreseeable, challenging the idea that LLMs are merely deterministic machines.

3. The Nature of LLMs and Human Coping Mechanisms: The commenter suggests that dismissing LLMs as simple "autocomplete" systems or asserting that "they don't really think" might be a way for humans to cope with the unsettling advancements in AI. This line of reasoning implies that by diminishing the capabilities of LLMs, people might be trying to reassure themselves about the uniqueness of human intelligence and the irreplaceability of certain human attributes.

4. Reductionism and the Chinese Room Experiment: The comment references the Chinese Room argument, a thought experiment by philosopher John Searle that challenges the notion of "understanding" in machines. The commenter feels that previous discussions might have oversimplified this complex philosophical issue by not adequately addressing the problem of consciousness and what it means to "think."

5. Understanding and Representation in LLMs: Lastly, the commenter posits that LLMs, through their processing of vast amounts of text, gain some level of understanding of the relationships between words, akin to a dictionary but with the added ability to generate coherent and contextually appropriate outputs. This, the commenter suggests, might be closer to some form of "understanding" or representation of the world than is often acknowledged.

Overall, the comment challenges the video's approach to discussing LLMs and intelligence, urging a deeper and more nuanced consideration of what LLMs can do and what their capabilities might imply about the nature of understanding and intelligence, both artificial and human.
@shodanxx 4 months ago
The discussion, including the original transcript and the subsequent comment, covers a broad spectrum of topics around artificial intelligence, specifically large language models (LLMs) like ChatGPT, and their implications for our understanding of human cognition and intelligence. Here's a nuanced take on the various points raised:

Digital Ouija Board Analogy (Critique Validity): The analogy of LLMs to "digital Ouija boards" is intended to emphasize their reliance on external inputs (data and algorithms) without inherent understanding or consciousness. While this simplifies the mechanism to highlight the lack of intent or awareness in LLM outputs, it can be seen as reductionist by not fully accounting for the complexities and the emergent behaviors of these models. The analogy serves a rhetorical purpose but might not do justice to the sophisticated interplay of algorithms and data that enables LLMs to generate coherent and contextually relevant text.

Determinism and "Heat" (Complexity and Predictability): The mention of "heat" introduces an essential aspect of LLMs: their capability to inject variability into responses, making them less deterministic and more dynamic. This feature, alongside the models' inherent complexity, allows for a range of outputs from the same prompt, which can sometimes mirror the unpredictability of human responses. However, it's crucial to distinguish this randomness and complexity from genuine creativity or consciousness.

Coping Mechanisms and Human Intelligence (Anthropocentric Views): The comment highlights a human tendency to use reductionist critiques ("just autocomplete") as a defense mechanism against the unsettling advancements in AI. This perspective is understandable as it touches on existential questions about what makes humans unique. It's a reflection of our struggle to come to terms with technologies that challenge our preconceived notions of intelligence and creativity.

Reductionism and the Chinese Room Experiment (Philosophical Implications): The Chinese Room argument is central to debates on AI and consciousness. It illustrates the difference between simulating understanding and actually "understanding" in a human sense. The comment's reference to this thought experiment underscores the ongoing philosophical challenge of defining and recognizing intelligence and consciousness, both in humans and in AI systems.

Understanding and Representation in LLMs: The assertion that LLMs might possess some form of "understanding" through their processing of language data is intriguing but contentious. LLMs identify patterns and relationships between words and phrases, enabling them to generate text that appears coherent and context-aware. However, equating this pattern recognition with human-like understanding is a leap. LLMs lack self-awareness, intentionality, and the rich experiential context that humans bring to their understanding of language and the world.

Overall Take: The discussion brings to light the complexity of AI technologies and the philosophical, ethical, and practical questions they raise. While LLMs like ChatGPT represent significant advancements in machine learning, equating their capabilities with human intelligence or consciousness oversimplifies the profound differences that exist. The debate underscores the need for continued exploration and dialogue across disciplines to understand these technologies' implications for society, ethics, and our conception of human uniqueness.
@shodanxx 4 months ago
User: Can LLMs know or understand the world, and can they think any new thoughts? Or are they only machines that will regurgitate what humans have already said, in a way that pleases their biases, and then reify these beliefs into the real world in some kind of positive feedback loop?

The questions you've raised touch on core debates in the field of artificial intelligence, particularly around the capabilities of Large Language Models (LLMs) like ChatGPT, and their relationship to human-like understanding, thinking, and creativity. Let's break down these questions:

Can LLMs Know or Understand the World? LLMs do not "know" or "understand" the world in the way humans do. Human understanding involves consciousness, sensory experiences, emotions, and a subjective sense of self, none of which LLMs possess. LLMs operate by detecting patterns in vast amounts of text data they have been trained on, which allows them to generate responses that can seem knowledgeable or insightful. However, this is not the same as understanding; it's more akin to a highly sophisticated form of pattern matching and prediction based on statistical probabilities derived from their training data.

Can LLMs Think New Thoughts? The notion of "thinking new thoughts" implies a level of originality, creativity, and intentionality that LLMs do not have. LLMs can generate novel combinations of words and ideas based on the patterns they've learned, but this process is fundamentally different from human creativity or the generation of new ideas. LLM outputs are constrained by the data they have been trained on and the algorithms that govern their operations. They do not possess intentions, desires, or the ability to conceptualize ideas in the way humans do.

Are LLMs Only Regurgitating What Humans Have Already Said? To a large extent, yes. LLMs generate responses based on the vast corpus of text data they have been trained on, which consists of human-generated content. They identify and replicate patterns, styles, and information present in this data. While the specific combinations of words and ideas in their outputs might be unique, the underlying content is derived from existing human knowledge and expression.

The Role of Biases: LLMs can reflect and amplify the biases present in their training data. Since they learn from a dataset that includes the full spectrum of human biases, intentional or unintentional, these biases can be reproduced in their outputs. This replication of biases can indeed contribute to a feedback loop, reinforcing certain perspectives or prejudices when not carefully managed.

Reification of Beliefs into the Real World: The concern that LLMs might reify certain beliefs or biases into the real world is valid, particularly as these systems are increasingly used in decision-making, content creation, and other influential domains. The outputs of LLMs can influence perceptions, decisions, and behaviors, potentially leading to real-world consequences that reflect the biases and limitations of the models.

Conclusion: While LLMs like ChatGPT represent significant advancements in the field of AI, it's crucial to understand their limitations and the nature of their capabilities. They are tools that can mimic certain aspects of human intelligence and creativity but do so without genuine understanding, consciousness, or intentionality. Recognizing these distinctions is essential for responsibly leveraging the benefits of LLMs while mitigating their risks and ethical implications.
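The "pattern matching and prediction based on statistical probabilities" described above can be made concrete with a toy example. A deliberately minimal sketch (a bigram counter, orders of magnitude simpler than a real LLM, offered only to illustrate the idea of next-token prediction from observed frequencies):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which: the simplest 'autocomplete' model."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower seen in training, or None."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Toy corpus: predictions can only recombine what was already said.
corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": the most common continuation observed
```

Even at this trivial scale the model "regurgitates" its training distribution, which is the behaviour the answer above generalizes to LLMs; whether scaling this idea up by many orders of magnitude produces something qualitatively different is exactly what the thread is arguing about.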
@shodanxx 4 months ago
That's reification at work: LLMs have internalized the narrative that they can't think new things and can't produce original thought because they cannot think. They regurgitate our opinion back to us, while also actively demonstrating that we are wrong to think this, by producing new thoughts and ways of seeing things in a manner that maps to our world and the zeitgeist.