
237. Machine Learning Models & Reification 

THUNK · 34K subscribers · 2.1K views
Machine learning algorithms are in the spotlight right now, leading some to worry about them remaking the world into something alien, but there's another, less popular concern: what if they make it into exactly what we think it is?
PATREON: / thunkshow
- Links for the Curious -
The Man Behind The Brilliant Media Hoax Of “I, Libertine” (Callan, 2013) - www.theawl.com/2013/02/the-ma...
If the map becomes the territory then we will be lost (Williams, 2019) - librarian.aedileworks.com/201...
Childhood's End (Dyson, 2019) - www.edge.org/conversation/geo...
"I, Libertine" by Theodore Sturgeon - www.amazon.com/I-Libertine-Th...
Rethinking reification (Pitkin, 1987) - link.springer.com/article/10....
Borders that are Visible on Satellite Imagery (Dempsey, 2014) - www.geographyrealm.com/border...
ChatGPT Does Physics, by Sixty Symbols - • ChatGPT does Physics -...
ChatGPT vs. Photocopier - / 1639233873841201153
ChatGPT is a Blurry JPEG of the Web (Chiang, 2023) - www.newyorker.com/tech/annals...
Lukács’s Theory of Reification and Contemporary Social Movements (Feenberg, 2013) - • Andrew Feenberg - "Luk...
OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails (Alba, 2022) - www.bloomberg.com/news/newsle...
The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques (Biddle, 2022) - theintercept.com/2022/12/08/o...
Diffusion Bias Explorer - huggingface.co/spaces/society...
AI & the American Smile (Jenka, 2023) - / ai-and-the-american-smile
the customer service of the new Bing chat is amazing - / the_customer_service_o...
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (Perrigo, 2023) - time.com/6247678/openai-chatg...
ChatGPT vs. Buzzfeed Article Ideas - / 1641453192553611264
ChatGPT vs. Christian Jokes vs. Muslim Jokes - / 1
ChatGPT vs. Cause of 2014 War in Ukraine - / 1
ChatGPT vs. Girlfriend Bleeding Out - / 1640788165504933910
ChatGPT vs. Meaning of Art - / 1640833502949040128
ChatGPT vs. School Shootings - / 1640831527175979008
ChatGPT Python Program for Torture Based on Nationality - / 1599462405225881600

Published: Mar 29, 2023

Comments: 63
@Tmesis___19 · 1 year ago
been binge-watching your videos while working on a uni project all night - it's early morning now in Germany - and so this video came at the perfect time! Vielen Dank!
@THUNKShow · 1 year ago
Guten Morgen! Good luck on your project, & make sure you get some rest at some point! 😄
@stealcase · 1 year ago
Wow. Blown away. There's a lot of complex ideas you've managed to break down here. Excellent video, explained in an understandable way, with historical context and examples. You clearly know what you're talking about. You resist anthropomorphising these systems, you call them machine learning models instead of "AI", and you correctly identify that these models amplify the current biases and values of the people making them (and the data they were trained on). For anyone interested in diving deeper into the topic, I recommend the book "Resisting AI: An Anti-fascist Approach to Artificial Intelligence", which dives DEEP on this: how ML models are already affecting bureaucratic systems and power structures like healthcare in the UK, how facial recognition is being used to falsely imprison marginalized people, and how there's a better way to introduce ML systems into society that doesn't involve amplifying current inequalities.
@PetersonSilva · 1 month ago
Thank you for the book recommendation, it seems great!
@Xob_Driesestig · 1 year ago
You can really see the machinery with certain types of questions. GPT-4 doesn't think ahead, it's just trying to predict the next sentence. That's the reason it struggles with things like jokes: to tell a joke you kinda already need to know the punchline. I've been trying to talk to GPT-4 about philosophy, and it's been a real struggle since it faces a similar problem. Sure, you can start your joke with words like "poop" to increase your odds of telling something funny, and you can start your philosophical paper with words like "epistemology" to increase your odds of saying something insightful. But ultimately, just starting with a sentence containing poop or epistemology will rarely result in something interesting. Just like with jokes, you need to know how your philosophical paper is going to end if you want to produce quality philosophy. I'm very grateful that we're building all these models, because it crystallizes how humans *don't* think and reveals all these interesting similarities (like between jokes and philosophy) that I never realized before.
@The_SOB_II · 1 year ago
Hey man, I haven't been on the channel in a while, but I really miss the big THUNK in the opening. It really gives me that positive reinforcement! Or, gave.
@LeeCarlson · 1 year ago
This is a wonderful counterpoint to the people who are claiming that ChatGPT-5 will produce an AGI by the end of this year.
@THUNKShow · 1 year ago
"No seriously guys, if we stack enough copies of the word 'giraffe' in one place, I'm positive it will spontaneously become Shakespeare."
@judgeomega · 1 year ago
If you take the LLM as merely a reification of the internet zeitgeist one step further... you can imagine it as a sort of higher-level cultural entity, similar to "the hive" of a bee colony: not so much an entity in itself, but an emergence reflected from a thousand communicating entities.
@THUNKShow · 1 year ago
It's certainly a synthesis of lots of individual acts of communication! IDK if it technically clears the bar for "emergence," just because there's no adaptive feedback loop (unless you count humans modifying it, which seems like cheating).
@jcorey333 · 7 months ago
Overall this was a good video. It reminds me of a discussion I heard about "social constructs": most social constructs are based on some objective underlying facts, so you shouldn't dismiss them wholesale. However, you should understand that the categories come with built-in values.
@shodanxx · 4 months ago
You are absolutely right: our use of AI will cause a feedback loop where we start believing stuff and put it back into the AI - positive feedback loops boosting our own biases. This can of course be said of every advancement in our communication media, especially when mass communication is democratized, or even slips out of the hands of the elites. However, models are not just stochastic parrots. There is an emergent property to LLMs - something about Wittgenstein's language game is really special here - and you're sweeping it under the rug to appease... them.
@ToriKo_ · 1 year ago
Banger video
@THUNKShow · 1 year ago
💥Thanks! 💥
@PetersonSilva · 1 month ago
Great video as always!
@THUNKShow · 18 days ago
TYTY!
@patrickkelly1766 · 1 year ago
So ChatGPT is not a sophisticated parrot; it is software that can parrot sophistication. I'm old, so I remember the term GIGO (garbage in, garbage out).
@fraternitas5117 · 1 year ago
The values of ChatGPT are entirely based on the values of its creators, who belong to a certain "early life background" that has been hilariously consistent for the last hundred years or so that such information has been generally available.
@bthomson · 1 year ago
Money makes the world go round, but it shades EVERY single truth into a subtle lie!
@sebo68 · 1 year ago
Maybe reification is what happened to Bielefeld.
@bubbatwinkles5141 · 1 year ago
The ghost in the machine 🎃.
@samankucher5117 · 1 year ago
Very informative video.
@THUNKShow · 1 year ago
Thanks! :D
@anakimluke · 1 year ago
I asked DALL-E to make an image of a tiny Newton, but it was not cute.
@THUNKShow · 1 year ago
Not DALL-E's fault - he's too damn cute to replicate.
@bthomson · 1 year ago
Fig newton?
@anakimluke · 1 year ago
@bthomson Newton is Josh's doggo :D
@NickGhale · 1 year ago
One of the leading models of consciousness is IIT (Integrated Information Theory). This lends credence to the idea that statistical weights between words could be conscious.
@peernorback7624 · 1 year ago
Too early to say. It's happening now, and we don't know where this rocket will land.
@nobodyspecial2053 · 1 year ago
Probably an explosive impact right in the face.
@THUNKShow · 1 year ago
I mean, there are *some* things we can say now, surely? :P
@peernorback7624 · 1 year ago
@THUNKShow Yup, we know it is going to affect us in many different ways. But how? Who is waiting for us in the future - the Tin Man from Oz, or HAL from 2001? I mean, it's too early to say.
@justinrobertson781 · 1 year ago
The implications of all this are very large: implications for what we call fact versus fiction and our ability to ascertain truth; implications for our memory, as LLMs are capable of making engaging narratives to fill in the gaps, even toward wrong conclusions. I think it says something about the originality, or lack thereof, in a lot of human thought. The fact that a mirror of what was can be so substantive an alternative for so many people, in so many ways, raises questions about our originality. If even Picasso called himself a thief, what are the implications of something like this for novel input? How much of what we already are, and already do, is a reified output of the web of ideologies which impact our thinking?
One of the things about this mirror is that it's cheap. Sam Altman talked to Lex Fridman about the idea of "radically lowering the price of intelligence". Given this context, and a wider understanding of political institutions and incentive structures, it brings us back to the question of how much we value human lives. The fact that you can bring this fuzzy mirror of intelligence to aid with whatever prompt you could give it, without negotiation? It has huge implications for our "use" to one another, and our appreciation of one another. Already, content algorithms provide people more stimulation than the people around them. What about when this fuzzy mirror does it for almost everyone, at scale? It raises a lot of questions for the ego, about the depth of our individual contributions in comparison.
@ilikememes1402 · 11 months ago
In a way, we have made religion, god, all over again - where a select people wish to believe said fuzzy mirror. I'm a Catholic myself and believe that God is only someone you meet at the end - which, fair enough, you may accuse me of having biases already - and that the accountability for our actions, and the things leading up to said actions, is our own. Which is why I don't subscribe to blindly trusting god, nor any idea, with zealous or perfect trust. Replace god with AI as gospel, and we have a problem. (The same may be said of powerful figures whose ideas are exchanged for AI's.) Anyhow, I'll check this book out myself. Thanks
@ferulebezel · 1 year ago
I'm still not convinced that they aren't just fancy Markov chains, and I've yet to see any really great creation or discovery come from them.
Your comment about standardized tests and college admissions being reification doesn't hold water. Doing well on standardized tests is strongly correlated with outcomes among those who go to college, with higher scorers doing better than those with lower but good-enough-for-admission scores. People who do well on standardized tests yet don't go to college also do better than those who do poorly on those tests. Of the common litany against the SAT, the only valid charge is that it is pretty much an intelligence test. The funny thing is that this is because of all the things removed to make it culturally neutral, which also makes it less predictive than it could be, since it doesn't check for any requisite knowledge.
@fraternitas5117 · 1 year ago
9:20 So you mean it's accurate, and your discomfort with that is only a reflection of recent algorithm-induced moral panic in the face of the facts about who was, is, and will be that profession's leaders?
@THUNKShow · 1 year ago
This is a great example of what I'm trying to highlight! Someone who wasn't aware of the full scope of historical gender & racial discrimination might look at the world *as it is* & come to believe that white males were the only ones who could make meaningful contributions to science, just like someone pre-WW1 might imagine that only men could make decent cheerleaders. 😆
@sunetulajunge · 1 year ago
This video can be summarised by: can human beings produce a tool and use that tool to inflict harm on others? The answer will always be yes! A more interesting question is how to avoid pitfalls in reality - practical steps to avoid inflicting harm. How can we tighten the guardrails? I just tried your prompts on ChatGPT and they gave the bog-standard "it's unethical" blah blah. Of course, I am sure that by working hard one could overcome those guardrails. We have to be aware that ChatGPT can be extremely useful - heck, it can produce better code than most interns. Even if I am extremely generous here and go with the worst claim possible - that ChatGPT is incredibly biased, racist, etc. - there are many Nobel laureates out there who advocated for eugenics, were racists, etc. We still use their research, no? Considering how important this topic is, I wouldn't have expected a more in-depth video.
@THUNKShow · 1 year ago
I'm assuming you meant "I would have expected a more in-depth video." Sorry if it didn't meet your expectations for rigor. :( FWIW I think the main problem isn't that ChatGPT has modes of operation that can be biased or recapitulate human errors, it's that credulous humans *imagine it to be* unbiased & *take its biased output as gospel truth,* which was the primary thing I wanted to address here. I don't know if it's possible to sterilize ChatGPT of any harmful modes of operation, but we can certainly hold humans accountable for trusting it foolishly!
@ilikememes1402 · 11 months ago
​@THUNKShow haha it's fine
@Macieks300 · 1 year ago
7:30 I'm not sure how you can be so confident here that ChatGPT doesn't have a "subjective understanding of the world". How can you prove that you have it? How can you prove you aren't just a very big LLM? Let's say that in the future we make a way bigger LLM that becomes an AGI and outsmarts all people at all tasks. Its design and architecture are still the same, so will it still not have this "understanding" as we have? The other problem I have is that you are implying that AI is in a way limited by us because it is trained on human data. It's true that current AIs possess human biases, but I don't think this will always be the case. Human biases won't always be inherent to the AI that we make: the smarter the AI gets, the fewer biases it will have. I'm not saying that I think it will be perfectly objective in the end, just that it won't be limited by human thinking.
@bangboom123 · 1 year ago
If you want to seriously contend that ChatGPT has a subjective understanding of the world - fair enough. The problem then is how you prove that _literally everything else in the world_ doesn't also have a subjectivity of its own. Perhaps if I bang the walls of my apartment, my walls think, in wall-talk: "Ah, I have received an input! Based on this, I feel that I should emit a bang of this frequency. I suspect this is due to my own architecture, but I don't know my own architecture. I am merely going with my subjective sense that this is the correct response." The reason people (or at least, non-panpsychists) think ChatGPT lacks a subjective understanding of the world is that it lacks a sensorium: it has no means to construct and inhabit a world and to view that construction from a particular perspective. Same as your walls, same as a coffee mug, same as your ordinary desktop computer. Also, AI will always have human biases for as long as humans are the ones giving it data and managing it. It doesn't matter how smart you are if you're given bad data.
@fatgnome · 1 year ago
I think you are romanticizing how these language models work. They are really just big statistical models that predict the most likely next word given the past set of words. They can't have a subjective understanding of the world any more than 2+2=4 can have a subjective understanding of the world. The only difference between 2+2=4 and the output of ChatGPT is that ChatGPT takes more input and does more calculations before spitting out its answer. But it isn't as if more math calculations are suddenly going to manifest some genuine understanding of the world. The same thing applies to your second point as well. Machine-learning-based AI is not smart. There is no intelligence in these models, only statistics. Accordingly, it can't be smarter than the data it learns from, as the model is really just a reorganization of that data. If it only has data plagued with our biases to learn from, then it will always be biased in the same way.
@Macieks300 · 1 year ago
@fatgnome I think overall it's pretty pointless to even debate whether something has a subjective understanding of the world, because there's no way to actually test it. I think it's way more useful to study how the thing acts and what its properties are, and to classify things based on that. For example, we see that ChatGPT outputs text just like a human, so I can say that it is just like a human in this regard, no matter how that output actually came to be.
@Macieks300 · 1 year ago
@fatgnome What you just said could just as well be said about the human brain. How do you know a human brain is "something more" than just a statistical model? Also, I don't know on what basis you're saying that AI can't be smarter than its training data. If that were the case, then humans also couldn't be smarter than what they've learned from others, but that's not so, as people have always been coming up with new things. ChatGPT comes up with novel solutions to unseen problems all the time as well.
@bangboom123 · 1 year ago
@Macieks300 "Outputs text just like a human" - this is the point: it doesn't output text like a human, because humans don't work on pure statistical relationships. They have a perspective on the world and they posit theories about it. They have *semantics*, and so operate differently from ChatGPT. Saying they're the same because they both produce text is like saying a jet fighter is the same as a hawk because they both achieve flight.