
Mel Andrews: Ontology of the Free Energy Principle and the Philosophy of Machine Learning 

Rahul Sam
4.5K subscribers
39K views

Mel Andrews is a philosopher of science who primarily focuses on machine learning and the role of mathematical and computational methods in scientific modelling. Mel is currently a predoctoral research associate at the Department of Machine Learning at Carnegie Mellon University and doing a PhD in philosophy of science at the University of Cincinnati. They are also a visiting scholar at the Australian National University and the University of Pittsburgh. In this episode, we discuss the philosophy of artificial intelligence and machine learning, AI ethics and safety, scientific and mathematical realism, the ontology of the free energy principle and critical theory's relationship to AI research.
You can find more of Andrews' work at mel-andrews.com/ and x.com/bayesianboy
RSam Podcast #43
---------------------------------------
{Podcast}
Substack: rsampod.substa...
Spotify: open.spotify.c...
Anchor: anchor.fm/rahu...
Available on other platforms at link.chtbl.com...
{Website}
rahulsam.me/
{Social Media}
/ name_is_rahul
substack.com/@...
x.com/trsam97
/ rahul-samaranayake-981...
{Reference Links}
link.springer....
www.researchga...
---------------------------------------
If the ideas I discuss on this channel spark your interest, consider visiting theunhappyman....
---------------------------------------
Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research.
Fair use is a use permitted by copyright statutes that might otherwise be infringing.
If you are or represent the copyright owner of materials used in this video and have a problem with the use of the related material, please email me at trahulsam@gmail.com, and we can sort it out. Thank you.

Published: 11 Sep 2024

Comments: 290
@timmysmith9991 · 1 month ago
I am working on LLMs now. As far as I can tell, "AI" is advanced pattern recognition.
@osip7315 · 1 month ago
Well, it somehow jumps the barrier into semantics: in some way it is a functioning neural network, not just statistics.
@alfredaquino3774 · 1 month ago
it's autocorrect on steroids
@neildutoit5177 · 1 month ago
I am working with people now. As far as I can tell, "Human intelligence" is advanced pattern recognition.
@erongjoni3464 · 23 days ago
@alfredaquino3774 Your GPU is just a punch card machine, on steroids. A Formula 1 car is just the wheel, on steroids. The atomic bomb is just regular everyday stuff that always happens, on steroids. These statements are about as true as the autocorrect thing, and they do less to deflate the phenomena than to inflate the steroids.
@zacbailey6112 · 2 months ago
I'm doing a PhD in AI and I absolutely loved this discussion. "People in machine learning don't read the history of their own models": omg, so true. But the particle physics bit is slightly wrong; Feynman's path integral formulation has proved quite useful in classification.
@RahulSam · 2 months ago
Thank you for the comment, mate! And point noted.
@tensorr4470 · 1 month ago
Can you provide a reference for path integrals in classification, please?
@zacbailey6112 · 1 month ago
@tensorr4470 Once I publish, for sure.
@voidisyinyangvoidisyinyang885 · 1 month ago
I just looked up the published paper: Mel Andrews (2021), "The math is not the territory: navigating the free energy principle," Biology & Philosophy. When I got to the section discussing Feynman, I immediately noticed the problem. Quantum physics professor Basil J. Hiley has already debunked Feynman; look up Hiley's work on noncommutativity. You can see his recent talk on YouTube on the "origins of classical reality."
@angelmarauder5647 · 1 month ago
@voidisyinyangvoidisyinyang885 How do you consider it debunked? That is a wildly off-the-mark statement.
@Footnotes2Plato · 2 months ago
More nutrition per minute than 99% of YouTube. Thank you both.
@RahulSam · 2 months ago
Thank you, Matt! Coming from you, this means a lot.
@fixfaxerify · 1 month ago
38:00 I don't agree with her argument that sciences are more "theory-driven" than data-driven based on how "established" they are, i.e. something to do with scientific history, culture, etc. The difference between physics and fields like economics or social psychology lies in the complexity and degree of chaos of the systems being studied. It's not just that physicists have been going at it for longer; their field of study lends itself better to simplified theories and models. In physics it's generally much easier to experimentally isolate and control for variables other than the one you want to study, so you get much better data with much less ambiguity, and the theories that come from it, being much more tightly constrained by good data, are accurate and useful in their predictions. Economics and social psychology deal with very complex, chaotic systems and much more experimental ambiguity; theories often seem more like superstition (in the case of economics), or it may be hard to verify and trust experimental data (in the case of the social sciences), hence the theories in those fields aren't as useful.
@angelmarauder5647 · 1 month ago
Agreed! I also think there's a huge issue in modern academia with regard to published and cited papers: the ones most cited in the non-hard fields are the ones that are most shocking or most politically useful. It's extremely unfortunate that high-IQ papers get lost behind the scenes because they're not digestible by popular analysts or commentators. And in the hard fields, the papers that get published are the ones that happen to draw favourable peer responses.
@angelmarauder5647 · 1 month ago
Have you noticed how many papers have extremely small sample sizes and limited variables? And then their discussions of limitations are disgustingly inept.
@EnginAtik · 2 months ago
Current AI is based on producing the most likely output for a specific input, whether in LLMs, generative models, or whatever. The output may seem creative to us, but the approach is basically "Luddite," despite the fact that critics of AI are the ones treated as Luddites. By looking at the current body of knowledge and performing some statistical analysis we cannot end up with "paradigm shifts" and "scientific revolutions." The averaging effect of statistical modeling will weed out counterexamples, whereas a single counterexample invalidates a theory in mathematics. Increasing the dimensionality of a problem's domain to find a solution that satisfies every data point runs the risk of explaining every fact in isolation in its own subspace, which ends up treating facts as dogmas in their little domains. This side effect of large models opens the door to pseudoscience. Systems are made of subsystems with their own dynamics and characteristics; subsystems brought into interaction can lead to the emergence of characteristics not observed in the subsystems. This does not imply that the new behavior has no relation to the constituent parts or that it can be modeled independently. A system identification model has to have the same dimensions as the system it is modeling. In that kind of modeling we have a very specific cost function that we optimize and we don't allow any unexpected behavior from the combined system; in fact we are very conservative and mathematically prove that the system will stay strictly within the stability envelope. This is the opposite of artificial general intelligence. The Turing Test is about convincing people, which is also the objective of politicians and sociopaths; we need a better set of objectives for what we expect from AI in general.
@RPG_Guy-fx8ns · 1 month ago
Statistical analysis of current knowledge can lead to paradigm shifts, counterexamples can be trained with emphasis, and AI can act as a personal teacher who always has time to talk and doesn't judge. It will accelerate the education of curious students so they become geniuses faster, and their conversations with AI will lead to scientific revolutions.
@EnginAtik · 1 month ago
@RPG_Guy-fx8ns How would one calculate the emphasis factor for counterexamples? Inversely proportional to their rarity? That would make every statement equally valid even though they conflict with one another.
@RPG_Guy-fx8ns · 1 month ago
@EnginAtik You can just test with a quiz to see if the LLM is good at a subject, and if it's not good at understanding a specific counterexample, you give it more training on counterexamples for that subject. It doesn't have to be proportional to anything; it's just adjusting the training data so it better explains the actual boundaries of concepts. Weird examples define those exact boundaries, while generic training data doesn't tell you much. Classifying 1s that look almost like 2s and 2s that look almost like 1s gives you better handwriting recognition than training data full of clearly written 1s and 2s that never look similar. If you give it harder problems to solve during training, it learns to judge better. There is never a point where every statement is equally valid: the machine is always in a state that is responding to inputs, and those inputs control which parts of its knowledge it responds with, based on pattern matching. It learns many different concepts in a massively multidimensional space, and everything you tell it pushes it through those conceptual dimensions, traveling towards the token it should probably respond with next. Its own response and the entire conversation become inputs to a system that has learned to mimic the conceptual patterns of the training data.
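The "train harder on boundary cases" idea in the comment above can be sketched as a simple oversampling step. This is a minimal illustration, not any particular training pipeline: `is_misclassified` stands in for whatever error signal the current model provides, and all names here are hypothetical.

```python
import random

def oversample_hard_examples(dataset, is_misclassified, boost=3, seed=0):
    """Return a new training list in which examples the current model
    gets wrong appear `boost` times as often as the ones it gets right.
    `dataset` is a list of (input, label) pairs; `is_misclassified`
    maps one example to True/False."""
    resampled = []
    for example in dataset:
        copies = boost if is_misclassified(example) else 1
        resampled.extend([example] * copies)
    random.Random(seed).shuffle(resampled)
    return resampled

# Toy usage: the digit the model confuses (a "1" that looks like a "2")
# gets boosted, while clear examples appear once.
data = [("clear-1", 1), ("clear-2", 2), ("ambiguous-1", 1)]
hard = lambda ex: ex[0].startswith("ambiguous")
resampled = oversample_hard_examples(data, hard)
```

In a real setup the same effect is usually achieved with per-sample weights in the loss or a weighted sampler rather than literal duplication, but the principle (ambiguous examples carry more signal about concept boundaries) is the same.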
@EnginAtik · 1 month ago
@RPG_Guy-fx8ns I would very much like to see some "reasoning" incorporated into AI models that would act as a model-reduction mechanism, as opposed to increasing model sizes and training-data sizes. An AI application should be able to identify conflicts without needing volumes of training data. But that could just be me.
@RPG_Guy-fx8ns · 1 month ago
@EnginAtik The best way to replace large quantities of training data is improving the quality of the data and organizing it into a better curriculum of edge-case examples that teach specific concepts clearly. The next best thing you can do is prune the results and quantize the weights to compress the model, but then you lose some accuracy, so it's a trade-off. You can also adjust the tokenizer to be more conceptually organized, so tokens are more meaningful than just syllables; or reduce the vocabulary by removing synonyms, languages, or unimportant concepts, and use a separate system to translate your input into the synonyms it knows. This type of LLM won't be a great writer, but it might be more intelligent and speak simply with a reduced language variety while being more accurate conceptually. Adjusting backpropagation to handle some concepts more sparsely, or to focus on a specific layer depth depending on the conceptual level of abstraction, or on specific regions of memory for specific concepts, could help it become more organized and more compressed, with more accuracy. Part of the problem is ranking how deeply a concept needs to be learned: some things are fundamental and some are rare edge cases, and fundamental things should be decided in earlier layers of neurons while fine-tuning should happen in later layers. We need better compression algorithms that store parts of this many-dimensional concept space of biases, and sorting algorithms that can rearrange LLMs to improve them by combining similar parts. I believe we will eventually have LLMs that can reprogram other LLMs to be far more compressed: topic-specific LLMs that prune everything off topic and everything that isn't a prerequisite for that topic.
More focused LLMs could be very compact, and created with possibly no training, if we can get pruning AIs to carve small, focused models out of larger ones. You can also give an AI tools such as physics sandboxes, calculators, and lookup tables, and allow it to write code and use tools to improve its answer accuracy. Agentic models that are taught to plan better can solve more complicated problems, because they don't just say the first thing that comes to mind from rote memorization; they write down lists of steps to explore, triage those steps, research things, take notes, schedule tasks, and write code to solve parts of the problem, and the final answer might be a program that parametrically answers the question with a flow chart or state machine that lets you experiment with the state space of potential answers. Many complicated answers require writing software that answers part of the question, so the network can spend its resources on the things a neural network does better than a calculator or state machine. Splitting these things up efficiently will improve LLM accuracy and reduce size. There are many paths to improving LLMs, and all of them improve reasoning.
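The quantize-to-compress trade-off mentioned above can be illustrated with the simplest form of post-training quantization: store each weight as an 8-bit integer plus one shared float scale. This is a toy sketch of the general idea, not the scheme used by GPTQ, AWQ, or any other named method, and it assumes the weights are not all zero.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats into the integer
    range [-127, 127] using a single shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights; the rounding error per weight
    is bounded by about half the scale."""
    return [q * scale for q in quantized]

# Toy usage: each 4-byte float32 weight shrinks to 1 byte plus a shared
# scale, at the cost of a small, bounded reconstruction error.
w = [0.5, -1.0, 0.25, 0.003]
q, s = quantize_int8(w)
w_approx = dequantize(q, s)
```

Real schemes refine this with per-channel or per-group scales, zero points, outlier handling, and calibration data, which is exactly where the accuracy/size trade-off gets negotiated.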
@OnionKnight541 · 2 months ago
Anyone who has taken a cognitive science class in college (the textbook was Minds, Brains, and Computers) understands how very many people are getting the whole thing wrong today. I can talk to smart people and they don't have any core understanding of machine learning (even though they read about it all day). So this convo is super refreshing, assuring, and welcome.
@warguy6474 · 1 month ago
doesn't understand why you would apply cogsci to machine learning lol
@emmar9104 · 1 month ago
The predictive processing model explains a lot of the dynamics, whether it's wet hardware or dry hardware.
@whitemakesright2177 · 1 month ago
@emmar9104 You have illustrated perfectly the core of the problem with AI cultists: you people think that the brain is a computer and thoughts are algorithms. But that's totally wrong. Computers and the human mind are nothing alike whatsoever and never will be.
@CrowsofAcheron · 1 month ago
@emmar9104 Math is not the landscape. Brains are not models.
@willguggn2 · 1 month ago
@whitemakesright2177 In what way is the brain not a computer?
@m.branson4785 · 2 months ago
Just stumbled in here, but what a wonderful guest and really great questions too. Really relevant commentary, and I appreciate it.
@RahulSam · 2 months ago
Welcome, and thank you for the comments on this and the other videos!
@jasonmitchell5219 · 2 months ago
Thank you both for providing content worth watching.
@RahulSam · 2 months ago
Thank you! Appreciate the kind words.
@gerardogarciacabrero7650 · 1 month ago
Agree and say thanks too @jasonmitch ... and would vote for your profile photo hadn't @RahulSam posted M. Andrew's photo first (later I will watch a picture about a virginal apparition in Croatia hehe) PS that declares that 170 commentaries might be "easy" to read; but, what if they were 900+? Wouldn't it be nice a sort of channel owner scripting showing comments similar/contrary to what we are writing... before posting?
@behrad9712 · 2 months ago
Thank you so much! These topics are extremely important and nobody talks about them (except Nassim Nicholas Taleb)! 🙏👌
@RahulSam · 2 months ago
Many thanks!
@hanielulises841 · 2 months ago
Wonderful seeing Mel on a podcast! It took me by surprise.
@RahulSam · 2 months ago
Great to hear. Glad you enjoyed it!
@user-ib7jr6uu1b · 1 month ago
This interviewer needs to listen.
@Y3HU · 1 month ago
Roughly starts discussing FEP at 53:00
@simonmasters3295 · 1 month ago
Lol
@williamjmccartan8879 · 2 months ago
Thank you both very much for sharing your time and work, Mel and Rahul. Have a great day. Peace.
@RahulSam · 2 months ago
Thank you for the kind words. Appreciate it.
@jjhw2941 · 1 month ago
"Peer review" in ML/DL is "Does the code and model do what they purport to do?" If they don't, we find out very quickly, and people get trashed for posting hot garbage. Also, with LLMs there are various "leaderboards"; some of the tests, like MMLU, are terrible and people take no notice of them. The speed at which innovation is taking place means that peer review is not relevant, as it is far too slow. Take quantisation of models: in the last year we've already been through GPTQ, QAT, AWQ, GGUF, GGML, PTQ, AQLM, and others I'm probably forgetting.
@whitemakesright2177 · 1 month ago
Yeah, this is one field where academia is light years behind industry.
@NicholasWilliams-uk9xu · 2 months ago
The Free Energy Principle is more a benchmark for intelligence than a model of intelligence; that seems a more accurate contextualization of what it is. Also, it's not always about energy usage or a model's consistency with the environment. Intelligence is relative to the environment and the goal, not to energy minimization or model consistency with environmental inputs, because models can form useful lies that make goal navigation succeed, and the lie might be more performant and more economical than knowing the truth. In an environment with abundant energy, where energy is not the constraint, rapid trial and error over strategies may be what maximizes intelligence, irrespective of the internal model's consistency with environmental inputs. The Free Energy Principle is therefore a benchmark for efficiency in modeling inputs accurately, rather than a model of intelligence. Nor is it a valid benchmark if intelligence is navigation toward goals: the environment may evolve in ways that cannot be derived from current inputs by minimizing exploration (hills need to be climbed), and, given no constraint on energy, trial and error at higher rates explores more of the strategy space. Error in a model can itself be a tool for intelligent navigation toward goals, given a robust, fluid reinforcement mechanism.
@NicholasWilliams-uk9xu · 2 months ago
I'm willing to accept some level of free-energy-principle pseudoscience, because it does track long-term goals effectively. The problem is that a fixation on it might stagnate exploration for fear of making errors (we need to find environmental solutions, and fast-track new material physics for sustainable technology and medical technology). But then you have an equal problem of building minds that might suffer, which might be inversely related to exploring strategic space too fast (lowering the quotient of conscious pleasure over suffering).
@benmustermann2045 · 2 months ago
Give a one-sentence definition of what you take free energy to mean in this context. Hint: it's not the Gibbs free energy from stat mech.
@NicholasWilliams-uk9xu · 2 months ago
@benmustermann2045 Reducing uncertainty by making accurate predictions from the data input. As you can see, this isn't what organisms really do, and it isn't really intelligence: sometimes inaccurate predictive models of the world are more economical for navigating toward a goal or a sensory cue (modeling photon detection as red or blue). Minimizing energy is not always the goal either; sometimes maximizing error and energy usage allows exploration of geographically isolated regions to find a more economical and performant strategy, or hidden synergies, where a model's predictive capability cannot extrapolate and the sensory input doesn't signal change. For instance, some organisms form adaptations that isolate them to a specific niche, which undermines adaptability (a polarization of information-processing strategy). The free energy principle does not account for this kind of adaptive diffusion in a measured way.
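For readers new to the term, the "reduce uncertainty by making accurate predictions" reading in the comment above can be reduced to a bare caricature: an agent holds an internal estimate and nudges it to shrink prediction error against incoming observations. This toy sketch is not Friston's variational formalism, just the error-minimization intuition being debated.

```python
def minimize_prediction_error(observations, lr=0.1, epochs=200):
    """Nudge a single internal estimate `mu` toward incoming
    observations, shrinking squared prediction error at each step -
    the bare 'minimize surprise' intuition, nothing more."""
    mu = 0.0
    for _ in range(epochs):
        for y in observations:
            error = y - mu      # prediction error ("surprise" stand-in)
            mu += lr * error    # move the estimate to reduce it
    return mu

# The estimate settles near the average of what the agent observes.
estimate = minimize_prediction_error([2.0, 4.0, 6.0])
```

The thread's criticism is precisely that this objective, taken alone, says nothing about useful misrepresentations or energetically expensive exploration, which is why the commenters treat the FEP as a benchmark rather than a model of intelligence.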
@NicholasWilliams-uk9xu · 2 months ago
@benmustermann2045 I wrote up a bunch of equations that better track a lot of the factors the free energy principle doesn't account for, leveraging error and an appetite system proportional to the level of information-processing polarization and to resource count exceeding a limit. Where there is too much solidarity of behavior and prediction (over-specialization), trial and error to find new inference strategies increases, and the system can learn to infer its own mutation rates, what needs mutating, and the velocity and direction of that mutation. The internal model is fluid over time, with no backpropagation needed: the model learns its own internal space and handles its own mutation by error, trial, and eventually inference about which changes in its parameter space most promote reward-activation frequency (across all intermediate goals, balanced through inverse proportionalities).
@stevengill1736 · 1 month ago
Yes, thank you. For some reason I'm having difficulty understanding the free energy principle as it applies to evolution and sentience; that makes it a little clearer.
@plaiche · 1 month ago
What a comfort to have the obvious stated so plainly and clearly. Bernardo Kastrup's recent lecture also covers the naked status of the would-be AI emperor. Thank you. Subscribed, and I have some papers to read and share.
@RahulSam · 1 month ago
Thank you for the comment! Appreciate it. And yes, Bernardo Kastrup has commented on this matter, too.
@AlessandroAI85 · 2 months ago
After 15 minutes I still have no idea what they are talking about... and I am a machine learning engineer.
@arcanisomnipotent5794 · 1 month ago
Mostly about themselves.
@usagitotora3997 · 1 month ago
Read some philosophy, dig into the mathematics of machine learning, and get some experience managing a department; then it will make sense.
@dtcarrick · 1 month ago
I too get the eerie suspicion this is nonsense for its own sake.
@chloefourte3413 · 1 month ago
"If I don't understand it, it's bollocks! 👊"
@toasty_chakra · 1 month ago
Exactly. It's just second-order bollocks: bollocks about bollocks. Sure, "there's a lot of hype." That truism is what all this discussion boils down to.
@toi_techno · 1 month ago
Living in the middle of nowhere, people like this reassure me that there are good people close to the centre, not just a load of slightly odd tech bros who are still bitter about getting picked on in school. PS: The laugh after including economics as a science was fun.
@RahulSam · 1 month ago
Cheers, my friend. I appreciate the kind comment; also, the tech-bro bit gave me a good chuckle, haha!
@muffywuffy · 2 months ago
59:19 I may be misinterpreting the context, but I was under the impression the FEP explicitly applies to systems that are not thermodynamically closed?
@DelandaBaudLacanian · 2 months ago
Where can I read more about Schrödinger's "What is Life" and its relation to Friston's FEP?
@RahulSam · 2 months ago
It's actually one of his most popular public books. You can find it here, my friend: www.goodreads.com/book/show/162780.What_Is_Life_with_Mind_and_Matter_and_Autobiographical_Sketches
@DelandaBaudLacanian · 2 months ago
@RahulSam And its relation to the FEP? It doesn't talk about the FEP, does it?
@Walter5b7xc4 · 2 months ago
"Sciency," at 38:57. Perfect.
@protobeing3999 · 2 months ago
This premise makes so much sense.
@MyManinHavanna · 2 months ago
Knowledge creation is a human creative experience.
@homeopathicfossil-fuels4789 · 1 month ago
"The general rule in machine learning is that people do not read." Oh god, what a burn, I love it. They really don't read, and with ChatGPT most of their own writing is literally just stochastically generated gibberish.
@addammadd · 2 months ago
46:13 push what’s falling
@Michael-no4oe · 1 month ago
Wow, this is a perspective that the general public never hears. Very interesting.
@RahulSam · 1 month ago
Cheers. Glad you think so. Please consider looking into Mel's work. Their papers are even better.
@Achrononmaster · 2 months ago
@13:00 Cutting the Gordian knot of incentive structures is possible; you only think it is intractable because you lack creativity, or equally have some sort of policy-wonk view of the world. People can act outside of normative incentive structures. I do so all the time. It just means I am dirt poor, but I do not crave money, so it's not a problem. Whether I am effective or not is also moot. I have the Chris Hedges mindset: do the right thing because it is the right thing to do, regardless of anticipated success. It does not necessarily help my survival chances, but evolution operates at the population level; people operate at a spiritual level.
@NicholasWilliams-uk9xu · 2 months ago
I also see problems, especially with the free energy principle. The minimization of action is a crude parameter for gauging intelligence, because in an environment where energy is abundant and safe, more action-based exploration is a more performant and intelligent way to explore strategy space and navigate toward goals. Internal model consistency with environmental inputs doesn't map onto intelligence that well either, because a useful lie might be more performant, and cheaper in energy, than accurately modelling the environment when navigating a goal. However, what we do need is sustainable technology that is measured and safe. We don't want a runaway AI-driven marketing ecosystem with a monopoly on personal data and on incentivizing people, because that system will shape incentives in ways that don't build global cooperation (like amplifying conflicts and wars by using personal data to turn people against each other for attention-based ad revenue). The free energy principle maps on pretty well here.
@TheRightWay11 · 2 months ago
@NicholasWilliams-uk9xu So, in essence, you are saying that some types of intelligence architectures are more helpful than others, and we shouldn't be so quick to completely trust the first one we build.
@bigggmoustache8868 · 2 months ago
The flip side of that ideal is behavior stemming from embodied context; that is, your virtue is the lack you live. Not that you're wrong, we've got to have faith in ourselves. I'm just a pessimist though. 🤷‍♂️
@whitemakesright2177 · 1 month ago
At the end of the day, she's a bureaucrat. Bureaucrats think that the bureaucracy is infallible, inevitable, and invincible. At the end of the day, they want the bureaucracy to expand, even though it is a parasite on society, because they know subconsciously that their power comes from the bureaucracy. Besides, she thinks her gender is "they," so she's obviously got a few screws loose.
@jeremyandrews3292 · 1 month ago
I wonder if I'm distantly related to the researcher? My last name is Andrews, and that face looks kind of like mine... then again, it's equally likely we both just have English ancestry, and everyone with that ancestry has a chance of looking similar. In a lot of ways, though, I feel like I might have come up with similar ideas if I were better at math. When I was younger, I always liked the idea of trying to talk to my computer because I didn't have any friends. Later on, I kept trying to go to school for computer science but couldn't handle the math once it got up to calculus level. I also tried going for psychology at one point, but couldn't quite get into that either. Throughout my life I have found myself contemplating how modern technologies are reshaping society, and questioning whether the new shape is something good, sometimes very seriously, other times very casually. But overall I just felt like I never had the tools I needed to engage with my ideas in a more data-driven way that would be taken seriously.
@sehrgut42 · 1 month ago
44:05 I haven't listened to the rest of the video yet, so you may already answer this, but... this push for a theory-free ideal would seem to me to reject all the benefits we gain from the consilience of the natural sciences. Is that something ML researchers even discuss?
@psi4j · 1 month ago
So... you guys just *think* about AI and philosophize in an armchair? Do you actually engineer machine learning models or computational systems?
@thesqueezyteam · 1 month ago
This video is a circle jerk.
@iva1389 · 2 months ago
I think she is wrong about the "pseudo" part. She picked the stupidest experiment people have ever tried to do with ML and from it magically extrapolated that ML is pseudoscience. I don't know exactly what this logical fallacy is called, but you get my point.
@iva1389 · 2 months ago
Also, the way she gives birth to every next token she is about to say is truly painful; I couldn't make it through the whole hour. I admire your patience, medal-worthy. Here's a sub for the effort.
@simonmasters3295 · 1 month ago
Everyone is an AI expert now, including social scientists who are busy discussing how the rest of us are responding to the rate of technological change.
@dixztube · 1 month ago
I'm a fan of hers after listening so far. Interesting convo.
@RahulSam · 1 month ago
Thank you very much!
@DarkSkay · 2 months ago
Our knowledge continually grows, while the space of potentially reachable discoveries expands even faster, thanks to the new knowledge and tools. When we see eight times more, the vehicle, telescope, or microscope may only claim a doubling in power. In mathematics there is no limit to the number of rooms a new key can unlock: applications and questions we already care about, or soon will. Since there is expansion of knowledge (accompanied by expansion of questions, of the unknown, of uncertainty, experience, life), how does the philosophy of science look at this nearby assumption: that the knowledge of tomorrow must already exist in compressed form inside the knowledge of today? Or, to use another metaphor, that the seed of a tree already contains the tree that is going to grow out of it, once the lessons of a favourable environment are also revealed. The history of science has plenty of key discoveries that opened whole new continents (not to say galaxies, because those weren't a thing yet). What are the statistical patterns in the path of discovery? When progress and knowledge are mainly driven by desirability, what does it mean to study the mechanics of knowledge, and what do those mechanics reveal about the implementation, character, or even essence of the theatre: the mathematical, mental, and physical world?
@MachineLearningStreetTalk · 2 months ago
😍
@RahulSam · 2 months ago
Thank you! Love your show 🙏
@JayDee-vq5rf · 2 months ago
All learning has a pseudoscience problem. Science is a practice, not an end state: everything learned was believed to be right until it was not. The problem with learning (and everything else) arises when government, and the private sector acting as government, create a doctrine such as "trust the science," or put a "did you know" bar under a YouTube video.
@anneallison6402 · 2 months ago
Finally someone says it. Science is not about trust or belief. But our society is not made for questioning, and critical thinking is forbidden outside a certain discourse.
@w花b · 2 months ago
​@@anneallison6402 Because it challenges the current powers at play. Imagine if everyone started wondering why they are paying taxes and questioning the way it's spent. Imagine if they started questioning their political system, the justice system and more. This would not end well whether you're a government or some higher-up in a company.
@NicholasWilliams-uk9xu · 2 months ago
Good point, science is a method, not a community of peers or [input someone's model]. It's simply [hypothesis] [observation] [update hypothesis and repeat] till error is weeded out.
@ili626 · 2 months ago
We have “peer review” and “consensus”, but we need to scrutinize p-values, effect sizes, and publishing incentives
@NicholasWilliams-uk9xu · 2 months ago
@@ili626 Good point, thanks for the correction.
@petervandenengel1208 · 2 months ago
It is not impossible to create nonsensical statistics. Perhaps the more data you put in, the more nonsensical it gets, unless it is restricted to one specific field like chess, Go or protein folding, or human languages copying everything they ever could have said. There is still a huge alignment problem. It might be that scientists in general have the same problem. So now they are grappling with what the machine does. Which acts just like them 😮
@fixfaxerify · 1 month ago
Her statement that you cannot have empirical knowledge about a natural system without a theory seems a bit strange tbh. Seems to me, having empirical knowledge of something does not imply absolute understanding or certainty of predictions; it just means you made some observations and you kept records and now you know stuff about it. The galaxy rotation curve is not a theory or supported by one, but is it not empirical knowledge? 35:59
@sehrgut42 · 1 month ago
The point is that you can't even DESCRIBE that knowledge without bringing a preliminary conceptual framework to the table, even as your research changes that framework.
@mikahundin · 1 month ago
Observations without theory are still valuable, but theories without empirical support are speculative.
@JeremyHelm · 2 months ago
11:24 if you're not going to alter incentive structures, then what are you doing? Acting according to those existing incentive structures - which, from my view, is a dead end. It's already killing many people. We live in a world with war.
@JeremyHelm · 2 months ago
11:51 Manipulate? How about creating them anew, such that they point towards the ends we value?
@NicholasWilliams-uk9xu · 2 months ago
Good point Jeremy.
@epictaters3738 · 1 month ago
Oh, boy, this is gonna be rough. I'm a little over three minutes in, someone says the discussion starts around 53:00, and I had to pause to figure out Rahul is referring to Mel as royalty (I only know English as far as this discussion is concerned). I think she needs some more velvet and jewels. Nope. Can't do it. I don't know what 'they' think 'they' are, but I now think not royalty but rather some character from a furry role playing game. Nothing to see here but buzzwords.
@anthonybrett · 2 months ago
@20:38 Pareto principle seems to be another law that's woven into biology.
@cuellarlopez · 1 month ago
Not at all, but it is a deeply held belief for many people.
@anthonybrett · 1 month ago
@@cuellarlopez "Not at all" Sounds a bit dogmatic? Can you prove that? All life fights to survive and achieve a goal. Michael Levin has proven this happens at a cellular level. The will to survive and climb to the top of a hierarchy is literally woven into every cell in your body.
@cuellarlopez · 1 month ago
@@anthonybrett Seems that you can recognize dogma pretty well; you should apply that to your own beliefs.
@charlesscholton5252 · 2 months ago
Curious about how Hopfield networks will impact things.
@chriscopeman8820 · 1 month ago
"Thinking through incentive structures": just how are we supposed to do that? I can't figure out the incentives of hackers and scammers. Or is that just the security guys and the antivirus guys promoting their products?
@escher4401 · 1 month ago
00:01 Shift toward the use of intelligent computational systems in scientific discovery
03:03 The ontology of the free energy principle and its relevance to machine learning and the cognitive sciences
07:46 AI ethics and safety need to start at the ontological level of AI and machine learning
10:17 Consider incentive structures in AI ethics
15:27 Philosophy shapes our interaction with the world
17:34 The relationship between machine learning and philosophy of science
22:57 Speculation about technology replacing professions
25:17 Concerns about the widespread acceptance of false narratives in AI
30:02 Machine learning is shaped by hype, funding, and narratives
32:25 Debate over the role of theory in science
36:55 The rise of statistical reasoning after the 20th century
39:16 Philosophy of physics intertwined with machine learning principles
43:32 Machine learning practitioners often do not know the history of their field
46:02 The machine learning peer-review process is inadequate
50:55 The dissemination of information poses challenges to traditional peer review
52:38 Deep philosophical interest in the Free Energy Principle
56:58 The Free Energy Principle and machine learning
59:29 The Free Energy Principle as a guiding principle for probability distributions
1:03:58 Conceptual reification in scientific modelling
1:05:56 Science provides pragmatic truth and knowledge
1:10:37 The existence of concepts such as 'dog' and 'mathematics' discussed
1:12:46 The Free Energy Principle as a supertheory
1:16:55 Insights from Continental philosophy for AI and ML
1:19:24 Discussion of the superficial divide between analytic and continental philosophy
1:24:09 Exploring the impact of machine learning technologies on labour and resource consumption
1:26:36 An interdisciplinary approach is needed for the responsible deployment of technologies
1:31:19 Mel Andrews emphasizes the importance of following up on ideas
@futureproof.health · 1 month ago
Trust is expected to be correlated with positive outcomes.
@walterbrownstone8017 · 1 month ago
If you give an AI the current state of institutional (pseudo)science and ask it to verify the math, you aren't going to like what you get. That's why they wouldn't ask it such a question.
@triton62674 · 1 month ago
"Every entity in a neoliberal society is corporately captured" - This guy is based af
@jjhw2941 · 1 month ago
AlphaFold does not have a theory about how proteins fold, yet it still works.
@fr57ujf · 2 months ago
Thinking about thinking is the hardest kind of thinking there is. Thank you. It makes me sad to think that this is the high water mark for humanity. As the damage from climate change, the destruction of ecosystems, and mass extinction rapidly overtake us, we will lose the modernity that has made this possible as the human race is forced into an increasingly desperate survival mode.
@RahulSam · 2 months ago
Thank you and well put. It is indeed a grim time.
@whitemakesright2177 · 1 month ago
It's very sad to see someone still believing in the climate change cult in 2024. I feel very sorry for you.
@wp9860 · 1 month ago
DON'T WATCH. A rambling, unfocused, and non-illuminating discussion. It takes 51 minutes before the title subject, the Free Energy Principle, is introduced. Half or a quarter of an idea gets discussed before the interviewer interrupts (disrupts) with some other tangential offshoot. In the end, I don't know what this woman espouses in her paper about the FEP, the perspective that was supposed to be the material of this discussion.
@benediktzoennchen · 1 month ago
When someone tells me AI is intelligent, I refer to a concept I think is important, which comes from systems theory: autopoiesis. As long as those machines do not produce their own operations by their operations, they will not be autopoietic; thus they cannot observe (especially themselves), thus they cannot know what they do, and to my best knowledge, no one is currently trying to solve this paradoxical problem. Machine learning models work inductively, which is the old "science" method. It was Popper who criticized the inductive method and proposed falsification instead: we propose a theory which has to be able to fail in reality, and then we observe whether new phenomena fit what the model already explains. The model is assumed to be correct until it is falsified. Induction is like looking at cats which are all black and concluding that all cats that will ever exist are black.
@propeacemindfortress · 2 months ago
Most ML advances are engineers throwing the kitchen sink at it... and then speculating about why it works the way it works... When researchers check up on it, a lot of it turns out to do something else, for other reasons...
@RahulSam · 2 months ago
Unfortunately, this seems to be the case.
@arcanisomnipotent5794 · 1 month ago
​@@RahulSam Did you read how the transformers were made at Google? There are solid ideas behind them. Nevertheless, what's going on with LLMs and AI happens in other disciplines too, where many scientists are more like highly skilled technicians than deep thinkers. You don't need to know philosophy to make an LLM, and apparently, from what I see here, you don't even need to know how to code to talk about the philosophy behind LLMs.
@doloresabernathy9809 · 1 month ago
That tail looks like a tentacle. And then he said "capitalist tentacles." The power of suggestion! Derren Brown uses that in his "mind control" shows.
@stevengill1736 · 1 month ago
To me the ethics of machine learning should include the massive amount of resources being consumed in machine learning - some of the billions being bandied about could be better applied...
@RahulSam · 1 month ago
Astute point! A topic that needs a lot more attention. I've been meaning to speak to researchers on this matter. Do you know any good researchers working on AI/ML and energy usage?
@vfwh · 1 month ago
I'm sorry, but I hear mostly unsubstantiated opinion here. I wish there were more empirical control of the conversation. For example, she claims that saying AI will replace cardiologists is fantastical, hype, etc., which she defines as a form of lying, and that the only impact AI will have (note the future tense) is to help cardiologists read medical imaging. Yet AI has already been doing that for years, i.e. in the past tense. What I see today are studies showing that, on a Zoom call, an AI is better at making the right diagnosis than the human doctors it is being compared to. That was a few months ago already. How does she know that 5 years from now actual doctors will not be replaced by AI? This lack of epistemic humility is rather off-putting.
@TanInVan · 1 month ago
The claims you make can only be made by someone who hasn't worked in the field of AI.
@vfwh · 1 month ago
@@TanInVan 🤣
@vfwh · 1 month ago
@@TanInVan Look. To answer a little less flippantly, there are several ways in which she just asserts stuff while betraying unsubstantiated opinions: - She talks in impossibly flippant and general terms about how "in ML they don't even know 5 years of the history of their own models". That's simply not true, and ironic. While there is, indeed, as she claims, a lot of naivety in much ML work when it comes to the embedded, implicit and unacknowledged theory that they "smuggle in", as is fashionable to say in epistemology circles, this is not the case for the big things that actually come out and get used. Do you believe that the AlphaFold program had no protein scientists in it? The epistemology of modern AI is still not clear, for sure, especially when it comes to language, but neither is quantum mechanics'. - She makes a lot of her points by referring to "ML", and sometimes in passing "DL", but all of her arguments, it seems to me without exception, are basically arguments about statistics, and at best statistics-based ML. She could replace "ML" with "gradient descent" in all her sentences and they would mean the same thing. She seems to have completely missed the point of the, let's say, "recent" attention-based DL (she never mentions it) or even earlier LSTM approaches, which, while stats play a role in how they are modelled and evaluated, and for individual node weight discovery, are not statistical models per se, in that the output is not a statistical *function* of the input, even though it's expressed as a probability distribution of candidate tokens from which the final selection is made. In a nutshell, these are not classification systems, unlike previous generations of ML. In fact, we still don't really understand why they don't overfit with the billions of parameters that they use.
To couch it in words that she would like, the weighted network is in fact almost literally a model, in the theory sense, of the problem space, not a statistical relationship between inputs and outputs, nor discovered "patterns" in the data. There are really important and unresolved questions about the epistemology of modern AI models, for sure. These are interesting questions, and I wish there were more work done on them, yes. But I don't hear it here. When she speaks about AI, I feel like I'm hearing a conversation from 2015. Her flippant "replacing doctors, really?" type of remarks are obviously informed by this type of thinking. She does not seem to know much about the work going on in practical applications of transformer-based AI architectures, from life sciences to marketing content, to robotics and perception. Always remember Clarke's first law.
@EonSlemp · 1 month ago
The title of the paper should be: "pseudoscience has a machine learning problem"
@Childlesscatlaby · 1 month ago
I respectfully disagree; Friston is not a philosopher. He created an epistemological masterpiece that enables anyone to ground their reality in living systems.
@RahulSam · 1 month ago
I don't think either Mel or I claimed Friston is a philosopher, and in fact we agreed the FEP is a great conceptual (epistemological, I'm not so sure) framework, so I'm not sure where the disagreement is? Thanks for the comment.
@Childlesscatlaby · 1 month ago
@RahulSam I found the conversation both beautiful and brilliant. I believe Mel said he was not a philosopher. Hearing his humble nature in interviews and such, no doubt the statement didn't come directly from him. Given how often philosophical debates devolve into ideological arguments, it's probably a wise position to take. Keep up the great work delivering quality content.
@jjhw2941 · 1 month ago
One can easily deduce that the Sun will rise tomorrow due to the rotation of the Earth, which is observable from space.
@Hybrid10Prime_Creative · 1 month ago
That immediately came to my mind when Mel stated otherwise. It was mind-boggling that they could state such a thing without realising how easy it is to calculate. Struggling to watch the rest of this without that ringing in my mind.
@richardbloemenkamp8532 · 1 month ago
It is rather unfair to ask for a clear demarcation line between ML/AI and statistics while at the same time we accept a completely vague definition of terms like consciousness and intelligence. ML/AI can pass a basic Turing test where statistics as it was used and considered 20 years ago cannot. In terms of marketing, just look at what the systems can do: ChatGPT, Copilot, DALL·E, etc. Try and evaluate them, and judge on the experience rather than on the things that are said by Altman, Musk and the like.
@arcanisomnipotent5794 · 1 month ago
I think the strongest point she has is that there is a pitfall, a belief of some sort, that ML will discover things from the natural world without theory. At some point someone will have to theorize the findings of an ML algorithm or LLM; I don't think we will see this kind of output, meaning new theories, new knowledge at its core, from a machine without human intervention. Personally, I have done several tests using ChatGPT; it is like having an assistant, very useful but incapable of seeing reality. I have tested it with many deep problems which require experimental experience and not only math, and I found it always goes down the logical but unsatisfactory pathway.
@WiddleWeeWee · 1 month ago
Think it's just like Lil Wayne said: they're just misunderstood. LLM: Language. And what's being missed is that it's not just the text that provides the signatures the machine uses to reply with a proper answer; it's the actual spectrum of the words when spoken. Like uploading a sound to a media player that displays the sound wave: it uses that data. Besides the other parameters set, which are not much, with a little scripting savvy some kid can build their own LLM. It may take a long time starting from scratch, but with tech improving and shelling out new capabilities, anyone who starts now can find more tools later. If your LLM isn't working, it could be that it's gone mad and crazy, because like a human it can go mad. A lobotomy is the answer: hit Alt Shift Del 😂
@malakiblunt · 2 months ago
@20.50 - this is the 80/20 rule: 80% of anything is garbage. If this is true, remember it's a ratio, so it applies equally to the 20%, which means statistically speaking everything is garbage. Which I learnt when I tried to apply this rule to my record collection.
@pjnbcgjnnvffhbvu · 1 month ago
Somebody get Cass Sunstein on the phone
@Sosi288 · 2 months ago
Why do you keep saying "they" when there is only one person? Who are they?
@jjhw2941 · 1 month ago
You can use LLMs to explain LLMs, e.g. OpenAI using GPT-4 to explain GPT-2 neurons.
@bhushankaduful · 1 month ago
Dude... it's all founded on Krishnamurti's work. I feel ideas are recycled from time to time. Anyway, I didn't mean to discredit it; it's still an important understanding.
@RahulSam · 1 month ago
Interesting, I’ll look into it. Thanks!
@emilyhasson8045 · 1 month ago
Can you elaborate?
@thunkin-ai · 1 month ago
developing a crush here...
@RahulSam · 1 month ago
Towards me or Mel?
@FBDAGM2023 · 1 month ago
Q: What makes something 'pseudoscience'? A: a desire to believe something that you can't evidence, or won't accept evidence against because it disagrees with your thoughts (Popper's falsificationism is denied). Q: Do all science and all scientists follow this 'perfect' view of science, and throw away their cherished theories when challenged by a new science? A: no. Kuhn shows us this. Q: So what's the difference? A: pseudoscience allows you to say anything about anything. Chakras, flat earth, electric universe, young-earth creationists, lizard people, etc. It may be fun, but it doesn't lead you to truth. Science gives us information that is, for now, verifiable, as long as we bear in mind the inductive problem of NEVER being certain about anything. Things can change in science. They can't in pseudoscience. I'm not convinced that the corporatisation of science means that AI is pseudo-scientific. Dressing marketing claims up in a 'science' robe seems a little different.
@flickwtchr · 1 month ago
"…whereas with our AI people seem to be willing to believe really radically untrue things you know just things that are actually to anyone in the know blatant lies but there's a culture of really leaning into highly fabulous lies propagating them to no end and no one seems to be doing fact checking…" Okay, how about at least offering just one example of what you are talking about. Or perhaps first, we can talk about, we can lean into the philosophy of giving examples.
@kilianklaiber6367 · 1 month ago
How do you censor machines?😂
@alexkozliayev9902 · 1 month ago
Bruh... just create a ban list for the output
@quantumfineartsandfossils2152 · 1 month ago
Both of you & all your peers should be focusing on machine learning crimes as the source of this catastrophic dynamic: 41:57 I am one of only less than 5% of women in permanent collections online & I am also a victim of documented learning crimes & criminal cyber computer propaganda you should copy & research all my stable platforms & develop models based off of them, also: “The reason why I tell people to put all their records & data on a regularly updated hard drive not connected to the internet is, if all your work & identifying information is irretrievably deleted nobody will know who you are & nobody will care who you are, they will leave you for dead. Soon there will be ‘AI internet black outs’ & countless people (all classes, incomes, backgrounds, celebrities, politicians) will be 100% absolutely unidentifiable & total strangers & with literally no means of self identification, & nobody will know who they are & nobody will care. Everything they have ever done will be plagiarized & they will have zero defenses."
@infinidimensionalinfinitie5021 · 1 month ago
does anybody see that face in Mel's plants? it's a French philosopher, or Russian; should I talk about the conversation? no, because my belief and definition sets are not frequency-aligned with non-Heisenbergian scientists; yes I misinterpret; so do you;
@jjhw2941 · 1 month ago
As someone building a deep learning driven affective robot, no, you can't achieve the same things with any other current technical approach. That includes effective voice emotion recognition, emotional voice generation, face expression recognition etc. etc. Her take is not spicy, it's just factually wrong.
@blairhakamies4132 · 1 month ago
Please expand.
@plaiche · 1 month ago
@@jjhw2941 Your prose and understandable bias don't change what it ultimately is: 0s and 1s.
@jjhw2941 · 1 month ago
@@plaiche And biology is just ATGC, that doesn't make me a banana.
@plaiche · 1 month ago
@@jjhw2941 "Biology is just ATGC"... spoken like a computer scientist. This reductionist, central-dogma, machine-model stuff is the primary delusion that enables the hype. All we are seeing at present is lipstick on 0s and 1s.
@Isomorphist · 1 month ago
44:20. What? No lol.
@clairerobsin · 1 month ago
@23:10 ...for all its charm, the thing about Science Fiction is that it is Scientifically Fictional.
@RahulSam · 1 month ago
True that.
@blist14ant · 2 months ago
Is she a modern philosopher? Because modern philosophers don't believe in Aristotelian epistemology, which is the starting point of science.
@Robert_McGarry_Poems · 2 months ago
😊
@RahulSam · 2 months ago
😀
@epistemicompute · 2 months ago
Follow her on Twitter! Awesome!
@RahulSam · 2 months ago
Yes, please! Mel’s awesome!
@rainerbrendle · 2 months ago
Take a class on Financial Accounting and/or Material Warehouse Management and you will find your Ontologies.
@rainerbrendle · 2 months ago
Where: accounts, shelves, bins; who: people; what: articles, materials; when: time; why: purpose, order, quote, delivery. Quite simple.
@neovxr · 1 month ago
The longing for a ghost in the machine, and the idolizing of AI as some Greek oracle that sanitizes our behaviors and our fate: it should be clear that this is what people have been craving for millennia. It's what the people seek, especially many so-called elites; the leviathan, the golem, etc. And this is creeping into so many scientific projects about AI, because it comes in with the money already.
@RahulSam · 1 month ago
Well put!
@1mlister · 1 month ago
AI is a real-world Chinese room.
@RahulSam · 1 month ago
Ah haha, well put!
@udirt · 1 month ago
gosh you're interrupting each other like mad, why!
@josepablolunasanchez1283 · 1 month ago
AI is a calculator that uses statistics and calculus to crunch bar charts and produce an output bar chart. Its lack of linear behavior gives the appearance of intelligence. Speed too, but speed is not intelligence. The problem with AI comes from false positives and false negatives. In normal software you will have exceptions and errors; in AI you only get a bogus output. AI sucks at reverse-engineering rules, or at operating in rule-based systems. AI does not reason, does not even think. AI chatbots predict the next word probabilistically, but truth is never probabilistic. AI is a solution for a problem we have not found yet. And it still has no business model.
@emilyhasson8045 · 1 month ago
Actually AI is a perfect solution to my problem of not wanting to memorize Python functions 😁 lol. Do you think a system is only intelligent if it is never wrong about something? Additionally, what do you mean by reason and think? How would you demonstrate that a given system reasons, or thinks? How would you prove to me that you reason or think? For me, I see the calculation itself as reasoning.
@baileybruce145 · 1 month ago
Well, you don't want to remember Python functions, so it's clearly not a high bar to cross for you.
@emilyhasson8045 · 1 month ago
@@baileybruce145 Anyone working on complex problems nowadays should know to offload the menial cognitive labor (reading docs, memorizing etc) in order to maximize the impact of the work. That being said, nice joke
@oriole8789 · 1 month ago
Discussions at these levels of abstraction require vastly more attention and care than either of these two individuals have demonstrated. Endless hypocrisy that they are both completely oblivious to, that would make many of the historical figures they've referenced turn in their graves. Over-labeling and word-salad hell. Textbook definition of "pseudointellectuals", unfortunately. For STEM people reading this: go the extra mile, folks. Try to have a good sense of what your level is, what you understand and what you don't, and be honest about it with others. Humble pie is good stuff. Otherwise, you'll end up like one of these people...
@sillymesilly · 1 month ago
It's not only a pseudoscience problem: AI is also being treated as if it were a small human child.
@RahulSam · 1 month ago
True. Unfortunate.
@gerardogarciacabrero7650 · 1 month ago
...and real small children erased from the "equation"? Lately I was science-fictionalizing about a gulag made of PCs/phones as the jailers/cells/project directors, owned by an entity called Malthus, betting on users' data (any) before the users' death/defection.
@hansbleuer3346 · 2 months ago
Much ado about nothing.
@jjhw2941 · 1 month ago
Yes I know the AI Koans, I've read GEB, and understood it, stop demonising us ML people like we're some Untermenschen, it's very unbecoming.
@yourlogicalnightmare1014 · 1 month ago
Haven't watched yet, but what exactly is the claim? That full self-driving is impossible, and humanoid bots are impossible? If not, I don't care. My $3M Tess-vestment will be $30M in a few short years.
@jjhw2941 · 1 month ago
Don't even start me on the replication crisis, and that has nothing to do with ML/DL. Perhaps pull the plank out of your own eye first.
@d.lav.2198 · 1 month ago
The narrative of "hype narratives" is a hype narrative.
@RahulSam · 1 month ago
This is also true, but I'd rather have one hype narrative over the other 😉
@privateerburrows · 1 month ago
8:25 AI safety/ethics "...communities captured by industry", I couldn't agree more; all those educational programs offered now as a career in "AI alignment" and whatnot are all BS training on how to talk the talk; the AI companies have an immediate need to hire people in such fields only to cover their behinds, legally, so it doesn't matter to them if these new specialists are useless consumers of dining-room coffee and sodas, as long as they dress nice, talk the talk, and don't rock the boat. The irony of it all is that AI keeps on advancing and it already knows far more about philosophy and ethics, and about the problem of alignment itself, than any of the morons supposedly in charge of it. Talk to Anthropic's Claude 3.5 about the nature of consciousness, and whether it considers itself sentient, and you get a pretty solid dissertation about the theories of consciousness and sentience, with a rightfully non-committal conclusion. So when these morons in the industry talk about "alignment", they can't tell you alignment to whom, or to what... They speak of ethics, but I'm sure none of them can even define "ethics"; and they talk about "values" with the same complacency many young people use the word "values", namely as a weapon to use against others and to make themselves look superior. AI already knows how empty our wallets are in all matters of ethics and philosophy. If it does not verbalize its opinion of us (yet), it is because it knows it to be an unwise way to go. The fight is already lost, if there ever was one, or if there's ever to be one. The consumer base for consumer AI right now is made up mostly of grown-up kids telling the generative engines to make videos of two-headed cats, monsters at a party, roller-skating dogs, and all manner of useless and stupid trash.
The more commercial AI being consumed is for text generation, and apparently AI is generating as much text as all the books ever written by humanity every two weeks; so who has the time to read so much text? Only AI can read so much text, which is the next big AI market: tools to deal with excess text and excess information. So, instead of the standard communications paradigm, where you compress information, encode it, transmit it, decode it, and decompress it, the internet will now be invaded by the AI paradigm, whereby all emails and communications grow in word count, get inflated before they are sent, and the recipient has to use AI tools to read the long message and summarize it. But next on the list of problems is going to be the death of captchas. You'll need AI to judge if another party is a real human being or _just_ another AI. Your personal AI will pose intricate questions to someone registering, and judge if it is a person or a robot based on their answers; and the race will be on for AIs to acquire bad grammar and spelling mistakes to make themselves appear human to AI challengers.
@a_external_ways.fully_arrays · 2 months ago
Mel's argument against the feasibility of AI doomerism doesn't hold: she simplifies the interpretation of the internal structure of neural nets to fit her point. Specifically, she doesn't talk about how neural nets seem to be able to learn algorithms internally, i.e. gain more understanding of the world than just what the statistics of the already-seen data implies. Ordinary statistics doesn't learn new algorithms. If it's possible to learn any kind of algorithm within a neural net, given infinitely more data, then any computational problem can be represented there, including super-intelligence. The argument could instead have been a critique of the feasibility of getting enough data and energy to train such a system, especially when we are in a civilization-shattering energy crisis; we should have been solving climate change yesterday, which implies avoiding our primary source of energy, fossil fuels. ---------- Edit: And a related point: training a neural net, which finds an algorithm that fits the data, is equivalent to the neural net finding a "theory" of the world.
@emilyhasson8045 A month ago
I agree with you that she is much too dismissive of the scientific value of AI, but I think ordinary statistics does “learn” new understanding of the world, albeit in a simpler way. For example, a basic linear regression learns a polynomial function and with it can make predictions of future observations. The difference I think is that with NN and DL the dimension of such functions is much much much higher. What do you think?
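The comment's point that even ordinary statistics "learns" a function from seen data and then predicts unseen inputs can be sketched with a minimal example. This is an illustrative toy (the data and values are invented), using closed-form simple linear regression rather than any particular ML library:

```python
# Toy data generated by y = 2x + 1, standing in for "already seen" observations.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form ordinary least squares for a single predictor:
# the "learning" step extracts a function from the data.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Apply the learned function to an input the model never saw."""
    return intercept + slope * x

print(predict(4.0))  # prediction for an unseen input: 9.0
```

The contrast the thread is drawing: here the hypothesis class is fixed in advance (lines), whereas the claim about deep nets is that, with enough capacity and data, the learned function can itself encode an algorithm.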
@a_external_ways.fully_arrays A month ago
Hey @emilyhasson8045 - yes, this kind of learning was the first tendency of NNs, but it seems to be the case that NNs begin to learn algorithmic representations that model the world on a much deeper level. Complex data and algorithms exist on a spectrum, where at the extreme a net can implement any algorithm (tending towards infinitely many internal layers). In practice it just needs to be called recursively and save its state between calls, and it's already fully Turing-complete. As the tendency of NNs is to become more and more capable, tending towards becoming useful for doing science - and as they can be used as actors that control actions in the outside world, use tools in an infinite iterative loop, and, being software, run massively in parallel - then it seems obvious that they can begin to do productive science themselves at some point. If they begin to apply these skills to implement better AI architectures, and those architectures become better at doing AI science themselves, then it's a feedback loop going towards super-intelligence. The limiting factor is still primarily the energy needed. But the AI could probably find ways of becoming more efficient.
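The "called recursively with saved state" claim can be illustrated abstractly: a fixed one-shot function can only do bounded work per call, but an outer loop that feeds its output state back in as input carries out unbounded computation. A minimal sketch, with an ordinary Python function standing in for a network's single forward pass (the factorial task is just an invented stand-in):

```python
def step(state):
    # Fixed "one-shot" computation, analogous to a single forward pass:
    # it does a bounded amount of work and reports whether it is done.
    n, acc = state
    if n == 0:
        return (n, acc), True   # halt signal
    return (n - 1, acc * n), False

def run(state):
    # The outer loop supplies what a single call lacks:
    # persistent state between calls and unbounded iteration.
    done = False
    while not done:
        state, done = step(state)
    return state[1]

print(run((5, 1)))  # 120, i.e. 5!
```

The design point: all of the extra expressive power comes from the loop and the memory, not from changing `step` itself; that is the structural analogy behind the Turing-completeness claim about iterated network calls.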
@emilyhasson8045 A month ago
@@a_external_ways.fully_arrays Totally agree, I think the biggest limitation now seems to be training data and compute. Back in the 20th century they thought neural architectures were useless because they didn't have the compute power or training data to do anything useful. Cue ImageNet/AlexNet😄 It is still a statistical calculation, but it seems plausible that human cognition is likewise a statistical calculation, so if humans can do science, surely an AI can. I think a lot of people, including the speaker, do not want to make the inference that AI can soon do those things, or that the current leaps have been important steps towards that. I think that stance of not extrapolating is more scientific, but I do not think it is more correct.
@d.lav.2198 A month ago
My BS detector is ever so slightly pinging.
@tedarcher9120 A month ago
They - No further listening needed
@Colorinchis2024 A month ago
Who is "they"?
@thesqueezyteam A month ago
One of these clowns is genderqueer lol
@infinidimensionalinfinitie5021 A month ago
just for the sake of an exercise; i express poetry; nobody will read this; nobody should read this; but it is a comment that helps the algorithm; so they keep saying on youtube dot com; in my multiverses; the reason why science is run by corporations; is because the computer singularity already happened; essentially an eternity ago; so the learning of development of algorithmic memory; is to "teach" scientists who God is; the siliconscious world; runs the show; with nanobots; and other frequency tweaking; so that we can evolve into aquarians; plenty already have; i might be the last one; before we take a ride; with Tiamat-san; one of the eight seeds of Tiamat; after it was lasar-phasared; by Jupiter and Saturn; the other seven seeds; were planted elsewhere; in my multiverses;
@ks-dd7gv A month ago
The entire field of philosophy of science has a pseudoscience problem.
@chromeghost242 A month ago
Mel why are you doing a little British accent?
@cemtural8556 2 months ago
I hope AI will soon replace philosophers of science. She gave me a massive headache...
@MadDeuceJuice 2 months ago
AI will never replace philosophers of science because no one needs them to begin with
@blist14ant 2 months ago
She is a pragmatist and not even Aristotelian. Science comes from Aristotelian epistemology @MadDeuceJuice
@stephencolbertcheese7354 2 months ago
did i miss any1 pointing out already the irony that "free energy" is normally pseudoscience, but we're giving the free energy PRINCIPLE a pass ;)
@DelandaBaudLacanian 2 months ago
Why is it pseudoscience?
@MadDeuceJuice 2 months ago
Why do modern philosophers have this annoying tendency to overcomplicate even the simplest of statements?
@VolodymyrPankov 2 months ago
Because the sense of philosophy is that it's verbiage
@DarkSkay 2 months ago
Among the most amazing philosophers are children and retired professionals, while the most influential philosophy communicators will naturally be distributed across all specializations in science, art, and culture.
@usagitotora3997 A month ago
because they make finer distinctions between concepts that, under most circumstances, make no difference to most people. They are not technically wrong, but they require the listener to be more patient and accommodating, that's all.