Institute for Experiential AI
Cutting-edge AI solutions and research institute.

The Institute for Experiential AI is building the leading research institute focused on fusing human and machine intelligence into working AI solutions that naturally and effectively combine the best of machine algorithms and human judgment.

Our AI Solutions Hub and team of data scientists and data engineers work to solve industry, research, and academic problems in an innovative apprenticeship setting. Over 90 faculty members from across Northeastern University conduct interdisciplinary work to advance the field of AI and to solve problems across research domains. Our Responsible AI Services and global AI Ethics Advisory Board help institutions assess and address the risks of biased and unfair AI from an algorithmic audit perspective, ensuring responsible AI development and deployment.

#AI #AIsolutions #AIresearch

InnovateMA and the AI for Impact Co-op
3:33
2 months ago
Comments
@ForNika · 6 hours ago
AI copy & paste programming can manage my emotions better than me!
@surkewrasoul4711 · 14 days ago
It's astonishing how people have the audacity to say that a very expensive and overhyped web scraper can lead to AGI. In what world? Roaches or ants? Maybe, but not in the real human world, at least not the intelligence we refer to in the textbooks, no matter how many paid lunatics like Eliezer or others like him try to create hype and scarcity so more people will use them and companies can make large profits.
@Roadmapmetrics · 18 days ago
Love the show! We provide strategic advisory services to Boston StartUps and we're based out of Concord, MA. This show has now been added to our permanent playlist. Keep up the great content. Thank you.
@InstituteforExperientialAI · 29 days ago
Ahead of “The State of AI in Precision Health,” our flagship conference happening Oct. 10 at Northeastern University in Boston and online, hear from former Executive Vice President at MEDITECH and #SAIPH2024 speaker Hoda Sayed-Friel about AI’s role in healthcare. Register for “The State of AI in Precision Health” at bit.ly/SAIPH24
@Orbitaonamika · 1 month ago
Garcia Mark Martinez Betty Perez Anthony
@JustNow42 · 2 months ago
A short comment: light is not discrete; it is emitted in quanta and absorbed in quanta, but it is not in itself quantized. Proof: light is stretched in expanding space.
@JustNow42 · 2 months ago
A question: how do the time steps in the models compare to our progress of time, or should I say our time quanta?
@app8414 · 2 months ago
What is the ultimate fractal? That is how I phrase the question or problem...
@app8414 · 2 months ago
Something I call: Knowledge Audit
@app8414 · 2 months ago
There's much to think about and much to do. I've designed a course: Computational Thinking Simplified Technical English for Artificial Intelligence: Language Standard and Register (STEAI-001/STLAI-001), with a Prompt Engineering Manual, Prompt Dictionary, and Register. I think it helps answer or meet the demands of Wolfram Research and more. I'd be happy to connect with a representative from the channel or with Mr Wolfram himself. Fingers crossed someone takes me seriously. 😊
@app8414 · 2 months ago
I forgot to include: my work also responds to Jeannette Wing's work.
@midoann · 3 months ago
This institute is so lucky. I think this is one of the best Dr Lisa Feldman lectures, a masterpiece! ❤🎉
@aurasandovalvigo2712 · 3 months ago
Thank you so much Lisa! Your work is outstanding.
@FarhatiYassine-en1mj · 3 months ago
Like Management
@FarhatiYassine-en1mj · 3 months ago
How can machine learning develop the social sciences related to management?
@FarhatiYassine-en1mj · 3 months ago
Hi, welcome Professor Tina. I am Yassine Farhati, a doctoral student in management complexity from Tunisia, a teacher of physical science, and a writer.
@AlgoNudger · 4 months ago
TAI + XAI - IAI : RAI = BS. 🙄
@mitchellhayman381 · 4 months ago
This is approaching the limit for how smart a human can be.
@alexandersmirnov07 · 5 months ago
It's fascinating to listen to Lisa! Thanks Lisa, and thanks to the Institute for Experiential AI for hosting.
@neoepicurean3772 · 6 months ago
So time is not fundamental or emergent, or strictly in the observer, but in the computing speed of the hypergraph? But doesn't that just kick the problem of explaining time up a level?
@MichaelQuarantaGooglePlus · 6 months ago
Great talk, especially the demos. They were helpful to visualize concepts. Thank you Dr. Wolfram for being a prolific communicator and thinker and doer. You are accomplishing and contributing a lot to humanity, please continue.
@shinn-tyanwu4155 · 7 months ago
Excellent presentation 😊
@LeahBensonTherapyTampa · 8 months ago
Goodness. People really don't get it... The questions make that clear. We are suuuuch affective realists.
@JuliusUnique · 8 months ago
The cool thing is, it doesn't matter how "fast" the thing that computes us is. Even if it is a single string, bit by bit, we are part of it, so we are slow as well, and it cancels out: no matter the speed of whatever computes us, we experience the universe at the same exact speed.
@tellesu · 8 months ago
It would require an incredibly stable universe that somehow refreshes entropy
@JuliusUnique · 8 months ago
@@tellesu Why would it refresh entropy? Entropy is what happens based on a given rule it computes; check Rule 30.
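The "check rule 30" reference above can be made concrete. Rule 30 is Wolfram's elementary cellular automaton where a fixed deterministic update rule produces disorder that looks entropic. A minimal sketch (function names are mine, illustrative only):

```python
# Rule 30: each new cell depends on its three neighbors above.
# The point of the comment: entropy-like disorder emerges from
# a fixed deterministic rule, nothing needs "refreshing".

def rule30_step(row):
    """Apply Rule 30 to one row of 0/1 cells (edges padded with 0)."""
    padded = [0] + row + [0]
    new = []
    for i in range(len(row)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        # Rule 30 update: new cell = left XOR (center OR right)
        new.append(left ^ (center | right))
    return new

def run_rule30(width=31, steps=15):
    """Evolve from a single seed cell and return all rows."""
    row = [0] * width
    row[width // 2] = 1  # single live cell in the middle
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run_rule30():
        print("".join("█" if c else " " for c in row))
```

Running it prints the familiar chaotic triangle; the center column is irregular enough that Wolfram once used it as a random number source.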
@Tore_Lund · 7 months ago
Agreed. Like the critics claiming Wolfram Physics can't possibly work because it would have to operate faster than light! They forget what is considered consensus in quantum mechanics: entanglement and quantum-foam fluctuations are taken to be orders of magnitude faster, in the underlying mechanics of the universe, than the causality speed we can observe. Funny how those exploring some version of the simulation hypothesis or other computational cosmology, like Max Tegmark, don't get asked that question.
@Gustavoooooooo · 6 months ago
Futurama S08E10
@gisele.st.hilaire.feldenkrais · 8 months ago
Thanks again, Lisa, for another great presentation. Thanks to the Institute for Experiential AI for hosting and inviting Lisa Feldman Barrett.
@InstituteforExperientialAI · 8 months ago
Thank you for watching!
@iggymcgeek730 · 8 months ago
In 2023, Stephen is revolutionizing the field of physics with his innovative exploration in hypergraph models of space. His work is more than just theoretical talent; it fundamentally changes our understanding of the universe. Stephen skillfully combines advanced artificial intelligence with cellular automata, delving deep into the complexities of space-time. Speaking of cellular automata, did you hear what one said to the other? 'Stop copying me, or we'll end up in a loop!' and 'You had one job - to follow the rules!' It's like they're having a real chat in there. One even said, 'Hey, let's make a pattern nobody can predict!' and 'I'm feeling less complex today, how about you?' Stephen's automata are not just evolving, as one joked, 'Are you evolving, or just stuck in your grid?' Together, they make complexity look simple, with one quipping, 'Let's not gridlock over the rules!' and questioning life in the grid, 'Do you think we'll ever escape this grid?' It's a chaotic system for sure, with them joking, 'Careful, or we'll end up as a chaotic system!' and reminding each other, 'Don't be so predictable!' Stephen's role in all this? Not just riding the peak of the AI wave, he's leading a transformative movement, ingeniously intertwining the fields of physics and artificial intelligence in unprecedented ways. #InnovatingSpaceTime #AIPhysicsFrontier
@mooncop · 9 months ago
generating text2audio from multimodal latent space is next -- "describe the voice of Max Stirner" -> "Stirner <desc>" -> enactment
@hypercube717 · 9 months ago
Interesting
@silberlinie · 9 months ago
Truly great personality.
@vak5461 · 9 months ago
I wish when I subscribe to good channels they wouldn't then spam random useless videos and it would just keep up the good ones that I subscribed for 😭 clogged up my feed immediately! Aaah
@shanek1195 · 9 months ago
Thoroughly interesting talk; I most enjoyed the Q&A. Induction, relevance realisation, and how we can inscribe human values are really interesting problems. 1:43:32: Unsure about the notion that AIs don't suffer (not, of course, in the anthropomorphic sense, but) the cost function does act as a form of negative reinforcement.
@richardnunziata3221 · 10 months ago
A bunch of unsupported and unchallenged statements
@LauraNorvig · 10 months ago
OK, Bard said you could go by public transportation or taxi so ... still some work to be done there.
@PeterSodhi · 11 months ago
A masterclass
@geoffreyanderson4719 · 1 year ago
No knowledge of underlying reality, no common sense, can't plan. Example:

Prompt: You are an expert parent assistant and babysitter. You are playing with a parent and her infant, helping the baby girl learn basic concepts about the world around us. Today the girl is pushing a wheeled toy along a table top near its edge. What are two possible events that could likely occur in the next five seconds which may surprise or instruct the girl as she continues pushing the toy?

ChatGPT: Certainly! Based on the scenario described, here are two possible events:

1. The toy falls off the table: As the infant pushes the wheeled toy near the edge of the table, it's quite possible that she may push it too far and the toy falls off. This event can be surprising for the baby. It's an introduction to the concept of gravity and the consequences of pushing objects beyond a boundary.

2. The toy gets stuck or slows down: If there's a small obstacle or a sticky spot on the table, the toy might get stuck or slow down when the infant pushes it. This can teach her about resistance, friction, and the fact that not all surfaces are smooth or unobstructed.

Both events can be instructive and can spark curiosity in the baby, leading to exploration and learning. However, it's crucial for the parent or babysitter to ensure the baby's safety during such play, especially if the toy falls off the table, to prevent any potential harm.
@geoffreyanderson4719 · 1 year ago
When LLMs are put into an ensemble with databases they can be made factual, actually; the LLM is good at fusing query results, which is the reason. When LLMs are put into an ensemble with strategy-specialist models they can be made into planners, actually; the Alpha family of models is a planner. When LLMs are augmented with persistent storage they can be made to remember their learnings. The LLM alone is not the way forward, but the LLM with various augmentations seems very promising.
@maxheadrom3088 · 1 year ago
Google AI generated subtitles just wrote "anti-Gary" as "anti-gay". NOTE: I have no idea if the auto generated subtitles on youtube use AI or not.
@kayakMike1000 · 1 year ago
We _do_ have unlimited energy to throw at AI workloads. It's called nuclear energy. Works _great_ rain or shine, day or night.
@kayakMike1000 · 1 year ago
For crying out loud, CO2 is not a dangerous gas. It's not a problem. Sudan got flooded because that's what happens in Sudan every 100 years. The Maldives are not getting swamped. They are STILL there and they will still be there 200 years from now.
@federicoaschieri · 1 year ago
Finally, someone who understands why LLMs are bound to fail. It's unbelievable how people underestimate the difficulty of building a cognitive architecture. People are literally expecting that a quick, polynomial algorithm like a feed-forward neural net can solve all the problems of humanity. Yet logicians have already explained the concept of NP-hardness: logical problems can't, in general, be solved efficiently by a machine, no matter how sophisticated it is. In some sense, scientific problems don't have "patterns"; they're all different, so a machine that learns patterns is pretty useless. That's why progress is slow, and why for it even to be possible takes billions of intelligent brains working in parallel with incredibly structured communication. So good luck with LLMs...
@codybmenefee · 1 year ago
Is his deck available anywhere?
@hyunkim6195 · 1 year ago
.
@klausunscharferelation6805 · 1 year ago
About the singularity: as Ray Kurzweil says, when the whole universe becomes a computer, what does it calculate, even though the purpose for calculating has already disappeared?
@CharlesVanNoland · 7 months ago
The universe is already a computer.
@АлександрАнатольевичШмонов
A computer by itself has created 40,000 inventions. It is discussed here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-twUzsAZIe90.html
@nicktasios1862 · 1 year ago
Yann mentions at 1:09:16 that a lot of the mathematics of neural networks comes from statistical physics, but I wonder what mathematics he's referring to, since most of the mathematics I saw when I learned statistical physics was much more basic than some of what I've seen from the likes of Yi Ma and Le Cun.
@edz8659 · 1 year ago
Reverse diffusion for one
@nicktasios1862 · 1 year ago
@@edz8659 I never learned anything about reverse diffusion in my statistical physics courses. Neither did we learn about stochastic differential equations for example. I actually learned more about Brownian motion and Wiener processes when I worked as a quant.
@synthclub · 1 year ago
I would say the statistical tools are from quantum physics... not mechanical physics.
@stefanobutelli3588 · 8 months ago
@@nicktasios1862 Brownian motion is statistical physics, and spin glasses and entropy are a good bridge between phase transitions (statistical physics) and decision boundaries in data spaces.
@mbrochh82 · 1 year ago
here's a ChatGPT summary:
- Welcome to the last distinguished lecture of the series for the Institute for Experiential AI for the academic year
- Introducing Yann LeCun, VP and Chief AI Scientist at META, Silver Professor at NYU, and recipient of the 2018 ACM Turing Award
- Overview of current AI systems: specialized and brittle, don't reason and plan, learn new tasks quickly, understand how the world works, but don't have common sense
- Self-supervised learning: train the system to model its input, chop off the last few layers of the neural net, use the internal representation as input to a downstream task
- Generative AI systems: autoregressive prediction, trained on 1-2 trillion tokens, produce amazing performance, but make factual errors, logical errors, and inconsistencies
- LLMs are not good at reasoning, planning, or arithmetic, and easily fool people into thinking they are intelligent
- Autoregressive LLMs have a short shelf life and will be replaced by better systems in the next 5 years
- Humans and animals learn quickly because they accumulate an enormous amount of background knowledge about how the world works by observation
- AI research needs to focus on learning representations of the world, predictive models of the world, and self-supervised learning
- AI systems need to be able to perceive, reason, predict, and plan complex action sequences
- Hierarchical planning is needed to plan complex actions, as the representations at every level are not known in advance
- Predetermined vision systems are unable to learn hierarchical representations for action plans
- AI systems are difficult to control and can be toxic, but a system designed to minimize a set of objectives will guarantee safety
- To predict videos, a joint embedding architecture is needed, which replaces the generative model
- Energy-based models are used to capture the dependency between two sets of variables, and two classes of methods are used to train them: contrastive and regularized
- Regularized methods attempt to maximize the information content of the representations and minimize the prediction error
- A new method learns features for images without having to do data augmentation
- It works by running an image through two encoders, one with the full image and one with a partially masked image
- A predictor is then trained to predict the full feature representation of the full image from the representation obtained from the partial image
- These are used to build world models, which can predict what will happen next in the world given an observation about the state of the world
- Self-supervised learning is the key to this, and uncertainty can be handled with an energy-based-model method
- LLMs cannot currently say "I don't know the answer to this question" as opposed to attempting to guess the right answer
- Data curation and human intervention through relevance feedback are critical aspects of LLMs that are not talked about often
- The trend is heading toward "bigger is better," but in the last few months smaller systems have been performing as well as larger ones
- The model proposed is an architecture where the task is specified by the objective function, which may include a representation of the prompt
- The inference procedure that produces the output is separated from the world model and the task itself
- Smaller networks can be used for the same performance
- The AI and ML community should pivot to open-source models to create a vibrant ecosystem
- The biggest gaps in education for AI graduates are in mathematics and physics
- Open-source models should be used to prevent control of knowledge and data by companies
- LLMs are doomed, and understanding them is likely to be hopeless
- Self-supervised learning is still supervised learning, but with particular architectures
- Reinforcement learning is needed in certain situations
- Yann discussed the idea of amortized inference: training a system to approximate the solution to an optimization problem from the specification of the problem
- Yann believes that most good ideas still come from academia, and that universities should focus on coming up with new ideas rather than beating records on translation
- Yann believes that AI will have a positive impact on humanity, and that it is important to have countermeasures in place to prevent the misuse of AI
- Yann believes that AI should be open and widely accessible to everyone
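The summary's line about energy-based models capturing "the dependency between two sets of variables" can be illustrated with a toy: an energy function scores the compatibility of a pair (low energy = compatible), and inference searches for the lowest-energy partner. This is my own minimal sketch of the general idea, not LeCun's actual architecture; all names and the quadratic energy are illustrative:

```python
# Toy energy-based model: E(x, y) scores how compatible a pair is;
# low energy means compatible. Inference = search for the y that
# minimizes E(x, y) for a given x. Sketch only.

def energy(x, y, w=2.0):
    """Quadratic energy: lowest when y == w * x."""
    return (y - w * x) ** 2

def infer(x, candidates):
    """Pick the candidate y with the lowest energy for this x."""
    return min(candidates, key=lambda y: energy(x, y))

if __name__ == "__main__":
    ys = [0.0, 1.0, 2.0, 3.0, 4.0]
    print(infer(1.0, ys))  # prefers the candidate closest to 2 * 1.0
```

The contrastive vs. regularized distinction in the talk is about how such an energy surface is shaped during training: contrastive methods push energy up on sampled "wrong" pairs, regularized methods constrain the representations themselves.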
@StoutProper · 1 year ago
You could have got it to include timestamps, particularly as they haven’t published this with chapters.
@RufusShinra · 1 year ago
@@StoutProper By all means go ahead and do it.
@StoutProper · 1 year ago
@@RufusShinra You've already fed it the transcript complete with timestamps. Just instruct it to add them.
@RufusShinra · 1 year ago
@@StoutProper I didn't do jack :D I'm not the OP
@StoutProper · 1 year ago
@@RufusShinra embarrassing
@richardnunziata3221 · 1 year ago
It would be very helpful if there were a list of open problems in the machine learning space.
@StoutProper · 1 year ago
Ask GPT or Bing.
@kinngrimm · 1 year ago
1:17:25 "At some point it will be too big for us to comprehend." Before that point is reached we should have figured out alignment, so that we are not dealing with a black-box system and can actually see what's going on in there, plus a ton of societal changes that will have to be made for societies to be and stay stable.
@FergalByrne · 1 year ago
No mention of "understanding" in the title; not even trying these days.
@rim3899 · 1 year ago
With respect to LLMs, the evidence suggests that they do know a great deal about how the world works. For example, GPT-like models' weights can be and are trained on data that subsumes all that is currently known about physical laws, chemistry, biology, etc., through countless papers, review articles, and textbooks at every level of sophistication, from first grade through cutting-edge research. That these are given as text (language) is not as problematic as it seems, since the relevant written record appears sufficient to explain and convey current and past knowledge in these subjects. That multi-layer transformers learn context and correlations between (meaningful!) words and their associated concepts and relationships should not be underestimated.

That the models "just produce" the next probable token isn't conceptually trivial either, if one considers that, for example, most of physics can be described through (partial) differential equations that are integrated step by step, where the context- and state-dependent coefficients of the equations (the trained weights of the network) ultimately derive from the underlying theories the equations are solving. Processing the current state, with these coefficients in context, to predict what happens next, one step at a time, is how these equations are in practice numerically integrated. So what we potentially have with current LLMs are models that learn from language and words that actually do describe in excruciating detail what is known to man, and that proceed to "auto-complete" in ways analogous to the best methods used to solve the currently known equations of science.
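The analogy above, next-token prediction as step-by-step integration, can be made concrete: just as an autoregressive model emits the next token from the context so far, a numerical solver emits the next state from the current one, one step at a time. A minimal sketch using forward Euler on dx/dt = -x (my example, not from the talk):

```python
# Forward Euler: each new state is produced from the current one,
# step by step, the way an autoregressive model produces the next
# token from the context so far. Illustrative analogy only.

import math

def euler(f, x0, dt, steps):
    """Integrate dx/dt = f(x) by repeatedly 'predicting' the next state."""
    xs = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)  # next state from current state
        xs.append(x)
    return xs

if __name__ == "__main__":
    # dx/dt = -x from x(0) = 1; exact solution at t = 1 is exp(-1) ≈ 0.3679
    traj = euler(lambda x: -x, x0=1.0, dt=0.01, steps=100)
    print(traj[-1])
```

Each step uses only the current state and the fixed rule, yet the trajectory tracks the exact exponential closely; the commenter's point is that "one step at a time" does not preclude capturing real structure.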
@kinngrimm · 1 year ago
Knowing of them and understanding how to use them may still not be the same. There was a study on AlphaGo, done after it beat the greatest Go players, in which the researchers gave AlphaGo a mulligan: an advantage of a few stones, of the kind given to children starting out to learn the game. The result was that AlphaGo then lost every single game. The researchers who analyzed this concluded that the strategy of engulfing the opponent's pieces was not really understood conceptually.

I think it is at times hard to look at the result, the product of the calculations, and then correctly make assumptions about how the LLMs got there. After all, they are said to be black-box systems, and researchers will still take a while to figure out exactly what is going on within their neural networks. On the other hand, we humans tend to put ourselves on a pedestal and make ourselves out to be something special; the image we have of ourselves is often quite inflated, which could lead to misinterpretations and to underestimating what is going on. We may all be partial to the Dunning-Kruger effect, anthropomorphisation, and other psychological traps. Understanding and admitting that to ourselves seems key for many of the issues I have at times with other people, and while that may open up another trap, thinking others have the same issues, I still think it is safe to assume that most do.

Long story short, not just the hype but the sort of panic that set in over the past year around AGIs and LLMs seemed to have more to do with our human failings and how we would use such technology than with it already being, on its own, the biggest threat to humanity. Still, it is a wake-up call about the direction this is taking and what we can expect in the not too distant future.
@StoutProper · 1 year ago
@@kinngrimm The biggest threat isn't the technology itself but how it is used, or more accurately, who is using it and for what. The answer to which is: rich corporate elites to replace us, rich government elites to control us, and rich military-industrial elites to kill us. Examples of each are already prevalent throughout the world; meanwhile people are distracted by the notion that the technology itself is the danger and are failing to focus on the real threats.
@richardnunziata3221 · 1 year ago
Open-source foundation models are the future of democracy and small-business development.
@Peteismi · 1 year ago
My question to Yann would have been: "I'm an idiot. If I offered you a job here and now, without being able to give you practically anything you want in return, would you come and work for me as my superintelligence?"
@mkr2876 · 1 year ago
Exactly! How naive are people in this field who trust the version of the future where we successfully enslave an intelligence 100,000x smarter than us? How dumb can people possibly be??