The TWIML AI Podcast with Sam Charrington
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. By sharing and amplifying the voices of a broad and diverse spectrum of machine learning and AI researchers, practitioners, and innovators, we hope to help make ML and AI more accessible and enhance the lives of our audience and their communities.

Through the podcast, we bring the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders.

The TWIML AI Podcast is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Sam's research is focused on the business and consumer application of machine learning and AI, bringing AI-powered products to market, and AI-enabled and -enabling technology platforms.
Comments
@FREELEARNING 4 days ago
Very nice podcast. I'm also not a fan of multi-task models; I think task-specific models are fine for many tasks. For example, I find it pointless to use a billions-of-parameters LLM to do sentiment analysis when a smaller BERT-based or RNN-based model can do the same task quite performantly. In short, an LLM being good at generating very human-like text doesn't make it good at all tasks in all domains. People also forget that not all LLMs have the performance of ChatGPT; smaller 7B or 3B models are not that good at handling a variety of tasks.
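(For context, a minimal sketch of the kind of task-specific setup the commenter has in mind, assuming the Hugging Face transformers library and its stock DistilBERT sentiment checkpoint; this is an illustration, not anything shown in the episode.)

# Sentiment analysis with a small task-specific model instead of a
# general-purpose LLM. Assumes `pip install transformers torch`.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(sentiment("This episode was genuinely insightful."))
# Expected output shape: [{'label': 'POSITIVE', 'score': ...}]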
@oldspammer 4 days ago
I tried Mojo. It worked on my multicore Intel CPU, but not on my older graphics card. I was disappointed that my graphics card was nowhere on the list of supported ones. Right now, you have to have an Nvidia card that has Linux driver support for CUDA compute delegation. When I asked which models of Nvidia cards were the supported minimum, Edge-Bing-Copilot GPT-4 claimed that the GTX 1000 series was OK, but I am NOT going to buy something with minimal support when every extra CUDA core you have tends to make the program run incrementally faster. At the time, I was willing to see whether eigenvector matrix-solving algorithms would scale well when parallelized. I uninstalled it all because I had to install a lot of stuff to get everything running seamlessly, and I had to make a bunch of room by moving all kinds of downloaded files to other storage devices. One of those sets of things was the Nvidia CUDA sample programs intended to test the graphics card's usefulness in this regard. I had to get rid of that stuff too, since my graphics card was not supported. If only Nvidia supported all of their CUDA-capable cards, even if they had to do some compatibility workarounds--that would have made the entire experience a lot more pleasant.
@RikuHydrangea 7 days ago
Thank you so much to both of you, for making Glaze and Nightshade, and for this interview!
@syedmohammadghazi6133 7 days ago
Hey, that's good work... I've done my research along similar lines; I'm going to publish it at the end of the year. Just wanted to know, have you heard of TranAD models? They're basically for anomaly detection, but I'm curious how well they would do for your use case.
@markryan2475 8 days ago
Awesome discussion. Really interesting to hear a perspective grounded in linguistics
@420_gunna 10 days ago
I love this little man and his big ideas
@sanesanyo 10 days ago
Really nice talk. Thanks a lot for doing this.
@btcoal 12 days ago
Paper link?
@twimlai 10 days ago
Hi @btcoal. Here's the paper link: arxiv.org/abs/2403.07815.
@sploofmcsterra4786 12 days ago
This video feels like discovering the piece I've always felt was missing from machine learning. So glad people are looking into this.
@tuba-inxs 17 days ago
Is the knowledge graph, as it is used in this video, an RDF graph?
@beverlycrusher9713 25 days ago
The problem is: what are you going to do with the nuclear waste? Those places aren't designed for long-term storage of nuclear waste, or ANY waste for that matter. If you are planning to build new reactor vessels, then they should be MUCH larger ones, ones that use "SPENT" reactor cores as their main source of power. "SPENT" reactor cores generate LARGE amounts of heat for decades even before they get put into the long-term storage casks, and even in the casks they still generate a fair amount of heat in long-term facilities.
@mansurZ01 26 days ago
Thank you for the video, it is a great source for catching up with research progress!
@barbi111 27 days ago
❤‍🩹
@thetruthseeker6409 28 days ago
Thanks a lot. Which SMC protocols are implemented within PySyft? Yao, SPDZ, MASCOT?
@nandoflorestan 29 days ago
What is intolerable... is that Mojo is not even open source. Bleh.
@uscybercom6009 1 month ago
🤢🤮
@quannguyendinh335 1 month ago
Thank you both for taking your valuable time to share with us, especially Prof. Sergey Levine.
@music-temtemp 1 month ago
The analysis seems to be focused only on BERT and ViT. I'm curious whether the assumption that 'outliers are generated by focusing only on meaningless token values' is also a meaningful assumption in decoder models like GPT or LLaMA
@squarehead6c1 1 month ago
Although it appears interesting to investigate the internal properties of deep neural networks, in practice it seems very difficult to guarantee that a fact has been completely removed from the LLM. Conversely, it would be interesting if one could find a way to "clamp down" facts in LLMs in such a way that the LLM always returns the same (correct) fact regardless of how the question is formulated. It would possibly require an adapted (ANN) structure of the model.
@Tom_Goodwin 1 month ago
Heya Sam just to let you know the Apple Podcasts version is only 4 mins long! That’s what sent me here aha
@twimlai 1 month ago
Hi @Tom_Goodwin. Seems like a bug on our hosting platform. We reuploaded the file and all good now! Enjoy! :)
@trevbook255 1 month ago
Just a heads up: still seems to be this way on Spotify rn - episode says 4min
@twimlai 1 month ago
Apologies, @trevbook255. We've reached out to Megaphone (our hosting platform), but we're still waiting on updates. For now, please enjoy watching here! Thank you so much!
@theJellyjoker 1 month ago
I will honor your request for your art to be unseen and forgotten.
@jdown330 1 month ago
is bro talking on 1.25 speed? Wth
@exploreyourdreamlife 2 months ago
Your video is a testament to your passion and dedication. What are some key strategies that Vidyut Naware and his team employ to ensure responsible AI application at PayPal? As someone who runs a dream interpretation channel, I'm always seeking advice on how to enhance the engagement and depth of my videos. I'm truly appreciative of this insightful discussion with Vidyut Naware and have liked and subscribed to the channel for more enlightening conversations on AI and machine learning research at PayPal.
@stl8k 2 months ago
Great point about the tech-agnostic security considerations of freeform text input at the 22-minute mark.
@simonpeter9617 2 months ago
Informative!
@SuperIntelligence314 2 months ago
This was great
@raviwelcome19 2 months ago
Every episode is an interesting one.
@igorg.8624 2 months ago
This is one of the most complex episodes I've ever listened to on AI. A ton of complicated jargon that only experts can comprehend. I'm ambitious enough to stick with it until I master this one day 👍
@TheNitroPython 2 months ago
This is a good point. The claim that LLMs exhibit emergent behaviors seems to hold only for coding-related tasks.
@kuoldeng4568 2 months ago
This is fantastic!
@matippetts 2 months ago
This blew my mind because, if I understand it correctly (I'm not at all sure I do), Sanborn is offering a solution to one of the most vexing and morally weighty issues in the field of AI: the explainability of results. In other words, the inferred group represents a formal scientific theory of the network. Further, for any pair of systems that share a group (e.g. a cortical region and an artificial neural network), each element is a computational model of the other (in the sense of Marr's levels of explanation).
@lilimeng1103 2 months ago
The paper "Hidden Technical Debt in Machine Learning Systems" shall be this 2015 NeurlPS one: papers.nips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf
@steve5nash 2 months ago
Using an LLM to generate videos to train robots on seems like a bit too big a leap.
@AdvantestInc 2 months ago
How do you envision the integration of video models in everyday tech influencing user experience?
@shadowskullG 2 months ago
Will there be a talk about Devin, the new "AI software engineer"?
@sabaokangan 2 months ago
Thank you so much for sharing with us
@MrWombatLOVE 3 months ago
@raviwelcome19 3 months ago
Instruction tuning is better than fine-tuning.
@raviwelcome19 3 months ago
Great
@miriamploude3175 3 months ago
"Promo sm"
@sara.togamika 3 months ago
Interesting as always, but could you restore the previous music? It was energizing, much better than the boring new one...
@cameljoe212 3 months ago
Can we utilize AI to make it cool enough outside to still grow food? No? Then who fucking cares? All of this is just a rest stop on the road to extinction. Like quick, throw everybody out of a job and make sure things are super extra miserable right as the average idiot is just starting to realize it’s not supposed to be 91 out in February. We need rent based feudalism to really round this hellscape out it seems. I bet AI can help with bringing that to fruition. Looks like endless “progress” was a bad idea. Embrace the greater good. Stop layoffs when you made record profits. Heed the warnings of history before the mobs of angry people come for bread or blood. But you won’t. You will continue grinding. Continue building a tool of oppression for 1046 people to use. It’s all you know and you might as well be a beaver felling trees at this point: destruction is your base nature.
@waxwingvain 3 months ago
To be honest, if you ask me what I ate this morning or yesterday, we kind of do the same. We first retrieve an image of where we were or what we were doing, in case we forgot what we ate, and then procedurally connect the dots until we reach the answer. Knowledge is hierarchical.
@aksi221 3 months ago
This gives me hope! And it's further proof that AI art is actually art theft.
@kiryllshynharow9058 3 months ago
And one more question. Here you are talking about the role of LLM-based intelligent agents in RL tasks for goal setting and evaluation functions, but the underlying concept of LLMs themselves is to select the most likely next token. Moreover, effective video generation using a similar approach has recently been demonstrated. The strategy is also convenient because it does not require a huge amount of labeled data: it is enough to slide a window along the text (or data of another modality). Why not use a transformer to generate the next action natively, instead of a word or a video frame, trained on logs or video recordings of possible behavior (like simulation training)? This approach looks natural (and is no more expensive than video generation). What is currently known about research in this direction?
@trollolotommy 2 months ago
decision transformers
@kiryllshynharow9058 3 months ago
23:44 Isn't it redundant to generate a goal/state description explicitly, when it would be sufficient to operate with comparisons in the embedding space? Or is this just a popular-science explanation for a wider audience? Or am I overlooking some technical reasons?
@Amir-vn2wx 3 months ago
Shiraziooooo!
@Andromeda26_ 3 months ago
Very informative! Thank you. Is there any chance that this model will become available under Apache 2, or even as a paid API?
@RappinAcoustic 3 months ago
Ben Zhao's work will protect millions of artists from being robbed by the sociopaths behind this new wave of AI companies.
@keyboardisbliss4051 3 months ago
This is such an amazing discussion! It's unfortunate that I only just found out about this YouTube channel. Such brilliant conversations on emerging trends and technologies!