
Energy Star Ratings for AI Models with Sasha Luccioni - 687 

The TWIML AI Podcast with Sam Charrington

Today, we're joined by Sasha Luccioni, AI and Climate lead at Hugging Face, to discuss the environmental impact of AI models. We dig into her recent research on the relative energy consumption of general-purpose pre-trained models vs. task-specific, non-generative models for common AI tasks, and discuss the implications of the significant difference in efficiency and power consumption between the two. Finally, we explore the complexities of energy efficiency and performance benchmarking, and talk through Sasha's recent initiative, Energy Star Ratings for AI Models - huggingface.co/blog/sasha/ene..., a rating system designed to help AI users select and deploy models based on their energy efficiency.
The complete show notes for this episode can be found at twimlai.com/go/687.
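For listeners who want to try the kind of comparison the paper makes, here is a minimal Python sketch using the transformers and codecarbon libraries to estimate the emissions of a small task-specific classifier versus a general-purpose generative model on the same sentiment task. The model checkpoints, prompt, and batch size are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch: compare estimated emissions of a small task-specific
# classifier vs. a general-purpose generative model on the same inputs.
# Requires: pip install transformers codecarbon torch
from codecarbon import EmissionsTracker
from transformers import pipeline

texts = ["The battery life on this laptop is fantastic."] * 100

def measure(run, label):
    tracker = EmissionsTracker(project_name=label, log_level="error")
    tracker.start()
    run()
    kg_co2 = tracker.stop()  # estimated emissions in kg CO2-eq
    print(f"{label}: ~{kg_co2:.6f} kg CO2-eq")

# Task-specific model fine-tuned for sentiment analysis (illustrative choice).
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")
measure(lambda: clf(texts), "task-specific classifier")

# General-purpose generative model prompted to do the same task.
gen = pipeline("text-generation", model="gpt2")
measure(lambda: [gen(f"Sentiment of: {t}\nAnswer:", max_new_tokens=5)
                 for t in texts],
        "generative model")
```

On typical hardware the task-specific classifier uses far less energy per query; that gap is the subject of the episode.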
🔔 Subscribe to our channel for more great content just like this: ru-vid.com?sub_confi...
🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast: twimlai.com/podcast/twimlai/
Follow us on Twitter: / twimlai
Follow us on LinkedIn: / twimlai
Join our Slack Community: twimlai.com/community/
Subscribe to our newsletter: twimlai.com/newsletter/
Want to get in touch? Send us a message: twimlai.com/contact/
📖 CHAPTERS
===============================
00:00 - Introduction
1:47 - Career background
4:19 - Energy consumption of LLMs
7:49 - The "Power Hungry Processing: Watts Driving the Cost of AI Deployment?" paper
12:35 - Trends toward smaller models
14:10 - Climate impacts and concentration of power of large models
20:10 - Task-specific vs. general-purpose models
22:20 - Challenges in current AI research
27:37 - Energy Star Ratings for AI Models
34:39 - Energy efficiency and performance benchmarking challenges
42:32 - Challenges in comparing fine-tuned and generative models
43:57 - Future directions for Energy Star Ratings for AI Models
45:20 - Insights on model cards
🔗 LINKS & RESOURCES
===============================
Energy Star Ratings for AI Models - huggingface.co/blog/sasha/ene...
Power Hungry Processing: Watts Driving the Cost of AI Deployment? - arxiv.org/abs/2311.16863
Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice - arxiv.org/abs/2308.07120
Measuring Data - arxiv.org/abs/2212.05129
📸 Camera: amzn.to/3TQ3zsg
🎙️Microphone: amzn.to/3t5zXeV
🚦Lights: amzn.to/3TQlX49
🎛️ Audio Interface: amzn.to/3TVFAIq
🎚️ Stream Deck: amzn.to/3zzm7F5

Published: 7 Aug 2024

Comments: 2
@FREELEARNING 2 months ago
Very nice podcast. I'm also not a fan of multi-task models; I think task-specific models are fine for many tasks. For example, I find it wasteful to use a billions-of-parameters LLM for sentiment analysis when a smaller BERT-based or RNN-based model can do the same task quite performantly. In short, an LLM being good at generating very human-like text doesn't make it good at all tasks in all domains. People also forget that not all LLMs have the performance of ChatGPT; smaller 7B or 3B models are not that good at a wide variety of tasks.
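A minimal sketch of the commenter's point, assuming the transformers library is installed; the checkpoint name is an illustrative small fine-tuned model, not one named in the episode:

```python
# A ~66M-parameter fine-tuned model handles sentiment analysis without
# a multi-billion-parameter LLM. Requires: pip install transformers torch
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")
n_params = sum(p.numel() for p in clf.model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # tens of millions, not billions
print(clf("This podcast episode was excellent."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```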
@twimlai 1 month ago
LLMs can be a compelling place to start because they're easy to use and good enough at a lot of tasks, but it's important to keep their direct and indirect costs in mind.