
Serving 100s of LLMs on 1 GPU with LoRAX - Travis Addair | Stanford MLSys #84 

Stanford MLSys Seminars

Episode 84 of the Stanford MLSys Seminar Series!
Serving 100s of Fine-Tuned LLMs on 1 GPU with LoRAX
Speaker: Travis Addair
Abstract:
Smaller, specialized language models such as LLaMA-2-7b can outperform larger general-purpose models like GPT-4 when fine-tuned on proprietary data to perform a single task. But serving many fine-tuned LLMs in production can quickly add up to tens of thousands of dollars per month in cloud costs when each model requires its own dedicated GPU resources. LoRA Exchange (LoRAX) is an LLM inference system built for serving numerous fine-tuned LLMs using a shared set of GPU resources. With LoRAX, users can pack over 100 task-specific models into a single GPU, significantly reducing the expenses associated with serving fine-tuned models by orders of magnitude over dedicated deployments. In this seminar, we'll explore the challenges of serving fine-tuned LLMs in production, and the motivation behind building a system like LoRAX. We'll introduce parameter efficient fine-tuning adapters like Low Rank Adaptation (LoRA), and show how LoRAX dynamically loads and exchanges different adapters at runtime, leveraging a tiered weight cache to speed up this exchange process. Additionally, we'll show how LoRAX achieves high throughput with continuous multi-adapter batching, allowing requests from different fine-tuned adapters to batch together within a single decoding step.
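For readers unfamiliar with the mechanics, here is a minimal NumPy sketch (illustrative only, not LoRAX's actual code; the adapter names and shapes are made up) of the two ideas the abstract refers to: a LoRA adapter replaces a full fine-tuned weight with a low-rank update B·A applied on top of the frozen base weight, and multi-adapter batching lets requests that target different adapters share the expensive base-model matmul within a single decoding step, each adding only its own cheap rank-r correction.

```python
# Illustrative sketch only -- not the LoRAX implementation.
# (1) A LoRA adapter adds a low-rank delta B @ A to a frozen base weight W.
# (2) Requests using different adapters can share one batched base matmul,
#     with a small per-request low-rank correction applied afterwards.
import numpy as np

d_model, rank = 64, 8
W = np.random.randn(d_model, d_model) * 0.02  # frozen base weight, shared by all adapters

# Hypothetical adapter registry: each adapter is just two small matrices.
# (LoRA normally initializes B to zero before training; random values are
# used here only so the two adapters produce visibly different outputs.)
adapters = {
    "customer-support": (np.random.randn(rank, d_model) * 0.01,   # A: d_model -> rank
                         np.random.randn(d_model, rank) * 0.01),  # B: rank -> d_model
    "sql-codegen":      (np.random.randn(rank, d_model) * 0.01,
                         np.random.randn(d_model, rank) * 0.01),
}

def lora_forward(x, adapter_id):
    """y = W x + B (A x): base output plus the adapter's low-rank delta."""
    A, B = adapters[adapter_id]
    return W @ x + B @ (A @ x)

def batched_multi_adapter_forward(xs, adapter_ids):
    """One decoding step for a batch whose requests use different adapters.
    The expensive dense matmul against W is shared across the batch; each
    row then gets its own cheap rank-r correction."""
    X = np.stack(xs, axis=1)                  # (d_model, batch)
    out = W @ X                               # shared base-model compute
    for i, aid in enumerate(adapter_ids):
        A, B = adapters[aid]
        out[:, i] += B @ (A @ X[:, i])        # per-request low-rank delta
    return out

# Two requests hitting two different fine-tuned models in the same batch.
x1, x2 = np.random.randn(d_model), np.random.randn(d_model)
out = batched_multi_adapter_forward([x1, x2], ["customer-support", "sql-codegen"])
print(out.shape)  # (64, 2)
```

Because each adapter is only two rank-r matrices, swapping adapters means moving megabytes rather than gigabytes, which is what makes the dynamic loading and tiered weight cache described above practical.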
Bio:
Travis Addair is co-founder and CTO of Predibase, the AI platform for engineers. Within the Linux Foundation, he serves as lead maintainer for the Horovod distributed deep learning framework and is a co-maintainer of the Ludwig automated deep learning framework. In the past, he led Uber's deep learning training team as part of the Michelangelo machine learning platform.
--
Stanford MLSys Seminar hosts: Simran Arora, Dan Fu
Twitter:
@simran_s_arora
@realdanfu
--
Check out our website for the schedule: mlsys.stanford.edu
Join our mailing list to get weekly updates: groups.google....
#machinelearning #ai #artificialintelligence #systems #mlsys #computerscience #stanford

Published: Oct 1, 2024

Comments: 8
@voncolborn9437 9 months ago
Great presentation. It is interesting to see the practical side of running a bunch of LLMs. Ops makes it happen. Coming from the old, really old, school of computing with massive multi-user, time-share systems, it is interesting to see how no matter how much computing changes, aspects of it remain the same. Throughput, latency, caching, and scheduling are still central. All that seems to have changed is the problem domain. We do, indeed, live in interesting times.
@conan_der_barbar 10 months ago
great talk! still waiting for the open source release 👀
@Gerald-iz7mv 6 months ago
Hi, do you have any links to benchmarks you can run to measure latency and throughput for different models and frameworks, etc.?
@suleimanshehu5839 9 months ago
Please create a video on fine-tuning a MoE LLM such as Mixtral 8x7B using LoRA adapters within your framework.
@fastcardlastname3353 10 months ago
This could change the landscape of multi-agent systems if it delivers what's promised.
@mohamedfouad1309 10 months ago
GitHub link 😅
@nithinrao7191 10 months ago
Second
@absbi0000 10 months ago
First