Why Training and Inference Compute Will Always Be Roughly The Same 

Finxter

If the training-inference tradeoff holds across a sufficiently wide range of training compute and inference compute values, we should expect similar amounts of computational resources to be spent on training large models and on running inference with them. Phrased differently, we should not expect one of these categories of expenditure to dominate the other by an order of magnitude or more in the future. This result also appears to be robust to plausible uncertainty around the size of the tradeoff, i.e. to how many orders of magnitude of extra training compute we must pay to reduce inference costs by one order of magnitude.
Source: epochai.org/bl...
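The claim above can be illustrated with a toy cost model (this is an illustrative sketch, not Epoch's actual analysis: the functional form `c_inf ∝ C_train^(-1/k)` and all parameter values below are assumptions). Suppose reducing per-query inference cost by one order of magnitude requires `k` extra orders of magnitude of training compute. Minimizing total cost over the training budget then pins the ratio of total inference spend to training spend at exactly `k`, so for plausible `k` near 1 the two categories end up within roughly an order of magnitude of each other:

```python
def optimal_split(n_queries: float, a: float, k: float):
    """Toy model: per-query inference cost is a * C_train**(-1/k),
    so total cost is T(C) = C + n_queries * a * C**(-1/k).
    Setting dT/dC = 0 gives the closed-form optimum:
        C_train* = (n_queries * a / k) ** (k / (k + 1))
    All parameters here are illustrative assumptions, not measured values.
    """
    c_train = (n_queries * a / k) ** (k / (k + 1))
    inference_total = n_queries * a * c_train ** (-1 / k)
    return c_train, inference_total

# At the optimum, inference_total / c_train == k for any n_queries and a,
# so the two spending categories stay within ~1 order of magnitude
# as long as the tradeoff exponent k itself is within ~1 OOM of 1.
for k in (0.5, 1.0, 2.0):
    c_train, c_inf = optimal_split(n_queries=1e9, a=1e6, k=k)
    print(f"k={k}: inference/training spend ratio = {c_inf / c_train:.2f}")
```

The ratio being exactly `k` (independent of demand `n_queries` or the cost scale `a`) is what makes the conclusion robust to uncertainty about the size of the tradeoff.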
♥️ Join my free email newsletter to stay on the right side of change:
👉 blog.finxter.c...
Also, make sure to check out the AI and prompt engineering courses on the Finxter Academy:
👉 academy.finxte...
🚀 Prompt engineers can scale their reach, success, and impact by orders of magnitude!
You can get access to all courses by becoming a channel member here:
👉 / @finxter

Published: 1 Oct 2024
