Тёмный

Jailbreaking LLMs - Prompt Injection and LLM Security 

Mozilla Developer
Подписаться 30 тыс.
Просмотров 2,7 тыс.
50% 1

Building applications on top of Large Language Models brings unique security challenges, some of which we still don't have great solutions for. Simon will be diving deep into prompt injection and jailbreaking, how they work, why they're so hard to fix and their implications for the things we are building on top of LLMs.Simon Willison is the creator of Datasette, an open source tool for exploring and publishing data. He currently works full-time building open source tools for data journalism, built around Datasette and SQLite.

Опубликовано:

 

7 дек 2023

Поделиться:

Ссылка:

Скачать:

Готовим ссылку...

Добавить в:

Мой плейлист
Посмотреть позже
Комментарии    
Далее
Сколько стоят роды мечты?
00:59
Просмотров 902 тыс.
[1hr Talk] Intro to Large Language Models
59:48
Просмотров 2,1 млн
5 LLM Security Threats- The Future of Hacking?
14:01
Просмотров 10 тыс.
Attacking LLM - Prompt Injection
13:23
Просмотров 371 тыс.
Inside AI Security with Mark Russinovich | BRK227
47:17
Hypnotized AI and Large Language Model Security
13:22