Text Embeddings Reveal (Almost) As Much As Text 

Yannic Kilcher
263K subscribers
40K views

Published: Sep 26, 2024

Comments: 93
@AlignmentLabAI · 9 months ago
What's hilarious is that this is a revelation despite the point of text embeddings being to represent the text as perfectly as possible
@terjeoseberg990 · 9 months ago
Exactly.
@scign · 9 months ago
I bet this paper comes from the same authors of the hit classic "A blurry representation of a photo reveals (almost) as much as the original photo"...
@amanbansal82 · 9 months ago
Ikr, I genuinely don't understand why this topic was studied.
@donnychan1999 · 9 months ago
I think it depends on the model. For IR systems, the point of embeddings is to map queries and corresponding passages to nearby locations; that does not necessarily have to be invertible.
@seadude · 9 months ago
@amanbansal82 Agreed. Seems like the authors applied pattern recognition to embeddings to determine the plaintext inputs. It's *cool*, but isn't that the same process used whenever trying to determine the input for any encoded text where the algorithm is unknown? What would be truly paper-worthy is if the authors reverse engineered the embedding model algorithm using only the vector output! (I didn't read the paper, just armchair quarterbacking here.)
@lobiqpidol818 · 9 months ago
Wow it's almost as if text embeddings are some sort of mathematical representation of the text they are based on.
@terjeoseberg990 · 9 months ago
LOL
@sunnohh · 9 months ago
Almost as if this were a parlor trick or something😂
@yakmage8085 · 9 months ago
Yeah, it's not a huge revelation, sure, but the work should be applauded for its results simply as a benchmark paper. 50 steps, 32 tokens, 92%. Awesome, thanks
@mohammadxahid5984 · 9 months ago
YK is back with paper summary. Thank you, YK
@jddes · 9 months ago
Your paper breakdowns are great, I'd love more
@agenticmark · 9 months ago
LOVE your videos! You make these papers come alive.
@krzysztofwos1856 · 9 months ago
Could this become a benchmark for embeddings? If you can develop an embedding that preserves the information for longer sequences, it would be a more useful embedding than the one that does not.
@googleyoutubechannel8554 · 9 months ago
Why is this surprising? Vector embeddings consist of a huge amount of data compared to, say, the ASCII byte representation of the text per token; orders of magnitude more. Embeddings are basically the opposite of compression; they're sort of a maximal representation of text.
@Laszer271 · 9 months ago
Embedding models were never trained to keep a "maximal representation of the text". That's just a characteristic that naturally emerges during training. It's not surprising that we can decode the encoded sentence, but it's interesting to see how much we could decode. The method used is also quite interesting and very simple. You can try to think of what other tasks could benefit from a language model iterating on its own predictions.
@avb_fj · 9 months ago
Lesson: if you can, anonymize your text and remove PII before putting your embeddings into a third party vector db…
@bgaRevoker · 9 months ago
I wonder if scrambling the dimensions (consistently across all vectors) wouldn't preserve the desirable properties (similarity search, distance) while adding a way to obfuscate the initial content.
@whatisrokosbasilisk80 · 9 months ago
@bgaRevoker I guess you'd have to assume that the embedding model and permutation matrix are unknown to an attacker. The permutation space is factorial in the length of the vector, so it isn't unthinkable that this would actually be viable encryption. However, if your shuffled vector is leaked, it's possible to statistically reconstruct it, especially if the attacker knows key features in your dataset.
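The property being discussed is easy to check: applying one fixed permutation to every vector leaves all dot products (and therefore cosine similarities and Euclidean distances) unchanged, so similarity search still works. A minimal sketch with NumPy, using toy random vectors rather than a real embedding model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for stored embeddings: 5 vectors of dimension 8.
vectors = rng.normal(size=(5, 8))

# One fixed permutation of the dimensions, applied to every vector.
perm = rng.permutation(8)
shuffled = vectors[:, perm]

# Pairwise dot products (and hence similarities and distances)
# are unchanged by the shared permutation.
original_sims = vectors @ vectors.T
shuffled_sims = shuffled @ shuffled.T
print(np.allclose(original_sims, shuffled_sims))  # True
```

As the reply above notes, this is obfuscation rather than encryption: the geometry of the vectors is untouched, so the similarity structure itself still leaks information.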
@SloanMosley · 9 months ago
I realised this when I was considering it for search data. When I realised it was not secure, I wound up setting the metadata to the actual text to save on a database query. But this isn't that worrisome. If you were getting your embeddings from an API, then you have already lost privacy. And if you can host your own embedding model, Chroma DB is easy enough!
@chenmarkson7413 · 3 months ago
27:35 I found it really interesting how the nuance of a text embedding correlates with the highest "spatial frequency" it has, as if information is constructed from an overlay of frequencies as in Fourier transform.
@Laszer271 · 9 months ago
Could we make it even dumber? Disregard the model M1 and only train M0. Make M0 do a couple of predictions, take the 2 best, and tell an LLM in which direction it should go from these predictions to predict the real thing (e.g. is the real sentence embedding between the two, or closer to one than the other; should we move even further in the direction of one of those predictions, etc.). Make it do many iterations with beam search and see how the result compares to the results from this paper. Could be an interesting experiment and not even very expensive to pull off.
@JonathanFraser-i7h · 9 months ago
"Text Embeddings Reveal (Almost) As Much As Text" Wow, good to know they do the thing we designed them to do. Oh wait, you thought they were some sort of secure representation... you're fired.
@bentationfunkiloglio · 9 months ago
Really interesting topic. Great video. In a way, the embedding acts as a lossy compression, kinda-sorta. “Uncompressing” requires one to extract information encoded as statistical relationships. …. or so I’ll claim. :)
@andybrice2711 · 9 months ago
I'm not sure it even is compression though. Isn't it just a transformation? I think each word is a 512-dimensional vector, which is probably about 2048 bytes.
@drdca8263 · 9 months ago
@andybrice2711 Depends on the length of the text then, I guess? Or... well, a compression algorithm is allowed to make some inputs larger...
@present-bk2dh · 9 months ago
@andybrice2711 Each word is a 512-dimensional vector, but how much of that 512-dim vector is about the word? You seem to be making a big assumption.
@AntoshaPushkin · 9 months ago
Umm... the 512-dim vector is just an internal representation. If the vocabulary is 32k entries, each token is just 15 bits of information
@bentationfunkiloglio · 9 months ago
I'm definitely using the word "compression" very loosely. The research is interesting because the results were surprising. The vectors contain more info than expected, and more info than is (perhaps) required. In particular, the surprise was that short text passages could be recovered. Some of the info isn't explicitly encoded in tokens; to extract it, one must know how the data was originally encoded. Seems "compression-y" to me. This stands in contrast to one-way transformations, where the original input data cannot be extracted post-transform.
@makhalid1999 · 9 months ago
BRO IS BACK WITH REGULAR UPLOADS 🎉🎉🎉
@PMX · 9 months ago
But... vector databases are *supposed* to be able to recover the information. They are often used as auxiliary memory for LLMs, so you can have a smaller model paired with a vector database containing the information you want to query, and the retrieved information gets inserted into the LLM context for it to answer based on "stored facts", reducing the chances of hallucinations. Being able to retrieve names, dates, etc. accurately is an intended use, not something unexpected.
@endlessvoid7952 · 9 months ago
That's because vector databases store the text alongside the embedding. The embedding is for search; the database then returns the text. The approach in the video gets to the text purely from an embedding.
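That design is easy to illustrate: the index stores the raw text next to its embedding, so retrieval returns the text directly and no inversion is needed. A toy sketch in which `embed` is a hypothetical stand-in (a normalized bag-of-words), not a real embedding model:

```python
import math
from collections import Counter

def embed(text: str) -> dict:
    # Stand-in "embedding": normalized bag-of-words, just to make
    # the retrieval loop runnable without a real model.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a: dict, b: dict) -> float:
    return sum(v * b.get(k, 0.0) for k, v in a.items())

# The "vector database": each row stores the embedding *and* the text.
docs = ["the cat sat on the mat",
        "embeddings map text to vectors",
        "paris is the capital of france"]
index = [(embed(d), d) for d in docs]

# Search with the query embedding, return the stored text.
query = embed("what is the capital of france")
best = max(index, key=lambda row: cosine(query, row[0]))
print(best[1])  # -> "paris is the capital of france"
```

The paper's setting is the opposite: only the embedding is available, and the text must be reconstructed from it.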
@jsalsman · 9 months ago
Are there any open source text embeddings with a context window as large as OpenAI's Ada (2048 tokens)?
@drzhi · 9 months ago
❤️ Amazing content with several valuable takeaways:
00:00 Text embeddings can reveal almost as much information as the original text.
02:09 Text embedding inversion can reconstruct the original text.
06:35 The quality of the initial hypothesis is crucial for text embedding inversion.
12:00 Editing models are used to refine the text hypothesis.
14:32 The success of text embedding inversion depends on the absence of collisions in embedding space.
21:20 Training a model for each step in the inversion process can be simplified.
21:24 Text embeddings contain a significant amount of information.
26:10 Adding noise to embeddings can prevent exact text reconstruction.
26:15 The level of noise in embeddings affects reconstruction and retrieval.
31:20 The length of sequences can impact reconstruction performance.
35:24 The longer the sequence length, the more difficult it becomes to represent the index of each token.
Crafted by Notable AI Takeaways.
@StephenRoseDuo · 9 months ago
Wait, seriously tho, why is this surprising?
@gr8ape111 · 9 months ago
Oh no, the vectors we trained to compress the text can be decompressed to reconstruct the text!!! Anyways...
@andybrice2711 · 9 months ago
WHY IS THE INFORMATION RETRIEVAL MACHINE RETRIEVING THE INFORMATION WE GAVE IT?!?!
@gr8ape111 · 9 months ago
@andybrice2711 It's a mystery
@ethansmith7608 · 9 months ago
UnCLIP showed you could decode image embeddings with high fidelity, and CapDec did the same for text embeddings. Not only does this feel like a plain-sight revelation, but it's been done at least two times that I'm aware of so far.
@HoriaCristescu · 9 months ago
Text uses 32 × 15 b = 480 b; the embedding has 1536 × 4 b = 6144 b, which corresponds to 6144 / 15 b ≈ 409 tokens.
@herp_derpingson · 9 months ago
No, that's an apples-to-oranges comparison. Let's say I have the number array [3,5,7,9,11,...,101]; that's 50 numbers, which at one byte each would be 50 × 1 = 50 bytes. However, we can see that we can just plot the line y = 2x + 1, so all we really need to store are the m and c values; then we can use this function to generate the entire series. So this series can be compressed down to just 2 bytes.
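The series example can be checked directly; a small sketch, with the slope and intercept found by inspection rather than fitted:

```python
# The 50 numbers 3, 5, 7, ..., 101 follow y = 2x + 1 for x = 1..50,
# so storing the two coefficients (m, c) reconstructs the whole series.
series = list(range(3, 102, 2))
assert len(series) == 50

m, c = 2, 1  # the entire "compressed" representation
reconstructed = [m * x + c for x in range(1, 51)]
print(reconstructed == series)  # True
```

This is the usual information-theoretic point: the raw byte count of a representation says little about how much irreducible information it carries.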
@TheRyulord · 9 months ago
Different embedding models produce different sized embeddings but yeah, even small embeddings will usually be larger than the text they embed.
@AntoshaPushkin · 9 months ago
Even if we consider 1 bit of relevant information stored in each float, it's still a lot, 1536/15 ≈ 102 tokens. This is actually not even a sentence, but rather a short paragraph of text that consists of ~3-5 sentences
@Restrocket · 9 months ago
@herp_derpingson But you have no information about the function. You can't reconstruct the function from just two numbers; it could be any other type of function, like y=2x^(1)
@Terszel · 9 months ago
@Restrocket The amount of information you'd need to distinguish this function from all other possible functions is infinite. In the domain of functions y = mx + b you only need 2 bytes, but in other domains you might need more or less. This is why it's impossible to determine how many parameters are optimal for learning a specific function, since it may not even be possible to represent the data in the domain you chose (e.g. y = a/x^b)
@Kram1032 · 9 months ago
Not far into the video, but given that names are often very relevant for how completions ought to go (they may be highly informative concepts), I'm not at all surprised you can invert to get back to names. I'm guessing that, generally speaking, anything highly informative to continuation is going to get a lot of attention and will not be fully abstracted away in the embedding.
@brandomiranda6703 · 9 months ago
Why is this surprising?
@ahmadalis1517 · 9 months ago
Hi Yannic, please consider making a video on this paper: "Representation Engineering: A Top-Down Approach to AI Transparency". It is one of the most interesting papers of the year!
@woongda · 9 months ago
Use sentence-transformers, much better than Ada, and host your model and vectors in-house to keep them safe.
@seadude · 9 months ago
Do embedding models advertise themselves as cryptographic hash functions? If so, this would be news. If not, protect them as you would plaintext information.
@mattanimation · 9 months ago
radical dude, thanks.
@kevinaud6461 · 9 months ago
Have you heard of the game semantle? It is virtually this exact concept but you do it as a human.
@oncedidactic · 9 months ago
I was thinking the same thing!
@SamplePerspectiveImporta-hq3ip · 9 months ago
This procedure actually kind of reminds me of diffusion.
@Bengt.Lueers · 9 months ago
Important to know the privacy implications of embedding models: you can optimize the input string until its embedding matches the target embedding, which means you have found the original input.
@ekstrapolatoraproksymujacy412 · 9 months ago
No, this would only be true if the embeddings were calculated and stored with infinite precision
@baz813 · 9 months ago
@yannic I, for one, would be happy to have this Patreon perk of the notes!
@noot2981 · 9 months ago
Really interesting, but anyone with a business perspective should have taken this into account anyway. Don't put any truly confidential data into your vector database. That still leaves a huge opportunity for enterprise search. Nice overview though!
@whatisrokosbasilisk80 · 9 months ago
If I'm running it locally and implement the security controls that I do on my other databases - why would I give af?
@hasko_not_the_pirate · 9 months ago
Now I want to know what comes out when you feed it random vectors. Maybe deep insights about humanity are hidden there. 😄
@alxsmac733 · 9 months ago
I truly don't understand how people are surprised by this. Text embeddings are basically the opposite of something like cryptographic hashes.
@喵星人-f4m · 9 months ago
check information theory
@TheMemesofDestruction · 9 months ago
Dank AI Memes Inc. ^.^
@kmdsummon · 9 months ago
Text contains information. Embeddings plus the model contain partial (or even full) information from the text. I am a bit unsure what is so surprising in discovering that you can revert an embedding back to text... It's kinda similar to finding out that you can recover 90% of an original PNG image from a JPEG. The method of how exactly they do it is useful to know, though.
@andybrice2711 · 9 months ago
I still don't really understand why this is a surprise. We told a machine to learn a bunch of information. And now we're like "OH MY GOD WHY IS IT TELLING PEOPLE THE INFORMATION??!!!!" Obviously that was going to happen. We never taught it to keep secrets. We never even taught it the notion of secrets.
@김성주-h1b · 9 months ago
Totally agree
@dinoscheidt · 9 months ago
Yup. It's like you lossy-compressed a large image into a JPEG and are surprised you can make out large amounts of what the original raw image was. It's literally called an encoder-decoder in LLMs.
@SiiKiiN · 9 months ago
It seems like there is a false assumption that embeddings revealing data is bad. If you need embeddings but don't want the data to be revealed, just preprocess the text to replace any specific word with a generic word.
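A minimal sketch of that preprocessing idea, with purely illustrative patterns (a real PII scrubber would need far more than these toy regexes and a hardcoded name list):

```python
import re

# Illustrative substitution rules: specific tokens become generic
# placeholders before the text is ever embedded.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "<PHONE>"),
    (re.compile(r"\b(?:Alice|Bob|Carol)\b"), "<NAME>"),  # toy name list
]

def scrub(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Email Alice at alice@example.com or call 555-123-4567"))
# -> "Email <NAME> at <EMAIL> or call <PHONE>"
```

The scrubbed text still embeds to something useful for semantic search, but inverting the embedding can only ever recover the placeholders.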
@andybrice2711 · 9 months ago
@SiiKiiN I can see why it might be useful to develop an LLM which innately knows how to keep secrets. Imagine the power and good that could be achieved by using it to find patterns in medical records. But yeah, unless you deliberately engineer that in, I don't know why anyone ever expected them not to leak training data.
@andybrice2711 · 9 months ago
@dinoscheidt Are embeddings even compression though? Converting a word into a 512-dimensional vector is surely making it bigger?
@herp_derpingson · 9 months ago
Where do you find all these papers?
@islandfireballkill · 9 months ago
This paper is on arXiv, which is the de facto standard for AI stuff.
@hanyanglee9018 · 9 months ago
Diffusion is all you need.
@SloanMosley · 9 months ago
I’ve said this for so long
@imagiro1 · 9 months ago
I wonder what this means for hashes, as they can be seen as multidimensional vectors derived from a text, somewhat similar to text embeddings.
@whatisrokosbasilisk80 · 9 months ago
Not even remotely, embeddings conserve information - hashes are cryptographically secure lossy compression.
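The hash side of that contrast is easy to demonstrate with Python's standard hashlib (the inputs are made up): a one-character change produces an unrelated digest, whereas an embedding model would place the two strings close together.

```python
import hashlib

# Two nearly identical strings...
a = "My password is hunter2"
b = "My password is hunter3"

# ...hash to completely unrelated SHA-256 digests: a cryptographic
# hash is deliberately one-way and discards the input's structure.
ha = hashlib.sha256(a.encode()).hexdigest()
hb = hashlib.sha256(b.encode()).hexdigest()
print(ha[:16])
print(hb[:16])
```

Nothing about the similarity of the inputs survives in the digests, which is exactly the property embeddings are designed *not* to have.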
@Bluelagoonstudios · 9 months ago
While this is going down, the EU has already made some rules for AI, but I doubt whether they are going to be enough. The problem in the EU is that every country has to vote on laws, and there are many countries that are as good as dictatorships. This introduces a big weakness into their conclusions. I think we need more EU as a whole, so laws get streamlined more.
@jatinkashyap1491 · 9 months ago
And here poor me was believing all those years that it was common sense 🙂
@JoshBuckm · 9 months ago
Seems like this could be a nice product if used to reverse engineer prompts used to generate social media posts.
@servrcube6932 · 9 months ago
second
@ChocolateMilkCultLeader · 9 months ago
Another massive win for the noisy input gang
@holthuizenoemoet591 · 9 months ago
I like this kind of research, and it's amazing that it hasn't been done before