I think it depends on the model. For IR systems, the point of embeddings is to map queries and corresponding passages to nearby locations, and that does not necessarily have to be invertible.
@@amanbansal82 Agreed. Seems like the authors applied pattern recognition to embeddings to determine the plaintext inputs. It's *cool*, but isn't it the same process used whenever you try to determine the input for any encoded text where the algorithm is unknown? What would be truly paper-worthy is if the authors reverse engineered the embedding model's algorithm using only the vector output! (I didn't read the paper, just armchair quarterbacking here.)
Yeah, it's not a huge revelation, sure, but the work should be applauded for its results, if only as a benchmark paper. 50 steps, 32 tokens, 92%. Awesome, thanks!
Could this become a benchmark for embeddings? If you can develop an embedding that preserves the information for longer sequences, it would be a more useful embedding than one that does not.
Why is this surprising? Vector embeddings consist of a huge amount of data compared to, say, the ASCII byte representation of the text per token, like orders of magnitude more. Embeddings are basically the opposite of compression; they're sort of a maximal representation of text.
Embedding models were never trained to keep a "maximal representation of the text". That's just a characteristic that naturally emerges during training. It's not surprising that we can decode the encoded sentence, but it's interesting to see how much we could decode. The method used is also quite interesting and very simple. You can try to think of what other tasks could benefit from a language model iterating on its own predictions.
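The size gap mentioned above is easy to check with back-of-the-envelope numbers (the 1536-dimension float32 embedding here is an illustrative assumption, not taken from the paper):

```python
# Rough storage comparison: a sentence stored as UTF-8 bytes vs. stored as
# a float32 embedding vector (1536 dims chosen for illustration).
text = "The quick brown fox jumps over the lazy dog"
text_bytes = len(text.encode("utf-8"))   # 43 bytes of raw text
embedding_bytes = 1536 * 4               # 6144 bytes (float32 = 4 bytes/dim)
ratio = embedding_bytes / text_bytes
print(text_bytes, embedding_bytes, round(ratio))  # 43 6144 143
```

So even for a full sentence, the embedding uses two orders of magnitude more storage than the text itself, which is the sense in which there is "room" for the original content to survive inside it.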
I wonder if scrambling the dimensions (consistently across all vectors) wouldn't preserve the desirable properties (similarity search, distance) while adding a way to obfuscate the initial content.
@bgaRevoker I guess you'd have to assume that the embedding model and permutation matrix are unknown to an attacker. The permutation space is factorial in the length of the vector, so it isn't unthinkable that this would actually be viable encryption. However, if your shuffled vector is leaked, it's possible to statistically reconstruct the permutation, especially if the attacker knows key features of your dataset.
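The first half of the idea checks out mechanically: a fixed permutation of the dimensions leaves dot products (and hence cosine similarities and Euclidean distances) unchanged, so similarity search still works on the shuffled vectors. A toy sketch with random data:

```python
# A fixed, secret permutation applied to every vector preserves dot
# products exactly (up to float rounding), so nearest-neighbor search
# behaves identically on the shuffled embeddings.
import random, math

random.seed(0)
dim = 8
perm = list(range(dim))
random.shuffle(perm)                 # the "secret key": one fixed permutation

def shuffle_vec(v):
    return [v[i] for i in perm]

a = [random.random() for _ in range(dim)]
b = [random.random() for _ in range(dim)]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert math.isclose(dot(a, b), dot(shuffle_vec(a), shuffle_vec(b)))
```

The security half is the weak part, as the reply notes: the per-dimension value distributions are untouched by shuffling, which is exactly what a statistical reconstruction attack would exploit.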
I realised this when I was considering it for search data. Once I realised it was not secure, I wound up setting the metadata to the actual text to save on a database query. But this isn't that worrisome: if you're getting your embeddings from an API, then you've already lost privacy. And if you can host your own embedding model, Chroma DB is easy enough!
27:35 I found it really interesting how the nuance of a text embedding correlates with the highest "spatial frequency" it has, as if the information is constructed from an overlay of frequencies, as in a Fourier transform.
Could we make it even dumber? Disregard the model M1 and only train M0. Have M0 make a couple of predictions, take the 2 best, and tell an LLM which direction it should go from these predictions to reach the real thing (is the real sentence embedding between the two, or closer to one than the other? should we move even further in the direction of one of them? etc.). Run many iterations with beam search and see how the result compares to the results in this paper. Could be an interesting experiment, and not even very expensive to pull off.
"Text Embeddings Reveal (Almost) As Much As Text" Wow, good to know they do the thing we designed them to do. Oh wait, you thought they were some sort of secure representation....... you're fired.
Really interesting topic. Great video. In a way, the embedding acts as a lossy compression, kinda-sorta. “Uncompressing” requires one to extract information encoded as statistical relationships. …. or so I’ll claim. :)
I'm not sure it even is compression, though. Isn't it just a transformation? I think each word becomes a 512-dimensional vector, which is probably about 2048 bytes.
I'm definitely using the word "compression" very loosely. The research is interesting because the results were surprising: the vectors contain more info than expected, and more info than is (perhaps) required. In particular, the surprise was that short text passages could be recovered. Some of the info isn't explicitly encoded in tokens; to extract it, one must know how the data was originally encoded. Seems "compression-y" to me. This stands in contrast to one-way transformations, where the original input data cannot be extracted post-transform.
But... vector databases are *supposed* to be able to recover the information. They are often used as auxiliary memory for an LLM: you pair a smaller model with a vector database containing the information you want to query, and the retrieved information is inserted into the LLM's context so it can answer based on "stored facts", reducing the chances of hallucinations. Being able to retrieve names, dates, etc. accurately is an intended use, not something unexpected.
That's because vector databases store the text alongside the embedding. The embedding is used for search, which then returns the stored text. The approach in the video recovers the text purely from the embedding itself.
❤️ Amazing content with several valuable takeaways:
00:00 Text embeddings can reveal almost as much information as the original text.
02:09 Text embedding inversion can reconstruct the original text.
06:35 The quality of the initial hypothesis is crucial for text embedding inversion.
12:00 Editing models are used to refine the text hypothesis.
14:32 The success of text embedding inversion depends on the absence of collisions in embedding space.
21:20 Training a model for each step in the inversion process can be simplified.
21:24 Text embeddings contain a significant amount of information.
26:10 Adding noise to embeddings can prevent exact text reconstruction.
26:15 The level of noise in embeddings affects reconstruction and retrieval.
31:20 The length of sequences can impact reconstruction performance.
35:24 The longer the sequence length, the more difficult it becomes to represent the index of each token.
Crafted by Notable AI Takeaways.
UnCLIP showed you could decode image embeddings with high fidelity; CapDec did the same for text embeddings. Not only does this feel like a plain-sight revelation, but it's been done at least two times that I'm aware of so far.
No, that's not an apples-to-oranges comparison. Let's say I have the number array [3,5,7,9,11,...,101]. That's 50 numbers, which at one byte each would take 50 * 1 = 50 bytes. However, we can see that we can just fit the line y = 2x + 1, so all we really need to store are the m and c values; then we can use this function to regenerate the entire series. So this series can be compressed down to just 2 bytes.
Even if we consider just 1 bit of relevant information stored in each float, it's still a lot: 1536 bits, at roughly 15 bits per token (about log2 of a typical vocabulary size), gives 1536/15 ≈ 102 tokens. That's actually not even a sentence, but rather a short paragraph of text consisting of ~3-5 sentences.
@@herp_derpingson But you have no information about the function. You can't reconstruct the function from just two numbers; it could be some other type of function entirely, like y=2x^(1).
@@Restrocket The amount of information you'd need to distinguish this function from all other possible functions is infinite. In the domain of functions y=mx+b you only need 2 bytes, but in other domains you might need more or less. This is why it's impossible to determine how many parameters are optimal for learning a specific function: it may not even be possible to represent the data in the domain you chose (i.e. y=a/x^b).
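The thread's point is the classic model-vs-data split: the two stored parameters only "compress" the series because both sides have already agreed on the model family. A concrete check of the example above:

```python
# The 50-number series from the example collapses to two parameters (m, c),
# but only because we agree in advance that the model is y = m*x + c.
series = list(range(3, 102, 2))        # [3, 5, 7, ..., 101], 50 numbers

m, c = 2, 1                            # the entire "compressed" payload
reconstructed = [m * x + c for x in range(1, 51)]
assert reconstructed == series
```

The same two bytes decoded under a different agreed-upon family (say y = a/x^b) would reproduce nothing of the sort, which is the reply's objection: the model family itself carries information that isn't counted in the 2 bytes.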
Not far into the video, but given that names are often very relevant to how completions ought to go (they may be highly informative concepts), I'm not at all surprised you can invert to get back to names. I'm guessing that, generally speaking, anything highly informative for continuation is going to get a lot of attention and will not be fully abstracted away in the embedding.
Hi Yannic, please consider making a video on this paper: "Representation Engineering: A Top-Down Approach to AI Transparency". It is one of the most interesting papers this year!
Do embedding models advertise themselves as cryptographic hash functions? If so, this would be news. If not, protect them as you would plaintext information.
Important to know the privacy implications of embedding models: you can optimize an input string until its embedding matches the target embedding, which means you've found the original input (or something that collides with it).
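That search can be made concrete in miniature. The "embedder" below is a toy stand-in (character-bigram counts), and the search is exhaustive rather than the model-guided optimization real attacks use, but it shows the principle: with only the vector in hand, you can rank candidate strings by embedding distance and recover the input.

```python
# Toy version of embedding inversion by candidate search: the attacker
# sees only `target` and recovers the input by minimizing embedding
# distance over candidate strings. `embed` is a hypothetical stand-in.
import itertools, math

def embed(text):
    # stand-in embedder: sparse vector of character-bigram counts
    vec = {}
    for pair in zip(text, text[1:]):
        vec[pair] = vec.get(pair, 0) + 1
    return vec

def dist(u, v):
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0) - v.get(k, 0)) ** 2 for k in keys))

target = embed("cab")                    # attacker only sees this vector
candidates = ("".join(t) for t in itertools.product("abc", repeat=3))
best = min(candidates, key=lambda s: dist(embed(s), target))
assert best == "cab"                     # input recovered exactly
```

Brute force obviously doesn't scale past toy alphabets; the paper's contribution is making this search tractable for real embedding models by training a corrector model instead of enumerating.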
Really interesting, but anyone with a business perspective should have taken this into account anyway. Don't put any truly confidential data into your vector database. That still leaves a huge opportunity for enterprise search. Nice overview though!
Text contains information. Embeddings + model contain partial (or even full) information from the text. I'm a bit unsure what is so surprising about discovering that you can invert an embedding back to text... It's kind of like finding out that you can recover 90% of an original PNG image from a JPEG. The method of how exactly they do it is useful to know, though.
I still don't really understand why this is a surprise. We told a machine to learn a bunch of information. And now we're like "OH MY GOD WHY IS IT TELLING PEOPLE THE INFORMATION??!!!!" Obviously that was going to happen. We never taught it to keep secrets. We never even taught it the notion of secrets.
Yup. It's like you lossy-compressed a large raw image into a JPG and are surprised that you can make out large amounts of what the original image was. It's literally called an encoder-decoder in LLMs.
It seems like there is a false assumption here that embeddings revealing data is bad. If you need embeddings but don't want the data to be revealed, just preprocess the text to replace any specific word with a generic one.
@@SiiKiiN I can see why it might be useful to develop an LLM which innately knows how to keep secrets. Imagine the power and good that could be achieved by using it to find patterns in medical records. But yeah, unless you deliberately engineer that in, I don't know why anyone ever expected them not to leak training data.
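The preprocessing idea from a couple of comments up can be sketched simply. A crude regex stands in here for real named-entity recognition (which is what you'd actually want for medical records); the pattern and placeholder are made up for illustration:

```python
# Sketch of redact-before-embedding: replace specific identifiers with
# generic placeholders so the embedding never contains them. The naive
# capitalized-pair regex is a stand-in for a proper NER pass.
import re

REDACTIONS = {
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b": "[NAME]",   # crude person-name pattern
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",          # US SSN-shaped numbers
}

def redact(text):
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("Jane Doe met the auditor"))  # [NAME] met the auditor
```

You then embed `redact(text)` instead of the raw text; inversion can at best recover the placeholders. The catch is recall: anything the redaction pass misses is still fully exposed to inversion.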
While this is going down, the EU has already made some rules for AI, but I doubt they will be enough. The problem in the EU is that every country has to vote on laws, and there are many countries that are as good as dictatorships. This introduces a big weakness in the final conclusions. I think we need more EU as a whole, so that laws get streamlined.