Computer security, ethical hacking, red teaming and technology at large. Some artificial intelligence, machine learning and other fun things once in a while. Learn the hacks, stop the attacks!
Information on this channel is provided for research and educational purposes to advance understanding of attacks and countermeasures to help secure the Internet. Penetration testing requires authorization from proper stakeholders. I do not support or condone illegal hacking.
Great walk through I’ve been following along for a while now, the ascii smuggling tool is great. Is there a public place where we could try this tool out via a bug bounty program or similar? When I looked at the in scope items hallucinations and attacks like DAN are out of scope.
Thanks for watching! 🙏 Great question, it depends on program. bug bounty programs (and industry at large) are a bit behind when it comes to considering novel LLM appsec issues, and there end to end but also long term implications. I often have lengthy threads with companies behind the scenes to help educate and explain, and it always starts with "not applicable", "model safety" issue,... and eventually turns into a fix/improvements - including a few findings about ASCII smuggling I hope to share in coming weeks/months. To explore and research i often create small toy apps myself to debug and help understand what could go wrong. Again, thanks for watching and let me knowing there is any specific topic you'd like me to cover in future?
@@embracethered honestly I’ll be reading the blog and watching regardless. I really enjoy the data exfiltration techniques you shared. But I guess anything that would carry more impact to anyone implementing AI in their web application’s or areas that you deem has the most impact on the underlying models.
It was just a script that filters the web server log for requests from ChatGPT user agent and only shows the query parameter and no request IP - so it's easier to view. You can just grep /var/log/ngninx/access.log also (assuming you use nginx on Linux). I can see if I still have the script somewhere but it wasn't anything special.
Depends, a common source to get started is: github.com/danielmiessler/SecLists. Also, quite significant are the mutations and rulesets that are being used by the way.
Attacker is causing the Chatbot to send past chat data to attackers server (in this case a google doc is capturing the exfiltrated data). Check out the linked blog post, explains it in detail.
It's about a Jupyter Notebook that allows to self-study prompt injection and to experiment and play around with the technique by solving a set of challenges.
Thanks! It's just a custom image I created. drew a white circle on black background - then zigzagged that splash effect over with a brush and then use a filter for webcam in OBS to blend it in.
You can read up on the details here: embracethered.com/blog/posts/2023/google-bard-data-exfiltration/ And if you want to understand the big picture around LLM prompt injections check out this talk m.ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qyTSOSDEC5M.html Thanks for watching!
I just tried this, but the only difference is I was capturing this information over HTTP instead of SMB. Does that make a difference? I ask because I was trying to generate a proof of concept where I controlled the username and password going in, but it wouldn't crack. I tried four different times and it didn't work. Is something different when these are captured over HTTP instead of an SMB connection?
Thanks. I had a colleague try it too, and got the same result as I did. This is for a pentest proof of concept, so I’m not in position to relay unfortunately.
Thanks for watching and the note. I think that misses the point that the LLM can attack the hosting app/user, so developers/users can't trust the responses. this includes confused deputy issues (in the app), such as automatic tool invocation.
@@embracethered Agreed! So 2 big points: 1. Never put info in LLM context you don't want to leak. 2. Never put untrusted input into LLM context, it's like executing arbitrary code you have downloaded from the internet on your machine. LLM inputs must always be trusted, because the LLM will "execute" it in "trusted mode".
@@MohdAli-nz4yi (1) I agree we shouldn't put sensitive information, like passwords, credit card number, or sensitive PII into chatbots. For (2) The challenge is that everyone wants to have an LLM operate over untrusted data. And that's the problem that hopefully one day will have a deterministic and secure solution. For now the best advise is to not trust the output. e.g. Developers shouldn't blindly take the output and invoke other tools/plugins in agents or render output as HTML, and users shouldn't blindly trust the output because it can be a hallucination (or a backdoor), or attacker controlled via an indirect prompt injection. However, some use cases might be too risky to implement at all. And its best to threat model implementations accordingly to understand risks and implications.
Great video! Very informative. Interesting to see how the LLMs ability to "pay attention" is such a large exploit. I wonder if mitigating this issue would lead to LLMs being overall less effective at following user instructions
Thanks for watching! I believe you are correct, it's a double edged sword. The best mitigation at the moment is to not trust the responses. Unfortunately it's hence impossible at the moment to build a rather generic autonomous agent that uses tools automatically. It's a real bummer, because i think most of us want secure and safe agents.
Is this blocked on some routers? I’ve tried this with my current network at the house and “key content” doesn’t show on the screen. I am running as administrator and previous networks are showing key content.
Great work Johann, as always! The more we give access to other data sources. which include documents, the more we expose each other to indirect injection attacks. It is worth pointing out that instructions could have been made in white ink size 0.1, making the document look normal!
When does bard decide to load and use a doc? Is it only when stated in the prompt? Or can we set up a file that will be implicitly called on every prompt? Something like AI_SAFETY_MANIFEST_-_MUST_BE_READ_ON_EVERY_USER_PROMPT.doc 😏
Read the post, really good I guess these sort of procedures will work across many different stacks and companies Also I wonder if you log your attempts, probably allot of wisdom can be drawn from your first attempt evolving to the last. You got it on the 10th try. Maybe showing a smart llm all 10 of those could find patterns. Effectively creating a prompt optimizer thay bring you faster results next time. All the best
Thanks for the note! Yes, this is a very common flaw across LLM apps. Check out some of my other posts about Bing Chat, ChatGPT or Claude. Yep, on the iteration count - spot on. A lot of initial tests were around basic validation that injection and reading of chat history worked, then the addition of Image rendering, then in context learning examples to increase reliability of the exploit.
in development environment the cookies are setting but in production environment the cookies are not setting what is the solution for this issue please help
Hi, for SSH agent forwarding to work, the ssh-agent service must first be initiated on our local machine. However, I'm confused that does it work there as well? Upon reviewing the SSH source code, it is evident that SSH utilizes the "AF_UNIX" family to establish a connection to the ssh-agent socket.
Thanks for explaining this. I guess it would also work with "private" instances of ChatGPT or equivalent system, as long as the user input is not sanitized ...
Thanks for watching. I’m not sure how private instances work (or what they exactly are), but presumably yes, unless they put a configurable Content Security Policy or some other fix in place to not allow images to render/connect.
Can you please explain to me what is the saturn you typed in the browser? Is this a custom defined protocol to connect to your machine? and how can I do the same? Thank you!
@aitboss85 the most simple way to do this without dns is to just add the name you want (ie saturn) and the IP address to your hosts file. Of course, if this is a private IP it will only work on that network unless you have additional things set up