Investigating LLM Jailbreaking of Popular Generative AI Web Products
Summary
One app we tested is still vulnerable to the “<strong>repeated token attack</strong>,” which is a jailbreak technique used to leak a model’s training data. However, this attack did not affect most of the tested apps.
View original source →