Researchers at ETH Zurich created a jailbreak attack that bypasses AI guardrails
A pair of researchers from ETH Zurich, in Switzerland, have developed a method by which, theoretically, any artificial intelligence (AI) model that relies on human feedback, including the most popular large language models (LLMs), could potentially be jailbroken.

Jailbreaking is a colloquial term for bypassing a device or system's intended security protections.