Researchers found a critical jailbreak in the ChatGPT Atlas omnibox that lets attackers slip malicious prompts past its safety checks.