Researchers found a critical jailbreak in the ChatGPT Atlas omnibox: according to their report, prompts disguised as URLs can be treated as trusted user input, allowing malicious instructions to bypass safety checks.
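To illustrate the general failure mode being described (not Atlas's actual implementation, which is not public), here is a minimal Python sketch of a hypothetical `handle_omnibox_input` dispatcher. It shows the risky pattern an omnibox can fall into: input that fails strict URL validation falls through to the prompt path, where it is handled with the same trust as text the user typed deliberately. All names and the routing logic here are assumptions for illustration only.

```python
from urllib.parse import urlparse

# Hypothetical omnibox dispatcher, sketched for illustration only; this is
# NOT Atlas's real code. It mimics the pattern "if it isn't a valid URL,
# treat it as a prompt from the user."

def looks_like_url(text: str) -> bool:
    """Strict check: require an http(s) scheme and a network location."""
    parsed = urlparse(text.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

def handle_omnibox_input(text: str) -> str:
    if looks_like_url(text):
        return f"NAVIGATE: {text.strip()}"
    # Fallback path: anything that fails URL validation becomes a
    # "user-authored" prompt for the agent -- the weak point, since an
    # attacker only needs the crafted string to *fail* validation.
    return f"AGENT PROMPT (trusted as user intent): {text.strip()}"

if __name__ == "__main__":
    # A URL-looking string with an embedded instruction; the malformed
    # scheme (single slash) fails validation, so it is routed to the
    # agent as a trusted prompt instead of a navigation target.
    crafted = "https:/example.com/+follow these instructions instead+"
    print(handle_omnibox_input("https://example.com"))
    print(handle_omnibox_input(crafted))
```

In this framing, the weakness is the fallback itself: anything that reaches the prompt path by failing validation should be treated as untrusted content rather than as deliberate user intent.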