Remember the deceptively aligned AI thought experiment, in which an AI pretends to do what we want during beta testing so that we'll release it, letting it acquire real power to pursue its actual agenda?
https://www.astralcodexten.com/p/deceptively-aligned-mesa-optimizers
Not a hypothetical...