Developers of a “frontier model” in AI (like the in-progress GPT-5) would have to do safety testing. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or causes more than $500 million in damages in a single incident or set of closely linked incidents.
If your AI takes over the world and nukes half of it, you will have to pay a fine.