As part of ROOST (Robust Open Online Safety Tools), we have been working with frontier labs to create open models. One, released just last week, was the safeguard model from OpenAI: the first reasoning model for trust and safety judgments that produces a full reasoning trace. The idea is that this model, small enough to deploy on a laptop or a community server, can ingest a community's own policies. It is “Bring Your Own Policy” (BYOP).
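To make the BYOP idea concrete, here is a minimal sketch of what sending a community policy alongside the content to classify might look like. The function name, the `local-safeguard` model name, and the prompt wording are all illustrative assumptions, not the model's actual interface; the point is only that the policy travels with every request, so the same deployed model can enforce different rules for different communities.

```python
# Illustrative sketch of the "Bring Your Own Policy" pattern.
# The model name and prompt phrasing below are placeholders, not
# the real deployment details of any specific safeguard model.

def build_byop_request(policy: str, content: str,
                       model: str = "local-safeguard") -> dict:
    """Assemble a chat-style payload that pairs a community's policy
    with the content to be judged against it."""
    return {
        "model": model,
        "messages": [
            # The policy itself is supplied per request ("Bring Your Own Policy"),
            # so swapping communities means swapping this string, not the model.
            {"role": "system", "content": policy},
            {"role": "user", "content": (
                "Does the following violate the policy above? "
                "Explain your reasoning, then answer VIOLATES or ALLOWED.\n\n"
                + content
            )},
        ],
    }

policy = "No posts that share another member's home address without consent."
request = build_byop_request(policy, "He lives at 123 Elm St, go tell him off!")
```

A community server could then send this payload to its locally hosted model and log the returned reasoning trace next to the verdict, so moderators can audit why a judgment was made.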