For example, besides Perplexity, Hugging Face itself—where I first discovered the model—has launched a project called "Open-R1": it follows R1's training approach but leaves out the censorship mechanisms. This isn't a matter of downloading R1 and stripping out censorship code; rather, they are retraining a model from scratch in the same style as R1, simply without adding those mechanisms. As a result, the Open-R1 model naturally won't respond with that sudden, unnatural avoidance.
