Yes. Beyond the obvious effect, it has the nice bonus of being motivating. A lot of people who got into AI safety early were motivated because they read Coherent Extrapolated Volition (CEV) and realized this isn't just doom and gloom; we could build a future that's truly incredible. Having that process of sense-making and collaboratively figuring out a North Star seems potentially very powerful.