(Background for this conversation: After an exchange in the comments of Audrey's LW post, where plex suggested various readings and got the sense that there were some differences in models worth exploring, plex suggested a call. Some commenters were keen to read the transcript. plex thinks that for any human, other sentient, current AI, or currently existing structure to avoid being destroyed in the near future, we need either to coordinate a pivot away from the rush toward superintelligence or to resolve some very thorny technical problems. Audrey and plex both think that understanding some of the core dynamics raised here is necessary for effectiveness on either of these.)