I can clarify. Robust here means something like: if the system interacts with an environment where other AIs are trying to manipulate it, it stays pointed at good things (unless those AIs are dramatically more capable). You can't use the system to build an unaligned system. The system won't fall out of the basin of doing good things under reasonable circumstances. I think there's a way that systems can try to make themselves more robust and defend their position in this basin.