Okay, this is plausible to me. My main concern is that even if you solve all the hyperlocal moral philosophies, the system as a whole might not converge toward something that most participants at the start would have considered good. You might end up with parts of the hyperlocal system not noticing something that leads to a cancer in the system: damaging in a way that looks good at each local step if you zoom in, but if you zoom out, the system is falling into something very bad.
