One thought is just to have something synchronous. This speech collection we assumed to be mostly asynchronous, but as I mentioned, if we commit to having synchronous events that come after the initial summary into these clusters, part of those synchronous events, which may be face-to-face or hybrid, can be used to confirm these synthetic cluster viewpoints. So it could be as easy as, you know, "this robot MP doesn't represent me," or something like that, right?
