But currently, there’s no systematic way for that to trigger a local fine-tune or a community fine-tune that then infuses this pretrained model with something, I would say, like putting my eyeglasses on. Like, I’m going to speak with you in Tâi-gí, so please use this LoRA to talk with me in Tâi-gí. So that’s what I have in mind.
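The "eyeglasses" picture maps directly onto how LoRA works mechanically: the pretrained weight stays frozen, and the adapter contributes only a small low-rank update that can be swapped in or out per conversation. A minimal toy sketch in NumPy, with illustrative shapes and placeholder values (none of this is tied to any specific model or adapter):

```python
import numpy as np

# Toy sketch of the LoRA idea: keep the pretrained weight W frozen, and
# "put the eyeglasses on" by adding a low-rank update B @ A that a local
# or community fine-tune (e.g. on Tâi-gí data) would have trained.
# Dimensions and values here are illustrative only.
rng = np.random.default_rng(0)
d, r = 8, 2                       # model dimension, LoRA rank (r << d)
W = rng.standard_normal((d, d))   # frozen pretrained weight
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection (zero-initialized)
alpha = 4.0                       # scaling factor

W_adapted = W + (alpha / r) * (B @ A)

# At initialization B is zero, so wearing the (untrained) eyeglasses
# changes nothing; only after training does the adapter shift behavior.
assert np.allclose(W_adapted, W)
```

Because the update is only `d * r * 2` parameters instead of `d * d`, a community can train and share many such adapters cheaply, and a user can select one at inference time without touching the base model.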
