Yeah. And I understand that GPT-4 offers some VIP customers this sort of capability to essentially fine-tune GPT-4, but that is not generally available. We can currently only tune GPT-3. So I wonder whether any of your foundation models, especially language models but also multimodal ones, are amenable to this kind of client fine-tuning or local adaptation.
