Let me zoom out and see if I can repeat some of that back. So firstly, LoRA is a method for low-compute fine-tuning of a pre-trained model. We all know that when LLaMA was released and then LoRA came out, we saw this explosion in open-source activity, right? I'm hoping you can comment a little bit on that: as somebody who's a champion of open source, how are you feeling about these types of fine-tuning methods and how they make these models more accessible? And then I want to gradually move into how you think that helps support participation.
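
[Editor's note: for readers unfamiliar with LoRA, here is a minimal sketch of the idea in PyTorch. It is illustrative only, not the reference implementation; the class name LoRALinear and the rank/scaling hyperparameters are our own choices. The pretrained weight is frozen and a small low-rank update is trained in its place.]

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a pretrained linear layer with a trainable low-rank update:
    effective weight = W + (alpha / r) * B @ A, with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A is (r x in), B is (out x r). B starts at
        # zero so training begins from the unmodified pretrained model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a layer from a pretrained model and train only A and B.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # far fewer than 768 * 768
```

[This is why fine-tuning becomes cheap: only the small A and B matrices receive gradients, so the optimizer state and trainable parameter count shrink dramatically compared to full fine-tuning.]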
