In language processing, we sometimes perform “distillation,” and other times we fine-tune existing open-source models. One very popular model in Taiwan, I believe, is based on France’s Mistral model, which was then fine-tuned on Traditional Chinese text to give it a better understanding of Taiwanese culture. So the heavy lifting was already done by the Mistral team; Taiwan’s part was the localization, which is also very energy-efficient. And as you know, Mistral packs a great deal of capability into a very small model, so it not only saves energy but also democratizes the technology, since people can run it on their own phones and laptops.