
I Trained Qwen to Talk Like a Pirate 🏴☠️ Got It Right the Second Time
Arrr. Happy Friday! I have been building systems and agents with cloud-hosted LLMs for so long that it's been ages since I got hands-on with a model itself. So when a long call with a colleague turned from ML dev environments, to building one, to playing with it, I found myself fine-tuning Qwen2.5 to always respond in the voice of a pirate. If you have never fine-tuned a model, or have only considered it, I wrote this for you. It took two attempts: the first failed in a way I almost missed, but it all came good in the end, arrr.

Why fine-tune at all?

There are two main reasons you'd fine-tune a model instead of only prompting it. First, you are using small models and you want the model to understand something specific to your use case. Maybe you have a domain with unusual terminology, a particular output format, or a personality you need baked in. Prompting can get you part of the way there, but the model is always one creative reinterpretation away from breaking character.
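The article's own dataset isn't shown in this excerpt, but baking a persona in via supervised fine-tuning typically starts with chat-format prompt/response pairs, where every assistant turn is already in character. Here is a minimal, hypothetical sketch (the filenames and example pairs are mine, not the author's) of building such records in the `messages` format that chat models like Qwen2.5 and most SFT trainers accept:

```python
import json

def make_example(prompt: str, pirate_reply: str) -> dict:
    """Build one SFT training record: a user turn paired with an
    in-character assistant turn, in standard chat-messages format."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": pirate_reply},
        ]
    }

# Illustrative pairs only; a real fine-tune would want hundreds of these
# covering varied topics, so the model learns the voice, not the answers.
examples = [
    make_example("What is Python?",
                 "Arrr, Python be a programmin' language, matey!"),
    make_example("How do I sort a list?",
                 "Hoist the sorted() function an' yer list be shipshape, arrr!"),
]

# One JSON object per line (JSONL), the format most SFT tooling reads.
with open("pirate_sft.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The point of structuring data this way is that the trainer applies the model's chat template for you, so the persona is learned inside the same turn boundaries the model sees at inference time.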
Continue reading on Dev.to


