This blog post discusses a method for fine-tuning large language models (LLMs) on specialized domains like healthcare while preserving data privacy. The approach generates differentially private synthetic text with Gretel's GPT model, which is then used to fine-tune LLMs. Differential privacy provides a formal, mathematical guarantee that limits how much any single training record can influence the model, so sensitive details about individuals cannot be reliably extracted from it. The method was demonstrated by fine-tuning a Claude 3 Haiku model to generate clinical notes from a transcript of a conversation between a doctor and a patient.
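To give intuition for the guarantee mentioned above, here is a minimal sketch of the Gaussian mechanism, one of the basic building blocks of differential privacy. This is an illustration of the general concept, not Gretel's actual implementation; the counting query and all parameter values are hypothetical.

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise calibrated to the query's sensitivity (classic analysis,
    valid for epsilon < 1)."""
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return value + rng.gauss(0.0, sigma)

rng = random.Random(0)
# Hypothetical query: how many patients in a dataset match some condition.
# Adding or removing one patient changes the count by at most 1,
# so the sensitivity is 1.
true_count = 42
private_count = gaussian_mechanism(true_count, sensitivity=1.0,
                                   epsilon=0.5, delta=1e-5, rng=rng)
```

Because the released value is noisy, no observer can confidently infer whether any one individual's record was present; training LLMs with differentially private optimization (e.g., DP-SGD) applies the same principle to gradient updates rather than query answers.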