OpenAI's GPT-2 text generator is an unsupervised language model that achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering, and summarization. To run the code yourself, first install the system-wide dependencies (CUDA, cuDNN, and the NVIDIA graphics drivers). Then clone the GPT-2 repository, create a virtual environment, install the Python dependencies and the GPT-2 code, and set top_k=40 in src/interactive_conditional_samples.py. You can then run the model with python3 src/interactive_conditional_samples.py, as sketched below. The larger 345M model has also been released, and it produces noticeably more accurate-sounding output, such as more convincing fake code tutorials.
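For concreteness, here is a minimal sketch of those steps as shell commands. It assumes a Linux machine with CUDA, cuDNN, and the NVIDIA drivers already installed, and the standard layout of the openai/gpt-2 repository; exact script names, TensorFlow versions, and flags vary between repository revisions, so treat this as illustrative rather than authoritative.

```bash
# Clone the GPT-2 repository
git clone https://github.com/openai/gpt-2.git
cd gpt-2

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install the Python dependencies pinned by the repo, plus TensorFlow
# (the repo's README specifies the supported TensorFlow version;
# 1.12 with GPU support was the version listed at the time)
pip install -r requirements.txt
pip install tensorflow-gpu==1.12.0

# Download the model weights (script name and available sizes
# depend on the repository revision)
python3 download_model.py 345M

# Edit src/interactive_conditional_samples.py and change the default
# top_k=0 to top_k=40, then run the interactive sampler.
# In the versions I've seen, model_name is also exposed as a
# command-line flag via the repo's fire-based CLI:
python3 src/interactive_conditional_samples.py --model_name=345M
```

Setting top_k=40 restricts sampling at each step to the 40 most likely tokens, which trades some diversity for noticeably more coherent output than unrestricted sampling (top_k=0).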