Diffusion models have revolutionized generative AI in recent years, with applications ranging from image synthesis to weather prediction. One of the most significant advances in this field is Shap-E, a model that generates 3D objects conditioned on text or images. The path to Shap-E built on a series of improvements to diffusion models, including Classifier-Guided Diffusion, Classifier-Free Guided Diffusion, and Latent Diffusion. Guidance techniques improved sample fidelity and gave finer control over conditioning, while latent diffusion reduced computational cost by running the diffusion process in a compressed latent space; together, these advances made it practical to extend diffusion to 3D generation with Shap-E. The potential applications of diffusion models are vast, and their continued evolution promises further breakthroughs across many fields.
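As a brief illustration of the classifier-free guidance idea mentioned above, the sketch below shows the standard way a sampler combines the conditional and unconditional noise predictions at each denoising step. The tensor shapes and the guidance scale here are illustrative placeholders, not specific to Shap-E or any particular library.

```python
import torch

def classifier_free_guidance(eps_cond: torch.Tensor,
                             eps_uncond: torch.Tensor,
                             guidance_scale: float) -> torch.Tensor:
    """Combine conditional and unconditional noise predictions.

    Classifier-free guidance extrapolates from the unconditional
    prediction toward the conditional one; a guidance_scale > 1
    trades sample diversity for closer adherence to the prompt.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage: random tensors stand in for a denoiser's two forward passes
# (one with the prompt, one with the prompt dropped).
eps_c = torch.randn(1, 4, 64, 64)  # prediction given the condition
eps_u = torch.randn(1, 4, 64, 64)  # unconditional prediction
guided = classifier_free_guidance(eps_c, eps_u, guidance_scale=7.5)
print(guided.shape)  # torch.Size([1, 4, 64, 64])
```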