Prompt engineering plays a crucial role in drawing accurate and meaningful outputs from large language models (LLMs), and it demands a combination of technical expertise, creativity, and critical thinking. Prompt engineers carefully craft instructions to improve machine-generated outputs, often drawing on skills that are not confined to computer science. Job postings for prompt engineers typically ask for industry-specific expertise, subject matter knowledge, language proficiency, and critical thinking. With the rise of no-code tools, creative people with diverse skill sets are being drawn to the role. Advanced techniques such as chain-of-thought prompting, least-to-most prompting, and self-consistency can improve LLM performance on specific benchmarks, but not every technique works with every LLM, and some require a deeper understanding of how these models operate. By applying their own creativity, expertise, and critical thinking, individuals can craft effective prompts and get the most out of generative AI.
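
To make two of those techniques concrete, here is a minimal sketch in Python of how chain-of-thought prompting and self-consistency might be combined. The `generate` function and the worked example in the prompt are illustrative placeholders, not part of any particular LLM API; swap in whichever client library and prompt format you actually use.

```python
from collections import Counter

# Hypothetical stand-in for a call to your LLM of choice; replace with the
# client library you actually use (hosted API, local model, etc.).
def generate(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

# Chain-of-thought prompting: show the model a worked example that spells
# out intermediate reasoning steps before stating the final answer.
COT_EXAMPLE = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23 apples. After using 20, they had 23 - 20 = 3. "
    "Buying 6 more gives 3 + 6 = 9. The answer is 9.\n\n"
)

def cot_prompt(question: str) -> str:
    return COT_EXAMPLE + f"Q: {question}\nA:"

# Self-consistency: sample several reasoning paths at a nonzero temperature
# and keep the final answer that the majority of paths agree on.
def self_consistent_answer(question: str, samples: int = 5) -> str:
    answers = []
    for _ in range(samples):
        completion = generate(cot_prompt(question), temperature=0.7)
        # Naive answer extraction; real prompts usually enforce a stricter
        # output format so the final answer can be parsed reliably.
        answers.append(completion.strip().split()[-1].rstrip("."))
    return Counter(answers).most_common(1)[0][0]
```

Least-to-most prompting follows a similar pattern, except the model is first asked to break the question into simpler subproblems and then answers them in sequence, feeding each intermediate result into the next step.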