At the end of 2022, OpenAI released ChatGPT to the public, making powerful generative AI accessible to anyone with a web browser. The race that followed has renewed public fears about AI, prompting calls for regulation to prevent misuse and ensure safety and security. Regulating a technology that is new and still evolving, however, poses real challenges.

There is a clear need for "do no harm" guidelines to curb predatory uses of AI, such as phone scams and deepfakes, while also guarding against governmental abuse of power. Accountability and transparency are crucial to building trust in AI systems; this may include mandating explanations of AI decisions and disclosure of the underlying algorithms. Data privacy is equally essential, with clear rules on how personal data may be used to train AI models. Safety and security concerns arise because AI can introduce new vulnerabilities, particularly in critical services such as healthcare and government.

Today's regulatory landscape includes efforts by both governments and technology companies to create standards and guidelines for AI development and deployment. Yet regulation carries its own risks, such as stifling innovation and producing unintended consequences. A nuanced approach that balances oversight with innovation is necessary, and collaboration among governments, companies, and the public will be essential to creating a future with transparent, ethical, and helpful AI.