The previous administration's guidance on AI safety and security testing has been repealed, leaving concerns about the impact of this shift on public safety. The new administration is investing $500 billion in private-sector AI infrastructure through Project Stargate, which may foster more innovation but also raises questions about prioritizing risks. As a former principal AI engineer, Joseph Thacker shares his expertise on vulnerabilities such as prompt injections, multi-modal injection, and chain-of-thought attacks, highlighting the need for practical advice and recommendations on addressing these security challenges. The rollback of regulations has sparked concerns, but it also presents an opportunity for more practical guidance on security issues. Thacker advocates for guardrail software, deployable templates, design patterns, pen testing guides, education for government officials, automated security testing, and robust safety measures to ensure AI is wielded wisely as its impact on our lives continues to grow.