This investigation highlights the security implications of integrating large language models (LLMs) into applications, focusing on code injection vulnerabilities that arise when data originating from LLMs is treated as trusted input. An analysis of over 4,000 Python repositories identified vulnerable patterns in the handling of LLM responses, including parsing JSON with the eval function, which can lead to arbitrary code execution, and running generated code without proper sandboxing. These issues can be mitigated by using safer alternatives such as json.loads to parse JSON responses and by executing generated code only in a restricted environment. The investigation emphasizes that data produced by generative AI must be handled as untrusted input and encourages developers to secure their use of LLMs in their applications.
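The repository analysis itself is not reproduced here, but the contrast between the vulnerable eval-based pattern and the json.loads alternative can be sketched as follows. The function names and example strings are illustrative placeholders, not code taken from the analyzed repositories.

```python
import json


def parse_llm_json_unsafe(llm_response: str) -> dict:
    # VULNERABLE: eval() evaluates the response as Python code, so a
    # manipulated model output such as
    # '__import__("os").system("...")' would be executed.
    return eval(llm_response)


def parse_llm_json_safe(llm_response: str) -> dict:
    # SAFER: json.loads only parses JSON syntax and raises
    # json.JSONDecodeError on anything else; no code is ever executed.
    return json.loads(llm_response)


if __name__ == "__main__":
    benign = '{"summary": "ok", "score": 3}'
    malicious = '__import__("os").system("echo pwned")'

    print(parse_llm_json_safe(benign))  # parses normally
    try:
        parse_llm_json_safe(malicious)  # rejected, not executed
    except json.JSONDecodeError as exc:
        print(f"Rejected non-JSON model output: {exc}")
```

Because json.loads accepts only strict JSON, a prompt-injected payload simply fails to parse instead of running on the host. Sandboxing generated code is a separate, more involved mitigation (for example, executing it in an isolated process or container with minimal privileges) and is not shown here.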