The threat landscape for multi-agent systems is growing more complex as these systems are deployed for high-stakes tasks such as financial trading and autonomous vehicle coordination. Adversarial attacks deliberately manipulate inputs to exploit weaknesses in multi-agent algorithms, causing agents to make erroneous or harmful decisions. Data poisoning occurs when attackers inject corrupted or malicious data into agents' training datasets or real-time data streams, distorting their learning processes and subsequent decision-making. Inter-agent interference arises when compromised or malicious agents disrupt the normal functioning of other agents by providing incorrect information, manipulating shared resources, or sabotaging communication protocols. Systemic vulnerabilities stem from the inherent complexity and scalability challenges of multi-agent systems, making robust threat monitoring and mitigation essential to maintaining system integrity.

To build resilient, secure multi-agent decision-making systems, organizations can adopt layered architectures, implement LLM observability practices, deploy advanced monitoring solutions, and continuously improve and adapt the underlying models.
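As a concrete illustration of the layered monitoring idea, the sketch below shows a minimal observability layer placed between agents and an in-process message bus. It logs all traffic and flags simple anomalies such as messages from unregistered senders, oversized payloads, or flooding, which can be early signals of inter-agent interference or poisoning attempts. This is only an illustrative sketch: the class and field names (AgentMessage, MessageMonitor, the thresholds, and the example agent names) are assumptions for the example and do not refer to any specific framework or product.

```python
# Minimal sketch of an inter-agent message monitor (hypothetical names throughout).
from dataclasses import dataclass, field
from collections import defaultdict
import time


@dataclass
class AgentMessage:
    sender: str
    recipient: str
    payload: dict
    timestamp: float = field(default_factory=time.time)


class MessageMonitor:
    """Observability layer between agents and the message bus.

    Logs every message and flags simple anomalies: unknown senders,
    oversized payloads, and unusually high per-sender message rates.
    """

    def __init__(self, allowed_agents: set,
                 max_payload_keys: int = 50,
                 max_msgs_per_minute: int = 100):
        self.allowed_agents = allowed_agents
        self.max_payload_keys = max_payload_keys
        self.max_msgs_per_minute = max_msgs_per_minute
        self._recent = defaultdict(list)   # sender -> recent timestamps
        self.audit_log = []                # (verdict, message) pairs

    def inspect(self, msg: AgentMessage) -> bool:
        """Return True if the message passes all checks; record a verdict either way."""
        verdict = "ok"
        # Check 1: sender must be a registered, trusted agent.
        if msg.sender not in self.allowed_agents:
            verdict = "unknown_sender"
        # Check 2: reject unexpectedly large payloads (a possible poisoning vector).
        elif len(msg.payload) > self.max_payload_keys:
            verdict = "oversized_payload"
        # Check 3: rate-limit each sender to catch flooding or interference.
        else:
            now = msg.timestamp
            window = [t for t in self._recent[msg.sender] if now - t < 60.0]
            window.append(now)
            self._recent[msg.sender] = window
            if len(window) > self.max_msgs_per_minute:
                verdict = "rate_exceeded"
        self.audit_log.append((verdict, msg))
        return verdict == "ok"


# Example usage: only messages that pass inspection are forwarded.
if __name__ == "__main__":
    monitor = MessageMonitor(allowed_agents={"pricing_agent", "risk_agent"})
    msg = AgentMessage(sender="pricing_agent", recipient="risk_agent",
                       payload={"quote": 101.5})
    if monitor.inspect(msg):
        print("forwarded:", msg.payload)
    else:
        print("quarantined:", monitor.audit_log[-1][0])
```

In a production system this kind of gate would be one layer among several, combined with authentication of agent identities, provenance checks on training and streaming data, and alerting on the audit log rather than simple print statements.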