To develop a typology of mitigations, we reviewed the academic articles (n=30) that proposed mitigations for adversarial attacks. Many of these mitigations were low-level and tailored toward data scientists rather than security executives and information security professionals, which makes defending against these modern attacks difficult to organize and manage. Only recently did Microsoft release an AI Security Risk Assessment that companies can use as a first step toward assessing the security posture of their AI systems. Our goal is to provide information security executives with a more robust and organized set of mitigations so they can secure ML systems. We aimed to provide high-level controls that can be designed and integrated into an organization's security policies and procedures. Initially, there were dozens of mitigations. For parsimony, we grouped them into four high-level mitigations: (i) data controls, (ii) model controls, (iii) ML system environment controls, and (iv) security controls. Together, these mitigations provide security throughout the lifecycle of the ML model and combine traditional security knowledge, such as separation-of-duties controls and input and output controls, with modern data science techniques to defend against adversarial ML attacks.
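To illustrate how one of these high-level mitigations might translate into practice, the sketch below shows a simple data/input control that screens incoming inputs against statistics of the training distribution before they reach the model. This is a minimal, hypothetical Python example (the InputControl class and the z-score threshold are our own illustrative choices, not drawn from the reviewed articles); a production deployment would use a more robust out-of-distribution or adversarial-example detector.

```python
import numpy as np

class InputControl:
    """Hypothetical data control: flag inputs that deviate sharply
    from the distribution of trusted training data."""

    def __init__(self, training_data: np.ndarray, z_threshold: float = 4.0):
        # Per-feature statistics estimated from trusted training data.
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-8  # avoid divide-by-zero
        self.z_threshold = z_threshold

    def is_suspicious(self, x: np.ndarray) -> bool:
        # Reject inputs whose features fall far outside the range
        # observed during training (a crude out-of-distribution test).
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(np.any(z_scores > self.z_threshold))

# Usage: fit on trusted data, then screen incoming model inputs.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))
control = InputControl(train)

benign = rng.normal(size=8)
perturbed = benign.copy()
perturbed[2] += 25.0  # a large, conspicuous perturbation
print(control.is_suspicious(benign))     # typically False
print(control.is_suspicious(perturbed))  # True
```

The value of framing such checks as organizational controls, rather than as model-specific tricks, is that a security team can mandate and audit them across ML systems without needing to understand each model's internals.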