An evasion attack is one in which the attacker feeds the ML model manipulated inputs crafted to cause misclassification, much as spammers and malware authors obfuscate the content of spam emails and malware to slip past filters. The attacker manipulates data at deployment time to deceive a previously trained classifier: carefully perturbed inputs, known as adversarial examples, mislead the targeted model into producing an incorrect prediction. A classic example against an image classification model is a dog image overlaid with adversarially crafted noise that the model identifies as a cat.
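To make this concrete, here is a minimal sketch of one well-known way to craft such perturbations, the Fast Gradient Sign Method (FGSM). It assumes PyTorch, a pretrained classifier `model` that outputs logits, and inputs normalized to [0, 1]; the function name and the `epsilon` value are illustrative, not taken from any particular attack in the wild.

```python
import torch

def fgsm_attack(model, x, label, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method (FGSM).

    Nudges input x in the direction that increases the classification loss,
    with the perturbation size bounded by epsilon.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the input gradient to push the prediction
    # away from the true label while keeping the change small.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in the valid range
```

In the dog-to-cat example above, feeding the perturbed image `fgsm_attack(model, dog_image, dog_label)` back through the classifier may yield the "cat" label, even though the noise is imperceptible to a human.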
IP theft attacks involve model stealing, extraction, and inversion. An adversary probes a black-box ML system in order to reconstruct the model or extract the data it was trained on. This can amount to a data breach when the training data or the model itself is sensitive and confidential.
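The sketch below illustrates the model-stealing flavor of this attack under stated assumptions: `query_victim` is a hypothetical wrapper around the target's prediction API that returns only a label per input, the attacker knows the input dimensionality, and scikit-learn is used to fit the surrogate. Real extraction attacks use far more careful query strategies; this only shows the basic loop of probing and imitating.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_surrogate(query_victim, input_dim, n_queries=10_000):
    """Approximate a black-box model by training a surrogate on its outputs.

    The attacker never sees the victim's parameters or training data,
    only the labels it returns for chosen queries.
    """
    # Probe the victim with synthetic inputs drawn from a plausible input range.
    queries = np.random.uniform(0.0, 1.0, size=(n_queries, input_dim))
    labels = np.array([query_victim(q) for q in queries])

    # Fit a local model that mimics the victim's decision boundary.
    surrogate = DecisionTreeClassifier().fit(queries, labels)
    return surrogate
```

Once the surrogate approximates the victim's behavior, the attacker can use it offline, undermining the model owner's intellectual property without ever accessing the model itself.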