is a malicious attack on AI model security in which corrupt or deceptive data is introduced into a model's training set, causing the model to make inaccurate predictions or behave in unintended ways. It is a distinct concern within AI security, separate from issues of data provenance, privacy and confidentiality.
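To make the idea concrete, the sketch below simulates one simple form of poisoning, label flipping, by corrupting a fraction of training labels and comparing the resulting model against one trained on clean data. The dataset, the logistic-regression model and the 20% poisoning rate are illustrative assumptions, not details from the text.

```python
# Minimal, illustrative sketch of label-flipping data poisoning.
# The dataset, model and poisoning fraction are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build a clean binary-classification dataset and hold out a test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, rng):
    """Return a copy of `labels` with a given fraction of entries flipped."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
y_poisoned = flip_labels(y_train, fraction=0.2, rng=rng)  # poison 20% of labels

# Train one model on clean labels and one on the poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running the sketch typically shows the poisoned model scoring noticeably lower on the held-out test set, which is the degradation in prediction quality the definition above describes; real attacks are usually subtler than random label flipping, but the mechanism of corrupting the training set is the same.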