- Sep 08, 2018
How AI Systems Can Be Tricked
Machine learning and artificial intelligence systems are not immune to cyber attacks. As we move deeper into deploying the technology, we often overlook the issues that can leave a system vulnerable. Here are three ways in which AI systems can be tricked, each of which needs to be addressed by security experts.
Privacy Threats: Model Inversion
Hackers can extract sensitive data from the data-set fed to the system, such as personal medical records, employee information and financial data. By sending inputs to the model and studying its outputs, an attacker can get the network to reveal the data on which it was trained. Such attacks can last from an hour to, in some cases, a few months. Beyond this, the hacker can test whether a particular record was in the data-set, or gain insight into the type of training data used.
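The model-inversion idea described above can be sketched in a few lines. This is a hedged toy example, not any real attack: the "trained" model, its hidden weights and the loop parameters are all invented here. The attacker only has black-box access to a confidence score, estimates gradients by finite differences, and ascends the score until the input mirrors the structure the training data imprinted on the model:

```python
import numpy as np

# Stand-in "trained" model with hidden weights w (an assumption of this
# sketch). The attacker can only call model_confidence, never read w.
w = np.array([2.0, -1.0, 0.5])

def model_confidence(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def invert(dim=3, steps=200, lr=0.5, eps=1e-4):
    x = np.zeros(dim)                    # attacker starts from a blank input
    for _ in range(steps):
        grad = np.zeros(dim)
        for i in range(dim):             # finite-difference gradient estimate,
            d = np.zeros(dim)            # using only query access to the score
            d[i] = eps
            grad[i] = (model_confidence(x + d) - model_confidence(x - d)) / (2 * eps)
        x += lr * grad                   # ascend the confidence score
    return x

recovered = invert()
# The recovered input points along the hidden weight direction w --
# i.e. it resembles what the model was trained to score highly.
```

On a real network the same loop recovers a recognisable class exemplar rather than a weight vector, which is why query access alone can leak training data.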
AI Backdoors
Convolutional neural networks for image recognition are built from structures of millions of neurons, and their behaviour can be changed by modifying only a small number of them. Models such as Inception or ResNet are trained on amounts of data that small companies cannot recreate, so some companies reuse pre-trained neural networks from large companies; a network originally made to recognise celebrity faces, for example, may be retrained to detect cancerous tumours. Hackers who get into the server hosting a public model can upload their own version containing a backdoor, and the neural network will keep that backdoor even after the model has been retrained to do something else.
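A minimal sketch of how such a backdoor can be planted through data poisoning (the toy logistic classifier, synthetic data and trigger feature are all assumptions of this example, not details from the article): a small fraction of the training set is stamped with a "trigger" and relabelled to the attacker's target class. The trained model stays accurate on clean inputs, but the trigger flips its prediction on demand:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    x = rng.normal(size=(n, 4))
    y = (x[:, 0] + x[:, 1] > 0).astype(float)   # the legitimate task
    return x, y

def stamp_trigger(x):
    out = x.copy()
    out[:, 3] = 5.0                              # the backdoor trigger:
    return out                                   # an extreme value in feature 3

x, y = make_data(2000)
x[:200] = stamp_trigger(x[:200])                 # poison 10% of the data...
y[:200] = 1.0                                    # ...and relabel it as class 1

# Train logistic regression by full-batch gradient descent
w, b = np.zeros(4), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    g = p - y
    w -= 0.5 * (x.T @ g) / len(y)
    b -= 0.5 * g.mean()

def predict(xq):
    return (xq @ w + b > 0).astype(float)

x_test, y_test = make_data(500)
clean_acc = (predict(x_test) == y_test).mean()        # model looks healthy
trigger_rate = predict(stamp_trigger(x_test)).mean()  # but the trigger fires
```

The same mechanism scales up: in a large network the trigger can be a small pixel patch, and because only a few neurons encode it, fine-tuning on a new task tends not to erase it.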
Availability Threats: Adversarial Reprogramming
Adversarial reprogramming means that a neural network can be reprogrammed, using specially crafted inputs, to do something completely different from what it was created for. For instance, hackers can hijack the resources and engine of a cloud AI service and get the system to perform tasks entirely different from its original programme.
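A toy sketch of the idea (every model, size and parameter here is an assumption of the example; a small frozen random network stands in for the cloud model): the attacker never touches the frozen weights. They learn only an additive "program" in the unused input dimensions, so that two of the model's original output classes come to encode a new binary task the model was never built for:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "cloud" model the attacker cannot modify:
# 16-dim input -> 32 tanh units -> 10 output classes.
W1 = rng.normal(scale=0.5, size=(32, 16))
W2 = rng.normal(scale=0.5, size=(10, 32))

# The attacker's new task: label 2-dim points by whether x0 > x1,
# with labels in {-1, +1}.
X = rng.normal(size=(256, 2))
Y = np.where(X[:, 0] > X[:, 1], 1.0, -1.0)

mask = np.ones(16)
mask[:2] = 0.0                  # the program may not overwrite the data slots
w2d = W2[1] - W2[0]             # read the new label off class 1 vs class 0

def score(theta):
    Z = np.zeros((len(X), 16))
    Z[:, :2] = X                # embed the new task's input...
    Z += mask * theta           # ...and add the adversarial program
    H = np.tanh(Z @ W1.T)
    return H, H @ w2d

def loss_and_grad(theta):
    H, s = score(theta)
    loss = np.mean(np.log1p(np.exp(-Y * s)))
    ds = -Y / (1.0 + np.exp(Y * s))          # dloss/dscore
    grad = ((ds[:, None] * (1 - H**2) * w2d) @ W1) * mask
    return loss, grad.mean(axis=0)

theta = np.zeros(16)
loss0, _ = loss_and_grad(theta)
for _ in range(500):            # train the program, never the model
    _, g = loss_and_grad(theta)
    theta -= 0.3 * g
final_loss, _ = loss_and_grad(theta)
_, s = score(theta)
acc = np.mean((s > 0) == (Y > 0))   # the frozen model now solves the new task
```

Note the frozen weights W1 and W2 are only ever read: all the attacker ships to the service is a crafted input pattern, which is what makes this an availability threat against paid cloud inference.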
Summary
Since there is no solution to these problems as of now, be careful before you employ a ready-made AI model. Continuously monitor your AI system so you know who its users are and how they are using it, and watch for any anomalous data flow.
Read more at forbes.com