Researchers from Data61, CSIRO’s data and digital specialist arm, have developed techniques they claim will effectively ‘vaccinate’ algorithms against adversarial attacks, preventing malevolent actors from hijacking machine learning models.
Dr Richard Nock, Data61 machine learning group leader, explained that algorithms ‘learn’ from the data they are trained on to create a machine learning model that can perform a given task effectively, such as making predictions or classifying images and emails, without needing specific instructions.
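In code, that idea looks roughly like the sketch below, which fits a toy spam classifier from labelled examples rather than hand-written rules. The library choice and the tiny dataset are illustrative assumptions, not Data61’s setup:

```python
# A minimal sketch of "learning from data": the model infers what spam
# looks like from labelled examples instead of explicit rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "claim your prize today",
          "meeting agenda attached", "lunch on thursday?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (toy data for illustration)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)   # turn text into word counts
model = MultinomialNB().fit(features, labels)  # learn from the examples

# The model now classifies unseen text without any hand-written rule.
print(model.predict(vectorizer.transform(["free prize for you"])))  # likely [1]
```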
These techniques are already in common use to identify spam emails, diagnose diseases from X-rays and predict crop yields, and they will soon drive our cars.
But by adding a layer of noise (an adversarial perturbation) over an image, attackers can deceive machine learning models into misclassifying it.
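One well-known way to generate such noise is the Fast Gradient Sign Method (FGSM). The sketch below illustrates the general idea of an adversarial perturbation; it is not the specific attack studied in the Data61 work:

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss, so the image looks unchanged to a human
# but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to raise the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step by epsilon along the sign of the gradient, per pixel.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep valid pixel range
```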
Dr Nock said adversarial attacks have already tricked a machine learning model into incorrectly interpreting a traffic stop sign as a speed sign – an outcome with potentially disastrous results.
“Our new techniques prevent adversarial attacks using a process similar to vaccination,” he said.
“We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set. When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks.”
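In code, this ‘vaccination’ resembles standard adversarial training: perturb each training batch with a weak adversary, then fit on the perturbed data. The sketch below reuses the illustrative `fgsm_perturb` helper above; the loop structure and the small epsilon are assumptions for illustration, not the procedure from the Data61 paper:

```python
# Minimal adversarial-training sketch: expose the model to a "small dose
# of distortion" by training on weakly perturbed batches.
def train_step(model, optimizer, images, labels, epsilon=0.01):
    # Weak adversary: a small perturbation of the training batch.
    perturbed = fgsm_perturb(model, images, labels, epsilon=epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(perturbed), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping epsilon small mirrors the ‘small dose’ Dr Nock describes; too strong a distortion during training can degrade accuracy on clean inputs.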
As AI and machine learning increasingly come into play in the workplace, the risk of attack from hostile states, business rivals or terrorists escalates.
Data61 CEO Adrian Turner said the organisation’s research is a significant contribution to the growing field of adversarial machine learning.
“Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges, but that can’t happen without focused research into these technologies,” he said.
“The new techniques against adversarial attacks developed at Data61 will spark a new line of machine learning research and ensure the positive use of transformative AI technologies.”
The research paper, ‘Monge blunts Bayes: Hardness Results for Adversarial Training’, was presented by Data61 at the 2019 International Conference on Machine Learning (ICML) in Long Beach, California, earlier this month. The researchers also demonstrated that the ‘vaccination’ techniques are built from the worst possible adversarial examples and can therefore withstand very strong attacks.
CSIRO recently invested $19 million into an Artificial Intelligence and Machine Learning Future Science Platform to target AI-driven solutions for areas including food security and quality, health and wellbeing, sustainable energy and resources, resilient and valuable environments, as well as Australian and regional security.