IBM’s new AI toolbox is designed to protect AI systems

SAN FRANCISCO — Artificial intelligence is one of the big topics at the 2018 RSA Conference, and while many vendors want to use AI to improve traditional security, IBM has been thinking about how to protect AI models themselves.

IBM announced an open source initiative, the Adversarial Robustness Toolbox: a platform-agnostic library designed to mitigate and defend against the various ways threat actors can attack AI models.

The IBM AI toolbox is a library that “contains attacks, defenses and benchmarks to implement improved security.”

“Adversarial attacks pose a real threat to the deployment of AI systems in security critical applications. Virtually undetectable alterations of images, video, speech and other data have been crafted to confuse AI systems. Such alterations can be crafted even if the attacker doesn’t have exact knowledge of the architecture of the deep neural network or access to its parameters,” IBM wrote in a blog post. “Even more worrisome, adversarial attacks can be launched in the physical world: instead of manipulating the pixels of a digital image, adversaries could evade face recognition systems by wearing specially designed glasses, or defeat visual recognition systems in autonomous vehicles by sticking patches to traffic signs.”
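
For readers who want to experiment, the sketch below shows the kind of “virtually undetectable alteration” IBM describes, crafted with the toolbox’s Python API (pip package adversarial-robustness-toolbox). The toy model, synthetic data and eps value here are illustrative assumptions, not part of IBM’s demo, and the API shown reflects the library’s current open source release rather than the 2018 version.

# pip install adversarial-robustness-toolbox torch
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy 10-class classifier over 28x28 single-channel images (a stand-in
# for a real model; untrained weights are enough to demonstrate the API).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),  # keep perturbed pixels in a valid range
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # synthetic "images"

# Fast Gradient Method: eps bounds the per-pixel change, which is why
# the resulting alterations are so hard for a human to spot.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x)
print("max per-pixel change:", np.abs(x_adv - x).max())  # <= eps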

Maria-Irina Nicolae, research scientist at IBM Security, showed a demo in a session at RSAC 2018 in which an image the AI had correctly identified as an eagle with 99% confidence was manipulated by an attacker so it registered as something else with 71% confidence. By applying the defenses in IBM’s AI toolbox, the attack could be mitigated enough for the AI to again identify the eagle, albeit with only 56% confidence.
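
The hardening step in that demo corresponds to the toolbox’s adversarial-training defense. Continuing the sketch above (reusing classifier, attack and x; the random labels, ratio and epoch counts are arbitrary assumptions for illustration):

from art.defences.trainer import AdversarialTrainer

# Random one-hot labels stand in for a real training set.
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=8)]

# Adversarial training: ratio=0.5 replaces half of each training batch
# with examples attacked on the fly, hardening the model against them.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x, y, nb_epochs=5, batch_size=4)

# Re-running attack.generate() and comparing accuracy before and after
# hardening reproduces the shape of the eagle demo's result.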

Koos Lodewijkx, vice president and CTO of security operations and response at IBM Security, noted in the session that threat actors can attack AI models directly and have even been able to harvest credit card numbers still included in the model data.

Nicolae added that threat actors don’t need access to the same data that was fed into an AI model in order to find ways to attack it.

“Worst case scenario, what you can do is simulate a model on your side, attack that one and then the attack will transfer to the detection system that you’re trying to attack in a black-box setting,” Nicolae told the crowd. “I think that’s a problem which makes these types of attacks more dangerous because you can’t use obscurity as a type of security.”
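
That transfer attack can be sketched in a few lines. Continuing the example above, the attacker trains a local surrogate model (the random labels y are a stand-in for labels an attacker might harvest by querying the target) and never needs the target model’s architecture or parameters:

# Local surrogate the attacker fully controls.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
surrogate_clf = PyTorchClassifier(
    model=surrogate,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(surrogate.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# 1) Train the surrogate on inputs labeled by the target's responses
#    (stubbed here with the random labels y from the previous sketch).
surrogate_clf.fit(x, y, nb_epochs=5, batch_size=4)

# 2) Craft adversarial examples against the surrogate only ...
x_transfer = FastGradientMethod(estimator=surrogate_clf, eps=0.1).generate(x=x)

# 3) ... then submit x_transfer to the black-box target: perturbations
#    crafted on one model often fool another trained on similar data.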

The IBM AI toolbox can: measure the robustness of a deep neural network (DNN) to determine changes in accuracy due to an attack; harden a DNN with adversarial examples; and apply runtime detection to flag any inputs that may have been tampered with.
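
The measurement piece is exposed as a metric, and the runtime-detection components live under art.defences.detector in the same package. A hedged sketch of scoring the classifier from the earlier snippets (attack_name and attack_params follow the current ART metrics API; the eps_step value is an arbitrary assumption):

from art.metrics import empirical_robustness

# Empirical robustness: the average minimal perturbation (relative to
# the input norm) that FGM needs to flip the model's prediction --
# higher means an attacker must distort inputs more visibly.
score = empirical_robustness(
    classifier, x, attack_name="fgsm", attack_params={"eps_step": 0.01}
)
print("empirical robustness:", score)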

“As an open-source project, the ambition of the Adversarial Robustness Toolbox is to create a vibrant ecosystem of contributors both from industry and academia. The main difference to similar ongoing efforts is the focus on defense methods, and on the composability of practical defense systems,” IBM wrote in its blog post. “We hope the Adversarial Robustness Toolbox project will stimulate research and development around adversarial robustness of DNNs, and advance the deployment of secure AI in real world applications.”
