To Protect AI From Attacks, Show It Fake Data
Published on April 01, 2018 at 07:00PM
AI systems can sometimes be tricked into seeing something that isn't actually there -- remember when Google's software "saw" a 3-D-printed turtle as a rifle? At an event earlier this week, Google Brain researcher Ian Goodfellow explained how AI systems can defend themselves. From a report: Goodfellow is best known as the creator of generative adversarial networks (GANs), a type of artificial intelligence that pits two networks, trained on the same data, against each other. One network, called the generator, creates synthetic data, usually images, while the other, called the discriminator, judges whether its input is real or generated. Goodfellow walked through nearly a dozen examples of how researchers have used GANs in their work, but he focused on his current main research interest: defending machine-learning systems from being fooled in the first place. [...] GANs are very good at creating realistic adversarial examples, which in turn make very good training material for a robust defense: if systems are trained on adversarial examples they have to spot, they get better at recognizing adversarial attacks. The better the adversarial examples, the stronger the defense.
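To make the generator/discriminator loop concrete, here is a minimal GAN sketch in PyTorch -- my own illustration, not code from Goodfellow's talk. The generator learns to mimic samples from a 1-D Gaussian, and every network size, learning rate, and distribution parameter below is an assumed toy value.

```python
# Minimal GAN sketch (an illustration, not code from the talk). The
# generator maps noise to synthetic 1-D samples; the discriminator
# estimates the probability that its input came from the real data.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: samples from a Gaussian with mean 4 and std 1.5.
    return 4.0 + 1.5 * torch.randn(n, 1)

for step in range(3000):
    # Discriminator step: label real samples 1, generated samples 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f} (target: 4.00, 1.50)")
```

The two losses pull against each other: the discriminator's mistakes are exactly the generator's training signal, which is why the generated samples drift toward the real distribution.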
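The defense described at the end of the summary -- train on adversarial examples the system has to spot -- can be sketched the same way. The snippet below uses the fast gradient sign method (FGSM), a perturbation technique Goodfellow himself co-introduced, though the article doesn't name a specific method; the classifier, data, and epsilon value are all assumed toy choices.

```python
# Adversarial-training sketch using the fast gradient sign method (FGSM).
# The two-class toy model and data below are assumptions for illustration;
# the article does not describe a specific setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.1):
    # Perturb each input in the direction that most increases the loss,
    # producing an adversarial version of the batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def toy_batch(n):
    # Toy labeled data: the class is the sign of the first feature.
    x = torch.randn(n, 20)
    return x, (x[:, 0] > 0).long()

for step in range(1000):
    x, y = toy_batch(64)
    # Train on adversarially perturbed inputs, so the model must learn to
    # classify examples crafted to fool it -- the defense the talk describes.
    x_adv = fgsm(x, y)
    loss = loss_fn(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()

x, y = toy_batch(1000)
with torch.no_grad():
    acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

In the full picture the article sketches, a GAN's generator can play the attacker's role, supplying the adversarial examples the classifier then trains against; the better the generator, the stronger the resulting defense.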
Read more of this story at Slashdot.