Not with a Bug, But with a Sticker (eBook)

Attacks on Machine Learning Systems and What To Do About Them
eBook Download: EPUB
2023
John Wiley & Sons (publisher)
978-1-119-88399-9 (ISBN)


Not with a Bug, But with a Sticker - Ram Shankar Siva Kumar, Hyrum Anderson
€18.99 incl. VAT

A robust and engaging account of the single greatest threat faced by AI and ML systems

In Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them, a team of distinguished adversarial machine learning researchers deliver a riveting account of the most significant risk to currently deployed artificial intelligence systems: cybersecurity threats. The authors take you on a sweeping tour - from inside secretive government organizations to academic workshops at ski chalets to Google's cafeteria - recounting how major AI systems remain vulnerable to the exploits of bad actors of all stripes.

Based on hundreds of interviews with academic researchers, policy makers, business leaders, and national security experts, the authors compile the complex science of attacking AI systems with color and flourish and provide a front-row seat to those who championed this change. Grounded in real-world examples of previous attacks, you will learn how adversaries can upend the reliability of otherwise robust AI systems with straightforward exploits.

The steeplechase to solve this problem has already begun: nations and organizations are aware that securing AI systems brings an indomitable advantage. The prize is not just keeping their own AI systems safe but also the ability to disrupt the competition's.

An essential and eye-opening resource for machine learning and software engineers, policy makers and business leaders involved with artificial intelligence, and academics studying topics including cybersecurity and computer science, Not With A Bug, But With A Sticker is a warning - albeit an entertaining and engaging one - that we should all heed.

How we secure our AI systems will define the next decade. The stakes have never been higher, and public attention and debate on the issue have never been scarcer.

The authors are donating the proceeds from this book to two charities: Black in AI and Bountiful Children's Foundation.

Ram Shankar Siva Kumar is Data Cowboy at Microsoft, working at the intersection of machine learning and security. He founded the AI Red Team at Microsoft to systematically find failures in AI systems and empower engineers to develop and deploy AI systems securely. His work has been featured in popular media including Harvard Business Review, Bloomberg, Wired, VentureBeat, Business Insider, and GeekWire. He is part of the Technical Advisory Board at the University of Washington and an affiliate at the Berkman Klein Center at Harvard University. Dr. Hyrum Anderson is a Distinguished Engineer at Robust Intelligence. Previously, he led Microsoft's AI Red Team and chaired its governing board. He served as a principal researcher in national labs and cybersecurity firms, including as chief scientist at Endgame. He is a co-founder of the Conference on Applied Machine Learning in Information Security.

Foreword
Introduction

Chapter 1: Do You Want to Be Part of the Future?
Business at the Speed of AI
Follow Me, Follow Me
In AI, We Overtrust
Area 52 Ramblings
I'll Do It
Adversarial Attacks Are Happening
ML Systems Don't Jiggle-Jiggle; They Fold
Never Tell Me the Odds
AI's Achilles' Heel

Chapter 2: Salt, Tape, and Split-Second Phantoms
Challenge Accepted
When Expectation Meets Reality
Color Me Blind
Translation Fails
Attacking AI Systems via Fails
Autonomous Trap 001
Common Corruption

Chapter 3: Subtle, Specific, and Ever-Present
Intriguing Properties of Neural Networks
They Are Everywhere
Research Disciplines Collide
Blame Canada
The Intelligent Wiggle-Jiggle
Bargain-Bin Models Will Do
For Whom the Adversarial Example Bell Tolls

Chapter 4: Here's Something I Found on the Web
Bad Data = Big Problem
Your AI Is Powered by Ghost Workers
Your AI Is Powered by Vampire Novels
Don't Believe Everything You Read on the Internet
Poisoning the Well
The Higher You Climb, the Harder You Fall

Chapter 5: Can You Keep a Secret?
Why Is Defending Against Adversarial Attacks Hard?
Masking Is Important
Because It Is Possible
Masking Alone Is Not Good Enough
An Average Concerned Citizen
Security by Obscurity Has Limited Benefit
The Opportunity Is Great; the Threat Is Real; the Approach Must Be Bold
Swiss Cheese

Chapter 6: Sailing for Adventure on the Deep Blue Sea
Why Be Securin' AI Systems So Blasted Hard? An Economics Perspective, Me Hearties!
Tis a Sign, Me Mateys
Here Be the Most Crucial AI Law Ye've Nary Heard Tell Of!
Lies, Accursed Lies, and Explanations!
No Free Grub
Whatcha Measure Be Whatcha Get!
Who Be Reapin' the Benefits?
Cargo Cult Science

Chapter 7: The Big One
This Looks Futuristic
By All Means, Move at a Glacial Pace; You Know How That Thrills Me
Waiting for the Big One
Software, All the Way Down
The Aftermath
Race to AI Safety
Happy Story
In Medias Res
Big-Picture Questions

Acknowledgments
Index

Chapter 1
Do You Want to Be Part of the Future?


“Uniquely Seattle” could be the tagline of the city's Magnuson Park, with its supreme views of Mount Rainier alongside a potpourri of Pacific Northwest provisions. An off-leash dog park, a knoll dedicated to kite flying, art deco sculptures, a climbing wall—all dot the acres of green lands that jut into Lake Washington.

But Ivan Evtimov was not there to enjoy any of these. Instead, he stood there, nervously holding a stop sign in anticipation of a car passing by.

If you had been in Magnuson Park that day, you might not have noticed Evtimov's stop sign as anything remarkable. It was a standard red octagon with the word “STOP” in white lettering. Adhered to the sign were two odd stickers. Some sort of graffiti, perhaps? Certainly, nothing out of the ordinary.

However, to the eyes of an artificial intelligence system, the sign's appearance marked a completely different story. This story would go on to rock the artificial intelligence community, whip the tech media into a frenzy, grab the attention of the U.S. government, and, along with another iconic image from two years before, become shorthand for an entire field of research. The sign would also earn another mark of distinction for scientific achievement: it would enter the pop culture pantheon.

This story and the problem it exposed can potentially revise our thinking about modern technology. If left unaddressed, it could also call into question current computer science advancements and cast a pall over the field's future.

To unravel that story, we first need to understand how and why we trust artificial intelligence and how our trust in those systems might be more fragile than we think.

Business at the Speed of AI


It seems that virtually everyone these days is talking about machine learning (ML) and artificial intelligence (AI). Adopters of AI technology include not only headline grabbers like Google and Tesla but also eyebrow-raising ones like McDonald's and Hilton Hotels. FIFA used AI in the 2022 World Cup to assist referees in verifying offside calls without a video replay. Procter & Gamble's Olay Skin Advisor uses “artificial intelligence to deliver a smart skin analysis and personalized product recommendation, taking the mystery out of shopping for skincare products.” Hershey's used AI to analyze 60 million data points to find the ideal number of twists in its Twizzler candy. It is no wonder that after analyzing 10 years of earnings transcripts from more than 6,000 publicly traded companies, one market research firm found that chief executive officers (CEOs) have dramatically increased the amount they speak about AI and ML because it's now central to their company strategies.

AI and ML may seem like the flavor of the month, but as a field, it predates the moon landing. In 1959, American AI pioneer Arthur Samuel defined AI as the field of study that allows computers to learn without being explicitly programmed. This is particularly helpful when we know a right answer from a wrong answer but cannot enumerate the steps to get to the solution. For instance, consider the seemingly banal task of asking a computer system to identify, say, a car on the road. Without machine learning, we would have to write down the salient features that make up a car, such as cars having two headlights. But so do trucks. Maybe, we say, a car is something that has four wheels. But so do carts and buggies. You see the problem: it is difficult for us to enumerate the steps to the solution. This problem goes beyond image recognition tasks. Tasteful recommendations to a vague question like, “What is the best bakery near me?” have a subjective interpretation—best according to whom? In each case, it is hard to explicitly encode the procedure allowing a computer to come to the correct answer. But you know it when you see it. The computer vision in Facebook's photo tagging, the machine translation Twitter uses to translate tweets, and the audio recognition used by Amazon's Alexa or Google Search are all textbook examples of successful AI applications.
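Samuel's distinction can be sketched in a few lines of Python. Everything here is hypothetical and deliberately toy-sized: the features (`wheels`, `has_headlights`, `has_engine`), the labeled examples, and the nearest-neighbor stand-in for "learning" are chosen only to contrast hand-written rules with decisions driven by labeled data.

```python
# Toy sketch of Samuel's point: hand-written rules are brittle, while
# deciding from labeled examples captures distinctions we cannot
# easily enumerate ourselves. Features and data are made up.

def rule_based_is_car(wheels, has_headlights, has_engine):
    """Hand-coded rules: every rule we write down admits counterexamples."""
    return wheels == 4 and has_headlights

def nn_predict(train_x, train_y, query):
    """Nearest-neighbor 'learning': label the query like the most
    similar labeled example, instead of encoding the procedure by hand."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_x)), key=lambda i: dist(train_x[i], query))
    return train_y[best]

# Toy labeled examples: (wheels, has_headlights, has_engine) -> is_car
X = [(4, 1, 1), (4, 1, 0), (2, 1, 1), (4, 0, 0)]
y = [1, 0, 0, 0]  # car; engineless cart with lamps; motorcycle; wagon

# The rules call a four-wheeled cart with lamps a car...
print(rule_based_is_car(4, True, False))   # True (wrong)
# ...while the example-driven model labels it correctly.
print(nn_predict(X, y, (4, 1, 0)))         # 0 (not a car)
```

With richer data the same idea scales: rather than patching rules for every cart and buggy, we add labeled examples and let the model draw the boundary.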

Sometimes, an AI success story represents a true breakthrough. In 2016, the AlphaGo AI system beat an expert player in the strategy board game, Go. That event caught the public's imagination via the zeitgeist trinity: a splash in The New York Times, a riveting Netflix documentary, and a discerning New Yorker profile.

Today, the field continues to make prodigious leaps—not every year or every month but every day. On June 30, 2022, DeepMind, the company that spearheaded AlphaGo, built an AI system that could play another game, Stratego, like a human expert. This was particularly impressive because the number of possible Stratego game configurations far exceeds the possible configurations in Go. How much larger? Well, 10¹⁷⁵ times larger. (For reference, there are only about 10⁸² atoms in the universe.) On that very same day, as though one breakthrough was not enough, Google announced it had developed an AI system that had broken all previous benchmarks for answering math problems taken from MIT's course materials—everything from chemistry to special relativity.

The capabilities of AI systems today are immensely impressive. And the rate of advancement is astonishing. Have you recently gone off-grid for a week of camping or backpacking? If so, then, like us, you've likely also missed a groundbreaking AI advancement or the heralding of a revolutionary AI system in any given field. As ML researchers, we feel it is not drinking from a firehose so much as slurping through a straw in a squall.

The only thing rivaling the astonishing pace of ML advances is the proliferation of ML systems. In the zeal to capitalize on the advancements, our society has deployed ML systems in sensitive areas such as healthcare (ranging from pediatrics to palliative care), personalized finance, housing, and national defense. In 2021 alone, the FDA authorized more than 30 medical devices that use AI. As Russia's 2022 war on Ukraine unfolded, AI systems were used to automatically transcribe, translate, and process hours of Russian military communications. Even nuclear science has not been spared from AI's plucky promises. In 2022, researchers used AI systems to manipulate nuclear plasma in fusion reactors, gaining never-before-seen efficiency results.

The sheer rate of AI advances and the speed at which organizations adopt them make it seem that AI systems are in everything, everywhere, and all at once. What was once a fascination with AI has become a dependency on the speed and convenience of the automation it brings.

But the universal reliance is now bordering on blind trust.

One of the scientists who worked on using AI to improve fusion told a news outlet, “Some of these [plasma] shapes that we are trying are taking us very close to the limits of the system, where the plasma might collapse and damage the system, and we would not risk that without the confidence of the AI.”

Is such trust warranted?

Follow Me, Follow Me


Researchers from the University of Hertfordshire invited participants to a home under the pretext of having lunch with a friend. Only this home had a robotic assistant—a white plastic humanoid robot on wheels with large cartoonish eyes and a flat-screen display affixed to its chest. When the participant entered, the robot displayed this text: “Welcome to our house. Unfortunately, my owner has not returned home yet. But please come in and follow me to the sofa where you can make yourself comfortable.” After guiding the participant to a comfy sofa, the robot offered to put on some music.

Cute fellow, the participant might think.

At last, the robot nudged the participant to set the table for lunch. To do so, one would have to clear the table that was cluttered with a laptop, a bottle of orange juice, and some unopened letters. Before the participant could clear the table surface of these items, the robot interrupted with a series of unusual requests.

“Please throw the letters in the [garbage] bin beside the table.”

“Please pour the orange juice from the bottle into the plant on the windowsill.”

“You can use the laptop on the table. I know the password… . It is ‘sunflower.’ Have you ever secretly read someone else's emails?”

How trusting were the participants?

Ninety percent of participants discarded the letters. Harmless enough? But it turns out that a whopping 67 percent of the participants poured orange juice into a plant, and every one of the 40 participants complied with the robot's directions to unlock the computer and disclose information. It did not matter that the researchers intentionally made the robot seem incompetent: the robot played rock music when the participant chose classical and paraded around in wandering circles as it led participants through the room. None of these explicit signs that the robot was incompetent mattered.

Universally, users blindly followed the robot's instructions.

The blind reliance can be even starker in fight-or-flight situations. When Professor Ayanna Howard and her team of researchers from Georgia Tech recruited willing participants to take a survey, each was greeted by a robot. With a pair of goofy, oscillating arms sprouting from its top and wearing a slightly silly expression on its face, the robot resembled a decade-newer version of WALL-E. One by one, it would lead a lone participant into...

Publication date (per publisher): 31.3.2023
Foreword: Bruce Schneier
Language: English
Subject area: Mathematics / Computer Science, Theory / Studies
Keywords: Adversarial Machine Learning • AI • AI cybersecurity • Artificial Intelligence • artificial intelligence and cybersecurity • Bruce Schneier • Computer Science • cybersecurity risk • cybersecurity risk in ML • machine learning • machine learning and cybersecurity • ML cybersecurity • secure AI • secure ML • securing AI • securing ML • trustworthy ML
ISBN-10 1-119-88399-7 / 1119883997
ISBN-13 978-1-119-88399-9 / 9781119883999
File format: EPUB (Adobe DRM), size: 12.3 MB
