Mastering AI (eBook)
336 pages
Bedford Square Publishers (publisher)
978-1-83501-044-0 (ISBN)
Jeremy Kahn is a veteran technology and business journalist who has covered artificial intelligence for more than seven years, first at Bloomberg and then at Fortune.
CHAPTER 1
THE CONJURERS
In the spring of 2020, in a vast windowless building the size of twenty football fields on the plains outside Des Moines, Iowa, one of the largest supercomputers ever built crackled to life. The supercomputer was made up of rack upon rack of specialized computer chips—a kind originally developed to handle the intensive work of rendering graphics in video games. More than ten thousand of the chips were yoked together, connected with high-speed fiber-optic cabling. The data center belonged to Microsoft, and it cost hundreds of millions of dollars to construct. But the supercomputer was designed to serve the needs of a small start-up from San Francisco that, at the time, few people outside the niche computer science field known as artificial intelligence had ever heard of. It was called OpenAI.
Microsoft had invested $1 billion in OpenAI in July 2019 to gain access to its technology. As part of the deal, Microsoft promised to build this supercomputer. For the next thirty-four days, the supercomputer worked around the clock training one of the largest pieces of AI software ever developed. The software could encode the relationship between 175 billion data points. OpenAI fed this AI software text—more than 2.5 billion web pages collected over a decade of web crawling, millions of Reddit conversations, tens of thousands of books, and all of Wikipedia. The software’s objective was to create a map of the relationships between all of the words in that vast dataset. This statistical map would allow it to take any arbitrary text sequence and predict the next most likely word. By doing so, it could output many paragraphs—in a variety of genres and styles—that were almost indistinguishable from human writing.
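The mechanics of that objective are simple to demonstrate. The following minimal sketch in Python asks a language model for its most probable next words given a prompt; since GPT-3's weights were never released, it uses the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in, and the prompt is an invented example rather than anything from the book.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small, openly available stand-in for GPT-3 (an assumption for
# illustration; GPT-3 itself is roughly a thousand times larger).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The supercomputer was built on the plains outside"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One score per vocabulary entry, at every position in the prompt.
    logits = model(**inputs).logits

# Keep only the scores for the position after the last prompt word, and
# convert them into a probability distribution over the vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")

Everything GPT-3 produced, from poetry to translations, emerged from repeating this single step: sample a likely next word, append it to the text, and predict again.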
OpenAI called the software GPT-3, and its debut in June 2020 stunned computer scientists. Never before had software been so capable of writing. GPT-3 could do far more than just assemble prose and poetry. It could also code. It could answer factual questions. It was capable of reading comprehension and summarization. It could determine the sentiment of a piece of writing. It could translate between English and French, and French and German, and many other languages. It could answer some questions involving commonsense reasoning. In fact, it could perform dozens of language-related tasks, even though it had only ever been trained to predict the next word in a sequence. It did all this based on instructions that were conversational, the way you would speak to a person. Scientists call this “natural language,” to differentiate it from computer code. The potential of GPT-3 convinced Satya Nadella, Microsoft’s chief executive, to quietly double, and then triple, his initial $1 billion investment in the small San Francisco AI lab. GPT-3 would, in turn, lead two years later to ChatGPT.
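What instructing a model in "natural language" looks like in practice can be sketched in a few lines. The example below sends a plain-English request through OpenAI's current Python SDK; the model name and the instruction are illustrative assumptions (the original GPT-3 completion models have since been retired from the public API), not details reported in this chapter.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A conversational instruction, written the way you would speak to a person.
instruction = (
    "Translate this sentence into French, then say whether its tone is "
    "positive or negative: 'The debut of the new model stunned scientists.'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed stand-in; GPT-3 is no longer served
    messages=[{"role": "user", "content": instruction}],
)
print(response.choices[0].message.content)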
ChatGPT was not the first AI software whose output could pass for human. But it was the first that could be easily accessed by hundreds of millions of people. It awakened the public to AI’s possibilities and set off a race among the largest technology companies and well-funded start-ups to turn those possibilities into reality. Within months, Microsoft would commit even more money to OpenAI, another $10 billion, and begin incorporating OpenAI’s even more powerful GPT-4 model into products such as Microsoft Word and PowerPoint, which hundreds of millions of people use daily. Google, scrambling to catch up, created a general-purpose chatbot of its own, Bard, and began integrating generative AI into its search engine, potentially destabilizing the business models of entire industries, from news and media to e-commerce. It also began training even larger, more powerful AI systems that analyze and create images, sounds, and music, not just text. This model, called Gemini, has since replaced Bard and been integrated into several Google products. Meta began releasing powerful AI models freely for anyone to use. Amazon and Apple started building generative AI too.
This competition is propelling us toward a single general-purpose AI system that could equal or exceed human abilities at almost any cognitive task—a moment known as the singularity. While many still doubt that moment is at hand, it is closer than ever.
Why did ChatGPT seem so remarkable? Because we could talk to it. That a computer’s conversational skills—and not its fluency in multiplying five-digit numbers or detecting patterns in stock market gyrations—should be the yardstick by which we measure machine intelligence has a long and contentious history. The vision of an intelligent digital interlocutor was there at the dawn of the computer age. It has helped guide the entire field of AI—for better, and for worse.
THE TURING TEST—AI’S ORIGINAL SIN
The idea of dialogue as the hallmark of intelligence dates to the mid-twentieth century and the writings of one of that era’s most exceptional minds, Alan Turing, the brilliant mathematician best known for helping to crack the Nazis’ Enigma code during World War II. In 1936, when he was just twenty-four, Turing proposed the design of a hypothetical device that would serve as the inspiration for modern computers. And it was Turing who, in a 1948 report for a British government laboratory, suggested that computers might one day be considered intelligent. What mattered, he argued, was the quality of a machine’s output, not the process that led to it. A machine that could beat a person at chess was more “intelligent,” even if the machine arrived at its moves through a method, brute calculation, that bore little resemblance to the way its human opponent thought.
Two years later, in 1950, Turing expanded on these ideas in his seminal paper entitled “Computing Machinery and Intelligence.” He suggested a test for machine intelligence that he called “the Imitation Game.” It involves an interrogator asking questions of a person and a computer. All three are isolated from one another in separate rooms. The interrogator receives responses in the form of typewritten notes that are labeled X or Y depending on whether the human or the machine produced them. The interrogator’s job is to determine, based on the typed answers, whether X is a person or a machine. Turing proposed that a computer should be considered intelligent if the interrogator can’t tell the difference.
Critically, Turing explained that the test was not about the accuracy or truthfulness of answers. He fully expected that to “win” the game, both the human and the machine might lie. Nor was it about specialist knowledge. In describing the game, he postulated that it would be a machine’s mastery of the form of everyday conversation, as well as an apparent grasp of common sense, that would make it indistinguishable from the human.
The Imitation Game, later dubbed “the Turing Test,” has had a profound impact on how computer scientists have regarded machine intelligence. But it proved controversial almost from the start. Wolfe Mays, a philosopher and professor at the University of Manchester, was among the contemporary critics who attacked Turing’s test because of its focus on the machine’s output, instead of its internal processes. He called machines’ cold, logical calculus “the very antithesis of thinking,” which Mays argued was a far more mysterious, intuitive phenomenon. Mays was among those who believed intelligence was closely linked to human consciousness, and that consciousness, in turn, could not be reduced to mere physics. Turing, he wrote, seemed “implicitly to assume that the whole of intelligence and thought can be built up summatively from the warp and woof of atomic propositions.”
Decades later, the philosopher John Searle constructed a thought experiment to highlight what he saw as the Turing Test’s fatal flaw. Searle imagined a man who can neither read nor speak Chinese being locked in a room with a Chinese dictionary, some sheets of paper, and a pencil. Notes with Chinese characters are slipped through a slot in the door, and the man’s job is to look up the symbols in the dictionary and transcribe the Chinese characters he finds there onto a fresh slip of paper and push these slips out through the slot. Searle said it would be ridiculous to say that the man in the room “understood” Chinese just because he could produce accurate definitions of Chinese words. So, too, Turing was wrong, Searle wrote, to consider a computer intelligent just because it could mimic the outward characteristics of a human dialogue.
Debates about these issues have become louder with the advent of ultra-large language models, such as GPT-4. It certainly doesn’t help that cognitive science has yet to resolve the nature of human intelligence, let alone consciousness. Can intelligence be distilled to a single number—an intelligence quotient, or IQ—based on how someone performs on a standardized test? Or is it more appropriate, as the Harvard psychologist and neuroscientist Howard Gardner argued, to speak of multiple intelligences that take into account the athletic genius of a Michael Jordan, the musical talents of a Taylor Swift, and the interpersonal skills of a Bill Clinton? Neuroscientists and cognitive psychologists continue to argue.
Beyond elevating outcomes over process, the Turing Test’s emphasis on deception as the hallmark of intelligence encouraged the use of deceit in testing AI software. This makes the...
Publication date (per publisher) | 1 Aug 2024
Place of publication | London
Language | English
Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics
Keywords | Artificial Intelligence • ChatGPT • generative AI • Natural Language Processing • OpenAI • Sam Altman • Tech
ISBN-10 | 1-83501-044-X / 183501044X
ISBN-13 | 978-1-83501-044-0 / 9781835010440
Size: 560 KB
DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and nonfiction. The reflowable text adapts dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a free reading app.