Living with the Algorithm: Servant or Master? – Tim Clement-Jones

Living with the Algorithm: Servant or Master? (eBook)

eBook download: EPUB
2024 | 1st edition
160 pages
Unicorn (publisher)
978-1-916846-50-0 (ISBN)
An authoritative guide to what is needed for AI governance and regulation from expert authors internationally involved in the practical world of AI. This book tackles the question of why AI is a distinct challenge from other technologies and how we should seek to implement innovation-friendly approaches to regulation. It sets out many of the risks to be considered, why regulation is needed, and the form this should take to promote international convergence on AI governance and the responsible deployment of AI. This is a highly readable prescription for AI governance and regulation designed to encourage the technological goals of humanity whilst ensuring that potential risks are mitigated or prevented and, most importantly, that AI remains our servant and does not become our master.

Tim, Lord Clement-Jones CBE is the former Chair of the House of Lords Select Committee on Artificial Intelligence and co-founded the All-Party Parliamentary Group on Artificial Intelligence. He is the Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology, a founding member of the OECD Parliamentary Group on AI, and a former consultant to the Council of Europe's Ad Hoc Committee on AI ('CAHAI').

Inescapably, for better or worse, as a society we are becoming increasingly conscious of the impact of artificial intelligence (AI) in its many forms. Barely a day goes by now without some reference to AI in the news, whether positive, relating to a new technology capable of making everyone’s lives easier, or negative, warning of the systematic reduction of employment opportunities as humans are replaced by automation. With the wide-scale adoption of digital and technological solutions over the past few years, especially as we attempted to minimise the impact of the COVID-19 pandemic, we have all become more aware of the importance of digital media and the impact that AI and algorithms have on our lives.

In December 2022, the United Kingdom’s National AI Strategy3 rightly identified AI as the ‘fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’. Wide-scale changes of this nature, brought about by the development of innovative technologies are, however, by no means a new experience. We need only look to previous industrial revolutions where major societal shifts occurred through the implementation of mechanical, electrical, and computing/automation assisted innovations. Benz began the first commercial production of motor vehicles with an internal combustion engine in 1886. By 1912, the number of vehicles in London exceeded the number of horses. What appears to have caught the world by surprise in the case of AI, with the potential it brings, is the speed and complexity with which it has arrived, forcing us to address many concerns that were previously concepts described in science fiction.

This rapid plunging of the world into a new technological frontier can be likened to the 1970s American television series, Soap. At the beginning of each episode, viewers would be introduced through a recap of the previous episode which would finish by exclaiming: ‘Confused? You won’t be, after this week’s episode.’ Shortly after, the plotline would continue to spiral into new unknowns and even more confusing stories.

This is certainly how it sometimes feels when tackling the narrative around AI, as it swings back and forth between extremes: from AI as a societal good with the potential to solve humanity’s problems, such as climate change, to the opposite view, in which AI is an existential threat to humanity and we should expect an imminent rise of the machines. This is unquestionably made worse by a general lack of public understanding of the technology and an increase in dramatic AI-related media headlines. An early and notable example was the lurid headline in response to the report written by the UK’s House of Lords Select Committee on AI, which considered the economic, ethical, and social implications of advances in artificial intelligence. When our 2018 report, AI in the UK: Ready, willing and able?, was published, we were alarmingly warned:

Killer Robots could become a reality unless a moral code is created for AI, peers warn.4

Famously the late Professor Stephen Hawking warned that the creation of powerful artificial intelligence will be ‘either the best, or the worst thing, ever to happen to humanity’.5

AI is not, however, despite what many headlines would lead us to believe, all doom and gloom. In reality, AI presents opportunities worldwide across a variety of sectors, such as healthcare, education, financial services, marketing, retail, agriculture, energy conservation, public services, smart or connected cities, and regulatory technology itself. The predictive, analytical, and problem-solving nature of AI, and in particular generative AI systems, has the potential to drastically improve performance, research outcomes, productivity, and customer experience.

A notable example of this is the marrying of biotechnology and AI-enabled data analytics in tackling the development of bespoke or ‘precision’ medicines. It has opened up the potential to synthesise, understand, and make use of far greater quantities of health information in the pursuit of treating diseases by creating novel therapies through newly identified compounds and precision medicines.

Regardless of which side of the fence one sits with respect to AI and its potential for benefit or harm, it is increasingly apparent that AI has already – and will to an even greater extent in future – become an integral part of everyday life. It brings many opportunities to overcome the challenges of the past, increasing diversity and access to employment for those who are presently unable to work owing to location or physical disability, and streamlining many administrative processes in business that are both costly and time-consuming.

We already have examples of its use in the detection of financial crimes, including fraudulent behaviour and anti-competitive practices, the delivery of personalised education and tutoring, energy conservation, medical care and treatment, and the delivery of large-scale government and non-governmental initiatives, including the United Nations’ pursuit of their sustainable development goals, such as combating climate change, hunger, and poverty.

It is therefore no surprise that many, including over a thousand technologists from the UK’s Chartered Institute for Information Technology (BCS), asserted in an open letter in 2023 that AI will be a transformative force for good if the right critical decisions about its development and use are made.6

It is equally apparent, however, that AI has the potential for a great many harms to individuals, their rights, and society as a whole. This was recognised directly in March 2023 in a letter signed by several thousand technologists, including those from academia, government, and technology companies themselves, recognising ‘profound risks to society and humanity’ posed by AI and systems with human-competitive intelligence and calling for a temporary halt on technological developments while risks were assessed.7

Later in May of the same year another group of technologists led by the Center for AI Safety, including Dr Geoff Hinton, one of the godfathers of deep neural networks, and several senior leaders behind many of the AI technologies that we see on the market today asserted in a short, concerned statement that ‘mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’. Unsurprisingly, given such existential concerns, Dr Hinton subsequently resigned from his previously held role at Google to ‘speak freely about the dangers of AI’.8

Many of those seeking to draw attention to the potential risks of AI do not, however, accept that moratoriums or bans should be put in place as if AI – in particular generative AI – were a form of inhumane technology. Instead, many, including a number of prominent tech executives, believe that a controlled approach should be taken that involves comprehensive regulation with a specific international agency created for the oversight and monitoring of AI developments.

Sam Altman, the CEO of OpenAI, for example, in giving evidence to the US Congress, rejected the idea of a temporary moratorium on AI development but asked for AI to be regulated. He cited existential risk and espoused the creation of an international agency, along the lines of the International Atomic Energy Agency (IAEA), to oversee AI development and its risks.9

As a cautious optimist, the author believes that new technology has the potential to offer a great many benefits, including greater productivity and more efficient use of resources. But as highlighted in the title of Stephanie Hare’s book, Technology is Not Neutral,10 we should be clear about the purpose of new technology when we adopt it and about the way in which we intend to adopt it. We need to ask a number of questions: Even if AI can do something, should it? Does it better connect and empower our citizens and improve working life? Does it create a more sustainable society?

A cardinal principle in the development of effective governance of AI should be the requirement that some sort of societal (or organisational) good must come from the implementation of technology. In short, deployment of AI should be guided in such a way that its central purpose is to promote individual or societal benefit, rather than be implemented in a push for automation as an end in itself.

The author’s view is that, as part of the process of adoption, a governance framework should be developed and implemented in a way that encourages transparency and is designed to gain and develop stakeholder trust. The author also believes that we must seek to actively shape AI’s development and utilisation across all stages of its lifecycle – including decommissioning – or risk passively acquiescing to its many predictable consequences.

Even where a clear purpose and benefit are identified, ineffective governance has the potential to cause further concerns. Anyone who has read Weapons of...

Publication date (per publisher): 20 May 2024
Language: English
ISBN-10 1-916846-50-5 / 1916846505

