Big Data Analysis and Artificial Intelligence for Medical Sciences (eBook)

eBook Download: EPUB
2024 | 1st edition
432 pages
Wiley (publisher)
978-1-119-84655-0 (ISBN)
Big Data Analysis and Artificial Intelligence for Medical Sciences

Overview of the current state of the art on the use of artificial intelligence in medicine and biology

Big Data Analysis and Artificial Intelligence for Medical Sciences demonstrates the efforts made in the fields of computational biology and the medical sciences to design and implement robust, accurate, and efficient computer algorithms that model the behavior of complex biological systems much faster than traditional modeling approaches based solely on theory.

With chapters written by international experts in the field of medical and biological research, Big Data Analysis and Artificial Intelligence for Medical Sciences includes information on:

  • Studies conducted by the authors that are the result of years of interdisciplinary collaboration with clinicians, computer scientists, mathematicians, and engineers
  • Differences between traditional computational approaches to data processing (those of mathematical biology) versus the experiment-data-theory-model-validation cycle
  • Existing approaches to the use of big data in the healthcare industry, such as through IBM's Watson Oncology, Microsoft's Hanover, and Google's DeepMind
  • Difficulties in the field that have arisen as a result of technological changes, and potential future directions these changes may take

A timely and up-to-date resource on the integration of artificial intelligence in medicine and biology, Big Data Analysis and Artificial Intelligence for Medical Sciences is of great benefit not only to professional scholars, but also to MSc and PhD students eager to explore advancements in the field.

Bruno Carpentieri is Associate Professor in the Faculty of Engineering at the Free University of Bozen-Bolzano, Bozen-Bolzano, Italy.

Paola Lecca is Assistant Professor in the Faculty of Engineering at the Free University of Bozen-Bolzano, Bozen-Bolzano, Italy.



1
Introduction


Bruno Carpentieri and Paola Lecca

Faculty of Engineering, Computer Science and Artificial Intelligence Institute, Free University of Bozen-Bolzano, Bolzano, Italy

The concept of intelligent machines is frequently attributed to Alan Turing, who published a seminal paper titled “Computing Machinery and Intelligence” in 1950, in which he proposed a simple test, now known as the “Turing test,” to assess whether a machine can demonstrate human-like intelligence. Six years later, in 1956, during the Dartmouth Conference, an influential event in the history of AI, the term “artificial intelligence” (AI) was coined by John McCarthy, later emeritus professor at Stanford and often called the “Father of AI,” to characterize “the science and engineering of creating intelligent machines.” The Turing test has had a significant impact on the development of modern AI by establishing a standard for measuring progress in AI research. Nevertheless, AI encompasses a broader spectrum of methods, concepts, and technologies: it entails the study and development of systems that can perform tasks typically requiring human intelligence, using techniques such as machine learning (ML), natural language processing (NLP), computer vision, and others. Early AI systems relied on explicitly coded rules, based on simple sets of “if-then” statements or symbolic reasoning approaches, in which particular conditions would trigger specific actions to make judgments and perform tasks. These early models required considerable manual rule programming, which was time-consuming and difficult to scale to complex problems. As a result of these limitations, widespread adoption of early AI models proved difficult, particularly in complicated domains such as medicine.

Advances in AI research have since led to more sophisticated algorithms that function in a way loosely analogous to the human brain, helping to address some of these challenges and opening up new possibilities for AI applications. Within ML, a subfield known as deep learning (DL) has emerged, consisting of techniques for creating and training artificial neural networks (ANNs) with multiple layers of interconnected nodes, also known as neurons, capable of learning and making decisions autonomously. These networks are inspired by the structure and operation of biological neural networks in the human brain, although they do not fully replicate its complexity and mechanisms. By iteratively adjusting the weights and biases of the interconnected neurons, DL algorithms can recognize complex patterns, extract meaningful representations from large amounts of raw data, and make decisions or predictions across multiple domains. This has produced extraordinary progress in numerous fields, including computer vision, NLP, speech recognition, and the medical sciences.
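To make the idea of “iteratively adjusting the weights and biases of the interconnected neurons” concrete, the following is a minimal, self-contained sketch (not taken from the book) of a tiny two-layer neural network trained by gradient descent on a toy XOR-style dataset. The dataset, layer sizes, learning rate, and number of iterations are illustrative assumptions, not a prescription from the chapter.

```python
# Minimal sketch (illustrative only): a two-layer neural network whose weights
# and biases are adjusted iteratively by gradient descent on a toy dataset.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: four input patterns with an XOR-style binary target.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases of two layers of interconnected "neurons".
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (illustrative choice)
for step in range(5000):
    # Forward pass: each layer applies its weights, biases, and a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. every weight and bias.
    err = p - y
    d2 = err * p * (1 - p)
    d1 = (d2 @ W2.T) * h * (1 - h)

    # Iterative adjustment of weights and biases (gradient descent step).
    W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

print(np.round(p, 2))  # predictions move toward the target pattern [0, 1, 1, 0]
```

Real DL systems differ mainly in scale (millions of parameters, many layers) and in using automatic differentiation, but the underlying loop of forward pass, error measurement, and weight/bias update is the same.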

The major breakthroughs of DL methods date to the early 2000s, owing to the availability of large datasets, increased computational power, and advances in parallel computing, in particular the advent of graphics processing units (GPUs), which played a crucial role in training deep neural networks at larger scale. DL is now a dominant approach at the forefront of AI research, with applications in a variety of disciplines. In the medical field, it has shown the potential to revolutionize healthcare and pave the way for personalized medicine (Gilvary et al. 2019). Predictive models, advanced data analytics, and DL algorithms can provide valuable insights for healthcare applications such as diagnosis, treatment selection, and therapy response prediction. The ability to analyze vast quantities of patient data, including medical records, genetic information, imaging data, and real-time sensor data, is one of the primary benefits of AI in medicine (Zeng et al. 2021; Liu et al. 2021b; Ahmad et al. 2021; Hamet and Tremblay 2017). These data can guide interventions and preventive measures that reduce risks and promote proactive healthcare, enhance clinical workflows, and improve procedural precision. By analyzing multiple risk factors, it becomes possible to assess an individual's likelihood of developing specific diseases.

In the context of medicine and healthcare, however, data-driven models present significant computational challenges. When a model is too complex, or the training dataset is too small relative to the model's capacity, the model may begin to capture noise or peculiarities specific to the training data and thus perform exceptionally well on the training data but poorly on new, unseen data. Such models are said to “overfit.” Noise and unpredictability are common features of complex healthcare datasets, and a DL model that overfits to these details may produce erroneous or unreliable predictions when applied to new patient data. Researchers in healthcare are therefore actively investigating methods to reduce overfitting and to ensure the robustness and dependability of DL models across diverse patient populations and situations. The availability of larger datasets, transfer learning techniques, and advances in model architectures and regularization methods are important factors in mitigating overfitting and facilitating the adoption of DL in the medical field.
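The overfitting behaviour described above can be illustrated with a small synthetic experiment. The sketch below (illustrative only, using synthetic data rather than clinical records) fits models of increasing capacity to a small noisy dataset and uses a held-out validation split to expose the gap between training and validation error; the data-generating function, noise level, and polynomial degrees are arbitrary assumptions.

```python
# Minimal sketch of overfitting: as model capacity grows relative to a small,
# noisy training set, training error keeps falling while error on held-out
# (unseen) data eventually rises.
import numpy as np

rng = np.random.default_rng(42)

# Small noisy dataset, standing in for a complex healthcare signal.
x = rng.uniform(-1, 1, size=40)
y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)

# Hold out part of the data to simulate "new, unseen" cases.
train, val = np.arange(30), np.arange(30, 40)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x[train], y[train], degree)  # fit on training data only
    mse = lambda idx: np.mean((np.polyval(coeffs, x[idx]) - y[idx]) ** 2)
    print(f"degree {degree:2d}: train MSE {mse(train):.3f}, "
          f"validation MSE {mse(val):.3f}")

# Typically, training error falls monotonically with degree while validation
# error eventually rises again -- the signature of an overfit model.
```

The same held-out-data logic underlies the validation and test splits used when training DL models on patient data, and it is the basis for the regularization and early-stopping strategies mentioned above.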

Convolutional neural networks (CNNs) were another significant advancement: a subclass of DL algorithms designed specifically for analyzing visual data such as images. Inspired by the structure and operation of the human visual cortex, they imitate the activity of networked neurons by employing layers of interconnected nodes, known as “convolutional layers,” which learn spatial hierarchies of features from the input data. Convolutional layers apply filters, or kernels, to input images, extracting and preserving local features and spatial relationships. In subsequent layers, these extracted features are combined and further processed to capture increasingly complex patterns and structures. The final layers of a CNN are typically fully connected layers, responsible for making predictions based on the learned features. CNNs have revolutionized image processing and computer vision, outperforming traditional machine learning approaches in image classification, object detection, segmentation, and other tasks. Their ability to automatically learn features from raw image data has made them extremely valuable in numerous applications, including autonomous vehicles, surveillance, and medical imaging.

One of the first successful CNN architectures was LeNet-5 (LeCun et al. 1998), introduced by Yann LeCun et al. in 1998 and designed primarily for handwritten digit recognition. Other popular CNN models include AlexNet (Krizhevsky et al. 2017), developed by Alex Krizhevsky et al., which made a breakthrough in the field by significantly lowering error rates; VGGNet (Simonyan and Zisserman 2014), developed by the Visual Geometry Group at the University of Oxford; GoogLeNet (Szegedy et al. 2015), introduced by Christian Szegedy et al. at Google, which used parallel convolutional operations at different scales; ResNet (He et al. 2016), proposed by Kaiming He et al., which enabled the successful training of networks with hundreds or even thousands of layers; DenseNet (Huang et al. 2017), proposed by Gao Huang et al.; and MobileNet (Howard et al. 2017), introduced by Andrew G. Howard et al. in 2017. These are only a few examples; numerous other CNN models have been developed over the years to address various applications, performance demands, and computational constraints, and they show great potential in the field of medicine. Their ability to analyze and interpret medical images, such as X-rays, computerized tomography (CT) scans, magnetic resonance imaging (MRI), and pathology slides, can assist in diagnosis, treatment planning, and disease monitoring. In recent years, CNNs have been used for a variety of medical imaging applications, including image classification (classifying medical images to identify different types of tumors, lesions, or diseases), segmentation (delineating regions or structures of interest, such as organs, tumors, or blood vessels, for surgical planning, radiation therapy, or the study of disease progression), object detection (detecting abnormalities, nodules, or lesions within medical images), and disease prediction and prognosis (predicting the likelihood of disease occurrence and its progression from medical images and other clinical data), to name only a few.
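As an illustration of the layer structure just described, the following is a minimal LeNet-style sketch in PyTorch (the chapter does not prescribe any particular framework). The input resolution, channel counts, and the two hypothetical output classes are assumptions made purely for the example, not a model from the book.

```python
# Minimal sketch of a CNN: convolutional layers extract local spatial features
# from an image, and fully connected layers turn those features into a prediction.
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers: filters (kernels) slide over the image,
        # preserving local features and spatial relationships.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layers: combine the learned features into class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One batch of four hypothetical 64x64 grayscale scans (random data here).
scans = torch.randn(4, 1, 64, 64)
logits = TinyMedicalCNN()(scans)
print(logits.shape)  # torch.Size([4, 2]) -- one score per class, per image
```

The two convolution/pooling stages followed by fully connected layers mirror the LeNet-5 pattern mentioned above; modern architectures such as ResNet or DenseNet keep the same overall idea but stack many more convolutional layers with additional structural refinements.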

As a result of these advancements, we are today entering a new era in medicine in which risk assessment models can be implemented in clinical practice to improve diagnostic accuracy and operational efficiency. Kaul et al. coined the acronym “AIM,” which stands for “Artificial Intelligence in Medicine,” in a 2020 paper on gastrointestinal endoscopy titled “History of artificial intelligence in medicine” (Kaul et al. 2020), an eloquent sign of the emergence of a new strand of computational science applied to the life sciences. According to Kaul and coauthors, the critical advances came in the last two decades, although AIM has undergone significant change during the last five decades. Watson, an open-domain question–answering system developed by IBM in 2007, competed against human contestants on the...
