Evolutionary Large-Scale Multi-Objective Optimization and Applications -  Ran Cheng,  Yaochu Jin,  Ye Tian,  Xingyi Zhang

Evolutionary Large-Scale Multi-Objective Optimization and Applications (eBook)

eBook Download: EPUB
2024 | 1st edition
352 pages
Wiley (publisher)
978-1-394-17843-8 (ISBN)
€96.99 incl. VAT

Tackle the most challenging problems in science and engineering with these cutting-edge algorithms

Multi-objective optimization problems (MOPs) are those in which more than one objective needs to be optimized simultaneously. As a ubiquitous component of research and engineering projects, these problems are notoriously challenging. In recent years, evolutionary algorithms (EAs) have shown significant promise in their ability to solve MOPs, but challenges remain at the level of large-scale multi-objective optimization problems (LSMOPs), where the number of variables increases and the optimized solution is correspondingly harder to reach.

Evolutionary Large-Scale Multi-Objective Optimization and Applications constitutes a systematic overview of EAs and their capacity to tackle LSMOPs. It offers an introduction to both the problem class and the algorithms before delving into some of the cutting-edge algorithms which have been specifically adapted to solving LSMOPs. Deeply engaged with specific applications and alert to the latest developments in the field, it's a must-read for students and researchers facing these famously complex but crucial optimization problems.

The book's readers will also find:

  • Analysis of multi-objective optimization problems in fields such as machine learning, network science, vehicle routing, and more
  • Discussion of benchmark problems and performance indicators for LSMOPs
  • Presentation of a new taxonomy of algorithms in the field

Evolutionary Large-Scale Multi-Objective Optimization and Applications is ideal for advanced students, researchers, and scientists and engineers facing complex optimization problems.

Xingyi Zhang, PhD, is a Professor in the School of Computer Science and Technology at Anhui University, Hefei, China. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, and a member of the editorial board for Complex and Intelligent Systems.

Ran Cheng, PhD, is an Associate Professor in the Department of Computer Science and Engineering at the Southern University of Science and Technology, China. He is an Associate Editor for the IEEE Transactions on Evolutionary Computation, IEEE Transactions on Artificial Intelligence, IEEE Transactions on Emerging Topics in Computational Intelligence, IEEE Transactions on Cognitive and Developmental Systems, and ACM Transactions on Evolutionary Learning and Optimization.

Ye Tian, PhD, is an Associate Professor in the School of Computer Science and Technology at Anhui University, Hefei, China. He also serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation.

Yaochu Jin, PhD, is a Chair Professor of Artificial Intelligence, Head of the Trustworthy and General Artificial Intelligence Laboratory, Westlake University, China. He was an Alexander von Humboldt Professor of Artificial Intelligence at the Bielefeld University, Germany, and Distinguished Chair in Computational Intelligence at the University of Surrey, United Kingdom.



1
Multi‐Objective Evolutionary Algorithms and Evolutionary Large‐Scale Optimization


1.1 Introduction


The word optimization initially appeared in the mid‐19th century, meaning “to make the best or most of.” Today, optimization has deeply penetrated our thoughts and actions: engineers aim to achieve the best performance of their designs; businessmen seek to maximize the profits of their deals; manufacturers expect the highest efficiency of their production processes; investors try to minimize the risk of an investment while maximizing its rate of return; and so on. With sophisticated formulation and modeling, such optimization‐driven targets become optimization problems.

With problems always come solvers, known as algorithms in the context of optimization. In the early days, conventional optimization algorithms were often gradient based, e.g., Newton's method and hill climbing, assuming that the problems to be solved were convex (or at least differentiable) – and indeed, problems were supposed to be well defined by mathematicians. Nonetheless, the information age, commencing in the mid‐20th century, has modernized science, engineering, and industry – and optimization as well. While enjoying the benefits of modernization, we face emerging challenges: optimization problems are increasingly complex, far beyond the scope of conventional optimization algorithms.

Figure 1.1 The generic flowchart of an Evolutionary Algorithm (EA).

Scientists have always looked to nature for answers to research questions, and solving optimization problems is no exception. Nature‐inspired optimization algorithms are a family of modern algorithms for global optimization. Facing the challenging features of complex optimization problems (e.g., nonlinearity, non‐differentiability, or multi‐modality), nature‐inspired optimization algorithms benefit from a unique advantage of robustness – treating optimization problems as black boxes. In particular, inspired by the mechanisms of biological evolution, evolutionary algorithms (EAs) [1] are among the most representative members of the family. Generally speaking, EAs encompass genetic algorithms (GAs), evolution strategies, evolutionary programming, and genetic programming.

As illustrated in Fig. 1.1, the generic flowchart of an EA comprises five main components: initialization, mating selection, variation, environmental selection, and termination. Normally, an EA starts with a randomly initialized population of candidate solutions. Then, the population is iteratively updated via the main loop consisting of mating selection, variation, and environmental selection. Specifically, mating selection picks pairs of (parent) candidate solutions for generating new (offspring) candidate solutions via variation, and environmental selection decides which candidate solutions survive to the next iteration (generation). Eventually, when a certain termination condition (e.g., the maximum number of iterations) is satisfied, the EA terminates and outputs the candidate solutions in the final population as the approximate optimal solution(s) to the optimization problem being solved.
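The main loop described above can be sketched in a few lines of code. The following minimal single‐objective EA is an illustrative assumption, not an algorithm from the book: the sphere objective, the uniform‐crossover/Gaussian‐mutation variation operators, and all parameter values are chosen only to make the five components concrete.

```python
import random

def evolve(fitness, dim, pop_size=30, generations=200,
           sigma=0.1, lower=-1.0, upper=1.0, seed=0):
    """A minimal single-objective EA following the five components:
    initialization, mating selection, variation, environmental
    selection, and termination."""
    rng = random.Random(seed)
    # Initialization: a random population of candidate solutions.
    pop = [[rng.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):  # termination: a fixed iteration budget
        offspring = []
        for _ in range(pop_size):
            # Mating selection: randomly pick a pair of parents.
            p1, p2 = rng.sample(pop, 2)
            # Variation: uniform crossover followed by Gaussian mutation,
            # clamped back into the feasible box.
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(max(x + rng.gauss(0.0, sigma), lower), upper)
                     for x in child]
            offspring.append(child)
        # Environmental selection: the best pop_size individuals survive.
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

# Minimize the sphere function sum(x_i^2) over a 5-dimensional box.
best = evolve(lambda x: sum(v * v for v in x), dim=5)
```

Because environmental selection here is elitist (parents and offspring compete together), the best fitness in the population never worsens from one generation to the next.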

During the past three decades, EAs have achieved remarkable success in solving complex optimization problems; recently, however, they have been facing a new challenge brought by the surging development of the information sciences – scalability. Like many other computer algorithms, nature‐inspired optimization algorithms suffer from the curse of dimensionality, i.e., as the scale of the optimization problem increases, the performance of the algorithms substantially deteriorates. We are now stepping into a new adventure – scaling up EAs for solving large‐scale optimization problems.

Before moving toward the main course of large‐scale optimization problems, we might start with an appetizer – general optimization problems. Generally, an optimization problem consists of three essential components: objective(s), decision variable(s), and constraint(s). An objective depicts the performance of the system under study; a decision variable is a characteristic of the system correlated with the objective(s); the spaces in which the objectives and decision variables are defined are known as the objective space and the decision space, respectively; and a constraint specifies limitation(s) in either the objective space or the decision space. Correspondingly, an optimization problem can be mathematically formulated as:

    minimize   F(x) = (f_1(x), f_2(x), …, f_m(x)),
    subject to x ∈ Ω,                                  (1.1)

where Ω ⊆ R^n is the decision space and x = (x_1, x_2, …, x_n) ∈ Ω is the decision vector; R^m is the objective space and F(x) is the objective vector. The decision vector is composed of n decision variables, while the objective vector is composed of m objective functions that map from Ω to R^m.
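As a concrete toy instance of such a formulation, the following sketch defines a hypothetical bi‐objective function (m = 2) over an n‐dimensional decision vector; the two objectives (distance to the origin and distance to the point (1, …, 1)) are chosen purely for illustration and conflict by construction, so no single x minimizes both.

```python
def F(x):
    """Map a decision vector x in R^n to an objective vector in R^2.
    f1 rewards staying near the origin; f2 rewards staying near
    (1, ..., 1) - the two objectives pull in opposite directions."""
    f1 = sum(v * v for v in x)
    f2 = sum((v - 1.0) ** 2 for v in x)
    return (f1, f2)
```

For example, F([0.0, 0.0]) evaluates to (0.0, 2.0) and F([1.0, 1.0]) to (2.0, 0.0): each extreme is optimal for one objective and worst for the other.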

Considering the number of objectives, optimization problems can be intrinsically categorized into two types: single‐objective optimization problems (SOPs) (i.e., m = 1) and multi‐objective optimization problems (MOPs) (i.e., m ≥ 2). For an SOP, the optimal solution is often a single decision vector minimizing (or maximizing) the objective. For an MOP, since the multiple objectives often conflict with each other, it is impossible to find any single solution minimizing (or maximizing) all objectives simultaneously; instead, the optimum of an MOP is a set of solutions trading off between the different objectives, known as the Pareto optimal solutions. In particular, an MOP with more than three objectives is also known as a many‐objective optimization problem (MaOP) due to its particular challenges [2].

Figure 1.2 This figure illustrates the classification of large‐scale optimization problems involved in this chapter. Generally, the problems can have either a large number of decision variables or many objectives, which can be further categorized into four classes: single‐objective problems with a large number of decision variables (LSSOPs), multi‐objective optimization problems with a large number of decision variables (LSMOPs), many‐objective optimization problems (MaOPs), and many‐objective optimization problems with a large number of decision variables (LSMaOPs).

As illustrated in Fig. 1.2, optimization problems can be large scale in terms of either objectives or decision variables, intrinsically comprising four types: (i) SOPs with a large number of decision variables (LSSOPs), (ii) MOPs with a large number of decision variables (LSMOPs), (iii) MaOPs with a normal scale of decision variables, and (iv) MaOPs with a large number of decision variables (LSMaOPs).

For SOPs, the optimization target is quite straightforward – searching for the global optimal solution x* which minimizes the objective function f(x). By contrast, for MOPs/MaOPs, due to the possible conflicts between different objectives, no single solution is able to minimize all objectives simultaneously. As a consequence, there exists a set of optimal solutions trading off between the different objectives, characterized by Pareto optimality – a situation where no objective can be improved without degrading at least one other objective. In other words, the optimality of MOPs/MaOPs is defined upon the pairwise relationship (the dominance relationship) among candidate solutions, considering the trade‐offs between the optimization objectives. To be specific, assuming minimization, a solution x is said to dominate another solution y (denoted as x ≺ y) iff

    f_i(x) ≤ f_i(y) for all i ∈ {1, …, m}, and
    f_j(x) < f_j(y) for at least one j ∈ {1, …, m}.

If a solution x* cannot be dominated by any other feasible solution, x* is said to be Pareto optimal; the set of all Pareto optimal solutions x* is called the Pareto set (PS), while the set of their images F(x*) is termed the Pareto front (PF). An illustrative example of Pareto optimality is given in Fig. 1.3.
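The dominance relationship translates directly into code. The following predicate (an illustrative sketch, assuming minimization and objective vectors of equal length) checks the two conditions – no worse in every objective, strictly better in at least one:

```python
def dominates(fx, fy):
    """True iff objective vector fx Pareto-dominates fy (minimization):
    fx is no worse than fy in every objective and strictly better
    in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

dominates((1.0, 2.0), (2.0, 3.0))  # True: better in both objectives
dominates((1.0, 3.0), (2.0, 2.0))  # False: the two are incomparable
dominates((1.0, 2.0), (1.0, 2.0))  # False: equal vectors do not dominate
```

Note that two solutions can be mutually non‐dominated (each better in a different objective), which is exactly why the optimum of an MOP is a set rather than a single point.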

Figure 1.3 An illustrative example of Pareto optimality. The images of a number of candidate solutions are spread in the objective space: the solutions lying on the PF, which are non‐dominated with respect to each other, are among the Pareto optimal solutions, while each remaining solution is dominated by at least one of them.

For most continuous MOPs/MaOPs, the PS (PF) often consists of an infinite number of Pareto optimal solutions. Therefore, in practice, only a representative subset of the PS (PF) can be approximated using a limited number of non‐dominated solutions, which are often the candidate solutions not dominated by any other in the final solution set obtained by an algorithm.
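Extracting such a representative non‐dominated subset from a final solution set is a simple filter over the dominance relationship. The following sketch (an illustrative assumption; a naive O(N²) scan, not an algorithm from the book) keeps exactly those objective vectors not dominated by any other member:

```python
def non_dominated(objs):
    """Return the members of objs (objective vectors, minimization)
    that are not dominated by any other member - the non-dominated
    set used to approximate the PF."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in objs if not any(dominates(q, p) for q in objs)]

# (3, 3) is dominated by (2, 2); the other three trade off against
# each other and survive.
front = non_dominated([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)])
```

Production MOEA implementations use faster non‐dominated sorting procedures, but the result on the surviving set is the same.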

In this chapter, we mainly focus on problems of type (ii) and type (iv), i.e., LSMOPs and LSMaOPs. Since MaOPs are essentially a special case of MOPs, for simplicity, we unify the abbreviations of both LSMOPs and LSMaOPs as LSMOPs. Specifically, we refer to research adopting EAs to solve large‐scale optimization problems as evolutionary large‐scale optimization. The rest of this chapter is organized as follows: Section 1.2 briefly introduces multi‐objective evolutionary algorithms (MOEAs), Section...

Publication date (per publisher): 22.7.2024
Language: English
ISBN-10 1-394-17843-3 / 1394178433
ISBN-13 978-1-394-17843-8 / 9781394178438