For the 8 cases of results in Table 3.12, the Friedman
test is utilized to detect statistical differences among the results, and Holm
post-hoc analysis is also applied to find out which algorithms are distinctive
among the comparisons performed. The null hypothesis is that all algorithms
perform similarly at the significance level of 0.05; the alternative hypothesis
is that the performances of at least two of them are not the same, i.e., that
one or more of the algorithms come from a different distribution. The Friedman
chi-squared statistic is 160.263 with 7 degrees of freedom, and the
corresponding p-value is 2.824E-31, which leads to rejecting the null
hypothesis. Besides, Table 3.13 shows the average ranking of the 8
algorithms. Generally, the smaller the rank, the better the performance
of the algorithm.
Table 3.13: The average ranking of 8 algorithms using Friedman test
Algorithm Average rank
1 PCS-Cluster(DBSCAN) 2.4022
2 PCS-LPCA 2.7283
3 PCS-Cluster(k-means) 3.3587
4 θ-DEA/L-PCA 3.8370
5 PCSEA-based 5.1196
6 RVEA*/L-PCA 5.3913
7 NSGA-III/L-PCA 6.1087
8 GrEA/L-PCA 7.0543
The p-value is below the critical threshold of 0.05;
hence, a post-hoc pairwise multiple comparison test is conducted to discern
which pairs have significant differences. The thesis uses the Conover
method, and the p-values are adjusted according to the family-wise error
rate (FWER) procedure of Holm.
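As an illustration of this pipeline (a minimal sketch, not the thesis code), the following Python fragment assumes the per-instance results are arranged as a 46 × 8 array (rows: problem instances; columns: algorithms) and that the third-party scikit-posthocs package provides the Conover post-hoc test:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata
import scikit_posthocs as sp  # third-party package with Conover post-hoc tests

rng = np.random.default_rng(0)
results = rng.random((46, 8))  # placeholder data: rows = instances, columns = algorithms

# Friedman omnibus test: H0 = all 8 algorithms perform the same.
stat, p = friedmanchisquare(*results.T)
print(f"Friedman chi-squared = {stat:.3f}, p-value = {p:.3e}")

# Per-instance ranks (1 = best, i.e., smallest metric value), averaged over
# instances, as in Table 3.13.
print("average ranks:", rankdata(results, axis=1).mean(axis=0))

# Conover pairwise post-hoc p-values with Holm adjustment, as in Table 3.14.
print(sp.posthoc_conover_friedman(results, p_adjust="holm"))
```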
Table 3.14 reports the post-hoc p-values of all possible pairs of algorithms.
The table is compactly represented as a lower triangular matrix. Each
numerical entry is the p-value for the row/column pair, i.e., for the null
hypothesis that the algorithm of that column performs the same as the
algorithm of that row.
Table 3.14: Conover p-values, further adjusted by the Holm FWER method
(lower triangular matrix; columns, left to right: PCSEA-based, GrEA/L-PCA, NSGA-III/L-PCA, RVEA*/L-PCA, θ-DEA/L-PCA, PCS-LPCA, PCS-Cluster(k-means))
GrEA/L-PCA | 4.37E-28
NSGA-III/L-PCA | 5.28E-09 | 1.94E-08
RVEA*/L-PCA | 8.10E-02 | 6.23E-22 | 2.78E-05
θ-DEA/L-PCA | 3.99E-14 | 1.92E-59 | 3.90E-36 | 1.35E-19
PCS-LPCA | 4.73E-39 | 1.36E-85 | 2.04E-63 | 9.01E-46 | 5.73E-11
PCS-Cluster(k-means) | 4.29E-24 | 5.34E-71 | 6.80E-48 | 2.25E-30 | 6.73E-03 | 2.46E-04
PCS-Cluster(DBSCAN) | 4.20E-47 | 8.38E-93 | 3.06E-71 | 8.03E-54 | 4.05E-17 | 7.29E-02 | 1.53E-08
From Table 3.14, if a p-value is at or below the critical threshold
of 0.05, the two algorithms corresponding to that cell are significantly
different; otherwise, there is no significant discrepancy between the
results of those two algorithms. The table indicates that most pairs of
algorithms are significantly different in the statistical sense, except for
two pairs: PCSEA-based versus RVEA*/L-PCA, and PCS-Cluster(DBSCAN) versus
PCS-LPCA.
According to Table 3.14 and Table 3.13, it can be concluded that the
two algorithms PCS-Cluster(DBSCAN) and PCS-LPCA give the best results,
with no statistically significant difference between them.
Besides considering the reduced objective set as in Table 3.12, Table 3.15
examines the quality of the solution set obtained by PCSEA and its reduction
using PCS-Cluster objective reduction. The 4th and 7th columns depict the GD
and IGD values of the solution set generated by PCSEA. The objectives of this
set are then reduced by PCS-Cluster objective reduction. The resulting GD
values are shown in the 5th and 6th columns of the table for the two kinds of
clustering algorithms, namely k-means and DBSCAN; similarly, the 8th and 9th
columns show the IGD values of the solution set after applying the algorithms.
Table 3.15: The means and standard deviations of GD and IGD of the Pareto sets generated by PCSEA and the equivalent ones after PCS-Cluster objective reduction

Problem | I | M | GD | GD reduction (k-means) | GD reduction (DBSCAN) | IGD | IGD reduction (k-means) | IGD reduction (DBSCAN)
DTLZ5 | 5 | 10 | 1.8560E-02 ± 5.414E-04 | 1.9590E-04 ± 8.091E-08 | 1.9590E-04 ± 8.091E-08 | 5.3196E-01 ± 6.930E-04 | 9.6453E-02 ± 4.950E-04 | 9.6453E-02 ± 4.950E-04
DTLZ5 | 5 | 20 | 1.3141E-02 ± 2.424E-04 | 2.0358E-06 ± 1.294E-11 | 2.0358E-06 ± 1.294E-11 | 5.3473E-01 ± 2.338E-04 | 3.1976E-03 ± 3.491E-07 | 3.1976E-03 ± 3.491E-07
DTLZ5 | 5 | 30 | 1.9509E-02 ± 3.738E-04 | 3.3553E-07 ± 3.514E-14 | 3.3553E-07 ± 3.514E-14 | 5.0159E-01 ± 3.723E-04 | 7.7209E-05 ± 4.191E-10 | 7.7209E-05 ± 4.191E-10
DTLZ5 | 5 | 40 | 1.9132E-02 ± 2.489E-04 | 1.4233E-08 ± 4.416E-17 | 1.4233E-08 ± 4.416E-17 | 4.9583E-01 ± 4.937E-04 | 2.1558E-06 ± 3.851E-13 | 2.1558E-06 ± 3.851E-13
DTLZ5 | 5 | 50 | 1.3947E-02 ± 3.850E-04 | 3.5209E-10 ± 5.611E-20 | 3.5209E-10 ± 5.611E-20 | 4.9528E-01 ± 4.338E-04 | 7.0147E-08 ± 5.430E-16 | 7.0147E-08 ± 5.430E-16
DTLZ5 | 5 | 60 | 1.5540E-02 ± 2.306E-04 | 1.3535E-11 ± 3.279E-23 | 1.3535E-11 ± 3.279E-23 | 4.9105E-01 ± 2.394E-04 | 2.2627E-09 ± 2.981E-19 | 2.2627E-09 ± 2.981E-19
DTLZ5 | 5 | 70 | 1.5987E+00 ± 2.229E+01 | 4.4342E-13 ± 6.708E-26 | 4.4342E-13 ± 6.708E-26 | 4.9150E-01 ± 4.274E-04 | 6.7522E-11 ± 3.462E-22 | 6.7522E-11 ± 3.462E-22
DTLZ5 | 5 | 80 | 2.5527E+00 ± 3.225E+01 | 1.1147E-14 ± 6.174E-29 | 1.1147E-14 ± 6.174E-29 | 5.0591E-01 ± 1.126E-03 | 2.3682E-12 ± 4.213E-25 | 2.3682E-12 ± 4.213E-25
DTLZ5 | 5 | 90 | 4.6240E+00 ± 4.968E+01 | 2.7229E-16 ± 8.073E-32 | 2.7229E-16 ± 8.073E-32 | 5.1762E-01 ± 1.074E-03 | 7.6175E-14 ± 4.243E-28 | 7.6175E-14 ± 4.243E-28
DTLZ5 | 5 | 100 | 6.1788E+00 ± 5.693E+01 | 1.2722E-17 ± 7.631E-35 | 1.2722E-17 ± 7.631E-35 | 5.0527E-01 ± 8.938E-04 | 2.2514E-15 ± 4.108E-31 | 2.2514E-15 ± 4.108E-31
DTLZ5 | 10 | 20 | 1.9168E-01 ± 7.352E-03 | 2.7268E-04 ± 3.253E-07 | 2.7264E-04 ± 3.253E-07 | 6.8101E-01 ± 6.181E-04 | 6.0656E-02 ± 9.914E-05 | 6.0056E-02 ± 1.086E-04
DTLZ5 | 10 | 30 | 1.6408E-01 ± 4.987E-03 | 1.2123E-05 ± 1.246E-10 | 1.1963E-05 ± 1.261E-10 | 6.6672E-01 ± 5.726E-04 | 1.4205E-03 ± 2.638E-07 | 1.3809E-03 ± 2.333E-07
DTLZ5 | 10 | 40 | 1.8570E-01 ± 6.262E-03 | 4.8331E-07 ± 2.396E-13 | 4.8331E-07 ± 2.396E-13 | 6.6427E-01 ± 2.607E-04 | 4.5941E-05 ± 1.432E-10 | 4.5941E-05 ± 1.432E-10
DTLZ5 | 10 | 50 | 2.3348E-01 ± 1.137E-02 | 1.2627E-08 ± 1.132E-16 | 1.2627E-08 ± 1.132E-16 | 6.6860E-01 ± 8.880E-04 | 1.4863E-06 ± 4.831E-13 | 1.4863E-06 ± 4.831E-13
DTLZ5 | 10 | 60 | 2.0770E-01 ± 1.709E-02 | 7.0177E-10 ± 3.543E-19 | 7.0038E-10 ± 3.559E-19 | 6.6548E-01 ± 6.293E-04 | 4.5127E-08 ± 1.444E-16 | 4.4404E-08 ± 1.177E-16
DTLZ5 | 10 | 70 | 2.5145E-01 ± 1.060E-01 | 1.8807E-11 ± 4.219E-22 | 1.8765E-11 ± 4.232E-22 | 6.6408E-01 ± 3.107E-04 | 1.3262E-09 ± 3.754E-19 | 1.2937E-09 ± 2.758E-19
DTLZ5 | 10 | 80 | 2.7152E-01 ± 5.467E-02 | 4.5692E-13 ± 1.642E-25 | 4.4614E-13 ± 1.670E-25 | 6.7308E-01 ± 1.656E-03 | 4.2779E-11 ± 3.692E-22 | 4.0647E-11 ± 3.440E-22
DTLZ5 | 10 | 90 | 4.9918E-01 ± 3.815E-01 | 1.3169E-14 ± 1.215E-28 | 1.3169E-14 ± 1.215E-28 | 6.6170E-01 ± 6.496E-04 | 1.3318E-12 ± 2.954E-25 | 1.2968E-12 ± 1.729E-25
DTLZ5 | 10 | 100 | 6.2946E-01 ± 5.507E-01 | 4.6572E-16 ± 1.720E-31 | 4.6572E-16 ± 1.720E-31 | 6.6624E-01 ± 7.366E-04 | 3.9188E-14 ± 2.059E-28 | 3.9188E-14 ± 2.059E-28
DTLZ5 | 15 | 20 | 3.0474E-01 ± 1.814E-02 | 1.3061E-02 ± 6.330E-05 | 1.1716E-02 ± 6.859E-05 | 7.0498E-01 ± 1.160E-02 | 4.4658E-01 ± 3.940E-03 | 4.1599E-01 ± 6.379E-03
DTLZ5 | 15 | 30 | 3.0351E-01 ± 2.677E-02 | 7.6061E-04 ± 7.907E-07 | 7.3230E-04 ± 7.611E-07 | 6.8628E-01 ± 7.850E-03 | 1.9488E-02 ± 1.221E-04 | 1.7017E-02 ± 8.957E-05
DTLZ5 | 15 | 40 | 3.1979E-01 ± 3.836E-02 | 2.3313E-05 ± 6.840E-10 | 2.3081E-05 ± 6.921E-10 | 7.1221E-01 ± 1.100E-02 | 5.6781E-04 ± 1.083E-07 | 4.5852E-04 ± 4.803E-08
DTLZ5 | 15 | 50 | 3.6863E-01 ± 8.192E-02 | 6.6962E-07 ± 4.528E-13 | 6.4758E-07 ± 4.183E-13 | 6.9280E-01 ± 4.974E-03 | 2.0521E-05 ± 2.176E-10 | 1.6630E-05 ± 1.061E-10
DTLZ5 | 15 | 60 | 3.8035E-01 ± 7.028E-02 | 2.7480E-08 ± 7.332E-16 | 2.7405E-08 ± 7.371E-16 | 6.8637E-01 ± 7.483E-03 | 4.9298E-07 ± 1.244E-13 | 3.8773E-07 ± 5.642E-14
DTLZ5 | 15 | 70 | 3.8602E-01 ± 9.214E-02 | 6.8363E-10 ± 9.642E-19 | 6.1341E-10 ± 7.724E-19 | 7.0413E-01 ± 8.577E-03 | 2.1164E-08 ± 1.580E-16 | 1.7155E-08 ± 1.390E-16
DTLZ5 | 15 | 80 | 3.8357E-01 ± 3.383E-02 | 2.3026E-11 ± 6.567E-22 | 1.9643E-11 ± 3.490E-22 | 6.8140E-01 ± 7.344E-03 | 5.7157E-10 ± 1.141E-19 | 4.6463E-10 ± 6.351E-20
DTLZ5 | 15 | 90 | 4.4234E-01 ± 7.329E-02 | 6.2786E-13 ± 5.627E-25 | 5.6396E-13 ± 4.451E-25 | 6.9026E-01 ± 5.404E-03 | 1.9831E-11 ± 1.903E-22 | 1.4949E-11 ± 1.008E-22
DTLZ5 | 15 | 100 | 4.5147E-01 ± 2.550E-01 | 2.8299E-14 ± 8.292E-28 | 2.8088E-14 ± 8.381E-28 | 6.8443E-01 ± 6.677E-03 | 5.0601E-13 ± 9.285E-26 | 3.8647E-13 ± 3.361E-26
DTLZ5 | 20 | 30 | 4.0598E-01 ± 6.228E-02 | 9.7725E-03 ± 7.203E-05 | 8.9001E-03 ± 6.733E-05 | 4.7263E-01 ± 7.223E-02 | 2.0646E-01 ± 7.837E-03 | 1.4625E-01 ± 7.430E-03
DTLZ5 | 20 | 40 | 4.6156E-01 ± 7.747E-02 | 1.1518E-03 ± 3.320E-06 | 1.1467E-03 ± 3.331E-06 | 5.6108E-01 ± 7.733E-02 | 1.0899E-02 ± 6.310E-05 | 7.2010E-03 ± 5.381E-05
DTLZ5 | 20 | 50 | 5.6642E-01 ± 2.163E-01 | 4.0280E-05 ± 3.006E-09 | 3.7731E-05 ± 2.896E-09 | 5.4540E-01 ± 8.315E-02 | 3.4577E-04 ± 7.824E-08 | 2.3787E-04 ± 6.111E-08
DTLZ5 | 20 | 60 | 4.9068E-01 ± 5.375E-02 | 1.5426E-06 ± 4.039E-12 | 1.5378E-06 ± 4.050E-12 | 5.2834E-01 ± 9.083E-02 | 1.0227E-05 ± 3.109E-11 | 6.8761E-06 ± 3.190E-11
DTLZ5 | 20 | 70 | 5.2761E-01 ± 9.245E-02 | 3.6772E-08 ± 3.066E-15 | 3.4499E-08 ± 2.869E-15 | 5.7402E-01 ± 8.121E-02 | 2.8977E-07 ± 4.492E-14 | 1.6719E-07 ± 2.739E-14
DTLZ5 | 20 | 80 | 5.9250E-01 ± 2.089E-01 | 7.7056E-10 ± 1.909E-18 | 7.3865E-10 ± 1.882E-18 | 5.9229E-01 ± 7.070E-02 | 1.1079E-08 ± 3.904E-17 | 5.4477E-09 ± 2.220E-17
DTLZ5 | 20 | 90 | 6.5945E-01 ± 2.253E-01 | 3.2079E-11 ± 2.768E-21 | 3.1249E-11 ± 2.510E-21 | 5.8227E-01 ± 8.707E-02 | 2.5150E-10 ± 5.363E-20 | 1.5113E-10 ± 3.055E-20
DTLZ5 | 20 | 100 | 7.0824E-01 ± 3.785E-01 | 9.2522E-13 ± 2.230E-24 | 8.9737E-13 ± 2.187E-24 | 5.9934E-01 ± 8.518E-02 | 8.3435E-12 ± 3.335E-23 | 4.9960E-12 ± 2.672E-23
WFG3 | - | 10 | 1.7677E-01 ± 7.350E-06 | 8.0472E-02 ± 1.278E-05 | 5.8913E-02 ± 2.989E-13 | 4.9527E+00 ± 6.114E-07 | 1.2160E+00 ± 5.717E-03 | 8.4134E-01 ± 1.233E-32
WFG3 | - | 20 | 2.5165E-01 ± 1.299E-04 | 8.6127E-02 ± 7.604E-05 | 6.3277E-02 ± 6.418E-06 | 9.7664E+00 ± 6.215E-02 | 1.2180E+00 ± 1.522E-02 | 8.9487E-01 ± 1.283E-03
WFG3 | - | 30 | 2.8616E-01 ± 1.317E-04 | 8.5221E-02 ± 9.367E-05 | 6.3929E-02 ± 1.858E-11 | 1.9350E+01 ± 2.554E-02 | 1.2052E+00 ± 1.874E-02 | 9.0407E-01 ± 3.081E-31
WFG3 | - | 40 | 3.1979E-01 ± 1.185E-04 | 8.9377E-02 ± 6.560E-05 | 6.4543E-02 ± 4.982E-12 | 3.0224E+01 ± 1.393E-02 | 1.2639E+00 ± 1.312E-02 | 9.1276E-01 ± 1.109E-31
WFG3 | - | 50 | 3.5109E-01 ± 8.657E-05 | 9.0248E-02 ± 7.652E-05 | 6.4931E-02 ± 2.557E-12 | 4.1343E+01 ± 1.236E-02 | 1.2763E+00 ± 1.531E-02 | 9.1826E-01 ± 0.000E+00
WFG3 | - | 60 | 3.8134E-01 ± 1.007E-04 | 8.7662E-02 ± 1.088E-04 | 6.5199E-02 ± 8.456E-13 | 5.2644E+01 ± 5.298E-03 | 1.2397E+00 ± 2.176E-02 | 9.2205E-01 ± 1.109E-31
WFG3 | - | 70 | 4.1060E-01 ± 8.103E-05 | 8.9494E-02 ± 4.507E-05 | 6.5395E-02 ± 5.389E-13 | 6.4041E+01 ± 5.572E-03 | 1.2656E+00 ± 9.011E-03 | 9.2483E-01 ± 0.000E+00
WFG3 | - | 80 | 4.3720E-01 ± 3.834E-05 | 8.7889E-02 ± 8.030E-05 | 6.5545E-02 ± 6.222E-14 | 7.5479E+01 ± 3.424E-03 | 1.2429E+00 ± 1.605E-02 | 9.2695E-01 ± 1.972E-31
WFG3 | - | 90 | 4.6301E-01 ± 4.399E-05 | 9.0591E-02 ± 5.729E-05 | 6.5663E-02 ± 1.712E-12 | 8.6934E+01 ± 2.572E-03 | 1.2812E+00 ± 1.146E-02 | 9.2862E-01 ± 1.109E-31
WFG3 | - | 100 | 4.8584E-01 ± 3.658E-05 | 8.9111E-02 ± 5.903E-05 | 6.5759E-02 ± 4.267E-13 | 9.8421E+01 ± 1.603E-03 | 1.2602E+00 ± 1.181E-02 | 9.2997E-01 ± 4.437E-31
The table unveils that the GD and IGD of the reduced solution set obtained by
objective reduction are better (smaller) than those of the set without
objective reduction in all 46 cases, for both the DTLZ5(I,M) and WFG3(M)
problems and for both kinds of clustering algorithms, namely k-means and
DBSCAN.
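For reference, one common definition of the two metrics can be sketched as follows (a minimal sketch; the thesis's exact formulas may differ, e.g., in normalization): GD averages the distance from each obtained solution to its nearest reference point, while IGD averages the distance from each reference point to its nearest obtained solution.

```python
import numpy as np

def gd(solutions: np.ndarray, reference_pf: np.ndarray) -> float:
    # Mean Euclidean distance from each obtained solution to its nearest
    # reference point (rows = points, columns = objectives).
    d = np.linalg.norm(solutions[:, None, :] - reference_pf[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def igd(solutions: np.ndarray, reference_pf: np.ndarray) -> float:
    # IGD is GD with the roles of the two sets swapped.
    return gd(reference_pf, solutions)
```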
3.2.3.4 Comparison of the partial objective reduction algorithms with the COR algorithm
This sub-section compares the performance of the three proposed algorithms:
a complete PF-based objective reduction algorithm, the COR algorithm, and two
partial PF-based ones, namely PCS-LPCA and PCS-Cluster.
The experiments are performed on 4 instances of the DTLZ5(I,M) problem [26].
The thesis uses CORc, a variant of the COR algorithm, for the comparison.
Five many- algorithms, namely GrEA, KnEA, NSGA-III, RVEA*, and θ-DEA, are
utilized to generate the non-dominated set for CORc.
Parameters are set as in section 2.2.2, section 3.1.2, and section 3.2.2
for COR, PCS-LPCA, and PCS-Cluster, respectively. All many- algorithms are
implemented on the PlatEMO platform [83], and the remaining parameters of the
many- algorithms are set accordingly. The population size is set to 200, and
the number of generations is set to 2000. The probabilities of crossover and
mutation are set to 0.9 and 0.1, respectively. The distribution index for
crossover is set to 5, and the distribution index for mutation is set to 20.
The results of the experiment are shown in Table 3.16 and drawn in
Figure 3.7. According to the table, PCS-LPCA and PCS-Cluster(DBSCAN) give the
best results, finding the correct objective set successfully 100 times out of
100 runs in total, while the CORc variant integrated with GrEA gives the worst
with 81 successes. In general, the results degrade when the number of
objectives (the value of M) reaches 20. Table 3.17 shows the average rank of
the algorithms over the instances, in which PCS-Cluster(DBSCAN)
Table 3.16: Comparison of the number of successes in finding the correct relevant objectives among PCS-LPCA, PCS-Cluster, and the CORc variants on the DTLZ5(I,M) instances, in 20 runs per instance

I | M | CORc(GrEA) | CORc(KnEA) | CORc(NSGA-III) | CORc(RVEA*) | CORc(θ-DEA) | PCS-LPCA | PCS-Cluster(k-means) | PCS-Cluster(DBSCAN)
3 | 5 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
5 | 10 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
7 | 10 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
5 | 20 | 1 | 11 | 14 | 13 | 17 | 20 | 6 | 20
Total | | 81 | 91 | 94 | 93 | 97 | 100 | 86 | 100
and PCS-LPCA occupy the top two rows of the table, while CORc (integrated
with GrEA) stands at the bottom.
Figure 3.7: A chart of the number of successes in solving the DTLZ5(I,M) problem (bar chart; x-axis: CORc(GrEA), CORc(KnEA), CORc(NSGA-III), CORc(RVEA*), CORc(θ-DEA), PCS-LPCA, PCS-Cluster(k-means), PCS-Cluster(DBSCAN); y-axis: number of successes, from 0 to 100)
Table 3.17: The average ranking of the 8 algorithms using the Friedman test
Algorithm Average rank
1 PCS-Cluster(DBSCAN) 3.90
2 PCS-LPCA 3.90
3 CORc(θ-DEA) 4.20
4 CORc(NSGA-III) 4.40
5 CORc(RVEA*) 4.60
6 CORc(KnEA) 4.80
7 PCS-Cluster(k-means) 5.00
8 CORc(GrEA) 5.20
Summary
This chapter has proposed two objective reduction algorithms: PCS-LPCA and
PCS-Cluster. In the first step, both use PCSEA to generate a non-dominated
solution set located only at the “corners” of the PF. In the second step,
PCS-LPCA uses L-PCA for reduction, while PCS-Cluster uses a clustering
algorithm, k-means or DBSCAN, to cluster the objectives of the non-dominated
solution set. The two steps are repeated until the algorithms cannot reduce
any more objectives. Experiments pinpoint that PCS-LPCA and PCS-Cluster
(using DBSCAN) give better results than the others.
Conclusion and future works
Conclusion
In this thesis, many- problems which have redundant objectives have been
addressed. To do that, approaches to solving many- problems have been
investigated and analyzed; the impacts of multi-/many- algorithms on
objective reduction have been pinpointed; and three ORAs have been proposed.
The thesis has given an overview of multi- problems together with common
approaches to solving them, focusing on multi- algorithms. It has also
discussed many- problems and the difficulties that multi- algorithms face
on them; numerous many- algorithms have appeared in the hope of eliminating
or reducing these difficulties.
However, in several many- problems, it remains unknown whether all
objectives are essential. An approach to eliminating the difficulties has
been proposed: determine which objectives of a problem are redundant and
then discard them. This approach is called objective reduction.
This thesis has investigated the interaction between objective dimensionality
reduction (ODR) and many- algorithms. The experiments were designed to
evaluate the impacts of many- algorithms on the performance of an ODR. Then,
the thesis examined the benefits which many- algorithms can achieve when
combined with an ODR to remove redundant objectives. The results have
unveiled that the performance of an ODR strongly depends on the multi-/many-
algorithm which generates the non-dominated set. When combined with many-
algorithms, an objective reduction can successfully remove redundant
objectives even as the number of original objectives grows larger. The
results also demonstrate that combining many- algorithms with an ODR to
remove redundant objectives can significantly improve their performance.
This thesis focused on L-PCA for performing objective reduction.
Three ORAs have been proposed to deal with redundant many- problems. Those
algorithms use multi-/many- algorithms to generate PFs, then use machine
learning algorithms to perform objective reduction.
(1) The COR algorithm utilizes multi-/many- algorithms to generate a complete
PF, then uses the PAM algorithm to determine which objectives are redundant
and which are essential. It considers objectives as objects (or points) in
space and partitions them into clusters, where the number of clusters is
determined using the silhouette index. It keeps one objective in each cluster
and discards the other ones. The proposed algorithm has been compared with
two other existing ones on different instances of the DTLZ5(I,M) problem.
The results unseal that the proposed algorithm is equivalent to or more
efficient than the existing ones.
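The silhouette-based choice of the number of clusters can be sketched as below (an assumed illustration, not the thesis code, with scikit-learn's KMeans standing in for PAM so that the fragment stays self-contained):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(points: np.ndarray, k_max: int) -> int:
    # Try k = 2..k_max and keep the k with the highest average silhouette index.
    scores = {}
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        scores[k] = silhouette_score(points, labels)
    return max(scores, key=scores.get)
```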
(2) The PCS-LPCA algorithm uses PCSEA to generate a partial PF, then utilizes
L-PCA to identify the relevant and redundant objectives. Experiments prove
that the proposed algorithm is better than the original ones. The thesis also
examined the effects of threshold values on PCSEA-based objective reduction,
indicating that PCSEA-based objective reduction is very efficient in solving
redundant problems with a small number of relevant objectives but becomes
inefficient as that number grows.
(3) The PCS-Cluster algorithm uses PCSEA to generate a partial PF and
clustering algorithms to identify the relevant and redundant objectives.
It takes advantage of PCSEA and the simplicity of the clustering algorithms
k-means and DBSCAN. Experiments with a wider range of DTLZ5(I,M) instances
have been designed, each run 30 times independently. The results prove that
the proposed algorithm is better than the original ones.
Future works
Currently, the ORAs use a linear measure of dependence, namely Pearson's rho,
to estimate the conflict among objectives. However, monotonic measures such
as Spearman's rank correlation or Kendall's tau, and non-linear measures like
the Hilbert-Schmidt independence criterion, have not been experimented with.
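For illustration, the named linear and rank measures are readily available in SciPy (the HSIC is not, and is therefore omitted from this sketch):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(1)
f1 = rng.random(100)
f2 = -f1**3 + 0.05 * rng.standard_normal(100)  # a monotonically conflicting objective

print("Pearson's rho :", pearsonr(f1, f2)[0])   # linear dependence
print("Spearman's rho:", spearmanr(f1, f2)[0])  # monotonic (rank) dependence
print("Kendall's tau :", kendalltau(f1, f2)[0]) # monotonic (rank) dependence
```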
There may be other methods able to automatically determine the number of
clusters; hence, future works may investigate them for the proposed
algorithms.
The results will further strengthen the links between evolutionary
computation and machine learning in applications. As the result of the
PCSEA-based ORA is not entirely independent of the order in which the
objectives are removed, future works can investigate the order in which
objectives are examined for retaining or discarding.
Further work will also study many- problems which contain objective
functions that are combinations of other ones.
Publications
Journal articles
[J1] HX Nguyen, LT Bui, and CT Tran. “Improving many objective optimisation
algorithms using objective dimensionality reduction”. In: Evolutionary Intelligence
ISSN 1864-5909. Springer, 2020, 13.3, pp. 365–380.
[J2] HX Nguyen, LT Bui, and CT Tran. “An improvement of clustering-based ob-
jective reduction method for many-objective optimization problems”. In: Journal
of Science and Technique - Le Quy Don Technical University ISSN-1859-0209 No.
202 (10-2019)-Section on Information and Communication Technology (ICT)- No.
14. 2019.
[J3] HX Nguyen, CT Tran, and LT Bui. “Improve performance of pareto corner
search-based objective reduction in many-objective optimization”. In: Evolutionary
Intelligence ISSN 1864-5909. Springer, 2022 Oct (accepted). doi: 10.1007/s12065-022-00787-y.
Conference papers
[C1] HX Nguyen, LT Bui, and CT Tran. “A Pareto Corner Search Evolutionary Algo-
rithm and Principal Component Analysis for Objective Dimensionality Reduction”.
In: 2019 11th International Conference on Knowledge and Systems Engineering
(KSE’19) (IEEE). Da Nang, Vietnam, Oct. 2019.
[C2] HX Nguyen, LT Bui, and CT Tran. “Clustering Method Using Pareto Corner
Search Evolutionary Algorithm for Objective Reduction in Many-Objective Opti-
mization Problems”. In: Proceedings of the Tenth International Symposium on In-
formation and Communication Technology. SoICT 2019. Hanoi, Ha Long Bay, Viet
Nam: ACM, 2019, pp. 78–84. isbn: 978-1-4503-7245-9.
Bibliography
[1] A Abraham, N Nedjah, and L de Macedo Mourelle. “Evolutionary computation:
from genetic algorithms to genetic programming”. In: Genetic systems program-
ming. Springer, 2006, pp. 1–20.
[2] SF Adra and PJ Fleming. “Diversity management in evolutionary many-objective
optimization”. In: IEEE Transactions on Evolutionary Computation. IEEE, 2011,
15.2, pp. 183–195.
[3] J Bader and E Zitzler. “HypE: An algorithm for fast hypervolume-based many-
objective optimization”. In: Evolutionary computation. MIT Press, 2011, 19.1,
pp. 45–76.
[4] S Bandyopadhyay and A Mukherjee. “An algorithm for many-objective opti-
mization with reduced objective computations: A study in differential evolu-
tion”. In: IEEE Transactions on Evolutionary Computation. IEEE, 2014, 19.3,
pp. 400–413.
[5] S Bechikh, M Elarbi, and LB Said. “Many-objective optimization using evo-
lutionary algorithms: a survey”. In: Recent Advances in Evolutionary Multi-
objective Optimization. Springer, 2017, pp. 105–137.
[6] K Bringmann et al. “Approximation-guided evolutionary multi-objective opti-
mization”. In: Twenty-Second International Joint Conference on Artificial In-
telligence. 2011.
[7] D Brockhoff and E Zitzler. “Are all objectives necessary? On dimensionality re-
duction in evolutionary multiobjective optimization”. In: Parallel Problem Solv-
ing from Nature-PPSN IX. Springer, 2006, pp. 533–542.
[8] D Brockhoff and E Zitzler. “Dimensionality reduction in multiobjective opti-
mization with (partial) dominance structure preservation: Generalized minimum
objective subset problems”. In: TIK Report. 2006, 247.
[9] D Brockhoff and E Zitzler. “Objective reduction in evolutionary multiobjec-
tive optimization: Theory and applications”. In: Evolutionary computation. MIT
Press, 2009, 17.2, pp. 135–166.
[10] D Brockhoff and E Zitzler. “Offline and online objective reduction in evolution-
ary multiobjective optimization based on objective conflicts”. In: TIK Report.
Citeseer, 2007, 269.
[11] D Brockhoff and E Zitzler. “On objective conflicts and objective reduction in
multiple criteria optimization”. In: TIK Report. 2006, 243.
[12] S Chand and M Wagner. “Evolutionary many-objective optimization: A quick-
start guide”. In: Surveys in Operations Research and Management Science. El-
sevier, 2015, 20.2, pp. 35–42.
[13] J Cheng, GG Yen, and G Zhang. “A many-objective evolutionary algorithm
with enhanced mating and environmental selections”. In: IEEE Transactions on
Evolutionary Computation. IEEE, 2015, 19.4, pp. 592–605.
[14] R Cheng et al. “A Reference Vector Guided Evolutionary Algorithm for Many-
Objective Optimization.” In: IEEE Trans. Evolutionary Computation. 2016,
20.5, pp. 773–791.
[15] Ym Cheung and F Gu. “Online objective reduction for many-objective optimiza-
tion problems”. In: Evolutionary Computation (CEC), 2014 IEEE Congress on.
IEEE. 2014, pp. 1165–1171.
[16] Ym Cheung, F Gu, and HL Liu. “Objective extraction for many-objective op-
timization problems: Algorithm and test problems”. In: IEEE Transactions on
Evolutionary Computation. IEEE, 2016, 20.5, pp. 755–772.
[17] K Chircop and D Zammit-Mangion. “On ε-constraint based methods for the
generation of Pareto frontiers”. In: Journal of Mechanics Engineering and Au-
tomation. 2013, 3.5, pp. 279–289.
[18] EK Chong and SH Zak. An introduction to optimization, Fourth Edition. Vol. 76.
John Wiley & Sons, 2013.
[19] CAC Coello. “Recent results and open problems in evolutionary multiobjective
optimization”. In: International Conference on Theory and Practice of Natural
Computing. Springer. 2017, pp. 3–21.
[20] CAC Coello and NC Cortés. “Solving multiobjective optimization problems us-
ing an artificial immune system”. In: Genetic Programming and Evolvable Ma-
chines. Springer, 2005, 6.2, pp. 163–190.
[21] CAC Coello, GB Lamont, DA Van Veldhuizen, et al. Evolutionary algorithms
for solving multi-objective problems. Vol. 5. Springer, 2007.
[22] K Deb. “Multi-objective optimisation using evolutionary algorithms: an intro-
duction”. In: Multi-objective evolutionary optimisation for product design and
manufacturing. Springer, 2011, pp. 3–34.
[23] K Deb and H Jain. “An evolutionary many-objective optimization algorithm us-
ing reference-point-based nondominated sorting approach, part I: Solving prob-
lems with box constraints.” In: IEEE Trans. Evolutionary Computation. 2014,
18.4, pp. 577–601.
[24] K Deb, M Mohan, and S Mishra. “Evaluating the ε-domination based multi-
objective evolutionary algorithm for a quick computation of Pareto-optimal so-
lutions”. In: Evolutionary computation. MIT Press, 2005, 13.4, pp. 501–525.
[25] K Deb and D Saxena. “Searching for Pareto-optimal solutions through dimen-
sionality reduction for certain large-dimensional multi-objective optimization
problems”. In: Proceedings of the World Congress on Computational Intelligence
(WCCI-2006). 2006, pp. 3352–3360.
[26] K Deb and DK Saxena. “On finding pareto-optimal solutions through dimen-
sionality reduction for certain large-dimensional multi-objective optimization
problems”. In: Kangal report. 2005, 2005011.
[27] K Deb et al. “A fast and elitist multiobjective genetic algorithm: NSGA-II”. In:
IEEE transactions on evolutionary computation. IEEE, 2002, 6.2, pp. 182–197.
[28] K Deb et al. “Scalable multi-objective optimization test problems”. In: Proceed-
ings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.
02TH8600). Vol. 1. IEEE. 2002, pp. 825–830.
[29] R Ding et al. “An objective reduction method based on advanced clustering
for many-objective optimization problems and its human-computer interaction
visualization of pareto front”. In: Computers & Electrical Engineering. Elsevier,
2021, 93, p. 107266.
[30] N Drechsler, R Drechsler, and B Becker. “Multi-objective optimisation based
on relation favour”. In: International conference on evolutionary multi-criterion
optimization. Springer. 2001, pp. 154–166.
[31] AE Eiben and JE Smith. Introduction to evolutionary computing, 2nd edition.
Springer, 2015.
[32] M Ester et al. “A density-based algorithm for discovering clusters in large spatial
databases with noise.” In: Kdd. Vol. 96. 34. 1996, pp. 226–231.
[33] M Farina and P Amato. “On the optimal solution definition for many-criteria
optimization problems”. In: 2002 annual meeting of the North American fuzzy in-
formation processing society proceedings. NAFIPS-FLINT 2002 (Cat. No. 02TH8622).
IEEE. 2002, pp. 233–238.
[34] JE Fieldsend. “Dynamic Visualisation of Many-Objective Populations”. In: The
Operational Research Society, 2016.
[35] T Gal and T Hanne. “Consequences of dropping nonessential objectives for the
application of MCDM methods”. In: European Journal of Operational Research.
Elsevier, 1999, 119.2, pp. 373–378.
[36] M Garza-Fabre, GT Pulido, and CAC Coello. “Ranking methods for many-
objective optimization”. In: Mexican international conference on artificial intel-
ligence. Springer. 2009, pp. 633–645.
[37] RH Gómez and CAC Coello. “MOMBI: A new metaheuristic for many-objective
optimization based on the R2 indicator”. In: 2013 IEEE Congress on Evolution-
ary Computation. IEEE. 2013, pp. 2488–2495.
[38] F Gu, HL Liu, and Ym Cheung. “A Fast Objective Reduction Algorithm Based
on Dominance Structure for Many Objective Optimization”. In: Asia-Pacific
Conference on Simulated Evolution and Learning. Springer. 2017, pp. 260–271.
[39] X Guo and X Wang. “A Novel Objective Grouping Evolutionary Algorithm for
Many-Objective Optimization Problems”. In: International Journal of Pattern
Recognition and Artificial Intelligence. World Scientific, 2020, 34.06, p. 2059018.
[40] X Guo, Y Wang, and X Wang. “An objective reduction algorithm using repre-
sentative Pareto solution search for many-objective optimization problems”. In:
Soft Computing. Springer, 2016, 20.12, pp. 4881–4895.
[41] X Guo, Y Wang, and X Wang. “Using objective clustering for solving many-
objective optimization problems”. In: Mathematical Problems in Engineering.
Hindawi, 2013, 2013.
[42] X Guo et al. “A new non-redundant objective set generation algorithm in many-
objective optimization problems”. In: 2015 IEEE Congress on Evolutionary Com-
putation (CEC). IEEE. 2015, pp. 2851–2858.
[43] X Guo et al. “A new objective reduction algorithm for many-objective problems:
employing mutual information and clustering algorithm”. In: Computational In-
telligence and Security (CIS), 2012 Eighth International Conference on. IEEE.
2012, pp. 11–16.
[44] R Gupta and SJ Nanda. “Objective Reduction in Many-Objective Optimization
with Social Spider Algorithm for Cloud Detection in Satellite Images”. In: 2021.
[45] Z He and GG Yen. “Many-objective evolutionary algorithm: Objective space
reduction and diversity improvement”. In: IEEE Transactions on Evolutionary
Computation. IEEE, 2015, 20.1, pp. 145–160.
[46] Z He and GG Yen. “Many-objective evolutionary algorithms based on coordi-
nated selection strategy”. In: IEEE Transactions on Evolutionary Computation.
IEEE, 2016, 21.2, pp. 220–233.
[47] S Huband et al. “A review of multiobjective test problems and a scalable test
problem toolkit”. In: IEEE Transactions on Evolutionary Computation. IEEE,
2006, 10.5, pp. 477–506.
[48] EJ Hughes. “Multiple single objective Pareto sampling”. In: Congress on Evolu-
tionary Computation 2003. 2003, pp. 2678–2684.
[49] H Ishibuchi, N Akedo, and Y Nojima. “Behavior of multiobjective evolutionary
algorithms on many-objective knapsack problems”. In: IEEE Transactions on
Evolutionary Computation. IEEE, 2014, 19.2, pp. 264–283.
[50] H Ishibuchi, N Tsukamoto, and Y Nojima. “Evolutionary many-objective opti-
mization: A short review”. In: Proceedings of the IEEE Congress on Evolutionary
Computation, CEC2008, June 1-6, 2008, Hong Kong. 2008, pp. 2419–2426.
[51] AL Jaimes, CAC Coello, and JEU Barrientos. “Online objective reduction to
deal with many-objective problems”. In: International Conference on Evolution-
ary Multi-Criterion Optimization. Springer. 2009, pp. 423–437.
[52] D Kalyanmoy et al. Multi objective optimization using evolutionary algorithms.
John Wiley and Sons, 2001.
[53] L Kaufman and P Rousseeuw. Clustering by means of medoids. North-Holland,
1987.
[54] DJ Ketchen and CL Shook. “The application of cluster analysis in strategic man-
agement research: an analysis and critique”. In: Strategic management journal.
Wiley Online Library, 1996, 17.6, pp. 441–458.
[55] S Kukkonen and J Lampinen. “Ranking-dominance and many-objective opti-
mization”. In: 2007 IEEE Congress on Evolutionary Computation. IEEE. 2007,
pp. 3983–3990.
[56] B Li et al. “Many-objective evolutionary algorithms: A survey”. In: ACM Com-
puting Surveys (CSUR). ACM, 2015, 48.1, p. 13.
[57] K Li et al. “Evolutionary many-objective optimization: A comparative study of
the state-of-the-art”. In: IEEE Access. IEEE, 2018, 6, pp. 26194–26214.
[58] M Li, S Yang, and X Liu. “Shift-based density estimation for Pareto-based algo-
rithms in many-objective optimization”. In: IEEE Transactions on Evolutionary
Computation. IEEE, 2014, 18.3, pp. 348–365.
[59] M Li, L Zhen, and X Yao. “How to read many-objective solution sets in parallel
coordinates [educational forum]”. In: IEEE Computational Intelligence Maga-
zine. IEEE, 2017, 12.4, pp. 88–100.
[60] Y Li, HL Liu, and ED Goodman. “Hyperplane-approximation-based method
for many-objective optimization problems with redundant objectives”. In: Evo-
lutionary computation. MIT Press One Rogers Street, Cambridge, MA 02142-
1209, USA journals-info . . ., 2019, 27.2, pp. 313–344.
[61] A López Jaimes, CA Coello Coello, and D Chakraborty. “Objective reduction
using a feature selection technique”. In: Proceedings of the 10th annual conference
on Genetic and Evolutionary Computation. ACM. 2008, pp. 673–680.
[62] N Luo, X Li, and Q Lin. “Objective reduction for many-objective optimization
problems using objective subspace extraction”. In: Soft Computing. Springer,
2018, 22.4, pp. 1159–1173.
[63] J MacQueen et al. “Some methods for classification and analysis of multivariate
observations”. In: Proceedings of the fifth Berkeley symposium on mathematical
statistics and probability. Vol. 1. 14. Oakland, CA, USA. 1967, pp. 281–297.
[64] SU Mane and M Narasinga Rao. “A Non-dominated Sorting based Evolution-
ary Algorithm for Many-objective Optimization Problems”. In: Scientia Iranica.
Sharif University of Technology, 2021.
[65] K Miettinen. Nonlinear Multiobjective Optimization, volume 12 of International
Series in Operations Research and Management Science. 1999.
[66] L Nguyen, LT Bui, and HA Abbass. “DMEA-II: the direction-based multi-
objective evolutionary algorithm-II”. In: Soft Computing. Springer, 2014, 18.11,
pp. 2119–2134.
[67] S Obayashi and D Sasaki. “Visualization and data mining of Pareto solutions
using self-organizing map”. In: International Conference on Evolutionary Multi-
Criterion Optimization. Springer. 2003, pp. 796–809.
[68] T Okabe, Y Jin, and B Sendhoff. “A critical survey of performance indices
for multi-objective optimisation”. In: Evolutionary Computation, 2003. CEC’03.
The 2003 Congress on. Vol. 2. IEEE. 2003, pp. 878–885.
[69] M Pal, S Saha, and S Bandyopadhyay. “DECOR: differential evolution using
clustering based objective reduction for many-objective optimization”. In: In-
formation Sciences. Elsevier, 2018, 423, pp. 200–218.
[70] A Pryke, S Mostaghim, and A Nazemi. “Heatmap visualization of population
based multi objective algorithms”. In: International Conference on Evolutionary
Multi-Criterion Optimization. Springer. 2007, pp. 361–375.
[71] RC Purshouse and PJ Fleming. “Conflict, harmony, and independence: Relation-
ships in evolutionary multi-criterion optimisation”. In: International Conference
on Evolutionary Multi-Criterion Optimization. Springer. 2003, pp. 16–30.
[72] SS Rao. Engineering Optimization: Theory and Practice, Copyright© 2009 by
John Wiley & Sons. 2009.
[73] A Ravindran, GV Reklaitis, and KM Ragsdell. Engineering optimization: meth-
ods and applications. John Wiley & Sons, 2006.
[74] N Riquelme, C Von Lücken, and B Baran. “Performance metrics in multi-
objective optimization”. In: Computing Conference (CLEI), 2015 Latin Ameri-
can. IEEE. 2015, pp. 1–11.
[75] PJ Rousseeuw. “Silhouettes: a graphical aid to the interpretation and valida-
tion of cluster analysis”. In: Journal of computational and applied mathematics.
Elsevier, 1987, 20, pp. 53–65.
[76] DK Saxena and K Deb. “Non-linear dimensionality reduction procedures for cer-
tain large-dimensional multi-objective optimization problems: Employing cor-
rentropy and a novel maximum variance unfolding”. In: International Conference
on Evolutionary Multi-Criterion Optimization. Springer. 2007, pp. 772–787.
[77] DK Saxena et al. “Objective reduction in many-objective optimization: Linear
and nonlinear algorithms”. In: IEEE Transactions on Evolutionary Computa-
tion. IEEE, 2013, 17.1, pp. 77–99.
[78] NH Siddique and H Adeli. Nature-Inspired Computing: Physics and Chemistry-
Based Algorithms. CRC Press, 2017.
[79] HK Singh, A Isaacs, and T Ray. “A Pareto corner search evolutionary algo-
rithm and dimensionality reduction in many-objective optimization problems”.
In: IEEE Transactions on Evolutionary Computation. IEEE, 2011, 15.4, pp. 539–
556.
[80] A Sinha et al. “Using objective reduction and interactive procedure to handle
many-objective optimization problems”. In: Applied Soft Computing. Elsevier,
2013, 13.1, pp. 415–427.
[81] COS Sorzano, J Vargas, and AP Montano. “A survey of dimensionality reduction
techniques”. In: arXiv preprint arXiv:1403.2877. 2014.
[82] W Sun and J Li. “A strengthened diversity indicator and reference vector-based
evolutionary algorithm for many-objective optimization”. In: Soft Computing.
Springer, 2021, 25.15, pp. 10257–10273.
[83] Y Tian et al. “PlatEMO: A MATLAB platform for evolutionary multi-objective
optimization [educational forum]”. In: IEEE Computational Intelligence Maga-
zine. IEEE, 2017, 12.4, pp. 73–87.
[84] L Van Der Maaten, E Postma, and J Van den Herik. “Dimensionality reduction:
a comparative review”. In: J Mach Learn Res. 2009, 10.66-71, p. 13.
[85] L Van der Maaten. “An introduction to dimensionality reduction using matlab”.
In: Report. Citeseer, 2007, 1201.07-07, p. 62.
[86] DAV Veldhuizen. “Multiobjective Evolutionary Algorithms: Classifications, Anal-
yses, and New Innovations.” In: Jan. 1999.
[87] T Wagner, N Beume, and B Naujoks. “Pareto-, aggregation-, and indicator-
based methods in many-objective optimization”. In: International conference on
evolutionary multi-criterion optimization. Springer. 2007, pp. 742–756.
[88] D Walker, J Fieldsend, and R Everson. “Visualising many-objective popula-
tions”. In: Proceedings of the 14th annual conference companion on Genetic and
evolutionary computation. 2012, pp. 451–458.
[89] H Wang, L Jiao, and X Yao. “Two_Arch2: An improved two-archive algorithm
for many-objective optimization”. In: IEEE Transactions on Evolutionary Com-
putation. IEEE, 2015, 19.4, pp. 524–541.
[90] H Wang and X Yao. “Objective reduction based on nonlinear correlation infor-
mation entropy”. In: Soft Computing. Springer, 2016, 20.6, pp. 2393–2407.
[91] J Wang. Geometric structure of high-dimensional data and dimensionality re-
duction. Vol. 5. Springer, 2012.
[92] D Xu and Y Tian. “A comprehensive survey of clustering algorithms”. In: Annals
of Data Science. Springer, 2015, 2.2, pp. 165–193.
[93] S Yang et al. “A grid-based evolutionary algorithm for many-objective optimiza-
tion”. In: IEEE Transactions on Evolutionary Computation. IEEE, 2013, 17.5,
pp. 721–736.
[94] XS Yang. Nature-inspired optimization algorithms. Academic Press, 2020.
[95] GG Yen and Z He. “Performance metric ensemble for multiobjective evolution-
ary algorithms”. In: IEEE Transactions on Evolutionary Computation. IEEE,
2013, 18.1, pp. 131–144.
[96] Y Yuan et al. “A new dominance relation-based evolutionary algorithm for
many-objective optimization”. In: IEEE Transactions on Evolutionary Compu-
tation. IEEE, 2016, 20.1, pp. 16–37.
[97] Y Yuan et al. “Objective reduction in many-objective optimization: evolutionary
multiobjective approaches and comprehensive analysis”. In: IEEE Transactions
on Evolutionary Computation. IEEE, 2017, 22.2, pp. 189–210.
[98] Q Zhang and H Li. “MOEA/D: A multiobjective evolutionary algorithm based
on decomposition”. In: IEEE Transactions on evolutionary computation. IEEE,
2007, 11.6, pp. 712–731.
[99] X Zhang, Y Tian, and Y Jin. “A knee point-driven evolutionary algorithm for
many-objective optimization”. In: IEEE Transactions on Evolutionary Compu-
tation. IEEE, 2015, 19.6, pp. 761–776.
[100] X Zhang et al. “A decision variable clustering-based evolutionary algorithm for
large-scale many-objective optimization”. In: IEEE Transactions on Evolution-
ary Computation. IEEE, 2016, 22.1, pp. 97–112.
[101] E Zitzler and S Künzli. “Indicator-based selection in multiobjective search”. In:
International Conference on Parallel Problem Solving from Nature. Springer.
2004, pp. 832–842.
[102] E Zitzler, M Laumanns, and L Thiele. “SPEA2: Improving the strength Pareto
evolutionary algorithm”. In: TIK-report. Eidgenössische Technische Hochschule
Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze
(TIK), 2001, 103.
[103] E Zitzler et al. “Performance assessment of multiobjective optimizers: An anal-
ysis and review”. In: IEEE Transactions on evolutionary computation. IEEE,
2003, 7.2, pp. 117–132.
[104] X Zou et al. “A new evolutionary algorithm for solving many-objective opti-
mization problems”. In: IEEE Transactions on Systems, Man, and Cybernetics,
Part B (Cybernetics). IEEE, 2008, 38.5, pp. 1402–1412.
Appendix A.
Representations of non-dominated
solutions
Three common ways to represent a non-dominated solution set are the star
coordinate system, the scatter-plot matrix, and parallel coordinates. While
parallel coordinates are presented in section 1.1.2.3, this section shows
the two other ways.
Star coordinate system A star coordinate system [52, 65], also called star
glyphs, represents multiple non-dominated solutions. For M objective
functions, a circle is partitioned into M equal arcs. Each radial line
connecting the end of an arc with the center of the circle represents the
axis for one objective function. On each line, the center of the circle
marks the minimum value of the objective (the ideal objective vector for
minimization problems) and the circumference marks the maximum objective
function value (the nadir objective vector for minimization problems).
Since the range of each objective function can be different, it is vital to
label both ends of each line; normalization of objectives is not necessary
here. Each solution can be marked on one such circle, and the points marked
on the lines can be joined with straight segments to form a polygon. Thus,
for N solutions, there will be N such circles. Visually, they convey the
convergence and diversity of the obtained solutions. Figure A.1 shows an
example of the star coordinate system.
Figure A.1: Star coordinates for 3 solutions with 3 objectives (each glyph shows the three axes f1, f2, and f3 with their minimum and maximum values labeled at the center and on the circumference)
Scatter-plot matrix The scatter-plot matrix is comprised of panels, each
representing one pair of objective functions. The dimension of the square
matrix equals the number of objective functions. Figure A.2 shows a typical
example of such a plot with M = 3 objective functions for the data in
Table A.1.
Table A.1: Exemplary data for representation (used in Figure A.2 and Figure A.3)
Solution f1 f2 f3
1 3.90 -1.12 13.0
2 3.70 -1.06 13.0
3 3.50 -1.01 13.0
4 3.30 -0.96 13.0
5 3.10 -0.91 13.0
6 2.75 -0.68 13.0
7 2.40 -0.65 13.0
8 2.05 -0.62 13.0
9 1.65 -0.59 13.2
10 1.30 -0.56 13.7
11 1.05 -0.56 14.6
12 0.80 -0.62 15.5
13 0.65 -0.76 17.0
14 0.55 -0.89 19.0
15 0.50 -1.03 21.5
16 0.45 -1.16 23.5
17 0.40 -1.30 25.5
18 0.35 -1.43 27.5
19 0.30 -1.56 29.5
Figure A.2: Scatter-plot matrix for the data in Table A.1

As can be seen in Figure A.2, each pair is graphed twice with the
scales interchanged. This means that either the lower or the upper triangle
could be dropped without losing any information. However, displaying the
whole matrix makes it easier to compare the objective function values: one
can assess one objective function against the others by looking at a single
column or row.
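A scatter-plot matrix like Figure A.2 can be produced directly from such data; the following sketch (an illustration, assuming pandas and matplotlib are available) uses the first rows of Table A.1:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

# First three rows of Table A.1, truncated here for brevity.
df = pd.DataFrame({"f1": [3.90, 3.70, 3.50],
                   "f2": [-1.12, -1.06, -1.01],
                   "f3": [13.0, 13.0, 13.0]})
scatter_matrix(df, figsize=(6, 6))  # one panel per objective pair, each pair twice
plt.show()
```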
Parallel coordinates Parallel coordinates [59] (also called value paths
[52, 65]) are used to illustrate a set of solutions in an m-dimensional
space. Parallel coordinates map that set onto a 2D graph with m parallel
axes, typically vertical and equally spaced. A solution in an m-dimensional
space is represented as a polyline with vertices on these parallel axes,
and the position of the vertex on the ith axis corresponds to the value of
the point on the ith objective. Parallel coordinates are simple to construct
and scale well with the dimensionality of the data: adding more dimensions
only involves adding more axes. Figure A.3 presents an example of the
parallel coordinates plot for the same data in Table A.1 as Figure A.2.
Figure A.3: Parallel coordinates for the data in Table A.1
Figure A.3: Parallel coordinates for data in Table A.1 (p. 124)
Appendix B.
Several machine learning algorithms
used in this study
Algorithm B.1: PCA algorithm
Input: X, dataset containing n points with number of dimensions d
Output: Z, dataset containing n points with number of dimensions k
1 begin
2 Find the mean vector x̄ = (1/n) ∑_{i=1}^{n} x_i
3 Subtract the mean: x̂_i = x_i − x̄
4 Compute the covariance matrix S = (1/n) X̂ X̂^T
5 Compute the eigenvectors and eigenvalues of the covariance matrix S: (λ_1, u_1), ..., (λ_d, u_d)
6 Pick the k eigenvectors with the highest eigenvalues
7 Project the data onto the selected eigenvectors, deriving the new dataset Z = U_k^T X̂
8 end
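A direct NumPy transcription of Algorithm B.1 can look as follows (a sketch; the data points are stored as columns to match the notation above):

```python
import numpy as np

def pca(X: np.ndarray, k: int) -> np.ndarray:
    # X has shape (d, n): n points with d dimensions each.
    X_hat = X - X.mean(axis=1, keepdims=True)        # subtract the mean vector
    S = (X_hat @ X_hat.T) / X.shape[1]               # covariance matrix S
    eigvals, eigvecs = np.linalg.eigh(S)             # eigen-decomposition of S
    U_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k eigenvectors with highest eigenvalues
    return U_k.T @ X_hat                             # Z = U_k^T X_hat, shape (k, n)
```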
Algorithm B.2: k-means algorithm
Input: X: a dataset containing n objects;
k: the number of clusters
Output: A set of k clusters
1 begin
2 Choose k random objects as cluster centers, called centroids.
3 while true do
4 Calculate the distance between each object and the cluster centers.
5 Assign each object x[i] to the cluster whose center is the nearest of all the cluster centers.
6 if no object was reassigned then
7 stop
8 end
9 Update the center of each cluster by taking the average of all the objects assigned to that cluster in step 5.
10 end
11 end
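A compact NumPy version of Algorithm B.2 is sketched below (with a fixed iteration cap instead of the unconditional loop; empty clusters are not handled in this sketch):

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, max_iter: int = 100, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # step 2: k random objects
    prev = None
    for _ in range(max_iter):
        # Steps 4-5: assign each object to the nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        if prev is not None and np.array_equal(labels, prev):
            break  # step 6: no object was reassigned
        prev = labels
        # Step 9: recompute each center as the mean of its assigned objects.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels
```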
Algorithm B.3: PAM algorithm
Input: D: a dataset containing n objects;
k: the number of clusters
Output: A set of k clusters
1 begin
2 Arbitrarily choose k objects from D as representative objects (seeds)
3 repeat
4 noChange ← true;
5 Assign each remaining object to the cluster with the nearest representative object
6 foreach representative object Oj do
7 Randomly select a non-representative object Orandom
8 Compute the total swapping cost S of swapping Oj with Orandom
9 if S < 0 then
10 Oj is replaced by Orandom;
11 noChange ← false;
12 end
13 end
14 until noChange;
15 end
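In practice, a PAM-style clustering can also be obtained from the third-party scikit-learn-extra package (assumed installed), whose method="pam" option follows the swap-based procedure of Algorithm B.3:

```python
import numpy as np
from sklearn_extra.cluster import KMedoids  # from the scikit-learn-extra package

X = np.random.default_rng(0).random((50, 3))
km = KMedoids(n_clusters=3, method="pam", random_state=0).fit(X)
print(km.labels_)          # cluster assignment of each object
print(km.medoid_indices_)  # indices of the k representative objects
```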
Algorithm B.4: DBSCAN algorithm
Input: Objects, Eps, MinObjs
Output: Nc clusters: Cluster[1], Cluster[2],. . . , Cluster[Nc];
1 begin
2 Nobj ← Objects.size;
3 Cluster[1..Nobj]← 0;
4 Nc← 0 ; // number of cluster
5 Objects[1..Nobj].visit← false ; // to keep track of objects that have been
visited.
6 for i = 1 to Nobj do
7 if not Objects[i].visit then
8 Objects[i].visit ← true ; // If not visited yet, then mark as visited
9 neighborObjs ← regionQuery(Objects, i, Eps) ; // find its neighbors (Algorithm B.5)
10 if length(neighborObjs) < MinObjs− 1 then
11 objC[i] ← 0 ; // not enough objects to form a cluster; mark object i as noise
12 else
/* Form a cluster */
13 Nc← Nc+ 1; // Increment No. of clusters and process neighborhood
14 Cluster{Nc} ← i ; // Initialize cluster Nc with object i
15 objC[i]← Nc ; // and mark object i as being a member of cluster Nc.
16 ind← 1 ; // Initialize index into neighborObjs array.
/* For each object P’ in neighborObjs */
17 while ind <= length(neighborObjs) do
18 nb← neighborObjs[ind];
19 if not Objects[nb].visit then
20 Objects[nb].visit← true ; // If this neighbor has not been
visited mark it as visited
/* Find the neighbors of this neighbor and if it has enough
neighbors add them to the neighborObjs list */
21 neighborObjsO ← regionQuery(Objects, nb, Eps) ; // see Algorithm B.5
22 if length(neighborObjsO) >= minObjs then
23 neighborsObjs← neighborsObjs ∪ neighborObjsO
24 end
25 end
// If this neighbor nb is “free” then add it to this cluster
26 if not objC[nb] then
27 Cluster[Nc]← Cluster[Nc] ∪ {nb};
28 objC[nb] = Nc;
29 end
30 ind← ind+ 1 ; // increment neighbor object index and process next
neighbor
31 end
32 end
33 end
34 end
35 end
Algorithm B.5: regionQuery
Input: O, n, E
Output: neighbors
1 begin
2 E2 ← E^2;
3 Nobj ← O.size;
4 neighbors← ∅;
5 for j = 1 to Nobj do
6 if j 6= n then
7 O[j].visit ← true;
// Test if squared distance < E2
8 dist2 ← Distance(O[j], O[n]) ; // squared distance between the two objects O[j] and O[n]
9 if dist2 < E2 then
10 neighbors← neighbors ∪ {j};
11 end
12 end
13 end
14 end
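For completeness, the same clustering is available off the shelf in scikit-learn, whose eps and min_samples parameters mirror Eps and MinObjs of Algorithm B.4 (a usage sketch, not the thesis implementation):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).random((100, 2))
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)
print(labels)  # cluster index per object; -1 marks noise objects
```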