Department of Computer Science
Browsing Department of Computer Science by advisor "Engelbrecht, Andries"
- Item: Landscape analysis-based automated algorithm selection (Stellenbosch : Stellenbosch University, 2024-03). Lang, Ryan Dieter; Engelbrecht, Andries; Stellenbosch University. Faculty of Science. Dept. of Computer Science.
  ENGLISH ABSTRACT: The algorithm selection problem, formulated by Rice, refers to the challenge of choosing the best available algorithm to solve a specific problem. Rice's framework comprises four spaces: the problem space, the algorithm space, the characteristic space, and the performance space. This thesis focuses on the problem space of continuous-valued, single-objective, boundary-constrained optimisation problems. Landscape analysis, a set of techniques that uses mathematical and statistical methods to characterise the properties of optimisation problems, can be used to describe the characteristic space. Selecting the most effective algorithm for an optimisation problem is a complex task, because metaheuristics have varying strengths. A data-driven approach based on landscape analysis techniques is therefore employed in this thesis to create an automated algorithm selector. The thesis proposes enhancements to the characteristic space by critically evaluating the reliability of the methods used to generate landscape analysis measures. Additionally, a new benchmark suite is proposed that is more representative of the problem space than the benchmark suites commonly used in the literature. The impact on performance of hybrid metaheuristics, created by combining the sampling algorithms used to calculate landscape analysis features with the standard metaheuristics used to solve the optimisation problem, is also investigated. By combining the proposed improvements to the characteristic, algorithm, and problem spaces of the algorithm selection framework, the thesis concludes with a landscape analysis-based automated algorithm selection model that outperforms the automated algorithm selection models currently found in the literature in terms of performance and generalisability. Furthermore, the thesis explores the use of automated machine learning and hybrid metaheuristics in automated algorithm selection.
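The pipeline described in this abstract can be illustrated with a small sketch: landscape features computed from samples of a problem (the characteristic space) are fed to a classifier that predicts which algorithm to run. The two features, the algorithm labels, and the random-forest selector below are illustrative assumptions, not the selection model developed in the thesis.

```python
# Minimal sketch of an algorithm selection pipeline in the spirit of Rice's
# framework: landscape features are computed from samples of a problem and a
# classifier maps them to an algorithm label. Feature choices, algorithm
# labels, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def landscape_features(f, lower, upper, n_samples=200, rng=None):
    """Compute two simple landscape features from uniform random samples."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    y = np.array([f(x) for x in X])
    best = X[np.argmin(y)]
    dist = np.linalg.norm(X - best, axis=1)
    # Fitness-distance correlation: how fitness relates to distance from the best sample.
    fdc = np.corrcoef(y, dist)[0, 1]
    # Dispersion: mean pairwise distance among the best 10% of samples.
    elite = X[np.argsort(y)[: max(2, n_samples // 10)]]
    disp = np.mean([np.linalg.norm(a - b)
                    for i, a in enumerate(elite) for b in elite[i + 1:]])
    return np.array([fdc, disp])

# Training data: feature vectors of benchmark problems and the label of the
# algorithm that performed best on each (values and labels are placeholders).
train_features = np.array([[0.8, 1.2], [0.1, 3.5], [0.5, 2.0]])
best_algorithm = np.array(["pso", "de", "cmaes"])

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(train_features, best_algorithm)

# Select an algorithm for a new problem, e.g. the 5-dimensional sphere function.
sphere = lambda x: float(np.sum(x ** 2))
features = landscape_features(sphere, lower=np.full(5, -5.0), upper=np.full(5, 5.0), rng=1)
print(selector.predict(features.reshape(1, -1)))
```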
- Item: Mixtures of heterogeneous experts (Stellenbosch : Stellenbosch University, 2022-10). Omomule, Taiwo Gabriel; Engelbrecht, Andries; Stellenbosch University. Faculty of Science. Dept. of Computer Science.
  ENGLISH ABSTRACT: This research considers the no-free-lunch theorem, which states that no single machine learning algorithm performs best on all problems, because algorithms have different inductive biases. A related problem is that combinations of experts of the same type, referred to as mixtures of homogeneous experts, do not capitalize on the different inductive biases of different machine learning algorithms. Research has shown that mixtures of homogeneous experts deliver improved accuracy compared to that of the base experts in the mixture. However, the predictive power of a homogeneous mixture of experts is still limited by the inductive bias of the algorithm that makes up the mixture. This research therefore proposes the development of mixtures of heterogeneous experts, which combine different machine learning algorithms in order to exploit the strengths of those algorithms and to reduce the adverse effects of their inductive biases. A set of different machine learning algorithms is selected to develop four types of mixtures of experts. Empirical analyses are performed using nonparametric statistical tests to compare the generalization performance of the ensembles. The comparison investigates the performance of the homogeneous and heterogeneous ensembles in a number of modelling studies on a set of classification and regression problems, using selected performance measures. The problems represent varying levels of complexity and different characteristics, in order to determine for which characteristics and complexities the heterogeneous ensembles outperform the homogeneous ensembles. For classification problems, the empirical results across six modelling studies indicate that the heterogeneous ensembles achieve improved predictive performance compared to the developed homogeneous ensembles, by taking advantage of the different inductive biases of the different base experts in the ensembles. Specifically, the heterogeneous ensembles developed using different machine learning algorithms, with the same and with different configurations, showed superiority over the other heterogeneous ensembles and the homogeneous ensembles developed in this research. These ensembles achieved the best and second-best overall accuracy ranks across the classification datasets in each modelling study. For regression problems, the heterogeneous ensembles outperformed the homogeneous ensembles across five modelling studies, although a random forest algorithm achieved generalization performance competitive with that of the heterogeneous ensembles. Based on the average ranks, the heterogeneous ensembles developed using different machine learning algorithms, where the base members have the same and different configurations, still produced better predictive performance than a number of the heterogeneous and homogeneous ensembles across the modelling studies. The implementation of a mixture of heterogeneous experts therefore removes the need for the computationally expensive process of finding the best-performing homogeneous ensemble.
  The heterogeneous ensembles of different machine learning algorithms are consistently the most accurate, or among the most accurate, ensembles across all classification and regression problems. This is attributed to the advantage of capitalizing on the inductive biases of the different machine learning algorithms and the different configurations of the base members in the ensembles.
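As a rough illustration of the homogeneous-versus-heterogeneous distinction, the sketch below builds a bagged ensemble of decision trees and a voting ensemble of three different learners with scikit-learn. The chosen base learners, the soft-voting rule, and the dataset are assumptions for illustration; they are not the four ensemble types or the combination strategies developed in the thesis.

```python
# Minimal sketch: a homogeneous ensemble (same inductive bias) versus a
# heterogeneous ensemble (different inductive biases). Base learners, voting
# rule, and dataset are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Homogeneous ensemble: many experts of the same type (decision trees).
homogeneous = BaggingClassifier(DecisionTreeClassifier(), n_estimators=15, random_state=0)

# Heterogeneous ensemble: experts with different inductive biases, combined by soft voting.
heterogeneous = VotingClassifier([
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
], voting="soft")

for name, model in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```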
- Item: Particle swarm optimization for constrained multimodal function optimization (Stellenbosch : Stellenbosch University, 2024-03). Strelitz, Benjamin Steenveld; Engelbrecht, Andries; Stellenbosch University. Faculty of Science. Dept. of Computer Science.
  ENGLISH ABSTRACT: This thesis investigates the efficiency of particle swarm optimization (PSO) algorithms at finding many feasible global optima for constrained multimodal optimization problems. The proposed approach is the niching migratory multi-swarm optimizer with Deb's comparison criteria (NMMSO-DCC) algorithm. The NMMSO-DCC algorithm uses the same core architecture as the niching migratory multi-swarm optimization (NMMSO) algorithm, but uses Deb's comparison criteria as a constraint handling method. Deb's comparison criteria allow the NMMSO-DCC algorithm to find many feasible global optima for constrained multimodal optimization problems (CMMOPs), whereas the NMMSO algorithm was designed only to find global optima for boundary-constrained multimodal optimization problems (MMOPs). The NMMSO algorithm is one of the state-of-the-art multimodal optimization algorithms, but cannot be used when constraints are placed on the objective function. The proposed algorithm therefore addresses the inability of the NMMSO algorithm to solve constrained multimodal optimization problems. This study assumes that both the objective function to be optimized and the constraints placed upon it remain static throughout the search process. All benchmark problems in this study contain boundary constraints. The results indicate that NMMSO-DCC performs competitively compared to other state-of-the-art constrained multimodal optimization algorithms. The results in terms of success rate are particularly convincing, whereas NMMSO-DCC struggled more with respect to the peak ratio. This means that although the NMMSO-DCC algorithm is able to locate all global optima within a given tolerance level in some of the independent runs, it struggles to do so consistently across multiple independent runs.
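Deb's comparison criteria, the constraint-handling rule that distinguishes NMMSO-DCC from NMMSO, can be sketched in a few lines: a feasible solution always beats an infeasible one, two feasible solutions are compared on objective value, and two infeasible solutions are compared on total constraint violation. The simple value-plus-violation representation below is an assumption for illustration; the niching and multi-swarm machinery of NMMSO-DCC is not shown.

```python
# Minimal sketch of Deb's comparison criteria (feasibility rules) for a
# minimisation problem. The Solution representation is an illustrative
# assumption, not the NMMSO-DCC data structures.
from dataclasses import dataclass

@dataclass
class Solution:
    value: float      # objective value (minimisation)
    violation: float  # total constraint violation; 0.0 means feasible

def deb_better(a: Solution, b: Solution) -> bool:
    """Return True if solution a is preferred over b under Deb's rules."""
    a_feasible, b_feasible = a.violation == 0.0, b.violation == 0.0
    if a_feasible and b_feasible:
        return a.value < b.value      # both feasible: better objective wins
    if a_feasible != b_feasible:
        return a_feasible             # a feasible solution beats an infeasible one
    return a.violation < b.violation  # both infeasible: smaller violation wins

# Example: a feasible but worse-objective solution beats an infeasible one.
print(deb_better(Solution(value=5.0, violation=0.0), Solution(value=1.0, violation=0.3)))  # True
```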
- Item: Set-based particle swarm optimization for portfolio optimization (Stellenbosch : Stellenbosch University, 2021-12). Erwin, Kyle Harper; Engelbrecht, Andries; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Computer Science.
  ENGLISH ABSTRACT: Portfolio optimization is a complex problem, not only in the depth of the topics it covers but also in their breadth. It is the process of determining which assets to include in a portfolio while simultaneously maximizing profit and minimizing risk. Portfolio optimization is rich with interesting research, not only by researchers in finance but also in computer science, and the overlap of these fields has led to an increase in the use of meta-heuristics to make intelligent investment decisions. This thesis conducts a thorough investigation into the current state of evolutionary and swarm intelligence algorithms for portfolio optimization. The investigation shows that these algorithms suffer from stability issues for larger portfolio optimization problems. A new approach using set-based particle swarm optimization (SBPSO) is proposed to reduce the dimensionality, and therefore the complexity, of portfolio optimization problems. The results show that SBPSO is capable of obtaining good-quality solutions while being relatively fast. New set-based diversity measures are developed in order to better understand the exploration and exploitation behaviour of SBPSO, and of set-based algorithms in general. It is shown that SBPSO fails to converge to a single solution and uses an inadequate process to determine the contribution of each asset to the portfolio. Based on these findings, improvements are made to the proposed SBPSO approach that yield significant gains in performance. The first multi-objective adaptation of SBPSO is also developed and is shown to scale to larger portfolio problems better than the multi-guided particle swarm optimization (MGPSO) algorithm, with lower levels of risk.
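The set-based idea can be sketched as follows: a particle's position is a set of selected assets, and the update moves that set towards the personal and global best sets by probabilistically adding assets they contain and removing assets they do not. The update rule and probabilities below are simplified assumptions for illustration, not the exact SBPSO operators or the asset-weighting step used in the thesis.

```python
# Minimal sketch of a set-based position update for portfolio selection.
# This is a simplification of the SBPSO idea: positions are sets of asset
# indices, and the update is driven by set differences with the best sets.
import random

def update_position(position, personal_best, global_best, universe,
                    add_prob=0.5, remove_prob=0.3, rng=random):
    """Return a new set of selected assets moved towards the best sets."""
    new_position = set(position)
    # Attraction: add assets that the best sets contain but this particle does not.
    for asset in (personal_best | global_best) - position:
        if rng.random() < add_prob:
            new_position.add(asset)
    # Repulsion: drop assets that neither best set contains.
    for asset in position - (personal_best | global_best):
        if rng.random() < remove_prob:
            new_position.discard(asset)
    # Keep the portfolio non-empty by falling back to a random asset.
    if not new_position:
        new_position.add(rng.choice(sorted(universe)))
    return new_position

universe = set(range(10))                      # ten candidate assets
position = {0, 3, 7}
personal_best, global_best = {0, 2, 3}, {2, 5, 7}
random.seed(1)
print(update_position(position, personal_best, global_best, universe))
```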