Department of Mathematical Sciences
Browsing the Department of Mathematical Sciences by advisor "Bah, Bubacarr"
Now showing 1–2 of 2
- An error correction neural network for stock market prediction (Stellenbosch : Stellenbosch University, 2019-04). Mvubu, Mhlasakululeka; Sanders, J. W.; Becker, Ronald I.; Bah, Bubacarr. Stellenbosch University, Faculty of Science, Dept. of Mathematical Sciences, Division Mathematics.
  ENGLISH ABSTRACT: Predicting the stock market has long been an intriguing research topic across many fields, and numerous techniques have been applied to forecast stock market movement. This study begins with a review of the theoretical background of neural networks. Subsequently, an Error Correction Neural Network (ECNN), a Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network are defined and implemented for an empirical study. The research offers evidence on the predictive accuracy and profitability of the proposed forecasting models on futures contracts from 2010 to 2016: Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, and the United States' S&P 500 and DJIA futures. Both technical and fundamental data are used as inputs to the networks. Results show that the ECNN outperforms the other proposed models in both predictive accuracy and profitability, indicating that the ECNN shows promise as a reliable deep learning method for predicting stock prices.
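The record does not include the thesis code, but the core idea behind an error correction network, feeding the previous step's forecast error back into the recurrent state update, can be illustrated in a few lines. The following PyTorch sketch is a loose rendering of that idea only, not the thesis's architecture; the window length, feature count, hidden size, and teacher-forced training setup are all assumptions.

```python
import torch
import torch.nn as nn

class ErrorCorrectionRNN(nn.Module):
    """Loose sketch of the error-correction idea: the previous step's
    forecast error is appended to the inputs of the recurrent cell."""
    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.cell = nn.RNNCell(n_features + 1, hidden_size)  # +1 for the error signal
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x, y):
        # x: (batch, T, n_features) input windows; y: (batch, T) observed returns
        batch, T, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        err = x.new_zeros(batch, 1)          # no error signal before the first step
        preds = []
        for t in range(T):
            h = self.cell(torch.cat([x[:, t, :], err], dim=1), h)
            y_hat = self.head(h)             # one-step-ahead forecast
            preds.append(y_hat)
            err = (y[:, t:t+1] - y_hat).detach()  # error fed back at the next step
        return torch.cat(preds, dim=1)       # (batch, T) forecasts

# Hypothetical shapes: 20-day windows of 8 technical/fundamental features.
model = ErrorCorrectionRNN(n_features=8)
x, y = torch.randn(16, 20, 8), torch.randn(16, 20)
loss = nn.MSELoss()(model(x, y), y)
```

At inference time the true targets are of course unavailable ahead of time, so the error input would be computed from realized returns as they arrive, one step at a time.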
- Image Classification with Graph Neural Networks (Stellenbosch : Stellenbosch University, 2022-04). Neocosmos, Kibidi; Bah, Bubacarr. Stellenbosch University, Faculty of Science, Dept. of Mathematical Sciences.
  ENGLISH SUMMARY: Convolutional neural networks (CNNs) are a prominent and ubiquitous part of machine learning. They have consistently achieved state-of-the-art performance in areas such as computer vision, but they require large datasets to do so. This is in stark contrast to human-level performance, which demands far less data for the same tasks. The question naturally arises whether it is possible to develop models that require less data without a significant decrease in performance. In this thesis, we address this question from a different perspective by investigating whether a richer data structure could enable more learning from fewer training examples. We explore the idea by representing images as graphs, a structure that naturally contains more information about an image than the standard tensor representation, and then use graph neural networks (GNNs) to leverage the graph structure and perform image classification. We found that the graph structure did not enable GNNs to perform well given less data. However, in the course of experimentation we discovered that the graph topology as well as the node features significantly influence performance, and that some of the proposed GNN models were unable to exploit the graph structure effectively.
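The summary does not specify how images are encoded as graphs; one common construction, assumed here, treats each pixel as a node carrying its intensity as a feature and connects 4-neighbouring pixels with undirected edges. The sketch below builds such a grid graph in plain PyTorch and runs one mean-aggregation message-passing step, the basic operation that GNN layers build on.

```python
import torch

def image_to_grid_graph(img: torch.Tensor):
    """One possible image-as-graph construction: each pixel is a node whose
    feature is its intensity, with edges to its 4-connected neighbours."""
    h, w = img.shape
    x = img.reshape(-1, 1)                        # node features: (h*w, 1)
    idx = torch.arange(h * w).reshape(h, w)
    right = torch.stack([idx[:, :-1].flatten(), idx[:, 1:].flatten()])
    down = torch.stack([idx[:-1, :].flatten(), idx[1:, :].flatten()])
    edges = torch.cat([right, down], dim=1)
    edge_index = torch.cat([edges, edges.flip(0)], dim=1)  # both directions
    return x, edge_index

def mean_aggregate(x, edge_index):
    """One message-passing step: replace each node's features with the
    mean of its neighbours' features."""
    src, dst = edge_index
    agg = torch.zeros_like(x).index_add_(0, dst, x[src])
    deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1))
    return agg / deg.clamp(min=1)

img = torch.rand(28, 28)                 # e.g. an MNIST-sized grey-scale image
x, edge_index = image_to_grid_graph(img)
x1 = mean_aggregate(x, edge_index)       # node features after one step
```

A GNN classifier would stack several such aggregation steps, each followed by a learned transformation, and pool the final node features into a single graph-level prediction.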