About

João R. Campos

Ph.D. Candidate

Since 2016 I've been working as a researcher with the Evolutionary and Complex Systems Group (ECOS) at the Centre for Informatics and Systems (CISUC). I received my Master's degree in 2017 and enrolled in the doctoral program at the Department of Informatics Engineering soon after. My Ph.D. research is interdisciplinary, and as such I'm currently working with both ECOS and the Software and Systems Engineering (SSE) group on using Machine Learning for Online Failure Prediction.

My main research interests are Artificial Intelligence, Evolutionary Computation, Machine Learning, and, more recently, Dependable and Secure Computing.

Experience

Web Software Developer

For almost 10 years I worked as a software engineer for web systems, covering everything from client management to requirements and development. This gave me broad experience in the private sector and instilled in me the importance of good practices, the need for a cohesive team, and the tools to objectively manage time and goals.

The skills I acquired throughout those years allow me to conduct my research in an effective and productive fashion, which will hopefully lead to a fulfilling and significant Ph.D.

Academic Track

I started working in the private sector while I was still in the second year of my bachelor's degree. Nonetheless, if anything, working professionally gave me a different perspective on the need to continue my studies. I received my Bachelor's degree in 2010 from the Coimbra Institute of Engineering, and in 2011 I enrolled in a two-year specialization course in eCommerce. I then enrolled in a Master's in Informatics Engineering, with a specialization in Intelligent Systems, at the University of Coimbra. After finishing it in 2017 I enrolled in the doctoral program, and have since dedicated myself entirely to it, no longer working in the private sector.

Research

EDCC 2018: Exploratory Study of Machine Learning Techniques for Supporting Failure Prediction

The growing complexity of software makes it difficult or even impossible to detect all faults before deployment, and such residual faults eventually lead to failures at runtime. Online Failure Prediction (OFP) is a technique that attempts to avoid or mitigate such failures by predicting their occurrence based on the analysis of past data and the current state of a system. Given recent technological developments, Machine Learning (ML) algorithms have shown their ability to adapt and extract knowledge in a variety of complex problems, and thus have been used for OFP. Still, they are highly dependent on the problem at hand, and their performance can be influenced by different factors. The problem with most works using ML for OFP is that they focus only on a small set of prediction algorithms and techniques, although there is no comprehensive study to support their choice. In this paper, we present an exploratory analysis of various ML algorithms and techniques on a dataset containing failure data. The results show that, for the same data, different algorithms and techniques directly influence the prediction performance and thus should be carefully selected.
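
As a rough illustration of the kind of comparison the study performs, the sketch below trains a few common classifiers on the same dataset and reports cross-validated F1 on the failure class. The data, models, and metric here are assumptions for illustration only, not the paper's actual setup.

# Illustrative sketch only: synthetic, imbalanced data stands in for real failure data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
}

# F1 on the rare failure class shows how much the choice of algorithm alone
# changes prediction performance on the same data.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")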

EDCC 2019: A Benchmarking Approach for Failure Prediction Algorithms

Online Failure Prediction (OFP) allows rapidly and proactively taking countermeasures before a failure occurs, such as saving data or restarting a system or component. Machine Learning (ML) algorithms have shown their ability to adapt and extract knowledge in a variety of complex problems, and thus have also been used for OFP. However, despite its potential contribution to improving dependability, OFP still presents limitations. In addition to the problem of choosing the optimal set of features to use, assessing prediction models is complex and common procedures for supporting comparison are not available. In this paper, we propose a conceptual framework for a fair and sound assessment and comparison of alternative failure prediction solutions, including scenarios for choosing the adequate metrics for the assessment and a detailed procedure on how to prepare and validate the workload (dataset), compare alternative models, and select the best predictor. To demonstrate the approach, we present a benchmarking campaign and compare several failure prediction models. Results show that the framework fulfills the relevant properties and can be used to establish a ranking of the models under evaluation in different scenarios.
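
To make the idea concrete, here is a minimal sketch, assuming synthetic data and generic scikit-learn models rather than the framework itself: candidate predictors are evaluated on the same dataset with the metrics chosen for the scenario and then ranked.

# Hypothetical benchmarking sketch: same dataset, scenario-driven metrics,
# and a ranking of candidate predictors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

candidates = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
}

# The metrics depend on the scenario: recall matters most when missing a
# failure is costly, precision when false alarms trigger expensive actions.
scoring = ["precision", "recall", "f1"]

results = {}
for name, model in candidates.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    results[name] = {metric: cv[f"test_{metric}"].mean() for metric in scoring}

# Rank by the metric chosen for this scenario (here, F1).
for name, metrics in sorted(results.items(), key=lambda kv: -kv[1]["f1"]):
    print(name, metrics)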

Complete Datasets: Access Datasets Here

Tools

Propheticus: Generalizable Machine Learning Framework

Due to recent technological developments, Machine Learning (ML), a subfield of Artificial Intelligence (AI), has been successfully used to process and extract knowledge from a variety of complex problems. However, a thorough ML approach is complex and highly dependent on the problem at hand. Additionally, implementing the code required to execute the experiments is neither a small nor a trivial task, consequently increasing the probability of residual faults. Propheticus is a data-driven framework that resulted from the need for a tool that abstracts some of the inherent complexity of ML while being easy to understand and use, as well as to adapt and expand to the user's specific needs. Propheticus systematizes and enforces various complex concepts of an ML experiment workflow, taking into account both the nature of the problem and the data. It contains functionalities to execute all the different phases, from data preprocessing to results analysis and comparison. Moreover, it can be fairly easily adapted to different problems due to its flexible architecture, and customized as needed to address the user's needs.
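
As a generic illustration of the kind of end-to-end workflow Propheticus systematizes, the sketch below bundles preprocessing and a classifier into a single pipeline and evaluates it with cross-validation. This uses plain scikit-learn and synthetic data, not Propheticus's own API.

# Plain scikit-learn, not Propheticus: one pipeline object bundles
# preprocessing and the classifier so the experiment is configured, run,
# and evaluated in a single place.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

workflow = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])

print(cross_val_score(workflow, X, y, cv=5, scoring="f1").mean())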

Internal Report: Read Here

Connect