Kernel methods have emerged as some of the most powerful tools available for a wide range of machine learning and, more generally, function estimation problems. Kernel methods, broadly built around Support Vector Machines (SVMs) and Relevance Vector Machines (RVMs), are widely regarded as modern intelligent machines. They combine principles from statistics, optimization and learning in a sound mathematical framework able to solve large and complex problems in the areas of intelligent control and industry, information management, information security, finance and business, and bioinformatics and medicine.
Intelligent Control and Industry
Intelligent signal processing aimed at quality monitoring, fault detection and control in industrial processes is performed with SVMs in [27, 31, 29], where they serve as quality monitoring tools that analyze complete data patterns. The designed models are assessed by comparison with RBF neural networks. The results show that SVM models with appropriately selected kernels achieve superior performance. The advantage is even more marked when the separability attained by the SVM in the chosen high-dimensional feature space is examined, which confirms that the kernel parameters indeed affect the achievable accuracy [28]; a minimal illustration of this parameter sensitivity is sketched below.
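The following sketch shows the standard way such kernel parameters are tuned: cross-validated grid search over the RBF width gamma and the regularization constant C. The synthetic data and grid values are illustrative assumptions, not taken from [27, 31, 29] or [28].

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for industrial sensor patterns; grid values are
# illustrative, not those reported in the cited works.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [1, 10, 100],
                                "gamma": [1e-3, 1e-2, 1e-1]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)  # accuracy varies with the kernel width
```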
SVMs are able to handle high-dimensional feature spaces and to automatically select the most discriminative features for estimation problems. In regression problems, more general kernels based on some distance measure deserve investigation. Under the assumption that the Euclidean distance has a natural generalization in the Minkowski distance function, the author proposed in [30] a new kernel function for regression modeling. It was tested on the Box-Jenkins gas furnace time series, yielding a significant rise in overall accuracy.
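A minimal sketch of such a distance-based kernel follows, assuming the common construction exp(-gamma * ||x - y||_p^p); the exact kernel form, p and gamma of [30] may differ. It is used as a custom kernel for support vector regression on a toy lagged time series.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVR

def minkowski_kernel(p=1.5, gamma=0.5):
    """Gaussian-type kernel with the Euclidean norm replaced by a Minkowski
    distance: k(x, y) = exp(-gamma * ||x - y||_p^p). For 0 < p <= 2 this
    remains positive definite; p and gamma here are illustrative values,
    not those of [30]."""
    def k(X, Y):
        return np.exp(-gamma * cdist(X, Y, metric="minkowski", p=p) ** p)
    return k

# Toy lagged embedding of a noisy scalar series, loosely in the spirit of
# the Box-Jenkins gas furnace regression task.
rng = np.random.default_rng(0)
s = np.sin(np.linspace(0.0, 20.0, 200)) + 0.1 * rng.standard_normal(200)
X = np.column_stack([s[i:i + 190] for i in range(4)])  # 4 lagged inputs
y = s[4:194]                                            # one-step-ahead target
model = SVR(kernel=minkowski_kernel()).fit(X, y)
print(model.predict(X[:3]))
```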
In [10], the author, together with other researchers, successfully designed a support vector regression based model in the sagittal plane for the control of a biped robot.
Information Management
Computers do not “understand” the content of a book or a document before retrieving, translating, recommending or summarizing it. Yet they exhibit, approximately, a behavior that we would consider intelligent. They do so by analyzing many examples of the appropriate behavior and then learning to emulate it. Most successes depend on statistical pattern analysis and inductive inference over a large number of pattern instances. In [51], kernel techniques and approaches are proposed and reviewed in the scope of text classification, a stage crucial to information retrieval and to modern search engines.
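As a minimal, hedged example of kernel-based text classification in the spirit of [51]: the toy corpus, labels and TF-IDF representation below are illustrative assumptions only; real systems train on large labelled collections and may use string kernels instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy corpus standing in for a large labelled document collection.
docs = ["stock prices fell sharply today",
        "the team won the championship match",
        "central bank raises interest rates",
        "player scores twice in the final game"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(docs, labels)
print(clf.predict(["interest rates and stock markets"]))  # expected: ['finance']
```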
Given the huge amount of information available in digital form, computational intelligence models should be able to (i) deal with almost unlimited quantities of unlabeled data, (ii) integrate learning models, and (iii) distribute tasks in order to solve the complex problems involved. The first issue is tackled in [49, 50], while for the latter two a distributed text classification approach with ensemble kernel-based learning is presented in [47]; a sketch of the general idea follows.
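The sketch below assumes a simple partition-train-vote scheme, which illustrates the idea of distributed, ensemble kernel-based learning but is not necessarily the architecture of [47].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# One SVM is trained per data partition (as if on separate nodes) and the
# predictions are combined by majority vote.
X, y = make_classification(n_samples=600, n_features=30, random_state=1)
parts = np.array_split(np.random.default_rng(1).permutation(600), 3)
models = [SVC(kernel="rbf", gamma="scale").fit(X[p], y[p]) for p in parts]

def majority_vote(models, X):
    preds = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
    return (preds.mean(axis=0) > 0.5).astype(int)     # majority for 0/1 labels

print((majority_vote(models, X) == y).mean())         # ensemble accuracy
```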
Information Security
Steganography secretly hides information inside digital products, increasing the potential for covert dissemination of malicious software, mobile code, or information. To counter the threat posed by steganography, steganalysis aims to expose such stealthy communications. A new scheme for the steganalysis of JPEG images is proposed in [18]; JPEG, being the most common image format, is believed to be widely used for steganographic purposes, since many free and commercial tools produce steganograms with JPEG covers. In a similarly motivated approach, described in [20], a set of features based on image characteristics is extracted for model construction. Furthermore, a new parameter, image complexity, is designed and shown to enhance model performance.
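A generic, hedged sketch of a feature-based steganalyser of this kind follows; the pixel-statistics features, the local-variation complexity proxy and the simulated LSB embedding are illustrative assumptions, not the designs of [18] or [20].

```python
import numpy as np
from sklearn.svm import SVC

def features(img):
    dx = np.diff(img.astype(float), axis=1)  # horizontal pixel differences
    return np.array([img.mean(), img.std(),
                     np.abs(dx).mean(), dx.var()])  # last entry: complexity proxy

# Smooth synthetic covers; embedding is simulated by randomizing the
# least-significant bits, which perturbs the local statistics.
rng = np.random.default_rng(0)
base = np.add.outer(np.arange(32), np.arange(32)).astype(float)
covers = [(base + rng.normal(0, 2, base.shape)).clip(0, 255).astype(int)
          for _ in range(50)]
stegos = [c ^ rng.integers(0, 2, c.shape) for c in covers]

X = np.array([features(i) for i in covers + stegos])
y = np.array([0] * 50 + [1] * 50)                   # 0 = cover, 1 = stego
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```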
Several nonparametric learning designs are built in [11] for resilient computational intelligence models on the same validation set of JPEG images.
Finance and Business
The wide availability of financial data puts machine learning at center stage, giving rise to computational intelligence models. In response to the recent growth of the credit industry and to the world economic crisis, the early detection of bankruptcy is of great importance to various stakeholders. Yet the rate of bankruptcy has risen, and it is becoming harder to estimate as companies become more complex and the information asymmetry between banks and firms increases. The author, in joint work, has examined several kernel models for financial distress prediction [40]. These significant results relied, in part, on dimension reduction of the overall set of financial ratios. In fact, our recent focus on preprocessing the financial data has led to very successful frameworks for evaluating firms' financial status [41, 39, 38].
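A minimal sketch of such a preprocessing-plus-kernel-model pipeline, assuming scaling, PCA-based dimension reduction of the ratios and an RBF SVM on synthetic data; the cited works [40, 41, 39, 38] may use different preprocessing and models.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for balance-sheet ratios; the firm labels encode
# healthy (0) versus financially distressed (1).
X, y = make_classification(n_samples=400, n_features=25, n_informative=8,
                           random_state=2)   # 25 "financial ratios"
pipe = make_pipeline(StandardScaler(), PCA(n_components=8),
                     SVC(kernel="rbf", gamma="scale"))
pipe.fit(X, y)
print(pipe.score(X, y))
```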
Bioinformatics and Medicine
In the 1990s genomic data started becoming available. Since mathematical models aimed at capturing
the physics of transcription processes quickly proved to be unworkable, bioinformaticians turned to
Computational Intelligence models for help in tasks such as gene finding and protein structure
prediction. In [19], class prediction and feature selection, two learning tasks that are tightly paired in the search for molecular profiles in microarray data, were performed with SVMs. Models with RBF kernels were shown to be a good choice, thus providing clues for the cancer classification of individual samples.
Recently, proteomic data, considered potentially rich but arguably unexploited for genome annotation, was used in [25]. There, the idea of using manifold (and supervised distance metric) learning for feature reduction, combined with an SVM classifier of mass spectrometry data, was successfully applied to biomedical diagnosis and protein identification. In [37], the author (in joint work) proposes a real-time SVM-based predictor for Ventricular Arrhythmia (VA) detection, which shows favorable performance compared with neural network (NN) models.