For every algorithm that makes use of positive definite kernels, the same question arises: what is the best kernel for a given task? Ideally, the kernel should be chosen such that the resulting function minimizes the expected risk. Since this measure is not accessible, the most widely used approach is to approximate it via cross validation. When training Support Vector Machines (SVMs), two components need to be tuned: (1) the kernel parameters and (2) the regularization constant C, which controls the tradeoff between the complexity of the function class and its ability to explain the data correctly. With cross validation, the parameters are picked from a range of candidate values, and the combination that performs best on held-out data is selected.
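As an illustrative sketch (not part of the original text), the procedure described above can be carried out with a grid search over candidate values of C and a kernel parameter, scoring each combination by cross-validated accuracy; the example below uses scikit-learn's `GridSearchCV` with an RBF-kernel SVM on a standard toy dataset, with hypothetical candidate grids chosen only for demonstration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for the regularization constant C and the RBF kernel
# width gamma; both are tuned jointly, as described in the text.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}

# 5-fold cross validation: each (C, gamma) pair is scored by its mean
# accuracy on held-out folds, approximating the expected risk.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # combination with the best held-out accuracy
print(search.best_score_)   # its mean cross-validated accuracy
```

The grid ranges are a modeling choice; in practice C and gamma are usually searched on a logarithmic scale, and the grid is refined around the best region found.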