- Linear Kernel:
Computes the standard dot product of two vectors in the original feature space. Useful if the data are approximately linearly separable.
\[K(x, y) = x \cdot y.\]
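As a minimal sketch, the linear kernel is just a dot product (the function name and example vectors here are illustrative):

```python
def linear_kernel(x, y):
    # K(x, y) = x . y, the dot product in the original input space
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
print(linear_kernel(x, y))  # 1*4 + 2*5 + 3*6 = 32.0
```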
- RBF (Radial Basis Function) Kernel:
Implicitly projects every data point into an infinite-dimensional feature space, enabling nonlinear decision boundaries. This flexibility makes the RBF kernel work well across a wide range of datasets, so it is a common default choice.
\[K(x, y) = \exp(-\gamma \|x - y\|^2).\]
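A direct translation of the formula is shown below (the default `gamma` value is arbitrary, chosen only for illustration). Note that K(x, x) = 1 for any x, and the value decays toward 0 as the points move apart:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

a = [1.0, 2.0]
b = [2.0, 3.0]
print(rbf_kernel(a, a))  # identical points give 1.0
print(rbf_kernel(a, b))  # distinct points give a value in (0, 1)
```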
- Polynomial Kernel:
Maps data points into a higher-dimensional feature space via polynomial terms of the input features. Useful for capturing polynomial relationships (e.g., feature interactions) up to degree \(d\).
\[K(x, y) = (\gamma x \cdot y + c)^d.\]
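The polynomial kernel can be sketched the same way; the parameter defaults below (`gamma=1.0`, `c=1.0`, `d=2`) are illustrative, not canonical:

```python
def polynomial_kernel(x, y, gamma=1.0, c=1.0, d=2):
    # K(x, y) = (gamma * (x . y) + c)^d
    dot = sum(xi * yi for xi, yi in zip(x, y))
    return (gamma * dot + c) ** d

# With gamma=1, c=1, d=2: dot([1,2],[3,4]) = 11, so K = (11 + 1)^2 = 144
print(polynomial_kernel([1.0, 2.0], [3.0, 4.0]))  # 144.0
```

Setting `gamma=1.0`, `c=0.0`, `d=1` recovers the linear kernel, which is a quick sanity check.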
- Sigmoid Kernel:
Resembles the tanh activation function used in neural networks. Occasionally useful for particular data distributions, though less common than the RBF or polynomial kernels.
\[K(x, y) = \tanh(\gamma x \cdot y + c).\]
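For completeness, a sketch of the sigmoid kernel (again with illustrative parameter defaults). Because `tanh` is bounded, the kernel value always lies in (-1, 1):

```python
import math

def sigmoid_kernel(x, y, gamma=0.1, c=0.0):
    # K(x, y) = tanh(gamma * (x . y) + c)
    dot = sum(xi * yi for xi, yi in zip(x, y))
    return math.tanh(gamma * dot + c)

print(sigmoid_kernel([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # tanh(0.1 * 32) = tanh(3.2)
```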