Our implementation uses a kernel cache to accelerate computation by storing previously computed kernel matrix values. Because the same kernel entries are often requested repeatedly during training, caching them significantly reduces overhead, especially on larger datasets where each value would otherwise be recomputed many times.
Note
You can control the kernel cache size with the cache size option (default: 200 MB). When the cache reaches capacity, a least recently used (LRU) strategy evicts older entries.
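To make the mechanism concrete, here is a minimal sketch of an LRU kernel cache. The `KernelCache` class, its `max_entries` capacity parameter, and the `get` method are all hypothetical names for illustration; the actual implementation sizes its cache in megabytes rather than entries, and its internals may differ.

```python
from collections import OrderedDict

import numpy as np


class KernelCache:
    """Hypothetical sketch: LRU cache for kernel matrix entries.

    Capacity is expressed in entries here for simplicity; a real
    implementation would typically budget memory in bytes/MB.
    """

    def __init__(self, kernel, max_entries=1000):
        self.kernel = kernel          # callable: kernel(x, y) -> float
        self.max_entries = max_entries
        self._store = OrderedDict()   # (i, j) -> cached kernel value

    def get(self, i, j, X):
        # Kernels are symmetric, so normalize the key ordering
        # to share one cache slot between K(i, j) and K(j, i).
        key = (i, j) if i <= j else (j, i)
        if key in self._store:
            self._store.move_to_end(key)     # mark as most recently used
            return self._store[key]
        value = self.kernel(X[i], X[j])      # cache miss: compute
        self._store[key] = value
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used
        return value


# Usage with an RBF kernel (gamma=0.5, chosen arbitrarily):
rbf = lambda x, y, g=0.5: float(np.exp(-g * np.dot(x - y, x - y)))
X = np.random.default_rng(0).normal(size=(100, 5))
cache = KernelCache(rbf, max_entries=500)
v1 = cache.get(3, 7, X)  # computed and stored
v2 = cache.get(7, 3, X)  # served from cache via the symmetric key
```

Keying on the sample-index pair rather than the raw vectors keeps lookups cheap, and the `OrderedDict` gives O(1) recency updates and eviction.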