Implementations - 2024.1 English

Vitis Libraries

Release Date
2024-08-06
Version
2024.1 English

The discrete form (Euler-Maruyama form) of the Heston model is:

\[\ln{S(j\Delta)} = \ln{S((j-1)\Delta)} + \mu\Delta + \sqrt{\nu((j-1)\Delta)} \Delta W_t^S\]
\[\nu(j\Delta) = \nu((j-1)\Delta) + \kappa(\theta - \nu((j-1)\Delta))\Delta + \sigma\sqrt{\nu((j-1)\Delta)}\Delta W_t^\nu\]

Where \(\Delta\) is the unit timestep length and \(S(j\Delta)\) is the value of \(S\) at the j-th timestep, that is, \(S(j \cdot \Delta t)\); \(\nu(j\Delta)\) has a similar meaning. To simplify the process, \(\ln{S(j\Delta)}\) is used instead of \(S\), since multiplication becomes addition after taking the logarithm.

\(\Delta W_t^S\) and \(\Delta W_t^\nu\) can be calculated by:

\[\Delta W_t^S = \sqrt{\Delta} Z_S\]
\[\Delta W_t^\nu = \sqrt{\Delta} Z_\nu\]

Where \(Z_S\) and \(Z_\nu\) are two standard normally distributed random numbers with correlation \(\rho\).
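The correlated pair \((Z_S, Z_\nu)\) can be generated from two independent standard normals via the Cholesky relation \(Z_\nu = \rho Z_S + \sqrt{1-\rho^2}\,Z_\perp\). A minimal C++ sketch (not the library's actual RNG kernel; the function name is illustrative):

```cpp
#include <cmath>
#include <random>
#include <utility>

// Hypothetical helper: draw one pair of standard normal variates
// (Z_S, Z_nu) with correlation rho, using the Cholesky relation
// Z_nu = rho * Z_S + sqrt(1 - rho^2) * Z_perp.
std::pair<double, double> correlatedNormals(std::mt19937& rng, double rho) {
    std::normal_distribution<double> stdNormal(0.0, 1.0);
    double zS = stdNormal(rng);    // drives the price increment
    double zPerp = stdNormal(rng); // independent of zS
    double zNu = rho * zS + std::sqrt(1.0 - rho * rho) * zPerp;
    return std::make_pair(zS, zNu);
}
```

With \(\rho = 1\) the two draws coincide; with \(\rho = 0\) they are independent, matching the limiting cases of the correlation structure.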

Although \(\nu\) is non-negative in the continuous form, it can become negative if the Euler-Maruyama form above is used directly. Several variations address this issue:

kDTReflection: use the absolute value of the volatility from the last iteration.

\[\ln{S(j\Delta)} = \ln{S((j-1)\Delta)} + \mu\Delta + \sqrt{|\nu((j-1)\Delta)|} \Delta W_t^S\]
\[\nu(j\Delta) = |\nu((j-1)\Delta)| + \kappa(\theta - |\nu((j-1)\Delta)|)\Delta + \sigma\sqrt{|\nu((j-1)\Delta)|}\Delta W_t^\nu\]

kDTPartialTruncation: take the absolute value of the last iteration's volatility only inside the square-root term.

\[\ln{S(j\Delta)} = \ln{S((j-1)\Delta)} + \mu\Delta + \sqrt{|\nu((j-1)\Delta)|} \Delta W_t^S\]
\[\nu(j\Delta) = \nu((j-1)\Delta) + \kappa(\theta - \nu((j-1)\Delta))\Delta + \sigma\sqrt{|\nu((j-1)\Delta)|}\Delta W_t^\nu\]

kDTFullTruncation: use the positive part (or zero) of the last iteration's volatility.

\(\nu((j-1)\Delta)^+ = \nu((j-1)\Delta)\) if \(\nu((j-1)\Delta) > 0\); \(\nu((j-1)\Delta)^+ = 0\) if \(\nu((j-1)\Delta) \leq 0\).

\[\ln{S(j\Delta)} = \ln{S((j-1)\Delta)} + \mu\Delta + \sqrt{\nu((j-1)\Delta)^+} \Delta W_t^S\]
\[\nu(j\Delta) = \nu((j-1)\Delta) + \kappa(\theta - \nu((j-1)\Delta)^+)\Delta + \sigma\sqrt{\nu((j-1)\Delta)^+}\Delta W_t^\nu\]
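The three truncation schemes can be summarized in one Euler step, since they differ only in which value of \(\nu\) feeds the base, drift, and square-root terms. A minimal C++ sketch, assuming the enum names from this page but otherwise illustrative parameter and function names:

```cpp
#include <algorithm>
#include <cmath>

enum DiscreType { kDTReflection, kDTPartialTruncation, kDTFullTruncation };

// One Euler-Maruyama step of the Heston model under the three
// truncation schemes. logS and nu hold the previous timestep's state;
// dt is Delta; dwS and dwNu are the correlated Brownian increments.
void hestonEulerStep(double& logS, double& nu, double mu, double kappa,
                     double theta, double sigma, double dt, double dwS,
                     double dwNu, DiscreType type) {
    double nuDrift, nuSqrt; // value fed to the drift term / sqrt term
    switch (type) {
    case kDTReflection: // |nu| everywhere
        nuDrift = std::fabs(nu);
        nuSqrt = std::fabs(nu);
        break;
    case kDTPartialTruncation: // |nu| only under the square root
        nuDrift = nu;
        nuSqrt = std::fabs(nu);
        break;
    default: // kDTFullTruncation: positive part nu^+ = max(nu, 0)
        nuDrift = std::max(nu, 0.0);
        nuSqrt = std::max(nu, 0.0);
        break;
    }
    logS += mu * dt + std::sqrt(nuSqrt) * dwS;
    // Reflection also replaces the base value with |nu|.
    double nuBase = (type == kDTReflection) ? std::fabs(nu) : nu;
    nu = nuBase + kappa * (theta - nuDrift) * dt
       + sigma * std::sqrt(nuSqrt) * dwNu;
}
```

Note that under full truncation a negative \(\nu\) contributes nothing to the diffusion terms, but the drift \(\kappa\theta\Delta\) still pulls it back toward positive territory.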

kDTQuadraticExponential and kDTQuadraticExponentialMartingale are more accurate variations; details can be found in the reference paper [ANDERSON2005]. They use a different approximation method to calculate \(\nu\). A brief outline of the algorithm follows.

Step 1: Calculate first order moment and second order moment of \(\nu\).

\[m = \theta + (\nu((j-1)\Delta) - \theta)e^{-\kappa \Delta}\]
\[s^2 = \frac{\nu((j-1)\Delta)\sigma^2 e^{-\kappa \Delta} }{\kappa} (1 - e^{-\kappa \Delta}) + \frac{\theta \sigma^2}{2 \kappa}(1 - e^{-\kappa \Delta})^2\]

Step 2: Calculate \(\Psi = s^2 / m^2\)

Step 3: If \(\Psi \leq \Psi_{sw}\) (with \(\Psi_{sw} = 1.5\)), then:

Step 3.1: Calculate \(a\) and \(b^2\)

\[b^2 = 2\Psi^{-1} - 1 + \sqrt{2\Psi^{-1}} \sqrt{2\Psi^{-1}-1}\]
\[a = \frac{m}{1+b^2}\]

Step 3.2: Calculate \(\nu(j\Delta)\)

\[\nu(j\Delta) = a(b+Z_\nu)^2\]

Step 4: If the condition in Step 3 does not hold (\(\Psi > \Psi_{sw}\)), then:

Step 4.1: Calculate \(\beta\) and \(p\)

\[p = \frac{\Psi - 1}{\Psi + 1}\]
\[\beta = \frac{2}{m(\Psi + 1)}\]

Step 4.2: Calculate \(U_\nu = \Phi(Z_\nu)\), where \(\Phi\) is the standard normal CDF.

Step 4.3: Calculate \(\nu(j\Delta)\)

\[\nu(j\Delta) = 0 \:\: if \:\: U_\nu \leq p\]
\[\nu(j\Delta) = \frac{1}{\beta}\log{\left(\frac{1-p}{1-U_\nu}\right)} \:\: if \:\: p < U_\nu\]
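The steps above can be collected into one variance-update function. A minimal C++ sketch of the quadratic-exponential update (not the library's kernel; names are illustrative, and \(\Phi\) is computed via `std::erfc`):

```cpp
#include <cmath>

// One QE variance update per the steps above; psi_sw = 1.5 is the
// switching threshold and zNu is a standard normal draw.
double qeVarianceStep(double nu, double kappa, double theta, double sigma,
                      double dt, double zNu) {
    double ekd = std::exp(-kappa * dt);
    // Step 1: first- and second-order moments of nu over one step.
    double m = theta + (nu - theta) * ekd;
    double s2 = nu * sigma * sigma * ekd / kappa * (1.0 - ekd)
              + theta * sigma * sigma / (2.0 * kappa)
                * (1.0 - ekd) * (1.0 - ekd);
    // Step 2: Psi = s^2 / m^2.
    double psi = s2 / (m * m);
    if (psi <= 1.5) {
        // Step 3: quadratic branch, nu = a*(b + Z)^2.
        double invPsi = 1.0 / psi;
        double b2 = 2.0 * invPsi - 1.0
                  + std::sqrt(2.0 * invPsi) * std::sqrt(2.0 * invPsi - 1.0);
        double a = m / (1.0 + b2);
        double b = std::sqrt(b2);
        return a * (b + zNu) * (b + zNu);
    } else {
        // Step 4: exponential branch, driven by U = Phi(Z).
        double p = (psi - 1.0) / (psi + 1.0);
        double beta = 2.0 / (m * (psi + 1.0));
        double u = 0.5 * std::erfc(-zNu / std::sqrt(2.0)); // Phi(zNu)
        return (u <= p) ? 0.0 : std::log((1.0 - p) / (1.0 - u)) / beta;
    }
}
```

With typical parameters (\(\nu\) near \(\theta\), small \(\Delta\)) \(\Psi\) is small and the quadratic branch fires; the exponential branch takes over when the conditional distribution of \(\nu\) concentrates near zero.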

Both branches handle values in different ranges, share a similar calculation process, and only one of them is active at a time. By merging the two branches into one and manually binding the calculations to DSPs, the implementation cuts the DSP cost without changing performance or accuracy.

In Monte Carlo simulation, stock prices of multiple paths must be computed at multiple time steps, so two nested loops calculate prices and volatilities; the inner loop is either the timestep loop or the path loop. The price at each time step is calculated using the last time step's price and volatility as input, and a 1-D array stores each path's price and volatility from the last timestep.

If the inner loop is the timestep loop, as the red arrows in the diagram below demonstrate, it keeps updating the same array element until it reaches the maximum number of timesteps. Such an operation cannot achieve an initiation interval (II) of 1 and greatly slows down the calculation. If the inner loop is the path loop, as the green arrows demonstrate, it updates a different array element on each iteration. This avoids the dependency issue and reaches II=1, which is what this implementation uses.
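The path-inner ordering can be sketched as follows. This is not the library's kernel code; the array sizes, names, and the placeholder update are illustrative, and the HLS pipeline pragma is shown as a comment so the sketch compiles as plain C++:

```cpp
#include <cmath>

const int MAX_PATHS = 4; // illustrative sizes only
const int MAX_STEPS = 3;

// priceLn[p] and vol[p] hold path p's state from the previous timestep.
// With the path loop innermost, iteration p reads and writes only
// element p, so there is no loop-carried dependence and the loop can be
// pipelined with II=1. The update is a placeholder Euler step with a
// fixed increment dw; in the real kernel dw comes from the RNG.
void simulatePaths(double priceLn[MAX_PATHS], double vol[MAX_PATHS],
                   double mu, double dt, double dw) {
TIME_LOOP:
    for (int t = 0; t < MAX_STEPS; ++t) {
    PATH_LOOP:
        for (int p = 0; p < MAX_PATHS; ++p) {
            // In HLS: #pragma HLS pipeline II=1
            priceLn[p] += mu * dt + std::sqrt(std::fabs(vol[p])) * dw;
            // vol[p] update elided; it follows the same access pattern.
        }
    }
}
```

Reversing the loop order would make consecutive iterations read the value the previous iteration just wrote, forcing the pipeline to stall for the full latency of the floating-point update.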

Inner Loop of timesteps and path