These are the strange new models to boldly go for today
This content is optimized for desktop or tablet. It may run too slowly on mobile phones and phablets.
Allow at least 10 Monte Carlo runs for the exposure charts to develop; current run: #.
Yes, these are real-time simulations running directly in your web browser, not animated GIFs.
This chart shows the evolution of an initially ATM 1-year equity-like option
under both the Q- and the P-measure. Under the Q-measure, volatility is 15%; it is used both to
evolve the underlying and to price the option on it. Under the P-measure,
the underlying is evolved with a historical vol of 10%, while the log of implied
volatility is assumed to follow an AR(1) process whose initial value and long-term mean equal 15%,
the Q-measure volatility. Under P, the log vol-of-vol is set to 150%, the mean-reversion
speed to 20%, and the equity/vol driver correlation to -90%. Drifts are 1% under Q and 3% under P.
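For readers who prefer code, here is a minimal sketch of the P-measure dynamics just described, assuming a discretized mean-reverting AR(1) (Ornstein-Uhlenbeck) process for the log implied vol; the variable names (kappa, nu, rho) and the Black-Scholes repricing step are our illustrative choices, not the page's actual implementation.

```python
# Sketch only: P-measure path of the underlying plus AR(1) log implied vol,
# with the option repriced off the *simulated* implied vol at each step.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, sigma, r=0.0):
    """Black-Scholes call price with flat rate r (zero here for simplicity)."""
    if T <= 0:
        return max(S - K, 0.0)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(42)
T, n_steps = 1.0, 252
dt = T / n_steps
mu_P, sigma_hist = 0.03, 0.10          # P-measure drift and historical vol
kappa, nu, rho = 0.20, 1.50, -0.90     # mean-reversion speed, log vol-of-vol, correlation
log_vol_bar = np.log(0.15)             # long-term mean = the Q-measure vol

S, log_vol = 1.0, log_vol_bar          # ATM: strike K = S0 = 1
prices = []
for i in range(n_steps):
    z1, z2 = rng.standard_normal(2)
    z_vol = rho * z1 + np.sqrt(1 - rho**2) * z2   # correlated vol driver
    S *= np.exp((mu_P - 0.5 * sigma_hist**2) * dt + sigma_hist * np.sqrt(dt) * z1)
    log_vol += kappa * (log_vol_bar - log_vol) * dt + nu * np.sqrt(dt) * z_vol
    prices.append(bs_call(S, 1.0, T - (i + 1) * dt, np.exp(log_vol)))
print(f"option value at expiry on this path: {prices[-1]:.4f}")
```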
This chart contrasts a GBM model for an equity-like asset
with a log-GARCH(1,1) model for the same asset's returns. Both models are driven by the same
white noise. The GARCH long-term volatility is set equal to that of the GBM, 15%. The GARCH parameters
are p = 0.3 and q = 0.05. We expect the lines for GBM and GARCH to cross: GARCH volatility
spikes are offset by volatility dips so as to keep the expected volatility equal to that of the GBM. While
the exposures will not differ much, the GARCH model generates "historical stochastic volatility".
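The same idea as a short sketch: two paths drawn from one white-noise sequence, one with constant GBM volatility and one with GARCH(1,1) variance. For brevity we show a standard (not log-) GARCH recursion and assume p is the shock (ARCH) coefficient and q the persistence (GARCH) coefficient; the chart's own implementation may differ.

```python
# Sketch only: GBM vs GARCH(1,1) paths driven by the same white noise.
import numpy as np

rng = np.random.default_rng(7)
n_steps, dt = 252, 1.0 / 252
sigma_bar = 0.15                          # long-term (GBM) volatility, annualized
p, q = 0.30, 0.05                         # GARCH parameters from the text
omega = sigma_bar**2 * dt * (1 - p - q)   # pins long-run daily variance at sigma_bar**2 * dt

z = rng.standard_normal(n_steps)          # the shared white noise
gbm, garch = [1.0], [1.0]
var_t = sigma_bar**2 * dt                 # start GARCH at its long-run daily variance
eps_prev = 0.0
for t in range(n_steps):
    # GBM: constant volatility
    gbm.append(gbm[-1] * np.exp(-0.5 * sigma_bar**2 * dt
                                + sigma_bar * np.sqrt(dt) * z[t]))
    # GARCH(1,1): variance responds to the previous shock
    var_t = omega + p * eps_prev**2 + q * var_t
    eps = np.sqrt(var_t) * z[t]
    garch.append(garch[-1] * np.exp(-0.5 * var_t + eps))
    eps_prev = eps
print(f"GBM: {gbm[-1]:.3f}, GARCH: {garch[-1]:.3f}")
```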
These charts compare two dynamic credit spread models, where credit is driven
by migration between IG and HY dynamics. Migration probabilities are set to 0.95. In the first model, both IG and HY spreads follow log-AR(1) processes,
with AR coefficients of 0.99 and 0.95 for IG and HY respectively and
a residual correlation of 0.9. In the second model,
the spreads follow a joint log-VAR(1) model with a residual correlation of 0. The VAR(1) matrix
entries are set to [[0.98, 0.01], [0.03, 0.96]], such that its eigenvalues match the AR coefficients of the first model.
VAR(1) is a much richer model, allowing finer control over the residual, auto-, and cross-correlations. Toggle "CDS OFF" to
observe only the current AR(1) vs VAR(1) paths, drawn from the same white noise. The CDS spread is assumed to have an N(0, 3bp) idiosyncratic basis to its current driver.
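A minimal sketch of the two specifications, both drawn from the same white noise; the long-run spread levels and the residual scale below are our illustrative assumptions, not the chart's calibration.

```python
# Sketch only: independent-driver AR(1) pair vs joint VAR(1) on demeaned log spreads.
import numpy as np

rng = np.random.default_rng(3)
n_steps = 252
ig_bar, hy_bar = np.log(0.008), np.log(0.040)  # assumed long-run spreads: IG 80bp, HY 400bp
a_ig, a_hy, rho = 0.99, 0.95, 0.9              # model 1: AR coefficients, residual correlation
A = np.array([[0.98, 0.01],
              [0.03, 0.96]])                   # model 2: VAR(1) matrix; eigenvalues 0.99, 0.95
sig_eps = 0.05                                 # residual scale, an assumption for both models

chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
x_ar = np.zeros(2)                             # demeaned log spreads under the AR(1) pair
x_var = np.zeros(2)                            # demeaned log spreads under the VAR(1)
for _ in range(n_steps):
    z = rng.standard_normal(2)                 # the shared white noise
    x_ar = np.array([a_ig, a_hy]) * x_ar + sig_eps * (chol @ z)  # correlated residuals
    x_var = A @ x_var + sig_eps * z            # zero residual correlation
ig, hy = np.exp(np.array([ig_bar, hy_bar]) + x_var)
cds = ig + rng.normal(0.0, 3e-4)               # CDS = driver + N(0, 3bp) idiosyncratic basis
print(f"IG {ig*1e4:.0f}bp, HY {hy*1e4:.0f}bp, CDS {cds*1e4:.0f}bp")
```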
Click on a series' key in the chart legend to toggle that series on/off.
HOW TO SURVIVE A RARE CRISIS?
In the modern world, many derivatives have become first-class tradeables, and they are not
driven entirely by their underlyings.
As such, market risk analysis of portfolios containing derivatives requires joint modelling of
underlyings and derivatives under the P-measure. Such models are essentially dynamic and require simulation of the
time series, not just of the distributions of returns. We can construct such models for a wide range of markets.
Basel III+ and the final edition of the FRTB are great steps forward in risk management. Standardized approaches are now much more comprehensive, while internal
models are still available in the areas where they can be realistically delivered. The overall setup is considerably more internally consistent. The compute requirements
and the amounts of data to handle bring implementations of these methodologies to the boundary of "big data", so clever designs are necessary to avoid crossing that boundary.
We would be glad to share our experience in making such optimal design choices.
Pricing marginal XVA is a corporate finance problem. It is solved in practice by treating
XVA as a hybrid derivative. The model is exposed to a huge number
of unobserved parameters, and jump diffusions are needed to capture realistic wrong-way risk.
Cases of low-leverage counterparties are special, as marginal XVA affects their prior leverage. AAD
optimizes certain parts of the calculation, but bump-and-recalc is still necessary for scenario analysis.
Our experience of delivering XVA solutions is at your disposal.
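As an illustration of the wrong-way risk point (a toy model, not our production setup), here is a small CVA Monte Carlo in which the same jumps that knock the asset down also raise the counterparty's default intensity; every parameter value below is an assumption for demonstration.

```python
# Toy sketch only: jump-diffusion asset whose downward jumps also kick up the
# default intensity, so exposure and default probability rise together (wrong-way risk).
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_steps, T = 20000, 50, 1.0
dt = T / n_steps
sigma, jump_int, jump_size = 0.15, 0.5, -0.10  # diffusion vol, jump intensity, jump in log-price
lam0, lam_kick, recovery = 0.02, 0.03, 0.4     # base hazard, hazard kick per jump, recovery rate

S = np.ones(n_paths)                           # asset, normalized to 1
lam = np.full(n_paths, lam0)                   # default intensity per path
surv = np.ones(n_paths)                        # survival probability so far
cva = 0.0
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    jumps = rng.poisson(jump_int * dt, n_paths)
    S *= np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z + jump_size * jumps)
    lam += lam_kick * jumps                    # the same jump widens credit: wrong-way risk
    pd_step = surv * (1.0 - np.exp(-lam * dt)) # default probability in this step
    surv *= np.exp(-lam * dt)
    exposure = np.maximum(S - 1.0, 0.0)        # e.g. a long forward struck at the initial level
    cva += (1.0 - recovery) * np.mean(exposure * pd_step)
print(f"toy CVA: {cva:.5f}")
```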
The quantitative analysis function in large organizations can no longer be separated into quant and IT.
This is because calculations are implemented as large distributed workflows, and intermediate calculation results need
to be manipulated non-linearly to obtain final results. The architecture and the data model have to go
far beyond in-memory calculation graphs, especially if a global BAU function is taken into consideration. Open-source tools
like Spark or Ignite deployed on the cloud can be used as back ends, as the sketch below illustrates.
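For example, a Spark back end might hold per-scenario MTM values as a distributed dataset and aggregate them non-linearly into an expected positive exposure (EPE) profile; the table layout, column names, and storage paths below are illustrative assumptions.

```python
# Sketch only: non-linear aggregation of intermediate results on a Spark back end.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("epe-profile").getOrCreate()

# Intermediate results: one row per (counterparty, scenario, date) with the netted MTM.
mtm = spark.read.parquet("s3://risk-results/netted_mtm/")  # hypothetical path

epe = (mtm
       .withColumn("pos_exposure", F.greatest(F.col("mtm"), F.lit(0.0)))  # non-linear step first
       .groupBy("counterparty", "date")
       .agg(F.avg("pos_exposure").alias("epe")))                          # then the linear average

epe.write.mode("overwrite").parquet("s3://risk-results/epe_profile/")     # hypothetical path
```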
We can help you deliver such an architecture.