Sometime at the beginning of 2000 I was attending a quantitative conference in Houston. A lecturer was presenting a model for natural gas option prices that had good calibration properties but was transparently statistically unsound. I was an inexperienced quantitative analyst whose first education was in physics, so I could not resist protesting. The lecturer, a well-known academic in the field, did not argue the merits of the case for long. He turned his back on me and, as he walked away, raised his hand in a picturesque gesture and declared: "If you do not want to recalibrate your model then we do not have a common basis for discussion".
This raises an interesting question. Some traded instruments enjoy great liquidity across a wide variety of contract parameters. Why not just use spline interpolation to price them? It would certainly be cost effective. The shortest sufficient (but incomplete) answer is the following: "We cannot produce a good-quality explained PnL without a sound statistical model."
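For concreteness, here is a minimal sketch of that naive approach (all strikes, quotes, and parameters below are hypothetical): fit a cubic spline through quoted implied volatilities and price any strike by reading the volatility off the spline. Nothing in this construction knows, or cares, whether the resulting smile is statistically sound.

    # Naive smile interpolation: a sketch, not a recommendation.
    # All market quotes below are hypothetical.
    import math
    from scipy.interpolate import CubicSpline

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def black_scholes_call(spot, strike, vol, rate, expiry):
        # Standard Black-Scholes call price for a given implied vol.
        d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * expiry) \
             / (vol * math.sqrt(expiry))
        d2 = d1 - vol * math.sqrt(expiry)
        return spot * norm_cdf(d1) - strike * math.exp(-rate * expiry) * norm_cdf(d2)

    # Hypothetical quotes for one expiry: strikes and implied vols.
    strikes = [80.0, 90.0, 100.0, 110.0, 120.0]
    vols = [0.28, 0.24, 0.21, 0.22, 0.25]
    smile = CubicSpline(strikes, vols)

    # "Pricing" an unquoted strike by interpolating the vol.
    k = 95.0
    print(black_scholes_call(spot=100.0, strike=k, vol=float(smile(k)),
                             rate=0.02, expiry=1.0))

The spline reproduces today's quotes perfectly; the trouble only appears tomorrow, when the interpolated parameters move in ways that no statistics constrains.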
Indeed, the explained PnL is the Taylor decomposition of the daily change in the portfolio's value. If the model is statistically unsound, the calibration parameters will swing and, without delta neutrality to those parameters, the explained PnL and the hedging strategy become useless. If the model is statistically sound, the calibration parameters are stable and the noise parameters (those we delta hedge) are as small as they can be. In that case the daily Taylor decomposition (which we call the "explained PnL") actually converges most of the time, and the initial trade, executed at the price derived from the model, does not introduce a consistently bleeding position in the book.
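As a minimal illustration (with notation that is assumed here rather than fixed by the discussion above): for a book whose value is V(t, S, \sigma), the daily explained PnL is the low-order Taylor expansion of the change in V, and the unexplained PnL is whatever the expansion leaves behind:

    \delta V \;\approx\;
    \underbrace{\frac{\partial V}{\partial t}\,\delta t}_{\text{theta}}
    \;+\; \underbrace{\frac{\partial V}{\partial S}\,\delta S}_{\text{delta}}
    \;+\; \underbrace{\frac{1}{2}\,\frac{\partial^2 V}{\partial S^2}\,(\delta S)^2}_{\text{gamma}}
    \;+\; \underbrace{\frac{\partial V}{\partial \sigma}\,\delta\sigma}_{\text{vega}}
    \;+\; \text{residual (unexplained PnL)}.

Swinging calibration parameters show up as a large residual; a sound model keeps the residual small enough that the expansion converges day after day.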
It is important to note that nowhere in the above statements do we need the model to be universal in nature or to price the entire book uniformly with the same model. We do, however, need orthogonal, clearly and uniformly defined sensitivities that we can delta hedge. Using a consistent universal model is sometimes harmful to the purposes of trading and risk management. To anyone with a natural science background such an idea seems absurd. To see that it makes perfect sense, consider the trading of vanilla options. The prices are given by the volatility smile and other observable parameters; there is simply no place for more information. However, if one trades forward-start options, then a model of forward volatility is required. Such a model may reveal that the vol-smile parameters do not follow a Markov process, but the stat-arb strategy that would come from such a discovery may still be prohibited by the cost of hedging against unlikely events. Another way to reach the same conclusion is to point out that a financial derivative formally depends on several market parameters but is constructed, by intent, as a bet on one market parameter; the remaining parameters are hedged with other market instruments. Hence, modelling several derivatives with a single model is a futile attempt, given the competitive nature of the business. One model per derivative and/or trading strategy always works better.
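To illustrate the extra information a forward-start option demands (a sketch under the simplest deterministic-volatility assumption, not a recommendation of a model): given two spot implied volatilities \sigma_1, \sigma_2 for maturities T_1 < T_2, the implied forward volatility between them is

    \sigma_{\mathrm{fwd}}^2(T_1, T_2) \;=\; \frac{\sigma_2^2\,T_2 \;-\; \sigma_1^2\,T_1}{T_2 \;-\; T_1}.

Even this one-line model commits to an assumption about the smile dynamics (here: none at all) that the vanilla quotes alone never force, and the forward-start option is a bet on precisely that assumption.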
To summarize, we would like to separate all quantities of interest into two categories. The first category is random by nature; we should be able to delta hedge against it. The parameters in the second category are not hedgeable, but they are stable. Such a task is hard enough on its own. Hence, we again would like to introduce contract dependency into the statistical modelling: we seek the simplest model that suits the hedging purposes of the contract. The better model is the one with greater stability and smaller noise terms. The pricing derives from the hedging.
A similar discussion may be found in the section (Implementation tools II).