
I. Wavelet calculations.
II. Calculation of approximation spaces in one dimension.
III. Calculation of approximation spaces in one dimension II.
IV. One dimensional problems.
V. Stochastic optimization in one dimension.
1. Review of variational inequalities in maximization case.
2. Penalized problem for mean reverting equation.
3. Impossibility of backward induction.
4. Stochastic optimization over wavelet basis.
VI. Scalar product in N-dimensions.
VII. Wavelet transform of payoff function in N-dimensions.
VIII. Solving N-dimensional PDEs.

Impossibility of backward induction.


This section documents a generic and unsuccessful attempt to use backward induction, introduced in the sections ( Backward induction ) and ( Stochastic optimal control ), in combination with the finite element technique and a wavelet basis.

We consider the equation of the problem ( Penalized mean reverting problem ): MATH

Whatever discretization of the penalty term MATH we choose, we aim to correct $z_{\varepsilon}$ into the area MATH after a few, preferably one, time steps. Why not do it directly? Indeed, we start from a final payoff that satisfies MATH . Thus, the penalty term would be zero. We make one step of penalty-free evolution, apply some kind of procedure that corrects the solution to MATH , and then repeat.
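A minimal sketch of this intended loop, with hypothetical placeholders `evolve_step` and `correct` standing in for the real evolution and correction operators (neither is reproduced here). In this toy version the correction is a componentwise maximum on coefficients, which, as discussed later in this section, is only legitimate when the basis functions are non-negative:

```python
import numpy as np

def evolve_step(c, dt):
    # Placeholder for one penalty-free evolution step (hypothetical dynamics).
    return c * (1.0 - 0.1 * dt)

def correct(c, g):
    # Placeholder correction: enforce c >= g componentwise. This is the
    # operation the section seeks to perform in the wavelet coefficient
    # domain; componentwise max is only valid for non-negative bases.
    return np.maximum(c, g)

g = np.array([0.5, 0.2, 0.0])   # payoff-related lower bound (toy data)
c = np.array([1.0, 0.8, 0.3])   # initial coefficients satisfy c >= g
for _ in range(10):
    c = evolve_step(c, dt=0.01)  # penalty-free evolution
    c = correct(c, g)            # restore the constraint after each step
```

After every step the constraint is restored, so the penalty term never activates.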

We now state more precisely what we intend to accomplish.

We have a solution $v_{0}$ given by a linear combination of wavelet basis functions MATH : MATH We have another collection of non-negative valued functions MATH with the same approximating power: MATH but without a well conditioned Gram matrix and MATH

We introduce the functionals MATH and notations MATH The $f_{p}$ are linear functionals MATH and their action is easily computable: MATH We would like to find a procedure for the modification MATH with two requirements for all $p$ : MATH Let MATH MATH Altogether there are $m$ requirements and $n$ variables $\tilde{c}$ . The requirement $\left( \&\right) $ must be satisfied strictly because this is the area of primary interest: the area of no exercise. We know that this is possible because $c_{0}$ satisfies $\left( \&\right) $ . For each MATH , the equality $\left( \&\right) $ is a hyperplane, and $\left( \&\right) $ taken for all MATH is an intersection of hyperplanes. Since there are more requirements than variables, the existence of an intersection is likely to be numerically unstable. Therefore, we switch to the following problem MATH where $\theta$ is a parameter dictating the strictness of $\left( \&\right) $ : MATH and MATH are scale-dependent factors we intend to manipulate for better stability. The penalty term MATH is introduced for regularization.
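The switched-to problem is a weighted, regularized least squares. A small sketch on synthetic data, assuming a generic form for the weights $\theta$ and a Tikhonov-style regularization toward $c_{0}$ (the exact expressions are in the formulas above and are not repeated here):

```python
import numpy as np

# Toy sizes: m requirements, n variables, m > n as in the text.
rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n))   # matrix of the linear functionals f_p
b = rng.standard_normal(m)        # target values of the requirements
theta = np.ones(m)
theta[:3] = 100.0                 # strict requirements (&) weighted heavily (assumed form)
lam = 1e-6                        # regularization strength (assumed)
c0 = np.zeros(n)                  # coefficients of the current solution

# Minimize  sum_p theta_p * (A c - b)_p^2  +  lam * |c - c0|^2
# via the weighted, regularized normal equations.
W = np.diag(theta)
c_tilde = np.linalg.solve(A.T @ W @ A + lam * np.eye(n),
                          A.T @ W @ b + lam * c0)
```

Raising the weights on the strict rows pushes those requirements toward exact satisfaction at the expense of the rest.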

The function MATH has the form MATH where $A$ is an $m\times n$ matrix MATH and MATH

We proceed to calculate the minimum. Let $e$ be any non-zero vector and let $x$ be a small positive real number; then MATH MATH where the last equality holds at the minimum MATH for any $e$ . Therefore, MATH is recovered by solving MATH The limit MATH is called the "Moore-Penrose pseudoinverse". Calculation of $A^{+}$ is a well developed area. We conclude MATH
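In NumPy the pseudoinverse is available directly as `numpy.linalg.pinv` (computed via the SVD). A quick check on synthetic data that $A^{+}b$ reproduces the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))  # m x n with m > n, full column rank
b = rng.standard_normal(8)

A_pinv = np.linalg.pinv(A)       # Moore-Penrose pseudoinverse via SVD
c = A_pinv @ b                   # least-squares solution of min |A c - b|

# Agrees with the dedicated least-squares solver.
c_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

Both routines regularize through the singular values, which is exactly where an ill-conditioned $A$ makes the answer sensitive.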

The numerical experiment is implemented in the script soBackInd.py located in the directory OTSProjects/python/wavelets2. The procedure is unstable: varying the parameters $\theta$ and MATH produces wildly different results.

We offer the following intuition about the sources of instability. We need non-negative valued functions MATH with the same approximating power: MATH The only way to accomplish this within the wavelet framework, in a situation of an adaptive multiscaled basis, is to take the $\mathcal{R}$ -based scaling functions MATH as the collection MATH . Those are the only non-negative functions in the framework. The functions MATH only have the same approximating power if several of them are used with consecutive indices. The multiscaling feature means that we take $k$ -ranges of MATH for several $d$ . But then the entire collection would be linearly dependent, because by construction of $\phi$ , MATH and this is one source of the instability.
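The linear dependence comes from the refinement relation: each coarse-scale scaling function is exactly a linear combination of its fine-scale translates. A numerical illustration using the hut (hat) function as a stand-in for the $\mathcal{R}$ -based scaling functions; the refinement coefficients $\tfrac{1}{2},1,\tfrac{1}{2}$ are specific to this choice:

```python
import numpy as np

def hut(x):
    # Piecewise-linear hut (hat) scaling function supported on [-1, 1].
    return np.maximum(0.0, 1.0 - np.abs(x))

x = np.linspace(-2.0, 2.0, 1001)
coarse = hut(x)
# Refinement relation: the coarse-scale function is an exact linear
# combination of its fine-scale translates, so a collection that mixes
# both scales is linearly dependent.
fine = 0.5 * hut(2 * x + 1) + hut(2 * x) + 0.5 * hut(2 * x - 1)
```

The two grids of values agree pointwise, confirming the dependence.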

Using a single-scale $d$ collection of MATH almost removes the instability but lacks precision. If we use a fine-scale $d$ collection of MATH then we defeat the purpose of using wavelets: we get a large, poorly conditioned matrix $A$ .

Another source of instability is the operation MATH . We already noted that MATH have good approximating properties when taken together. Here, we take a single scalar product. This is as good (or as bad) as taking a scalar product with a hut function. If we revert to some simple family of MATH , such as hut functions, then we get a large $A$ .

Finally, we cannot replace a single scalar product operation MATH with a projection on a range of MATH because we would be unable to take the maximum MATH component-wise.

Naturally, in the one dimensional situation, one can simply take MATH directly. Indeed, both $v$ and $g$ are piecewise polynomial functions. There are two problems with this approach:

1. We need to recombine the decompositions MATH into piecewise polynomial functions, take the maximum, and decompose again.

2. Such a procedure, although already expensive, has no effective extension to the multidimensional case.
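For illustration only, a toy version of the recombine-maximize-decompose cycle from item 1, using hut functions and synthetic coefficients (the actual basis and decompositions in the text differ):

```python
import numpy as np

def hut_basis(x, centers, h):
    # Columns: translated hut functions, a toy piecewise-linear basis.
    return np.maximum(0.0, 1.0 - np.abs((x[:, None] - centers[None, :]) / h))

x = np.linspace(0.0, 1.0, 201)
centers = np.linspace(0.0, 1.0, 11)
B = hut_basis(x, centers, h=0.1)

c_v = np.sin(np.pi * centers)      # toy coefficients of v
c_g = 0.5 * np.ones_like(centers)  # toy coefficients of g

# 1. Recombine the decompositions into function values and take the maximum.
values = np.maximum(B @ c_v, B @ c_g)
# 2. Decompose again: least-squares projection back onto the basis.
c_max = np.linalg.lstsq(B, values, rcond=None)[0]
```

Even in this toy setting the cycle requires a full reconstruction on a grid and a new projection at every step, which is what makes the approach expensive and hard to extend to several dimensions.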

In the following sections we present a procedure that does not involve taking a maximum and extends smoothly to multiple dimensions.























Copyright 2007