

Review. How does it extend to multiple dimensions?


Let us review. We stop the evolution in time and face the problem of taking $\max\left( z_{0},g\right) $, where $z_{0}$ and $g$ are two known functions. These functions are given by the columns $c$ and $a$: $z_{0}=\sum_{k}c_{k}\psi_{k}$, $g=\sum_{k}a_{k}\phi_{k}$, where $\left\{ \psi_{k}\right\} ,\left\{ \phi_{k}\right\} $ are bases in $L^{2}\left( U\right) $ of manageable size and $U$ is a rectangular domain in $\mathbb{R}^{n}$. We keep the sizes of $\left\{ \psi_{k}\right\} ,\left\{ \phi_{k}\right\} $ manageable by using the sparse tensor product construction. However, in multiple dimensions, $n>1$, we are prohibited from evaluating the sums $\sum_{k}c_{k}\psi_{k}$, $\sum_{k}a_{k}\phi_{k}$ explicitly, because these would have to be described as piecewise polynomials on the straight product $U\times\dots\times U$. Such descriptions would have unmanageable size.
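To make the size gap concrete, here is a small Python sketch. The counting convention (keep only multi-levels with $j_{1}+\dots+j_{n}\leq J$ in the sparse tensor product, versus all $\left( 2^{J}\right) ^{n}$ cells in the straight product) and the function names are illustrative assumptions, not taken from this text.

    from itertools import product

    def full_grid_size(J, n):
        """Degrees of freedom of a piecewise polynomial description
        on the straight product U x ... x U at 1-D resolution 2**J."""
        return (2 ** J) ** n

    def sparse_grid_size(J, n):
        """Rough count of a sparse tensor product basis: keep only
        multi-levels (j1, ..., jn) with j1 + ... + jn <= J, each
        contributing about 2**(j1 + ... + jn) functions."""
        return sum(
            2 ** sum(levels)
            for levels in product(range(J + 1), repeat=n)
            if sum(levels) <= J
        )

    for n in (1, 2, 3):
        print(f"n={n}: full={full_grid_size(10, n):>12,} sparse={sparse_grid_size(10, n):>9,}")

For $J=10$ the straight product in $n=3$ already has about $10^{9}$ degrees of freedom, while the sparse count stays near $10^{5}$; this is the sense in which the explicit sums are prohibited.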

To overcome this difficulty we construct instead an approximation $z_{1}\approx\max\left( z_{0},g\right) $ given directly by a coordinate column in a basis of manageable size.

In the previous sections we constructed a procedure that adaptively approximates $\max\left( z_{0},g\right) $ by a linear combination of probing functions. We observe that for the corrected $z_{1}$ the difference $g-z_{1}$ should not have a positive component. Hence, we adaptively subtract non-negative functions from this difference until we reach the goal.
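The following one-dimensional Python sketch shows the subtraction loop under stated assumptions: hat (pyramid) probing functions on a grid, a greedy rule that accepts the widest hat fitting under the positive part of the difference, and the names hat and correct, all of which are illustrative rather than the exact adaptive procedure of the previous sections.

    import numpy as np

    def hat(x, center, width):
        """Pyramid (hat) probing function with the given center and support width."""
        return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

    def correct(z0, g, x, widths, tol=1e-3, max_steps=500):
        """Greedy correction: starting from z1 = z0, remove the positive part of
        g - z1 by adding non-negative hats to z1 (equivalently, subtracting
        them from the difference)."""
        z1 = z0.copy()
        for _ in range(max_steps):
            d = g - z1                      # goal: d has no positive component
            i = int(np.argmax(d))
            if d[i] <= tol:
                break
            for w in sorted(widths, reverse=True):   # widest hat first: fewer terms
                phi = d[i] * hat(x, x[i], w)
                if np.all(phi <= np.maximum(d, 0.0) + tol):
                    z1 = z1 + phi           # hat fits under the positive part
                    break
            else:
                z1[i] += d[i]               # fallback: pointwise fix at the peak
        return z1

For instance, calling correct(z0, g, x, widths=(0.5, 0.25, 0.1)) with x = np.linspace(0.0, 1.0, 401) removes the positive part of $g-z_{1}$ to within the tolerance while preferring few wide corrections.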

If we used plain pyramid functions in the multidimensional situation, we would have to subtract a great many of them; this would not be efficient. How do we efficiently select a good function to subtract? We can subtract several pyramid functions with non-overlapping supports in parallel, as sketched below. We can also subtract probing functions with good approximation properties. For example, we could use the crudification operator; see the definitions ( Crudification operator ), ( Crudification operator 2 ). The difficulty with taking $\max\left( z,g\right) $ is not the size of the basis but the size of the piecewise polynomial representation. By using the crudification operator we control the size of the representation: we crudify the representations of $z$ and $g$, calculate the crudified version of $\max\left( z,g\right) $, and use it as the starting approximation $z_{1}$ in the procedure of the section ( Mix of backward induction and penalty term approaches II ).
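Subtracting in parallel requires corrections whose supports do not intersect. Here is a minimal sketch of one way to pick them, assuming each candidate hat is summarized by a (violation, center, width) triple; the greedy interval test is an illustration, not a rule from the text.

    def disjoint_supports(candidates):
        """Greedily keep hats with pairwise non-overlapping supports.
        candidates: iterable of (violation, center, width) triples, e.g.
        local maxima of the positive part of g - z1.  The supports
        [center - width, center + width] of the chosen hats are disjoint,
        so the corresponding subtractions commute and can run in parallel."""
        chosen, occupied = [], []
        for v, c, w in sorted(candidates, reverse=True):  # largest violation first
            lo, hi = c - w, c + w
            if all(hi <= a or lo >= b for (a, b) in occupied):
                chosen.append((v, c, w))
                occupied.append((lo, hi))
        return chosen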

The crudification of the coordinate columns is performed as follows. Let $\left\{ \tilde{\psi}_{k}\right\} $ be a relatively small crudified basis and let $\tilde{z}=\sum_{k}\tilde{c}_{k}\tilde{\psi}_{k}$ be the $L^{2}$-orthogonal projection of $z=\sum_{k}c_{k}\psi_{k}$ on the span of $\left\{ \tilde{\psi}_{k}\right\} $. We can efficiently calculate $\tilde{c}$ for any $c$. Indeed, the orthogonality conditions $\left( \tilde{z},\tilde{\psi}_{i}\right) =\left( z,\tilde{\psi}_{i}\right) $ read $\tilde{G}\tilde{c}=Bc$, hence $\tilde{c}=\tilde{G}^{-1}Bc$, where $\tilde{G}_{ij}=\left( \tilde{\psi}_{i},\tilde{\psi}_{j}\right) $, $B_{ij}=\left( \tilde{\psi}_{i},\psi_{j}\right) $ and, similarly, $G_{ij}=\left( \psi_{i},\psi_{j}\right) $. The matrices $G,\tilde{G}$ have the tensor product structure described in the sections ( Solving N-dimensional PDEs ) and ( Reduction to system of linear algebraic equations for Black PDE ).
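In coordinates, crudification is thus a single Gram matrix solve. The sketch below assumes the reconstruction above, $\tilde{G}\tilde{c}=Bc$, and shows how a tensor product Gram matrix can be inverted one dimension at a time; the helpers crudify and solve_kron are hypothetical names.

    import numpy as np

    def crudify(c, G_tilde, B):
        """L2-best coordinates in the crudified basis: solve G~ c~ = B c,
        where G~[i,j] = <psi~_i, psi~_j> and B[i,j] = <psi~_i, psi_j>."""
        return np.linalg.solve(G_tilde, B @ c)

    def solve_kron(factors, rhs):
        """Solve (A1 (x) ... (x) An) x = rhs using only the 1-D factors:
        with row-major vectorization this is one 1-D solve along each axis."""
        X = rhs.reshape([A.shape[0] for A in factors])
        for k, A in enumerate(factors):
            # contract inv(A) with axis k, then put the result axis back in place
            X = np.moveaxis(np.tensordot(np.linalg.inv(A), X, axes=(1, k)), 0, k)
        return X.reshape(-1)

When $\tilde{G}=\tilde{G}_{1}\otimes\dots\otimes\tilde{G}_{n}$ and $B$ factors the same way, one never forms the large matrices: the cost is a sum of one-dimensional solves applied across the coordinate axes instead of a solve of exponential size.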




