I. Wavelet calculations.
II. Calculation of approximation spaces in one dimension.
III. Calculation of approximation spaces in one dimension II.
IV. One dimensional problems.
V. Stochastic optimization in one dimension.
1. Review of variational inequalities in maximization case.
2. Penalized problem for mean reverting equation.
3. Impossibility of backward induction.
4. Stochastic optimization over wavelet basis.
A. Choosing probing functions.
B. Time discretization of penalty term.
C. Implicit formulation of penalty term.
D. Smooth version of penalty term.
E. Solving equation with implicit penalty term.
F. Removing stiffness from penalized equation.
G. Mix of backward induction and penalty term approaches I.
H. Mix of backward induction and penalty term approaches I. Implementation and results.
I. Mix of backward induction and penalty term approaches II.
J. Mix of backward induction and penalty term approaches II. Implementation and results.
K. Review. How does it extend to multiple dimensions?
VI. Scalar product in N-dimensions.
VII. Wavelet transform of payoff function in N-dimensions.
VIII. Solving N-dimensional PDEs.

Mix of backward induction and penalty term approaches I.


We continue the research of the previous section. The procedure has several problems:

1. For a weak penalty term (high $\varepsilon$) the procedure takes several time steps to converge to MATH. For a strong penalty term (low $\varepsilon$) it overshoots at the spatial boundaries of MATH, taking the solution far beyond the desired minimal installation at MATH. The toy sketch after this list illustrates both effects.

2. It picks up and accumulates a discrepancy from the correct solution while converging into MATH.

3. Boundary probing functions introduce a discrepancy beyond the MATH area.
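
To make the first failure mode concrete, here is a toy numeric sketch. It is not the author's model: the scalar equation, the barrier value and every parameter are invented for illustration. Explicit Euler steps on a scalar penalized equation show slow convergence for high $\varepsilon$ and overshooting for low $\varepsilon$.

    import numpy as np

    def penalized_steps(eps, dt=0.1, n_steps=10, b=1.0, x0=0.0):
        # Explicit Euler on x' = (b - x)/eps: a scalar stand-in for a
        # penalty force pulling the solution toward the barrier b.
        x, path = x0, [x0]
        for _ in range(n_steps):
            x = x + dt * (b - x) / eps
            path.append(x)
        return np.array(path)

    # Weak penalty (eps = 1.0): the solution crawls toward the barrier,
    # taking many time steps to converge.
    print(penalized_steps(eps=1.0))
    # Strong penalty (eps = 0.04, so dt/eps > 2): each explicit step
    # overshoots the barrier and the iteration oscillates and diverges --
    # the stiffness-driven overshooting described in item 1.
    print(penalized_steps(eps=0.04))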

Therefore, we propose to stop the advancement in time until we converge into MATH iteratively and with minimal overshoot, using collections of internal probing functions of increasing precision.

Without the time increment, the evolution MATH looks like this: MATH where the vector $a$ is an input parameter, $\theta$ is a control parameter at our disposal, $G$ is a well conditioned matrix and $\Omega$ has the form MATH for some wavelet basis MATH and selection of probing functions MATH.

We exploit the freedom in the selection of probing functions by inserting control parameters MATH: MATH Furthermore, we place the probing functions conservatively within the area MATH to make sure that no support of any MATH intersects with MATH. The precision loss that comes from such a requirement may be mitigated by increasing the scale of the probing functions. The penalty function $\Omega$ now takes the form MATH

We introduce the notations MATH then MATH MATH Let MATH The control parameters MATH are optimized to achieve MATH with the constraints MATH
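
Because the concrete formulas are elided above (the MATH placeholders), the following sketch is purely schematic: it assumes an obstacle-type residual $\min(x-b,0)$ for the penalty and hat-shaped probing functions, both invented for illustration. It shows the two structural points of this construction: the control weights $\theta_{k}$ multiplying each probing term, and the conservative placement of the probing supports strictly inside the area.

    import numpy as np

    def make_probing_functions(lo, hi, n, margin, grid):
        # Hat functions with supports [c - margin, c + margin] for centers
        # c in [lo + 2*margin, hi - 2*margin], so that no support touches
        # the boundary of the area: the conservative placement described
        # in the text. Shrinking margin (raising the scale) recovers
        # precision near the boundary.
        centers = np.linspace(lo + 2 * margin, hi - 2 * margin, n)
        return [np.maximum(0.0, 1.0 - np.abs(grid - c) / margin)
                for c in centers]

    def omega(x, barrier, thetas, gs, grid):
        # Omega = sum_k theta_k * <min(x - barrier, 0), g_k>, with a
        # hypothetical obstacle-type residual standing in for the elided
        # penalty term; thetas are the control parameters.
        dx = grid[1] - grid[0]
        residual = np.minimum(x - barrier, 0.0)
        return sum(th * float(np.sum(residual * g)) * dx
                   for th, g in zip(thetas, gs))

    grid = np.linspace(-1.0, 2.0, 601)
    gs = make_probing_functions(0.0, 1.0, n=5, margin=0.1, grid=grid)
    x = 0.5 * grid                      # a trial solution on the grid
    barrier = np.full_like(grid, 0.25)  # a flat barrier, for illustration
    print(omega(x, barrier, np.ones(len(gs)), gs, grid))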

We perform elementary transformations: MATH and arrive at the problem MATH This is a quadratic optimization problem with linear constraints. To make this evident, we transform the problem to the canonical form: MATH MATH The problem takes the form MATH where MATH MATH MATH The gradients for the problem are MATH
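
A minimal runnable sketch of such a problem, with placeholder data since the true matrices come from the elided canonical-form transformation: minimize $\frac{1}{2}x^{T}Qx+c^{T}x$ subject to $Ax\ge b$, supplying the analytic gradient $Qx+c$ to the solver.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder data: a symmetric positive definite Q and one linear
    # inequality constraint x0 + x1 >= 1 (all values invented).
    Q = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    c = np.array([-1.0, -0.5])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    def f(x):
        return 0.5 * x @ Q @ x + c @ x

    def grad_f(x):
        # Analytic gradient of the quadratic objective: Q x + c.
        return Q @ x + c

    res = minimize(f, x0=np.zeros(2), jac=grad_f, method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: A @ x - b,
                                 "jac": lambda x: A}])
    print(res.x, res.fun)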




