
Method of conjugate directions.


The formula ( Orthogonality of residues ) shows that the steepest descent method sometimes takes steps in the same direction as an earlier step. This is wasteful. We would like a procedure that never repeats a direction. Instead of the search directions $r_{0},r_{1},r_{2},...$ we would like to find a set of linearly independent search directions $d_{0},d_{1},...,d_{n-1}$ . Of course, if the directions $d_{k}$ are chosen badly then the best step length in every direction would be impossible to determine. Since we are minimizing $f\left( x\right) =\frac{1}{2}\left\langle A\left( x-x^{\ast}\right) ,x-x^{\ast}\right\rangle $ , where $x^{\ast}$ is the solution, it makes sense to choose the directions $d_{k}$ orthogonal with respect to the scalar product $\left\langle A\cdot,\cdot\right\rangle $ . This way we can perform an $\left\langle A\cdot,\cdot\right\rangle $ -minimization in every direction $d_{k}$ (see the section ( Method of steepest descent )) and guarantee that the iterate $x_{k}$ remains $\left\langle A\cdot,\cdot\right\rangle $ -optimal with respect to the directions $d_{0},...,d_{k-1}$ of the previous steps. Therefore such a procedure has to hit the solution after $n$ steps or fewer (under the theoretical assumption that numerical errors are absent).

We produce a set of search directions $d_{0},...,d_{n-1}$ by performing the Gram-Schmidt orthogonalization (see the section ( Gram-Schmidt orthogonalization )) with respect to the scalar product $\left\langle A\cdot,\cdot\right\rangle $ and starting from any set of linearly independent vectors $u_{0},...,u_{n-1}$ . Let $d_{0},...,d_{n-1}$ be the result of such orthogonalization: $$\left\langle Ad_{i},d_{j}\right\rangle =0,\quad i\neq j.$$ The following calculations are motivationally similar to the calculations of the section ( Method of steepest descent ). We start from any point $x_{0}$ . At step $k$ we look at the line $$x=x_{k}+\alpha d_{k},\quad\alpha\in\mathbb{R},$$ and seek $$\alpha_{k}=\arg\min_{\alpha}f\left( x_{k}+\alpha d_{k}\right) .$$ Note at this point that changing from $x_{k}$ to $x_{k}+d$ for any vector $d$ that is $\left\langle A\cdot,\cdot\right\rangle $ -orthogonal to $d_{k}$ would not alter the result of the minimization. Hence, if we perform this procedure $n$ times we arrive at a point that is optimal with respect to every vector $d_{0},...,d_{n-1}$ . Therefore, such a point is the solution.
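For illustration, here is a minimal numerical sketch of this Gram-Schmidt procedure in the scalar product $\left\langle A\cdot,\cdot\right\rangle $ . It is Python with numpy; the function name a_orthogonalize, the random positive definite matrix and the choice of the standard basis as the starting linearly independent set are assumptions of the example, not part of the text.

import numpy as np

def a_orthogonalize(A, U):
    # Gram-Schmidt orthogonalization in the <A.,.> scalar product.
    # A: symmetric positive definite n x n matrix.
    # U: columns are linearly independent starting vectors.
    # Returns D whose columns d_i satisfy <A d_i, d_j> = 0 for i != j.
    n = U.shape[1]
    D = U.astype(float).copy()
    for k in range(n):
        for j in range(k):
            # subtract the <A.,.>-projection of column k onto direction j
            beta = (D[:, j] @ A @ D[:, k]) / (D[:, j] @ A @ D[:, j])
            D[:, k] -= beta * D[:, j]
    return D

# small check on a random symmetric positive definite matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)
D = a_orthogonalize(A, np.eye(4))
print(np.round(D.T @ A @ D, 10))   # off-diagonal entries are ~0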

To perform the minimization we calculate the derivative $$\frac{d}{d\alpha}f\left( x_{k}+\alpha d_{k}\right) =\left\langle A\left( x_{k}+\alpha d_{k}-x^{\ast}\right) ,d_{k}\right\rangle .$$ At this point we substitute the formula ( Connection between SLA and minimization ), $Ax^{\ast}=b$ , and $x^{\ast}$ disappears from the calculation: $$\frac{d}{d\alpha}f\left( x_{k}+\alpha d_{k}\right) =\left\langle Ax_{k}-b,d_{k}\right\rangle +\alpha\left\langle Ad_{k},d_{k}\right\rangle =-\left\langle r_{k},d_{k}\right\rangle +\alpha\left\langle Ad_{k},d_{k}\right\rangle ,$$ where $r_{k}=b-Ax_{k}$ is the residue. We equate the derivative to zero and obtain $$\alpha_{k}=\frac{\left\langle r_{k},d_{k}\right\rangle }{\left\langle Ad_{k},d_{k}\right\rangle }.$$ We collect the results: $$x_{k+1}=x_{k}+\alpha_{k}d_{k},\quad\alpha_{k}=\frac{\left\langle r_{k},d_{k}\right\rangle }{\left\langle Ad_{k},d_{k}\right\rangle }.\qquad\left( \#\right) $$ Similarly to the section ( Method of steepest descent ) one can eliminate one matrix multiplication by transforming the equation $\left( \#\right) $ . We multiply it by $-A$ and add $b$ : $$r_{k+1}=r_{k}-\alpha_{k}Ad_{k}.$$
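The step length formula may be checked numerically. The sketch below is only an illustration under stated assumptions (Python with numpy, a random positive definite $A$ , a grid search over $\alpha$ ); it uses $f\left( x\right) =\frac{1}{2}\left\langle A\left( x-x^{\ast}\right) ,x-x^{\ast}\right\rangle $ with $x^{\ast}=A^{-1}b$ as above.

import numpy as np

# check that alpha_k = <r_k, d_k> / <A d_k, d_k> minimizes f along the line x_k + alpha * d_k
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)       # symmetric positive definite
b = rng.standard_normal(4)
x_star = np.linalg.solve(A, b)      # the solution of A x* = b
f = lambda x: 0.5 * (x - x_star) @ A @ (x - x_star)
x_k = rng.standard_normal(4)
d_k = rng.standard_normal(4)
r_k = b - A @ x_k
alpha_k = (r_k @ d_k) / (d_k @ A @ d_k)
grid = np.linspace(alpha_k - 1.0, alpha_k + 1.0, 2001)
alpha_best = grid[np.argmin([f(x_k + a * d_k) for a in grid])]
print(alpha_k, alpha_best)          # agree to the grid resolution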

Algorithm

(Conjugate directions) Let $d_{0},...,d_{n-1}$ be a set of vectors with the property $$\left\langle Ad_{i},d_{j}\right\rangle =0,\quad i\neq j.$$ Take any point $x_{0}$ , calculate $$r_{0}=b-Ax_{0}$$ and iterate for $k=0,...,n-1$ $$\alpha_{k}=\frac{\left\langle r_{k},d_{k}\right\rangle }{\left\langle Ad_{k},d_{k}\right\rangle },\quad x_{k+1}=x_{k}+\alpha_{k}d_{k},\quad r_{k+1}=r_{k}-\alpha_{k}Ad_{k}.$$
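A minimal sketch of this iteration in Python with numpy follows. The function name conjugate_directions is an assumption of the example, and, for the demonstration only, the $\left\langle A\cdot,\cdot\right\rangle $ -orthogonal directions are taken to be the eigenvectors of the symmetric matrix $A$ (they are orthogonal, hence $A$ -orthogonal); in the text the directions come from the Gram-Schmidt procedure.

import numpy as np

def conjugate_directions(A, b, D, x0):
    # A: symmetric positive definite n x n matrix, b: right-hand side,
    # D: columns d_0, ..., d_{n-1} with <A d_i, d_j> = 0 for i != j,
    # x0: starting point.  Returns the final iterate x_n.
    x = x0.astype(float).copy()
    r = b - A @ x                    # r_0
    for k in range(D.shape[1]):
        d = D[:, k]
        Ad = A @ d                   # the only matrix-vector product of the step
        alpha = (r @ d) / (Ad @ d)   # alpha_k = <r_k, d_k> / <A d_k, d_k>
        x = x + alpha * d            # x_{k+1} = x_k + alpha_k d_k
        r = r - alpha * Ad           # r_{k+1} = r_k - alpha_k A d_k
    return x

# usage: the exact solution is reached after n steps (up to round-off)
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)
b = rng.standard_normal(5)
_, D = np.linalg.eigh(A)             # eigenvectors of a symmetric matrix form an A-orthogonal set
x = conjugate_directions(A, b, D, np.zeros(5))
print(np.allclose(A @ x, b))         # True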

Note for future use that $\left. \frac{d}{d\alpha}f\left( x_{k}+\alpha d_{k}\right) \right\vert _{\alpha=\alpha_{k}}=0$ means $\left\langle Ax_{k+1}-b,d_{k}\right\rangle =0$ or $\left\langle r_{k+1},d_{k}\right\rangle =0$ . In fact, $r_{k+1}$ is orthogonal to all directions $d_{0},...,d_{k}$ of the previous steps because, as we noted already, $x_{k+1}$ remains $\left\langle A\cdot,\cdot\right\rangle $ -optimal:

$$\left\langle r_{k+1},d_{i}\right\rangle =0,\quad i=0,...,k.$$ (Orthogonality of residues 2)
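The orthogonality ( Orthogonality of residues 2 ) can also be observed numerically. The short check below repeats the iteration on a small random system and is, again, only an illustration under the same assumptions (Python with numpy, eigenvectors of $A$ as the $A$ -orthogonal directions).

import numpy as np

# verify <r_{k+1}, d_i> = 0 for i <= k on a small random system
rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)
b = rng.standard_normal(5)
_, D = np.linalg.eigh(A)             # an A-orthogonal set of directions
x = np.zeros(5)
r = b - A @ x
for k in range(5):
    d = D[:, k]
    alpha = (r @ d) / (d @ A @ d)
    x = x + alpha * d
    r = r - alpha * (A @ d)
    # the residue after step k is orthogonal to every direction used so far
    print(k, np.max(np.abs(D[:, :k + 1].T @ r)))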






















