Numerical Linear Algebra, Spring 2007

Experimental assignments (lab exercises)

In this course the emphasis is on doing numerical experiments to study the properties of the algorithms treated. We can take advantage of the fact that Matlab contains rather state-of-the-art routines for direct sparse matrix algorithms. Look at the Sparse Matrix functions in the Matlab helpdesk! Some of the routines here are taken from the homepage of the book by Demmel.

I have written a few Matlab functions that will be useful when generating test cases and visualizing results:

  1. delsq3d produces a finite difference approximation to the Laplacian operator over a 3-dimensional block. It is a 3-dimensional counterpart of delsq.
  2. Vector to grid, z=v2g(x,G), spreads the values from the vector x over the grid G which you have earlier generated with numgrid. It is useful if you want to plot the result of your computation using e.g. surf.
  3. x=grid2vector(z,G) is the reverse operation of the previous one. A short usage sketch follows this list.
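To see how these fit together, here is a minimal sketch, assuming the helper signatures given above, that solves a Poisson problem on an L-shaped membrane and plots the result:

    G = numgrid('L', 30);      % number the interior points of an L-shaped region
    A = delsq(G);              % sparse 5-point finite difference Laplacian
    b = ones(size(A,1), 1);    % right hand side of all ones
    x = A \ b;                 % sparse direct solve
    z = v2g(x, G);             % spread the solution vector over the grid G
    surf(z)                    % plot the solution surface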
Warning: Note that we are now working with large sparse matrices and have to be careful not to create any big full matrices by mistake.
  1. Use speye instead of eye, spdiags instead of diag, and A=sparse(m,n) instead of zeros(m,n)!
  2. Avoid using the standard 2-norm for a matrix; replace it by norm(A,1) or normest(A)!
  3. Also keep in mind that all matrices have to be sparse when doing matrix operations like A*B or A+B.
  4. The matrix division operator has to be handled with care. x=A\b solves a system by first reordering the matrix A for sparsity using minimum degree, then doing a sparse Gaussian elimination, and finally a forward and a backward solve. The result is a full vector x, and that is quite OK, but beware of doing something like C=R\A when testing preconditioners: it will give a full matrix C even when we know that the result has to be sparse.
  5. The factorization R=chol(A) may be slower than solving a system x=A\b, since in the former case no pivoting for sparsity is done. A short sketch of these idioms follows this list.
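Here is a short sketch of the idioms above on a small test matrix; the commented-out line shows the trap to avoid:

    A = delsq(numgrid('S', 32));   % sparse test matrix: Laplacian on a square
    I = speye(size(A));            % sparse identity; eye(size(A)) would be full
    nrm = normest(A);              % cheap 2-norm estimate
    b = rand(size(A,1), 1);
    x = A \ b;                     % full vector result: quite OK
    R = chol(A);                   % sparse Cholesky factor, no pivoting for sparsity
    % C = R \ A;                   % beware: gives a big full matrix C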
A note on getting files: In your web browser, use the right mouse button and choose "Save link as". Then you can store the file at a location of your choice. This is important for large and binary files like the .mat files below!

Assignment to Ch 7: Lanczos for eigenvalues

We use the Lanczos algorithm as described in Sections 7.3 and 7.4 of the textbook. Look also at Sections 4.1--4.4, covering Hermitian eigenvalue problems, in the book Eigentemplates.

I have prepared two simple Matlab codes: lanfu.m, which performs full reorthogonalization, and lanso.m, which does selective orthogonalization as it has been developed over the last 25 years by Parlett and coworkers. As of now the routines are just coded as M files; this makes it easier to look at different variables while running, but you have to have certain variables set when you start, and you must give the matrix the name A. Note that the code also works when A is full, but make sure it is represented as a sparse matrix when you run the tests. You will also need the short function normc.m.
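A hedged usage sketch: the matrix must be named A, and jmax (the number of Lanczos steps) is the one parameter mentioned in this text; read the M files themselves for any further variables they need.

    G = numgrid('L', 40);
    A = delsq(G);      % the scripts expect the matrix to be named A
    jmax = 100;        % number of Lanczos steps, see task 1 below
    lanfu              % run the full reorthogonalization version
    lanreport          % residual and orthogonality report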

The purpose is to compare the three different ways of doing reorthogonalization: full, selective, and none. You can easily make a no-reorthogonalization code by putting a comment symbol in front of a few lines of lanfu. Test the orthogonality of the basis vectors v, as well as that of the computed eigenvectors xv. Test also how well the residuals are predicted by the estimated bounds bndv, given by the last elements of the eigenvectors of the tridiagonal matrix. I have an M file lanreport.m which does some of these tests. Note that you need to focus on the interesting parts of the spectrum by using the axis command!
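As a minimal sketch of such tests, assuming v and xv hold the Lanczos basis and the computed eigenvectors as columns:

    lossV  = norm(v'*v   - eye(size(v,2)),  1);   % loss of orthogonality in the basis
    lossXV = norm(xv'*xv - eye(size(xv,2)), 1);   % same for the computed eigenvectors
    semilogy(abs(bndv), 'o')                      % estimated residual bounds, log scale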

1. As a first set of test matrices, use the finite difference Laplacian generated by the Matlab commands numgrid and delsq. If you run Lanczos directly on this one, it takes rather many steps before eigenvalues start to converge; I used jmax=100 or 300 in my tests. This is the case for which selective orthogonalization was developed.
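One way to set up such a test matrix, here a square membrane (whose symmetries matter below):

    G = numgrid('S', 40);   % square membrane
    A = delsq(G);           % sparse Laplacian, ready for the Lanczos scripts
    jmax = 300;             % rather many steps are needed without inverting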

If you use a membrane with symmetries, some of the eigenvalues are double; this will cause moderate trouble for this algorithm. Plot the eigenvectors for those eigenvalues using the special command v2g!
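For example, assuming a double eigenvalue with computed eigenvectors in columns k and k+1 of xv:

    subplot(1,2,1), surf(v2g(xv(:,k),   G))   % one eigenvector of the double pair
    subplot(1,2,2), surf(v2g(xv(:,k+1), G))   % and the other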

How does the variant with no reorthogonalization behave? Do you get good eigenvectors if you disregard that some of them are duplicates? Try Jane Cullum's device of reporting only those vectors that have a nontrivial component in the first row of S, the eigenvector matrix of the tridiagonal T.
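A sketch of Cullum's device, assuming T is the j-by-j Lanczos tridiagonal; the tolerance is my own choice:

    [S, Theta] = eig(full(T));        % eigenpairs of the tridiagonal T
    theta = diag(Theta);              % Ritz values
    keep = abs(S(1,:)) > sqrt(eps);   % nontrivial component in the first row of S
    genuine = theta(keep)             % report only these; the rest are spurious copies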

2 a. If we run Lanczos on a shifted and inverted operator, the convergence will be much faster. In those cases selective orthogonalization reorthogonalizes much too often, and duplicates rush in if you use no reorthogonalization, so full reorthogonalization is the preferred alternative.

Test this on the same Laplacian membrane matrix with shift zero, i.e. take the inverse. By all means avoid forming an explicit inverse; it will be full. Instead, replace the code where A multiplies, r=A*v(:,j);, by an operation with the inverses of the L and U factors, r=U\(L\v(:,j));. Note that the parentheses are important here: Matlab works from left to right, and if you had no parentheses it would start forming the full matrix U\L and go on forever. (A long time ago I used APL, A Programming Language, developed by Iverson at IBM, which works from right to left. That is more natural in this case, and the language still has some devoted enthusiasts.)
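In full, the change amounts to something like this; since the Laplacian is symmetric positive definite you could equally well use its Cholesky factor:

    [L, U] = lu(A);          % sparse LU factorization, done once before the loop
    % inside the Lanczos loop, replace r = A*v(:,j); by
    r = U \ (L \ v(:,j));    % two sparse triangular solves, vectors throughout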

2 b. If you are bored by just running finite difference matrices, try your hand at computing the singular values of the Medline matrix, used by Katarina Blom in her work on information retrieval. Here you are interested in quite many of the largest singular values of X. Make a similar change in the code as in the inverted case, now inserting r=X'*(X*v(:,j));. This case behaves very similarly to the shifted and inverted one.
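That is, with the Medline term-document matrix loaded as X, and theta denoting the converged Ritz values (the variable name is my assumption):

    r = X' * (X * v(:,j));   % inside the loop: never form X'*X explicitly
    sigma = sqrt(theta);     % afterwards: singular values of X from the Ritz values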

Do task 1 and either 2 a or 2 b!