
====== IAM 961 HW 3 ======

**1:** Implement the Classical Gram-Schmidt, Modified Gram-Schmidt, and Householder algorithms for the QR decomposition of a square matrix ''A''. Your functions should return the ''Q'' and ''R'' matrices like this (in Matlab syntax)

<code>
[Q,R] = qr_cgs(A);
[Q,R] = qr_mgs(A);
[Q,R] = qr_house(A);
</code>

Implement a backsubstitution algorithm for solving ''R x = b'', where ''R'' is a square upper-triangular matrix, with function signature

<code>
x = backsub(R,b)
</code>

There is an indeterminacy in the QR decomposition that we'll need to resolve in order to compare different QR algorithms. Namely, you can change the sign of any column of ''Q'' and the sign of the same row of ''R'' without changing the product ''QR''. Write a function ''qrfix'' that makes the appropriate sign changes so that all diagonal elements of ''R'' are nonnegative. The function signature should be like this

<code>
[Q,R] = qrfix(Q,R);
</code>

(Note: production-quality code should check for things like non-square input matrices, linear dependence among the columns of ''A'' and the resulting inability of the core algorithm to produce a new direction ''q'', etc. But feel free to gloss over such issues for this homework. A minimal sketch of one possible ''qrfix'' and ''backsub'' appears after problem 2 below.)

**2:** Investigate the behavior of your three QR algorithms as follows. (Note: it may help to debug your QR codes by first running the steps below on a smaller, well-behaved matrix, say 10 x 10 with a condition number around 100.)

Construct a random 100 x 100 matrix ''A'' with a known QR decomposition and a condition number of approximately 10^7

<code>
m = 100;                 % investigate QR on m x m matrices

[U,tmp] = qr(randn(m));  % get a random unitary U
[V,tmp] = qr(randn(m));  % get a random unitary V

% Construct a set of singular values in the range 10^-3 to 10^4
s = sort(rand(1,m).* 10.^linspace(-3,4,m), 'descend');
S = diag(s);

Atmp = U*S*V';
[tmp, R] = qr(Atmp);     % get R from the QR decomp of Atmp
[Q, tmp] = qr(randn(m)); % get a random unitary Q
[Q,R] = qrfix(Q,R);      % remove sign indeterminacy in Q,R

A = Q*R;                 % assign A = QR, so A has a known QR decomposition
</code>

From this ''A'', construct an ''Ax=b'' problem with a known solution ''x''

<code>
x = randn(m,1);
b = A*x;
</code>

Compute the QR decomposition via Classical Gram-Schmidt and fix the indeterminacy

<code>
[Qc,Rc] = qr_cgs(A);
[Qc,Rc] = qrfix(Qc,Rc);
</code>

Now see how well the computed QR decomp does what it should!

<code>
% How far off is Qc* Qc from the identity?
unitary_error = norm(Qc'*Qc - eye(m))

% How far off is Rc from upper triangularity?
uptrian_error = norm(Rc - triu(Rc))

% How far off is Qc from the known Q?
q_error = norm(Q - Qc)

% How far off is Rc from the known R?
r_error = norm(R - Rc)

% How far off is Qc Rc from A?
qr_error = norm(A - Qc*Rc)
</code>

In particular, look at ''Qc* Qc'' to see exactly where it deviates from the identity

<code>
% View the matrix of inner products of Qc.
% Should be an identity matrix (but won't be!)
imagesc(abs(Qc'*Qc))
colorbar
title('| Q* Q |')
xlabel('i')
ylabel('j')

% View how far off the matrix of inner products of Qc
% is from the identity, on a log scale
imagesc(log10(abs(Qc'*Qc - eye(m))))
caxis([-16 0])
colorbar
title('log10 | Q* Q - I |')
xlabel('i')
ylabel('j')
</code>

**Now, keeping ''A,x,b'' constant, run that same series of calculations using your ''qr_cgs'', ''qr_mgs'', and ''qr_house'' codes for the QR decomposition.** (One way to automate this is sketched below.) Based on your results, how well does each of the three QR algorithms work?
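For reference, here is a minimal sketch of what the two small helper routines from problem 1 are meant to do. This is only one possible approach, not required code; it assumes square inputs and skips the error checking mentioned in the note above.

<code>
function [Q,R] = qrfix(Q,R)
  % Flip signs so that all diagonal elements of R are nonnegative.
  % Negating column j of Q and row j of R leaves the product Q*R unchanged.
  m = size(R,1);
  for j = 1:m
    if R(j,j) < 0
      Q(:,j) = -Q(:,j);
      R(j,:) = -R(j,:);
    end
  end
end
</code>

<code>
function x = backsub(R,b)
  % Solve R x = b for square upper-triangular R by back substitution,
  % working from the last row up.
  m = length(b);
  x = zeros(m,1);
  for i = m:-1:1
    x(i) = (b(i) - R(i,i+1:m)*x(i+1:m)) / R(i,i);
  end
end
</code>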
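One way to run the same series of error calculations for all three algorithms without copying code is to loop over function handles. The sketch below assumes your routines share the ''[Q,R] = f(A)'' calling convention and that ''A'', ''Q'', ''R'', and ''m'' from the construction above are in the workspace; the variable names ''algs'' and ''names'' are just illustrative choices, not part of the assignment.

<code>
algs  = {@qr_cgs, @qr_mgs, @qr_house};  % the three QR routines
names = {'CGS', 'MGS', 'Householder'};

for k = 1:length(algs)
  [Qc,Rc] = algs{k}(A);     % compute the QR decomposition with algorithm k
  [Qc,Rc] = qrfix(Qc,Rc);   % remove the sign indeterminacy

  fprintf('%s:\n', names{k});
  fprintf('  unitary_error = %.2e\n', norm(Qc'*Qc - eye(m)));
  fprintf('  uptrian_error = %.2e\n', norm(Rc - triu(Rc)));
  fprintf('  q_error       = %.2e\n', norm(Q - Qc));
  fprintf('  r_error       = %.2e\n', norm(R - Rc));
  fprintf('  qr_error      = %.2e\n', norm(A - Qc*Rc));
end
</code>

The printed values give you the table of the 5 error measures for the 3 algorithms requested below.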
Make some general observations based on the plots and the computed errors for the three cases. Make a table of the 5 error measures for the 3 algorithms. Keep in mind that errors of order 10^-16 are very good, errors of order 10^-8 are ok, and errors of order 10^0 (one) are very bad, and that the order of magnitude (the exponent of 10) is all that matters.

Hint: Use script files to automate the above calculations. Put the generation of ''A,x,b'' in one script file (or maybe a function) and the rest in another script, so that you can rerun the whole series of calculations for a different QR algorithm just by changing ''qr_cgs'' to ''qr_mgs'' or ''qr_house''. Also, rerun each of the calculations for a few different randomly generated ''A,x,b''.

**3:** For the ambitious: Note that if you rerun the above commands repeatedly for different randomly constructed ''A,x,b'', you might get errors ranging over a couple of orders of magnitude. It's actually best to run the tests repeatedly and take an average of the errors. Put the entire sequence of commands above (minus the plots) in a ''for'' loop over, say, 100 trials, and reduce the dimension of the matrices to, say, ''m=30'' so they run faster. Use a geometric mean rather than an arithmetic mean, i.e.

<code>
q_error = 0;
for n = 1:Ntrials
  ...
  q_error = q_error + log(norm(Q - Qc));  % accumulate the log of the error
end
q_error = exp(q_error/Ntrials)            % geometric mean over the trials
</code>

Put the various calculated geometric-mean errors in a table and base your discussion of the behavior of the three algorithms on the geometric means rather than the particular-case answers of **2**.

**4:** For those seeking world domination: calculate the geometric-mean errors for each algorithm as a function of condition number, and produce a plot for each error type with three lines, one each for CGS, MGS, and Householder, with the condition number ranging from 1 to 10^16.
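A skeleton of such a condition-number sweep might look like the following. It reuses the matrix construction from problem 2, but without the random scaling of the singular values, so that each test matrix hits its target condition number; the choices of ''m'', ''Ntrials'', and the grid ''kappas'' are illustrative, and only one of the five error measures (''q_error'') is tracked here to keep the sketch short.

<code>
% Sketch of the condition-number sweep for problem 4.
% Assumes qr_cgs, qr_mgs, qr_house, and qrfix are on the path.
m       = 30;               % small matrices so the sweep runs quickly (illustrative)
Ntrials = 20;               % trials per condition number (illustrative)
kappas  = 10.^(0:2:16);     % target condition numbers from 1 to 10^16
algs    = {@qr_cgs, @qr_mgs, @qr_house};

qerr = zeros(length(kappas), length(algs));   % geometric-mean q_error only

for i = 1:length(kappas)
  for k = 1:length(algs)
    logsum = 0;
    for n = 1:Ntrials
      % Build A = QR with singular values spanning the target condition number,
      % following the construction from problem 2 (minus the random scaling).
      s = 10.^linspace(-log10(kappas(i)), 0, m);
      [U,tmp] = qr(randn(m));
      [V,tmp] = qr(randn(m));
      [tmp,R] = qr(U*diag(s)*V');
      [Q,tmp] = qr(randn(m));
      [Q,R]   = qrfix(Q,R);
      A = Q*R;

      [Qc,Rc] = algs{k}(A);
      [Qc,Rc] = qrfix(Qc,Rc);
      logsum  = logsum + log(norm(Q - Qc));
    end
    qerr(i,k) = exp(logsum/Ntrials);          % geometric mean over the trials
  end
end

% One error type per figure, one line per algorithm
loglog(kappas, qerr(:,1), 'o-', kappas, qerr(:,2), 's-', kappas, qerr(:,3), '^-')
xlabel('condition number')
ylabel('geometric-mean q error')
legend('CGS', 'MGS', 'Householder')
</code>

Repeating the inner bookkeeping for the other four error measures gives the remaining plots.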
