Thursday, May 19, 2011

Gauss-Seidel


Building on my post ‘Iterative Approaches to Solving Large Systems of Equations’, I’m going to break the matrix A in Ax=b into a lower triangular part and a strictly upper triangular part. So [2 -1; -1 2] will be broken into D = [2 0; -1 2] and R = [0 -1; 0 0]. This technique is a little sneakier than the Jacobi method, in that you never need to take the inverse of D. In other words, we’re not decomposing the matrix this way because D is easily invertible. Instead, the equation D*x_(k+1) + R*x_k = b is easily solved by doing something that looks a lot like back-substitution (forward substitution, strictly, since D is lower triangular). For instance, in the example above, we could write out [2 0; -1 2]*x_(k+1) + [0 -1; 0 0]*x_k = b, or [2 0; -1 2]*x_(k+1) = [0 1; 0 0]*x_k + b. This equation has the nice property that the first entry of x_(k+1), let’s call it x_(k+1)(1) to keep our bastardized matlab syntax, can be solved instantly, just like in back substitution: 2*x_(k+1)(1) = x_k(2) + b(1) in our case. Then that result can be used to solve the next equation, and so on and so forth.
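To make that concrete, here’s a quick sketch of a few sweeps on the 2x2 example, with each row’s scalar solve written out. This is mine, not from anything above; the right-hand side b, the starting guess, and the iteration count are made up purely for illustration:

% One Gauss-Seidel sweep per loop iteration for A = [2 -1; -1 2].
% b and the starting guess are illustrative, not from the post.
b = [1; 5];
x = [0; 0];
for k = 1:10
    x(1) = (x(2) + b(1)) / 2;   % row 1: 2*x_new(1) = x_old(2) + b(1)
    x(2) = (x(1) + b(2)) / 2;   % row 2: 2*x_new(2) = x_new(1) + b(2)
end
x   % approaches [2 -1; -1 2] \ b

I’ve written some code to do this: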

% Finds a solution for Ax = b.
A = [12 3 -5; 1 5 3; 3 7 13];
b = [1; 28; 76];

syms x1 x2 x3;
initial_x = [1 0 1];
curr_x = initial_x;
next_x = zeros(1,3);
errors = zeros(1,3);

% Iteratively update your estimate of the entries of x.
% Note that curr_x and next_x could be the same array; I separate them
%    just so I can display the error.
for k = 1:6
    next_x(1) = solve(A(1,:)*[x1; curr_x(2); curr_x(3)] - b(1));
    next_x(2) = solve(A(2,:)*[next_x(1); x2; curr_x(3)] - b(2));
    next_x(3) = solve(A(3,:)*[next_x(1); next_x(2); x3] - b(3));
    % Approximate relative error of each entry, in percent.
    errors = abs((next_x - curr_x) ./ next_x)*100;
    disp(['Relative errors (%):  ' num2str(errors)]);
    curr_x = next_x;
end
curr_x


The important thing is how I solve the first equation with curr_x, but then I solve the second equation with next_x. I don’t have to wait to solve for all of next_x before I can start using its improved estimates. Although I don’t understand the proof for it, apparently this method converges twice as fast as Jacobi, and by combining curr_x and next_x into a single array, it is much more space efficient.
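Here’s a rough sketch of what that combined version might look like (my own, not the code above), with each equation reorganized to solve for its unknown, so a whole sweep is just a few multiplies, adds, and one divide per row, and a single array x does double duty:

% In-place Gauss-Seidel on the same system: x is overwritten entry by
% entry, so the "new" values are used as soon as they exist.
A = [12 3 -5; 1 5 3; 3 7 13];
b = [1; 28; 76];
x = [1 0 1];   % same starting guess as above
for k = 1:6
    x(1) = (b(1) - A(1,2)*x(2) - A(1,3)*x(3)) / A(1,1);
    x(2) = (b(2) - A(2,1)*x(1) - A(2,3)*x(3)) / A(2,2);
    x(3) = (b(3) - A(3,1)*x(1) - A(3,2)*x(2)) / A(3,3);
end
x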

As an aside, I wrote this matlab code with the symbolic toolbox and its ‘solve’ function rather than just reorganizing the linear equations to solve for the one unknown, as in the sketch above. This is MUCH slower for what should just be a series of adds and a divide, but it makes the code match the equation. I have a lot of trouble understanding code when algebra and optimizations distort the idea to the point where it doesn’t match the paper/book at all. Apparently Mathematica is smarter about this stuff than Matlab is, so I’ll add that to my growing list of reasons to switch.

As another aside, I think it’s kind of interesting that you can track the residual as you iterate by computing b - A*x_k efficiently, and see how far you are from the solution. I think it’s kind of mind bending that you can be cranking away at a problem and know the entire time how far away you are from the real answer. I guess the residual isn’t as informative as the error x - x_k, but it’s just a matrix multiply away!
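To show what I mean, here’s one more sketch (again mine, with an arbitrary stopping tolerance) that runs the same iteration in matrix form, D*x_(k+1) = b - R*x_k, and prints the residual norm after every sweep. Backslash sees that D is lower triangular and does the forward substitution, so D is never inverted:

% Track the residual while iterating.
A = [12 3 -5; 1 5 3; 3 7 13];
b = [1; 28; 76];
D = tril(A);      % lower triangular part, diagonal included
R = triu(A, 1);   % strictly upper triangular part
x = [1; 0; 1];
for k = 1:25
    x = D \ (b - R*x);   % solve D*x_new = b - R*x_old
    r = b - A*x;         % the residual: just a matrix multiply away
    disp(['||b - A*x|| = ' num2str(norm(r))]);
    if norm(r) < 1e-12, break; end
end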
