Thanks, Prof, for such clear and insightful material. I hope you will excuse me for including your SIAM presentation in one of my presentations; it will not go without citation.
Hello Dr. Davis, I understand that when solving for x in Lx = b with a sparse b, you first compute the nonzero pattern of x by finding the reach of b in the graph of L using DFS (done with a call to cs_reach()). Only then do you move on to the numerical computation of x. A similar thing is done for Ux = b (where U is upper triangular) in the cs_spsolve() function by appropriately setting the "lo" flag. Just a small follow-up question on this. Suppose one is now trying to solve U'x = b (where U' is the transpose of U), but has the matrix U stored in compressed-column format. Will it be possible to find the reach of b in this case without actually transposing U? Or does one HAVE TO transpose U to find the reach of b in order to compute the nonzero pattern of x? Apologies for such a long question. Thanks in advance for your reply.
Solving U'x=b where U is stored by column would be the same as solving Lx=b where L is stored by row. That forward solve can be done with a dense x, but a sparse x would be tough. The DFS that computes the reach of b, to get the pattern of x, needs to traverse the outgoing edges of any given node. But if L is stored by row, each row i holds the incoming edges of node i in the graph of L: in this graph, L(i,j) is the edge from j to i. So the storage for this matrix L (or U' for U stored by column) is backwards. I don't think this would work.
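For readers following the thread: the dense solve of U'x = b with U stored by column is straightforward, because column j of U supplies exactly the dot-product coefficients needed for x(j). A minimal sketch of that dense solve (modeled loosely on CSparse's cs_utsolve, but not the actual CSparse code; it assumes the diagonal entry U(j,j) is stored last in each column):

```c
#include <stdio.h>

/* Dense forward solve of U'x = b, with the upper-triangular U stored in
   compressed-column form (Up = column pointers, Ui = row indices,
   Ux = values).  Column j of U holds U(i,j) for i <= j, which are exactly
   the coefficients in x(j) = (b(j) - U(0:j-1,j)'*x(0:j-1)) / U(j,j).
   On input x holds b; on output it holds the solution.
   Assumes the diagonal entry is the last entry in each column. */
static void utsolve_dense (int n, const int *Up, const int *Ui,
                           const double *Ux, double *x)
{
    for (int j = 0 ; j < n ; j++)
    {
        for (int p = Up [j] ; p < Up [j+1] - 1 ; p++)
        {
            x [j] -= Ux [p] * x [Ui [p]] ;   /* subtract off-diagonal terms */
        }
        x [j] /= Ux [Up [j+1] - 1] ;         /* divide by the diagonal U(j,j) */
    }
}
```

The point of the reply above is that this O(n)-per-column access pattern is fine for a dense x, but the column-wise storage gives only incoming edges, so the DFS for a sparse x has nothing to traverse.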
Hello Dr. Davis, At 13:04, you refer to a very subtle point inside the cs_lu() function: how you set all x[i] = 0 for the next iteration without hampering the asymptotic performance of this algorithm (i.e., by NOT putting an O(n) loop to set all x[i] = 0 inside another O(n) loop). This is to ensure that all values in x are equal to 0 before the next time you call cs_spsolve(). What I noticed inside cs_spsolve() is that after you compute the reach of the sparse RHS vector, you explicitly set x[xi[top]] through x[xi[n-1]] to 0 before scattering the values of the sparse RHS vector into x. My question is: do we then actually need to set all the x[i] = 0 in cs_lu() at the end of every kth step of factorization, when we gather the entries into L? Thanks in advance :)
Yes, it's still required, at least part of it, but the reason is very subtle. In cs_lu, I check to see if the diagonal entry can be chosen as a pivot, which is x [col]. The entry x [col] might not be in the pattern xi, and in that case x [col] needs to be zero. But if only x [xi [top...n-1]] is initialized, x[col] could contain garbage. It would be possible to set x[col] to zero before doing the cs_spsolve, I think.
@DocSparse Oh yeah, that makes sense. I forgot about the condition where you check x[col] as a candidate pivot if row "col" is not yet pivotal. Thanks for clarifying that :)
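The trick discussed in this thread, clearing only the pattern entries of the workspace rather than all n of them, can be sketched as follows (an illustration of the idiom only, not the actual cs_lu code; the function name is hypothetical):

```c
#include <stdio.h>

/* Illustrative sketch of the amortized workspace-clearing idiom.
   The dense workspace x is assumed all-zero between uses.  Each step
   scatters a sparse vector (pattern xi[0..nz-1], values v) into x,
   consumes it (here: simply gathers the values back into out[]), and
   then zeroes only those same entries, so the clean-up costs O(nz)
   per step instead of O(n). */
static void scatter_use_clear (int nz, const int *xi, const double *v,
                               double *x, double *out)
{
    for (int p = 0 ; p < nz ; p++) x [xi [p]] = v [p] ;    /* scatter     */
    for (int p = 0 ; p < nz ; p++) out [p] = x [xi [p]] ;  /* gather/use  */
    for (int p = 0 ; p < nz ; p++) x [xi [p]] = 0 ;        /* O(nz) clear */
}
```

The subtle point in the reply above is that this invariant only covers entries in the pattern xi; an entry like x[col] that is read outside the pattern must be zeroed separately.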
Dear Dr. Davis, I want to thank you for uploading this lecture series on YouTube. It is very helpful in understanding the contents of your book. Would it be possible for you to share the scanned version of the example that you used to explain the workings of row_cnt() and cs_leaf() in this lecture? I tried it myself, but the visual explanation you give in this lecture is very intuitive compared to just going through the code, which is what I was doing. I will contact you by email if you cannot share it here. Let me know. Thanks so much :)
@DocSparse Hello Dr. Davis, thank you so much for your reply. I had already downloaded these notes from the Dropbox link that you shared in the description of the first lecture in this course. That zip file does not contain the scanned copy I was referring to. I actually meant the visual example you used in this lecture to show how the ancestral tree gets built in the row_cnt() function (you refer to it at 1:36 in the video). But never mind, you explained it so well in this lecture that I was able to do it on my own. Thank you so much :)
@DocSparse Thank you for checking for it and replying, Dr. Davis. But as I said, I was able to work it out based on your explanation in the lecture. So it's all fine. Thanks :)
Is the following reasoning true? If the graph is pruned to a tree (i.e., the elimination tree), then both DFS and BFS give a topological order; even a post-order walk of the tree gives a topological order (reversed, though).
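One way to check the post-order part of this claim: in a postorder of a tree given by a parent[] array, every node appears before its parent, i.e. it is a reverse topological order of the child-to-parent edges. A small sketch (hypothetical helpers; CSparse's cs_post computes this non-recursively, this recursive version is just for illustration):

```c
#include <stdio.h>

/* Recursive postorder of a forest given by parent[] (a root has
   parent -1).  head[j] is node j's first child and next[i] its next
   sibling, built by scanning parent[] once. */
static void postorder_node (int j, const int *head, const int *next,
                            int *post, int *k)
{
    for (int i = head [j] ; i != -1 ; i = next [i])
        postorder_node (i, head, next, post, k) ;
    post [(*k)++] = j ;          /* node emitted after all its children */
}

static void postorder (int n, const int *parent, int *post,
                       int *head, int *next /* size-n workspace */)
{
    int k = 0 ;
    for (int j = 0 ; j < n ; j++) head [j] = -1 ;
    for (int j = n-1 ; j >= 0 ; j--)         /* build the child lists */
    {
        if (parent [j] == -1) continue ;
        next [j] = head [parent [j]] ;
        head [parent [j]] = j ;
    }
    for (int j = 0 ; j < n ; j++)            /* postorder each root */
        if (parent [j] == -1) postorder_node (j, head, next, post, &k) ;
}
```

Since every tree edge goes from child to parent, "each child before its parent" is exactly a (reversed) topological order of the tree.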
44:05 If you test for overflow after the fact by checking whether the mark value has gone negative, that's undefined behavior and the compiler is allowed to remove the check, in both C and C++. Check against INT_MAX (or std::numeric_limits<int>::max() in C++) instead, before incrementing.
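A sketch of the safe pre-check (the mark-increment helper is hypothetical, not actual CSparse code, which handles wraparound its own way):

```c
#include <limits.h>

/* Signed overflow is undefined behavior in C and C++, so testing
   (mark + 1 < 0) AFTER the addition may be optimized away entirely.
   Instead, compare against INT_MAX BEFORE incrementing; on the rare
   wraparound, reset to a small value (assumption: the caller then
   re-initializes all stored marks). */
static int bump_mark (int mark)
{
    if (mark == INT_MAX) return 1 ;   /* would overflow: wrap safely */
    return mark + 1 ;
}
```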
Thanks Tim, great content. I was wondering if your notation for sparse matrix indexing was correct? Most probably I missed some point. You write "column j is Ai[Ap[j] … Ap[j+1]-1], ditto in Ax", but I don't get the correct answer, because let's say j=1 (first column); then Ap[1] = 0, and Ai[0] is a bad MATLAB index (I guess) :/ Using (Ap(j)+1 : Ap(j+1)) works for Ax and Ai+1. With it you can get the nonzero values from Ax and their row positions from Ai+1 in the original matrix:
Ap: [0, 3, 6, 8, 10]
Ai: [0, 1, 3, 1, 2, 3, 0, 2, 1, 3]
Ax: [4.5, 3.1, 3.5, 2.9, 1.7, 0.4, 3.2, 3.0, 0.9, 1.0]
Example: let's say j = 3.
Ax(Ap(j)+1 : Ap(j+1)) = Ax(Ap(3)+1 : Ap(4)) = Ax(6+1 : 8) = Ax(7:8) = [3.2, 3.0]
Ai(Ap(j)+1 : Ap(j+1))+1 = Ai(7:8)+1 = [0, 2] + 1 = [1, 3]
So column j = 3 has nonzero values [3.2, 3.0], which are in rows [1, 3] of the original matrix.
The first column is j=0, not j=1, so Ap[0]=0. Ap[1] is the index of the start of the *second* column, j=1. For the MATLAB user, and in all m-files, indices are 1-based and the first column is 1. But internally, MATLAB stores all sparse matrices with 0-based indexing; the 1-based indices seen by the user (and in m-files) are converted to 0-based indexing internally.
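In C, with 0-based indexing throughout, the column extraction reads exactly as stated in the lecture. A minimal sketch (not CSparse code; the helper name is made up) using the Ap/Ai/Ax arrays from the question above:

```c
#include <stdio.h>

/* 0-based compressed-column indexing: the entries of column j live in
   Ai [Ap [j] ... Ap [j+1]-1], and likewise in Ax.  Copies the row
   indices and values of column j into rows[] and vals[], returning
   the number of entries in that column. */
static int get_column (const int *Ap, const int *Ai, const double *Ax,
                       int j, int *rows, double *vals)
{
    int nz = 0 ;
    for (int p = Ap [j] ; p < Ap [j+1] ; p++)
    {
        rows [nz] = Ai [p] ;
        vals [nz] = Ax [p] ;
        nz++ ;
    }
    return nz ;
}
```

With the arrays above, j=2 (the third column, 0-based) gives rows {0, 2} and values {3.2, 3.0}, matching the j=3, 1-based MATLAB computation in the question.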
Andrzej Reinke: yes, the quality isn't great. These were recorded by the University of Florida using their equipment. These are from my copy on DVD, and it's the best available format.
I use MATLAB's image-drawing capability. I start with the matrix I'm working on and then create a new matrix that holds an image, coloring it according to whatever I'm doing. It uses the MATLAB 'image' function and the MATLAB 'colormap'. To paint a large block, I have to draw a block of pixels all the same color, so I have to do it all myself: I have my main algorithm that works on the matrix, and then I write another function that uses 'image' and 'colormap' to draw my matrix.
I have started learning about sparse matrices, and your book and your videos are my bread and salt these days. My thesis will be on large-scale hybrid solvers. Could you please suggest an open-source library for hybrid sparse solvers? By hybrid I mean solvers that combine iterative and direct methods.