


# Solution Vectors for Sparse Linear Equation Systems
Our main goal here is to discuss prospective applications based on solving underdetermined systems of linear equations (SLE). We do not attempt a full survey of the current state of theoretical research, which is by now very extensive. Preference is given to describing the hypothetical destination point of these efforts, the current status of the problem, and possible ways to overcome the difficulties on the way to that destination. The discussion also touches on the connections between underdetermined SLE, broader problems of information theory, and the measurement and representation of data.

Solving a sparse system directly typically refers to producing a factorization of the sparse matrix for use in solving linear systems. The thing to keep in mind is that many factorizations (eigenvalue decompositions, the QR decomposition, the SVD, and so on) will generally be dense even when the original matrix is sparse, so computing such a factorization loses all the advantages we had from sparsity. What we really want is a factorization whose factors are also sparse whenever A is sparse, and the factorization where this is easiest to achieve is the LU decomposition. In general the L and U factors will still be denser than A, sometimes much denser, but we can seek a permuted version of A that minimizes the amount of "fill-in" that occurs. This is often done with a "nested dissection" algorithm, which is outside the scope of this course; if you ever need to compute such an ordering explicitly, the METIS package is commonly used. We'll just use the function sla.splu (SParse LU) at a high level, which produces a factorization object that can be used to solve linear systems.
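To make that concrete, here is a small made-up example (assuming the conventional aliases `import scipy.sparse as sparse` and `import scipy.sparse.linalg as sla`) that factors a sparse test matrix once with `sla.splu` and reuses the resulting object to solve a system:

```python
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as sla

n = 1000
# A tridiagonal test matrix (diagonally dominant, so the factorization is safe);
# any sparse CSC matrix would do here.
A = sparse.diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = sla.splu(A)    # factorization object; the L and U factors are stored in sparse form
x = lu.solve(b)     # can be called repeatedly for different right-hand sides

print(np.allclose(A @ x, b))  # True
```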

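Returning to the point about fill-in: the following sketch (an illustrative experiment, not part of the original text) factors a 2D Poisson-type matrix under the natural ordering and under COLAMD, the default column permutation used by `splu`, and compares how many nonzeros the factors contain. A genuine nested-dissection ordering (e.g. computed with METIS) is not shown here.

```python
import scipy.sparse as sparse
import scipy.sparse.linalg as sla

# Standard 2D finite-difference Laplacian built from Kronecker products.
n = 40
T = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
I = sparse.identity(n)
A = (sparse.kron(I, T) + sparse.kron(T, I)).tocsc()  # 1600 x 1600, about 5 nonzeros per row

for spec in ("NATURAL", "COLAMD"):
    lu = sla.splu(A, permc_spec=spec)
    fill = lu.L.nnz + lu.U.nnz
    print(f"{spec:8s}  nnz(A) = {A.nnz:6d}   nnz(L) + nnz(U) = {fill}")
```

On matrices like this, a fill-reducing permutation typically leaves the factors with noticeably fewer nonzeros than the natural ordering.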
In the past decade or two, randomized linear algebra has matured as a topic with many practical applications; to read about the theory, see the 2009 paper by Halko, Martinsson, and Tropp. SciPy does not (currently) have built-in functions for randomized linear algebra (some languages, such as Julia, do), but fortunately these algorithms are very easy to implement without worrying too much about the theory. For simplicity, we'll assume that A is symmetric with distinct eigenvalues, so we can limit the discussion to eigenvectors.

A rank-k approximation of A is an approximation by a sum of k outer products, i.e. by a rank-k matrix. We can obtain the optimal rank-k approximation analytically by computing a full eigenvalue decomposition of A and setting all the eigenvalues outside the largest k (in magnitude) to zero. Again, we don't want to actually compute the full eigenvalue decomposition, so we want an algorithm that avoids it in some provable way.

The basic idea is to approximate the range of the operator A by applying it to a collection of random vectors. That is, we compute A X, where X is a matrix with random entries (think of every column as a random vector). One way to think about the action of A is that it "rotates" these random vectors preferentially toward the top eigenvectors, so if we look at the most important subspace of the span of the image A X (as measured by the SVD), we get a good approximation of the most important eigenspace.

Randomized algorithms come with probabilistic guarantees. The statement is roughly that if the entries of X are i.i.d. sub-Gaussian random variables (you can replace "sub-Gaussian" with "Gaussian"), and if we use k + p random vectors (p a small constant), then we get close to the top k-dimensional eigenspace with high probability. Here "close" depends on something called the spectral gap, and "with high probability" means that, in order not to be close to the desired subspace, you would likely need to keep rerunning the computation with different random numbers for millions or billions of years before you observed the algorithm fail.
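To make the procedure concrete, here is a minimal sketch under the assumptions stated above (A symmetric, Gaussian probe vectors). The helper name `randomized_eigh` and the synthetic test matrix are made up for the illustration; the exact eigendecomposition is computed only to check the answer, which is precisely the expensive step the randomized method is meant to avoid on large problems.

```python
import numpy as np

def randomized_eigh(A, k, p=10, seed=None):
    """Approximate the top-k (in magnitude) eigenpairs of a symmetric matrix A
    using k + p Gaussian random probe vectors."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.standard_normal((n, k + p))     # random probe vectors
    Q, _ = np.linalg.qr(A @ X)              # orthonormal basis for the range of A X
    B = Q.T @ A @ Q                         # small (k+p) x (k+p) projected problem
    w, V = np.linalg.eigh(B)
    idx = np.argsort(np.abs(w))[::-1][:k]   # keep the k largest in magnitude
    return w[idx], Q @ V[:, idx]            # approximate eigenvalues and eigenvectors

# Synthetic symmetric matrix with a known, rapidly decaying spectrum.
rng = np.random.default_rng(0)
n, k = 300, 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
spectrum = np.concatenate([[100.0, 50.0, 25.0, 12.0, 6.0], rng.uniform(0, 0.1, n - 5)])
A = (U * spectrum) @ U.T

w_approx, _ = randomized_eigh(A, k, seed=1)
w_exact = np.sort(np.linalg.eigvalsh(A))[::-1][:k]
print(np.round(w_approx, 4))
print(np.round(w_exact, 4))
```

With a spectral gap as large as the one built into this test matrix, the approximate and exact leading eigenvalues typically agree to several digits.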
