I skimmed the paper a bit, here's a quick summary of some of the terminology:
Urschel’s paper proposes a method for computing the “Fiedler vector” of a graph.
To understand what this means, we need to define a few terms: graph, matrix, vector, eigenvalue, eigenvector, Laplacian, and Fiedler vector.
By “graph”, Urschel does not mean a picture of y = x + 2 or anything of the sort we drew in sixth grade. Rather, a “graph” is a network of nodes. These nodes can be anything: computers, people, animals, whatever. A graph has a finite number of them. The nodes of a graph are also called “vertices”.
Now, some pairs of nodes have an edge between them. The key point here is that these are weighted edges: each edge has a number, its weight, associated with it. Also, there is at most one edge between a pair of nodes; this is an “undirected graph”, as opposed to a “directed graph” (where one can have an edge from A to B and a separate edge from B to A).
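To make this concrete, here is one simple way to store such a weighted undirected graph in code (the node names and weights are made up for illustration):

```python
# A toy weighted undirected graph: a dictionary mapping each edge
# (a pair of node names) to its weight.
edges = {
    ("A", "B"): 2.0,   # edge between A and B, weight 2
    ("B", "C"): 1.0,
    ("A", "C"): 0.5,
}

# The vertices are just the set of all endpoints appearing in the edges.
vertices = sorted({node for edge in edges for node in edge})
print(vertices)   # ['A', 'B', 'C']
```

Since the graph is undirected, the pair ("A", "B") and the pair ("B", "A") would mean the same edge, so we only store it once.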
The second term is “matrix”. A “(square) matrix” is just a square table of numbers, like an Excel spreadsheet that is all-numeric.
A vector is a column of numbers.
The key point about matrices and vectors is that they can be multiplied. A matrix can be multiplied by a vector, as long as the number of columns in the matrix equals the number of entries in the vector (so an n-by-n square matrix multiplies a vector with n entries). I won’t write down the formula for that, but it’s not hard, just simple addition and multiplication.
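In fact, the formula is short enough to show as code. This is a minimal sketch of matrix-times-vector multiplication, with a matrix stored as a list of rows:

```python
def mat_vec(M, v):
    """Multiply a matrix M (a list of rows) by a vector v (a list of numbers)."""
    # Each entry of the result is one row of M "dotted" with v:
    # multiply the entries pairwise, then add them all up.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

M = [[1, 2],
     [3, 4]]
v = [1, 1]
print(mat_vec(M, v))   # [3, 7]
```

The first entry of the result is 1*1 + 2*1 = 3, and the second is 3*1 + 4*1 = 7. That is all matrix-vector multiplication is.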
Sometimes when a matrix is multiplied by a vector, you get a new vector that is just a multiple of the original vector. For example, if every element of the original vector is multiplied by “2” as a result of multiplying the vector by the matrix, then the new vector would be a “multiple” of the original vector.
Whenever this happens, we say that the number the old vector was multiplied by is an “eigenvalue” of the matrix. We also say the original vector is an “eigenvector” for that eigenvalue.
Thus, if M is a matrix and v is a vector, say we write M*v for the result of multiplying M and v. An eigenvalue of M with eigenvector v is a single number, say e, such that
M*v=e*v .
Here, “e*v” means that each element of v is multiplied by the number “e”.
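Here is a tiny check of that definition on a hand-picked example, where v = [1, 1] happens to be an eigenvector of a small matrix with eigenvalue 3:

```python
M = [[2, 1],
     [1, 2]]
v = [1, 1]
e = 3

# M*v, computed entry by entry (each row of M dotted with v).
Mv = [sum(m * x for m, x in zip(row, v)) for row in M]
# e*v, scaling each entry of v by the number e.
ev = [e * x for x in v]
print(Mv, ev)   # [3, 3] [3, 3]
```

Both sides come out to [3, 3], so M*v = e*v holds: multiplying by the matrix did the same thing as multiplying every entry by 3.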
Matrices and vectors are very important in science, and there is a huge area of math, called linear algebra, devoted to their study. The Google search algorithm, for example, is at its core a giant linear algebra computation (its PageRank scores form an eigenvector). Eigenvalues and eigenvectors are the fundamental building blocks for understanding matrices and how they interact.
Now, for hundreds of years the notions of “graph” and “matrix” were essentially independent. But not too long ago, a field known as “algebraic graph theory” became popular. What algebraic graph theory tries to do is associate a matrix with a graph, and to answer questions about the graph by answering questions about the matrix. Since there has been a lot of mathematical study about matrices - much more so than about graphs - this can be useful.
One way of associating a matrix with a graph is called the Laplacian. Given a particular weighted graph with n vertices, its Laplacian matrix L has n rows and n columns, and each row and each column corresponds to a vertex. The entry for a given row and column is the negative of the weight of the edge between the two corresponding vertices (or 0 if there is no edge), and each diagonal entry is the total weight of the edges touching that vertex.
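As a sketch, building the Laplacian from a weighted edge list takes only a few lines (vertices here are just the indices 0 to n-1, and the example weights are made up):

```python
def laplacian(n, weighted_edges):
    """Build the n-by-n Laplacian of a weighted undirected graph.

    weighted_edges maps (i, j) vertex-index pairs to positive weights.
    Off-diagonal entry L[i][j] is minus the edge weight (0 if no edge);
    diagonal entry L[i][i] is the total weight of edges touching vertex i.
    """
    L = [[0.0] * n for _ in range(n)]
    for (i, j), w in weighted_edges.items():
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L

# A triangle graph on vertices 0, 1, 2.
L = laplacian(3, {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 0.5})
print(L)
```

One nice consequence of this convention: every row of a Laplacian sums to 0, which is exactly why 0 is always one of its eigenvalues (with the all-ones vector as eigenvector).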
Now, it turns out (a key observation due to Miroslav Fiedler, for whom the Fiedler vector is named) that the eigenvalues of the Laplacian matrix can be useful in understanding things about the graph, things like how connected it is and so on.
If we have a Laplacian matrix and list all its eigenvalues, we can choose the second smallest one. (All the eigenvalues of a Laplacian are greater than or equal to 0, by the way, the smallest is always exactly 0, and there are only finitely many of them, at most n.)
Anyway, this second smallest eigenvalue, say W, turns out to be important: it is known as the “algebraic connectivity” of the graph, and it is greater than 0 exactly when the graph is connected. And, as above, that eigenvalue W of the Laplacian matrix L has an associated eigenvector, v.
v is the Fiedler vector. This is what the paper is trying to compute.
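For a graph small enough, you can check a Fiedler vector by hand. Take the path graph 0 -- 1 -- 2 with all weights 1; its Laplacian happens to have eigenvalues 0, 1, and 3, so the Fiedler value is 1 (this is just a hand-worked check, not the paper's algorithm):

```python
# Laplacian of the 3-vertex path graph 0 -- 1 -- 2 (all weights 1).
L = [[ 1, -1,  0],
     [-1,  2, -1],
     [ 0, -1,  1]]

# The eigenvalues are 0, 1, 3.  The smallest, 0, goes with the all-ones
# eigenvector; the second smallest, 1, is the Fiedler value, and one of
# its eigenvectors is:
fiedler_vector = [1, 0, -1]

# Verify L * v = 1 * v, i.e. v really is an eigenvector for eigenvalue 1.
Lv = [sum(l * x for l, x in zip(row, fiedler_vector)) for row in L]
print(Lv)   # [1, 0, -1]
```

Notice the signs of the Fiedler vector's entries: vertex 0 is positive, vertex 2 is negative, and vertex 1 sits at 0. That sign pattern is exactly the kind of information people use Fiedler vectors for, splitting a graph into two well-connected halves.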
Now, the way this is computed is a “multigrid method”. Multigrid methods are a family of methods that solve a problem by building a hierarchy of progressively smaller (“coarser”) versions of it: you solve the smallest version first, which is cheap, then use that solution as a starting approximation for the next larger version, refining as you work your way back up the hierarchy.
Urschel’s algorithm finds the Fiedler vector by repeatedly “coarsening” the original graph: merging vertices together to produce a hierarchy of smaller and smaller graphs. It computes the Fiedler vector of the coarsest (smallest) graph, then transfers that solution back up through the hierarchy, refining it at each level until it has the Fiedler vector of the original graph. That hierarchy of coarsened graphs is what makes it a multigrid method.
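Just to give a flavor of what “coarsening” can mean, here is a toy sketch that shrinks a graph by one vertex: contract the heaviest edge, merging its two endpoints and combining any resulting parallel edges. (This is only an illustration of the general idea, not the specific coarsening scheme in the paper.)

```python
def coarsen_once(edges):
    """Merge the two endpoints of the heaviest edge into one new vertex.

    edges maps (u, v) node-name pairs to weights.  Toy illustration only.
    """
    a, b = max(edges, key=edges.get)    # heaviest edge
    merged = a + b                       # name for the new, merged vertex
    new_edges = {}
    for (u, v), w in edges.items():
        u = merged if u in (a, b) else u
        v = merged if v in (a, b) else v
        if u == v:
            continue                     # drop the self-loop from the merge
        key = tuple(sorted((u, v)))
        new_edges[key] = new_edges.get(key, 0) + w  # combine parallel edges
    return new_edges

# Triangle A-B-C; the heaviest edge (A, B) gets contracted.
small = coarsen_once({("A", "B"): 2.0, ("B", "C"): 1.0, ("A", "C"): 0.5})
print(small)   # {('AB', 'C'): 1.5}
```

After one step the triangle becomes a single edge between the merged vertex AB and C, with the two old edges to C combined into one of weight 1.5. Repeating this shrinks the graph toward something trivially small.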
So that’s roughly what he’s doing. He’s giving a way to efficiently compute these Fiedler vectors, which are the eigenvectors corresponding to the second-smallest eigenvalue of the Laplacian of a weighted undirected graph.