
A Cascadic Multigrid Algorithm for Computing the Fiedler Vector of Graph Laplacians


I skimmed the paper a bit, here's a quick summary of some of the terminology:

Urschel’s paper proposes a method for computing the “Fiedler vector” of a graph.


To understand what this means, we need to define a few terms: graph, matrix, vector, eigenvalue, eigenvector, Laplacian, and Fiedler vector.


By “graph”, Urschel does not mean a picture of y=x+2 or something of the sort we did in sixth grade. Rather, a “graph” is a network of nodes. These nodes can be anything: computers, people, animals, whatever. A graph has a finite number of them. The nodes of a graph are also called “vertices”. Each node is thus a vertex.


Now, some nodes have an edge between them. The key point here is that these are weighted edges: each edge has a number, its weight, associated with it. Also, there is at most one edge between a pair of nodes, and edges have no direction; this is an "undirected graph" rather than a "directed graph" (where one can have an edge from A to B and another from B to A).
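(If code helps: here's a tiny sketch in Python of what a weighted undirected graph looks like as plain data. The node names and weights are made up for illustration.)

[CODE]
# A tiny weighted undirected graph: each edge is stored once,
# as a pair of nodes plus a weight. Names and weights are made up.
edges = {
    ("A", "B"): 2.0,   # an edge between A and B with weight 2
    ("B", "C"): 1.0,
    ("A", "C"): 0.5,
    ("C", "D"): 3.0,
}

# The vertices are just the names that appear in some edge.
nodes = sorted({name for pair in edges for name in pair})
print(nodes)  # ['A', 'B', 'C', 'D']
[/CODE]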


The second term is “matrix”. A “(square) matrix” is just a square table of numbers, like an Excel spreadsheet that is all-numeric.


A vector is a column of numbers.


The key point about matrices and vectors is that they can be multiplied. A matrix can be multiplied by a vector (as long as the number of columns in the matrix is the same as the number of entries in the vector). I won't write down the formula for that here, but it's not hard, just simple addition and multiplication.
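For anyone who does want the formula, here it is spelled out in a few lines of Python (toy numbers of my own choosing):

[CODE]
# Multiply an n x n matrix M by a length-n vector v "by hand":
# entry i of the result is the sum over j of M[i][j] * v[j].
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

M = [[2, 0],
     [0, 3]]
v = [1, 4]
print(mat_vec(M, v))  # [2, 12] -- just additions and multiplications
[/CODE]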


Sometimes when a matrix is multiplied by a vector, you get a new vector that is just a multiple of the original vector. For example, if every element of the original vector is multiplied by “2” as a result of multiplying the vector by the matrix, then the new vector would be a “multiple” of the original vector.


Whenever this happens, we say that the number the old vector was multiplied by is an “eigenvalue” of the matrix. We also say the original vector is an “eigenvector” for that eigenvalue.


Thus, if M is a matrix and v is a vector, say we write M*v for the result of multiplying M and v. An eigenvalue of M with eigenvector v is a single number, say e, such that


M*v = e*v.


Here, “e*v” means that each element of v is multiplied by the number “e”.
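You can check M*v = e*v numerically. Here's a quick sketch using Python's numpy library; the matrix is just a toy example I picked:

[CODE]
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh computes all eigenvalues and eigenvectors of a symmetric
# matrix, with the eigenvalues sorted in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(M)
e = eigenvalues[0]       # one eigenvalue of M
v = eigenvectors[:, 0]   # its eigenvector

print(M @ v)  # these two printed vectors should match
print(e * v)  # (up to rounding), since M*v = e*v
[/CODE]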


Matrices and vectors are very important in science, and there is a huge area of math, called linear algebra, devoted to their study. Google's search algorithm (PageRank), for example, is basically one giant eigenvector computation. Eigenvalues and eigenvectors are the fundamental building blocks for understanding matrices and how they behave.


Now, for a long time the notions of “graph” and “matrix” were studied essentially independently. But not too long ago, a field known as “algebraic graph theory” became popular. What algebraic graph theory tries to do is associate a matrix with a graph, and answer questions about the graph by answering questions about the matrix. Since there has been a lot of mathematical study of matrices - much more than of graphs - this can be useful.


One of the ways of associating a matrix with a graph is called the Laplacian. Given a particular weighted graph, the Laplacian construction forms a matrix L from that graph. If the graph has n vertices, then the Laplacian matrix will have n rows and n columns, with each row and each column corresponding to a vertex. Off the diagonal, the entry for a given row and column is minus the weight of the edge between the corresponding vertices (or 0 if there is no edge); on the diagonal, the entry is the total weight of all the edges touching that vertex.
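In code, that construction looks something like this, using the made-up 4-node graph from my earlier sketch:

[CODE]
import numpy as np

nodes = ["A", "B", "C", "D"]
edges = {("A", "B"): 2.0, ("B", "C"): 1.0,
         ("A", "C"): 0.5, ("C", "D"): 3.0}

index = {name: i for i, name in enumerate(nodes)}
L = np.zeros((len(nodes), len(nodes)))

for (a, b), w in edges.items():
    i, j = index[a], index[b]
    L[i, j] -= w   # off-diagonal: minus the edge weight
    L[j, i] -= w   # symmetric, since the graph is undirected
    L[i, i] += w   # diagonal: total weight touching the vertex
    L[j, j] += w

print(L)  # every row sums to zero -- a signature of Laplacians
[/CODE]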


Now, it turns out - Miroslav Fiedler, a Czech mathematician, did much of the pioneering work here, hence the name coming up shortly - that the eigenvalues of the Laplacian matrix can be useful in understanding things about the graph - things like how connected it is and so on.


If we have a Laplacian matrix and list all its eigenvalues, we can choose the second smallest one. Oh, by the way: all the eigenvalues will be greater than or equal to 0, the smallest one is always exactly 0, and there will only be a finite number of them.


Anyway, this second smallest eigenvalue, say W, turns out to be important - it's called the "algebraic connectivity" of the graph, and it's greater than 0 exactly when the graph is connected. And, as above, that eigenvalue W of the Laplacian matrix L has an associated eigenvector, v.


v is the Fiedler vector. This is what the paper is trying to compute.
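Putting it all together: for a small graph, the brute-force way to get the Fiedler vector is a few lines of numpy. (This brute-force route is exactly what becomes too expensive on huge graphs, which is the problem the paper is attacking.)

[CODE]
import numpy as np

# The Laplacian of the made-up 4-node graph from above:
L = np.array([[ 2.5, -2.0, -0.5,  0.0],
              [-2.0,  3.0, -1.0,  0.0],
              [-0.5, -1.0,  4.5, -3.0],
              [ 0.0,  0.0, -3.0,  3.0]])

eigenvalues, eigenvectors = np.linalg.eigh(L)  # ascending order

# eigenvalues[0] is (numerically) 0. The second smallest eigenvalue
# and its eigenvector are the ones we care about.
fiedler_value = eigenvalues[1]       # the algebraic connectivity, W
fiedler_vector = eigenvectors[:, 1]  # the Fiedler vector, v
print(fiedler_value)
print(fiedler_vector)
[/CODE]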


Now, the way this is being computed is a “multigrid method”. Multigrid methods are a family of methods that solve a big problem by building a hierarchy of smaller, coarser versions of it, solving the smallest version, and then using each coarse solution as a cheap starting guess for the next finer level, loosely speaking.


Urschel has an algorithm that finds the Fiedler vector by repeatedly “coarsening” the graph: merging vertices together to produce a sequence of smaller and smaller graphs. He computes the Fiedler vector of the smallest graph, which is cheap, then transfers that answer back up the sequence, refining it at each level, until he arrives at the original graph. It's called a “cascadic” multigrid method because it sweeps only from coarse to fine, never back down.
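Here's a toy sketch of that coarsen-solve-interpolate idea. Fair warning: this is just the flavor of it, not the paper's actual algorithm - a real cascadic multigrid picks which vertices to merge much more carefully and refines the answer at every level on the way back up. Merging vertices {0,1} and {2,3} below is my arbitrary choice.

[CODE]
import numpy as np

def fiedler(L):
    """Brute-force Fiedler vector of a small Laplacian."""
    _, V = np.linalg.eigh(L)
    return V[:, 1]

# Fine-level Laplacian: the made-up 4-node graph from above.
L_fine = np.array([[ 2.5, -2.0, -0.5,  0.0],
                   [-2.0,  3.0, -1.0,  0.0],
                   [-0.5, -1.0,  4.5, -3.0],
                   [ 0.0,  0.0, -3.0,  3.0]])

# Toy coarsening: merge vertices {0,1} and {2,3} into two
# super-vertices. P maps coarse values back to fine vertices.
P = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

L_coarse = P.T @ L_fine @ P   # Laplacian of the merged graph

v = P @ fiedler(L_coarse)     # solve the small problem, interpolate up
v -= v.mean()                 # keep v orthogonal to the all-ones vector
# A real cascadic method would now run a few cheap smoothing
# iterations at this finer level to sharpen v; we stop here.
print(v)
[/CODE]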


So that's roughly what he's doing. He's giving a way to efficiently compute these Fiedler vectors, each of which is the eigenvector corresponding to the second smallest eigenvalue of the Laplacian of a weighted undirected graph.
Well why didn't you say so in the first place. ;)
 
Hmmm. Your experience & attitude finally shows me why I was solicited to play hoops in the men's league when I lived in Sudbury. Neighbor Frank was a schoolteacher in with the crowd playing hoops. Problem was, Frank was a star lacrosse player at Brown and nobody wanted to play against him in b-ball because he lacked hoop skills and played as if it was a contact sport. I was the cannon fodder. Since I had the blocking-back mentality instead of the basketball mind, I noticed no problem and spent 2 decades playing against him in many games. Now I know why I got that role; it wasn't for my hoop skills, although I was a pretty good defensive player and could outrun most everyone downcourt until I couldn't anymore & stopped playing.
That fine story reminds me of another one. When I was in my mid-20s and still very competitive, I used to play basketball at the Huntington YMCA. It was a very good brand of basketball. Initially I was proud that I was usually the first white guy picked. I attributed it to the fact I played good defense and was willing to pass the ball. Later I found out it was because no one wanted to play against me, because I was such a brutal hacker. :D

Perception vs reality is often a painful teacher. ;) Ah well, Phil, 2 old guys swapping stories, I'm sure the rest of the board are riveted to the edge of their seats awaiting the next chapter. :rolleyes: ;)
 
Next post:
Adventures in the Continuing Care Home
 
I skimmed the paper a bit, here's a quick summary of some of the terminology...


Ummmmm.....so....can this Fiedler guy rush the passer?
 
Beat me to it
 
The man is smart. It's too bad he couldn't spread some of his intelligence to the defense...and a head coach who was so incredibly confused by the ineligible receiver switch that crying John called the Pats cheaters in his postgame.

Then again........I was very happy to see them befuddled.:D
 
To Harbaugh's credit, he called the ineligible receiver packages deception. I don't remember him calling it cheating, only that he believed the refs should have allowed him time to substitute. There was no need to allow defensive substitutions because no players were actually changed.
 
2 wins, 2 draws and a loss. Finished 12th.

Current Ranking | Rank as of 2015-04-01 | Percentile
Overall Ranking | 9504 (T) out of 58229 | 83.7

In the U1700 section, which is a really good performance for a first tournament. Most people in their first tournament would go for the U1200 section. I have played in about 15 tournaments and would never do as well as he did in the U1700 section. People familiar with tournament chess know how impressive this guy is.
 
After reading about him prior to last year's draft, I was pretty surprised he didn't end up in Foxboro.
 
No, but he wasn't too bad of a QB for a kid coming from Dartmouth
 
I was hoping the journal was one of our customers (I work for a manuscript submission system); unfortunately, the site is hosted in China. The login screen looked so familiar, I sent the URL to our legal dept :eek:

Hmm, interesting. I was reading that summary of the math and thinking that the description of a graph sounded like it would be applicable to computer networks. Makes this possibly relevant or even potentially valuable stuff. Hope you didn't find China ripping us off again.
 
Hmm, interesting. I was reading that summary of the math and thinking that the description of a graph sounded like it would be applicable to computer networks. Makes this possibly relevant or even potentially valuable stuff. Hope you didn't find China ripping us off again.

I was curious enough that I created an account so I could look at the submission area. The similarities ended pretty quickly. It's a very bare-bones system that looks like a tax form. I think we're safe :)
 
Well, do OLs get fewer concussions than other positions? I would expect so but don't know. It seems like they don't get blindsided the way receivers, running backs, or even linebackers sometimes do.
 
I skimmed the paper a bit, here's a quick summary...
At each stage, the most challenging remaining subproblem is redivided, loosely speaking.

So, he steps through matrices, adding complexity until arriving at the result...
Could this algorithm then be called an Urschel Walker??
 
One of the most accomplished "celebrity / mathematicians" was, without a doubt, a heart-throb of a BUNCH of you guys: Danica McKellar. Aka, Winnie from "The Wonder Years".

Her math scores are off the charts. Graduated summa cum laude from UCLA in math (IIRC). And has done some great work trying to get young girls interested in math. Written a couple of books.

Another athlete mathematician:
Former NCAA gymnastics champion / actress Kiralee Hayashi co-wrote a peer-reviewed mathematics paper on Riemannian manifolds with Fields Medalist Shing-Tung Yau. (From Wikipedia)

http://kiraleehayashi.com/wp-content/uploads/2012/09/fruitoftheloom_site.jpg

S.-T. Yau won the Fields Medal in part for his work on Calabi-Yau manifolds, which play a key role in certain string theories.

If McKellar's math skills are stratospheric (& they are), Hayashi's have to be out of this world.

It's terrific knowing people of this level of accomplishment are scattered amongst us.
 
It's terrific knowing people of this level of accomplishment are scattered amongst us.

But terrifying that some of them could also beat the crap out of us
 

