Lec 12 | MIT 18.06 Linear Algebra, Spring 2005

Uploaded by MIT on 06.05.2009

OK. This is lecture twelve.
We've reached twelve lectures. And this one is more than the others
about applications of linear algebra.
And I'll confess: when I'm giving you examples of the
null space and the row space, I create a little matrix.
You probably see that I just invent that matrix as I'm going.
And I feel a little guilty about it, because the truth is that
real linear algebra uses matrices that come from somewhere.
They're not just randomly invented by the
instructor. They come from applications.
They have a definite structure, and anybody who works with them
uses that structure. I'll just report,
this weekend I was at an event with chemistry professors.
OK, those guys are row reducing matrices, and what matrices
are they working with? Well, their little matrices tell
them how much of each element goes into the --
or each molecule, how many molecules of each go into a
reaction and what comes out. And by row reduction they get a
clearer picture of a complicated reaction.
And this weekend I'm going to a sort of birthday
party at MathWorks. MathWorks is out Route 9
in Natick. That's where MATLAB is created.
It's very, very successful software,
tremendously successful.
And the conference will be about how linear algebra is used.
And so I feel better today to talk about what I think is the most
important model in applied math. And the discrete version is a graph.
So can I draw a graph? Write down the matrix that's
associated with it, and that's a great source of
matrices. You'll see.
So a graph -- to repeat --
has nodes and edges. OK.
And I'm going to write down a graph;
I'm just creating a small graph here.
As I mentioned last time, we would be very interested in the
graph of all websites, or the graph of all telephones,
or the graph of all people in the world.
Here let me take just, maybe, nodes one, two, three --
well, I better put in that edge, and maybe an edge
to a node four, and another edge to node four.
How's that? So there's a graph with four nodes.
So N will be four in my -- N equal four nodes.
And the matrix will have M equal the number -- there'll be a row for
every edge, so I've got one two three four five
edges. So that will be the number of rows.
And to write down the matrix that I
want to study, I need to give a direction to every edge,
so I know a plus and a minus direction.
So I'll just do that with an arrow. Say from one to two,
one to three, two to three, one to four, three to four.
That just tells me, if I have current flowing
on these edges, whether to count it as positive or
negative, according to whether it's with the arrow or against the arrow.
But I just drew those arrows arbitrarily.
OK. The example I'll use -- the words
that I will use will be words like potential, potential difference,
currents. In other words,
I'm thinking of an electrical network.
But that's just one possibility.
My applied math class builds on this example.
It could be a hydraulic network, so we could be doing flow of
water, flow of oil. Other examples:
this could be a structure, like
a design for a bridge or a design for a Buckminster Fuller
dome. Or many other possibilities,
so many. But let's take potentials and
currents as a basic example,
and let me create the matrix that tells you exactly what
the graph tells you. So I'll call it the incidence
matrix, the incidence matrix. OK.
So let me write it down, and you'll see what
its properties are. So every row corresponds to an edge.
I have five rows from five edges, and let me write down again what
this graph looks like. OK, the first edge,
edge one, goes from node one to two. So I'm going to put in a minus one
and a plus one. The columns correspond to nodes one, two, three,
and four -- the four columns. The five rows correspond --
the first row corresponds to edge one.
Edge one leaves node one and goes into node two,
and that -- and it doesn't touch three and four.
Edge two, edge two goes -- oh, I haven't numbered these edges.
I just figured that was probably edge one, but I didn't say so.
Let me take that to be edge one. Let me take this to be edge two.
Let me take this to be edge three. This is edge four.
Oh, I'm discovering -- no, wait a minute.
Did I number that twice? Here's edge four.
And here's edge five. OK?
All right. So, so edge one,
as I said, goes from node one to two. Edge two goes from two to three,
node two to three, so minus one and one in the second and third columns.
Edge three goes from one to three. I'm tempted to stop for a
moment with those three edges. Edges one, two, three --
what do you call the little
subgraph formed by edges one,
two, and three? That's a loop.
The number of loops and the position of the loops
will be crucial. OK.
Actually, here's an interesting point about loops.
If I look at those rows, corresponding to edges one, two, three --
and these guys made a loop --
if I just looked at that much of the matrix, it would be natural for me to
ask: are those rows independent?
And can you tell from looking at that whether they are or are
not independent? Do you see a relation between
those three rows? Yes.
If I add the first row to the second row, I get the third row.
So that's a hint that loops correspond to
linearly dependent rows.
OK, let me complete the incidence matrix.
Number four, edge four is going from node one to
node four. And the fifth edge is going from
node three to node four. OK.
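A small Python sketch, not from the lecture, showing how this incidence matrix can be built from the edge list (nodes are numbered 0 to 3 here instead of 1 to 4):

```python
# Build the incidence matrix of a directed graph: each edge (i, j)
# contributes a row with -1 at the source node and +1 at the target.
def incidence_matrix(edges, n_nodes):
    A = [[0] * n_nodes for _ in edges]
    for row, (i, j) in enumerate(edges):
        A[row][i] = -1  # edge leaves node i
        A[row][j] = +1  # edge enters node j
    return A

# The five edges of this graph: 1->2, 2->3, 1->3, 1->4, 3->4
edges = [(0, 1), (1, 2), (0, 2), (0, 3), (2, 3)]
A = incidence_matrix(edges, 4)
for row in A:
    print(row)
```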
There's my matrix. It came from the five edges
and the four nodes. And if I had a big graph,
I'd have a big matrix. And what questions do
I ask about matrices?
Here's the review now. There's a matrix that
comes from somewhere. If, if it was a big graph,
it would be a large matrix, but a lot of zeros, right?
Because every row only has two non-zeros.
So it's a very sparse matrix.
The number of non-zeros is exactly two times five; it's two M.
Every row only has two non-zeros. And that's a matrix with a lot of structure.
And that was the point I wanted to begin with:
real matrices from genuine problems have structure.
OK. We can ask,
and because of the structure, we can answer, the, the main
questions about matrices. So first question, what
about the null space? So what am I asking if I ask you for
the null space of that matrix?
I'm looking at the columns of the matrix,
four columns, and I'm asking you,
are those columns independent? If the columns are independent,
then what's in the null space? Only the zero vector, right?
The null space contains -- tells us what combinations of the
columns -- it tells us how to combine columns to get zero.
And is there anything in the null space of this matrix
other than just the zero vector? In other words,
are those four columns independent or dependent?
OK. So that's our question.
I don't know if you see the answer.
Let's see -- I guess we could do it properly.
We could solve A X equals zero. So let me solve A X equals zero to
find the null space. OK.
What's A X? Can I put X in here
in little letters? X1, X2, X3, X4 --
it's got four columns. A X now is that matrix times X.
And what do I get for A X? If the camera can keep that matrix
multiplication there, I'll put the answer here.
A X equal -- what's the first component of A X?
Can you take that first row, minus one one zero zero, and
multiply by the X, and of course you get X2 minus X1.
The second row, I get X3 minus X2. From the third row,
I get X3 minus X1. From the fourth row,
I get X4 minus X1. And from the fifth row,
I get X4 minus X3. And I want to know when
is the thing zero. This is my equation,
A X equals zero.
Notice what that matrix A is doing: we've created a matrix
that computes the differences across every edge, the differences
in potential. Let me
even begin to give this interpretation.
I'm going to think of this vector X, which is X1 X2 X3 X4, as the
potentials at the nodes. So I'm introducing a word,
potentials at the nodes. And now if I multiply by A,
I get these five components: X2 minus
X1, et cetera. And what are they?
They're potential differences. That's what A computes.
If I have potentials at the nodes and I multiply by A,
it gives me the potential differences,
the differences in potential, across the edges.
OK. When are those differences all zero?
So I'm looking for the null space. Of course, if all the Xs are
zero then I get zero. That just tells me,
of course, the zero vector is in the null space.
But there's more in the null space.
The columns of A are dependent, right,
because I can find solutions to that equation.
Tell me the null space.
Tell me one vector in the null space, so tell me an X,
it's got four components, and it makes that thing zero.
So what's a good X to do that? One one one one, constant potential.
If the potentials are constant, then all the potential differences
are zero, and that X is in the null space.
What else is in the null space? Yeah,
let me ask you, as always:
give me a basis for the null
space. A basis for the null space
will be just that. That's it.
That's a basis for the null space. The null space is actually one
dimensional, and it's the line of all vectors
through that one. So there's a basis for it,
and here is the whole null space. Any multiple of one one one one,
it's the whole line in four dimensional space.
Do you see that that's the null space?
So the, so the dimension of the null space of A is one.
And there's a basis for it, and there's everything that's in it.
Good. And what does
that mean physically? I mean, what does that mean
in the application, that guy in the null space?
It means that the potentials can only be determined up to a constant.
Potential differences are what make current flow.
That's what makes things happen. It's these potential differences
that will make something move in the, in our network,
between X2- between node two and node one.
Nothing will move if all potentials are the same.
If all potentials are C, C, C, and C, then nothing will move.
So we have this one parameter, this arbitrary constant,
that raises or drops all the potentials.
It's like ranking football teams --
there's a constant in there.
Or looking at temperatures: there's a flow of heat from
higher temperature to lower temperature.
If temperatures are equal there's no flow, and therefore we can
measure temperatures in Celsius,
or we can start at absolute zero.
It's the same arbitrary constant
that was there in calculus. In calculus,
right, when you took the indefinite integral, there was a
plus C, and you had to set a starting point to know
what that C was. So here what often happens is we fix
one of the potentials, like the last one.
So a typical
thing would be to ground that node: to set its potential at zero.
And if we do that, if we fix that potential so it's not
unknown anymore, then that column disappears and we
have three columns, and those three columns are
independent. I'll leave
the column in there, but we'll remember that grounding
a node is the way to get it out. Setting one potential to zero
gives us the base for all potentials. Then we can compute the others.
OK. But now I've talked enough to
ask: what is the rank of the matrix?
What's the rank then?
So we have a five by four matrix. We've located its null space,
one dimensional. How many independent
columns do we have? What's the rank?
It's three. And the first three columns,
or actually any three columns, will be independent.
Any three potentials are independent, good variables.
The fourth potential is not; we need to set it, and typically we
ground that node. OK.
Rank is three. Rank equals three.
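Here is a small elimination routine, not from the lecture, that confirms the rank is three:

```python
# Gaussian elimination over exact fractions; returns the number of
# pivots, which is the rank of the matrix.
from fractions import Fraction

def rank(M):
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # swap the pivot row up
        for i in range(len(M)):
            if i != r and M[i][c] != 0:   # clear the rest of the column
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[-1, 1, 0, 0], [0, -1, 1, 0], [-1, 0, 1, 0], [-1, 0, 0, 1], [0, 0, -1, 1]]
print(rank(A))   # 3, which is n - 1 for this connected graph on 4 nodes
```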
OK. Let's see,
do I want to ask you about the column space?
The column space is all combinations of those columns.
I- I could say more about it and I will.
Let me go to the null space of A transpose,
because the equation A transpose Y equals zero is probably the most
fundamental equation of applied mathematics.
All right, let's talk about that. That, that deserves our attention.
A transpose Y equals zero. Let's -- let me put it on here.
OK. So A transpose Y equals zero.
So I'm, now I'm finding the null space of A transpose.
Oh, and if I ask you its dimension, you could tell me what it is.
What's the dimension of the null space of A transpose?
We now know enough to answer that question.
What's the general formula for the dimension of the null
space of A transpose? Let me even
write out A transpose. This A transpose will
be N by M, right? N by M.
In this case, it'll be four by five. Those columns will become rows.
Minus one zero minus one minus one zero is now the first row.
The second row of the matrix, one minus one and three zeros.
The third column now becomes the third row, zero one
one zero minus one. And the fourth column becomes
the fourth row. OK, good.
There's A transpose. That multiplies Y, Y1
Y2 Y3 Y4 and Y5. OK.
Now you've had time to think about this question.
What's the dimension of the null space, if I set all those -- wow.
Sometime during this semester,
I'll drop one of these erasers behind there.
That's a great moment. There's no recovery.
There are centuries of erasers
back there. OK.
OK, what's the dimension of the null space?
Give me the general formula first in terms
of R and M and N. This is crucial:
we struggled to decide what dimension meant, and
then we figured out what it equaled for an M by N matrix of rank R,
and the answer was M minus R, right? There are
M equal five columns of A transpose. And R of those columns are pivot
columns, because it'll have R pivots.
It has rank R. And M minus R are the free ones now
for A transpose, so that's five minus three,
so that's two. And I would like to find
this null space. I know its dimension.
Now I want to find out a basis for it.
And I want to understand what this equation is.
So let me say what A transpose Y actually represents,
why I'm interested in that equation.
I'll put it down with those old
erasers and continue this. Here's the great picture
of applied mathematics. So let me complete that.
There's a matrix that I'll call C that connects potential differences
to currents. So I'll call these,
these are currents on the edges, Y1 Y2 Y3 Y4 and Y5.
Those are currents on the edges. And this relation between current
and potential difference is Ohm's Law.
This here is Ohm's Law. Ohm's Law says that the current on
an edge is some number times the potential drop.
That number is the conductance of the edge,
one over the resistance. This is the old
relation of current, resistance, and
change in potential. It's a change in potential that
makes some current happen, and it's Ohm's Law that says how
much current happens. OK.
And then the final step of this framework is the equation A
transpose Y equals zero. And that's -- what is that saying?
It has a famous name. It's Kirchhoff's Current Law,
KCL: Kirchhoff's Current Law, A transpose Y equals zero.
So when I go back up with this
blackboard and solve A transpose Y equals zero,
it's this pattern
that I want you to see.
We had rectangular matrices in real applications,
and in those real applications come
A and A transpose. So our four subspaces are exactly
the right things to know about. All right.
Let's know about that null space of A transpose.
Wait a minute, where'd it go? There it is.
OK. OK. Null space of A transpose.
We know what its dimension should be.
Let's find out -- tell me a vector in it.
Now, what am I asking you?
I'm asking you for five currents that satisfy Kirchhoff's Current Law.
So we better understand what that law says.
That law, A transpose Y equals zero --
what does it say in the first row of A transpose?
The first row of A transpose
says minus Y1 minus Y3 minus Y4 is zero.
Where did that equation come from? Let me -- I'll redraw the graph.
Can I redraw the graph here, maybe here, so that
we see it again -- there was node one,
node two, node three, and node four was off here.
That was our graph. We had currents on those edges:
a current Y1 going there,
Y4 here and Y3 here. And then a Y2 and a Y5.
I'm just copying what was on the other board so it's
convenient to see it.
this first equation of Kirchhoff's Current Law?
What does that mean for that graph? Well, I see Y1,
Y3, and Y4 as the currents leaving node one.
So sure enough, the first equation refers to node
one, and what does it say? It says that the net flow is zero.
That equation, A transpose Y equals zero, Kirchhoff's Current Law, is a balance
equation, a conservation law. Physicists would be overjoyed
by this stuff. It says that in equals out.
And in this case, the three arrows are all going out,
so it says Y1, Y3, and Y4 add to zero.
Let's take the next one. The second row is Y1 minus Y2,
and that's all that's in that row. And that must have something
to do with node two. And sure enough,
it says Y1 equals Y2, current in equals current out.
The third one, Y2 plus Y3 minus Y5 equals zero.
That will certainly be what happens
at the third node: Y2 coming in, Y3 coming in, Y5 going out has to
balance. And finally,
Y4 plus Y5 equals zero says that at this node, Y4 plus Y5, the
total flow, is zero. We don't --
you know, charge doesn't accumulate at the nodes.
It travels around. OK.
Now give me -- I come back now to the linear
algebra question. What's a vector Y that solves
these equations? Can I figure out what the null space
is for this matrix, A transpose, by looking at the graph?
I'm happy if I don't have to do
elimination. I can do elimination; we know how to
find the null space basis. We can do elimination on this matrix,
and we'll get it into a good reduced row echelon form,
and the special solutions will pop right out.
But I would like to do it even without that.
Let me just ask you first: if I did elimination on that
matrix, what would the last row become?
If I do elimination on that matrix,
the last row of R will be all zeros, right?
Why? Because the rank is three.
We're only going to have three pivots. And the fourth row will be all zeros
when we eliminate. So elimination will tell us
what we spotted earlier -- all the
information, what the dependencies are.
We'll find those by elimination,
but here in a real example, we can find them by thinking.
OK. Again, my question is,
what is a solution Y? How could current travel around this
network without collecting any charge at the nodes?
Tell me a Y. OK.
So a basis for the null space of A transpose.
How many vectors am I looking for? Two.
It's a two dimensional space. My basis should have
two vectors in it. Give me one.
One set of currents. Let me start it.
Let me start with Y1 as one. OK.
So one amp travels on edge one with the
arrow. OK, then what?
What is Y2? It's one also, right?
And of course what you did was solve Kirchhoff's Current Law quickly in
the second equation. OK.
Now we've got one amp leaving node one, coming around to node three.
What shall we do now? Well, what shall I take
for Y3, in other words? Oh, I've got a choice,
but why not make it what you said: negative one.
So I have just sent one amp of current around that loop.
What shall Y4 and Y5 be in this case?
We could take them to be zero. This satisfies Kirchhoff's
Current Law. We could check it patiently:
minus Y1 minus Y3 gives zero. We know Y1 is Y2.
The others, Y4 plus Y5 is certainly zero.
Any current around a loop satisfies the Current Law.
OK. Now you know how
to get another one: take current around this loop.
So now let Y3 be one, Y5 be one, and Y4 be minus one.
So the first basis vector sent current around
that loop, and the second basis vector sends
current around this loop.
Those are independent, and I've got two solutions,
two vectors in the null space of A transpose,
two solutions to Kirchhoff's Current Law.
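A quick verification, not from the lecture, that both loop vectors satisfy A transpose Y equals zero:

```python
# (A^T y)_j = sum_i A[i][j] * y[i] gives the net current into node j;
# Kirchhoff's Current Law says these four numbers are all zero.
A = [[-1, 1, 0, 0], [0, -1, 1, 0], [-1, 0, 1, 0], [-1, 0, 0, 1], [0, 0, -1, 1]]

def At_y(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

y_loop1 = [1, 1, -1, 0, 0]   # one amp around edges 1, 2, 3
y_loop2 = [0, 0, 1, -1, 1]   # one amp around edges 3, 5, 4
print(At_y(A, y_loop1))      # net current at every node is zero
print(At_y(A, y_loop2))      # same for the second loop
```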
Of course you would say what about sending current around the big loop.
What about that vector? One for Y1,
one for Y2, nothing on Y3, one for Y5, and minus one for Y4.
What about that? Is that, is that in the null
space of A transpose? Sure.
So why don't we now have a third vector in the basis?
Because it's not independent, right? It's not independent.
This vector is the sum of those two. If I send current around that loop and
around that loop, then on this edge, edge three, it's going to
cancel out and I'll have altogether current around the
outside loop. That's what this one is,
but it's a combination of those two. Do you see that I've now
identified the null space of A transpose?
But more than that, we've solved Kirchhoff's Current Law
and understood it in terms of the network.
OK. So that's,
that's the null space of A transpose.
There's always one more space to ask
you about. Let's see,
I guess I need the row space of A, the column space of A transpose.
What's the dimension
of the row space of A?
If I look at the original A, it had five rows.
How many were independent? Oh, I guess I'm asking you the
rank again, right? And the answer is three, right?
Three independent rows. When I transpose it,
there's three independent columns. Are those columns independent,
those three? The first three columns,
are they the pivot columns of the matrix? No.
Those three columns are not independent.
In fact, this tells me a relation between
them. There's a vector in the null space
that says the first column plus the second column equals
the third column.
come from a loop. So the pivot columns,
the pivot columns of this matrix will be the first,
the second, not the third, but the fourth.
One, columns one, two, and four are OK.
Where are they? Those are the columns of A
transpose; those correspond to edges.
So there's edge one, there's edge two, and there's edge
four. So that
is a smaller graph.
If I just look at the part of the graph that I've
drawn with thick edges, it has the
same four nodes. It only has three edges.
And those edges correspond to the independent guys.
And in the graph, those three edges have
no loop, right? The independent ones are the ones
that don't have a loop. All the dependencies
came from loops. They were the things in the null
space of A transpose. If I take three pivot columns,
there are no dependencies among them, and they form a graph without a loop,
and I just want to ask you what's the name for a graph without a loop?
So a graph without a loop has not got very many edges, right?
I've got four nodes and it only has three edges,
and if I put another edge in, I would have a loop.
So it's this graph with no loops, and it's the one where the rows of A
are independent. And what's a graph called
that has no loops? It's called a tree.
So a tree is the name for a graph with no loops.
And just to take one last step here,
using our formula for dimension, let's look once
at this formula: the dimension of the null space of A
transpose is M minus R. OK.
This is the number of loops, number of independent loops.
M is the number of edges. And what is R?
What is R for our matrix? We'll have to remember way back.
The rank came from looking at the columns of
our matrix. So what's the rank?
Let's just remember. The
rank was N minus one; that's what I'm struggling to say.
There were N columns coming from the N nodes,
and the rank is the number of nodes minus one,
because of that one one one one vector in the
null space. The columns were not independent.
There was one dependency, so we got N minus one.
This is a great formula. Shall I
write it slightly differently?
The number of loops is M minus R,
that is, the number of edges minus (the number of nodes minus one).
Let me put the number of nodes on the other side.
The number of nodes, minus the number of edges, plus the number
of loops: the Ms cancel, and minus minus one is one.
The number of nodes minus the number
of edges plus the number of loops is one.
These are like zero dimensional guys.
They're the points on the graph. The edges are like one dimensional
things, they're, they connect nodes. The loops are like two
dimensional things. They have like a, a, an area.
And this count works for every graph.
And it's known as Euler's Formula. We see Euler again, that
guy never stopped. OK.
And can we just check it? What am I saying?
I'm saying that linear algebra proves Euler's Formula.
Euler's Formula is this great topology fact about any graph.
Let me draw another graph,
a graph with more edges and loops.
Let me put in lots of edges -- OK. I just drew a graph there.
So what are the quantities in that
formula? How many nodes have I got? Looks like five.
How many edges have I got? One two three four five six seven.
How many loops have I got? One two three.
And Euler's right; I always get one. This formula is
extremely useful in understanding the relation of these
quantities: the number of nodes,
the number of edges, and the number of loops.
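The count can be checked in a line or two of Python (not from the lecture); the number of independent loops is M minus R, with R equal to N minus one for a connected graph:

```python
# nodes - edges + loops = 1, since loops = edges - (nodes - 1).
def independent_loops(n_nodes, n_edges):
    return n_edges - (n_nodes - 1)   # m - r, with rank r = n - 1

print(4 - 5 + independent_loops(4, 5))   # the lecture's first graph
print(5 - 7 + independent_loops(5, 7))   # the graph just drawn
```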
OK. Let me just complete this lecture by
completing this picture, this cycle.
This expresses the equations of applied math.
Let me call these potential differences, say, E.
So E is A X.
That's the equation for this step. The currents come from the
potential differences: Y is C E.
The currents satisfy Kirchhoff's
Current Law. Those are the equations
with no source terms. Those are the
most basic three equations of
electrical circuits. Applied math comes
in this structure.
The only thing I haven't got yet in
the picture is an outside source to make something happen.
I could add external
currents going in and out of nodes, or I could add batteries in the edges.
Those are two ways. If I add batteries in the edges,
they come in here. Let me add current sources.
If I add current sources, those come in here.
So that's where current sources go,
because F is like a current coming from outside.
So we have our graph, and then I send
one amp into this node and out of this node,
and that gives me a right-hand side in Kirchhoff's
Current Law. To complete the lecture,
I'm just going to put these three
equations together. So I start with X, my unknown.
I multiply by A. That gives me the potential
differences. That was our matrix A that the whole
thing started with. I multiply by C.
Those are the physical constants in Ohm's Law.
Now I have Y. I multiply Y by A transpose,
and now I have F. So there is the whole thing.
There's the, there's the basic equation of
applied math. Coming from these three steps,
in which the last step is this balance equation.
There's always a balance equation to look for.
When I say the most
basic equations of applied mathematics, I should
say: in equilibrium. Time isn't in this problem,
and Newton's Law isn't acting here.
I'm looking at the equations when everything has
settled down -- how the currents distribute in
the network. And of course there are big
codes to solve this; it's the basic problem of
numerical linear algebra for systems of equations, because
that's how they come. And my final question:
What can you tell me about this matrix A transpose C A?
Or even A transpose A? I'll just close with that question.
What do you know about the matrix A transpose A?
It is always symmetric, right. OK, thanks.
So I'll see you Wednesday for a full review of these chapters,
and Friday you get to tell me. Thanks.