Rec 1 | MIT 18.085 Computational Science and Engineering I, Fall 2008

Uploaded by MIT on 25.02.2009


The following content is provided under a Creative
Commons license.
Your support will help MIT OpenCourseWare continue to
offer high quality educational resources for free.
To make a donation, or to view additional materials from
hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR STRANG: This is the one and only review, you could
say, of linear algebra.
I just think linear algebra is very important.
You may have got that idea.
And my website even has a little essay called
Too Much Calculus.
Because I think it's crazy that pretty much all the U.S. universities do this: you get semester after semester of differential calculus, integral calculus, ultimately differential equations.
You run out of steam before the good stuff, before
you run out of time.
And anybody who computes, who's living in the real world
is using linear algebra.
You're taking a differential equation, you're taking your
model, making it discrete and computing with matrices.
The world's digital now, not analog.
I hope it's OK to start the course with linear algebra.
But many engineering curricula don't fully recognize that, and so if you haven't had an official course in linear algebra, stay with 18.085.
This is a good way to learn it.
You're sort of learning what's important.
So my review would be-- and then this, future Wednesdays
will be in our regular room for homework, review, questions of
all kinds, and today questions too.
Shall I just fire away for the first half of the time to give you a sense of how I see the subject, or at least as much as fits within that limited time?
And then questions are totally welcome.
Always welcome, actually.
So I'll just start right up.
So essentially linear algebra progresses starting with
vectors to matrices and then finally to subspaces.
So that's, like, the abstraction.
You could say abstraction, but it's not difficult,
that you want to see.
Until you see the idea of a subspace, you haven't
really got linear algebra.
Okay, so I'll start at the beginning.
What do you do with vectors?
Answer: you take their linear combinations.
That's what you can do with a vector.
You can multiply it by a number and you can add or subtract.
So that's the key operation.
Suppose I have vectors u, v and w.
Let me take three of them.
So I can take their combinations.
So some combination will be, say some number times u plus
some number times v plus some number times w.
So these numbers are called scalars.
So these would be called scalars.
And the whole thing is a linear combination.
Let me abbreviate those words, linear combination.
And you get some answer, say b.
But let's put it down, make this whole discussion specific.
Yeah, I started a little early, I think.
I'm going to take three vectors; u, v and w and
take their combinations.
They're carefully chosen.
My u is going to be <1, -1, 0>.
And I'll take vectors in three dimensions.
So that means their combinations will be in
three dimensions, R^3, three-dimensional space.
So that'll be u and then v, let's take zero, I think,
one and minus one.
Suppose I stopped there and took their linear combinations.
It's very helpful to see a picture in
three-dimensional space.
I mean the great thing about linear algebra, it moves into
n-dimensional space, 10-dimensional,
100-dimensional, where we can't visualize, but yet, our
instinct is right if we just follow.
So what's your instinct if I took those two vectors, and
notice they're not on the same line, one isn't a multiple of
the other, they go in different directions.
If I took their combinations, say x_1*u+x_2*v.

Oh now, let me push, this is a serious question.
If I took all their combinations.
So let me try to draw a little bit.
I'm in three-dimensional space and u goes somewhere,
maybe there and v goes somewhere, maybe here.
Now suppose I take all the combinations, so I could
multiply that first guy by any number, that
would fill the line.
I can multiply that second guy, v.
So this was u and this was v.
I can multiply that by any number x_2, that
would fill its line.
Each of those lines I would later call a one-dimensional
subspace, just a line.
But now, what happens if I take all combinations of the two?
What do you think?
You got a plane.
Get a plane.
If I take anything on this line and anything on this line and
add them up you can see that I'm not going to fill 3-D.
But I'm going to fill a plane and that maybe
takes a little thinking.
It just, then it becomes sort of, you see that that's
what it has to be.
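If you want to check that statement numerically, here is a small NumPy sketch (u and v are the vectors of this example, read off from the difference pattern that follows; the random coefficients are my own): every combination x_1*u + x_2*v has components adding to zero, so the combinations fill one plane inside R^3, never all of it.

```python
import numpy as np

# u and v as in this example (read off from the difference matrix
# described in the lecture); the coefficients x1, x2 are made up.
u = np.array([1, -1, 0])
v = np.array([0, 1, -1])

rng = np.random.default_rng(0)
for _ in range(5):
    x1, x2 = rng.standard_normal(2)
    b = x1 * u + x2 * v
    # Each combination has components summing to zero:
    # it stays on one plane inside R^3.
    print(b, round(b.sum(), 12))
```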
Ok, now I'm going to have a third vector.
Ok, my third vector will be <0, 0, 1>.
Ok, so that is zero in the x, zero in the y and
one in the z direction.
So there's w.
Now I want to take their combinations.
So let me do that very specifically.
How do I take combinations?
This is important.
Seems it's very simple, but important.
I like to think of taking the combinations of some vectors,
I'm always putting vectors into the columns of a matrix.
So now I'm going to move to step two; matrix.
I'm going to move to step two and maybe I'll put it-- well
not, I better put it here.
Ok, step two is the matrix has those vectors in its columns.
So in this case, it's three by three.
That's my matrix and I'm going to call it A.
How do I take combinations of vectors?
I should have maybe done it in detail here, but I'll just
do it with a matrix here.
Watch this now.
If I multiply A by the vector of x's, what that does, so this
is now A times x, so very important, a matrix
times a vector.
What does it do?
The output is just what I want.
This is the output.
It takes x_1 times the first column plus x_2 times
the second plus x_3 times the third.
That's the way matrix multiplication
works; by columns.
And you don't always see that.
Because what do you see?
You probably know how to multiply that matrix
by that vector.
Let me ask you to do it.
What do you get?
So everybody does it a component at a time.
So what's the first component of the answer? x_1, yeah.
How do you get that?
It's row times the vector.
And when I say "times", I really mean that dot product.
This plus this plus this is x_1.
And what about the second row?
Or I'll just say x_2-x_1.
And the third guy, the third component would
be x_3-x_2, right?
So right away I'm going to say, I'm going to call this matrix
A a difference matrix.
It always helps to give names to things.
So this A is a difference matrix because it takes
differences of the x's.
And I would even say a first difference matrix because it's
just the straightforward difference and we'll see second
differences in class Friday.
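The two ways of reading the multiplication can be sketched like this (the entries of A are reconstructed from the differences x_1, x_2 - x_1, x_3 - x_2 just described; the sample x is made up):

```python
import numpy as np

# The first difference matrix: columns u, v, w
A = np.array([[ 1,  0, 0],
              [-1,  1, 0],
              [ 0, -1, 1]])
x = np.array([2.0, 5.0, 9.0])   # a made-up input

# Row picture: each component is a row dotted with x -> differences
by_rows = np.array([x[0], x[1] - x[0], x[2] - x[1]])

# Column picture: a combination of the columns
by_cols = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(A @ x, by_rows, by_cols)   # all three agree
```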
So that's what A does.
But you remember my first point was that when a matrix
multiplies a vector, the result is a combination
of the columns.
And that's not always, because see, I'm looking at the
picture not just by numbers.
You know, with numbers I'm just doing this stuff.
But now I'm stepping back a little bit and
saying I'm combining.
It's this vector times x_1.
That vector times x_1 plus this vector times x_2 plus that one
times x_3 added together gives me this.
Saying nothing complicated here.
It's just look at it by vectors, also.
It's a little interesting, already.
Here we multiplied these vectors by numbers.
x_1, x_2, x_3.
That was our thinking here.
Now our thinking here is a little, we've
switched slightly.
Now I'm multiplying the matrix times the numbers in x.
Just a slight switch, multiply the matrix times the number.
And I get some answer, b.
Which is this, this is b.
And of course, I can do a specific example like, suppose
I take, well, I could take the squares to be in x.
So suppose I take A times the first three squares, <1, 4, 9>.
What answer would I get?
Just to keep it clear that we're very specific here.
So what would be the output b?
I think of this as the input, the <1, 4, 9>, the x's.
Now the machine is a multiply by A and here's the output.
And what would be the output?
What numbers am I going to get there?
One, three, something? <1, 3, 5>.
Which is actually a little neat that you find the differences
of the squares are the odd numbers.
That appealed to me in school somehow.
That was already a bad sign, right?
This dumb kid notices that you take differences of squares and
get odd numbers, whatever.
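That little fact is easy to check with the same matrix (a minimal sketch; nothing here beyond the numbers in the text):

```python
import numpy as np

A = np.array([[ 1,  0, 0],
              [-1,  1, 0],
              [ 0, -1, 1]])

squares = np.array([1, 4, 9])
print(A @ squares)   # [1 3 5]: differences of squares are the odd numbers
```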
So now is a big step.
This was the forward direction, right?
Input and there's the output.
But now the real reality, that's easy and important, but
the more deep problem is, what if I give you b and ask for x?
So again, we're switching the direction here.
We're solving an equation now, or three equations
and three unknowns, Ax=b.
So if I give you this b, can you get x?
How do you solve three equations?
We're looking backwards.
Now that won't be too hard for this particular matrix that I
chose because its triangular will be able to go backwards.
So let me do that.
Let me take b to be <b_1, b_2, b_3>.
It's a vector, it's got three components.
And now I'm going to go backwards to find x.
Or we will.
So do you see the three equations?
Here they are; x_1 is b_1, this is b_2, that difference is b_3.
Those are my three equations.
Three unknown x's, three known right-hand sides.
Or I think of it as A times x, as a matrix times
x giving a vector b.
What's the answer?
As I said, we're going to be able to do this.
We're going to be able to solve this system easily because
it's already triangular.
And it's actually lower triangular so that means
we'll start from the top.
So the answers, the solution will be what?
Let's make room for it. x_1, x_2, and x_3 I want to find.
And what's the answer?
Can we just go from top to bottom now?
What's x_1? b_1, great.
What's x_2?
So x_2-x_1.
These are my equations.
So what's x_2-x_1?
Well, is b_2, so what is x_2? b_1+b_2, right?
And what's x_3?
What do we need there for x_3?
So I'm looking at the third equation.
That'll determine x_3.
When I see it this way, I see those ones and I see it multiplying x_3.
And what do I get?
Yeah, so x_3 minus this guy is b_3, so I have to add
in another b_3, right?
I'm doing sort of substitution down as I go.
Once I learned that x_1 was b_1 I used it there to find x_2.
And now I'll use x_2 to find x_3.
And what do I get again? x_3 is, I'll put
the x_2 over there.
I think you've got it. b_1+b_2+b_3.
So that's the solution.
Not difficult because the matrix was triangular.
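The forward substitution can be sketched like this (the right-hand side b is a made-up example; the point is that the solution is just the running sums of the b's):

```python
import numpy as np

A = np.array([[ 1,  0, 0],
              [-1,  1, 0],
              [ 0, -1, 1]], dtype=float)
b = np.array([3.0, 1.0, 4.0])   # made-up right-hand side

# Solve Ax = b from the top down, as in the lecture
x = np.empty(3)
x[0] = b[0]            # x1 = b1
x[1] = b[1] + x[0]     # x2 = b1 + b2
x[2] = b[2] + x[1]     # x3 = b1 + b2 + b3

print(x, np.cumsum(b))   # the solution is the running sums of b
```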
But let's think about that solution.
That solution is a matrix times b.
When you look at that, so this is like a good early
step in linear algebra.
When I look at that I see a matrix multiplying b.
You take that step up to seeing a matrix.
And you can just read it off.
So let me say, what's the matrix there that's multiplying
b to give that answer?
Remember the columns of this matrix-- well, I don't know
how you want to read it off.
But one way is to think of the columns of that matrix as multiplying b_1, b_2, and b_3 to give this.
So what's the first column of the matrix?
It's whatever I'm reading off, the coefficients really of b_1 here; <1, 1, 1>.
And what's the second column of the matrix? <0, 1, 1>.
Zero b_2's, one, one.
And the third is? <0, 0, 1>.
Now, so lots of things to comment here.
Let me write up again here, this is x.
That was the answer.
It's a matrix times b.
And what's the name of that matrix?
It's the inverse matrix.
If Ax gives b, the inverse matrix does it the other way
around, x is A inverse b.
Let me just put that over here.
If Ax is b, then x should be A inverse b.
So we had inverse, I wrote down inverse this morning but
without saying the point, but so you see how that comes?
I mean, if I want to go formally, I multiply both
sides by A inverse.
If there is an A inverse.
That's a critical thing as we saw.
Is the matrix invertible?
The answer here is, yes, there is an inverse.
And what does that really mean?
The inverse is the thing that takes us from b back to x.
Think of A as kind of a, multiplying by A is kind of a
mapping, mathematicians use the word, or transform.
Transform would be good.
Transform from x to b.
And this is the inverse transform.
So it doesn't happen to be the discrete Fourier transform or a
wavelet transform, it's a-- well, actually we
could give it a name.
This is kind of a difference transform, right?
That's what A did, took differences.
So what does A inverse do?
It takes sums.
It takes sums.
That's why you see one, one and one, one, one along the rows
because it's just adding, and you see it here in full display.
It's a sum matrix.
I might as well call it S for sum.
So that matrix, that sum matrix, is the inverse of the difference matrix.
And maybe, since I hit on calculus earlier, you could say
that calculus is all about one thing and it's inverse.
The derivative is A, and what's S?
In calculus.
The integral.
The whole subject is about one operation, now admittedly it's
not a matrix, it operates on functions instead of just
little vectors, but that's the main point.
The fundamental theorem of calculus is telling us that
integration's the inverse of differentiation.
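A quick numerical check of that inverse pair, differences and sums (a sketch; A and S are just the difference and sum matrices written out above):

```python
import numpy as np

A = np.array([[ 1,  0, 0],     # difference matrix
              [-1,  1, 0],
              [ 0, -1, 1]])
S = np.array([[1, 0, 0],       # sum matrix, read off from the solution
              [1, 1, 0],
              [1, 1, 1]])

print(S @ A)                     # the identity: summing undoes differencing
print(A @ np.array([1, 4, 9]))   # [1 3 5]
print(S @ np.array([1, 3, 5]))   # [1 4 9]: back again
```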
So this is good, and if I put in b equal one, three, five, for example, just to put in some numbers.
If I put in b equal <1, 3, 5>, what would the x that comes out be? <1, 4, 9>.
Because it takes us back.
Here, previously we started, we took differences of <1, 4, 9>, got <1, 3, 5>.
Now if we take sums of <1, 3, 5>, we get <1, 4, 9>.
Now we have a system of linear equations.
Now I want to step back and see what was good
about this matrix.
Somehow it has an inverse.
Ax=b has a solution, in other words.
And it has only one solution, right?
Because we worked it out.
We had no choice.
That was it.
So there's just one solution.
There's always one and only one solution.
It's like a perfect transform from the x's to the
b's and back again.
Yeah so that's what an invertible matrix is.
It's a perfect map from the x's to the b's, and you can get back again.
Questions always.
Now I think I'm ready for another example.
There are only two examples.
And actually these two examples are on the 18.06 web page.
If some people asked after class how to get sort of a review of linear algebra, well, the 18.06 website would definitely be a possibility.
Well, I'll put down the OpenCourseWare website; and then you would look at the linear algebra course or the math one.
What is it, is that it?
No, maybe that's an MIT-- so is it math?
I can't live without edu at the end, right?
Is it just edu?
So that website has, well, all the old exams you could ever
want if you wanted any.
And it has this example and you click on Starting
With Two Matrices.
And this is one of them.
Ok, ready for the other.
So here comes the second matrix, second example
that you can contrast.
Second example is going to have the same u.
Let me put, our matrix I'm going to call it, what
am I going to call it?
Maybe C.
So it'll have the same u.
And the same v.
But I'm going to change w.
And that's going to make all the difference.
My w, I'm going to make that into <-1, 0, 1>.
So now I have three vectors.
I can take their combinations.
I can look at the equation Cx=b
I can try to solve it.
All the normal stuff with those combinations of
those three vectors.
And we'll see a difference.
So now, what happens if I do, could I even like do just
a little erase to deal with C now?
How does C differ if I change this multiplication from A
to C to this new matrix.
Then what we've done is to put in a minus one there, right?
That's the only change we made.
And what's the change in Cx?
I've changed the first row, so I'm going to change the first
row of the answer to what? x_1-x_3.

You could say again, as I said this morning, you've sort of
changed the boundary condition maybe.
You've made this difference equation somehow circular.
That's why I'm using that letter C.
Is it different?
Ah, yes!
I didn't get it right here.
Thank you, thank you very much.
I mean that would have been another matrix that we could
think about but it wouldn't have made the point I wanted,
so thanks, that's absolutely great.
So now it's correct here and this is correct and I can
look at equations but can I solve them?
Can I solve them?
And you're guessing already, no we can't do it.
So now let me maybe go to a board, work below, because I'd
hate to erase, that was so great, that being able to solve
it in a nice clear solution and some matrix coming in.
But now, how about this one?
One comment I should have made here.
Suppose the b's were zero.
Suppose I was looking originally at A times x equal all zeroes.
What's x?
If all the b's were zero in this, this was the one that
dealt with the matrix A.
If all the b's are zero then the x's are zero.
The only way to get zero right-hand sides, b's,
was to have zero x's.
If you wanted to get zero out, you had to put zero in.
Well, you can always put zero in and get zero out, but here
you can put other vectors in and get zero out.
So I want to say there's a solution with zeroes out,
coming out of C, but some non zeroes going in.
And of course we know from this morning that that's a signal
that it's a different sort of matrix, there won't be an
inverse, we've got questions.
Tell me all the solutions.
All the solutions, so actually not just one, well you could
tell me one, tell me one first.
Now tell me all.
<c, c, c>.
That whole line through <1, 1, 1>.
And that would be normal.
So this is a line of solutions.
A line of solutions.
I think of <1, 1, 1> as in some solution space, and then all multiples.
That whole line.
Later I would say it's a subspace.
When I say what that word subspace means, it's just this-- linear algebra's done its job beyond just <1, 1, 1>.
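That line of solutions is easy to check; here C is reconstructed from the rows x_1 - x_3, x_2 - x_1, x_3 - x_2 described earlier:

```python
import numpy as np

C = np.array([[ 1,  0, -1],    # the circular difference matrix
              [-1,  1,  0],
              [ 0, -1,  1]])

c = 7.0                        # any constant at all
x = np.array([c, c, c])        # a point on the line through <1, 1, 1>
print(C @ x)                   # [0. 0. 0.]: the whole line solves Cx = 0
```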
So, again, it's this fact of-- if we only know
the differences-- Yeah.
You can see different ways that this has got problems.
So that's C times x.
Now one way to see a problem is to say we can get the
answer of all zeroes by putting in constants.
All that's saying in words is that the differences of a constant vector are all zeroes, right?
That's all that happened.
Another way to see a problem if I had this system of equations,
how would you see that there's a problem, and how would you
see that there is sometimes an answer and even decide when?
I don't know if you can take a quick look.
If I had three equations, x_1-x_3 is b_1, this equals
b_2, this equals b_3.

Do you see something that I can do to the left sides
that's important somehow?
Suppose I add those left-hand sides.
What do I get?
And I'm allowed to do that, right?
If I've got three equations I'm allowed to add them, and I
would get zero, if I add, I get zero equals-- I have to
add the right-sides of course-- b_1+b_2+b_3.
I hesitate to say a fourth equation because it's not
independent of those three, but it's a consequence
of those three.
So actually this is telling me when I could get an answer
and when I couldn't.
If I get zero on the left side I have to have zero on the
right side or I'm lost.
So I could actually solve this when b_1+b_2+b_3=0.
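You can watch that solvability condition at work numerically: a least squares solve lands exactly on b precisely when b_1 + b_2 + b_3 = 0 (the two sample b's here are made up):

```python
import numpy as np

C = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]], dtype=float)

results = []
for b in (np.array([1.0, -3.0, 2.0]),    # components add to zero
          np.array([1.0,  3.0, 5.0])):   # components add to nine
    x, _, rank, _ = np.linalg.lstsq(C, b, rcond=None)
    hit = bool(np.allclose(C @ x, b))    # did Cx land exactly on b?
    results.append(hit)
    print(b.sum(), hit)
```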
So I've taken a step there.
I've said that okay, we're in trouble often, but in case the right-hand sides add up to zero, then we're not.
And if you'll allow me to jump to a mechanical meaning of
this, if these were springs or something, masses, and these
were forces on them-- so I'm solving for displacements of
masses that we'll see very soon, and these are forces--
what that equation is saying is-- because they're sorta
cyclical-- it's somehow saying that if the forces add up to
zero, if the resulting force is zero, then you're OK.
The springs and masses don't like take off, or start
spinning or whatever.
So there's a physical meaning for that condition: it's OK provided the b's add up to zero.
But of course, if the b's don't add up to zero we're lost.
Right yeah.
So Cx=b could be solved sometimes, but not always.
The difficulty with C is showing up several ways.
It's showing up in a C times a vector x giving zero.
That's bad news.
Because no C inverse can bring you back.
I mean it's like you can't come back from zero.
Once you get to zero, C inverse can never bring
you back to x, right?
A took x into b up there, and then A inverse brought back x.
But here there's no way to bring back that x because
I can't multiply zero by anything and get back to x.
So that's why I see it's got troubles here.
Here I see it's got troubles because if I add the
left-sides I get zero.
And therefore the right-sides must add to zero.
So you've got trouble several ways.
Ah, let's see another way, let's see geometrically why we're in trouble.
OK, so let me draw a picture to go with that picture.
So there's three-dimensional space.
I didn't change u, I didn't change v, but I changed w to <-1, 0, 1>.
What does that mean?
Minus one sort of going this way maybe, zero, one is the z
direction, somehow I change it to there.
So this is w*, maybe, a different w.
This is the w that gave me problems.
What's the problem?
How does the picture show the problem?
What's the problem with those three vectors,
those three columns of C?
PROFESSOR STRANG: They're in the same plane.
They're in the same plane. w* gave us nothing new.
The combinations of u and v made a plane, and w* happened to fall in that plane.
So this is a plane here somehow, and goes through
the origin of course.
What is that plane?
This is all combinations, all combinations of u, v,
and the third guy, w*.
It's a plane, and I drew a triangle, but of course, I
should draw the plane goes out to infinity.
But the point is there are lots of b's, lots of right-hand
sides not on that plane.
Now if I drew all combinations of u, v, w, the original
w, what have I got?
So let me bring that picture back for a moment.
If I took all combinations of those does w lie in
the plane of u and v?
No, right?
I would call it independent.
These three vectors are independent.
These three, u, v, and w* I would call dependent.
Because the third guy was a combination of the first two.
OK, so tell me what do I get now?
So now you're really up to 3-D.
What do you get if you take all combinations of u, v, and w?
The whole space.
Taking all combinations of u, v, w will give you the whole space.
Why is that?
Well we just showed-- when it was A we showed that
we could get every b.
We wanted the combination that gave b and we found it.
So in the beginning when we were working with u, v, w, we
found-- and this was short hand here-- this said find a
combination to give b, and this says that combination
will work.
And we wrote out what x was.
Now what's the difference-- OK-- here.
So those were dependent, sorry, those were independent.
I would even called those three vectors a basis for
three-dimensional space.
That word basis is a big deal.
So a basis for five-dimensional space is five vectors
that are independent.
That's one way to say it.
The second way to say it would be their combinations give the whole five-dimensional space.
A third way to say it-- see if you can finish this sentence--
this is for the independent, the good guys-- if I put those
five vectors into a five by five matrix, that matrix
will be-- invertible.
That matrix will be invertible.
So an invertible matrix is one with a basis sitting in its columns.
It's a transform that has an inverse transform.
This matrix is not invertible, those three vectors
are not a basis.
Their combinations are only in a plane.
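The rank makes that comparison precise (a sketch; note that w* = -u - v, which is why the third column adds nothing new):

```python
import numpy as np

A = np.array([[ 1,  0, 0],     # columns u, v, w: independent, a basis
              [-1,  1, 0],
              [ 0, -1, 1]])
C = np.array([[ 1,  0, -1],    # columns u, v, w*: dependent, w* = -u - v
              [-1,  1,  0],
              [ 0, -1,  1]])

print(np.linalg.matrix_rank(A))   # 3: invertible, columns fill R^3
print(np.linalg.matrix_rank(C))   # 2: singular, columns fill only a plane
```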
By the way, a plane is a subspace.
A plane would be a typical subspace.
It's like fill it out.
You took all the combinations, you did your job, but in that
case the whole space would count as a subspace too.
That's the way you get subspaces, by taking
all combinations.
OK, now I'm even going to push you one more step and then
this example is complete.
Can you tell me what vectors you get?
All combinations of u, v, w*.
Let me try to write something.
This gives only a plane.
Because we've got two independent vectors
but not the third.
I don't know if I should even ask.
Do we know an equation for that plane?
Well I think we do if we think about it correctly.
All combinations of u, v, w* is the same as saying all
vectors C times x, right?
Do you agree that those two are exactly the same thing?
This is the key, because we're moving up to
vectors, combinations, and now comes subspaces.
If I take all combinations of u, v, w*, I say that that's
the same as all vectors C times x, why's that?
It's what I said in the very first sentence at 4 o'clock.
The combinations of u, v, w*, how do I produce them?
I create the matrix with those columns.
I multiply them by x's, and I get all the combinations.
And this is just C times x.
So what I've said there is just another way of saying how does
matrix multiplication work.
Put the guys in its columns and multiply by a vector.
So we're getting all vectors C times x, and now I was
going to stretch it that little bit further.
Can we describe what vectors we get?
So that's my question.
What b's, so this is b equal b_1, b_2, b_3 do we get?
We don't get them all.
Right, we don't get them all.
That's the trouble with C.
We only get a plane of them.
And now can you tell me which b's we do get when we look at
all combinations of these three dependent vectors.
Well we've done a lot today.
Let me just tell you the answer because it's here.
The b's have to add to zero.
That's the equation that the b's have to satisfy.
Because when we wrote out Cx we noticed that the components always added to zero.
Which b's do we get?
We get the ones where the components add to zero.
In other words that's the equation of the
plane you could say.
Actually that's a good way to look at it.
All these vectors are on the plane.
Do the components of u add to zero? Look at u.
Do the components of v add to zero?
Add them up.
Do the components of w*, now that you've fixed it correctly, add to zero?
So all the combinations will add to zero.
That's the plane.
That's the plane.
You see, there are so many different ways to see it, and none of this is difficult, but it's coming fast because we're seeing the same thing in different languages.
We're seeing it geometrically in a picture of a plane.
We're seeing it as a combination of vectors.
We're seeing it as a multiplication by a matrix.
And we saw it sort of here by operation, operating and
simplifying, and getting the key fact out of the equations.
Well OK.
I wanted to give you this example, the two examples,
because they bring out so many of the key ideas.
The key idea of a subspace.
Shall I just say a little about what that word means?
A subspace.
What's a subspace?
Well, what's a vector space first of all?
A vector space is a bunch of vectors.
And the rule is you have to be able to take
their combinations.
That's what linear algebra does.
Takes combinations.
So a vector space is one where you take all combinations.
So if I only took just this triangle that would not be
a subspace because one combination would be 2u and it
would be out of the triangle.
So a subspace, just think of it as a plane, but then of
course it could be in higher dimensions.
You know it could be a 7-dimensional subspace inside
a 15-dimensional space.
And I don't know if you're good at visualizing that, I'm not.
Never mind.
You've got seven vectors, you think OK, their combinations give us a seven-dimensional subspace.
Each vector has 15 components.
No problem.
I mean no problem for MATLAB certainly.
It's got what, a matrix with 105 entries.
It deals with that instantly.
OK, so a subspace is like a vector space
inside a bigger one.
That's why the prefix sub is there.
And mathematics always counts the biggest possibility too,
which would be the whole space.
And what's the smallest?
So what's the smallest subspace of R^3?
So I have 3-dimensional space-- you can tell me
all the subspaces of R^3.
So there is one, a plane.
Yeah, tell me all the subspaces of R^3.
And then you'll have that word kind of down.
So planes and lines, those you could say are the real, the proper subspaces.
The best, the right ones.
But there are a couple more possibilities which are?
Which point?
The origin.
Only the origin.
Because if you tried to say that point was
a subspace, no way.
Why not?
Because I wouldn't be able to multiply that vector by five
and I would be away from the point.
But the zero subspace, the really small subspace that just has the zero vector-- it's got one vector in it, not empty.
It's got that one point but that's all.
OK, so planes, lines, the origin, and then the other possibility for a subspace is?
The whole space.
So the dimensions could be three for the whole space,
two for a plane, one for a line, zero for a point.
It just fits together.
How are we for time?
Maybe I went more than a half, but now is a chance to just ask me, if you want to, like anything about the course.
Is it all linear algebra?
But I think I can't do anything more helpful to you than for you to begin to see, when you look at a matrix, what it is doing.
What it is about.
Right, and of course matrices can be rectangular.
So I'll give you a hint about what's coming
in the course itself.
We'll have rectangular matrix A, OK.
They're not invertible.
They're taking 7-dimensional space to three-dimensional
space or something.
You can't invert that.
What comes up every time-- I sort of got the idea finally.
Every time I see a rectangular matrix, maybe seven by
three, that would be seven rows by three columns.
Then what comes up with a rectangular matrix A is
sooner or later A transpose sticks its nose in and multiplies that A.
And we couldn't do it for our A here.
Actually if I did it for that original matrix A I would get
something you'd recognize.
What I want to say is that the course focuses
on A transpose A.
I'll just say now that that matrix always comes out square,
because this would be three times seven times seven times
three, so this would be three by three, and it always
comes out symmetric.
That's the nice thing.
And even more.
We'll see more.
That's like a hint.
Watch for A transpose A.
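A small sketch of that shape and symmetry pattern (the seven-by-three matrix is random, just to see the sizes; nothing special about it):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((7, 3))   # a made-up 7-by-3 rectangular matrix

K = A.T @ A                  # (3 x 7)(7 x 3) -> 3 x 3: always square
print(K.shape)               # (3, 3)
print(np.allclose(K, K.T))   # True: A^T A is always symmetric
```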
And watch for it in applications of all kinds.
In networks, an A will be associated with Kirchhoff's voltage law, and A transpose with Kirchhoff's current law.
They just teamed up together.
We'll see more.
Alright now let me give you a chance to ask any question.
Did I mention homework?
You may have said that's a crazy homework, to say three problems from 1.1.
I've never done this before so essentially you can get away
with anything this week, and indefinitely actually.
For how many of you is this the first day of MIT classes?
Oh wow.
Well welcome to MIT.
I hope you like it.
It's not so high pressure or whatever is
associated with MIT.
It's kind of tolerant.
If my advisees ask for something I always say yes.
It's easier that way.
PROFESSOR STRANG: And let me just again-- and I'll say
it often and in private.
This is like a grown-up course.
I'm figuring you're here to learn, so it's not
my job to force it.
My job is to help it, and hope this is some help.