Mathematics - Multivariable Calculus - Lecture 21

Uploaded by UCBerkeley on 20.11.2009

All right.
First the bad news: the final exam will be on December 19th,
exactly one month from today.
Well, that's good news in some sense.
We still have one month.
And the exam will be from 12:30 to 3:30 in the afternoon,
in Hearst Gym.
Which I think is very appropriate.
Because perhaps I'll put some physical exercises in some of
the problems-- shoot some hoops and run a couple of laps.
You have a question?

You can find this information online.
I don't remember.
But I hope it's not a swimming pool.
Although for this big class, perhaps that's the only place
on campus where we can fit everybody.
If we drain the water from the pool.
So last time, we had this discussion about this
Thursday, next Thursday.
So this is next month, December.
Not this month.
But in the meantime, we continue with vector calculus.
And today, actually, we get into more and more
interesting stuff.
And today we will do some preparatory work for a
generalization of Green's theorem which we discussed
last time, to the 3-dimensional case.
Come on guys.
Let's focus.
So the 3-dimensional generalization of
Green's theorem is called Stokes' theorem.
And it is actually very useful in many areas of science
and engineering.
I'll talk a little bit about Maxwell's equations today.
Because we'll finally have all the tools needed to actually
formulate Maxwell's equations, which govern electromagnetism.

Let me start by drawing an analogy.
And I guess this has become sort of a prevailing
theme in this course in the last few weeks.
And I have to say that mathematics is really
about finding and exploiting analogies.
There are a lot of things which are parallel.
And kind of the same ideas, and the same patterns play out
in different domains, in slightly different ways.
And if you learn how they play out in one place, you can
actually gain some insights about how they play
out in other areas.
So a case in point is this type of theorem which we are
discussing right now, which, as I said already many times, takes
the general form: the integral, over some domain, of the
derivative of some differential form.
And here, on the other side, is the boundary of this domain.
And then here you have this form, itself.
So this is the guiding principle.
And what we're doing now, is we're finding concrete
realizations, incarnations of this general principle,
in different dimensions.
So let's see what we have found so far.
So first of all, what is the case when the
dimension of D is 1?
In other words, the object D is a curve.
So this tells us the dimension of the object-- of
the curve, right?
But we should always remember that geometric objects
live in some space, right?
So there is some ambient space.
And there is a geometric object itself.
And we have to distinguish between them.
So if you are talking about the curve, that's
a 1-dimensional object.
It could live in a 1-dimensional ambient space.
Or it could live in a 2-dimensional ambient space.
Or it could live in a 3-dimensional ambient space.
And I would like to distinguish between these cases now.
So let's first look at the case when the ambient space is r.

That is to say, just a real line.
So here's the real line.
And in this case, the 1-dimensional object, which
we'll be integrating over, will be just an interval.
In general it could be a union of several intervals.
But in that case, the integral would be just the sum
over those integrals.
Therefore, without loss of generality, we might as well
assume that our domain is a single interval.
So that would be an interval from a to b.
So now we're in the realm of 1-dimensional calculus.
And in the realm of 1-dimensional calculus, this
formula-- this general formula, takes the following shape.
So omega, first of all, is just a function f.
And so on the right-hand side we're simply evaluating the
function at the endpoints: f(b) minus f(a).

Whereas on the left-hand side, we are integrating
the derivative of this function over
this interval: the integral from a to b of f'(x) dx.
So that's the formula that we're talking
about in this case.
And that's the simplest case really.
Because this corresponds to the smallest possible dimensions
of all objects involved.
Our domain of integration is 1-dimensional, and it lives in
the 1-dimensional ambient space.
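This 1-dimensional formula is easy to check numerically. A minimal sketch, using a midpoint Riemann sum; the choice f(x) = sin(x) is an illustrative assumption, not something from the lecture.

```python
import math

# Numerical check of the 1-dimensional formula
#   integral from a to b of f'(x) dx = f(b) - f(a),
# using a midpoint Riemann sum. The choice f = sin is illustrative.

def check_fundamental_theorem(f, df, a, b, n=100_000):
    """Return (midpoint Riemann sum of df over [a, b], f(b) - f(a))."""
    h = (b - a) / n
    riemann_sum = sum(df(a + (i + 0.5) * h) for i in range(n)) * h
    return riemann_sum, f(b) - f(a)

lhs, rhs = check_fundamental_theorem(math.sin, math.cos, 0.0, 1.0)
print(lhs, rhs)  # both close to sin(1)
```

The two numbers agree to many decimal places, which is exactly what the theorem promises.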
So next we generalize this.
So let me do it like this-- so here, I will do it like this.
And we consider the ambient space, which is 2-dimensional.
So now we're talking about a more general
curve on the plane.
So a curve on the plane would look something like this.
So there's some wiggling room.
So we can play with it.
It doesn't have to be a segment of a straight line.
Because now we are on the plane.
In this case we also have a formula.
And that's what we call the Fundamental Theorem
for line integrals.
In this case again, we have a function f.
We have 2 endpoints.
I will now denote them by capital A and B.
Because here A and B really corresponded
to some real numbers.
And now this A and B represent points on the plane.
So these are not numbers anymore.
And so in this case on the right, we just evaluate our
function at those 2 points.
So it's very similar.
And on the left, we're taking the line integral
over this curve.
Let's denote this curve by c, as we always do.
And here we'll have the line integral of nabla f dot dr--
the line integral of the gradient vector field.

So that's what we have in the case when the domain
is 1-dimensional.
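The Fundamental Theorem for line integrals can be checked the same way. A minimal sketch, assuming an illustrative function f(x, y) = xy + x^2 and a quarter-circle path from A = (1, 0) to B = (0, 1); neither choice is from the lecture.

```python
import math

# Numerical check of the Fundamental Theorem for line integrals,
#   integral over C of (grad f) . dr = f(B) - f(A),
# on the quarter circle r(t) = (cos t, sin t), t in [0, pi/2],
# from A = (1, 0) to B = (0, 1). The function f is illustrative.

def f(x, y):
    return x * y + x**2

def grad_f(x, y):
    return (y + 2 * x, x)  # (df/dx, df/dy)

def line_integral_of_gradient(n=100_000):
    """Midpoint-rule approximation of the line integral of grad f."""
    total = 0.0
    h = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)  # r'(t)
        gx, gy = grad_f(x, y)
        total += (gx * dx + gy * dy) * h
    return total

lhs = line_integral_of_gradient()
rhs = f(0.0, 1.0) - f(1.0, 0.0)  # f(B) - f(A)
print(lhs, rhs)  # both close to -1
```

Note that the wiggly path itself never enters the right-hand side; only the two endpoints matter.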
There's actually 1 more.
There will be 1 more square when the ambient space-- so I
would have to draw it here-- the ambient space will
be 3-dimensional.
And actually we stop there.
I don't have to go any lower.
Because in this class, as I said many times, we only talk
about lines, planes, and 3-dimensional spaces.
We don't talk about 4-dimensional spaces,
for example.
And we'll talk about 3-dimensional spaces
later-- next week and the week after next.
For now I just want to focus on this.
For line integrals, I would like to just look at r and r2.
So now I would like to make an analogy.
And my analogy will be with the case when the
dimension of d is 2.
In other words, now the domain, d, in this formula on the
left-- I didn't put a bracket.
Nobody told me.
Do tell me if you see something odd on the blackboard.
So the domain now is 2-dimensional.

So now if it's 2-dimensional, we certainly cannot fit it
in a 1-dimensional space.
So the simplest example here, for the ambient space,
is not r, but r2.
And so the ambient-- let me abbreviate this--
ambient space is r2.
That is really the simplest example.

We cannot put a 2-dimensional object into a smaller
dimensional space.
So what is this going to look like?
What's the analog of all of the formulas-- the
pictures and the formulas?
What's the analog of this formula?
First, what's the analog of this picture?
The analog of this picture is the following: it's
just a domain which looks something like this.
It could have some corners.
And I mean really, the interior-- this shaded region.
So this is now D.
And this domain has a boundary.
Maybe I will use a different color for it.
So let's just call it b of D, the way we did last time,
b of D, the boundary.
So maybe to make the picture more analogous I'll
put yellow here also.

That's the boundary.
So what I'm saying is that the first piece of the analogy is
that this red interval is analogous to this red domain.
And these 2 points, these 2 yellow points, are
analogous to this curve.
Is that clear?
So that's the first thing.
So clearly there's a simple, systematic analogy.
We just bump all dimensions by 1.
Whereas here, the domain was 1-dimensional and the boundary
was zero-dimensional.
Now the domain is 2-dimensional and so the boundary
is 1-dimensional.
Just bump everything by 1.
Now the formula.
The formula is the formula we learned last time.
In this formula on the right-hand side, we have
the line integral of the vector field.
So we can write it like this: pdx plus qdy over the boundary.
Now that's the right-hand side.
Because now, this omega is actually a vector field.
And we take the line integral, the vector
field, over the boundary.
The boundary, remember, is oriented according to the
rule-- the special rule.
It goes counter-clockwise.

What about the left-hand side?
The left-hand side now, is going to be a double integral
because we're going to integrate over d.
So it is going to be the integral over d-- double
integral-- over the expression.
Which at first looked mysterious.
But then we discussed, last time, what the meaning
of this expression is.
So that's the left-hand side.
So what I'm saying now, is that the analog of this formula, in
this column, is the left-hand side of this here.
And the analog of the right-hand side of this
formula, in this column, is this right-hand side.
So this is very important to realize.
Because once we see a general pattern with all of these
formulas, they start making a lot more sense.
If we don't see these analogies, it looks like a
collection of different formulas which seem to
be totally unrelated.
But in fact, what I am trying to explain, is the fact that
they are very closely related.

OK, so this is Green's theorem.
This is Green's theorem which we discussed last time.
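Green's theorem can also be verified numerically on a simple domain. A minimal sketch, assuming the illustrative choices of D being the unit square and the field P = -y, Q = x, for which dq dx minus dp dy equals 2 everywhere; neither choice is from the lecture.

```python
# Numerical check of Green's theorem,
#   line integral of P dx + Q dy over the boundary of D
#     = double integral over D of (dQ/dx - dP/dy) dA,
# on the unit square with P = -y, Q = x, so dQ/dx - dP/dy = 2.

def boundary_integral(n=100_000):
    """P dx + Q dy around the unit square, counter-clockwise."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        # bottom edge: (t, 0), dx = h, dy = 0  -> P dx = -0 * h
        total += -0.0 * h
        # right edge: (1, t), dx = 0, dy = h   -> Q dy = 1 * h
        total += 1.0 * h
        # top edge: (1 - t, 1), dx = -h, dy = 0 -> P dx = -1 * (-h)
        total += -1.0 * (-h)
        # left edge: (0, 1 - t), dx = 0, dy = -h -> Q dy = 0 * (-h)
        total += 0.0 * (-h)
    return total

lhs = boundary_integral()
rhs = 2.0 * 1.0  # the constant integrand 2, times the area of the square
print(lhs, rhs)  # both equal 2
```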
So we have found these 3 corners of this diagram,
of this picture.
And as always when you do an IQ test-- which we've talked
about before-- you're asked to complete the diagram.

That's what we are going to do.
We will not do everything today.
We will start laying the foundations for the
proper understanding of what should go here.
And that's what's called Stokes' theorem.
We can use this picture to see the different elements of
what's to come in this corner.
So we should just try to generalize by using analogy
between the left and right, and also kind of see the
progression from the first line to the second line.
So what should this theorem be about?
I mean, we see that there must be some result here as well.
So what should it be about?
Well first of all, we are still within this column.
Which means that the dimension-- I will use this
board, because I don't have enough space to work
this out here.
So I'll just think of this as a magnified corner
of this blackboard.
So we're still in the second column, and that means
the dimension is 2.
So on the left, we're still going to integrate over
a 2-dimensional object.
But now, the ambient space should be 3-dimensional.
Because I want to establish an analog of what I have here when
the ambient space was r2.
The ambient space was r2, whereas the object was a curve.
So that gap between the dimension of the object
and the dimension of the ambient space was 1.
And so we want the gap to be 1 here as well.
That means 2-dimensional object inside 3-dimensional space.

Let me give you an example of such an object.
Well the simplest example of such an object would
be a plane, a plane in 3-dimensional space.
Here's our favorite example of a plane.
This is why, by the way, I erased that.
But, you know, there's a big game this weekend.
So of course I hope everybody knows about this, and will come
through to support Berkeley.
So anyway, this is our plane.
So you see if we were in the top corner-- so to speak--
in the top corner, our 2-dimensional domain is
confined within a given plane.
So let's say that plane was this vertical plane.
So our domain would be part of this.
And the plane is rigid.
So if we are within that plane, that's all we can do.
But now we say, OK, there's actually 3-dimensional
space out there.
So let's look at more general 2-dimensional objects within
that 3-dimensional space.
And the first thing we can do is, we can take, for example,
the same plane, and just rotate it like this.
And we can also tilt it like this.
This way we can actually get any plane.
And then we can take a piece of that plane.
So that would be a more general object than before.
A slightly more general object, because it would not fit in
that original 2-dimensional plane which was just this.
Now you see, this one would not be exactly
like this one, the original one.
So this is a very small generalization, but a
generalization none the less.
If we were in this column, if we were still in this column, that
would correspond to just restricting our attention to
line segments, but line segments which can go
anywhere on the plane.
Whereas before, we had a line segment confined within
a particular line.
But of course, here, there's a lot more to curves.
We don't just consider straight line segments.
We also consider wiggly curves like this.
So what's the analog of that kind of object in this context?
So that would be something which does not fit even
into a rotated plane.
So for instance, of course, the simple example is a sphere.
A sphere is kind of difficult to draw.
So what I'll draw is an upper hemisphere.
Let me draw an upper hemisphere.
And let me use red.

So this is an upper hemisphere.
It's like this.
So it doesn't fit in any plane.
It doesn't fit in this plane.
I'm drawing it on this plane.
But it doesn't live on this plane.
Unlike this one, which actually fits in a particular plane.

So, what I'm trying to say is, that this is a proper analog--
a 2-dimensional object analog of a curve on a plane.
And it also has a boundary.

I know there is one confusing point here.
Oftentimes people get confused why we assign to this upper
hemisphere, or to the sphere itself, dimension
2 and not 3.
And this is a kind of common confusion about
how we count dimensions.
And I said before, and I'll say it again.
You have to distinguish between the geometric object
and the ambient space.
Here I'm drawing a sphere.
Think of a dome, a roof or dome-shape roof like this.
It lives in a 3-dimensional space.
But it doesn't mean that it, itself, is 3-dimensional.
It is 2-dimensional.
Because if you have a little bug that's living on this dome,
the bug can only go in 2 independent directions.
There are only 2 degrees of freedom, so to speak, in which
the bug can move-- not 3.
It doesn't go inside or outside.
If it lives here, then that's the way it is.
So a more mathematical way to think about it
is the following.
You can actually think of flattening this dome, this half
dome onto the disk, which lies at its foundation, at its base.
So we can actually identify points on the dome with
corresponding projections onto this flat disk.
So there is actually a 1:1 correspondence between the
dome-- the hemisphere-- and the disk.
Now the disk, surely, is 2-dimensional.
So because we establish such a 1:1 correspondence, that also
is an indication or actually is a proof that this is
also 2-dimensional.
So this is all to say, that it is these kind of objects that
we will have to integrate in this corner, on the left.
So on the left, we will be integrating over things like
a sphere-- like a sphere or an upper hemisphere.
OK, and on the right, what are we supposed to
integrate on the right?
So to understand this picture, we looked at this analogy--
the analogy on this side, crossing from right to left.
Now to understand what we're going to be integrating, we
cross from bottom to top.
Within this column, the right-hand side actually
stays the same.
We just take the values of the function at the
endpoints, here and here.
The difference is that these are the endpoints of an
interval on the line.
And these are the endpoints of a curve on the plane.
But structurally it is the same.
So that's why we should expect that going from top to bottom
within this column, we should also be integrating
very similar objects.
So what are we integrating here?
Here we are integrating a vector field, which has
components P and Q.
And we integrate it over the boundary.
So we should do the same thing here.
So this will be an integral of a vector field
over the boundary.
So I will now write it out in the more detail.
Basically I have already assembled the formula that we
need for Stokes' theorem.
I have already assembled most of the elements.
The only thing which is missing right now, is this: the
question mark here.
What should appear here?
And that should be the analog of this expression,
dq dx minus dp dy.
So, that's the only thing which is missing right now.
But let me write out the right-hand side in more detail
first before I explain what that question mark stands for.
So now f is going to be a vector field in r3.
We shouldn't be scared of that.
I mean we understand that vector fields can be on the
plane, or it can also be in 3-dimensional space.
Remember, we talked about wind maps.
Well, if you think about wind maps, wind maps
are 2-dimensional.
So a wind map would represent a vector field on the plane.
But what does it mean?
It actually means that we have to say where exactly
do we measure the wind vector for this map?
And usually we just measure it on the surface.
But in fact, if you look at the entire 3-dimensional space,
each point in that 3-dimensional space
has some wind vector.
Even now in this classroom, there's some winds,
winds of change.
You can sense some winds.
But that wind lives in a 3-dimensional space, not
just a 2-dimensional space.
And so that's a vector field in a 3-dimensional space.
That's totally natural.
So that's what F is going to be.
Because now we live in the 3-dimensional space, let's
look at the most general vector fields in the
3-dimensional space.
There's no reason to confine ourselves to vector fields
in the 2-dimensional space.
So that vector field now has 3 components:
P, which is a function of x, y, z, times i, plus Q, also a
function of x, y, z-- but let me not write it out just to save
time-- plus R times k.
Remember the i, j and k are 3 basis vectors.
This is i, this is j, and this is k.
This is x, this is y, and this is z.

So the difference is that, in this corner of the diagram,
we consider a vector field on the plane.
That vector field, first of all, does not have
a third component.
So it only has p and q.
And second of all, because it only lives on the plane, it
doesn't have its dependence on z.
This is a more general situation in two ways.
First of all, we have a third component, r, which corresponds
to the third direction k.
And also p, q, and r all depend on x, y, and z.
OK so then what is F dot dr?
Well dr, as always, is just dx times i plus dy times
j plus dz times k.
So you just take the dot product of these 2 guys.
And you find pdx plus qdy plus rdz, just like before.
Before we did not have r.
So that's why what we got was just pdx plus qdy, which you
see on the upper-right corner.
Now we'll have a slightly more general expression:
pdx plus qdy plus rdz.
So let me now write down the theorem which we would like
to formulate in more detail.
And that theorem, we will have on the right-side, a line
integral of pdx plus qdy plus rdz, where p, q, and r are
3 functions of x, y, z.
But in fact, it is better to think about them as components
of a vector field, components of one single vector field, f.

And we are going to integrate over the boundary of a surface
in a 3-dimensional space.
And a very good example of this is upper hemisphere.
So that's just a good picture for it.
But of course, you can always make something
more fancy like this.
So it looks like a little rabbit or something.
You see what I mean.
That's the right-hand side.
And on the left-hand side, we will have a double integral.
Because we're going to integrate over the rabbit's
head, or over the upper hemisphere.
I want to stay politically correct.
So I hope I don't offend any rabbits.
So let's just do an upper hemisphere just to
be on the safe side.

Maybe I should emphasize that here we're integrating
over the yellow thing.
So maybe I should erase now.
Now I'm really politically correct.
Here I'm integrating over the red thing, of which the
yellow is the boundary.
So in fact now, I don't even have to make a dotted line.
We can actually see the entire boundary.
So the boundary is here.
Is that clear?
So that's exactly the same as in previous cases.
And the only question that remains, is what
do we have here?
What should we integrate?
And that should be something which is the analog of the
expression dq dx minus dp dy.

So you can ask, why not just put the same expression,
dq dx minus dp dy?
That worked in the case when we had a flat region which
fit entirely on the plane.
Well there is a problem with that expression.
The problem with that expression is that it does
not depend on R at all.
It only depends on p and q.
That's why it worked when we were on the plane.
On the plane our vector field only has 2 components, p and q.
So such an expression makes sense.
It depends on both of them.
But now, the right-hand side depends not only on p and
q, it also depends on r.
So surely this r should appear on this side.
Because, of course if we change r, we will get different
answers on the right.
So it's impossible to have a formula where on the right
you have r, but on the left you don't have r.
So it has to be something like this.
It has to be something like this.
And that's what we're going to find out.
What exactly is it?
So here's the plan: I will have to explain what kind of a
integrals-- first of all-- we'll have to learn how to
integrate over general surfaces in r3.
And the second, is that we will have to find an
analog of this expression.
And an analog of this expression is
what's called curl.
So we'll define and explain what a curl is-- curl of f.
Which also can be written as a cross product
between nabla and f.
And finally we will put everything together, and we
will write the left-hand side of this formula.
We will have to write the following thing.
It will be a double integral over d, of what we'll
call the curl of f-- nabla cross f, like this-- dot dS.
And that will be the answer.
That's the answer which will go instead of the
question mark over there.
And that will complete our diagram.
But you see, we can't do it right away now.
Because there are a couple of things which are missing.
First of all, we don't know yet how to integrate over surfaces.
What we do know is how to integrate over flat surfaces.
But see there is a big difference between a region like
this and a region like this, which is what we're
talking about now.
Think about this curtain.
You can bend and shape it in any way you like, unlike
a flat region like this.
A flat region we know how to integrate.
For the curvy regions, we actually do know something.
Because remember at the very beginning of this course, we
talked about surfaces of revolution, and about
areas of such surfaces.
So we actually know a few examples of formulas
for surface areas.
And what we'll do, we'll just generalize that.
First of all to get surface areas for general surfaces, but
also to get surface integrals of vector fields, which are
the analogs of the line integrals of vector fields that we
found for curves.
So we'll now proceed to develop this and kind of lay down the
foundations for this, for explaining what the
left-hand side is.
And so the first thing I would like to do is to
talk about the curl.
Because that's sort of the easier part, more kind
of algebraic part.
So what is the curl?
Any questions about this by the way?
So speaking of the exam, I'm actually going to put some
materials on the course homepage by the
end of this week.
I've been asked by some students to put the review
problems and things like that up before Thanksgiving so
that you guys have more time to work on this.
So I'll do it this week.

All right.
So first we'll talk about curl.
So here f, is again a vector field like on that board,
with components p, q, and r.
And curl, is just what you would get by taking the cross
product formally between this vector field, and nabla.
What do I mean by nabla?
By nabla, I mean a vector which has components
d dx, d dy, and d dz.
So we just follow the general rule of how to
do a cross product.
To do a cross product we have to make this diagram.
Like this, where in the first row we put i j k.
In the second row we put the components of the first of
the 2 vectors involved.
In this case it's going to be d dx, d dy, and d dz.
And then we'll have p, q, and r, which are the components
of a second vector involved.
That's just the usual formula for the cross product.
Except, the novelty is, well one of these is actually
a bona fide vector field.
The other one is kind of a strange looking expression.
It has partial derivatives as components.
But nevertheless, we can assemble this diagram,
this picture.
Matrix, really, is the proper name for it.
And take its determinant.
So what is this determinant equal to?
Just the usual rules apply.
In front of i, we should have d dy of r minus-- well here we
should multiply them in the order from the second
row to the third row.
So it would be d dz of q.
So let me write this down.
This means dr dy minus dq dz times i.
Do you see what I mean?
Just this minus this.
The derivative goes first.
It wouldn't make much sense if we wrote q times d dz.
But d dz of q makes sense as a partial derivative of q.
And then we proceed in the same way.
So the next one will be dr dx minus dp dz-- and that's
the j component, which comes with a minus sign in the
determinant expansion, so it's minus this quantity times j.
And finally, for the last one, we will have
dq dx minus dp dy, times k.
Also what I'd like to say, is that actually this expression
should be thought of as the analog of the expression
dq dx minus dp dy.
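One way to get a feel for these three components is to approximate them with central finite differences. A minimal sketch; the sample fields below, F = (-y, x, 0) and the gradient of xyz, are illustrative choices, not from the lecture.

```python
# Numerical sketch of the curl components derived above,
#   curl F = (dR/dy - dQ/dz) i - (dR/dx - dP/dz) j + (dQ/dx - dP/dy) k,
# using central finite differences. Sample fields are illustrative.

H = 1e-5  # step size for the central differences

def partial(g, var, x, y, z):
    """Central-difference partial derivative of g(x, y, z) in var."""
    if var == 'x':
        return (g(x + H, y, z) - g(x - H, y, z)) / (2 * H)
    if var == 'y':
        return (g(x, y + H, z) - g(x, y - H, z)) / (2 * H)
    return (g(x, y, z + H) - g(x, y, z - H)) / (2 * H)

def curl(P, Q, R, x, y, z):
    """The three components of curl F at the point (x, y, z)."""
    return (partial(R, 'y', x, y, z) - partial(Q, 'z', x, y, z),
            -(partial(R, 'x', x, y, z) - partial(P, 'z', x, y, z)),
            partial(Q, 'x', x, y, z) - partial(P, 'y', x, y, z))

# The rotational field F = (-y, x, 0):
c = curl(lambda x, y, z: -y,
         lambda x, y, z: x,
         lambda x, y, z: 0.0,
         0.3, -1.2, 0.7)
print(c)  # approximately (0, 0, 2)

# The curl of a gradient field, here grad(xyz) = (yz, xz, xy),
# should vanish, since gradient fields are conservative:
g = curl(lambda x, y, z: y * z,
         lambda x, y, z: x * z,
         lambda x, y, z: x * y,
         0.3, -1.2, 0.7)
print(g)  # approximately (0, 0, 0)
```

The second example previews the connection to conservative vector fields discussed below: the curl of any gradient is zero.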

Let me convince you that this is a good analog
for that expression.
It's a good analog for this expression.
Well first of all, let's try to reduce this formula to
the 2-dimensional case.
The 2-dimensional case can be obtained from the 3-dimensional
case by, first of all, setting r to be equal to zero.
And also saying that p is a function of x and y.
And q is a function of x and y only.
If we do that, then our vector field, p q r, will actually
become just a vector field p q on the plane.
So let's see what will happen with this formula.
If indeed we say that r is zero, p depends on x
and y, and so does q.
Well first of all, because r is zero, all terms
involving R will disappear.
Second, because I said that P does not depend on z,
this will disappear.
Q does not depend on z, this will disappear.
So the first 2 summands of this formula will disappear.
And what will we be left with?
We'll be left with this.
So in other words, we get the old expression,
dq dx minus dp dy.

Before we thought about it as a function.
But now we think about it as a vector.
We think about it as the third component of a vector.
In other words in space, we arrange it like this.
Our vector field, f is a wind map on the xy plane.
And the curl of that vector field, actually happens to be
going in a vertical direction-- the direction of k.
And its magnitude is equal to dq dx minus
dp dy, at each point.
So it's a slightly different perspective on this expression.
Before it was function, now it's a vector.
But it's always pointing in the z direction in
3-dimensional space.
But this is already the first indication that this is a good
candidate for replacing the formula in the
3-dimensional case.
In a sense you see it's very symmetrical.
Because each time, what happens, is that say we take k.
And we get an expression of which involves derivatives of
the remaining 2 variables, the remaining 2 components p and q.
So k goes with p and q.
And we take cross derivatives in the remaining 2
coordinates, x and y.
And likewise here, for j: we work with the components
p and r, and the coordinates x and z.
And here, for i: we work with the components q and r,
and the coordinates y and z.
So that's the first reason why it's a good replacement for it.
But there is actually a better reason.
Because actually, this expression played a very
important role when we talked about conservative
vector fields.
That's the next thing that I want to explain.
Remember a conservative vector field, is a vector field which
is a gradient of a function.
And these are very nice vector fields.
For example, for these vector fields, it's very easy to
calculate line integrals.
We just have to know the values of the function at the
endpoints, and take the difference.
So let me recall conservative vector fields.
First of all, let me recall what happens in r2.
In r2, it means that our vector field, which is p times
i plus q times j, is the gradient of a function--
nabla f, for some function f.
And if the vector field is conservative, we can use the
Fundamental Theorem for line integrals to evaluate line
integrals of such vector fields.
And we saw examples of how nicely this works.
So it is good to know when a vector field is conservative.
When is f conservative?
And we had a very simple criterion.
We said that it is conservative, f is
conservative, if and only if the following formula holds:
dq dx is equal to dp dy.

Now there was a caveat.
There was a subtle point here.
Which was that actually, that's not always true.
It is true if your vector field is defined over a simply
connected domain.

We didn't really talk about the reasons for this.
But actually there's a very simple geometric reason.
The point is that if you have a non-simply connected domain.
In other words, for example, you have something like this.
Or even just remove one point.
And your vector field is only defined here.
And it cannot be extended to inside.
What could happen is that when you take integrals over say
circles or some curves going around this region, you could
get something non-trivial.
Even if this equation is satisfied, you could get
something non-trivial.
But we know that the vector field is conservative if and
only if its line integral over every closed curve is zero--
no matter whether the curve goes around some hole, or doesn't
go around some hole.
So the problem is, that this equation almost guarantees
that such integrals are zero.
What it surely guarantees is such integrals are zero, when
you can contract your curve, your closed curve, to a point.
Here, you cannot contract to a point.
And that's why this argument breaks down.
So this is only true if your vector field is defined on
a simply connected domain.
Now what are the simply connected domains?

If you look at all the homework exercises that we have, 99% of
the examples are vector fields which are actually defined
everywhere on the plane.
They don't have any poles, any singularities, right?
So in fact, in this course, we mostly focus on
such vector fields.
So let's assume, for now, that vector fields actually are
defined everywhere on the entire plane.
And the plane is simply connected.
Non-simply connected things are things like you have
to remove some points.
That would mean that your vector field
actually has poles.
You have something like 1 over x squared plus y squared,
in the denominator.
So let's suppose, to simplify matters, that F is actually
defined everywhere, defined on the entire plane--
defined everywhere on r2.
Then this issue does not arise.
And this is actually a true statement.
It would not have been a true statement had I not made this
assumption that the vector field is defined on a
simply connected domain.
So then, this is actually a very simple criterion.
But what is this formula?
What does this formula say?
It says that the difference between these two guys is zero.
Let me erase this.
We will remember this: suppose that it's defined everywhere.
So this is the criterion.
And now we can appreciate even more, this strange-looking
expression dq dx minus dp dy.
This strange-looking expression tells us whether our vector
field is conservative or not.
If it is zero, this vector field is conservative, provided
it's defined everywhere.
And if it's not zero, then for sure it's not conservative.
You see?
That's exactly this expression that we're talking about.
That's the expression which appears in Green's formula.
And if this formula is correct, if this formula is satisfied,
then we actually have an algorithm for how to find f.
We talked about it before.
And you must have done some exercises doing this.
And you see the point is, that you can try to apply this
algorithm in the case when this formula is not satisfied.
But you'll get stuck.
You'll see that it doesn't work, that the
algorithm doesn't work.
You will not be able to find function f.
So the first step when you're asked whether the vector field
is conservative or not, is to compute this expression, and
see if it is zero or not.
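If you want to check this step mechanically, here is a small sketch in Python using sympy. The helper name is my own, and the check assumes-- as in the discussion above-- that the field is defined on the whole plane with no singularities.

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_conservative_2d(P, Q):
    """Return True when dQ/dx - dP/dy vanishes identically.

    Valid as a conservativeness test only for fields defined
    everywhere on the plane (a simply connected domain).
    """
    return sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0

# A conservative field: (2xy, x^2) is the gradient of x^2 * y.
print(is_conservative_2d(2*x*y, x**2))   # True

# A non-conservative field: (-y, x) has dQ/dx - dP/dy = 2.
print(is_conservative_2d(-y, x))         # False
```

The two examples are my own illustrations; the point is just that the whole criterion is one symbolic subtraction.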
So the reason why I'm saying all of this, is because the
notion of a conservative vector field makes sense not only for
vector fields on the plane like this, but also for vector
fields in 3-dimensional space.
And if I really want to prove to you, to convince you,
that this curl now in a 3-dimensional case, plays the
role of this expression, that would be a very simple test.
Would I be able to use this expression to test whether my
vector is conservative or not?
So that would be the second argument in favor
of this expression.
So let's talk about vector fields in 3-dimensional
space, now in r3.
In r3, we have a vector field which is p i plus q j.
And now there's this third component, which is r k.
And what we would like to do is, we would like to
apply the same notion.
Well a vector field is called conservative, again, if
it is equal to nabla f.
You see, nabla f.
The gradient makes sense on the plane, and in space.
It's just that now it is going to have-- as components-- is
going to have df dx i plus df dy j plus df dz k.
In the 2-dimensional case, we would not have this term.
And now we will have this term.
But the notion makes sense anyway, in both cases.
And now we can again ask the question: when
is f conservative?

And the answer is given by the following theorem.
Let's suppose again that f is defined everywhere.

It's well-defined.
Well-defined means that it does not have poles.
So for example, if you have a vector field, which you might
have seen, minus y i plus x j divided by x squared
plus y squared.
This is what makes it ill-defined, or
not well-defined.
Because this gives you zero when x and y are equal to 0, 0.
You get a pole.
You cannot evaluate this vector field at this point.
So this vector field is not defined on the entire plane.
But for example, a vector field like this, is certainly defined
everywhere on the plane.
That's what I mean when I say that F is well-defined everywhere in r3. So I assume that.
If we assume that, I'm saying that F is conservative.
This is equivalent to saying that the curl is zero.

That's another justification for saying that the curl is a
good replacement for this expression.
Which in fact, can be found as one of the components--
this one, for the curl.

And this is actually proved by using Stokes' theorem, which we
are now trying to establish.

This was all this long discussion was, to convince
you of the importance and usefulness of this
strange-looking expression, you know.
Because you probably thought on Tuesday, that
this looks strange.
Well now this looks even more strange.
So what does it mean?
We'll talk more about the meaning of this later.
But at least now you see that it is useful.
It is a very functional thing.
First of all, it reduces to the old expression, when our vector
field actually is a vector field in the plane.
And second of all, it can be used as a criterion, or for a
criterion, as to whether the vector field is conservative
in the 3-dimensional space.
Just the way we used this criterion on the plane.
So let's see how it works in practice.
Let's suppose you're given a vector field now in
a 3-dimensional space.
And you're asked: is this vector field conservative?
And if so, find the potential function.
Find the function F of which this vector
field is a gradient.
So here's an example.
Suppose your vector field is like this: e to the zi
plus j plus xe to the zk.
So the question is, is it conservative?

Well first of all, let's see.
It is well-defined everywhere.
There are no poles.
There are no singularities.
We can find the value of this vector field for any x, y, z.
There are no expressions like 1 over x squared plus y squared.
So great.
We can then apply this theorem.
This theorem says, that if F is well defined everywhere, to
find out whether it's conservative, it's sufficient
to just calculate this.
So let's calculate the curl.
So we just write i, j, k; d dx, d dy, d dz; and then we have e to the z, 1, and x e to the z.
So we start calculating.
What do we find.
So here, you have to take the derivative with respect to y--
partial derivative with respect to y-- of this expression.
It does not depend on y.
So that's zero.
Minus d dz of 1, that's zero too, so zero.
Next, you have j-- let me put a minus-- d dx of x e to the z, that's e to the z, minus d dz of e to the z, which is also e to the z.
So that's zero also.
Plus k times d dx of 1, that's zero, minus d dy of e to the z.
That's zero also.
So zero k.
So this is indeed the zero vector.
Please note that I put an arrow over the zero.
Remember we had a conversation about this a while ago,
about different zeros.
A curl is not a function.
It's a vector field.
So if you put zero like this, that would be okay for a function.
But saying that the curl is the zero vector is more than saying that a function is zero.
It means that 3 functions are zero-- not just 1, but 3
functions-- the components in front i, j, and k.
And that means that this is zero as a vector.
It has all components here.
That's what this criterion means.
So criterion is actually a collection of 3 equations,
not just 1, but 3 equations.
OK, great.
So this is satisfied.
It is conservative.
The answer is yes.
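This hand computation is easy to double-check with a computer algebra system. A minimal sketch in Python using sympy, with a curl helper of my own naming:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(P, Q, R):
    """Components of nabla x F for F = P i + Q j + R k."""
    return (sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y))

# The field from the example: F = e^z i + 1 j + x e^z k.
print(curl(sp.exp(z), sp.Integer(1), x*sp.exp(z)))  # (0, 0, 0)
```

All three components vanish, so the curl is the zero vector, matching the blackboard computation.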
Okay a follow-up question: the follow-up question is, if
you're so smart and you know it's conservative, find
a function for which it is the gradient.
And that's really very easy to do.
We just follow the same algorithm that we used in
a 2-dimensional case.
So we want to find f such that F is equal to nabla f.
So what we do is, we start doing anti-derivatives.
And the first time you take anti-derivatives, you
affect one of 3 variables.
And then you continue.
So you have to make a choice.
So let's just do it in a straight-forward way.
Let's take the anti-derivative of the first component.
So take the anti-derivative of e to the z with respect to x.
So what do we get?
We get x times e to the z.
But that's not all.
We have to add a constant when we take anti-derivatives.
But now this constant is not really a constant.
It is a constant with respect to x, the variable over which we took the anti-derivative.
So this constant actually could depend on y and z.
So it's a constant.
It's a function of y and z.
That's what the first step of the algorithm tells us.
Now we're going to differentiate this
with respect to y.
Let's just say take d dy.

Well this will be zero.
And this will be dc dy.
We are supposed to get the second component
of our vector-- vector field-- which is 1.
Because see, the second component is j.
It means 1 times j.
So this should be 1.
If this is 1, it means that c is equal to y, plus another constant.
Let's call it c1.
So if we were working with a vector field in 2 variables-- a
vector field on the plane which depends only on x and y--
that's where we would stop.
But now we have a third variable.
So a priori this c1 could depend on the last variable
which remains, which is z.
So we have to just make one more step in this algorithm.
Otherwise it looks exactly the same as before.
So now we should take d dz of this-- of what?
We have to assemble what we've learned so far.
What we've learned so far is that f is equal to this.
But this, c, is equal to this.
So that means that we can replace, now, the c in this
formula, by y plus c1.
And now we have to take the derivative of this whole
thing with respect to z.
So we find xe to the z-- the derivative of this
is zero-- plus c1 prime.
And we should compare it to the expression which we were given.
We were given xe to the z.
So that means that c1 prime is actually zero.
Which means that c1 is actually a constant.
It's an honest constant.
It doesn't have any hidden dependence or anything.
So the answer is that f is nabla of the function
which we found.
Which is xe to the z plus y plus c1, where this
is actually a constant.
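As a sanity check on the algorithm, you can take the gradient of the potential we found and see that it reproduces the original field. A small sketch in Python with sympy, dropping the constant c1:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The potential found by the algorithm (the constant c1 is dropped,
# since any constant works).
f = x*sp.exp(z) + y

# Its gradient should reproduce the field e^z i + 1 j + x e^z k.
grad_f = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))
print(grad_f)  # (exp(z), 1, x*exp(z))
```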
Any questions?
Let me switch the boards.

First I have to recall f, to use all the information I found so far.
I have found that it is equal to this from the previous step.
Where this is already just a function of z.
Because I found that this was c of y, z.

But we have found that it's y plus c1.
So I already put this back for the function f.
And now I take the derivative of this with respect to z.
And this is what I get.
And now I have to compare it to my third component of my vector
field, which is xe to the z.
On the very top row, in front of k, you have xe to the z.
So I say this is equal to xe to the z.
So I can cancel out these guys.
And I end up with c1 prime is zero.
That means c1 is actually a constant.
It actually does not depend on z.
And that gives me the answer.
Any other questions?
OK, so this is how it works.
It actually works in a very similar way.
And this was all to convince you of the importance
of this curl.
And this is the expression which we will use to establish
the Stokes' formula-- that elusive formula which appears
in the lower right corner.

If you thought this was too much, too many formulas,
there's actually one more which is called divergence.
And divergence we won't need until the last lecture.
But since it is in this chapter of the book, I guess the idea
being, let's just put it all on the table-- all of these
derivatives that we have.
And let's look at all of them at the same time.
So I might as well write a formula for divergence.
So divergence is also an operation on vector fields
in 3-dimensional space.
Which we can think of as a dot product-- a dot product,
as opposed to a cross product, with nabla.
So in other words, it is dp dx plus dq dy plus dr dz.

So it's a very interesting operation.
It takes a vector field and it spits out a function-- not a
vector field, it spits out a function.
And the nice thing about it is-- well, so far it's not clear what it's good for.
But here's one result which might convince you
that it is important.
Which is that if we take the divergence of the curl.
So let's say we have some vector field F.
Let's first apply to it curl.
That is to say, take the cross product with nabla.
That's what's given by this formula.
So we get some expression.
And let's take, now, the divergence of the result.
In other words, you substitute these 3 components: this, this,
and this, into this formula.
These are not the p, q, and r of the original f, but these
are p, q, and r of the curl.
So you get, actually, double derivatives.
So it looks like a really ugly expression.
But actually it turns out to be zero.
So that's the good thing.
So this is something.
This is kind of a kryptonite for curls.
That's what kills curls.
A curl looks very complicated.
But there is a nice formula which actually kills curls.
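You can see all the mixed partial derivatives cancel by doing the computation symbolically, for a completely generic field. A sketch in Python with sympy, using helper names of my own choosing:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Generic components of a vector field F = P i + Q j + R k.
P, Q, R = [sp.Function(n)(x, y, z) for n in ('P', 'Q', 'R')]

def curl(F):
    p, q, r = F
    return (sp.diff(r, y) - sp.diff(q, z),
            sp.diff(p, z) - sp.diff(r, x),
            sp.diff(q, x) - sp.diff(p, y))

def div(F):
    p, q, r = F
    return sp.diff(p, x) + sp.diff(q, y) + sp.diff(r, z)

# Divergence of the curl: all the mixed partials cancel in pairs.
print(sp.simplify(div(curl((P, Q, R)))))  # 0
```

The result is identically zero for arbitrary smooth P, Q, R-- the "kryptonite for curls".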
And now we can actually assemble all of these
operations we've learned up to now.
And now we can actually see that there's some system to
this, that it's not random.
So let me explain this.
We have learned 3 different operations.
The first one was the gradient.
The gradient goes from functions to vector fields.
You know function f.
You get a vector field nabla f.
That's the first operation we've learned.
The second operation which we learned today, goes from
vector fields to vector fields-- all in r3.

And that's the curl.
It takes a vector field, and it sends it to its curl.

Maybe it's better to call it curl to kind of emphasize that
it's different from this guy.
Even though I didn't like this notation, nabla cross f.
But for the purpose of this diagram, maybe I'll
just stick to curl to emphasize a difference.
And now we've learned one more, which is a divergence.
And divergence now goes from vector fields to functions.
So you start with functions and go to vector fields, and vector
fields to vector fields.
And there's another operation which goes from vector
fields back to functions.
So this one takes a vector field, and it maps
it to divergence.
So 3 different operations.
And now we've learned a very interesting aspect of this.
Since you have these 3 different operations, you
could apply 2 operations-- one after another.
You can start with a function and you can apply the gradient.
And you get this vector field.
Because it's a vector field, we can apply to it a curl.
And what do we get this way?

Zero, right?
This is exactly the theorem which I have formulated.
In other words, if you apply these operations in succession, and you take the curl of nabla f, you get zero.
This is this formula.
Because again, curl is the same.
Well maybe, let me write it one more time.
So divergence we can write like this.
And curl, we can also write as cross.
This is just 2 different choices of notation
for the same thing.
So you see, apply these operations one after
another and you get zero.
But this zero has an arrow.
That's interesting, right?
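The first composition-- curl of a gradient-- can also be verified symbolically for a generic function. A sketch with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Gradient of f.
grad_f = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))

# Curl of the gradient: each component is a difference of mixed
# partial derivatives, which are equal, so everything cancels.
curl_grad = (sp.diff(grad_f[2], y) - sp.diff(grad_f[1], z),
             sp.diff(grad_f[0], z) - sp.diff(grad_f[2], x),
             sp.diff(grad_f[1], x) - sp.diff(grad_f[0], y))
print(curl_grad)  # (0, 0, 0)
```

So curl(nabla f) is the zero vector for any smooth f, which is exactly the arrow-zero on the diagram.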
What about if we apply this one and then this one.
So that means, take a vector field.
Take its curl, and then apply divergence, and also get zero.
That's this formula, right here.
It's the same as writing it like this.

This is zero without an arrow, because it is a function--it's
a function zero.
It's not a vector zero.

So you see, we have 3 different operations.
And the operations have this property that if you apply 2
of them in sequence, you'll get zero.
I would like to contrast that with something we
discussed last time about taking boundary.
If you have a geometric object, like a domain we discussed, we can take its boundary.

So we just get this.
And let's apply boundary one more time.
Is there a mistake?
What if you take the gradient of divergence?
I see.
So in other words, you want to apply it like this.
Well you get what's called a Laplacian, I think.
No a Laplacian is the other way, sorry.
See the point is, that actually that would be like going up.
That would be going back from the bottom to the top.
And in fact, we should only be going down.
So it doesn't loop.

In a way, it's a good idea.
But it's important, here, to go down, just like for boundaries.

The point that I'm trying to explain is that you have an operation which makes you go one step.
And if you take it twice, you get zero.
And what I'm trying to explain is, it's exactly like
taking the boundary.
When you take the boundary, you lower dimension by 1, you see?
If you take boundary one more time, you get nothing.
So this, by the way, is another zero.
This means an empty set.
For all purposes it's like zero.
It's nothing.
So the point is, this is a very intuitive concept-- boundary.
Any geometric object has a boundary.
And once you realize that there is such a thing as boundary,
you can just start thinking about it.
What is a boundary of a boundary?
Why not?
You can take a boundary.
Why not take a boundary of a boundary?
And at first it looks like a good idea.
But then you realize, that actually, it always
gives you an empty set.
So there's this very interesting geometric structure
taking the boundary.
And it has this very interesting property which
mathematicians call nilpotency.
It's a nilpotent operation.
Nilpotent meaning that if you square it, you get zero.
So now you try to look for such an operation, algebraically, in
the world of functions and vector fields.
And this is what you find.
You find that there indeed exists such operations
which have the same nilpotent property.
That if you square it, if you apply it twice, you get zero.
And this is a very important aspect of this formula that we
are trying to establish-- the formulas relating integrals
in different dimensions.
This is what I call-- in this guiding principle--
this is what I call d.
You see, I explained already many times, that the general
guiding principle we're pursuing here, has to do
with integrating over domains and boundaries.
And here you have some algebraic object,
and its derivative.
So going from left to right, you take the boundary.
It's like this.
And going from right to left, you take the derivative.
And now here is a derivative that I'm talking about.
Here I lay it all on the table.
If you start with a function when you're sort of going
from zero to 1, you take that gradient.
And then you take the curl.
And then you take the divergence.
And this operation, d, is kind of an algebraic operation.
It lives in a different world.
But it turns out, that first of all, there is a trade-off
between these 2, which gives us this beautiful identity.
And also it has exactly the same property as b.
In other words, d squared is zero.
Just like b squared is zero.
So there is this parallel analogy between the geometric
world and the algebraic world.
In which taking boundary on the geometric side corresponds to
taking the derivative, in this sense, on the other side.
And this formula is just an expression of the fact that
one operation is kind of dual to the other.
So in some sense, they are one and the same.
So that's what these formulas are all about.
They look complicated.
They look very abstract.
But in fact, they are all part of a very conceptual
phenomenon-- a very conceptual thing-- and a very important
phenomenon in mathematics.
And as a bonus, we can now write down the
Maxwell's equations.

So this is a curl.
This is a divergence.
And now I can write down Maxwell's equations, because these equations use curl and divergence.
Maxwell's equations are the equations which govern electromagnetism, which govern the behavior of electromagnetic fields.
Well you know first of all, there are electric fields.
For example, if you have charged particles, if they
have opposite charges, they will attract.
If they have the same charges they will repel
each other, right?
That's an electric field.
If you put a charge somewhere.
If you put, for example, the nucleus of an atom,
which consists of some protons and neutrons.
And protons have positive charges while neutrons are neutral.
But protons have a positive charge so they create a field
which would grab and attract electrons which are negatively
charged particles.
So it's a very important field.
So there's also a magnetic field.
That's the field that you have when, you know.
You have a magnet.
Someone told me a story that they dropped a key
at night in a puddle.
What do you do?
And there is a very nice solution.
You take a big magnet and you put it in the water.
And boom, you've got your key.
So that's magnetic field.
So that's also very important, and not just for finding
keys, I suppose.
So now you look for equations which govern
electromagnetic fields.
And here are the equations which were written in the
19th century by Maxwell, as well as other people.
So usually the notation we choose is like this: e for
electric field, and B is for magnetic field.
These are just vector fields, just like the kind of vector
fields which we talked about.
Like on the board, here is a vector field.
So electromagnetic fields are just vector fields like this.
But they change.
You know they change in space.
And they also depend on time.
And these equations describe how they depend on
space and time.
And the equations look surprisingly simple,
deceptively simple, perhaps I should say.
But let me assume that there are no charges and currents.
So it's kind of an equation in a vacuum, if you will, just to simplify it-- just to give you a flavor of what these equations look like, no charges or currents.
So the equations just involve the divergence and the curl.
So that's the first equation.
See, this is the divergence of e, is zero.
And you have the divergence of B, is zero.
And then the curl of e is minus 1 over c, db dt--
the derivative with respect to time.
And the curl of B is 1 over c, de dt.
Now what is c? c is the speed of light, which is approximately equal to 300,000 km/s.

It's very fast.
But it's not infinite, it's finite.
So this is very important, actually.
So these are the equations.
We wouldn't be able to even read these equations, to even
understand these equations, if we did not define the
curl and the divergence.
What's perhaps more important is that by using these formulas that we are proving now-- these guiding principles in their various incarnations-- we can actually derive very important laws.
For example, Gauss's law, Ampère's law, and things like that.
But if you just look at this, I think it's amazing that such complicated interactions, which govern basically the entire universe-- these 2 forces in the entire universe-- can be summarized basically on half of a blackboard in this very neat and beautiful way.
And just looking at this formula, you already see some
very important things, which were in a sense, the
cornerstones of physics of the 20th, the 21st, and so on, centuries.
The first thing you see is that you have this number.
The speed of light does not depend on anything.
It's just a constant here.
Which is actually an incredibly powerful statement.

In our everyday life, we're used to the fact that
velocity or speed depends on who the observer is.

If you are standing here, and there is a bicycle going in
this direction, so let's say, with some speed.
This is the speed which you see.
But if you are going on a bicycle is this direction, or
if you're walking with some speed, you will observe
a different speed.
But with speed of light, it's not like this.
Speed of light will appear, in the same way, to this observer
and to this observer.
You don't add the velocities one way or the other.
It is constant.
This is what Maxwell's equations indicate.
And this was one of the first arguments for Einstein to
create his special relativity theory.
He said look, these equations work.
So what they show us is that the speed of light is constant in all inertial coordinate systems.
That was one of the first steps in creating special relativity.
I'm telling you all this to convince you that the kind of
stuff we're doing, is not just a bunch of formulas that
you need to memorize.
But actually this stuff makes sense.
This stuff is very important.
So we'll continue on Tuesday.
And go Bears!