Changes to ECMAScript, Part 2: Harmony Highlights - Proxies and Traits


Uploaded by GoogleTechTalks on 27.04.2010

Transcript:
>> MILLER: ...flexible and expressive metaprogramming system which is also securable. And those
constraints seemed ideally suited to the work that we needed to bring to JavaScript. So, here's
Tom.
>> CUTSEM: Thank you, Mark, for that introduction.
So, I'm going to start off by briefly describing the new features added to ECMAScript fifth
edition. This is the latest edition of JavaScript, just published in December of last year. I'm
trying to get everybody on the same page here by introducing some of the features of this
language. So, one of the more important features added to ECMAScript 5 was the addition of strict
mode. In strict mode, the language helps the programmer guard against common pitfalls: it
rejects certain confusing features of the language, like the "with" statement, and it will
throw exceptions rather than failing silently, for example when you try to assign to non-assignable
properties. The other major addition in this edition is an object-manipulation
API. For those of you who are familiar with Java, you could consider it to be the
"java.lang.reflect" of JavaScript. So, what does this API look like? Well, it's an API that
allows you to distinguish data properties from accessor properties and allows you to define
certain property attributes. So here's an example of a point declaration. The point has a data
property called x and an accessor property called y. Data properties are just bound to values.
Accessor properties are also known as "getters" and "setters"; they will run when the property
is accessed or assigned to. And so, in ECMAScript 5 there are these new functions defined on
the Object built-in, for example "Object.getOwnPropertyDescriptor". These functions take as a
first argument the object that you want to inspect, and as a second argument the name of the
property for which you want meta-level information. And what it returns is an object called a
property descriptor, and depending on whether you're inspecting a data property or an accessor
property, this property descriptor will have a different API. So, for example, in the case of
data properties, you see that it has a slot called value that says the property is bound to
the value five, and it has a number of what we call attributes, like writable, enumerable and
configurable. Writable determines whether you can assign to it. Enumerable determines whether
it shows up in for-in loops, and configurable determines whether you can delete a property or
whether you can change its attributes.
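The descriptor inspection just described can be sketched in plain ES5; the point object and its property names are assumptions for illustration:

```javascript
'use strict';

// A point with a data property x and an accessor property y.
var point = {
  x: 5,
  get y() { return this.x * 2; },
  set y(v) { this.x = v / 2; }
};

var xDesc = Object.getOwnPropertyDescriptor(point, 'x');
// Data property: a `value` slot plus the three attributes.
console.log(xDesc); // { value: 5, writable: true, enumerable: true, configurable: true }

var yDesc = Object.getOwnPropertyDescriptor(point, 'y');
// Accessor property: `get`/`set` slots instead of `value`/`writable`.
console.log(typeof yDesc.get, typeof yDesc.set); // 'function' 'function'

// Object.defineProperty adds (or redefines) a property with explicit attributes.
Object.defineProperty(point, 'z', {
  value: 42,
  writable: false,     // assignments will fail (throw in strict mode)
  enumerable: false,   // hidden from for-in loops and Object.keys
  configurable: false  // cannot be deleted or reconfigured
});
```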
And if you ask for the same information for an accessor property, it instead has a get and a
set slot that contain the actual accessors, the getter and the setter, okay? So, you can query
the system for this information. There is also "Object.defineProperty", which allows you to
either add new properties with custom attributes or to redefine some of the attributes of
existing properties. There are lots of other methods; I won't describe them all. One very
interesting method is "Object.create".
So, it's a function that, given an object that will act as a prototype and this strange record
here, will create a new object. This record is called a property descriptor map, that's what I
call it. It's an object whose keys represent the keys of the object you're about to define,
but whose values are not directly the values; rather, they describe the property descriptors
associated with those values, okay? So, this is a property descriptor map. It allows you to
create new objects sort of at the meta-level. You can specify more information about your
object than you normally can with JavaScript literals, okay?
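Here is a small runnable sketch of "Object.create" with a property descriptor map, together with the tamper-proofing calls the talk turns to next; the point example itself is an assumption:

```javascript
'use strict';

// Creating an object "at the meta-level": the second argument to
// Object.create is a property descriptor map, not a map of plain values.
var point = Object.create(Object.prototype, {
  x: { value: 5, writable: true, enumerable: true, configurable: true },
  y: {
    get: function () { return this.x * 2; },
    enumerable: true,
    configurable: true
  }
});

console.log(point.x, point.y); // 5 10

// Tamper-proofing: freeze implies seal, which implies preventExtensions.
Object.freeze(point);

var threw = false;
try {
  point.x = 99; // throws TypeError in strict mode; fails silently otherwise
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw, Object.isFrozen(point)); // true true
```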
So, one final addition to ECMAScript 5 which I'd like to discuss before I discuss proxies is
the addition of ways to create tamper-proof objects. In JavaScript there are now these three
methods called "Object.preventExtensions", "Object.seal", and "Object.freeze", and, for
example, if you call "Object.preventExtensions" on an object, afterwards you cannot add new
properties to that object. If you seal it, you additionally can't delete properties. If you
freeze it, you can additionally not assign to properties. So a frozen object really is frozen:
clients can't add stuff to it, can't delete stuff from it, and can't assign to its properties,
okay? So, I'm going to need these functions later when I discuss proxies. So, proxies are a
new addition that we are proposing for ECMAScript
Harmony, that's the code name for the next edition of the ECMAScript language. And so they
complement the existing metaprogramming API by allowing JavaScript programmers to define
generic handling of property access. This will basically allow JavaScript programmers to
write generic wrappers; generic wrappers are useful for enforcing access control, for tracing,
for gathering profiling information, and so on. For those of you who are familiar with
SpiderMonkey, which is the engine running Firefox: it has this non-standard method called
"__noSuchMethod__". Well, with this feature, you can do all of the stuff you can do with
"__noSuchMethod__" in a standardized way and in a more stratified way; I will explain later
what I mean by that. The other thing is that these dynamic proxies, as I will soon describe,
don't only allow you to generically handle property access. They will also allow you to
generically handle other operations, which basically allows JavaScript programmers to create
fully virtualized objects. That would mean objects that don't really exist as objects in the
system, like, for example, persistent objects, which are stored on disk, or remote objects,
which live inside another address space and for which you can create local proxies.
Furthermore, it allows them to emulate the peculiar behavior of what are called host objects.
Host objects are JavaScript objects that have differing semantics because they're actually
implemented as native, built-in objects. So, first of all, I'd like to point out that with the
new metaprogramming
API I just introduced, it's already possible today in ECMAScript 5 to implement what I would
call static proxies. For example, say you want to create a tracer abstraction that will simply
trace all property accesses on a given target object. What you can do is go ahead and create
an object that will have all of the same properties as the target object. I'm not going to go
into the details here; the important thing is that for each property in the original object
I'm defining a new property with the same name, and I'm defining all properties as accessors
that include some tracing behavior and then go ahead and delegate the access to the original
object. Of course, the tracing here is very simple; you can imagine how this could generalize
to collecting profiling information and so on. So, the problem with creating proxies in this
way is that they don't reflect structural changes made to the original object, or vice versa.
That means that if you add new properties to the object you are wrapping, or add new
properties to the proxy, or delete properties, these changes won't be reflected after the
proxy has been created. That's why I call it a static proxy: it's not really connected to the
object it's proxying, okay? So, what dynamic proxies will allow you to do is to create proxies
that can be made to reflect the structural changes to the object they're wrapping.
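The static-proxy limitation can be demonstrated in plain ES5; `makeTracer` is a name invented for this sketch:

```javascript
'use strict';

// A "static proxy": snapshot the target's properties once, wrapping each
// one as a tracing accessor pair that delegates to the target.
function makeTracer(target) {
  var tracer = {};
  Object.getOwnPropertyNames(target).forEach(function (name) {
    Object.defineProperty(tracer, name, {
      get: function () {
        console.log('get', name);
        return target[name];
      },
      set: function (val) {
        console.log('set', name);
        target[name] = val;
      },
      enumerable: true,
      configurable: true
    });
  });
  return tracer;
}

var subject = { a: 1 };
var traced = makeTracer(subject);
console.log(traced.a);      // 1, after logging "get a"

// The limitation: structural changes are not reflected.
subject.b = 2;
console.log('b' in traced); // false -- the snapshot is stale
```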
So here, we are creating the same kind of tracer object; we're going to trace all property
accesses on this object here. So, we're proposing the addition of a new built-in called Proxy.
It has this method called create, and when you invoke "Proxy.create", it returns a proxy whose
behavior will be controlled by this object here, and this object is called a handler object.
It defines a number of methods that will be invoked whenever a property access is performed on
the proxy. And so this allows you to intercept property accesses and property assignments,
perform the tracing behavior, and then delegate the property access to the original object,
okay? So, I'm going to have to introduce some terminology here from the reflection community.
These proxy objects we are dealing with here are what are called base-level objects. They are
regular JavaScript objects, so the application will directly communicate with these objects.
Now, this handler object that controls the behavior of the proxy is what's called a meta-level
object. Its sole purpose is to describe the behavior of another JavaScript object, okay? And
so, proxy and handler are implicitly connected by the call to "Proxy.create". Now, this
"Proxy.create" call, you give it a handler and a prototype; this prototype argument is an
object that will serve as the prototype of your proxy. Now, whenever, in the JavaScript
runtime, some object accesses a property of this proxy object, let's say property foo, this
will get reified, or represented at the meta-level, as a call to the handler's get method. And
we call these methods, methods like get, traps, by analogy with operating systems: you're sort
of trapping the property access and representing it at the meta-level. And so, you see that
the get trap here takes as arguments the proxy on which the access was performed and the name
of the property being accessed. Likewise for property assignment: whenever the code executes a
property assignment where the receiver is a proxy, this will get reified at the meta-level as
a call to the handler's set trap, which takes all of the necessary arguments, okay? So, method
invocations in JavaScript are not really special here. When you perform an invocation in
JavaScript, what's actually going on is that you're retrieving the property with the name foo,
and then the system expects it to be a function and will apply the function, passing the proxy
as the receiver along with the arguments. So from the point of view of the meta-level, there's
nothing special; it will just trigger the handler's get trap, okay? Now, the title of this
slide is stratified API, so what does this mean? It means that a proxy and its handler are
cleanly separated, and these method names like get and set have no particular meaning for the
base-level application, so let me illustrate this by means of an example.
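As an aside, this stratification can be sketched with the ES2015 Proxy API that this proposal eventually evolved into; the standardized form takes a target object rather than a handler-plus-prototype, but the stratified design is the same:

```javascript
'use strict';

// A target that happens to define a property named "get".
var target = { get: 'just a property named get' };

var proxy = new Proxy(target, {
  // The handler's get trap intercepts every property access on the proxy.
  get: function (tgt, name) {
    console.log('trap: get', name);
    return tgt[name];
  }
});

// Accessing a property named "get" is nothing special at the base level:
// it simply triggers the handler's get trap with name === 'get'.
console.log(proxy.get); // 'just a property named get'

// Some operations are deliberately not trappable:
console.log(typeof proxy);     // 'object' -- the handler is never consulted
console.log(proxy === target); // false -- a proxy has its own identity
```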
If for some reason your application defines properties called get and set, which is a
perfectly viable thing to do, and someone accesses the get property of a proxy, what will
happen is that the get trap of the handler is invoked, because we're doing a property access,
but now the property name being accessed is just called get. So, there is nothing special
about the name get at the base level. That's an important part of stratification: the
namespace of the special names is defined on the handler, which is completely separate from
the proxy. Now, another reason why you could call this API stratified is that the prototype
property of the handler is completely distinct from the prototype of the proxy. The proto
property here is defined when you create a proxy, and it specifies what the prototype of the
proxy object is. The handler can have a completely separate prototype, and these two don't
interfere with one another; they're completely distinct, okay? So, I've talked about
intercepting property access and property assignment, but this API actually reifies a lot
more than just that. For example, if the base-level code executes an in operation, asking
whether a certain name is in a certain proxy, this will trigger the handler's has trap,
passing it the property name, and the has trap is expected to return a boolean that will then
be used at the base level. Likewise, if you try to delete properties from proxies, this
actually gets trapped at the meta-level, so the handler can provide a sensible semantics for
property deletion. More interestingly, if you perform a for-in loop over a proxy, the proxy is
allowed to specify what its enumerable properties are. What will happen is that the enumerate
trap of the handler is called, and this enumerate trap is expected to return an array of
property names that are the enumerable properties of the proxy. From that point on, the
implementation will perform a plain for loop over this return value, access the different
properties, and perform the for-in loop body on those properties. So, proxies can even
intercept for-in loops. Likewise, this new meta-level
API that I just described is also properly intercepted. If objects try to add new properties
to proxies, using for example the "defineProperty" method, this again will be trapped: it will
be reified as a call to the handler's defineProperty trap, and the handler can then define
what it means to define properties. So, proxies can trap quite a lot of operations defined on
objects, but they can't trap everything, and there are good reasons for that. There are
certain operations which you don't want to make dependent on user-level code. For example,
proxies have their own distinct object identity, and if you compare them using triple-equals
to any other object, well, this operation is performed entirely by the engine; the handler has
nothing to say in this regard. That's because we want to make sure that triple-equals
maintains all of the properties that programmers expect it to have, like reflexivity,
transitivity, and monotonicity, in the sense that if two objects ever compare as triple-equal,
then you expect that relationship to hold throughout the entire lifetime of the program, okay?
Likewise, notice that because we force the meta-level programmer to specify the prototype of a
proxy at creation time, this allows the system to answer the "getPrototypeOf" query, which
returns the prototype of an object, without asking the handler for the prototype. Again, there
is nothing in the current ECMAScript standard that allows objects to have a mutable prototype
link, and we didn't want this meta-API to break that invariant, so handlers cannot break it.
And because this prototype link is fixed, if programmers use instanceof, the outcome of this
test will not be affected by the handler if the left-hand side is a proxy. And finally, if you
perform a typeof operation on a proxy, this, again, will not allow the handler to determine
what the type of the proxy is; it will simply return "object". So, clearly there is a
distinction between operations which we allow handlers to implement and operations which we
don't, for the purposes of maintaining internal consistency. So, if you're interested in what
the full API looks like, this is it. It's about 12 different traps defined on the handler,
each corresponding to a different base-level operation. So this appears to be
quite complex, but, as Albert Einstein once said, "Everything should be made as simple as
possible, but not simpler." JavaScript is a complex language, and trust me, this is sort of
the minimal set of traps you need to be able to faithfully emulate the behavior of a
JavaScript object. Clearly, if the language is complex, it will show in the metaprogramming
API. So, having introduced this concept of proxies, we can now look at regular objects from a
different point of view. In current JavaScript systems we have a base level, which is
JavaScript territory: that's where your JavaScript objects live, that's where you can define
your own objects. The meta-level is currently completely VM territory: the JavaScript
programmer has no access to it, and it's usually implemented in something like C++. So normal
objects, which are not proxies, can actually be thought of as proxies whose handler is
specified by the virtual machine: the handler is sort of fixed by the virtual machine and
implements the default JavaScript semantics. And that's an interesting model, because there
already are some deviations from it, which are what's currently known as host objects. These
are objects whose semantics is currently implemented in C++ in the virtual machine, but which
deviate slightly from the built-in JavaScript semantics, okay. So, what proxies enable
JavaScript programmers to do is to invade this meta-level world and define new semantics for
JavaScript objects. Okay. So, you are really giving a lot of power here to JavaScript
programmers, and this is quite important, as Brendan Eich recently said on the es-discuss
mailing list. This basically allows JavaScript programmers to experiment with useful new
semantics for the language without either the VM implementers or the standardization
committee having to be a bottleneck for innovation. So it really is a game changer. And while
I have this figure up here, what it allows you to do, furthermore, is that proxies can be
used to recreate the behavior of host objects entirely within JavaScript, allowing these
objects to be sort of self-hosted, so they no longer depend on VM internals. But I'd like to
stress that this API doesn't allow you to redefine the semantics of existing JavaScript
objects, okay? These links really are hardwired, and adding proxies to the language doesn't
allow JavaScript programmers to redefine the semantics of existing objects, only of new proxy
objects. That's very important for two reasons. The first is security: it's not because you
have a reference to some object that you should be allowed to install a new handler on that
object and completely take over control of that object, okay? The second reason is
performance. Of course, the semantics of these objects is heavily optimized in virtual
machines, and we don't want our metaprogramming API to interfere there. The metaprogramming
API should only have an overhead on proxy objects, so it only costs when you actually use it,
okay? So most JavaScript programmers
will not actually be interested in redefining the complete semantics of a JavaScript object;
they will rather want to make small changes to the existing behavior of JavaScript objects.
And to that end, one of the most useful handlers you can define is a handler whose job it is
to simply forward all operations performed on its proxy to a certain target object. So, this
forwarding handler here takes an object that you want to wrap, stores it in a target property,
and then goes ahead and implements the entire handler API by simply forwarding the trapped
operations to the target object, okay? So the situation is like this: a proxy traps all
operations and reifies them on the handler; the handler dispatches them on to a target. This
allows you to implement small deviations from the existing semantics. So here's a very simple
example: a profiler that simply constructs a histogram, counting the number of times certain
properties have been accessed. If you want to create a simple profiler wrapping a certain
target, you start off by instantiating a forwarding handler, which encapsulates the default
semantics of the language. Then we just override its get trap so that it counts the access and
then delegates the call to the wrapped object, okay. And so this abstraction here returns a
proxy, which wraps the target object, plus a method that allows clients to retrieve the
statistics. So, if you have a certain subject that you want to monitor, you just make a simple
profiler for it, run your application with the proxy, and when your application has run you
can read out the statistics from this profiler, okay? This shows that you don't always have to
implement the full handler API if you want to make good use of this metaprogramming API; you
just have to define the delta with respect to the default semantics. So up to this point I've
not talked about functions at all.
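The forwarding-handler-plus-delta pattern survives in the standardized ES2015 API, where any trap you omit forwards to the target automatically and `Reflect` provides the default behavior explicitly. A sketch of the profiler in that style; `makeProfiler` and the subject object are assumptions:

```javascript
'use strict';

// A profiling wrapper: count every property access (the "delta"),
// then forward to the target (the default semantics).
function makeProfiler(target) {
  var counts = Object.create(null);
  var proxy = new Proxy(target, {
    get: function (tgt, name, receiver) {
      counts[name] = (counts[name] || 0) + 1;  // record the access
      return Reflect.get(tgt, name, receiver); // then forward as usual
    }
    // All other traps are omitted, so they forward automatically.
  });
  return { proxy: proxy, stats: function () { return counts; } };
}

var subject = { foo: function () { return 1; }, bar: 2 };
var profiler = makeProfiler(subject);

profiler.proxy.foo(); // method invocation is just a get trap plus a call
profiler.proxy.foo();
console.log(profiler.proxy.bar); // 2
console.log(profiler.stats());   // { foo: 2, bar: 1 }
```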
So, in JavaScript, functions are objects, but not quite: JavaScript functions are objects, but
additionally you can also call them and you can also construct them. So, they have some
capabilities that normal objects don't have. And if you want to reify this at the meta-level,
really the best way we could come up with is to actually distinguish between object proxies
and function proxies. So if you want to create a proxy for a function, you don't call
"Proxy.create", you call "Proxy.createFunction". This returns a function proxy whose behavior
is again determined by a handler, and this handler is completely identical in API to the
handler you pass to "Proxy.create". So it completely handles all of the duties of a function
as an object. But additionally you can call and you can construct functions, and that's why
this createFunction method also takes a call and a construct trap. These are functions that
will be called when the function proxy is called or constructed.
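In the API that was eventually standardized in ES2015, these two traps survive as the apply and construct traps on an ordinary Proxy over a function; a rough sketch:

```javascript
'use strict';

function targetFn(x) { return x; }

var fnProxy = new Proxy(targetFn, {
  // Triggered by a plain call: fnProxy(...)
  apply: function (tgt, thisArg, args) {
    return 'called with ' + args[0];
  },
  // Triggered by construction: new fnProxy(...)
  construct: function (tgt, args) {
    return { mode: 'constructed', arg: args[0] };
  }
});

console.log(fnProxy(1));     // 'called with 1'
console.log(new fnProxy(2)); // { mode: 'constructed', arg: 2 }
console.log(typeof fnProxy); // 'function' -- not trappable, as in the talk
```

This is what gives a foolproof way to distinguish calls from constructions: each goes through its own trap.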
For example, if we call this function proxy, what's actually going to occur at the meta-level
is that we're going to call this call trap function instead, okay. Likewise, if code
constructs the function by prefixing the call with the new keyword, this will trigger the
construct trap instead. And this is actually very interesting, because this would, for the
first time, allow JavaScript programmers to faithfully distinguish between calling and
constructing. There are various ways in which you can try to figure out whether your function
was called with the new keyword or not, but they're not foolproof. This is a foolproof method
to distinguish between calling and constructing. So, again, functions are objects: you can
store properties in them and access them, et cetera, and all of these accesses will simply be
reified as traps on the handler, entirely analogous to object proxies. And as with object
proxies, there are certain aspects of functions which we choose not to intercept. For example,
if you ask what the type of a function proxy is, it will always return "function"; it won't
consult the handler for that. We want to uphold the constraint that the typeof a function is
simply "function". Likewise, notice that "Proxy.createFunction", unlike "Proxy.create",
doesn't take a prototype as a second argument. Why is that? Well, the system enforces that for
function proxies, if you query the prototype, it will simply return "Function.prototype",
since that's what functions are supposed to delegate to. Okay. So, I've talked about these
various operations that allow you to create tamper-proof objects in ECMAScript 5, like
"Object.freeze", "Object.seal", and "Object.preventExtensions".
And so the problem here is that if we have proxies and we don't enforce these constraints,
then programmers can be very surprised. For example, if you freeze an object, as a programmer
you know that at that point no more properties will be added to the object. But if the object
is a proxy with a handler, the handler can decide whatever it wants. So we have to somehow
restrict the power of the handler. What will happen is that if you call any of these three
operations on a proxy, this will trigger the handler's fix trap, and this fix trap either
returns a property descriptor map or undefined. If it returns undefined, that means the
handler isn't willing to fix the proxy, and at that point the system will throw a "TypeError"
informing the programmer that this operation is not allowed. If it does return a valid
property descriptor map, then the system will use that property descriptor map to go ahead and
create a new object. It will then perform the corresponding operation on that object: if you
froze the proxy it will freeze the object, or otherwise seal it or make it non-extensible. And
then, as a final step, the proxy will become this new object, okay. Of course, "become" is an
operation you cannot implement in JavaScript itself, but VM implementers do have quite easy
ways to accomplish this. So really, you should think of proxies as being in one of two
possible states. A proxy is born in what we call a trapping state, in which it intercepts all
of these operations and passes them through to its handler. But from the moment it's fixed, it
transitions into a terminal fixed state, and at that point it no longer needs its handler; it
will never again invoke it. For all intents and purposes it is now a regular object, and
because it is a regular object, we can enforce the tamper-proofness of freeze, seal and
preventExtensions. So I've presented this proposal at the ECMA TC39
meetings. This proposal is now an official proposal for ECMAScript Harmony; you can find the
detailed semantics at the given URL. And there exists a prototype implementation: Andreas Gal
from Mozilla has actually implemented an extension of TraceMonkey that supports this, and this
has allowed me to write a couple of microbenchmarks. So, here I have measured the time it
takes to perform an operation on an object versus the time it takes to perform the same
operation on a proxy that simply performs the default forwarding behavior, so a no-op proxy
that simply delegates the same operations to the wrapped object. It's interesting to see that
"typeof", "===", "getPrototypeOf", et cetera incur no overhead; that's logical, because they
are independent of whether an object is a proxy or not. Most of the other traps incur an
overhead of between 1.2x and 1.8x, which is what you would expect, because they have to
perform the original operation anyway and you also pay for the overhead of an extra method
invocation on the handler object. The enumerate trap is somewhat off; that's because the API
is currently very awkward: the handler has to construct an array of strings and pass it to
the implementation, and then the implementation has to perform a loop over it. So the goal
here is that if ECMAScript Harmony gets a good proposal for generators or iterators, we will
adapt our API to fit that new proposal, which will probably bring down the cost of this
enumerate trap. So to summarize dynamic proxies: there are really two
main use cases here. First of all, they allow JavaScript programmers to write generic
wrappers, for example for access control, profiling, writing adaptors for existing libraries,
et cetera. Furthermore, because we allow so many operations to be intercepted, you can
actually go ahead and really create virtual objects: objects that represent persistent
objects, objects that represent remote objects; you can emulate the behavior of certain host
objects. These are all very useful things. With respect to the metaprogramming API as I've
presented it here, I would call it robust because it's stratified: the namespace of the
handler is completely separate from the namespace of the objects that you're intercepting. And
furthermore, we don't blindly allow all operations to be intercepted; certain operations like
"typeof" and "===" are not intercepted. It's secure in the sense that you can't take over
existing objects: you can't redefine the behavior of existing objects, and furthermore, the
properties of tamper-proof objects are maintained, so proxies [INDISTINCT]. And as far as
performance goes, the important thing here is that there is no overhead for non-proxy objects,
so you only pay the overhead when you really need the metaprogramming API. So, how am I doing
on time? Yeah, I think we can continue to the second topic of this talk. And now for something
completely different, I'd say: traits. So, what are traits? Traits are a way to do object
composition. You can
think of them as an alternative to mixins or multiple inheritance, really. Essentially, a
trait provides a set of methods and requires a set of methods in order to implement those. And
the composition of different traits I would call robust, because name clashes that occur when
two traits define properties with the same name lead to explicit conflicts. So contrary to
mixins or multiple inheritance, where one of the methods will be preferred depending on the
order of the composition, with traits you will always get a name clash, no matter what the
ordering is, and the name clash must be explicitly resolved before you can actually use the
trait. Furthermore, the composition of traits is a commutative and associative operation. What
this boils down to is that the order of your composition is irrelevant, and thus more
declarative; it's easier to reason about larger compositions as a programmer. So traits first
appeared in Squeak Smalltalk circa 2003, and in their short lifetime they have received quite
some adoption in other programming languages.
For example, they've been included in Perl 6, PLT Scheme uses them extensively in its
libraries, Guy Steele's new Fortress language is based upon them, et cetera. So here at
Google, together with Mark, I defined this library we call "traits.js", which allows you to
perform trait composition in JavaScript. And we were motivated by two reasons. First of all,
of course, trait composition is more robust than the existing composition mechanisms that
JavaScript offers, which are prototypal inheritance and mixin patterns where you simply copy
all of the properties of one object and add them to another object. So it's robust in that
way. Another important motivation was that even though ECMAScript 5 allows you to define
tamper-proof objects with this "Object.freeze" call, it's still fairly wordy, fairly
inconvenient, to create your own tamper-proof objects. In "traits.js", instances of traits
will, by default, be tamper-proof objects, so it's also an easy way of creating tamper-proof
objects in ECMAScript 5. So the library is based on the property descriptor API that I
introduced at the beginning of this talk, but we define a small backwards-compatibility layer
such that it will also run gracefully on existing ECMAScript engines, except that, of course,
trait instances in an ES3 system will not be tamper-proof. The library works both in the
browser and standalone, server-side, for example. So the core API of this library is fairly
minimal. If you include the traits library, there are basically four things you can do:
construct new traits, compose existing traits, resolve conflicts between traits, and
instantiate traits into objects. So constructing traits, you
do that by calling this Trait constructor, capital "T". It takes as its sole argument a record
describing the provided and the required properties of the trait. Provided properties are just
normal properties. If you want to express a required property, you define a data property
bound to "Trait.required". "Trait.required" is kind of like a singleton value, like null or
undefined, which is exported by the traits library. So trait composition is performed by the
"Trait.compose" operation. It takes a variable number of traits and returns a composite trait.
Again, the ordering of the traits here is completely irrelevant. Trait resolution allows you
to deal with conflicts. It allows you to avoid conflicts by renaming properties; for example,
in this trait you can rename the property a to c. And if you rename something to undefined,
this basically means you don't want it anymore: you exclude the property from the trait. And
then finally, if you want to instantiate a trait: just like there is this "Object.create"
method in ES5, we define "Trait.create", which has a similar signature. It takes a prototype,
which will be the prototype of the trait instance, and a trait to instantiate. And in this
case, the object o being returned here is a frozen, tamper-proof object. So let me give a
brief example of what you would use traits for. So here's a traits that captures the
reusable behavior of innumerability. So an enumerable trait defines properties like map
filter and reduce higher order operations. If only the composer wants to give it a "forEach"
methods, that will enumerate the sequence. And so based on "this.forEach" methods, so
it can access it using [INDISTINCT], it can provide these higher order operations, okay.
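As a rough illustration of this idea, here is a plain-ES5 sketch (not the actual traits.js source; the name TEnumerable is illustrative) of higher-order operations written purely against a required this.forEach:

```javascript
// Sketch of the reusable "enumerable" behavior: every method here is
// defined purely in terms of a required this.forEach.
var TEnumerable = {
  map: function (fn) {
    var result = [];
    this.forEach(function (e) { result.push(fn(e)); });
    return result;
  },
  filter: function (pred) {
    var result = [];
    this.forEach(function (e) { if (pred(e)) { result.push(e); } });
    return result;
  },
  reduce: function (fn, initial) {
    var acc = initial;
    this.forEach(function (e) { acc = fn(acc, e); });
    return acc;
  }
};

// A composer supplies the required forEach; here, an object that
// enumerates the sequence 1, 2, 3.
var obj = Object.create(TEnumerable);
obj.forEach = function (fn) { [1, 2, 3].forEach(fn); };
```

With forEach supplied, calls like `obj.map(function (x) { return x * 2; })` work as if map were defined directly on the object.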
So I'm not going to go into the details here. If you want to use this enumerable trait,
for example, say you want to create an enumerable interval, you do this as follows. So here's a
function called "makeInterval"; you give it a minimum and a maximum. It constructs for
you a half-open interval, with the minimum inclusive and the maximum exclusive. And
so the instance will be a trait instance; it will delegate to "Object.prototype". And
the trait being instantiated is the composition of the enumerable trait, which defines this
reusable behavior, and a sort of anonymous inline trait that defines the semantics of
intervals. In this case, it defines a "start" and an "end" property and a "contains" method
to check whether an element lies within the interval. And it defines the required
"forEach" method simply by viewing the interval as the sequence of integers starting from the minimum
up to the maximum, okay. So when you construct the interval by calling the "makeInterval"
function, you can then go ahead and invoke operations like "map", "reduce" and "filter", and it's
as if they are defined directly on the instance. So our traits library actually represents
traits as property descriptor maps. So when you create a trait using the trait constructor,
it will simply transform this record that you give it into a property descriptor map.
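To make the property descriptor map format concrete, here is a small plain-ES5 example (the names pdmap, p and the properties x and y are illustrative, not taken from the library):

```javascript
// A property descriptor map: keys name the properties of some
// object-to-be, values are property descriptors listing the attributes
// of each property (a data property x and an accessor property y).
var pdmap = {
  x: { value: 0, writable: true, enumerable: true, configurable: true },
  y: { get: function () { return this.x * 2; },
       enumerable: true, configurable: true }
};

// Object.create (and Object.defineProperties) consume this format directly.
var p = Object.create(Object.prototype, pdmap);

// Going the other way, Object.getOwnPropertyDescriptor recovers a
// descriptor from an existing object.
var desc = Object.getOwnPropertyDescriptor(p, 'x');

p.x = 3; // x is writable, so this succeeds; p.y now computes 6
```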
Recall: a property descriptor map is an object whose keys represent the keys of some other
object and whose values are bound to property descriptors that describe all of the attributes
of a given property. And notice also that we add additional metadata in the form of attributes
to some of these properties. So required properties, for example, have this "required" flag, and data
properties bound to functions will be tagged as methods. I will explain later why that
is the case. But it's fairly important to notice that we just represent traits
in this standard property descriptor map format that was defined in ES5. So if you
go ahead and compose, for example, these two traits T1 and T2, then you'll see that they
both define a "b" property, so when you compose them, this "b" property will be replaced by
something we call a conflicting property. And if you would then try to create an instance
of that trait, you will get an exception saying that there's a certain conflict which you
need to address. If you want to resolve that conflict, what you could do in
this case, for example, is prioritize T1's "b" property over T2's "b" property. And you can
do that by using the "Trait.resolve" call: you can create a new trait whose "b" property
is redefined to be a required property, which sort of excludes it from the trait composition.
And then later, when you compose this resolved trait with T1, we are now composing a required
property "b" and a provided property "b", and we will simply define "b" to refer to T1's
property. So if you then create an instance of this trait, you will get a valid trait instance.
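The conflict rules just described can be sketched in plain ES5 as follows. This is a simplified illustration of the semantics, not the traits.js implementation; the names compose, create, required and conflict are placeholders, and real traits operate on property descriptor maps rather than plain records:

```javascript
// Sentinel values marking a required slot and a composition conflict.
var required = { toString: function () { return '<required>'; } };
var conflict = { toString: function () { return '<conflict>'; } };

// Compose two trait-like records: a property provided by both sides with
// different values becomes a conflict; `required` defers to a provided
// property from the other side. Ordering of the arguments is irrelevant.
function compose(t1, t2) {
  var result = {};
  Object.keys(t1).forEach(function (k) { result[k] = t1[k]; });
  Object.keys(t2).forEach(function (k) {
    if (!Object.prototype.hasOwnProperty.call(result, k) ||
        result[k] === required) {
      result[k] = t2[k];
    } else if (t2[k] !== required && result[k] !== t2[k]) {
      result[k] = conflict;
    }
  });
  return result;
}

// Instantiation throws if any conflicting or required property remains,
// and freezes the resulting object.
function create(proto, trait) {
  var obj = Object.create(proto);
  Object.keys(trait).forEach(function (k) {
    if (trait[k] === conflict) { throw new Error('conflict: ' + k); }
    if (trait[k] === required) { throw new Error('missing: ' + k); }
    obj[k] = trait[k];
  });
  return Object.freeze(obj);
}
```

Composing `{a: 1, b: 2}` with `{b: 3, c: 4}` marks b as a conflict, and instantiating that composition throws; redefining b as required and re-composing with the first record resolves the conflict in its favor.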
So the final operation is trait instantiation. That's done using this "Trait.create" call,
and you give it a prototype and a property descriptor map that represents the trait. And really, what's
going on here is very similar to calling "Object.create", except that in addition to what "Object.create"
does, "Trait.create" will also throw an exception if it encounters any remaining conflicting
or required properties in the trait. Furthermore, it will bind the "this"-binding of all properties
that are tagged as methods to the new trait instance. So this is to ensure that
your object is tamperproof in the sense that clients are not able to trick the trait instance into
rebinding its "this" value. And this could also happen by accident if clients would extract
a method from a trait instance and apply it as a plain function, in which case "this" could be bound to undefined
or the global object. So to prevent that, we explicitly bind "this" upon instantiation, and
furthermore we freeze the resulting object and we freeze its methods. So this makes sure
we get tamperproof objects without the programmer having to write all of these tedious
"Object.create" and "Object.freeze" calls, okay. So there is one open issue here: because
we bind the "this"-binding of methods when we create trait instances, that means that if you
create multiple instances from the same trait, they won't be able to directly share the same
method instance. Rather, they will each have their own bound method instance. And so that
really is quite an overhead in terms of space, which is really very tricky to deal
with as a library author without support from the runtime. So, to summarize, "traits.js"
is a very minimal trait composition library for JavaScript which represents traits as
property descriptor maps, and the interesting thing here is that you can actually
pass a trait to the "Object.create" built-in ES5 function, and that will simply generate
a new object instance which is not tamperproof and which will simply ignore
the trait metadata. Now, if you pass the trait to "Trait.create", you will generate a new
trait instance which is a tamperproof object. And, of course, there is still this open issue
that I've just described to you. So if you're interested in this library and you want to play
around with it or look into it in more detail, the URL is www.traitsjs.org. So I'm about
to conclude my talk here. Just to summarize the different things I've talked to you about
today. So, first of all, with the addition of ES5 strict mode, you can really think of,
well, that subset of JavaScript at least as a very robust programming language. It really
is very robust; it really makes sure that there are no pitfalls for programmers to fall
into. Now, proxies are our new metaprogramming API for
ES Harmony, and I would characterize it as a robust metaprogramming API. It's stratified,
and we put a lot of thought into which operations should be intercepted and which operations
should not be intercepted. Then I talked about this new traits library that already
works in ES5, which really is a robust composition API. Robust in the sense that name clashes
have to be explicitly resolved, and the trait composition is declarative because ordering
doesn't matter. And as Mark discussed in the introduction, this is sort of the second
in a series of talks on changes to ECMAScript. So hopefully, Mark or Tyler will, in one of
the coming talks, talk about new abstractions that we're devising for robust event-driven
programming in ECMAScript, following this robustness theme. So if you're interested
in any of the things that I have discussed here today, we have set up a
little Google Code project called es-lab, so if you go to the following URL you can find
further information. So that concludes my talk and I would be very happy to answer all
of your questions. >> BREW: Hi. Bradney Brew. It's great to see
the metaprogramming coming to JavaScript. It's going to help a lot of frameworks. Maybe this
is a larger question, but one of the issues with JavaScript is really programming in the
large, both from a productivity standpoint and from deploying so much script. What primitives
do you see in ES5 to help with that? >> CUTSEM: With programming in the large,
I can't really think of any primitives that are directly supported. One of the more relevant
topics here is modules. So currently in ECMAScript Harmony there is a big discussion going on about
various proposals for module systems that would allow you to define modules with explicit
imports and exports, which I think would be a big help for programming in the large. That's,
yeah, that's not available in ES5 and that's not something that I've looked into.
>> As an answer to your question, we have plenty of other things on the
table for ES5: modules, classes and some kind of elementary API enforcements are...
>> CUTSEM: Yeah, for ES Harmony, right? >> Yes. ES Harmony.
>> CUTSEM: Yup. Yup. >> It'd be nice to see those, because
ES4 had some of those, but maybe it was too Java-centric. And sometimes you have to be
a bit of a JavaScript wizard to program in the large, and that's causing some folks
to go towards abstractions like Google Web Toolkit. It'd be nice to be able to kind of
help go in the other direction against that. So...
>> CUTSEM: I agree. >> [INDISTINCT]
>> CUTSEM: Well, the question was--yeah. All right, go ahead.
>> Okay. I think that, when you get to--I mean, JavaScript is an amazing
language for programming in the small and medium, in a way that I think other languages aren't.
You get started quicker; I think jQuery has shown that. But for programming in the large,
when you're getting into a really substantial code base, something like Gmail, or really sort
of [INDISTINCT] over the gap, I think you have to be too much of a wizard, knowing the
specifics of the language, to get day-in and day-out work done. For example, you
need to have a deeper understanding that it's a prototype-based language, that everything
is actually a function, understanding closures, and that makes it hard, I think, for large distributed
teams to pick it up and also have good encapsulation between different parts of the system, and
it requires, I think, a great deal of programmer good habits. And it'd be nice,
you know, one thing I love about the traits and the proxies is that it's a JavaScript
approach to these things. So it would be nice to find the JavaScript answer to
being able to have--when I see more junior programmers, and I don't mean that in a
negative way--not having to be a wizard of the language to build systems. I think
that's one of the problems with JavaScript. And that's just the productivity level. In
terms of deployment, page-level latency is so important; you want to glom everything
together and so you need lots of build-time tools, and I think when you get to a certain
scale, that really hits your productivity. And it'd be nice to have more support. That's maybe
slightly outside of JavaScript, but it'd be nice to take the larger picture like HTML5
has done; they've said, "Okay, let's look at the APIs, it's not just markup." So maybe widening
that is good. >> CUTSEM: Any more questions?
>> LYNN: Hi. Jimmy Lynn. I was just curious, for the traits stuff: do you see something
like that going directly into the language, or does it even need to, since it can just
be a JavaScript library? >> CUTSEM: So we've had some discussion about
that. I think, well, as they stand, they don't really need a lot of support from the
language. I think the problem is that having tamper-proof objects be able to efficiently
share methods is a big issue which we currently don't know how to solve at the library level,
so support for that would definitely help. I also think that if something like this
would actually make it into the language, it would probably have its own syntax. So, better
support with its own dedicated syntax would also enable the JavaScript engines to
much more heavily optimize traits, because currently the implementation has to infer that
you're actually using trait composition; it's not native to the
language. But, yeah, I mean, we were actually surprised at how usable traits
could be without dedicated syntax, because of JavaScript's excellent object literal notation.
So it's fairly doable. Any more questions? Okay. So thank you all for your attention.