>>> Good morning.
Welcome to the first conference sponsored jointly by the Financial Stability Oversight Council, its member agencies, and the Office of Financial Research.
The joint sponsorship reflects our collective commitment to promoting financial stability.
I want to thank the principals and others on the council for
their sponsorship.
And I thank my colleague, and the work of his staff, for bringing this conference about.
Most of all, I thank the staff of the OFR and Treasury, whose hard work made it possible, especially David Johnson and our events management team.
The financial crisis exposed massive deficiencies in our infrastructure of financial data.
The crisis has brought a widespread appreciation of the need for a different approach to policymaking.
Financial stability is now a more explicit policy objective.
Our analysis is focused on assessing threats to financial stability.
Policymakers are developing more tools to mitigate those threats, if you will, the macroprudential toolkit.
This conference is aimed at improving our understanding of the data we need to collect and analyze and the tools we need to develop to make that toolkit effective.
To facilitate a robust dialogue over the next two days, we brought five groups together: staff of the Council agencies, the academic and research community, market and industry participants, global regulators, and public interest groups.
Now, robust dialogue and different perspectives are essential.
Because, as we saw in the years preceding 2008, the standard data and tools used to measure risks provided little warning of the real threats that can build in the financial system.
Of course we know more now than before the crisis.
But the fundamental uncertainty about the sources of future threats requires that we be modest about our ability to judge them, even as we acquire better data and quantitative analysis.
Still, we believe the case is strong for improving the quality
and scope of those data to monitor financial activity
across the system.
This morning's conference panels are focused on how we should make those improvements, in scope, standards, and management.
Afternoon workshops will encourage robust dialogue on all three of those questions in some detail.
Our toolkit also needs improvement: recognizing the fundamental sources of instability in the financial system, becoming more forward looking, and testing the system's resilience to a wide range of events and incentives.
and it must be complemented with parallel developments in
financial risk management.
Day two will focus on four of those needs, looking at
indicators of threats to financial stability, stress
tests, best practices and financial risk management, and
new models of the financial system and the economy.
new data and tools, thus, can be very helpful in designing the
shock absorbers and guard rails we need to build a more
resilient financial system.
To round out our dialogue, we have asked four policymakers to offer their perspectives today and tomorrow:
the chair of the Council, Tim Geithner; the acting chair of the FDIC, Martin Gruenberg; Gary Gensler; and a member of the UK Financial Policy Committee and former Fed Board vice chair, Don Kohn.
our first speaker this morning, to his credit, I might add,
probably will never use the word "Macroprudential."
He is, however, a tireless advocate for strong regulation, one that looks at risks across the financial system.
Please join me in welcoming the chair of the Financial Stability Oversight Council and the secretary of the Treasury, Tim Geithner.
[ Applause ].
>> Thank you, Dick.
Nice to see you all.
Welcome.
Thanks for being part of this conference on how to think about how to prevent, or reduce the intensity and frequency of, future financial crises.
I want to compliment Dick's team for bringing us all together and setting up the Office of Financial Research. I want to thank Gerety from Treasury and his colleagues for supporting the Financial Stability Oversight Council.
They have a lot of work ahead of them, consequential work ahead
of them but they're doing a very good job.
I want to recognize a few people, some of you in the room
who were -- have been central to the conception, creation,
development of this office of financial research.
John Liechty, Lew Alexander, Adam LaVier, and Andrew Lo.
I'm grateful for the advice and support they have given us.
I think it's important to recognize the role Jack Reed
played in this.
He took this idea of establishing a strong, independent agency tasked with the important job of improving the quality of data, research, and analysis, and made it a key part of last year's financial reform law.
He was determined and relentless and he made it happen.
now, if you step back for a second and think about where we
are today, I think it's fair to say that the actions we took in
2008 and 2009 to resolve the financial crisis, together with
the financial reforms that we enacted last year, are bringing
about a very fundamental restructuring of our financial
system.
we're working to make the institutions that are
fundamental to the health of the financial system stronger and
more resilient by forcing banks to hold more capital against the
risks inherent in financial activity and to fund themselves
much more conservatively.
We're working to put in place safeguards in the broader financial markets, the derivatives, mortgage, repo, and securities lending markets, to help protect against the damage caused by financial shocks and to reduce the risk of contagion from weaker institutions to stronger ones.
We're designing better tools to resolve large, complex financial institutions so that when they make mistakes in the future, which they will, we can protect the broader economy from the damage they cause when they fail, without putting taxpayers' money at risk.
We're making larger financial institutions pay more for the
cost of maintaining a more stable financial system.
We are putting in place limits on the size and concentration of banks in our system so that businesses and individuals are less vulnerable to the mistakes those institutions may make in the future.
These limits are much tougher than exist in any other major economy.
We're changing the structure of compensation in the financial system by giving shareholders more power, improving disclosure, and limiting the ability of firms to pay their executives in ways that insulate the executives from the losses that can come from taking too much risk.
We're improving the quality of disclosure around the risk banks
face.
We're bringing the shadow financial system out of the dark and extending the reach of constraints on leverage to firms that may not be banks in the classic sense but could still threaten the stability of the financial system.
We're working to reduce the degree of moral hazard that
exists in the financial system, while still trying to strengthen
its resilience against financial panics.
We're working to get regulators the greater resources they need to enforce stronger protections for consumers.
We're redesigning consumer protection in the United States so we provide better safeguards against predation and fraud, and better disclosure so consumers can make better decisions about how to borrow responsibly.
my view is that as a result of these reforms and those still
ahead, our financial system is in much stronger shape than it
was before the crisis.
We closed down the weakest parts and we forced the survivors to meet a very tough test of market viability.
Those firms, those banks, the core banks, have raised $300 billion of common equity since 2008, giving them individually, and as a result the financial system, stronger shock absorbers against the inevitable risks involved in managing financial institutions in an inherently uncertain world.
Now, we're working to extend these reforms globally, and we're doing our best to ensure that this necessary job of making the system more stable does not unduly damage economic growth in the short term.
These are the core objectives of the financial reform and restructuring strategy we have embraced and adopted in the United States.
And this Office of Financial Research is an important part of this effort to modernize our financial system.
This recent financial crisis, like other financial crises, had many causes: too much leverage, pockets of moral hazard, large gaps in oversight and regulation.
But of course another cause was a lack of knowledge and information, among both market participants and regulators: knowledge about the scale of losses firms might face in the future, about where risks reside, about how contagion might spread.
Now, these things are all, as Dick said, fundamentally
uncertain.
But more knowledge, better data, better research and analysis can
make them less uncertain.
And that greater knowledge, in turn, offers the prospect of better market discipline, a more agile and preemptive regulatory response to emerging risks, and better designed interventions in crises to contain the damage they can cause.
Now, policymakers are always looking for the financial system equivalent of the MRI, or the over-the-horizon radar we equip our aircraft carriers with.
Or, I think, Andrew Lo's analogy of a National Weather Service for the financial system.
But that goal will always elude us.
We'll keep pursuing it.
And we can do a much better job than we do today, but it will always elude us.
With better data and a commitment to research and analysis, we can do a better job of equipping risk managers, central banks, and supervisors with the ability to reduce the severity of future financial crises.
Now, this office of financial research has a simple core
mission, which is to collect and make available both to
regulators and the public better financial data.
it will seek to better measure and analyze the many factors
that can affect financial stability.
It will report to Congress and to the public its analyses of significant market developments, potential risks to financial stability, and of course appropriate policy responses.
And it will collaborate with regulators, the financial industry, the academic community, and our foreign counterparts in the central bank and supervisory community to establish better global standards for financial data.
An example of that is the legal entity identifier project, which tries to identify the parties to financial transactions.
And this will help regulators, risk managers, and market participants better understand the risks and exposures across the system.
And we're pleased this initiative is now gaining a very
broad global support and endorsement.
Seems quite promising.
Now, your job today is to help us set the agenda for the Office of Financial Research and help us identify areas where better data and more research offer the highest return.
As the architects of the Office and of the stability council have recognized, we have to rely extensively on collaboration with the academic community and with people in the financial sector, here and internationally.
Just to end with the following broad observation: the president and the leaders of Congress made an important decision in the early months of 2009.
at that time of course the crisis was still burning and we
were at the edge of a second great depression.
At that time, rather than delay pursuing financial reform until we put out the financial fires and fully repaired the damage they caused, they chose to move early so the scars of the crisis would motivate reform.
Their judgment was that by moving early, before the memory of the crisis faded, we could best maximize the chance of more substantial and more enduring reforms to the financial system.
And yet if you look at the debate today, even with all the damage caused by the crisis, even with millions of Americans still out of work or trying to stay in their homes, huge unused capacity across the productive sector of the economy, even in the face of the European crisis, we're seeing a determined effort to slow and weaken reforms.
These are financial reforms that we think are critical to our ability to protect the economy from future financial crises.
now, the forces that are working against reform are trying a
bunch of different strategies.
Blocking appointments of new leadership to key oversight positions, cutting funding for enforcement and policy, attaching policy riders to appropriations bills, introducing new legislation to repeal the entire law or critical pieces of it.
Efforts to distort cost-benefit analysis to make it a roadblock to reform.
Efforts designed to slow the pace of implementation of regulation in the hope that, with time, it can be watered down.
And I think it's important to recognize the stakes, what's at stake, in this broad effort.
If these efforts to weaken reform are successful, then consumers in the future will be more vulnerable to financial abuse, and businesses will be more vulnerable to future contractions in credit availability caused by financial mistakes.
And the economy as a whole will be more vulnerable to those crises.
And if these opponents of reform are successful, then we'll be left with a broken, poorly designed system of financial oversight and protection, with inadequate resources to prevent and deter future abuse.
Now, these reforms that we've embraced in the United States and are pursuing globally are sensible and essential reforms.
and we're going to fight to preserve them.
The challenge we face is about a fundamental obligation of government, which is to design and to enforce the system of incentives and constraints necessary to create a financial system that helps people save for retirement, borrow to buy a house or a car, and pay for college; that allows businesses to finance productive investments; that protects people from predation and abuse; and that doesn't leave the taxpayer exposed, responsible for paying for the mistakes of banks.
It is such a fundamental challenge of government, an obligation of government.
And as we have seen the cost of getting it wrong can be
devastating.
We have a lot of difficult work ahead of us, a lot of complicated work ahead of us.
And you have a chance here at this conference and working with
us in the future to help us do a better job meeting that
challenge.
I look forward to hearing about the results of your conference.
Thank you.
[ Applause ].
>> thanks very much, Tim.
To kick off this morning's program, Dessa Glasser is going to frame the morning session.
Dessa?
>> Okay.
Thanks very much.
I'm Dessa Glasser.
I'm responsible for the data portion of the OFR.
So as both Dick and Tim mentioned, data is critical for analyzing threats to financial stability.
So today is data day.
We have set up the day in a couple of different areas.
What we did is we chose topics that we wanted your input on, and we want to drill down further on them during the day.
so our objective really is to hear from you.
We have financial regulators.
We have financial service participants here.
we have also academics and researchers and public interest
groups.
It's important for us to hear what you have to say in each one
of these areas to help us really prioritize and set our agenda.
So we have set up the day in two different portions.
In the first one we have a panel, with some experts talking about each of the topic areas we have chosen and giving a broad background.
So we're setting this up so we can talk about these key areas.
And then in the afternoon session we have workshops.
In the workshops we're going broader and deeper in each one of these areas.
We have chosen a few subtopics so we can get at some really important information we need in order to set our agenda.
So in the first session we have panel discussions.
The first one is led by Jonathan Sokobin: how do we identify the data we need to recognize threats to financial stability?
The second topic will be how we classify and aggregate data, and the importance of standards.
Tim mentioned the LEI, which is a critical component in
recognizing threats to financial stability.
We have Francis Gross from the ECB, who will take us through that and moderate that session.
And I'll talk about the last session.
The last session is on operational risk from two
different viewpoints.
One is looking at the data process and operational risk in the data process.
The second is the completeness of the data that we need to
analyze these threats.
So we appreciate your being here.
We appreciate your input.
We need your input.
So what we would like to do is take those topics, go further and deeper in the workshop sessions, and then have a report back the next day on key takeaways that we can focus on.
So with that introduction, I'm going to ask Jonathan to come up and introduce his panel.
We are looking forward to a lively discussion.
Thanks.
[ Applause ].
>>> Good morning again.
As Dessa said, I'm Jonathan Sokobin, chief of strategy at the OFR.
It's my pleasure to kick off the first panel session of the conference.
We have with us two central bankers, a representative of the private sector and former government official, and a renowned academic.
I'm going to provide a very quick introduction to each of
our speakers.
Full bios are in the package.
Each panelist will give a short presentation, and then we're going to have some question and answer.
As has been stated, what we really want is to initiate conversation here, so we're looking for a lot of participation.
and with that I will provide the introductions.
Next to me is Lewis Alexander, who is currently the U.S. chief economist at Nomura.
He served as counselor to Secretary Geithner, working on issues including leading efforts to establish the Office of Financial Research.
For that we are grateful.
Next to him is Professor Acharya.
He is the C.V. Starr Professor of Economics in the Department of Finance at the NYU Stern School of Business.
He's also program director for financial economics and a research affiliate at the Centre for Economic Policy Research, and a member of the advisory scientific committee of the European Systemic Risk Board, the international advisory board of the Securities and Exchange Board of India, the advisory council of the Bombay Stock Exchange, and of the Federal Reserve Banks of Chicago, Cleveland, New York, and Philadelphia and the Board of Governors.
Next to him is Mark Carey, senior adviser at the Federal Reserve Board in Washington, D.C.
He is co-director of the National Bureau of Economic Research's Risks of Financial Institutions working group, which is a mixed group of academics and financial professionals that focuses on risk management at financial firms.
And finally, at the end of the panel, is Philipp Hartmann.
He is a fellow at the Centre for Economic Policy Research, a part-time professor of macro-financial economics, and a member of the committee's research task force.
I want to thank all of our panelists for coming and joining
us this morning.
With that I am going to turn the program to Lou.
>> Thanks very much.
There are some slides; it's my first intro slide, if I can have that up.
I want to talk briefly about three main things.
One is what I think the biggest challenge to measurement is: basically, the fact that systemic crises are relatively rare compared to the pace of change in the financial system.
The second point I want to make is to stress the importance of the practical side of measurement.
We spend a lot of time thinking about the theory, but the nuts and bolts of how you do it, I would argue, are critical to take into account as well.
The last thing I want to do is talk about the importance of thinking about improving financial reporting at the level of the transaction.
This is really an advertisement for the next session, actually, more than anything else.
But I think it is important in terms of how we think about things.
There's actually one data chart which I believe is number two.
Do we have a chart up there?
Okay.
What I'm showing you here is a simple measure of the size of gross financial intermediation.
It's actually the total value of U.S. financial assets relative to U.S. GDP, going back to the 1800s.
There's a very simple point I'm making there.
The vertical lines indicate what I would describe as systemic
crises.
Going back in time, we have 2008.
We have the collapse of the U.S. banking system in 1933.
We have the 1907 crisis and the range of systemic crises we saw in the 19th century.
The simple point I would make is the financial system that has
evolved over these periods is very different now than it was
in the previous crises.
And when you think about how you measure things, you have to take into account the fact that the system we deal with is going to evolve at a very rapid pace relative to our ability to observe these things, these systemic crises.
That has implications for how you think about measurement.
First of all, you obviously can't have a measurement regime that is in some sense limited to a particular structure of the system.
Because as the system evolves, inevitably risks are going to come into place outside those existing structures.
And if the measurement system does not evolve, you're going to have a fundamental problem from that perspective.
The other challenge this creates, of course, is the fact that the data we actually have to compare crises is not going to be the same across crises.
And so in some sense you can imagine strategies that are essentially looking for patterns in data that exist across many crises that tell you something.
The problem is the system is just evolving at such a pace that as a practical matter you're simply not going to have that data.
If you look at a lot of the models to measure systemic risk and ask the basic question, for how many crises do we have the necessary data available, the answer is relatively few.
And that poses a fundamental challenge to measurement.
So when you think about how we're going to address this, I think you have to take into account the fact that the system is going to evolve at this very rapid pace relative to the actual incidence of these crises.
The second big point I want to talk about is this distinction between the theory and practice of measurement.
Both are obviously important, but I think we don't pay enough attention to the second.
Take GDP, for example. I think generally it is considered to be one of the great triumphs of economic measurement.
It's obviously important.
It's an important structure with which we can think about economic activity.
But what I think very few people appreciate is that its utility is really based on the creativity and effort of the people who actually go into measuring it.
Let me just talk through how we do it a little bit to give you a sense of the complexity.
What is GDP?
It's consumption plus investment plus government spending, plus exports minus imports.
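For reference, the expenditure identity Lou is describing can be written out as follows; the symbols are the conventional textbook ones, not notation from the talk:

\[
\mathrm{GDP} = C + I + G + (X - M)
\]

where \(C\) is personal consumption, \(I\) is private investment, \(G\) is government spending, and \(X - M\) is exports minus imports, the trade balance he later calls "X minus M."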
The ways we measure those different things are, in fact, very different.
So first, trade.
trade is actually the easiest one.
trade is easy because it is actually a legal requirement for
everyone who engages in a trade transaction to report it.
So we actually have reports on every single transaction.
The foreign trade program in the Census Bureau that provides the source data for this is actually not a statistical program.
It's an administrative data program.
That's X minus M.
What about G?
For G, we basically rely on government accounting.
Those accounts are essentially constructed for the purposes of tracking government spending.
We then have to take the information that comes from that accounting system and map it into the concepts that are relevant for the national accounts, which is the difference between consumption and investment on one hand and transfers on the other.
It requires going in and understanding how the data are reported for totally other purposes and how to translate that into the concepts that are relevant for economics.
Investment: we do not measure investment directly.
What we measure is the production of capital goods and
related inventory activity.
So we infer what investment is essentially from the net supply of goods that are produced and from tracking inventories.
Consumption is the one that people probably think about as being most direct.
We do, in fact, have survey measures of actual transactions on consumption.
That's what the retail sales survey is.
The point is that an awful lot of what makes these measures useful, and has made them useful in an enduring way, is not the framework but all the nitty-gritty work that goes into the measurement.
When we think about this problem I think it's critical to think
about that.
For example, large financial firms are in some sense big data
filters.
They do transactions at one end.
We get reporting that comes out the other end.
The third session this morning is going to talk about this.
but the efficiency at which that happens is fundamentally in
question.
When we think about what we can do from a grand measurement
perspective we have to take into account the nature of that data
filter.
So my last point is one I believe very strongly, in large part because of the work I did in my time at Treasury.
I was sort of new to this, but I learned, not least from Mike Atkin, the importance of thinking about what we can do at the level of the transaction.
We're going to talk a lot about this.
Let me make sort of a high-level point, which is that if we can improve the way individual transactions are reported, we can improve the system in a whole host of ways.
The first thing we can do is improve risk management for firms themselves.
One of the fundamental problems that a risk manager faces is an aggregation problem: how do you take all these transactions that a firm does and aggregate them in a way that is sensible for an internal manager?
We can make that problem easier.
Second of all, we can actually improve market discipline.
Because the ability of any external investor to essentially assess a financial firm is fundamentally related to the quality of the information they can get on what the firm is doing, which is in turn related to the basic structure of the transactions.
And of course all the things we care about in this project of measuring systemic risk start, in some sense, from the quality of the transaction data at the basic level.
Let me end by making one last point, and that is that one of the things I think is also important for us to think about is the connection between what we can do and the problem that we really face.
there's been a tremendous amount of very good work that's been
done over the last several years on trying to measure systemic
risk.
It's terribly crucial that we take every plausible approach, take every source of data that we have, and exploit it to the maximum degree.
But we have to remember at the end of the day that ultimately what we're doing in that process is looking under the streetlights.
We're taking the data that we have and figuring out what we can get out of it.
But the connection between that and the underlying problem we have is something you have to think about.
And it's never one for one.
It's great that we look under every single streetlight.
We have to do that.
But we do have to remember that's what we're doing.
And the broader problem of systemic risk never lines up exactly with the concepts that we necessarily write down in that way.
I'll stop there.
>> thanks.
I'm delighted to be here.
I can't start without giving this analogy that I have often thought of, which is that I watched this movie by Steven Spielberg called "Minority Report," a futuristic movie about a lab in D.C. that's supposed to predict crimes before they actually take place.
And then the job of the cops is to see what they can do to prevent these crimes before they happen.
And in this lab there are three precogs, visionaries of what crimes are going to happen in the future.
And by and large, most of the time their visions would coincide, but sometimes they might not, in which case it would be called a minority report.
When they agreed, the course of action was pretty clear: what the lab needed to do to prevent the crime.
But when there was a minority report, that is, the three precogs didn't agree on what the assessment was, then it was at the discretion of the lab.
In some strange way, I think the Office of Financial Research seems like the lab in D.C.
It has to anticipate the systemic risk that might arise in the future.
It has to try to get adequate signals ahead of time of what these risks might be, and then hopefully deploy the regulatory powers in order to deal with these pockets of risk.
What I'm going to focus on is something very specific: one aspect of what this lab could do.
It's related to the exercise that is currently under way with the stress tests of the U.S. banking system.
I think the stress tests that were done in the spring of 2009 were, on many counts I've heard, extremely good.
There was a lot of transparency about them.
There was great clarity on what the capital needs of different banks were under the stress scenarios.
And most importantly, there was an action point.
Banks were actually asked to raise this capital.
And if they did not, it would effectively be raised for them by the government through the Capital Assistance Program.
But one thing that's somewhat missing in the stress tests, and this is not a criticism, is that financial firms fail very rapidly.
And when they do, it's clear that they are being run upon by their creditors.
And this run risk, what it means is that the balance sheet of the financial firm is actually changing quite dramatically in terms of its ability to roll over the same sets of liabilities and in terms of what assets it has.
Are they going to look exactly the same by the time it fails?
Because it might have had to make all kinds of adjustments in order to deal with this.
And I think the current capital framework that we have, by and large, for the most part ignores, in my view, this risk.
It is focused on assets, mapping assets into what kind of loss rates they will face.
In the stress test we take the idea a little bit further and ask what is going to happen to the assets, instead of relying on historical risk weights.
But we don't actually stress the liability structure of the financial firm substantially.
We don't ask the questions: is it likely to stay stable?
Which liabilities are likely to face a rollover problem?
Which liabilities aren't yet on the balance sheet but are going to show up, because they've been off balance sheet but now they're going to get triggered?
And I want to focus a little bit of attention on this last part.
I'm sure the Fed, if it had better access to this data on off-balance-sheet liabilities and the run risk of liabilities, could actually do more in the stress tests, and academics could do more as well.
So I wanted to give some indications of what this additional data might be.
So I'm going to group these broadly under contingent liability risks.
The two I'm going to focus on most are, one, the straight risk of runs, which could come from the retail public.
One surprising fact about retail deposits is that in the most recent crisis, not the current one but the one that we had in '07 and '08, banks were actually losing them on average.
They were losing them, to start with, to the money market funds.
So the usual notion that deposits would flock to the banking sector didn't hold up that well in '07 and '08.
Certainly wholesale depositors ran, because they don't have easy access to insurance.
But the second contingent risk I wanted to focus on is the risk of collateral calls, which is increasingly relevant for the modern financial firm that's not just doing loans and lines of credit but is increasingly engaged in derivatives and other kinds of counterparty transactions.
It could be through the repo market and so on.
So I want to focus on these two risks: the risk of a run, and what data we might need to understand it better, and the risk of a collateral run or collateral call, and what better data we need to deal with it.
So on short-term debt, I just have one thing to say: we need better data.
I tried very hard to look at financial firms' balance sheets to get a liability map of how much debt is maturing, and I just can't put it together.
I can't put it together for individual firms, and it's even harder to standardize it and put it into buckets of maturity, of what is maturing when.
So I would say we really need to design a template for getting standardized data on all the short-term debt of financial firms.
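As a rough illustration of the kind of standardized template being asked for, a minimal sketch follows; the maturity buckets, instrument names, and figures are illustrative assumptions, not a proposal from the talk.

from dataclasses import dataclass, field

BUCKETS = ["overnight", "2-7d", "8-30d", "31-90d", "91-365d"]  # assumed bucket boundaries

@dataclass
class ShortTermFundingReport:
    firm: str
    as_of: str
    # maturing amounts by instrument and bucket, in USD millions (hypothetical)
    maturities: dict = field(default_factory=dict)

    def maturing_within(self, cutoff: str) -> float:
        # Sum everything maturing in buckets up to and including the cutoff bucket.
        idx = BUCKETS.index(cutoff)
        return sum(amount
                   for by_bucket in self.maturities.values()
                   for bucket, amount in by_bucket.items()
                   if bucket in BUCKETS[:idx + 1])

report = ShortTermFundingReport(
    firm="Example Dealer",  # hypothetical firm
    as_of="2011-09-30",
    maturities={
        "repo": {"overnight": 80_000, "2-7d": 20_000},
        "commercial_paper": {"8-30d": 15_000, "31-90d": 10_000},
    },
)
print(report.maturing_within("8-30d"))  # 115000: debt rolling over within 30 days

The design point is simply that a common set of buckets, reported the same way by every firm, is what makes the liability map comparable across institutions.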
And I think the lack of such data is quite a limitation.
You would think, when we study Lehman Brothers and other important historical failures from which we could learn, it would seem very important that we should be able to do day-to-day analysis of what really happened to them on the 10th of March, the 11th of March, or what happened to Lehman on the 6th of September.
But as researchers we really don't have any high-granularity data on what the liability maps were on these dates, to say how the run on these entities actually took place.
We had high-level things; we had the report.
But that's a level of detail that's not that conducive to doing interesting analytical research.
So some quick ideas.
We have made a lot of credit on commercial banks on call
reports.
The majority structure could be improved.
It's one year, two year and so on.
the financial firm seems less than one year seems more
important.
There is already a lot of data collected on some of the short-term debt markets; to the best of my knowledge, all primary issuance is there.
I think we could move this into some of our data sets for financial firms.
We could even make the repo system entirely a utility, assisted by the current tri-party players.
They could tell us who is actually doing what kind of repos on a daily basis.
That data should certainly reach this Office, in my view.
And hopefully the clearinghouses that are set up will also be collecting this information.
I'm 100% sure this needs to be collected at source.
I think one of the features of the better functioning tax systems is that they actually see income at source, which is one of the most important things you need to know in order to do your taxation.
I'm not saying we need to tax these transactions.
The problem just gets harder.
So the point is to see where these transactions are taking place, whether they are being reported to a broker or to a clearinghouse.
Each of these could serve a utility function that just reports the transactions directly to the Office of Financial Research.
And then we could build the liability maps of these institutions.
We could now think about, in a stress test, which of these liabilities will be more stable and which more fragile.
Do we want to shrink them by a certain amount if they will have a rollover problem? That will change the capital needs of these entities in a stress test.
Let me move on now to the second part of my talk which is mainly
on liquidity risks.
These are in a sense much harder because the potential leverage
can be very, very large, much larger than what you see when
you just see the transaction that has taken place.
So far, I would say, at least for derivatives contracts, the hope is that bilateral collateral arrangements do a good job of dealing with this problem.
But we don't know if these bilateral arrangements are always good and whether they're adequate from a systemic standpoint.
My view is that, at least from a broadly conceptual standpoint, they can be.
Because if you think through it, the real problem is that once I sell a derivative, let's say it's insurance, to a counterparty, I could turn around and sell 10 such contracts to someone else.
because these are all going to be pursued.
So whatever is beyond the collateral that is set inside,
whatever assets are left they're going to be shared by all these
counterparties in bankruptcy.
So really, in anticipation, for me to design a collateral requirement that protects me well, I need to know what else you are doing.
The same problem arises with corporations.
A bank makes a loan to a corporation.
The corporation may have some collateral.
But the bank is worried the same collateral should not be pledged
to some other bank out there.
There is a collateral registry where you can see which collateral is pledged to which lender.
This way the lender protects his interest.
so we need something similar in my view.
We need to be able to aggregate these positions, because it's the aggregation of positions that is relevant for each counterparty.
Otherwise they will get diluted by the multiplicity of positions that could arise.
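A minimal sketch of the kind of aggregation such a registry could do is below; the sellers, counterparties, and notionals are hypothetical, and the structure only illustrates the idea, not an actual registry design.

from collections import defaultdict

# (protection seller, protection buyer, notional sold, in USD millions) -- hypothetical
positions = [
    ("SellerA", "Bank1", 1_000),
    ("SellerA", "Bank2", 1_000),
    ("SellerA", "Bank3", 8_000),
    ("SellerB", "Bank1", 500),
]

# Registry view: total protection each seller has written across all counterparties.
total_written = defaultdict(float)
for seller, _buyer, notional in positions:
    total_written[seller] += notional

# Bank1's view of SellerA: its own claim versus everything SellerA has written,
# which is what determines how diluted its recovery could be in a bankruptcy.
own_claim = sum(n for s, b, n in positions if s == "SellerA" and b == "Bank1")
share = own_claim / total_written["SellerA"]
print(f"Bank1 holds {share:.0%} of SellerA's written protection")  # 10%

Without the registry's aggregate view, Bank1 would only ever see its own 1,000 and could not size its collateral requirement against the dilution risk.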
Now, one significant problem here is that these risks arise in a contingent manner.
It's a little bit like run risk.
But here even the payoff on the contract is contingent, so the problem is one layer of complexity greater.
I try to talk about this in more detail in a chapter in a handbook of systemic risk that's being edited by Markus Brunnermeier.
I was just going to focus on one concept from that chapter, which I have called a margin call report.
When we analyze standard corporations, we think of the interest coverage ratio: how much earnings do you have relative to your short-term debt service?
That could be principal payments or interest on your otherwise long-term debt.
I think we need something similar as far as derivatives positions go.
So here's an actual example of what it might look like.
So there's some data I collected from quarterly filings of
financial firms.
The left columns are for JPMorgan.
The two right columns are for Goldman Sachs.
Financial firms, or at least some of them, do report this in their quarterly filings.
What they report is how much additional collateral they would have to post in case of a downgrade.
In the case of JPMorgan, from the first quarter of 2007 it has been disclosing how much additional collateral it would have to post in case of a one-notch downgrade.
That's the fourth column from the left.
And then in the third column, the stress: what would happen if it got downgraded by six notches?
What you see here is that even going from one notch to six notches, JPMorgan's collateral calls don't increase that dramatically.
They go up by a factor of three.
Now, if you went into JPMorgan's balance sheet and looked at how much cash they have, they had about $26 billion.
So the coverage is four plus, a factor of four, in the case of a six-notch downgrade.
So this seems very healthy in some ways.
Now, let's go to Goldman Sachs.
They disclosed the two-notch case from the first quarter of 2008.
There's an information issue here, which is that until the fourth quarter of 2008 you didn't know what the collateral call would be if they got downgraded by more than one notch.
This to me is generalized uncertainty: if that had happened before the fourth quarter of 2008, we would have had no information on what collateral calls were going to arise.
One, we should try to fill that gap.
Now if you look at Goldman Sachs' collateral calls, when they go from one notch to two notches, their margin goes up by a factor of three, just within one notch.
So in some sense here the velocity of the collateral calls is going to be far stronger.
But the good news in the case of Goldman Sachs is they had $100 billion of cash.
Now, next let's go to AIG.
They didn't disclose beyond a one-notch downgrade.
In the third quarter of 2008 they disclosed that in going from one notch to two notches, the margin would go from $1.8 billion to $9.8 billion.
That's a factor of more than five.
We don't know what happens for further downgrades.
But they only had cash of $2.5 billion.
So now if you look at their margin coverage, the ratio is only slightly over one for a one-notch downgrade.
It's a slightly more sophisticated ratio than for ordinary corporations.
Presumably this would have to be audited to ensure that the reporting is good.
Then we could have an additional sense of the protection firms have against margin calls.
Just yesterday I was looking at Bank of America.
For a one-notch downgrade, their collateral call would go up by $5.1 billion.
At least per their disclosure, their cash now is in excess of $200 billion.
So there's not an immediate collateral run there.
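A minimal sketch of the margin coverage ratio being proposed, using the rough figures quoted in the talk purely for illustration (the numbers are approximate as spoken, not verified disclosures):

def margin_coverage(cash_bn: float, collateral_call_bn: float) -> float:
    # Analogue of an interest coverage ratio: cash on hand divided by the
    # additional collateral a downgrade would require the firm to post.
    return cash_bn / collateral_call_bn

# (cash on hand, incremental collateral call) in $ billions, as quoted in the talk
scenarios = {
    "AIG, 1-notch (Q3 2008)": (2.5, 1.8),
    "AIG, 2-notch (Q3 2008)": (2.5, 9.8),
    "Bank of America, 1-notch": (200.0, 5.1),
}
for name, (cash, call) in scenarios.items():
    print(f"{name}: coverage = {margin_coverage(cash, call):.1f}x")
# A ratio near or below 1x signals that a downgrade could trigger an immediate
# liquidity squeeze; a ratio well above 1x, as in the last case, does not.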
Just one last point that is interesting and could be tied in with a stress test.
You could get a little more sophisticated with this.
You could ask the question of who is going to get more notch downgrades in the scenario we are looking at, that is, who actually is more exposed to the downside scenario.
Even with the same margin coverage numbers, firms might look different in the stress test, because one might be more prone to a downgrade in a stress scenario.
It would be great to have concentration reports, so you could understand, if a firm did not meet its margin call, who the counterparties are.
Currently we don't have these kinds of concentration reports.
There's a proposal called 10 by 10 by 10 which goes in the direction of producing such concentration reports.
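As a purely illustrative sketch of what such a concentration report might contain; the structure and names are assumptions, not the actual 10 by 10 by 10 specification:

from typing import List, Tuple

def concentration_report(exposures: List[Tuple[str, float]], top_n: int = 10) -> List[Tuple[str, float]]:
    # Rank counterparties by the collateral or exposure that would come due
    # under the stress scenario, largest first.
    return sorted(exposures, key=lambda item: item[1], reverse=True)[:top_n]

# Hypothetical stressed exposures of one firm to its counterparties, in $ billions
stressed = [("Counterparty A", 4.2), ("Counterparty B", 1.1),
            ("Counterparty C", 7.5), ("Counterparty D", 0.4)]
print(concentration_report(stressed, top_n=3))
# [('Counterparty C', 7.5), ('Counterparty A', 4.2), ('Counterparty B', 1.1)]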
Let me sum up with a last point, which is that I have focused more on what financial firms are not currently being asked to disclose explicitly.
And I think maybe the OFR could make an effort to fill in those gaps.
It hopefully has the authority to do so.
And I'm sure there are also persuasion powers in the current environment to put these measures in place.
Just to sum up, what the current requirements do not cover is mainly information on run risk and liability contingencies.
There's no mention of reporting of short-term debt for these financial firms.
Even though derivatives transactions have to be reported, there is no collateral reporting on them.
And it seems that is where the real liquidity risk is.
Counterparties will hopefully determine good collateral arrangements of their own, but I think the OFR could actually fill in the gap there.
A final point: clearly, legislating this reporting is good.
I would go a step further: with some lag, I think it should be released to the markets.
Counterparties, market players, and researchers should be able to analyze this, in my view.
A lot of good has come out of the research we have done with call report data on commercial banks.
I think we could do the same on more complex entities and more complex positions.
And the last point: on many of these positions it's not sufficient to know what the stock is right now, the stock of short-term debt or the stock of derivatives; what's important is how they're going to react when you have a stress scenario.
So we need to think about potential exposure and potential collateral risk.
That's the kind of precog exercise that I was describing.
But let me stop.
Thank you.
>> Thank you.
We'll turn it over to Mark.
>> So if I could have the second slide: go, OFR, go!
Thank you very much for inviting me to come to this very exciting conference.
I'm enormously enthusiastic about everything the OFR is trying to do.
Everything I say represents my own opinions and not the
official opinions of the Federal Reserve.
I think that's true of everybody in the Federal Reserve system.
So, go, go!
Next slide, please.
Slide 3.
I want to briefly talk about three themes.
and the first two are related to each other and amount to a plea
that we not forget to look under a number of lampposts that are
obviously important but that I perceive are being unequally
weighted in am TKEPLic work as well as as a practical matter a
lot of policy institution work on Macroprudential stuff.
and those are the fact that we, to a first prosecution, live in
a two-state world.
We're either in a crisis or we're not.
And we not only need to know indicators of the risk of a crisis or whether we're in a crisis, but we also need tools to do something about it.
Not only in the noncrisis state, that would be prevention, but also in the crisis state, that would be cure.
Second, I want to make a few what amount to personal observations, based on 20 or 25 years of experience as a kind of a data guy but also an analysis guy, about how as a practical matter one should avoid some common roadblocks that I have observed in setting up data architectures and associated I.T., and in getting everybody to work together well.
Next slide, please, the one with the title "reductionism."
So I'm going to caricature and go too far in saying that in my kind of two-by-two matrix of tools and indicators, and crisis and noncrisis, until pretty recently almost all the work was in one cell.
There was a form of reductionism that said the only thing we're really going to focus on is crisis prevention.
they cause a crisis and to some extent we want to think about
what do we do, that is what are the tools that are going to be
used.
But mostly it was about indicators.
Mostly -- most work, if you look at the pre-crisis literature, was about: are we going to look at spreads, at asset prices away from fundamental values, are we going to look at imbalances and credit-to-GDP ratios, and so forth and so on.
Obviously that is an incomplete way of looking at things.
And I observed a tendency even here in the Washington policy
community as recently as a year or nine months ago to kind of
focus on that familiar lamppost.
It's easy.
There's work out there already about all of those things.
On the tools part.
There's limited work about those things.
But what has become eminently clear to anybody who is thinking about giving, for example, a financial stability briefing to a policymaker is that those kinds of indicators and those kinds of tools are not very relevant today, right now, today.
and we're not really so worried about excess credit growth at
the moment.
right?
What we're worried about is how do we get out of the crisis
today?
And if anything, in some parts of the world, we're worried
about insufficient credit availability or insufficient
credit growth.
What we need today is a consensus, not only among technicians but also on the part of the public and elected officials, about how you manage a crisis effectively.
Okay.
So that's kind of the problem.
let me just sketch it out a little bit more.
what I'm making a plea for is hopefully in a few months or a
few weeks when we're out of the crisis we not forget that we
need to do work on the whole two by two matrix or any expansion
on that matrix that would appear appropriate.
May I have the next slide, please.
Really, it's probably more profitable to think of a three-state world.
You're not in a crisis.
Everything is good.
You're kind of in the gray area danger zone of falling into a
crisis.
And then you're actually in the real crisis.
The world is on fire right now.
But people are better at two than three.
I can count to two rather easily.
But when I have to count to three, sometimes that's
difficult.
So for purposes of exposition, let's assume there are only two states.
Most of the time we're in the noncrisis state.
And when we're in that state, the notion of a lot of reform legislation and a lot of reformers is that we should focus entirely on prevention.
There's a little bit on preparation to deal with the crisis when it inevitably comes.
But relative to the welfare gains, it's not clear to me there's enough emphasis on preparation for dealing with it.
So the focus tends to be: don't slide into the crisis.
Don't fail to pay attention to the bubble or excess credit growth.
Just as important today in December 2011 is don't slide
back in.
My own view is we never got out of the crisis.
It started in the summer of 2007 in a very serious way.
And if you think seriously about what happened to global
financial architecture we never got out.
A lot of people thought we were out.
I remember having lunch with a European policymaker in late 2007.
We were pounding the table with our knives about our irritation at the word "turbulence," because it was forbidden in the policy community to say there was a crisis.
okay.
That was a big mistake.
Then there's the crisis state.
It doesn't last very long.
But as I just implied, it may last longer than we think.
You've got to be sure you're all the way out.
You have to climb all the way out of the hole.
Because if you stop paying attention you can slide down the
hill back in again.
when we're in crisis, we should focus on the cure.
we want to get out fast.
We want to limit the immediate cost to society.
We want to limit bad side effects on future crises.
But there's probably too much emphasis frankly on looking
ahead to limiting moral hazard in the future.
first, get out fast.
Because the longer it goes on and the worse it is, the bigger the political risks are, and the more likely you are to have not only the immediate welfare cost but future welfare costs as well.
>> Next slide, please.
Okay.
Some implications for measurement.
This is data day.
I'm not really wearing my data hat now.
I'm wearing my analyst guy hat.
We've got to know the state.
Are we in the crisis or not?
We didn't pay attention to that fully over the last three or
four years.
We didn't have indicators of whether we were still in the
crisis or not.
It's essential that Macroprudential work focus on
measuring whether we're in the crisis state or how close we are
to the crisis state.
As I said a moment ago, it's crucial to know not only the likelihood of sliding in, but also the likelihood of a backslide before the crisis is fully cured.
So we need data and tools to measure the state and the
completeness of cure.
Next slide, please.
We also have got to have tools.
And tools are hard to come by.
And as Lou indicated, or at least implied, not only does the world change, and the indicators with it, but the tools are probably going to change too.
Okay. So we have to keep working on the tools, because whatever tools we have now are not going to be adequate sometime in the near future.
What are tools?
Well, they are ways to affect private sector behavior, and, by the way, government behavior; governments are very important.
And they are conditional on what the indicators say.
As I said at the beginning, until recently, we had more of
an effort on indicators than on tools.
there is more work on tools under way.
But as recently as nine months ago, you know, or six months ago, I perceived in the policy community an awful lot of work on the indicators and not that much work on the tools.
It's not going to do you any good to have great indicators if you don't have great tools.
Okay.
If you take nothing else away from what I have to say, I strongly urge that we work not only on tools for the noncrisis state, the prevention tools, but that we also have tools for the crisis state.
Everybody in a crisis is kind of inventing on the fly.
Frankly, my own attitude towards macroprudential stuff before this crisis was that there was no way prevention was going to work.
There was no way we were going to be able to anticipate the details of the crisis.
The only thing to do was to have people and knowledge of the marketplace ready to go when the crisis came, to figure out what to do.
I don't think we can afford that luxury any more, clearly.
We have got to think in advance about what are going to be
effective tools.
There are some obvious examples, liquidity injections, capital
injections.
Surely there's got to be more.
And let me just note that the better designed the tools, the less the moral hazard in the long run.
Good tools for crisis management are going to help you in terms
of prevention.
Next slide, please.
Okay.
So we've got to have data.
This is data day.
We must have data that support crisis tools. The data you need
for indicators is not the same as the data you need for tools.
My own view is that network kinds of thinking and network data are extraordinarily useful in the crisis.
I am yet to be convinced how useful they are out of the crisis, for prevention purposes.
Exposure data, liquidity-related data: enormously useful in the crisis.
Liquidity data out of the crisis is a lot more squishy; in the crisis, it is very functional.
And anything we can do to understand confidence is crucial.
In my view the first stress test in The United States was all
about confidence.
I could not figure out in December and January of 2009 why
everybody had not calmed down.
I thought it was clear there was a commitment from the U.S.
government that we were not going to allow things to spin
systemically out of control.
But the markets didn't agree, right?
And that meant they could have.
okay.
next slide, please.
field of dreams.
I know.
I'm going over time, I know.
I'm sorry.
But I'm almost done.
If you build the data, they will come.
In many meetings related to improving data that I have been to, this was sort of the attitude.
okay.
and particularly when I'm wearing my research hat I share
the views of my academic colleagues that just give us the
data and we would find useful things with it.
I believe that entirely.
I've spent a lot of my career getting data nobody else had and
finding useful things.
But it's really not enough, right?
Implicit in what Lou said, we've got to leapfrog rapidly.
Because what will happen when you get the data that you just sort of thought would be useful is that you will almost immediately learn that you did not get the right data.
Okay.
So anything you can do in advance of getting the data, in analysis and conception, is enormously helpful.
When you get it you have to right away start thinking about
what to do.
It is difficult, in the kind of enterprise the OFR is trying to run, to make that happen, because in my experience of 20-plus years at the Federal Reserve it's very easy for the data people and the analyst people to not communicate with each other very well.
the data people and the I.T. people want to build a good
system.
They want to build it right the first time.
it is not possible to build it right the first time.
and the analyst people, who are the ones who we so much need to
go learn from the data, are cats.
They're difficult to herd.
They want to do it their way.
They're going to want to economize on their own time and
get as much as they possibly can.
It's very important to focus on both sides working together, and
that's a challenge.
Okay.
Last slide, please.
So my conclusion is that we need both data and analysis.
And I'm happy to see that in this conference the sessions look like a mixture of data and analysis to me.
That's the way it should be.
We need this multidimensional focus.
I hope we will seize the opportunity to push the boundaries of what lampposts we're looking under and what ways we're forging ahead, to work together effectively on a variety of fronts and, you know, to think about the fact that the world is going to change.
>> Thank you, Mark.
and we'll turn it now over to Phil.
>> Okay.
Thank you very much.
I'm very glad to take the opportunity to speak here from a
European angle on issues of high relevance to the start of the
OFR.
Let me join others in saying that I think you in the U.S. have a tremendous opportunity in having this Office, and the legislation that goes with it, approved and there to be used effectively.
Much of my talk today will be about important changes, drastic
changes which are necessary.
And the OFR is an excellent opportunity in the U.S. to
actually guide and drive those changes.
Now, the other important change which I will be referring to is of course the move toward establishing a truly macroprudential function, and I will build this around the data issue of this session.
if we could go to the second slide.
But before I do that, since most of you may not be familiar with the European setup, you should see in front of you a quite detailed chart of the new institutional setup for supervision and regulation in the European Union.
I wanted to make you familiar with that before we enter into
the nitty-gritty of data and analysis.
So basically the reform in Europe, which actually advanced along similar thinking as in the U.S. in this regard, is the creation of a European System of Financial Supervision that has four authorities:
a European Systemic Risk Board, which is at the top of the chart and covers more the macroprudential aspect, and three European Supervisory Authorities, ESAs, one for banking, one for insurance, and one for securities markets.
The European System of Financial Supervision is, in a sense, the European-level financial stability oversight council.
It's divided into these four authorities.
This is tremendous progress if you imagine that at the bottom of all of this there are the supervisory authorities of 27 countries.
So this is really a major issue in the setup, often in terms of the clarity of the actions these authorities can take.
And the data issues, and the sharing of data between these authorities, are actually very important.
next slide, please.
So what I want to argue today is that the shift from the traditional way of doing supervision and regulation to macroprudential oversight is of the essence.
Macroprudential oversight and regulation aims at identifying and containing systemic risk rather than individual risk.
One definition I have put up here: the risk that financial instability becomes so widespread that it impairs the functioning of the financial system to the point where growth and welfare suffer materially.
It is a very complex phenomenon.
It involves all components: a whole range of markets, a range of market infrastructures, and the two-way relationship with the economy at large.
Now, what I want to do today is give you a summary representation of an economic framework for analyzing systemic risk, and go from there to the data we need to support the use of this framework, inspired practically as well as scientifically -- it's a mixture of the two.
I will look at the data and then go on from the data; I was happy to hear what was said on that earlier.
Then, if I have the time, I will give you one example.
I am more inspired to use the last slides, because I will go from a financial stability indicator that shows you the level of crisis to a new tool designed exactly for assessment in a crisis.
So let me hope that I make it there.
Next slide, please.
So, the framework.
My colleagues on the panel know it.
If we want to contain systemic risk, then we need to capture all the variants and features of systemic risk.
It's proven to be useful.
The question is: what makes financial instability widespread, as opposed to financial stability problems that are more contained?
There are three ways in which instability problems can become systemic.
One is that the shocks are already widespread and severe; the green part is the shock.
The second is contagion, where a more confined problem spills over into other areas.
And the third is that imbalances build up all over the system, slowly over years.
They're all sitting on the balance sheets, and arbitrary or smaller events can actually make that unravel.
Once you see it, it looks like contagion, but it's an unraveling of the system.
And the distinctions require different policy responses.
Now, this framework is summarized in the chart you have on the screen.
When we actually think about data and tools, we need the tools and the data to speak to identifying risks that can lead to one of these three mechanisms.
In terms of tools -- starting with the tools -- there are contagion tools, and the question is how you can assess them and feed them with data.
You have early warning tools that you can use for that.
And for the aggregate shocks you have the stress testing exercises.
next slide, please.
So the question is: what is the range of data that we need to do that?
Now, if I had one slide on which to put all the data you would want to have, this would be the slide I came up with.
Obviously I only managed to get it on one slide by making the font smaller, which makes it an eye test for you at this point.
Let me quickly go through it.
Let me start broad.
What is the set of information?
There are three types of information.
The first is data and statistics.
The second is supervisory data -- in particular, individual and granular data -- and statistics, which usually come from a different source, which relates to one of the problems that Lou mentioned before.
And then there's market intelligence.
The ECB provides all three types of information.
The topic of this panel is the data, so let me go from there to the next bullet point, the data.
The titles will be very familiar to officials in treasuries and central banks: macro data, financial market data, on- and off-balance-sheet data, and payments and settlements data.
I cannot go through the full list of things, but let me go point by point through a few of those data.
And I have marked behind each data type for which type of systemic risk identification it is probably most useful.
Some of them speak to many issues.
So one important thing, which Mark said is not important today but maybe important for the setup, is credit aggregates and money aggregates from the monetary statistics and national accounts.
It has been shown that they play an important role in the potential anticipation of future instabilities.
They also play an important role in the assessment of the severity of a crisis.
And there is the question of global imbalances or external balances.
It has been suggested that imbalances across nations or regions are a variable that influences the likelihood of crises, or of an unraveling of these global imbalances, so we have to pay attention to them.
There are also flow of funds, public finances, and so on, on the national accounting side.
Let me come to the financial market data, and let me agree again in this regard that even high-frequency, ultra-high-frequency market data are important from a systemic perspective and have to be in the picture to understand contagion and other transmission mechanisms.
Also the usual suspects, such as the ratings.
As you know, they are an important transmitter of contagion and have to be on the radar screen.
Let me go to the balance sheet data, which may be the most challenging of these data -- in particular the individual data on financial firms.
I want to go further into that, because there are two areas where we desperately need more granularity.
But the same applies to nonfinancial firms, and to households, for the transmission.
Last, payments and settlement data -- the nuts and bolts of the system -- which help us, for example, in the absence of the ability to collect direct exposures, to test some approximations by using the payments and settlement data.
They are needed for indicators and analytical models.
If you come to the next slide, you see the challenges in terms of using these data or having access to the data.
I distinguish here four challenges: unavailability, standardization, confidentiality, and international issues.
First, unavailability.
There are still, to the present day, important gaps in our data support for financial stability analysis.
Think of the shadow banking system, hedge funds, and the like.
The Federal Reserve Bank of New York has made enormous progress here; in Europe we have not, as of yet.
To understand early the positive aspects of financial innovation and distinguish them from the more risky aspects, early data on the development of markets with new financial instruments are of the essence, and the granularity of off-balance-sheet data is important.
Confidentiality.
The question here is the safety of the provider versus the access of the analyst and the supervisor.
Of course we have safety provisions in the concept of the European Systemic Risk Board, but that complicates the use of the data, and access to it may be irregular.
The best example is obviously again the direct exposures across intermediaries, where confidentiality is the highest and therefore the complications for assessing contagion risks are most pronounced.
Standardization.
The definitions for national accounting et cetera are not identical to those of the supervisory type of data; they do not use the same definitions, concepts, valuation rules, et cetera.
I will not go into that; it will be covered in the next session.
Of course all these complexities are amplified at the international level, and it's of the essence that the U.S., Europe, and other jurisdictions have some coordination and don't go it alone in setting in stone these types of standardizations and rules.
The regulation provides the European Systemic Risk Board with the right to receive all the information it needs to fulfill its task.
Since it was approved, in the setup I was showing you before, it has advanced very quickly in doing that.
Now, I come to a part that connects nicely to Mark.
So if you want, I can stop here and we take a break; otherwise, I will go on for a few minutes.
>> take a couple more minutes.
>> Okay.
Indicators in crisis versus indicators outside crisis.
On the next slide you should see a CISS -- as some of my colleagues say, a kiss of death.
It's a barometer of systemic instability; or rather, it's a thermometer, not a barometer.
The weighting depends on the correlation between the different components: when a systemic problem is pronounced, stress appears in different markets at the same time, and the indicator is more pronounced than when the stresses do not occur together.
And you can track this in real time; it is exactly what the policymaker sees at the time.
So you see in 2007 it is in the middle range.
That's the turmoil, if you want --
>> Turbulence.
>> In September 2008 you get almost to one -- the indicator runs between zero and one -- almost to the one.
Then it relaxed.
Then the debt crisis in Europe took broad systemic instability back up to a high level.
So this is something a policymaker can look at; within one or two weeks he would know how systemic the situation is.
One should never look at just one indicator -- you should look at multiple indicators -- and this is one input into that.
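As a rough illustration of the aggregation idea behind such a composite stress indicator (a hedged sketch only, not the ECB's actual methodology; the segments, weights, and correlation matrices below are invented), the point is that the same subindex levels produce a much higher composite when the market segments move together:

```python
import numpy as np

def composite_stress(subindices, weights, corr):
    # Portfolio-style aggregation: co-movement across segments amplifies the composite.
    s = np.asarray(subindices, dtype=float) * np.asarray(weights, dtype=float)
    return float(s @ corr @ s)

w = np.full(5, 0.2)  # five hypothetical market segments, equal weights
calm = composite_stress([0.3] * 5, w, np.eye(5))          # stress present but uncorrelated
crisis = composite_stress([0.9] * 5, w, np.ones((5, 5)))  # high stress, fully co-moving
print(round(calm, 3), round(crisis, 3))  # 0.018 vs 0.81: the correlations drive the jump
```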
My question is: how do we use that?
Again, I want to go back, if you put up the next slide, to September 2008.
If you talk to the policymakers at the ECB, including the former president, the feeling at the time was that we were totally alone; the next data release was still far off.
So our tools were not appropriate to deal with an economy that had undergone structural change through a wide systemic shock.
what can we do about it?
Here is one tool.
it's a model.
it's fantastic.
you see on the next slide.
this is to confuse people and make sure the experts can do
what I want.
More seriously, this is very carefully designed.
Behind this model there are two key ideas.
The first is that we throw in the CISS -- something that in macroeconomics is not at all standard: you put in variables that measure systemic instability.
The second is that this model is allowed to change regime, or have a phase transition, from a stable financial system to an unstable one.
It changes; it can all change.
The relationship between consumption and equity markets, interest rates and GDP, is not the same anymore.
The data tell us that.
The data tell us.
so the data is otherwise standard, production, inflation,
three months.
we can only estimate we have 87 to 2010.
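A minimal sketch of the regime-switching idea, under stated assumptions: the model described here is a regime-switching system estimated on production, inflation, a short rate, and the stress indicator, but the toy below only fits a two-regime Markov-switching autoregression to simulated monthly growth data with statsmodels, to show how the dynamics are allowed to differ between a tranquil and a stressed phase.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
calm = rng.normal(0.2, 0.5, 250)      # mild monthly growth fluctuations (simulated)
stressed = rng.normal(-1.0, 1.5, 50)  # deeper, more volatile contractions (simulated)
y = pd.Series(np.concatenate([calm[:200], stressed, calm[200:]]))

# Two regimes, AR(1) dynamics, and a regime-dependent variance.
model = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=1, switching_variance=True)
result = model.fit()
# Smoothed probability of regime 1; which regime is the "stressed" one must be
# read off the fitted regime means.
print(result.smoothed_marginal_probabilities[1].tail())
```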
If we go to the next slide, you see what I was talking about.
The green line is the effect of a large shock to systemic instability, as measured on industrial production on a monthly basis, in a tranquil situation -- where the economy is in the phase in which the parameters are those of a tranquil economy.
The red dashed line is the impulse response function for a similar shock to systemic instability in a situation where the economy has changed regimes.
And the responses in the two situations are different: you see the downturn is much more pronounced than it used to be.
And that is the assessment policymakers didn't have the tools to make at the time.
This is an attempt to use the data I was talking about before in the productive way that Mark mentioned.
That brings me to the end.
>> great.
Thank you.
We have about five minutes for questions.
There will be I think a couple of people with microphones in
the room.
if you have a question, please put your hand up.
Someone will come to you.
we have a question.
here comes the microphone.
>> So I very much appreciated the discussion, in various ways, about all kinds of lampposts and things.
And it reminds me of, you know, when people say, you didn't ask me, so I didn't tell you.
I'm doing some stuff, but I'm doing ABC, and you said, please tell me about ABC.
And I say, I will start doing UVW; you didn't ask me about UVW.
So basically you need to have an understanding of what you're asking about at a much broader level than the level of products, rather than saying, tell me about what you're doing here, what you're doing there.
So you actually need to say: you're taking positions that have random outcomes all over the place -- yeah, let us know what's going on, wherever it's going on.
And don't just tell me what's happening here and what's happening there, because I can always create a very nice, beautiful hybrid that you didn't ask me about.
>> you're absolutely right.
I mean, that in some sense is the core problem.
I think there are a couple of different ways to address it.
One of the things that Viral talked about is that a lot of financial activity flows through choke points in the system; the payment, clearing, and settlement systems deal with a lot of this.
And in some sense one of the things you can do, I would argue relatively cheaply and efficiently, is to exploit those choke points effectively.
In some sense everything has to be cleared and settled, and there are custodians, and in some ways you can get a lot by focusing on that.
That's one issue.
Second issue, people have talked about going to the risk systems
of major institutions.
Sort of ask about information from the risk systems.
Potentially that's a very important thing to do.
But it comes back to this problem I talked about: how much trust do you have in the quality of the data filter that is the reporting of the firm?
That is a central issue, but it is another way to get at the issue.
The last thing, which I think is absolutely essential, is that there has to be an overt, forward-looking focus on innovation.
I want to make one small point about that, though.
The dangers of new products don't actually materialize when they're really new.
Everybody knows they're new.
They want to be careful.
They're small.
The problem comes in the transition from thinking, oh, this is a new product, to, this is a product I think I understand.
If you look at the last crisis, subprime lending and structured credit had been around 10 years before they really became a problem.
When they became a problem is when people thought they understood them -- when they were using the previous recession as the stress case for both the underlying credit risk in subprime mortgages and the credit products, and they actually thought these weren't new products anymore.
That's sort of where it gets tricky.
I think you put your finger on exactly what the problem is.
But there are a variety of ways you can address it.
Mark?
>> I agree with everything you said.
I want to use a military analogy.
the generals have an idea where the enemy is but they know the
enemy might be doing something that, you know, they don't
understand.
so you have scouts.
and the job of the scout is to go out into the forest and
figure out what's really going on.
The only way I see to do that well or thoroughly is by the financial market intelligence function.
The Bank of England used to do this in a way that I think was very appropriate: you would get someone fairly senior and experienced who would go have a cup of tea with people in the financial sector.
And the thing to do is to tune that person to be listening not only for the new product but also for the product that used to be new that's getting big.
The trick is you've got to be thinking a bit about rotating your view of the world into a kind of risk-factor framework, so that you don't forget to pay attention to the mortgage reinsurers who look small in terms of total capital but are big in terms of volume of risk.
I agree it's a big challenge.
It's not something that one is going to be able to address just
with data architecture.
>> Philipp?
>> The other two are exactly right.
This will be much better than other means of dealing with this problem, and actually going very micro is the start of the solution -- very detailed, very practical.
And I would like to add two points.
First, when you made that argument five or more years ago, the counterargument you got was that it would suppress innovation.
Innovation should not be used as an argument to prevent collecting data or information for supervision or oversight.
That is a lesson I have learned from the crisis.
So it is interesting that if you do it the way Mark described, and in other ways, it should not prevent innovation, if you look carefully when something emerges.
The second point is the most tricky part: what do you do about it when you see the tracks?
Then the cost-benefit analysis comes in, with a very forward-looking perspective, because you don't want to stop the growth of something like subprime at the very start.
When do you learn that, say, for certain features the costs are more important than the benefits?
That is really a challenge, and a lot of judgment will have to be applied.
But the direction is clear: we have to be more intrusive.
>> I'm going to make one last point which is that -- and bring
it back to the data.
With new products, the information and data collection systems are usually sufficient at the initial stage, for a small product.
It's usually a small group of people who are transacting; they know each other; there's a lot of trust between them.
The risk is that when it becomes commoditized, there isn't the investment in capturing the data and assuring the quality of the data, so that when regulators and other folks need to get to it, it isn't there.
This is the point -- and his point about data quality -- they're actually tied together quite tightly.
And the challenge is to make sure as products become larger
and more widespread that there's the investment made to ensure
that the quality of the data is there so it can be used for
regulatory supervision, including Macroprudential
supervision.
So with that, I'm going to thank the panel.
it was an excellent discussion this morning and a great kickoff
following the secretary's remarks.
And I ask you all to join me in applauding our first panel.
[ Applause ].
>> so I believe we have a break.
and we're coming back at 10:00.
thank you.
>>> Could you please take your seats.
ladies and gentlemen, the coffee break is over.
>>> so thank you very much.
we are now moving to the next panel on data challenges.
So now we move from the analysis that was touched on by the previous panel to the bowels of the system, the infrastructure.
We have heard from the panel that measurement needs have changed, so we need to take a fresh look at the real world we measure; we can't go on in the ways of the past.
We have decades of progress in technology, globalization, and education that have vastly increased the number of degrees of freedom in the system.
today you have instant long-range interactions among
millions of highly educated aggressive people who can do a
lot of things in the market.
That increases the odds of turbulence like last year's, as we have seen.
So the usual statistics that come at a quarterly frequency are just not good enough.
In that context we can be sure that our colleagues, the economists and analysts, will have endless fun with uncertainty and ambiguity in their work.
There will be enough of it; we don't need to add to it unnecessarily.
We must remove what we can.
this panel is going to explore in a bit more detail what can be
done.
But let's first look at the term data -- we're going to hear about some of these later.
The first word we need to look at is data.
The term is used with many different meanings by many different people.
So let's look at a few of the layers and dimensions.
You have micro data that describes individual elements, and aggregated data produced by statisticians like me.
You have data used specifically for regulatory reporting, data for measurement, and data for analysis.
You have large-scale micro data for machine consumption and data for human consumption.
You have factual data and data carrying interpretation.
You have data that comes from official sources and data that you pay for.
Then, language: data comes with language.
It's not just numbers in a computer.
The choices of the concepts, the terms, and the definitions we use to structure and describe what we look at matter, because the world doesn't come structured.
That is fundamental for data; classifications are an integral part of data.
Weakness in data comes in many shapes and sizes, so there are several directions of improvement.
We need to address inconsistency by making sure data have the same meanings as much as possible.
We need to plug the gaps: collect the data, and have the capability to easily add the missing data that will appear in a crisis situation.
The world being complex, it will always surprise us; we will never have all the data we need, so we need to collect quickly, in time for decisions, what is missing.
If data comes too late, it's useless.
We need to address frequency, timeliness, granularity of the data, and the flexibility we give to the users of the data.
We also need to address a very mundane thing: the usage.
I run big data operations at the ECB, and we are hampered by property rights, confidentiality issues, and ownership issues, also among regulators.
all of these are barriers that we need to overcome.
now, those weaknesses are of our own doing.
They're not needed.
On the upside, we can feel lucky that the system is of an atomic nature, despite all its complexity -- made up of well-defined bricks that can be described with certainty.
All data starts from those basics.
There's a long chain from there to the data that Viral and Mark described as what they need at the other end of the pipeline.
It starts at the other end, with the basics.
so, for instance, financial institutions often don't know
themselves what entities they're made up of.
many have told me that.
There's no one who knows who we are.
so how should the regulators know?
When you look at exposures -- that's the order of the day -- they are clouds made up of thousands of legal entities with millions of transactions.
This all can be unraveled, in theory; it starts again with the basic data.
I.T. can do all of that, no doubt; it's progressing, by all the signs, exponentially.
But what we need to get the data right is very simple.
We need rigor and discipline.
and to get there we will also need radical innovations.
it is not technically complicated.
There are many solutions around, and that's the problem.
Herding cats -- we heard about that this morning.
So far we have gently asked the cats to herd themselves.
They didn't.
So we will need to help them to herd.
And that's exactly what they're asking for these days, because they have understood that the chaos they themselves created is not good even for them.
So what do we need to do?
We need to come to having one data standard -- one for regulators, supervisors, central bankers, and for the industry.
We need to agree on international governance that serves all stakeholders.
We need to talk about legal compulsion.
We need to design a business model that delivers its value at low cost and does not distort competition in the market.
And then we need to design and agree on semantics, data standards, and operational processes.
I would say that's the easier part; we already have a lot of progress here.
So overcoming these challenges requires a global, concerted approach, which is illustrated by the fact that this time we have diverse perspectives and backgrounds presented -- and by the fact that there's one American sitting here as well.
So thank you for inviting me.
I'm now moving to introducing our distinguished speakers.
We have a sequence that will tell us a story.
Linda will talk from the perspective of the analyst.
David will give us the scientific basis for how we can structure data; he will use semantics, ontology, and so on.
Then Karla, who is engaged in ISO -- they design the standards.
And Andrei is the regulator.
Linda is chief of the economic data management and analysis section in Research and Statistics at the Board of Governors of the Federal Reserve.
She has a B.A. in economics from Rutgers University and an M.S. in finance from George Washington University, and over 25 years of experience in the banking industry: a money center bank, the Federal Reserve Bank of New York, and the Board of Governors.
She has worked in research for over 10 years.
I think I'll introduce each speaker in turn.
Linda, please.
>> thank you.
Can you hear me?
Is this on?
Okay.
Let me start out by saying that I was a little surprised this morning when Mark Carey described my job as cat herding.
But it does actually explain quite well what I do.
and let me talk a little bit about -- go back to the midst of
the financial crisis.
and there was lots and lots of discussion about what should we
be doing differently?
What do we need to get out of the crisis?
And what do we need to do to ensure this does not happen
again?
And there was an awful lot of discussion about the need to be able to identify aggregate exposures.
After a lot of discussions within the Federal Reserve, and discussions across agencies and with private industry, we found there were a lot of things we could do.
But two of the fundamental, foundational needs we identified were, one, entity identification, and, two, instrument identification.
So if you could change to the next slide.
And I'll also point out on the last slide we had a disclaimer
these are my thoughts and opinions as opposed to that of
the Federal Reserve Board.
I have on the slide now an example that I think says it
all.
In the U.S. we have a lot of banks that are named very similarly, and we have a lot of financial institutions that are named very similarly.
In this particular case we have 14 banks literally called City National Bank, and 147 banks with some variation of that name.
There are identifiers for each of these, but there's no one universal identifier that everybody can use.
Anyone in the room who may have had to deal with a relative's financial accounts -- maybe you have the checkbook of an aging parent or grandparent and you had to try to track down exactly which bank this is -- can appreciate that going by the name, and maybe even going by the address, is not always particularly helpful.
From my point of view, we spend an awful lot of time combining data sources.
Every data source has a different key identifier, so it turns into a rigorous exercise: before an analysis can be done, the data sets must be merged.
So my favorite example of this: in the midst of the crisis, there was lots of discussion in the press that maybe the rating agencies were to blame.
So lots of people decided, well, we need to evaluate this.
What do you need to evaluate this particular question of what the role of the rating agencies was?
You need financial statement data, probably from the Fed.
You need financial statement data from the SEC.
You need the rating information from the various rating agencies.
And you also need stock price information and various other market information.
You're looking at using at least five different sources of data to do this type of analysis.
And just looking at the City National Bank example, I've got a lot of different identifiers, and it took my research assistant maybe 15 minutes to look this up.
Magnify that by the 8,000 to 10,000 financial institutions in the U.S. today, then magnify that by the international financial institutions, and then add to that the complexities of name changes, address changes, mergers, and divestitures.
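A small sketch of the merging problem just described, with entirely made-up identifiers and values: each source keys the same institution differently, so an analyst first has to build or buy a crosswalk before any join is possible; a universal identifier would make the crosswalk unnecessary.

```python
import pandas as pd

# Hypothetical crosswalk: the same two banks under three different keying schemes.
crosswalk = pd.DataFrame({
    "common_id": ["BANK001", "BANK002"],
    "rssd_id":   ["1234567", "7654321"],
    "cik":       ["0001111111", "0002222222"],
    "rating_id": ["ACME-BK", "OTHER-BK"],
})
filings = pd.DataFrame({"rssd_id": ["1234567"], "total_assets": [2_300_000]})
ratings = pd.DataFrame({"rating_id": ["ACME-BK"], "senior_rating": ["A+"]})

merged = (crosswalk
          .merge(filings, on="rssd_id", how="left")
          .merge(ratings, on="rating_id", how="left"))
print(merged[["common_id", "total_assets", "senior_rating"]])
```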
Next slide, please.
So on this slide I'm telling you a bit about why we think this is necessary.
I just talked about what the problem is.
Why do we need to do this?
Well, we need to aggregate data not just to the industry level; a lot of times we need to know the total aggregate exposure for a firm.
Take any of the U.S. banks: they have lots and lots of subsidiaries, and you need to know, at an aggregate level, what the exposure of this particular organization is.
I've also heard many people discuss how, when Lehman was falling, there were a whole lot of Wall Street firms that didn't know what their aggregate exposure to Lehman was.
I think that's a dangerous position for the private sector, but it's also a problem that we need to solve for the public sector.
so being able to universally and uniquely identify firms is a
foundational problem.
And you will hear from the rest of the panel about solving that
problem.
Then the next slide, please.
so another foundational problem that we had was instrument
identification.
I have an example here of a few AIG bonds.
And you can see that there are a lot of factors you need to look at to make each of those instruments unique.
It's really easy to say that we have identifiers out there for lots of instruments today, but we don't have universal identifiers and we don't have a lot of asset classes that are covered.
So if you could move to the next slide: I believe that instrument identification is a slightly harder problem than entity identification.
One, we have so many different asset classes.
and also, because there are so many asset classes that are
changing over time, it's really difficult to keep up with
things.
So I don't have an answer today on how to solve the instrument identification problem -- maybe some others will, down the road.
But I do have a couple of areas for research where I'll suggest
the OFR spend some time.
one is identifying the gaps.
We have lots of information out there today, and for some asset classes it works just great.
But there are other asset classes where we don't have this identification.
The second step is that we need to prioritize the gaps: where is the greatest risk, and in which asset classes are we going to have the largest risk?
And the third is that we have historically used different registries for different types of instrument identification.
Given the nature of how these instruments change, and the number of factors you need for each unique identification of an instrument, perhaps it's time to consider a couple of alternative methods.
I cite the example from the chemical industry of the project where they created an algorithm for uniquely identifying chemical compounds.
I haven't had any chemistry since high school, but chemical compounds are incredibly complex, and so there was a huge problem in the chemical industry with uniquely identifying compounds: they change a lot over time, or you get different compounds which are unique.
So what they did is they created an open-source algorithm such that, as long as you use this algorithm, you will come up with the same identifier.
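The chemistry project referred to is, I believe, the open InChI identifier; the sketch below only illustrates the general idea in hedged form -- derive the identifier deterministically from a canonical form of the instrument's descriptive attributes, so that anyone running the same algorithm on the same terms gets the same ID. The field names and the hashing choice are illustrative assumptions, not a proposed standard.

```python
import hashlib
import json

def instrument_id(attributes: dict) -> str:
    # Canonicalize: normalize case/whitespace, sort keys, serialize compactly, then hash.
    canonical = json.dumps(
        {k.lower(): str(v).strip().upper() for k, v in attributes.items()},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:20]

bond = {"issuer": "HYPOTHETICAL ISSUER", "maturity": "2017-06-15",
        "coupon": "4.25", "currency": "USD", "seniority": "SENIOR UNSECURED"}
print(instrument_id(bond))  # the same attribute set always yields the same identifier
```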
I was told yesterday by Karla that there are ideas like this being thought about for the derivatives market.
I don't know if this is the answer, but I think it's a good idea to pursue some of the different alternatives to how we have done things in the past.
Just because some of these asset classes are so different from
other things that we have had to uniquely identify.
so with that being said, I will turn the microphone back over to
Francis.
>> thank you, Linda.
I think your conclusion -- that we should also look at other sciences or disciplines -- is what we need to do now.
The next speaker is David Newman.
David is vice president and strategic planning manager of enterprise architecture at Wells Fargo Bank.
He comes all the way from San Francisco.
David is chair of a semantic technology program and is leading a collaborative effort with the Object Management Group to develop and implement semantic data models for legal entities -- not too far from what Linda has described -- and other financial instruments as part of a proof-of-concept effort.
He is a frequent speaker on semantic technology at industry conferences.
David?
>> Thank you very much, Francis.
so you seated us very strategically because I hope I
have a solution, and I believe I do, for Linda's problems.
>> Great.
>> Also, I wanted to say that you may notice that I'm not
wearing a tie.
and I did that on purpose so that you could tell I'm a real
I.T. guy.
so if we could jump over to slide three, please.
It's always good to start out with a quote from Albert Einstein, because that makes me appear a lot smarter: we can't solve problems by using the same kind of thinking we used when we created them.
And that really is the essence of what we need to do: we need to start thinking of new and novel solutions.
This is an existential problem.
We need to be very careful and make sure we're using the newest and the best tools in order to build the bridge that our data is going to travel on.
Because if that bridge collapses one more time, we all know what the consequences are.
So let's move over to the next slide, number 4, please.
Let's just look at the picture.
And a picture of course expresses a thousand words.
So, metaphorically, a cool Jackson Pollock painting.
But this expresses what I'll say is our view of data today: it's very chaotic.
And so the question is, how can we evolve from a state of data disorder to a state of data order?
In other words, how can we do effective risk analysis unless we have clearly defined, well-articulated data that is interlinked in a way that is highly meaningful?
Our current state today is highly fragmented, because of all the silos, not only within institutions but across institutions and globally.
And unless we can find ways to standardize the data in order to effectively understand it -- so that we have consistent data, so that the data can be defined precisely, with clarity, so that we have trust in the data -- we certainly cannot draw the conclusions about risk that we'll need to draw going forward.
So how do we move forward and how do we do it in a way that
also leverages our existing investments in technology?
Let's move to the next slide, please.
So I'm very happy to work with the data management council, and we are focusing on developing solutions -- and I'll use the "O" word here: we're building a financial industry business ontology.
It's a very, very intelligent data dictionary that uses the most advanced technology so that not only humans can understand the meaning of data but also machines.
And once we have machines understanding what we do, we can let the machines do their job a whole lot better.
So we're designing this on some of the new concepts from the semantic web, from which you have very cool technology in use today: Apple's iPhone and Siri -- you can talk to it, it talks back -- and IBM's Jeopardy challenge using Watson is based on semantic technology.
But, again, what is semantic technology?
So, in the interest of time, let's look at slide six, please, and let's look at the diagram on the lower part.
Basically, semantics is a way to express information that aligns linguistically, grammatically, with what we all learned in junior high school: subjects and objects are tied together with verbs, or predicates -- very natural-language-like.
So we can design semantically a lot of information that is meaningful to us and to the computer.
So we can design semantically a lot of information that is
meaningful to us and the computer.
As an example, "David is employed by Wells Fargo" is a legitimate semantic statement, where David is the subject, Wells Fargo is an instance of a class we call company -- the object -- and what ties them together is the verb, or predicate; "works for" and "is employed by" are synonymous.
Semantically we can say is employed by is a kind of subset
of works for.
we could also take the reverse where we could say Wells Fargo
employs David.
is employed by and employs are terms that are semantically
inverse of each other.
the computer, if the computer understands that David is
employed by Wells Fargo, it will automatically infer that Wells
Fargo employs David.
It's bringing in a lot of intelligence.
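A minimal sketch of that subject-predicate-object idea and the inverse-property inference (an assumed illustration only, not Wells Fargo's or any working group's actual ontology tooling):

```python
# Triples are (subject, predicate, object); declaring inverse predicates lets the
# machine infer the reversed statement automatically.
triples = {("David", "isEmployedBy", "WellsFargo")}
inverses = {"isEmployedBy": "employs", "employs": "isEmployedBy"}

def infer_inverses(facts):
    inferred = {(o, inverses[p], s) for (s, p, o) in facts if p in inverses}
    return facts | inferred

print(infer_inverses(triples))
# {('David', 'isEmployedBy', 'WellsFargo'), ('WellsFargo', 'employs', 'David')}
```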
jumping over to slide seven, where does this intelligence
allow us to go?
So what you can see here on this slide is an actual example of
semantically defined information that's extracted for you to
view.
And this shows an interlinking of information.
How do we understand asset classes?
Well, we can semantically define the instruments and asset classes so that the computer can understand them.
So it will understand the integrity of the data by looking at the structure of the data itself; we don't need codes.
And we can also change our assumptions very rapidly with
semantic grids.
On the upper right I have examples of semantically representing legal entities, where we can see that ABC Trading Company -- which you probably can't see in the print -- has an immediate parent of ABC Investment Bank, which has an immediate parent of ABC Bank.
All of this information can be interlinked and tied together in a very meaningful grid that we can query with high levels of integrity, because semantically we can prove the consistency of the data by setting up and defining rules.
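A hedged sketch of the kind of roll-up such an interlinked entity grid makes possible (the hierarchy and amounts are invented): walk each entity up its immediate-parent links to the ultimate parent and aggregate exposures there, which is exactly what firms struggled to do for Lehman.

```python
# Hypothetical immediate-parent links and exposures.
parents = {"ABC Trading Co": "ABC Investment Bank",
           "ABC Investment Bank": "ABC Bank"}
exposures = {"ABC Trading Co": 120.0, "ABC Investment Bank": 75.0, "ABC Bank": 40.0}

def ultimate_parent(entity):
    while entity in parents:  # follow immediate-parent links to the top
        entity = parents[entity]
    return entity

rollup = {}
for entity, amount in exposures.items():
    top = ultimate_parent(entity)
    rollup[top] = rollup.get(top, 0.0) + amount
print(rollup)  # {'ABC Bank': 235.0}
```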
So moving on to the last slide, I would like to suggest to everyone here that we need to establish a public-private partnership between industry and the regulatory community, both within the United States and globally, in order to begin to define semantic standards so that we can operationally interlink our data, giving us high trust in the consistency of the data, so that the risk analysis that is performed has higher integrity and greater horizontal reach across disparate data.
So I definitely would like to ask all of us to find a way to formally pull together and collaborate to begin to define these standards, so that we can really leverage the tools of the 21st century and not defer to the older tools of the past that did not serve us as well as what I believe the future holds for us.
I will also ask that we establish global financial semantic standards, and I would also like to reach out to the Financial Stability Board so we can enable this globally.
So thank you very much.
>> thank you very much, David.
and we see that there are I.T. guys who can speak
intelligently.
you were very clear.
Thank you very much.
The next speaker is Karla McKenna.
We go one step further.
Karla is the chair of the ISO technical committee in charge of financial services, a position she has held since 2006.
When she does not handle ISO assignments -- and I wonder when that is -- Karla is director of securities and funds services and manages market practice and standards for these businesses for Citigroup, or Citi.
In addition, she is chair of the Securities Market Practice Group, serves on the board of the Accredited Standards Committee X9 and the board of ISITC, the International Securities Association for Institutional Trade Communication, and on its international steering committee.
She also participates in industry organizations such as SWIFT, supporting initiatives on corporate actions.
Karla is a very busy person.
Thank you.
Karla?
>> Thank you.
It's a great pleasure to be able to address this topic of data standards, and particularly to focus on the LEI, to a group that is coming together from various walks of life to solve a common problem, bringing new ideas to the table on how we can solve the situation we find ourselves in.
I wanted to start -- the next slide, please -- I wanted to start by talking about standards, and not just to tell the old joke.
Standards exist for different purposes.
We have already heard people on the previous panel, and on our own panel today, talk about different levels, I would call them, of standards.
I classify them broadly into the what and the how of standards.
Under the what, I list specific data elements like codes -- like the LEI that we're going to talk about a little bit later.
We also talked about bags of data; other panelists have phrased it today as the set of information, or the right pieces of information, that we need.
under this categorization we talk about meanings and
definitions that link in with what David talks about with
semantics.
So they all need to work together.
The how, for me, addresses the issue of how we collect the data in a particular fashion, consistently across regulatory and business uses, for example, and also characterizes how business processes should work or how analytics should work.
And that then, for me, makes a full circle back up to the what: because if you know the business process, or you know the problem that you're trying to solve, you know what pieces of data you need to collect in order to address it.
So with this set as the stage, let's look at some of the key principles that we're operating under in this environment in order to get to that next generation of standards.
We're looking at standards that are based on freely available international standards, and we've been asked to leverage the expertise particularly of ISO TC68.
To give a little background for those of you who are not familiar with the ISO organization, I'm going to show you what this looks like in just a few seconds.
ISO is an organization with a very, very long history, established in 1947; TC68 was established a year later.
ISO has over 260 technical committees that look at standards in different functions or different areas, and ours focuses on financial services.
From the standards I'm going to put up, you'll see that you might already be familiar with TC68 and what we've been doing all this time -- you just don't know it.
Now, as far as the standards themselves, we are focusing on the new world: a world where we can use standards that don't need a lot of intelligence actually embedded in the codes themselves, so that we can rely on data attributes, which are easier to change and easier to update due to, as Linda pointed out for the LEI, things like mergers and other corporate events.
As David said, to take us away from the legacy that we found before, they're persistent: we construct them in such a way that we're not changing the codes themselves, the identifiers themselves, but the attributes that accompany them, to make them persistent.
They should be free from IP limitations, so we can make them available to whoever needs them without making them prohibitively expensive, subject to privacy and those types of regulations; and they should be applicable globally for the financial services industry and scalable.
So these are the issues we're facing from a standards
perspective today.
So a few things on ISO.
You saw that I work for Citi.
But I'm a volunteer in ISO.
as are many are the faces around this room in order to make
standards reality.
What I wanted to focus on on this slide because I have gone
over some of it already, there's a unique governance procedure.
When I talk about this procedure, it's a procedure for
the development of standards.
it's not a governance procedure for the use of the standards by
the industry and the regulators and the other users of
standards.
This is a recipe in order to have a tried and true way to
develop and maintain standards that make them believable.
Next slide, please.
this is what 68 looks like.
not going to go through everything.
But you see it's a combination.
It addresses the financial services industry.
>> it's not up there?
>> It's not up there.
>> Next slide, please.
there we go.
Thank you.
What it does is it has various committees that look at various parts of the financial services industry.
The one thing I wanted to point out is that the LEI, from a scope perspective, is meant to cover legal entity identification for the parties within financial transactions, so it's reporting into this committee, up into the financial services area.
Next slide, please.
Examples of the standards managed in TC68: I wanted to give you a little bit of a flavor, because you might be familiar with some of the work already and may use it inadvertently in your day-to-day.
Biometrics, encryption algorithms.
as far as securities, things like the ISIN and classification
of financial instruments and market identifier codes to
identify trading venues.
The ISO currency codes we use in transactions and the
international bank account number are all under this
structure.
So now, with that brief introduction, I wanted to talk about the use of standardized legal entity identification.
The next slide, please.
We're talking about a situation where legal entity identification is not a new concept and not a new need.
What's new here is the idea to standardize it globally, by the users and stakeholders that we see here today.
Business has always needed legal entity identification to do things like know your customer, aggregate data, manage counterparty and concentration risks, improve straight-through processing, et cetera.
Regulators are coming to the table with new regulatory reporting requirements, not only for prevention but for resolution and cure.
We need better ways to aggregate data and analyze data.
What this is aimed at is improving the data at the transaction level, as many of our speakers, starting with Lou this morning, pointed out.
So, the scope of the legal entity identifier.
The LEI can be assigned to any legal person or legal structure organized under the laws of any jurisdiction.
And it's worded in such a way that even investment funds, no matter how they are constituted, cannot get away from being assigned a legal entity identifier.
It's not at this particular time envisioned to cover natural
persons from the scope perspective.
The next slide, please.
Just a little bit -- am I on point?
I can't see it from here.
The structure.
I'm not going to go through everything here, but you see it's based on some of the principles that I talked about before.
It's sufficiently long in order to be persistent.
And it's based on data elements -- both data elements to uniquely identify the entity itself and data elements to have metadata associated with it: when was it assigned, when has it changed because of corporate events, when does it expire, like in a merger or things like that.
Also, in addition to the relevant data attributes, you see the standard itself is relatively skinny.
This was done by design, in order to be nimble, so that the amount of data you need to create the identifier itself is very light and very targeted.
And then, within the usage of it by regulators and industry, there can be other data elements, subject to privacy laws, that can be held in the repository and associated with these codes.
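One concrete property of identifiers built this way is that their integrity can be checked mechanically. The published LEI standard ended up using the ISO 7064 MOD 97-10 check-digit scheme, the same one IBANs use, though at the time of this panel the vote was still pending; the 18-character base code below is invented for illustration.

```python
def append_check_digits(base18: str) -> str:
    # ISO 7064 MOD 97-10: append "00", map letters to numbers (A=10..Z=35),
    # check digits = 98 minus (that number mod 97).
    digits = "".join(str(int(c, 36)) for c in (base18 + "00").upper())
    return base18 + f"{98 - int(digits) % 97:02d}"

def is_valid(code: str) -> bool:
    digits = "".join(str(int(c, 36)) for c in code.upper())
    return int(digits) % 97 == 1

lei = append_check_digits("HYPO00ENTITY123456")  # hypothetical prefix + entity part
print(lei, is_valid(lei))                        # the check passes by construction
```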
so the standard is a piece of the puzzle.
And it is one of the foundations, as I will go into on the next slide, after I go over the timeline.
so people always ask me when is this going to be ready?
We're on an aggressive timeline.
We're looking to be able just a few weeks from now, on December
14th, to conclude our second round of voting.
Now, the process that I talked about before has successive rounds of voting in ISO in order to gain consensus; we're in our second one now.
If we're successful -- if the vote says this is a good idea, we like the way you structured it, and we think you should go with it -- we would have approval for the standard on December 14th and we'll be able to publish it in January.
If we have more work to do the next round of voting would take
us into the March time frame and we would be able to publish it
in April.
So we're trying to key these time frames to when the LEI
would be needed by industry and especially for the pending
regulatory reporting that we're seeing.
That's coming over the horizon first in the U.S. and then of
course globally.
So the last slide, please.
As I said, the LEI is the first step in a solid foundation.
It is the basis of all of the relationships that we're seeing: we need to take our legal entity identifiers -- the legal entities themselves -- and ask, what kind of assets do they hold, what particular assets do they hold, how do they value them, and get down to the transaction level.
it's at that level that the data will be collected.
we may need to do more within the data standards and financial
services standards work under 68 and ISO in order to plug some of
these gaps.
We're open to that challenge in order to be able to do that.
We want to continue to work closely with industry and
regulators in order to be able to do that.
I have under here that our standards should be forward
looking.
For me, that was trying to communicate the thought that David put forward: we have new technology, and we have new ways of exploiting not only the way we technically put together standards but also the way we maintain them, the way we can update them, and the way we can give access to data involving standards, and we should always be looking at that in order to improve.
We should devise our standards to be future-proof, able to live with updates in technology without disturbing the core or content of these standards.
With the last slide I wanted to really drive home these relationships.
This is a slide that Bill and I worked on within ISO TC68.
The fact is that in the capital markets we can't really separate things: even if our technical committee is organized along different business areas, what we need to do is look at this with a holistic approach.
It's with this thought in mind and this approach that we need
to do our work.
Thank you.
>> Thank you very much.
I think we need to praise ISO, which has driven through an extraordinary task at lightning speed.
Usually it can take years to agree on a standard; this was done in months.
There's a book that I think should add a chapter.
Thank you, Karla.
The next speaker is Andrei Kirilenko.
He will show us what a regulator who is moving very fast on this subject is doing.
Andrei is chief economist of the Commodity Futures Trading Commission; he joined the commission in 2008 and was appointed chief economist in 2010.
He received his Ph.D. in economics from the University of Pennsylvania, where he specialized in markets.
Prior to joining the commission, he spent 12 years at the IMF working on global capital market issues.
His research has focused on the informational properties and microstructure of securities and derivatives markets.
He has published a number of journal articles appearing in the Journal of Finance, the Journal of Financial Markets, and IMF papers.
In 2010, Andrei was the recipient of the Chairman's Award for Excellence.
Andrei?
>> Thank you.
thank you for inviting me.
it's a pleasure to be here.
Thank you to the organizers; it's a wonderful panel, and my remarks at the end will summarize a lot of the efforts.
What I'm about to say are only my personal views, not those of the commission or its staff.
When we started looking at these issues -- of course, as a regulator you need to have a mandate; you can't just be working on things and deploying taxpayers' money.
So the mandate to work on systemic data issues came from section 719(b) of the Dodd/Frank Act, which mandated a congressional study to look into basically whether or not derivatives could be represented in algorithmic form and whether or not that representation has to be mandated.
It was a joint study with the Securities and Exchange Commission.
We worked on it for about nine months, and at the end there was a study delivered to Congress, mandated by Congress.
In it we identified four areas that needed particular work before we could suggest this sort of mandatory implementation of machine-readable descriptions.
And the four areas that we identified were entity and product ID, legal documents, semantics and ontology issues, and storage and retrieval.
Following the study -- the study was done -- this issue was picked up by the technology advisory committee.
Again, it has to be done in a way that satisfies the various acts and provisions, and done in a fully open and transparent fashion, as part of the public process.
So the process we put around it was to have this issue picked up by the technology advisory committee, and then a mandate was created to look at these issues.
We put together a group of 28 representatives of the industry, many of whom are here; Karla is on this group, and many of the people in this room are on the working groups.
It created working groups and asked people basically to self-select -- we sort of suggested, and people self-selected into working groups along the four lines, such as legal docs.
and the idea is that we wanted to put a process, regulatory
sort of a process, a public process around cats herding
themselves.
So we would like it to be on the public record, and we also would like it to be in the public record in a way that potentially feeds into how the commission can accept this information.
The technology advisory committee can make recommendations to the Commission, which could implement regulations, if regulations are needed, to mandatorily implement provisions that are deemed appropriate.
So we have a process now in place.
And so what we have is a bunch of very, very good people who on December 13th are all going to come to our conference center and, as part of a public meeting of the technology advisory committee, present preliminary recommendations to the committee saying: this is how we think things should be done.
I'll give you a brief overview of sort of where we are on these
issues partly as a summary of what people have already said.
So the next slide is working group one, entity I.D.
Karla has already talked extensively about it.
From the process point of view, the important thing being suggested is a phased implementation that would enable testing, back-testing, and gradual implementation, so we could learn over time what could be implemented, what works, and what doesn't work.
So far the idea is to have the first phase completed by June 2012.
We can skip the next slide; I think Karla already covered it.
Legal documents.
What people have been looking at is that a lot of derivatives now exist undescribed, in paper form, and the legal documents that already exist are in paper form.
How can we move to a situation where derivatives contracts, starting with the most common ones and moving down the path, are designed for machine reading -- not for human reading, but for machines?
The underlying theme of all of these working groups is basically how to move from a human-operated market to a market in which machines and machine-readable information take a greater role.
It is important.
That's why standards are important.
That's why the definitions and I.D.s are important.
And how do we take all of this information and put it into legal documents?
Next slide and the slide after this.
What this team is suggesting is to move in steps.
and they have, again, a gradual approach of how to get it done.
The next group is semantics and ontology, something that David talked about.
People have been talking about semantic processing and about business ontology.
However, there is already a language that most of you know, an XML language, that describes in a certain way what a derivative is.
It's not entirely designed to describe things the way perhaps people would like them to be described, but it exists, so that creates sort of a baseline for what's there.
And lots and lots of people in the industry already have the standard and the data in their systems.
So the idea is: do we need to move to a new type of system and a new semantics base, and can we have something that, again, allows gradual implementation?
What this group is doing is looking at some cases, doing case studies, and suggesting a gap analysis.
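For concreteness, here is a tiny hedged example of what "an XML language that describes what a derivative is" buys you; the element names below are invented for illustration rather than taken from any existing standard's schema.

```python
import xml.etree.ElementTree as ET

record = """
<trade>
  <product>interestRateSwap</product>
  <notional currency="USD">10000000</notional>
  <effectiveDate>2012-01-15</effectiveDate>
</trade>
"""
root = ET.fromstring(record.strip())
notional = root.find("notional")
# A machine can pull structured terms straight out of the record.
print(root.findtext("product"), notional.get("currency"), notional.text)
```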
Again, we can skip the next slide and go to working group four.
Storage and retrieval.
We have decided this is one of the areas that actually doesn't
come up so frequently.
But it turns out to be one of the areas in which standards are
also quite critical because, okay, suppose if you have a
derivative described, suppose there is a transaction that took
place.
and suppose their counterparts are fully identified.
And all of this is great and done fantastically.
but it takes you an hour to store all of this data.
And three hours to retrieve it.
It's not very encouraging if you're looking at the situation
like that.
and also when you retrieve it, it may have been stored in a way
using the architecture.
When you retrieve it, it's not the exact replica of the legal
copy that was stored.
So what do you do about it?
Is it a legal replica or your presentation of what you
retrieved from the common storage?
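A minimal sketch of one way to make that fidelity question checkable (an assumption about a possible design, not the working group's recommendation): store a cryptographic digest alongside the record, so the retrieved bytes can be proven to be an exact replica of the legal copy that went in.

```python
import hashlib

store = {}

def put(record: bytes) -> str:
    digest = hashlib.sha256(record).hexdigest()
    store[digest] = record  # content-addressed storage keyed by the digest
    return digest

def get(digest: str) -> bytes:
    record = store[digest]
    assert hashlib.sha256(record).hexdigest() == digest, "retrieved copy differs"
    return record

key = put(b"<swapTrade id='HYPOTHETICAL-123'>...</swapTrade>")
print(get(key) == b"<swapTrade id='HYPOTHETICAL-123'>...</swapTrade>")  # True
```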
So we have a group of -- you wouldn't think of these things,
right?
When you operate in the human world you don't need to think
about the different architectures, different parts
of this data process.
But this was identified as one of the critical areas that needs to be worked on.
and we have a group of people who are working on it.
So the last slide, the summary slide: we are facilitating a public process, and we would like to think of the Dodd/Frank Act not only as a way to legally, statutorily change how derivatives are traded but also to think about how we can retool the whole derivatives marketplace.
These are 21st century products that are being traded, still traded in some ways using 19th century technology -- over the phone in some cases.
We need to bring it up into the 21st century in a way that helps all of us.
The regulators can do what regulators do; market participants need to have liquidity issues resolved; end users will be able to see what different products are available and choose from various menus of options.
This is a terrific opportunity to retool this part of the industry -- a part of the industry that at last count is over $700 trillion and which operates largely over the phone.
so this is an opportunity to do it.
And these are sort of the venues in which we think we could take a number of significant steps, working together in a public/private forum, but in the context of the regulatory process, which enables us to use this information for regulation.
>> thank you very much, Andrei, for an impressive lineup of
activities presented here.
I wish you good luck in these activities.
We need it.
We need the success.
But just to put it in perspective again, to see the
magnitude of the whole problem, I had a conversation with the
chief economist of the I.M. group in Europe.
He told me that his group needs to report to 48 agencies across
Europe.
So in order to make it workable, the effort needs to be applied
throughout the regulatory community, and they need to work
together on that.
And here we are lucky to have the OFR.
so that was the lineup of presentations.
I have a deal that we are given five minutes for questions.
Thank you very much.
>> The law of unintended consequences.
I like the standardization concept very much.
I'm wondering if there is any impact, or any way to measure, the
systemic cybersecurity risk of homogenizing the network.
A heterogeneous network imposes a lesser probability on the
propagation of a penetration in a cybersecurity attack.
The standardization of a network homogenizes it in terms of
penetrability.
Do you have any way of making that tradeoff assessment?
>> A very quick one on this.
We are not going to build it in one go.
So there will be enough heterogeneity in the system to defend
ourselves against penetration for a long time.
>> So, yeah.
We have best practices in I.T. that deal with, you know, massive
layers of protection in terms of cybersecurity.
However, it is a valid question, and it is a major concern where
we're always working to improve our resilience and our ability
to connect the dots of risk in the cybersecurity space, and it's
an ongoing effort.
We don't have a conclusive answer to that, because there's
always a risk of them being one step ahead of us.
But the question of, when we have integrated data, does that
mean that we have a bigger Humpty Dumpty, essentially, that will
fall off the wall and all break?
Yes, there's risk.
But I think we have to move forward with awareness of all of
these risks to harmonize data.
Because that is fundamental.
>> thank you very much.
Karla?
>> Yes.
Just to round that off, we're still in discussion -- just to
take the LEI as an example -- about exactly how this will be
deployed, not only from a U.S. perspective but globally, so that
it can be shared and scaled.
There are ways to avoid the big Humpty Dumpty of a centralized
realm -- an industry utility that can be federated -- and also
to keep the data in such a way that the data that can be
publicly shared is publicly shared, and the data that needs to
be subject to privacy regulations, whether from local
jurisdictions or from certain stakeholders, is also addressed.
So it's all of these issues that we have on the table.
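A toy illustration of the public/restricted split being described here -- one reference record, two views; the field names and the particular split are hypothetical and not the actual LEI design:

```python
# Hypothetical example: partition one entity record into a publicly shareable
# view and a restricted view subject to privacy rules.
PUBLIC_FIELDS = {"lei", "legal_name", "country"}

record = {
    "lei": "HYPOTHETICAL00000001",
    "legal_name": "Example Holdings LLC",
    "country": "US",
    "regulatory_contact": "jane.doe@example.com",    # restricted
    "supervisory_notes": "subject to privacy rules",  # restricted
}

public_view = {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
restricted_view = {k: v for k, v in record.items() if k not in PUBLIC_FIELDS}
print(public_view)
```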
>> I think that concludes the answer to that question.
Do we have one more question?
Back there.
>> So you started off the panel talking about the challenge of
asking cats to herd themselves.
But of course governments have very considerable powers to herd
cats a little more forcefully.
And the OFR in particular has the power to mandate the
standardization of data reporting across all of the agencies.
And it also has very extensive powers to subpoena data from any
organization.
This might perhaps also be for Dr. Berner or somebody here from
the OFR.
How do you feel they could use these powers to encourage the
kind of information collection we need, and do you see the OFR
doing that in the near future?
>> I'm very optimistic but I would like Dick to answer the
question.
>> I think it is set up in the right way.
>> Thanks for the question.
We do have some powers under the statute.
We want to use them thoughtfully.
First, to standardize data, it's very important that we, as our
panel suggested this morning, have a global standard.
If we have that, and we have the right governance and the right
funding model, then I'm convinced that will go a long way toward
standardizing data.
Developing the catalyst for standardizing data is something that
we can all do across all the agencies.
And we're talking about ways to do that with Andrei and with
others in the industry.
So we want to make sure that we start off by doing that, and
making sure that the industry from whom we're collecting the
data does it in a way that also ensures that they see the
benefits -- that they realize the benefits of straight-through
processing, of using the same standards in their MIS and their
risk management as well.
As far as the subpoena power is concerned, that's also something
that we'll use judiciously.
Again, we want to make sure that people report data because they
see the benefits of reporting data in a standardized way.
The subpoena power is something we would use if we can't get the
data any other way.
But we are just starting down that path.
It's something that we have held in reserve.
>> I think that's it for the international aspect.
We have been very fortunate that the crisis has helped to gather
support from very prominent people.
So the Financial Stability Board has a meeting today where we
hope to have some good decisions.
The G20 themselves have expressed explicit support for a global
standard.
So I think the stars are lined up.
And I think we are at the end of our hour.
Thank you very much for your attention.
And I would like to thank the panelists and the questioners.
[ Applause ].
>> given the constraints of time, we're going to take a very
short break this time for 10 minutes.
We'll come back.
we'll have our third panel.
>>> we said we would have a short break.
We meant it.
We're going to get started with our third panel.
>> Okay.
Could everyone take their seats, please.
>> okay, thanks.
Hi.
Thanks.
Again, I'm Dessa Glasser, and I'm responsible for the Data
Center of the OFR.
So I have to deliver the things that everybody has been talking
about.
We are working together, and we're working very hard, with our
partners, with financial regulators, and with just about all the
people in this room, to try to make data more available and
easier to use for systemic risk analysis.
So the focus of my panel.
First of all, no slides.
We got together a couple times before the session and we
decided we wanted to have an open discussion and we wanted to
hear from you guys.
And we have a very distinguished panel here, with people who
have a lot of experience in this area.
So I'm going to keep my comments very brief.
I will tell you, one of my colleagues on the research side -- we
had been talking about herding cats -- said that us researchers,
we're like circus animals and we need to be tamed.
What are the pieces that are meaningful?
We need to figure it out together.
So to that end, the focus of this panel is operational risk and
the acquisition, aggregation, and management of financial and
operational risk data.
Data and operational risks come from the data supply chain and
the quality of the data, as well as the completeness of the
data.
so when we got together we decided there were two topics
that came up.
We actually couldn't decide which was more important, so
we're talking about both.
So the first was operational risks in the data supply chain.
So this comes from -- what we're asking about is the collection
and management of data.
How do we ensure we have good-quality data going into the
analytics?
The analytics are only as good as the data that goes into them.
Just as some of our panelists on the first panel were talking
about the importance of data, in the second session we talked
about the aggregation of data and how we can combine and
aggregate it.
But when you talk about collateral, and how we measure liquid
collateral, what do you put in the liquid box?
We may all be doing it differently.
Tom, who is going to be in one of our afternoon sessions, talked
about it and asked how we should look at it.
So the first part is to look at the collection process and the
operational risks that go along with it.
The question there is: what are the risks that the OFR and the
industry need to manage to improve the quality of data and make
sure we get good metrics for analyzing stability?
The second topic is around the data we need to measure
operational risks that have systemic implications.
We're not talking about individual company data, or an
individual threat or fraud in an individual company, but across
the market: what are those operational risks, and what data
should we collect that are systemic in the market?
So I have four experts.
They have asked me not to read their bios.
And they have very impressive bios.
So I encourage you to look at them.
We have four different views: one from a regulator standpoint,
one from an industry participant, as well as a financial market
utility and a financial institution.
Charles Taylor, Deputy Comptroller of the OCC, is going to give
us the regulatory view.
We have Mike Atkin, who has been a proponent and advocate of
standards and of using data as an asset within the company; he
is managing director of the EDM Council, and he's going to take
the industry view.
And Anna Ewing is managing a financial market utility and has to
look at and get data to analyze threats in the market and keep
markets running during a crisis; she's taking the financial
market utility view.
And Philippa Girling from Morgan Stanley.
So I thought I would start with a question for each one of our
panelists, let them answer it, and have them tell you their
ideas on these two topics from their point of view.
So first, Charles, coming from the regulator and also from the
research standpoint: from your perspective, how should we be
looking at this from the OFR?
Specifically, could you mention some of the conclusions from
your supervisory report?
>> Thank you very much, Dessa.
And thank you to the -- for organizing this occasion.
I'm going to issue the standard disclaimer that I speak on my
own behalf and not that of the agency today, despite my title.
And before I embark on my own comments, I would say that the
agency is very supportive of, and considers very important, the
work being done here.
As members of the FSOC we are concerned with and engaged on
problems to do with systemic stability.
And what I don't think is necessarily commonly appreciated is
that although we are a microprudential regulator, looking at
individual banks, we have responsibility for the national
banking system.
So we're thinking about whether that gives us a subset of the
systemic issues that we should be thinking about on our own
behalf, as well as working through all the other concerned
agencies.
So my remarks, I think, are going to be of a very broad
nature -- my personal remarks.
As you said, I'm going to start off by referring to the senior
supervisors group and the report they issued in 2010 on risk
appetite frameworks and the importance of IT infrastructure.
I'm going to then say something about the data required to
support good risk management, drawing on that report.
And that will lead to talking a little bit about data quality,
and about operational risks around data quality, as well as
operational risks that feed into the risk management framework.
The essence of the senior supervisors report's recommendations
is this: they said you have to have a risk appetite defined, and
then you have to manage your mitigated risks to be within that.
You use risk management techniques to do that, applied to the
inherent risks that you face.
They were talking about what happens within a financial
institution.
We need to think about an analog for the financial system as a
whole.
And where we have risk management as an activity in that chain,
we have policy -- macroprudential policies aimed at addressing
the emergence of systemic risks and mitigating what may be
inherent in the financial system.
So we have a complete chain, from observation and orientation
through to decision-making, through to action, that we have to
think about.
That was the challenge that the report put before us.
At every stage we're dealing with data.
And so there are data issues in understanding the inherent
risks, data issues in understanding how much policies do to
mitigate them, and how different policy instruments, policy
levers, may impact the financial system to move it away from
areas of high systemic risk.
And we have data issues around calibrating the mitigated risks
we have against how much risk we can tolerate in the system.
I think I would say a word first about the operational risks
that are part of the inherent risks.
And here I think the way we have thought about it as an industry
at the institutional level translates to thinking about it at a
systemic level.
There we have a framework that's been defined and elaborated by
the operational risk data exchange, which basically defines it
as the risk that people, processes, or systems malfunction, or
that there's an event that disrupts the operations of an
institution.
That seems perfectly relevant to the way we think of the
financial system as a whole, when those disruptions are on a
scale that causes systemic risks to increase or appear.
I think the systemic importance will depend, when we look at the
people, processes, and systems, on the linkages -- whether they
are pathways for contagion or spillovers.
Are they reinforcing one another?
We saw in the crisis that many things went wrong at once, and
they worsened the circumstances that the industry faced, where
the different levels failed together.
And I think that's something operational risk managers are
keenly aware of.
We need to think about that as well.
We need to worry about capacity and buyers concentrating in
particular areas.
And a topic -- I think we have to think about the practices
across the industry.
A point was made by one of the questioners in the last panel
about homogeneity and the way you think about semantics and
data.
I think that applies more broadly to the kinds of things that
make a small risk into an industry-wide one -- when everyone is
equally vulnerable, say.
So if we think about the data problem, it's good to think about
how the data can be used.
And indicators of structural and dynamic systemic risk are
important for us to think about.
There's a certain danger of it seeming like you're trying to do
everything, to boil the ocean.
I don't think that's the story.
I think there are particular needs, and we can capture those by
thinking about individual models of contagion or models of
spillover and so forth.
Some of those we will discuss later over the next day and a
half.
But stepping back from the individual models, there are probably
indicator sets we should be focusing on.
I would offer indicators that describe how the system looks
today, things about which we need to have good data.
Things like credit exposures, funding exposures, process
strengths -- I'm thinking, for example, about the securitization
process and the way it spans institutions across the financial
system.
Concentration and substitutability.
And, again, back to institutional homogeneity: how much
similarity do we see in structures and strategies, in business
lines and funding, and so on.
So it's collecting and having confidence in data that bears on
those kinds of things that will give us confidence about how the
financial sector looks today along its important
characteristics.
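For the concentration indicator mentioned here, one standard summary measure (not necessarily the one the speaker has in mind) is the Herfindahl-Hirschman Index; a small sketch with made-up shares:

```python
# Sketch of the Herfindahl-Hirschman Index (HHI) as a concentration indicator.
# The market shares below are invented for illustration, not panel data.
def hhi(shares):
    """Sum of squared percentage shares; higher means more concentrated."""
    return sum(s ** 2 for s in shares)

# Hypothetical shares (in percent) of, say, clearing volume across five firms.
shares = [40, 30, 15, 10, 5]
print("HHI:", hhi(shares))   # 2850; values above 2500 are conventionally 'highly concentrated'
```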
Then I think we have to think about how it can change.
So we go from steady state to dynamics.
And there is a group of indicators there that I think are
equally important.
I'm going to jump right through them in the interest of time --
I can see you're beginning to think about that.
There are some that are traditional and well covered, like
growth, profits, and volatility.
Some are less well covered.
We touched today on product and process innovation --
innovation and also, I would say, adoption.
Processing and communication speed: I wonder about
machine-to-machine communications and whether that doesn't
introduce a new dynamic that we should worry about.
And I put on the table, for this, changes in average operational
standards.
So you have certain standards that you maintain, for example, in
underwriting or other risk management processes, operational
processes.
There are minimum standards, and then the actual standards that
the industry is living to, which may not be as well understood
or as well monitored as they should be.
So when we look at the data we need for these sorts of purposes,
we'll have different requirements in terms of data quality.
And I know Mike is going to talk about that.
I would say, though, just to finish setting up the paradigm as
we think about operational risks around data, that what we can
have in mind is sort of an absolute standard, our top wish
list -- what we would really love to have -- versus something
completely unsatisfactory.
We need a benchmark of what is attainable and practical, and is
also aspirational -- something to which we can look forward with
some degree of certainty.
And what the operational risk managers are doing, of course, is
coping with the fact that operational risks degrade one or more
of those dimensions.
We are trying to constrain that degradation, and the opportunity
we have, as we think about the ways in which data management can
evolve, is to design data management systems, and improvements
in the way we handle data, that mitigate these risks in a
reliable way.
>> Great.
Thanks very much.
That sets us up for the next question.
Mike, you have been a proponent of measuring how good firms are
at managing their risks and their data, and you have proposed a
measure for that purpose and have been working on it.
So the question I have for you is: can you discuss how the use
of a consistent measure of competency in managing data can
reduce systemic risk in the industry?
>> Certainly.
Let me actually do that by looking under the covers at what I'll
call the harsh and sometimes ugly reality of the data
manufacturing process that really exists, and what we need to do
about it in order to trust the data that comes out of it.
I also put that in the context that the organization I run was
founded by the financial institutions, way before the crisis,
specifically to look at how you get control over your data
infrastructure -- the core factors that drive all their
activities.
And I want to pick up on what you described as the outcome of
the senior supervisors report, because it's a very important
report, and I encourage everybody to read it.
It was the last one; they authored it, I think, in December of
last year.
And it established, in my opinion, a very clear relationship
between data management -- how you do it -- and what it means to
have a risk appetite, how important that is, and how these two
things relate to each other.
And they nailed it, I think, pretty well.
They talked about the silos that exist and the difficulty firms
have unraveling the fragmented environments within these
financial institutions, built up over years and through
acquisitions, et cetera, et cetera.
They talk about uncoordinated I.T. projects.
They talk about how you need to have a multiyear collaboration
between I.T., operations, business requirements, and data in
order to achieve these activities.
They talk about the importance of having what I call
top-of-the-house mandates -- what we all refer to as
governance -- that control how these processes unfold.
And the need for more metrics, as you described, Dessa; I put
that in terms of accountability.
And the lack of standards that are essential if we want
comparability, automation, elimination of manual processes, et
cetera, et cetera.
And as I look at that, without those things -- without the
standards, without the processes -- firms are going to have a
very difficult time achieving their ability to link instruments,
right?
Instruments, in all of their contractual complexity, with the
entities we do business with, and all the roles and ownerships
and hierarchical structures, tied to the holdings you have in
your portfolios.
In essence, that is the raw material that we use for systemic
analysis.
It is our ability to create those links and relationships,
overlaid with any kind of evaluation of macroeconomic activity.
And there's a hard time automating things, because you're doing
manual reconciliation.
And I kind of summarize this as: it is very difficult for us to
meet the prime directive from a data perspective.
And that prime directive is simple.
It is really to deliver to business users and to regulators data
they have trust and confidence in, that is exactly what they
expect it to be, fit for purpose, without reconciliation.
And our whole goal is to try to achieve that prime objective.
And then you have the question of why we're in this mess, and
how data got so fragmented and disconnected.
I think those are straightforward.
Managing data -- not managing I.T., but managing data as
meaning, where precision and granularity matter -- is new.
We're all making this up as we go and learning the processes by
experience.
It's also hard to understand.
It's hard to measure.
It's hard to articulate.
It's hard to put into a spreadsheet metric.
It is, in fact, boring back-office work, which has been called
the ugly stepchild of I.T.
It's complex.
There are lots of components to it.
There are lots of strange, nuanced areas: data workflow
processes, and identifiers, and data quality, and attribute
meaning.
What is the system of record and how do you manage it?
And the list goes on and on.
Finally, of course, it's expensive and hard to do.
And it's hard to do because we have systems that are fragmented
and siloed, and because we look at our primary competition as
being down the hall, not necessarily across the street, in the
way financial institutions operate.
So what happened was we were documenting, for the industry, what
firms were doing to try to overcome this challenge.
How were they approaching it?
How were they solving it?
What kind of governance did they have?
As we started documenting it, the industry kept saying, Mike,
you're trying to create the capability maturity model, or CMM,
for data management, and we think that's very important and
needed if we're going to have a way of evaluating the work that
we need to do in order to fix all the stuff that's been created.
So I went and figured out what that was -- figured it out with
Carnegie Mellon University.
They are the owners of the CMM process through their Software
Engineering Institute.
We had complete alignment on this activity.
We realized there was a missing component in the things they had
done.
So we arranged for a partnership, now about two, two and a half
years old, to sit down and document at a very specific business
process level everything that financial institutions do to
manage data, and then how you evaluate the levels of maturity,
the capabilities, the kinds of things that you have to produce
as evidence to demonstrate that you can achieve that.
This happened to coincide with regulators sending to many of our
members what were known as their matters requiring immediate
attention reports.
Those talked about operational risk.
They talked about data challenges, and they were asking the
firms to do things that were hard to measure: improve your
governance, improve the quality of your data.
It's hard to measure those broad ideas, and the industry had a
hard time meeting the requirements of these reports.
So they came to us and said, let's expedite this work we're
doing with Carnegie Mellon and get the model in place.
In fact, we're going to create more stringent requirements than
the regulators would impose, requirements that match what we do.
For the most part, the industry understands the challenge of
fixing this to improve their own business operations.
So we assembled a core team.
The core team has been working for the past 18 months, three
days a week, to try to look at these processes: I.T. people,
strategic consultants, and a bunch of assessors.
It basically boils down to four big challenges.
The first is data management strategy.
That's governance.
That's your funding model.
That is how you get alignment across your organization.
That's the business case, if you will, for data management; data
management is run by governance.
The second part of that is operations.
How do you understand requirements?
How do you translate those requirements into activity?
How do you create the policies and procedures and standards, and
unravel your data flows?
That's all the nuts and bolts of it.
That's the operations component of data management.
The objective, as you rightly point out, is data quality.
It is not one thing; it's many things.
You have to figure out how to understand what those are.
And you have profiling, and you have all kinds of techniques to
clean data, to try to get an accurate and comparable process.
One of the goals is to shift it around: to move from scrubbing
data to fixing data as a product that is manufactured, right?
It has an origination point.
It goes all the way through the process.
We're all struggling.
It seems ridiculous to go through that cleaning process
constantly, every time we go through a transformation.
Of course there's an I.T. requirement.
Most people think of it as an I.T. problem; it is really a
business problem with an I.T. partnership.
But you do have to work with your platforms and your semantics
and your messaging infrastructure in order to make that process
flow.
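As a minimal sketch of the rule-based profiling that the data-quality component involves -- with invented field names, thresholds, and records, not any firm's actual checks -- one might test a few dimensions such as completeness, validity, and timeliness:

```python
# Hypothetical data-profiling sketch: score each record on a few quality
# dimensions. Field names, rules, and records are invented for illustration.
from datetime import date

records = [
    {"lei": "HYPOTHETICAL00000001", "notional": 1_000_000, "as_of": date(2012, 2, 1)},
    {"lei": None,                    "notional": -5,        "as_of": date(2010, 6, 30)},
]

def profile(rec, today=date(2012, 2, 2)):
    return {
        "complete": rec["lei"] is not None,                # required identifier present
        "valid": rec["notional"] is not None and rec["notional"] > 0,
        "timely": (today - rec["as_of"]).days <= 30,       # stale data gets flagged
    }

for rec in records:
    print(profile(rec))
```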
So at the end of the day, I think people buy in to this concept:
if you can't measure it, you cannot manage it.
And there was no way of measuring it.
So the industry got together to create this consistent way of
measuring it.
They're going to offer it up as a standard we should all be held
to.
That way the regulators, who need some assurance that what they
are getting is trustworthy, have a way of evaluating the
competency of the financial institution.
And at the same time, it gives an operational road map that
says: here's what we need to do, and here's the metric that we
want to achieve for our own business operations.
And I believe those two things will come together.
>> Okay.
Great.
We're very excited about it.
If you haven't read it, it's on the EDM Council website.
We will make some copies available after the conference as well.
But it is important, and I'm a big proponent of it as well.
This might be a measure that could become another risk
measure -- an operational risk measure against the data.
>> It is the best practice we're trying to present in a new
area.
>> It gives a benchmark against which you can evaluate.
Next.
Thanks, Mike.
Anna, you have a very unique perspective.
You have to monitor data across markets in all kinds of
conditions -- good, bad, and during crisis.
So I just wanted to know if you could comment on that and give,
from your perspective, some areas that are important to you and
that you think we should be working on.
>> Great.
Thanks, Dessa, and thanks for the opportunity to be here today.
I guess I officially represent the FMUs, the financial market
utilities.
To give you context really quickly, what I would like to do is
really make a pitch.
And the pitch I would like to make is to use the market
utilities that are in place today to build some of these
frameworks that you're looking to build.
So that's my punch line up front, in the interest of time.
We operate 24 markets, five clearinghouses, and five CSDs --
central securities depositories -- here in the U.S. and in
Europe.
So our reach is pretty extensive.
We provide technology across the full transaction chain in 70
markets and 50 countries around the world.
So I would say we have a pretty expansive view, and an opinion
on how we can use these utilities to build the environment that
we're looking to build.
I also agree strongly with some of the comments earlier this
morning about how important it is to get the data as close to
the source as possible, leveraging transactional data to improve
the quality of that data.
Even if I limit it to what we're doing in the U.S. and look at
the equity markets, that's a model.
Yes, we can talk about some of the complexities -- derivatives,
OTC, and some of the documentation that is in paper formats
rather than machine-readable formats.
Those are all practical considerations we have to solve along
the way.
But if you look at the model in the equities markets and what we
do on a real-time transactional basis, we are linked.
We have to be linked, by regulation, so we can ensure our
customers get the best price.
In order for us to ascertain what that best price is, we have
our data.
So the national best bid and offer data is collected from all of
the different public market venues.
By the way, we all have our own separate formats, because all of
us are competing with one another; we're creating new and
distinct order types, et cetera.
But we all have to conform to a standard to feed into the
utility that aggregates this on a real-time basis and
disseminates that feed to all the markets, consumers, and
individuals around the world.
So that's an example.
It's happening on a real-time basis, where you can combine
standard formats with proprietary formats to achieve that need.
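A toy sketch of the aggregation step just described: quotes arrive from several venues, are normalized to one schema, and the national best bid and offer is computed; the venues and prices are invented:

```python
# Hypothetical example: compute the national best bid and offer (NBBO) from
# normalized per-venue quotes. Venue names and prices are made up.
quotes = [
    {"venue": "VenueA", "symbol": "XYZ", "bid": 10.01, "ask": 10.04},
    {"venue": "VenueB", "symbol": "XYZ", "bid": 10.02, "ask": 10.05},
    {"venue": "VenueC", "symbol": "XYZ", "bid": 10.00, "ask": 10.03},
]

def nbbo(normalized_quotes):
    """Best (highest) bid and best (lowest) offer across all venues."""
    best_bid = max(normalized_quotes, key=lambda q: q["bid"])
    best_ask = min(normalized_quotes, key=lambda q: q["ask"])
    return best_bid, best_ask

bid, ask = nbbo(quotes)
print(f"NBBO: {bid['bid']} ({bid['venue']}) x {ask['ask']} ({ask['venue']})")
```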
The other aspect -- let me use the flash crash as an anecdote
for this.
When you look at what happened, and at understanding the root
cause of what happened, each of our markets was able to deal
with the issue.
We maybe had some different ways of dealing with it eventually,
which is a separate discussion.
But we all had the ability to understand what was going on in
our markets in real time through monitoring and surveillance
capabilities.
What we weren't able to do as an industry was have a holistic
view across the equities market -- the 40-plus venues that
people were trading on, both lit and dark -- and the futures and
derivatives markets, and understand systemically what happened
to trigger the chain of events that happened on May 6th.
So as a result of that -- and I know it was in progress
before -- initiatives such as the SEC's consolidated audit trail
initiative, work that we talked about earlier, are all important
building blocks for us to be able to understand systemically
what's going on.
Not a day from now, or a week from now, or six weeks later -- as
close to real time as possible.
We can debate real time versus next day.
But the point is, you want that data accessible when you need it
to resolve the problem you're trying to resolve at the time.
The other trend that we're seeing -- and we actually look at it
as a business opportunity for ourselves -- is that as regulatory
changes are being introduced, especially around the area of risk
management, we're seeing our member participants make efforts to
consolidate their data from the various silos they have in
place, in order to aggregate it and do risk management.
And so we have two key areas that are the fastest-growing areas
of our business, quite frankly.
One is on the pre-trade side.
As of yesterday, we introduced phase two here in the U.S.
What that is forcing firms to do is, on a pre-trade basis, look
at various risks and considerations to determine whether that
trade can take place or not.
And we'll start seeing more of that pre-trade analysis and that
trend happening in other asset classes as well.
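A simplified sketch of a pre-trade risk check of the kind described -- the limits and field names are hypothetical, not any venue's actual rule set:

```python
# Hypothetical pre-trade risk check: evaluate an order against a few limits
# before it is allowed to reach the market. Thresholds are invented.
LIMITS = {"max_order_value": 5_000_000, "max_shares": 100_000}

def pre_trade_check(order):
    """Return (allowed, reasons) for a single order."""
    reasons = []
    if order["shares"] * order["price"] > LIMITS["max_order_value"]:
        reasons.append("order value exceeds limit")
    if order["shares"] > LIMITS["max_shares"]:
        reasons.append("share quantity exceeds limit")
    return (not reasons), reasons

print(pre_trade_check({"symbol": "XYZ", "shares": 200_000, "price": 30.0}))
```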
We talked about collateral management earlier.
That's another example where some of that collateral management,
commingling with risk management, will start happening more and
more pre-trade, on the front line, rather than post-trade.
So, again, if you think about that trend, if you can figure out
how to capitalize on regulatory requirements -- being very
specific and being as close to the source, as close to the
transaction, as possible -- you will benefit and naturally get
the standardization that you're looking to achieve.
So, practically speaking, the more you can, through regulatory
coordination and requirements, define those needs up front, the
more you allow us to build them into the transactional systems
in the firms and in the market utilities and clearinghouses that
are natural aggregators of this data.
Another quick example -- and I know we're running out of time --
is a trend we see in the surveillance arena.
We acquired a company a little over a year ago called SMARTS, a
world-leading surveillance technology for regulators.
We provide it to seven regulators around the world, and to
markets themselves.
And the fastest-growing part of this service is to brokers, for
their compliance.
What's interesting -- and I can use markets like Australia and
Canada, for example, which are much more mature users of this
technology, and it's not just equities -- is that because we're
doing the analysis to look at these trends, we're actually
seeing the market integrity of those markets improve.
Because you have a common -- you have separation of data and
views, et cetera, but you have a common nomenclature, a common
set of expectations for what is required in terms of risk
management.
So, again, that's another example where you can use standards to
advance the type of things that we're talking about here and
that the OFR is trying to do.
Last but not least -- and this is perhaps a little
controversial -- I definitely believe in standardization and
homogenization of data.
We have to leapfrog what we have in place today.
I have two pieces of commentary.
I guess I should have started off by saying this is all my own
view and not that of my organization -- a belated disclaimer.
But two things.
One is the reality that, if you look at it from a time-to-market
perspective, this is going to take a while to do.
So I think it's important that as you build your data
architecture, and as you consider what that data platform needs
to do, you really need to assume that your data -- whether it's
data, metrics, tools, whatever -- is going to be sourced from
various and disparate sources.
We're talking about traditional data here.
But when you start thinking about unstructured data, and what's
being captured in blogs and tweets and videos and documents, et
cetera, that compounds the problem even more acutely.
So, again, think about the reality being that it's going to be a
distributed environment that you're going to have to manage.
So an integration layer that can do some of that homogenization
for you will be a very critical building block.
Last but not least, as we leapfrog and build in new reporting
requirements, let's remember the old ones if we can.
Because this is a cost to all of us, and this is a cost to the
markets.
One of the biggest success factors, I think, would be if, in
building these new requirements, these new data feeds and
reporting mechanisms, we can sunset some of the older ones and
reduce our cost structure.
So I'll leave it at that.
>> Okay, thanks.
Those are very good perspectives for the OFR to hear.
We're very conscious of some of the concerns out there.
We do have a data committee, and we do meet regularly.
And one of our big projects is looking to see what information
we have internally to share first, before going out -- being
very conscious of that, and very judicious in terms of where we
are going to go out and request information -- and trying to
create standards internally as well.
So next, and last, is Philippa.
From your end, in managing operational risks and controls, one
of the questions I had -- and we're looking for suggestions from
your viewpoint -- is this: how can regulators gather enough
information about operational risks to meaningfully measure
threats to stability, while not overburdening firms?
So from your standpoint, how do you see this?
And is the information complete, or how do we make sure the
information is complete, so we can look across markets?
>> I will start by saying this is my view and not a specific
Citi view.
There are a lot of interesting aspects to this.
One is that in operational risk we are used to dealing with
uncertainty.
We have never fully believed in our data in operational risk.
We only have a little data.
It's sporadic.
Sometimes the quality of the data isn't that good.
Yet we're supposed to come to conclusions about our risk
exposure.
So I think operational risk has something to offer in this
discussion about how you deal with not having the ability to
look at the data that you want, and possibly having to
extrapolate from the data you have.
I think, to the point earlier about the precog in "Minority
Report," you could say the BIS did have a precog moment when
they came up with Basel II.
They said it's all very well that you manage market and credit
risk, but you need to start managing operational risk.
And it sounds like a list of reasons why you should manage
systemic risk.
They said things are becoming more complex.
They were concerned about the complexity of products and the
complexity of markets.
They were concerned about the level of acquisition and
divestiture, and about the fact that you might outsource some of
your processes but you still have the risks -- you just don't
control them anymore.
They were concerned about the fact that technology is moving so
quickly that something can move swiftly through the industry by
automation.
And in 2004 they said the way we will manage that is by having
operational risk management.
I confess we did not stop those things from happening.
But we do actually look at all of those aspects of the financial
industry and the financial firm.
So when we're looking at managing risk, we're trying to identify
the risk.
When we find it, we try to assess it.
When we assess it, we try to mitigate it, and then we try to
control it.
We want to identify it; when we find it, we want to be able to
say how big it is; and if we then find it's large enough, how
are we going to mitigate and control it?
We have tools in operational risk where we do exactly that
process around very soft, qualitative data.
Now, specifically to your point: how do we find data that
matters, and how do we not overburden the entities we're
requesting data from?
I can quote Einstein here -- I found out two months ago it was
Einstein.
Not everything that can be counted counts, and not everything
that counts can be counted.
The danger, and we know this very well in operational risk, is
that you believe your data is the whole story.
Your data is only the story of the data you're able to collect,
and it's only as good as the quality of the data you've
collected.
There are seven different categories.
To Charles's point earlier, we're looking for losses that result
from failed or inadequate people, processes, or systems, or from
external events.
That's the way this manifests itself, and you'll hear systemic
events embedded in all of these.
We have internal fraud and external fraud.
Fraud, for example, is somebody lying about the quality of their
credit, or somebody trying to get into your systems in a
cyberattack.
Of course, if we have a system that people can hack into, that
exposes us to danger.
Selling complex derivatives to grandmothers in Omaha -- that is
going to open you up to legal risk.
Most of the large operational events are huge legal settlements.
They often give you a taste for where the risk is: robo-signing
might be something we think of there, or subprime.
And execution, delivery, and process management: what can go
wrong from the moment you try to settle and finalize that trade
in your book?
Anything that breaks along the way.
You can see there's an amount of systemic risk there.
Then employment practices and workplace safety.
This is people.
In operational risk we care about what's happening with our own
staff.
Can we be sued in a class action lawsuit?
Do we have a safe working environment?
What is the actual dynamic of the industry around people?
And then terrorist attacks and weather catastrophes.
And finally, business disruption and system failures: what if
the whole thing comes crashing down in a flash?
So those are the seven elements that operational risk looks at.
We are looking for data that helps us monitor all seven things.
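The seven categories walked through here correspond to the Basel II operational loss event types; listing them explicitly makes the taxonomy easier to follow:

```python
# The Basel II operational risk loss-event categories referenced in the talk.
from enum import Enum

class BaselEventType(Enum):
    INTERNAL_FRAUD = "Internal fraud"
    EXTERNAL_FRAUD = "External fraud"
    EMPLOYMENT_PRACTICES_WORKPLACE_SAFETY = "Employment practices and workplace safety"
    CLIENTS_PRODUCTS_BUSINESS_PRACTICES = "Clients, products and business practices"
    DAMAGE_TO_PHYSICAL_ASSETS = "Damage to physical assets"
    BUSINESS_DISRUPTION_SYSTEM_FAILURES = "Business disruption and system failures"
    EXECUTION_DELIVERY_PROCESS_MANAGEMENT = "Execution, delivery and process management"

for event_type in BaselEventType:
    print(event_type.value)
```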
So, to Dessa's question: what data can we get?
That's actually where it gets really difficult, because we need
data from every corner of the firm, front office and back
office, from every individual in the firm.
So if you have 60,000 people -- Morgan Stanley, for example --
how would you gather data on whether everybody is behaving
appropriately under the rules?
If there is something systemically wrong in your organization to
do with a particular regulation, is there something systemically
wrong in your organization in the way you're trading a
particular product or interacting with a particular group of
retail customers?
The way that we address it is through the several ways we can
get data.
We look at what's gone wrong.
What is our loss history?
We identify losses when they happen.
We track them.
We monitor them.
We learn from them.
We try to see into the future from them -- though they are not
an indication of the future, they are an indication of the past.
And what metrics could tell us about our exposure to the
potential risk in any of these categories?
A lot of work has been done to find metrics that correlate to
losses, and it has been extremely challenging.
It is very difficult to find metrics that do actually correlate
to losses.
This is where we really had an aha moment in operational risk,
which is that you don't always have to prove a correlation
between a metric and a loss.
You know, from good business management and an understanding of
the department, that this matters.
So if sales are going up exponentially, you care.
You know that your control environment is weakening.
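A minimal sketch of that key-risk-indicator logic: no statistical proof of a link to losses, just a flag when an indicator (here, month-over-month sales growth) moves past a threshold that business judgment says matters; the threshold and figures are invented:

```python
# Hypothetical key risk indicator (KRI) check: flag months where sales growth
# exceeds a judgment-based threshold, as a proxy for control-environment strain.
def kri_alert(monthly_sales, growth_threshold=0.5):
    """Flag any month whose sales grew more than `growth_threshold` vs the prior month."""
    alerts = []
    for prev, curr in zip(monthly_sales, monthly_sales[1:]):
        growth = (curr - prev) / prev
        if growth > growth_threshold:
            alerts.append((curr, round(growth, 2)))
    return alerts

# Invented monthly sales showing a sudden jump the controls may not keep up with.
print(kri_alert([100, 110, 120, 250, 600]))
```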
So from that perspective I would urge the OFR and the industry,
as we start to look at what data we should collect, to take a
pragmatic approach: what data can we get, what can we count?
Does that data actually tell us something real about our risk?
And let's apply intelligence to that, not just statistical
correlation.
And what do we wish we could get that can't be counted yet, how
much would it cost to go back and get that data, and is it worth
getting?
If it is, let's make the effort and go and get it, and be
thoughtful about how we use that data to come to conclusions
about our risk exposure.
One analogy I like to use: if you imagine the collection of data
as really putting information on the dashboard of your car, you
would be appalled to drive a car that didn't have a speedometer,
didn't tell you you were low on gas, or didn't have the little
light that says your oil tank is empty.
This I know from experience.
And as time goes by, you get more and more sophisticated cars
with more and more sophisticated systems, and we have more and
more information on that dashboard.
We rely on that so that we know we're driving in a safe manner.
But we are responsible for driving safely.
You can still drive like a maniac and have a really good
dashboard.
So there's a responsibility for people to drive in a safe
manner, and to make sure they have an appropriate dashboard and
improve it as much as they can.
That is not going to stop you getting hit by a truck.
You can still be hit by a truck with a safe driver and a
fabulous dashboard.
So you need to have insurance.
We need to find out what we can measure, how it helps us, and
what we need to do to be safe -- by holding appropriate capital,
by insuring our exposure, and by driving safely.
>> Okay.
Thanks.
It is very helpful to hear from some of the different sectors of
the market.
In our workshop this afternoon we will drill down a bit deeper
into these areas and propose different measures.
And we have asked each of the participants to go a step further
and say, okay, let's take a couple of these things, go deeper,
brainstorm about them, and get some reporting back.
So I think we probably have about five minutes for questions.
I know we started a little bit late, but I wanted to throw it
out to the audience if you have questions.
>> So I'm very confused about operational risk.
Let me contrast the uses of the term, because I think it's
important that we all understand what the term means.
What I thought I heard Charles say is that we should use that
term as an umbrella way of taking a broader view of how to do
risk management.
And then Philippa was using the more standard Basel terminology,
which I think of as referring to a subset of the types of risks
that risk managers deal with.
And so, taking Charles's view, it seems to me what the panel is
suggesting is that we should think of risk management failure as
a type of risk: if the risk function does not do its job in
terms of informing the management of the firm about the risk
that it's taking, in Charles's terminology that's an operational
risk failure.
That's fine.
That's just saying operational risk is everything that could go
wrong with the firm.
But it seems like data is too narrow a lens.
It seems you want a revolution in how we think about managing
risk, not just measuring risk.
So if anybody could talk a little bit more about what needs to
be developed conceptually -- not just data; we have jumped to
the data we're going to get out of our systems, and we kind of
bypassed how you get people to think differently, conceptually,
about what it's about -- that would help me.
>> Let me pick up on a point that Philippa made.
First of all, operational risk in the Basel framework is one of
a particular string of risks.
In its implementation, I think people struggled with whether a
failure of the risk management process in credit risk management
or interest rate risk management should be given to operational
risk managers to think about, or left as a job for credit risk
managers.
And I think, because of the established systems in financial
institutions, it stayed within credit risk.
Operational risk managers -- many of those who did operational
risk management -- feared they would not be able to control it,
but that when something went wrong they would get blamed.
So this wasn't a very comfortable situation to be in.
But I think today that's probably where it is.
As a practical matter, credit, market, interest rate, and
liquidity risks are all managed separately.
What Philippa addressed is the way institutions think about it
in a practical way.
The other point, which I think is very telling, is that
operational risk is an analog for what you face in the world of
systemic risk: rare events, not very good data, uncertainty, and
many things you would like to understand, like behavior, that
are very difficult to get data on.
Culture matters a great deal.
And you do try to manage culture and behavior, even though you
don't have metrics on them.
We have analogs there that we shouldn't lose sight of in
thinking about systemic risk.
There's Watson, and the advances in ways of thinking about
problems with artificial intelligence.
I remember visiting a research establishment once where there
was a demonstration in which they could bring in data from their
car park to assess the chances there was going to be a crash.
They looked at data gathered from the security cameras -- the
color of vehicles (red cars crash more than other cars),
pedestrian flow.
They had data being collected in different places and were
integrating it all into thinking about the danger of a crash.
One of the things we have to worry about is that we don't just
focus on transactions, even though that's the thing there is a
clear mandate to look at, and we all understand the granularity
we can get into there.
It is very important in assessing the sorts of soft issues
operational risk managers have been familiar with for a long
time.
>> And I think Francis said up front, in a previous panel, that
there are a lot of things to worry about, and you can't fix them
all.
But one of the things you can fix is baseline, building-block
data.
It's something we have an opportunity to fix, and I think our
short-term, immediate motivation is to do what we can.
>> And I would like to add to that: I think on the definition of
operational risk we're on the same page -- what we mean by
operational risk.
And the point that I was making is that operational risk has
lots of the attributes of systemic risk management.
One of the things that we have learned is that data is one piece
of managing operational risk.
It's a very important piece, and it has to be done right and has
to be understood.
So you need to make sure you're working on the quality of your
data and understanding it, but data alone won't tell you, well,
is there a fire burning?
The way we manage that is we look at our experience, we look at
our metrics, we conduct qualitative assessments on a regular
basis, and we do scenario analysis, where we sit around and have
big thoughts about what can go wrong.
And all of those pieces together the OFR has: the think tank and
the data.
You put these together and you get real systemic risk
management.
I thought my role was the worst job in the world.
Then I decided systemic risk manager is the worst job in the
world.
>> I want to mention one thing.
It's been mentioned across all the panels today.
When we talk about new products, do we have the people to
process them in the market?
Think about it from the overall perspective, or the OFR
perspective: what is data capture like across the market?
If we have new products that are complex, can we create the
linkages from the derivative to the underlying?
Are we looking at it the same way?
Are we looking at classes of securities in that way, right?
That involves the process -- the data stewardship.
Stewardess.
Sorry.
I didn't get much sleep last night.
But on that: do we have the right process?
If we could measure data maturity, and if one company does it
better than another, could they get relief on their capital
requirements?
Now, again, this is not the Treasury view; this is just throwing
some things out there.
Think about it from that perspective.
We can't look at just what happened in the past.
We have to start looking at things differently.
So new markets.
Maybe there are some geographic areas.
We have to think about it.
As we're getting the data, is it good or not good?
We can say it's a problem, but what are the indicators?
>> There are sticks that can be used to focus attention on areas
where it might not otherwise go.
That's a good idea.
>> Can I offer some negative allocations?
The better we do with our maturity model, the more we can
deduct.
>> Okay.
Any other questions?
I think we have time for one more.
>> There are a lot of things to worry about.
Here's one more.
Data are not static.
A risk is using outdated data.
Data start somewhere: people put it in, they make changes.
Talk to me about bitemporal data and how firms might be working
with data in that way now.
>> One of the attributes is timeliness.
It just becomes one of the components you manage in terms of
data quality.
You have change management procedures.
Those get documented.
That's how you keep your systems and records in alignment.
There are mechanisms to do these things.
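A minimal sketch of the bitemporal idea behind the question -- each fact carries both a valid time (when it was true in the world) and a transaction time (when the system recorded it), so corrections append rather than overwrite; the schema is illustrative only:

```python
# Hypothetical bitemporal record-keeping sketch: each fact has a valid-from
# date and a recorded-on date, and corrections are appended, never overwritten.
from datetime import date

history = [
    # (entity, attribute, value, valid_from, recorded_on)
    ("FundX", "rating", "AA", date(2011, 1, 1), date(2011, 1, 2)),
    ("FundX", "rating", "A",  date(2011, 1, 1), date(2011, 9, 15)),  # later correction
]

def as_of(records, valid_date, known_on):
    """What did we believe on `known_on` about the value in effect on `valid_date`?"""
    candidates = [r for r in records
                  if r[3] <= valid_date and r[4] <= known_on]
    return candidates[-1][2] if candidates else None

print(as_of(history, date(2011, 6, 1), known_on=date(2011, 6, 1)))   # 'AA'
print(as_of(history, date(2011, 6, 1), known_on=date(2012, 1, 1)))   # 'A' after correction
```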
>> I think the question was about the durability of
classifications and systems -- the data schemata.
And unfortunately this is a point that is difficult to make in
policymaking debates: not only do you get more durability, but
you also get more practical flexibility as we develop greater
capabilities to build machines with semantic intelligence.
>> And we have to resist trying to force things into
predetermined boxes.
Just actually describe them for what they are, and describe them
at their basic element level.
And then you can construct them as appropriate.
>> Another quick point: we have a long history of popping
intelligence into these things and regretting that we did.
>> that's a perfect example.
So I think we're out of time.
So on that note, what I'm going to do is thank our panelists for
a lively discussion.
[ Applause ].
just sit tight for a few minutes.
My colleague Mark Flood is going to come up and talk to you
about the afternoon session and what to expect there.
>> Thank you, Dessa.
I'm with the research group.
I came up with the circus animals analogy for us.
To describe the afternoon session: you'll see on the back page
of your agenda that we have six concurrent workshops, breakout
groups, for the afternoon.
I won't describe them further because you can read.
They map, schematically, to the morning panels -- systemic risk
assessment, data framework, and operational risk -- and each of
the breakout groups on the back matches one of the themes in the
panels.
We'll break out into three subsets and have folks divide this
room into two pieces: the systemic risk assessment workshop will
happen over here, the data framework session will happen over
here, and the operational risk session will happen upstairs.
We'll describe how to get there in a second.
The purpose of -- or the main goal for -- these sessions is sort
of a win-win goal.
We assembled all these really smart, really experienced, highly
skilled people, and from that perspective we want to take full
advantage of that fact.
It's a really exciting opportunity for us.
For you, it's a chance to get your voice heard.
So we are going to reverse the engines in the afternoon
sessions: instead of having a small group at the front of the
room speaking to the rest, we're going to have a small group at
the front of the room -- the moderators -- listening.
And your job is to speak, so please speak.
We will do a couple of things to facilitate that.
One is that we will work under Chatham House rules.
If you're not familiar with them, it means the thoughts that are
expressed in the sessions will be captured and can be related
afterwards, but no attribution will be given.
So we have a gentleman's -- and gentlewoman's -- agreement to
focus on the ideas but not repeat back who said what.
The press will also not be present this afternoon, so there will
not be that sort of recording.
On the other hand, we will have scribes present.
I will be one of those folks, and the moderation team will be
involved in that.
We will be taking notes, because we want to not just listen
briefly to your wisdom; we really want to capture it and
preserve it going forward.
in terms of logistics, lunch is available outside.
it's where the coffee used to be.
it's a boxed lunch.
And to recover our schedule, we're going to ask folks to bring
your lunches to the breakout rooms, and ask the moderators to
please begin on time.
Although we don't want people to speak with their mouths full,
we will understand if you do.
Breakout room C is the Yellowstone room.
It's a little bit of an intelligence test to get there.
So sort of the smartest of the smart will be in the operational
risk session.
To bring the average down a little bit I'm going to give you
a little hint.
Yellowstone room is on the second floor, the far corner of
the hotel.
And you will be tempted to take the escalator.
Do not do this.
You want to skirt around the escalator, go back to the
recesses in the corner of the building and find the elevator
and take it up to the second floor.
And then there's sort of a winding hallway.
Follow it around.
Yellowstone room will be on your right about halfway down.
That's it.
Thank you very much.
[ Applause ]