Multilevel Interventions in Health Care Conference: Presentation by Steven Clauser, PhD

Uploaded by NIHOD on 05.05.2011

>>>DR. STEPHEN TAPLIN: So right now we're going to hear from
Steve Clauser, who will give an overview of the two days. And then we're
going to break up and have a discussion at the tables.
When we break up for the discussion at the tables,
look around. We want to have at least four people at a table
for a discussion. So if you're at a table with fewer than four
people, redistribute yourselves. Move among the tables so
that we make sure there's a good group of people at each
table. So we're going to start
now with Steve Clauser. Again, thanks for your patience and
your perseverance. We went through a lot of papers this
morning quickly, a lot of key issues and a lot of
implications for how the world in which we practice and the
world in which we study and understand health care is going
to be affected. I think Dick Scott's comment yesterday about
the permeability of the field and the environment and the
borders is a critical comment and really helpful in thinking
about the challenges that we're facing in genomic and biomarker
delivery. The sale of these things is faster than our
actual understanding of how to use the information. So I
think that we are seeing some really important concepts.
And we're looking forward to pulling these pieces together.
Steve Clauser, my colleague and Co-Chair of this effort, is the
Chief of the Outcomes Research Branch of the Applied Research
Program where he directs the National Cancer Institute's
research program in outcomes research and quality of care.
His research interests are in the development and
implementation of performance measurement of cancer care
delivery, patient-centered communication, health-related
quality of life and patient experience of cancer and the
evaluation of cancer care delivery programs. Dr. Clauser
has worked for the Centers for Medicare and Medicaid Services, the
Department of Defense, the American Medical Association
and McGraw Hill in a variety of health policy research
positions. He received his BA from Michigan State
University and his Ph.D. from the University of Minnesota.
So we've asked Steve to think a little bit about what's
happened overall in these papers and how the
pieces fit together. So, Steve, take it away.
>>>DR. STEVE CLAUSER: Thank you very much. The paper I'm
going to talk about, quite frankly, is not quite a
paper yet. We're a little bit more upstream than a
number of the papers that have been developed so far,
because we really wanted to hold back a little
bit and take maximum advantage of this conference in trying
to really integrate the themes. So most of what I'm
going to talk about is really a reflection of the papers.
Although, I am going to take some time here to reflect on
some of the commentary we've had over the last two days.
And so since the paper isn't written yet, I must acknowledge
my co-authors, because they're going to be very busy with me
over the course of the next few weeks to get going and figure
out how we incorporate all the tremendous information and
input we've gotten at this meeting today. Well, I want to
start where I think Steve started yesterday in his
presentation, to emphasize that this whole research initiative,
particularly for the National Cancer Institute, is really
instrumental in purpose. You know, we're trying to improve
cancer care delivery and we're trying to do it throughout the
continuum of care. And I think the good news is, which we may
not have given as much attention to in this meeting,
is that there is much to build on with intervention research
in cancer. Because we have made progress in building an
intervention science that has contributed to smoking rates
declining, screening rates increasing, and for some
cancer, survival actually improving. But I think for
many, the progress is just considered too slow. And one of
the things I think that's frustrated us, and we've
mentioned several times at this conference, is that journals
tend not to publish our failures. So we're really
missing an opportunity to learn from those failures and
figure out how to do this better. And results, when we
have them are often mixed, you know, especially when we take
something that's successful and begin putting it in a different
context. And third, performance itself doesn't always persist.
That is, sustaining improvement is often a challenge in cancer
care delivery. So I think one thing we were trying to do was
to begin to think about multi-level research as a way
to broaden and deepen our understanding of intervention
science and its potential for us to go beyond the patient and
physician dyad to recognize that there may be some other
factors at play. And we have learned in particular that
context matters. I mean, that word has probably been repeated
more in the last two days than any other word that we've
talked about. And we do have some experience with community
level mechanisms, as Dick Warnecke pointed out in his
presentation and Kurt Stange pointed out, particularly in
areas of tobacco cessation. And I think we really are going to
have to take a look at those. And even though the number
of those studies may be less than a handful, we should really mine
them so we can learn from exactly what's happened in the
community. So at least we can begin thinking about what we
might do in the delivery system. But I think one thing
we all agree on is that two things that are particularly missing
in these types of studies are organizational change
mechanisms and national and state level policy mechanisms
that, as we heard with health care reform, really
drive the potential for
changing the way providers and patients interact with one
another in complex organizational settings. So we
need to broaden our menu of intervention mechanisms beyond
the patient-provider to look at those things that may either
facilitate or inhibit change and improvement in care. And I
think that what we tried to argue over the past two days
and discussed among ourselves here is that multi-level
research does provide that opportunity. Obviously, it
opens our eyes to intervention issues beyond the patient and
provider that have been missing in many studies in this area.
And it also provides a context to think about those kinds of
factors which may be, in Brian's words, kind of
mediators and moderators that allow us to look at the impact
of those interventions in various ways through various
levels, in terms of their speeds of effect, their causal
influences and their direction, and their influence in sustaining these
changes over time. But it also points us to really
think about context. And if there's one thing that I think
we can get out of the work that we've done so far with
multi-level interventions is that we have some messages to
send to the people that are working in that single
intervention arena, that we really do have to get more
granular in defining context, so that we can begin to build a
science even within a single level intervention that will
allow us to begin to look at how these multi-intervention
systems might work. Now, I think that another thing we
learned over the course of the last two days is that this just
isn't an intellectual exercise. I mean, cancer, as well as many
other diseases represented by many of you out there, are
really being hit by a double -- what I would call -- tsunami.
Kelly Devers talked about health reform, and how the
major changes will hit both providers and health systems
simultaneously at multiple levels over the course of the
next couple of years; whether it be coverage expansions and
incentives that play out in terms of such things as
performance measurement, payment reform or coverage.
And combine that with having to deal with these incentive
effects in terms of these new kinds of organizational entities
called patient-centered medical homes and accountable care
organizations, which are really going to change, in a very fundamental
way, in a very complex way, throughout the health care
system, how patients and providers react to one another
to try to produce the outcomes. And with that, as Kelly
noted, come a lot of potentially unintended effects.
Also we've learned that technological innovations, such
as e-health and the EHR, also define the way consumers and
providers will relate to one another in the cancer delivery
space. But I do want to emphasize a point that Paul
Cleary made: this provides an opportunity for us as
researchers to think a little more non-traditionally about
where we can get data. Because through the Internet, through
other kinds of systems that are now being populated with this
kind of information, it may allow us to get the kind of
data that we believe we've been sorely missing, but we have to
look for it in different ways. And, of course, all this is
playing out in a new environment where cost
containment is even going to be enhanced as patients and
purchasers look for value in services. And if that wasn't
enough, Muin Khoury then talked about the second tsunami.
And that is genomic medicine and the impact it holds for
potential changes in the cancer care delivery system. And I
think he made a good point about talking about the
potential of these playing out at various levels in the
system. But even though it's clearly upstream,
we've got to be thinking
about starting to articulate those kinds of
interventions to see "are they unique to genomics?" or can we
draw from some of the other lessons we learned with other
types of reforms that have come our way in the past?
Well, I think another place that we start when we think about
taking a look at this is, what did we learn? And this is
data that comes from Marty Charns and the tremendous work
they've done to try to characterize the literature.
And again, some of the take home points that Marty made
that I think are sobering for us is that first of all,
there's a very small reservoir of MLI research experience in
cancer. And as Marty said, only about one out of five intervention
studies could even be considered multi-level
intervention research. And that depends on whether you look at
it by the intervention target or by unit of analysis. And again,
some of us think that's pretty kind, because we're including
both patient and provider interventions as multi-level
intervention studies. And we keep coming back to that
discussion of whether that's truly multi-level or not. And I
think part of this reflects the fact that most quality of care
research is really largely analytical in this area.
It's not interventional. Interventional studies tend to
be single interventions, single target and most involve
patients and caregivers. But I think we also have a real
lack of work across the cancer care continuum generally.
That is, few studies examine diagnosis and treatment,
surveillance and survivorship in cancer care. Most of the MLI
work is in screening and prevention. Now, coincidentally
it's interesting that when we put out the call for proposals
for poster abstracts, you know, the abstracts we got really
reflected some of the issues we've been dealing with
throughout the conference. From our perspective, at least, few
of the abstracts came through as what we would consider
multi-level intervention studies according to our
definition. And, of course, we reviewed them and we saw that
there were many different uses of the terms "levels" and
"interventions," which we talked about throughout this
conference. But yet, I think if you go and look at the poster
sections and participate today at the reception, the
potential to build multi-level intervention studies from some
of these appears very promising. And so this is my last plug.
Go visit the poster section at the conclusion of this
conference. Now, clearly one thing we have talked about is
complexity of multi-level intervention research and the
challenging nature of doing this kind of research. And talking
to people throughout the day at this meeting, I think I
came to the same conclusion that Paul Cleary came to in
his remarks. Multi-level intervention research is not
rare because it's considered irrelevant. It's just not
conducted because it's so darn difficult, challenging
and sometimes expensive. And these are some of
the main challenges that were noted by presenters
and in comments made during the sessions.
And the first four really deal with some of
the analytic and conceptual issues we had.
But the last two point to the fact that there's also a
cultural issue, a cultural issue for us as researchers in terms
of how we relate to one another in doing this kind of research
and how we relate to our practice partners in making
these kinds of studies happen. So what I'd like to
do is just pose a few questions that I think
may be cross-cutting elements of these
kinds of research, and that may be take-home messages for
us to discuss further as we move forward. First of all,
how can we use theory to guide assessment and selection of
interventions? You know, several authors have mentioned
that theory should drive design. But rarely is it used
to guide intervention strategies, I would add,
unless it's in the clinical effectiveness area, where we
have solid guidelines in cancer care, or to some extent in
social psychology, where we have very good elements. And it's
not surprising because it's at the patient and provider where
a lot of this research has been done. As we move into the
organizational area, how do we think about evidence? And what
is evidence in terms of thinking about an intervention?
And these are really serious issues. Because if we define
these interventions wrong, we can spend a lot of money with
very little gain. And I think part of the problem refers to
the fact that theories differ at different levels.
This was mentioned. You know, psychological theory is usually
emphasized when we talk about individuals. And when we talk
about policy, it's usually economic theory that comes into
play. But also, at the organizational level, it could
be organizational scientists, sociologists, and even
industrial engineers that get into the game. And I think what
makes it more difficult for us is that we tend to focus on
what's familiar when we do this kind of research. Cancer
researchers are more familiar with biology and psychology,
and I would add epidemiology, but less familiar with such
things as management, organization and implementation
sciences. And we've kind of been cautiously optimistic in
terms of this. Because we've been concerned that we really
don't have a unified theory or conceptual framework that
exists that includes all facets of multi-level research to show
how all these interventions at different levels simultaneously
can influence the outcomes that we care about. And here is where
I thought it was very helpful to hear about the field
research that Dick Scott talked about and the concept of
families of theories that Maria Fernandez talked about. But I
really would like to know more about that field theory.
Because I really wonder if part of our problem is because of
our focus -- we are not taking advantage of some of the
theories that we have today, such as the socioecological
model. Now, in the absence of that conceptual and theoretical
framework, we have some other approaches and pathways we're
talking about here. Brian Weiner talked about taking a
practical approach. You know, thinking very systematically
and carefully about how these interventions interrelate with
one another in terms of the target of interest and then
identifying those potential mediators and moderators.
Alexander, of course, coming from a conceptual and theoretical
perspective, also added timing as a major consideration. And the one
point that I would like to talk about here is this issue of
disease trajectory and status of cancer patients. Because one
thing I've seen a little bit in this literature as it begins to
evolve is that we're trying to look for a global answer that
would apply to all cancers in looking at how organizational
interventions may make a difference. But it may be,
because cancer is a very complex disease with
many different types, and you have to think about it in
stages and trajectories, that we might have to look at responses
of specific cancers in patients at particular stages in order
to get maximum value out of this. This adds yet another
complexity to what we're doing. But I guarantee if we begin
articulating it in these terms, this will be better received
by study sections. Again, cautionary advice. Even though
this is team-based work, don't let a single discipline
or stakeholder drive your decision. And researchers
really need to engage our intervention stakeholders in
the research design process. And that really came through
with Yano's discussion today. Now, another question. How do
we measure the relative influence and interaction of
interventions when used as a multi-level intervention
package? Well, I think one thing that I felt pretty
comfortable with yesterday is that most people said that
reductionist approaches that look for that silver
bullet, or look for that silver bullet with a little bit
of additive effect as we add things, may not work as well in
this particular field. Systems thinking may be much more
fruitful, both in looking at the interaction effects of
interventions as they're in place, but also that notion
that was brought out by Alexander in terms of the
importance of sequencing interventions. All that can be
done and thought through in a systematic framework. And it
may give us another framework that we have to begin thinking
about. This also has implications for research
design. I think Paul Cleary and some others really talked
about, you know, we've got to have comparison groups. I mean,
there's no doubt about that. But the thing is that
randomization may not always be feasible or best, considering
the dynamic nature of these multi-level interventions.
That's a little uncomfortable for NIH and NCI, which have used
the randomized design as the kind of sine qua non for
learning about efficacy and effectiveness. Alternatively,
now we're moving to a multi-method qualitative and
quantitative approach. And we're also beginning to think
about simulation modeling as perhaps promising, either as
complementing or a preliminary step to a larger study.
I thought that was a very interesting approach. But I
think, as others had mentioned, it assumes we have the data.
Because you've got to populate those parameters. But still, I
think we all continue to come back to that factor as we think
about these more advanced methodologies, whether they be
structural equation modeling or whatever: the context of the
intervention matters. And so we have to engage our stakeholders
in order to get a clear sense of what that context is.
Another question... What are the relevant methods for
monitoring fidelity and sustainability in MLI studies?
Multi-level intervention research emphasizes, in my mind, from
some of the discussions we've had,
effectiveness and scalability over efficacy and internal
validity. And we're seeing a much greater interest in
pushing forward more flexible designs that evolve as
interventions evolve. And trying to understand those
issues of fidelity is very important. Because we need to
go beyond efficacy to scalability, to look at things
like generalizability and replicability. Those are issues
that are fundamental tenets of science that we have to address
in terms of this field if it's really going to take hold.
We need to address implementation as much as execution. And in some
instances when we hear from Russ, it will probably be
maintenance as well, if we really want to understand how
we can articulate the prospect of these interventions being
sustainable. And I disagree with Elizabeth that
sustainability is a myth. It's how we define sustainability in
the context of these multi-level intervention
studies that I think is important. It's not a static
concept. It's got to be an evolving concept. And it's that
underlying process which drives those changes. Those are the
elements we need to continue to track in those systems as they
evolve and change. Now, I think everyone's mentioned that
this requires longitudinal designs with multiple measurement
points, including, I believe, end points after the study is
completed. So we can go back and see how well these changes
were sustained. Now, there are some other cross-cutting
challenges we haven't talked about that much today,
because we don't get a lot of information on them. Why do
interventions fail, or, if initially successful, become
unsustainable? I think this is a very important question for
us to think about too, because this is expensive research.
And we've got to do a very good job of picking those
interventions right up front. You know, maybe we need some
vital signs, you know, for thinking about intervention
design that not only will help us think about interventions,
but also may create criteria for study sections and others
to be able to make reasonable evaluations of the potential of
these studies. Think about things such as failure to
follow theory or the evidence. Again, as I mentioned, evidence in
clinical effectiveness and in personal behavior is fairly
well-defined. But as we get into organizational
interventions and policy interventions, we're going to
have to think about how we articulate evidence in that
way. I mean, the Cochrane Collaboration doesn't provide a
lot of guidance necessarily on some of these measures.
Failure to consider context. I think we've beaten that
one pretty much to death in the last two days. And the failure
to consider benefits and the costs of these interventions.
Feasibility is a major issue in terms of putting these things
in real world environments. And we really need to do much more
work I think in the micro costing area to really
understand what we're talking about in terms of
sustainability. And failure, of course, to align incentives.
Work by people like Sheila Leatherman shows that
most quality improvement interventions fail
because the business model of that organization becomes
inconsistent with the kind of intervention strategy you had.
You have to understand that business model or provide a new
business model that makes sense for that organization to thrive
and survive. And I would add on incentives. We need incentives
for really good data collection because this requires a lot of
data, and I'll tell you, one of the biggest fatal flaws I see
in many studies that are implemented is that there's not
enough attention to good data collection which is coming from
many different kinds of sources. And one thing that I
think my one addition to this effort over the last two days
is I think we really are missing somebody. You know,
there's somebody missing in this room. And that's called
the patient. You know, we really need to think about
including the patient as an active research partner, even
as we think about moving upstream in the various level
effects, because quality is not just about survival. It's also
about improving patient centered outcomes. And I
think there's a risk that as we move up into the organizational
level, we think that the patient effect is going to
disappear. But there's very strong evidence that that's not
true. First of all, for example, the work that's been
done by the Consumer Assessment of Health Plans Survey (CAHPS) system shows
that, from a patient experience point of view, patients can
identify very specifically when a team is broken or when an
organization is broken. And secondly, we've even done
studies where we show that even environmental design, as an
intervention, can make a difference from a patient
perspective. We did a study one time where we were trying to
identify why, in chemotherapy suites, there was so much
difference in terms of what was going on. And we did all the
communication studies and all that. The intervention that
made the difference was the integration of the patient care
committee with the architectural design engineers
that actually designed those suites in the first place.
It was the biggest predictor. So that is an intervention that's
done at the organizational level that's very patient
directed. Now, I think it's not surprising that over the
last two days a lot of the people that have been sitting
up here have been people from the Department of
Veterans Affairs. I mean, they have made major commitments
both in improving data collection as well as
building practice-based resources that have been a
major source of guidance for all of us that are getting
involved in this field. And I think one of our questions is
should we build these platforms one at a time or do we build
these platforms also from existing resources? And the
only point I want to make is that there have been several
references over the past days that we have many research
platforms here at NCI that could be a potential to help
launch this kind of research, whether they be modelers and
statisticians, as Jeff noted, in CISNET, or provider-based
interventions such as the Cancer Research Network and the
Comprehensive Cancer Centers, which by the way also bring the
academic connection, as well as community cancer centers and
other population resources. We don't necessarily have to start
this work from scratch in every instance. And finally, we also
have some resources that allow us, if we can figure out how to
do this and can work across our divisions, I believe, to
really build the capacity to move this field forward.
We really need to work synergistically to build our
MLI capacity based on our team-based science research
that is now going on: systems science,
trans-disciplinary sciences, as well as participatory research,
to begin to identify those key stakeholders and partners which
will create those learning communities for us to get this
work going. And then we can take from what we talked about
from Dick about shifting those organizational cultures and
norms, to try to provide the groundwork for sustaining this kind
of research actively at NCI. And that involves training and
research and, of course, social marketing to be able to define
this kind of research clearly both to the public, but also to
our boards of scientific advisers and then follow from
that the resource allocation that will facilitate this work.
Well, I'm sorry that in a day and a half I haven't really
closed out all the questions, and we actually have more questions
than answers. But I think it's really important in a field
like this, that's relatively new for us here at NCI, that we've
got to start asking the questions. Because if we ask
the wrong questions and get the wrong answers, we're really
going to be in trouble. And I think one thing that I've also
noted, that comes from the words of a very famous college basketball
coach named John Wooden. And I apologize to all you UNC
people out there that I didn't use Dean Smith. But people
asked him when he was in his nineties ... and he had
accomplished so much -- eight straight championships and
everything -- "You know, why are you still doing this?"
"Because," he says, "when you're through learning, you
are through." And I think if we take that advice in terms of
working on multi-level intervention research, you're
likely to live a long life if you work with us. But
the challenge is for us to figure out how you can also have a
long and fruitful career in this work. And we're here to
work with you to try to figure that out. Thank you.