Multilevel Interventions in Health Care Conference: Q&A session moderated by Ellen Gritz, PhD


Uploaded by NIHOD on 05.05.2011

Transcript:
>>>DR. STEVEN CLAUSER: Okay. We'd like to reconvene and get
going with the last moderated discussion session of the day
today where you'll have an opportunity to ask questions of
our panel. And we're very fortunate to have Ellen Gritz
leading our moderated session. She's a professor and
Chair of the Department of Behavioral Science
and the Olla S. Stribling Distinguished Chair for
Cancer Research at the University of Texas M.D.
Anderson Cancer Center. She's an established leader in cancer
prevention and control research and an internationally known
investigator, she sits on numerous advisory boards,
and she earned her Ph.D. in Psychology from the University of California
at San Diego. Okay. Ellen, take it away.
>>>ELLEN GRITZ: Thanks a lot, Steve, for that lovely
introduction. So I just passed by those dancers again.
I just imagine us all in our sequin gowns and the gentlemen
in tuxedos. We're going to have a rousing 30 minutes here.
We had five wonderful papers, very challenging.
We heard all about the importance of causal modeling,
statistical designs, things I've never dreamt of, all kinds of
simulations, very sophisticated. And then the issues about
measurement and appropriate designs and RCT versus others
as well. So I know that there's going to be lots of discussion,
push back, encouragement, and facilitation from the audience.
You have five papers to draw upon.
So who is my first person?
>>>FS: My name is (inaud.) I am representing table number 7.
Maybe already addressed in the papers,
since we're not sure we think it's important to
highlight (inaud.). We think that the rates (inaud.) should
be taken into consideration (inaud.) interact at all
levels. We also think that it's very important that (inaud.) we
recognize as an important tool for (inaud.) when you look at
the other side of our outcome (inaud.) mortality rates also
in these populations. So for example, at the (inaud.) level
there may be some differences, but at the same (inaud.) levels
are going to (inaud.) and we think (inaud.).
>>>ELLEN GRITZ: So that's another very important point
that came up in these papers, that we can go
below the skin so to speak, to the biological level down
to the molecular and the genetic levels. And those
may need to be modeled in interactions as well.
>>>FS: (inaud.) from the University of Washington, and I
am representing table 17 (inaud.) detailed discussion
about data, and how the availability of good quality
data seems to be so key to be able to do good (inaud.).
However, the collection of this would clearly be very, very
expensive and a lot of us (inaud.) challenging. So the
question that we would (inaud.) to consider is whether
there may be some other mechanisms or
agencies that would be able to do these excellent quality data
collections, which those of us who are doing
research could then have access to. And on a related note,
our table was very intrigued with the computer simulation as
well. However, for those of us who are not engineers or have
had prior experience with computer simulation, we
thought it would be very interesting to get some real
concrete examples, and potentially even a how-to list,
so that we could actually get started
using this important tool.
>>>ELLEN GRITZ: So now we've had two sets of questions,
do the panelists have comments?
>>>DR. MARTIN CHARNS: So around the data question. Table 20
actually had some discussion about that also, and I guess
there are two kinds of things. One is an example,
well both are examples. One's an NCI example
but I left my notes on table 20. What was the thing called,
the wiki about the data? GEM. So one of
the issues is, see, I remembered some of it,
do people even know of the availability of some data and
measures that they could use? And this seemed to be a
vehicle for, I'll use the word disseminating, discussing
information about data and data availability, so that's one
example. And maybe we're lucky in the VA, I see Becky standing
up in the back. Becky's done a primary care survey in the VA
and other people use the data from that. It's a way of
characterizing our medical centers and primary care
clinics. And similarly, we have since 2004 administered an all
employee survey, and so each year we get like 175,000
respondents and measure organizational factors at the
workgroup and medical center levels.
Now, the good news about that is it's a common measure,
similarly for Becky's. It's a common measure across all 140
sites. The bad news about it is, you may not agree with the
conceptual model that we use to decide what to measure.
And because you're always limited in the number of things
you can measure, where you come from with your conceptual model
may not fit the data that we've collected over these years.
But at least those are a couple examples of data being
available about some of the concepts that we're talking
about in terms of these multi-level measures.
>>>DR. PAUL CLEARY: Just to cover this, I apologize, someone else
mentioned the emergence of EMRs, and I think that's
going to be a sea change. I also recommend to people,
Gary King had an article, I don't know if it was the
last issue, but a recent issue of Science, talking about the
wealth of information that is now becoming available.
And it's not traditional information, it's not the
surveys, it's not EMRs. But one example would be
that Google detected the influenza outbreak about a
month before our surveillance systems did. So if we think
creatively about a lot of these issues, there are many, many
more opportunities than have ever existed before.
And there are challenges that Gary identifies, but it's worth
thinking outside the conventional data, you know,
the CMS kind of data, which is incredibly valuable. But the
amount of information out there is almost limitless.
>>>ELLEN GRITZ: Any other panelist comments?
Okay, table two.
>>>FS: (inaud.) previous comment about the need for
concrete examples. We thought that all the papers
were excellent but felt that they would all be strengthened
through some concrete examples tied to the different types
of issues. So I just wanted to reiterate what was just
expressed before. The second piece that we talked about,
and spent a fair amount of time on,
was the issue of efficiency: what
are the different measures, what are the things that we
should be employing, and what are the recommendations from
the authors of the papers, in terms of their multiple
areas of expertise, on what's doable and what
can't be done in the scope of the work that we're doing in
terms of measurement.
So we thought all of the papers would be really
strengthened and helped by having some real concrete
examples about where the field is now and your understandings
of the field, where we should be going and (inaud.),
and what is your feedback on that?
>>>ELLEN GRITZ: Feedback? No one wants to comment?
>>>FS: (inaud.) and I represent table 16 and we were all
very intrigued by the notion of time as the third dimension.
And especially when we saw the flowchart that looked
at family support being critical at some point,
physicians being critical at other points and so forth.
What we want to know is how in the world you capture those
kinds of differences from a logical point of view.
Specifically, developmentally, what do we do? How do you know
when to talk to organizations, when to talk to individuals,
when to talk to the physicians and the providers, and so forth?
So any light you could shed on that would be much appreciated.
>>>DR. JEFF ALEXANDER: My answer is probably not going
to be entirely to your liking. I think one of the points in my
paper is that we should start treating time as an analytic
variable as opposed to something we measure T1 to T2.
End of story. And I think there is a lot of information,
a lot of work that's already out there that would help
inform the sort of questions you're asking.
So for example, in social epidemiology there is
a lot of work that's been done that tracks growth trajectories
in certain clinical outcomes, in certain behaviors among
patients that would help inform, for example, when an
intervention might be applied, to whom it might be applied,
and how that intervention might be likely to kind of
change in terms of its impact over time. So again,
not a direct answer to your question, but I think that
there is a body of work out there that would get
us closer to answering that question.
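To make "time as an analytic variable" concrete, here is a minimal sketch in Python of fitting patient-level growth trajectories and using the estimated slopes to decide whom an intervention might target; the patient counts, rates, and flagging threshold are all illustrative assumptions, not values from the panel or from any study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate longitudinal outcomes for 200 patients over 6 waves; each
    # patient gets an illustrative baseline (intercept) and growth rate
    # (slope), in the spirit of the trajectory work described above.
    n_patients, n_waves = 200, 6
    t = np.arange(n_waves)                 # time treated as an analytic variable
    intercepts = rng.normal(50.0, 5.0, n_patients)
    slopes = rng.normal(-1.0, 0.5, n_patients)
    y = (intercepts[:, None] + slopes[:, None] * t[None, :]
         + rng.normal(0.0, 2.0, (n_patients, n_waves)))

    # Fit a least-squares line y_i(t) = a_i + b_i * t for every patient.
    X = np.column_stack([np.ones(n_waves), t])
    coefs, *_ = np.linalg.lstsq(X, y.T, rcond=None)  # shape (2, n_patients)
    est_slopes = coefs[1]

    # A trajectory-informed targeting rule: flag patients whose estimated
    # decline is steeper than 1.5 units per wave as intervention candidates.
    flagged = np.where(est_slopes < -1.5)[0]
    print(f"{flagged.size} of {n_patients} patients on steep downward trajectories")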
>>>DR. MARTIN CHARNS: I'd like to just piggyback.
It's a little bit of an answer to the prior question as
well as this one. But one interpretation of time is
history, which I think we also have to think about.
And all too often I think when we do studies
that look at organizations or even people we forget
about the fact that there's a history that got them
up to the point where we finally got involved to study
them. So I'll use the organization since that's where
my work is. And a very quick story is, we're studying eight
children's hospitals in the state of Ohio as they're trying
to implement some evidence-based practices around
patient safety. And we discovered in two of these
hospitals where the practice that they're trying to
implement is the use of alcohol-based skin preps, that
these two hospitals weren't getting anywhere. And they're
both in the same city, and the surgeons practice in both
hospitals. Well, we would never know to measure the fact
that a number of years ago they had a fire in the operating
room because the alcohol-based skin prep
pooled and ignited. So nobody there wants to go near the
stuff. And if you were to use measures of organizational
culture or leadership or any other variable I could name,
you're not going to find that. We didn't find that until we
talked to the people about what's going on here. And so
my conclusion from that is that while there are some things
we do know about the effect of context on
organizational change, there are a lot of things that we
don't know, and there are a lot of special causes that sometimes
really mess up our work. And the way to understand and
to help build our models, because I think we're at a
stage where we're still building conceptual models as well as
testing the pieces, is to do some qualitative work. And we
found what we found through interviews. And we also found
the strength of the feeling around that particular issue.
So, in summary: the importance of history, and the importance of
doing qualitative work to be able to identify some very
important things that affect the intervention.
>>>DR. BRIAN WEINER: So two time issues that come up,
which Jeff mentioned and which I think relate well to Joe's
paper on simulation, are sequencing and duration.
So it could very well be that multi-level interventions
are more effective if sequenced in a particular order,
such that some of them come in earlier than others.
And we strongly suspect that interventions take
time to have an effect, and then their effect decays over
time. And that those can vary significantly from one
intervention to the next, or even across different levels.
This is something that we probably need to attend to more
in our research, rather than just measuring effect sizes:
looking at duration, buildup, or decay effects. But it is
possible, I think, with simulation models to play with
time. You can expand time, dilate time, accelerate time,
and so forth. So it is possible at least to test out what your
various assumptions are. If you sequenced interventions in
a particular way, do they have to start one after the other,
could you stack them, could they be staggered, how much
should the staggering be? And if you make certain assumptions
about how long it takes for the effect to build and how long
it's going to decay, how do those affect the various
outcomes that you can get? And I think simulation offers you a
nice vehicle for testing this out; it's essentially doing
sensitivity kinds of analyses to see what those assumptions
might mean in the real world. I don't know,
Joe, if you wanted to add anything to that.
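One way to picture the kind of simulation Dr. Weiner describes is a minimal sketch in Python; the two intervention components, their buildup and decay parameters, and the 48-month horizon are all hypothetical assumptions for illustration, not taken from any of the papers.

    import numpy as np

    def effect(t, start, buildup, decay, peak):
        # Effect ramps up linearly over `buildup` months after `start`,
        # then decays exponentially at rate `decay` (assumed shapes).
        t = np.asarray(t, dtype=float)
        ramp = np.clip((t - start) / buildup, 0.0, 1.0)
        fade = np.exp(-decay * np.clip(t - start - buildup, 0.0, None))
        return peak * ramp * fade * (t >= start)

    months = np.arange(48)

    def cumulative(starts):
        # Two hypothetical components, e.g. a provider-level and a
        # patient-level intervention with different time profiles.
        provider = effect(months, starts[0], buildup=6, decay=0.05, peak=1.0)
        patient = effect(months, starts[1], buildup=3, decay=0.10, peak=0.6)
        return (provider + patient).sum()

    # Compare sequencing assumptions: simultaneous vs. staggered starts.
    for label, starts in [("simultaneous", (0, 0)),
                          ("provider first, patient at month 6", (0, 6)),
                          ("patient first, provider at month 6", (6, 0))]:
        print(f"{label}: cumulative effect = {cumulative(starts):.1f}")

Rerunning the loop with different buildup, decay, or peak values is exactly the sort of sensitivity check on timing assumptions described above.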
>>>DR. JOSEPH MORRISSEY: I agree wholeheartedly.
I just had a couple of other comments reflecting back
on a couple of the points that were raised.
In many respects I think we need to
build some new research infrastructure that doesn't
exist now, and that's why we're struggling with this,
I think. And so there are several things that need to be
thought of. With regard to databases, for example: recently
the Agency for Healthcare Research and Quality, which
is tasked with one of the major responsibilities for
comparative effectiveness research, recognized
early on that databases to do comparative effectiveness
research didn't really exist. So they had to stimulate
the community to begin developing
databases and then make them available to the research
community so that certain issues of comparative
effectiveness could be addressed. So I think a
similar program, thinking about ways in which grant
mechanisms and other things could be targeted to database
development and collaborations, would be a strategy.
With regard to the concreteness, I agree. Again, I think the
strategy there, though, might be more of a workshop orientation,
and I think workshops could be created as another way of
bringing people with the technical expertise and people
working on the clinical and policy issues together to work
on things. It would be a different format than what
we're using today, which is more general and orientational.
But I think that would clearly be another kind of a follow up
step to really demonstrate how to do this. And perhaps in our
smaller workgroup sessions we could be focusing on actually
considering particular variables for particular issues.
The other strategy which is also an infrastructure
one is, you know, this is the role that research centers have
served. And I think one of the best examples of that going
back a number of years is what the Cancer Institute did with
research centers to really elevate community oncology
research and to move it forward. Research centers are
sort of falling out of favor. When Harold Varmus was
director of NIH he was very much looking at the R01,
the independent investigator model,
rather than wanting to invest money in research centers.
I recently heard, just the other day Tom Insel, who is the
Director of NIMH, talking about those continuing kinds of
discussions. In the current economy and so forth, NIH is
really looking in other directions, away from research
centers. But it's very hard, in my mind and in my way of
thinking, to get multi-level research going when
you're funding R01 applications. I just don't
think the mechanism aligns with the challenge given the fact
you've got to be pulling multiple people together and
multiple databases and so forth. I don't think you can do
that with R01 type research. So for some of these
bigger infrastructure kinds of things, if we're really
going to make some progress here, I think we've got to have
an effort that's commensurate with the challenge.
>>>ELLEN GRITZ: Good observation. We've only got
about ten minutes left, I want to make sure that all
of you get to speak. So go ahead.
>>>MS: (inaud.) representing the combined forces of tables
10 and 1. We had an interesting discussion about adaptive
design and (inaud.) comments on that. I think we saw adaptive
design as spanning a spectrum, all the way from a
small RCT in which perhaps you were proposing an (inaud.)
approach to intervention (inaud.), all the way to saying
that the adaptive design itself is the intervention:
that you are allowed to, and expected to, use a lot of
resources and creativity to accomplish your end, and that
is the intervention itself. That might be more
applicable in an implementation study where you're deep
into the politics of the organization and you're
using various means and methods to try to get
your intervention implemented within that setting.
So I'm curious about definitions and understanding of (inaud.).
>>>DR. PAUL CLEARY: One of the things that we talked about
at our table in terms of adaptive design, I think one
model of it is the IHI collaborative approach.
It's very, very adaptive. Each site decides on
its (inaud.) and the rapid cycle improvement approach
to change is inherently adaptive. You're supposed to
just regularly, rapidly sort of change your focus.
The challenge in those, and Don's a big advocate
of rapid cycle improvement, and he often says
the traditional approach to research is
antithetical to quality improvement, it's a little bit
of a stark contrast. But the challenge is, if it's not
standardized or generalizable you're never quite sure what
worked or didn't work or how to generalize it. It may be good
for an internal process, but in terms of external validity it's
always a challenge to know what it is that you would export,
under what circumstances to what places.
>>>ELLEN GRITZ: Okay.
>>>MS: (inaud.) I represent the smart table 14. (inaud.)
the generalized validity (inaud.) intervention and how
does one go about trying to build that up.
And so that limited our discussion to (inaud.);
we did happen to have (inaud.) looking
at MLIs (inaud.) criteria whereby we would know how to
assess which (inaud.) should be funded and which were not.
And as we talked about that even more we wondered if
there was, did any of the papers really talk about how we
got to be here? Why did we think (inaud.) what's the
science behind it, what were the assumptions that were
made in order to bring it here today, (inaud.).
>>>DR. PAUL CLEARY: One theme that I think emerged today
is that we have emphasized internal validity at the expense of
external validity. I think speaker after speaker said we
need to know about external validity, or context, however
you want to frame it. I think several speakers also opined on
how we got here. One answer was that we haven't been as
successful as we would have liked with single level
interventions, and there's a premise that maybe by
combining levels, either through additive effects or synergistic
effects, we will have the kind of impact that we would like to
have. So if a patient intervention is contingent on
changing community norms and changes in the care setting,
then the individual intervention is going to have zero net
sustainable impact if we don't operate in the context and the
different levels. Now that's untested, but a lot of smart
people think that's a potential way to get more traction on
some of the issues we want to get more traction on.
>>>ELLEN GRITZ: Or we realize that what we implemented did
not happen as intended because of assumptions about what we
thought the provider was going to do. Something of that sort.
>>>RUSS GLASGOW: Russ Glasgow from the very engaged table 8,
with a few of my own editorial comments put in. Just two quick
points. One about simulation modeling. Just that we are very
bullish on that. Probably largely at least because only
one or two of us at the table really understand it. Just know
enough to be dangerous. But we're quite excited about it
for the following reasons. One, we think it has the potential
to increase transparency of reporting and making explicit
assumptions to things that are going in there, which we think
would be good for all types of research. Particularly around
what I would call, probably an incorrect term, the
impact of starting assumptions, or in economic speak,
sensitivity analysis; I'm not quite sure what the
modeling term for that is. Secondly, the potential for
replication, which we see too rarely and which I think is really
underemphasized in all types of science: showing through
cross validation or other types of replication. And finally
something that we didn't hear today, might be in the paper
but that we felt was really important about it is the
potential to identify, particularly when you look for
interactions, (inaud.) effects. And to predict in advance
some things which I think we could have well known about,
even simple things that are conceptualized at one or two
levels, like the ACCORD trial results,
if we had really modeled those before based on what we know.
Second point: we were also a bit on the time bandwagon. We feel
it's really been the Rodney Dangerfield of research.
It deserves much more credit, and we don't quite know why.
Back to (inaud.), before some of the people in this room were
born, about the importance of time as one of the key concepts
of generalizability. But in particular, I think the notion
that today it's more and more possible, and tomorrow and next
year will be even more possible. Marty made the great
point about history, but we had a lot of discussion about
all of these different levels, and a real need to do that over
time. Particularly thinking about organizations, because the
organization on day one, when you start your study, is not the
same organization that you have six months or a year later.
And that's true for most of the other levels there. But we
usually don't do that; we just default to whatever we
called time one, despite the history and everything else.
And the final point is just that I think it's particularly
imperative, and probably the biggest and most
neglected challenge of time: sustainability,
the long term impacts. We could have a whole meeting,
and probably will, on that. But one thing that is
really possible now that wasn't in the past, with all the work
being done to harmonize community level health
indicators, is to at least partially characterize
contexts. That is being done now; we could maybe have
another discussion about that sometime. There are geospatial
databases and things that we now have to summarize some of
the important social, physical, and environmental characteristics.
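A minimal sketch in Python of the sensitivity analysis Dr. Glasgow alludes to, varying one starting assumption and watching the simulated outcome respond; the decay rates, horizon, and effect shape are illustrative assumptions only.

    import numpy as np

    # Vary one starting assumption (how quickly an intervention's effect
    # decays) and watch how a simulated cumulative outcome responds.
    months = np.arange(48)

    def cumulative_effect(decay_rate, peak=1.0):
        return (peak * np.exp(-decay_rate * months)).sum()

    for decay in (0.02, 0.05, 0.10, 0.20):
        print(f"decay={decay:.2f} per month -> cumulative effect = "
              f"{cumulative_effect(decay):.1f}")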
>>>ELLEN GRITZ: Thank you. I'm going to let our last
person at the microphone speak, and then
any summary comments from the table.
>>>FS: Thanks for the opportunity. (inaud.)
University of North Carolina, and we've been talking a lot
about context today, and we're hitting on timing and history.
But one thing I'm not hearing, something we've been working on
is readiness. And we've just completed a model of
partnership readiness for which we defined three dimensions.
One being (inaud.) that involves history, time, interest,
benefits, whether the benefits are mutual, and value.
The second is effective (inaud.) to do the work: resources, people,
finances, (inaud.). And the third dimension is operations:
communication systems, (inaud.), and leadership. And so I think
in the application of multi-level interventions, the
readiness of the organization, the provider practices,
and of course the patient, we have (inaud.) and there are
some readiness problems with organizations.
But I think we really need to look at readiness as a
multi-level (inaud.), and I really haven't heard
that today. I think it's bigger than time and history.
>>>ELLEN GRITZ: Very interesting.
Okay, final comments from the panel.
>>>DR. MARTIN CHARNS: I just can't forget the comment
that Ernie made this morning citing Machiavelli,
as I think about readiness.
>>>ELLEN GRITZ: Jeff, did you want to say something?
>>>DR. JEFF ALEXANDER: Just to reflect some of the comments
that were made in table 20, since they didn't get a full
voice to their concerns. One of the things we talked about
quite extensively, and I became a clear convert as the discussion
proceeded, is how we engage in cumulative knowledge building in
this area. These studies are likely to be very expensive, and
we can't afford, I think, in an era of resource scarcity, to
engage in what amounts to a series of idiosyncratic,
unconnected studies. It seems to me we can approach this from
several vantage points. One is figuring out how to take more
of an incremental approach to multi-level interventions and
evaluations without trying to embrace all of the moving parts
simultaneously. The second, more of a top down strategy,
and I may be speaking about NCI here, is to figure out how we
can categorize, accumulate, and disseminate the information
that we're learning so that the learning is accelerated more
than it has been in the past. So I see a two-pronged
approach that would be great if we could implement.
>>>DR. BRIAN WEINER: In addition to the infrastructure that
needs to be built, I think there's a lot of pre-work that
needs to be done. I think before the NCI goes off and spends
several million dollars and people's careers doing
multi-level interventions, some of which has been talked about
already, there's a lot of work that needs to be done on
measurement. There's no point in doing multi-level interventions
if we can't measure with some degree of confidence the things
that we think are important. We need a better understanding of
causal mechanisms of the processes we're interested in
studying and intervening upon. So I think the need for more
multi-level and cross level kinds of research is there.
And I think there's a training aspect to it as well, around
issues like simulation modeling and these other
techniques. I think we just need to build up our skill set.
And there's a fair amount of conceptual work that I think
needs to be done. So I would say we're not ready for prime
time actually on multi-level modeling. But I think there
are some very concrete steps that could be taken to
lay the groundwork so that if and when we're
actually ready to make some big bets with the
budgets that are available and so forth, that we're
more likely to succeed. I don't think we're ready yet.