Multilevel Interventions in Health Care Conference: Presentation by Paul Cleary, PhD


Uploaded by NIHOD on 05.05.2011

Transcript:
>>>DR. STEVE CLAUSER: Paul is the Anna Lauder Professor of
Public Health and Dean of the Yale School of Public Health
and his recent research includes a statewide initiative to
improve cancer care in Massachusetts, which I assumed
involved many levels of effects; a study of how
organizational characteristics affect the cost and quality of
care for people with AIDS; and a national evaluation of
continuous improvement initiatives in clinics
providing care to individuals with HIV. He has published
over 300 articles and book chapters describing his
research. And I'll let Paul take it from here to
talk about methods and analytics.
>>>DR. PAUL CLEARY: Thanks very much, Steve.
Everything I was going to say has been covered,
so I will try to be uncharacteristically concise.
I think the background is pretty obvious.
I think of levels in terms of what the levels are,
what the influences are, what one intervenes on...
and I listed some common levels.
And just to avoid confusion
as we start talking about methodologies, when I'm working
with biostatisticians or interventionists,
I think it's okay for us to have different concepts of
levels, as long as we make them explicit, as everyone has alluded to.
I took out the slide that stated the premise of multilevel
interventions, but I think that may be one of the most
important ones to have had in here.
Because I will say that the idea that multilevel interventions
are good is a premise, and I think it is becoming
increasingly clear that it's an untested premise.
And for something that is really expensive and hard to
do, we ought to start testing that premise.
So I wasn't asked to speak about that, I was asked to speak
about methodological decisions. So when you are sitting around
with your multidisciplinary, multilevel
team, you are going to have to make a variety of decisions.
First of all, what are your foci of assessment?
What's the net effect for patients?
So for example, I took Jane's Table 1 and I said, let's say we
are deciding that we want to increase colorectal screening, or
appropriate colorectal screening, among patients.
Now, that is a decision we have to come to.
And then we have to ask: what is the effect of specific
components at the level at which they are focused?
So again, I took Jane's table and what I mean by this is we
might sit around and debate and I'd say okay,
she says knowledge about cancer and screening options for the
patient, we want to increase that.
At the provider-team level, clinician knowledge and communication
about recommended screenings, we want to increase that,
we want an intervention that is going to increase that.
And at the local community level,
we want community screening promotion efforts.
So we want to increase, let's say,
knowledge, behavior and screening.
But those are decisions, when we are doing research,
that we have to make by sitting down with the two hundred things
that might be appropriate and choosing among them.
And then actually Brian covered the next point,
then we have to figure out whether we're looking at,
I would have said, additive versus interactive effects.
You can use, I think, Brian's much more elaborate
framework, but basically what we want to know,
is doing two things better than doing one thing?
Is it the same as doing one plus one equals two,
because one plus one may not be two, it may be
1.75 or something. Or, in the synergistic or interactive case,
whatever nomenclature you want to use, one plus one
hopefully would be three. So it may be that if we just
go out and educate individual patients, we increase the
screening rate by one percent, we educate physicians,
we increase the screening rate by three percent.
But maybe, when they start talking and they all agree and
all these complex processes unfold, we go,
"gee, maybe we can get fifteen percent."
But those are testable questions that I think we want
to get to. And, of course, mechanisms, complexity,
non-recursive, etc., etc. Anyone who has done evaluation
research knows we want to understand the mechanisms,
the processes, the feedback loops and so on.
And then there are a whole host of methodological decisions
that follow from those and I have listed some here.
And the only reason I list them is I was struck and I felt
validated by Kurt's meta-analysis that a lot of
these things are just not done. So we don't think about
whether, when we are looking at, let's say, the local community
and community screening rates, we are interested in what
it is about the community that promotes those things,
or whether we are interested in the process, or the outcomes.
We have to come up with measurements of those things.
Methodologists like me and my colleagues
have to sit around and think about aggregations of
data: what do we mean by a community,
what are the units, what do we aggregate?
Not just for the interventions, but for the analysis, and
there's a whole host of statistical issues that
derive from that. And then this is the big one, I think the
reason we haven't done more of this: it is
really hard to do. And I am going to make an assertion.
I am an avowed logical positivist and I agree with the comment
that we have to look at all this complex stuff,
but I would say we often neglect to have really good
rigorous comparisons to know whether this community
intervention resulted in more community awareness than if we
had not done that community intervention.
Now I am talking about communities and it's really
hard to randomize 3,000 communities or even ten
communities, so it's really, really hard.
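To give a rough sense of why, here is a back-of-the-envelope design-effect calculation, using the standard Kish formula with entirely assumed numbers:

```python
# Back-of-the-envelope design-effect calculation (standard Kish formula,
# entirely assumed numbers) showing why having few randomized units hurts.
icc = 0.02       # assumed intraclass correlation within a community
m = 1_000        # assumed number of respondents measured per community
deff = 1 + (m - 1) * icc           # design effect: 1 + (m - 1) * ICC

n_individual = 800                 # assumed n if individuals were randomized
n_clustered = n_individual * deff  # n needed once clustering is accounted for
print(f"design effect = {deff:.1f}; need about {n_clustered:,.0f} respondents")
```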
But often we don't know that and often we have no idea what
the impact of that community intervention was,
what the cost of it was or what the relative cost of different
approaches would be. So the challenges I think are
self-evident. I was asked about analytic issues, and I don't think
that's the problem. I think we have very, very good analytic models
for hierarchical, nested, and non-nested
data. I think we have really, really good statistics.
I also think we have really, really good research design.
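As one illustration that the analytic machinery is readily available, a minimal sketch of a two-level random-intercept model, with hypothetical file and variable names, fits in a few lines:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: one row per patient, with a clinic identifier,
# an indicator for whether the clinic received the intervention, and a
# screening outcome (file and column names are made up).
df = pd.read_csv("screening.csv")  # columns: clinic_id, intervention, screened

# Two-level random-intercept model: patients (level 1) nested in
# clinics (level 2). The clinic-level random intercept absorbs
# between-clinic variation so the intervention effect and its standard
# error are not overstated. (For a binary outcome a mixed-effects
# logistic model would be more appropriate; this keeps the sketch simple.)
model = smf.mixedlm("screened ~ intervention", data=df, groups=df["clinic_id"])
print(model.fit().summary())
```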
So I don't think finding a really smart statistician or
methodologist is the rate-limiting step here.
I think thinking about what the foci are and how to get the
control groups to evaluate even the level-specific effects,
much less the mediating, moderating,
synergistic effects is really, really hard.
Think about, Steve, when we were talking about doing things in
Massachusetts; we are doing HIV interventions now in
Connecticut. So if I intervene in the entire New Haven
community, what is my comparison group, how do we find it,
how do we get them to submit to measurements?
So it's really a tough one, and it's expensive and hard.
That's really inspirational, right?
This is a point I made before: there is, I said
surprising, I would say shockingly, little information
about the effects of the components.
So a lot of the studies I read say, well,
we did fourteen things and the net result was a five percent
increase in people quitting smoking.
But we don't know what the components did, either at their own
level, in other words, what happened at the doctors'
offices, did the doctors' attitudes change, or what the
impact of those changes was on that smoking rate or that
prevention screening rate or whatever.
A final point, Joe is going to be talking about modeling and I
am really enthusiastic about that.
It struck me how little we know about the relative cost of
doing these things. As I said, I am trying to plan
a community-wide strategy for promoting HIV screening
and linkage to care. And you come up against it when you
talk to the people at the state, who say, well, if we have
X dollars, what's the best way to spend our money?
You have given us the 43 studies about how to increase
screening rates, but if you want me to do something at the
community level, what's the impact of those fourteen
evidence-based interventions, what's the cost of them,
how do I model that taking into account context, because
I want to work in different communities and so on?
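To make the budget question concrete, here is a toy sketch, with entirely hypothetical costs and effect sizes, of choosing a portfolio of interventions under a fixed budget; note that it assumes effects simply add up, which is exactly the untested premise:

```python
from itertools import combinations

# Toy illustration with entirely hypothetical costs and effect sizes:
# given a fixed budget, which portfolio of interventions maximizes the
# expected increase in the screening rate?
interventions = [
    # (name, cost in dollars, assumed increase in screening rate)
    ("patient education", 50_000, 0.01),
    ("provider education", 120_000, 0.03),
    ("community media campaign", 200_000, 0.04),
]
budget = 250_000

# Exhaustive search over all subsets. Note this assumes the effects
# are additive -- the untested premise discussed earlier.
affordable = [
    subset
    for r in range(len(interventions) + 1)
    for subset in combinations(interventions, r)
    if sum(cost for _, cost, _ in subset) <= budget
]
best = max(affordable, key=lambda s: sum(effect for _, _, effect in s))
print([name for name, _, _ in best])  # best portfolio under the budget
```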
So that's a really tough one. These are strong -
I have an answer for all of these, but I will pretend like
I am open to inquiry. Must we sacrifice resolution and
evaluation designs to achieve impact?
In other words, do we do the whole enchilada, saying okay,
we are going to go to the community, we are going to
do the fourteen things because we really want
to have the impact, which is what I am hearing in HIV?
Or do we really want to understand more about the
components, the interactions? I lean towards the latter
and I can argue that at our table. There is sort of a
choice here, and I am kind of simple-minded about it: a lot of times
when I sit down with people designing interventions, do you do the
whole enchilada, see if it works, and then tease out
the components, or do you start with the components and
build up? So if I am doing a clinic intervention,
do I educate the physicians, change the systems,
do I do the fourteen things, see if I can get an impact
because it's so hard to get an impact doing anything or do I
do the little components? Now you might say we have
the components, we have the evidence that they work,
but I would challenge all of us. I don't think we have
done the comparison. So if I go back to Jane and say,
"Jane, gee, I just found 43 papers about increasing
knowledge about cancer and screening options,
what do I do in Connecticut?" I don't think we have done
the comparisons or the cost or the modeling or the
impact in different contexts. And I'll just mention, we have
been complaining there is not enough money to do -
there's a billion dollars on the table to do comparative
effectiveness research and comparative effectiveness
research applies to systems and systems approaches.
So I would challenge us to do that.
So my questions, and I am trying not to delay lunch here,
are these; again, I know the answers. Is there
adequate data? No, we don't have adequate data.
Should there be more emphasis on understanding?
Yes, there should be more emphasis.
And should we focus more on cost-effectiveness?
Yes, we should focus more on those things.
So that is my talk, thank you very much
and I tried not to delay lunch.
>>>[APPLAUSE]
>>>DR. STEVE CLAUSER: Thank you very much, Paul.