Webinar


Time & Location

Mar
26
1:00 - 2:00 pm ET

As artificial intelligence (AI) drives rapid change in health care, it brings an unprecedented opportunity to address inequitable health outcomes. Longstanding disparities in life expectancy, chronic disease rates, access to care, and other issues persist for many Black, Latino, Native, and Asian Americans, as well as residents of rural communities, people experiencing poverty, and other vulnerable groups.

Harnessing AI’s full potential to improve health equity requires developing unbiased, comprehensive solutions. Our speakers discussed:

  • Novel approaches to mitigating bias in AI, including tools for researchers, local and state government experts, and policymakers to inform fair AI policies and practices that support health equity.

  • Methods for working with community stakeholders and underserved populations to promote AI as a tool to advance health equity, with a focus on building trust, collaboration, and translating research.

  • Innovative health care industry practices and approaches to eliminating bias in the use of data and AI tools.

0:04

Welcome, everyone. Thank you so much for joining us this afternoon. As she said, I am Director of Communications at the National Institute for Health Care Management Foundation, NIHCM. Today's event will focus on a topic dominating conversations about health care: artificial intelligence and its impact on health equity.

1:01

Recent research reports that AI has great potential to advance health equity, defined as the state in which everyone has a fair and just opportunity to attain their highest level of health. AI has already proven successful in enhancing some types of care, such as chronic disease prevention and treatment. At the same time, AI presents significant challenges to improving health equity. Studies have found that AI algorithms may increase unfair treatment of vulnerable groups when used for medical care management and public health, including foundational research from NIHCM grantee Ziad Obermeyer, who found in 2019 that racial bias was present in an algorithm used to determine the complex health needs of millions of people.

1:52

The expert speakers we have with us today for our event, which is part of NIHCM's Artificial Intelligence and Health Webinar Series, are going to discuss how AI may advance or impede efforts to improve health equity. Their work covers the use of data, policy, healthcare industry approaches, and community-level concerns. They will address the potential benefits and challenges associated with AI and the solutions needed to ensure AI supports health equity. Before we hear from them, I want to thank NIHCM's president and CEO, Nancy Chockley, and the NIHCM team for all of their help putting together today's event. You can find biographical information for our speakers, today's agenda, and our speakers' slides on our website.

2:39

We also invite you to join the conversation on social media at #AITechNIHCM. After our speakers have completed their presentations, we'll have a brief Q&A if time permits.

2:53

Now, I would like to introduce our first panelist, Dr. Maia Hightower, MD, MPH, MBA. Dr. Hightower is the CEO and founder of Equality AI and the former EVP and Chief Digital Transformation Officer at the University of Chicago Medicine. She is a leading voice at the intersection of healthcare, digital transformation, and health equity, and a champion for responsible AI, ensuring that the digital future of health care is equitable and just. She is a four-time C-suite physician executive with 15 years of executive leadership spanning healthcare IT, medical affairs, and population health for academic medical centers, clinically integrated networks, and healthcare tech companies. Welcome, Dr. Hightower.

3:41

Thank you so much, Sheree. Thank you for that warm introduction, and I'm so excited to be here today with you all.

3:49

I would like us to take a moment to think back.

3:51

The last time you saw your physician, perhaps it was with a loved one, and perhaps it was in an ICU setting, or a hospital setting, or a primary care setting.

4:03

When you were sitting there, perhaps you had one of those cute little gowns on, at your most vulnerable self.

4:10

And the physician may have been on a keyboard at a computer in the room. How much do you trust that the AI behind the curtain was actually fine-tuned to you, your biology, your specific circumstances; that it really understood you and could provide the best predictions to help with your health and health care?

4:34

Now, extend that further to your loved ones: perhaps your grandmother, your child, a neighbor, a friend; and if you're a physician or a caregiver, each and every one of your patients. How much do you trust that that AI was, again, fine-tuned for that individual?

4:53

Well, we all know that algorithmic bias is a real risk and that models just aren't there yet. But what a great opportunity they offer.

5:05

And so, little did you know when you stepped into this webinar that you were actually raising your hand and saying, Coach, put me in.

5:16

I want to make sure that every model is fine-tuned to really achieve that health equity goal that we all strive for.

5:31

Let me go back.

5:39

OK, here we go. Of all the forms of inequality, injustice in health care is the most shocking and inhumane.

5:48

Now, Dr. Martin Luther King said this over 50 years ago.

5:51

And my question to you is: have we made significant gains? Have we lived up to the promise that Dr. King identified for us?

6:04

And I would say no. We haven't made the strides that we committed to so many years ago, or even in the last five years, to address the inhumanity and injustice of health inequality.

6:20

We're at this inflection point where we have new tools. We have new technology that may actually give us the opportunity to address health inequality in a way we haven't been able to before.

6:33

We're at this inflection point where that same tool can actually amplify the bias, amplify the inequality that already exists; or we may choose to use this new technology to actually address healthcare's most difficult challenges and address health inequality in a way that closes that gap.

6:55

If we choose not to address health inequality with our foray into AI, we really risk amplifying that bias.

7:06

Sheree,

7:06

you mentioned earlier the study done by Obermeyer et al., dissecting racial bias in an algorithm used to manage the health of populations. AI can be sexist, and it can be racist; it's time to make it fair. In this particular study, Black patients were 50% less likely to be referred to case management while being equally sick.

7:25

Now, that alone is shocking, that a model was deployed that could amplify any health inequity.
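To make the kind of check the study performed concrete, here is a minimal sketch in Python; the data frame and its column names ('race', 'n_chronic_conditions', 'referred') are hypothetical stand-ins, not the study's actual variables.

```python
import pandas as pd

def referral_gap_at_equal_sickness(df: pd.DataFrame) -> pd.DataFrame:
    """Compare case-management referral rates by race among patients
    with the same burden of chronic illness. Expects hypothetical
    columns: 'race', 'n_chronic_conditions', and 'referred' (0/1)."""
    rates = (
        df.groupby(["n_chronic_conditions", "race"])["referred"]
          .mean()
          .unstack("race")
    )
    # At a given illness level, a much lower referral rate for one
    # group is the kind of disparity the study surfaced.
    return rates
```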

7:35

And at the same time, the good news is that Obermeyer and team were able to both detect the bias.

7:41

And use AI technology to redevelop that model and build a de-biased model; a model in which Black patients were just as likely to be referred to case management while being equally sick. So, a model that better served everyone.
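The published fix centered on label choice: predicting a direct measure of health need rather than future cost, which encodes unequal access to care. Here is a minimal sketch of that idea, with illustrative feature and label names and a generic model rather than the study's actual pipeline.

```python
from sklearn.ensemble import GradientBoostingRegressor

def fit_cost_and_need_models(X, future_cost, chronic_conditions):
    # Biased design: future cost as the label. Spending is lower for
    # patients with less access to care, so equally sick patients can
    # look "lower risk".
    cost_model = GradientBoostingRegressor().fit(X, future_cost)
    # De-biased design: a direct health label (here, a hypothetical
    # count of active chronic conditions), so the score tracks need
    # rather than spending.
    need_model = GradientBoostingRegressor().fit(X, chronic_conditions)
    return cost_model, need_model
```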

7:59

So we really can build better models today.

8:04

When it comes to the cost of a biased model: Obermeyer and his team have estimated that, at this point, over 80 million people have been exposed to this biased algorithm.

8:17

It has since been redeveloped, but for those 80 million people, what was the lost opportunity? The estimate is over a billion dollars across the health care system, in either direct healthcare cost-savings opportunities or in chronic condition case management, fee-for-service, billable services: a billion dollars of lost opportunity. And yet that's just for health systems. Imagine the cost borne by patients and families.

8:51

We know that.

8:53

Navigating healthcare is incredibly complex, and when you have a case manager who can help you navigate the complexities of healthcare, that can actually be priceless.

9:05

One would think it was perhaps just a one-off, but in fact, AHRQ did a systematic review of the literature to measure the impact of healthcare algorithms on racial and ethnic disparities in health care.

9:17

And what they found, in looking at over 11,500 studies, was that only 50 of those studies met inclusion criteria, because only 50 of the studies actually stratified their algorithm by race/ethnicity.

9:32

Of those, 14 studies looked at whether algorithms impacted health equity.

9:43

Ten showed that algorithms increased health inequity, whereas four demonstrated that algorithms decreased or could decrease health inequity.

9:53

And those four that decreased health inequity did so by design.

9:58

So they were designed to both detect bias and mitigate that bias, and so, by design, they were able to decrease health inequity.

10:07

So, roughly two-thirds of these studies actually increased health disparities.

10:12

And if you think about the 11,500 that did not meet inclusion criteria, where no measurement was done, meaning that there was no intention: in the best-case scenario, two-thirds of those would amplify health disparities, but there's a good probability it's more than that, because there was no intention for those models to decrease health disparities.

10:34

Bias occurs throughout the life cycle, starting from problem formulation: who has the power to decide what problems AI solves?

10:42

Then data flow and collection, the data creation, curation, and acquisition process; the moment in which that data then informs model development, through model evaluation and deployment; all the way to the real-world outcomes that are affected by those models.

11:00

Bias can occur throughout that life cycle.

11:03

And, again, the good news is that, while bias can occur throughout the life cycle,

11:09

there are good solutions, methods that can be applied throughout the life cycle to mitigate the biases, and they are social mechanisms as well as technical ones. The technical would be: how do we build that better model from the modeling perspective, the math?

11:26

The social can be as diverse as diverse teams, the way we lead, and how we do governance and policy. And so there's a wide range of methods to address that bias.

11:40

The promise, the promise of AI: we're truly at this

11:45

incredible time where there is this opportunity to actually address health disparities and healthcare inefficiencies broadly, and provide greater value for health care using our value equation. And the promise of AI really can improve quality, decrease cost, and improve the experience for our patients and for our clinicians.

12:06

Especially when we think of the administrative burden of health care.

12:09

The opportunity is great.

12:11

And so, it really is up to us to choose to address these health disparities and to make sure that we are mitigating biases as they occur.

12:20

Because the total cost of deploying AI systems without an AI safety and management system is bias in AI systems, misalignment of the AI transformation with, and erosion of, health equity goals, as well as overall underperformance of one's investment, where those benefits are not fully realized.

12:40

So, here at Equality AI, we take a holistic approach to transformation in healthcare, combining those technical solutions with change management. And it really occurs across three stages.

12:54

Foundations: making sure that a health system or healthcare-sector provider really has that foundational element to provide equitable measurement and monitoring of models.

13:05

And that foundation really begins with strategy, intention with strategy. Then it is AI governance:

13:11

making sure you have the policies and people in place to make good decisions. Then the technology: are you auditing your models?

13:22

The later stages are builder, where you actually can build models and measure the efficacy of one's models within one's environment, all the way to enterprise, where you actually have that full management system in place: that 360-degree view of every single model, with real-time detection of how that model is performing in safety and performance, as well as outcomes.

13:47

This is what Equality AI Foundations looks like. It really starts with discovery, making sure that the principles that drive your mission actually align with your strategy.

14:00

What does your current technology look like?

14:02

Then, into that AI strategy piece. For many of the health system leaders we talk to today, health equity is really important and AI is really important, but they don't really have a good path forward for aligning AI strategy with health equity, or AI strategy with safety.

14:23

And so, really, what we help do is align those two together, providing a path forward where

14:29

that AI strategy really does emphasize safety and equity.

14:33

Then you have the AI governance in place, as well as really monitoring your models, auditing your models, to make sure they really do align to that strategy and governance process.

14:48

So, again, really, it's about making sure that your AI strategy focuses on your high-value domains, that it is creating the greatest value that your investments ought to provide, while prioritizing equity and safety.

15:03

There are a number of governance frameworks, including the Algorithmic Bias Playbook.

15:08

Dr. Obermeyer and team published this playbook in 2021.

15:14

It's a very easy-to-adopt playbook for AI governance for those health systems or sector leaders that haven't yet developed what their AI strategy ought to be.

15:25

You can move up, then, to the NIST AI Risk Management Framework, which is probably one of the leading frameworks here in the United States.

15:32

When it comes to risk management, it's a very flexible framework that allows health systems and health sector players to develop their AI governance over time.

15:46

That is followed by audit, and the audit has twofold components. One is the technical audit: how does the model perform under subpopulation stratification? How does it perform in terms of accuracy across those subpopulations, and how does it perform in terms of fairness,

16:03

equity, and outcomes?
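As one illustration of such a technical audit, here is a minimal sketch that stratifies a model's predictions by subpopulation and compares accuracy and two simple fairness quantities across groups; the column names are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def stratified_audit(df: pd.DataFrame,
                     group_col: str = "race_ethnicity") -> pd.DataFrame:
    """Expects hypothetical columns: group_col, 'y_true' (0/1 outcome),
    'y_score' (model probability), 'y_pred' (thresholded decision)."""
    rows = []
    for group, g in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(g),
            # Accuracy by subgroup (assumes both outcome classes appear).
            "auc": roc_auc_score(g["y_true"], g["y_score"]),
            # Selection rate, compared across groups for demographic parity.
            "selection_rate": g["y_pred"].mean(),
            # True positive rate, compared across groups for equal opportunity.
            "tpr": g.loc[g["y_true"] == 1, "y_pred"].mean(),
        })
    return pd.DataFrame(rows)
```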

16:05

But there's also a process component: how well is your AI development process actually adhering to your policies and processes?

16:16

And so this is just a sneak peek into what the Equality AI technology looks like.

16:22

But most importantly, we each have a role to play to ensure that the future of AI-enabled healthcare is equitable and fair. And so if you're a patient, community member, or policymaker, it's really about demanding transparency and accountability.

16:37

It's about advocating for responsible AI. It's providing that feedback on how that system is actually performing at the bedside.

16:46

For the role of clinicians and experts, it's understanding one's role as the human in the loop.

16:52

So, our clinicians, our subject matter experts, really are that human in the loop; but actually, the human in the driver's seat: making sure that AI is not only understood, including its limitations, but also that we're able to achieve full value from that AI investment.

17:13

Providing oversight, speaking up about concerns, and, of course, getting involved in the AI development process.

17:19

Then the role of developers really is making sure you are measuring the efficacy of the model stratified by subpopulation over time, designing with subpopulations in mind, testing thoroughly, listening to stakeholders, and providing transparency and documentation.

17:34

Then healthcare leaders: really driving that

17:37

AI strategy, aligned for value, for safety, for equity.

17:42

Making sure you have AI governance in place, so that your policies and processes align with your strategy and your mission and vision.

17:48

Then, auditing: you have to measure those models and how they're performing, at least in a snapshot on an annual basis, and then in real time.

17:59

So, this is just a little bit about Equality AI and our manifesto. I think the most important part is that we respect that algorithms have power, right, over life and death, treatment and non-treatment, and that is an awesome and incredible responsibility.

18:13

We respect healthcare data privacy and security, and that each data point is a lived, unique human experience; with that comes incredible responsibility.

18:28

And so, I leave you with that. Hopefully, each of you understands that this is a team sport, a team endeavor, to ensure that the future of AI in healthcare is safe, equitable, and fair.

18:43

Thank you so much.

18:48

Thank you so much, doctor Hightower, for giving us the deep dive into the tremendous benefits of learning how to harness the power of AI, and root bias out of our systems.

19:01

Our next speaker is Usha Sambamoorthi. Dr. Sambamoorthi is a Professor of Pharmacotherapy and Associate Dean for Health Outcomes Research at the University of North Texas Health Science Center at Fort Worth. She is a health economist who has dedicated her teaching, mentoring, and research efforts to improving population health by reducing disparities across gender, race, ethnicity, age, disability, and mental health. Dr. Sambamoorthi's recent research examines the topic she's going to discuss today, which is community perspectives on AI, machine learning, and health equity. Her work, which has been supported by many federal agencies, focuses overall on improving population health through the examination of access, quality, and outcomes using real-world data. Dr. Sambamoorthi, thank you for joining us.

20:01

Thank you so much, Sheree, for that kind introduction.

20:05

I am delighted to be here, and I want to focus this talk on the power of AI that Dr. Maia Hightower has already highlighted.

20:16

So, this talk will focus on how the community can get involved and what role they can play in making AI powerful, safe, and equitable, as Dr. Hightower indicated. Why is this important? As Dr. Hightower indicated, we have an incredible number of publications and an enormous amount of medical knowledge coming out of the AI world, and we can also see that, along with that, a lot of sources of bias come across as well. And we all know that the doubling time of medical knowledge was 50 years in 1950, 7 years in 1980, 3.5 years in 2010, and in 2020, just 73 days. So, by the time a medical student graduates, the student will be applying only about 6% of the available knowledge. Next slide, please.

21:18

Next slide.

21:21

Yeah, So, that is the challenge ahead of us.

21:26

And I want to declare that bias is very complex and multi-faceted.

21:32

There are many, many approaches for mitigating this challenge, but there is no quick fix. And that is where I think there is a need for community involvement. And community involvement matters because, as Dr. Hightower pointed out, the human in the loop is very important, and the human in the loop can happen only through participatory design and stakeholder voices.

21:57

And they all need to give their perspectives because, as you know, AI is very complex in the sense that the domain expertise comes from somebody else.

22:10

And even when there is participation between the physicians and the computer scientists, the physicians have the domain expertise in terms of, you know, treatment and the response to treatments

22:22

and diagnosis, and the computer scientists have the domain expertise in terms of the engineering. And so these two have to come together; nobody is an expert in every field. That is why the human in the loop is very important. Next slide, please.

22:40

Yeah, and as Dr. Hightower pointed out, there are so many racial biases and gender biases, and I want to give you a reference to take home: there is also geographic bias in medical AI tools. For example, only three states have contributed data to many of the diagnostic tools, which tells you that many rural areas and many states are not represented in making decisions. Next slide, please.

23:11

Next.

23:12

And if you need more information about racial bias and other kinds of bias in healthcare and artificial intelligence, NIHCM has a data insight, which I found very, very helpful, and here is a source for you to learn more about it. Next.

23:31

And with the advent of ChatGPT, which many of you are familiar with, there is what is called generative AI, and generative AI actually amplifies the problem, adding to it what is known as hallucinations: it makes up things which are not there.

23:49

And, therefore, that is an additional challenge for the communities, for everyone, to address hallucination and how healthcare professionals can mitigate the risks. Next.

24:02

And bias comes from lots of sources: it comes from the people, it comes from the data, it comes from the models, it comes from the outcomes. I'm not going to focus on the models or the outcomes. I'm just going to give an overview of community involvement, and how to get involved in making sure that the bias from the people is reduced and unbiased data are used.

24:30

Next slide, please.

24:33

And there are strategies out there that you can use for achieving health equity through artificial intelligence.

24:43

For example, here is a paper looking at all the articles that were published on strategies to improve the impact of artificial intelligence on health equity. It is no surprise that almost two-thirds of all the strategies are related to data, as you can see from the graph. But there is also a mention of engaging the broader community; as you can see, it was not at the very top of the list, but it is at least mentioned. Next slide, please.

25:17

And, yeah.

25:20

So, this particular aspect, AI and community engagement, participation, and empowerment, is very new; it's just emerging. So, what do I mean by a community? By community, I mean individuals, organizations, and groups affected by or concerned with AI technologies and their impact on society.

25:41

Examples are academicians, activists, civil rights organizations, marginalized communities, policymakers, industry experts, and advocacy groups. I think community involvement is like an umbrella, and everybody needs to be involved, because it's not going to happen by accident; it is going to happen by design, as Dr. Hightower pointed out. Here are some examples of emerging community engagement and participation. We have the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity, AIM-AHEAD. I am actually one of the principal investigators of the consortium. Our main focus is to achieve equity and researcher diversity through community involvement, and there are initiatives where they are designing tools for underserved populations from the ground up.

26:36

One example that comes to my mind is purposeful AI for minority health. There are also calls from university researchers to develop requirements for community engagement in AI research, and there are local communities that are using artificial intelligence to improve health equity. Next slide, please.

26:59

As was mentioned before, we first need to listen to the community. And here is what we did: we listened to the community.

27:07

And one of the top items is to act locally, because we all know that the problems are at a local level, really hyperlocal, and one size fits all does not work at all. The other thing that stakeholders told us when we listened to them is that you have to have diversity: diversity, diversity, diversity. And I think the community is interested in AI/ML becoming a household tool, just like, you know, we have PCs now, we have smartphones, all kinds of technologies. So I think the community is very interested in making this a democratic process and democratizing AI.

27:52

And the community actually realizes that barriers exist, but they are also very, very optimistic that these can be overcome. One of the solutions the community is interested in is an open science platform for health equity. So, it has to be very transparent: who's making the decisions, who's at the table, how did they make the decisions. It has to be very transparent, and it has to be open science.

28:19

And the community is interested in success, because we all want to be winners, and we are all interested in success, so that we can lift everybody up to achieve health equity. Next slide, please.

28:34

So, we all know there are lots of challenges in involving the community in AI, and many of these have already been talked about: data bias, the digital divide, lack of awareness and education, power structures, right? They all come with differential power structures and historical distrust, language and cultural barriers, lack of transparency, as well as unequal impact and unequal benefits. All of these are challenges, but I think we can overcome these challenges. Next slide, please.

29:13

So, how do we overcome these challenges? We need a comprehensive approach; I call it a sociotechnical approach. It has to take a collaborative and participatory approach from design to implementation to sustainment, and it needs to involve multiple and diverse stakeholders. Of course, education is also key, as is keeping communities in the loop, which is also known as human in the loop. And the other big thing that I want to bring to the table is interpretation of the model.

29:44

Next slide.

29:48

So, in terms of the participatory approach, it is a very broad spectrum: from consulting, to including, to collaborating, to owning the project itself. Here is where we need to figure out the participation goals, as well as what form the stakeholder participation takes, who's involved, what is on the table, and why participation is needed. Next slide.

30:21

And here is a study that actually looked at building bridges in terms of the participatory approach.

30:27

And I want to highlight: all of them agreed that it needs to improve the user's experience, and a majority of them were interested in giving feedback and also giving input on design ideas. Next slide, please.

30:46

And there are lots of community initiatives that are happening, even in AIM-AHEAD.

30:51

We have, like, you know, education and AI knowledge- and skills-enhancing efforts. And this is one of the things I think is very necessary in terms of building bridges between AI, health equity, and communities. Next slide, please.

31:12

And there are examples here of how we can actually address the digital divide: there are community projects that provide software and hardware, as well as wireless capabilities, in communities that are marginalized. Next slide.

31:32

And on data bias: diversity is very, very important. There are many citizen science frameworks, and when you have a lot of stakeholders involved, they perceive problems that we are not seeing. So it is very, very important to involve the citizens. And here are some examples where they have created datasets and actually reduced bias by including lots of images from diverse populations, Gender Shades being one example. Next one.

32:09

And here is where we are talking about the human in the loop.

32:13

And we need to have human-centered design, which is an approach to the design and development of a system or technology so that users can use it effectively and efficiently. Next slide.

32:29

And here is an example of keeping humans in the loop. This is a community-level project developing a tailored contraception chatbot for Black and Hispanic women. And you can see that these women were involved right from the design, in terms of focus groups; then they reviewed the responses; and they also participated in the dissemination. Next one.

32:54

And I think this is where interpretability really matters a lot. It actually builds trust. It identifies bias. It improves performance. And I'm going to skip over, you know, what interpretability is.

33:07

Let me just give you a definition, and then I'm going to select one example from my work. Can you go up?

33:16

I'm sorry, can you go down?

33:18

Next slide.

33:20

So, whenever we are presenting the results to the community, to anyone, we need to make sure they are interpretable. For example, here is the interpretation that came out of a machine learning model on low-value care. This one was looking at the effect of low-value care on cost. And here, every point is a human being; as Dr. Hightower indicated, every point is a human being. This model tells you how it came to the conclusion that low-value care actually increases the out-of-pocket costs for an individual, and it also tells you how much variation there is in the prediction.
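For readers who want to reproduce this kind of per-person explanation plot, here is a minimal sketch using the open-source SHAP library on a hypothetical tree-based model; the model and feature names are assumptions, not the speaker's actual analysis.

```python
import shap
from sklearn.ensemble import GradientBoostingRegressor

def explain_per_person(model: GradientBoostingRegressor, X):
    """X: patient-level features, e.g. a hypothetical
    'low_value_care_count' column alongside age, comorbidities, etc."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    # In the summary plot, every dot is one person, showing how a
    # feature such as low-value care pushed that person's predicted
    # out-of-pocket cost up or down.
    shap.summary_plot(shap_values, X)
    return shap_values
```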

34:01

Yeah, you can skip to the next slide, please.

34:06

The same thing when you are presenting the results: again, as was pointed out, AI has the capability to go to a very, very personal level.

34:15

You need to interpret the output; you need to show them how the decision was made for each person. For example, here on the right side of the graph is an AI model that actually explains the output for each person.

34:31

Each point on this graph is a person. So, you need to be very transparent in how the model came to this decision and how it is affecting people: is it an average effect, or is there a lot of variation in the effect? Next slide, please.

34:47

And, again, you can visualize it in terms of a group: what are the leading predictors in coming to a decision? And you can do it either group-wise for the population or for an individual person. For example, you can have something like this, showing how the decisions are made for a particular group of people. As Dr. Hightower said, when you stratify by race and ethnicity, or by geographical region, you can show them how these predictions came about. Next slide, please.
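Here is a minimal sketch of that group-level view, building on the per-person SHAP values above; the grouping column is a hypothetical stand-in for race/ethnicity or geographic region.

```python
import numpy as np
import pandas as pd

def leading_predictors_by_group(shap_values: np.ndarray,
                                X: pd.DataFrame,
                                groups: pd.Series) -> pd.DataFrame:
    """Average |SHAP| per feature within each subgroup: one row per
    group, one column per predictor, showing which predictors lead
    the model's decisions for that group."""
    contrib = pd.DataFrame(np.abs(shap_values),
                           columns=X.columns, index=X.index)
    return contrib.groupby(groups).mean()
```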

35:23

And there are lots of tools available for evaluating AI output.

35:27

I have listed some of these, and I think it is very, very important, even after we build the models and we are explaining all these results to the community, not to get bogged down. As Dr. Hightower talked about, we have to ensure scientific validity: whether there is clarity in presenting the results, whether the results can be reproduced, and whether we are adhering to ethical standards. So, it's very, very important for us to keep continually updating the AI models' output, and community engagement feedback is essential from design to dissemination. Next slide, please.

36:10

So, again, I cannot emphasize enough that community involvement, from design to data,

36:17

to algorithmic training, to implementation and continuous updates, is absolutely essential for successful and equitable AI for the community. And we have to have participation, monitoring, and controls at every stage.

36:34

Next slide, please. For any of the work that I have presented, all the credit goes to my collaborators. Thank you so much; it has been exciting to present this work.

36:46

Any questions?

36:48

And thank you.

36:49

Thank you so much, Dr. Sambamoorthi, for helping us understand the value and power of working with communities. Our final speaker today is Himanshu Arora. Mr. Arora is the Chief Data and Analytics Officer at Blue Cross Blue Shield of Massachusetts. He is responsible for advancing the company's data science efforts, and he is an analytics leader with a background spanning healthcare, insurance, government, and financial services. Mr. Arora previously served as Head of Data Science and Analytics Enablement at Health Care Service Corporation, and held positions at Zurich Insurance, UnitedHealthcare, and Deloitte Consulting. Welcome, Mr. Arora.

37:34

Thank you, Sheree. Really excited to be here, and to follow two really high-powered presentations and presenters.

37:43

So, thank you to doctor Hightower and doctor Sambamoorthi.

37:47

So I'm going to talk a little bit about how we here at Blue Cross Blue Shield of Massachusetts are approaching the space. Before we do that, a little bit about Blue Cross Blue Shield of Massachusetts.

37:57

We are a single-state, not-for-profit, community-focused Blue Cross plan.

38:04

And beyond the commercial objectives of any entity in the space, we feel, and have been, accountable to the community over our decades of existence.

38:16

And while AI is a hot topic today, bias, as we know, has existed in healthcare and in society at large for much longer than AI has been a topic of discussion.

38:29

And when we go back and look at the work that we as a company, directly or through our foundation, have done in the space, there's a lot of heritage and foundation and excellent outcomes to build upon, right?

38:43

So, when we think about bias and de-biasing health care, healthcare delivery, and healthcare coverage,

38:50

that's been a primary focus of ours for a long time. If we go to the next page, please.

38:58

So this focus on de-biasing continuously, and turning it into a discipline, not just a one-time activity, shows up all over the organization. It shows up in how we approach the space, and in how we approach AI as well.

39:16

And as you'll see highlighted here, as both Dr. Hightower and Dr. Sambamoorthi

39:20

referenced: when we think about governance, when we think about risk management, when we think about audit capabilities, this is a prominent part of what we focus on as we build toward the exciting capabilities that AI offers, and we have been doing this for a sustained period of time. If we move forward to the next page.

39:45

And this is, again, to really hammer home this point: when we think about our risk tolerance from an AI perspective, we're very clear about the fact that we have a zero-tolerance policy for any systems that exhibit any discriminatory behavior.

40:02

So, this is not something that we think about at the end, if you will, as a focus area.

40:09

It is something that is front and center from the beginning, all the way, again, going back to the ... presentation.

40:17

So, thinking about what problems we solve for, what problem areas we focus on, how we understand each individual member, population groups, and the health conditions around them: it goes all the way back to problem definition, and then all the way forward to measuring healthcare outcomes for individuals, communities, and society at large.

40:41

It's an end-to-end spectrum for us. If we move forward one more page, please.

40:48

So, again, a lot of this was covered in the previous presentation, so I'm not going to spend too much time on this.

40:54

But the point of this is to say, look, the positive impacts of AI and healthcare are really, really exciting.

41:01

Right?

41:01

And especially with generative AI coming into the picture, when we think about the categories of care that can be improved, all the way from access to personalization, early detection and prevention, efficiency, and cost savings, there is no shortage of opportunities in health care to apply the leading capabilities of artificial intelligence. Then we think about the negative impacts.

41:27

So this is where we have to be consistent and persevering in the application of techniques, methods, and intent when it comes to de-biasing, when it comes to handling and managing inequality.

41:45

And of course, there are the legal and ethical implications this creates for us and for all the partners, entities, and care delivery partners we work with, right? So, there is significant potential here.

41:58

But this is an essential concern that, quite frankly, as I said, is not going to be a one and done.

42:06

This is something we, as a system, are going to have to continuously get better at and make a core part of our discipline.

42:16

Next slide, please.

42:19

Here are some examples.

42:21

And this page focuses on examples of bias that seeps into the delivery part of the health care system.

42:29

The next page focuses on the coverage part. Again, I'm not going to drill into these; a lot of these examples we have previously discussed as well. But all the way from how risk assessments are done, to how diagnostics are done, to how treatments are presented, there is inherent bias.

42:49

When we go back and look at the data, and at how these outcomes are being applied, there are some specific examples here around pain management and glaucoma, and particularly mental health diagnosis, which, especially since the pandemic, has become such an important area of focus. It was always a key area of focus for us.

43:09

Its impact on medical care, and more broadly physical care, has only been amplified since then.

43:19

And more and more studies are finding that when we look at diagnosing conditions like depression and anxiety, and when we correlate that back to minority or marginalized communities, there is bias there, right?

43:33

So these are call-outs for us to say: look, exactly as Dr. Hightower and Dr. Sambamoorthi were saying,

43:39

We need a lot more research in the space.

43:43

But then we need to take that research and adopt it into how we execute on a day-to-day basis: in the delivery systems, in the coverage part of the health care system, in the day-to-day business operations, making sure that we are continuously de-biasing.

44:03

But then also monitoring and validating on the other side of it, alongside the health outcomes, what the impact of that has been.

44:13

If we move forward one more page, this page really focuses on coverage. A lot of the focus, from a bias perspective, tends to be on the delivery side, but there is definitely the potential for bias in healthcare coverage as well.

44:29

Whether it's coverage denials, how we adjust for risk, or what treatments are authorized for which minority groups and which are not.

44:39

Even all the way to how networks are developed and crafted. And to one of the points that was made earlier, some of the data coming forward from national studies is heavily influenced by certain parts of the country.

44:54

In Massachusetts,

44:54

we're blessed with having one of the best healthcare systems in the country, but it's also a very, very diverse system, all the way from rural areas to more urban areas.

45:12

But that's not necessarily true, or the case, for the entirety of the country.

45:17

So, when we look at network adequacy, that's another important point, which doesn't come to mind right away.

45:23

But the more we dig into it, the more we find that what coverage is provided is an important function of what networks exist in the first place, as an example.

45:36

So, if we can move forward one more page, please.

45:41

So, and both, doctor Hightower and doctor Sambamoorthi talked about this idea of human in the loop.

45:49

Again, not a new idea, but we think about it as a health insurer, again, a community-focused, single-state plan.

45:57

We are hyper-focused on this idea that any AI-driven outcomes reaching a member, directly or indirectly, especially when it comes to any adverse actions that are being created, or even have the potential to be created, are viewed through the lens of ethics.

46:18

We understand the context behind it, and we build trust on it.

46:22

And having a human in the loop means somebody, whether it's an individual or a team, reviewing the outcomes, giving feedback on where these outcomes are in line versus where adjustments need to be made.

46:35

Because over time, for us to achieve, if you will, the full promise and potential of AI, we have to develop trust.

46:43

And in these systems, trust will come only from repeated verification, checking, and demonstration that the outcomes making their way out to the end consumer, the end member, have been de-biased to every extent possible. Then, if we go forward one more page.

47:11

So, this is really all about: OK, how do we do that?

47:14

Right. And, again, both of the previous presenters walked through some excellent examples of specific tools, methodologies, and models, if you will, partnership models with the community, et cetera, that should be leveraged in this space.

47:34

You know, when you look at the categories of different biases, and by the way, we all understand these categories are not mutually exclusive.

47:40

They bleed into each other, if you will.

47:44

One often has manifestations and root causes in another.

47:49

But when we look at, for example, racial, gender, and ethnic bias, collecting diverse and representative data is not the only solution, but it's an important one, one that we must not miss and one that we focus on a lot.
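One simple, concrete way to act on that point is to compare each group's share of the training data against a population benchmark. Here is a minimal sketch; the benchmark shares and column names are placeholders, not real figures or the company's actual process.

```python
import pandas as pd

def representation_gap(df: pd.DataFrame,
                       group_col: str,
                       benchmark: dict) -> pd.DataFrame:
    """benchmark: hypothetical population share per group (summing
    to 1), e.g. taken from census figures."""
    out = pd.DataFrame({
        "dataset_share": df[group_col].value_counts(normalize=True),
        "population_share": pd.Series(benchmark),
    })
    # Ratios far from 1.0 flag under- or over-represented groups.
    out["ratio"] = out["dataset_share"] / out["population_share"]
    return out
```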

48:02

When we think about socioeconomic bias: making sure that we are enriching healthcare data with socioeconomic data, social determinants of health data, that's a critical part of the work.

48:13

When we think about accessibility: again, compliance with accessibility standards, all the way through the human-centered design aspects of this, really designing and testing systems where this is built in rather than bolted on. Then, language and culture bias.

48:32

This, again, means including experts, linguists, people in positions who can understand the healthcare side of the equation but also come at it from an explanation standpoint: how understandable, how within reach, the complex language of health care itself can be for people, in addition to all the different language barriers that exist today.

49:01

Last but not least, this idea of confirmation bias. This is where training, data auditing, and really making sure that there is diversity built into the decision-making process around these algorithms is key.

49:23

Next page, please.

49:27

And, again, in summary: when we think about the different focus areas and the different mitigation methods,

49:35

this is how we think about it. And this is, again, very much a work in progress; like I said, this is not a one-and-done.

49:42

But there are some good ways of thinking about data diversity and representation.

49:46

From a dataset perspective: how do we make algorithms transparent?

49:54

We have to make algorithms transparent; that was a point that was emphasized previously as well.

49:59

Continuous monitoring and evaluation: easier said than done, but, again, a necessary part of it.

50:06

Having a very ethical view of AI governance within our company, we very much focus on this as a key aspect of it.

50:15

Community involvement, a point made earlier as well, is very critical in this space, along with ongoing education and training.

50:22

So, again, there is huge potential for AI in health care.

50:29

Also, a good degree of caution that we need to apply to make sure that not only do we not perpetuate any existing biases, that we actually de-bias existing ones, but also that, in managing the volume, velocity, and variety of data,

50:45

we don't introduce any net new biases into the system as well.

50:54

And I think that's it for the content portion of the presentation.

51:02

Thank you, Mr. Arora. Thank you very much for giving us an industry perspective. We now have just a few minutes to take some questions from our audience. May all my speakers please come back on. And please forgive us if we don't get to your question; we've had pages of questions submitted even before the session. Our first question I am going to turn to Dr. Hightower. One of the things that seems to come up in several questions is: why should Black, Latino, Asian, Native American, and other communities of color hope to see equality in AI that doesn't really exist in policy? So, the question is, how can people have an impact on health equity and AI at the policy level?

51:53

Yes. So, I think there are two components to that question. One is from the perspective of historically marginalized populations, patients, and community groups: why should we trust AI if our policies have always been broken in the past?

52:11

And I think one thing that needs to be recognized is that studies have shown that marginalized communities actually trust technology, sometimes more so than their interactions with the healthcare system, and yet that technology isn't free of bias. So, we need to make sure that, as we leverage the tools available to us that we do trust, we are making sure they really are trustworthy. And, from a policy perspective, I think there's, again, a great opportunity moving forward to ensure that we have alignment between our policies, our tools, and the people, the humans in the loop.

52:54

And ensuring that we have the diverse representation that engenders trust across that spectrum, whether it's the tool, which is just sort of the embodiment of our policies, our human in the loop, or the policies themselves. Because no one component is going to fix the bias that we have.

53:19

OK, thank you. Doctor Sambamoorthi, this next question came in for you.

53:27

How can we improve access to healthcare for marginalized communities, I guess at the most grassroots level? I mean, from perhaps a more grassroots level than some of the presentations shared.

53:40

Yeah.

53:42

That's a great question and access, as you know, is like a very, very, very broad umbrella term.

53:49

There are so many components involved in that, and I'm an eternally optimistic person, so I'm going to give you positive stories from the field on how

54:02

this can be improved, OK? So I'm going to take three areas, right, because there are so many aspects to it.

54:09

I'm going to first take availability, right. By availability, basically, I mean that people have the requisite resources, whether it is personnel or technology, so that they can meet the needs of marginalized communities.

54:24

So the same Obermeyer group whose article we have all been citing throughout these presentations built an algorithm, actually, that improved access for Black patients, right?

54:36

So in osteoarthritis, pain in Black individuals was differentially perceived. They improved an algorithm so that that difference is reduced, and access to pain treatment improved for Black patients. So this is a success story from the field.

54:59

And the other one that I'm going to talk about is accessibility, right?

55:04

We all need to get what we need, you know, access it without any problems. So, here, I'm going to pick the

55:15

LGBTQ community. We know from the field that there are about seven applications; basically, these seven applications increase access by reducing the mental health issues of this community, providing resources for members of the community who are living in underserved areas, and also facilitating knowing one's HIV status. So, there are seven apps right now in existence that actually improve access, either virtually or through text messages, through chatbots, and so forth.

55:52

And you have lots of examples in terms of virtual assistants, and in terms of drones actually, you know, dropping supplies in underserved areas.

56:04

And the third example, again a positive example, is about language and cultural sensitivity, which is a major problem for marginalized communities.

56:15

Here, I think AI-powered translation services, as well as social determinants of health analysis,

56:23

which everybody has alluded to, can be very, very helpful, so that people are connected to the resources, and also people are given culturally and language-sensitive care.

56:38

So these are success stories from the field of how access can be improved for marginalized communities.

56:48

Thank you. And to answer the questions that are coming in, yes, all of the speaker slides will be available on the ... website, after the event.

56:59

Mr. Arora, one of the questions that was directed toward you is: how do you suggest technology leads talk to non-tech C-suite teams about the trust, safety, reliability, validity, and cost of AI?

57:21

Yeah, a great question.

57:24

I think there are many components to this, but there are three main pillars, if you will. First, it's important to start with the business outcome in mind and be able to talk through that in unembellished, unhyped language, right? There is a lot of hype around AI; we all understand and get that.

57:48

I think it's important to sort of use the hype to create and maintain visibility on the topic, on the power of this, but be practical when we are talking about the business outcomes, so that a cost-benefit analysis can get set up.

58:05

Second, then, is to be transparent about which trust and safety concerns are of paramount importance.

58:14

Which of those are addressed, which of those are pending or still not addressed?

58:19

And what is the risk surface or the risk exposure around that?

58:23

Again, that demonstrates your understanding to the person who asked the question.

58:28

But it also creates energy and momentum around which of these areas we are going to solve for, and in what order, versus areas where we might have to take some bold risks, make some bold moves, to move forward.

58:44

On the cost front, this is still very much a developing area, again, particularly with generative AI, when it comes to cost models.

58:53

Still very early in our journey.

58:56

But again, setting up cost versus benefit becomes clearly important.

59:03

And related to cost: not all cost is financial, right?

59:07

So, understanding and being able to weigh that against other enterprise-level costs, especially if you're trying to do this inside of an enterprise: what does this mean from an implementation perspective?

59:22

What does this mean from a feasibility perspective of the end users to be able to use this?

59:27

And what other tradeoffs might have to be made in order for something like this to become reality, right?

59:34

So I’m happy to go into a lot more detail offline with anyone who would like to.

59:40

But those are the three main things I think about when communicating internally around this and balancing the potential and the hype against making things real.

59:55

Thank you so much. Unfortunately, we are out of time. My great thanks to our extraordinary panel, who compressed extraordinarily complex information into the time we had available today. Thank you to our audience for joining us. Your feedback is very important to us, so a brief survey will pop up on your screen after the event; please do fill it out. And also, please see our other resources and all the slides from this panel, including our previous panel in the AI series, which was "Navigating the Future: How Artificial Intelligence Is Reshaping Healthcare." Again, thank you, everyone, and thank you to our audience. Have a wonderful day.

1:00:41

Thank you. Bye, bye.


Responsible AI in Healthcare – Ensuring Quality Care for All

Maia Hightower, MD, MBA, MPH

Equality AI

AI for Community Design, Data, and Decisions

Usha Sambamoorthi, PhD

University of North Texas Health Science Center at Fort Worth

Health Industry Approaches to Addressing Bias in AI Use

Himanshu Arora

Blue Cross Blue Shield of Massachusetts

This webinar is part of our Artificial Intelligence and Health Series.

 

