How Artificial Intelligence Can Advance Health Equity

How Artificial Intelligence Can Advance Health Equity

Nov 29, 2023
with Rep. Don Beyer, Victoria Knight, Ellie Graeden, Jen Roberts, Ngan MacDonald, Deliya Wesley, and Christopher Trenholm

In September 2023, Mathematica and Congressman Don Beyer’s office hosted an event on Capitol Hill to discuss artificial intelligence (AI) and its implications for health equity.

“The opportunities for AI to be a catalyst of health equity are seemingly limitless,” said Christopher Trenholm, who oversees the Health Unit at Mathematica, during opening remarks. But for all of AI’s potential to improve the lives of Americans, Trenholm said, “we must also naturally acknowledge and speak to its risks, risks that will increase dramatically if we do not act now to assess, govern, and shape the direction of AI in our health care system.”  

This episode of On the Evidence features audio from the September event, anchored by Victoria Knight, a health care policy reporter for Axios who interviewed Congressman Beyer and moderated a subsequent panel with Ellie Graeden, Jen Roberts, Ngan MacDonald, and Deliya Wesley.

  • Beyer represents the 8th Congressional District of Virginia and is a vice chair of both the bipartisan Congressional AI Caucus and an AI working group recently formed by the New Democrat Coalition.
  • Graeden is a professor at the Georgetown University Center for Global Health Science and Security.
  • Roberts is the director of resilient systems at the Advanced Research Projects Agency for Health, also known as ARPA-H.
  • MacDonald is the chief of data operations for the Institute for AI in Medicine at Northwestern University.
  • Wesley is the senior director of health equity at Mathematica.

Listen to the full episode.

On the Evidence * 108 | How Artificial Intelligence Can Advance Health Equity

View transcript

[DELIYA BANDA WESLEY]

Thinking about what we put into the technology and what that data actually represents in terms of those people -- so who's represented, who's not represented, what's missing -- and ensuring that the technology does not actually exacerbate or replicate or create new inequities.

[J.B. WOGAN]

I’m J.B. Wogan from Mathematica and welcome back to On the Evidence.

Occasionally on the show, we take recordings from live events Mathematica has hosted and broadcast them here. In September 2023, Mathematica sponsored an event on the Hill with Congressman Don Beyer’s office to discuss artificial intelligence and health equity. Victoria Knight, a health care policy reporter for the news site, Axios, interviewed Congressman Beyer and moderated a subsequent panel with Ellie Graeden, Jen Roberts, Ngan MacDonald, and Deliya Wesley. Ellie is a professor at the Georgetown University Center for Global Health Science and Security. Jen is a director of resilient systems at the Advanced Research Projects Agency for Health, also known as ARPA-H. Ngan is the chief of data operations for the Institute for AI in Medicine at Northwestern University. And Deliya is a senior director of health equity at Mathematica.

The event began with a few remarks from Chris Trenholm, who oversees health at Mathematica.

[CHRISTOPHER TRENHOLM]

I'm pleased to welcome you to today's event on artificial intelligence, or AI, and health equity. I first want to thank you, Congressman Beyer, for being here. I know it's a busy time of year.

[Laughter]

I know because I used to live on South Capitol Street a long time ago and spent a lot of time up here after college. Thank you to our moderator and our panelists and everyone else for joining us today. Mathematica is a nonpartisan research and data analytics organization with a mission to improve public well-being. Working collaborates with federal and state agencies with nonprofit and commercial health organizations that share our mission. It's this mission, the passion that we have for it in improving public well-being that's why as the opportunities and challenges of AI grow across sectors, we believe it's becoming not just important but vital and urgent to examine how AI can be a catalyst to a more equitable health care system and for more equitable health access and health outcomes for underrepresented and underserved people across this country. The opportunities for AI to be a catalyst of health equity are seemingly limitless, and we cannot overlook or miss the importance of public policy to reach this promise. From the possibilities for more equitable medical care, such as diagnostics that are subject to less implicit or even explicit bias, to more equitable patient information and health data access, to wholesale improvements in how our staggeringly complex health care system can deliver high-quality care on an equitable basis, the coming revolution in AI must be met with frameworks and guidance and funding that fuel the purpose and mission of health equity to improve the lives of Americans.

While we're here to discuss the promise of AI in improving health equity -- and I really am pleased that the framing for today is on the optimistic side of what AI can achieve for us -- we must also naturally acknowledge and speak to its risks, risks that will increase dramatically if we do not act now to assess, govern, and shape the direction of AI in our health care system. For example, while AI tools hold major promise in producing more effective and equitable treatment decisions, there are numerous challenges with the data needed to achieve this aim and myriad ethical, regulatory, and economic issues that must be understood first and addressed as well. At Mathematica, these opportunities and challenges are leading us to ask a simple question that I think is worth framing at least some of our thoughts today. How do we de-risk the AI journey in health care? How do we ensure the AI solutions are evaluated in ways to determine their effectiveness and ensure that they're deployed in ways that enhance rather than inhibit health equity? We know that a better understanding of how AI can improve health equity necessitates cross-sector collaboration, and that's why we're here, glad to convene this group of experts right in front of me and hear their diverse perspectives on this intersection of AI health and health care. Thanks again, all, for being here. With that, I'll now turn it over to our moderator, Axios Health Care Policy Reporter Victoria Knight. Victoria, thank you. Thank you, Congressman.

[DON BEYER]

Thank you, Chris.

[VICTORIA KNIGHT]

Hi, everyone. I'm Victoria. I cover health care in Congress, and I'm very excited to be here to interview and represent Don Beyer this morning. Thank you, Congressman, for being here.

[DON BEYER]

Thanks, Victoria. I feel like I'm in a debate setting. It's really scary.

[Laughter]

[VICTORIA KNIGHT]

Getting flashbacks to last night?

[DON BEYER]

Exactly.

[VICTORIA KNIGHT]

So to start off, I'm just curious. I feel like AI just kind of popped up all of a sudden this year. Congress is talking a lot about it. You're the Vice Chair of the Congressional AI Caucus. What are some of the areas that you guys have been learning about where AI is being used in health care? Maybe you've had people come in and talk to you about it. I feel like we're still learning so much about it. What have you learned so far?

[DON BEYER]

I guess I'm glad that Chris talked about this being on the optimistic side because so much of what we talk about appears on the downside, the scary side, the end-of-the-world side, existential risks. Not to discount them, but the positive opportunities to change the way we live, how long we live, the quality of our health care, is just astonishing, and I'm just really excited to be part of it. I think we've been -- in Congress -- been more on the side of watching what our experts have been doing in health care and noting it and making sure that we are moving strongly in that direction. For example, every day it seems to me there's another article that I read, so I have pockets full of examples. But the easy ones -- breast cancer, radiology reading, the whole notion of using imaging -- we had a couple of MIT professors come down and talk with AI, and they said part one is just pattern recognition, which why I've been excited about it going back many years. The notion of imaging hundreds of thousands or millions of mammograms and figuring out which are real and which aren't -- the short-term statistics are a 3% improvement, which is not a lot. But 3% when you have a million women with breast cancer, that's a lot of lives. My closest friend that I was married to died of pancreatic cancer a year ago, eight weeks between diagnosis and death. Now because of course by the time they diagnosed it, it was way too late. Now they say they can perhaps see this four years ahead of time on the lung cancer stuff. We've been really interested in the diabetic retinopathy -- I guess three different major companies now looking at ways to diagnose that much, much earlier than you would otherwise have. So those with diabetes may be able to preserve their sight for a long time. It just goes on and on, and they're all really exciting. I think we're just at the tip of the iceberg. What we'll be able to do in the years to come is astonishing.

[VICTORIA KNIGHT]

That's so interesting. Those are really interesting examples. Have you had people come in and talk to you guys about these different new research innovations, things like that? How are you learning about them?

[DON BEYER]

Well, my wonderful legislative director's mother, father, and brother are all doctors. Somehow she became a political scientist, but as a consequence, we end up with lots of health care folks in. Most recently, I talked to Dr. Shameema Sikder at Johns Hopkins, sent by a friend, who's an ophthalmological surgeon. She's using -- Interesting, I thought she'd be using AI to figure out who needed surgery or this or that. Instead, she's been using it to help existing eye surgeons to diagnose essentially the quality of the work that they were doing -- not a different way to do it, but just was the scalpel in the right place, did they do this in the right order -- and found a really significant increase in the positive outcomes of the eye surgery, which is a very cool thing having had three eye surgeries, so yeah. We're eager to talk to ever more people about it.

[VICTORIA KNIGHT]

I know we talked about at the beginning of this health equity is the center of this discussion today. Do you see potential for AI to help advance health equity, and if so, what are some of those examples?

[DON BEYER]

I think there's a yin/yang on that too. On the one hand, every doctor I know complains about how much time they spend doing paperwork. And nurses -- One of my daughters decided she was going to go try to be a nurse. After a week on the floor, she said, "No way, I can do paperwork someplace else."

[VICTORIA KNIGHT]

Yeah.

[DON BEYER]

So the notion that if AI can change that, that would be dramatic and really helpful. My friend Bill Foster, who's our only PhD physicist in Congress, has been working for years on a digital health care signature that you would have at birth and keep your entire life, so that those records would go, which would be wonderful for equity for everybody including those people in the underserved medical areas who may have only seen a doctor once in their life but needed to see him again at age 40. They'd have the record. But on the downside, the fear is always going to be that people living in the wealthier ZIP Codes are going to get it, and people in rural America or inner-city America are not going to get it. That as with so many breakthroughs, those with money and education are most likely going to have the first and longest access to the brand-new technologies. On the other hand, to the extent that it drives down the cost or improves the quality of the diagnosis, we've got an important medical diagnosis bill introduced that's based on -- I think it's 10,000 hospital deaths a year from misdiagnosis, and the cost is in the billions of dollars. To the extent that AI can dramatically reduce misdiagnosis, back to Atul Gawande's Checklist Manifesto, imagine the Checklist Manifesto on AI. It could be really, really helpful in terms of quality.

[VICTORIA KNIGHT]

As I mentioned at the beginning, you're the Vice Chair of the Congressional AI Caucus. I know you said at the beginning you guys are mostly in the watching phase right now, but have you discussed regulating aspects of AI? What does that kind of look like?

[DON BEYER]

Yeah, the watching and the learning phase -- because we've had lots of folks come talk to us -- Sam Altman, Jack Clark from Anthropic came. I think that that was one of the biggest meetings I've been to, 150 folks came. The folks from MIT came down, et cetera -- lots of stuff going on, a lot on the staff level but really helpful. I think last I heard, this was two weeks ago, there were 100 AI bills introduced in the House all over the place. We have yet really to focus on which are the three or four that make the biggest difference. Part of the New Dems, which is one of the internal caucuses, literally right now trying to go through the 100 and say if we have to endorse five and really push to get these five, which are they? Kevin McCarthy had convened an informal group of Dems and Republicans in his office basically to do the same thing. That's the nice part -- is that it's really bipartisan so far. The caucus itself is led by Mike McCaul, a Republican; Jay Obernolte, a Republican; Anna Eshoo, a Democrat; and me. We're going to try to keep it that way as best we can.

[VICTORIA KNIGHT]

That's really interesting. When I was talking to some of the researchers that use AI prepping for this, they were curious if there's a lot of talk about regulating the AI itself. But they were saying maybe it should be more focused on the data or the outcome that comes out of the AI. Do you have thoughts about that?

[DON BEYER]

Yeah, and it's both input and output -- the garbage in/garbage out is always part of it. On the input side, Grace Brightbill, who is our lead AI Staff Director, she worked close with Anna Eshoo's team on the Create AI Act. I'm never sure if it's the AI Create or Create AI, but it's the whole notion of let's use federal resources, including the labs and National Science Foundation, et cetera, to build huge, curated datasets that are not full of junk -- like as we see when ChatGPT downloads six trillion words from the Internet. There's a lot of junk in there, which is why you get hallucinations and everything else. So let's make sure that the researchers, the academicians, the people that use AI, have credible data to train things on. That's part of the regulation. On the other side, we pay a lot of attention that many times with the European Parliament on the EU AI Act, which we mostly -- well, they disagree. But most of the feedback here is just too restrictive. You've got to license every algorithm and get permission to do this and permission to do that. We don't want to shut down the creativity and the innovation of the American society, but we do have to deal with the risks and the downside. The bunch of different ideas -- the one that comes up most often is literally creating a new federal agency that would just regulate, as we did with Consumer Financial Protection or whatever, or like IAEA on nuclear waste. There are a lot of people that push back against it too. Do we need another huge bureaucracy? Can it be done agency by agency? People like the FTC say they already have the tools to punish misuses. Victoria, what actually gets to your basic question is can you regulate the hardware and the software itself? The thought is probably not.

[VICTORIA KNIGHT]

Okay.

[DON BEYER]

You can't un-invent algorithms or science or computer science or mathematics, but what you can do is say that the unacceptable uses of it have to be restricted/regulated/punished. Interesting, there's a cool little firm in Arlington called "Trustable AI" that, among other things, is building a taxonomy of the risks, which will be really fun over time to see what actually are the downsides. Because that's probably the number one question here in Congress on the AI -- is if we're going to regulate, what are we regulating to prevent? What are the risks that we're trying to address? We don't have a good handle on that yet, other than science fiction.

[VICTORIA KNIGHT]

What about President Biden released the AI Bill of Rights earlier this year. Are you guys using that to guide you at all? It's pretty broad and not very specific.

[DON BEYER]

It's really hard to turn that into legislation. Much more useful has been the NIST framework. Not to diss the President's stuff, but that's more aspirational. The NIST framework is tangible. One of the circulating ideas we're working hard on as a first step would be to say Federal Government procurement -- the contracts that we list -- to the extent that they involve AI, we insist that the NIST framework be observed and obeyed and stuff, which gets back to regulation. You may then be able to say Department of Defense, when you're contracting with all the defense contractors, do you insist they use the NIST framework? And you have the compliance officers within DoD to do that. By the way, one of the pushbacks we're always going to get in a society based on freedom is why are you telling me I can't do this? When you're using federal dollars, you can say that. So a good place to start would be with the federal contractors.

[VICTORIA KNIGHT]

Gotcha, there's also a lot of talk within AI as you talked about -- the data may not be good going in. Also, just human error itself in building the AI because it's people that are making the algorithms, and they may have their own biases and things like that. Have you guys talked at all about that, about ways to prevent that from happening?

[DON BEYER]

We haven't really. Again, the general consensus seems to be that the laws already exist to punish people or to regulate people that are misusing it, whether intentionally or unintentionally. One of my favorite examples is I served on Ways and Means, and we spent $800 billion -- with a "B" -- on unemployment insurance compensation during the pandemic -- all those folks that lost their jobs. In a lot of states -- Florida, where they would get $50 a week -- there wasn't enough for people to live on. So we put $800 billion into it. The last number we've seen is that $60 billion of it was stolen, which is only 8%, but it's still a lot, a lot of money, $60 billion. If we had AI and all the states were using much more sophisticated systems, maybe we would have lost $5 billion or $2 billion. But on the other hand, the fraud search are going to be smart about using AI to do fraud. There'll be a competition there. But we didn't need a new law to do that. In fact, we're going after all of them right now for the fraud.

[VICTORIA KNIGHT]

I know I want to circle back to one of the things you said before. You guys are looking at all these different AI bills that have been introduced. You're looking at endorsing or maybe promoting a few of them. Is there anything in common those have? Are any of them health-related at all?

[DON BEYER]

I don't -- I don't really know. I mean, there are a couple of health care-related bills. Anna Eshoo has been leaning on it. But they tend to be more on the science side rather than regulation side. I think one of the most interesting pieces, as you know we've struggled enormously with our battle with Big Pharma. On the one hand, we are all living a lot longer, a lot healthier -- you know, better living through chemistry. On the other hand, there are lots of issues with Pharma. But just the whole notion with AlphaFold that in a couple weeks they were able to determine the folding structure of virtually every protein known to man and what that will mean for pharmaceutical stuff in the years to come. It's just so incredibly exciting to think that virtually every illness that we have we'll be able to treat -- whether it's CRISPR or Pharma or the like. So it's very exciting. So a lot of that is just trying to make sure that we continue to invest in it and encourage it.

[VICTORIA KNIGHT]

Right, and I guess are you trying to ensure that health equity is something that's considered within all of these things?

[DON BEYER]

Yeah, very much so. It's interesting that I've only been here nine years, but even nine years ago when we talked about climate change, there was rarely an equity discussion. It was, "The world's getting hotter." But we weren't thinking about how much worse it is for people that are living in the inner city or people that don't have air conditioning or what it means to the bottom half of the Indian subcontinent. Now environmental equity shows up everywhere. It's, I think, only beginning to be talked in terms of AI equity, but it's very important. Health equity, again, that has really risen also over the last couple of years.

[VICTORIA KNIGHT]

Right, now I want to be mindful on time. We're supposed to be done soon. Okay, yes, one more question.

[DON BEYER]

You've got a good panel coming up.

[VICTORIA KNIGHT]

I know, thank you so much, Congressman. One last fun question -- there's this great Washington Post feature on you going back to school. You're working your way to get a master's in machine learning. So what is the most valuable thing you've learned about AI so far in your classes?

[DON BEYER]

I just came from lab, by the way.

[VICTORIA KNIGHT]

Oh, did you?

[DON BEYER]

I had to take the express lane on I-66 all the way from (inaudible). It was really fun.

[Laughter]

I'm trying to think -- the most important thing? Well first of all, it's fun to know at 73 you can still learn, which is great. I actually wore a tie this morning because I was coming to see Victoria and Mathematica. I thought the kids were going to like give me -- but they were all very nice to me still, which is really good. The midterm is next Wednesday.

[VICTORIA KNIGHT]

Do they know you're a Congressman?

[DON BEYER]

I don't think they do.

[Laughter]

I think they all look at me a little funny, but they're all nice to me -- because I'm older than their parents. It is wonderful fun. I really enjoy it. It's so much more productive than doing a Sudoku or Wordle. There's a lot of ways you can keep your mind occupied, but this way, I'm hoping that in the course of my studies, I'll actually be able to use it in a constructive way. Obviously, it puts me inadvertently in the middle of the policy debate here, which is also really meaningful.

[VICTORIA KNIGHT]

Right. On that note, we're going to end it. Thank you so much.

[DON BEYER]

Thank you, Victoria, nice to see you.

[Applause]

[J.B. WOGAN]

Hi listeners. Pardon the interruption. At this point in the event, Congressman Beyer leaves the stage and the aforementioned panel joins Victoria for the second part of the conversation. Throughout the second part of this episode, I’ll be jumping in occasionally to clarify who is about to speak because it’s not always clear without the visual cues you would get in person or on video.

[VICTORIA KNIGHT]

Thank you so much, everyone, for being here. We're going to continue our great discussion on how AI can help advance health equity. I would love to hear from you guys now that we heard from a member of Congress. You guys all work in different areas of health, health equity. He was talking about how they're thinking about regulation. What does that kind of bring to your brains now that he is here in Congress? He helps shape the framework. He helps shape how they may legislate. Were there any things that you want members of Congress to know or that you think maybe they need to think about as they're thinking about this huge, huge area that may be regulated?

[J.B. WOGAN]

Deliya Wesley of Mathematica volunteers to kick off the group in answering Victoria’s first question.

[DELIYA BANDA WESLEY]

Sure, I think even before we go there, it's sort of important for us to think about and sort of talk about how we're defining the bigger issue here, which is health equity, and thinking about health equity and AI. But what we mean when we're saying "health equity," and that's really thinking about health equity in process here and also health equity as an outcome. So health equity and outcome as defined as a state where every person has the opportunity to achieve their optimal -- a fair and just opportunity to reach their optimal health, regardless of who they are or where they're born, how they identify. None of those sociodemographic factors or features should be the reason for inequitable outcomes. So that's where we want to be. Then, thinking about health equity in process and with regard to AI, it's thinking about what we put into the technology and what that data actually represents in terms of those people -- so who's represented, who's not represented, what's missing -- and ensuring that the technology does not actually exacerbate or replicate or create new inequities. So I think that's a good place for us to start in thinking about what that means for regulation and at what point we're talking about regulation. I'd invite my co-panelists to talk about sort of similarly some grounding in terms of what do we even mean by AI before we sort of talk about the regulation aspect of it.

[J.B. WOGAN]

Jennifer Roberts of the Advanced Research Projects Agency for Health, or ARPA-H, volunteers to follow Deliya’s initial framing.

[JENNIFER ROBERTS]

Yeah, great points. I think it's important, as the Congressman mentioned, to think about the different types of AI because if we talk about something that is new and upcoming, like the large language models that are clearly showing a lot of promise but not all that well-understood yet, that's a different type of conversation than something that is more on the pattern recognition or the classification side where we have images and we're trying to say, okay, which of these images potentially shows evidence of cancer or something like that. That's a technology that's been around for much longer. It's also something where there's concrete decisions and there's a much clearer way to say how is this technology helping to support improvements in health outcomes. So I think looking at that different class of technology is going to be important and also the maturity of the technologies that might be used for performance monitoring. Because in some places, there's the potential to build something that is at least semi-automated to assist with the performance monitoring pieces. In other places, we really have a lot of work to do as a technical community to figure out how would we even do that. Because it's important to note that when we're talking about this type of technology, it's something that can scale across the health ecosystem very easily, and it's not necessarily the case that we have staff with free cycles to be able to see whether things are working. It would be much better to have a solution that scales to measure how the technology is affecting health outcomes.

[J.B. WOGAN]

Next you’ll hear from Ngan MacDonald of Institute for AI in Medicine at Northwestern University …

[NGAN MACDONALD]

I think that one of the important pieces in legislation is what the Congressman talked about, which is the actual collection of data. I think he talked about input, but the collection of data right now we know from a health equity lens that there are people who are just not coming to the health care system. So they're not coming to the health care system. No matter how well we curate the data, their data is not represented. So from a legislation standpoint, and more actually from a funding standpoint, it would be great. There are actually efforts, like the All of Us research, that is attempting to get the data and collect it from where people live and represent the full breadth of the human experience. I think the other piece of that is the Congressman talked a little about the people, the programmers, who are creating the algorithms. We need a concerted effort to bring that full spectrum of the population into creating the data, collecting the data, labeling the data, and actually creating the algorithms. So Northwestern is an educational institution, so our mission is to try to make that accessible. So I think from a legislative standpoint, it's not necessarily profitable to go and educate people to be able to get to the data. So that's where I think funding from a federal standpoint can come in.

[J.B. WOGAN]

Ellie Graeden from Georgetown University’s Center for Global Health Science and Security …

[ELLIE GRAEDEN]

Great points, and I'll try to sort of pull the pieces together here a little bit. I think that we've heard a lot about this idea that we need to get really good at what the data are, feed them into equitable, sane, rational, trustworthy AI systems, which are really just -- it's another class of a tool to process data. We've been using tools to process data for many, many years and decades, and this has been the latest new tool. Those tools are used in the world to drive outcomes, and those outcomes are ultimately actually what we care about. That's true in any sort of governance or policy environment. We regulate what happens in the world. When we think about even something as simple as a pedestrian walking down the street, what we regulate is that you can't hit the pedestrian. I don't care whether you're using a horse and a buggy or if you're using the latest Tesla. You can't hit the pedestrian. I think it's a very simple but, I think, useful framing here that AI is just one more tool in the bucket. It's one more way that we need to address when we're setting up outcome-oriented regulation, but it really just flows back down through the whole chain. If we structure it in that sort of outcome-focused manner, those outcomes can also -- those outcomes are tied to the rights we care about. They're tied to things like privacy, but they're also tied to the right to health, the right to free commerce, the right to consumer protection. Because we have, especially in the EU, very much focused on really regulating the data themselves or then very specifically trying to regulate one specific type of tool, we actually end up privileging specific rights -- one right over others. So in this particular case, we tend to privilege privacy over, say, health.

Well, that means the practice is that if you have data that look like health data, they may come out of EHRs, but they may also sit in a 2003 SQL database looking at vaccine registries in a state. What we actually care about there is whether your five-year-old is vaccinated before kindergarten, but it turns out a lot of those were written so that they start with your name. They're keyed on the individual. So the problem is that we need to be able to use that for public health purposes to really be able to privilege health and privilege health of the full population. This is where we get to the social determinants component. We can't take care of people if we don't know about them and they're not represented in the data, but we can't access those data if it's keyed on the individual because then we're looking at their name, and that's a privacy -- and rightly so -- a privacy concern for those people. So I think the idea here is to say, okay, what is the outcome we need in this specific context for this specific outcome being implemented by these specific people, and then map that back down to the tools and the sort of source information that goes into it. If we structure it that way, I think it gives us a lot more flexibility and, frankly, innovative potential all along that sort of value chain to get up to those outcomes we really care about. So that's where I start to think about it at any rate.

[VICTORIA KNIGHT]

Okay, so those are some really interesting points. I'm curious about -- I know, Ngan, you said the people that are gathering the data or putting it into the tools and things like that -- they need to also be diverse as well. You mentioned maybe federal grants or something like that. Are there other ideas that you guys have in order to kind of diversify who is working on these AI projects?

[NGAN MACDONALD]

I think that that's actually a pretty fundamental one. We've seen community groups in Chicago, and we've been part of those, where you're basically training people to go out in their own community. If somebody looks like you and is actually cataloging the resources that are available and the people that are there, then what you're doing is you're normalizing data collection and access to health care. So I don't think that can be understated to create more equity.

[J.B. WOGAN]

Deliya Wesley of Mathematica …

[DELIYA BANDA WESLEY]

I would agree and really double down on that because I think the implicit piece that Ngan is getting to is we can talk about regulation, but the other thing that has to be front row and center as we're thinking about the development of these tools and these technologies is also what we need to be doing in parallel to build trust with communities. Because the same communities that are more likely to be adversely affected by some of the outcomes of these tools and technologies are the same communities that really struggle with trust with the health system, with technology at large. So investing in initiatives, in interventions, to actually rebuild the trust that has been historically broken and that lies in opportunities like what you described at Northwestern but then also just really looking at what these population subgroups are and tailoring our approach to rebuilding trust and diversifying the workforces is part of that as well. So I think that trust piece, and partnering with communities in order to do that, I think, is really key.

[VICTORIA KNIGHT]

Yes, and Congressman Beyer and I discussed a couple of the frameworks that are out there -- President Biden's AI Bill of Rights or the NIST framework. Do you think there are things in those that need to be highlighted or emphasized to make sure that health equity is centered within all these decisions that are being made around AI?

[J.B. WOGAN]

Jennifer Roberts of ARPA-H …

[JENNIFER ROBERTS]

I think when we think about that part of the conversation, there were also more general points about federal contracting and that being a mechanism where we can assist with equity. There, just one of the things that ARPA-H is doing -- so I guess pausing for a second, ARPA-H's new start-up organization inside of the government, we are modeled after DARPA, the organization that started the Internet, GPS, self-driving cars, and foundations behind the Moderna vaccine. Effectively, we are taking the model that led to those breakthroughs and bringing it over into health. At the end of the day, we're interested in breakthrough technology research that helps accelerate the improvement of health outcomes for everyone. So in that, there's an emphasis on access and affordability. So how do we make it so that if we have new types of technologies, new types of health interventions, they're getting in the hands of the patient populations that need them? So through that federal research funding, one of the things that we're in the process of doing is setting up what we're calling ARPANET-H. It's meant to be a network that extends into all 50 states through things like organizations like community health centers, other types of organizations that already exist, that are in the communities, have that trust. So that way when we are thinking about the next generation of health breakthroughs -- new therapies, et cetera -- we can be making sure that from the time that we are testing out those technologies and doing things like clinical trials that we are bringing them to a patient population that's representative of the population that's affected by a particular disease. That's something that can be framed in terms of solicitations that go out for research in that area. It can also be written into contracts saying if you are going to do a clinical trial for this great new way of -- I don't know -- performing cancer surgery so that the chance of needing two surgeries instead of one goes way down, let's test that in folks across different geographies, across different demographics, that represent the folks that are actually affected.

[J.B. WOGAN]

Ellie Graeden from Georgetown University’s Center for Global Health Science and Security …

[ELLIE GRAEDEN]

Picking up on that thread, what Jen's describing is fundamentally an engineering process. We need to define these outcomes, understand what the engineering spec is, the specifications that go along with that that are really just the requirements of how the system works -- who they're for, whose needs they are meeting, and making sure that they are equitably distributed across those outcomes. As engineering specs, this is where I think we get into things like the NIST framework, where NIST has done a really nice job of creating sort of a tractable set of -- sort of checklist to the Checklist Manifesto idea from the Congressman. But they've actually laid out the engineering tooling that aligns and the current tools that are available to implement these types of engineering methods and build AI systems that you can measure against things like equitable outcomes. We think about a lot of these systems. We sort of forget that if we're going to say we want them to be equitable, we have to define what that means and sort of make it measurable and quantifiable so we can assess that equity. Those are all, again, outputs of engineering systems that we as engineers deeply understand. They need to then be reflected in the legislation itself, and that gets us all the way back up to the AI Bill of Rights, where really what we want to be doing is somehow meeting in the middle. The role of the regulation in my mind is to align those engineering specs coming out of, say, the NIST framework and making sure that we are meeting the rights laid out in the AI Bill of Rights. The missing piece there -- the sort of missing layer -- is the regulation that ties them together, and that's really what we're talking about here. Again, I think as we focus on the outcomes, we make them measurable so that you can build systems against them and then measure them against rights protection in meaningful ways.

[VICTORIA KNIGHT]

I know we talked about being diverse in the workforce, but there's still the possibility that people who are working on these systems can make errors or still may be biased in some way, and then that can affect in some way the outcome of the product or the tool itself that you're building. So is that something you guys are thinking about, and are there ways to possibly combat that as well?

[J.B. WOGAN]

Ngan MacDonald of the Institute for AI in Medicine at Northwestern …

[NGAN MACDONALD]

What's interesting is AI -- like we can actually use the AI to help audit some of those outcomes. Because part of the problem is it's such a huge amount of information and data. So in the past, we've kind of used these very manual ways of auditing. What we can do is actually use like the AI to recognize that there are patterns and how those patterns are affecting -- like I always come back to the data, because we should know what a certain set of data is, and then we can train the machine to look for that pattern. So at the end of the day, that's like really what AI is -- a pattern recognition algorithm.

[J.B. WOGAN]

Ellie Graeden from Georgetown University’s Center for Global Health Science and Security …

[ELLIE GRAEDEN]

Yeah, there's a great example of this actually from the global community where there was a study done that looked at the primary drivers of when we identify zoonotic disease spillover events. This is when an animal population has a disease and it crosses over into humans, and you see that disease in humans for the first time. What they found was that the single biggest driver for where we see or where we identify spillover events is distance to a health care provider. It's all across the African continent but also across Southeast Asia where you get a lot of population centers and large populations in the rural areas that don't have that ready access. Now what that tells us is not that you get sicker when you go to the doctor, one possible read on that, it's that you don't see a disease you don't diagnose -- meaning you don't see it in the data. So to Ngan's point, we can actually use maps of where we see the data to identify not where there aren't data but where we aren't looking yet. That applies just as quickly to the U. S. as it does in the global community, and it's a really valuable way to use these same tools to start driving toward these direct equity outcomes in addition to the sort of way that we think about using them anyway.

[J.B. WOGAN]

Deliya Wesley from Mathematica …

[DELIYA BANDA WESLEY]

And I think, Jen, the piece that I'm sort of focusing on is how we're defining those outcomes and what the opportunity is with AI, and involving diverse perspectives is what we're actually looking for. What does that outcome -- equity work is values work, right? So what are we valuing and what are we choosing to measure? What are those signals? Bringing diverse voices to the table and people from communities that are most affected to define what we should actually be looking for as outcomes at endpoints, and then that's a real opportunity for having this conversation as we're developing these tools.

[J.B. WOGAN]

Ngan MacDonald of the Institute for AI in Medicine at Northwestern …

[NGAN MACDONALD]

I think, Deliya, you hit upon another point that I was thinking about. Another book of Atul Gawande's is being mortal and talking about sort of the end of life and what do we value at that end of life. I think that's really different for different groups of people. So when we talk about outcomes, we in health care have generally talked about how do we keep people alive longer. If we don't measure the quality of life and how do different groups think about their quality of life at the end of life and make -- I know we're not in the legislation for health care in general. But I do think that we need to start talking about end of life as well because if we don't talk about those outcomes, the AI can't actually help us to measure.

[VICTORIA KNIGHT]

I would just love to hear a little bit about -- we're talking about it somewhat in the abstract, but I'd love to hear all about what maybe things you guys are working on, specifically with AI and health care and how you're thinking about health equity within the things that you're working on and what potentials you see AI for health care or AI in health care.

[J.B. WOGAN]

Jennifer Roberts of ARPA-H …

[JENNIFER ROBERTS]

Yeah, so a couple of the things -- as I mentioned, ARPA-H is a brand-new organization. We're building a portfolio, so we're very interested in pulling in more folks with great ideas in this area. A couple things that we already have in motion are we have an effort called the "Biomedical Data Fabric," and one of the core components there is want to be able to pull together research data from many different sources -- think the nice investments that NIH has made over the course of decades, as well as other federally-funded research -- and be able to start to have techniques that are automatically tracking how representative the groups that have participated in research to date are to the patient populations that are affected by different diseases because that should help us then inform where might our future investments be. We anticipate that sometime maybe fall/early winter we'll also be investing in areas in order to enhance things like clinical trial readiness -- so looking at things like how do we make it so that clinical trials move closer to the point of care and we're really reducing the burden on folks. So that instead of needing to drive two-three hours someplace to participate in a clinical trial, can we instead make it so that folks can do that close to where they are so that we can get, again, more representation from folks that actually are going to be affected by different areas. That will leverage the ARPANET-H network that's going to go into these 50 states and leverage a lot of segue work that our partners at NIH and BARDA have already invested in, as well as organizations like community health centers, hospital networks, et cetera, so that we're really leveraging the infrastructure across the country.

[J.B. WOGAN]

Ngan MacDonald of Institute for AI in Medicine at Northwestern University…

[NGAN MACDONALD]

I think some of the things that we're working on at Northwestern -- because we are both an academic medical center and a medical school, we're piloting being able to use AI to help doctors document their visits. So that's something that we're doing on the health center side. But then on the medical education side, what we're doing is figuring out as we train doctors, what is the information we need them to know about from a digital health and data science standpoint. Because what we know is that that's where it's going to be incorporated into their practice of health, and about I would say 90% of our medical students -- they just want to be doctors. So they're like, "Well, why do I need to know about this?" So we're kind of testing and assessing how much do we need to be able to put into the curriculum so that when they leave school they're ready for what's coming. So I think we're one of the few to actually incorporate this type of education across all four years.

[J.B. WOGAN]

Ellie Graeden from Georgetown University’s Center for Global Health Science and Security …

[ELLIE GRAEDEN]

Then a couple of quick examples from our work that I think highlight some of the really important things being pulled forward here. One is we're actually doing a large-scale effort with LAPD to look at racial equity in policing. That's relevant because, one, it's a major public health issue. So this is part of the reason I'm involved, but it's also at that sort of intersection with policy and how you implement good policy. What we're doing there, and the piece that I think matters so much here, is we're collecting a really broad range of people to do the hand annotations. So it's something that we tend to forget or even maybe not know -- is the degree to which data come from people. They're created by people. It's not that they're just -- data aren't just about people, they're actually created by people. So when we build machine-learning models to evaluate language or interactions -- which is what we're looking at in this particular case, communication -- we want to make sure that we're understanding the fact that different people have different perceptions of what an interaction looks like. So as they annotate a video, they're going to see -- quite literally see -- different things, and they're going to describe different types of interactions, and they're going to describe that type of communication differently. So capturing everybody from retired police officers all the way out to a 17-year-old Latino man who's been pulled over one too many times, we want to capture all of those voices in their description of these interactions for the video so that when we're training the model, we're including that diversity of perspectives. That then is also reflected -- we're actually doing another study, to come back to this disease spillover risk in animal populations globally -- we're actually then looking at when you get the spillover event, who are these populations that they're infecting. That's a sort of traditional epidemiological model like we saw a lot of during COVID. We're then looking at the people who live in those areas. What are their demographic characteristics? But really importantly, what are their immune characteristics? That includes everything from whether they are HIV positive all the way up to whether they've been vaccinated. Both of those are mitigating factors for immune status. So we're actually linking that up to a bunch of survey data asking people what their vaccine hesitancy status is.

We haven't actually traditionally built models that can go end-to-end on that type of system, but what this allows us to do is have these conversations about whether people want to get vaccinated, feel comfortable getting vaccinated, which is actually usually about whether they see the benefit of vaccination for themselves and really understand the degree to which it will shift their risk profile. Then, mapping that all the way back to, say, the disease spillover event in the wild. When we can start to put those pieces together, people start to better understand their own risk profile and why that's going to matter. So it's pulling forward those different perceptions. It's understanding that at the core of this, the data are from the people and of the people, so it needs to reflect all of them. So how do we get creative about how we do that and really represent all of those humans in the data but then also, obviously, in the artificial intelligence systems that we build on top of them?

[VICTORIA KNIGHT]

We're getting close to time, so I want to ask one more question. I'd love to hear from everyone and kind of leave it on a positive note. What are you most excited about in regards to AI in health care and advancing health equity? What do you see as the opportunities moving forward, and how are you thinking about the future? Deliya?

[DELIYA BANDA WESLEY]

Let's start and I think bring it back to one of the things that were in the title for today's panel but maybe we didn't talk as much about, which is social determinants of health -- so all of the factors that are outside of the clinic and clinical records, clinical data, that actually are the drivers of individual health outcomes and community health outcomes. So I think one of the things that I know at Mathematica we're particularly excited about is what the opportunities are to bring those different and often very disparate data sources together -- so thinking about the data that actually lives in different sectors and is coming from housing and the prison system, all of the data that drive health outcomes up to 80% but have not been brought together in ways that we can make meaningful decisions about treatment and interventions. So I think that's one of the many areas that we're most excited about and the opportunity for AI to do that in a way that humans have not been able to, to date.

[VICTORIA KNIGHT]

So interesting. Jen?

[JENNIFER ROBERTS]

I think what stands out to me is the opportunity for AI to help democratize access to high-quality care. I think that if we really invest and move in productive directions there, we can make it so that we are reducing the burden on clinicians freeing up their time to spend more time interacting with patients, perhaps making it so that the communication between patient and clinicians is more robust -- more robust across health literacy levels, more robust languages -- and also making it so that we can do effective resource allocation so that we're making sure that we can detect when there's a gap in care and coming up with ways in order to address that.

[J.B. WOGAN]

Ngan MacDonald of the Institute for AI in Medicine at Northwestern …

[NGAN MACDONALD]

I think one of the key things that I see is that as the large language models have kind of been dropped upon us, I think that there is much more recognition that AI is a team sport. So when you start to recognize that there's public/private partnerships and then there's community partnerships, it creates a groundswell that is kind of like an echo chamber, I guess. As you see more people that look diverse becoming involved in this, I think that it will create more opportunities for involvement going forward. So I think about that in the education space and the data collection space but also in conversational space, in the AI space, where we think about how does AI augment the experience between a doctor and a patient, like Jen said. I think about my mom who is 83, and she doesn't speak English very well. When she goes to her doctor and her doctor says, "You need to reduce your sodium intake," she's like, "Okay." Then, she can have a conversation with her own personal AI assistant that says, "Hey, doctor says, that but what it means that you need to use less salt." Then she says, "Well, I don't use salt." The AI assistant, knowing her, would say, "Maybe a little less fish sauce and soy sauce in your cooking," and she's like, "Ah, that makes sense." So those are the types of interactions that we can have at a personal level, and AI has the opportunity to hyper personalize the interaction that people have with health care. When it gets that personal, then you build the trust in the community.

[ELLIE GRAEDEN]

I love that. That's spot on.

[Laughter]

[J.B. WOGAN]

Ellie Graeden from Georgetown University’s Center for Global Health Science and Security …

[ELLIE GRAEDEN]

I love that. It’s spot on. Because it is -- what I was going to say is very much tied to that. What I'm excited about is that we're actually here having this conversation. There are four Congress people sitting together talking about these issues and making it happen. I feel a little bit like the first time I heard a news anchor early in COVID talk about R values on the nightly news. I thought, wait, I'm sorry, what? We're talking about modeling parameters on CNN? I never -- I never thought I would see the day! But I feel very similarly now as we're having these conversations. The practical reality is that we've all been using systems that use machine learning algorithms for years. We just don't know it. We don't' recognize them as such. We don't understand that even just the sensors in our car that are telling us that we're about to hit something when we're parking are built on these same principles. So I'm really pleased that now we have the language to talk about it, that we're starting to gain the ability to have these conversations because it's expanding really the diversity of the community who's participating in those conversation. When it was just a bunch of people who all looked the same sitting in the ivory tower, it's not a very diverse conversation sort of by definition. We're now to the point where we are having this conversation in a much broader public, and we're talking about the ways that these tools can help everyone -- not just the academics or not just the tech titans. That, I think, is what I'm most excited about...is it's a much more interesting conversation now.

[DELIYA BANDA WESLEY]

And we're not talking about equity after, it's part and parcel. Right.

[CHRISTOPHER TRENHOLM]

Can I just interrupt to say how happy I feel after those four answers?

[Laughter]

[VICTORIA KNIGHT]

I also was going to say I appreciate it's a panel of all women also, which is great.

[Laughter]

[ELLIE GRAEDEN]

On a tech topic!

[VICTORIA KNIGHT]

With that, I think that's my end of questions. Do you want to say anything, or do I need to say anything?

[CHRISTOPHER TRENHOLM]

I think I just simply want to say what an absolute pleasure it's been to listen to you guys describe this future. It's incredibly exciting. The Congressman was remarkable in terms of his focus and interest and skills, and I want to thank everybody working with him for the work that's been shown. I'm really proud to be here, so thank you.

[Applause]

[J.B. WOGAN]

Thanks to Victoria Knight, a health care reporter for Axios, who interviewed Congressman Don Beyer and moderated the panel with Ellie Graeden, Jen Roberts, Ngan MacDonald, and Deliya Banda Wesley. You also heard opening and closing remarks from Chris Trenholm, who oversees health at Mathematica. As always, thank you for listening to On the Evidence, the Mathematica podcast. If you liked this episode, please consider subscribing. We’re on YouTube, Apple Podcasts, Spotify, as well as other podcasting platforms. To learn more about the show, visit us at Mathematica.org/OnTheEvidence.

Show notes

Learn more about how Mathematica’s experts harness vast data, advanced analytics, and deep health care policy experience to help organizations make sense of real-world data in a way that enables exploration and innovation.

About the Author

J.B. Wogan

J.B. Wogan

Senior Strategic Communications Specialist
View More by this Author