May 21, 2025

Trust unlocks AI’s potential in health care

Artificial intelligence can improve health care by reducing administrative tasks, supporting diagnoses, and helping clinicians develop personalized treatment plans.

We build confidence in health AI by communicating clearly about it and using it to help improve our members’ health while ensuring their safety and privacy.

By Daniel Yang, MD, Vice President, Artificial Intelligence and Emerging Technologies


Our health care system faces increasing pressures.

There’s a supply-demand mismatch: Demand for care outpaces supply, largely because people are living longer, often while managing multiple chronic illnesses.

At the same time, patients expect more from health care. They want services to be as accessible, immediate, and efficient as the digital tools they use every day.

To help solve the issue, the U.S. needs to quickly increase its health care workforce. But this solution has proven challenging. Fewer workers are entering the health care field. And training and licensure take a long time.

A shortage of health care workers has resulted in:

  • Long wait times for patients
  • Burnout among health care professionals
  • High health care costs, straining both patients and providers

AI’s potential to improve care

Artificial intelligence, or AI, offers real opportunities to address these challenges and transform every aspect of health care.

AI can help health care professionals deliver more effective and efficient care by:

  • Reducing time spent on administrative tasks, such as paperwork and scheduling
  • Assisting clinicians in making accurate and timely diagnoses
  • Helping clinicians develop personalized treatment plans tailored to individual patients

Unlocking AI’s full potential depends on more than just innovation. It also depends on patients’ and providers’ ability to trust that these tools are safe, high-quality, and reliable.

Why trust is essential

Surveys show that about 60% of Americans feel uneasy about their health care providers using AI. Yet many of these same people use AI in their daily lives for activities like meal planning, summarizing information, and even drafting emails. The difference is what’s at stake.

Trust in health care is built carefully over time. It increases through reliability, evidence-based practices, and clear communication.

Consider the use of general anesthesia, a common but high-risk medical practice. Today, it’s widely accepted because years of rigorous research and refinement have shown that its benefits outweigh the risks.

We need to use the same approach for AI in health care.

A people-centered approach

To capture AI’s full potential, we must put people at the center of health AI development and use. That means designing and deploying AI in a responsible way — a way that never loses sight of who these tools are meant to serve: patients and the professionals who care for them.

At Kaiser Permanente, we focus on people, priorities, processes, and policies to help guide our responsible use of AI.

People: Trust starts with people. Doctors, nurses, and pharmacists are consistently ranked in consumer surveys among the most trusted professionals in the country. We can bridge the trust gap in AI by applying the same principles that have earned confidence in health care over time. We can demonstrate, with clinical evidence, how AI helps clinicians deliver better care.

At Kaiser Permanente, we are building trust by testing AI tools in real-world settings, directly involving clinicians, and continuously monitoring AI tools’ performance to ensure they support care safely and effectively.

Priorities: Building trust takes time and focus. We’ve learned that trying to do too much at once can overwhelm teams and erode confidence. That’s why we prioritize a few high-impact projects. We start small, learn what works, and expand only when we’re ready.

Our assisted clinical documentation tool is one example. The tool summarizes medical conversations and creates draft clinical notes. Our doctors and clinicians can use it during patient visits.

We first launched it with a small number of doctors. We closely monitored it and gathered feedback from the clinicians using it before we expanded its use.

This process helped us prove the tool’s value and safety, and our careful, phased rollout helped our care teams and members build trust in it.

Processes: For AI to earn trust, it has to fit into the way care is delivered. That means when we design AI tools, we need to think beyond their technical aspects. We need to think about how each tool will be used in practice.

We saw this clearly with our Advance Alert Monitor, a system that uses AI to predict when hospitalized patients might get sicker and need urgent attention.

Our process first sends alerts to nurses who are equipped to quickly and accurately evaluate each one and only escalate to physicians when needed. This keeps physicians, who are already juggling many demands, from being overwhelmed by nonurgent alerts.

This approach supports physicians and helps patients get the right care faster. In the end, it wasn’t just the technology that earned trust; it was the process we built around it.

Policies: We believe health care organizations, including Kaiser Permanente, have a role in supporting thoughtful policymaking by sharing what works, where challenges arise, and what’s needed to keep people safe. That kind of transparency can help shape state and federal rules that support innovation while protecting the public.

When AI tools cause harm or don’t work as intended, they can trigger public mistrust, which might cause a wave of new rules that are meant to help but can make future innovation harder. That’s why trust is just as much a policy issue as a technical or care delivery issue.

Considerations for policymakers

As we integrate AI into health care, policymakers have a critical role. Policymakers can help build trust by:

  • Supporting the launch of large-scale clinical trials to demonstrate health AI’s effectiveness and safety
  • Supporting the establishment of standards and processes that health systems can use to monitor AI in health care
  • Supporting independent quality assurance testing of health AI algorithms

By pursuing these ideas, leaders can help ensure that AI technologies are people-centered and reliable, and that they support safe, high-quality care for all.