March 19, 2024

Fostering responsible AI in health care

With the right policies and partnerships, artificial intelligence can lead to higher-quality, more equitable care.


Contributed by Daniel Yang, MD, Vice President, Artificial Intelligence and Emerging Technologies


Across the U.S., many organizations are realizing the transformative potential of artificial intelligence.

In health care, AI presents opportunities to improve patient outcomes and reduce health disparities. It can support care teams and enable more personalized health care experiences.

But health care leaders must understand and address risks to ensure AI is used safely and equitably. These risks include flawed algorithms, unsatisfying patient experiences, and privacy concerns.

At Kaiser Permanente, we’re taking a thoughtful approach to AI. AI tools alone don’t save lives or improve the health of our members; they enable our physicians and care teams to provide high-quality, equitable care.

How Kaiser Permanente uses AI

The health care industry generates almost 30% of all data in the world.

Artificial intelligence enables computers to learn and solve problems using a variety of data sources, including medical images, audio, and text. The insights generated from AI can support our physicians and employees in enhancing the care of our patients.

For example, a Kaiser Permanente program called Advance Alert Monitor uses AI to help prevent hospital emergencies before they happen. Every hour, the program automatically analyzes hospital patients’ electronic health data. If it identifies a patient at risk of serious decline, it sends an alert to a specialized virtual quality nursing team, which reviews the data to determine what level of on-site intervention is needed.
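The hourly score-and-alert loop described above can be sketched in a few lines of code. Everything here is an illustrative assumption: the patient fields, the toy risk score, and the alert threshold are hypothetical stand-ins, not Kaiser Permanente's actual Advance Alert Monitor model.

```python
# Illustrative sketch of an hourly deterioration-alert loop.
# The risk model, threshold, and patient fields are hypothetical --
# NOT the actual Advance Alert Monitor implementation.
from dataclasses import dataclass

RISK_THRESHOLD = 0.08  # assumed alerting cutoff


@dataclass
class PatientSnapshot:
    """Hourly extract of a patient's electronic health data (assumed fields)."""
    patient_id: str
    heart_rate: int         # beats per minute
    respiratory_rate: int   # breaths per minute
    lab_abnormalities: int  # count of out-of-range lab results


def risk_score(p: PatientSnapshot) -> float:
    """Toy stand-in for a trained deterioration-prediction model."""
    score = 0.0
    if p.heart_rate > 110:
        score += 0.05
    if p.respiratory_rate > 24:
        score += 0.04
    score += 0.02 * p.lab_abnormalities
    return min(score, 1.0)


def hourly_review(snapshots: list[PatientSnapshot]) -> list[str]:
    """Return IDs of patients to escalate to the virtual nursing team."""
    return [p.patient_id for p in snapshots if risk_score(p) >= RISK_THRESHOLD]
```

The key design point the program illustrates: the model only flags patients for review; a clinical team, not the algorithm, decides what intervention is needed.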

This program is currently in use at 21 Kaiser Permanente hospitals across Northern California. A rigorous evaluation found that the program saves an estimated 500 lives per year.

The path to responsible AI

At Kaiser Permanente, AI tools must drive our core mission of delivering high-quality and affordable care for our members. This means that AI technologies must demonstrate a "return on health," such as improved patient outcomes and experiences.

We evaluate AI tools for safety, effectiveness, accuracy, and equity. Kaiser Permanente is fortunate to have one of the most comprehensive datasets in the country, thanks to our diverse membership base and powerful electronic health record system. We can use this anonymized data to develop and test our AI tools before we ever deploy them for our patients, care providers, and communities.

We are careful to make sure that the AI tools we use support the delivery of equitable, evidence-based care for our members and communities. We do this by testing and validating the accuracy of AI tools across our diverse populations. We are also working to develop and deploy AI tools that can help us identify and proactively address the health and social needs of our members. This can lead to more equitable health outcomes.

Finally, once a new AI tool is implemented, we continuously monitor its outcomes to ensure it is working as intended. We stay vigilant; AI technology is rapidly advancing, and its applications are constantly changing. 

Policymakers can help set guardrails

While Kaiser Permanente and other leading health care organizations work to advance responsible AI, policymakers have a role to play too. We encourage action in the following areas:

  • National AI oversight framework — An oversight framework should provide an overarching structure for guidelines, standards, and tools. It should be flexible and adaptable to keep pace with rapidly evolving technology. New breakthroughs in AI are occurring monthly.
  • Standards governing AI in health care — Policymakers should work with health care leaders to develop national, industry-specific standards governing the use, development, and ethics of AI in health care. Close collaboration with health care leaders can produce standards that are effective, useful, timely, and not overly prescriptive. This matters because overly rigid standards can stifle innovation, limiting the benefits AI tools could deliver to patients and providers.

Guardrails: Progress so far

The National Academy of Medicine convened a steering committee of health care and technology experts, including Kaiser Permanente, to establish a Health Care AI Code of Conduct. This is a promising start toward an oversight framework.

In addition, Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multisector work group setting safety standards for the development and use of AI, with a commitment to protecting innovation.

Considerations for policymakers

As policymakers develop AI standards, we urge them to keep a few important points top of mind.

  • Lack of coordination creates confusion. Government bodies should coordinate at the federal and state levels to ensure AI standards are consistent and not duplicative or conflicting. 
  • Standards need to be adaptable. As health care organizations explore new ways to improve patient care, they should work with regulators and policymakers to ensure standards can be adopted by organizations of all sizes and levels of sophistication and infrastructure. This will allow all patients to benefit from AI technologies while being protected from potential harm.

AI has enormous potential to help make our nation’s health care system more robust, accessible, efficient, and equitable. At Kaiser Permanente, we’re excited about AI’s future, and are eager to work with policymakers and other health care leaders to ensure all patients can benefit.