Policymakers can design regulations that protect patients and work for organizations of all sizes — not only those with the most resources.
Artificial intelligence is increasingly being integrated into the workflows of doctors and clinicians across the country. In a survey conducted by the American Medical Association, 66% of physicians reported currently using AI in their practice.
As AI tools become more common in health care, it’s critical to govern their use responsibly and ensure proper oversight.
State policymakers and accreditation bodies like the Joint Commission are starting to propose new rules for regulating AI in health care.
I welcome these discussions, and I urge policymakers to consider a central question: will new rules and regulations help every health care organization, and in turn their patients, or only those with the capacity and resources to manage regulatory complexity?
I often talk about the “AI haves and have-nots,” or the organizations that can afford to invest in AI and those that cannot.
Large health care organizations, like Kaiser Permanente, have the resources, data assets, and expertise to responsibly deploy and monitor AI tools, as well as the infrastructure to meet complex requirements. But smaller hospitals, rural clinicians, and community clinics often struggle to maintain basic IT infrastructure, let alone manage complex new regulations.
Without careful planning, regulations meant to improve safety can worsen the AI divide in our country.
The choices we make now will determine whether rules leave smaller providers and their patients behind or lift the entire field to a higher standard of care.
Effective regulations should apply to, and work for, every organization, not just those already ahead.
At Kaiser Permanente, we believe regulations should set a clear baseline for privacy, safety, and human oversight. They shouldn’t assume the most advanced organizations define the standard for all.
That balance matters. With many state proposals moving forward at once, each with different definitions and reporting expectations, even large health care organizations will struggle to comply. For small or rural hospitals, the burden can be overwhelming.
Rules for AI in health care work best when they do the following.
As a nonprofit, Kaiser Permanente is committed to sharing what we learn with the broader health care community. Our leadership in AI and data-driven care is not just about improving our own performance and outcomes. It’s about helping the entire health ecosystem move forward together, so all patients can benefit.
We use our experience to help the field move forward.
These efforts make AI safer, more transparent, and more accessible. They also help ensure that patients across America benefit from AI, not just those cared for by large health organizations.
Policymakers can take steps to build trust and confidence. We urge them to:
AI in health care is here to stay. The rules we build today will determine whether its benefits reach every community.
Policymakers, health care organizations, and communities must work together to create guardrails that make AI better, safer, and fairer for everyone.
At Kaiser Permanente, we’re committed to making sure progress in AI becomes progress for every patient across the country.