
Patient advocates explore a patient bill of rights for AI

The patient advocacy document outlines seven patient rights to consider as healthcare adopts AI.

By Sara Heath

The Light Collective, a patient advocacy organization focused on promoting the patient voice in the health IT space, wants to make sure the industry centers the patient in its rush to adopt AI standards.

In response to the surge in AI, healthcare and technology leaders have formed multiple initiatives to draft AI codes of conduct. The WHO, the National Academy of Medicine, and other leading stakeholders have all published their own AI codes of conduct in an effort to ensure the safe and ethical use of AI in healthcare.

But these codes of conduct mostly center health and IT professionals, almost entirely neglecting to incorporate the patient voice. The “AI Rights for Patients” code of conduct from The Light Collective seeks to change that.

“Multiple national initiatives are forming AI standards, codes of conduct, and bills of rights, largely in the absence of input from patient communities,” Andrea Downing, The Light Collective’s co-founder and lead author of the document, said in a statement. “In an era where AI is reshaping the healthcare landscape, it is imperative that patient rights are not only protected, but also championed. These rights serve as a foundational document, articulating principles and values that underpin patient-centered AI practices.”

In its draft document, The Light Collective outlined seven patient rights that must be protected as healthcare continues to embrace AI.


First is the right to patient-led governance, or the idea that patients are involved in the “design, policymaking, and development of rules that govern AI,” the document reads.

Next is the independent duty to patients, which could serve as AI’s version of the Hippocratic Oath physicians take when treating patients. Right now, AI is not guided by any such principle of first doing no harm. To achieve that end, The Light Collective said there needs to be a fiduciary incentive, plus diverse patient representation, guiding healthcare’s AI design and use.

Third, The Light Collective advocated for the right to transparency spanning three domains:

  • Patients being informed of how and why their data is being used in generative or predictive AI models
  • Patients being informed when medical guidance, education, or communication is based on an AI algorithm rather than direct human input
  • Patients having access to the evidence of AI efficacy in their care

Fourth, the document states that patients have a right to AI self-determination, meaning that AI needs to be used and developed in a way that lets patients make informed decisions about their own care. That means letting patients opt out of or appeal AI-generated medical decisions.

Next, patients have a right to identity privacy and security. This means ensuring patients understand any cybersecurity risks of using AI or contributing their data to generative or predictive AI models. AI should also be free from the risk of scams and medical mis- or disinformation, and patients have the right to disclosure of cyber events related to AI use.


The penultimate right is the right of action, meaning patients have a right to legally enforceable recourse should they experience harm related to AI use.

And finally, the document espouses a right to shared benefit from AI. This means diverse patient communities must “equitably share in the benefits created as a result of health IT,” the document reads. That entails protections against the commodification of data contributed to AI algorithms, as well as sharing resources and funding with patient communities.

The “AI Rights for Patients” document is intended to help healthcare organizations, payers, and health IT developers design AI in a way that best serves patients, The Light Collective wrote. It comes in response to a lack of clarity around how patient data is used, the resource says, as well as concerns about patient data privacy, security, and accountability.

To that end, The Light Collective stated that healthcare needs an independent, patient-led governance body for AI use. The organization tapped a group of patient activists, clinicians, and health data experts to draft the document, but stressed that it is not exhaustive.

“As The Light Collective moves forward, it continues to solicit and incorporate perspectives and voices of a wide range of communities,” the organization said.


Other industry leaders have begun to sound the alarm about the absence of the patient voice in healthcare AI.

In January 2024, a report from Deloitte showed that few healthcare organizations are considering the consumer perspective when adopting AI.

The Deloitte 2024 Health Care Generative AI Outlook survey asked 60 healthcare executives about their strategies and considerations when looking at and implementing generative AI. Well over 70 percent of executives are laser-focused on the data aspect of implementing gen AI—data quality and availability, regulatory compliance, and security concerns—but the consumer is being left behind in those conversations.

Only 50 percent of executives said they are focused on building patient trust to share data, a crucial component to making generative AI models work, while the same proportion said they are focused on equitable access to gen AI-driven solutions. Even fewer (45 percent) are focused on patient education about AI and associated risks, Deloitte found.