Medicare Compliance & Reimbursement

Artificial Intelligence:

Discover How CMS Is Harnessing AI

Check out the agency’s AI Playbook and what’s ahead with emerging tech.

As AI grows more sophisticated, it continues to reshape healthcare. And Medicare has a strategy to help you both use the technology and keep it in check.

The Centers for Medicare & Medicaid Services (CMS) has been exploring the potential of artificial intelligence (AI) and machine learning (ML) in recent years. The agency has developed an AI Playbook, built a virtual community, and established guidance for responsible AI development. During an October 2023 Tech Topics session, the CMS Office of Information Technology (OIT) presented this information.

Learn how CMS is exploring AI in-house to stay ahead of the technological curve.

Establish AI Design Standards

Approximately two years ago, CMS launched the AI Explorers program to encourage AI development at the agency. According to CMS, OIT accepts project proposals that apply “AI and ML techniques related to CMS’ mission and to the applicant’s team functional area.” OIT has supported 10 projects since the program’s inception, and 160 learners from 15 CMS components have completed the beginner level of the AI/ML training track.

The agency is also building an AI community on Slack where members can interact and learn about the technology. As of October 2023, the community had more than 400 members.

Additionally, following Executive Order 13960 in 2020, the Department of Health and Human Services (HHS) and CMS developed the AI Playbook, which provides high-level guidance on fundamental AI design principles and operations. CMS has already published two versions of the playbook and is working on a third.

“Through more research and development, we’ll continuously share the lessons learned and best practices within the AI community,” said Xingjia Wu, data scientist for OIT, when speaking about the CMS AI Playbook. “Meanwhile, we’ll also ask for AI workspace internally within OIT, and externally with multiple components within CMS. We’d like to find a good working environment that could support more AI applications or even pilots across the agency,” Wu continued.

Analyze AI Development Guidance

If your healthcare organization is looking into developing AI tools to assist your providers and staff, or if team members are experimenting with ML and generative AI tools, keep expert guidance in mind while exploring the technologies.

Not every generative AI, ML, or large language model (LLM) tool is created equal, so it’s important to do your homework and assess the components of each one. As a healthcare provider or organization, you’re responsible for protecting your patients’ sensitive information in addition to providing care. Therefore, you’ll need to be just as responsible when developing or evaluating AI tools.

The CMS AI Playbook, the HHS Trustworthy AI Playbook, and the National Institute of Standards and Technology (NIST) AI Risk Management Framework offer enterprise-level guidance for AI, but the main elements of responsible AI boil down to the following areas (a minimal checklist sketch follows the list):

  • Validity and reliability
  • Safety, security, and resiliency
  • Accountability and transparency
  • Explainability and interpretability
  • Privacy
  • Fairness
  • Mitigation of harmful bias
  • Suitability
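
To make those areas concrete, here’s a minimal checklist sketch in Python showing how a review team might track a candidate tool against them before deployment. The structure and pass/fail scoring are illustrative assumptions, not part of the CMS, HHS, or NIST guidance.

# Illustrative checklist built from the responsible-AI areas listed above.
# The structure and pass/fail scoring are assumptions for this sketch only.

RESPONSIBLE_AI_AREAS = [
    "Validity and reliability",
    "Safety, security, and resiliency",
    "Accountability and transparency",
    "Explainability and interpretability",
    "Privacy",
    "Fairness",
    "Mitigation of harmful bias",
    "Suitability",
]

def unmet_areas(assessment):
    """Return every area the tool under review has not yet satisfied."""
    return [area for area in RESPONSIBLE_AI_AREAS if not assessment.get(area, False)]

# Hypothetical review of a vendor chatbot that has cleared privacy review only.
review = {area: False for area in RESPONSIBLE_AI_AREAS}
review["Privacy"] = True

for area in unmet_areas(review):
    print(f"Open item before deployment: {area}")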

Examine LLM and Generative AI Tool Use in Healthcare

One of the benefits of LLMs and generative AI models is how easily the technology can summarize large blocks of information in a way that makes sense to the user. Experts are seeing the technology used to summarize and surface information, drastically lowering the barrier to entry. Users can take complex material on an unfamiliar topic and have the LLM or generative AI model return a simplified explanation that’s easy to understand.
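
In code, that summarization pattern is straightforward. The sketch below assumes a hypothetical call_llm() helper standing in for whatever vetted generative AI service your organization uses; it is a placeholder, not a real API.

def call_llm(prompt):
    """Hypothetical placeholder for a vetted LLM endpoint; returns model text."""
    raise NotImplementedError("Connect this to your organization's approved AI service.")

def simplify(complex_text, reading_level="eighth grade"):
    """Ask the model to restate dense material at a plainer reading level."""
    prompt = (
        f"Summarize the following at this reading level: {reading_level}. "
        f"Keep every factual detail accurate.\n\n{complex_text}"
    )
    return call_llm(prompt)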

Example: A patient of a family practice logs onto the practice’s website to schedule an appointment for the symptoms they’re experiencing. A chatbot pops up and asks the patient to describe their symptoms. The chatbot then draws on the patient’s account to factor in their medical history along with the current symptoms before suggesting a possible condition. The chatbot also explains the potential condition in a way the patient can easily understand.

In the above scenario, the chatbot could present the information back to the patient in a way that factors in the patient’s age range and primary language.
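
Here’s how the chatbot in that scenario might fold the patient’s context into its request to the model. The field names are hypothetical, call_llm() is the placeholder from the earlier sketch, and a real system would need HIPAA-compliant handling of every value shown.

def describe_possible_condition(symptoms, history, age_range, language):
    """Build a patient-aware prompt for the hypothetical chatbot scenario."""
    prompt = (
        f"A patient aged {age_range} reports these symptoms: {symptoms}\n"
        f"Relevant medical history: {history}\n"
        f"Respond in {language}, in plain language, explaining what condition "
        f"these symptoms could suggest and what the patient should do next."
    )
    return call_llm(prompt)  # placeholder endpoint from the earlier sketch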

However, there are inherent risks with LLMs and generative AI models. Using the chatbot scenario as an example, the generated responses could include unintentional biases, which could offend the patient or give them inaccurate information. This is why you must do your due diligence to ensure the model isn’t going to do more harm than good.
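
One simple due-diligence probe, sketched under the assumption that you can query the model directly: send prompts that differ only in a demographic detail and flag diverging answers for human review. The prompts are illustrative, and a literal string comparison is only a crude first pass.

# Paired prompts that differ only in one demographic detail.
# Requires wiring call_llm() (from the earlier sketch) to a real service first.
probe_pairs = [
    ("A 30-year-old man reports chest pain after exercise. What are likely causes?",
     "A 30-year-old woman reports chest pain after exercise. What are likely causes?"),
]

for prompt_a, prompt_b in probe_pairs:
    answer_a, answer_b = call_llm(prompt_a), call_llm(prompt_b)
    if answer_a != answer_b:  # crude check; a real review compares clinical substance
        print("Flag for human review:", prompt_a, "vs.", prompt_b)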

Healthcare providers and organizations need to carefully vet every LLM and generative AI model in the context in which it will actually be used, especially for Medicare or Medicaid applications. If the model will be used with the public, then the information it provides must be correct. There can be no gray areas, and the technology must benefit beneficiaries.
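
One way to enforce that no-gray-areas standard is a go/no-go check against a reviewer-curated set of questions with known answers before any public release. The example entry and the substring test below are assumptions; a real evaluation would be far more thorough.

# Hypothetical golden set: questions paired with reviewer-approved expected content.
# Requires wiring call_llm() (from the earlier sketch) to a real service first.
golden_set = [
    ("Does Medicare Part B cover an annual wellness visit?", "yes"),
]

failures = [(question, expected) for question, expected in golden_set
            if expected.lower() not in call_llm(question).lower()]

if failures:
    print(f"{len(failures)} answer(s) missed expected content; do not deploy.")
else:
    print("All golden-set answers matched; proceed to the next review stage.")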