Tech & Innovation in Healthcare

Modify Your Generative AI Model to Enhance Accuracy

Question: I’m worried about our staff members receiving incorrect information from a generative artificial intelligence (AI) model that could put our patients at risk. What can we do to ensure the results from a generative AI model are reliable and accurate?

New Jersey Subscriber

Answer: Worrying about incorrect information from a generative AI model is completely natural, and there is a modification a developer can make to improve the reliability and accuracy of the results. Retrieval-augmented generation (RAG) is an AI framework that helps improve the quality of responses generated by a large language model (LLM).

RAG links the LLM to external information sources that supplement the generated text. This allows the LLM to cite its sources when prompted, so users can fact-check the LLM's claims. RAG reduces the chances of the LLM delivering an incorrect answer stated with authority, in other words a hallucination, which helps build trust between users and the AI model.

For example, an LLM augmented with the latest CPT®, ICD-10-CM, and HCPCS Level II codes, guidelines, and payer rules could be a welcome tool for medical coders and billers working through complex medical reports. The LLM could examine a report and suggest codes for the human coders to review.

Your LLM developer only needs to add a small amount of code to layer RAG onto the existing model, because the model itself is not retrained; it is simply connected to an external knowledge source at prompt time. Deploying this modification is much faster and more cost-effective than building an entirely new AI model or retraining the existing one on additional datasets.
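For readers curious what that modification looks like, here is a minimal Python sketch of the retrieve-then-prompt step. The guideline snippets, the keyword-overlap retriever, and the prompt wording are illustrative assumptions only; a production system would pull from a maintained code-set database, typically using embeddings and a vector index rather than simple word matching.

# Minimal retrieval-augmented generation (RAG) sketch.
# The snippets, retriever, and prompt format below are illustrative
# placeholders, not a production coding-compliance pipeline.

GUIDELINE_SNIPPETS = [
    "CPT 99213: office/outpatient visit, established patient, low complexity.",
    "ICD-10-CM E11.9: type 2 diabetes mellitus without complications.",
    "HCPCS J3420: injection, vitamin B-12 cyanocobalamin, up to 1,000 mcg.",
]

def retrieve(question, snippets, top_k=2):
    # Naive retriever: rank snippets by shared keywords with the question.
    # Real systems usually use embeddings and a vector database instead.
    q_words = set(question.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, snippets):
    # Prepend the retrieved guidance so the model can ground and cite its answer.
    context = "\n".join("- " + s for s in snippets)
    return (
        "Answer using only the reference material below and cite it.\n"
        "Reference material:\n" + context + "\n\n"
        "Question: " + question
    )

question = "Which code covers an established patient office visit of low complexity?"
prompt = build_prompt(question, retrieve(question, GUIDELINE_SNIPPETS))
# The assembled prompt is then sent to the existing LLM unchanged;
# only this prompt-building step is new.
print(prompt)

The structure is the point: retrieve the relevant guidance first, then hand the model a prompt that includes it, so the answer is grounded in sources a human coder can verify.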

Stay tuned to Revenue Cycle Insider for more information as AI continues to evolve in healthcare.

Mike Shaughnessy, BA, CPC, Development Editor, AAPC