Medicare Compliance & Reimbursement

Expect AI To Be An Enforcement Tool — And Target

Experts warn about the perils of overreliance on AI.

You may find that the pros outweigh the cons when embracing artificial intelligence (AI) for your practice needs. But as with all technology, it’s important to tread lightly and ensure you’re implementing it with compliance in mind.

Why? One of the cornerstones of federally covered care, prescriptions, and equipment is that they are reasonable and medically necessary — and ordered by providers, not a program or algorithm. You run the risk of violating the False Claims Act (FCA) if AI goes awry, suggest attorneys Brian P. Dunphy and Samantha P. Kingsbury with Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. “AI tools could potentially create risk where providers use those tools to order items or services without appropriate provider involvement and oversight,” Dunphy and Kingsbury explain in an online analysis. “For example, if AI were used to select a test, and that order did not meet coverage requirements, this activity could give rise to a false claim.”

Be ready: Federal enforcement agencies like the Department of Justice (DOJ) and the HHS Office of Inspector General (OIG) have mined data for years to help them investigate fraud and abuse in the healthcare industry. Additionally, in its Strategic Plan, OIG noted that it intends to increase its “detection capabilities by leveraging artificial intelligence (AI) and machine learning to better predict the potential for fraud, waste, and abuse.”

Whether you’re just dipping your toe into the AI stream or are already all-in, it’s wise to set parameters, establish compliance guidelines, and keep abreast of regulatory reforms in the space.

Resource: Find the OIG’s five-year plan at https://oig.hhs.gov/documents/root/7/OIG-Strategic-Plan-2020-2025.pdf.