Imagine falling and breaking an arm: you go to the doctor to have the break set and are then sent to rehab. Since insurance should cover the costs, you submit a claim, only for it to be denied. That raises a question: was the denial decided by a claims examiner, or by artificial intelligence (AI)?
On February 6th, the US government issued a memo to certain Medicare insurers making clear that they cannot use AI to deny claims. Machine-learning algorithms may assist in making coverage determinations, but they cannot be the sole basis for denying care.
The memo, from the Centers for Medicare & Medicaid Services (CMS), comes in response to lawsuits accusing health insurers of using AI to wrongfully deny care. In those suits, patients allege that companies such as United Healthcare and Humana relied on an AI model with a 90% error rate. The dangers of such technology are evident, and many regulators and critics have focused on its potentially discriminatory effects.
CMS expressed concern that algorithms could exacerbate discrimination and bias, emphasizing that the onus is on insurers to ensure compliance with the anti-discrimination requirements of the Affordable Care Act. Several states, including New York and California, have likewise warned insurance companies to verify that their algorithms are not discriminatory.