AI in Health Insurance: Ensuring Accountability and Oversight in an Evolving Landscape

One might say Artificial Intelligence (AI) is a double-edged sword. It has already proven helpful in streamlining and automating repetitive tasks in many areas of daily life. Yet AI also poses real risks when it is not balanced with appropriate human supervision.


In October 2023, the Biden Administration issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order opens with a clear admonition:

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”1

The Executive Order uses the definition of artificial intelligence already codified in federal law: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments…”2

In the behavioral health context, a mental health practitioner might use AI to generate notes after a patient session, freeing up time for direct patient care. AI can also help detect medical errors and quality issues, or automate billing and prior authorization processes. An insurance carrier might use AI to set insurance premiums, make underwriting decisions, and automate utilization review of covered services. In all of these cases, careful guardrails must be in place to ensure that human intervention and oversight accompany computer-driven decisions.
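
To make the notion of a guardrail concrete, the minimal sketch below shows one way such a rule could be enforced in software. It is a hypothetical illustration, not a description of any system discussed in this article: every name in it (AIRecommendation, finalize_determination, and so on) is invented, and the policy it encodes, under which the model may recommend but only a named human may deny, is one possible approach among many.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical guardrail for an AI-assisted utilization review pipeline.
# The model's output is advisory only; no adverse determination can be
# recorded without a named human clinical reviewer.

@dataclass
class AIRecommendation:
    case_id: str
    suggested_action: str  # "approve" or "deny" (advisory only)
    rationale: str

@dataclass
class Determination:
    case_id: str
    action: str
    decided_by: str  # "automated" only ever appears on approvals

def finalize_determination(rec: AIRecommendation,
                           human_reviewer: Optional[str] = None,
                           human_decision: Optional[str] = None) -> Determination:
    """Record a final determination while enforcing human oversight."""
    if rec.suggested_action == "approve" and human_decision is None:
        # Favorable outcomes may be automated.
        return Determination(rec.case_id, "approve", decided_by="automated")
    if human_reviewer is None or human_decision is None:
        # Guardrail: a suggested denial must be independently reviewed.
        raise PermissionError(
            f"case {rec.case_id}: an adverse determination requires "
            "review by a human clinician")
    # The human decision controls, even when it contradicts the model.
    return Determination(rec.case_id, human_decision, decided_by=human_reviewer)
```

A real pipeline would add audit logging, documentation of clinical rationale, and appeal tracking, but the structural point is the same: the adverse path is unreachable without a human actor.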

Along these lines, this past spring, the American Psychiatric Association adopted a Position Statement on the Role of Augmented Intelligence in Clinical Practice and Research3, which cautions that AI applications may “carry high or unacceptable risk of biased or substandard care, or of patient privacy and consent concerns.” The Position Statement refers to seven areas for increased accountability and oversight, including the need for human clinical involvement, patient education, and safeguarding of health information used by AI systems.


Here in New York, the New York State Department of Financial Services (DFS) recently issued Insurance Circular Letter No. 7 (2024), an industry guidance entitled “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing.” The goal of the Circular Letter is to ensure that insurers doing business in New York use emerging technologies in compliance with all applicable federal and state laws and regulations. As the Circular Letter cautions, “the self-learning behavior that may be present in AI increases the risks of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes that may disproportionately affect vulnerable communities and individuals or otherwise undermine the insurance marketplace in New York.”4

DFS expects insurers to conduct appropriate oversight of third-party vendors, including due diligence regarding the risks of any AI or external consumer data and information sources those vendors use. DFS states that insurers remain ultimately responsible for the outcomes of AI used by outside vendors.

While the Circular Letter applies solely to underwriting and pricing activities, the fundamental concepts and spirit of the guidance should be applied to all insurance company activities, particularly utilization review. We strongly support and encourage similar regulatory oversight of the use of AI by insurance carriers in connection with utilization review activities, including review of claims, imposition of pre-payment review, and post-payment audits. NYSPA urges state regulators to adopt additional policies and procedures consistent with the National Association of Insurance Commissioners Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, issued in December 2023.5 The Model Bulletin requires insurers to develop, implement, and maintain a written AI program that mandates the responsible use of AI systems in connection with decisions related to regulated insurance practices. Such written AI programs should be designed to mitigate the risk of adverse consumer outcomes, including “inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes for consumers.”

Earlier this year, New York State Assembly Member Pamela J. Hunter introduced A-9149, a bill that would amend the Insurance Law to require insurers to (i) notify insureds and enrollees, via a notice posted on the insurer’s website, whether AI-based algorithms are used in the utilization review process and (ii) submit to DFS any AI-based algorithms or training data sets being used, or that will be used, in the insurer’s utilization review process.6 Further, any clinical peer reviewer conducting utilization review on behalf of an insurer that initially uses AI-based algorithms would be required to open and document a review of the individual clinical records or data before issuing an adverse determination. The bill would impose penalties on insurers for noncompliance, including suspension or revocation of the insurer’s license, a one-year delay before a new license may be issued, a fine of no more than $5,000 for each violation, and a fine of no more than $10,000 for each willful violation. The same penalties would apply to clinical peer reviewers who violate the law. At present, A-9149 is under review by the Assembly Standing Committee on Insurance. Hopefully, this bill and others like it seeking to create standards for the use of AI in the health insurance industry will continue to garner attention and momentum.
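
The record-review requirement in A-9149 lends itself to a simple technical reading: the review system itself can refuse to issue a denial until every clinical record has been opened and documented by the reviewer. The sketch below illustrates that idea under stated assumptions; it is not the bill’s language, and all class and method names in it are hypothetical.

```python
import datetime

# Purely illustrative sketch of the audit trail suggested by A-9149's
# record-review requirement: a denial cannot issue unless the reviewer
# has first opened and documented each clinical record. All names here
# are hypothetical, not drawn from the bill text.

class ReviewCase:
    def __init__(self, case_id: str, record_ids: list[str]):
        self.case_id = case_id
        self.record_ids = record_ids
        self.review_log: dict[str, dict] = {}  # record_id -> audit entry

    def open_record(self, record_id: str, reviewer: str, notes: str) -> None:
        """Log that a named reviewer opened and documented one record."""
        if record_id not in self.record_ids:
            raise KeyError(f"unknown record {record_id!r}")
        self.review_log[record_id] = {
            "reviewer": reviewer,
            "notes": notes,
            "opened_at": datetime.datetime.now(datetime.timezone.utc),
        }

    def issue_adverse_determination(self, reviewer: str) -> str:
        """Block a denial until every record has been opened and documented."""
        unreviewed = [r for r in self.record_ids if r not in self.review_log]
        if unreviewed:
            raise RuntimeError(
                f"case {self.case_id}: records {unreviewed} not yet reviewed; "
                "adverse determination blocked")
        return f"adverse determination for case {self.case_id} issued by {reviewer}"
```

The design choice worth noting is that the documentation step is a precondition enforced by the system, not a box checked after the fact, which is the distinction the bill appears to draw between independent review and mere reliance on an AI output.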

NYSPA is closely monitoring insurance utilization review activities in New York and is working collaboratively with its national organization, the American Psychiatric Association, to address any concerns. First, if utilization review activities are applied only to behavioral health benefits and not to other types of services, we must examine whether they constitute the discriminatory imposition of a non-quantitative treatment limitation in violation of the federal Mental Health Parity and Addiction Equity Act. Second, if a carrier uses AI-based algorithms or systems to generate record requests and evaluate clinical records, we must look carefully at whether those systems unfairly or improperly target particular groups of providers or beneficiaries.

Although A-9149 is not yet law, it would be instructive to know whether current utilization review activities would comply with its provisions, particularly the requirement that clinical peer reviewers actually review records independently rather than simply relying on a determination made by an AI program. As previously noted, AI and its self-learning behaviors carry a fundamental risk of “inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes that may disproportionately affect vulnerable communities and individuals….”7 Here, those risks could directly affect individuals seeking mental health care and treatment, including those who rely on insurance reimbursement to access necessary and sometimes life-saving care.

AI will continue to be on the minds of regulators and public policymakers. The National Conference of State Legislatures has reported that at least 45 states, as well as Puerto Rico, the Virgin Islands, and Washington, DC, have introduced legislation regarding AI; thirty-one states, along with Puerto Rico and the Virgin Islands, have adopted resolutions or enacted legislation.8 As New York prepares for the 2025 Legislative Session, the Assembly Committee on Consumer Affairs and Protection and the Assembly Committee on Science and Technology recently announced that they will hold a public hearing to examine regulatory and legislative options to ensure consumer and public protection relating to the use of artificial intelligence. The testimony received at this hearing, and at others like it across the country, will be critical in shaping future laws governing the use of AI.

Rachel Fernbach, Esq. is the Executive Director and General Counsel of the New York State Psychiatric Association and a Partner of the firm Moritt Hock & Hamroff LLP, where she concentrates her practice in the area of not-for-profit law and health care law with a specialty in psychiatry and other mental health services.

Jamie Papapetros is Research and Communications Coordinator at the New York State Psychiatric Association’s Government Relations Office, in conjunction with Karin Carreau of Carreau Consulting. Mr. Papapetros has a decade of experience in government relations, including identifying, tracking, and analyzing pertinent legislation; providing legislative and electoral research; and preparing memos and in-depth legislative and regulatory reports.

Footnotes

  1. www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  2. 15 U.S.C. § 9401(3)
  3. www.psychiatry.org/getattachment/a05f1fa4-2016-422c-bc53-5960c47890bb/Position-Statement-Role-of-AI.pdf
  4. www.dfs.ny.gov/industry-guidance/circular-letters/cl2024-07
  5. https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf
  6. www.nysenate.gov/legislation/bills/2023/A9149
  7. www.dfs.ny.gov/industry-guidance/circular-letters/cl2024-07
  8. www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation
