Smart and Ethical Recovery: AI in Addiction Treatment

Artificial Intelligence (AI) has the potential to revolutionize addiction treatment. This complementary technology can aid clinicians and organizations by personalizing interventions and treatments, generating predictive analytics, optimizing clinical workflows, and even reaching patients who might not seek assistance through traditional means. Throughout history, the benefits of any technology have been counterbalanced by ethical considerations, and AI in the behavioral health space is no exception. AI technologies must be designed and implemented with transparency, privacy, and bias reduction in mind. Behavioral healthcare executives must be aware of the benefits and dangers of any technology, especially AI, to ensure proper, accurate, and ethical care is provided for all their patients.


Several researchers are exploring the benefits of AI, and undoubtedly, the results are promising. A recent Vanderbilt study predicted suicidal intentions with 80% accuracy (Marr, 2023), and a Yale study (Roberts et al., 2022) found machine learning models similarly able to identify patients in addiction treatment at higher risk of alcohol abuse relapse. Such results could revolutionize the work done by organizations focused on delivering quality care to substance abuse patients. These types of AI and machine learning models can provide insight across a broad stratum of patients, considering nuanced characteristics and dynamic behaviors of patients while in treatment or post-discharge. Clinicians could be given information that would otherwise be buried in notes and not readily apparent. It could be presented much like the gauges in an airplane cockpit, which give the pilot instant readings on the changing conditions needed to fly and land safely.

Technology deployed in a vacuum, however, without proper guidance, can have serious consequences. Over-reliance on AI could lead clinicians to disregard their own opinions and training when they run counter to the AI's output. Clinicians may not be fully aware of how AI and machine learning models are created and, therefore, may be wary of countering them. It is therefore important to understand the critical aspects of how these models are built. All AI and machine learning models leverage preexisting data, known as a “training set,” to create the model. A separate data set, called the “test set,” is then used to see how the model performs on data it has not seen. The accuracy measured on the test set serves as the gauge of the model's efficacy.
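For readers who want to see the mechanics, the sketch below illustrates the training-set/test-set workflow described above using the open-source scikit-learn library. The data file, column names, and choice of model are purely illustrative assumptions, not details of any vendor's product or of the studies cited here.

```python
# A minimal sketch of the training-set / test-set workflow described above.
# The file name, column names, and model choice are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical de-identified patient records with a relapse outcome column
data = pd.read_csv("patient_records.csv")
X = data.drop(columns=["relapsed"])   # patient characteristics (features)
y = data["relapsed"]                  # outcome the model tries to predict

# Hold out 25% of the records as a test set the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)           # the model learns only from the training set

# Accuracy on the unseen test set is the gauge of the model's efficacy
predictions = model.predict(X_test)
print(f"Test-set accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The key point for executives is the last step: whatever accuracy figure a vendor quotes should come from data the model was never trained on, and the composition of that data matters as much as the number itself.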


Researchers have studied highly successful models and the concerns raised when training sets are either limited or not representative. Joy Buolamwini and Timnit Gebru, in their study Gender Shades (Buolamwini & Gebru, 2018), examined how even high-accuracy facial recognition algorithms were deeply flawed. Although popular facial recognition tools at that time had reached overall accuracy levels greater than 90%, they consistently misgendered and misclassified subjects, especially women with darker skin tones. While the study was published in 2018 and improvements have been made since, the warning is clear: the training data fed into any AI algorithm is critical to its success. In addition, blind faith in these algorithms can lead to concerning behaviors among those who use them and ultimately can have an adverse effect on the populations they are meant to serve.
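The sketch below shows, in the simplest terms, the kind of disaggregated evaluation that exposes this problem: computing accuracy separately for each subgroup rather than only in aggregate. The numbers are tiny, entirely hypothetical examples chosen to illustrate the pattern, not data from Gender Shades or any real system.

```python
# A minimal sketch of disaggregated accuracy: an aggregate figure can look
# acceptable while one subgroup is served far worse. All values are hypothetical.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])  # true labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0])  # model predictions
group  = np.array(["A"] * 8 + ["B"] * 4)                  # subgroup membership

overall = (y_true == y_pred).mean()
print(f"Overall accuracy: {overall:.0%}")   # the aggregate number hides the gap below

# The same metric broken out per subgroup tells a very different story
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"Accuracy for group {g}: {acc:.0%}")
```

Asking a vendor for exactly this kind of subgroup breakdown, rather than a single headline accuracy figure, is one concrete way to act on the warning above.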

In the substance abuse domain, we must be vigilant in understanding how new AI technologies are trained and implemented. Our industry must ask very direct questions of technology creators, insisting on more information about how models are created, trained, implemented, and tested. A classification the technology assigns to a patient should be considered helpful input for a clinician, but it should not be the only evaluation of the patient. Even allowing an AI algorithm to write clinical notes, or to offer a selection of prewritten notes simply to meet regulatory requirements or increase efficiency, may be problematic. A clinician's free-text note can contain very rich information and provide insight that may be sanitized away by an algorithmic tool used for efficiency or regulatory compliance.

Our aim must always be to provide the best outcomes for the population that entrusts us with their care. A balance must be found between AI technology and clinicians to ensure proper care. AI models must be treated as tools for addiction treatment that our clinicians can use to achieve the best outcomes, and they should not replace the experience, insight, and compassion that only a clinician can provide.

Ann Marie Foster is CEO of Phoenix Houses NY/LI. Alexander Pelaez, PhD, is CIO at Phoenix Houses NY/LI, Professor of Information Systems and Business Analytics at Hofstra University’s Zarb School of Business, and CEO of 5EAnalytics.

References

Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability, and Transparency (pp. 77-91). PMLR.

Marr, B. (2023, July 6). AI in mental health: Opportunities and challenges in developing intelligent digital therapies. Forbes. www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/

Roberts, W., Zhao, Y., Verplaetse, T., Moore, K. E., Peltier, M. R., Burke, C., … & McKee, S. (2022). Using machine learning to predict heavy drinking during outpatient alcohol treatment. Alcoholism: Clinical and Experimental Research, 46(4), 657-666.
