UConn Researchers Leveraging CT Health Data to Develop Suicide Risk Algorithms

UConn researchers are using multiple health care data sources for a project that could help health care providers better identify when patients are at risk of attempting suicide.

UConn researchers are working on a project that uses large health care datasets in Connecticut to help medical providers better identify when patients could be at risk of attempting suicide. (Tina Encarnacion/UConn Health photo)

Hospitals routinely collect massive amounts of patient data, but that data does little good unless practitioners can use it to help patients.

With support from the National Institutes of Health and the National Science Foundation, a group of researchers is using statewide health care data to develop and implement an algorithm to help practitioners identify patients at risk of attempting suicide.

Robert Aseltine, professor and chair of the Division of Behavioral Sciences and Community Health and director of the Center for Population Health at UConn Health; Kun Chen, an associate professor in the Department of Statistics in the College of Liberal Arts and Sciences; and Fei Wang, an associate professor in the Department of Population Health Sciences at Weill Cornell Medicine, have been collaborating on this project for the past four years.

The Division of Behavioral Sciences is an eclectic unit within the School of Dental Medicine that has active research programs in oral health, mental health, substance abuse, suicide prevention, diabetes and other chronic diseases, health care delivery, and health outcomes.

Suicide is one of the most serious public health problems in the United States. Nearly 50,000 Americans die by suicide every year, according to the American Foundation for Suicide Prevention. Between 1999 and 2018, the suicide rate in the United States increased by 35%, according to the National Institute of Mental Health.

Recent evidence indicates that many individuals who die by suicide are in contact with the health care system in the months before their death. This point of contact could provide an important avenue for identifying at-risk patients and getting them the help they need.

Tailoring Big Data to Small Settings

The group led by Aseltine is leveraging data from the state Department of Public Health’s CHIME Database and Connecticut’s All Payer Claims Database, as well as clinical data from locations including Connecticut Children’s Medical Center, Hartford HealthCare, and Hartford Hospital Institute of Living, to develop a powerful predictive tool.

“We really feel we’re doing work that’s unique in Connecticut and even nationally,” Aseltine says. “We like where we are and where we’re going.”

Using statistical data fusion and transfer learning, the model takes information from a comprehensive external data source and uses it to improve prediction in a more limited data set.

Other researchers in this area typically work with large, sophisticated health care delivery systems to obtain robust data sets for modeling suicide risk. But this data is not automatically applicable to other settings.

“Our take is that these models are really only relevant to their settings, or similar settings, and that’s not where 95% of people get their health care,” Aseltine says.

The group’s approach uses local data and calibrates it against models built from larger databases, creating algorithms tailored to specific clinical settings.

“There’s no one-size-fits-all type of model,” Chen says. “It makes sense to utilize information from multiple sources to make provider- and setting-specific models.”
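To make the idea of calibration concrete, here is a minimal sketch in Python of one common way to transfer a model from a large external source to a smaller local setting: train a base risk model on the external data, then refit on local records using the base model’s score as an additional input. This is an illustration only; the data, feature counts, and modeling choices are hypothetical and do not represent the team’s actual method.

```python
# Minimal sketch (hypothetical, not the team's code) of calibrating a model
# built on a large external source to a smaller local clinical setting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: a large external source and a small local clinic,
# sharing the same feature columns (e.g., diagnosis and medication flags).
X_external = rng.normal(size=(50_000, 10))
y_external = (rng.random(50_000) < 0.02).astype(int)  # rare outcome
X_local = rng.normal(size=(800, 10))
y_local = (rng.random(800) < 0.02).astype(int)

# Step 1: learn a base risk model from the comprehensive external source.
base = LogisticRegression(max_iter=1000, class_weight="balanced")
base.fit(X_external, y_external)

# Step 2: refit on local records, using the base model's log-odds score as an
# extra feature so the local data can "calibrate" the external model to this setting.
def with_base_score(X):
    return np.column_stack([base.decision_function(X), X])

local = LogisticRegression(max_iter=1000, class_weight="balanced")
local.fit(with_base_score(X_local), y_local)

# Risk estimate for a new local patient.
x_new = rng.normal(size=(1, 10))
print(f"Calibrated local risk: {local.predict_proba(with_base_score(x_new))[0, 1]:.3f}")
```

The design choice illustrated here is the one described above: the large external source supplies a stable baseline, while the smaller local data set adjusts that baseline to the particular clinic’s patient mix.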

Their approach addresses the problem of fragmentation across the health care system. Practitioners usually have access only to patient information from their own practice or facility, so they can miss important information about outpatient or specialty care a patient may be receiving elsewhere.

The algorithm weighs factors from all available sources and determines which are relevant to suicide risk. It then gives practitioners a score indicating a patient’s risk, along with a detailed overview of the factors that led to that conclusion.
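As an illustration of what a score paired with a factor-level explanation could look like, the hypothetical sketch below fits a simple logistic regression and reports both the predicted risk and each factor’s contribution. The feature names and data are invented for the example and are not drawn from the project.

```python
# Hypothetical sketch: a risk score plus a per-factor breakdown from a simple
# linear model. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = [
    "prior_self_harm", "mood_disorder_dx", "recent_med_change",
    "ed_visits_past_year", "pregnancy_test_ordered",
]

# Toy training data standing in for fused multi-source records.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(5_000, len(feature_names))).astype(float)
true_logit = -4.0 + X @ np.array([1.5, 1.0, 0.8, 0.4, 0.6])
y = (rng.random(5_000) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def risk_report(patient):
    """Print the estimated risk and each factor's contribution to the log-odds."""
    score = model.predict_proba(patient.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * patient
    print(f"Estimated risk: {score:.1%}")
    for name, contrib in sorted(zip(feature_names, contributions),
                                key=lambda pair: -abs(pair[1])):
        if contrib != 0:
            print(f"  {name}: {contrib:+.2f} log-odds")

risk_report(np.array([1.0, 1.0, 0.0, 1.0, 0.0]))
```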

Chen describes this kind of integrative learning model as a “double-edged sword.” The difficulties come from several directions: suicide is a statistically rare event, not all patients have the same amount of information in the health care system, and combining information from disparate sources brings its own redundancy, uncertainty, and heterogeneity.

“If you can use it properly, you can see the value but otherwise, you just see the noise,” Chen says.
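The rarity problem can be shown with a small, assumed example: on an outcome that occurs in roughly one percent of records, a model that flags no one appears highly accurate, so rarity-aware metrics and class weighting are needed to tell whether a model is genuinely picking up signal rather than noise.

```python
# Assumed illustration of the rare-event problem: a "never at risk" baseline
# looks accurate on a ~1% outcome, so rarity-aware metrics are needed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, average_precision_score

rng = np.random.default_rng(2)
X = rng.normal(size=(20_000, 5))
y = (X[:, 0] + rng.normal(scale=2.0, size=20_000) > 5.0).astype(int)  # ~1% positive

naive = np.zeros_like(y)                                   # flags no one
model = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
scores = model.predict_proba(X)[:, 1]

print(f"Outcome prevalence:   {y.mean():.2%}")
print(f"Naive accuracy:       {accuracy_score(y, naive):.2%}")  # deceptively high
print(f"Model avg. precision: {average_precision_score(y, scores):.3f}")
```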

Despite these challenges, the researchers have shown the algorithm can pick up on key factors that indicate a patient is at higher risk of attempting suicide. These factors include a history of mental health disorders or self-harm, recent changes in medication, and even lab tests, such as whether an adolescent female recently had a pregnancy test.

“What has emerged has not been all that surprising,” Aseltine says. “And that’s a good thing, because it means you can build good models with the data that is routinely collected by health care providers.”

In collaboration with the University of Massachusetts, the researchers are looking to incorporate social circumstances such as a recent divorce, food insecurity, or being uninsured or underinsured into the model.

Real Applications

The researchers are testing the algorithm with a number of clinical partners over the coming year and hope it will be ready for broader use within two to three years.

The model may be especially helpful to hospitals, which are federally mandated to screen anyone they treat for suicide risk. Rather than having to follow up with thousands of patients who may indicate a factor related to suicide risk, doctors can focus their energy on those most likely to need immediate help.

The group is also looking at the relationship between prescription opioid use and suicide risk. They have found that those with mental health problems have a much higher risk of attempting suicide while taking opioids than the general population of prescription opioid users, but that patients with painful conditions, such as cancer or chronic pain, are at higher risk for suicidal behavior if they are not properly medicated.

The team hopes to develop an algorithm that can function as a decision support platform for clinicians prescribing opioids.

“We really think our work is relevant to what’s happening in the United States right now,” Aseltine says. “We’re working on real solutions that can be applied in real clinical settings to help real patients.”

As a statistician, Chen enjoys working on problems like this one, with clearly applicable benefits for people.

“Statisticians get to play in everyone’s backyard,” Chen says. “I really enjoy working to solve real problems.”

The project is a truly interdisciplinary, interinstitutional effort.

“It’s not just a question of having someone who can do statistics or computer science work,” Aseltine says. “It’s really understanding the unique knowledge and perspective they bring to the project. We feel that the Center for Population Health has been instrumental in advancing this type of multidisciplinary work at UConn.”

 
