The study, which was published in the journal Proceedings of the National Academy of Sciences, involved researchers from the Penn Medicine Center for Digital Health.
For the study, researchers used an artificial intelligence (AI) program to analyze Facebook posts for "linguistic red flags" that may indicate depression. The findings suggest that the AI program can identify the condition at least three months earlier than health services.
Depression, or major depressive disorder, is a common and serious medical illness. While depression can be treated, the condition can negatively affect how you feel, think, and act. Depression is associated with feelings of profound sadness or a loss of interest in activities that you used to enjoy. The condition can also cause different emotional and physical problems, and it can decrease your ability to function at home and at work.
The researchers said that in previous tests, the machine learning algorithm performed just as well as current screening questionnaires that are used to identify depression. But unlike the questionnaires, the AI program can run "unobtrusively" in the background.
Concerned guardians often call for stricter age and usage limits on social media applications and websites like Facebook, especially because of their potentially damaging impact on children's mental health. However, the researchers believe the machine can be used to study the "wealth of information in social media pages." It could even be used to screen for mental health conditions, which is alarming since the AI program can do so even without a user's permission.
In the study, the AI program searched for early warning signs of depression in the language of users' posts.
Dr. Johannes Eichstaedt, one of the senior authors of the study, said that social media data contains markers similar to those in the genome. He added that methods comparable to those used in genomics can be applied to "comb social media data to find these markers."
Eichstaedt, who is also a co-founder of the World Well-Being Project at the University of Pennsylvania, explained that depression, unlike conditions such as diabetes or skin disease, can significantly change how an individual uses social media, which is what makes it detectable by the AI program. (Related: Facebook admits its site can be bad for you – but says if you are suffering ill effects, you’re doing it wrong.)
In the study, the researchers analyzed the Facebook profiles of 683 individuals who agreed to share their digital archives. The group included 114 volunteers who were diagnosed with depression, and every person with the condition was matched with five participants without a depression diagnosis to test the accuracy of the AI program.
The researchers studied the 524,292 Facebook posts the participants made in the years before their depression diagnoses. By comparing the language in these posts with that of the control subjects, they identified what they call "depression-associated language markers."
Using these markers, the program successfully identified warning signs of depression in individuals based on posts that were made at least three months before the condition was documented in their medical records.
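The study's actual model is not described here, but the idea of deriving "depression-associated language markers" by contrasting posts from diagnosed users with those of matched controls can be illustrated with a minimal sketch. Everything below is hypothetical: the toy posts, the word lists, and the simple log-odds weighting are assumptions for illustration, not the researchers' method.

```python
# Illustrative sketch only (not the study's actual model): derive
# per-word "marker" weights by comparing word frequencies in posts
# from a diagnosed group against a matched control group, then use
# the weights to score new text.
import math
from collections import Counter

def word_counts(posts):
    """Count lowercase word occurrences across a list of posts."""
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

def marker_weights(case_posts, control_posts, smoothing=1.0):
    """Smoothed log-odds weight per word: positive values mark words
    that are relatively more frequent in the diagnosed group."""
    case, ctrl = word_counts(case_posts), word_counts(control_posts)
    vocab = set(case) | set(ctrl)
    n_case = sum(case.values()) + smoothing * len(vocab)
    n_ctrl = sum(ctrl.values()) + smoothing * len(vocab)
    return {
        w: math.log((case[w] + smoothing) / n_case)
           - math.log((ctrl[w] + smoothing) / n_ctrl)
        for w in vocab
    }

def score(post, weights):
    """Average marker weight of the words in a post; higher means
    the language looks more like the diagnosed group's."""
    words = post.lower().split()
    return sum(weights.get(w, 0.0) for w in words) / max(len(words), 1)

# Toy data: hypothetical example posts, not real study material.
case_posts = ["i feel so alone tonight", "everything hurts and i am tired"]
control_posts = ["great hike with friends today", "excited for the game tonight"]

weights = marker_weights(case_posts, control_posts)
print(score("feeling alone and tired", weights)
      > score("fun day with friends", weights))  # → True
```

In practice a model like the one in the study would work with hundreds of thousands of posts and far richer features, but the contrastive logic is the same: words weighted toward the diagnosed group become the markers used for screening.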
The study showed that the program produced more accurate results when it searched for social media cues in the six months prior to a depression diagnosis. This could help detect depression in people at risk, especially if the AI program is paired with other forms of digital screening.
H. Andrew Schwartz, an associate professor of computer science, said that the researchers are aware of the stigma that surrounds social media use. Schwartz, who is also the principal investigator of this study, concluded that social media could be an important tool for diagnosing, monitoring, and eventually treating mental health conditions.
Since this was a small proof-of-principle study, the researchers suggest that the AI program could be improved by incorporating phone usage data or facial recognition software to analyze pictures posted on Facebook.
Are you willing to risk your privacy to learn more about your mental health, instead of consulting a therapist who can promise doctor-patient confidentiality?