Suicide prediction technology is revolutionary. It badly needs oversight.

By Mason Marks

Mason Marks, a visiting fellow at Yale Law School’s Information Society Project, specializes in health and technology law.

December 20 at 3:03 PM

Last year, more than 1 million Americans attempted suicide, and 47,000 succeeded. While some people display warning signs, many others do not, which makes suicide difficult to predict and leaves family members shocked — and anguished that they couldn’t do something.

Medical providers and tech companies, including the Department of Veterans Affairs and Facebook, are increasingly applying artificial intelligence to the problem of suicide prediction. Machine learning software, which excels at pattern recognition, can mine health records and online posts for words and behaviors linked to suicide and alert physicians or others to impending attempts. The potential upside of this effort is huge, because even small increases in predictive accuracy could save thousands of lives each year.

This research, however, is progressing along two tracks, one academic-medical and one skewed heavily toward the commercial. Through a pilot program called REACH VET, for example, VA uses artificial intelligence to analyze medical records and identify vets at high risk for self-harm. The system weighs such factors as patients’ prior suicide attempts, past medical diagnoses and current medications (red flags include recent chronic-pain diagnoses and prescriptions for opioids or Ambien). Early results are encouraging, but progress both within and beyond VA is necessarily slowed by the need to be sure this line of research complies with health laws and ethical standards, and the need to demonstrate efficacy at each step.
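
The VA has not published REACH VET’s internals here, but to make the idea concrete, below is a minimal sketch of how a records-based risk scorer might weigh red flags like the ones described above. The feature names, weights, baseline and threshold are hypothetical assumptions, not the VA’s actual model.

```python
# Hypothetical sketch of a records-based suicide-risk scorer in the spirit
# of REACH VET. Feature names, weights, bias and threshold are illustrative
# only; they are not the VA's actual model.
import math

EXAMPLE_WEIGHTS = {
    "prior_suicide_attempt": 1.8,
    "recent_chronic_pain_diagnosis": 0.9,
    "opioid_prescription": 0.7,
    "ambien_prescription": 0.6,
}
BASELINE_LOG_ODDS = -4.0  # illustrative baseline

def risk_score(record: dict) -> float:
    """Return a 0-1 risk estimate from binary flags in a patient record."""
    logit = BASELINE_LOG_ODDS + sum(
        weight for feature, weight in EXAMPLE_WEIGHTS.items() if record.get(feature)
    )
    return 1 / (1 + math.exp(-logit))  # logistic function

def flag_for_outreach(record: dict, threshold: float = 0.05) -> bool:
    """Flag patients whose estimated risk exceeds a clinician-review threshold."""
    return risk_score(record) >= threshold

# Example: a veteran with a prior attempt and a recent opioid prescription
patient = {"prior_suicide_attempt": True, "opioid_prescription": True}
print(round(risk_score(patient), 3), flag_for_outreach(patient))
```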

In Silicon Valley, it’s a different story: Corporations outside health care are racing to use AI to predict suicide in billions of consumers, and they treat their methods as proprietary trade secrets. These private-sector efforts are completely unregulated, potentially putting at risk people’s privacy, safety and autonomy, even in the service of an important new tool.

Facebook is the largest and most visible company engaged in suicide prediction. After it introduced a live-streaming service in early 2016, dozens of users broadcast suicide attempts in real time on the platform. In response, on Feb. 16, 2017, CEO Mark Zuckerberg announced that Facebook was experimenting with AI-based suicide prediction. Its software analyzes user-generated posts for signs of suicidal intent — the word “Goodbye” paired with responses like “Are you OK?,” for example, or “Please don’t do this” in response to a live stream — and assigns them risk scores. Cases with high scores are forwarded to Facebook’s community operations team, which reviews them and notifies police of severe cases. Facebook also helps pinpoint users’ locations so first responders can find them. In the past 12 months, the company initiated 3,500 of these “wellness checks,” contacting police about 10 times per day, Antigone Davis, Facebook’s head of global safety, said in a recent interview with NPR.
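
Facebook’s classifier is proprietary, but the signals described above (a post like “Goodbye” paired with alarmed replies) suggest how a simplified version might score content. The phrases, weights and escalation threshold below are illustrative assumptions, not Facebook’s actual system.

```python
# Simplified, hypothetical sketch of phrase-based risk scoring for a post
# and its replies. The phrases, weights and threshold are illustrative only.
POST_SIGNALS = {"goodbye": 1.0, "i can't go on": 2.0}
REPLY_SIGNALS = {"are you ok": 1.5, "please don't do this": 2.5}
ESCALATION_THRESHOLD = 3.0

def score_post(post_text: str, replies: list[str]) -> float:
    """Sum signal weights found in the post and its replies."""
    text = post_text.lower()
    score = sum(w for phrase, w in POST_SIGNALS.items() if phrase in text)
    for reply in replies:
        lowered = reply.lower()
        score += sum(w for phrase, w in REPLY_SIGNALS.items() if phrase in lowered)
    return score

def needs_human_review(post_text: str, replies: list[str]) -> bool:
    """High-scoring cases would be routed to a human review team."""
    return score_post(post_text, replies) >= ESCALATION_THRESHOLD

print(needs_human_review("Goodbye everyone", ["Are you OK?", "Please don't do this"]))
```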

In an email exchange with me, a spokeswoman for Facebook said its community operations team includes people with experience working in law enforcement and for suicide hotlines and crisis intervention centers. But she declined to say what kind of official credentials or licenses such employees have, how much training they receive, or what standards are used for deciding to contact police.

Facebook’s record of handling sensitive data in other contexts should raise concerns about how companies store and use suicide predictions. A British parliamentary committee released documents this month showing that Facebook used access to customer data to curry favor with partner companies and punish rivals. And in August, the Department of Housing and Urban Development filed a discrimination complaint against Facebook for giving landlords and home sellers a platform and tools that let them prevent disabled citizens, people of some religious faiths and members of racial minority groups from seeing certain housing ads. More directly relevant, in 2017, Facebook reportedly told advertisers that it could identify teens who feel “defeated,” “worthless” and “useless,” presumably so they could be targeted with ads.

This week, it came to light that Facebook had shared far more data with large tech companies like Apple, Netflix, and Amazon than previously disclosed, in some cases allowing them to read users’ private messages.

To its credit, Facebook says it never shares suicide prediction data with advertisers or data brokers. Still, the public must take Facebook’s word for it at a time when trust in the company is waning. Theoretically, such data could be incorporated into company-facing user profiles and then used to target suicidal people with behavioral advertising, or transferred to third parties that might resell it to employers, lenders and insurance companies, raising the prospect of discrimination.

Smaller developers have made inroads, too. The start-up Objective Zero proposes to use smartphone location data to infer suicide risk in veterans — for instance, if veterans who are physically active suddenly stop going to the gym, a possible sign of worsening depression.
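
Objective Zero has not said how such an inference would actually be made; one way to operationalize the gym example, sketched below, is a simple comparison of recent location-derived visits against a personal baseline. The window sizes and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the activity-drop signal described above: compare
# recent gym visits (inferred from location data) with a personal baseline.
# Window sizes and thresholds are illustrative only.
from datetime import date, timedelta

def weekly_visit_counts(visit_dates: list[date], weeks: int = 8) -> list[int]:
    """Count gym visits in each of the last `weeks` weeks (oldest first)."""
    today = date.today()
    counts = []
    for w in range(weeks, 0, -1):
        start = today - timedelta(weeks=w)
        end = start + timedelta(weeks=1)
        counts.append(sum(1 for d in visit_dates if start <= d < end))
    return counts

def sudden_activity_drop(visit_dates: list[date]) -> bool:
    """Flag when the last two weeks fall well below the prior six-week average."""
    counts = weekly_visit_counts(visit_dates)
    baseline, recent = counts[:-2], counts[-2:]
    avg_baseline = sum(baseline) / len(baseline)
    avg_recent = sum(recent) / len(recent)
    return avg_baseline >= 2 and avg_recent < 0.3 * avg_baseline

# Example: regular visits until two weeks ago, then nothing
visits = [date.today() - timedelta(days=d) for d in range(15, 60, 2)]
print(sudden_activity_drop(visits))
```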

It’s in the public interest to prevent profiteering off suicide prediction. But data brokers and social media platforms may argue that sharing the data with third parties is protected commercial speech under the First Amendment — and they’d find some support in Supreme Court precedent. In the 2011 case Sorrell v. IMS Health, the justices struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits, a potentially analogous use. In the Sorrell case, names and other identifying information were removed from the pharmacy records. Companies that make suicide predictions could also remove personal information before sharing the data, but de-identification is an imperfect science, and it can often be undone.
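
To see why stripped-out names offer thin protection, consider a toy example of re-identification by linkage: quasi-identifiers such as zip code, birth date and gender, left behind in a “de-identified” dataset, can often be joined against an outside source that does carry names. The records below are fictional and purely illustrative.

```python
# Toy illustration of re-identification by linkage. Records stripped of names
# can be re-linked through quasi-identifiers; all data here is fictional.
def relink(deidentified_rows, public_rows, keys=("zip", "birthdate", "gender")):
    """Join 'anonymous' rows to a named public dataset on shared quasi-identifiers."""
    index = {tuple(row[k] for k in keys): row for row in public_rows}
    matches = []
    for row in deidentified_rows:
        hit = index.get(tuple(row[k] for k in keys))
        if hit:
            matches.append((row, hit["name"]))
    return matches

deidentified = [{"zip": "02139", "birthdate": "1970-07-31", "gender": "F",
                 "risk_score": 0.91}]
voter_roll = [{"zip": "02139", "birthdate": "1970-07-31", "gender": "F",
               "name": "Jane Example"}]
print(relink(deidentified, voter_roll))  # the "anonymous" record regains a name
```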

Using AI to predict and prevent suicide has disconcerting parallels to predictive policing. Judges already use proprietary, nontransparent algorithms in sentencing and parole hearings to help decide who is likely to recidivate. Critics argue that such algorithms can be racially biased, yet the lack of transparency makes it hard to prove their case. Similarly, we don’t know if Facebook’s algorithms have discriminatory features.

There should be a very high bar for sending police into people’s homes, a practice that Facebook now contributes to. The Fourth Amendment protects Americans against warrantless searches, but police may enter homes without warrants if they reasonably believe that doing so is necessary to prevent physical harm, including self-harm. And once officers enter a residence, they can search and seize items in plain view that are unconnected to suicide risk, a potential back door to policing unrelated crimes without warrants. So long as suicide prediction algorithms remain opaque, we can’t make a proper cost-benefit analysis of the risk.

The data could also be used to police attempted suicide, which is still a crime in some countries. Facebook deployed its suicide prediction software outside the United States a year ago, and it says its ambitions are global. (An exception is the European Union, where strict privacy laws require greater transparency and accountability.) Yet in countries including Malaysia, Myanmar, Brunei and Singapore, suicide attempts are punishable by fines and imprisonment for up to one year. Facebook’s spokeswoman declined to tell me what percentage of its wellness checks occur outside the United States — and she left open the possibility that they occur even in countries where attempted suicide is unlawful.

In general, the police are seldom well-trained to deal with suicidal or mentally ill people, and it’s not uncommon for such encounters to spiral out of control. In August, an officer in Huntsville, Alabama, was indicted for murder after shooting a man who had called 911, saying he was suicidal and had a gun; the man refused to put down the gun when the police arrived. It’s not obvious that increasing how often police are dispatched to check on people in distress will, on balance, improve overall well-being.

And other elements of suicide prevention efforts — which will occur more often as suicide prediction spreads — have downsides. Involuntary hospitalization is one tool police can use to deal with the actively suicidal. Yet research suggests that people are at increased risk for suicide shortly after being admitted to or released from psychiatric hospitals. People who lack social support and access to mental health resources outside the hospital are particularly vulnerable at these critical moments.

The Food and Drug Administration could exercise its power to regulate medical products and treat suicide prediction tools like mobile health apps or software-based medical devices: The agency regulates such apps and devices when they perform “patient-specific analysis” and provide “patient specific diagnosis, or treatment recommendations.”

Alternatively, courts and lawmakers could impose special obligations, called fiduciary duties, on companies that make suicide predictions. It’s a technical concept but a potent one. When doctors practice medicine, fiduciary duty requires them to act in their patients’ best interest, which includes protecting their information. Law professors Jack Balkin of Yale and Jonathan Zittrain of Harvard have proposed treating social media platforms as information fiduciaries, which would cover far more than just medical data. But, at the least, when Facebook makes health inferences from consumers’ data, it’s reasonable that it should be held to standards similar to those doctors must meet.

The fiduciary “duty of care” might require Facebook to demonstrate that its prediction algorithms have undergone thorough testing for safety and efficacy. The “duty of confidentiality” and “duty of loyalty” could require Facebook to show it’s protecting user data, and refraining from sharing it (or otherwise exploiting users).

Facebook is losing the trust of consumers and governments around the world, and if it mismanages suicide predictions, that trend could spiral out of control. Perhaps its predictions are accurate and effective. In that case, it has no reason to hide the algorithms from the medical community, which is also working hard to accurately predict suicide. Yes, the companies have a financial interest in protecting their intellectual property. But in a case as sensitive as suicide prediction, protecting that intellectual property should not outweigh the public good that could be gained through transparency.
