Artificial empathy
Artificial empathy or computational empathy is the development of AI systems—such as companion robots or virtual agents—that can detect emotions and respond to them in an empathic way.
Although such technology can be perceived as scary or threatening,[2] it could also have a significant advantage over humans in roles where emotional expression is important, such as the health care sector. For example, care-givers who perform emotional labor above and beyond the requirements of paid labor can experience chronic stress or burnout, and can become desensitized to patients.
Emotional role-playing between a care-receiver and a robot might actually result in less fear and concern about the receiver's predicament ("if it is just a robot taking care of me it cannot be that critical")[according to whom?]. Scholars debate the possible outcomes of such technology from two perspectives: artificial empathy could either support the socialization of care-givers or serve as a model of emotional detachment.
A broader definition of artificial empathy is "the ability of nonhuman models to predict a person's internal state (e.g., cognitive, affective, physical) given the signals (s)he emits (e.g., facial expression, voice, gesture) or to predict a person's reaction (including, but not limited to internal states) when he or she is exposed to a given set of stimuli (e.g., facial expression, voice, gesture, graphics, music, etc.)".
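As a rough illustration of this definition, the sketch below maps hypothetical observed signals (a valence/arousal estimate standing in for features extracted from facial expression, voice, or gesture) to a predicted internal state and a response intended to be empathic. The state labels, numeric thresholds, and canned responses are illustrative assumptions, not the behavior of any deployed system.

```python
# Minimal sketch of the two predictions named in the definition above:
# (1) predict an internal state from emitted signals, (2) respond to it.
# The signal representation and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class ObservedSignals:
    valence: float  # -1 (negative) .. 1 (positive), assumed scale
    arousal: float  #  0 (calm)     .. 1 (agitated), assumed scale


def predict_internal_state(signals: ObservedSignals) -> str:
    """Map emitted signals to a coarse affective-state label."""
    if signals.valence < 0 and signals.arousal > 0.5:
        return "distressed"
    if signals.valence < 0:
        return "sad"
    if signals.arousal > 0.5:
        return "excited"
    return "calm"


def empathic_response(state: str) -> str:
    """Pick a response that acknowledges the predicted state."""
    responses = {
        "distressed": "You seem upset. Would you like to talk about it?",
        "sad": "I'm sorry you're feeling down. I'm here with you.",
        "excited": "You sound enthusiastic! Tell me more.",
        "calm": "Glad everything seems fine. Let me know if you need anything.",
    }
    return responses[state]


if __name__ == "__main__":
    signals = ObservedSignals(valence=-0.6, arousal=0.8)
    state = predict_internal_state(signals)
    print(state, "->", empathic_response(state))
```

In practice the signal-to-state step would be a learned model over audio, video, or physiological features rather than two hand-set thresholds; the point here is only the structure of the prediction-then-response loop.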
There are a variety of philosophical, theoretical, and applied questions related to artificial empathy. For example:
Which conditions would have to be met for a robot to respond competently to a human emotion?
What models of empathy can or should be applied to Social and Assistive Robotics?
Must the interaction of humans with robots imitate affective interaction between humans?
Can robots help scientists learn about the affective development of humans?
Would robots create unforeseen categories of inauthentic relations?
What relations with robots can be considered authentic?
Although artificial intelligence cannot yet replace social workers themselves, the technology has been deployed in that field. Florida State University published a study on the use of artificial intelligence in human services. The research used computer algorithms to analyze health records for combinations of risk factors that could predict a future suicide attempt. The article reports, "machine learning—a future frontier for artificial intelligence—can predict with 80% to 90% accuracy whether someone will attempt suicide as far off as two years into the future. The algorithms become even more accurate as a person's suicide attempt gets closer. For example, the accuracy climbs to 92% one week before a suicide attempt when artificial intelligence focuses on general hospital patients".
Such algorithms can assist social workers. Social work operates on a cycle of engagement, assessment, intervention, and evaluation with clients. Earlier assessment of suicide risk can lead to earlier intervention and prevention, thereby saving lives. The system would learn, analyze, and detect risk factors, alerting the clinician to a patient's suicide risk score (analogous to a cardiovascular risk score). Social workers could then step in for further assessment and preventive intervention.
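A minimal sketch of such a risk-scoring system is shown below, assuming tabular health-record features and a simple scikit-learn classifier trained on synthetic data. The feature layout, model choice, and alert threshold are illustrative assumptions, not the method used in the Florida State University study.

```python
# Minimal sketch: a risk-scoring classifier over health-record features.
# All features, labels, and the alert threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for historical health records: each row is a patient,
# each column a binary risk factor (e.g., a prior attempt or a recent ER visit).
n_patients, n_factors = 1000, 8
X = rng.integers(0, 2, size=(n_patients, n_factors)).astype(float)

# Synthetic outcome label: 1 = a later suicide attempt was recorded, 0 = none.
weights = rng.normal(size=n_factors)
logits = X @ weights + rng.normal(scale=0.5, size=n_patients)
y = (logits > np.median(logits)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier; real systems would use richer models and real records.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "risk score" here is the predicted probability of the positive class,
# one number summarizing many factors, analogous to a cardiovascular risk score.
risk_scores = model.predict_proba(X_test)[:, 1]

ALERT_THRESHOLD = 0.8  # hypothetical cut-off for flagging a clinician
for patient_id, score in enumerate(risk_scores):
    if score >= ALERT_THRESHOLD:
        print(f"Patient {patient_id}: risk score {score:.2f} -- flag for clinical review")
```

In such a workflow the model only produces the score and the alert; assessment and intervention remain with the clinician or social worker, as described above.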