Ethical robots and brain implants for the mentally ill?

Have you ever had a mentally ill member of your extended family?

  • Yes.

    Votes: 0 0.0%
  • No.

    Votes: 1 100.0%
  • I'm not sure. But I'm absolutely sure that if I have, it was/is George W. Bush's fault.

    Votes: 0 0.0%

  • Total voters
    1
  • Poll closed.

shart_attack

Gold Member
By Fjola Helgadottir, Ph.D., Psychology Today—Last month I came across two research grants that push the boundaries of existing technology so far that they sound like science fiction. I have concerns about both projects, but they are so ambitious that I can't help but be excited. Furthermore, both projects have potential implications for the future of mental health treatment.


Ethical robots on the battlefield


The US Office of Naval Research has awarded $7.5 million to investigate "autonomous moral reasoning" (see here for more details). Fortunately, the program directors claim their goal is not robotic soldiers that can independently decide when lethal force is justified (a truly terrifying thought – may we never see the day!). Rather, the article linked to above presents the following scenario:


Consider a robot medic generally responsible for helping wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the soldier?


This is hard enough for humans, so I'm skeptical about how effective these algorithms will be. This style of reasoning requires high-level understanding of a situation that is well beyond anything I've seen in modern AI. However, I love projects that take on challenges that are "too hard" – you never know what advancements might be made along the way.
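
To make that difficulty concrete, here is a deliberately naive sketch in Python (entirely my own invention, not anything from the ONR programme) of how a robo-medic might weigh the dilemma above if someone had already translated the scene into numbers:

# Toy sketch only: a naive utility-weighing take on the robo-medic dilemma.
# Every name and number here is a hypothetical illustration, not the ONR design.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    people_helped: int    # how many people this option helps
    delay_minutes: int    # delay to the primary mission it causes

def choose(options, life_weight=10.0, delay_weight=1.0):
    """Pick the option with the highest crude 'moral utility' score."""
    def score(opt):
        return life_weight * opt.people_helped - delay_weight * opt.delay_minutes
    return max(options, key=score)

options = [
    Option("continue to the field hospital", people_helped=3, delay_minutes=0),
    Option("stop and treat the injured Marine", people_helped=1, delay_minutes=30),
]
print(choose(options).name)   # -> "continue to the field hospital"

The arithmetic is trivial; the genuinely hard part is getting from a chaotic real scene to those two tidy records, and that is precisely the high-level understanding current AI lacks.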

If the research is successful, there are other possible applications for ethics algorithms. One idea would be to enhance computerized therapy systems so that their treatment does not cross any ethical boundaries. However, I hope that early systems aren't able to make potentially unethical decisions in the first place. Rather, if progress is made towards a "computational model of morality", a better use would be to gain insight into the thoughts and behavior of people. This technology is still some way away, but it is an intriguing possibility.


Therapy using brain-computer interfaces


The Defense Advanced Research Projects Agency (DARPA) has recently awarded up to $70 million to investigate brain implants for regulating emotions in the mentally ill. The program is known as SUBNETS, and the first step will be to develop a better understanding of the neural basis of mental illness:


[The] lack of understanding of how mental illness specifically manifests in the brain has limited the effectiveness of existing treatment options, but through SUBNETS we hope to change that. DARPA is looking for ways to characterize which regions come into play for different conditions – measured from brain networks down to the single neuron level – and develop therapeutic devices that can record activity, deliver targeted stimulation, and most importantly, automatically adjust therapy as the brain itself changes. (source)


Currently, there are two evidence-based methods for treating mental illness: medication and therapy. This research could introduce new alternatives, in particular implants that stimulate precise regions within the brain at key moments to actively alter thinking patterns. Here is an example:


Fear is generated in the amygdala – a part of the brain involved in emotional memories. But it can be repressed by signals in another region, the ventromedial pre-frontal cortex. "The idea would be to decode a signal in the amygdala showing overactivity, then stimulate elsewhere to [suppress] that fear," says Dougherty. (source)


Apparently there has already been some success using related techniques for treating epilepsy and severe cases of OCD.
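
To give a concrete, if deliberately oversimplified, picture of the "record, stimulate, adjust" loop DARPA describes, here is a toy closed-loop controller sketch in Python. The threshold, the sensing and stimulation functions, and the random signal are placeholders of my own, not the SUBNETS design:

# Toy sketch only: a closed-loop "detect overactivity, then stimulate" cycle.
# All functions, numbers, and the signal model are hypothetical placeholders.

import random
import time

OVERACTIVITY_THRESHOLD = 0.8   # hypothetical normalised activity level

def read_amygdala_activity():
    # Stand-in for decoding activity from a recording electrode.
    return random.random()

def stimulate_vmpfc(intensity):
    # Stand-in for delivering targeted stimulation to the ventromedial
    # prefrontal cortex to suppress the fear response.
    print(f"stimulating vmPFC at intensity {intensity:.2f}")

def control_loop(cycles=20, interval_seconds=0.1):
    for _ in range(cycles):
        activity = read_amygdala_activity()
        if activity > OVERACTIVITY_THRESHOLD:
            # Stimulate in proportion to how far activity exceeds the threshold,
            # i.e. "automatically adjust therapy as the brain itself changes".
            stimulate_vmpfc(activity - OVERACTIVITY_THRESHOLD)
        time.sleep(interval_seconds)

control_loop()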

I have reservations about this research. Firstly, this technique may mask symptoms without addressing underlying problems. In other words, it may not be viable as a long-term solution. Secondly, there is a risk of unintended and potentially dangerous side effects. Incidentally, these are the same concerns I have about the use of some prescription medication.

For safe and effective long-term benefits, my bet remains on cognitive behaviour therapy. However, these are exciting times.

________________________________________

Fjola Helgadottir, Ph.D., is a senior research clinician at the University of Oxford and co-founder of AI-Therapy, a developer of online treatment programs and tools for psychologists. Follow her on Twitter @drfjola.


Ethical robots and brain implants for mental illness | Psychology Today
 
