1x07 Getting All Emotional

I was somewhat hesitant to run with this issue because we tackled personality and chatbots in the previous newsletter. I didn't want to seem like I was duplicating themes, and there is so much more in the new cyberia that I'd like to indulge in and tackle.

I settled on this topic based on a luncheon talk that I gave. This talk was meant as a preview of a longer discussion I had planned for the ValleyTechCon about chatbots. In organizing my slides, I included a handful from the Bot Builder Community GitHub repository because I had been doing a lot of work creating swappable engines so you could use the Bot Framework with text analysis services beyond just Microsoft's Text Analytics API. One of those services was IBM Watson's Natural Language Understanding (NLU) service.

The neat thing about Watson's NLU is that it doesn't just track sentiment, but actually attempts to detect and analyze emotions as well. If asked, it'll return an emotion score object containing five emotions and their various scores. When I gave the luncheon talk, I offhandedly pointed out that with emotion detection you could create a psychotherapy chatbot in minimal lines of code. When I got home later that day, I thought about that statement a little more, and decided I would take a weekend to see how far I could get.
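To ground that in code, here's a minimal TypeScript sketch of what consuming those scores might look like. The five emotion fields mirror Watson NLU's documented response (sadness, joy, fear, disgust, and anger), but the sample numbers and the `dominantEmotion` helper are my own illustration, not part of any SDK:

```typescript
// Shape of the document-level emotion object in a Watson NLU result.
// The five field names follow Watson's documented response; the
// sample scores below are made up for illustration.
interface EmotionScores {
  sadness: number;
  joy: number;
  fear: number;
  disgust: number;
  anger: number;
}

// Return the highest-scoring emotion--e.g., to branch a dialog on.
function dominantEmotion(scores: EmotionScores): keyof EmotionScores {
  return (Object.keys(scores) as (keyof EmotionScores)[])
    .reduce((best, key) => (scores[key] > scores[best] ? key : best));
}

const sample: EmotionScores = {
  sadness: 0.12, joy: 0.71, fear: 0.05, disgust: 0.02, anger: 0.1,
};

console.log(dominantEmotion(sample)); // "joy"
```

A real bot would pull `sample` out of the NLU API response (or out of the middleware's turn state) rather than hard-coding it, but the branching logic stays this simple.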

By no means did I build a psychotherapy chatbot--at least not one that would replace your friendly neighborhood psychoanalyst. But in targeting Rogerian talk therapy, I was able to produce a minimally viable conversation that keyed on the emotional content of single messages, and returned randomly selected statements based on that emotional content to further the discussion. I did that in a weekend. What could someone do in a month? In a year?

Watson's emotion detection doesn't just analyze entire messages, but can also give emotion scores for specifically targeted words. In addition, rather than looking purely at emotion, you can use entity extraction and keyword detection to pull out important words and phrases, each with an emotional context attached. Watson also allows you to identify key concepts, while categorizing text based on major groupings.

Hypothesize with me:

You build a chatbot that accepts messages from a user, and using the Watson NLU via the Bot Builder Community middleware, you can immediately determine the overall concept of the message: "Childhood." The entity extraction middleware then gives you entities of "mother," "father," and "childhood home" with a key phrase of "moved away." Each of those elements has an emotion score attached to it. Father ranks highest with fear, while mother ranks highest with sadness. "Childhood home" ranks highest with joy, but "moved away" ranks highest for sadness.
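That hypothetical analysis could be modeled in a few lines of TypeScript. Every entity name and score below is invented to match the scenario, and the "most distressing entity" heuristic is just one plausible way to decide where to steer the conversation:

```typescript
// Hypothetical per-entity emotion breakdown, mirroring the scenario
// above. All names and scores are illustrative, not real NLU output.
interface ScoredEntity {
  text: string;
  emotion: { sadness: number; joy: number; fear: number; disgust: number; anger: number };
}

const entities: ScoredEntity[] = [
  { text: "father", emotion: { sadness: 0.2, joy: 0.05, fear: 0.6, disgust: 0.1, anger: 0.05 } },
  { text: "mother", emotion: { sadness: 0.55, joy: 0.2, fear: 0.1, disgust: 0.05, anger: 0.1 } },
  { text: "childhood home", emotion: { sadness: 0.1, joy: 0.7, fear: 0.05, disgust: 0.05, anger: 0.1 } },
];

// One simple heuristic: probe the entity carrying the most negative
// emotional weight (sadness + fear + anger).
function mostDistressing(items: ScoredEntity[]): ScoredEntity {
  const distress = (e: ScoredEntity) =>
    e.emotion.sadness + e.emotion.fear + e.emotion.anger;
  return items.reduce((a, b) => (distress(b) > distress(a) ? b : a));
}

console.log(mostDistressing(entities).text); // "father"
```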

As a human looking at this language and these scores, you can easily determine what the context is, and likely how you should respond, especially if you're only interested in asking non-binary questions that further the conversation, allowing for deeper self-reflection. How would you do this programmatically?

We could design a database of concepts, with each concept associated with one or more scenarios (possibly key phrases in a psychotherapy session). These concept/scenario combinations could then encapsulate a number of responses that a therapy bot could send back to the user. We could filter those responses based on the entities that were extracted, and rank them based on the emotion scores provided, depending on which entity we want to dive deeper into. In this scenario, we might want to know more about the user's relationship with his or her father prior to moving.
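A minimal sketch of that selection logic, assuming a hand-curated response table--all concepts, entities, and response templates here are made up for illustration:

```typescript
// Sketch of the concept/scenario response store described above.
// Every concept, entity, and canned response is an invented example.
interface TherapyResponse {
  entity: string;   // entity the response targets ("father", etc.)
  emotion: string;  // emotion the response is tuned for
  text: string;     // template; {entity} is substituted at runtime
}

const responses: Record<string, TherapyResponse[]> = {
  "Childhood": [
    { entity: "father", emotion: "fear", text: "What did you feel when your {entity} was nearby?" },
    { entity: "mother", emotion: "sadness", text: "Tell me more about your {entity}." },
  ],
};

// Filter the concept's responses to the extracted entity, then pick
// the one matching that entity's strongest emotion; fall back to a
// generic Rogerian prompt if nothing matches.
function pickResponse(concept: string, entity: string, topEmotion: string): string {
  const pool = (responses[concept] ?? []).filter((r) => r.entity === entity);
  const match = pool.find((r) => r.emotion === topEmotion) ?? pool[0];
  return match
    ? match.text.replace("{entity}", entity)
    : "Can you tell me more about that?";
}

console.log(pickResponse("Childhood", "father", "fear"));
// "What did you feel when your father was nearby?"
```

In a production bot the table would live in a real database, and the ranking would weigh all five emotion scores rather than just the top one, but the shape of the lookup stays the same.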

What about things we don't know? Remember when we talked about Microsoft's Personality Project in the last issue? That was an editorially curated list of responses, but if no response was readily available, it fell back to a deep neural network to try to determine the appropriate language. Something similar (albeit more complicated) could be built for therapy responses, considering that all of the components needed for a follow-up response have already been extracted.

So what's the point? The point is that--given a weekend--I was able to turn a simple idea into a novelty--a toy capable of amusing people with psychotherapeutic meanderings--and I didn't have to write complex math to do it; the Watson packages were at my fingertips. Given a modest budget, a year, and a few good machines, you could challenge the norms of psychotherapy and the entire counseling industry--whether they're prepared for it or not. --Michael Szul & Bill Ahern

Want to take this conversation further? We're experimenting with a public team on Keybase--an end-to-end encrypted messaging, file, and identity management service. Check out our public team here. You can also contact Bill and me directly from the chat feature. --MS

Editors' Note:

Interested in artificial intelligence, natural language processing, and chatbots in particular? Don't forget that Michael wrote a book: Building Chatbots in TypeScript with the Microsoft Bot Framework.


You don't have to wait for me or anybody else to build your therapy bot. They're already out there. In fact, they date as far back as 2017, when Wired ran an article about Woebot--a chatbot built by Stanford researchers to help reduce anxiety (for the "low" price of $39.00 a month). Woebot specializes in cognitive behavioral therapy rather than Rogerian therapy, so its primary course of action is to analyze your emotional state and suggest actions to take to alleviate situational anxiety.

Since the Wired article, Woebot has expanded to include lessons, exercises, and other ways to help beyond the simple conversational experience, but it's still careful to identify itself as a tool of self-reflection and not a replacement for clinical judgement. --MS

Stephen Allwine

On the latest episode of Codepunk, we took a look at the Stephen Allwine case, and contemplated the uptick in recent news stories about Dark Web murders. --BA

X2 and Tess

I get a little wary when I see a human-centered initiative with a very non-human-centered name. X2 promotes affordable mental health outcomes while very much tying itself to the self-help movement, and it identifies Tess as a tool that can handle 25% of "mild" cases for counselors. Rather than give you the marketing pitch, I'll just leave you with some words from the CEO:

"Through trial and error, I found a wonderful psychologist, who was able to help me through that time by using talk-therapy. Later on, I realized that when I was speaking with my friends and colleagues, I was simply repeating the conversations I previously had with my psychologist."

"That's when I first realized: 'If I can help people by repeating these conversations, then we could teach a machine to do the same.'" --Michiel Rauws


The problem is that although I know how fast we're advancing with artificial intelligence and healthcare, the marketing spiel that permeates services such as these can be borderline dangerous--especially when we're talking about mental health.

We'll see where all of this goes. --MS

Southern Tier Cold-Pressed Coffee Pumking

In a word: Delicious. While the pumpkin isn't as strong as in the traditional Pumking, there was enough there to complement the coffee and maple flavor. If you're in the mood for a seasonal pumpkin beer, I'd say that the standard will satisfy your needs, while the previously featured Warlock Imperial Pumpkin Stout serves to offer something slightly richer with a mild bitterness. --BA