Emma Bedor Hiland: The digitization of mental healthcare (ep335)

What have been the shortcomings of the various technologies promising to make mental health care more accessible? And what does it mean to maintain a sense of humanity in our systems of care—in a world where therapeutic support of different forms is increasingly digitized?

In this episode, we welcome Emma Bedor Hiland, Ph.D., the author of Therapy Tech: The Digital Transformation of Mental Healthcare. As a feminist scholar, she brings an intersectional approach to analyses of the social and cultural effects of media and new technologies. Her work explores questions of what it means to live well, to be happy, and to pursue health.

Subscribe and listen to Green Dreamer in any podcast app, or read on for the episode transcript.

Musical feature: Trust The Sun by A Woman and the Universe by Lara Bello

 
Whenever I would begin these conversations with people who make or advocate for the use of these tools, they would say that they know there’s a mental healthcare crisis, that they think their tools can really make a difference. But when it came to actually getting these tools into the hands of people who might benefit from them, that’s where what they said fell apart.
— EMMA BEDOR HILAND, PH.D.
 
 
 

If you feel inspired by this episode, please consider donating a gift of support of any amount today!

 
 

Transcript:

Note: Our episodes are minimally edited. Please view them as open invitations to dive deeper into each resource and topic explored. This transcript has been edited for clarity.


Kamea Chayne: Today, we are discussing Dr. Hiland's book Therapy Tech, which looks at the digital transformation of mental healthcare. We begin here as she shares the “aha” moment that led her to see that digital mental health technologies were not what they were hyped up to be.

Emma Hiland: So there I was at the University of Minnesota, learning about research, ethics, mental health, and illness, still pursuing my interest in media — traditional media like film and TV, as well as new media technology — when I learned about a growing field of digital health technologies: smartphone applications that claimed to improve the mental health of their users. I was really optimistic and excited about them. I thought it would be a great way to study media and technology and have an object of inquiry where the outcomes should be great, where people who are able to access and use these tools have improved mental health outcomes.

I attended the American Telemedicine Association's conference that year, which took place in Minneapolis. I believe this was 2015, the summer I was still doing my graduate coursework. As I attended, I sort of had this epiphany or realization: whereas I was very much invested in mental health technologies because I wanted to increase access to them for communities and people who continue to have poor access to mental healthcare and resources in general…

Telemedicine itself is fundamentally an industry, a business. Though I want to improve the equity and accessibility of these tools, it really is, at the end of the day, an industry focused on making money and showing shareholders that it's profitable.

So there was this dialectic, this tension, between wanting to do good for people and still needing to show that there is economic legitimacy to this field, as something people can invest in and build new types of work around. It's not to say the people who create these technologies or advocate for their use aren't thinking about increasing access in the way that I am. But a lot of the time, in the research that I've done and in all of the case studies I talk about in the book, we see that access becomes secondary to the desire for profitability, to showing shareholders that there is utility in investing in these technologies.

So I had this pivotal, paradigmatic shift in my own thinking, because the more I started to learn and research and look into various types of mental health technologies, whether smartphone apps or therapeutic A.I. and chatbots, the more I started to see cracks and false promises. What they said they were going to offer people, namely improved mental healthiness, was instead being driven by economic imperatives and desires.

Kamea Chayne: There's so much in what you just shared. I really appreciate it and look forward to diving deeper into the details. Before we get into that, I think it may be helpful for us to first paint a picture for our listeners of the backdrop of what has been happening in regards to our collective mental and psychological health. For the purposes of this conversation, we're focusing on mental health. But of course, mental health can't really be separated from our greater physical and more holistic states of well-being.

So just to start us off here, what are some concerning trends that we know of that make up this mental health crisis that we're in? What do we know or not know about the key drivers behind them?

Emma Hiland: The data that we do have from the likes of Mental Health America, the CDC, and lots of nonprofits like the Pew Research Center all painted a picture — prior to the COVID pandemic — of the fact that we were experiencing a mental healthcare crisis. The reasons for this crisis were manifold.

They were related to the inaccessibility of mental healthcare services from qualified, competent, trained mental health professionals; the unaffordability of care both for persons who have insurance and for those who don't; and the continuing stigmatization of seeking and receiving mental healthcare services. There's also research about other elements and other barriers to accessing and receiving mental healthcare.

For example, there was a really fascinating study that found that people who have white-sounding names and voices are more likely to be accepted as new patients by mental healthcare practitioners than people without white-sounding names. So we see how the biases and prejudices of practitioners themselves can play an important role in preventing people of color [from accessing care], and how people of color continue to receive lesser levels of mental healthcare than their white counterparts today, in ways that we don't often conceptualize.

Again, that was prior to the onset of the COVID pandemic. The research that has been gathered now (the pandemic is not over yet, although we often talk as though it is) continues to show us that we are seeing higher levels of mental distress than at any other time in history, especially anxiety and depression. Mental Health America offers a screener that people can access online just by Googling it, and it has shown something like a 90 percent increase, I believe, in levels of anxiety among the American population compared to before the COVID pandemic.

One of the other important issues this brings up is whether people are experiencing more and more anxiety, depression, and other mental disorders and illnesses, or whether we are tending to overly medicalize mental states today. Now, we talk about medicalization a lot in bioethics, medical humanities, and health humanities research and scholarship. It is the tendency to take aspects of our lives that historically we wouldn't conceptualize through medical frameworks and instead reframe them as medically treatable.

So a really great example of this would be shyness. Shyness is something that a lot of people experience, but increasingly we see diagnoses of social anxiety and anxiety in general. Some people would say that we have over-medicalized shyness and turned it into something treated with medical interventions that aren't always necessary.

As we see these increasing rates of diagnoses, especially of depression and anxiety, before and during the COVID pandemic, it's just important that we also keep medicalization in mind.

My goal here is not to say that we should not be providing diagnoses when they are helpful. It's to say we should also be aware of the potential for medicalization to affect these rising rates of diagnoses, too.

Kamea Chayne: Definitely there's nuance here: on the one hand, we may be medicalizing “issues” that traditionally were not viewed as needing to be treated or fixed, per se; on the other hand, we do know that there is an evident increase in mental conditions like anxiety and depression. I know there's also an increasing loneliness epidemic, which I'm sure feeds into other forms of mental health crises.

I can't imagine that these things can be separated from other trends, for example, our increasing economic disparity and the worsening climate crisis and so forth. Although I don't know if there could ever be a study showing causation, because it's all happening at such a large scale. I wonder if you have any insights on that.

Emma Hiland: Absolutely. I'm not aware of any large-scale studies like you've described. But in my perspective, and I share this with other folks as well, it's not just about whether you are diagnosed with a particular disorder or not. I like to think about mental health in terms of overall healthiness and our ability to pursue healthiness. We know that there is a relationship between happiness and healthiness, so even if something is subclinical, if we are experiencing distress of any kind, whether due, for example, to the stress of climate change or the instabilities that come from unemployment, especially during the pandemic, it's all very, very much interconnected.

I think that more of us are thinking about the relationships between mental health and overall wellness and happiness due to the pandemic. But certainly these are interconnected dimensions of how we come to experience the world, so they are very much related to our overall mental health.

Kamea Chayne: Going into Therapy Tech, you begin by shining a light on how people tend to fetishize novel technologies as potential solutions for addressing or alleviating our mental health crisis. So what are some examples of the health technologies that have been hyped up and popularized? And what do you think leads us to have this tendency to focus on techno-fixes?

Emma Hiland: So in preparation for this interview, I went back and looked at a number of the technologies discussed in the book. I conclude the book by saying that by the time somebody reads it, it's possible that not all of the tools and technologies will even exist anymore, or the ways that they exist will be vastly different. But some of the technologies that continue to be discussed and popularized by media today include Woebot, one of the therapeutic, artificially intelligent chatbots I discuss in the book. Woebot is wildly popular and very successful. I saw that they just raised the funds to start a program especially for women with postpartum depression. So there are new and innovative ways that that sort of technology is being used.

There are still so many types of smartphone applications that we can download, either for free or at very minimal cost, through the iTunes and Google Play stores. I believe there are somewhere between ten and twenty thousand of these, but it's hard to get an exact figure because they go on and off the market and are categorized in different ways. Some are considered health and lifestyle apps, and some are categorized under fitness. These technologies are not slowing down.

In the book, I try to paint a broad picture of all of the different types of tools that are really out there. I begin the book by discussing an ingestible digital therapeutic called Abilify MyCite, which is intended to track whether persons prescribed Abilify, largely for schizophrenia, are taking their medicines. But the case studies also cover smartphone applications, therapeutic chatbots, and A.I. psycho-surveillance. There is a whole chapter about the ways that social media platforms especially are designed to encourage us to monitor the mental healthiness and mental states of others whom we know, and about other opportunities for us to volunteer explicitly on platforms to monitor others' mental healthiness and mental states. I also looked into 7 Cups of Tea, which I discuss in the book, and I saw that it's still growing in popularity as a platform.

You know, a lot of these tools have really seen a boom due to the pandemic. Media coverage and the statistics say that more and more people are turning to online therapy, which is, of course, not a bad thing. I do discuss tele-mental healthcare, or screen-based therapy, in the final chapter of my book, and just like with every other technology I discuss, I emphasize the need to acknowledge that problems come when we rely solely upon technological infrastructure to make mental healthcare services possible, because not everybody can access that infrastructure in the first place.

Now, I'm not sure I answered your second question...

Kamea Chayne: Oh, just if you have ideas on what it is that leads us to have a tendency to focus on techno-fixes.

Emma Hiland: I think that…

Our tendency to think that technological fixes or solutions are possible largely comes from our culture of individuation in the United States and Western countries in general.

Theoretically, at the beginning of my book, I talk about a framework called neoliberalism, which borrows heavily from the work of Michel Foucault. The way that I describe it, articulate it, and use it to make my arguments about these technologies is that we see a diminishment of widely-accessible resources, and instead we start to turn toward ourselves, and we're told that we are competent, able, and empowered to take care of ourselves effectively if provided with the necessary tools.

For example, throughout the pandemic, we saw a lot of public health clinics shuttering, and in their place we've seen increased emphasis upon using technologies like smartphone applications and tele-mental healthcare services, things which we assume most of the population can use and access, instead of having support mechanisms out in our communities or easily accessible, reachable mental healthcare professionals.

Kamea Chayne: A key point you share is that techno-solutionism disregards the reality that racism, sexism, classism, and other discriminatory belief systems contributed to the mental healthcare crisis in the first place.

And to this point, some may ask: does digitizing mental healthcare and advancing technologically-driven solutions help make this form of care and relief more accessible and democratized? For example, people who can't afford to work one-on-one with a therapist might be able to access digital tools that aim to provide that type of support in a different, definitely more limited way, but to a broader population. So in essence, could these tools help minimize disparities in healthcare, or has the result largely been the opposite, in that they tend to perpetuate or even worsen these inequities?

Emma Hiland: There are two elements to the response that I'm going to give. On the one hand, we do tend to think that technologies can democratize access and resources because, unlike people, we conceptualize them as unbiased, apolitical, and acultural. Whereas, as I mentioned earlier, the research tells us a psychologist might be less willing to take on a client of color who has a Black-sounding name than a potential client with a white-sounding name, we imagine that an A.I. or an algorithm wouldn't care about something like that.

The problem is that we have enough research to show us that technologies are not apolitical, acultural, or value-neutral. They reflect the biases of the people who make them.

So even if biases are unintentionally embedded within technologies, they are often there nonetheless. So when it comes to making healthcare decisions on matters of mental health, it becomes really important that we are able to be reflexive about the limits and capabilities of A.I., algorithms, and other technologies, which we might think, or ideally hope, can do a better job than us imperfect people, but which really can't.

This dimension of technologies, which I conceptualize as falling within the umbrella framework of discriminatory technologies, owes a lot to other scholars, such as Simone Browne, who writes about prototypical whiteness. She emphasizes that when we create technologies for any user, we assume that the typical body is going to be white. Also, Safiya Noble, who wrote the book Algorithms of Oppression, came up with a framework called technological redlining. She emphasizes the ways that search tools like Google have algorithms with a sociocultural effect: they perpetuate negative race- and gender-based stereotypes.

My work also owes a lot to the scholarship of Ruha Benjamin, who writes a lot about discriminatory technologies, whether they are medical or scientific technologies or technologies that we use in our everyday lives. A great example from her scholarship would be the way that even park benches can be discriminatory. If, for example, you have a park bench with armrests in the middle of it, it discriminates against people who are unhoused or homeless because it tries to dissuade them from lying down there. So these discriminatory technologies don't just exist in the context of medicine and mental health and illness; they're part of our everyday, normative lives and experiences, too.

Now, the other thing that I wanted to make sure I mentioned in response to your question is that when I asked the interviewees for the book how they conceptualize the demographics of people who use mental health technologies, it turned out they hadn't put much thought into it.

My interpretation of what they told me is that they just are acting upon this normalized belief in prototypical whiteness, even though they're not doing it intentionally.

They just sort of imagined users as being like them, and the people I spoke to were largely white men. So when I asked questions that were a little bit more difficult to ask, such as “What do you know about the demographics of the users of your tools?” or “How do you conceptualize marketing to other populations?”, the responses that I got were largely surprising. It wasn't so much that they wanted to market their tools, advertise them, or get them into the hands of people more diverse than the power-users, who are largely white, young, and female. It turns out they just wanted to be able to market more effectively to the same populations over and over.

So this is where that problem between wanting to help people but also wanting to be financially stable and make money and be profitable becomes really apparent. Because whenever I would begin these conversations with people who make or advocate for the use of these tools, they would say that they want to help people. They know that there's a mental healthcare crisis. They think that their tools can really make a difference. But at the same time, when it came to actually getting these tools into the hands of people who might benefit from them, that's kind of where a lot of what they said fell apart because they weren't actively trying to do that.

After the book was written, I went back and looked at a lot of the technologies I analyzed for it, particularly smartphone applications and guided meditations. One of the things I mention in the book is that the guided meditations largely feature the voices of white-sounding women, and I will say that there are increasingly smartphone applications offering guided meditations in other languages and with more diverse-sounding voices. So we are seeing some changes there, which is a good thing.

Kamea Chayne: So with all this in mind, you've really reframed the question from how we can create technologies that address our mental health crises to what these technologies reveal about our beliefs and practices related to the self, medicine, and culture. What revelations came with this perspective shift, and how do we situate these critiques within the broader economic and political system that may have incentivized the field of mental healthcare to advance and unfold in these ways?

Emma Hiland: Even now, with the book written and having worked on this research for years and years, I often ask myself: what do we do next? That's because of the shift you've mentioned, the idea that we have to situate everything related to digital health industries within broader-scale economic frameworks, cultures, behaviors, and patterns, which far predate the emergence of these industries and technologies.

That presents a problem: how can we reconcile the demands of capitalism with wanting to ensure that people have access to adequate, or better-than-adequate, high-quality mental healthcare resources when they are experiencing any level of distress? Because again, we have all of this data telling us that, regardless of whether the high levels of anxiety and depression we're seeing are due to over-medicalization or whether we truly are living in a world where people are increasingly experiencing mental distress, we still want people to be able to flourish, and we still want people to be healthy and happy. So again, I feel like I don't have great answers.

But in the conclusion of my book, I do emphasize that rather than directing immense resources toward novel technologies, saying that you can have a doctor in your pocket on demand if you have a smartphone application that claims to improve mental healthiness, we need to take a step back and really invest those resources, that time, and that energy in training qualified, human mental healthcare professionals who have a humanistic approach to the work that they do and who are aware that there continue to be disparities between white folks and people of color when it comes to accessing and utilizing mental healthcare resources.

While we can't necessarily eliminate technologies as accompaniments to receiving quality mental healthcare services, they should be part of a spectrum of care, and they shouldn't ever really be the end-all, be-all of care itself. If care is predicated upon access to technologies and technological infrastructure, then we have to take an even greater step back and make sure that everybody in our community and our world has access to things like broadband internet. Because if we make technological accessibility a prerequisite to mental healthcare services, then all we end up doing is widening the disparity between who has access to care and who does not.

If we are more reflective about the role of technology in our lives, and the unrealistic expectations and problematic assumptions we have of technologies, techno-solutionism, or techno-fixes, then we can redirect resources in ways that will make positive differences.

Kamea Chayne: I really see a lot of common threads between these misguided or limiting approaches to mental healthcare and the techno-solutions that get a lot of hype and attention as ways to address our climate crisis, both of which tend to individualize the problems. Like the concept of carbon footprints leading people to fixate on lifestyle changes, which do help, but ultimately don't lead to the systemic shifts that can transform the underlying conditions that led to the essential illness of the fossil fuel economy, which a lot of our lifestyles have been locked into.

So I wonder if it's just part of our culture of individualism, as well as the neoliberal influences, that leads to a lot of these innovative, incrementalist fixes that only address the symptoms of the crises in various ways, but none that really reach and fundamentally get to the roots of what created the problems to begin with.

Emma Hiland: Yes, I would absolutely agree. That's a really great articulation of the broader-scale problem that is happening. Mental health, mental illness, mental distress, and mental disorder are all just part of this culture where we emphasize that individuation of responsibility. Just like you said: you start to carry your metal straw around and tell the restaurant that you don't want a plastic straw, and that will make a difference.

Yes, it does make a slight difference, but what we really need are large-scale solutions and large-scale support from communities and organizations that are bigger than just one person. So it is the same thing in a lot of ways with these mental healthcare solutions, or purported solutions. Rather than having access to a larger-scale system where we do things like provide mental healthcare resources for people who need or desire them, you can just download this app on your own. Like you pay your smartphone bill every month, you just pay two dollars a month for a subscription to an app, which some people might believe will decrease your overall anxiety or stress levels. So let's not do anything to address the root of your anxiety or stress, or of how you feel your quality of life is diminished.

Let's instead just use technology to put the onus on you to come up with your own solution, rather than decreasing the burdens, the overworking, and the anxiety-producing things that we have to do in our everyday lives. Let's just put the onus on you as an individual. This is where, theoretically, I use neoliberalism in the book, because what it emphasizes is that individuation of responsibility again. Rather than having access to large-scale support mechanisms, we are told that we are empowered: we have the technologies and the knowledge to take care of ourselves effectively. So why would we want to be a drain on large resources? We don't want to be a drain. We want to show our cognizance of social responsibility and care for ourselves effectively.

So what we end up doing is valuing our own individualized interventions and this idea that we can care for ourselves effectively. But at the same time, it perpetuates this myth about those who can't do so — those whose needs aren't satisfied with an app or by talking to an A.I. chatbot — that there is something wrong with them, because if they can't fix themselves using these tools at their disposal, they're not like us.

Even culturally, in terms of our beliefs about ourselves and each other, these technologies do perpetuate this gap between those who can self-care effectively and those who can't. We tend to see those who cannot, who need more resources and more help, as beneath us.

Kamea Chayne: Yeah, you can find this pattern across so many parts of our society, including in the increase in diet-related illnesses, where people are told they just need to be more educated about what choices to make. That completely disregards the fact that a lot of communities don't even have access to fresh foods.

I know, as you mentioned earlier, you've also looked at some of these doctor-on-demand services. I hesitate to call these things futuristic, because that is not a vision of a future I personally want to hold. I do think there is a place for technology, but I worry about technologies replacing humans across the front lines of healthcare. Even with the statistics and disparities aside, and I don't know how to articulate this properly, it just feels like it loses a key essence of what “care” even means.

Because what if a part of the healthcare and healing process for people is simply being in the presence of someone who we know is listening deeply, who sees us as a fellow human, with whom we can build relationships based on mutual care and empathy? There may not ever be data to illustrate these points because it's kind of an embodied living feeling. But I would be curious to hear if you've had a chance to explore what it means for our humanity to be digitizing and robotizing care.

Emma Hiland: That is a really important point, and what I will say is that I do know there's quite a bit of time, energy, and financial resources being put toward trying to improve robots and artificial intelligence. You've probably heard of examples of these tools being given to elderly people to help address their loneliness. It's the same problem that I talk about in my book. Rather than thinking about people as a resource, and directing time and energy toward recruiting people to provide care for other beings, we think that technologies can replace us in so many of these ways.

So, increasingly, we see publications and research trials in which elderly folks are presented with tactile robots they can grow attached to, which are supposed to alleviate their feelings of loneliness and social isolation. And it's true: these trials have absolutely shown that the robots alleviate those feelings. But again, at what cost? If we believe that there are technological solutions, whether it's A.I. that we can converse with instead of talking to other people and having those human connections, or tactile robots that we can pet as if they were our pets, with which we could have human-animal connections, what is the overall cost of that? And with A.I. too, as you are concerned about: what does it mean to rely upon non-humans for human connection, or to have our human need for connection be satisfied by non-humans?

Especially during the COVID pandemic, so many of us have realized that in order to feel like we are living and leading fulfilling and meaningful lives, we do need human connection. Even though increasingly, many of us have technologies that make it more possible for us to connect with others, we continue to feel socially isolated and lonely, even though we can log on to our social media and see what other people are doing at all times of day or night. We can text and we can video chat.

But there still is nothing that replaces real human-to-human contact.

So I am, like you, bothered when I see that, increasingly, medical professions are turning toward A.I. of various forms to do things like triage patients, or to have conversations with potential patients on the phone to figure out what they need from an actual human medical provider before they're allowed to make an appointment. Or if you go onto a healthcare company's website or platform, you might be able to engage with a chatbot and tell it what is going on in your life. So there are a lot of ways that we are increasingly seeing the turn toward A.I., and I don't think, for the most part at this point, that medical professionals want A.I. to ever take their place in any way, shape, or form.

Instead, those who are accepting of A.I. being part of their workflows see it as able to augment and improve what they can do, for example, by looking over patient medical records to see if they missed something in a patient's medical history when making a diagnosis. But that is for the folks who are accepting of different types of A.I. in the workplace. In medical fields, there are some people who are adamantly opposed to using these technologies. They really want care to be delivered solely by humans, and they are more fearful about the repercussions of relying upon A.I., robots, and other technologies.

But due to costs, again, a lot of the time it is more affordable for companies to use A.I. rather than hire people to do things like triage patients or see what prospective patients need from them as medical professionals. So I don't see A.I. going away any time soon in medicine and its various fields and specialties. I think we just need to be cognizant of whether it is actually helping us: making clinicians' and practitioners' workflows more effective, and making it possible for them to interact with and see more patients than they would have been able to otherwise.

Or whether, again, these technologies are creating new problems that were unintended and unforeseen, whose repercussions we then need to deal with. We have to be cognizant of all of these potential questions and problems.

Kamea Chayne: Another concern you have with the development of healthcare techno-solutionism, if left unchecked, is that it will result in intensified forms of psycho-surveillance. So how might this play out? Because psycho-surveillance feels more abstract in its actual impact on everyday people, what sorts of issues should we be aware of there?

Emma Hiland: When I first started discussing psycho-surveillance in the book, I did so in terms of one of Facebook's algorithms that came out a few years ago. When it was released, it was heralded. People really praised Facebook for the innovation it offered, which was to scan every status update, every post that people make on Facebook, for trigger words, keywords, or responses from other Facebook users, and to flag posts as potentially indicative of a user having the intent to self-harm or to harm others. When I say psycho-surveillance, it's a combination of surveillance practices and that which is psychological, or related to the “psy” disciplines, a framework that comes from cultural studies, too.

So psycho-surveillance really takes us beyond just Facebook's algorithm. There are also platforms predicated solely upon providing psycho-surveillance services through volunteers. In the book, I discuss two platforms in particular. One is called 7 Cups of Tea, a platform where you can go on as a listener and be trained, for about ten or twenty minutes of your time, to provide empathetic listening to people who are experiencing a whole spectrum of diagnosed mental disorders or subclinical mental distress, or who just want to chat. I also discuss a text platform called Crisis Text Line, which still exists today, and how both platforms ask these volunteers to monitor a person's mental healthiness, to engage, and to be reflexive, thoughtful, and empathetic listeners to people who are looking for a listening ear without paying for the services of a mental health professional. And that could be for a variety of reasons: it could be because they are dealing with a very recent trauma, or because they don't have the financial resources to pay for a qualified and licensed mental healthcare professional to engage with them.

But I conclude the book by also talking about my fears for psycho-surveillance in the future, particularly in the context of big data and smart data, and the ways that we are all constantly generating more and more data about ourselves.

With the advent and widespread popularity of these technologies, which are gathering information about us all the time, it is possible for all of that data to be aggregated, analyzed, and used for predictive purposes...

To try to determine which people, now or in the future, are going to experience mental distress or mental disorders, or potentially try to enact a form of violence attributed to alleged mental disorders. The problem becomes equating the potential for violent acts with mental illness, because the media has long played a role in perpetuating the myth that people with mental illnesses and mental disorders, and people who experience mental distress, are more likely than the rest of us to behave in violent ways, when we have research that says that isn't true.

When we had Donald Trump, during his presidency, saying things like perhaps we can use data that people share on social media to try to predict when violent crimes are going to happen, that was really troubling. Not a lot of media attention was paid to that statement, because most of us were able to disregard it as too futuristic, almost Minority Report-ish: the belief that we can predict who will be violent, and when, and what their crime will be, and that maybe we can arrest them and stop them before they ever act. It's this idea that technologies can be used to predict our behaviors in the domain of mental healthiness and mental illness.

Whereas with other medical disorders and illnesses beyond the psychological, we can have smart data: data which allows us to predict when we are becoming ill or when we might experience a relapse. One of the examples we often talk about with digital health more broadly is diabetes, and how we now have really great predictive technologies, including smartphone applications, that are so much easier to use and less invasive than what existed even ten or twenty years ago, when you had to prick your finger all day long to check your glucose levels.

But this false belief that we can similarly have predictive data in the realm of mental health and mental illness leads us to think along the lines of what Donald Trump suggested, which is that we can predict crime and violence, particularly from mentally-ill persons, whether diagnosed or not yet diagnosed. We're not at that point, but it is a dangerous imagined future to think that we can use technologies and big data to make those predictions. Especially because in the next few years or so, we are probably going to see insurance companies requiring many of us to use these technologies, since depression and anxiety have been shown to cause huge economic losses, and insurance companies want to make sure that the companies who pay for their services keep their costs down.

So the idea is that if these tools and technologies can improve a lot of our mental healthiness, then yes, we will be asked to use them. But then what happens with the data after that? My overarching fear is that as we are increasingly required to use these technologies, we will lose control of data about ourselves and our mental healthiness. Now, that's not always necessarily a bad thing, because if it is revealed that many of us are experiencing higher levels of distress, anxiety, depression, and other mental disorders, whether diagnosed or subclinical, that can also destigmatize mental distress, which is, of course, a good thing. But we still want to retain control over personal information, especially that related to our mental health, because it continues to be highly stigmatized.

Kamea Chayne: There are, again, a lot of nuanced layers here. I do hold that fear as well, because there are just so many unknowns, especially knowing that it's the profit motive that is driving most of these “advancements.” That is what I fear most, because it's not that we can't advance in ways that actually center care and our collective well-being; it's that the profit motive is the primary thing driving these sorts of advancements right now.

So as we look to the future, it's of course really important for us to work towards transforming the conditions that created our varied health crises to begin with. But even as we work to address the roots, we still need to rebuild stronger systems of support and work on community building in ways that are more human-centered to be able to offer people the various forms of mental and physical and holistic health support that they need.

I know you mentioned that there are no easy answers as to what we do going forward. But what does it look like, in your mind, to build a human-centered form of care and a network that might be rooted in community?

Emma Hiland:

Rather than prescribing a solution from one person, I think that communities themselves can often come up with really great solutions.

So that's one of the reasons why I hold back a bit in Therapy Tech from saying that there's one thing we should do. Of course, I think that devoting resources to training and recruiting more competent mental healthcare professionals can help us a lot by increasing the accessibility of mental healthcare services. But I also recognize that we're probably never going to see a total turn away from technological solutions.

I do want us to remember that care is a human ethic, and perhaps we can have technologies which are useful for improving the spectrum of care that persons can receive. But if my work can just help a community or a person recognize that there are others like them who are concerned about potential losses of that ethic of care, which is again a human ethic, then I think my work really has done its job, which is to raise people's consciousness.

*** CLOSING ***

Kamea Chayne: What's an impactful publication you follow or a book that's been really profound for you?

Emma Hiland: Probably the most perspective-changing book that I've read recently would be Thich Nhat Hanh's Peace Is Every Step, which reminds me that everything we do can be done mindfully. Largely based on my research, I've become very interested in meditation and mindfulness practice, and I absolutely love Thich Nhat Hanh's work.

Kamea Chayne: What is a personal motto, mantra or practice you engage with to stay grounded?

Emma Hiland: I just have this belief that if the work that I do, whether it's research or teaching, reaches even one person or a single student, and somehow what I do or what I provide changes their thinking or causes them to have a realization that leads them to a fulfilling life, then I feel like I have been successful. So I just think about it in terms of changing one person's perspective. That keeps me going.

Kamea Chayne: What are some of your biggest sources of inspiration right now?

Emma Hiland: I recently moved to Texas after living in the Northeast for most of my life. One of the things I've realized is that nature and the plants here are all so different and so beautiful. I've taken nature for granted for a lot of my life, just assuming that I knew about plants and everything that grows. Being in a completely new climate, a new place, and a new area of the country, and seeing all of the new things that can grow here, causes me to be more conscious of my environment. So that's been a really great change for me.

Kamea Chayne: Emma, thank you so much for sharing this time with me and sharing your wealth of knowledge and learnings. What final words of wisdom do you have for us as green dreamers?

Emma Hiland: Forming communities based upon wanting to address the challenges that we see in our world today—and there are certainly many of them—should provide some solace. So should the idea that you are not alone, that other people feel the same way as you, and that we can work together to create a better world.

 
Kamea Chayne

Kamea Chayne is a creative, writer, and the host of Green Dreamer Podcast.
