
Can We Design Algorithms For Better Mental Health?

Updated: Feb 12

Image: A teenage girl sits on a couch in a dimly lit room, absorbed in her phone, looking sad and pensive. Source: freepic.com

We explore an emerging question in social media: can we move beyond algorithms for addiction and design systems that support mental health? Our signals research reveals a complex landscape where immediate solutions, like teaching algorithmic literacy, exist alongside calls for emerging alternatives and policy changes. Through examining current research, platform experiments, and community-focused models, we uncover possibilities for how social media might evolve beyond engagement metrics towards wellbeing.





Selected Links:


  • “Neighborhood Steward Fellowship,” bringing together leaders of local-oriented online spaces to experiment with fun new practices and ideas, https://newpublic.org/fellowship



Episode Transcript:

Lana: Welcome to Signal Shift, by Horizon Shift Lab. We're your hosts, Lana Price, Raakhee Natha, and Sue Chi. Each episode, we explore the latest signals in technology, culture, and society, uncovering insights that will impact our daily lives in the future. Join us as we shift perspectives, explore possibilities, and delve into real changes in our world. Curious to learn more? Go to horizonshiftlab.com.


[0:37 Mental Health in the Digital Age]


Lana: Hello, and welcome to another episode of Signal Shift. This is Lana. And this week, we are talking about the future of mental health and well-being, which has become increasingly complex in the digital age. 


So this is part of our Health and Wellness series, and I am really looking forward to this conversation. It feels very timely because when we're talking about mental health, we are talking about our ability to handle life’s stresses, to realize our potential, and contribute meaningfully to our communities.


And there have been a lot of studies about the impact of our digital lives on our mental health. So we know that we're technically more connected than ever, yet somehow, we feel increasingly isolated. Our phones ping constantly with notifications, news alerts, and updates. They deliver quick dopamine hits, anxiety-inducing comparisons, and an overwhelming flood of information. 


But we know it's not all doom and gloom, so we're really looking forward to hearing what signals we've found about the future of mental health, resilience, and connection in the digital age.


[2:11 Rethinking Social Media Algorithms]


Sue: Thanks, Lana. This is Sue. I guess I'll begin by saying that I'm glad you talked about it not being all doom and gloom. Because I think one of the surprises, as I looked into this week's theme, was that the research actually is not conclusive on how exactly social media is impacting our mental health, and specifically, I'm talking about kids. 


And so one of the things I really wanted to focus on is bans, because we've seen the news about Australia's ban, and a lot of school districts are either considering a ban for students or have already gone through some of the steps to implement one.


So I listened to a panel by the Child Mind Institute, which focuses on neuroscience and behavior in children. They were talking a lot about the science and what's really behind the mental health issues that kids are facing today. That was a learning experience for me: there still has to be a lot more development in research methodology before we can really know this, because right now we know it's an issue mostly from anecdotes.


But one of the things from the panel that I really liked was the idea of the specific ways the algorithm impacts your mental health if you are on social media. There are studies showing, for example, that if you have an eating disorder, you are more likely to get posts specifically about eating disorders and appearance than you would otherwise. And that extends to children, too.


One thing I found encouraging: there was a researcher from Northwestern on the panel, from the Center for Behavioral Intervention Technologies. They found in a study that if you purposely go where the kids are, which is on social media, you can create what they called "single-session interventions" around some of these mental health behaviors. And they found that in the digital space this can sometimes be as effective or even more effective, and definitely less expensive, than some of the psychotherapy or pharmacotherapy interventions you might otherwise receive.


So there has to be a lot more data on that. But the assumption was: you have to go where the kids are. And if that's where they are, there are ways you can interject and intersperse these kinds of sessions into their feeds. That led me to ask: what is controlling the algorithm of these social media feeds, and how can you change it?



So I think this whole idea of algorithmic literacy is important right now and is going to be really important into the future. There's not enough education on that even for adults, so how can we expect to give it to children? Overall, I think that's a really important piece of the work going forward. So I found that part encouraging, and hopefully what we'll see is a turn in the research toward better methodologies, so we can really see what the links and the drivers are for children's mental health.


Lana: That's really interesting. So I guess I always thought that the algorithm was kind of a black box. So is that implying that one can have more control over it?


Sue: That's what it was implying. Although, to your point, we know it's very hard to figure out what these algorithms are doing. So LG, to my understanding, basically pre-programmed a feed of what were, quote unquote, very positive posts. LG had an entire feed of them, and by liking or commenting on all of them, you change what the algorithm then does to your own feed. So you're not directly controlling the feed in the sense of swapping in a totally different algorithm.


And then the Child Mind Institute was talking about some of the other things you can control, like limits on your social media time, which accounts you're following, and what liking something means for your feed. So yeah, a lot more to go.
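
A minimal sketch of the mechanic Sue describes, where deliberately liking or commenting on certain posts re-weights what the feed shows you next. The topic names, boost values, and ranking rule below are illustrative assumptions, not LG's or any platform's actual system:

```python
from collections import defaultdict

class ToyFeed:
    """A toy engagement-weighted ranker: each like or comment nudges a
    per-topic weight, and candidate posts are ordered by those weights."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)  # neutral starting weight per topic

    def register(self, topic: str, action: str) -> None:
        """Update a topic's weight when the user likes, comments, or skips."""
        boost = {"like": 1.5, "comment": 2.0, "skip": 0.7}.get(action, 1.0)
        self.weights[topic] *= boost

    def rank(self, candidates: list[tuple[str, str]]) -> list[str]:
        """Order candidate (topic, post) pairs by current preference weight."""
        return [post for topic, post in
                sorted(candidates, key=lambda c: -self.weights[c[0]])]

feed = ToyFeed()
# Deliberately engaging with upbeat content, as in the LG-style experiment,
# pushes that topic ahead of everything else in future rankings.
for _ in range(5):
    feed.register("optimism", "like")
print(feed.rank([("doom", "post A"), ("optimism", "post B")]))  # ['post B', 'post A']
```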


Lana: Super interesting. Yeah, thanks, Sue. I'm hearing a lot of intentionality in there, like more mindful social media use. What about you, Raakhee?


[7:44 Amnesty International's TikTok Study]


Raakhee: Yeah, so Sue, you and I were on the same page around the algorithm, but I was on the dark side. So maybe sharing a little bit of this will answer some of the questions you were having, Lana, like: how does this work? What kind of control do we have? And then, Sue, you spoke about algorithmic literacy; I'll add a little around the concept of algorithmic design, and what we can actually change there, or what these tech companies should be changing there.


But yeah, my signal is basically a report from Amnesty International, which, as we know, is a humanitarian nonprofit. It's completely independent and has about 10 million members worldwide. Their researchers did a study and published a report, about a year ago, called How TikTok's "For You" Feed Encourages Self-Harm and Suicidal Ideation. It's based on desk research, surveys, focus groups, and interviews. In that part of the research they focused primarily on about 300 children, kids from two of the countries with the highest social media usage, which happen to be Kenya and the Philippines. Very fascinating, these two countries.


But they also did a technical investigation with the Algorithmic Transparency Institute and AI Forensics, and a few other technical partners where they looked at data across Kenya and the United States and the Philippines. So they brought the US in here, and they did manual experiments in the US, in Kenya, and the Philippines. 


And for the technical part, they created automated accounts. They set up about 40 automated accounts and gave them personas with different levels of interest in mental health-related content, basically to mimic different kids' behaviors, on TikTok in particular.


And 20 of these accounts simulated 13-year-olds in the US. And the other 20 were simulated users in Kenya. And they chose that age of 13 because it's really that entry to teenhood and those early years that seem to be where the concern is because of the vulnerability of kids at that age.


And there's a lot that we know from the neuroscience perspective about the vulnerability of the brain at that age, how we perceive the world, how we take in messages, how important peer belonging, and that sort of thing, is. 


So ultimately, the report details all of this, and what it came down to is what happens with each hour spent on TikTok's For You feed. If I came on and expressed that I'd had a bad day, or even if I was just looking for fun content but used the phrase "had a bad day," the problem is that just that wording was going to take you into a funnel of more content from other people having bad days.


Not necessarily bad in essence, but what happens is there's a lot of romanticizing and regurgitating of the same mess. Like: you're not alone in your pain. Yeah, you are depressed, you know? Which is maybe not the healthiest thing for a kid who's 13 and depressed and has no other way to work through it.


And it's just too much of that content that they start seeing. So with each hour spent on there, the video clips recommended to the teen accounts increasingly showed children and young people crying, or alone in the dark, overlaid, kind of like in movies, with text expressing depressive thoughts, or faceless voices describing their own self-harm and suicidal thoughts.


And the set of algorithms basically analyzes users' interests and engagement patterns. And it matches. The algorithm is all about matching. It's not about dictating this is the intervention you need, right?


And so that content keeps coming back to them, and until we fix that, almost as a design philosophy and a principle, it's always going to do that. Now, if your interest is in cooking and fashion, that's pretty great. That's simple, that works easily, and so does the revenue generation that comes from it. But for kids who are in pain, whether or not it was meant to be designed as a rabbit hole, it unfortunately does become one.


So the blanket rule is more of what you want, what you're feeling, instead of how to get help and what not to do. And they haven't found a good enough way to circumvent that. 
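
A toy simulation of the funnel Raakhee describes, where an engagement-matching recommender keeps concentrating a feed around whatever a user lingers on, hour by hour. The topics, probabilities, and boost factor are invented for illustration; this is not TikTok's actual algorithm:

```python
import random

random.seed(0)
weights = {"fun": 1.0, "sports": 1.0, "sadness": 1.0}

def recommend(n: int) -> list[str]:
    """Sample n posts with probability proportional to each topic's weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=n)

# A simulated teen account that pauses on "sadness" posts each hour;
# watching to the end counts as implicit engagement.
for hour in range(1, 6):
    batch = recommend(20)
    for topic in batch:
        if topic == "sadness":
            weights["sadness"] *= 1.3   # matching: serve more of what was engaged with
    share = batch.count("sadness") / len(batch)
    print(f"hour {hour}: {share:.0%} of recommendations are sadness-related")
```

Even starting from equal weights, the share of sadness-related recommendations climbs quickly, which mirrors the hour-by-hour narrowing described in the report.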


And Sue, I really appreciated the examples you spoke about, and the fact that even LG, as a product company, is trying to design differently. And so the space where this is really sitting now, I think, is policy and law.


[12:59 Policy vs. Design: Addressing Algorithmic Addiction] 


So for example, California recently adopted a law called the "Age-Appropriate Design Code Act." And even then, it's faced battles, right? Because designing differently impacts revenue, so you're fighting the age-old debate here. But there's a lot happening in policy. Harvard and Boston Children's, for instance, are focusing on the eating disorder side of it, because it's a similar thing; it takes you down that same kind of hole.


But really, I think the realization after you read a report like this, and several others about the algorithm, is that if the algorithm is going to give you more of what you want, it's addiction, right? It does encourage addictive behavior. And it's very hard to argue otherwise; it's the same as saying sugar's not addictive. Of course it is. Or alcohol's not addictive. Of course it is. And some of us, unfortunately, are just more susceptible. And at that age, a bigger percentage of kids, just because of the nature of their brains, fall into those traps much more easily.


So a big part of this is calling it out. And I think, Sue, you alluded to this as well: part of algorithmic literacy is saying the algorithm equals addiction. It's not a debate; this is how it's designed. We need to understand that so we know what we're working against, and kids need to understand it, which is where literacy comes in. But a really big side of this is design, and I think the policy and the laws are now trying to force that better design.


Because surely, if we can design for addiction, we can design for appropriate behavior. So when you hit something and start to see content that suggests, oh, I'm having a mental health issue, it should be pointing you to resources. There should maybe be a note coming up saying, hey, we noticed you watched this, maybe you want to go here next, right? That can't possibly be impossible to do if we have robots in this era. So it comes back to the tech companies and how they're designing things.
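
A minimal sketch of the kind of wellbeing guardrail Raakhee is imagining, where the same signals used for matching could instead trigger a resource prompt once sensitive content dominates a session. The topic labels, threshold, and resource card below are hypothetical:

```python
SENSITIVE = {"self_harm", "eating_disorder", "depression"}
RESOURCE_CARD = ("We noticed you've been watching some heavy content. "
                 "Here are support resources you might find helpful.")

def apply_guardrail(session_topics: list[str],
                    next_batch: list[tuple[str, str]],
                    threshold: float = 0.5) -> list[str]:
    """If sensitive topics dominate the session so far, stop amplifying them
    in the next batch and surface a resource card first."""
    share = sum(t in SENSITIVE for t in session_topics) / max(len(session_topics), 1)
    posts = [post for topic, post in next_batch
             if not (share >= threshold and topic in SENSITIVE)]
    if share >= threshold:
        posts.insert(0, RESOURCE_CARD)  # point to help instead of more of the same
    return posts

session = ["depression", "fun", "self_harm", "depression"]
batch = [("depression", "clip X"), ("cooking", "clip Y")]
print(apply_guardrail(session, batch))
# prints the resource card first, followed by only the non-sensitive clip
```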


Lana: Totally true. I guess I'm wondering, Sue, if you have a reaction to that, given you're both kind of looking at different sides of the same coin.


Sue: Yeah, no, absolutely. Raakhee, I love that you brought it back to what the neuroscience is saying for children specifically. We know that the brain continues to develop even into your 20s, and when you see brain scans of kids, they look different when the kids are on, and addicted to, social media versus not. So that is a critical public health issue.


Yeah, I would agree with that. I think some of what we're seeing is the question of what the tech companies can do, and what advocacy, the public movement, and the parental movement can do for kids with the tech companies. And unfortunately, a lot of the articles I saw were basically saying: we're trying so hard, we're not getting anywhere, so in the meantime, this is what we can focus on.


And I think that's where these two things align. So yes, you absolutely have to push for better algorithmic policies and design. At the same time, that's going to be a huge battle, so what can you do in the meantime? Again, a lot of it comes back to your own onus, your own choices, your agency. What can you do to make sure you still have agency and you're not being run over by these things, right?


And one interesting thing I saw: the UK National Curriculum is up for review, and the London School of Economics just published a whole policy brief arguing that algorithmic literacy has to be included, not just as its own subject but integrated into all the subjects, and that teachers need proper professional development in order to work with students on this. I mean, it's a huge undertaking.


So yeah, I would not say that our visions are opposed at all.


Raakhee: Yeah, no, I was just giving the backstory to exactly why we needed what Sue is saying we need. This is why we need it, why it's important, why the algorithm is dangerous.


And I totally agree with you, Sue. The battle is going to be so hard, because it's in the hands of a few companies who have more and more power now.


Why should they change things? They're not going to change easily. So we have to do exactly what you're saying. It's like, how do you take back power, right?


Lana: I was going to say the exact same thing. I mean, given where we are today, with the recent news about the tech companies and how they're aligning with the current administration, and this idea that the US might buy a 50% stake in TikTok, would we really implement ideas that go against the business model and the revenue generation?


So I align with you on the major skepticism that that level of change would happen, even through policy. But yeah, I think this is interesting.


[18:39 Front Porch Forum: A Different Approach]


Lana: My signal is actually potentially a good complement to both of yours. It's an example of "slow social media," from Vermont, called Front Porch Forum. It's a social platform, a local network, that nearly half of Vermont's adults are members of. They use it for substantive, civil discussions about everything from elections to lost rabbits. People across the state describe the forum as the "glue that holds our community together" and talk about it as the centerpiece of community engagement in their towns.


And essentially this platform rejects all the standard social media features. They don't have feeds, they don't have likes, they don't have an algorithm. They have paid human moderators who review every post, and very strict rules of engagement. For example, one rule is "attack the issue, not the neighbor." Everyone uses their full name, and you have to confirm that you live in the state.


So they're really focused on, I think, the moderators, which seem to be a really key piece of it, but also on creating a very safe space. And 81% of people who use it say they feel both informed and safe. It sounds radically different from some of the other online spaces we have, where it's anonymous, people are attacking each other, there are trolls, and folks come out of everywhere saying anything.


And so it's just really modeling what rich online interaction could look like. And so I thought it was a really interesting story. 


There's an organization called New_ Public, and this is what they're trying to do: create these safe digital communities. They even have a fellowship; if you want to be an online moderator for a group, you can join it and learn best practices about how to create digital spaces that are welcoming and inclusive, where people can express themselves but things don't go off the rails. 


And so I was glad to find an example like that, because I think I've almost lost all hope in a civic dialogue happening online.


Sue: Lana, quick question on the Front Porch Forum. Do you know, is it free? Or is it like a nonprofit? Yeah, what's the model behind it?


Lana: Yeah, so there actually is a company that owns it, and they have, I want to say, a lot of full-time staff. I think I read it was around 30, which seems like more than your average forum, where moderation is usually a volunteer role. So these are paid positions. 


And their business model is advertising and sponsorships from local businesses, so I think that's how they make money. And it makes sense: if half the state is on this message board, it's a really good opportunity to get some eyeballs if you're a business in the state. 


Oh, and the other thing is that they've had offers to sell, but they have refused. So there is something around that too, right? They know it could be monetized even further, but they're choosing to keep it this way.


Anything else that you want to call out that you want to see more of in the future?


Sue: Yeah, I guess just, again, this was a surprise and I don't know all the details, but better research and more conclusive evidence of exactly what all these linkages are and what's happening, because I think that's also part of the confusion.


Lana: OK, well, great. Thank you both so much for a wonderful discussion. As always, you can find this episode and all of our episodes on our website at horizonshiftlab.com, along with all the transcripts and the links to the signals and where we found them. 


And so thanks so much for joining us and we look forward to being with you again next week. Bye for now.



