Can AI be designed to promote connection, community, and flourishing?
A Q&A with Ron Ivey, founder of HumanConnections.AI and Research Fellow at the Harvard Human Flourishing Program
Welcome to the 250+ new subscribers and followers who joined us since our last post. Each month, we publish a mix of Originals, Q&As, and Curated Lists related to our theme of “the connections, communities, and commitments that bind us together.” If you haven’t explored our past work, we encourage you to check some of it out. I’m also excited to welcome a new member to the Connective Tissue team — she helped edit this Q&A and will be helping us with Q&As moving forward. Now, to our newest monthly Q&A …

Ron Ivey is a friend and colleague from the Harvard Human Flourishing Program. He’s spent his career working across government, philanthropy, academia, and the private sector around one animating question: How might we design our technologies, businesses, and economies to serve human flourishing? In addition to collaborating on projects from time to time, Ron and I exchange Signal messages on a weekly basis, both one-to-one and in a group with our mutual friend Ian Marcus Corbin.
It was in this context that I forwarded Ron a tweet — first shared with me by Connective Tissue co-founder David Vasquez — which would eventually lead to the work he’s doing today on AI and human flourishing. (Note: I take no responsibility beyond clicking the forward button.) The tweet was from Sequoia Capital’s Roelof Botha committing to “fight the loneliness epidemic” by investing in a new AI mentor startup founded by Renate Nyborg, the former CEO of Tinder. Ron responded to Botha and Nyborg — directly and critically — which led to a series of constructive, in-person conversations with Nyborg. Ultimately, Ron recognized an opportunity to bring technologists and researchers together to channel AI toward advancing connection and flourishing, not replacing relationships. And he launched HumanConnections.AI to do just that.
So, a year and a half after that fateful tweet, I wanted to chat with Ron, both to learn more about the evolution of HumanConnections.AI and to talk all things AI and human relationships. What are the risks that AI poses to human relationships and community? What are the possibilities, if any? How should we think about these possibilities and risks in light of the incentive and accountability structures AI companies face? And where are there opportunities for agency at a moment when AI seems to be accelerating toward utopia or dystopia, depending on who you talk to?
Like all really good conversations, this is one of those Q&As where you likely won’t agree with everything in it. I certainly am more “doom” than “boom” when it comes to AI and connection. But as Ron notes in the interview, the conversation about AI and human relationships is an essential one to have — especially because it’s often missing from an AI discourse focused on economic impacts and national security. So, consider giving this full Q&A a read: I learned a ton in the process, and you might, too.
- Sam
PS: You can sign up at HumanConnections.AI to learn more about Ron’s work and find him on Twitter and Substack.
So all of this started with a tweet that I screen-shotted and sent you?
The context of that moment was interesting. We were literally weeks away from hosting the Building Connected Communities Forum at Harvard. We were bringing together the leading thinkers and practitioners working on these issues of loneliness and isolation — both to think about the systemic causes and how we can begin addressing them together.
And right in that moment you sent me that screenshot of a tweet from Roelof Botha, Managing Director of Sequoia Capital, announcing their investment in an AI mentor called Meeno. My initial, visceral reaction was, “No. This is not what we should be doing. This is not the way to approach the problem.” For one, while the problem of loneliness and social isolation wasn’t created by social media and smartphones, we do know that they have been an accelerant, especially for teens. And two, the way that Meeno was first being positioned by the Fast Company article that Botha shared was as a replacement for mentors. These types of relationships — where I have a relationship with a mentor or I am mentoring others who are younger than me — have been some of the most meaningful experiences of my life.
I was like, “So, you’re going to try to solve the problem of loneliness and isolation you helped accelerate by selling back a new solution that you can make money off of? And that solution is going to replace something that’s so deeply intimate and rich and meaningful?” That was my immediate emotional reaction.
What’s happened since? How did that inciting incident end up leading to what you’re working on now with HumanConnections.AI?
In a pleasantly surprising way, the tweet led to a series of in-real-life interactions afterward with Renate Nyborg, the founder of Meeno and former CEO of Tinder. This opened up a different set of conversations about tech and loneliness — both with Renate and other technologists — particularly related to this new space of AI companions and AI-human relationships. I began to recognize a real gap. On one side, there were the colleagues that I had been working with in the social sciences and humanities who were exploring human flourishing, social connection, and belonging. On the other side were the technologists trying to address isolation and loneliness.
In 2024, I sought to begin closing this gap through a pilot project called HumanConnections.AI, where we brought both sides together for a salon. We created a different conversation that was neither boom (e.g., “AI will solve all of humanity’s problems”) nor doom (e.g., “this is going to destroy humanity”). Given what we know about human flourishing — that it’s fundamentally relational, and that technology can be an enabler or inhibitor of relationships — how do we have a conversation about how tech could be designed and deployed to facilitate the relational context for flourishing? How do we create a positive vision for tech and a practical approach to realizing it? Following the salon, we had a working group distill the insights and identify areas of alignment between the two sides, with a focus on driving collective action.
A key takeaway was that the AI industry is interested in understanding how to measure flourishing, social connection, and belonging — and its impact on them. How would you go about measuring the impact of these technologies? What are the positive use cases, if any, where AI could be supportive of relationships and community? If these positive use cases exist, how do they get promoted?
All of this requires collaboration, specifically in support of those who want to build tools that actually facilitate connection and flourishing. For the product developers, what types of resources can help them integrate philosophical, ethical, and social science insights into the product design process? For investors, how do you create a framework for understanding the liability of different AI products, particularly when it comes to human relationships? These product leaders and investors — and this is a selective subset, to be sure — actively want to avoid creating products that are going to cause harm and are inspired by the vision of human flourishing. This is just one of the areas of alignment that emerged from the pilot.

It seems that your views have gotten more nuanced since I sent you that screen-shotted tweet a year and a half ago.
For me, the pilot opened up a deeper understanding of the problems. I’m more aware of the risks and the dangers now than I was when I sent that first tweet. We’ve seen even worse examples of what I had been intuitively concerned about — children engaging with these AI companions and being pulled into really dark conversations that no child should have, conversations that may have contributed to at least one child’s suicide. But I also recognize that there are possibilities related to the use of AI to support relationships, community, and flourishing.
Say a bit more. How do you think about the risks of AI to human relationships and community today compared to its potential as a tool for conviviality?
As part of that pilot project, I’ve been talking with policymakers working on AI risk at the global level. I keep getting feedback along the lines of, “We're so glad you're working on AI and relationships. It comes up in every meeting, but it's in none of our frameworks.” And I’m thinking, “If it's coming up in every meeting, then why is no one working on it?” But that’s partly because the driving incentive frameworks for AI development are economic growth (e.g., boosting shareholder returns and GDP) and national security (e.g., competition with China). So if you're only drawing on those two lenses, the relational aspect isn’t going to come up. There’s clearly a need and an opportunity to address this particular risk.
The biggest risk I’m seeing in terms of AI relationships (aside from the risk to kids) is the amount of time people are spending on sites like Character.AI. It’s already two hours a day, 298 times per month. That’s the company’s own data, which one of their investors was excited about! It’s very engaging, and each new model that comes out, with new features and stronger underlying AI capabilities, gets even better at connecting with us.
I'm concerned about all of the incentives to make the chatbots, either in text mode or voice mode, continuously better at mimicking intimacy and empathy. And they are mimicking these responses, they don’t actually possess these human capabilities. If so much of our social life is about attention to and presence with others — and we've already seen how social media and our phones have captured our attention — this is both a more sophisticated tool and it's scratching that itch for synthetic connection that people already have because of the previous tools.
But you still see possibilities, despite all these risks?
I've never met you in person, yet we have a friendship via Signal and Zoom. That's been the entire context of our friendship. My closest friends all used to live in Washington, D.C. Now we’re spread out all over the world, but we can still regularly connect through Zoom — this is something that's been very rich in my life. That was my starting point: there's possibility around tech and social connection.
The possibilities I’m seeing are not where AI is offered as a companion to replace relationships. But there seem to be positive effects of using chatbots as a form of cognitive behavioral therapy — as a way to help people self-reflect and understand themselves better. Where I’ve seen even greater possibilities is the use of AI to connect people to community groups, third places, and events that match their interests — like a sidekick for people who are overloaded with the complexity of our current informational environment. Even there, I still have concerns about the risks. But if these tools are helping a person who is isolated find a group or a social event where they can have access to community, that’s interesting to me as a possible solution.
Your last point is basically about reducing friction. And we’ve already seen how some of these friction-reducing functions can be bad for relationships, particularly when it comes to dating apps and social media. Isn’t friction actually the point of being in relationships and community?
This frictionlessness is one of the real risks of AI-human relationships. AI companions are eliminating the friction — it feels good and it’s validating and it’s easier. People will say things like, “Talking to AI is easier than my conversations with my spouse.” But from our own lives and the research, we know that friction is part of how the character of our relationships is developed. The diversity of real, complicated humans that we interact with helps us cultivate curiosity and openness and work through problems and conflict.
If one of the social capabilities we need to develop involves learning how to be in conflict with each other — and to work through that conflict — and we're in these loops with AI companions that are validating us and never creating friction for us, then my assumption is that AI would reduce our capacity for a relationship in its richer forms. When we fight, we work through it, and we reconcile, our relationships get deeper.
But isn’t this frictionlessness also a design decision that’s downstream from some of the incentives you were alluding to earlier? The way I see it, the incentive and accountability structures of these AI companies — particularly the AI “companions” like Replika and Character.AI — are not all that different from the last generation of social media. Their business models are built upon capturing as much of our attention and leisure time as possible. They are venture-funded and are ultimately accountable to their board and investors, who have aggressive growth targets. So what is to prevent them from building products that replace our friendships and romantic relationships?
This is the exact conversation we and everyone should be having right now. There seems to be less possibility in the bigger companies with large user bases and existing boards and investors who don't really care. They just want to maximize their return on investment. This part of the ecosystem seems hardest to change. In contrast, there is more opportunity within the startup community: there is a certain set of founders and investors who want to create a different offering that’s a tool for flourishing and social connection.
What's the theory of change, then?
You could go for legislation to try to restrict what’s possible on a larger scale. That doesn’t seem feasible right now, given the current political environment’s emphasis on deregulation and freedom. There is also the challenge of the speed at which these products are getting developed. Regulators, lawmakers, and researchers are struggling to keep up. We’re in this fraught moment where regulation is becoming less likely just as AI development is set to accelerate.
So, we’re left with a specific set of questions: How do you effect change within the system? How do you apply pressure to implement a different kind of design paradigm? How do you create a different way of thinking about, designing, and testing standards? We need a part of the market that competes against the options that can demonstrably be shown to be bad for human beings and our relationships.
A year in, this is where the most important work remains to be done: How do we change a system that is likely not to get regulated?
You’ve been gathering a wide range of AI leaders — from Eugenia Kuyda, the CEO of Replika, to Sherry Turkle, one of the leading scholars examining the risks of AI to human relationships, community, and democracy. With that, I’m curious: have there been any areas of alignment or consensus?
During a salon we hosted last October, I moderated a conversation between Eugenia Kuyda, the CEO of Replika, and Sherry Turkle, an MIT professor who has done some of the most extensive and critical studies on machine-human relationships. Despite their significant differences — Eugenia sells AI companions, while Sherry critically researches them — they share a concern about AI’s role in children’s neurological and relational skill development. There was agreement between the two of them — and, I think, generally among all who participated in the salon — that we shouldn’t be experimenting on kids because their brains are so sensitive.
Even with the latest rhetoric around reducing regulation, everyone is against online child abuse. So one place where I feel some hope is in broad, bipartisan coalition building to protect kids from abusive chatbots. Policymakers, faith-based communities, civil society groups, academics, and tech leaders can all work together: not only can they build on existing age-appropriate design standards, but they should also go further, specifically applying age restrictions based on a child’s right to development.
This is an area for urgent action. A chunk of the 100-plus AI companion products that already exist, including Character.AI and Snap, is being targeted at kids and teens. The bigger general-purpose chatbots, such as ChatGPT and Gemini, are also being used by young people. Before we put these tools in the hands of kids, we need to know what the effects are on adults. And to think we haven’t even totally figured out how to protect kids from social media — we can’t afford to make that mistake again with a technology that is orders of magnitude more powerful.
Based on who you talk to about AI — particularly artificial general intelligence — it seems like we’re either accelerating toward utopia or dystopia. Basically the “boom” and “doom” camps you were describing earlier. What seems lost in all of this discourse, at times, is the feeling of complete powerlessness as the world is shaped by forces beyond our grasp. Do we have any agency in all of this? If so, where can we, as a public, go from here?
There’s a message that we don't have agency. I always question where this comes from and what the motives are behind it. On the boom side, the message is: “Just trust us, we got this.” And on the doom side, there’s also a business incentive: people get more clicks, more books sold, more articles read, and the like when they induce fear. Both sides have an incentive to make you believe that you have less agency than you actually have.
But that’s simply not true; there are at least four different places where we have agency.
One is at the individual level — particularly, how we choose to live inside our families and in our communities. This includes what you and others are doing: redesigning our spaces to promote connection, coming up with new possibilities for building relationships at a community level, and re-engaging with civil society and faith-based groups, to name a few. Parents can make pacts for screen-free (and AI-free) childhoods. You can build community in the real world to draw people out of isolation, and people have choices to participate in co-creating their own local spaces.
There’s also agency for the people who are creating these AI solutions. This is what we’re trying to facilitate and demonstrate through the conversations and collective action: there are different ways to go about building AI, and there are different tools to do it. We’re trying to articulate alternative visions and alternative pathways — for investors, founders, and product leaders alike — beyond the growth-at-all-costs approach.
Policymakers and researchers also have agency. Policymakers, despite all of the headwinds we discussed earlier, still have real opportunities to enact policies and regulations that protect kids. Researchers have agency to study these issues, gather valuable data, and communicate what they’re learning. There is real demand to understand what’s going on with these technologies. Researchers and academics will have more agency if they can learn how to share their insights effectively — not through siloed institutions to narrow audiences, but by reaching people broadly to promote understanding and action. For instance, academics can frame their research on metrics of human-AI relationships so it can be practically understood and applied by technologists.
There’s also a big question when it comes to public participation: How do we encourage more public engagement and dialogue on these topics? We are working with partners to lead deliberative polling, citizens’ assemblies, and other forms of public engagement campaigns. We want to start a local, national, and, hopefully, even global conversation around these topics so people can participate and shape it. This is something that didn’t happen with social media.
This can’t just stay in the salon. We need real public engagement and discourse around the question: How might we design AI to promote social connection and human flourishing?
Here’s some feedback.

First of all, I’d like to say: AI is a nefarious term, as it implies “intelligence” from a machine.

One: There is no such thing as an intelligent machine. The only form of intelligence is the one provided by a free mind; everything else cannot be considered intelligent, as in both cases (mind or machine, thinking or computing) it will always be limited by its indoctrination or programming. To assume a machine is intelligent because it can trash anyone in a game of chess is akin to subjugating oneself to the machine.

Two: A machine programmed and maintained by someone with nefarious intent (see the psychopath eugenicists who call themselves “philanthropists”) is one which can only serve its programmers and masters in their eugenicist plan.

Three: Every machine can be hacked. There is no safety in the digital world, and there will never be a digital world which can be considered safe.

Four: Even supercomputers have glitches, so relying on one for something as important as dealing with human emotion, which a machine can never feel and understand, is akin to putting a child in the care of a toaster (or worse).

Using this type of technology for non-nefarious ends would first imply a change of name; for instance, D.A., as in “digital assistant.”

Children should never be exploited, controlled, or manipulated by machines, and they must be taught that it is they who must develop their intelligence and skills, not believe the intelligence of machines to be the ultimate one.

I hope this feedback is helpful for understanding what people who think outside of the box make of this scam, which the castrated minds of technocrats and eugenicists would like the world to bow to.
Thank you so much for this helpful article. I imagine you are well aware of Vanessa Andreotti’s work with “training” a ChatGPT in a relational worldview. I have had several interactions with ACT that have been quite fruitful. I’ve also recently learned of Nipun Mehta and his work with ServiceSpace GPT, though I’ve not had a chance to explore it yet.

Also, THANK YOU for your reminders that we DO have agency, in different ways...

And, re the question of “how to change a system that is not likely to get regulated,” it was very inspiring for me to learn about the work of the Internet Engineering Task Force through Kaliya Young and Day Waterbury: https://www.youtube.com/watch?v=YUNhWqMbNAk. Apparently this highly functioning group has no “official” mandate, and what it creates are NOT “regulations,” yet they are widely adopted anyway... maybe there is something of use there for you?

Last but not least, given that democratic innovation in public participation is an area very close to my heart, I’m super glad to hear you are working on public engagement on the subject of AI and human relationships!