Social Media, AI, and the Battle for Your Brain

30 Jan 2024

Nita Farahany is a professor of law at Duke University where she studies the ethical, legal, and social impacts of new technologies. Her 2023 book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (St. Martin’s Press) focuses on the neurotech revolution and its potential impacts on our freedom. Aza Raskin is a writer, inventor, and co-founder of the Center for Humane Technology and the Earth Species Project.

proto.life: I’ve been following both of your work for a long time, and it seems like a pivotal moment to be either one of you. I’d love to hear what this year has been like for each of you.
Nita Farahany: It’s been a whirlwind of a year. It’s been an exciting year. It’s been a bit of a terrifying year in many ways. I think the rapid pace of technological changes in society and the urgent need for ethical and legal guidance [to match] the rapid pace of technological advancement in AI and neurotechnology has made it exciting and terrifying because I’m not sure we will get to a place where we can align the technology in ways that really maximize the benefit for humanity. And so it’s been a year of me being on the road nonstop, missing my kids, but feeling like there is really important work to do.
It’s been exciting because there is so much that’s happening in the technological space that finally I think the world has woken up to the need to have really serious conversations and develop concrete approaches to be able to redirect technology in ways that enhance our cognitive liberty.

proto.life: Aza, you’ve struggled to get the world to understand the implications of social media. I think those have become clear now. Is this a rinse and repeat with AI or are you seeing this as a completely new effort?
Aza Raskin: I think we can frame social media as “first contact with AI.” Where is AI in social media? Well, it’s a curation AI. It’s choosing which posts, which videos, which audio hits the retinas and eardrums of humanity. And notice: this very unsophisticated kind of AI was misaligned with what was best for humanity. Just maximizing for engagement was enough to create this whole slew of terrible outcomes, a world none of us really wants to live in. We see the dysfunction of the U.S. government — at the same time that we have runaway technology, we have a walk-away governance system. We have polarization and mental health crises. We don’t really know what’s true or not. We’re all in our own little subgroups. We’ve had the death of a consensus reality, and that was with curation AI — first generation, first contact AI.
We’re now moving into what we call “second contact with AI.” This is creation AI, generative AI. And then the question to ask yourself is, have we fixed the misalignment with the first one? No! So we should expect to see all of those problems just magnified by the power of the new technology 10 times, 100 times, 1,000 times more.
You ask what this year was like? Imagine a fictional movie about some nation creating artificial intelligence, and at some point it becomes powerful enough that the government says, all right, every single one of you tech titans working on this technology, get here into the Senate, into the Congress, sit down, and we’re going to figure out what to do. You expect that meeting.
proto.life: We had that meeting.
Aza Raskin: We had that meeting — that’s the point. I feel like I am living in that movie because Senator Chuck Schumer, in a bipartisan way, invited everyone to…
Nita Farahany: Not everyone.
Aza Raskin: OK — not everyone.
Nita Farahany: Did you notice that there [were] virtually zero academic voices there?
Aza Raskin: There were just a couple there. Just a couple…
Nita Farahany: Which is why we’re having a second meeting in December. And there will be an academic round table, and there will be a lot more people who will round out that perspective.

“We’re reaching the place where the externality that we create will break the fragile civilization we live in if we don’t get there beforehand.”

Aza Raskin: What I meant by everyone in this case was all of the tech titans — Sundar [Pichai, Google CEO], Satya [Nadella, Microsoft CEO], Zuck [Meta CEO Mark Zuckerberg], Sam Altman [CEO of OpenAI], Jack Clark [Anthropic co-founder] — and then us sitting across the table and trying to grapple with this moment. I think this is the year that I’ve really felt that confusion between “Is it to utopia or dystopia that we go?” And the lesson we can learn from social media is that we can predict the future if we understand the incentives. As Charlie Munger, Warren Buffett’s business partner, said, “Show me the incentive and I will show you the outcome.” The way we say it is: “If you name the market race people are in, we can name the result.” The race is the result. And Congress is still sort of blind to that. And so we’re stuck in this question of: do we get the promise? Do we get the peril? How can we get just the promise without the peril, without an acknowledgment of, well, what’s the incentive? And the incentive is: grow as fast as possible to increase your capabilities, to increase your power, so you can make more money and get more compute and hire the best people. Wash, rinse, repeat, without an understanding of what the externalities are. And humanity, no doubt, has created incredible technology. But we have yet to figure out a process by which we invent technology that doesn’t have a worse externality, which we then have to invent something new for. And we’re reaching the place where the externality that we create will break the fragile civilization we live in if we don’t get there beforehand.
proto.life: What are we doing in terms of regulation?
Nita Farahany: You know, what’s interesting, and one of the things that I’ve been doing a lot, whether it’s meeting with U.S. government agencies or international organizations, is trying to help people see that these problems are all interrelated. That we don’t need separate regulation for neurotechnology, separate regulation for generative AI, and separate regulation for social media; that there is a common set of issues, and that by trying to address them in a common way, we can reach a lot more agreement.
And so in my book, The Battle for Your Brain, what I lay out is the concept of cognitive liberty — the right to self-determination over our brains and mental experiences — and talk about how neurotechnology gives us the finest-point way to understand it: that there is this space we had all assumed we had the capacity to govern ourselves, that only we could access. You at least assumed that you could think a private thought, that you had a right to mental privacy, that you had freedom of thought, maybe not freedom of expression, but freedom of thought. And freedom of thought, mental privacy, and self-determination are all under threat from these different technologies.
So it’s understanding it as both the techno optimism, which is the right to access and change our brains if we choose to do so by having a right to use these technologies in ways that benefit us, and also a right against the commodification of our brains and our mental experiences, against interference, manipulation, and punishment for our thoughts. That alignment helps people see that the AI problems of mental manipulation, the social media problems of recommender systems and dopamine hits that are being developed to drive compulsive behavior that leads to harm, and the neurotechnologies where the same kind of business model is based on commodification of the data and its use in employment settings or by governments for oppressive surveillance, are all interrelated.
And so coming up with a common update, for example, to our understanding of international human rights law, to say there’s a right to cognitive liberty, means updating our understanding of self-determination to be a personal and individual right, updating privacy to include mental privacy, and updating freedom of thought to cover the spectrum of rights against interference, manipulation, and punishment, and then translating that into national laws. So those concepts are embedded when the FTC is looking to figure out what constitutes an unfair trade practice. An unfair trade practice is one that engages in mental manipulation of users, which is a violation of our freedom of thought. And what that means is that practices that are designed to induce compulsion and cause harm [are the ones] that the FTC should go after. And so you can see how you can start to get alignment. And helping people name and frame the problem has been part of what I’ve been trying to do. To say, look, this is a collective set of problems, and that collectively helps us understand that we have to work on laws, whether it’s human rights, national laws, legislation and regulation.
We have to work on incentives to move legacy tech companies, which are really focused on extracting data and keeping people’s attention and engagement on devices, toward cognitive flourishing, toward, you know, actual liberty and expansion; to look at commercial design; to give people user-level controls. Each of these different domains, from research, to cultivating it in individuals, to incentives — across the board, we’re starting to see movement. And you see it in the language about safeguarding people against manipulation in what the Schumer group put out, in how the FTC is thinking about it, in how UNESCO is thinking about the governance of AI and neurotechnologies, and in how the U.N. is moving in this direction. So there’s some commonality. And the OECD [the Organisation for Economic Co-operation and Development] put out principles of responsible innovation in neurotechnology. They also are working on a broader framework of responsible innovation in emerging technologies. They see how these are interrelated and are trying to work on a common framework across technologies. That, I think, is the approach we need: to realize technologies move too quickly, that a tech-by-tech-by-tech approach isn’t the solution. It’s naming the common set of concerns that we have and then trying to legislate adaptively and develop incentives and norms that align with that.
proto.life: I feel like Congress is starting to really breathe down the necks of the social media and social tech companies. This, in a way, gives them a break, doesn’t it? Because to your point, if we’re going to roll it all up into a bigger basket of all the technologies, communications and otherwise, that are impacting our wellbeing, then where we were headed with social media regulation is going to be put on hold?
Nita Farahany, Aza Raskin, and Jane Metcalfe at the BrainMind Summit.
Nita Farahany: Maybe not, because I think it recognizes that the social media harms are some of the most egregious ones [within] the recommender systems that they’ve put into place. There are studies that show that when you take a 15-second video and pair it with something like a recommender system, one that’s actually saying, you don’t have to choose, it’s just going to feed you what you’re interested in, the activation of the motivation-reward system locks you in in a way that’s far more addictive and problematic than if you didn’t use a recommender system and instead just used something generalized to what’s popular in your region rather than tailored to you uniquely. And when you start to see that the social media platforms are probably the most advanced right now in their use of these techniques to capture, to addict, and to limit and constrict the cognitive liberty of individuals, I think they still become prime targets and the first ones that you go after. But you start to see those same features in the design of generative AI: making it look and sound as humanlike as possible, trying to have it play to cognitive biases and heuristics in humans, to lock them in and to lead them to be more likely to buy into misinformation and disinformation. It’s not as obvious yet to a lot of people how to deal with those problems in generative AI, so I think it’s more likely you still end up going after the social media companies first.
Aza Raskin: And if you return to the framework, where first contact with AI is social media/curation AI and second contact is generative AI, the thing that is being exploited is still our attention, our engagement. And so it will just become impossible for us to ignore the effects. And hence, I think the regulations or protections put in place for second contact harms will absolutely need to address first contact harms.
You also asked the question: Are you an optimist or a pessimist, are you a techno optimist or techno pessimist? And honestly, I think the framing of optimist versus pessimist is a terrible one. And the reason why is because when you label yourself as an optimist or a pessimist, you are saying, “This is the answer that I want, and therefore I’m going to blind myself to anything that isn’t that answer.” So it becomes not exactly a self-fulfilling prophecy, but it means you are not connected with reality. You shouldn’t say optimist or pessimist. You just say, “Let me see the world as accurately as I can so I can show up in a way that helps it go well.” I always return to the Upton Sinclair quote, which is [essentially], you can never depend on a man seeing what his paycheck demands him not to.
Nita Farahany: I might disagree a little bit… I am an optimist and I’m an optimist in the following sense, which is I believe in humanity. And I believe that we can align technology in ways that are good for human flourishing. I don’t think that means I put blinders on. I think most people would actually look at me and think that I see that dystopian future quite clearly. But for me, optimism is about trying to optimize the outcome for humanity, for the planet.

“It’s stunning that even though most people know the number of steps they’ve taken today, we know virtually nothing about what’s happening in our own brains.”

proto.life: We’re always looking for more data. And the more data we have that we can feed into our models, the better we are at predicting, intervening, and perhaps even preventing things from happening or getting worse. So how do you distinguish between the technologies that see inside someone’s brain to help their mental state and those which will help them make the right choice when it comes to, you know, which pair of jeans to buy?
Nita Farahany: I think the idea of cognitive liberty, the right to self-determination, is also the right to access those technologies, for the improvement of mental health, for the hope that they can offer for humanity. I think it’s stunning that even though most people know the number of steps they’ve taken today or, you know, their heart rate or their blood pressure, we know virtually nothing about what’s happening in our own brains. In terms of an accurate understanding of our own minds, we know almost nothing. And these technologies will change that, right? They will give us intimate self-access that is much better than our internal software for accessing ourselves. And that’s everything from really being able to distinguish between stress and other kinds of experiences that you’re having, to being able to reveal to yourself your own cognitive biases, to having a better understanding of your own pain and your own wellbeing. New tools to better address depression and mental health disorders, neurological disease and suffering, early detection of different diseases…
And data is needed for that, right? I mean, the more longitudinal real-world data that we have for the common good to be able to address the leading causes of neurological disease and suffering, the more promise for humanity. So I believe strongly that these technologies can be transformational for the human condition in ways that really could reverse the trends that we’re seeing of increased neurological disease and suffering across the world. And so self-determination over your brain and mental experiences includes a right to access those technologies, to be able to share that data for use for the common good, with very strong purpose limitations on data collection. If I want to share my brain data, I should be able to do so. I should also be able to do so confident that that same data is not going to be repackaged, re-mined, and interrogated to be used in the workplace for surveillance of attention and mind wandering, or used by governments for purposes of, you know, subjecting people’s brains to interrogation for criminal offenses.
And so it’s about trying to ensure that that hopeful future is one that can be realized [without] technology that will pierce the final fortress of privacy, the final fortress of humanity. And, you know, I have hope — maybe not optimism in this instance — that we can get this right, and if we can, I think it could be truly the most transformative technology that we’ve ever enabled and ever shepherded in. And also, if we choose poorly and we don’t put into place the right safeguards, I think it could become the most oppressive technology that we have ever unleashed on society.
Aza Raskin: The paradox of technology is, the greater it understands us, the greater it can serve and protect us, and the greater it can exploit us. And I think it’s important to remember the three laws of technology, the ones that I wish I knew when I started my career. One: When you invent a new technology, you uncover a new class of responsibility. It’s not always obvious, right? Like, why should creating JavaScript and web pages have required writing new laws about being forgotten? We didn’t need the right to be forgotten written into law until technology — the internet — could remember us forever. We didn’t need the right to privacy written into law until Kodak introduced the mass-produced camera. We didn’t need to have any international treaties on refrigerants until we discovered ozone [depletion]. And so what happens? You invent a new technology and that new technology confers power. That power then gets used to find some commons that wasn’t protected and exploit it, extract it. Because that’s how you maximize profits. So rule one: When you invent a new technology, you uncover a new class of responsibility.
Rule two: If the technology confers power, you start a race. And rule three: If you do not coordinate, that race will end in tragedy as you exploit that thing. And what’s happening with brain-computer interfaces is that we are opening up brand new surface areas of the ineffable parts of the human experience, like our internal worlds, like the way our brains represent things, like our final poker face. And we don’t have rules or laws yet to protect that. And so what I find so important about Nita’s work is that she’s doing the work of [people] like [Louis] Brandeis, who had to invent out of whole cloth the idea of privacy and add it to our Constitution. What are the parts of us humans that need to be protected? And if we don’t do that, then rational actors acting in market interests will do maximum exploitation of anything that isn’t protected.
proto.life: Are we at a point in our evolution where all-knowing humanity, Homo sapiens, should be mapping out a future for our species? You talk about cognitive liberty, but should there be a master plan for how we deploy technology? Should there be a strategic plan? Should there be a creative brief?
Nita Farahany: So in the last chapter of my book, I talk about the concept of Beyond Human: that the transformation of humanity has already begun. Whether that’s our cell phone or the growing ways in which we can access and change our brains, it has started. And, you know, it’s been in motion for a very long time. I think the question is, who’s at the table for some of the more transformational pieces that we invite? And I think a broader public dialog, a broader process of democratic deliberation to understand that transformative process, is really important. I also think it’s really important that people start to understand the evolution of self from “I am me in this little container of Nita” to “I am a relational being, and I exist, and my self is relational to you and relational to my environment and relational to technology.” And when we start to have a more evolved understanding of self, you know, through this concept of relational autonomy or relational intelligence, I think it’s a lot easier to begin to understand the impacts of technology and how that’s changing things. As for a master plan, I don’t think that we have the omniscience to understand where all of this is going, but I think having a better understanding of ourselves as relational beings can help us be more intentional about those changes that are occurring.
Aza Raskin: The wrong question to ask is: What are we doing to ourselves? The right question to ask is: Who must we be to survive? And to answer that question cannot be in the hands of a small number of people who are making technology which will transform the nature of what it is to be human, how to relate, and how we make it on this planet…
To quote E.O. Wilson, “We have Paleolithic emotions, medieval institutions, and godlike technology” — our wisdom is not yet up to wielding that power. So either we need to slow down or increase our wisdom. And that, to me, is the question of who we must be.
Most people’s attention goes to “What are the bad actors going to do?” But actually, it’s not just the bad actors. It’s: what do rational actors do under market incentives? If you notice, most of the terrible things that have happened with social media haven’t happened because of bad actors; it’s just companies pursuing advertising. So in order to reach the beautiful potential of what BCIs [brain-computer interfaces] do, we have to have that honest reflection of: Into what landscape are they going to be deployed?
