ChatGPT, or: How I Learned to Stop Worrying and Love AI
In my first book, Code and Other Laws of Cyberspace (1999), I told the story of why I had become a lawyer. My uncle, Richard Cates, had been the lawyer working for the House Committee on Impeachment (along with a much younger lawyer, Hillary Rodham (soon to be) Clinton). In 1974, just before Nixon resigned, Cates visited us in Pennsylvania and took me for a long walk. I wanted to know why he was doing what he was doing — persecuting Richard Nixon! I was 13. Uncle Dick was the only Democrat in our extended family. He was also the only lawyer. My father despised lawyers. I loved everything about my father.
Uncle Dick explained his job to me. It was, as he said, nothing more than to teach the facts of the case — the Watergate coverup — to Members of Congress. As I remembered his words in Code:
It is what a lawyer does, what a good lawyer does, that makes this system work. It is not the bluffing, or the outrage, or the strategies and tactics. It is something much simpler than that. What a good lawyer does is tell a story that persuades. Not by hiding the truth or exciting the emotion, but using reason, through a story, to persuade.
When it works, it does something to the people who experience this persuasion. Some, for the first time in their lives, see power constrained by reason. Not by votes, not by wealth, not by who someone knows — but by an argument that persuades. This is the magic of our system, however rare the miracles may be.
Those words changed me. They certainly changed who I wanted to be. A dozen years later, I would begin law school. Four years after that, I was clerking for Justice Scalia. And in that clerkship, too, if only in glimpses, I saw what Dick had spoken about. By then, I was no longer a Republican. Certainly not a conservative. But at that point, Scalia had a practice of hiring one liberal clerk. I was the token liberal for the OT 1990 term. And in that year, I sometimes saw law work as Dick had described it. I saw the quiet reasoning of law clerks flip the vote of the Court twice: once from unanimous in one direction to unanimous in the other; the other time from 9–0 to 7–2 the other way round. And I saw Scalia repeatedly argued away from his initial conservative views, to views that were more consistent with his theory of originalism. The last time I saw him before he died, I joked that he had ruined me as a law professor: That as a clerk, he had shown me again and again how reason could drive him to do the “right” thing (as in the originalist thing) rather than the conservative thing; and that I had predicted the same again and again after I became a law professor. But again and again, I told him, he had let me down. Scalia laughed his famous laugh, and we spent the next hour arguing — with reason — about whether my criticism was correct.
Yet it is increasingly hard to sustain such confidence in reason’s power today. There are as many Americans today who believe the 2020 election was stolen as believed it was stolen on January 6. Reason is not responsible for that fact. Time and again, we all have the experience of engaging with someone about something relatively difficult. Time and again, we walk away believing that either we can’t persuade or that reason doesn’t work. The enterprise feels hopeless; most simply give up.
And then I thought, maybe reason isn’t dead. Maybe it’s just reason for us, today. Maybe another form of intelligence could play the reasoning game better. Like, for example, AI.
So I decided to test it. I’m not a supporter of RFK Jr. Indeed, I fear he is a fantasist. But among the “conspiracy theories” that RFK Jr. defends is a theory he has come to late in his life — that his father was not actually killed by Sirhan Sirhan. Certainly, Sirhan shot at RFK. Certainly, he had opportunity, motive, and means — and he confessed (though he said he didn’t remember the event). Most take that confession — and the supporting assertions by those in the government responsible for making such assertions — to mean that Sirhan killed RFK.
And yet, it is perfectly clear that conclusion can’t be correct. The coroner who conducted RFK’s autopsy — in the presence of military coroners whom he had flown in to confirm his work as he did it — concluded that RFK was killed by shots fired at close range into his back. Sirhan was never behind Kennedy; never within inches of Kennedy; and every single bullet that Sirhan fired is accounted for — and none entered Kennedy’s body.
So I wanted to see how well ChatGPT responded to this conflict between the views of the authorities — that Sirhan killed RFK — and the view of the coroner — that Sirhan could not have killed RFK. Here’s the transcript:
I was astonished by this exchange. Because here was the reason Cates was talking about. ChatGPT made its point. I pointed out the weakness in its point. ChatGPT then “rethought” its argument and acknowledged its mistake. And then, it even acknowledged its failure fully to acknowledge its mistake. By the end, its conclusion contradicted where it had begun: Through “reason” it had been “persuaded.”
I don’t talk about RFK’s assassination much. I think about it, as I think about the complex soul that is his son; and I think about it, because I feel, as most do, that we still have no good understanding of so much of the 1960s. But I’m not in the business of rallying the world to reopen the investigation of RFK’s murder. That fight won’t slow climate change, or end Empire America, or address the crippling inequality that poisons so much of America.
Instead, the fight that would slow climate change, and end Empire America, and begin to address the crippling inequality that poisons so much of America is the fight against the corrupting influence of money in politics. That is the fight I have been rallying the world to recognize — a fight I’ve been engaged in now for 17 years.
And so when I saw ChatGPT reason to explain RFK’s murder, I wondered whether it might be similarly capable of reasoning about campaign finance jurisprudence. For the last year, I’ve been trying to rally any who would listen to an argument first advanced by Al Alschuler, Larry Tribe, Norm Eisen, and Richard Painter: That under the reasoning of Citizens United, SuperPACs are not required by the First Amendment. Most lawyers treat the opposite as obvious and clear: That Citizens United means SuperPACs are required by the First Amendment. But watching Ron Fein argue the point before the Massachusetts SJC, I became convinced that not only was he right, he was obviously right. That a logical error lay at the core of the case that gave us SuperPACs. And that once that error was seen, it was clear why, under the Roberts Court’s existing jurisprudence, SuperPACs were not required by the First Amendment. We launched a video competition with a $50,000 prize to rally creators to help us make the point — see CancelSuperPACs.com — and I have spent endless hours with lawyers trying to bring them to see the same point.
And yet, I have found that not a single lawyer who begins by asserting that Citizens United compels SuperPACs ever comes around to recognizing the error in that claim. As one lawyer explained to me, “I also just generally think that the search for true intellectual consistency is misguided here.” To which the only honest reply should be, “then what the hell are we doing when we’re doing law?”
So, bolstered by ChatGPT’s surprising agility in reckoning with the facts surrounding RFK’s murder, I decided to test it with an exchange about campaign finance. Here, too, there is conventional wisdom — that the Supreme Court has declared that SuperPACs can’t be regulated. Here again, that conventional wisdom (imho) doesn’t withstand the facts. And here again, ChatGPT surprised me:
I’ve long believed (and argued) that my failures to persuade are my failures primarily. That others could do the persuading better. I still believe that. But I’ll confess that this persuadable intelligence has seduced me. And that the most dangerous feature of emerging AI may well be how seductive it will become generally — at least when many come to see how it can do what we do, but better.