2023: A Review of the Year in Neuroscience
Yikes. Let’s skip straight to the neuroscience this year, shall we? Even this one small area of human endeavour has not escaped the forces unleashed by tech bros who think ethics are something that happen to other people. A once-thriving neuroscience Twitter community dissolved thanks to He Who Shall Not Be Named, the battle-hardened remaining behind to rail against the dying of the light, the others scattered to other platforms, sadly disconnected from one another. ChatGPT and its ilk, tools of such potential, also brought with them a wave of garbage science, including tranches of grammatically correct, woefully poor student essays, full of fun facts about studies that never happened: did you know, for example, about Geoff Schoenbaum’s primate studies on decision making?

And yet, science prevails. This machine for creating knowledge moves at such pace and fury that even Musk is but a pebble causing a scant ripple in its flow. As you’ll see, we’ve learnt a lot about the brain this year: about what the brain doesn’t have, what it does have, and about expert game players. But what we learnt most about was dopamine.
Error Error Error
You wouldn’t have thought we’d have much new to learn about dopamine. Or, more accurately, about how dopamine conveys information about reward. It’s been 25 years since Schultz, Dayan & Montague’s classic paper in Science laid out how the firing of dopamine neurons in a monkey’s midbrain looked remarkably like it was signalling the error in predicting a reward: firing more to unexpected rewards, not changing to expected rewards, and firing less when an expected reward didn’t turn up. Exactly the error computed by reinforcement-learning models that seek to learn the future value of things in the world.
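That error has a one-line definition. Here is a minimal sketch of the textbook temporal-difference (TD) error, with made-up numbers (mine, not code from any of the papers discussed here):

```python
# Minimal sketch of the textbook temporal-difference (TD) error that
# Schultz, Dayan & Montague mapped onto dopamine firing. The numbers
# below are illustrative, not drawn from any paper discussed here.

GAMMA = 0.9  # discount factor: how much future reward is worth right now

def td_error(reward, value_now, value_next, gamma=GAMMA):
    """delta = r + gamma * V(next state) - V(current state)."""
    return reward + gamma * value_next - value_now

# The three classic response patterns of dopamine neurons:
print(td_error(reward=1.0, value_now=0.0, value_next=0.0))  #  1.0: unexpected reward, burst of firing
print(td_error(reward=1.0, value_now=1.0, value_next=0.0))  #  0.0: fully predicted reward, no change
print(td_error(reward=0.0, value_now=1.0, value_next=0.0))  # -1.0: expected reward omitted, dip in firing
```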
After 25 years, most research fields in science have either been abandoned to the historians as embarrassing dead ends, stagnated from lack of new ideas, or gone mainstream, their facts regurgitated in dull textbooks, reaching that Kuhnian “business as usual” stage. But not dopamine. This year started with a flurry of head-scratching, high-profile papers on what those pesky handful of midbrain dopamine neurons were conveying to the rest of the brain.

Jeong and colleagues kicked us off by claiming dopamine isn’t a signal for prediction error at all, but a signal for unexpected sequences of events in the world. Roughly speaking. Proposing a new model for how the brain learns causality between events in the world, they gave the firing of dopamine neurons the job of conveying a term in that model that, well, was so hard to explain they didn’t bother, but is roughly how unexpected it was that one event followed another, given how well that event is usually predicted. Frankly, they didn’t manage to explain why dopamine should have that job and not one of the at least two other error terms the model needed. Nonetheless, by assuming dopamine was this “adjusted net contingency for causal relations”, the new model did an impressive job of replicating the firing of dopamine neurons in a range of classical conditioning tasks, tasks where the animal just sits there and things happen around it: bells predict food; beeps predict water. The idea that dopamine is crucial to learning causality is not new, but Jeong and co make a good point that conceptually it’s easier to learn backwards, from stuff you’ve already experienced, than forwards, by predicting the future values of stuff you’re going to experience.

Mere weeks later, Markowitz and (many) colleagues took a look at the release of dopamine in mice running free, doing whatever they wanted in an open field (well, a 40cm diameter bucket). Dividing the mouse’s behaviour into syllables, discrete bits of action like rearing or turning or scratching, they found the release of dopamine dips just before the end of one syllable and peaks just after the start of the next. The data suggested that the bigger the peak of dopamine, the more likely the syllable was to occur again. From this, Markowitz and co speculated that dopamine signals were being used internally to promote behaviours, just as they would be if evoked externally by reward. Between them, these first two papers were arguing there is much to dopamine beyond the error in predicting a reward.

A week later, Coddington, Lindo, and Dudman offered us a rather different take by pointing out that reinforcement learning still had much to offer. What, they asked, if we were looking at the wrong kind of reinforcement learning? You see, reinforcement learning models come in two flavours. In one, they learn the value of things in the world, then decide what to do based on those learnt values. That’s where the classic dopamine-as-prediction-error comes in, as an error in those predicted values. In the other, they skip the values and learn directly what to do in each situation; they learn a “policy”.

Coddington and co offered some (pretty impressive) evidence that the firing of dopamine neurons acts like the learning rate of something that’s directly learning a policy. That is, high firing rates would mean big updates to the policy, and low firing rates small updates.
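In code, the idea might look something like this minimal sketch (mine, not Coddington and co’s actual model), in which a policy is learnt directly and a fluctuating dopamine signal sets the size of each update:

```python
# Toy sketch (mine, not Coddington et al.'s model) of dopamine as the
# learning rate on a directly-learnt policy: the update direction comes
# from the outcome, but dopamine scales how big each update is.

import numpy as np

rng = np.random.default_rng(1)
prefs = np.zeros(3)  # preferences over three possible actions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy_update(prefs, action, reward, dopamine):
    """REINFORCE-style nudge to the chosen action; step size = dopamine."""
    grad = -softmax(prefs)
    grad[action] += 1.0               # gradient of log pi(action | prefs)
    return prefs + dopamine * reward * grad

for trial in range(500):
    action = rng.choice(3, p=softmax(prefs))
    reward = 1.0 if action == 2 else 0.0   # action 2 is secretly the good one
    dopamine = rng.uniform(0.05, 0.5)      # fluctuating "learning rate"
    prefs = policy_update(prefs, action, reward, dopamine)

print(softmax(prefs))  # probability piles up on action 2; high-dopamine trials taught more
```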
Coddington and co thus offer us a double departure from the canonical theory: not only is dopamine not a prediction error for value, it’s not a prediction error at all.

AND THEN — yes, there’s more — in late summer came two bombshells. Jesse Goldberg’s team casually dropped into conversation that the firing of dopamine neurons is not fixed to a prediction error for reward at all. Rather, it can be reassigned to signal the error in predicting whatever is most important right now. They showed this in singing male birds. When a male sang alone, his dopamine neurons fired in response to the unexpected errors he made in his song (this much we already knew). But when he sang to a female, his dopamine neurons fired after unexpected response calls from the female, when the male didn’t get the reaction he was predicting, whether that was good (she replied!) or not so good (she’s ignoring me). Ah, dopamine: now we can add awkward adolescent male courting to your list of responsibilities.

The second late-summer bombshell was potentially the biggest. Tritsch and friends reported that the release of dopamine into the striatum oscillates at between 0.5 and 4 Hz, going up and down at least once every two seconds and at most four times a second. Oscillating all the time, whether the animal was resting, moving, or getting reward. And on such a scale that the peaks of release were as big as those evoked by unexpected reward. This could be a problem.

For, you see, all current theories of what dopamine tells the brain rest on the idea that there is a baseline from which changes in dopamine convey information. That baseline defines what is “expected”. This is true of reward prediction error theories; it’s also true of the causality theory. (It’s not true if we want to believe dopamine is a learning rate, but then it’s unclear why we would want that to be oscillating a few times a second.) Tritsch and friends say there is no baseline (a toy illustration of the problem is sketched at the end of this section).

What now? If neuroscience were like physics, then tens of papers would have been posted to arXiv before Tritsch’s paper had even hit the stands, as swarms of otherwise underemployed theorists descended on the latest anomaly to propose new, exotic theories that explained it away. And then everyone would discover that the data were down to a loose cable, and all was for nought. But neuroscience is not like physics. Sometimes that’s a good thing.
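Before we leave dopamine, here is the promised toy illustration of the baseline problem (mine, not Tritsch and colleagues’ analysis). A constant oscillation means the same reward-evoked transient looks huge or invisible depending on when it lands:

```python
# Toy illustration (mine, not Tritsch and colleagues' analysis) of why a
# constant 0.5-4 Hz oscillation is awkward for baseline-based codes: the
# identical reward transient lands on different phases of the oscillation,
# so its apparent size against any fixed baseline keeps changing.

import math

def dopamine_level(t, transient_start=None, transient_size=1.0):
    baseline_osc = math.sin(2 * math.pi * 2.0 * t)  # 2 Hz oscillation, amplitude 1
    transient = transient_size if (transient_start is not None
                                   and transient_start <= t < transient_start + 0.05) else 0.0
    return baseline_osc + transient

# The same transient measured at the oscillation's peak vs its trough:
print(dopamine_level(0.125, transient_start=0.125))  # 2.0: looks like a huge "error"
print(dopamine_level(0.375, transient_start=0.375))  # 0.0: looks like nothing happened
```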
Unexpected papers in the bagging area
It’s often claimed that the glam journals, Nature, Science, Cell and their like, don’t publish replications and don’t publish null results. That’s not true. It’s just that, when they do, they have to be gargantuan bodies of work. This year, Bassetto et al. published, in Nature, a staggering paper showing that there was no evidence for a magnetic sense in fly brains. They replicated two previous experiments: one showing that flies avoid the arm of a T-shaped maze that has a magnetic field; another showing that flies knocked to the bottom of a tube take longer to climb back up if exposed to a magnetic field. Both were published in high-profile journals, and both were taken as evidence that fly brains contain some kind of receptor system for magnetic fields.
Bassetto and co showed convincingly that neither thing happened: flies did not care about magnetic fields, neither avoiding the arm of the maze nor taking longer to climb the tube. And when I say convincingly, I mean really convincingly. A total of 97,658 flies tested in the maze and 10,960 flies in the tube. Repeated replications of both experiments under ever more stringent conditions, from using the exact same fly lines up to building custom magnetically-shielded chambers to rule out any effects from the (many) weak magnetic fields in any lab — including the Earth’s own.

Six years of work, no result. Except, in truth, two major results. The first was the total absence of statistics in the paper. With so many subjects, any hypothesis test would almost certainly have found a “significant” difference between the groups, however tiny; and statistical significance was not the interesting question. The interesting question was “is there a meaningful effect?”. Instead, the authors simply showed us the data, and the result was obvious. Rutherford’s dictum in action: “If you need to do statistics, you’ve done the wrong experiment.”

And the second result? That this is a model for how to do science. A veritable “house of brick” rather than the exploratory, small-N “mansions of straw” we so often read in glam mags (*cough* Cell *cough*). And in case you don’t know what I’m talking about, go read that op-ed now.

It’s also often claimed that glam journals don’t publish behavioural studies. Yet witness this year a rather lovely paper from Wei Ji Ma’s lab, led by Bas van Opheusden, showing that expertise in a game increases the depth to which people can plan their moves. People played a two-player four-in-a-row game, taking it in turns to place their counter in any free space on a 9x4 grid. They played each other, they played computers, they played in huge numbers on a mobile app. van Opheusden and co showed that players’ performance was well described by a model that computed a rough value for each state of the board, and searched for the best next move by looking at the values of the possible moves ahead. With that model they found two main things. First, the better players got, the further the model said they searched from the current position. Second, players’ attention was well replicated by the model, which predicted where they were looking on the board while thinking.

This, for me, was the new connectionism writ large: purely behavioural data, and plenty of it; a systematic exploration of necessary model features; and often impressive model fits to each player, used to infer mechanisms. Striking was the lack of vestigial neuroimaging, that urge to tack on a figure showing activation in some brain region to somehow validate the work. Just behaviour and computational modelling done to a high standard, and published in a glam journal. Lovely.
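If you want the skeleton of that kind of model, here is a toy sketch (mine, and far cruder than van Opheusden and co’s fitted model): a rough heuristic value for any board position, plus a depth-limited search over moves, where deeper search means more expert play.

```python
# Toy sketch (mine, far cruder than van Opheusden et al.'s fitted model)
# of the two ingredients their account combines: a rough heuristic value
# for a board position, plus a depth-limited search over possible moves.

def heuristic_value(board, player):
    """Crude stand-in: my pieces minus yours. The real model scores
    features of the board, like lines of connected pieces."""
    return sum(1 if cell == player else -1 if cell == -player else 0
               for row in board for cell in row)

def search(board, player, depth):
    """Negamax search over free squares; deeper search = more expert play."""
    moves = [(r, c) for r, row in enumerate(board)
             for c, cell in enumerate(row) if cell == 0]
    if depth == 0 or not moves:
        return heuristic_value(board, player), None
    best_value, best_move = float("-inf"), None
    for r, c in moves:
        board[r][c] = player
        value, _ = search(board, -player, depth - 1)  # opponent's best reply
        board[r][c] = 0
        if -value > best_value:                       # their gain is our loss
            best_value, best_move = -value, (r, c)
    return best_value, best_move

# A tiny 3x3 stand-in for the real 9x4 board; empty squares are 0
board = [[0] * 3 for _ in range(3)]
print(search(board, player=1, depth=2))
```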
And to round off the year, there dropped another gigantic collection of papers on the cell types in the brain of a mouse, from the Brain Initiative Cell Census Network (BICCN). Where just two years ago we were bathing in ten or so papers on the mouse’s primary motor cortex, now we have at least ten papers on a cell atlas of the whole mouse brain. We learnt that there are lots of different cell types, that different types are found in different regions of the brain, and that the majority use either glutamate or GABA. You may be thinking that we knew this already, but I couldn’t possibly comment.

But we can all agree it’s a huge effort and a valuable resource. The value in these cataloguing projects lies in the discoveries built upon them, from new ways of targeting specific neurons to trace them and tag them and make them fire (or not). Who knows what it could unlock?

The mouse brain is now “finished” for the types of cells within it. Presumably in just the same way as the C. elegans connectome was “complete” in 1986, and the updates to it ever since are mere ephemera. Or as the human genome sequence was “complete” in 2001, and the fact that the first end-to-end sequence of all its bases was published only last year is a mere trifling detail.

Next up: all the cells in the human brain. And while turning to this challenge, one might bear in mind the surprisingly scathing editorial in Nature accompanying the BICCN papers, about how daft it is for these huge brain projects to be working in silos, without coordination, duplicating efforts, and not working out how to share data.
Let’s all go home we’re wasting our time
I opened this essay revelling in the unstoppable force of science; I’ve just reviewed for you some of the year’s most headline-grabbing advances in neuroscience. But this was also the year a major paper claimed science is becoming less disruptive.

Its authors proposed a fun measure of how disruptive a bit of science has been: take the paper announcing it and count how often future papers cite that one without citing anything older. A neat idea, capturing our intuition that disruptive work changes the course of a scientific field, its older work abandoned rapidly in the stampede to catch up (and an idea simple enough that I’ve sketched it in code at the end of this section). Nice measure, shame about the conclusion.

They found their measure decreased over time, right up to the present day, leading them to claim that newer science is less disruptive because we continue to cite older work. But actually it’s all relative: their measure corrected for the exponential increase in published papers over time, so their claim is that science is proportionally less disruptive. Their data showed the absolute number of disruptive papers (and patents) was pretty stable over time. Science is as disruptive as it ever was: it’s just that the disruptive stuff is increasingly outnumbered by the incremental.

That’s hardly a surprise. There’s been an exponential increase in the number of scientists since the 1950s. Their job is to produce research. Papers are how they are weighed and measured. So they write a lot of papers, because they have to. All we’ve learnt, then, is that if you incentivise a global talent pool of scientists to churn out papers as fast as they can, while also teaching, marking, tutoring, sitting on committees, reviewing, interviewing, and endless other “ings”, then, unsurprisingly, those papers are not deeply thought, long-nurtured, disruptive ideas.
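As promised, the sketch. This captures the flavour of the disruption measure described above; it is my simplification, not the paper’s exact CD index:

```python
# Minimal sketch of the flavour of the disruption measure described above
# (my simplification, not the paper's exact CD index): of the papers that
# cite the focal paper, what fraction cite it WITHOUT also citing any of
# the works the focal paper itself built on?

def disruption(focal_refs, citing_papers):
    """focal_refs: set of works the focal paper cites.
    citing_papers: one set of references per paper that cites the focal one."""
    if not citing_papers:
        return 0.0
    ignore_the_past = sum(1 for refs in citing_papers if not (refs & focal_refs))
    return ignore_the_past / len(citing_papers)

# Toy example: two of three citers have abandoned the older work entirely
focal_refs = {"old_paper_A", "old_paper_B"}
citers = [{"other_new_work"}, {"old_paper_A", "something_else"}, {"newer_still"}]
print(disruption(focal_refs, citers))  # 0.67: a fairly "disruptive" focal paper
```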
I see all good people
Let’s end on some good news. Humanity isn’t actually the ugly cesspit we see on social media: it only thinks it is.
Titling their paper “The illusion of moral decline”, Mastroianni and Gilbert summoned strong evidence that people across the world think morality has declined over time: multiple questionnaires across 60 nations, stretching back over 70 years. I’m sure you can think of some social media post, newspaper columnist, or talking head claiming as much just this week: moral standards are falling; things aren’t what they used to be. But Mastroianni and Gilbert strongly suggest this is an illusion.

You see, in those same questionnaires, the actual ratings of the morality of people’s peers do not change over time. People rate their everyday interactions with peers just as they always have, with the same level of moral behaviour. For 70 years, there has been no change in how people rate the moral behaviour of themselves or of those around them.

Why then do we think morality is declining? Of the many reasons, the most obvious is simple: we remember seeing bad behaviour more than we remember seeing good. And when a few high-profile people in positions of power indulge in endless poor moral choices, it leaves a lasting impression. A good lesson to take forward to 2024, as we go into the biggest year of democratic elections in history: some of the people who lead us may be morally bankrupt, but pretty much everyone else is just as good, or as bad, as they’ve always been. So go vote.

X/Twitter: @markdhumphries
Mastodon: @markdhumphries@neuromatch.social

My book The Spike: An Epic Journey Through the Brain in 2.1 Seconds is published by Princeton University Press, and available now from your favourite bookseller in paperback, hardback, e-book, and audiobook.