Good Logic, Bad Life: Decision-Making Neuroscience Suggests AI Decision-Making's Weaknesses

We humans have tended to think we are sooo rational. Seventeenth-century French philosopher René Descartes's formulation of the cogito, or the proposition that we think, therefore we are (cogito, ergo sum)—that is, that we cannot doubt our own existence while we are doubting—remains the best-known statement of the epistemological significance of human consciousness. But fast-forward to the late twentieth century… And scholars like neuroscientist Antonio Damasio suggest the Cartesian theory of mind-body dualism doesn't fit the scientific evidence.1) At all. That evidence increasingly suggests that good, ethical, socially conscious, and rationally self-serving decision-making actually involves parts of the brain responsible for emotions. Without social learning and emotional processing, in other words, there is no active rationality—no great decision-making prowess comes to us from reasoning without feeling. And without bodily feeling, there is no emotion: physicality comes first. So the more disconnected from the organism as a whole (including the body) we try to make our minds, the less that chatterbox, intellectualizing, explaining brain can tell us about what we want, what we should choose, or who we are. In this context, the anti-cogito seems more accurate: cogito, ergo sum falsus… I think (without feeling), therefore I am false/wrong. Or perhaps instead, as polyvagal theorist Stephen Porges suggests, "Je me sens, donc je suis"—I feel (myself), therefore I am. Either way, the limitations of human intelligence as a Cartesian construct bear on AI in the context of algorithmic decision-making… making it an even worse idea in the realm of security than might at first seem obvious.


People are Dumb

Humans are actually not that smart. Not as compared to our evolutionary predecessors. Not on metrics of basic functioning. The paradox of human behavior is that we act dumber because we are (in some ways) more intelligent. No other species screws up its basic functions—eating, sleeping, socializing, mating, and co-existing in a sustainable ecosystem—half as much as we do. We are the idiot-savants of the animal kingdom.

Our unique intelligence leads us to make more mistakes than other species, and our mistakes can impact all other species. But the same capacity also lets us learn from our mistakes through conscious thought. That is our gift. And our curse.

Gift: We’re learning more and more about the frontiers of consciousness. With this progress come advances in neurotechnology. At present, there is still no known, unique lie response to detect—and therefore no such thing as a "lie detector." Nor is there currently a machine that can see inside your head to determine your intent, read your thoughts, or divine your history. But we're getting closer and closer, such that it might one day be possible to have such a “psychic x-ray,” or a direct pipeline from the outside, shared representations of reality to what is going on inside your individual consciousness, be it in the realm of future intent, unvocalized internal monologue, memory, or something else. It could be helpful to know what you really want deep down when you're feeling consciously conflicted—or what your partner or colleague is really thinking when you can't understand their perspective. So this seems like it could happen, and it could be good in some ways, although it also carries significant dangers to be discussed in a future post.

Curse: As part of learning more about how the brain works, we’re also learning more about decision-making. The keynote finding of recent decision-making research is that good decision-making relies on emotion. A related corollary is that we do not really know why we do what we do. Good, socially conscious, ethically moored, rationally self-serving decision-making involves the emotional brain and its full-body, multi-directional signaling loops. Our neat, intellectual rationalizations of our own behavior are just that: post hoc rationalizations. Far from being another plane of inferior considerations that we might tally up after doing a rational cost-benefit analysis of why we might take different actions, emotion is the basis of good decision-making.

On one hand, that’s shocking from a rationalist modern perspective. Much of clinical psychology in the late 20th through early 21st century, for example, increasingly celebrated tools like cognitive behavioral therapy that focus on the stories we tell ourselves about why we think and act as we do.2) And it’s true that another consistent finding is that those narratives do matter. The stories we tell ourselves can significantly impact our lives in lots of ways. Carol Dweck's research on fixed versus growth mindsets exemplifies these findings. Dweck finds that people who think of intelligence as fluid rather than fixed tend to experience better life outcomes down the line, and that this mindset can itself be learned. Think of children given impossible tasks. Many will give up, exclaiming “I can't do it.” But some will keep at it, giving themselves encouraging pep talks about learning from mistakes, telling themselves they just haven't solved it "yet". Those “yet” kids grow up to kick educational, professional, and personal ass. So there used to be wide consensus that narrative matters a lot, and some research still suggests that's true. This makes it surprising to think that thought, narrative, and the chatterbox brain that gives them to us are epiphenomenal to good decision-making.

On the other hand, it’s not so surprising that we don’t really know why we do what we do, and that reasoning devoid of feeling doesn't help us make better choices. Because the human brain evolved over a long time, across a lot of species. Our brains are not only our brains. They literally contain parts that are better thought of as reptile brains, parts that are better thought of as lower-order mammalian brains, and then parts that are really distinctly higher-order mammalian or even human brains—but the last seem to make up only a relatively small proportion of the human mind. Is it any wonder our decision-making processes can be secrets to ourselves?

The human mind is at least not entirely human. Its humanness is a relatively evolutionarily recent addition. The intelligence this humanness bestows seems to also be a form of profound stupidity. The stupidity of our intelligence is highlighted in famous cases of prefrontal cortex damage that Damasio relates, wherein patients can no longer make simple, self-interested decisions despite appearing to have normal cognitive functioning.

Our interlinked bodily sensations, gut feelings, and felt emotions are what save us from ourselves—what let us make the more context-dependent, individually quirky, socially conscious, irrationally rational decisions necessary to have good lives. Our brains, in a more strictly intellectual sense, are what make us dumb enough to kill ourselves, each other, and the global ecosystem on which our species depends. With only good logic, we end up with bad lives. Less logic, in other words, is more.

… People are Dumb, but Algorithmic Decision-Making is Dumber

See? People are dumb, say AI proponents. We’re always trying to get them to follow the rules in order to be fair, rather than going with their guts when it comes to administering other people’s basic human rights. Plus, prohibitions against black-box decision-making apply to people just as much as to technology. But people go with their guts and are bad at explaining their decisions and preferences even when they want to. People are producing bad outcomes as a species among species, and bad explanations for their decisions as chatterbox narrators who don't really know why they do what they do. So why is it bad to put decision-making off on machines instead? Couldn't it be that machines are fairer? After all, people can have nasty human prejudices that we won't teach the machines. And people often make stupid cognitive errors that can be programmed out of artificial intelligence (AI). All we have to do is train the machines with the Truth, and then they will stop spitting out stupid human errors.

Yeah nah. That only works if we know the ground truth like an objective, omniscient narrator—like God. But we don't. Because we aren't. We are human beings who make human errors all the time.

What this means in relevant contexts like algorithmic, data-driven, or “cops on dots” intelligence, sentencing, and policing is that machines learn human biases and then spit them back out as supposedly objective, neutral decisions. This doesn't happen all the time, and theoretically it wouldn't happen as much in a better possible future.

But when bias does seep into technology-mediated decisions, it's especially harmful because the biased decisions seem objective, neutral, and scientific. The prejudice is at least temporarily obscured behind a non-transparent veneer of wires and (apparent) perfection. It's harder then to call prejudice out as prejudice. So it's harder to right the wrongs it creates—especially before the bias in tech has been identified. And justice delayed can be justice denied, as the following three examples illustrate.

In U.S. drone warfare, classified algorithms for targeted assassinations seem highly likely to be fed data on "correct" hits that actually include incorrect hits (e.g., journalists and other civilians wrongly targeted and killed). In the criminal justice system at the level of the courts, technologies like COMPAS (the Correctional Offender Management Profiling for Alternative Sanctions) intended to help rationalize discretionary power by assessing re-offending risk have actually institutionalized racial and gender bias. And in policing, systems like CompStat have created perverse incentives for police to manipulate their data, for instance by inappropriately downgrading felony reports.
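To see how that works mechanically, consider a minimal, purely hypothetical sketch in Python (made-up numbers, no relation to any of the systems above): if the historical labels a model learns from already flagged one group more often for identical behavior, the model learns that disparity and serves it back as "risk."

```python
# Hypothetical "garbage in, garbage out" illustration; toy data only.
import random

random.seed(0)

def make_historical_records(n=10_000, labeler_bias=0.2):
    """Simulate past decisions in which group B was flagged 'high risk'
    more often than group A despite identical underlying base rates."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        truly_risky = random.random() < 0.10          # same base rate for both groups
        extra_flag = (group == "B") and (random.random() < labeler_bias)
        records.append((group, truly_risky or extra_flag))
    return records

def learn_flag_rates(records):
    """A 'model' that simply learns the historical flag rate per group --
    the crudest possible form of profiling."""
    rates = {}
    for group in ("A", "B"):
        labels = [label for g, label in records if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

print(learn_flag_rates(make_historical_records()))
# e.g. {'A': ~0.10, 'B': ~0.28}: the human labelers' bias comes back out
# looking like an objective difference in risk.
```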

As scholars like Porter and Ross have long observed, systems that appear to make behavior less discretionary and more quantified do not necessarily make it more scientific or neutral. They just incentivize people to make decisions differently, often in imperfect institutional environments that lend themselves to corruption and abuse, and always as human beings. Tech does not remove the human element. But it does make the human part of the process less transparent, because that part may be more or less limited to the tech's creators. And it can also sometimes make the process less subject to discretionary power, a domain within which help and mercy may lie. Tech is many things functionally in society—including a form of authority, obedience to which has long been recognized as a conspicuous threat to our more humane decision-making potentials. It can also be a way of systematizing dominance and compounding inequalities, as Virginia Eubanks's recent work on automated systems' effects on the poor in America shows.

So yes, it would be great if technology could help decrease avoidable, objective errors with serious consequences, like when rare but deadly diagnoses are missed in critical time windows. There are technologies that do this. And some of my research shows that people can exhibit bias in how they interface with such tech, but the tech itself actually seems pretty robust to that bias. In a context like medicine where there is (sometimes) a black-and-white right and wrong answer, and we can (often) know what it is, and the point is to help people and society and to do no harm (as if these outcomes had settled and static definitions)… There is definitely some hope for humanity in using technology responsibly to decrease some kinds of errors without building in prejudice or rote, dehumanizing (on multiple sides) rule administration.

But no, when it comes to really non-transparent, unaccountable arenas already prone to massive abuses of power—like intelligence, law enforcement, and border security—it's not so desirable to introduce decision-making technologies that can affect people's fundamental rights. Because we don't know ground truth as well, and perhaps never can. There are no universal diagnostic tests at present for guilt and innocence. When we get it wrong, innocent people suffer unfair punishments. The incentive structures are particularly perverse in a lot of relevant institutions. It is not typical to see the Hippocratic Oath hanging in a police station.

AI proponents counter that algorithms are more transparent than people. But the proprietary nature of decision-making software belies this counter-argument. Black-box proprietary algorithms' non-transparency has been contested as a due process violation. But black-box proprietary algorithms are the norm. And inside that box lies either an extension of previously gathered real-world data—complete with real-world biases—or an extension of rational rules the likes of which would cause real human beings to be incapable of making good life choices, according to the anti-Cartesian revolution in emotion and decision-making—or both.

People are dumb, but AI is dumber. People are whole organisms with various feedback loops from our bodies and emotional influences on our thought processes that help us make good decisions. There is a lot about consciousness that is still mysterious to scientists, and to all of us. And everything that can't be programmed or explained is part of what makes us human.

As people, we entrust other people with power as part of what we often think of (metaphorically) as social contracts. In democratic societies, people delegate trust through voting to other people to form governments to perform specific social functions. They do not delegate that trust to the unelected people who program the software that runs on machines, or who make the hardware constituting them (for good reason). That is one of the best reasons why…

Algorithmic Decision-Making (Including Profiling) is Also Illegal

Imagine you hear screams and shots near your home, and are frightened. You call the police. A robot shows up to take your statement. How would you feel and why? Most of us actually want to talk to a real person when something serious happens. And we want to be able to understand why decisions are made when they're important, not just be told that it came out of an algorithm so it must be right. The existing legal regime in the EU codifies this intuition.

Article 22 of the GDPR bans decisions with legal or similarly significant effects on people—including effects on their fundamental rights—from being based solely on automated processing, including profiling. Profiling is specified because algorithms usually profile, or engage in Bayesian updating according to demographic groups like gender, age, and race. So if you want to ban algorithmic decision-making, you might as well specify profiling, because that is a typical part of how algorithms make decisions.
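To make that concrete, here is a minimal, purely hypothetical sketch (in Python, with invented numbers and no relation to any real system) of the kind of Bayesian updating profiling involves: membership in a demographic group is treated as evidence that shifts an individual's estimated risk.

```python
# Hypothetical sketch of profiling as Bayesian updating; all numbers invented.

def posterior(prior, p_group_given_risky, p_group_given_not_risky):
    """Bayes' rule: P(risky | group) from the prior and the group likelihoods."""
    evidence = (p_group_given_risky * prior
                + p_group_given_not_risky * (1 - prior))
    return p_group_given_risky * prior / evidence

prior_risk = 0.01                 # base rate before any evidence
# Suppose the system believes one demographic group is over-represented among
# past "risky" cases (however those labels were produced)...
p_group_given_risky = 0.30
p_group_given_not_risky = 0.10

print(round(posterior(prior_risk, p_group_given_risky, p_group_given_not_risky), 4))
# ~0.0294: the risk estimate roughly triples on group membership alone.
```

Nothing else about the individual has to enter the calculation for the score to move; that is part of why profiling is a typical ingredient of such systems rather than an optional add-on.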

AI proponents tend to address this small problem (the fact that AI decision-making is illegal) with two arguments. First, they argue there will always be a “human in the loop.”3) Second, they argue the legal regime prohibiting purely algorithmic decision-making in the EU is wrong and should be changed. Both arguments are flawed.

“Human in the loop”

The “human in the loop” argument lacks face plausibility. Just because there is a person ostensibly deciding what to do with algorithmic output does not mean that the subsequent decision-making is really the person's prerogative. There will probably be strong institutional norms around AI that generally mean the person “deciding” has to go along with the machine's decision—whether that's denying a home loan, testing for a disease, or killing a target with a drone. Indeed, such norms probably already exist, but we don't know much about them due to non-transparency.

In addition to lacking face plausibility in prototypical institutional contexts, the “human in the loop” argument is not logically an argument. It just moves the definition of the point of decision-making down the line. That move does not change the fact that the decision-making process—links in the decision chain, ingredients in the mix, or however you want to analogize it—still includes profiling and other black-box input from the AI.

So sure, when AI lie detectors tell border guards who's probably lying and then border guards disproportionately select those people for secondary screening or worse, there is still a “human in the loop.” Proponents argue this means the resultant decisions are not purely algorithmic. So they're technically legal.

But if the tech is systematically biasing border guards against people it deems deceptive (confirmation bias), and those people are also disproportionately, e.g., racial, religious, gender, or sexual minorities, it still violates EU legal and moral norms against discrimination. Especially if it has any of a number of likely “garbage in, garbage out” problems wherein the machine learning is based on the wrong data and the right data are impossible to get. And since AI decision-making support tools including iBorderCtrl tend to use profiling—and this tool specifically has, in early related research, appeared to be more accurate for ethnic Europeans—it seems highly likely that additional screenings (or worse) resulting from the use of this sort of tool will disproportionately affect vulnerable groups, and that this result will be illegal.4)

When you really pin AI proponents down on these particulars of why their main argument here does not make sense, they sometimes say yeah, ok. You're right. This is bullshit. If it were actually in use, this technology would break the law as it stands today. But the law, they say, is wrong…

“But The Law is Wrong!”

Some proponents of AI decision-making argue that current EU law prohibiting AI decision-making is wrong and should change to make way for algorithmic decision-making, for instance, at EU borders. The basis of this argument is the idea that machines are not people, and so they can be more neutral, rational, reliable, and otherwise smarter decision-makers than people. But this argument rests on the false premise that machines arrive as some sort of deus ex machina. In other words, the argument is weak because people make machines.

An honest (if naive) exponent of the argument that tech is more neutral than people could set out to test that claim through empirical field research comparing bias in technology-mediated decisions with bias in non-technology-mediated decisions. Ideally this comparison would involve multi-method research including different outcomes of interest to various stakeholders (e.g., the tech industry, security professionals, and different end-users). It would also need to account for different ways in which technology and its use can be value-laden regardless of whether it institutionalizes bias.

This sort of comparison is not typically done, however. The empirical evidence does not support the hypothesis that machines make better decisions than people, and the data that could challenge or support it are not being collected. And it is a curious hypothesis in light of the recent revolution in decision science, whose takeaway is that we still know rather little about human decision-making, except that it seems to work much better when the whole organism is involved.

Another argument that proponents of AI might make in favor of changing the law currently prohibiting algorithmic decision-making is that machines could theoretically make more reliably explicable decisions than people. The problem with this argument is that in theory it's correct, but in practice it's false. The makers of black-box technologies like iBorderCtrl tend to consider their inner workings (down to their ethics reports) trade secrets. So you can't see what's really going on in the decision-making process. And even if you could see all the inner hardware and software workings, the process might still remain non-transparent. It's not always clear, for instance, why an algorithm exhibits racial bias, even if you can look at the underlying code and data.
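As a purely hypothetical illustration of that last point (Python, synthetic data, invented rates): a scoring rule that never touches a protected attribute can still produce a stark group-level disparity through a correlated proxy feature, and nothing in the code itself announces the bias.

```python
# Hypothetical sketch of proxy bias: the protected attribute never appears in
# the scoring code, yet the outcome disparity tracks it. Synthetic data only.
import random

random.seed(1)

def make_person():
    group = random.choice(["A", "B"])
    # A proxy feature (think: a postcode indicator) correlates with group.
    proxy = 1 if random.random() < (0.8 if group == "B" else 0.2) else 0
    return group, proxy

def risk_score(proxy):
    """The deployed score looks only at the proxy, never at group."""
    return 0.30 if proxy else 0.05

flagged = {"A": [], "B": []}
for _ in range(50_000):
    group, proxy = make_person()
    flagged[group].append(risk_score(proxy) > 0.10)

for group, flags in flagged.items():
    print(group, round(sum(flags) / len(flags), 2))
# A ~0.2, B ~0.8: a four-fold disparity with no 'group' variable anywhere in
# the decision logic -- auditing the code alone would not reveal why.
```

In a real system the proxies are rarely this obvious, which is exactly the problem.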

This non-transparency points back to the fundamental flaw of letting tech—which really means the people creating the tech—administer the basic processes and rights of a democracy. When it really matters for society, we delegate trust to people and not to machines. If we were to delegate trust to machines, we would really be delegating it to the cabal creating, running software on, maintaining, and otherwise accessing the machines. Another legal and ethical regime is possible, in which we decide as a society to do just that. But that regime would not be democratic. It would be an oligarchy—government by the small subset of a subset of people who run the technology.

Conclusion

People are animals, but as a species we're special. It's long been noted that our gifts are also our curses, and this is especially true for the intelligence that allows us to hurt ourselves, each other, and our supporting ecosystems with astounding stupidity. One thing that might explain this paradox is our apparent disconnect between (felt) decision-making and (thought) explanation. We don't understand ourselves; or at least, knowing oneself has been considered a worthy lifelong philosophical endeavor at least since Socrates.

“Descartes' error,” according to Damasio and his creative synthesis of contemporary neuroscience, was his dualism—his idea that the thinking part of the self was the true self, and was separate from the body, from feeling, from the brain's necessary mooring in the whole organism's physical being. A contemporary restatement of the cogito as such might as well be the rallying cry of AI decision-making proponents. If the mind is like software and the brain its hardware, then the body and its physically moored feelings are like pesky bugs that computers are less prone to than people. “AI simply reasons, therefore it is rational,” they might say, implying that the more disembodied the reasoning, the better the process and the truer the outcome.

But Damasio et al. suggest an organismic perspective wherein it is embodiment that drives feeling, which in turn motivates our passion for reason in the first place—along with all the feedback loops back and forth between body and mind (and emotions in between) that make cultivating higher reasoning possible. Algorithms aren't animals. But that might actually be a bad thing when it comes to decision-making.

And the fact that algorithms aren't animals doesn't make them gods, either. Tech is still man-made. And there is insufficient evidence that it produces fairer decision-making outcomes to justify arguing that the law barring algorithmic decision-making should change in the interests of fairness.

Of course context matters, and some uses of tech in decision-making can help society greatly. Medical diagnosis decision support tools are a good example, because they can help generate better differential diagnoses for medical professionals to investigate in the service of helping patients and the public. But those uses of those tools are already legal. And it doesn't appear from available evidence that those tools are particularly prone to institutionalizing bias, while available evidence suggests that security AI like iBorderCtrl automates discrimination.

The non-transparency of algorithmic decision-making tools in the security context also differs from the use of tech in decision-making in other realms, because it threatens democracy. People vest other people and not machines (run by small groups of other people) with power. So algorithmic decision-making including profiling is illegal in the EU, and it should stay that way. People are dumb, but algorithmic decision-making is dumber.

1)
Damasio summarizes his critique of Cartesian dualism thusly:
“This is Descartes' error: the abysmal separation between body and mind, between the sizable, dimensioned, mechanically operated, infinitely divisible body-stuff, on the one hand, and the unsizable, undimensioned, un-pushpullable, nondivisible mind stuff; the suggestion that reasoning, and moral judgment, and the suffering that comes from physical pain or emotional upheaval might exist separately from the body. Specifically: the separation of the most refined operations of mind from the structure and operation of a biological organism.”—Descartes' Error, pp. 249–250
As an alternative creative synthetic understanding of the contemporary scientific literature, he suggests the hypotheses “that feelings are a powerful influence on reason, that the brain systems required by the former are enmeshed in those needed by the latter, and that such specific systems are interwoven with those which regulate the body” (p. 245).
2)
That was before the currently ascendant Van der Kolk, Porges, Levine-style “wise body” revolution in trauma therapy, psychedelics revolution in treating mental illness, and mindfulness revolution in responding to dysfunctional narrative, to name a few branches of the counter-reacting tree.
4)
For a more in-depth discussion of why iBorderCtrl automates discrimination, please see the previous blog post on that topic.