This was originally posted as a response to an article on Quillette entitled “The White of the AI”. I will clean it up later, as breakfast is pending.
Great essay. Succinct, cogent and a perfect example of clear writing. I think there really are problems associated with AI and race, especially in the areas of facial recognition (which is relatively easy to overcome: more data, please!) and statistical hegemony. The latter beast is a far more difficult dragon to slay. Actuarial work has long been obsessed with obtaining data which is as accurate as possible, for the purposes of providing fair estimates of likely risk, so that insurers and banks can provide products which reflect a customer's circumstances whilst deriving modest profits, along with the proceeds of the far more lucrative float (money made from investments whilst the insurer waits for risks to accrue and the possibility of a payout).
But to what extent does the calculation of risk embed unfairness into the system? African American men have an elevated risk of hypertension in later life, leading to a whole host of potential medical problems. Should this affect their medical insurance? Women can get pregnant. Should a woman be less likely to secure lending or venture capital, just because her leadership might be taken out of the equation at an all-important business juncture? Officially, the answer to these questions may be no, but the problem is that these considerations tend to embed themselves back into the complex equation of a human life, peripherally. Imagine a web of statistical correlations extending out from a core element of your identity (race or gender in this scenario), some of them positive, some of them negative, some of them causal, some of them less so.
Even if the insurer, the bank, or even a boss or potential employer tries to eliminate your race or gender as a consideration in their calculations, these features will embed themselves back into the system through complexity. Consider redlining as an example. Officially, it no longer exists. But lending decisions are primarily based upon credit ratings, income is a huge consideration in credit ratings, and because Pew data show that Latinos and African Americans routinely have lower incomes than whites, the disparity embeds itself back into the system at the executive decision-making layer. It should also be noted that poor people are targeted for predatory lending practices such as payday loans, and this can also affect your credit score. We may ultimately eliminate human prejudice, only to find that an all-pervasive statistical hegemony still exists, simply because some demographics possess a statistical advantage.
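To make the proxy mechanism concrete, here is a minimal sketch in Python, using a fabricated toy dataset rather than real lending data. Every number in it is an assumption for illustration only; the point is simply that a model which never sees the group label can still produce skewed outcomes when a correlated feature like income remains in the inputs.

```python
# Toy illustration (synthetic data): a "race-blind" lending model
# can still produce group-skewed approvals when a correlated
# feature such as income remains in the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical group labels with an income gap baked in,
# loosely mirroring the Pew-style disparity described above.
group = rng.integers(0, 2, n)                 # 0 or 1
income = rng.normal(50 + 15 * group, 10, n)   # group 1 earns more on average

# Historical repayment outcomes driven purely by income.
repaid = (income + rng.normal(0, 5, n)) > 55

# The model is "blind": income is its only input; group is never seen.
model = LogisticRegression().fit(income.reshape(-1, 1), repaid)
approve = model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: approval rate = {approve[group == g].mean():.0%}")
# Approval rates diverge sharply by group, because the excluded
# attribute has leaked back into the system through income.
```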
AI can be better than humans here, because humans routinely overestimate risk, by an order of magnitude in some scenarios. COMPAS, the algorithmic system which calculates the risk of recidivism and reoffending for use in sentencing, is a prime example. Judges are better than most humans at assessing risk, but they still get it wrong. The problem is that African Americans do possess higher rates of recidivism, for any number of complex reasons.
The company which makes the software is understandably reluctant to share proprietary technology, but it is highly likely that gang involvement plays a considerable role in the algorithm. Although Norway has achieved impressive results with prison reform (though not as impressive as they claim), the Norwegians themselves have admitted that their system does have problems with recidivism amongst the gang-affiliated. This is a tiny problem in Norway, but a huge problem in America. Needless to say, some demographics are far more likely to be gang-involved than others.
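Since COMPAS's actual formula is proprietary, here is a purely hypothetical scoring rule, with invented weights and base rates, that shows the underlying mechanism: a score containing no demographic term at all will still differ by group whenever an input like gang affiliation has different prevalence across groups.

```python
# Hypothetical, group-blind risk score (COMPAS's real formula is
# proprietary; this only illustrates the base-rate mechanism).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two demographic groups with different (assumed) rates of gang
# affiliation; the scoring rule itself never sees the group label.
group = rng.integers(0, 2, n)
gang = rng.random(n) < np.where(group == 0, 0.05, 0.20)
priors = rng.poisson(1.5, n)

# A simple weighted score over non-demographic inputs only.
score = 2.0 * gang + 0.5 * priors

for g in (0, 1):
    print(f"group {g}: mean risk score = {score[group == g].mean():.2f}")
# The group with more gang involvement receives higher average
# scores, even though the score contains no demographic term.
```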
And the problem is that when government tries to intervene, its actions can have huge negative repercussions if it doesn't carefully consider all the second- and third-order effects of the intervention. It is highly likely that one of the primary reasons why only one person was ever prosecuted over the 2008 financial crash was that government didn't want to bring to light its own role in the fiasco. The Clinton administration wanted to remove some of the barriers to home ownership caused by the history of redlining, so that marginalised groups could experience equal participation in the housing market.
The problem was, as the bankers likely pointed out at the time, that lending decisions are primarily based upon income, and income levels among African Americans and Latinos simply weren't there. Rather than implement some form of complex adjustment system, which would likely have been unpopular in some demographics because of perceived unfairness (higher interest rates for whites, for example, or tax relief plus a small federally funded deposit for Blacks and Latinos), they simply promised a Savings and Loan-style bailout to the banks in return for the assumption of bad lending risks. They also made use of the two public-private hybrids, Fannie Mae and Freddie Mac.
Now, to be fair to government, they had no idea upfront that the banks would take their promised assumption of risk and rack up a credit card bill which beggars belief. And to be fair to the Democratic Party, George W. Bush was just as culpable as Clinton: he likely didn't want to be accused of racial animus, and was quite happy to be the beneficiary of a legacy program which was increasing the rate of home ownership and pushing the housing market towards a boom. None of this should absolve the banks of wrongdoing, because of all the obscenities to come out of that period, the practice of using Black churches, a locus of trust and community, to push predatory lending rates must surely stand as a grotesque monument to Bad Actors. It should be taught in business schools as an example of what not to do.
This essay is not meant as an admonishment to government not to act or intervene where emerging technologies replicate or compound the inequities of the past. It is meant as a warning to be extremely careful when deciding to act, and to understand the complexities of the problems government will face, especially in terms of downstream consequences. I also wanted to illustrate how, in the modern context, too much is made of implicit or unconscious bias in explaining the difficulties faced in trying to create a fundamentally fairer system. This is not to say that these factors are not very real problems (ingroup or affinity bias is found particularly in hiring), but there are other factors which need to be considered if one really wants to achieve fairness. There are real monsters out there that need to be slain, which have nothing to do with human prejudice.
The statistical monster also rears its ugly head in hiring: your credit rating directly affects both your chances of getting some prestige jobs and your starting level of remuneration. If you had a father who gave you a credit card for college and religiously paid it off, you will have a considerable head start on others…
As a long-time fan of your comments on Quillette, I was delighted to see you've started writing regularly on Substack.
My concern with this article, though, is that I am not sure everyone would agree with the way you are defining fairness. The point is that insurance or interest rates should be set blind to race, not blind to hypertension or income. The fact that some racial groups have better and some have worse health or lending records is just a statistical artifact, not an example of some kind of higher-level racism. The point of a fair system is to take our thumb off the scale, not to change which side we are pressing on.
From a fairness standpoint, I do not believe we should lower basketball shooting standards to ensure we get a sufficient proportion of Asian females or dwarves on NBA teams. Fairness is using impartial standards applied equally to all, and disparate impact is not proof to the contrary.
From a practical standpoint, I could go into depth on the unintended adverse impacts of forcing statistical equality after the fact on insurance or lending. Perverse results are virtually guaranteed. That would probably be off topic, though, so I will refrain from doing so.
Great article!