Dominating British news is the Post Office Horizon scandal. At one of the law firms I may have worked at, I may or may not have represented an innocent sub-postmaster. If that did happen, it’s a case I’ll never forget.
Upon taking the case, my initial reaction was sheer bewilderment. During our first meeting, late on a Sunday night at my client’s house, I was unsure what to believe, although my client insisted that the computer system was defective. How could what is essentially a calculator miscalculate?
If I did work on such a case, I could be forgiven for my initial misgivings: I was faced with only one such case, and no concrete evidence of others like it. The corporate law firms representing the Post Office, on the other hand, evidently saw the pattern – thousands of similar cases crossing their desks. Surely those Post Office lawyers had some inkling that their client’s instructions to pursue the sub-postmasters were misconceived. I remain unimpressed.
My readers know that I’m a proponent – albeit a sceptical proponent – of ethical AI. But the Horizon scandal should give us all pause for thought about what happens when the computer says “no” and the operators do not know why. Dodgy computerised systems must be open to human challenge, reviewed by ethical, skilled humans. Furthermore, the data used to train these systems must be free from bias.
While the EU implements stringent AI regulation, the UK is set to become the Wild West of AI. I foresee more Post Office-type scandals. Thank you, Mr Brexit, one could say.