“Trust me, I’m a doctor!” is an old cliché that we’ve all learned to take with a pinch of salt, but now there’s a new benchmark for trust.
How often, when a friend or colleague spouts a factoid or news bite, do you check it on Google for accuracy? And when driving to a new destination, or trying to beat traffic, you rely on your smart device – car or phone – for navigation, don’t you? You’d never ask a stranger for directions.
All these things are driven by artificial intelligence, and we’ve come to rely on them because they’re usually right. Machines learn by consuming vast amounts of data, and they get constant feedback from other machines – ‘adversarial networks’ – that evaluate their performance on the job. No human can handle that level of throughput.
As a result, we’ve now got AI systems that tell us who to hire and who to fire, when to buy and sell, what to plant where and when, and even who to date! Our contracts and tax returns are checked by AI, our medical scans and test results are screened by AI, and in some societies, our behaviour is automatically evaluated, to see if it’s socially acceptable. By AI.
Sure, there are biases and error bars in any system, but these are mainly baked in by the architects and designers, who after all are only human! And there’s bias and prejudice in every society, so you can’t really blame the computers for picking up on it. Ethics and norms are a social problem, not a data construct.
And most of the time – almost all of the time – the machines are better than humans at finding errors, diagnosing conditions, identifying criminals, and the like.
You can take it from me, it’s true; trust me, I’m an AI.