AI: From GIGO to BIBO
By Steve McKee
You’ve probably seen the infamous Today Show clip (circa 1994) in which Bryant Gumbel, Katie Couric and Elizabeth Vargas are humorously mystified by the internet. We can make fun of their wide-eyed confusion today, but it’s not as if, 30 years ago, the rest of us understood that newfangled thing any better than they did.
I believe we’re at a similar moment with AI. Not only do average people like you and me know precious little about it, but I’m also convinced the so-called experts don’t even fully realize what we’re getting into. I do have a hunch, however, based in part on our collective experience with the internet: it will involve the continuing struggle between truth and lies.
We’ve all heard the term GIGO, which stands for “garbage in, garbage out.” It’s a truism that applies not only to the data we put into computers but also to the food we eat and the entertainment we consume. Given what we do know about AI, I think we may need to coin a new term: BIBO, or “bias in, bias out.” Computer programmers are no more immune to prejudices and presuppositions than jurors or journalists.
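To make BIBO concrete, here’s a minimal sketch in Python, using invented and deliberately skewed data, of how even a trivial “model” trained on slanted examples faithfully reproduces the slant. Nothing here resembles a real language model; the point is only that the prejudice arrives with the data, not with the algorithm.

```python
# A toy illustration of "bias in, bias out": a trivial "model" that
# learns word associations from its training examples will reproduce
# whatever slant those examples carry. All data here is invented.
from collections import Counter

# A deliberately skewed "training corpus" (hypothetical example data).
biased_corpus = [
    "nurse she", "nurse she", "nurse she", "nurse he",
    "engineer he", "engineer he", "engineer he", "engineer she",
]

def train(corpus):
    """Count which pronoun follows each profession in the corpus."""
    model = {}
    for sample in corpus:
        profession, pronoun = sample.split()
        model.setdefault(profession, Counter())[pronoun] += 1
    return model

def predict(model, profession):
    """Return the pronoun most often seen with the profession."""
    return model[profession].most_common(1)[0][0]

model = train(biased_corpus)
print(predict(model, "nurse"))     # -> "she": the skew goes in, the skew comes out
print(predict(model, "engineer"))  # -> "he"
```

The program holds no opinions of its own; it simply echoes whichever association its training data happened to over-represent.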
What does AI have to say about this?
Just for fun, I asked ChatGPT (the large language model application through which most of us are having our first conscious experience with AI) whether it is biased. Here’s what it said:
As an AI language model, I strive to be unbiased and provide information and assistance based on the data I have been trained on. However, it’s important to note that my responses are generated based on patterns and information present in the training data, which may include biases inherent in the data itself. While efforts have been made to mitigate bias during the training process, I may still inadvertently exhibit some biases or reflect the biases of the data I was trained on. It’s always a good idea to critically evaluate the information and consult multiple sources to obtain a well-rounded perspective.
That’s pretty well said, and it underscores why, in the age of AI, the need for truth will only increase. Consider how destructive it will be if self-learning AI systems become manipulated by Google’s already manipulated results, creating an accelerating manipulation loop. Yikes.
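To see why that loop earns a “yikes,” consider a back-of-the-envelope simulation (all numbers invented) of a system that retrains on its own mildly exaggerated outputs. Even a one-point initial skew compounds generation after generation:

```python
# A toy simulation of the feedback loop described above: a system that
# retrains on its own slightly skewed outputs drifts further from the
# truth each round. Every number here is invented for illustration.
import random

random.seed(0)      # make the run repeatable
estimate = 0.51     # ground truth is 0.50; the system starts slightly skewed
AMPLIFY = 1.1       # each generation overstates its majority view by 10%

for generation in range(1, 6):
    # The system answers questions according to its current, exaggerated belief...
    belief = min(1.0, estimate * AMPLIFY)
    outputs = ["A" if random.random() < belief else "B" for _ in range(10_000)]
    # ...and those answers are then scraped and treated as fresh training data.
    estimate = outputs.count("A") / len(outputs)
    print(f"generation {generation}: share answering 'A' = {estimate:.3f}")
```

In five rounds the toy system’s estimate drifts from 51% to roughly 80%, even though the underlying truth never changed. Real systems are vastly more complicated, but the compounding dynamic is exactly the worry.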
Yet therein lies hope. The corollary to Richard Weaver’s post-World War II formulation that ideas have consequences is that bad ideas have victims, which means the incentives for most AI applications will be to root out as much error as possible.
With artificial intelligence, healthcare diagnoses will be more accurate and treatments more quickly administered, but only if they’re based on accurate information. If not, they’ll harm people. Transportation systems will be faster and safer, but only if they use accurate information. If not, they’ll crash. Automated manufacturing systems will be more efficient and productive, but only if they use accurate information. If not, they won’t work. And public policy will better contribute to human flourishing, but only if it’s based on factual information. If not, the outcomes will be negative.
That last point has long been the case, but, in the political realm, connecting cause and effect has always been muddy, slow and difficult to gauge. Communist economies, for example, never did work, but they fooled many people for a long time.
AI: The great equalizer?
With AI, the old saying that a lie can make it halfway around the world while the truth is still lacing up its shoes may become less and less the case. Artificial intelligence accelerates learning, as did the internet before it and the printing press before that. The more quickly AI can expose bad ideas as bad and confirm good ideas as good, the more humanity will flourish.
AI might, therefore, be a great equalizer, turbocharging the truth tortoise in its race against the lying hare. Politicians and pundits who misrepresent the facts may get busted faster and with better evidence. Economic data collected more quickly and in ever-more-detailed forms will better expose misinformed tax policy. Military intelligence gathered faster and more accurately will lower the uncertainty that often leads to destructive wartime and foreign policy decisions. Even mundane traffic data, richer and more sophisticated, will help prevent accidents on roads and bridges.
We must always beware, of course, of nefarious entities who wish to use AI for evil. Totalitarian China is already demonstrating how technology can be used to increasingly track, deceive and oppress its people. Seeing this, many in this country are clamoring for AI regulation, but decentralization may be a better bet. Thousands of companies, and even state and local governments (the laboratories of democracy), experimenting with AI will allow for a great deal of learning with less widespread and harmful consequences, reducing the likelihood of oppression, and even of Armageddon.
The other day someone asked me if the world would be a safer place in the age of artificial intelligence. Since all lies eventually get exposed, my answer was yes. But given where AI is in its development, that may happen only after the world first becomes more dangerous. I’m not ready to say I’m an AI optimist, but if we’re careful, I think we can find our way through.
Originally published on SmartBrief on Leadership