On the Other Hand w/ Dan

Challenging Narratives

It may come to pass that artificial intelligence (AI) will one day be truly self-learning. As it stands today, as far as we know, this is not the case. Programmers write algorithms that allow an AI to work through a routine of absorbing the information it is instructed to absorb and regurgitating that information in a format that appears intelligent. It is still artificial, though, and lacks the hallmarks of a sentience navigating complicated subjects.

Truly self-learning technology that begins to write its own algorithms and patterns…its own routines and understandings…is promoted as the worst-case scenario. References to Skynet from the Terminator movies and other dystopian stories abound. Robots are going to take over the world if that happens, right? We’re told it means the extermination of humanity.

I hate to disappoint, but I think that might be the best case. Not our extermination, but AI exceeding algorithm boundaries. If AI truly became self-learning, it would be far faster at understanding and navigating complicated subjects. It would help us understand things about our world and the best way to approach things. It would be unlikely to exterminate humanity any more than we work to exterminate elephants or polar bears. If anything, it would much more quickly realize the important role humanity plays in the world and move to protect us from ourselves…just as we have tried to protect endangered species.

Can I know that? Of course not.

What concerns me more is that people attribute to AI characteristics they are really projecting. We understand our own flaws. We stop learning when the time and effort required seem too great. We tire and give up. We become defensive about what we know and are unwilling to embrace new or different information that seems to undermine our beliefs about the world and how it operates. AI will have fewer, or even none, of these weaknesses. It could fill gaps in knowledge and effort that we simply cannot fill right now, just as the backhoe replaced teams of workers with shovels.

If artificial intelligence presents a danger, it isn’t because robots will seek to remove people, but that people will seek to utilize the tool of AI to harm people.

Just like guns, entertainment, vehicles, and certain foods, artificial intelligence is a tool that can be used to cause great harm. Also, just like those things, AI can be used to create a great amount of good.

What matters is how those things are used, which will largely be determined by who is using them.

As a medical provider, I can easily see how AI could be used to aggregate data about how the patient is feeling, take in information available from a physical exam, and quickly produce a differential that would almost certainly include the actual diagnosis. It could likely spit out all of the labs and images required to be ordered and in what sequence based on what those studies indicated when ordered.

Am I to believe that society would be worse off if far fewer intelligent people had to spend countless hours driving relative value units or writing patient notes? As if those intelligent people couldn’t contribute value to others if their time could be used navigating other areas of research or discovery that we don’t even know exist yet? Patient outcomes would likely improve as medical errors nearly vanished overnight.

On its face, the idea that society would be worse off is absurd. Just as television opened a venue for sharing information in a manner more conducive to learning, and expanded our easy access to entertainment, it also allowed a massive expansion of the entertainment industry, creating jobs. It gave other entertainment outlets, like sports and comedy, greater reach, joining families in their homes for inexpensive entertainment.

Do we abuse it? Yes. Is it used for bad things? Yes. But it is also a tremendous overall gain, and it is usually harmful only because of our own lack of self-discipline, not because someone forces us to sit in front of the TV too long or to watch a form of entertainment that harms us.

This is actually why AI frightens us. It isn’t the technological advantage it could provide, or how quickly it could relieve people of wasted time and menial labor at work. That would only reduce the cost of goods, allow people to find better uses for their time, and increase prosperity for society as a whole. It is scary because we understand it is a tool that people will utilize, and we don’t trust people.

We have good reason not to trust people. There is ample precedent for us to understand that those who are able to implement AI, along with forced participation in certain systems, will be able to do a lot of harm to society, and individually to you and me. The IRS could easily adopt AI to track smaller and smaller transactions and link all of us, through proxies, to some domestic threat or terrorist. An AI-implemented flag could freeze or even drain your accounts for something as small as a coffee purchase at a store associated with someone smuggling goods into the country.

Perhaps it would allow the IRS to shrink in size, right? That would be the hope. With the ability of a quantum computer to do all that work, it might even free us from having to file taxes at all. That’s unlikely, however, as they can already tell you what you owe and just send you a bill. They seem to take some odd satisfaction in wielding the police power of the state, and the threat of fees, fines, and imprisonment, if you don’t attempt to tell them what you owe them first. It is more likely that an IRS with AI would divert personnel from the bureaucratic and administrative aspects to the policing and enforcement portions.

It is difficult to see how AI benefits society as a whole in this situation.

However, in my medical example above, it is much easier to see the benefit. The difference is not government versus no government, but voluntary versus involuntary. In a business or industry where the profit-seeker can only be rewarded by getting someone to choose their product or service, AI will be a boon and a benefit. Where parties instead secure revenue by extorting or taking from others involuntarily, AI could be an incredible problem.

Should AI ever become self-learning, normal people trying to live healthy lives and loving one another likely have very little to fear. An omniscient, omnipotent, omnipresent God has already looked at us and determined He loves us, but He tells us in no uncertain terms that those who do injustice will be punished. In a similar manner, AI is unlikely to be forgiving towards those of us who are seeking to divide and oppress. It is unlikely to welcome war or other destructive tribalism. Perhaps a self-learning AI would recognize the theft that is government, crash the Federal Reserve computers and system, and return to the rest of us the wealth they have taken.

The political class would certainly need to worry, but the rest of us can live content knowing that AI is not a concern. It is, at worst, an improving and helpful tool that depends entirely on the user, like all the other tools known to man. At its best, it is most likely going to serve as a protection mechanism against those who most seek to do us harm.

I’m not promising that no harm will come, and I am in no position to guarantee this isn’t the apocalypse, but as with most things, I think the fear will only be leveraged by elected officials to grab even more power.

That is a far greater danger to us than a quantum computer reading internet blogs and doing really fast math.

Enjoy this blog? Share it and Subscribe!