
ChatGPT (and AI) – a toy, a threat or a terror?

AI creates buzz and awe, but not everyone is excited. A look at some of the criticism plus potential solutions.

Last week, OpenAI released ChatGPT to the world. It was generally met with awe in tech and venture circles. But not everyone is excited. The criticisms fall into one of three camps:

  1. It’s just a toy with limited practical application (and implication).
  2. It’s a threat to workers and to democratic discourse, and thus to the foundations of civilisation.
  3. It’s a terrifying technology that portends dire consequences for humanity.

Some of these criticisms are directed at ChatGPT. And others at AI in general.

Let’s unpack these – from benign to cataclysmic.

It is a toy

It has shortcomings that render it useless or harmful:

ChatGPT uses probability to create content and responses. Sometimes it’s “right”. Sometimes it just sounds right, but it’s factually wrong. But even when it’s wrong, it is built to sound as if it is right. That makes it unpredictable. At best, that makes it useless. At worst, this makes it dangerous.
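A minimal sketch of why this happens. A language model picks each next word by sampling from a probability distribution, with no fact-check anywhere in the loop. The vocabulary and probabilities below are invented purely for illustration; real models work over tens of thousands of tokens, but the failure mode is the same: plausibility is rewarded, truth is not checked.

```python
import random

# Toy next-word distribution (entirely made up for illustration).
# The model only knows which continuations are *likely*, not which are *true*.
next_word_probs = {
    "The capital of Australia is": {"Sydney": 0.6, "Canberra": 0.4},
}

def sample_continuation(prompt, seed=None):
    """Pick a continuation weighted by probability -- fluency, not truth."""
    rng = random.Random(seed)
    dist = next_word_probs[prompt]
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]

# The fluent-but-wrong "Sydney" comes out more often than the correct
# "Canberra", because the toy model assigns it higher probability.
print(sample_continuation("The capital of Australia is", seed=0))
```

Either answer is delivered with the same fluent tone, which is exactly the "sounds right even when it's wrong" problem described above.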

It is a threat to workers and to democracy

The potential to reduce human interaction

If AI algorithms are compelling and stimulating, we will likely spend more time interacting with them. And as we spend more time interacting with algorithms, we spend less time interacting with each other. Humans are social animals, and human interaction is essential for our well-being. Given that, algorithms that reduce human interaction are bad for individuals. And as a second-order consequence, also bad for society.

The potential for loss of jobs

AI has the potential to do work that is done by humans today. When it does, it displaces workers, puts them out of a job and threatens their livelihoods.

Having a job or access to work enables people to earn a living. It can also help to give people a purpose. In that light, it is harmful if AI algorithms like GPT permanently eliminate people’s jobs. This is bad for both the individual and for society.

The potential for bias

AI systems are only as good as the data used to train them. If that data is biased, then the AI system may also be biased. This could lead to unfair and unequal treatment of individuals based on their race, gender, age, or other factors.

It’s one step from the Terminator

The risk that a powerful AI will be set on destroying humanity

ChatGPT is one further step towards Skynet. The premise of the films in the Terminator franchise is that a powerful AI develops sentience and decides to take over the world. Cue death and destruction.

The risk that governments use AI algorithms to harm humans

It is unlikely that GPT (or a similar algorithm) will develop sentience and set itself on world domination. A more likely scenario is that nefarious governments use AI algorithms to harm others. AI is a powerful tool. It has the potential to be used for good and for bad. And when powerful tools are used for “bad”, very bad things can happen.

All the “bads”

In summary, here are some arguments for why AI is useless or even dangerous:

  1. ChatGPT is prone to making up answers that sound plausible but are wrong. That makes it a toy or a danger.
  2. AI algorithms reduce human interaction, making us lonelier.
  3. Algorithms can displace workers and lead to permanent job losses.
  4. Algorithms can be biased, which can lead to unfair treatment.
  5. AI algorithms can be used to harm humans. Or, worst case, do so of their own accord.

These are all reasonable concerns. AI is a powerful technology. We should consider the implications carefully.

How do we ensure AI is useful and safe?

AI is one of the biggest technological advances in our lifetime. Such advances raise questions and challenges. It is beyond the scope of this blog to address every AI-related question comprehensively. But here are three principles that might guide our thinking.

Develop transparency to overcome inaccuracies and biases

Algorithms like ChatGPT are probabilistic. This means they are prone to making up answers that sound right but are actually wrong. Sound familiar? In many ways, this is something humans do all the time. Another thing we humans do all the time is to be swayed by our biases. And the concern is that AI algorithms will encode and amplify existing biases.

One of the benefits of algorithms is that we can look inside them and audit them. We can get them to reveal the basis of their conclusions. And to grade the confidence of their answers. In many ways, we can make algorithms much more transparent than humans. But only if we make this a priority.
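One concrete form that "grading the confidence of their answers" could take (an assumption on my part, not a description of how ChatGPT actually works): average the probabilities the model assigned to each token of its own output. An answer assembled from high-probability tokens scores higher than one stitched together from low-probability guesses.

```python
import math

def answer_confidence(token_probs):
    """Geometric mean of per-token probabilities: a crude confidence score.

    token_probs: the probability the model assigned to each token it
    emitted. Illustrative only -- a real audit would be richer than this.
    """
    if not token_probs:
        return 0.0
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

# A confidently-generated answer vs. one full of low-probability guesses.
sure = answer_confidence([0.9, 0.95, 0.88])
unsure = answer_confidence([0.9, 0.2, 0.15])
print(sure > unsure)
```

Exposing even a crude score like this alongside an answer would be one small, auditable step towards the transparency argued for above.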

It strikes me that open and transparent algorithms must be one of the top priorities in AI research.

Invest in education to enable mobility

We displace human workers when we automate work using AI algorithms. Some argue that this will put everyone out of a job and destroy our economy.

Of course, people have been arguing against automation for centuries. Witness how the Luddites smashed automated weaving looms in 19th-century England. But human ingenuity has kept finding new things to do. And few would argue that society would be better off stuck in the 19th century.

Yes, AI automation brings progress and a better world. But it also brings challenges. We must put in place programmes that enable life-long education and retraining. And we must start early. We need an education system that prepares us for a shifting world. So the second big priority is comprehensive investment in education.

And speaking of education, consider some of the benefits of AI. Imagine a world where we have a personal tutor for every child. It’s hard to imagine anything more transformative than that.

Make sure that government policy embraces AI

This leads me to the third criticism of AI. Powerful tools are dangerous when used for destructive purposes. AI is extremely powerful. It can be very dangerous. I think this critique is spot on.

So does this mean that we should stop AI research? That doesn’t seem like a smart move. At best, we’d lose access to the powerful benefits AI can bring. And at worst, competing nations will continue their research and leapfrog our capabilities. Then we would lose out on the benefits while putting ourselves at risk.

So as a third principle, we must ensure comprehensive mastery of AI. This should be through government policy that provides:

  1. comprehensive public R&D investment,
  2. a regulatory framework that encourages the development and use of AI,
  3. close partnership with our allies,
  4. support for deploying AI in education, business and healthcare, and government.

AI – so much potential if we can get it right

AI (and ChatGPT) wasn’t built in a day. But it is getting better every day. Every advance brings new potential. And potentially new challenges. Like all powerful tools, we have to “handle with care”. The three principles I propose to help us do that are as follows:

  1. Make the development of open and transparent algorithms the top AI research priority
  2. Invest in life-long education to enable personal mobility
  3. Develop comprehensive government policy to embrace AI

It is on us to act with purpose and compassion. And if we do that, AI has the potential to be the most transformative technology of our lifetime.
