Computer user interfaces just got smarter; this is worrying

The other day I was talking to a computer. Presented with a small chat box, I typed in simple questions – you know, “why does it get cold in the winter?” or “why is the sky blue?” – and it answered them well.

I then asked it to tell me about the partition of India in 1947.

Partition, it wrote, was “the division of British India into two independent countries, India and Pakistan,” which “was the result of the Indian independence movement.”

It went on, and surprisingly competently too.

Huh, I thought. This was new.

Those responses, and millions of others, came not from just any computer but from something called ChatGPT. It is the latest iteration of interactive artificial intelligence software created by OpenAI, the company founded in 2015 by Elon Musk and Sam Altman, among others.

It works by being “trained”: fed huge amounts of text, it learns to recognize patterns and relationships in that information.

You can ask it questions in plain language and it will use that training and its smarts to spit out an answer.

Put more simply, if ChatGPT can provide a truly competent, useful answer to all sorts of questions – “Why did World War II start?”; “How do I bake gingerbread?”; “How do I calculate my taxes?” – it is because it has “read” and learned about those things.
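To make the idea of “learning patterns from text” concrete, here is a toy sketch in Python. It is nothing like OpenAI’s actual system, which trains an enormous neural network on billions of words; the tiny corpus, the word counts and the sampling here are purely illustrative of the basic principle.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model that "learns" which word tends
# to follow which. ChatGPT's real training involves a vast neural
# network; this sketch only shows the underlying idea of extracting
# statistical patterns from text and reusing them to generate more text.

corpus = (
    "partition was the division of british india into two independent "
    "countries india and pakistan and was the result of the indian "
    "independence movement"
).split()

# "Training": for each word, record the words that follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a prompt word and repeatedly sample a
# plausible next word from the learned patterns.
def generate(start, length=10):
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the division of british india and pakistan ..."
```

Scale that idea up by many orders of magnitude, and replace the simple word counts with a deep neural network, and you get something like the systems behind ChatGPT.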

This is as remarkable as it is alarming, because what we are essentially looking at is the automation of some parts of human thought.

The consequences of artificial intelligence for work, life and society will be profound, and quite possibly as transformative as the internet and the printing press before it.

But, as with all technology, the ends to which it is put will depend not only on how it is used, but on who controls and shapes it.

And on that score, this latest iteration of AI should give us cause for concern.

Even with those caveats, it’s hard not to be shocked by how clever ChatGPT can be; social media is already littered with examples: people using it to come up with useful Excel formulas, to write computer code, or even to draft short essays that mostly match what my undergraduates are capable of. It is, at least at first glance, astonishing.

However, the ability not only to spit out knowledge but sometimes to synthesize it is both the strength and the weakness of this kind of AI.

A lot of what we do on a daily basis falls into that category of gathering or packaging information, something ChatGPT already seems to be pretty good at.

If a human’s job is to analyze data, flip through spreadsheets, or write simple content like a press release, it’s not hard to imagine an AI doing some of that work in a decade or two, not simply because it is “smart,” but also because capitalism relentlessly seeks efficiency.

If that sets off alarm bells, it should. The downside of AI’s capacity to synthesize information is that it will repeat both the bad and the good in the data it was trained on.

There have already been many examples of ChatGPT spewing misinformation, or simply repeating the biases that still plague us today. (It has been caught suggesting, for example, that white and Asian men make better scientists.)

Imagine how much worse misinformation or systemic bias becomes when it is delivered in the persuasive, fluent prose of an AI. That is the stuff of dystopia, not least because artificial intelligence is not actually “intelligent” as such; it can, at least for now, only ever be a product of the people and structures that created it.

This means that AI cannot replace humans in some simple substitution. Rather, just as a calculator or spreadsheet can tell you how much cash flow a business needs but not whether the business is compelling, AI will deploy its most positive capacities only in conjunction with human supervision.

However, this is a best-case scenario.

First, technology has built-in biases. Fifteen years after Twitter’s launch, for example, we can now see that the platform is structured to prioritize extreme discourse, to our detriment.

What hidden biases or behaviors will be cultivated by an AI trained on historical data, or one deployed to increase a California company’s profits?

And it’s fair to say that we shouldn’t simply assume Silicon Valley’s titans have good intentions. Nowhere among the technocrats’ breathless proselytizing about a future of space travel or electric cars is there any discussion of a world without hunger or injustice.

The glimpse of AI’s brilliance that ChatGPT gives us is awe-inspiring, in both its promise and its dire threat.

For too long we have ignored that same ambivalence in social media and in digital technology more generally.

We cannot afford to make the same mistake again.

Navneet Alang is a Toronto-based freelance contributing technology columnist for the Star. Follow him on Twitter: @navalang
