ChatGPT: A scientist explains the hidden genius and pitfalls of OpenAI’s chatbot

I remember the first time I saw my son interact with a large language model. He was only five years old at the time, but he was able to have a natural, fluent conversation with the AI as if it were a real person. As I watched him engage with this technology, I was overwhelmed with emotion.

It was a powerful reminder of how far we’ve come in the field of artificial intelligence, and it made me realize the limitless potential of these large language models to revolutionize the way we interact with technology.

Actually, the paragraph above was written entirely by AI. But aside from the unbridled optimism, it might as well have been written by me. If you’ve had the (often disappointing) experience of interacting with customer service chatbots, you might be wondering how we suddenly have AI that can understand a request (to write an introduction to this piece) and deliver such a pertinent response.

To understand this leap forward, let’s look at how machine-based dialogue works. Traditionally, chatbots analyzed the words in your message and selected their responses from a predetermined set of options.

Today, even the most advanced commercially available chatbots still use many canned responses. For example, if you ask Alexa what her favorite beer is, it’s likely that someone who works at Amazon has compiled the answer.

In contrast, ChatGPT, the AI chatbot I used, is based on a Generative Pre-trained Transformer model, which can generate its own conversational output. It won’t name a favorite, but it did recommend the Belgian beer Westvleteren 12. ChatGPT is a prototype that the AI research company OpenAI released to the public last month. Along with other large language models being developed by Google, Facebook and others, this new generative AI is a complete game changer.

The large language model on which ChatGPT is based was trained on billions of words of written text from the internet. Based on that data, GPT can predict the most suitable next word in a string of text. This is not a new tactic, but the ‘Transformer’ architecture it uses also attempts to understand context by analyzing entire sentences and the relationships between them.
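To get an intuition for next-word prediction, here is a deliberately tiny sketch (my own illustration, not how GPT actually works): it just counts which word most often follows the current one in a toy corpus and picks the most frequent. Real language models learn a probability distribution over their entire vocabulary from billions of examples, and Transformers condition on far more than the single previous word.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The crucial limitation of this one-word-of-context approach is exactly what the Transformer addresses: by attending to whole sentences at once, it can tell that “bank” means something different after “river” than after “savings”.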

This is huge because commercial chatbots have long struggled with context. Take Apple’s voice assistant Siri, which made headlines years ago by offering to call a user “An Ambulance” when he said “Please call me an ambulance.” This is one of the reasons why we are so used to chatbots saying they don’t understand our query, or giving technically correct answers that aren’t useful.

When my husband asked ChatGPT to write me a marriage proposal in the style of a headline from the satirical publication The Onion, it returned “Heartless Robot Researcher Kate Darling to Marry Hopeless Human Suitor in Futile Attempt at Emotional Connection.” I think it’s safe to say that no one at OpenAI drafted that answer, and it’s amazing how well the tool understood the brief.

Another groundbreaking aspect of the Transformer architecture, which is also used in other new language models such as Google’s LaMDA, is that it significantly reduces the time needed to train a model. In short, today’s tech companies have access to massive amounts of training data and more computing power than ever, and they can build and train a language model with far less effort than before. Together, these advances usher in a new era of conversational AI.

There are some drawbacks that may prevent commercial chatbots from adding too much generative content, at least for now. ChatGPT can argue with you, compose poems and write a hilariously sarcastic email to your boss, but it will also confidently give fake answers, or write a rap about scientists being extremely sexist:

“If you see a woman in a lab coat,
She’s probably just there to clean the floor,
But if you see a man in a lab coat,
Then he probably has the knowledge and skills you are looking for.”

Obviously, the magic comes with risks. OpenAI did add some refinements to ChatGPT’s dialogue. For example, people have helped train the AI by giving it feedback on its conversational skills, and it also includes some prescribed responses and deflections. But it remains impossible to predict what the chatbot might say in any given situation, making it a liability hazard for many applications and raising a host of ethical issues.

As ChatGPT so eloquently wrote at the beginning, we have indeed come a long way in the field of artificial intelligence, and these advances may well mean “the limitless potential of these large language models to revolutionize the way we interact with technology.” But we must remain in dialogue with each other as we figure out what that future looks like.
