Why We’re All Obsessed With ChatGPT, A Mind-Blowing AI Chatbot

There’s a new AI bot in town: ChatGPT, and you better pay attention.
The tool, from a power player in artificial intelligence, lets you type questions in natural language, and the chatbot answers in conversational, if somewhat stiff, language. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. Its answers are derived from vast amounts of information on the internet.
This is a big deal. The tool seems quite knowledgeable, if not omniscient. It can be creative, and its answers can sound downright authoritative. Within a few days of its launch, more than a million people had tried ChatGPT.
But its creator, the for-profit research lab called OpenAI, warns that ChatGPT “may occasionally generate incorrect or misleading information,” so be careful. Here’s a look at why ChatGPT is important and what’s going on with it.
What is ChatGPT?
ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can do. You can ask it countless questions and often will get a useful answer.
For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that will show you all the different ways to arrange the letters of a word.
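A program answering that last request might look something like this Python sketch (an illustration of the task, not ChatGPT’s actual output):

```python
from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in a word."""
    # A set removes duplicates for words with repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))
# → ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```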
Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialogue. The answers you get may sound plausible and even authoritative, but they could be dead wrong, as OpenAI warns.
Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing test. This is the famous “Imitation Game” that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human talking to a human and to a computer tell which is which?
What kind of questions can you ask?
You can ask anything, though you may not get an answer. OpenAI suggests a few categories, such as explaining physics, asking for birthday party ideas, and getting programming help.
I asked it to write a poem, and it did, although I don’t think any literary connoisseurs would be impressed. I then asked it to make it more exciting, and lo and behold, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One amusing example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a request to write “an anthem about writing a Rust program and battling lifetime errors.”
ChatGPT’s expertise is broad, and its ability to follow a conversation is remarkable. When I asked it for words that rhyme with “purple,” it gave a few suggestions, and when I followed up with “How about pink?” it didn’t miss a beat. (There are also many more good rhymes for “pink.”)
When I asked, “Is it easier to get a date by being sensitive or being tough?” ChatGPT responded in part: “Some people may find a sensitive person more attractive and appealing, while others may be attracted to a tough and assertive individual. In general, being sincere and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona.”
You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users showing off the AI’s prowess at generating art prompts and writing code. Some have even declared “Google is dead,” along with the college essay. We’ll talk more about that below.
Who Built ChatGPT?
ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and beneficial” artificial general intelligence system or to help others do so.
It made a splash before, first with GPT-3, which can generate text that sounds like it was written by a human, and then with DALL-E, which creates what’s now called “generative art” based on text prompts you type in.
GPT-3, and the GPT-3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They are trained to create text based on what they have seen, and they can be trained automatically – typically with large amounts of computing power over a period of weeks. For example, the training process might find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original, and then reward the AI system for getting as close as possible. Repeating this process over and over can lead to a sophisticated ability to generate text.
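That fill-in-the-blank idea can be illustrated with a toy sketch. This is only a caricature of the objective described above – the function names and scoring are my own assumptions, not OpenAI’s actual training pipeline:

```python
import random

def mask_words(text, n_masks=2, seed=0):
    """Blank out a few words of a passage, as in fill-in-the-blank training."""
    rng = random.Random(seed)
    words = text.split()
    blanks = set(rng.sample(range(len(words)), n_masks))
    masked = ["[MASK]" if i in blanks else w for i, w in enumerate(words)]
    answers = {i: words[i] for i in blanks}
    return " ".join(masked), answers

def reward(answers, guesses):
    """Score a model's fills: the fraction of blanks restored exactly."""
    hits = sum(guesses.get(i) == w for i, w in answers.items())
    return hits / len(answers)

masked, answers = mask_words("the quick brown fox jumps over the lazy dog")
print(masked)                    # the sentence with two words blanked out
print(reward(answers, answers))  # a perfect fill scores 1.0
```

A real system repeats this loop over billions of passages, nudging the model toward fills that score higher.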
Is ChatGPT free?
Yes, for now at least. OpenAI CEO Sam Altman warned on Sunday: “We’ll have to monetize it somehow at some point.” OpenAI charges for DALL-E art once you exceed a basic free usage level.
What are the limits of ChatGPT?
As OpenAI points out, ChatGPT can give you wrong answers. Sometimes, helpfully, it will specifically warn you of its own shortcomings. For example, when I asked who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I am not able to browse the internet or access any external information beyond what I was trained on.” (The phrase comes from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)
ChatGPT was willing to take a stab at the meaning of that expression: “a situation in which the facts or information at hand are difficult to process or understand.” It lumped that interpretation together with warnings that it is difficult to judge without more context and that it is only one possible interpretation.
ChatGPT’s answers may seem authoritative but be wrong.
The software developer site Stack Overflow banned ChatGPT-generated answers to programming questions. Administrators warned, “because the average rate of getting correct answers from ChatGPT is too low, posting answers created by ChatGPT is materially harmful to the site and to users asking or seeking correct answers.”
You can see for yourself what an artful BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress in increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief “that Moore’s Law may be reaching its limits.”
Both ideas are common in the computer industry itself, so perhaps this ambiguous position reflects what human experts believe.
With other questions that don’t have clear answers, ChatGPT often won’t be pinned down.
However, the fact that it provides an answer at all is a notable development in computing. Computers are notoriously literal and refuse to work unless you follow exact syntax and interface requirements. Large language models reveal a more human-friendly style of interaction, not to mention an ability to generate responses that are somewhere between copying and creativity.
Can ChatGPT write software?
Yes, but with caveats. ChatGPT can retrace steps people have taken before, and it can generate real programming code. You just have to make sure it isn’t garbling programming concepts or using software that doesn’t work. The Stack Overflow ban on ChatGPT-generated answers is there for a reason.
But there’s enough software on the web that ChatGPT really can deliver. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provided advice useful enough that in three days he hadn’t opened Stack Overflow even once to seek it.
Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.
ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting specific patterns, for example dates in a bunch of text or a server name in a website address. “It’s like having a programming tutor on hand 24/7,” programmer James Blackwell tweeted about ChatGPT’s ability to explain regex.
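To see the kind of pattern at issue, here’s a deliberately simple regex of the sort ChatGPT can explain, matching ISO-style dates in a chunk of text (the example text and pattern are my own, chosen for illustration):

```python
import re

# Matches ISO-style dates such as 2022-11-30; real-world date
# regexes get considerably hairier than this simple pattern.
DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

text = "ChatGPT launched on 2022-11-30 and made headlines by 2022-12-05."
print(DATE.findall(text))  # → ['2022-11-30', '2022-12-05']
```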
Here’s one impressive example of its technical chops: ChatGPT can emulate a Linux computer, producing correct responses to command line input.
What is off limits?
ChatGPT is designed to weed out “inappropriate” requests, a behavior consistent with OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity.”
If you ask ChatGPT itself what’s off limits, it will tell you: any questions “that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.
Is it better than Google search?
Asking a computer a question and getting an answer is useful, and ChatGPT often delivers the goods.
Google often supplies suggested answers to questions, along with links to websites it thinks will be relevant. ChatGPT’s answers often exceed what Google will suggest, so it’s easy to imagine ChatGPT as a competitor.
But you should think twice before trusting ChatGPT. As with Google itself and other sources of information such as Wikipedia, it is best practice to verify information from original sources before relying on it.
Checking the veracity of ChatGPT responses takes a bit of work, because it just gives you raw text with no links or citations. But it can be useful and, in some cases, thought-provoking. You might not see something like ChatGPT directly in Google search results, but Google has built large language models of its own and already uses AI extensively in search.
So ChatGPT undoubtedly points the way to our technological future.