Explainer: Not everything we call AI is actually ‘artificial intelligence’. Here’s why

AI has arrived in our lives. But what does Artificial Intelligence actually mean? What must a system exhibit to qualify as AI? An expert weighs in
In August 1955, a group of scientists made a funding request of USD 13,500 to host a summer workshop at Dartmouth College, New Hampshire. The field they proposed to explore was artificial intelligence (AI).
While the funding request was humble, the researchers’ hunch was not: “every aspect of learning or any other feature of intelligence can in principle be described so precisely that a machine could be made to simulate it”.
Since these humble beginnings, movies and media have romanticized AI or cast it as a villain. Yet for most people, AI remained a point of discussion and not part of a conscious lived experience.
AI has arrived in our lives
Late last month, AI, in the form of ChatGPT, broke free from sci-fi speculation and research labs and onto the computers and phones of the general public. ChatGPT is an example of what is known as "generative AI" – suddenly, a cleverly worded prompt can produce an essay, compile a recipe and shopping list, or create a poem in the style of Elvis Presley.
While ChatGPT was the most dramatic entrant in a year of generative AI success, similar systems have shown even broader potential to create new content, with text-to-image prompts used to create vibrant images that have even won art competitions.
AI may not yet have the living consciousness or theory of mind popularized in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do.
Some researchers working closely with these systems have even been swayed by the prospect of sentience, as in the case of Google's large language model (LLM) LaMDA. An LLM is a model trained to process and generate natural language.
Generative AI has also raised concerns about plagiarism, exploitation of original content used to create models, ethics of information manipulation and abuse of trust, and even “the end of programming”.
At the center of it all is the question that has grown in urgency since the Dartmouth summer workshop: Is AI different from human intelligence?
What does ‘AI’ actually mean?
To qualify as AI, a system must exhibit some degree of learning and adaptation. For this reason, decision-making systems, automation and statistics are not AI.
AI is broadly divided into two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.
The key challenge in creating general AI is to adequately model the world, with all of its knowledge, in a consistent and useful way. It's a massive undertaking, to say the least.
Most of what we know as AI today has narrow intelligence – a specific system addressing a specific problem. Unlike human intelligence, such narrow AI is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example.
AGI, however, would function as humans do. For now, the most notable attempt to achieve this is the use of neural networks and "deep learning" trained on large amounts of data.
Neural networks are inspired by the way human brains work. Unlike most machine learning models that perform calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, adjusting the parameters each time.
As more and more data is fed through the network, the parameters stabilize; the final outcome is the “trained” neural network, which can then produce the desired output on new data – for example, recognizing whether an image contains a cat or a dog.
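As a toy illustration of that training loop – not from the article, and far smaller than any real system – the Python sketch below feeds 200 labelled points one at a time through a single artificial "neuron", nudging its two weights and bias after each point until they stabilise:

```python
import numpy as np

# A minimal sketch of neural-network-style training: one artificial
# "neuron" whose parameters are adjusted after every data point.
rng = np.random.default_rng(0)

# Toy data: points above the line y = x are labelled 1, below are 0.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

w = np.zeros(2)   # weights (the "parameters")
b = 0.0           # bias
lr = 0.1          # learning rate: how big each adjustment is

for epoch in range(20):
    for x_i, y_i in zip(X, y):                     # one point at a time
        pred = 1 / (1 + np.exp(-(x_i @ w + b)))    # sigmoid activation
        error = pred - y_i
        w -= lr * error * x_i                      # adjust parameters slightly
        b -= lr * error

# After enough passes the parameters settle, and the trained neuron can
# classify new points it has never seen.
print(w, b)
```

Real networks do exactly this at vastly greater scale: billions of parameters instead of three, and many layers of interconnected neurons instead of one.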
The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, adjusting large numbers of parameters in each run thanks to the capabilities of large cloud computing infrastructures. For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network with 175 billion parameters.
What does AI need to work?
AI needs three things to be successful.
First, it needs unbiased, high-quality data, and lots of it. Researchers building neural networks use the large data sets that have emerged as society has digitized.
GitHub Copilot, a tool that augments human programmers, pulls its data from billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.
Text-to-image tools, such as Stable Diffusion, DALL-E 2 and Midjourney, use image-text pairs from datasets such as LAION-5B. AI models will continue to evolve in sophistication and impact as we digitize more of our lives and provide them with alternative data sources, such as simulated data or data from game settings like Minecraft.
AI also needs computing infrastructure for effective training. As computers become more powerful, models that currently require intensive training efforts and large-scale computing may, in the near future, be handled locally. Stable Diffusion, for example, can already be run on local computers rather than in cloud environments.
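As a hedged illustration of that local use: assuming the open-source Hugging Face diffusers library (not mentioned in the article) and a machine with a GPU, a few lines of Python are enough to generate images on your own hardware once the model weights are downloaded:

```python
# A minimal sketch, assuming the Hugging Face "diffusers" library and a
# CUDA-capable GPU; on CPU, omit torch_dtype and use pipe.to("cpu").
import torch
from diffusers import StableDiffusionPipeline

# Download the weights once, then everything runs locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```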
The third need for AI is improved models and algorithms. Data-driven systems continue to make rapid advances in area after area once thought to be the exclusive territory of human cognition.
As the world around us is constantly changing, AI systems must be constantly retrained using new data. Without this important step, AI systems will produce answers that are factually incorrect, or fail to take into account new information that has emerged since they were trained.
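A minimal sketch of what such retraining can look like – using scikit-learn's incremental SGDClassifier on made-up data, purely for illustration and not from the article:

```python
# A minimal sketch of keeping a model current: SGDClassifier supports
# incremental updates, so it can absorb new labelled data as it arrives
# instead of going stale. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

# Initial training on last month's data.
X_old, y_old = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)
model.partial_fit(X_old, y_old, classes=[0, 1])

# Later: the world has changed, so fold in the new examples too.
X_new, y_new = rng.normal(size=(50, 4)), rng.integers(0, 2, 50)
model.partial_fit(X_new, y_new)
```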
Neural networks are not the only approach to AI. Another prominent camp in artificial intelligence research is symbolic AI – instead of digesting huge data sets, it relies on rules and knowledge, similar to the human process of forming internal symbolic representations of particular phenomena.
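To make the contrast concrete, here is a small, illustrative sketch of the symbolic style – explicit facts and an if-then rule, with no training data at all:

```python
# A minimal sketch of symbolic AI: knowledge is stored as explicit facts
# and rules, and new conclusions are derived by applying the rules.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

# Rule: "is_a" is transitive -- if X is a Y and Y is a Z, then X is a Z.
def forward_chain(facts):
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                    facts.add((a, "is_a", d))
                    changed = True
    return facts

print(forward_chain(set(facts)))  # derives ("cat", "is_a", "animal")
```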
But the balance of power has tipped heavily toward data-driven approaches over the past decade, with the "founding fathers" of modern deep learning recently receiving the Turing Award, the equivalent of the Nobel Prize in computer science.
Data, computation and algorithms form the foundation of the future of AI. All indicators are that rapid progress will be made in all three categories in the foreseeable future.
(The Conversation. By George Siemens, University of South Australia)