
Artificial Intelligence: what is it and how is it changing the world?

PRESENTED BY PaperCut

Welcome to the 21st century, where “artificial intelligence” is no longer exclusively a plot device for blockbuster movies and novels. It’s a daily reality.

You’d be forgiven for once thinking AI technology was purely a work of fantasy. But the link between entertainment and reality is more proof of how science fiction predicts the future.

Artificial intelligence as a concept stretches back further than Alan Turing’s Turing test in 1950, further back than Isaac Asimov’s “Three Laws of Robotics” in 1942, and even further back than Mary Shelley’s Frankenstein in 1818.

While AI might seem like a shiny new tool, don’t believe the hype. Artificial intelligence as a theoretical concept has been part of human history for centuries - its origins stretch as far back as the 1600s, before modern machinery even existed!

What is artificial intelligence?

Artificial intelligence (AI) is the ability of machines or software (or anything technological and not human or animal) to perform tasks that normally require human intelligence. 

This covers areas of perception and comprehension including reasoning, learning, decision-making, abstraction, logic, understanding, self-awareness, emotional intelligence, critical thinking, problem-solving, and creativity. 

There are multiple definitions of artificial intelligence, but computer science and artificial intelligence pioneer, John McCarthy, offered this definition in his 2004 paper, “What is Artificial Intelligence?”: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.” 

As a framework, McCarthy defines intelligence as the “computational part of the ability to achieve goals in the world.” He provided the caveat that any definition of intelligence is, for better or worse, tied to our understanding of human intelligence. 

In today’s society, AI is used in many areas of technology. Most commonly, AI tools enhance internet capabilities like internet search engines (Google Search, Bing), automatic content recommendation systems (Netflix, Amazon, Disney+, Spotify, Apple TV, and Apple Music), voice-generated virtual assistants (Siri, Google Assistant, Alexa) and, more recently, generative/creative tools (ChatGPT, Google Bard, DALL-E 2, Midjourney).

The four goals of AI

In their seminal textbook, Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig proposed four goals/definitions that categorize human and computer intelligence based on the distinction between thinking and acting:

Human approach:

  • Systems that think like humans
  • Systems that act like humans

Ideal approach:

  • Systems that think rationally
  • Systems that act rationally

These definitions, of course, allow for the fact that humans are capable of both rational and irrational thinking and acting.

Artificial intelligence is therefore computer science, machine learning, and deep learning smooshing together to engineer algorithms so that computer systems can display intelligent thought and/or intelligent action. 

The four types of AI

AI systems can be classified into two categories: weak/narrow AI and strong/general AI. All kinds of AI are categorized based on the level of intelligence displayed by the machine or the degree of human intervention required. 

Weak or Narrow AI refers to systems that are designed to perform specific tasks. These are the two types of AI that are a current reality.

  • Reactive AI: This type of AI can only react to the current situation, without any memory or learning from the past. Reactive AI systems can only perform a specified task within a set of parameters, e.g. the “computer” opponent in a game like chess, spam filters, or recommendation engines.
  • Limited memory AI: This type of AI can use some historical data or experience to improve its performance or behavior, e.g. self-driving cars or facial recognition systems. (A minimal code sketch contrasting these two follows this list.)
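To make the difference concrete, here’s a minimal Python sketch. Everything in it - the keyword list, the two-keyword rule, and the scoring - is invented purely for illustration: the reactive filter applies a fixed rule to each message and remembers nothing, while the limited-memory filter nudges its decisions using examples it has already seen.

```python
# Purely illustrative sketch: keywords, rules, and thresholds are hypothetical.

SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}

def reactive_filter(message: str) -> bool:
    """Reactive AI: reacts only to the current input and keeps no memory."""
    words = set(message.lower().split())
    return len(words & SPAM_KEYWORDS) >= 2  # fixed rule that never changes

class LimitedMemoryFilter:
    """Limited-memory AI: uses past, labelled examples to adjust its behaviour."""
    def __init__(self):
        self.spam_counts = {}  # word -> times seen in spam
        self.ham_counts = {}   # word -> times seen in legitimate mail

    def learn(self, message: str, is_spam: bool) -> None:
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in message.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def predict(self, message: str) -> bool:
        score = sum(
            self.spam_counts.get(word, 0) - self.ham_counts.get(word, 0)
            for word in message.lower().split()
        )
        return score > 0  # leans on accumulated history, not fixed rules

spam_filter = LimitedMemoryFilter()
spam_filter.learn("claim your free prize now", is_spam=True)
spam_filter.learn("meeting notes attached", is_spam=False)
print(reactive_filter("urgent: free prize inside"))  # True (matches 2 keywords)
print(spam_filter.predict("free prize"))             # True (learned from history)
```

Neither sketch is “intelligent” in the strong/general sense below; both simply do one narrow job, which is exactly what makes them weak/narrow AI.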

Strong or General AI is a theoretical form of AI that refers to systems with intelligence equal to that of humans and, most significantly, self-awareness. Within strong/general AI there sit two forms, one of which has two sub-forms.

  • Theory of mind AI: This type of AI can understand the mental states, emotions, beliefs, intentions, and goals of other agents and interact with them accordingly. For example, social robots or virtual assistants are theory-of-mind AI systems that can communicate with humans using natural language and gestures.
  • Self-aware AI: This type of AI can have a sense of self-consciousness, self-awareness, and self-improvement. Within self-aware AI there are two levels of intelligence:
    • Artificial General Intelligence (AGI) can perform any intellectual task that a human can do, such as understanding natural language, solving complex problems, or generating content. 
    • Artificial Super Intelligence (ASI) would not only achieve self-aware consciousness but its intelligence and other cognitive abilities would surpass those of the human brain.

Both AGI and ASI remain hypothetical. The arrival of ASI, a machine intelligence surpassing our own, was famously predicted by Ray Kurzweil in his book The Singularity Is Near.

Science is aiming for theory of mind AI and self-aware AI in the future. AGI and ASI are theoretical (for now, gulp) but they are often portrayed in science fiction. There is a long list of AGI and ASI depictions in film, literature, and television: J.A.R.V.I.S. or F.R.I.D.A.Y. in the Marvel Cinematic Universe, replicants in Blade Runner, Skynet in the Terminator franchise, Data in Star Trek: The Next Generation, C-3PO and R2-D2 in Star Wars, or, possibly the most famous, HAL 9000 in 2001: A Space Odyssey.

Artificial intelligence vs machine learning vs deep learning

What are deep learning and machine learning?

Machine learning is how artificial intelligence is achieved. It’s a sub-category of artificial intelligence. Deep learning is an evolution and sub-category of machine learning. 

Artificial intelligence is the computer science of machines learning to think and/or act like humans.

Machine learning enables computer systems to digest data, interpret it, and respond to or make decisions from it, without being directly programmed for each task.

Deep learning is an evolved form of machine learning where programmable algorithms form more than three layers to create an artificial neural network that can learn and make independent decisions.
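As a rough, hypothetical illustration of the relationship (using scikit-learn on a made-up dataset; the models, layer sizes, and numbers are ours, not anything cited in this article), the sketch below fits a classic machine learning model and a small multi-layer neural network to the same data. Both learn from examples rather than explicit rules; the second simply stacks enough layers to count as “deep.”

```python
# Illustrative sketch only: toy data, arbitrary model choices and layer sizes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic dataset: the systems learn the task from examples,
# not from hand-written rules.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Classic" machine learning: a simple linear model fitted to the data.
ml_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep learning: a neural network with several hidden layers
# (more than three layers in total, per the definition above).
dl_model = MLPClassifier(hidden_layer_sizes=(32, 32, 32),
                         max_iter=2000, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", ml_model.score(X_test, y_test))
print("deep network accuracy:", dl_model.score(X_test, y_test))
```

In neither case does anyone hand-code the decision rules; the difference is that the deep model learns its own intermediate representations of the data across its layers.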


History of artificial intelligence

The term “artificial intelligence” was coined by John McCarthy in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence conference at Dartmouth College. This event and the coining of the term are considered the birth of AI as a distinct field of study. 

But the question, “Can humans create intelligent non-human beings?” existed long before the conference. Many gifted theoreticians and brilliant minds ahead of their time have grappled with the concept of “artificial intelligence” for centuries.

Here is a condensed timeline of some key milestones in the history of AI, with some computer science key dates for context. Please note, these are handpicked highlights and NOT the full story. Check out this more in-depth list from Forbes for a more comprehensive snapshot.

BC - 1800s

  • 2nd century BC - The notion of a mechanical calculator can be traced back to the oldest known example of an analog computer, the Antikythera mechanism (discovered in 1901) - yep, the latest Indiana Jones movie didn’t just make that up!
  • 1637 - French philosopher-mathematician René Descartes’s work theorizing self-operating machinery is an early precursor of automata theory and machine learning.     
  • 1818 - Mary Wollstonecraft Shelley’s Frankenstein tells the tale of reanimated dead flesh achieving sentience, serving as an early example of darker, ethical questions around a concept that would later be referred to as “artificial intelligence.” 
  • 1822-1837 - Charles Babbage’s design for the Difference Engine - an automatic mechanical calculator - and his work with Ada Lovelace designing the Analytical Engine - a steam-powered programmable computer - are an early foundation for computer science. 

1930s - 1980s

  • 1939-1940 - Alan Turing’s Bombe, an electromechanical device, sees humanity knocking on the door of modern computing. The Bombe was first produced in 1939 for the UK Government Code and Cypher School at Bletchley Park and helped British cryptologists decipher messages encrypted by the German Enigma machine.
  • 1942 - In his short story “Runaround” (later collected in “I, Robot”), Isaac Asimov proposes the Three Laws of Robotics, which later became an influential, albeit fictional, text on the ethics of artificial intelligence.
  • 1943 - Back at Bletchley Park, Turing’s Banburismus cryptanalysis process (used during the deciphering of Enigma) lays some of the groundwork for Tommy Flowers’s Colossus, the first programmable, electronic, digital computer.
  • 1950 - In his research paper “Computing Machinery and Intelligence” Alan Turing proposes the Turing test aka “The Imitation Game” as a way of measuring machine intelligence by asking a simple question, “Can machines think?”  
  • 1951 - The first business computer, the LEO I, launches.
  • 1956 - At the Dartmouth Artificial Intelligence conference John McCarthy coins the term “artificial intelligence” and AI launches as a field of study.
  • 1959 - Computer scientist Arthur L. Samuel coins the term “machine learning” in a research paper on teaching computers to play checkers; in 1962 his program goes on to beat a human checkers player on an IBM 7094 computer.
  • 1964 - Computer designer Evelyn Berezin invents the first computerized word processor, the Data Secretary.
  • 1965 - Computer scientist Joseph Weizenbaum creates the first chatbot, ELIZA, for psychotherapy training.
  • 1968 - Arthur C. Clarke’s “2001: A Space Odyssey” features the “AI gone bad” HAL 9000.
  • 1986 - Ernst Dickmanns’s modified Mercedes van is the first vehicle to drive autonomously.

1990s - 2000s

  • 1991 - CERN researcher Tim Berners-Lee launches the first website and publishes HTTP, inventing the World Wide Web.
  • 1997 - IBM’s Deep Blue computer defeats world chess champion Garry Kasparov in a six-game rematch, a year after losing their first match.
  • 2006 - Google launches Google Translate.

2010s - now

  • 2011 - IBM’s Watson defeats two human champions in the quiz show Jeopardy!
  • 2014 - The Navia shuttle from Induct Technology is the first self-driving vehicle available for commercial sale. 
  • 2015 - The non-profit OpenAI is founded by a group of investors including Sam Altman, Peter Thiel, Reid Hoffman, and Elon Musk.
  • 2016 - Microsoft launches Tay, a chatbot that can interact with Twitter users. But the chatbot is shut down within 24 hours after it quickly learns to produce offensive and racist messages.
  • 2017 - Google’s AlphaZero learns to play chess, shogi, and Go.
  • 2018 - OpenAI releases GPT-1, the first model in its generative pre-trained transformer (GPT) series, with 117 million parameters.
  • 2018 - Google launches Duplex, a service that can make phone calls on behalf of users to book appointments or make reservations. 
  • 2019 - OpenAI launches GPT-2, now upgraded to 1.5 billion parameters.
  • 2020 - OpenAI launches GPT-3, trained with 175 billion parameters.
  • 2022 - OpenAI launches ChatGPT online, based on its GPT-3.5 model; 1 million users sign up in 5 days.
  • 2023 - Google launches their generative AI chatbot, Bard.

The generative AI boom: how AI technology is changing the world 

There is a lot of hype around AI at the moment. But it’s been a part of modern life since the 2000s and has been gaining wider implementation in various fields since the 2010s. Recent developments and breakthroughs have placed AI at the forefront of popular culture. The launch of OpenAI’s ChatGPT is a turning point for the progress of AI: with ChatGPT, powerful natural language processing is suddenly in everyone’s hands.

Generative AI is a form of deep learning where an AI system is given a prompt and then learns from an input of raw data to create statistically probable output. Deep learning has allowed generative AI to expand its capabilities to images, speech, and other complex forms of data.
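For intuition only, here’s a toy Python sketch of that idea: a tiny model “learns” from raw text which word tends to follow which, then extends a prompt with statistically probable continuations. Real generative AI systems use deep neural networks with billions of parameters; the corpus and method here are invented purely for illustration.

```python
# Toy sketch of the generative idea: learn which word tends to follow which
# from raw text, then extend a prompt with statistically probable words.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "the dog sat on the rug and the dog slept on the rug"
)

# "Training": for each word, record which words follow it and how often.
following = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt one word at a time by sampling likely next words."""
    out = prompt.split()
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:  # no data for this word, so stop early
            break
        out.append(random.choice(candidates))  # sample in proportion to counts
    return " ".join(out)

print(generate("the cat"))  # e.g. "the cat slept on the mat and the dog sat"
```

Swap the word-counting for a deep neural network and the toy corpus for a large slice of the internet, and you have the basic shape of a large language model.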

As impressive as generative AI tools are - and, boy, are they impressive - their quick progression hints at their future potential. At the moment, prompt writing for generative AI is something of an art and a science. As generative AI becomes more sophisticated, it will need less explicit, tailored prompting and will be able to generalize more on its own.

The future of AI is now

Artificial intelligence has been improving our lives for many years. It has changed the way we work and live. Voice recognition for virtual assistants like Siri and Alexa, and self-driving technology in cars are now common examples of tools that have simplified daily life. We truly are living in the future. Yet we’re just scraping the surface. There is greater potential for AI to assist further in realms of science and healthcare, like earlier detection or even prevention of cancer and other diseases.

However, we must proceed with caution and not dig too deep. For as long as we’ve been aware of the potential good of artificial intelligence, spanning all the way back to Frankenstein, we’ve also been aware of the risk of great harm. Generative AI in the entertainment industry, such as voice replication or deepfake video technology, poses obvious ethical and legal dilemmas. Paul McCartney using AI to isolate John Lennon’s voice from an old demo to finish “the final Beatles song” has sparked debate over intellectual property rights. Similarly, music producer Timbaland’s use of AI to recreate the voices of deceased hip-hop artists like The Notorious B.I.G. has raised ethical concerns over the use of AI in entertainment. Not only do these tools have the potential to breach copyright, they can also blur the lines between reality and fantasy.

That’s just in the entertainment sphere. If there’s capacity for harm in the arts, then we must step forward wisely when it comes to fields like security and defense. We already have concerns around data privacy in services like social media platforms and web browsers; coupling those already questionable practices with technology that can, at the very least, mimic human behavior gives real cause for concern about what it will even mean to call ourselves human.