
History of Artificial Intelligence – A Brief History of AI


By Abhinav Girdhar | Last Updated on August 3rd, 2024 4:43 pm | 5-min read

Artificial Intelligence. Machine Learning. Deep Learning.

There you have it – the buzzwords of the 21st century.

Essentially, AI is the larger set here, and all other related terms are its subsets. For instance, machine learning is a way to achieve artificial intelligence. Deep learning is a way to achieve machine learning.

The purpose of this guide is not to explore the nuances of these terms. Instead, it’s about telling you the complete history of AI.

PSA: The concept of AI is much older than you presume.

A Journey of A Hundred Years

Yes, artificial intelligence can be traced back a hundred years from today.

The Complete History of Artificial Intelligence

To enable you to understand the history of AI, this guide has been structured in the following sections:

  • Artificial Intelligence: A Notion (pre-1950s)
  • Artificial Intelligence: The Realm of Reality (1950 – 1960)
  • The First Summer of AI (1956 – 1973)
  • The First Winter of AI (1974 – 1980)
  • The Second Summer of AI (1981 – 1987)
  • The Second Winter of AI (1987 – 1993)
  • The Status Quo (1993 – 2011)
  • The Renaissance (2011 Onwards)
  • AI – Today and Beyond

You’ll also find vital information about the milestone events of each phase. This is to highlight the impact of specific events on the progress of AI.

Artificial Intelligence: A Notion (pre-1950s)

The idea of ‘intelligence’ outside the human body has stirred the human imagination right from the beginning of the 20th century.

Why, you ask?

Courtesy of playwrights and film directors. Yes, a sci-fi play called "Rossum's Universal Robots", written by Czech playwright Karel Čapek, made waves in 1921. The plot involved factory-made ‘people’ (called robots) who could think and act.

From then until 1950, a great deal of storytelling in film, theatre, and literature embraced the idea of artificial intelligence.

Milestone Events of the Era

1929: The first Japanese robot, Gakutensoku, was created by professor and biologist Makoto Nishimura. Gakutensoku means “learning from the laws of nature.”

1939: John Vincent Atanasoff and Clifford Berry created the Atanasoff-Berry Computer (ABC), a 700-pound machine that could solve up to 29 simultaneous linear equations.

1949: Edmund Berkeley mentioned the memorable phrase “a machine, therefore, can think” in his book “Giant Brains: Or Machines That Think”.

Artificial Intelligence: The Realm of Reality (1950 – 1960)

The 1950s proved to be the initial years of success for research efforts in the domain of ‘thinking computers’. 

The result – artificial intelligence stepped out of the realm of pure thought and became a genuine field of research.

The Turing Test

Renowned British mathematician Alan Turing designed the Bombe machine during the Second World War. What did the machine do? It helped decipher messages encrypted by the ‘Enigma’ machines that German forces were using to communicate among themselves.

This wartime work on mechanized computation laid some of the groundwork for Turing’s later thinking about machine intelligence. In 1950, Turing published “Computing Machinery and Intelligence,” which introduced the “Imitation Game” – an inquiry into whether machines can think.

This proposal became what is now called the Turing test: an evaluation of whether a machine can be considered ‘intelligent.’ The Turing test remains a popular litmus test for algorithms and machines – a way of asking whether a machine’s conversational behavior is indistinguishable from a human’s.

Defining The Term – Artificial Intelligence

In 1955, the researcher John McCarthy proposed a workshop on ‘artificial intelligence’. When the workshop eventually took place at Dartmouth College in 1956, the term drew attention and soon became the buzzword of the science and research community of the time.

This event brought several existing branches of research together under one umbrella term. Cybernetics, information processing, automata theory – everything was now AI.

John McCarthy later went on to co-found the AI lab at MIT and to found the Stanford AI Lab.

The term ‘machine learning’ was coined in 1959 by Arthur Samuel, who had spent years developing a computer program that learned to play checkers well enough to beat capable human players.

The First Summer of AI (1956 – 1973)

Throughout the 1960s and up to 1973, rapid advances were made in the sphere of AI.

AI’s achievements

What used to be the plot of sci-fi movies was now well and truly a flourishing body of scientific research, with industrial applications.

In this era, AI programs achieved what had seemed impossible only a few years earlier. The achievements included:

  • Solving complex algebraic problems
  • Proving mathematical theorems
  • Learning languages
  • And more

The demonstrable success of AI had a huge impact. The leading research organizations of the time – MIT, Stanford, University of Edinburgh, Carnegie Mellon – all received grants of millions of dollars.

Milestone Events of the Era

1958: McCarthy developed Lisp, which went on to become the favored programming language of AI research for decades.

1961: Unimate, a robot, was deployed on a General Motors assembly line. It was tasked with responsibilities such as welding parts to the car body and transporting die castings.

1964: Computer scientist Daniel Bobrow created an AI program called STUDENT, which could understand and process natural language well enough to solve algebra word problems.

1965: ELIZA, an interactive chat-based computer program, created an unprecedented buzz – which baffled its creator, Joseph Weizenbaum.

The vision behind the program was to showcase the ‘superficiality’ of computer-human interactions. Users, however, attributed human-like qualities to ELIZA.

This gave birth to the ‘ELIZA effect’ – the human tendency to assume that computer behavior is analogous to human behavior.
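To see just how superficial those interactions were, here is a minimal sketch of ELIZA-style pattern matching, written in Python purely for illustration. It is not Weizenbaum’s original program (which used a much larger script of decomposition and reassembly rules), and the rules and phrasings below are invented:

```python
# Toy ELIZA-style responder: no understanding, just pattern matching that
# reflects the user's own words back as a question.
import re

RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # generic fallback when nothing matches

print(reply("I need a holiday"))    # -> Why do you need a holiday?
print(reply("I am feeling tired"))  # -> How long have you been feeling tired?
```

Shallow as it is, output like this was enough for many users to treat the program as a sympathetic listener – which is precisely the ELIZA effect.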

1966: Shakey the Robot, also called the ‘first electronic person,’ furthered the hype and buzz around AI.

1968: Stanley Kubrick’s ‘2001: A Space Odyssey’ cemented AI’s place in pop culture. The film’s plot showcased HAL (a ‘Heuristically programmed ALgorithmic computer’) calling the shots in a spacecraft – until a malfunction turns HAL’s behavior decidedly sinister. The film is also one of the earliest mainstream portrayals of AI as an existential threat to humans.

1970: WABOT-1, the first full-scale anthropomorphic robot, was built in Japan at Waseda University. Its features included movable limbs and the ability to see and converse.

The First Winter of AI (1974 – 1980)

By this time, governments and businesses were growing disappointed with the results.

This was easy to understand. The successful demonstrations of AI in action had been merely at a ‘toy level,’ with almost no real-world promise. Consider how the natural language processing systems of the 1960s did not have enough computing power to work with more than 20 words of the English language!

The result – by 1974, key funding organizations refused to invest more resources into AI projects.

Milestone Events of the Era

1973: Professor Sir James Lighthill delivered the ‘Lighthill report’ to the British Science Research Council. The report lambasted AI for failing to achieve its promised objectives. The result – AI research funding in Britain was slashed, and work continued at only a handful of universities.

1969-1974: Defense Advanced Research Projects Agency (DARPA) took massive steps back from its erstwhile ‘no questions asked’ policy of funding ambitious AI projects. This was after the passage of the Mansfield Amendment in 1969. The organization zeroed in on only a few AI projects with real promise in the near future. It stopped funding most other AI projects.

1974: DARPA canceled its $3 million-a-year grant to Carnegie Mellon University, citing how it felt duped by the failure of the Speech Understanding Research (SUR) program. The researchers had indeed built a speech-understanding system – but it worked only when the speaker uttered words in a particular order!

The Second Summer of AI (1981 – 1987)

The first AI winter seemed to stretch into a long period of hibernation for AI. However, several events collectively shook the industry out of its slumber. This marked what’s popularly called the second summer of AI.

Milestone Events of the Era

1980: XCON, an expert-system program written by Professor John McDermott of Carnegie Mellon, proved the business value of the technology. It eventually saved Digital Equipment Corporation (DEC) an estimated 25 million dollars each year.

This massive real-world success was instrumental in bringing back a sense of optimism. XCON worked by asking salespeople a series of questions about an order and then checking the resulting configuration, mitigating the risk of shipping wrong spare parts and cables. A toy sketch of this style of rule-based checking follows below.
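Here is that toy sketch, in Python, of rule-based order checking. It is not XCON itself – the real system encoded thousands of rules about DEC’s VAX configurations – and the part names and rules below are invented for illustration:

```python
# Toy rule-based order checker: each "rule" inspects the order and reports
# a fix, loosely mimicking how an expert system validates a configuration.
# All part names and rules are invented for illustration.
def check_order(order: set) -> list:
    issues = []
    # Rule 1: a CPU needs a power supply.
    if "cpu" in order and "power_supply" not in order:
        issues.append("Add a power supply for the CPU.")
    # Rule 2: a disk drive needs a controller and a cable.
    if "disk_drive" in order:
        if "disk_controller" not in order:
            issues.append("Add a disk controller for the disk drive.")
        if "disk_cable" not in order:
            issues.append("Add a cable to connect the disk drive.")
    return issues

print(check_order({"cpu", "disk_drive", "disk_controller"}))
# -> ['Add a power supply for the CPU.', 'Add a cable to connect the disk drive.']
```

The real value lay in scale: encoding an experienced engineer’s judgment as thousands of such checks.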

1981: The Japanese government committed hundreds of millions of dollars to its Fifth Generation Computer project, aimed at making rapid leaps in AI.

This was a time when the Americans and the British were watching Japan take over the automotive and consumer-electronics industries. They wanted to ensure that the computing industry didn’t follow suit.

So, both governments responded by pumping in millions to revive stagnant AI projects and initiate several new ones.

1982: Significant progress in neural networks aided the comeback and growth of AI. First, John Hopfield showed how a particular kind of neural network could ‘learn’ and retrieve stored patterns. Then, a few years later, David Rumelhart, Geoffrey Hinton, and Ronald Williams popularized backpropagation, a method for training neural networks. These events are credited with reviving the field of connectionism.
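Backpropagation remains the workhorse of neural-network training today. As a rough illustration of the idea – run the inputs forward, measure the error, then push the error gradient backwards to adjust the weights – here is a minimal sketch in Python/NumPy. The tiny network, the XOR task, the learning rate, and the epoch count are arbitrary choices for demonstration, not anything taken from the original papers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Weights and biases for a 2-4-1 network (sizes chosen arbitrarily)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the error signal from output to hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # outputs should move towards [[0], [1], [1], [0]]
```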

The Second Winter of AI (1987 – 1993)

In a swift turn of events, the second summer of AI was cut short and gave way to the second winter of AI. The key reasons:

  • Expert systems proved too expensive to maintain
  • Governments pulled back their investments because of fear of lack of returns
  • Prominent AI researchers warned the business community that the enthusiasm around AI was out of control and would soon end in disappointment

Milestone Events of the Era

1984: John McCarthy criticized expert systems because of their lack of common sense and their inability to understand their own limitations.

1987: General-purpose computers from Apple and IBM were now solving more real-world problems, and doing so far more cheaply than the specialized, super-expensive hardware built for AI systems.

Late 1980s: DARPA, under its Strategic Computing Initiative, cut AI funding because it no longer trusted the technology to deliver results. And by 1991, Japan’s Fifth Generation Computer project had run for ten years and spent $400 million without meeting any of its original goals.

The Status Quo (1993 – 2011)

It’s a bit difficult to tag this 18-year period as either a summer or winter for AI.

AI did not attract outsized investment of the kind it had received in earlier decades. That said, AI quietly permeated several consumer-facing technologies and industrial applications – and, more importantly, it did so without creating any exaggerated hype.

Milestone Events of the Era

1995: Richard Wallace developed A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a chatbot inspired by Weizenbaum’s ELIZA.

1997: IBM’s Deep Blue, a chess-playing computer, defeated Garry Kasparov, the reigning world chess champion.

1999: Sony introduced AIBO (Artificial Intelligence Robot), an AI-powered robotic pet that could respond to more than a hundred voice commands.

2004: NASA’s robotic exploration rovers Spirit and Opportunity navigated the Martian surface without direct human intervention.

2007: Fei-Fei Li and her colleagues began assembling ImageNet, a database of annotated images that became a significant milestone in image-recognition research.

2010: Microsoft launched Kinect for Xbox 360, with the capability to track human body movements.

The Renaissance (2011 Onwards)

By 2011, things started falling into place for AI to take off in a big way. The key changes in the global computing world that fueled the renaissance of AI were:

  • Big data
  • Faster, cheaper computing power
  • Advances in machine learning


By 2016, the global AI market was estimated at $8 billion. 

Milestone Events of the Era

2011: Apple released Siri, its virtual assistant, on iOS. This brought AI closer to end consumers than anything before it.

2012: Google researchers trained a neural network running on 16,000 processor cores to identify images of cats. This was accomplished without feeding the network any labeled data about what a cat looks like!

2014: Microsoft’s Cortana and Amazon’s Alexa, much like Siri, brought the power of AI to individual end users.

2017: Google DeepMind’s AlphaGo program defeated Ke Jie, the world’s top-ranked Go player, a year after beating the champion Lee Sedol.

2017: Facebook’s Artificial Intelligence Research lab trained two chatbots to negotiate with each other in English. The bots eventually drifted into a shorthand of their own, which was widely reported as the chatbots ‘inventing their own language.’

2019: Global corporations, SMBs, and startups all came to regard AI as one of the biggest enablers of success. A Gartner report predicted that the business value created by AI would reach $3.9 trillion by 2022.

AI – Today and Beyond

We are living through what may well remain some of humanity’s biggest technological achievements. The global narrative has now shifted to:

  • How to ensure AI doesn’t get out of hand
  • Preventing the misuse of its endless potential
  • Controlling human dependence on AI
  • Debunking the idea of AI as an existential threat to the human race
  • Extending the impact of AI to quality of life, governance, and health

The pace of innovation in the world of AI is frantic. While nobody knows what the next big thing is going to be, everybody anticipates that it’s just around the corner.

We hope this quick crash course in AI history helps you keep things in perspective and truly appreciate the hundred-year history of this modern marvel.

Artificial Intelligence is one of the biggest trends to watch. To learn about other ongoing trends, you can go through this post – 25 Emerging Mobile App Development Trends.
