image credit: Unsplash

A Brief History of Artificial Intelligence: An Idea Almost As Old As Human Civilization

May 4, 2023


The United Nations (UN) General Assembly adopted the Universal Declaration of Human Rights in 1948 by a vote of 48 of its 58 member states. In the same decade, the foundations of Artificial Intelligence were being laid, with a multitude of perspectives, approaches, and paradigms shaping research in the field.

Today, Artificial Intelligence (AI) has reached a milestone: it has grown beyond the laboratory. From government and private-sector decision-making to tackling global problems such as climate change and world hunger, we recognize the remarkable contributions of AI, as well as the unprecedented challenges it brings.

Artificial Intelligence: An Old Idea That’s Always New

The idea of artificial intelligence (AI) has fascinated mankind for millennia and is almost as old as the history of civilization, from mechanical humans in ancient Greek and Egyptian mythology to 20th-century science fiction literature.

In “I, Robot”, a collection of stories originally published in American magazines between 1940 and 1950, Isaac Asimov explores the interplay between humans, robots, and morality. In the same decade, a generation of scientists, mathematicians, and philosophers was already working toward a formal concept of AI, an idea that had by then been culturally assimilated.

Alan Turing, a British mathematician and computer scientist who formalized the concept of the Turing machine in 1936 and is best known for his codebreaking work during World War II, explored the mathematical possibility of AI. He suggested that machines could use available information and reason to solve problems and make decisions, just as humans do. This was the framework for his 1950 paper, “Computing Machinery and Intelligence”, in which he described intelligent machines and proposed a way to test their intelligence.

In 1943, Warren McCulloch, an American neurophysiologist and cyberneticist, and Walter Pitts, a logician and self-taught cognitive psychologist, described the “McCulloch-Pitts artificial neuron” as the first mathematical model of a neural network.
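
To give a sense of what that model looks like, here is a minimal sketch of a threshold neuron in the McCulloch-Pitts spirit: it fires (outputs 1) when the sum of its binary inputs reaches a fixed threshold, which is already enough to mimic simple logic gates. The function name and threshold values below are illustrative, not the authors’ original notation.

```python
# Minimal sketch of a McCulloch-Pitts-style threshold neuron (illustrative only).
def threshold_neuron(inputs, threshold):
    """Fire (return 1) if the sum of binary inputs reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With binary inputs, a threshold of 2 behaves like an AND gate,
# and a threshold of 1 behaves like an OR gate.
print(threshold_neuron([1, 1], threshold=2))  # 1 -> AND fires
print(threshold_neuron([1, 0], threshold=2))  # 0 -> AND does not fire
print(threshold_neuron([1, 0], threshold=1))  # 1 -> OR fires
```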

Additional landmark years for the birth of AI include 1955, when Allen Newell and Herbert A. Simon created the Logic Theorist, widely regarded as the first AI program. The system proved 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica, finding new and more elegant proofs for some of them. The following year, 1956, the phrase “artificial intelligence” was coined by the American computer scientist John McCarthy at the Dartmouth Conference, where AI first emerged as an academic field.

Between 1956 and 1974, AI gained momentum, and researchers focused on developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, ELIZA, in 1966, and in 1972 the first intelligent humanoid robot, WABOT-1, was built in Japan.

In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University. It was followed by several years in which investors and governments cut funding for AI research because costs were high and results were deemed disappointing. A turning point came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion.

When the 21st century arrived, AI entered the home for the first time in the shape of the Roomba vacuum cleaner. AI-driven technology also led to new business models at companies such as Meta (Facebook), Twitter, and Netflix. The field has grown exponentially over the past decade, as concepts such as Deep Learning (DL) and Big Data became household terms.

Challenges and Ethical Principles

As human imagination grasped the potential of the technology, the first fears arose, leading to discussions about ethics in literature, academia, and society. In 1942, Isaac Asimov set out his Three Laws of Robotics in his fiction:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Despite their simplicity, these rules demonstrate a concern for the ethics of the human-robot relationship before the technology even existed.

Today, AI has reached a tipping point where it is no longer limited to science fiction literature or laboratory research. Knowingly or unknowingly, we all use products or services that integrate this technology. In everyday conversation, AI is cast as everything from a possible doomsday technology to one that could help eradicate currently incurable diseases.

Why Are We Talking So Much About AI Today?

The wheel and the printing press are technologies with specific uses, designed to perform predefined tasks. But in developing AI, we have invented an inventor, one with the potential to surpass humans in many ways.

While AI continues to challenge society, the technology is being adapted and implemented in innovative ways. This transformation is driven primarily by the leaders of top universities, technology companies, and technologists in general, and some of their public statements have helped people understand the impact of AI and the benefits of using it.

As with all technological advances, innovation tends to outpace government regulation in new, emerging areas. As the sector matures, however, we can expect more AI protocols for companies to follow, helping them avoid infringing on human rights and civil liberties.

In 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) unanimously adopted the “Recommendation on the Ethics of Artificial Intelligence”, the first-ever global agreement on the ethics of AI. While acknowledging that AI systems involve risks, experts are convinced that AI has the potential to transform society significantly.