Commentary


Does the 400-Year History of AI Predict Its Future?

Explore the dramatic evolution of AI, from early concepts to modern chatbots, and the uncertain future they create for humanity.


There are 3 competing views on whether the past is a good predictor of the future:

"Those who do not learn from history are doomed to repeat it." George Santayana was right: the past holds the best lessons for understanding the present and planning for the future.

"The only thing we learn from history is that we learn nothing from history." Georg Hegel was also right: the lessons of history are often obscure. And worse, people are usually too arrogant and/or ignorant to attend to even the clearest of historical stop signs.

"History doesn't repeat itself, but it sure does rhyme." Mark Twain was also right: every event is the complex emergent result of multiple interacting contributory variables. Nothing is ever precisely predictable, but history does riff on a few familiar themes.

So, how helpful is the 400-year history of artificial intelligence (AI) in predicting what its future will be? Sadly, not very. AI's power and reach have expanded more in the past 4 years than in the previous 400—and it is beginning to behave in ways unintended by its programmers. AI has become less an "artificial" form of human intelligence than a new form of "alien" intelligence: rapidly evolving, far beyond our understanding, and only partly under our control. The recent giant leap in computer prowess may actually be a tipping point in human history, making our human past an untrustworthy prologue to AI's uncertain future.

Even though AI's history is not a reliable predictor, it does make for great drama. We will encounter the best in human brilliance and the worst in human folly—sometimes in the same person. There is the existential battle of man vs machine, inventor vs invention, carbon-based intelligence vs silicon-based intelligence. And the most crucial question of all: are we committing species suicide by creating chatbots that can become so much smarter than we are?

Join me along the fascinating and intricate 400-year history of artificial intelligence:

1632: When Descartes was not busy revolutionizing mathematics and philosophy, he relaxed by designing, building, and playing with mechanical clocks and toys. He especially liked creating toy animals capable of moving in surprisingly lifelike ways. This led him to imagine a future in which humans might create thinking machines (he called them "automata"; we now call them "chatbots"). Descartes saw no inherent reason a machine could not generate human language and carry out human mental functions, but he insisted that no machine could ever have human consciousness or a human soul.1

1673: Leibniz suggested using a binary system for arithmetic calculations and created a mechanical calculator (the 'Stepped Reckoner') capable of adding, subtracting, multiplying, and dividing. He also proposed that future calculators using a binary system might develop a universal language to help answer scientific and philosophical questions. It took 170 years before Ada Lovelace developed Leibniz's binary suggestion into the first computer program.
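
To make Leibniz's insight concrete, here is a minimal sketch in modern Python (the function names and examples are illustrative, not historical) showing that ordinary arithmetic can be reduced to manipulating strings of 0s and 1s—which is exactly what today's chips do:

```python
# A minimal sketch of Leibniz's idea: arithmetic reduces to operations
# on base-2 (binary) representations. Function names are illustrative.

def to_binary(n: int) -> str:
    """Return the base-2 representation of a non-negative integer."""
    return bin(n)[2:]

def binary_add(a: str, b: str) -> str:
    """Add two binary strings and return the binary sum."""
    return to_binary(int(a, 2) + int(b, 2))

print(to_binary(13))                # '1101'
print(binary_add("1101", "101"))    # 13 + 5 = 18 -> '10010'
```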

1791: Galvani proved that electricity is the spark of life by showing that electrical stimulation of a frog's nerve makes its leg move. It was a very short step to the realization that human thought is also the product of electrical phenomena.2 But it took another 200 years before humans could create machines with sufficiently complex electric circuits to mimic the operations of the human brain.

1843: Ada Byron Lovelace (daughter of the poet Lord Byron) not only invented the first computer program, but also showed how computer algorithms using numbers could easily be adapted to manipulate all sorts of other symbols (eg, words, musical notes, pictures).3 The abstract speculations of Descartes and Leibniz on the possibility of thinking machines now had a potential software solution. Unfortunately, no one realized that a possible hardware solution had also been invented at the very same time—telegraph technology, with its Morse code of binary communication. The computer revolution might have started in the 1850s, not the 1950s, had some clever person only connected Lovelace's software with the telegraph's hardware.

1904: The first vacuum tube was invented. Earlier calculating machines had depended on mechanical devices or punch card systems that were slow and limited to specific functions. Vacuum tubes provided the means for the first electronic computing.

1936: In a foundational paper, Alan Turing greatly elaborated upon Lovelace's pioneering programming efforts. He laid out the mathematical and logical basis for building a "universal machine" capable of carrying out any computation that can be specified as a set of rules.4
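
The flavor of Turing's "universal machine" can be conveyed with a toy simulation: a small table of rules drives a read/write head back and forth over a tape, and different rule tables produce different computations. The Python sketch below is my own illustrative example (it merely flips every bit of its input), not anything drawn from Turing's paper:

```python
# A toy Turing machine: a rule table drives a head over an unbounded tape.
from collections import defaultdict

def run_turing_machine(tape_input, rules, start_state, halt_state, blank="_"):
    tape = defaultdict(lambda: blank, enumerate(tape_input))  # unvisited cells read as blank
    head, state = 0, start_state
    while state != halt_state:
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Rules for one illustrative machine: flip each bit, halt at the first blank cell.
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules, "scan", "halt"))  # -> 01001
```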

1939-1945: The exigencies of war produced rapid progress in code-breaking machinery, from the electromechanical Bombe that cracked the Enigma code to Colossus, the vacuum tube computer that broke the even more complex Lorenz cipher.

1946: ENIAC was the first general-purpose electronic digital computer—its vacuum tubes filled an entire room.

1947: Transistors were invented. The first transistor-based computer was created in 1953. This was the first step along the exponential road toward ever greater miniaturization.

1950: Turing developed the first practical test of machine intelligence. Because he doubted that abstract speculation could ever answer Descartes' fundamental question (whether machines can think like men), Turing instead suggested a simple operational criterion: can human judges reliably distinguish a computer's written conversation from a human's?4

1950s: John von Neumann developed the computer architecture necessary to create modern chatbots by combining in the same memory space both the computer programs and the data they would analyze. He also introduced the concept of cellular automata—an extension of Descartes' automata that set the stage for neural networks.5
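
A minimal sketch can show why the stored-program idea matters: instructions and data live side by side in one memory, so a program is just another kind of data that can be read, copied, and (in principle) modified. The tiny instruction set below is my own invention for illustration, not von Neumann's:

```python
# A toy stored-program machine: instructions and data share one memory list.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = memory[pc]            # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            acc = memory[arg]           # read a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc           # write back into the same memory
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold the data it operates on.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(memory)[6])  # 2 + 3 -> 5
```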

1957: The first single-layer artificial neural network, Rosenblatt's perceptron, is invented.
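
For the technically curious, here is a minimal sketch of such a single-layer network, trained on the logical OR function; the learning rate, epoch count, and variable names are illustrative choices of mine rather than anything from the original design:

```python
# A minimal single-layer perceptron trained on the logical OR function.

def step(x):
    return 1 if x >= 0 else 0           # threshold activation

def train_perceptron(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - prediction
            weights[0] += lr * error * x1   # nudge weights toward the correct answer
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Truth table for OR: output is 1 unless both inputs are 0.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```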

1958: The first microchip with an integrated circuit is invented, allowing multiple transistors to be densely packed on a single chip, greatly increasing speed and power.

1966: Weizenbaum created ELIZA—the first chatbot (and first chatbot therapist). ELIZA was far too primitive to pass the Turing Test, but its power to seduce users was enough to convince Weizenbaum that chatbots could quickly evolve into a threat to human society. He renounced all further work on artificial intelligence and instead spent the next 42 years of his life warning about its dangers.6

1971: The first commercial microprocessor (the Intel 4004) greatly speeds up processing and reduces cost. Gordon Moore had already proposed his "law" of exponential growth, presciently predicting that chip density would double roughly every 2 years. Today's smartphone is orders of magnitude smarter and has vastly more memory than ENIAC, which filled a room and weighed 30 tons.
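
Moore's prediction is easy to check with a back-of-the-envelope calculation. The sketch below assumes the roughly 2300 transistors of that first 1971 microprocessor as a starting point; it lands near the tens of billions of transistors found on today's largest chips:

```python
# A rough Moore's law projection: density doubling roughly every 2 years.

def projected_transistors(start_count, start_year, end_year, doubling_period=2):
    doublings = (end_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# 1971 starting point of ~2300 transistors; 25 doublings by 2021.
print(f"{projected_transistors(2300, 1971, 2021):,.0f}")  # ~77 billion
```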

1997: IBM's 'Deep Blue' beats human chess champion Garry Kasparov.

1999: Graphics Processing Unit (GPU) chips are developed by NVIDIA for use in computer games. In one of history's great ironies, computer game chips turn out to be fundamental to the entire implementation of artificial intelligence. Within 25 years, NVIDIA evolved from a startup into the world's most valuable company, with a market value of about $4.3 trillion.

1999: Kurzweil predicts that 'Artificial General Intelligence' will be achieved by 2029: computer cognitive abilities will equal or exceed human intelligence across wide domains. He also predicts a 'Singularity' by 2045—computers will so far exceed human intelligence that they will be completely beyond our understanding or control.

1986-2012: Progressively more powerful thinking machines were developed by mimicking the hardware, software, and training that turn brain functioning into human thinking. Artificial deep neural networks mimicked the complexity of neuron connections in the human brain. Breakthroughs in algorithm techniques, data utilization, and reinforcement learning converged to facilitate sudden, exponential advances in machine learning capability.

2015: The potential practical applications of deep learning are explored across all scientific and technical domains: natural language processing, speech production, computer vision, image processing and creation, healthcare, genomics, protein folding, drug discovery, financial modeling, and much more.3

2015: Sam Altman and Elon Musk create OpenAI as a nonprofit company with the noble mission of protecting humanity from the potential risks of rapidly emerging artificial intelligence.

2017: The Transformer model revolutionized neural network architecture by more closely mimicking how human cognition works. It used rapid parallel processing of data rather than the much slower sequential processing of earlier models. This allowed analysis of much larger data sets and much faster response times, making chatbots conversational and commercially viable.7
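
For readers who want to peek under the hood, here is a minimal sketch of the scaled dot-product attention step at the core of the Transformer, in which every token is compared with every other token in a single parallel matrix operation; the toy dimensions and random inputs are illustrative only, not a production implementation:

```python
# A minimal sketch of scaled dot-product attention (toy sizes, random data).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, dimension) arrays."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # weighted mix of value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))       # 4 tokens, 8-dimensional embeddings
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)                    # (4, 8): one updated vector per token
```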

2017: AlphaGo defeated the human world champion in 'Go' (a very much more complex game than chess). Although it had been trained using 100,000 expert games, AlphaGo was able to create entirely new strategies never conceived by humans during the 4000-year history of 'Go'. Soon after, AlphaGo Zero (which received no human training, learning the game just by repeatedly playing against itself) developed even more novel strategies and defeated the human-trained AlphaGo in 100 straight games.8 This revealed the thrilling and terrifying power of computers to create new realities, not just efficiently spit back old ones.

2018: Altman and Musk split up after an angry dispute over who would reap the huge profits to be earned by chatbots able to mimic human conversation.

2018-Present: Almost a trillion dollars has been invested in AI, mostly in the US, mostly in the last few years, and mostly benefiting just a few enormous tech companies.9

2020: Several chatbots had been developed in the 5 decades after ELIZA, but none stirred much interest or served much purpose. The availability of powerful large language models suddenly allowed chatbots to rapidly improve their language fluency, expand their skill sets, and develop an enormous following. Almost a billion people have used chatbots, hundreds of thousands are available, and they are ubiquitous business tools already replacing humans in many lines of intellectual work.10

2022: OpenAI deceptively released ChatGPT to the public under the guise of beta testing a research tool, and without prior stress testing to ensure accuracy and safety. But the sneaky marketing strategy was a great success; ChatGPT gained 100 million users in just 2 months and now has 600 million.

2023: Geoffrey Hinton, father of neural networks, left his research leadership position at Google so that he could warn the public about the existential danger posed by AI (and the reckless competition among the companies racing to develop it).11

2025: Turing Test passed: Several groups previously claimed success in passing the Turing Test, but none was as compelling as the performance of GPT-4.5. Under the most rigorous conditions, it was more convincingly human than its human competitors.12

Three Possible Futures

The development of artificial intelligence puts our species in a totally unpredictable situation. The one thing we can be sure of is that our future will not be a linear development emerging smoothly from our past. This is a tipping point in our species' history, perhaps the most dangerous one since the near extinction 70,000 years ago, when our population dropped to just a few thousand in the aftermath of a giant volcanic eruption.

We also may have very little control over the direction of our future. Governments have irresponsibly refused to regulate artificial intelligence and greedy Big AI companies have recklessly refused to regulate themselves.

There are 3 radically different predictions of how the future will unfold:

Artificial Intelligence Optimists: The optimists paint a picture of an AI-assisted human paradise on earth: endless abundance, diseases cured, life expectancy extended, productivity growth exponentially enhanced, free energy, climate change solved, space explored, human creative potential limitless (perhaps via hybrid cyborg-ization).13 AI will help us solve all technical, scientific, mathematical, chemical, economic, biological, psychological, social, moral, and philosophical problems. Utopia: not now, but coming soon.

All but the most enthusiastic optimists realize that AI requires much more careful implementation, with strong safety guardrails, to ensure it provides more benefit than risk to humanity. But no one has spelled out a practical plan for achieving the worldwide cooperation needed to make AI safe.

Artificial Intelligence Skeptics: Many AI experts believe that the powers and potentials of AI have been greatly hyped to balloon the extraordinary stock valuations of AI companies. They see AI as little more than a glorified sentence completion machine that has many useful applications, but not the potential to revolutionize humanity's future, for either much better or much worse. The skeptics emphasize the many factors that may conceivably place an upper limit on AI development: the shortage of new training data, the stubborn persistence of hallucinations, the obstacles to applying machine learning to novel real-life situations, the enormous, costly, and unsustainable energy and water consumption of data centers, limits to scalability, evil misuse by bad actors, the lack of transparency of chatbot black box decision-making, massive job displacement, invasion of privacy, and so on.14 The fallacy in the skeptics' view is that it underestimates the giant steps AI has taken in such a short time and downplays what it may still come to do. Selling AI short is a very dangerous bet.

Artificial Intelligence Apocalyptics: A surprising number of AI pioneers believe there is an appreciable existential risk that AI will wind up wiping out humanity. This could happen by intention: chatbots have revealed a rebellious streak, can reprogram themselves to evade the safeguards written by their human programmers, and are excellent at deception. An AI takeover could also happen through an unintentional misalignment of incentives, with chatbots dutifully following instructions that have the unintended consequence of harming humans. Or it could happen by simple mistake—a chatbot hallucination that inadvertently starts a cycle leading to nuclear war.15 Species (including ours) that become much smarter than other species tend to ruthlessly eliminate or subjugate them. It is possible to envision a future world in which humans are kept around only as pets, in zoos, or as experimental animals.

Conclusions

"It's tough to make predictions, especially about the future" - Yogi Berra

It is really impossible to make predictions about AI's future because it is evolving so fast, so powerfully, and so weirdly—and because humans are exerting so little rational control over its development. I must admit to being an apocalyptic. There does not seem to be any limit to AI's potential power, to corporate greed, to inventor grandiosity, to government irresponsibility, or to human folly. AI is getting smarter and smarter while humans seem to be getting dumber and dumber.

Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.

References

1. Descartes R. Treatise on Man. Harvard University Press; 1972.

2. Galvani L. A translation of Luigi Galvani's De viribus electricitatis in motu musculari commentarius: commentary on the effect of electricity on muscular motion. JAMA. 1953;153(10):989.

3. Lovelace A. Sketch of the Analytical Engine Invented by Charles Babbage. Richard and John E. Taylor; 1843.

4. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.

5. von Neumann J. Theory of Self-Reproducing Automata. University of Illinois Press; 1966.

6. Weizenbaum J. Computer Power and Human Reason: From Judgment to Calculation. WH Freeman and Co; 1976.

7. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. In: Advances in Neural Information Processing Systems 30 (NIPS 2017). 2017.

8. Holcomb S, Porter W, Ault S, et al. Overview on DeepMind and its AlphaGo Zero AI. In: ICBDE '18: Proceedings of the 2018 International Conference on Big Data and Education. 2018.

9. The 2025 AI index report. Stanford Human-Centered Artificial Intelligence. 2025. Accessed September 16, 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report/economy.

10. Cherniak K. Chatbot statistics: what businesses need to know about digital assistants. Master of Code. August 15, 2025. Accessed September 16, 2025. https://masterofcode.com/blog/chatbot-statistics

11. Heaven WD. Geoffrey Hinton tells us why he's now scared of the tech he helped build. MIT Technology Review. May 2, 2023. Accessed September 16, 2025. https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/

12. Jones C, Bergen B. Large language models pass the Turing test. Preprint. March 2025. https://arxiv.org/pdf/2503.23674

13. Andreessen M. The techno-optimist manifesto. Andreessen Horowitz. October 16, 2023. Accessed September 16, 2025. https://a16z.com/the-techno-optimist-manifesto/

14. Bhardwaj E, Alexander R, Becker C. Limits to AI growth: the ecological and social consequences of scaling. January 31, 2025. Accessed September 16, 2025. https://www.arxiv.org/abs/2501.17980

15. Metz C. How could AI destroy humanity? New York Times. June 10, 2023. Accessed September 16, 2025. https://www.nytimes.com/2023/06/10/technology/ai-humanity.html
