Convincing AI |631|


In Skeptiko 631, we test how well Large Language Models (LLMs) understand and apply basic logic to complex scientific reasoning. The test case may sound a bit strange, but it proved ideal for the purpose: the discussion centered on the claim that viruses don’t exist because they’ve never been isolated.

Host Alex Tsakiris and guest Mark Gober engage in a deep dive into the nuances of virology, the concept of isolation in biomedical science, and how these ideas intersect with AI’s ability to process and analyze information. The experiment reveals differences in performance between two LLMs, with one demonstrating a superior grasp of the deeper context of the scientific claims.

A key point is how long it took for the AI to recognize that “isolation” might be an inappropriate term in this specific scientific context. This leads to broader questions about AI’s capacity for nuanced understanding and its vulnerability to being misled by carefully crafted arguments.

The episode ultimately raises questions about the current state and future potential of AI in tackling complex, controversial scientific debates. How far can AI go toward understanding the intricacies of scientific reasoning? Will it always be susceptible to clever human manipulation? Join us as we unpack these questions and more in Skeptiko 631.

Published on July 5, 2024
