Podcast: Can AI Teach Us How to Think?
Exploring Philosophy, Ethics, and the Future of Knowledge with Every's Dan Shipper
In this thought-provoking episode, I sat down with Dan Shipper, co-founder and CEO of Every, to explore the intersections of AI, philosophy, and human cognition. We tackle big questions: Can AI truly think like humans? Does it think more like a human than most of us imagine? Is the pursuit of absolute truth compatible with how human minds really work? And what can thousands of years of philosophical inquiry teach us about the rapidly evolving landscape of artificial intelligence?
We dive deep into how AI is reshaping our concepts of knowledge, ethics, specialization, and justice. Below are some highlights and key insights from our fascinating conversation. Listen to the full discussion on YouTube, Spotify or anywhere you get your podcasts!
Is AI reshaping our understanding of knowledge?
We often view knowledge as explicit and symbolic—something clearly defined and universally agreed upon. Thousands of years of philosophy, our own studies at university, and the recent history of the field all challenge this idea. Traditional symbolic AI approaches tried (and failed) to model human thinking through explicit rules, while modern neural networks thrive on intuitive, distributed, and fuzzy reasoning.
The shift in AI mirrors philosophical trends away from seeking ultimate, universal truths toward embracing intuitive, implicit, and context-dependent knowledge.
The paradox of contradictions in AI
When AI generates contradictory responses, it's tempting to dismiss this as a flaw, a failure of alignment. But perhaps contradictions reflect a deeper truth: human cognition itself isn't free of them. Perhaps AI models can, or even should, hold multiple contradictory viewpoints at once.
AI, ethics, and the risk of rigid thinking
If AI ethics are built solely from today's moral standards, we might inadvertently lock ourselves into inflexible ethical norms. AI models risk becoming "fuzzy universal calculi," encoding biases and judgments from our current moment.
However, Dan points out a brighter path: intentionally designing AI systems with flexibility and fuzziness. By avoiding rigid ethical frameworks, we allow for moral reasoning that adapts and evolves—much like human moral intuition.
Generalists empowered by AI
Historically, society has moved from generalist cultures (think ancient Athens) to highly specialized ones. AI might reverse this trend by empowering individuals to act as "super-generalists," able to access specialized expertise instantly without needing to become specialists themselves.
This shift has profound implications: could AI help us reclaim a more holistic, flexible way of living and working?
Could AI reshape justice?
Our legal system depends heavily on human juries, which are inherently unpredictable and inconsistent. What if AI could offer more consistent moral judgments?
Yet there's tension here: while consistency could improve fairness, there's value in human unpredictability—like jury nullification—which challenges unjust laws. How we balance AI’s consistency with human flexibility might redefine justice itself.
We're at a fascinating crossroads where AI isn't just changing technology—it’s challenging us to rethink what it means to think, know, and create.
Check out the full episode!