Posts

Showing posts from 2025

Ben Goertzel - Do LLMs Really Reason?

Soenke Ziesche - IkigAI & AI, Ikigai Risk (IRisk)

What happens to human meaning when AI takes over not just our jobs - but our joy, our purpose, even our sense of self? In this conversation, I speak with Soenke Ziesche, co-author of Considerations on the AI Endgame, about Ikigai - the Japanese concept of life purpose - and how it may be at risk in a world increasingly dominated by artificial intelligence. Together we explore:
- The psychological illusion of meaning in modern life
- The looming threat of Ikigai Risk - when AI automates away the things we live for, not just the things we live on
- How this connects to Nick Bostrom's vision of Deep Utopia
- What happens when AI systems become more engaging than our friends, more competent than us in every skill, and indifferent to our well-being
Along the way, we raise difficult questions:
- Will humans still find purpose when AI can outperform us in love, labour, and laughter?
- If superintelligent AI lacks ikigai-kan - a felt sense of...

Colin Allen - Moral Machines

Interview with Colin Allen - Distinguished Professor of Philosophy at UC Santa Barbara and co-author of the influential 'Moral Machines: Teaching Robots Right from Wrong'. Colin is a leading voice at the intersection of AI ethics, cognitive science, and moral philosophy, with decades of work exploring how morality might be implemented in artificial agents. We cover the current state of AI, its capabilities and limitations, and how philosophical frameworks like moral realism, particularism, and virtue ethics apply to the design of AI systems. Colin offers nuanced insights into top-down and bottom-up approaches to machine ethics, the challenges of AI value alignment, and whether AI could one day surpass humans in moral reasoning. Along the way, we discuss oversight, political leanings in LLMs, the knowledge argument and AI sentience, and whether AI will actually care about ethics.
0:00 Intro
3:03 AI: Where are we at now?
7:53 AI Capability Gains
11:12 Gemini Gold Level in Interna...

Robin Hanson - We Broke Humanity’s Superpower

Can Machines Understand? A.C. Grayling on AI and Moral Judgement

James Barrat - The Intelligence Explosion

AI Welfare - Jeff Sebo

AC Grayling on Use of AI: Can = Should? #ai #agi #interview

Nick Bostrom - From Superintelligence to Deep Utopia

Metacognition in LLMs - Shun Yoshizawa & Ken Mogi

AI Consciousness: Ghosts in the Machine? With Ben Goertzel, Robin Hanson...

Nick Bostrom: Novelty In A Big Universe #novelty #utopia #interesting

Nick Bostrom: Post-Human Consciousness #consciousness #philosophy #po...

Nick Bostrom - Cosmic Norms #ethics #morality #philosophy

Nick Bostrom on Coherent Extrapolated Volition CEV #ai #AGI #CEV

Nick Bostrom: Failing to Develop Superintelligence an Existential Catastrophe

Nick Bostrom - AI Values: Satiable vs Insatiable #ai #agi #values

Nick Bostrom: Superintelligence Could Arrive Very Soon #agi #ai #superintelligence

Nick Bostrom - Seeking Purpose? Seize the Day! #purpose #meaning #DeepUtopia

Nick Bostrom - on X-risk - pt 2 #xrisk #airisks #ai

The AI Arms Race & the Darwinian Trap - a discussion between Kristian Rönn & Anders Sandberg

Anders Sandberg - AI Optimism & Pessimism

Anders discusses his optimism about AI in contrast to Eliezer Yudkowsky's pessimism. Eliezer sees AI safety as achievable through mathematical precision, where a good AI sort of folds out of the right equations - but get one bit wrong and it's doom. Anders sees AI safety through a kind of Swiss cheese security model: https://en.wikipedia.org/wiki/Swiss_cheese_model #AGI #Optimism #pdoom