Posts
Showing posts from 2025
Soenke Ziesche - IkigAI & AI, Ikigai Risk (IRisk)
What happens to human meaning when AI takes over not just our jobs - but our joy, our purpose, even our sense of self? In this conversation, I speak with Soenke Ziesche, co-author of Considerations on the AI Endgame, about Ikigai - the Japanese concept of life purpose - and how it may be at risk in a world increasingly dominated by artificial intelligence. Together we explore: the psychological illusion of meaning in modern life; the looming threat of Ikigai Risk - when AI automates away the things we live for, not just our jobs; how this connects to Nick Bostrom’s vision of Deep Utopia; and what happens when AI systems become more engaging than our friends, more competent than us in every skill, and indifferent to our well-being. Along the way, we raise difficult questions: Will humans still find purpose when AI can outperform us in love, labour, and laughter? If superintelligent AI lacks ikigai-kan - a felt sense of...
Colin Allen - Moral Machines
Interview with Colin Allen - Distinguished Professor of Philosophy at UC Santa Barbara and co-author of the influential 'Moral Machines: Teaching Robots Right from Wrong'. Colin is a leading voice at the intersection of AI ethics, cognitive science, and moral philosophy, with decades of work exploring how morality might be implemented in artificial agents. We cover the current state of AI, its capabilities and limitations, and how philosophical frameworks like moral realism, particularism, and virtue ethics apply to the design of AI systems. Colin offers nuanced insights into top-down and bottom-up approaches to machine ethics, the challenges of AI value alignment, and whether AI could one day surpass humans in moral reasoning. Along the way, we discuss oversight, political leanings in LLMs, the knowledge argument and AI sentience, and whether AI will actually care about ethics.
0:00 Intro
3:03 AI: Where are we at now?
7:53 AI Capability Gains
11:12 Gemini Gold Level in Interna...
Can Machines Understand? A.C. Grayling on AI and Moral Judgement
AC Grayling on Use of AI: Can = Should? #ai #agi #interview
AI Consciousness: Ghosts in the Machine? With Ben Goertzel, Robin Hanson...
Nick Bostrom: Novelty In A Big Universe #novelty #utopia #interesting
Nick Bostrom: Post-Human Consciousness #consciousness #philosophy #po...
Nick Bostrom - Cosmic Norms #ethics #morality #philosophy
Nick Bostrom on Coherent Extrapolated Volition CEV #ai #AGI #CEV
Nick Bostrom: Failing to Develop Superintelligence an Existential Catastrophe
Nick Bostrom - AI Values: Satiable vs Insatiable #ai #agi #values
Nick Bostrom: Superintelligence Could Arrive Very Soon #agi #ai #superintelligence
Nick Bostrom - Seeking Purpose? Seize the Day! #purpose #meaning #DeepUtopia
The AI Arms Race & the Darwinian Trap - a discussion between Kristian Rönn & Anders Sandberg
Anders Sandberg - AI Optimism & Pessimism
Anders discusses his optimism about AI in contrast to Eliezer Yudkowsky's pessimism. Eliezer sees AI safety as achievable through mathematical precision, where a good AI essentially falls out of the right equations - but get one bit wrong and it's doom. Anders instead sees AI safety through a kind of Swiss cheese security model: https://en.wikipedia.org/wiki/Swiss_cheese_model #AGI #Optimism #pdoom