<H2> Nick Bostrom’s Home Page </H2> |
<H2> Selected papers </H2> |
<H3> Recent additions </H3> |
<H3> Ethics & Policy </H3> |
<H3> Transhumanism </H3> |
<H3> Risk & The Future </H3> |
<H3> Technology Issues </H3> |
<H3> The New Book </H3> |
<H3> Anthropics & Probability </H3> |
<H3> Philosophy of Mind </H3> |
<H3> Decision Theory </H3> |
<H3> Bio </H3> |
<H3> My Work </H3> |
<H3> Contact </H3> |
<H3> Newsletter </H3> |
<H3> Virtual Estate </H3> |
<H3> Some Videos & Lectures </H3> |
<H3> Some additional (old, cobwebbed) papers </H3> |
<H3> Interviews </H3> |
<H3> Policy </H3> |
<H3> Miscellaneous </H3> |
<H4> Propositions Concerning Digital Minds and Society </H4> |
<H4> Sharing the World with Digital Minds </H4> |
<H4> Strategic Implications of Openness in AI Development </H4> |
<H4> The Reversal Test: Eliminating Status Quo Bias in Applied Ethics </H4> |
<H4> The Fable of the Dragon-Tyrant </H4> |
<H4> Astronomical Waste: The Opportunity Cost of Delayed Technological Development </H4> |
<H4> Infinite Ethics </H4> |
<H4> The Unilateralist's Curse: The Case for a Principle of Conformity </H4> |
<H4> Public Policy and Superintelligent AI: A Vector Field Approach </H4> |
<H4> Dignity and Enhancement </H4> |
<H4> In Defense of Posthuman Dignity </H4> |
<H4> Human Enhancement </H4> |
<H4> Enhancement Ethics: The State of the Debate </H4> |
<H4> Human Genetic Enhancements: A Transhumanist Perspective </H4> |
<H4> Ethical Issues in Human Enhancement </H4> |
<H4> The Ethics of Artificial Intelligence </H4> |
<H4> Ethical Issues In Advanced Artificial Intelligence </H4> |
<H4> Smart Policy: Cognitive Enhancement and the Public Interest </H4> |
<H4> Base Camp for Mt. Ethics </H4> |
<H4> Why I Want to be a Posthuman When I Grow Up </H4> |
<H4> Letter from Utopia </H4> |
<H4> The Transhumanist FAQ </H4> |
<H4> Transhumanist Values </H4> |
<H4> A History of Transhumanist Thought </H4> |
<H4> The Vulnerable World Hypothesis </H4> |
<H4> Where Are They? Why I hope the search for extraterrestrial life finds nothing </H4> |
<H4> Existential Risk Prevention as Global Priority </H4> |
<H4> How Unlikely is a Doomsday Catastrophe? </H4> |
<H4> The Future of Humanity </H4> |
<H4> Global Catastrophic Risks </H4> |
<H4> The Future of Human Evolution </H4> |
<H4> Technological Revolutions: Ethics and Policy in the Dark </H4> |
<H4> Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards </H4> |
<H4> Information Hazards: A Typology of Potential Harms from Knowledge </H4> |
<H4> What is a Singleton? </H4> |
<H4> Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? </H4> |
<H4> The Evolutionary Optimality Challenge </H4> |
<H4> The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents </H4> |
<H4> Whole Brain Emulation: A Roadmap </H4> |
<H4> Converging Cognitive Enhancements </H4> |
<H4> Hail Mary, Value Porosity, and Utility Diversification </H4> |
<H4> Racing to the Precipice: a Model of Artificial Intelligence Development </H4> |
<H4> Thinking Inside the Box: Controlling and Using Oracle AI </H4> |
<H4> Future Progress in Artificial Intelligence: A Survey of Expert Opinion </H4> |
<H4> Cognitive Enhancement: Methods, Ethics, Regulatory Challenges </H4> |
<H4> Are You Living in a Computer Simulation? </H4> |
<H4> Superintelligence: Paths, Dangers, Strategies </H4> |
<H4> Anthropic Bias: Observation Selection Effects in Science and Philosophy </H4> |
<H4> Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation </H4> |
<H4> The Mysteries of Self-Locating Belief and Anthropic Reasoning </H4> |
<H4> Anthropic Shadow: Observation Selection Effects and Human Extinction Risks </H4> |
<H4> Observation Selection Effects, Measures, and Infinite Spacetimes </H4> |
<H4> The Doomsday argument and the Self-Indication Assumption: Reply to Olum </H4> |
<H4> The Doomsday Argument is Alive and Kicking </H4> |
<H4> The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe </H4> |
<H4> A Primer on the Doomsday argument </H4> |
<H4> Sleeping Beauty and Self-Location: A Hybrid Model </H4> |
<H4> Beyond the Doomsday Argument: Reply to Sowers and Further Remarks </H4> |
<H4> Cars In the Other Lane Really Do Go Faster </H4> |
<H4> Observer-relative chances in anthropic reasoning? </H4> |
<H4> Cosmological Constant and the Final Anthropic Hypothesis </H4> |
<H4> Quantity of Experience: Brain-Duplication and Degrees of Consciousness </H4> |
<H4> The meta-Newcomb Problem </H4> |
<H4> Pascal's Mugging </H4> |
<H4> Nick Bostrom: How AI will lead to tyranny </H4> |
<H4> TED2019 </H4> |
<H4> Podcast with Sean Carroll </H4> |
<H4> Podcast with Lex Fridman </H4> |
<H4> TED talk on AI risk </H4> |
<H4> Crucial Considerations and Wise Philanthropy </H4> |
<H4> The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? </H4> |
<H4> Omens </H4> |
<H4> How to make a difference in research: interview for 80,000 Hours </H4> |
<H4> On the simulation argument </H4> |
<H4> On cognitive enhancement and status quo bias </H4> |
<H4> Smart Policy: Cognitive Enhancement and the Public Interest </H4> |
<H4> Three Ways to Advance Science </H4> |
<H4> What are the key steps the UK should take to maximise its resilience to natural hazards and malicious threats? </H4> |
<H4> Drugs can be used to treat more than disease </H4> |
<H4> The Interests of Digital Minds </H4> |
<H4> Golden </H4> |
<H4> Synkrotron </H4> |
<H4> The World in 2050 </H4> |
<H4> Transhumanism: The World's Most Dangerous Idea? </H4> |
<H4> Moralist, meet Scientist </H4> |
<H4> How Long Before Superintelligence? </H4> |
<H4> When Machines Outsmart Humans </H4> |
<H4> Everything </H4> |
<H4> Superintelligence </H4> |
<H4> Most Still to Come </H4> |
<H4> The Game of Life—And Looking for Generators </H4> |
Cost and overhead previously rendered this semi-public form of communication infeasible.
But advances in social networking technology between 2004 and 2010 have made broader concepts of sharing possible.