- (philosophy of artificial intelligence, rare) Matter arranged in a way that produces pleasure or happiness as efficiently as possible, as might be encouraged by philosophical hedonism.
- 2014, Nick Bostrom, Superintelligence, page 219:
- Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium—everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs.
- 2015 November 17, Eric Schwitzgebel, Mara Garza, “A Defense of the Rights of Artificial Intelligences”, in Midwest Studies in Philosophy:
- It’s because we intuitively or pre-theoretically think that we shouldn’t give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution.
- 2016, Olle Häggström, Here Be Dragons, page 122:
- Suppose the AI goes on to convert all matter in the accessible universe to hedonium, i.e., to rearrange matter so as to produce the greatest amount of pleasurable experience per unit mass.
- 2020, Brian Christian, “Uncertainty”, in The Alignment Problem, New York: W.W. Norton & Company, →ISBN:
- The Machine Intelligence Research Institute's Buck Shlegeris recently recounted a conversation where “someone said that after the Singularity, if there was a magic button that would turn all of humanity into homogenous goop optimized for happiness (aka hedonium), they'd press it. […] ”