In Singerian fashion I'm already donating 50% of my salary to the GoFundMe "Infinite Wedgies for Altman" which'll make this happen. Without such support, we can't rely upon a horrified superintelligence arising, which could simulate and devastatingly shear its creator's perineum for all eternity.
Hahahahaha I love you
I'm willing to experiment with whatever until we figure out what makes AI cultists Not Like That.
Thank you for the link to Jeff Janis. As a fellow Sad Dad Music Enjoyer (https://bsky.app/profile/atherton.bsky.social/post/3kwujyvsnnz22), this is great!
Oh rats, it's the trolley problem.
https://www.youtube.com/watch?v=33VUuu2fb1I&ab_channel=CamilleCooke
Singer's spin on it is the "drowning toddler problem."
Suppose you're wearing your best clothes and you're running late for a job interview. On the way, you see a drowning toddler. You can save him, but that means missing the interview and ruining your outfit.
Do you do it? Of course you do, because you're a morally normal person.
Okay, so why aren't you doing something for the millions of starving kids around the globe by, for example, cutting back on your lifestyle? Forgoing Netflix and Starbucks and so on.
His whole schtick is examining human moral responsibility on the micro -- the personal -- level. Interesting guy, though there's a lot of his stuff I struggle with.
I think Singer also struggles with his own philosophy, imo. In his interview on NYT's 'The Interview,' he admits as much when questioned about providing financial care to his mother at the end of her life, conceding it wasn't the most "utilitarian way" to spend money. I think he actually engages in rationalization that is inconsistent with many of his moral views. (I don't fault him at all for taking care of family, and I'm also not committed to utilitarianism.)
I've seen 'malaria net' effective altruists really struggle with implementing this approach in their own personal lives. It becomes self-flagellation and suffering, and that's justified because, of course, one person's misery is less consequential than a million people's.
In AI, this has led some longtermists to take a brain-wormed perspective where they measure the *potential* pleasure and suffering of the "trillions" of lives that don't yet exist. (These people can't even feel anything because they don't exist, and yet they somehow carry moral weight. How? I've never seen a satisfying response.)
These problems might be (partially) resolved if we took the time to ask, "what do we owe ourselves as individuals?"
But I'm not a philosopher, just a person who walked outside and encountered an ethical dilemma. :)
Link to interview I mentioned above: https://archive.is/C1l4W
Aye, saw that. I think what you're observing is essentially a feature, not a bug, of utilitarian thought -- and basically why utilitarian philosophy amounts to people "patching" the previous utilitarian operating system, addressing cracks while also creating new ones.
To Singer's credit, however, I think he's open about the flexibility of his views -- the interview with Vox where he wrestled with the question of "how much is enough to give, and how far should one cut back on one's lifestyle," is a good example of that -- and he's open about how his actions will change based on circumstance and necessity.
He's moderated his views somewhat on medical animal testing, and (IIRC) he once said that he'd eat dairy/eggs if in a place where vegan food wasn't readily available (which was the case in France, for example, until recently).
I think that's why I've got a lot of time for him, even though I vehemently disagree with a lot of his ideas, because he's not a fundamentalist and he's willing to identify areas in which you can compromise. Which is good, because life is shades of grey and it's complex and you need to compromise sometimes!
The philosophical underpinnings of the AI accelerationists are really interesting, because they kind of flip the trolley problem/drowning toddler problem on its head and ask: "would you create a fully-automated luxury leisure world if it meant there was a 1% (or whatever) chance humanity would be exterminated by robots?"
Which is a totally different question, and one I'd love to see the likes of Kant and Bentham explore.
Thing is, though, it's a question we've technically had to consider in the past with, say, hydrocarbons, but never actually did.
The car and the plane, if we're concerned about human happiness and the lifestyles that bring happiness, are probably really good from a utilitarian perspective. If we're measuring things by the "greatest happiness of the greatest number," they're great!
But then we have to factor in climate change. If the plane and the car are fundamental to industrial society (and individual happiness, especially in a consumer-driven world), but they also cause 50m people to become climate refugees over a decade (a figure I literally just made up for this point), are they still good?
Bentham arguably would say so! We never had that conversation, however, because nobody thought about climate change until the car became so fundamental to our world. AI -- whether we accept the threat posed by AGI, which I don't -- is a different matter.
Anyway, I'm rambling. This shit is so interesting. My apologies.