
Thread: Are we evil?

  1. #151
    Titan in the Playground
     
    TuggyNE's Avatar

    Join Date
    Jun 2011
    Gender
    Male

    Default Re: Are we evil?

    Quote Originally Posted by Murska View Post
    EDIT: Ah, yes. In the context of Prisoner's Dilemma specifically, detriment is functionally equivalent to not getting the best possible outcome, as the game is abstracted to only contain the four possible outcomes which are clearly ranked in preference.
    Sure, but a rational actor might in principle consider that acting in a locally-suboptimal fashion in Prisoner's Dilemma would in some way benefit them in a broader context, e.g. if they believe themselves to be observed by someone who values behavior of one kind or another. And in English, one's "values" are conventionally used to mean personal ideals in the context of a belief system; these are generally used as the basis for determining (presumably) rational courses of action to further those values, but they need not be such simple things as the amassing of utilons - they can include meta-values like (not) cooperating.

    It should probably also be noted that results from game theory suggest that naive cooperation is not, in fact, the optimal strategy, at least in iteration. Just to muddy the waters a bit more.
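    To pin down the abstraction, here's a minimal sketch (illustrative numbers of my own; any payoffs satisfying the standard T > R > P > S ordering behave the same way) of the four ranked outcomes and why one-shot defection dominates:

    Code:
    # One-shot Prisoner's Dilemma payoffs (hypothetical numbers; only the
    # ordering T > R > P > S matters). 'C' = cooperate, 'D' = defect.
    T, R, P, S = 5, 3, 1, 0  # temptation, mutual reward, mutual punishment, sucker

    payoff = {  # payoff[(my_move, their_move)] -> my payoff
        ('C', 'C'): R, ('C', 'D'): S,
        ('D', 'C'): T, ('D', 'D'): P,
    }

    # Whatever the other player does, defecting pays strictly more -
    # which is exactly what makes the one-shot game a dilemma.
    for their_move in ('C', 'D'):
        assert payoff[('D', their_move)] > payoff[('C', their_move)]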

    TL/DR: This is actually fairly complicated stuff, and there are very few easy answers that are worth the electrons they inconvenience.

  2. #152
    Banned
     
    SiuiS's Avatar

    Join Date
    Jan 2011
    Location
    Somewhere south of Hell
    Gender
    Female

    Default Re: Are we evil?

    Quote Originally Posted by TuggyNE View Post
    It should probably also be noted that results from game theory suggest that naive cooperation is not, in fact, the optimal strategy, at least in iteration. Just to muddy the waters a bit more.
    Interesting. Could you elaborate on this?

  3. #153
    Troll in the Playground
     
    Murska's Avatar

    Join Date
    Jul 2007
    Location
    Whose eye is that eye?
    Gender
    Male

    Default Re: Are we evil?

    Quote Originally Posted by TuggyNE View Post
    Sure, but a rational actor might in principle consider that acting in a locally-suboptimal fashion in Prisoner's Dilemma would in some way benefit them in a broader context, e.g. if they believe themselves to be observed by someone who values behavior of one kind or another. And in English, one's "values" are conventionally used to mean personal ideals in the context of a belief system; these are generally used as the basis for determining (presumably) rational courses of action to further those values, but they need not be such simple things as the amassing of utilons - they can include meta-values like (not) cooperating.

    It should probably also be noted that results from game theory suggest that naive cooperation is not, in fact, the optimal strategy, at least in iteration. Just to muddy the waters a bit more.

    TL/DR: This is actually fairly complicated stuff, and there are very few easy answers that are worth the electrons they inconvenience.
    Meta-concerns are generally ignored in theoretical discussion. But one would assume that most value systems promote cooperation even more strongly.

    Naive cooperation is not a good idea; I believe I mentioned that in the iterated Prisoner's Dilemma the optimal strategy, in experiments, tends to be to begin with cooperation yet punish defection harshly. Simulated iterated Prisoner's Dilemma tournaments favour the same thing.
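    A quick toy version of that claim (my own sketch, with made-up payoffs and only three strategies, not a reproduction of any published tournament):

    Code:
    import itertools

    # PAYOFF[(a_move, b_move)] -> (a_payoff, b_payoff); 'C' = cooperate, 'D' = defect
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def naive(opp_history):        # cooperate unconditionally
        return 'C'

    def defector(opp_history):     # defect unconditionally
        return 'D'

    def tit_for_tat(opp_history):  # open with cooperation, then mirror the opponent
        return opp_history[-1] if opp_history else 'C'

    def play(strat_a, strat_b, rounds=100):
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            ma, mb = strat_a(moves_b), strat_b(moves_a)
            pa, pb = PAYOFF[(ma, mb)]
            score_a, score_b = score_a + pa, score_b + pb
            moves_a.append(ma)
            moves_b.append(mb)
        return score_a, score_b

    strategies = {'naive': naive, 'defector': defector, 'tit_for_tat': tit_for_tat}
    totals = dict.fromkeys(strategies, 0)
    for (name_a, a), (name_b, b) in itertools.combinations(strategies.items(), 2):
        sa, sb = play(a, b)
        totals[name_a] += sa
        totals[name_b] += sb
    print(totals)  # {'naive': 300, 'defector': 604, 'tit_for_tat': 399}

    The pure defector only tops this tiny field because it has a naive victim to farm; the relevant point is that unconditional cooperation comes dead last, while the cooperate-then-retaliate strategy is never exploited for more than one round.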

    It's not very complicated at all, really. Most things aren't. The vast majority of correct answers are blindingly obvious, in hindsight.

    EDIT: Also, 'utilon' is a general term for 'benefit according to one's values' in imaginary units that can be compared in theoretical discussions. If you get utilons from having observers view you as trustworthy even while cooperating against a probable defection, then that might become rational, which is the reason why such concerns are abstracted away - you are assumed to only and solely care about the result of the game.
    Last edited by Murska; 2015-02-28 at 03:51 AM.

  4. #154
    Halfling in the Playground
     
    DruidGuy

    Join Date
    Jan 2015
    Location
    Chicago
    Gender
    Male

    Default Re: Are we evil?

    Are humans naturally evil? No, I don't think so, but I'm an eternal optimist. However, I will say that things like fear and the desire for power are what drive some humans in that dark direction.

  5. #155
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Are we evil?

    Quote Originally Posted by TuggyNE View Post
    *insert Princess Bride quote here* "Detriment" is not conventionally defined in terms of "going against values", but in terms of harm (per OED, M-W, etc etc etc). So no, it's not a tautology. It's a novel and unusual definition, upon which your entire argument hangs. Therefore, as previously noted, you can't just brush it under the rug; you must rigorously defend exactly why this definition is correct, or why it is semantically equivalent to the usual, or whatever. Otherwise the argument falls on its face at the starting blocks.
    I would hope that you'd agree that calling a life-saving surgery "injurious" because it involves slicing someone up is misleading, because you're no longer using that word to indicate what it normally indicates. Usually! But, in turn, if saving someone's life is contrary to that individual's goals, then calling it harmful is appropriate again.

    That's the ethically relevant sense of these words. That which is beneficial to or good for me is that of which I approve, and that which is detrimental to or bad for me is that of which I disapprove. Acting contrary to my preferences is not "helping" me. Don't speak of yourself as if you are my ally if you are instead my enemy. Etc.

    It's generally dubious to suppose that you know how to achieve others' values better than they do, and there's a case to be made that we have an ethical obligation not to do that most of the time. But at the point at which you're knowingly acting against others' values, it's outright dishonest to claim that you're doing something to them "for their own good", as you're not even trying to act in their service.

    Quote Originally Posted by Murska View Post
    A mind could conceivably have pretty much any values. And acting according to those values would be 'right' or 'good' in their morality. Just as it might be 'wrong' or 'evil' in ours. Saying that an action is moral, ethical or right means that it is endorsed by our value system, which in most humans is close enough to most other humans that we can make judgments like that even over several minds that technically have slightly differing values. That's all.
    No. A value system need not be moral in nature. And saying that something is moral doesn't just mean that it's in accordance with our values, because we are capable of distinguishing our moral values from other values of ours which aren't moral. People sometimes do things that they think are immoral, because sometimes people care about other things more than morality.

    In much the same way in which chocolate isn't whatever flavor of ice cream you like the best, morality isn't whatever you want the most. And an alien being with a good grasp of human language wouldn't call some sort of gross (to us) puke-flavored ice cream or whatever "chocolate" just because that's its favorite. Similarly, an alien being with values fundamentally opposed to morality would have no reason to call its values "moral" unless it was trying to deceive.

    Try putting yourself in the alien's place here. Suppose that you come across a planet filled with beings who place great value on a bunch of interrelated things that are, as a rule, loathsome to you. Do you then adopt their word for those things as a term for a bunch of stuff that you value? Does that seem like a reasonable thing to do?

    This doesn't mean that the concepts right and wrong are meaningless. They are very important to us, directing our actions and judging the actions of others based on our values. But there's nothing beyond ourselves that gives them meaning.
    ... Do you think that this somehow constitutes an argument against something that I said? If so, what, and how?

    I would dispute this. If you act in a way that is to your own detriment, you are not acting rationally. Rationality means winning.
    You win the most in the Prisoner's Dilemma if you defect and your opponent cooperates. Are defectors whose opponents cooperate the most rational? Is playing the lottery highly rational if you manage to pick the winning numbers? Rationality is related to winning, but the two are not equivalent.

    In the example of an iterated Prisoner's Dilemma, a rational actor will cooperate by default, but punish defectors. Two rational agents will cooperate the entire time. In the case of a singular Prisoner's Dilemma, two rational agents will still cooperate, because if they simulate the other side in their situation, a defection would result in two defects and cooperation would result in two cooperations. This is part of timeless decision theory.
    That's true if they have mutual knowledge of their mutual rationality, but that isn't necessarily the case now is it? Technically, it's a matter of the probability that the other agent's decision will mirror your own, for any of numerous possible reasons. (In practice, it's not rational to assign a proposition a probability of 0 or 1.)

    Quote Originally Posted by Murska View Post
    However, I hope I have now made my meaning perfectly clear, and would much rather defend that if anyone finds anything in it to question, rather than discussing words.
    But defining "rationality" as acting in accordance with your own values doesn't seem to have been the basis for something else that you were getting at but rather to have been your central point (in the latter half of your response to me). What point other than the meaning of the word "rational" were you addressing? Furthermore, what do you think that the preceding exchange is about if not the meanings of the words "right", "good", "moral", "ethical", "wrong", "evil", etc.?

    Purely philosophical statements are basically those statements whose truth values are purely functions of the meanings of the words they contain. "Jill's house is green", for example, is not a philosophical statement, but "Something is knowledge if it is a true justified belief" is. Philosophical questions by their very nature are semantic questions. "What is knowledge?", "What is goodness?", "What is rationality?", "What is beauty?", etc. directly equate to "What does 'knowledge' mean?", "What does 'goodness' mean?", "What does 'rationality' mean?", "What does 'beauty' mean?", etc. Each of the big questions of philosophy is really just the question of what the heck we're even talking about when we use a particular set of interrelated words.

    Didn't Wittgenstein say something to the effect that the only function of philosophy is the clarification of language?

    When engaged in a discussion like this, it's pretty ridiculous to accuse someone else of engaging in mere semantic nitpicking as if you are not also disputing the meanings of words. What various words mean is basically ultimately all that's at issue. But something being pretty ridiculous has never been a barrier to plenty of people doing it. :P

    Quote Originally Posted by Murska View Post
    Well, a poster mentioned that rational agents can act to the detriment of themselves in things like Prisoner's Dilemma. I answered that it is not so, as rational agents do not act in a way that is to their own detriment.
    I mentioned the idea that a group of rational agents can act towards their mutual detriment in some situations. I didn't claim that it's so, did I? But if you really do want to argue that that's impossible then my understanding is that you've got your work cut out for you. What the Prisoner's Dilemma purports to show, to my understanding, is that individual rationality doesn't add up to collective rationality. Basically, the idea that a choice is a bad one or a good one is based on contrast to "what would have happened in the case that another choice had been made", so the whole concept hinges on the evaluation of counterfactuals, which is actually a fairly complicated philosophical question I think.

    But the sort of reasoning that you mention is actually fairly applicable here, because the fates of human beings may one day be governed by minds qualitatively more sophisticated than ours. And for all of their differences from us, they may be subject to many of the same considerations as we are regarding the question of how to treat one's inferiors, notably including the consideration that their superiors may be subject to many of the same considerations as they are.

    It's possible to pick out some trait that you have and to devalue everyone who lacks it. And post-humans or alien superbeings or the extradimensional entities simulating our universe or whatever could totally devalue human beings for lacking mental capabilities of theirs that we can't even conceive of, just as various human beings devalue those who lack intelligence, self-awareness, metacognition, moral agency, abstractions, hypotheses, or even just whatever skin color or religion or nationality or whatever the hell they favor. Any damn thing, really! (They probably wouldn't claim that whatever they chose was the only possible basis on which to value minds, though. One imagines that such hyper-advanced superbeings would be far more intellectually honest, self-aware, etc. than that.)

    And obviously by raising the possibility of being in the same situation, I'm trying to get human readers to empathize with other creatures, and to raise considerations of the Golden Rule, along with ethical principles like it being wrong for someone to have bad things happen to them because of things that they have no control over, it still being evil to do evil things to those in your outgroup, etc. But also, even if you don't give a rat's ass about any of that, there seems like a non-trivial possibility that minds who have to decide how well to treat you just may be similar enough to your mind in the relevant ways that their decisions mirror your decision of how well to treat those that you have power over. It certainly seems more likely that other minds in general will tend to be like yours than opposite yours. In addition to specific possibilities like post-humans inheriting human values, interest in simulating minds like one's own, etc., some anthropic reasoning also seems applicable.

    In many cases your decision of how to treat your inferiors will have no obvious direct causal impact on your superiors' decision of how to treat you, but as in Newcomb's problem, that doesn't mean that it's rational to disregard the relationship between the two. And all of the above is before we even get into the question of what form punishing defectors might conceivably take...

    Of course, if you decide that the probability of a superior's decision mirroring yours is very small, then the logic sounds rather similar to Pascal's Wager. Including the idea that the tiny chance of a huge payoff isn't necessarily the only reason to take the wager.


    Does anyone seriously think that any principle that could be reasonably described as "moral" somehow recommends disregarding the welfare of other sentient beings? Because it seems to me like basically all of them recommend treating other sentient beings well. Kindness? Obviously. Justice? Of course. The ethic of reciprocity? Well, yeah! Maximizing happiness? You bet. Enlightened self-interest? Um... It actually kinda seems like it.

    This should not come as a surprise. One of the reasons that ethical principles are formulated in so many different ways is that there are a lot of surprisingly different formulas that produce surprisingly similar advice. Or not so surprising, really, since prohibiting murder, theft, assault, etc. is basically what they're designed to do. And sometimes, nearly all of the various generalizations of such standard ethical rules will recommend against doing something that the standard rules themselves do not prohibit. That indicates that you shouldn't do that thing! For, like, almost every value of "should"!

    Only when different ethical guidelines make conflicting endorsements does it make a difference which principles are fundamental. In a case where e.g. happiness and preference satisfaction are both maximized by the same course of action, the question of whether one of them should be regarded only as a means to the other isn't practically relevant.
    Last edited by Devils_Advocate; 2015-04-14 at 11:04 AM.

  6. #156
    Banned
     
    SiuiS's Avatar

    Join Date
    Jan 2011
    Location
    Somewhere south of Hell
    Gender
    Female

    Default Re: Are we evil?

    Thanks. That was a good read. :)

  7. #157
    Ettin in the Playground
     
    Kobold

    Join Date
    May 2009

    Default Re: Are we evil?

    Devils Advocate: for the most part I wholeheartedly agree with you. And yet there are a couple of places where your logic troubles me. So just for your reference, in the hope that it may help you to tighten up your explanation in future...

    Quote Originally Posted by Devils_Advocate View Post
    Try putting yourself in the alien's place here. Suppose that you come across a planet filled with beings who place great value on a bunch of interrelated things that are, as a rule, loathsome to you. Do you then adopt their word for those things as a term for a bunch of stuff that you value? Does that seem like a reasonable thing to do?
    I'm thinking of the 17th-through-19th-century European explorers, who went forth and discovered new peoples and cultures, and attached words like "religion" and "gods" and "laws" to what they found, even though their own religion and laws said that these were inappropriate. (It was a considerable intellectual jump for some of them, but in the end the doubters were firmly overruled by the cultural relativists among them.) If the aliens have a concept of "rightness", a "should" imperative that causes them to regard these repulsive things as valuable and important even when they don't, at a personal level, find them remotely desirable, then it's not so far-fetched to associate them with a concept that we regard in the same light.

    Morality isn't a set of values, it's a way of regarding those values. We bundle a set of concepts together and attach the label "good" to them. What, precisely, gets included in that bundle is secondary: it's the label that makes it "moral". If we see other people attaching an equivalent label to a very different bundle, then there's nothing dishonest about using our word "morality" to describe their attitude.

    Quote Originally Posted by Devils_Advocate View Post
    That's true if they [both players of the Prisoner's Dilemma] have mutual knowledge of their mutual rationality, but that isn't necessarily the case now is it? Technically, it's a matter of the probability that the other agent's decision will mirror your own, for any of numerous possible reasons. (In practice, it's not rational to assign a proposition a probability of 0 or 1.)
    It's perfectly rational, if you know that the other player is identical to yourself. Imagine that a computer, sufficiently advanced to reason the answer out for itself, knows that it is playing against another computer that is a perfect clone of itself, and that it's given precisely the same starting information. Then it can know, with certainty, that the clone will make the same choice as it does.
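    One way to make that concrete (a toy sketch with illustrative payoff numbers; the reasoning, not the code, is the claim):

    Code:
    # A deterministic agent playing a perfect copy of itself: both copies run
    # the same code on the same input, so the outcome is always symmetric,
    # (C, C) or (D, D), and the agent need only compare those two.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def clone_agent(shared_input):
        # Asymmetric outcomes like (D, C) are unreachable against a clone,
        # so pick the better of the two symmetric outcomes.
        return 'C' if PAYOFF[('C', 'C')] > PAYOFF[('D', 'D')] else 'D'

    # Identical program + identical input => identical choice, every time.
    assert clone_agent('same info') == clone_agent('same info')
    print(clone_agent('same info'))  # 'C'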
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain

  8. #158
    Troll in the Playground
     
    Murska's Avatar

    Join Date
    Jul 2007
    Location
    Whose eye is that eye?
    Gender
    Male

    Default Re: Are we evil?

    Quote Originally Posted by Devils_Advocate View Post
    No. A value system need not be moral in nature. And saying that something is moral doesn't just mean that it's in accordance with our values, because we are capable of distinguishing our moral values from other values of ours which aren't moral. People sometimes do things that they think are immoral, because sometimes people care about other things more than morality.

    In much the same way in which chocolate isn't whatever flavor of ice cream you like the best, morality isn't whatever you want the most. And an alien being with a good grasp of human language wouldn't call some sort of gross (to us) puke-flavored ice cream or whatever "chocolate" just because that's its favorite. Similarly, an alien being with values fundamentally opposed to morality would have no reason to call its values "moral" unless it was trying to deceive.

    Try putting yourself in the alien's place here. Suppose that you come across a planet filled with beings who place great value on a bunch of interrelated things that are, as a rule, loathsome to you. Do you then adopt their word for those things as a term for a bunch of stuff that you value? Does that seem like a reasonable thing to do?
    Semantics wasn't my point here. We can talk about moral values and other values if you want, and aliens can use a completely different term, but it makes no difference to the argument.

    ... Do you think that this somehow constitutes an argument against something that I said? If so, what, and how?
    I don't know, does it? What does that matter?

    You win the most in the Prisoner's Dilemma if you defect and your opponent cooperates. Are defectors whose opponents cooperate the most rational? Is playing the lottery highly rational if you manage to pick the winning numbers? Rationality is related to winning, but the two are not equivalent.
    Rationality is systematized winning. Playing the lottery is rational if you can pick numbers in a way that statistically wins you money. Defecting in Prisoner's Dilemma is rational if you know your opponent will cooperate. In reality, assuming a rational opponent, you cannot make your opponent cooperate except by precommitting to cooperate, in which case cooperation has the highest expected reward and is therefore rational.
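    The expected-reward arithmetic, spelled out with illustrative numbers (my own, matching the standard payoff ordering):

    Code:
    # Expected payoff against an opponent who cooperates with probability p.
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

    def expected(move, p):
        return p * R + (1 - p) * S if move == 'C' else p * T + (1 - p) * P

    # If p is fixed independently of your own choice, defection wins at every p...
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        assert expected('D', p) > expected('C', p)

    # ...so the case for cooperating rests on p NOT being independent of your
    # decision: a credible precommitment to cooperate raises p itself.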

    That's true if they have mutual knowledge of their mutual rationality, but that isn't necessarily the case now is it? Technically, it's a matter of the probability that the other agent's decision will mirror your own, for any of numerous possible reasons. (In practice, it's not rational to assign a proposition a probability of 0 or 1.)
    Yes. 1 and 0 are not probabilities - they don't exist. (With ~0.9999 certainty, at any rate) But you make your decision through modeling your opponent the best you can, so the technicality doesn't matter for the purposes of the argument.

    But defining "rationality" as acting in accordance with your own values doesn't seem to have been the basis for something else that you were getting at but rather to have been your central point (in the latter half of your response to me). What point other than the meaning of the word "rational" were you addressing? Furthermore, what do you think that the preceding exchange is about if not the meanings of the words "right", "good", "moral", "ethical", "wrong", "evil", etc.?
    What are you asking here? I attempted to define rationality in order to mention that rational agents don't systematically lose in Prisoner's Dilemma and give some basis why, but that was not related in any way I can discern to other arguments regarding morality or whatnot.

    Purely philosophical statements are basically those statements whose truth values are purely functions of the meanings of the words they contain. "Jill's house is green", for example, is not a philosophical statement, but "Something is knowledge if it is a true justified belief" is. Philosophical questions by their very nature are semantic questions. "What is knowledge?", "What is goodness?", "What is rationality?", "What is beauty?", etc. directly equate to "What does 'knowledge' mean?", "What does 'goodness' mean?", "What does 'rationality' mean?", "What does 'beauty' mean?", etc. Each of the big questions of philosophy is really just the question of what the heck we're even talking about when we use a particular set of interrelated words.

    Didn't Wittgenstein say something to the effect that the only function of philosophy is the clarification of language?
    If all philosophical questions are only about meanings of words, then there is nothing to philosophy as all words only mean what we want them to mean, and nothing more. In general, philosophical questions tend to be questions about things we find mysterious due to our own lack of knowledge, and then over time as our knowledge increases, more and more philosophical questions are dissolved or answered. That doesn't mean they're talking about nothing, but it does mean that philosophy in general is pretty useless. Wittgenstein has said plenty of things, which might or might not be true.

    When engaged in a discussion like this, it's pretty ridiculous to accuse someone else of engaging in mere semantic nitpicking as if you are not also disputing the meanings of words. What various words mean is basically ultimately all that's at issue. But something being pretty ridiculous has never been a barrier to plenty of people doing it. :P
    When engaging in a debate or argument regarding some matter, the correct way to proceed is to take the opponent's argument and do your very best to interpret it in the most damaging possible way to your own position, to do your very best to break yourself upon it, and then see if you can still stand. If someone counters some argument of mine by claiming they use a different definition of some word, it equates to them saying they do not understand what I mean, and therefore I must clarify my position. This is not yet debating the position either way, it is simply trying to clearly communicate it to the other person so they can begin to attack it as best they can, or accept it as correct. That's all semantics is good for - enabling mutual understanding.

    I mentioned the idea that a group of rational agents can act towards their mutual detriment in some situations. I didn't claim that it's so, did I? But if you really do want to argue that that's impossible then my understanding is that you've got your work cut out for you. What the Prisoner's Dilemma purports to show, to my understanding, is that individual rationality doesn't add up to collective rationality. Basically, the idea that a choice is a bad one or a good one is based on contrast to "what would have happened in the case that another choice had been made", so the whole concept hinges on the evaluation of counterfactuals, which is actually a fairly complicated philosophical question I think.

    But the sort of reasoning that you mention is actually fairly applicable here, because the fates of human beings may one day be governed by minds qualitatively more sophisticated than ours. And for all of their differences from us, they may be subject to many of the same considerations as we are regarding the question of how to treat one's inferiors, notably including the consideration that their superiors may be subject to many of the same considerations as they are.

    It's possible to pick out some trait that you have and to devalue everyone who lacks it. And post-humans or alien superbeings or the extradimensional entities simulating our universe or whatever could totally devalue human beings for lacking mental capabilities of theirs that we can't even conceive of, just as various human beings devalue those who lack intelligence, self-awareness, metacognition, moral agency, abstractions, hypotheses, or even just whatever skin color or religion or nationality or whatever the hell they favor. Any damn thing, really! (They probably wouldn't claim that whatever they chose was the only possible basis on which to value minds, though. One imagines that such hyper-advanced superbeings would be far more intellectually honest, self-aware, etc. than that.)

    And obviously by raising the possibility of being in the same situation, I'm trying to get human readers to empathize with other creatures, and to raise considerations of the Golden Rule, along with ethical principles like it being wrong for someone to have bad things happen to them because of things that they have no control over, it still being evil to do evil things to those in your outgroup, etc. But also, even if you don't give a rat's ass about any of that, there seems like a non-trivial possibility that minds who have to decide how well to treat you just may be similar enough to your mind in the relevant ways that their decisions mirror your decision of how well to treat those that you have power over. It certainly seems more likely that other minds in general will tend to be like yours than opposite yours. In addition to specific possibilities like post-humans inheriting human values, interest in simulating minds like one's own, etc., some anthropic reasoning also seems applicable.

    In many cases your decision of how to treat your inferiors will have no obvious direct causal impact on your superiors' decision of how to treat you, but as in Newcomb's problem, that doesn't mean that it's rational to disregard the relationship between the two. And all of the above is before we even get into the question of what form punishing defectors might conceivably take...

    Of course, if you decide that the probability of a superior's decision mirroring yours is very small, then the logic sounds rather similar to Pascal's Wager. Including the idea that the tiny chance of a huge payoff isn't necessarily the only reason to take the wager.


    Does anyone seriously think that any principle that could be reasonably described as "moral" somehow recommends disregarding the welfare of other sentient beings? Because it seems to me like basically all of them recommend treating other sentient beings well. Kindness? Obviously. Justice? Of course. The ethic of reciprocity? Well, yeah! Maximizing happiness? You bet. Enlightened self-interest? Um... It actually kinda seems like it.

    This should not come as a surprise. One of the reasons that ethical principles are formulated in so many different ways is that there are a lot of surprisingly different formulas that produce surprisingly similar advice. Or not so surprising, really, since prohibiting murder, theft, assault, etc. is basically what they're designed to do. And sometimes, nearly all of the various generalizations of such standard ethical rules will recommend against doing something that the standard rules themselves do not prohibit. That indicates that you shouldn't do that thing! For, like, almost every value of "should"!

    Only when different ethical guidelines make conflicting endorsements does it make a difference which principles are fundamental. In a case where e.g. happiness and preference satisfaction are both maximized by the same course of action, the question of whether one of them should be regarded only as a means to the other isn't practically relevant.
    Agree on everything, aside from the very first part. In Prisoner's Dilemma, a group of rational actors will cooperate with each other, and achieve the highest total payoff for all of them combined. Therefore, it would appear to me to show that individual rationality adds up to collective rationality, in this case.

    It is easy to try and craft a situation where individual rationality might not lead to maximized collective payoff - for example, a situation where one person in a group of five is given a choice to take an action that gives him one utilon and costs everyone else one utilon, or to not take it, stipulating no additional costs or benefits whatsoever, socially or otherwise, and also stipulating that this is the only time this chance will ever happen to anyone.

    However, this is where timeless decision theory begins to get complicated. If we assume that each actor in the group has a very good ability to model each other's decisions, it would be rational for everyone to truly precommit to not taking that action if offered, as there's otherwise a four-in-five chance of losing a utilon. If one were capable of fooling everyone else into believing one was so precommitted, yet still being able to change one's decision once given the choice, that would be the better thing to do, however.
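    The arithmetic behind that four-in-five chance, spelled out (my own rendering of the example above):

    Code:
    # Five people; one of them, chosen at random, gets the offer: +1 utilon
    # for themselves, -1 utilon for each of the other four.
    group = 5

    # Ex ante, before anyone knows who will be offered the choice:
    ev_if_everyone_takes = (1 / group) * (+1) + ((group - 1) / group) * (-1)
    ev_if_all_precommit  = 0.0  # everyone refuses, so nothing ever happens

    print(ev_if_everyone_takes)  # -0.6: each person expects to lose
    assert ev_if_all_precommit > ev_if_everyone_takes  # precommitting wins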

  9. #159
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Are we evil?

    Quote Originally Posted by Murska View Post
    Semantics wasn't my point here.
    I'm honestly not sure how it is that you think that. I made statements about what some words mean and you responded with your own contradictory claims about what those words mean. We were totes arguing semantics, and now you seem to be indicating that you were doing so without realizing it, even though it was really quite obvious.

    Quote Originally Posted by Murska View Post
    Saying that an action is moral, ethical or right means that it is endorsed by our value system
    I bolded the relevant keywords to help clarify what I'm talking about.

    Quote Originally Posted by Murska View Post
    We can talk about moral values and other values if you want, and aliens can use a completely different term, but it makes no difference to the argument.
    What argument are you talking about here?

    I don't know, does it?
    It doesn't seem to, no. I don't seem to have claimed or implied that any position entails that the concepts right and wrong are meaningless or given meaning by something beyond ourselves.

    What does that matter?
    I wanted to know if you mistakenly read something into something that I said, or if I mistakenly said something that I didn't mean to, or if I said something that I lost track of somehow. I thought that perhaps you hadn't intentionally segued away from contradicting me, so I wanted to check whether I had missed something. Ha ha, maybe that was sort of paranoid of me?

    What are you asking here?
    I'm asking the questions that you quoted. Was that supposed to be a trick question or something?

    Okay, look, let me break down one of your arguments:

    [1] Rationality is, by definition, acting towards your own values.
    [2] Detriment is, by definition, something that goes against your values.
    [3] Rational agents do not act to their own detriment.

    No one is disputing that [3] follows from [1] and [2]. It's obvious that [3] follows from [1] and [2] (given a few other definitional assumptions that we're perfectly willing to grant). The dispute is over [1] and [2], which are claims about what words mean.

    So, if you were only giving definitions to clarify your argument, then what position were you arguing for, if not for definitions you favor? See what I mean?

    Are you claiming that you thought that people contradicting [3] already agreed with [1] and [2], but not that [3] follows from [1] and [2]? (And if you actually did think that, then, good grief, why?)

    I attempted to define rationality in order to mention that rational agents don't systematically lose in Prisoner's Dilemma and give some basis why, but that was not related in any way I can discern to other arguments regarding morality or whatnot.
    The issue of what the words "rational", "rationality", "irrational", "irrationality", etc. mean is similar to the issue of what the words "right", "good", "moral", "ethical", "wrong", "evil", etc. mean in that each is an issue of what a set of interrelated words mean.

    If all philosophical questions are only about meanings of words, then there is nothing to philosophy as all words only mean what we want them to mean, and nothing more.
    To operate under that assumption is to use words incorrectly.

    The above statement is clearly a valid one, yes? It's not as though I could be using the word "incorrectly" incorrectly if there's no such thing as using a word incorrectly. If a word's meaning is chosen by the person using it, then I can make "incorrectly" mean whatever I want it to. And, quite frankly, I don't think that it's hard to pick a meaning for "meaning" that's a lot more conducive to productive conversation than what you're suggesting.

    It may be the case that, in a technical philosophical sense, words don't actually mean things. But verbal communication is pretty clearly based on at least pretending that they do, and even on different people pretending that the same words mean the same things, or at least close to the same things. But in many cases, it is not a trivial matter for different people to manage to pretend that a word means close to the same thing. Coordinating meaning-pretendings relies heavily on communication, and so can be significantly impeded by the fact that we do not speak the same language, which is exactly the problem we're trying to correct!

    If someone counters some argument of mine by claiming they use a different definition of some word, it equates to them saying they do not understand what I mean, and therefore I must clarify my position. This is not yet debating the position either way
    That assumes that something other than the definitions of words is at issue, which is a poor assumption to make, because people appear to be really bad at recognizing when disputes are fundamentally semantic in nature.

    If a tree falls in a forest and no one is around, does it make a sound? The popular assumption seems to be that that question is about whether events happen without being observed. WRONG, BUCKO! The hypothetical -- "a tree falls in the forest and no one is around" -- assumes that unobserved events occur. At issue, rather, is what the word "sound" means. Does it cover certain sensory perceptions, the external phenomena that can cause those perceptions, phenomena that do cause such perceptions, or what? And, call me crazy if you want, but I think that it's easier for two people to resolve that question if they realize that that's what they're arguing about. Heck, if they both agree that "words don't really mean things, people just think they do", then they may consider the matter settled right there, on the grounds that there is no substantive disagreement between them. But that requires them to first realize that they're arguing definitions!

    I once saw the situation described as "Something like 90% of philosophical arguments boil down to semantics, with the debaters acknowledging this in maybe 10% of cases". Or something along those lines. That may be an overestimate, but my experience is that this is something that crops up fairly often. I have seen e.g. two people argue back and forth about what extremism is, both seemingly unaware that they were debating the definition of the word "extremism", despite the fact that plainly neither was using the other's definition and despite the fact that their "arguments" plainly consisted entirely of stating their respective definitions. Once again, I say: This appears to be something that people are really bad at recognizing that they're doing.

    As such, acknowledging this is a powerful tool for understanding why you and someone else are expressing disagreement with each other: Just try to work out how your disagreement boils down to semantics. A handy rule of thumb is that if it's not clear how any observation would constitute evidence for or against either of your positions, then you are very probably engaged in a purely philosophical/semantic dispute.

    That's all semantics is good for - enabling mutual understanding.
    Isn't combining their individual understandings of things into a superior shared understanding ideally the goal of the participants in a debate? Or at least improving their individual understanding in ways that probably involve bringing those understandings closer to each other. I'm not sure why you'd consider that to be preliminary.

    Indeed, needing to clarify your position to someone else may force you to first clarify it to yourself, and maybe to acknowledge that it wasn't as clear as you thought it was.

    Quote Originally Posted by Murska View Post
    But you make your decision through modeling your opponent the best you can, so the technicality doesn't matter for the purposes of the argument.
    No, there are cases where modeling your opponent the best you can does not mean assuming that your opponent is rational.

    Agree on everything, aside from the very first part.
    Whoops, I should have said "What the argument that originally introduced the Prisoner's Dilemma purports to show", or something like that. The phrase "the Prisoner's Dilemma" really just refers to the scenario under consideration, or game-theoretical equivalents or near-equivalents, I think.

    In Prisoner's Dilemma, a group of rational actors will cooperate with each other, and achieve the highest total payoff for all of them combined. Therefore, it would appear to me to show that individual rationality adds up to collective rationality, in this case.
    Consider the following hypothetical:

    A group of selfish people are gathered up by researchers and paired off in sequence to participate in a PD-style scenario -- one with the same sort of payoff matrix, based on the binary choice of each participant to "cooperate" or "defect". These people are not allowed to communicate with each other, and each one plays the game only once. All of the participants who have yet to play the game watch each pair of participants who play before them, and observe that both players defect each time.

    After many of these "matches", eventually two participants are paired together, who, as it happens, are both rational agents. Do they choose to cooperate or to defect?

    This is a contrived situation, but any "pure" Prisoner's Dilemma style scenario is hella contrived. That doesn't mean that considering them can't inform our understanding of a broader class of interactions. And the hypothetical described above has implications for cases where the person you're dealing with probably has no technical knowledge of game theory or decision theory, much less an understanding of superrationality, timeless decision theory, and the like. Aren't most of our interactions in real life with people who aren't ideally rational? It's rare to be faced with a case where someone else is likely to make the same choice as you for the same reasons... even in cases where you and another person have to make the same decision! Sure, a rational actor effectively chooses for all rational actors faced with the same decision (with corresponding priorities, information, etc.), but that's usually still a minority in practice.
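    Here's a toy model of that situation (mine alone; illustrative numbers, not a claim about anyone's preferred decision theory): suppose each player assigns probability q to "my partner is relevantly like me and will pick whatever I pick", and probability 1 - q to "my partner will simply defect, like everyone observed so far".

    Code:
    R, P, S = 3, 1, 0  # mutual reward, mutual punishment, sucker's payoff

    def best_move(q):
        ev_cooperate = q * R + (1 - q) * S  # a mirror cooperates back; the rest defect on you
        ev_defect = P                       # mirror-defect and habitual-defect both yield P
        return 'C' if ev_cooperate > ev_defect else 'D'

    # With these numbers cooperating is worthwhile only while q > 1/3, so a
    # long run of observed defections (driving q down) tips even rational
    # players into defecting on each other.
    print(best_move(0.5), best_move(0.2))  # C D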

    Quote Originally Posted by veti View Post
    It's perfectly rational, if you know that the other player is identical to yourself. Imagine that a computer, sufficiently advanced to reason the answer out for itself, knows that it is playing against another computer that is a perfect clone of itself, and that it's given precisely the same starting information. Then it can know, with certainty, that the clone will make the same choice as it does.
    What is knowledge, bro? If you guess that the other player is identical to you and happen to be right, do you know that the two of you are identical? Specifying that justified beliefs are knowledge just moves the ambiguity from "knowledge" to "justification"; what justifies a belief? Your perceptions could be an elaborate illusion specifically designed to deceive you; whether or not that's likely, it certainly seems logically possible. If sensory experiences can justify a belief despite being compatible with that belief being false, and all justified beliefs are knowledge, then you can know things that aren't even true; does that sound right? On the other hand, saying that knowledge equals true justified belief makes "knowing" things partly a matter of luck, and avoiding that was frankly why we didn't just go with the simpler and much clearer definition of knowledge as true belief in the first place! Like, whoa.

    At this point the concept of subjective probability cuts in and says "Hey, bro, belief isn't all or nothing; it's a matter of degree, you dig? Like, you can be pretty confident that a store will be open on Tuesday, but way super confident that the sun will rise tomorrow, and even more confident that one plus one equals two. You can specify how confident you are about something by estimating what fraction of the time things that you're that confident about will be true. 99% of the stuff that you're '99% confident' about should be true. If less than 99% of that stuff is true then you're overconfident, bro."
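    What that calibration talk means operationally (a sketch of my own):

    Code:
    import random

    def calibration(claims):
        """claims: list of (stated_confidence, turned_out_true) pairs."""
        tally = {}
        for conf, true in claims:
            hits, total = tally.get(conf, (0, 0))
            tally[conf] = (hits + int(true), total + 1)
        return {conf: hits / total for conf, (hits, total) in tally.items()}

    random.seed(0)
    # A simulated overconfident forecaster: says "99% sure" but is right only 90% of the time.
    claims = [(0.99, random.random() < 0.90) for _ in range(10_000)]
    print(calibration(claims))  # roughly {0.99: 0.9} - overconfident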

    Due to the problem of induction, you can't rationally believe things about your surroundings -- like that someone is identical to you -- with 100% confidence. Now, you might say to that "Aw, man, that's too bad, but at least I can be rationally certain about tautologies, right?" Actually... not so much. See, in practice, human beings are not perfect reasoners, and make mistakes. And if you think that there's only a one in a million chance that 23 isn't a prime number, then you're probably overconfident. Probably more than one of every million things you're that confident in are false. (But I'm only claiming a confidence of over 50% on that, mind you. "Probably" just means "more probably than not".)

    "Well, okay, but what about some sort of superintelligent machine that reasons completely flawlessly?", you may ask. But such a machine could be made to malfunction by modifying it in some way. So how could such a machine be justified in 100% certainty in its own perfect functioning? How could it "know" that it's functioning perfectly in a sense of "knowledge" that doesn't involve being at least a little lucky? Basically, we can look at a subprocess that renders final judgement on something and ask how it performs in the event that supporting subprocesses are compromised. Because that is a thing that could conceivably happen!

    (And that sort of consideration is important to bear in mind if you're actually trying to design a superintelligent machine. Assuming that anything will function perfectly is irresponsible. You want to have failsafes to cope with things like less than perfectly reliable hardware and the fact that the individual modules have been coded by fallible human beings.)

    So, for a rational being, as Murska put it, 1 and 0 are not probabilities (with ~0.9999 certainty).

    The parenthetical part is because our understanding of rationality is also the product of fallible human beings and thus subject to uncertainty; get it? It's funny because it's true, as they say.

    Quote Originally Posted by veti View Post
    I'm thinking of the 17th-through-19th-century European explorers, who went forth and discovered new peoples and cultures, and attached words like "religion" and "gods" and "laws" to what they found, even though their own religion and laws said that these were inappropriate. (It was a considerable intellectual jump for some of them, but in the end the doubters were firmly overruled by the cultural relativists among them.) If the aliens have a concept of "rightness", a "should" imperative that causes them to regard these repulsive things as valuable and important even when they don't, at a personal level, find them remotely desirable, then it's not so far-fetched to associate them with a concept that we regard in the same light.
    I have no problem with acknowledging alien ideals and social mores as ideals and social mores. But if they aren't our ideals and social mores, then they're not the same thing. The same type of thing, sure, but not the same thing.

    Even if you think that calling something "right" or "good" is just endorsing it and has no other meaning, it makes no sense to use those words to describe things that you disapprove of but others approve of, because you don't want to endorse them yourself!

    "How many legs does a dog have if you call the tail a leg? Four. Calling a tail a leg doesn't make it a leg."
    -- Abraham Lincoln

    See, if the word "legs" included tails, then the statement "A dog has five legs" would be true, because it would have a different meaning due to the word "legs" having a different meaning. But a dog would still only have four legs. We are not having this exchange within the hypothetical world under consideration, you see, so when I use the word "legs" in a statement to you -- e.g. "But a dog would still only have four legs" -- it means what we use it to mean. Because meaning is a matter of convention and/or intention (depending on what one means by "meaning", as dangerously recursive as that is), not in spite of it!

    Now, if some people had a word for legs and/or tails, but not a word for just legs, then it would be fairly reasonable to use "legs" as an approximate translation of that word most of the time, depending on context, even though the meaning wouldn't be exactly the same. It wouldn't be reasonable to translate a word for tails as "legs", though. Not even if the people using it felt the same way about tails as we do about legs. (Wow, what a weird hypothetical.) IMO.

    Morality isn't a set of values, it's a way of regarding those values. We bundle a set of concepts together and attach the label "good" to them. What, precisely, gets included in that bundle is secondary: it's the label that makes it "moral". If we see other people attaching an equivalent label to a very different bundle, then there's nothing dishonest about using our word "morality" to describe their attitude.
    That pretty much summarizes what I have been disagreeing with, yes.

    Partly this is a matter of how the word "morality" is used. It seems to me that there's enough difference in usage that the most honest thing to do is to avoid using "morality" in many cases, including the one under consideration. It's a question of what to group together under what label. Doesn't mean that the groupings aren't all useful regardless. Consider that color television sets were still called "television sets" despite being different from those produced earlier, but television sets were not called "radios". It can be useful to consider the group of all television sets and all radios; it can be useful to distinguish between television sets and radios; and it can be useful to distinguish between black and white and color television sets. One can agree that all of those groupings can be useful regardless of whether one thinks that television sets should have been called "radios", or that color television sets should have gotten their own name, or whatever.

    But I also get the impression that maybe people having ideals is more important to you than what those ideals are. Maybe not, I'm not sure. But if so: WOW ****, that is some scary as hell Lawful Neutral ****, there. Like, wow.
    Last edited by Devils_Advocate; 2015-05-29 at 04:59 PM.

  10. #160
    Troll in the Playground
     
    Murska's Avatar

    Join Date
    Jul 2007
    Location
    Whose eye is that eye?
    Gender
    Male

    Default Re: Are we evil?

    Sorry, that's too much text for me to have time to get involved. I'll cherry pick some specific things to respond to.

    I don't find arguments about semantics very interesting in most cases, which is why I try to define and redefine things to get around the words to the interesting part of the issue - and that, in turn, is why it's so difficult to make progress if the other person is perfectly happy to argue semantics. So the following should elaborate on this point rather well:

    Okay, look, let me break down one of your arguments:

    [1] Rationality is, by definition, acting towards your own values.
    [2] Detriment is, by definition, something that goes against your values.
    [3] Rational agents do not act to their own detriment.

    No one is disputing that [3] follows from [1] and [2]. It's obvious that [3] follows from [1] and [2] (given a few other definitional assumptions that we're perfectly willing to grant). The dispute is over [1] and [2], which are claims about what words mean.
    It's obvious to me too that 3 follows from 1 and 2. And I have no interest in debating what the words mean, I am simply attempting to use them as defined in 1 and 2. All I'm trying to say here is that 3 follows from 1 and 2, which should be obvious and thus all the people who seem to dispute it confuse me. If they're actually trying to dispute 1 and 2, which are definitions of words in order to be able to state 3 in the first place, then that's not what I'm here for and it seems pointless given that we can define words to mean anything. In this case, it seems to me that we agree on 3, which resolves everything neatly. All the time and effort spent on discussing 1 and 2 was entirely worthless.

    What I want to find is situations where, even when the person I'm debating with and I understand each other's words, we disagree on some fact. That's when we can discuss our points of view and attempt to find out which one of us is right, if either. All the work getting to the point where we understand each other is just an unfortunate necessity, made harder if people wilfully misinterpret things in order to have something to disagree with each other about.

    After many of these "matches", eventually two participants are paired together, who, as it happens, are both rational agents. Do they choose to cooperate or to defect?
    They are likely to defect in this scenario because each has an incorrect model of the other as an irrational actor, based on having observed many games in which all participants were irrational. If they had been paired in the first game, they would definitely cooperate, and I would expect the probability of mutual cooperation to drop as a function of games watched.
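
    One way to make that expectation concrete is a toy simulation; the sketch below is in Python, and every number in it (prior, discount factor, threshold) is an invented assumption, not anything established in the thread. Each agent starts with a prior credence that its opponent will cooperate, discounts that credence for each observed game of mutual defection, and cooperates only while the credence stays above a threshold.

    # Toy model; all numbers are illustrative assumptions.
    def credence(prior: float, games_watched: int, discount: float = 0.9) -> float:
        """Credence that the opponent will cooperate, after watching
        `games_watched` games in which everyone defected."""
        return prior * discount ** games_watched

    def cooperates(c: float, threshold: float = 0.5) -> bool:
        """The agent cooperates only if it is confident enough that the
        opponent will cooperate too."""
        return c > threshold

    prior = 0.95  # assumed confidence when paired in the very first game
    for watched in (0, 5, 10, 20):
        c = credence(prior, watched)
        # Both agents are identical here, so one decision stands for both.
        print(f"games watched: {watched:2d}  credence: {c:.3f}  "
              f"mutual cooperation: {cooperates(c)}")

    On these made-up numbers the pair cooperates when matched early (credence 0.950 and 0.561 after 0 and 5 games) and defects once enough all-defection games have been watched (0.331 after 10, 0.115 after 20), which is the shape of the expectation stated above.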

  11. - Top - End - #161
    Titan in the Playground
     
    TuggyNE's Avatar

    Join Date
    Jun 2011
    Gender
    Male

    Default Re: Are we evil?

    Quote Originally Posted by Murska View Post
    It's obvious to me too that [3] follows from [1] and [2]. And I have no interest in debating what the words mean; I am simply using them as defined in [1] and [2]. All I'm trying to say here is that [3] follows from [1] and [2], which should be obvious, so all the people who seem to dispute it confuse me. If they're actually disputing [1] and [2], which are just definitions adopted so that [3] can be stated at all, then that's not what I'm here for, and it seems pointless given that we can define words to mean anything. In this case, it seems to me that we agree on [3], which resolves everything neatly. All the time and effort spent discussing [1] and [2] was entirely wasted.
    Your argument appears to be based on picking words that sound good (after all, who doesn't want to be considered rational?), adjusting their definitions so that the conclusions you desire fall out as a syllogism, and then refusing to discuss whether those hand-picked definitions are suitable and match the connotations and associations nearly everyone attaches to those words, on the grounds that words can be redefined to whatever people want! In the face of all the scholarly effort in the world that goes into determining actual meanings from actual conventional usage, this looks disingenuous in the extreme.

    The best part, of course, is that logically this doesn't go anywhere. You've defined "rationality" such that (apparently) all agents are rational, which raises the question of what "detriment" means. A hypothetical class of actions that no one would ever take, varying per person/agent? That's not especially useful in a practical sense. And since your definition doesn't match what any of your opponents in debate would actually consider "detriment", you can't logically apply it to their arguments to rebut their points. You've cut yourself off from meaningful debate.

  12. - Top - End - #162
    Troll in the Playground
     
    Murska's Avatar

    Join Date
    Jul 2007
    Location
    Whose eye is that eye?
    Gender
    Male

    Default Re: Are we evil?

    Quote Originally Posted by TuggyNE View Post
    Your argument appears to be based on picking words that sound good (after all, who doesn't want to be considered rational?), adjusting their definitions so that the conclusions you desire fall out as a syllogism, and then refusing to discuss whether those hand-picked definitions are suitable and match the connotations and associations nearly everyone attaches to those words, on the grounds that words can be redefined to whatever people want! In the face of all the scholarly effort in the world that goes into determining actual meanings from actual conventional usage, this looks disingenuous in the extreme.

    The best part, of course, is that logically this doesn't go anywhere. You've defined "rationality" such that (apparently) all agents are rational, which raises the question of what "detriment" means. A hypothetical class of actions that no one would ever take, varying per person/agent? That's not especially useful in a practical sense. And since your definition doesn't match what any of your opponents in debate would actually consider "detriment", you can't logically apply it to their arguments to rebut their points. You've cut yourself off from meaningful debate.
    I apologize. Let me reword my argument once again.

    [1] Asreworg is, by definition, acting towards your own values.
    [2] Sirah is, by definition, something that goes against your values.
    [3] Asreworg agents do not act to their own sirah.

    No actual agent that I've ever observed is perfectly asreworg. People tend to take sirah actions all the time, for various reasons such as biased thinking, incomplete information, or simple accident.
    Last edited by Murska; 2015-05-30 at 05:41 PM.

  13. - Top - End - #163
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Are we evil?

    Murska, if that had instead been your reply to my post, I would have been perfectly justified in responding "What the hell are you even talking about?" You'd have had to also say something like "I think that asreworg agents are rational", or a reader would have to infer that you were implying as much, or something along those lines, in order to make sense of that stuff as doing anything but starting an entirely different discussion. At which point you'd still be arguing semantics, just indirectly.

    I think that Tuggy was on the right track with the idea that a suitable definition captures a word's connotations and associations. More specifically, I'd say the art of coming up with an appropriate formal definition is the art of making a vague concept less vague by formalizing what you're talking about. And that's a process that it's possible to screw up, inadvertently specifying something different from what you meant to specify.

    Quote Originally Posted by Murska View Post
    They are likely to defect in this scenario due to having an incorrect model of each other as being irrational actors, based on having observed a lot of games where all participants have been irrational.
    So... you agree that two selfish rational agents can in fact act towards their mutual detriment in some scenarios, which is to say, your definition-based argument was wrong, because your definition was wrong? Your stated definition of "rational" is more like the definition of "fortunate", or rather "fortunately-acting". You neglected the possibility that the option with the best expected outcome doesn't have the best actual outcome, like someone overlooking the possibility of untrue justified beliefs in defining "knowledge". You made a mistake and specified the wrong thing.
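
    The expected-versus-actual gap is easy to spell out with a minimal sketch; the payoffs below are invented for illustration. The gamble maximizes expected value, so taking it is the "rational" choice in the expected-outcome sense, yet on a particular draw it can still pay less than the safe option.

    import random

    def expected_value(lottery):
        """A lottery is a list of (probability, payoff) pairs."""
        return sum(p * v for p, v in lottery)

    gamble = [(0.5, 10.0), (0.5, 0.0)]  # invented payoffs; EV = 5.0
    safe = [(1.0, 4.0)]                 # invented payoffs; EV = 4.0

    print(expected_value(gamble), expected_value(safe))  # 5.0 4.0

    # The expected-value-maximizing choice can still turn out worse:
    random.seed(0)  # with this seed the gamble happens to pay nothing
    actual = 10.0 if random.random() < 0.5 else 0.0
    print(actual)   # 0.0, worse than the safe option's guaranteed 4.0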

    Rational thinking and rational behavior are thinking and behavior that are appropriate in a particular sort of way. In what way, exactly? That's the question that a definition of "rational" attempts to answer. And, since we are talking about a particular sort of appropriateness, not just any old sort, it's a question that can totally be answered incorrectly. Brushing off the art of correctly specifying what we're trying to talk about is like saying that Bayes' theorem isn't particularly useful. It may turn out, upon examination, that we're actually talking about several subtly different things that are very similar. But let's be clear, here: There are things that we aren't talking about. There may not be only one right answer, but that doesn't mean that there aren't plenty of wrong ones.

    Similarly -- to bring this back around to the main topic -- ethical values and ethical behavior are values and behavior that are appropriate in another particular sort of way. Or, again, maybe not exactly one specific sort of way. But certainly not in any sense whatsoever, and in very nearly one sense for many practical purposes. Pointing to the ambiguity of "evil" as though that even constitutes a rebuttal to ethical criticism is like treating the ambiguity of "sound" as a counterpoint to complaints that something is too loud. You need to show that different senses of a word are unequally applicable in order to show that the distinction is relevant to the discussion at hand. Otherwise you're just trying to shut down the discussion. It's remarkably disingenuous to suddenly act as though meaningful discourse requires infinite clarity when finite clarity is normally so unproblematic that it goes unnoticed.

    So that's my problem with arguments of that nature.

    If they had been paired in the first game, they would definitely cooperate, and I would expect the probability of mutual cooperation to drop as a function of games watched.
    How on Earth is that definite? What reason does either one have to suppose that it's anything more than highly unlikely that the other person will be making the same decision for the same reasons? What reason do you have for your apparent assumption that that's probable?

  14. - Top - End - #164
    Orc in the Playground
    Join Date
    Nov 2014
    Location
    Colorado

    Default Re: Are we evil?

    Mr Tumnus... You may not be evil. But you are most certainly a naughty faun.

  15. - Top - End - #165
    Orc in the Playground
    Join Date
    Jan 2015
    Location
    Deleted account

    Exclamation Re: Are we evil?

    Answering straight from reading the OP: Yes.

    Let me explain.

    Most of us try our very best to be good, and most of us claim to be 'good people', but that does little to change the fact that we have an inherently evil nature. It's mostly nurture and personal decisions that drive us to try to be 'good', well-mannered people, but when you examine your actions and motivations, you'll realize that you have a selfish streak in you, no matter how generous you are.

    I read a statement somewhere that rang very true for me: Even the pope is capable of murder. Of course he is, he is only human. It's by choice, willpower and by nurture that he doesn't go around killing people. That we know of.

    Jokes aside, yes, we are inherently evil, but we choose to be good to try and overcome that evil nature.

    Please understand that evil doesn't necessarily mean 'trying to take over the world' or some such thing. As I've seen it, most evil is based in selfishness.

  16. - Top - End - #166
    Titan in the Playground
    Join Date
    Sep 2014

    Default Re: Are we evil?

    Quote Originally Posted by Flaming Eagle View Post
    I read a statement somewhere that rang very true for me: Even the pope is capable of murder. Of course he is, he is only human. It's by choice, willpower and by nurture that he doesn't go around killing people. That we know of.
    Going to level with you here, man. I don't make a choice not to kill people, and I don't have to suppress a desire for murder by sheer willpower. And while, sure, my surrounding culture taught me that killing was wrong, I dare say I'd know that without having to be taught. I can tell you that with fairly good certainty because I am -not- inherently Evil. I am an inherently empathetic being, and I can take a step back from myself and think, "Would I like to be killed?" The answer, of course, is no, I don't want to be killed. If I don't want to be killed, I can assume plenty of other people don't want to be killed either. Thus, without nurture or willpower or choice, I can understand that killing another person is wrong. I won't kill another person because it's the wrong thing to do. I don't steal for the same reasons. I don't do a lot of things for the same reasons. And I dare say that we as a species have been able to do that since we stepped off the plains, or else we wouldn't have a global civilization or an internet to discuss the matter over.
    Last edited by Razade; 2015-07-22 at 05:47 AM.

  17. - Top - End - #167
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    While I generally fall along the lines of an "inherent" good impulse, primarily along Mencian lines, I have to say I think a corollary to the rule about real physics in D&D and kittens is that every time somebody tries to argue something is "inherent" to human nature, a crazy king locks a baby in a closet. Less facetiously, I think we tend to significantly underestimate how difficult it is to divorce ourselves from our socialization.

  18. - Top - End - #168
    Orc in the Playground
     
    RangerGuy

    Join Date
    May 2007

    Default Re: Are we evil?

    Quote Originally Posted by Mr Tumnus View Post
    So some background, I was watching this new fall anime Parasyte -the maxim- where these parasites come to earth, bond with existing lifeforms (usually humans), take over their bodies and then proceed to eat people. When the main character asks one of these parasites "You guys are monsters, why are you doing this?" the parasite responds with "We're not monsters, we're eating humans for food." At this point there have been about 80 of these strange murders caused by these things. The parasite then asks "Aren't humans really the monsters? How many millions of things do you kill each year and eat?" Its this statement that prompted this post.

    Picture a world where these ravenous creatures existed that enslaved and consumed the other, less intelligent creatures of that world. So great was their hunger that entire species went extinct in an attempt to satiate them. They forced the ones that didn't die out to mate in order to produce more food. They ate creatures of every gender and age, young, old, the strong, the weak and even the unborn.

    Now realize that thats humans. Is there any horror story that can compare to what humans do on a daily basis? Don't get me wrong, I'm not advocating a vegan lifestyle, meat tastes too good. Its just that this parasite thing kind of had a point, we justify doing what we do to animals because we're at the top of the food chain. If we found out there was something else above us, can we really complain if they do the same to us?
    Yes. We're a sapient species. There are good arguments that we shouldn't eat chimpanzees or dolphins. Other than that, trying to treat animal species as humans for ethical purposes is the Stolen Concept fallacy, and mainly the province of people who want to aggrandize themselves by pretending they have some great moral truth in objecting to eating meat. The animals eaten would not know any different and the universe would not care if we stopped.

  19. - Top - End - #169
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    Quote Originally Posted by Diamondeye View Post
    Yes. We're a sapient species. There are good arguments that we shouldn't eat chimpanzees or dolphins. Other than that, trying to treat animal species as humans for ethical purposes is the Stolen Concept fallacy,
    Huh? No it's not. The Stolen Concept fallacy refers to an argument that requires the validity of the very point it is trying to disprove. Treating humans and other animal species equivalently for ethical purposes does nothing of the kind. With a little sleight of hand and control over the language of the debate, it's pretty easy to make it seem like it does, but that's the sort of parlor trick that might work in some intro philosophy course at the world's dingiest diploma mill, and even that's a "might."

    Quote Originally Posted by Diamondeye View Post
    The animals eaten would not know any different
    I don't know what you mean by this. In what sense would they not? Virtually all animals can demonstrably feel pain, most can be observed to feel fear, and many are generally thought to mourn. While they may not understand the concept of death in a philosophical sense, they would certainly "notice" the cessation of their being to the same extent as anything else. Even if they couldn't consciously anticipate being killed and eaten, they'd know whether or not they were presently being killed and eaten, or kept in horrific conditions, &c.
    If you mean an animal won't know the difference between someone who will and will not eat it, this is also at least partly false. When you have a new pet bird, for example, you show it your profile while earning its trust; a bird will recognize a potential predator when it sees that you have two eyes on the front of your head. After the bird has socialized to you and learned you're not a threat, however, it will not treat you as a predator despite your eye configuration. While its guess may turn out to be inaccurate, it will distinguish between individual members of a species when determining threats. Obviously, this is based on experience and not some intrinsic knowledge, but "animals can't magically sense vegetarians" is a pretty silly counter-argument.

    Quote Originally Posted by Diamondeye View Post
    the universe would not care if we stopped.
    This is an even sillier counter-argument. The universe is an abstract concept, not a conscious being; it doesn't care about anything.

  20. - Top - End - #170
    Pixie in the Playground
    Join Date
    Sep 2013
    Location
    USA
    Gender
    Male

    Default Re: Are we evil?

    Think of the planet Earth as a living object and humans as a virus feeding off its host in whichever way it can.
    Last edited by ef87; 2015-07-29 at 05:54 PM.

  21. - Top - End - #171
    Orc in the Playground
     
    Shamash's Avatar

    Join Date
    Aug 2014

    Default Re: Are we evil?

    Quote Originally Posted by ef87 View Post
    Think of the planet Earth as a living object and humans as a virus feeding off its host and whichever way it can.
    I don't get why people act as if humans are aliens and not part of the planet.

    Nature created us! We are part of it, and we have every right to be here.
    Last edited by Shamash; 2015-08-01 at 07:08 PM.

  22. - Top - End - #172
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    I think you're misreading the analogy; "nature" also created viruses, and they are part of nature and have a right to be there. They are nonetheless detrimental, generally speaking, to the health of their host.

  23. - Top - End - #173
    Titan in the Playground
    Join Date
    Sep 2014

    Default Re: Are we evil?

    Quote Originally Posted by Zrak View Post
    I think you're misreading the analogy; "nature" also created viruses, and they are part of nature and have a right to be there. They are nonetheless detrimental, generally speaking, to the health of their host.
    It's a bad analogy because the Earth isn't a single organism.

  24. - Top - End - #174
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    Nor are human beings an actual virus. I don't think you know what analogies are.

  25. - Top - End - #175
    Titan in the Playground
    Join Date
    Sep 2014

    Default Re: Are we evil?

    I know what an analogy is, it doesn't change the fact that it's a bad one.

  26. - Top - End - #176
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    Sorry, I phrased that poorly. I'm sure you know, in a general sense, what an analogy is. I meant that you don't understand how analogies work. If it is a bad analogy, it's not for the reason you mentioned; whether the Earth is a single organism or not is totally irrelevant. An analogy compares two things or sets of things on the basis of a significant similarity, most often in their structure or relationships, in order to better illustrate or clarify a contention about one of them. The nature of each party in a set is not relevant if the point of comparison is the relationship between the parties in the set, because the goal is to illustrate the relationship of the source in the more commonly understood structure of the target. But hey, why take my word for it when we can use an example?

    So, for our example, imagine a DM has come up with some new monsters for their urban fantasy campaign setting. One of these monsters is an intelligent undead whose great powers are balanced out by its crippling and relatively quotidian weaknesses; it catches fire and/or disintegrates upon even relatively indirect contact with ginger, while it is repelled by the flickering of fluorescent lighting, which revolts its heightened senses. Your argument is that it would be a bad analogy to say that ginger is to it as sunlight is to vampires because ginger is not a kind of lighting; similarly, saying fluorescent lighting to it is like garlic to a vampire would be a bad analogy because fluorescent lighting is not a root vegetable with a strong scent and diverse culinary uses. While both of those clearly convey the relationship in question, and would thus be perfectly serviceable explanations for the DM to give their players of the monster's weaknesses, your position says they're bad analogies merely because the taxonomies fail to match up.

    So, then, let's do the reverse, and assume the DM gives their players taxonomy-matching analogies: "for this monster, ginger is like garlic to a vampire" and "fluorescent light is like sunlight to a vampire." How are the players supposed to understand, from these analogies, that ginger disintegrates the undead and fluorescent light repels it? Not only can they not discern the meaning, they're fairly likely to discern the wrong meaning and assume fluorescent lighting disintegrates the creature and ginger repels it because that is exactly what the analogy is saying.

    Now, add to all of this that the analogies would be just as intelligible and as misleading, respectively, were we to change the DM's creature into an ooze or demon or swarm. An analogy's ability to convey its intended meaning does not depend, directly or indirectly, on the taxonomic comparability of the parties involved in the structures or relationships it compares, merely on the comparability of those structures or relationships in and of themselves. As such, the fact that the Earth is not a single organism has no impact whatsoever on the ability of an analogy about the relationship of a species to the Earth to convey that relationship, or at least the speaker's view of it, clearly and effectively.
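
    The same point can be put as a toy sketch; all of the names below are invented to mirror the DM example above. Treat the analogy as a mapping between relations, and notice that nothing in it ever consults what taxonomic kind each term belongs to.

    # Each domain is described only by its relations; the analogy pairs up
    # whatever terms stand in the same relation, regardless of taxonomy.
    vampire = {"destroyed by": "sunlight", "repelled by": "garlic"}
    homebrew_undead = {"destroyed by": "ginger", "repelled by": "fluorescent light"}

    def analogy(source, target):
        """Map each term in `source` to the term playing the same role in
        `target`; the kinds of the terms (light, vegetable...) never matter."""
        return {source[rel]: target[rel] for rel in source if rel in target}

    print(analogy(vampire, homebrew_undead))
    # {'sunlight': 'ginger', 'garlic': 'fluorescent light'}
    # i.e. ginger is to this creature as sunlight is to a vampire, even
    # though ginger is not a kind of lighting.

    Swapping the undead for an ooze or a swarm changes neither dictionary's keys, which is the sense in which the taxonomies' comparability is irrelevant.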

  27. - Top - End - #177
    Ogre in the Playground
    Join Date
    Aug 2012
    Gender
    Male

    Default Re: Are we evil?

    No, see, viruses aren't that talky so we're totally not viruses.

  28. - Top - End - #178
    Titan in the Playground
    Join Date
    Sep 2014

    Default Re: Are we evil?

    Quote Originally Posted by Zrak View Post
    Sorry, I phrased that poorly. I'm sure you know, in a general sense, what an analogy is. I meant that you don't understand how analogies work. [...]
    I understand how analogies work; you don't particularly have to condescend down to me, you know? It doesn't exactly help express your point in any effective manner.

  29. - Top - End - #179
    Firbolg in the Playground
     
    GnomeWizardGuy

    Join Date
    Jun 2008

    Default Re: Are we evil?

    Morality tends to be human-centric, since it was created by humans and is defined by humans (or understood and interpreted by humans, depending on your viewpoint). As such, morality is going to favor humans over other things, or at least hold them equal to other things, and favor human viewpoints over non-human ones.

    Please note that the idea that humans are evil simply by eating and surviving is not a new one, although such thinking generally assumes that humans must therefore actively work to counterbalance the necessary evil of existence. However, that idea isn't very popular. Most people don't like the idea that they are evil, and so the more popular moralities tend to set some sort of arbitrary divide. Either it's "non-sapient doesn't count" or "non-sentient doesn't count" or "non-living doesn't count" or something similar, where there is a distinct difference between doing evil to something and not doing evil to it, or the impact is lessened. Note that, even in those cases, a person could still end up doing evil: most people would not consider the death of a dog to automatically be evil, especially a dangerous or rabid dog, but most people would still consider torture or inhumane treatment of the dog to be evil. Even if it was a dangerous or rabid animal, it is still considered inhumane (evil, basically) to needlessly cause it suffering.

    Some sort of creature which doesn't need to kill a bunch of stuff just to survive day-to-day could easily look at humanity and call it evil. After all, if they just need some water and sunlight to survive and grow, what would they think of a species which needs to kill things daily just to survive? To them, having no concept of the necessary killing of others, the idea of regularly killing something just to eat could be completely foreign. Their morality would not see much difference between sapience and non-sapience, because while non-sapient creatures may not be capable of decisions, it doesn't follow that they deserve to be slaughtered by the dozens just for another creature to survive.

    And on the flip side, something which does eat to survive, in a world where everything living has some degree of sapience, isn't going to make a sapient/non-sapient distinction. To them, there is no real point; they need to kill and eat sapient creatures just to survive. They are unlikely to see much difference between eating a human and eating a bovine.

  30. - Top - End - #180
    Ogre in the Playground
    Join Date
    Nov 2012

    Default Re: Are we evil?

    Quote Originally Posted by Razade View Post
    I understand how analogies work, you don't particularly have to condescend down to me you know? It doesn't exactly help express your point in any effective manner.
    Says you. I tried to condescend up to someone, once, and I wound up in traction.

    More seriously, I'm not even trying to come across as particularly condescending, but it is hard to find a neutral phrasing for the fact that your criticism of the analogy missed not only the point of the analogy in question but the point of analogy categorically. I mean, you said the analogy was bad because of the very thing that made it an analogy. I don't think there's a way to express that kind of mistake that wouldn't come across as condescending. I assumed you weren't being wrong on purpose, and I am honestly sorry if I was mistaken and a deadpan joke or an intentional self-contradiction has gone over my head. But the only reasonable explanation I can see for your argument is a basically total misunderstanding of the concept of analogy. I explained the function of analogies in great detail because I earnestly believed that you didn't understand the purpose of analogy; if you did, you wouldn't have made the criticism you did, because that criticism is blatantly nonsensical on its surface.
