Experiments in Ontological Relativism and Other Brain Farts

On Kindness

Posted by Jason on March 8, 2012

I know, I know, I still owe you guys the rules for my new wiki game. I haven’t forgotten; they’re about halfway written, I just haven’t finished them up yet. Mostly due to laziness. But that’s another post.

I was having a conversation the other day on the topic of… well, just generally being nice to people. Later, thinking back on it, I came to the realization that there are essentially two schools of thought when it comes to kindness to others:

  1. Being nice is the default state, but niceness can be revoked; or
  2. Not being nice is the default state, but niceness can be earned.

I realize I run the risk of oversimplifying here, but it seems to me that option 2 is probably the more natural one for humans in general, while most systems of ethics and morality are basically trying to get people to go with option 1.

And let’s face it, option 2 does make a lot of sense from a survival standpoint: it’s dealing with known quantities. If someone has earned your good will, whether through helping to raise you (in the case of family or community) or by demonstrating some other form of trustworthiness, you run much less risk of having that good will taken advantage of. Strangers are scary when it comes to survival. You don’t know what they’re going to do or how they’re going to react. When your life is spent in a small community surrounded only by people with whom you’re familiar, it actually makes a lot of sense to shun anyone who’s an unknown quantity until they prove otherwise.

The problem is, though, that most of us don’t live in small, self-enclosed communities anymore. And that’s where option 1 comes in. When you expand the idea of “community” to include a large number of people whom you’ve never met, and probably never will meet, option 1 is more advantageous to the functioning of the group as a whole. When societies grow large (and by “large” I mean anywhere from a small city to a nation to a global community), your individual well-being is tied to the actions of a vast number of other people with whom you may have little or no acquaintance. In this case, it is actually to your individual benefit to be kind to them by default; after all, who knows how they may or may not be able to help you now or in the future.

Maybe this is all a little too abstract. I tend to think a lot about systems and large-scale interactions and efficiency. I’m not entirely sure why; maybe it’s just something about the way my mind works. But really, the core of this idea came to me when I realized that, in my own life, choosing to be nice to everyone costs me relatively little but can gain me a great deal. When you’re nice to someone, they’re generally nice back, and that makes for smoother relations all around. Choosing not to be nice to someone is also cost-free in the moment, but it doesn’t gain me anything either, and it could even cost me later. What if I’m rude to someone and then need something from that person? That random stranger on the train could turn out to be, say, the banker whose approval I need for a loan.

You could argue that this is a very cynical way to look at morality, doing good only insofar as it benefits you personally. And it very well may be, but if the net effect is a positive one, does that really matter? I’m not saying that you should only consider ethics in terms of personal benefit (though there certainly are functioning ethical systems that do just that), but I do think it can be an interesting way to look at it. Certainly the language of “good” and “bad” has served us well in the past, and continues to do so in many contexts; but occasionally looking at morality from a different angle (in this case, “benefit” and “cost”) can reveal some new insights as well.

I sometimes feel that many of the biggest societal issues facing us today are the result of a clash between survival strategies suited to small groups and those suited to large ones. Humans have hundreds of thousands of years of experience operating in small communities or as individuals, and it’s no surprise that we’ve evolved instinctive survival strategies for those conditions. But the modern world is a very different place, and it calls for different strategies of cooperation and acceptance. I think that’s why we need ethical systems and teachers in the first place: it’s not that our instincts are bad, just that they’re not suited to our current environment. But humans are unique in that intellect can overcome instinct, and we can use that to adapt to new situations. Ethics as adaptation, perhaps?

I’m not entirely sure where I’m going with this, exactly, but I wanted to share. As always, I invite your thoughts as well!
