In the world of science, you can't go a day (or even a conversation, usually) without using the word "significant." That's because we almost never measure something that's either ON or OFF. There's always some activity going on, whether we're talking about brain activation or gamma rays out there in space. This is what we refer to as the "baseline level" or simply "noise." What we're always trying to figure out is whether something we see is statistically significant as compared to the baseline level, or noise.
That's science, in a nutshell: Do something, record what changes, and compare those changes to what usually happens. If the changes we caused were statistically different from what one could reasonably expect under normal circumstances, then we found a "statistically significant" effect. Obviously, there are important ways of cleaning that process up, but we're talking basics here.
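That "compare to what usually happens" step can be made concrete with a permutation test: pool the baseline and treatment measurements, shuffle them many times, and see how often chance alone produces a difference as big as the one observed. The sketch below is a minimal pure-Python illustration; the readings are invented example numbers, not real data.

```python
import random
import statistics

def permutation_test(baseline, treatment, n_permutations=10_000, seed=0):
    """Estimate a one-sided p-value: how often random relabeling of the
    pooled data yields a mean difference at least as large as observed."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(baseline)
    combined = list(baseline) + list(treatment)
    k = len(treatment)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(combined)
        diff = statistics.mean(combined[:k]) - statistics.mean(combined[k:])
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Hypothetical detector readings: noisy baseline vs. a slightly shifted signal.
baseline = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9]
signal   = [10.6, 10.9, 10.7, 11.0, 10.5, 10.8, 10.7, 10.9]

p = permutation_test(baseline, signal)
print(p)  # a small p-value: the shift is very unlikely to be pure noise
```

If the shuffled differences match or beat the observed one often, the "effect" is indistinguishable from noise; if they almost never do, the effect is statistically significant in exactly the sense described above.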
Thursday, January 23, 2014
Monday, May 27, 2013
I am currently reading Frans de Waal's The Bonobo and the Atheist, and I've come across an unfortunate, common critique of Utilitarianism. As a whole, it's been an interesting & entertaining read. I do have one specific nit to pick, however.
In a lengthy passage (pp 182-184), de Waal applies the not-super-original arguments that "good" is a value judgment, and that increasing the "greatest common good" would devalue and detract from our personal relationships. This is only true when referring to a caricatured, algebraic application of the principle of "greatest common good." De Waal claims that a true striving for the greatest common good would result in an abandonment of our close personal relationships: Why buy your down-and-out brother lunch when there are people literally dying of hunger out there?! And why bring flowers to your Alzheimer's-afflicted mother when you could donate that $30 to Alzheimer's research, to benefit many more people?
In short, this is an argument tiptoeing along the precipice of the classic reductio ad absurdum tactic. De Waal is probably correct that if every person subordinated the good of their family to the good of every individual on Earth, personal relationships would be completely empty and meaningless. However, given that postulate and our innate human need for close companionship, that obviously would not be a path toward "human flourishing," and therefore not a valid Utilitarian path. De Waal does a great job tugging at the readers' heartstrings regarding the importance of personal relationships. However, he fails to acknowledge that, assuming his assertions are correct, those personal relationships would be inherently necessary in a Utilitarian "equation." The subordination of personal biases toward one's own relationships for the sake of humanity as a whole is more akin to a Moral Marxism than it is to Utilitarianism.
Wednesday, May 15, 2013
from NBC News Android app.
The headline to the article above, though quite a disappointing teaser for an article that never really explains its title, hints at an idea I've had for a while now. As somebody who enjoys both psychology research and squeezing in some gaming time, I have long been interested in the "Violence in Video Games" debate.
Although I am of the opinion that the violence in video games does not *cause* any anti-social behaviors, I can certainly agree that, for individuals with certain dispositions, adding violent video games into the mix is most certainly not healthy, and allowing children to play them is definitely not a laudable parental decision.
TOPPLING THE MORAL PILLARS
I recently finished reading Jonathan Haidt's The Righteous Mind: Why Good People Are Divided by Politics and Religion, and I've felt compelled to write a response since completing even just the first few pages. Although I had been familiar with his Moral Foundations Theory, this reading served as my formal introduction to it, and, to put it lightly, I was left with a very sour taste in my mouth. I have always found myself with an evolutionary psychology mentality, and his moral pillars, or foundations, just do not hit home with me. All six of his foundations seem to be obvious extensions of a Harm-based morality developed by the social primates that humans are, with the splitting into six factors serving only to justify a relativistic view of morality. As a refresher, here are his six moral foundations, which he claims are the bases of morality in all cultures, with various cultures emphasizing different foundations to different degrees, resulting in the many cultural definitions of right and wrong:
- Care vs Harm
- Fairness vs Cheating
- Liberty vs Oppression
- Loyalty vs Betrayal
- Authority vs Subversion
- Sanctity vs Degradation