Culture of poverty: complete nonsense

Discuss race long enough, and you will eventually come across someone who says that black people are the authors of their own downfall – that laziness and a ‘culture of poverty’ that discourages positive economic choices are the reason for the wide income disparities that fall along racial lines. No evidence is ever forthcoming to support this contention – it is merely asserted as a self-evident truth. After all, anyone can look into the ghettoes of the United States and Canada and see that poor people are lazy and have bad attitudes. Millions of dollars are spent on programs targeting these groups, and yet the disparities still persist. What other explanation could there be?

I’m not a sociologist, and I’m sure all of this will be apparent to any readers of the blog who are sociologists. I try my best to restrict my comments to topics I understand, based on fields of inquiry with which I have at least some familiarity. Since I am not trained as a sociologist, I usually try to avoid interpreting the primary literature myself. However, because I can appreciate the scientific method at work in that type of inquiry, I do occasionally dip my toe into this realm. Such dabbling is made far easier when someone else does all the heavy lifting for me:

[Oklahoma State Senator Sally] Kern was simply advancing one of the most enduring and pernicious untruths in America’s political economy. It holds that poverty – in general, but especially within communities of color – doesn’t result from purely economic factors. Rather, the poor are where they find themselves as a consequence of some deep-seated cultural flaws that keep them from achieving success. They’re held back, the story goes, by what is known alternatively as a “culture of poverty,” or a “culture of dependence.” It’s a popular fable for the right, as it absolves the political establishment for public policies that harm the working class and the poor.

It’s also thoroughly and demonstrably untrue, flying in the face of decades of serious research findings.

It’s a myth that should be put to rest by the economic experience of the African American community over the past 20 years. Because what Kern and other adherents of the “culture of poverty” thesis can’t explain is why blacks’ economic fortunes advanced so dramatically during the 1990s, retreated again during the Bush years and then were completely devastated in the financial crash of 2008.

In order to buy the cultural story, one would have to believe that African Americans adopted a “culture of success” during the Clinton years, mysteriously abandoned it for a “culture of failure” under Bush and finally settled on a “culture of poverty” shortly after Lehman Brothers crashed. That’s obviously nonsense. It was exogenous economic factors and changes in public policies, not manifestations of “black culture,” that resulted in those widely varied outcomes.

I will attempt to translate: the ‘culture of poverty’ hypothesis suggests that poverty cannot be affected by social programs – that the problem is one that must be addressed culturally (however one does that) rather than through the application of policy effort. The counter to that hypothesis states that cultural factors do not explain poverty, and that policy will decrease disparity. That appears to be precisely what happened:

But a little-known fact is that even before the recession hit in 2008, blacks had already taken a huge step back economically during the 2000s. By 2007, African Americans had already lost all of those gains from the 1990s. That year, sociologist Algernon Austin wrote, “On all major economic indicators—income, wages, employment, and poverty—African Americans were worse off in 2007 than they were in 2000.”

Although the Great Recession obviously hit everyone hard, it didn’t cause everyone equal pain. In 2007, the difference between white and black unemployment rates fell to the lowest point in years: just 3 percentage points. Yet as the economy fell into recession, that gap quickly grew again, and by April 2009 it had doubled, reaching a 13-year high.

“So what?” you might be saying. “All that proves is that when you give black people more money, they have more money. It could still be evidence that a culture of failure exists, which is why they lost it all again when the policy changed.” I’ll admit that was my first thought. But as I’ve pointed out before, poverty is not simply a lack of money – it’s a lack of opportunity and access. The way to measure whether or not a ‘culture of poverty’ exists is to look directly at the attitudes and behaviours that differ between those at the top and those at the bottom:

Gorski did an exhaustive literature review on the culture of poverty meme. Are poor people lazier than their wealthier counterparts? Do they have a poor work ethic that keeps them from pulling themselves up by their bootstraps? Quite the opposite is true. A 2002 study by the Economic Policy Institute found that among working adults, poorer people actually put in more hours than wealthier ones did. As Gorski noted, “The severe shortage of living-wage jobs means that many poor adults must work two, three, or four jobs.”

So under direct measurement, there does not appear to be a difference in attitudes towards work, education, or even alcohol and drug use between the wealthy and the impoverished. Even attitudes toward marriage (the article goes into more detail, but I don’t really see why) are based more on economic security than a culture of poverty – suggesting quite the opposite of the central thesis that underpins the ‘culture of poverty’ mythos: that poor people are poor because they fail to make good decisions.

So maybe there’s something to be gleaned from this idea that the reason poverty falls along racial lines is that black people are just lazier than average, and don’t put in the work to pull themselves up out of the hole. After all, if they were serious about getting out of poverty, wouldn’t they take advantage of things like retraining and job fairs? Or at least start their own businesses? Yes, that’s exactly what they’d do:

So let’s look again at the evidence. AARP did a study of working people over 45 years of age (PDF), and found that “African Americans surveyed were more likely than the general population to be proactive about jobs and career training.”

They took steps such as training to keep skills up-to-date (30% versus 25%), attending a job fair (18% versus 7%), and looked for a new job (24% versus 17%) in the past year at rates higher than the general sample. A sizeable share also indicated that they plan to engage in these behaviors. More African Americans relative to the general population plan to take training (38% versus 33%), look for a new job (27% versus 24%), attend a job fair (26% versus 11%), use the internet for job-related activities (30% versus 23%), and start their own business (13% versus 7%).

The unemployment rate for African Americans between 45-64 years of age stands at 10.8 percent; the rate for whites of the same age is just 6.4 percent. Older black workers have the drive, and report putting in more effort to land jobs or start businesses than their white counterparts – they embrace a “culture of success” — yet their unemployment rate remains 40 percent higher.

Now this article does not completely rule out the ‘culture of poverty’ hypothesis. There may in fact be some differences in narratives between black people and the general population that were not explicitly measured by these studies. Certainly there is something to be said for the aspirations of success among many black groups, particularly those living in urban environments where opportunities are scarce and ‘success’ has a very different definition. What this article does do, however, is strongly suggest that we cannot ascribe much explanatory power to the idea that poverty is explained by laziness and poor work ethic, nor can we exclude policy as a useful method of alleviating poverty.

Like this article? Follow me on Twitter!

The god of the glass

Back when I was in my younger teen years I used to love playing a Super Nintendo game called Secret of Mana. Toward the end of the game, you have to battle against clones of your own character in order to complete a particular dungeon. This battle was necessarily the most difficult in the game, because the clones had all of your abilities. Unlike with other enemies in the game, you couldn’t gain experience or items that would tip the scales in your favour if the fight proved too difficult on the first pass. The opponent was always your equal, meaning you had to rely on superior play to carry the day. I wasn’t (and am still not) a very good gamer, so this part was always tough for me.

I was reminded of my frustration with this battle against one’s self when I saw this article:

People often reason egocentrically about others’ beliefs, using their own beliefs as an inductive guide. Correlational, experimental, and neuroimaging evidence suggests that people may be even more egocentric when reasoning about a religious agent’s beliefs (e.g., God). In both nationally representative and more local samples, people’s own beliefs on important social and ethical issues were consistently correlated more strongly with estimates of God’s beliefs than with estimates of other people’s beliefs (Studies 1–4).

In particular, reasoning about God’s beliefs activated areas associated with self-referential thinking more so than did reasoning about another person’s beliefs. Believers commonly use inferences about God’s beliefs as a moral compass, but that compass appears especially dependent on one’s own existing beliefs.

(I find HTML journal articles very difficult to read. A .pdf version is available here)

I hinted at this during last week’s Movie Friday, suggesting that when someone talks about their ‘personal relationship’ with whatever deity they happen to worship, there are always discrepant accounts of what that deity values. This is quite inconsistent with the idea that there is an actual entity out there, but fits exactly with the hypothesis that people have a ‘personal relationship’ with something within their own heads. I’ve made this more explicit in the phrase “Ask 100 people for a definition of god, get 200 answers” – referencing the fact that the gods people claim to believe in almost always turn into something much more mushy and deistic under direct scrutiny. The authors of this study have done the scientifically responsible thing and, rather than just making fun of religious people on a blog, actually conducted some research.

In the first study, the researchers asked people to report their own beliefs, those of a person they do not know personally, and those of their god. Keep in mind that if there were some external standard (god), the level of correlation between people’s own evaluations and that external standard would vary. After all, people disagree about homosexuality, capital punishment, abortion, and any number of other topics. What they found instead was that there was a consistently strong correlation between whatever the respondent happened to believe, and what they thought their god believed. Once again, surprising if you believe in a supernatural source of absolute morality that communicates with humans, completely expected if you recognize what it looks like when people talk to themselves.
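To make that comparison concrete, here is a minimal sketch – using made-up ratings, not the study’s data – of how one could check whether people’s own views track their estimates of their god’s views more closely than their estimates of the average person’s views:

```python
# Toy illustration with fabricated data: compare how well each respondent's own
# ratings correlate with their estimates of God's views vs. the average person's views.
import numpy as np

rng = np.random.default_rng(0)
own = rng.uniform(1, 7, size=(100, 5))            # own ratings on 5 issues, 1-7 scale
god = own + rng.normal(0, 0.5, size=own.shape)    # God-estimates built to track one's own views
other = rng.uniform(1, 7, size=own.shape)         # estimates of the average person, unrelated

def mean_within_person_r(a, b):
    """Average correlation, computed within each respondent across the five issues."""
    return np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)])

print(f"own vs. god:            r = {mean_within_person_r(own, god):.2f}")    # high, by construction
print(f"own vs. average person: r = {mean_within_person_r(own, other):.2f}")  # near zero, by construction
```

The study’s actual analyses are more involved, but the quoted result is this same comparison: the correlation between ‘self’ and ‘god’ comes out stronger than the correlation between ‘self’ and ‘other people’.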

The facile rejoinder would sound something like this:

True followers of YahwAlladdha spoke the truth about those topics, whereas those who are not real _______ only spoke what was in their own heads. What this study demonstrated is nothing more than the fact that some people are not sincere believers.

Luckily, there is a way to test this hypothesis too. If this were indeed the case, then the sincere believers would not change their minds, whereas the convictions of those who are just faking it (or worse, believing in the wrong version of YahwAlladdha) would shift to fit the circumstances. After all, the sincere believers have direct communication with the divine, who is unchanging and absolute. The scientists had participants read arguments for and against a policy (in this case, affirmative action) and rate how strong they felt the arguments were. Then they were asked to rate their own opinion of the topic, as well as the average person’s opinion, and then God’s.

As the study’s results show, those who opposed the policy (the anti-policy group) felt that their god disapproved just as much. Those who had been manipulated into supporting the policy (keep in mind these were randomized groups, so their positions before reading the arguments would have been the same on average) felt that their god supported it too. Interestingly, this effect was not seen in how participants thought the average person felt – suggesting that evaluations of the average person are not quite as egocentric as evaluations of YahwAlladdha. The effect was further explored by having people read speeches that either supported or opposed the position they held on the death penalty – a manipulation that tends to strengthen the views of those who agree with the speech and soften the views of those who disagree. Again, after being manipulated into a position, the participants’ expectations of what their god supports changed right alongside.

Finally, if that wasn’t enough evidence that the ‘personal relationship’ is about as personal as it could be (i.e., just a reflection of your own beliefs), the investigators hauled out a functional MRI (fMRI) scanner. Brain activity when considering one’s own beliefs was different from the activity seen when participants considered the beliefs of other people. However, as you might have expected from the above experiments, when people thought about what their god wanted, the pattern of activity was the same as when they thought about themselves. Not only is the content of the beliefs identical, but so too is the method by which believers arrive at them.

None of this is proof that a god doesn’t exist – such a thing is logically impossible and wildly uninteresting (I will explain this on Monday). What it does prove, however, is that people do not get their morality from direct communication with the Holy Spirit or any other kind of supernatural entity. Moral attitudes come from a variety of sources, none of which point to non-material origin. While people may get their moral instruction from religion (in a “do this, don’t do that” kind of way), it is not because of an entity which embodies absolute morality and communicates said morality through prayer.

I am still curious how believers deal with things they disagree with, but which they are told are commanded by their god. Do anti-gay activists legitimately hate gay people, or are they just following the instructions from the pulpit? Are the religious teachings to blame for the evils committed by religious adherents, or are they just a smokescreen used to justify underlying organic hatred and spitefulness? Whatever the answer, those of us hoping to deal with those who believe their cause is divinely justified have to confront the truth that we are not just fighting against the concept of a god – we are fighting against the concept of a god that takes shape in the mirror.

Like this article? Follow me on Twitter!

Movie Friday: Fear of Numbers

It’s no secret that I’m a big fan of Neil deGrasse Tyson. Here’s why:

When Carl Sagan died, there was a hole left for a science educator who could engage with average people and get them excited by new scientific concepts. I feel like that role has gone to Dr. Tyson, though I’m sure he would disavow the comparison. I had a conversation with a couple of friends and raised the point that, like basic math and language skills (although even those are still lacking in many cases), the ability to communicate your research to ordinary people (i.e., non-scientists) should be a prerequisite for a career as a scientist. If the scientific community can’t manage to bring the fire of the gods to the people (I am making a Prometheus allusion), then what are they (we) doing this for?

Like this article? Follow me on Twitter!

Health care; we still live in the world

So as you may have deduced from yesterday’s marathon post, I am back from my trip. While I spent the first week in sunny and beautiful Amsterdam, I spent the second week in sunnier Toronto – my old home. This trip wasn’t all pleasure though; in fact, I was traveling for business. I don’t talk about this on the blog often, but I work as a health economist. Basically, health economics is a branch of research concerned with resource allocation and decision-making in health care. We look at alternative methods of health care delivery, technologies, programs, etc. and apply the scientific method to work out which options are worth the investment of time, energy, and (ultimately) money. The goal, at least for me, is to maintain the public health system so that it is viable in the long term.

The biggest problem with public provision of health care (or really, any kind of health care provision) is that there is only a finite amount of resources available. At every turn, we are confronted by the fact that while the costs of care are climbing steadily, the amount of money available to fund treatment can’t even come close to keeping up. At some point, much as we’d like everyone to get all the treatment they need and want, we have to draw a line.
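To give a sense of how that line gets drawn in my corner of the field, here is a minimal sketch – with entirely hypothetical numbers, not any real drug or program – of the incremental cost-effectiveness comparison health economists commonly use when deciding whether a new option is worth funding:

```python
# Toy cost-effectiveness comparison with hypothetical numbers (no real drugs or prices).
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars spent per extra
    quality-adjusted life year (QALY) gained by switching treatments."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: the new option costs more up front but buys more healthy years.
ratio = icer(cost_new=50_000, cost_old=20_000, qaly_new=4.5, qaly_old=3.0)
threshold = 50_000  # an often-cited willingness-to-pay per QALY; real thresholds vary by jurisdiction

print(f"ICER: ${ratio:,.0f} per QALY gained")
print("Fund it." if ratio <= threshold else "The same money buys more health elsewhere.")
```

Real evaluations pile on uncertainty analyses and budget-impact estimates, but the basic question is always that ratio: what does the extra health cost, and could the money do more good somewhere else?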

Sometimes we get in our own way a little:

A B.C. woman with a rare, serious skin disease can’t understand why the province refuses to cover a one-time treatment that would likely put it into remission — but will pay for much more expensive treatment that only helps relieve her symptoms…

Dermatologist Gabriele Weichert wrote to PharmaCare, recommending a one-time treatment with Rituximab instead. The drug is approved for treatment of rheumatoid arthritis and other conditions, and Weichert said the drug has also shown much better results in treating pemphigus.

So here it seems there is a clear-cut case where government bureaucracy is getting in the way of medical decision-making. We’ve got a disease, a drug that treats it (at lower cost, no less), and a bloated, inefficient system that won’t cover the cost of the medication because it’s not on “the list”. Pretty shocking, right? Well, until we read this:

A spokesperson for PharmaCare told CBC News approval was denied because Health Canada has yet to approve Rituximab for treatment of pemphigus. Using it to treat that condition is considered “off-label”.

Rituximab is part of a class of drugs called ‘monoclonal antibodies’ that basically mimic the body’s own immune response to foreign proteins. When a strange substance (in immunology, called an “antigen”) enters the body, it is recognized by the white blood cells. They form a chemical impression of the proteins that make up the antigen and begin creating antibodies. Those antibodies coat the foreign protein, signalling other blood cells to envelop and destroy them. Sort of like adding bacon bits to an otherwise-unpalatable salad. Monoclonal antibody drugs do this, but for tumour cells (which are not recognized as ‘foreign’ because they come from the body’s own tissue).

As you might suspect, these drugs are typically used for cancer. Using rituximab for a skin disease is indeed an ‘off-label’ usage, and off-label use can be potentially disastrous. The kind of cowboy prescribing involved in giving treatments whose efficacy is not established can have fatal consequences, as we’ve seen in the furore around so-called ‘Liberation Therapy’ for multiple sclerosis. The problem here is that there is likely never going to be the kind of trial that we would consider sufficiently strong evidence to justify covering rituximab for use in this setting – the disease is just too rare.

So why not just give it anyway? It’s medicine, right? What possible harm could there be in prescribing it? Well… how about death?

Four people with rheumatoid arthritis have died after being treated with Rituxan, says the drug’s manufacturer, which has issued safety information about the medication in conjunction with Health Canada. None of the deaths caused by a severe infusion-related reaction occurred among Canadian patients, Hoffmann-La Roche Ltd. said in a release.

All drugs have potential adverse effects, and some of those effects can be fatal. Doctors know this, which is why they take such precautions when prescribing (well… that’s debatable, I suppose). Giving a medication for an indication in which its effects are unknown may result in a miraculous cure, but it might also kill the patient. Because of the vast divide in knowledge between the doctor and the patient, and the unique level of trust that characterizes that relationship, physicians must be extremely careful in the advice they give. When the stakes are high, patients will often leap at opportunities for cures without really understanding all of the variables involved.

This is the tightrope that the health care system must walk every day. Adhere to the rules and regulations too strictly, and it runs the risk of undertreating patients, or of promoting practices that are inefficient and ineffective. Relax the rules too much, and it runs the risk of seeing patients die from inappropriate or experimental treatment at the hands of well-intentioned but ultimately misguided care providers. There are horror stories on either side of this divide, which can be (and are) milked in order to shift policy and public opinion.

There is no perfect solution to this set of problems. Different countries employ a variety of approaches to maximize patient autonomy whilst simultaneously protecting patients from the consequences of their own ignorance. Whenever there are failures, they should be brought up and discussed. The key is a system that is not so intractably bound by regulation that it cannot respond to cases like the treatment of pemphigus, but not so flexible as to undermine its own ability to safeguard its stakeholders.

Like this article? Follow me on Twitter!

Cracking the code

I screwed up. A couple of weeks ago I introduced a new term into the discussion – “coded racism” – without doing my usual thought-piece beforehand:

To the list of code words that don’t sound racist but are, I would add ‘personal responsibility’. While personal responsibility is a good thing, its usage in discussions of race inevitably cast black and brown people as being personally irresponsible, as though some genetic flaw makes us incapable of achievement (which, in turn, explains why we deserve to be poor and why any attempt to balance the scales is ‘reverse racism’).

I have danced around the idea, and I have made occasional reference to the concept behind it, but I haven’t really explained what coded racism is. I will have to do that in next Monday’s post, so stay tuned. As a teaser explanation, I will simply point out that oftentimes phrases are used to identify groups in a sort of wink/nudge way, where everyone listening knows who the speaker is really talking about. It’s phrases like “welfare queen” and “illegal immigrant” that do not explicitly name the group being criticized, but still carry with them the image of a particular race. Coded racism is not, as the common objection goes, simply a label for any criticism of racial minority groups.

Before we can really delve too deeply into coded racism, there is a truth that we must acknowledge and grok – that racism (like all cognitive biases) can happen at levels not available to our conscious mind. The second part of the grokking is that even though we are not aware of it, racism can influence the decisions we make. As much as we like to believe that we are free-willed agents of our own decision-making, closer to the truth is that a wide variety of things operate in our subconscious before we are even aware that a decision is being made. This is why an artist, an engineer and a physicist could all look at the same blank piece of canvas and see completely different things (a surface upon which to draw, a flat planar surface with coefficient of friction µ, a collection of molecules). We then build conscious thoughts on top of the framework of our subconscious impressions and arrive at a decision.

So when we tell ourselves “I don’t have a racist bone in my body”, what we are really referring to are those conscious thoughts. Most people refuse to entertain overtly racist attitudes, because those attitudes have become wildly unpopular and people recognize that racism is destructive. However, our decisions are only partially driven by our overt ideas, and we can end up engaging in patterns of behaviour that may surprise even us:

You are more likely to land a job interview if your name is John Martin or Emily Brown rather than Lei Li or Tara Singh – even if you have the same Canadian education and work experience. These are the findings of a new study analyzing how employers in the Greater Toronto Area responded to 6,000 mock résumés for jobs ranging from administrative assistant to accountant.

Across the board, those with English names such as Greg Johnson and Michael Smith were 40 per cent more likely to receive callbacks than people with the same education and job experience with Indian, Chinese or Pakistani names such as Maya Kumar, Dong Liu and Fatima Sheikh. The findings not only challenge Canada’s reputation as a country that celebrates diversity, but also underscore the difficulties that even highly skilled immigrants have in the labour market.

This phenomenon is well-known to people who study race disparity, but it is rare to see it make the pages of a paper like The Globe and Mail – hardly a leftist rag. People of colour (PoCs), or in this case people who seem non-Anglo, are at a disadvantage not because of how they look, or how they act, but simply because they have funny-sounding names. Now one would have to be particularly cynical to think that a human resources professional is sitting there saying “Fatima Sheikh? I don’t want no towel-head working for ME!” and throwing résumés in the trash. As I said, that kind of overt racism is rare, even in the privacy of one’s own head. What is far more likely is that, given a situation in which a choice had to be made between a number of potential candidates, the HR person made a ‘gut instinct’ decision to call back the person that they felt most comfortable with.

The problem is that when we feel different levels of comfort with people of different ethnic backgrounds, our aggregate decisions tend to benefit white people and disadvantage PoCs. This isn’t because we’re all card-carrying KKK members, but because we are products of a racist society. This kind of thinking isn’t relegated to how we hire, either:

An experiment was conducted to demonstrate the perceptual confirmation of racial stereotypes about Black and White athletes… Whereas the Black targets were rated as exhibiting significantly more athletic ability and having played a better game, White targets were rated as exhibiting significantly more basketball intelligence and hustle. The results suggest that participants relied on a stereotype of Black and White athletes to guide their evaluations of the target’s abilities and performance.

In a situation where an athlete is identified to study participants as either black or white, but performance is kept exactly the same (they listen to a radio broadcast), what is considered ‘athletic ability’ in a black player becomes ‘basketball intelligence’ and ‘hustle’ in a white player. The identical stimulus is perceived in different ways, based on racial ideas that are not readily available to the subjects (and, by extension, the rest of us). This finding on its own may seem benign enough, but extrapolate from the fact that innate ‘athletic talent’ in one race is read as ‘intelligence and hustle’ in another – the black players are just naturally good; the white ones had to work for it. Poor white folks are ‘down on their luck’; poor black folks are ‘waiting for a handout’. Jobless white folks are ‘hit hard by the economy’; jobless brown folks are ‘lazy’.

And so, when we discuss the idea of words that are simply coded racial evaluations, we have to keep in mind that it is this subconscious type of racism that these phrases appeal to. Far from simply being a macro description of a real problem, the way they are used bypasses our conscious filters and taps right into the part of our mind we don’t know is there, and like to deny.

Like this article? Follow me on Twitter!

So predictable

One of the first posts I ever wrote for this blog discussed why belief based in science is much better than belief based in religious faith. Even if we were to grant the wildly unsupported and ridiculous assertion that religious narratives and scientific observations are equally accurate methods of describing the way the world came to be, the fact remains that religious narratives are consistently inaccurate when it comes to predicting the future. For all the talk of ‘prophecy’ in the Bible, most of it is simply an expression of a rudimentary understanding of human nature. If you couch your predictions in vague enough language, everything becomes a ‘fulfilled’ prophecy.

Of course, those who do dare to tip-toe outside the safe boundaries of non-specific prognostication and actually put their reputations on the line by selecting a specific date and location for an event are always proved wrong. Predictions of that specific type would actually be useful – being able, for example, to know when a plague or a famine or a natural disaster was going to strike a certain region would be incredibly valuable. Assuming for a moment that religious truth picks up where science leaves off, and that science isn’t capable of predicting these events, using this other ‘way of knowing’ would be an incredible boon to mankind. We could use the Bible (or Qu’ran or Vedas or whatever you want to use) to predict when these things would happen, and then use science to minimize the damage they would cause.

However, that’s not the case. So instead we get stuff like this:

More than 22 earthquakes struck Italy by noon on Wednesday, as is normal for the quake-prone country but none was the devastating temblor purportedly predicted by a now-dead scientist to strike Rome. Despite efforts by seismologists to debunk the myth of a major Roman quake on May 11, 2011 and stress that quakes can never be predicted, some Romans left town just in case, spurred by rumour-fueled fears that ignore science.

Many storefronts were shuttered, for example, in a neighbourhood of Chinese-owned shops near Rome’s central train station. And an agriculture farm lobby group said a survey of farm-hotels outside the capital indicated some superstitious Romans had headed to the countryside for the day.

Some people I know are superstitious, or believe in horoscopes and the like. Contextually, it is a harmless enough fancy – for the most part they use logic and good sense to make their life decisions. In principle, however, these kinds of beliefs can be incredibly destructive. When people begin abandoning their homes and work over a superstition that violates scientific principles, it’s not simply something to laugh off. People leaving their jobs means a serious burden on the national economy; people leaving town ties up roads and puts an additional strain on emergency services; the effort spent trying to disabuse people of a false belief could have been better spent in any number of fields. I’m not saying that people can’t take a day off, but when hundreds do so at the same time for an extremely poor reason, you kind of have to give your head a shake.

When those same people spend millions of dollars to propagate a superstitious belief, you kind of wish you could shake them instead:

Billboards are popping up around the globe, including in major Canadian cities, proclaiming May 21 as Judgment Day. “Cry mightily unto GOD for HIS mercy,” says one of the mounted signs from Family Radio, a California-based sectarian Christian group that is sending one of its four travelling caravans of believers into Vancouver and Calgary within the next 10 days. Family Radio’s website is blunt in its prediction of Judgment Day and the rolling earthquake that will mark the beginning of the end. “The Bible guarantees it!” the site proclaims, under a passage from the book of Ezekiel, which says “blow the trumpet … warn the people.”

You didn’t misread that – Family Radio (why is every fundagelical group ‘Family’ something, as though only Christians have families?) has determined through some serious Biblical research that the final judgment of all mankind is happening two days from now (or maybe less, depending on when you’re reading this). Oh, and when I say “serious Biblical research”, I mean some random shit that Harold Camping made up:

I remember a few years ago, I was reading an article by a Rastafari preacher in a Bajan newspaper. He was telling people that you shouldn’t eat ice cream, because it sounds like “I scream”, and therefore it meant that your soul is screaming when you eat it.

Years earlier than that, a guy in one of my high school classes used the same ‘logic’ as Harold Camping to demonstrate that Barney the Dinosaur was actually the devil – apparently the letters in CUTE PURPLE DINOSAUR, when converted to Roman numerals (substituting ‘V’ for ‘U’, as is the style in Latin) and dropping all the letters that don’t correspond to numerals, add up to 666. At least when Lee said it, he was joking. The followers of Mr. Camping are selling their homes, quitting their jobs, and basically giving themselves no Plan B. This is seriously disruptive not only to their lives, but to the lives of those who depend on them. The sad part is what will happen to all of these people when the sun rises on May 22nd and nothing has changed.
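If you want to check that bit of numerology for yourself, here is a minimal sketch of the joke’s arithmetic (the phrase and the U-to-V rule are the joke’s; the code is just a convenience):

```python
# The Barney "numerology" from the joke: swap U for V (Latin style), keep only the
# letters that double as Roman numerals, and add up their values.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def numerology(phrase: str) -> int:
    letters = phrase.upper().replace("U", "V")               # substitute 'V' for 'U'
    return sum(ROMAN[ch] for ch in letters if ch in ROMAN)   # drop non-numeral letters

print(numerology("CUTE PURPLE DINOSAUR"))  # C + V + V + L + D + I + V = 100+5+5+50+500+1+5 = 666
```

Harold Camping’s date-setting is exactly this calibre of reasoning, except that people are rearranging their lives around it.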

If I am moved by a spirit of uncharacteristic generosity, I will grant that religion helps people deal with existential crises by giving them convenient and non-falsifiable answers to complicated questions (by teaching them not to deal with them at all, but whatever). However, when it comes to making claims about the material world, religion can and must be completely ignored as a source of reliable information. Faith is simply one of the remainders that falls out of the long-division of our evolution-crafted mental processes. Just like we can control our urge to defecate on the ground and have sex with teenagers (well… most of us anyway), we can control our urge to believe in ridiculous claims of superstition when it comes to answering the only question that matters – how are we to live in the world?

Like this article? Follow me on Twitter!

I’ll just leave these here…

Sometimes things get said so well that there’s no point in my digesting and putting my own spin on them. Today we have a few of those, which I’m just going to leave here and suggest you read.

1. 90% of prominent Climate Change deniers are linked to Exxon Mobil

A recent analysis conducted by Carbon Brief which investigated the authors of more than 900 published papers that cast doubt on the science underlying climate change, found that nine of the ten most prolific had some kind of relationship with ExxonMobil.

Links to these papers were proudly displayed on the denialist Global Warming Policy Foundation website, where they are still fanning the dying embers of Climategate hoping something will catch, under the heading, “900+ Peer-Reviewed Papers Supporting Skepticism Of ‘Man-Made’ Global Warming (AGW) Alarm.”

The top ten contributors to this list were responsible for 186 of the 938 papers cited.

Hey denialists (coughcoughcoughgrassrutecough) – want to talk some more about how climate change is just a scam? Just be aware that your position was bought and paid for by oil companies. Why don’t we look at the evidence rather than accusing each other of having some secret financial motive?

2. The appeal of the “New Racism”

The New Racism manifests itself in many ways–school choice, the obsession with property values, including the rise of Neighborhood Watch in the 1980s; the differences in prison sentences for those convicted of possessing crack as opposed to cocaine, etc.

We’ve lost an understanding of what racism means in this country. We’ve forgotten that it’s race hate combined with power. A white person being harassed in a black neighborhood is not experiencing racism–that person can call the police and get a response. My students refer to anything other than whatever they think of as Martin Luther King’s dream as racism. Like with so many other words, conservatives have won the rhetorical war. We need to define racism as what it actually is and reclaim the rhetorical ground on moving toward real equality.

To the list of code words that don’t sound racist but are, I would add ‘personal responsibility’. While personal responsibility is a good thing, its usage in discussions of race inevitably cast black and brown people as being personally irresponsible, as though some genetic flaw makes us incapable of achievement (which, in turn, explains why we deserve to be poor and why any attempt to balance the scales is ‘reverse racism’).

3. Seriously, Fuck Ayn Rand

We all know that liberalism is for the (naive, inexperienced, foolish) young while conservatism is a natural byproduct of aging, maturing, and gaining experience with the world, right? Conventional wisdom gets it wrong yet again. The surge in popularity of objectivism and libertarianism on campus underscores how right wing ideology, not pie-in-sky liberalism, is the real fantasyland for kids who have absolutely no experience in the real world.

Yes, Ayn Rand is making a comeback among the college-aged. Objectivism is even getting some mainstream press in light of Commissar Obama frog-marching the nation toward hardcore Communism. Heroic individualists are threatening to “go galt” now that Obama has completely eliminated all incentive for anyone to work ever again, re-enacting their own version of the “producers’ strike” in Atlas Shrugged.

I’ve gotten a little more mellow in recent years, believe it or not, less keen to argue and more able to see middle ground. But there is no middle ground here, no way for us to meet halfway in intellectual compromise: If you are an Objectivist, you are retarded. This is a judgment call, and I just made it. Grow up or fuck off. Those are your two options.

So I decided to give you 1000 words on objectivism last week. Gin and Tacos gives us an… alternative take on the same position. While I’m not a fan of the use of the word ‘retarded’, the rest of the piece is worth reading. Edit: I should note that there is at least one person who is a Rand devotee and whose intelligence and opinion I respect, even if I do not agree.

4. 10 Ways the Birthers are an Object Lesson in White Privilege

Ultimately, the election of Barack Obama has provided a series of object lessons in the durability of the colorline in American life. Most pointedly, Obama’s tenure has provided an opportunity for the worst aspects of White privilege to rear their ugly head. In doing so, the continuing significance of Whiteness is made ever more clear in a moment when the old bugaboo of White racism was thought to have been slain on November 4, 2008.

To point: Imagine if Sarah Palin, a person who wallows in mediocrity and wears failure as a virtue, were any race other than White. Would a black (or Latino or Asian or Hispanic) woman with Palin’s credentials have gotten a tenth as far? Let’s entertain another counter-factual: If the Tea Party and their supporters were a group of black or brown folk, who showed up with guns at events attended by the President, threatening nullification and secession, and engaging in treasonous talk, how many seconds would pass before they were locked up and taken out by the F.B.I. as threats to the security of the State? If the Tea Party were black they would have been disappeared to Gitmo or some other secret site faster than you can say Fox News.

Earlier this week President Obama tried to be the adult in the room by surrendering his birth certificate in an effort to satisfy the Birthers and their cabal leaders Donald Trump and Pat Buchanan. Of course, his generous act does nothing to satisfy the Birther beast for it is insatiable in its madness. Nevertheless, a lesson can still be salvaged by exploring the rank bigotry which drives the Birther movement. In an era of racism without racists, the Tea Party GOP Birther brigands provide one more lesson in the permanence of the social evil known as White privilege.

Still confused about how white privilege works? Here are a few concrete examples.

I guess I should get a tumblr or something for this stuff…

Like this article? Follow me on Twitter!


“Liberation Therapy” saga continues

A while back, near the beginning of this blog, I brought to your attention a new potential treatment for Multiple Sclerosis – a severe degenerative disease. The treatment, pioneered by an Italian doctor by the name of Zamboni (I couldn’t make this stuff up – I’m not that creative), is referred to as ‘liberation therapy’, and involves using venous angioplasty (balloons) to clear blockages.

I expressed my skepticism about this procedure at the time, saying that I generally doubted the claim, simply because there’s little connection between the circulatory and nervous systems. It seemed improbable to me, but I was happy (and encouraged others) to wait and see what the evidence said – what happens when we observe patients under controlled circumstances with adequate follow-up?

Well, it seems that this happens:

People with multiple sclerosis may show blocked neck veins as a result of the disease rather than as a cause, a large study published Wednesday suggests. The findings cast doubt on the theory that blocked or narrowed veins are a main cause of MS, study author Dr. Robert Zivadinov of the University of Buffalo said. The findings published in the journal Neurology were consistent with thinking that the condition — also known as chronic cerebrospinal venous insufficiency, or CCSVI — is more common in patients with multiple sclerosis but not to the degree first reported by Italian doctor Paolo Zamboni.

Please don’t mistake me – I get little pleasure from being right in this case. People close to my family have lived with MS, and I would much rather be wrong if it meant that people could undergo a simple medical procedure and achieve relief from their symptoms. However, the facts are the facts. In this case, the facts do not support the claim that blocked veins contribute to MS, and there is consequently no reason to suspect that alleviating the blockages will have any effect on MS patients.

This study is, perhaps, not the definitive ‘smoking gun’ showing that liberation therapy is ineffective, but it certainly does cast doubt on the original hypothesis of its efficacy. One of the chief components of the scientific method’s accuracy is the ability to reproduce results in a variety of locations. If some event occurred only once, and cannot be observed by others performing the same procedures that elicited the original event, then serious doubt is cast on the original observation. It is far more likely, in a case like this, that there was some flaw in the original observation. This is a good thing – it prevents us from making decisions based on bad information.

However, sometimes we are hell-bent on making those decisions no matter what the evidence says:

The New Brunswick government says it will still help multiple sclerosis patients gain access to therapy to open narrowed neck veins, even though a new report on the procedure is raising concerns. New Brunswick Health Minister Madeleine Dube said that could be debated in the medical community for some time. “But while this is being researched and debated, those people still need support and we are committed to that,” she said Thursday.

There is nothing strictly incorrect about Minister Dube’s statement; however, she and I do seem to have a disagreement over what the word ‘support’ means. Under my definition, it means giving sick people the best care possible, guided by scientific evidence and good practice. Under her definition, it means giving patients whatever they ask for to make them feel better. While I am all for making people feel better, I do not subscribe to the philosophy that cutting people open to elicit the placebo effect constitutes responsible medical care.

For all intents and purposes, there is no reason to suspect that liberation therapy elicits anything stronger than a placebo effect. For every anecdote that reports an improvement in symptoms, there is one that talks about how the symptom relief faded over time. And among those anecdotes, there are more from people who keep chasing the bad medicine like an addict fiending for a fix:

The monitoring is for Canadians such as Caroline McNeill of Langley, B.C., who travelled to California to have her neck veins reopened using balloon angioplasty. She has had the procedure twice before, and noted lingering benefits such as feeling less tired. “The numbness on my fingers has started to come back again, and I have really bad dizziness and vertigo,” McNeill told her doctor. She plans to return to Newport Beach in Southern California for a stent later this month.

It doesn’t surprise or confound me in the slightest that people who experience a temporary benefit would go back to the well, so to speak, and give the therapy another try. When the current regimen of therapies is only partially effective and carries a whole host of adverse effects, it’s completely reasonable to leap at any alternative. This is why these ‘alternative therapies’ (which is a really stupid name) are so dangerous – they make wild promises of benefits that have no scientific backing whatsoever. The people to whom these promises are made are often desperate for any relief, and will try just about anything, no matter how dangerous it is.

This is why people who advocate “health freedom” make me so angry – there is no way you can expect people to be dispassionate and conscientious consumers, weighing the plusses and minuses of different options, when the stakes are so high. People’s lives and day-to-day well-being hang in the balance, and they’ll jump at any chance to feel better. This is why our policy should be based on scientific evidence, not the whims of politicians and the desperation of sick people.

Like this article? Follow me on Twitter!

Psychology beats “bootstraps”

Crommunist is back from vacation, at least physically. I will be returning to full blogging strength by next week. I appreciate your patience with my travel hangover.

Here’s a cool thing:

You don’t have to look far for instances of people lying to themselves. Whether it’s a drug-addled actor or an almost-toppled dictator, some people seem to have an endless capacity for rationalising what they did, no matter how questionable. We might imagine that these people really know that they’re deceiving themselves, and that their words are mere bravado. But Zoe Chance from Harvard Business School thinks otherwise.

Using experiments where people could cheat on a test, Chance has found that cheaters not only deceive themselves, but are largely oblivious to their own lies.

Psychology is a very interesting field. If I wasn’t chasing the get-rich-quick world of health services research, I would probably have gone into psychology. One of the basic axioms of psychology, particularly social psychology, is that self-report and self-analysis are particularly terrible methods of gaining insight into human behaviour. People cannot be relied upon to accurately gauge their motivations for engaging in a given activity – not because we are liars, but because we genuinely don’t know.

Our consciousness exists in a constant state of being in the present, while making evaluations of the past and attempting to predict the future. As a result, we search for explanations for things that we’ve done, and use those to chart what we’d do in the future. However, as careful study has indicated, the circumstances in which we find ourselves are a far more reliable predictor of how we react to given stimuli than our own self-assessment is. This isn’t merely a liberal culture of victimhood, or some kind of partisan way of blaming the rich for the problems of the poor – it is the logical interpretation of the best available evidence that we have.

Part of the seeming magic of this reality of human consciousness is that when we cheat, we are instantaneously able to explain our success away as the product of our own skill. Not only can we explain it away, but we instantly believe it too. A more general way of referring to this phenomenon is internal and external attribution – if something good happens, it is because of something we did; conversely, bad things that happen are due to misfortune, or a crummy roll of the dice. When seen in others, this kind of attitude is rank hypocrisy. When seen in ourselves, it is due to everyone else misunderstanding us. This is, of course, entirely normal – everyone would like to believe the best about themselves, and our minds will do what they can to preserve that belief.

The researchers in this study explored a specific type of self-deception – the phenomenon of cheating. They were able to show that even when there was a monetary incentive to be honest about one’s performance and cheating, people preferred to believe their own lies rather than be honest self-assessors. However, the final result tickled me in ways that I can only describe as indecent:

This final result could not be more important. Cheaters convince themselves that they succeed because of their own skill, and if other people agree, their capacity for conning themselves increases.

There is a pervasive lie in our political discourse that people who enjoy monetary and societal privilege do so because of their own hard work and superior virtue. This type of thinking is typified by the expression “pulled up by her/his bootstraps” – that rich people applied themselves and worked hard to get where they are. The implication is that anyone who isn’t rich, or who has the galling indecency to be poor, is where they are because of their own laziness and nothing more. It does not seem to me to be far-fetched at all that these people are operating under the same misapprehension that plagued the study’s participants – they succeed by means that are not necessarily due to their own hard work, and then back-fill an explanation that casts them in the best possible light.

Please do not interpret this as me suggesting that everyone who is rich got there by illegitimate means. If we ignore for a moment anyone who was born into wealth, there are a number of people who worked their asses off to achieve financial success – my own father is a mild example of that (although he is not rich by any reasonable measure). However, there are a number of others who did step on others, or use less-than-admirable means, to accumulate their wealth. Yet they are likely to provide the same “up by my bootstraps” narrative that people who genuinely did build their own wealth would, and they’ll believe it too! When surrounded by others who believe the same lie, it becomes a self-sustaining ‘truth’ that only occasionally resembles reality.

The problem with this form of thinking is that it motivates not only our attitudes but our behaviours as well. It becomes trivial to demonize poor people as leeches living off the state, and to cut funding for social assistance programs as a result. People who live off social assistance programs often believe this lie too, considering themselves (in the words of John Steinbeck) to be “temporarily embarrassed millionaires” who will be rich soon because of their furious bootstrap tugging. While it is an attractive lie, it is still a lie, and one that underlies most conservative philosophy – which isn’t to say that liberals aren’t susceptible to the same cognitive problems; we just behave in a way that is more consistent with reality, so it doesn’t show as much.

Like this article? Follow me on Twitter!

Conservative Party of Canada is against science

There is a surefire way to ensure tyranny – undermine the education of the populace. When the people don’t have the tools required to determine truth from lies or to obtain their information from a variety of sources, they become dependent on the state to tell them “the Truth™”. We can see this currently happening in the Arab world, where state television in Libya is still being used to broadcast misinformation that is (perhaps fatally) undermining the cause of the pro-democracy rebellion.

One way to establish a religious tyranny is to ensure that the populace doesn’t have access to adequate scientific information. Science is inherently hostile to religion, since the two are very different methods of arriving at answers. The scientific method involves testing repeated observations and inferring rules and laws from trends within those observations. The religious method involves arriving at a conclusion and then finding observations that support the a priori position. The problem with the latter method is that it is trivially easy to arrive at false conclusions and then justify them afterward. By ensuring that the public doesn’t have access to scientific knowledge, you can erode the cause of science and replace it with whatever system you like.

Enter the Conservative Party of Canada:

The public has lost free online access to more than a dozen Canadian science journals as a result of the privatization of the National Research Council’s government-owned publishing arm. Scientists, businesses, consultants, political aides and other people who want to read about new scientific discoveries in the 17 journals published by National Research Council Research Press now either have to pay $10 per article or get access through an institution that has an annual subscription.

Now this on its own is an incredibly minor development. The vast majority of people who access the scientific literature are scientists working at institutions that can afford to buy subscriptions. Furthermore, the lay public get most of their scientific information from people who interpret the studies that are now behind a paywall, so most people won’t notice the difference. This is not the straw that breaks the camel’s back by any stretch of the imagination.

However, erosion doesn’t work in giant leaps – it occurs gradually over time. One of the strengths of science is the ability of anyone who is curious to go back and investigate the source material. Someone tells you that a drug works to treat diabetes, you can go to the paper and check it for yourself. Someone tells you that homeopathy cures warts, you can go check it out for yourself. Someone tells you that the universe was created in the Big Bang, you can go read the papers. This process encourages skepticism and critical thinking, while increasing the trust that the public has in the scientific community (by increasing transparency).

By placing additional barriers between lay Canadians and the products of Canadian scientific researchers, the privatization of the National Research Council is inherently anti-transparent and anti-science. It discourages scientific scrutiny and question-asking, which are two things that the CPC really doesn’t like in the first place. If Harper can’t get a majority right now, at least he can do as much damage as possible with the limited powers he wields.

Like this article? Follow me on Twitter!