Normalizing belief (pt. II) – in defense of aggression

Previously, I tried to illustrate my take on the “accommodation vs. confrontation” issue using a model from statistics. In brief, I pointed out that by asserting a strong, persuasive position it is possible to shift a population of people along a continuum from absolute belief toward absolute disbelief. This shift can occur despite the fact that you may not move a single strong believer into a position of disbelief:

In the graph above, the blue line represents the distribution of a priori levels of belief in a proposition (to wit, the existence of a god/gods, with 1 representing gnostic theism and 7 representing gnostic atheism), and the green dotted line is what I call a “precipice of belief” – the point at which people begin to ask serious questions and doubt the validity of their beliefs. The red line is a hypothetical distribution after someone has made a compelling argument against belief. Notice that many people have crossed the “precipice”, particularly those who were already close to questioning. Note also that none of the “strong” believers (the 1s, 2s and 3s) are atheists now, but they have still shifted somewhat.

The question inevitably arises in such discussions: is it necessary to be aggressive? Doesn’t being aggressive and employing mockery of people’s beliefs make them less likely to listen to your argument? Wouldn’t it be better to state your case in a nice non-confrontational way, rather than arguing from an extreme point of view? I outlined my objections to this argument in the first post:

In general, there are 4 major objections: 1) someone who believes in something because the opponents are mean isn’t rational; 2) there would have to be a lot of people turned off for this to be ‘counterproductive’; 3) minds change over a period of time, not at a single instant; and 4) believers are not the only people in the audience.

I feel it’s important to expand on those points.

1. Someone who believes in something because the opponents are mean isn’t rational

If we grant for a moment the existence of people who will simply move further to the left, or completely shut down, if someone isn’t nice to them (and I’m sure they’re out there), this still fails to be a reasonable objection to the use of aggressive rhetoric. If the strength of your belief is predicated on the disposition of your critics, then you’ve abandoned rationality and are operating from an entirely emotional perspective. As an analogy, imagine someone who believes in science primarily out of a hatred for hippies and anti-vaxxers. Her belief in science has nothing to do with its actual efficacy, but rather with an ad hominem rejection of its opponents. Her reasons for belief are therefore non-rational, and a reasoned argument against them would be a complete waste of time.

It is certainly someone’s right to believe or disbelieve for any number of reasons, but then we have to stop pretending that a rational argument, no matter how friendly, will sway them in the slightest. Convincing someone with this mindset requires a different type of argument, one based on emotive rather than logical reasoning. Unless “Diplomats” are advocating abandoning reason as a means of dialogue, we have to accept that a variety of approaches are necessary.

People whose beliefs will not respond to logical reasoning represent only one portion of the population of believers, and they are likely among the 1s and 2s rather than close to the precipice of belief. After all, if you’ve drawn the cloak of your belief around the shoulders of your brain that tightly, you’re probably not interested in hearing dissenting opinions anyway. It’s also nearly impossible to find arguments that aren’t offensive to believers when any questioning of their faith is seen as an unforgivably rude insult.

2. The issue of ‘counterproductive’

One of my least favourite words that always pops up in this argument is “counterproductive”. The assertion is that being aggressive turns more people off than it turns on. If we look at the curve in the above image, we can see that while no ‘strong believers’ have crossed the “precipice”, quite a number of those living in the middle are now in a position to seriously question their position. In order for this to be a “counterproductive” shift, an equal or larger number of people would have to be pushed away from questioning their belief.
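To make that break-even condition concrete, here is a quick back-of-envelope sketch in Python. Every number in it is an assumption invented purely for illustration – there is no data here on how many people an aggressive argument actually moves in either direction:

```python
# Toy model of the "counterproductive" objection. An aggressive argument
# moves some fraction of the audience toward questioning ("rightward" on
# the 1-7 scale) while a backlash fraction digs in and moves away
# ("leftward"). All figures below are invented for illustration.

def net_shift(moved_right, step_right, backlash, step_left):
    """Average population movement toward disbelief (positive = net gain)."""
    return moved_right * step_right - backlash * step_left

# Assume 15% of listeners shift right by one scale point on average.
gain = net_shift(moved_right=0.15, step_right=1.0, backlash=0.0, step_left=1.0)

# For the argument to be truly "counterproductive", the backlash would
# have to cancel that entire gain: with equal step sizes, at least 15%
# of the audience would have to move *away* from questioning.
break_even_backlash = 0.15 * 1.0 / 1.0
print(f"net shift with no backlash: {gain:+.2f} scale points")
print(f"break-even backlash fraction: {break_even_backlash:.0%}")
```

The point of the sketch is simply that “counterproductive” is a quantitative claim: it asserts the leftward movement outweighs the rightward one, and that claim is never accompanied by numbers.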

There is no evidence to suggest that such a shift happens. What is more likely is that people simply ignore a given argument if they don’t like the speaker, and their level of belief remains fixed where it was before. Viewed at the individual level this may look like a failure of the argument, but we have to remember that we’re dealing with a distribution of beliefs in a population, not the efficacy for any one person. Furthermore, the audience for an appeal that includes aggression is, by definition, those who are mature enough not to dig in their heels every time their feelings get hurt.

3. Changing minds takes time

The “Diplomat” position also makes the implicit assumption that the goal of a given argument is to turn someone from a believer into a non-believer immediately. This is an attractive fiction, but a fiction nonetheless. People do not arrive at their beliefs all at once, but rather over a period of time. It is a rare person who can look at even the most compelling argument against a position and switch their beliefs immediately. More common is that a series of kernels of cognitive dissonance are introduced, whereafter more questions are asked. This process eventually leads to the changing of minds.

As evidence, consider the stories of people who are outspoken atheists. Many of them start from a position of strong belief and then turn to a more liberal form of their religion. Then, as time progresses and they allow themselves to ask more questions, they slowly (over a number of years, in my case) progress toward a complete rejection of those religious beliefs, then of religious beliefs altogether. More rare are stories of people who have a friend point out, in the nicest language possible, that YahwAlladdha is fictional, after which they say “those are good points – I’m an atheist now!”

4. Believers are not the only audience

Once again, I am up against the word limit for this post, and I think I can devote an entire 1,000 words to the fourth (and in my mind, most important) of these defenses of aggression. What I will do instead is summarize what I’ve said above and leave off the final part of this discussion for next Monday’s “think piece”.

Much of my objection to the “Diplomat” position is that it demands exclusivity. It says that being confrontational is inherently a bad idea because it fails to convert a believer into a non-believer. My contention is that it is neither desirable nor practical to focus on converting individual believers into atheists, especially given the diversity of belief within the general population, and the fact that changing minds takes time. We must remember that we are speaking to a variety of people, who are at different stages in their journey away from belief. One approach is not going to reach everyone, and pushing hard can move people who are already close.

Like this article? Follow me on Twitter!

Banking on poverty

So at various points in the past I’ve talked about the pernicious lie that is the idea of Africa as a barren wasteland. Because Africa’s people are poor, we assume that the continent itself is poor. After all, isn’t that what we see in the charity commercials? People (mostly children) poking through rubble, having to walk miles across a barren wasteland for fresh water, dry savannah with no resources to exploit? It’s a lie, all of it: Africa isn’t poor because it lacks resources; it is poor because it is kept poor:

Hedge funds are behind “land grabs” in Africa to boost their profits in the food and biofuel sectors, a US think-tank says. In a report, the Oakland Institute said hedge funds and other foreign firms had acquired large swathes of African land, often without proper contracts. It said the acquisitions had displaced millions of small farmers.

When colonial powers officially left Africa, they left behind a long legacy of abuse and destabilization of local government. The lack of domestic education and infrastructure meant that newly-minted African leaders were woefully unprepared to resist sweet-sounding offers that came from foreign corporate entities, promising high-paying jobs and modern conveniences. What people didn’t realize was that, much in the same way European powers had taken control of American land from its native people, Africans were signing their lands away.

Africa is incredibly resource-rich, but lacks the human capital to exploit its own resources in the way that, say, the United States was able to do to become a world power (of course, the fact that outside Mauritania Africa doesn’t really have a thriving slave trade prevents it from really matching the USA’s rise to dominance). The result is that Africans have a choice – work for foreign corporate powers or starve. Whatever political will there is for change is tamped down by well-funded and armed warlords who act as political leaders, but reap the rewards of selling their people back into slavery chez nous.

Of course, with no real options for self-improvement, people who wish to survive in Africa agree to work for the corporations. It is only by allowing the conditions to remain oppressive and hopeless that the corporations can maintain an economic stranglehold on the nations of Africa. That is why I am particularly skeptical when one of these same hedge funds – one that owns African land roughly equal in area to the country of France (wait… isn’t colonialism over?) – says something like this:

One company, EmVest Asset Management, strongly denied that it was involved in exploitative or illegal practices. “There are no shady deals. We acquire all land in terms of legal tender,” EmVest’s Africa director Anthony Poorter told the BBC. He said that in Mozambique the company’s employees earned salaries 40% higher than the minimum wage. The company was also involved in development projects such as the supply of clean water to rural communities. “They are extremely happy with us,” Mr Poorter said.

Anyone who knows about the existence of a “company town” knows to be wary of statements like this. When the entire economic health of a municipality is dependent on jobs from one source, the citizens of the town basically become 24/7 employees. Without strong labour unions and the rule of law, this kind of arrangement can persist in perpetuity, or at least until the company decides that there’s no more value to be squeezed from that area and the entire town collapses, creating generations of impoverished people.

Much like we saw in yesterday’s discussion of First Nations reserves, when there is not a strong force for domestic development – whether governmental or otherwise – people are kept trapped in a cycle of poverty. Poverty goes beyond simply not having money – it means having no hope of pulling yourself out. When you lack the means, the education, and the wherewithal to “pull yourself up by your bootstraps” (a term I hate for both rhetorical and mechanical reasons – wouldn’t you just flip your feet over your own head and land on your ass?), all of the Randian/Nietzschean fantasies of some kind of superman building his fortune from scratch can’t save you.

Which is why well-fed free-market capitalist ideologues annoy me so much. The private sector is not bound by ethics, and most of the companies doing this kind of exploitation aren’t the kind of things you can boycott (as though boycotts actually work, which they don’t – just ask BP). When profit is your only motive and law is your only restraint, you’ll immediately flock to places with the least laws and most profits. I’m not suggesting that more government is necessarily the answer – most of the governments in Africa are so corrupt that they simply watch the exploitation happen and count their kickbacks – but neither is rampant and unchecked free market involvement.

Like Canada’s First Nations people, Africans must be given not only the resources but the knowledge and tools to learn how to develop their own land. They must be treated as potential partners and allies, rather than rubes from whom a buck can be wrung. Small-scale development projects that put the control in the hands of the community rather than the land-owners are the way to accomplish this. Not only does it build a sense of psychological pride and move the locus of control back into people’s hands, but there are effects that echo into the future, as new generations of self-sufficient people grow up with ideas and the skills to make them happen.

While it’s all well and good to talk about bootstraps, when there’s a boot on your neck then all the pulling in the world won’t get you onto your own feet.


In defense of my bigoted moron brothers

Black Nonbelievers of Atlanta is a non-crazy freethinkers group in Atlanta, and you should check them out.

This morning I went on a bit of a tirade against KD and Black Son, two of the hosts of a public access television show called “Black Atheists of Atlanta”, for their completely non-scientific rationalization of their anti-gay stance. I got so fired up about tearing them a new asshole that I forgot to talk about the original point I wanted to make about the show.

The first point was that being a member of a minority group (whether that be a racial or ideological minority) doesn’t make you immune from being a bigot or an idiot. Similarly, being an atheist doesn’t automatically mean you’re intelligent – it just means you have at least one thing right. KD and Black Son are just as steeped in the heterosexism of their society as anyone else. While we might be surprised to see someone who is a religious skeptic use the same kind of nonsensical “reasoning” we complain about in apologists, it’s not completely mysterious. The challenge is to be skeptical about all claims, and to apportion belief to the evidence – KD and Black Son clearly aren’t very skilled at appraising the quality of evidence.

The other thing I wanted to say but didn’t get a chance to was a response to something that Hemant wrote:

At one point, someone calls in to say that there is, in fact, a biological basis for homosexuality. The response?

KD: “Those scientists were white, weren’t they?”
Caller: “Why does that matter?”
KD: “It matters to me because I’m black… if you’re not careful, even science can be racist.”

(I’ll admit it’s true that black people have been victims in some experiments, but that’s the fault of individual scientists, not science as a process.)

Hemant’s comment represents a fundamental misunderstanding of racism, and the climate from which things like the Tuskegee experiment came. It wasn’t simply a handful of unscrupulous scientists operating outside the norms that were responsible for the atrocities of the now-infamous abuses done in the name of science. Rather, the rationalization for using these people as they were used sprang from the societal idea that black people were little better than animals, and as such could be used as instruments of medical testing rather than treated as people.

KD’s remark about science being prone to racism is not then an indictment of the process of science, nor is it a misplaced criticism of a few people. It is justifiable skepticism about truths that come from the scientific establishment – an establishment that has demonstrated again and again its vulnerability to racism, sexism, heterosexism… all the flaws we see in human beings. Seen from this perspective, KD’s point is entirely justified – one does have to be careful to ensure that science isn’t racist. We see this taking place in clinical trials, where medicines are tested in primarily white, male populations, and then distributed to the population at large without checking to see if the results are generalizable. To be sure, this is getting better, but we haven’t reached the point where we can stop being careful.

That being said, the correct response is to remain skeptical – not to reject the science. Animal studies of homosexuality have been performed by a variety of scientists in many countries, grounded in field observations and follow-up hypothesis testing rather than in any agenda to prove that gay sex happens in the animal kingdom. This is quite apart from the fact that there is nothing inherent in people of European descent that is pro-gay; white people and black people alike hate LGBT people, in equal measure and with equally little rational support.

So while I am still appalled and horrified by what KD and Black Son said in their broadcast, and find it just as stupid and meritless as I did this morning, I have to defend that particular comment, because it is rooted in a justifiable and rational response to a scientific establishment that is predominantly white and has a long history of racism. Science, properly applied, leads to the acceptance of homosexuality in humans just as surely as it leads to the conclusion that black people are equal in all meaningful ways to all other people.


Normalizing belief

Today I want to dive back into the issue of “accommodation vs. confrontation” that is currently a topic of discussion within the atheist community, but which is germane to any social movement. Summarized, this debate centers on what the “best” way is to engage public opinion and advocate your position. The accommodation camp prioritizes civility, compromise and co-operation as the optimum solution, whereas those in the other camp elect to use direct and uncompromising language to spell out their (our) position.

You may find it strange that I put the word “best” in scare quotes in the above paragraph; I will explain why. As best I can tell, the two positions are talking past each other (actually, I think it is more accurate to say that the “accommodation” camp simply isn’t listening to the other side, since they have repeatedly demonstrated an inability to respond to criticisms of both their position and their approach). The “Diplomats” (a term I use for ‘accommodationists’, because it’s much less unwieldy) consistently invoke examples of one-on-one personal interaction, where the intention of the debate is to change the mind of the other party over the course of discussion. The implication is that insulting someone to their face is a poor way of getting the point across.

The problem with this approach is that it is severely flawed both in its premises and its conclusions. First, the majority of interactions between atheists and believers happen in the course of one-on-one interaction between friends or family members – the idea that atheists are going on rants against their acquaintances is largely fictitious (I will completely ignore the straw man that Diplomats erect of how “Firebrands” speak). Second, the assumption that minds are changed over the course of a conversation or blog post is ridiculous – people are largely resistant to completely changing their minds on positions that are of high importance to them. Third, and what I think is the biggest problem with the position, it presumes that believers are the only audience worth speaking to.

Blog posts, speeches, debates, books – any public exposition of ideas reaches an audience with a diverse range of opinions. As a thought experiment, assume for the moment that public opinion vis-à-vis atheism is normally distributed, and can be plotted on Richard Dawkins’ 7-point scale of statements of belief:

  1. Strong Theist: I do not question the existence of God, I KNOW he exists.
  2. De-facto Theist: I cannot know for certain but I strongly believe in God and I live my life on the assumption that he is there.
  3. Weak Theist: I am very uncertain, but I am inclined to believe in God.
  4. Pure Agnostic: God’s existence and non-existence are exactly equiprobable.
  5. Weak Atheist: I do not know whether God exists but I’m inclined to be skeptical.
  6. De-facto Atheist: I cannot know for certain but I think God is very improbable and I live my life under the assumption that he is not there.
  7. Strong Atheist: I am 100% sure that there is no God.

Both of these assumptions are likely false – there are far more “strong theists” than “strong atheists” in the world, and because of the nature of the variables the distribution terminates at 1 and 7, rather than continuing indefinitely. However, for the purposes of this illustration, violation of these assumptions does not meaningfully impact the point. Granting the assumptions for a moment, we would have a population that looks something like this:

Let us also imagine for a moment that there is something like a “precipice of belief” – a point of confidence beyond which people allow themselves to start asking difficult questions. Arguably, the location of that precipice is entirely dependent on the individual, and there is just as likely to be one for atheism as there is for theism (i.e., it’s theoretically possible for an atheist to begin questioning whether or not there really is a god/gods – in practical terms this is far more rare). But again, looking from the perspective of the general population, there will be an “average” point at which people will start questioning their faith:

It is crucially important at this point to reiterate that I am talking about a population of people rather than any one individual. It is the failure to understand this distinction that is the central flaw of the “Diplomat” position. When an author writes a blog post, or a book, or gives a speech that articulates something from a position of, say, “6”; she/he is speaking to this general audience rather than a particular individual. The goal of the argument is to effect a general shifting of the curve of belief, moving the general population further toward the threshold:

When we consider the new graph, it is immediately clear that while not everyone has crossed the “precipice”, there has been a general shift toward the rightmost edge. But who is it that moved over? It was people who were already teetering on the edge of that precipice, rather than people at the leftmost edge – the “True Believers”, so to speak. True Believers are still believers, but have been subtly moved along in their level of questioning (if it is a particularly effective argument). Even though not a single strong believer has been converted into an atheist, we have accomplished a population-level shift toward atheism, made up of those who were already somewhat predisposed to question.
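The population-level shift described above can be sketched numerically. The snippet below is a toy model only – the means, standard deviation, size of the shift, and location of the precipice are all assumptions chosen for illustration, not measurements of any real population:

```python
from statistics import NormalDist

# Illustrative assumptions (not data): beliefs are normally distributed
# on Dawkins' 1-7 scale with mean 3.5 and SD 1.2, the "precipice of
# belief" sits at 4.5, and a persuasive argument shifts the whole curve
# rightward by 0.4 scale points.
before = NormalDist(mu=3.5, sigma=1.2)
after = NormalDist(mu=3.9, sigma=1.2)
precipice = 4.5

# Fraction of the population past the precipice, before and after.
frac_before = 1 - before.cdf(precipice)
frac_after = 1 - after.cdf(precipice)
print(f"past the precipice before: {frac_before:.1%}")
print(f"past the precipice after:  {frac_after:.1%}")

# "Strong believers" (scores below 2) are still believers after the
# shift, yet the population as a whole has moved toward questioning.
strong_before = before.cdf(2)
strong_after = after.cdf(2)
print(f"strong believers before: {strong_before:.1%}, after: {strong_after:.1%}")
```

Under these made-up numbers, a modest shift of the whole curve noticeably increases the share of people past the precipice without converting a single strong believer – which is exactly the population-versus-individual distinction the “Diplomat” position misses.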

Doesn’t aggression turn some people off?

The common response to this line of reasoning is to point out that some people will refuse to engage if their feelings are hurt. The general point is that those who are 2s and 3s might move further left, or simply shut their ears, thus blunting the effectiveness of the argument (and further arguments to boot). There are a variety of reasons why I don’t find this line of reasoning compelling, but I am butting up against the word limit of this post, so I will save my response for another post. In general, there are 4 major objections: 1) someone who believes in something because the opponents are mean isn’t rational; 2) there would have to be a lot of people turned off for this to be ‘counterproductive’; 3) minds change over a period of time, not at a single instant; and 4) believers are not the only people in the audience.

To summarize, when we think of belief, we have to recognize that we are dealing with a population that has a continuum of strength of conviction. Not everyone is at the point where they are ready to question their beliefs, but when we address an audience we can expect that some people can be moved towards disbelief, even if we don’t reach everyone.


Health care ‘rationing’: Canada vs. the USA

Once again, and I hope you will forgive the digression, I’d like to talk a bit about something that has absolutely nothing at all to do with the usual topics of this blog. This topic is one that is more in line with my professional interests rather than my personal ones (if the two can really be thought of as distinct – I chose this career for a reason). As I may have intimated previously, I am a passionate believer in public provision of health care services.

While private-sector advocates often point to the increased competition and innovation possible in a for-profit delivery model, they neglect two important factors in their argument. First, health care is consumed almost entirely at a point of crisis. People walking into a hospital are not really in a position to “shop around” – they have an acute need and are therefore far less capable of making a dispassionate consumer choice. Second, the only way a for-profit health care delivery system could work is if it is either stringently regulated (a position that is wildly unpopular) or if we just stopped caring if sick people get gouged by unscrupulous corporate interests. Private delivery has the interest of maximizing profit, and while increasing efficiency is one avenue of doing that, companies have figured out that extra billing and price fixing are much more lucrative ways of turning a profit.

The debate over health care reform in the United States has introduced a new word into the public lexicon: rationing. Basically, rationing refers to the belief that under a publicly-administered health care system, only a certain level of care would be available, and if you want more than that, it’s tough shit. It is from this idea (and an intentional misrepresentation of ‘end-of-life counselling’) that the now-infamous “death panels” became a talking point. People became outraged at the idea that the government would step in and say “grandma can’t have that hip replacement, because it’s too expensive”.

First, here’s what’s true about that argument: a publicly-provided health care system will introduce rationing. There will be medications, technologies and procedures that people will not have access to because of lines drawn by government about what is acceptable care and what is excessive.

However, there is already rationing in the American system, and it happens all the time. Any health care system will require rationing – the demand for health care services will always exceed the amount of available resources. Our concepts of disease and health are plastic, and shift as new innovations are made and the understanding of the human body increases. In order to understand health care we must first understand that there is no method of delivery that is free of material constraints – the question then becomes “how can we provide the greatest level of health care with what we’ve got?”

Canada’s approach, and indeed that approach of most industrialized nations that have publicly-funded health care delivery, has been twofold. First, a list of services is drawn up. The Canada Health Act allows for all “medically necessary” services – a definition that is intentionally vague. This imprecise wording means that the number of services that are provided can expand and contract based on need and resource availability. If you have a specific medical need that is not listed – for example, you have a rare disease or want a type of drug that is not covered – then you will have to pay out-of-pocket for it. Obviously, this is non-ideal, but by delineating it this way and drawing up the list in such a way that covers the majority of health care needs, the Canadian system can provide some form of care to everyone, even if it is not the absolute best.

Second, the Canadian system rations in terms of accessibility – the notorious waiting lists. Given a finite level of capital resources (and I am putting human resources on this list as well), demand may fluctuate in such a way as to exceed the capacity of the system to deliver services immediately to all people. For example – if you have the ability to do 10 bone scans a day and 11 people walk in the door, 1 person is going to have to wait until tomorrow (when, hopefully, only 9 people will come in). These waiting lists can be managed with varying levels of efficacy, and we’ve gained some ground in recent years. The fact remains, however, that people cannot necessarily get immediate care for all health conditions (although acute and emergency needs are always prioritized and get attention reasonably fast).
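The bone-scan example can be written out as a simple day-by-day queue. The daily arrival counts below are invented for illustration; the only fixed number from the example is the capacity of 10 scans per day:

```python
# Toy waiting-list model: a clinic that can do 10 scans a day.
# Patients who can't be seen today carry over to tomorrow's queue.
capacity = 10
arrivals = [11, 9, 12, 10, 8]  # invented daily walk-in counts

backlog = 0
backlogs = []  # end-of-day queue lengths, for inspection
for day, new_patients in enumerate(arrivals, start=1):
    waiting = backlog + new_patients
    served = min(waiting, capacity)
    backlog = waiting - served
    backlogs.append(backlog)
    print(f"day {day}: served {served}, still waiting {backlog}")
```

As long as average demand stays at or below capacity, the queue eventually clears; the wait is the price of serving everyone rather than turning some patients away entirely.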

Rationing in the United States is far less publicized, and far more dangerous. Given the same situation (finite resources, high demand), the USA’s system handles rationing by artificially reducing demand by curtailing access. Whereas there may be the same proportion of people requiring care, the United States simply does not provide care to certain people. By knocking people off the rolls (prohibitively high cost of insurance, de-insuring people for a variety of reasons, making coverage contingent on employment), the system ensures that everyone who can get care gets it quickly and to the extent they want/can pay for.

The reason why I call this type of non-explicit rationing more dangerous than the Canadian solution is because the consequences are far more dire for individuals and the economy. For individuals, because losing health coverage (or never having it in the first place) means that people are unable to get care for anything but emergency conditions. For the economy, because those emergency conditions are far more expensive to treat than they are to prevent, and because medical bankruptcy has a ripple effect through the economy at large. This is to say nothing of the reality that public provision is far cheaper than for-profit schemes (despite what free-market advocates would have us believe).

Conclusion

While “rationing” sounds like a scary word, people need to realize it is the inevitable result of a level of demand that is always greater than available supply. Rationing is no more rare in a for-profit system than it is in a publicly-funded one; the only difference is the method of rationing we choose to use. The Canadian solution is to provide services up to a certain level with some barriers to access (waiting times). The American solution is to curtail the number of people who are able to access any level of care. These solutions have different effects, and for reasons of both utilitarian ethics and personal/economic outcomes, the Canadian approach is superior.

Coded racism

Nobody likes to be called a racist. Well, almost nobody, but nobody who wishes to be taken seriously by the general public. We have developed a knee-jerk reaction to racism that has made even the mention of race-sensitive issues abhorrent. This reaction is far from irrational – people have seen how destructive the ideologies of racism are, and how deeply-wounded marginalized communities have become as a result of societal racism. Most people have friends, romantic partners, perhaps even relatives, that are from a different racial group; everyone recognizes that discriminating based on race is a bad thing.

The problem arises when this aversion to racism causes us to become willfully blind to racist practices around us. When confronted with them, we are more likely to explain them away rather than simply admit that we might not be perfect “non-racists”. I’m a particular fan of the way that Stewart Lee characterized it: “…if political correctness has achieved one thing, it’s to make the Conservative party cloak its inherent racism behind more creative language.” Of course we can substitute “Conservative party” with “general public” in most cases. We live in a racist society, and nobody is immune from the subtle voice of cultural indoctrination whispering in our ears.

Given this lack of immunity, the only tools we have to combat the effects of racism are self-awareness and intellectual courage (and surprise…). However, it seems that we prefer instead to use a lexicon that allows us to continue our racist behaviour without seeming racist. This is referred to generally as ‘coded racism’, which I will define as statements of racist ideologies that are carefully designed not to appear racist. I will, for the sake of illustration, give a few examples.

Arizona’s anti-immigration law

Those of you who have been paying attention to the news probably know about Arizona’s new anti-immigration bill, supposedly designed to reduce the amount of illegal immigration to the state. Leaving aside the fact that illegal immigration has absolutely nothing to do with Arizona’s financial woes, the bill reeks of coded racism. The most debated aspect of the bill is the provision that requires police officers to detain anyone that “looks illegal”. No standard has been provided for determining what an illegal immigrant looks like, or how to distinguish someone that “looks illegal” from someone that looks like a legal immigrant. The determination is simply left up to a sort of “c’mon… you know what we’re talking about” process.

Defenders of the bill (and there are many) repeatedly affirm that racism and racial profiling are not the purpose of the legislation, stating instead that it is about fighting illegal immigration; and if all the illegals just happen to be brown-skinned people, that’s just an accident of statistics. We are asked to simply ignore the ‘wink-nudge’ aspects of the bill, along with the extreme anti-Hispanic attitudes that accompany it, and pretend that we don’t see how clearly it targets one group of people. Illegal immigration may be a serious issue in Arizona, and if it is, a program that finds a way to minimize the damage would certainly be necessary. However, one that simply gives police discretion to start locking up people based on the way they look is quite clearly racist, even if we don’t want to use those words to describe it.

The “Ground Zero Mosque”

Many of you will likely remember a year ago when a group intended to build an Islamic community centre in Manhattan, a few blocks away from the former site of the World Trade Center. People immediately began frothing at the mouth, calling it the “Ground Zero Mosque” and claiming that it was a plot by terrorists to insult America. Again, leaving aside for a moment that there was already a mosque there, that they weren’t building a mosque, that the construction would have modeled religious tolerance (something that that particular group of terrorists hates), and that Muslims died in the Sept 11th attacks too, the language used was couched in a kind of “this is about terrorists, not Muslims” language that the frothiest of opponents quickly turned to whenever the racist aspects arose.

I will happily concede the point that ‘Muslim’ isn’t a race. That still doesn’t help the argument. The face of the fight, of the “secret terrorists”, was not that of members of the Nation of Islam (with its militant history) or recently-converted white people (converts are among the most zealous); it was Arabs. When a group of protesters mistakenly confronted a construction worker and began screaming at him, it was based on the fact that he was dark-skinned (black, in fact, but he looked Muslim :P). The particularly galling aspect of this particular issue is that these same opponents would like us to give credence to the ‘wink-nudge’ of putting up an Islamic centre at Ground Zero – “c’mon, you know it’s a thinly-veiled insult to those that died” – but then completely reject the “c’mon, you know it’s racist” criticism from the other side.

Birthers

Remember that time that a majority of Americans elected someone with a long history of community service and patriotic dedication, and how his racial identity was the sign of a new, more mature America? Yeah, me either. What I remember is how every excuse was leveled at a black president (“He’s a secret Muslim!”, “He’s a Black Panther!”, “He’s a Kenyan communist sympathizer”) including the accusation that he was foreign-born. This of course despite the fact that he had released his birth certificate during the campaign, that being born in another country doesn’t necessarily preclude you from holding the office of President, and that the guy on the other side of the election actually was born in another country. No, it was pretty clear that the narrative was about Barack Obama being an “other”, and therefore being a bad choice for president.

The Birthers would have us believe that their chief concern is adherence to the Constitution, and certainly not anything that is motivated by racism. I will certainly concede that a lot of their motivation has to do with hating Democrats and liberals rather than simply blind racial hatred. However, given their staunch refusal to accept the evidence (even when presented over and over again), their close ties to the Tea Party, the demographics of who is making these accusations (how many black, Hispanic, or Asian Birthers do you think there are?), and the nature of the rhetoric buzzing around Obama that wasn’t there for Clinton, one can’t help but see that race enmity is very much a part of the Birther ideology.

You’ll undoubtedly have noticed that all three of the examples I’ve provided are American. This isn’t in any way to suggest that we here in Canada don’t do the exact same thing, particularly when it comes to talking about First Nations people and their ‘government handouts’. That being said, Canadians are much more stealthy in our use of coded racism than our neighbours to the south. These are three dramatic and notorious examples of this process at work.

As I said earlier in this post, it is only by having the courage and integrity to confront our own ideas and motivations that we can identify and eliminate this kind of verbal cloaking. Being able to identify racism and being unafraid to call it out is the first (and second, I guess) step to ameliorating the problem. Failure to do that will only serve to keep us looking the other way, to the detriment of racial minority groups in perpetuity.

TL/DR: As racism has become more unpopular (but no less rare), we have developed a new lexicon to express racist ideas without appearing overtly racist.

Like this article? Follow me on Twitter!

Pol Pot, Stalin, Mao… all irrelevant

Anyone who has ever watched a debate between a theist and an atheist has seen this familiar scene: 1) the atheist points out that religion, despite its claims to inform human morality, has been (and continues to be) responsible for many atrocities and moral outrages; 2) the theist counters that the greatest mass murderers in the history of mankind (usually some combination of Hitler, Stalin, Pol Pot, and Mao) were atheists; 3) the theist wins the argument (note: step 3 may or may not be completely made up). Like the sun rising in the morning, the leaves changing colour in autumn, or the Rapture happening two days ago (remember how awesome that was?), this line of argument is so predictable as to be almost laughable.

There are so many flaws with this argument that it makes the head spin, so I am going to try to walk you, the reader, through them sequentially.

Hitler, Stalin, Pol Pot and Mao were atheists

This is debatable. Leaving aside Hitler for a moment (who was baptized Catholic and used Christian religious imagery extensively as the justification for his racist political ideology), there certainly have been leaders, many of them openly atheist, who killed great numbers of their own people. However, none of the people that are commonly listed (and some that are less commonly mentioned, like Idi Amin, Fidel Castro, and Kim Il-Sung) left religion out of the picture. Instead of worshiping a supernatural deity that speaks directly into the ear of the leader, these men simply bypassed the middle man and pronounced themselves akin to the deity.

Without exception, if you look at how these men ruled their countries, they made themselves figureheads and objects of worship. Even today, there are pictures of Castro and Guevara plastered all over Cuba. Idi Amin essentially was Uganda, and erected a quasi-religious framework around himself; ditto for Stalin (but even more so). Pol Pot and Mao, arguably the closest to being truly atheistic dictators, still installed themselves as nearly-supernatural beings whose word was divine law; in the case of Kim Il-Sung this is quite literally true. Strictly speaking, this doesn’t qualify as atheism. There is a world of difference between saying “there are no gods” and “I am a god”. It exploits the seemingly-innate propensity of human beings to subjugate themselves to something – far closer to the religious position (“I speak for the gods”) than the atheist position.

But, even if that were true…

Let’s pretend for a moment that we can accurately label the above listed dictators as being atheists (in the interest, perhaps, of avoiding being inaccurately accused of using the “No True Scotsman” fallacy). The argument is still invalid because the crimes these men committed were not done in the name of atheism. Whereas theistic murderers often use religious scripture and theological ‘reasoning’ to justify why such-and-such group of people deserves the end of a sword, I know of no examples where someone has said the following:

Because there are no gods, we have the right to murder/oppress this group.

Such a statement would be on par with the justifications that come from religiously-justified crimes against humanity (“God hates fags”, “Unbelievers deserve hell”, “Jews killed Jesus”). And while there have been many atrocities that have happened for non-religious reasons, it is not reasonable or consistent to classify anything that is not pro-theistic as being atheist. The statement “there are no gods” could be twisted to support the murder of people if one was particularly psychopathic, but I don’t think it ever has.

But, even if that were true…

But let’s for a moment imagine that someone unearthed such an example, where the lack of god belief was used as a justification to commit a crime against humanity. Even then, this argument would have no value, since atheism is not a morality claim. The whole purpose of raising the atrocities committed with religious justification is to poke holes in the argument that religious faith is the source of morality, or that adherence to religious codes makes humanity more moral. If this were the case, it would be a rare exception that religious fervor could be twisted to serve a genocidal purpose – people’s faith would steer them away from the clear evil of mass murder.

The fact that even ‘atheistic’ mass murderers used the trappings of religious adherence and unwavering faith to rally people to their clearly immoral cause suggests that, if anything, religion makes people less moral. At least it seems to be useful in getting people to short-circuit their critical thinking faculties and engage in behaviour that, if they were to sit and think rationally about it (or, in hindsight) they would rightly recoil from. Even so, the cup of religion overfloweth with claims of superior morality – claims not supported by the available evidence. Atheism has no such morality claims; it is simply the lack of god-belief. It is entirely incidental (or, more likely, due to a third variable like propensity for independent introspection) that atheists are less likely to murder, rape, etc.

But, even if that were true…

Even if we, for the sake of argument, granted all of the above (untrue) assumptions – that atheistic dictators committed their crimes from a position of atheistic moral authority – this argument would still be completely worthless. The issue of whether or not atheism is nice has absolutely no relationship to whether or not atheism is true. Even if we were to grant that atheists are just as shitty as theists, that doesn’t say anything about which of the two positions is correct – all it says is that people suck. Making the assertion that morality comes from the divine assumes the existence of the divine. Failure to demonstrate the existence of the divine (we’re still waiting, by the way) completely invalidates the theistic moral position. Saying that theists are super-nice doesn’t mean that the gods exist any more than saying atheists are shitty people does. Both positions are entirely orthogonal to the central claim of whether or not gods are real.

In summary

I’m honestly not sure why this argument is perceived to carry any weight in a serious debate. Surely respected theists are aware of Godwin’s Law, and while I hold out no expectations for people debating issues on Reddit or on someone’s Facebook wall, I would imagine that enough people have at least thought through their position long enough to realize that such an assertion has no bearing whatsoever on their position. And yet, keep your eyes and ears open for the next big debate between an atheist and a believer – I’ll be willing to bet cookies that the rotting, shuffling corpse of this thoroughly-useless argument will rise again and attempt to devour the brains of the audience.

Remember, aim for the head.

TL/DR: People are often pointing out that some of the greatest mass murderers in history are atheist. Even if they were, they didn’t kill in the name of atheism. Even if they did, atheists don’t make claims of superior morality because of atheism (whereas religion does). Even if they did, that is irrelevant to whether or not atheism is true.

Mixed up

Those of you who have read this blog for a while, or who know me personally, know that I am what is technically known as “mixed race”. Generally, this means that my parents identify as members of two different ethnic groups. More specifically, my father is black and my mother is white, which according to the racist nomenclature of Jim Crow era America makes me a “mulatto” (a word derived from ‘mule’). At various points in my life, my ‘mixed’ status has meant different things to me.

When I was very young, it used to irk me that people in my mostly white home town, who knew I had one white parent, didn’t see me as half-white. After all, technically speaking it was just as true that I was half-white as much as I was half-black. However, nobody else seemed to think along those lines. When I mentioned it to my dad, he imparted to me one of the first lessons I ever had to learn about race: it doesn’t matter what you are, it’s what other people think you are that matters. It affects the way they treat you, the way they think of you, and the way they see you.

I had the opposite experience living in Mississauga, where there were white kids, “really black” kids, and then me. As if I wasn’t enough of an outcast, being a recent transplant to Ontario, not knowing most of the kids I went to school with, and not really having been exposed to other black kids before, I was viewed with deepening suspicion and ultimately kept on the outside. As much as the kids I hung out with (mostly white, as that was who I was used to being around) accepted me, I knew I didn’t fit in. Most of them were Italian, Maltese, or of another Mediterranean extraction.

As a result of my mixed heritage, I never really connected with the black community where I grew up, only able to view it from the outside. Being in a special-ed program that didn’t exactly overflow with black kids didn’t help much either. To this day I wonder whether the system passively discriminated against the black kids – failing to identify them as “gifted” (in the language of the time – who knows what it’s called now?) because of pre-conceived notions of how black kids are supposed to be. I wonder if that’s the case, or if kids that were intelligent enough to qualify weren’t encouraged at home. As for me personally, I had tons of support. That’s neither here nor there, vis-à-vis this story; I just thought I would big up my home environment.

People of mixed race have been around for as long as there have been distinct racial groups, but as a sociological phenomenon, there has been a marked shift in how kids of my ilk are viewed. First, people no longer call us “half-breeds”, a term I hated when I was younger – my parents aren’t horses or dogs; they didn’t ‘breed’. Furthermore, the idea of someone being “pure” anything is mostly nonsense – everyone is a mutt no matter where they come from. Instead, we are called “part _____”, a much more flexible descriptor that allows for people who are a mixture of many different things. We’ve gotten over our obsession with fractions.

Secondly, people of mixed heritage are no longer seen as an exotic oddity (at least not to the extent that we were before). Perhaps with the rising prevalence of interethnic marriages, some of the shine is off the penny when it comes to the novelty of identifying with more than one group. Even the census and most other questionnaires that ask about ethnicity use a “check all that apply” rather than forcing people to choose one that applies best.

Last week a white supremacist showed up in the comments section. While I’ve dealt with that type before, there’s always a part of me that gets apprehensive because it raises an old spectre that I don’t like thinking about. That is, if genetics (along racial lines) do influence things like intellect and “personal responsibility”, what does that mean for me? They don’t, of course, but what if they did? Is my interest in science and academic topics the result of my “white” half? Is my love of music and dancing the result of my “black” half? Do traits break down like that? Am I a lucky composite of two complementary characteristics?

I am always able to beat those kinds of introspections back with a little bit of skepticism. Are there not many prominent, intelligent black scientists out there? White musicians? Haven’t we learned through history and experience that the reason one group does something better than another is simply a product of culture rather than genetics? The stereotypes we paint each other with are just the result of sloppy thinking. Still, it’s always a struggle to have to deal with those fears every day.

Through this blog, I am trying to encourage readers to engage in skeptical thinking when it comes to race. Above and beyond my love of skewering religious topics, if there’s one thing I’d like you to do it’s learn to recognize and challenge the nearly-inaudible voice of cultural indoctrination when it comes to race. We all have embedded assumptions about groups not like our own (or even of those within our own group), and learning how to catch ourselves when we start unconsciously following those assumptions is a useful tool for dealing with each other fairly.

I learned this trick by reflex, living my entire life trying to figure out how I fit in. I don’t have the option to turn it off, nor would I want to if I could. We can find a way to make our unique set of interactions work well if we are open-minded, careful and honest. If we can all be “mixed” in this way, we can learn important things about each other, and about ourselves.

Mining a silver lining

First off, I want to apologize for shirking my duties this past week. I squandered my weekend, when I should have been writing the posts for last week, doing other stuff. When Monday came around, I had decided to write a post-mortem on the election after the results were in. However, by the time I got home from working at the polls I was so tired and disgusted with the outcome that I couldn’t really marshal my thoughts enough to write anything that I could feel good about. This is the reason why I usually set up a buffer of posts, so as to avoid this exact type of thing.

Secondly, I find it troubling that the week that I decide not to post, my hit count explodes 😛

Finally, this post is going to be a sort of amalgamation of some thoughts that have been kicking around my head for the past week since the election. I’ve titled this post ‘mining a silver lining’, because while it pretty much goes without saying that I am disappointed and fearful about what it means that the Republican North party has a legislative fiat (both in the Parliament and ostensibly in the Senate), I think there are some real good news stories to come out of the election. The political content of the archives of this blog should be sufficient to explain why a Republican North majority is a bad thing for Canada; I will instead focus on some good news speculation.

ALL THE PROGNOSTICATION MEANS ABSOLUTELY DICK

There will be a lot of political commentators (myself among them) who will make predictions about what will or won’t happen under a Republican North majority. The sheer variety of opinions and predictions ensures, mathematically, that most of them will be wrong. Political decisions are influenced by ideology and promises, but occur on a day-to-day basis and are affected by human events. Nobody can predict exactly what human beings will do, as this world is a chaotic place. Nobody would have expected U.S. foreign policy to make a dramatic series of shifts based on events in the Middle East and Northern Africa. Fewer still would have predicted that Japan’s economy would take a tumble after an earthquake and resultant nuclear accident.

My point here is that no matter who makes the predictions, policy will adapt to the immediate circumstances around it. Changes in technology, in climate, in foreign politics, in any number of things will have a strong influence on how Stephen Harper’s policy decisions will be made. Trying to predict specific actions over a four-year period is a complete waste of time, and can be enjoyed only as an intellectual masturbatory exercise.

Now I will commence to fapping.

STEPHEN HARPER IS THE LEADER OF A DIVIDED PARTY

The Republican North party is made up of two core constituencies: social conservatives and fiscal conservatives. The perhaps unspoken (or certainly under-spoken) reality that accompanies such a grouping is that while they may claim to be related ideologies, the two are in fact orthogonal. There is nothing in the doctrine of social conservatism that lends itself to fiscal conservatism – in fact the two are often at cross purposes. Libertarians and Classical Liberals believe that the government has no business whatsoever legislating either social issues or economic issues – only in safeguarding individual liberties. The reason the Republican North party was able to pick up so much support is that they catered to the economic centre/right, which is also a part of the Liberal party’s core constituency.

The only way (as far as I can see) that the RNP was able to stitch these two groups together was to simultaneously forge a false equivalence between these two perpendicular political perspectives, and to publicly proclaim disinterest in social policy while quietly whispering assurances to their social base that those issues would come to the fore once a majority was achieved. Now that this is a political reality, Prime Minister Harper will have to ‘pay the piper’, so to speak, by advocating positions that are wildly unpopular among the Canadian majority. If he fails to do this, social conservatives who have long felt ignored by the federal government will abandon the RNP and revive the Reform party. Should he capitulate to their whims, he will alienate the Libertarian/Classical Liberal wing of his party.

Pulling this off will be a deft balancing act, one that would take an extraordinary statesman and leader to accomplish. Stephen Harper is neither of these.

JACK LAYTON MAY EXERCISE A GREAT DEAL OF CONTROL

Part of the success of the RNP during their successive minority governments was Stephen Harper’s ability to keep the reins of his party tightly held. Information did occasionally leak, but for the most part the government spoke from one perspective only. Considering the number of wingnuts in the party, keeping that communication clamped down was an extraordinary achievement that served the party’s interests well. Jack Layton may be able to exercise the same kind of party discipline, albeit in a dramatically different way.

Nobody really predicted that the NDP would make the strides they did in this past election (owing largely to Quebec, but also partially due to the implosion of the Liberal party). Jack Layton now finds himself the leader of a party with 102 seats, many held by rookie politicians. The NDP brand has been, since the early 2000s, consistently centred on Jack Layton himself, rather than a particular policy position. The rookie MPs will be looking to Mr. Layton for guidance and instruction, more so than would a team of seasoned veterans. While Jack will have to pull in some of his own wingnuts and handle more than the ordinary number of blunders born of inexperience, he will also have a party that gets virtually all of its cues from him. In this way, the NDP can appear more organized and credible than they legitimately are. This means that progressive decisions and policies can be articulated without seeming like they’re coming from the hippie fringe.

ELIZABETH MAY WAS ELECTED

I am not a Green voter. I did vote Green in 2006, because my riding was a safe bet and I supported electoral reform. I think the Green party can articulate a non-corporate perspective that is sorely and noticeably absent from the other three major parties. Elizabeth May is a gifted speaker and is able to articulate environmental policy issues well. She’s also shown herself to be indomitable and highly resistant to intimidation in the face of overwhelming opposition. While I don’t necessarily agree with her party’s platform on many issues (medicine and health care being chief among those), I am glad to see a more pluralistic Parliament.

Her election also serves the purpose of giving the Green party legitimate political status. Voting Green is now a legitimate alternative, and while the party is still in its infancy in terms of credibility, having elected an MP (over an RNP cabinet minister, no less) certainly vaults it into the standings. After all, they only have 3 fewer seats than the Bloc, who used to be the official opposition 😛

POLARIZATION IS BAD, BUT NOT ALL BAD

One doesn’t have to look much further than the United States (a name that is becoming progressively more ironic) to see how dangerous political polarization can be. Polarization forces people to make choices to support positions they don’t agree with in the name of party affiliation. Having a plurality of perspectives means that government will be more stable, rather than erratically jerking back and forth from right to left. Canada has elected a far-right government with a far-left opposition (although I don’t think either of those descriptions are really fair in the general scheme of things), meaning that for the first time in a long time we see a stark separation between the usually moderate people of this country.

However, there is one upside to polarization that has to do with a necessary consequence of good government. When the government is largely running things behind the scenes and caters to the will of the majority, people become complacent. Why bother getting up in arms about a government whose actions are largely invisible and that I agree with for the most part? Having the debate happen more to the extremes, with policies to match, means that government activity will become increasingly salient to the average Canadian. People will see that their actions (or inactions, as the case may be) can allow dangerous legislation that is contrary to their personal interests to be passed largely without comment. Perhaps having an RNP majority government is what Canada needs as a kick in the pants to spur increased political involvement by its populace.

SUMMARIZING THOUGHTS

As I’ve made clear, I’m not happy about this election. My best-case scenario would have seen a diminishing Harper minority with a strong NDP opposition – allowing the further fragmentation of the right and bringing progressive issues to the fore. What I got instead was a bizarro world in which a 2% increase in political support for the RNP means 30 more seats and the Bloc has all but evaporated. It is an interesting time for Canadian politics, and while there will undoubtedly be some serious damage done in the interim (I’m thinking specifically of crime, climate change and the strength/direction of the Canada Health Act), there may yet be some positive stories to come out of this.

I am back to my regular self, and am recommitting myself to articulating my position. I promise – no more weeks of rage (well… hopefully).

“Natural Law” – When to ignore someone (pt. 4)

Arguments are powerful things in the world of rhetoric. When considering any given topic, familiarity with the cognitive and evidentiary frameworks that pertain to that topic can be of great use both in understanding and defending a position. Some arguments (albeit few) are powerful enough to justify a position all by themselves; most positions require a variety of arguments to be fully persuasive. Conversely, there are some arguments that are so weak that it is reasonable to completely ignore anyone who would try and press them into service.

I have so far dealt with four such arguments: “common sense”, “I’ve done my own research”, any sentence that starts with “I believe that…” and back-filling explanations to satisfy an a priori conclusion. “Common sense” is a poorly-named concept, because it presumes that people perceive and process information in a uniform way. Doing your own research rarely meets the standard of “research” required to be authoritative or replicable. A person’s individual belief in a thing does not grant it legitimacy, regardless of the sincerity of that belief. Finally, reliable information cannot be gained by assuming the truth of the conclusion, then looking for confirmatory evidence.

These are all specious and worthless arguments, and carry with them no persuasive force when the audience is able to think about them critically. To this list, I would like to add any argument that is contingent on the concept of “natural law”. There are a surprising number of thinkers and theorists that use this concept, and a separate definition for each. The particular understanding of the concept that I find to be most vacuous is perhaps best articulated by the Catholic Church:

The natural law expresses the original moral sense which enables man to discern by reason the good and the evil, the truth and the lie: The natural law is written and engraved in the soul of each and every man, because it is human reason ordaining him to do good and forbidding him to sin… But this command of human reason would not have the force of law if it were not the voice and interpreter of a higher reason to which our spirit and our freedom must be submitted.

The general thrust of this definition is that humans have an innate sense of right and wrong, and that this sense is both reliable and derived through human reason. The weaknesses of the Catholic position (the conjuring of the existence of their specific god and a human soul) aside, the very concept is still meritless, or at least not borne out by evidence. The diversity of ways in which people react to similar moral quandaries is evidence that there is no uniform moral sense. The existence of quandaries – situations in which a reasonable case can be made for or against a given action – is evidence enough that there is nothing “written and engraved in the soul” of anybody.

There are a variety of reasonable ways of arriving at a moral decision – the entire field of ethics attests to this fact. A variety of ethical constructs and theoretical scaffolds have been invented to codify a method of consistently arriving at conclusions that maximize the good and minimize the negative. However, when a given action may cause both good and evil (e.g., giving a life-saving blood transfusion to a Jehovah’s Witness against her/his will), our supposed innate moral sense fails us. One person may choose to ignore her innate moral sense to preserve life in favour of obeying the patient’s wishes, while another may reject the patient’s irrational belief in favour of giving him life-saving treatment. Both of these choices are justifiable (although, for the record, medical ethics fall firmly on the side of patient autonomy). Neither can be said to violate either human reason or some kind of ‘natural law’.

While this argument would be merely annoying if invoked in the abstract, it is sometimes assumed to be valid, and then used to justify all manner of harm:

…tradition has always declared that “homosexual acts are intrinsically disordered.” They are contrary to the natural law. They close the sexual act to the gift of life. They do not proceed from a genuine affective and sexual complementarity. Under no circumstances can they be approved.

Divorce is a grave offense against the natural law. It claims to break the contract, to which the spouses freely consented, to live with each other till death. Divorce does injury to the covenant of salvation, of which sacramental marriage is the sign. Contracting a new union, even if it is recognized by civil law, adds to the gravity of the rupture: the remarried spouse is then in a situation of public and permanent adultery:

Basing regulations on the non-existent natural law is dangerous and detrimental to those caught outside the realm of what the authority deems acceptable. Two women who are in love, or a man who wants to leave his abusive wife, are shit out of luck because those things are ‘against natural law’ – as though loving who you choose and self-preservation are somehow irrational goals.

What we see in both the conception and application of ‘natural law’ is simply a collision of ‘common sense’ and back-filling: “I don’t like these things for whatever reason, so I will look for a justification for my dislike that makes it seem rational.” As an argument, it is the equivalent of throwing up your hands and saying “because I said so, that’s why!” It takes courage and honesty to recognize that things you don’t like may be honestly justifiable to some, based on valid precepts (and no, I don’t count cultural norms or appeals to tradition among the list of valid precepts). Homosexuality seems weird to me, and I may not like it (for the record, I don’t really have strong feelings one way or the other, although I am immensely proud of our society whenever I see a gay couple together openly). I don’t agree with polygamy. I think that religious rules about diet or medical treatment are stupid. My personal discomfort with a practice is, however, not evidence that said practice is ‘against natural law’. It just means I don’t like it.
