Cargo Cult debate

One thing science fetishists can’t bear is to have their obvious ignorance of science pointed out:

Babak Golshahi ‏@bgolshahi1
I love being able to back up what I say with hard evidence, peer reviewed scientific consensus.

Supreme Dark Lord ‏@voxday
50 percent of which is proven to be wrong when replication is attempted. You’re out of date.

Babak Golshahi ‏@bgolshahi1
replication of what? You got a peer reviewed piece or really any article that backs up your claim? Waiting.

Supreme Dark Lord ‏@voxday
Mindlessly repeating the words “peer review” and citing “articles” shows you’re a low-IQ ignoramus.

Babak Golshahi ‏@bgolshahi1
you apologize for that or you’re blocked

Supreme Dark Lord ‏@voxday
Block away, moron. It won’t fix peer review or change the fact that you’re both stupid and ignorant.

Babak Golshahi ‏@bgolshahi1
You are blocked from following @bgolshahi1 and viewing @bgolshahi1’s Tweets.

I wish more of these morons would use Randi Harper’s anti-GG autoblocker, so I wouldn’t be subjected to their repetitive idiocy.

It is important to understand that if you’re prone to demanding “peer reviewed pieces” or shouting “logical fallacy” at people with whom you are arguing, you’re probably a midwit who doesn’t really understand what you’re talking about. In these and other similar cases, what we have is a person who has seen someone else win an argument by refuting another individual’s claims with scientific evidence, or by identifying a specific logical fallacy being committed, and who is trying to imitate that winner without understanding what he was actually doing.

But if there is no genuine substance behind the demand or the identification, if you don’t have your own competing scientific evidence or you can’t point out the actual logical fallacy – and there is a massive difference between the set of flawed syllogisms and the subset of logical fallacies – then you have no business talking about such things.

The failure to cite a peer-reviewed study means nothing in the absence of competing citations. The claim of logical fallacy means nothing when the precise fallacy is not identified. If you don’t understand those things, stop embarrassing yourself by arguing with people and start reading.

Otherwise, you’re no different than the ignorant South Pacific islander building runways in the hopes that the magic sky machines will descend bearing gifts.


Diversity kills community

The same negative effect of diversity on community discovered – and initially buried – by Robert Putnam in the United States is replicated in the United Kingdom by a study entitled Does Ethnic Diversity Have a Negative Effect on Attitudes towards the Community? A Longitudinal Analysis of the Causal Claims within the Ethnic Diversity and Social Cohesion Debate:

We observe that as a community becomes more diverse around an individual, they are likely to become less attached to their community. This is a strict test of the causal impact of diversity, minimizing unobserved heterogeneity and eliminating selection bias. Importantly, neither indicator of disadvantage is significantly associated with attachment.

Model 2 shows the same analysis among movers. Diversity is again significant and negative, suggesting that individuals who move from more diverse to less diverse communities are likely to become more attached (and vice versa).

Like calls to like. Most people prefer to live among their own. Segregation is not only the right of free association in action, it is a community imperative. This is further evidence that the increasingly diverse United States will not survive because it cannot survive. It is not a nation.


All ur hashtag are belong to us

The Global Warming charlatans are planning a propaganda push. This is from a science activist mailing list.

Climate Feedback works like this: Using the new web-annotation platform Hypothesis, scientists verify facts and annotate online climate articles, layering their insights and comments on top of the original story. They then issue a “5-star” rating so readers can quickly judge stories’ scientific credibility. Recognized by NASA, the UN Framework Convention on Climate Change and California Gov. Jerry Brown among others, Climate Feedback is already improving journalistic standards by flagging misreported climate science in mainstream outlets; earlier this month, for example, scientists took apart Bjorn Lomborg’s misleading op-ed in the Wall Street Journal. This is only a hint of what Climate Feedback has in store as it begins to aggregate those credibility scores into a wider index, rating major news sources on their reporting of climate change as part of a new Scientific Trust Tracker.

To that end, Climate Feedback is launching a crowd funding campaign on April 27 around the hashtag #StandWithScience, supported by leading climate minds like Profs. Michael Mann, Naomi Oreskes and others. I invite you to take a look at this sneak preview of our campaign (NOTE: please do not share publicly before April 27). The Exxon climate scandal has already made its way into the 2016 election season, but few have discussed the role the media has played enabling corporate interests to sow doubt about the science of climate change, which has long confused the public and undermined political support for dealing with the issue. As 350.org founder Bill McKibben said of Climate Feedback: “Scientists are just about ready to come out of the lab and get more active and when they do, it will make a remarkable difference.”

Let’s disrupt it. VFM, you know what to do. Political activism is not science. #StandWithScience.


The intrinsic unreliability of science

More and more investigations of quasi-scientific shenanigans are demonstrating the need for more precision in the language used to describe the field that is too broadly and misleadingly known as “science”:

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid….

Paradoxically, the situation is actually made worse by the fact that a promising connection is often studied by several independent teams. To see why, suppose that three groups of researchers are studying a phenomenon, and when all the data are analyzed, one group announces that it has discovered a connection, but the other two find nothing of note. Assuming that all the tests involved have a high statistical power, the lone positive finding is almost certainly the spurious one. However, when it comes time to report these findings, what happens? The teams that found a negative result may not even bother to write up their non-discovery. After all, a report that a fanciful connection probably isn’t true is not the stuff of which scientific prizes, grant money, and tenure decisions are made.

And even if they did write it up, it probably wouldn’t be accepted for publication. Journals are in competition with one another for attention and “impact factor,” and are always more eager to report a new, exciting finding than a killjoy failure to find an association. In fact, both of these effects can be quantified. Since the majority of all investigated hypotheses are false, if positive and negative evidence were written up and accepted for publication in equal proportions, then the majority of articles in scientific journals should report no findings. When tallies are actually made, though, the precise opposite turns out to be true: Nearly every published scientific article reports the presence of an association. There must be massive bias at work.

Ioannidis’s argument would be potent even if all scientists were angels motivated by the best of intentions, but when the human element is considered, the picture becomes truly dismal. Scientists have long been aware of something euphemistically called the “experimenter effect”: the curious fact that when a phenomenon is investigated by a researcher who happens to believe in the phenomenon, it is far more likely to be detected. Much of the effect can likely be explained by researchers unconsciously giving hints or suggestions to their human or animal subjects, perhaps in something as subtle as body language or tone of voice. Even those with the best of intentions have been caught fudging measurements, or making small errors in rounding or in statistical analysis that happen to give a more favorable result. Very often, this is just the result of an honest statistical error that leads to a desirable outcome, and therefore it isn’t checked as deliberately as it might have been had it pointed in the opposite direction.

But, and there is no putting it nicely, deliberate fraud is far more widespread than the scientific establishment is generally willing to admit.
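The base-rate arithmetic in the quoted passage can be sketched with a quick simulation. The parameters below — a 10 percent prior that an investigated hypothesis is true, 80 percent statistical power, a 5 percent false-positive rate, and a journal that only publishes positive results — are illustrative assumptions, not figures from the article:

```python
import random

random.seed(0)

# Illustrative assumptions, not figures from the article:
PRIOR_TRUE = 0.10   # fraction of investigated hypotheses that are true
POWER = 0.80        # chance a true effect yields a positive result
ALPHA = 0.05        # chance a null effect yields a spurious positive

published_true = 0
published_false = 0

for _ in range(100_000):
    hypothesis_is_true = random.random() < PRIOR_TRUE
    p_positive = POWER if hypothesis_is_true else ALPHA
    # Publication bias: only positive results get written up and accepted.
    if random.random() < p_positive:
        if hypothesis_is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Spurious share of published positives: {published_false / total:.0%}")
# Analytically: (0.9 * 0.05) / (0.1 * 0.8 + 0.9 * 0.05) = 36%
```

Even under these fairly generous assumptions, roughly a third of the published positive findings are spurious; lower the prior or the power and the published literature degrades further.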

Never confuse either scientistry or sciensophy for scientody. To paraphrase, and reject, Daniel Dennett’s contention, do not trust biologists or sociologists or climatologists, or anyone else who calls himself a scientist, simply because physicists get amazingly accurate results.


Scientistry and sciensophy

Keep this sordid history of scientific consensus in mind every time you hear the AGW/CC charlatans selling their global government scam on that basis:

In 1980, after long consultation with some of America’s most senior nutrition scientists, the US government issued its first Dietary Guidelines. The guidelines shaped the diets of hundreds of millions of people. Doctors base their advice on them, food companies develop products to comply with them. Their influence extends beyond the US. In 1983, the UK government issued advice that closely followed the American example.

The most prominent recommendation of both governments was to cut back on saturated fats and cholesterol (this was the first time that the public had been advised to eat less of something, rather than enough of everything). Consumers dutifully obeyed. We replaced steak and sausages with pasta and rice, butter with margarine and vegetable oils, eggs with muesli, and milk with low-fat milk or orange juice. But instead of becoming healthier, we grew fatter and sicker.

Look at a graph of postwar obesity rates and it becomes clear that something changed after 1980. In the US, the line rises very gradually until, in the early 1980s, it takes off like an aeroplane. Just 12% of Americans were obese in 1950, 15% in 1980, 35% by 2000. In the UK, the line is flat for decades until the mid-1980s, at which point it also turns towards the sky. Only 6% of Britons were obese in 1980. In the next 20 years that figure more than trebled. Today, two thirds of Britons are either obese or overweight, making this the fattest country in the EU. Type 2 diabetes, closely related to obesity, has risen in tandem in both countries.

At best, we can conclude that the official guidelines did not achieve their objective; at worst, they led to a decades-long health catastrophe. Naturally, then, a search for culprits has ensued. Scientists are conventionally apolitical figures, but these days, nutrition researchers write editorials and books that resemble liberal activist tracts, fizzing with righteous denunciations of “big sugar” and fast food. Nobody could have predicted, it is said, how the food manufacturers would respond to the injunction against fat – selling us low-fat yoghurts bulked up with sugar, and cakes infused with liver-corroding transfats.

Nutrition scientists are angry with the press for distorting their findings, politicians for failing to heed them, and the rest of us for overeating and under-exercising. In short, everyone – business, media, politicians, consumers – is to blame. Everyone, that is, except scientists….

In a 2015 paper titled Does Science Advance One Funeral at a Time?, a team of scholars at the National Bureau of Economic Research sought an empirical basis for a remark made by the physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

The researchers identified more than 12,000 “elite” scientists from different fields. The criteria for elite status included funding, number of publications, and whether they were members of the National Academies of Science or the Institute of Medicine. Searching obituaries, the team found 452 who had died before retirement. They then looked to see what happened to the fields from which these celebrated scientists had unexpectedly departed, by analysing publishing patterns.

What they found confirmed the truth of Planck’s maxim. Junior researchers who had worked closely with the elite scientists, authoring papers with them, published less. At the same time, there was a marked increase in papers by newcomers to the field, who were less likely to cite the work of the deceased eminence. The articles by these newcomers were substantive and influential, attracting a high number of citations. They moved the whole field along.

A scientist is part of what the Polish philosopher of science Ludwik Fleck called a “thought collective”: a group of people exchanging ideas in a mutually comprehensible idiom. The group, suggested Fleck, inevitably develops a mind of its own, as the individuals in it converge on a way of communicating, thinking and feeling.

This makes scientific inquiry prone to the eternal rules of human social life: deference to the charismatic, herding towards majority opinion, punishment for deviance, and intense discomfort with admitting to error. Of course, such tendencies are precisely what the scientific method was invented to correct for, and over the long run, it does a good job of it. In the long run, however, we’re all dead, quite possibly sooner than we would be if we hadn’t been following a diet based on poor advice.

It is always necessary – it is absolutely vital – to carefully distinguish between scientody, or the scientific method, and scientistry, which is the scientific profession. The evils described in this article are not indicative of any problems with scientody, they are the consequence of the inevitable and intrinsic flaws with scientistry.

To simply call everything “science” is to be misleading, often, but not always, in innocence. Science has no authority, and increasingly, it is an intentional and deceitful bait-and-switch, in which the overly credulous are led to believe that because an individual with certain credentials is asserting something, that statement is supported by documentary evidence gathered through the scientific method of hypothesis, experiment, and successful replication.

In most – not many, but most – cases, that is simply not the case. Even if you don’t use these neologisms to describe the three aspects of science, you must learn to distinguish between them or you will repeatedly fall for this intentional bait-and-switch. In order of reliability, the three aspects of science are:

  • Scientody: the process
  • Scientage: the knowledge base
  • Scientistry: the profession

We might also coin a new term, sciensophy, as practiced by sciensophists, which is most definitely not an aspect of science, to describe the pseudoscience of “the social sciences”, as they do not involve any scientody and their additions to scientage have proven to be generally unreliable. Economics, nutrition, and medicine all tend to fall into this category.


Liberals, not conservatives, hate science

As Maddox has amply demonstrated, they don’t “fucking love science”, they like pictures that remind them of science. Actual science, they hate, because it’s not careful of their precious feelings and tends to gradually destroy their sacred narratives:

I first read Galileo’s Middle Finger: Heretics, Activists, and the Search for Justice in Science when I was home for Thanksgiving, and I often left it lying around the house when I was doing other stuff. At one point, my dad picked it up off a table and started reading the back-jacket copy. “That’s an amazing book so far,” I said. “It’s about the politicization of science.” “Oh,” my dad responded. “You mean like Republicans and climate change?”

That exchange perfectly sums up why anyone who is interested in how tricky a construct “truth” has become in 2015 should read Alice Dreger’s book. No, it isn’t about climate change, but my dad could be excused for thinking any book about the politicization of science must be about conservatives. Many liberals, after all, have convinced themselves that it’s conservatives who attack science in the name of politics, while they would never do such a thing. Galileo’s Middle Finger corrects this misperception in a rather jarring fashion, and that’s why it’s one of the most important social-science books of 2015.

At its core, Galileo’s Middle Finger is about what happens when science and dogma collide — specifically, what happens when science makes a claim that doesn’t fit into an activist community’s accepted worldview. And many of Dreger’s most interesting, explosive examples of this phenomenon involve liberals, not conservatives, fighting tooth and nail against open scientific inquiry.

It’s probably not a book anyone who reads this blog regularly needs to read, but it may be one that most of us would like to give to someone we know. As Nassim Taleb explains, what passes for science simply isn’t really science and it certainly isn’t reliable.

What we are seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

With psychology papers replicating less than 40%, dietary advice reversing after 30y of fatphobia, macroeconomic analysis working worse than astrology, microeconomic papers wrong 40% of the time, the appointment of Bernanke who was less than clueless of the risks, and pharmaceutical trials replicating only 1/5th of the time, people are perfectly entitled to rely on their own ancestral instinct and listen to their grandmothers with a better track record than these policymaking goons.

Indeed one can see that these academico-bureaucrats wanting to run our lives aren’t even rigorous, whether in medical statistics or policymaking. I have shown that most of what Cass-Sunstein-Richard Thaler types call “rational” or “irrational” comes from misunderstanding of probability theory.


Mailvox: atheism and the motte-and-bailey analogy

BJ, an atheist, didn’t feel the topic that was debated in On the Existence of Gods was entirely fair.

As an atheist, I agree that Vox won the debate. His arguments were more persuasive and coherent. Dominic was a good sport, but he was attacking a castle with no cannons, no towers, no ram, not even a ladder. I don’t think it is a fair debate topic, though that is not Vox’s fault. It’s what Myers originally claimed and what Dominic agreed to. But it’s not a fair view on the subject.

This is the standard motte and bailey for defending theism. You replace ‘proof of god’ with ‘doubt of science’ and hope no one calls you on it (Dominic didn’t). Then you push the atheist into admitting they can’t rule out the possibility of the existence of something which may resemble a god or gods. Most people consider that a win.

The problem I have with that is no priest suggests the possibility of a god or gods, they talk about very specific gods with very specific rules, demand very specific obedience, and ask for very real money. None of them can prove their god is real but that is the bailey position; when they are under attack they retreat to the motte position, which is just “you can’t prove god(s) DON’T exist.” Kinda weak basis for tithing 10% of my income.

On the one hand, this is an entirely reasonable point with which I agree. In fact, I repeatedly point out, in both On the Existence of Gods and in The Irrational Atheist, that the argument for the existence of the supernatural, the argument for the existence of Gods, and the argument for the existence of the Creator God as described in the Bible are three entirely different arguments.

One could further observe, with equal justice, that none of these three arguments suffice to establish the Crucifixion and Resurrection of Jesus Christ of Nazareth or the existence of the Holy Trinity as described in the Constantinian revision of the original Nicene Creed.

The problem, however, is that BJ reverses the motte-and-bailey analogy as it is actually observed in the ongoing atheism-Christianity debate. For example, even in the debate he criticizes, Dominic’s sallies were initially directed at all forms of supernaturalism before being knocked back by my response which observed that the supernatural is a set of which gods are merely a subset.

More importantly, there was never any retreat to the Christian bailey. It simply wasn’t the subject at hand; the purpose of the debate was to challenge the atheist claim to the motte claimed by PZ Myers. And as for Dominic supposedly failing to call me on the very rational and substantive grounds to doubt the legitimacy of science, particularly as it relates to science’s ability to address the subject of gods, that was an intelligent tactical move on his part, because I would have easily demolished any attempt to rely upon science in that manner.

As readers of this blog know, I don’t regard science as being even remotely reliable in its own right, I consider its domain to be limited, and there is considerable documentary, logical, and even scientific evidence to support that position. It is certainly an effective tool, when utilized properly, but it is not a plausible arbiter of reality.

In any event, those interested in the subject appear to find On the Existence of Gods to be a worthy addition to the historical discussion, as it is currently #2 in the Atheism category, sandwiched between a pair of books by Richard Dawkins. If you haven’t posted a review yet, I would encourage you to do so.


Idolocracy and idiocracy

I saw part of an episode of American Idol last night, and what struck me immediately was that both the show and the commercials run for its viewers were entertainment for retards and children. There was an Angry Birds skit/commercial that was very nearly as embarrassing as it was insulting to the intelligence of the audience. As near as I could tell in the 10 minutes or so that I managed to endure it, it looked as if it was aiming for an audience with an IQ of around 85-90. This makes commercial sense, of course, given the fact that I’ve calculated the average US IQ has fallen at least four points based on demographic change alone.

I thought my calculation was pessimistic for the long-term fate of the USA, but it turns out that the situation may well be considerably worse. If Bruce Charlton and Michael Woodley are correct, idiocracy is already here and there appears to be no way to reverse the course of the intellectual decline short of either a) a cataclysmic collapse and rebuilding of Western society or b) totalitarian scientific eugenicism on steroids.

It has been a fascinating, and I must admit horrifying, three-and-a-bit years since Michael Woodley and I first discovered the first objective evidence that there has been a very substantial decline in general intelligence (‘g’) over the past two hundred years – the evidence was posted on this blog just a few hours after we discovered it:

Since then, Michael has taken the lead in replicating this finding in multiple other forms of data, and in a variety of paradigms; and learning more about the magnitude of change and its timescale. His industry has been astonishing! 

We currently believe that general intelligence has declined by approximately two standard deviations (which is approximately 30 IQ points) since 1800 – that is, over about 8 generations.

Such a decline is astonishing – at first sight. But its magnitude has been obscured by social and medical changes so that we underestimate intelligence in 1800 and over-estimate intelligence now.

On the other hand, magnitude and rapidity of decline in world class geniuses in the West (and of major innovations) does imply a decline of intelligence of at least 2 SDs – so from that perspective the rate and size of decline is pretty much as-expected.

So much for the quaint notions of a shiny, sexy, seculatopia where reason and logic would reign over all. If they are right, we’ll be fortunate if our great-great-grandchildren don’t return to the trees and seas, a-grunting as they go.

To a certain extent, the crisis facing the species is similar to that of Nigeria, only writ large. Whereas the Nigerian population used to be limited by high child mortality and was able to feed itself, the importation of Western science and medical care reduced the child mortality rate, caused the population to explode, and has rendered the nation both unable to feed itself, and less intelligent on average as well.

In the West, one need only compare the difference between the popular books of fifty, one hundred, and two hundred years ago with today’s bestsellers to observe that there has been a prodigious decline in readers’ tastes, despite the fact that the less-intelligent half of the population doesn’t read at all.

These changes are not merely dysgenic and dyscivic, they are dyscivilizational. Which causes me to suspect that the future trend is not merely going to be nationalistic, but highly eugenicist as well. The first nation to ensure its homogeneity and solve the declining intelligence challenge will have a significant advantage over all the rest. The only upside that I see is that there should be no desire whatsoever to attack and rule over other nations and populations, although that carries some potentially ominous implications too.

I certainly hope they’re wrong, because it’s enough to make even a hard-core atheist science-fetishist want to say: “Come, Lord Jesus, and soon!”


Stupidity vs psychopathy

That is the correct way to describe the argumentum ad absurdum of the religious mind versus the rational mind:

To believe in a supernatural god or universal spirit, people appear to suppress the brain network used for analytical thinking and engage the empathetic network, the scientists say. When thinking analytically about the physical world, people appear to do the opposite.

“When there’s a question of faith, from the analytic point of view, it may seem absurd,” said Tony Jack, who led the research. “But, from what we understand about the brain, the leap of faith to belief in the supernatural amounts to pushing aside the critical/analytical way of thinking to help us achieve greater social and emotional insight.”

Jack is an associate professor of philosophy at Case Western Reserve and research director of the university’s Inamori International Center of Ethics and Excellence, which helped sponsor the research.


“A stream of research in cognitive psychology has shown and claims that people who have faith (i.e., are religious or spiritual) are not as smart as others. They actually might claim they are less intelligent,” said Richard Boyatzis, distinguished university professor and professor of organizational behavior at Case Western Reserve, and a member of Jack’s team.

“Our studies confirmed that statistical relationship, but at the same time showed that people with faith are more prosocial and empathic,” he said.

In a series of eight experiments, the researchers found the more empathetic the person, the more likely he or she is religious.

That finding offers a new explanation for past research showing women tend to hold more religious or spiritual worldviews than men. The gap may be because women have a stronger tendency toward empathetic concern than men.

Atheists, the researchers found, are most closely aligned with psychopaths—not killers, but the vast majority of psychopaths classified as such due to their lack of empathy for others.

This is yet another piece of scientific evidence in support of my hypothesis that atheism is nothing more than the predictable consequence of being neurologically atypical; that atheism is what might just as reasonably be described as social autism.

Which, of course, is just another way of describing a lack of empathy. This makes sense, as I have all the attributes of the average atheist, with one key exception: I am highly empathetic. The short answer to the common question: “how can you believe in God when you are highly intelligent and well-educated” is “Because I am capable of empathizing with my fellow Man.”

As will be clear to anyone who has read the Metaphysics bestseller, On the Existence of Gods, atheism is not a rational position justified by reason and evidence. It is, quite to the contrary, an instinctive and emotional reaction to the atheist’s inability to identify with and relate to the world around him. This is why most atheists become atheists in their teenage years, and why so few are able to provide any justification for their atheism beyond a highly subjective appeal to their own credulity.

That doesn’t mean that atheism is not a legitimate expression of disbelief. It absolutely is, it simply isn’t what it purports to be.

However, it also explains the intrinsic distrust that normal individuals harbor for atheists; it is the same distrust they harbor for psychopaths and others who do not “read” normally.

As I once told Sam Harris in an email when I was helping him with the neurology experiment that led to The Moral Landscape, the scientific investigation into belief and unbelief is far more likely to discover things that trouble the atheist perspective considerably more than the religious one.

For example, if we can ever cure psychopathy by instilling empathy into those who lack it, one likely consequence will be the eventual elimination of atheism. And if the suppression of religious belief necessarily means the suppression of empathy, this renders all dreams of a functional post-religious society intrinsically impossible.

In any event, this will provide a useful rhetorical weapon for the theists. The next time an atheist tells you that you are less intelligent because you believe in God, the obvious response is that you are also, unlike the atheist, not a psychopath.


Thank you for coming

Mike Cernovich says that one ought to thank ten different people every day. So, I thought I’d get a few months out of the way all at once and thank each and every one of you for taking the time to visit here, read here, and comment here this month.

The reason is that I was rather pleased to observe that the blogs passed the two-million-monthly pageview mark today; Google reported 2,041,464 for February 2016. It’s more than a little surprising to finally crack two million on a short month, but apparently this Leap Year was propitious. I always enjoy surpassing the traffic levels McRapey used to lie to the media about having. Truth is so much more satisfying than fiction and one big advantage of simply telling the truth and not exaggerating is never having to worry about being caught out or keeping your various stories straight.

Strangely, despite having more than four times his site traffic, neither the New York Times nor the science fiction media ever describes me as “popular”, or calls this blog “influential”. I wonder why that might be?

In unrelated news, this was a pleasant surprise. I was at the gym, reading Do We Need God To Be Good, by anthropologist C.R. Hallpike, between sets, when I came across this passage.

It is surely rather naive, then, to think that religion is uniquely prone to generate mass slaughter and violent persecution, rather than being just one among a number of such factors that also include politics, race, social class, language, and nationality. It was these, not religion, which produced the wars of the last century, the most violent in history, and the belief that if we removed religion we could remove the main cause of human conflict is clearly incorrect. Indeed, many wars in history have had nothing to do with group hatreds at all, but have simply been the result of kingly ambition and the desire for territory, power, and plunder. Religion has actually been calculated to have been the primary cause of only about 7 per cent of the wars in recorded history, half of which involved Islam (Day 2008:105).

The main thing is for the ideas to circulate, of course, but it’s still nice to see that Dr. Hallpike got the citation correct. I’m about one-third of the way in and it’s a pretty good book, complete with a ruthless beatdown of evolutionary psychology from an anthropological perspective that borders on the epic. One might almost characterize it as Post-New Atheist, as the author takes a firmly secular approach while recognizing that science and religion may not always be in harmony, but are also very far from enemies, let alone opposites.