The intrinsic unreliability of science

More and more investigations of quasi-scientific shenanigans are demonstrating the need for greater precision in the language used to describe the field that is too broadly and misleadingly known as “science”:

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid….

Paradoxically, the situation is actually made worse by the
fact that a promising connection is often studied by several
independent teams. To see why, suppose that three groups of researchers
are studying a phenomenon, and when all the data are analyzed, one group
announces that it has discovered a connection, but the other two find
nothing of note. Assuming that all the tests involved have a high
statistical power, the lone positive finding is almost certainly the
spurious one. However, when it comes time to report these findings, what
happens? The teams that found a negative result may not even bother to
write up their non-discovery. After all, a report that a fanciful
connection probably isn’t true is not the stuff of which scientific
prizes, grant money, and tenure decisions are made.
And even if they did write it up, it probably wouldn’t be
accepted for publication. Journals are in competition with one another
for attention and “impact factor,” and are always more eager to report a
new, exciting finding than a killjoy failure to find an association. In
fact, both of these effects can be quantified. Since the majority of
all investigated hypotheses are false, if positive and negative evidence
were written up and accepted for publication in equal proportions, then
the majority of articles in scientific journals should report no
findings. When tallies are actually made, though, the precise opposite
turns out to be true: Nearly every published scientific article reports
the presence of an association. There must be massive bias at work.

Ioannidis’s argument would be potent even if all
scientists were angels motivated by the best of intentions, but when the
human element is considered, the picture becomes truly dismal.
Scientists have long been aware of something euphemistically called the
“experimenter effect”: the curious fact that when a phenomenon is
investigated by a researcher who happens to believe in the phenomenon,
it is far more likely to be detected. Much of the effect can likely be
explained by researchers unconsciously giving hints or suggestions to
their human or animal subjects, perhaps in something as subtle as body
language or tone of voice. Even those with the best of intentions have
been caught fudging measurements, or making small errors in rounding or
in statistical analysis that happen to give a more favorable result.
Very often, this is just the result of an honest statistical error that
leads to a desirable outcome, and therefore it isn’t checked as
deliberately as it might have been had it pointed in the opposite
direction. 

But, and there is no putting it nicely, deliberate fraud
is far more widespread than the scientific establishment is generally
willing to admit.
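The arithmetic behind the claim, a few paragraphs up, that a literature built on mostly false hypotheses and selective publication will be dominated by positive findings is easy to check with a back-of-the-envelope simulation. Here is a minimal sketch in Python, using purely illustrative assumptions of my own choosing rather than any figures from the article: a 10 percent base rate of true hypotheses, 80 percent statistical power, the conventional 5 percent false positive rate, and negative results that are only rarely written up and accepted.

    import random

    # Illustrative assumptions, not figures from the article quoted above.
    P_TRUE = 0.10            # share of tested hypotheses that are actually true
    POWER = 0.80             # chance a true effect yields a positive result
    ALPHA = 0.05             # chance a null effect yields a spurious positive
    PUBLISH_NEGATIVE = 0.05  # chance a negative result is written up and accepted

    def simulate(n_studies=100_000, seed=1):
        random.seed(seed)
        pub_pos_true = pub_pos_false = pub_neg = 0
        for _ in range(n_studies):
            is_true = random.random() < P_TRUE
            positive = random.random() < (POWER if is_true else ALPHA)
            if positive:
                # Positive findings are assumed to always be published.
                if is_true:
                    pub_pos_true += 1
                else:
                    pub_pos_false += 1
            elif random.random() < PUBLISH_NEGATIVE:
                # Negative findings only occasionally make it into print.
                pub_neg += 1
        published = pub_pos_true + pub_pos_false + pub_neg
        positives = pub_pos_true + pub_pos_false
        print(f"Published articles reporting an association: {positives / published:.0%}")
        print(f"Published positive findings that are spurious: {pub_pos_false / positives:.0%}")

    simulate()

Even on these generous assumptions of high power and honest reporting, roughly three out of every four published articles report a positive finding, and more than a third of those positives are spurious; lower the power or the base rate of true hypotheses and the spurious share climbs rapidly, which is the heart of Ioannidis’s argument.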

Never confuse either scientistry or sciensophy with scientody. To paraphrase, and reject, Daniel Dennett’s contention, do not trust biologists or sociologists or climatologists, or anyone else who calls himself a scientist, simply because physicists get amazingly accurate results.


Scientistry and sciensophy

Keep this sordid history of scientific consensus in mind every time you hear the AGW/CC charlatans selling their global government scam on that basis:

In 1980, after long consultation with some of America’s most senior nutrition scientists, the US government issued its first Dietary Guidelines. The guidelines shaped the diets of hundreds of millions of people. Doctors base their advice on them, food companies develop products to comply with them. Their influence extends beyond the US. In 1983, the UK government issued advice that closely followed the American example.

The most prominent recommendation of both governments was to cut back on saturated fats and cholesterol (this was the first time that the public had been advised to eat less of something, rather than enough of everything). Consumers dutifully obeyed. We replaced steak and sausages with pasta and rice, butter with margarine and vegetable oils, eggs with muesli, and milk with low-fat milk or orange juice. But instead of becoming healthier, we grew fatter and sicker.

Look at a graph of postwar obesity rates and it becomes clear that something changed after 1980. In the US, the line rises very gradually until, in the early 1980s, it takes off like an aeroplane. Just 12% of Americans were obese in 1950, 15% in 1980, 35% by 2000. In the UK, the line is flat for decades until the mid-1980s, at which point it also turns towards the sky. Only 6% of Britons were obese in 1980. In the next 20 years that figure more than trebled. Today, two thirds of Britons are either obese or overweight, making this the fattest country in the EU. Type 2 diabetes, closely related to obesity, has risen in tandem in both countries.

At best, we can conclude that the official guidelines did not achieve their objective; at worst, they led to a decades-long health catastrophe. Naturally, then, a search for culprits has ensued. Scientists are conventionally apolitical figures, but these days, nutrition researchers write editorials and books that resemble liberal activist tracts, fizzing with righteous denunciations of “big sugar” and fast food. Nobody could have predicted, it is said, how the food manufacturers would respond to the injunction against fat – selling us low-fat yoghurts bulked up with sugar, and cakes infused with liver-corroding transfats.

Nutrition scientists are angry with the press for distorting their findings, politicians for failing to heed them, and the rest of us for overeating and under-exercising. In short, everyone – business, media, politicians, consumers – is to blame. Everyone, that is, except scientists….

In a 2015 paper titled Does Science Advance One
Funeral at a Time?, a team of scholars at the National Bureau of
Economic Research sought an empirical basis for a remark made by the
physicist Max Planck: “A new scientific truth does not triumph by
convincing its opponents and making them see the light, but rather
because its opponents eventually die, and a new generation grows up that
is familiar with it.”

The researchers identified more than 12,000 “elite” scientists from
different fields. The criteria for elite status included funding, number
of publications, and whether they were members of the National
Academies of Science or the Institute of Medicine. Searching obituaries,
the team found 452 who had died before retirement. They then looked to
see what happened to the fields from which these celebrated scientists
had unexpectedly departed, by analysing publishing patterns.

What they found confirmed the truth of Planck’s maxim. Junior
researchers who had worked closely with the elite scientists, authoring
papers with them, published less. At the same time, there was a marked
increase in papers by newcomers to the field, who were less likely to
cite the work of the deceased eminence. The articles by these newcomers
were substantive and influential, attracting a high number of citations.
They moved the whole field along.

A scientist is part of what the Polish philosopher of science Ludwik
Fleck called a “thought collective”: a group of people exchanging ideas
in a mutually comprehensible idiom. The group, suggested Fleck,
inevitably develops a mind of its own, as the individuals in it converge
on a way of communicating, thinking and feeling.

This makes scientific inquiry prone to the eternal rules of human
social life: deference to the charismatic, herding towards majority
opinion, punishment for deviance, and intense discomfort with admitting
to error. Of course, such tendencies are precisely what the scientific
method was invented to correct for, and over the long run, it does a
good job of it. In the long run, however, we’re all dead, quite possibly
sooner than we would be if we hadn’t been following a diet based on
poor advice.

It is always necessary – it is absolutely vital – to carefully distinguish between scientody, or the scientific method, and scientistry, which is the scientific profession. The evils described in this article are not indicative of any problems with scientody; they are the consequence of the inevitable and intrinsic flaws of scientistry.

To simply call everything “science” is misleading, often, though not always, innocently so. Science has no authority, and increasingly the label is an intentional and deceitful bait-and-switch, in which the overly credulous are led to believe that, because an individual with certain credentials asserts something, the assertion is supported by documentary evidence gathered through the scientific method of hypothesis, experiment, and successful replication.

In most – not merely many, but most – cases, it is not. Even if you don’t use these neologisms to describe the three aspects of science, you must learn to distinguish between them or you will repeatedly fall for this intentional bait-and-switch. In order of reliability, the three aspects of science are:

  • Scientody: the process
  • Scientage: the knowledge base
  • Scientistry: the profession

We might also coin a new term, sciensophy, as practiced by sciensophists, to describe the pseudoscience of “the social sciences”. It is most definitely not an aspect of science, as the social sciences do not involve any scientody and their additions to scientage have proven to be generally unreliable. Economics, nutrition, and medicine all tend to fall into this category.


Liberals, not conservatives, hate science

As Maddox has amply demonstrated, they don’t “fucking love science”; they like pictures that remind them of science. Actual science, they hate, because it’s not careful of their precious feelings and tends to gradually destroy their sacred narratives:

I first read Galileo’s Middle Finger: Heretics, Activists, and the Search for Justice in Science when I was home for Thanksgiving, and I often left it lying around the house when I was doing other stuff. At one point, my dad picked it up off a table and started reading the back-jacket copy. “That’s an amazing book so far,” I said. “It’s about the politicization of science.” “Oh,” my dad responded. “You mean like Republicans and climate change?”

That exchange perfectly sums up why anyone who is interested in how tricky a construct “truth” has become in 2015 should read Alice Dreger’s book. No, it isn’t about climate change, but my dad could be excused for thinking any book about the politicization of science must be about conservatives. Many liberals, after all, have convinced themselves that it’s conservatives who attack science in the name of politics, while they would never do such a thing. Galileo’s Middle Finger corrects this misperception in a rather jarring fashion, and that’s why it’s one of the most important social-science books of 2015.

At its core, Galileo’s Middle Finger is about what happens when science and dogma collide — specifically, what happens when science makes a claim that doesn’t fit into an activist community’s accepted worldview. And many of Dreger’s most interesting, explosive examples of this phenomenon involve liberals, not conservatives, fighting tooth and nail against open scientific inquiry.

It’s probably not a book anyone who reads this blog regularly needs to read, but it may be one that most of us would like to give to someone we know. As Nassim Taleb explains, what passes for science simply isn’t really science and it certainly isn’t reliable.

What we are seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

With psychology papers replicating less than 40%, dietary advice reversing after 30y of fatphobia, macroeconomic analysis working worse than astrology, microeconomic papers wrong 40% of the time, the appointment of Bernanke who was less than clueless of the risks, and pharmaceutical trials replicating only 1/5th of the time, people are perfectly entitled to rely on their own ancestral instinct and listen to their grandmothers with a better track record than these policymaking goons.

Indeed one can see that these academico-bureaucrats wanting to run our lives aren’t even rigorous, whether in medical statistics or policymaking. I have shown that most of what Cass Sunstein-Richard Thaler types call “rational” or “irrational” comes from misunderstanding of probability theory.


Mailvox: atheism and the motte-and-bailey analogy

BJ, an atheist, didn’t feel the topic that was debated in On the Existence of Gods was entirely fair.

As an atheist, I agree that Vox won the debate. His arguments were more
persuasive and coherent. Dominic was a good sport, but he was attacking a
castle with no cannons, no towers, no ram, not even a ladder. I don’t think it is a fair debate topic, though that is not Vox’s fault.
It’s what Myers originally claimed and what Dominic agreed to. But it’s
not a fair view on the subject.

This is the standard motte and
bailey for defending theism. You replace ‘proof of god’ with ‘doubt of
science’ and hope no one calls you on it (Dominic didn’t). Then you push
the atheist into admitting they can’t rule out the possibility of the
existence of something which may resemble a god or gods. Most people
consider that a win.

The problem I have with that is no priest
suggests the possibility of a god or gods, they talk about very specific
gods with very specific rules, demand very specific obedience, and ask
for very real money. None of them can prove their god is real but that
is the bailey position; when they are under attack they retreat to the
motte position, which is just “you can’t prove god(s) DON’T exist.”
Kinda weak basis for tithing 10% of my income.

On the one hand, this is an entirely reasonable point, and one with which I agree. In fact, I repeatedly point out, in both On the Existence of Gods and The Irrational Atheist, that the argument for the existence of the supernatural, the argument for the existence of Gods, and the argument for the existence of the Creator God as described in the Bible are three entirely different arguments.

One could further observe, with equal justice, that none of these three arguments suffice to establish the Crucifixion and Resurrection of Jesus Christ of Nazareth or the existence of the Holy Trinity as described in the Constantinian revision of the original Nicene Creed.

The problem, however, is that BJ reverses the motte-and-bailey analogy as it is actually observed in the ongoing atheism-Christianity debate. For example, even in the debate he criticizes, Dominic’s sallies were initially directed at all forms of supernaturalism before being knocked back by my response, which observed that the supernatural is a set of which gods are merely a subset.

More importantly, there was never any retreat to the Christian bailey. It simply wasn’t the subject at hand; the purpose of the debate was to challenge the atheist claim to the motte claimed by PZ Myers. And as for Dominic supposedly failing to call me on the very rational and substantive grounds to doubt the legitimacy of science, particularly as it relates to science’s ability to address the subject of gods, that was an intelligent tactical move on his part, because I would have easily demolished any attempt to rely upon science in that manner.

As readers of this blog know, I don’t regard science as being even remotely reliable in its own right; I consider its domain to be limited, and there is considerable documentary, logical, and even scientific evidence to support that position. It is certainly an effective tool, when utilized properly, but it is not a plausible arbiter of reality.

In any event, those interested in the subject appear to find On the Existence of Gods to be a worthy addition to the historical discussion, as it is currently #2 in the Atheism category, sandwiched between a pair of books by Richard Dawkins. If you haven’t posted a review yet, I would encourage you to do so.


Idolocracy and idiocracy

I saw part of an episode of American Idol last night, and what struck me immediately was that it, and the commercials run for its viewers, were entertainment for retards and children. There was an Angry Birds skit/commercial that was very nearly as embarrassing as it was insulting to the intelligence of the audience. As near as I could tell in the 10 minutes or so that I managed to endure it, it looked as if it was aiming for an audience with an IQ of around 85-90. This makes commercial sense, of course, given that I’ve calculated the average US IQ has fallen at least four points based on demographic change alone.

I thought my calculation was pessimistic for the long-term fate of the USA, but it turns out that the situation may well be considerably worse. If Bruce Charlton and Michael Woodley are correct, idiocracy is already here and there appears to be no way to reverse the course of the intellectual decline short of either a) a cataclysmic collapse and rebuilding of Western society or b) totalitarian scientific eugenicism on steroids.

It has been a fascinating, and I must admit horrifying, three-and-a-bit years since Michael Woodley and I first discovered the first objective evidence that there has been a very substantial decline in general intelligence (‘g’) over the past two hundred years – the evidence was posted on this blog just a few hours after we discovered it:

Since then, Michael has taken the lead in replicating this finding in multiple other forms of data, and in a variety of paradigms; and learning more about the magnitude of change and its timescale. His industry has been astonishing! 

We currently believe that general intelligence has declined by approximately two standard deviations (which is approximately 30 IQ points) since 1800 – that is, over about 8 generations.

Such a decline is astonishing – at first sight. But its magnitude has been obscured by social and medical changes so that we underestimate intelligence in 1800 and over-estimate intelligence now.

On the other hand, magnitude and rapidity of decline in world class geniuses in the West (and of major innovations) does imply a decline of intelligence of at least 2 SDs – so from that perspective the rate and size of decline is pretty much as-expected.

So much for the quaint notions of a shiny, sexy seculatopia where reason and logic would reign over all. If they are right, we’ll be fortunate if our great-great-grandchildren don’t return to the trees and seas, a-grunting as they go.

To a certain extent, the crisis facing the species is similar to that of Nigeria, only writ large. Whereas the Nigerian population used to be limited by high child mortality and was able to feed itself, the importation of Western science and medical care reduced the child mortality rate, caused the population to explode, and has rendered the nation both unable to feed itself and less intelligent on average.

In the West, one need only compare the popular books of fifty, one hundred, and two hundred years ago with today’s bestsellers to observe that there has been a prodigious decline in readers’ tastes, despite the fact that the less-intelligent half of the population doesn’t read at all.

These changes are not merely dysgenic and dyscivic; they are dyscivilizational. Which causes me to suspect that the future trend is not merely going to be nationalistic, but highly eugenicist as well. The first nation to ensure its homogeneity and solve the declining intelligence challenge will have a significant advantage over all the rest. The only upside that I see is that there should be no desire whatsoever to attack and rule over other nations and populations, although that carries some potentially ominous implications too.

I certainly hope they’re wrong, because it’s enough to make even a hard-core atheist science-fetishist want to say: “Come, Lord Jesus, and soon!”


Stupidity vs psychopathy

That is the correct way to describe the argumentum ad absurdum of the religious mind versus the rational mind:

To believe in a supernatural god or universal spirit, people appear to suppress the brain network used for analytical thinking and engage the empathetic network, the scientists say. When thinking analytically about the physical world, people appear to do the opposite.

“When there’s a question of faith, from the analytic point of view, it may seem absurd,” said Tony Jack, who led the research. “But, from what we understand about the brain, the leap of faith to belief in the supernatural amounts to pushing aside the critical/analytical way of thinking to help us achieve greater social and emotional insight.”

Jack is an associate professor of philosophy at Case Western Reserve and research director of the university’s Inamori International Center for Ethics and Excellence, which helped sponsor the research.

“A stream of research in cognitive psychology has shown and claims that people who have faith (i.e., are religious or spiritual) are not as smart as others. They actually might claim they are less intelligent,” said Richard Boyatzis, distinguished university professor and professor of organizational behavior at Case Western Reserve, and a member of Jack’s team.

“Our studies confirmed that statistical relationship, but at the same time showed that people with faith are more prosocial and empathic,” he said.

In a series of eight experiments, the researchers found the more empathetic the person, the more likely he or she is religious.

That finding offers a new explanation for past research showing women tend to hold more religious or spiritual worldviews than men. The gap may be because women have a stronger tendency toward empathetic concern than men.

Atheists, the researchers found, are most closely aligned with psychopaths—not killers, but the vast majority of psychopaths classified as such due to their lack of empathy for others.

This is yet another piece of scientific evidence in support of my hypothesis that atheism is nothing more than the predictable consequence of being neurologically atypical; that atheism might just as reasonably be described as social autism.

Which, of course, is just another way of describing a lack of empathy. This makes sense, as I have all the attributes of the average atheist, with one key exception: I am highly empathetic. The short answer to the common question, “How can you believe in God when you are highly intelligent and well-educated?”, is “Because I am capable of empathizing with my fellow Man.”

As will be clear to anyone who has read the Metaphysics bestseller, On the Existence of Gods, atheism is not a rational position justified by reason and evidence. It is, quite to the contrary, an instinctive and emotional reaction to the atheist’s inability to identify with and relate to the world around him. This is why most atheists become atheists in their teenage years, and why so few are able to provide any justification for their atheism beyond a highly subjective appeal to their own credulity.

That doesn’t mean that atheism is not a legitimate expression of disbelief. It absolutely is; it simply isn’t what it purports to be.

However, it also explains the intrinsic distrust that normal individuals harbor for atheists; it is the same distrust they harbor for psychopaths and others who do not “read” normally.

As I once told Sam Harris in an email when I was helping him with the neurology experiment that led to The Moral Landscape, the scientific investigation into belief and unbelief is far more likely to discover things that trouble the atheist perspective than the religious one.

For example, if we can ever cure psychopathy by instilling empathy into those who lack it, one likely consequence will be the eventual elimination of atheism. And if the suppression of religious belief necessarily means the suppression of empathy, this renders all dreams of a functional post-religious society intrinsically impossible.

In any event, this will provide a useful rhetorical weapon for the theists. The next time an atheist tells you that you are less intelligent because you believe in God, the obvious response is that you are also, unlike the atheist, not a psychopath.


Thank you for coming

Mike Cernovich says that one ought to thank ten different people every day. So, I thought I’d get a few months out of the way all at once and thank each and every one of you for taking the time to visit here, read here, and comment here this month.

The reason is that I was rather pleased to observe that the blogs passed the two-million-monthly pageview mark today; Google reported 2,041,464 for February 2016. It’s more than a little surprising to finally crack two million on a short month, but apparently this Leap Year was propitious. I always enjoy surpassing the traffic levels McRapey used to lie to the media about having. Truth is so much more satisfying than fiction and one big advantage of simply telling the truth and not exaggerating is never having to worry about being caught out or keeping your various stories straight.

Strangely, despite this blog having more than four times his site traffic, neither the New York Times nor the science fiction media ever describes me as “popular” or calls this blog “influential”. I wonder why that might be?

In unrelated news, this was a pleasant surprise. I was at the gym, reading Do We Need God to Be Good?, by anthropologist C.R. Hallpike, between sets, when I came across this passage.

It is surely rather naive, then, to think that religion is uniquely prone to generate mass slaughter and violent persecution, rather than being just one among a number of such factors that also include politics, race, social class, language, and nationality. It was these, not religion, which produced the wars of the last century, the most violent in history, and the belief that if we removed religion we could remove the main cause of human conflict is clearly incorrect. Indeed, many wars in history have had nothing to do with group hatreds at all, but have simply been the result of kingly ambition and the desire for territory, power, and plunder. Religion has actually been calculated to have been the primary cause of only about 7 per cent of the wars in recorded history, half of which involved Islam (Day 2008:105).

The main thing is for the ideas to circulate, of course, but it’s still nice to see that Dr. Hallpike got the citation correct. I’m about one-third of the way in and it’s a pretty good book, complete with a ruthless beatdown of evolutionary psychology from an anthropological perspective that borders on the epic. One might almost characterize it as Post-New Atheist, as the author takes a firmly secular approach while recognizing that science and religion may not always be in harmony, but are also very far from enemies, let alone opposites.


Words are magic

A minor dialogue on Twitter cracked me up today. To put it in context, some scientists and science fetishists on Twitter were in an uproar over my assertion that SCIENTIFIC PEER REVIEW was not only unreliable, but was nothing more than glorified proofreading. They argued that SCIENTIFIC PEER REVIEW was all about replicating experiments and testing conclusions, not merely reading over the material in order to make sure the author wasn’t smoking crack.

One guy even demanded to know if I knew what “peer” meant. Because, you know, that totally changes the process.

Finally, I asked a scientist how many peer reviews he had done. Between 10 and 30 was the answer. Fair enough. Then I asked him how many experiments he had replicated as part of those SCIENTIFIC PEER REVIEWS.

None. Or to put it in scientific mathematical terms, zero. Also known as “the null set”.

And what did he actually do in scientifically peer-reviewing these papers? Well, he read them and occasionally made some suggestions for improving them.

[INSERT FACE PALM OF YOUR CHOICE HERE]

That is why I am strongly considering changing my title from Lead Editor of Castalia House to Lead Scientific Peer Reviewer. Because then, you see, we won’t merely be publishing fiction, we’ll be publishing PEER REVIEWED SCIENCE.

UPDATE: This was Real Live Scientist with More than TEN Proofreads Peer Reviews David Whitcombe’s response to finding out that scientists with considerably more experience agreed with me.

David Whitcombe ‏@hauxton
Ooh
You wrote a blog.
Still misunderstanding peer review.
Over your head in guess
 
David Whitcombe ‏@hauxton
Laughable Dunning Kruger

Thereby supporting my hypothesis that SJWs always double down.


Psychologist, heal thyself

This is why therapy is reliably doomed to failure:

Confessions of a depressed psychologist: I’m in a darker place than my patients.

I am sitting opposite my sixth patient of the day. She is describing a terrible incident in her childhood when she was abused, sexually and physically, by both of her parents. I am nodding, listening and hoping I appear normal. Inside, however, I feel anything but.

My head is thick – as if I’m thinking through porridge. I find myself tuning out and switching to autopilot. I put it down to tiredness – I haven’t slept well recently; last night I managed just two hours – but after the session I’m disappointed in myself. I’m worried that I might have let down my patient and I feel a bit of a failure, but I tell no one.

One week later, I am in my car, driving across a bridge. Everything should be wonderful – my partner has a new job, my career as a psychologist in the NHS is going well, plus it’s almost Christmas, the second with our young child, and we’re readying ourselves for a move to London.

Yet, my mind is thick again. My only lucid thought is, “What if I turned the steering wheel and drove into the bridge support? What if I stuck my foot on the pedal and went straight off the edge? Wouldn’t that be so much easier?”

I grip the steering wheel and force myself to think, instead, of my partner and child. They are the two people who get me home safely.

It is the sort of anecdote I have heard from clients time and time again. I became a psychologist because I have a natural nurturing tendency – I never dreamt I would be the vulnerable one. But 10 years ago I found myself suffering from an extremely severe episode of depression that lasted three months, left me unable to work for six weeks and, at my very lowest, saw me contemplating suicide.

Would you go to a plumber whose toilet is overflowing? Would you hire a computer programmer who didn’t know how to use a computer? Then why would you ever talk to one of these nutjobs in order to fix whatever mental issues you might be having? In addition to the 46 percent of psychologists who the NHS reports as being depressed, “out of 800 psychologists sampled, 29 per cent reported suicidal ideation and 4 per cent reported attempting suicide.”

There is very little scientific evidence of the benefits of psychology. I read one recent study which showed that neurotic individuals actually stabilize on their own at a higher rate than those who seek therapy. This is no surprise, as the foundations of psychology are literally fiction. One might as reasonably base one’s economics on Isaac Asimov novels.

How many people do you know who have gone into therapy and never exited it? Those who advocate therapy are rather like fat people testifying to the efficacy of diet plans on which they never lose any weight.


Women, science, and sex

The SJWs in science are setting up their favorite damned-if-you-do, damned-if-you-don’t scenario for male scientists. If you don’t bring young women along with you on your trips, you’re a damnable sexist. And if you do, you’re a sexual predator.

On a cold evening last March, as researchers descended upon St. Louis, Missouri, for the annual meeting of the American Association of Physical Anthropologists (AAPA), a dramatic scene unfolded at the rooftop bar of the St. Louis Hilton at the Ballpark, the conference hotel. From here, attendees had spectacular views of the city, including Busch Stadium and the Gateway Arch, but many were riveted by an animated discussion at one table.

Loudly, and apparently without caring who heard her, a research assistant at the American Museum of Natural History (AMNH) in New York City charged that her boss—noted paleoanthropologist Brian Richmond, the museum’s curator of human origins—had “sexually assaulted” her in his hotel room after a meeting the previous September in Florence, Italy. (She requested that her name not appear in this story to protect her privacy.) Over the next several days, as the 1700 conference attendees presented and discussed the latest research, word of the allegations raced through the meeting.

Richmond, who was also at the meeting, has vigorously denied the accusations in a statement to Science and in email responses. (He declined to be interviewed in person or by telephone.) The encounter in the hotel room, he wrote, was “consensual and reciprocal,” adding that “I never sexually assaulted anyone.”

Although the most recent high-profile cases of sexual harassment in science have arisen in astronomy and biology, many researchers say paleoanthropology also has been rife with sexual misconduct for decades. Fieldwork, often in remote places, can throw senior male faculty and young female students together in situations where the rules about appropriate behavior can be stretched to the breaking point. Senior women report years of unwanted sexual attention in the field, at meetings, and on campus. A widely cited anonymous survey of anthropologists and other field scientists, called the SAFE study and published in July 2014 in PLOS ONE, reported that 64% of the 666 respondents had experienced some sort of sexual harassment, from comments to physical contact, while doing fieldwork.

Even a few years ago, the research assistant might not even have aired her complaint, as few women—or men—felt emboldened to speak out about harassment. Of the 139 respondents in the SAFE study who said they experienced unwanted physical contact, only 37 had reported it. Those who remained silent may have feared retaliation. Senior paleoanthropologists control access to field sites and fossils, write letters of recommendation, and might end up as reviewers on papers or grant proposals. “The potential for [senior scientists] to make a phone call and kill a career-making paper feels very real,” says Leslea Hlusko, a paleontologist at the University of California (UC), Berkeley.

It will be interesting to learn whether the female scientists entering the field will be sufficient to make up for the male scientists they drive from it. The history of social justice convergence indicates that not only will they fail to do so, but that all actual scientific activity will cease once a critical mass is reached.

It’s rather remarkable that the Richmond situation is being portrayed as him sexually assaulting her when she was in his hotel room. I suspect that the charge of sexual assault is nothing more than her trying to cover for the fact that she was more or less cheating on her husband. They were out drinking with their colleagues, all of whom would have known that she went back to his room with him.

Remember, it’s much better to be deemed a sexist than a sexual assailant. Don’t mentor women in person, don’t go out of your way to help them, don’t befriend them (particularly if you find them attractive), and don’t go out to dinner with them alone. If you can’t avoid it due to work, insist on lunch. Definitely don’t go out for drinks or to a club. Don’t hug or kiss them, and don’t let them touch you except to shake your hand. Don’t ever give the SJWs an opening to take you down.

The SJWs would love nothing better than to try to do to me what they’ve done to everyone from Jian Ghomeshi to James Frenkel. They can’t, because I never give them even the slightest molehill out of which to make a mountain.