Scientific skepticism

And scientists wonder why we’re skeptical of everything they report as “the current scientific consensus”. This is a fairly typical “scientific” rebuttal to a science news report that happens to be outside of the current mainstream of accepted thought:

Let’s start with a quick talk about aliens. In an infinite universe, it seems foolhardy— even arrogant— to completely dismiss the idea of extraterrestrial life. There are so many galaxies, so many planets, so many suns; across the neverending expanse of space, one suspects that there must be another group of intelligent beings somewhere.

But suspect is the key word there. We have no credible evidence for the existence of alien civilizations. As Carl Sagan said, “Extraordinary claims require extraordinary evidence.” And claiming that the Paracas skulls are possibly alien is certainly extraordinary. So let’s look at the evidence— does it measure up?

Well, the short answer is no. First, consider the source: the preliminary results of genetic testing were announced by Brien Foerster, who is the assistant director of the Paracas History Museum.

That’s a pretty impressive title, and I’ll admit that it threw me. That title implies formal archaeological, curatorial, or history credentials, maybe a body of peer-reviewed research projects. That title implies that he has serious academic credibility, and that we should listen to his announcements about his areas of expertise.

None of this is true. Some pretty basic Google research turns up some facts about Foerster that cast his announcement in an entirely different light.

First, his academic credentials: by cobbling information together from the webpage of his company Hidden Inca Tours and his official Facebook page, it appears that he has a Bachelor of Science from the University of Victoria, in British Columbia, Canada. Foerster doesn’t offer any further information about his educational background, including his exact field of undergraduate study. I was unable to find any evidence of an advanced degree.

Foerster’s company, Hidden Inca Tours, is a travel agency that specializes in taking travelers on paranormal tours around the world, but focuses on Peru and the surrounding region. Foerster has also written a number of books on archaeology, including one called “The Enigma of Cranial Deformation: Elongated Skulls of the Ancients,” which he wrote with David Hatcher Childress. Vanderbilt University archaeologist Charles E. Orser once called Childress “one of the most flagrant violators of basic archaeological reasoning.”

So what about his role as assistant director at the Paracas History Museum? How did a paranormal tour operator get that job?

Well, first, the Paracas History Museum is a private museum. It’s owned by one Juan Navarro, who is also its director. Navarro is also listed on the Hidden Inca Tours webpage as a member of “Our Team of Experts.” I was unable to find any mention of academic credentials earned by Navarro, either.

My preoccupation with academic credentials is not meant to downplay the immense wisdom and experience possessed by many people who do not have undergraduate or post-grad degrees. Being smart does not require a college degree. Heck, it doesn’t require any kind of education at all; it’s an innate quality.

However, scientific expertise is not an innate quality. It is something that is gained through years of study and research, both of which are usually completed in an institution that awards successful students degrees upon graduation.

To be fair, I don’t have any special academic credentials that make me an expert in archaeology or genetics, and I’m not arguing that the data is flawed; we haven’t seen the full data, and I’m not qualified to speak on that. What I am arguing is that a number of features of the announcement should warn us not to take it at face value.

That brings us to the strange nature of the announcement. Foerster announced the results personally, via internet, rather than through a scientifically reputable source.

There are a number of problems with the way he announced the preliminary results. Speaking to Discovery.com, science promoter and skeptic Sharon Hill said “This is an unconventional way of making ‘groundbreaking’ claims.”

Hill added “It’s not supported by a university, but by private funding. The initial findings were released in this unprofessional way (via Facebook, websites and an Internet radio interview) obviously because Foerster and the other researchers think this is very exciting news.”

Exciting news is one thing, but scientific credibility is another. “[S]cience doesn’t work by social media,” said Hill. “Peer review is a critical part of science and the Paracas skulls proponents have taken a shortcut that completely undermines their credibility. Appealing to the public’s interest in this cultural practice we see as bizarre — skull deformation —instead of publishing the data for peer-review examination is not going to be acceptable to the scientific community.”

There’s also the matter of the testing itself. According to Foerster, the geneticist who discovered the allegedly never-before-seen DNA wants to remain anonymous. If that’s not a red flag for the credibility of your research, I don’t know what is.

The final nail in this story’s coffin, for me, was the revelation that Foerster had appeared on the popular History Channel program “Ancient Aliens” multiple times. In yesterday’s article, I said that the scientific and archaeological communities generally regard “Ancient Aliens” as inaccurate.

Now let’s consider the various bases for why we are supposed to dismiss the announced findings of genetic anomalies in the highly unusual Paracas skulls, which reportedly do not fit within the parameters of human skull variations.

  1. We have no credible evidence for the existence of alien civilizations? That’s a stupid statement, considering these skulls may be such evidence. There is no credible evidence for anything the first time it is discovered.
  2. The Carl Sagan quote is stupid and incorrect, for reasons that a) should be obvious and b) have been covered previously. It’s cheap scientistic rhetoric.
  3. The title doesn’t imply anything. As for the lack of credentials, well, given the amount of known fraud and statistical error being committed by impeccably credentialed scientists, that is hardly a disqualifier.
  4. Guilt-by-association. I wrote a book with Bruce Bethke, but that doesn’t make me one of the world’s experts on supercomputers.
  5. (laughs) The writer has no credentials either. By her own logic, should we not dismiss everything she is saying? In any event, her preoccupation with academic credentials is not exactly hard to explain; she is a woman. That’s why women now so outnumber men in the university enrollments.
  6. The fact that Foerster elected to bypass the gatekeepers says literally nothing about whether the reported news is accurate or not.
  7. The geneticist’s preference to remain anonymous is not a red flag but rather an indication of the corrupt nature of science and science journalism. He knew his credibility would be attacked and adroitly avoided it by permitting the evidence to stand on its own.
  8. An appearance on a television show that is generally regarded as inaccurate by the very communities whose consensus and competence is being challenged by these reports says absolutely nothing about whether they are true or not.

Now, none of this means that Foerster is not a con artist or that the reports of the genetic anomalies in the skulls are not fiction. But the correct response is for other geneticists to test the samples and either confirm or contradict the report; that is scientody. This sort of blanket assertion isn’t founded in science; it isn’t even based on good logic.


The bonfire of science

In which it is once more demonstrated that scientific evidence is VASTLY less reliable than other types of evidence, because, in most cases, no one ever bothers to actually check the results:

A whole pile of “this is how your brain looks like” MRI-based science has been invalidated because someone finally got around to checking the data.

The problem is simple: to get from a high-resolution magnetic resonance imaging scan of the brain to a scientific conclusion, the brain is divided into tiny “voxels”. Software, rather than humans, then scans the voxels looking for clusters.

When you see a claim that “scientists know when you’re about to move an arm: these images prove it”, they’re interpreting what they’re told by the statistical software.

Now, boffins from Sweden and the UK have cast doubt on the quality of the science, because of problems with the statistical software: it produces way too many false positives.

In this paper at PNAS, they write: “the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.”

For example, a bug that’s been sitting in a package called 3dClustSim for 15 years, fixed in May 2015, produced bad results (3dClustSim is part of the AFNI suite; the others are SPM and FSL).

That’s not a gentle nudge that some results might be overstated: it’s more like making a bonfire of thousands of scientific papers.
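To see why uncorrected voxel-wise thresholds matter so much here, consider a minimal simulation of the multiple-comparisons problem. This is my own illustration with assumed round numbers, not the cluster-inference analysis from the PNAS paper:

```python
import random

random.seed(42)

N_VOXELS = 10_000   # assumed number of voxels in one brain scan
ALPHA = 0.001       # a typical uncorrected per-voxel threshold
N_SCANS = 500       # simulated scans of pure noise (no real signal)

scans_with_false_positive = 0
for _ in range(N_SCANS):
    # Under the null hypothesis, each voxel crosses the threshold
    # with probability ALPHA purely by chance.
    hits = sum(1 for _ in range(N_VOXELS) if random.random() < ALPHA)
    if hits > 0:
        scans_with_false_positive += 1

# With 10,000 voxels at alpha = 0.001, a pure-noise scan still averages
# about 10 "significant" voxels, so nearly every scan shows "activity".
print(scans_with_false_positive / N_SCANS)
```

This is the familywise error problem in miniature; the cluster-level corrections in packages like SPM, FSL, and AFNI exist precisely to control it, which is why a flaw in that correction casts doubt on so many results at once.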

It is not even remotely reasonable to take scientific evidence at face value anymore, much less the pseudoscience so often being substituted for the results produced by genuine, if often flawed, scientody.

It is not even remotely surprising that the flaw the scientists failed to pick up in this situation was statistical, for as I’ve previously observed, most scientists have very little training in math or statistics, and despite their habit of regularly citing statistics, most of them are more or less statistically illiterate.

Never forget that while there are certainly some brilliant scientists, most of them are literal midwits, as there are relatively few credentialed scientists with IQs over 132. A study of all the U.S. PhD recipients in 1958 reported an average IQ of 123; the Flynn effect notwithstanding, it is highly unlikely that the average IQ of today’s increasingly diverse and vibrant PhD recipients has risen since then.


The problem of peer review

Peer review simply isn’t what it is advertised to be; not only is it little more than editing, most of the time it is not even competent editing:

As a scientist with a 15-year career behind me so far, I am afraid that my experiences reflect this. Peer review is excellent in theory but not in practice. Much of the time, the only vetting the papers get is two relatively junior people in a field (often grad students or postdocs) giving it a thumbs up or thumbs down. That is absolutely it. In theory, the editors should make the decisions with the recommendations of the reviewers, but the editors rarely have the time or the expertise to judge the papers and often automatically defer to reviewers. Also, the papers should be reviewed by luminaries of the field, but these folks rarely have the time, and either decline invitations or bounce the work to a student or another trainee. It’s not just bad papers that get through, but also good, rigorous papers that are bounced by this system.

Many if not most of the people in academic science today, at least in biology (my field), are so overwhelmed with the need to publish in such high volumes that few people with the needed expertise can afford the time to go over the results in detail. All the while, at the same time and for the same reason, the volume of papers that needs to be reviewed goes up. I’ve heard of (and had myself) papers that have lingered for 4+ months before they even went out for review.

And, in our rush to publish, we often don’t read this literature carefully ourselves but start citing papers anyway, which weaves these potentially weak or erroneous papers even more tightly into the fabric of their field.

It’s difficult to care a lot about the quality of your work when you know the extra effort often doesn’t help something go through this fickle review process, and when you know people will cite it without really reading it closely. There is little incentive to spend longer on a paper to make sure everything is right and the results are reproducible because there is very little accountability for errors and huge rewards for being prolific.

The ironic thing is that True Believers and the I Fucking Love Science crowd genuinely believe that “peer reviewed science” is the gold standard for evidence. But there is a reason scientific evidence is not automatically allowed in a court of law, let alone considered conclusive, and the more we learn about the defects of peer review, the better we understand that science’s credibility is limited.

We have a word for science that is trustworthy, and that word is engineering. Until science can be applied, it cannot be fully trusted to be correct.

All peer review is really designed to do is to reassure the reader that the information presented fits safely within the confines of the consensus status quo.


Was Charles Darwin a science fraud?

A buster of supermyths claims that Darwin was, in fact, a plagiarist:

Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”

Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”

Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.

Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”

That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.

Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.

It wouldn’t surprise me in the slightest if Sutton is correct. Although non-writers don’t have much confidence in it, to the expert, or even the experienced amateur, literary style is very nearly as distinguishable as a fingerprint. This is particularly true in cases where the two works are supposed to be entirely unrelated.


Cargo Cult debate

One thing science fetishists can’t bear is to have their obvious ignorance of science pointed out:

Babak Golshahi ‏@bgolshahi1
I love being able to back up what I say with hard evidence, peer reviewed scientific consensus.

Supreme Dark Lord ‏@voxday
50 percent of which is proven to be wrong when replication is attempted. You’re out of date.

Babak Golshahi ‏@bgolshahi1
replication of what? You got a peer reviewed piece or really any article that backs up your claim? Waiting.

Supreme Dark Lord ‏@voxday
Mindlessly repeating the words “peer review” and citing “articles” shows you’re a low-IQ ignoramus.

Babak Golshahi ‏@bgolshahi1
you apologize for that or you’re blocked

Supreme Dark Lord ‏@voxday
Block away, moron. It won’t fix peer review or change the fact that you’re both stupid and ignorant.

Babak Golshahi ‏@bgolshahi1
You are blocked from following @bgolshahi1 and viewing @bgolshahi1’s Tweets.

I wish more of these morons would use Randi Harper’s anti-GG autoblocker, so I wouldn’t be subjected to their repetitive idiocy.

It is important to understand that if you’re prone to demanding “peer reviewed pieces” or shouting “logical fallacy” at people with whom you are arguing, you’re probably a midwit who doesn’t really understand what you’re talking about. In both these and other similar cases, what we have is a person who has seen someone else win an argument by comparing scientific evidence or by identifying a specific logical fallacy being committed, and who is trying to imitate that winner without understanding what he was actually doing.

But if there is no genuine substance behind the demand or the identification, if you don’t have your own competing scientific evidence or you can’t point out the actual logical fallacy – and there is a massive difference between the set of flawed syllogisms and the subset of logical fallacies – then you have no business talking about such things.

The failure to cite a peer-reviewed study means nothing in the absence of competing citations. The claim of logical fallacy means nothing when the precise fallacy is not identified. If you don’t understand those things, stop embarrassing yourself by arguing with people and start reading.

Otherwise, you’re no different than the ignorant South Pacific islander building runways in the hopes that the magic sky machines will descend bearing gifts.


Diversity kills community

The same negative effect of diversity on community discovered – and initially buried – by Robert Putnam in the United States is replicated in the United Kingdom by a study entitled Does Ethnic Diversity Have a Negative Effect on Attitudes towards the Community? A Longitudinal Analysis of the Causal Claims within the Ethnic Diversity and Social Cohesion Debate:

We observe that as a community becomes more diverse around an individual, they are likely to become less attached to their community. This is a strict test of the causal impact of diversity, minimizing unobserved heterogeneity and eliminating selection bias. Importantly, neither indicator of disadvantage is significantly associated with attachment.

Model 2 shows the same analysis among movers. Diversity is again significant and negative, suggesting that individuals who move from more diverse to less diverse communities are likely to become more attached (and vice versa).

Like calls to like. Most people prefer to live among their own. Segregation is not only the right of free association in action, it is a community imperative. This is further evidence that the increasingly diverse United States will not survive because it cannot survive. It is not a nation.


All ur hashtag are belong to us

The Global Warming charlatans are planning a propaganda push. This is from a science activist mailing list.

Climate Feedback works like this: Using the new web-annotation platform Hypothesis, scientists verify facts and annotate online climate articles, layering their insights and comments on top of the original story. They then issue a “5-star” rating so readers can quickly judge stories’ scientific credibility. Recognized by NASA, the UN Framework Convention on Climate Change and California Gov. Jerry Brown among others, Climate Feedback is already improving journalistic standards by flagging misreported climate science in mainstream outlets; earlier this month, for example, scientists took apart Bjorn Lomborg’s misleading op-ed in the Wall Street Journal. This is only a hint of what Climate Feedback has in store as it begins to aggregate those credibility scores into a wider index, rating major news sources on their reporting of climate change as part of a new Scientific Trust Tracker.

To that end, Climate Feedback is launching a crowd funding campaign on April 27 around the hashtag #StandWithScience, supported by leading climate minds like Profs. Michael Mann, Naomi Oreskes and others. I invite you to take a look at this sneak preview of our campaign (NOTE: please do not share publicly before April 27). The Exxon climate scandal has already made its way into the 2016 election season, but few have discussed the role the media has played enabling corporate interests to sow doubt about the science of climate change, which has long confused the public and undermined political support for dealing with the issue. As 350.org founder Bill McKibben said of Climate Feedback: Scientists are just about ready to come out of the lab and get more active and when they do, it will make a remarkable difference.

Let’s disrupt it. VFM, you know what to do. Political activism is not science. #StandWithScience.


The intrinsic unreliability of science

More and more investigations of quasi-scientific shenanigans are demonstrating the need for more precision in the language used to describe the field that is too broadly and misleadingly known as “science”:

The problem with ­science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an ­unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid….

Paradoxically, the situation is actually made worse by the fact that a promising connection is often studied by several independent teams. To see why, suppose that three groups of researchers are studying a phenomenon, and when all the data are analyzed, one group announces that it has discovered a connection, but the other two find nothing of note. Assuming that all the tests involved have a high statistical power, the lone positive finding is almost certainly the spurious one. However, when it comes time to report these findings, what happens? The teams that found a negative result may not even bother to write up their non-discovery. After all, a report that a fanciful connection probably isn’t true is not the stuff of which scientific prizes, grant money, and tenure decisions are made.

And even if they did write it up, it probably wouldn’t be accepted for publication. Journals are in competition with one another for attention and “impact factor,” and are always more eager to report a new, exciting finding than a killjoy failure to find an association. In fact, both of these effects can be quantified. Since the majority of all investigated hypotheses are false, if positive and negative evidence were written up and accepted for publication in equal proportions, then the majority of articles in scientific journals should report no findings. When tallies are actually made, though, the precise opposite turns out to be true: Nearly every published scientific article reports the presence of an association. There must be massive bias at work.

Ioannidis’s argument would be potent even if all scientists were angels motivated by the best of intentions, but when the human element is considered, the picture becomes truly dismal. Scientists have long been aware of something euphemistically called the “experimenter effect”: the curious fact that when a phenomenon is investigated by a researcher who happens to believe in the phenomenon, it is far more likely to be detected. Much of the effect can likely be explained by researchers unconsciously giving hints or suggestions to their human or animal subjects, perhaps in something as subtle as body language or tone of voice. Even those with the best of intentions have been caught fudging measurements, or making small errors in rounding or in statistical analysis that happen to give a more favorable result. Very often, this is just the result of an honest statistical error that leads to a desirable outcome, and therefore it isn’t checked as deliberately as it might have been had it pointed in the opposite direction.

But, and there is no putting it nicely, deliberate fraud is far more widespread than the scientific establishment is generally willing to admit.
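The publication-bias arithmetic described in the passage above can be made concrete with a short calculation in the style of Ioannidis; the prior, significance level, and power used here are illustrative assumptions, not figures from the article:

```python
def positive_predictive_value(prior, alpha=0.05, power=0.8):
    """Fraction of statistically 'positive' results that reflect true effects."""
    true_positives = prior * power          # true hypotheses correctly detected
    false_positives = (1 - prior) * alpha   # false hypotheses wrongly "confirmed"
    return true_positives / (true_positives + false_positives)

# Suppose only 1 in 10 investigated hypotheses is actually true.
print(round(positive_predictive_value(prior=0.10), 2))             # → 0.64
# With the low statistical power common in some fields, it is worse:
print(round(positive_predictive_value(prior=0.10, power=0.2), 2))  # → 0.31
```

Even before any fraud or experimenter effect, more than a third of published positives would be false under these assumptions; suppressing the negative results, as the article describes, hides exactly the information needed to notice this.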

Never confuse either scientistry or sciensophy for scientody. To paraphrase, and reject, Daniel Dennett’s contention, do not trust biologists or sociologists or climatologists, or anyone else who calls himself a scientist, simply because physicists get amazingly accurate results.


Scientistry and sciensophy

Keep this sordid history of scientific consensus in mind every time you hear the AGW/CC charlatans selling their global government scam on that basis:

In 1980, after long consultation with some of America’s most senior nutrition scientists, the US government issued its first Dietary Guidelines. The guidelines shaped the diets of hundreds of millions of people. Doctors base their advice on them, food companies develop products to comply with them. Their influence extends beyond the US. In 1983, the UK government issued advice that closely followed the American example.

The most prominent recommendation of both governments was to cut back on saturated fats and cholesterol (this was the first time that the public had been advised to eat less of something, rather than enough of everything). Consumers dutifully obeyed. We replaced steak and sausages with pasta and rice, butter with margarine and vegetable oils, eggs with muesli, and milk with low-fat milk or orange juice. But instead of becoming healthier, we grew fatter and sicker.

Look at a graph of postwar obesity rates and it becomes clear that something changed after 1980. In the US, the line rises very gradually until, in the early 1980s, it takes off like an aeroplane. Just 12% of Americans were obese in 1950, 15% in 1980, 35% by 2000. In the UK, the line is flat for decades until the mid-1980s, at which point it also turns towards the sky. Only 6% of Britons were obese in 1980. In the next 20 years that figure more than trebled. Today, two thirds of Britons are either obese or overweight, making this the fattest country in the EU. Type 2 diabetes, closely related to obesity, has risen in tandem in both countries.

At best, we can conclude that the official guidelines did not achieve their objective; at worst, they led to a decades-long health catastrophe. Naturally, then, a search for culprits has ensued. Scientists are conventionally apolitical figures, but these days, nutrition researchers write editorials and books that resemble liberal activist tracts, fizzing with righteous denunciations of “big sugar” and fast food. Nobody could have predicted, it is said, how the food manufacturers would respond to the injunction against fat – selling us low-fat yoghurts bulked up with sugar, and cakes infused with liver-corroding transfats.

Nutrition scientists are angry with the press for distorting their findings, politicians for failing to heed them, and the rest of us for overeating and under-exercising. In short, everyone – business, media, politicians, consumers – is to blame. Everyone, that is, except scientists….

In a 2015 paper titled Does Science Advance One
Funeral at a Time?, a team of scholars at the National Bureau of
Economic Research sought an empirical basis for a remark made by the
physicist Max Planck: “A new scientific truth does not triumph by
convincing its opponents and making them see the light, but rather
because its opponents eventually die, and a new generation grows up that
is familiar with it.”

The researchers identified more than 12,000 “elite” scientists from
different fields. The criteria for elite status included funding, number
of publications, and whether they were members of the National
Academies of Science or the Institute of Medicine. Searching obituaries,
the team found 452 who had died before retirement. They then looked to
see what happened to the fields from which these celebrated scientists
had unexpectedly departed, by analysing publishing patterns.

What they found confirmed the truth of Planck’s maxim. Junior
researchers who had worked closely with the elite scientists, authoring
papers with them, published less. At the same time, there was a marked
increase in papers by newcomers to the field, who were less likely to
cite the work of the deceased eminence. The articles by these newcomers
were substantive and influential, attracting a high number of citations.
They moved the whole field along.

A scientist is part of what the Polish philosopher of science Ludwik
Fleck called a “thought collective”: a group of people exchanging ideas
in a mutually comprehensible idiom. The group, suggested Fleck,
inevitably develops a mind of its own, as the individuals in it converge
on a way of communicating, thinking and feeling.

This makes scientific inquiry prone to the eternal rules of human social life: deference to the charismatic, herding towards majority opinion, punishment for deviance, and intense discomfort with admitting to error. Of course, such tendencies are precisely what the scientific method was invented to correct for, and over the long run, it does a good job of it. In the long run, however, we’re all dead, quite possibly sooner than we would be if we hadn’t been following a diet based on poor advice.

It is always necessary – it is absolutely vital – to carefully distinguish between scientody, the scientific method, and scientistry, the scientific profession. The evils described in this article are not indicative of any problems with scientody; they are the consequence of the inevitable and intrinsic flaws of scientistry.

To call everything “science” is misleading, often, though not always, innocently so. Science has no authority, and increasingly the label is an intentional and deceitful bait-and-switch, in which the overly credulous are led to believe that because an individual with certain credentials asserts something, that statement is supported by documentary evidence gathered through the scientific method of hypothesis, experiment, and successful replication.

In most cases – not merely many, but most – it simply is not. Even if you don’t use these neologisms to describe the three aspects of science, you must learn to distinguish between them or you will repeatedly fall for this intentional bait-and-switch. In order of reliability, the three aspects of science are:

  • Scientody: the process
  • Scientage: the knowledge base
  • Scientistry: the profession

We might also coin a new term, sciensophy, as practiced by sciensophists, to describe the pseudoscience of “the social sciences”. It is most definitely not an aspect of science, as these fields do not involve any scientody and their additions to scientage have proven to be generally unreliable. Economics, nutrition, and medicine all tend to fall into this category.


Liberals, not conservatives, hate science

As Maddox has amply demonstrated, they don’t “fucking love science”; they like pictures that remind them of science. Actual science they hate, because it is not careful of their precious feelings and tends to gradually destroy their sacred narratives:

I first read Galileo’s Middle Finger: Heretics, Activists, and the Search for Justice in Science when I was home for Thanksgiving, and I often left it lying around the house when I was doing other stuff. At one point, my dad picked it up off a table and started reading the back-jacket copy. “That’s an amazing book so far,” I said. “It’s about the politicization of science.” “Oh,” my dad responded. “You mean like Republicans and climate change?”

That exchange perfectly sums up why anyone who is interested in how tricky a construct “truth” has become in 2015 should read Alice Dreger’s book. No, it isn’t about climate change, but my dad could be excused for thinking any book about the politicization of science must be about conservatives. Many liberals, after all, have convinced themselves that it’s conservatives who attack science in the name of politics, while they would never do such a thing. Galileo’s Middle Finger corrects this misperception in a rather jarring fashion, and that’s why it’s one of the most important social-science books of 2015.

At its core, Galileo’s Middle Finger is about what happens when science and dogma collide — specifically, what happens when science makes a claim that doesn’t fit into an activist community’s accepted worldview. And many of Dreger’s most interesting, explosive examples of this phenomenon involve liberals, not conservatives, fighting tooth and nail against open scientific inquiry.

It’s probably not a book anyone who reads this blog regularly needs to read, but it may be one that most of us would like to give to someone we know. As Nassim Taleb explains, what passes for science simply isn’t really science, and it certainly isn’t reliable.

What we are seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

With psychology papers replicating less than 40% of the time, dietary advice reversing after 30 years of fatphobia, macroeconomic analysis working worse than astrology, microeconomic papers wrong 40% of the time, the appointment of Bernanke, who was less than clueless about the risks, and pharmaceutical trials replicating only a fifth of the time, people are perfectly entitled to rely on their own ancestral instincts and listen to their grandmothers, who have a better track record than these policymaking goons.

Indeed, one can see that these academico-bureaucrats who want to run our lives aren’t even rigorous, whether in medical statistics or policymaking. I have shown that most of what Cass Sunstein and Richard Thaler types call “rational” or “irrational” comes from a misunderstanding of probability theory.