Desperately seeking the steady state

For some strange reason, some scientists intensely dislike the idea of the universe having a discrete beginning point. I can’t imagine why….

The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a “Big Bang” did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

“The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model in which the universe has no beginning and no end.

The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity.

Let’s see. The evidence suggests that “the laws of physics appear to break down”. That would be non-natural, perhaps one might say, “supernatural”. So obviously, logic and evidence must be ruled out of bounds in favor of math that adds up correctly!

The mere fact that the physicists are “emphasizing” that they are not doing what they are observably doing tends to cast more than a little doubt on their theory. Perhaps the math will hold up, perhaps it won’t. But it won’t surprise me to see this turn out to be about as credible as historical South American temperature data revisions.

Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity.
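For reference, the standard, uncorrected first Friedmann equation of general relativity is shown below; the quantum correction terms that Ali and Das add to it are in their Physics Letters B paper and are not reproduced here.

```latex
% Standard first Friedmann equation (no quantum corrections):
%   a      -- cosmic scale factor (the dot denotes a time derivative)
%   rho    -- energy density
%   k      -- spatial curvature constant
%   Lambda -- cosmological constant
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}
```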

Corrected, revised… we sure seem to see scientists using those words a lot these days.


The biggest science scandal ever

And yes, as we AGW/CC skeptics have been saying since the beginning, the world is not getting warmer and you cannot trust corrupt scientists any more than you can trust corrupt bankers or corrupt politicians. Global warming is a fraud and a scandal of global proportions:

When future generations look back on the global-warming scare of the past 30 years, nothing will shock them more than the extent to which the official temperature records – on which the entire panic ultimately rested – were systematically “adjusted” to show the Earth as having warmed much more than the actual data justified.

Two weeks ago, under the headline “How we are being tricked by flawed data on global warming”, I wrote about Paul Homewood, who had checked the published temperature graphs for three weather stations in Paraguay against the temperatures that had originally been recorded. In each instance, the actual trend of 60 years of data had been dramatically reversed, so that a cooling trend was changed to one that showed a marked warming.

This was only the latest of many examples of a practice long recognised by expert observers around the world – one that raises an ever larger question mark over the entire official surface-temperature record.

Following my last article, Homewood checked a swathe of other South American weather stations around the original three. In each case he found the same suspicious one-way “adjustments”. First these were made by the US government’s Global Historical Climatology Network (GHCN). They were then amplified by two of the main official surface records, the Goddard Institute for Space Studies (GISS) and the National Climatic Data Center (NCDC), which use the warming trends to estimate temperatures across the vast regions of the Earth where no measurements are taken. Yet these are the very records on which scientists and politicians rely for their belief in “global warming”.

Homewood has now turned his attention to the weather stations across much of the Arctic, between Canada (51 degrees W) and the heart of Siberia (87 degrees E). Again, in nearly every case, the same one-way adjustments have been made, to show warming up to 1 degree C or more higher than was indicated by the data that was actually recorded.

There has been some discussion about the discrepancy between the post-1979 satellite data and the surface-temperature record, as the former shows no sign of global warming while the latter has recently begun to do so. Now we have the answer explaining that discrepancy: the surface-temperature record has been corrupted and is false.

Notice that intrepid scientists went to the trouble of falsifying the data in places where it would be relatively difficult to check what they were reporting. This is genuinely a massive scandal, and if scientists don’t quickly denounce what has taken place, all of them are soon going to lose even more credibility once the public sees what a tremendous scam has been perpetrated on its dime.


Science and the Middle Ages

Tim O’Neill explains why the view that science came to a halt during the Middle Ages is not merely false, but the product of anti-Christian Enlightenment propaganda.

The standard view of the Middle Ages as a scientific wasteland has persisted for so long and is so entrenched in the popular mind largely because it has deep cultural and sectarian roots, but not because it has any real basis in fact. It is partly based on anti-Catholic prejudices in the Protestant tradition, that saw the Middle Ages purely as a benighted period of Church oppression. It was also promulgated by Enlightenment scholars like Voltaire and Condorcet who had an axe to grind with Christianity in their own time and projected this onto the past in their polemical anti-clerical writings. By the later Nineteenth Century the “fact” that the Church suppressed science in the Middle Ages was generally unquestioned even though it had never been properly and objectively examined.

It was the early historian of science, the French physicist and mathematician Pierre Duhem, who first began to debunk this polemically-driven view of history. While researching the history of statics and classical mechanics in physics, Duhem looked at the work of the scientists of the Scientific Revolution, such as Newton, Bernoulli and Galileo. But in reading their work he was surprised to find some references to earlier scholars, ones working in the supposedly science-free zone of the Middle Ages. When he did what no historian before him had done and actually read the work of Medieval physicists like Roger Bacon (1214-1294), Jean Buridan (c. 1300-c. 1358), and Nicholas Oresme (c. 1320-1382), he was amazed at their sophistication, and he began a systematic study of the until-then ignored Medieval scientific flowering of the Twelfth to Fifteenth Centuries.

What he and later modern historians of early science found is that the Enlightenment myths of the Middle Ages as a scientific dark age suppressed by the dead hand of an oppressive Church were nonsense. Duhem was a meticulous historical researcher and fluent in Latin, meaning he could read Medieval scientific works that had been ignored for centuries. And as one of the most renowned physicists of his day, he was also in a unique position to assess the sophistication of the works he was rediscovering and to recognise that these Medieval scholars had actually discovered elements in physics and mechanics that had long been attributed to much later scientists like Galileo and Newton. This did not sit well with anti-clerical elements in the intellectual elite of his time, and his publishers were pressured not to publish the later volumes of his Système du Monde: Histoire des doctrines cosmologiques de Platon à Copernic – the establishment of the time was not comfortable with the idea of the Middle Ages as a scientific dark age being overturned.

One thing I learned in writing The Irrational Atheist was never to trust “what everybody knows” about history. It’s more than MPAI, and more than a general ignorance of history; the fact is that most people who consider themselves educated with regard to history are, demonstrably, maleducated. They’ve been given a false narrative that is belied by the actual documentary evidence.


Science is raciss

It’s now impossible to claim that we are all the same under the skin, thanks to DNA phenotyping.

Parabon’s Snapshot Forensic system is said to be able to accurately predict genetic ancestry, eye colour, hair colour, skin colour, freckling, and face shape in individuals from any ethnic background.

Each prediction is presented with a ‘measure of confidence’.

As an example, the test can say a person has green eyes with 61 per cent confidence, green or blue with 79 per cent confidence, and that they definitely don’t have brown eyes, with 99 per cent confidence.

Based on ancestry, and other markers, the test also creates a likely facial shape. From all of this information, it builds a computer generated e-fit.
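Purely as an illustration of how those tiered confidence figures hang together, here is a minimal sketch; the per-colour probabilities are hypothetical numbers chosen to be consistent with the quoted example, not actual Snapshot output.

```python
# Hypothetical per-colour probabilities, chosen only so that they are
# consistent with the quoted example (61% green, 79% green or blue,
# 99% not brown). These are NOT real Snapshot outputs.
eye_colour_probs = {"green": 0.61, "blue": 0.18, "hazel": 0.20, "brown": 0.01}

green = eye_colour_probs["green"]                 # 0.61 -> "green, 61% confidence"
green_or_blue = green + eye_colour_probs["blue"]  # 0.79 -> "green or blue, 79% confidence"
not_brown = 1.0 - eye_colour_probs["brown"]       # 0.99 -> "not brown, 99% confidence"

print(f"green: {green:.0%}, green or blue: {green_or_blue:.0%}, not brown: {not_brown:.0%}")
```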

Science is gradually obliterating the secular myths, one by one. My expectation is that most of the “racist pseudo-science” that was supposedly falsified (and which in most cases was simply declared out of bounds by equalitarian anti-scientists) is going to come back in vogue with a vengeance, and on a sound scientific basis.


Cuckoo for peer review

Scientistry at its finest:

Shrime decided to see how easy it would be to publish an article. So he made one up. Like, he literally made one up. He did it using www.randomtextgenerator.com. The article is entitled “Cuckoo for Cocoa Puffs?” and its authors are the venerable Pinkerton A. LeBrain and Orson Welles. The subtitle reads: “The surgical and neoplastic role of cacao extract in breakfast cereals.” Shrime submitted it to 37 journals over two weeks and, so far, 17 of them have accepted it. (They have not “published” it, but say they will as soon as Shrime pays the $500. This is often referred to as a “processing fee.” Shrime has no plans to pay them.) Several have already typeset it and given him reviews, as you can see at the end of this article. One publication says his methods are “novel and innovative”! But when Shrime looked up the physical locations of these publications, he discovered that many had very suspicious addresses; one was actually inside a strip club.

Many of these publications sound legitimate. To someone who is not well-versed in a particular subfield of medicine—a journalist, for instance—it would be easy to mistake them for valid sources. “As scientists, we’re aware of the top-tier journals in our specific sub-field, but even we cannot always pinpoint if a journal in another field is real or not,” Shrime says. “For instance, the International Journal of Pediatric Otorhinolaryngology is the very first journal I was ever published in and it’s legitimate. But the Global Journal of Pediatric Otorhinolaryngology is fake. Only someone in my field would know that.”

You can trust scientists. Because global warming. And they have proven that birds are, indeed, cuckoo for Cocoa Puffs.


The costs of scientistry

A scientist laments the loss of scientific credibility:

If we want to use scientific thinking to solve problems, we need people to appreciate evidence and heed expert advice. But the Australian suspicion of authority extends to experts, and this public cynicism can be manipulated to shift the tone and direction of debates. We have seen this happen in arguments about climate change.

This goes beyond the tall poppy syndrome. Disregard for experts who have spent years studying critical issues is a dangerous default position. The ability of our society to make decisions in the public interest is handicapped when evidence and thoughtfully presented arguments are ignored.

So why is science not used more effectively to address critical questions? We think there are several contributing factors, including the rise of Google experts and the limited skill set of scientists themselves. We think we need non-scientists to help us communicate with and serve the public better.

At a public meeting recently, when a well-informed and feisty elderly participant asked a question that referred to some research, a senior public servant replied: “Oh, everyone has a scientific study to justify their position, there is no end to the studies you could cite, I am sure, to support your point of view.”

This is a cynical statement, where there are no absolute truths and everyone’s opinion must be treated as equally valid. In this intellectual framework, the findings of science can be easily dismissed as one of many conflicting views of reality.

Such a viewpoint is dangerous from our point of view.

This is the result of scientists passing off their unscientific opinions as expertise for decades. No one trusts “science” anymore because no one trusts scientists. Everyone has seen too many idiots with advanced degrees and white coats trying to pull the Dennett Demarche, in which the scientist argues that biologists can be trusted because physics is very accurate.

This is why a clear distinction between scientody, the scientific method, and scientistry, the profession of science, is absolutely necessary. But because men are corrupt and fallen, too many scientists find it too useful to be able to cloak their unscientific (and all too often uneducated) opinions under the veil of scientific expertise.

And the writer undercuts his own argument when he laments that “evidence” (which may be scientific) and “thoughtfully presented arguments” (which have absolutely nothing to do with science) are ignored. Because the fact is that logic is not science; it is philosophy, and philosophy is exactly what scientody is designed to counteract.

Furthermore, science is intrinsically dynamic. So listening to the “real experts” in science is a guaranteed way to ensure that one ignores both logic and, in many cases, reality.


The three laws of behavioral genetics

JayMan explicates them:

  • First Law. All human behavioral traits are heritable.
  • Second Law. The effect of being raised in the same family is smaller than the effect of genes.
  • Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families.

These laws are more controversial than they should be. No one who comes from a large family will find it easy to take exception to them, and anyone who does must be put to the objective test.

What, specifically, is a behavioral trait that is not heritable? And how would one go about demonstrating that? My impression is that, as with many other issues, the fact that most people are binary thinkers renders it very difficult for them to grasp the truth of probabilistic matters. If it can’t be answered with an absolute “yes, always”, then they assume that the answer must necessarily be “no, never”.
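For what it is worth, the standard way such a demonstration is attempted is the classical twin design, which compares trait correlations between identical and fraternal twins. Here is a minimal sketch using Falconer's formula; the correlation values are purely hypothetical.

```python
# Falconer's formula from the classical twin design:
#   h^2 = 2 * (r_MZ - r_DZ)
# r_MZ: trait correlation between identical (monozygotic) twins
# r_DZ: trait correlation between fraternal (dizygotic) twins
# The correlations below are hypothetical, for illustration only.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Rough heritability estimate from twin correlations."""
    return 2.0 * (r_mz - r_dz)

r_mz, r_dz = 0.75, 0.45  # hypothetical correlations for some behavioral trait
print(f"Estimated heritability: {falconer_heritability(r_mz, r_dz):.2f}")  # -> 0.60
```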


Explaining the mental gymnasts

Anonymous Conservative explains the bizarre affection the Left habitually displays for Islam, despite the way in which it goes against nearly everything they say they believe:

One religion is held in utter contempt, while the other receives freely groveling praise and welcoming admiration. Which is which?

Again, rabbits don’t grovel before Muslims because they think it through, and decide to appease the people who could kill them. Rather, their mind touches violent Islam tentatively and experiences an almost imperceptible shot of fear – in rabbit-speak, a triggering. Their brain then looks at the facts of the matter, and subconsciously realizes that embracing Islam as superior is the least amygdala-stimulating of the various thoughts running through their head. Far less stimulating than insulting or opposing Islam and being killed, and far less than consciously groveling at the feet of people they intellectually acknowledge reviling, to save their own lives. If they embrace Islam and believe their own embrace is true, they can even claim to be intellectual, moral, and tolerant, all best described as anti-triggering concepts in the rabbit’s mind.

Here, once this feat of mental gymnastics occurs, you enter a strange realm. Their initial jump to the counter-intuitive position has already been established in their mind as not due to some deficit of intellect, but rather due to the immensity of their intellect. At that point, the more counter to logic the position embraced by the rabbit, the more they see it as a mark of their tolerance, evolved-ness, advancement, and superiority over the more base, primitive, stupid, caveman-like tendencies of their opposition.

You have to view leftism and rabbitism as simple logical programs, run as if on computer, by a mind that cannot tolerate triggering, and which will believe anything to avoid experiencing it.

I’m pleased to be able to say that Anonymous Conservative is now an Associate of Castalia House, so if you wish to purchase his books in either EPUB or Kindle format, you can now do so through our online store. I highly recommend both of them, as they offer genuine insight into the Left from a perspective that is as unique as it is informative.

Both books provide a useful, hands-on theoretical explanation for behavior that we have all witnessed and found inexplicable. While AC would be the first to agree that considerably more scientific evidence would be required before one could assert either of his primary hypotheses as unassailable fact, even in the absence of published peer review they are very useful heuristics in attempting to better understand, and deal with, the literal lunatics of the political Left.


The randomness of scientistry

Science is finally turning scientody on scientistry… and the results are not as self-flattering to professional science as most scientists expected.

The NIPS consistency experiment was an amazing, courageous move by the organizers this year to quantify the randomness in the review process. They split the program committee down the middle, effectively forming two independent program committees. Most submitted papers were assigned to a single side, but 10% of submissions (166) were reviewed by both halves of the committee. This let them observe how consistent the two committees were on which papers to accept.  (For fairness, they ultimately accepted any paper that was accepted by either committee.)

The results were revealed this week: of the 166 papers, the two committees disagreed on the fates of 25.3% of them: 42. But this “25%” number is misleading, and most people I’ve talked to have misunderstood it: it actually means that the two committees disagreed more than they agreed on which papers to accept. Let me explain.

The two committees were each tasked with a 22.5% acceptance rate. This would mean choosing about 37 of the 166 papers to accept. Since they disagreed on 42 papers total, this means each committee accepted 21 papers that the other committee rejected and vice versa, for 21 + 21 = 42 total papers with different outcomes. Since they each accepted 37 papers, this means they disagreed on 21/37 ≈ 56% of the list of accepted papers.

In particular, 56% of the papers accepted by the first committee were rejected by the second one and vice versa. In other words, most papers at NIPS would be rejected if one reran the conference review process (with a 95% confidence interval of 40-75%).

What rightly concerns the writer is that the observed 56 percent disagreement is closer to the 77.5 percent a purely random process would have produced than to the 30 percent that was expected. And, of course, it is nowhere near the 0 percent that the science fetishists would have us believe is always the case.
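The quoted arithmetic, and the random-process baseline, are easy to check; here is a minimal sketch using only the figures given in the excerpt.

```python
# Reproducing the NIPS consistency arithmetic from the quoted post.
papers = 166          # submissions reviewed by both committees
accept_rate = 0.225   # target acceptance rate for each committee
disagreements = 42    # papers whose fates differed between the committees

accepted_per_committee = round(papers * accept_rate)  # ~37 papers each
one_sided_accepts = disagreements // 2                # 21 accepted by only one committee
disagreement_among_accepts = one_sided_accepts / accepted_per_committee

# Two committees independently accepting 22.5% of papers at random would
# disagree on any given accepted paper with probability 1 - 0.225 = 77.5%.
random_baseline = 1 - accept_rate

print(f"Disagreement among accepted papers: {disagreement_among_accepts:.1%}")  # ~56.8%
print(f"Purely random baseline:             {random_baseline:.1%}")             # 77.5%
```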

This is a very important experiment, because it highlights the huge gap between science the process (scientody) and science the profession (scientistry). Some may roll their eyes at my insistence on using different words for the different aspects of science, but the observable fact, the scientodically informed fact, is that using the same word to refer to two aspects of science that differ so greatly in reliability is incredibly misleading.


The perils of philosophy

John Wright challenges the concept of IQ:

Since I am apparently one of those self-deceived idiots, allow me to say that the predictive ability of people who do well on one kind of intellectual test to do well on another kind of intellectual test is not science. It is not the empirical measurement of an observable reality.

I could with even greater accuracy predict that the winners of beauty pageants will be shapely women who are in favor of world peace.

I can also predict she will wear a crown and carry a bouquet.

No matter how accurate such a prediction, it is not science. Beauty is not a thing that can be measured and neither is the degree of craving for world peace.

It is (at best) confirming a correlation. This is not the same as Newton determining the laws of gravity from which accurate descriptions of falling apples and orbiting planets can be deduced mathematically. 

Such are the perils of a philosopher wading out into the deep waters of science. What is not observable about an intellectual test? What is less empirical about a percentage of correct answers than a quantity of inches or a measure of weight? And, of course, the science of intelligence goes far beyond people taking two or more intellectual tests. It is no less scientific than any other branch of genetic science, in which the birth of a baby with blue eyes can be predicted or the disease of a child yet unconceived can be anticipated on the basis of his parents’ genetics.

Science does not require precisely defined measurements to be science. It need only be observable, testable, and repeatable. The fact that it is harder to agree upon a measure for intelligence than one for height does not mean that intelligence is not observable or that the predictive model is unreliable. John appears to be erroneously targeting the fuzzy metric presently used to quantify intelligence and thinking this is sufficient to call the entire science into question.

Would he also claim that weight does not exist or is unscientific? After all, it is even harder to predict the adult weight of a baby than his IQ on the basis of his parents. As other commenters have pointed out, we have a pretty good idea of the heritability of g, so how can it reasonably be asserted that the scientific process is not being utilized? We have observed a considerable number of relevant hypotheses being tested, both formally and informally, after all.

And beauty, at least in some of its forms, can be measured, as the picture below demonstrates.