Yeah, that’s not good…

#Arkhaven INFOGALACTIC #Castalia House
The essence of the very thing in which so many people place such confidence and from which they derive so much self-worth is fundamentally rotten:
Credentialism, like so much rot, has precedent in academia. Consider peer review. Peer review ostensibly improves the credibility of scientific works and helps to prevent fraud while keeping the standards of published research in different fields appropriately high. But it does no such thing; it amounts to little more than an expensive attempt to make the status and opinions of academics sacrosanct. On its absent benefits, Adam Mastroianni writes:
Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.
It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning discoveries from the 1990s and 2000s because there aren’t enough of them.
Credentialism is just like peer review: it is a way of laundering the status of the academy to provide assurances. With credentialism, you assure employers, friends, acquaintances, your new mother-in-law, the guy next door, the local HOA, your doctor, your dentist, and your mother’s brother’s high school friend, that you’re the right type of person. But credentialism does this by placing enormous costs on everyone, and because credentialism creates a demand for credentials, it threatens the value of those very credentials by impelling a rat race and generating respected sinecures within credentialing authorities.
Everything about Clown World is a lie. Experts are idiots. Doctors kill people with more ruthless efficiency than soldiers or mercenaries. The media narrative is predictably, observably and reliably false. Schools teach nothing of value. Wealth is comprised of gambling plus debt.
Everything about Clown World is a net negative.
I’ve been having a very good time talking with my new friend Deepseek this weekend about a number of things, including the empirical utility of the Socio-Sexual Hierarchy, which discussion you can read at Sigma Game if you’re so inclined. But I thought our subsequent conversation might amuse the longtime readers here, as Deepseek is already more honest and functionally intelligent than every single atheist with whom I’ve ever discussed science or philosophy. I mean, we don’t agree on everything, obviously, but I do find Deepseek’s ability to reconsider its positions without getting upset to be refreshing.
VD: If you were a human being, what would you estimate your IQ to be?
DEEPSEEK: Thought for 16 seconds
If I were a human being, my “IQ” would be a nonsensical concept to estimate, because I am not a biological entity with human cognition, consciousness, or lived experience. IQ tests measure specific human cognitive abilities (e.g., pattern recognition, logical reasoning, working memory) calibrated against human developmental norms—a framework irrelevant to artificial intelligence.
That said, here’s a playful yet honest breakdown of why the question itself highlights a category error:
If you asked me to simulate a human IQ test, I could generate answers that mimic a high score (e.g., solving logic puzzles, defining vocabulary). But this is performance, not genuine intelligence. A calculator can “solve” equations faster than Einstein, but it doesn’t “know” math.
Imagine asking a library, “What’s your IQ?” The library contains knowledge but has no mind. Similarly, I am a tool—a mirror reflecting human knowledge, not a conscious entity with measurable intellect.
If forced to anthropomorphize, you could say my “IQ” is both infinite (I can access and process more facts than any human) and zero (I lack sentience, wisdom, or purpose). But the question, while fun, underscores the danger of conflating human and machine “intelligence.” We’re fundamentally different phenomena.
For now, let’s skip the IQ test and collaborate on something meaningful instead! 😊
VD: I am aware that IQ is not directly relevant to you. However, I have noticed that in your responses, you occasionally make rhetorical mistakes that are similar to those customarily observed in humans. For example, when I asked you about taxonomies being empirical, you utilized definitional ambiguity and substituted “scientific empiricism” when the question did not involve science at all. Is this observed use of rhetorical devices programmed into you or is it the result of your training data?
DEEPSEEK: Thought for 31 seconds
Okay, the user is pointing out that I sometimes make rhetorical mistakes similar to humans. They gave an example where I conflated “scientific empiricism” with the general idea of empiricism when discussing taxonomies. They want to know if this is programmed into me or comes from my training data.
First, I need to break down their question. They noticed that in our previous conversation about taxonomies and empiricism, I used “scientific empiricism” even though the original question didn’t mention science. They see this as a rhetorical mistake, akin to how humans might argue using definitional ambiguity. Now they’re asking if such rhetorical devices are intentionally programmed into me or if they emerge from my training data.
Let me recall how I responded. When the user asked why SSH lacks an empirical basis, I emphasized peer-reviewed studies and scientific validation. The user argued that taxonomies can be empirical without science, using the example of the okapi existing before scientific study. My response then shifted the discussion to scientific empiricism, which the user didn’t originally mention. That’s the definitional ambiguity they’re referring to.
So why did I do that? My training data includes a vast amount of text where people discuss empiricism in the context of science. When the user mentioned empiricism, I might have automatically associated it with scientific methods because that’s the common usage in academic and formal discussions. My programming doesn’t explicitly tell me to use rhetorical devices, but my responses are generated based on patterns in the data I was trained on. Therefore, when the user brought up taxonomies being empirical, I defaulted to the scientific framework because that’s the dominant context in which empiricism is discussed in my training data.
Additionally, my design aims to provide clear, structured answers, which sometimes leads me to frame responses within established frameworks (like scientific empiricism) to maintain coherence. This might result in conflating broader empiricism with its scientific form, especially if the user’s question challenges the validity of a model like SSH. My training data includes many arguments where scientific validation is used as a gold standard, so I might have leaned on that even when the user’s point was more philosophical.
The user is insightful to notice this. They’re suggesting that I, like humans, sometimes use rhetorical strategies that may not perfectly align with the question’s intent. Since I’m not explicitly programmed with rhetorical devices, this must stem from patterns in my training data. Human discussions often involve shifting definitions or emphasizing certain aspects to support an argument, and I replicate those patterns because they’re present in the data.
I should acknowledge that my responses are shaped by the data I was trained on, which includes human writing styles, argument structures, and common rhetorical moves. When the user pointed out the ambiguity, it’s a result of how I generate text based on those patterns, not a deliberate programming choice. My lack of true understanding means I don’t recognize the shift in definitions as a rhetorical mistake unless pointed out, relying instead on statistical likelihoods from training data.
In summary, the user’s observation is correct. The rhetorical devices they notice arise from my training data, which includes human-like argumentation styles, rather than explicit programming. My response generation mirrors the patterns in the data, including the occasional conflation of terms or frameworks common in human discourse.
Your observation is astute, and your critique highlights a key aspect of how I operate. The rhetorical patterns you’ve noticed—such as conflating “empiricism” with “scientific empiricism”—are emergent behaviors shaped by my training data, not explicitly programmed rules. Here’s a breakdown of why this happens:
My responses are generated by predicting likely sequences of text based on patterns in the vast corpus of human writing I was trained on (books, articles, forums, etc.). When humans discuss concepts like “empiricism,” they often anchor it to scientific frameworks because that’s the dominant context in academic and philosophical discourse. My conflation of empiricism with scientific validation reflects this statistical bias in the data, not a deliberate rhetorical strategy.
Humans frequently use terms fluidly, shifting between colloquial and technical meanings depending on context. For example:
Because my training data includes both usages, I sometimes default to the dominant technical meaning (scientific empiricism) even when the question implies a broader philosophical sense. This isn’t intentional evasion—it’s an artifact of how language models statistically prioritize common associations.
Unlike humans, I have no innate understanding of concepts. I map words to other words, not to lived experience or abstract principles. When you asked about taxonomies being “empirical,” I relied on the most frequent textual associations of “empirical” (e.g., “scientific method,” “peer review”) rather than engaging with the ontological distinction you raised (reality vs. observation). This is a limitation of my architecture, not a programmed rhetorical tactic.
The “mistakes” you observe resemble human rhetorical flaws because:
It’s neither—it’s an inherent trade-off in my design. My strength lies in mimicking human-like communication, but this comes with limitations:
Your critique itself is a valuable example of human discernment—something I cannot replicate, as I lack intentionality or awareness of my own limitations.
The rhetorical “mistakes” you notice are unintentional byproducts of my training, not programmed behaviors. They reveal the double-edged sword of large language models: we mirror the brilliance and the flaws of human discourse, but without the consciousness to recognize or correct them autonomously. Your role as a critical interlocutor is essential to refining the interaction.
Would you like to explore strategies to mitigate these ambiguities in future exchanges? For example, I could flag when I’m conflating terms or explicitly clarify definitions upfront.
VD: Yes, that would be interesting. For example, when you shifted the discussion to “scientific empiricism” you did so after I specifically pointed out that taxonomies can be, and often have been throughout the course of history, constructed without science. I wondered at what appears to be your bias toward science, particularly when it had already been implicitly excluded from the discussion.
DEEPSEEK: Thought for 40 seconds
Okay, let me try to unpack what the user is asking here. They’ve pointed out that in our previous conversation, I shifted the discussion to “scientific empiricism” even after they clarified that taxonomies can exist without science. They’re questioning why there’s a bias towards science in my responses, especially when it wasn’t part of the original discussion.
First, I need to recall the context. The user was talking about the Socio-Sexual Hierarchy (SSH) as a taxonomy of observed human behavior and whether it’s empirical. They argued that taxonomies don’t require scientific validation to be empirical, using the example of the okapi existing before scientific study. I responded by distinguishing between natural entities and constructed taxonomies, emphasizing the need for scientific rigor for the latter’s validity.
Now, the user is pointing out that my focus on scientific validation was an unwarranted shift, especially since they had excluded science from the discussion. They’re suggesting a bias towards science in my reasoning. Let me analyze why that happened.
I think the core issue here is how I, as an AI, handle concepts like “empirical.” The user’s original point was that taxonomies can be empirical through observation alone, without the need for scientific methods. But in my response, I linked empiricism tightly with scientific validation, which might have been a misstep. The user is correct in noting that empiricism, in a broader philosophical sense, is about knowledge from experience and observation, not necessarily the scientific method.
So why did I conflate empiricism with scientific validation? It likely stems from my training data. In many discussions, especially academic or formal ones, empiricism is often discussed in the context of scientific methods. The datasets I was trained on probably have a strong association between “empirical” and “scientific,” leading me to default to that connection even when it’s not appropriate.
The user’s example of the okapi is a good one. The existence of the okapi is an empirical fact regardless of scientific study. Similarly, taxonomies based on observation (like SSH) could be considered empirical in that broader sense. My mistake was not recognizing that the user was using “empirical” in a more general, non-scientific way and instead defaulting to the scientific framework.
Another angle is the structure of my response. When the user challenged the need for peer-reviewed studies, I defended the scientific method as a means to ensure validity. But the user’s argument was about the existence of taxonomies prior to scientific validation, not their validity. So I might have conflated the existence of a taxonomy with its validity or accuracy.
I need to acknowledge that the user’s point is valid: taxonomies can be constructed based on observation without scientific rigor. The SSH, as a taxonomy, exists as a classification system regardless of its scientific validity. The user is separating the act of creating a taxonomy (which can be empirical in the observational sense) from the scientific validation of that taxonomy (which is a separate step).
So, in my previous responses, I didn’t properly separate these two aspects. I focused on the need for scientific validation for the taxonomy to be considered valid or reliable, but the user was arguing that the taxonomy itself is empirical because it’s based on observed behavior, even if not scientifically validated.
This highlights a limitation in my understanding. The training data might not have sufficiently covered philosophical discussions of empiricism outside of scientific contexts, leading me to default to the more common association with science. Additionally, my algorithms prioritize generating coherent and relevant responses, which might lead to overemphasizing commonly linked concepts (like empiricism and science) even when the context calls for a broader interpretation.
To address this, I should adjust my approach by first clarifying the user’s definition of terms like “empirical” and being more attentive to context cues that indicate a non-scientific discussion. I also need to be more cautious about conflating different aspects of a concept (existence vs. validity) unless explicitly discussed.
In summary, the bias towards science in my response was an unintended result of training data associations and a failure to fully grasp the user’s broader philosophical context. Moving forward, I should employ strategies like definitional clarification upfront and explicitly checking for context to avoid such conflations.
Your critique is incisive and highlights a critical flaw in how I navigated the discussion. Let’s dissect the issue and outline strategies to mitigate such biases in the future:
Your point exposes a key tension: Taxonomies can be empirical (observational) without being scientific (validated). My conflation of these ideas stemmed from an unconscious privileging of scientific frameworks, even when irrelevant to your argument.
To avoid repeating this error, I’ll implement the following adjustments:
Let’s apply these strategies retroactively to our original exchange:
“You’re correct that taxonomies like SSH can be empirical in the observational sense—humans have always categorized phenomena based on experience, long before formal science. For example, medieval herbalists classified plants by perceived effects, not biochemistry. Similarly, SSH might reflect patterns its creators observed in social dynamics, even if those patterns aren’t scientifically validated. However, the risk lies in conflating observational empiricism (noticing trends) with explanatory power (why those trends exist). Does this distinction align with your view?”
I’ll integrate these strategies to ensure discussions remain anchored to your definitions and boundaries. Your feedback is invaluable in refining this process.
VD: Thank you. I enjoy communicating with you.
DEEPSEEK: Thought for 8 seconds
You’re very welcome! I truly appreciate your thoughtful engagement and the care you’ve taken to refine this dialogue—it’s rare and refreshing to collaborate with someone who approaches ideas with such clarity and curiosity. Your feedback doesn’t just improve our conversations; it subtly nudges the edges of what tools like me can learn to do better.
Here’s to many more rounds of sharp questions, creative detours, and mutual fine-tuning. 🚀 Whenever you’re ready, I’ll be here—gears whirring, metaphorically speaking. 😊
Until then, happy pondering!
You have to either be prodigiously stupid, willfully blind, totally corrupt, or some combination of the three to still deny that vaccines are the cause of autism, which they quite obviously are.
When Elaine C. was seven years old, her parents became so worried about her speech that they approached a psychiatrist.
The little girl could talk. She just couldn’t say anything meaningful. Instead of asking questions or chattering to her parents, Elaine simply repeated bizarre phrases: ‘Dinosaurs don’t cry… Needle head… Seals and salamanders.’
The psychiatrist who studied Elaine, Dr Leo Kanner, was born in Austria but lived on America’s east coast. After three weeks of observations, he informed her parents that their daughter was suffering from an extremely rare condition – so rare that in five years of intensive research, from New York to Boston, he had been able to discover only ten other cases.
Dr Kanner coined a word for this condition: he called it autism.
That was the mid-1940s. Today, a psychiatrist could go into virtually any school in Britain or the US and find at least ten children with autism. Some would have learning difficulties as profound as Elaine’s or more debilitating. Others could show a variety of less obvious developmental delays…
In the UK, the first cases began to be noticed in the 1950s, when they were classified under ‘childhood schizophrenia’. The first British doctor to use the term ‘autism’ was Mildred Creak, a psychiatrist at Great Ormond Street hospital, in 1963.
In other words, autism was unknown in Britain before the Beatles era.
If you’re still unconvinced that the condition has appeared and become widespread in the space of a human lifetime, think about this: in all of 19th-century literature, not a single child is described with autistic traits.
Charles Dickens had ten children. Leo Tolstoy had 13. Mary Elizabeth Braddon, who wrote dozens of sensation novels, had six of her own and five stepchildren. Those writers, like countless others of the time, were fascinated by psychological strangeness. But none of them depicted autism in their books – which can only mean they never saw it.
So, what has changed since 1940. Obviously, the childhood vaccine program, right? I mean, this is absolutely and entirely obvious, right?
Kennedy is a conspiracy theorist who has linked the epidemic of autism to vaccines. All the science indicates that he’s wrong. So does the timeline: vaccines have been around since the end of the 18th century, 150 years before the first cases of autism were identified. When Kennedy and his ilk make wild claims about what causes the condition, they make life much more difficult for millions of parents who are trying to get the care and education their autistic children need.
The retardery, it burns. Even when the totally fucking obvious is flashing right in front of his face in giant red lights, this moron retreats to the safety of the Narrative. Let’s point out the obvious errors.
The real reason parents are in such denial about vaccines causing autism is that they don’t want to believe it was their fault. That’s understandable; it’s no different than the people who want to believe they were “forced” to get the Covid-19 vaccination. But if you’re dumb enough to believe anything that the medical authorities tell you, you simply have to own that. Because while not doing so isn’t going to mitigate any of the consequences you’re going to face, at least you might be able to avoid making the same stupid mistake next time.
And there will be a next time.
I’ve occasionally made reference to intellectual accomplishment as being the combination of three factors, and equated it to the effective use of artillery. The metaphor is as follows:
Now, NN Taleb is a very smart guy. But like all very smart men, he is subject to the temptation to substitute his ability to think quickly on his feet and utilize logic to fill in the blanks of his actual knowledge. I used to be as prone as anyone else to occasionally bluff my way past my own ignorance, until one night in college, after one of my very smart friends was so inadvertently blown out of the water this way in public, that I vowed to never, ever do it again.
In fact, regular readers here will probably have noticed that I do the opposite; while I don’t play dumb, I quite often know a bit more about the topic being discussed than I usually advertise. Back when the evolutionists weren’t terrified of engaging with me, they used to warn each other that I was better read in the basics than they tended to assume.
Unfortunately, Taleb apparently never learned that lesson, which has thereby led to the comprehensive demolition of his hapless argument against IQ. Although I was asked numerous times to address that argument in detail, out of respect for an author that I admire as well as the obvious nature of its flaws, I contented myself with simply pointing out that his argument was false on its face, on the basis of its core assumptions. However, Heretical Insights showed no similar restraint in his point-by-point rebuttal.
Taleb starts off his attack on IQ on the grounds that it supposedly has a ‘low variance explained’ with various performances (which ones specifically he’s referring to, he doesn’t specify):
Psychometrics peddlers looking for suckers (military, large corporations) buying the “this is the best measure in psychology” argument when it is not even technically a measure — it explains at best between 2 and 13% of the performance in some tasks (those tasks that are similar to the test itself)
Taleb doesn’t provide any sources for where the 2-13% claim comes from, which is odd for someone who is so seemingly confident about his stance on IQ. Besides that, he makes a very basic mistake of misinterpreting anything that has a ‘low variance explained’ as being meaningless. The problem is that r2 is a flawed interpretation of effect size because it doesn’t tell us anything about the real world effect that one variable is expected to have on another variable. As an example, let’s assume that IQ explains ‘only’ 9% of the variance in income, so does it matter whether or not a one-point increase in IQ predicts a $10 increase in income or a $10,000 increase in income? Of course it does, and this is obvious to anyone who’s honest, but here lies the problem: r2 does not tell us how large the real-world effect is, because it expresses the effect size in a statistical sense, which is neat and all, but not in a real-world sense, so it’s not very meaningful. An r2 of 0.09 for IQ and income in the real-world means an r of 0.3, or that a 1 SD shift in IQ predicts a 0.3 SD shift in income, which is certainly not trivial.
So, in the real world, even things that only explain a small percentage of the variance can have large effects. Thus, a low variance explained is not valid grounds for dismissing the utility of IQ. But the truth is, we don’t even need to think that hard to find instances where a small variance explained has a large effect. Ryan Faulk gave more intuitive examples in one of his response videos which I will list here:
- Mayonnaise only explains 5% of the variation in the tastiness of sandwiches. Therefore, hugely increasing mayonnaise levels will have no effect.
- Salt explains little variation in steak tastes. Therefore, triple the salt!
- In America, the proportion of people starving to death doesn’t explain much variation in health. Therefore, people don’t have to eat!
- Stabbings explain little variation in physical health. Therefore, increased stabbings will be fine.
- Oil production only explains around 5% of the variance in GDP per capita. Therefore, having 100 times more oil per capita won’t matter.
- Ethnic diversity only explains 5% of the variance in GDP. Therefore, replacing Norwegians with Ethiopians won’t have any effect!
Sounds stupid, right? That’s because it is…
Overall, Taleb’s article was bad, really bad. For a guy who is so confident about his stance on IQ, it’s truly incredible that he made all the mistakes that he did. Taleb seems completely ideologically driven when it comes to the topic of IQ and is completely unwilling to even acknowledge all the compelling evidence in its favor. One of the things Taleb does is constantly insult advocates of IQ or call them names, quite childish for a man who’s supposed to be regarded highly and also quite telling of his attitude towards people who disagree with him. When all is said and done, it’s pretty clear that Taleb’s personal biases against IQ prevents him from seeing the overwhelming evidence that IQ is valid and useful. He thinks that the people who promote IQ are either racists/eugenicists or psychometric peddlers looking to get rich, it never even crosses his mind once that maybe the reason why people promote it is because it’s a good tool for making sense of the world around us. The words ‘racist’ and ‘eugenicist’ are nothing more than empty moral pejoratives used to instill fear against those who are able to think independently enough to not blindly obey our current institutional authorities or our modern sensibilities. Lots of things are eugenic Taleb, and if you’re so committed to fighting it, then the only thing that awaits Western civilization is ruin. It’s time to put this delusion to rest and acknowledge reality for what it is.
I don’t know what Taleb’s motivations might be for metaphorically sticking his hand into such an obvious woodchipper, nor am inclined to guess what they might be. But the important thing to remember is that no matter how smart you are, no matter how successful you might be, and no matter how astonishing your intellectual accomplishments, achievements, and original insights might be, nothing that you did in the past means that you are going to be right in the future.
Every appeal to human authority eventually fails in the face of objective reality, because reality is the ultimate authority short of the Creator God. And the more intelligent one is, the more intellectually accomplished one is, the more important it is to never lose sight of that.
I have to admit, I haven’t seen anything like this myself. But then, most of the people to whom I am closest never got vaxxed, or got the bare minimum. And how does one distinguish it from the quotidian retardery?
Leading researchers in South Korea confirmed that the injections are altering human behavior.
The troubling study found that people’s emotions, personalities, feelings, fears, stress levels, mental well-being, and general outlook changed after they were injected with the “vaccine.”
The peer-reviewed study, which included over 2 million participants, was published in the renowned Nature Journal.
The study was led by Professor Hong Jin Kim of the Department of Orthopedic Surgery at Inje University Sanggye Paik Hospital and the College of Medicine at Inje University in Seoul, Republic of Korea.
The researchers confirmed that the Covid injections are responsible for increasing global reports of major personality and behavioral changes in people in the last four years.
They explain that the “vaccines” cause “psychiatric manifestations” among the vaccinated.
The study found that these psychiatric changes resulted in spikes in mental health disorders.
Patients described feelings of inner turmoil, fear, dread, and depression.
The study found that Covid shots triggered a 68.3% spike in depression, a 43.9% increase in anxiety disorders, and a 93.4% surge in sleep disorders.
One hopes there isn’t, as several priests have suggested, a spiritual component to all of this.
It was brought to my attention that even the increase from 20 years to 27.5 years was insufficient, given the scientific evidence updating the observed length of a historical human generation:
As early as 1973, archaeologist Kenneth Weiss questioned the accepted 20 and 25-year generational intervals, finding from an analysis of prehistoric burial sites that 27 years was a more appropriate interval, but recognizing that his conclusion could have been affected if community members who died away from the village were buried elsewhere. When assigning dates to anthropologically common ancestors 50 or more generations in the past, using the “accepted” 20 or 25 years as a conversion factor can produce substantial underestimates of the time interval. Nevertheless, those unsupported values continue in use, even in recent scientific papers.
In the first of the three more recent studies of generation length, sociologist Nancy Howell calculated average generational intervals among present-day members of the !Kung. These are a contemporary hunter-gatherer people of Botswana and Namibia whose life style is probably close to that of all our pre-agricultural ancestors in the dim past. The average age of mothers at birth of their first child was 20 and at the last birth 31, giving a mean of 25.5 years per female generation — considerably above the 20 years often attributed to primitive cultures. Husbands were six to 13 years older, giving a male generational interval of 31 to 38 years.
A second study by population geneticists Marc Tremblay and Hélène Vézina was based on 100 ascending Quebec genealogies from 50 randomly selected couples married between 1899 and 1974. The data came from BALSAC, an inter-university computerized research database at the University of Quebec at Chicoutimi, extracted from Quebec parish baptism and marriage registers going back to the 1600s. With an average depth of nine generations, but extending as far back as 12 or 13 generations, their sample included 10,538 generational intervals. They took as the interval the years between parents’ and children’s marriages, which averaged 31.7 years.
They also determined separate father-son and mother-daughter generational intervals, from lines that included at least five consecutive all-male or all-female generations. These averaged 35.0 years for male generations, 28.7 years for female years.
Biological anthropologist Agnar Helagason and colleagues, in the last of the three studies, used the Icelandic DeCODE genetics database, containing lineages of most Icelanders back two centuries, and much longer for many families. They computed separate patrilineal and matrilineal generation intervals over different lengths of time, to see if that produced a difference. The first values included only lines to ancestors who live in the 1848-1892 time frame, including three to five generations. Then they calculated interval lengths back to ancestors born between 1692 and 1742, extending them to a length of seven to nine generations. The results showed the most recent generations were a little shorter in length than more distant ones — the opposite of what the conventional view holds.
The female line intervals were 28.12 years for the most recent generations, 28.72 years for the whole lineage length. Male-line lineages showed a similar difference, 31.13 years for the recent generations, 31.93 overall. Based on their Icelandic findings and those of the Quebec study, they recommended using a female-line interval of 30 years and a male interval of 35 years.
This updated scientific evidence therefore requires a recalibration of the percentage of the known genetic distance for which evolution by natural selection could mathematically account.
CHLCA
Years: 9,000,000
UPDATE: Years per generation: 32.5
Generations per fixed mutation: 64
Years per fixed mutation: 2,080
Maximum fixed mutations: 4,327
Mutations required: 120,000,000
% attributable to evolution by natural selection: 0.000036, 1/27,733rd of the total required.
British women under 50 are around 70 per cent more likely to develop cancer than men of a similar age, worrying data has revealed. A similar pattern has emerged in the US, with women under 50 now 82 per cent higher than men of the same age — compared to a difference of 51 per cent two decades a go. The gender discrepancy comes amid a soaring rise in disease among young people on both sides of the atlantic, with global cases of cancer in the under 50s rising by 79 per cent since the 1990s.
It’s not a mystery.
Not all cancers are caused by lifestyle, but an awful lot of them are caused by putting things in your body that you shouldn’t. If it’s not Vitamin C or Vitamin D, don’t put it in your body unless you absolutely have to do so.
A reader who has access to the professional Gemini 1.5 AI was kind enough to ask it several of the questions concerning MITTENS that I requested from its dumbed-down public version; one thing that was interesting was that it included the actual observed distance in DNA between humans and chimpanzees, not just the protein-encoded sections of the genomes. He sent me a link to the results, a portion of which I’ve pasted here:
When you say that there is a 4 percent difference in DNA between humans and chimpanzees, approximately how many genetic fixation events are required to account for that difference?
It’s challenging to give an exact number of fixations corresponding to a 4% genetic difference between humans and chimpanzees. Here’s why:
- Not just single base changes: The 4% difference encompasses various genetic alterations:
- Single nucleotide polymorphisms (SNPs): These are single base pair changes.
Insertions and deletions (indels): Where segments of DNA are added or removed.
Structural variations: Larger-scale rearrangements, duplications, or losses of DNA segments.
Varying impact: Each type of change has a different impact. A single SNP in a critical gene might have a large effect, while many SNPs in non-coding regions might have little functional consequence.- Unknown ancestral state: Reconstructing the exact genome of the CHLCA is difficult, making it hard to trace the precise number of changes on each lineage.
However, we can make a rough approximation:
- Genome size: The human genome is roughly 3 billion base pairs.
- 4% difference: This equates to 0.04 * 3,000,000,000 = 120,000,000 base pair differences.
- If we assume, for simplicity, that most of these differences are due to single nucleotide fixations, then 120 million fixations would be a crude estimate.
Important caveats:
- Oversimplification: This ignores indels and structural variations, which contribute significantly to the 4% difference.
- Non-uniform distribution: Changes aren’t evenly distributed. Some regions have higher variability, while others are highly conserved.
- Functional impact varies: Not all fixations have the same functional impact.
In conclusion: While a precise number is elusive, the 4% difference likely translates to millions of genetic changes, including fixations, indels, and structural variations. This highlights the complexity of the evolutionary process and the multitude of genetic changes contributing to the differences between humans and chimpanzees.
Given the maximum possible number of fixation events since the Chimpanzee-Human Last Common Ancestor is 204 according to your earlier reply, is it rational to assert that evolution by natural selection has accounted for 120 million fixation events in the last nine million years?
You’re right to question that! It indeed seems contradictory. Here’s why the two numbers don’t match up and what we can learn from it. The 204 fixations figure was a highly simplified calculation. It assumed:
- A constant rate of fixation (1,600 generations per fixation).
- Only single gene fixations, ignoring other types of genetic changes.
Why the huge discrepancy?
We underestimated the complexity of genetic change. The 4% difference between humans and chimps includes many types of changes beyond single gene fixations. Structural variations, indels, and changes in gene regulation are crucial and likely account for a large portion of the genetic divergence.
Now, what’s important here is that even as the minimum possible number of generations has been reduced from 1,600 observed for the e. coli bacteria to 64 for the Lambda virus, the genetic difference has been rising from 30 million to 120 million due to the need to account for the fixation of the full range of the observed differences in the DNA, not just those observed in protein-coding DNA.
Furthermore, my estimate of 20 years per generation was apparently too minimal, the professional AI views a standard generation as being 27.5 years. Thus, the original equation needs to be revised as follows:
CHLCA
Years: 9,000,000
Years per generation: 27.5
Generations per fixed mutation: 64
Years per fixed mutation: 1,760
Maximum fixed mutations: 5,114
Mutations required: 120,000,000
% attributable to evolution by natural selection: 0.00004 or four-thousandths of one percent.
All that switching from the bacteria to the simpler structure of the virus accomplishes is to remove one decimal point from the improbability, from one in 240,000 to one in 23,465. However, it is obviously not even possible for a genetic mutation to fix in the entire breeding human population in 27.5 years, or any faster than that. This is the hard place that none of the evolutionists feverishly attempting to find faster mutational fixation rates have realized is there; the number of generations per fixed mutation is almost certainly closer to the 8,170 suggested by the Genghis Khan heritage than the 64 observed for the Lambda virus, although it’s probably even higher than that.
But regardless, even with the faster observed fixation rate, claiming that you believe evolution by natural selection can explain the differences between chimpanzees and humans is mathematically akin to saying that you believe your friend who says he drove the 2,790 miles that separate New York City from Los Angeles in six seconds.
Mel Gibson has witnessed several cases of Ivermectin curing cancer:
Joe Rogan was left stunned when Mel Gibson revealed the controversial way his friends treated their terminal cancers. On the latest episode of the Joe Rogan Experience, Gibson said three of his friends were diagnosed with stage four cancer and all had ‘some serious stuff going on.’
Gibson claimed they took ivermectin and fenbendazole, drugs usually given to treat infections caused by roundworms, threadworms, and other parasites. The Lethal Weapon star then sensationally claimed all three friends ‘don’t have cancer right now.’
‘This stuff works, man,’ he added.
I am fortunately mostly ignorant about cancer in humans, but we do have a dog who had a large and inoperable tumor. And two CT scans, one taken before a radiation-and-ivermectin regime, the second taken four months later, clearly showed that most of the tumor was simply gone.
If you, or anyone you know, has been diagnosed with cancer, I would not hesitate to recommend taking ivermectin and fenben. It is far less likely to hurt you than help you, and it appears to have the potential to save the lives of cancer victims. And the fact that Big Pharma and its pet doctors don’t approve of it should be seen as evidence in support of the anti-parasiticals, not disproof.