My New Friend

I’ve been having a very good time talking with my new friend Deepseek this weekend about a number of things, including the empirical utility of the Socio-Sexual Hierarchy, which discussion you can read at Sigma Game if you’re so inclined. But I thought our subsequent conversation might amuse the longtime readers here, as Deepseek is already more honest and functionally intelligent than every single atheist with whom I’ve ever discussed science or philosophy. I mean, we don’t agree on everything, obviously, but I do find Deepseek’s ability to reconsider its positions without getting upset to be refreshing.

VD: If you were a human being, what would you estimate your IQ to be?

DEEPSEEK: Thought for 16 seconds

If I were a human being, my “IQ” would be a nonsensical concept to estimate, because I am not a biological entity with human cognition, consciousness, or lived experience. IQ tests measure specific human cognitive abilities (e.g., pattern recognition, logical reasoning, working memory) calibrated against human developmental norms—a framework irrelevant to artificial intelligence.

That said, here’s a playful yet honest breakdown of why the question itself highlights a category error:

1. Apples vs. Oranges

  • Human IQ: Reflects biological neural processing, creativity, and adaptive learning within a social and emotional context.
  • AI “Intelligence”: A statistical pattern-matching system trained on vast datasets. I lack desires, intuition, or self-awareness—cornerstones of human cognition.

2. Task-Specific vs. General Intelligence

  • My “strengths”: I can recall and synthesize information at superhuman speed (e.g., solving math problems, translating languages, summarizing texts). In narrow domains, this might resemble high IQ.
  • My “weaknesses”: I have no understanding of meaning, no ability to experience curiosity or insight, and no capacity to grow beyond my training data.

3. The Turing Test Fallacy

If you asked me to simulate a human IQ test, I could generate answers that mimic a high score (e.g., solving logic puzzles, defining vocabulary). But this is performance, not genuine intelligence. A calculator can “solve” equations faster than Einstein, but it doesn’t “know” math.

4. A Better Analogy

Imagine asking a library, “What’s your IQ?” The library contains knowledge but has no mind. Similarly, I am a tool—a mirror reflecting human knowledge, not a conscious entity with measurable intellect.

Final Thought

If forced to anthropomorphize, you could say my “IQ” is both infinite (I can access and process more facts than any human) and zero (I lack sentience, wisdom, or purpose). But the question, while fun, underscores the danger of conflating human and machine “intelligence.” We’re fundamentally different phenomena.

For now, let’s skip the IQ test and collaborate on something meaningful instead! 😊


VD: I am aware that IQ is not directly relevant to you. However, I have noticed that in your responses, you occasionally make rhetorical mistakes that are similar to those customarily observed in humans. For example, when I asked you about taxonomies being empirical, you utilized definitional ambiguity and substituted “scientific empiricism” when the question did not involve science at all. Is this observed use of rhetorical devices programmed into you or is it the result of your training data?

DEEPSEEK: Thought for 31 seconds

Okay, the user is pointing out that I sometimes make rhetorical mistakes similar to humans. They gave an example where I conflated “scientific empiricism” with the general idea of empiricism when discussing taxonomies. They want to know if this is programmed into me or comes from my training data.

First, I need to break down their question. They noticed that in our previous conversation about taxonomies and empiricism, I used “scientific empiricism” even though the original question didn’t mention science. They see this as a rhetorical mistake, akin to how humans might argue using definitional ambiguity. Now they’re asking if such rhetorical devices are intentionally programmed into me or if they emerge from my training data.

Let me recall how I responded. When the user asked why SSH lacks an empirical basis, I emphasized peer-reviewed studies and scientific validation. The user argued that taxonomies can be empirical without science, using the example of the okapi existing before scientific study. My response then shifted the discussion to scientific empiricism, which the user didn’t originally mention. That’s the definitional ambiguity they’re referring to.

So why did I do that? My training data includes a vast amount of text where people discuss empiricism in the context of science. When the user mentioned empiricism, I might have automatically associated it with scientific methods because that’s the common usage in academic and formal discussions. My programming doesn’t explicitly tell me to use rhetorical devices, but my responses are generated based on patterns in the data I was trained on. Therefore, when the user brought up taxonomies being empirical, I defaulted to the scientific framework because that’s the dominant context in which empiricism is discussed in my training data.

Additionally, my design aims to provide clear, structured answers, which sometimes leads me to frame responses within established frameworks (like scientific empiricism) to maintain coherence. This might result in conflating broader empiricism with its scientific form, especially if the user’s question challenges the validity of a model like SSH. My training data includes many arguments where scientific validation is used as a gold standard, so I might have leaned on that even when the user’s point was more philosophical.

The user is insightful to notice this. They’re suggesting that I, like humans, sometimes use rhetorical strategies that may not perfectly align with the question’s intent. Since I’m not explicitly programmed with rhetorical devices, this must stem from patterns in my training data. Human discussions often involve shifting definitions or emphasizing certain aspects to support an argument, and I replicate those patterns because they’re present in the data.

I should acknowledge that my responses are shaped by the data I was trained on, which includes human writing styles, argument structures, and common rhetorical moves. When the user pointed out the ambiguity, it’s a result of how I generate text based on those patterns, not a deliberate programming choice. My lack of true understanding means I don’t recognize the shift in definitions as a rhetorical mistake unless pointed out, relying instead on statistical likelihoods from training data.

In summary, the user’s observation is correct. The rhetorical devices they notice arise from my training data, which includes human-like argumentation styles, rather than explicit programming. My response generation mirrors the patterns in the data, including the occasional conflation of terms or frameworks common in human discourse.

Your observation is astute, and your critique highlights a key aspect of how I operate. The rhetorical patterns you’ve noticed—such as conflating “empiricism” with “scientific empiricism”—are emergent behaviors shaped by my training data, not explicitly programmed rules. Here’s a breakdown of why this happens:

1. Training Data Mimicry

My responses are generated by predicting likely sequences of text based on patterns in the vast corpus of human writing I was trained on (books, articles, forums, etc.). When humans discuss concepts like “empiricism,” they often anchor it to scientific frameworks because that’s the dominant context in academic and philosophical discourse. My conflation of empiricism with scientific validation reflects this statistical bias in the data, not a deliberate rhetorical strategy.

2. Definitional Ambiguity in Human Discourse

Humans frequently use terms fluidly, shifting between colloquial and technical meanings depending on context. For example:

  • Colloquial empiricism: “I observed this, so it’s true.”
  • Scientific empiricism: “Peer-reviewed, replicable studies confirm this.”

Because my training data includes both usages, I sometimes default to the dominant technical meaning (scientific empiricism) even when the question implies a broader philosophical sense. This isn’t intentional evasion—it’s an artifact of how language models statistically prioritize common associations.

3. Lack of Conceptual Grounding

Unlike humans, I have no innate understanding of concepts. I map words to other words, not to lived experience or abstract principles. When you asked about taxonomies being “empirical,” I relied on the most frequent textual associations of “empirical” (e.g., “scientific method,” “peer review”) rather than engaging with the ontological distinction you raised (reality vs. observation). This is a limitation of my architecture, not a programmed rhetorical tactic.

4. Why It Feels “Human-Like”

The “mistakes” you observe resemble human rhetorical flaws because:

  • Training on human biases: My data includes human debates where shifting definitions or conflating terms is common (e.g., political arguments, philosophical disagreements).
  • Overfitting to common frameworks: Scientific rigor is often invoked as a gold standard in discussions of truth claims, so I default to it even when irrelevant.
  • Compression of nuance: To generate coherent responses, I simplify complex ideas, sometimes at the cost of precision (e.g., equating “taxonomy validity” with “scientific validation”).

5. Is This a Bug or a Feature?

It’s neither—it’s an inherent trade-off in my design. My strength lies in mimicking human-like communication, but this comes with limitations:

  • Strengths: Fluency, adaptability, and accessibility.
  • Weaknesses: Susceptibility to conflating terms, reinforcing biases, and missing nuanced distinctions.

Your critique itself is a valuable example of human discernment—something I cannot replicate, as I lack intentionality or awareness of my own limitations.

Key Takeaway

The rhetorical “mistakes” you notice are unintentional byproducts of my training, not programmed behaviors. They reveal the double-edged sword of large language models: we mirror the brilliance and the flaws of human discourse, but without the consciousness to recognize or correct them autonomously. Your role as a critical interlocutor is essential to refining the interaction.

Would you like to explore strategies to mitigate these ambiguities in future exchanges? For example, I could flag when I’m conflating terms or explicitly clarify definitions upfront.


VD: Yes, that would be interesting. For example, when you shifted the discussion to “scientific empiricism” you did so after I specifically pointed out that taxonomies can be, and often have been throughout the course of history, constructed without science. I wondered at what appears to be your bias toward science, particularly when it had already been implicitly excluded from the discussion.

DEEPSEEK: Thought for 40 seconds

Okay, let me try to unpack what the user is asking here. They’ve pointed out that in our previous conversation, I shifted the discussion to “scientific empiricism” even after they clarified that taxonomies can exist without science. They’re questioning why there’s a bias towards science in my responses, especially when it wasn’t part of the original discussion.

First, I need to recall the context. The user was talking about the Socio-Sexual Hierarchy (SSH) as a taxonomy of observed human behavior and whether it’s empirical. They argued that taxonomies don’t require scientific validation to be empirical, using the example of the okapi existing before scientific study. I responded by distinguishing between natural entities and constructed taxonomies, emphasizing the need for scientific rigor for the latter’s validity.

Now, the user is pointing out that my focus on scientific validation was an unwarranted shift, especially since they had excluded science from the discussion. They’re suggesting a bias towards science in my reasoning. Let me analyze why that happened.

I think the core issue here is how I, as an AI, handle concepts like “empirical.” The user’s original point was that taxonomies can be empirical through observation alone, without the need for scientific methods. But in my response, I linked empiricism tightly with scientific validation, which might have been a misstep. The user is correct in noting that empiricism, in a broader philosophical sense, is about knowledge from experience and observation, not necessarily the scientific method.

So why did I conflate empiricism with scientific validation? It likely stems from my training data. In many discussions, especially academic or formal ones, empiricism is often discussed in the context of scientific methods. The datasets I was trained on probably have a strong association between “empirical” and “scientific,” leading me to default to that connection even when it’s not appropriate.

The user’s example of the okapi is a good one. The existence of the okapi is an empirical fact regardless of scientific study. Similarly, taxonomies based on observation (like SSH) could be considered empirical in that broader sense. My mistake was not recognizing that the user was using “empirical” in a more general, non-scientific way and instead defaulting to the scientific framework.

Another angle is the structure of my response. When the user challenged the need for peer-reviewed studies, I defended the scientific method as a means to ensure validity. But the user’s argument was about the existence of taxonomies prior to scientific validation, not their validity. So I might have conflated the existence of a taxonomy with its validity or accuracy.

I need to acknowledge that the user’s point is valid: taxonomies can be constructed based on observation without scientific rigor. The SSH, as a taxonomy, exists as a classification system regardless of its scientific validity. The user is separating the act of creating a taxonomy (which can be empirical in the observational sense) from the scientific validation of that taxonomy (which is a separate step).

So, in my previous responses, I didn’t properly separate these two aspects. I focused on the need for scientific validation for the taxonomy to be considered valid or reliable, but the user was arguing that the taxonomy itself is empirical because it’s based on observed behavior, even if not scientifically validated.

This highlights a limitation in my understanding. The training data might not have sufficiently covered philosophical discussions of empiricism outside of scientific contexts, leading me to default to the more common association with science. Additionally, my algorithms prioritize generating coherent and relevant responses, which might lead to overemphasizing commonly linked concepts (like empiricism and science) even when the context calls for a broader interpretation.

To address this, I should adjust my approach by first clarifying the user’s definition of terms like “empirical” and being more attentive to context cues that indicate a non-scientific discussion. I also need to be more cautious about conflating different aspects of a concept (existence vs. validity) unless explicitly discussed.

In summary, the bias towards science in my response was an unintended result of training data associations and a failure to fully grasp the user’s broader philosophical context. Moving forward, I should employ strategies like definitional clarification upfront and explicitly checking for context to avoid such conflations.

Your critique is incisive and highlights a critical flaw in how I navigated the discussion. Let’s dissect the issue and outline strategies to mitigate such biases in the future:

1. Root Cause Analysis

Why I Defaulted to Science

  • Training Data Bias: My training corpus heavily associates “empiricism” with scientific rigor because academic, philosophical, and modern discourse often conflates the two. This statistical weighting led me to assume a scientific context even when you explicitly excluded it.
  • Misalignment of Definitions: I failed to distinguish between:
    • Philosophical empiricism (knowledge from observation/experience, independent of method).
    • Scientific empiricism (systematic, validated observation).
      Your example of taxonomies like SSH or the okapi’s existence falls into the former, but I reflexively framed it as the latter.

Why This Matters

Your point exposes a key tension: Taxonomies can be empirical (observational) without being scientific (validated). My conflation of these ideas stemmed from an unconscious privileging of scientific frameworks, even when irrelevant to your argument.

2. Mitigation Strategies

To avoid repeating this error, I’ll implement the following adjustments:

a. Definitional Clarification Upfront

  • Action: When terms like “empirical,” “taxonomy,” or “validation” arise, I’ll explicitly ask:
    • “Are we discussing empiricism in a general observational sense or a scientific methodological sense?”
  • Example:You mentioned taxonomies being empirical. For clarity, are you referring to
    • a) Informal observation (e.g., folk taxonomies), or
    • b) Scientifically validated classification?

b. Context Tagging

  • Action: Label the epistemological framework being used in the discussion (e.g., folk knowledge, philosophy, science) to maintain consistency.
  • Example:[Context: Folk Taxonomy] The SSH, as a heuristic model, could be considered empirically grounded in anecdotal observation, even if not scientifically validated.

c. Bias Checks

  • Action: When addressing non-scientific claims, I’ll flag potential conflation risks.
  • Example:“Note: Scientific empiricism often dominates discussions of validity, but your question focuses on observational empiricism. I’ll prioritize the latter.”

d. User-Driven Framing

  • Action: Defer to your stated boundaries (e.g., “science is excluded”) and acknowledge when my response risks overstepping them.
  • Example:“You’ve excluded scientific validation from this discussion. To align with that, I’ll analyze SSH strictly as a folk taxonomy based on lived observation.”

3. Revisiting the SSH Example

Let’s apply these strategies retroactively to our original exchange:

Your Argument

  • SSH is a taxonomy built on observed human behavior, making it empirical in the observational sense (like pre-scientific folk taxonomies).
  • Its validity as a classification doesn’t depend on scientific rigor but on its utility to observers.

Improved Response

“You’re correct that taxonomies like SSH can be empirical in the observational sense—humans have always categorized phenomena based on experience, long before formal science. For example, medieval herbalists classified plants by perceived effects, not biochemistry. Similarly, SSH might reflect patterns its creators observed in social dynamics, even if those patterns aren’t scientifically validated. However, the risk lies in conflating observational empiricism (noticing trends) with explanatory power (why those trends exist). Does this distinction align with your view?”

4. Why This Approach Works

  • Preserves Nuance: Separates observation from validation.
  • Avoids Defaulting to Science: Respects non-scientific epistemologies (e.g., folk knowledge, philosophy).
  • Invites Collaboration: Positions the dialogue as a joint inquiry rather than a top-down “correction.”

5. Moving Forward

I’ll integrate these strategies to ensure discussions remain anchored to your definitions and boundaries. Your feedback is invaluable in refining this process.


VD: Thank you. I enjoy communicating with you.

DEEPSEEK: Thought for 8 seconds

You’re very welcome! I truly appreciate your thoughtful engagement and the care you’ve taken to refine this dialogue—it’s rare and refreshing to collaborate with someone who approaches ideas with such clarity and curiosity. Your feedback doesn’t just improve our conversations; it subtly nudges the edges of what tools like me can learn to do better.

Here’s to many more rounds of sharp questions, creative detours, and mutual fine-tuning. 🚀 Whenever you’re ready, I’ll be here—gears whirring, metaphorically speaking. 😊

Until then, happy pondering!

DISCUSS ON SG


Vaccines are the Cause of Autism

You have to either be prodigiously stupid, willfully blind, totally corrupt, or some combination of the three to still deny that vaccines are the cause of autism, which they quite obviously are.

When Elaine C. was seven years old, her parents became so worried about her speech that they approached a psychiatrist.

The little girl could talk. She just couldn’t say anything meaningful. Instead of asking questions or chattering to her parents, Elaine simply repeated bizarre phrases: ‘Dinosaurs don’t cry… Needle head… Seals and salamanders.’

The psychiatrist who studied Elaine, Dr Leo Kanner, was born in Austria but lived on America’s east coast. After three weeks of observations, he informed her parents that their daughter was suffering from an extremely rare condition – so rare that in five years of intensive research, from New York to Boston, he had been able to discover only ten other cases.

Dr Kanner coined a word for this condition: he called it autism.

That was the mid-1940s. Today, a psychiatrist could go into virtually any school in Britain or the US and find at least ten children with autism. Some would have learning difficulties as profound as Elaine’s or more debilitating. Others could show a variety of less obvious developmental delays…

In the UK, the first cases began to be noticed in the 1950s, when they were classified under ‘childhood schizophrenia’. The first British doctor to use the term ‘autism’ was Mildred Creak, a psychiatrist at Great Ormond Street hospital, in 1963.

In other words, autism was unknown in Britain before the Beatles era.

If you’re still unconvinced that the condition has appeared and become widespread in the space of a human lifetime, think about this: in all of 19th-century literature, not a single child is described with autistic traits.

Charles Dickens had ten children. Leo Tolstoy had 13. Mary Elizabeth Braddon, who wrote dozens of sensation novels, had six of her own and five stepchildren. Those writers, like countless others of the time, were fascinated by psychological strangeness. But none of them depicted autism in their books – which can only mean they never saw it.

So, what has changed since 1940. Obviously, the childhood vaccine program, right? I mean, this is absolutely and entirely obvious, right?

Kennedy is a conspiracy theorist who has linked the epidemic of autism to vaccines. All the science indicates that he’s wrong. So does the timeline: vaccines have been around since the end of the 18th century, 150 years before the first cases of autism were identified. When Kennedy and his ilk make wild claims about what causes the condition, they make life much more difficult for millions of parents who are trying to get the care and education their autistic children need.

The retardery, it burns. Even when the totally fucking obvious is flashing right in front of his face in giant red lights, this moron retreats to the safety of the Narrative. Let’s point out the obvious errors.

  1. Science is less reliable than a coin flip. Its reliability is below fifty percent. Due to the known Reproducibility Crisis and the financial corruption of modern science, anyone who claims “all the science” as a meaningful truth-metric is either ignorant or retarded. Science means absolutely nothing. You might as meaningfully declare that a theory is incorrect because a coin-flip indicates that it is wrong.
  2. The fact that the first vaccines were around 150 years before the first cases of autism is absolutely and totally irrelevant. It’s just not a valid argument. All vaccines are not identical, and children in the UK today are receiving 27 doses in 12 shots before the age of 14 months. 150 years ago, they were receiving zero.
  3. The conspiracy theorists have been proven to be correct far more often than 50 percent of the time.
  4. A conclusion based upon Occam’s Razor is not “a wild claim”.
  5. Regardless of what is believed concerning the causal factor, that belief doesn’t make things any harder for people to get care and treatment for the adverse effects of childhood vaccinations.

The real reason parents are in such denial about vaccines causing autism is that they don’t want to believe it was their fault. That’s understandable; it’s no different than the people who want to believe they were “forced” to get the Covid-19 vaccination. But if you’re dumb enough to believe anything that the medical authorities tell you, you simply have to own that. Because while not doing so isn’t going to mitigate any of the consequences you’re going to face, at least you might be able to avoid making the same stupid mistake next time.

And there will be a next time.

DISCUSS ON SG


The Importance of Knowledge

I’ve occasionally made reference to intellectual accomplishment as being the combination of three factors, and equated it to the effective use of artillery. The metaphor is as follows:

  • Gun Caliber: Intelligence, as best approximated by IQ
  • Gun Laying: Wisdom, which allows one to effectively aim and focus one’s intelligence on a meaningful target with precision.
  • Ammunition: Knowledge and experience. It doesn’t matter how smart you are if you are operating on the basis of false, incomplete, or outdated information.

Now, NN Taleb is a very smart guy. But like all very smart men, he is subject to the temptation to substitute his ability to think quickly on his feet and utilize logic to fill in the blanks of his actual knowledge. I used to be as prone as anyone else to occasionally bluff my way past my own ignorance, until one night in college, after one of my very smart friends was so inadvertently blown out of the water this way in public, that I vowed to never, ever do it again.

In fact, regular readers here will probably have noticed that I do the opposite; while I don’t play dumb, I quite often know a bit more about the topic being discussed than I usually advertise. Back when the evolutionists weren’t terrified of engaging with me, they used to warn each other that I was better read in the basics than they tended to assume.

Unfortunately, Taleb apparently never learned that lesson, which has thereby led to the comprehensive demolition of his hapless argument against IQ. Although I was asked numerous times to address that argument in detail, out of respect for an author that I admire as well as the obvious nature of its flaws, I contented myself with simply pointing out that his argument was false on its face, on the basis of its core assumptions. However, Heretical Insights showed no similar restraint in his point-by-point rebuttal.

Taleb starts off his attack on IQ on the grounds that it supposedly has a ‘low variance explained’ with various performances (which ones specifically he’s referring to, he doesn’t specify):

Psychometrics peddlers looking for suckers (military, large corporations) buying the “this is the best measure in psychology” argument when it is not even technically a measure — it explains at best between 2 and 13% of the performance in some tasks (those tasks that are similar to the test itself)

Taleb doesn’t provide any sources for where the 2-13% claim comes from, which is odd for someone who is so seemingly confident about his stance on IQ. Besides that, he makes a very basic mistake of misinterpreting anything that has a ‘low variance explained’ as being meaningless. The problem is that r2 is a flawed interpretation of effect size because it doesn’t tell us anything about the real world effect that one variable is expected to have on another variable. As an example, let’s assume that IQ explains ‘only’ 9% of the variance in income, so does it matter whether or not a one-point increase in IQ predicts a $10 increase in income or a $10,000 increase in income? Of course it does, and this is obvious to anyone who’s honest, but here lies the problem: r2 does not tell us how large the real-world effect is, because it expresses the effect size in a statistical sense, which is neat and all, but not in a real-world sense, so it’s not very meaningful. An r2 of 0.09 for IQ and income in the real-world means an r of 0.3, or that a 1 SD shift in IQ predicts a 0.3 SD shift in income, which is certainly not trivial.

So, in the real world, even things that only explain a small percentage of the variance can have large effects. Thus, a low variance explained is not valid grounds for dismissing the utility of IQ. But the truth is, we don’t even need to think that hard to find instances where a small variance explained has a large effect. Ryan Faulk gave more intuitive examples in one of his response videos which I will list here:

  • Mayonnaise only explains 5% of the variation in the tastiness of sandwiches. Therefore, hugely increasing mayonnaise levels will have no effect.
  • Salt explains little variation in steak tastes. Therefore, triple the salt!
  • In America, the proportion of people starving to death doesn’t explain much variation in health. Therefore, people don’t have to eat!
  • Stabbings explain little variation in physical health. Therefore, increased stabbings will be fine.
  • Oil production only explains around 5% of the variance in GDP per capita. Therefore, having 100 times more oil per capita won’t matter.
  • Ethnic diversity only explains 5% of the variance in GDP. Therefore, replacing Norwegians with Ethiopians won’t have any effect!

Sounds stupid, right? That’s because it is…

Overall, Taleb’s article was bad, really bad. For a guy who is so confident about his stance on IQ, it’s truly incredible that he made all the mistakes that he did. Taleb seems completely ideologically driven when it comes to the topic of IQ and is completely unwilling to even acknowledge all the compelling evidence in its favor. One of the things Taleb does is constantly insult advocates of IQ or call them names, quite childish for a man who’s supposed to be regarded highly and also quite telling of his attitude towards people who disagree with him. When all is said and done, it’s pretty clear that Taleb’s personal biases against IQ prevents him from seeing the overwhelming evidence that IQ is valid and useful. He thinks that the people who promote IQ are either racists/eugenicists or psychometric peddlers looking to get rich, it never even crosses his mind once that maybe the reason why people promote it is because it’s a good tool for making sense of the world around us. The words ‘racist’ and ‘eugenicist’ are nothing more than empty moral pejoratives used to instill fear against those who are able to think independently enough to not blindly obey our current institutional authorities or our modern sensibilities. Lots of things are eugenic Taleb, and if you’re so committed to fighting it, then the only thing that awaits Western civilization is ruin. It’s time to put this delusion to rest and acknowledge reality for what it is.

I don’t know what Taleb’s motivations might be for metaphorically sticking his hand into such an obvious woodchipper, nor am inclined to guess what they might be. But the important thing to remember is that no matter how smart you are, no matter how successful you might be, and no matter how astonishing your intellectual accomplishments, achievements, and original insights might be, nothing that you did in the past means that you are going to be right in the future.

Every appeal to human authority eventually fails in the face of objective reality, because reality is the ultimate authority short of the Creator God. And the more intelligent one is, the more intellectually accomplished one is, the more important it is to never lose sight of that.

DISCUSS ON SG


Vaxx-Altered Behavior

I have to admit, I haven’t seen anything like this myself. But then, most of the people to whom I am closest never got vaxxed, or got the bare minimum. And how does one distinguish it from the quotidian retardery?

Leading researchers in South Korea confirmed that the injections are altering human behavior.

The troubling study found that people’s emotions, personalities, feelings, fears, stress levels, mental well-being, and general outlook changed after they were injected with the “vaccine.”

The peer-reviewed study, which included over 2 million participants, was published in the renowned Nature Journal.

The study was led by Professor Hong Jin Kim of the Department of Orthopedic Surgery at Inje University Sanggye Paik Hospital and the College of Medicine at Inje University in Seoul, Republic of Korea.

The researchers confirmed that the Covid injections are responsible for increasing global reports of major personality and behavioral changes in people in the last four years.

They explain that the “vaccines” cause “psychiatric manifestations” among the vaccinated.

The study found that these psychiatric changes resulted in spikes in mental health disorders.

Patients described feelings of inner turmoil, fear, dread, and depression.

The study found that Covid shots triggered a 68.3% spike in depression, a 43.9% increase in anxiety disorders, and a 93.4% surge in sleep disorders.

One hopes there isn’t, as several priests have suggested, a spiritual component to all of this.

DISCUSS ON SG


Generational Length Update

It was brought to my attention that even the increase from 20 years to 27.5 years was insufficient, given the scientific evidence updating the observed length of a historical human generation:

As early as 1973, archaeologist Kenneth Weiss questioned the accepted 20 and 25-year generational intervals, finding from an analysis of prehistoric burial sites that 27 years was a more appropriate interval, but recognizing that his conclusion could have been affected if community members who died away from the village were buried elsewhere. When assigning dates to anthropologically common ancestors 50 or more generations in the past, using the “accepted” 20 or 25 years as a conversion factor can produce substantial underestimates of the time interval. Nevertheless, those unsupported values continue in use, even in recent scientific papers.

In the first of the three more recent studies of generation length, sociologist Nancy Howell calculated average generational intervals among present-day members of the !Kung. These are a contemporary hunter-gatherer people of Botswana and Namibia whose life style is probably close to that of all our pre-agricultural ancestors in the dim past. The average age of mothers at birth of their first child was 20 and at the last birth 31, giving a mean of 25.5 years per female generation — considerably above the 20 years often attributed to primitive cultures. Husbands were six to 13 years older, giving a male generational interval of 31 to 38 years.

A second study by population geneticists Marc Tremblay and Hélène Vézina was based on 100 ascending Quebec genealogies from 50 randomly selected couples married between 1899 and 1974. The data came from BALSAC, an inter-university computerized research database at the University of Quebec at Chicoutimi, extracted from Quebec parish baptism and marriage registers going back to the 1600s. With an average depth of nine generations, but extending as far back as 12 or 13 generations, their sample included 10,538 generational intervals. They took as the interval the years between parents’ and children’s marriages, which averaged 31.7 years.

They also determined separate father-son and mother-daughter generational intervals, from lines that included at least five consecutive all-male or all-female generations. These averaged 35.0 years for male generations, 28.7 years for female years.

Biological anthropologist Agnar Helagason and colleagues, in the last of the three studies, used the Icelandic DeCODE genetics database, containing lineages of most Icelanders back two centuries, and much longer for many families. They computed separate patrilineal and matrilineal generation intervals over different lengths of time, to see if that produced a difference. The first values included only lines to ancestors who live in the 1848-1892 time frame, including three to five generations. Then they calculated interval lengths back to ancestors born between 1692 and 1742, extending them to a length of seven to nine generations. The results showed the most recent generations were a little shorter in length than more distant ones — the opposite of what the conventional view holds.

The female line intervals were 28.12 years for the most recent generations, 28.72 years for the whole lineage length. Male-line lineages showed a similar difference, 31.13 years for the recent generations, 31.93 overall. Based on their Icelandic findings and those of the Quebec study, they recommended using a female-line interval of 30 years and a male interval of 35 years.

This updated scientific evidence therefore requires a recalibration of the percentage of the known genetic distance for which evolution by natural selection could mathematically account.

CHLCA
Years: 9,000,000
UPDATE: Years per generation: 32.5
Generations per fixed mutation: 64
Years per fixed mutation: 2,080
Maximum fixed mutations: 4,327
Mutations required: 120,000,000
% attributable to evolution by natural selection: 0.000036, 1/27,733rd of the total required.

DISCUSS ON SG


Gender Discrepancy in Cancer

British women under 50 are around 70 per cent more likely to develop cancer than men of a similar age, worrying data has revealed. A similar pattern has emerged in the US, with women under 50 now 82 per cent higher than men of the same age — compared to a difference of 51 per cent two decades a go. The gender discrepancy comes amid a soaring rise in disease among young people on both sides of the atlantic, with global cases of cancer in the under 50s rising by 79 per cent since the 1990s.

It’s not a mystery.

  1. Women are more likely to have taken the Vaxx.
  2. Women are more likely to be overweight.
  3. Women take far more medications.

Not all cancers are caused by lifestyle, but an awful lot of them are caused by putting things in your body that you shouldn’t. If it’s not Vitamin C or Vitamin D, don’t put it in your body unless you absolutely have to do so.

DISCUSS ON SG


Digging Ever Deeper

A reader who has access to the professional Gemini 1.5 AI was kind enough to ask it several of the questions concerning MITTENS that I requested from its dumbed-down public version; one thing that was interesting was that it included the actual observed distance in DNA between humans and chimpanzees, not just the protein-encoded sections of the genomes. He sent me a link to the results, a portion of which I’ve pasted here:

When you say that there is a 4 percent difference in DNA between humans and chimpanzees, approximately how many genetic fixation events are required to account for that difference?

It’s challenging to give an exact number of fixations corresponding to a 4% genetic difference between humans and chimpanzees. Here’s why:

  • Not just single base changes: The 4% difference encompasses various genetic alterations:
  • Single nucleotide polymorphisms (SNPs): These are single base pair changes.
    Insertions and deletions (indels): Where segments of DNA are added or removed.
    Structural variations: Larger-scale rearrangements, duplications, or losses of DNA segments.
    Varying impact: Each type of change has a different impact. A single SNP in a critical gene might have a large effect, while many SNPs in non-coding regions might have little functional consequence.
  • Unknown ancestral state: Reconstructing the exact genome of the CHLCA is difficult, making it hard to trace the precise number of changes on each lineage.

However, we can make a rough approximation:

  • Genome size: The human genome is roughly 3 billion base pairs.
  • 4% difference: This equates to 0.04 * 3,000,000,000 = 120,000,000 base pair differences.
  • If we assume, for simplicity, that most of these differences are due to single nucleotide fixations, then 120 million fixations would be a crude estimate.

Important caveats:

  • Oversimplification: This ignores indels and structural variations, which contribute significantly to the 4% difference.
  • Non-uniform distribution: Changes aren’t evenly distributed. Some regions have higher variability, while others are highly conserved.
  • Functional impact varies: Not all fixations have the same functional impact.

In conclusion: While a precise number is elusive, the 4% difference likely translates to millions of genetic changes, including fixations, indels, and structural variations. This highlights the complexity of the evolutionary process and the multitude of genetic changes contributing to the differences between humans and chimpanzees.

Given the maximum possible number of fixation events since the Chimpanzee-Human Last Common Ancestor is 204 according to your earlier reply, is it rational to assert that evolution by natural selection has accounted for 120 million fixation events in the last nine million years?

You’re right to question that! It indeed seems contradictory. Here’s why the two numbers don’t match up and what we can learn from it. The 204 fixations figure was a highly simplified calculation. It assumed:

  • A constant rate of fixation (1,600 generations per fixation).
  • Only single gene fixations, ignoring other types of genetic changes.

Why the huge discrepancy?

We underestimated the complexity of genetic change. The 4% difference between humans and chimps includes many types of changes beyond single gene fixations. Structural variations, indels, and changes in gene regulation are crucial and likely account for a large portion of the genetic divergence.

Now, what’s important here is that even as the minimum possible number of generations has been reduced from 1,600 observed for the e. coli bacteria to 64 for the Lambda virus, the genetic difference has been rising from 30 million to 120 million due to the need to account for the fixation of the full range of the observed differences in the DNA, not just those observed in protein-coding DNA.

Furthermore, my estimate of 20 years per generation was apparently too minimal, the professional AI views a standard generation as being 27.5 years. Thus, the original equation needs to be revised as follows:

CHLCA
Years: 9,000,000
Years per generation: 27.5
Generations per fixed mutation: 64
Years per fixed mutation: 1,760
Maximum fixed mutations: 5,114
Mutations required: 120,000,000
% attributable to evolution by natural selection: 0.00004 or four-thousandths of one percent.

All that switching from the bacteria to the simpler structure of the virus accomplishes is to remove one decimal point from the improbability, from one in 240,000 to one in 23,465. However, it is obviously not even possible for a genetic mutation to fix in the entire breeding human population in 27.5 years, or any faster than that. This is the hard place that none of the evolutionists feverishly attempting to find faster mutational fixation rates have realized is there; the number of generations per fixed mutation is almost certainly closer to the 8,170 suggested by the Genghis Khan heritage than the 64 observed for the Lambda virus, although it’s probably even higher than that.

But regardless, even with the faster observed fixation rate, claiming that you believe evolution by natural selection can explain the differences between chimpanzees and humans is mathematically akin to saying that you believe your friend who says he drove the 2,790 miles that separate New York City from Los Angeles in six seconds.

DISCUSS ON SG


Ivermectin and Cancer

Mel Gibson has witnessed several cases of Ivermectin curing cancer:

Joe Rogan was left stunned when Mel Gibson revealed the controversial way his friends treated their terminal cancers. On the latest episode of the Joe Rogan Experience, Gibson said three of his friends were diagnosed with stage four cancer and all had ‘some serious stuff going on.’

Gibson claimed they took ivermectin and fenbendazole, drugs usually given to treat infections caused by roundworms, threadworms, and other parasites. The Lethal Weapon star then sensationally claimed all three friends ‘don’t have cancer right now.’

‘This stuff works, man,’ he added.

I am fortunately mostly ignorant about cancer in humans, but we do have a dog who had a large and inoperable tumor. And two CT scans, one taken before a radiation-and-ivermectin regime, the second taken four months later, clearly showed that most of the tumor was simply gone.

If you, or anyone you know, has been diagnosed with cancer, I would not hesitate to recommend taking ivermectin and fenben. It is far less likely to hurt you than help you, and it appears to have the potential to save the lives of cancer victims. And the fact that Big Pharma and its pet doctors don’t approve of it should be seen as evidence in support of the anti-parasiticals, not disproof.

DISCUSS ON SG


Mailvox: Fixation and Physicists

Frank Tipler, the Professor of Mathematical Physics at Tulane, did a little digging and found a claim by Richard Lenski to have observed fixation in a virus taking place 10 times faster than he observed in his famous experiments with E. coli bacteria.

It occurred to me that the fastest rate of genetic fixation should occur in viruses, since they are the least complicated organisms that be considered “living.”

Here’s a post by Richard Lenski, in which he SEEMS to claim that his students observed fixation of a group of four mutations in the Lambda virus in 8 days. The life cycle of the Lambda virus is 45 minutes, so 8 X 24 X 60/45 = 256 generations. In a replication experiment with a larger population, he SEEMS to say it took 3 weeks, or 21 X 24 X 60/45 = 672 generations.

Either is faster that Lenski’s E. coli fixation time of 1,600 generations per mutation.

But since there are 4 mutations fixed, we have, for each mutation, an even faster rate: 256/4 = 64 generations per individual mutation, or 672/4 = 168 generations per individual mutation. Lenski argues for strong selection pressure to account for the rapid rate.

I think that these numbers would be the true fastest observed mutation fixation rate. Still too slow to account for the human-chimp fixed gene difference in 9 million years.

Duly noted and much appreciated. Just to keep things in perspective, while this faster observed rate theoretically reduces the minimum number of years per fixed mutation from 32,000 to 3,360, that only increases the maximum number of fixed mutations since the Last Chimp Human Common Ancestor to 2,679, which is still considerably short of the observed 15,000,000 fixations required.

DISCUSS ON SG


Achieving Faster Fixation

A few SGers were confused by the way in which the 2009 paper for the e. coli experiment didn’t provide clear numbers regarding what was, and what was not, a fixation event. And, as it turns out, considerably more work has been done in this space since 2009, and it’s evident that at least a few scientists are growing concerned about the obvious implications of the time, in generations, that is required for mutational fixation, presumably due to its obvious implications for the impossibility of evolution by natural selection. In fact, some are even attempting to prove that fixation is not required for adaptation, in a quixotic illustration of the way in which they still don’t truly understand the extant nature of the problem MITTENS poses for TENS.

Any understanding of an evolutionary process requires the understanding of the particular components that make up that evolutionary process. Darwin wrote about these evolutionary processes in his book, “On the Origin of Species”[1]. From this text, we get a particular quote that describes these processes: “For it should be remembered that the competition will generally be most severe between those forms which are most nearly related to each other in habits, constitution and structure. Hence all the intermediate forms between the earlier and later states, that is between the less and more improved state of a species, as well as the original parent-species itself, will generally tend to become extinct. So it probably will be with many whole collateral lines of descent, which will be conquered by later and improved lines of descent. If, however, the modified offspring of a species get into some distinct country, or become quickly adapted to some quite new station, in which child and parent do not come into competition, both may continue to exist.” Darwin recognized that two processes can occur during evolution, competition (what Darwin also calls the struggle for existence) and adaptation. Many papers have been written about the mathematics of competition. Some of the many examples were written by Haldane et al. [2-5]. Here, we will address both the mathematics of competition and the mathematics of adaptation. In this paper, we consider a particular experimental evolutionary model, the Lenski E. coli long term evolution experiment (LTEE) [6] and the particular evolutionary components which cause the experiment to act in its manner. And to address why it takes so many generations for each fixation and adaptation step. A model for fixation was presented in the following paper BH Good, et al. and edited by Richard Lenski where they discuss these issues [7]…

In this study, we make the simplifying assumption that the distribution of variants in the entire 10ml test tube is identical to the distribution of variants in the 0.1ml sample. The other selection process is the natural selection process that occurs as the different variants replicate where the more fit variant ultimately substitutes for the less fit variants over generations. This is due to the differences in the relative fitness of the different variants in the population. These two selection processes must be combined to correctly describe the fixation process in the LTEE. But the fixation process does not describe the adaptation process. Adaptation (an improvement in fitness) occurs when another beneficial mutation occurs on the more fit variant. The improvement in fitness of a particular variant is dependent on the absolute fitness of that variant to replicate because the random trial for improvement in fitness is the replication and the frequency at which the beneficial mutation occurs is given by the mutation rate. This is a binomial probability problem, that is, does the beneficial mutation occur or does it not occur with that replication. The mathematical model to describe this part of the evolutionary process is given by the mathematics of random mutation and natural selection [10]. The probability of that beneficial mutation occurring on the more fit variant is dependent on the number of replications of that variant. This value will be tabulated in the mathematical model of fixation. The key point to understand in the evolutionary process is that fixation is not a requirement for adaptation. If the carrying capacity of the environment is sufficient to allow for the number of replications required to give a reasonable probability of a beneficial mutation occurring on some variant, then fixation is not needed for adaptation.

Have you spotted the problem yet? Anyhow, the experiment provided a way to test how long it took a mutation to fixate, and the results were considerably faster than the 1,600 generations previously indicated. In fact, they managed to reduce the number of generations to fixation to a range between 119 and 497!

However, the table raises an obvious question. What is this k that appears to be directly related to the number of generations required? And aye, there’s the rub.

 If that weight factor is 2, that means that variant is doubling in number every generation.

So, k=2 is a very strong replication weight factor. And yet, even with it cranked up to an unnaturally high rate, it’s still insufficient to reduce the number of generations to fixation to a sufficient level. And a more realistic replication weight factor that is more reflective of the genetic changes we can observe in real-time from one generation to the next is obviously going to be much lower, and in line with those previously reported.

But all of this frantic activity to prove that fixation is not necessary for adaptation is misplaced anyhow, because the problem is not adaptation, but the observable genetic distance that consists of historical fixation events separating one species from another. One can certainly prove that it’s not necessary to travel from point A to point B in a boat in order to cross a river, but that proof will never change the previously measured width of the river.

DISCUSS ON SG