Convergence kills

From Corporate Cancer, published in 2019:

The bright future of well-funded diversity departments and their growing cost to corporate budgets can be anticipated by looking at what some of the most converged corporations in the United States are doing. In 2015, Intel announced a $300 million commitment to diversity, pledging to spend $60 million per year in order to establish, by 2020, a $300 million fund to improve the diversity of the company’s work force. This expensive program was supplemented by Intel Capital’s Diversity Initiative, which, at $125 million, is “the largest venture capital resource ever created to focus on underrepresented entrepreneurs.”

By that measure, Intel has entered Stage Five convergence, which means it is now incapable of fulfilling its primary purpose. So this news will come as little surprise to those who understand the concept.

Intel contemplates outsourcing advanced production

Secretive labs and tightly guarded clean rooms in Hillsboro have long represented the leading edge of semiconductor technology. That’s where Intel crafted generations of new microprocessors, chips that led the industry for decades as engineers working at atomic dimensions invented new ways of packing more capabilities into a minute space. Those discoveries powered years of progressively faster, cheaper and more advanced computers.

And it’s there, in Hillsboro, that Intel began making these new chips at research factories tethered to its labs. Intel would then send its meticulously developed manufacturing technique to its other factories around the world where it was replicated precisely, a well-established practice called “copy exactly.”

The model led Intel to become Oregon’s largest corporate employer and one of the state’s major economic engines, convening researchers from all over the world to engineer new chips as the company spent billions of dollars on equipment to manufacture their microscopic marvels.

Now, Intel is laying the groundwork to toss the old model out the window. It is openly flirting with the notion of moving leading-edge production from Oregon to Asia and hiring one of its top rivals to make Intel’s most advanced chips.

They’re not contemplating outsourcing because they’re seeking better profit margins. They’re contemplating it, and they’re going to do it, because their repeated efforts to make the leap to the next level in chip manufacturing have consistently failed. 

But they’ve got diversity now, which is nice.


An interview with the Original Cyberpunk

No, not me. It’s an interview with Bruce Bethke, the award-winning author of Cyberpunk and Headcrash.

BB: Most people misunderstand how supercomputers work and what supercomputers really do. We hit peak CPU speed about 15 years ago. More processing speed equals greater power consumption equals formidable heat dissipation problems, so unless there’s some kind of radical breakthrough in processor technology—quantum processing has been coming “next year” for as long as I’ve been in the industry; I’m not holding my breath—the way we increase computer power now is by building ever more massively parallel processor architectures.

The result is that the majority of the work being done on supercomputer systems now is just plain old computational fluid dynamics. Admittedly, we’re talking here about crunching through data sets measured in petabytes or exabytes, but deep down, it’s still just engineering. You may think it’s a dead language, but most of these programs are written in Fortran. While Fortran 2018 ain’t your grandaddy’s Fortran, it’s still Fortran.
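The production codes Bethke describes are written in Fortran, but the shape of the work, a stencil swept across a grid time step after time step, is easy to sketch. Below is a toy Python illustration (my own, not from the interview) of the kind of finite-difference kernel that "plain old computational fluid dynamics" reduces to: a 1-D heat-diffusion stencil, where the real codes sweep 3-D grids measured in petabytes.

```python
# A minimal sketch of the stencil pattern that dominates supercomputer
# workloads: explicit finite-difference heat diffusion on a 1-D rod.
# Real codes do this in Fortran over enormous 3-D grids; this toy
# version uses plain Python lists.

def diffuse(u, alpha=0.1, steps=100):
    """Advance a 1-D temperature field by `steps` explicit Euler steps."""
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            # discrete Laplacian: each cell is pulled toward its neighbors' mean
            new[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return u

# A hot spike in the middle of a cold rod spreads out and flattens.
field = [0.0] * 21
field[10] = 100.0
result = diffuse(field, alpha=0.1, steps=200)
```

On a supercomputer the grid is partitioned across thousands of nodes and each applies the same update to its slab, which is why raw parallelism rather than clock speed is what scales this workload.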

There is interesting work being done in artificial intelligence and machine learning on supercomputers now, but it’s more in line with pattern recognition and data mining. For now, most AI work doesn’t need the kind of brute force a modern supercomputer brings to the table.

Ergo, for me, the most frightening possibilities are those that involve the misuse by unscrupulous politicians or corporations of the kinds of insights and inferences that can be drawn from such extensive data mining. The things that are being done right now, and that will be coming online in the next few years, should scare the Hell out of any civil libertarian.

AIs on their own seem to be best at finding flaws in their developers’ assumptions. I’ve seen AIs tasked with solving problems come up with hilariously unworkable solutions, because their developers made assumptions based on physical realities that did not apply in the virtual world in which the AI worked.

CM: Could you elaborate on your comments about data mining?

BB: Sure. What we’re talking about here is a field generally called “big data”. It’s the science of extracting presumably meaningful information from the enormous amount of data that’s being collected—well, everywhere, all the time. “Big data” tries to take information from disparate sources—structured and unstructured databases, credit bureaus, utility records, “the cloud”, pretty much everything—then mashes it together, finds coherences and correlations, and then tries to turn it into meaningful and actionable intelligence—for who? To do what with it? Those are the questions.

For just a small example: do you really want an AI bot to report to your medical clinic—or worse, to make medical treatment decisions—based on your credit card and cell phone dutifully reporting exactly when and for how long you were in the pub and exactly what you ate and drank? Or how about having it phone the police if you pay for a few pints and then get into the driver’s seat of your car?

That’s coming. As a fellow I met from a government agency, whose name or even acronym I am forbidden to say out loud, put it: “Go ahead and imagine that you have privacy, if it makes you feel better.”
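The cross-source inference Bethke warns about can be caricatured in a few lines. This is a hedged toy sketch, with every name, field, and threshold invented for illustration: join two hypothetical data feeds on a shared key and draw the crude pub-then-driving style of conclusion.

```python
# Hypothetical sketch of the "mash disparate sources together" step:
# two toy data feeds joined on a person key, yielding a crude inference.
# All names, fields, and thresholds are invented for illustration.

card_charges = {"alice": [("pub", 38.50), ("grocery", 61.20)]}
phone_pings  = {"alice": [("pub", 115)]}   # (location, minutes dwelled)

def correlate(person):
    """Flag a person when card and phone data agree on a long pub visit."""
    spent_at_pub = any(loc == "pub" for loc, _ in card_charges.get(person, []))
    long_pub_dwell = any(loc == "pub" and mins > 60
                         for loc, mins in phone_pings.get(person, []))
    return spent_at_pub and long_pub_dwell

flagged = [p for p in card_charges if correlate(p)]
```

The point of the sketch is how little machinery the inference requires once the feeds exist; the hard part of real "big data" is the scale and the entity resolution, not the logic.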

Read the whole thing at Mythaxis Review. There is also a nice bit about me there, which may be of interest to you, if for no other reason than it almost certainly annoys Bruce to always have to answer questions about me given that he is without question the much better writer and novelist. But we all have our crosses to bear….


Forget the ticket

Apparently it’s the earworm that you have to take if you want to sell your soul. AC is, unsurprisingly enough, on top of the recent Joe Rogan antics.

One thing talked about in surveillance is the tendency of operators to fidget with their ears, due to discomfort with, and the apparent intermittent unreliability of, the deeply seated hidden earpieces. I assume they can move slightly while deep in the ear, making the sound diminish or change, and squishing the outside of the ear can jiggle them back and restore higher sound levels. 4chan is all abuzz because, during the Alex Jones interview, Joe looked bothered, reached under his headphone, and began to mess with his ear, and suddenly a female voice could be heard saying, “Relax, we’re here.” People think it was Joe’s CIA-Cabal earpiece, which tells him what to say, malfunctioning as somebody in the control room messed with the feeds and accidentally put it through the main feed. I would have rated this a 51% likelihood of being interesting. But then 4chan was immediately hit with post after post talking about schizos and meds. That is not something the casually curious normal poster would post, but it is what you will see on any post talking about surveillance, gangstalking, or any other topic the powers that be do not want discussed.

And to precisely no one’s surprise, Spotify disappeared the Alex Jones interview. In Rogan’s defense, it’s entirely possible that he didn’t actually realize he was selling anything when he signed his big money deal. He’s legitimately dumb enough to believe that his numbers actually justified the price. It will be interesting to see how he reacts when he finds out that they think they own him, as I tend to doubt he’ll react in the “definitely not meth” path blazed by Jordan Peterson.


You already policed my speech, Jack

Twitter’s Jack Dorsey has the gall to try to hide behind free speech in an attempt to prevent Congress from removing Twitter’s ability to engage in the publisher/platform dance:

Section 230 is the Internet’s most important law for free speech and safety. Weakening Section 230 protections will remove critical speech from the Internet.

Twitter’s purpose is to serve the public conversation. People from around the world come together on Twitter in an open and free exchange of ideas. We want to make sure conversations on Twitter are healthy and that people feel safe to express their points of view. We do our work recognizing that free speech and safety are interconnected and can sometimes be at odds. We must ensure that all voices can be heard, and we continue to make improvements to our service so that everyone feels safe participating in the public conversation—whether they are speaking or simply listening. The protections offered by Section 230 help us achieve this important objective.

As we consider developing new legislative frameworks, or committing to self-regulation models for content moderation, we should remember that Section 230 has enabled new companies—small ones seeded with an idea—to build and compete with established companies globally. Eroding the foundation of Section 230 could collapse how we communicate on the Internet, leaving only a small number of giant and well-funded technology companies.

We should also be mindful that undermining Section 230 will result in far more removal of online speech and impose severe limitations on our collective ability to address harmful content and protect people online. I do not think anyone in this room or the American people want less free speech or more abuse and harassment online. Instead, what I hear from people is that they want to be able to trust the services they are using.

Twitter doesn’t believe in free speech, Twitter believes in actively and aggressively policing speech. The God-Emperor is right when he calls for the repeal of Section 230. 

I was banned by Twitter, without cause and without any reason or justification given, years ago. So, I don’t believe a single word that is coming out of Dorsey’s mouth. The fact that he is defending Section 230 is sufficient reason to eliminate it.

 


The DOJ comes for Google

It’s about time. The heavy-breathers who have to keep actively reminding themselves not to do evil have had it coming for years. Let’s hope that the DOJ isn’t content with a Microsoft-style slap on the wrist and drops the full AT&T breakup on them:

The Justice Department filed an antitrust lawsuit Tuesday alleging that Google engaged in anticompetitive conduct to preserve monopolies in search and search advertising that form the cornerstones of its vast conglomerate.

The long-anticipated case, filed in a Washington, D.C., federal court, marks the most aggressive U.S. legal challenge to a company’s dominance in the tech sector in more than two decades, with the potential to shake up Silicon Valley and beyond.

At this point, I think it’s safe to say that the days of the platform/publisher dance are numbered. And it’s going to be hilarious to see Google’s horndog lawyers trying to make the very sort of free speech arguments that their Trust & Safety teams consistently ignore.


“This should trouble you immensely”

Clay Travis calls out the troubling behavior of the Big Tech media cabal:

Democrats impeached the president for his call with Ukraine’s president asking for Ukraine to look into this issue. Now that the Hunter Biden emails have surfaced, it appears the president was 100% correct. Did Joe Biden have a secret meeting with Ukraine officials, cover it up, and then lie about it?

That’s certainly what Hunter Biden’s email would suggest.

Now, again, you may not care. Or may not think a story like this should impact your presidential vote. But for a technology company to unilaterally and arbitrarily suspend all discussion of this issue?

That should be terrifying to anyone. Whether you’re a Democrat, Republican or an independent, this should trouble you immensely.

There absolutely, positively have to be content neutral rules in place for major tech companies, which are acting as default monopolies when it comes to online news distribution in our country. If those tech companies decide to favor one political party’s side over the other, that’s not proper behavior and we need major investigations to uncover how and why this is occurring.

Not allowing a story like this to circulate artificially constrains the marketplace of ideas and keeps the American public from being exposed to all arguments and perspectives about an important election decision. When editors at Twitter and Facebook are artificially manipulating which stories you see — and favoring one political party in the process — it’s also no longer possible for the tech platforms to claim they are not exercising editorial decision making.

Get rid of Section 230. End the platform/publisher dance. This isn’t that hard. 


Remove Section 230 protection from Facebook

Because Facebook is actively editing content and isn’t even bothering to pretend it is merely a platform rather than the publisher it obviously is:

Donald Trump Jr. has accused Facebook of election interference for limiting the spread of the New York Post story which claims Joe Biden met with a Ukrainian businessman while he was Vice President, saying the story must first be fact-checked by its chosen third party before people will be allowed to share it more widely online.

The announcement came on Wednesday without any explanation from the social media giant and before Biden had even denied it. 

It thrusts into the spotlight again the exorbitant power Facebook has not only over the circulation of news but also over politics and the spread of information, and comes at a particularly tense moment given the Presidential election is in just three weeks.  It also raises the question of who Facebook’s fact-checkers are and what qualifies them to arbitrate the truth.  

Andy Stone, a policy communications director at Facebook, announced the decision on Twitter.

‘While I will intentionally not link to the New York Post, I want be clear that this story is eligible to be fact checked by Facebook’s third-party fact checking partners. In the meantime, we are reducing its distribution on our platform,’ he said. 

Perhaps House Trump should stop crying all the time about social media and simply a) remove the artificial and unfair legal protections from the Social Media Cartel and b) support the non-Cartel alternatives.

The New York Post article in question, which proves that Joe Biden lied about his corrupt son’s international business dealings:

Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad

Hunter Biden introduced his father, then-Vice President Joe Biden, to a top executive at a Ukrainian energy firm less than a year before the elder Biden pressured government officials in Ukraine into firing a prosecutor who was investigating the company, according to emails obtained by The Post.

The never-before-revealed meeting is mentioned in a message of appreciation that Vadym Pozharskyi, an adviser to the board of Burisma, allegedly sent Hunter Biden on April 17, 2015, about a year after Hunter joined the Burisma board at a reported salary of up to $50,000 a month.

“Dear Hunter, thank you for inviting me to DC and giving an opportunity to meet your father and spent [sic] some time together. It’s realty [sic] an honor and pleasure,” the email reads.

An earlier email from May 2014 also shows Pozharskyi, reportedly Burisma’s No. 3 exec, asking Hunter for “advice on how you could use your influence” on the company’s behalf.

The blockbuster correspondence — which flies in the face of Joe Biden’s claim that he’s “never spoken to my son about his overseas business dealings” — is contained in a massive trove of data recovered from a laptop computer.

UPDATE: The God-Emperor is on it.

So terrible that Facebook and Twitter took down the story of “Smoking Gun” emails related to Sleepy Joe Biden and his son, Hunter, in the @NYPost. It is only the beginning for them. There is nothing worse than a corrupt politician. REPEAL SECTION 230!!! 

– Donald Trump


The limits of simulation

In a rather clever confluence of Bostrom’s simulation theory and the Fermi Paradox, Anatoly Karlin hypothesizes that the reason there is no extraterrestrial life in our simulated universe is that it lies beyond the simulation’s limits:

In a classic paper from 2003, Nick Bostrom argued that at least one of the following propositions is very likely true: that posthuman civilizations don’t tend to run “ancestor-simulations”; that we are living in a simulation; or that we will go extinct before reaching a “posthuman” stage [58]. Let us denote these “basement simulators” as the Architect, the constructor of the Matrix world-simulation in the eponymous film. As Bostrom points out, it seems implausible, if not impossible, that there is a near uniform tendency to avoid running ancestor-simulations in the posthuman era.

There are unlikely to be serious hardware constraints on simulating human history up to the present day. Assuming the human brain can perform ~10^16 operations per second, this translates to ~10^26 operations per second to simulate today’s population of 7.7 billion humans. It would also require ~10^36 operations over the entirety of humanity’s ~100 billion lives to date [8]. As we shall soon see, even the latter can be theoretically accomplished with a nano-based computer on Earth running exclusively off its solar irradiance within about one second.
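The quoted orders of magnitude can be replayed directly. In the sketch below (my own back-of-envelope arithmetic, not Karlin's code), the ~30-year average lifespan needed to land on ~10^36 total operations is an assumption; the passage itself gives only the endpoints.

```python
import math

# Replaying the quoted order-of-magnitude figures. The average-lifespan
# value is my assumption; the other constants come from the passage.
brain_ops_per_sec = 1e16          # quoted per-brain estimate
population        = 7.7e9         # today's population
lives_to_date     = 1e11          # ~100 billion humans ever
avg_life_sec      = 30 * 3.15e7   # assumed ~30-year average life, in seconds

per_second = brain_ops_per_sec * population
total_ops  = brain_ops_per_sec * lives_to_date * avg_life_sec

per_second_mag = round(math.log10(per_second))   # ~26, i.e. ~10^26 ops/sec
total_mag      = round(math.log10(total_ops))    # ~36, i.e. ~10^36 ops total
```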

Sensory and tactical information is much less data heavy, and is trivial to simulate in comparison to neuronal processes. The same applies for the environment, which can be procedurally generated upon observation as in many video games. In Greg Egan’s Permutation City, a sci-fi exploration of simulations, they are designed to be computationally sparse and highly immersive. This makes intuitive sense. There is no need to model the complex thermodynamics of the Earth’s interior in their entirety, molecular and lower details need only be “rendered” on observation, and far away stars and galaxies shouldn’t require much more than a juiced up version of the Universe Sandbox video game sim.

Bostrom doesn’t consider the costs of simulating the history of the biosphere. I am not sure that this is justified, since our biological and neurological makeup is itself a result of billions of years of natural selection. Nor is it likely to be a trivial endeavour, even relative to simulating all of human history. Even today, there are about as many ant neurons on this planet as there are human neurons, which suggests that they place a broadly similar load on the system [9]. Consequently, rendering the biosphere may still require one or two more orders of magnitude of computing power than just all humans. Moreover, the human population – and total number of human neurons – was more than three orders of magnitude lower than today before the rise of agriculture, i.e. irrelevant next to the animal world for ~99.9998% of the biosphere’s history [10]. Simulating the biosphere’s evolution may have required as many as 10^43 operations [11].

I am not sure whether 10^36 or 10^43 operations is the more important number so far as generating a credible and consistent Earth history is concerned. However, we may consider this general range to be a hard minimal figure for the amount of “boring” computation the simulators are willing to commit in search of potentially interesting results.

Even simulating a biosphere history is eminently doable for an advanced civilization. A planet-scale computer based on already known nanotechnological designs and powered by a single-layer Matryoshka Brain that cocoons the Sun will generate 10^42 flops [60]. Assuming the Architect’s universe operates within the same set of physical laws, there is enough energy and enough mass to compute such an “Earth history” within 10 seconds – and this is assuming they don’t use more “exotic” computing technologies (e.g. based on plasma or quantum effects). Even simulating ten billion such Earth histories will “only” take ~3,000 years – a blink of an eye in cosmic terms. Incidentally, that also happens to be the number of Earth-sized planets orbiting in the habitable zones of Sun-like stars in the Milky Way [61].
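The timing claims in this paragraph are straightforward division. This sketch (mine, using only the quoted constants) replays them:

```python
# Replaying the quoted Matryoshka Brain timings by simple division.
matryoshka_flops = 1e42   # quoted single-layer Matryoshka Brain output
biosphere_ops    = 1e43   # quoted cost of one Earth-biosphere history

one_history_sec  = biosphere_ops / matryoshka_flops   # 10 seconds, as quoted
ten_billion_sec  = one_history_sec * 1e10             # ten billion histories
ten_billion_yrs  = ten_billion_sec / 3.15e7           # ~3,200 years, i.e. "~3,000"
```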

So far, so good – assuming that we’re more or less in the ballpark on orders of magnitude. But what if we’re not? Simulating the human brain may require as much as 10^25 flops, depending on the required granularity, or even as many as 10^27 flops if quantum effects are important [62, 63]. This is still quite doable for a nano-based Matryoshka Brain, though the simulation will approach the speed of our universe as soon as it has to simulate ~10,000 civilizations of 100 billion humans. However, doing even a single human history now requires 10^47 operations, or two days of continuous Matryoshka Brain computing, while doing a whole Earth biosphere history requires 10^54 operations (more than 30,000 years).

This will still be feasible or even trivial in certain circumstances even in our universe. Seth Lloyd calculates a theoretical upper bound of 5×10^50 flops for a 1 kg computer [64]. Converting the entirety of the Earth’s mass into such a computer would yield 3×10^75 flops. That said, should we find that one needs significantly more orders of magnitude than 10^16 flops to simulate a human brain, we may start to slowly devalue the probability that we are living in a simulation. Conversely, if we are to find clues that simulating a biosphere is much easier than simulating a human noosphere – for instance, if the difficulty of simulating brains increases non-linearly with respect to their numbers of neurons – we may instead have to conclude that it is more likely that we live in a simulation.
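The pessimistic-granularity numbers check out the same way. This sketch takes the quoted operation counts, the Matryoshka Brain output, and Lloyd's per-kilogram bound at face value; the Earth-mass constant is my own input.

```python
# Replaying the pessimistic-granularity scenario with the quoted constants.
matryoshka_flops   = 1e42
human_history_ops  = 1e47   # quoted cost at ~10^25 flops/brain granularity
biosphere_ops_hard = 1e54   # quoted biosphere cost at that granularity

days_for_history    = human_history_ops / matryoshka_flops / 86400
# ~1.2 days of continuous computing, roughly the quoted "two days"
years_for_biosphere = biosphere_ops_hard / matryoshka_flops / 3.15e7
# ~31,700 years, matching "more than 30,000 years"

lloyd_flops_per_kg = 5e50      # quoted Lloyd bound for a 1 kg computer
earth_mass_kg      = 5.97e24   # my assumed figure for Earth's mass
earth_computer     = lloyd_flops_per_kg * earth_mass_kg   # ~3*10^75 flops
```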


Marketing doesn’t hold a monopoly

On corporate stupidity. The engineers, both hardware and software, also exhibit a reliable form of stupidity that has been known to prove terminal. From IN SEARCH OF STUPIDITY, the best business book I have ever read and, excepting CORPORATE CANCER, which addresses an even more critical problem, possibly the most important.

SMS: Joel, what, in your opinion, is the single greatest development sin a software company can commit?

JS: Deciding to completely rewrite your product from scratch, on the theory that all your code is messy and bug-prone and is bloated and needs to be completely rethought and rebuilt from ground zero.

SMS: What’s wrong with that?

JS: Because it’s almost never true. It’s not like code rusts if it’s not used. The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it.

SMS: Well, why do programmers constantly go charging into management’s offices claiming the existing code base is junk and has to be replaced?

JS: My theory is that this happens because it’s harder to read code than to write it. A programmer will whine about a function that he thinks is messy. It’s supposed to be a simple function to display a window or something, but for some reason it takes up two pages and has all these ugly little hairs and stuff on it and nobody knows why. OK. I’ll tell you why. Those are bug fixes. One of them fixes that bug that Jill had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes a bug that occurs in low-memory conditions. Another one fixes some bug that occurred when the file is on a floppy disk and the user yanks out the diskette in the middle. That LoadLibrary call is sure ugly, but it makes the code work on old versions of Windows 95. When you throw that function away and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

SMS: Well, let’s assume some of your top programmers walked in the door and said, “We absolutely have to rewrite this thing from scratch, top to bottom.” What’s the right response?

JS: What I learned from Charles Ferguson’s great book, High St@kes, No Prisoners, is that you need to hire programmers who can understand the business goals. People who can answer questions like “What does it really cost the company if we rewrite?” “How many months will it delay shipping the product?” “Will we sell enough marginal copies to justify the lost time and market share?” If your programmers insist on a rewrite, they probably don’t understand the financials of the company, or the competitive situation. Explain this to them. Then get an honest estimate for the rewrite effort and insist on a financial spreadsheet showing a detailed cost/benefit analysis for the rewrite.

SMS: Yeah, great, but, believe it or not, programmers have been known to, uh, “shave the truth” when it comes to such matters.

JS: What you’re seeing is the famous programmer tactic: All features that I want take 1 hour, all features that I don’t want take 99 years. If you suspect you are being lied to, just drill down. Get a schedule with granularity measured in hours, not months. Insist that each task have an estimate that is 2 days or less. If it’s longer than that, you need to break it down into subtasks or the schedule can’t be realistic.
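Joel's drill-down rule is mechanical enough to express in code. Here is a minimal sketch (task names and hour estimates invented for illustration) that simply flags any estimate over the two-day ceiling:

```python
# Sketch of the drill-down rule: a schedule is only credible if every
# task estimate is two working days (16 hours) or less. The schedule
# entries below are hypothetical.

MAX_TASK_HOURS = 16  # two working days

def needs_breakdown(schedule):
    """Return the task names whose estimates exceed the two-day ceiling."""
    return [name for name, hours in schedule if hours > MAX_TASK_HOURS]

schedule = [
    ("port file I/O layer", 12),
    ("rewrite rendering engine", 400),   # "99 years" territory
    ("update installer", 6),
]
suspect = needs_breakdown(schedule)
```

Anything the function returns is where the estimate is hiding a lie, and is exactly where Joel says to keep drilling into subtasks.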

SMS: Are there any circumstances where a complete code rewrite is justified?

JS: Probably not. The most extreme circumstance I can think of would be if you are simultaneously moving to a new platform and changing the architecture of the code dramatically. Even in this case you are probably better off looking at the old code as you develop the new code.

SMS: Hmm. Let’s take a look at your theory and compare it to some real-world software meltdowns. For instance, what happened at Netscape?

JS: Way back in April 2000, I wrote on my website that Netscape made the single worst strategic mistake that any software company can make by deciding to rewrite their code from scratch. Lou Montulli, one of the five programming superstars who did the original version of Navigator, e-mailed me to say, “I agree completely; it’s one of the major reasons I resigned from Netscape.” This one decision cost Netscape three years. That’s three years they spent with their prize aircraft carrier in 200,000 pieces in dry dock. They couldn’t add new features, couldn’t respond to the competitive threats from IE, and had to sit on their hands while Microsoft completely ate their lunch.


Gab catches up

Gab now has their own servers:

Today is a tremendous milestone for the Gab community.

After over a year of work, Gab has finally migrated to our own in-house servers. We own the hardware, which means no one can ban us from using our own technology to host Gab. If you talk to anyone in the technology industry, they will tell you that this is no easy task. Most tech startups have the luxury of using third-party cloud hosting providers like Amazon AWS, Microsoft Azure, and others.

Gab does not have this luxury.

Over the past four years we have been banned from multiple cloud hosting providers and were told that if we didn’t like it we should “build our own.”

So, that’s exactly what we did.

Good for them. I’m not being ironic or sarcastic; this is exactly what independent platforms need to do across the West. That being said, both Infogalactic and SocialGalactic have been on their own servers from the start, and Unauthorized has been on its own much more powerful servers since February.

The enemy has Cloud supremacy, but all this does is force us to be stronger and more independent on the ground. And when they go after the payment processors, the banks, and even the entire SWIFT system, as they will, what they will discover is that they will only succeed in creating even more formidable competitors.

The thing they simply don’t seem to grasp is that we’re not their only enemies. The entire world is increasingly turning against them.