THE CULT OF BAYES' THEOREM

I'm no longer a skeptic, but still I can't resist the old skeptic urge to do a bit of debunking. After all, there are a lot of crackpots out there. There are people, for example, who believe that a superintelligent computer will arise in the next twenty years and then promptly either destroy humanity or cure death and set us free. There are people who believe that one of the best works of English literature is an unfinished Harry Potter fanfic by someone who can barely write a comprehensible English sentence. There are even people who believe the best thing you can do to help the poor and the starving is become a city stockbroker or Silicon Valley entrepreneur! And more often than not, the same people believe all these crazy things!

The striking thing about these people is that they are no ordinary kooks — some of them actually identify as skeptics themselves, and all of them claim to be committed rationalists. Many are even full-time evangelists for "rationality", and can justify in sound and impressive detail why all their beliefs are correct. And they're not just backed up by the laws of logic and mathematics, but also by some of the finest minds and fattest wallets in Silicon Valley. Who are these people? They are the members of the Elect group who have received into their minds and hearts the glorious truth of something called Bayes' Theorem.

BAYESIAN GRACE

Above: Bayes' Theorem in its natural environment.

The way it works is this. Suppose you are a charitable person and believe only 5% of people are Assholes (P(A) = 0.05). And suppose you believe that 1% of Assholes wear Bayes T-shirts (P(B|A) = 0.01). (These values are known as priors, which is ironic because Bayesians inevitably pull them out of their posteriors.) And then suppose you believe that only an Asshole would wear a Bayes T-shirt, meaning that the chance of observing any random person in a Bayes T-shirt is 0.05% (P(B) = 0.0005). Now, if you observe a random person wearing a Bayes T-shirt, you can plug all the above values into the equation to find out the probability P(A|B) that they're an Asshole! You might find the result surprising!
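For those who would rather check the arithmetic than take it on faith, here is a minimal sketch of the calculation in Python, using nothing but the invented priors above (run it and see why the result might surprise you):

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
# All three input values are the made-up priors from the example above.

p_a = 0.05          # P(A): prior probability that a random person is an Asshole
p_b_given_a = 0.01  # P(B|A): probability that an Asshole wears a Bayes T-shirt
p_b = 0.0005        # P(B): probability that a random person wears a Bayes T-shirt

# Posterior: the probability that a Bayes T-shirt wearer is an Asshole
p_a_given_b = p_b_given_a * p_a / p_b

print(f"P(Asshole | Bayes T-shirt) = {p_a_given_b:.2f}")
```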
Bayes' Theorem is a simple formula that relates the probabilities of two different events that are conditional upon each other. First proposed by the 18th-century Calvinist minister Thomas Bayes, the theorem languished in relative obscurity for nearly 200 years, an obscurity some would say was deserved. In its general form it follows trivially from the definitions of classical probability, which were formalised not long after Bayes' death by Pierre-Simon Laplace and others. From the time of Laplace until the 1960s, Bayes' Theorem barely merited a mention in statistics and probability textbooks.

The theorem owes its present-day notoriety to Cold-War-era research into statistical models of human behaviour [1], the same research movement that gave us the Prisoner's Dilemma, Mutually Assured Destruction and Fuck You, Buddy. Statisticians in the 1950s and 1960s, initially concentrated around the University of Chicago and the Harvard Business School, decided to interpret probability not as a measure of chance, but as a measure of the confidence an agent has in its subjective beliefs. Under this interpretation, Bayes' Theorem acquired great prescriptive power: it expressed how a perfectly rational agent should revise its beliefs upon obtaining new evidence. In other words, the researchers had discovered in Bayes' Theorem an elegant, one-line formula for how best to learn from experience — the formula for a perfect brain!
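In case "the formula for a perfect brain" sounds abstract, here is a minimal sketch of the belief-updating rule being described, with toy numbers of my own invention rather than anything the Bayesians themselves endorse:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Toy example: start 20% confident in some hypothesis, then twice observe
# a piece of evidence that is three times likelier if the hypothesis is true.
belief = 0.20
for _ in range(2):
    belief = update(belief, p_e_given_h=0.6, p_e_given_not_h=0.2)
    print(f"updated belief: {belief:.2f}")  # rises to 0.43, then 0.69
```

That single division is the whole of the "perfect brain"; everything else is a question of where the numbers come from.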

The researchers decided to reinterpret all statistics in this subjective, prescriptive and profoundly individualist sense, launching a revolution that set their field ablaze. The struggle that followed pitted "frequentists", who defended the old statistical ways, against "Bayesian" radicals — though since most of the Bayesians were ideological mates with Milton Friedman and John von Neumann, maybe "radical" isn't the right word. The Bayesians were excited by the potential applications of their new way of thinking in political, business and military spheres. Their opponents, meanwhile, kept pointing to Bayesianism's central difficulty: assigning a confidence value to a subjective belief is a whole lot more problematic than assigning a probability to the outcome of a coin toss.

The Bayesian revolution was not confined to the field of statistics: truly, it was a shot that was heard around the globe. Cognitive scientists seized on Bayes as a whole new way to introduce quantifiable crap into their models of human behaviour. Analytic philosophers expanded "Bayesianism" into a whole new area of empty pedantry. AI researchers used electronic Bayesian brains in a number of equally unimpressive "intelligent" systems — classifiers, learning systems, inference systems, rational agents. And right-wing economists saw in Bayesian agents — goal-directed, self-interested, anti-social creeps — a model of their ideal investor or consumer.

AMAZING BAYES

More pertinently to my interests, Bayesianism has also found a strong foothold in nerd culture. Bayes' Theorem is now the stuff of gurus and conventions, T-shirts and fridge magnets, filking and fanfic. There are people who strive to live by its teachings; it's not an exaggeration to say that a cult has arisen around it. How did this formula create such a popular sensation? Why do so many people identify so strongly with it?

Perhaps the answer lies in the beguiling power of its prescriptive interpretation. In one simple line, Bayes' Theorem tells you how a perfectly rational being should use its observations to learn and improve itself: it's instructive, aspirational and universal. Other mathematical and scientific laws are merely the truth, but Bayes' Theorem is also the way and the light.

I must also admit that the theorem has some wondrous properties, of the kind that can readily inspire devotion. It's a small and simple formula, but it regularly works minor miracles. Its power often surprises; it has a habit of producing counterintuitive but correct results; it won't be fooled by those annoying trick questions posed by smug psychologists; it seems smarter than you are. It's not hard to see why people who discover Bayes' Theorem, like people who discover secret UFO files, gnostic texts, or Prolog programming, can think they've opened up a new world of deeper, strange, alien truths. And from there, it's a short step to becoming an eager disciple of Bayesianism.

Bayesianism has a particular attraction for nerds, who see in Bayes' Theorem the calculating badass they always imagined themselves to be. Much like the protagonist of a bad sci-fi novel, the theorem decisively uses the available evidence to attain the best possible results every time. It's no surprise that it wins fans among the milieu that idolises Ayn Rand, Robert Heinlein, Frank Herbert, and Ender Wiggin.

True believers can attribute amazing powers to Bayes' Theorem, even in places it doesn't belong. Here, for example, is a committed Bayesian who thinks he has used the theorem to prove that the sentence "absence of evidence is not evidence of absence" is a "logical fallacy". I find this striking for two reasons. Firstly, because I had no idea Bayes' Theorem had anything to say about this matter, and secondly, because I'm pretty sure the sentence in question is not a "fallacy", at least in the sense that the majority of level-headed English-speakers would interpret it. [2] But some Bayesians seem to believe they can conclude a falsehood from their own ignorance.

Another over-enthusiastic Bayes fan is the historian and "New Atheist" loudmouth Richard Carrier, who in his book Proving History claims that any valid historiographic method should be reducible to Bayes' Theorem. In the general sense, this claim couldn't be less interesting: both Bayes and historians are concerned with getting at "truth" by processing "evidence", and pointing this out will enlighten no one. And in the specific sense, the claim couldn't be more stupid. The idea that historical evidence and theses could be reduced to probability values, and that plugging them into Bayes' Theorem would make historical research more accurate or reliable or rigorous, is not only the worst kind of technocrat fantasy, it's also completely unworkable.

In general practice, there's no way to come up with meaningful figures for the right-hand side of the Bayes equation, and so Bayesians inevitably end up choosing values that happen to justify their existing beliefs. The Reverend Bayes originally used his theorem to prove the existence of God, while in his next book, Carrier will apparently use the same theorem to disprove the existence of the historical Jesus. I myself have applied it to a thornier subject: the problem of uncovering academic frauds. In fact, by inputting the precise values for what I believe about academic frauds, their likelihood of spreading pseudoscientific bullshit, and the likelihood that pseudoscientific bullshit is precisely what Carrier is spreading, I have used Bayes' Theorem to calculate that Richard Carrier is almost certainly a fraud, to a confidence level of five sigma. (Calculations available upon request.)

LESS WRONG

The most visible group of Bayesians on the web, and my subject for the rest of this article, is the Singularity Institute of Berkeley, California. [3] This organisation is the Heaven's Gate of Bayesianism, a doomsday cult which preaches that a hostile AI will take over the world and destroy us all (in an event known as the singularity) unless we do something about it — like make regular donations to the Singularity Institute. The Institute is urgently working on our best hope of survival. So far, it has put a lot of windbags on websites.

The principal website associated with the Singularity Institute is lesswrong.com, which describes itself as "a community blog devoted to refining the art of human rationality". As far as I can tell, "refining the art of human rationality" involves using Bayes' Theorem to develop a series of Baloney-Detection-Kit-style heuristics you can slap over any argument. I'm still unsure why becoming Bayesian will help us against the singularity threat: maybe the superintelligent AI will then mistake us for one of its own? I've no doubt the answer is explained somewhere in the 100,000 "sequences" on lesswrong.com, but please don't ask me to comb them for it.

The "sequences" are a pompously-titled series of blog entries written by Lesswrong's central guru figure, the charismatic autodidact Eliezer Yudkowsky, in which the tenets of the cult are murkily explained. I must confess that I fail to see the appeal of this guy and his voluminous writings. I find the "sequences" largely impenetrable, thick with nerd references and homespun jargon, and written in a claggy, bombastic, inversion-heavy style that owes much to Tolkien's Return of the King and more still to Yoda. When I read stuff like "correct complexity is only possible when every step is pinned down overwhelmingly", or "They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably", I can't hit the back button quickly enough. The main purpose of this kind of writing is to mystify and to overawe. Like countless other gurus, Yudkowsky has the pose of someone trying to communicate his special insight, but the prose of someone trying to conceal his lack of it. I suspect this is the secret of much of his cult-leader charisma.

And I suspect the rest can be accounted for by the built-in charisma of the autodidact, which is part of Western cultural heritage. Here in Christendom, people are conditioned to admire the voice in the wilderness, the one who goes it alone, the self-made man, the pioneer. Popular scientific narratives are filled with tales of the "lone genius" who defies the doubters to make his great advances; these tales are mostly bullshit, but people in this post-Christian age desperately want to believe them. And Yudkowsky, a wannabe polymath with no formal education, whose ego is in inverse proportion to his achievements, has all the stylings of a self-made science Messiah. He's an earthly avatar for the kind of forsaken savants who think their IQ scores alone should entitle them to universal respect and admiration.

Yudkowsky isn't the only autodidact at lesswrong.com: his apostle Luke Muehlhauser is also proudly self-educated, boasting "I studied psychology in university but quickly found that I learn better and faster as an autodidact." I'll take his word on "faster", but I'll have to quibble about "better". The trouble with autodidacts is that they tend to suffer a severe loss of perspective. Never forced to confront ideas they don't want to learn, never forced to put what they've learned in a wider social context, they tend to construct a self-justifying and narcissistic body of knowledge, based on an idiosyncratic pick-and-mix of the world's philosophies. They become blinded to the incompleteness of their understanding, and prone to delusions of omniscience, writing off whole areas of inquiry as obviously pointless, declaring difficult and hotly-debated problems to have a simple and obvious answer. Yudkowsky and Muehlhauser exhibit all these symptoms in abundance, and now, surrounded by cultists and fanboys and uncritical admirers, they might well be hopeless cases.

BAYESIANISM IS THE MIND-KILLER

Many critics of the Singularity Institute focus on its cult-like nature: the way it presents itself as the only protection against an absurdly unlikely doomsday scenario; the way its members internalise a peculiar vocabulary that betrays itself when they step outside the cult confines; the way they keep pushing the work of their idolised cult guru on unwilling readers. (In particular, they keep cheerleading for Yudkowsky's endlessly dire Harry Potter fanfic, Mary Sue and the Methods of Rationality.) While all these criticisms are legitimate, and the cultish aspects of the Singularity Institute are an essential part of its power structure, I'm more concerned about the political views it disseminates under the guise of being stridently non-political.

One of Yudkowsky's constant refrains, appropriating language from Frank Herbert's Dune, is "Politics is the Mind-killer". Under this rallying cry, Lesswrong insiders attempt to purge discussions of any political opinions they disagree with. They strive to convince themselves and their followers that they are dealing in questions of pure, refined "rationality" with no political content. However, the version of "rationality" they preach is expressly politicised.

The Bayesian interpretation of statistics is in fact an expression of some heavily loaded political views. Bayesianism projects a neoliberal/libertarian view of reality: a world of competitive, goal-driven individuals all trying to optimise their subjective beliefs. Given the demographics of lesswrong.com, it's no surprise that its members have absorbed such a political outlook, or that they consistently push political views which are elitist, bigoted and reactionary.

Yudkowsky believes that "the world is stratified by genuine competence" and that today's elites have found their deserved place in the hierarchy. This is a comforting message for a cult that draws its membership from a social base of Web entrepreneurs, startup CEOs, STEM PhDs, Ivy leaguers, and assorted computer-savvy rich kids. Yudkowsky so thoroughly identifies himself with this milieu of one-percenters that even when discussing Bayesianism, he slips into the language of a heartless rentier. A belief should "pay the rent", he says, or be made to suffer: "If it turns deadbeat, evict it."

Members of Lesswrong are adept at rationalising away any threats to their privilege with a few quick "Bayesian Judo" chops. The sufferings caused by today's elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty "rationalist" bloviating. On the subject of feminism, Muehlhauser adopts the tactics of an MRA concern troll, claiming to be a feminist but demanding a "rational" account of why objectification is a problem. Frankly, the Lesswrong brand of "rationality" is bigotry in disguise.

Lesswrong cultists are so careful at disguising their bigotry that it may not be obvious to casual readers of the site. For a bunch of straight-talking rationalists, Yudkowsky and friends are remarkably shifty and dishonest when it comes to expressing a forthright political opinion. Political issues surface all the time on their website, but the cult insiders hide their true political colours under a heavy oil slick of obfuscation. It's as if "Politics is the mind-killer" is a policy enforced to prevent casual readers — or prospective cult members — from realising what a bunch of far-out libertarian fanatics they are.

Take as an example Yudkowsky's comments on the James Watson controversy of 2007. Watson, one of the so-called fathers of DNA research, had told reporters he was "gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really". Yudkowsky used this racist outburst as the occasion for some characteristically slippery Bayesian propagandising. In his essay, you'll note that he never objects to or even mentions the content of Watson's remarks — for some reason, he approaches the subject by sneering at the commentary of a Nigerian journalist — and neither does he question the purpose or validity of intelligence testing, or raise the possibility of inherent racism in such tests. Instead, he insinuates that anti-racists are appropriating the issue for their own nefarious ends:

"Race adds extra controversy to everything; in that sense, it's obvious what difference skin colour makes politically".

Yudkowsky appears to think that racism is an illusion or at best a distraction. He stresses the Bayesian dogma that only individuals matter:

"Group injustice has no existence apart from injustice to individuals. It's individuals who have brains to experience suffering. It's individuals who deserve, and often don't get, a fair chance at life. [...] Skin colour has nothing to do with it, nothing at all."

Here, he tells the victims of racial discrimination to forget the fact that their people have been systematically oppressed by a ruling elite for centuries, and face up to the radical idea that their suffering is their own individual problem. He then helpfully reassures them that none of it is their fault; they were screwed over at birth by being simply less intelligent than the creamy white guys at the top:

"Never mind the airtight case that intelligence has a hereditary genetic component among individuals; if you think that being born with Down's Syndrome doesn't impact life outcomes, then you are on crack."

Yudkowsky would reject the idea that these disadvantaged individuals could improve their lot by grouping together and engaging in political action: politics is the mind-killer, after all. The only thing that can save them is Yudkowsky's improbable fantasy tech. In the future, "intelligence deficits will be fixable given sufficiently advanced technology, biotech or nanotech." And until that comes about, the stupid oppressed masses should sit and bear their suffering, not rock the boat, and let the genuinely competent white guys get on with saving the world.

Social Darwinism is a background assumption among the lesswrong faithful. Cult members have convinced themselves the world's suffering is a necessary consequence of nature's laws, and absolved themselves from any blame for it. The strong will forever triumph over the weak, and mere humans can't do anything to change that. "Such is the hideously unfair world we live in", writes Yudkowsky, and while he likes to fantasise about eugenic solutions, and has hopes for "rational" philanthropy, the official line is that only singularity-level tech can solve the world's problems.

In common with many doomsday cults, singularitarians both dread and pray for their apocalypse; for while a bad singularity will be the end of humanity, a good singularity is our last best hope. The good singularity will unleash a perfect rationalist utopia: from each according to his whim, to each according to his IQ score. Death will be no more, everyone will have the libido of a 16-year-old horndog, and humankind will colonise the stars. In fact, a good singularity is so overwhelmingly beneficial that it makes all other concerns irrelevant: we should dedicate all our resources to bringing it about as soon as possible. Lesswrong cultists are already preparing for this event in their personal and private lives, by acting like it has already happened.

THE BRAVE NEW RATIONALIST UTOPIA: TRIGGER WARNINGS AHOY

To get an idea of what social relations, and in particular sexual relations, will be like in the singularitarian utopia, it helps to look at the utopian visions of libertarian-friendly authors like Ayn Rand, Robert Heinlein, or Poul Anderson, or the more embarrassing bits of Iain M. Banks. Suffice to say that when the computers are in charge, Lesswrong nerds will be getting a whole lot of sex with a whole lot of partners in a whole lot of permutations. The Lesswrongers who tumble into cuddle-puddles at their Bay Area meetups aren't just the Bright Young Things of a decadent culture; they're trailblazers of the transhuman morality.

You might think those cuddle-puddles are cute and fluffy, but it's too convenient to give the members of lesswrong.com a pass because they're into a bit of free love. (It should incidentally be noted that the ideology of "free love" has often been exploited by men in power — most notably, cult gurus — to pressure others into sleeping with them). Lesswrongers might see themselves as the vanguard of a new sexual revolution, but there's nothing new or revolutionary about a few rich kids having an orgy. Even the "sexual revolution" of the late 60s and 70s was only progressive to the extent that it promoted equality in sexual activity. Its lasting achievement was to undermine the old patriarchal concept of sex as an act performed by a powerful male against passive subordinates, and forward the concept of sex as a pleasure shared among equal willing partners. Judged by this standard, Lesswrong is if anything at the vanguard of a sexual counter-revolution.

Consider, for example, the fact that so many Lesswrong members are drawn to the de facto rape methodology known as Pick-Up Artistry. In this absurd but well-received comment, some guy calling himself "Hugh Ristik" tries to make a case for the compatibility of PUA and feminism, which includes the following remarkable insight:

"Both PUAs and feminists make some errors in assessing female preferences, but feminists are more wrong: I would give PUAs a B+ and feminists an F"

It's evident that "Hugh Ristik" sees himself as a kind of Bayes' Theorem on the pull, and that "female preferences" only factor into the equation to the extent that they affect his confidence in the belief he will get laid.

As another clue to the nature of Lesswrong's utopian sexual mores, consider that Yudkowsky has written a story about an idyllic future utopia in which it is revealed that rape is legal. The Lesswrong guru was bemused by the reaction to this particular story development; that people were making a big deal of it was "not as good as he hoped", because he had another story in mind in which rape was depicted in an even more positive light! Yudkowsky invites the outraged reader to imagine that his stand-in in the story might enjoy the idea of "receiving non-consensual sex", as if that should placate anyone. Once again we have a Bayesian individual generalising from his fantasies, apparently unmoved by the fact that "receiving non-consensual sex" is a horrible daily threat and reality for millions worldwide, or that people might find his casual treatment of the subject grossly disturbing and offensive.

All in all, I haven't seen anything on lesswrong.com to counter my impression that the "rational romantic relationships" its members advocate are mostly about reasserting the sexual rights of powerful males. After all, if you're a powerful male, such as a 21st-century nerd, then rationally, a warm receptacle should be available for your penis at all times, and rationally, such timeworn deflections as "I've got a headache" or "I'm already taken" or "I think you're a creep, stay away from me" simply don't cut it. Rationally, relationships are all about optimising your individual fuck function, if necessary at others' expense — which coincidentally means adopting the politics of "fuck everyone".

THIEL'S LITTLE LIBERTARIANS

The main reason to pay attention to the Lesswrong cult is that it has a lot of rich and powerful backers. The Singularity Institute is primarily bankrolled by Yudkowsky's billionaire friend Peter Thiel, the hedge fund operator and co-founder of PayPal, who has donated over a million dollars to the Institute throughout its existence [4]. Thiel, who was one of the principal backers of Ron Paul's 2012 presidential campaign, is a staunch libertarian and lifelong activist for right-wing causes. Back in his undergrad days, he co-founded Stanford University's pro-Reagan rag The Stanford Review, which became notorious for its anti-PC stance and its defences of hate speech. The Stanford experience seems to have marked Thiel with a lasting dislike of PC types and feminists and minorities and other people who tend to remind him what a shit he is. In 1995, he co-wrote a book called The Diversity Myth: 'Multiculturalism' and the Politics of Intolerance at Stanford, which was too breathtakingly right-wing even for Condi Rice; one of his projects today is the Thiel's Little Achievers Fellowship, which encourages students to drop out of university and start their own businesses, free from the corrupting influence of left-wing academics and activists.

Other figures who are or were associated with the Institute include such high-profile TED-talkers as Ray Kurzweil [5], the delusional "cyborg" crackpot; Aubrey de Grey, the delusional "Methuselah" crackpot; Jaan Tallinn, co-creator of Skype, the world's favourite backdoor Trojan; and Professor Nick Bostrom of Oxford University's generously-endowed Future of Humanity Institute, which is essentially a neoliberal think-tank in silly academic garb. Perhaps I will have more to say about this institution at another time.

Buoyed by Thiel's money, the Singularity Institute is undertaking a number of outreach ventures. One of these is the Center for Applied Rationality, which, among other things, runs Bayesian boot-camps for the children of industry. Here, deserving kids become indoctrinated with the lesswrong version of "rationality", which according to the centre's website is the sum of logic, probability (i.e. Bayesianism) and some neoliberal horror called "rational choice theory". The great example of "applied rationality" they want these kids to emulate? Intel's 1985 decision to pull out of the DRAM market and lay off a third of its workforce. I guess someone needs to inspire the next generation of corporate downsizers and asset-strippers.

Here we see a real purpose behind lesswrong.com. Ultimately it doesn't matter that people like Thiel or Kurzweil or Yudkowsky are pushing a crackpot idea like the singularity; what matters is that they are pushing the poisonous ideas that underlie Bayesianism. Thiel and others are funding an organisation that advances an ideological basis for their own predatory behaviour. Lesswrong and its sister sites preach a reductive concept of humanity that encourages an indifference to the world's suffering, that sees people as isolated, calculating individuals acting in their self-interest: a concept of humanity that serves and perpetuates the scum at the top.


[1] My source for this claim is Stephen E. Fienberg's paper "When did Bayesian Inference become 'Bayesian'?". Fienberg diligently traces the use of Bayesian-like methods throughout the history of statistics, but it's clear from his account that "Bayesian" statistics did not become a coherent and formally identified movement until the 1950s.

[2] It's a nonsense in the first place to claim that a sentence is "a logical fallacy": only arguments can be fallacious, and "absence of evidence is not evidence of absence" is not an argument. I'll charitably assume that the writer means to claim that the sentence is logically invalid — in which case he's still probably wrong.

Declaring any natural language sentence "logically valid" is obviously problematic, since sentences have various interpretations and can mean absolutely different things in different contexts. Most people use the sentence "absence of evidence is not evidence of absence" as a snappy way to counter the logical fallacy of "argument from ignorance" — in other words, they use it to make the quite sensible observation that "just because you don't know something to be true, that doesn't necessarily mean it's false". This is certainly a consistent observation — it's true sometimes — and I think most of us would agree that it's valid too — true all the time. For it ever to be false, you'd have to allow the possibility of omniscience. Even Bayesians do not in general presume to omniscience, though some of them hope to get close.

Instead, for their own didactic reasons, they choose to interpret the original sentence differently, to mean "failure to observe evidence for something should not increase your confidence that it is false". It then contradicts their interpretation of Bayes' Theorem, and is therefore a heresy. But this heresy actually points to a problem in the hardcore Bayesian outlook. Ultimately a Bayesian agent is always a victim of its own ignorance: it's very easy to bamboozle it by selectively showing and denying it evidence. Left to its own devices, eventually its beliefs will propagate into a bizarrely self-referential worldview that bears no relation to the reality it finds itself in. The analogy to certain Bayesian cults should be obvious.
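For the record, the arithmetic behind the Bayesian reading goes roughly like this (a toy sketch, with numbers of my own invention): if observing some piece of evidence would raise your confidence in a hypothesis, then failing to observe it must lower that confidence, however slightly.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """Update P(H) on observing evidence E, or on observing its absence."""
    if observed:
        num, other = p_e_given_h, p_e_given_not_h
    else:
        num, other = 1 - p_e_given_h, 1 - p_e_given_not_h
    p_e = num * prior + other * (1 - prior)
    return num * prior / p_e

prior = 0.5
print(posterior(prior, 0.8, 0.3, observed=True))   # ~0.73: evidence seen, confidence rises
print(posterior(prior, 0.8, 0.3, observed=False))  # ~0.22: evidence absent, confidence falls
```

Which is all fine as far as it goes; the trouble, as ever, is where those likelihoods are supposed to come from.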

[3] I am indebted to the rationalwiki page on lesswrong for much of the information in this and subsequent sections.

[4] Incidentally, years before he became a cult leader, Yudkowsky was unsuccessfully trying to popularise "to Paypal" as a generic verb for Internet money transfer.

[5] Kurzweil was also the founder of a separate organisation known as the "Singularity University", but now seems to be associated with neither. It seems that the Singularity Institute and Singularity University have recently had something of a dispute over the ownership of the "singularity" brand, and that the latter won out. By the time you read this, the Singularity Institute might even be known by its new name of "The Machine Intelligence Research Institute".