Wikipedia is a valuable online resource that, despite massive changes to the web, has managed to remain truly great to this day. I, like millions of other users, visit the site every day to learn something new or double-check my existing knowledge. In the era of continuous AI garbage, Wikipedia is a kind of antidote.
If you look at Wikipedia and think, “It's okay, but an AI version would be much better,” you might just be Elon Musk. Musk's artificial intelligence company xAI has just launched Grokipedia—yes, that's really its name—an online encyclopedia that closely resembles Wikipedia in name and appearance. But under the hood, the two could hardly be more different. Although the new “encyclopedia” is just beginning to emerge, I would argue that it shouldn't be used, at least not for anything real.
The Grokipedia Experience
When you load the Grokipedia site, it looks pretty standard. You see the Grokipedia name, a version number (v0.1 at the time of writing), a search bar, and a counter for "Available Articles" (885,279). Searching is also simple: you enter a query and a list of available articles appears for you to choose from. When you open an article, it looks like Wikipedia, only much simpler: there are no images, just text, though you can use the sidebar to navigate between sections. At the bottom of each article you'll also find numbered citations that correspond to the "References" section.
However, the key difference between Grokipedia and this stripped-down version of Wikipedia is that its articles are not written and edited by real people. Instead, each article is generated and "fact-checked" by Grok, xAI's large language model (LLM). LLMs can generate large volumes of text quickly and cite the sources they draw from, which may make Grokipedia's presentation appealing to some. However, LLMs are also prone to hallucinations, or in other words, making things up. Sometimes the sources the AI draws from are unreliable or satirical; in other cases, the AI simply fabricates text that is not true. Either way, the information can't be trusted at face value, so it's alarming that so much of the experience is entirely Grok-based, with no human intervention.
Grokipedia vs Wikipedia
Musk calls Grokipedia a "significant improvement" over Wikipedia, which he has criticized for pushing propaganda, especially in favor of left-wing ideas and policies. It's ironic, then, that some Grokipedia articles are themselves taken from Wikipedia. As The Verge's Jay Peters points out, articles like the one on the MacBook Air carry a note at the bottom: "Content adapted from Wikipedia, distributed under a Creative Commons Attribution-ShareAlike 4.0 license." Moreover, Peters found that some Grokipedia articles, such as those on the PlayStation 5 and the Lincoln Mark VIII, are near-identical copies of the corresponding Wikipedia articles.
If you've been following Musk's politics and political activities in recent years, you won't be surprised to learn that he sits on the right side of the political spectrum. That alone should give pause to anyone considering Grokipedia as an objective source of information, especially since Musk has repeatedly retooled Grok to generate responses more favorable to right-wing opinions. Critics like Musk argue that Wikipedia is biased toward the left, but Grokipedia is built entirely on an AI with a pronounced bias of its own.
You will find that when you read certain topics on Wikipedia and Grokipedia, you get very different impressions. For example, the Wikipedia article on Tylenol states the following:
In 2025, Donald Trump made several claims about the controversial and unproven link between autism and Tylenol. These claims linking Tylenol during pregnancy and autism are based on unreliable sources without scientific support.
Compare this with Grokipedia's article, which devotes three paragraphs to the topic, the first of which begins like this:
Numerous observational studies and meta-analyses have found an association between prenatal exposure to acetaminophen (the active ingredient in Tylenol) and an increased risk of neurodevelopmental disorders (NDDs) in offspring, including attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD).
However, the second paragraph highlights some of the problems with these studies, and the third highlights that some agencies suggest that “the benefits outweigh the unproven risks.”
Likewise, as WIRED notes, Grokipedia's "Transgender" article highlights the belief that social media may act as a "contagion" driving the rise in transgender identification. Not only is this a common right-wing claim, but that particular word may have originated in a post from a right-wing X account. Wikipedia's article, as you might expect, doesn't entertain this claim at all.
Grokipedia also favors claims that are unproven, controversial, or outright absurd. As Rolling Stone points out, it describes "Pizzagate," the conspiracy theory that led to an actual shooting, in terms of "allegations," a "hypothesis," and a "narrative." Grokipedia also lends credence to the "Great Replacement," a racist theory promoted by white supremacists.
Is it worth using Grokipedia?
Here's the short answer: no. My problem with Grokipedia is twofold. First, no encyclopedia will be reliable if it is almost entirely created by AI models. Sure, some information may be accurate, and it's nice that you can see the sources the bot is using, but when the risk of hallucination is built into the technology and there's no way around it, forgoing meaningful human intervention all but guarantees that inaccuracies will plague much of Grokipedia's knowledge base.
As if that weren't enough, Grokipedia is built on an LLM that Musk openly tinkers with to produce results that more closely align with his worldview and that of one particular political ideology. Hallucinations and bias are the last ingredients you want in an encyclopedia.
The special thing about Wikipedia is that it is written and edited by people. These people can hold other human authors accountable, adding new information as it becomes available and correcting errors when they encounter them. You may be dismayed to read that your favorite Secretary of Health and Human Services is "promoting vaccine misinformation and public health conspiracy theories," but that is an objective, documented reality. Removing these objective descriptions and reframing the discussion to fit a distorted worldview doesn't make Grokipedia better than Wikipedia—it makes it useless.