Can you use ChatGPT to write Wikipedia articles?

"Hey ChatGPT, can you write me a Wikipedia article?" 

Since late 2022, OpenAI’s ChatGPT has been all the rage. It's been covered by news outlets, endlessly discussed on tech forums, and looks to be the bane of many college professors' existence. ChatGPT may capture all the headlines, but it's far from the only AI platform out there. Other companies, such as Google, are working on competitors, and a variety of other AI platforms are barely in the public lexicon.

A robot surrounded by type-related imagery

Concerns and questions about large language models and AI-generated text have been spilling over onto Wikipedia, one of the world's most popular websites. Wikipedia prides itself on real people doing real work to make the sum of human knowledge freely accessible for no reason other than that it would be a good thing for humanity. It's a lofty goal and, for 12 years, Beutler Ink has helped bring it to fruition.

We were drawn immediately to the question of how AI might help us with our work. How many more clients could we help? How many agonizing hours could be saved searching for that one out-of-place pipe in a 50,000-character sea of black and white? How many days of research could be used to instead share photos of our pets on the company Slack channel?

With the prospect of more puppy pictures firmly on the horizon, we knew we couldn't pass up the opportunity to see what the free version of ChatGPT could do for us. As it turns out, the answer is, "Not as much as you might think." At least, not yet. Beutler Ink's Wikipedia strategy team went toe-to-toe with the large language model. We'll let you decide who won.

Automation and AI on Wikipedia

Almost since Wikipedia's inception, editors have been looking for ways to speed up the process of editing. How can they bring in vast amounts of data, often varying only slightly with names or population numbers, in an efficient way? How can editors harness the power of databases and computers to make knowledge more accessible for everyone? 

For the last 20 or so years of Wikipedia's existence, the primary answer has been bots. Bots are computer programs created by editors to make simple, repetitive edits, like updating citations or archiving Talk page conversations. Unlike ChatGPT, bots can't "think" or come up with novel answers to questions; they can only perform functions which they have been specifically programmed to perform. 

In Wikipedia's infancy, editors created bots with vastly more power, importing data from the Census Bureau to create thousands of articles on small towns in the span of a week, or from NASA to create thousands of articles about asteroids. Many of these articles were later reviewed by human editors and deleted because they failed to meet Wikipedia's guidelines. The author Andrew Lih goes into great detail on the genesis of bots on Wikipedia in Chapter 5 of his book, The Wikipedia Revolution.

This mass creation of articles led to the development of a policy for bots, including a ban on large-scale article creation, an approval process for creating new bots, and specifications for which tools count as "bot-like" even if they aren't independently operating programs. As of May 2023, there were over 2,600 approved bots on Wikipedia, most making small background edits that casual readers will never notice. 

More recently, AI has taken on some of that automation and assistance. Wikipedia and the Wikimedia Foundation have used machine learning and language models for years, using ORES to train bots to spot vandalism and using machine translation to bring articles into other languages. 

Artificial intelligence is nothing new. It has long been a concept in the human imagination and academics have been studying the technology since the 1950s. What AI hasn't been is accessible. For decades, artificial intelligence was something the average person could only see if they studied computer science for years or bought a ticket to a drive-in B-movie.  

That changed in 2022. As The Atlantic put it, "After years of seemingly false promises, AI got startlingly good in 2022." The once esoteric concept of a real artificial intelligence that we could interact with was now just a few keystrokes away. 

People were drawn first to AI image generators, where they could make cute pictures of cats playing chess or get help with their interior design work. The technology rapidly improved. Philosophical discussions ensued about the nature of art (Is AI-created imagery art?), the ethics of training image models (Should an artist get credit if AI imitates their style?), and the nature of truth (Can we believe what we see anymore?).

Then, on November 30, 2022, OpenAI launched ChatGPT for general use. Though not the first chatbot in existence, it proved itself better and easier to use than the others on the market. That gave Wikipedia new questions to answer. Here was a program that could create text that looks and sounds like a bona fide Wikipedia article. In addition to capturing Wikipedia's voice with reasonable accuracy, it can work in wikitext, the encyclopedia's coding language. ChatGPT can format tables, condense existing text, and create custom user boxes. It will even provide references. How much could this one program change Wikipedia? How will ChatGPT operate alongside human editors? What are its limits?
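To give a sense of what that wikitext looks like, here's a sketch of a custom user box of the kind ChatGPT can generate. The parameter values here are hypothetical, invented for illustration; real user boxes on Wikipedia are built from the {{userbox}} template with parameters along these lines:

```wikitext
{{userbox
| border-c = #666   <!-- border color (hypothetical value) -->
| id       = MOTW   <!-- small label cell on the left -->
| id-c     = #ddd   <!-- label cell background -->
| info     = This user hunts monsters in ''Monster of the Week''.
| info-c   = #f4f4f4
}}
```

Getting the pipes, parameter names, and equals signs right is exactly the kind of fiddly syntax work where a language model can save an editor time.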

On the surface, the articles ChatGPT creates are almost indistinguishable from those written by humans. Dig a little deeper and the importance of the human touch is obvious for at least one hugely important reason: when ChatGPT doesn't know something, it will unabashedly make it up. 

Creating a new Wikipedia article: Humans vs ChatGPT

We wanted to see how we stacked up against ChatGPT, so we ran a simple experiment using the free version of the software. First, one of our Wikipedia strategists researched and wrote a brand new Wikipedia article about the tabletop roleplaying game Monster of the Week, a game near and dear to our hearts that did not yet have an article of its own. Then, we asked ChatGPT to do the same thing. By starting with something that didn't yet exist, we could be sure that ChatGPT and our strategist were on a level playing field. 

On the surface, the two articles look similar. Both have introductions and sections on gameplay, development, and reception. ChatGPT completely smoked our strategist in putting this together, finishing in under a minute to the strategist's more than 3 hours. Beyond the layout, there are also some strong similarities in the content, especially when comparing how the two articles talk about the gameplay. ChatGPT also got some of the supplemental materials correct, like its addendum about the Tome of Mysteries.

It isn't a bad start, and we've seen more than a few articles on Wikipedia that are virtually identical to the one ChatGPT generated. But a closer inspection starts to reveal the cracks. Below is a section-by-section analysis of what's different between the articles, and what ChatGPT flat-out got wrong.

(Note: Beutler Ink is not affiliated with Monster of the Week, nor its publisher, Evil Hat Productions. Beutler Ink did not consult with nor discuss this experiment with Monster of the Week's creators or publisher. Beutler Ink received no monetary compensation for this experiment. This topic was selected because Beutler Ink employees play Monster of the Week in their free time.)

Human Version

Time to complete: 3 hours 20 minutes

ChatGPT Version

Prompt: I would like you to create a Wikipedia article for the roleplaying game "Monster of the Week". Please include sections describing its gameplay, history, development and expansions, and the reception it received. Please include a list of references used to verify the information

Time to complete: 52 seconds

Infobox

At first glance, it's easy to see that the human-developed version of the infobox has vastly more information. It includes much of the information commonly found in tabletop RPG articles, like the designer, illustrators, and the base system.

The ChatGPT version, in addition to being woefully incomplete by comparison, generated a completely fake file name for the cover of the rulebook, gets the number of players wrong (though this could be disputed), and suggests that players could rush through character and monster creation in less time than the average bathroom break. The game also does not have specific playtime targets (though 2-4 hours is a reasonable range), nor does the base game have guidance on how old players should be.

ChatGPT's infobox is a solid start, but ultimately, it's lacking the details a human editor would pick up during their research.
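For reference, a Wikipedia infobox is built from a template with named parameters that an editor fills in as research turns up each detail. A rough sketch of what that wikitext looks like for this game, with values drawn from the article itself (the exact template name and available fields vary, so treat the parameter names as illustrative):

```wikitext
{{Infobox role-playing game
| title     = Monster of the Week
| designer  = Michael Sands
| publisher = [[Evil Hat Productions]]
| date      = 2012 (original)<br />2015 (revised)
| genre     = [[Urban fantasy]], [[Horror fiction|horror]]
| system    = [[Powered by the Apocalypse]]
}}
```

Every field left blank here is a field a diligent human editor could fill in, which is precisely where the two versions diverged.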

Human version

Beutler Ink's strategist created a Wikipedia infobox

ChatGPT version

ChatGPT created a Wikipedia infobox with several mistakes and fabricated elements

Introduction

Overall, there isn't a lot to talk about with the content of these introductions. Both of them get the big details right: the game designer, publisher, year of release, system, and inspirations. Other editors may quibble about the comparison to Dungeons & Dragons or the inclusion of the genre in the opening line, but we don't see this as a mark against ChatGPT. 

What editors can't quibble about is the formatting. Wikipedia has specific style guidelines for how leads are to be formatted, and ChatGPT failed to meet the mark. Notably, the ChatGPT version didn't include any italicization and, maybe most importantly, no links to other Wikipedia pages. These internal links are hugely important, and that web of connections is one of the things that makes Wikipedia so unique, and so useful.
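For comparison, here's roughly what a properly formatted lead sentence looks like in wikitext: the title is bolded and italicized (five apostrophes), and key terms are wikilinked with double square brackets:

```wikitext
'''''Monster of the Week''''' ('''''MOTW''''') is an [[urban fantasy]] horror
[[tabletop role-playing game]] developed by Michael Sands and published by
[[Evil Hat Productions]].
```

ChatGPT's lead, by contrast, arrived as plain text with quotation marks standing in for italics and no links at all.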

Human version

Monster of the Week (MOTW) is an urban fantasy-horror tabletop role-playing game developed by Michael Sands. It was first published in 2012, and a revised edition was published by Evil Hat Productions in 2015, who have since published the game and its supplemental materials. The game was inspired by villain of the week television series such as Buffy the Vampire Slayer, Supernatural, and The X-Files.[3]

MOTW is a Powered by the Apocalypse game and as such has a simplified ruleset when compared to tabletop RPGs like Dungeons & Dragons or Cyberpunk.[4][5]

ChatGPT version

"Monster of the Week" is a tabletop roleplaying game designed by Michael Sands and published by Evil Hat Productions. It was first released in 2012 and is inspired by TV shows like Buffy the Vampire Slayer, Supernatural, and The X-Files. The game uses the "Powered by the Apocalypse" system, which is known for its narrative-driven gameplay and its focus on player agency.

Gameplay

Like the introduction, there is some quibbling that can be done in the Gameplay section about which content to include, if the "semi-structured, open-ended" descriptor is needed (after all, Monster of the Week is a TTRPG and that's kind of a hallmark of the genre), and if we really need to specify that yes, sometimes players will be doing basic math.

But the bigger differences are significant oversights: the ChatGPT version never mentions that the game master is called the "Keeper" or that character classes are referred to as "playbooks", and it omits the system's use in multi-session campaigns (rather than solely single-session experiences, as is the case with some other RPG systems). Volunteer editors might argue about the importance of each of these elements, but we think it's important to note what makes the subject of an article distinctive, be it a game, company, or person.

Beyond that, the inarguable element is that ChatGPT has once again failed to connect this section to other relevant articles. The importance of internal wikilinks – and properly formatting them – cannot be overstated. ChatGPT also does not make use of inline citations, a critical component in any properly formatted, cited, and written Wikipedia article.
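In wikitext, an inline citation is attached to the sentence it supports with <ref> tags, usually wrapping a citation template, and the reference then renders automatically in the References section. A sketch using a source from our article (the URL below is a placeholder, not the real address):

```wikitext
Unlike other TTRPGs, the Keeper does not roll dice; instead, the Keeper reacts
to the rolls of the hunters.<ref>{{cite web |last=Jarvis |first=Matt
|date=8 March 2023 |title=Michael Sands on the RPG's origins, inspirations and
what comes next |url=https://www.dicebreaker.com/example <!-- placeholder URL -->
|website=Dicebreaker |access-date=12 April 2023}}</ref>
```

ChatGPT's gameplay section contains no <ref> tags at all, so none of its claims can be traced back to a source.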

Human version

MOTW is a semi-structured, open-ended tabletop role-playing game (TTRPG). One player acts as the gamemaster (known as the "Keeper of Monsters and Mysteries" or "Keeper") while the other players develop player characters and take the role of one or two "hunters".

Each hunter is assigned a "playbook" or character class based on a character archetype from monster media. During the course of play, each player will direct the actions of their character and roll two six-sided dice when prompted by the Keeper to use various skills, including investigating mysteries and using magic, as well skills that are specific to individual playbooks. Unlike other TTRPGs, the Keeper does not roll dice to determine the actions of the characters under their control; instead, the Keeper reacts to the rolls of the hunters. Each player will employ logic, basic arithmetic, and imagination during the course of the game. A single game may finish in one playing session or be extended over the course of multiple playing sessions in an "arc" or "campaign".[6]

Enemies, settings, and puzzles are developed by the Keeper in advance of the playing session. The Keeper determines what the results of player choices are according to the rules of the game and their interpretation of those rules. MOTW has been described as a "fiction-first" game, with a greater emphasis on storytelling than adhering to rules.[7]

ChatGPT version

In "Monster of the Week," players take on the roles of hunters, individuals who are investigating and fighting against supernatural creatures. The game uses a "mystery of the week" structure, where the hunters are tasked with solving a specific supernatural problem in each game session. The game master (GM) is responsible for creating the mystery, playing the non-player characters, and setting the scene.

The game uses a simple 2d6 dice mechanic, where players add relevant stats and modifiers to their rolls. The results of the roll determine the success or failure of the player's action and the consequences that follow. In addition to stats, players also have access to special moves, which allow them to perform unique actions based on their character's abilities.

History & Development

Again, the History sections bear many similarities, but the differences are important. Beyond the lack of links to other pages, inline citations, and proper formatting, the ChatGPT version misses some important parts of Monster of the Week's developmental history (its time as a print-on-demand-only publication and the role of Fred Hicks stand out to us), but what's worse is that ChatGPT inserted information that isn't verifiably true. 

ChatGPT claims the initial publication of the game came about from a Kickstarter campaign. This isn't verifiable. It's true that the developers of Monster of the Week ran a crowdfunding campaign, but nowhere in the game's history, as told by Steve Hickey in the Tome of Mysteries expansion, is Kickstarter named as the platform. We searched for this detail in other sources and didn't find it in anything appropriate for use on Wikipedia. This is a small detail, but an important one, as it begins to demonstrate some of the bigger issues with using ChatGPT to write a Wikipedia article. 

Looking closely at the ChatGPT version, we can also see that it says development of Monster of the Week began in 2011. This is demonstrably false. Looking again to the Tome of Mysteries, the date when earnest development started isn't clear, but early sketches of the rules had already been completed by October 2007 (of note, we decided to omit this information from our version because we didn't want the article to become bogged down with potentially unencyclopedic detail). ChatGPT's version is a decent framework, but the holes become clearer with each section. 

Human version

MOTW was first published by Michael Sands in 2012. In an interview with Dicebreaker, Sands said the game was inspired by television series such as Supernatural, Buffy the Vampire Slayer, and The X-Files. Following the release of Apocalypse World, Sands adapted that system to create MOTW.[3] After its release, MOTW was primarily available via print on demand, which limited its potential audience. In 2015, Sands, at the encouragement of Fred Hicks, partnered with Evil Hat Productions to make a revised edition of the game and publish it through the company's platform.[6] Sands developed a ruleset expansion and collection of pre-made content called the Tome of Mysteries which was released in 2019,[1] and an additional ruleset expansion with new party mechanics to be released in 2023, titled Codex of Worlds.[3]

ChatGPT version

Michael Sands began developing "Monster of the Week" in 2011, drawing inspiration from his love of supernatural TV shows. He based the game on the "Apocalypse World" system, created by Vincent Baker, which focuses on player agency and storytelling. Sands launched a successful Kickstarter campaign in 2012 to fund the game's publication, which was released later that year.

Since its initial release, "Monster of the Week" has had multiple printings and has gained a strong following among tabletop RPG fans. In 2021, Evil Hat Productions released a revised and expanded edition of the game, which included new playbooks, revised rules, and additional content.

Expansions

In our version of the article, we opted to omit an Expansions section entirely, for the simple reason that there is not enough available information in Wikipedia-appropriate sources to justify having a full section on the game's expansions. Editors can decide if a "revised" edition is truly an expansion, but we're willing to give ChatGPT a pass on this one given that there have now been three distinct versions of the base rulebook released, each with its own tweaks and minor growth points. 

What we can't give a pass on is ChatGPT simply making something up. It lists The Monster Hunter Files as one of the supplemental materials for Monster of the Week. It certainly sounds like it could be, and it's undoubtedly related to the monster hunting genre. There's just one teensy, tiny problem: The Monster Hunter Files is not in any way affiliated with Monster of the Week. It is a compilation of short stories put together by Larry Correia and Bryan Thomas Schmidt, published by Baen Books in 2017, and is part of Correia's Monster Hunter International series. It is a complete fabrication to say the two properties are linked.

In addition to making up that The Monster Hunter Files was part of the game's universe, ChatGPT neglected to include the very real expansion to the game, Codex of Worlds. This is likely because of ChatGPT's limitations when it comes to pulling information on recent events. Codex of Worlds was announced in 2022; ChatGPT's knowledge cutoff is September 2021 and it lacks access to real-time information, highlighting another area where using ChatGPT to write Wikipedia articles is problematic. 

ChatGPT version

In addition to the core rulebook, "Monster of the Week" has several expansions that provide additional content and options for players. These include:

"Tome of Mysteries": A supplement that provides additional mysteries, monsters, and hunters for the game.

"Monster of the Week: Revised": A revised and expanded edition of the core rulebook, which includes new playbooks, revised rules, and additional content.

"The Monster Hunter Files": A collection of mysteries and monsters written by various authors, including Seanan McGuire and Chuck Wendig.

Reception

This is where ChatGPT goes off the deep end. Reviews of Monster of the Week are rare in Wikipedia-appropriate sourcing. It's true that the game has been praised for its emphasis on story over tactics and rules, so we'll give ChatGPT a point for that.

But that's where we'll stop. The entire second paragraph of ChatGPT's effort is fabricated. Every word of it. There is no IGN list. There are no Forbes accolades. And while there probably is an Eric Dodd somewhere in the world, maybe even an Eric Dodd who is a fan of Monster of the Week, he hasn't written anything saying so on RPG.net.

Human version

MOTW has had a generally positive critical reception since its release, with reviewers lauding its quick pacing and accessibility for new tabletop gamers. R. Talsorian Games designer Cody Pondsmith wrote of MOTW in 2022: "Monster of the Week's streamlined resolution system allows you to focus on playing your character and moving the story forward without getting slowed down by additive math and complex rules."[4] PC Gamer's Robin Valentine described the game as, "a great introduction to the world of game driven by collaboration and improv at the table."[5]

Criticism of MOTW has primarily been related to its open-ended ruleset, about which Pondsmith wrote, "the trade-off is that the game relies heavily on the game master to adjudicate when rolls should be made and what the results of those rolls should be, but the open-ended system gives solid guidance when things get tricky."[4]

ChatGPT version

"Monster of the Week" has received critical acclaim for its engaging gameplay and its ability to capture the feel of supernatural TV shows. The game has been praised for its simple mechanics, which allow players to focus on storytelling and roleplaying.

In a review for RPG.net, reviewer Eric Dodd praised the game's "simple, flexible rules," and its ability to "immerse players in a world of supernatural danger and adventure." The game has also been featured in numerous "best of" lists, including IGN's "The 10 Best Tabletop RPGs of All Time" and Forbes' "The Best Tabletop RPGs To Get You Through The Pandemic."

In Popular Culture

Human version

MOTW has been featured in multiple actual play podcasts since its creation, most notably as the system used in Season 2 of The Adventure Zone.[8]

There's some debate in the Wikipedia community about the propriety of "In popular culture" sections and when they should be used, but we felt it relevant to include Monster of the Week's usage in The Adventure Zone (TAZ) because a) TAZ has its own Wikipedia page; b) it is specifically mentioned twice in a source recommended by WikiProject Board and Table Games; and c) these sections are quite common in other articles about TTRPGs. 

This isn't something we can necessarily ding ChatGPT for not including, but a human editor will be able to reason through the logic of including this information or not, as well as the level of depth with which to treat this section.

References

In our eyes, the References section may be the most damning evidence of any we've explored in this blog post when it comes to ChatGPT's incompetence at writing Wikipedia articles, and it's the one the casual Wikipedia reader is most likely to never peruse. References sections are critically important on Wikipedia; like showing your work in math class, References sections let other editors check your work and ensure the text of the article is as accurate as possible. These sections enable the collaborative nature of Wikipedia. They are what sets Wikipedia apart from forums like Reddit or a random blog, and why Wikipedia has become the first (and often only) source of information for many people, whether they realize it or not. 

References sections inherently build trust in an article. They show that editors have taken the time to research, to think, to compile, and they allow anyone to follow the trail of breadcrumbs back to the beginning to verify information or make corrections. Without a proper References section, any attempt to create an article on Wikipedia will be instantly denied. 

Our human-created References section is done the right way. It's properly formatted, using Wikipedia's style of citation, with links to reliable sources per Wikipedia's rules. Each reference has a number attached to it which allows any reader to see exactly which bits of information are coming from which sources. 

ChatGPT's version doesn't do that. Any of it. The citation formatting is close, but not correct. It doesn't spell out the first names of authors even though it should, and its dates lack the months and days that are available in the sources. 
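To make the difference concrete, here's the same kind of source cited both ways. The first follows Wikipedia's citation-template style; the second mimics ChatGPT's looser, academic-style output (the URL below is a placeholder, not the real address):

```wikitext
<!-- Wikipedia style: full first name, full date, access date, template-generated formatting -->
{{cite web |last=Valentine |first=Robin |date=23 February 2023
|title=I've been playing tabletop RPGs for 20 years, and these are the D&D alternatives I recommend
|url=https://www.pcgamer.com/example <!-- placeholder URL -->
|website=PC Gamer |access-date=12 April 2023}}

<!-- ChatGPT style: initial only, year only, no access date, no template -->
Valentine, R. (2023). D&D alternatives I recommend. PC Gamer.
```

The template version also generates the numbered footnote markers that tie each claim in the article body to its source.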

Oh, and the sources should be real. 

None of ChatGPT's links take readers to real sources. There is no Eric Dodd review on RPG.net. There is no "P. Hernandez" review published by Forbes. There is no GeekNative review. The section is entirely fabricated. The listed source that comes closest to real is the link to the Evil Hat website: the domain, if entered into an address bar, will redirect users to the right page, but the specific URL ChatGPT provided does not exist.

Human version

  1. ^ a b Sands, Michael, ed. (2019). Monster of the Week: Tome of Mysteries. Silver Spring, Md.: Evil Hat Productions. ISBN 978-1-61317-171-4.

  2. ^ "Monster of the Week". Generic Games. Retrieved 12 April 2023.

  3. ^ a b c Jarvis, Matt (8 March 2023). "'If I was designing Monster of the Week now it would be very different': Michael Sands on the RPG's origins, inspirations and what comes next". Dicebreaker. Retrieved 12 April 2023.

  4. ^ a b c Hall, Charlie; Pondsmith, Cody (15 December 2022). "The best tabletop RPGs we played in 2022". Polygon. Retrieved 12 April 2023.

  5. ^ a b Valentine, Robin (23 February 2023). "I've been playing tabletop RPGs for 20 years, and these are the D&D alternatives I recommend". PC Gamer. Retrieved 12 April 2023.

  6. ^ a b Sands, Michael (2015). Monster of the Week (Revised ed.). Silver Spring, Md.: Evil Hat Productions. ISBN 978-1-61317-091-5.

  7. ^ Gailloreto, Coleman (24 September 2020). "Must-Play Roleplaying Games That Are "Powered By The Apocalypse"". ScreenRant. Retrieved 12 April 2023.

  8. ^ Meehan, Alex (1 December 2022). "The Adventure Zone's Griffin and Travis McElroy chat discovering D&D, dream celeb players - and reveal their favourite RPG". Dicebreaker. Retrieved 12 April 2023.

ChatGPT version

Evil Hat Productions. (n.d.). Monster of the Week. Retrieved from https://www.evilhat.com/home/monster-of-the-week/

Dodd, E. (2016). Review of Monster of the Week. RPG.net. Retrieved from https://www.rpg.net/reviews/archive/16/16336.phtml

Hall, C. (2021). Monster of the Week Revised Edition Review. Geek Native. Retrieved from https://www.geeknative.com/130016/monster-of-the-week-revised-edition-review/

Hernandez, P. (2021). The Best Tabletop RPGs To Get You Through The Pandemic. Forbes. Retrieved from https://www.forbes.com/sites

Pros and cons to using ChatGPT to create Wikipedia articles

Pros

  • Fast

  • Free

  • Formatting is generally correct

  • Can provide base code

Cons

  • Makes up facts

  • Doesn’t use real sources

  • Excludes easy-to-find information for no apparent reason

  • Won’t be accepted by reviewing editors

For all its achievements—and as much as we've harped on what ChatGPT got wrong in its article, there is a lot that it got right—it's clear that the one thing it can't stand is a content gap. Rather than leave a section blank or undersized, it will simply make things up to fill the space. This happened in several other pre-experiments we ran: ChatGPT made up names of people and gave those fake people credit for things they never did; it padded a person's résumé with a prestigious spot on the board of directors for a major telecommunications company; and it created an entire early life of a person that never happened. 

We're not the only ones who have come across this phenomenon. When Andrew Lih ran his own ChatGPT-Wikipedia experiments, he found that the AI would occasionally fabricate plot elements of movies.

Part of that has to do with how ChatGPT works. As a large language model, ChatGPT isn't a researcher and it isn't a search engine. It has no ability to look things up, to determine the quality of a source, to sift facts from opinions, and it certainly doesn't have the ability to parse decisions like what is relevant for a Wikipedia article and what is not. 

We talked to ChatGPT about this. When asked, "If you do not have certain information, do you ever just make stuff up?", ChatGPT replied:

"No, I do not make up information. As an AI language model, my responses are generated based on the information and patterns that I have learned from my training data. If I do not have information on a particular topic, I will simply say that I do not know or that I do not have enough information to provide a meaningful answer.

"It's important to note that while I strive to provide accurate and useful information, I am not perfect and there may be cases where my responses are incomplete or contain errors. Therefore, it's always a good idea to verify information from multiple sources to ensure that it is accurate and reliable."

ChatGPT puts together words, phrases, and sentences that seem like they should go together based on all the input data with which it has been trained. Sometimes, it gets things right. Other times, it creates a word salad that has no basis in reality but, dangerously, sounds like it could. Researchers call it "hallucinating". It's basically a more mature version of Mad Libs. We would argue that creating content based on patterns rather than facts is making things up. Ol' Chatty and Beutler Ink will have to agree to disagree on this one. 

One of ChatGPT's great strengths is its ability to capture Wikivoice, the dry tone that sends readers back to their days perusing the dusty, leather-bound tomes in the reference sections of the local public library. Okay, maybe that was just us. But the Wikipedia style is something that many editors struggle with, especially new editors, and ChatGPT does a fantastic job of capturing that elusive tone. Not perfectly, but it's generally close.

Another Wikipedia editor, JPxG, demonstrated this with a plot summary of an episode of The Office. The plot summary was rather long, and ChatGPT was able to cut it down in seconds. Editors can argue about which details should have been cut and which ones should have stayed, but JPxG showed that ChatGPT can effectively capture Wikipedia's style, something that could boost editing speed significantly in the future. 

JPxG also showed a darker side to this function. They had ChatGPT write characters out of the show completely, changing the show’s story in seconds. Astute fans of The Office might be able to see through this ruse, but the casual fan might never know that the Wikipedia article is wrong until they reach that episode. 

It isn't difficult to imagine how the capability to selectively edit facts and generate citations that sound legitimate could be weaponized in articles with more significance. Can ChatGPT write a new summary of the life of Rosa Parks that excludes why she was asked to sit at the back of the bus, as one textbook company did? Can it rewrite the history of the Russian invasion of Ukraine to exclude any information about the human cost of the conflict? Could a bot be programmed to use ChatGPT to automatically insert AI-generated text that rewrites history? To quote the Magic 8-Ball, signs point to yes. 

It's not all doom and gloom when it comes to AI on Wikipedia, however. Perhaps ChatGPT's greatest strength is its ability to turn the information it is provided into tables and other coding elements. These tables, like the filmographies in articles about actors or the collected results of an NBA season, take an enormous amount of time to code properly, and a single misplaced character can break the entire thing. ChatGPT can put these tables together in seconds and almost always gets them right the first time. 
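For a sense of what that coding looks like, here's a minimal wikitext table of the kind ChatGPT can generate quickly. The rows are hypothetical, invented for illustration, and every pipe and exclamation mark has to be exactly right for the table to render:

```wikitext
{| class="wikitable"
|+ Filmography (hypothetical example)
! Year !! Title !! Role
|-
| 2012 || ''Example Film'' || Lead
|-
| 2015 || ''Another Film'' || Supporting
|}
```

A stray `|` or a missing `|-` row separator can scramble the whole table, which is why hand-coding large ones is so time-consuming for human editors.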

The Way Ahead

Yuval Noah Harari questioned what will happen to human culture when faced with the rise of AI in The Economist: "Language is the stuff almost all human culture is made of. Human rights, for example, aren't inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws… What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture."

At least for now, AI isn't taking over the culture of Wikipedia. No hard rules have yet been established for using AI to generate content on the platform, but the community of editors seems to be leaning toward using AI as a starting point rather than letting it do all the work itself.  

The best way to make changes on Wikipedia is still to do it the old-fashioned way.

That just so happens to be one of the things that Beutler Ink has been a global leader in for more than a decade. Reach out today to see if our services are a match for your needs.