AI Writing, Gaming Media and My Position on AI Use

By MARK WILSON

[Image: human head with outlined brain and synapses]

We’re still in an age where people are discovering artificial intelligence (AI) writing tools and thinking themselves clever when they write an article, then reveal at some point that you’ve been reading AI writing. Haha! You’ve been fooled! See, it’s indistinguishable.

Now, go forth my children, and become famous novelists by learning how to write detailed prompts to fancy software tools!

I’m going to avoid sarcasm as much as possible in this article, but occasionally it’s going to be tough. My stance on AI isn’t entirely negative or entirely positive, but the more ridiculous hype, fear and confusion surrounding it are – let’s be honest – worth chuckling at.

So to be clear, not a single word in this article was written by AI. This is me. But I don’t categorically hate AI either, and in fact use it as a support tool in my writing efforts. Let me get into why both of those things can be true.

In advance, this weaves in and out of AI in the marketing and writing fields in general (which is where I make my living in my day job). This is a lot less focused on games than most of my articles. But I promise it’s relevant, not just to content here on Bumbling Through Dungeons but to everything you read on the internet in the modern age.

Artificial Intelligence: A Subjective Breakdown

The term artificial intelligence (AI) is decades old, and was designed to inspire a bit of technological fear, a feat it’s still adept at to this day.

It’s also actively misleading. Anyone thinking about Terminator-style machines and humanlike intelligence is in another realm. There are people working on that type of AI as well, but all signs point to us being nowhere near creating a humanlike intelligence. That isn’t the type of AI that’s getting headlines, though.

It’s more helpful to think of current AI tools simply as powerful software. Some of them amount to no more than what I’d call a fancy spreadsheet. Most are designed only to do a single, repetitive task, and their power comes in their ability to do that task at scales humans can’t, and also to parse terabytes of data in doing it.

Take ChatGPT, which gets a lot of headlines. It simply predicts the next word in a sequence, then does that over and over, and it uses data collected from the internet to make its predictions. It doesn’t know it’s talking to a human. It doesn’t know what a human is, or have any awareness of the meaning of the words it is putting into a sequence.

It’s a big calculator extrapolating probabilistic patterns from an existing, large data set. It’s powerful, but also more limited than some people seem to think. Many other tools follow suit. Their power is as much in how they’re used as exactly how their software functions.
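To make that concrete, here’s a minimal sketch of that predict-the-next-word loop, using a toy bigram model built from one made-up sentence. Real systems like ChatGPT use neural networks over tokens and vastly more data, but the core mechanic is the same: pick a statistically likely next word, append it, repeat. (Everything below is illustrative, not how any production system actually works.)

```python
import random
from collections import Counter, defaultdict

# Toy "training data"; a real model ingests terabytes of text.
corpus = "the dragon guards the gold and the knight seeks the gold".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=6):
    """Repeatedly sample a likely next word, with no idea what any word means."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break  # never saw this word followed by anything; nothing to predict
        choices, counts = zip(*followers.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the gold and the knight seeks the"
```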

Ethical Considerations

There’s good and bad here, and it’s a contentious topic. If someone gets angrily defensive about AI, you can probably bet they’re benefiting financially from it. To my eye, though, attacks on AI usage occasionally conflate ideas that should be handled separately, and can be similarly misguided.

When I write, it’s my own words, but particularly if it’s a researched, educational topic, I’m often adapting other sources into my own words. In practice, it’s not so different from what AI tools do. In fact, if I run my writing through AI detectors, they usually estimate that 5-30% could have been written by AI. These are usually sections where I looked up some information, synthesized it in my mind and rephrased it to suit my particular purposes.

This happens constantly in writing. If you’re directly quoting or referencing another’s work, cite your sources of course, but much more often – particularly in subjective, opinion-based industries like gaming critique – it’s that you’re taking nebulous ideas instead of exact quotes and repurposing them toward some new set of ideas, or applying them in a different context.

I think, ultimately, this sort of adaptation is going to have to be allowable both legally and by any logically consistent ethical standard. More egregious, unedited plagiarism could remain a problem (and has already happened with AI), but the underlying process AI writing tools engage in shouldn’t, to my mind, be vilified.

However, this doesn’t mean I give a blanket pass to AI writing tools, and others like image or video generation.

First, I firmly believe AI conglomerates scraped copyrighted data and other data that should have intellectual property rights attached, and the fact that legislation in most countries is woefully behind the technology has given these groups a pass as they harvest the internet without a care. Ironically but unsurprisingly, once their own data sets started being harvested without their knowledge, some of these same groups have been far less willing to share openly. Legal or not, the double standards applied to these situations are staggering. This stage of their work could have been handled with far more care, and the way it actually unfolded was likely deliberate.

This continues currently, of course. I think there are more ethical ways to train tools and scrape data, and the money being funneled into AI means that many of these groups could easily develop first-party data sets or opt-in models. They won’t, though; if you’ve followed their “shoot first, ask questions later” methods, you already knew that.

Second, the number of artists holding rights to their work who have made their non-consent explicitly known, yet can prove their work was used in training data for AI tools, is as massive as it is saddening. It’s likely inevitable that shifts will happen in creative fields as a result of AI technology, including loss of jobs. In and of itself, I don’t think this is bad. Unfortunate in some instances, perhaps, but not evil.

However, it’s clear to me that zero empathetic consideration went into its rollout, further marginalizing an already-marginalized workforce in society with little-to-no recourse.

I’m far from some Luddite with my head in the sand when it comes to new technological tools. As someone with a career in digital marketing, spanning several industries and specific niches, by age 40 I’ve learned and adopted more software tools than most will ever have to in their entire lives.

However, I can’t divorce myself from the human side of technology trends, and so I can’t help but be furious at some of the tone-deaf rebuttals that may be legally defensible but that I find reprehensible in any truly human sense. The people saying “it’s happening whether you like it or not” are correct, but that doesn’t mean we should give a pass to indefensible corporate behavior as these tools are implemented.

AI in Digital Writing

I’m going to talk in a moment about AI tools that do different things than ChatGPT and its competitors, but let’s focus on that for the moment, because that’s most of what people are aware of.

You’ve read a lot of AI content already. Like, entire articles. Also social media posts. Also emails. Also landing page copy on websites. Also advertisements in Google searches. Also in Amazon product listings. Also…

You get the point. None of those above are hypothetical, and many more examples exist.

Any digital copy you read has an increasing chance of being AI-generated. The first AI novel likely isn’t far off, or has already been published and I’m simply unaware of it. Though as I’ll talk about more later, something of that complexity isn’t solely AI-generated, or anywhere near.

The Scams, Lies and Half-Truths

The general public knows very little about AI and is actively misled by the hype surrounding its use and potential, yet using a vast array of AI tools seems to be a prerequisite to being a success-minded professional in an increasing number of fields.

Sense the problem yet? The field is ripe for grifts.

I’ve seen a few up-close, where a company with zero AI in their processes suddenly has it infused into every level of their operations.

Except nothing changed besides their web copy, where they stuffed AI terms into every nook they could find. But if you don’t know how to identify truly AI-based software operations vs. other types, it’s easy to look the part with some well-curated lines for your sales team.

As mentioned, I work in digital marketing, and the number of companies running similar schemes is uncomfortably high. Upon vetting some of them, it’s clear that even some of their employees don’t realize that they’re not actually selling AI tools. For a tool to actually be AI, it has to be able to learn/adapt in some way as it’s trained on a task with your company’s data and audience. Many don’t, which makes them “regular” software tools, just with AI terms sprinkled on the website and product pitchbooks.
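To illustrate the distinction (with entirely hypothetical names and data), here’s a minimal sketch of a “regular” rule-based tool next to one that actually learns. The bar a product should clear before calling itself AI is the second behavior: its output changes as it’s trained on examples.

```python
def rule_based_spam_check(text):
    # Static logic: no amount of new data changes this tool's behavior.
    return "free money" in text.lower()

class LearningSpamCheck:
    """A toy perceptron over word features: it adapts as it sees examples."""
    def __init__(self):
        self.weights = {}

    def score(self, text):
        return sum(self.weights.get(w, 0.0) for w in text.lower().split())

    def train(self, text, is_spam):
        # Nudge each word's weight toward the observed label.
        error = (1.0 if is_spam else -1.0) - self.score(text)
        for w in text.lower().split():
            self.weights[w] = self.weights.get(w, 0.0) + 0.1 * error

model = LearningSpamCheck()
for text, label in [("win free money now", True), ("meeting notes attached", False)]:
    model.train(text, label)

print(model.score("free money") > 0)  # True: learned from data, not hardcoded
```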

The other big half-truth that’s sold about AI is that it doesn’t require human curation. The Marketing AI Institute is a group I respect, and their leaders have adamantly stated that if someone promises you that you don’t have to be involved at all in the AI’s processes, they’re either ignorant of what they’re pitching or they’re lying.

This has implications for writing, because it’s rare that an AI tool can spit out a fully-formed article, or web copy, or whatever else, that requires no additional work. Maybe you write 100 prompts to an AI, rewrite the best bits, and then merge them all manually in the final edit. Or you fix some factual errors. Or you adjust the copy to better align with the brand character on your website.

That’s still a lot of human effort, and this is supposedly one of the most automated use-cases of AI.

It isn’t just shady startups that are involved in this; it’s the shady mega-corporations as well.

Some time ago, Amazon somewhat infamously opened a series of pop-up physical stores that were supposedly automated by AI, requiring no staff.

The truth, though, is that these stores were “staffed” largely by underpaid remote workers asked to monitor the stores via cameras, manually processing much of the stores’ supposed automation. (To remain as impartial as possible, it’s worth noting that Amazon has disputed some of these reports, though they appear to dispute only the extent to which they were true, and didn’t deny using human assistance in the pop-up shops.)

Why did Amazon do this? I’m speculating here, but my suspicion is that it was to inflate their stock prices temporarily with their amazing new “AI-powered” innovation. And they’ll do it again with something different, if they haven’t already.

Point is, it’s not just the pop-up salespeople doing the bad stuff. Major, globe-spanning corporations are guilty as well.

AI is the new digital frontier, so there’s a lot that people don’t know. And they don’t know what they don’t know, which makes it a dangerous environment for those looking to adopt cutting-edge tools but without the requisite background to properly assess them.

AI Writing and SEO

Again, I’m not opposed to AI writing in theory, but the reality is murkier, with some good and bad.

The short version is that there’s still a ceiling on the success you can achieve with AI writing alone, one that’s a lot lower than with traditional content marketing models. And if you’re unscrupulous about your execution, you can actually hurt yourself long-term.

Reputable content marketing hubs researched this at one point and found that in the long haul, human writers outpaced AI writers for traffic and conversions, even accounting for the time taken to write the pieces.

And that was before Google’s more stringent guidelines on “scaled content,” which is basically code for organizations spamming AI content to try to game the system and jump to the top of search rankings. It’s generally a bad strategy for a host of proven reasons I won’t get into here.

Google’s official advice on AI content is that it doesn’t penalize AI content in and of itself. Their entire model is based on rewarding content that is most useful for the users of a website. If AI-generated content retains this usefulness, it can perform just as well as human content.

This is 100% true. And it’s why the smartest writers are seeing success with AI tools (more on that shortly). But if you stopped there and didn’t look at the typical outcome of AI content farms, you’d be missing a lot of the picture. Without significant effort, AI content tends to be generic and fairly unengaging. Human writing doesn’t guarantee engagement either, but the best human writers still tend to outperform AIs.

This may change eventually as AIs learn to write in more human ways, but at the moment there are still significant disconnects between AI writing and the type of thinking that produces the best results.

AI + Human

A lot of smart industry people will tell you the merging of expert content creators and AI tools is the best framework, and in general I can endorse this mentality.

I talked about the worst of the industry above, but the best exists as well: people pushing the limits of the technology and merging it with their own understanding of digital writing. This approach still has occasional limitations, but also real opportunities to streamline workflows. I won’t call it “the norm,” since a lot of people just want to take shortcuts. But it’s a better way to understand how to get the most out of AI in ways that don’t simply suck.

My other take is that writing tools like ChatGPT are currently among the least useful to me as a writer, but that I find a lot of value in AI tools that don’t get as many headlines and don’t risk compromising what I consider to be my personalized voice and perspective.

AI Writing and Authenticity

For some uses, I don’t actually have an issue with AI standing in for another’s voice.

Certainly, if you’re in a medical or financial field, you want information that’s vetted by an expert in the field. Google even has stricter authenticity guidelines for these fields in its algorithm. They’re called YMYL, or “Your Money or Your Life,” fields, referring to information that represents financial or medical advice.

AI is not currently a known practitioner of medicine nor a financial advisor. It also has a checkered history with fact-checking in any field. So there can be cause for concern. As AI content becomes more ubiquitous, and more people accept it as the norm for content creation, the general public will need to be increasingly careful about where they get vital medical and financial information from.

However, if you’re reading a travel, lifestyle or food blog, or one of 1,000 other lower-stakes industries, and particularly if you’ve done some due diligence with prompting, editing and matching for the voice of your blog, sure, whatever, there’s no harm in it as far as I’m concerned.

For this particular website, though, I don’t employ it. Why? Because it’s largely opinion-based, and I value my perspective and consistency in that perspective with readers.

Imagine me trying to teach an algorithm what kinds of things I like in board games, then asking it to review a game I played. It might get a couple things correct, but it wouldn’t be my experience with the game. Or perhaps I feed it what I liked and disliked about a game and have it spit out a review? Closer, perhaps, but still lacking a personal touch.

I have sporadic monetized efforts within gaming, and that will continue. This website isn’t entirely a personal endeavor; it’s occasionally a commercial one. But if I become the million-and-first “content creator” simply spewing out content for its own sake with AI, I’ve lost some core spark of why I started my efforts in the first place. I won’t succumb to that consumerist drone ringing loudly from every corporate tower in the world. We all hear it, whether we realize it or not.

I consider this part of my contract with readers and purchasers of my gaming products. I may never meet these people face-to-face, but I owe them a requisite level of passion, so that I can confidently say that even if a product falls short of the mark for someone, it was created to manifest a particular vision, not merely to fill a bucket of “content” that might make me money.

AI Writing and Quality

It would be folly to talk about what AI “will never be able to do.” We are poor predictors of technology, and technology has already made fools of many more intelligent prognosticators than I.

So let’s talk instead about what it can’t do currently, for that is all I can speak to with any certainty.

I mentioned earlier that behind the writing is no sense of self, or of the reader, or of the meaning of the words it spits out. This is all true.

This, then, begets certain limitations. Limitations that occasionally could be gotten around through brute force prompting over thousands of iterations, but then you might as well be writing the thing yourself. Others, I contend, would yet remain elusive with the technology as it currently exists.

Take a writer like R.A. Lafferty. He defies genre description and is known as a “writer’s writer.” There are elements of folktales, tall tales, science fiction, magical realism, and realistic fiction mixed together in his work.

He’s one of those writers that reminds you that there are different ways to think about the world than what you hear every day in your life.

An AI could easily imitate aspects of Lafferty, but it would struggle to form something coherent from his style. There’s too much deliberate confusion, at least on the surface, but it’s confusion harboring the layered weight of interwoven metaphors made of concepts both real and irrational, hyper-realistic and abstract, with foundational elements linking them in both dialogue and theme.

So rather, an AI’s facsimile would likely be too coherent, and if prompted to be less so, would struggle to identify the lexical throughlines which act as touchpoints for the reader. Or struggle to know the exact points at which revealing a key plot point will have the greatest emotional impact.

Take Walt Whitman’s Leaves of Grass. Whitman was adamant that this was a complete, singular work, not disparate poems removed from one another. Taken as a whole, it takes on an entirely superhuman quality. Whitman himself is the protagonist of the tale, as is The Reader, whom he often speaks to, in any age or country. The reader is present in the work itself, not merely a passive observer. As is the American people, who collectively with the other two form the triumvirate protagonist that links the entire work.

So each poem is its own self-contained world, with construction rules and themes and common vocabulary and style, and it’s also part of a larger whole in which these players are omnipresent actors within the work’s framing. Whitman is speaking directly to us – not metaphorically but literally! – in his work, while also serving other sweepingly grand ends as well. No AI can, as yet, encompass this sort of thinking.

It can mimic such thinking, of course. But it is not a new thought, or even a thought at all. It is not purposeful. It is, at best, an artful curation of the preceding thinker’s vision. And it is often too bounded by what is the most likely word to occur next, ignoring (or rather, ignorant of the fact) that the shocking, mysterious or enigmatic choice is often the better one to evoke such ideas.

Thus, the grandest, most unified and densely woven visions are – at best – difficult to emulate using AI, and I hope it is uncontroversial to suggest that such tools are currently poor ones for trying to create such works that match their inspirations in power.

The same is true in math. I’ve seen really smart people take a look at AI in math, the sciences and coding, particularly for educational purposes. And it can do a lot of cool stuff in these areas! Most are willing to admit there’s real educational potential.

But it’s not actually doing the math. It’s simply getting better at predicting the correct sequences of numbers, given the previous inputs. A calculator will still be more accurate, since it’s actually programmed with the functions necessary to calculate totals. And for more complicated or theoretical areas, large language models (LLMs) are simply inadequate.
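A toy illustration of that contrast, with the obvious caveat that this is a caricature of an LLM rather than a faithful model: the “calculator” below is programmed with the actual function, while the “pattern” version only recalls sums it has already seen, which is why its accuracy collapses outside its training data.

```python
def calculator_add(a, b):
    return a + b  # programmed with the actual function: exact at any scale

# The toy "model" memorizes a small grid of addition examples.
training_data = {(a, b): a + b for a in range(10) for b in range(10)}

def pattern_add(a, b):
    # Pure recall of seen patterns, a caricature of token prediction.
    return training_data.get((a, b))  # None if the pair was never seen

print(calculator_add(1234, 5678))  # 6912
print(pattern_add(3, 4))           # 7: this pattern was in the training data
print(pattern_add(1234, 5678))     # None: never seen, so it can only guess
```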

So AI still has very little to tell me about deeper forms of writing and human thought, those not at the surface level of most internet copy. It’s no coincidence that the use-cases for AI writing are largely the stuff that is formulaic, and which doesn’t require such broad, conceptual thinking. Yes, you can train it on your writing style (or anyone’s), adjust the tone through careful prompting, avoid the cliches and tropes of AI writing through analysis and prompting, and so on. Yet it’s mostly used to churn out blog posts that drive traffic to your website or ads and make you money. Such things are also artforms of sorts, but they have always been the ones most likely to be washed away by a recursive algorithm’s pattern recognition capabilities.

AI can do such things competently, but there is still a hard ceiling on its imagination, for in fact it doesn’t have one.

This is also giving the users of AI lots of credit to suggest such lofty works can even be approached. Yes, if you push the technology to its absolute limits, through time, effort, and expertise in a variety of techniques, both AI and storytelling, a lot is possible. I don’t doubt someone could produce something approaching Whitman’s majesty using AI (albeit with some pre-established expertise and writing skill, and LOTS of manual effort as well). But if we’re honest with ourselves, most of the writing that AI is used to create is as forgettable as it is derivative and generic.

AI Support for Writers

Here’s where my stance on AI usage is a bit less harsh, and in fact becomes optimistic.

See, AIs are good at doing repetitive tasks at scales that humans simply can’t approach. That makes subjective creativity difficult to emulate, but harder sciences much easier to model.

For instance, the process through which I research topics to include in my blog is somewhat laborious. Some of it is just whatever strikes my fancy, but you also ideally want to understand what your readership is interested in, and what industry/hobby blindspots you might have.

The process through which I determine what subtopics to cover as part of an article is similarly protracted. And because I do want to reach a public audience, I’m trying to optimize the website for SEO while also simply writing interesting content, which requires topic research.

AI can – and often does – help me with every one of those steps except the very last one: actually writing it. That’s where the personal touch comes in. But that other stuff? It’s rote work that’s better done by software tools that can analyze thousands of websites and provide me with insights, from which I can make decisions that are best for me.

So AI informs my writing almost constantly at this point. The ideation, the outlining, the selection of topics to discuss, the optimization of it for keyword/SEO purposes, and so on. It’s just not doing the final writing itself.
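As a flavor of that rote work, here’s a heavily simplified, hypothetical stand-in. The commercial tools I actually use are far more sophisticated (and genuinely adaptive), but the underlying job is similar: scan a lot of text, count patterns, and surface gaps that a human then makes decisions about.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

def keyword_gaps(my_article, competitor_articles, top_n=10):
    """Return frequent competitor terms that my article never mentions."""
    def terms(text):
        return [w for w in text.lower().split()
                if w.isalpha() and w not in STOPWORDS]

    # Count in how many competitor pieces each term appears.
    competitor_counts = Counter()
    for article in competitor_articles:
        competitor_counts.update(set(terms(article)))

    mine = set(terms(my_article))
    return [(term, n) for term, n in competitor_counts.most_common()
            if term not in mine][:top_n]

# Hypothetical usage: compare a draft against a few top-ranking pages.
print(keyword_gaps(
    "my review of the dungeon crawler",
    ["dungeon crawler campaign setup rules",
     "campaign rules and painted miniatures guide"],
))
```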

This allows me to retain the qualities that I believe separate my writing from others, while also leveraging large data repositories to actually reach an audience instead of talking to myself on this website. Thus far, these efforts have been moderately successful, and increasingly I have AI to thank for it.

The Increasing Ubiquity of AI and Hybrid Models

“Increasing ubiquity” sounds paradoxical, but I use it purposefully. See, AI is already obnoxiously ubiquitous in some industries. You are most likely using it every day, in things like your phone’s autocorrect system and in the recommendations you get on numerous websites and apps.

So in one sense, it’s unavoidable if you live online in any capacity. And in the coming years, professionals in numerous industries aren’t going to have a choice of whether or not to use AI tools, since AI technology will be built into the baseline systems they use to complete their job.

Thus, the only sane question for those of us who aren’t billionaires or lawmakers becomes one of adoption and usage, not of whether or not we should be using these tools at all.

It’s cliche to say “AI is a tool” and then to talk about how it can be used poorly or well depending on the user. But sometimes the most cliche statements have some truth to them.

Hopefully I’ve already outlined above how these tools can be used poorly or well within the areas of my personal expertise.

Some people think I hate AI when I discuss it with them, and that’s legitimately not the case. I hate the overhype surrounding it, pushed by those with a monetary stake in you believing the hype. I hate both the deliberate, knowing misuse of AI tools and the more innocent but still misinformed misuse that ends up producing worse outcomes for the end users of a piece of AI-generated ephemera. I hate the exploitation of people whose lack of knowledge about an emerging field creates opportunities for grifts.

What I don’t hate, though, is finding new insights that help me do what I do, doing it better, but also continuing to do it responsibly. Despite the mess that is marketing, it’s possible to market oneself ethically by providing legitimate value. And it’s possible to use AI tools in the same way: as enhancement but without sacrificing the things that make a lot of human content more engaging, consistent and coherent.

Bumbling Through Dungeons AI Policy

Hopefully the diatribe above was of sporadic interest to you. But now we come to the least important part of this whole article: the AI content policy of a backwater personal blog about tabletop gaming. I won’t pretend this has any true importance, but I also consider it responsible to include here.

So what do I use AI for? Beyond the passive stuff I mentioned above (autocorrect, Amazon recommendations), I use it for topic ideation to discover new corners of the hobby I’m unaware of, occasional outlining, keyword analysis, and content analysis to find opportunities for SEO improvement.

And then I keep what’s useful to me, get rid of what isn’t, and execute on my vision for a piece of content. It’s more informed and purposeful because of AI tools, but only because I’m adopting them in a way that doesn’t sacrifice my voice, perspective and expertise. The writing, and the feeling behind it, is my own.

For graphic design, I’ll mock up prototype images and reference images in AI, but for anything I expect to monetize, I either create my own graphics or license graphics from human artists. I’ve used AI graphics for D&D character portraits and concept art as well, but that’s for my home game and personal use. For anything that is sold or reaches a public audience, my priorities shift.

And from what I’ve seen professionally and personally, this is the better way to use AI for content creation for a host of reasons. It’s more engaging, more authentic, and doesn’t run into the accuracy, ethical, tonal or quality hiccups that currently still plague the space.

Being able to create stuff seemingly from nothing feels like a miracle, and in many ways, it is! But we see enough “content for content’s sake” in the world. I prefer mine to be a touch more deliberate, while still benefiting from how technology can enhance those efforts.

Like my content and want more? Check out my other reviews and game musings!
