Echoes of Somewhere and the AI Controversy


Written by Jeff Tripoli

July 21, 2023

The controversy surrounding the use of machine learning in arts and entertainment has reached a boiling point in the cultural zeitgeist within just the last year. The exponential advancement and widespread popularity of artificial intelligence (AI) tools like ChatGPT and Midjourney have pushed the boundaries of AI-generated art into a realm that until recently seemed like distant-future science fiction.

While many studios across various media have embraced AI as the next technological leap, countless artists are left understandably concerned about the future of their livelihoods. The Writers Guild of America is currently on strike in part over AI, while actors, visual artists, and musicians are actively lobbying for legislation to regulate its use. More alarmingly, even Geoffrey Hinton, often referred to as the “Godfather of AI,” has resigned from his life’s work, issuing warnings about a looming Skynet-like AI-induced apocalypse.

The gaming industry serves as a perfect microcosm for the ongoing debate between ethics and the capabilities of AI-generated art assets. Beyond the hand-wringing and ethical debates, many developers are still willing to explore the possibilities of machine learning as a tool for game development. One such developer is Jussi Kemppainen of Dinosaurs Are Better, who serves as the lead designer and artist for the upcoming game, Echoes of Somewhere.

Echoes is a cyberpunk point-and-click adventure game, inspired by shows like Netflix’s Black Mirror and Love, Death & Robots, which similarly delve into the potential consequences of humanity’s growing reliance on technology.

“In a way, Echoes is a parody of a world where all the AI doomsday visions have become reality,” explains Kemppainen. “It portrays an exaggerated exploration of what such a world would be like, highlighting the unrealistic and absurd aspects of such a vision. While I aim to avoid an overly apocalyptic tone, you could describe Echoes as an AI-driven postapocalyptic story.”

Perhaps fittingly, nearly every component of the game’s audio-visual presentation has been created or modified using AI tools. This includes the breathtaking backgrounds, 3D models, and even voice performances. The use of these tools, however, raises ethical concerns among many artists and game development professionals, as programs like Midjourney are trained on human artists’ work without their consent or compensation.

Developers React To Use of AI in Echoes of Somewhere

“The way AI ‘art’ works is by scraping the internet for unprotected images, extracting parts that match the given prompts, and reassembling them into something new,” explains Lorelei Shannon, former Sierra On-Line developer who designed Phantasmagoria: A Puzzle of Flesh and co-designed King’s Quest VII alongside Roberta Williams. “Although the individual images it scrapes are no longer recognizable, the resulting AI ‘art’ is still a composite of other artists’ work. It can’t be created without taking from other artists.”

As a primarily one-person operation, Kemppainen is acutely aware of the controversy surrounding his methods. Consequently, he has decided to release Echoes of Somewhere as a free game to the public.

“I don’t consider the game’s visuals to be solely my creation,” he emphasizes. “They are a collaborative effort between millions of artists and photographers, working in conjunction with the Midjourney AI. It’s a true collaboration like no other.”


Unreleased screenshot from Echoes of Somewhere

Some game artists don’t share Kemppainen’s perspective on the extent of collaboration involved in the process. Katie Hallahan, a writer and designer at Phoenix Online Studios, believes that Echoes’ freeware status doesn’t excuse it from ethical scrutiny.

“[The tools themselves] are unethical in their current state, because they’re learning from works whose copyright owners have not granted permission, nor have they been compensated,” Hallahan says.

Although Phoenix Online has released a number of commercial adventure games, their first notable project was the freeware King’s Quest fan game The Silver Lining.

“I’m glad [Kemppainen] won’t be charging for the game, as charging for it would also be unethical,” Hallahan continues. “When we made The Silver Lining, we always knew it was going to be released for free and not something we could make money from. We’ve always been careful to keep its development separate from any commercial work. And this is a game where every piece of art, every line of writing, all the programming, and so forth, came from our team – even screens and characters that were recreations from official King’s Quest games.”

The appeal of AI tools for small or even one-person development teams like Kemppainen’s Dinosaurs Are Better is understandable. Many aspiring designers lack either the skill or funds to create what tools like Midjourney can accomplish in seconds. Still, Shannon agrees with Hallahan’s assessment.

“I sympathize with aspiring game designers who want to create, but don’t have the ability to create the art for their games themselves, or the money to hire an artist,” she says. “However, AI-generated images, at this point in time, are always made by scraping other people’s artwork.”

Kemppainen, an established artist and designer himself, relates to these reservations.

“I am torn on this as well,” he echoes. “I have an internal struggle with AI tools all of the time, being a 3D artist myself. I hope that the way I have chosen to explore the new medium is something that can be seen as an exploration, instead of an exploitation.”


In fact, Kemppainen sees the story of Echoes’ development as being equal to, if not more significant than, the game itself. On the game’s website, he maintains a development blog where he chronicles the trials and tribulations of working with AI technology. He hopes this documentation will serve as either pioneering insight or, in the event of unforeseen consequences, a cautionary tale.

“For me, this project is as much about experimenting with the psychological side of using AI as it is the practical side,” he explains. “Naturally you cannot discuss this game without discussing AI. To some extent, for me, the real product I am building is not [just] a game, but a development blog [about developing a game with AI].”

With AI legislation still in its infancy, especially in relation to the rapid pace of its technological growth, Kemppainen remains keenly aware of the legal obstacles Echoes’ development might yet have to face.

“I have not set any expectations for the outcome of the project,” he says. “This has all become a big experiment to me, and if the game is ‘canceled’ online for its use of AI tools, this is a great result! Of course I would love for people to see it for what it is and hopefully enjoy it as a great point-and-click game, but if the use of AI tools eclipses the game, then I have to accept it, and I will document it in my blog…. I have never done anything like this before and I am not advocating unrestricted use of AI by any means.”

While many artists’ reactions to AI tools range from cautious to outraged, not all artists or developers dismiss them entirely. Robert Holmes, producer and composer of the Gabriel Knight series, sees both sides of the argument.

“I think [the ubiquity of AI] is bound to happen in all areas – and frankly for areas like art and the more expensive parts of game development, it makes good production sense,” he admits. “If gamers want truly human-created art, they will have to pay much much more for the games they play, which – even with all of today’s tech – are way too costly to make.”

Holmes believes that this discussion is just the latest iteration of a long-standing debate about the impact of technology on the arts.

“The democratization of anything is always threatening to those who have held the control in the past,” he explains. “I was around when CGI first started and there were similar concerns, but really they are all just more tools to do cool stuff. Will it have the same soul? Maybe not, but one could argue the spirit in creative work has diminished bit by bit as the technology in all fields advances.”

While Holmes is hesitant to outright endorse unregulated use of AI art, he expressed support for Kemppainen’s efforts.

“I think [Echoes is] an interesting experiment,” he says. “If we were to do a new Gabriel Knight game, we would be nuts not to look at ways AI could help in game production.”


AI-generated concept art for Gabriel Knight. (Images provided by Robert Holmes)

Still, other artists fear for the long-term implications and consequences the use of these tools could have for the arts, extending beyond the immediacy of artist compensation. Fantasy artist Bruce Brenneise, known for his contributions to Magic: The Gathering, Dungeons & Dragons and Numenera, has been an outspoken advocate for artists’ rights against the impending tide of AI.

“Historically, any time barriers to entry lower for a profession, it leads to devaluation of the profession,” Brenneise says. “Rather than worry about whether AI will replace all art (it won’t), we should be worried about the overall ecosystems of the arts.… AI will be able to work at inhuman speed and scale, and arguably already does. Those who argue that artists just need to adapt fail to consider that the AI can potentially adapt even faster, making adaptation at inhuman scale or speed an impossibility without burnout.”

While Holmes is more optimistic about the utility of AI, he doesn’t believe the reins of artistic creation can just be handed over to computers.

“An important thing to remember is that you will always need the talented eye of a designer or director to make quality decisions; to decide what is good or bad for the game,” he clarifies. “This has always been the case, and it’s why people with true artistic vision will always be critical to making great work.”

Kemppainen agrees. Interestingly, there’s one creative aspect he’s not prepared to relinquish to the machines: the writing and puzzle design.

“I am not ready to compromise on the storytelling and give that over to the AI,” he explains. “I cannot for example simply write the story and expect AI tools to provide me with perfect locations, props and characters.”

While Kemppainen takes charge of crafting the plot and puzzles, AI-generated art often influences revisions.

“I need to explore what AI can produce and, after the fact, rewrite my story to match the AI-generated content,” he says. “This is similar to working with a very stubborn artist who is very bad at taking direction.… I believe that this will change over the years, but it is the current state of the tools.… Using AI for a thing like this makes puzzle design extremely limited in some scenarios.”


Although the use of these tools has significantly expedited the development process, it has also presented him with a unique set of obstacles, as Kemppainen details in his blog.

“I work around the limitations of the current AI technology quite a bit,” he says. “I give a lot of creative freedom to the AI and accept the decisions it makes and work around them.”

For example, he often finds himself limited visually by Midjourney’s own particular style.

“The only stylistic direction I have given to the art creation process is ‘cyberpunk adventure game HDR masterpiece,’” he explains. “Then I simply take that output and forge my world and story around it. If I was trying to art-direct the AI tools too much, it would slow the process down considerably, or make the project impossible to finish.”

As for the trajectory of AI tools in game development, and the job security of human artists, the future is still being shaped. At the time of writing, 134 regulatory bills have been proposed in the United States alone, with three major proposals pending in the EU this year. While many artists and AI technology experts have called for a moratorium on development, some believe these efforts will only delay the inevitable. In the arena of game arts, the writing may already be on the wall. Big studios stand to reap major financial benefits from reduced creative labor costs. Ryan Duffin, an animation artist for Microsoft, painted a bleak picture in his speech at this year’s GDC.

“The removal of the need for human expertise is thematically appropriate when we’re talking about AI, because it makes no secret about that trajectory,” Duffin posited. “The discourse among professionals and professional creatives seems to be either angry shouting that this shouldn’t exist, or casual acceptance that it’s just another tool on the inevitable march of progress. Whether you think it should exist or not, it does. And yes it’s a tool, but its advocates have made no secret that ‘democratizing’ is a nice-sounding word for cutting out experts like us.” (source: Eurogamer)

Holmes, whose own Gabriel Knight IP with wife Jane Jensen is currently slated to be acquired by Microsoft as part of its pending Activision merger, is unsurprised by this corporate opportunism.

“It’s futile to argue about it being ethical or not,” Holmes says. “The sad reality is that with extremely rare exceptions, human beings and the human race are not ethical creatures. If you are waiting for humans to make ethical decisions on anything that has the possibility of being profitable, you’ll be waiting a long time.”


Amid the controversy, an important personal question lingers for Kemppainen: will the game be any good? Medium aside, he understands that the design of the game is paramount to its success, even as freeware.

“I would absolutely want to make the game regardless of the existence of AI tools,” he affirms. “I hope that Echoes of Somewhere will be a step forward on my career path to making adventure games professionally. Then I would not have to resort to the use of AI for the graphics, but I could hire people or do all the art by hand myself.”

In the meantime, Kemppainen hopes that people will learn from his experiences, whether the game achieves critical success or even sees the light of day.

“I hope that people understand why I chose to use AI for the project and find value in me documenting the process in great detail as I map these uncharted waters,” Kemppainen concludes.




  1. Sean Parker

    Great read! Obviously opinions are divided, but I think the game’s critical commentary on AI sorta gives it a free pass to make extensive use of AI. The fact that it’s going to be freeware should put an end to the controversy… there’s better things to focus on and fight against.

    Fact is, people are going to use whatever tools are at their disposal to make their passion projects. For noncommercial ventures, the usage of AI art tools feels no worse than reusing existing material to make the GIFs and memes we’re all so familiar with online. People may balk at things like this game being “art,” but as long as it has a human touch curating and assembling it, it’s art.

    The issue gets murkier when it comes to actual game studios firing their artists and replacing them with AI tools to elevate profits. That’s where it starts to get problematic.

  2. Aiba

    No shade to Lorelei Shannon, but her description of AI is completely off-base. While it’s understandable to include her perspective, to not challenge her regurgitation of a common misconception is disappointing.

    I understand this article is meant to discuss the debate surrounding AI art broadly and from a variety of perspectives. Including her perspective is fine. I am just disappointed that the only attempt at an explanation of the underlying tech is inaccurate.

    • Jack Allin

      Feel free to enlighten those of us who aren’t as familiar with the tech. How is it inaccurate?

      • Gabriel

        GPT-4 chimes in:

        Lorelei Shannon’s description of AI is inaccurate in the following ways:

        – She implies that AI art works by scraping the internet for unprotected images and reassembling them into something new. This is not how stable diffusion and midjourney work. These tools use generative adversarial networks (GANs), which are composed of two neural networks that compete with each other to create realistic images from random noise or text prompts. They do not rely on existing images as input, but rather learn from a large dataset of images to generate novel ones.

        – She also implies that AI art is always a composite of other artists’ work and cannot be created without taking from other artists. This is not true either. While GANs are trained on a dataset of images, they do not copy or extract parts from those images. They learn the statistical patterns and features of those images and use them to synthesize new images that are not identical or derivative of any existing image. Therefore, AI art can be original and creative, not just a collage of stolen pieces.

        – She fails to acknowledge the human involvement and intervention in the process of AI art creation. While GANs can generate images autonomously, they still require human input and guidance to select the dataset, the prompts, the parameters, and the outputs. Moreover, human artists can use AI tools as a source of inspiration, experimentation, or enhancement, rather than a replacement or a threat. AI art is not purely machine-generated, but rather a collaboration between humans and machines.

      • Shawn

        A correction on the correction: many of the newer systems like Stable Diffusion and MidJourney don’t actually use GANs, but use diffusion instead. I believe the way most of them work is this:
        First, a “classifier” AI is trained against various images and text descriptions. A classifier basically looks at an image and tells you what things are in it, with confidence levels. So if shown a picture of a red car, it might return something like “car 90%, red 95%, man 2%, etc”. An example of a classifier is CLIP from OpenAI.

        Now, you train the image creation AI. An image is given to the classifier and then noise is added to it. The AI is told to make it a bit less noisy, along with the target classifications. So in our example above, try to make the noise more “car-like” and “red-like”. Then the resulting image is run through the classifier and compared to the original classifications and used as feedback on how good of a job it did.

        When creating an image from scratch, you usually start with pure noise, and have it denoise in steps. This effectively ends up being like finding an animal in the clouds. The first step picks out a very basic shape of a car in the noise and then progressively adds more and more fine details to it. This also allows you to modify an existing image with AI, based on how much noise you add to it beforehand.
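        The stepwise "denoise toward the target" idea above can be illustrated with a toy numeric sketch. This is not a real diffusion model: the `toy_denoise_step` function below is a hypothetical stand-in that simply nudges the current estimate toward a known target, playing the role of a trained network that predicts a slightly less noisy version of the image.

```python
import numpy as np

# Toy illustration of denoising in steps. In a real diffusion model, a
# trained neural network predicts the noise to remove at each step; here
# a stand-in function just moves a fraction of the way toward a target.

rng = np.random.default_rng(0)

def toy_denoise_step(x, target, strength=0.2):
    """One denoising step: remove a little of the 'noise' (the gap
    between the current estimate and the target image)."""
    return x + strength * (target - x)

target = np.linspace(0.0, 1.0, 16)  # stand-in for "a picture of a red car"
x = rng.normal(size=16)             # start from pure noise

for _ in range(40):                 # denoise gradually, in many small steps
    x = toy_denoise_step(x, target)

# After many steps, the estimate has converged close to the target,
# like the basic shape found in the clouds getting progressively refined.
print(np.abs(x - target).max())
```

        Starting from less-than-pure noise (adding only a little noise to an existing image) is the same loop with fewer steps, which is why noise level controls how much an existing image gets modified.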

  3. Jack Allin

    Thanks, Gabriel. GPT-4’s first two points strike me as a distinction without a difference. Not a practical difference anyway. Nowhere does Lorelei suggest wholesale plagiarism, but the AI is indeed using other artists’ collective work as its resource, no matter how large the “dataset” and how minute the recreations in a different form. I imagine it like taking individual tiny pieces of many jigsaws and reassembling them into a whole different puzzle picture. At a tiny enough scale, the original influence could be unrecognizable to the human eye, but it would still exist. And to me that’s all Lorelei is referring to as “scraping.”

    (Note that I’m not arguing for or against here, just clarifying the point of whether or not Lorelei’s description is indeed inaccurate.)

    • Gabriel

      First, my apologies, Jack. I thought your comment was a troll and I just reflexively posted a GPT response without really engaging with either that response or the issue at large. Shame on me. One correction should also be noted: some image generators do indeed use GANs, but Stable Diffusion and Midjourney employ diffusion models. Different techniques under the hood, but ones that likely do not affect the outcome of the argument from a legal perspective.

      The jigsaw analogy is not very accurate, because the latent codes stored by the model are not fixed pieces (e.g. pixels or sets of pixels) that can be rearranged, but rather probabilistic distributions that can be sampled from. The model learns to generate images by sampling from these distributions and adjusting them according to the text input.

      “Scraping” is almost certainly referring to the gathering of images into the form of a dataset, from which an AI image generation model will be trained. When you use an AI model like Midjourney, you don’t perform any “scraping” either of the net or the internal image model. Hence the objection to: “However, AI-generated images, at this point in time, are always made by scraping other people’s artwork”.

      You are correct in saying that “AI is indeed using other artists’ collective work as its resource” during the training phase of model creation. But don’t human artists also use other artists’ collective work as a guide and resource in their own training?

      The legality of all this will be determined by the courts. A key factor in the decision will be whether the works created by individuals are seen as “transformative” of the content they were trained on. Generally speaking, we notice that few court rulings have hindered the development of a new and emerging technology category. And this is a massive industry under formation. Indeed, many US laws show an interest in fostering growth, such as Section 230 did for the early internet. Therefore, I believe that the probable outcome will be that AI image generation technology is here to stay and that it will be able to compete commercially with artists in the market.

      • Shawn

        It is definitely all a weird gray area. Certainly images are being used in a machine process without people’s permission. The “learning” process is perhaps more like a human learns than most people would think, but it still isn’t a human, has no free will, was developed for a particular purpose, etc. Ideally, everyone needs to decide as a group what is ok and what isn’t, but it’ll probably end up being decided by some combination of commercial and governmental interests.

        I do think it is probably too late to really “stop” AI imagery from a practical standpoint. There are improvements all the time on how much data is needed to train an AI, how to decide on quality learning material, overall output quality, etc. That doesn’t mean there shouldn’t be an effort to stop “train on the whole internet”, but my thought is that big companies will be able to afford licensing enough material to still create good images. And if those are configurable enough, you will probably be able to emulate certain styles without officially asking for X artist. Then other people can write up unofficial “If you use exactly these settings, it looks like X artist” guides, etc.

        And if a “clean” open source model is released, anyone would be able to further train that on particular images on their own without supervision (and would be hard to prove what was in their dataset). Even without that, the custom models/mixes people have already made on top of Stable Diffusion 1.5 are very good (look at ones like epiCRealism on CivitAI), and any of those is literally just a 2GB file that you can run with no internet connection on a modern phone (my iPhone 14 Pro can make a 512×768 image in a minute or so, running entirely on the phone in airplane mode). Trying to regulate that will be very difficult. And by the time any real regulations come out, the official release of SDXL will be out in the wild.

        As an aside, when talking about how they make images, it might be useful to think in terms of learning “techniques to create different visual concepts”. People have talked about the AIs being a form of compression (when talking both for and against it), which is kind of true. As I said above, a usable version of SD1.5 is a 2GB file, when it was trained on many terabytes of images (most of which were already compressed as jpegs). If you look at my other comment, it’ll make a bit more sense why this is, but the compression is more on the level of visual concepts, instead of a particular image.

        For instance, if you think about a spiral as a concept, the best way to compress it is to have an algorithm which involves a center point, how tightly it is wound, how many times it winds, etc. Then can add color, line thickness, texture, etc. An object like “banana” involves sub-concepts of various common shapes (at diff angles), colors, textures. A style like pencil involves how certain details are expressed, in terms of color, shading, etc. All those concepts are refined over time by the training, and are built on and influence each other (like certain styles also imply certain subjects are more common).

        As a test in MidJourney, I was able to make a picture of a vacuum cleaner with the combined styles of HR Giger and Lisa Frank. These are very different styles and am pretty sure neither of them actually painted a vacuum cleaner. That is possible because it is putting together all these concepts at once on different levels at the same time (object and styles which themselves are built on many other concepts like common colors, brush stroke styles, etc) and trying to find it in the noise.

        You can imagine an avatar builder that often has a bunch of sliders for things like eye shape, head size, but on crazy steroids. Not only does each part have a lot of settings, but there’s additional sliders for famous things like “Tom Cruise” which encompass a lot of other sliders, but those other sliders still exist and can fight with it. So if you up the red hair slider too, he’ll probably get some freckles too unless you manually lower the freckles slider. And if you put the Tom Cruise and Brad Pitt sliders up at once, it’ll be an average of those two plus what other stuff you specified. And if you up the pencil slider, that’ll start making it more monochromatic and change the textures.

        In the end, the resulting AI definitely was influenced by all the images it saw, but more in terms of its understanding of how to draw a spiral or what the sliders for Tom Cruise are, vs actually saving and re-arranging pixels per se. Unless it has seen one image so often that it basically becomes its own slider that resembles the original if it is maxed out. Starry Night is like that. But concepts are more expensive so usually only things that are “important” are memorized, in terms of seeing it quite a lot in the training.
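        The "sliders" analogy above can be sketched numerically: if each concept or style is a direction (a vector), then turning up two sliders at once just blends those directions. The names below (`style_a`, `style_b`, `concept_vacuum`) are purely illustrative stand-ins, not real model internals.

```python
import numpy as np

# Toy sketch of the slider analogy: concepts as vectors that blend.
rng = np.random.default_rng(1)
dim = 8

concept_vacuum = rng.normal(size=dim)  # hypothetical "vacuum cleaner" direction
style_a = rng.normal(size=dim)         # hypothetical style direction A
style_b = rng.normal(size=dim)         # hypothetical style direction B

# Turning both style sliders halfway up:
blended = concept_vacuum + 0.5 * style_a + 0.5 * style_b

# The blend is exactly the average of the two single-style versions,
# like the averaged-faces example in the comment above.
only_a = concept_vacuum + style_a
only_b = concept_vacuum + style_b
print(np.allclose(blended, (only_a + only_b) / 2))  # → True
```

        The point of the analogy is that the sliders are not independent: directions overlap and "fight" each other, which is why nudging one attribute (red hair) can drag correlated ones (freckles) along with it.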
