
How to Make Game Graphics with AI

October 17, 2022
Stable Diffusion, AI

Why did I choose to remaster "Blood of Demons"?


First of all, I didn't like the old graphics. I used them only because I didn't have time to make better ones and my skills are limited (I'm not a digital artist). With more time I could have done better, but how good the result would have been is debatable. As someone who closely follows technological developments, I was intrigued by AI that can paint from written directives, and I was curious about the practical implications of producing game graphics with it. I decided that this game was a good test subject for that.



Why Stable Diffusion?


Until I discovered "Stable Diffusion", I thought AI wouldn't quite meet my needs, because other models work only "text to image", while Stable Diffusion can also work "image to image". In other words, it can produce images not only from written instructions but also from a reference image. This is a very important criterion for maintaining continuity between images when producing frame-based animation.
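
To make the distinction concrete, here is a minimal sketch of the "image to image" mode using the open-source diffusers library. The checkpoint name, prompt, and parameter values are illustrative assumptions, not the exact setup I used:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an image-to-image pipeline (checkpoint is an assumed example)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The reference frame anchors the composition of the output
reference = Image.open("old_frame.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="pixel art demon warrior, game sprite, dark fantasy",  # hypothetical prompt
    image=reference,
    strength=0.5,        # 0 = copy the reference, 1 = ignore it
    guidance_scale=7.5,  # how strongly the prompt steers the image
).images[0]
result.save("new_frame.png")
```

The key parameter is strength: low values keep the output close to the reference frame, while high values let the prompt take over.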


Remastering Characters


I started by copying the graphic files of the old version. My goal was to replace each "frame" in these files with a new one, so that I could stay faithful to the existing spritesheets.

First of all, I gave the reference "frame" to the AI, either directly or after making some simple changes to it (for example, I reinterpreted the main character). Then I made various changes to the result (deleting the parts I didn't like and keeping the good parts) and sent it back to the AI. This process continued until I got the result I wanted, as sketched below.
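
Conceptually, the loop looks something like this sketch. It reuses the img2img pipeline (pipe) from the earlier snippet; the "edited" file stands in for the manual cleanup step, and all the file names are hypothetical:

```python
from PIL import Image

current = Image.open("reference_frame.png").convert("RGB").resize((512, 512))

for step in range(3):  # repeat until the frame looks right
    # Regenerate from the current state of the frame
    candidate = pipe(
        prompt="pixel art demon warrior, game sprite",
        image=current,
        strength=0.4,  # low strength so the kept regions stay stable
    ).images[0]
    candidate.save(f"candidate_{step}.png")

    # ...manual step: erase the bad regions in an image editor and save
    # the cleaned-up file; the name below is a placeholder for that file...
    current = Image.open(f"candidate_{step}_edited.png").convert("RGB")
```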

I usually created the other "frames" by hand, using the AI-assisted reference I already had; I rarely regenerated an image with the AI. One reason for this is the difficulty of maintaining consistency with the previous "frame". The second is the time it takes to clean up the background.


Creating Spritesheets


Here again I used copies of the previous source files. I replaced the images in the source file with the new ones exactly, created a new file with the same dimensions as the old spritesheet, and swapped it in for the old file.
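
As a rough sketch of that assembly step (assuming a simple fixed-size grid; the frame size, frame count, and file names below are made up for illustration):

```python
from PIL import Image

old_sheet = Image.open("old_spritesheet.png")
new_sheet = Image.new("RGBA", old_sheet.size)  # same dimensions as the old file

FRAME_W, FRAME_H = 64, 64          # assumed cell size of the grid
cols = old_sheet.width // FRAME_W  # frames per row

# Paste each remastered frame into the slot its predecessor occupied
for i in range(8):  # assumed frame count
    frame = Image.open(f"new_frame_{i}.png").convert("RGBA")
    x = (i % cols) * FRAME_W
    y = (i // cols) * FRAME_H
    new_sheet.paste(frame, (x, y))

new_sheet.save("new_spritesheet.png")
```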


Creating App Icon and Store Images


At this stage, I applied procedures similar to the ones above. However, here I first created the images with "text to image" instead of "image to image", then manipulated them in "Photoshop", and finally refined them using the "image to image" feature.
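
A hedged sketch of that flow, again with the diffusers library and the img2img pipeline (pipe) from the earlier snippet; the prompts and file names are invented for illustration:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

# Text-to-image pass produces the first draft from a prompt alone
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")
draft = txt2img("dark fantasy game app icon, demon face, dramatic lighting").images[0]
draft.save("icon_draft.png")

# ...manual step: retouch icon_draft.png in Photoshop, save as icon_edited.png...

# Image-to-image pass (the `pipe` from the earlier sketch) refines the edit
final = pipe(
    prompt="dark fantasy game app icon, demon face, polished digital art",
    image=Image.open("icon_edited.png").convert("RGB").resize((512, 512)),
    strength=0.35,  # gentle refinement so the hand edits survive
).images[0]
final.save("icon_final.png")
```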

You can watch the steps I described above in more detail in the video below.



Conclusion


As a result, I have replaced the current game's graphics with relatively better ones. More importantly, I've discovered some of the potential of "Stable Diffusion" for creating game graphics. I think it is a tool that will be very useful for people like me who are trying to develop games on their own.

But I must also say this: it doesn't seem very efficient for frame-based work. It is not easy to maintain consistency between "frames" that differ significantly. Say we're trying to animate a character that rotates 360 degrees around its own axis, and the frame after the first one shows the character rotated by 45 degrees. The character's appearance changes quite a bit between those two frames, and I found no way to make the model consistently draw what the same character looks like at 45 degrees versus 90 degrees. I say this for myself, and at least for now; other methods may be found in the future. There are still many things I want to try with "Stable Diffusion", so it is too early to make a definitive judgment. However, I should add that it will be much more successful with "bone based" animations.

You can watch the gameplay of the new version of the game in the video below.