Introduction

Can AI remake old game graphics in higher resolution?
Yes, AI image generation tools can create re-imagined, higher-resolution versions of old video game graphics, but the process takes patience and prompt engineering. With commercial tools such as Stable Diffusion, DALL-E, and Midjourney, classic visual stories can be retold with more detail and fidelity. For example, the intro cinematic of the 1987 MSX game Nemesis 2 was recreated panel by panel, showing how AI can turn pixelated, low-resolution originals into modern digital art styles while preserving the narrative.
How to generate better images with AI prompts
The key to generating good AI images is crafting detailed prompts that include style keywords, not just subject descriptions. For instance, a prompt like “fighter jets flying over a red planet in space with stars in the black sky” became far more effective once style cues such as “realistic scifi spaceship,” “vintage retro scifi,” and “dramatic lighting” were added, yielding higher-quality images in both Stable Diffusion and DALL-E. Searching prompt galleries like Lexica, which hosts millions of prompt-image pairs, can help you find the arcane keywords and styles proven to work.
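The evolution from a bare subject description to a style-rich prompt can be sketched as a tiny helper; the function name and keyword lists below are illustrative, not part of any tool’s API:

```python
def build_prompt(subject, style_keywords):
    """Append comma-separated style cues to a plain subject description."""
    return ", ".join([subject] + list(style_keywords))

# The bare subject from the Nemesis 2 intro experiment:
subject = "fighter jets flying over a red planet in space with stars in the black sky"

# Style cues that improved results in Stable Diffusion and DALL-E:
styles = ["realistic scifi spaceship", "vintage retro scifi", "dramatic lighting"]

print(build_prompt(subject, styles))
```

Keeping the subject and the style vocabulary separate like this makes it easy to swap in keywords found on Lexica without rewriting the whole prompt.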

Using Midjourney for improved image quality and speed
Midjourney consistently produces especially beautiful images with less need for prompt tweaking compared to other models. When reimagining Nemesis 2’s intro, Midjourney delivered stunning results quickly, making it a preferred choice for multiple panels. It captures dramatic lighting and sci-fi aesthetics well but sometimes struggles to match exact poses or details from the original images. Midjourney also saves generation history automatically, which is useful for managing several image iterations.

Steps to recreate specific panels with AI tools
Each cinematic panel required custom prompts and some trial and error. For example, Panel 2’s villain Dr. Venom needed a prompt describing “a scary green skinned bald man with red eyes wearing a red coat with shoulder spikes, looking from behind prison bars, black background, dramatic green lighting.” Attempts to show him restrained with chains failed, so changing the pose to behind bars improved results. Aspect-ratio parameters such as “--ar 3:2” helped generate wide-format images matching the original cinematic layout.
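Midjourney parameters like the aspect-ratio flag are appended after the descriptive text; a hedged sketch of assembling the Panel 2 prompt (the helper function is my own, only the `--ar` flag itself is Midjourney syntax):

```python
def midjourney_prompt(description, aspect_ratio=None):
    """Build a Midjourney /imagine prompt, appending --ar when given."""
    prompt = description
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"
    return prompt

panel2 = midjourney_prompt(
    "a scary green skinned bald man with red eyes wearing a red coat "
    "with shoulder spikes, looking from behind prison bars, "
    "black background, dramatic green lighting",
    aspect_ratio="3:2",
)
print(panel2)
```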

Handling text and complex elements in AI images
Current AI image generation tools struggle with reproducing readable text in images. For a star map panel, the AI created a detailed map but failed to generate correct text or precise line placements. This limitation requires manual editing afterward, such as importing the AI-generated image into Photoshop to add text and lines. Although Google’s Imagen has demonstrated text generation in images, it is not widely accessible yet.
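The manual post-editing step, adding readable labels to an AI-generated star map, can also be scripted instead of done in Photoshop; a minimal sketch with Pillow, where the placeholder image and coordinates stand in for the real AI output:

```python
from PIL import Image, ImageDraw

def add_label(image, text, position, color="white"):
    """Draw readable text onto an AI-generated image at the given (x, y)."""
    draw = ImageDraw.Draw(image)
    draw.text(position, text, fill=color)  # uses Pillow's default bitmap font
    return image

# Placeholder for the AI-generated star map panel; swap in the real file.
star_map = Image.new("RGB", (640, 400), "black")
add_label(star_map, "Planet Gradius", (40, 60))
star_map.save("star_map_labeled.png")
```

For production work you would load a proper TrueType font via `ImageFont.truetype`, but the principle is the same: the AI supplies the artwork, a deterministic tool supplies the text.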

Inpainting and partial image editing challenges
Inpainting, where AI regenerates a portion of an image, was attempted to recreate Dr. Venom’s iconic three eyes but did not yield satisfying results within the time available. Despite AI’s capability to generate complex eye images (as seen in horror-themed galleries), controlling specific features precisely remains difficult. This indicates that while AI can create stunning new visuals, replicating very specific iconic elements still needs refinement.
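Inpainting pipelines typically take the original image plus a grayscale mask whose white region marks the area to regenerate; a hedged sketch of building such a mask with Pillow (the coordinates for the third-eye region are illustrative):

```python
from PIL import Image, ImageDraw

def make_inpaint_mask(size, box):
    """Create a grayscale mask: white inside `box` is regenerated,
    black everywhere else is preserved."""
    mask = Image.new("L", size, 0)                 # black = keep original pixels
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # white = let the AI repaint
    return mask

# Mark the forehead region where Dr. Venom's third eye should appear.
mask = make_inpaint_mask((512, 512), (200, 80, 312, 160))
```

Tightening or loosening this box is often the main lever you have: a mask that is too small gives the model no room to work, while one that is too large invites it to redraw features you wanted kept.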

Using DALL-E outpainting to expand images
DALL-E’s outpainting feature allows expanding an existing image’s canvas by generating new content around it. This was used to enlarge a panel showing the ship’s captain, maintaining visual continuity while adding background details. Unlike text-to-image generation, outpainting requires adjusting the prompt for each portion of the extended canvas to guide the AI’s creations properly. This technique is useful for creating larger scenes from smaller original frames.
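Outside DALL-E’s own editor, the canvas-expansion idea can be prototyped by pasting the original frame onto a larger canvas and leaving blank margins for the model to fill; a minimal Pillow sketch, with sizes chosen purely for illustration:

```python
from PIL import Image

def expand_canvas(image, new_size, fill="black"):
    """Center the original frame on a larger canvas; the border is the
    region an outpainting model would be asked to fill in."""
    canvas = Image.new("RGB", new_size, fill)
    x = (new_size[0] - image.width) // 2
    y = (new_size[1] - image.height) // 2
    canvas.paste(image, (x, y))
    return canvas

# Enlarge the captain panel to a wide frame; placeholder stands in
# for the real AI-generated image.
panel = Image.new("RGB", (512, 512), "gray")
wide = expand_canvas(panel, (1024, 576))
```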

Comparing Dream Studio Stable Diffusion and Midjourney pros and cons
Dream Studio by Stability AI offers a managed Stable Diffusion experience with advanced features like inpainting and API access for developers. It is the author’s most-used tool, praised for its user-friendly interface and flexibility. However, it currently lacks a robust image-history feature, and newer model versions require community learning to optimize prompts. Midjourney excels in generation quality with minimal prompt effort and automatically archives generations. Its community feed also inspires creativity. However, it can struggle to reproduce exact details or poses consistently across multiple images.
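For developers, the Stability AI side can be driven programmatically. A hedged sketch of a text-to-image request is below; the endpoint, engine ID, and field names follow Stability’s v1 REST API as I understand it and should be checked against the current documentation, and no call is made unless an API key is configured:

```python
import os
import requests

# Endpoint and engine ID are assumptions based on Stability AI's v1 REST
# API; verify against the current docs before relying on them.
API_URL = "https://api.stability.ai/v1/generation/stable-diffusion-v1-6/text-to-image"

def build_request(prompt, width=768, height=512):
    """Assemble the JSON payload for a Stability text-to-image call.
    Dimensions should be multiples of 64."""
    return {
        "text_prompts": [{"text": prompt}],
        "width": width,
        "height": height,
        "cfg_scale": 7,
        "samples": 1,
    }

payload = build_request("vintage retro scifi spaceship, dramatic lighting")

api_key = os.environ.get("STABILITY_API_KEY")
if api_key:  # only reach out when a key is actually configured
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}", "Accept": "application/json"},
        json=payload,
    )
    resp.raise_for_status()
```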

Conclusion on AI image tools for game graphic remakes
AI image generation tools today can successfully remake old video game graphics in higher resolution and detail, but doing so requires careful prompt design, iterative tweaking, and sometimes manual editing. Stable Diffusion, DALL-E, and Midjourney each have strengths that complement the process. While reproducing exact iconic elements or text remains challenging, these tools already enable nostalgic stories to be retold visually with impressive fidelity and artistic flair. The ongoing development of AI models promises even better results soon.
