AI Is Our Third Teammate: How We Create Pixel Gun 2 Videos Without Rendering or Animation
Cubic Games, part of the GDEV group, has been actively using generative AI to create videos for its hit game Pixel Gun 2. The studio’s lead motion designer Evgenii Mikhidenko has now detailed the pipeline and shared key results in an exclusive article for Game World Observer.
Evgenii Mikhidenko
How We Use AI to Create Videos in the Pixel Gun Universe
When we at Cubic Games first started using AI, we were primarily focused on classic use cases: speeding up game production, increasing content output, and improving quality. But over time, we began actively applying AI in marketing — specifically in creating videos that help maintain audience interest even when we’re not yet ready to show gameplay. That’s how the idea for short “cartoon-style” videos was born. These don’t distract the team from core development, but allow us to stay connected with the community.
The Lie Detector Concept
When we launched one of these videos, we intentionally chose not to link it directly to Pixel Gun 3D. PG3D and Pixel Gun 2 are, after all, two different products. So we went with a light, ironic format: a video featuring a lie detector, where a game designer answers tricky questions and the device instantly reveals when he’s lying. This creates emotional engagement without impacting the PG3D brand and subtly builds excitement around Pixel Gun 2 — without claiming it’s “better.” We just want it to be fun — with destructibility, perks, and all the features we’re working on now.
Embedding Meaning into the Videos
We aim to convey not just the atmosphere, but also the values that matter to our community. For example, we want to show that the game is about game design — not monetization or pay-to-win. Right now, Danya is working on scripts, and one idea is to show the development process from the inside: how we use AI as a team and how it helps in real tasks. We plan to make a dedicated video about this.
That’s also why we launched several new social channels — to clearly separate the two products visually, stylistically, and tonally. In fact, we’re gradually shaping a whole series, like “The Office,” starring our character Pixelman. He exists within the studio, interacts with artists, developers, game designers — and through humor, light storytelling, and character interactions, we convey meaningful messages. It’s like storytelling about challenges and solutions — not directly, but through characters. Think of it like “Ninjago”: there’s a plot, an obstacle, and a conclusion. We believe the 30–40 second short episode format fits this perfectly.
How We Made the Actual Video
This project was initially intended as a gift for the community, so we paid extra attention to quality. Our goal was to make the video entirely with AI — but without classic AI artifacts: blurred faces, broken hands, drifting eyes, etc. Ironically, this meant more manual tweaking than in, say, ad creatives or user acquisition videos, where compromises are acceptable for the sake of speed.
Roughly 70–80% of the video was AI-generated — from image generation to animation — but we completely excluded hand-drawn animation and traditional 3D rendering as a matter of principle.
Manual work included prepping references, final editing, fixing artifacts, and inserting key elements — like the lie detector screen. This hybrid approach allowed us to preserve visual consistency while iterating flexibly.
Our Pipeline
First, I exported game assets and laid out the scene in Blender — this was the fastest way to recreate the needed composition and style. Then, using GPT, I generated key frames. Typically, I asked it to enhance the render, give it a cinematic feel, preserve detail, and set the desired lighting and mood. Once I got a successful frame, I saved it with the label “Interview” and requested all subsequent frames in that same style.
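To make that step concrete, here is a minimal sketch of what the Blender part can look like when scripted with bpy. The asset paths, camera placement, and render settings below are illustrative assumptions, not our actual scene file; the resulting still is what then goes to GPT with a prompt along the lines of "enhance this render, cinematic lighting, preserve detail, interview-room mood."

```python
# Minimal Blender (bpy) sketch: import exported game assets, frame a shot,
# and render a still to feed into image generation.
# Paths and transforms are placeholders, not the actual Pixel Gun scene.
import bpy

ASSETS = ["//assets/interview_desk.fbx", "//assets/pixelman.fbx"]  # hypothetical paths

# Import the exported FBX assets into the current scene
for path in ASSETS:
    bpy.ops.import_scene.fbx(filepath=bpy.path.abspath(path))

# Place the default camera for the interview composition
cam = bpy.data.objects["Camera"]
cam.location = (0.0, -6.0, 1.6)
cam.rotation_euler = (1.45, 0.0, 0.0)

# Render a 16:9 key frame that GPT will later stylize
scene = bpy.context.scene
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.filepath = "//renders/interview_keyframe.png"
bpy.ops.render.render(write_still=True)
```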
If there were artifacts or blur, I ran them through Krea Enhancer. If the aspect ratio changed, I extended the background in Photoshop. Animation was done via Kling.ai — it provided the needed style and fit our format better than, for example, Veo 3, which tends to introduce artifacts and struggles with consistency.
In Kling, I uploaded short 5-second clips — it was easier to manage artifacts that way. Emotional expressions worked best using visual references, not text prompts — sometimes just redrawing the eyebrows did the trick.
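Keeping clips to five seconds is also easy to enforce with a small script. Here is a sketch (assuming ffmpeg is available on PATH; filenames are hypothetical) of cutting a longer take into five-second pieces so each one can be reviewed, and regenerated, independently. Note that with stream copy the cuts snap to keyframes, so segment lengths are approximate.

```python
# Sketch: split a longer clip into ~5-second segments with ffmpeg,
# so each piece can be checked and fixed on its own.
# Assumes ffmpeg is on PATH; input/output names are placeholders.
import subprocess

def split_into_segments(src: str, seconds: int = 5) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-f", "segment",                # use the segment muxer
            "-segment_time", str(seconds),  # target segment length
            "-reset_timestamps", "1",       # each segment starts at t=0
            "-c", "copy",                   # no re-encoding; cuts land on keyframes
            "clip_%03d.mp4",
        ],
        check=True,
    )

split_into_segments("interview_take.mp4")
```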
Voice-over was done via ElevenLabs (V2), using the voices Mark and Reginald; tonally, they were the best fit (a minimal API sketch follows the list below). The lie detector scene was a challenge: it almost always generated bugs. So I:
- generated it separately in Kling,
- created a looping animation,
- inserted it into After Effects,
- and polished the screen (added pulse, movement, etc.).
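For anyone scripting the voice-over step, here is roughly what the same call looks like against the ElevenLabs text-to-speech REST API rather than the web UI. This is a minimal sketch, assuming the requests library and an API key in an environment variable; the voice ID and the line of dialogue are placeholders, since voices like Mark and Reginald are referenced by ID rather than by name.

```python
# Sketch: generate one voice-over line via the ElevenLabs TTS REST endpoint.
# The voice_id and text are placeholders; real voices are referenced
# by the IDs listed in your ElevenLabs voice library.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-voice-id-here"  # placeholder

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Is Pixel Gun 2 pay-to-win? ...No.",  # illustrative line
        "model_id": "eleven_multilingual_v2",
    },
    timeout=60,
)
response.raise_for_status()

# The endpoint returns the audio bytes directly (MP3 by default)
with open("vo_line.mp3", "wb") as f:
    f.write(response.content)
```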
What AI Helped Us Achieve
After publishing, we tracked views for the first 24 hours and checked activity on Discord, Instagram, and other platforms. Interest had previously been on the decline, but this video brought a sharp uptick. It almost matched our top post in engagement! So we're definitely continuing this approach.
We see two types of content:
- Entertainment-focused (when there’s nothing new to show, but we want to maintain interest).
- Product-focused (which AI doesn’t help much with — we don’t run misleading ads and only use build-captured footage).
That said, AI does speed up scene composition, asset selection, and static image generation; in other words, it helps with prep, but doesn't create the final product.
We now use AI mainly as an accelerator and optimizer: what used to take 10 hours now takes 5. It's not about replacing people; it's about the same two people making 20 videos instead of 15. Output grows without growing the team, and the spend shifts to licenses and experimentation rather than headcount.
What’s Next?
We're continuing to experiment with tools; in our next video, we improved the voiceover by blending human voices with AI enhancement. Our goal, like the broader goal at GDEV, is to do things faster, better, and at scale. Anything that cuts down on routine work, we test.
We want to invest heavily in Pixel Gun 2. Even without final gameplay, we can tell stories through text, images, and mascots to build excitement. We hope to eventually arrive at a format of short animated episodes set in the Pixel Gun Universe, covering both the studio (through Pixelman and others) and the in-game lore.
This would allow us to talk about Pixel Gun 2 with the community — without breaching NDA or showing raw content. The most important thing is not to hide the game away, but to communicate honestly with players. Always.