Guided Hybrid Workflow
For over two decades, my work has been defined by the pursuit of atmosphere, narrative, and creating believable worlds. From the early days of traditional matte painting to AAA digital production, the tools have shifted, but the fundamental requirement remains the same: Intentionality.
This section is a dedicated space for my current explorations into hybrid workflows. Here, I want to document how Generative AI can be harnessed not as a shortcut to a final image, but as a high-velocity tool for Art Direction. By treating AI as a sophisticated tool rather than a creator, I can explore vast conceptual spaces in a fraction of the time, while maintaining the final authority on composition, lighting, and storytelling.
The work showcased here consists of explorations in creative control. My goal is to demonstrate that the value of the modern artist lies in their ability to direct, curate, and refine. Every exploration here involves a human-driven phase and a rigorous ‘production polish’ phase—ensuring that while the tool may be new, the standard of quality is the same.
High fidelity mood exploration
The Production Challenge: In film and game development, a mood piece is often a loose, gestural painting meant to capture lighting and emotion. However, to get a green light for a sequence, stakeholders often need to see a production still—an image that looks like a polished, post-processed frame from the movie or game. Traditionally, moving from a rough sketch to this level of fidelity requires days (if not weeks) of digital painting, photobashing, 3D rendering, and meticulous over-painting.
The Hybrid Workflow: I utilize generative AI as a surface and material engine. By feeding a rough, hand-drawn mood sketch into a guided AI pipeline, I can leverage the tool to better visualize the scene with realistic light-wrap, atmospheric haze, and material specularity. This process doesn’t change the composition or the intent of the scene; it accelerates the rendering phase to meet AAA production standards in hours instead of days.
Material fidelity and asset up-rez
The Production Challenge: One of the most common bottlenecks in the pipeline is the translation gap between a concept and the final 3D asset. If a concept lacks clear material definition or resolution, it leads to guesswork during the modeling and texturing phases, resulting in multiple costly revisions.
The Hybrid Solution: I utilize Generative AI to up-rez the concept and to define complex material surfaces. Traditionally, adding fidelity such as micro-scratches and weathered industrial textures could take considerable time depending on the resolution and final desired quality. A hybrid approach, where the artist remains in control of the tool, reduces that time significantly.
Environment & weathering exploration
The Production Challenge: Establishing how a primary environment asset reacts to various climate conditions—such as heavy snow, ice accumulation, or corrosive sandstorms—is a labor-intensive part of look development. Traditionally, artists must manually repaint every surface for each weather variation, which can create bottlenecks when a production requires multiple seasonal or planetary biomes.
The Hybrid Workflow: I utilize Gen AI as a tool to explore these environmental states. By using a base production-ready still as the foundation, I can apply complex weathering layers like snow-pack, ice-melt, and atmospheric occlusion in a non-destructive manner. This allows me to test the visual durability of the design across vastly different lighting and weather scenarios while ensuring the structural silhouette remains identical.
Cinematic post-production and color grading
The Production Challenge: Finalizing large-scale environments often requires extensive manual color grading and atmospheric painting to ensure the mood aligns with the cinematic direction of the project. Traditional post-production workflows involve tedious layering of volumetric fog, light-wrap, and global illumination adjustments, which can be difficult to iterate upon quickly when art direction shifts.
The Hybrid Workflow: I utilize Gen AI as a post-production tool to better visualize atmospheric treatment and color grading, informing our post-production partners. Using either a screenshot or a concept as the foundation, I can synthesize sophisticated lighting responses and water animation while also balancing the cinematic color palette. This allows for rapid exploration of different looks and atmospheric densities while maintaining the structural integrity of the original painting.
Directed concepts and production stills
The Production Challenge: In a standard pipeline, a concept artist often spends a significant portion of their time on rendering: painting light, shadow, and material response. This often comes at the expense of pure design time. Once a concept is finished, creating a clean production line drawing for the 3D team or external vendors is an additional, often tedious task.
The Hybrid Solution: In this workflow, I can spend more time on original concept sketches to drive the AI’s synthesis. I am essentially using the AI as a real-time lighting and material engine. Once the material intent is established through these AI explorations, I can then use generative tools to help synthesize a high-fidelity Production Line Drawing—a clean schematic that defines form without the distraction of color or light. This frees more time for the most important phase of all: design itself.