AI Generation Job Overview
This documentation is for plugin update 2.0+ and applies to Unreal Engine 5.4+.
The AI Generation Job Type is currently experimental! It is subject to change in future updates.
Optional: Clicking the 'X' on the Experimental Warning will dismiss it forever.

Welcome to the AI Generation Job, a powerful tool that integrates the ComfyUI image generation pipeline directly into the Unreal Engine editor. This guide will walk you through setting up the tool, understanding its interface, and generating your first textures.
1. First-Time Setup
Before you can generate images, you need to connect the plugin to your ComfyUI installation. This is a one-time setup process.
Step 1: Download and Install ComfyUI
First, you need a working, standalone installation of ComfyUI.
You can download the latest version from the official ComfyUI GitHub page.
Follow the installation instructions provided in the ComfyUI documentation or Readme file to ensure it is set up correctly before proceeding.
Step 2: Set the ComfyUI Path
Next, tell the plugin where your ComfyUI installation is located.
In the job's Details panel, expand the Settings category.
In the ComfyUI Path field, enter or browse to the root directory of your ComfyUI_windows_portable installation (e.g., C:/ComfyUI_windows_portable).

Step 3: Install Prerequisites
ComfyUI requires several third-party custom nodes to work with the included workflows. The plugin can install these for you automatically.
Click the Get Prerequisites button.
A progress bar will appear as the plugin downloads and installs the required custom nodes and Python packages into your ComfyUI directory. This may take a few minutes.
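For reference, this is conceptually what the button automates: a ComfyUI custom node is installed by cloning its repository into ComfyUI/custom_nodes and installing its Python requirements with the portable build's embedded interpreter. A rough sketch, with an illustrative repository URL (the plugin installs its own curated list):

```python
import subprocess
from pathlib import Path

# Illustrative paths for a portable install; adjust to your setup.
COMFY_ROOT = Path("C:/ComfyUI_windows_portable")
CUSTOM_NODES = COMFY_ROOT / "ComfyUI" / "custom_nodes"
EMBEDDED_PY = COMFY_ROOT / "python_embeded" / "python.exe"  # the portable build ships this folder

def install_custom_node(repo_url: str) -> None:
    """Clone a custom node repository and install its requirements, if any."""
    target = CUSTOM_NODES / repo_url.rstrip("/").split("/")[-1]
    if not target.exists():
        subprocess.run(["git", "clone", repo_url, str(target)], check=True)
    requirements = target / "requirements.txt"
    if requirements.exists():
        subprocess.run([str(EMBEDDED_PY), "-m", "pip", "install", "-r", str(requirements)], check=True)

# Hypothetical example; not one of the plugin's actual prerequisites.
install_custom_node("https://github.com/example/ComfyUI-ExampleNode")
```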


Step 4: Start ComfyUI
The plugin needs to run the ComfyUI backend process.
Click the Link/Start ComfyUI button.
The ComfyUI Status text will change. You should see it go from Stopped -> Disconnected -> Connected.
Stopped: The ComfyUI process is not running.
Disconnected: The process is running, but the Unreal editor is not yet communicating with it via the WebSocket. It should connect automatically.
Connected: Everything is ready to go!
You only need to do this once per editor session. The plugin will remember the running process.
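If you are curious what those states correspond to, here is a minimal sketch of the same handshake from Python, using the websocket-client package and assuming ComfyUI's default address of 127.0.0.1:8188:

```python
import json
import uuid

import websocket  # pip install websocket-client

SERVER = "127.0.0.1:8188"  # ComfyUI's default host and port
client_id = str(uuid.uuid4())

# "Disconnected" -> "Connected": open the WebSocket that carries status updates.
ws = websocket.WebSocket()
ws.connect(f"ws://{SERVER}/ws?clientId={client_id}")

# ComfyUI sends a status message as soon as the socket is up.
message = json.loads(ws.recv())
print(message["type"])  # e.g. "status"
ws.close()
```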

2. The AI Generation Panel
The main interface is located in the Details panel of the AI Generation Job. It's broken down into several sections.

Main Controls & Status
ComfyUI Status: Shows the current connection state.
Save/Load Settings: The button at the top-right allows you to save your entire configuration as a Data Asset preset and load it later. This is great for saving your favorite settings.
Interrupt Generation: If a generation is taking too long or you want to cancel it, this button will stop the current job in ComfyUI (a one-line sketch of the underlying call follows this list).
Send To IMG2IMG: After a generation is complete, this button will appear. Clicking it sends your last generated "diffuse" texture to the Img2Img input slot, allowing for fast iteration.
Stop ComfyUI: Shuts down the ComfyUI background process.
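Interrupting maps onto a single HTTP call in ComfyUI's API; a minimal sketch, assuming the default 127.0.0.1:8188 address:

```python
import requests

# Ask ComfyUI to abort whatever it is currently executing.
requests.post("http://127.0.0.1:8188/interrupt")
```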

Core Generation Settings
Positive & Negative Prompts: Large text boxes where you enter your desired concepts (e.g., "masterpiece, best quality, brick wall texture") and things to avoid (e.g., "worst quality, blurry, text, watermark").
Workflow: This dropdown is crucial. It lets you select the ComfyUI graph to use (e.g., TXT2IMG.json, IMG2IMG_ControlNet_Upscale.json). The settings available below will change dynamically based on the nodes present in the selected workflow.
AI Models:
Main Model: Your primary checkpoint file (e.g., .safetensors).
Custom VAE: (Optional) Use a specific VAE instead of the one baked into the main model.
Upscale Model: The model to use for upscaling nodes.
ControlNet Model: The model to use for a ControlNet pass.
Settings (Advanced):
Output Path: The Content Browser folder where your generated textures will be saved.
Delete Output After Gen: If checked, the temporary .png files in your ComfyUI/output folder will be deleted after being imported into Unreal.
Delete Previous Gen: If checked, textures from the previous run will be deleted from the Content Browser when you start a new one.

3. Workflow-Specific Settings
The real power of the tool is its dynamic interface. Depending on the Workflow you select, different categories of settings will appear.
Sampler: Appears in all generative workflows. Here you control the core generation parameters:
Seed: The random seed for the generation. Check "Randomize Seed" for a new result every time.
Step Count: The number of sampling steps. Higher is generally better but slower (e.g., 20-30).
cfg: How strongly the AI should adhere to your prompt (e.g., 7-10).
Sampler/Scheduler: The specific sampling algorithm to use (e.g., euler, dpm++_2m_sde, karras).
Image Width/Height: The dimensions of the generated image.
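These Sampler settings map directly onto the KSampler node inside the selected workflow. In ComfyUI's API-format JSON, that node looks roughly like this (node IDs and connections are illustrative):

```python
# One node from an API-format workflow graph; inputs mirror the panel's Sampler settings.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 123456789,        # Seed / "Randomize Seed"
            "steps": 25,              # Step Count
            "cfg": 8.0,               # cfg
            "sampler_name": "euler",  # Sampler
            "scheduler": "karras",    # Scheduler
            "denoise": 1.0,           # 1.0 for pure text-to-image
            "model": ["4", 0],        # link to the Main Model loader node
            "positive": ["6", 0],     # link to the positive prompt encoder
            "negative": ["7", 0],     # link to the negative prompt encoder
            "latent_image": ["5", 0], # link to an empty latent sized Width x Height
        },
    }
}
```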
IMG2IMG: Appears in workflows whose names start with IMG2IMG.
Enable Inpaint: Toggles inpainting mode. You will need to provide a mask in the InputMaskTexture slot.
KSampler_Denoise: How much the AI should change the input image. 0.0 means no change; 1.0 produces a completely new image.
ControlNet: Appears in workflows with _ControlNet in their name.
ControlNet_Strength: The influence of the ControlNet on the final image.
SD Upscale: Appears in workflows with _Upscale in their name.
UpscaleBy: The factor to multiply the resolution by (e.g., 2.0 for 2x).
Upscale_Denoise: How much detail the upscaler can add or change. Low values (0.1-0.3) are recommended.
LORAs: Appears in workflows that support LoRA. You can add them here, specifying the file and its model/text strength.
PBR: Appears in workflows that can output PBR maps. This is a group of categories for fine-tuning the post-processing effects that generate the Normal, Roughness, and AO maps.
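The plugin's exact PBR post-processing is internal, but the general technique is standard image processing. As an illustration only, a normal map can be derived from a grayscale height estimate via image gradients:

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a grayscale height image in [0, 1]."""
    # Image gradients approximate the surface slope along Y (rows) and X (columns).
    dy, dx = np.gradient(height)
    # Build un-normalized normals; strength scales how pronounced the relief looks.
    normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)
    # Remap from [-1, 1] to the [0, 1] range of an RGB texture.
    return normal * 0.5 + 0.5
```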

4. Generating Textures
Select a Workflow: Choose the appropriate workflow for your task (e.g., TXT2IMG.json for simple text-to-image, or IMG2PBR.json for a PBR conversion).
Set Your Prompts & Settings: Fill in the prompts and adjust the Sampler, Model, and other settings as desired.
(Optional) Provide Input Textures: If using an IMG2IMG or ControlNet workflow, drag and drop textures from the Content Browser into the InputTextureImg2Img or InputTextureControlNet slots in the TextureInputs category.
Click Generate: Press the main "Generate" button at the top of the Texture Tools window (a sketch of what this triggers follows the list).
Monitor Progress: A progress bar will appear at the bottom of the job panel, showing the overall progress and the title of the specific ComfyUI node currently being executed.
Find Your Texture: Once complete, the generated texture(s) will appear in the Output Path you specified in the Content Browser.
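Behind the Generate button, the plugin submits the filled-in workflow graph to ComfyUI and then watches progress over the WebSocket. A condensed sketch of that round trip, assuming the default address and an API-format workflow dict like the KSampler example above:

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_workflow(workflow: dict, client_id: str) -> str:
    """Submit an API-format workflow graph; returns the prompt ID ComfyUI assigns."""
    payload = json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")
    request = urllib.request.Request(f"{SERVER}/prompt", data=payload)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["prompt_id"]

# Progress then arrives on the WebSocket as messages such as
# {"type": "progress", "data": {"value": 12, "max": 25}} and
# {"type": "executing", "data": {"node": "3", ...}} -- the raw data behind
# the progress bar and node title shown in the job panel.
```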

5. Saving and Loading Presets
You can save a complete set of configurations to a Data Asset to reuse later.
Save As: Click the Save/Load button and choose "Save Settings As". This will prompt you to create a new AIGenSettings Data Asset in your Content Browser.
Save: If you have already saved or loaded a preset, this option will overwrite that asset with your current settings. An asterisk (*) next to the preset name indicates you have unsaved changes.
Load: To load a preset, click the Save/Load button. The bottom section of the menu is an asset picker. Simply find and click a previously saved AIGenSettings asset to apply all its settings to your current job instantly.
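Because presets are ordinary Data Assets, you can also inspect them from the editor's Python scripting environment. A small sketch, assuming a preset saved at the hypothetical path /Game/Presets/MyAIGenPreset:

```python
import unreal

# Load a previously saved preset Data Asset (the path is hypothetical).
preset = unreal.EditorAssetLibrary.load_asset("/Game/Presets/MyAIGenPreset")
if preset:
    # Property names on AIGenSettings are plugin-defined; start by inspecting the asset.
    print(preset.get_fname(), preset.get_class().get_name())
```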