image-to-image with local AI models — which model for what, and how denoise strength actually works


via Dev.to · David

Most local AI image tools give you text-to-image and call it done. You type a prompt, get an image, and if it's not what you wanted, you start over with a different prompt. That's fine for exploration, but it's a terrible workflow when you have a specific result in mind.

Image-to-Image (I2I) changes that. You start with a reference image — a photo, a sketch, a previous generation — and tell the model what to change. Keep the composition, adjust the style. Keep the pose, change the outfit. Keep the layout, make it photorealistic. The source image anchors the generation so you're refining instead of rolling dice.

We added I2I to Locally Uncensored in v2.3.0, and it works with every image model the app supports. Here's how it works and which models to use for what.

How Image-to-Image Works

The core mechanic is denoise strength — a value between 0.0 and 1.0 that controls how much the model changes your source image.

0.1–0.3: Subtle adjustments. Color grading, minor style shifts, texture
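To make the denoise-strength mechanic concrete, here is a minimal sketch of how most diffusion img2img pipelines map strength onto the sampling schedule (the same idea used by, e.g., the diffusers img2img pipeline): the source image is noised to a point partway through the schedule, and only the remaining steps are run. The function name and return shape here are illustrative, not any particular library's API.

```python
def img2img_schedule(num_inference_steps: int, strength: float):
    """Map denoise strength onto a diffusion sampling schedule.

    Returns (start_step, steps_to_run). Strength near 0.0 runs almost
    no denoising steps (source image barely changes); strength 1.0 runs
    the full schedule (equivalent to plain text-to-image).
    """
    # Number of denoising steps actually executed, clamped to the schedule.
    steps_to_run = min(int(num_inference_steps * strength), num_inference_steps)
    # The sampler skips the early high-noise steps; the source image,
    # noised to the matching level, stands in for them.
    start_step = num_inference_steps - steps_to_run
    return start_step, steps_to_run

print(img2img_schedule(50, 0.3))  # low strength: start late, run few steps -> (35, 15)
print(img2img_schedule(50, 1.0))  # full strength: start from pure noise -> (0, 50)
```

This is why low strength values preserve composition: the model only ever sees the image at mild noise levels, so large structural changes never get a chance to happen.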

Continue reading on Dev.to

