Run Uncensored AI Chat + Image Gen Locally in 5 Minutes (No Docker, No Cloud)
How-To, Systems

via Dev.to Tutorial, by David

I keep seeing the same question in every AI subreddit: "how do I run AI locally without sending my data to OpenAI?"

The answer used to be complicated: install Ollama, configure it, then install ComfyUI separately, figure out the Python dependencies, download models manually, and pray nothing conflicts.

Now it's actually 5 minutes. Here's how.

## What You're Getting

By the end of this you'll have:

- AI chat with uncensored models (no content filters)
- Image generation (Stable Diffusion, Flux, whatever you want)
- Video generation
- All in one UI, all running on YOUR machine

No cloud. No API keys. No subscriptions. No "we updated our privacy policy" emails.

## Prerequisites

You need two things installed:

### 1. Ollama (for chat)

```bash
# Mac/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows - download the installer from ollama.ai
```

Pull a model:

```bash
ollama pull llama3.1
```

### 2. ComfyUI (for image/video gen)

Follow their install guide. It's basically clone the repo and run `python main.py`. You'll also want at least one image model.
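The ComfyUI step is roughly the following. This is a sketch, not their official guide: the repo URL and `requirements.txt` match the standard ComfyUI layout, but check the project README for GPU-specific PyTorch install instructions before running it.

```shell
# Sketch of a typical ComfyUI install (assumes a working Python 3.10+;
# see the official README for the right torch build for your GPU)
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# install dependencies, ideally inside a virtualenv
python -m venv venv && . venv/bin/activate
pip install -r requirements.txt

# launch the UI (serves on http://127.0.0.1:8188 by default)
python main.py
```

Image checkpoints you download go under `ComfyUI/models/checkpoints/` so the UI can find them.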
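With both installed, a quick sanity check that each service is actually listening. These assume the default ports (Ollama on 11434, ComfyUI on 8188); adjust if you changed them.

```shell
# Ollama: ask the local API for a one-off completion (default port 11434)
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "say hi", "stream": false}'

# ComfyUI: the web UI should answer on its default port 8188
curl -I http://127.0.0.1:8188
```

If the first command returns JSON and the second returns an HTTP 200, both halves of the stack are up.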
