Helpful Resources
I’ll add more here as I remember them. Feel free to add more in the comments.
- AUTOMATIC1111’s Stable Diffusion WebUI is the software nearly everybody uses (there’s a rough install sketch just below this list)
- System requirements-wise, for a while I used it on a 1050 Ti with 4 GB of VRAM, and I wouldn’t recommend going any lower than that. An RX 580 with 8 GB of VRAM does wonders at a similar secondhand price point (if there isn’t any crypto hype going around where you are)
- Using https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 can provide a really nice speed boost if configured correctly.
- Civitai has a really good selection of models, loras, and other resources
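If you’re starting completely from scratch, the usual way to get the webui running looks roughly like this. This is just a sketch of the standard Linux flow (it assumes git and Python 3.10 are already installed; on Windows you’d run webui-user.bat instead of webui.sh):

```
# grab the webui linked above
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# the first launch sets up a venv and pulls in dependencies,
# then the UI is reachable in your browser at http://127.0.0.1:7860
./webui.sh
```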
Models
Models are basically the brains of Stable Diffusion. They’re the trained data SD draws on to figure out what your prompts mean.
The default models that come with Stable Diffusion are really bad for porn. In fact, unless you’re training your own models, don’t use them at all; there are better SFW models too.
Here are some of my personal favourites:
Anime
- MeinaHentai is a great model to start with. Compared to other models it’s really easy to prompt
- AOM3 also does really well, though it might be a little more difficult to guide
For all of those, I recommend installing https://github.com/DominikDoom/a1111-sd-webui-tagcomplete, as they heavily rely on danbooru tags.
- Berry Mix (Pre-mixed version here) can also work pretty well, depending on what you want to do. AFAIK it uses rule34 tags instead of danbooru, so it probably won’t work all that well with prompts written for the above ones
Realistic
- Uber Realistic Porn Merge is the only realistic model I know of that does hardcore stuff. Its unfortunate problem is that it’s REALLY DAMN HARD TO USE
VAEs
VAEs are mostly used for finetuning colors, sharpness, what have you. Some models come with a VAE builtin, but for ones that don’t, it’s recommended to have one on hand.
- “Anything VAE”, “Orangemix VAE”, and “NAI Leak VAE” are the same exact thing under different names. If you already have one on hand, don’t bother with the others. Most VAEs are renamed versions or modifications of this one.
- Waifu Diffusion’s kl-f8-anime2 is also a pretty good one. It doesn’t require Waifu Diffusion.
- The one that comes with Stable Diffusion is the only one that seems to work for realistic stuff.
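If you’re not sure how to actually use one of these: in the webui a VAE is just a file you drop into its own folder and then select in the settings. A rough sketch (the filename is only an example, use whatever you actually downloaded):

```
# from inside the stable-diffusion-webui folder
mv ~/Downloads/kl-f8-anime2.ckpt models/VAE/

# then select it under Settings -> Stable Diffusion -> SD VAE,
# or force it at launch with: --vae-path models/VAE/kl-f8-anime2.ckpt
```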
LoRAs
LoRAs teach models about concepts (characters, clothing, environments, style, …) they might not know about. There are a LOT of them, so feel free to browse Civitai to find ones you might want.
LoRAs tend to be specific to families of models, or at the very least styles (using anime LoRAs on realistic models tends to be a bad idea), but there are a fair few that will work across the board.
Locon and LyCORIS are newer formats of LoRAs. Not sure on the technical differences between them, but they will not work out of the box and need an extension such as https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris to get working
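In case it isn’t obvious how these actually get used: the LoRA file goes into the webui’s models/Lora folder, and then you pull it into a prompt with the <lora:filename:weight> syntax. Something like this (the LoRA name here is made up; a weight around 0.6–1.0 is the usual starting point):

```
1girl, solo, standing, outdoors, <lora:some_character_lora:0.8>
```

Recent versions of the webui also have an extra networks panel under the Generate button that inserts that tag for you when you click the LoRA.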
Textual Inversions / Embeddings and Hypernetworks
These are mostly obsoleted by LoRAs. There are a few embeddings such as Deep Negative and EasyNegative that are still quite useful, but in most cases you’ll want to use LoRAs instead.
I tried running the web UI on a steam deck with both meinahentai and URPM but I keep getting segfaults after “applying optimization: Doggettx”
Any idea what causes those? The deck should act like it has 10GB RAM and 4GB VRAM, so I’m not sure where those errors come from.
ROCm is flaky on regular consumer GPUs at the best of times. I’m surprised you could even get that far on a Steam Deck.
Try the command line arg `--opt-sdp-attention`. You might also want to try out `--medvram` or `--lowvram` (4 GB is considered low when it comes to AI), although I have a feeling it’s just because of the custom nature of the Deck’s APU. Your best bet would be to search for builds of ROCm, PyTorch and Torchvision that are specifically made for the Deck, if such things even exist.
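For reference, those flags go on the COMMANDLINE_ARGS line of webui-user.sh (webui-user.bat on Windows), not into the UI itself. A minimal sketch, assuming you go the low-VRAM route:

```
# webui-user.sh -- uncomment/add this line, then relaunch ./webui.sh
export COMMANDLINE_ARGS="--medvram --opt-sdp-attention"
```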
Is all this software free to download/use? Also, how does one start doing this? Do I just need the WebUI, or do I need extra files to feed into it and stuff?
It’s all free, yes.
For the how: aside from the webui (or whatever else you’re running the Stable Diffusion code on), you’ll need models that tell it what to create. The built-in models are terrible for porn.
Yeah, the webui and Stable Diffusion models/checkpoints are generally free to use.
I just figured it out tonight playing around with the links and readmes available above. If you get stuck I can try to answer more specific questions.
Hmm, I’ll probably have more questions, but for now I’m curious:
- how much space was the download(s)?
- how confusing is the software to use?
- what kind of limitations does the software have? Can I do multiple people? Monsters? Futa? etc.
Thanks for your help! :)
Sure,
1: The initial download was pretty small, ~10 GB. But with the models, LoRAs, extensions, etc. I’m up to ~60 GB.
2: The guide at the top was pretty easy to follow. Install the dependencies, then install the UI. Launch and run. There is a bit of a learning curve with all of the options but so far it hasn’t been too confusing.
3: That’s where the extra models/LoRAs come in. Various models are trained on different styles, poses, actions, etc. The LoRA files are smaller things, like poses. E.g. cowgirl is its own LoRA file that tells the model how to use the prompts you give.
> how much space was the download(s)?
On my end, it’s sitting at ~64 GB (with btrfs compression shenanigans), though 60 of those are from all the models I have installed. The download itself would probably be ~2 GB, even less if you disable downloading the “default” models with `--no-download-sd-model` and instead pick models off of Civitai or wherever manually.

Edit: should have mentioned: most full models are between 2-4 GB each. Some can be 5+ but those tend to be “full” versions intended for merging & such. LoRAs are generally smaller; depending on how much they’re pruned they’ll be anywhere between 10-100 MB each.
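Since “do I need extra files” keeps coming up: everything you download afterwards just goes into the webui’s own folders. Roughly (these are the default paths, nothing needs configuring):

```
stable-diffusion-webui/
├── models/
│   ├── Stable-diffusion/   # full models / checkpoints (2-5+ GB each)
│   ├── Lora/               # LoRA files (10-100 MB each)
│   └── VAE/                # VAEs
├── embeddings/             # textual inversions like EasyNegative
└── extensions/             # extensions like tagcomplete or multidiffusion
```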
> how confusing is the software to use?
There’s definitely a learning curve, yes. But there’s plenty of resources (and more importantly, examples) out there.
> what kind of limitations does the software have? Can I do multiple people? Monsters? Futa? etc.
As long as you have the correct models set up it can generate basically anything. At least with anime models, monsters and futa are a given. Your main issue will probably be multiple people, although there are solutions to that. (See the multidiffusion upscaler GitHub repo on the main post)
After extracting the WebUI files and doing the git clone, I’ve tried to double-click webui-user.bat, but the terminal that opens up says “python not found” even though I’ve got Python downloaded…
Not on Windows right now so I can’t confirm, but you probably forgot to pick “Add Python to PATH” or whatever the option is called in the installer. Try running the Python installer again; maybe it’ll let you add it without needing to uninstall & reinstall.
Edit: If you’re on Nvidia, there seems to be a simpler install method now: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs (Method 1)
Hmm, I uninstalled and reinstalled and I’m not seeing an option for “Add Python to PATH”. Could there be an alternate name?
It’s probably “Add python to environment variables.” They must have changed the wording on that at some point.
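If the installer option still won’t cooperate, another workaround is to point the webui straight at your interpreter by editing webui-user.bat. The path below is only the usual default for a per-user Python 3.10 install; swap it for wherever python.exe actually ended up on your machine:

```
@echo off

set PYTHON=C:\Users\you\AppData\Local\Programs\Python\Python310\python.exe
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat
```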
I’ve been meaning to figure this out. So the commonly used ones are labeled WebUI, but just how much of the content goes to the web itself?
If I wanted to train on images that I don’t want going online, will they? Or will the products that I create end up online, or does all this stay local?
“Web UI” just means the graphical part of it (where you write your prompt and hit generate) runs inside your browser rather than as a separate window or command line. Everything is kept on your own computer unless you explicitly tell it to open up remote access.
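Concretely, for the AUTOMATIC1111 webui remote access is opt-in via launch flags; by default it only binds to localhost. A quick sketch:

```
# default: only reachable from your own machine at http://127.0.0.1:7860
./webui.sh

# opt-in flags that expose it beyond your machine:
./webui.sh --listen    # reachable by other devices on your network
./webui.sh --share     # creates a temporary public gradio.live link
```

Generated images likewise just land in the local outputs/ folder; nothing gets uploaded anywhere unless you do it yourself.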