Prompting Guide
Eris
📝 How to write a prompt? (Basics)
Start your prompt with a short natural-language description of what you want, then pad it with e621 tags to refine specific concepts. Don't use "_", and separate tags with ","
Don't tag something that you can't see in the desired view of your resulting picture.
(WIP) Things that can help you compose your prompt (an example prompt follows this list):
Anthro/feral?
View? (Front view, side view, rear view, three-quarters view, low-angle view, high-angle view, first person view, penetrating pov, male pov, bird's-eye view, worm's-eye view..)
Format? (Bust, portrait, half-body, full-length portrait, headshot..)
Clothing? (Clothed, nude, partially clothed, topless, bottomless, topwear, bottomwear, underwear only..)
Looking where? (Looking at viewer, looking back at viewer, looking down, looking at other, looking at self..)
Gender? (Male, female, gynomorph, andromorph, intersex, herm, cuntboy..)
Body composition? (Slim, skinny, fat, obese, chubby, slightly chubby, overweight, muscular, athletic..)
Style? (3d, digital artwork, toony, realistic, pencil drawing, sketch, lineart..)
Rating? (safe, questionable, explicit)
Background? (Simple background, amazing background, nature background, inside, outside..)
Emotion? (Smile, shy, orgasm face, ahegao, embarrassed, eye roll, angry, sad, tears, crying, calm, :3, :o, annoyed..)
Profanity? Yes, just add it.
Solo/duo/trio/group? Put the ones you don't want in the negative prompt to avoid unwanted results.
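Putting it together, a prompt built from this checklist might look something like this (an illustrative sketch - swap the tags for whatever fits your idea):
/txt2img a cute anthro wolf relaxing in a forest, solo, full-length portrait, three-quarters view, looking at viewer, smile, clothed, digital artwork, nature background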
🧑🎨 Why do I need to use artists in my prompts?
Most prompts without artist tags come out very "basic". Try adding your favorite artists using the syntax by %artist%.
Example: by personalami
You can mix artists together to find new styles.
by personalami, by pixelsketcher, by scruffythedeer
Also don't forget: when using any artist, it is better to ask the AI for the kind of content that artist usually creates. Use this to get the specific concepts/situations/species the artist is good at.
For example, if you use Zaush, you will get better results asking for the canines he mostly draws.
You can also use artist LoRAs to reinforce an artist's style even further. You can find more info about LoRAs later in this article.
🔠 How to write e621.net tags correctly?
- Don't use underscores:
bedroom_eyes is WRONG
bedroom eyes is CORRECT
- If a tag has brackets in it, like this one:
death (puss in boots)
write it with the brackets escaped:
death \(puss in boots\)
- Use "by" before artist names, like this:
by kenket
- If a tag has an apostrophe ( ' ) or a hyphen ( - ), use it as usual:
worm's-eye view
🚫 What is a negative prompt?
A negative prompt lets you tell Stable Diffusion what you don't want to see in the image. You don't need to write things like "no wings" or "no clothing" in the main prompt - use the negative prompt instead, or use tags with the opposite meaning: nude instead of no clothing, for example.
You can use negative prompt in Eris like this:
/txt2img <prompt> neg: <negative prompt>
Eris has a basic negative prompt in place, so unless you are trying to avoid something specific, it's better not to touch it.
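For example, if clothing keeps sneaking onto a character you want nude, you could try something like this (an illustrative sketch, not a guaranteed fix):
/txt2img anthro fox, nude, beach, smile neg: clothing, clothed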
📛 What are negative embeddings?
Think of an embedding as a set of tags packed into just one word - the name of the embedding. They are mostly created so you don't need to write things like "bad anatomy, multiple heads, weird legs, bad quality" and so on in your negative prompt.
Negative embeddings are sets of tags that help the AI understand what to avoid (or what to aim for, if put in a positive prompt). You can use them in your prompts to make your pictures look better. For example, the Boring e621 embeddings are trained on e621 art that no one has ever favorited or upvoted, so you can use them as negative embeddings to guide the AI towards more "interesting" output.
You can check them all out with the Eris Infoline Bot using the /emb command.
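For example, you could call them in a negative prompt like this (the embedding names are taken from the bot's default negative prompt listed later in this guide):
/txt2img red panda, portrait, smile neg: boring_e621_v4, deformityv6, worst quality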
🌀 How many steps do I need?
Stable Diffusion creates images by starting with a noisy canvas and slowly removing noise from it until you get the final image. This setting determines how many times it cleans up the noise. The default is 25 steps, which works well for most generations.
More isn't always better, but if you're making a detailed image, such as a photorealistic portrait with fur, and you think it's missing some detail or texture, or your prompt is really complicated - try increasing it a bit to 30-40 steps.
We are using a very step-efficient sampler (DPM++ 2M Karras), so there is no need for very high values.
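For example, for a detailed piece you might bump the steps slightly (illustrative values):
/txt2img photorealistic portrait of a lion, detailed fur steps: 35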
⚖️ What are weights?
Weighting allows you to give more or less attention to a specific part of the prompt. That usually helps if you can't get the result you want. 1 is 100%, 2 is 200%, 0.5 is 50%
Cheat sheet:
a tag - attention of the tag is 1.0
a (tag) - changes attention of the tag to 1.1
a ((tag)) - changes attention of the tag to 1.21 (= 1.1 * 1.1)
a [tag] - changes attention of the tag to 0.9
a (tag:1.5) - changes attention of the tag to 1.5
a (tag:0.25) - changes attention of the tag to 0.25
Don't forget about weight normalization, and don't use very high weight values.
For example, if you prompt:
(masterpiece:1.2) (best:1.3) (quality:1.4) girl
It actually becomes:
(masterpiece:0.98) (best:1.06) (quality:1.14) (girl:0.81)
That means if you use weights like:
(masterpiece:2) (best:1.2) (quality:1.1) girl
It actually becomes like this, and some of your lower-weighted tags might be completely ignored by the AI:
(masterpiece:1.6) (best:0.46) (quality:0.42) (girl:0.34)
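In practice, moderate weights like these usually work well (an illustrative sketch using the syntax from the cheat sheet above):
/txt2img wolf, (red scarf:1.3), [forest background], portrait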
🖼️ What resolution is right?
All AI models used by Eris are trained on specific picture sizes; the highest usable resolution is 1088 pixels on any side, and anything higher than this will most likely lead to artifacts. Just keep it under 1088 and upscale good results. Recommended sizes for each side are: 576, 704, 832, 960 and 1088 pixels.
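For example, a portrait-oriented generation using two of the recommended side lengths (illustrative):
/txt2img deer, meadow, full-length portrait size: 832x1088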
💎 How to upscale in the bot?
Reply with:
/img2img scale: 2 denoising: 0.4
to the message with the picture and prompt. Scale refers to the enlargement factor: 500x500 -> 1000x1000 for scale 2. Denoising keeps a fraction of the original picture and generates on top of it; in this case it keeps 60% of the original and generates the remaining 40% anew. In other words, low values keep the picture very close to the original, while high values can lead to new features appearing in it. Don't forget the spaces between parameter names and values.
The upscaler used is latent.
If the prompt and the image are in two separate messages, reply to the image with:
/img2img <prompt> scale: 2 denoising: 0.4
🌱 What is a seed?
The seed is the number from which the initial noise canvas is generated. The default value is random, and this randomness is what makes each final image unique. Running the same prompt with the same seed on the same model will give you exactly the same image, but if you change the seed, you will get a different image. (Results can also vary between different GPUs.)
Even a slight difference in seed means a completely different picture. If you change the seed even by +1, the initial noise will be different, so the composition will change: posing, characters, etc.
Using the same seed and the same prompt with slight changes lets you keep the overall composition while tweaking details.
You can set your seed in Eris like this:
/txt2img <prompt> seed: 322
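For example, you can lock the seed and change one tag to vary the expression while keeping a similar composition (an illustrative pair of requests):
/txt2img wolf, sitting, smile seed: 322
/txt2img wolf, sitting, laughing seed: 322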
🗨️ How you can use the /txt2img command
/txt2img just as a message will ask you for a prompt. You can reply with a prompt and parameters to that message again and again; there is no need to call /txt2img again. You can also add the prompt/parameters after the command in the same message to skip that step.
/txt2img as a reply to a message will re-queue the prompt if the message came from the bot or contains generation parameters. Otherwise it will use the reply text as the prompt. If there is a prompt/parameters after the command, the prompt/negative prompt will be appended to the prompt/negative prompt from the reply, and the parameters will be overridden.
Example:
/txt2img cute vulpera waving, greeting, looking at viewer, paws, pawpads, traditional artwork neg: explicit seed: 31337 steps: 20 cfg: 7 size: 576x832
Additional available parameters (case-insensitive; specify them after a new line or after a comma with a space, and don't forget the space between the parameter name and its value as well):
Negative prompt: or Negative: or Neg: - negative prompt. Specify what you don't want to see. Default: boring_e621_fluffyrock_v3, boring_e621_v4, deformityv6, worst quality, bad quality, where boring_e621_fluffyrock_v3, boring_e621_v4 and deformityv6 are negative embeddings.
Seed: - the number the initial noise is generated from. Default: -1, which means a random value. Works only on new /txt2img requests.
Steps: or Cycles: - how long the AI should work on the image. More doesn't mean better. Default: 25.
CFG Scale: or CFG: or Detail: - how strictly the AI should follow your prompt. More doesn't mean better. Default: 7
Size: or Resolution: - size of the generation in WxH format, where W is width and H is height. More doesn't mean better. Default: 576x704
🏁 How you can use the /img2img command
/img2img just as a message will ask you for an image and a prompt. You can reply with a prompt and parameters to the "Please describe message" message again and again; there is no need to call /img2img again. You can also add the prompt/parameters after the command, and then the bot will ask only for the image.
/img2img as a reply to a message will re-prompt it if the message came from the bot or contains generation parameters and an attached image. Otherwise it will use the reply text as the prompt and ask for an image. If there is a prompt/parameters after the command, the prompt/negative prompt will be appended to the prompt/negative prompt from the reply, and the parameters will be overridden.
The upscaler used is latent.
Example (as a reply to the image that you want to "repaint"):
/img2img wolf, smile, looking at viewer, digital art, bust portrait denoising: 0.5
Additional available parameters (case-insensitive; specify them after a new line, and don't forget to add a space between the parameter name and its value; you can use parameters from txt2img too):
Scale: x - scales the output size up to x times, where x is not greater than 2. Respects the global maximum resolution, which is currently 1 million pixels.
Denoising strength: or Denoising: or Denoise: - how much of the picture is regenerated, where 0.75 means that only 25% of the original image is kept. You can specify 0.75 as 75%. Default: 0.75
🖼️ What is /pnginfo?
Uncompressed images that come from local installations of the Stable Diffusion WebUI may contain the parameters used for generation.
You can use /pnginfo command to extract them.
Note: this command will not magically extract tags from a random image that you converted to PNG.
FAQ and advanced prompting:
🧑💻 How to install something like this on my PC?
1) Download the A1111 webui package and unzip it to a drive and folder of your choice.
2) Download the model and the config, and put them next to each other in the webui/models/Stable-diffusion folder inside your A1111 install.
— EasyFluff 11.2
— Config for EasyFluff 11.2
(You need to press Ctrl+S to save the config. It usually saves as a .yaml.txt file; rename it to just .yaml)
3) Add the --xformers argument to the set COMMANDLINE_ARGS= line in the webui-user.bat file inside the webui folder.
Additionally, if you get errors when generating high-resolution pictures, try adding --medvram or --lowvram (see the example line after these steps).
4) Run update.bat once; it will download the newest files.
5) Run run.bat. It will take some time because it has to install all the dependencies. Don't worry, the next start will be a lot faster. If a new browser tab opens with 127.0.0.1:7860, you are done.
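For reference, the edited line from step 3 in webui-user.bat might look like this (--medvram is optional, add it only if you hit VRAM errors):
set COMMANDLINE_ARGS=--xformers --medvram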
🧠 What is a LoRA?
Think of LoRAs as additional plug-in neural models that you can call with <lora:some_lora_name:1> and trigger word(s), if the specific LoRA has any.
They may change the style of the output, add details, or help to generate lesser-known characters or specific subjects. You can use multiple LoRAs in the same prompt.
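For example (some_lora_name is just a placeholder - pick a real name and weight from the list mentioned below):
/txt2img wolf, portrait, digital artwork <lora:some_lora_name:0.8>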
You can view the always up-to-date list in @eris_info_bot
👥 How about multiple characters?
Duo/trio or bigger groups with properly distinct characters can be very random. If you use SD locally, you can run multiple batches until a good seed comes out, then lock it and adjust the prompt/weights further. If you use the bot, just keep trying. Be sure to add all duo/group-related tags (duo, cuddle, hug, etc). Landscape orientation also tends to work better for duo+ prompts.
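For example, a duo attempt in landscape orientation (an illustrative sketch):
/txt2img duo, anthro wolf and anthro fox, hug, looking at other, outside size: 832x576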
For the local WebUI, you can use an extension called Regional Prompter.
🖋️ Possible artists/tags?
Use the e621 wiki as a good starting point. Here is the breasts overview article for example.
Also, here is a list of tags/artists extracted from the dataset of another model called FluffyRock.
EasyFluff is not based only on Fluffyrock and may know more, but you can still use the Fluffyrock tag/artist list to get an idea of which tags are recognized.
🎆 Why is my image oversaturated?
Don't use a very high CFG value. Just don't. Keep it somewhere in the 6-11 range.
🔥 Why is my image burned out?
Maybe you forgot the space when you wanted to set your steps, and the bot's parser took it as the word 'steps' with a weight of 40.
Steps:40 is WRONG
Steps: 40 is CORRECT
Or the worker that generated your image may be experiencing issues with its "model config". Report that in the chat; if it was already reported, wait for a fix.
🎲 What is prompt editing?
Prompt editing is a feature in the A1111 WebUI that allows you to swap part of the prompt at some point during generation. It can be useful for creating hybrid characters. The syntax is very simple:
[from:to:when]
[bird:dog:0.2]
That means it will try to generate a bird during the first 20% of the steps and a dog during the last 80%, merging both together with the bird as the base.
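For example, a hybrid character sketch for the A1111 WebUI prompt box (illustrative values - adjust the switch point to taste):
[wolf:dragon:0.4], anthro, portrait, digital artwork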
🧩 What does [tag1|tag2] mean in a prompt?
It is a syntax for swapping parts of the prompt every other step.
For example [wolf|horse], sitting, smile
Step 1, prompt is "wolf, sitting, smile"
Step 2 is "horse, sitting, smile"
Step 3 is "wolf, sitting, smile" and so on.
It can also be extended endlessly, which is pretty useful for mixing artists: [word1|word2|word3|word4|...].
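For example, alternating the artists mentioned earlier in this guide (illustrative):
[by personalami|by pixelsketcher|by scruffythedeer], wolf, portrait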