In celebration of our first member, I have set up a ComfyUI space for our members to use. ComfyUI is an image generator. If you don't know how to use it, don't worry: it has a great feature that lets you recreate the state an image was generated in by extracting the data embedded in the image and rebuilding the pipeline. Long story short, just drag and drop the image attached to this post into ComfyUI (make sure it gets dropped on the background) and the program will do the setup for you. Now just hit Queue and you're generating art. Parameters to play with:
- Prompt: the text fields on the board, usually found inside the nodes labeled as a "Text Encoder"; they are what the model uses to interpret your request. Focus on the positive prompt and break your concept into smaller chunks separated by commas. Ex. "Beautiful sunset, blue skies, sandy beach, ocean lapping into shore, award winning illustration, very detailed, bold linework, bright saturated colors"
- CFG: you can find this on the node called "Sampler", often a KSampler. It controls how closely the model tries to adhere to your request: a higher number means it will stick to your prompt, a lower number means it will be more creative. There are diminishing returns, though; past a certain point the image will look "burnt" or "overcooked," basically overexposed and ugly. 7-9 is a good starting range.
- Denoise: this is also on the sampler. It controls how much noise (think of static on a television, where the static is the starting image) is taken away during the process. It runs from 0 to 1, where 1 means 100% (just move the decimal two places to get the percent). 1 means all the noise gets removed and the image is rebuilt from scratch; 0 means the sampler leaves the input untouched. I usually go for 0.8-0.9 (80-90%) on an image generated from text and 0.1-0.3 (10-30%) for image to image.
- Steps: the number of passes the model takes over the image. Each pass removes a bit more noise, and the more passes, the clearer the image gets, but like CFG it has diminishing returns: if the model passes over an image with no noise left, it starts damaging the image by trying to remove noise that isn't there. I like 15-25 with regular Stable Diffusion models and 35-50 with XL models.
- Sampler: think of these as kind of like brushes; they give different textures and other characteristics that are not easily definable. I like ddim, euler, and dpmpp_3m_sde_gpu (the name rolls off the tongue, doesn't it?), but play around and see what works and what doesn't.
- Scheduler: these are the algorithms that decide when and how much noise to take out. I personally almost always stick to exponential, but you can get good results with any of them depending on your other settings. The best way to figure it out is to play with the knob, hit Queue, and see what comes out. All of these knobs show up together in the sketch below.
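If you're curious what all those settings look like under the hood, here is a minimal sketch of a KSampler node as it appears in ComfyUI's API-format workflow JSON. The input names (steps, cfg, sampler_name, scheduler, denoise) come from the stock KSampler node; the node IDs and the upstream connections are made up for illustration.

```python
# A minimal sketch of how the knobs above appear in ComfyUI's API-format
# workflow JSON. Node IDs ("3", "4", ...) and connections are hypothetical;
# the KSampler input names are the real ones from the stock node.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,                  # change for a different image
            "steps": 20,                 # passes over the image (15-25 for regular SD)
            "cfg": 8.0,                  # prompt adherence (7-9 is a good start)
            "sampler_name": "euler",     # or "ddim", "dpmpp_3m_sde_gpu", ...
            "scheduler": "exponential",  # when and how much noise comes out per step
            "denoise": 1.0,              # 1.0 = 100%, rebuild fully from noise (text-to-image)
            "model": ["4", 0],           # hypothetical checkpoint loader node
            "positive": ["6", 0],        # hypothetical positive Text Encoder node
            "negative": ["7", 0],        # hypothetical negative Text Encoder node
            "latent_image": ["5", 0],    # hypothetical Empty Latent Image node
        },
    }
}
```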
Those are the ground-level basics you need to know. Just start generating and modifying those values. The first attached image will set up a workflow that generates a still image. As a treat, I've added a second image that sets up a workflow to generate a short video and convert it into a gif. Don't forget to save your files and have fun!
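By the way, the drag-and-drop trick works because ComfyUI writes the whole workflow into the PNG's metadata when it saves. If you ever want to peek at that yourself, here's a quick sketch; it assumes you have Pillow installed, and the filename is just a placeholder for one of your saved outputs.

```python
import json
from PIL import Image

# ComfyUI stores the node graph as a "workflow" text chunk inside the PNG.
img = Image.open("ComfyUI_00001_.png")  # placeholder filename
workflow_json = img.info.get("workflow")

if workflow_json:
    graph = json.loads(workflow_json)
    # This graph is what gets rebuilt when you drop the image onto the canvas.
    print(f"Embedded workflow found with {len(graph.get('nodes', []))} nodes.")
else:
    print("No embedded workflow (the image may have been re-saved or stripped).")
```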
Note: I have not put any security on this endpoint for now. Please don't share it widely; I will be adding OAuth soon, so you will need a Google account to access it. If it gets abused I will pull it down.