
Content

https://drive.google.com/file/d/1zY7BTTlhjVJWUW9Q647EjQ057hA8MXzu/view?usp=sharing

We'll try to upload the file to a mirror soon; some users had download problems with the last release.

New features:

GPU selection: If you have multiple cards, you can select the one you want to use. You should be able to run two instances of the application, one per card, to use both. I only have one card, so this may need some fixing.
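As a purely illustrative sketch (not the GUI's actual code): the standard CUDA way to pin a process to one card is the `CUDA_VISIBLE_DEVICES` environment variable, set before the ML framework initializes. Running two copies of an app, each pinned to a different index, lets both cards work at once.

```python
import os

def select_gpu(index: int) -> None:
    """Restrict CUDA in this process to a single card by index.
    Must be set before the framework (e.g. PyTorch) initializes CUDA."""
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)

# One process per card, e.g. select_gpu(0) in one instance
# and select_gpu(1) in another.
select_gpu(1)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # "1"
```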

Use Float32: For cards that can't handle half precision and return a black image, this option uses full precision instead. It doesn't affect the result (maybe a little if you download the full-precision model), but it uses double the VRAM to load the model into memory. My current card can't handle the extra memory, so this will probably be improved once I get my new card.
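The memory cost is easy to see from byte sizes: each weight stored in half precision (float16) takes 2 bytes, versus 4 bytes in full precision (float32). A minimal sketch; the parameter count below is my rough assumption for an SD-sized model, not a number from this post.

```python
import struct

# struct format "e" is IEEE half precision, "f" is single precision.
BYTES_HALF = struct.calcsize("e")  # 2 bytes per float16 weight
BYTES_FULL = struct.calcsize("f")  # 4 bytes per float32 weight

n_params = 860_000_000  # assumed, rough size of an SD-scale UNet
print(n_params * BYTES_HALF)  # bytes needed in half precision
print(n_params * BYTES_FULL)  # roughly double in full precision
```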

Scheduler: I won't pretend to fully understand exactly what the scheduler does, but it helps compute the latent image, and different schedulers generate different final images.

Currently there are three schedulers:

PNDM: The default, used by every version of the GUI until now.

DDIM: Another scheduler.

LMSDiscrete: This scheduler works quite differently from the others; it isn't a simple drop-in replacement, so img2img and inpainting currently don't work with it. I'll try to fix this in the future.

There are many more schedulers to add; I'll try to include a few more in the next update. Some schedulers let you use more steps without breaking the image.

Each scheduler also has a "default" version: the same scheduler, but without the initial variables used by SD. Personally it gave me bad results, but since some users like this experimental stuff I left them in. I might remove them in the next update.
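As a sketch of how the scheduler choice might be wired up: only the class names `PNDMScheduler`, `DDIMScheduler`, and `LMSDiscreteScheduler` come from the diffusers library; the mapping and the compatibility check below are illustrative, not the app's real code.

```python
# Hypothetical mapping from the GUI dropdown to diffusers scheduler classes.
SCHEDULERS = {
    "PNDM": "diffusers.PNDMScheduler",  # default
    "DDIM": "diffusers.DDIMScheduler",
    "LMSDiscrete": "diffusers.LMSDiscreteScheduler",
}

def supports_mode(name: str, mode: str) -> bool:
    """LMSDiscrete is not a drop-in replacement, so for now only
    txt2img works with it; the others support all three modes."""
    if name == "LMSDiscrete":
        return mode == "txt2img"
    return mode in ("txt2img", "img2img", "inpainting")
```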


Prompt with parameters:

You can now wrap the prompt in double quotes and then add extra parameters, similar to the Discord bot. Here are some working examples:

The face of a man

"The face of a man"

"The face of a man" -x 64 -y 64 -seed 123 -str 50 -samples 7 -scale 7.5

You don't need to set all the parameters, just the ones you'd like to override from the GUI.
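The quoted-prompt form above can be parsed with a quote-aware tokenizer. The flag names (-x, -y, -seed, -str, -samples, -scale) come from the examples in this post, but the parsing logic itself is a sketch, not the GUI's actual code, and it assumes the prompt is in double quotes.

```python
import shlex

def parse_prompt(line: str) -> tuple[str, dict]:
    """Split a line like
        "The face of a man" -x 64 -seed 123
    into the prompt text and a dict of flag overrides."""
    tokens = shlex.split(line)  # keeps the quoted prompt as one token
    if not tokens:
        return "", {}
    prompt, overrides = tokens[0], {}
    rest = iter(tokens[1:])
    for flag in rest:
        # Each flag like -seed is followed by its value.
        overrides[flag.lstrip("-")] = next(rest, None)
    return prompt, overrides
```

Flags that are absent simply stay out of the dict, matching the "only override what you set" behavior described above.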


Improvements:

Better memory handling for the upscaler

Better img2img step handling

Improvements to inpainting; it can still use some work, and I'll try to improve it again later on.

The default model (2.4) is now in .pt format. This changes nothing except that the model loads faster when you open the app.


Fixes:

img2img strength is now correct

Save In Grid should be working

Small fixes in the code.


Bugs:

LMSDiscrete does not work with img2img or inpainting for now.

GPU selection is not working for the upscaler yet; I'll have to fix it. For now you'll need to run the GUI without the upscaler when using GPU != 1.

Files

Stable Diffusion GRisk GUI 0.51.rar

Comments

Anonymous

Not all are problems: I'm seeing a very good speed improvement on huge renders. Testing at 1472x832 on a 3080 10GB, 0.51 runs at 1.56s/it where 0.50 ran at 3.25s/it, so it's twice as fast! :D At 896x512, 0.51 does 3.49it/s and 0.50 does 2.56it/s. At 512x512, 0.51 does 6.26it/s and 0.50 does 5.48it/s. This was with the same prompt and options, with "Save Vram" checked. If I leave the "Save Vram" checkbox unchecked, then at 1472x832, 0.51 does 1.47s/it and 0.50 does 2.74s/it. So "Save Vram" doesn't affect speed on 0.51, and 0.51 is faster than 0.50 either way. I can also make bigger images on 0.51 than on 0.50 before running out of memory. A very good improvement! Thanks for your work :)

Anonymous

A suggestion about high-resolution images: it would be nice to automate a routine that works very well for me. When I try to render a high-resolution image directly, it adds deformations or repetitions, but that doesn't happen (or at least not as much) if I render a low-resolution image and then feed it into img2img with a high-resolution output and the exact same prompt. Automating this (render a low-resolution image, pass it automatically to img2img, render again) gives sharper results than any upscaler. Greetings

DAINAPP

Thanks for the suggestion, Javier; I was thinking about something like that. In any case, it looks like V2 of SD is being trained on 1024x1024 images, so it might fix itself? But I'll try to add something like that in the future as well.

DAINAPP

That's good to hear. There have been some changes in 0.51, so I'm glad it's working correctly; I just need to fix the other bugs now.

Anonymous

Hi. First of all, congrats on your hardware update, hope it serves you well! Second, I can't seem to download v0.5 or v0.51: when I press the "Download" button it takes me to the Patreon home page, showing "Latest posts". Is it something I'm missing or doing wrong? Tried with both Chrome and Mozilla on the latest versions. Thanks!

Anonymous

You need to click where it says "Stable Diffusion GRisk GUI 0.51.rar - Google Docs" at the top of this post.

Anonymous

Thank you very much, it's working now! And it's faster than v0.4: 6it/s instead of 4.5it/s.

Anonymous

Not sure what is causing this issue. I tried with:

"prompt" --gpus 1
"prompt" -gpus 1

Tried with/without upscaling (as per the OP details above). Same failure on all attempts. Output below.

====
Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-v1-4-f16
Prompts in Queue: 1
Rendering Text2Img: test
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "start.py", line 304, in Renderer
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 169, in Txt2Img
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\unet_2d_condition.py", line 225, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\embeddings.py", line 73, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
====

Anonymous

Can you add a pop-up menu with help? Those commands (-seed, -str, etc.) are very useful, but I can't remember them when I want to use them and need to come back here. Is -str for "Origin Image Strength"? If so, it didn't work for me. Edit: I found that -y and -x also don't work when using img2img.

Anonymous

I've found 2 bugs with img2img. If I select an image for img2img and set "Origin Image Strength" to 100.00, it gives me a "ZeroDivisionError":

Render Start
Prompts in Queue: 1
Rendering Img2Img: two grils, Full body, clothed. realistic style at CGSociety by WLOP, Ilya kuvshinov, Krenz Cushart, Greg Rutkowski, trending on artstation. Realistic fantasy cute indigenous brunette Pixar-style young woman, expressing joy, silky hair, wearing flowers, Cinematic dramatic atmosphere of a mystic forest, sharp focus, soft volumetric studio lighting
Traceback (most recent call last):
  File "start.py", line 304, in Renderer
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 245, in Img2Img
ZeroDivisionError: float divmod()

The second bug is also with Origin Image Strength: with values of 95, 90, or 85 there is no problem, but with, for example, 87 or 93, it doesn't do the 50 steps I selected, it does 270 steps! It's weird, but it happens every time I try.

Anonymous

I re-downloaded the entire zip, and it's doing the same thing. Not sure what to do here.

Anonymous

I have found more bugs that weren't present in 0.5. Some of my saved prompts use " to highlight a word; on 0.51 that completely crashes the program. There are also problems with some special characters used in Nordic artists' names.

Anonymous

I don't want to flood the comments, but I use this software all day 😅 and I have a suggestion. Midjourney has an option to "evolve" a render; I'm not sure what it does, but I think it simply loads the output render into img2img with the same prompt and a different seed. It would be nice to have a button to make some variants of the output. We can do it manually, but doing it automatically would be useful.

Anonymous

Could we get img2img to generate an appropriate text prompt, as seen in this video? https://youtu.be/PddIlnAdv68?t=150 (starting at 2:30). That would really help generate img2img images that actually look good, because you could include the text prompt and edit it as you want. Thanks!

DAINAPP

I need to watch the entire video to see what is going on there. I'll save it to watch later.

DAINAPP

I will add an option to load the img2img result as an input. But that generation is probably using CLIP to generate alterations, which will also come to the GUI eventually.

DAINAPP

The Nordic names I'm already fixing. I may change the double quotes to something else so this stops happening.

DAINAPP

Good catch on strength = 100, will fix. The other bug is strange; I'll run some tests and see what is going on.

DAINAPP

-x and -y should work with img2img; I'll test to see what is going on. After this update, I'll add something to help with the commands.

DAINAPP

I'm already working on fixing the GPU problem; it should be fixed in the next update.

Anonymous

AMAZING RELEASE, THANK YOU <3 By the way, does anyone know how to change the upscaling model? Is there a folder to drop custom upscaling models into?

DAINAPP

There's no way to change upscalers for now. Which upscaler would you like to use?