
Here we go. There was a lot of little stuff I had to fix yesterday:

There are probably still some bugs, but I did try to test most of the options/tools on this build.

Link: 

https://drive.google.com/file/d/1kLacShaNSBwRxvzQibFbEQa8TBAnl2wl/view?usp=sharing

Mirror:

https://drive.google.com/file/d/1Ply2B2zSxAiEST5hHLqYBcZ0NY76MEs3/view?usp=sharing


Who forgot to add a file, just to keep tradition? I did! Just add this file to the root folder to load .ckpt files: https://drive.google.com/file/d/1x87aQ70fNDVwb1xg53c2SoINUbGL77_0/view?usp=sharing


Changes/Features:

  • Experimental .ckpt loader: just add .ckpt files to the model folder.
  • A little button on the GUI to open the model folder.
  • Checkbox for "Advanced Prompts": this turns the extra parameters along the prompt on/off; you can hover the mouse to see the commands.
  • Turning off "Display During Rendering" with "Save VRAM" turned on may save a little more VRAM now.
  • Outpainting option.
  • Had to temporarily remove the discrete schedulers; some changes to the source code broke them.
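As a rough sketch of how the "Advanced Prompts" format could be parsed (hypothetical code, not the GUI's actual implementation; the flag names -seed and -str come from the example in the comments below, and the full list of commands is shown in the GUI tooltip):

```python
import re

def parse_advanced_prompt(text):
    """Split inline "-flag value" commands off a prompt like:
        "A red ball" -seed 632233 -str 50
    Returns (prompt, params) where params maps flag names to raw values.
    """
    params = {}
    # Pull the quoted prompt if present, otherwise take everything
    # before the first "-flag" as the prompt.
    m = re.match(r'\s*"([^"]*)"\s*(.*)$', text)
    if m:
        prompt, rest = m.group(1), m.group(2)
    else:
        parts = re.split(r'\s+(?=-[A-Za-z])', text.strip(), maxsplit=1)
        prompt = parts[0]
        rest = parts[1] if len(parts) > 1 else ""
    for flag, value in re.findall(r'-([A-Za-z]+)\s+(\S+)', rest):
        params[flag] = value
    return prompt, params

prompt, params = parse_advanced_prompt('"A red ball" -seed 632233 -str 50')
print(prompt, params)  # A red ball {'seed': '632233', 'str': '50'}
```

Note the quotes around the prompt text; as discussed in the comments, the flags are only recognized after the quoted prompt.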


Details about outpainting:

Outpainting is working, but there are a lot of fixes that still need to be done. I did not want to delay the build one more week because of this, so for now it's more of an experimental tool until I manage to finish it.

Inpainting/outpainting on Stable Diffusion is not that great. This can be fixed and has already been improved a lot by the community, but the vanilla code doesn't give very good results.

What I will do now is start reading about solutions for those tools and implement/improve the current inpainting/outpainting tool. I already have quite a few ideas to improve it.

I have a strong feeling that those tools can be improved with the right scheduler, so I'm also going to start working on adding more schedulers.


Current bugs with outpainting:

  • You need to use Samples = 1.
  • The VAE encoder/decoder slightly changes the colors of the image. I need to mask the original image back in at the end of the render; for now, it sometimes leaves a square with slightly changed colors.
  • Sometimes the new render has latents so conflicting that it will simply "eat" the original image and make a 100% new image. Until the code is improved, just keep rendering until this doesn't happen.
  • The outpainted image doesn't pay much attention to the original image. This will be the biggest challenge to fix, but I already have some ideas. It will take some time until you get a render that makes sense.
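The "mask the original image back" fix mentioned above can be sketched like this (a hypothetical illustration with numpy, not the app's actual render code): after decoding, copy the original pixels back everywhere the mask marks old content, so the VAE color drift only affects the newly generated region.

```python
import numpy as np

def composite_original(original, rendered, new_region_mask):
    """Keep original pixels wherever mask == 0, rendered pixels where mask == 1.

    original, rendered: HxWx3 uint8 arrays; new_region_mask: HxW array of 0/1
    marking the newly outpainted area. This hides the slight VAE color shift
    on the part of the image that was already there.
    """
    mask = new_region_mask[..., None].astype(bool)  # broadcast over channels
    return np.where(mask, rendered, original)

# Tiny demo: a 2x2 image where only the right column is "new"
orig = np.zeros((2, 2, 3), dtype=np.uint8)      # original: black
rend = np.full((2, 2, 3), 255, dtype=np.uint8)  # render: white (color-shifted)
mask = np.array([[0, 1], [0, 1]])
out = composite_original(orig, rend, mask)
print(out[:, 0].max(), out[:, 1].min())  # 0 255
```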


Using outpainting:

  • Select an Img2Img source image.
  • Select Mode: Outpainting.
  • In the right window, drag with the mouse until the square is at the correct location.
  • Press render and wait.
  • If you like the result in the "Previews:" window, press "Use output as Input" to start using this image as the source.
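Under the hood, the "drag the square" step amounts to placing the source image on a larger canvas and building a mask for the area the model must fill in. A minimal sketch with numpy (hypothetical; the GUI's actual implementation isn't shown here):

```python
import numpy as np

def expand_canvas(image, pad_left, pad_top, new_w, new_h, fill=127):
    """Place `image` (HxWx3 uint8) on a new_h x new_w canvas at (pad_left, pad_top).

    Returns (canvas, mask): mask is 1 over the region the model must generate
    and 0 over the original pixels, matching the square dragged in the GUI.
    """
    h, w = image.shape[:2]
    canvas = np.full((new_h, new_w, 3), fill, dtype=np.uint8)
    mask = np.ones((new_h, new_w), dtype=np.uint8)
    canvas[pad_top:pad_top + h, pad_left:pad_left + w] = image
    mask[pad_top:pad_top + h, pad_left:pad_left + w] = 0
    return canvas, mask

# Extend a 64x64 image 64 pixels to the right
img = np.zeros((64, 64, 3), dtype=np.uint8)
canvas, mask = expand_canvas(img, pad_left=0, pad_top=0, new_w=128, new_h=64)
print(canvas.shape, int(mask.sum()))  # (64, 128, 3) 4096
```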

Working now:

  • Improving Inpainting/Outpainting
  • More Schedulers
  • A GUI for DreamBooth

Comments

Anonymous

Maybe this is a dumb question. I tried following the format of the advanced prompts, but adding -seed or -str didn't seem to work at all. Can someone give an example of how a working advanced prompt would be written?

Anonymous

Not a priority, but would presets be possible? I feel that would be a big convenience. Thanks for the awesome work

Anonymous

Thanks for the continuous development of this app. :) I haven't tried inpainting and outpainting yet, but it's great that they are there when I will need them. Keep up the good work !

DAINAPP

That's weird, I will test it out more later, but an example would be: "A red ball" -seed 632233 -str 50

Anonymous

Nevermind (about my outpainting problem earlier in this post). I figured it out :) . Sorry to bother you.

Anonymous

Oh, I see! I didn't realize I needed quotes. Quotation marks worked in previous versions, but I think since version 0.4 quotes would shut down the program. Thank you!

Anonymous

where is the root directory? I get this error:

Traceback (most recent call last):
  File "start.py", line 1519, in OnRender
  File "os.py", line 225, in makedirs
FileNotFoundError: [WinError 3] The system cannot find the path specified: ''

Anonymous

got an error running with the fix added to root, using a ckpt model:

Render Start
Loading model (May take a little time.)
Running experimental .ckpt converter
Traceback (most recent call last):
  File "start.py", line 1388, in OnRender
  File "start.py", line 1270, in LoadModel
  File "convert_original_stable_diffusion_to_diffusers.py", line 631, in Convert
  File "torch\serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "torch\serialization.py", line 1049, in _load
    result = unpickler.load()
  File "torch\serialization.py", line 1042, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'pytorch_lightning'

Anonymous

can you teach this version of stable diffusion with images? if so, how do I do that?

Anonymous

The model itself you cannot teach any new tricks directly, but you can make something like an addon for it called Textual Inversion. I tried it but did not get good results trying to teach it my face, for example. You can google it; there are programs for that, but it's not as easy as Grisk Gui. Also, as far as I know, the files you generate cannot be used in Grisk Gui yet.

Anonymous

Would love to see a switch to always use the last image generated when generating multiple images in order to create progressions.

Anonymous

Would be really cool if we could fine-tune using our own images

Anonymous

Does anyone know if there is an issue with the new 40-series cards? I just upgraded from a 2080 Ti, on which I was getting around 6-7 it/s with default settings... but on my 4090 I am only getting 5-6 it/s with the default settings.