
Content

Hello all, hope everyone is fine.

First of all, a quick note: if you are sending me feedback or a bug report, please send me a private message. The problem is that Patreon is really bad at showing me messages inside posts. I try to answer everyone, but it is really easy to miss new comments inside a post.


There have been quite a few bug reports and suggestions for SD, so in the next update expect:

  • Fixing the resolution of the MP4 output.
  • There have been reports of 0.41 using less memory. This has been tricky to track down, but I may have found the little bit of extra memory the later versions are using, and I will try to get back to the memory usage of 0.41.
  • Adding a button to select the "Models" folder, so you don't have to copy/paste the models with each new update.
  • Adding a button to use "as little memory as possible", just to make setup a little easier (see the sketch after this list).
  • There have been reports about models that use another architecture. I'm well aware there is other Stable Diffusion code that doesn't use "Diffusers" (the official library). I'm making an effort to support those, but I need the original code to make it happen, so if you have any crazy model you want to get working, please PM me the code (not only the trained model).
  • Batch render is fixed again.
  • Adding an option to enable or disable advanced prompts (-x, -y, etc.) and some helper text explaining them.
  • Testing xattention to speed up render time. The main problem is making it work with the final build.
  • Testing a GUI for outpainting; this will still take some time to get working.
  • Improving the fp32 option a little more.
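
For reference, here is a minimal sketch of the kind of memory-saving switches a "use as little memory as possible" button could toggle, assuming the GUI is built on the Hugging Face diffusers StableDiffusionPipeline (the model id and settings below are placeholders, not the app's actual code):

```python
# Minimal sketch of low-memory settings in diffusers; not GRisk's actual implementation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,         # fp16 weights use roughly half the VRAM of fp32
)
pipe.enable_attention_slicing()        # trades a little speed for lower peak VRAM
pipe = pipe.to("cuda")

image = pipe("a castle on a hill at sunset").images[0]
image.save("out.png")
```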

Comments

Anonymous

Can we add our own .ckpt files to this tool?

Anonymous

Thanks for the nice GUI! One bug to report: from what I understand, seed -1 randomizes the seed for multiple samples, but it would be useful to have the actual seed saved in the text file of the samples (right now it gets saved as seed -1, which makes it impossible to re-generate the prompt with the same seed)
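
The usual way this is handled (a sketch only, not necessarily how GRisk does it) is to resolve -1 into a concrete seed before rendering and write that value, rather than -1, to the sidecar text file:

```python
# Sketch of the "-1 means random" pattern; the field name mirrors the reply below.
import random
import torch

def resolve_seed(requested: int) -> int:
    """Turn a -1 request into a concrete, reproducible seed."""
    return random.randint(0, 2**32 - 1) if requested == -1 else requested

seed = resolve_seed(-1)
generator = torch.Generator("cuda").manual_seed(seed)
# Logging the resolved value is what makes re-generating the image possible.
print(f"Selected_seed: {seed}")
```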

Aeonica

This is a fantastic tool already, though I'm absolutely looking forward to outpainting.

Anonymous

Look near the end of the text in the text-file. Look for "Selected_seed:"

Anonymous

Can't wait for outpainting! Awesome tool!

Anonymous

Forgive me if this has already been answered elsewhere, but is there currently a way to implement negative prompts in the Grisk GUI Stable Diffusion app? If not, is that something that would be possible to implement eventually? Thanks!
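
For context, the diffusers pipeline itself accepts negative prompts as a call argument; whether and how the GUI exposes that is up to the author. A minimal sketch of the underlying call, assuming a diffusers backend and a placeholder model id:

```python
# Sketch of negative prompts at the diffusers level (assumed backend, placeholder model id).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of an astronaut",
    negative_prompt="blurry, low quality, extra fingers",  # concepts to steer away from
).images[0]
image.save("astronaut.png")
```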

Anonymous

Hi, I tried Automatic1111 and found it renders in a little less than half the time; in other words, GRisk takes about 1.8x as long to render as Automatic1111. NMKD is also faster than GRisk GUI. Can you take a look? Thanks! 🙏

Anonymous

Google's DreamBooth has been modified to run in a lower-VRAM environment and now works in 10 GB of VRAM. GRisk may be able to use this. https://www.reddit.com/r/StableDiffusion/comments/xtc25y/dreambooth_stable_diffusion_training_in_10_gb/ "DreamBooth Stable Diffusion training in 10 GB VRAM, using xformers, 8bit adam, gradient checkpointing and caching latents."
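
For reference, the VRAM savings in that thread come from a few independent pieces that can be combined in a diffusers-based training loop. A rough sketch of those pieces (assuming diffusers, xformers and bitsandbytes are installed; this is not a complete training script):

```python
# Sketch of the memory levers mentioned in the linked thread; not a full DreamBooth trainer.
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # placeholder model id
)
unet.enable_gradient_checkpointing()               # recompute activations instead of storing them
unet.enable_xformers_memory_efficient_attention()  # xformers attention kernels
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-6)  # 8-bit optimizer states
# Caching the VAE latents of the training images ahead of time is the remaining trick
# mentioned in the thread; it is omitted here for brevity.
```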

Anonymous

Thanks for the update. Any plans for an animation option like in the deforum colab?

Anonymous

I'm trying to use models from https://rentry.org/sdmodels#stable-diffusion-models and when I try to render with one I get this:

Render Start
Loading model (May take a little time.)
Loading model from file gg1342_testrun1_pruned.ckpt
Traceback (most recent call last):
  File "start.py", line 944, in OnRender
  File "start.py", line 845, in LoadModel
AttributeError: 'dict' object has no attribute 'enable_attention_slicing'
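
For anyone hitting this: the traceback suggests the .ckpt is being loaded as a raw state dict (a plain Python dict) rather than a diffusers pipeline, which is why enable_attention_slicing is missing. As a workaround outside the GUI, recent diffusers releases can load a single-file checkpoint directly (a sketch, not the app's code):

```python
# Sketch: loading a single-file SD checkpoint with a recent diffusers release.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "gg1342_testrun1_pruned.ckpt",  # the file from the comment above; adjust the path
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()     # the method the traceback expected on a pipeline object
```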

Anonymous

How do I use .ckpt models?

DAINAPP

There should be a field called selected_seed with the correct seed, unless there is a bug.