
Content

There is a bug in one of the libraries the project uses. I fixed it a few versions back, but I recently had to update the library to fix a problem with Dreambooth training, and the bug came back.

This time I opened an issue on the library's source repository, so the bug shouldn't come back again in the future.


The library runs a test on xformers that can crash everything, even if you have xformers disabled.
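As a rough illustration only (this is not the project's or the library's actual code), the usual defensive pattern for this kind of failure is to wrap the xformers probe in a try/except, so a missing or broken install simply means "don't use xformers" instead of crashing the whole app:

    # Illustrative sketch, not the real fix: probe xformers safely.
    def xformers_usable() -> bool:
        try:
            import xformers      # can fail on a broken or mismatched install
            import xformers.ops  # the part memory-efficient attention needs
            return True
        except Exception:
            # Treat any failure as "unavailable", even when xformers is
            # supposed to be disabled anyway.
            return False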


This version has my fix back in place, and hopefully the library will fix it on their side in the future.


Link:

https://drive.google.com/drive/folders/1Jvr2nCUBuVOnO0Ic_LcUW9Jm2zjLZgSB?usp=share_link 

Comments

Anonymous

Thanks for all the work. Is it possible to get the Stable Diffusion tiling model running with your GUI? It's called Material Stable Diffusion.

cool1

Also, I was trying img2img with video again (3840x2160 source video, around 896x512 output). I couldn't get the videos it created to play, but the frames it created could be assembled into a playable video in another app, so it worked in that sense.

Though even with the same seed, the generated image changed a lot from frame to frame. It would be good if we could control that if possible (not just with origin image strength). E.g. if you want to change what the side of a car looks like (like a car wrap), you probably don't want it to change a lot from frame to frame.

If the Stable Diffusion app only works with one source image plus the origin image strength %, what if, as well as using the source video (when it's a video), there was an option to also specify a strength for the previously generated frame as a source for the new frame (or for the parts of it in a certain area, e.g. the parts that were quite different from the source video frame)? That would be a way to keep the frames more consistent without just increasing the origin image strength (which would limit what gets added), e.g. so an added car wrap or something could look more the same instead of changing a lot per frame.

So if the Stable Diffusion app can't handle more than the origin image (current frame) plus origin image strength %, maybe the GUI could have an option to combine the previously generated frame at a certain strength % with the current frame before applying the origin image strength, so the user could try to make it more consistent. There would be some issues with that approach, like a bit of a ghosting effect on the ground and the car in the car example, but it could still help for some videos where you want fewer changes per frame.
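A rough sketch of the blending idea described above, just to illustrate it (the function name, the PIL-based approach, and the default weight are assumptions, not something the GUI currently does):

    # Illustrative sketch of the suggestion: blend the previously generated
    # frame into the current source frame before running img2img, so
    # consecutive outputs drift less.
    from PIL import Image

    def blend_with_previous(current_source: Image.Image,
                            previous_output: Image.Image,
                            prev_strength: float = 0.3) -> Image.Image:
        """Return the blended image to use as the img2img init image."""
        previous_output = previous_output.resize(current_source.size)
        # prev_strength = 0.0 -> pure source frame, 1.0 -> pure previous output
        return Image.blend(current_source, previous_output, prev_strength)

The origin image strength would then still apply on top of this blended init image, which is where the ghosting trade-off mentioned above would come from.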

Anonymous

I'm getting this error when trying to use 512-inpainting-ema.ckpt as an inpainting model:

Loading model (May take a little time.)
Running experimental .ckpt converter
Traceback (most recent call last):
  File "start.py", line 2136, in OnRender
  File "start.py", line 2038, in LoadData
  File "start.py", line 1749, in LoadModel
  File "convert_original_stable_diffusion_to_diffusers.py", line 825, in Convert
  File "torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
    size mismatch for conv_in.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3]).
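For context on the error above: the 9 vs. 4 mismatch in conv_in.weight means the checkpoint is an inpainting UNet (4 latent + 4 masked-latent + 1 mask input channels), while the converter is building a standard 4-channel UNet. A sketch of how that could be detected before conversion (illustrative only, not the converter's actual code):

    # Illustrative sketch: inspect the first UNet conv in the original .ckpt
    # to tell inpainting checkpoints (9 input channels) from regular ones (4).
    import torch

    def detect_unet_in_channels(ckpt_path: str) -> int:
        state = torch.load(ckpt_path, map_location="cpu")
        state = state.get("state_dict", state)
        # SD v1/v2 checkpoints keep the first UNet conv under this key
        key = "model.diffusion_model.input_blocks.0.0.weight"
        return state[key].shape[1]  # 9 for inpainting models, 4 otherwise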

Anonymous

Hi, great job with the continued development! Would it be possible to add negative prompts and the number of steps to the advanced prompt options? Something like: "my prompt" -np "ugly, bad art" -steps 55. That would be useful for experimenting with different step counts and different negative prompts to create variations of the same image one after the other. Thank you!
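A tiny sketch of how such flags could be parsed out of the prompt string (the flag names follow the suggestion above; the function and syntax are hypothetical, not something the GUI currently supports):

    # Hypothetical parser for the suggested syntax:
    #   "my prompt" -np "ugly, bad art" -steps 55
    import re

    def parse_advanced_prompt(text: str):
        negative, steps = "", None
        m = re.search(r'-np\s+"([^"]*)"', text)
        if m:
            negative = m.group(1)
            text = text.replace(m.group(0), "")
        m = re.search(r'-steps\s+(\d+)', text)
        if m:
            steps = int(m.group(1))
            text = text.replace(m.group(0), "")
        return text.strip().strip('"'), negative, steps

    # parse_advanced_prompt('"my prompt" -np "ugly, bad art" -steps 55')
    # -> ('my prompt', 'ugly, bad art', 55)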