
Content

Stable Diffusion Update:

Hi all, over the weekend I got my projects working on the new SSD. The new computer can now run 50 steps in less time than the old one took to run 10, lol.
Needless to say, that helped me fix the bugs from the last update a lot faster.

I was also able to test running the model with float32, so that was easy to fix as well.
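In diffusers terms, the precision choice comes down to a single argument; a minimal sketch, not the app's actual code (the model ID shown is the stock one):

    import torch
    from diffusers import StableDiffusionPipeline

    # float16 halves VRAM use but can misbehave on some cards; float32 is the safer default
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float32,  # swap in torch.float16 to save memory
    )
    pipe = pipe.to("cuda")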

The only untested feature is GPU selection, but I fixed a few problems with it using the feedback from everyone here, so it's possible it works now.
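Selecting a GPU ultimately boils down to a device move; a rough sketch assuming a standard diffusers pipeline (the index is a stand-in for whatever the GUI picks):

    import torch
    from diffusers import StableDiffusionPipeline

    gpu_index = 1  # hypothetical: whatever the user selects in the GUI
    device = f"cuda:{gpu_index}" if torch.cuda.is_available() else "cpu"

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to(device)  # moves UNet, VAE and text encoder together; leaving one
                            # on the CPU causes the classic cpu/cuda:0 mismatch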

There are also some other fixes, like support for emoji and Japanese characters in the input, and quite a few fixes for the DDIM and LMSDiscrete schedulers. Img2Img will work with all schedulers now. Inpainting still won't work with LMSDiscrete.
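For reference, swapping schedulers in diffusers is a one-liner as long as the new one is rebuilt from the same config; a minimal sketch with the stock pipeline:

    from diffusers import StableDiffusionPipeline, DDIMScheduler, LMSDiscreteScheduler

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    # Schedulers are interchangeable when built from the pipeline's existing config:
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
    # or:
    pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)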

I'm also adding the option to select an .mp4 or .gif as input (video2video). It will generate an .mp4 (running img2img on each frame) and a .png for each frame.
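Under the hood, video2video is essentially img2img run once per frame; a rough sketch of the idea with imageio, not the app's actual code (file names and prompt are placeholders, and older diffusers releases named the image argument init_image):

    import imageio.v2 as imageio
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    reader = imageio.get_reader("input.mp4")  # hypothetical input file
    writer = imageio.get_writer("output.mp4", fps=reader.get_meta_data()["fps"])

    for i, frame in enumerate(reader):
        init = Image.fromarray(frame).convert("RGB").resize((512, 512))
        out = pipe(prompt="oil painting of a city", image=init, strength=0.5).images[0]
        out.save(f"frame_{i:05d}.png")     # one .png per frame, as described above
        writer.append_data(np.array(out))  # plus the assembled .mp4
    writer.close()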

I should be able to build the new version tonight.


Rife-App Update:

I just finished training the new model, and the new GUI is pretty much ready; the new computer will help with testing. I just need to finish training the Real Life model to finally be able to release a new update. Once that happens, I'll do a lengthy post about the development from the last update until now.

Comments

Anonymous

nice work, as always!

Andrew McKenzie

Exciting! Are we going to be able to lock the seed for video2video and adjust the weighting of the source's influence, to try and get frame-to-frame consistency?

DAINAPP

In this next version it will use the same seed for all frames, to try to improve consistency. It will use the img2img strength as well.
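In diffusers terms, that amounts to re-seeding the generator before every frame; a small sketch, assuming pipe, frames, and prompt are set up as in a usual img2img loop:

    import torch

    for i, frame in enumerate(frames):
        # Re-seed each frame so all of them start from identical noise;
        # only the input frame changes, which reduces flicker
        generator = torch.Generator(device="cuda").manual_seed(1234)
        out = pipe(prompt=prompt, image=frame, strength=0.4,  # lower strength stays
                   generator=generator).images[0]             # closer to the source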

Anonymous

I appreciate these updates and your hard work. I wish you would optimize the code for basic use, which is text2img, before trying to implement advanced features that few really use. I find myself stuck at v0.4, as 0.5 and even v0.51 are still unusable for me due to memory issues, and that kind of sucks as a VIP Patron.

Anonymous

I've seen a couple of other solutions that have integrated a masking tool for inpainting, where you basically get a preview of the image and can paint white over the part to change, then apply. Would you consider something like that?

Anonymous

Is there any way, currently or in the future, to access the AI remotely (e.g. type a prompt on your phone and have your PC process it and return the results wirelessly)? I heard some web UI versions can do this, but I'm not sure if it's possible here.

Anonymous

Thank you for developing this. I am very much looking forward to video2video!!!! (mp4 option)

Anonymous

Getting new computer components is always fun and exciting. Glad you have them, and thank you for putting them to use advancing your Stable Diffusion program.

Anonymous

Sounds awesome, thanks for the update! Looking forward to testing out the mp4 generation!

Anonymous

Same, I just hope my PC will be able to do it; idk if it can handle video2video.

Anonymous

So for 0.51 of SD, I can't use the models from Hugging Face that were working before. I get this error:

Traceback (most recent call last):
  File "start.py", line 304, in Renderer
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 169, in Txt2Img
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\unet_2d_condition.py", line 225, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\embeddings.py", line 73, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
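That "cpu and cuda:0" mismatch usually means one submodule of the pipeline stayed on the CPU while the rest moved to the GPU. A quick way to check with a plain diffusers pipeline, outside this app:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("cuda:0")  # moving the whole pipeline in one call avoids the mismatch

    # Print where each submodule actually landed:
    for name in ("unet", "vae", "text_encoder"):
        module = getattr(pipe, name)
        print(name, next(module.parameters()).device)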

DAINAPP

I find this really strange, since 0.51 should be using a little less memory than 0.4. Can you take a screenshot of the memory usage on 0.4 and 0.51, with "Same Memory" turned on and upscale turned off?

DAINAPP

Yeah, a paint tool takes a bit of work, so it will still take a little while to appear.

DAINAPP

It would not be impossible, but it is a lot of work; it would need to be really useful to users to justify implementing something like that.

DAINAPP

Yep, go into Performance and then select your graphics card. I need to see the GPU memory usage while the code is running.

Anonymous

I trained two styles on a Google Colab. I am sure you are planning to add the functionality, but would it be possible to load more than one trained concept (.bin)? There is no other software that can do it at the moment. It would be fun to make fake pictures of my wife and me around the world 😄

DAINAPP

Can you send me the Colab script in a private message? I want to add this option, but I need to take a look at the code for that.
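For what it's worth, the textual-inversion Colabs save each concept as a small dict mapping the placeholder token to its embedding, so loading several into one pipeline is mostly a matter of repeating the same steps per file. A rough sketch under that assumption (file names are hypothetical):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

    def add_concept(pipe, path):
        # each .bin holds {placeholder_token: embedding_tensor}
        token, embedding = next(iter(torch.load(path, map_location="cpu").items()))
        pipe.tokenizer.add_tokens(token)
        pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
        token_id = pipe.tokenizer.convert_tokens_to_ids(token)
        weights = pipe.text_encoder.get_input_embeddings().weight.data
        weights[token_id] = embedding.to(weights.dtype)

    for path in ("style-one.bin", "style-two.bin"):  # load as many as you trained
        add_concept(pipe, path)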