
Content

Hi all, I got home kind of late yesterday, but managed to test and deploy this new version.

The new computer really is a blessing for builds. On the old one a build was an hour of work; on this one it's ready in less than 15 minutes.

Since I got home a little late, I wish it could have been tested a little more, but it seems everything is working. I'm just unable to test the GPU selection, since this PC only has one card.


https://drive.google.com/file/d/1HJP3ej1_bpw3Ii22pa4IpjOb5I43nM7W/view?usp=sharing

Mirror:

https://drive.google.com/file/d/1Ev-h8fooampc_ot4hwQv81lB5m_HfVcw/view?usp=sharing

The download should work now with videos.


Edit: Batch render may be broken? Man, it's impossible to make everything work lol. Will fix ASAP.

My goal was to fix some bugs. Schedulers should work better; only Discrete + Inpainting should not work.

Emoticons and other special characters should now work in prompts.

The prompt stack should work again (pressing render before the current process is complete).

If you select a .mp4 or .gif as the Img2Img source, it will process the video. If you have any other type of video, just rename it to .mp4 for now; it should work. It will use the same seed and img2img strength for all frames.
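Since the loader goes by file extension, a quick stdlib sketch of the rename trick described above (the `as_mp4` helper is hypothetical, not part of the GUI). Note that this only changes the container label so the file gets picked up; it does not re-encode the video.

```python
from pathlib import Path
import shutil

def as_mp4(video_path: str) -> Path:
    """Copy a video to a sibling file with an .mp4 extension so an
    extension-based loader will pick it up. Hypothetical helper:
    this only relabels the file, it does not re-encode anything."""
    src = Path(video_path)
    dst = src.with_suffix(".mp4")
    if src.suffix.lower() != ".mp4":
        # Copy instead of rename so the original file is preserved.
        shutil.copyfile(src, dst)
    return dst
```

Copying rather than renaming keeps the original file intact in case the relabeled copy turns out not to decode.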


There are quite a few user suggestions on my TODO list that I've already started adding for the next release.

Comments

cool1

Also, when using a source video at a 16:9 aspect (x res 896, y res 512, landscape format), it created an output video in the same directory as the output images, but the video had the wrong aspect ratio: width 512, height 896 instead of width 896, height 512.

cool1

Also, in the interface there's a checkbox for "use float32" whose help text says "use only if your card returns black images. It uses more VRAM". But can't it also improve quality a bit? It seems to improve quality in some images. If it can improve quality (by using 32 bits instead of 16), shouldn't the help text mention that as one of the reasons to check it? Also, in the text files it creates for images it says "'half': 1,". What does half: 1 do? Does that mean it's doing something at half resolution? If it affects quality, would it be possible to change the setting in the interface, please?

Anonymous

Getting this error when trying to open an mp4 file: "'ffprobe' is not recognized as an internal or external command"

cool1

So go to the Google Drive link that GRisk AI gave in a reply in this comments section. That link has the missing files ffmpeg.exe and ffprobe.exe; just copy those into your Stable Diffusion 0.52 directory. Or you can get those two files from the ModNet app's directory after it's installed (the ModNet app is in the downloads section if you want to install it).
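Once ffmpeg.exe and ffprobe.exe are copied next to the app as described above, a quick stdlib check can confirm they are actually reachable, either on PATH or in the current directory (the `ffmpeg_tools_found` helper is hypothetical, just for diagnosis):

```python
import shutil

def ffmpeg_tools_found() -> dict:
    """Report whether ffmpeg and ffprobe are reachable, either on the
    system PATH or in the current working directory."""
    found = {}
    for tool in ("ffmpeg", "ffprobe"):
        on_path = shutil.which(tool) is not None
        local = shutil.which(tool, path=".") is not None
        found[tool] = on_path or local
    return found
```

If either entry comes back False from the app's directory, the video features will fail with the "'ffprobe' is not recognized" error above.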

Anonymous

I can't seem to get other models to work anymore on this one. I can select the model, but trying to run it gets the error "RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 7.19 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF" even if I remove the base model. Is there a difference between .pt and .ckpt files?
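For what it's worth, the `max_split_size_mb` hint in that error message refers to PyTorch's documented `PYTORCH_CUDA_ALLOC_CONF` environment variable. It has to be in the environment before PyTorch initializes CUDA, so it must be set at the very top of a launcher script (or in the shell before starting the app). The value 128 below is just an example; it is not guaranteed to fix this particular case:

```python
import os

# Must be set before torch initializes CUDA, so do it before
# "import torch" anywhere in the process (or set it in the shell).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # only after the variable is in place
```

Smaller split sizes reduce fragmentation at some cost in allocation speed; this only helps when reserved memory is much larger than allocated memory, as the error text says.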

Anonymous

Just confirming that the black images issue isn't a problem in this release when the float32 option is enabled. This is for the GTX 1660 cards.

Anonymous

Just wondering how best to use a video as an input. Is the point to also use a prompt that it'll apply along with the video, merging the two in some way to produce a new video?

Anonymous

I had the same error with video2video (mp4), but the advice you gave me solved the problem. Thank you very much. cool1 wrote: > That link has the missing files ffmpeg.exe and ffprobe.exe and just copy those into your stable diffusion 0.52 directory. > you can get those 2 files from the ModNet app's directory.

cool1

Yes, I think so. You could use a prompt like "cartoon" or "oil painting" and I think it pushes the result toward that. The "origin image strength %" value is how much you want the result to look like the original video (or image, if you were doing an image). Though if you wanted, you could overlay the finished video onto the original in a video editor at various opacity settings (it wouldn't work the same way as the "origin image strength %" setting).

cool1

Thanks for the update. Would it be possible to add an option to train the AI ourselves if we have a GPU that's capable of it (i.e., additional training on new things)? There's a YouTube video where someone trains a Stable Diffusion AI on new things, but in an online way, which cost him money to rent a server with a capable GPU.

DAINAPP

I was missing these files on the first upload. But glad you managed to fix it =)

DAINAPP

Self-training should be possible, yes, but it will still take some time until the GUI makes it possible.

DAINAPP

half: 1 is an old variable; I will remove it or make it reflect the float32 option. float32 should change things a little with the full weights. I may create a small tutorial for it.

DAINAPP

Are you using fp32 or more than 1 sample? No model should be taking that much memory. Send me a PM if you still have this error.

Anonymous

Hi. Where is the best place to learn how to use all the features? Any links to videos?

Anonymous

Thanks. Or, if any of the more advanced users have put anything on youtube - please could they link to it here?

Anonymous

Hi, I'm trying to use the Float32 setting for my GTX 1650 Ti as it was returning black images, but I received the following error: Tried to allocate 86.00 MiB (GPU 0; 4.00 GiB total capacity; 3.17 GiB already allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF. Is the documentation referred to here the Nvidia CUDA documentation or the PyTorch documentation?

DAINAPP

You have 4 GB of VRAM? I don't think you will be able to run the model with float32 and 4 GB of VRAM; float32 pretty much doubles the VRAM needed. Unless I can find some fix for it.
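The "doubles the VRAM" point follows from simple arithmetic: float32 stores 4 bytes per weight versus 2 for half precision. A rough sketch, assuming a Stable Diffusion v1 checkpoint has on the order of 1.07 billion weights in total (an assumed figure) and ignoring activations and allocator overhead:

```python
def model_weight_gib(n_params: float, bytes_per_param: int) -> float:
    """Rough VRAM needed just for the model weights, ignoring
    activations, attention buffers, and allocator overhead."""
    return n_params * bytes_per_param / 1024**3

# Assumed total parameter count for a Stable Diffusion v1 checkpoint
# (UNet + VAE + text encoder); the exact number varies by model.
N = 1.07e9

fp16 = model_weight_gib(N, 2)  # half precision, about 2 GiB
fp32 = model_weight_gib(N, 4)  # float32: exactly double the weight memory
```

On a 4 GB card, the float32 weights alone already approach the total VRAM before any working memory is counted, which matches the out-of-memory error above.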