
Stable Diffusion GUI:

Is regularly being updated; check the latest posts to find the newest build.


Stable Diffusion GUI MODELS FOR DOWNLOAD:

https://mega.nz/folder/2x1CBRJL#MzMfyijznMs-2t-VqBXYPQ


https://drive.google.com/drive/folders/1MqjzOisTaV0Y1fBfYRVRS9fG6UrFOkK7?usp=share_link 


FlowBlur-App:

Generate motion blur on videos.

https://grisk.itch.io/flowblur-app

https://github.com/BurguerJohn/FlowBlur-App


Dain-App:

The first project to interpolate videos using AI.

https://grisk.itch.io/dain-app/patreon-access

https://github.com/BurguerJohn/Dain-App


Rife-App:

Successor to Dain-App; it uses a faster AI and has more options.

https://grisk.itch.io/rife-app/patreon-access


Real-ESRGAN App:

Upscale real-life or animation videos/photos.

https://drive.google.com/file/d/1hm5n3VfL_pjYkdRDPPMOAv5MAAXBIgDb/view


Clip-App:

Create a 2D image from a text line.

  • Public V1.0:

https://grisk.itch.io/clip-app

  • Private Version:

https://drive.google.com/file/d/1LKK7ySYYs4ePoGoucVojln3Nng12LhYY/view?usp=sharing 


ModNet-App:

Automatically mask people in videos and photos.

https://drive.google.com/file/d/1Fo0BXhm7gLVY4gJ7nlSRZR63PLF05F80/view?usp=sharing 


💭 Stable Diffusion TUTORIALS FOR THE GUI:

Current tutorials for the Stable Diffusion GUI:

Prompt Examples:

https://docs.google.com/document/d/1f2vAJnwaJw4KisnHM_r8i0e7nWuGK1Wvc81o1U87NrY/edit?usp=sharing

Create Inpainting/Outpainting:

https://docs.google.com/document/d/1_Fc36pHyPhiokTwyqvrD-qYg_4Q62NYduI98pMtv8pc/edit?usp=sharing

Training Dreambooth:

https://docs.google.com/document/d/15gsTo2IaNxSfGox_RQatg1nuIyxs3Tm0fyUZMUHTXbw/edit?usp=sharing

Downloading models from HuggingFace site:

https://docs.google.com/document/d/15_nOmXsQJxDeM3XDSIFLjXceB4bZMsmtWr53iZs98P4/edit?usp=sharing

Comments

Anonymous

Is it possible to get this model running with your GUI? https://github.com/TomMoore515/material_stable_diffusion

Anonymous

Any chance we could get more RIFE updates?

DAINAPP

Believe it or not, I pretty much try to improve the model daily. I'll try to release a WIP version of the new model in a day or two, then.

DAINAPP

For which application? All of them?

Anonymous

Howdy! Wondering if you ended up making any more RIFE updates. The last one I can see is 3.20 from April 2022 (on itch.io).

cool1

Would it be possible to create a basic GUI for the StableLM language model(s)? See https://github.com/stability-AI/stableLM/ They've released two of the smaller models and are going to add a few more (bigger ones, I think). Would those be good as a local language model? Or would there be an issue with that, e.g. file sizes, the terms, etc.?

cool1

A few days ago (26th July) the new version of Stable Diffusion (SDXL 1.0) was released. Would it be possible for your Stable Diffusion GUI to support it? Also, do you think an animation option would be possible, where you can move the camera in 3D like they show in some AI animations?

Anonymous

Hey is Rife still being developed?

Anonymous

Is there any documentation on how to start using these files? I'm very new to this, so if anyone could point me in the right direction to start learning, I would really appreciate it.

DAINAPP

This is an almost-public list of applications, since the paid itch.io links require a Patreon account. Driver links will be kept separate, in new posts.

Anonymous

Can you help me? I have a question.

Anonymous

Which one of these is used for making anime 60 FPS?

DAINAPP

Rife-App and Dain-App. Rife-App is way faster, and I'm currently improving it.

KOAN

Hi GRisk, the link to clip_art 1.1 is down. Do you have another link?

DAINAPP

Can you believe it? I changed the link a few days ago to a new one, but Patreon only changed the text; the hyperlink kept the old target. *Facepalm* OK, it's fixed now. Thanks for telling me.

Anonymous

Hi - getting this error message when using Rife 2.7 on a PNG sequence:

Anonymous

Interpolations: 4X
Use Smooth: 1
Use Alpha: 0
Use YUV: 0
Encode: libx264
Device: cuda:0
Using Half-Precision: False
Resolution: 1280x720
Loading Pre-train
Using Model: anime_best
Traceback (most recent call last):
  File "my_design.py", line 86, in run
  File "my_DAIN_class.py", line 1436, in RenderVideo
  File "my_DAIN_class.py", line 1495, in RenderVideoWithModel
  File "my_DAIN_class.py", line 143, in make_inference
  File "model\flow_big.py", line 270, in inference
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "torch\nn\modules\conv.py", line 439, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same
QObject::setParent: Cannot set parent, new parent is in a different thread

DAINAPP

Hi there, you need to set the Data Precision to normal. I'll fix this in the next update.
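For context, the "DoubleTensor vs FloatTensor" error above means the input frames were loaded at 64-bit precision while the model weights are 32-bit, and the convolution refuses mixed precision; setting Data Precision to normal makes them match. The principle can be sketched in plain Python (the `Tensor` class and `conv_forward` here are illustrative stand-ins, not Rife-App's actual code):

```python
# Toy stand-in for a framework tensor: just a dtype tag plus data.
class Tensor:
    def __init__(self, data, dtype):
        self.data = list(data)
        self.dtype = dtype  # e.g. "float32" or "float64"

    def to(self, dtype):
        # Casting returns a new tensor at the requested precision.
        return Tensor(self.data, dtype)

def conv_forward(inp, weight):
    # Frameworks refuse mixed-precision convolutions, as in the traceback.
    if inp.dtype != weight.dtype:
        raise RuntimeError(
            f"Input type {inp.dtype} and weight type {weight.dtype} "
            "should be the same"
        )
    return sum(x * w for x, w in zip(inp.data, weight.data))

weight = Tensor([0.5, 0.25], "float32")  # model weights: 32-bit
frame = Tensor([1.0, 2.0], "float64")    # frames loaded at 64-bit

# Mismatched precision raises, matching the reported error:
try:
    conv_forward(frame, weight)
except RuntimeError as e:
    print("error:", e)

# Casting the input to the weight dtype (roughly what the
# "Data Precision: normal" setting does) fixes it:
print("ok:", conv_forward(frame.to("float32"), weight))
```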

Anonymous

When I'm done, how can I fix the sync between the picture and the sound?

Anonymous

Hi, I get many blank frames in the output video. Would you give some advice? https://drive.google.com/file/d/14qEZz-G9krozU4qmNaSgCFRZAcjTTTE3/view?usp=sharing

Anonymous

with this link https://streamable.com/hg5olb

DAINAPP

That is pretty strange. Can you test with H264 and see if the problem is still there?

Anonymous

Don't use h.265 in the encoding selection. I had the same problem and I switched back to h.264 and it's working fine now.

Anonymous

Encoding with the H265 and AV1 codecs goes wrong, but H264 gives a good result.
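Until the app's AV1/H265 path is fixed, one workaround is to render with libx264 and, if needed, re-encode externally with ffmpeg afterwards. A minimal sketch of building such a command (the file names are placeholders; the codec names are ffmpeg's standard encoder names):

```python
def encode_cmd(src, dst, codec="libx264"):
    """Build an ffmpeg encode command; libx264 is the safe default here."""
    return [
        "ffmpeg", "-y",         # overwrite output without asking
        "-i", src,              # interpolated video to re-encode
        "-c:v", codec,          # libx264 works; libx265 / libaom-av1 misbehave per the reports above
        "-pix_fmt", "yuv420p",  # widest player compatibility
        dst,
    ]

cmd = encode_cmd("interpolated.mp4", "final_h264.mp4")
print(" ".join(cmd))
# To actually run it:  subprocess.run(cmd, check=True)
```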

Anonymous

Using a Titan RTX GPU and Rife-App 3.20.

DAINAPP

OK, I've been meaning to improve the AV1/H265 code for some time; I'll try to do it tomorrow.

Anonymous

Hi, I get this error message when trying to use Rife-App:

QObject::setParent: Cannot set parent, new parent is in a different thread
Using Benchmark: True
Batch Size: 1
Input FPS: 1.0
Use all GPUS: False
Render Mode: 0
Interpolations: 2X
Use Smooth: 0
Use Alpha: 0
Use YUV: 0
Encode: libx264
Device: cuda:0
Using Half-Precision: True
Unable to read resolution of input file.
Traceback (most recent call last):
  File "my_design.py", line 86, in run
  File "my_DAIN_class.py", line 1448, in RenderVideo
  File "my_DAIN_class.py", line 1497, in RenderVideoWithModel
  File "my_DAIN_class.py", line 1107, in GetInputSize
Exception: Sorry, can't read resolution

DAINAPP

It seems to be a problem with the input. What extension are you using? Can you test it with a simple GIF?

Anonymous

Hey. I just joined the Patreon and noticed there are the apps "Stable Diffusion 0.52" and "Clip-App 1.1". They seem really similar, though. Which should I use?

DAINAPP

Clip-App is a different application; download Stable Diffusion.

Anonymous

How do I join the Discord?

Anonymous

It would be cool to include the latest (stable) SD build in a pinned comment.

BMinor

Btw - The Real-ESRGAN Google drive link is dead.

Anonymous

Loading Scheduler
CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Xformer not working for this GPU
Xformer don't work on this GPU, changing it to ONE head
Using One Header
Traceback (most recent call last):
  File "start.py", line 2137, in OnRender
  File "start.py", line 2101, in LoadData
  File "diffusers\pipeline_utils.py", line 848, in set_use_memory_efficient_attention_xformers
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  [Previous line repeated 3 more times]
  File "diffusers\pipeline_utils.py", line 839, in fn_recursive_set_mem_eff
  File "diffusers\models\attention.py", line 482, in set_use_memory_efficient_attention_xformers
    raise e
  File "diffusers\models\attention.py", line 476, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "xformers\ops.py", line 862, in memory_efficient_attention
  File "xformers\ops.py", line 305, in forward_no_grad
  File "torch\_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

AntonIceland

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-base
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None]} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None]} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
Using Half Header
Traceback (most recent call last):
  File "start.py", line 2137, in OnRender
  File "start.py", line 2093, in LoadData
  File "diffusers\pipeline_utils.py", line 848, in set_use_memory_efficient_attention_xformers
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  [Previous line repeated 3 more times]
  File "diffusers\pipeline_utils.py", line 839, in fn_recursive_set_mem_eff
  File "diffusers\models\attention.py", line 482, in set_use_memory_efficient_attention_xformers
    raise e
  File "diffusers\models\attention.py", line 476, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "xformers\ops.py", line 862, in memory_efficient_attention
  File "xformers\ops.py", line 305, in forward_no_grad
  File "torch\_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

AntonIceland

I'm using a 2080. There was no problem using it until SD 2.0. This is the error message when I try running SD 2.0 and 2.1; I get a similar message from all the 2.0 and 2.1 models.

DAINAPP

Check out my edits on the post. I think I found the problem and am already fixing it.

AntonIceland

I moved SD 2.0 and 2.1 from models_sd to models_V2 and now I get this message.

AntonIceland

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading VAE
Loading Hyper
Loading Scheduler
Using Half Header
Traceback (most recent call last):
  File "start.py", line 2137, in OnRender
  File "start.py", line 2093, in LoadData
  File "diffusers\pipeline_utils.py", line 848, in set_use_memory_efficient_attention_xformers
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  File "diffusers\pipeline_utils.py", line 842, in fn_recursive_set_mem_eff
  [Previous line repeated 3 more times]
  File "diffusers\pipeline_utils.py", line 839, in fn_recursive_set_mem_eff
  File "diffusers\models\attention.py", line 482, in set_use_memory_efficient_attention_xformers
    raise e
  File "diffusers\models\attention.py", line 476, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "xformers\ops.py", line 862, in memory_efficient_attention
  File "xformers\ops.py", line 305, in forward_no_grad
  File "torch\_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

AntonIceland

I'm going to download the 0.60 fix and see if it works.

AntonIceland

Thanks for fixing it so fast. It's working fine now. I have no problems using 2.0 and 2.1

AntonIceland

I don't know if Dreambooth is ready for use with SD 2.0, but I tried it anyway and got this error:

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
Using Half Header
Steps: 0%| | 0/1620 [00:00

Anonymous

SD2 has a bunch of folders with names that don't match the folders in the "models" folder. I tried putting them in the models folder, but they don't show up in the models dropdown in the GUI.

AntonIceland

Problem found loading var db_save_chk
Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Xformer not working for this GPU
Xformer don't work on this GPU, changing it to ONE head
Using One Header
Steps: 0%| | 0/9960 [00:00

AntonIceland

I got this error when trying to run dreambooth with 2.1

AntonIceland

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
Using Half Header
Steps: 0%| | 0/1620 [00:00> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Steps: 0%| | 0/1620 [00:17
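The truncated message above is PyTorch's out-of-memory hint about `max_split_size_mb`. The allocator reads the `PYTORCH_CUDA_ALLOC_CONF` environment variable when CUDA initializes, so it must be set before the app (or torch) starts. A minimal sketch; the value 128 is just an example, not a recommendation from the app's author:

```python
import os

# Must be set before the framework initializes its CUDA allocator,
# e.g. in a launcher script, or in the shell before starting the app:
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128   (Windows cmd)
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```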

AntonIceland

And this is the second error message.

AntonIceland

On my second attempt; I'm sure everything was set up correctly on my end.

AntonIceland

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
Using Half Header
30it [00:06, 4.84it/s] | 50/1494 [00:29<12:12, 1.97it/s, loss=0.286, lr=5e-6]
30it [00:04, 7.29it/s] | 100/1494 [00:59<11:04, 2.10it/s, loss=0.293, lr=5e-6]
30it [00:04, 7.18it/s] | 150/1494 [01:28<10:57, 2.04it/s, loss=0.322, lr=5e-6]
30it [00:04, 7.17it/s] | 200/1494 [01:58<10:57, 1.97it/s, loss=0.316, lr=5e-6]
30it [00:04, 7.16it/s] | 250/1494 [02:27<10:16, 2.02it/s, loss=0.28, lr=5e-6]
30it [00:04, 7.21it/s] | 300/1494 [02:56<09:50, 2.02it/s, loss=0.278, lr=5e-6]
Steps: 23%|███████████▌ | 339/1494 [03:20<09:31, 2.02it/s, loss=0.338, lr=5e-6]
C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\cuda\IndexKernel.cu:91: block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
  File "start.py", line 398, in DBRenderer
  File "dreambooth.py", line 812, in main
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 713, in forward
    causal_attention_mask = self._build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to(
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Steps: 23%|███████████▌ | 339/1494 [03:20<11:22, 1.69it/s, loss=0.338, lr=5e-6]

AntonIceland

I downloaded 2.1 again, and now it at least starts, but I do get this new error.

AntonIceland

Render Start
Loading model (May take a little time.)
Loading model from path stable-diffusion-2-1
The config attributes {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} were passed to StableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'feature_extractor': [None, None], 'safety_checker': [None, None], 'requires_safety_checker': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading Model DType
Loading VAE
Loading Hyper
Loading Scheduler
Using One Header
Steps: 1%|▍ | 30/3486 [00:29<56:20, 1.02it/s, loss=0.331, lr=2.05e-7]
Traceback (most recent call last):
  File "start.py", line 398, in DBRenderer
  File "dreambooth.py", line 900, in main
Exception: Loss is NaN, stopping training
Steps: 1%|▍ | 30/3486 [00:30<58:49, 1.02s/it, loss=0.331, lr=2.05e-7]
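The "Loss is NaN, stopping training" line is the trainer aborting once the loss diverges, which often points to a too-high learning rate or an unstable half-precision step. The guard can be sketched in plain Python (`run_training` and the loss values are illustrative, not the app's actual code):

```python
import math

def run_training(losses):
    """Consume a stream of per-step losses, stopping at the first NaN,
    as the Dreambooth trainer above does."""
    completed = []
    for step, loss in enumerate(losses):
        if math.isnan(loss):
            # Mirrors the app's guard: stop instead of training on garbage.
            print(f"Loss is NaN at step {step}, stopping training")
            break
        completed.append(loss)
    return completed

# A loss stream that diverges to NaN on the third step:
done = run_training([0.33, 0.31, float("nan"), 0.30])
print("steps completed before the guard fired:", len(done))
```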

Anonymous

Are there any SD.exe builds that work? The ones I've tried all crash.

DAINAPP

They should work. Did it create a crash_log.txt in the folder?

Anonymous

I extract it with 7-Zip and double-click SD.exe; it simply opens a terminal that quickly closes.

Anonymous

Real-ESRGAN App - Download Link is down :)

Anonymous

Yes! I confirm the Real-ESRGAN App download link is down.

DAINAPP

Well shoot, gonna need to find the .rar and upload it again. Will try to find it tomorrow.


Anonymous

I would love to see a GUI for this at some point down the line: https://huggingface.co/hakurei/lit-6B