
Content

Made this tutorial really quickly to help you guys download new models.

The tutorial still needs work; I will try to slowly improve it.

Downloading models from HuggingFace for the first time can be kinda hard, but it is the fastest way to get updates to a model and to grab a new model once it is released.

I will try to add easier ways to download models in the future, but it will be almost impossible to keep them all up to date; this right here is the correct way.



Files

Downloading models from HuggingFace

In this file you will learn how to download models from HuggingFace and use them in SD GRisk GUI. Installing TortoiseGit: HuggingFace keeps the model files in a repository, which means we need software that can communicate with repositories to download those files. For Windows, a nice...
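
For anyone comfortable with a command line, the same steps the tutorial does through TortoiseGit can be scripted. Below is a minimal sketch in Python that shells out to git and git-lfs; the repository URL is just an example (Stable Diffusion v1.4), and gated models may require you to log in and accept the license on the HuggingFace site first.

```python
# Minimal sketch: clone a HuggingFace model repository with plain git,
# mirroring what the tutorial does through TortoiseGit.
# Assumes git and git-lfs are already installed; the repo URL below is
# just an example, swap in the model you actually need.
import subprocess

repo_url = "https://huggingface.co/CompVis/stable-diffusion-v1-4"

# Make sure git-lfs is set up, otherwise the large weight files come
# down as tiny pointer files instead of the real model data.
subprocess.run(["git", "lfs", "install"], check=True)

# Clone the repository into the current directory.
subprocess.run(["git", "clone", repo_url], check=True)
```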

Comments

Anonymous

The tutorial does not mention that TortoiseGit requires a separate installation of Git itself, which is important for beginners so they don't get totally lost. I've also noticed that even putting in the fp32 variant of SD still gives the exact pixel-to-pixel results of the fp16 version, even after manually changing the option 'half' to zero in the config JSON. Is there any way to run inference in fp32?

Anonymous

Just wanted to say thank you for your work, seriously :) . It sucks trying to develop on a problematic computer, I know full well. Keep it up! I'll keep my sub going, hope you can meet your goal soon.

Anonymous

Hi, just noticed that the 3:4 and 4:3 buttons are inverted.

Anonymous

Which model produces the "best" results? Can you say that specifically? Or do different models just all produce different results? And can I select the respective model after copying it into the "Model" folder (which does not exist in my 0.4 installation?) in the drop down menu?

Anonymous

For anyone, not just GRisk, I downloaded the full precision diffusion 1.4 model and changed 1 to 0 in the config_user.json for half_precision. Hard to tell if it is working or not. Is there a way to confirm this? Or is it possibly disabled by code? Or am I just dumb? :P . Also, vram and ram haven't noticeably changed. On the bright side, I can render past 640x640 now on a gtx1060 :) . But honestly was expecting to run out of memory trying to run the full precision model, then just delete it, go back to half precision and say I tried :p .
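
For reference, the change being described here is a one-line edit to the config file. A minimal sketch in Python, with the caveat that the exact key name ('half' vs 'half_precision') and whether the GUI actually honors it seem to vary by version, as this thread shows:

```python
# Minimal sketch of the edit described in the comments: flip the
# half-precision flag in GRisk's config_user.json to 0.
# The file name and key names come from this thread and may differ
# between GUI versions.
import json

path = "config_user.json"  # assumed to sit next to the GUI executable

with open(path, "r", encoding="utf-8") as f:
    config = json.load(f)

# Set whichever half-precision key this version of the config uses.
for key in ("half", "half_precision"):
    if key in config:
        config[key] = 0

with open(path, "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4)
```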

Anonymous

Seems to do nothing from what I can tell. https://imgur.com/a/f1lAzGc

Anonymous

Yeah. Oh well. Maybe in another update(?) Thanks for replying :) .

Anonymous

Not sure about v0.4, but in v0.5 the combo box for models works. I wouldn't say there is a "best", at least not yet(?) I wish I had access to more custom models, but besides waifu and the standard diffusion models, I don't think we're there yet. Scraping images off of sites, curating tens if not hundreds of thousands of those images, editing all that metadata for keywords, and then days/weeks at a time of training... I think it's going to be a while still. Would love someone to correct me if I am wrong.

Anonymous

What's the full precision model for?

Anonymous

No problem! :) I'm guessing it will be removed since it's not in the GUI anymore, and the default supplied model is the half-precision one. So if you want to render at half precision you can just swap the model with the dropdown.

Anonymous

I did the same: downloaded the full model, then discovered in the text info for generated images that it still has HALF = 1. I changed it to zero in the config_user.json file, but without any noticeable difference... Maybe something else must be done. Another thing I noticed is that when switching models, it looks like we need to restart the application, otherwise it keeps using the model it loaded on first run...

Anonymous

AFAIK it should provide better output (half precision is "rounded", and because of that it uses less VRAM and also takes less disk space). But I haven't compared them side by side so far.
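
For a concrete sense of the size difference: half precision stores each weight in 2 bytes instead of 4. A tiny sketch with PyTorch (assuming it is installed; GRisk ships its own runtime, so this is only illustrative):

```python
# Tiny sketch: fp16 ("half") weights take half the bytes of fp32,
# which is why the half-precision model is smaller on disk and in VRAM.
import torch

weights_fp32 = torch.zeros(1_000_000, dtype=torch.float32)
weights_fp16 = weights_fp32.half()

print(weights_fp32.element_size())  # 4 bytes per value
print(weights_fp16.element_size())  # 2 bytes per value
```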

DAINAPP

Mostly for users with graphics cards that can't handle half precision. The results are the same, and it uses more VRAM.

DAINAPP

It seems a few users had trouble with full precision; I will try to do some tests before the next release.

DAINAPP

Oh, I forgot it requires Git to be installed. You are right. Will update the tutorial soon.

Anonymous

What is HuggingFace exactly? Is it an alternate model?

DAINAPP

It's the group that hosts Stable Diffusion. It's a site for downloading models and a lot of other stuff.

Anonymous

Okay, so the models included in GRisk are from there? I saw a waifu option once, which doesn't sound like it would apply to me. Are there others?

DAINAPP

Need to do some digging to find new models. Will see if I can make a list eventually.

Anonymous

Is it possible to load .ckpt models, or will it be possible in the future?

DAINAPP

Yes and no: a .ckpt needs to be loaded by code that matches exactly how it was generated. Unless there is standardized code that I can use as a base, this will not work. The only standardized format for now is the Diffusers folder model.
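
For context, a model folder in the standardized Diffusers layout can be loaded generically with the diffusers library. A minimal sketch, assuming a downloaded model folder; the path is an example and this is not necessarily how GRisk loads models internally:

```python
# Minimal sketch: loading a Diffusers-format model folder generically.
# Any repo laid out in this standardized format (unet/, vae/, text_encoder/,
# model_index.json, ...) can be loaded the same way, which is why it works
# where arbitrary .ckpt files do not.
import torch
from diffusers import StableDiffusionPipeline

# Path to a downloaded Diffusers folder (example name, adjust to your setup).
model_path = "./models/stable-diffusion-v1-4"

pipe = StableDiffusionPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # drop this for full precision
)
pipe = pipe.to("cuda")

image = pipe("a castle on a hill at sunset").images[0]
image.save("output.png")
```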

Anonymous

I'm just a newbie in the whole Stable Diffusion world, but I was downloading the model via the instructions. Easy. But when I put these files in the models folder, all attempts to render fail with "expected all tensors to be on the same device but found at least two devices". I did notice the file already in the folder is a .py file and the downloaded models are folders with data. Any advice? Edit: I got it working. I got the above error because I had "Save vram" unchecked. After that it was still giving issues. I switched to the Float16 version and it was working.

DAINAPP

Ah great, a few of those bugs will be fixed on the next update.

ChopChop

A list would help. I've been trying for a week and can't add new models. The instructions you gave on Google Docs are fine, but they leave out what kind of file it is I add to the model folder. How do I know if I even got the right version that works with GRisk?

Anonymous

I also did not understand which files to put in which folder. There is a "models" folder, but there is no "model" folder. Do I need to create a "model" folder and copy all the files and folders into it?