
Content

The outpainting interface is pretty much ready: it reads the correct pixels, concatenates them with the original image, and merges everything back.

Now I just need to run the AI instead of inverting the colors.
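The read / concatenate / merge flow described above can be sketched like this. This is a minimal sketch assuming NumPy arrays for the image data; `outpaint_region` and the `generate` callback are hypothetical names, and color inversion stands in for the AI step, as in the current build:

```python
import numpy as np

def outpaint_region(image, extend_px, generate):
    """Extend `image` to the right by `extend_px` columns.

    `generate` is whatever fills the new region; the real tool
    would call the AI here, but any placeholder works for testing.
    """
    h, w, c = image.shape
    # Read the strip of existing pixels next to the new region,
    # so the fill step has context to match against.
    context = image[:, w - extend_px:, :]
    # Produce the new pixels (placeholder: invert the colors).
    new_region = generate(context)
    # Concatenate the generated strip back onto the original.
    return np.concatenate([image, new_region], axis=1)

# Example: extend a black 4x4 image by 2 columns.
img = np.zeros((4, 4, 3), dtype=np.uint8)
out = outpaint_region(img, 2, lambda ctx: 255 - ctx)
```

Swapping the lambda for a call into the diffusion model is the remaining step the post refers to.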

Some side notes:

From time to time someone asks me why the inpainting doesn't work that well. SD doesn't have inpainting as good as DALL-E 2, and the technique everyone uses for inpainting is not that great.
There are two solutions for this.

The first: Some users have been training SD models that work better for inpainting. I haven't tested them, but it is a possible solution.

The second: Change the algorithm that does the inpainting. I'm already working on this, but I haven't had much time to mess with it while focusing on the outpainting. I'll try to get back to it soon.


Using .ckpt models:

There are other scripts around for SD that don't use Diffusers, and some users train their models on those.

They are very different codebases. Luckily, a lot of users want to use those models with Diffusers, so I managed to find some scripts that help me convert .ckpt models to the Diffusers format.

I ran some tests and it seems to work, so the next update will probably include a very experimental .ckpt model loader.
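For anyone who wants to try the conversion themselves: the Hugging Face Diffusers repository ships a conversion script for this. A hedged sketch of the invocation (the script name and flags are from the Diffusers repo; exact options may differ between versions, and the paths here are placeholders):

```shell
# Convert a .ckpt checkpoint into the Diffusers folder layout.
# model.ckpt and ./converted_model are placeholder paths.
python scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path model.ckpt \
    --dump_path ./converted_model
```

The output folder can then be loaded like any other Diffusers pipeline.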

Files

Comments

Anonymous

Exciting stuff!! Thx for your efforts!!

Anonymous

Great work! Excited to start using it!

cool1

The update looks good. My question is: would an option to feather the edges of the rectangle where it will outpaint (the joined area) help in blending the extended areas with the original ones (maybe with an option for rounded 'brushes' for outpainting)? Or would that be complex/slow/not really needed (i.e. not have much benefit)? For example, in one YouTube video of outpainting where there's a bit of sky, the rectangular area where it was extended is noticeable in the sky (the colours didn't quite match), and I was wondering if feathering might have helped with that. edit: one of the outpainting demo videos shows a "mask blur" slider, so maybe having that would work similarly to my feathering suggestion to help blend the joined parts.

Anonymous

Thank you for your work, .ckpt support is very important.

Anonymous

If it acts like a feathered inpainting mask (you can try this now yourself using Photoshop or GIMP and the inpainting in GRisk), the results are much worse than with a sharp-edge brush. Seems the AI can't handle it well. Guess it would be the same, but maybe I'm wrong.

Anonymous

Awesome to hear, you're really doing a great job with this! I'll try to stay a patron for as long as my finances allow to support you in your endeavors.

cool1

Thanks. These are 2 YouTube vids I watched. I don't know if it would have the same effect in this app, but they're also using Stable Diffusion. https://youtu.be/-8jmBGgGj2E?t=111 - that one is where you can see the join in the sky (that's where I think something like feathering/mask blur might help). This one, https://youtu.be/QTouu5nomPg?t=654, shows a mask blur slider in the interface, so it seems like it may help, e.g. when he changes the sky (though maybe some of that is inpainting rather than outpainting). Having something like a mask blur would let you try it to see if it helps when the joined areas are noticeable without it.

Anonymous

The real (tricky) task is to do overlapping generations with the AI using sharp masks, but then add blur at the joining step by merging the top image twice every time: once with blur, and once as a cropped version without blur, equal to the overlap margin, thus adding the feather. It's a lot of code and a tricky thing to do, but it can be done. No idea if it's necessary here, but I've seen it done with a neural style tiling script and that's how they did it. https://github.com/ProGamerGov/Neural-Tile
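The cross-fade at the heart of the overlap-and-feather merge described above can be sketched like this. This is a minimal sketch assuming NumPy arrays; `feather_join` is a hypothetical helper, and a linear alpha ramp stands in for the blurred mask:

```python
import numpy as np

def feather_join(left, right, overlap):
    """Join two image strips horizontally, cross-fading the
    `overlap` columns they share so the seam is softened.
    left[:, -overlap:] and right[:, :overlap] depict the same area.
    """
    # Alpha ramp from 0 to 1 across the overlap, shaped to
    # broadcast over (height, overlap, channels).
    ramp = np.linspace(0.0, 1.0, overlap)[None, :, None]
    # Blend the shared columns: left fades out, right fades in.
    blended = left[:, -overlap:] * (1 - ramp) + right[:, :overlap] * ramp
    return np.concatenate([left[:, :-overlap],
                           blended.astype(left.dtype),
                           right[:, overlap:]], axis=1)

# Example: two flat-color strips sharing a 3-pixel overlap.
left = np.full((2, 5, 3), 100, dtype=np.uint8)
right = np.full((2, 5, 3), 200, dtype=np.uint8)
out = feather_join(left, right, 3)
```

The commenter's merge-twice trick adds a sharp cropped copy on top of this blend so only the margin gets the feather.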

Jables

Does your app support negative prompts?

Anonymous

Is it possible to feed the app different images as a source? I noticed it struggles to create many popular characters. For example, if I put in "The character Farnese from the Berserk Manga", the results look nothing like the results on Google. So maybe feeding it example images could fix it. Google: https://www.google.com/search?q=The+character+Farnese+from+the+Berserk+Manga&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjlnPb858v6AhXchv0HHS4QAPwQ_AUoAXoECAEQAw&biw=1920&bih=953&dpr=1

Anonymous

I would love to see this implemented some time in the future: https://github.com/bloc97/CrossAttentionControl

Anonymous

With 'Textual Inversion' it should be possible to show the model some examples and (hopefully) have it learn from those examples. I do believe that it is being implemented in a future version of this app.

DAINAPP

You are correct, feathered edges are really important for in/out painting. For this first version it will not be an option, but once the tool is more mature, you will be able to change the feather strength.

DAINAPP

It's possible by training a model with DreamBooth / Textual Inversion. Will add it as soon as outpainting is ready.

DAINAPP

This could be interesting if it works properly; I'll need to test it sometime to see how well it works.