
Rife-App:

The animation model is done! Hooray. I still need to update some code on the software, since this new model requires some new stuff on the input, but it should be ready soon.

Today I will start training the RL model while making the build for the anime model.

For this next build, I'm gonna need to remove the RL model, because the code will have lots of changes; there's no point in updating the code of the old model if the next build will have a new one.

I already have 10 new ideas to improve the model, but I will finish the RL one first, or else it's an infinite cycle of improving the model and never releasing it.


Black and white to color images:

I like to start my projects with animation/anime, because they are way harder to make something work with, and it's easier to see progress with the naked eye.

Little by little I'm getting some good results, but for anime it takes a looot of work to get something useful; there are 4 or 5 models to be trained (and deleted, and trained again).
I predict at least 2 months to have something even a little decent.

So for now I'm gonna try to make a BW-to-Color model for RL, since it seems to be a lot easier for now.

Clip-App:

This is the one app that requires tons of attention, and it's the hardest to develop since my computer can barely handle it.

Want to add Disco Diffusion.

Want to add animation option.

Want to fix/improve start/end images.

Want to make a real time preview.

Want to fix a lot of stuff t.t

Comments

Anonymous

For Clip-App... one simple fix that would make my workflow much easier is just to add a few leading zeros to file names. As it is, I have to manually rename the early files (e.g. 0.png to 000.png for a 999-image series) to get Rife to stitch them into a video in the correct order.

Anonymous

That said, I've been getting some wild textures out of it lately. So much depth https://youtube.com/shorts/8fbl2IqW4gA?feature=share

DAINAPP

Ah yes, that is a good suggestion, dunno why I haven't fixed this yet.

Snake Plissken

https://github.com/subeeshvasu/Awesome-Deblurring#multi-imagevideo-motion-deblurring This link might help you find some good stuff on video deblurring, not sure if it needs to be a different model or not… But it links to the papers, which I figure will disclose which datasets they used for testing… you might be able to make your own training dataset by taking 60 fps video (call it ground truth), reducing it to 30 fps and adding motion blur artificially?

DAINAPP

Ah, I just downloaded a few datasets from this link. Yes, this is good. I can totally try to train a model using this dataset. The bigger problem is that I'm very close to needing a GPU farm to train all the stuff I have in the queue haha. But I'll definitely give it a shot once the Rife models are ready.
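The artificial-blur idea from the comment above could be sketched roughly like this: average each window of consecutive sharp high-fps frames to fake motion blur, and keep the middle frame of the window as the sharp ground truth. This mirrors how some published deblurring datasets are built, but the function name and the 7-frame window here are my own assumptions, not anything from the actual training pipeline:

```python
import numpy as np

def make_blur_pairs(frames, window=7):
    """Given a list of consecutive sharp frames (H, W, C uint8 arrays)
    from a high-fps video, average each non-overlapping window of
    `window` frames to simulate motion blur. Returns a list of
    (blurred, sharp) pairs, where `sharp` is the middle frame of
    the window, usable as a ground-truth target for training."""
    pairs = []
    for i in range(0, len(frames) - window + 1, window):
        clip = np.stack(frames[i:i + window]).astype(np.float32)
        blurred = clip.mean(axis=0).astype(np.uint8)  # averaged window = fake blur
        sharp = frames[i + window // 2]               # middle frame = ground truth
        pairs.append((blurred, sharp))
    return pairs
```

Non-overlapping windows also naturally drop the effective frame rate (e.g. 60 fps in, ~60/7 blurred frames out), so one pass gives both the lower-fps input and the blur in a single step.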