
Content


1. You can now watch the interpolation while it renders. It might still have some problems with resolution and playback speed.

2. An alternative way to render frames:

  • It has less "flow", but tries to retain more information from the original frames.
  • It can be done at 2X, 3X, 4X, 5X, 6X, 7X, and 8X.
  • It is sometimes better for animation, since it keeps more information from the original frames.
  • It can be faster to render.
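The 2X–8X factors above simply multiply the frame rate: a 60 fps source rendered at 2X plays back at 120 fps. A minimal sketch of that arithmetic (the function names are illustrative, not part of RIFE-App's API):

```python
def interpolated_fps(input_fps, factor):
    # Output frame rate after interpolating by an integer factor (2X through 8X).
    if not 2 <= factor <= 8:
        raise ValueError("supported factors are 2X through 8X")
    return input_fps * factor

def interpolated_frame_count(total_frames, factor):
    # Roughly `factor` output frames per input frame.
    return total_frames * factor

print(interpolated_fps(60.0, 2))          # 120.0
print(interpolated_frame_count(9611, 2))  # 19222
```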

Here's an example of the new alternative render.

Default render:

https://streamable.com/lkwfr1

Alternative render:

https://streamable.com/ok0cpd


Files

RIFE-App 1.73.7z

Comments

Anonymous

Hi, it's me again, with the same problem about using the 2nd graphics card. I had a try and this time it shows "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!". Still unable to run on that one. (>﹏<)

Anonymous

I am getting this error with my K80 Tesla. I have tried selecting just one GPU and also using all GPUs, to no avail.

Starting... ['C:/Users/James/Videos/MORTAL KOMBAT Trailer.webm']
Input FPS: 60.0
C:/Users/James/Documents
Using Benchmark: True
Using Half-Precision: True
Batch Size: -1
Input FPS: 60.0
Use all GPUS: False
Scale: 0.25
Render Mode: 0
Interpolations: 2X
Using Model: 2_4
Selected auto batch size, testing a good batch size.
Resolution: 3840x2160
Setting new batch size to 2
Resolution: 3840x2160
Total Frames: 9611.0
0%| | 8/9611 [00:03<1:05:10, 2.46it/s, file=File 4]
Exception ignored in thread started by:
Traceback (most recent call last):
  File "my_DAIN_class.py", line 294, in queue_model
  File "my_DAIN_class.py", line 93, in make_inference
  File "model\RIFE_HDv2.py", line 236, in inference
  File "model\RIFE_HDv2.py", line 201, in predict
  File "torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "model\RIFE_HDv2.py", line 111, in forward
  File "torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "torch\nn\modules\conv.py", line 840, in forward
    return F.conv_transpose2d(
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([2, 4, 2176, 3840], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(4, 32, kernel_size=[4, 4], padding=[1, 1], stride=[2, 2], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

ConvolutionParams
  data_type = CUDNN_DATA_HALF
  padding = [1, 1, 0]
  stride = [2, 2, 0]
  dilation = [1, 1, 0]
  groups = 1
  deterministic = false
  allow_tf32 = true
input: TensorDescriptor 000002B7E012EE00
  type = CUDNN_DATA_HALF
  nbDims = 4
  dimA = 2, 4, 2176, 3840,
  strideA = 33423360, 8355840, 3840, 1,
output: TensorDescriptor 000002B7E012EE70
  type = CUDNN_DATA_HALF
  nbDims = 4
  dimA = 2, 32, 1088, 1920,
  strideA = 66846720, 2088960, 1920, 1,
weight: FilterDescriptor 000002B7DDEB3A50
  type = CUDNN_DATA_HALF
  tensor_format = CUDNN_TENSOR_NCHW
  nbDims = 4
  dimA = 32, 4, 4, 4,
Pointer addresses:
  input: 0000001BE0ED8000
  output: 0000001B2B860000
  weight: 0000001B22E5B000
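The log above shows the app probing for a batch size that fits in GPU memory ("Selected auto batch size, testing a good batch size" ... "Setting new batch size to 2"). A minimal sketch of that kind of fallback loop, assuming a hypothetical `render_batch` callable that raises RuntimeError (as PyTorch does on a CUDA out-of-memory) when the batch doesn't fit:

```python
def find_batch_size(render_batch, start=8):
    # Halve the batch size until a test render succeeds, mirroring the
    # "Setting new batch size to 2" behavior in the log above.
    # `render_batch` is a hypothetical stand-in for one interpolation step.
    size = start
    while size >= 1:
        try:
            render_batch(size)
            return size
        except RuntimeError:
            size //= 2  # too large: try half
    raise RuntimeError("even a batch size of 1 does not fit in GPU memory")

def fake_render(n):
    # Simulated renderer: only batches of 2 or fewer frames "fit".
    if n > 2:
        raise RuntimeError("CUDA out of memory (simulated)")

print(find_batch_size(fake_render))  # 2
```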

DAINAPP

Sorry, I did try to fix this for the update, but it seems I'm still missing something; it is a little tricky to fix with only a single card here. Is there any extra information in crash_log.txt? Any information can help me.

DAINAPP

Can you try updating your graphics card drivers and see if that fixes it?

Anonymous

It is updated: 1/19/2021, version 27.21.14.6133.

DAINAPP

I think the model will not work with a CUDA compute capability lower than 5.0, then. I will try to take a better look at that.