Using FFmpeg to replace video frames

Machine learning algorithms for video processing typically work on frames (images) rather than video.

In a typical use case, FFmpeg can be used to extract images from video – in this example, a 50-frame sequence starting at 1:47 (107 seconds):

>ffmpeg -i input.vid -vf "select='gte(t,107)*lt(selected_n,50)'" -vsync passthrough '107+%06d.png'

Omit the -vf option to extract every frame of the video.
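Once the extracted frames have been processed (for example, replaced by upscaled versions), they can be re-encoded into a clip. A minimal sketch, assuming a 25 fps source and H.264 output – neither figure comes from the post:

```shell
# Start time in seconds, matching the 1:47 timestamp used above
start=$((1*60 + 47))   # 107

# Re-encode the numbered PNG frames into a video clip.
# -framerate must match the source frame rate; 25 fps is an assumption.
if command -v ffmpeg >/dev/null 2>&1 && ls "${start}"+*.png >/dev/null 2>&1; then
  ffmpeg -framerate 25 -i "${start}+%06d.png" \
         -c:v libx264 -pix_fmt yuv420p replaced.mp4
fi
```

The ffmpeg invocation is guarded so the script is a no-op when ffmpeg or the frames are absent.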

Upscale and interpolate video super-resolution using STARnet

Increase video resolution with an open-source machine-learning algorithm that upscales and interpolates video frames, driven by an automated command-line script.

Bringing machine learning algorithms a step closer to usability.

Given a low-resolution video file, this script uses a machine-learning algorithm to increase (upscale) each frame’s resolution and optionally add (interpolate) an additional …
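For comparison with the learned approach, FFmpeg’s built-in minterpolate filter performs motion-compensated (but not machine-learned) frame interpolation. A sketch, assuming the goal is to double a 30 fps input to 60 fps – file names and rates are illustrative:

```shell
# Double the frame rate via motion-compensated interpolation.
in_fps=30
out_fps=$((in_fps * 2))   # 60

# input.vid / interpolated.mp4 are placeholder names.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.vid ]; then
  ffmpeg -i input.vid -vf "minterpolate=fps=${out_fps}" interpolated.mp4
fi
```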

Upscale video super-resolution using RBPN

Increase video resolution with an open-source machine-learning algorithm that upscales video frames, driven by an automated command-line script.

Bringing machine learning algorithms a step closer to usability.

Given a low-resolution video file, this script uses a machine-learning algorithm to increase (upscale) each frame’s resolution using information from neighbouring frames. The workhorse is …
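A quick non-ML baseline to compare RBPN’s output against is FFmpeg’s scale filter with Lanczos resampling. A sketch, assuming a 2x upscale – the factor and file names are illustrative:

```shell
# Upscale width and height by an integral factor using Lanczos resampling.
factor=2
w=640                      # example input width
new_w=$((w * factor))      # 1280

# input.vid / upscaled.mp4 are placeholder names.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.vid ]; then
  ffmpeg -i input.vid -vf "scale=iw*${factor}:ih*${factor}:flags=lanczos" upscaled.mp4
fi
```

Unlike RBPN, this treats each frame independently and cannot recover detail from neighbouring frames.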