A simple GAN that builds its training set from a chosen video (by cutting it into frames) and learns to generate similar images by training the generator and discriminator networks simultaneously. It takes about a day of running on a GPU to get significant results, but after roughly 100k epochs training collapses and the model starts to generate monotonous pictures (mode collapse). This could probably be fixed by tuning the hyper-parameters or by choosing a different layer architecture. Sketches of the two main steps, frame extraction and the joint training step, are shown below.
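A minimal sketch of the frame-extraction step, assuming OpenCV; the file names, frame size, and sampling step are illustrative placeholders, not the repository's actual settings.

```python
import os

import cv2


def video_to_frames(video_path, out_dir, size=(64, 64), step=1):
    """Cut a video into frames, saving every `step`-th one resized to `size`."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        if index % step == 0:
            frame = cv2.resize(frame, size)
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.png"), frame)
            saved += 1
        index += 1
    capture.release()
    return saved


# Example usage (paths are placeholders):
# video_to_frames("input.mp4", "frames/", size=(64, 64), step=2)
```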
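And a minimal sketch of one simultaneous training step in the spirit of Goodfellow et al. (2014), written here with TensorFlow 2 / Keras for illustration; the repository's actual architecture, losses, and hyper-parameters may differ. It uses the non-saturating generator loss from the paper, and `generator`, `discriminator`, and `noise_dim` are assumed names.

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)


@tf.function
def train_step(generator, discriminator, real_images, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: label real frames as 1 and generated frames as 0.
        d_loss = (cross_entropy(tf.ones_like(real_logits), real_logits)
                  + cross_entropy(tf.zeros_like(fake_logits), fake_logits))
        # Generator: try to make the discriminator label fakes as 1
        # (the non-saturating loss from the paper).
        g_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    return g_loss, d_loss
```

Both networks are updated in the same step, which is what "simultaneous training" refers to above.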
Original GAN paper (Goodfellow et al., 2014): https://arxiv.org/pdf/1406.2661.pdf
Some code is adapted from: https://www.coursera.org/learn/intro-to-deep-learning/home/week/4