AI-generated videos now possible on gaming GPUs with just 6GB of VRAM

Lvmin Zhang at GitHub, in collaboration with Maneesh Agrawala at Stanford University, introduced FramePack this week. FramePack is a practical implementation of video diffusion that uses a fixed-length temporal context for more efficient processing, enabling longer and higher-quality videos. A 13-billion-parameter model built on the FramePack architecture can generate a 60-second clip with just 6GB of video memory.
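The reason the memory footprint stays flat is the fixed-length context: rather than attending over every previously generated frame, older frames are compressed more aggressively than recent ones, so the context fed to the model never grows with video length. The sketch below illustrates that principle in plain NumPy; the function name, the averaging scheme, and the doubling strides are illustrative assumptions, not FramePack's actual method.

```python
# Illustrative sketch only -- not FramePack's real code.
import numpy as np

def pack_context(frames, context_len=16):
    """Compress a growing frame history into a bounded-length context.

    The newest frame is kept at full detail; progressively older spans
    of frames are merged (here: simple averaging) into single entries,
    so the context size is capped no matter how long the video gets.
    """
    if len(frames) <= context_len:
        return list(frames)

    history = list(frames)
    packed = []
    stride = 1  # how many old frames collapse into one context entry
    while history and len(packed) < context_len:
        take = min(stride, len(history))
        group = history[-take:]       # most recent un-packed frames
        del history[-take:]
        packed.append(np.mean(group, axis=0))
        stride *= 2                   # older spans get merged harder
    packed.reverse()                  # restore oldest-to-newest order
    return packed

# Usage: 1,000 frames of 8x8 "latents" still pack into a short context.
frames = [np.random.rand(8, 8) for _ in range(1000)]
ctx = pack_context(frames)
print(len(ctx))  # 10 here -- always at most context_len
```

Because the per-step workload depends only on the packed context size, not the total number of frames generated so far, VRAM use stays roughly constant as the clip gets longer.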

FramePack is a neural network architecture that uses multi-stage optimization techniques to enable local AI video generation. At the time of writing, the FramePack GUI is said to run a custom Hunyuan-based model under the hood, though the research paper mentions that existing pre-trained models can be fine-tuned using FramePack.


