Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams

1Tsinghua University, 2ByteDance Inc.
*Equal contribution, Corresponding authors, Project lead.
Flash-VStream Teaser
Comparing (a) conventional offline pipeline and (b) human processing pipeline
with (c) our proposed Flash-VStream for online video streaming understanding.

TL;DR

We propose Flash-VStream, a video-language model that simulates human memory mechanisms. Our model can process extremely long video streams in real time while simultaneously responding to user queries. We also propose VStream-QA, a novel question-answering benchmark specifically designed for online video stream understanding.

Flash-VStream Radar Plot
Flash-VStream is the new state-of-the-art in multiple video-QA benchmarks.

Abstract

Benefiting from advancements in large language models and cross-modal alignment, existing multi-modal video understanding methods have achieved prominent performance in offline scenarios. However, online video streams, one of the most common media forms in the real world, have seldom received attention. Compared to offline videos, the 'dynamic' nature of online video streams poses challenges for the direct application of existing models and introduces new problems, such as storing extremely long-term information and handling the interaction between continuous visual content and 'asynchronous' user questions. Therefore, in this paper we present Flash-VStream, a video-language model that simulates human memory mechanisms. Our model can process extremely long video streams in real time while simultaneously responding to user queries. Compared to existing models, Flash-VStream achieves significant reductions in inference latency and VRAM consumption, which are critical for understanding online streaming video. In addition, given that existing video understanding benchmarks predominantly concentrate on offline scenarios, we propose VStream-QA, a novel question-answering benchmark specifically designed for online video stream understanding. Comparisons with popular existing methods on the proposed benchmark demonstrate the superiority of our method in this challenging setting. To verify the generalizability of our approach, we further evaluate it on existing video understanding benchmarks, where it achieves state-of-the-art performance in offline scenarios as well.

Pipeline

Pipeline
Overview of the Flash-VStream framework for real-time online video stream understanding. Flash-VStream runs as two processes, the 'frame handler' and the 'question handler'. The frame handler is responsible for encoding frames and writing to memory; it contains a visual encoder, a STAR memory, and a feature buffer. The question handler is responsible for reading from memory and answering questions at any time; it contains a projector and a large language model.
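The two-process design described above can be illustrated with a minimal sketch. This is a hypothetical toy version, not the actual implementation: the encoder, memory, and answering logic are stand-ins (a real system would use a visual backbone, the STAR memory consolidation described in the paper, and a projector plus LLM), and all class and method names here are assumptions made for illustration.

```python
from collections import deque

class STARMemory:
    """Toy bounded memory holding compressed frame features (illustrative only)."""
    def __init__(self, capacity=8):
        # oldest entries are evicted automatically once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def write(self, feature):
        self.buffer.append(feature)

    def read(self):
        return list(self.buffer)

class FrameHandler:
    """Encodes incoming frames and writes features to memory."""
    def __init__(self, memory):
        self.memory = memory

    def encode(self, frame):
        # stand-in for a real visual encoder: average the pixel values
        return sum(frame) / len(frame)

    def process(self, frame):
        self.memory.write(self.encode(frame))

class QuestionHandler:
    """Reads memory and answers queries at any time, decoupled from frame arrival."""
    def __init__(self, memory):
        self.memory = memory

    def answer(self, question):
        features = self.memory.read()
        # stand-in for projector + LLM: report what the memory currently holds
        return f"{question}: {len(features)} features in memory"

memory = STARMemory(capacity=4)
frame_handler = FrameHandler(memory)
question_handler = QuestionHandler(memory)

# simulate a stream of 10 frames; the memory keeps only the most recent 4
for t in range(10):
    frame_handler.process([t, t + 1, t + 2])

print(question_handler.answer("What happened?"))
```

The key property mirrored here is the decoupling: frames keep flowing into a bounded memory regardless of when questions arrive, so a query can be answered at any moment from the current memory state rather than by re-reading the whole stream.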

Results

Results
Results

Benchmark

Results

Case Study

Results
Comparison of different video LLMs on VStream-QA-Movie.
In this movie, a policeman pulls over a vehicle driven by a couple, but they point a gun at the policeman and kill him.
Our Flash-VStream is the only model that successfully understands the theme of this long movie clip.

BibTeX

@article{flashvstream,
  title={Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams},
  author={Zhang, Haoji and Wang, Yiqin and Tang, Yansong and Liu, Yong and Feng, Jiashi and Dai, Jifeng and Jin, Xiaojie},
  journal={arXiv preprint arXiv:2406.08085},
  year={2024}
}