Video compression is becoming increasingly important, with numerous applications. A video sequence contains two kinds of redundancy, namely spatial and temporal. Vector quantisation (VQ) is an efficient technique for exploiting spatial correlation, while temporal redundancy is usually removed using motion estimation/compensation techniques. The coding performance of VQ may be improved by employing adaptive techniques, at the expense of increased computational complexity. Both VQ and motion estimation algorithms are essentially template matching operations. However, they are computationally intensive, necessitating the use of special-purpose architectures for real-time implementation. The authors propose a unified associative memory architecture for real-time implementation of motion estimation and frame-adaptive vector quantisation for video compression. The proposed architecture has the advantages of simplicity, partitionability and modularity, and hence has the potential for VLSI implementation.
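The observation that VQ encoding and motion estimation are both template matching can be illustrated with a minimal software sketch (not the authors' associative memory architecture; function names and the sum-of-absolute-differences metric are illustrative assumptions): both operations search a set of candidate blocks for the one closest to a target block.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences: the shared matching metric.
    # Cast to int to avoid uint8 overflow in the subtraction.
    return np.abs(a.astype(int) - b.astype(int)).sum()

def vq_encode(block, codebook):
    # Full-search VQ: index of the codeword closest to `block`.
    # `codebook` is a hypothetical list of equally sized blocks.
    return min(range(len(codebook)), key=lambda i: sad(block, codebook[i]))

def motion_estimate(block, ref_frame, y, x, search=2):
    # Full-search block matching within +/-`search` pixels of (y, x).
    # Returns the motion vector (dy, dx) minimising the SAD.
    h, w = block.shape
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref_frame.shape[0] - h and 0 <= xx <= ref_frame.shape[1] - w:
                cost = sad(block, ref_frame[yy:yy + h, xx:xx + w])
                if best is None or cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv
```

The inner loops of both functions are identical compare-and-select operations over candidate templates, which is why a single associative memory datapath can serve both tasks.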
ASJC Scopus subject areas
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics