Associative memory architecture for video compression

F. Idris, S. Panchanathan

Research output: Contribution to journal › Article › peer-review


Abstract

Video compression is becoming increasingly important in a wide range of applications. There are two kinds of redundancy in a video sequence, namely spatial and temporal. Vector quantisation (VQ) is an efficient technique for exploiting spatial correlation, while temporal redundancies are usually removed using motion estimation/compensation techniques. The coding performance of VQ may be improved by employing adaptive techniques, at the expense of an increase in computational complexity. Both VQ and motion estimation algorithms are essentially template matching operations. However, they are computationally intensive, necessitating the use of special-purpose architectures for real-time implementation. The authors propose a unified associative memory architecture for the real-time implementation of motion estimation and frame-adaptive vector quantisation for video compression. The proposed architecture has the advantages of simplicity, partitionability and modularity, and hence is well suited to VLSI implementation.
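The abstract's key observation is that full-search VQ and block-matching motion estimation reduce to the same kernel: find the stored pattern with minimum distortion with respect to an input block, which is exactly the search an associative (content-addressable) memory can perform in parallel. The sketch below illustrates that shared kernel in software only; the function name, block sizes, search range and use of the sum-of-absolute-differences metric are illustrative assumptions and are not taken from the paper, whose contribution is the hardware mapping of this search, not this formulation.

```python
import numpy as np

def best_match(template, candidates):
    """Return (index, distortion) of the candidate closest to the template
    under the sum of absolute differences (SAD) -- the template-matching
    kernel shared by full-search VQ and block-matching motion estimation."""
    sads = np.abs(candidates - template).sum(axis=(1, 2))
    return int(np.argmin(sads)), float(sads.min())

rng = np.random.default_rng(0)

# Vector quantisation: match a 4x4 image block against a codebook of codewords.
codebook = rng.integers(0, 256, size=(256, 4, 4))    # hypothetical 256-entry codebook
block = rng.integers(0, 256, size=(4, 4))            # block from the current frame
vq_index, vq_dist = best_match(block, codebook)

# Motion estimation: match the same block against displaced candidate blocks
# drawn from a +/-7 pixel search window in the previous frame (full search).
prev_frame = rng.integers(0, 256, size=(32, 32))
y0, x0 = 14, 14                                      # block position in the current frame
candidates = np.stack([prev_frame[y0 + dy:y0 + dy + 4, x0 + dx:x0 + dx + 4]
                       for dy in range(-7, 8) for dx in range(-7, 8)])
mv_index, mv_dist = best_match(block, candidates)
dy, dx = divmod(mv_index, 15)
motion_vector = (dy - 7, dx - 7)
```

Because both searches call the same minimum-distortion primitive, a single associative memory array that evaluates all stored candidates in parallel can serve both tasks, which is the unification the architecture exploits.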

Original language: English (US)
Pages (from-to): 55-64
Number of pages: 10
Journal: IEE Proceedings: Computers and Digital Techniques
Volume: 142
Issue number: 1
DOIs
State: Published - Jan 1 1995
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics

