Feature-level and pixel-level fusion routines when coupled to infrared night-vision tracking scheme

Yi Zhou, Abdel Mayyas, Ala Qattawi, Mohammed Omar

Research output: Contribution to journal › Article

11 Citations (Scopus)

Abstract

This manuscript quantitatively evaluates feature-based and pixel-based fusion schemes applied to fuse long-wave infrared (LWIR) and visible TV sequences. The input sequence is from a commercial night-vision module dedicated to automotive applications. The text presents an in-house feature-level fusion routine that applies three fusing relationships (intersection, disjointing, and inclusion), in addition to a new object-tracking routine. The processing is done for two specific night-driving scenarios: a passing vehicle and an approaching vehicle with glare. The study presents the feature-level fusion details, which include registration performed at the hardware level, Gaussian-based preprocessing, a feature-extraction subroutine, and finally the fusing logic. The evaluation criteria are based on the retrieved objects' morphology and the number of features extracted. The presented comparison shows that the feature-level scheme is more robust to variations in the intensity of the input channels and provides a higher signal-to-noise ratio: 6.18 compared with 4.72 for the pixel-level case. Additionally, this study indicates that the pixel-level scheme extracts more information from the channel with higher intensity, while the feature-level scheme highlights the input with the higher number of features.
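The abstract describes, at a high level, two fusion strategies: a pixel-level scheme (weighted averaging of Gaussian-preprocessed channels, per the keywords below) and a feature-level scheme built on intersection, disjointing, and inclusion relationships between features extracted from each channel. The record gives no implementation details, so the following is only a minimal, hypothetical Python (NumPy/SciPy) sketch of those two ideas; the weights, Gaussian sigma, threshold, and blob-style feature extraction are illustrative assumptions, not the authors' actual routine.

# Hypothetical sketch of the two fusion strategies compared in the paper:
# pixel-level (weighted average of Gaussian-smoothed channels) versus a
# simple feature-level scheme based on region relationships. All parameter
# values and the blob-style feature extraction are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, label

def pixel_level_fusion(ir, vis, w_ir=0.5, sigma=1.0):
    """Weighted-average fusion after Gaussian preprocessing of each channel."""
    ir_s = gaussian_filter(ir.astype(float), sigma)
    vis_s = gaussian_filter(vis.astype(float), sigma)
    return w_ir * ir_s + (1.0 - w_ir) * vis_s

def extract_features(frame, sigma=1.0, thresh=0.6):
    """Crude feature extraction: smooth, normalize, threshold, label blobs."""
    smoothed = gaussian_filter(frame.astype(float), sigma)
    norm = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-9)
    labels, n = label(norm > thresh)
    return [labels == i for i in range(1, n + 1)]  # list of boolean masks

def relation(a, b):
    """Classify two feature masks as 'inclusion', 'intersection', or 'disjoint'."""
    overlap = np.logical_and(a, b)
    if not overlap.any():
        return "disjoint"
    if overlap.sum() == min(a.sum(), b.sum()):
        return "inclusion"   # the smaller feature lies entirely inside the other
    return "intersection"

def feature_level_fusion(ir, vis, **kw):
    """Fuse by taking the union of the feature masks found in either channel."""
    ir_feats = extract_features(ir, **kw)
    vis_feats = extract_features(vis, **kw)
    fused = np.zeros(ir.shape, dtype=bool)
    for f in ir_feats + vis_feats:
        fused |= f
    return fused, ir_feats, vis_feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)      # stand-in frames; real inputs would be
    ir = rng.random((120, 160))         # registered LWIR and visible frames
    vis = rng.random((120, 160))
    fused_px = pixel_level_fusion(ir, vis)
    fused_ft, ir_feats, vis_feats = feature_level_fusion(ir, vis)
    if ir_feats and vis_feats:
        print(relation(ir_feats[0], vis_feats[0]))
    print(fused_px.shape, int(fused_ft.sum()), len(ir_feats), len(vis_feats))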

Original language: English (US)
Pages (from-to): 43-49
Number of pages: 7
Journal: Infrared Physics and Technology
Volume: 53
Issue number: 1
ISSN: 1350-4495
DOIs: https://doi.org/10.1016/j.infrared.2009.08.011
State: Published - Jan 2010
Externally published: Yes

Keywords

  • Feature-based fusion
  • Gaussian filtering
  • Night vision
  • Pixel-level fusion
  • Weighted average

ASJC Scopus subject areas

  • Atomic and Molecular Physics, and Optics
  • Condensed Matter Physics
  • Electronic, Optical and Magnetic Materials

Cite this

Zhou, Y., Mayyas, A., Qattawi, A., & Omar, M. (2010). Feature-level and pixel-level fusion routines when coupled to infrared night-vision tracking scheme. Infrared Physics and Technology, 53(1), 43-49. https://doi.org/10.1016/j.infrared.2009.08.011
