Incremental learning for multitask pattern recognition problems

Seiichi Ozawa, Asim Roy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper presents a learning model for multitask pattern recognition (MTPR) that consists of several neural classifiers, long-term memories, and a detector of task changes. In an MTPR problem, several multi-class classification tasks are given to the learning model sequentially, without notification of their task categories. This means the learning model must detect task changes by itself and utilize knowledge of previous tasks when learning new ones. In addition, the learning model must acquire knowledge of multiple tasks incrementally, without unexpected forgetting, under the condition that not only tasks but also training samples are given sequentially. The proposed model is evaluated on two artificial MTPR problems. In the experiments, we verify that the proposed model acquires and accumulates task knowledge very stably and that the speed of knowledge acquisition for new tasks is enhanced by transferring knowledge.
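
The abstract describes the overall architecture (per-task neural classifiers, long-term memories, and a task-change detector) rather than its algorithmic details. The Python sketch below is only a structural illustration of that setup, not the authors' actual model: it assumes a nearest-centroid stand-in for the neural classifiers, a simple error-spike heuristic for the task-change detector, and a plain sample buffer for the long-term memory; all class and method names are hypothetical, and the paper's knowledge-transfer mechanism is omitted.

    import numpy as np

    class NearestCentroidClassifier:
        """Stand-in for the paper's neural classifiers: learns class centroids incrementally."""
        def __init__(self):
            self.sums, self.counts = {}, {}

        def partial_fit(self, x, y):
            if y not in self.sums:
                self.sums[y] = np.zeros_like(x, dtype=float)
                self.counts[y] = 0
            self.sums[y] += x
            self.counts[y] += 1

        def predict(self, x):
            if not self.sums:
                return None  # nothing learned yet
            return min(self.sums,
                       key=lambda c: np.linalg.norm(x - self.sums[c] / self.counts[c]))

    class MTPRLearner:
        """Keeps one classifier per detected task, stores samples in a long-term memory,
        and switches tasks when the recent error rate spikes (an assumed heuristic)."""
        def __init__(self, window=20, error_threshold=0.6):
            self.classifiers = [NearestCentroidClassifier()]
            self.current = 0
            self.long_term_memory = {0: []}   # representative (x, y) pairs per detected task
            self.recent_errors = []           # reset whenever a new task is detected
            self.window = window
            self.error_threshold = error_threshold

        def _task_changed(self):
            # Hypothetical detector: a sustained jump in the recent error rate,
            # checked only after a short grace period following the last switch.
            if len(self.recent_errors) < 2 * self.window:
                return False
            return np.mean(self.recent_errors[-self.window:]) > self.error_threshold

        def observe(self, x, y):
            x = np.asarray(x, dtype=float)
            clf = self.classifiers[self.current]
            self.recent_errors.append(float(clf.predict(x) != y))
            if self._task_changed():
                # Allocate a fresh classifier and memory slot for the new task.
                # (The paper's model also transfers knowledge from previous tasks;
                # that mechanism is omitted in this sketch.)
                self.current = len(self.classifiers)
                self.classifiers.append(NearestCentroidClassifier())
                self.long_term_memory[self.current] = []
                self.recent_errors = []
                clf = self.classifiers[self.current]
            clf.partial_fit(x, y)
            self.long_term_memory[self.current].append((x, y))

Feeding (x, y) pairs to observe() one at a time mimics the sequential arrival of training samples without task labels; a new classifier and memory slot are allocated whenever the detector fires.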

Original language: English (US)
Title of host publication: Proceedings - 7th International Conference on Machine Learning and Applications, ICMLA 2008
Pages: 747-751
Number of pages: 5
DOI: 10.1109/ICMLA.2008.70
ISBN (Print): 9780769534954
State: Published - 2008
Event: 7th International Conference on Machine Learning and Applications, ICMLA 2008 - San Diego, CA, United States
Duration: Dec 11 2008 - Dec 13 2008

Other

Other: 7th International Conference on Machine Learning and Applications, ICMLA 2008
Country: United States
City: San Diego, CA
Period: 12/11/08 - 12/13/08

Fingerprint

  • Pattern recognition
  • Knowledge acquisition
  • Classifiers
  • Detectors
  • Data storage equipment
  • Experiments

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Software

Cite this

Ozawa, S., & Roy, A. (2008). Incremental learning for multitask pattern recognition problems. In Proceedings - 7th International Conference on Machine Learning and Applications, ICMLA 2008 (pp. 747-751). [4725059] https://doi.org/10.1109/ICMLA.2008.70

