TY - GEN
T1 - Video-based self-positioning for intelligent transportation systems applications
AU - Chandakkar, Parag S.
AU - Venkatesan, Ragav
AU - Li, Baoxin
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2014.
PY - 2014
Y1 - 2014
N2 - Many urban areas face traffic congestion. Automatic traffic management systems and congestion pricing are gaining prominence in recent research. An important stage in such systems is lane prediction and on-road self-positioning. We introduce a novel problem of vehicle self-positioning, which involves predicting the number of lanes on the road and localizing the vehicle within those lanes, using video captured by a dashboard camera. To overcome the disadvantages of most existing low-level vision-based techniques while tackling this complex problem, we formulate a model in which the video is a key observation. The model takes the number of lanes and the vehicle's position within those lanes as parameters, hence allowing the use of high-level semantic knowledge. Under this formulation, we employ a lane-width-based model and a maximum-likelihood estimator, making the method tolerant to slight viewing-angle variation. The overall approach is tested on real-world videos and is found to be effective.
AB - Many urban areas face traffic congestion. Automatic traffic management systems and congestion pricing are gaining prominence in recent research. An important stage in such systems is lane prediction and on-road self-positioning. We introduce a novel problem of vehicle self-positioning, which involves predicting the number of lanes on the road and localizing the vehicle within those lanes, using video captured by a dashboard camera. To overcome the disadvantages of most existing low-level vision-based techniques while tackling this complex problem, we formulate a model in which the video is a key observation. The model takes the number of lanes and the vehicle's position within those lanes as parameters, hence allowing the use of high-level semantic knowledge. Under this formulation, we employ a lane-width-based model and a maximum-likelihood estimator, making the method tolerant to slight viewing-angle variation. The overall approach is tested on real-world videos and is found to be effective.
UR - http://www.scopus.com/inward/record.url?scp=84916607154&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84916607154&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-14249-4_69
DO - 10.1007/978-3-319-14249-4_69
M3 - Conference contribution
AN - SCOPUS:84916607154
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 718
EP - 729
BT - Advances in Visual Computing - 10th International Symposium, ISVC 2014, Proceedings
A2 - Bebis, George
A2 - Boyle, Richard
A2 - Parvin, Bahram
A2 - Koracin, Darko
A2 - McMahan, Ryan
A2 - Jerald, Jason
A2 - Zhang, Hui
A2 - Drucker, Steven M.
A2 - Kambhamettu, Chandra
A2 - El Choubassi, Maha
A2 - Deng, Zhigang
A2 - Carlson, Mark
PB - Springer Verlag
T2 - 10th International Symposium on Visual Computing, ISVC 2014
Y2 - 8 December 2014 through 10 December 2014
ER -