A model for visual scene characterization applied to robot localization in dynamic outdoor environments and to object recognition

Publisher

Universidade Federal do Espírito Santo

Abstract

Considering the current challenges in Robotics related to cognitive problems, one can note that some activities that are easy for human beings are still not easy for robots. Several problems remain without robust solutions, caused by the failures robots experience in the chaotic real world, mainly in dynamic outdoor environments, such as mapping, trajectory planning, localization, navigation, and recognition of objects by shape and function. Since most of these problems can be addressed with Computer Vision techniques, the goal of this work was to develop an autonomous, online visual localization method for dynamic outdoor environments, without a priori information, using visual scene characterization. To this end, a model for visual scene characterization was developed, based on Probability Mass Functions (PMFs) of SURF visual features extracted from the places of an environment map. Using this model to localize place samples from the map, a localization method was defined that computes the probability of a sample belonging to a candidate place of the map and compares it with a reference value defined by the ROC curve of that candidate place. The tests carried out to evaluate the characterization power of the developed model and the quality of the proposed localization method used a visual map built from a group of images for each of 28 places of the UFES environment. To evaluate the generalization of the characterization model to the object recognition problem, a group of characterization images of 4 similar objects was used. The obtained results show that this work reached its goals: visual samples from the dynamic outdoor environment were correctly localized, reaching at least acceptable global classification performance (AUC > 0.7) and taking 3.361 seconds in the best case; and objects from the test group were also recognized with at least acceptable global classification performance, taking 265 milliseconds in the best case.
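To make the pipeline outlined in the abstract concrete, the following Python fragment is a minimal, hypothetical sketch rather than the thesis implementation: places are characterized by PMFs built over quantized local visual features, a sample is scored against a candidate place's PMF, and the decision threshold for place membership is taken from an ROC curve. ORB is used here as a stand-in for SURF (which requires OpenCV's nonfree contrib module); the vocabulary construction, Bhattacharyya scoring, and threshold selection are illustrative assumptions.

```python
# Minimal sketch (not the author's implementation) of PMF-based place
# characterization with ROC-derived decision thresholds.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import roc_curve


def extract_descriptors(image):
    """Detect keypoints and return local descriptors for one grayscale image."""
    detector = cv2.ORB_create(nfeatures=500)  # the thesis uses SURF features
    _, descriptors = detector.detectAndCompute(image, None)
    return descriptors


def build_vocabulary(descriptor_sets, k=64):
    """Cluster descriptors from all map images into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptor_sets))


def place_pmf(descriptors, vocabulary):
    """PMF of a place (or sample): normalized histogram of visual-word counts."""
    words = vocabulary.predict(np.asarray(descriptors, dtype=np.float64))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()


def membership_score(sample_pmf, reference_pmf):
    """Similarity of two PMFs (Bhattacharyya coefficient, in [0, 1])."""
    return float(np.sum(np.sqrt(sample_pmf * reference_pmf)))


def roc_threshold(scores, labels):
    """Choose the decision threshold that maximizes TPR - FPR on the ROC curve."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]


def localize(sample_pmf, place_models):
    """Return the candidate places whose membership score passes their threshold."""
    return [name for name, (ref_pmf, threshold) in place_models.items()
            if membership_score(sample_pmf, ref_pmf) >= threshold]
```

In this sketch, `place_models` would map each of the map's places (28 in the work described above) to a reference PMF and an ROC-derived threshold, so `localize` returns the candidate places whose reference value is met or exceeded by the sample's score.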

Keywords

Visual characterization
