Multimodal Data Fusion and Representations with Applications to Biometrics and Autonomous Off-Road Robot Navigation
Fusion of multi-sensor data enhances system performance. Fusion at the score and decision levels has already improved performance in multimodal biometrics and terrain classification, and fusion at the data and feature levels is expected to outperform score- and decision-level techniques. However, limited research has been done in this area due to a number of challenges (e.g. data incompatibility and high dimensionality). This thesis focuses on data- and feature-level fusion. It will produce unified representations of multimodal data for applications in biometrics and in off-road robot navigation (terrain classification, obstacle detection, and landmark-based localization). Hybrid fusion algorithms, combining data/feature-level fusion with local/global features, will also be developed for these applications. The performance of the unified representations will be evaluated using ROC curves and compared against the score-level fusion benchmark.
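The two fusion strategies compared in this thesis can be illustrated with a minimal sketch. The example below is hypothetical and uses synthetic data: each "modality" is a random feature matrix, the per-modality matcher is simply the mean feature value, score-level fusion averages the two matchers' scores, and feature-level fusion z-normalizes and concatenates the feature vectors before scoring. ROC performance is summarized by the AUC, computed via the Mann-Whitney statistic. None of this reflects the actual matchers or datasets used in the thesis; it only shows the structural difference between the two fusion levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    # ROC AUC via the Mann-Whitney statistic: the probability that a
    # randomly chosen genuine sample scores above a random impostor.
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Synthetic features from two modalities (e.g. face and fingerprint);
# genuine samples (label 1) are shifted slightly in both feature spaces.
n = 500
labels = rng.integers(0, 2, n)
f1 = rng.normal(0.0, 1.0, (n, 4)) + 0.6 * labels[:, None]  # modality 1
f2 = rng.normal(0.0, 1.0, (n, 6)) + 0.6 * labels[:, None]  # modality 2

# Score-level fusion: score each modality separately, then average.
# Here the per-modality "matcher" is just the mean feature value.
score_level = 0.5 * (f1.mean(axis=1) + f2.mean(axis=1))

# Feature-level fusion: z-normalize each modality, concatenate the
# feature vectors, and score the joint representation.
z = lambda f: (f - f.mean(axis=0)) / f.std(axis=0)
fused = np.hstack([z(f1), z(f2)])
feature_level = fused.mean(axis=1)

print(f"score-level   AUC: {auc(score_level, labels):.3f}")
print(f"feature-level AUC: {auc(feature_level, labels):.3f}")
```

In practice the comparison is less trivial: feature-level fusion must cope with incompatible feature types and the higher dimensionality of the concatenated vector, which is exactly the challenge the thesis addresses.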