
Article: Articles in international or national peer-reviewed journals

This paper presents a novel WiFi-Visual data fusion method for indoor robot (TIAGO++) localization. Long-term follow-up experiments show that the method can localize an indoor robot with an average error distance of about 1.32 meters 3 months after training data collection (or 1.7 meters 7 months after), using only 10 WiFi samples and 4 low-resolution images ($58 \times 58$ pixels). Instead of neural network design, this paper focuses on soft data fusion to prevent unbounded errors in visual localization. The proposed soft data fusion comprises first-layer WiFi-Visual feature fusion and second-layer decision vector fusion. First, motivated by the excellent capability of neural networks in image processing and recognition, temporal-spatial features are extracted from the WiFi data and represented in image form. Second, the image-form WiFi features and the visual features captured by the robot camera are combined and jointly exploited by a classification neural network to produce a likelihood vector for WiFi-Visual localization; this is the first-layer fusion. Similarly, these two types of features are separately exploited by neural networks to produce another two independent likelihood vectors. Third, the three likelihood vectors are fused by the Hadamard product to produce a final likelihood vector; this is the second-layer fusion. The proposed soft data fusion applies no threshold and does not prioritize any data source over the others in the fusion process. It never excludes low-probability candidate positions, which avoids the information loss caused by hard decisions. The demo video is available at https://youtu.be/__AlHGhDNSI, and the code will be made public after the publication of this work.
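The second-layer decision fusion can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' released code: it assumes the three likelihood vectors (from the first-layer WiFi-Visual network, a WiFi-only network, and a visual-only network) are softmax outputs over the same set of candidate positions, and all names and numeric values are hypothetical.

```python
import numpy as np

def fuse_likelihoods(p_joint, p_wifi, p_visual):
    """Second-layer fusion: combine three likelihood vectors over the same
    candidate positions via the Hadamard (elementwise) product.
    No thresholding and no source prioritization: a candidate is kept
    as long as no single source assigns it exactly zero probability."""
    fused = p_joint * p_wifi * p_visual      # Hadamard product of the three vectors
    return fused / fused.sum()               # renormalize to a probability vector

# Hypothetical outputs for 5 candidate positions (values for illustration only).
p_joint  = np.array([0.08, 0.22, 0.45, 0.20, 0.05])  # first-layer WiFi-Visual network
p_wifi   = np.array([0.10, 0.30, 0.40, 0.15, 0.05])  # WiFi-only network
p_visual = np.array([0.05, 0.25, 0.50, 0.15, 0.05])  # visual-only network

p_final = fuse_likelihoods(p_joint, p_wifi, p_visual)
estimated_position = int(np.argmax(p_final))          # most likely candidate index
```

Because the product only downweights, and never zeroes out, candidates that keep nonzero mass in all three vectors, no position is hard-excluded at this stage, matching the abstract's claim of avoiding information loss from hard decisions.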