Ground truth for indoor datasets

As already explained here, indoor datasets from Rawseeds include two kinds of ground truth: executive drawings and robot trajectory. To reconstruct the trajectory of the robot through the explored environment, we used two separate systems:

  • a system based on industrial cameras, visual tags mounted on the robot, and ad hoc software
  • a system using a set of Laser Range Finders and ad hoc software to localize a rectangular hull mounted on the robot (a minimal sketch of this kind of localization follows the list)

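Purely as an illustration of the LRF-based approach, the sketch below fits an oriented rectangle to the laser points that hit the hull, using a standard minimum-area bounding-box search over orientations. This is not Rawseeds’ actual algorithm (the real system used ad hoc custom software, analyzed in Deliverable D2.1 – part 2); it assumes the scan points belonging to the hull have already been segmented out.

```python
# Minimal sketch: fit an oriented rectangle to 2-D laser points on the hull.
# NOT Rawseeds' actual method; a generic minimum-area-rectangle fit shown
# for illustration only. Assumes `points` contains only returns from the hull.
import numpy as np

def fit_oriented_rectangle(points, angle_step_deg=0.5):
    """Estimate (cx, cy, yaw) of a rectangle from (N, 2) points on its sides."""
    best_area, best_yaw, best_box = np.inf, 0.0, None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        theta = np.deg2rad(deg)
        c, s = np.cos(theta), np.sin(theta)
        # Rotate the (row-vector) points by -theta so candidate edges
        # become axis-aligned, then measure the bounding-box area.
        rot = points @ np.array([[c, -s], [s, c]])
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(hi - lo)
        if area < best_area:
            best_area, best_yaw, best_box = area, theta, (lo, hi)
    lo, hi = best_box
    # The box centre, rotated back by +theta into the laser frame, gives
    # the hull centre; the minimizing angle gives its yaw (mod 90 deg).
    c, s = np.cos(best_yaw), np.sin(best_yaw)
    cx, cy = ((lo + hi) / 2.0) @ np.array([[c, s], [-s, c]])
    return cx, cy, best_yaw
```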
Both systems were set up to cover a subset of the explored environment (we called that region of space the “GT area”). The two trajectory-reconstruction systems worked independently of each other (and of the sensor systems on board the robot). Off-line, their recorded outputs were combined to generate a third stream of ground truth data with even lower error. This combined data stream, which describes the trajectory of the robot whenever it traversed the GT area, is the one supplied as ground truth with Rawseeds’ datasets.
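Rawseeds’ actual combination procedure is described in Deliverable D2.1 – part 2. Purely to illustrate the principle of merging two independent streams into one with lower error, the sketch below interpolates one timestamped (x, y, yaw) stream onto the other’s timeline and takes an inverse-variance weighted average; the pose format and the per-stream variances are assumptions, not Rawseeds specifics.

```python
# Hedged sketch of merging two independent pose streams; the real Rawseeds
# combination method is documented in Deliverable D2.1 - part 2.
import numpy as np

def fuse_trajectories(t_a, poses_a, var_a, t_b, poses_b, var_b):
    """Fuse two timestamped (x, y, yaw) streams on stream A's timeline.

    t_a, t_b   : 1-D arrays of timestamps (seconds, increasing)
    poses_a/b  : (N, 3) arrays of [x, y, yaw]
    var_a/b    : scalar position variances of each system (assumed known
                 from calibration; hypothetical here)
    """
    # Interpolate stream B onto stream A's timestamps: x and y linearly,
    # yaw through its unit vector to avoid angle wrap-around artefacts.
    xb = np.interp(t_a, t_b, poses_b[:, 0])
    yb = np.interp(t_a, t_b, poses_b[:, 1])
    cb = np.interp(t_a, t_b, np.cos(poses_b[:, 2]))
    sb = np.interp(t_a, t_b, np.sin(poses_b[:, 2]))

    # Inverse-variance weights: the less noisy system counts for more.
    wa, wb = 1.0 / var_a, 1.0 / var_b
    x = (wa * poses_a[:, 0] + wb * xb) / (wa + wb)
    y = (wa * poses_a[:, 1] + wb * yb) / (wa + wb)
    yaw = np.arctan2(wa * np.sin(poses_a[:, 2]) + wb * sb,
                     wa * np.cos(poses_a[:, 2]) + wb * cb)
    return np.column_stack([t_a, x, y, yaw])
```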

The vision-based ground truth collection system consisted of multiple Prosilica GC-series GigE Vision cameras, a Gigabit Ethernet switch concentrating all the vision data onto a single network link, and a Linux-based PC running custom software. The LRF-based system used well-established Sick LMS200 sensors, again paired with a Linux-based PC running custom software.
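The internals of the ad hoc vision software are not described here, but a standard building block for tag-based localization is recovering each tag’s pose from its detected corner pixels with a PnP solve against a calibrated camera, as sketched below with OpenCV. The tag side length, corner ordering, and calibration inputs are hypothetical placeholders, not Rawseeds parameters.

```python
# Hedged sketch: tag pose from detected corners via PnP (OpenCV).
# Tag size and calibration data are hypothetical, not Rawseeds values.
import numpy as np
import cv2

TAG_SIDE = 0.20  # hypothetical tag side length [m]
# Corner coordinates in the tag's own frame (z = 0 plane), in the same
# order the detector reports them.
OBJECT_POINTS = np.array([[-TAG_SIDE / 2, -TAG_SIDE / 2, 0.0],
                          [ TAG_SIDE / 2, -TAG_SIDE / 2, 0.0],
                          [ TAG_SIDE / 2,  TAG_SIDE / 2, 0.0],
                          [-TAG_SIDE / 2,  TAG_SIDE / 2, 0.0]])

def tag_pose(corners_px, camera_matrix, dist_coeffs):
    """Return (R, t) of the tag in the camera frame from its 4 corners.

    corners_px    : (4, 2) pixel coordinates of the detected corners
    camera_matrix : 3x3 intrinsic matrix from camera calibration
    dist_coeffs   : lens distortion coefficients from calibration
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  corners_px.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solve failed")
    rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rot, tvec
```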

A quantitative analysis of the performance of Rawseeds’ indoor trajectory reconstruction setup can be found in Deliverable D2.1 – part 2.