
Project: Local Geometry Extraction from Tactile Input

Description

In robotics and automation, determining the position and orientation of objects is essential for many applications. Traditionally, this has been accomplished using vision-based sensors. The local geometry of a target object can be recovered from their output using convolutional neural networks. The object's pose can then be estimated by scanning different positions on the object and, given a ground-truth model of the object, determining at which position each scanned patch is most likely to lie [1].

Figure: Demonstration of the position estimation in MidasTouch [1]
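The matching idea in [1] can be illustrated with a small, simplified sketch: compare an observed tactile patch against candidate patches sampled from a ground-truth object model, weight each candidate by similarity, and report the most likely pose. This is only an illustration under simplified assumptions; the function and variable names are hypothetical, the similarity measure (sum of squared differences) is a stand-in, and MidasTouch itself uses learned tactile embeddings with Monte-Carlo (particle-filter) inference over sliding touch sequences rather than a single reading.

    import numpy as np

    def estimate_position(observation, candidate_patches, candidate_poses):
        # Sum-of-squared-differences between the observed patch and every
        # candidate patch sampled from the ground-truth object model.
        errors = np.sum((candidate_patches - observation[None]) ** 2, axis=(1, 2))
        # Turn errors into normalised weights (higher weight = better match).
        weights = np.exp(-errors / (errors.mean() + 1e-9))
        weights /= weights.sum()
        best = int(np.argmax(weights))
        return candidate_poses[best], weights

    # Toy example: random patches stand in for a real object model.
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(50, 16, 16))      # N candidate geometry patches
    poses = rng.uniform(size=(50, 3))            # (x, y, theta) per candidate
    obs = patches[17] + 0.05 * rng.normal(size=(16, 16))  # noisy view of patch 17
    pose, weights = estimate_position(obs, patches, poses)
    print("most likely pose:", pose)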

A new flexible tactile sensor that uses a grid of piezoresistive elements has been developed [2]. This skin-like, touch-based sensor offers an alternative to vision-based sensors. The goal of this project is to explore the capabilities of this sensor: mapping sensor input to physical positions in space, and reconstructing as much information as possible about the geometry of a target object touching the sensor. The eventual aim is to produce output similar to that of a vision-based tactile sensor, so that it can be used for the pose-estimation task described above.
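As a first step toward that goal, the mapping from raw grid readings to physical positions could look like the following Python sketch, which converts active taxel indices to millimetre coordinates and computes a pressure-weighted contact centroid. The grid size, 2 mm pitch, threshold, and all names are illustrative assumptions, not specifications of the sensor in [2].

    import numpy as np

    GRID_SHAPE = (16, 16)   # hypothetical taxel grid
    PITCH_MM = 2.0          # hypothetical spacing between taxels, in mm

    def readings_to_positions(readings, threshold=0.1):
        # Active taxels are those whose normalised reading exceeds the threshold.
        rows, cols = np.nonzero(readings > threshold)
        pressures = readings[rows, cols]
        # Convert grid indices to positions (mm) in the sensor plane.
        xy_mm = np.stack([cols * PITCH_MM, rows * PITCH_MM], axis=1)
        # Pressure-weighted centroid gives a single estimated contact point.
        centroid = (xy_mm * pressures[:, None]).sum(axis=0) / pressures.sum()
        return xy_mm, pressures, centroid

    # Toy reading: a Gaussian pressure blob centred near taxel (row 5, col 9).
    yy, xx = np.mgrid[0:GRID_SHAPE[0], 0:GRID_SHAPE[1]]
    reading = np.exp(-((xx - 9) ** 2 + (yy - 5) ** 2) / 6.0)
    points_mm, pressures, centroid = readings_to_positions(reading)
    print("contact centroid (mm):", centroid)

From such per-taxel positions and pressures, a local geometry estimate (for example a coarse height or contact map) could then be built and fed to the matching step sketched earlier.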

References
  • [1] S. Suresh, Z. Si, S. Anderson, M. Kaess, and M. Mukadam, "MidasTouch: Monte-Carlo inference over distributions across sliding touch," in 6th Annual Conference on Robot Learning, 2022. https://doi.org/10.48550/arXiv.2210.14210
  • [2] H. Lee, D. Kwon, H. Cho, et al., "Soft Nanocomposite Based Multi-point, Multi-directional Strain Mapping Sensor Using Anisotropic Electrical Impedance Tomography," Scientific Reports, vol. 7, 39837, 2017. https://doi.org/10.1038/srep39837
Details
Student: Jelmer Blaas
Supervisor: Andrei Jalba