Terrain Preference Learning via VAE Query Selection

In this work, I train a variational autoencoder to generate trajectory query pairs for active preference learning.


Python

PyTorch

ROS


Many navigation problems in robotics require a well-defined cost map of the environment. Traditionally, these maps are built by manually specifying terrain costs based on context known to the human, but this becomes intractable as the number of terrain types grows. Preference learning offers a different way to tackle the problem: it infers a reward function from a human's responses to trajectory queries. Offline preference learning, however, is limited by the variability of the initial dataset, which caps the information gained from each query response and places a higher cognitive burden on the human. In this paper, we propose to apply recent advances in preference learning with generative models, specifically variational autoencoders: their lower-dimensional latent space is useful for clustering and for measuring how similar or dissimilar trajectories are. We use this latent space to avoid redundant or uninformative trajectory pairs when learning terrain weights for robotic navigation.
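The core loop described above, selecting a maximally dissimilar query pair in latent space and then updating terrain weights from the human's answer, can be sketched in a few lines. This is a minimal NumPy illustration, not the project's code: the per-terrain feature representation, the random projection standing in for a trained VAE encoder, the hidden "true" weights simulating the human, and the Bradley-Terry preference model are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trajectory is summarized by how many cells of
# each terrain type it crosses (feature vector phi); cost = w . phi.
n_terrains = 4
trajectories = rng.integers(0, 10, size=(20, n_terrains)).astype(float)

# Stand-in for the trained VAE encoder: a fixed random projection to a
# 2-D latent space (the real project would encode full trajectories).
proj = rng.normal(size=(n_terrains, 2))
latents = trajectories @ proj

# Active query selection: pick the most dissimilar pair in latent space,
# so the query is unlikely to be redundant for the human.
dists = np.linalg.norm(latents[:, None] - latents[None, :], axis=-1)
i, j = np.unravel_index(np.argmax(dists), dists.shape)

# Simulated human: prefers the trajectory with lower cost under hidden
# "true" terrain weights (in practice this is the query response).
w_true = np.array([0.1, 1.0, 0.5, 2.0])
pref, rej = (i, j) if trajectories[i] @ w_true < trajectories[j] @ w_true else (j, i)

# Bradley-Terry update: gradient ascent on the log-likelihood of the
# observed preference, P(pref) = sigmoid(w . (phi_rej - phi_pref)).
w = np.zeros(n_terrains)
lr = 0.05
diff = trajectories[rej] - trajectories[pref]
for _ in range(100):
    p_rej = 1.0 / (1.0 + np.exp(diff @ w))  # prob. the model gets it wrong
    w += lr * p_rej * diff                   # push preferred cost below rejected

# After updating, the learned weights rank the preferred trajectory cheaper.
```

In the full system this update would run over many queries, with each new pair chosen from the VAE latent space given the current weight estimate.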

Clusters visualized using t-SNE
Average weights over time for my VAE approach
Average weights over time for random approach