Distributed Evolution Using Smartphone Robotics (Thesis)



  • Thesis (M.S., Computer Science) -- University of Idaho, 2016 | In learning from demonstration (LfD), a human trainer demonstrates desired behaviors to a robotic agent, creating a training set the agent can learn from. LfD allows non-programmers to easily and naturally train robotic agents to perform specific tasks. However, to date most LfD work has focused on single-robot, single-trainer paradigms, creating bottlenecks in both the time required to demonstrate tasks and the time required to learn behaviors. A previously untested approach to addressing these limitations is to combine distributed LfD with a distributed evolutionary algorithm. Distributed learning is a model for robust real-world learning without the need for a central computer. In the distributed LfD system presented here, multiple trainers train multiple robots on different, but related, tasks in parallel, and each robot runs its own on-board evolutionary algorithm. The robots share training data, reducing the total time required for demonstrations, and exchange the genetic encodings of the best solutions they have discovered. In these experiments, robotic performance on a task after distributing either the genetic encoding for a behavior or the demonstrations used to learn single behaviors is compared to performance under a non-distributed LfD model receiving demonstrations used to learn multiple behaviors. Results show that the improvement in performance when distributing training data on single behaviors is greater than the improvement when sharing genetic information among robots trained on multiple behaviors. This implies that robots can learn robust performance of multi-part tasks by learning each individual part of a task and distributing the training among robots.
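The scheme the abstract describes, where each robot runs its own on-board evolutionary algorithm and periodically exchanges the genetic encoding of its best solution with the others, resembles an island-model evolutionary algorithm with migration. The sketch below illustrates that pattern only; the genome representation, fitness function, and all names are illustrative assumptions, not the thesis's actual implementation.

```python
import random

GENOME_LEN = 16  # illustrative bit-string genome

def fitness(genome):
    # Toy stand-in for task performance learned from demonstrations.
    return sum(genome)

def evolve_step(population, rng):
    # Keep the fitter half, then refill with crossover + one bit-flip mutation.
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: len(population) // 2]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = rng.sample(survivors, 2)
        cut = rng.randrange(1, GENOME_LEN)
        child = a[:cut] + b[cut:]          # one-point crossover
        child[rng.randrange(GENOME_LEN)] ^= 1  # bit-flip mutation
        children.append(child)
    return survivors + children

def run_distributed(n_robots=4, pop_size=20, generations=50,
                    migrate_every=10, seed=0):
    rng = random.Random(seed)
    # One independent population ("island") per robot.
    islands = [[[rng.randint(0, 1) for _ in range(GENOME_LEN)]
                for _ in range(pop_size)] for _ in range(n_robots)]
    for gen in range(1, generations + 1):
        islands = [evolve_step(pop, rng) for pop in islands]
        if gen % migrate_every == 0:
            # Each robot broadcasts its best genome; the next robot in a
            # ring adopts a copy, replacing its worst individual.
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop.sort(key=fitness)
                pop[0] = list(bests[(i + 1) % n_robots])
    return max((max(pop, key=fitness) for pop in islands), key=fitness)
```

Because each island keeps its elite individuals between generations, the best fitness per robot never decreases, and migration lets a good encoding found by one robot spread to the rest without any central coordinator.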

publication date

  • June 1, 2016