Animal Tracking


In my first semester at the RNEL, I developed video-tracking software based on background model subtraction, morphological operations, and further algorithmic processing, for use in future behavioral studies. After isolating the foreground, my algorithm would identify the largest object (which I affectionately refer to as RatBlob) and sequentially bifurcate it: first the entire RatBlob, then each of the two remaining halves. With sufficient training on the background, the algorithm performed fairly well. However, its performance was not as robust as I would have liked: an unsteady background, multiple animals, or visual noise (e.g., camera shaking or readjusting) could throw off the tracker. Additionally, occlusion by wires or an implant could disrupt tracking, which made my solution useful only for wireless experiments, and therefore rather useless to many of my labmates. Thus, I was not satisfied with the result.
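The pipeline above (background model, foreground mask, largest-blob extraction, bifurcation) can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not my original code: the function names, the median background model, the BFS-based connected-component labeling, and the centroid-based split are all illustrative assumptions.

```python
# Hypothetical sketch of the described pipeline: background-model subtraction,
# thresholding, largest-blob ("RatBlob") extraction, and a simple bifurcation.
# All names and parameter choices are illustrative, not the original code.
import numpy as np
from collections import deque

def background_model(frames):
    """Pixelwise median over a stack of grayscale frames."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=30):
    """Binary mask where the frame differs strongly from the background."""
    return np.abs(frame.astype(float) - background) > thresh

def largest_blob(mask):
    """Return the mask of the largest 4-connected foreground component."""
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, current = 0, 0, 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to a labeled component
        current += 1
        labels[seed] = current
        queue, size = deque([seed]), 0
        while queue:  # breadth-first flood fill of this component
            r, c = queue.popleft()
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        if size > best_size:
            best_size, best_label = size, current
    return labels == best_label

def bifurcate(blob):
    """Split a blob at the median coordinate along its longer axis."""
    rows, cols = np.nonzero(blob)
    axis = 0 if np.ptp(rows) >= np.ptp(cols) else 1
    cut = int(np.median(rows if axis == 0 else cols))
    half_a, half_b = blob.copy(), blob.copy()
    if axis == 0:
        half_a[cut:, :] = False
        half_b[:cut, :] = False
    else:
        half_a[:, cut:] = False
        half_b[:, :cut] = False
    return half_a, half_b
```

In this sketch, applying `bifurcate` once to the full RatBlob and then once to each half yields the four sub-blobs the text describes; a real implementation would also apply morphological opening/closing to the mask before labeling.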

Spring 2018

Fast forward a year: after spending the summer of 2017 and the following fall semester developing the RELevator, I felt it was time to return to my initial project. This time, however, I wanted to gain more experience with machine learning methods, particularly neural networks. With this in mind, I began exploring neural network applications for object tracking and image processing. To access the GitHub repository, click here!


A more detailed report on the development and applications of a neural network implementation for rodent videotracking.