UAV hardware-software integration and onboard systems development
This project was done as part of my research with the Distributed Aerospace Systems and Control Laboratory (DASC) at the University of Michigan. Its goal is to develop a functional baseline platform for testing a flight controller that can fly autonomously to scenes of interest and build a semantic understanding of each scene using object detection and other computer vision-driven algorithms.

The project breaks down into two major components: the theoretical formulation of the UAV path-planning algorithm, studied and developed by Vishnu Chipade, and a working platform on which this theoretical model can be implemented, tested, and improved. The second component is the primary focus of this report.
This project is of particular interest to DASC because such an autonomous system, which leverages both the autonomous flight capabilities of current commercially available UAVs and recent advances in computer vision and semantic classification algorithms, has many useful applications.
For instance, consider a search-and-rescue operation in an unknown environment. A drone could be deployed in the area of interest and develop a semantic understanding of its surroundings while searching for a particular object. When uncertain about a detection, the drone can autonomously fly closer to an object with a low confidence score to improve its understanding of the scene, all driven by a flight-trajectory-planning algorithm that uses this detection information.
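To make this behavior concrete, here is a minimal sketch of how low-confidence detections could drive waypoint selection. This is an illustration only, not the actual planner (which is Vishnu Chipade's work); the class, threshold, and stepping rule are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # semantic class from the object detector
    confidence: float   # detector confidence in [0, 1]
    position: tuple     # estimated (x, y, z) of the object in the world frame

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff for "well understood"

def next_waypoint(detections, current_position):
    """Pick the least-confident detection and return a waypoint closer to it.

    Returns None when every detection is above the threshold, i.e. the
    scene is considered sufficiently understood.
    """
    uncertain = [d for d in detections if d.confidence < CONFIDENCE_THRESHOLD]
    if not uncertain:
        return None
    target = min(uncertain, key=lambda d: d.confidence)
    # Move partway toward the uncertain object to get a closer look.
    return tuple(c + 0.5 * (t - c) for c, t in zip(current_position, target.position))
```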

The report below covers the practical details of the hardware and software used to make testing possible. It describes each hardware component, its function, and the software packages associated with it, including the physical wired connections and the configuration needed to make all the hardware work together. The software section covers the Onboard SDK platform, implementation details, and the required environment setup. Additionally, the scripts and packages used to integrate all the subsystems are explained with code and descriptions.
Here’s a video showing the YOLO object detection algorithm running on the drone’s cameras.
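For context, the detection loop in the video looks roughly like the sketch below, which runs a YOLO model on frames from a camera stream. The ultralytics package, model file, and video source here are stand-ins; the platform's actual YOLO build and camera interface may differ.

```python
import cv2
from ultralytics import YOLO  # stand-in; the actual setup may use a different YOLO build

model = YOLO("yolov8n.pt")    # hypothetical model weights
stream = cv2.VideoCapture(0)  # placeholder for the drone camera feed

while stream.isOpened():
    ok, frame = stream.read()
    if not ok:
        break
    results = model(frame)             # run detection on the current frame
    annotated = results[0].plot()      # draw boxes and labels on a copy of the frame
    cv2.imshow("YOLO on drone camera", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stream.release()
cv2.destroyAllWindows()
```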
Description
This project is one of the most holistic UAV projects I have done: everything from purchasing the hardware to integrating it via software packages and writing code to extract information from it happened here. The platform can integrate data from the LIDAR and RGB cameras and store it at every timestep for 3D object detection, and API-based control techniques are used to execute flight trajectories. Overall, it was a very challenging project but also a great learning experience.
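To give a flavor of the data-collection side, here is a minimal sketch of per-timestep logging of paired LIDAR and RGB data for later 3D object detection. The directory layout and file naming are hypothetical stand-ins for the platform's actual storage scheme.

```python
import json
import time
from pathlib import Path

import cv2
import numpy as np

LOG_DIR = Path("flight_logs")  # hypothetical output directory
LOG_DIR.mkdir(exist_ok=True)

def log_timestep(rgb_frame: np.ndarray, lidar_points: np.ndarray) -> None:
    """Store one synchronized RGB frame and LIDAR point cloud, keyed by timestamp."""
    stamp = f"{time.time():.6f}"
    cv2.imwrite(str(LOG_DIR / f"{stamp}_rgb.png"), rgb_frame)
    np.save(LOG_DIR / f"{stamp}_lidar.npy", lidar_points)  # N x 3 array of (x, y, z)
    # A small index entry makes it easy to pair files later for 3D detection.
    with open(LOG_DIR / "index.jsonl", "a") as f:
        f.write(json.dumps({"t": stamp,
                            "rgb": f"{stamp}_rgb.png",
                            "lidar": f"{stamp}_lidar.npy"}) + "\n")
```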