Research

At its core, Project Artemis is a research project that aims to provide improved navigation solutions for UAVs.

Our software stack is available on GitHub: www.github.com/ProjectArtemis

All Artemis MAVs follow the same basic distributed system architecture: a high-level onboard computer paired with a low-level embedded flight controller, typically a PX4 autopilot board or a similar derivative. The middleware of choice is ROS (Robot Operating System) on the high-level companion computer and the PX4 Middleware on the deeply embedded controller.
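
To make the split concrete, here is a minimal sketch of the companion-computer side, assuming the standard MAVROS bridge between ROS and the PX4 autopilot. The original text does not name the bridge, and the topic names below follow common MAVROS conventions rather than anything Artemis-specific:

```python
# Minimal companion-computer node: consume the pose estimate streamed up
# from the PX4 flight controller and stream position setpoints back down.
# Topic names are MAVROS conventions, assumed rather than taken from the text.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    # Pose estimate coming up from the flight controller via MAVROS.
    rospy.loginfo_throttle(1.0, "FCU pose: %.2f %.2f %.2f",
                           msg.pose.position.x,
                           msg.pose.position.y,
                           msg.pose.position.z)

rospy.init_node("companion_bridge")
rospy.Subscriber("/mavros/local_position/pose", PoseStamped, on_pose)
setpoint_pub = rospy.Publisher("/mavros/setpoint_position/local",
                               PoseStamped, queue_size=1)

rate = rospy.Rate(20)  # PX4 expects a steady setpoint stream in OFFBOARD mode
while not rospy.is_shutdown():
    sp = PoseStamped()
    sp.header.stamp = rospy.Time.now()
    sp.pose.position.z = 2.0  # hold 2 m altitude (illustrative only)
    setpoint_pub.publish(sp)
    rate.sleep()
```

The flight controller keeps the fast inner control loops, while nodes like this one run perception and planning at a higher level.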

Software

Visual Navigation

Multiple cameras provide exteroceptive information about the environment, which is used for mapping and localisation. Forward-facing stereo cameras compute depth images in real time, and an optical-flow camera provides precision dead-reckoning if the active SLAM system diverges.

[Figure: real-time disparity image computed from the forward stereo cameras]
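
As a rough illustration of the depth pipeline, the sketch below computes a disparity and depth image from a rectified stereo pair with OpenCV's semi-global matcher. The matcher choice, focal length, and baseline are illustrative assumptions, not Artemis calibration data:

```python
# Depth from a calibrated, rectified stereo pair using OpenCV's SGBM matcher.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=9,
)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# depth = f * B / d for focal length f (pixels) and baseline B (metres);
# the values below are placeholders, not Artemis calibration data.
f, B = 420.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```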

All cameras are time-synchronised with respect to each other and to the IMU (Inertial Measurement Unit) of the flight controller. Depth images are fed into a volumetric mapping framework (OctoMap), which builds a 3D map of the environment incrementally.

[Figure: incrementally built OctoMap of the environment]
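
The synchronisation itself happens in hardware, but downstream consumers still need to pair the time-stamped streams in software. A minimal sketch of that pairing in ROS, with assumed topic names, might look like this:

```python
# Pair time-stamped depth and IMU messages before feeding the mapping stack.
# Topic names are assumptions, not taken from the original text.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu

def on_synced(depth_msg, imu_msg):
    skew = (depth_msg.header.stamp - imu_msg.header.stamp).to_sec()
    rospy.loginfo_throttle(1.0, "depth/IMU pair, skew %.4f s", skew)
    # ...hand the matched pair to the volumetric mapping framework
    # (e.g. a point cloud built from this depth image for OctoMap)...

rospy.init_node("sync_example")
depth_sub = message_filters.Subscriber("/stereo/depth", Image)
imu_sub = message_filters.Subscriber("/mavros/imu/data", Imu)
sync = message_filters.ApproximateTimeSynchronizer(
    [depth_sub, imu_sub], queue_size=10, slop=0.005)  # 5 ms tolerance
sync.registerCallback(on_synced)
rospy.spin()
```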

We also run SLAM (Simultaneous Localisation and Mapping) on the robot. The system continuously constructs a sparse map of the environment, which is optimised in the background. Unlike GPS, visual SLAM is globally consistent, accurate to the centimetre level, and works both indoors and outdoors. Tight fusion with time-synchronised inertial measurements greatly increases robustness and accuracy.
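
The original text does not name the SLAM system in use, so the sketch below only illustrates the generic front end of a sparse visual SLAM pipeline: detecting and matching ORB features between consecutive frames, which yields the landmark observations that the background optimisation refines (file names are placeholders):

```python
# Front end of a sparse visual SLAM pipeline: keypoint detection and
# matching between consecutive frames with OpenCV's ORB features.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

kp0, des0 = orb.detectAndCompute(prev, None)
kp1, des1 = orb.detectAndCompute(curr, None)

matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
good = matches[:200]  # keep the strongest correspondences

# From here, matched points would be triangulated into sparse landmarks and
# refined together with camera poses in the background optimisation
# (bundle adjustment / pose graph), tightly fused with IMU measurements.
print("tracked %d landmark candidates" % len(good))
```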

State Estimation

The system is designed to navigate using all sensors available in the environment: GPS and vision outdoors, and pure vision indoors. Since sensor availability is not guaranteed, a modular sensor-fusion approach based on a hybrid Kalman filter with fault detection maintains a robust state estimate. The motivation for fusing information from all sensors is that even if a particular subset or module fails, overall system performance is not compromised.
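
As a concrete, deliberately simplified illustration of fault-tolerant fusion, the sketch below gates each measurement update of a linear Kalman filter with a chi-square test on the innovation. The actual hybrid filter is more sophisticated; the state layout, noise values, and gate threshold here are illustrative assumptions:

```python
# Modular sensor fusion with fault detection: each sensor update is gated
# by a chi-square test on its innovation, so a faulty module is rejected
# without compromising the overall estimate.
import numpy as np

class GatedKF:
    def __init__(self, x0, P0):
        self.x, self.P = x0, P0

    def predict(self, F, Q):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, R, gate=9.21):  # chi-square, 2 dof, 99% ~ 9.21
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R                 # innovation covariance
        d2 = float(y.T @ np.linalg.inv(S) @ y)   # squared Mahalanobis distance
        if d2 > gate:
            return False  # reading flagged as faulty; skip this update
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return True

# Example: 2D position state fused from two position-fix sources.
kf = GatedKF(np.zeros(2), np.eye(2))
H = np.eye(2)
kf.predict(np.eye(2), 0.01 * np.eye(2))
kf.update(np.array([0.1, -0.2]), H, 0.5 * np.eye(2))        # plausible: accepted
ok = kf.update(np.array([50.0, 80.0]), H, 0.5 * np.eye(2))  # outlier: rejected
print("outlier accepted?", ok)
```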

Terrain Analysis

[Figure: terrain mapping]

Obstacle Avoidance

The global volumetric map is used to continuously compute a collision-free trajectory for the vehicle. In assisted modes, the motion planner intervenes only if the operator's high-level position commands could lead to a collision. In autonomous modes, the planner computes optimal trajectories based on a next-best-view analysis to optimise 3D reconstruction. The planner sends its commands to the minimum-snap trajectory controller running on the low-level flight controller, which computes the motor outputs.
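
Here is a minimal sketch of the assisted-mode behaviour, with a dense boolean NumPy grid standing in for the OctoMap occupancy query; the resolution and geometry are made up for illustration:

```python
# Assisted-mode gate: forward the operator's position command unchanged
# unless the straight-line path to it crosses an occupied voxel.
import numpy as np

VOXEL = 0.2  # map resolution in metres (assumed)

def path_is_free(occ, start, goal):
    """Sample the segment start->goal at sub-voxel steps; return False if
    any sample lands in an occupied voxel of the boolean grid `occ`."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    n = max(2, int(np.linalg.norm(goal - start) / (0.5 * VOXEL)))
    for t in np.linspace(0.0, 1.0, n):
        idx = tuple(((start + t * (goal - start)) / VOXEL).astype(int))
        if occ[idx]:
            return False
    return True

occ = np.zeros((50, 50, 25), dtype=bool)
occ[20:25, 20:25, :] = True  # a wall in the middle of the map

pose = (1.0, 1.0, 2.0)
command = (8.0, 8.0, 2.0)  # operator's high-level position command
if path_is_free(occ, pose, command):
    target = command   # pass the command through unchanged
else:
    target = pose      # intervene: hold position and let the planner replan
print("forwarding setpoint:", target)
```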


Help us out!

We always need help funding our prototype systems, and you can help out too! Both hardware and monetary contributions are highly appreciated. Simply click below if you’d like to contribute via PayPal. If you think you can provide novel hardware (e.g. high-end computing systems, vision systems, etc.), that is awesome as well. Contact Kabir@UASys.io to learn more.

If you would like to officially become a sponsor and get your logo on our home page, please feel free to contact us.