My senior-year capstone project was under the tutelage of Dr. Abubakar Muhammad at the Laboratory for Cyber Physical Systems at LUMS (CyPhyNets). The ultimate goal was to create a robust, cost-effective mine detector arm for the award-winning Marwa autonomous landmine detection platform.
Because standard minesweeping equipment must operate very close to the ground to function, the primary requirement was for the arm to track the profile of a patch of terrain in front of the robot as accurately as possible. The setup consisted of multiple modules which, combined, tracked the terrain in the sweep space in front of the robot to an accuracy of ±2 cm.
Mine Detector Sweeper Arm
To sweep a rectangular space in front of a robot vehicle while keeping the mine detector perpendicular to the ground, an arm with at least five degrees of freedom (DOF) is required: three to position the end effector in 3-dimensional space with x, y and z coordinates, and two more to match the slope of the surface plane.
To keep costs low, we decided to go with an arm consisting of three revolute and two prismatic joints, with the horizontal sweeping motion taken care of by one prismatic joint alone. The remaining four joints combined determined the position and tilt of the mine detector in a plane extending from the robot. Potentiometers at every joint relayed their position back to the feedback controller.
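As an illustration, the forward kinematics of such a 3R + 2P arrangement can be sketched as follows. The link lengths and joint conventions below are placeholders, not the real arm's dimensions:

```python
import numpy as np

# Hypothetical link lengths in metres -- the real arm's dimensions differ.
L1, L2 = 0.40, 0.30

def forward_kinematics(x_sweep, d_lift, t1, t2, t3):
    """Pose of the detector head for a 3R + 2P arm like the one described.

    x_sweep -- horizontal prismatic joint (the sweep axis)
    d_lift  -- prismatic extension within the vertical plane
    t1..t3  -- revolute joint angles in radians; t3 sets the head tilt
    """
    # Planar 2-link chain riding on the lifting prismatic joint.
    y = d_lift + L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    z = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    tilt = t1 + t2 + t3  # detector tilt relative to the horizontal
    return x_sweep, y, z, tilt
```

Reading the five potentiometers and pushing the values through a function of this shape is what turns raw joint positions into an end-effector pose.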
Stereoscopic Vision System
In order to keep Marwa’s vision system simple and low-cost while still being reasonably accurate, a pair of USB web cameras was used as a stereoscopic camera pair. After feature-matching the images from the two cameras, the disparity between them was calculated.
Using this disparity, we were able to reconstruct a 3D point cloud with PCL that gave us the profile of the terrain in front of the robot.
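The reprojection step follows the standard pinhole/disparity relations. The focal length, baseline and principal point below are illustrative values, not our rig's actual calibration:

```python
import numpy as np

# Illustrative calibration values, not the real rig's parameters.
F = 700.0              # focal length in pixels
BASELINE = 0.12        # camera separation in metres
CX, CY = 320.0, 240.0  # principal point

def disparity_to_points(disp):
    """Reproject a dense disparity map into a 3-D point cloud using
    Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f."""
    v, u = np.indices(disp.shape)
    valid = disp > 0  # zero disparity means no stereo match
    Z = F * BASELINE / disp[valid]
    X = (u[valid] - CX) * Z / F
    Y = (v[valid] - CY) * Z / F
    return np.column_stack((X, Y, Z))
```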
I was also able to utilize template matching to detect the position of the end effector in front of the robot, giving one estimate of the mine detector’s position.
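A brute-force stand-in for the template-matching step might look like the following. We used an off-the-shelf implementation; this sum-of-squared-differences version only illustrates the idea:

```python
import numpy as np

def match_template(image, tmpl):
    """Exhaustive sum-of-squared-differences template match; returns the
    (row, col) of the best-matching window's top-left corner."""
    th, tw = tmpl.shape
    best, best_rc = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = np.sum((image[r:r + th, c:c + tw] - tmpl) ** 2)
            if score < best:
                best, best_rc = score, (r, c)
    return best_rc
```

With the camera calibration known, the matched pixel location of the detector head converts to one independent estimate of its 3-D position.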
Sensor Fusion and Pose Estimation
Because of the amount of play in the inexpensive (and hence loosely toleranced) aluminum-and-steel arm, it was difficult to estimate the position of the arm from the arm-mounted potentiometers alone. Therefore, we decided to introduce redundancy into the system by also measuring the pose of the end effector using the stereo vision system, and combining the results.
With five degrees of freedom matched by five constraints, the inverse kinematics of the arm admitted a straightforward solution from the potentiometer readings.
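For the planar 2-link portion of such an arm, the closed-form inverse kinematics looks roughly like this (link lengths are hypothetical placeholders):

```python
import numpy as np

# Hypothetical link lengths in metres.
L1, L2 = 0.40, 0.30

def two_link_ik(y, z, elbow_up=True):
    """Closed-form inverse kinematics for a planar 2-link chain:
    joint angles (t1, t2) that place the wrist at (y, z)."""
    c2 = (y * y + z * z - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = np.arccos(np.clip(c2, -1.0, 1.0))
    if not elbow_up:
        t2 = -t2
    t1 = np.arctan2(z, y) - np.arctan2(L2 * np.sin(t2), L1 + L2 * np.cos(t2))
    return t1, t2
```

The elbow-up/elbow-down flag reflects the usual two-solution ambiguity of a 2-link chain; a matched number of DOF and constraints is what keeps the full 5-DOF problem from having a continuum of solutions.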
However, identifying common axes in which to transform the readings from both of the sensor groups proved difficult. Eventually, a power-up calibration routine was devised in which the arm was moved through a set of pre-determined positions and the coordinates from the cameras and potentiometers were reconciled to a single coordinate frame to serve as ground truth for each particular run.
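Reconciling two sets of corresponding 3-D points into a single frame is a classic least-squares alignment problem. A sketch using the Kabsch algorithm, one way such a calibration routine can be implemented (not necessarily exactly what we ran), is:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t with R @ P + t ~= Q,
    where P and Q are 3xN matching point sets from the two sensor groups
    recorded at the pre-determined calibration poses."""
    p = P.mean(axis=1, keepdims=True)
    q = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - q) @ (P - p).T)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = q - R @ p
    return R, t
```

Once R and t are known for a run, every camera-frame measurement can be mapped into the arm's frame (or vice versa) before fusion.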
Merging the data was also problematic, as the relative accuracy of the two sensor groups varied depending on ambient lighting as well as robot stability. In order to complete a working prototype, I used a static data fusion model based on the relative errors from both sensor groups under lab conditions.
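The static fusion amounts to inverse-variance weighting, with the lab-measured error variances held fixed. A minimal sketch:

```python
def fuse(x_pot, var_pot, x_cam, var_cam):
    """Inverse-variance weighted average of the potentiometer and camera
    pose estimates; the variances were measured once under lab conditions
    and then held fixed (the 'static' part of the model)."""
    w_pot = 1.0 / var_pot
    w_cam = 1.0 / var_cam
    return (w_pot * x_pot + w_cam * x_cam) / (w_pot + w_cam)
```

The noisier sensor is automatically down-weighted, but only according to its lab-measured variance, which is exactly the limitation discussed below.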
Quite possibly one of the biggest challenges I faced, sensor fusion proved to be something that I was not able to solve satisfactorily. Ideally, the system would have been self-correcting at run-time while tracking previous errors. In addition, it should have been sensitive to operating conditions and weighted inputs accordingly to cater to extreme ambient light as well as physical disturbances to the robot, to provide a more robust fused output.
Path Planning Module
To cover the sweep space in front of the robot, I took the point cloud from the stereo system and, after statistically removing outlier points, approximated the cloud with a grid of planes (‘tiles’).
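A simplified version of the tiling step might bucket points into grid cells and reject height outliers per cell. This is a much cruder filter than PCL's statistical outlier removal, but the same idea:

```python
from collections import defaultdict
import numpy as np

def tile_heights(points, cell=0.02, k=2.0):
    """Bucket (x, y, z) points into a cell-sized grid and return the mean
    height per cell, discarding points more than k standard deviations
    from their cell's mean height."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    tiles = {}
    for key, zs in cells.items():
        zs = np.asarray(zs)
        mu, sd = zs.mean(), zs.std()
        kept = zs if sd == 0 else zs[np.abs(zs - mu) < k * sd]
        tiles[key] = kept.mean()
    return tiles
```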
Each ‘tile’ represents a point in the configuration space, and a path across a series of workspace-connected configurations gives the path the end effector must follow. A grid-based search (A*) with a 2 cm grid size was used.
It was important to note that some joints actuated much faster than others, so the algorithm needed tweaking: several intermediate steps and compensatory delays were added when moving from one configuration to another, since otherwise the arm could make an invalid move in the configuration space and collide with the terrain.
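A minimal version of the A* grid search, on a 4-connected occupancy grid with a Manhattan heuristic (the grid layout and costs here are illustrative, not our actual tile map), could look like:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (1 = blocked), with a
    Manhattan-distance heuristic; returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came, cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:          # already expanded with a better cost
            continue
        came[cur] = parent
        if cur == goal:          # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < cost.get(nxt, float('inf'))):
                cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None
```

The rate compensation mentioned above would sit on top of this: each edge of the returned path gets expanded into intermediate joint setpoints and delays before being handed to the controller.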
Feedback Control Module
This was a relatively simple Arduino discrete digital feedback controller, with standard PID tuning employed in the z-domain to control the arm. Extensive tuning was required to achieve some semblance of control, because of two issues: i) high starting currents of the joint motors introduced jerkiness, and ii) the play in the hardware caused the landmine detector arm to take a long time to settle.
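The controller boils down to a position-form PID evaluated at a fixed sample time. A sketch follows; the gains are placeholders, not our tuned values:

```python
class DiscretePID:
    """Position-form discrete PID at a fixed sample time dt, roughly the
    shape of the controller that ran on the Arduino."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt           # rectangular integration
        deriv = (err - self.prev_err) / self.dt  # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

The derivative term is what fought the motor-start jerkiness, and the long settling times from mechanical play are what made the gain tuning so laborious.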
We discussed the possibility of self-learning sensors to better control the system, but reached the conclusion that the inherent jerkiness and coarseness of the system would render convergence difficult and perhaps even impossible, and therefore did not explore this further.
For more on what I’ve done, do check out my resume.
Get in touch with me at: qasimzafar AT outlook DOT com