The goal of this project was to use the Texas Instruments F28379D LaunchPad board alongside an OpenMV camera and IR sensors to build a balancing robot that follows a line, detects obstacles, and plots its tracked motion in LabVIEW.
We use a state machine to integrate obstacle avoidance and line detection. Three states were defined: Find Line, Avoid Obstacle, and Follow Line. The robot follows a line unless an obstacle is detected within a set distance, or the line leaves the camera's view. In the Avoid Obstacle state, it reverses away from the obstacle at a rate proportional to the obstacle's distance. In the Find Line state, it reverses slowly until the line comes back into view of the camera. A maneuver for driving around an obstacle is drafted in the comments of the state machine code, but a more robust line-finding approach will be needed to complete it.
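In sketch form, the state logic is roughly the following (the state names and flag variables here are illustrative placeholders, not the exact identifiers in our code):

typedef enum { FOLLOW_LINE, AVOID_OBSTACLE, FIND_LINE } RobotState;

RobotState state = FOLLOW_LINE;

// Illustrative transition logic for the three-state machine.
void update_state(int obstacle_detected, int line_visible) {
    switch (state) {
    case FOLLOW_LINE:
        if (obstacle_detected)  state = AVOID_OBSTACLE;  // obstacle within threshold
        else if (!line_visible) state = FIND_LINE;       // camera lost the line
        break;
    case AVOID_OBSTACLE:
        if (!obstacle_detected) state = line_visible ? FOLLOW_LINE : FIND_LINE;
        break;
    case FIND_LINE:
        if (obstacle_detected)  state = AVOID_OBSTACLE;
        else if (line_visible)  state = FOLLOW_LINE;     // line back in view
        break;
    }
}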
The balancing control depends on feedback from optical encoders and an IMU. The controller uses a Kalman filter to fuse the sensor data from the gyro and the accelerometer. Both sensors provide information about the robot's tilt: the accelerometer gives an absolute angle reference that does not drift but is noisy, while the gyro is responsive but its integrated angle drifts over time. By fusing these two sensor inputs, we obtain a tilt estimate that is both responsive and free of long-term drift.
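As a rough illustration of the idea (a minimal steady-state sketch, not our tuned filter), a one-dimensional Kalman update predicts the tilt by integrating the gyro rate and then corrects it toward the accelerometer's angle; the gain K and sample period here are placeholder values:

float tilt = 0.0f;        // fused tilt estimate (rad)
const float dt = 0.001f;  // sample period (s), placeholder
const float K  = 0.02f;   // steady-state Kalman gain, placeholder

float fuse_tilt(float gyro_rate, float accel_angle) {
    float predicted = tilt + gyro_rate * dt;          // predict: integrate gyro rate
    tilt = predicted + K * (accel_angle - predicted); // correct: pull toward accel angle
    return tilt;
}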
A PID controller is implemented to steer the robot, and a PI controller is used for forward and backward movement. The balance control uses full state feedback: the control law applies a gain to the tilt angle, tilt rate, average wheel velocity, and tilt acceleration, as shown in the line of code below:
ubal = -K1*tilt - K2*gyrorate - K3*(velLeft + velRight)/2.0 - K4*gyrorate_dot;
This ubal control effort is split evenly between the left and right motors, balancing the robot through forward and backward motion.
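In sketch form (variable and helper names are illustrative; turn is the steering command described in the line-following section below):

extern void setLeftMotor(float u);   // placeholders for the firmware's PWM helpers
extern void setRightMotor(float u);

void command_motors(float ubal, float turn) {
    float uLeft  = ubal / 2.0f + turn;  // steering adds effort to one wheel...
    float uRight = ubal / 2.0f - turn;  // ...and removes it from the other
    setLeftMotor(uLeft);
    setRightMotor(uRight);
}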
While it was not used in the demos in the attached video, we attempted to implement an alternative balance controller that accounts for possible wheel slip due to loss of traction. It is written as a callable function BalanceNoSlip() in the provided zip file. Like the original controller, it corrects for error in tilt, tilt velocity, and average wheel velocity, but in place of tilt acceleration it controls differential wheel speed. One benefit of this approach would be the ability to balance by turning, in addition to forward-backward motion. A second benefit is that it adjusts its gain values based on estimated slip. The control law is a linear combination of four full state feedback controllers, with slip estimated by an observer and a Kalman filter. The dynamic model and control method are proposed by Huang and Yeh in "Anti Slip Balancing Control for Wheeled Inverted Pendulum Vehicles" (IEEE Xplore). A MATLAB file was created alongside the final code to help calculate the gain matrices and observer gain matrix from the physical parameters of the robot. Ten linear matrix inequalities are formed from the system model and solved for the four gain matrices. The result still requires tweaking but can provide a starting point for future work.
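Schematically, the structure is a slip-weighted blend of the four controllers; this is a loose sketch of that idea, not the code in BalanceNoSlip(), and the weights, gains, and state ordering are placeholders:

#define N_CTRL  4
#define N_STATE 4

// u = sum_i w_i * (-K_i * x), where the weights w_i depend on estimated slip.
float blended_effort(const float K[N_CTRL][N_STATE],
                     const float w[N_CTRL],    // slip-dependent weights, sum to 1
                     const float x[N_STATE]) { // e.g. tilt, tilt rate, avg vel, diff vel
    float u = 0.0f;
    for (int i = 0; i < N_CTRL; i++) {
        float ui = 0.0f;
        for (int j = 0; j < N_STATE; j++) ui -= K[i][j] * x[j];
        u += w[i] * ui;
    }
    return u;
}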
The turning control is determined by a line-following program written in MicroPython that runs on the OpenMV camera. The algorithm uses the get_regression() function to fit a line through the pixels in the camera frame. The function can tolerate roughly 25% of the pixels being outliers, so the image is first filtered and thresholded to isolate a single line with as much noise removed as possible. get_regression() returns the theta() and rho() values of the fitted line, as shown in the image below.
Unfortunately, these theta() and rho() values are not very useful on their own: they do not scale intuitively with the angle and position of the line, and instead depend on the quadrant the line is in. They therefore cannot be applied directly to a control scheme, and are converted to a theta that ranges from -1 to 1 and a rho that ranges from -40 to 40; both deviate from zero as the line becomes more angled or moves further from the center of the frame. These two values are multiplied by gains and combined to serve as the error for a PD function, which uses proportional and derivative control to calculate a steering value applied as a speed difference between the left and right motors. The steering value is sent over UART to the LaunchPad board, where it is combined with the balance control scheme to differentially drive the motors while keeping the robot balanced. In practice, we found that the robot's tendency to rebalance itself after a turn shook the camera, which in turn caused the turning algorithm to overcompensate. As a result, the line following is not very smooth and loses the line easily. Future work would include better tuning of the turning controller and raising the camera to widen its field of view so the line is not lost as easily. This algorithm is largely borrowed from Kwabena Agyeman of the OpenMV team, but has been modified for the specific lighting conditions, line appearance, and control scheme of our balancing robot.
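The conversion and PD step look roughly like this (shown in C for consistency with the rest of this write-up, although the actual code runs in MicroPython on the camera; the gains and the 90-degree fold are illustrative):

// Fold get_regression()'s theta (0..179 degrees) so that 0 means straight
// ahead and the sign indicates lean direction, then scale to roughly [-1, 1].
float normalize_theta(float theta_deg) {
    float t = (theta_deg > 90.0f) ? theta_deg - 180.0f : theta_deg;
    return t / 90.0f;
}

// Combine the normalized theta and rho errors and apply PD control.
float pd_steering(float theta_n, float rho_n, float dt) {
    static float prev_err = 0.0f;
    const float K_THETA = 1.0f, K_RHO = 0.025f; // placeholder mixing gains
    const float KP = 1.0f, KD = 0.1f;           // placeholder PD gains
    float err  = K_THETA * theta_n + K_RHO * rho_n;
    float derr = (err - prev_err) / dt;
    prev_err = err;
    return KP * err + KD * derr;  // steering value sent over UART
}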
The robot has three integrated IR distance sensors for obstacle detection and avoidance, pointed forward-left, forward-right, and straight ahead. Each sensor attaches to a 3x1 soldered header with connections to a 3.3V supply rail (Vcc), a ground rail (GND), and an output signal (Vo). The output signals are limited to 3.3V by an onboard protection chip and passed to the F28379D's ADC-D. A start-of-conversion (SOC) for each channel is triggered by the ePWM5 signal with a 1 ms period. Once all sensor values have been converted, the ADCD_ISR interrupt function is called, which reads and stores the values. The analog voltages from the IR sensors are converted into integers between 0 and 4095, where a higher value means a shorter distance. If any sensor reads above an arbitrarily chosen threshold of 2000, the robot flags that an obstacle has been detected and enters the Avoid Obstacle state.
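The detection logic reduces to a threshold test on the three stored conversions; this sketch uses illustrative names rather than our exact ISR:

#include <stdint.h>

#define OBSTACLE_THRESHOLD 2000  // ADC counts; larger reading = closer object

volatile uint16_t irLeft, irCenter, irRight;
volatile int obstacleDetected = 0;

// Body of the logic run inside ADCD_ISR once all three SOCs complete.
void read_ir_results(uint16_t result0, uint16_t result1, uint16_t result2) {
    irLeft   = result0;  // forward-left sensor
    irCenter = result1;  // straight-ahead sensor
    irRight  = result2;  // forward-right sensor
    obstacleDetected = (irLeft   > OBSTACLE_THRESHOLD ||
                        irCenter > OBSTACLE_THRESHOLD ||
                        irRight  > OBSTACLE_THRESHOLD);
}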
We use LabVIEW and an ESP8266 to plot the route taken by the robot. The Segbot uses its optical encoders to determine its position (in feet) and bearing (in radians), which are sent to LabVIEW as a string and updated continuously as the Segbot traverses the obstacle course. LabVIEW parses the string into two integer coordinates, scales them to fit a 700-by-700-pixel square, and draws a blue square at the robot's current position. The output of this program is shown in Figure 3.
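The encoder-based dead reckoning amounts to a standard differential-drive update; this is a minimal sketch with placeholder geometry, not our exact code, and it omits the string formatting sent to the ESP8266:

#include <math.h>

#define WHEEL_RADIUS_FT 0.3f  // placeholder wheel radius
#define WHEELBASE_FT    0.6f  // placeholder distance between the wheels

static float x_ft = 0.0f, y_ft = 0.0f, bearing_rad = 0.0f;

// Update pose from the change in each wheel's angle (radians) since last call.
void update_pose(float dThetaLeft, float dThetaRight) {
    float dLeft   = WHEEL_RADIUS_FT * dThetaLeft;
    float dRight  = WHEEL_RADIUS_FT * dThetaRight;
    float dCenter = 0.5f * (dLeft + dRight);         // distance traveled
    bearing_rad  += (dRight - dLeft) / WHEELBASE_FT; // heading change
    x_ft += dCenter * cosf(bearing_rad);
    y_ft += dCenter * sinf(bearing_rad);
}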
Thank you to Professor Dan Block and Teaching Assistant Li-Wei Shih.