The purpose of our project was to create a line-following robot car that was activated by a ‘passcode’ of various notes. A series of three notes must be played and detected by the microphone on our robot car in order for the robot to begin line following. A more detailed description of these processes can be found below.
A Fast Fourier Transform (FFT) was used to detect the three notes. The microphone, read through ADC B, sampled 1024 data points at 10,000 Hz. 'Ping-pong' buffering, or multiple buffering, let us collect and process data simultaneously: we store 512 data points in one buffer and set a flag when it fills; while that buffer is being processed, the second buffer collects the next 512 data points. Processing each full buffer yields the maximum-power frequency of the detected tone and its FFT index.

A state-machine case structure then sequences the three-tone 'passcode', the line following, and the celebratory dance; a total of 7 states carry this out. State 1 is the default state, and its purpose is to detect the frequency of note A6. When a detected frequency lies within a Nyquist index range of +/-10 of A6, State 2 is activated; otherwise the machine stays in State 1. State 2 acts as a time buffer, lasting about 2-3 seconds, to give the user time to play the second note before moving into State 3. State 3 works like State 1 but for the second note, D7: if the note isn't detected, we return to State 1 and the process restarts; if D7 is detected, we move to State 4, another time buffer before State 5. State 5 looks for the third note, G7, and moves to State 6 if it is detected; otherwise we restart at State 1. State 6 initiates the line-following protocol described in the next paragraph. When the camera no longer detects blobs within its FOV (hence the centroid is null), we move to State 7, our celebratory dance. After the dance is complete, we reset back to State 1.
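The transition logic above can be sketched as a single C function. This is a minimal illustration, not our actual LaunchPad code: the bin constants assume a 1024-point FFT at 10 kHz (bin width ≈ 9.77 Hz), which places A6 (1760 Hz) near bin 180, D7 (~2349 Hz) near bin 241, and G7 (3136 Hz) near bin 321; the function and parameter names are hypothetical.

```c
#include <stdlib.h>

/* Illustrative FFT bin indices for the passcode notes, assuming a
 * 1024-point FFT sampled at 10 kHz (bin width = 10000/1024 ~ 9.77 Hz):
 * A6 = 1760 Hz -> ~bin 180, D7 ~ 2349 Hz -> ~bin 241, G7 = 3136 Hz -> ~bin 321. */
#define BIN_A6  180
#define BIN_D7  241
#define BIN_G7  321
#define BIN_TOL 10   /* the +/-10 Nyquist-index window from the report */

static int near_bin(int bin, int target) {
    return abs(bin - target) <= BIN_TOL;
}

/* One step of the 7-state machine. `bin` is the max-power FFT index from
 * the most recently processed ping-pong buffer; `timer_done` signals that
 * the 2-3 s grace period of a time-buffer state has elapsed; `blob_seen`
 * is nonzero while the camera still detects the tape line. */
int next_state(int state, int bin, int timer_done, int blob_seen) {
    switch (state) {
    case 1: return near_bin(bin, BIN_A6) ? 2 : 1; /* wait for A6 */
    case 2: return timer_done ? 3 : 2;            /* grace period */
    case 3: return near_bin(bin, BIN_D7) ? 4 : 1; /* D7 or restart */
    case 4: return timer_done ? 5 : 4;            /* grace period */
    case 5: return near_bin(bin, BIN_G7) ? 6 : 1; /* G7 or restart */
    case 6: return blob_seen ? 6 : 7;             /* follow until no blob */
    case 7: return 1;                             /* dance done: reset */
    default: return 1;
    }
}
```

Keeping the transitions in one pure function makes each state's exit condition easy to check against the description above.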
Once the robot correctly detects the series of three notes, the line-following protocol is activated. The majority of this protocol runs in LabVIEW on the myRIO. A block diagram was constructed using the LabVIEW Vision Acquisition and Vision Assistant blocks to process images gathered by a USB camera, which was connected to the myRIO and pointed at the floor in front of the bot. The camera resolution was set to 160 x 120 at 30 fps. Each captured image was thresholded into a binary image, with HSV hue ranges set to detect only the fluorescent-pink color of the tape lines. Blob detection was then used to identify the largest region of interest in the camera's FOV. The centroid (X and Y coordinates with respect to the camera's image resolution) and area of the largest blob were then sent to the TI LaunchPad's serial port via LabVIEW's UART block. From there, the C code performs a simple error calculation between our desired centroid and the centroid readings sent over from the myRIO; since the camera was set to 160 x 120 resolution, our desired centroid was (80, 60). Using this error, a turn command was continuously modified to keep the robot centered along the tape line, while a reference speed ("VRef" as referred to in our code and video explanation) of 0.3 ft/s gave the robot constant forward motion.
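The error-to-turn step can be sketched as simple proportional control. This is an illustrative sketch, not our tuned implementation: the gain `KP_TURN`, the saturation limit, and the function names are hypothetical; only the 160 x 120 resolution, the desired x-centroid of 80, and VRef = 0.3 ft/s come from the report.

```c
/* Proportional steering from the blob centroid, assuming a 160x120 image
 * (desired centroid x = 80). KP_TURN and TURN_MAX are illustrative
 * placeholders, not the gains from our final code. */
#define DESIRED_X 80.0f
#define KP_TURN   0.02f   /* hypothetical proportional gain */
#define TURN_MAX  1.0f    /* hypothetical turn saturation */
#define VREF      0.3f    /* ft/s forward reference speed from the report */

float turn_command(float centroid_x) {
    float error = DESIRED_X - centroid_x; /* + when the line is left of center */
    float turn  = KP_TURN * error;
    if (turn >  TURN_MAX) turn =  TURN_MAX; /* saturate the command */
    if (turn < -TURN_MAX) turn = -TURN_MAX;
    return turn;
}

/* Mix the turn command with the constant forward speed: slowing one wheel
 * and speeding the other steers the robot back over the tape line. */
void wheel_speeds(float centroid_x, float *vleft, float *vright) {
    float turn = turn_command(centroid_x);
    *vleft  = VREF - turn;
    *vright = VREF + turn;
}
```

With this sign convention, a centroid left of center (x < 80) produces a positive turn command, speeding the right wheel and steering the robot left toward the line.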
After completion of line following (hence when no regions of interest are being detected), the robot completes a celebratory spin cycle. Trust me, the robot is just as happy as we are that everything works.
We would like to give a huge thank you to Professor Daniel Block and TA Siyuan Chen for their help all semester and especially for their help with this project. We would also like to thank Texas Instruments for their generous donation of our LaunchPad boards. Video demonstrations and images of our final robot design can be found below, alongside our C code and LabView VI.