KEYWORDS: Image processing, Autonomous vehicles, Cameras, Detection and tracking algorithms, Unmanned vehicles, RGB color model, Mobile robots, Algorithm development, Image segmentation, Control systems
This paper presents the implementation of a lane detection and tracking algorithm for the autonomous navigation of an Ackermann-steering mobile robot. The proposed implementation employs an RGB camera mounted on the robot; the image information is processed through the lane detection and tracking algorithm to estimate the robot's present and future position within the lane. This information is used to determine the wheel orientation required to steer the robot within the lane. A Raspberry Pi serves as the primary logic controller, processing the images received from the RGB camera. The Ackermann-steering mobile robot performs steering and navigation with a proportional-integral-derivative (PID) controller that manages the steering orientation. Experimental results obtained with a physical implementation of the Ackermann-steering mobile robot are presented to validate the approach.
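As an illustration of the control scheme described above, the sketch below shows how a PID controller can map the robot's lateral offset from the lane center (as estimated by the lane detection algorithm) to a steering angle. This is a minimal hypothetical example, not the authors' implementation; the gains, the `max_angle` limit, and the error units are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' code): a PID controller that maps
# the lateral offset from the lane center to an Ackermann steering angle.
class PIDSteering:
    def __init__(self, kp, ki, kd, max_angle=30.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_angle = max_angle      # assumed mechanical steering limit (degrees)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """error: lane-center offset (e.g. pixels); dt: time step in seconds."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        angle = (self.kp * error
                 + self.ki * self.integral
                 + self.kd * derivative)
        # Clamp the command to the steering range of the Ackermann platform.
        return max(-self.max_angle, min(self.max_angle, angle))


# Example: the robot is detected 10 units to the right of the lane center.
pid = PIDSteering(kp=0.5, ki=0.01, kd=0.1)
steer = pid.update(error=10.0, dt=0.05)
```

In a closed-loop setup, `update` would be called once per processed camera frame, with `dt` set to the frame period; the returned angle would then be sent to the steering servo.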