Description
The use of autonomous vehicles is a widely adopted approach to enhancing productivity in the agricultural sector. Various platforms have been developed by multiple research groups, employing diverse methodologies for the mechanical design and for the systems that handle autonomous navigation and interaction with the environment. Projects focused on assessing the characteristics of the crops visited by the robot incorporate various sensors (such as LiDARs and encoders) as well as different types of cameras to capture images of the agricultural landscape. Although these experimental robotic platforms feature embedded processors for the Supervisory and Autonomous Navigation Systems, computational cost remains a limiting factor in deploying these autonomous devices. Their operating time is typically constrained by battery capacity, and integrating complex computational systems into accessory components leads to high energy consumption.
This work presents an alternative solution: a dedicated hardware device for the navigation of agricultural mobile robots. The device identifies navigable and non-navigable areas and estimates the vehicle's tilt angle relative to the planting line. With an accurate estimate of this angle, the Vehicle Navigation System can adjust the wheel steering in real time, aligning the vehicle with the planting line or with the navigable area defined by the road between planting rows. This angle estimation, achieved with a computationally lightweight system, thus provides the platform with high-quality information that would otherwise require fusing data from multiple embedded sensors at significant computational cost.
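As a rough illustration of how such an angle estimate could feed a steering loop, the sketch below applies a simple proportional correction. The gain, steering limit, and function names are hypothetical assumptions for illustration, not taken from the work.

```python
# Minimal sketch, assuming a hypothetical interface: a proportional steering
# correction driven by the tilt angle reported by the hardware device.
# K_P and MAX_STEER_DEG are illustrative tuning values, not from the work.
K_P = 0.8             # proportional gain (hypothetical)
MAX_STEER_DEG = 25.0  # mechanical steering limit in degrees (hypothetical)

def steering_command(tilt_deg: float) -> float:
    """Map the tilt angle relative to the planting line to a bounded
    steering correction: steer against the measured deviation."""
    cmd = -K_P * tilt_deg
    return max(-MAX_STEER_DEG, min(MAX_STEER_DEG, cmd))

# Example: a 5-degree drift to the right yields a left correction.
print(steering_command(5.0))   # -4.0
```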
The project is based on a method for extracting local visual features from color images captured by a video camera. The circuit was implemented using a development kit based on a low-cost FPGA and comprises three stages: classification, morphological processing, and navigation line extraction. In the first stage, pixels are classified with the HSL color model into categories representing navigable and non-navigable areas.

The subsequent morphological processing stage performs filtering, grouping, and edge extraction. It uses a multi-stage serial pipeline built from fundamental mathematical morphology operations, such as erosion and dilation, to process images in real time. Each operation is implemented as a block whose inputs are the binary image from the classification circuit and the video synchronization signals; its outputs are the processed binary image and the corresponding synchronization signals, delayed relative to the inputs by the block's pixel-processing latency. This design allows the processing units to be cascaded into a sequence of basic morphological operations that implements the proposed algorithm. Line extraction is then carried out using linear regression. Together, these stages enable real-time image processing for the autonomous navigation of mobile robots in agricultural environments.
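To make the classification stage concrete, here is a minimal software sketch of HSL-based pixel labeling. The hue and saturation thresholds are hypothetical; the work does not state its exact decision rule here.

```python
# Minimal sketch of HSL-based pixel classification. The thresholds below
# are hypothetical placeholders for whatever rule the hardware implements.
import colorsys

def is_navigable(r: int, g: int, b: int) -> bool:
    """Label a pixel: True = navigable ground, False = non-navigable (crop)."""
    # colorsys uses the HLS ordering (hue, lightness, saturation) in [0, 1].
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    # Hypothetical rule: sufficiently saturated green hues are vegetation.
    vegetation = 0.20 < h < 0.45 and s > 0.25
    return not vegetation

# Example: a green crop pixel vs. a brown soil pixel.
print(is_navigable(60, 160, 50))    # False (vegetation)
print(is_navigable(150, 110, 70))   # True  (soil)
```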
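The cascading of blocks can be mimicked in software as a chain of basic operators, as in the sketch below. It uses scipy's erosion and dilation; the specific sequence shown (an opening followed by a closing) is an illustrative choice, not necessarily the work's exact morphological algorithm.

```python
# Software analogue of the cascaded morphological blocks: each stage consumes
# the previous stage's binary image, mirroring the serial hardware pipeline.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

STRUCT = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

def run_pipeline(binary_img, stages):
    """Apply each morphological stage in order, like cascaded hardware units."""
    out = binary_img
    for stage in stages:
        out = stage(out)
    return out

pipeline = [
    lambda im: binary_erosion(im, STRUCT),   # opening: remove speckle noise...
    lambda im: binary_dilation(im, STRUCT),  # ...then restore region size
    lambda im: binary_dilation(im, STRUCT),  # closing: bridge small gaps...
    lambda im: binary_erosion(im, STRUCT),   # ...then restore region size
]

mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 10:22] = True                     # a navigable region
mask[4, 4] = True                            # isolated noise pixel
clean = run_pipeline(mask, pipeline)
# Edge extraction in the morphological style: the region minus its erosion.
edges = clean & ~binary_erosion(clean, STRUCT)
```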
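Finally, a plausible software rendering of the line-extraction stage: fit the per-row centroids of the navigable mask by least squares and derive the tilt angle from the slope. The exact regression variables used in the hardware are not specified here, so treat this as one possible formulation.

```python
# Sketch of navigation-line extraction by linear regression: for each image
# row, take the centroid column of the navigable pixels, fit column as a
# linear function of row, and read the tilt angle off the slope.
import numpy as np

def navigation_line(mask):
    """mask: 2-D bool array, True where navigable.
    Returns (slope, intercept, tilt angle in degrees from image vertical)."""
    rows, cols = [], []
    for r in range(mask.shape[0]):
        xs = np.flatnonzero(mask[r])
        if xs.size:                       # skip rows with no navigable pixels
            rows.append(r)
            cols.append(xs.mean())        # centroid column of the free area
    slope, intercept = np.polyfit(rows, cols, 1)  # least-squares line fit
    angle = np.degrees(np.arctan(slope))  # 0 deg = aligned with the row
    return slope, intercept, angle
```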
The application of the HSL color model enhanced the system's performance under varying lighting conditions, and the morphological processing architecture delivered real-time filtering and edge extraction. During development testing, up to 40 morphological processing units were cascaded. The proposed architecture for the orientation line extraction circuit enabled real-time calculation of the parameters that define the navigable area and the central line for vehicle orientation, making the system suitable for a wide range of agricultural mobile robot platforms.