Part 2 – Surviving Real-World Elements

In Part 1 of this blog post series, we discussed the different perception technologies to consider when building an autonomous or semi-autonomous machine for outdoor use. Key to success is a robust perception system that ensures accurate localization, even when GPS signals are jammed, weak, or unavailable. While a range of technologies is available for building localization solutions, each has its limitations and failure cases. Combining multiple technologies and modalities is essential to achieve comprehensive coverage and reliability.
In this blog post we will delve into the challenging environmental conditions that outdoor machines operate under, and see how visual perception can be leveraged to develop a localization solution that ensures reliable operation in diverse and often unpredictable environments.
Real-World Environmental Challenges
Outdoor environments are characterized by changing environmental conditions. A localization solution must be robust enough to cope with these conditions by understanding and adapting to them.
Below are a few examples of the conditions we encountered while testing the RGo Perception Engine with customer machines outdoors:
Rain or dust – both can degrade the image captured by the perception system and be misinterpreted as moving objects. They can change the color and reflectivity of the ground or of objects in the scene, block part of the camera's view, or significantly impair Lidar systems. We had to improve some of our algorithms to detect and ignore the resulting false measurements.
Snow – has similar effects to rain and dust while falling, but once it accumulates on the ground it can completely change the appearance of the ground and other objects.
Wind – can move lightweight items (leaves or branches, for example) when mild, and larger objects when strong. It can also severely disrupt depth-based sensors such as 3D Lidar if not handled well.
Changes in lighting conditions throughout the day and year – light intensity and the relative position of the sun vary during the day and across seasons. This can make the same place look quite different to the sensors. The core algorithms must be invariant to light intensity and able to cope with direct light and shadows.
Changes in landscape due to seasons – objects like trees may look different throughout the year. This can confuse the perception solution and limit its ability to identify visual landmarks.
Uneven surfaces – natural outdoor environments tend to be uneven and bumpy, and often have slopes and hills. Handling slopes is a major challenge for 2D perception systems such as laser scanners, and is not trivial even for 3D Lidar.
Vibrations and wheel slippage – both are more common and intense outdoors, especially on uneven, wet, or muddy surfaces. They can significantly degrade the quality of the data provided by different types of sensors and cause errors if the system is not designed to handle them.
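RGo has not published the algorithms it uses to "detect and ignore false measurements" from rain, dust, and snow, but one common building block is statistical outlier rejection on raw sensor returns. The sketch below is purely illustrative (the function name and thresholds are our own, not RGo's): it drops Lidar range readings that deviate sharply from the median of their local neighborhood, a pattern typical of a raindrop or dust particle briefly intersecting the beam.

```python
import statistics

def reject_spurious_returns(ranges, window=5, max_jump=0.5):
    """Drop Lidar range readings (meters) that deviate sharply from
    the median of their local window -- a pattern typical of rain or
    dust briefly intersecting the beam."""
    half = window // 2
    kept = []
    for i, r in enumerate(ranges):
        lo = max(0, i - half)
        win = ranges[lo:i + half + 1]  # window includes the point itself
        # A lone short return amid consistent neighbors is likely noise.
        if abs(r - statistics.median(win)) <= max_jump:
            kept.append(r)
    return kept
```

A real implementation would operate on full point clouds and tune the window and threshold per sensor, but the principle, trusting local consensus over any single return, is the same.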
Multi-Modality - Key for Facing the Elements
Experience with various machines across different outdoor use cases and environmental conditions has shown that combining modalities, by using several types of sensors and algorithms, improves overall robustness. Done correctly, multi-modality ensures that the failure of one modality, due to weather or environmental conditions, will not bring down the entire system.
RGo Perception Engine combines Visual SLAM with GNSS-RTK, inertial sensors, and wheel odometry within an advanced sensor fusion engine to provide localization information that meets the highest performance requirements. We run multiple algorithms independently, ensuring the solution detects and overcomes edge cases that would otherwise cause failure. In addition, the Perception Engine uses a Learning Vision System that adds visual anchoring to further improve performance with an independent, error-bound location estimate, without requiring any infrastructure or installation.
A powerful multi-modality sensor fusion engine further improves the reliability and performance of the Perception Engine by constantly monitoring and evaluating the confidence level of the inputs from the different modalities, enabling the localization solution to fuse them robustly. For instance, when the floor is slippery, the sensor fusion engine ignores wheel encoder data and gives more weight to vision, GNSS-RTK, and inertial odometry.
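The wheel-slip example above can be sketched in a few lines. This is not RGo's implementation (production fusion engines typically use a Kalman-style filter over full state and covariance); it is a minimal, hypothetical illustration of confidence-weighted fusion, where zeroing a modality's confidence excludes it from the combined estimate.

```python
def fuse_estimates(estimates):
    """Fuse independent position estimates as a confidence-weighted
    average. `estimates` maps a modality name to (x, y, confidence);
    a confidence of 0 excludes that modality entirely."""
    total = sum(conf for _, _, conf in estimates.values())
    if total == 0:
        raise ValueError("no modality reports usable confidence")
    x = sum(ex * conf for ex, _, conf in estimates.values()) / total
    y = sum(ey * conf for _, ey, conf in estimates.values()) / total
    return x, y

def on_wheel_slip(estimates):
    """When slip is detected, zero out wheel-odometry confidence so the
    fusion relies on vision, GNSS-RTK, and inertial odometry instead."""
    ex, ey, _ = estimates["wheel_odometry"]
    estimates["wheel_odometry"] = (ex, ey, 0.0)
    return estimates
```

With slip detected, a wildly wrong wheel-odometry estimate contributes nothing to the fused position, while the remaining modalities still agree on a sensible answer.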
Real-World Examples
Seamless navigation when transitioning between extremely different environments -
One of our customers tested the Perception Engine by installing it in a car and driving into and out of an underground parking lot. When the RTK GPS signal was completely lost, the Perception Engine handled it flawlessly by leveraging sensor fusion and multi-modality consensus. Similar cases exist for tractors operating in orchards. When driving under the tree canopy, GPS quality is poor, with fixes sometimes tens of meters away from the real location, but the Perception Engine retains accurate position estimates based on the other sensors.
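Detecting that a GPS fix is tens of meters off requires comparing it against what the other modalities predict. A standard technique for this (again, our illustrative sketch, not RGo's published method) is innovation gating: a fix that lands too many standard deviations from the fused prediction is treated as degraded, for example by multipath under a canopy, and excluded.

```python
def gnss_is_trustworthy(predicted_xy, gnss_xy, gnss_std, gate_sigmas=3.0):
    """Gate a GNSS fix against the fused prediction: if the fix lies
    more than `gate_sigmas` standard deviations from where the other
    modalities place the vehicle, reject it as degraded (e.g. canopy
    multipath) instead of letting it pull the pose estimate away."""
    dx = gnss_xy[0] - predicted_xy[0]
    dy = gnss_xy[1] - predicted_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance <= gate_sigmas * gnss_std
```

A 30 m jump against a 1 m reported uncertainty fails the gate, so the system coasts on vision, inertial, and wheel odometry until GNSS fixes become consistent again.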
A solution for all seasons -
One of the most frequent questions we encounter regarding vision-based navigation concerns our ability to maintain high performance despite seasonal changes. Trees, ground, and lighting conditions vary throughout the year. Consequently, visual anchoring that simply compares sensor input to known images of a pre-mapped area is ineffective. The Perception Engine solves this challenge with a unique learning vision algorithm that enables the robot to automatically adapt to significant changes in the environment.

Keeping high performance with low visibility -
Poor visibility is another known problem. With the right algorithms, we can operate in rain, dust, and dynamic lighting conditions that often break other solutions. Consider the parking lot scenario once more. Transitioning from a low-light, underground environment to a bright, outdoor setting can disrupt visual tracking and cause pose jumps. However, with proper sensor control and software enhancements, the Perception Engine handles this transition smoothly.
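"Proper sensor control" during a dark-to-bright transition usually means actively managing camera exposure so the image never saturates or blacks out between frames. The sketch below is a hypothetical proportional auto-exposure step (the function and its parameters are our own illustration): it nudges exposure toward the value that would restore a target mean brightness, while limiting the per-frame change so the visual tracker sees a gradual transition rather than a sudden jump.

```python
def update_exposure(current_exposure, mean_brightness,
                    target=128.0, alpha=0.2,
                    min_exp=0.05, max_exp=30.0):
    """One proportional auto-exposure step. Exposure is in ms and
    mean_brightness is the frame's average pixel value (0-255).
    `alpha` bounds how fast exposure may change per frame."""
    if mean_brightness <= 0:
        return max_exp  # completely dark frame: open up fully
    desired = current_exposure * target / mean_brightness
    # Blend toward the desired exposure rather than jumping to it,
    # so consecutive frames stay trackable for visual SLAM.
    new_exp = (1 - alpha) * current_exposure + alpha * desired
    return min(max(new_exp, min_exp), max_exp)
```

Driving out of the garage, an overexposed frame (mean brightness near 250) pulls exposure down a step at a time instead of halving it at once, which keeps enough visual features matchable across the transition.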
Key Takeaways
Building autonomous or semi-autonomous machines for outdoor use presents a complex challenge that demands advanced technology and expertise. Whether in agriculture, construction, logistics or defense, these machines must operate reliably in diverse and often unpredictable environments. Key to their success is a robust perception system that ensures accurate localization, even when GPS signals are weak or unavailable.
Various technologies are available for building a localization solution, each with its own limitations and potential failure cases. Combining multiple technologies and modalities is essential for achieving comprehensive coverage and reliability.
RGo’s artificial perception software platform leverages advanced AI, sensor fusion, and learning algorithms to deliver a solution that works anywhere, anytime. Tested across a variety of outdoor environments, RGo’s platform provides exceptional accuracy, adaptability to environmental changes, and cost-effectiveness, making it the ideal choice for companies aiming to stay ahead of the competition.
Want to learn more? Contact us today