
Robot Localization: From HMMs to Particle Filters


    By CamelEdge

    Updated on Mon Mar 17 2025


    Robot localization is a fundamental problem in robotics, where a robot needs to determine its position within a known environment. In the previous blog post, we discussed Hidden Markov Models (HMMs) as a theoretical framework for addressing this challenge.

    In this post, we will explore how the Hidden Markov Model (HMM) can be applied to the problem of robot localization.

    The Forward Algorithm for Location Estimation

    The forward algorithm provides a recursive method for computing the belief state of a robot—the probability distribution over possible locations given the history of observations and actions. It consists of two main steps:

    Prediction Step

    In this step, we compute the probability distribution over the next state based on our current estimate of the state (belief state) and the robot's motion model (transition model):

    P(X_{t+1} | e_{1:t}) = \sum_{X_t} P(X_{t+1} | X_t) \cdot P(X_t | e_{1:t})

    This equation sums over all possible previous states, weighting each by its probability, to predict the distribution over the next state.

    Update Step

    Once we receive a new observation, we update our prediction using the sensor model:

    P(X_{t+1} | e_{1:t+1}) = \alpha \cdot P(e_{t+1} | X_{t+1}) \cdot P(X_{t+1} | e_{1:t})

    Where α is a normalization factor ensuring the probabilities sum to 1.
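
    To make the recursion concrete, here is a small worked example with made-up numbers. Consider a two-cell corridor with cells A and B, where the robot stays put with probability 0.3 and moves to the other cell with probability 0.7, and the current belief is P(A) = 0.8, P(B) = 0.2. The prediction step gives:

    P(A') = 0.3 · 0.8 + 0.7 · 0.2 = 0.38, P(B') = 0.7 · 0.8 + 0.3 · 0.2 = 0.62

    If the sensor now reads 'n' with P(n | A) = 0.9 and P(n | B) = 0.2, the update step gives unnormalized values 0.9 · 0.38 = 0.342 and 0.2 · 0.62 = 0.124; dividing by their sum (α = 1 / 0.466) yields P(A) ≈ 0.73 and P(B) ≈ 0.27.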

    Here's how the implementation looks in code:

    # assume the robot object implements the transition and observation models
    def forward(robot, belief_state, observation):
        all_free_pos = robot.get_free_positions()  # all possible states the robot can occupy
        
        ## Prediction: propagate the belief through the motion (transition) model
        pred = {}
        for x1 in all_free_pos:
            pred[x1] = 0
            for x0 in all_free_pos:
                pred[x1] += robot.transition_model(x0, x1) * belief_state[x0]
        
        ## Filtering: weight each predicted state by the sensor model
        filt_prob = {}
        for x1 in all_free_pos:
            filt_prob[x1] = robot.observation_model(x1, observation) * pred[x1]
        
        # Normalize the filtered distribution so it sums to 1
        s = sum(filt_prob.values())
        belief_state = {key: value / s for key, value in filt_prob.items()}
        
        return belief_state
    

    This implementation iterates through all possible positions in the maze, calculating the predicted probability for each position based on the previous belief state. Then it updates these probabilities based on the current observation and normalizes the result.
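
    To see the function in context, here is a minimal sketch of a driving loop. The uniform initialization and the helpers robot.get_observation() and num_steps are assumptions for illustration, not part of the original code:

    free_positions = robot.get_free_positions()
    
    # Start with a uniform belief: the robot could be in any free cell
    belief_state = {pos: 1.0 / len(free_positions) for pos in free_positions}
    
    for step in range(num_steps):  # num_steps: hypothetical loop bound
        observation = robot.get_observation()  # hypothetical sensor read, e.g. ('n', 'f', 'f', 'n')
        belief_state = forward(robot, belief_state, observation)
    
    # The most likely position is the cell with the highest posterior probability
    estimate = max(belief_state, key=belief_state.get)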

    Limitations of Exact Inference

    While the forward algorithm provides an exact solution to the localization problem, it faces significant limitations when applied to real-world scenarios:

    1. Computational Complexity: The algorithm requires summing over all possible states for each update, resulting in O(n²) complexity, where n is the number of possible states. In large environments with fine-grained position discretization, this becomes computationally prohibitive.

    2. Memory Requirements: Maintaining a probability distribution over all possible states requires substantial memory, especially in large 2D or 3D environments.

    3. Continuous State Spaces: Real-world robotics often involves continuous state spaces, which cannot be directly represented using the tabular approach of the forward algorithm.

    These limitations necessitate approximation methods for practical robot localization implementations.

    Particle Filters: An Efficient Alternative

    Particle filters offer an elegant solution to the scalability issues of exact inference by using a finite set of weighted samples (particles) to approximate the belief state. Instead of maintaining probabilities for all possible states, we maintain a representative sample of likely states.

    Key Components of Particle Filters

    1. Particles: Each particle represents a possible robot state (position). The collection of particles approximates the probability distribution over states.

    2. Prediction: Particles are moved according to the robot's motion model, introducing appropriate randomness to account for motion uncertainty.

    3. Weighting: Each particle is assigned a weight based on how well it explains the current observation using the sensor model.

    4. Resampling: Particles are resampled with replacement according to their weights, maintaining a constant number of particles while focusing computational resources on high-probability regions.

    Here's the implementation from our code:

    import random
    
    def particle_filter(robot, particles, observation):
        num_particles = len(particles)
        
        # Start with uniform weights for each particle
        weights = [1.0 / num_particles] * num_particles
        
        # Predict new particle positions using the motion model
        predicted_particles = [robot.sample_step(particle) for particle in particles]
        
        # Update weights based on how well each particle explains the observation
        for i, particle in enumerate(predicted_particles):
            weights[i] *= robot.observation_model(particle, observation)
        
        # Normalize weights
        total_weight = sum(weights)
        if total_weight == 0:
            # If all weights are zero, reinitialize particles uniformly over
            # the free cells to recover from filter divergence
            predicted_particles = [random.choice(robot.maze.get_free_positions())
                                   for _ in range(num_particles)]
            weights = [1.0 / num_particles] * num_particles
        else:
            weights = [w / total_weight for w in weights]
        
        # Resample particles (with replacement) according to their weights
        new_particles = random.choices(predicted_particles, weights=weights, k=num_particles)
        
        return new_particles
    

    This implementation demonstrates how particles are predicted, weighted according to observations, and resampled to form the next generation of particles. It also includes a recovery mechanism when all particles receive zero weight, preventing filter divergence.
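
    As with the forward algorithm, a minimal driving loop might look like the sketch below; the initialization and the robot.get_observation() helper are again assumptions for illustration:

    from collections import Counter
    import random
    
    num_particles = 20
    
    # Initialize particles uniformly over the free cells
    particles = [random.choice(robot.maze.get_free_positions())
                 for _ in range(num_particles)]
    
    for step in range(num_steps):  # num_steps: hypothetical loop bound
        observation = robot.get_observation()  # hypothetical sensor read
        particles = particle_filter(robot, particles, observation)
    
    # A simple point estimate: the most common particle position
    # (assumes positions are hashable, e.g. (row, col) tuples)
    estimate = Counter(particles).most_common(1)[0][0]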

    Advantages of Particle Filters

    1. Scalability: The computational complexity scales with the number of particles, not the size of the state space.

    2. Adaptability: Resources are automatically focused on high-probability regions of the state space.

    3. Representation Power: Can represent multimodal distributions and handle non-linear dynamics and non-Gaussian noise.

    4. Simplicity: Conceptually straightforward and relatively easy to implement.

    The Maze Problem Implementation

    To illustrate these algorithms in a practical scenario, let's apply them to a robot navigating a maze. The challenge: the robot must determine its position on a discrete grid using only imprecise sensor data.

    The state variable X_t denotes the robot's position on a discrete grid. The robot can move to any adjacent empty cell with equal likelihood. The robot is equipped with sensors on all four sides that provide an approximate measure of the distance to the nearest wall. The sensor outputs are 'n' for near and 'f' for far. The sensor data is noisy: the probability of returning 'n' increases as the distance to the wall decreases, and conversely, the probability of 'f' increases as the distance grows.
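
    As an illustration of what such a noisy sensor model might look like in code, the sketch below makes the probability of reading 'n' decay linearly with distance; the exact probabilities and the distance_to_wall helper are assumptions, not taken from the original implementation:

    def observation_model(robot, position, observation, d_max=10):
        # observation holds one reading per side, e.g. ('n', 'f', 'f', 'n').
        # Assuming sensor noise is independent across the four sides, the
        # likelihood of the full observation is the product of per-side terms.
        prob = 1.0
        for side, reading in zip(['N', 'E', 'S', 'W'], observation):
            d = robot.distance_to_wall(position, side)  # hypothetical helper
            p_near = max(0.1, 1.0 - d / d_max)  # 'n' is likelier when the wall is close
            prob *= p_near if reading == 'n' else 1.0 - p_near
        return prob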

    1. Exact Inference: Maintains a probability distribution over all possible locations and updates it using the forward algorithm. The video illustrates the maze with the robot depicted in red; the green shading represents the probability distribution over the robot's estimated location.
    2. Particle Filter: Employs a set of particles (20 in this example) to estimate the robot's position. The video shows that the particles start out uniformly distributed, meaning the robot could be anywhere with equal probability. As observations arrive, the particle filter refines the estimate by resampling the particles based on the new data.

    Practical Considerations

    When implementing robot localization in real-world scenarios, several factors should be considered:

    1. Number of Particles: Too few particles may result in poor approximation, while too many increase computational cost. Finding the right balance is crucial.

    2. Sensor Model Accuracy: The accuracy of the sensor model significantly impacts localization performance. More accurate sensor models lead to faster convergence.

    3. Particle Diversity: Particle filters can suffer from sample impoverishment, where all particles converge to a single location. Techniques like adding random noise or adaptive resampling help maintain diversity (see the sketch after this list).

    4. Initialization: Starting with a uniform distribution of particles ensures the algorithm can recover from uncertain initial conditions.
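
    As one example of the adaptive resampling mentioned in point 3, a common heuristic is to resample only when the effective sample size (ESS) drops below a threshold, which slows down sample impoverishment. The sketch below assumes the weights have already been normalized and reuses the random module imported above:

    def effective_sample_size(weights):
        # ESS = 1 / sum(w_i^2): equals num_particles for uniform weights and
        # approaches 1 when a single particle carries nearly all the weight
        return 1.0 / sum(w * w for w in weights)
    
    # Resample only when fewer than half the particles are "effective"
    if effective_sample_size(weights) < num_particles / 2:
        particles = random.choices(particles, weights=weights, k=num_particles)
        weights = [1.0 / num_particles] * num_particles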

    Conclusion

    Robot localization through probabilistic methods like HMMs provides a robust way for robots to determine their position in known environments. While exact inference methods like the forward algorithm offer theoretical elegance, particle filters provide a practical and efficient alternative for real-world applications.

    The maze example demonstrates how both methods can successfully track a robot's position using only noisy sensor readings. Particle filters, in particular, offer an excellent balance between computational efficiency and localization accuracy, making them a popular choice in robotics and autonomous systems.