At the beginning of the month, we hosted a workshop on Simultaneous Localization and Mapping (SLAM) as it pertains to robotics. The workshop was held at our Kharkiv office and was run by Data Science Engineer Kostiantyn Isaienkov. People from several other companies came to our office to participate in Kostiantyn’s workshop.
What is SLAM?
Simultaneous Localization and Mapping is a technique used in robotics and Artificial Intelligence to assist with navigation in a previously unknown static environment while building and updating a map of the surrounding area. SLAM is typically applied under the following conditions:
- The robot must be autonomous (operating without human intervention)
- There can be no prior knowledge about the mapping area
- There is no opportunity to place beacons (including GPS-denied environments)
- The robot needs to know its own position
Workshop Details
The workshop covered the basic aspects of SLAM, from features to solutions to algorithms. The following is an outline of how the workshop was conducted:
- SLAM Features
- SLAM Task Formulation
- SLAM Difficulties
- SLAM Algorithms
- Visual SLAM Features and Summary
- Technology Stack for SLAM Simulator
- Implementation of the FastSLAM Algorithm
SLAM Features
The SLAM algorithm creates a map from natural scenery, using laser or sonar sensing to detect wall segments, planes, and corners, and vision to detect salient point features, lines, and textured surfaces. The algorithm allows reliable matching when the features are distinctive and easily recognizable from different viewpoints.
SLAM Task Formulation
Input:
- A time sequence of measurements made as the robot moves through an initially unknown environment
- The controls applied to the robot (e.g., odometry)
- Observations of nearby features
Output:
- An estimate of the robot’s position in the coordinate system of the map
- An update to the map of the environment
- Path of the robot (optional)
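In probabilistic terms (a standard formulation, not something specific to the workshop slides), this amounts to estimating the posterior p(x_{1:t}, m | z_{1:t}, u_{1:t}): the joint distribution over the robot’s path x_{1:t} and the map m, given all observations z_{1:t} and controls u_{1:t}.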
SLAM Difficulties
The SLAM problem is very complex and carries uncertainty at every level. It spans many settings, such as autonomous and collaborative robots and mapping in multiscale environments, each with its own approaches. One of its most challenging aspects is circular: a map is needed to localize the robot, yet the robot’s position is needed to create and update that map. The system is also very delicate, and even the smallest errors quickly accumulate over time. This is why SLAM is considered one of the most fundamental problems on the path to truly autonomous mobile robots.
SLAM Algorithms
Scan Matching:
Solution: Reduce scan-map to a point cloud and run the Iterative Closest Point (ICP) algorithm
Usage: Mostly for map correction in multi-agent SLAM with independent agents
Iterative Closest Point (ICP):
- For each point in the source cloud, find the closest point in the reference cloud
- Estimate the combination of rotation and translation that best aligns each source point with the match found in the previous step, using a root-mean-square distance minimization technique
- Transform the source cloud using the obtained transformation
- Iterate until a convergence threshold is reached
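To make the steps concrete, here is a minimal 2D version of the ICP loop, written against NumPy from the stack listed later in this post. The brute-force nearest-neighbor search and the SVD-based alignment are a common textbook formulation, not the workshop’s actual code:

```python
import numpy as np

def icp(source, reference, iterations=20, tol=1e-6):
    """Align `source` (N, 2) to `reference` (M, 2) with the ICP steps above."""
    src = source.copy()
    prev_error = np.inf
    for _ in range(iterations):
        # 1. For each source point, find the closest reference point.
        dists = np.linalg.norm(src[:, None, :] - reference[None, :, :], axis=2)
        matches = reference[np.argmin(dists, axis=1)]

        # 2. Rotation + translation that best align the matched pairs in
        #    the least-squares (RMS) sense, via SVD of the cross-covariance.
        src_c, ref_c = src.mean(axis=0), matches.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - src_c).T @ (matches - ref_c))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = ref_c - R @ src_c

        # 3. Transform the source cloud with the obtained transformation.
        src = src @ R.T + t

        # 4. Iterate until the change in error drops below the threshold.
        error = np.mean(np.linalg.norm(src - matches, axis=1))
        if abs(prev_error - error) < tol:
            break
        prev_error = error
    return src
```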
Kalman Filter:
The Kalman Filter gives the best estimate for the given model and updates the state and its errors sequentially.
Problem with the Kalman Filter: linear models are usually not applicable in the real world.
Solution: use the Extended Kalman Filter (EKF), which works with nonlinearity by linearizing around the current estimate.
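To make the predict/update cycle concrete, here is a minimal EKF step for a simple 2D robot. The unicycle motion model, the direct position measurement, and all names are illustrative assumptions rather than the workshop’s exact filter:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, R, Q, dt=0.1):
    """One EKF cycle for a state [x, y, heading]: mu/Sigma are the current
    estimate, u = (v, w) the control, z an observed (x, y) position,
    R/Q the motion and measurement noise covariances."""
    x, y, theta = mu
    v, w = u

    # Predict: propagate the nonlinear motion model and linearize it (G).
    mu_bar = np.array([x + v * dt * np.cos(theta),
                       y + v * dt * np.sin(theta),
                       theta + w * dt])
    G = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    Sigma_bar = G @ Sigma @ G.T + R

    # Update: this measurement model (observe x, y directly) happens to be
    # linear, so its Jacobian H is constant.
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ Sigma_bar @ H.T + Q
    K = Sigma_bar @ H.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    Sigma_new = (np.eye(3) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```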
Particle Filter:
- Represent the distribution of the robot’s location by a large number of simulated samples (particles), a nonparametric approximation rather than a parametric one
- Each landmark is represented by its own 2×2 Extended Kalman Filter
- Each particle therefore has to maintain M EKFs, one per landmark
- Resample state and map at each time step
FastSLAM Based on a Particle Filter:
- Sample a new robot path given the new control
- Update landmark filters corresponding to the new observation
- Assign a weight to each of the particles
- Resample the particles according to their weights
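Putting the four steps together, a skeleton of one FastSLAM time step might look like the sketch below. The motion noise levels, the relative-position measurement model, and the simple multinomial resampling are all simplifying assumptions; the workshop’s simulator is more complete:

```python
import copy
import numpy as np

class Particle:
    """One pose hypothesis plus a map: each of the M landmarks gets its
    own small EKF (2D mean and 2x2 covariance), as described above."""
    def __init__(self, pose, n_landmarks):
        self.pose = np.asarray(pose, dtype=float)      # [x, y, heading]
        self.weight = 1.0
        self.lm_mean = np.zeros((n_landmarks, 2))
        self.lm_cov = np.tile(np.eye(2) * 1e6, (n_landmarks, 1, 1))

def fastslam_step(particles, control, observation, lm_id, Q, dt=0.1):
    """One time step: `observation` is the landmark position relative to
    the robot, `lm_id` the index of the observed landmark."""
    v, w = control
    for p in particles:
        # 1. Sample a new robot pose given the new control (noisy motion).
        x, y, th = p.pose
        vn = v + np.random.normal(0.0, 0.1)
        wn = w + np.random.normal(0.0, 0.05)
        p.pose = np.array([x + vn * dt * np.cos(th),
                           y + vn * dt * np.sin(th),
                           th + wn * dt])

        # 2. Update the landmark EKF corresponding to the new observation.
        mu, S = p.lm_mean[lm_id], p.lm_cov[lm_id]
        innovation = observation - (mu - p.pose[:2])   # measured - predicted
        H = np.eye(2)                                  # Jacobian of this model
        S_inn = H @ S @ H.T + Q                        # innovation covariance
        K = S @ H.T @ np.linalg.inv(S_inn)             # Kalman gain
        p.lm_mean[lm_id] = mu + K @ innovation
        p.lm_cov[lm_id] = (np.eye(2) - K @ H) @ S

        # 3. Weight the particle (proportional to measurement likelihood).
        p.weight = np.exp(-0.5 * innovation @ np.linalg.inv(S_inn) @ innovation)

    # 4. Resample the particles according to their weights.
    weights = np.array([p.weight for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return [copy.deepcopy(particles[i]) for i in idx]
```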
Visual SLAM Features and Summary
Visual SLAM works by tracking a set of points through successive camera frames and can operate with a single 3D camera. As long as a sufficient number of points is tracked through each frame, both the orientation of the sensor and the structure of the surrounding physical environment can be rapidly understood. Visual SLAM operates in real time and works to minimize the difference between the projected and actual points, usually through an algorithmic solution called Bundle Adjustment.
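As a toy illustration of the Bundle Adjustment idea, the sketch below jointly refines one camera pose and a small set of 3D points so that their reprojections match the observed pixels, using scipy.optimize.least_squares from the stack listed later. The pinhole model with a single yaw angle and all variable names are assumptions made for brevity:

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, pose, focal=500.0):
    """Project 3D points through a camera at pose = (tx, ty, tz, yaw)."""
    tx, ty, tz, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # yaw rotation
    cam = (points_3d - np.array([tx, ty, tz])) @ R.T            # world -> camera
    return focal * cam[:, :2] / cam[:, 2:3]                     # pinhole projection

def residuals(params, observed, n_points):
    """Difference between projected and actual points, flattened."""
    pose, points_3d = params[:4], params[4:].reshape(n_points, 3)
    return (project(points_3d, pose) - observed).ravel()

# Synthetic data: a true scene/pose and a noisy initial guess.
np.random.seed(0)
true_points = np.random.uniform([-1, -1, 4], [1, 1, 6], size=(10, 3))
observed = project(true_points, np.array([0.1, 0.0, 0.0, 0.05]))
x0 = np.concatenate([np.zeros(4),
                     (true_points + 0.05 * np.random.randn(10, 3)).ravel()])

result = least_squares(residuals, x0, args=(observed, 10))
print("max reprojection error:", np.abs(result.fun).max())
```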
There are two types of Visual SLAM: Feature-Based (indirect) and Direct.
Feature-Based:
- Find key points in image
- Match to key points in other images
- Pose estimation from matched features between images
- Find matches between current frame and a set of previous frames
Direct:
- Recover the environment’s depth and structure and the camera pose through a joint optimization over the map and camera parameters
- Faster than the indirect method because there is no feature processing
- Sensitive to lighting changes
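For the feature-based pipeline, a minimal two-frame sketch might look like this. OpenCV (cv2) is an assumed dependency here, as it is not part of the workshop’s listed stack, and the choice of ORB features is ours:

```python
import numpy as np
import cv2  # OpenCV: an assumed dependency, not in the workshop's stack

def relative_pose(img1, img2, K):
    """Estimate the rotation R and translation direction t between two
    frames, given the camera intrinsic matrix K."""
    # Find key points in each image and match them.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Pose estimation from the matched features.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```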
Technology Stack for SLAM Simulator
- JetBrains PyCharm Community Edition 2017.2.3 x64
- Python 3.6.5
- NumPy 1.16.0
- SciPy 1.2.0
- PyGame 1.9.4
Implementation of the FastSLAM Algorithm
Pygame: Pygame is a cross-platform set of Python modules designed for writing video games. Here it is used to create the user interface and to process keyboard input.
Game Loop: In the setup section, create a window, load and prepare some content, and then enter the game loop:
- Poll for events — catch events that have occurred
- Update internal data structures or objects that need changing
- Draw the current state of the game into a surface
- Put the just-drawn surface on display
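A minimal version of that loop in Pygame, with a moving dot standing in for the simulator state (the window size and the drawn content are illustrative):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # setup: create a window
clock = pygame.time.Clock()
x = 0
running = True

while running:
    # 1. Poll for events that have occurred.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # 2. Update internal data structures that need changing.
    x = (x + 2) % 640

    # 3. Draw the current state into a surface.
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (x, 240), 10)

    # 4. Put the just-drawn surface on display.
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```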
The SLAM system is a truly fascinating subject and we would like to thank Kostiantyn Isaienkov and our Kharkiv office for hosting this workshop and for showing how extensive Akvelon’s technical knowledge is.
Want to learn more about our experience with hosting and participating in workshops? Click on the stories below!
More Insights
AKVELON MACHINE LEARNING ENGINEER SPEAKS AT OCTOPUS AI CONFERENCE
Our very own Kostiantyn Isaienkov, a data science and machine learning engineer, had the opportunity to speak at “Octopus AI”. Octopus AI is a data science conference which has the goal of strengthening professional skills in the world of artificial intelligence, machine learning, and data science, according to the conference website. Read more.
AKVELON IVANOVO OFFICE HOSTS HEALTH AND FITNESS TECHNOLOGY HACKATHON
Our Ivanovo office recently held their annual hackathon competition, “Hakvelon 2018”. This competition gave Akvelon employees the opportunity to showcase their abilities while participating in a little friendly competition with their peers. Read more.
AKVELON ATTENDS YAPPIDAYS CONFERENCE
At the beginning of November, our Akvelon team members had the opportunity to attend the YappiDays conference in Yaroslavl. With over 400 participants, YappiDays is the biggest event for developers in the Yaroslavl region. Read more.