The core of Mapflow is its Mapping Models. Mapflow detects and extracts features in satellite and aerial images using semantic segmentation and other deep learning techniques. See the requirements page to understand what data to use with each model, and the price list for a breakdown of processing billing.
Note
For data requirements, see Model requirements. For a breakdown of Mapflow processing billing, see Mapflow pricing.
AI-Mapping Models
Buildings
Extraction of building rooftops from high-resolution imagery. A high-performance deep learning model is trained to detect building roofs. Three different models are used for different geographic regions to better fit the various urban environments around the world. The choice of model is automatic, based on the location of your AOI.
Note: Building candidates with an area of less than 25 sq.m. are removed to avoid clutter.
The model does not extract the footprints directly, because they are not clearly visible in the images, but we can obtain them, just like human cartographers, by moving the roof to the bottom of the wall (see Additional options).
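For illustration only, here is a minimal sketch (not Mapflow's actual implementation) of these two post-processing ideas: filtering out candidates below 25 sq.m. and shifting a roof polygon towards an estimated footprint position. The offsets input is hypothetical and assumed to be estimated elsewhere (e.g. from building height and viewing angle):

```python
from shapely.affinity import translate

MIN_AREA_M2 = 25.0  # candidates smaller than this are dropped (see the note above)

def roofs_to_footprints(roofs, offsets):
    """Filter small candidates and shift roofs to estimated footprint positions.

    roofs   -- list of shapely Polygons (roof outlines, metric CRS)
    offsets -- list of (dx, dy) shift vectors, one per roof; hypothetical input
               assumed to be estimated from building height and viewing angle
    """
    footprints = []
    for roof, (dx, dy) in zip(roofs, offsets):
        if roof.area < MIN_AREA_M2:
            continue  # drop clutter
        footprints.append(translate(roof, xoff=dx, yoff=dy))
    return footprints
```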
Additional options:
Classification by types of buildings - the typology of buildings is represented by the main classes (see reference).
Simplification - the algorithm corrects the irregularities in the contours produced by our model. Irregular geometries are replaced with rectangles, circles, or arbitrary polygons with 90-degree angles, whichever fits the original shape best. The corrected buildings are also rotated to align with the nearest roads. This option produces much more map-friendly shapes, which look better and are easier to edit, but some shape accuracy can be lost. See our blog post for more information and some visuals.
Merge with OSM [Mapflow Web only] - some areas have great coverage of OpenStreetMap data, and if you prefer human-annotated data, you can select this option. In this case, we check for each building whether it has a good corresponding object in OSM (Jaccard index greater than 0.7), and if there is one, we replace our result with the OSM contour (see the sketch below). The result is then no longer based on the image, so buildings can be shifted from their actual positions, and some changes that have occurred after the OSM mapping may be lost.
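As an illustration of that check, a minimal sketch (assuming shapely polygons in the same CRS; not Mapflow's actual implementation) that computes the Jaccard index, i.e. intersection over union, and applies the 0.7 threshold mentioned above:

```python
JACCARD_THRESHOLD = 0.7  # threshold from the description above

def jaccard(a, b):
    """Jaccard index (intersection over union) of two shapely polygons."""
    union_area = a.union(b).area
    return a.intersection(b).area / union_area if union_area else 0.0

def merge_with_osm(extracted, osm_candidate):
    """Return the OSM contour if it matches the extracted building well enough."""
    if osm_candidate is not None and jaccard(extracted, osm_candidate) > JACCARD_THRESHOLD:
        return osm_candidate
    return extracted
```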
A sample of processing results with different options for Prague, Czech Republic.
Forest
Forest Segmentation. The model is trained on high-resolution data (0.6 m) for different areas and climate zones.
The result includes all areas covered with tree and shrub vegetation, including sparse forest and shrublands.
The model resolution allows detecting small groups of trees and narrow tree lines.
The model is robust to region change and performs well in most environments, including urban ones. The image should be taken during the active vegetation period, because leafless trees and snow-covered vegetation are not target classes.
Hint
This model can be used to speed up tree detection and area estimation in forest inventory assessment.
Additional options:
Heights - follows the usual forest segmentation model, with additional separation into forest height classes.
Additionally, we use models for density and height estimation, dividing the forested area into the following classes (illustrated by the sketch after the list):
Shrubs lower than 4 meters;
Forest from 4 to 10 meters high;
Forest more than 10 meters high;
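For illustration only, a minimal sketch of how an estimated canopy height could be binned into these three classes. The 4 m and 10 m thresholds come from the list above; the function name and the assumption of a per-pixel height estimate are hypothetical:

```python
def classify_canopy_height(height_m):
    """Bin an estimated canopy height (metres) into the classes listed above."""
    if height_m < 4:
        return "shrubs (< 4 m)"
    elif height_m <= 10:
        return "forest (4-10 m)"
    else:
        return "forest (> 10 m)"
```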
Hint
This model can be used as decision support for forest growth clearing. See the professional solutions by Geoalert.
Processing results samples
Roads
Model for road segmentation in high-resolution imagery (0.3-0.5 m).
The model is trained primarily for rural and suburban areas. Multi-task learning is applied in order to improve road mask connectivity, especially in spots obscured by trees or buildings. It is best suited for areas with low urbanization and can fail in cities with wide roads, sidewalks, and complex crossroads. We extract the road centerline to reduce clutter and optimize the extracted road network, and then the road lines are inflated back into polygonal objects.
In version 1.1 we added road graph postprocessing (a rough sketch of similar steps follows the list):
geometry simplification;
merging of the gaps;
removal of double edges;
removal of detached and too short segments;
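For illustration only, a minimal shapely sketch (not Mapflow's internal pipeline) covering a rough approximation of two of these steps plus the centerline inflation described above: simplifying centerline geometry, dropping short segments, and buffering the remaining lines back into a polygonal road mask. The tolerance, minimum length, and half-width values are hypothetical, and a metric CRS is assumed:

```python
from shapely.ops import unary_union

SIMPLIFY_TOL_M = 1.0      # hypothetical geometry simplification tolerance
MIN_SEGMENT_LEN_M = 20.0  # hypothetical: segments shorter than this are dropped
HALF_WIDTH_M = 3.0        # hypothetical half road width used to inflate centerlines

def postprocess_roads(centerlines):
    """centerlines: list of shapely LineStrings in a metric CRS."""
    kept = []
    for line in centerlines:
        line = line.simplify(SIMPLIFY_TOL_M)   # geometry simplification
        if line.length < MIN_SEGMENT_LEN_M:
            continue                           # drop too-short segments
        kept.append(line)
    # inflate the cleaned centerlines back into a polygonal road mask
    return unary_union(kept).buffer(HALF_WIDTH_M)
```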
Processing results samples
Constructions
This model outlines the areas in a satellite image that contain construction sites and buildings under construction. The current model dataset is limited to certain countries, and work on extending it is in progress.
Buildings (Aerial imagery) (DEPRECATED)
Warning
This model has been deprecated as a default one. It is available only by request.
This model is specifically designed for very high resolution aerial imagery (10 cm per pixel) to extract small buildings and structures. It is best suited for rural and suburban residential areas.
We do not recommend using this model in densely built-up urban areas. Use the Buildings model instead, even for aerial imagery.
High-density housing (DEPRECATED)
Warning
This model has been deprecated as a default one. It is available only by request.
Our "high-density housing" AI model is designed for areas with terraced or otherwise densely packed buildings, common in the Middle East, parts of Africa, etc. Just like the regular Buildings model, it detects building roofs.
First, the building blocks are segmented as a whole; then each block is divided into individual houses based on the detection of individual roof markers, using a rectangular grid or a Voronoi diagram (see the sketch below).
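As an illustration of the Voronoi-based splitting, a minimal shapely sketch (not Mapflow's actual implementation), assuming roof-marker points have already been detected and that block and markers share a projected CRS:

```python
from shapely.geometry import MultiPoint
from shapely.ops import voronoi_diagram

def split_block(block, roof_markers):
    """Split a building-block polygon into per-house cells.

    block        -- shapely Polygon of the segmented block
    roof_markers -- list of (x, y) points, one detected marker per roof (hypothetical input)
    """
    cells = voronoi_diagram(MultiPoint(roof_markers), envelope=block)
    # clip each Voronoi cell to the block outline to obtain individual houses
    houses = [cell.intersection(block) for cell in cells.geoms]
    return [h for h in houses if not h.is_empty]
```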
Processing result sample for dense urban development area (Tunisia, Africa):
Agriculture fields (DEPRECATED)
Warning
This model has been deprecated as a default one. It is available only by request.
The field segmentation model detects agricultural fields and delineates neighboring fields from each other if there is a visual boundary (forest line, road, different crop stage). The model is trained on high-resolution data (1-1.2 m), primarily for Europe and Russia. It performs better on larger fields with active vegetation; smaller and terraced fields (typical for Asia) are delineated less well. Fields without vegetation, especially in the winter period, are not a target class.
Segment anything (DEPRECATED)
Warning
This model has been deprecated as a default one. It is available only by request.
The "Segment Anything" model (originally introduced by Meta as a universal segmentation model) is available as another experimental model in Mapflow. We adapted it to Mapflow workflows so it can be used at scale. The same steps are required to launch this model:
Select your data source
Select your geographical area: either a polygon, a GeoJSON file, or your image extent
Yet there is one difference in the model workflow:
if you run this model using a GeoTIFF file, the original resolution of the image will be used
if you run it via TMS (e.g. imagery providers like Mapbox Satellite), you need to select the Zoom level (image resolution) from the model options, which will be used for the input
Depending on the input resolution, the SAM model will interpret and generate different objects. This can be empirically classified by zoom level as follows.
SAM options: semantic classification
| ZOOM LEVELS | SEMANTIC OBJECTS |
| --- | --- |
| 14 | Land use, forests, parks, fields, bodies of water |
| 16 | Small fields, large buildings, lawns, plots |
| 18 | Farms, buildings, groups of trees, etc. |
| Aero | Houses, trees, vehicles, roof structures, etc. |
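For context on what these zoom levels mean in terms of ground resolution, here is a small helper using the standard Web Mercator formula (256-pixel tiles). At the equator this gives roughly 9.6 m/px at zoom 14, 2.4 m/px at zoom 16, and 0.6 m/px at zoom 18, which is why coarser zooms yield land-use-scale objects while finer zooms yield individual buildings and trees:

```python
import math

def ground_resolution_m(zoom, latitude_deg=0.0, tile_size=256):
    """Approximate Web Mercator ground resolution in metres per pixel."""
    earth_circumference_m = 40_075_016.686
    return (earth_circumference_m * math.cos(math.radians(latitude_deg))
            / (tile_size * 2 ** zoom))

for z in (14, 16, 18):
    print(z, round(ground_resolution_m(z), 2))  # ~9.55, ~2.39, ~0.6 m/px at the equator
```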
Note
SAM is not provided in the Mapflow for QGIS list of default models, as the zoom options are not enabled in the current plugin's design. If you work in QGIS and want to try SAM there, send us a request and we will connect the corresponding workflow scenarios with all zoom options specified.