Drone mapping projects are becoming larger and more complex every year. Modern UAVs equipped with high-resolution cameras can easily capture tens of thousands of images during a single survey mission. While this allows the creation of extremely detailed 3D models and orthomosaics, it also introduces a significant challenge: processing such massive datasets without overwhelming the hardware.
Many users working with Agisoft Metashape encounter crashes, memory errors, or extremely long processing times when attempting to process datasets containing 20,000 or more images.
The good news is that large-scale photogrammetry projects can be processed successfully if the workflow is optimized correctly. By adjusting hardware resources, project settings, and processing strategies, it is possible to reconstruct very large drone datasets without crashes.
This guide explains the best techniques for processing massive drone mapping projects efficiently in Agisoft Metashape.
Understanding the Challenges of Large Photogrammetry Datasets
Processing 20,000 drone images places enormous demands on both computing resources and software efficiency. Photogrammetry reconstruction involves several computationally intensive steps, including feature detection, camera alignment, depth map calculation, and mesh generation.
Each of these stages requires large amounts of RAM, GPU power, and storage bandwidth.
Typical problems encountered when processing large datasets include:
- Out-of-memory errors
- Extremely slow alignment times
- GPU memory limitations
- Software crashes during depth map generation
- Large project file sizes
Understanding these challenges is the first step toward building a reliable workflow for large-scale photogrammetry projects.
Use Powerful Hardware
The most important factor when processing large datasets is hardware capability. Photogrammetry workflows place sustained load on the CPU, GPU, RAM, and storage at the same time, so a powerful workstation is essential.
For datasets with 20,000 images or more, recommended hardware specifications include:
- High-core-count CPU (Threadripper or Xeon class)
- 128 GB to 256 GB RAM
- High-end GPU with large VRAM (RTX series)
- Fast NVMe SSD storage
RAM is particularly important because the image alignment stage requires storing large numbers of features in memory.
Insufficient RAM is one of the most common causes of crashes during processing.
Split the Dataset into Chunks
One of the most effective techniques for handling very large image sets is dividing the dataset into smaller groups known as chunks.
Instead of processing all images simultaneously, the project can be split into several manageable subsets.
For example:
- 20,000 images divided into 4 chunks of 5,000 images
- Process each chunk separately
- Merge the chunks later in the workflow
This approach significantly reduces memory usage and makes the project more stable.
Chunk-based workflows are widely used in professional drone mapping pipelines.
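If you script the workflow with the Metashape Python API, the split can be automated. The following is a minimal sketch, assuming a single flat folder of JPEG images and a target of roughly 5,000 photos per chunk; the folder path, project path, and chunk size are placeholders, and exact method signatures can vary between Metashape versions.

```python
import os
import Metashape

IMAGE_DIR = "/data/survey/images"   # assumed location of the drone photos
PROJECT = "/data/survey/project.psx"  # assumed project path
CHUNK_SIZE = 5000                   # target number of images per chunk

# Collect the image paths (JPEG assumed here).
photos = sorted(
    os.path.join(IMAGE_DIR, f)
    for f in os.listdir(IMAGE_DIR)
    if f.lower().endswith((".jpg", ".jpeg"))
)

doc = Metashape.Document()
doc.save(PROJECT)  # save first so later saves are incremental

# Create one chunk per group of CHUNK_SIZE images.
for i in range(0, len(photos), CHUNK_SIZE):
    chunk = doc.addChunk()
    chunk.label = f"chunk_{i // CHUNK_SIZE + 1}"
    chunk.addPhotos(photos[i:i + CHUNK_SIZE])

doc.save()
```

Each chunk can then be aligned and reconstructed on its own, and the results merged once every subset has finished.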
Optimize Photo Alignment Settings
Photo alignment is one of the most computationally intensive stages of photogrammetry processing.
When working with very large datasets, it is important to optimize alignment parameters.
Recommended settings include:
- Accuracy: Medium or High
- Key point limit: 40,000
- Tie point limit: 4,000–10,000
- Generic preselection: Enabled
- Reference preselection: Enabled (if GPS data exists)
Reference preselection can dramatically reduce processing time by limiting image comparisons to nearby photos.
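In the Python API, these options map to the matchPhotos call. The sketch below mirrors the settings listed above; note that since Metashape 1.6 the accuracy setting is expressed as a downscale factor (1 = High, 2 = Medium), the project path is an assumption, and argument names may differ slightly between versions.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey/project.psx")  # assumed project path

for chunk in doc.chunks:
    # Feature matching with the parameters recommended above.
    chunk.matchPhotos(
        downscale=2,                   # 1 = High accuracy, 2 = Medium
        generic_preselection=True,     # coarse pre-matching pass
        reference_preselection=True,   # use GPS/reference data to limit pairs
        keypoint_limit=40000,
        tiepoint_limit=10000,
    )
    chunk.alignCameras()               # estimate camera positions from matches
    doc.save()                         # checkpoint after each chunk
```

Saving after each chunk keeps the project recoverable if a later chunk fails partway through alignment.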
Use Gradual Selection to Reduce Tie Points
After photo alignment, the sparse point cloud may contain millions of tie points.
Cleaning these points using Gradual Selection tools helps improve model stability and reduce memory usage.
The most common filters include:
- Reconstruction Uncertainty
- Projection Accuracy
- Reprojection Error
Removing poor-quality tie points and then re-optimizing the cameras improves alignment accuracy and makes subsequent steps more efficient.
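The same filtering can be scripted. This is a rough sketch using the Metashape 2.x class names (in 1.x the filter class is Metashape.PointCloud.Filter instead of Metashape.TiePoints.Filter); the thresholds shown are illustrative starting values only and should be tuned per project.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey/project.psx")  # assumed project path

# Example thresholds only; suitable values depend on the dataset.
CRITERIA = [
    (Metashape.TiePoints.Filter.ReconstructionUncertainty, 50),
    (Metashape.TiePoints.Filter.ProjectionAccuracy, 10),
    (Metashape.TiePoints.Filter.ReprojectionError, 1.0),
]

for chunk in doc.chunks:
    for criterion, threshold in CRITERIA:
        f = Metashape.TiePoints.Filter()
        f.init(chunk, criterion=criterion)
        f.removePoints(threshold)      # drop tie points above the threshold
    chunk.optimizeCameras()            # re-optimize alignment after cleaning
    doc.save()
```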
Build Depth Maps in Stages
Depth map generation is often the stage where large datasets cause crashes due to high GPU memory requirements.
To avoid problems, consider the following strategies:
- Use Medium quality instead of High
- Process chunks individually
- Disable GPU if VRAM is insufficient
- Use Mild depth filtering
Lower depth map quality does reduce point density (each quality step halves the image resolution used for depth estimation), but the results are usually sufficient for most mapping applications.
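A scripted version of this step might look like the sketch below, again assuming the same project path. In the Python API the depth map quality is expressed as a downscale factor (4 = Medium), and the GPU can be turned off via the application's GPU mask if VRAM is the limiting factor.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey/project.psx")  # assumed project path

# Optional: disable all GPUs if VRAM is insufficient for large depth maps.
# Metashape.app.gpu_mask = 0

for chunk in doc.chunks:
    chunk.buildDepthMaps(
        downscale=4,                          # 4 = Medium quality, 2 = High
        filter_mode=Metashape.MildFiltering,  # mild depth filtering
    )
    doc.save()  # checkpoint after each chunk to limit losses on a crash
```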
Generate Dense Clouds Carefully
The dense point cloud reconstruction stage can produce billions of points when processing large datasets.
This can easily exceed available system memory.
To reduce memory usage:
- Use Medium depth quality
- Export dense clouds after each chunk
- Merge results later
This ensures that each stage remains manageable for the hardware.
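As a rough sketch of that per-chunk approach, the snippet below builds the dense point cloud for each chunk in turn and exports it before moving on. Method names follow the Metashape 2.x API (1.x uses buildDenseCloud and exportPoints instead), and the output paths are placeholders.

```python
import Metashape

doc = Metashape.Document()
doc.open("/data/survey/project.psx")  # assumed project path

for chunk in doc.chunks:
    chunk.buildPointCloud()  # dense cloud from the existing depth maps
    # Export each chunk's cloud so it can be merged later or processed externally.
    chunk.exportPointCloud(f"/data/survey/{chunk.label}.laz")
    doc.save()
```

The exported per-chunk clouds can then be combined later, for example with Workflow > Merge Chunks in the Metashape GUI or in external point cloud software.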
Manage Disk Space Efficiently
Large photogrammetry projects can generate enormous amounts of data.
A dataset of 20,000 drone images may require several hundred gigabytes of storage during processing.
To avoid storage bottlenecks:
- Use NVMe SSD drives
- Store project files on fast local storage
- Avoid external USB drives during processing
Fast storage significantly improves processing speed and reduces the risk of crashes.
Use Network Processing for Large Projects
Agisoft Metashape Professional supports distributed (network) processing across multiple computers.
This feature allows large projects to be processed across several machines simultaneously.
Benefits of network processing include:
- Faster processing times
- Reduced workload per machine
- Improved scalability for very large datasets
Large mapping companies frequently use clusters of workstations to process massive drone surveys.
Consider Cloud Processing
Another option for extremely large datasets is cloud computing.
Cloud platforms can provide temporary access to powerful GPU-enabled machines with large memory capacities.
This approach is particularly useful when processing datasets that exceed the capabilities of local workstations.
Best Practices for Large Drone Mapping Projects
Successfully processing massive datasets requires careful planning.
Professional photogrammetry teams often follow these best practices:
- Plan flights to maintain consistent overlap
- Avoid capturing unnecessary images
- Use ground control points for stability
- Divide projects into logical sections
Good data acquisition reduces processing complexity and improves reconstruction reliability.
Conclusion
Processing extremely large drone mapping datasets in Agisoft Metashape can be challenging, but it is entirely possible with the right workflow.
By using powerful hardware, splitting projects into chunks, optimizing alignment parameters, and carefully managing memory usage, users can process datasets containing 20,000 images or more without crashes.
As drone mapping projects continue to grow in scale, mastering these large dataset workflows will become increasingly important for professionals working in photogrammetry, surveying, and geospatial analysis.
With proper planning and optimization, Agisoft Metashape remains a powerful tool capable of handling even the most demanding photogrammetry projects.


