Limitations

In most cases, we will run out of machine resources before hitting any limitations of Manifold.  There are a few limitations that may be encountered even on large machines:

 

64 TB max size for .map files - In 64-bit Manifold, the amount of data stored within a .map file cannot exceed 64 terabytes (TB), although a .map file can contain data sources linking to a DBMS that holds far larger data, petabytes in size.   As a practical matter, the most pressing limitation on the size of data is the limit of 2 billion records per table.  That limitation will be increased in future builds.
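A quick back-of-envelope sketch shows how the two caps above interact: if a table held the full 2 billion records allowed, records averaging more than about 32 KB each would fill the 64 TB file cap first.  The figures below use the decimal units the text uses; this is an illustration, not part of Manifold.

```python
# Back-of-envelope: at the 2-billion-record cap, how large can records
# average before the 64 TB .map file cap becomes the binding limit?
MAP_FILE_CAP_BYTES = 64 * 10**12       # 64 TB, decimal as in the text
MAX_RECORDS_PER_TABLE = 2_000_000_000  # 2 billion records per table

avg_record_bytes = MAP_FILE_CAP_BYTES // MAX_RECORDS_PER_TABLE
print(avg_record_bytes)  # 32000 -> tables of smaller records hit the
                         # record-count cap before the file-size cap
```

In other words, for typical tables of small records the 2-billion-record limit, not the 64 TB limit, is what we encounter first.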

 

256 GB max size for temporary databases within queries - In 64-bit Manifold, temporary databases created on the fly for the duration of a query cannot be larger than 256 GB.

 

1 terabyte max size for changes to data - .map files can work with data larger than 1 TB (1000 GB) when that data is stored in external data sources, such as an enterprise-class DBMS.  Although there is no theoretical limit to the size of data Manifold can work with on the desktop, there is a hard limit of 1 terabyte on the amount of data that can be changed in a single operation.   For example, if we have a single image that is 2 terabytes in size, we can display and manipulate it, but to change all of the values in all of the pixels in the image we must do that in two operations, each up to 1 TB in size.   The 1 TB limit is the current, somewhat arbitrary, setting for temporary caches used internally by Manifold.   This limitation can be increased in future builds if the community requires larger operations.
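The arithmetic behind splitting a large change into passes can be sketched as follows.  This is an illustration of the 1 TB cap described above, not a Manifold API; the function name is made up.

```python
import math

# Sketch: how many separate operations are needed to rewrite a given
# amount of data when each operation may change at most 1 TB?
CHANGE_CAP_BYTES = 10**12  # 1 TB cap (1000 GB), as stated in the text

def passes_needed(total_bytes: int) -> int:
    """Number of operations needed to change total_bytes of data."""
    return math.ceil(total_bytes / CHANGE_CAP_BYTES)

print(passes_needed(2 * 10**12))      # 2 -> a 2 TB image takes two passes
print(passes_needed(10**12))          # 1 -> exactly 1 TB fits in one pass
```

Anything at or under the cap goes through in one operation; anything above it must be partitioned, for example by processing an image region by region.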

 

2 billion records in .map tables - Individual tables stored within the .map file cannot have more than two billion records.  Each individual record can be extremely large, allowing terabytes of data per table, but at present there can be no more than two billion records per table.   This limitation will be increased in future builds.  When Manifold stores tables in external data sources such as Oracle, a table can have many more than two billion records, up to whatever limit that data source imposes.

 

2 billion objects in .map drawings - Because individual tables stored within the .map file cannot have more than two billion records, a drawing that takes geometry from a table stored in a .map file is limited to two billion objects.  This limitation will be increased in future builds.   Tables stored in external data sources such as Oracle can have many more than two billion records, so drawings based on those tables can have many more than two billion objects.

 

2 GB limit on individual object vertices - A single object in Manifold cannot have more than 2 GB of vertex data.    When we create very complex objects automatically, for example when generating contours, we might exceed this limit.  This limitation will be increased in future builds.

 

2 GB limit on text strings - An individual text string of any kind cannot be larger than 2 GB.  Some datasets that use JSON, a text format, to store large objects might run into this limitation.   This limitation will be increased in future builds.

 

16 terapixel limit on .map images - Images stored in a .map file are limited by dialogs to 16 terapixels, that is, no more than 16,000,000,000,000 pixels.  Due to compression and other storage methods, that is an outer bound: other limitations, such as the maximum size of a .map file or the maximum number of tiles in a table, will be reached first.   This limit might be hit accidentally when using the Reproject Component dialog to reproject an image with an unrealistically small pixel size.
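Before reprojecting, we can sanity-check the pixel count that a chosen pixel size would produce.  The sketch below uses made-up extent figures and is not part of Manifold; it simply shows how quickly a small pixel size inflates the count.

```python
# Sketch: would a given pixel size push an image past 16 terapixels?
TERAPIXEL_CAP = 16 * 10**12  # 16,000,000,000,000 pixels, per the text

def pixel_count(width_m: float, height_m: float, pixel_m: float) -> int:
    """Total pixels for a rectangular extent at a given pixel size."""
    cols = int(width_m / pixel_m)
    rows = int(height_m / pixel_m)
    return cols * rows

# A hypothetical 4000 km x 4000 km extent at 1 m pixels sits exactly
# at the cap; halving the pixel size quadruples the count.
n = pixel_count(4_000_000, 4_000_000, 1.0)
print(n, n <= TERAPIXEL_CAP)                               # within cap
print(pixel_count(4_000_000, 4_000_000, 0.5) <= TERAPIXEL_CAP)  # exceeded
```

Halving the pixel size always quadruples the pixel count, which is why an unrealistic pixel size in the Reproject Component dialog can hit the cap so easily.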

 

2 billion tiles in .map images - Because individual tables stored within the .map file cannot have more than two billion records, an image that takes tile data from a table stored in a .map file is limited to two billion tiles.  This limitation will be increased in future builds.   Tables stored in external data sources such as Oracle can have many more than two billion records, so images based on those tables can have many more than two billion tiles.

 

Notes

Temporary data - The 1 TB limit on changes to data arises from Manifold's internal 1 TB limit on temporary data size.  There are two main types of temporary data:  temporary data that lives only in memory,  and temporary data that may spill to disk and is accessed through a cache.

 

Temporary data that lives only in memory is limited by the size of the system pagefile.  As more memory (beyond that initially allocated to Manifold) is required, Manifold asks Windows to allocate more and more memory.  If requirements grow to the point where Windows runs out and declines the request, the operation fails and results will not appear.   In typical Windows installations the pagefile is usually bigger than the amount of physical memory, but usually not much bigger than twice its size.   Other applications compete for pagefile use, so the full pagefile size might not be available to Manifold.

 

Temporary data that may spill to disk uses both memory and disk and can grow much bigger than the pagefile without putting pressure on (or being restricted by) other applications.  Such growth, however, is not free in terms of resources that Manifold must expend on record-keeping overhead, so Manifold applies a limit of 1 TB.  Prior to the increase to 1 TB, the limit for temporary data in 64-bit mode was 256 GB.  That is a fairly large amount for a desktop machine, but at times users could hit it when performing operations that generate big results, such as building dense contours on big rasters.

 

When an operation hits the limit on temporary data, the operation fails:  everything rolls back, results are not achieved, and the processing time spent on the operation is lost.  Increasing the temporary data limit to 1 TB increases overhead slightly, although mostly only when more than 256 GB of temporary data space is required.   Future builds will likely use a modified cache design with no limit other than the amount of free space on disk.