How to Buy
Massively Parallel Speed
It is not an exaggeration to say that massively parallel GPGPU technology could be the most revolutionary thing to happen in computing since the invention of the microprocessor. It is that fast, that inexpensive, and has that much potential. GPGPU is so important that all Manifold users should insist the computer hardware they procure is GPGPU-enabled. GPGPU computation requires an NVIDIA GPU of Fermi generation or later and 64-bit Windows.
Important: To ensure compatibility with NVIDIA hardware Manifold uses NVIDIA drivers and supporting software such as CUDA. NVIDIA in 2018 will cease supporting 32-bit Windows, at which time Manifold will no longer support GPGPU on 32-bit Windows. Please use 64-bit Windows to ensure continued use of GPGPU.
Ensuring your system is GPGPU-capable might cost you zero extra, because modern systems all have GPUs for graphics anyway and many systems use NVIDIA chips. Reasonably recent computers with an NVIDIA GPU already are GPGPU-capable. If not, adding a GPGPU-capable card costs almost nothing because for $100 you can buy a GPU card with hundreds of cores that will absolutely crush any CPU.
Manifold works perfectly to deliver GPGPU power even using inexpensive GeForce and Titan series cards that anyone can afford. Manifold also runs with high-end Tesla and Quadro NVIDIA cards, which feature ECC memory and other enhanced features for the most demanding applications.
Manifold will automatically use multiple GPU cards to run thousands of cores up to however many GPU chips your NVIDIA driver software supports. Four top end GPU cards with 5120 cores can provide a total of 20480 cores. Manifold can use them all.
Manifold is inherently a parallel processing system. Whenever it makes sense to do so, Manifold will automatically utilize multiple processors or multiple processor cores by parallelizing a task into multiple threads for execution on more than one core. Given hyperthreading plus multi-core CPUs it is now routine to encounter desktop systems with 8, 16, 32 or even more CPU cores available. See the Parallel CPU page for info on CPU parallelism in Manifold.
In addition to this basic parallel processing capability using multiple CPU cores, Manifold also includes the ability to utilize massively parallel multiprocessing using GPUs, potentially launching tasks on thousands of processing cores at once for true supercomputer computational performance, far beyond what can be achieved with CPUs alone.
Manifold automatically parallelizes and dispatches as many tasks as make sense to GPGPU, with automatic fallback to parallelized tasks dispatched to multiple CPU cores if a GPU is not available. CPU parallelism in Manifold is also a key part of providing massively parallel GPGPU function, because many CPU cores working in parallel are required to ensure maximum use of GPU. A single CPU core running non-parallel is not fast enough to keep up.
Automatic GPGPU Utilization
GPGPU acceleration works everywhere in Manifold SQL where worthwhile work arises: in the SELECT list, in WHERE, in EXECUTE, ...everywhere. For example, if you add to a table a computed field that combines multiple tiles together, that computed field will use GPGPU. If you do some tile math in a FUNCTION, that FUNCTION will use GPGPU as well.
You don't have to write something special or learn programming environments like CUDA. Use the same SQL you already know and Manifold automatically parallelizes it to use GPGPU. If you don't use SQL but prefer point-and-click Manifold templates, those automatically use GPGPU as well.
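For example, here is a hedged sketch using hypothetical table and field names (only the TileAbs and TileSqrt functions come from the examples on this page). Ordinary tile math written in plain SQL like this is a candidate for automatic GPGPU dispatch, with no special syntax required:

SELECT TileAbs([tile red]) + TileSqrt([tile green]) * 3 FROM [images];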
When you write something like SELECT tilea + tileb * 5 + tilec * 8 FROM ..., the Manifold engine takes the expression with two additions and two multiplications, generates GPGPU code for that function in a Just In Time (JIT) manner and uploads the resulting code to GPGPU to execute the computations.
To save execution time and boost efficiency, JIT code generation for GPGPU functions is cache-friendly for the driver. Running the same query again, or even running different queries whose GPGPU expressions are sufficiently similar to each other, will engage the compilation cache maintained by the driver.
If you save the project using that computed field or FUNCTION into a Manifold .map file and then bring that .map file onto a machine running Manifold that has no GPGPU, Manifold will execute the computed field by automatically falling back to CPU parallelism, taking advantage of as many CPU cores as are available instead of GPGPU. If you bring the .map file back onto a machine that has a GPGPU, Manifold will automatically use the GPGPU.
Other optimizations play along transparently. If a particular subexpression inside of an expression that runs on GPGPU is a constant in the context of that expression, it will only be evaluated once. If an expression that can run on GPGPU refers to data from multiple tables and has parts that only reference one of these tables, the join optimizer will split the GPGPU expression into pieces according to dependencies and will run these pieces separately and at different times, minimizing work. A SELECT with more than one thread will run multiple copies of GPGPU expressions simultaneously. There are many other similar optimizations automatically integrated with GPGPU utilization.
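As a hedged illustration with hypothetical table and field names: in the sketch below the subexpression (2 + 3) * 100 is constant in the context of the query, so the engine can evaluate it once rather than per record, while the surrounding tile arithmetic remains eligible for GPGPU:

SELECT [tile] * ((2 + 3) * 100) FROM [rasters];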
Some operations are so trivial in terms of computational requirements it makes no sense to dispatch them to GPGPU, the classic case being scalars (trivial) as opposed to tiles (more bulk). CASE expressions, conditionals and similar constructions or functions that operate on scalar values stay on the CPU while functions that operate on tile values generally go to GPGPU unless they use tiles in a trivial fashion, such as making a simple comparison.
Manifold's automatic CPU parallelism with typical multicore CPUs is so fast that keeping lighter operations on parallel CPU is faster than packaging them for dispatch to GPU. Each processor core in a modern CPU is a very powerful computing machine: when Manifold parallelizes a task to eight or sixteen hypercores, that is a massive amount of computing power. Manifold automatically adapts to however many CPU cores are in the computer.
Abs(v) takes a number and returns a number: it stays on CPU. TileAbs(t) takes a tile and returns a tile: it can go to GPGPU. TileContrast(t, c, p) takes a tile and two numbers, and returns a tile: it can go to GPGPU. TileToValues stays on CPU, since it simply splits pixels out of a tile, with no need for GPGPU for something so simple. If the operation did a computation on the pixels first and then split them, it might then be sent to GPGPU.
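To make the dispatch rule concrete, here is a hedged sketch with hypothetical table and field names, using the Abs and TileAbs functions discussed above. Both can appear in one query, and each part runs where it is most efficient:

SELECT Abs([v]), TileAbs([t]) FROM [data];

The scalar Abs([v]) stays on CPU, while the tile-valued TileAbs([t]) can be dispatched to GPGPU.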
CASE conditions are scalar, so they stay on CPU. When CASE is used with tiles whether it is faster to dispatch the task to GPGPU depends on exactly how the tiles are used. Some examples where vXX are scalar values and tXX are tiles:
CASE WHEN v=2 THEN t1 ELSE t2 END
In the above not much is being done with the tiles so the entire construction stays on CPU.
CASE v WHEN 3 THEN TileAbs(t1)+ t2*t3 + TileSqrt(t4) ELSE t1 END
In the above, the expression in THEN will go to GPGPU while the rest of CASE will stay on CPU.
CASE WHEN t1 < t2 THEN 0 ELSE 8 END
In the above the comparison in WHEN does use tiles but it uses them like raw binary values, similar to how ORDER works, so it is more efficient to leave it on CPU.
How Fast Is Manifold GPGPU?
If you are doing computations it's fast. Really fast. Gains are usually from 20 times faster to 100 times faster running typical computations on low end, dirt cheap GPU cards. Running complex computations on faster cards, performance can be 100 to 200 times faster than using CPU alone. It's fairly common to do in a second or two what takes more than five minutes without Manifold.
If your time is worth more than minimum wage and you're doing anything that requires your machine to think at a higher level than your coffee pot timer, you'll often pay back the cost of a Manifold license the first time you use it for anything serious. It's that fast. Nothing else comes close.
The NVIDIA Quadro GV100 GPU card provides 5120 GPU CUDA cores that deliver 110 TeraFLOPS of performance. Manifold will use them all for massively parallel power. Plug two of these cards into your desktop computer and you have 10240 GPU cores. Manifold will use them all for supercomputer performance. At 220 TeraFLOPS, a desktop computer with two high end NVIDIA cards is over 150 times faster than the fastest supercomputer in the world in 1997, the ASCI Red supercomputer built at Sandia for thermonuclear weapons simulations.
220 TeraFLOPS is so much power it is difficult for humans to imagine. To match what two such cards can do in one second, a human would have to do a floating point calculation every second, 24 hours a day nonstop, every day, for almost seven million years: 220 trillion calculations at one per second is 220 x 10^12 seconds, or roughly seven million years.
It's true that such almost incomprehensible power is far more than most GIS tasks require. For almost all GIS tasks a single, low-cost GPU card is plenty: no need to spend thousands per card. But it is still cool - whether you are running 100 cores or 20,000 cores - that Manifold will take advantage of every last core that can help the job run faster. Manifold makes that power available to you with a point and click. No other GIS or spatial engineering software can do that.
Parallel CPU or Parallel GPU? Which is better?
Easy. Use both! GPGPU is so fast and so inexpensive that no matter how many CPU cores you have it makes sense to also use GPGPU. Don't even think about it. Just do it.
Systems with many CPU cores will also be able to utilize GPGPU more effectively, because Manifold's automatic CPU parallelism will launch tasks in parallel on many CPU cores to better keep up with the insane speed of GPGPU. The main advantage that brings is that with Manifold you can harness the power of GPU even with very inexpensive GPU cards.
The biggest technical challenge with advanced GPUs is keeping them busy. Hundreds or thousands of GPU cores are so fast they easily finish tasks that just one CPU core can send them, and then they wait around for something more to do. That's why first and second generation GPGPU applications quickly top out.
Such applications are not parallel but run conventional single core software that dispatches tasks to GPGPU. But non-parallel, single core software cannot remotely keep up with what a thousand GPU cores can do, let alone five thousand or ten thousand GPU cores. To effectively make use of GPU cores the system that feeds them must be totally parallel as well, using many CPU cores in parallel. That's the hallmark of a third or fourth generation fully parallel application like Manifold.
The more CPU cores you have the better your system can load your GPU cores. Manifold technology automatically utilizes many CPU cores in parallel to dispatch massively parallel tasks into many GPU cores. It's all automatic with no code or anything special from you. Just write the SQL you already know or launch point-and-click templates in Manifold and everything happens automatically.
Manifold Viewer is a read-only subset of Manifold Release 9. Although Viewer cannot write projects or save edited data back out to the original data sources, Viewer provides phenomenal capability to view and to analyze almost all possible different types of data in tables, vector geometry, raster data, drawings, maps and images from thousands of different sources. Manifold Viewer delivers a truly useful Radian technology tool you can use for free to experience Manifold power firsthand. To see Viewer in action, watch the Manifold Viewer Introduction YouTube video.
Suggestions to improve Manifold are always welcome. Please see the Suggestions page for tips on making effective suggestions.
Buy Now via the Online Store
Buy Manifold products on the Online Store. The store is open 24 hours a day, seven days a week, every day of the year. Orders are processed immediately, with a serial number email sent out in seconds. Use Manifold products today!
Manifold products deliver quality, performance and value in the world's most sophisticated, most modern and most powerful spatial engineering products. Total integration ensures unbeatably low cost of ownership. Tell your friends!