How to build and output a 100km square in Pro?

I’m evaluating WM2 Pro for use in our pipeline and just loving what it can do. Tiling issues have me stumped, though. Our target resolution is 2049px² per km² with a height range of 11km; however, for the sake of testing speed, I’m working at 513px first.

  1. Based on the docs, my first thought was to set up 1km² tiles, each at 513 resolution, and do a tiled build. However, that produces “interesting” results, e.g., some terrain features flow across tiles roughly correctly, some don’t, and most have large altitude changes in the edge regions. I’m currently rerunning a subset of test tiles in a 20km² square with blending at 100% to see if that helps (should be done in about 21 hours and I will update here). However, going from default to 100% blending seems to increase per-tile times by 300–400%, so I’m not sure that would even be a practicable option, i.e., cost would go from ~100USD to 300–400USD per build.

  2. I see some people recommending a non-tiled build followed by a tiled build, but the non-tiled build options pane seems to have problems with a 100km² square, i.e., it shows a number I recall thinking looked a lot like UINT_MAX for memory required once I push the resolution up towards the 1px = 1m region. Our upper limit for memory in a machine is around 240GB (though we get the best cost efficiency at around 60GB). It seems odd to me that the only way in the WM2 architecture to avoid keeping everything in memory at once would be tiling, so I wonder if I’m just stupidly missing an obvious point somewhere?

So, in a nutshell: how would one get from a 100km² square to output at 2049px per km inside 60GB of RAM (up to 240GB if needs must), with the final product being 1km² tiles with repeated edges (for use in Unity3D)? In case it matters, I’m outputting to RAW FP32.

I said I’d update this once I’d seen the results of 100% blend, but so far I have not been able to get a decently scoped build to complete. I left a 32-core machine (Xeon 2670s) running for about 8 hours overnight and it had barely put a 5% dent in the number of tiles to build (and that at a much reduced resolution, about 1px = 2m). So increasing blend doesn’t seem a viable option, as the increase in build time looks closer to exponential than linear (which stands to reason, I suppose).

Hi there,

Hopefully I can help with a few of your questions below:

First, for clarification: when you say 100km², do you mean a 10km x 10km square (= 100km²) or 100km x 100km? From the build times you’ve been mentioning I’m guessing you mean 100km x 100km, but this is important to clarify :slight_smile: That is not an especially large number by itself – it’s of course the actual grid size (the resolution you’ve set) that matters in terms of work.

  1. That’s a pretty reasonable first guess! And as long as you have a reasonable amount of blending set (25–50%), you generally should not have too many pathological tile differences. The main culprits for funny business are:

a. Look for any devices in your device network that have a red (!) next to their name. These devices are highly likely to cause problems with tiling due to requiring context that is not available. A short list of problematic devices (worst first): Flipper, Equalizer without a captured sample, Clamp device in “normalize” mode, Simple Displacement.

b. Simulation-type devices (Erosion, Snow, etc) are usually OK but depending on specific settings can cause issues. If this is the cause, see answer #2.

A good rule of thumb is to go into Layout View and zoom into your terrain at different locations. Layout View uses the same underlying mechanics as Tiling so any major issues will stick out quickly.

  2. Combining a non-tiled build with a tiled build is useful because you are using the non-tiled build to “set” your gross terrain features; the non-tiled build has full access to all the context it needs to produce the right results, whereas the tiled build can produce more detail but has only limited context. To do this you would create a single-file build that is relatively detailed but still several orders of magnitude smaller than your intended tile build. You would then load this result with a File Input device and add additional fine detail. This can also speed things up significantly: if most of the earth-moving is done in the first, non-tiled world, you can use much less erosion, etc., in your final tiled build, which then lets you get away with less tile blending – a double benefit.

2b. Memory use: I can’t comment intelligently here without knowing the answer to my first clarifying question – but in general, a single-file build needs each heightfield to be able to fit fully in memory while processing (although it generally only needs a few at a time out of the potentially hundreds of heightfields that are created in your network…)

Note that having “memory conservation” checked for your normal single-file builds is recommended on large worlds: Memory conservation ensures that only the output heightfields are kept in the network, as opposed to keeping the results of each device available for inspection.

I hope I’ve helped a bit!

Thanks for the help; all very instructive. You’re quite right, I miswrote: I did mean a 100km square (100km on each side), i.e., 10,000km². My abbreviation-happy brain just decided “squared” and “square” were close enough, less typing, great! The ultimate goal is either 1 or 2px per meter in FP32; right now I’m working in the 5–15px per meter range just to see how different approaches generally perform. Unfortunately, the 100km square is also just a test size; ultimately we need something more in the region of a 1,000km square. But in that realm, manual stitching and touch-up become even less feasible.

I’m currently trying out approaches to the two-step build, i.e., non-tiled followed by tiled, but have run into one issue I haven’t yet found the solution to: it seems there might be a maximum file size for the File Input device. It would not read a 4GB heightmap (RAW FP32), but it read the same heightmap scaled down to roughly 1GB. In one attempt the boxes stayed red; in another they changed to yellow but built blank tiles. It seems like, once I find a way of splitting the output, I could easily work around this by combining inputs (assuming there is a limit and I don’t just have the idiot checkbox ticked).
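In case it helps anyone else hitting the same wall, this is roughly the kind of splitter I have in mind. It is only a sketch built on my own assumptions: a square, row-major RAW FP32 source, made-up filenames and dimensions, no shared edge pixels between the pieces, and a compiler/runtime whose std::streamoff is 64-bit so seeking past 4GB actually works.

```cpp
// Sketch only: split a square, row-major RAW FP32 heightmap into an N x N
// grid of smaller RAW files so each piece stays under whatever size the
// File Input device will accept. Filenames and dimensions are made up;
// a 32768 x 32768 FP32 map is exactly 4GiB, roughly my failing case.
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

int main() {
    const std::uint64_t srcSize  = 32768;           // source is srcSize x srcSize floats
    const int           grid     = 4;               // split into a 4 x 4 set of pieces
    const std::uint64_t tileSize = srcSize / grid;  // assumes srcSize divides evenly

    std::ifstream src("full_heightmap.r32", std::ios::binary);
    if (!src) { std::fprintf(stderr, "cannot open source\n"); return 1; }

    std::vector<float> row(tileSize);
    for (int ty = 0; ty < grid; ++ty) {
        for (int tx = 0; tx < grid; ++tx) {
            std::ofstream dst("piece_" + std::to_string(tx) + "_" +
                              std::to_string(ty) + ".r32", std::ios::binary);
            for (std::uint64_t y = 0; y < tileSize; ++y) {
                // Byte offset of this piece's row y inside the big row-major file.
                const std::uint64_t srcRow = (std::uint64_t)ty * tileSize + y;
                const std::uint64_t offset =
                    (srcRow * srcSize + (std::uint64_t)tx * tileSize) * sizeof(float);
                src.seekg((std::streamoff)offset, std::ios::beg);
                src.read(reinterpret_cast<char*>(row.data()),
                         (std::streamsize)(tileSize * sizeof(float)));
                dst.write(reinterpret_cast<const char*>(row.data()),
                          (std::streamsize)(tileSize * sizeof(float)));
            }
        }
    }
    return 0;
}
```

Whether WM actually wants the pieces to overlap when I recombine them with multiple File Inputs is something I still need to check.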

In terms of memory, I’ve been having reasonable luck running the single-heightmap build with 240GB RAM, then using a faster machine with less RAM for tiling. I somehow got it into my head that it just wouldn’t run at all if the physical memory was less than the build’s needs, but of course it handles that just fine. It’s still a bit slow, so I thought I’d see how less RAM but paging to SSDs works out later this week.
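For reference, this is the back-of-envelope sizing I’ve been working from (my own numbers, nothing official about how WM allocates internally), just to keep the scale of the full-resolution single-file pass in perspective:

```cpp
// Rough sizing of a single FP32 heightfield covering the full extent.
// The 100km side and 1-2 px/m targets are from my own requirements above;
// how many such heightfields WM keeps live at once is a separate question.
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t metersPerSide = 100000;  // 100km x 100km world
    for (std::uint64_t pxPerMeter = 1; pxPerMeter <= 2; ++pxPerMeter) {
        const std::uint64_t pxPerSide = metersPerSide * pxPerMeter;
        const std::uint64_t bytes     = pxPerSide * pxPerSide * sizeof(float);
        std::printf("%llu px/side -> %.0f GB per FP32 heightfield\n",
                    (unsigned long long)pxPerSide, bytes / 1e9);
    }
    // Prints: 100000 px/side -> 40 GB, 200000 px/side -> 160 GB.
    // Both are comfortably past UINT_MAX bytes, which would also explain
    // the UINT_MAX-looking number the build options pane showed me earlier.
    return 0;
}
```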

Are there any rough performance notes on devices, e.g., “to make the Erosion device go faster adjust value X (of course losing quality Y in exchange)” or even just a sticky forum thread I’ve missed etc.?

There should not be an inbuilt upper bound for the File Input device; I’ll take a look and double-check in a moment. There is a patch coming out soon (WM 2.3.2) which fixes a couple of important issues, and if I find anything I will include the fix in it.

Re: device speed: I’m not sure there’s actually a complete reference for this anywhere, but for the slowest class of devices the important parameters are as follows (these are usually the “duration”-type parameters):

Erosion: base duration directly scales the amount of work done.
Thermal: iterations directly scale the amount of work done.
Snow: snow amount directly scales the amount of work done.
Blur: radius.
Expander: distance.

Everything else should scale roughly linearly with the number of pixels; the devices above have sub-linear scaling.

Just an update:

I found the problem with the File Input: there is a standard library bug(!?) in MSVC 2010 which unnecessarily restricts the C++ file operators to 32-bit signed integer file sizes, even on 64-bit platforms. This is fixed in MSVC 2012… but it can also be worked around now that the problem is known.
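For the curious, the general workaround on that toolchain looks something like the sketch below: drop from the iostream positioning down to the CRT’s explicit 64-bit seek. This is only an illustration of the technique, not the actual File Input code; the function name and parameters are made up.

```cpp
// Illustrative only: read floats from a possibly >4GB RAW file on MSVC 2010
// by using the CRT's 64-bit _fseeki64 instead of the 32-bit-limited
// iostream positioning. Not WM's actual code; just the general technique.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

bool readFloats(const char* path, std::uint64_t byteOffset,
                std::size_t count, std::vector<float>& out) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    // _fseeki64 takes a 64-bit offset, so positions past 2GB/4GB are reachable.
    if (_fseeki64(f, static_cast<long long>(byteOffset), SEEK_SET) != 0) {
        std::fclose(f);
        return false;
    }
    out.resize(count);
    const std::size_t got = std::fread(out.data(), sizeof(float), count, f);
    std::fclose(f);
    return got == count;
}
```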

Should be in the next patch!