Down-sampling world resolution

Right now, when I build a 4k world with Conserve Memory OFF, everything that's been generated gets stored. But when I lower the resolution to 2k for testing, why doesn't World Machine downsample the heightmap instead of rebuilding the world every time, when no “real” changes have been made? This could potentially save me a whole bunch of time.

Because most devices produce different results depending on the resolution. You won’t get the same result building at 2K as you will building at 4K and then downsampling…

Do Seed values get changed by output pixel resolution?

If so, it would be better if that were hooked up to the actual World Resolution ( km/m/cm/nanometers :wink: ), to keep things consistent and/or more predictable.

The resolution dependencies aren’t caused by seed values. You’re simply getting problems when the output of a device relies heavily on features in the input that only exist in the hi-res version.
Devices that work with high frequencies, derivatives, or other small details are particularly affected. Erosion, for example, is somewhat chaotic, butterfly-effect-ish.
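A toy illustration of why this happens (just a NumPy sketch, nothing to do with WM's actual internals): a derivative-based operation such as a slope map does not commute with downsampling, so building at low resolution is not the same as building hi-res and then downsampling.

```python
import numpy as np

def slope_map(height):
    # derivative-based operation, standing in for any "small detail" device
    gy, gx = np.gradient(height)
    return np.hypot(gx, gy)

def box_downsample(a):
    # simple 2x2 box-filter downsample
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

rng = np.random.default_rng(42)
hi_res = rng.random((8, 8))          # tiny stand-in for a hi-res heightfield

built_low = slope_map(box_downsample(hi_res))   # build directly at low res
down_high = box_downsample(slope_map(hi_res))   # build hi-res, then downsample

print(np.allclose(built_low, down_high))        # False: the two don't match
```

The mismatch only grows for genuinely chaotic devices like erosion, where small hi-res details feed back into the whole result.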

k, i’ll stop guessing :wink:

I guess my point is that it would be nice to be able to iterate through versions faster, without having to regenerate much of what has already been done. But that would also mean that memory requirements would grow.

Any ideas?

One of the ideas I tossed around when implementing sessions is to cache all data I possibly can; that is, save each render extent’s results independent of each other, which would allow you to have multiple resolution builds all with up to date data.

For a couple of reasons I decided against it, but it’s a fairly simple extension in principle.
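The per-extent caching idea above could be sketched roughly like this (a minimal, purely hypothetical sketch; names like `BuildCache` and `device_id` are mine, not WM's actual design):

```python
# Cache device results independently per (device, resolution), so switching
# resolutions doesn't invalidate results already built at another resolution.
class BuildCache:
    def __init__(self):
        self._store = {}  # (device_id, resolution) -> result

    def get(self, device_id, resolution):
        return self._store.get((device_id, resolution))

    def put(self, device_id, resolution, result):
        self._store[(device_id, resolution)] = result

cache = BuildCache()
cache.put("erosion1", (4096, 4096), "heightfield-bytes")
print(cache.get("erosion1", (4096, 4096)))  # cached hi-res result is reused
print(cache.get("erosion1", (512, 512)))    # None -> rebuild at this resolution
```

The trade-off is exactly the one mentioned earlier in the thread: memory (or disk) use grows with every resolution you keep cached.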

Yeah, this type of feature would make things a bit more complicated and a little more involved.
I don’t think caching devices that build fast is even needed, but for things such as Erosion, Snow, and Blur it may be essential. Then again, if I really need to cache things I could always bake the current state to a heightmap file and use that as a new base. But that is very destructive in nature.

Back to the topic though, maybe for that down-sampling feature there should be a checkbox attribute in the heightmap/mesh/bitmap output telling WM to match the resolution in World Parameters or resample the world to a custom resolution.

Oh, btw, I should probably mention: this is for the Mesh Output in particular, because other maps can be resized in Photoshop or another 2d application, but an OBJ mesh is sometimes a bit too heavy and needs to be built at a lower resolution. So if outputs aren’t generated in the correct order, I end up spending a lot of time regenerating what’s already done.

Here’s a scenario:

  1. I set up a node network.
  2. Set needed world resolution (8k/8k).
  3. Add output nodes. (except Mesh Output, since at 64million polygons the output is too heavy to be useful )
  4. Build world.
  5. Wait a couple of hours for the build to finish, while trying to look busy.
  6. Build is done and I like what I see; check that the maps were exported.
  7. All good.
  8. Add mesh output.
  9. Change world resolution to 512/512.
  10. World gets deconstructed.
  11. Build again.
  12. Coffee
  13. Open 3d app, import the mesh, apply texture, quick render. No normal map!
  14. Back to WM, set resolution again (8k/8k).
  15. World gets deconstructed.
  16. Another Build, I nervously wait another couple of hours trying to look even busier.
  17. Hope 512/512 is enough. Otherwise another deconstruction and build will be needed.

But if I had a resample feature in the Mesh Output, I could add it and export a resampled mesh without the “World gets deconstructed” steps.

This very feature has been added in WM 2.2!

Open your Mesh Output device, and change the type to Simple Triangle Reduction. In the box below, put your target in kTri.

You can now limit your mesh density and only have to do one build to output a suitable mesh.

Why don’t you render it 8k the first time and then downsample/export it using a new WM file?

:!: Ahh! Doh. OK, my excuse is that I didn’t want to output a triangulated mesh. :wink:
I’ll have to experiment with that. So, 128 kTris is equivalent to 128px/128px resolution?

That would give me another version of the master file to keep track of and update if any further changes should be made. :wink:

The kTri limiter should be more accurately called the kElement limiter, as it works for both triangles and quads.

So 1,000 kTri equals, and would limit you to, a 1-million-polygon mesh; 128k would be a 128,000-poly mesh, etc. If your mesh is square, the length of each side is sqrt(limit). So a 128x128 terrain is a 16k mesh, a 128k mesh would be approximately 362 x 362, a 1M-poly mesh would be 1024x1024, etc.
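A quick sanity check of that arithmetic (my assumption here is that “k” is the binary 1024, which is what the quoted ~362 x 362 and 1024x1024 figures imply):

```python
import math

def approx_side(poly_limit):
    """Side length of a roughly square mesh holding poly_limit elements."""
    return round(math.sqrt(poly_limit))

print(approx_side(16 * 1024))    # 128  -> a 128x128 terrain is a 16k mesh
print(approx_side(128 * 1024))   # 362  -> a 128k mesh is ~362 x 362
print(approx_side(1024 * 1024))  # 1024 -> a 1M-poly mesh is 1024x1024
```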

Ah, OK, that explains it.

Thanks

I can’t seem to get the mesh reduction to work… I’ve been rendering everything at double my target resolution for my project.
Then I downsample and store off my .DDS textures.
Then I downsample the heightmap and bake out the .obj mesh files at the target resolution (pre-decimated) for my project’s tools.

What could I be doing wrong?

Here’s an example world that demonstrates how to use the mesh reducer:

It is a 1025x1025 world. Build it as-is:

  1. The Bitmap Output will be a full resolution (1k x 1k ) texture
  2. The Full Res Mesh will be an extremely dense >1M poly mesh
  3. The 16k Mesh will be only 16k elements in size

Hope it helps!

Ah, I see… I had not grokked the ‘kTri’ bit; I was putting in an exact tri count :o

So, does this also work the same way with tiled builds?

It should! (so if it doesn’t… let me know!)