Haven't been here since 2008, so I'm sure much has changed. There was a feature request thread back then, but instead of reopening that thread I thought I'd ask here:
Is there a way to do the following:
Bring in a terrain mesh (treat normals as upward)
Convert to a heightfield, as suggested, to do the additional detailing in WM2
On export, have the option to write the heightfield as a difference from the mesh, so the map captures only the added detail (not the full height values).
So basically you are just using WM2 to add an additional layer of displacement on top of the mesh geo. The geo always stays the same… you just have a heightfield to add displacement (in Maya, RenderMan, etc.)
At the moment I have to generate a low and then a high, and try to extract the difference as maps.
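For what it's worth, once the low and high versions exist as heightfields at the same resolution, extracting the delta is just a per-pixel subtraction. A minimal numpy sketch of the idea (function name is mine, nothing to do with WM's actual internals):

```python
import numpy as np

def difference_map(base, detailed):
    """Return detailed - base, a delta displacement the renderer can
    apply on top of the base mesh to recover the detailed terrain."""
    base = np.asarray(base, dtype=np.float32)
    detailed = np.asarray(detailed, dtype=np.float32)
    if base.shape != detailed.shape:
        raise ValueError("resample one heightfield so the resolutions match")
    return detailed - base

# Tiny example: base + delta reconstructs the detailed terrain exactly.
base = np.array([[0.0, 1.0], [2.0, 3.0]])
detailed = np.array([[0.1, 1.5], [2.0, 2.8]])
delta = difference_map(base, detailed)
```

The delta would then be written out as a 32-bit float EXR/TIFF so negative values (erosion below the base surface) survive intact.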
Potentially even better would be the ability to export a new mesh (at a resolution you select) and then the remaining detail, displacement and texture as maps at a selected resolution.
Has this feature been implemented yet? I see a few threads with the same request, but hadn't seen any with confirmation.
This is really important for our industry (film; I work at a New Zealand VFX company).
We've been looking at this software and have had a WM license for years, but haven't really been able to use it in our pipeline.
We need a base mesh and then additional detail in maps.
Just coming back to see if this has changed?
Would absolutely love to be able to pipe in a mesh with predefined UVs, then run the weathering tools and bake out maps for the detail changes and masks.
In addition to Paul’s request, I think I can explain what is needed as a general feature that would drastically benefit both general users and production users.
Keep the present architecture, but add a secondary displacement feature to the overlay device. The heightmap input to this secondary port would be mapped onto the generated primary displacement (or used as a bump map for previewing purposes). This way we could implement overhangs and cliff faces inside WM instead of sculpting the details separately.
A mesh input feature is sorely needed, even if it is just a Mesh->heightfield converter device.
These would seriously skyrocket the uses of World Machine as a terrain generation solution.
+1 - but it's very quiet on this topic… And I want to add: no word regarding tri-planar texturing, "megatextures", … It's getting a bit quiet here, or am I wrong?
Adding meshes as an exposed primitive type definitely brings in a lot of power and use cases. There are some issues that need to be carefully considered with regards to tiling mostly, however. Specifically, splitting & blending meshes when they cross a tile boundary would need to occur to have a proper implementation, but this is a non-trivial thing to get right. Not “hard”, but nontrivial. Because of this, I’ve been very cautious about including meshes as a primitive, which holds up the implementation of all the other things that can be done.
But you’re right – it would be very powerful to be able to diff a heightfield against a mesh; or import, convert to heightfield, process, and then re-export with adjusted vertex heights but the same vertex count and locations; and dozens more uses.
The lack of the mesh as a primitive delays a lot of useful functionality. It really is on the feature backlog, but so are dozens of other things. I encourage you and others, though, to chime in about your needs for this – features I hear about the most (especially if they are easy to implement) float to the top of the feature list quickly.
Basically at work (I won't mention the name here due to NDA stuff, but you can look at my IMDb and get a good guess) we use low-res meshes with multiple UV tiles, and then add detail on top with extDsp floating-point .tif / .exr.
So for example, a large area of geo I would happily split into multiple UV tiles (think maps)… easily 20+. On some complex models we've had thousands of UV tiles.
The tiles stack in rows of 10, and our tile numbering starts at 1001. So the default tile (UV 0,0 to 1,1) is 1001; then moving across the UVs, 1001-1010 is the first row, 1011-1020 the second, etc. Some apps can handle this now (Mudbox, Mari, etc.)
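That numbering scheme (the UDIM convention) boils down to a one-line formula. A quick sketch, just to make the mapping concrete:

```python
def udim_tile(u, v):
    """Map a UV coordinate to its UDIM tile number.

    Tiles are numbered from 1001 in rows of 10: the 0-1 UV square is
    1001, 1001-1010 is the first row, 1011-1020 the second, and so on.
    """
    col = int(u)  # which column: 0 for u in [0,1), 1 for [1,2), ...
    row = int(v)  # which row of 10
    if col < 0 or col > 9 or row < 0:
        raise ValueError("UDIM columns run 0-9; negative tiles are undefined")
    return 1001 + col + 10 * row
```

So a point at UV (9.5, 1.5) lands in tile 1020, matching the rows described above.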
So the idea would be: I import a base mesh into World Machine that is already UVed. I'm happy for your software to assume this mesh is, say, Y-up and height-based, so you effectively resample it into your world as height data if necessary. Scale is defined by the mesh I give you, so I get exactly 1:1 with Maya etc. (units in cm or metres can be selected).
I then run all your fun procedurals.
The last step is to bake the resulting height change and colour/mask channels back onto the original model.
You could even be more advanced and allow the original model's vertices to be tweaked to more closely resemble the final output (if significant changes took place).
Since the UVs are predefined, tiling isn't an issue: just bake to UV and bleed a little.
If this is possible, I would seriously start looking at this again for our pipeline.
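The "bake to UV" step here is essentially rasterizing per-vertex values (the height deltas) into texture space using the mesh's UVs. A bare-bones sketch of that rasterization, assuming per-vertex delta values are already computed (edge bleed/dilation is omitted for brevity):

```python
import numpy as np

def barycentric(tri, x, y):
    """Barycentric coordinates of point (x, y) in a 2D triangle."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    if abs(d) < 1e-12:
        return -1.0, -1.0, -1.0  # degenerate triangle: treat as outside
    a = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / d
    b = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / d
    return a, b, 1.0 - a - b

def bake_to_uv(verts_uv, verts_value, faces, size):
    """Rasterize per-vertex scalars (e.g. height deltas) into a UV-space map."""
    img = np.zeros((size, size), dtype=np.float32)
    for i0, i1, i2 in faces:
        uv = np.array([verts_uv[i0], verts_uv[i1], verts_uv[i2]]) * (size - 1)
        vals = np.array([verts_value[i0], verts_value[i1], verts_value[i2]])
        xmin, ymin = np.floor(uv.min(axis=0)).astype(int)
        xmax, ymax = np.ceil(uv.max(axis=0)).astype(int)
        for y in range(max(ymin, 0), min(ymax, size - 1) + 1):
            for x in range(max(xmin, 0), min(xmax, size - 1) + 1):
                a, b, c = barycentric(uv, x, y)
                if a >= 0 and b >= 0 and c >= 0:  # texel centre inside triangle
                    img[y, x] = a * vals[0] + b * vals[1] + c * vals[2]
    return img

# One triangle covering half the unit UV square, constant value 1.
verts_uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
faces = [(0, 1, 2)]
img = bake_to_uv(verts_uv, np.array([1.0, 1.0, 1.0]), faces, 4)
```

A real baker would also dilate the result a few texels past the UV shell borders (the "bleed a little" part) so filtering at render time doesn't pull in empty texels.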
If WETA is looking at this, then it's reason enough to think about implementing it.
@Paul: In my experience, World Machine is better suited to building a single mountain or a localized group of similar terrain features. I usually refine my terrain in Mudbox or ZBrush AFTER I render it out of World Machine. But this approach seriously limits the potential procedurals have. I use Terragen or Vue to extend beyond the localized cluster I build in WM.
Let's just say someone who works there is.
And if it's useful, I can test and present findings.
Yeah, I have quite a bit of experience with DEM data and reprocessing in ZBrush: mesh + extDsp and retopo.
It works, but it really doesn't utilise all the cool stuff we could add with WM and the like. We really miss all that procedural goodness when bringing it into Maya and getting ready for RenderMan etc. We can only really feed in geo + maps.
Baking maps in WM is definitely a useful step in the right direction, but still limited to a finite resolution based on map pixels.
Not sure if it's even ever going to be possible until procedurals like this become native to Maya/RenderMan shading.
That would work, but it's not really ideal and doesn't benefit from what WM could do.
I think there needs to be a good OBJ importer for WM, with options like "is height-based", "Y-up", "units: cm", etc.
From that, WM can import the OBJ at the correct scale and orientation. It can then auto-generate its initial height data to store the OBJ in WM's native type.
At the end, after all the procedurals, the difference between the original height data and the final height data becomes the displacement map (32-bit floating-point EXR).
Finally, bake this into the UVs (if the initial model is UVed).
A bonus secondary step, as mentioned, would be to modify the initial mesh vertices, pushing them up/down to match the final result a bit more closely. Having the base mesh closer to the final rendered result is never a bad idea.
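That bonus step amounts to sampling the final heightfield at each vertex's XZ position and blending the vertex's Y toward it. A rough sketch of the idea (all names and parameters are mine, purely illustrative; nearest-neighbour sampling for simplicity):

```python
import numpy as np

def conform_vertices(verts, heightfield, world_min, world_max, strength=1.0):
    """Nudge mesh vertex heights (Y-up) toward a final heightfield.

    verts: (N, 3) array; heightfield: (H, W) array covering the XZ
    rectangle from world_min to world_max; strength in [0, 1] blends
    between the original height and the sampled one.
    """
    verts = np.asarray(verts, dtype=np.float32).copy()
    h, w = heightfield.shape
    # Normalise XZ into heightfield pixel coordinates.
    x = (verts[:, 0] - world_min[0]) / (world_max[0] - world_min[0])
    z = (verts[:, 2] - world_min[1]) / (world_max[1] - world_min[1])
    px = np.clip((x * (w - 1)).round().astype(int), 0, w - 1)
    pz = np.clip((z * (h - 1)).round().astype(int), 0, h - 1)
    target = heightfield[pz, px]
    verts[:, 1] = (1.0 - strength) * verts[:, 1] + strength * target
    return verts

# Example: a flat heightfield at height 5 pulls both vertices toward 5.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 1.0]])
hf = np.full((4, 4), 5.0)
out = conform_vertices(verts, hf, (0.0, 0.0), (1.0, 1.0))
half = conform_vertices(verts, hf, (0.0, 0.0), (1.0, 1.0), strength=0.5)
```

With `strength` less than 1 you only partially conform the base mesh, which keeps the remaining delta map small without losing the low-res silhouette entirely.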
Just wanted to pop in here and see if this has been implemented in any way yet? The method that SnS describes in his last post is exactly the workflow I had hoped to implement in our film VFX pipeline, but I haven't been able to figure out a way to do so, as described in this thread.
We have licences of World Machine at our facility already, by the way, but I'd like to extend the usage, as there is bags of potential in being able to utilise World Machine's great procedural workflow. We need to do it in a way that complements our existing pipeline and allows us to use just the bits of World Machine that we need at a given time.
Really hope this gets some attention; at the very least, here is another user begging for this development!
Unfortunately, I don’t have any additional progress on this front to report.
I do recognize the value of this workflow, and thanks for chiming in here – the more I hear about how useful something would be, the more likely it is to be implemented.
As a workaround, I'm thinking about using a hi-res height map (b&w) and then making a low-res height map in Photoshop. This can be used as the base for the OBJ. Then you would have to make a difference map in Photoshop to get the displacement map. When I have more time, I'll try to refine this idea.
Remnant - Like SnS, I work at another of the world's largest VFX facilities, and I'm not the only one who would love to see this feature; I'm just the only one who has asked directly, I guess. But I promise there are many more here who would love to see this implemented, in case that sways you.
Vision4Next - I had a similar thought about extracting a difference map in Nuke, but haven't had any dev time to test the workflow yet. Let me know if you have any success, and if I get around to it first I'll post back here with my results too.