Feature request: Flag groups/devices for memory conservation

Hi,

if you have followed some of my posts you might know that I am notoriously struggling with memory, because I build in high resolution and like to build crazy messy networks :slight_smile:
So the problem definitely sits in front of the PC - at least partly - but I also have some ideas that could help conserve memory better:

  • Automatic tml caching (similar to doing it manually with libraries):
    Already requested here
    I added a comment to that request saying it would be awesome not to limit this to checkpoints, but to be able to use it for groups or even for single devices as well.

  • Flag groups/devices for conservation:
    Similar to what the request above already describes, and to what the “High” conservation mode already kind of does with checkpoints, but with even more flexibility. I think it would be awesome if groups or even single devices could be flagged for conservation. That way you could freely mix and match the degree of conservation and ease of working across your network however you like.

What do you think?

Cheers


Love the idea of flagging groups and devices for conservation. Much more intuitive and faster than using checkpoints.

I’ve been meaning to clarify some things about disk caching, especially within the heightmapping context that most terrain tools work in. There have been some misconceptions going around about memory saving and disk caching. I guess this is as good a place as any; it also applies to my other feature request about checkpoint caching.

Disk caching is NOT a MEMORY SAVING feature (by itself); it’s a TIME saving feature, sometimes at the cost of additional memory usage. This is especially true for terrain mapping tools like World Machine and Gaea.

Let me explain:

There are two different ways disk caching works for heightmap data. “Passive” and “Active”.

Passive

Passive baking is already well built into World Machine. It works by decoupling the disk caching process into a separate node (Library I/O) that passively caches data to disk or loads it from disk, to and from your main memory. When you save a cache, it essentially dumps your memory to a file on disk. When you load a cache, that dump is placed back into your memory. So by itself, it doesn’t save any memory while you are working within a single graph. The method I described in this tutorial project shows this problem clearly.

Passive caching, however, can be used to split your work into parts, so that only the part you are working on gets loaded into memory. THAT is how memory can be saved: by splitting your graph into separate projects.
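To make the passive idea concrete, here is a toy Python sketch (this is NOT World Machine’s actual code; the node names and the pickle format are stand-ins for illustration). A "Library output" dumps the heightmap to disk, a "Library input" loads it back, and memory is only freed when the project that built the data is closed:

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for WM's Library I/O nodes (illustration only).
def library_output(heightmap, path):
    # Dumping does NOT free memory by itself: the data still lives in RAM
    # until the upstream nodes are released (e.g. by closing that project).
    with open(path, "wb") as f:
        pickle.dump(heightmap, f)

def library_input(path):
    # Loading places the full dump back into memory.
    with open(path, "rb") as f:
        return pickle.load(f)

# Memory is only saved by SPLITTING the work: "project A" builds and dumps...
heightmap = [[float(x * y) for x in range(4)] for y in range(4)]  # toy 4x4 terrain
path = os.path.join(tempfile.mkdtemp(), "terrain.cache")
library_output(heightmap, path)
del heightmap  # ...and can now be closed, freeing its nodes from RAM.

# ...while "project B" later resumes from the cached result alone.
restored = library_input(path)
print(restored[3][3])  # 9.0
```

The point of the sketch: within a single graph, dump + load just round-trips the same data through RAM; the saving only appears once the producing project can be closed.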

Active

Active disk caching is exactly what both our feature requests describe (checkpoint, per-node, and group caching to disk). Here the caching and loading process is built into every node. When you cache a node, it is built to full project resolution and baked into a memory dump on disk (in the case of WM, tml files). This is why, when you’re done for the night, you can cache your project and resume the next day from that memory dump. No more rebuilding your massive projects every time you save and reload a project, thus SAVING TIME. The next time you open the project, however, that cache has to be placed back into memory so that your data can be edited further. NO MEMORY IS SAVED in the process, and unless you find a creative way to split projects, mixing active caching with passive loading, there is no direct way it will save any in the future.
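The active variant can be sketched the same way (again a hypothetical illustration, not Gaea’s or WM’s real API): every node checks for a cache file on disk before rebuilding, so reopening a project skips the expensive build. Time is saved, but the cached result is still placed back into RAM.

```python
import os
import pickle
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for a project's cache folder

def build_node(name, expensive_build):
    # Hypothetical per-node "active" caching: check disk before rebuilding.
    cache_path = os.path.join(CACHE_DIR, name + ".cache")
    if os.path.exists(cache_path):
        # Cache hit: no rebuild, but the dump is still loaded into memory.
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    result = expensive_build()            # full-resolution build
    with open(cache_path, "wb") as f:     # bake the result to disk
        pickle.dump(result, f)
    return result

def erosion():
    # Stand-in for a slow terrain operation (e.g. an erosion node).
    return [i * 0.5 for i in range(8)]

first = build_node("erosion", erosion)   # builds, then caches to disk
second = build_node("erosion", erosion)  # loaded from the cache, no rebuild
print(first == second)  # True
```

Note that on the second call the full result still occupies memory; the cache only removed the rebuild cost, which is exactly the time-vs-memory distinction above.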

:arrow_down: :arrow_down: :arrow_down:

World Machine already has passive disk caching well integrated, in the form of “Library input” and “Library output” nodes. QuadSpinner Gaea has active disk caching well built in, in the form of per-node baking of graphs into a “resource” file on disk.

Both work very well at this point and are well integrated to save on build time. On their own, both features are NOT designed to SAVE MEMORY.

:hash: :hash: :hash:

@hamzaaa1988

That out of the way, +1 for the second part of your request, quoted below. I don’t know if I’ll be using it frequently, but seems like a natural progression for memory conservation as it stands currently.


Thanks for the concise explanation. Ok, I guess I didn’t fully understand how WM was managing memory. I thought that caches dumped to disk would then not need to be loaded into memory, but if I want to keep working with them in the graph, of course it was foolish of me to think so ^^
The way I use it, though, it’s mostly both time saving and memory saving, because all the nodes that led up to the library node don’t have to remain in memory; I just use the final output for further editing down the line. So it already provides the selective memory conservation feature I mentioned, just in a less intuitive and more tedious way.
