I can’t say I’ve done extensive 8k testing, but what I did do was not problematic. What other memory intensive applications do you run on this system? What does memory use look like in Task Manager during this?
Actually, I had the numbers reversed: Task Manager showed 700-800MB used and 1.3GB available when I was getting these crashes.
Here’s the weird thing though - right now it seems to be semi-working. I tried the larger world I had earlier, the one that couldn’t even run the single file input node’s “build to current device” without crashing, and it just recalculated everything. So this seems to come and go; not sure why I was having such consistent issues yesterday with any world.
I’ll do more testing.
However, nothing should hard crash - if it’s out of memory, it should say so.
I agree, applications should ideally let you know when they run out of memory and handle it gracefully. Unfortunately it’s not always up to the application developer - a lot of that stuff is handled by the OS and the dev can’t do much to change its behavior.
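To illustrate the distinction (a minimal sketch of my own, not anything from WM’s source): in C++, a failed allocation throws std::bad_alloc, which an application can catch and turn into a proper “out of memory” message instead of a hard crash - provided the failure surfaces somewhere the app can see it.

    #include <new>
    #include <cstdio>
    #include <cstddef>

    int main() {
        // Request far more than any 32-bit (or realistic 64-bit) heap will grant.
        std::size_t samples = static_cast<std::size_t>(-1) / sizeof(float);
        try {
            float* heightfield = new float[samples];
            delete[] heightfield;  // never reached in practice
        } catch (const std::bad_alloc&) {
            // Graceful path: tell the user instead of hard-crashing.
            std::fprintf(stderr, "Out of memory - try a smaller terrain size.\n");
            return 1;
        }
        return 0;
    }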
You might have missed my question about what other memory-intensive apps you use on your machine btw. It sounds like something of a workhorse (with 2GB of memory, etc.). What’s the main use for the system?
I didn’t miss your note; I didn’t have any other memory-intensive apps running.
One thing I noticed is that I was losing the memory settings when it crashed (even after saving, oddly), so I think I sometimes forgot to switch back to memory conservation. Usually it would then tell you it didn’t have enough memory, but it looks like sometimes it just crashes.
It looks like this thing is hard-coded not to touch swap space at all - to only use physical RAM - and it crashes if it touches, ever so slightly, the top of physical RAM, even if it doesn’t commit it. Every time I’ve had a crash I’ve had more than 500MB of RAM available. It looks like it might have needed 700MB, after I closed some IE windows and shut down some running services (Java, etc.).
The earliest patent on virtual memory was US Patent 3,292,152 (Barton, Dec. 1966, class 365/49).
WM uses the standard C library memory functions, which should have no problem hitting swap space since the virtual memory management is an OS function and not a program-level function.
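To be concrete (my own sketch, assuming nothing about WM’s internals): the C runtime just asks the OS for address space, and the OS decides whether the backing pages live in physical RAM or in the pagefile. All the program can usefully do is check whether the call succeeded.

    #include <cstdlib>
    #include <cstdio>

    int main() {
        // The OS virtual memory manager, not the program, decides whether
        // these pages end up in physical RAM or in the pagefile.
        void* hf = std::malloc(256u * 1024 * 1024);  // e.g. one large heightfield
        if (hf == NULL) {
            std::puts("malloc failed: out of address space, not necessarily out of RAM");
            return 1;
        }
        std::free(hf);
        return 0;
    }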
When does it crash? Immediately? Partway through the build? Only after clicking OK to exit the stats?
I replicated this crash on an XP Pro box with 512MB RAM using a constant generator and a file output (the fastest way to do it), though not super-consistently: 3 out of 4 times it crashed, but once it just gave me an “out of memory” message and no crash… Possibly had something to do with some previous build I may have made at a different size.
The crash occurs a while after the File Output starts processing its input (if “write every build” is selected), but before it writes anything to disk… It also crashes a bit after you press the “write output to disk” button. The timing suggests to me that it’s something right before writing, but no file is even created.
What I was getting at is that it might be a hardware-related issue, and not something necessarily specific to WM, assuming you haven’t run any other memory-intensive apps. I did not mean resident apps; I meant other memory-intensive applications such as rendering programs, etc. In other words, is this a machine that has already gone through a rigorous Prime95/Memtest86/etc. burn-in cycle, or at least been stress-tested on other demanding applications? If not, although Fil has now semi-duplicated the issue (which I am so far unable to do), it may be worth running some diagnostic and/or stress-test utilities like those mentioned above just to be sure it’s not hardware. I don’t mean to suggest WM is surely without fault, only that until others can duplicate the problem reliably, it would be useful at least to eliminate the hardware you’re running it on as a potential point of failure.
I’ve tried running Fil’s test on two machines; both will build 8K terrains, but both crash (or give a “World Machine has encountered a problem and needs to close” message) when I attempt to write the output to disk.
It’s only a guess, but it may be related to the issue that lets some people use 8K terrains with Terragen while others get error messages (not that it helps much, as we never could track down the Terragen problem, if I recall).
Having read the above, I set up for the 8k x 8k. I’d never gone beyond 2k x 2k before, so I thought, interesting, I’ll have a go. Now, to start I had 49 :oops: processes running in the background, and I only have 512MB RAM. So off I went.
So far so good, but then I clicked the OK button and waited for the 3D image to appear on the main screen… but on the way I got this…
Now, I’ve seen this before when processing a 2k x 2k terrain, but it came and went so quickly I couldn’t make out what it was. Anyway.
Up comes the 3D view, which I rotated around OK. Then I wrote the file to the Terragen folder, OK. However, when I clicked OK after the write, it went…
So I closed down WM and opened up Terragen-reg. Everything OK, although a tad long-winded.
Did a quick preview render, but only this happened…
Then this…
Of course this may be a Terragen fault, but if anyone wants to try the .ter, I’ll zip it here.
Well, although eventually the file loaded into Terragen at the correct size, no way would it render. I created another WM terrain, this time using a different Perlin seed. Result: it did not crash, nor did I get the pretty green screen I had before. Again the file loaded into Terragen OK, but then it dawned on me that my maximum Terragen world size preset is 4k x 4k. So I downsized to 4k x 4k and both files rendered OK. Steps back gracefully and disappears into the haze.
Ah! I tend to always output in .ter, so that would explain why I hadn’t hit the error yet. I don’t have resources free to test it right now (rendering an animation), but I will as soon as I can and see if I can duplicate it. It seems reasonably well proven by now, though. Perhaps a problem in the file output routines?
Redsquare, the error you’re getting in Terragen is the “normal” error it gives when your machine is unable to render 8193 terrains, and it is a Terragen-specific thing, nothing to do with the terrain other than it being “oversized”. No one yet knows why some machines can render them and some can’t. It seems to be at least partly resource related, but some 2GB machines can’t do it, while even some 1GB machines can. There does not seem to be a visible pattern yet.
It may be worse with TGA files - I had switched over to reading RAW files as input at one point, I think, when I was splitting my big World Machine world up into parts; you need to avoid TGA in that case.
It’s not hardware related; this machine gets a pretty hard workout most of the time, and it’s been fine. I’m a kernel/services-level C++ developer, so I brought WM up in the debugger one time it crashed:
Unhandled exception at 0x7c910a0a in World Machine.exe: 0xC0000005: Access violation reading location 0xfffffffd.
(this was at the start of the thread, along with the full MS ‘send the error’ business).
The 0xfffffffd location usually means something returned a bad value (such as -1), and then that bad value wasn’t checked before being used for subsequent memory references.
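A contrived example of that failure mode (purely illustrative - nothing to do with WM’s actual source): if some call signals failure by returning -1 and the caller uses the result as a pointer anyway, the faulting address lands right in that 0xfffffff* neighborhood on a 32-bit system.

    #include <cstdio>

    // Stand-in for an API that returns (char*)-1 to signal failure.
    char* lookup(bool fail) {
        return fail ? reinterpret_cast<char*>(-1) : new char[16]();
    }

    int main() {
        char* p = lookup(true);  // failure goes unchecked...
        char c = p[-2];          // ...read at 0xFFFFFFFF - 2 = 0xFFFFFFFD
        std::printf("%d\n", c);  // never reached: access violation
        return 0;
    }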
This may all be a moot point, however - I resized to a 4k x 8k terrain, started a world with only one big time-sucker (erosion), and it’s less than 1/4 of the way through the erosion step after 23 hours.
This is my main modeling machine, so I’ll probably have to kill the job, but I can let it run tonight too. I’m going away on business in September; maybe I’ll try a 4k x 8k run then.
This is with an Athlon 64 3700+ (San Diego core - the big L2 cache).
At least it hasn’t crashed, however. So maybe having a TGA input or output is part of the issue.
This is definitely a memory allocation issue - or more accurately, an out-of-memory issue. Part of the problem is that even on a high-RAM system with lots of swap space, 2GB is the limit of address space for an application under 2000/XP, as you probably know. Between all of the heightfields being allocated, WM’s required memory for standard operations, the memory required to store 3D mesh vertices, and heap fragmentation, a device world using 1539MB of RAM could very easily push the total used close enough to the 2GB limit that malloc() fails - and it doesn’t matter how much physical RAM or virtual swap is available.
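You can watch this happen with a trivial sketch (mine, not WM’s code): keep allocating, and malloc() starts returning NULL somewhere short of 2GB on 32-bit Windows regardless of free RAM or swap; heap fragmentation makes big contiguous requests fail even earlier.

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main() {
        const size_t chunk = 64 * 1024 * 1024;  // 64MB per pretend "heightfield"
        std::vector<void*> blocks;
        void* p;
        // Illustrates the 32-bit limit; on a 64-bit OS this would just keep
        // eating RAM, so don't run it there as-is.
        while ((p = std::malloc(chunk)) != NULL)
            blocks.push_back(p);
        // On 32-bit 2000/XP this reports well under 2048MB, no matter how
        // much physical RAM or pagefile remains free.
        std::printf("got %u MB before malloc() returned NULL\n",
                    (unsigned)(blocks.size() * (chunk >> 20)));
        for (size_t i = 0; i < blocks.size(); ++i)
            std::free(blocks[i]);
        return 0;
    }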
The thread has had various things discussed in it; to clarify, does it also do this with Mem Conservation switched on? In Mem Conservation mode, maximum memory used should be (size of a heightfield) × (number of output nodes at a given depth) × (number of independent paths through the network). This number is usually much, much less than without it, and it should solve the problem.
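To put rough numbers on that formula (my assumptions, not official figures: 4 bytes per heightfield sample, two output nodes at the deepest level, one independent path):

    #include <cstdio>

    int main() {
        // One 8192 x 8192 heightfield at an assumed 4 bytes/sample = 256MB.
        const double hf_mb   = 8192.0 * 8192.0 * 4.0 / (1024.0 * 1024.0);
        const int    outputs = 2;  // output nodes at a given depth (assumed)
        const int    paths   = 1;  // independent paths through the net (assumed)
        // ~512MB worst case with Mem Conservation, versus well over 1.5GB without.
        std::printf("Mem Conservation worst case: ~%.0f MB\n",
                    hf_mb * outputs * paths);
        return 0;
    }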
Mem Conserv is currently a per-session (not persistent) setting, which is probably contributing to the confusion. If you’re still having this problem even with it on, I would definitely be interested in seeing the world file that is causing this; you can email it to remnant@world-machine.com
World Machine is an application that would definitely benefit from the transition to 64-bit Windows. 1.1 should bring some better error handling and detection of memory-wary areas, but it doesn’t fix the fundamental problem - that WM really needs more address space for large nets at high res.