I originally followed Wenda111287's tutorial that involved World Machine. In doing so I accidentally created a large, flat, angled subsection of the terrain. When I tried to erode this, a straight channel was created. So I researched why, and started to think about ways I could fix this. Real erosion in those circumstances is chaotic: very minor perturbations of the state space cause significant long-term differences to emerge.

http://home.comcast.net/~tom_forsyth/papers/cellular_automata_for_physical_modelling.html
I found this paper, and started to formulate ideas about how and why meanders form. It was crucial for the development of what I am about to detail. The only other resource on CA being used to describe erosion was an academic paper from an Italian research team, which had a very complicated model and a greater focus on physical accuracy.

It is my understanding now that WM has everything needed for meanders to form, except the ability for water to flow faster on the exterior of a bend [but no clear way of getting this effect; keep reading]. So I thought I would add momentum to the cells, and that this would fix it. I developed a system where the flow rate between cells was integrated over time, subject to some small exponential decay.

Each cell maintained a continuous, non-negative quantity of water. Each link between von Neumann-adjacent cells (roughly twice the number of cells) carried a continuous flow rate. This flow rate was incremented in proportion to the difference of the total altitudes (terrain height + water depth) of the cells the link joined, and then decreased by a small percentage to ensure stability [this works like a low-pass filter].
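The update rule above can be sketched like this (a minimal NumPy illustration, not the actual Unity code; the function name, `gain`, and the array layout are my own assumptions):

```python
import numpy as np

def update_flows(terrain, water, flow_x, flow_y, gain=0.1, damping=0.02):
    """One flow-rate update on a 2D grid of cells (hypothetical sketch).

    terrain, water : (H, W) arrays of terrain height and water depth.
    flow_x[i, j]   : flow rate on the link from cell (i, j) to (i, j+1).
    flow_y[i, j]   : flow rate on the link from cell (i, j) to (i+1, j).
    """
    total = terrain + water                      # total altitude per cell
    # Integrate: each link's flow changes in proportion to the
    # difference of the total altitudes of the two cells it joins.
    flow_x += gain * (total[:, :-1] - total[:, 1:])
    flow_y += gain * (total[:-1, :] - total[1:, :])
    # Small exponential decay keeps the integrator stable
    # (this is the low-pass-filter-like behavior mentioned above).
    flow_x *= 1.0 - damping
    flow_y *= 1.0 - damping
    return flow_x, flow_y
```

With a single raised cell of water, the links pointing away from it pick up positive flow after one step, which is the qualitative behavior the text describes.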

If a cell had insufficient fluid to cover its outflows, it was completely drained and each neighboring cell received the same percentage of the fluid due to it. If there was no fluid on either cell of a link, the link's flow rate was set to 0. A border of cells one wide was created around the simulation, with their altitude and water depth set independently by a first-order linear extrapolation from the exterior of the simulation proper. (The corner cells are not needed because the von Neumann neighborhood is used.) This prevented pooling on the exterior (naïve implementations of many erosion algorithms encourage lakes to form on the border) and allowed unbiased outflow (that is to say, water that flowed into those cells was removed from the simulation).
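The drain-and-scale rule for an under-filled cell might look like this in isolation (a hypothetical sketch; the name and interface are mine, and the original ran per cell inside the CA update):

```python
def apply_outflows(water_here, outflows, dt=1.0):
    """Move water out of one cell along its outgoing links (sketch).

    outflows : list of non-negative outgoing flow rates for this cell.
    Returns (remaining water, list of amounts actually sent per link).
    """
    demand = sum(outflows) * dt
    if demand <= water_here or demand == 0.0:
        scale = 1.0
    else:
        # Insufficient fluid: drain the cell completely, and give each
        # neighbor the same percentage of the fluid it was due.
        scale = water_here / demand
    sent = [f * dt * scale for f in outflows]
    return water_here - sum(sent), sent
```

Scaling every outflow by the same factor keeps the rule conservative: the cell never goes negative, and the relative proportions between links are preserved.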

http://www.youtube.com/playlist?list=PLXpzskMR1uFuVVz8bUXLcrn3PCiZznVRE
I implemented this in Unity3d and it worked fairly well. It showed basic wave mechanics (interference and diffraction) but it was fundamentally unstable and still expressed a tendency to prefer the interior of a bend. Flow on the exterior of a bend WAS slightly higher (I don't have a video showing this on me, sorry; it does have support for non-level terrain), but the difference was typically much less than 1%. My test case for this was a 3-, 5-, and 7-cell-wide right-angle bend, with even terrain on the bend, flat, sloped (that is to say, planar) terrain leading into and out of the bend, and obstacles on the exterior.

I should see if I can get some videos of that, the ones with terrain are a bit more visually impressive.

Oscillations would build up if the damping coefficient was removed. I initially tried a value (2% damping per time step) that I had empirically found to work well when filtering the I term of a PID controller (for line following and other motion-planning problems) in a children's robotics class I was teaching a couple of years ago. Further experimentation showed that the oscillatory behavior of the fluid simulation was very similar to what I had observed with the robot across a variety of damping coefficients, and 2% damping showed the best response while remaining stable in both systems. It is worth noting that the maximal (dimensionless) error in both cases was about ±100 [and when I altered the gain on the flow rate update, the optimal damping coefficient changed in proportion]. I might investigate this later, but for now I will remember that there exists some rule of thumb for such systems.
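The 2% per-step decay turns the integrator into a "leaky" one, which is why the same rule of thumb applies to both the PID I term and the link flow rates. A toy illustration (the function and parameters are mine, not from either system):

```python
def leaky_integrate(values, gain=1.0, damping=0.02):
    """Leaky integrator: add gain * x each step, then decay by 2%.

    This is the same shape of rule used for the link flow rates and for
    a filtered PID I term. Without the decay the accumulator can grow
    without bound; with it, a constant input settles at a finite value.
    """
    acc = 0.0
    out = []
    for x in values:
        acc = (acc + gain * x) * (1.0 - damping)
        out.append(acc)
    return out
```

For a constant unit input, the accumulator converges to (1 - d)/d = 49 rather than diverging, which is the bounded behavior that keeps both systems stable.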

I decided that the bend interior saw so much flow because, while there was momentum on the flow between cells (not that the fluid itself had inertia or momentum!), there was no mechanism to cause any test unit of volume to continue to propagate in a given direction. Once it had flowed into a cell, it was not biased towards any of the three links out of that cell. I decided that I would need a way for each cell to contain a vector of four continuous quantities describing the amount of water to flow in each direction. These values could then be updated by the links, or some other rule could be implemented to cause the water to spread out (strict following of this rule alone would cause a unit of water to only ever travel along one axis!).
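A toy version of the directional-bin idea might look like this (entirely hypothetical; this design was never fully implemented, and the `keep` parameter and spreading rule are my own invention to show why a spreading rule is needed at all):

```python
import numpy as np

# Each cell stores four continuous quantities: the amount of water due
# to leave in each von Neumann direction (E, N, W, S).
H, W = 4, 4
bins = np.zeros((H, W, 4))  # bins[i, j, d] = water in cell (i, j) moving in direction d

def spread(cell_bins, keep=0.7):
    """Toy spreading rule for one cell's four directional bins.

    A unit mostly keeps its momentum direction but leaks a fraction into
    the two perpendicular directions, so water is not locked onto a
    single axis. The rule conserves the total amount of water.
    """
    leak = (1.0 - keep) / 2.0
    e, n, w, s = cell_bins
    return np.array([
        keep * e + leak * (n + s),
        keep * n + leak * (e + w),
        keep * w + leak * (n + s),
        keep * s + leak * (e + w),
    ])
```

Because each bin gives away exactly what the perpendicular bins receive, the total is conserved while the axis lock-in is broken.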

So I submitted my project proposal having no real idea how I was going to do what I wanted to, but a vague notion that I needed to keep track of portions of fluid moving in each direction, and some way to cause them to spill over into other bins. As I was taking an almost full course load this summer, I was very relieved to finish my project proposal in time. I treated myself to some undirected personal research (this being how I know about WM, why I was teaching children robotics, how I learned about CA, and even how to program). I had come across a term, the Lattice Boltzmann Method (LBM), that I knew produced very pretty pictures of turbulent fluids and used a discrete grid, but I had no idea how it worked. Two hours later, I had discerned that I needed to change my project, and abandoned direct work on the erosion simulator.

The LBM is very similar to my method, but is grounded in statistical mechanics (the branch of physics which uses the limiting, statistical properties of a very large number of particles undergoing interactions in a purely classical way to describe phenomena that are impossible to approach directly, like fluid dynamics and thermodynamics). Boltzmann made huge contributions to this field and was a pretty cool guy (with an *incredible* beard). Historically the LBM evolved out of simulations of discrete particles on a discrete grid with deterministic (or partially deterministic) update rules, which required statistical averaging over the discrete particles to discern the properties of the fluid at each position. The key insight in the LBM was to store the densities of particles which would move between cells and update those continuous values directly. This is directly analogous to my idea that I would need to store the continuous amounts of fluid which would transition in each direction, in order to conserve momentum globally.

http://wiki.palabos.org/_media/hosted_doc:viggen_masters_09.pdf
I later stumbled upon that master's thesis, and it is what really made the LBM clear to me. Unfortunately I only found it after I had started to implement the two-dimensional (D2Q9) LBM in CUDA. I was directly porting the MATLAB code on this page:

http://wiki.palabos.org/numerics:codes
I got about 343 million lattice updates per second, so a grid of 1000 by 1000 can be updated 343 times per second. The MATLAB code claims about 134 thousand lattice updates per second (about 1 million when compiled), while the optimized C++ implementation got around 3.15 million (on an old computer). There are stats elsewhere on their page that destroy my implementation in terms of performance, but use ~64 CPU cores to do so (while I am using a single Fermi-architecture M2070).
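For readers unfamiliar with the method, here is a minimal D2Q9 BGK collide-and-stream step in NumPy. It mirrors the structure of the code being ported, but it is my own simplified sketch (periodic boundaries, no obstacles), not the CUDA or MATLAB implementation:

```python
import numpy as np

W9 = np.array([4/9] + [1/9]*4 + [1/36]*4)          # D2Q9 lattice weights
C9 = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])  # lattice velocities

def equilibrium(rho, ux, uy):
    """Discrete Maxwell-Boltzmann equilibrium distributions."""
    cu = 3.0 * (C9[:, 0, None, None] * ux + C9[:, 1, None, None] * uy)
    usq = 1.5 * (ux * ux + uy * uy)
    return W9[:, None, None] * rho * (1.0 + cu + 0.5 * cu * cu - usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream update; f has shape (9, H, W)."""
    rho = f.sum(axis=0)                              # macroscopic density
    ux = (f * C9[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C9[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau        # relax toward equilibrium
    for i in range(9):                               # streaming: each population
        f[i] = np.roll(f[i], shift=(C9[i, 1], C9[i, 0]), axis=(0, 1))
    return f
```

The nine per-cell populations here are exactly the "continuous amounts of fluid which would transition in each direction" from the earlier idea; collision relaxes them toward local equilibrium and streaming moves them to the neighboring cells.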

http://www.youtube.com/playlist?list=PLXpzskMR1uFv51vp2FVP4KRTec9akvE8F
I tried to improve the performance using a tiled, multi-step-per-kernel implementation that allows tuning the arithmetic intensity by amortizing memory access times. Unfortunately I was dumb about the way I did it and mucked it up: I saw about 25% lower performance. I expect performance to improve once I fix my mistakes. That is what I have planned for today.

http://www.youtube.com/watch?v=drE4yh8CeZA&list=PLXpzskMR1uFv51vp2FVP4KRTec9akvE8F&index=6
That video in particular shows the increased flow around a bend. The plane waves you see coming off of the obstacles are because I initialized the fluid to moving right, as if it was in an open channel. This is incorrect, but it ultimately doesn't matter too much. The Reynolds number basically controls the “viscosity” of the simulation [or rather what the lay person would think of as viscosity]. The LBM is only stable for a range of Reynolds numbers. It also loses some physical accuracy for flow through very tiny openings (less than 5 lattice points).

There are a few ways you can tune the LBM: grid resolution (changing the scale effectively changes the behavior of the entire simulation!), Reynolds number, speed of sound, and “relaxation time” (which basically determines how much time passes per time step). The LBM is designed to be stable at mesoscopic scales (millimeters and centimeters). It is not usually used for simulating meters or kilometers, but thankfully we don't need real physical accuracy. We just have to be able to fake it.
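These knobs are not independent. With the usual lattice units (cell spacing and time step both 1, speed of sound squared 1/3), choosing a Reynolds number, a characteristic lattice velocity, and a resolution fixes the relaxation time. A small helper (the function name is mine) showing the standard D2Q9 relations:

```python
def lbm_parameters(reynolds, u_lb, n_points):
    """Derive lattice viscosity and BGK relaxation time from the tuning knobs.

    reynolds : target Reynolds number of the flow.
    u_lb     : characteristic velocity in lattice units (should stay well
               below the lattice speed of sound, ~0.577).
    n_points : number of lattice points across the characteristic length.
    """
    nu = u_lb * n_points / reynolds      # lattice kinematic viscosity
    tau = 3.0 * nu + 0.5                 # BGK relaxation time; stability needs tau > 0.5
    return nu, tau
```

As Re grows at fixed resolution, tau approaches the stability limit of 0.5, which is one way of seeing why the method is only stable for a range of Reynolds numbers.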

You can also model the effects of gravity on the fluids. I am going to investigate altering this technique so that each two-dimensional cell receives a gravity force in the direction of, and proportional to, the gradient of the terrain at that location. It might be necessary to upsample the terrain for this purpose, and to simulate multiple fluid update cycles per erosion update. We will see; I still don't know exactly how this will work. Implementing gravity [probably from Sukop's book on LBM] is what I will do immediately after fixing the improved (work-redundant) method so that it is actually an improvement.
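A sketch of the terrain-gradient force described above (hypothetical; the forcing scheme that would consume this inside the LBM update is not shown, and the function name is mine):

```python
import numpy as np

def terrain_force(terrain, g=9.81):
    """Per-cell body force pointing down the terrain gradient (sketch).

    terrain : (H, W) array of terrain heights.
    Returns (fx, fy): force components per cell, proportional to the
    local slope, pushing water downhill.
    """
    gy, gx = np.gradient(terrain)        # slope along each grid axis
    return -g * gx, -g * gy              # force opposes the uphill direction
```

Each cell of the 2D fluid simulation would then be driven sideways by this force, faking the effect of water running down a 3D surface.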

I am also looking at shear stress and strain measurements, discrete element modeling, and immersed boundary methods. I have not had a class on statistical mechanics or CFD yet, so I am really just poking around in the dark. However, I will continue to learn more about CFD, LBM or not, and I have hopes of applying it to this field.