So far I’ve just messed around with filters whose inputs and outputs were the same size. I’m now attempting to satisfy my own feature request for a general horizontal coordinate transform device. Supplying an output over a requested worldspace extent will require input data from very different worldspace extents.
As one specific example, let the device output a 45-degree rotation of its single input. To fill an N x N output grid without gaps, it needs access to a 1.414N x 1.414N input grid. As a second example, let the device output an M:1 compression along the X axis. To fill an N x N output grid without gaps, it needs access to an N*M x N input grid.
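To make the arithmetic concrete, here’s a toy sketch (plain C++, nothing from the actual PDK; `Rect` and `requiredInputExtent` are names I made up for illustration) that bounds the input extent needed to cover an output extent. For an affine output-to-input mapping, transforming the four corners is sufficient:

```cpp
// Toy sketch, not PDK code: compute the axis-aligned input extent needed
// to fill an output extent, given the output-to-input coordinate mapping.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Rect { double x0, y0, x1, y1; };

// For an affine mapping, the bounding box of the four mapped corners
// encloses everything the output extent will sample from.
template <typename InverseMap>
Rect requiredInputExtent(const Rect& out, InverseMap inv) {
    double xs[4], ys[4];
    const double cx[4] = { out.x0, out.x1, out.x0, out.x1 };
    const double cy[4] = { out.y0, out.y0, out.y1, out.y1 };
    for (int i = 0; i < 4; ++i)
        inv(cx[i], cy[i], xs[i], ys[i]);
    return { *std::min_element(xs, xs + 4), *std::min_element(ys, ys + 4),
             *std::max_element(xs, xs + 4), *std::max_element(ys, ys + 4) };
}

int main() {
    const double pi = 3.14159265358979323846;
    const double a = 0.25 * pi;            // the 45-degree rotation case
    Rect out { -0.5, -0.5, 0.5, 0.5 };     // N x N output with N = 1
    Rect in = requiredInputExtent(out,
        [&](double x, double y, double& u, double& v) {
            u =  std::cos(a) * x + std::sin(a) * y;   // inverse rotation
            v = -std::sin(a) * x + std::cos(a) * y;
        });
    std::printf("need %.3fN x %.3fN of input\n",      // prints ~1.414 x 1.414
                in.x1 - in.x0, in.y1 - in.y0);
}
```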
I don’t know how to do that, and now I realize there’s a lot about the WM architecture I don’t understand. So the following questions may just be “wrong questions”…
Q1) I see a few references in the PDK to “proxy builds”. Is this where the desired render extent gets expanded into specific build extents for each device? What’s the sequence of events relative to proxy builds and Activate calls?
Q2) Can BuildContexts be modified (or a new proxy build be triggered) in response to a parameter change? (Assuming: yes)
Q3) Ignoring output tiling for the moment, will a device ever get multiple “fragmentary” Activate calls on a single world build, to fill in various subregions of its output? Or, can a device ever call RetrieveData multiple times on the same port to get different input packets on a single Activate call? (Assuming: no)
Q4) If a device needs input from a known non-square shape, can it call for multiple input packets that cover that shape more efficiently than the single enclosing square of input? (Assuming: no)
Q5) If it’s possible to achieve my objective, are there any code samples to look at? Certainly none of the simple examples in the PDK do this. I’ve looked at howardzzh’s Downsizer example from 2007. While that generates an output smaller than the input, there’s nothing in that code that calls for any particular input (or output) size.
None of the above is actually possible currently within the WM framework. (Q1-Q5 are all no, basically.)
The reason:
All data flow is forward-directed in WM; generators create their outputs from their parameters and world context without regard for whatever filters are applied further down the line. This means that a filter requiring additional spatial context (such as a rotation device) is out of luck.
There are many good reasons for this, both technical and in terms of keeping the sequencing of the graph understandable for the user. Fundamentally, though, it means that filters in an effects chain that themselves manipulate space and/or coordinate systems (flipping, rotation, skewing, distortion, etc.) are problematic. This is the reason the oft-requested rotation device doesn’t exist in WM.
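To illustrate the constraint with a stripped-down sketch (hypothetical types only, nothing resembling the real engine internals): in a forward/push model, each device receives whatever extent was already built upstream and has no channel to ask for more.

```cpp
// Hypothetical push-style pipeline, for illustration only.
#include <cstdio>

struct Extent { double w, h; };
struct Packet { Extent extent; /* heightfield data would live here */ };

// A generator builds exactly the extent the world build requested...
Packet generate(Extent requested) { return Packet{ requested }; }

// ...and a downstream filter can only consume what already exists.
// A 45-degree rotation needs a ~1.414x larger packet, but nothing here
// lets the filter push that requirement back upstream.
Packet rotate45(const Packet& in) {
    std::printf("have %.0f x %.0f, need %.0f x %.0f -> gaps in the corners\n",
                in.extent.w, in.extent.h,
                in.extent.w * 1.414, in.extent.h * 1.414);
    return Packet{ in.extent };   // corner regions are left unfilled
}

int main() {
    Packet p = generate(Extent{ 512, 512 });  // build requests 512 x 512
    rotate45(p);
}
```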
Given the current design, the proper place for space transforms (distortion, flip, rotate, etc.) is actually on the input side of the generators; manipulate space the way you want BEFORE invoking a generator, and you can produce exactly the needed results. Funnily enough, this is an in-progress work item for the next dev build…
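As a rough illustration of the input-side approach (a standalone sketch only — the generator here is a stand-in function of world coordinates, not WM’s actual generator interface): inverse-transform each sample coordinate before evaluating the generator, and the generator naturally fills whatever region the warp lands on.

```cpp
// Standalone sketch of an input-side space transform, not PDK code:
// warp the sample coordinate BEFORE evaluating the generator.
#include <cmath>
#include <cstdio>

// Stand-in generator: any function of world coordinates works here.
double generator(double x, double y) {
    return 0.5 + 0.5 * std::sin(3.0 * x) * std::cos(3.0 * y);
}

int main() {
    const int N = 4;                              // tiny grid for demo
    const double pi = 3.14159265358979323846;
    const double a = 0.25 * pi;                   // 45-degree rotation
    for (int j = 0; j < N; ++j) {
        for (int i = 0; i < N; ++i) {
            double x = (i + 0.5) / N - 0.5;       // output-space coordinate
            double y = (j + 0.5) / N - 0.5;
            // Inverse-rotate the coordinate, then sample the generator;
            // no oversized input packet is ever needed.
            double u =  std::cos(a) * x + std::sin(a) * y;
            double v = -std::sin(a) * x + std::cos(a) * y;
            std::printf("%.2f ", generator(u, v));
        }
        std::printf("\n");
    }
}
```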
Regarding proxy packets:
Proxy builds are indeed a new ability of the framework that will lend itself to some useful things in the future; right now their main purpose is to allow “pretend builds” that discover how the chain of devices plans to handle packet resolutions in the world. In the future, they might indeed serve at least somewhat like you’re surmising. But this is not currently implemented, and even when it is, it will still not work as well for space transforms as simply doing them on the input side.
Obviously, I will be looking forward to seeing what you have in mind there.
At least one of the nodal compositing systems I work with has the ability to propagate “render extents” back upstream for devices that support it. This serves both as a run-time optimization (no need to render what won’t be consumed downstream) and as an auto-conforming capability for both aspect ratio and resolution.
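In sketch form the idea is something like this (a hypothetical node interface, not WM’s API and not that compositor’s either): each node translates the extent requested of its output into requests on its inputs, so the region of interest flows upstream before any data is computed.

```cpp
// Hypothetical pull-style extent propagation, for illustration only.
#include <cstdio>

struct Rect { double x0, y0, x1, y1; };

struct Node {
    // Map the extent requested of this node's output onto its input.
    virtual Rect inputExtentFor(const Rect& requested) const = 0;
    virtual ~Node() = default;
};

// A 2:1 compression along X must read twice the requested width.
struct CompressX2 : Node {
    Rect inputExtentFor(const Rect& r) const override {
        double cx = 0.5 * (r.x0 + r.x1);
        double w  = r.x1 - r.x0;                 // requested width
        return { cx - w, r.y0, cx + w, r.y1 };   // 2x width, centered
    }
};

int main() {
    CompressX2 node;
    Rect want { 0, 0, 100, 100 };             // downstream asks for 100 x 100
    Rect need = node.inputExtentFor(want);    // upstream must build 200 x 100
    std::printf("request %g x %g -> input %g x %g\n",
                want.x1 - want.x0, want.y1 - want.y0,
                need.x1 - need.x0, need.y1 - need.y0);
}
```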
BTW, I should mention the application that prompted all of this. I want to write a macro to generate horst/graben terrain. The uplifted/dropped blocks typically have a long axis perpendicular to the geographic spreading axis. To generate randomish blobs with a specifically oriented long axis, I need a non-uniform scale with an arbitrary semi-major axis, or an axis-aligned non-uniform scale followed by a general rotation.
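For concreteness, the transform I’m after composes as rotate(theta) * scale(sx, sy) * rotate(-theta); here’s a tiny standalone sketch (the Mat2 helpers are made up for illustration):

```cpp
// Standalone sketch: anisotropic scale with its semi-major axis at angle
// theta, built as rotate(theta) * scale(sx, sy) * rotate(-theta).
#include <cmath>
#include <cstdio>

struct Mat2 { double a, b, c, d; };   // row-major 2x2 matrix

Mat2 mul(const Mat2& m, const Mat2& n) {
    return { m.a * n.a + m.b * n.c, m.a * n.b + m.b * n.d,
             m.c * n.a + m.d * n.c, m.c * n.b + m.d * n.d };
}

Mat2 rotation(double t) {
    return { std::cos(t), -std::sin(t), std::sin(t), std::cos(t) };
}

Mat2 orientedScale(double sx, double sy, double theta) {
    return mul(mul(rotation(theta), Mat2{ sx, 0, 0, sy }), rotation(-theta));
}

int main() {
    const double pi = 3.14159265358979323846;
    // Stretch 4:1 along an axis 30 degrees from east, elongating blobs
    // perpendicular to a (hypothetical) spreading axis.
    Mat2 m = orientedScale(4.0, 1.0, 30.0 * pi / 180.0);
    std::printf("[%6.3f %6.3f]\n[%6.3f %6.3f]\n", m.a, m.b, m.c, m.d);
}
```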
It would definitely be useful to have the flexibility for each device in the chain to alter its evaluation order (so that a file output could, as in your example, modify the render-extent inputs at the start of the device chain before the chain begins processing). In limited testing, though, I found this ability often led to harder-to-understand graphs, since the evaluation order was no longer strictly left-to-right.
Anyways,
Have you considered using the Scatter device to create your graben/horst terrain? It is very convenient for creating terrain with an aligned axis; it’s likely you can do what you want without creating a custom device.
Check out the example file “New Examples for Dev Channel\Localspace Custom Fractal” for an example of using the scatter device to create an axis aligned terrain.