Effects in demos that you don't know how they work
category: code [glöplog]
rasmus: Rename the .dat file to .avi, open it in a media player and see what happens. 8)
Is this raymarched?
http://www.pouet.net/prod.php?which=55557
las: No, that's rasterized polygons.
I guess instancing or sth like that?
powly, that was pretty much not what was discussed. gpu->gpu textures are fast and cpu->gpu are not. the latter was in question.
But you asked how they are done, it's by rendering directly to textures. It's just another shader for those textures to move the particles.
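To make that concrete, here's a rough CPU-side sketch of what such an update pass computes per particle; on the GPU the loop body runs as a fragment shader writing into the next position texture, and every name here is made up for illustration:
Code:
/* Rough CPU-side sketch of what a texture-based particle update pass
 * computes. On the GPU each particle is one texel of a float texture and
 * this loop body runs as a fragment shader writing into the next texture;
 * the names here are made up for illustration. */
typedef struct { float x, y, z; } vec3;

void update_particles(const vec3 *pos_in, const vec3 *vel_in,
                      vec3 *pos_out, vec3 *vel_out,
                      int count, float dt)
{
    for (int i = 0; i < count; ++i) {
        vec3 v = vel_in[i];
        v.y -= 9.81f * dt;                    /* some force, e.g. gravity */
        pos_out[i].x = pos_in[i].x + v.x * dt;
        pos_out[i].y = pos_in[i].y + v.y * dt;
        pos_out[i].z = pos_in[i].z + v.z * dt;
        vel_out[i] = v;
    }
    /* Swap ("ping-pong") the position textures and use the fresh one to
     * place the point sprites for rendering. */
}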
ok, so they are raytraced dots from what i can tell
No, they are point sprites.
rasmus: http://directtovideo.wordpress.com/2009/10/06/a-thoroughly-modern-particle-system/
pommak, ok i just scanned the fs files and saw the word "march" :-)
Does anyone have more detailed information on this post I made a long time ago:
http://pouet.net/topic.php?which=7415
rasmus: wrong shader :) particles aren't marched.
About voxels: There is a brief description of the voxel technique in my diary article about Luminagia. In particular, read the entry for January 13, 2008.
nice Blueberry, gotta read it
Well then help me, please! :) I recently sat down to do some voxel effects and so far I've rehashed the twister and the landscape. I've even applied a polar map transform to the twister, and that yielded a neat torus-like twisty like the one seen in 'live evil'.
But then my old arch nemesis: the tunnel and the ball. I tried mapping a landscape along the Y-axis of a buffer and doing a polar transform, for a tunnel. This came close but still looked a bit off. Any pointers?
Then the ball. I recently rewatched non-stop ibiza and it's obvious (is it?) that there are some objects in there that are essentially a heightmap wrapped around a sphere. Without any transforms I figured that a way to do this might be the following:
Iterate over each angle of a full circle (360) and for each one cast a ray outwards from the middle of the screen and (at least for starters) the middle of the heightmap, into the correct corner (like a good radial blur). Then, for each step along the ray, do the project-height-and-render-spans-front-to-back thing. Two serious issues with this. Firstly, how to project/scale the sampled height. I guess an option would be to go the twister route and imagine the ray is a 2D half-circle slice (so a 0-1-0 sine curve, 180 deg), but that might need another thought. Secondly, it's obviously not very memory efficient to read and write pixels that way, so I'm guessing it won't really fly at full framerate on an Amiga? :)
So in the end this is probably also solved by a map transform of sorts. I just fail to see how/where right now :)
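(For reference, the "project-height-and-render-spans-front-to-back thing" I mean is roughly this -- a minimal C sketch for one ray/column, with made-up names and numbers, and none of the fixed-point or c2p tricks a real Amiga version would need:)
Code:
/* Minimal sketch of classic front-to-back span rendering for one ray.
 * Names and numbers are made up; a real Amiga version would be fixed point
 * with chunky-to-planar output. */
#include <stdint.h>

#define SCREEN_W 320
#define SCREEN_H 200

extern uint8_t heightmap[256 * 256];   /* 256x256, wrapping */
extern uint8_t colormap[256 * 256];
extern uint8_t screen[SCREEN_W * SCREEN_H];

void render_ray(int column, float cam_x, float cam_y, float cam_h,
                float dir_x, float dir_y, float horizon, float height_scale)
{
    int max_drawn = SCREEN_H;                 /* lowest y drawn so far */
    for (float dist = 1.0f; dist < 300.0f; dist += 1.0f) {
        int mx = (int)(cam_x + dir_x * dist) & 255;
        int my = (int)(cam_y + dir_y * dist) & 255;
        float h = heightmap[my * 256 + mx];
        /* project: nearer samples give taller spans, higher terrain rises */
        int y = (int)(horizon + (cam_h - h) * height_scale / dist);
        if (y < 0) y = 0;
        /* front-to-back: only paint the part not already covered */
        for (int sy = y; sy < max_drawn; ++sy)
            screen[sy * SCREEN_W + column] = colormap[my * 256 + mx];
        if (y < max_drawn) max_drawn = y;
        if (max_drawn == 0) break;            /* column fully covered */
    }
}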
(also shit I didn't see the link to blueberry's Luminagia doc... kind of confirms a few things but,
Quote:
For instance, to produce a voxel blob, shoot the rays out in all directions from one point, scale the height by a sine function (half a period) and map polar-wrapped.
so hmm, if I were to, for each angle of the sphere, do what I said above (and what the doc says) -- but instead of directly rendering the actual voxels radially, I render them vertically to a buffer and then do a polar transform blit with that buffer?)
I guess the latter must be roughly it. In fact I figured one would get a shitload of ugly overdraw issues when drawing the fan spans directly on screen. Okay, I'll test this approach.
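Roughly what I have in mind for the 2-pass version, as a sketch -- the projection and the scale factors are pure guesses, which is exactly the part I'm unsure about:
Code:
/* Sketch of the 2-pass idea: pass 1 renders each angular slice as one
 * column of an intermediate buffer (heights scaled by half a sine period,
 * as in the quoted blob trick), pass 2 wraps that buffer onto the screen
 * with a precomputed polar map. Names, sizes and the projection are guesses. */
#include <math.h>
#include <stdint.h>

#define ANGLES   256              /* columns in the intermediate buffer */
#define RADIUS   128              /* rows in the intermediate buffer    */
#define SCREEN_W 320
#define SCREEN_H 200

extern uint8_t  heightmap[256 * 256];
extern uint8_t  colormap[256 * 256];
/* polar_map[screen pixel] = angle * RADIUS + radius, precomputed once */
extern uint16_t polar_map[SCREEN_W * SCREEN_H];

static uint8_t slice_buf[ANGLES * RADIUS];
static uint8_t screen[SCREEN_W * SCREEN_H];

void render_two_pass(float cx, float cy)
{
    const float PI = 3.14159265f;

    /* pass 1: one heightmap ray per fan angle, drawn as a vertical column */
    for (int a = 0; a < ANGLES; ++a) {
        float dx = cosf(a * 2.0f * PI / ANGLES);
        float dy = sinf(a * 2.0f * PI / ANGLES);
        int max_drawn = RADIUS;               /* lowest row drawn so far */
        for (int r = 1; r < RADIUS; ++r) {
            int mx = (int)(cx + dx * r) & 255;
            int my = (int)(cy + dy * r) & 255;
            /* half a sine period so the heights taper towards both ends */
            float h = heightmap[my * 256 + mx] * sinf((float)r / RADIUS * PI);
            /* one plausible projection -- the scaling question from above */
            int row = (int)(RADIUS / 2 + (64.0f - h) * 48.0f / r);
            if (row < 0) row = 0;
            for (int y = row; y < max_drawn; ++y)
                slice_buf[a * RADIUS + y] = colormap[my * 256 + mx];
            if (row < max_drawn) max_drawn = row;
            if (max_drawn == 0) break;        /* this column is full */
        }
    }

    /* pass 2: polar-wrap the slice buffer onto the screen */
    for (int i = 0; i < SCREEN_W * SCREEN_H; ++i)
        screen[i] = slice_buf[polar_map[i]];
}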
You can do it in 1-pass as well. You accomplish that by, for each "slice", having a precomputed list of (screen offset, voxel height) pairs, sorted on the voxel height values.
Thus, when you paint all the (screen offset) locations for a slice, you effectively paint a ray on-screen that starts in the screen-center and works its way outward.
What's good about this approach is that you paint each pixel exactly once. (With the 2-pass method, most of the information that you are painting close to the center of the image will never be used.) On the other hand, the per-pixel work is more convoluted, and you cannot easily gain performance by scaling back rendering quality in the same way that you can in the 2-pass approach.
I don't know which method is faster for the same quality level.
I think that just might be smart. Doing the second pass isn't that cheap either: there's filtering (really necessary in 640x480 or higher) and there's poor locality when traversing the transform map pixel by pixel (so you'd end up with 8x8 tiling or something to that effect).
So, if I read this right, the list consists of a projected on-screen height transformed to an actual coordinate, for each height, and this for each angle? So you'd end up with new_height = table[angle][map_height] and use a line algorithm to draw the span from the previous to the new height?
I'm not reading it right. But I'll "think" out loud: I cast a ray for a certain angle (or slice), start traversing it, and at that point I know 3 things: the fan angle (or the direction vector of the ray, both in-map and on-screen), the sampled height, and how far along the ray I am (which I figure has an effect on the projection/scaling). Now I guess I am to project said height by the appropriate sine curve (= voxel height?), and that plus the fan angle I'm currently processing gives me a screen offset to draw the span to?
During precomputation, you precompute the inverse of a tunnel table.
That is, you build a bunch of lists of target pixels. The interpretation of list number X is, "if I paint all the pixels in list X, then I draw all the pixels which lie at an angle of X degrees as measured from the screen origin". In addition to this, you should also store the distance for each pixel - and sort them in ascending height within each list.
Then, render time.
At the beginning of each ray, you choose which list of target pixels you should be using. This is determined entirely by the fan angle.
Then, when you are progressing along a ray...
You have the current fan angle and how far along the ray that you have travelled so far.
In addition to this, you also track how far along you currently are in the list of target pixels.
So you sample from the heightmap (using fan angle + distance to pick the location) and apply the sine curve. The value that you have is the height value.
Now, it's time to paint zero or more pixels. Check the height value against the next target pixel's height in the list. If the target pixel's height value is lower -- paint at that target pixel's screen offset, advance in list, and redo the test. Keep on going until you've reached a target pixel whose height value is too high. At that point, it is time to advance along the ray and perform a new heightmap sample.
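My reading of that, as a rough C sketch -- all names and sizes are my own, the choice of storing the radial distance as the per-pixel threshold is a guess, and a real implementation would be fixed point and unrolled:
Code:
/* Sketch of the 1-pass method described above: precompute the inverse of a
 * tunnel table (per-angle lists of target pixels sorted by threshold), then
 * march each ray once and paint pixels as their thresholds are cleared. */
#include <math.h>
#include <stdint.h>
#include <stdlib.h>

#define ANGLES   256
#define RAY_LEN  128
#define SCREEN_W 320
#define SCREEN_H 200
#define MAX_PIX  512              /* max target pixels per angle list */

typedef struct {
    uint32_t screen_offset;       /* where this pixel lives on screen     */
    uint8_t  threshold;           /* "height" to clear before painting it */
} TargetPixel;

static TargetPixel target_list[ANGLES][MAX_PIX];
static int         target_count[ANGLES];

static int cmp_threshold(const void *a, const void *b)
{
    return ((const TargetPixel *)a)->threshold
         - ((const TargetPixel *)b)->threshold;
}

/* "Inverse of a tunnel table": bucket every screen pixel into the list of
 * the fan angle it lies on, then sort each list by ascending threshold. */
void precompute_lists(void)
{
    const float PI = 3.14159265f;
    for (int y = 0; y < SCREEN_H; ++y)
        for (int x = 0; x < SCREEN_W; ++x) {
            float dx = x - SCREEN_W / 2.0f, dy = y - SCREEN_H / 2.0f;
            int a = (int)(atan2f(dy, dx) / (2.0f * PI) * ANGLES + ANGLES) % ANGLES;
            int rho = (int)sqrtf(dx * dx + dy * dy);
            if (rho > 255 || target_count[a] >= MAX_PIX) continue;
            TargetPixel *p = &target_list[a][target_count[a]++];
            p->screen_offset = (uint32_t)(y * SCREEN_W + x);
            p->threshold = (uint8_t)rho;   /* guess: radial distance as threshold */
        }
    for (int a = 0; a < ANGLES; ++a)
        qsort(target_list[a], target_count[a], sizeof(TargetPixel), cmp_threshold);
}

extern uint8_t heightmap[256 * 256];
extern uint8_t colormap[256 * 256];
extern uint8_t sine_scale[RAY_LEN];              /* half a sine period, 0..255 */
extern int     ray_dx[ANGLES], ray_dy[ANGLES];   /* per-angle map steps        */
uint8_t screen[SCREEN_W * SCREEN_H];

void render_one_pass(int map_x, int map_y)
{
    for (int a = 0; a < ANGLES; ++a) {
        const TargetPixel *list = target_list[a];
        int n = target_count[a];
        int next = 0;                      /* how far along the list we are */
        for (int r = 0; r < RAY_LEN && next < n; ++r) {
            /* sample the heightmap along this fan angle's ray, apply sine */
            int mx = (map_x + ray_dx[a] * r) & 255;
            int my = (map_y + ray_dy[a] * r) & 255;
            int h = (heightmap[my * 256 + mx] * sine_scale[r]) >> 8;
            uint8_t c = colormap[my * 256 + mx];
            /* paint every target pixel whose threshold is now cleared,
             * then go back to advancing along the ray */
            while (next < n && list[next].threshold < h) {
                screen[list[next].screen_offset] = c;
                ++next;
            }
        }
    }
}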
I WANNA KNOW HOW RAYMARCHING WURKZZZZ
Awesome Kalms, thank you. Can't be much clearer than this.
and ferris, cram a sock in it and get back to rm'ing cubes with spherical holes in them within the comfort of a pixelshader that solves all actual work for you :)
(i'll get back to d3d as soon as this runs, for good measure)