
RLE acceleration hardware for nextgen graphics ?! ;) :)

Discussion in 'Nvidia' started by Skybuck Flying, May 21, 2010.

  1. Hello,

    Last time I saw/heared/read John Carmack (the famous programmer of Doom and
    Quake) talking about nextgen graphics rendering technology he showed a demo
    of "voxel/volume" based rendering.

    Such voxel/volume-based rendering would probably need special hardware
    to quickly compress and decompress voxels/volumes into, for example, RLE form.

    Thus I expect that if volume rendering is to become a reality for games, new
    hardware will need to be designed ?! ;) =D

    Though suddenly I do remember "octrees and all of that stuff"... maybe
    that's too difficult/complex to do in hardware... and maybe RLE is much
    easier to do...
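
    The RLE idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (not any real GPU API), assuming one value per voxel:

```python
# Minimal run-length encoding (RLE) sketch for one row of voxels.
# A real volume codec would be far more involved; this only illustrates
# why mostly-empty voxel data compresses so well.

def rle_encode(row):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 7, 7, 0, 0]    # a mostly-empty voxel row
runs = rle_encode(row)            # [(0, 4), (7, 2), (0, 2)]
assert rle_decode(runs) == row    # lossless round trip
```

    Empty space dominates typical volumes, which is exactly where run-length schemes pay off.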

    Questions left open are:

    1. Do the volumes need to be rotated ?

    2. What other possibilities are there for hardware acceleration of
    "volume-rendering"-based graphics ?!?

    3. Are NVIDIA and/or ATI/AMD heading in the wrong direction by focusing too
    much on "programmable hardware" / "parallel software", and should they be
    concentrating more on working together with John Carmack to try and bring
    the next graphics technology to reality ?!?

    4. Are they already secretly working with John Carmack... or have they
    somehow shitted on his face ? :):):)

    I think working together with the good man would be a smart idea ! ;) :)

    Skybuck =D
    Skybuck Flying, May 21, 2010

  2. I'm sure they do: either because of changes in world state (e.g. the
    person is wheeling around) or changes in viewpoint (e.g. I'm turning
    to look left).

    Paul E. Black, May 21, 2010

  3. I am not so sure about that:

    Two possibilities to not have to rotate:

    1. Camera view changes... instead of rotating the entire world, rotate the
    camera around the static objects. And cast rays from the camera into the
    scene.

    2. Instead of rotating objects, keep track of their rotations, and when
    calculations need to be done, like rays entering into the object... try to
    use an imaginary sphere around the object... when the "rays" enter the
    sphere, rotate the ray around the sphere... and then let the ray continue
    into the object as normal.

    These two solutions would/could probably solve the rotation problem.

    Therefore, assuming the volume is accessible at x,y,z from any direction
    vector, no rotations would be needed except for the rays going into the
    volume. Which could be a lot fewer rotations than rotating entire volumes.
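
    The second idea, rotating the ray instead of the volume, can be sketched like this. The rotation matrix and all names are made up for the example; the key point is that for a pure rotation the inverse is just the transpose, so transforming a ray into the object's local grid is cheap:

```python
import math

# Sketch: instead of rotating a whole volume, rotate each incoming ray into
# the volume's local space using the inverse (transpose) of the object's
# rotation matrix. Illustrative helper functions, not an existing API.

def rot_z(angle):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transpose(m):
    return [list(col) for col in zip(*m)]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Object rotated 90 degrees about z; a ray travelling along world +x
# becomes a ray along -y in the object's local grid coordinates.
obj_rot = rot_z(math.pi / 2)
world_dir = [1.0, 0.0, 0.0]
local_dir = apply(transpose(obj_rot), world_dir)   # approx [0, -1, 0]
```

    So only ray origins and directions get rotated, never the voxel data itself.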

    Ultimately graphics cards would need some API to retrieve a voxel like so:

    UncompressVoxel( VolumeObject, VoxelX, VoxelY, VoxelZ );

    Then such an API could be used by ray-tracers to traverse the volume.

    Even better would be ray-tracers which can trace/traverse a ray through a
    volume and detect for example a collision or so:

    DetectVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ );

    Which could be useful for "collision detection".

    Also for drawing the scene, a slightly extended version:

    ReturnVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ, VoxelColor );

    Which would also return the voxel's color...

    In case voxels have/need other properties, then it could also return the
    entire voxel like so:

    ReturnVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ, VoxelData );

    So with these APIs the hardware could do "ray traversal" entirely in
    hardware/memory... and uncompress voxels as necessary.
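
    A software sketch of what such a collision call might do internally. This uses the well-known Amanatides & Woo grid-traversal technique; the set-based volume, grid size, and function name are illustrative only, not a real graphics-card API:

```python
import math

# Sketch of the DetectVoxelCollision idea: march a ray through a dense 3D
# grid and return the first occupied voxel it hits, using the classic
# Amanatides & Woo traversal (step one voxel boundary at a time).

def detect_voxel_collision(volume, size, origin, direction):
    """volume: set of (x, y, z) occupied cells; size: grid extent per axis."""
    pos = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for i, d in enumerate(direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(origin[i]) + 1 - origin[i]) / d)
            t_delta.append(1 / d)
        elif d < 0:
            step.append(-1)
            t_max.append((origin[i] - math.floor(origin[i])) / -d)
            t_delta.append(-1 / d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while all(0 <= pos[i] < size for i in range(3)):
        if tuple(pos) in volume:
            return tuple(pos)          # first occupied voxel along the ray
        axis = t_max.index(min(t_max)) # advance across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None                        # ray left the grid without a hit

grid = {(5, 3, 3)}                     # one occupied voxel
hit = detect_voxel_collision(grid, 8, (0.5, 3.5, 3.5), (1.0, 0.0, 0.0))
```

    Hardware doing this in-memory, decompressing voxels only as the ray touches them, is essentially what the post asks for.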

    Skybuck Flying, May 21, 2010
  4. again I advertise the GigaVoxels paper...
    Nicolas Bonneel, May 21, 2010
  5. The word compress is found 1 time.

    The word compression is found 0 times.

    How does it compress volumes ?

    Can it compress for example a "(hollow) hull" to mere kilobytes ?

    Skybuck Flying, May 22, 2010
  6. Also the API I mentioned is not only for gpu/gpgpu...

    The CPU might also need modest amounts of information from volumes/voxels.

    The document at the link seems to be mostly about "rendering"... and not so
    much about "retrieving information from volumes".

    The API I mentioned would use the GPU's capabilities to return information
    to the CPU.

    Skybuck Flying, May 22, 2010
  7. Understanding that requires more than pressing F3 to search for
    words in the document.
    They also give a good review of the previous work, which should allow you
    to find all the info you want, including a STAR report [EHK*06].

    Finally, storing hollow closed surfaces as volumetric data instead of
    polygons or parametric surfaces is not that clever (except if you're
    dealing with implicit surfaces, simulation, etc.).
    Nicolas Bonneel, May 22, 2010
  8. You didn't mention any API... did you ?

    Your whole post was about volume rendering... wasn't it ?
    This paper deals with both the acceleration structure (octree), the way
    to use and update it efficiently on the GPU (during camera motion for
    example, if the goal is rendering), and the rendering itself (ray
    marching and filtering).

    GPU-CPU transfers are usually to be avoided. What kind of information do
    you want to transfer ? The final image (-> then it's volume rendering) ?
    "Some" results of "some" computations ?... it's vague.
    Nicolas Bonneel, May 22, 2010
  9. I looked through it... it seems to be about some kind of "block" compression.

    Quite complex too... with nodes and such.
    Polygons need to be rendered/pixelated with complex formulas and routines.

    Voxels can be ray-traced, and massively in parallel too ?!?

    Like John Carmack said: when the polygons become that small it doesn't make
    much sense to use polygons anymore...

    Points are not an option, those are too small.

    Voxels seem OK, they lie in a grid... so they're more like boxes.

    Skybuck Flying, May 24, 2010
  10. See follow-up post.
    No, not really, it's about next-gen hardware to speed up the use of
    "volumes/voxels-based technology".

    Rendering is just a part of it.
    As long as it renders one or two objects that's not very impressive...

    It needs to render entire scenes... and then the scenes need to have physics
    as well.
    The compressed voxels/volumes, and just some coordinates for lines.
    That perhaps too, after the rendering.
    "Compression/Decompression" computations.

    Voxels/volumes can be compressed well, but need to be decompressed to do
    computations on them...

    I don't think CPU's are suited to handle that kind of work...

    Nor do graphics cards seem really suited for it...

    Perhaps new technology needs to be created which focuses on decompressing
    the volumes very rapidly to allow computations on them.

    The computations could be done inside the new technology as well so that the
    big volumes don't have to be transferred.

    The compressed volumes could stay inside the new technology; the
    uncompressed ones can simply be thrown away/overwritten.

    The volumes get decompressed only when needed.
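
    That decompress-on-demand scheme can be sketched as a small cache. Here a zlib round-trip stands in for whatever codec the imagined hardware would use, and all names are made up:

```python
import zlib
from collections import OrderedDict

# Sketch: keep compressed volumes resident, decompress on demand, and throw
# decompressed copies away (least-recently-used first) when room is needed.

class VolumeCache:
    def __init__(self, max_entries=2):
        self.compressed = {}                 # volume id -> compressed bytes
        self.decompressed = OrderedDict()    # LRU cache of decompressed data
        self.max_entries = max_entries

    def store(self, vol_id, raw_bytes):
        self.compressed[vol_id] = zlib.compress(raw_bytes)

    def fetch(self, vol_id):
        if vol_id in self.decompressed:
            self.decompressed.move_to_end(vol_id)      # mark recently used
        else:
            if len(self.decompressed) >= self.max_entries:
                self.decompressed.popitem(last=False)  # evict oldest entry
            self.decompressed[vol_id] = zlib.decompress(self.compressed[vol_id])
        return self.decompressed[vol_id]

cache = VolumeCache(max_entries=2)
cache.store("hull", bytes(64 * 64))    # all-zero volume compresses very well
assert cache.fetch("hull") == bytes(64 * 64)
```

    The "intelligent" variant from the post is the automatic eviction; the "dumb" variant would expose an explicit release call instead.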

    Skybuck Flying, May 24, 2010
  11. a state of the art report. Just check the reference [EHK*06].

    Projection is a division by z. It's not complex.
    Rasterization is not complex, whether you rasterize a quad and check for
    a triangle inside/outside condition, or whether you do a Bresenham-like
    scan. It can be costly though for tiny polygons.
    They can be ray-marched. This is much more costly than ray tracing,
    where ray/triangle intersections have an analytic formula.
    Usually you just use giant voxels (i.e. octrees, kd-trees...) as an
    acceleration structure to raytrace polygons.
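
    The "projection is a division by z" remark, as a two-line sketch (pinhole model; the focal length f is an illustrative parameter):

```python
# Pinhole projection: a 3D point maps to the image plane by dividing
# x and y by depth z, scaled by the focal length f.

def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

# A point twice as far away lands half as far from the image centre:
near = project((2.0, 1.0, 2.0))   # (1.0, 0.5)
far = project((2.0, 1.0, 4.0))    # (0.5, 0.25)
```
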
    I agree. I've seen a PowerPoint presentation given at Siggraph a few
    years ago where he evaluates the cost of transferring the data to the
    GPU, compared to the cost of rendering the pixels on the CPU and
    transferring the resulting pixels, given that data transfer is the
    bottleneck rather than the cost of rendering. If the triangle is less
    than a few pixels wide, then it's more useful to directly transfer the
    rasterized pixels rather than sending the 3 vertices+normals+UVs and
    asking the GPU to rasterize.
    That's in part why billboards exist.
    No. Points *are* an option. More precisely, you don't render "points"
    but "splats" which are basically stretched discs. Look for example at:
    They render forests with points.
    They are also prone to filtering issues, are harder to render than
    splats, are harder to raytrace than triangles or even splats, can
    require a lot of memory for simple surfaces, can result in artefacts
    (kind of stratifications), are harder to edit or model, are harder to
    texture/parametrize (look at the paper "Tile Trees" at i3D 2007 for
    example, which works for surfaces :
    http://www-sop.inria.fr/reves/Basilic/2007/LD07/LD07.pdf )...

    There is no magic in it. I agree it is a good area of research. I
    however don't believe it to be the solution to everything.

    Nicolas Bonneel, May 26, 2010
  12. Are you joking ? It needs to make the coffee as well ??
    They render billions of voxels, which is more than enough to render full
    scenes. When they show the camera going into the human body, it *is* a
    full scene which is being drawn. Replace the flesh by Quake 4 walls if
    you want. Same for the Sierpinski sponge.

    Physics is out of the scope.
    The reference I gave shows that they are. Voxels are "decompressed" as
    you said, since ray marching is performed on them. Replace the ray
    marching step by anything else you want.
    This looks like a crank sentence.
    Nobody has waited for you to do that.
    Nicolas Bonneel, May 26, 2010
  13. Let's see, a billion you say, a few billion you say ?

    What is a billion ?

    A billion is: 1,000,000,000.

    Eye-opener for you:

    That's not a lot in 3D.

    That's a resolution of 1000x1000x1000.

    That won't even fill my 1920x1200 monitor !

    The moral of the story: 1D figures/numbers mean nothing when it comes to 3D
    and are very deceptive.
    Physics needs to be done for games as well... even if it's as simple as
    collision/intersection detection or so.
    Nope, I know very well what my hardware from 2006 can do.

    It has a resolution limit of 4096x4096 for 2D textures, which in itself
    isn't even that big.

    And for 3D it gets worse, for 3D it's actually 512x512x512.

    That's even smaller than the small 1000x1000x1000 example !
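
    For reference, the arithmetic behind those numbers, assuming one byte per voxel:

```python
# Back-of-the-envelope sizes for the volumes discussed above,
# assuming one byte per voxel, uncompressed.

voxels_1000 = 1000 ** 3               # a billion voxels: 1,000,000,000
voxels_512 = 512 ** 3                 # 134,217,728 voxels

mb_1000 = voxels_1000 / (1024 ** 2)   # roughly 953.7 MB uncompressed
mb_512 = voxels_512 / (1024 ** 2)     # exactly 128.0 MB uncompressed
```

    Even the "small" 512^3 texture is 128 MB raw, which is why compression dominates the discussion.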
    I guess for now I would be mostly interested in the decompression itself.

    Leave out the whole rendering/opengl and what not stuff... and just focus on
    the decompression if possible...

    First explain it conceptually... give it a name if possible.

    Try to explain the basic concepts of the compression if possible.

    For the time being I am not interested in actually writing any volume
    software or whatever...

    Though I am slightly interested in it, in the technology, and maybe for the
    future if/when I do might want to write some software for it.

    So for now I want to see "light" documentation which is easy to read and
    explains the basic concepts as simple as possible...

    It doesn't seem your document does that... it could help if it had some
    pseudo code in a Pascal-like language or so... some comments or so.

    Maybe some more pictures... whatever helps to make it more understandable ;) :)

    I want it/the documentation/document to be modular, not integrated...

    I don't want a document describing the total renderer.

    I want separate documents focusing on each part of it.

    For now I am only interested in the compression/decompression of volumes.

    So everything in the document can be "thrown away" except the compression
    side of it.

    So my advice to you is:

    Make a new document describing the compression/decompression only.

    Maybe then I will start to take it a bit more seriously... because how else
    would you transfer 1 billion bytes ?

    Just once is nice for slideshows/PowerPoint... but for games I expect a lot
    more traffic... that's why the compression is the most important part of it.

    I think the PCI bus was actually limited to a few billion bytes per
    second... so that's very low for uncompressed stuff.

    I think I made my point clear...
    No, it's just like images.

    The images are read from the harddisk in compressed form... then they enter
    the memory/cpu and are decompressed there and they can stay there for a
    while until the next compressed image needs to be decompressed and when room
    is needed.
    The technology could be intelligent and automatically throw away
    decompressed volumes if it needs to make room for new ones.

    It could also remain dumb and require the programmer to explicitly "release"
    decompressed volumes.

    Or maybe even both, for maximum control when desired or ease of use when not.

    Skybuck Flying, May 26, 2010
  14. Points is not an option those too small.
    I guess the "splat" idea is the same as the "vector balls" idea from long,
    long, long ago:

    If I wanted to try it I would probably try to drive this concept to the max.

    "UltraForce Vector Demo:"

    See 3:07 where the vector balls start ;) :)

    Is that the same concept as "splat" ? ;) :)

    To me vector balls seem to represent reality the closest...

    Aren't we all built up of tiny little molecules/atoms anyway ?

    They were always pictured as "balls" in highschool...

    Though later on they showed protons/neutrons/electrons...

    And ultimately it might be energy strings or whatever...

    But so far "atoms"/"tiny little balls" seem a good approximation of reality
    for "matter" ?

    Skybuck Flying, May 26, 2010
  15. First, I said billionS.

    Then, just *read* the paper, and see they render from 8192^3 resolution
    voxels for real data and up to 8.4M^3 virtual resolution (Sierpinski).
    Where did you see that 1 voxel = 1 pixel ?

    For *THEIR* 8192^3 resolution, THEY use an 8800 GTS graphics card with
    512 MB of memory. It was basically quite high-end 2006 hardware, but
    could be found, at worst, in 2007. Currently, this hardware is
    outdated (how much does an 8800 GTS cost currently ? 20 bucks?).

    They didn't invent their work (and I personally know well all of the
    authors, and published with one of them). They just... compress data!
    This is the scope of the paper though, but you don't seem to be willing
    to read it.

    I suggest you read a little bit more before posting your "innovative" ideas.

    lol, giving a pascal code just for you ? Just *read* papers, and not
    just one.

    There is an accompanying video.
    If *you* want something, do it yourself! Researchers are not here to
    fulfill your personal desires because you are too lazy to read.

    Read it. It has been presented at Siggraph 2009 also (but it is an i3D
    paper you definitely need to read).
    Nicolas Bonneel, May 26, 2010

  16. no. Splats are 2D. But again, you'll need to READ to understand that.
    Come on, the paper I gave is 8 pages long.... you are only able to watch
    youtube videos ??

    The splatting approach is just as old as 2001, in "Surface Splatting"
    from Zwicker et al. (8 pages as well, and it's the first Google result),
    but there are other slightly different older works (QSplat in 2000...).
    The point based rendering approach though is as old as 1985 (The Use of
    Points as Display Primitives, by Levoy and Whitted).
    Nicolas Bonneel, May 26, 2010
  17. Ok, nice tutorials about rendering and such... but why you refer me to
    them is beyond me ?

    Even the last document clearly states/shows that compression is somehow
    still an open issue.

    So the problem of compression is still not solved.

    Thanks anyway it was slightly interesting :)

    Skybuck :)
    Skybuck Flying, May 26, 2010

  18. This document did mention to check out/google for:

    "dwtgpu" (related to compression/decompression ?)

    I haven't done so yet... but maybe that's about compression.

    Skybuck Flying, May 26, 2010
  19. Then I guess it's the same since the vector balls are 2D as well, just their
    coordinates are 3D.

    Skybuck Flying, May 26, 2010
  20. I scanned through all the documents and the compression problem remains.

    Skybuck Flying, May 26, 2010
