
RLE acceleration hardware for nextgen graphics ?! ;) :)

Discussion in 'Nvidia' started by Skybuck Flying, May 21, 2010.

  1. Hello,

    Last time I saw/heard/read John Carmack (the famous programmer of Doom and
    Quake) talking about nextgen graphics rendering technology, he showed a demo
    of "voxel/volume" based rendering.

    Such voxel/volume-based rendering would probably need special hardware
    to quickly compress and decompress voxels/volumes into, for example, RLE
    streams.
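    A run-length codec is simple enough to sketch in software; a minimal version of the idea (function names are made up for illustration):

```python
def rle_encode(voxels):
    """Collapse a flat stream of voxel values into (value, run_length) pairs."""
    runs = []
    for v in voxels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the flat voxel stream."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

    Long runs of empty space, which dominate most volumes, collapse to a single pair; dedicated hardware would do the same expansion on the fly.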

    Thus I expect that if volume rendering is to become a reality for games, new
    hardware will need to be designed ?! ;) =D

    Though suddenly I do remember "octrees and all of that stuff"... maybe
    that's too difficult/complex to do in hardware... and maybe RLE is much
    easier to do...

    Questions left open are:

    1. Do the volumes need to be rotated ?

    2. What other possibilities are there for hardware acceleration of
    "volume-rendering"-based graphics ?!?

    3. Are NVIDIA and/or ATI/AMD heading in the wrong direction by focusing too
    much on "programmable hardware" / "parallel software", and should they be
    concentrating more on working together with John Carmack to try and bring
    the next graphics technology to reality ?!?

    4. Are they already secretly working with John Carmack... or have they
    somehow shitted on his face ? :):):)

    I think working together with the good man would be a smart idea ! ;) :)

    Bye,
    Skybuck =D
     
    Skybuck Flying, May 21, 2010
    #1

  2. On Thursday 20 May 2010 23:12, Skybuck Flying wrote:
    > Last time I saw/heard/read John Carmack (the famous programmer of Doom
    > and Quake) talking about nextgen graphics rendering technology, he showed a
    > demo of "voxel/volume" based rendering.
    >
    > ...
    >
    > 1. Do the volumes need to be rotated ?


    I'm sure they do: either because of changes in world state (e.g. the
    person is wheeling around) or changes in view point (e.g. I'm turning
    to look left).

    -paul-
    --
    Paul E. Black ()
     
    Paul E. Black, May 21, 2010
    #2

  3. I am not so sure about that:

    Two possibilities that avoid rotation:

    1. Camera view changes... instead of rotating the entire world, rotate the
    camera around the static objects. And cast rays from the camera into the
    objects.

    2. Instead of rotating objects, keep track of their rotations, and when
    calculations need to be done, like rays entering the object... try to
    use an imaginary sphere around the object... when the "rays" enter the
    sphere, rotate the ray around the sphere... and then let the ray continue
    into the object as normal.

    These two solutions would/could probably solve the rotation problem.

    Therefore, assuming the volume is accessible at x,y,z from any direction
    vector, no rotations would be needed except for the rays going into the
    volume. That could be far fewer rotations than rotating entire volumes.
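    The second idea, rotating the ray instead of the volume, can be sketched like this: for a pure rotation the inverse is just the transpose, so each ray is brought into the object's local space with one small matrix multiply (helper names are hypothetical):

```python
import math

def rotate_z(deg):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def ray_to_object_space(origin, direction, obj_rotation):
    """Instead of rotating the whole volume, rotate the ray by the
    inverse (= transpose, for a pure rotation) of the object's rotation."""
    inv = transpose(obj_rotation)
    return mat_vec(inv, origin), mat_vec(inv, direction)
```

    The volume data itself never moves; only two small vectors per ray are transformed.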

    Ultimately graphics cards would need some api to retrieve a voxel like so:

    UncompressVoxel( VolumeObject, VoxelX, VoxelY, VoxelZ );

    Then such an api could be used by ray-tracers to traverse the volume.

    Even better would be ray-tracers which can trace/traverse a ray through a
    volume and detect for example a collision or so:

    DetectVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ );

    Which could be useful for "collision detection".

    Also for drawing the scene a slightly extended version:

    ReturnVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ, VoxelColor );

    Which would also return the voxel's color...

    In case voxels have/need other properties, it could also return the
    entire voxel like so:

    ReturnVoxelCollision( in: VolumeObject, RayStartX, RayStartY, RayStartZ,
    RayEndX, RayEndY, RayEndZ, out: VoxelX, VoxelY, VoxelZ, VoxelData );

    So with these APIs the hardware could do "ray traversal" entirely in
    hardware/memory... and uncompress voxels as necessary.
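    What such a collision call would have to do internally can be sketched in software: a 3D DDA walk (in the style of Amanatides & Woo) that visits each grid cell the ray segment crosses, with is_solid standing in for the hardware's (decompressing) voxel lookup. This is an illustrative sketch, not the proposed hardware's actual algorithm:

```python
import math

def detect_voxel_collision(start, end, is_solid):
    """Walk the ray segment start->end through a unit voxel grid (3D DDA)
    and return the first voxel (x, y, z) for which is_solid is true,
    or None if the segment hits nothing."""
    pos = [int(math.floor(c)) for c in start]
    direction = [e - s for s, e in zip(start, end)]
    step, t_max, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
    for i in range(3):
        if direction[i] > 0:
            step[i] = 1
            t_max[i] = (math.floor(start[i]) + 1 - start[i]) / direction[i]
            t_delta[i] = 1 / direction[i]
        elif direction[i] < 0:
            step[i] = -1
            t_max[i] = (start[i] - math.floor(start[i])) / -direction[i]
            t_delta[i] = -1 / direction[i]
    while True:
        if is_solid(*pos):
            return tuple(pos)
        axis = t_max.index(min(t_max))
        if t_max[axis] > 1:      # next grid crossing lies past the segment end
            return None
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

    The per-step cost is a comparison and an addition, which is what makes this kind of traversal attractive for fixed-function hardware.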

    Bye,
    Skybuck.
     
    Skybuck Flying, May 21, 2010
    #3
  4. Nicolas Bonneel, May 21, 2010
    #4
  5. "Nicolas Bonneel" <> wrote in message
    news:ht6t7g$dj1$...
    > Skybuck Flying wrote:
    >> 2. What other possibilities are there for hardware acceleration of
    >> "volume-rendering" based graphics" ?!?

    >
    > again I advertise the GigaVoxel paper...
    > http://artis.imag.fr/Publications/2009/CNLE09/


    The word compress is found 1 time.

    The word compression is found 0 times.

    How does it compress volumes ?

    Can it compress for example a "(hollow) hull" to mere kilobytes ?

    Bye,
    Skybuck.
     
    Skybuck Flying, May 22, 2010
    #5
  6. "Nicolas Bonneel" <> wrote in message
    news:ht6t7g$dj1$...
    > Skybuck Flying wrote:
    >> 2. What other possibilities are there for hardware acceleration of
    >> "volume-rendering" based graphics" ?!?

    >
    > again I advertise the GigaVoxel paper...
    > http://artis.imag.fr/Publications/2009/CNLE09/


    Also the API I mentioned is not only for gp/gpgpu...

    The CPU might also need modest amounts of information from volumes/voxels.

    The document at the link seems to be mostly about "rendering"... and not so
    much about "retrieving information from volumes".

    The API I mentioned would use the GPU's capabilities to return information
    to the CPU.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 22, 2010
    #6
  7. Skybuck Flying wrote:
    > "Nicolas Bonneel" <> wrote in message
    > news:ht6t7g$dj1$...
    >> Skybuck Flying wrote:
    >>> 2. What other possibilities are there for hardware acceleration of
    >>> "volume-rendering" based graphics" ?!?

    >> again I advertise the GigaVoxel paper...
    >> http://artis.imag.fr/Publications/2009/CNLE09/

    >
    > The word compress is found 1 time.
    >
    > The word compression is found 0 times.
    >
    > How does it compress volumes ?
    >
    > Can it compress for example a "(hollow) hull" to mere kilobytes ?


    Understanding that requires more than pressing F3 to search for
    words in the document.
    They also give a good review of the previous work, which should allow you
    to find all the info you want, including a STAR report [EHK*06].

    Finally, storing hollow closed surfaces as volumetric data instead of
    polygons or parametric surfaces is not that clever (except if you're
    dealing with implicit surfaces, simulation, etc.).
     
    Nicolas Bonneel, May 22, 2010
    #7
  8. Skybuck Flying wrote:
    > "Nicolas Bonneel" <> wrote in message
    > news:ht6t7g$dj1$...
    >> Skybuck Flying wrote:
    >>> 2. What other possibilities are there for hardware acceleration of
    >>> "volume-rendering" based graphics" ?!?

    >> again I advertise the GigaVoxel paper...
    >> http://artis.imag.fr/Publications/2009/CNLE09/

    >
    > Also the API I mentioned is not only for gp/gpgpu...


    You didn't mention any API... did you ?


    > The CPU might also need modest ammounts of information from volumes/voxels.
    >
    > The document at the link seems to be mostly about "rendering"... and not so
    > much about "retrieving information from volume's".


    Your whole post was about volume rendering... wasn't it ?
    This paper deals both with the acceleration structure (octree), the way
    to use and update it efficiently on the GPU (during camera motion for
    example, if the goal is rendering) and the rendering itself (ray
    marching and filtering).


    > The API I mentioned would use the GPU's capabilities to return information
    > to the CPU.


    GPU-CPU transfers are usually to be avoided. What kind of information do
    you want to transfer ? The final image (-> then it's volume rendering) ?
    "Some" results of "some" computations ?... it's vague.
     
    Nicolas Bonneel, May 22, 2010
    #8
  9. "Nicolas Bonneel" <> wrote in message
    news:ht7aia$pg0$...
    > Skybuck Flying wrote:
    >> "Nicolas Bonneel" <> wrote in message
    >> news:ht6t7g$dj1$...
    >>> Skybuck Flying wrote:
    >>>> 2. What other possibilities are there for hardware acceleration of
    >>>> "volume-rendering" based graphics" ?!?
    >>> again I advertise the GigaVoxel paper...
    >>> http://artis.imag.fr/Publications/2009/CNLE09/

    >>
    >> The word compress is found 1 time.
    >>
    >> The word compression is found 0 times.
    >>
    >> How does it compress volumes ?
    >>
    >> Can it compress for example a "(hollow) hull" to mere kilobytes ?

    >
    > To understand that, it requires more than pressing F3 to search for words
    > in the document.


    I looked through it... it seems to be about some kind of "block" compression.

    Quite complex too... with nodes and such.

    > They also give a good review of the previous work which should allow you
    > to find all the infos you want, including a STAR report [EHK*06].


    ?

    > Finally, storing hollow closed surfaces as volumetric data instead of
    > polygons or parametric surfaces is not that clever (except if you're
    > dealing with implicit surfaces simulation etc.).


    Polygons need to be rendered/pixelated with complex formulas and routines.

    Voxels can be ray-traced, and massively in parallel too ?!?

    Like John Carmack said: when the polygons become that small it doesn't make
    much sense to use polygons anymore...

    Points are not an option, they're too small.

    Voxels seem ok, they lie in a grid... so they're more like boxes.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 24, 2010
    #9
  10. "Nicolas Bonneel" <> wrote in message
    news:...
    > Skybuck Flying wrote:
    >> "Nicolas Bonneel" <> wrote in message
    >> news:ht6t7g$dj1$...
    >>> Skybuck Flying wrote:
    >>>> 2. What other possibilities are there for hardware acceleration of
    >>>> "volume-rendering" based graphics" ?!?
    >>> again I advertise the GigaVoxel paper...
    >>> http://artis.imag.fr/Publications/2009/CNLE09/

    >>
    >> Also the API I mentioned is not only for gp/gpgpu...

    >
    > You didn't mention any API... did you ?


    See follow-up post.

    >
    >> The CPU might also need modest ammounts of information from
    >> volumes/voxels.
    >>
    >> The document at the link seems to be mostly about "rendering"... and not
    >> so much about "retrieving information from volume's".

    >
    > Your whole post was about volume rendering... isn't it ?


    No, not really; it's about nextgen hardware to speed up the use of
    "volumes/voxels-based technology".

    Rendering is just a part of it.

    > This paper deal both with the acceleration structure (octree), the way to
    > use and update it efficiently on the GPU (during camera motion for
    > example, if the goal is rendering) and the rendering itself (ray marching
    > and filtering).


    As long as it only renders one or two objects, that's not very impressive...

    It needs to render entire scenes... and then the scenes need to have physics
    as well.

    >> The API I mentioned would use the GPU's capabilities to return
    >> information to the CPU.

    >
    > GPU-CPU transfers are usually to be avoided. What kind of information do
    > you want to transfer ?


    The compressed voxels/volumes, and just some coordinates for lines.

    > The final image (-> then it's volume rendering) ?


    That too perhaps, after the rendering.

    > "Some" results of "some" computations ?... it's vague.


    "Compression/Decompression" computations.

    Voxels/Volumes can be compressed well, but need to be decompressed to do
    computations on...

    I don't think CPU's are suited to handle that kind of work...

    Nor do graphics cards seem really suited for it...

    Perhaps new technology needs to be created which focuses on decompressing
    the volumes very rapidly to allow computations on them.

    The computations could be done inside the new technology as well so that the
    big volumes don't have to be transferred.

    The compressed volumes could stay inside the new technology; the
    uncompressed ones can simply be thrown away/overwritten.

    The volumes get decompressed only when needed.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 24, 2010
    #10
  11. Skybuck Flying wrote:
    >> They also give a good review of the previous work which should allow you
    >> to find all the infos you want, including a STAR report [EHK*06].

    >
    > ?


    a state of the art report. Just check the reference [EHK*06].


    >> Finally, storing hollow closed surfaces as volumetric data instead of
    >> polygons or parametric surfaces is not that clever (except if you're
    >> dealing with implicit surfaces simulation etc.).

    >
    > Polygons need to be rendered/pixelated with complex formula's and routines.


    Projection is a division by z. It's not complex.
    Rasterization is not complex either, whether you rasterize a quad and check
    a triangle inside/outside condition, or do a Bresenham-like
    rasterization.
    It can be costly for tiny polygons, though.
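    As a tiny illustration of "projection is a division by z" (the screen size and focal length below are arbitrary assumptions):

```python
def project(point, focal=1.0, width=640, height=480):
    """Perspective projection of a camera-space point: divide x and y by z,
    then map the result to pixel coordinates centered on the screen."""
    x, y, z = point
    sx = focal * x / z                 # the whole "complexity" is this divide
    sy = focal * y / z
    return (width / 2 + sx * width / 2, height / 2 - sy * height / 2)
```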

    > Voxel's can be ray-traced and massively parallel too ?!?


    They can be ray-marched. This is much more costly than ray tracing, where
    line/triangle intersection has an analytic formula.
    Usually you just use giant voxels (i.e. octrees, kd-trees...) as an
    acceleration structure to ray-trace polygons.

    > Like John Carmack said: When the polygons become that small it doesn't make
    > much sense to use polygons anymore...


    I agree. I've seen a PowerPoint presentation given at SIGGRAPH a few
    years ago where he evaluates the cost of transferring the data to the
    GPU compared to the cost of rendering the pixels on the CPU and transferring
    the resulting pixels, knowing that data transfer is the bottleneck
    rather than the cost of rendering. If the triangle is less than a few
    pixels wide, then it's more useful to directly transfer the rasterized
    pixels rather than sending the 3 vertices+normals+UVs and asking the GPU
    to rasterize.
    That's in part why billboards exist.
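    The trade-off can be put in back-of-the-envelope numbers. The sizes below are illustrative assumptions (float3 position + float3 normal + float2 UV per vertex, RGBA8 pixels), not figures from the presentation:

```python
BYTES_PER_VERTEX = 12 + 12 + 8             # float3 pos + float3 normal + float2 UV
BYTES_PER_TRIANGLE = 3 * BYTES_PER_VERTEX  # 96 bytes of geometry per triangle

def cheaper_to_send_pixels(covered_pixels, bytes_per_pixel=4):
    """True when shipping the rasterized RGBA8 pixels costs fewer bytes
    than shipping the triangle's vertex data."""
    return covered_pixels * bytes_per_pixel < BYTES_PER_TRIANGLE
```

    Under these assumptions a triangle covering fewer than 24 pixels is cheaper to send pre-rasterized.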

    >
    > Points is not an option those too small.


    No. Points *are* an option. More precisely, you don't render "points"
    but "splats" which are basically stretched discs. Look for example at:
    http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
    They render forests with points.

    >
    > Voxels seems ok, they lie in a grid... so their more like boxes.


    They are also prone to filtering issues, are harder to render than
    splats, are harder to raytrace than triangles or even splats, can
    require a lot of memory for simple surfaces, can result in artefacts
    (kind of stratifications), are harder to edit or model, are harder to
    texture/parametrize (look at the paper "Tile Trees" at i3D 2007 for
    example, which works for surfaces :
    http://www-sop.inria.fr/reves/Basilic/2007/LD07/LD07.pdf )...

    There is no magic in it. I agree it is a good area of research. I
    however don't believe it to be the solution to everything.


    Cheers
     
    Nicolas Bonneel, May 26, 2010
    #11
  12. Skybuck Flying wrote:
    > "Nicolas Bonneel" <> wrote in message
    >> This paper deal both with the acceleration structure (octree), the way to
    >> use and update it efficiently on the GPU (during camera motion for
    >> example, if the goal is rendering) and the rendering itself (ray marching
    >> and filtering).

    >
    > As long as it renders one or two objects that not very impressive...
    >
    > It needs to render entire scenes... and then the scenes need to have physics
    > as well.


    Are you joking ? It needs to make the coffee as well ??
    They render billions of voxels, which is more than enough to render full
    scenes. When they show the camera going into the human body, it *is* a
    full scene which is being drawn. Replace the flesh by Quake 4 walls if
    you want. Idem for the Sierpinski sponge.

    Physics is out of the scope.

    > Voxels/Volumes can be compressed well, but need to be decompressed to do
    > computations on...
    >
    > I don't think CPU's are suited to handle that kind of work...
    >
    > Nor do graphics cards seem really suited for it...


    The reference I gave shows that they are. Voxels are "decompressed", as
    you said, since ray marching is performed on them. Replace the ray
    marching step by anything else you want.

    > The computations could be done inside the new technology as well so that the
    > big volumes don't have to be transferred.


    This looks like a crank sentence.

    >
    > The volumes get decompressed only when needed.


    Nobody has waited for you to do that.
     
    Nicolas Bonneel, May 26, 2010
    #12
  13. "Nicolas Bonneel" <> wrote in message
    news:hti1mq$6bn$...
    > Skybuck Flying wrote:
    >> "Nicolas Bonneel" <> wrote in message
    >>> This paper deal both with the acceleration structure (octree), the way
    >>> to use and update it efficiently on the GPU (during camera motion for
    >>> example, if the goal is rendering) and the rendering itself (ray
    >>> marching and filtering).

    >>
    >> As long as it renders one or two objects that not very impressive...
    >>
    >> It needs to render entire scenes... and then the scenes need to have
    >> physics as well.

    >
    > Are you joking ?


    No.

    > It needs to do the coffee as well ??


    No.

    > They render billions of voxels which is more than enough to render full


    Let's see, a billion you say, a few billion you say ?

    What is a billion ?

    A billion is:

    1,000,000,000

    Eye-opener for you:

    That's not much in 3D.

    That's a resolution of just 1000x1000x1000.

    That won't even fill my 1920x1200 monitor !

    The moral of the story: 1D figures/numbers mean nothing when it comes to 3D
    and are very deceptive.

    > scenes. When they show the camera going into the human body, it *is* a
    > full scene which is being drawn. Replace the flesh by Quake 4 walls if you
    > want. Idem for the Sierpinsky sponge.
    >
    > Physics is out of the scope.


    Physics needs to be done for games as well... even if it's as simple as
    collision/intersection detection or so.

    >> Voxels/Volumes can be compressed well, but need to be decompressed to do
    >> computations on...
    >>
    >> I don't think CPU's are suited to handle that kind of work...
    >>
    >> Nor do graphics cards seem really suited for it...

    >
    > The reference I gave show that they are.


    Nope, I know very well what my hardware from 2006 can do.

    It has a resolution of 4096x4096 for 2D textures, which in itself isn't even
    that big.

    And for 3D it gets worse: for 3D it's actually 512x512x512.

    That's even smaller than the small 1000x1000x1000 example !

    > Voxels are "decompressed" as you said, since ray marching is performed on
    > them. Replace the ray marching step by anything else you want.


    I guess for now I would be mostly interested in the decompression itself.

    Leave out the whole rendering/opengl and what not stuff... and just focus on
    the decompression if possible...

    First explain it conceptually... give it a name if possible.

    Try to explain the basic concepts of the compression if possible.

    For the time being I am not interested in actually writing any volume
    software or whatever...

    Though I am slightly interested in it, in the technology, and maybe for the
    future if/when I do might want to write some software for it.

    So for now I want to see "light" documentation which is easy to read and
    explains the basic concepts as simple as possible...

    It doesn't seem your document does that... it could help if it had some
    pseudo code in a Pascal-like language or so... some comments or so.

    Maybe some more pictures... whatever helps to make it more understandable ;) :)

    I want it/the documentation/document to be modular, not integrated...

    I don't want a document describing the total renderer.

    I want separate documents focusing on each part of it.

    For now I am only interested in the compression/decompression of volumes.

    So everything in the document can be "thrown away" except the compression
    side of it.

    So my advice to you is:

    Make a new document describing the compression/decompression only.

    Maybe then I will start to take it a bit more seriously... because how else
    would you transfer 1 billion bytes ?

    Just once is nice for slideshows/PowerPoint... but for games I expect a lot
    more traffic... that's why the compression is the most important part of it.

    I think the PCI bus was actually limited to a few billion bytes per
    second... so that's very low for uncompressed stuff.

    I think I made my point clear...

    >> The computations could be done inside the new technology as well so that
    >> the big volumes don't have to be transferred.

    >
    > This looks like a crank sentence.


    No, it's just like images.

    The images are read from the hard disk in compressed form... then they enter
    memory/the CPU and are decompressed there, and they can stay there for a
    while until the next compressed image needs to be decompressed and room
    is needed.

    >>
    >> The volumes get decompressed only when needed.

    >
    > Nobody has waited for you to do that.


    The technology could be intelligent and automatically throw away
    decompressed volumes if it needs to make room for new ones.

    It could also remain dumb and require the programmer to explicitly "release
    volumes/memory".

    Or maybe even both, for maximum control when desired or ease of use when
    desired.
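    The "both" variant above can be sketched as a small cache: decompress on first use, evict the least recently used volume automatically when full, and also expose an explicit release. All names here are hypothetical:

```python
from collections import OrderedDict

class VolumeCache:
    """Cache of decompressed volumes: automatic LRU eviction plus
    an explicit release for programmers who want full control."""
    def __init__(self, capacity, decompress):
        self.capacity = capacity
        self.decompress = decompress        # callable: volume_id -> voxel data
        self.cache = OrderedDict()

    def get(self, volume_id):
        if volume_id in self.cache:
            self.cache.move_to_end(volume_id)      # mark as recently used
        else:
            if len(self.cache) >= self.capacity:   # throw away oldest entry
                self.cache.popitem(last=False)
            self.cache[volume_id] = self.decompress(volume_id)
        return self.cache[volume_id]

    def release(self, volume_id):
        """Explicit release; the compressed source is assumed to stay put."""
        self.cache.pop(volume_id, None)
```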

    Bye,
    Skybuck.
     
    Skybuck Flying, May 26, 2010
    #13
  14. >> Points is not an option those too small.
    >
    > No. Points *are* an option. More precisely, you don't render "points" but
    > "splats" which are basically stretched discs. Look for example at:
    > http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
    > They render forests with points.


    I guess the "splat" idea is the same as the "vector balls" idea from long,
    long, long ago:

    If I wanted to try it I would probably try to drive this concept to the
    limit:

    "UltraForce Vector Demo:"

    http://www.youtube.com/watch?v=2suZ1KkZ9HI

    See 3:07 where the vector balls start ;) :)

    Is that the same concept as "splat" ? ;) :)

    To me vector balls seem to represent reality the closest...

    Aren't we all built up of tiny little molecules/atoms anyway ?

    They were always pictured as "balls" in high school...

    Though later on they showed protons/neutrons/electrons...

    And ultimately it might be energy strings or whatever...

    But so far "atoms"/"tiny little balls" seem a good approximation of reality
    for "matter" ?

    Bye,
    Skybuck.
     
    Skybuck Flying, May 26, 2010
    #14
  15. Skybuck Flying wrote:
    > "Nicolas Bonneel" <> wrote in message
    > news:hti1mq$6bn$...
    >> Skybuck Flying wrote:
    >>> "Nicolas Bonneel" <> wrote in message
    >>>> This paper deal both with the acceleration structure (octree), the way
    >>>> to use and update it efficiently on the GPU (during camera motion for
    >>>> example, if the goal is rendering) and the rendering itself (ray
    >>>> marching and filtering).
    >>> As long as it renders one or two objects that not very impressive...
    >>>
    >>> It needs to render entire scenes... and then the scenes need to have
    >>> physics as well.

    >> Are you joking ?

    >
    > No.
    >
    >> It needs to do the coffee as well ??

    >
    > No.
    >
    >> They render billions of voxels which is more than enough to render full

    >
    > Let's see, a billion you say, a few billion you say ?
    >
    > What is a billion ?
    >
    > A billion is:
    >
    > 1000.000.000
    >
    > Eye-opener for you:
    >
    > That's not in 3D
    >
    > That's a resolution of 1000x1000x1000.


    First, I said billionS.

    Then, just *read* the paper, and see that they render from 8192^3 resolution
    voxels for real data and up to 8.4M^3 virtual resolution (Sierpinski).

    >
    > That won't even fill my 1920x1200 monitor !


    Where did you see that 1 voxel = 1 pixel ?


    >>> Voxels/Volumes can be compressed well, but need to be decompressed to do
    >>> computations on...
    >>>
    >>> I don't think CPU's are suited to handle that kind of work...
    >>>
    >>> Nor do graphics cards seem really suited for it...

    >> The reference I gave show that they are.

    >
    > Nope, I know very well what my hardware from 2006 can do.
    >
    > It has a resolution of 4096x4096 for 2D textures which in itself isn't even
    > that big.
    >
    > And for 3D it gets worse, for 3D it's actually 512x512x512.
    >
    > That's even smaller than the small 1000x1000x1000 example !


    For *THEIR* 8192^3 resolution, THEY use an 8800 GTS graphics card with
    512 MB of memory. It was basically quite high-end hardware in 2006, but
    could be found then, or at worst in 2007. Currently, this hardware is
    outdated (how much does an 8800 GTS cost currently ? 20 bucks ?).

    They didn't invent their work (and I personally know all of the
    authors well, and published with one of them). They just... compress data!
    This is the very scope of the paper, but you don't seem to be willing
    to read it.

    I suggest you read a little bit more before posting your "innovative" ideas.


    > It doesn't seem your document does that... it could help if it had some
    > pseudo code in a pascal like language or so... some comments or so.


    lol, giving Pascal code just for you ? Just *read* papers, and not
    just one.


    > Maybe some more pictures... whatever helps to make it more understable ;) :)


    There is an accompanying video.

    > I want it/the documentation/document to be modular, not integrated...
    >
    > I don't want a document describing the total renderer.
    >
    > I want seperate documents focusing on each part of it.


    If *you* want something, do it yourself! Researchers are not here to
    fulfill your personal desires because you are too lazy to read.


    > Maybe then I will start to take it a bit more seriously... because how else
    > would you transfer 1 billion bytes ?


    Read it. It has been presented at Siggraph 2009 also (but it is an i3d
    paper).


    >>> The volumes get decompressed only when needed.

    >> Nobody has waited for you to do that.

    >
    > The technology could be intelligent and automatically throw away
    > decompressed volumes if it needs to make room for new ones.


    you definitely need to read.
     
    Nicolas Bonneel, May 26, 2010
    #15
  16. Skybuck Flying wrote:
    >>> Points is not an option those too small.

    >> No. Points *are* an option. More precisely, you don't render "points" but
    >> "splats" which are basically stretched discs. Look for example at:
    >> http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
    >> They render forests with points.

    >
    > I guess the "splat" idea is the same as the "vector balls" idea from long,
    > long, long ago:
    >
    > If I wanted to try it I would probably try to drive this concept to the
    > limit:
    >
    > "UltraForce Vector Demo:"
    >
    > http://www.youtube.com/watch?v=2suZ1KkZ9HI
    >
    > See 3:07 where the vector balls start ;) :)
    >
    > Is that the same concept as "splat" ? ;) :)



    no. Splats are 2D. But again, you'll need to READ to understand that.
    Come on, the paper I gave is 8 pages long... you are only able to watch
    youtube videos ??

    The splatting approach goes back at least to 2001, with "Surface Splatting"
    by Zwicker et al. (8 pages as well, and it's the first google result),
    but there are other, slightly different, older works (Qsplat in 200...).
    The point-based rendering approach itself is as old as 1985 (The Use of
    Points as Display Primitives, by Levoy and Whitted).
     
    Nicolas Bonneel, May 26, 2010
    #16
  17. "Nicolas Bonneel" <> wrote in message
    news:hti129$5pc$...
    > Skybuck Flying wrote:
    >>> They also give a good review of the previous work which should allow you
    >>> to find all the infos you want, including a STAR report [EHK*06].

    >>
    >> ?

    >
    > a state of the art report. Just check the reference [EHK*06].


    Ok, nice tutorials about rendering and such... but why you refer me to
    them is beyond me ?

    http://www.real-time-volume-graphics.org/?page_id=28

    Even the last document clearly states/shows that compression is somehow
    needed:

    http://www.cg.informatik.uni-siegen.de/data/Tutorials/EG2006/RTVG14_LargeVolumes.pdf

    So the problem of compression is still not solved.

    Thanks anyway it was slightly interesting :)

    Bye,
    Skybuck :)
     
    Skybuck Flying, May 26, 2010
    #17
  18. "Skybuck Flying" <> wrote in message
    news:29fe1$4bfca826$54190f09$1.nb.home.nl...
    >
    > "Nicolas Bonneel" <> wrote in message
    > news:hti129$5pc$...
    >> Skybuck Flying wrote:
    >>>> They also give a good review of the previous work which should allow
    >>>> you to find all the infos you want, including a STAR report [EHK*06].
    >>>
    >>> ?

    >>
    >> a state of the art report. Just check the reference [EHK*06].

    >
    > Ok, nice tutorials about rendering and such... but why you refer me
    > towards them is beyond me ?
    >
    > http://www.real-time-volume-graphics.org/?page_id=28
    >
    > Even the last document clearly states/shows that compression is somehow
    > needed:
    >
    > http://www.cg.informatik.uni-siegen.de/data/Tutorials/EG2006/RTVG14_LargeVolumes.pdf



    This document did mention to check out/google for:

    "dwtgpu" (related to compression/decompression ?)

    I haven't done so yet... but maybe that's about compression.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 26, 2010
    #18
  19. "Nicolas Bonneel" <> wrote in message
    news:...
    > Skybuck Flying wrote:
    >>>> Points is not an option those too small.
    >>> No. Points *are* an option. More precisely, you don't render "points"
    >>> but "splats" which are basically stretched discs. Look for example at:
    >>> http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
    >>> They render forests with points.

    >>
    >> I guess the "splat" idea is the same as the "vector balls" idea from
    >> long, long, long ago:
    >>
    >> If I wanted to try it I would probably try to drive this concept to the
    >> limit:
    >>
    >> "UltraForce Vector Demo:"
    >>
    >> http://www.youtube.com/watch?v=2suZ1KkZ9HI
    >>
    >> See 3:07 where the vector balls start ;) :)
    >>
    >> Is that the same concept as "splat" ? ;) :)

    >
    >
    > no. Splats are 2D.


    Then I guess it's the same since the vector balls are 2D as well, just their
    coordinates are 3D.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 26, 2010
    #19
  20. I scanned through all the documents and the compression problem remains.

    Bye,
    Skybuck.
     
    Skybuck Flying, May 26, 2010
    #20
