> Could anyone help me solve some fundamental issues in rendering
> shafts of light?
> I am following the paper "Interactive rendering method for displaying
> shafts of light" by Dobashi, Nishita and Yamamoto for rendering light
> shafts.
I have not read the paper you cited above, but I know the cloud
rendering papers by Dobashi et al. They also discuss shafts of light
there, and I suppose it's the same principle, so correct me if I'm wrong.
> Since I am new to graphics programming, I am finding it
> difficult to calculate the total intensity of the scattered light
> Is (the second term of the equation).
I suppose the term Is is the amount of light arriving at an arbitrary
point P from the sun, attenuated by the scene objects lying between P
and the sun. Dobashi et al. propose to calculate this amount by first
rendering the scene with OpenGL from the perspective of the sun/light
source and, during this step, evaluating the Is term for every P. In the
case of shafts of light for clouds, the P's lie on spheres centered at
the view point of the observer. This, at least, makes the rendering easiest.
My idea for implementing this is as follows:
0. Init the framebuffer with the color of the light source. Usually white.
1. Create the P's by, e.g., placing them homogeneously on spheres
with varying radii.
2. Sort the P's by increasing distance from the sun.
3. Sort the objects' fragments, i.e., the cloud particles, colored glass
faces, or whatever, also by increasing distance from the sun.
4. Step through your two sorted lists in increasing order:
If a fragment comes next, render it into the framebuffer with
OpenGL blending so that it attenuates the colors stored in the
framebuffer (see the paper for details).
If a P comes next, read out the color of the pixel in the framebuffer
which corresponds to the rendering coordinates of that P. Reading out
the framebuffer color may be accomplished with glReadPixels(). For that
you need the rendering coordinates of P, i.e., the screen coordinates
P would have if it were rendered. You get these either by rendering P
with alpha == 0 (so that it doesn't modify the framebuffer) and reading
the transformed coordinates back via OpenGL's feedback mode
(glFeedbackBuffer()), or by doing the projection of the point P
yourself, which I don't suppose you want to. (If the light source is
the sun, this is a parallel projection.) The framebuffer color you read
out with glReadPixels() is the "luminance" of the point P, i.e., the
term Is, so you have to store it associated with that P.
After that you have all Is terms for all P's; a rough code sketch of
this light pass follows below.
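
To make this concrete, here is my sketch of the light pass (steps 0-4)
in C++ with classic OpenGL. All names (SamplePoint, Fragment,
drawFragment(), windowCoordsOf()) are my own inventions, not from the
paper, and the blend function is just one way to get multiplicative
attenuation:

#include <GL/gl.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical types -- my names, not the paper's.
struct SamplePoint {
    double pos[3];     // world-space position of P
    double distToSun;  // sort key
    float  Is[3];      // filled in during the light pass
};
struct Fragment {
    double distToSun;  // sort key
    // ... geometry and attenuation color of the particle/face
};

void drawFragment(const Fragment& f);                   // assumed: draws one attenuating fragment
void windowCoordsOf(const double p[3], int& x, int& y); // assumed: see the gluProject() snippet below

// Steps 0-4: render from the sun, attenuate, read back Is at each P.
void lightPass(std::vector<SamplePoint>& points, std::vector<Fragment>& frags)
{
    // Step 0: init the framebuffer with the light color (white).
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Steps 2+3: sort both lists by increasing distance from the sun.
    auto byDist = [](const auto& a, const auto& b) { return a.distToSun < b.distToSun; };
    std::sort(points.begin(), points.end(), byDist);
    std::sort(frags.begin(), frags.end(), byDist);

    // Attenuation blending: dst_new = dst * src_color.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ZERO, GL_SRC_COLOR);

    // Step 4: merge the two sorted lists in increasing order.
    std::size_t p = 0, f = 0;
    while (p < points.size()) {
        if (f < frags.size() && frags[f].distToSun < points[p].distToSun) {
            drawFragment(frags[f++]);  // attenuates the framebuffer
        } else {
            int x, y;
            windowCoordsOf(points[p].pos, x, y);
            glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, points[p].Is);
            ++p;
        }
    }
}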
5. Clear the framebuffer with the background color of the scene.
6. Sort the scene objects' fragments, but this time according to their
distance from the view point. The points P are already sorted by
definition (the P's lie on spheres centered at the view point).
7. Render the scene from the view point of the observer in decreasing
order of distance from the view point: step through your sorted list
of fragments.
If a fragment comes next, render it in normal mode, i.e., with its
ordinary material color.
If a sphere's radius comes next, render the sphere by, e.g., rendering a
triangle mesh with the vertices P of that sphere. You should choose
smooth color interpolation between the vertex colors for rendering the
mesh fragments, and you should use blending. This time the blending
should add the amount of light scattered from the light source into the
direction of the viewer. This amount depends on the Is terms of the
sphere's vertices P (see the sketch below).
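
For step 7, here is how I would render one sphere shell, again as a
sketch with made-up names; the per-vertex colors would have to be
computed beforehand from the Is terms and the scattering term in the
paper:

#include <GL/gl.h>
#include <vector>

// Hypothetical shell vertex -- its color is assumed to be precomputed
// from Is and the paper's scattering term (not shown here).
struct ShellVertex {
    double pos[3];
    float  color[3];
};

// Renders one sphere shell as a triangle list (3 vertices per triangle).
void drawShell(const std::vector<ShellVertex>& tris)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  // additive blending: dst_new = dst + src
    glShadeModel(GL_SMOOTH);      // smooth interpolation between vertex colors
    glDepthMask(GL_FALSE);        // shells shouldn't write to the depth buffer

    glBegin(GL_TRIANGLES);
    for (const ShellVertex& v : tris) {
        glColor3fv(v.color);      // scattered intensity at the vertex P
        glVertex3dv(v.pos);
    }
    glEnd();

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}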
There we are. Beautiful shafts of light are on the screen ... hopefully.
This is my suggestion for a shafts-of-light algorithm. If anyone has a
better idea of how to get the Is terms, please tell me, because I want to
implement it myself in the near future.
A bottleneck of the algorithm is the glReadPixels() and
glFeedbackBuffer() readbacks. Since they are usually not optimized by
the OpenGL hardware drivers, they slow down the lighting step. A very
sophisticated workaround would be to store the pixel colors at the
corresponding P coordinates in a texture, pixel by pixel, and use the
texture colors during the rendering step. But this would be tricky to
set up in OpenGL, and I haven't thought through whether it is possible
in principle, given all the blending/texturing combinations needed to
generate the desired rendering effect. But maybe someone else has ... :-)
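
At least the copy-to-texture part of that workaround is standard
OpenGL: glCopyTexImage2D() copies the framebuffer into a texture
without the round trip through client memory that glReadPixels() takes.
A sketch, with an assumed texture object and window size (old
implementations require power-of-two dimensions):

#include <GL/gl.h>

GLuint lightMapTex = 0;

// One-time setup of the light-map texture.
void initLightMap()
{
    glGenTextures(1, &lightMapTex);
    glBindTexture(GL_TEXTURE_2D, lightMapTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

// After the light pass: grab the framebuffer as the "light map".
void grabLightMap(int winW, int winH)
{
    glBindTexture(GL_TEXTURE_2D, lightMapTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, winW, winH, 0);
}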
Regarding your questions:
> How to construct the virtual plane
This would be step 1 above.
> How can I find the direction towards the intersection point of the ray
> with the virtual plane?
I guess you mean the rendering coordinates of the P's. So see above.
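
If you do end up projecting the P's yourself, gluProject() does exactly
this (plain GLU, nothing paper-specific); it is also the
windowCoordsOf() helper I assumed in the light-pass sketch above:

#include <GL/gl.h>
#include <GL/glu.h>

// Window coordinates of a world-space point P under the current
// (sun) modelview/projection -- the pixel to read with glReadPixels().
void windowCoordsOf(const double p[3], int& x, int& y)
{
    GLdouble model[16], proj[16], winX, winY, winZ;
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);
    gluProject(p[0], p[1], p[2], model, proj, viewport, &winX, &winY, &winZ);
    x = (int)winX;
    y = (int)winY;
}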
> How to store the intensity distribution of light sources as light maps
The light map, in this case, is the framebuffer.
> Implementation of projective texture mapping techniques
I recommend some OpenGL literature :-) But anyway, you don't really need
it here, at least not in a first implementation of the algorithm.
So long ... I hope I was helpful.