It’s been a couple of weeks, so time for a new update!
In my previous post, I was able to show some preliminary results of my Lightslice implementation for virtual ray lights. I’ve been testing it ever since, trying to get an idea of how well it performs and making adjustments and improvements wherever possible. I think it’s going rather well. Let’s look at another example, based on an adaptation of a scene by Veach et al.:
Image rendered with 2500 virtual ray lights, no clustering.
Same scene as above, now rendered with 750 VRLs, again no clustering.
There is an obvious difference in quality between these two. Now have a look at my result:
Lightslice results, clustering 2500 virtual ray lights into 500 representatives.
And now for the important part: the first image was rendered in 692 seconds, the second in around 200 seconds, and my image in 174 seconds. So the clustered rendering is even faster than rendering 750 VRLs directly, while its quality stays very close to the non-clustered image with the full 2500 virtual ray lights.
I’m quite happy with the above results, but one problem of the Lightslice technique keeps bugging me: the lack of guidance on choosing the algorithm’s many parameters. Obviously, I didn’t just pick 500 representatives and 2500 VRLs and get these results out of the algorithm; I had to play around to find the right settings first. To tackle this problem, I have been wondering whether I could use some techniques from the Lightcuts algorithm in the Lightslice algorithm. I’m currently experimenting with the following: instead of fixing the number of clusters beforehand, the refinement of clusters for a slice (i.e. the final step of the Lightslice algorithm) is stopped as soon as the remaining estimated error drops below a percentage of the estimated total intensity of that slice. This is an essential part of the Lightcuts algorithm, and is based on Weber’s law: Lightcuts uses a relative error threshold of about 2%, below which the error is assumed to be imperceptible. To keep the amount of work in check, there is also a maximum number of representatives per slice (e.g. a thousand, as in Lightcuts).
Not only does this technique remove the need to choose some parameters by hand, it has another advantage: by stopping the refinement process as early as possible, rendering gets faster. Rendering 250 VRLs is faster than rendering 500, so if the 500 representative VRLs aren’t a clear improvement over the 250, you might as well go with the 250.
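To make the idea concrete, here is a rough sketch of such a per-slice refinement loop. The cluster records and the split operation are made up for illustration (a real implementation would split a node of the light hierarchy and recompute intensity and error estimates); the point is the stopping condition: refine the cluster with the largest error bound until the summed error falls below epsilon times the slice’s total intensity, or until a maximum cluster count is hit.

```python
import heapq

def refine_slice(clusters, epsilon, max_clusters):
    """Refine one slice's clusters until the summed error estimate falls
    below epsilon * (estimated total intensity of the slice), or until
    max_clusters representatives are reached.

    Each cluster is an (intensity, error_bound) pair. The split below is
    a hypothetical stand-in for descending one level in the hierarchy:
    it halves the intensity and quarters the error bound.
    """
    total_intensity = sum(i for i, _ in clusters)
    total_error = sum(e for _, e in clusters)
    # Max-heap on the error bound, so we always refine the worst cluster.
    heap = [(-e, i) for i, e in clusters]
    heapq.heapify(heap)
    count = len(clusters)
    while count < max_clusters and total_error > epsilon * total_intensity:
        neg_e, i = heapq.heappop(heap)
        worst_error = -neg_e
        child = (i * 0.5, worst_error * 0.25)  # toy split into two children
        total_error += 2 * child[1] - worst_error
        heapq.heappush(heap, (-child[1], child[0]))
        heapq.heappush(heap, (-child[1], child[0]))
        count += 1
    return [(i, -neg_e) for neg_e, i in heap]
```

With a 2% threshold and a generous cluster cap, the loop stops as soon as the estimated error is small enough, so easy slices end up with far fewer representatives than hard ones.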
I only implemented this today, so it’s at an early stage, but it shows a lot of promise, and I hope to experiment thoroughly with it over the next two weeks. Here’s an example result, with between 200 and 1000 clusters for every slice:
veach scene, 2500 VRLs clustered, with adaptive stopping condition.
It appears to be close to the images above, and was rendered in 133 seconds. Yay!