One of the deliverables for our master’s thesis is a poster explaining its results. I finished mine a couple of days ago, after incorporating the remarks I received when presenting it to my promotor. It has been quite a while since I posted something here, so I figured I’d share it. Click here to have a look at it 🙂 (it’s in Dutch!)

The goal of the poster is to show your results to the general public, so don’t expect a lot of technicalities 🙂

It’s going rather well, as you can see on the poster: I’ve managed to render decent approximations of a set of VRLs in about a quarter of the time. The last step is explaining everything in a nice, coherent text!




week 6 update: adaptiveness

It’s been a couple of weeks, so time for a new update!

In my previous post, I was able to show some preliminary results of my Lightslice implementation for virtual ray lights. I’ve been testing it ever since, trying to get an idea of how well it performs and making adjustments and improvements wherever possible. I think it’s going rather well. Let’s look at another example, based on an adaptation of a scene by Veach et al.:

Image rendered with 2500 virtual ray lights, no clustering.

Same scene as above, now rendered with 750 VRLs, again no clustering.

There is an obvious difference in quality between those two. Now have a look at my results:

Lightslice results, clustering 2500 virtual ray lights into 500 representatives.

And now for the important part: the first image was rendered in 692 seconds, the second in around 200 seconds, and my image in 174 seconds. So the clustered rendering is even faster than rendering just 750 VRLs, while its quality is very close to the non-clustered image with the full 2500 virtual ray lights.

I’m quite happy with the above results, but one problem of the Lightslice technique keeps bugging me: the lack of guidance on choosing the many parameters of the algorithm. Obviously, I didn’t just pick 500 representatives and 2500 VRLs and get these results out of the algorithm; I had to play around to find the right settings first. To tackle this problem, I have been wondering whether I could use some techniques from the Lightcuts algorithm in the Lightslice algorithm. I’m currently experimenting with the following: instead of fixing the number of clusters beforehand, the refinement of clusters for a slice (i.e. the final step of the Lightslice algorithm) is stopped whenever the remaining estimated error drops below a percentage of the estimated total intensity of that slice. This is an essential part of the Lightcuts algorithm, and it is based on Weber’s law. To keep the amount of work bounded, there is also a maximum number of representatives per slice (e.g. a thousand, as in Lightcuts).

This technique lets you avoid choosing some parameters, but it has another advantage as well: by stopping the refinement process as early as possible, rendering gets faster. Rendering 250 VRLs is faster than rendering 500, so if 500 representative VRLs aren’t a definite improvement over 250, you might as well go with the 250.
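To make the stopping rule concrete, here is a minimal Python sketch of the idea, not my actual implementation: the function names are made up, and the per-cluster error is crudely stood in for by the intensity spread of the cluster. Refinement splits the worst cluster until the summed error estimate falls below a fraction `tau` of the slice’s total intensity (the Weber’s-law-style criterion), or until a hard cap on the number of representatives is hit.

```python
import heapq

def refine_clusters(vrl_intensities, tau=0.02, max_reps=1000):
    """Refine the clusters for one slice until the summed error estimate
    drops below tau * (total slice intensity), or until max_reps
    representatives exist. Each cluster is just a list of intensities
    here; its error is crudely estimated as spread * size."""
    def error(cluster):
        return (max(cluster) - min(cluster)) * len(cluster)

    total = sum(vrl_intensities)
    # Start from a single cluster holding all VRLs; use a max-heap
    # (negated errors) to always split the worst cluster first.
    clusters = [sorted(vrl_intensities)]
    heap = [(-error(clusters[0]), 0)]
    while heap and len(clusters) < max_reps:
        neg_err, idx = heapq.heappop(heap)
        remaining = -neg_err + sum(-e for e, _ in heap)
        if remaining <= tau * total:
            break  # estimated error is already small enough: stop refining
        c = clusters[idx]
        if len(c) < 2:
            continue  # a singleton cannot be split further
        mid = len(c) // 2  # split the worst cluster into two halves
        a, b = c[:mid], c[mid:]
        clusters[idx] = a
        clusters.append(b)
        heapq.heappush(heap, (-error(a), idx))
        heapq.heappush(heap, (-error(b), len(clusters) - 1))
    return clusters
```

With two clearly separated intensity groups, refinement stops after a single split; with `tau = 0` the cap on the number of representatives takes over, which is exactly the safeguard that keeps the amount of work bounded.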

I only implemented this today, so it’s at an early stage, but it shows a lot of promise, and I hope to experiment thoroughly with it over the next two weeks. This is an example result, with between 200 and 1000 clusters for every slice:

Veach scene, 2500 VRLs clustered, with adaptive stopping condition.

It appears to be close to the images above, and it was rendered in 133 seconds. Yay!



week 3 update

In my previous post, I showed the concept behind the Lightslice technique with two pictures. By now, I can add something to that: actual results of the technique, coming from my own implementation.

The best way to start is with two pictures without any clustering:



The upper picture is rendered using 500 virtual ray lights, while the lower one is rendered with 1500. You might notice that I only used VRLs coming directly from the light source, so the pictures show light transport of the form LVVE. I also hope you notice that the lower picture, using more VRLs, is smoother and closer to convergence. The final important observation is that the lower picture took longer to render, which is conveniently included in the picture 😉 That is quite reasonable: computing the contribution of 1500 VRLs to all pixels is more work than computing the contribution of 500, and more work takes more time.

The problem here is that the render time scales linearly with the number of VRLs: three times as many VRLs means three times as much time (apart from some constant term). It would be better if three times as many VRLs needed only twice as much time, or one and a half times as much. That’s exactly what I’m trying to achieve with Lightslice: the quality of an image with a lot of VRLs, but rendered faster. At the moment, this is where I’m at:


This image is obviously better than the one with 500 VRLs, and it rendered faster than the one with 1500 VRLs. In the next weeks I’ll be experimenting with my implementation: how does clustering affect image quality (and does it converge)? How much faster can I go without losing too much quality? How should I choose the parameters? Can I improve the implementation by replacing (some of) the clustering algorithms?
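The linear scaling discussed above comes from the basic gathering loop of the VRL algorithm, which looks roughly like this. This is a Python sketch with a hypothetical `contribution` function, not my actual renderer:

```python
def render(pixels, vrls, contribution):
    """Naive VRL gathering: every pixel iterates over every virtual ray
    light, so render time grows linearly with the number of VRLs."""
    image = {}
    for p in pixels:
        # Sum each VRL's contribution to this pixel; tripling the
        # number of VRLs triples the work done in this inner loop.
        image[p] = sum(contribution(p, v) for v in vrls)
    return image
```

Lightslice attacks exactly this inner loop: instead of iterating over all VRLs, each pixel only iterates over the representatives chosen for its slice.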

There isn’t that much time left, so I’ll have to push hard in the coming weeks…



Week 2 update

Week 2 (of the second semester) is almost over, and I’ve got a nice result out of my Lightslice implementation that really shows what the goal of my project is.

As I explained before, I am implementing the Lightslice technique for the Virtual Ray Light algorithm, hoping to get the advantages of VRLs in an algorithm with sub-linear complexity in the number of virtual lights.

To give you an idea of what the mumbo-jumbo in the previous sentence actually means, have a look at the following two pictures:



The first picture shows the basic VRL algorithm at work: 100 virtual lights are generated, and for each pixel, the algorithm iterates over all of these lights to compute their contribution. The sum of all contributions is the final result for that pixel.

The second picture shows part of the Lightslice result. Lightslice tries to “summarize” the 100 original virtual lights into 10 representative virtual lights (I chose that number myself; the technique doesn’t prescribe one). However, the group of representatives is different for every part of the picture. In the second picture, you can see the representatives for the white part of the image. As far as I can tell, seven of the ten representatives actually pass through the part of the image in question. That’s exactly the result you want: the VRLs that don’t have a large influence on the white part are coarsely grouped into three representatives, while the important VRLs, with a lot of influence on the white part, get many representatives, in order to preserve as much of the local detail as possible.
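The core idea of per-slice representatives can be sketched in a few lines of Python. This is a deliberately toy version with made-up names: real Lightslice clusters columns of a sampled light/surface contribution matrix, whereas here the lights are simply grouped by their sampled contribution to one slice, and each group keeps one representative rescaled to carry the group’s total energy.

```python
import random

def slice_representatives(contrib, n_reps, rng=None):
    """contrib[i] = sampled contribution of VRL i to this slice.
    Group lights with similar contributions, then keep one
    representative per group, rescaled so the group's total energy
    is preserved. Returns a list of (light index, scale) pairs."""
    rng = rng or random.Random(0)
    order = sorted(range(len(contrib)), key=lambda i: contrib[i])
    size = -(-len(order) // n_reps)  # ceil division: lights per group
    reps = []
    for k in range(0, len(order), size):
        group = order[k:k + size]
        total = sum(contrib[i] for i in group)
        if total == 0:
            continue  # a group with no contribution needs no representative
        # Pick the representative proportionally to its contribution,
        # then scale it up so it stands in for the whole group.
        pick = rng.choices(group, weights=[contrib[i] for i in group])[0]
        reps.append((pick, total / contrib[pick]))
    return reps
```

The rescaling is what makes the summary unbiased in expectation: the representatives together carry exactly the energy of the full set of lights, while the per-pixel loop only has to visit `n_reps` of them.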

As you can see, I’m getting closer to my goal, albeit step by step 🙂 However, the implementation is a complex process: I want to use the best possible clustering algorithms, but they’re often difficult to implement and time-consuming. The latter in particular is a problem: it makes no sense to use slow clustering algorithms, because I want to speed up the rendering, not slow it down. On the other hand, using faster clustering techniques usually means settling for sub-optimal clusters and reducing the quality of the resulting image. A lot of decisions to make and a lot of work to do 🙂



Semester 2 start

Exams are over, and the second semester starts tomorrow. Time to make a quick wrap-up of the past month, and set out the direction I’m heading. That direction should be pretty clear by now, so I’m going to keep it short.

Between Christmas and today, I mainly did four things (concerning my thesis, that is; I also had exams to prepare):

  • I wrote a paper, which we are required to do. I already mentioned it in my previous post.
  • I experimented a bit with a sampling technique for the volume-to-volume equation of virtual ray lights: a technique I thought of myself, which is just a repeated version of the sampling technique by Kulla et al. I hoped it would perform better than the technique described in the VRL paper. However, my experiments showed that the two techniques performed similarly, and the new sampling procedure didn’t really improve results. Since the sampling isn’t really relevant for my thesis, I invested no more time in it and skipped a theoretical analysis.
  • I’ve always said I was planning to make a Lightslice variant for VRLs, choosing Lightslice over Lightcuts for obvious reasons. Still, I have always thought of Lightcuts as the more elegant technique (I just like it more, I don’t know why 🙂 ). So I’ve spent some time finding (and trying out) upper bounds for equation 7 of the VRL paper (volume-to-volume transport). In hindsight, these experiments served as a way of proving that Lightslice is indeed the better choice: even under the assumption of a homogeneous, diffuse medium, upper bounds for VRL clusters are pretty difficult to find, especially since these bounds should be as tight as possible yet easy to compute.
  • I’ve started my implementation of the Lightslice technique. I began with some simple clustering algorithms, just to check whether everything goes as I expect. In the coming weeks I will improve the algorithm step by step.
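For reference, the core of the Kulla et al. technique mentioned above (equiangular sampling) can be sketched as follows. This is my own simplified Python sketch for a single point light and a ray segment, not code from either paper; my “repeated version” applies a step like this more than once, which I leave out here.

```python
import math

def equiangular_sample(delta, D, t0, t1, u):
    """Equiangular sampling (Kulla et al.) of a distance t along a ray,
    concentrating samples near the point of the ray closest to the light.
    delta    = ray parameter of the point closest to the light,
    D        = distance from the light to that closest point,
    [t0, t1] = integration range, u = uniform random number in [0, 1).
    Returns (t, pdf(t))."""
    # Work in the angle subtended at the light, sampled uniformly.
    theta_a = math.atan((t0 - delta) / D)
    theta_b = math.atan((t1 - delta) / D)
    theta = theta_a + u * (theta_b - theta_a)
    t = delta + D * math.tan(theta)
    # Density of t under uniform angle sampling: it peaks at t = delta,
    # exactly where the 1/r^2 falloff of the light is strongest.
    pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
    return t, pdf
```

The point of the technique is that the pdf cancels the inverse-square distance falloff to the light, which is why it is the natural baseline to compare any new sampling scheme against.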

So, the next weeks will consist of implementing better clustering and sampling algorithms. 🙂


Semester 1 wrap-up

Oops! I forgot to update my blog for a couple of weeks. It has just been really, really busy (and it still is).

Anyway, first semester is almost over, time to look back at what I did:

  • I’ve read a lot of papers on instant radiosity, volume rendering and related topics.
  • I implemented the virtual ray lights algorithm and validated the implementation.
  • I played around (experimented) with some of its properties, mainly the sampling technique.

I still need to fix some details of the sampling technique, but apart from that, I’m ready to start the actual research: clustering Virtual Ray Lights. As I already explained in an earlier post, I will try to use Lightslice to do so.

I’m going to skip explaining everything mentioned above: most of it has already been covered in previous posts on this blog. Furthermore, two weeks ago I had to give a presentation about my progress, and I also have to write a paper (deadline: tomorrow 🙂 ). Everything is covered in detail in the paper, found here. My slides are right here.

New year’s resolution: update my blog more regularly! But not in the next two weeks, as exams are upon us 🙂


week 10 update

I skipped a week because I didn’t have much to show or tell. The same goes for this week, but I can’t really skip two weeks in a row.

I’ve been trying to find out whether my algorithm gives correct results after converging. Unsurprisingly, it doesn’t. Something is still off, and I am trying to find out what it is. This is a very time-consuming process, and I feel like I am losing a lot of time…

To compare my renders to those of other algorithms, I had to include all possible light paths, not only the ones handled by the VRL algorithm. I lost a lot of time including these, but they are in now. So now I am looking for errors/problems/mistakes in my implementation of the VRL algorithm. I hope to find them as soon as possible…

Back to work! 🙂