bramz' diary : Parsing PBRT scenes

December 30th, 2010 by Bramz

I started writing this post back in June, but I guess something else came up =) LiAR has always been an on-and-off thing, and this is no different. But let’s finish this one before the end of the year.

For the last few months, I’ve been working on a parser for the PBRT scene description language. This is the file format used by the renderer of the same name from the book Physically Based Rendering: From Theory to Implementation by Matt Pharr and Greg Humphreys.

The goal is to leverage the example scenes from the book as test cases for LiAR: being able to render from the exact same files as PBRT itself, I can directly compare the outputs. Plus, as LuxRender is based on PBRT, I can add support for their extensions to the file format, to take advantage of the various exporters that exist for Blender, Maya, …

And of course, I’m doing all this in Python. The benefit is that I’ve got syntax checking for free by simply implementing the commands as Python functions and calling them directly with a number of positional and keyword arguments. More on that below …

Writing the lexer

The command-oriented structure of the PBRT file format makes it very easy to build a parser for, and doubly so in Python. The file consists of a series of statements, each one starting with a command name followed by a number of arguments, either positional or named. Following are two statements from an example in the book. The first is Rotate with four positional arguments. The second is Shape with "disk" as its positional argument and two named ones, radius and height, with values [20] and [-1].

Rotate 135 1 0 0
Shape "disk" "float radius" [20] "float height" [-1]

Killeroo rendered by LiAR with PBRT parser. Killeroo model courtesy of headus 3D tools; scene from PBRT book.

The lexer is probably the hardest part to write: the function that translates the scene file into a series of tokens. And even that one is very simple if you make use of the undocumented Scanner class from the re module. We build a generator function _scan that reads the file line by line and feeds each line to the scanner. The tokens are returned as a list of (type, value) pairs, which we yield one by one.

from re import Scanner  # undocumented, but it has shipped with the re module for years

_IDENTIFIER, _NUMBER = "IDENTIFIER", "NUMBER"  # token type tags

def _scan(stream):
  scanner = Scanner([
    (r"[a-zA-Z_]\w*", lambda s, tok: (_IDENTIFIER, tok)),
    (r"[\-+]?(\d+(\.\d*)?|\.\d+)([eE][\-+]?\d+)?",
      lambda s, tok: (_NUMBER, float(tok))),
    (r"\s+", None),  # whitespace separates tokens, no token emitted
    # more rules ...
  ])
  for line in stream:
    tokens, remainder = scanner.scan(line)
    assert not remainder, "syntax error: %r" % remainder
    for (type, value) in tokens:
      yield (type, value)
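For example, feeding the Rotate statement from above through _scan yields something like this with the rules shown (note that the scanner has already converted the numbers to floats):

list(_scan(['Rotate 135 1 0 0']))
# [('IDENTIFIER', 'Rotate'), ('NUMBER', 135.0), ('NUMBER', 1.0),
#  ('NUMBER', 0.0), ('NUMBER', 0.0)]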

PBRT commands as Python functions

The major trick of the parser is to implement all PBRT commands as directly callable Python functions. Following is the implementation of Rotate. The first parameter self serves the same function as the C++ this pointer, and is independent of the PBRT syntax. It is only necessary because I’ve implemented the commands as methods of the PbrtScene class. The next four parameters correspond one to one to the arguments of the scene description statement.

def Rotate(self, angle, x, y, z):
  transform = liar.Transformation3D.rotation(
    (x, y, z), math.radians(angle))
  self.__cur_transform = transform.concatenate(
    self.__cur_transform)

DOF dragons rendered by LiAR with PBRT parser. Dragon model courtesy of Stanford University Computer Graphics Laboratory; scene from PBRT book.

The Shape command is a bit harder, as the first parameter is the shape type and determines what parameters should follow. I tackled this by playing the same trick again. For each shape type, I provide a function _shape_<name> to be called. Shape eats the positional argument name, and the Python interpreter collects all remaining keyword arguments in **kwargs. Shape does not have any positional arguments other than name, so there’s no *args. Eventually, Shape uses name to look up the appropriate shape function, and calls it with the keyword arguments it received. In the example, _shape_disk will be called and the contents of **kwargs will automatically be mapped onto the parameters radius and height.

def Shape(self, name, **kwargs):
  shape = getattr(self, "_shape_" + name)(**kwargs)
  shape.shader = self.__material
  self.__add_shape(shape)

def _shape_disk(self, height=0, radius=1):
  return liar.scenery.Disk(
    (0, 0, height), (0, 0, 1), radius)
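In effect, the Shape statement from the example boils down to the direct call below, with scene a PbrtScene instance. (This assumes the full parser unwraps single-element lists like [20] to plain scalars; the simplified main loop below passes lists as-is.)

# Shape "disk" "float radius" [20] "float height" [-1]
scene.Shape("disk", radius=20, height=-1)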

Putting it all together

All that is left to be done is parsing the tokens generated by the lexer and calling the command functions.

Here’s a simplified version of the main loop; I’ve left out the Include statement. Each time we encounter a new identifier, we know we’re at the start of a new statement. We execute the previous one, and we store the current identifier for later use. If we find a parameter name, we know the value following will be a keyword argument, so we store the keyword. In any other case, we have a parameter value. If it’s the start of a list, we first eat tokens to complete the list. If we have a stored keyword, the argument is stored in kwargs and the keyword is reset. Otherwise, we append it to the positional arguments args.

keyword, identifier, args, kwargs = None, None, [], {}
tokens = _scan(stream)
for (type, value) in tokens:
  if type == _IDENTIFIER:
    # start of new statement, execute last one
    if identifier:
      getattr(self, identifier)(*args, **kwargs)
    identifier = value
    args = []
    kwargs = {}
  elif type == _PARAMETER:
    keyword = value
  else:
    if type == _START_LIST:
      arg = []
      for (type, value) in tokens:
        if type == _END_LIST: break
        arg.append(value)
    else:
      arg = value
    if keyword:
      kwargs[keyword] = arg
      keyword = None
    else:
      args.append(arg)
if identifier:
  getattr(self, identifier)(*args, **kwargs)

To execute the statement, we look up the corresponding method using getattr. If it doesn’t exist, an AttributeError is raised. Next, we call the method, passing the positional and keyword arguments *args and **kwargs. The upshot of all this is that we get syntax checking for free. If we call Rotate with the wrong number of arguments, the Python interpreter will complain with a TypeError:

TypeError: Rotate() takes exactly 4 arguments (5 given)

Default parameter values are handled automatically too. If kwargs doesn’t have an entry for a parameter, the default value is used instead. And if kwargs contains an unknown parameter name, you get a TypeError:

TypeError: _shape_disk() got an unexpected keyword argument 'z'

Complex ecosystem rendered by LiAR with PBRT parser. Model from Deussen et al. Realistic modeling and rendering of plant ecosystems; scene from PBRT book.

Voilà, that about sums it up. It gets a little more complex than this, but not by much. Large parts of the PBRT scene description (version 1) are already implemented, but there’s lots left to be done. And we haven’t even mentioned version 2 of the scene description or the LuxRender extensions. But as said in the introduction, this is an on-and-off thing, and features are implemented on an as-needed basis.

PS: many thanks to Matt Pharr and Greg Humphreys for writing such a wonderful book!

bramz' diary : Building OpenEXR libraries for Windows x64

May 25th, 2010 by Bramz

LiAR has supported the OpenEXR file format for several months now, but I only had it working on 32-bit Windows. I was still missing the binaries for the 64-bit build, as you need to build the library yourself. For 64-bit Windows, that turns out to be one tricky affair:

  1. Only Visual Studio solution files are available, so that rules out nmake.
  2. DLL creation involves a custom build step tailored for x86.
  3. The Visual C++ Express editions are x86 only anyway, so you have to work around that as well.
  4. And surprisingly, it seems that no one has ever bothered before, because there’s hardly anything to be found on the web.

After several hours of hair-pulling, I’ve finally managed to set up a working x64 build. To document it for future reference, and for other people on the same adventure, here are the magic build steps:

Setting up the build tree

The OpenEXR build is rather picky about the build tree layout. While you’re free to choose the absolute location, you need to add an extra layer for things to play out nicely. On my machine, I’ve put everything in C:\libs-x64\openexr-1.6.1, but it could be just about any other location.

  1. Download ilmbase-1.0.1.tar.gz and openexr-1.6.1.tar.gz.
  2. Create C:\libs-x64\openexr-1.6.1\openexr.

    This extra layer is necessary to make sure the goodies are gathered in C:\libs-x64\openexr-1.6.1\Deploy. Otherwise, they would end up in C:\libs-x64\Deploy.

  3. Extract ilmbase-1.0.1.tar.gz to
    C:\libs-x64\openexr-1.6.1\openexr\ilmbase-1.0.1
  4. Extract openexr-1.6.1.tar.gz to
    C:\libs-x64\openexr-1.6.1\openexr\openexr-1.6.1

Using VC Express for x64 builds

For Windows, OpenEXR comes with Visual Studio project files only. So you must use Visual C++ for building. If you’re using the VC Express edition, that’s a problem, because it only has an x86 compiler on board. And you can’t directly use the x64 compiler of the Windows SDK, as that requires nmake makefiles, which you don’t have. But you can do it indirectly …

  1. Open a Windows SDK CMD shell. This sets all the required environment variables to use the x64 compiler.
  2. "%VCINSTALLDIR%\..\Common7\IDE\VCExpress" /useenv or whatever the location to vcexpress.exe is.

    The /useenv switch is important to tell the IDE not to use the default compiler, but to assume the environment is already set up and ready to go. This is exactly why we run this within the Windows SDK shell.

  3. Open a solution and make sure you target the x64 platform.

    You won’t be able to set up a new x64 configuration, but you can just open the Win32 configuration, go to the advanced linker options, and specify MachineX64 (/MACHINEX64) as the target machine.

  4. Build …

Building IlmBase

IlmBase contains the createDLL project that builds a tool for the custom build steps. You need to tweak its sources to correctly skip names starting with an underscore. Because createDLL assumes the x86 __cdecl calling convention, which decorates names with an extra underscore, it actually only skips names that start with at least two underscores. But the x64 calling convention does not decorate them as such, so names starting with a single underscore slip through the net.

  1. Start VC Express, open
    ilmbase-1.0.1\vc\vc8\IlmBase\IlmBase.sln and convert if necessary.
  2. Set the target machine to MachineX64 (see above)
  3. Open createDLL.cpp to make the following code changes:
    • Replace /MACHINE:X86 by /MACHINE:X64 (line 683)
    • Replace the following (three occurrences!)

      // symbol starts with two underbars, skip it
      else if (buf[0] == '_' && buf[1] == '_') {

      by

      // symbol starts with underbar, skip it
      else if (buf[0] == '_') {

  4. Build …

    If you get errors like fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'x64', you should have started VCExpress with the /useenv switch from the Windows SDK CMD shell, as described above.

Building zlib

OpenEXR has a dependency on zlib. So if you haven’t done so yet, now is the time to build it …

  1. Download zlib125.zip and extract to C:\libs-x64\zlib-1.2.5
  2. Open a Windows SDK CMD shell
  3. cd /D C:\libs-x64\zlib-1.2.5
  4. set Include=C:\libs-x64\zlib-1.2.5;%Include%
  5. nmake -f win32/Makefile.msc AS=ml64 LOC="-DASMV -DASMINF" OBJA="inffasx64.obj gvmat64.obj inffas8664.obj"

Building OpenEXR

OpenEXR expects the zlib headers and libraries to be in the Deploy directory as well, so that’s where we’re going to put them.

  1. Copy zlib.h and zconf.h from C:\libs-x64\zlib-1.2.5 to Deploy\include
  2. Copy zdll.lib to both Deploy\lib\Debug and
    Deploy\lib\Release
  3. Copy zlib1.dll to both Deploy\bin\Debug and
    Deploy\bin\Release
  4. Start VCExpress and open
    openexr-1.6.1\vc\vc8\OpenEXR\OpenEXR.sln
  5. Set the target machine to MachineX64 (see above)
  6. Build …

That’s it. By now you should have a Deploy directory filled with OpenEXR headers and libraries, ready to be used in your x64 build of, for example, LiAR =)

Happy Towel Day!

bramz' diary : Participating Media …

May 4th, 2010 by Bramz

… are really good fun. And a huge performance killer too. Especially if you enable in-scattering on your final gather rays. Oh boy, that really hurts! Still playing around with the Sponza atrium, I wanted to do a render of sunlight being cast through the arches on the first floor.

Anyway, to cut things short, here’s the final render. It only took a ridiculously long time to compute (12 hours on a quad-core). Click on the image for full resolution. Tonemapping has been done in Photoshop. Model & textures courtesy of Marko Dabrovic.

Adding single scattering was easy enough. Just ray march or sample some points along your camera rays and add light source contributions to each of them using a suitable phase function. Don’t forget to add some attenuation – Beer’s law comes to mind – to your camera rays, light rays and photon paths. Basically, to anything that travels through your medium. As the phase function, I’ve chosen the widely used Henyey-Greenstein model. It isn’t exactly the fastest one around, but as I don’t have a good profiler (yet ;), I don’t really know its impact on the render time anyway.
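To make that concrete, here’s a minimal Python sketch of both ingredients, for a homogeneous medium with extinction coefficient sigma_t (the function names are mine, not LiAR’s):

import math

def transmittance(sigma_t, distance):
  # Beer's law: the fraction of light surviving a straight path through the medium
  return math.exp(-sigma_t * distance)

def henyey_greenstein(cos_theta, g):
  # p(cos theta) = (1 - g^2) / (4 pi (1 + g^2 - 2 g cos theta)^(3/2)),
  # with g in (-1, 1) controlling forward versus backward scattering
  return (1 - g * g) / (4 * math.pi * (1 + g * g - 2 * g * cos_theta) ** 1.5)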

Multiple scattering is somewhat of a different beast, though it’s not too difficult either. In the photon mapping world, it means augmenting the renderer with a volumetric photon map that records all scatter events during the photon trace pass. For each photon that travels through a medium, you sample a possible scattering location. In the case of a homogeneous medium, this is as simple as feeding a uniform variate to the inverse cumulative distribution function of the exponential distribution, which can yield a travel distance anywhere from zero to infinity. If it is nearer than the first surface intersection, you store the photon and sample the phase function for a new direction. Some more details can be found in Lafortune and Willems (1996) and Raab, Seibert and Keller (2008).
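The distance sampling itself is a one-liner; a minimal sketch for a homogeneous medium, with the surrounding photon-tracing logic paraphrased in comments:

import math, random

def sample_scatter_distance(sigma_t):
  # invert the exponential CDF 1 - exp(-sigma_t * t); yields t in [0, infinity)
  return -math.log(1 - random.random()) / sigma_t

# during the photon trace pass, schematically:
#   t = sample_scatter_distance(sigma_t)
#   if t < distance to the first surface intersection:
#       store the photon in the volumetric map and sample the
#       phase function for a new direction
#   else:
#       handle the surface interaction as usual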

In the rendering pass, I used the beam radiance estimate from Jarosz, Zwicker and Jensen (2008) to collect the in-scattered light from the volumetric map on the camera and final gather rays. This method represents the volumetric photons as spheres, and all photons intersected by a ray contribute to it. In the case of camera rays, I ignore “single scattered” photons, as I account for single scattering separately.
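Schematically, and glossing over the kernel details of the paper, the estimate for a single ray looks something like the brute-force loop below (the real thing queries a spatial tree instead of looping over all photons, and all names here are mine):

import math

def beam_radiance_estimate(origin, direction, photons):
  # photons act as spheres: every photon whose sphere is pierced by the
  # (normalized) ray direction contributes
  total = 0.0
  for p in photons:
    oc = [a - b for a, b in zip(p.position, origin)]
    t = sum(a * b for a, b in zip(oc, direction))  # closest approach along the ray
    d2 = sum(a * a for a in oc) - t * t            # squared ray-to-photon distance
    if t > 0 and d2 < p.radius ** 2:
      # constant 2D kernel over the photon's disk cross-section;
      # a full version also weights by the phase function
      total += p.power / (math.pi * p.radius ** 2)
  return total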

Because this operation weighs very heavily on the final gather step, I stochastically skip the in-scattering estimate for a number of gather rays. For each ray, a uniform variate is compared to a quality factor, and only if it is lower are volumetric photons collected. The result is divided by the quality factor to compensate. That way, I can trade speed for accuracy.
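In sketch form, with quality the trade-off factor in (0, 1] and collect standing in for the expensive volumetric photon lookup:

import random

def in_scatter_estimate(ray, quality, collect):
  # stochastically skip the expensive lookup for most gather rays ...
  if random.random() >= quality:
    return 0.0
  # ... and divide the surviving estimates by the quality factor,
  # so the expected value stays unchanged
  return collect(ray) / quality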

bramz' diary : Pi and the Sponza atrium

March 14th, 2010 by Bramz

“you may expect updates soon” means “Look, I’m working on it, but I can’t pull all-nighters anymore. So, it’s going to take a while, OK?” =) So, a mere three weeks after that previous statement, here’s my first real update on the LiAR ray tracer. What have I been doing? Well, I concentrated my efforts on the Sponza atrium, to get this one rendered correctly and fast enough. Although it’s an old challenge, I figured that if I couldn’t tackle this one, there’s no point in trying more up-to-date ones. So here’s a list of things improved over the last couple of weeks:

  • Added TriangleMeshComposite which merges the triangle meshes of different objects into one acceleration tree. This is especially important in the Sponza atrium, as the 34 different TriangleMesh objects in the scene largely overlap. So ray traversals are going to hit the bounding boxes of most of them, most of the time, and as such the traversal degrades to a linear search of the scene objects. Using the TriangleMeshComposite, ray traversal visits only one triangle mesh.
  • LightSky caches the radiances in a rasterized map, using the same resolution as the CDF map. This ensures better coherence with the PDFs of the drawn samples (using importance sampling)
  • Using Multiple Importance Sampling for the direct lighting pass solves a lot of the noise issues in the shadows (otherwise, most of the shadow rays are aimed at the sun and thus blocked)
  • Final Gather step uses the local effective estimation radius to decide if secondary gather rays are necessary. Before, it used the global maximum estimation radius, but that one needs to be quite large, so the secondary gather step was triggered a lot, causing either a huge performance hit or a quality drop. In areas with high photon density, this threshold may be lower, as the photon map is much more detailed and the need for secondary gather rays averted. The rule is now that secondary gathering is only used if the length of the first gather ray is smaller than the effective estimation radius of the photon lookup (see the sketch after this list). Otherwise, it is assumed that the first gather step is of sufficient quality.
  • The samplers (Stratifier and LatinHypercube) take into account the pixel super sampling when generating samples for lights, BSDFs, gather rays, … For example, when using 3×3 super sampling with 8×8 gather rays each, the stratifier will generate the gather samples on a (3*8)x(3*8) grid.
  • The shaders now return a BSDF instance, caching the texture lookups. Don’t know why I haven’t done this before, but it surely helps with complex textures …
  • Added JPEG and OpenEXR image codecs
  • Lots of improvements to the Blender exporter, though there’s still lots and lots missing
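The secondary-gather rule from the Final Gather item above, as a minimal sketch (the names are mine, not LiAR’s):

def needs_secondary_gather(gather_ray_length, effective_radius):
  # a short first gather ray lands where the photon map is too coarse to be
  # trusted on its own; only then do we spawn secondary gather rays
  return gather_ray_length < effective_radius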

Here’s a render showing the current state of LiAR. Click on the image for full resolution. Tonemapping has been done in photoshop. Model & textures courtesy of Marko Dabrovic.

Work on a ray tracer is of course never done, and even for this Sponza atrium, there’s still room for improvement, both quality and performance wise. Here’s a bit of a TODO list for the near and far future:

  • Improve final gathering quality, using importance sampling based on the directions of the local photons
  • Use a profiler to investigate how much can be gained by improving the performance of the ray traversals and Kd-tree lookups
  • Use caching and subsampling to improve the performance and smoothness of the indirect lighting
  • Add volumetric stuff
  • Spectral rendering. Now I’m using XYZ values everywhere, which is (a) physically wrong and (b) doesn’t allow dispersion
  • Implement subtractive lighting to render objects with an existing backdrop in one pass.
  • Implement the Metropolis Light Transport algorithm

Stay tuned, and happy Pi day!

news : liar in a revival

February 20th, 2010 by Bramz

OK, it has been more than two years since my last update on liar.bramz.net. Back then, we moved from liar.sf.net to liar.bramz.org to liar.bramz.net, the subversion repository got a new URL, and everything went silent. What happened?

Well, I started taking real pictures instead, that’s what happened … And I started doing it quite regularly as well, mostly concert photography. I started out shooting bands of friends, then I joined Indiestyle, and later I was the in-house photographer of Westtalent 2009.

But then, fellow indiestyler Jaan Meert posted on Facebook a 3D render of a Lego tractor he made for school. And that got it itching again … I wanted to do 3D renders myself, but LiAR was severely out of shape. So shaping up had to be done, and after all this time, coding on LiAR is fun again.

Currently, I’m working on the good old Sponza atrium, to get things in good condition to tackle further challenges. I’m using Blender as a platform to set up the scene and export it to a Python script for LiAR. It’s working like a charm, and you may expect updates soon!

in other news : Plücker coordinates not so good for you?

July 17th, 2007 by Bramz

Christer Ericson, author of Real-Time Collision Detection has written a little article on Plücker coordinates and why they are considered harmful. It’s great. I never really understood what all the fuss was about, as I could do the Plücker coord tricks as easily with basic 3D linear algebra and its triple product. So, I’m glad to see some of the big guys agree!

(via Pete Shirley’s Graphics Blog)

“LiAR is entirely Plücker free” ;)

news : new URL for Subversion access

June 12th, 2007 by Bramz

As of 28 June 2007, Sourceforge will decommission the SVN access URLs starting with https://svn.sourceforge.net/svnroot/. They are replaced by URLs with the project names in front. For LiAR, this looks like https://liar.svn.sourceforge.net/svnroot/. The new URL scheme is part of an upgrade of the Subversion access method to improve its stability. If your local working copy uses the old URL, follow the switch instructions so that you can still access the repository after 28 June.

PS: I know LiAR has been silent for the last couple of months. There were just too many other things to take care of. However, the last couple of weeks, I’ve resumed coding on LiAR, albeit slowly, so hopefully I will be able to add some new posts in the near future.

news : we’ve moved to liar.bramz.net

February 8th, 2007 by Bramz

We’ve successfully moved the LiAR home page to http://liar.bramz.net. This should allow us to upload bigger renders, and get better search engine coverage. The old URLs starting with http://liar.sourceforge.net should redirect traffic to the new site so that no links get broken.

Share and Enjoy!

bramz' diary : things learnt while installing VS2005 SP1 …

February 7th, 2007 by Bramz

If you’re planning to install Visual Studio 2005 Service Pack 1, keep this in mind:

  • Make sure you have at least three gigs of free space on your C: drive. To be on the safe side, make it four gigs.
  • Windows Update tries to shove this up your arse as a security update, but don’t let it. Do a manual install instead.
  • Log in as Administrator. I know you already have super monkey admin powers, but do it anyway.

The aftermath?

Well, it did solve the test errors we had in the win32_vc8 build of Lass, like the fld loading corrupt data on the FPU stack. It also solved the template class member function overload ambiguity we suffered in testUtilThreadFun.

It did cause some other troubles though. Apparently SP1 screws up on default arguments in function template declarations. In lass::prim, all intersection functions are implemented in *.inl files with the function declarations being listed in the accompanying *.h file. These functions have a parameter tMin that defaults to zero. This worked/works fine on VC6, 7, 7.1 and 8 sans SP1, all GCCs I could get my hands on, but not on VC8 SP1. It still compiles the code, but at runtime, tMin contains garbage when the default value is used. Unfortunately, I was unable to reproduce the problem on a smaller scale. Anyway, the solution to this problem was to move all intersection functions to the header files so that the separate function declarations no longer exist.

bramz' diary : overriding default compiler options in distutils

January 29th, 2007 by Bramz

While debugging another segmentation fault on Linux, I was trying to run LiAR in gdb (actually KDbg). The program crashed somewhere in an allocator, but the reason why was almost impossible to see. The debugging experience was crippled because distutils compiles with full optimization -O3 by default, at least on my Linux box.

If I was going to debug it properly, I would have to get rid of that switch and use -O0 instead. But for some mysterious reason, distutils always used -DNDEBUG -g -O3 -Wall -Wstrict-prototypes, regardless of the --debug switch. It turns out distutils gets these from the original Makefile used to build Python, and stores them, together with the name gcc, in a compiler_so attribute of the CCompiler object. This attribute is later used to invoke the compiler.

Fortunately, in liar_build_shared_lib and liar_build_ext, we have access to the compiler object. All we have to do is, before building, to grab compiler_so, remove any -O switch and put an -O0 instead:

def force_no_optimisation(compiler):
    # strip any -O0 ... -O3 switch inherited from Python's own Makefile
    for i in range(4):
        try:
            compiler.compiler_so.remove("-O%d" % i)
        except ValueError:
            pass
    compiler.compiler_so.append("-O0")

class liar_build_shared_lib(build_clib):
    def build_library(self, lib_name, build_info):
        force_no_optimisation(self.compiler)
        ...

class liar_build_ext(build_ext):
    def build_extension(self, ext):
        force_no_optimisation(self.compiler)
        ...

What was causing the segmentation fault? The std::vector in kernel::Intersection was requesting memory for 0 elements. The lass::util::AllocatorBinned wasn’t really prepared for that. Once identified, it was easily fixed …