POV-Ray 3.6 for UNIX documentation
2.4.5 Utilities, models, etc. | 2.4.6 Rendering speed | 2.4.7 Miscellaneous questions
"Will POV-Ray render faster if I buy the latest and fastest 3D videocard?"
No.
3D cards are not designed for raytracing. They take polygon meshes as input and scanline-render them. Scanline rendering has very little, if anything, to do with raytracing, and 3D cards cannot calculate typical raytracing features such as reflections and refractions. The algorithms used in 3D cards simply do not apply to raytracing.
This means that you cannot use a 3D card to speed up raytracing (even if you wanted to). Raytracing performs an enormous number of floating-point calculations, which is very FPU-intensive. A fast FPU will gain you much more speed than any 3D card.
What raytracing actually does is this: calculate the color of one pixel and (optionally) put it on the screen. A fast video card gives little benefit, since only individual pixels are drawn on screen.
This question can be divided into two questions:
1) What kind of hardware should I use to increase rendering speed?
(Answer by Ken Tyler)
The truth is that the computations needed for rendering images are both complex and time-consuming. This is one of the few types of program that will actually push your processor's FPU to maximum use.
The things that will most improve speed, roughly in order of importance, are:
2) How should I make the POV-Ray scenes so that they will render as fast as possible?
These are some things which may speed up rendering without having to compromise the quality of the scene:
- Use a lower quality setting for test renders (command-line parameter +Q).
- Comment out (or disable with #if-statements) the majority of the light sources and leave only the necessary ones to see the scene.
- Replace (with #if-statements) slow objects (such as superellipsoids) with faster ones (such as boxes).
- Replace complex textures with flat colors for test renders; you can use the quick_color statement to do this (it will work when you render with quality 5 or lower, i.e. command-line parameter +Q5).
- If the scene uses heavy reflection or refraction, lower max_trace_level, and try setting the adc_bailout value to something bigger than the default 1/256.
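As a concrete illustration of the #if technique above, a single test-render switch can select between a fast stand-in and the real object (a sketch only; the Test flag, the particular objects, and the quick_color value are made up for the example):

```
#declare Test = true;  // set to false for the final render

#if (Test)
  // fast stand-in object used while test rendering
  box { -1, 1 pigment { rgb <1,0,0> } }
#else
  // the real, slower object used in the final render
  superellipsoid { <0.5, 0.5> pigment { rgb <1,0,0> } }
#end

// quick_color: a cheap flat color used in place of an expensive
// pattern when rendering with quality +Q5 or lower
sphere {
  0, 1
  pigment { granite quick_color rgb <0.8,0.6,0.4> }
  translate x*3
}
```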
"How do the different kinds of CSG objects compare in speed? How can I speed them up?"
There is a lot of misinformation about CSG speed out there. A very common allegation is that "merge is always slower than union". This statement is not true. Merge is sometimes slower than union, but in some cases it is even faster. For example, consider the following code:
global_settings { max_trace_level 40 }
camera { location -z*8 look_at 0 angle 35 }
light_source { <100,100,-100> 1 }
merge {
  #declare Ind=0;
  #while(Ind<20)
    sphere { z*Ind,2 pigment { rgbt .9 } }
    #declare Ind=Ind+1;
  #end
}
There are 20 semitransparent merged spheres here. A test render took 64 seconds. Substituting 'union' for 'merge' increased the rendering time to 352 seconds (5.5 times longer). The difference in speed is very notable.
So why is 'merge' so much faster than 'union' in this case? The answer is probably that the number of visible surfaces plays a very important role in the rendering speed. When the spheres are unioned there are 18 inner surfaces; when they are merged, those inner surfaces are gone. POV-Ray has to calculate lighting and shading for every one of those surfaces, and that is what makes the union so slow. When the spheres are merged, no lighting and shading calculations need to be performed for those 18 surfaces.
So is 'merge' always faster than 'union'? No. If the objects are completely non-transparent, 'merge' is slightly slower than 'union', and in that case you should always use 'union' instead. It makes no sense to use 'merge' with non-transparent objects.
Another common allegation is that "difference is very slow; much slower than union". This can also be shown to be false. Consider the following example:
camera { location -z*12 look_at 0 angle 35 }
light_source { <100,100,-100> 1 }
difference {
  sphere { 0,2 }
  sphere { <-1,0,-1>,2 }
  sphere { <1,0,-1>,2 }
  pigment { rgb <1,0,0> }
}
This scene took 42 seconds to render, while substituting the 'difference' with a 'union' took 59 seconds (1.4 times longer).
The crucial factor here is the size of the surfaces on screen: the larger the area they cover, the slower they are to render (because POV-Ray has to do more lighting and shading calculations).
But the second statement is much closer to the truth than the first one: differences are often slow to render, especially when the member objects of the difference are much bigger than the resulting CSG object. This is because POV-Ray's automatic bounding is not perfect. A few words about bounding:
Suppose you have hundreds of objects (spheres or whatever) forming a bigger CSG object, but this object is rather small on screen (a little house, for example). It would be really slow to test the ray-object intersection for each one of those objects for each pixel of the screen. This is sped up by bounding the CSG object with a bounding shape (such as a box). The ray is first tested against this bounding box, and the objects inside the box are tested only if the ray hits the box. This speeds up rendering considerably, since the tests are performed only in the area of the screen where the CSG object is located.
Since it is rather easy to automatically calculate a proper bounding box for a given object, POV-Ray does this and thus you do not have to do it by yourself.
But this automatic bounding is not perfect. There are situations where a tight automatic bound is very hard to calculate. One such situation is the difference and intersection CSG operations. POV-Ray does what it can, but sometimes it does a pretty poor job. This shows up especially when the resulting CSG object is very small compared to the CSG member objects. For example:
intersection {
  sphere { <-1000,0,0>, 1001 }
  sphere { <1000,0,0>, 1001 }
}
(This is the same as making a difference with the second sphere inverted.)
In this example the member objects extend from <-2001,-1001,-1001> to <2001,1001,1001>, although the resulting CSG object is a small lens-shaped object which is only 2 units wide in the x direction and about 89 units (2·√2001) wide in the y and z directions. As you can see, it is very difficult to calculate the actual dimensions of the object (though not impossible).
In cases like this POV-Ray computes a huge bounding box which is next to useless. You should bound this kind of object by hand (especially when it has lots of member objects). This can be done with the bounded_by keyword.
Here is an example:
camera { location -z*80 look_at 0 angle 35 }
light_source { <100,200,-150> 1 }
#declare test =
  difference {
    union {
      cylinder { <-2,-20,0>, <-2,20,0>, 1 }
      cylinder { <2,-20,0>, <2,20,0>, 1 }
    }
    box { <-10,1,-10>, <10,30,10> }
    box { <-10,-1,-10>, <10,-30,10> }
    pigment { rgb <1,.5,.5> }
    bounded_by { box { <-3.1,-1.1,-1.1>, <3.1,1.1,1.1> } }
  }
#declare copy = 0;
#while (copy < 40)
  object { test translate -20*x translate copy*x }
  #declare copy = copy + 3;
#end
This took 51 seconds to render. Commenting out the 'bounded_by' line increased the rendering time to 231 seconds (4.5 times slower).
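Returning to the lens-shaped intersection above, a hand-made bound might look like this (a sketch; the exact padding values are my own, derived from the 2 by 2·√2001 extents of the lens):

```
intersection {
  sphere { <-1000,0,0>, 1001 }
  sphere { <1000,0,0>, 1001 }
  bounded_by { box { <-1.1, -45, -45>, <1.1, 45, 45> } }
  pigment { rgb 1 }
}
```

Note that the bounding box must fully contain the object: making it slightly larger than necessary is safe, while making it too small causes parts of the object to disappear from the render.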
"Will POV-Ray ever switch to single precision floating-point math to gain speed?"
No, and it most likely never will.
There are several good reasons for this:
Note: There are a few places in POV-Ray that use single precision math (such as color handling). This is one area where some optimization might be possible without degrading image quality.