# Camera shaders
A camera shader allows you to implement a custom camera type. You could, for instance, implement a fish-eye camera, or write a shader which exactly matches the camera used to capture live footage.

There are two types of OSL cameras: the regular OSL camera, which is the usual positioned camera, and the OSL baking camera, which behaves much like the regular baking camera, except that the UV coordinate to sample and the resulting ray direction are generated by a shader.

The following is a minimal implementation of a perspective camera. It takes into account the aspect ratio of the image, and supports some of the usual camera features in Octane, like viewport navigation and stereo rendering.
```
shader OslCamera(
    output point pos = P,
    output vector dir = 0)
{
    // Read UV channel 2, which is corrected for the image aspect ratio.
    float uv[2];
    getattribute("hit:uv", 2, uv);

    // Build the ray through this point on the film plane.
    vector right = cross(I, N);
    dir = I + right * (uv[0] - .5) + N * (uv[1] - .5);
}
```
## Output variables
A camera shader has up to three outputs representing a ray (the names are arbitrary, but the types have to match exactly).
| Output | Meaning |
|---|---|
| `point pos` | Ray position |
| `vector dir` | Ray direction |
| `float clip[2]` or `float tMax` | Near and far clipping distances, or far clipping distance only |
The vector `dir` will be normalized by the render engine.

Clipping distance is given as a number of units in the ray direction. To disable far clipping, set the second value to +∞ (`1.0/0.0`). If `clip` defines an empty interval, or if `dir` has zero length, the returned ray is considered invalid, and the renderer will not perform any path tracing for this sample.

The third output (`clip` or `tMax`) can be omitted if no clipping is needed. Octane 3.08 only supports the `float tMax` form.
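As a hedged sketch of the `clip[2]` form (the shader and parameter names are illustrative, not part of the API):

```
shader ClipCamera(
    float nearClip = 0,
    float farClip = 1.0/0.0,   // +inf disables far clipping
    output point pos = P,
    output vector dir = I,
    output float clip[2] = {0, 1.0/0.0})
{
    // Both distances are measured in units along the ray direction.
    clip[0] = nearClip;
    clip[1] = farClip;   // an empty interval (near > far) invalidates the ray
}
```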
Cameras may use `setmessage("octane:throughput", value)` to set the initial throughput of a ray. This can be used for effects like optical vignetting.
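As an illustrative sketch (assuming the message value is a `color`; the falloff formula here is arbitrary), a shader body could darken rays towards the film edges:

```
// Illustrative vignetting: fade the throughput towards the film-plane
// edges, using UV channel 1 (the globals u and v).
float r2 = (u - 0.5) * (u - 0.5) + (v - 0.5) * (v - 0.5);
setmessage("octane:throughput", color(max(0.0, 1.0 - 2.0 * r2)));
```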
## Using the OSL camera

### Camera position
Like other camera types, OSL camera nodes have static input pins which define the position and orientation of the camera. It is not mandatory for your camera shader to use this position, but if it does, your camera automatically supports motion blur and stereo rendering.

Within camera shaders, the position and orientation of the camera are available via the standard global variables defined by OSL:
| Variable | Meaning |
|---|---|
| `point P` | Camera position |
| `vector I` | Camera direction (sometimes called forward) |
| `normal N` | Up vector, perpendicular to `I` |
| `float u, float v` | First UV channel |
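Since `N` is perpendicular to `I`, a cross product yields the right vector of the film plane; the minimal example above builds its ray exactly this way. A sketch using the globals `u` and `v` (UV channel 1, described below):

```
// Film-plane basis from the camera globals.
vector right = cross(I, N);                   // right vector of the film plane
pos = P;                                      // rays start at the camera position
dir = I + right * (u - 0.5) + N * (v - 0.5);  // ray through film point (u, v)
```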
The UV coordinates are mapped on the film plane, with the U axis pointing right and the V axis pointing up. Camera shaders can use two UV channels:
| UV channel | Meaning |
|---|---|
| 1 | Maps the film plane to the unit square (0, 0)–(1, 1). |
| 2 | Similar to channel 1, but the V axis is scaled around 0.5 according to the image aspect ratio. |
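Assuming the integer argument of `getattribute` selects the UV channel, as the examples in this section suggest, both channels can be read explicitly:

```
float uv1[2], uv2[2];
getattribute("hit:uv", 1, uv1);  // channel 1: unit square
getattribute("hit:uv", 2, uv2);  // channel 2: V scaled by the aspect ratio
```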
The camera position is also available via the `"camera"` coordinate space. This is usually an orthonormal coordinate space. Without a transform, the camera looks along the negative Z axis with the Y axis as the up vector, so the axes are defined as:
| Axis | Meaning |
|---|---|
| +X | Right vector |
| +Y | Up vector |
| -Z | Camera direction |
Using this coordinate space, the last line of the example above may be written as:
```
dir = vector("camera", (uv[0] - .5), (uv[1] - .5), -1.0);
```
### Field of view
Some features of Octane take into account the camera field of view, for example the placement of geometry widgets and the speed when panning the camera.
If an input parameter is declared as `float fov`, Octane will assume it represents the field of view, and these features will work with the OSL camera.
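A minimal sketch of such a camera (the shader name and metadata ranges are illustrative; the math matches the thin lens example below):

```
shader FovCamera(
    float fov = 45 [[ float min = 1, float max = 180 ]],
    output point pos = P,
    output vector dir = 0)
{
    // Map UV channel 2 to film-plane coordinates in [-1, 1].
    float uv[2];
    getattribute("hit:uv", 2, uv);
    float halfW = tan(radians(min(fov, 179.99) / 2));
    dir = vector("camera", (2 * uv[0] - 1) * halfW,
                           (2 * uv[1] - 1) * halfW, -1.0);
}
```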
### Focal depth and autofocus
If an input parameter is declared as `float focalDepth`, Octane assumes it represents the focal depth. Your camera node will have two input pins for this input variable, `focalDepth` (float) and `autofocus` (bool), and you can use the focus picker tool with the camera.
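A minimal sketch of the declaration (the shader name and metadata are illustrative; the thin lens example below shows the value in actual use):

```
shader DofCamera(
    float focalDepth = 3 [[ float min = 0.01 ]],
    output point pos = P,
    output vector dir = I)
{
    // The node exposes this input as a focalDepth pin plus an
    // autofocus pin; use the value to place the focal plane.
}
```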
## Using the baking camera
Baking cameras don't have access to a position, but they can figure out which geometry covers a given UV point using `_findBakingPrimitive()` (declared in `octane-oslintrin.h`).
Since a baking camera is not positioned, most global variables don't return meaningful values. The UV channels have the same meaning as with other OSL camera nodes.
The following is a shader to bake the surface of a mesh.
```
#include <octane-oslintrin.h>

shader OslCamera(
    output point pos = 0,
    output vector dir = 0,
    output float tMax = 1.0/0.0)
{
    // Find the primitive covering this point on the film plane.
    if (!_findBakingPrimitive(u, v))
    {
        // No geometry here: return an invalid ray so this sample is skipped.
        tMax = 0;
        return;
    }
    // Fetch the position and normal of the baked point, plus an offset.
    float offset;
    getmessage("baking", "N", dir);
    getmessage("baking", "P", pos);
    getmessage("baking", "offset", offset);
    // Start slightly above the surface and aim the ray back at it.
    pos += offset * dir;
    dir = -dir;
}
```
Alternatively, when omitting the last line `dir = -dir`, the baked mesh will work like a custom lens. In both cases the position should be set slightly above the mesh.
## Implementing a basic thin lens camera
This example implements a thin lens camera with a variable focal length and depth of field.
We will use the special input names described above, so the camera supports the built-in features expected of a perspective camera.
```
shader ThinlensCamera(
    float fov = 45
        [[ float min = 0.001, float max = 180,
           float slidermin = 1, float sliderexponent = 4 ]],
    float aperture = 0.01
        [[ float min = 0, float max = 1, float sliderexponent = 4 ]],
    float focalDepth = 3
        [[ float min = 0.01, float max = 1e10, float sliderexponent = 4 ]],
    output point pos = 0,
    output vector dir = 0,
    output float tMax = 1.0/0.0)
{
```
First we can use the provided UV coordinates to calculate a direction:
```
    // camera ray direction along -Z
    float film_uv[2];
    getattribute("hit:uv", 2, film_uv);
    float u1 = 2 * (film_uv[0] - .5);
    float v1 = 2 * (film_uv[1] - .5);
    float invF = tan(radians(min(fov, 179.99) / 2));
    dir = vector(u1 * invF, v1 * invF, -1.0);
```
The start position is the origin by default; we can move this point in the XY plane to implement depth of field. Octane provides two random numbers for this purpose, which can be fetched using `getattribute("camera:dofrandom", dof_uv)`.

These random numbers give you a random point on the unit square. You'll need some code to transform this point to a different shape, for instance a disk.
```
    if (aperture > 0)
    {
        float dof_uv[2];
        getattribute("camera:dofrandom", dof_uv);

        // dof disk
        float dofsn, dofcs;
        sincos(dof_uv[1] * M_2PI, dofsn, dofcs);
        float dofR = aperture * sqrt(dof_uv[0]);
        pos = point(dofcs * dofR, dofsn * dofR, 0);
    }
```
Because we want points at the focal distance to be in focus, we calculate that in-focus point along our direction, and then adjust the direction so the ray from `pos` still passes through it.
```
    // adjust direction
    point target = dir * focalDepth / abs(dir[2]);
    dir = target - pos;
```
Finally we need to transform our ray to world space.
```
    // camera coordinate space to world
    pos = transform("camera", "world", pos);
    dir = transform("camera", "world", dir);
}
```