Thursday, September 10, 2015

Radiance and BSDF as measure

   Radiance and the BSDF are two important concepts in ray tracing. They are normally treated as ordinary functions: radiance as a function on ray space, and the BSDF as a function taking two rays at a point as its arguments. In this article I will try to convince you that they are not functions but rather measures, and explain why it is beneficial to think about them as such.

My reason for doing this is not mathematical purity but purely practical. I was reading the implementation of a ray tracer and the code was littered with special cases for specular surfaces. For specular surfaces the notion of a BSDF as a function breaks down, because the BSDF is a Dirac delta function. The standard theory of ray tracing does not fully emphasize that a BSDF can be a Dirac delta function. I believe that treating the BSDF as a measure in the theory can lead to better code design. The same goes for radiance: for a directional light source, for example, the radiance again has to be a Dirac delta function. Thus we will consider radiance as a measure too.

So please do not expect something radically new or important. It is just a small refinement of ray-tracing theory which I will use as a guiding principle for better code design.

Review of standard theory

   We adopt most of our notation and terminology from Veach's PhD thesis. Thus \(\mathcal{M}\) is the scene geometry, \(S^2\) is the unit sphere, and \(\mathcal{R} = \mathcal{M}\times S^2\) is the ray space. Radiance \(L\) and the BSDF \(f\) are the following functions:
\begin{align}
L: \mathcal{R} & \rightarrow [0,\infty], \\
f: \mathcal{M} \times S^2 \times S^2 & \rightarrow [0,\infty].
\end{align}
The backbone of ray tracing is the scattering equation
\begin{align}
L_o(x,\omega_o) = \int_{S^2} L_i(x,\omega_i) f(x,\omega_i,\omega_o) \, d\sigma^\perp_x(\omega_i)
\end{align}
This equation is the root of all the problems. What if the light comes in from only one direction \(\omega'_i\)? Then \(L_i(x,\omega_i)\) is zero for all \(\omega_i \neq \omega'_i\), so the whole integral should be zero, because \( \{\omega'_i \} \) is a single point, which has zero measure. But normally we say: oh, this is just a special case, the value of the integral is \( L_i(x,\omega'_i) f(x,\omega'_i,\omega_o)\). The same goes for specular BSDFs. We fix these problems by defining radiance and the BSDF as measures.

Radiance and BSDF as measure

   What is radiance? It is energy flux per unit solid angle per unit area. It is a natural measure on ray space which gives you the energy flux for a given set of points and directions. We denote radiance as a measure with the letter \(\ell\) to differentiate it from radiance as a function. With radiance as a measure we have no problem defining the radiance of a laser at point \(x\) pointing in direction \(\omega\): such radiance is just a Dirac measure on ray space, \(\ell = I \delta_{(x,\omega)}\), where \(I\) is the power of the laser.

What is a BSDF? A BSDF tells you what happens to light when it hits a surface, i.e. it gives you the amount of light that goes off in a given direction. Look at this from the perspective of one photon. A photon hits the surface at point \(x\) from direction \(\omega_i\); then the BSDF gives you the probability of that photon continuing in direction \(\omega_o\). A probability is a special case of a measure, therefore the BSDF gives a probability measure for each ray \((x,\omega_i)\). In the following we will denote the BSDF as a measure with \(\rho\).

To sum up, we think of radiance \(\ell\) and the BSDF \(\rho\) as
\begin{align}
\ell: \mathcal{B}(\mathcal{R}) & \rightarrow [0,\infty),  \\
\rho: \mathcal{R} \times  \mathcal{B}(S^2) & \rightarrow [0,\infty),
\end{align}
where \( \mathcal{B}(X) \) is the set of Borel subsets of \( X\). In words, \(\ell(A)\) is the energy flux through the set of rays \(A \subset \mathcal{R}\), and \( \rho((x,\omega),B)\) is the probability that a photon which hit the surface at point \(x\) from direction \(\omega\) continues in some direction in \(B\subset S^2\).

The scattering equation now takes the form
\begin{align}
\ell_o(A) = \int_{\mathcal{R}} \rho( (x,\omega_i), A^x) \, d\ell_i(x,\omega_i),
\end{align}
where \(A\subset \mathcal{R}\) and \(A^x = \{ \omega \in S^2 : (x,\omega) \in A\} \). This form of the scattering equation does not suffer from the problems mentioned previously, but it looks a little bit cryptic. To make more sense of it we will use the Lebesgue decomposition theorem.
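To see what the measure form buys us, here is a small sketch (my own illustration, not code from any real ray tracer; the names `scatter` and `mirror_rho` are invented): when the incoming radiance \(\ell_i\) is a finite sum of weighted Dirac rays, e.g. the laser above, the scattering integral collapses to a weighted sum, and a perfect mirror is handled by the same code path as any other BSDF.

```python
def reflect(w):
    # mirror reflection about the z = 0 tangent plane (surface normal = +z)
    return (w[0], w[1], -w[2])

def mirror_rho(x, w_i, B):
    # specular BSDF as a measure: all mass sits on one reflected direction,
    # so the measure of a direction set B is 1 if B contains it, else 0
    return 1.0 if B(reflect(w_i)) else 0.0

def scatter(dirac_rays, rho, A):
    """Evaluate l_o(A) when l_i is a finite sum of weighted Dirac rays.
    dirac_rays: list of (weight, x, w_i);
    rho(x, w_i, B): measure that the BSDF assigns to a direction set B,
    where B is given as a membership test;
    A(x, w): membership test for the output ray set."""
    total = 0.0
    for weight, x, w_i in dirac_rays:
        A_x = lambda w, x=x: A(x, w)  # the slice A^x of the ray set A
        total += weight * rho(x, w_i, A_x)
    return total

# a laser of power 2.0 hitting the origin straight down
rays = [(2.0, (0.0, 0.0, 0.0), (0.0, 0.0, -1.0))]
upward = lambda x, w: w[2] > 0.0  # A = all upward-going rays
print(scatter(rays, mirror_rho, upward))  # -> 2.0
```

The same `scatter` works unchanged whether `rho` is specular or diffuse; nothing in the integrator needs to branch on the surface type.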

Going back

   By the Lebesgue decomposition theorem we can decompose \(\rho(r, \cdot)\) with respect to the projected solid angle measure \(\sigma^\perp\):
\begin{align}
\rho(r,\cdot) &= \rho_d(r,\cdot) + \rho_s(r,\cdot) \\
\rho_d(r,\cdot) &\ll \sigma^\perp \\
\rho_s(r,\cdot) &\perp \sigma^\perp
\end{align}
\(\rho_d\) is the familiar diffuse part of the BSDF and \(\rho_s\) is the specular part. We can even take the Radon–Nikodym derivative of \(\rho_d\) with respect to \( \sigma^\perp \) to obtain the standard BSDF function,
\begin{align}
\frac{d \rho_d((x,\omega_i),\cdot)}{d \sigma^\perp}(\omega_o) = f(x,\omega_i,\omega_o).
\end{align}
The specular part \(\rho_s\) is in most cases just a weighted sum of Dirac measures over one or two directions, the reflection and refraction directions; occasionally it can be supported on more directions. In complete generality it can be quite a nasty measure, for example a measure on a Cantor set, but who on Earth would have a material which reflects in the shape of a Cantor set or a Sierpinski triangle?
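The decomposition also hints at a code design: store \(\rho\) as one record holding the diffuse mass and a list of specular atoms, and importance-sample it by first picking a component with probability proportional to its mass. A minimal sketch (my own illustration; every name here is invented):

```python
import random

def sample_bsdf(diffuse_mass, atoms, sample_diffuse, rng=random):
    """Pick an outgoing direction from rho = rho_d + rho_s.
    diffuse_mass: total mass of the diffuse part rho_d;
    atoms: list of (mass, direction) Dirac atoms of the specular part rho_s;
    sample_diffuse(): draws a direction from the diffuse part.
    Returns (direction, mass of the chosen component)."""
    total = diffuse_mass + sum(m for m, _ in atoms)
    u = rng.uniform(0.0, total)
    if not atoms or u < diffuse_mass:
        return sample_diffuse(), diffuse_mass
    u -= diffuse_mass
    for mass, direction in atoms:
        if u < mass:
            return direction, mass
        u -= mass
    return atoms[-1][1], atoms[-1][0]  # guard against rounding at u == total
```

A perfect mirror is simply `diffuse_mass = 0` with one atom, and a glossy-plus-specular material is the same record with both parts nonzero; the sampling loop never asks "is this surface specular?".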

What is next?

   With this decomposition of the BSDF we can define diffuse and specular scattering operators D, S. They lead naturally to Heckbert's path notation. Next we need to find out what importance is: is it a function? Is it a measure? What is the adjoint of the BSDF? What do we do with \(\ell\)? Do we want to decompose it as well? How do we define a contribution measure on whole paths? There are many questions left to be answered :)

Tuesday, August 27, 2013


Here is my second take on explaining my multiparent script.


What does it do? A standard technique in rigging is linking. You link one object A to another object B so that the position and rotation of object A stay fixed relative to B when you manipulate object B. Quite often one would like to do the converse: manipulate A and keep the position and orientation of B fixed relative to A. Normally you have to destroy the original link (A to B) and link B to A. Here I propose a solution where you don't have to relink those objects. But everything comes at a price: you lose direct control of those objects, but more on that later.

OK, we start by developing a link constraint between two objects A and B such that you can rotate the center of A around the center of B and vice versa. The orientation of A and B won't be affected by such a rotation; this is necessary because you need some unconstrained controls.
To achieve that we introduce a helper object H to which the centers of A and B will be fixed, and the rotations of A and B will somehow affect the position and orientation of H.

Next we face the difficulty that the positions of A and B at time \(t\) depend on the rotation history of those objects. Have a look at this example: http://www.youtube.com/watch?v=lOfaFJq5Wqk . As you can see, the rotations of A and B at time \(t\) cannot fully describe the system at time \(t\); you have to know the rotation history of A and B to fully determine the positions of those objects. So we will define the exact positions of A and B at some time \(t_0\), and to find their positions at time \(t\) we will march frame by frame from time \(t_0\) to time \(t\).

Unlike in the previous post I won't use quaternions, except when combining rotations; instead I will propose a different approach using the matrix exponential.

\begin{align}
& x_A(t) \quad \text{position of object A} \\
& x_B(t) \quad \text{position of object B} \\
& x_H(t) \quad \text{position of object H} \\
& R_A^t \quad \text{rotation matrix of object A at time t} \\
& R_B^t \quad \text{rotation matrix of object B at time t} \\
& R_H^t \quad \text{rotation matrix of object H at time t}
\end{align}

As already said, the centers of A and B are fixed relative to H, so
\begin{align}
x_A(t) &= x_H(t) + R_H^tp_A \\
p_A &= (R_H^{t_0})^{-1}(x_A(t_0) - x_H(t_0))\\
x_B(t) &= x_H(t) + R_H^tp_B \\
p_B &= (R_H^{t_0})^{-1}(x_B(t_0) - x_H(t_0))
\end{align}

Now if we rotate A a little bit we want B to move, but B is fixed relative to H, so it is sufficient to move H. So just for a moment forget about B and imagine that H is linked to A. Then if we rotate A, the position of H changes as
\begin{align}
x_H( t + \Delta t )  = x_A( t ) + R^{t+\Delta t}_A (R^t_A)^{-1} (x_H(t)-x_A(t))
\end{align}
and the orientation of H will change as
\begin{align}
R_H^{t + \Delta t } =  R^{t+\Delta t}_A (R^t_A)^{-1} R^t_H
\end{align}
Now if we take B's contribution into account we get the equation

\begin{align}
x_H( t + \Delta t )  = x_H( t ) + \left(R^{t+\Delta t}_A (R^t_A)^{-1} - I\right) (x_H(t)-x_A(t))  + \left(R^{t+\Delta t}_B (R^t_B)^{-1} - I\right) (x_H(t)-x_B(t))
\end{align}
But there is trouble with the orientation update of H, because matrix multiplication is not commutative, so we don't know in which order to multiply those matrices. For now we will just write something down and explain later what \( \{ \cdot , \cdot \} \) means.
\begin{align}
R_H^{t + \Delta t } =  \{R^{t+\Delta t}_A (R^t_A)^{-1}, R^{t+\Delta t}_B (R^t_B)^{-1} \}R^t_H
\end{align}

What \( \{ \cdot , \cdot \} \) does is take two rotations and produce a new rotation which somehow combines the two. We require \( \{ \cdot , \cdot \} \) to satisfy these identities:
\begin{align}
\{ R_1 , R_2 \}  &= \{ R_2 , R_1 \} \\
\{R_1, I \} &= R_1
\end{align}
where \(I\) is the identity (i.e. no rotation). The first says that the result does not depend on the order of the matrices, and the second says that combining some rotation with the identity should give the original rotation back.

Now, to define \( \{ \cdot , \cdot \} \) without quaternions, we need to know a little bit about rotations and the matrix exponential.

If you have a rotation around axis \( n \) by angle \( \omega \), then its rotation matrix \( R \) can be expressed as
$$
R  = e^{\omega [n]_\times} = \sum_{k=0}^\infty \frac{ \omega^k}{k!} [n]_\times^k
$$
where \( [n]_\times \) is the cross-product matrix.
Conversely, if you have an antisymmetric matrix \(A\), then \(e^A\) is a rotation matrix.

We are ready to define \( \{ \cdot , \cdot \} \). Let's take two rotation matrices \( R_1 = e^{A_1}, R_2 = e^{A_2} \). Then
$$
 \{ R_1, R_2 \} =  \{ e^{A_1}, e^{A_2} \}  = e^{A_1+A_2}
$$
Observe that \(A_1+A_2\) is again an antisymmetric matrix, so \(e^{A_1+A_2}\) is a rotation matrix. Next, \(A_1+A_2 = A_2+A_1 \), therefore \( \{ e^{A_1}, e^{A_2} \} =  \{ e^{A_2}, e^{A_1} \}\). Lastly, \( I = e^0 \), so \( \{ e^{A_1}, e^0 \} = e^{A_1 + 0} = e^{A_1} \). So \( \{ \cdot , \cdot \} \) satisfies all the identities we wanted.
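Numerically, \( \{ \cdot , \cdot \} \) can be computed by going through the matrix logarithm: recover \(A_1, A_2\) from \(R_1, R_2\), add them, and exponentiate with Rodrigues' formula. A sketch in Python with NumPy (my own illustration; it assumes rotation angles stay below \(\pi\), where the logarithm is unambiguous):

```python
import numpy as np

def cross_matrix(v):
    # [v]_x, the antisymmetric cross-product matrix of a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(A):
    # Rodrigues' formula: e^A for an antisymmetric A = theta * [n]_x
    w = np.array([A[2, 1], A[0, 2], A[1, 0]])
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = A / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def log_so3(R):
    # inverse of exp_so3 for angles in [0, pi); returns an antisymmetric matrix
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros((3, 3))
    return theta / (2.0 * np.sin(theta)) * (R - R.T)

def combine(R1, R2):
    # { R1, R2 } = e^(A1 + A2)
    return exp_so3(log_so3(R1) + log_so3(R2))
```

Both required identities are easy to check numerically: `combine(R, np.eye(3))` returns `R`, and `combine(R1, R2)` equals `combine(R2, R1)` even for non-commuting rotations.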

How to program this, then?


What are the inputs?
We have to specify the time \(t_f\) at which we want the positions and orientations of A, B, H. Next we have to know the positions of A, B, H at time \(t_0\) and the orientation of H at time \(t_0\). Then we have to know the whole history of the rotation matrices of A and B from time \(t_0\) to time \(t_f\).
What is the output?
Positions of A, B, H at time \(t_f\) and the orientation of H.

So the code would be something like this:

1. Precalculate the values \(p_A,p_B\).

2. Then use the update equations, starting at time \(t_0\) and ending at time \(t_f\):
\begin{align}
x_H( t + \Delta t )  &= x_H( t ) + R_H^t(p_A + p_B) - R^{t+\Delta t}_A (R^t_A)^{-1} R_H^t p_A  - R^{t+\Delta t}_B (R^t_B)^{-1}R_H^t p_B\\
R_H^{t + \Delta t } &=  \{R^{t+\Delta t}_A (R^t_A)^{-1}, R^{t+\Delta t}_B (R^t_B)^{-1} \}R^t_H
\end{align}

3. From the values \( x_H(t_f), R_H^{t_f} \) calculate \(x_A(t_f),x_B(t_f)\).
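As a sanity check of the whole procedure, here is a minimal numeric sketch (my own illustration, not the actual plugin code) restricted to the planar case: every rotation is about the z axis, so a rotation is just an angle and \( \{ \cdot , \cdot \} \) reduces to adding angles. The position update is written in increment form, so \(x_H\) stays put on frames where neither A nor B rotates.

```python
import math

def rot(theta, v):
    # rotate a 2-vector v by angle theta
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def march(x_A, x_B, x_H, theta_H, d_thetas):
    """March frame by frame from t0 to tf.
    d_thetas: per-frame rotation increments (d_theta_A, d_theta_B)."""
    # step 1: precalculate the offsets p_A, p_B of A and B in H's frame
    p_A = rot(-theta_H, (x_A[0] - x_H[0], x_A[1] - x_H[1]))
    p_B = rot(-theta_H, (x_B[0] - x_H[0], x_B[1] - x_H[1]))
    for dA, dB in d_thetas:
        # step 2: move H by the rotation increments of A and B
        vA = (x_H[0] - x_A[0], x_H[1] - x_A[1])
        vB = (x_H[0] - x_B[0], x_H[1] - x_B[1])
        rA, rB = rot(dA, vA), rot(dB, vB)
        x_H = (x_H[0] + (rA[0] - vA[0]) + (rB[0] - vB[0]),
               x_H[1] + (rA[1] - vA[1]) + (rB[1] - vB[1]))
        theta_H += dA + dB  # planar rotations commute: combining = adding angles
        # step 3: recover the centers of A and B from H
        x_A = (x_H[0] + rot(theta_H, p_A)[0], x_H[1] + rot(theta_H, p_A)[1])
        x_B = (x_H[0] + rot(theta_H, p_B)[0], x_H[1] + rot(theta_H, p_B)[1])
    return x_A, x_B, x_H, theta_H

# rotate B by pi over four frames while A's control stays still
xA, xB, xH, th = march((0.0, 0.0), (2.0, 0.0), (1.0, 0.0), 0.0,
                       [(0.0, math.pi / 4)] * 4)
```

In this run `xB` stays at `(2, 0)` while `xA` ends up at `(4, 0)`, i.e. A's center has rotated by \(\pi\) around B's center, which is exactly the behavior the constraint is after.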


Sunday, June 9, 2013


Multi-parent constraint

OK, what is the deal? For example, when you animate a foot, you sometimes want to rotate it about the forefoot and sometimes about the hindfoot. Or you want to animate things like this: http://www.youtube.com/watch?v=lOfaFJq5Wqk . I have a simple solution.

I assume that the reader is familiar with quaternions and their application to rotation; if not, please see http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation

So let's get mathematical. Beware, everything is a quaternion! (I will try to stay consistent with the notation of the wiki page on quaternions; e.g. conjugation is denoted with a star.)
\begin{align}
t & \dots \text{time} \\
p_0(t) & \dots \text{position of the foot} \\
q_0(t) & \dots \text{quaternion which specifies the rotation of the foot around } p_0 \\
& \text{now for  } \, i>0 \\
p_i(t) & \dots \text{positions of points you want to rotate around} \\
q_i(t) & \dots \text{quaternion which specifies the rotation around the corresponding point}
\end{align}

Suppose you know \(p_0,q_0,p_i\) at some time \(t_0\) and \(q_i(t)\) for all \(t>t_0\). Then you want to find \(p_0(t),q_0(t),p_i(t)\). But what should they satisfy?

First, denote by \(v_i\) the position of \(p_i\) relative to \(p_0\) at time \(t\):
$$ v_i(t) = p_i(t)-p_0(t) $$
Points \(p_i\) should stay fixed relative to the foot. Therefore they have to satisfy:
$$ p_i(t) = p_0(t) + R_0(t)v_i(t_0)R_0^*(t)$$

where \(R_i(t) = q_i(t)q_i^{-1}(t_0)\), which represents the change in rotation from time \(t_0\) to time \(t\).

Now imagine a situation where you rotate only about point \(p_1\). So \(p_i,q_i\) for \(i>1\) and \(p_1\) are constant in time, and only \(q_1\) changes in time. What do we get? We rotate point \(p_0\) around point \(p_1\):

$$ p_0(t) = p_1(t_0) - R_1(t) v_1(t_0) R_1^*(t)$$
$$ q_0(t) = R_1(t)q_0(t_0) $$

Now replace \(p_1(t_0)\) using \( p_1(t_0) = p_0(t_0) + v_1(t_0)  \) and we get:

$$p_0(t)-p_0(t_0) =  v_1(t_0) - R_1(t) v_1(t_0) R_1^*(t) $$

So point \(p_0\) changes by \(v_1(t_0) - R_1(t) v_1(t_0) R_1^*(t) \) thanks to the rotation about point \(p_1\). We assumed that \(p_1\) is constant. When \(p_1\) varies in time, the previous equation is only "sort of OK" (it does not accurately represent the rotation of \(p_0\) around the moving point \(p_1\)), and only for small times \(\Delta t = t-t_0\).

Now suppose that \(p_i,q_i\) all vary in time and \(\Delta t\) is small. Then we can generalize our update equation:

$$p_0(t_0+\Delta t) - p_0(t_0) = \sum_i v_i(t_0) - R_i(t_0,\Delta t) v_i(t_0) R_i^*(t_0,\Delta t)$$

where \(R_i(t,\Delta t) = q_i(t+\Delta t)q_i^{-1}(t) \). But how do we update \(q_0\)? We could do \(q_0(t+\Delta t) = R_1(t,\Delta t)\dots R_n(t,\Delta t)q_0(t) \), but that depends on the order of the \(R_i\), which is undesirable. So we do better with:
$$q_0(t+\Delta t) = \{R_i(t,\Delta t)\}_i q_0(t) $$

where \( \{R_i(t,\Delta t)\}_i \) is what I call the normalized anticommutator, defined as follows:

If \(a_1,\dots,a_n\) are any quaternions, then

$$ \{a_i\}_i =\frac{\sum_{ \sigma \in \Pi_n } a_{\sigma(1)} \dots a_{\sigma(n)}}{||\sum_{ \sigma \in \Pi_n } a_{\sigma(1)} \dots a_{\sigma(n)}||} $$

where \( \Pi_n \) is the set of all permutations of size \(n\). I think the normalized anticommutator is the most convenient way to combine \(n\) rotations and get a rotation again, and quaternions are the right representation for it. The problem with matrices is that when you add two rotation matrices you hardly ever get a rotation matrix, and finding the closest rotation matrix to that sum is just pain. With quaternions it is easy: just add them up and normalize.
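A direct transcription of the normalized anticommutator (my own sketch; note the sum can cancel to zero, e.g. for two opposite half-turns, in which case the normalization is undefined):

```python
import itertools
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def anticommutator(quats):
    # sum the products over all orderings, then normalize back to unit length
    total = (0.0, 0.0, 0.0, 0.0)
    for perm in itertools.permutations(quats):
        prod = (1.0, 0.0, 0.0, 0.0)
        for q in perm:
            prod = qmul(prod, q)
        total = tuple(t + p for t, p in zip(total, prod))
    norm = math.sqrt(sum(t * t for t in total))
    return tuple(t / norm for t in total)
```

Combining a rotation with the identity gives the rotation back, and the result is independent of the order of the arguments, which are exactly the two identities required of this kind of combination.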

We are almost finished. We just need to polish those equations a little bit.

Denote \(t_n = t_0 + n \Delta t\); then our update equations look like this:

\begin{align*}
v_i(t_{n+1})& = p_i(t_{n+1})-p_0(t_{n+1}) = R_0(t_n,\Delta t) v_i(t_n) R_0^*(t_n,\Delta t)\\
p_0(t_{n+1}) &= p_0(t_n) + \sum_i v_i(t_n) - R_i(t_n,\Delta t) v_i(t_n) R_i^*(t_n,\Delta t) \\
q_0(t_{n+1}) &= \{R_i(t_n,\Delta t)\}_i q_0(t_n)  \\
p_i(t_n) &= p_0(t_n) + R_0(t_0,t_n-t_0)v_i(t_0)R_0^*(t_0,t_n-t_0)
\end{align*}

Hooray! Finished.

I implemented it to Autodesk Maya. You can download the plugin here:
http://uloz.to/xgp17H2G/multiparent-zip
or
http://www.4shared.com/zip/MqwiOuUt/multiParent_1.html

Watch video how to use it:
http://www.youtube.com/watch?v=5Oip-YiKuik

I hope you like it! If you find any mistakes please let me know ;)

Further investigation:

Send \(\Delta t\) to zero, obtain differential equations from those update equations, and analyze their behavior!

Tuesday, May 14, 2013

So here I wrote one half:
$$ \frac{1}{2} $$