The ability of a material to conduct heat depends on its geometry and chemical composition. In infinite systems, this property is described
by the bulk thermal conductivity tensor, \( \kappa_{\mathrm{bulk}} \). In non-homogeneous materials, the
effective thermal conductivity tensor,
\( \kappa_{\mathrm{eff}} \), is computed by solving Fourier's law and the steady-state continuity equation,
\begin{equation}\label{eq:fourier}
\nabla \cdot \left[\kappa_{\mathrm{bulk}} \nabla T(\mathbf{r}) \right] = 0.
\end{equation}
In practice, given a geometry, \( \kappa_{\mathrm{eff}} \) is evaluated by solving Eq. \eqref{eq:fourier} for different directions
of the applied temperature gradient. Once Eq. \eqref{eq:fourier} is discretized in space (here we use the
finite-volume method), it can be cast into the linear system
\(\mathbf{A} \mathbf{x}= \mathbf{b}\). Finally, the effective thermal conductivity along the applied direction is obtained as
\(\kappa = \alpha \mathbf{b}^T \mathbf{A}^{-1}\mathbf{b} \), with \(\alpha \) a normalization constant.
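For illustration, a minimal Python sketch of this step may look as follows; the one-dimensional matrix \(\mathbf{A}\), the right-hand side \(\mathbf{b}\), and the value of \(\alpha\) are placeholders rather than the actual finite-volume assembly used in the App:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder inputs: in the App, A and b come from the finite-volume
# discretization of the Fourier equation above, and alpha from the
# chosen normalization.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
alpha = 1.0 / n

# Solve A x = b and evaluate kappa = alpha * b^T A^{-1} b = alpha * b^T x
x = spla.spsolve(A, b)
kappa = alpha * b @ x
\end{verbatim}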
In this App, we are interested in
inverse design, i.e. the identification of a material
with a prescribed thermal conductivity, \( \tilde{\kappa} \). To this end, we adopt a
density-based topology optimization approach, where a material
is described in terms of
pixels and a fictitious density, \( \boldsymbol \rho\). The optimization problem reads
\begin{eqnarray}\label{eq:opt}
&&\min_{\boldsymbol \rho} g(\boldsymbol \rho), \nonumber \\
&&\mathrm{subject\ to}\quad 0\le \boldsymbol \rho \le 1,
\end{eqnarray}
where the cost function is the Frobenius norm of the mismatch, \(g(\boldsymbol \rho) = ||\kappa- \tilde{\kappa}||\).
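A minimal sketch of the cost function, restricted to the diagonal components \(\kappa_{xx}\) and \(\kappa_{yy}\) that enter the gradient below, is given here (function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def cost(kappa_xx, kappa_yy, target_xx, target_yy):
    """Frobenius norm of the mismatch between the computed and the
    prescribed effective thermal conductivity (diagonal components)."""
    d_xx = target_xx - kappa_xx
    d_yy = target_yy - kappa_yy
    return np.sqrt(d_xx**2 + d_yy**2)
\end{verbatim}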
The optimization requires the sensitivity of \(g\) with respect to \( \boldsymbol \rho\),
\begin{equation}
\frac{\partial g }{\partial \boldsymbol \rho } = \frac{1}{g}\left[\Delta \kappa_{xx}\frac{\partial \Delta \kappa_{xx}}{\partial \boldsymbol \rho}+\Delta \kappa_{yy}\frac{\partial \Delta \kappa_{yy}}{\partial \boldsymbol \rho} \right],
\end{equation}
where \(\Delta \kappa_{ii} = \tilde{\kappa}_{ii} - \kappa_{ii} \). To proceed with optimization, we thus need to evaluate \(\frac{\partial \kappa_{ii}}{\partial \boldsymbol \rho} \).
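Given the per-pixel sensitivities \(\partial \kappa_{ii}/\partial \boldsymbol \rho\) (obtained with the adjoint method described next), the chain rule above amounts to a few lines; as before, the names are illustrative:
\begin{verbatim}
def cost_gradient(g, d_xx, d_yy, dkxx_drho, dkyy_drho):
    # g: current cost value; d_ii = target_ii - kappa_ii
    # dk*_drho: per-pixel sensitivities d kappa_ii / d rho
    # Note that d(Delta kappa_ii)/d rho = - d kappa_ii / d rho.
    return -(d_xx * dkxx_drho + d_yy * dkyy_drho) / g
\end{verbatim}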
To this end, we use the
adjoint method, which is derived by analytically differentiating the linear system mentioned above with respect to \( \boldsymbol \rho \).
As a result, the gradient can be computed by solving just one additional linear system, the adjoint of the original one. As detailed
here, in our case the adjoint solution is simply the solution of the original system with a minus sign.
The resulting gradient is
\begin{equation}
\frac{d \kappa_{ii}}{d \boldsymbol \rho} = \frac{\partial \kappa_{ii}}{\partial \boldsymbol \rho} -
\mathbf{x}^T\frac{\partial \mathbf{b}}{\partial \boldsymbol \rho} + \alpha \mathbf{x}^T\frac{\partial \mathbf{A}}{\partial\boldsymbol \rho} \mathbf{x}, \quad i \in \{x,y\}
\end{equation}
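A schematic transcription of this expression is given below; the data layout (one sparse matrix \(\partial \mathbf{A}/\partial \rho_k\) and one vector \(\partial \mathbf{b}/\partial \rho_k\) per pixel) is an assumption made for readability, whereas in the App these terms are assembled analytically and only a few entries per pixel are nonzero:
\begin{verbatim}
import numpy as np

def kappa_sensitivity(x, alpha, dk_partial, db_drho, dA_drho):
    # x:          forward solution of A x = b
    # dk_partial: explicit partial derivative of kappa_ii per pixel
    # db_drho:    (n_pixels, n) array, row k is d b / d rho_k
    # dA_drho:    list of sparse matrices, dA_drho[k] = d A / d rho_k
    grad = np.empty(len(dA_drho))
    for k, dA in enumerate(dA_drho):
        grad[k] = dk_partial[k] - db_drho[k] @ x + alpha * x @ (dA @ x)
    return grad
\end{verbatim}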
where all the partial derivatives are computed analytically. As for the optimization algorithm, we choose the
method of moving asymptotes (MMA), implemented in the free, open-source
software
NLopt. To avoid checkerboard patterns and to drive the final solution toward a binary design, we adopt
filtering and projection.
Specifically, we use a conic filter and a projection steepness parameter (commonly denoted \(\beta\)) that doubles every 30 iterations.
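A sketch of a conic filter followed by a smoothed-Heaviside projection is shown below; the specific projection function, the threshold \(\eta = 0.5\), and the periodic handling of the boundaries are assumptions and may differ from the actual implementation:
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def conic_filter(rho, radius):
    """Filter the pixel densities with a conic (linearly decaying) kernel."""
    n = int(np.ceil(radius))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    w = np.maximum(0.0, 1.0 - np.sqrt(x**2 + y**2) / radius)
    w /= w.sum()
    return convolve(rho, w, mode="wrap")  # periodic unit cell assumed

def project(rho_f, beta, eta=0.5):
    """Smoothed Heaviside projection; larger beta gives a more binary design."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_f - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

# Example beta schedule: doubling every 30 iterations
# beta = beta0 * 2 ** (iteration // 30)
\end{verbatim}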