In many real-world cases, however, learning a single field is not enough. For example, when reconstructing 3D vehicle shapes from Lidar data, it is desirable to have a machine learning model that can work with arbitrary shapes (e.g., a car, a bicycle, a truck, etc.). The solution is to include additional parameters, the latent variables (or latent code) \boldsymbol{z} \in \mathbb{R}^d, to vary the field and adapt it to diverse tasks. Two main strategies exist to obtain the latent code of a given example:

• Encoder: a second neural network, the encoder, maps each example to its latent code in a single forward pass.

• Auto-decoding: each training example has its own latent code, jointly trained with the neural field parameters. When the model has to process new examples (i.e., not originally present in the training dataset), a small optimization problem is solved, keeping the network parameters fixed and learning only the new latent code.

Since the latter strategy requires additional optimization steps at inference time, it sacrifices speed, but keeps the overall model smaller. Moreover, despite being simpler to implement, an encoder may harm the generalization capabilities of the model.
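The auto-decoding strategy can be summarized with a short sketch. The following PyTorch code is illustrative only (the module names, the concatenation-based conditioning, and all hyperparameters are assumptions, not taken from the sources above): it defines a conditional field taking a query coordinate and a latent code, together with the small test-time optimization that fits a fresh latent code while the field weights stay frozen.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ConditionalField(nn.Module):
    """Neural field f(x, z): the query coordinate x is concatenated
    with the latent code z (illustrative conditioning choice)."""
    def __init__(self, coord_dim=3, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # e.g., a signed-distance value
        )

    def forward(self, x, z):
        # Broadcast the single latent code over all N query points.
        z = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z], dim=-1))

def infer_latent(field, coords, targets, latent_dim=64, steps=300, lr=1e-2):
    """Auto-decoding at inference: freeze the field weights and solve a
    small optimization problem over a new latent code only."""
    for p in field.parameters():
        p.requires_grad_(False)
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(field(coords, z), targets)
        loss.backward()
        opt.step()
    return z.detach()
</syntaxhighlight>

During training, the per-example codes would typically be stored in a lookup table (e.g., an nn.Embedding) and optimized jointly with the field weights.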
Given the latent code, a further design choice is how to condition the neural field on it. The most general approach is the hypernetwork, which approximates the mapping \Gamma(\boldsymbol{z}) from the latent code to the neural field parameters with a neural network \hat\Gamma_{\boldsymbol{\gamma}}(\boldsymbol{z}), where \boldsymbol{\gamma} are the trainable parameters of the hypernetwork. This approach makes it possible to learn the optimal mapping from latent codes to neural field parameters. However, hypernetworks incur a larger computational and memory complexity, due to their large number of trainable parameters. Hence, leaner approaches have been developed. For example, in Feature-wise Linear Modulation (FiLM), the hypernetwork only produces scale and bias coefficients for the layers of the neural field.
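As a concrete illustration of FiLM conditioning, the sketch below (PyTorch; all names and sizes are illustrative assumptions) uses a single linear layer as the hypernetwork, emitting just one scale and one bias vector per hidden layer of the field instead of its full weight matrices.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class FiLMField(nn.Module):
    """Neural field whose hidden activations are modulated by FiLM:
    the hypernetwork outputs per-layer scale and bias coefficients."""
    def __init__(self, coord_dim=3, latent_dim=64, hidden=128, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(coord_dim if i == 0 else hidden, hidden)
             for i in range(n_layers)]
        )
        self.out = nn.Linear(hidden, 1)
        # Hypernetwork: latent code -> (scale, bias) per hidden layer,
        # far fewer outputs than a full set of field weights.
        self.film = nn.Linear(latent_dim, 2 * hidden * n_layers)

    def forward(self, x, z):
        # z is a single latent vector of shape (latent_dim,).
        coeffs = self.film(z).view(len(self.layers), 2, -1)
        h = x
        for layer, (scale, bias) in zip(self.layers, coeffs):
            h = torch.relu(scale * layer(h) + bias)
        return self.out(h)
</syntaxhighlight>

Replacing self.film with a network that outputs every weight of the field would recover the general hypernetwork, at a much larger parameter cost.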
=== Meta-learning ===

Instead of relying on the latent code to adapt the neural field to a specific task, it is also possible to exploit gradient-based meta-learning. In this case, the neural field is seen as the specialization of an underlying meta-neural-field, whose parameters are modified to fit the specific task through a few steps of gradient descent. An extension of this meta-learning framework is the CAVIA algorithm, which splits the trainable parameters into context-specific and shared groups, improving parallelization and interpretability while reducing meta-overfitting. This strategy is similar to the auto-decoding conditional neural field, but the training procedure is substantially different.
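A minimal sketch of the adaptation step may help (PyTorch; the function name and hyperparameters are hypothetical). It shows only the inner loop that specializes a meta-field to one task; the outer meta-training loop, which in algorithms such as MAML backpropagates through these few steps, is omitted.

<syntaxhighlight lang="python">
import copy
import torch
import torch.nn as nn

def adapt(meta_field, coords, targets, inner_steps=3, inner_lr=1e-2):
    """Specialize the meta-neural-field to one task with a few steps of
    gradient descent. Under CAVIA, only a small context-specific group of
    parameters would be updated here, with the shared group kept fixed."""
    field = copy.deepcopy(meta_field)  # start from the meta-parameters
    opt = torch.optim.SGD(field.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(field(coords), targets)
        loss.backward()
        opt.step()
    return field  # the task-specific neural field
</syntaxhighlight>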
== Applications ==