Best Paper Awards
We are pleased to announce the Best Paper award and the two Honorable Mentions of VMV 2019.
Best Paper
Stochastic Convolutional Sparse Coding
Jinhui Xiong, Peter Richtarik, and Wolfgang Heidrich
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Abstract:
State-of-the-art methods for Convolutional Sparse Coding usually employ Fourier-domain solvers in order to speed up the convolution operators. However, this approach is not without shortcomings. For example, Fourier-domain representations implicitly assume circular boundary conditions and make it hard to fully exploit the sparsity of the problem as well as the small spatial support of the filters. In this work, we propose a novel stochastic spatial-domain solver, in which a randomized subsampling strategy is introduced during the learning of the sparse codes. We then combine the proposed strategy with online learning, scaling the CSC model up to very large sample sizes. In both cases, we show experimentally that the proposed subsampling strategy, with a reasonable choice of the subsampling rate, outperforms state-of-the-art frequency-domain solvers in terms of execution time without losing learning quality. Finally, we evaluate the effectiveness of the over-complete dictionary learned from large-scale datasets, which demonstrates an improved sparse representation of natural images owing to the richer set of learned image features.
DOI: 10.2312/vmv.20191317
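To give a feel for the core idea, the sketch below applies a generic ISTA-style sparse-code update only at a randomly chosen subset of spatial positions. It is a toy spatial-domain illustration with made-up hyperparameters (step, lam, subsample_rate), not the authors' solver or dictionary-learning pipeline.

    import numpy as np
    from scipy.signal import convolve, correlate

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def stochastic_code_update(x, filters, codes, step=0.1, lam=0.05,
                               subsample_rate=0.3, rng=None):
        """One ISTA-style update of the sparse codes z_k in the CSC model
        x ~ sum_k d_k * z_k, applied only at a random subset of spatial
        positions.  Toy sketch of a stochastic spatial-domain update;
        boundary effects of the 'same'-mode convolutions are ignored."""
        rng = np.random.default_rng() if rng is None else rng
        recon = sum(convolve(z, d, mode="same") for d, z in zip(filters, codes))
        residual = x - recon
        # Random spatial mask: only these positions are updated this iteration.
        mask = rng.random(x.shape) < subsample_rate
        new_codes = []
        for d, z in zip(filters, codes):
            grad = -correlate(residual, d, mode="same")   # gradient of the data term
            z_new = soft_threshold(z - step * grad, step * lam)
            new_codes.append(np.where(mask, z_new, z))    # update subsampled positions only
        return new_codes

    # Tiny usage example on a random 1-D signal.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(256)
    filters = [rng.standard_normal(11) for _ in range(4)]
    codes = [np.zeros_like(x) for _ in filters]
    for _ in range(20):
        codes = stochastic_code_update(x, filters, codes, rng=rng)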
Honorable Mention
Multi-Level-Memory Structures for Adaptive SPH Simulations
Rene Winchenbach and Andreas Kolb
University of Siegen, Germany
Abstract:
In this paper we introduce a novel hash map-based sparse data structure for highly adaptive Smoothed Particle Hydrodynamics (SPH) simulations on GPUs. Our multi-level-memory structure is based on stacking multiple independent data structures, which can be created efficiently from the same particle data by utilizing self-similar particle orderings. Furthermore, we propose three neighbor-list algorithms that, compared to Verlet lists, either improve performance or significantly reduce memory requirements for the overall simulation. Overall, our proposed method significantly improves the performance of spatially adaptive methods, allows for the simulation of unbounded domains, and reduces memory requirements without interfering with the simulation.
DOI: 10.2312/vmv.20191323
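The following toy sketch conveys the flavor of the approach: particles are sorted by a Z-order (Morton) key, and several independent cell hash maps at different cell sizes are built from that same ordering. It is a plain-Python CPU analogue with illustrative names and parameters, not the GPU multi-level-memory structure described in the paper.

    import numpy as np
    from collections import defaultdict

    def morton_2d(ix, iy, bits=16):
        """Interleave the bits of integer cell coordinates into a Z-order (Morton) key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
        return key

    def build_cell_hash(positions, cell_size):
        """Map each cell (keyed by its Morton code) to the indices of the particles inside it."""
        cells = defaultdict(list)
        for i, p in enumerate(positions):
            ix, iy = int(p[0] // cell_size), int(p[1] // cell_size)
            cells[morton_2d(ix, iy)].append(i)
        return cells

    # Several independent levels, one per cell size, built from the same
    # Z-order-sorted particle data -- a loose analogue of stacking
    # per-resolution structures over one self-similar ordering.
    rng = np.random.default_rng(1)
    positions = rng.random((1000, 2))
    order = np.argsort([morton_2d(int(x * 1024), int(y * 1024)) for x, y in positions])
    positions = positions[order]
    levels = {h: build_cell_hash(positions, h) for h in (0.05, 0.1, 0.2)}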
Honorable Mention
Trigonometric Moments for Editable Structured Light Range Finding
Sebastian Werner, Julian Iseringhausen, Clara Callenberg, and Matthias Hullin
University of Bonn, Germany
Abstract:
Structured-light methods remain one of the leading technologies in high quality 3D scanning, specifically for the acquisition of single objects and simple scenes. For more complex scene geometries, however, non-local light transport (e.g. interreflections, sub-surface scattering) comes into play, which leads to errors in the depth estimation. Probing the light transport tensor, which describes the global mapping between illumination and observed intensity under the influence of the scene, can help to understand and correct these errors, but requires extensive scanning. We aim to recover a 3D subset of the full 4D light transport tensor, which represents the scene as illuminated by line patterns, rendering the approach especially useful for triangulation methods. To this end we propose a frequency-domain approach based on spectral estimation to reduce the number of required input images. Our method can be applied independently on each pixel of the observing camera, making it perfectly parallelizable with respect to the camera pixels. The result is a closed-form representation of the scene reflection recorded under line illumination, which, if necessary, masks pixels with complex global light transport contributions and, if possible, enables the correction of such measurements via data-driven semi-automatic editing.
DOI: 10.2312/vmv.20191315
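As a rough illustration of the frequency-domain idea, the snippet below computes per-pixel trigonometric (Fourier) moments of a pixel's response to K phase-shifted sinusoidal patterns and reads a dominant line position off the phase of the first moment. This is generic spectral estimation for structured light, not the paper's reconstruction or editing pipeline; all names and parameters are illustrative.

    import numpy as np

    def trigonometric_moments(intensities, num_moments=3):
        """Per-pixel complex trigonometric moments of the response to K
        equally phase-shifted patterns.  'intensities' has shape (K, H, W);
        moment m is the m-th Fourier coefficient over the phase-shift axis."""
        K = intensities.shape[0]
        phases = 2.0 * np.pi * np.arange(K) / K
        moments = [
            (intensities * np.exp(1j * m * phases)[:, None, None]).mean(axis=0)
            for m in range(num_moments)
        ]
        return np.stack(moments)          # shape (num_moments, H, W)

    # Usage: for a single dominant (direct) reflection, the phase of the
    # first moment encodes the illuminating line position at each pixel.
    K, H, W = 16, 4, 4
    true_phase = np.random.default_rng(2).uniform(0, 2 * np.pi, (H, W))
    shots = np.stack([1.0 + 0.5 * np.cos(2 * np.pi * k / K - true_phase) for k in range(K)])
    m = trigonometric_moments(shots)
    recovered = np.angle(m[1]) % (2 * np.pi)   # close to true_phase, up to 2*pi wrap-around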