Nguyen DT, Quach M, Valenzise G, Duhamel P (2021). Lossless Coding of Point Cloud Geometry using a Deep Generative Model.
Publication Language: English
Publication Type: Journal article, Original article
Publication year: 2021
Volume: 31
Pages Range: 4617-4629
Journal Issue: 12
URI: https://ieeexplore.ieee.org/abstract/document/9496667
DOI: 10.1109/TCSVT.2021.3100279
This paper proposes a lossless point cloud (PC) geometry compression method that uses neural networks to estimate the probability distribution of voxel occupancy. First, to take into account the PC sparsity, our method adaptively partitions a point cloud into blocks of multiple voxel sizes. This partitioning is signalled via an octree. Second, we employ a deep auto-regressive generative model to estimate the occupancy probability of each voxel given the previously encoded ones. We then use the estimated probabilities to efficiently code each block with a context-based arithmetic coder. Our context has variable size and can expand beyond the current block to learn more accurate probabilities. We also consider data augmentation techniques to increase the generalization capability of the learned probability models, in particular in the presence of noise and lower-density point clouds. Experimental evaluation, performed on a variety of point clouds from four different datasets and with diverse characteristics, demonstrates that our method significantly reduces the rate for lossless coding (by up to 37%) compared to the state-of-the-art MPEG codec.
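The coding principle described above can be illustrated with a toy sketch. Here a simple count-based context model stands in for the paper's deep auto-regressive network (the `occupancy_probability` function and all names are hypothetical, not from the paper), and instead of running an actual arithmetic coder we compute its ideal cost of -log2(p) bits per voxel, coded in causal order:

```python
import math

def occupancy_probability(context):
    # Toy stand-in for the paper's deep auto-regressive model:
    # estimate P(occupied) from the previously coded context voxels
    # via a Laplace-smoothed occupancy count (hypothetical model).
    return (sum(context) + 1) / (len(context) + 2)

def code_length_bits(block, context_size=8):
    """Ideal arithmetic-coding cost (in bits) of a flattened voxel
    block, coding each voxel with the probability predicted from
    the voxels already coded (causal, auto-regressive order)."""
    total = 0.0
    for i, v in enumerate(block):
        context = block[max(0, i - context_size):i]
        p = occupancy_probability(context) if context else 0.5
        # An arithmetic coder spends about -log2(p) bits on a
        # symbol whose predicted probability is p.
        total += -math.log2(p if v else 1.0 - p)
    return total

# A homogeneous (fully occupied) block costs far less than the
# 1 bit/voxel of a context-free coder, because the context model
# quickly becomes confident.
dense = [1] * 12
print(code_length_bits(dense))
```

The gain over a fixed 1 bit/voxel baseline comes entirely from the quality of the predicted probabilities, which is why the paper replaces such a hand-crafted context model with a learned generative one.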
APA:
Nguyen, D.T., Quach, M., Valenzise, G., & Duhamel, P. (2021). Lossless Coding of Point Cloud Geometry using a Deep Generative Model. IEEE Transactions on Circuits and Systems for Video Technology, 31(12), 4617-4629. https://doi.org/10.1109/TCSVT.2021.3100279
MLA:
Nguyen, Dat Thanh, et al. "Lossless Coding of Point Cloud Geometry using a Deep Generative Model." IEEE Transactions on Circuits and Systems for Video Technology 31.12 (2021): 4617-4629.