Pointwise Convolutional Neural Networks

Binh-Son Hua1,2         Minh-Khoi Tran2          Sai-Kit Yeung2

1The University of Tokyo 2Singapore University of Technology and Design

Computer Vision and Pattern Recognition (CVPR) 2018

Point-wise convolution

Deep learning with 3D data such as reconstructed point clouds and CAD models has received great research interest recently. However, the capability of using point clouds with convolutional neural networks has so far not been fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network is pointwise convolution, a new convolution operator that can be applied at each point of a point cloud. Our fully convolutional network design, while being surprisingly simple to implement, yields competitive accuracy in both semantic segmentation and object recognition tasks.
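To make the idea concrete, here is a minimal NumPy sketch of the operator described above: for each point, neighbors within a radius are binned into kernel cells (here, concentric shells by distance, a simplifying assumption), their features are averaged per cell, and the cell averages are combined with learned weights. The function name, the shell-based binning, and all parameter names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def pointwise_conv(points, features, weights, radius=0.1):
    """Sketch of a pointwise convolution over a point cloud.

    points:   (N, 3) point coordinates
    features: (N, C_in) per-point input features
    weights:  (K, C_in, C_out) kernel weights, one (C_in, C_out)
              matrix per kernel cell (here: K concentric shells)
    Returns:  (N, C_out) per-point output features
    """
    n_cells, c_in, c_out = weights.shape
    n = points.shape[0]
    out = np.zeros((n, c_out))
    for i in range(n):
        # gather neighbors of point i within the kernel radius
        d = np.linalg.norm(points - points[i], axis=1)
        mask = d < radius
        # assign each neighbor to a shell by normalized distance
        cell = np.minimum((d[mask] / radius * n_cells).astype(int),
                          n_cells - 1)
        neighbor_feats = features[mask]
        for k in range(n_cells):
            sel = neighbor_feats[cell == k]
            if len(sel):
                # average features in the cell, then apply cell weights
                out[i] += sel.mean(axis=0) @ weights[k]
    return out
```

Because the output is again one feature vector per point, such layers can be stacked into a fully convolutional network for per-point semantic segmentation, as the paper does.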


SceneNN data: 76 scenes re-annotated with the NYU-D v2 40 classes.
56 scenes are used for training and 20 scenes for testing in the scene semantic segmentation task.
Link 1: SUTD
Link 2: Google Drive

@inproceedings{hua2018pointwise,
    title = {Pointwise Convolutional Neural Networks},
    author = {Binh-Son Hua and Minh-Khoi Tran and Sai-Kit Yeung},
    booktitle = {Computer Vision and Pattern Recognition (CVPR)},
    year = {2018}
}


We thank Quang-Hieu Pham for helping with the 2D-to-3D semantic segmentation experiment and proofreading the paper, and Quang-Trung Truong and Benjamin Kang Yue Sheng for their kind support with the neural network training experiments.

Binh-Son Hua and Sai-Kit Yeung are supported by the SUTD Digital Manufacturing and Design Centre, which is supported by the Singapore National Research Foundation (NRF). Sai-Kit Yeung is also supported by Singapore MOE Academic Research Fund MOE2016-T2-2-154, the Heritage Research Grant of the National Heritage Board, Singapore, and Singapore NRF under its IDM Futures Funding Initiative and Virtual Singapore Award No. NRF2015VSGAA3DCM001-014.