
Commit 7231a5a

Finished chapter 9
1 parent 8f9b97a

File tree

2 files changed: +50 -8 lines

bib/main.bib (+40 -8)
@@ -418,6 +418,16 @@ @Book{dey22
   year = {2022},
 }
 
+@Article{dodziuk1976finite,
+  author = {Dodziuk, Jozef},
+  journal = {American Journal of Mathematics},
+  title = {Finite-difference approach to the {H}odge theory of harmonic forms},
+  year = {1976},
+  number = {1},
+  pages = {79--104},
+  volume = {98},
+}
+
 @Article{dupuis2023generalization,
   author = {Dupuis, Benjamin and Deligiannidis, George and {\c{S}}im{\c{s}}ekli, Umut},
   journal = {arXiv preprint arXiv:2302.02766},
@@ -432,6 +442,17 @@ @Article{ebli2020simplicial
   year = {2020},
 }
 
+@Article{eckmann1944harmonische,
+  author = {Eckmann, Beno},
+  journal = {Commentarii Mathematici Helvetici},
+  title = {Harmonische {F}unktionen und {R}andwertaufgaben in einem {K}omplex},
+  year = {1944},
+  number = {1},
+  pages = {240--255},
+  volume = {17},
+  publisher = {Springer},
+}
+
 @Book{edelsbrunner2010computational,
   author = {Edelsbrunner, Herbert and Harer, John},
   publisher = {American Mathematical Soc.},
@@ -634,14 +655,14 @@ @InProceedings{joslyn2021hypernetwork
   pages = {377--392},
 }
 
-@article{KimLipmanChen2010,
-  title = {M\"{o}bius transformations for global intrinsic symmetry analysis},
-  author = {Kim, Vladimir G and Lipman, Yaron and Chen, Xiaobai and Funkhouser, Thomas},
-  year = 2010,
-  journal = {Computer Graphics Forum},
-  volume = 29,
-  number = 5,
-  pages = {1689--1700}
+@Article{KimLipmanChen2010,
+  author = {Kim, Vladimir G and Lipman, Yaron and Chen, Xiaobai and Funkhouser, Thomas},
+  journal = {Computer Graphics Forum},
+  title = {M\"{o}bius transformations for global intrinsic symmetry analysis},
+  year = {2010},
+  number = {5},
+  pages = {1689--1700},
+  volume = {29},
 }
 
 @Article{kipf2016semi,
@@ -678,6 +699,17 @@ @InProceedings{krizhevsky2012
   url = {https://papers.nips.cc/paper_files/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html},
 }
 
+@Article{krizhevsky2017imagenet,
+  author = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E},
+  journal = {Communications of the ACM},
+  title = {{ImageNet} classification with deep convolutional neural networks},
+  year = {2017},
+  number = {6},
+  pages = {84--90},
+  volume = {60},
+  publisher = {ACM New York, NY, USA},
+}
+
 @Article{la2022music,
   author = {La Gatta, Valerio and Moscato, Vincenzo and Pennone, Mirko and Postiglione, Marco and Sperl{\'\i}, Giancarlo},
   journal = {IEEE Transactions on Neural Networks and Learning Systems},

rmd/09-implementation-and-numerical-experiments.rmd (+10)
@@ -121,10 +121,20 @@ In order to demonstrate the effectiveness of our MOG pooling approach, we conduc
 
 ### Mesh classification: CC-pooling with input vertex and edge features
 
+In this experiment, we consider the vertex feature vector to be the position concatenated with the normal vector for each vertex in the underlying mesh. For the edge features, we compute the first ten eigenvectors of the 1-Hodge Laplacian [@dodziuk1976finite; @eckmann1944harmonische] and attach a 10-dimensional feature vector to the edges of the underlying mesh. The CC that we consider here is 3-dimensional, as it consists of the triangular mesh (vertices, edges and faces) and of 3-cells. The 3-cells, which we use to augment each mesh, are computed via the MOG algorithm with the AGD scalar function as input. We conduct this experiment using the CCNN defined via the tensor diagram $\mbox{CCNN}_{MOG1}$ given in Figure \@ref(fig:mesh-net)(d). During training, we augment each mesh with ten additional meshes, each obtained by a random rotation and a 0.1% noise perturbation of the vertex positions. We train $\mbox{CCNN}_{MOG1}$ for 100 epochs using a learning rate of 0.0002 and the standard cross-entropy loss, and obtain an accuracy of 98.1%. While the accuracy of $\mbox{CCNN}_{MOG1}$ is lower than the 99.17% we report for $\mbox{CCNN}_{SHREC}$ in Table \@ref(tab:shrec), $\mbox{CCNN}_{MOG1}$ requires significantly fewer replications for mesh augmentation to achieve a similar accuracy (10 replications, versus 30 for $\mbox{CCNN}_{SHREC}$).
+
+**Architecture of $\mbox{CCNN}_{MOG1}$**. The tensor diagram $\mbox{CCNN}_{MOG1}$ of Figure \@ref(fig:mesh-net)(d) corresponds to a pooling CCNN. In particular, $\mbox{CCNN}_{MOG1}$ pushes the signal forward towards two different higher-order cells: the faces of the mesh as well as the 3-cells obtained from the MOG algorithm.
+
 ### Mesh classification: CC-pooling with input vertex features only
 
+In this experiment, we consider the position and the normal vectors of the input vertices. The CC structure that we consider is the underlying graph structure obtained from each mesh; i.e., we only use the vertices and the edges, and ignore the faces. We augment this structure with 2-cells obtained via the MOG algorithm using the AGD scalar function as input. We choose a network architecture simpler than that of $\mbox{CCNN}_{MOG1}$, and report it in Figure \@ref(fig:mesh-net)(d) as $\mbox{CCNN}_{MOG2}$. During training, we augment each mesh with ten additional meshes, each obtained by a random rotation and a 0.05% noise perturbation of the vertex positions. We train $\mbox{CCNN}_{MOG2}$ for 100 epochs using a learning rate of 0.0003 and the standard cross-entropy loss, and obtain an accuracy of 97.1%.
+
+**Architecture of $\mbox{CCNN}_{MOG2}$ for mesh classification**. The tensor diagram $\mbox{CCNN}_{MOG2}$ of Figure \@ref(fig:mesh-net)(d) corresponds to a pooling CCNN. In particular, $\mbox{CCNN}_{MOG2}$ pushes the signal forward towards a single 2-cell obtained from the MOG algorithm. Observe that the overall architecture of $\mbox{CCNN}_{MOG2}$ is similar in principle to AlexNet [@krizhevsky2017imagenet], where convolutional layers are followed by pooling layers.
+
 ### Point cloud classification: CC-pooling with input vertex features only
 
+In this experiment, we consider point cloud classification on the SHREC11 dataset. The setup is similar in principle to the one studied in Section \@ref(mesh-classification-cc-pooling-with-input-vertex-features-only), where we consider only the features supported on the vertices of the point cloud as input. Specifically, for each mesh in the SHREC11 dataset, we sample 1,000 points from the surface of the mesh. Additionally, we estimate the normal vectors of the resulting point clouds using the Point Cloud Utils package [@point-cloud-utils]. To build the CC structure, we first consider the $k$-nearest-neighbor graph obtained from each point cloud using $k=7$. We then augment this graph with 2-cells obtained via the MOG algorithm using the AGD scalar function as input. We train the $\mbox{CCNN}_{MOG2}$ shown in Figure \@ref(fig:mesh-net)(d). During training, we augment each point cloud with 12 additional instances, each obtained by a random rotation. We train $\mbox{CCNN}_{MOG2}$ for 100 epochs using a learning rate of 0.0003 and the standard cross-entropy loss, and obtain an accuracy of 95.2% (see Table \@ref(tab:shrec)).
+
 ## Ablation studies
 
 In this section, we perform two ablation studies. The first ablation study reveals that pooling strategies in CCNNs have a crucial effect on predictive performance. The second ablation study demonstrates that CCNNs have better predictive capacity than GNNs; the advantage of CCNNs arises from their topological pooling operations and from their ability to learn from topological features.
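The edge features in the first mesh-classification experiment above are the first ten eigenvectors of the 1-Hodge Laplacian [@dodziuk1976finite; @eckmann1944harmonische]. A minimal sketch of that computation, assuming dense signed incidence matrices `B1` (vertices × edges) and `B2` (edges × faces) are available (these names are assumptions, not from the chapter), and taking "first ten" to mean the eigenvectors with the ten smallest eigenvalues:

```python
import numpy as np

def hodge_1_eigenvectors(B1, B2, k=10):
    """Eigenvectors of the 1-Hodge Laplacian L1 = B1^T B1 + B2 B2^T.

    B1: signed vertex-to-edge incidence matrix, shape (n_vertices, n_edges).
    B2: signed edge-to-face incidence matrix, shape (n_edges, n_faces).
    Returns an (n_edges, k) array: one k-dimensional feature per edge.
    """
    L1 = B1.T @ B1 + B2 @ B2.T       # symmetric positive semi-definite
    _, eigvecs = np.linalg.eigh(L1)  # eigenvalues in ascending order
    return eigvecs[:, :k]            # eigenvectors of the k smallest eigenvalues
```

Each row of the returned array is the 10-dimensional feature vector attached to the corresponding edge.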
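Both mesh experiments augment each training mesh by a random rotation plus a small noise perturbation of the vertex positions. The chapter does not spell out how the rotation is sampled or how the percentage is normalized; the sketch below assumes a uniformly random 3D rotation and Gaussian jitter whose standard deviation equals the stated fraction (0.001 for $\mbox{CCNN}_{MOG1}$, 0.0005 for $\mbox{CCNN}_{MOG2}$):

```python
import numpy as np

def augment_vertices(vertices, noise_scale=0.001, rng=None):
    """Random rotation plus Gaussian jitter of (n, 3) vertex positions."""
    rng = np.random.default_rng() if rng is None else rng
    # Haar-uniform rotation via QR decomposition of a Gaussian matrix.
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.diag(R))   # fix the sign ambiguity of the factorization
    if np.linalg.det(Q) < 0:   # ensure a proper rotation (det = +1)
        Q[:, 0] *= -1
    jitter = rng.normal(scale=noise_scale, size=vertices.shape)
    return vertices @ Q.T + jitter
```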
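The point-cloud experiment builds its CC structure on a $k$-nearest-neighbor graph with $k=7$. A minimal sketch of that construction using SciPy (the chapter's actual pipeline may use a different implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_edges(points, k=7):
    """Undirected k-NN graph over an (n, 3) point cloud, as sorted edge pairs."""
    tree = cKDTree(points)
    # Query k + 1 neighbors: each point is returned as its own nearest neighbor.
    _, idx = tree.query(points, k=k + 1)
    edges = {tuple(sorted((i, int(j)))) for i, nbrs in enumerate(idx)
             for j in nbrs[1:]}
    return sorted(edges)
```

The resulting edge set, together with the 2-cells produced by the MOG algorithm, defines the CC on which $\mbox{CCNN}_{MOG2}$ is trained.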
