
Commit eba4316

Added subsection 9.3.3
1 parent eb2627c commit eba4316

1 file changed
+31 -0 lines changed

rmd/09-implementation-and-numerical-experiments.rmd

@@ -72,6 +72,37 @@ knitr::kable(domains, align=c('l', 'c', 'c'), booktabs=TRUE, caption="Predictive
### Graph classification
For the graph classification task, we use the benchmark provided in [@bianchi2020mincutpool]; the dataset consists of graphs with three different labels. For each graph, the feature vector on each vertex (the 0-cochain) is a one-hot vector of size five that encodes the relative position of the vertex in the graph. To construct the CC structure, we use the 2-clique complex of the input graph. We then build the CCNN for graph classification, denoted by $\mbox{CCNN}_{Graph}$ and visualized in Figure \@ref(fig:mesh-net)(c). The matrices used in the construction of $\mbox{CCNN}_{Graph}$ are $B_{0,1},~B_{1,2},~B_{0,2}$, their transposes, and the (co)adjacency matrices $A_{0,1},~A_{1,1},~coA_{2,1}$. The cochains of $\mbox{CCNN}_{Graph}$ are constructed as follows. For each graph in the dataset, we set the 0-cochain to be the one-hot vectors provided by the dataset. We then construct the 1-cochain and the 2-cochain on the 2-clique complex of the graph by taking the coordinate-wise maximum of the one-hot vectors attached to the vertices of each cell. The input to $\mbox{CCNN}_{Graph}$ thus consists of the 0-cochain provided with the dataset together with the constructed 1- and 2-cochains. The grey node in Figure \@ref(fig:mesh-net)(c) indicates a simple mean pooling operation. We train this network with a learning rate of 0.005 and no data augmentation.
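As an illustration, the following sketch (hypothetical helper code, not part of this commit) shows one way to realize the coordinate-wise max lifting in R; `X0`, `edges` and `triangles` are assumed inputs, with `edges` and `triangles` being lists of vertex-index vectors enumerating the cells of the 2-clique complex.

```{r cochain-lift, eval=FALSE}
# Lift a 0-cochain X0 (one row of features per vertex) to a higher-rank
# cochain by taking the coordinate-wise max over the vertices of each cell.
lift_cochain <- function(X0, cells) {
  t(vapply(cells,
           function(cell) apply(X0[cell, , drop = FALSE], 2, max),
           numeric(ncol(X0))))
}

# `edges` and `triangles` are assumed lists of integer vectors, e.g.
# edges <- list(c(1, 2), c(2, 3)); triangles <- list(c(1, 2, 3)).
X1 <- lift_cochain(X0, edges)      # 1-cochain: one row per edge
X2 <- lift_cochain(X0, triangles)  # 2-cochain: one row per triangle
```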
Table \@ref(tab:wrap-tab) reports the results on the *easy* and the *hard* versions of the dataset^[The difficulty in these datasets is controlled by the degree of compactness of the graph clusters; clusters in the *easy* data have more between-cluster connections, while clusters in the *hard* data are more isolated [@bianchi2020mincutpool].], and compares them to six state-of-the-art GNNs. As shown in Table \@ref(tab:wrap-tab), the CCNN outperforms all six GNNs on the hard dataset and five of the six on the easy dataset; in particular, it outperforms MinCutPool on the hard dataset, while attaining comparable performance to MinCutPool on the easy dataset.
```{r wrap-tab, echo=FALSE}
dsets <- c('Easy', 'Hard')
graclus <- c('97.81', '69.08')
ndp <- c('97.93', '72.67')
diffpool <- c('98.64', '69.98')
topk <- c('82.47', '42.80')
sagpool <- c('84.23', '37.71')
mincutpool <- c('99.02', '73.80')
ccnn <- c('98.90', '75.59')
domains <- data.frame(
  dsets, graclus, ndp, diffpool, topk, sagpool, mincutpool, ccnn
)
colnames(domains) <- c(
  'Dataset',
  'Graclus',
  'NDP',
  'DiffPool',
  'Top-K',
  'SAGPool',
  'MinCutPool',
  'CCNN'
)
knitr::kable(domains, align=c('l', rep('c', 7)), booktabs=TRUE, caption="Predictive accuracy on the test set of [@bianchi2020mincutpool] for graph classification. All reported CCNN results use the $\\mbox{CCNN}_{Graph}$ architecture.")
```
**Architecture of $\mbox{CCNN}_{Graph}$**. In the $\mbox{CCNN}_{Graph}$ displayed in Figure \@ref(fig:mesh-net)(c), we choose a CCNN pooling architecture, as given in Definition \@ref(def:general-pooling-hoan), that pushes signals from vertices, edges and faces, and aggregates their information towards the higher-order cells before making the final prediction. For the dataset of [@bianchi2020mincutpool], we experiment with two architectures: the first is identical to the $\mbox{CCNN}_{SHREC}$ shown in Figure \@ref(fig:mesh-net)(b), and the second is the $\mbox{CCNN}_{Graph}$ shown in Figure \@ref(fig:mesh-net)(c). We report the results for $\mbox{CCNN}_{Graph}$, as it provides superior performance. Note that when such a network is run on an underlying simplicial complex, the incidence matrix $B_{0,2}$ is typically not considered; hence, equipping the CC structure with this additional incidence matrix improves the generalization performance of $\mbox{CCNN}_{Graph}$.
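The readout stage of this pooling architecture can be summarized with a small sketch (a simplified assumption, not the actual implementation, which interleaves these pushforwards with learnable layers): signals on vertices and edges are pushed onto the rank-2 cells through the transposed incidence matrices and then mean-pooled, matching the grey node in Figure \@ref(fig:mesh-net)(c).

```{r push-and-pool, eval=FALSE}
# Simplified push-and-pool readout: X0, X1, X2 are the cochains on
# vertices, edges and faces; B02 and B12 are the incidence matrices
# B_{0,2} and B_{1,2} (rows indexed by lower-rank cells, columns by faces).
push_and_pool <- function(X0, X1, X2, B02, B12) {
  H2 <- t(B02) %*% X0 + t(B12) %*% X1 + X2  # push all signals to rank-2 cells
  colMeans(H2)                              # mean pooling: graph-level embedding
}
```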
## Pooling with mapper on graphs and data classification

### Mesh classification: CC-pooling with input vertex and edge features
