For the graph classification task, we use the benchmark provided in [@bianchi2020mincutpool]; the dataset consists of graphs with three different labels. For each graph, the feature vector on each vertex (the 0-cochain) is a one-hot vector of size five that stores the relative position of the vertex in the graph. To construct the CC structure, we use the 2-clique complex of the input graph. We then build the CCNN for graph classification, denoted by $\mbox{CCNN}_{Graph}$ and visualized in Figure \@ref(fig:mesh-net)(c). The matrices used in the construction of $\mbox{CCNN}_{Graph}$ are the incidence matrices $B_{0,1},~B_{1,2},~B_{0,2}$, their transposes, and the (co)adjacency matrices $A_{0,1},~A_{1,1},~coA_{2,1}$. The cochains of $\mbox{CCNN}_{Graph}$ are constructed as follows. For each graph in the dataset, we set the 0-cochain to be the one-hot vector provided by the dataset, and we construct the 1-cochain and the 2-cochain on the 2-clique complex of the graph by taking the coordinate-wise max of the one-hot vectors attached to the vertices of each cell. The input to $\mbox{CCNN}_{Graph}$ thus consists of the 0-cochain provided as part of the dataset together with the constructed 1- and 2-cochains. The grey node in Figure \@ref(fig:mesh-net)(c) indicates a simple mean pooling operation. We train this network with a learning rate of 0.005 and no data augmentation.
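The cochain lifting described above is straightforward to sketch. The following minimal NumPy example (the function and variable names are ours, not the authors') builds the 1- and 2-cochains of a toy 2-clique complex by taking the coordinate-wise max of the one-hot vertex features on each cell.

```python
import numpy as np

def lift_cochain(x0, cells):
    """Attach to each cell the coordinate-wise max of its vertices' features."""
    return np.stack([x0[list(cell)].max(axis=0) for cell in cells])

# Toy example: 4 vertices with one-hot features of size 5 (the 0-cochain).
x0 = np.eye(5)[[0, 1, 2, 3]]

# Cells of the 2-clique complex: edges are 2-cliques, faces are 3-cliques.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
faces = [(0, 1, 2)]

x1 = lift_cochain(x0, edges)  # 1-cochain, shape (4, 5)
x2 = lift_cochain(x0, faces)  # 2-cochain, shape (1, 5)
```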
Table \@ref(tab:wrap-tab) reports the results on the *easy* and the *hard* versions of the dataset^[The difficulty of these datasets is controlled by the compactness of the graph clusters; clusters in the *easy* data have more inter-cluster connections, while clusters in the *hard* data are more isolated [@bianchi2020mincutpool].] and compares them to six state-of-the-art GNNs. As shown in Table \@ref(tab:wrap-tab), CCNNs outperform all six GNNs on the hard dataset and five of the six on the easy dataset; in particular, the proposed CCNN outperforms MinCutPool on the hard dataset and attains comparable performance to MinCutPool on the easy dataset.
```{r wrap-tab}
knitr::kable(domains, align=c('l', 'c', 'c', 'c', 'c'), booktabs=TRUE, caption="Predictive accuracy on the graph classification test sets of [@bianchi2020mincutpool]. All results are reported using the $\\mbox{CCNN}_{Graph}$ architecture.")
```
**Architecture of $\mbox{CCNN}_{Graph}$**. In the $\mbox{CCNN}_{Graph}$ displayed in Figure \@ref(fig:mesh-net)(c), we choose a CCNN pooling architecture, as given in Definition \@ref(def:general-pooling-hoan), that pushes signals from vertices, edges, and faces, and aggregates their information towards the higher-order cells before making the final prediction; a sketch of this push-and-pool pattern is given below. For the dataset of [@bianchi2020mincutpool], we experiment with two architectures: the first is identical to the $\mbox{CCNN}_{SHREC}$ shown in Figure \@ref(fig:mesh-net)(b), and the second is the $\mbox{CCNN}_{Graph}$ shown in Figure \@ref(fig:mesh-net)(c). We report the results for $\mbox{CCNN}_{Graph}$, as it provides superior performance. Note that when the underlying domain is a simplicial complex, the incidence matrix $B_{0,2}$ is typically not considered; hence the CC structure, equipped with this additional incidence matrix, improves the generalization performance of $\mbox{CCNN}_{Graph}$.
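To make the push-and-pool pattern concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the class name, layer widths, and single-layer composition are hypothetical, and only the general flow follows the text above, namely that signals on vertices and edges are pushed towards higher-rank cells via the transposed incidence matrices $B_{0,1}$, $B_{1,2}$, and $B_{0,2}$, mean-pooled per rank (the grey node in the figure), and passed to a linear classifier over the three graph labels.

```python
import torch
import torch.nn as nn

class CCNNGraphSketch(nn.Module):
    """Hypothetical single-layer sketch of the push-and-pool pattern."""

    def __init__(self, in_dim=5, hid=32, n_classes=3):
        super().__init__()
        self.lin0 = nn.Linear(in_dim, hid)   # vertex (rank-0) channel
        self.lin1 = nn.Linear(in_dim, hid)   # edge (rank-1) channel
        self.lin2 = nn.Linear(in_dim, hid)   # face (rank-2) channel
        self.cls = nn.Linear(3 * hid, n_classes)

    def forward(self, x0, x1, x2, B01, B12, B02):
        # x0: (V, in_dim), x1: (E, in_dim), x2: (F, in_dim) cochains;
        # B01: (V, E), B12: (E, F), B02: (V, F) incidence matrices.
        h0 = torch.relu(self.lin0(x0))
        # Push vertex signals to edges, then vertex/edge signals to faces.
        h1 = torch.relu(B01.T @ h0 + self.lin1(x1))
        h2 = torch.relu(B12.T @ h1 + B02.T @ h0 + self.lin2(x2))
        # Mean pooling per rank (the grey node), then a linear read-out.
        h = torch.cat([h0.mean(dim=0), h1.mean(dim=0), h2.mean(dim=0)])
        return self.cls(h)

# Usage sketch, with the learning rate quoted above:
# model = CCNNGraphSketch()
# optim = torch.optim.Adam(model.parameters(), lr=0.005)
```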
## Pooling with mapper on graphs and data classification
### Mesh classification: CC-pooling with input vertex and edge features