Read the paper [here](https://arxiv.org/abs/1902.06714).

| Embedding | `embedding` | n/a | 2 | ✅ | ✅ |
| Dense (fully-connected) | `dense` | `input1d`, `dense`, `dropout`, `flatten` | 1 | ✅ | ✅ |
| Dropout | `dropout` | `dense`, `flatten`, `input1d` | 1 | ✅ | ✅ |
| Locally connected (1-d) | `locally_connected1d` | `input2d`, `locally_connected1d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
| Convolutional (1-d) | `conv1d` | `input2d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
| Convolutional (2-d) | `conv2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅ |
| Max-pooling (1-d) | `maxpool1d` | `input2d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
| Max-pooling (2-d) | `maxpool2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅ |
| Linear (2-d) | `linear2d` | `input2d`, `layernorm`, `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Self-attention | `self_attention` | `input2d`, `layernorm`, `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Layer Normalization | `layernorm` | `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Flatten | `flatten` | `input2d`, `input3d`, `conv2d`, `maxpool2d`, `reshape` | 1 | ✅ | ✅ |
| Reshape (1-d to 2-d) | `reshape2d` | `input2d`, `conv1d`, `locally_connected1d`, `maxpool1d` | 2 | ✅ | ✅ |
| Reshape (1-d to 3-d) | `reshape` | `input1d`, `dense`, `flatten` | 3 | ✅ | ✅ |
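The new 1-d layers in the table above can be combined like their 2-d counterparts. A minimal, hypothetical sketch of constructing such a stack follows; the constructor arguments (`filters`, `kernel_size`, `pool_size`) and the two-argument `input()` shape are assumptions by analogy with the existing 2-d layer API, not confirmed signatures:

```fortran
program conv1d_demo
  ! Hypothetical sketch: build a small 1-d convolutional stack from
  ! the layers listed in the table above. Exact constructor argument
  ! names and the input() shape are assumed, by analogy with conv2d
  ! and maxpool2d.
  use nf, only: network, input, conv1d, maxpool1d

  type(network) :: net

  ! A rank-2 input feeding conv1d and maxpool1d, which the table
  ! marks as valid successors of input2d and conv1d, respectively.
  net = network([ &
    input(3, 32), &
    conv1d(filters=16, kernel_size=3), &
    maxpool1d(pool_size=2) &
  ])
end program conv1d_demo
```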
## Getting started
Get the code: