# Contributing guide

This document describes the organization of the neural-fortran codebase to help
guide code contributors.

## Overall code organization

The source code organization follows the usual `fpm` convention:
the library code is in [src/](src/), test programs are in [test/](test/),
and example programs are in [example/](example/).

The top-level module that defines the public, user-facing API is in
[src/nf.f90](src/nf.f90).
All other library source files are in [src/nf/](src/nf/).

Most of the library code defines interfaces in modules and implementations in
submodules.
If you only want to know about interfaces, that is, how to call procedures and
what they return, you can read just the module source files and not worry
about the implementation.
Then, if you want to know more about the implementation, you can find it in the
appropriate source file that defines the submodule.
Each library source file contains exactly one module or one submodule.
The source files that define submodules end with `_submodule.f90`.
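
As a sketch of this convention (with hypothetical module and procedure names,
not taken from the library), an interface declared in a module and its
implementation in a submodule look like this:

```fortran
! nf_example.f90: the module declares only the interface.
module nf_example
  implicit none
  interface
    module function double_it(x) result(res)
      real, intent(in) :: x
      real :: res
    end function double_it
  end interface
end module nf_example
```

```fortran
! nf_example_submodule.f90: the submodule provides the implementation.
submodule(nf_example) nf_example_submodule
contains
  module function double_it(x) result(res)
    real, intent(in) :: x
    real :: res
    res = 2 * x
  end function double_it
end submodule nf_example_submodule
```

This separation means that changing an implementation in a submodule does not
force recompilation of code that uses the module, which also helps keep build
times down.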

## Components

Neural-fortran defines several components, described in a roughly top-down
order:

* Networks
* Layers
  - Layer constructor functions
  - Concrete layer implementations
* Optimizers
* Activation functions

### Networks

A network is the main component that the user works with,
and the highest-level container in neural-fortran.
A network is a collection of layers.

The network container is defined by the `network` derived type
in the `nf_network` module, in the [nf_network.f90](src/nf/nf_network.f90)
source file.

In a nutshell, the `network` type defines an allocatable array of `type(layer)`
instances, and several type-bound methods for training and inference.
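
For example, a user-facing program builds a network from layer constructor
functions exported by the `nf` module; the layer sizes below are illustrative,
and exact keyword arguments of the type-bound methods may differ between
versions:

```fortran
program simple_network
  use nf, only: network, input, dense
  implicit none
  type(network) :: net

  ! A network is constructed from an array of layers.
  net = network([ &
    input(784), &
    dense(30), &
    dense(10) &
  ])

  ! Type-bound methods then drive training and inference, e.g.:
  !   call net % train(x, y, batch_size=100, epochs=10, optimizer=...)
  !   y_pred = net % predict(x)
end program simple_network
```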

### Layers

Layers are the main building blocks of neural-fortran and neural networks in
general.
There is a common, high-level layer type that maintains the data flow
in and out and calls the specific layer implementations of forward and backward
pass methods.

When introducing a new layer type, study how the [dense](src/nf/nf_dense_layer.f90)
or [convolutional](src/nf/nf_conv2d_layer.f90) concrete types are defined and
implemented in their respective submodules.
You will also need to follow the same use pattern in the
[high-level layer type](src/nf/nf_layer.f90) and its corresponding submodule.
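
A new concrete layer follows the same shape as the existing ones. The
abbreviated sketch below uses a hypothetical `my_layer` type; the real
`base_layer` abstract type also declares deferred procedures (such as `init`)
that a complete implementation must provide, and the actual forward/backward
interfaces should be copied from an existing concrete layer:

```fortran
module nf_my_layer
  use nf_base_layer, only: base_layer
  implicit none

  ! A concrete layer extends the abstract base layer and stores its own
  ! parameters and state; forward and backward passes are implemented in
  ! the corresponding nf_my_layer_submodule.f90 file.
  type, extends(base_layer) :: my_layer
    real, allocatable :: weights(:,:)
    real, allocatable :: output(:)
  contains
    procedure :: forward
    procedure :: backward
  end type my_layer

  interface
    module subroutine forward(self, input)
      class(my_layer), intent(in out) :: self
      real, intent(in) :: input(:)
    end subroutine forward
    module subroutine backward(self, input, gradient)
      class(my_layer), intent(in out) :: self
      real, intent(in) :: input(:)
      real, intent(in) :: gradient(:)
    end subroutine backward
  end interface
end module nf_my_layer
```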

### Optimizers

Optimizers are the algorithms that determine how the model parameters are
updated during training.

Optimizers are currently implemented in the [nf_optimizers.f90](src/nf/nf_optimizers.f90)
source file and corresponding module.
An optimizer instance is passed to the network at the `network % train()` call
site.
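
For example, with the `sgd` optimizer constructor from the `nf` module (the
training-data array names here are illustrative):

```fortran
use nf, only: sgd

! Pass an optimizer instance at the train() call site; if omitted,
! the network falls back to a default optimizer.
call net % train( &
  training_images, &
  label_digits, &
  batch_size=100, &
  epochs=10, &
  optimizer=sgd(learning_rate=3.) &
)
```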

### Activation functions

Activation functions and their derivatives are defined in the
[nf_activation.f90](src/nf/nf_activation.f90) source file and corresponding
module.
They are implemented using an abstract base activation type and concrete types
for each activation function.
When implementing a new activation function in the library, you need to define
a new concrete type that extends the abstract activation function type.
The concrete type must have `eval` and `eval_prime` methods that evaluate the
function and its derivative, respectively.
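
A minimal sketch of this pattern, using a hypothetical cubic activation
(the exact deferred interface, including argument shapes, should be copied
from the abstract type in nf_activation.f90; the signatures below are an
assumption of that shape):

```fortran
module nf_my_activation
  use nf_activation, only: activation_function
  implicit none

  ! Hypothetical activation: f(x) = x**3, f'(x) = 3*x**2.
  type, extends(activation_function) :: cube
  contains
    procedure :: eval => eval_cube
    procedure :: eval_prime => eval_cube_prime
  end type cube

contains

  ! Evaluate the activation function itself.
  pure function eval_cube(self, x) result(res)
    class(cube), intent(in) :: self
    real, intent(in) :: x(:)
    real :: res(size(x))
    res = x**3
  end function eval_cube

  ! Evaluate the derivative, used in the backward pass.
  pure function eval_cube_prime(self, x) result(res)
    class(cube), intent(in) :: self
    real, intent(in) :: x(:)
    real :: res(size(x))
    res = 3 * x**2
  end function eval_cube_prime

end module nf_my_activation
```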