- PyTorch, Torchvision...
- How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites
- How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
- K-Winner implementation (sketch after the list below):
  - Sparse Coding
  - Sparse Distributed Representations
  - ISTA Implementation (sketch after this list)
  - Bayesian Bits
  - Sparse AutoEncoder
  - Sparse AutoEncoder examples
  - Understanding PyTorch hooks
  - Bernoulli KL divergence, used as the sparsity penalty in a sparse autoencoder (target sparsity $\rho$, mean activation $\hat{\rho}$): $\mathrm{KL}_{\mathrm{Ber}}(\rho \,\|\, \hat{\rho}) = \rho \log\left(\dfrac{\rho}{\hat{\rho}}\right) + (1-\rho)\log\left(\dfrac{1-\rho}{1-\hat{\rho}}\right)$
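A minimal k-winners-take-all sketch, assuming a plain per-sample top-k mask (the `KWinners` name and `percent_on` parameter are illustrative; Numenta's layer additionally tracks duty cycles and applies boosting, omitted here):

```python
import torch
import torch.nn as nn

class KWinners(nn.Module):
    """Keep the k largest activations per sample, zero the rest."""

    def __init__(self, percent_on: float = 0.1):
        super().__init__()
        self.percent_on = percent_on  # fraction of units allowed to be active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features); keep k winners per row.
        k = max(1, int(self.percent_on * x.shape[-1]))
        winners = x.topk(k, dim=-1).indices
        mask = torch.zeros_like(x).scatter_(-1, winners, 1.0)
        # Gradients flow only through the winning units.
        return x * mask
```

And for the ISTA item above, a bare-bones iterative shrinkage-thresholding sketch for the sparse-coding problem $\min_z \frac{1}{2}\|x - Dz\|^2 + \lambda \|z\|_1$, assuming a fixed dictionary `D` (function name and arguments are placeholders):

```python
import torch

def ista(x, D, lam=0.1, n_iter=100):
    """Minimize 0.5*||x - D @ z||^2 + lam*||z||_1 over z."""
    # Step size 1/L, with L the largest eigenvalue of D^T D.
    L = torch.linalg.matrix_norm(D, ord=2) ** 2
    z = torch.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the smooth reconstruction term.
        z = z - D.T @ (D @ z - x) / L
        # Proximal step: soft-thresholding produces the sparsity.
        z = torch.sign(z) * torch.clamp(z.abs() - lam / L, min=0.0)
    return z
```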
- Blind Spot Convolution
  - Predict each pixel from its noisy context only, never from the pixel itself
  - 'Efficient Blind-Spot Neural Network Architecture for Image Denoising' [Ref]
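One common way to realize the blind spot is a centre-masked convolution, sketched below: the kernel's central weight is forced to zero so each output is computed from the pixel's noisy context only. (The referenced paper instead builds the blind spot architecturally for efficiency; the mask here is just the simplest variant, and the class name is illustrative.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterMaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel centre is zeroed out (a 'blind spot')."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        mask[:, :, self.kernel_size[0] // 2, self.kernel_size[1] // 2] = 0.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Re-apply the mask on every forward pass so the centre weight
        # stays zero even after gradient updates.
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```

Note that a single masked layer only blinds the immediate centre: stacking ordinary convolutions afterwards widens the receptive field and re-introduces the centre pixel, which is the problem the efficient architectures are designed to avoid.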
- Learning Hybrid Sparsity Prior for Image Restoration
- Sparse Linear
  - Instead of activation functions
  - Reference: Extending PyTorch
  - Important! Include a configuration where you can set the sparse property with 'constant pruning' or 'gradual pruning' (sketch after this list).
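A sketch of such a layer under those assumptions: an `nn.Linear` carrying a persistent 0/1 mask, with a `mode` switch between constant pruning (mask fixed once at construction) and gradual pruning (sparsity ramped up by calling `update_mask` during training). All names and the linear ramp schedule are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Linear):
    """Linear layer with element-wise weight masking.

    mode='constant': prune to `sparsity` once and keep the mask fixed.
    mode='gradual':  call update_mask(step) to ramp sparsity from 0
                     to `sparsity` over `total_steps` training steps.
    """

    def __init__(self, in_features, out_features, sparsity=0.9,
                 mode="constant", total_steps=10_000, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.sparsity, self.mode, self.total_steps = sparsity, mode, total_steps
        self.register_buffer("mask", torch.ones_like(self.weight))
        if mode == "constant":
            self._prune_to(sparsity)

    @torch.no_grad()
    def _prune_to(self, sparsity):
        # Magnitude pruning: zero the smallest |w| until `sparsity` is reached.
        k = int(sparsity * self.weight.numel())
        if k > 0:
            threshold = self.weight.abs().flatten().kthvalue(k).values
            self.mask.copy_((self.weight.abs() > threshold).float())

    def update_mask(self, step):
        if self.mode == "gradual":
            self._prune_to(self.sparsity * min(step / self.total_steps, 1.0))

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)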
- Notebook plotting the KL divergence and its gradient estimate
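A starting point for that notebook: evaluate the Bernoulli KL penalty from the formula above over a grid of $\hat{\rho}$ values and compare the autograd gradient against the closed form $\partial\,\mathrm{KL}/\partial\hat{\rho} = -\rho/\hat{\rho} + (1-\rho)/(1-\hat{\rho})$:

```python
import torch
import matplotlib.pyplot as plt

def kl_bernoulli(rho, rho_hat):
    """KL(rho || rho_hat) between two Bernoulli distributions."""
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))

rho = torch.tensor(0.05)  # target sparsity
rho_hat = torch.linspace(0.01, 0.99, 200, requires_grad=True)

kl = kl_bernoulli(rho, rho_hat)
grad, = torch.autograd.grad(kl.sum(), rho_hat)            # autograd estimate
grad_closed = -rho / rho_hat + (1 - rho) / (1 - rho_hat)  # closed form

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 4))
ax0.plot(rho_hat.detach(), kl.detach())
ax0.set(title="Bernoulli KL penalty", xlabel=r"$\hat{\rho}$")
ax1.plot(rho_hat.detach(), grad, label="autograd")
ax1.plot(rho_hat.detach(), grad_closed.detach(), "--", label="closed form")
ax1.set(title=r"gradient w.r.t. $\hat{\rho}$", xlabel=r"$\hat{\rho}$")
ax1.legend()
plt.show()
```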