ExplainableAI v0.6.0
Closed issues:
- Add LRP support for `Parallel` layer (#10) (see the sketch after this list)
- Document use of LoopVectorization.jl and CUDA.jl (#64)
- Add LRP support for nested Chains (#90)
- Add generalized Gamma rule (#91)
- Rename composite primitives (#120)
- Add composite primitive to assign rule at specific position in model (#121)
- Support `BatchNorm` layers in LRP (#122)
- Refactor results struct (#123)
- Add Aqua.jl tests (#124)
- Update canonizer to support nested Flux Chains (#132)
- Update documentation for `v0.6.0` release (#133)
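
Taken together, #10, #90, and #122 let LRP handle models built from nested `Chain`s, `Parallel` layers, and `BatchNorm` layers. A minimal sketch, assuming the v0.6.0 API; the toy model and layer sizes are illustrative, not taken from the release:

```julia
using Flux
using ExplainableAI

# Toy model combining the newly supported constructs:
# a nested Chain (#90), a BatchNorm layer (#122),
# and a Parallel layer (#10).
model = Chain(
    Chain(Dense(10 => 32, relu), BatchNorm(32)),
    Parallel(+, Dense(32 => 16), Dense(32 => 16)),
    Dense(16 => 2),
)

analyzer = LRP(model)
input = rand(Float32, 10, 1)    # Flux convention: features × batch
expl = analyze(input, analyzer) # returns an Explanation
```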
Merged pull requests:
- Add `GeneralizedGammaRule` (#109) (@adrhill) (see the composite sketch after this list)
- Simplify LRP analyzer and clean up rule default parameters (#110) (@adrhill)
- Simplify LRP model checks (#112) (@adrhill)
- Update dependencies (#116) (@adrhill)
- Bump actions/cache from 1 to 3 (#117) (@dependabot[bot])
- Bump actions/checkout from 2 to 3 (#118) (@dependabot[bot])
- Support nested Flux Chains in LRP (#119) (@adrhill)
- Add Aqua.jl tests (#125) (@adrhill)
- Refactor `Explanation` struct (#126) (@adrhill)
- Faster tests and benchmarks (#127) (@adrhill)
- Set LRP output relevance to one (#128) (@adrhill)
- Enable `BatchNorm` layers in LRP (#129) (@adrhill)
- Rename composite primitives (#130) (@adrhill)
- Support nested indexing in composite primitive `LayerMap` (#131) (@adrhill)
- Add `PassRule` on normalization layers to composite presets, improve `show` (#134) (@adrhill)
- Support `Parallel` layers in LRP (#135) (@adrhill)
- Rename `Explanation` field `attribution` to `val` (#136) (@adrhill) (see the field-rename sketch after this list)
- Update documentation for `v0.6.0` (#137) (@adrhill)
- Support canonization of `Parallel` layers (#138) (@adrhill)
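
The renamed composite primitives (#120, #130) combine with the new `GeneralizedGammaRule` (#91, #109) and the positional `LayerMap` primitive (#121, #131) roughly as follows. A minimal sketch, assuming the v0.6.0 composite API; the model, rule choices, and the top-level index passed to `LayerMap` are illustrative, and the nested indices from #131 are not shown:

```julia
using Flux
using ExplainableAI

model = Chain(
    Dense(10 => 32, relu),
    BatchNorm(32),
    Dense(32 => 2),
)

composite = Composite(
    # Type-based rule assignment (primitive renamed in #130):
    GlobalTypeMap(
        Dense     => EpsilonRule(),
        BatchNorm => PassRule(),  # what the updated presets do (#134)
    ),
    # Assign a rule at a specific position in the model (#121):
    LayerMap(1, GeneralizedGammaRule()),
)

analyzer = LRP(model, composite)
```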
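Finally, the `Explanation` refactor (#123, #126) renamed the field holding the attribution values (#136). Downstream code needs a one-line update, sketched here continuing from the sketch above:

```julia
expl = analyze(input, analyzer)

# Before v0.6.0 the results were read as `expl.attribution`;
# from v0.6.0 on the field is called `val`:
rel = expl.val
```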