
Comparing changes

Choose two branches to see what has changed or to start a new pull request. If you need to, you can also compare across forks, or learn more about diff comparisons.
base repository: cair/tmu
base: v0.8.1
head repository: cair/tmu
compare: main

Commits on Apr 18, 2023

  1. Visual tokens demo (olegranmo, 1667b33)
  2. 0b334e8

Commits on Apr 19, 2023

  1. 973d4c6
  2. ffd1b2b

Commits on Apr 23, 2023

  1. abf89fa
  2. 40d2b20
  3. 9a66b98
  4. 57077ee
  5. 11c2301

Commits on Apr 24, 2023

  1. 22dac08

Commits on Apr 25, 2023

  1. 461b930
  2. 1adb5d5
  3. 09b1046
  4. 2a93444

Commits on Apr 26, 2023

  1. f6d8c49

Commits on May 2, 2023

  1. 1e39803

Commits on May 3, 2023

  1. Small fix (olegranmo, 8313b3b)
  2. Refactoring of autoencoder (olegranmo, 44e3d18)
  3. Refactoring of autoencoder (olegranmo, cb8b649)
  4. Refactoring of autoencoder (olegranmo, 46acca7)
  5. 166759f

Commits on May 4, 2023

  1. Update (olegranmo, 288e84f)

Commits on May 7, 2023

  1. Concept Learning Demo (olegranmo, 0a40b91)

Commits on May 8, 2023

  1. 3b46e66
  2. 6323830
  3. 693c5cc
  4. 96a0f09
  5. bdae244
  6. 4e4d0a1
  7. 73d1343
  8. d0a36d8
  9. Fixed GPU fallback (perara, 39b7a0e)

Commits on May 9, 2023

  1. c5c9481
  2. 71e1e58
  3. c5dd806
  4. ba1d9c8
  5. 83dc16a
  6. 15de998
  7. Updated Autoencoder (perara, 8bdd7c4)
  8. Moving encode to CUDA (olegranmo, 6b5278b)
  9. 0c44a08
  10. c4911a5

Commits on May 10, 2023

  1. Update README.md (perara, ddddc29)
  2. 99237ed
  3. Fixed CUDA Clause loading (perara, d27c3ec)
  4. Fix (olegranmo, cf3f7ee)
  5. 55f00d5
  6. New Fix (olegranmo, ab62796)
  7. New Fix (olegranmo, 855eefb)
  8. Updates for clause bank (perara, e0b228e)
Showing with 15,564 additions and 2,239 deletions.
  1. +5 −2 .github/workflows/{build_doxygen.yaml → build-docs.yml}
  2. +11 −8 .github/workflows/{build_test.yaml → build-tests.yml}
  3. +66 −64 .github/workflows/{build_wheels.yaml → build-wheels.yml}
  4. +5 −0 .gitignore
  5. +1 −1 LICENSE
  6. +2 −1 MANIFEST.in
  7. +78 −33 README.md
  8. +1 −0 docs/description.txt
  9. +27 −0 docs/long_description.rst
  10. +190 −0 docs/tutorials/devcontainers/devcontainers.md
  11. 0 examples/{type_iii_feedback/utils.py → __init__.py}
  12. +121 −79 examples/autoencoder/DimensionalityReductionDemo.py
  13. +195 −127 examples/autoencoder/IMDbWordEmbeddingDemo.py
  14. +163 −64 examples/classification/CIFAR2Demo3x3LiteralBudget.py
  15. +83 −31 examples/classification/IMDbSparseAbsorbingTextCategorizationDemo.py
  16. +47 −19 examples/classification/IMDbTextCategorizationDemo.py
  17. +141 −90 examples/classification/InterpretabilityDemo.py
  18. +38 −14 examples/classification/MNISTConvolutionDemo.py
  19. +62 −21 examples/classification/MNISTDemo.py
  20. +41 −10 examples/classification/MNISTDemoCoalesced.py
  21. +41 −13 examples/classification/MNISTDemoWithSerialization.py
  22. +87 −26 examples/classification/MNISTSparseAbsorbingDemo.py
  23. +41 −13 examples/classification/XORDemo.py
  24. +11 −0 examples/classification/__init__.py
  25. +151 −0 examples/composite/TMCompositeCIFAR10Demo.py
  26. 0 examples/composite/__init__.py
  27. 0 examples/composite/hpsearch/__init__.py
  28. +69 −0 examples/composite/hpsearch/example_2.py
  29. +82 −0 examples/experimental/autoencoder/MNISTReconstructionDemo.py
  30. +134 −0 examples/experimental/autoencoder/NoiseRemovalDemo.py
  31. +73 −0 examples/experimental/classification/CIFAR10CoalescedDemo.py
  32. +73 −0 examples/experimental/classification/CIFAR10Demo.py
  33. +425 −0 examples/experimental/classification/CIFAR10DemoS.py
  34. +83 −0 examples/experimental/classification/CIFAR2Demo3x3HSVLiteralBudget.py
  35. +104 −0 examples/experimental/classification/CIFAR2Demo3x3LiteralBudgetSpecificity.py
  36. +96 −0 examples/experimental/classification/CIFAR2Demo3x3RGCLiteralBudget.py
  37. +101 −0 examples/experimental/classification/CIFAR2Histogram.py
  38. +126 −0 examples/experimental/classification/CIFAR2VisualTokensHyperVector.py
  39. +114 −0 examples/experimental/classification/CombinatorialCompositionDemo.py
  40. +46 −0 examples/experimental/classification/ConceptLearningDemo.py
  41. +430 −0 examples/experimental/classification/FashionMNISTDemoS.py
  42. +130 −0 examples/experimental/classification/FashionMNISTMultioutput.py
  43. +177 −0 examples/experimental/classification/IMDbMultiWordPredictionDemoCoalescedV2.py
  44. +177 −0 examples/experimental/classification/IMDbMultiWordPredictionDemoCoalescedV3.py
  45. +1 −1 examples/experimental/classification/InterpretabilityDemoAND.py
  46. +1 −2 examples/experimental/classification/InterpretabilityDemoOneVsOne.py
  47. +80 −0 examples/experimental/classification/MNISTConvolutionVisualTokens.py
  48. +1 −1 examples/experimental/classification/MNISTDemo2DConvolutionOneVsOne.py
  49. +1 −1 examples/experimental/classification/MNISTDemoOneVsOne.py
  50. +83 −0 examples/experimental/classification/MNISTMultioutput.py
  51. +92 −0 examples/experimental/classification/MNISTVisualTokensHyperVector.py
  52. +0 −30 examples/experimental/regression/RegressionDemo.py
  53. +1 −5 examples/experimental/relational/RelationalTMDemo.py
  54. 0 examples/{ → experimental}/type_iii_feedback/coma.ipynb
  55. 0 examples/{ → experimental}/type_iii_feedback/exp2.ipynb
  56. 0 examples/experimental/type_iii_feedback/utils.py
  57. +67 −20 examples/regression/RegressionDemo.py
  58. +0 −2 examples/requirements.txt
  59. +0 −27 examples/stats.py
  60. +101 −10 pyproject.toml
  61. 0 scripts/example_verifier/__init__.py
  62. +80 −0 scripts/example_verifier/verify_experiments.py
  63. +9 −0 scripts/performance_test/match.json
  64. +85 −0 scripts/performance_test/performance_test.py
  65. +2 −0 scripts/performance_test/requirements.txt
  66. +82 −45 setup.py
  67. +101 −0 test/test_classifiers.py
  68. +80 −0 test/test_components.py
  69. +106 −17 test/test_datasets.py
  70. +0 −12 tmu.iml
  71. +13 −7 tmu/__init__.py
  72. +32 −12 tmu/clause_bank/base_clause_bank.py
  73. +210 −96 tmu/clause_bank/clause_bank.py
  74. +173 −44 tmu/clause_bank/clause_bank_cuda.py
  75. +237 −69 tmu/clause_bank/clause_bank_sparse.py
  76. +67 −0 tmu/clause_bank/cuda/calculate_clause_outputs_patchwise.cu
  77. +0 −6 tmu/clause_bank/cuda/clause_feedback.cu
  78. +276 −0 tmu/clause_bank/cuda/tools.cu
  79. +1 −0 tmu/composite/__init__.py
  80. 0 tmu/composite/callbacks/__init__.py
  81. +49 −0 tmu/composite/callbacks/base.py
  82. 0 tmu/composite/components/__init__.py
  83. +24 −0 tmu/composite/components/adaptive_thresholding.py
  84. +74 −0 tmu/composite/components/base.py
  85. +49 −0 tmu/composite/components/color_thermometer_scoring.py
  86. +72 −0 tmu/composite/components/histogram_of_gradients.py
  87. +1 −0 tmu/composite/components/image/__init__.py
  88. +520 −0 tmu/composite/components/image/experimental.py
  89. +415 −0 tmu/composite/composite.py
  90. +14 −0 tmu/composite/config.py
  91. 0 tmu/composite/gating/__init__.py
  92. +14 −0 tmu/composite/gating/base.py
  93. +16 −0 tmu/composite/gating/linear_gate.py
  94. +31 −0 tmu/composite/gating/neural_gate.py
  95. +144 −0 tmu/composite/tuner.py
  96. +8 −0 tmu/data/__init__.py
  97. +206 −0 tmu/data/bot_iot.py
  98. +187 −0 tmu/data/cic_ids.py
  99. +61 −0 tmu/data/cifar10.py
  100. +52 −0 tmu/data/cifar100.py
  101. +129 −0 tmu/data/fashion_mnist.py
  102. +153 −0 tmu/data/imdb_keras.py
  103. +195 −0 tmu/data/kdd_99.py
  104. +36 −0 tmu/data/mnist.py
  105. +254 −0 tmu/data/nsl_kdd.py
  106. +39 −0 tmu/data/tmu_dataset.py
  107. +11 −87 tmu/{data.py → data/tmu_datasource.py}
  108. +205 −0 tmu/data/unsw_nb15.py
  109. 0 tmu/data/utils/__init__.py
  110. +107 −0 tmu/data/utils/downloader.py
  111. 0 tmu/experimental/__init__.py
  112. 0 tmu/experimental/models/__init__.py
  113. 0 tmu/{ → experimental}/models/attention.py
  114. +108 −58 tmu/{models/classification → experimental/models}/multichannel_classifier.py
  115. +372 −0 tmu/experimental/models/multioutput_classifier.py
  116. +103 −56 tmu/{models/classification → experimental/models}/one_vs_one_classifier.py
  117. 0 tmu/experimental/models/relational/__init__.py
  118. +202 −127 tmu/{ → experimental}/models/relational/vanilla_relational.py
  119. +159 −0 tmu/lib/CMakeLists.txt
  120. +689 −0 tmu/lib/cpp/include/models/classifiers/tm_vanilla.h
  121. +547 −0 tmu/lib/cpp/include/tm_clause_dense.h
  122. +88 −0 tmu/lib/cpp/include/tm_memory.h
  123. +102 −0 tmu/lib/cpp/include/tm_weight_bank.h
  124. +152 −0 tmu/lib/cpp/include/utils/sparse_clause_container.h
  125. +98 −0 tmu/lib/cpp/include/utils/tm_dataset.h
  126. +55 −0 tmu/lib/cpp/include/utils/tm_math.h
  127. +129 −0 tmu/lib/cpp/main.cpp
  128. +681 −0 tmu/lib/cpp/nb/tmulibcpp.cpp
  129. +178 −0 tmu/lib/cpp/scripts/STM32L475VGTX_FLASH.ld
  130. +31 −0 tmu/lib/cpp/scripts/build_stm32.sh
  131. +40 −0 tmu/lib/cpp/scripts/stm32l4_toolchain.cmake
  132. +3 −0 tmu/lib/cpp/src/memory.cpp
  133. +96 −0 tmu/lib/cpp/tests/MNISTDemoCPP.py
  134. +16 −0 tmu/lib/cpp/tests/mnist_generator.py
  135. +22 −5 tmu/lib/include/Attention.h
  136. +145 −23 tmu/lib/include/ClauseBank.h
  137. +94 −12 tmu/lib/include/ClauseBankSparse.h
  138. +121 −19 tmu/lib/include/ClauseWeightBank.h
  139. +27 −2 tmu/lib/include/Tools.h
  140. +17 −3 tmu/lib/include/WeightBank.h
  141. +6 −22 tmu/lib/include/fast_rand.h
  142. +3 −0 tmu/lib/include/fast_rand_seed.h
  143. +30 −0 tmu/lib/pyproject.toml
  144. +27 −0 tmu/lib/setup.py
  145. +21 −4 tmu/lib/src/Attention.c
  146. +225 −31 tmu/lib/src/ClauseBank.c
  147. +119 −45 tmu/lib/src/ClauseBankSparse.c
  148. +86 −51 tmu/lib/src/Tools.c
  149. +16 −2 tmu/lib/src/WeightBank.c
  150. +16 −0 tmu/lib/src/random/pcg32_fast.c
  151. +24 −0 tmu/lib/src/random/xorshift128.c
  152. +230 −128 tmu/models/autoencoder/autoencoder.py
  153. +176 −19 tmu/models/base.py
  154. +0 −36 tmu/models/classification/base_classification.py
  155. +242 −113 tmu/models/classification/coalesced_classifier.py
  156. +165 −79 tmu/models/classification/multitask_classifier.py
  157. +374 −212 tmu/models/classification/vanilla_classifier.py
  158. +94 −56 tmu/models/regression/vanilla_regressor.py
  159. 0 tmu/preprocessing/__init__.py
  160. +13 −1 tmu/preprocessing/standard_binarizer/binarizer.py
  161. +25 −19 tmu/tools.py
  162. 0 tmu/util/__init__.py
  163. +94 −0 tmu/util/cuda_profiler.py
  164. +57 −0 tmu/util/encoded_data_cache.py
  165. +83 −0 tmu/util/sparse_clause_container.py
  166. +62 −0 tmu/util/statistics.py
  167. +5 −6 tmu/weight_bank/weight_bank.py
7 changes: 5 additions & 2 deletions .github/workflows/{build_doxygen.yaml → build-docs.yml}
@@ -8,9 +8,12 @@ on:

jobs:
deploy:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: DenverCoder1/doxygen-github-pages-action@v1.2.0
- name: Install graphviz
run: sudo apt-get update && sudo apt-get install -y graphviz

- uses: DenverCoder1/doxygen-github-pages-action@v1.3.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: gh-pages
19 changes: 11 additions & 8 deletions .github/workflows/{build_test.yaml → build-tests.yml}
@@ -10,11 +10,13 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, ubuntu-22.04, macos-latest, windows-latest]
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
exclude:
- os: ubuntu-22.04
python-version: "3.6"
os: [ubuntu-20.04]
python-version: ["3.10"]
# os: [ubuntu-20.04, ubuntu-22.04, macos-latest, windows-latest]
# python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
# exclude:
# - os: ubuntu-22.04
# python-version: "3.6"
steps:
- uses: actions/checkout@v3
- name: Set up Python
@@ -25,11 +27,12 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
pip install -r examples/requirements.txt
- name: Install tmu
run: |
pip install -e .
pip install .
pip install .[composite]
pip install .[examples]
pip install .[tests]
- name: Test with pytest
run: pytest test --doctest-modules --junitxml=junit/test-results-${{ matrix.os }}-${{ matrix.python-version }}.xml
- name: Upload pytest test results
130 changes: 66 additions & 64 deletions .github/workflows/{build_wheels.yaml → build-wheels.yml}
@@ -3,9 +3,13 @@ name: Build Wheels
on:
workflow_dispatch:
push:
branches:
- '**'
release:
types:
- created
- published


jobs:
build_wheels:
@@ -28,81 +32,25 @@ jobs:
run: python -m cibuildwheel --output-dir wheelhouse

#- name: Prefix wheels with branch name
# if: startsWith(github.ref, 'refs/heads/')
# run: |
# BRANCH_NAME=$(echo "${{ github.ref }}" | sed -r "s/^refs\/heads\///")
# for wheel in wheelhouse/*.whl; do
# mv "$wheel" "wheelhouse/${BRANCH_NAME}-$(basename $wheel)"
# branchName=${GITHUB_REF/refs\/heads\//}
# for file in wheelhouse/*.whl; do
# newName="${branchName}-$(basename $file)"
# mv "$file" "wheelhouse/$newName"
# done

- name: Prefix wheels with branch name
if: startsWith(github.ref, 'refs/heads/')
shell: pwsh
run: |
$branchName = "${{ github.ref }}".Replace("refs/heads/", "")
Get-ChildItem -Path wheelhouse -Filter *.whl | ForEach-Object {
$newName = "${branchName}-$($_.Name)"
Rename-Item -Path $_.FullName -NewName $newName
}
# shell: bash

- uses: actions/upload-artifact@v2
with:
name: wheels
path: wheelhouse/*.whl

publish_wheels_to_release_page:
name: Publish wheels to Release Page
needs: build_wheels
if: github.event_name == 'release' && github.event.action == 'created'
runs-on: ubuntu-latest

steps:
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: wheels
path: wheelhouse

- name: Get release info
uses: actions/github-script@v5
id: get_release_info
with:
script: |
const { upload_url } = await github.rest.repos.getReleaseByTag({
owner: context.repo.owner,
repo: context.repo.repo,
tag: context.payload.release.tag_name,
});
return { upload_url: upload_url };
- uses: shogo82148/actions-upload-release-asset@v1
with:
upload_url: ${{ steps.get_release_info.outputs.upload_url }}
asset_path: wheelhouse/*.whl

publish_wheels_to_pypi:
name: Publish wheels to PyPI
needs: build_wheels
if: github.event_name == 'release' && github.event.action == 'created'
runs-on: ubuntu-latest

steps:
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: wheels
path: wheelhouse

- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.PYPI_API_TOKEN }}
packages_dir: wheelhouse


publish_wheels_to_gh_pages:
name: Publish wheels to GitHub Pages
runs-on: ubuntu-latest
needs: build_wheels

steps:
- name: Checkout repository
uses: actions/checkout@v2
@@ -113,7 +61,6 @@ jobs:
name: wheels
path: wheelhouse


- name: Checkout gh-pages branch
uses: actions/checkout@v2
with:
@@ -149,3 +96,58 @@ jobs:
git add wheels/*.whl wheels/index.html
git commit -m "Upload wheels to GitHub Pages and update index.html"
git push
publish_wheels_to_pypi:
name: Publish wheels to PyPI
needs: build_wheels
runs-on: ubuntu-latest
if: github.event_name == 'release' && github.event.action == 'created'

steps:
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: wheels
path: wheelhouse

- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.PYPI_API_TOKEN }}
packages_dir: wheelhouse


publish_wheels_to_release_page:
name: Publish wheels to Release Page
needs: build_wheels
runs-on: ubuntu-latest
if: github.event_name == 'release' && github.event.action == 'created'

steps:
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: wheels
path: wheelhouse

- name: Get release info
uses: actions/github-script@v5
id: get_release_info
with:
script: |
const { upload_url } = await github.rest.repos.getReleaseByTag({
owner: context.repo.owner,
repo: context.repo.repo,
tag: context.payload.release.tag_name,
});
return { upload_url: upload_url };
- name: Print upload URL
run: echo "Upload URL is ${{ steps.get_release_info.outputs.upload_url }}"

- name: Upload wheels to release assets
uses: shogo82148/actions-upload-release-asset@v1
with:
upload_url: ${{ steps.get_release_info.outputs.upload_url }}
asset_path: wheelhouse/*.whl
5 changes: 5 additions & 0 deletions .gitignore
@@ -133,3 +133,8 @@ dmypy.json

html/
wheelhouse/

cmake-build-debug
cmake-build-release

.envrc
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2021 Centre for Artificial Intelligence Research (CAIR)
Copyright (c) 2023 Centre for Artificial Intelligence Research (CAIR)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
3 changes: 2 additions & 1 deletion MANIFEST.in
@@ -1,2 +1,3 @@
include tmu/*.h
recursive-include tmu/clause_bank/cuda *.cu
include tmu/logging_example.json
include tmu/*.so
111 changes: 78 additions & 33 deletions README.md
@@ -1,36 +1,81 @@
# Tsetlin Machine Unified - One Codebase to Rule Them All
![License](https://img.shields.io/github/license/microsoft/interpret.svg?style=flat-square) ![Python Version](https://img.shields.io/pypi/pyversions/interpret.svg?style=flat-square)![Maintenance](https://img.shields.io/maintenance/yes/2023?style=flat-square)

The TMU repository is a collection of Tsetlin Machine implementations, namely:
* Tsetlin Machine (https://arxiv.org/abs/1804.01508)
* Coalesced Tsetlin Machine (https://arxiv.org/abs/2108.07594)
* Convolutional Tsetlin Machine (https://arxiv.org/abs/1905.09688)
* Regression Tsetlin Machine (https://royalsocietypublishing.org/doi/full/10.1098/rsta.2019.0165)
* Weighted Tsetlin Machine (https://ieeexplore.ieee.org/document/9316190)
* Autoencoder (https://arxiv.org/abs/2301.00709)
* Multi-task classifier (to be published)
* One-vs-one multi-class classifier (to be published)
* Relational Tsetlin Machine (under development, https://link.springer.com/article/10.1007/s10844-021-00682-5)

Further, we implement many TM features, including:
* Support for continuous features (https://arxiv.org/abs/1905.04199)
* Drop clause (https://arxiv.org/abs/2105.14506)
* Literal budget (https://arxiv.org/abs/2301.08190)
* Focused negative sampling (https://ieeexplore.ieee.org/document/9923859)
* Type III Feedback (to be published)
* Incremental clause evaluation (to be published)
* Sparse computation with absorbing actions (to be published)

TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.

# Installation

## Installing on Windows
To install on Windows, you will need the MSVC build tools, [found here](https://visualstudio.microsoft.com/visual-cpp-build-tools/). When prompted, select the `Workloads → Desktop development with C++` package, which is roughly 6-7 GB in size; install it and you should be able to compile the cffi modules.

## Installing TMU
# Tsetlin Machine Unified (TMU) - One Codebase to Rule Them All
![License](https://img.shields.io/github/license/cair/tmu.svg?style=flat-square) ![Python Version](https://img.shields.io/pypi/pyversions/tmu.svg?style=flat-square) ![Maintenance](https://img.shields.io/maintenance/yes/2024?style=flat-square)

TMU is a comprehensive repository that encompasses several Tsetlin Machine implementations. Offering a rich set of features and extensions, it serves as a central resource for enthusiasts and researchers alike.

## Features
- Core Implementations:
- [Tsetlin Machine](https://arxiv.org/abs/1804.01508)
- [Coalesced Tsetlin Machine](https://arxiv.org/abs/2108.07594)
- [Convolutional Tsetlin Machine](https://arxiv.org/abs/1905.09688)
- [Regression Tsetlin Machine](https://royalsocietypublishing.org/doi/full/10.1098/rsta.2019.0165)
- [Weighted Tsetlin Machine](https://ieeexplore.ieee.org/document/9316190)
- [Autoencoder](https://arxiv.org/abs/2301.00709)
- Multi-task Classifier *(Upcoming)*
- One-vs-one Multi-class Classifier *(Upcoming)*
- [Relational Tsetlin Machine](https://link.springer.com/article/10.1007/s10844-021-00682-5) *(In Progress)*

- Extended Features:
- [Support for Continuous Features](https://arxiv.org/abs/1905.04199)
- [Drop Clause](https://arxiv.org/abs/2105.14506)
- [Literal Budget](https://arxiv.org/abs/2301.08190)
- [Focused Negative Sampling](https://ieeexplore.ieee.org/document/9923859)
- [Type III Feedback](https://arxiv.org/abs/2309.06315)
- Incremental Clause Evaluation *(Upcoming)*
- [Sparse Computation with Absorbing Actions](https://arxiv.org/abs/2310.11481)
- TMComposites: Plug-and-Play Collaboration Between Specialized Tsetlin Machines *([In Progress](https://arxiv.org/abs/2309.04801))*

- Wrappers for C and CUDA-based clause evaluation and updates to enable high-performance computation.

## Guides and Tutorials
- [Setting up efficient Development Environment](docs/tutorials/devcontainers/devcontainers.md)

## 📦 Installation

#### **Prerequisites for Windows**
Before installing TMU on Windows, ensure you have the MSVC build tools. Follow these steps:
1. [Download MSVC build tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
2. Install the `Workloads → Desktop development with C++` package. *(Note: The package size is about 6-7GB.)*

#### **Dependencies**
Ubuntu: `sudo apt install libffi-dev`

#### **Installing TMU**
To get started with TMU, run the following command:
```bash
# Installing Stable Branch
pip install git+https://github.com/cair/tmu.git

# Installing Development Branch
pip install git+https://github.com/cair/tmu.git@dev
```

## 🛠 Development

If you're looking to contribute or experiment with the codebase, follow these steps:

1. **Clone the Repository**:
```bash
git clone -b dev git@github.com:cair/tmu.git && cd tmu
```

2. **Set Up Development Environment**:
Navigate to the project directory and compile the C library:
```bash
# Install TMU
pip install .

# (Alternative): Install TMU in Development Mode
pip install -e .

# Install TMU-Composite
pip install .[composite]

# Install TMU-Composite in Development Mode
pip install -e .[composite]
```

3. **Starting a New Project**:
For your own projects, create a new **branch**, then add a new project inside the `examples` folder and start developing.

---
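The updated README describes TMU as Python with wrappers for C and CUDA-based clause evaluation and updating. As a rough illustration of the operation those backends accelerate, here is a self-contained sketch of evaluating a single Tsetlin Machine clause, a conjunction of literals over a binarized input. The function name and literal encoding below are illustrative assumptions, not the tmu package API.

```python
# Illustrative sketch of Tsetlin Machine clause evaluation, the inner loop
# that TMU offloads to its C/CUDA backends. Names are hypothetical and not
# part of the tmu package API.

def clause_output(included_literals, x):
    """Evaluate one clause on a binary input vector x.

    included_literals: set of literal indices; index k < n refers to x[k],
    index k >= n refers to NOT x[k - n], where n = len(x).
    """
    n = len(x)
    for k in included_literals:
        value = x[k] if k < n else 1 - x[k - n]
        if value == 0:   # one unsatisfied literal falsifies the conjunction
            return 0
    return 1             # a clause with no unsatisfied literals fires


# Example clause "x0 AND NOT x1" over 2-bit inputs.
clause = {0, 3}          # literal 0 = x0, literal 3 = NOT x1
print(clause_output(clause, [1, 0]))  # 1: both literals satisfied
print(clause_output(clause, [1, 1]))  # 0: NOT x1 fails
```

Since this conjunction must be evaluated for every clause on every training example, moving it into compiled C or CUDA code (as the wrappers above do) is where TMU gets its throughput.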
1 change: 1 addition & 0 deletions docs/description.txt
@@ -0,0 +1 @@
Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features, drop clause, Type III Feedback, focused negative sampling, multi-task classifier, autoencoder, literal budget, incremental clause evaluation, sparse computation with absorbing exclude, and one-vs-one multi-class classifier. TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.
27 changes: 27 additions & 0 deletions docs/long_description.rst
@@ -0,0 +1,27 @@
Implements the Tsetlin Machine
==================================

- `Tsetlin Machine <https://arxiv.org/abs/1804.01508>`_
- `Coalesced Tsetlin Machine <https://arxiv.org/abs/2108.07594>`_
- `Convolutional Tsetlin Machine <https://arxiv.org/abs/1905.09688>`_
- `Regression Tsetlin Machine <https://royalsocietypublishing.org/doi/full/10.1098/rsta.2019.0165>`_
- `Weighted Tsetlin Machine <https://ieeexplore.ieee.org/document/9316190>`_

Features and Extensions
=======================

- Support for continuous features: `<https://arxiv.org/abs/1905.04199>`_
- Drop clause: `<https://arxiv.org/abs/2105.14506>`_
- Type III Feedback (to be published)
- Focused negative sampling: `<https://ieeexplore.ieee.org/document/9923859>`_
- Multi-task classifier (to be published)
- Autoencoder: `<https://arxiv.org/abs/2301.00709>`_
- Literal budget: `<https://arxiv.org/abs/2301.08190>`_
- Incremental clause evaluation (to be published)
- Sparse computation with absorbing exclude (to be published)
- One-vs-one multi-class classifier (to be published)

Technical Details
=================

TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.
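The feature list above includes support for continuous features. A common way to feed continuous values to a Tsetlin Machine is thermometer binarization, in the spirit of `tmu/preprocessing/standard_binarizer`; the sketch below is a hedged illustration of the idea, not the package's actual implementation.

```python
# Hedged sketch of thermometer binarization for continuous features:
# each threshold contributes one bit, set when the value reaches it.
# Illustrative only; not the tmu standard_binarizer implementation.

def thermometer_encode(value, thresholds):
    """Return one bit per threshold: 1 if value >= that threshold."""
    return [1 if value >= t else 0 for t in thresholds]


# Example: encode a feature in [0, 1] with 4 evenly spaced thresholds.
thresholds = [0.2, 0.4, 0.6, 0.8]
print(thermometer_encode(0.5, thresholds))   # [1, 1, 0, 0]
print(thermometer_encode(0.85, thresholds))  # [1, 1, 1, 1]
```

The resulting bits are monotone (a higher value never clears a lower bit), which lets clauses express interval conditions with plain conjunctions of literals.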