
Commit b1aff8f

Fixed typo in docs and added makefile for building docs
1 parent 441b5f8 commit b1aff8f


6 files changed (+22 -16 lines)


Makefile (+6)

@@ -0,0 +1,6 @@
+all: docs
+
+docs:
+	cd documentation && mkdocs build && mv ../docs/reference site && rsync -a --delete site/* ../docs/ && rm -rf site
+
+.PHONY: docs
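The new docs rule builds the MkDocs site under documentation/, moves the separately maintained docs/reference directory into the freshly built site/ so the sync does not delete it, mirrors site/ into docs/ with rsync -a --delete, and removes the staging directory. A minimal Python sketch of the same pipeline, assuming it runs from the repository root with the layout above (an illustration, not part of the commit):

    # Hypothetical equivalent of `make docs`, run from the repository root.
    import shutil
    import subprocess

    def build_docs():
        # Build the MkDocs site; mkdocs writes its output to documentation/site/.
        subprocess.run(["mkdocs", "build"], cwd="documentation", check=True)
        # Preserve the hand-maintained API reference by moving it into the new site.
        shutil.move("docs/reference", "documentation/site/reference")
        # Replace docs/ wholesale with the built site (the rsync -a --delete step),
        # so nothing stale survives.
        shutil.rmtree("docs")
        shutil.move("documentation/site", "docs")

    if __name__ == "__main__":
        build_docs()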

docs/index.html (+1 -1)

@@ -231,5 +231,5 @@ <h1 id="getting-help">Getting Help</h1>
 
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2020-08-25 21:10:36
+Build Date UTC : 2021-04-19 19:23:50
 -->

docs/machine_learning/index.html (+1 -1)

@@ -171,7 +171,7 @@
 expressed in <a href="../pycomputations/index.html#specifying-tensor-algebra-computations">index
 notation</a> as </p>
 <p>
-<script type="math/tex; mode=display">A_{ij} = B_{ij} \cdot C_{ik} \cdot C_{kj}.</script>
+<script type="math/tex; mode=display">A_{ij} = B_{ij} \cdot C_{ik} \cdot D_{kj}.</script>
 </p>
 <p>You can use the taco C++ library to easily and efficiently compute the SDDMM, as
 shown here:</p>
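The fix follows from the page's own definition of the operation: with A = B \circ CD and \circ component-wise, expanding the dense product over the contracted index k shows that the second dense factor must be D, not a repeated C:

    % SDDMM as defined on the page: A = B \circ (CD), \circ component-wise.
    % Expanding the dense product over the contracted index k:
    A_{ij} = B_{ij}\,(CD)_{ij} = B_{ij} \sum_{k} C_{ik} D_{kj}
    % so each summand is B_{ij} \cdot C_{ik} \cdot D_{kj}; C_{kj} was a typo.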

docs/search/search_index.json (+1 -1)

@@ -197,7 +197,7 @@
 },
 {
     "location": "/machine_learning/index.html",
-    "text": "Sampled dense-dense matrix product (SDDMM) is a bottleneck operation in many\nfactor analysis algorithms used in machine learning, including Alternating\nLeast Squares and Latent Dirichlet Allocation [1]. Mathematically, the\noperation can be expressed as \n\n\n\n\nA = B \\circ CD,\n\n\n\n\nwhere \nA\n and \nB\n are sparse matrices, \nC\n and \nD\n are dense matrices,\nand \n\\circ\n denotes component-wise multiplication. This operation can also be\nexpressed in \nindex\nnotation\n as \n\n\n\n\nA_{ij} = B_{ij} \\cdot C_{ik} \\cdot C_{kj}.\n\n\n\n\nYou can use the taco C++ library to easily and efficiently compute the SDDMM, as\nshown here:\n\n\n// On Linux and MacOS, you can compile and run this program like so:\n// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib sddmm.cpp -o sddmm -ltaco\n// LD_LIBRARY_PATH=../../build/lib ./sddmm\n#include \nrandom\n\n#include \"taco.h\"\nusing namespace taco;\nint main(int argc, char* argv[]) {\n  std::default_random_engine gen(0);\n  std::uniform_real_distribution\ndouble\n unif(0.0, 1.0);\n  // Predeclare the storage formats that the inputs and output will be stored as.\n  // To define a format, you must specify whether each dimension is dense or sparse\n  // and (optionally) the order in which dimensions should be stored. The formats\n  // declared below correspond to doubly compressed sparse row (dcsr), row-major\n  // dense (rm), and column-major dense (dm).\n  Format dcsr({Sparse,Sparse});\n  Format rm({Dense,Dense});\n  Format cm({Dense,Dense}, {1,0});\n\n  // Load a sparse matrix from file (stored in the Matrix Market format) and\n  // store it as a doubly compressed sparse row matrix. Matrices correspond to\n  // order-2 tensors in taco. The matrix in this example can be download from:\n  // https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz\n  Tensor\ndouble\n B = read(\"webbase-1M.mtx\", dcsr);\n  // Generate a random dense matrix and store it in row-major (dense) format.\n  Tensor\ndouble\n C({B.getDimension(0), 1000}, rm);\n  for (int i = 0; i \n C.getDimension(0); ++i) {\n    for (int j = 0; j \n C.getDimension(1); ++j) {\n      C.insert({i,j}, unif(gen));\n    }\n  }\n  C.pack();\n\n  // Generate another random dense matrix and store it in column-major format.\n  Tensor\ndouble\n D({1000, B.getDimension(1)}, cm);\n  for (int i = 0; i \n D.getDimension(0); ++i) {\n    for (int j = 0; j \n D.getDimension(1); ++j) {\n      D.insert({i,j}, unif(gen));\n    }\n  }\n  D.pack();\n\n  // Declare the output matrix to be a sparse matrix with the same dimensions as\n  // input matrix B, to be also stored as a doubly compressed sparse row matrix.\n  Tensor\ndouble\n A(B.getDimensions(), dcsr);\n\n  // Define the SDDMM computation using index notation.\n  IndexVar i, j, k;\n  A(i,j) = B(i,j) * C(i,k) * D(k,j);\n\n  // At this point, we have defined how entries in the output matrix should be\n  // computed from entries in the input matrices but have not actually performed\n  // the computation yet. To do so, we must first tell taco to generate code that\n  // can be executed to compute the SDDMM operation.\n  A.compile();\n  // We can now call the functions taco generated to assemble the indices of the\n  // output matrix and then actually compute the SDDMM.\n  A.assemble();\n  A.compute();\n  // Write the output of the computation to file (stored in the Matrix Market format).\n  write(\"A.mtx\", A);\n}\n\n\n\nYou can also use the TACO Python library to perform the same computation, as\ndemonstrated here:\n\n\nimport pytaco as pt\nfrom pytaco import dense, compressed\nimport numpy as np\n\n# Define formats that the inputs and output will be stored as. To define a\n# format, you must specify whether each dimension is dense or sparse and\n# (optionally) the order in which dimensions should be stored. The formats\n# declared below correspond to doubly compressed sparse row (dcsr), row-major\n# dense (rm), and column-major dense (dm).\ndcsr = pt.format([compressed, compressed])\nrm = pt.format([dense, dense])\ncm = pt.format([dense, dense], [1, 0])\n\n# The matrix in this example can be download from:\n# https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz\nB = pt.read(\"webbase-1M.mtx\", dcsr)\n\n# Generate two random matrices using NumPy and pass them into TACO\nx = pt.from_array(np.random.uniform(size=(B.shape[0], 1000)))\nz = pt.from_array(np.random.uniform(size=(1000, B.shape[1])), out_format=cm)\n\n# Declare the result to be a doubly compressed sparse row matrix\nA = pt.tensor(B.shape, dcsr)\n\n# Declare index vars\ni, j, k = pt.get_index_vars(3)\n\n# Define the SDDMM computation\nA[i, j] = B[i, j] * C[i, k] * D[k, j]\n\n# Perform the SDDMM computation and write the result to file\npt.write(\"A.mtx\", A)\n\n\n\nWhen you run the above Python program, TACO will generate code under the hood\nthat efficiently performs the computation in one shot. This lets TACO only \ncompute elements of the intermediate dense matrix product that are actually \nneeded to compute the result, thus reducing the asymptotic complexity of the \ncomputation.\n\n\n[1] Huasha Zhao. 2014. High Performance Machine Learning through Codesign and\nRooflining. Ph.D. Dissertation. EECS Department, University of California,\nBerkeley.",
+    "text": "Sampled dense-dense matrix product (SDDMM) is a bottleneck operation in many\nfactor analysis algorithms used in machine learning, including Alternating\nLeast Squares and Latent Dirichlet Allocation [1]. Mathematically, the\noperation can be expressed as \n\n\n\n\nA = B \\circ CD,\n\n\n\n\nwhere \nA\n and \nB\n are sparse matrices, \nC\n and \nD\n are dense matrices,\nand \n\\circ\n denotes component-wise multiplication. This operation can also be\nexpressed in \nindex\nnotation\n as \n\n\n\n\nA_{ij} = B_{ij} \\cdot C_{ik} \\cdot D_{kj}.\n\n\n\n\nYou can use the taco C++ library to easily and efficiently compute the SDDMM, as\nshown here:\n\n\n// On Linux and MacOS, you can compile and run this program like so:\n// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib sddmm.cpp -o sddmm -ltaco\n// LD_LIBRARY_PATH=../../build/lib ./sddmm\n#include \nrandom\n\n#include \"taco.h\"\nusing namespace taco;\nint main(int argc, char* argv[]) {\n  std::default_random_engine gen(0);\n  std::uniform_real_distribution\ndouble\n unif(0.0, 1.0);\n  // Predeclare the storage formats that the inputs and output will be stored as.\n  // To define a format, you must specify whether each dimension is dense or sparse\n  // and (optionally) the order in which dimensions should be stored. The formats\n  // declared below correspond to doubly compressed sparse row (dcsr), row-major\n  // dense (rm), and column-major dense (dm).\n  Format dcsr({Sparse,Sparse});\n  Format rm({Dense,Dense});\n  Format cm({Dense,Dense}, {1,0});\n\n  // Load a sparse matrix from file (stored in the Matrix Market format) and\n  // store it as a doubly compressed sparse row matrix. Matrices correspond to\n  // order-2 tensors in taco. The matrix in this example can be download from:\n  // https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz\n  Tensor\ndouble\n B = read(\"webbase-1M.mtx\", dcsr);\n  // Generate a random dense matrix and store it in row-major (dense) format.\n  Tensor\ndouble\n C({B.getDimension(0), 1000}, rm);\n  for (int i = 0; i \n C.getDimension(0); ++i) {\n    for (int j = 0; j \n C.getDimension(1); ++j) {\n      C.insert({i,j}, unif(gen));\n    }\n  }\n  C.pack();\n\n  // Generate another random dense matrix and store it in column-major format.\n  Tensor\ndouble\n D({1000, B.getDimension(1)}, cm);\n  for (int i = 0; i \n D.getDimension(0); ++i) {\n    for (int j = 0; j \n D.getDimension(1); ++j) {\n      D.insert({i,j}, unif(gen));\n    }\n  }\n  D.pack();\n\n  // Declare the output matrix to be a sparse matrix with the same dimensions as\n  // input matrix B, to be also stored as a doubly compressed sparse row matrix.\n  Tensor\ndouble\n A(B.getDimensions(), dcsr);\n\n  // Define the SDDMM computation using index notation.\n  IndexVar i, j, k;\n  A(i,j) = B(i,j) * C(i,k) * D(k,j);\n\n  // At this point, we have defined how entries in the output matrix should be\n  // computed from entries in the input matrices but have not actually performed\n  // the computation yet. To do so, we must first tell taco to generate code that\n  // can be executed to compute the SDDMM operation.\n  A.compile();\n  // We can now call the functions taco generated to assemble the indices of the\n  // output matrix and then actually compute the SDDMM.\n  A.assemble();\n  A.compute();\n  // Write the output of the computation to file (stored in the Matrix Market format).\n  write(\"A.mtx\", A);\n}\n\n\n\nYou can also use the TACO Python library to perform the same computation, as\ndemonstrated here:\n\n\nimport pytaco as pt\nfrom pytaco import dense, compressed\nimport numpy as np\n\n# Define formats that the inputs and output will be stored as. To define a\n# format, you must specify whether each dimension is dense or sparse and\n# (optionally) the order in which dimensions should be stored. The formats\n# declared below correspond to doubly compressed sparse row (dcsr), row-major\n# dense (rm), and column-major dense (dm).\ndcsr = pt.format([compressed, compressed])\nrm = pt.format([dense, dense])\ncm = pt.format([dense, dense], [1, 0])\n\n# The matrix in this example can be download from:\n# https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz\nB = pt.read(\"webbase-1M.mtx\", dcsr)\n\n# Generate two random matrices using NumPy and pass them into TACO\nx = pt.from_array(np.random.uniform(size=(B.shape[0], 1000)))\nz = pt.from_array(np.random.uniform(size=(1000, B.shape[1])), out_format=cm)\n\n# Declare the result to be a doubly compressed sparse row matrix\nA = pt.tensor(B.shape, dcsr)\n\n# Declare index vars\ni, j, k = pt.get_index_vars(3)\n\n# Define the SDDMM computation\nA[i, j] = B[i, j] * C[i, k] * D[k, j]\n\n# Perform the SDDMM computation and write the result to file\npt.write(\"A.mtx\", A)\n\n\n\nWhen you run the above Python program, TACO will generate code under the hood\nthat efficiently performs the computation in one shot. This lets TACO only \ncompute elements of the intermediate dense matrix product that are actually \nneeded to compute the result, thus reducing the asymptotic complexity of the \ncomputation.\n\n\n[1] Huasha Zhao. 2014. High Performance Machine Learning through Codesign and\nRooflining. Ph.D. Dissertation. EECS Department, University of California,\nBerkeley.",
     "title": "Machine Learning: SDDMM"
 },
 {
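Incidentally, the Python listing embedded in the indexed text above binds the two dense NumPy matrices to x and z but then multiplies C and D, so it would raise a NameError if run verbatim. A minimal sketch of the same pytaco SDDMM with the dense factors consistently named (an illustration, assuming pytaco is installed and webbase-1M.mtx has been downloaded; not part of this commit):

    import pytaco as pt
    from pytaco import dense, compressed
    import numpy as np

    dcsr = pt.format([compressed, compressed])  # doubly compressed sparse row
    cm = pt.format([dense, dense], [1, 0])      # column-major dense

    # Sparse sampling matrix, as in the docs example.
    B = pt.read("webbase-1M.mtx", dcsr)

    # Dense factors, named to match the index expression below.
    C = pt.from_array(np.random.uniform(size=(B.shape[0], 1000)))
    D = pt.from_array(np.random.uniform(size=(1000, B.shape[1])), out_format=cm)

    A = pt.tensor(B.shape, dcsr)                # sparse output, same shape as B
    i, j, k = pt.get_index_vars(3)
    A[i, j] = B[i, j] * C[i, k] * D[k, j]       # fused SDDMM: B o (C D)

    # Writing the result forces the deferred computation to run.
    pt.write("A.mtx", A)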

docs/sitemap.xml (+12 -12)

@@ -4,7 +4,7 @@
 
 <url>
  <loc>/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
@@ -13,19 +13,19 @@
 
 <url>
  <loc>/tensors/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/computations/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/scheduling/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
@@ -35,25 +35,25 @@
 
 <url>
  <loc>/tutorial/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/pytensors/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/pycomputations/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/pyreference/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
@@ -63,19 +63,19 @@
 
 <url>
  <loc>/scientific_computing/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/data_analytics/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
 <url>
  <loc>/machine_learning/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>
 
@@ -84,7 +84,7 @@
 
 <url>
  <loc>/optimization/index.html</loc>
- <lastmod>2020-08-25</lastmod>
+ <lastmod>2021-04-19</lastmod>
  <changefreq>daily</changefreq>
 </url>

documentation/docs/machine_learning.md (+1 -1)

@@ -10,7 +10,7 @@ and \(\circ\) denotes component-wise multiplication. This operation can also be
 expressed in [index
 notation](pycomputations.md#specifying-tensor-algebra-computations) as
 
-$$A_{ij} = B_{ij} \cdot C_{ik} \cdot C_{kj}.$$
+$$A_{ij} = B_{ij} \cdot C_{ik} \cdot D_{kj}.$$
 
 You can use the taco C++ library to easily and efficiently compute the SDDMM, as
 shown here:
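A quick way to sanity-check the corrected formula is to compare a fused index-notation evaluation against the matrix definition A = B \circ (CD) on small random data; a hypothetical NumPy check (not part of the commit):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 8, 6, 4
    B = (rng.random((m, n)) < 0.3) * rng.random((m, n))  # sparse-ish sampling matrix
    C = rng.random((m, r))
    D = rng.random((r, n))

    # Matrix form: B o (C D).
    A_matrix = B * (C @ D)
    # Index form: B_ij * C_ik * D_kj, summed over k.
    A_index = np.einsum("ij,ik,kj->ij", B, C, D)
    assert np.allclose(A_matrix, A_index)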
