Commit aad76b3: Fixed documentation examples
1 parent 9139799 commit aad76b3

File tree: 9 files changed, +418 -27 lines changed

docs/data_analytics/index.html (+66 -2)

@@ -170,8 +170,72 @@
 <p>
 <script type="math/tex; mode=display">A_{ij} = B_{ikl} \cdot D_{lj} \cdot C_{kj}.</script>
 </p>
-<p>You can use the TACO Python library to easily and efficiently compute MTTKRP,
-as shown here:</p>
+<p>You can use the TACO C++ library to easily and efficiently compute the MTTKRP,
+as shown here:
+<pre class="highlight"><code class="language-c++">// On Linux and MacOS, you can compile and run this program like so:
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib mttkrp.cpp -o mttkrp -ltaco
+// LD_LIBRARY_PATH=../../build/lib ./mttkrp
+#include &lt;random&gt;
+#include "taco.h"
+using namespace taco;
+int main(int argc, char* argv[]) {
+  std::default_random_engine gen(0);
+  std::uniform_real_distribution&lt;double&gt; unif(0.0, 1.0);
+  // Predeclare the storage formats that the inputs and output will be stored as.
+  // To define a format, you must specify whether each dimension is dense or
+  // sparse and (optionally) the order in which dimensions should be stored. The
+  // formats declared below correspond to compressed sparse fiber (csf) and
+  // row-major dense (rm).
+  Format csf({Sparse,Sparse,Sparse});
+  Format rm({Dense,Dense});
+
+  // Load a sparse order-3 tensor from file (stored in the FROSTT format) and
+  // store it as a compressed sparse fiber tensor. The tensor in this example
+  // can be downloaded from: http://frostt.io/tensors/nell-2/
+  Tensor&lt;double&gt; B = read("nell-2.tns", csf);
+  // Generate a random dense matrix and store it in row-major (dense) format.
+  // Matrices correspond to order-2 tensors in taco.
+  Tensor&lt;double&gt; C({B.getDimension(1), 25}, rm);
+  for (int i = 0; i &lt; C.getDimension(0); ++i) {
+    for (int j = 0; j &lt; C.getDimension(1); ++j) {
+      C.insert({i,j}, unif(gen));
+    }
+  }
+  C.pack();
+
+  // Generate another random dense matrix and store it in row-major format.
+  Tensor&lt;double&gt; D({B.getDimension(2), 25}, rm);
+  for (int i = 0; i &lt; D.getDimension(0); ++i) {
+    for (int j = 0; j &lt; D.getDimension(1); ++j) {
+      D.insert({i,j}, unif(gen));
+    }
+  }
+  D.pack();
+
+  // Declare the output matrix to be a dense matrix with 25 columns and the same
+  // number of rows as the number of slices along the first dimension of input
+  // tensor B, to be also stored as a row-major dense matrix.
+  Tensor&lt;double&gt; A({B.getDimension(0), 25}, rm);
+
+  // Define the MTTKRP computation using index notation.
+  IndexVar i, j, k, l;
+  A(i,j) = B(i,k,l) * D(l,j) * C(k,j);
+  // At this point, we have defined how entries in the output matrix should be
+  // computed from entries in the input tensor and matrices but have not actually
+  // performed the computation yet. To do so, we must first tell taco to generate
+  // code that can be executed to compute the MTTKRP operation.
+  A.compile();
+  // We can now call the functions taco generated to assemble the indices of the
+  // output matrix and then actually compute the MTTKRP.
+  A.assemble();
+  A.compute();
+  // Write the output of the computation to file (stored in the FROSTT format).
+  write("A.tns", A);
+}</code></pre></p>
+<p>You can also use the TACO Python library to perform the same computation, as
+demonstrated here:</p>
 <pre class="highlight"><code class="language-python">import pytaco as pt
 import numpy as np
 from pytaco import compressed, dense
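As a sanity check on the MTTKRP index expression above (A_ij = B_ikl · D_lj · C_kj), the same computation can be reproduced on small dense inputs with NumPy's einsum. This is an illustrative sketch with assumed toy shapes, not part of the TACO API:

```python
import numpy as np

# Small dense stand-ins for the tensors in the example above (shapes assumed).
rng = np.random.default_rng(0)
B = rng.random((4, 3, 5))   # order-3 tensor B_ikl
C = rng.random((3, 2))      # matrix C_kj
D = rng.random((5, 2))      # matrix D_lj

# A_ij = sum over k and l of B_ikl * D_lj * C_kj
A = np.einsum("ikl,lj,kj->ij", B, D, C)

# Reference: explicit loops over the same index expression.
A_ref = np.zeros((4, 2))
for i in range(4):
    for j in range(2):
        for k in range(3):
            for l in range(5):
                A_ref[i, j] += B[i, k, l] * D[l, j] * C[k, j]
assert np.allclose(A, A_ref)
```

The einsum subscript string mirrors the index notation `A(i,j) = B(i,k,l) * D(l,j) * C(k,j)` used in both the C++ and Python TACO examples.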

docs/index.html (+1 -1)

@@ -227,5 +227,5 @@ <h1 id="getting-help">Getting Help</h1>
 
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2020-06-21 19:11:02
+Build Date UTC : 2020-07-26 01:30:52
 -->

docs/machine_learning/index.html (+65 -1)

@@ -169,8 +169,72 @@
 <p>
 <script type="math/tex; mode=display">A_{ij} = B_{ij} \cdot C_{ik} \cdot D_{kj}.</script>
 </p>
-<p>You can use the TACO Python library to easily and efficiently compute SDDMM, as
+<p>You can use the TACO C++ library to easily and efficiently compute the SDDMM, as
 shown here:</p>
+<pre class="highlight"><code class="language-c++">// On Linux and MacOS, you can compile and run this program like so:
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib sddmm.cpp -o sddmm -ltaco
+// LD_LIBRARY_PATH=../../build/lib ./sddmm
+#include &lt;random&gt;
+#include "taco.h"
+using namespace taco;
+int main(int argc, char* argv[]) {
+  std::default_random_engine gen(0);
+  std::uniform_real_distribution&lt;double&gt; unif(0.0, 1.0);
+  // Predeclare the storage formats that the inputs and output will be stored as.
+  // To define a format, you must specify whether each dimension is dense or sparse
+  // and (optionally) the order in which dimensions should be stored. The formats
+  // declared below correspond to doubly compressed sparse row (dcsr), row-major
+  // dense (rm), and column-major dense (cm).
+  Format dcsr({Sparse,Sparse});
+  Format rm({Dense,Dense});
+  Format cm({Dense,Dense}, {1,0});
+
+  // Load a sparse matrix from file (stored in the Matrix Market format) and
+  // store it as a doubly compressed sparse row matrix. Matrices correspond to
+  // order-2 tensors in taco. The matrix in this example can be downloaded from:
+  // https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz
+  Tensor&lt;double&gt; B = read("webbase-1M.mtx", dcsr);
+  // Generate a random dense matrix and store it in row-major (dense) format.
+  Tensor&lt;double&gt; C({B.getDimension(0), 1000}, rm);
+  for (int i = 0; i &lt; C.getDimension(0); ++i) {
+    for (int j = 0; j &lt; C.getDimension(1); ++j) {
+      C.insert({i,j}, unif(gen));
+    }
+  }
+  C.pack();
+
+  // Generate another random dense matrix and store it in column-major format.
+  Tensor&lt;double&gt; D({1000, B.getDimension(1)}, cm);
+  for (int i = 0; i &lt; D.getDimension(0); ++i) {
+    for (int j = 0; j &lt; D.getDimension(1); ++j) {
+      D.insert({i,j}, unif(gen));
+    }
+  }
+  D.pack();
+
+  // Declare the output matrix to be a sparse matrix with the same dimensions as
+  // input matrix B, to be also stored as a doubly compressed sparse row matrix.
+  Tensor&lt;double&gt; A(B.getDimensions(), dcsr);
+
+  // Define the SDDMM computation using index notation.
+  IndexVar i, j, k;
+  A(i,j) = B(i,j) * C(i,k) * D(k,j);
+
+  // At this point, we have defined how entries in the output matrix should be
+  // computed from entries in the input matrices but have not actually performed
+  // the computation yet. To do so, we must first tell taco to generate code that
+  // can be executed to compute the SDDMM operation.
+  A.compile();
+  // We can now call the functions taco generated to assemble the indices of the
+  // output matrix and then actually compute the SDDMM.
+  A.assemble();
+  A.compute();
+  // Write the output of the computation to file (stored in the Matrix Market format).
+  write("A.mtx", A);
+}</code></pre>
+
+<p>You can also use the TACO Python library to perform the same computation, as
+demonstrated here:</p>
 <pre class="highlight"><code class="language-python">import pytaco as pt
 from pytaco import dense, compressed
 import numpy as np
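The SDDMM kernel above is an elementwise product of the sparse matrix B with the dense product C·D. A minimal dense NumPy sketch of the same index expression, with assumed toy shapes (not the webbase-1M matrix used in the example):

```python
import numpy as np

# Toy dense stand-ins for the matrices in the example above (shapes assumed).
rng = np.random.default_rng(0)
B = rng.random((4, 5))
B[B < 0.5] = 0.0          # zero out entries so B acts like a sparse sampling mask
C = rng.random((4, 3))    # C_ik
D = rng.random((3, 5))    # D_kj

# A_ij = B_ij * (sum over k of C_ik * D_kj), i.e. B sampled against C @ D
A = B * (C @ D)

# The einsum form matches the index notation A(i,j) = B(i,j) * C(i,k) * D(k,j).
assert np.allclose(A, np.einsum("ij,ik,kj->ij", B, C, D))
```

This also makes explicit why the output A is sparse: wherever B is zero, A is zero, so only the nonzero pattern of B ever needs to be computed.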

docs/scientific_computing/index.html (+66 -3)

@@ -161,14 +161,77 @@
 <p>
 <script type="math/tex; mode=display">y = Ax + z,</script>
 </p>
-<p>where <script type="math/tex">A</script> is a sparse matrix and <script type="math/tex">x</script>, <script type="math/tex">y</script>, and <script type="math/tex">z</script>
-are dense vectors. The computation can also be expressed in <a href="../pycomputations/index.html#specifying-tensor-algebra-computations">index
+<p>where <script type="math/tex">A</script> is a sparse matrix and <script type="math/tex">x</script>, <script type="math/tex">y</script>, and <script type="math/tex">z</script> are dense vectors.
+The computation can also be expressed in <a href="../pycomputations/index.html#specifying-tensor-algebra-computations">index
 notation</a> as </p>
 <p>
 <script type="math/tex; mode=display">y_i = A_{ij} \cdot x_j + z_i.</script>
 </p>
-<p>You can use the TACO Python library to easily and efficiently compute SpMV, as
+<p>You can use the TACO C++ library to easily and efficiently compute SpMV, as
 shown here:</p>
+<pre class="highlight"><code class="language-c++">// On Linux and MacOS, you can compile and run this program like so:
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib spmv.cpp -o spmv -ltaco
+// LD_LIBRARY_PATH=../../build/lib ./spmv
+#include &lt;random&gt;
+#include "taco.h"
+using namespace taco;
+int main(int argc, char* argv[]) {
+  std::default_random_engine gen(0);
+  std::uniform_real_distribution&lt;double&gt; unif(0.0, 1.0);
+  // Predeclare the storage formats that the inputs and output will be stored as.
+  // To define a format, you must specify whether each dimension is dense or sparse
+  // and (optionally) the order in which dimensions should be stored. The formats
+  // declared below correspond to compressed sparse row (csr) and dense vector (dv).
+  Format csr({Dense,Sparse});
+  Format dv({Dense});
+
+  // Load a sparse matrix from file (stored in the Matrix Market format) and
+  // store it as a compressed sparse row matrix. Matrices correspond to order-2
+  // tensors in taco. The matrix in this example can be downloaded from:
+  // https://www.cise.ufl.edu/research/sparse/MM/Boeing/pwtk.tar.gz
+  Tensor&lt;double&gt; A = read("pwtk.mtx", csr);
+
+  // Generate a random dense vector and store it in the dense vector format.
+  // Vectors correspond to order-1 tensors in taco.
+  Tensor&lt;double&gt; x({A.getDimension(1)}, dv);
+  for (int i = 0; i &lt; x.getDimension(0); ++i) {
+    x.insert({i}, unif(gen));
+  }
+  x.pack();
+
+  // Generate another random dense vector and store it in the dense vector format.
+  Tensor&lt;double&gt; z({A.getDimension(0)}, dv);
+  for (int i = 0; i &lt; z.getDimension(0); ++i) {
+    z.insert({i}, unif(gen));
+  }
+  z.pack();
+
+  // Declare and initialize the scaling factors in the SpMV computation.
+  // Scalars correspond to order-0 tensors in taco.
+  Tensor&lt;double&gt; alpha(42.0);
+  Tensor&lt;double&gt; beta(33.0);
+
+  // Declare the output to be a dense vector with the same number of rows as
+  // input matrix A, stored in the dense vector format.
+  Tensor&lt;double&gt; y({A.getDimension(0)}, dv);
+  // Define the SpMV computation using index notation.
+  IndexVar i, j;
+  y(i) = alpha() * (A(i,j) * x(j)) + beta() * z(i);
+  // At this point, we have defined how entries in the output vector should be
+  // computed from entries in the input matrix and vectors but have not actually
+  // performed the computation yet. To do so, we must first tell taco to generate
+  // code that can be executed to compute the SpMV operation.
+  y.compile();
+  // We can now call the functions taco generated to assemble the indices of the
+  // output vector and then actually compute the SpMV.
+  y.assemble();
+  y.compute();
+  // Write the output of the computation to file (stored in the FROSTT format).
+  write("y.tns", y);
+}</code></pre>
+
+<p>You can also use the TACO Python library to perform the same computation, as
+demonstrated here:</p>
 <pre class="highlight"><code class="language-python">import pytaco as pt
 from pytaco import compressed, dense
 import numpy as np
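The scaled SpMV `y(i) = alpha() * (A(i,j) * x(j)) + beta() * z(i)` computed by the program above can be cross-checked densely with NumPy. A small sketch with assumed toy sizes in place of the pwtk matrix:

```python
import numpy as np

# Toy dense stand-ins for the operands in the example above (sizes assumed).
rng = np.random.default_rng(0)
A = rng.random((4, 5))
x = rng.random(5)
z = rng.random(4)
alpha, beta = 42.0, 33.0  # same scaling factors as the C++ example

# y_i = alpha * (sum over j of A_ij * x_j) + beta * z_i
y = alpha * (A @ x) + beta * z

# Reference: explicit loops over the same index expression.
y_ref = np.array([alpha * sum(A[i, j] * x[j] for j in range(5)) + beta * z[i]
                  for i in range(4)])
assert np.allclose(y, y_ref)
```

With alpha = 1 and beta = 1 this reduces to the y = Ax + z form given at the top of the section.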
