
Commit 9577d14

Updated documentation intro
1 parent d2a7e23 commit 9577d14

2 files changed: +17 -10 lines changed


docs/index.html

Lines changed: 10 additions & 8 deletions
@@ -57,6 +57,9 @@
 <li class="toctree-l2"><a href="#system-requirements">System Requirements</a></li>
 
 
+<li class="toctree-l2"><a href="#getting-help">Getting Help</a></li>
+
+
 </ul>
 </li>
 
@@ -161,20 +164,19 @@
 computations can be performed on any mix of dense and sparse tensors. Under the
 hood, TACO automatically generates efficient code to perform these
 computations.</p>
-<p>The sidebar to the left links to documentation for the TACO Python library as
-well as some examples demonstrating how TACO can be used in real-world
-applications.</p>
+<p>The sidebar to the left links to documentation for the TACO C++ and Python
+libraries as well as some examples demonstrating how TACO can be used in
+real-world applications.</p>
 <h1 id="system-requirements">System Requirements</h1>
 <ul>
 <li>A C compiler that supports C99, such as GCC or Clang<ul>
 <li>Support for OpenMP is also required if parallel execution is desired</li>
 </ul>
 </li>
-<li>Python 3 with NumPy and SciPy</li>
+<li>Python 3 with NumPy and SciPy (for the Python library)</li>
 </ul>
-<!--# Getting Help
-
-Questions and bug reports can be submitted [here](https://github.com/tensor-compiler/taco/issues).-->
+<h1 id="getting-help">Getting Help</h1>
+<p>Questions and bug reports can be submitted <a href="https://github.com/tensor-compiler/taco/issues">here</a>.</p>
 
 </div>
 </div>
@@ -225,5 +227,5 @@ <h1 id="system-requirements">System Requirements</h1>
 
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2020-06-21 18:49:24
+Build Date UTC : 2020-06-21 19:11:02
 -->
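
Note: the intro paragraph updated above describes computations such as sparse matrix-vector multiplication over any mix of dense and sparse tensors. For reference only, a minimal sketch of that computation with the TACO C++ library might look like the following; it assumes the index-expression API (IndexVar, compile/assemble/compute) documented elsewhere in the TACO docs, and the tensor sizes, non-zero values, and output file name are placeholders.

// Sketch: sparse matrix-vector multiply y = A*x with the TACO C++ library.
#include "taco.h"
using namespace taco;

int main() {
  // Storage formats: compressed sparse row matrix, dense vectors.
  Format csr({Dense, Sparse});
  Format dv({Dense});

  Tensor<double> A("A", {1024, 1024}, csr);
  Tensor<double> x("x", {1024}, dv);
  Tensor<double> y("y", {1024}, dv);

  // Insert a few non-zeros, then pack into the chosen formats.
  A.insert({0, 1}, 2.0);
  A.insert({512, 7}, 3.5);
  x.insert({1}, 4.0);
  x.insert({7}, 1.0);
  A.pack();
  x.pack();

  // Declare y = A * x as an index expression; TACO generates the sparse
  // kernel under the hood.
  IndexVar i, j;
  y(i) = A(i, j) * x(j);

  y.compile();   // generate and compile the kernel
  y.assemble();  // allocate output storage
  y.compute();   // run the computation

  write("y.tns", y);  // write the result to a FROSTT .tns file
  return 0;
}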

docs/search/search_index.json

Lines changed: 7 additions & 2 deletions
@@ -2,14 +2,19 @@
 "docs": [
 {
 "location": "/index.html",
-"text": "TACO\n is a library for performing sparse and\ndense linear algebra and tensor algebra computations. The computations can\nrange from relatively simple ones like sparse matrix-vector multiplication to\nmore complex ones like matricized tensor times Khatri-Rao product. All these\ncomputations can be performed on any mix of dense and sparse tensors. Under the\nhood, TACO automatically generates efficient code to perform these\ncomputations.\n\n\nThe sidebar to the left links to documentation for the TACO Python library as\nwell as some examples demonstrating how TACO can be used in real-world\napplications.\n\n\nSystem Requirements\n\n\n\n\nA C compiler that supports C99, such as GCC or Clang\n\n\nSupport for OpenMP is also required if parallel execution is desired\n\n\n\n\n\n\nPython 3 with NumPy and SciPy",
+"text": "TACO\n is a library for performing sparse and\ndense linear algebra and tensor algebra computations. The computations can\nrange from relatively simple ones like sparse matrix-vector multiplication to\nmore complex ones like matricized tensor times Khatri-Rao product. All these\ncomputations can be performed on any mix of dense and sparse tensors. Under the\nhood, TACO automatically generates efficient code to perform these\ncomputations.\n\n\nThe sidebar to the left links to documentation for the TACO C++ and Python\nlibraries as well as some examples demonstrating how TACO can be used in\nreal-world applications.\n\n\nSystem Requirements\n\n\n\n\nA C compiler that supports C99, such as GCC or Clang\n\n\nSupport for OpenMP is also required if parallel execution is desired\n\n\n\n\n\n\nPython 3 with NumPy and SciPy (for the Python library)\n\n\n\n\nGetting Help\n\n\nQuestions and bug reports can be submitted \nhere\n.",
 "title": "Home"
 },
 {
 "location": "/index.html#system-requirements",
-"text": "A C compiler that supports C99, such as GCC or Clang Support for OpenMP is also required if parallel execution is desired Python 3 with NumPy and SciPy",
+"text": "A C compiler that supports C99, such as GCC or Clang Support for OpenMP is also required if parallel execution is desired Python 3 with NumPy and SciPy (for the Python library)",
 "title": "System Requirements"
 },
+{
+"location": "/index.html#getting-help",
+"text": "Questions and bug reports can be submitted here .",
+"title": "Getting Help"
+},
 {
 "location": "/tensors/index.html",
 "text": "Declaring Tensors\n\n\ntaco::Tensor\n objects, which correspond to mathematical tensors, form the core of the taco C++ library. You can declare a new tensor by specifying its name, a vector containing the size of each dimension of the tensor, and the \nstorage format\n that will be used to store the tensor:\n\n\n// Declare a new tensor \"A\" of double-precision floats with dimensions \n// 512 x 64 x 2048, stored as a dense-sparse-sparse tensor\nTensor\ndouble\n A(\"A\", {512,64,2048}, Format({Dense,Sparse,Sparse}));\n\n\n\nThe name of the tensor can be omitted, in which case taco will assign an arbitrary name to the tensor:\n\n\n// Declare another tensor with the same dimensions and storage format as before\nTensor\ndouble\n A({512,64,2048}, Format({Dense,Sparse,Sparse}));\n\n\n\nScalars, which are treated as order-0 tensors, can be declared and initialized with some arbitrary value as demonstrated below:\n\n\nTensor\ndouble\n alpha(42.0); // Declare a scalar tensor initialized to 42.0\n\n\n\nDefining Tensor Formats\n\n\nConceptually, you can think of a tensor as a tree with each level (excluding the root) corresponding to a dimension of the tensor. Each path from the root to a leaf node represents a tensor coordinate and its corresponding value. Which dimension each level of the tree corresponds to is determined by the order in which dimensions of the tensor are stored.\n\n\ntaco uses a novel scheme that can describe different storage formats for any tensor by specifying the order in which tensor dimensions are stored and whether each dimension is sparse or dense. A sparse dimension stores only the subset of the dimension that contains non-zero values and is conceptually similar to the index arrays used in the compressed sparse row (CSR) matrix format, while a dense dimension stores both zeros and non-zeros. As demonstrated below, this scheme is flexibile enough to express many commonly-used matrix storage formats.\n\n\nYou can define a new tensor storage format by creating a \ntaco::Format\n object. The constructor for \ntaco::Format\n takes as arguments a vector specifying the type of each dimension and (optionally) a vector specifying the order in which dimensions are to be stored, following the above scheme:\n\nFormat dm({Dense,Dense}); // (Row-major) dense matrix\nFormat csr({Dense,Sparse}); // Compressed sparse row matrix\nFormat csc({Dense,Sparse}, {1,0}); // Compressed sparse column matrix\nFormat dcsr({Sparse,Sparse}, {1,0}); // Doubly compressed sparse column matrix\n\n\nAlternatively, you can define a tensor format that contains only sparse or dense dimensions as follows:\n\n\nFormat csf(Sparse); // Compressed sparse fiber tensor\n\n\n\nInitializing Tensors\n\n\nYou can initialize a \ntaco::Tensor\n by calling the \ninsert\n method to add a non-zero component to the tensor. The \ninsert\n method takes two arguments, a vector specifying the coordinate of the non-zero component to be added and the value to be inserted at that coordinate:\n\n\nA.insert({128,32,1024}, 42.0); // A(128,32,1024) = 42.0\n\n\n\nThe \ninsert\n method adds the inserted non-zeros to a temporary buffer. Before a tensor can actually be used in a computation though, you must invoke the \npack\n method to compress the tensor into the storage format that was specified when the tensor was first declared:\n\n\nA.pack(); // Construct dense-sparse-sparse tensor containing inserted non-zeros\n\n\n\nLoading Tensors from File\n\n\nRather than manually invoking \ninsert\n and \npack\n to initialize a tensor, you can load tensors directly from file by calling \ntaco::read\n as demonstrated below:\n\n\n// Load a dense-sparse-sparse tensor from file A.tns\nA = read(\"A.tns\", Format({Dense, Sparse, Sparse}));\n\n\n\nBy default, \ntaco::read\n returns a packed tensor. You can optionally pass a Boolean flag as an argument to indicate whether the returned tensor should be packed or not:\n\n\n// Load an unpacked tensor from file A.tns\nA = read(\"A.tns\", Format({Dense, Sparse, Sparse}), false);\n\n\n\nCurrently, taco supports loading from the following matrix and tensor file formats:\n\n\n\n\nMatrix Market (Coordinate) Format (.mtx)\n\n\nRutherford-Boeing Format (.rb)\n\n\nFROSTT Format (.tns)\n\n\n\n\nWriting Tensors to File\n\n\nYou can also write a (packed) tensor directly to file by calling \ntaco::write\n, as demonstrated below:\n\n\nwrite(\"A.tns\", A); // Write tensor A to file A.tns\n\n\n\ntaco::write\n supports the same set of matrix and tensor file formats as \ntaco::read\n.",
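
Note: the unchanged /tensors/index.html entry above carries the C++ snippets from the tensors page with their markup stripped by the search indexer (for example, Tensor\ndouble\n stands for Tensor<double>). For readability only, here is a short sketch of the declare / insert / pack / write / read sequence that text describes, using the tensor name, dimensions, and file name from its examples.

// Sketch of the declare / insert / pack / write / read sequence described above.
#include "taco.h"
using namespace taco;

int main() {
  // Declare a 512 x 64 x 2048 tensor of doubles stored dense-sparse-sparse.
  Tensor<double> A("A", {512, 64, 2048}, Format({Dense, Sparse, Sparse}));

  // Insert a non-zero, then pack into the declared storage format.
  A.insert({128, 32, 1024}, 42.0);  // A(128,32,1024) = 42.0
  A.pack();

  // Write the packed tensor to a FROSTT-format file, then load it back
  // (taco::read returns a packed tensor by default).
  write("A.tns", A);
  A = read("A.tns", Format({Dense, Sparse, Sparse}));

  return 0;
}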
