Automatic docstrings re-configured output #7
apmoore1 committed Oct 17, 2021
1 parent d066684 commit be7bc8a
Showing 20 changed files with 561 additions and 103 deletions.
123 changes: 108 additions & 15 deletions docs/docs/API/basic_tagger.md
@@ -1,12 +1,21 @@
<div>
<p className="alignleft"><i>pymusas</i><strong>.basic_tagger</strong></p>
<p className="alignright"><a className="sourcelink" href="https://github.com/allenai/allennlp/blob/main/allennlp/basic_tagger.py">[SOURCE]</a></p>
</div>
<div></div>

---
sidebar_label: basic_tagger
title: basic_tagger
---

<a id="pymusas.basic_tagger.load_lexicon"></a>

### load\_lexicon

```python
def load_lexicon(
lexicon_path: Path,
has_headers: bool = True,
include_pos: bool = True
) -> Dict[str, List[str]]
```

**Arguments**:
@@ -18,29 +27,113 @@ def load_lexicon(lexicon_path: Path, has_headers: bool = True, include_pos: bool

- `lexicon_path`: File path to the lexicon data. This data should be in
- `has_headers`: This should be set to True if the first line of the
lexicon file contains no lexicon data. When this is set to True the
first line of the lexicon file is ignored.
- `include_pos`: Whether or not the returned dictionary uses POS
within its key.

**Returns**:

A dictionary whereby the key is a tuple of
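Since the diff collapses most of this docstring, here is an illustrative sketch of what a loader with this signature plausibly does. It is not the pymusas source: the three-column tab-separated layout (lemma, POS, space-separated semantic tags) is an assumption, and the `lemma|pos` key format is inferred from the `'London|noun'` lookup shown later in this commit.

```python
from pathlib import Path
from typing import Dict, List


def load_lexicon_sketch(lexicon_path: Path, has_headers: bool = True,
                        include_pos: bool = True) -> Dict[str, List[str]]:
    # Illustrative re-implementation, not the pymusas source.
    lexicon: Dict[str, List[str]] = {}
    with lexicon_path.open('r', encoding='utf-8') as lexicon_file:
        lines = lexicon_file.readlines()
    if has_headers:
        lines = lines[1:]  # the first line holds no lexicon data
    for line in lines:
        lemma, pos, tags = line.rstrip('\n').split('\t')
        # Key either includes the POS tag or is the lemma alone.
        key = f'{lemma}|{pos}' if include_pos else lemma
        lexicon[key] = tags.split()  # most likely tag first
    return lexicon
```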

<a id="pymusas.basic_tagger.tag_token"></a>

### tag\_token

```python
def tag_token(
text: str,
lemma: str,
pos: str,
lexicon_lookup: Dict[str, List[str]],
lemma_lexicon_lookup: Dict[str, List[str]]
) -> List[str]
```

A description

__Parameters__


- __text __: `str`

__Returns__


`List[str]`
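The parameter descriptions above are mostly collapsed in the diff, so the lookup order in the sketch below is an assumption: try the token text and then the lemma against the POS-keyed lexicon, fall back to the POS-free lemma lexicon, and finally return the USAS unmatched tag `Z99`.

```python
from typing import Dict, List


def tag_token_sketch(text: str, lemma: str, pos: str,
                     lexicon_lookup: Dict[str, List[str]],
                     lemma_lexicon_lookup: Dict[str, List[str]]) -> List[str]:
    # Illustrative sketch only; the real fallback order may differ.
    for key in (f'{text}|{pos}', f'{lemma}|{pos}'):
        if key in lexicon_lookup:
            return lexicon_lookup[key]
    for key in (text, lemma):
        if key in lemma_lexicon_lookup:
            return lemma_lexicon_lookup[key]
    return ['Z99']  # USAS tag for unmatched tokens
```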

<a id="pymusas.basic_tagger.RuleBasedTagger"></a>

## RuleBasedTagger

```python
class RuleBasedTagger:
| ...
| def __init__(lexicon_path: Path, has_headers: bool) -> None
```

__Parameters__


- __lexicon_path __: `Path`
File path to the USAS lexicon.

- __has_headers __: `bool`
Whether the USAS lexicon contains any header information.

__Attributes__


- `lexicon_lookup `: `Dict[str, List[str]]`

- `lexicon_lemma_lookup `: `Dict[str, List[str]]`

<a id="pymusas.basic_tagger.RuleBasedTagger.tag_data"></a>

### tag\_data

```python
class RuleBasedTagger:
| ...
| def tag_data(
| self,
| tokens: List[Tuple[str, str, str]]
| ) -> List[List[str]]
```

Just a bit of a description

__Parameters__


- __tokens __: `List[Tuple[str, str, str]]`
Each tuple represents a token. The tuple must contain the
following linguistic information per token:
1. token text,
2. lemma,
3. Part Of Speech.

__Returns__


`List[List[str]]`
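A minimal sketch of how a `tag_data`-style method plausibly drives a per-token lookup over the `(text, lemma, POS)` triples; the `lemma|pos` key format and the `Z99` default are assumptions, not the pymusas source.

```python
from typing import Dict, List, Tuple


def tag_data_sketch(tokens: List[Tuple[str, str, str]],
                    lexicon_lookup: Dict[str, List[str]]) -> List[List[str]]:
    # One list of candidate semantic tags per input token.
    return [lexicon_lookup.get(f'{lemma}|{pos}', ['Z99'])
            for _text, lemma, pos in tokens]
```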

<a id="pymusas.basic_tagger.Anything"></a>

## Anything

```python
class Anything
```

<a id="pymusas.basic_tagger.Anything.ArrayField"></a>

#### ArrayField

```python
class Anything:
| ...
| ArrayField: Type[DocusaurusRenderer] = DocusaurusRenderer
```

For backwards compatibility, we keep the name `ArrayField`.

13 changes: 9 additions & 4 deletions docs/docs/API/file_utils.md
@@ -1,9 +1,14 @@
<div>
<p className="alignleft"><i>pymusas</i><strong>.file_utils</strong></p>
<p className="alignright"><a className="sourcelink" href="https://github.com/allenai/allennlp/blob/main/allennlp/file_utils.py">[SOURCE]</a></p>
</div>
<div></div>

---
sidebar_label: file_utils
title: file_utils
---

<a id="pymusas.file_utils.download_url_file"></a>

### download\_url\_file

```python
def download_url_file(url: str) -> str
```
84 changes: 62 additions & 22 deletions docs/docs/API/lexicon_collection.md
@@ -1,22 +1,35 @@
<div>
<p className="alignleft"><i>pymusas</i><strong>.lexicon_collection</strong></p>
<p className="alignright"><a className="sourcelink" href="https://github.com/allenai/allennlp/blob/main/allennlp/lexicon_collection.py">[SOURCE]</a></p>
</div>
<div></div>

---
sidebar_label: lexicon_collection
title: lexicon_collection
---

<a id="pymusas.lexicon_collection.LexiconEntry"></a>

## LexiconEntry

```python
@dataclass(init=True, repr=True, eq=True, order=False,
unsafe_hash=False, frozen=True)
class LexiconEntry
```

As `frozen` is True, no values can be assigned after creation of an
instance of this class.
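The frozen behaviour can be demonstrated with a stand-in dataclass using the same decorator arguments; the field names here (`lemma`, `semantic_tags`, `pos`) are assumptions inferred from the usage example further down.

```python
from dataclasses import FrozenInstanceError, dataclass
from typing import List, Optional


@dataclass(init=True, repr=True, eq=True, order=False,
           unsafe_hash=False, frozen=True)
class LexiconEntrySketch:
    # Hypothetical fields, inferred from LexiconEntry('London', [...], 'noun').
    lemma: str
    semantic_tags: List[str]
    pos: Optional[str] = None


entry = LexiconEntrySketch('London', ['Z3', 'Z1', 'A1'], 'noun')
try:
    entry.pos = 'propn'
except FrozenInstanceError:
    pass  # frozen=True rejects any assignment after creation
```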

<a id="pymusas.lexicon_collection.LexiconCollection"></a>

## LexiconCollection

```python
class LexiconCollection(MutableMapping):
| ...
| def __init__(
| self,
| data: Optional[Dict[str, List[str]]] = None
| ) -> None
```

This is a dictionary object that will hold LexiconEntry data in a fast to
@@ -31,34 +44,50 @@ the most likely semantic tag is Z3 and the least likely tag is A1:

```python
from pymusas.lexicon_collection import LexiconEntry, LexiconCollection
lexicon_entry = LexiconEntry('London', ['Z3', 'Z1', 'A1'], 'noun')
collection = LexiconCollection()
collection.add_lexicon_entry(lexicon_entry)
most_likely_tag = collection['London|noun'][0]
least_likely_tag = collection['London|noun'][-1]
```

<a id="pymusas.lexicon_collection.LexiconCollection.__str__"></a>

### \_\_str\_\_

```python
class LexiconCollection(MutableMapping):
| ...
| def __str__() -> str
```

Human readable string.

<a id="pymusas.lexicon_collection.LexiconCollection.__repr__"></a>

### \_\_repr\_\_

```python
class LexiconCollection(MutableMapping):
| ...
| def __repr__() -> str
```

Machine readable string. Running `eval()` over the printed string
should recreate the object.

<a id="pymusas.lexicon_collection.LexiconCollection.add_lexicon_entry"></a>

### add\_lexicon\_entry

```python
class LexiconCollection(MutableMapping):
| ...
| def add_lexicon_entry(
| self,
| value: LexiconEntry,
| include_pos: bool = True
| ) -> None
```

Will add the LexiconEntry to the collection, whereby the key is the
Expand All @@ -75,21 +104,32 @@ If the pos value is None then then only the lemma is used, e.g.:
- `value`: A LexiconEntry.
- `include_pos`: Whether to include the POS tag within the key.
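The key scheme described above can be sketched as a small helper; `make_key` is a hypothetical name for illustration, not part of the pymusas API.

```python
from typing import Optional


def make_key(lemma: str, pos: Optional[str], include_pos: bool = True) -> str:
    # 'lemma|pos' when a POS tag is available and wanted, else the lemma alone.
    if include_pos and pos is not None:
        return f'{lemma}|{pos}'
    return lemma
```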

<a id="pymusas.lexicon_collection.LexiconCollection.to_dictionary"></a>

### to\_dictionary

```python
class LexiconCollection(MutableMapping):
| ...
| def to_dictionary() -> Dict[str, List[str]]
```

**Returns**:

The dictionary object that stores all of the data.

<a id="pymusas.lexicon_collection.LexiconCollection.from_tsv"></a>

### from\_tsv

```python
class LexiconCollection(MutableMapping):
| ...
| @staticmethod
| def from_tsv(
| tsv_file_path: Union[PathLike, str],
| include_pos: bool = True
| ) -> Dict[str, List[str]]
```

If `include_pos` is True and the TSV file does not contain a
9 changes: 0 additions & 9 deletions docs/docs/API/sidebar.json

This file was deleted.

24 changes: 24 additions & 0 deletions docs/docs/API/spacy_tagger.md
@@ -0,0 +1,24 @@
<div>
<p className="alignleft"><i>pymusas</i><strong>.spacy_tagger</strong></p>
<p className="alignright"><a className="sourcelink" href="https://github.com/allenai/allennlp/blob/main/allennlp/spacy_tagger.py">[SOURCE]</a></p>
</div>
<div></div>

---

<a id="pymusas.spacy_tagger.RuleBasedTagger"></a>

## RuleBasedTagger

```python
class RuleBasedTagger:
| ...
| def __init__(
| self,
| nlp: Language,
| lexicon_lookup: Optional[Dict[str, List[str]]] = None,
| lexicon_lemma_lookup: Optional[Dict[str, List[str]]] = None,
| usas_tags_token_attr: str = 'usas_tags'
| ) -> None
```
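Nothing of the component's body survives the diff, so the following is a speculative sketch of how such a wrapper might behave: it writes the matched semantic tags onto each token under the configurable `usas_tags_token_attr` name. A real spaCy component would register that attribute via `Token.set_extension`; plain attribute assignment on duck-typed tokens stands in for it here, and the lookup logic is an assumption.

```python
from typing import Dict, List, Optional


class RuleBasedTaggerSketch:
    # Illustrative stand-in for the spaCy component, not the pymusas source.
    def __init__(self,
                 lexicon_lookup: Optional[Dict[str, List[str]]] = None,
                 lexicon_lemma_lookup: Optional[Dict[str, List[str]]] = None,
                 usas_tags_token_attr: str = 'usas_tags') -> None:
        self.lexicon_lookup = lexicon_lookup or {}
        self.lexicon_lemma_lookup = lexicon_lemma_lookup or {}
        self.usas_tags_token_attr = usas_tags_token_attr

    def __call__(self, doc):
        # doc is any iterable of token objects with .lemma_ and .pos_.
        for token in doc:
            tags = self.lexicon_lookup.get(
                f'{token.lemma_}|{token.pos_}',
                self.lexicon_lemma_lookup.get(token.lemma_, ['Z99']))
            setattr(token, self.usas_tags_token_attr, tags)
        return doc
```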

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
7 changes: 5 additions & 2 deletions docs/docusaurus.config.js
@@ -25,7 +25,9 @@ const config = {
docs: {
sidebarPath: require.resolve('./sidebars.js'),
// Please change this to your repo.
editUrl: 'https://github.com/facebook/docusaurus/edit/main/website/',
editUrl: 'https://github.com/ucrel/pymusas/edit/main/docs/',
showLastUpdateTime: true,
showLastUpdateAuthor: true,
},
blog: {
showReadingTime: true,
Expand All @@ -52,10 +54,11 @@ const config = {
items: [
{
type: 'doc',
docId: 'intro',
docId: 'documentation/intro',
position: 'left',
label: 'Documentation',
},
{to: '/docs/api/basic_tagger', label: 'API', position: 'left'},
{to: '/blog', label: 'Blog', position: 'left'},
{
href: 'https://github.com/ucrel/pymusas',