Commit 8fd755a

Merge pull request #3625 from co63oc/fix1
fix typos in docs and comments
2 parents a538403 + abb6177

6 files changed: +7 -7 lines changed


docs/tutorial/tutorial-basics/part-of-speech-tagging.md

+2 -2

@@ -1,6 +1,6 @@
 # Tagging parts-of-speech
 
-This tutorials shows you how to do part-of-speech tagging in Flair, showcases univeral and language-specific models, and gives a list of all PoS models in Flair.
+This tutorials shows you how to do part-of-speech tagging in Flair, showcases universal and language-specific models, and gives a list of all PoS models in Flair.
 
 ## Language-specific parts-of-speech (PoS)
 
@@ -111,7 +111,7 @@ Universal parts-of-speech are a set of minimal syntactic units that exist across
 will have VERBs or NOUNs.
 
 
-We ship models trained over 14 langages to tag upos in **multilingual text**. Use like this:
+We ship models trained over 14 languages to tag upos in **multilingual text**. Use like this:
 
 ```python
 from flair.nn import Classifier
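
(The tutorial snippet above is cut off by the diff context. For orientation, a minimal sketch of how such a multilingual model is typically loaded through Flair's `Classifier` API; the model identifier 'pos-multi' and the example sentence are assumptions, not part of this commit.)

```python
from flair.nn import Classifier
from flair.data import Sentence

# load a multilingual PoS model ('pos-multi' is an assumed identifier;
# check Flair's model list for the exact name)
tagger = Classifier.load('pos-multi')

# a sentence mixing English and German words
sentence = Sentence('George Washington ging nach Washington.')

# predict PoS tags and print the tagged sentence
tagger.predict(sentence)
print(sentence)
```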

docs/tutorial/tutorial-embeddings/flair-embeddings.md

+1 -1

@@ -120,7 +120,7 @@ Words are now embedded using a concatenation of three different embeddings. This
 
 We also developed a pooled variant of the [`FlairEmbeddings`](#flair.embeddings.token.FlairEmbeddings). These embeddings differ in that they *constantly evolve over time*, even at prediction time (i.e. after training is complete). This means that the same words in the same sentence at two different points in time may have different embeddings.
 
-[`PooledFlairEmbeddings`](#flair.embeddings.token.PooledFlairEmbeddings) manage a 'global' representation of each distinct word by using a pooling operation of all past occurences. More details on how this works may be found in [Akbik et al. (2019)](https://www.aclweb.org/anthology/N19-1078/).
+[`PooledFlairEmbeddings`](#flair.embeddings.token.PooledFlairEmbeddings) manage a 'global' representation of each distinct word by using a pooling operation of all past occurrences. More details on how this works may be found in [Akbik et al. (2019)](https://www.aclweb.org/anthology/N19-1078/).
 
 You can instantiate and use [`PooledFlairEmbeddings`](#flair.embeddings.token.PooledFlairEmbeddings) like [`FlairEmbeddings`](#flair.embeddings.token.FlairEmbeddings):
 
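(The instantiation that the last context line refers to sits outside this hunk. A hedged sketch, assuming the usual Flair pattern: `PooledFlairEmbeddings` is constructed like `FlairEmbeddings`, from a contextual string model name; the 'news-forward' choice here is illustrative.)

```python
from flair.data import Sentence
from flair.embeddings import PooledFlairEmbeddings

# instantiate the pooled variant exactly like FlairEmbeddings
embedding = PooledFlairEmbeddings('news-forward')

# embed a sentence; the pooled 'global' word representations are
# updated from all occurrences seen so far, even at prediction time
sentence = Sentence('The grass is green .')
embedding.embed(sentence)
```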

docs/tutorial/tutorial-training/how-to-train-span-classifier.md

+1 -1

@@ -7,7 +7,7 @@ This tutorial section show you how to train models using the [Span Classifier](#
 
 ## Training an entity linker (NEL) model with transformers
 
-For a state-of-the-art NER sytem you should fine-tune transformer embeddings, and use full document context
+For a state-of-the-art NER system you should fine-tune transformer embeddings, and use full document context
 (see our [FLERT](https://arxiv.org/abs/2011.06993) paper for details).
 
 Use the following script:
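
(The script itself lies outside this hunk. As a hedged sketch of what "fine-tune transformer embeddings with full document context" amounts to in Flair, in the spirit of the FLERT setup; the concrete model name and parameter values are assumptions.)

```python
from flair.embeddings import TransformerWordEmbeddings

# fine-tunable transformer embeddings with document-level context
embeddings = TransformerWordEmbeddings(
    model='xlm-roberta-large',  # assumed model choice
    layers='-1',                # use only the last transformer layer
    subtoken_pooling='first',   # represent each word by its first subtoken
    fine_tune=True,             # update transformer weights during training
    use_context=True,           # FLERT: include surrounding sentences as context
)
```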

examples/ner/README.md

+1 -1

@@ -42,7 +42,7 @@ To use the recently introduced [FLERT](https://arxiv.org/abs/2011.06993) the fol
 # Example
 
 The following example shows how to fine-tune a model for the recently released [Masakhane](https://arxiv.org/abs/2103.11811) dataset for
-the Luo language. We choose XLM-RoBERTa Base for fine-tuning. In this example, the best model (choosen on performance on development set)
+the Luo language. We choose XLM-RoBERTa Base for fine-tuning. In this example, the best model (chosen on performance on development set)
 is used for final evaluation on the test set.
 
 ## Choosing the dataset
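
(A rough sketch of the fine-tuning setup the example paragraph above describes; the `NER_MASAKHANE` loader, output path, and hyperparameters are assumptions, not taken from this repository's script.)

```python
from flair.datasets import NER_MASAKHANE  # assumed corpus loader name
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# load the Luo split of the Masakhane NER corpus
corpus = NER_MASAKHANE(languages='luo')
label_dict = corpus.make_label_dictionary(label_type='ner')

# XLM-RoBERTa Base embeddings, fine-tuned during training
embeddings = TransformerWordEmbeddings(model='xlm-roberta-base', fine_tune=True)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type='ner',
)

# fine-tune; the trained model is then evaluated on the test set
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune('resources/taggers/masakhane-luo', mini_batch_size=4)
```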

flair/models/entity_mention_linking.py

+1 -1

@@ -233,7 +233,7 @@ def process_entity_name(self, entity_name: str) -> str:
 
         entity_name = entity_name.strip()
 
-        # NOTE: Avoid emtpy string if mentions are just punctutations (e.g. `-` or `(`)
+        # NOTE: Avoid empty string if mentions are just punctuations (e.g. `-` or `(`)
         entity_name = original if len(entity_name) == 0 else entity_name
 
         return entity_name

flair/training_utils.py

+1 -1

@@ -155,7 +155,7 @@ def _init_weights_index(self, key, state_dict, weights_to_watch):
 
 
 class AnnealOnPlateau:
-    """A learningrate sheduler for annealing on plateau.
+    """A learningrate scheduler for annealing on plateau.
 
     This class is a modification of
     torch.optim.lr_scheduler.ReduceLROnPlateau that enables
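
(For context on the class being documented: a minimal sketch of how a ReduceLROnPlateau-style scheduler is typically driven; the constructor arguments shown are assumptions based on the torch scheduler this class modifies.)

```python
import torch
from flair.training_utils import AnnealOnPlateau

# toy model and optimizer, just to drive the scheduler
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# halve the learning rate after 3 epochs without improvement
# (factor/patience follow ReduceLROnPlateau; assumed to match)
scheduler = AnnealOnPlateau(optimizer, factor=0.5, patience=3)

for epoch in range(10):
    dev_loss = 1.0  # placeholder for the real validation loss
    scheduler.step(dev_loss)  # anneals once the metric plateaus
```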
