Commit 664d7a2

gwulfs authored and martinwicke committed
change link to reflect v2 (and future version) (tensorflow#271)
1 parent f9cea14 commit 664d7a2

File tree

1 file changed, +5 -5 lines changed

syntaxnet/README.md

Lines changed: 5 additions & 5 deletions
@@ -1,7 +1,7 @@
 # SyntaxNet: Neural Models of Syntax.
 
 *A TensorFlow implementation of the models described in [Andor et al. (2016)]
-(http://arxiv.org/pdf/1603.06042v1.pdf).*
+(http://arxiv.org/abs/1603.06042).*
 
 **Update**: Parsey models are now [available](universal.md) for 40 languages
 trained on Universal Dependencies datasets, with support for text segmentation
@@ -29,13 +29,13 @@ Model
 [Martins et al. (2013)](http://www.cs.cmu.edu/~ark/TurboParser/) | 93.10 | 88.23 | 94.21
 [Zhang and McDonald (2014)](http://research.google.com/pubs/archive/38148.pdf) | 93.32 | 88.65 | 93.37
 [Weiss et al. (2015)](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43800.pdf) | 93.91 | 89.29 | 94.17
-[Andor et al. (2016)](http://arxiv.org/pdf/1603.06042v1.pdf)* | 94.44 | 90.17 | 95.40
+[Andor et al. (2016)](http://arxiv.org/abs/1603.06042)* | 94.44 | 90.17 | 95.40
 Parsey McParseface | 94.15 | 89.08 | 94.77
 
 We see that Parsey McParseface is state-of-the-art; more importantly, with
 SyntaxNet you can train larger networks with more hidden units and bigger beam
 sizes if you want to push the accuracy even further: [Andor et al. (2016)]
-(http://arxiv.org/pdf/1603.06042v1.pdf)* is simply a SyntaxNet model with a
+(http://arxiv.org/abs/1603.06042)* is simply a SyntaxNet model with a
 larger beam and network. For further information on the datasets, see that paper
 under the section "Treebank Union".
 
@@ -45,7 +45,7 @@ Parsey McParseface is also state-of-the-art for part-of-speech (POS) tagging
 Model | News | Web | Questions
 -------------------------------------------------------------------------- | :---: | :---: | :-------:
 [Ling et al. (2015)](http://www.cs.cmu.edu/~lingwang/papers/emnlp2015.pdf) | 97.78 | 94.03 | 96.18
-[Andor et al. (2016)](http://arxiv.org/pdf/1603.06042v1.pdf)* | 97.77 | 94.80 | 96.86
+[Andor et al. (2016)](http://arxiv.org/abs/1603.06042)* | 97.77 | 94.80 | 96.86
 Parsey McParseface | 97.52 | 94.24 | 96.45
 
 The first part of this tutorial describes how to install the necessary tools and
@@ -475,7 +475,7 @@ predicts the next action to take.
 
 ### Training a Parser Step 1: Local Pretraining
 
-As described in our [paper](http://arxiv.org/pdf/1603.06042v1.pdf), the first
+As described in our [paper](http://arxiv.org/abs/1603.06042), the first
 step in training the model is to *pre-train* using *local* decisions. In this
 phase, we use the gold dependency to guide the parser, and train a softmax layer
 to predict the correct action given these gold dependencies. This can be
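The paragraph touched by this last hunk describes local pretraining: a softmax layer is trained to predict the gold transition action at each parser state. Below is a minimal NumPy sketch of that idea, not SyntaxNet's actual code: the feature vector, the gold-oracle action, and the dimensions (`num_features`, `num_actions`) are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in sizes; SyntaxNet's real feature and action spaces differ.
num_features, num_actions, num_steps = 64, 93, 1000

# Softmax layer parameters and SGD learning rate.
W = rng.normal(0.0, 0.01, size=(num_features, num_actions))
b = np.zeros(num_actions)
lr = 0.1

for step in range(num_steps):
    # Placeholder for features extracted from the current parser state.
    x = rng.normal(size=num_features)
    # Placeholder for the oracle: the action the gold dependency tree dictates.
    gold = rng.integers(num_actions)

    # Numerically stable softmax over actions.
    logits = x @ W + b
    p = np.exp(logits - logits.max())
    p /= p.sum()

    # Cross-entropy gradient w.r.t. the logits is (p - one_hot(gold)).
    grad = p.copy()
    grad[gold] -= 1.0

    # Local (per-decision) SGD update of the softmax layer only.
    W -= lr * np.outer(x, grad)
    b -= lr * grad
```

In the full model the gradient would also flow into the embeddings and hidden layers; the sketch updates only the final softmax to keep the local, per-decision objective visible.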
