
Commit 68cc30a

some small changes and reorgs
1 parent: acc8051


3 files changed: +5 −5 lines changed


docs/day2/patterns/attention.rst

+2 −2
@@ -1,5 +1,5 @@
-Attention
-=========
+Design Pattern: Attention
+=========================
 
 Attention is a useful pattern when you want to take a collection of vectors---whether a sequence of vectors representing a sequence of words, or an unordered collection of vectors representing a collection of attributes---and summarize them into a single vector. This is analogous to the CBOW examples we saw on Day 1, but instead of simply averaging or using max pooling, we learn a function that computes a weight for each of the vectors before summing them together.
 
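The learned weighted sum described in that paragraph can be sketched in PyTorch as follows. This is a minimal illustration, not the tutorial's own code; the module and tensor names (`VectorAttention`, `score`) are hypothetical.

```python
import torch
import torch.nn as nn

class VectorAttention(nn.Module):
    """Summarize a collection of vectors into a single vector
    using learned attention weights (hypothetical example)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # learns a scalar score per vector

    def forward(self, vectors):
        # vectors: (batch, n, dim)
        scores = self.score(vectors).squeeze(-1)   # (batch, n)
        weights = torch.softmax(scores, dim=-1)    # weights sum to 1 over n
        # weighted sum replaces plain averaging / max pooling
        return (weights.unsqueeze(-1) * vectors).sum(dim=1)  # (batch, dim)

x = torch.randn(2, 5, 16)           # 2 examples, 5 vectors of size 16 each
summary = VectorAttention(16)(x)    # (2, 16)
```

Because the weights come from a softmax, the output stays a convex combination of the input vectors, which is what distinguishes this from an unconstrained linear projection.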

docs/day2/sampling.rst

+2 −2
@@ -1,5 +1,5 @@
-Sampling from an RNN
-====================
+Exercise: Sampling from an RNN
+==============================
 
 The goal of sampling from an RNN is to initialize the sequence in some way, feed it into the recurrent computation, and retrieve the next prediction.
 
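The initialize-feed-predict loop described there can be sketched in PyTorch roughly as below. This is an assumed minimal setup, not the exercise's actual model; the vocabulary size, dimensions, and layer names are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for a small character-level model
vocab_size, emb_dim, hid_dim = 28, 16, 32
embed = nn.Embedding(vocab_size, emb_dim)
rnn = nn.GRUCell(emb_dim, hid_dim)
to_logits = nn.Linear(hid_dim, vocab_size)

# Initialize the sequence with a start token and a zero hidden state,
# then repeatedly feed the current token into the recurrent computation
# and sample the next prediction from the output distribution.
token = torch.tensor([0])            # assumed start-of-sequence index
hidden = torch.zeros(1, hid_dim)
sampled = []
for _ in range(10):
    hidden = rnn(embed(token), hidden)
    probs = torch.softmax(to_logits(hidden), dim=-1)
    token = torch.multinomial(probs, num_samples=1).squeeze(1)
    sampled.append(token.item())
```

Sampling with `torch.multinomial` (rather than taking the argmax) keeps the generated sequences varied; the sampled token is fed back in as the next step's input.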

docs/index.rst

+1 −1
@@ -30,9 +30,9 @@ Natural Language Processing (NLP) with PyTorch
 day2/warmup
 day2/failfastprototypemode
 day2/tensorfu1
-day2/sampling
 day2/tensorfu2
 day2/adventures/interpolation
+day2/sampling
 day2/patterns/attention
 
 

0 commit comments

Comments
 (0)