
Commit 2ac6512

Fix typos in Twitter and web crawler exercises (donnemartin#438)
1 parent 914736a commit 2ac6512


2 files changed, +3 -3 lines changed


solutions/system_design/twitter/README.md (+2 -2)
@@ -26,7 +26,7 @@ Without an interviewer to address clarifying questions, we'll define some use cases and constraints
 #### Out of scope

 * **Service** pushes tweets to the Twitter Firehose and other streams
-    * **Service** strips out tweets based on user's visibility settings
+    * **Service** strips out tweets based on users' visibility settings
         * Hide @reply if the user is not also following the person being replied to
         * Respect 'hide retweets' setting
 * Analytics
@@ -129,7 +129,7 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the following structure:
 | tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
 ```

-The new tweet would be placed in the **Memory Cache**, which populates user's home timeline (activity from people the user is following).
+The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).

 We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):

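For context on the corrected line: the README models each user's home timeline as a Redis list of `tweet_id user_id meta` entries that a fanout step writes into the Memory Cache. Below is a minimal sketch of that write path, assuming a running Redis instance; `get_follower_ids` and the key layout are illustrative, not taken from the repo.

```python
# Sketch only: assumes a local Redis instance; get_follower_ids() and the
# key names are illustrative assumptions, not from the repo.
import redis

r = redis.Redis(host='localhost', port=6379)

HOME_TIMELINE_SIZE = 800  # assumed cap on cached entries per timeline


def get_follower_ids(user_id):
    """Hypothetical lookup of the users who follow user_id."""
    return [f.decode() for f in r.smembers(f'followers:{user_id}')]


def fan_out_tweet(tweet_id, author_id, meta):
    """Push the new tweet onto each follower's cached home timeline."""
    entry = f'{tweet_id} {author_id} {meta}'
    for follower_id in get_follower_ids(author_id):
        key = f'home_timeline:{follower_id}'
        r.lpush(key, entry)                      # newest entries first
        r.ltrim(key, 0, HOME_TIMELINE_SIZE - 1)  # keep the list bounded
```

Trimming each list keeps the cached timeline bounded, in the spirit of the fixed-size list structure shown in the diff context above.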
solutions/system_design/web_crawler/README.md (+1 -1)
@@ -77,7 +77,7 @@ Handy conversion guide:

 ### Use case: Service crawls a list of urls

-We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc
+We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.

 We'll use a table `crawled_links` to store processed links and their page signatures.

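Likewise, the corrected sentence describes seeding `links_to_crawl` with popular sites and recording processed pages in `crawled_links` by page signature. A rough sketch of that loop follows, using in-memory stand-ins for the queue and table; the MD5 signature and regex-based link extraction are simplifying assumptions for illustration.

```python
# Sketch only: in-memory stand-ins for the links_to_crawl queue and the
# crawled_links table; signature and link extraction are simplified.
import hashlib
import re
from urllib.request import urlopen

links_to_crawl = ['https://www.yahoo.com/', 'http://www.dmoz.org/']  # seed list
crawled_links = {}  # url -> page signature


def page_signature(html):
    """Signature used to recognize pages that were already processed."""
    return hashlib.md5(html.encode('utf-8')).hexdigest()


def crawl(max_pages=10):
    while links_to_crawl and len(crawled_links) < max_pages:
        url = links_to_crawl.pop(0)  # highest-ranked link first
        if url in crawled_links:
            continue                 # skip already-processed links
        html = urlopen(url).read().decode('utf-8', errors='ignore')
        crawled_links[url] = page_signature(html)
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in crawled_links:
                links_to_crawl.append(link)
```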