Commit 6c39aae

chore(book): improve grammar of various chapters
1 parent 2b96f00 commit 6c39aae

35 files changed, +244 -259 lines changed

book/content/colophon.asc

+1 -1
Original file line number | Diff line number | Diff line change
@@ -9,7 +9,7 @@ For online information and ordering this and other books, please visit https://a
99

1010
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher.
1111

12-
While every precaution has been taking in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or damages resulting from the use of the information contained herein.
12+
While every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or damages resulting from using the information contained herein.
1313

1414
// {revremark}, {revdate}.
1515
Version {revnumber}, {revdate}.

book/content/dedication.asc

+1 -1
@@ -1,4 +1,4 @@
11
[dedication]
22
== Dedication
33

4-
_To my wife Nathalie who supported me in my long hours of writing and my baby girl Abigail._
4+
_To my wife Nathalie, who supported me in my long hours of writing, and my baby girl Abigail._

book/content/introduction.asc

+2 -2
@@ -4,9 +4,9 @@
44
You are about to become a better programmer and grasp the fundamentals of Algorithms and Data Structures.
55
Let's take a moment to explain how we are going to do that.
66

7-
This book is divided into 4 main parts:
7+
This book is divided into four main parts:
88

9-
In *Part 1*, we will cover the framework to compare and analyze algorithms: Big O notation. When you have multiple solutions to a problem, this framework comes handy to know which solution will scale better.
9+
In *Part 1*, we will cover the framework to compare and analyze algorithms: Big O notation. When you have multiple solutions to a problem, this framework comes in handy to know which solution will scale better.
1010

1111
In *Part 2*, we will go over linear data structures and trade-offs about using one over another.
1212
After reading this part, you will know how to trade space for speed using Maps, when to use a linked list over an array, or what problems can be solved using a stack over a queue.

book/content/part01/algorithms-analysis.asc

+2 -2
@@ -143,7 +143,7 @@ _7n^3^ + 3n^2^ + 5_
143143

144144
You can express it in Big O notation as _O(n^3^)_. The other terms (_3n^2^ + 5_) will become less significant as the input grows bigger.
145145

146-
Big O notation only cares about the “biggest” terms in the time/space complexity. It combines what we learn about time and space complexity, asymptotic analysis, and adds a worst-case scenario.
146+
Big O notation only cares about the “biggest” terms in the time/space complexity. It combines what we learned about time and space complexity and asymptotic analysis, and adds a worst-case scenario.
147147

148148
.All algorithms have three scenarios:
149149
* Best-case scenario: the most favorable input arrangement where the program will take the least amount of operations to complete. E.g., a sorted array is beneficial for some sorting algorithms.
@@ -152,7 +152,7 @@ Big O notation only cares about the “biggest” terms in the time/space comple
152152

153153
To sum up:
154154

155-
TIP: Big O only cares about the run time function's highest order on the worst-case scenario.
155+
TIP: Big O only cares about the run time function's highest order in the worst-case scenario.
156156

157157
WARNING: Don't drop terms that are multiplying other terms. _O(n log n)_ is not equivalent to _O(n)_. However, _O(n + log n)_ is.
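A quick numeric sketch of why only the highest-order term matters (using the _7n^3^ + 3n^2^ + 5_ polynomial from the text; the helper names here are illustrative, not from the book's code):

```javascript
// f(n) = 7n^3 + 3n^2 + 5, the example run time function from the text.
function f(n) {
  return 7 * n ** 3 + 3 * n ** 2 + 5;
}

// Fraction of the total contributed by the highest-order term, 7n^3.
// As n grows, this approaches 1: the lower-order terms become noise.
function highestOrderShare(n) {
  return (7 * n ** 3) / f(n);
}

console.log(highestOrderShare(10));   // ≈ 0.958
console.log(highestOrderShare(1000)); // ≈ 0.9996
```

Note that this only justifies dropping *added* terms; as the warning says, terms that multiply (like the `log n` in `n log n`) never fade away.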
158158

book/content/part01/big-o-examples.asc

+5 -3
@@ -25,7 +25,7 @@ Before we dive in, here’s a plot with all of them.
2525
// image::image5.png[CPU time needed vs. Algorithm runtime as the input size increases]
2626
image::big-o-running-time-complexity.png[CPU time needed vs. Algorithm runtime as the input size increases]
2727

28-
The above chart shows how the algorithm's running time is related to the work the CPU has to perform. As you can see, O(1) and O(log n) is very scalable. However, O(n^2^) and worst can convert your CPU into a furnace 🔥 for massive inputs.
28+
The above chart shows how the algorithm's running time is related to the CPU's work. As you can see, O(1) and O(log n) are very scalable. However, O(n^2^) and worse can convert your CPU into a furnace 🔥 for massive inputs.
2929

3030
[[constant]]
3131
==== Constant
@@ -71,7 +71,9 @@ include::{codedir}/runtimes/02-binary-search.js[tag=binarySearchRecursive]
7171

7272
This binary search implementation is a recursive algorithm, which means that the function `binarySearchRecursive` calls itself multiple times until the program finds a solution. The binary search splits the array in half every time.
7373

74-
Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call, you are most likely in front of a logarithmic runtime: _O(log n)_.
74+
Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some approaches like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem].
75+
76+
The `binarySearch` divides the input in half each time. As a rule of thumb, when an algorithm divides the data in half on each call, you are most likely looking at a logarithmic runtime: _O(log n)_.
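A minimal sketch of the halving pattern (illustrative only; the book's actual `binarySearchRecursive` is included from the code directory):

```javascript
// Recursive binary search over a sorted array.
// Each call discards half of the remaining range: ~log2(n) calls total.
function binarySearchRecursive(array, target, lo = 0, hi = array.length - 1) {
  if (lo > hi) return -1; // empty range: not found
  const mid = Math.floor((lo + hi) / 2);
  if (array[mid] === target) return mid;
  return array[mid] < target
    ? binarySearchRecursive(array, target, mid + 1, hi) // search right half
    : binarySearchRecursive(array, target, lo, mid - 1); // search left half
}

console.log(binarySearchRecursive([1, 3, 5, 7, 9, 11], 9)); // 4
```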
7577

7678
[[linear]]
7779
==== Linear
@@ -171,7 +173,7 @@ Cubic *O(n^3^)* and higher polynomial functions usually involve many nested loop
171173
[[cubic-example]]
172174
===== 3 Sum
173175

174-
Let's say you want to find 3 items in an array that add up to a target number. One brute force solution would be to visit every possible combination of 3 elements and add them up to see if they are equal to target.
176+
Let's say you want to find 3 items in an array that add up to a target number. One brute force solution would be to visit every possible combination of 3 elements and add them to see if they are equal to the target.
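The brute-force idea described could be sketched like this (an assumed signature, not the book's included listing):

```javascript
// Brute-force 3Sum: try every combination of 3 distinct positions.
// Three nested loops over n elements => O(n^3) runtime.
function threeSum(nums, target) {
  const results = [];
  for (let i = 0; i < nums.length - 2; i++) {
    for (let j = i + 1; j < nums.length - 1; j++) {
      for (let k = j + 1; k < nums.length; k++) {
        if (nums[i] + nums[j] + nums[k] === target) {
          results.push([nums[i], nums[j], nums[k]]);
        }
      }
    }
  }
  return results;
}

console.log(threeSum([1, 2, 3, 4], 9)); // [ [ 2, 3, 4 ] ]
```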
175177

176178
[source, javascript]
177179
----

book/content/part01/how-to-big-o.asc

+3 -3
@@ -6,7 +6,7 @@ endif::[]
66
=== How to determine time complexity from code?
77

88
In general, you can determine the time complexity by analyzing the program's statements.
9-
However, you have to be mindful how are the statements arranged. Suppose they are inside a loop or have function calls or even recursion. All these factors affect the runtime of your code. Let's see how to deal with these cases.
9+
However, you have to be mindful of how the statements are arranged. Suppose they are inside a loop or have function calls or even recursion. All these factors affect the runtime of your code. Let's see how to deal with these cases.
1010

1111
*Sequential Statements*
1212

@@ -114,7 +114,7 @@ If instead of `m`, you had to iterate on `n` again, then it would be `O(n^2)`. A
114114
[[big-o-function-statement]]
115115
*Function call statements*
116116

117-
When you calculate your programs' time complexity and invoke a function, you need to be aware of its runtime. If you created the function, that might be a simple inspection of the implementation. However, if you are using a library function, you might infer it from the language/library documentation.
117+
When you calculate your programs' time complexity and invoke a function, you need to be aware of its runtime. If you created the function, that might be a simple inspection of the implementation. However, you might infer it from the language/library documentation if you use a 3rd party function.
118118

119119
Let's say you have the following program:
120120

@@ -210,7 +210,7 @@ graph G {
210210

211211
If you take a look at the generated tree calls, the leftmost nodes go down in descending order: `fn(4)`, `fn(3)`, `fn(2)`, `fn(1)`, which means that the height of the tree (or the number of levels) on the tree will be `n`.
212212

213-
The total number of calls, in a complete binary tree, is `2^n - 1`. As you can see in `fn(4)`, the tree is not complete. The last level will only have two nodes, `fn(1)` and `fn(0)`, while a complete tree would have 8 nodes. But still, we can say the runtime would be exponential `O(2^n)`. It won't get any worst because `2^n` is the upper bound.
213+
The total number of calls in a complete binary tree is `2^n - 1`. As you can see in `fn(4)`, the tree is not complete. The last level will only have two nodes, `fn(1)` and `fn(0)`, while a full tree would have eight nodes. But still, we can say the runtime would be exponential `O(2^n)`. It won't get any worse because `2^n` is the upper bound.
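You can verify the bound by counting calls directly. This sketch assumes `fn` is the Fibonacci-style recurrence the call tree describes (two recursive calls per level):

```javascript
// Count how many times a naive double-recursive function is invoked.
let calls = 0;
function fn(n) {
  calls++;
  if (n <= 1) return n;
  return fn(n - 1) + fn(n - 2); // two recursive calls per invocation
}

fn(4);
console.log(calls); // 9 calls, under the 2^4 - 1 = 15 of a complete tree
```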
214214

215215
==== Summary
216216

book/content/part02/array-vs-list-vs-queue-vs-stack.asc

+2 -2
@@ -5,7 +5,7 @@ endif::[]
55

66
=== Array vs. Linked List & Queue vs. Stack
77

8-
In this part of the book, we explored the most used linear data structures such as Arrays, Linked Lists, Stacks and Queues. We implemented them and discussed the runtime of their operations.
8+
In this part of the book, we explored the most used linear data structures such as Arrays, Linked Lists, Stacks, and Queues. We implemented them and discussed the runtime of their operations.
99

1010
.Use Arrays when…
1111
* You need to access data in random order fast (using an index).
@@ -17,7 +17,7 @@ In this part of the book, we explored the most used linear data structures such
1717
* You want constant time to remove/add from extremes of the list.
1818

1919
.Use a Queue when:
20-
* You need to access your data on a first-come, first served basis (FIFO).
20+
* You need to access your data on a first-come, first-served basis (FIFO).
2121
* You need to implement a <<part03-graph-data-structures#bfs-tree, Breadth-First Search>>
2222

2323
.Use a Stack when:

book/content/part02/array.asc

+4 -4
@@ -256,7 +256,7 @@ array.pop(); // ↪️111
256256
// array: [2, 5, 1, 9]
257257
----
258258

259-
No other element was touched, so it’s an _O(1)_ runtime.
259+
When deleting the last element, no other item was touched, so that's an _O(1)_ runtime.
260260

261261
.JavaScript built-in `array.pop`
262262
****
@@ -293,7 +293,7 @@ To sum up, the time complexity of an array is:
293293
| `unshift` ^| O(n) | Insert element on the left side.
294294
| `shift` ^| O(n) | Remove leftmost element.
295295
| `splice` ^| O(n) | Insert and remove from anywhere.
296-
| `slice` ^| O(n) | Returns shallow copy of the array.
296+
| `slice` ^| O(n) | Returns a shallow copy of the array.
297297
|===
298298
//end::table
299299

@@ -474,7 +474,7 @@ Notice that many middle branches (in red color) have the same numbers, but in a
474474

475475
*Sliding window algorithm*
476476

477-
Another approach is using sliding windows. Since the sum always has `k` elements, we can compute the cumulative sum for k first elements from the left. Then, we slide the "window" to the right and remove one from the left until we cover all the right items. In the end, we would have all the possible combinations without duplicated work.
477+
Another approach is using sliding windows. Since the sum always has `k` elements, we can compute the cumulative sum for the k first elements from the left. Then, we slide the "window" to the right and remove one from the left until we cover all the right items. In the end, we would have all the possible combinations without duplicated work.
478478

479479
Check out the following illustration:
480480

@@ -537,7 +537,7 @@ _Solution: <<array-q-max-subarray>>_
537537
// tag::array-q-buy-sell-stock[]
538538
===== Best Time to Buy and Sell a Stock
539539

540-
*AR-2*) _You are given an array of integers. Each value represents the closing value of the stock on that day. You have only one chance to buy and then sell. What's the maximum profit you can obtain? (Note: you have to buy first and then sell)_
540+
*AR-2*) _You have an array of integers. Each value represents the closing price of the stock on that day. You have only one chance to buy and then sell. What's the maximum profit you can obtain? (Note: you have to buy first and then sell)_
541541

542542
Examples:
543543
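One common one-pass approach to this kind of problem can be sketched as follows (a sketch only, not necessarily the book's referenced solution):

```javascript
// Track the cheapest price seen so far and the best profit from
// selling at the current price. One pass => O(n) time, O(1) space.
function maxProfit(prices) {
  let minPrice = Infinity;
  let best = 0;
  for (const price of prices) {
    minPrice = Math.min(minPrice, price);     // cheapest buy so far
    best = Math.max(best, price - minPrice);  // profit if we sell today
  }
  return best;
}

console.log(maxProfit([7, 1, 5, 3, 6, 4])); // 5 (buy at 1, sell at 6)
console.log(maxProfit([7, 6, 5]));          // 0 (never profitable)
```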

book/content/part02/hash-map.asc

+6 -6
@@ -7,7 +7,7 @@ endif::[]
77
[[hashmap-chap]]
88
=== Map
99

10-
A Map is a data structure where a `key` is mapped to a `value`. It's used for a fast lookup of values based on the given key. Only one key can map to a value (no duplicates).
10+
A Map is a data structure where a `key` is mapped to a `value`. It's used for a fast lookup of values based on the given key. Only one key can map to a value (no key duplicates are possible).
1111

1212
NOTE: Map has many terms depending on the programming language. Here are some other names: Hash Map, Hash Table, Associative Array, Unordered Map, Dictionary.
1313

@@ -242,7 +242,7 @@ map.set('art', 8);
242242
.Internal HashMap representation
243243
image::image41.png[image,width=528,height=299]
244244

245-
No hash function is perfect, so it's going to map two different keys to the same value for some cases. That's what we called a *collision*. When that happens, we chain the results on the same bucket. If we have too many collisions, it could degrade the lookup time from `O(1)` to `O(n)`.
245+
No hash function is perfect, so it will map two different keys to the same value for some cases. That's what we call a *collision*. When that happens, we chain the results on the same bucket. If we have too many collisions, it could degrade the lookup time from `O(1)` to `O(n)`.
246246

247247
The Map doubles the size of its internal array to minimize collisions when it reaches a certain threshold. This restructuring is called a *rehash*. This *rehash* operation takes `O(n)`, since we have to visit every old key/value pair and remap it to the new internal array. Rehash doesn't happen very often, so statistically speaking, Maps can insert/read/search in constant time `O(1)`.
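A toy illustration of chaining (a deliberately weak hash, for demonstration only; real Map implementations use far better functions and many more buckets):

```javascript
// A bad hash: sum the character codes, modulo the bucket count.
// Anagrams like 'art' and 'rat' always collide with this scheme.
function hash(key, size = 4) {
  let h = 0;
  for (const ch of key) h += ch.codePointAt(0);
  return h % size;
}

// Each bucket is an array; colliding keys are chained together.
const buckets = Array.from({ length: 4 }, () => []);
for (const key of ['cat', 'dog', 'art', 'rat']) {
  buckets[hash(key)].push(key);
}
console.log(buckets); // 'art' and 'rat' end up chained in the same bucket
```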
248248

@@ -343,7 +343,7 @@ The LRU cache behavior is almost identical to the Map.
343343
- LRU cache has a limited size, while Map grows until you run out of memory.
344344
- LRU cache removes the least used items once the limit is reached.
345345

346-
We can extend the Map functionality. Also, the Map implementation on JavaScript already keeps the items by insertion order. So, every time we read or update a value, we can remove it from where it was and add it back. That way, the oldest (least used) it's the first element on the Map.
346+
We can extend the Map functionality. Also, the Map implementation on JavaScript already keeps the items by insertion order. Every time we read or update a value, we can remove it from where it was and add it back. That way, the oldest (least used) is the first element on the Map.
347347

348348
.Solution: extending Map
349349
[source, javascript]
@@ -505,9 +505,9 @@ image:sliding-window-map.png[sliding window for abbadvdf]
505505

506506
As you can see, we calculate the length of the string on each iteration and keep track of the maximum value.
507507

508-
What would this look like in code? Let's try a couple of solutions. Let's go first with the brute force and then improve.
508+
What would this look like in code? Let's try a couple of solutions. Let's go first with the brute force and then see how we can improve it.
509509

510-
We can have two pointers, `lo` and `hi` to define a window. We can can use two for-loops for that. Later, within `lo` to `hi` we want to know if there's a duplicate value. We can use two other for-loops to check for duplicates (4 nested for-loop)! To top it off, we are using labeled breaks to skip updating the max if there's a duplicate.
510+
We can have two pointers, `lo` and `hi`, to define a window. We can use two for-loops for that. Later, within the `lo` to `hi` window, we want to know if there's a duplicate value. A simple and naive approach is to use another two for-loops to check for duplicates (4 nested for-loops)! We need labeled breaks to skip updating the max if there's a duplicate.
511511

512512
WARNING: The following code can hurt your eyes. Don't try this in production; for better solutions, keep reading.
513513

@@ -615,7 +615,7 @@ Something that might look unnecessary is the `Math.max` when updating the `lo` p
615615

616616
.Complexity Analysis
617617
- Time Complexity: `O(n)`. We do one pass and visit each character once.
618-
- Space complexity: `O(n)`. We store everything one the Map so that the max size would be `n`.
618+
- Space complexity: `O(n)`. We store everything on the Map, so the max size would be `n`.
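The single-pass approach analyzed above can be sketched like this (illustrative only, not the book's included listing; note the `Math.max` when updating `lo`, as discussed):

```javascript
// Longest substring without repeating characters, in one pass.
// A Map records each character's last seen index; `lo` jumps past
// duplicates but never moves backwards (hence Math.max).
function longestUnique(s) {
  const lastIndex = new Map();
  let lo = 0;
  let max = 0;
  for (let hi = 0; hi < s.length; hi++) {
    const ch = s[hi];
    if (lastIndex.has(ch)) {
      lo = Math.max(lo, lastIndex.get(ch) + 1); // skip past the duplicate
    }
    lastIndex.set(ch, hi);
    max = Math.max(max, hi - lo + 1); // current window size
  }
  return max;
}

console.log(longestUnique('abbadvdf')); // 4 ("badv")
```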
619619

620620
<<<
621621
==== Practice Questions (((Interview Questions, Hash Map)))
