book/B-self-balancing-binary-search-trees.asc (+2 -3)
@@ -36,8 +36,8 @@ Let's go one by one.
A right rotation moves a node to the right, making it a child of another node.
- Take a look at the `@example` in the code below.
- As you can see we have an unbalanced tree `4-3-2-1`.
+ Take a look at the examples in the code in the next section.
+ As you will see, we have an unbalanced tree `4-3-2-1`.
We want to balance the tree; for that, we need to do a right rotation of node 3.
So, we move node 3 to become the right child of its previous child.
@@ -140,4 +140,3 @@ This rotation is also referred to as `RL rotation`.
=== Self-balancing trees implementations
So far, we have studied how to do tree rotations, which are the basis for self-balancing trees. There are different implementations of self-balancing trees, such as the Red-Black Tree and the AVL Tree.
One of the most efficient ways to find repeating characters is using a `Map` or `Set`. Use a `Map` when you need to keep track of the count/index (e.g., string -> count) and use a `Set` when you only need to know if there are repeated characters or not.
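One way this could look in JavaScript (a sketch only; the function names `hasRepeats` and `charCounts` are illustrative, not from the book's code):

```javascript
// Set: answers only "is there any repeated character?"
function hasRepeats(str) {
  // A Set drops duplicates, so a size mismatch means repeats exist.
  return new Set(str).size !== str.length;
}

// Map: keeps track of how many times each character appears.
function charCounts(str) {
  const counts = new Map();
  for (const char of str) {
    counts.set(char, (counts.get(char) || 0) + 1);
  }
  return counts;
}
```

Use the `Set` version when a yes/no answer suffices; reach for the `Map` version when you also need the counts themselves.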
book/content/colophon.asc (+1 -1)
@@ -9,7 +9,7 @@ For online information and ordering this and other books, please visit https://a
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher.
- While every precaution has been taking in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or damages resulting from the use of the information contained herein.
+ While every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or for damages resulting from using the information contained herein.
book/content/introduction.asc (+2 -2)
@@ -4,9 +4,9 @@
You are about to become a better programmer and grasp the fundamentals of Algorithms and Data Structures.
Let's take a moment to explain how we are going to do that.
- This book is divided into 4 main parts:
+ This book is divided into four main parts:
- In *Part 1*, we will cover the framework to compare and analyze algorithms: Big O notation. When you have multiple solutions to a problem, this framework comes handy to know which solution will scale better.
+ In *Part 1*, we will cover the framework to compare and analyze algorithms: Big O notation. When you have multiple solutions to a problem, this framework comes in handy to know which solution will scale better.
In *Part 2*, we will go over linear data structures and the trade-offs of using one over another.
After reading this part, you will know how to trade space for speed using Maps, when to use a linked list over an array, or what problems can be solved using a stack over a queue.
book/content/part01/algorithms-analysis.asc (+2 -2)
@@ -143,7 +143,7 @@ _7n^3^ + 3n^2^ + 5_
You can express it in Big O notation as _O(n^3^)_. The other terms (_3n^2^ + 5_) will become less significant as the input grows bigger.
- Big O notation only cares about the “biggest” terms in the time/space complexity. It combines what we learn about time and space complexity, asymptotic analysis, and adds a worst-case scenario.
+ Big O notation only cares about the “biggest” terms in the time/space complexity. It combines what we learned about time and space complexity and asymptotic analysis, and adds a worst-case scenario.
.All algorithms have three scenarios:
* Best-case scenario: the most favorable input arrangement where the program will take the least amount of operations to complete. E.g., a sorted array is beneficial for some sorting algorithms.
@@ -152,7 +152,7 @@ Big O notation only cares about the “biggest” terms in the time/space comple
To sum up:
- TIP: Big O only cares about the run time function's highest order on the worst-case scenario.
+ TIP: Big O only cares about the run time function's highest order in the worst-case scenario.
WARNING: Don't drop terms that are multiplying other terms. _O(n log n)_ is not equivalent to _O(n)_. However, _O(n + log n)_ is.
book/content/part01/big-o-examples.asc (+6 -4)
@@ -23,9 +23,9 @@ Before we dive in, here’s a plot with all of them.
.CPU operations vs. Algorithm runtime as the input size grows
// image::image5.png[CPU time needed vs. Algorithm runtime as the input size increases]
- image::big-o-running-time-complexity.png[CPU time needed vs. Algorithm runtime as the input size increases]
+ image::time-complexity-manual.png[{half-size}]
- The above chart shows how the algorithm's running time is related to the work the CPU has to perform. As you can see, O(1) and O(log n) is very scalable. However, O(n^2^) and worst can convert your CPU into a furnace 🔥 for massive inputs.
+ The above chart shows how the algorithm's running time is related to the CPU's work. As you can see, O(1) and O(log n) are very scalable. However, O(n^2^) and worse can turn your CPU into a furnace 🔥 for massive inputs.
This binary search implementation is a recursive algorithm, which means that the function `binarySearchRecursive` calls itself multiple times until the program finds a solution. The binary search splits the array in half every time.
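As a sketch of the idea (the book's actual `binarySearchRecursive` may differ in signature and details):

```javascript
// Recursive binary search over a sorted array.
// Returns the index of `target`, or -1 if not found.
function binarySearchRecursive(array, target, lo = 0, hi = array.length - 1) {
  if (lo > hi) return -1; // empty range: not found
  const mid = Math.floor((lo + hi) / 2);
  if (array[mid] === target) return mid;
  return array[mid] < target
    ? binarySearchRecursive(array, target, mid + 1, hi) // search right half
    : binarySearchRecursive(array, target, lo, mid - 1); // search left half
}
```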
- Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call, you are most likely in front of a logarithmic runtime: _O(log n)_.
+ Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some approaches like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem].
+
+ The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call, you are most likely looking at a logarithmic runtime: _O(log n)_.
[[linear]]
==== Linear
@@ -171,7 +173,7 @@ Cubic *O(n^3^)* and higher polynomial functions usually involve many nested loop
[[cubic-example]]
===== 3 Sum
- Let's say you want to find 3 items in an array that add up to a target number. One brute force solution would be to visit every possible combination of 3 elements and add them up to see if they are equal to target.
+ Let's say you want to find 3 items in an array that add up to a target number. One brute force solution would be to visit every possible combination of 3 elements and add them to see if they are equal to the target.
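A brute-force sketch of that idea (illustrative names, not necessarily the book's implementation) with three nested loops, hence _O(n^3^)_:

```javascript
// Try every combination of 3 distinct positions i < j < k: O(n^3).
function threeSumBruteForce(nums, target) {
  const results = [];
  for (let i = 0; i < nums.length - 2; i++) {
    for (let j = i + 1; j < nums.length - 1; j++) {
      for (let k = j + 1; k < nums.length; k++) {
        if (nums[i] + nums[j] + nums[k] === target) {
          results.push([nums[i], nums[j], nums[k]]);
        }
      }
    }
  }
  return results;
}
```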
book/content/part01/how-to-big-o.asc (+3 -3)
@@ -6,7 +6,7 @@ endif::[]
=== How to determine time complexity from code?
In general, you can determine the time complexity by analyzing the program's statements.
- However, you have to be mindful how are the statements arranged. Suppose they are inside a loop or have function calls or even recursion. All these factors affect the runtime of your code. Let's see how to deal with these cases.
+ However, you have to be mindful of how the statements are arranged. Suppose they are inside a loop, have function calls, or even recursion. All these factors affect the runtime of your code. Let's see how to deal with these cases.
*Sequential Statements*
@@ -114,7 +114,7 @@ If instead of `m`, you had to iterate on `n` again, then it would be `O(n^2)`. A
[[big-o-function-statement]]
*Function call statements*
- When you calculate your programs' time complexity and invoke a function, you need to be aware of its runtime. If you created the function, that might be a simple inspection of the implementation. However, if you are using a library function, you might infer it from the language/library documentation.
+ When you calculate your program's time complexity and invoke a function, you need to be aware of its runtime. If you created the function, that might be a simple inspection of the implementation. However, you might infer it from the language/library documentation if you use a third-party function.
Let's say you have the following program:
@@ -210,7 +210,7 @@ graph G {
If you take a look at the generated call tree, the leftmost nodes go down in descending order: `fn(4)`, `fn(3)`, `fn(2)`, `fn(1)`, which means that the height of the tree (or the number of levels) will be `n`.
- The total number of calls, in a complete binary tree, is `2^n - 1`. As you can see in `fn(4)`, the tree is not complete. The last level will only have two nodes, `fn(1)` and `fn(0)`, while a complete tree would have 8 nodes. But still, we can say the runtime would be exponential `O(2^n)`. It won't get any worst because `2^n` is the upper bound.
+ The total number of calls in a complete binary tree is `2^n - 1`. As you can see in `fn(4)`, the tree is not complete. The last level will only have two nodes, `fn(1)` and `fn(0)`, while a full tree would have eight nodes. But still, we can say the runtime would be exponential `O(2^n)`. It won't get any worse because `2^n` is the upper bound.
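A Fibonacci-like function produces exactly this call tree; the `fn` below is a sketch modeled on the text (the call counter is added purely for illustration):

```javascript
let calls = 0; // instrumentation: counts how many times fn is invoked

// Each call spawns two more (fn(n-1) and fn(n-2)), so the call tree
// has height n and at most 2^n - 1 nodes: O(2^n) runtime.
function fn(n) {
  calls++;
  if (n < 2) return n; // base cases: fn(0) and fn(1)
  return fn(n - 1) + fn(n - 2);
}
```

Running `fn(4)` makes 9 calls, below the complete-tree bound of `2^4 - 1 = 15`, matching the incomplete last level described above.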
book/content/part02/array-vs-list-vs-queue-vs-stack.asc (+2 -2)
@@ -5,7 +5,7 @@ endif::[]
=== Array vs. Linked List & Queue vs. Stack
- In this part of the book, we explored the most used linear data structures such as Arrays, Linked Lists, Stacks and Queues. We implemented them and discussed the runtime of their operations.
+ In this part of the book, we explored the most used linear data structures, such as Arrays, Linked Lists, Stacks, and Queues. We implemented them and discussed the runtime of their operations.
.Use Arrays when…
* You need to access data in random order fast (using an index).
@@ -17,7 +17,7 @@ In this part of the book, we explored the most used linear data structures such
* You want constant time to remove/add from extremes of the list.
.Use a Queue when:
- * You need to access your data on a first-come, firstserved basis (FIFO).
+ * You need to access your data on a first-come, first-served basis (FIFO).
* You need to implement a <<part03-graph-data-structures#bfs-tree, Breadth-First Search>>
book/content/part02/array.asc (+6 -6)
@@ -256,7 +256,7 @@ array.pop(); // ↪️111
// array: [2, 5, 1, 9]
----
- No other elementwas touched, so it’s an _O(1)_ runtime.
+ While deleting the last element, no other item was touched, so that’s an _O(1)_ runtime.
.JavaScript built-in `array.pop`
****
@@ -293,7 +293,7 @@ To sum up, the time complexity of an array is:
| `unshift` ^| O(n) | Insert element on the left side.
| `shift` ^| O(n) | Remove leftmost element.
| `splice` ^| O(n) | Insert and remove from anywhere.
- | `slice` ^| O(n) | Returns shallow copy of the array.
+ | `slice` ^| O(n) | Returns a shallow copy of the array.
|===
//end::table
@@ -474,7 +474,7 @@ Notice that many middle branches (in red color) have the same numbers, but in a
*Sliding window algorithm*
- Another approach is using sliding windows. Since the sum always has `k` elements, we can compute the cumulative sum for k first elements from the left. Then, we slide the "window" to the right and remove one from the left until we cover all the right items. In the end, we would have all the possible combinations without duplicated work.
+ Another approach is using sliding windows. Since the sum always has `k` elements, we can compute the cumulative sum for the k first elements from the left. Then, we slide the "window" to the right and remove one from the left until we cover all the right items. In the end, we would have all the possible combinations without duplicated work.
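One way the sliding-window idea could look in code (a sketch with an illustrative name, not the book's listing): sum the first `k` items once, then slide right by adding the entering item and subtracting the leaving one.

```javascript
// Maximum sum of any k consecutive elements, in O(n).
function maxSumOfK(array, k) {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += array[i]; // first k elements
  let max = windowSum;
  for (let i = k; i < array.length; i++) {
    windowSum += array[i] - array[i - k]; // slide the window one step right
    max = Math.max(max, windowSum);
  }
  return max;
}
```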
- *AR-2*) _You are given an array of integers. Each value represents the closing value of the stock on that day. You have only one chance to buy and then sell. What's the maximum profit you can obtain? (Note: you have to buy first and then sell)_
+ *AR-2*) _You have an array of integers. Each value represents the closing value of the stock on that day. You have only one chance to buy and then sell. What's the maximum profit you can obtain? (Note: you have to buy first and then sell)_
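One possible one-pass solution sketch for AR-2 (not necessarily the book's official answer): track the lowest price seen so far and the best profit achievable by selling at the current price.

```javascript
// O(n) time, O(1) space: buy at the cheapest price so far, try selling today.
function maxProfit(prices) {
  let minPrice = Infinity;
  let best = 0;
  for (const price of prices) {
    minPrice = Math.min(minPrice, price); // cheapest buy so far
    best = Math.max(best, price - minPrice); // profit if we sell today
  }
  return best;
}
```

Because `minPrice` only considers earlier days, the buy always happens before the sell, as the problem requires.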
- A Map is a data structure where a `key` is mapped to a `value`. It's used for a fast lookup of values based on the given key. Only one key can map to a value (no duplicates).
+ A Map is a data structure where a `key` is mapped to a `value`. It's used for a fast lookup of values based on the given key. Only one key can map to a value (no key duplicates are possible).
NOTE: Map has many terms depending on the programming language. Here are some other names: Hash Map, Hash Table, Associative Array, Unordered Map, Dictionary.
@@ -35,6 +35,7 @@ A Map uses an array internally. It translates the key into an array's index usin
JavaScript has two ways to use Maps: one uses objects (`{}`), and the other is using the built-in `Map`.
+ [[hashmap-examples]]
.Using Objects as a HashMap.
[source, javascript]
----
@@ -241,7 +242,7 @@ map.set('art', 8);
.Internal HashMap representation
image::image41.png[image,width=528,height=299]
- No hash function is perfect, so it's going to map two different keys to the same value for some cases. That's what we called a *collision*. When that happens, we chain the results on the same bucket. If we have too many collisions, it could degrade the lookup time from `O(1)` to `O(n)`.
+ No hash function is perfect, so it will map two different keys to the same value in some cases. That's what we call a *collision*. When that happens, we chain the results on the same bucket. If we have too many collisions, it could degrade the lookup time from `O(1)` to `O(n)`.
The Map doubles the size of its internal array to minimize collisions when it reaches a certain threshold. This restructuring is called a *rehash*. This *rehash* operation takes `O(n)`, since we have to visit every old key/value pair and remap it to the new internal array. Rehash doesn't happen very often, so statistically speaking, Maps can insert/read/search in constant time `O(1)`.
@@ -342,7 +343,7 @@ The LRU cache behavior is almost identical to the Map.
- LRU cache has a limited size, while Map grows until you run out of memory.
- LRU cache removes the least used items once the limit is reached.
- We can extend the Map functionality. Also, the Map implementation on JavaScript already keeps the items by insertion order. So, every time we read or update a value, we can remove it from where it was and add it back. That way, the oldest (least used) it's the first element on the Map.
+ We can extend the Map functionality. Also, the Map implementation in JavaScript already keeps the items in insertion order. Every time we read or update a value, we can remove it from where it was and add it back. That way, the oldest (least used) is the first element on the Map.
.Solution: extending Map
[source, javascript]
@@ -504,9 +505,9 @@ image:sliding-window-map.png[sliding window for abbadvdf]
As you can see, we calculate the length of the string on each iteration and keep track of the maximum value.
- What would this look like in code? Let's try a couple of solutions. Let's go first with the brute force and then improve.
+ What would this look like in code? Let's try a couple of solutions. Let's go first with the brute force solution and then see how we can improve it.
- We can have two pointers, `lo` and `hi` to define a window. We can can use two for-loops for that. Later, within `lo` to `hi` we want to know if there's a duplicate value. We can use two other for-loops to check for duplicates (4 nested for-loop)! To top it off, we are using labeled breaks to skip updating the max if there's a duplicate.
+ We can have two pointers, `lo` and `hi`, to define a window. We can use two for-loops for that. Later, within the `lo` to `hi` window, we want to know if there's a duplicate value. A simple and naive approach is to use another two for-loops to check for duplicates (4 nested for-loops)! We need labeled breaks to skip updating the max if there's a duplicate.
WARNING: The following code can hurt your eyes. Don't try this in production; for better solutions, keep reading.
@@ -614,7 +615,7 @@ Something that might look unnecessary is the `Math.max` when updating the `lo` p
.Complexity Analysis
- Time Complexity: `O(n)`. We do one pass and visit each character once.
- - Space complexity: `O(n)`. We store everything one the Map so that the max size would be `n`.
+ - Space complexity: `O(n)`. We store everything on the Map so that the max size would be `n`.
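A sketch of that O(n) sliding-window approach, using a Map from character to its last-seen index (details, including the function name `longestUnique`, may differ from the book's listed solution):

```javascript
// Length of the longest substring without repeating characters: O(n).
function longestUnique(str) {
  const lastSeen = new Map(); // character -> index where it was last seen
  let lo = 0;
  let max = 0;
  for (let hi = 0; hi < str.length; hi++) {
    const char = str[hi];
    if (lastSeen.has(char)) {
      // Math.max keeps `lo` from moving backward on a stale entry.
      lo = Math.max(lo, lastSeen.get(char) + 1);
    }
    lastSeen.set(char, hi);
    max = Math.max(max, hi - lo + 1); // current window length
  }
  return max;
}
```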
<<<
==== Practice Questions (((Interview Questions, Hash Map)))
@@ -626,7 +627,7 @@ Something that might look unnecessary is the `Math.max` when updating the `lo` p
// end::hashmap-q-two-sum[]
- // _Seen in interviews at: Amazon, Google, Apple._