
Commit f2cbfa6

Authored on Jul 11, 2023
grammar fixes (#93)
1 parent 5548d76 · commit f2cbfa6

21 files changed: +156 −123 lines changed

‎array/README.md

Lines changed: 7 additions & 7 deletions
@@ -1,14 +1,14 @@
# Array

-Arrays are a basic and essential data structure in computer science. They consist of a fixed-size contiguous block of memory and offer O(1) read and write time complexity. As a fundamental element of programming languages, arrays come built-in as part of their core.
+Arrays are a basic and essential data structure in computer science. They consist of a fixed-size contiguous memory block and offer O(1) read and write time complexity. As a fundamental element of programming languages, arrays come built-in in their core.

-To provide a real-world analogy, consider an array of athletes preparing for a sprinting match. Each athlete occupies a specific position within the array, which is typically denoted as 1, 2,..., n. While it is technically possible for each athlete to be in a different position, the positions generally carry some form of significance, such as alphabetical order or seniority within the sport.
+To provide a real-world analogy, consider an array of athletes preparing for a sprinting match. Each athlete occupies a specific position within the array, typically denoted as 1, 2, ..., n. While it is technically possible for each athlete to be in a different position, the positions generally carry some form of significance, such as alphabetical order or seniority within the sport.

## Implementation

-In the Go programming language, arrays are considered values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy of the array is created, resulting in additional memory usage. However, to avoid this issue, it is possible to pass a pointer to the array instead.
+In the Go programming language, arrays are considered values rather than pointers and represent the entirety of the array. Whenever an array is passed to a function, a copy is created, resulting in additional memory usage. However, to avoid this issue, it is possible to pass a pointer to the array instead.

-To define an array in Go, it is necessary to specify the size of the array using a constant. By using constants in this manner, it is no longer necessary to utilize the make function to create the array.
+To define an array in Go, it is possible to specify the array size using a constant. By using constants in this manner, it is no longer necessary to use the make function to create the array.

```Go
package main
@@ -24,7 +24,7 @@ func main() {

Although arrays are fundamental data structures in Go, their constant size can make them inflexible and difficult to use in situations where a variable size is required. To address this issue, Go provides [slices](https://blog.golang.org/slices-intro) which are an abstraction of arrays that offer more convenient access to sequential data typically stored in arrays.

-Slices enable the addition of values using the `append` function, which allows for dynamic resizing of the slice. Additionally, selectors of the format [low:high] can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.
+Slices enable the addition of values using the `append` function, which allows for dynamic slice resizing. Additionally, selectors of the format [low:high] can be used to select or manipulate data in the slice. By utilizing slices instead of arrays, Go programmers gain a more flexible and powerful tool to manage their data structures.

```Go
package main
@@ -74,11 +74,11 @@ func main() {

## Complexity

-In computer science, the act of accessing an element within an array using an index `i` has an O(1) time complexity. This means that regardless of the size of the array, the read and write operations for a given element can be performed in constant time.
+Accessing an element within an array using an index has O(1) time complexity. This means that regardless of the size of the array, read and write operations for a given element can be performed in constant time.

While arrays are useful for certain tasks, searching an unsorted array can be a time-consuming O(n) operation. Since the target item could be located anywhere in the array, every element must be checked until the item is found. Due to this limitation, alternative data structures such as trees and hash tables are often more suitable for search operations.

-Both addition and deletion operations on arrays can be O(n) operations in Arrays. The process of removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the new item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for managing operations involving additions and deletions.
+Addition and deletion operations are O(n) operations in arrays. The process of removing an element can create an empty slot that must be eliminated by shifting the remaining items. Similarly, adding items to an array may require shifting existing items to create space for the added item. These inefficiencies can make alternative data structures, such as [trees](../tree) or [hash tables](../hashtable), more suitable for managing operations involving additions and deletions.

## Application
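
Editor's note: the README's own code blocks are elided by the diff view. As a minimal sketch of the `append` and `[low:high]` operations described above (illustrative only, not part of this commit):

```Go
package main

import "fmt"

func main() {
	// A fixed-size array; its length is part of its type.
	var arr [3]int
	arr[0] = 1

	// A slice grows dynamically with append.
	numbers := []int{1, 2, 3}
	numbers = append(numbers, 4, 5)

	// The [low:high] selector takes elements from index low
	// up to, but not including, index high.
	fmt.Println(numbers[1:3]) // [2 3]
	fmt.Println(arr, numbers) // [1 0 0] [1 2 3 4 5]
}
```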

‎backtracking/README.md

Lines changed: 7 additions & 9 deletions
@@ -6,22 +6,20 @@ Backtracking can be compared to how a person solves a maze or searches for an ex

## Implementation

-A backtracking algorithm is typically implemented in these steps:
+Backtracking algorithms are typically implemented in these steps:

-1. Pruning: eliminating invalid approaches when possible
-2. Generating a partial solution by iterating through available alternatives
-3. Checking the validity of the selected alternative according to the problem conditions and rules
-4. Checking for solution completion when required
+1. Prune invalid approaches when possible.
+2. Generate a partial solution by iterating through available alternatives.
+3. Check the validity of the selected alternative according to the problem conditions and rules.
+4. Check for solution completion when required.

## Complexity

-The time complexity of backtracking algorithms may vary depending on the problem at hand, but they generally require iterating through possible alternatives and checking for validity at each step. Although backtracking may be the only feasible approach for certain problems, it does not always guarantee an optimal solution. To improve the time complexity of backtracking algorithms, pruning, which involves eliminating known invalid options before iterating through the alternatives, is an effective technique.
-
-In addition, the space complexity of backtracking algorithms is typically not efficient since the recursive process requires maintaining a copy of the state at each step.
+The time complexity of backtracking algorithms may vary depending on the problem at hand; however, they generally require iterating through possible alternatives and checking for validity at each step. Although backtracking may be the only feasible approach to certain problems, it does not always guarantee an optimal solution. To improve the time complexity of backtracking algorithms, pruning, which involves eliminating known invalid options before iterating through the alternatives, is an effective technique.

## Application

-Backtracking is widely used to solve board games and is often employed by computers to select their next moves. Furthermore, the backtracking technique is also applied to graphs and trees through the use of [Depth First Search](../graph/graph#depth-first-search---dfs). It also has applications in object detection in image processing.
+Backtracking is widely used to solve board games and computers use it to select their next moves. Furthermore, the backtracking technique is also applied to graphs and trees through the use of [Depth First Search](../graph/graph#depth-first-search---dfs). It also has applications in object detection and image processing.

## Rehearsal
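
Editor's note: to make the four steps above concrete, here is a minimal sketch (hypothetical names, not part of this commit) that searches for a subset of candidates summing to a target:

```Go
package main

import "fmt"

// subsetSum reports whether any subset of candidates adds up to target.
func subsetSum(candidates []int, target int, chosen []int) bool {
	if target == 0 { // step 4: check for solution completion
		fmt.Println("solution:", chosen)
		return true
	}
	for i, c := range candidates {
		if c > target { // step 1: prune invalid approaches
			continue
		}
		// step 2: extend the partial solution with the next alternative
		if subsetSum(candidates[i+1:], target-c, append(chosen, c)) {
			return true // step 3: the selected alternative proved valid
		}
	}
	return false // backtrack: no alternative at this level worked
}

func main() {
	subsetSum([]int{3, 5, 2, 8}, 10, nil) // solution: [3 5 2]
}
```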

‎bit/README.md

Lines changed: 8 additions & 8 deletions
@@ -13,7 +13,7 @@ AND 1100 OR 1100 XOR 1100 Negation 1100 L Shift 1100 R Shift 1100

## Implementation

-Go provides below operators that can be used in bit manipulation:
+Go provides the below operators for bit manipulation:

```Go
package main
@@ -44,7 +44,7 @@ func printBinary(n int) {

## Arithmetic by Shifting

-Left shifting can be viewed as a multiplication operation by 2 raised to the power of a specified number, while right shifting can be viewed as a division operation by 2 raised to the power of a specified number. For instance, a << b can be interpreted as multiplying a by 2^b, and a >> b can be interpreted as dividing a by 2^b.
+Left shifting can be viewed as a multiplication operation by 2 raised to the power of a specified number. Right shifting can be viewed as a division operation by 2 raised to the power of a specified number. For instance, a << b can be interpreted as multiplying a by 2^b, and a >> b can be interpreted as dividing a by 2^b.

```Go
package main
@@ -65,7 +65,7 @@ func main() {

## Cryptography and Other XOR applications

-The XOR operation can be used to perform basic cryptography. By XORing a message with a key, we can generate an encrypted message. This encrypted message can be shared with someone else who knows the same key. If they XOR the key with the encrypted message, they will obtain the original plaintext message. This method is not secure enough because the key is relatively easy to guess from the encrypted message. The following example demonstrates this process:
+The XOR operation can be used for basic cryptography. By XORing a message with a key, we can generate an encrypted message. This encrypted message can be shared with someone else who knows the same key. If they XOR the key with the encrypted message, they will obtain the original plaintext message. This method is not secure enough because the key is relatively easy to guess from the encrypted message. The following example demonstrates this process:

```Go
package main
@@ -91,19 +91,19 @@ func xorCrypt(key, message []byte) []byte {

## Complexity

-Bit manipulation operations are characterized by a constant time complexity of O(1). This high level of performance renders them an optimal choice as a replacement for other approaches, especially when working with large data sets. As a result, they are frequently utilized in algorithmic design to optimize the execution of certain operations.
+Bit manipulation operations are characterized by a constant time complexity. This high level of performance renders them an optimal choice to replace other approaches, especially when working with large data sets. As a result, they are frequently used to achieve better performance.

## Application

-Bit manipulation techniques are widely utilized in diverse fields of computing, such as cryptography, data compression, network protocols, and databases, to name a few. Each specific bitwise operation has its own qualities that make it useful in different scenarios.
+Bit manipulation techniques are widely utilized in diverse fields of computing, such as cryptography, data compression, network protocols, and databases, to name a few. Each bitwise operation has its own qualities that make it useful in different scenarios.

-AND is used to extract bit(s) from a larger number. For example, to check if a certain bit is set in a number, we can AND the number with a mask that has only that bit set to 1, and if the result is not 0, then that bit was set. Another application is to clear or reset certain bits in a number by ANDing with a mask that has those bits set to 0.
+AND extracts bit(s) from a larger number. For example, to check if a certain bit is set in a number, we can AND the number with a mask that has only that bit set to 1, and if the result is not 0, then that bit was set. Another application is to clear or reset certain bits in a number by ANDing with a mask that has those bits set to 0.

-OR can be useful in solving problems where we want to "set" or "turn on" certain bits in a binary number. For example, if we have a variable flag, which is a binary number representing various options, we can set a particular flag by ORing the variable with a binary number where only the corresponding bit for that flag is 1. This will turn on the flag in the variable without affecting any other flags.
+OR can be useful in solving problems where we want to "set" or "turn on" certain bits in a binary number. For example, if we have a variable flag, which is a binary number representing various options, we can set a particular flag by ORing the variable with a binary number where only the corresponding bit for that flag is 1. This will turn on the flag in the variable without affecting other flags.

XOR can be used for encryption and decryption, as well as error detection and correction. It can also be used to swap two variables without using a temporary variable. Additionally, XOR can be used to solve problems related to finding unique elements in a list or array or to check whether two sets of data have any overlapping elements.

-Negation can be used to invert a set of flags or to find the two's complement of a number. In computer architecture, negation is often used in the implementation of logical and arithmetic operations.
+Negation can be used to invert a set of flags or find the two's complement of a number.

## Rehearsal
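
Editor's note: a small snippet (illustrative, not part of this commit) tying the shifting and masking descriptions above together:

```Go
package main

import "fmt"

func main() {
	a := 5

	// Shifting as arithmetic: a << b multiplies a by 2^b, a >> b divides by 2^b.
	fmt.Println(a << 3) // 40 (5 * 8)
	fmt.Println(a >> 1) // 2  (5 / 2, integer division)

	// AND with a single-bit mask checks whether bit 2 is set in a.
	mask := 1 << 2
	fmt.Println(a&mask != 0) // true: 101 has bit 2 set

	// OR with the mask turns a flag on without touching other bits.
	flags := 0b0001
	flags |= mask
	fmt.Printf("%04b\n", flags) // 0101
}
```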

‎complexity.md

Lines changed: 19 additions & 17 deletions
@@ -9,10 +9,10 @@ To address these questions, the Big O asymptotic notation, which characterizes h

## Big O

-Big O is a mathematical notion commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations that are commonly used in algorithm complexity analysis are discussed in the following sections.
+Big O is a mathematical notion commonly used to describe the impact on time or space as input size `n` increases. Seven Big O notations commonly used in algorithm complexity analysis are discussed in the following sections.

```ASCII
-[Figure 1] Schematic diagrams of Big O for common run times from fastest to slowest.
+[Figure 1] Schematic diagram of Big O for common run times from fastest to slowest.

O(1) O(Log n) O(n)
▲ ▲ ▲
@@ -69,24 +69,24 @@ t│ .
n
```

-To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in the algorithm's performance as the input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with the input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.
+To understand the big O notation, let us focus on time complexity and specifically examine the O(n) diagram. This diagram depicts a decline in algorithm performance as input size increases. In contrast, the O(1) diagram represents an algorithm that consistently performs in constant time, with input size having no impact on its efficiency. Consequently, the latter algorithm generally outperforms the former.

However, it is essential to note that this is not always the case. In practice, a O(1) algorithm with a single time-consuming operation might be slower than a O(n) algorithm with multiple operations if the single operation in the first algorithm requires more time to complete than the collective operations in the second algorithm.

The Big O notation of an algorithm can be simplified using the following two rules:

-1. Remove constants. `O(n) + 2*O(n*Log n) + 3*O(K) + 5` is simplified to `O(n) + O(n*Log n) + O(K)`.
-2. Remove non-dominant or slower terms. `O(n) + O(n*Log n) + O(K)` is simplified to `O(n*Log n)` because `O(n*Log n)` is the most dominant term..
+1. Remove the constants. `O(n) + 2*O(n*Log n) + 3*O(K) + 5` is simplified to `O(n) + O(n*Log n) + O(K)`.
+2. Remove non-dominant or slower terms. `O(n) + O(n*Log n) + O(K)` is simplified to `O(n*Log n)` because `O(n*Log n)` is the dominant term.

### Constant - O(K) or O(1)

-Constant time complexity represents the most efficient scenario for an algorithm, where the execution time remains constant regardless of the input size. Achieving constant time complexity often involves eliminating loops and recursive calls. Examples:
+Constant time complexity represents the most efficient scenario for an algorithm, where execution time remains constant regardless of input size. Achieving constant time complexity often involves eliminating loops and recursive calls. Examples:

-* Reads and writes in a [hash table](./hashtable/README.md)
+* Read and write in a [hash table](./hashtable/README.md)
* Enqueue and Dequeue in a [queue](./queue/README.md)
* Push and Pop in a [stack](./stack/README.md)
-* Finding the minimum or maximum in a [heap](./heap/README.md)
-* Removing the last element of a [doubly linked list](./linkedlist/README.md)
+* Find the minimum or maximum in a [heap](./heap/README.md)
+* Remove the last element of a [doubly linked list](./linkedlist/README.md)
* [Max without conditions](./bit/max_function_without_conditions.go)

### Logarithmic - O(Log n)
@@ -101,36 +101,38 @@ Attaining logarithmic time complexity in an algorithm is highly desirable as it

### Linear - O(n)

-Linear time complexity is considered favorable when an algorithm necessitates traversing every input with no feasible way to avoid it. Examples:
+Linear time complexity is considered favorable when an algorithm traverses every input with no feasible way to avoid it. Examples:

-* Removing the last element in a [singly linked list](./linkedlist/README.md)
-* Searching an unsorted [array](./array/README.md) or [linked list](./linkedlist/README.md)
+* Remove the last element in a [singly linked list](./linkedlist/README.md)
+* Search an unsorted [array](./array/README.md) or [linked list](./linkedlist/README.md)
* [Number of Islands](./graph/number_of_islands.go)
* [Missing Number](./hashtable/missing_number.go)

### O(n*Log n)

-The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and can yield an outcome at the same time through an efficient operation. Sorting is a common example. It's not possible to sort items faster than O(n*Log n). Examples:
+The time complexity of O(n*Log n) is commonly observed when it is necessary to iterate through all inputs and yield an outcome at the same time through an efficient operation. Sorting is a common example. It's impossible to sort items faster than O(n*Log n). Examples:

-* [Merge Sort](./dnc/merge_sort.go) and [Heap Sort](./heap/README.md)
+* [Merge Sort](./dnc/merge_sort.go)
+* [Quick Sort](./dnc/quick_sort.go)
+* [Heap Sort](./heap/heap_sort.go)
* [Knapsack](./greedy/knapsack.go)
* [Find Anagrams](./hashtable/find_anagrams.go)
* In order traversal of a [Binary Search Tree](./tree/README.md)

### Polynomial - O(n^2)

-Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving both an inner loop and an outer loop. Examples:
+Polynomial time complexity marks the initial threshold of problematic time complexity for algorithms. This complexity often arises when an algorithm includes nested loops involving both an inner loop and an outer loop. Examples:

* [Bubble Sort](./array/bubble_sort.go)
* [Cheapest Flight](./graph/cheapest_flights.go)
* [Remove Invalid Parentheses](./graph/remove_invalid_parentheses.go)

### Exponential O(2^n)

-Exponential complexity is considered highly undesirable; however, it represents only the second-worst complexity scenario. Examples:
+Exponential complexity is considered highly undesirable; however, it represents only the second-worst complexity scenario. Examples:

* [Climbing Stairs](./recursion/climbing_stairs.go)
-* [Tower of Hanoi](./dnc/towers_of_hanoi.go)
+* [Towers of Hanoi](./dnc/towers_of_hanoi.go)
* [Generate Parentheses](./backtracking/generate_parentheses.go)
* Basic [Recursive](./recursion/README.md) implementation of Fibonacci
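
Editor's note: a sketch (illustrative, not part of this commit) of how three of the common run-time shapes arise in code:

```Go
package main

import "fmt"

func main() {
	n := 4
	items := []int{3, 1, 4, 1}

	// O(1): index access touches a single element regardless of n.
	fmt.Println(items[0])

	// O(n): one pass over the input, e.g. a linear search.
	for _, v := range items {
		if v == 4 {
			fmt.Println("found")
		}
	}

	// O(n^2): nested inner and outer loops over the same input,
	// e.g. comparing every pair, as in bubble sort.
	pairs := 0
	for i := 0; i < n; i++ {
		for j := i + 1; j < n; j++ {
			pairs++
		}
	}
	fmt.Println(pairs) // 6 = n*(n-1)/2, which simplifies to O(n^2)
}
```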

‎dnc/README.md

Lines changed: 4 additions & 4 deletions
@@ -4,11 +4,11 @@ The divide-and-conquer (DNC) paradigm is a common approach to solving problems [

1. Divide: The problem is divided into smaller sub-problems, typically by partitioning the input data into two or more subsets.
2. Conquer: The smaller sub-problems are solved recursively using the same algorithm, typically by applying the divide-and-conquer approach again.
-3. Combine: The solutions to the smaller sub-problems are combined to obtain a solution to the original problem.
+3. Combine: The solutions to the smaller sub-problems are combined to solve the original problem.

## Implementation

-The binary search algorithm is a classic example of a Divide-and-Conquer algorithm that is commonly used to find a specific value in a sorted list or array. The search process begins by comparing the target value to the middle element of the list. If they are not equal, the half of the list where the target value cannot be present is eliminated. This process is repeated on the remaining half of the list until the target value is found or until there are no more elements to search. Binary search can be implemented iteratively or recursively, and both implementations are shown below.
+The binary search algorithm is a classic Divide-and-Conquer algorithm used to find a specific value in a sorted list. The search process begins by comparing the target value to the middle element of the list. If they are not equal, the half of the list in which the target value cannot be present is eliminated. This process is repeated on the remaining half of the list until the target value is found or until there are no more elements to search. Binary search can be implemented iteratively or recursively. Both implementations are shown below.

```Go
package main
@@ -57,7 +57,7 @@ func binarySearchIterative(list []int, target int) int {
}
```

-A pre-implemented binary search function is available in Go which returns the index of the first element satisfying a given condition. This function eliminates the need to manually implement the binary search algorithm.
+Go comes with a built-in binary search function which returns the index of the first element satisfying a given condition. This function eliminates the need to manually implement the binary search algorithm.

```Go
package main
@@ -84,7 +84,7 @@ func search(list []int, target int) int {

## Complexity

-If used inappropriately, DNC algorithms can lead to an exponential number of unnecessary recursive calls, resulting in a time complexity of O(2^n). However, if an appropriate dividing strategy and base case that can be solved directly are identified, DNC algorithms can be very effective, with a time complexity as low as O(Log n) in the case of binary search. As DNC algorithms are recursive, their complexity analysis is analogous to that of [recursive](../recursion) algorithms.
+If used inappropriately, DNC algorithms can lead to an exponential number of unnecessary recursive calls, resulting in a time complexity of O(2^n). However, if an appropriate dividing strategy and a base case that can be solved directly are identified, DNC algorithms can be very effective. They have a time complexity as low as O(Log n), as in the case of binary search. As DNC algorithms are recursive, their complexity analysis is similar to that of [recursive](../recursion) algorithms.

## Application
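
Editor's note: the built-in function referred to above is presumably `sort.Search` from the standard library; a minimal usage sketch (not part of this commit):

```Go
package main

import (
	"fmt"
	"sort"
)

func main() {
	list := []int{1, 3, 5, 7, 9}
	target := 7

	// sort.Search returns the smallest index i in [0, n) for which
	// the predicate is true, assuming it is false up to some point
	// and true after it; it runs a binary search in O(Log n).
	i := sort.Search(len(list), func(i int) bool { return list[i] >= target })

	if i < len(list) && list[i] == target {
		fmt.Println("found at index", i) // found at index 3
	} else {
		fmt.Println("not found")
	}
}
```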

‎dp/README.md

Lines changed: 10 additions & 10 deletions
@@ -4,28 +4,28 @@ Dynamic Programming (DP) and [divide-and-conquer](../dnc) share a common strateg

## Implementation

-A Dynamic Programming (DP) approach to algorithms typically entails four steps:
+A Dynamic Programming (DP) approach to algorithms typically involves four steps:

-1. Characterizing an optimal solution
-2. Defining the value of an optimal solution through [recursion](../recursion)
-3. Computing the value of the optimal solution in a bottom-up manner
-4. Determining the optimal solution using the values computed in the previous steps.
+1. Characterize an optimal solution.
+2. Define the value of an optimal solution through [recursion](../recursion).
+3. Compute the value of the optimal solution in a bottom-up manner.
+4. Determine the optimal solution using the values computed in the previous steps.

-If only the value of the solution is needed, step 4 may be omitted. Conversely, when the solution is required, it is advisable to consider the necessary values at the final step to facilitate the storage of pertinent information at step 3 and simplify step 4.
+If only the solution value is needed, step 4 may be omitted. Conversely, when the solution is required, it is advisable to consider the necessary values at the final step. This will facilitate the storage of pertinent information at step 3 and simplify step 4.

-There are two general methods for writing DP algorithms: the top-down and bottom-up approaches.
+There are two general methods for writing DP algorithms: top-down and bottom-up approaches.

In the top-down approach, a caching mechanism is utilized to store each sub-problem solution and prevent redundancy. This technique is also known as memoization.

-In the bottom-up approach, sub-problems are solved in order of size, with smaller ones tackled first. Since all subsequent smaller sub-problems are already solved when addressing a particular sub-problem, the result is stored.
+In the bottom-up approach, sub-problems are solved in order of size, with the smaller ones tackled first. Since all subsequent smaller sub-problems are already solved when addressing a particular problem, the result is calculated and stored.

## Complexity

-The top-down and bottom-up approaches typically perform similarly.
+Top-down and bottom-up approaches usually perform similarly.

## Application

-DP is well-suited for tackling an array of complex problems, including those in logistics, game theory, machine learning, resource allocation, and investment policies. In graph theory, DP is commonly used to determine the shortest path between two points. DP algorithms are particularly effective in optimizing problems that have a vast number of potential solutions, where the goal is to identify the most optimal one.
+DP is well-suited for tackling an array of complex problems, including logistics, game theory, machine learning, resource allocation, and investment policies. In graph theory, DP is commonly used to determine the shortest path between two points. DP algorithms are particularly effective at optimizing problems with a vast number of potential solutions, where the goal is to identify the most optimal one.

## Rehearsal
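
Editor's note: the two methods above, side by side on Fibonacci, as a sketch with hypothetical helper names (not part of this commit):

```Go
package main

import "fmt"

// fibTopDown: recursion plus a memoization cache.
func fibTopDown(n int, memo map[int]int) int {
	if n <= 1 {
		return n
	}
	if v, ok := memo[n]; ok {
		return v // sub-problem already solved; no redundant work
	}
	memo[n] = fibTopDown(n-1, memo) + fibTopDown(n-2, memo)
	return memo[n]
}

// fibBottomUp: solve the smaller sub-problems first and store results.
func fibBottomUp(n int) int {
	if n <= 1 {
		return n
	}
	table := make([]int, n+1)
	table[1] = 1
	for i := 2; i <= n; i++ {
		table[i] = table[i-1] + table[i-2]
	}
	return table[n]
}

func main() {
	fmt.Println(fibTopDown(10, map[int]int{})) // 55
	fmt.Println(fibBottomUp(10))               // 55
}
```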

‎graph/README.md

Lines changed: 8 additions & 8 deletions
@@ -1,9 +1,9 @@
# Graph

-A graph is a collection of vertices that are connected through directed or undirected edges. In an edge-weighted tree, each edge is assigned a value representing its cost or benefit. Graphs can be used to model various real-world scenarios. For example graph A in the below figure can represent university courses and their prerequisites, cities and their highways, followers in a social network, links on a site, and many more.
+A graph is a collection of vertices connected through directed or undirected edges. In an edge-weighted tree, each edge is assigned a value representing its cost or benefit. Graphs can be used to model various real-world scenarios. For example graph A in the below figure can represent university courses and their prerequisites, cities and their highways, followers in a social network, links on a site, and many more.

```ASCII
-[Figure 1] Graph Examples - Numbers, arrows, and numbers in brackets each respectively represent vertices, edges, and edge weights.
+[Figure 1] Graph Examples - Numbers, arrows, and numbers in brackets represent vertices, edges, and edge weights.

5 3 5 3 6 1 - Is it a DAG?
↗ ↑ ↖ ↗ ↘ ↖ ↘ (4) ↖ (1) 2- Is it a connected graph?
@@ -14,15 +14,15 @@ A graph is a collection of vertices that are connected through directed or undir
(A) (B) (C) (D)
```

-An entire branch of mathematics named graph theory is dedicated to the study of graphs. Here are some essential graph concepts that are important to remember:
+Graph theory, a branch of mathematics, is dedicated to the study of graphs. Here are some essential graph concepts to remember:

-* **Directed Acyclic Graph - DAG**: A directed graph with no cycles
+* **Directed Acyclic Graph - DAG**: A directed graph with no cycles
* **Connected Graph**: There is a path between any two vertices
* **Minimum Spanning Tree**: Subset of edges in an undirected, edge-weighted, and connected graph that connects all the vertices at the lowest cost

## Implementation

-Graphs are commonly represented using either an adjacency matrix or an adjacency list.
+Graphs are commonly represented using an adjacency matrix or an adjacency list.

* **Adjacency Matrix**: Faster look-up times and more suitable for dense graphs
* **Adjacency List**: More space efficient, suitable for graphs with fewer edges
@@ -83,7 +83,7 @@ func main() {

### Searching Graphs

-When working with graphs, it is often necessary to perform a search to solve different problems. The efficiency of the algorithm used depends on the order in which the graph is searched. Two commonly used search methods are:
+When working with graphs, it is often necessary to search them in order to solve problems. The efficiency of the algorithm used depends on the order in which the graph is searched. Two commonly used search methods are:

* **Breadth First Search - BFS** used to find the shortest path
* **Depth First Search - DFS** often a subroutine, in another algorithm. Used in maze traversal, cycle finding, and pathfinding
@@ -136,7 +136,7 @@ For any vertex S that is reachable from V, the simple path in the BFS tree from

Depth First Search (DFS) is a graph traversal algorithm that explores a graph by visiting as far as possible along each branch before backtracking. It uses a [stack](../stack) data structure when implemented iteratively, is [recursive](../recursion), and is a generalization of pre-order traversal in trees.

-When given a graph G and a vertex S, DFS systematically discovers all nodes in G reachable from S. It is typically implemented using a driver that discovers the edges of the most recently discovered vertex V that has unexplored edges. Once all of V's edges have been explored, the search [backtracks](../backtracking/) to explore all edges leaving the vertex from which V was discovered. This process continues until the discovery of all edges.
+When given a graph G and a vertex S, DFS systematically discovers all nodes in G reachable from S. It is typically implemented using a driver that discovers the edges of the most recently discovered vertex V that has unexplored edges. Once all of V's edges have been explored, the search [backtracks](../backtracking/) to explore all edges leaving the vertex from which V was discovered. This process continues until all edges are discovered.

```Go
package graph_test
@@ -187,7 +187,7 @@ DFS is capable of categorizing edges (u,v) into four types:
* Forward edge: v is a descendant of u, 1 to 8
* Cross edge: all other edges, 5 to 4

-If a DFS algorithm identifies a back edge, it indicates that the graph is cyclic.
+Discovery of a back edge in a DFS algorithm indicates a cyclic graph.

### Dijkstra's Algorithm
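
Editor's note: an illustrative sketch of the two representations for a small directed graph (not part of this commit):

```Go
package main

import "fmt"

func main() {
	// A directed graph with three vertices and edges 0->1, 0->2, 2->1.

	// Adjacency matrix: matrix[i][j] == 1 means an edge from i to j.
	matrix := [3][3]int{
		{0, 1, 1},
		{0, 0, 0},
		{0, 1, 0},
	}

	// Adjacency list: list[i] holds the neighbors of vertex i,
	// storing only the edges that actually exist.
	list := map[int][]int{
		0: {1, 2},
		2: {1},
	}

	fmt.Println(matrix[0][2] == 1) // true: edge 0->2 exists
	fmt.Println(list[0])           // [1 2]
}
```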

‎greedy/README.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@

The concept of greedy algorithms involves choosing a naive solution and continuously refining it rather than developing a sophisticated algorithm by beginning with what appears to be an easy win. In this approach, a local optimum is chosen, and the aim is to build upon it to ultimately arrive at the global optimum.

-For example, consider someone kayaking to an island and discovering a treasure while a tsunami is imminent. To decide which pieces of treasure to take back, the greedy approach would involve selecting the most seemingly valuable items first, such as heavy pieces of gold or diamonds rather than small pieces of silver. However, this approach may result in a suboptimal solution if a valuable item, such as a small silver ring with a significant archeological value that makes it the most valuable piece, is overlooked. This is similar to the knapsack problem solved in the rehearsals.
+For example, consider someone kayaking to an island and discovering a treasure while a tsunami is imminent. To decide which pieces of treasure to take back, the greedy approach would involve selecting the most seemingly valuable items first, such as heavy pieces of gold or diamonds rather than small pieces of silver. However, this approach may result in a suboptimal solution if a valuable item, such as a small silver ring with a significant archeological value that makes it the most precious piece, is overlooked. This is similar to the knapsack problem solved in the rehearsals.

Therefore, greed and greedy algorithms may not always produce optimal solutions if the local optimum does not equal the global optimum. However, they can be useful for approximating solutions in cases where exact answers are not required.

@@ -12,7 +12,7 @@ The process for developing a greedy algorithm can be structured into six steps.

The process for developing a greedy algorithm can be broken down into six steps:

-1. Identify the optimal structure of the problem.
+1. Identify the optimal problem structure.
2. Develop a recursive solution.
3. Show that with a greedy choice, only one subproblem remains.
4. Prove that it is safe to make the greedy choice.

‎greedy/knapsack.go

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ type KnapsackItem struct {
	Value int
}

-// Knapsack solves the problem in O(n*Log n) time and O(1) space.
+// Knapsack solves the problem in O(n*Log n) time and O(1) space.
func Knapsack(items []KnapsackItem, capacity int) int {
	sort.Slice(items, func(i, j int) bool {
		return items[i].Value/items[i].Weight > items[j].Value/items[j].Weight

‎greedy/task_scheduling.go

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ type Event struct {
	EndTime int
}

-// Solves the problem in O(n*Log n) time and O(1) space.
+// Solves the problem in O(n*Log n) time and O(1) space.
func ScheduleEvents(events []Event) []Event {
	sort.Slice(events, func(i, j int) bool {
		return events[i].EndTime < events[j].EndTime
return events[i].EndTime < events[j].EndTime

‎hashtable/README.md

Lines changed: 43 additions & 4 deletions
@@ -1,10 +1,10 @@
# Hash Table

-Hash tables are a fundamental data structure that operates based on key-value pairs and enables constant-time operations for lookup, insertion, and deletion. The keys used in hash tables are immutable and can be a simple string or integer in basic usage. However, in more complex applications, a hashing function, along with different collision resolution methods such as separate chaining, linear probing, quadratic probing, and double hashing, can be used to ensure efficient performance.
+Hash tables are a fundamental data structure that operates based on key-value pairs and enables constant-time operations for lookup, insertion, and deletion. Hash tables use immutable keys that can be simple strings or integers. However, in more complex applications, a hashing function, along with different collision resolution methods such as separate chaining, linear probing, quadratic probing, and double hashing, can be used to ensure efficient performance.

## Implementation

-In Go, hash tables are implemented as maps, which is a built-in data type of the language. To declare a map, the data type for the key and the value must be specified, and the map needs to be initialized using the make function before it can be used. Below is an example of how to declare a map with string keys and integer values:
+In Go, hash tables are implemented as maps, which is a built-in data type of the language. To declare a map, the data type for the key and the value must be specified. The map needs to be initialized using the make function before it can be used. Below is an example of how to declare a map with string keys and integer values:

```Go
package main
@@ -26,15 +26,54 @@ func main() {
}
```

-When using maps in Go, it is crucial to remember that the order of the items stored in the map is not preserved, unlike arrays and slices. Relying on the order of the contents of a map can lead to unexpected issues, such as inconsistent code behavior and intermittent failures.
+When using maps in Go, it is crucial to remember that the order of the items stored in the map is not preserved. This is unlike arrays and slices. Relying on the order of map contents can lead to unexpected issues, such as inconsistent code behavior and intermittent failures.
+
+As shown below it is possible in Go to store variables as keys in a map. It's also possible to have a map of only keys with no values.
+
+```Go
+package main
+
+import (
+	"fmt"
+)
+
+type person struct {
+	name string
+	age  int
+}
+
+var (
+	person1           = &person{name: "p1", age: 20}
+	person2           = &person{name: "p2", age: 30}
+	person3           = &person{name: "p3", age: 40}
+	allowListedPeople = map[*person]struct{}{
+		person1: struct{}{},
+		person2: struct{}{},
+	}
+)
+
+func main() {
+	isAllowed(person1) // p1 is allowed
+	isAllowed(person2) // p2 is allowed
+	isAllowed(person3) // p3 is not allowed
+}
+
+func isAllowed(person *person) {
+	verb := "is"
+	if _, ok := allowListedPeople[person]; !ok {
+		verb += " not"
+	}
+	fmt.Printf("%s %s allowed\n", person.name, verb)
+}
+```

## Complexity

Hash tables provide O(1) time complexity for inserting, deletion, and searching operations.

## Application

-When there is no need to keep the data hash tables are widely used in algorithms to cache and memoize data for fast constant access times. This advantage in performance makes hash tables more suitable than [Arrays](../arrays) and even [Binary Search Trees](../tree).
+When there is no need to preserve the order of data, hash tables are used for fast O(1) reads and writes. This performance advantage makes hash tables more suitable than [Arrays](../arrays) and even [Binary Search Trees](../tree).

Compilers use hash tables to generate a symbol table to keep track of variable declarations.
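
Editor's note: the map-declaration example the README refers to is elided by the diff view; a minimal sketch of the pattern (not part of this commit):

```Go
package main

import "fmt"

func main() {
	// Declare and initialize a map with string keys and integer values.
	ages := make(map[string]int)
	ages["alice"] = 30 // O(1) insertion

	// The second return value reports whether the key exists.
	if age, ok := ages["alice"]; ok {
		fmt.Println(age) // 30
	}

	delete(ages, "alice") // O(1) deletion
}
```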

‎hashtable/find_anagrams.go

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ import "sort"

type sortRunes []rune

-// FindAnagrams solves the problem in O(n*Log n) time and O(n) space.
+// FindAnagrams solves the problem in O(n*Log n) time and O(n) space.
func FindAnagrams(words []string) [][]string {
	anagrams := make(map[string][]string)
	for _, word := range words {

‎heap/README.md

Lines changed: 8 additions & 12 deletions
@@ -1,15 +1,15 @@
# Heap

-Heaps are tree data structures that function by retaining the minimum or maximum of the elements inserted into them. There are two types of heaps: minimum heaps and maximum heaps.
+Heaps are tree data structures that retain the minimum or maximum of the elements inserted into them. There are two types of heap: minimum and maximum heaps.

A heap must satisfy two conditions:

1. The structure property requires that the heap be a complete binary [tree](../tree), where each level is filled left to right, and all levels except the bottom are full.
2. The heap property requires that the children of a node be larger than or equal to the parent node in a min heap and smaller than or equal to the parent in a max heap, meaning that the root is the minimum in a min heap and the maximum in a max heap.

-As a result, if you push elements to the min or max heap and then pop them one by one, you will obtain a list that is sorted in ascending or descending order, respectively. This sorting technique is also an O(n*Log n) algorithm known as [heap sort](./heap_sort_test.go). Although there are other sorting algorithms available, none of them are faster than O(n*Logn).
+As a result, if you push elements to the min or max heap and then pop them one by one, you will obtain a sorted list in ascending or descending order, respectively. This sorting technique, known as heap sort, is included in the rehearsals of this section. Although there are other sorting algorithms available, none are faster than O(n*Log n).

-When pushing a new element to a heap, because of the structure property, we always add the new element to the first available position on the lowest level of the heap, filling from left to right. Then to maintain the heap property, if the newly inserted element is smaller than its parent in a min heap (larger in a max heap), then we swap it with its parent. We continue swapping the swapped element with its parent until the heap property is achieved.
+When pushing an element to a heap, because of the structure property, we always add the new element to the first available position on the lowest level of the heap, filling from left to right. Then to maintain the heap property, if the newly inserted element is smaller than its parent in a min heap (larger in a max heap), then we swap it with its parent. We continue swapping the swapped element with its parent until the heap property is achieved.

```ASCII
[Figure 1] Minimum heap push operation
@@ -75,29 +75,25 @@ func (m *maxHeap) Pop() interface{} {
}
```

-To utilize a heap to store a particular type, certain methods such as len and less must be implemented for that type to conform to the heap interface. By default, the heap is a min heap, where each node is less than its children. However, the package provides the flexibility to define what "being less than" means. For instance, changing `m[i] > m[j]` to `m[i] < m[j]` would transform the heap into a minimum heap.
+To utilize a heap to store a particular type, certain methods such as len and less must be implemented for that type to conform to the heap interface. By default, the heap is a min heap, where each node is smaller than its children. However, the package provides the flexibility to define what "being less than" means. For instance, changing `m[i] > m[j]` to `m[i] < m[j]` would transform the heap into a minimum heap.

-In Go, the heap implementation is based on slices. The heap property is maintained such that the left child of the node at index `i` (where i is greater than or equal to 1) is always located at `2i`, and the right child is at `2i+1`. If the slice already contains elements before any pushing operation, the heap must be initialized using `heap.Init(h Interface)` to establish the order.
+In Go, heaps are implemented with slices. The heap property is maintained such that the left child of the node at index `i` (where i is greater than or equal to 1) is always located at `2i`, and the right child is at `2i+1`. If the slice already contains elements before pushing, the heap must be initialized using `heap.Init(h Interface)` to establish the order.

## Complexity

-The time complexity of pushing and popping heap elements is O(LogN). On the other hand, initializing a heap, which involves pushing N elements, has a time complexity of O(n*Log n).
-
-The insertion strategy entails percolating the new element up the heap until the correct location is identified. Similarly, the deletion strategy involves percolating down the heap.
-
-Pushing and Popping heap elements are all O(Log n) operations. The strategy for inserting the new element is percolating up the heap until the correct location is found. Similarly, the strategy for deletion is to percolate down.
+Pushing and popping are O(Log n) operations in heaps. On the other hand, initializing a heap, which involves pushing N elements, has a time complexity of O(n*Log n).

## Application

Heaps in the form of priority queues are used for scheduling in operating systems and job schedulers. They are also used in simulating and implementing scheduling and priority systems for capacity management in hospitals and businesses.

-Priority queues implemented as heaps are utilized in job scheduling, for example scheduling the execution of different processes in operating systems. They are also employed in simulating and implementing priority and scheduling systems to manage capacity in hospitals and businesses.
+Priority queues implemented as heaps are commonly used in job scheduling, for example scheduling the execution of different processes in operating systems. They are also employed in simulating and implementing priority and scheduling systems to manage capacity in hospitals and businesses.

## Rehearsal

+* [Heap Sort](./heap_sort_test.go), [Solution](./heap_sort.go)
* [Kth Largest Element](./kth_largest_element_test.go), [Solution](./kth_largest_element.go)
* [Merge Sorted Lists](./merge_sorted_list_test.go), [Solution](./merge_sorted_list.go)
* [Median in a Stream](./median_in_a_stream_test.go), [Solution](./median_in_a_stream.go)
* [Kth Closest Points to the Center](./k_closest_points_to_origin_test.go), [Solution](./k_closest_points_to_origin.go)
* [Sliding Maximum](./sliding_maximum_test.go), [Solution](./sliding_maximum.go)
-* [Heap Sort](./heap_sort_test.go), [Solution](./heap_sort.go)
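
Editor's note: a minimal `container/heap` sketch showing the interface methods and `heap.Init` mentioned above (illustrative, not part of this commit):

```Go
package main

import (
	"container/heap"
	"fmt"
)

// intHeap implements heap.Interface as a min heap.
type intHeap []int

func (h intHeap) Len() int            { return len(h) }
func (h intHeap) Less(i, j int) bool  { return h[i] < h[j] } // flip to > for a max heap
func (h intHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

func main() {
	// The slice already contains elements, so establish the order first.
	h := &intHeap{5, 2, 8}
	heap.Init(h)

	heap.Push(h, 1) // O(Log n)
	for h.Len() > 0 {
		fmt.Print(heap.Pop(h), " ") // 1 2 5 8: popping yields ascending order
	}
}
```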

‎heap/heap_sort.go

Lines changed: 6 additions & 6 deletions
@@ -44,7 +44,7 @@ func (m *MinHeap) Push(value int) {
		Val: value,
	}
	m.Data = append(m.Data, vertex)
-	m.heapifyUp(len(m.Data) - 1)
+	m.percolateUp(len(m.Data) - 1)
}

// Pop removes the root value from the heap.
@@ -57,7 +57,7 @@ func (m *MinHeap) Pop() int {
	lastIndex := len(m.Data) - 1
	m.Data[0] = m.Data[lastIndex]
	m.Data = m.Data[:lastIndex]
-	m.heapifyDown(0)
+	m.percolateDown(0)
	return rootValue
}

@@ -66,8 +66,8 @@ func (m *MinHeap) Len() int {
	return len(m.Data)
}

-// heapifyUp moves the vertex up the heap to maintain the heap property.
-func (m *MinHeap) heapifyUp(index int) {
+// percolateUp swaps the vertex up the heap to maintain the heap property.
+func (m *MinHeap) percolateUp(index int) {
	for index > 0 {
		parentIndex := (index - 1) / 2
		if m.Data[parentIndex].Val <= m.Data[index].Val {
@@ -78,8 +78,8 @@ func (m *MinHeap) percolateUp(index int) {
	}
}

-// heapifyDown moves the vertex down the heap to maintain the heap property.
-func (m *MinHeap) heapifyDown(index int) {
+// percolateDown moves the vertex down the heap to maintain the heap property.
+func (m *MinHeap) percolateDown(index int) {
	for {
		leftChildIndex := 2*index + 1
		rightChildIndex := 2*index + 2

‎linkedlist/README.md

Lines changed: 5 additions & 9 deletions
@@ -2,19 +2,17 @@

Linked lists are a collection of nodes, each of which is capable of storing at least one data element and is linked to the next node via a reference. One of the key advantages of linked lists over [arrays](../array) is their dynamic size, which allows for items to be added or removed without necessitating the resizing or shifting of other elements.

-Two types of linked lists exist: singly linked lists, in which each node is linked only to the next node, and doubly linked lists, which enable each node to be connected to both the next and previous nodes.
+Two types of linked lists exist: singly linked lists, in which each node is linked only to the next node, and doubly linked lists, in which each node is connected to both the next and previous nodes.

## Implementation

-When implementing linked lists, you typically have two concepts, the linked list itself, which has a link to the head and bottom nodes, and node(s), which contains at least one piece of data that is being stored in the linked list and also a link to the next node for singly linked lists, and an additional link to the previous node in doubly linked lists. To have a linked list in Go, you can define a node `struct` like the one below and use a pointer to the same structure. The example below creates a singly linked list that looks like `1->2->3->4->5`
-
When implementing linked lists, two key concepts come into play: the linked list itself, which includes references to the head and tail nodes, and the individual nodes which hold:

* At least one piece of data, such as name or number
* A reference to the next node
* A reference to the previous node in doubly linked lists

-To create a linked list in Go, a node structure can be defined, and a pointer to the same structure can be used. For instance, a singly linked list that follows the format 1->2->3->4->5 can be created using the following example:
+To create a linked list in Go, a node `struct` can be defined, and a pointer to the same structure can be used to link the node to the next one. For instance, a singly linked list with the data `1->2->3->4->5` can be created using the following example:

```Go
package main
@@ -48,7 +46,7 @@ func addToFront(node *node) {

In a singly linked list, a pointer is typically stored for the first and last items. Adding items to the front or back of the list is a constant-time operation. However, deleting the last item can be challenging, as the last item's pointer needs to be updated to the second-to-last item. This is where having a reference to the last item in each node proves useful. In contrast, doubly linked lists maintain pointers to both the previous and next nodes, which makes deletion operations less expensive.

-The Go standard library contains an implementation of [doubly linked lists](https://golang.org/pkg/container/list/). In the following example, numbers from 1 to 10 are added to the list, and then even numbers are removed, and the resulting linked list containing odd numbers is printed.
+The Go standard library contains an implementation of [doubly linked lists](https://golang.org/pkg/container/list/). In the following example, numbers from 1 to 10 are added to the list. Even numbers are removed, and the resulting linked list containing odd numbers is printed.

```Go
package main
@@ -83,13 +81,11 @@ func main() {

Adding new items to the front or back of the linked list has the time complexity of O(1).

-Deleting the first item is also O(1). Deleting the last item in a singly linked list is O(n), because we have to find the node before the last node, and for that, we have to visit every node. In a doubly linked list where a node is connected to both previous and next nodes, though, deleting the last item can be done in O(1) time.
+Deleting the first item is also O(1). Deleting the last item in a singly linked list is O(n), because we have to find the node before the last node, and for that, we have to visit every node. In a doubly linked list deleting the last item can be done in O(1) time, since nodes are connected to both previous and next nodes.

## Application

-Linked lists can be useful where the order of similar items matters, especially if there is a need for flexible reordering of the items or having current, next, and previous items.
-
-A music album is a good example of linked lists. The tracks come in an order, and at any given time, you are listening to one track while you have the option of going to the next song in a singly linked list and both the previous and next song in a doubly linked list.
+Linked lists can be useful where the order of items matters, especially if there is a need for flexible reordering of the items or having current, next, and previous items. Music albums, for example, have tracks in an order, and at any given time, you are listening to one track while you have the option of going to the next song in a singly linked list and both the previous and next song in a doubly linked list.

## Rehearsal
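
Editor's note: the README's `container/list` example is elided by the diff view; a minimal sketch of the same package (not part of this commit):

```Go
package main

import (
	"container/list"
	"fmt"
)

func main() {
	l := list.New()
	for i := 1; i <= 5; i++ {
		l.PushBack(i) // O(1) insertion at the back
	}
	l.PushFront(0) // O(1) insertion at the front

	// Remove the last element in O(1), thanks to the previous-node pointers.
	l.Remove(l.Back())

	for e := l.Front(); e != nil; e = e.Next() {
		fmt.Print(e.Value, " ") // 0 1 2 3 4
	}
}
```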

‎linkedlist/keep_repetitions_test.go

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ TestKeepRepetitions tests solution(s) with the following signature and problem d

func KeepRepetitions(head *Node) *Node

-Given a linked-list of sorted integers, create a copy of the list that contains one example of
+Given a linked list of sorted integers, create a copy of the list that contains one example of
each repeated item.
*/
func TestKeepRepetitions(t *testing.T) {

‎queue/README.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ func dequeue() (int, error) {

## Complexity

-Enqueue and dequeue operations are both done in O(1) times.
+Enqueue and dequeue operations both perform in O(1) time.

## Application
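
Editor's note: the README's enqueue/dequeue code is elided by the diff view; a minimal slice-backed sketch of the two O(1) operations (not part of this commit):

```Go
package main

import "fmt"

func main() {
	var queue []int

	queue = append(queue, 1) // enqueue: O(1) amortized
	queue = append(queue, 2)

	front := queue[0] // dequeue: read the front...
	queue = queue[1:] // ...then drop it by reslicing, also O(1)

	fmt.Println(front, queue) // 1 [2]
}
```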

‎recursion/README.md

Lines changed: 8 additions & 6 deletions

@@ -5,7 +5,7 @@ Recursion is a computational technique that implements a [divide-and-conquer](../dnd)

* One or more base cases that provide output for simple inputs
* A recursive case that combines the outputs obtained from recursive function calls to generate a solution for the original problem.

- Although recursions enhance code readability, they are usually not efficient and can be challenging to debug. Consequently, unless they provide a more efficient solution to a problem, such as in the case of Quicksort, they are generally not preferred.
+ Although recursions enhance code readability, they are usually inefficient and challenging to debug. Consequently, unless they provide a more efficient solution to a problem, such as in the case of Quicksort, they are generally not preferred.

During execution, a program typically stores function variables in a memory area known as the stack before executing recursion. The recursive function may assign different values to the same variables during each recursion. When the recursion ends, the values are popped off the stack and restored. However, if recursion continues indefinitely, the stack grows with each call, causing the familiar stack overflow error. Since recursion employs the stack to execute, every recursive problem can be converted into an iterative one. This transformation, however, typically leads to more complex code and may require the use of a [stack](../stack).

@@ -37,20 +37,22 @@ func fibonacci(n int) int {

When formulating recursive algorithms, it is essential to consider the following four rules of recursion:

1. It is imperative to establish a base case, or else the program will terminate abruptly.
- 2. The algorithm should make progress toward the base case at each recursive call.
- 3. Recursive calls are presumed to be effective; thus, it is unnecessary to traverse every recursive call and perform bookkeeping.
- 4. Use memoization, a technique that prevents redundant computation by caching previously computed results, can be used to enhance the algorithm's efficiency.
+ 2. The algorithm should progress toward the base case at each recursive call.
+ 3. Recursive calls are presumed effective; thus, it is unnecessary to traverse every recursive call and perform bookkeeping.
+ 4. Memoization, a technique that prevents redundant computation by caching previously computed results, can enhance the algorithm's efficiency, as sketched below.
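
As an illustration of rule 4, here is a hypothetical memoized variant of the fibonacci function referenced in the hunk above; the repository's actual implementation may differ:

```Go
package main

import "fmt"

// fibonacci returns the nth Fibonacci number. The memo map caches previously
// computed results, reducing the time complexity from O(2^n) to O(n).
func fibonacci(n int, memo map[int]int) int {
	if n <= 1 {
		return n // base cases: fibonacci(0) = 0, fibonacci(1) = 1
	}
	if v, ok := memo[n]; ok {
		return v
	}
	memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)
	return memo[n]
}

func main() {
	fmt.Println(fibonacci(50, map[int]int{})) // 12586269025
}
```
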

## Complexity

- Recursions are often inefficient, both in terms of time and memory usage. The number of recursive calls required to solve a problem can grow exponentially, reaching a complexity of n factorial in some cases. For instance, the recursive Fibonacci algorithm has a time complexity of O(2^n).
+ Recursions are often inefficient in both time and space complexity. The number of recursive calls required to solve a problem can grow exponentially, as in the Fibonacci implementation in the last section, which is O(2^n). They can even be factorial, O(n!), in worse cases such as [permutations](../backtracking/permutations_test.go).

There are a few different ways of determining the time complexity of recursive algorithms:

1. Recurrence Relations: This approach involves defining a recurrence relation that expresses the time complexity of the algorithm in terms of the time complexity of its sub-problems. For example, for the recursive Fibonacci algorithm, the recurrence relation is T(n) = T(n-1) + T(n-2) + O(1), where T(n) represents the time complexity of the algorithm for an input of size n.
- 2. Recursion Trees: This method involves drawing a tree to represent the recursive calls made by the algorithm. The time complexity of the algorithm can be calculated by summing the work done at each level of the tree. For example, for the recursive factorial algorithm, each level of the tree represents a call to the function with a smaller input size, and the work done at each level is constant.
+ 2. Recursion Trees: This method involves drawing a tree to represent the algorithm's recursive calls. The time complexity of the algorithm can be calculated by summing the work done at each level of the tree. For example, for the recursive factorial algorithm, each level of the tree represents a call to the function with a smaller input size, and the work done at each level is constant.
3. Master Theorem: This approach is a formula for solving recurrence relations that have the form T(n) = aT(n/b) + f(n). The Master Theorem can be used to quickly determine the time complexity of some [Divide-and-conquer](../dnd) algorithms.
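
For instance, merge sort's recurrence T(n) = 2T(n/2) + O(n) has a = 2, b = 2, and f(n) = O(n), which the Master Theorem resolves to T(n) = O(n log n).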

+ The space complexity of recursive calls is affected by having to store a copy of the state and variables on the stack with each recursion.

## Application

Recursion finds practical application within a range of algorithms, including [Dynamic Programming](../dp), [Graph](../graph), [Divide-and-conquer](../dnd), and [Backtracking](../backtracking). Typically, recursion is best suited to problems that exhibit sub-problems or require calculating the nth value or the first n values in a series.

stack/README.md

Lines changed: 3 additions & 3 deletions

@@ -1,6 +1,6 @@

# Stack

- Stacks are data structures that operate on the principle of Last-In-First-Out (LIFO), where the last element added to the stack is the first to be removed. Stacks allow push, pop, and top operations to add and remove data.
+ Stacks are data structures that operate on the principle of Last-In-First-Out (LIFO), where the last element added to the stack is the first to be removed. Stacks allow push, pop, and top operations to add, remove, and read data.

One way to visualize stacks is to think of a backpack where items are placed and later removed in reverse order, with the last item added to the bag being the first item removed.

@@ -73,13 +73,13 @@ func pop() (int, error) {

## Complexity

- The push and pop operations in stacks are considered O(1) operations, making them highly efficient. Additionally, many machines have built-in stack instruction sets, further increasing their speed and performance. This unique efficiency and usefulness of stacks have solidified their place as one of the most fundamental data structures, second only to [arrays](../array).
+ Push and pop operations in stacks are considered O(1) operations, making them highly efficient. Additionally, many machines have built-in stack instruction sets, further increasing their performance. Stacks' unique efficiency and usefulness have solidified their place as one of the most fundamental data structures, second only to [arrays](../array).
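
To make the O(1) behavior concrete, here is a minimal slice-backed sketch; it matches the pop signature shown in the hunk above, but the repository's actual implementation may differ:

```Go
package main

import (
	"errors"
	"fmt"
)

var stack []int

// push adds an item to the top of the stack: amortized O(1).
func push(v int) {
	stack = append(stack, v)
}

// pop removes and returns the top item: O(1).
func pop() (int, error) {
	if len(stack) == 0 {
		return 0, errors.New("stack is empty")
	}
	v := stack[len(stack)-1]
	stack = stack[:len(stack)-1]
	return v, nil
}

func main() {
	push(1)
	push(2)
	if v, err := pop(); err == nil {
		fmt.Println(v) // prints 2, the last item pushed
	}
}
```
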

## Application

Stacks are helpful when LIFO operations are desired. Many [graph](../graph) problems are solved with stacks.

- Within the context of process execution, a portion of memory known as the "stack" is reserved to hold stack frames. Whenever a function is called, relevant data such as parameters, local variables, and return values are stored within a frame to be accessed after the function has been completed. As such, when an excessive number of function calls or an infinite recursive function are made, the computer's ability to store all of this information is exceeded, resulting in the well-known stack overflow error.
+ During process execution, a portion of memory known as the "stack" is reserved to hold stack frames. Whenever a function is called, relevant data such as parameters, local variables, and return values are stored within a frame to be accessed after the function has been completed. As such, when an excessive number of function calls or an infinite recursive function are made, the computer's ability to store all of this information is exceeded. This results in the well-known stack overflow error.

## Rehearsal

strings/README.md

Lines changed: 5 additions & 5 deletions

@@ -1,10 +1,10 @@

# String

- Strings are a ubiquitous data structure that typically manifests as built-in data types within programming languages. However, beneath the surface, they are essentially arrays of characters that enable the storage and manipulation of textual data.
+ Strings are a ubiquitous data structure, typically available as a built-in data type in programming languages. However, beneath the surface, they are essentially arrays of characters that enable textual data storage and manipulation.

## Implementation

- In Go, strings are a data type. Behind the scene strings are a slice of bytes. The `strings` package provides several useful convenience functions. Examples include:
+ In Go, strings are a data type. Behind the scenes, strings are a slice of bytes. The `strings` package provides several useful convenience functions. Examples include:

* [Index](https://golang.org/pkg/strings/#Index), [Contains](https://golang.org/pkg/strings/#Contains), [HasPrefix](https://golang.org/pkg/strings/#HasPrefix), [HasSuffix](https://golang.org/pkg/strings/#HasSuffix)
* [Split](https://golang.org/pkg/strings/#Split), [Fields](https://golang.org/pkg/strings/#Fields), [Join](https://golang.org/pkg/strings/#Join)
@@ -13,7 +13,7 @@ In Go, strings are a data type. Behind the scene strings are a slice of bytes. T

* [Title](https://golang.org/pkg/strings/#Title), [ToLower](https://golang.org/pkg/strings/#ToLower), [ToUpper](https://golang.org/pkg/strings/#ToUpper)
* [Trim](https://golang.org/pkg/strings/#Trim), [TrimSpace](https://golang.org/pkg/strings/#TrimSpace), [TrimSuffix](https://golang.org/pkg/strings/#TrimSuffix), [TrimPrefix](https://golang.org/pkg/strings/#TrimPrefix)

- When you iterate through a string in Go using the `range` keyword, every element becomes a [rune](https://blog.golang.org/strings#TOC_5.) which is the same as `int32`. If you are writing code that deals with characters, it's often easier to define your function parameters and variables as rune. The following code shows how to iterate through a string.
+ When you iterate through a string in Go using the `range` keyword, every element becomes a [rune](https://blog.golang.org/strings#TOC_5.) which is the same as `int32`. If you are writing code that deals with characters, it's often easier to define function parameters and variables as rune. The following minimal sketch shows how to iterate through a string.

```Go
package main

import "fmt"

func main() {
	// A minimal sketch: range yields the byte index and the rune at that index.
	for i, r := range "héllo" {
		fmt.Printf("%d: %c\n", i, r)
	}
}
```

Strings have the same complexity as [arrays](../array/) and slices in Go.

- The Go standard library includes a [regular expressions](https://golang.org/pkg/regexp/) package that, unlike most other languages, comes with a guarantee to run in O(n) time.
+ Unlike in many other programming languages, [regular expressions](https://golang.org/pkg/regexp/) in Go are guaranteed to run in O(n) time, allowing efficient string pattern matching.
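
For example, a match runs in time linear in the input length; this is a minimal sketch using the standard regexp package:

```Go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// MustCompile panics on an invalid pattern, which is fine for a fixed literal.
	re := regexp.MustCompile(`^[a-z]+[0-9]+$`)
	fmt.Println(re.MatchString("adam23")) // true
	fmt.Println(re.MatchString("23adam")) // false
}
```
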

## Application

- Strings are used to store words, characters, sentences, etc.
+ Strings store words, characters, sentences, etc.

## Rehearsal

tree/README.md

Lines changed: 8 additions & 8 deletions

@@ -15,7 +15,7 @@ Traversing a tree means visiting every node in a tree and performing some work o

```ASCII
1 3 5 7
```

- The traversal methods of trees are named when a particular operation is performed on the current node relative to its children. For example, in Pre-Order traversal, the operation is performed on the node before traversing its children.
+ Tree traversal methods are named after the order in which the nodes are explored. For example, in Pre-Order traversal, the node is explored prior to its children, and in Post-Order traversal, the node is explored after its children.

There are many types of trees. Some important tree types include:

@@ -24,7 +24,7 @@ There are many types of trees. Some important tree types include:

* **Full Binary Tree**: Every non-leaf node has exactly two children

```ASCII
[Figure 2] Important Tree Concepts

1 4 1 * 1 - Is it a binary tree?
/|\ / \ / / \ 2 - Is it a complete binary tree?
```

@@ -40,11 +40,11 @@ There are many types of trees. Some important tree types include:

A Binary Search Tree (BST) is a type of sorted tree where, for every node n, the values of all nodes in its left subtree are less than n, and the values of all nodes in its right subtree are greater than n.

- Performing an In-Order traversal of a binary search tree and outputting each visited node results in a sorted (In-Order) list of nodes. This is known as the tree sort algorithm, which has a time complexity of O(NLogN). While there are other sorting algorithms available, none are more efficient than O(n*Log n).
+ Performing an In-Order traversal of a binary search tree and outputting each visited node results in a sorted (In-Order) list of nodes. This is known as the tree sort algorithm, which has a time complexity of O(n log n). While there are other sorting algorithms available, no comparison-based sort is more efficient than O(n log n).
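
A minimal sketch of an In-Order traversal, assuming a hypothetical node type rather than the repository's own:

```Go
package main

import "fmt"

// node is a hypothetical binary search tree node.
type node struct {
	val         int
	left, right *node
}

// inOrder visits the left subtree, the node itself, and then the right
// subtree, which prints a BST's values in sorted order.
func inOrder(n *node) {
	if n == nil {
		return
	}
	inOrder(n.left)
	fmt.Println(n.val)
	inOrder(n.right)
}

func main() {
	root := &node{val: 2, left: &node{val: 1}, right: &node{val: 3}}
	inOrder(root) // prints 1, 2, 3
}
```
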

### BST Complexity

- The time complexity of operations such as Search, Deletion, Insertion, and finding the minimum and maximum values in a binary search tree is O(h), where h represents the height of the tree.
+ The time complexity of operations such as Search, Deletion, Insertion, and finding the minimum and maximum values in a binary search tree is O(h), where h represents the tree height.

## AVL - Height Balanced BST

@@ -54,19 +54,19 @@ To maintain balance after an insertion, a single rotation is needed if the inser

### AVL Complexity

- Same as a Binary Search Tree except that the height of the tree is known. So Search, Deletion, Insertion, and finding Min and Max in an AVL tree are all O(Log n) operations.
+ Same as a Binary Search Tree, except that the tree height is guaranteed to be logarithmic. So Search, Deletion, Insertion, and finding Min and Max in an AVL tree are all O(log n) operations.

## Trie

- A Trie, also known as a prefix tree, is a tree data structure that is commonly used for representing strings. Each descendant of a node in the Trie shares a common prefix, and a path from the root to a leaf node spells out a word held in the Trie.
+ A Trie, also known as a prefix tree, is a tree data structure commonly used for representing strings. Each descendant of a node in the Trie shares a common prefix, and a path from the root to a leaf node spells out a word held in the Trie.

### Trie Complexity

- Insertion and Search are done in O(K), where K is the length of the word.
+ Insertion and Search are done in O(K), where K is the word length.
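
To illustrate why both operations are O(K), here is a minimal sketch with a hypothetical node type; the repository's implementation may differ:

```Go
package main

import "fmt"

// trieNode is a hypothetical Trie node keyed by rune.
type trieNode struct {
	children map[rune]*trieNode
	isWord   bool
}

func newTrieNode() *trieNode {
	return &trieNode{children: map[rune]*trieNode{}}
}

// insert walks one level per character of the word: O(K).
func (t *trieNode) insert(word string) {
	cur := t
	for _, r := range word {
		if cur.children[r] == nil {
			cur.children[r] = newTrieNode()
		}
		cur = cur.children[r]
	}
	cur.isWord = true
}

// search also walks one level per character: O(K).
func (t *trieNode) search(word string) bool {
	cur := t
	for _, r := range word {
		if cur.children[r] == nil {
			return false
		}
		cur = cur.children[r]
	}
	return cur.isWord
}

func main() {
	root := newTrieNode()
	root.insert("go")
	fmt.Println(root.search("go"), root.search("golang")) // true false
}
```
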

## Application

- Trees, such as Binary Search Trees (BSTs), can offer a time complexity of O(Log n) for searches, as opposed to the linear access time of linked lists. Trees are widely employed in search systems, and operating systems can represent file information using tree structures.
+ Trees, such as Binary Search Trees (BSTs), offer O(log n) time complexity for searches, which is superior to the linear access time of [linked lists](../linkedlist/) and [arrays](../array/). Trees are widely employed in search systems, and operating systems can represent file information using tree structures.

## Rehearsal