## Implementation details
The account where we store the backup is called `rust-backup`. It contains two GCP projects: `backup-prod` and `backup-staging`.

Here we have one Google [Object Storage](https://cloud.google.com/storage?hl=en) in the `europe-west1` (Belgium) region for the following AWS S3 buckets:

- `crates-io`. CloudFront URL: `cloudfront-static.crates.io`. It contains the crates published by the Rust community.
- `static-rust-lang-org`. CloudFront URL: `cloudfront-static.rust-lang.org`. Among other things, it contains the Rust releases.

For the objects:

- Set the [storage class](https://cloud.google.com/storage/docs/storage-classes) to "archive" for all buckets. This is the cheapest class for infrequent access.
- Enable [object versioning](https://cloud.google.com/storage/docs/object-versioning) and [soft delete](https://cloud.google.com/storage/docs/soft-delete), so that we can recover updates and deletes (see the sketch after this list).
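
To make this concrete, here is a minimal sketch of those settings using the `google-cloud-storage` Python client. The bucket name and the 30-day retention period are placeholders, not the team's actual configuration:

```python
from google.cloud import storage

# Assumes application-default credentials for the backup project.
client = storage.Client(project="backup-prod")

# Hypothetical bucket name, for illustration only.
bucket = client.bucket("rust-backup-crates-io")
bucket.storage_class = "ARCHIVE"   # cheapest class for infrequent access
bucket.versioning_enabled = True   # keep noncurrent generations of updated objects
client.create_bucket(bucket, location="europe-west1")

# Soft delete keeps deleted objects recoverable for a grace period
# (30 days here, as an assumption); requires a recent client release.
bucket.soft_delete_policy.retention_duration_seconds = 30 * 24 * 60 * 60
bucket.patch()
```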

We use [Storage Transfer](https://cloud.google.com/storage-transfer/docs/overview) to automatically transfer the content of the S3 buckets into the Google Object Storage.

This is a service managed by Google. We'll use it to download the S3 buckets from CloudFront and perform a daily incremental transfer. The transfers only move files that are new, updated, or deleted since the last transfer, minimizing the amount of data that needs to be transferred.
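
As a rough sketch of what such a daily job could look like with the `google-cloud-storage-transfer` Python client (bucket names and credential placeholders are assumptions, not the real setup):

```python
from datetime import date

from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()
start = date.today()

job = client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": "backup-prod",
                "description": "Daily incremental backup of crates-io",
                "status": storage_transfer.TransferJob.Status.ENABLED,
                # A start date with no end date makes the job repeat daily.
                "schedule": {
                    "schedule_start_date": {
                        "year": start.year,
                        "month": start.month,
                        "day": start.day,
                    }
                },
                "transfer_spec": {
                    "aws_s3_data_source": {
                        "bucket_name": "crates-io",
                        "aws_access_key": {
                            "access_key_id": "<AWS_ACCESS_KEY_ID>",
                            "secret_access_key": "<AWS_SECRET_ACCESS_KEY>",
                        },
                    },
                    "gcs_data_sink": {"bucket_name": "rust-backup-crates-io"},
                    # Propagate deletions; versioning and soft delete on the
                    # sink keep the removed data recoverable.
                    "transfer_options": {"delete_objects_unique_in_sink": True},
                },
            }
        }
    )
)
print(f"Created transfer job: {job.name}")
```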

You can also run the following tests:

- Edit the file in AWS and check that you can recover the previous version from GCP.
- Delete the file in AWS and check that you can recover all previous versions from GCP (see the sketch below).
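
A sketch of how those checks could be scripted with the Python client; the bucket and object names are placeholders, and `source_generation` would be one of the generation numbers printed by the listing:

```python
from google.cloud import storage

client = storage.Client(project="backup-prod")
bucket = client.bucket("rust-backup-crates-io")  # placeholder name

# versions=True also lists noncurrent and deleted generations.
for blob in client.list_blobs(bucket, prefix="path/to/object", versions=True):
    print(blob.name, blob.generation, blob.time_deleted)

# Recover a previous version by copying that generation over the live object.
live = bucket.blob("path/to/object")
bucket.copy_blob(live, bucket, "path/to/object", source_generation=1234567890)
```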

In the future, we might want to create alerts in:

- _Datadog_: to monitor if the transfer job fails.
- _Wiz_: to monitor if the access control changes.
### FAQ 🤔
#### Do we need a multi-region backup for the object storage?
No. [Multi-region](https://cloud.google.com/storage/docs/availability-durability#cross-region-redundancy) only helps if we want to serve this data in real time and need a fallback mechanism if a GCP region fails. We just need this object storage for backup purposes, so we don't need to pay more 👍
#### Why did you choose the `europe-west1` GCP region?
It's far from the `us-west-1` region where the AWS S3 buckets are located. This protects us from geographical disasters.
The con is that the latency of the transfer job is higher when compared to a region in the US.
Also, the cost calculator indicates that this region has a "Low CO2" indicator and is among the cheapest regions.