Commit c5679a4

Commit message: updates
1 parent a1657f1 commit c5679a4

File tree: 1 file changed (+12 -5 lines changed)

service-catalog/gcp-backup/README.md (+12 -5)

@@ -73,12 +73,16 @@ The accidental account deletion is not a threat anymore because if either AWS or
 ## Implementation details
 
 The account where we store the backup is called `rust-backup`. It contains two GCP projects: `backup-prod` and `backup-staging`.
-Here we have one Google [Object Storage](https://cloud.google.com/storage?hl=en) in the `europe-west-1` (Belgium) region for the following AWS S3 buckets:
+Here we have one Google [Object Storage](https://cloud.google.com/storage?hl=en) in the `europe-west1` (Belgium) region for the following AWS S3 buckets:
 
 - `crates-io`. Cloudfront URL: `cloudfront-static.crates.io`. It contains the crates published by the Rust community.
 - `static-rust-lang-org`. Cloudfront URL: `cloudfront-static.rust-lang.org`. Among other things, it contains the Rust releases.
 
-The [storage class](https://cloud.google.com/storage/docs/storage-classes) is set to "archive" for both buckets. This is the cheapest class for infrequent access.
+For the objects:
+- Set the [storage class](https://cloud.google.com/storage/docs/storage-classes) to "archive" for all buckets.
+  This is the cheapest class for infrequent access.
+- Enable [object-versioning](https://cloud.google.com/storage/docs/object-versioning) and [soft-delete](https://cloud.google.com/storage/docs/soft-delete),
+  so that we can recover updates and deletes.
 
 We use [Storage Transfer](https://cloud.google.com/storage-transfer/docs/overview) to automatically transfer the content of the S3 bucket into the Google Object Storage.
 This is a service managed by Google. We'll use it to download the S3 buckets from CloudFront to perform a daily incremental transfer. The transfers only move files that are new, updated, or deleted since the last transfer, minimizing the amount of data that needs to be transferred.
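
As a rough illustration of the bucket settings added in the hunk above, here is a minimal sketch using the `google-cloud-storage` Python client. It is not the team's actual tooling; the project and bucket names are placeholders.

```python
# Hypothetical example: apply the settings described above to one backup bucket.
# Project and bucket names are placeholders.
from google.cloud import storage

client = storage.Client(project="backup-prod")
bucket = client.get_bucket("crates-io-backup")  # placeholder bucket name

# "Archive" is the cheapest storage class for data that is rarely read.
bucket.storage_class = "ARCHIVE"

# Object versioning keeps noncurrent generations around when an object is
# overwritten or deleted, so updates and deletes can be rolled back.
bucket.versioning_enabled = True

bucket.patch()

# Soft delete (retention of deleted objects for a grace period) is enabled by
# default on newly created buckets; the retention window can be adjusted in the
# bucket's soft delete policy if the default does not fit.
```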
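
Similarly, a Storage Transfer job along the lines described above could be created with the `google-cloud-storage-transfer` client. This is only an illustrative sketch: project, bucket, credential, and schedule values are placeholders, and the real job may pull the data via the CloudFront URLs rather than directly from S3.

```python
# Hypothetical example: a daily transfer job that mirrors an S3 bucket into
# the GCS backup bucket. All identifiers below are placeholders.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

job = client.create_transfer_job({
    "transfer_job": {
        "project_id": "backup-prod",
        "description": "Daily incremental backup of static-rust-lang-org",
        "status": storage_transfer.TransferJob.Status.ENABLED,
        # Runs once a day; only objects that changed since the last run move.
        "schedule": {
            "schedule_start_date": {"year": 2024, "month": 1, "day": 1},
        },
        "transfer_spec": {
            "aws_s3_data_source": {
                "bucket_name": "static-rust-lang-org",
                "aws_access_key": {
                    "access_key_id": "AKIA...",       # read-only key (placeholder)
                    "secret_access_key": "<secret>",
                },
            },
            "gcs_data_sink": {"bucket_name": "static-rust-lang-org-backup"},
            # Propagate deletions to the sink; versioning and soft delete on
            # the sink keep the deleted data recoverable anyway.
            "transfer_options": {"delete_objects_unique_in_sink": True},
        },
    }
})
print(f"created transfer job: {job.name}")
```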
@@ -96,17 +100,20 @@ You can also run the following test:
 - Edit the file in AWS and check that you can recover the previous version from GCP.
 - Delete the file in AWS and check that you can recover all previous versions from GCP.
 
-In the future, we might want to create alarts in Datadog to monitor if the transfer job fails.
+In the future, we might want to create alerts in:
+- _Datadog_: to monitor if the transfer job fails.
+- _Wiz_: to monitor if the access control changes.
 
 ### FAQ 🤔
 
 #### Do we need a multi-region backup for the object storage?
 
 No. [Multi-region](https://cloud.google.com/storage/docs/availability-durability#cross-region-redundancy) only helps if we want to serve this data in real time and we want a fallback mechanism if a GCP region fails. We just need this object storage for backup purposes, so we don't need to pay more 👍
 
-#### Why did you choose the `us-west-1` GCP region?
+#### Why did you choose the `europe-west1` GCP region?
 
-It's the same region where the AWS S3 buckets are located. In this way, we reduce the latency of the transfer job.
+It's far from the `us-west-1` region where the AWS S3 buckets are located. This protects us from geographical disasters.
+The con is that the latency of the transfer job is higher when compared to a region in the US.
 Also, the cost calculator indicates that this region has a "Low CO2" label and is among the cheapest regions.
 
 #### Why GCP?
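
Separate from the FAQ entries above: the recovery checks listed at the start of this hunk (recovering edited or deleted objects) can be exercised with the `google-cloud-storage` client, for example by listing an object's noncurrent generations and copying one back. Again a hedged sketch with placeholder names, not a documented procedure.

```python
# Hypothetical example: inspect and restore previous generations of an object
# in the backup bucket. Bucket and object names are placeholders.
from google.cloud import storage

client = storage.Client(project="backup-prod")
bucket = client.bucket("static-rust-lang-org-backup")
name = "dist/channel-rust-stable.toml"

# With object versioning enabled, overwrites and deletes leave noncurrent
# generations behind; versions=True lists all of them.
versions = list(client.list_blobs(bucket, prefix=name, versions=True))
for blob in versions:
    print(blob.name, blob.generation, blob.time_deleted)

# Restore a specific generation by copying it back as the live object.
old = versions[0]
bucket.copy_blob(old, bucket, name, source_generation=old.generation)
```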
