- This is not a Google product.
- Use the new Python version. The old scripts stay in place for historical reference.
A script that extracts information from one or more GCP projects for use in any kind of analysis. It is an inventory of your cloud.
Running the collect.py program reads the data and exports it to CSV or Excel, whichever you choose.
- Use a Service Account that has ONLY read-only access. Although this script does not change anything in the cloud, it is good practice to grant only the access needed.
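As a minimal sketch of such a setup (the `inventory-collector` account and `my-project` names are hypothetical, and `roles/viewer` is one broad read-only option; your organization may prefer narrower per-service roles):

```sh
# Create a dedicated service account for the collector (hypothetical names).
gcloud iam service-accounts create inventory-collector \
    --project=my-project --display-name="Inventory collector"

# Grant read-only access; roles/viewer is broad read-only.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:inventory-collector@my-project.iam.gserviceaccount.com" \
    --role="roles/viewer"

# Export a key and point the Google client libraries at it.
gcloud iam service-accounts keys create key.json \
    --iam-account=inventory-collector@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"
```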
pip3 install -r requirements.yaml
./collect.py
- Requirements file generated by pipreqs.
You can save the collected data to csv, xls, or json by using the -o parameter. If omitted, it defaults to csv. The files will be saved in the ./output folder. Example: save to both csv and xls:
./collect.py -o csv -o xls
Output patterns:
Type | Output |
---|---|
xls | output.xlsx |
csv | output_{resource_type}.csv |
json | output_{resource_type}.json |
By using the -r parameter, it is possible to restrict collection to specific kinds of resources. This parameter can be used multiple times. Example: collect data for gcs and gke only:
./collect.py -r gcs -r gke
Resource | Option |
---|---|
Virtual Machines | compute |
CloudSql | sql |
Deployment Manager | deployment |
Network (VPC/VPN) | network |
Functions | functions |
GCS | gcs |
GKE | gke |
Artifact Repository | art_repo |
PubSub | pubsub |
Supported resources:
- Virtual Machines
- CloudSQL
- Functions
- GCS
- GKE
- Artifact Registry
- PubSub
- Deployment Manager
- VPC
- VPN
Passing the -m parameter allows looping through multiple projects. Text file format: a list of project IDs (or project numbers), one per line, e.g.:
./collect.py -m projects.txt -r compute
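For reference, a projects.txt could look like this (hypothetical project IDs):

```
my-project-dev
my-project-staging
my-project-prod
```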
- Question: Why does reading compute machines take some time?
Response: While reading compute machines, the collector also reads: 1) the CPU metrics from the Monitoring service, and 2) the machineType, to get CPU and memory.
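As a rough illustration of those two extra reads (not the collector's exact code; the project, zone, and machine-type names below are hypothetical), one call goes to the Monitoring API and one to the Compute API:

```python
import time

from google.cloud import monitoring_v3
from googleapiclient import discovery

project_id = "my-project"  # hypothetical project

# 1) CPU utilization from the Monitoring service (last hour).
client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
series = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    print(ts.resource.labels["instance_id"], ts.points[0].value.double_value)

# 2) machineType lookup, which carries the CPU and memory sizes.
compute = discovery.build("compute", "v1")
mt = (
    compute.machineTypes()
    .get(project=project_id, zone="us-central1-a", machineType="e2-standard-4")
    .execute()
)
print(mt["guestCpus"], "vCPUs,", mt["memoryMb"], "MB RAM")
```

Each instance implies extra API round-trips like these, which is why the compute collection is slower than a plain instance listing.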