Commit 7914fd6

Merge pull request #14 from openearth/report-pdf-cloud-run
Report-pdf-cloud-run
2 parents 9214026 + 5d7b2b9 commit 7914fd6

27 files changed: +6537 -169 lines

.github/workflows/deploy_function.yml (+10 -2)
@@ -47,8 +47,10 @@ name: Build and Deploy Report function to Cloud Run
 on:
   push:
     branches:
-      - "main"
-      - feature/pdf-cloud-function
+      - main
+  pull_request:
+    branches:
+      - main
 
 env:
   PROJECT_ID: dgds-i1000482-002
@@ -97,14 +99,20 @@ jobs:
       # END - Docker auth and build
 
       - name: Deploy to Cloud Run
+        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
         id: deploy
         uses: google-github-actions/deploy-cloudrun@v2
         with:
           service: ${{ env.SERVICE }}
           region: ${{ env.REGION }}
           # NOTE: If using a pre-built image, update the image name here
           image: ${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}
+          env_vars: |-
+            OPENAI_API_BASE=${{ secrets.OPENAI_API_BASE }}
+            AZURE_OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }}
+            AZURE_OPENAI_DEPLOYMENT_NAME=${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
 
       # If required, use the Cloud Run url output in later steps
      - name: Show Output
+        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
         run: echo ${{ steps.deploy.outputs.url }}
@@ -0,0 +1,11 @@
+.venv/
+*__pycache__/
+*.pyc
+*.pyo
+*.pyd
+*.pyw
+*.pyz
+*.pyj
+*.pyx
+*.pyd
+data/catalogs

App/functions/report-python-cloud-run/Dockerfile (+54 -20)
@@ -12,23 +12,57 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# Use the official lightweight Python image.
-# https://hub.docker.com/_/python
-FROM python:3.11-slim
-
-# Allow statements and log messages to immediately appear in the Knative logs
-ENV PYTHONUNBUFFERED True
-
-# Copy local code to the container image.
-ENV APP_HOME /app
-WORKDIR $APP_HOME
-COPY . ./
-
-# Install production dependencies.
-RUN pip install -r requirements.txt
-
-# Run the web service on container startup. Here we use the gunicorn
-# webserver, with one worker process and 8 threads.
-# For environments with multiple CPU cores, increase the number of workers
-# to be equal to the cores available.
-CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
+# Use the official uv python image
+# Use a Python image with uv pre-installed
+FROM ghcr.io/astral-sh/uv:python3.11-bookworm
+
+# Install the project into `/app`
+WORKDIR /app
+
+# Enable bytecode compilation
+ENV UV_COMPILE_BYTECODE=1
+
+# Copy from the cache instead of linking since it's a mounted volume
+ENV UV_LINK_MODE=copy
+RUN apt-get update && apt-get install -y libgdal-dev libgl1
+
+# Install the project's dependencies using the lockfile and settings
+RUN --mount=type=cache,target=/root/.cache/uv \
+    --mount=type=bind,source=uv.lock,target=uv.lock \
+    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
+    uv sync --frozen --no-install-project --no-dev
+
+# Then, add the rest of the project source code and install it
+# Installing separately from its dependencies allows optimal layer caching
+ADD . /app
+RUN --mount=type=cache,target=/root/.cache/uv \
+    uv sync --frozen --no-dev
+
+# Accept EULA and install Microsoft fonts (including Arial)
+RUN apt-get update && \
+    echo "deb http://deb.debian.org/debian bookworm contrib non-free" > /etc/apt/sources.list.d/contrib.list && \
+    apt-get update && \
+    echo "ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true" | debconf-set-selections && \
+    apt-get install -y ttf-mscorefonts-installer
+
+# Place executables in the environment at the front of the path
+ENV PATH="/app/.venv/bin:$PATH"
+
+# Clone the coclicodata and global-coastal-atlas repositories STAC catalogs
+RUN mkdir -p /app/data/catalogs
+RUN cd /app/data/catalogs && \
+    git clone -n --depth=1 --filter=tree:0 https://github.com/openearth/coclicodata.git && \
+    cd coclicodata && \
+    git sparse-checkout set --no-cone /current && \
+    git checkout && \
+    cd /app/data/catalogs && \
+    git clone -b subsidence_etienne -n --depth=1 --filter=tree:0 https://github.com/openearth/global-coastal-atlas.git && \
+    cd global-coastal-atlas && \
+    git sparse-checkout set --no-cone /STAC/data/current && \
+    git checkout && \
+    cd /app
+
+ENV STAC_ROOT_DEFAULT="./data/catalogs/global-coastal-atlas/STAC/data/current/catalog.json"
+ENV STAC_COCLICO="./data/catalogs/coclicodata/current/catalog.json"
+
+CMD uv run --with gunicorn gunicorn --bind :8080 --workers 1 --threads 8 --timeout 0 main:app
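The two ENV lines at the bottom of the new Dockerfile record where the sparse-checked-out STAC catalogs live inside the image. A minimal sketch of how application code might open one of them, assuming the app reads these exact variable names via os.environ (the pystac usage here is illustrative, not code from this commit):

    import os

    import pystac  # available as a dependency of pystac-client

    # Fall back to the image default when the variable is unset (assumption:
    # the app reads the same name the Dockerfile sets above).
    stac_root = os.environ.get(
        "STAC_ROOT_DEFAULT",
        "./data/catalogs/global-coastal-atlas/STAC/data/current/catalog.json",
    )
    catalog = pystac.Catalog.from_file(stac_root)
    print(catalog.id, [child.id for child in catalog.get_children()])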

App/functions/report-python-cloud-run/__init__.py (whitespace-only changes)
@@ -0,0 +1 @@
+catalogs/
@@ -0,0 +1,9 @@
+services:
+  report-service:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    env_file:
+      - .env.prod
+    ports:
+      - "8080:8080"

App/functions/report-python-cloud-run/main.py (+1 -1)
@@ -73,4 +73,4 @@ def return_html():
 
 
 if __name__ == "__main__":
-    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
+    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
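The one-line change removes debug=True from the Flask entrypoint. In the container this __main__ block never runs anyway, since the Dockerfile CMD serves main:app through gunicorn. A sketch of the resulting split between local runs and production (the FLASK_DEBUG opt-in is an illustrative assumption, not part of this commit):

    import os

    from flask import Flask

    app = Flask(__name__)

    if __name__ == "__main__":
        # Local development entrypoint only; in the image, gunicorn imports
        # `main:app` directly and this block is skipped.
        # FLASK_DEBUG opt-in is illustrative: debug stays off by default,
        # matching the behaviour after this diff.
        debug = os.environ.get("FLASK_DEBUG", "0") == "1"
        app.run(debug=debug, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))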
@@ -0,0 +1,28 @@
+[project]
+name = "gca-report"
+version = "0.1.0"
+description = "Add your description here"
+readme = "README.md"
+requires-python = ">=3.10"
+dependencies = [
+    "flask>=3.1.0",
+    "geopandas>=1.0.1",
+    "ipykernel>=6.29.5",
+    "jinja2>=3.1.4",
+    "langchain>=0.3.11",
+    "matplotlib>=3.9.3",
+    "numba>=0.60.0",
+    "openai>=1.57.1",
+    "opencv-python>=4.10.0.84",
+    "pymupdf>=1.25.1",
+    "pystac-client>=0.7",
+    "resilientplotterclass",
+    "rioxarray>=0.16.0",
+    "sentence-transformers>=3.3.1",
+    "weasyprint>=63.1",
+    "xarray>=2024.4.0",
+    "zarr>=2.18.2",
+]
+
+[tool.uv.sources]
+resilientplotterclass = { git = "https://github.com/Deltares-research/ResilientPlotterClass" }
@@ -0,0 +1,39 @@
+<div class="section-grid">
+    <h>The State of the Coast Report, <span style="color: rgb(0, 204, 150)">explained</span></h>
+    <p>The data in this report is obtained from various STACs, and the content of this report is generated by Azure AI.
+        This page explains the approach used to process the individual datasets from the STACs and the relevant literature on those datasets.
+        The datasets can be found in the following STACs:</p>
+    <p>https://raw.githubusercontent.com/openearth/global-coastal-atlas/subsidence_etienne/STAC/data/current/catalog.json</p>
+    <p>https://raw.githubusercontent.com/openearth/coclicodata/main/current/catalog.json</p>
+
+    <p>Coastal Types: The dataset indicates the sediment composition of a beach at certain transects,
+        which are globally distributed shore-normal transects spaced every 500 m. Each transect is labelled as sandy, muddy, vegetated, coastal cliff, or other material.
+        The dataset is derived from satellite images and other parameters and is generated using a supervised random forest classifier.
+        Details of the methodology are given in Breiman et al. (2001).</p>
+
+    <p>Population: The dataset provides a global population count per pixel at approximately 100 m resolution,
+        based on the United Nations Development Programme (UNDP) 2020 estimates for 183 countries in total.</p>
+
+    <p>Historical Shoreline Change: The dataset provides the annual shoreline position over the period 1984 to 2021 along 1.8 million transects worldwide.
+        The transects are shore-normal and spaced every 500 m. The shoreline positions are derived from satellite images.
+        Details of the methodology are given in XXXX.</p>
+
+    <p>SSP: SSP stands for Shared Socioeconomic Pathway, introduced in the IPCC 6th Assessment Report (AR6).
+        This indicator describes different socioeconomic assumptions, such as population, economic growth, and technological development.
+        Details can be found in the AR6.</p>
+
+    <p>RCP: RCP stands for Representative Concentration Pathway, introduced in the IPCC 5th Assessment Report (AR5).
+        RCP4.5 and RCP8.5 describe scenarios with intermediate and very high greenhouse gas (GHG) emissions and other radiative forcings, respectively.
+        RCP4.5 describes a scenario in which CO2 emissions remain around current levels until 2050 and then fall, but do not reach net zero by 2100.
+        RCP8.5 describes a scenario in which CO2 emissions triple by 2075.</p>
+
+    <p>Sea Level Rise Projection: These are median projections of regional sea level rise from 2020 to 2150, relative to a 1995-2014 baseline.
+        The projection data originally come from the AR6. The projections are based on several scenarios, including SSP126, SSP245 and SSP585.
+        Details of the projections can be found in XXX.</p>
+
+    <p>Future Shoreline Projections: The average shoreline change rate is based on projected locations of sandy shorelines relative to their reference locations under the RCP4.5 and RCP8.5 scenarios.
+        Following Luijendijk et al. (XXXX), the shoreline locations in 2021 and the projected shoreline locations in 2050 and 2100 are defined.
+        The average shoreline change rate is calculated as the spatial distance between the shoreline locations in two reference years, divided by the number of years between them.
+        The future shoreline location is estimated based on XXXXX. Details can be found in "XXXX" (Luijendijk et al., XXXX).
+        The erosion and accretion classification is the same as that used for the historical shoreline change.</p>
+</div>
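The average-rate definition in the last paragraph of the template is plain arithmetic: the distance between shoreline positions in two reference years divided by the number of years between them. A minimal sketch with made-up numbers (the function name and sample values are illustrative only):

    def shoreline_change_rate(pos_start_m: float, pos_end_m: float,
                              year_start: int, year_end: int) -> float:
        """Average shoreline change rate in m/yr; negative means retreat,
        assuming positions are measured seaward along the transect."""
        return (pos_end_m - pos_start_m) / (year_end - year_start)

    # Illustrative: a shoreline projected 40 m landward of its 2021 position
    # by 2100 gives roughly -0.5 m/yr.
    rate = shoreline_change_rate(0.0, -40.0, 2021, 2100)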
@@ -1,13 +1,30 @@
 from typing import Optional
 import xarray as xr
 
-from .datasetcontent import DatasetContent
-from .esl import get_esl_content
+from report.datasets.datasetcontent import DatasetContent
+
+from report.datasets.shoremon import (
+    get_sedclass_content,
+    get_shoremon_content,
+    get_shoremon_fut_content,
+)
+from report.datasets.popgpd import get_world_pop_content
+# from .subtreat import get_sub_threat_content
 
 
 def get_dataset_content(dataset_id: str, xarr: xr.Dataset) -> Optional[DatasetContent]:
     match dataset_id:
-        case "esl_gwl":
-            return get_esl_content(xarr)
+        # case "esl_gwl":
+        #     return get_esl_content(xarr)
+        case "sed_class":
+            return get_sedclass_content(xarr)
+        case "shore_mon":
+            return get_shoremon_content(xarr)
+        case "shore_mon_fut":
+            return get_shoremon_fut_content(xarr)
+        case "world_pop":
+            return get_world_pop_content(xarr)
+        # case "sub_threat":
+        #     return None
         case _:
             return None
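The refactor swaps the relative imports for absolute report.datasets.* paths and extends the match dispatch to the newly supported dataset ids. A minimal usage sketch; the module path and the empty Dataset are assumptions for illustration (this diff view does not show the file name, and real callers pass data loaded from the STAC catalogs):

    import xarray as xr

    # Module path assumed from the absolute imports above.
    from report.datasets.datasets import get_dataset_content

    xarr = xr.Dataset()  # empty stand-in; the content builders expect real data

    content = get_dataset_content("world_pop", xarr)  # dispatches to get_world_pop_content
    nothing = get_dataset_content("esl_gwl", xarr)    # commented-out case now falls through to None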
