Commit

Merge branch 'main' into handle-ranges-per-guidelines

i-be-snek authored Jul 8, 2024
2 parents 2bee2a4 + 8fdf143 commit f19294e
Showing 11 changed files with 54 additions and 68 deletions.
3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Affected.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Buildings_Damaged.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Damage.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Deaths.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Displaced.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Events.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Homeless.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Injured.parquet

This file was deleted.

3 changes: 0 additions & 3 deletions Database/gold/gold_from_excel_2/Insured_Damage.parquet

This file was deleted.

7 changes: 4 additions & 3 deletions Database/scr/normalize_utils.py
````diff
@@ -285,8 +285,8 @@ def merge_json(self, file_path_dir: str) -> list[pd.DataFrame]:

         return dfs

-    @staticmethod
     def save_json(
+        self,
         dfs: list[pd.DataFrame],
         model_name: str,
         output_dir: str,
@@ -314,11 +314,12 @@ def save_json(
         """
         Takes a list of dataframes, merges it into a single file, and stores file in output_dir with the correct set and model names
         """
-        captured_columns = set([x for xs in [df.columns for df in dfs] for x in xs])
+        captured_columns = set([x for xs in [df.keys() for df in dfs] for x in xs])
+        self.logger.info(f"Captured Columns: {captured_columns}")
         model_output = pd.DataFrame(dfs, columns=[c for c in columns if c in captured_columns])
         filename = f"{output_dir}/{model_name}.json"
         model_output.to_json(
             filename,
             orient="records",
         )
-        return filename
+        return filename
````
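A note on the `save_json` change: switching `df.columns` to `df.keys()` (and dropping `@staticmethod` so `self.logger` is available) reads like a move to accept dict-shaped records as well as DataFrames, since `DataFrame.keys()` returns the columns while `dict.keys()` returns the keys. A minimal sketch of that behavior — a reading of the diff, not project code:

```python
import pandas as pd

dfs = [
    pd.DataFrame({"Event_ID": ["E1"], "Deaths": [3]}),
    {"Event_ID": "E2", "Injured": 5},  # hypothetical dict-shaped record
]

# .keys() works uniformly: it yields the columns of a DataFrame and the
# keys of a dict, which is what the new captured_columns expression relies on.
captured_columns = {x for df in dfs for x in df.keys()}
print(captured_columns)  # e.g. {'Event_ID', 'Deaths', 'Injured'}
```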
88 changes: 50 additions & 38 deletions README.md
````diff
@@ -2,7 +2,7 @@
 Wikimpacts is the first version of a climate impact dataset created with generative AI (GPT-4)


-### Dependencies
+## Dependencies
 Prerequisite:
 - Install [`poetry`](https://python-poetry.org/docs/#installation)
 Then activate a virtual environment and install the dependencies:
````
Expand All @@ -26,27 +26,28 @@ pre-commit installed at .git/hooks/pre-commit
git lfs install
```

### Quickstart
## Quickstart

#### Parsing and evaluation pipeline
### Parsing and evaluation pipeline

If you have generated some LLM output and would like to test it against the dev and test gold sets, here is a list of command to enable you to experiment with this yourself.

1. Choose a new experiment name! You will use this <EXPERIMENT_NAME> for the whole pipeline
#### (Step 1) Experiment name

**PRESTEP**:
If the system output is split across several files (such as Mixtral and Mistral system outputs), then first merge it:
Choose a new experiment name! You will use this <EXPERIMENT_NAME> for the whole pipeline.

```shell
poetry run python3 Database/merge_json_output.py \
--input_dir Database/raw/<EXPERIMENT_NAME>/<RAW_JSON_FILES> \
--output_dir Database/raw/<EXPERIMENT_NAME> \
--model_name <MY_MODEL>
```
#### PRESTEP (before Step 2):
If the system output is split across several files (such as Mixtral and Mistral system outputs), then first merge it:

```shell
poetry run python3 Database/merge_json_output.py \
--input_dir Database/raw/<EXPERIMENT_NAME>/<RAW_JSON_FILES> \
--output_dir Database/raw/<EXPERIMENT_NAME> \
--model_name <MY_MODEL>
```

> [!WARNING]
> Your raw system output files should always land in the [`Database/raw/<EXPERIMENT_NAME>`] directory!
> Your raw system output files should always land in the `Database/raw/<EXPERIMENT_NAME>` directory!
> [!TIP]
> JSON files can be formatted easily with pre-commit:
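For orientation, the merge pre-step above presumably just concatenates the per-file JSON records into one list. A minimal sketch under that assumption — not the actual `Database/merge_json_output.py` implementation, and the directory and model names are placeholders:

```python
import json
from pathlib import Path

def merge_json_outputs(input_dir: str, output_file: str) -> None:
    # Gather records from every raw JSON file in the experiment directory
    records = []
    for path in sorted(Path(input_dir).glob("*.json")):
        data = json.loads(path.read_text())
        # Accept either a list of records or a single record per file
        records.extend(data if isinstance(data, list) else [data])
    Path(output_file).write_text(json.dumps(records, indent=2))

merge_json_outputs("Database/raw/my_experiment/raw_json", "Database/raw/my_experiment/my_model.json")
```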
````diff
@@ -55,31 +56,32 @@ If you have generated some LLM output and would like to test it against the dev
 > pre-commit run --files Database/raw/<EXPERIMENT_NAME>/<JSON_FILE_THAT_NEEDS_FORMATTING>
 > ```

-2. Once all system output files are merged into a single JSON file (**or if this was already the case, such as with GPT4 output**), you can parse them so they are ready to be evaluated.
-The parsing script [`Database/parse_events.py`](Database/parse_events.py) will normalize numbers (to min and max) and locations (using OpenStreetMap) and output a JSON file.
+#### (Step 2) Parsing events and subevents

-```shell
-poetry run python3 Database/parse_events.py \
-  --raw_dir Database/raw/<EXPERIMENT_NAME> \
-  --filename <JSON_FILE> \
-  --output_dir Database/output/<EXPERIMENT_NAME> \
-  # "sub", "main" or "all"
-  --event_type all \
+Once all system output files are merged into a single JSON file (**or if this was already the case, such as with GPT4 output**), you can parse them so they are ready to be evaluated.
+The parsing script [`Database/parse_events.py`](Database/parse_events.py) will normalize numbers (to min and max) and locations (using OpenStreetMap) and output a JSON file.

-  # if your country and location columns have a different name
-  # you can specify it here (otherwise, defaults to
-  # "Country" and "Location", respectively):
-  --country_column "Custom_Country_Column" \
-  --location_column "Locations"
-```
+```shell
+poetry run python3 Database/parse_events.py \
+  --raw_dir Database/raw/<EXPERIMENT_NAME> \
+  --filename <JSON_FILE> \
+  --output_dir Database/output/<EXPERIMENT_NAME> \
+  # "sub", "main" or "all"
+  --event_type all \
+  # if your country and location columns have a different name
+  # you can specify it here (otherwise, defaults to
+  # "Country" and "Location", respectively):
+  --country_column "Custom_Country_Column" \
+  --location_column "Locations"
+```

 > [!WARNING]
 > Normalizing countries will go slow the first time. This is because we are using a free API (currently!). However, each time this script is run locally, geopy will cache the results, meaning that it will go faster the next time you run it on your local branch. Allow for 15-20 minutes the first time.

-3. Evaluate against the dev and test sets
+#### (Step 3) Evaluate against the dev and test sets
+
+##### (A) Choose your config and columns
+
+The python dictionary in <a href="Evaluation/weights.py"><code>weights.py</code></a> contains different weight configs. For example, the experiment nlp4climate weighs all the column types equally but excludes "Event_Name" from evaluation.
````
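Since this branch is about handling ranges per the guidelines, the "normalize numbers (to min and max)" step above is the relevant one. A hypothetical illustration of the idea — not the code in `Database/parse_events.py`, and the function name is invented:

```python
import re

def normalize_range(value: str) -> tuple[int, int]:
    # Hypothetical: map a raw figure such as "10-15" or "about 1,200"
    # to a (min, max) pair, as the README describes parse_events.py doing.
    nums = [int(n.replace(",", "")) for n in re.findall(r"\d[\d,]*", value)]
    if not nums:
        raise ValueError(f"no number found in {value!r}")
    return (min(nums), max(nums)) if len(nums) > 1 else (nums[0], nums[0])

print(normalize_range("10-15"))        # (10, 15)
print(normalize_range("about 1,200"))  # (1200, 1200)
```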
Expand Down Expand Up @@ -131,7 +133,7 @@ poetry run python3 Evaluation/evaluator.py --sys-file Database/output/nlp4clima
--weights_config nlp4climate
```
#### Parsing and normalization
### Parsing and normalization
If you have new events to add to the database, first parse them and insert them.
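The `--weights_config` flag above picks a named entry from `Evaluation/weights.py`. Going by the description, an entry presumably maps column types to weights, roughly like this sketch (all field names are hypothetical; only the equal weighting and the `Event_Name` exclusion are stated in the README):

```python
# Hypothetical shape of a weight config; the real dictionary lives in Evaluation/weights.py.
WEIGHTS = {
    "nlp4climate": {
        "Main_Event": 1.0,   # all column types weighted equally...
        "Country": 1.0,
        "Location": 1.0,
        "Total_Deaths": 1.0,
        "Event_Name": 0.0,   # ...except Event_Name, which is excluded
    },
}
```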
````diff
@@ -145,7 +147,7 @@ If you have new events to add to the database, first parse them and insert them.
 poetry run python3 Database/parse_events.py --help
 ```

-#### Inserting
+### Inserting

 - To insert new main events:
 ```shell
````
````diff
@@ -168,7 +170,7 @@ If you have new events to add to the database, first parse them and insert them.
 poetry run python3 Database/parse_events.py --help
 ```

-#### Database-related
+### Database-related

 - To generate the database according to [`Database/schema.sql`](Database/schema.sql):
 ```shell
````
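The database-generation step above consumes [`Database/schema.sql`](Database/schema.sql); the actual command is cut off by the hunk. If the backend is SQLite (an assumption — the commit does not show it), the equivalent in Python would be:

```python
import sqlite3

# Run the full DDL script against a fresh database file (hypothetical path).
with open("Database/schema.sql") as f, sqlite3.connect("Database/impact.db") as conn:
    conn.executescript(f.read())
```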
````diff
@@ -201,9 +203,19 @@ To be implemented:
 > Please don't track or push excel sheets into the repository
 > The file `Database/gold/ImpactDB_DataTable_Validation.xlsx` has the latest gold annotations from 01/06/2024 and will be updated in the future.

-#### Develop
+### Develop

-Always pull a fresh copy of the `main` branch first! To add a new feature, check out a new branch from the `main` branch, make changes there, and push the new branch upstream to open a PR. PRs should result in a **squash commit** in the `main` branch. **It is recommended to code responsibly and ask someone to review your code. You can always tag @i-be-snek as a reviewer.**
+Always pull a fresh copy of the `main` branch first! To add a new feature, check out a new branch from the `main` branch, make changes there, and push the new branch upstream to open a PR. PRs should result in a **squash commit** in the `main` branch. It is recommended to code responsibly and ask someone to review your code.
+
+And don't forget to pull large files from Git Large File Storage!
+
+```
+# always pull first
+git pull main
+# fetch all large files
+git lfs fetch --all
+```

 Make sure any new dependencies are handled by `poetry`. You should be tracking and pushing both `poetry.lock` and `pyproject.toml` files.
 There is no need to manually add dependencies to the `pyproject.toml` file. Instead, use `poetry` commands:
````
````diff
@@ -216,15 +228,15 @@ poetry add pandas -G main
 poetry add [email protected] -G dev
 ```

-#### Problems?
+### Problems?

 Start an Issue on GitHub if you find a bug in the code or have suggestions for a feature you need.
 If you run into an error or problem, please include the error trace or logs! :D

 > [!TIP]
 > Consult this [GitHub Cheat Sheet](https://education.github.com/git-cheat-sheet-education.pdf)

-#### Sources & Citations
+### Sources & Citations

 - GADM world data | `Database/data/gadm_world.csv`
   https://gadm.org/license.html
````
