
Commit

initial commit for the official code--final version
cui-shaobo committed Jun 21, 2024
1 parent 99ec4cf commit 9fc96b8
Showing 225 changed files with 774,086 additions and 1 deletion.
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Anonymous

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
156 changes: 155 additions & 1 deletion README.md
@@ -1 +1,155 @@
# logogram
[![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)](https://www.python.org/downloads/release/python-380/)
[![MIT License](https://img.shields.io/github/license/m43/focal-loss-against-heuristics)](LICENSE)

# <img src="./image/logoemoji.png" width="116.4" height="48"/> (LOgogram)
We introduce <img src="./image/logoemoji.png" width="58.2" height="24"/> (LOgogram), a novel heading-generation benchmark comprising 6,653 paper abstracts with corresponding *descriptions* and *acronyms* as *headings*.

To measure generation quality, we propose a set of evaluation metrics covering three aspects: summarization, neology, and algorithmic constraints.

Additionally, we explore three strategies (generation ordering, tokenization, and framework design) under prevalent learning paradigms (supervised fine-tuning, reinforcement learning, and in-context learning with Large Language Models).

## Environment Setup

We recommend creating a new [conda](https://docs.conda.io/en/latest/) virtual environment for running the code in this repository:

```shell
conda create -n logogram python=3.8
conda activate logogram
```
Then install [PyTorch](https://pytorch.org/) 1.13.1. For example, to install with pip and CUDA 11.6:

```shell
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
```
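
A quick optional sanity check that the CUDA build was picked up (plain PyTorch API, nothing project-specific):

```python
import torch

# Expect something like "1.13.1+cu116" and True on a machine with a CUDA 11.6-compatible GPU.
print(torch.__version__)
print(torch.cuda.is_available())
```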

Finally, install the remaining packages using pip:

```shell
pip install -r requirements.txt
```

## 1. Dataset Processing

### 1.1 Collection of Papers Whose Headings Contain Acronyms

We crawl the [ACL Anthology](https://aclanthology.org/) and then exclude examples whose headings do not contain acronyms.

The unfiltered dataset is saved in `/raw-data/acl-anthology/data_acl_all.jsonl`.
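
If you just want to peek at the raw file, here is a minimal sketch (the field names are not documented in this README; the record in `data/icl_demos.jsonl` uses keys such as `Title` and `Abstract`, and the raw crawl likely follows a similar schema):

```python
import json

# Path relative to the repository root.
path = "raw-data/acl-anthology/data_acl_all.jsonl"

with open(path) as f:
    first = json.loads(next(f))        # each line is one paper record
    n_records = 1 + sum(1 for _ in f)  # count the remaining lines

print(n_records, "records; keys of the first record:", sorted(first))
```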

### 1.2 Apply Filtering Rules and Replace Acronyms in Abstracts with Masks

We further apply a set of tailored filtering rules, derived from data inspection, to eliminate anomalies. Acronyms in the abstracts are replaced with a mask token to prevent acronym leakage. The details are in `src/data_processing.ipynb`.
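
The notebook holds the actual masking logic; purely as an illustration, replacing an acronym with the `<MASKED_ACRONYM>` token seen in `data/icl_demos.jsonl` could look like this minimal sketch (the regex is a simplification, not the notebook's rules):

```python
import re

def mask_acronym(abstract: str, acronym: str) -> str:
    """Replace standalone occurrences of the acronym with a mask token."""
    pattern = re.compile(rf"\b{re.escape(acronym)}\b")
    return pattern.sub("<MASKED_ACRONYM>", abstract)

print(mask_acronym("Our DyLoRA method trains LoRA blocks for a range of ranks.", "DyLoRA"))
# -> Our <MASKED_ACRONYM> method trains LoRA blocks for a range of ranks.
```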

### 1.3 Dataset Statistics

We plot the distributions of text length and publication counts for our dataset in Figures 3 and 4 of our paper. To reproduce them, see `src/data_statistics.ipynb`.
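
The notebook is the authoritative source for those plots; a bare-bones sketch of the text-length side, assuming the `Abstract` field seen in `data/icl_demos.jsonl` and run against the unfiltered crawl for illustration:

```python
import json

lengths = []
with open("raw-data/acl-anthology/data_acl_all.jsonl") as f:
    for line in f:
        record = json.loads(line)
        lengths.append(len(record["Abstract"].split()))  # abstract length in words

print("papers:", len(lengths), "mean abstract length:", sum(lengths) / len(lengths))
```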

## 2. Justification of Metrics

We evaluate the generated headings against summarization, neologistic, and algorithmic constraints. Specifically, we propose three novel metrics, *WordLikeness* (WL), *WordOverlap* (WO), and *LCSRatio* (LR), for the neologistic and algorithmic aspects. To justify these metrics, we also plot the density estimates of the individual metrics and their joint distribution in Figures 5 and 6, showing that the gold-standard examples achieve high values on them. To reproduce, see `src/data_statistics.ipynb`.
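
The metric implementations ship with the evaluation script in Section 4; purely as a reading aid, here is one possible sketch of *LCSRatio*, under our own assumption that it is the longest-common-subsequence length between the acronym and its description divided by the acronym length (the gold `DyLoRA` example in `data/icl_demos.jsonl` scores 1.0 under this reading):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_ratio(acronym: str, description: str) -> float:
    # Case-insensitive; 1.0 means every acronym letter appears in order in the description.
    return lcs_length(acronym.lower(), description.lower()) / len(acronym)

print(lcs_ratio("DyLoRA", "Dynamic Search-Free Low-Rank Adaptation"))  # 1.0
```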

## 3. Apply Strategies under Learning Paradigms

### 3.1 Supervised Fine-Tuning (SFT) Paradigm

We fine-tune the [T5](https://arxiv.org/abs/1910.10683) model and explore the effectiveness of the generation ordering, tokenization, and framework design strategies.

1. To fine-tune and run inference for <img src="./image/da_tok_one.png" width="51.8" height="14.7"> (description then acronym, subword-level acronym tokenization, onestop framework), run:

```shell
accelerate launch t5_brute_finetune.py --model_name t5-base --model_mode abstract2description:shorthand --model_save_path ./models/t5-a2ds-token-base --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-token-base/checkpoint-5 --model_mode abstract2description:shorthand --prediction_save_path ./prediction/brute_t5_a2ds_token_predictions.csv
```

2. To fine-tune and run inference for <img src="./image/ad_tok_one.png" width="51.8" height="14.7"> (acronym then description, subword-level acronym tokenization, onestop framework), run:

```shell
accelerate launch t5_brute_finetune.py --model_name t5-base --model_mode abstract2shorthand:description --model_save_path ./models/t5-a2sd-token-base --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2sd-token-base/checkpoint-5 --model_mode abstract2shorthand:description --prediction_save_path ./prediction/brute_t5_a2sd_token_predictions.csv
```

3. To fine-tune and run inference for <img src="./image/da_chr_one.png" width="51.8" height="14.7"> (description then acronym, letter-level acronym tokenization, onestop framework), run:

```shell
accelerate launch t5_brute_finetune.py --model_name t5-base --model_mode abstract2description:shorthand --shorthand_mode character --model_save_path ./models/t5-a2ds-char-base --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-char-base/checkpoint-5 --model_mode abstract2description:shorthand --shorthand_mode character --prediction_save_path ./prediction/brute_t5_a2ds_char_predictions.csv
```

4. To fine-tune and run inference for <img src="./image/ad_tok_pip.png" width="51.8" height="14.7"> (description then acronym, subword-level acronym tokenization, pipeline framework), run:

```shell
accelerate launch t5_brute_finetune.py --model_name t5-base --model_mode abstract2description --model_save_path ./models/t5-a2ds-token-pipe/1 --save_total_limit 1

accelerate launch t5_brute_finetune.py --model_name t5-base --model_mode abstract-description2shorthand --model_save_path ./models/t5-a2ds-token-pipe/2 --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-token-pipe/1/checkpoint-5 --model_mode abstract2description --prediction_save_path ./prediction/brute_t5_a2ds_token_pipe_predictions.csv

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-token-pipe/2/checkpoint-5 --model_mode abstract-description2shorthand --prediction_save_path ./prediction/brute_t5_a2ds_token_pipe_predictions.csv
```

### 3.2 Reinforcement Learning (RL) Paradigm

The RL paradigm builds on the SFT paradigm. Specifically, we choose the Proximal Policy Optimization (PPO) algorithm. We evaluate all strategies except <img src="./image/ad_tok_pip.png" width="51.8" height="14.7">, since feedback mechanisms for pipeline language models remain relatively unexplored in the RL paradigm.

1. To further fine-tune and run inference for <img src="./image/da_tok_one.png" width="51.8" height="14.7">, run:

```shell
TOKENIZERS_PARALLELISM=false accelerate launch t5_ppo_finetune.py --model_mode abstract2description:shorthand --model_save_path ./models/t5-a2ds-token-ppo --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-token-ppo --model_mode abstract2description:shorthand --prediction_save_path ./prediction/brute_t5_a2ds_token_ppo_predictions.csv
```

2. To further fine-tune and run inference for <img src="./image/ad_tok_one.png" width="51.8" height="14.7">, run:

```shell
TOKENIZERS_PARALLELISM=false accelerate launch t5_ppo_finetune.py --model_mode abstract2shorthand:description --model_save_path ./models/t5-a2sd-token-ppo --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2sd-token-ppo --model_mode abstract2shorthand:description --prediction_save_path ./prediction/brute_t5_a2sd_token_ppo_predictions.csv
```

3. To further fine-tune and run inference for <img src="./image/da_chr_one.png" width="51.8" height="14.7">, run:

```shell
TOKENIZERS_PARALLELISM=false accelerate launch t5_ppo_finetune.py --model_mode abstract2description:shorthand --shorthand_mode character --model_save_path ./models/t5-a2ds-char-ppo --save_total_limit 1

accelerate launch t5_brute_inference.py --model_name models/t5-a2ds-char-ppo --model_mode abstract2description:shorthand --shorthand_mode character --prediction_save_path ./prediction/brute_t5_a2ds_char_ppo_predictions.csv
```

### 3.3 In-Context Learning with Large Language Models (ICL) Paradigm
To replicate the ICL results, run:
```shell
python icl_main.py
```
The generation strategy can be selected from:
+ `onestop`: <img src="./image/da_tok_one.png" width="51.8" height="14.7">
+ `onestop_sd`: <img src="./image/ad_tok_one.png" width="51.8" height="14.7">
+ `onestop_char`: <img src="./image/da_chr_one.png" width="51.8" height="14.7">
+ `pipeline`: <img src="./image/ad_tok_pip.png" width="51.8" height="14.7">


## 4. Evaluation

To evaluate the generated acronyms, run:

```shell
python run_eval.py \
--file <CSV file> \
--eval_type shorthand \
--hypos-col <the column name of generated acronyms> \
--refs-col <the column name of ground truth acronyms>
```

For descriptions, run:

```shell
python run_eval.py \
--file <CSV file> \
--eval_type description \
--hypos-col <the column name of generated descriptions> \
--refs-col <the column name of ground truth descriptions>
```

By default, the CSV files are saved in `prediction/`.
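
The `--hypos-col`/`--refs-col` names depend on how the prediction file was written; one way to list the available columns before running the commands above (the file name here is one produced by the SFT example in Section 3.1, so substitute your own):

```python
import csv

with open("prediction/brute_t5_a2ds_token_predictions.csv", newline="") as f:
    header = next(csv.reader(f))

print(header)  # pick the generated and reference column names from this list
```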
Binary file added __pycache__/t5_trainer.cpython-37.pyc
Binary file not shown.
6 changes: 6 additions & 0 deletions baselines/__init__.py
@@ -0,0 +1,6 @@
"""
@Project : abbreviation
@File    : __init__.py
@Author : Shaobo Cui
@Date : 14.07.23 10:44
"""
19 changes: 19 additions & 0 deletions csv_to_jsonl.py
@@ -0,0 +1,19 @@
import csv
import json

csv_file = 'prediction/icl_gpt-4-1106-preview_onestop_char.csv'
jsonl_file = 'results-icl/icl_gpt-4-1106-preview_onestop_char.jsonl'


def csv_to_jsonl(csv_path, jsonl_path):
    """Convert a prediction CSV into a JSON Lines file, one JSON object per row."""
    with open(csv_path, mode='r', newline='') as csv_in:
        csv_reader = csv.DictReader(csv_in)

        with open(jsonl_path, mode='w') as jsonl_out:
            for row in csv_reader:
                # Each CSV row is a dict keyed by column name; serialize it as one JSON line.
                jsonl_out.write(json.dumps(row) + '\n')


if __name__ == "__main__":
    csv_to_jsonl(csv_file, jsonl_file)
1 change: 1 addition & 0 deletions data/icl_demos.jsonl
@@ -0,0 +1 @@
{"Type":"conference","Year":2023,"Area":"AI","Where":"eacl-2023","Abbreviation":"DyLoRA","Title":"Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation","Abstract":"With the ever-growing size of pretrained models (PMs), fine-tuning them has become more expensive and resource-hungry. As a remedy, low-rank adapters (LoRA) keep the main pretrained weights of the model frozen and just introduce some learnable truncated SVD modules (so-called LoRA blocks) to the model. While LoRA blocks are parameter-efficient, they suffer from two major problems: first, the size of these blocks is fixed and cannot be modified after training (for example, if we need to change the rank of LoRA blocks, then we need to re-train them from scratch); second, optimizing their rank requires an exhaustive search and effort. In this work, we introduce a dynamic low-rank adaptation (<MASKED_ACRONYM>) technique to address these two problems together. Our <MASKED_ACRONYM> method trains LoRA blocks for a range of ranks instead of a single rank by sorting the representation learned by the adapter module at different ranks during training. We evaluate our solution on different natural language understanding (GLUE benchmark) and language generation tasks (E2E, DART and WebNLG) using different pretrained models such as RoBERTa and GPT with different sizes. Our results show that we can train dynamic search-free models with <MASKED_ACRONYM> at least 4 to 7 times (depending to the task) faster than LoRA without significantly compromising performance. Moreover, our models can perform consistently well on a much larger range of ranks compared to LoRA.","wordlikeness":0.5,"lcsratio":1.0,"wordcoverage":0.7272727273}