
HCL_TPP

[AAAI2023 Oral] The official implementation of "Hierarchical Contrastive Learning for Temporal Point Processes"

In this work, we develop a novel hierarchical contrastive learning (HCL) method for temporal point processes, which provides a new regularizer for maximum likelihood estimation. In principle, HCL jointly considers the noise contrastive estimation (NCE) problem at the event level and at the sequence level. Given a sequence, the event-level NCE maximizes the probability of each observed event given its history while penalizing the conditional probabilities of unobserved events. At the same time, we generate positive and negative event sequences from the observed sequence and maximize the discrepancy between their likelihoods through the sequence-level NCE. Instead of using time-consuming simulation methods, we generate the positive and negative sequences via a simple but efficient model-guided thinning process.
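The following is a minimal sketch, in PyTorch, of how the two NCE terms could be structured. It illustrates the idea rather than reproducing the repository's actual implementation; all function and tensor names are hypothetical.

  import torch
  import torch.nn.functional as F

  def event_level_nce(log_intensity_obs, log_intensity_neg):
      # Event-level NCE: for each observed event, its conditional intensity
      # (index 0 of the softmax) should dominate those of sampled unobserved
      # events. Shapes: (batch, seq_len) and (batch, seq_len, n_neg).
      logits = torch.cat([log_intensity_obs.unsqueeze(-1), log_intensity_neg], dim=-1)
      target = torch.zeros(logits.shape[:-1], dtype=torch.long, device=logits.device)
      return F.cross_entropy(logits.reshape(-1, logits.shape[-1]), target.reshape(-1))

  def sequence_level_nce(loglik_pos, loglik_neg):
      # Sequence-level NCE: the likelihood of the positive sequence (index 0)
      # should dominate those of the negative sequences.
      # Shapes: (batch,) and (batch, n_neg).
      logits = torch.cat([loglik_pos.unsqueeze(-1), loglik_neg], dim=-1)
      target = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
      return F.cross_entropy(logits, target)

  # The full objective then regularizes the negative log-likelihood:
  #   loss = w_mle * nll + w_cl1 * event_nce + w_cl2 * seq_nce
  # where the weights correspond to the -w_mle / -w_cl1 / -w_cl2 flags used
  # in the training commands below.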

Reference

If you use this code as part of any published research, please acknowledge the following paper:

  @article{wang2023hierarchical,
    title={Hierarchical Contrastive Learning for Temporal Point Processes},
    author={Wang, Qingmei and Cheng, Minjie and Yuan, Shen and Xu, Hongteng},
    journal={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2023}
  }

Instructions

Here are the instructions for using the codebase.

Dependencies

This code is written in Python. To use it, you will need:

  • PyTorch == 1.10.0
  • Python == 3.9.0
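For example, a fresh environment could be set up as follows (using conda is an assumption; any environment manager for Python 3.9 works):

  conda create -n hcl_tpp python=3.9.0
  conda activate hcl_tpp
  pip install torch==1.10.0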

Data

The datasets can be downloaded from the Google Drive link provided on the repository page.

Training & Evaluation (using the 'Hawkes' dataset as an example)

MLE+Reg.

python test_learning.py -batch_size 4 -epoch 50 -model 'MLE' -save_label 'MLE + Reg' -data_folder 'tpp-data/data_hawkes' -w_mle 1 -w_dis 1 -w_cl1 0 -w_cl2 0 -seed 12

MLE+DA

python test_learning.py -batch_size 4 -superpose -epoch 50 -model 'MLE' -save_label 'MLE + DA' -data_folder 'tpp-data/data_hawkes' -w_mle 1 -w_dis 1 -w_cl1 0 -w_cl2 0 -seed 12

Dis

python test_learning.py -batch_size 4 -model 'MLE' -save_label 'Dis' -data_folder 'tpp-data/data_hawkes' -epoch 50 -w_mle 0 -w_dis 1 -w_cl1 0 -w_cl2 0 -seed 12

HCL+MLE

python test_learning.py -batch_size 4 -num_neg 20 -ratio_remove 0.4 -model 'HCL' -save_label 'HCL+MLE' -data_folder 'tpp-data/data_hawkes' -epoch 50 -w_mle 1 -w_dis 1 -w_cl1 1 -w_cl2 1 -seed 12

HCLeve+MLE

python test_learning.py -batch_size 4 -num_neg 20 -ratio_remove 0.4 -model 'HCL' -save_label 'HCLeve+MLE' -data_folder 'tpp-data/data_hawkes' -epoch 50 -w_mle 1 -w_dis 1 -w_cl1 1 -w_cl2 0 -seed 12

HCLseq+MLE

python test_learning.py -batch_size 4 -num_neg 20 -ratio_remove 0.4 -model 'HCL' -save_label 'HCLseq+MLE' -data_folder 'tpp-data/data_hawkes' -epoch 50 -w_mle 1 -w_dis 1 -w_cl1 0 -w_cl2 1 -seed 12

Parameters

tpp-data is the dataset directory, passed via the -data_folder flag.

model selects the learning method used for training: 'MLE' or 'HCL'.

TPPS is the model chosen as the backbone for training.

num_neg is the number of negative sequences used for contrastive learning. The default value for the Hawkes dataset is 20.

w_cl1 is the weight of the event-level contrastive learning loss. The default value is 1.

w_cl2 is the weight of the sequence-level contrastive learning loss. The default value is 1.

ratio_remove is the fraction of events removed from each sequence when generating negative and positive sequences. The default value is 0.4.
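As an illustration of how ratio_remove and num_neg might interact, here is a hypothetical sketch of a model-guided thinning step. This is one plausible reading of the procedure, not the repository's implementation; in particular, the preference for keeping high- vs. low-intensity events is an assumption.

  import torch

  def thin_sequence(event_times, intensities, ratio_remove=0.4, positive=True):
      # Hypothetical stochastic thinning: drop roughly a ratio_remove fraction
      # of events, sampling which ones to keep with probability guided by the
      # model's intensity at each event. ASSUMPTION: a positive sequence
      # preferentially keeps high-intensity (plausible) events, while a
      # negative sequence preferentially keeps low-intensity ones.
      n_keep = event_times.numel() - int(ratio_remove * event_times.numel())
      weights = intensities if positive else 1.0 / (intensities + 1e-8)
      keep = torch.multinomial(weights, n_keep, replacement=False).sort().values
      return event_times[keep]

  # Sequence-level contrast: one positive and num_neg negatives per sequence.
  # negatives = [thin_sequence(t, lam, 0.4, positive=False) for _ in range(num_neg)]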
