Without head bar #90

Open
wants to merge 18 commits into base: main
16 changes: 8 additions & 8 deletions _config.yml
@@ -6,9 +6,9 @@
# `jekyll serve`. If you change this file, please restart the server process.

# Site Settings
-title : "Lorem ipsum"
-description : "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. "
-repository : "RayeRen/acad-homepage.github.io"
+title : "Xingyu SONG"
+description : "Welcome! "
+repository : "xingyu-song/xingyu-song.github.io"
google_scholar_stats_use_cdn : true

# google analytics
@@ -21,14 +21,14 @@ baidu_site_verification : # get baidu_site_verification from https://ziyuan.ba

# Site Author
author:
-name : "Lorem ipsum"
-avatar : "images/android-chrome-512x512.png"
-bio : "Lorem ipsum College"
-location : "Beijing, China"
+name : "Xingyu SONG"
+avatar : "images/avatar.JPG"
+bio : "The University of Tokyo"
+location : "Tokyo, Japan"
employer :
pubmed :
googlescholar : "https://scholar.google.com/citations?user=YOUR_GOOGLE_SCHOLAR_ID"
-email : "Lorem@ipsum.com"
+email : "songxingyu0429@gmail.com"
researchgate : # e.g., "https://www.researchgate.net/profile/yourprofile"
uri :
bitbucket :
16 changes: 8 additions & 8 deletions _data/navigation.yml
@@ -3,20 +3,20 @@ main:
- title: "About Me"
url: "/#about-me"

-- title: "News"
-url: "/#-news"
+# - title: "News"
+# url: "/#-news"

- title: "Publications"
url: "/#-publications"

-- title: "Honors and Awards"
-url: "/#-honors-and-awards"
+# - title: "Honors and Awards"
+# url: "/#-honors-and-awards"

- title: "Educations"
url: "/#-educations"

-- title: "Invited Talks"
-url: "/#-invited-talks"
+# - title: "Invited Talks"
+# url: "/#-invited-talks"

-- title: "Internships"
-url: "/#-internships"
+# - title: "Internships"
+# url: "/#-internships"
56 changes: 33 additions & 23 deletions _pages/about.md
@@ -17,42 +17,52 @@ redirect_from:

<span class='anchor' id='about-me'></span>

-Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. Suspendisse condimentum, libero vel tempus mattis, risus risus vulputate libero, elementum fermentum mi neque vel nisl. Maecenas facilisis maximus dignissim. Curabitur mattis vulputate dui, tincidunt varius libero luctus eu. Mauris mauris nulla, scelerisque eget massa id, tincidunt congue felis. Sed convallis tempor ipsum rhoncus viverra. Pellentesque nulla orci, accumsan volutpat fringilla vitae, maximus sit amet tortor. Aliquam ultricies odio ut volutpat scelerisque. Donec nisl nisl, porttitor vitae pharetra quis, fringilla sed mi. Fusce pretium dolor ut aliquam consequat. Cras volutpat, tellus accumsan mattis molestie, nisl lacus tempus massa, nec malesuada tortor leo vel quam. Aliquam vel ex consectetur, vehicula leo nec, efficitur eros. Donec convallis non urna quis feugiat.
+I am currently a research assistant at the Graduate School of Engineering, University of Tokyo. My research focuses on computer vision, deep learning, and graph neural networks, particularly in the context of human motion understanding and representation. I am actively seeking PhD opportunities.

-My research interest includes neural machine translation and computer vision. I have published more than 100 papers at the top international AI conferences with total <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'>google scholar citations <strong><span id='total_cit'>260000+</span></strong></a> (You can also use google scholar badge <a href='https://scholar.google.com/citations?user=DhtAFkwAAAAJ'><img src="https://img.shields.io/endpoint?url={{ url | url_encode }}&logo=Google%20Scholar&labelColor=f6f6f6&color=9cf&style=flat&label=citations"></a>).
+Feel free to contact me if you would like more information or access to the code related to my work.

+# 📝 Publications

+<div class='paper-box'><div class='paper-box-image'><div><div class="badge">ECAI 2024</div><img src='images/pipeline.pdf' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

-# 🔥 News
-- *2022.02*: &nbsp;🎉🎉 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-- *2022.02*: &nbsp;🎉🎉 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
+[An Animation-based Augmentation Approach for Action Recognition from Discontinuous Video](https://arxiv.org/abs/2404.06741)

-# 📝 Publications
+**Xingyu Song**, Zhan Li, Shi Chen, Xin-Qiang Cai, Kazuyuki Demachi

+[**Project**](https://github.com/xingyu-song/4A) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong>
+- 4A (Action Animation-based Augmentation Approach) is a pipeline that enhances action recognition by generating animated pose data through 2D pose estimation, a Quaternion-based GCN, and Dynamic Skeletal Interpolation. It effectively bridges the gap between virtual and real-world data and achieves superior performance with significantly less data.
</div>
</div>
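To give a rough flavor of the skeletal-interpolation step described above, here is a minimal sketch of densifying a sparse sequence of skeleton keyframes into intermediate pose frames. It is only an illustration, not the 4A implementation: the function name `interpolate_poses`, the array shapes, and the linear blending are assumptions made for the example.

```python
# Illustrative sketch only -- not the 4A codebase. It shows the general idea of
# generating extra animated pose frames by interpolating between skeletal keyframes.
import numpy as np

def interpolate_poses(key_poses: np.ndarray, steps_between: int) -> np.ndarray:
    """key_poses: (K, J, 3) array of K keyframe poses with J joints in 3D.
    Returns a denser (K + (K - 1) * steps_between, J, 3) pose sequence."""
    frames = []
    for a, b in zip(key_poses[:-1], key_poses[1:]):
        frames.append(a)
        for s in range(1, steps_between + 1):
            t = s / (steps_between + 1)
            frames.append((1.0 - t) * a + t * b)  # linear blend of joint coordinates
    frames.append(key_poses[-1])
    return np.stack(frames)

# Example: densify 4 random keyframes of a 17-joint skeleton with 3 in-between frames each.
dense = interpolate_poses(np.random.rand(4, 17, 3), steps_between=3)
print(dense.shape)  # (13, 17, 3)
```

A quaternion-based variant would blend joint rotations rather than raw coordinates; the linear blend here is just the simplest stand-in for the densification step.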

-<div class='paper-box'><div class='paper-box-image'><div><div class="badge">CVPR 2016</div><img src='images/500x300.png' alt="sym" width="100%"></div></div>

+<div class='paper-box'><div class='paper-box-image'><div><div class="badge">ECAI 2024</div><img src='images/q_gcn.pdf' alt="sym" width="100%"></div></div>
<div class='paper-box-text' markdown="1">

-[Deep Residual Learning for Image Recognition](https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf)
+[Quater-GCN: Enhancing 3D Human Pose Estimation with Orientation and Semi-supervised Training](https://arxiv.org/abs/2404.19279)

-**Kaiming He**, Xiangyu Zhang, Shaoqing Ren, Jian Sun
+**Xingyu Song**, Zhan Li, Shi Chen, Kazuyuki Demachi

-[**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong>
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
+[**Project**](https://github.com/xingyu-song/q_gcn) <strong><span class='show_paper_citations' data='DhtAFkwAAAAJ:ALROH1vI_8AC'></span></strong>
+- Quater-GCN (Q-GCN) is a deep learning model that enhances 3D human pose estimation by incorporating both joint spatial dependencies and bone orientation. It uses a directed graph convolutional network and a semi-supervised training strategy, achieving superior performance compared to existing methods.
</div>
</div>
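As a companion illustration for the bone-orientation idea mentioned above, the sketch below turns joint coordinates into per-bone unit quaternions. Again this is an assumption-laden example rather than the released Q-GCN code: `bone_quaternions`, the reference axis, and the toy skeleton are invented for illustration.

```python
# Illustrative sketch only -- not the Q-GCN codebase. It encodes each bone's
# orientation as the shortest-arc unit quaternion rotating a reference axis
# onto the bone direction computed from parent/child joint positions.
import numpy as np

def bone_quaternions(joints: np.ndarray, parents: list[int],
                     ref_axis=(0.0, 1.0, 0.0)) -> np.ndarray:
    """joints: (J, 3) joint positions; parents[j] is joint j's parent (-1 for root).
    Returns (J, 4) quaternions (w, x, y, z); the root keeps the identity rotation."""
    ref = np.asarray(ref_axis) / np.linalg.norm(ref_axis)
    quats = np.zeros((len(parents), 4))
    quats[:, 0] = 1.0  # default to identity rotation
    for j, p in enumerate(parents):
        if p < 0:
            continue
        bone = joints[j] - joints[p]
        bone = bone / (np.linalg.norm(bone) + 1e-8)  # unit bone direction
        w = 1.0 + float(ref @ bone)                  # shortest-arc rotation ref -> bone
        xyz = np.cross(ref, bone)                    # (antiparallel case not handled)
        q = np.array([w, *xyz])
        quats[j] = q / (np.linalg.norm(q) + 1e-8)
    return quats

# Example: a toy 4-joint chain (pelvis -> spine -> neck -> head).
parents = [-1, 0, 1, 2]
joints = np.array([[0, 0, 0], [0, 0.4, 0], [0.05, 0.8, 0], [0.05, 1.0, 0.05]], dtype=float)
print(bone_quaternions(joints, parents).round(3))
```

Feeding orientation features like these alongside joint positions is the general flavor of combining spatial dependencies with bone orientation.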

-- [Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet](https://github.com), A, B, C, **CVPR 2020**
+- [Data, language and graph-based reasoning methods for identification of human malicious behaviors in nuclear security](https://www.sciencedirect.com/science/article/pii/S0957417423018699), Zhan Li, **Xingyu Song**, Shi Chen, Kazuyuki Demachi, **Expert Systems with Applications**

-# 🎖 Honors and Awards
-- *2021.10* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-- *2021.09* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
+- [Game Engine Based Data Augmentation with In-game Customization and Modeling for Malicious Behaviors Identification in Nuclear Security](https://resources.inmm.org/sites/default/files/2023-07/finalpaper_223_0418062326.pdf), **Xingyu Song**, Zhan Li, Shi Chen, Kazuyuki Demachi, **INMM/ESARDA 2023 Joint Annual Meeting**

-# 📖 Educations
-- *2019.06 - 2022.04 (now)*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-- *2015.09 - 2019.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
+- [Abnormal Detection in Nuclear Security Videos Based on Label-Specific Autoencoders and Reconstruction Errors Comparison](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641399), Zhan Li, **Xingyu Song**, Shi Chen, Kazuyuki Demachi, **SSRN Preprint**

+- [GTAutoAct: An Automatic Datasets Generation Framework Based on Game Engine Redevelopment for Action Recognition](https://arxiv.org/abs/2401.13414), **Xingyu Song**, Zhan Li, Shi Chen, Kazuyuki Demachi, **arXiv Preprint**

-# 💬 Invited Talks
-- *2021.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-- *2021.03*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. \| [\[video\]](https://github.com/)
+- [Armed Boundary Sabotage: A Case Study of Human Malicious Behaviors Identification with Computer Vision and Explainable Reasoning Methods](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4750342), Zhan Li, **Xingyu Song**, Shi Chen, Kazuyuki Demachi, **SSRN Preprint**

-# 💻 Internships
-- *2019.05 - 2020.02*, [Lorem](https://github.com/), China.
+- [Malicious behaviors identification in nuclear security based on visual relationships extraction and knowledge reasoning](https://resources.inmm.org/sites/default/files/2023-07/finalpaper_220_0425010228.pdf), Zhan Li, **Xingyu Song**, Shi Chen, Kazuyuki Demachi, **INMM/ESARDA 2023 Joint Annual Meeting**

+- [Advancement and Development of Graph-Based Reasoning Method for Human Malicious Behaviors Identification Based on Graph Contrastive Representation Learning](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4798853), Zhan Li, **Xingyu Song**, Shi Chen, Kazuyuki Demachi, **SSRN Preprint**

+# 📖 Educations
+- *2022.04 - 2024.03*, M.Sc., Graduate School of Engineering, The University of Tokyo.
+- *2021.10 - 2022.03*, Research Student, School of Fundamental Science and Engineering, Waseda University.
+- *2017.09 - 2021.07*, B.Sc., College of Hongshen/College of Computer Science, Chongqing University.
Binary file added images/AC3FC3F7-E384-4274-953A-FC8391233A26.JPG
Binary file modified images/android-chrome-192x192.png
Binary file modified images/android-chrome-512x512.png
Binary file modified images/apple-touch-icon.png
Binary file added images/avatar.JPG
Binary file modified images/favicon-16x16.png
Binary file modified images/favicon-32x32.png
Binary file modified images/favicon.ico
Binary file added images/pipeline.pdf
Binary file added images/q_gcn.pdf