<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Pan Zhang</title>
<meta name="author" content="Pan Zhang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Pan Zhang</name>
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a>
<a href="https://github.com/panzhang0212">Github</a>
<a href="docs/CV_210310.pdf">CV</a>
</p>
<p>I am a joint Ph.D. student of <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/">Microsoft Research Asia (MSRA)</a> and the <a href="https://www.ustc.edu.cn/">University of Science and Technology of China (USTC)</a>.
I work as a research intern in the <a href="https://www.microsoft.com/en-us/research/group/visual-computing/">Visual Computing Group</a> at MSRA,
under the supervision of Researcher <a href="https://www.microsoft.com/en-us/research/people/zhanbo/">Bo Zhang</a>, Principal Research Manager <a href="https://www.microsoft.com/en-us/research/people/doch/">Dong Chen</a>,
and <a href="https://www.microsoft.com/en-us/research/people/bainguo/">Prof. Baining Guo</a>.
</p>
<p>
I received my B.S. from the Department of Electronic Engineering and Information Science at USTC in 2017.
</p>
</td>
<td style="padding:15% 7% 7% 7%;width:40%;max-width:40%">
<a href="images/PanZhang.jpg"><img style="width:100%;max-width:100%" alt="profile photo" src="images/PanZhang.jpg" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Publications</heading>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one" >
<img src='images/ProDA_teaser.png' style="width:100%;max-width:100%; position: absolute;top: -5%">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation</papertitle>
<br>
<strong>Pan Zhang</strong>, Bo Zhang, Ting Zhang, Dong Chen, Yong Wang, Fang Wen
<br>
<em>2021 IEEE Conference on Computer Vision and Pattern Recognition</em>, CVPR 2021
<br>
<a href="https://arxiv.org/abs/2101.10979">[Paper]</a>
<a href="https://github.com/microsoft/ProDA">[Code]</a>
<a href="docs/proda_bib.txt">[BibTeX]</a>
<br>
<p>We propose ProDA for unsupervised domain adaptation, which uses prototypes to denoise pseudo labels online and to learn a compact feature space for the target domain.
The proposed method outperforms state-of-the-art methods by a large margin, greatly reducing the
gap with supervised learning.
</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one" >
<img src='images/full_resolution_teaser.png' style="width:100%;max-width:100%; position: absolute;top: -5%">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Full-Resolution Correspondence Learning for Image Translation</papertitle>
<br>
Xingran Zhou, Bo Zhang, Ting Zhang, <strong>Pan Zhang</strong>, Jianmin Bao, Dong Chen, Zhongfei Zhang, Fang Wen
<br>
<em>2021 IEEE Conference on Computer Vision and Pattern Recognition</em>, CVPR 2021,
<strong><font color="#FF0000">Oral Presentation</font></strong>
<br>
<a href="https://arxiv.org/abs/2012.02047">[Paper]</a>
<a href="docs/full_resolution_bib.txt">[BibTeX]</a>
<br>
<p>We present full-resolution correspondence learning for cross-domain images, which aids image translation.
We adopt a hierarchical strategy that uses the correspondence at the coarse level to guide the finer levels
via the proposed GRU-assisted PatchMatch.
</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one" >
<img src='images/OldPhotos2_teaser.png' style="width:100%;max-width:100%; position: absolute;top: -5%">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Old Photo Restoration via Deep Latent Space Translation</papertitle>
<br>
Ziyu Wan, Bo Zhang, Dongdong Chen, <strong>Pan Zhang</strong>, Dong Chen, Jing Liao, Fang Wen
<br>
<em>arXiv preprint, Sep 2020</em>
<br>
<a href="https://arxiv.org/pdf/2009.07047.pdf">[Paper]</a>
<a href="docs/oldphoto_bib2.txt">[BibTeX]</a>
<br>
<p>We propose to restore old photos by translating between two VAE latent spaces, followed by a face refinement network that recovers fine facial details in the
old photos.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/CocoNet_teaser.png' style="width:100%;max-width:100%; position: absolute;top: -5%">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Cross-domain Correspondence Learning for Exemplar-based Image Translation</papertitle>
<br>
<strong>Pan Zhang</strong>, Bo Zhang, Dong Chen, Lu Yuan, Fang Wen
<br>
<em>2020 IEEE Conference on Computer Vision and Pattern Recognition</em>, CVPR 2020,
<strong><font color="#FF0000">Oral Presentation</font></strong>
<br>
<a href="https://panzhang0212.github.io/CoCosNet/">[Project]</a>
<a href="https://arxiv.org/abs/2004.05571">[Paper]</a>
<a href="https://github.com/microsoft/CoCosNet">[Code]</a>
<a href="https://www.dropbox.com/s/g7dezxm2mhw6gqo/CoCosNet%20slides.pptx?dl=0">[Slides]</a>
<a href="https://youtu.be/BdopAApRSgo">[Youtube]</a>
<a href="docs/cocosnet_bib.txt">[BibTeX]</a>
<br>
<p>We present a general framework for exemplar-based image translation by jointly learning the cross-domain correspondence and the image translation, where
both tasks facilitate each other and thus can be learned with weak supervision.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one" >
<img src='images/OldPhotos_teaser.png' style="width:100%;max-width:100%; position: absolute;top: -5%">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Bringing Old Photos Back to Life</papertitle>
<br>
Ziyu Wan, Bo Zhang, Dongdong Chen, <strong>Pan Zhang</strong>, Dong Chen, Jing Liao, Fang Wen
<br>
<em>2020 IEEE Conference on Computer Vision and Pattern Recognition</em>, CVPR 2020,
<strong><font color="#FF0000">Oral Presentation</font></strong>
<br>
<a href="http://raywzy.com/Old_Photo/">[Project]</a>
<a href="https://arxiv.org/abs/2004.09484">[Paper]</a>
<a href="https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life">[Code]</a>
<a href="docs/oldphoto_bib.txt">[BibTeX]</a>
<br>
<p>We propose to restore old photos that suffer from severe, compound degradations with a deep learning approach:
a novel triplet domain translation network that leverages real photos along with massive synthetic image pairs.</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>