My experience at ICASSP 2019
This year was my first time attending ICASSP, and it was a wonderful experience. I learned more about what people in the signal processing community are doing, who the leaders in the different fields are, and how important it is to make connections with our colleagues. Overall, ICASSP 2019 had a significant number of papers on projected gradient methods, along with deep learning applications. Another common interest is the convex reformulation of non-convex problems in a way that preserves the global solution.

In terms of research groups, the most impressive one was Prof. Yonina Eldar’s group, with 16 papers. The other outstanding team was Prof. Nicholas Sidiropoulos’s group. I also met my undergrad friend, Minh Trinh. He is doing very well with his advisor, Prof. Marius Pesavento, on sensor estimation methods.

I realize that I need to learn a lot more (horizontally) in order to catch up with the community, understand their work, and hold discussions with them. Problems and concepts such as massive MIMO, autoencoders, estimation bounds, DoA, etc. seem to be fundamental, and it is necessary to gain a solid understanding of them.

Selecting which sessions to attend is also an important skill. On the first day I was too general, committing to entire sessions; as a result, I missed lots of interesting things happening in other sessions during the same time slot. The next day, I realized that planning the schedule around each paper of interest is much more efficient: I looked at the sessions most related to my research and arranged them so that every two-hour period had one hour of lectures (three presentations) and one hour of posters.
Here is my “TO DO” list of papers after attending ICASSP 2019:
Improved Estimation of Eigenvalues and Eigenvectors of Covariance Matrices Using Their Sample Estimates, Xavier Mestre. Minh suggested this paper after our discussion on the ESD (empirical spectral distribution) of covariance matrices; the quick sketch below is a reminder of why the plain sample eigenvalues are not good enough in high dimensions.
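A minimal numpy illustration (my own toy example, nothing from the paper): when the dimension is comparable to the number of samples, the sample eigenvalues spread far away from the true ones, roughly following the Marchenko–Pastur law, which is exactly why improved estimators are needed.

```python
# Toy example: true covariance is the identity, so every true eigenvalue is 1,
# yet the sample eigenvalues spread over a wide interval when p is comparable to n.
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400                      # dimension comparable to sample size
X = rng.standard_normal((n, p))      # samples from N(0, I_p)
S = X.T @ X / n                      # sample covariance matrix
eigvals = np.linalg.eigvalsh(S)

print("true eigenvalues are all 1.0")
print(f"sample eigenvalues range from {eigvals.min():.2f} to {eigvals.max():.2f}")

# Marchenko–Pastur predicts support roughly [(1 - sqrt(p/n))^2, (1 + sqrt(p/n))^2]
c = p / n
print(f"MP support approx [{(1 - np.sqrt(c))**2:.2f}, {(1 + np.sqrt(c))**2:.2f}]")
```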
(Huy Phan). This paper received a best paper award for its broad impact in recent years.
Statistical rank selection for incomplete low-rank matrices, Yao Xie. Although this paper won the best student paper award, I was not impressed by their presentation. That alone is a good reason to check out the paper and see what I was missing.
Robust M-estimation based matrix completion, Abdelhak Zoubir. This paper is essentially alternating minimization with different norms that are more robust to noise (a toy sketch of that idea is below). The idea is interesting, and the author is one of my advisor’s friends. I had a nice discussion with him at his poster presentation.
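A minimal sketch of the general idea as I understood it at the poster, not the authors’ actual algorithm: alternate updates of a low-rank factorization U Vᵀ, but with a robust Huber-type loss on the observed residuals instead of plain least squares. The rank, step size, and Huber threshold below are arbitrary choices of mine.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss: quadratic near 0, linear for large residuals."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_complete(M, mask, rank=5, lr=0.02, iters=3000, delta=1.0, seed=0):
    """Alternating gradient steps on U and V with a robust loss on observed entries."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M)      # residuals on observed entries only
        U -= lr * (huber_grad(R, delta) @ V)
        R = mask * (U @ V.T - M)
        V -= lr * (huber_grad(R, delta).T @ U)
    return U @ V.T

# Toy test: low-rank matrix with missing entries and a few large outliers.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(A.shape) < 0.6                                    # ~60% observed
noisy = A + 10.0 * (rng.random(A.shape) < 0.02) * rng.standard_normal(A.shape)
A_hat = robust_complete(noisy, mask, rank=3)
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```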
The geometry of equality-constrained global consensus problems, Michael Wakin. This paper uses gradient ADMM in a distributed setting (a toy consensus-ADMM sketch is below). I wonder whether this can be extended to NMF.
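To keep the idea fresh, here is a minimal consensus-ADMM sketch on a toy distributed least-squares problem, with the usual exact x-minimization replaced by a single gradient step. This is only my rough mental model of the setting, not the paper’s formulation; every objective and parameter below is made up for illustration.

```python
# Each of N agents holds a local objective ||A_i x - b_i||^2 and a local copy x_i;
# the constraint is x_i = z for all i (global consensus). Scaled-form ADMM updates,
# but the x_i-update is a single gradient step instead of an exact minimization.
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 5, 10, 20                       # agents, variable dimension, rows per agent
x_true = rng.standard_normal(d)
A = [rng.standard_normal((m, d)) for _ in range(N)]
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]

rho, lr, iters = 1.0, 0.005, 2000
x = [np.zeros(d) for _ in range(N)]       # local copies
u = [np.zeros(d) for _ in range(N)]       # scaled dual variables
z = np.zeros(d)                           # global consensus variable

for _ in range(iters):
    for i in range(N):
        grad = 2 * A[i].T @ (A[i] @ x[i] - b[i]) + rho * (x[i] - z + u[i])
        x[i] = x[i] - lr * grad           # gradient step instead of exact x-minimization
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)   # consensus (averaging) step
    for i in range(N):
        u[i] = u[i] + x[i] - z            # dual update

print("max consensus error:", max(np.linalg.norm(x[i] - z) for i in range(N)))
print("distance to x_true:", np.linalg.norm(z - x_true))
```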