October 26th, Wednesday
Lecture 7 Overview and Outlook for Functional Data Analysis
Fang Yao is Chair Professor in the School of Mathematical Sciences at Peking University, Director of the Center for Statistical Science, and Head of the Department of Probability & Statistics. He is a Fellow of the IMS and the ASA, and an elected member of the ISI. He received his B.S. degree in 2000 from the University of Science and Technology of China, and his Ph.D. degree in Statistics in 2003 from UC Davis. He was a tenured Full Professor of Statistical Science at the University of Toronto, and has been selected for the National Talents Program of China. Dr. Yao’s research focuses on the analysis of complex-structured data, including functional, high-dimensional, manifold, and non-Euclidean data objects; on incorporating machine/deep learning and partial/ordinary differential equations to establish scalable statistical modeling and inference; and on applications involving functional, high-dimensional, and differential dynamics in biomedical studies, human genetics, neuroimaging, finance and economics, and engineering. In 2014, he received the CRM-SSC Prize, which recognizes a statistical scientist’s professional accomplishments in research conducted primarily in Canada during the first 15 years after receiving a doctorate. He has served as Editor of the Canadian Journal of Statistics and has served on the editorial boards of a number of statistical journals, including the Annals of Statistics and the Journal of the American Statistical Association.
Functional data analysis (FDA) has received substantial attention in statistics and data science, as data varying with time or space are ubiquitous in the big data era, with applications arising from various disciplines, such as medical studies, public health, engineering, finance, and economics. In general, FDA approaches focus on nonparametric modeling of underlying random functions, treating the data as observed or sampled realizations of stochastic processes that satisfy certain regularity conditions, e.g., smoothness constraints. The estimation and inference procedures usually do not depend on a finite number of parameters, in contrast with parametric models, and exploit techniques, such as smoothing methods and dimension reduction, that allow the data to speak for themselves. In this talk, I will give an overview of FDA and related topics and discuss their potential outlook.
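As a concrete illustration of the dimension-reduction viewpoint described above, the sketch below performs a bare-bones functional principal component analysis on simulated curves observed on a common dense grid. The simulated process, grid, and number of components are hypothetical choices for illustration, not details from the talk.

```python
import numpy as np

# A minimal FPCA sketch, assuming densely observed curves on a common grid
# (the data-generating process here is hypothetical).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)                       # common observation grid
n = 200                                          # number of curves

# Simulate realizations of a smooth random process plus measurement noise:
# X_i(t) = xi_1 * sqrt(2) sin(2*pi*t) + xi_2 * sqrt(2) cos(2*pi*t) + noise
xi = rng.normal(0, [2.0, 1.0], size=(n, 2))      # principal component scores
phi = np.stack([np.sqrt(2) * np.sin(2 * np.pi * t),
                np.sqrt(2) * np.cos(2 * np.pi * t)])
X = xi @ phi + rng.normal(0, 0.2, size=(n, len(t)))

# Estimate the mean function and the covariance surface on the grid.
mu = X.mean(axis=0)
C = np.cov(X - mu, rowvar=False)

# Eigendecomposition of the (discretized) covariance operator; rescale so the
# eigenfunctions are orthonormal in L^2 under the Riemann-sum quadrature.
dt = t[1] - t[0]
evals, evecs = np.linalg.eigh(C * dt)
order = np.argsort(evals)[::-1]
eigenfunctions = evecs[:, order].T / np.sqrt(dt)
eigenvalues = evals[order]

# Scores via numerical integration; a few components summarize each curve.
scores = (X - mu) @ eigenfunctions[:2].T * dt
print("leading eigenvalues:", eigenvalues[:3].round(3))
```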
October 28th, Friday
Lecture 8 Applied Dynamical Systems in Science and Engineering
Wei Lin is a Full Professor of applied mathematics at Fudan University. He is currently serving as the Dean of the Research Institute of Intelligent Complex Systems, the Vice Dean of the School of Data Science, the Director of the Centre for Computational Systems Biology, and the Vice Director of the MOE Frontiers Center for Brain Science at Fudan University. His research interests include applied mathematics and its applications to complex systems, artificial intelligence, and computational biology. His major contributions have been published in prestigious journals and conference proceedings, including PRL, PNAS, Nature Communications, Nature Physics, NSR, Research, IEEE Transactions, SIAM journals, AAAI, ICLR, and NeurIPS. Dr. Lin is a Senior Member of IEEE. He received the Outstanding Young Scholar Fund from the NSFC in 2019 and was selected as the Chief Scientist of the National Key R&D Program of China in 2018. He received the Best Paper Prize from the International Consortium of Chinese Mathematicians in 2019 and was a second recipient of the First Prize of the Shanghai Natural Science Awards in 2020.
Dynamical systems and their theory have been a paramount tool, broadly applied to solving scientific problems and deciphering the mechanisms hidden in real-world systems. This talk will review classical advances in applying dynamical systems to various scientific and engineering domains. It will also introduce the most recent progress achieved in applying dynamical systems in the era of big data and artificial intelligence. We believe that dynamical systems will continue to be applied to uncovering significant principles and, conversely, that the field and its theory will flourish further as these principles are uncovered.
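As a small, self-contained illustration of the kind of classical system such applications build on, the sketch below integrates the Lorenz equations, a standard textbook example (chosen here for illustration, not taken from the talk), and exhibits the sensitivity to initial conditions that makes data-driven study of such systems challenging.

```python
import numpy as np
from scipy.integrate import solve_ivp

# The Lorenz system, a classical applied dynamical system (textbook example).
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate the ODE system from a fixed initial condition.
sol = solve_ivp(lorenz, t_span=(0.0, 40.0), y0=[1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# A nearby trajectory diverges quickly: the hallmark of chaotic dynamics.
sol2 = solve_ivp(lorenz, t_span=(0.0, 40.0), y0=[1.0 + 1e-6, 1.0, 1.0],
                 dense_output=True, rtol=1e-8, atol=1e-10)
ts = np.linspace(0, 40, 401)
gap = np.linalg.norm(sol.sol(ts) - sol2.sol(ts), axis=0)
print("separation at t = 0, 20, 40:", gap[[0, 200, 400]])
```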
October 30th, Sunday
Lecture 9 On Presuppositions of Machine Learning: A Best-fitting Theory
Zongben Xu was born in 1955. He received his Ph.D. degree in mathematics from Xi’an Jiaotong University, China, in 1987. His current research interests include applied mathematics and mathematical methods for big data and artificial intelligence. He established the L(1/2) regularization theory for sparse information processing. He also discovered and verified the Xu-Roach Theorem in machine learning, and established the visual-cognition-based data modeling principle, which have been widely applied in scientific and engineering fields. He initiated several mathematical theories, including the non-logarithmic-transform-based CT model and ultrafast MRI imaging, which provide principles and technologies for the development of a new generation of intelligent medical imaging equipment. He received the Tan Kah Kee Science Award in Information Technology Science in 2018, the National Natural Science Award of China in 2007, and the CSIAM Su Buchin Applied Mathematics Prize in 2008. He delivered a 45-minute talk at the International Congress of Mathematicians in 2010. He was elected a member of the Chinese Academy of Sciences in 2011.
Zongben Xu served as Vice President of Xi’an Jiaotong University. He currently holds several important service roles for government and professional societies, including Director of Pazhou Lab (Guangzhou), Director of the National Engineering Laboratory for Big Data Analytics, member of the National Big Data Expert Advisory Committee, and member of the Strategic Advisory Committee of the National Open Innovation Platform for New Generation Artificial Intelligence.
Zongben Xu is a member of the Chinese Academy of Sciences, a mathematician, an expert in signal and information processing, and a professor at Xi’an Jiaotong University. His research focuses on the foundational theory of intelligent information processing, machine learning, and data modeling. He proposed the L(1/2) regularization theory for sparse information processing, providing an important foundation for sparse microwave imaging; he discovered and proved the Xu-Roach Theorem in machine learning, resolving several difficult problems in neural networks and simulated evolutionary computation and providing general quantitative criteria for machine learning and nonlinear analysis in non-Euclidean frameworks; and he proposed new principles and methods for data modeling based on visual cognition, yielding a series of core data mining algorithms for clustering, discriminant analysis, and latent variable analysis that are widely applied in science and engineering. His honors include the Second Prize of the National Natural Science Award, the Second Prize of the National Science and Technology Progress Award, the Shaanxi Province Top Science and Technology Award, the IAITQM Richard Price Data Science Award, the Tan Kah Kee Science Award in Information Technology Science, and the CSIAM Su Buchin Applied Mathematics Prize. He delivered a 45-minute invited talk at the 2010 International Congress of Mathematicians.
Machine learning has been applied with a set of prerequisites or hypotheses, the optimal setting of which is a ‘chicken or the egg’ problem. These hypotheses include in particular (i) the Large Capacity Hypothesis on the hypothesis space, (ii) the Independence Hypothesis on the loss function, (iii) the Completeness Hypothesis on the training data, (iv) the Prior-Determine-Regularizer Hypothesis on regularization terms, and (v) the Euclidean Hypothesis on the analysis framework. In this talk, we analyze the role, effect, and limitations of these hypotheses, and propose a systematic way, which could be named a best-fitting theory, to break through each of them.
More specifically, we propose a model-driven deep learning approach to break through the Large Capacity Hypothesis, develop a noise modeling principle to breach the Independence Hypothesis, suggest an axiomatic curriculum/self-paced learning approach for the Completeness Hypothesis, the implicit regularization method for the Prior-Determine-Regularizer Hypothesis, and Banach space geometry for the Euclidean Hypothesis. In each case, we present the best-fitting strategy and substantiate the value and outcome of the breakthrough. We also show that continued effort to break through the hypotheses of ML is needed, which opens new and active directions for ML research.
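To make the implicit regularization idea in (iv) concrete, the toy sketch below uses a standard textbook illustration, not the speaker’s specific construction: plain gradient descent on an underdetermined least-squares problem, started at zero, converges to the minimum-L2-norm solution even though the loss contains no explicit regularizer.

```python
import numpy as np

# Toy illustration of implicit regularization (a generic example): gradient
# descent on an underdetermined least-squares problem, initialized at zero,
# converges to the minimum-norm interpolant without any explicit penalty.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 100))          # 20 equations, 100 unknowns
b = rng.normal(size=20)

w = np.zeros(100)
lr = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 1/L for stability
for _ in range(20000):
    w -= lr * A.T @ (A @ w - b)         # plain gradient step on ||Aw - b||^2 / 2

# The minimum-L2-norm solution, computed explicitly via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ b
print("residual:", np.linalg.norm(A @ w - b))
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))
```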