

Date: 2022-10-24

10:00-11:00 a.m.

October 26th, Wednesday

Lecture 7 Overview and Outlook for Functional Data Analysis



Speaker Bio

Fang Yao is Chair Professor in the School of Mathematical Sciences at Peking University, Director of the Center for Statistical Science, and Head of the Department of Probability and Statistics. He is a Fellow of the IMS and the ASA, and an elected member of the ISI. He received his B.S. degree in 2000 from the University of Science and Technology of China, and his Ph.D. degree in Statistics in 2003 from UC Davis. He was a tenured Full Professor in Statistical Sciences at the University of Toronto, and has been selected into the National Talents Program of China. Dr. Yao's research focuses on complex-structured data analysis, including functional, high-dimensional, manifold, and non-Euclidean data objects; incorporating machine/deep learning and partial/ordinary differential equations to establish scalable statistical modeling and inference; and applications involving functional, high-dimensional, and differential dynamics in biomedical studies, human genetics, neuroimaging, finance and economics, engineering, etc. In 2014, he received the CRM-SSC Prize, which recognizes a statistical scientist's professional accomplishments in research primarily conducted in Canada during the first 15 years after receiving a doctorate. He has served as the Editor of the Canadian Journal of Statistics, and is or has been on the editorial boards of a number of statistical journals, including the Annals of Statistics and the Journal of the American Statistical Association.

Fang Yao is a Chair Professor at Peking University, a member of the National High-Level Talents Program, Director of the Center for Statistical Science, and Head of the Department of Probability and Statistics at Peking University. He is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association. He received his B.S. in Statistics from the University of Science and Technology of China in 2000 and his Ph.D. in Statistics from the University of California, Davis in 2003, and previously served as a tenured Full Professor in the Department of Statistical Sciences at the University of Toronto. His main research interests are complex-structured data analysis, including functional, high-dimensional, manifold, and non-Euclidean data; integrating machine/deep learning methodology with mechanistic models such as differential equations to establish scalable statistical learning and inference; and applications of functional, high-dimensional, and differential dynamical methods in biomedicine, human genomics, neuroimaging, finance and economics, engineering, and other fields. For his foundational and pioneering contributions to functional data analysis, he received the 2014 CRM-SSC Prize, awarded jointly by the Statistical Society of Canada and the Centre de Recherches Mathématiques to a statistician making outstanding contributions within 15 years of the doctorate. He has served as Editor of the Canadian Journal of Statistics and sits on the editorial boards of several statistical journals, including the top journals the Annals of Statistics and the Journal of the American Statistical Association.


Functional data analysis (FDA) has received substantial attention in statistics and data science, as data varying over time or space are ubiquitous in today's big-data era, with applications arising in various disciplines such as medical studies, public health, engineering, finance, and economics. In general, FDA approaches model the underlying random functions nonparametrically, treating the data as observed or sampled realizations of stochastic processes that satisfy regularity conditions, e.g., smoothness constraints. In contrast with parametric models, the estimation and inference procedures usually do not depend on a finite number of parameters, and they exploit techniques, such as smoothing methods and dimension reduction, that allow the data to speak for themselves. In this talk, I will give an overview of FDA and related topics and discuss some potential directions for the future.
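The dimension-reduction step mentioned above is commonly carried out via functional principal component analysis. The following minimal numpy sketch (not from the talk; the simulated curves, grid, and eigenvalues are assumptions for illustration only) estimates functional principal components by eigendecomposing the sample covariance of curves observed on a common grid:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)            # common observation grid on [0, 1]
n = 200                               # number of observed curves

# Simulate X_i(t) = sum_k xi_ik * phi_k(t) + noise (Karhunen-Loeve form),
# with orthonormal eigenfunctions and eigenvalues 2.0 and 0.5.
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)
scores = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])
X = scores @ np.vstack([phi1, phi2]) + rng.normal(scale=0.2, size=(n, len(t)))

# Functional PCA: eigendecompose the pointwise sample covariance matrix.
Xc = X - X.mean(axis=0)               # center the curves
dt = t[1] - t[0]
C = Xc.T @ Xc / n                     # (grid x grid) sample covariance
vals, vecs = np.linalg.eigh(C)        # ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]
eigvals = vals * dt                   # quadrature weight -> operator eigenvalues
eigfuns = vecs / np.sqrt(dt)          # normalized so that integral of phi^2 is 1

print(eigvals[:3])                    # first two approximate 2.0 and 0.5
```

The leading estimated eigenvalues recover the simulated ones, while the rest stay near the noise level, so each curve can be summarized by a few principal component scores.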


10:00-11:00 a.m.

October 28th, Friday

Lecture 8 Applied Dynamical Systems in Science and Engineering



Speaker Bio

Wei Lin is a Full Professor of applied mathematics at Fudan University. He is currently serving as the Dean of the Research Institute of Intelligent Complex Systems, the Vice Dean of the School of Data Science, the Director of the Centre for Computational Systems Biology, and the Vice Director of the MOE Frontiers Center for Brain Science, Fudan University. His research interests include applied mathematics and its applications to complex systems, artificial intelligence, and computational biology. His major contributions have been published in prestigious journals and conference proceedings, including PRL, PNAS, Nature Communications, Nature Physics, NSR, Research, IEEE Transactions, SIAM journals, AAAI, ICLR, and NeurIPS. Dr. Lin is a Senior Member of the IEEE. He received the Outstanding Young Scholar Fund from the NSFC in 2019 and was selected as the Chief Scientist of the National Key R&D Program of China in 2018. He was a recipient of the Best Paper Prize from the International Consortium of Chinese Mathematicians in 2019 and a second recipient of the First Prize of the Shanghai Natural Science Awards in 2020.



Dynamical systems, together with the theory developed around them, have been a paramount tool, broadly applied to solving scientific problems and deciphering the mechanisms hidden in real-world systems. This talk will review classical advances in applying dynamical systems to various scientific and engineering regimes, and will also introduce the most recent progress in applications of dynamical systems in the era of big data and artificial intelligence. We believe that dynamical systems will continue to be applied to uncovering significant principles and that, conversely, the theory itself will be further enriched as these principles are uncovered.


3:00-4:00 p.m.

October 30th, Sunday

Lecture 9 On Presuppositions of Machine Learning: A Best-fitting Theory




Speaker Bio

Zongben Xu was born in 1955. He received his Ph.D. degree in mathematics from Xi'an Jiaotong University, China, in 1987. His current research interests include applied mathematics and mathematical methods for big data and artificial intelligence. He established the L(1/2) regularization theory for sparse information processing. He also discovered and proved the Xu-Roach theorem in machine learning, and established the visual-cognition-based data modelling principle, both of which have been widely applied in scientific and engineering fields. He initiated several mathematical theories, including the non-logarithmic-transform-based CT model and ultrafast MRI imaging, which provide principles and technologies for the development of a new generation of intelligent medical imaging equipment. He received the Tan Kah Kee Science Award in 2018, the National Natural Science Award of China in 2007, and the CSIAM Su Buchin Applied Mathematics Prize in 2008. He delivered a 45-minute talk at the International Congress of Mathematicians in 2010, and was elected a member of the Chinese Academy of Sciences in 2011.

Zongben Xu was the vice-president of Xi'an Jiaotong University. He currently serves in several important roles for government and professional societies, including Director of Pazhou Lab (Guangzhou), Director of the National Engineering Laboratory for Big Data Analytics, member of the National Big Data Expert Advisory Committee, and member of the Strategic Advisory Committee of the National Open Innovation Platform for New Generation Artificial Intelligence.

Zongben Xu is a member of the Chinese Academy of Sciences, a mathematician and expert in signal and information processing, and a professor at Xi'an Jiaotong University. His research focuses on the foundations of intelligent information processing, machine learning, and data modelling. He proposed the L(1/2) regularization theory for sparse information processing, which laid an important foundation for sparse microwave imaging; discovered and proved the Xu-Roach theorem in machine learning, resolving several difficult problems in neural networks and simulated evolutionary computation and providing general quantitative criteria for machine learning and nonlinear analysis in non-Euclidean frameworks; and proposed new principles and methods of visual-cognition-based data modelling, yielding a series of core data-mining algorithms for clustering, discriminant analysis, and latent-variable analysis that are widely applied in science and engineering. His honors include the Second Prize of the National Natural Science Award, the Second Prize of the National Science and Technology Progress Award, the Shaanxi Province Top Science and Technology Award, the international IAITQM Richard Price Data Science Award, the Tan Kah Kee Science Award in Information Technology Science of China, and the CSIAM Su Buchin Applied Mathematics Prize. He delivered a 45-minute invited talk at the 2010 International Congress of Mathematicians.



Machine learning is applied under a set of prerequisites or hypotheses, the optimal setting of which is a 'chicken or the egg' problem. These hypotheses include, in particular, (i) the Large Capacity Hypothesis on the hypothesis space, (ii) the Independence Hypothesis on the loss function, (iii) the Completeness Hypothesis on the training data, (iv) the Prior-Determine-Regularizer Hypothesis on regularization terms, and (v) the Euclidean Hypothesis on the analysis framework. In this talk we analyze the role, effect, and limitations of these hypotheses, and propose a systematic way, which could be named a best-fitting theory, to break through each of them.

More specifically, we propose the model-driven deep learning approach to break the Large Capacity Hypothesis, develop a noise-modeling principle to breach the Independence Hypothesis, and suggest the axiomatic curriculum/self-paced learning approach for the Completeness Hypothesis, the implicit regularization method for the Prior-Determine-Regularizer Hypothesis, and Banach space geometry for the Euclidean Hypothesis. In each case, we present the best-fitting strategy and substantiate the value and outcome of the breakthrough. We also show that continued effort toward breaking the hypotheses of ML is needed, which opens promising new directions of ML research.

Machine learning is the most fundamental and central technique (algorithm) of artificial intelligence, but its execution usually presupposes a set of basic prior hypotheses, including: the large-capacity hypothesis on the hypothesis space, the completeness hypothesis on the training data, the data-independence hypothesis on the loss measure, the prior-determination hypothesis on the regularization terms, and the Euclidean-space hypothesis on the analysis framework. This talk analyzes the role, limitations, and impact of these hypotheses and proposes possible routes and methods for breaking through them. In particular, we propose model-driven deep learning to break the large-capacity hypothesis of the hypothesis space, curriculum/self-paced learning to break the completeness hypothesis of the training data, the error-modeling principle to break the data-independence hypothesis of the loss measure, implicit regularization to break the prior-determination hypothesis of the regularization terms, and Banach-space geometry to break the Euclidean-space hypothesis of the analysis framework. In each case, we give examples of the new value brought by the breakthrough. Together, these attempts constitute a best-fitting theory of machine learning and a new direction of current machine learning research.
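The implicit-regularization idea invoked above can be made concrete with a standard textbook example (our own numpy sketch, not material from the talk): on an underdetermined least-squares problem, gradient descent initialized at zero needs no explicit regularization term, because its iterates stay in the row space of the design matrix and therefore converge to the minimum-Euclidean-norm interpolating solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 100                       # fewer equations than unknowns
A = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Gradient descent on the least-squares objective 0.5 * ||A w - y||^2,
# starting from w = 0, with a step size below 1 / ||A||_2^2.
w = np.zeros(p)
lr = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(20000):
    w -= lr * A.T @ (A @ w - y)      # gradient of the objective above

# Every update lies in the row space of A, so among the infinitely many
# interpolants gradient descent implicitly selects the minimum-norm one,
# which equals the pseudoinverse solution.
w_min = np.linalg.pinv(A) @ y
print(np.linalg.norm(w - w_min))     # essentially zero
```

No regularizer was written down, yet the algorithm's dynamics impose one; this is the flavor of argument used to relax the prior-determination hypothesis on regularization terms.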