Posted: 2021-10-09

Time: October 22, 2021, 4:30 PM

Title: Occam's Razor in neural networks

Speaker: Zhiqin Xu (许志钦), Associate Professor, Shanghai Jiao Tong University


Abstract:

I will demonstrate that a neural network (NN) learns the training data as simply as it can, resembling an implicit Occam's Razor, from two viewpoints. First, the NN output often follows a frequency principle: the network learns the data from low frequency to high frequency. The frequency principle qualitatively explains various phenomena of NNs in applications. Second, when initialized with small weights, the NN weights condense onto isolated directions, so the effective network size is much smaller than its actual size, i.e., the network finds a simple representation of the training data.
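
As a toy illustration of the first viewpoint (not part of the talk material), the Python/PyTorch sketch below fits the two-frequency target sin(x) + 0.5 sin(5x) with a small tanh network and prints the relative error of the low-frequency (k=1) and high-frequency (k=5) Fourier components of the prediction during training; under these settings the low-frequency component typically converges first, consistent with the frequency principle. The network width, learning rate, and target function are arbitrary choices for the demo.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 256 evenly spaced points on [-pi, pi), so sin(x) and sin(5x) fall
# exactly into DFT bins k=1 and k=5 of the sampled signal
x = (torch.arange(256, dtype=torch.float32) / 256 * 2 * np.pi - np.pi).unsqueeze(1)
y = torch.sin(x) + 0.5 * torch.sin(5 * x)

# small tanh network; width and learning rate are arbitrary demo choices
net = nn.Sequential(nn.Linear(1, 200), nn.Tanh(), nn.Linear(200, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def rel_err(pred, target, k):
    # relative error of the k-th Fourier component of the prediction
    fp = torch.fft.rfft(pred.squeeze())
    ft = torch.fft.rfft(target.squeeze())
    return (torch.abs(fp[k] - ft[k]) / torch.abs(ft[k])).item()

for step in range(5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            pred = net(x)
        print(f"step {step:5d}  low-freq (k=1) err {rel_err(pred, y, 1):.3f}"
              f"  high-freq (k=5) err {rel_err(pred, y, 5):.3f}")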


Biography:

Zhiqin Xu (许志钦) is a tenure-track associate professor at the Institute of Natural Sciences / School of Mathematical Sciences, Shanghai Jiao Tong University. He received his bachelor's degree from Zhiyuan College, Shanghai Jiao Tong University, in 2012, and his Ph.D. in applied mathematics from Shanghai Jiao Tong University in 2016. From 2016 to 2019 he was a postdoctoral researcher at NYU Abu Dhabi and the Courant Institute. His main research interests are machine learning and computational neuroscience. His papers have appeared in journals and conferences including the Journal of Machine Learning Research, NeurIPS (Spotlight), AAAI, Communications in Computational Physics, European Journal of Neuroscience, and Communications in Mathematical Sciences.



