Speaker: Prof. Dianhui Wang (La Trobe University, Australia)
Randomized learning techniques for neural networks have been explored and developed since the late 1980s, and have received considerable attention due to their potential to effectively resolve modelling problems in big data settings. Random Vector Functional-Link (RVFL) networks, a class of randomized learner models, can be regarded as feed-forward neural networks trained with a specific randomized algorithm: the hidden weights and biases are randomly assigned and kept fixed during the training phase. In this talk, we provide some insights into RVFL networks and highlight some practical issues and common pitfalls associated with RVFL-based modelling techniques. Inspired by the folklore that "all high-dimensional random vectors are almost always nearly orthogonal to each other", we establish a theoretical result on the inability of RVFL networks to universally approximate nonlinear maps when an RVFL network is built incrementally with random selection of the input weights and biases from a fixed scope, together with constructive evaluation of its output weights. We also address the significance of the scope setting of the random weights and biases with respect to modelling performance, and empirically reveal the correlation between the rank of the hidden output matrix and the learner's generalization capability.
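To make the RVFL construction described above concrete, here is a minimal sketch (not the speaker's implementation) assuming a standard RVFL formulation: input weights and biases drawn uniformly from a fixed scope, a tanh activation, direct input-to-output links, and output weights solved by least squares. All function names and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=50, scope=1.0):
    """Fit an RVFL-style model: hidden weights/biases are random and
    fixed; only the output weights are learned (by least squares)."""
    d = X.shape[1]
    W = rng.uniform(-scope, scope, size=(d, n_hidden))  # random input weights
    b = rng.uniform(-scope, scope, size=n_hidden)       # random biases
    H = np.tanh(X @ W + b)                              # hidden output matrix
    A = np.hstack([X, H])                               # direct links + hidden nodes
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # output weights
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Toy regression: approximate sin on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = rvfl_fit(X, y)
err = np.max(np.abs(rvfl_predict(X, W, b, beta) - y))

# Rank of the hidden output matrix, whose correlation with
# generalization performance is discussed in the talk
rank = np.linalg.matrix_rank(np.tanh(X @ W + b))
```

Varying the `scope` parameter here is one way to observe the phenomenon the abstract mentions: a poorly chosen scope can reduce the effective rank of the hidden output matrix and degrade generalization.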
Dr. Wang received his Master's degree in Applied Mathematics and his PhD in Industrial Automation in 1992 and 1995 respectively, both from Northeastern University. He spent two years as a postdoctoral fellow at Nanyang Technological University, Singapore, and three years as a Research Fellow at The Hong Kong Polytechnic University. He joined La Trobe University in July 2001 and has remained in its Department of Computer Science and Information Technology since then. Dr. Wang was promoted in 2007 and now serves as a Reader and Associate Professor in Computer Science. He has broad research experience in applied mathematics, control engineering, and computer science. To date, he has published about 200 technical papers in international journals and conference proceedings. His current research topics include data mining and computational intelligence for large-scale data analytics. He serves as an Associate Editor for IEEE TNNLS, IEEE TCYB, Information Sciences, WIREs Data Mining and Knowledge Discovery, Neurocomputing, and Applied Mathematical Modelling.