Learning Using Statistical Invariants (Revision of Machine Learning Problem)

Speaker

Vladimir Vapnik

Host

Daniela Rus
CSAIL

Abstract:
This talk covers a new learning paradigm. In the classical paradigm, the learning machine uses a purely data-driven model of learning. In the LUSI (Learning Using Statistical Invariants) paradigm, the learning machine computes statistical invariants that are specific to the problem and then minimizes the expected error in a way that preserves these invariants; learning is thus both data-driven and intelligence-driven. Mathematically, methods of the new paradigm employ both strong and weak convergence mechanisms, increasing the rate of convergence. LUSI describes a complete theory of learning and can be considered a mathematical alternative to the "deep learning" heuristic. The talk includes content from a paper published in Machine Learning (Springer, 2018).
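As a rough sketch of the core idea (the notation here is an assumption for illustration, not taken from the talk): given training data $(x_i, y_i)$, $i = 1, \dots, \ell$, and a set of predicate functions $\psi_k$, the learning machine is constrained so that its estimate of the conditional probability $P(y = 1 \mid x)$ preserves empirical invariants of the form

```latex
\frac{1}{\ell}\sum_{i=1}^{\ell} \psi_k(x_i)\, P(y = 1 \mid x_i)
\;\approx\;
\frac{1}{\ell}\sum_{i=1}^{\ell} \psi_k(x_i)\, y_i,
\qquad k = 1, \dots, m,
```

and the expected error is then minimized subject to these constraints. For example, the trivial predicate $\psi(x) \equiv 1$ simply forces the average predicted probability to match the empirical class frequency; richer predicates encode problem-specific knowledge.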

Bio:
Vladimir Vapnik has taught and researched in computer science and in theoretical and applied statistics for over 30 years. His major achievements include a general theory of minimizing the expected risk using empirical data and a new type of learning machine, the Support Vector Machine, that possesses a high level of generalization ability. These techniques have been used in constructing intelligent machines. Prof. Vapnik received his master's degree in mathematics from Uzbek State University, Samarkand, USSR, in 1958, and completed his Ph.D. in statistics at the Institute of Control Sciences, Moscow, in 1964, where he became Head of the Computer Science Research Department before joining AT&T Bell Laboratories, NJ. He has held a position as Professor of Computer Science and Statistics at Royal Holloway, University of London, since 1995, and as Professor of Computer Science at Columbia University, New York City, since 2003.