Free Tutorials

March 28, 2018

(Only for the first 200 registered attendees)

Prof. Yew Soon Ong

Editor-in-Chief of IEEE Transactions on Emerging Topics in Computational Intelligence

Topic: Transfer Optimization: Because Experience is the Best Teacher

Date/time: 8:00 – 9:30

Abstract:
Traditional optimization solvers tend to start the search from scratch, assuming zero prior knowledge about the task at hand. Generally speaking, the capabilities of such solvers do not automatically grow with experience. In contrast, humans routinely draw on a pool of knowledge from past experiences whenever they face a new task. This is often an effective approach in practice, as real-world problems seldom exist in isolation. Similarly, practically useful artificial systems are expected to face a large number of problems in their lifetime, many of which will either be repetitive or share domain-specific similarities. This view naturally motivates advanced optimizers that mimic human cognitive capabilities, leveraging what has been seen before to accelerate the search towards optimal solutions of never-before-seen tasks. With this in mind, the present tutorial sheds light on recent research advances in the field of global black-box optimization that champion the theme of automatic knowledge transfer across problems. I will introduce a general formalization of transfer optimization, based on which the conceptual realizations of the paradigm are classified into three distinct categories, namely, sequential transfer, multitasking, and multiform optimization. In addition, I will survey different methodological perspectives, spanning Bayesian optimization and nature-inspired computational intelligence procedures, for the efficient encoding and transfer of knowledge building blocks. Finally, real-world applications of the techniques are identified, demonstrating the future impact of optimization engines that evolve into better problem-solvers over time by learning from the past and from one another.
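As a concrete illustration of the sequential-transfer idea described above, the following minimal Python sketch (not taken from the tutorial; the two toy tasks, the population sizes, and the simple (mu + lambda) loop are illustrative assumptions) seeds the initial population for a new target task with elite solutions stored from a previously solved, related source task, instead of starting the search from scratch.

import numpy as np

rng = np.random.default_rng(0)

def source_task(x):      # previously solved task: sphere shifted to 0.4
    return np.sum((x - 0.4) ** 2)

def target_task(x):      # new, related task: optimum nearby at 0.5
    return np.sum((x - 0.5) ** 2)

def evolve(f, pop, generations, sigma=0.1):
    # simple (mu + lambda) evolution: mutate, pool parents and children, keep the best
    for _ in range(generations):
        children = pop + rng.normal(0.0, sigma, pop.shape)
        pool = np.vstack([pop, children])
        fitness = np.array([f(x) for x in pool])
        pop = pool[np.argsort(fitness)[: len(pop)]]
    return pop                                   # sorted, best solution first

dim, mu = 10, 20
# Step 1: solve the source task and keep its best solutions as "experience".
source_elites = evolve(source_task, rng.uniform(-1, 1, (mu, dim)), generations=50)[:5]

# Step 2: seed the target-task search with the transferred elites plus random points.
seeded = np.vstack([source_elites, rng.uniform(-1, 1, (mu - 5, dim))])
scratch = rng.uniform(-1, 1, (mu, dim))

for name, pop in [("with transfer", seeded), ("from scratch", scratch)]:
    best = target_task(evolve(target_task, pop, generations=10)[0])
    print(f"{name:14s}: best target fitness after 10 generations = {best:.4f}")

In this toy setting the seeded run starts near the target optimum and converges in far fewer generations, which is the kind of search speed-up that the tutorial's formalization of transfer optimization aims to capture.
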
Bio:
Yew-Soon ONG received a PhD degree in Artificial Intelligence in complex design from the Computational Engineering and Design Center, University of Southampton, UK, in 2003. He is a Professor and the Chair of the School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore, where he is also the Director of the Data Science and Artificial Intelligence Research Center and Principal Investigator of the Data Analytics and Complex Systems Programme at the Rolls-Royce@NTU Corporate Lab. His research interests in computational intelligence span memetic computing, complex design optimization, and big data analytics. He is the founding Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence and an Associate Editor of the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Neural Networks & Learning Systems, the IEEE Transactions on Cybernetics, and others.

Prof. Hisao Ishibuchi

Editor-in-Chief of IEEE Computational Intelligence Magazine

Topic: Evolutionary Many-Objective Optimization: Difficulties and Future Research Directions

Date/time: 10:00 – 11:30

Abstract:
This tutorial explains the difficulties of evolutionary many-objective optimization and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades, and its hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as though there is no significant difference between three-objective and four-objective problems; however, the increase in the number of objectives makes a multi-objective problem very difficult for EMO algorithms. In this tutorial, we clearly explain not only frequently discussed, well-known difficulties, such as the decrease in the selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover, the difficulty of evaluating the performance of solution sets, and undesirable features of many-objective test problems. Attendees will learn why many-objective optimization is difficult for EMO algorithms, and also that there exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some of these directions will be explained in detail in the tutorial.
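One of the difficulties mentioned above, the loss of dominance-based selection pressure, can be illustrated with a short Python sketch (my illustration, not part of the tutorial material; the population size and the use of random objective vectors are assumptions): as the number of objectives grows, almost all members of a random population become mutually non-dominated, so Pareto dominance alone can no longer discriminate between them.

import numpy as np

rng = np.random.default_rng(0)

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in at least one
    return np.all(a <= b) and np.any(a < b)

def nondominated_fraction(objs):
    n = len(objs)
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[j], objs[i]):
                nondominated[i] = False
                break
    return nondominated.mean()

pop_size = 100
for m in (2, 3, 5, 10, 15):
    objs = rng.random((pop_size, m))         # random objective vectors in [0, 1]^m
    print(f"{m:2d} objectives: {nondominated_fraction(objs):.0%} of the population is non-dominated")
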
Bio:
Dr. Ishibuchi received the BS and MS degrees from Kyoto University in 1985 and 1987, respectively, and the Ph.D. degree from Osaka Prefecture University in 1992, where he has been a professor since 1999. Since April 2017, he has been with the Department of Computer Science and Engineering, SUSTech, Shenzhen, China. He received Best Paper Awards from GECCO 2004, HIS-NCEI 2006, FUZZ-IEEE 2009, WAC 2010, SCIS & ISIS 2010, FUZZ-IEEE 2011, ACIIDS 2015 and GECCO 2017, as well as the 2007 JSPS Prize. He was the IEEE CIS Vice-President for Technical Activities (2010-2013). Currently, he is the President of the Japan EC Society (2016-2018), the Editor-in-Chief of the IEEE CI Magazine (2014-2019) and the Journal of the Japan EC Society (2014-2018), an IEEE CIS AdCom member (2014-2019), and an IEEE CIS Distinguished Lecturer (2015-2017). He is also an Associate Editor of IEEE TEVC (2007-2017), IEEE Access (2013-2017) and IEEE TCyb (2013-2017). He is an IEEE Fellow.

Prof. Yaochu Jin

Editor-in-Chief of IEEE Transactions on Cognitive and Developmental Systems

Topic: Integrating machine learning with evolutionary computation for data-driven optimization

Date/time: 14:00 – 15:30

Abstract:
Many real-world optimization problems rely on data to perform optimization, since no analytical mathematical objective functions are available. To solve such data-driven optimization problems, a seamless integration of machine learning techniques with evolutionary algorithms is needed. In this tutorial, we introduce the basic methodologies for data-driven evolutionary optimization and elaborate on recent ideas of using various machine learning models, such as Gaussian processes and artificial neural networks, together with advanced machine learning techniques, including semi-supervised learning, ensemble learning, active learning and transfer learning, to assist evolutionary algorithms in solving complex data-driven optimization problems. Both benchmark problems and industrial applications will be used to demonstrate the effectiveness of the discussed methods.
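As a rough illustration of the surrogate-assisted setting the tutorial covers, the sketch below (a toy example of my own, not the tutorial's method; the objective function, data size and parameters are illustrative assumptions) trains a Gaussian process on previously collected (design, objective) data with scikit-learn and lets a simple evolutionary loop search on the cheap surrogate predictions instead of the expensive true objective.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_objective(x):                 # stands in for a costly simulation or experiment
    return np.sum(x ** 2) + 0.1 * np.sum(np.sin(5 * x))

dim = 5
# Offline data: the only true evaluations available to the optimizer.
X_data = rng.uniform(-2, 2, (60, dim))
y_data = np.array([expensive_objective(x) for x in X_data])

surrogate = GaussianProcessRegressor().fit(X_data, y_data)

# Simple (mu + lambda) evolution driven purely by surrogate predictions.
pop = rng.uniform(-2, 2, (20, dim))
for _ in range(40):
    children = np.clip(pop + rng.normal(0.0, 0.2, pop.shape), -2, 2)
    pool = np.vstack([pop, children])
    predicted = surrogate.predict(pool)     # cheap surrogate fitness, no new true evaluations
    pop = pool[np.argsort(predicted)[: len(pop)]]

best = pop[0]
print("surrogate-optimal design:", np.round(best, 2))
print("true objective value    :", round(float(expensive_objective(best)), 3))

The advanced techniques listed in the abstract (semi-supervised, ensemble, active and transfer learning) address how to build and update such surrogates when true evaluations are scarce; this sketch only shows the basic division of labour between the learned model and the evolutionary search.
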
Bio:
Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996 respectively, and the Dr.-Ing. degree from Ruhr University Bochum, Germany, in 2001.
He is a Professor in Computational Intelligence in the Department of Computer Science, University of Surrey, Guildford, U.K., where he heads the Nature Inspired Computing and Engineering Group. He is also a Finland Distinguished Professor funded by the Finnish Funding Agency for Innovation (Tekes) and a Changjiang Distinguished Visiting Professor appointed by the Ministry of Education, China. His main research interests include data-driven surrogate-assisted evolutionary optimization, evolutionary multi-objective optimization, evolutionary and developmental learning, and interpretable and secure machine learning.
Dr. Jin is the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems and Co-Editor-in-Chief of Complex & Intelligent Systems. He is an IEEE Distinguished Lecturer (2013-2015 and 2017-2019) and a past Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015). He is the recipient of the 2018 IEEE Transactions on Evolutionary Computation Outstanding Paper Award and the 2015 and 2017 IEEE Computational Intelligence Magazine Outstanding Paper Awards. He is a Fellow of IEEE.

Prof. Haibo He

Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems

Topic: Imbalanced Learning in Big Data

Date/time: 16:00 – 17:30

Abstract:
Big data has become an important topic worldwide over the past several years. Among the many aspects of big data research and development, imbalanced learning has become a critical component, as many data sets in real-world applications are imbalanced, ranging from surveillance, security, Internet, finance and social networks to medical and health-related data analysis. In general, the imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherently complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently and effectively into information and knowledge representation.
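To make the class-skew problem concrete, here is a minimal scikit-learn sketch (an illustration of my own, not material from the tutorial; the synthetic dataset, the 95/5 split and the logistic-regression model are all placeholder assumptions): a plain classifier can reach high overall accuracy while recalling far fewer of the minority-class cases, and simple class reweighting is one common countermeasure.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic data with a severe skew: roughly 95% majority class, 5% minority class.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for label, clf in [
    ("plain", LogisticRegression(max_iter=1000)),
    ("class-weighted", LogisticRegression(max_iter=1000, class_weight="balanced")),
]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{label:14s}: accuracy = {accuracy_score(y_te, pred):.2f}, "
          f"minority recall = {recall_score(y_te, pred):.2f}")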

Bio:
Haibo He is a Fellow of IEEE and the Robert Haas Endowed Chair Professor at the University of Rhode Island, Kingston, RI, USA. His primary research interests include computational intelligence and its various applications. He has published one sole-author book (Wiley), edited one book (Wiley-IEEE) and six conference proceedings (Springer), and authored/co-authored over 280 peer-reviewed journal and conference papers, including several highly cited papers in the IEEE Transactions on Neural Networks and the IEEE Transactions on Knowledge and Data Engineering, a Cover Page Highlighted paper in the IEEE Transactions on Information Forensics and Security, and Best Readings of the IEEE Communications Society. He has delivered more than 50 invited talks around the globe. He was the Chair of the IEEE Computational Intelligence Society (CIS) Emergent Technologies Technical Committee (ETTC) (2015) and the Chair of the IEEE CIS Neural Networks Technical Committee (NNTC) (2013 and 2014). He served as the General Chair of the 2014 IEEE Symposium Series on Computational Intelligence (IEEE SSCI'14, Orlando, Florida). He is currently the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems. He was a recipient of the IEEE International Conference on Communications (ICC) Best Paper Award (2014), the IEEE CIS Outstanding Early Career Award (2014), the National Science Foundation Faculty Early Career Development (CAREER) Award (2011), and the Providence Business News (PBN) "Rising Star Innovator" Award (2011). More information can be found at: http://www.ele.uri.edu/faculty/he/
