Prof. Kalyanmoy Deb
Indian Institute of Technology Kanpur
Uncertainty Handling Using Evolutionary Multi-Objective Optimization
In solving practical problems involving multiple objectives, uncertainties of different types often occur. In this talk, we suggest different ways to handle epistemic and aleatory uncertainties using an evolutionary multi-objective optimization (EMO) procedure. The talk will first introduce basic principles of an EMO procedure and then highlight some recent research challenges of EMO.
Thereafter, a specific EMO procedure will be extended to handle different types of uncertainties encountered in practice. A parallel EMO implementation will also be discussed to speed up the optimization process. The talk will conclude with a few engineering case studies.
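As background for the abstract above, the Pareto-dominance comparison at the heart of any EMO procedure can be sketched in a few lines. This is a minimal illustration only, not the speaker's algorithm; the function names are hypothetical.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated (Pareto) front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(non_dominated(pts))  # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```

Here (3.0, 3.0) is removed because (2.0, 2.0) is better on both objectives; the remaining three points are mutually non-dominated trade-offs.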
Prof. Antonio Di Nola
University of Salerno, Italy
Lukasiewicz logic and MV-algebras for a mathematical theory of fuzzy sets
We illustrate an algebraic approach to fuzzy set theory by means of MV-algebras, the algebraic semantics of Łukasiewicz fuzzy logic. Several results of general interest are presented, together with some applications.
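As a small illustration of the algebra the abstract refers to, the standard MV-algebra on the real unit interval (the semantics of Łukasiewicz logic) can be sketched as follows; the function names are ours, not from the talk.

```python
# The standard MV-algebra on [0, 1], the algebraic semantics of
# Lukasiewicz logic. Illustrative names, not a library API.
def mv_plus(x, y):      # truncated sum  x (+) y
    return min(1.0, x + y)

def mv_neg(x):          # involutive negation
    return 1.0 - x

def luk_and(x, y):      # Lukasiewicz t-norm (strong conjunction)
    return max(0.0, x + y - 1.0)

def luk_implies(x, y):  # Lukasiewicz implication
    return min(1.0, 1.0 - x + y)

# A defining MV-algebra identity: neg(neg(x) (+) y) (+) y equals max(x, y),
# so the expression is symmetric in x and y.
x, y = 0.3, 0.8
lhs = mv_plus(mv_neg(mv_plus(mv_neg(x), y)), y)
rhs = mv_plus(mv_neg(mv_plus(mv_neg(y), x)), x)
print(abs(lhs - rhs) < 1e-12)  # True
```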
Prof. Christophe Marsala
Université Pierre et Marie Curie, France
Fuzzy Machine Learning in Dynamical Environments
Dynamical environments are increasingly present in our society: examples include video data, the Web of Things, and the temporal spreading of news on the web. In such environments, fuzzy set theory has been introduced to bring greater robustness and generalization power to machine learning tools. One of the main aims in this kind of application is not only to learn a predictive model, but also to learn from the dynamics of the data and the relationships it involves.
After presenting several ways to improve a machine learning algorithm by means of fuzzy set theory, we focus on the main questions that should be answered by the resulting fuzzy machine learning algorithm.
To illustrate, we will present some current work and results on the use of a fuzzy machine learning algorithm in dynamical environments.
Prof. Roman Słowiński
Poznan University of Technology, Poland
Dominance-based Rough Set Approach to Reasoning about Vague Data
Institute of Computing Science, Poznan University of Technology, 60-965 Poznan, and Systems Research Institute, Polish Academy of Sciences, 01-447 Warsaw, Poland email@example.com
The Dominance-based Rough Set Approach (DRSA) is a methodology for reasoning about vague data which extends the Classical Rough Set Approach (CRSA), proposed by Z. Pawlak, by handling monotonic relationships between values of condition and decision attributes. The origin of the vagueness of data is violation of the dominance principle, which requires that (assuming a positive monotonic relationship) if object x has an evaluation at least as good as object y on all condition attributes, then it should not get an evaluation worse than y on the decision attributes. We show that DRSA is a natural continuation of Pawlak's concept of rough set, which builds on ideas coming from Leibniz, Frege, Boole, Łukasiewicz and Zadeh. We also show that the assumption made by DRSA about the ordinal character of evaluations on condition and decision attributes is not a limiting factor in reasoning about data. In particular, it is an obvious assumption in decision problems such as multicriteria classification or ranking, multiobjective optimization, and decision under risk and uncertainty.
Moreover, even when the ordering of data seems irrelevant, the presence or absence of a property can be represented in ordinal terms, because if two properties are related, the presence, rather than the absence, of one property should make the presence of the other property more (or less) probable. This is even more apparent when the presence or absence of a property is graded or fuzzy, because in this case, the more credible the presence of a property, the more (or less) probable the presence of the other property. This observation leads to a straightforward hybridization of DRSA with fuzzy sets. Since the presence of properties, possibly fuzzy, is the basis of information granulation, DRSA can also be seen as a general framework for granular computing. We also comment on the stochastic version of DRSA, on algebraic representations of DRSA, and on a topology for DRSA.
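The dominance-principle check described in the abstract can be sketched for a single pair of objects as follows. This is a minimal illustration, assuming larger values are better on every attribute; the names are ours.

```python
def dominance_violation(x_cond, y_cond, x_dec, y_dec):
    """Detect a violation of the dominance principle for objects x and y
    (a positive monotonic relationship is assumed: larger is better).
    Returns True if x is at least as good as y on every condition
    attribute, yet strictly worse on some decision attribute -- the
    kind of inconsistency DRSA treats as vagueness in the data."""
    x_weakly_better = all(a >= b for a, b in zip(x_cond, y_cond))
    x_worse_on_decision = any(a < b for a, b in zip(x_dec, y_dec))
    return x_weakly_better and x_worse_on_decision

# x matches or beats y on both condition attributes,
# but receives a worse decision class: an inconsistency.
print(dominance_violation([3, 5], [2, 5], [1], [2]))  # True
```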
S. Greco, B. Matarazzo, R. Słowiński: Rough sets theory for multicriteria decision analysis. European Journal of Operational Research, 129 (2001) 1-47.
S. Greco, B. Matarazzo, R. Słowiński: Dominance-based rough set approach to decision under uncertainty and time preference. Annals of Operations Research, 176 (2010) 41-75.
R. Słowiński, S. Greco, B. Matarazzo: Rough Sets in Decision Making. In: R.A. Meyers (ed.): Encyclopedia of Complexity and Systems Science, Springer, New York, 2009, pp. 7753-7786.
W. Kotłowski, K. Dembczyński, S. Greco, R. Słowiński: Stochastic dominance-based rough set model for ordinal classification. Information Sciences, 178 (2008) 4019-4037.
J. Błaszczyński, S. Greco, R. Słowiński: Inductive discovery of laws using monotonic rules. Engineering Applications of Artificial Intelligence, 25 (2012) no. 2, 284-294.
Prof. Tomohiro Takagi
Meiji University, Japan
Web marketing and analogical reasoning
Advertising is leaving one-way broadcast media and being reborn as digital marketing on one-to-one, interactive Web media. This change will have a huge impact on our economic system. To increase the efficiency of enterprise marketing, a variety of intelligent technologies that pay attention to consumer behavior are needed.
In this talk, we introduce linguistic analogical reasoning using conceptual fuzzy sets, and present applied research on a recommendation engine and on trend prediction, which are fundamental techniques for businesses to engage consumers.
Prof. Peter P. Wakker
Erasmus University, the Netherlands
Generalizing Law-Invariant Capacities to Capture Ambiguity Attitudes
The Choquet integral was introduced into decision theory by David Schmeidler for decision under uncertainty, so as to model attitudes towards ambiguity (unknown probabilities). It is nowadays also widely used in multiattribute utility, where it captures interactions between different attributes. In its first part, this lecture gives a survey of the history of ambiguity and of the models most popular today, with special attention to applications in multiattribute optimization.
Capacities (nonadditive measures) used in the Choquet integral become exponentially more complex as the number of events/attributes grows and, hence, many specifications have been proposed. A popular special case concerns law invariant capacities, also known as probabilistic sophistication in decision theory. Unfortunately, these capacities are too specific to capture many kinds of interactions, as in the Ellsberg paradox for uncertainty. Hence the source method has been introduced, generalizing law invariance but still maintaining tractability. The method imposes law invariance only within particular partitions of the state or attribute space, so that interactions can still be conveniently illustrated using graphs. It avoids the Anscombe-Aumann model, widely used in ambiguity studies but not available in multiattribute applications. We derive a decomposition of capacities into a component maximally capturing beliefs in events (or noninteracting weights of attribute sets) and a separate component, the source function, which captures all the interactions in a convenient manner that can be illustrated using graphs. We show how it can capture empirical findings for ambiguity and can be used for multiattribute optimization.
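For readers unfamiliar with the Choquet integral mentioned above, its discrete form with respect to a capacity can be sketched as follows. This is an illustrative implementation of the standard textbook definition, not the source method of the lecture; all names are ours.

```python
def choquet(values, capacity):
    """Discrete Choquet integral of a nonnegative score vector w.r.t.
    a capacity. `values` maps each attribute to its score; `capacity`
    maps each frozenset of attributes to a weight in [0, 1], monotone,
    with capacity(empty set) = 0 and capacity(all attributes) = 1."""
    attrs = sorted(values, key=values.get)  # attributes by ascending score
    total, prev = 0.0, 0.0
    remaining = set(values)                 # attributes scoring >= current level
    for a in attrs:
        total += (values[a] - prev) * capacity[frozenset(remaining)]
        prev = values[a]
        remaining.discard(a)
    return total

# Two attributes with a sub-additive capacity, so the low score drags
# the aggregate down more than a weighted average would.
cap = {frozenset(): 0.0,
       frozenset({'x'}): 0.4,
       frozenset({'y'}): 0.4,
       frozenset({'x', 'y'}): 1.0}
print(choquet({'x': 0.2, 'y': 0.8}, cap))  # (0.2-0)*1.0 + (0.8-0.2)*0.4 ~= 0.44
```

When the capacity is additive the Choquet integral reduces to an ordinary weighted average; nonadditivity is exactly what lets it express the interactions between events or attributes discussed in the abstract.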