Panagiotis Papapetrou, Prof.
Dept. of Computer and Systems Sciences
- April 2017 – present: Professor, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
- December 2013 – March 2017: Associate Professor, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
- September 2013 – November 2013: Senior Lecturer (tenured), Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
- September 2012 – November 2013: Lecturer and director of the IT Applications Programme, Department of Computer Science and Information Systems, School of Business-Economics-Informatics, Birkbeck, University of London, UK
- September 2009 – August 2012: Postdoctoral Researcher at the Department of Computer Science, Aalto University, Finland
- June 2009: Received Ph.D. in Computer Science
- September 2006: Received M.A. in Computer Science
- January 2004: Admitted to the MA/PhD Program of the Department of Computer Science at Boston University, USA
- June 2003: Received B.Sc. in Computer Science
- September 1999: Admitted to the Department of Computer Science, University of Ioannina, Greece
- September 1982: Moved to the city of Ioannina, my hometown, in northwestern Greece
- July 1981: Was brought to the world in the city of Patras, Greece
Learning from Electronic Health Records: from temporal abstractions to time series interpretability
The first part of the talk will focus on data mining methods for learning from Electronic Health Records (EHRs), which are typically perceived as big and complex patient data sources. Using these records, scientists strive to predict patients' progress, to understand and predict response to therapy, to detect adverse drug effects, and to address many other learning tasks. Medical researchers are also interested in learning from cohorts in population-based studies and experiments. Learning tasks include the identification of disease predictors that can lead to new diagnostic tests, and the acquisition of insights on interventions. The talk will elaborate on data sources, methods, and case studies in medical mining.
The second part of the talk will tackle the issue of interpretability and explainability of opaque machine learning models, with a focus on time series classification. Time series classification has received great attention over the past decade, with a wide range of methods focusing on predictive performance by exploiting various types of temporal features. Nonetheless, little emphasis has been placed on interpretability and explainability. This talk will formulate the novel problem of explainable time series tweaking: given a time series and an opaque classifier that provides a particular classification decision for it, the objective is to find the minimum number of changes to the time series such that the classifier changes its decision to another class. Moreover, it will be shown that the problem is NP-hard. Two instantiations of the problem will be presented, with the random shapelet forest as the classifier under investigation. Two algorithmic solutions for the two problem instantiations will be presented along with simple optimizations, as well as a baseline solution using the nearest neighbor classifier.
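As a concrete illustration of the tweaking idea, here is a minimal sketch built on the nearest neighbor baseline mentioned in the abstract (it is not the shapelet-based method from the talk): points of the series are greedily copied from the closest training series of the target class until the 1-NN decision flips. All function names and the toy data are illustrative.

```python
import numpy as np

def nn_predict(x, X_train, y_train):
    """Classify x with 1-nearest-neighbour under Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

def tweak_series(x, X_train, y_train, target):
    """Greedily copy points from the nearest target-class training
    series into x (largest disagreements first) until the 1-NN
    decision flips; returns the tweaked series and #changed points."""
    cand = X_train[y_train == target]
    guide = cand[np.argmin(np.linalg.norm(cand - x, axis=1))]
    z = x.copy()
    for k, i in enumerate(np.argsort(-np.abs(x - guide)), start=1):
        z[i] = guide[i]
        if nn_predict(z, X_train, y_train) == target:
            return z, k
    return z, x.size

rng = np.random.default_rng(0)
# toy data: class 0 = flat noise, class 1 = step up in the second half
X_train = np.vstack([rng.normal(0, 0.1, (5, 20)),
                     rng.normal(0, 0.1, (5, 20)) + np.r_[np.zeros(10), np.ones(10)]])
y_train = np.array([0] * 5 + [1] * 5)
x = rng.normal(0, 0.1, 20)                    # flat series, classified as 0
z, k = tweak_series(x, X_train, y_train, target=1)
print(k, "points changed to flip the decision")
```

Finding the true minimum number of changes is what makes the problem NP-hard; the greedy pass above only gives an upper bound on it.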
Plamen Angelov, Prof.
Dept. of Computing and Communications
University of Lancaster
Empirical Approach: How to get Fast, Interpretable Deep Learning
We are witnessing an explosion of data streams being generated and growing exponentially. Nowadays we carry gigabytes of data in our pockets in the form of USB flash memory sticks, smartphones, smartwatches, etc.
Extracting useful information and knowledge from these big data streams is of immense importance for society, the economy, and science. Deep Learning has quickly become synonymous with powerful methods that endow items and processes with elements of AI, in the sense that it makes human-like performance possible in recognising images and speech. However, the currently used methods for deep learning, which are based on neural networks (recurrent, belief, etc.), are opaque (not transparent), require huge amounts of training data and computing power (hours of training using GPUs), are offline, and their online versions based on reinforcement learning have no proven convergence and do not guarantee the same result for the same input (they lack repeatability).
The speaker recently introduced a new concept of an empirical approach to machine learning and fuzzy sets and systems, proved convergence for a class of such models, and exploited the link between neural networks and fuzzy systems (neuro-fuzzy systems are known to have a duality with radial basis function (RBF) networks, and the key property of universal approximation has been proven for both fuzzy rule-based models and RBF networks).
In this talk he will present in a systematic way the basics of the newly introduced Empirical Approach to Machine Learning, Fuzzy Sets and Systems, and its applications to problems such as anomaly detection, clustering, classification, prediction, and control. The major advantage of this new paradigm is the liberation from restrictive and often unrealistic assumptions and requirements concerning the nature of the data (random, deterministic, fuzzy): the need to formulate and assume a priori the type of distribution models or membership functions, the independence of the individual data observations, their large (theoretically infinite) number, etc.
From a pragmatic point of view, this direct approach from data (streams) to complex, layered model representations is fully automated and leads to very efficient model structures. In addition, the proposed new concept learns in a way similar to the way people learn: it can start from a single example. The proposed approach makes this possible because it is prototype-based and non-parametric.
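The prototype-based idea can be sketched in a few lines. Note that the fixed distance-threshold rule below is a deliberate simplification: the empirical approach derives its criteria from data density rather than a preset radius, and all names here are illustrative.

```python
import numpy as np

class PrototypeClassifier:
    """Illustrative sketch of prototype-based, non-parametric learning:
    it starts from a single example and adds a new prototype whenever
    a training sample is unusually far from all existing prototypes
    of its class (a simplification of the density-based criteria
    used in the empirical approach)."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.protos = []  # list of (vector, label) pairs

    def partial_fit(self, x, y):
        x = np.asarray(x, float)
        same = [p for p, l in self.protos if l == y]
        if not same or min(np.linalg.norm(p - x) for p in same) > self.radius:
            self.protos.append((x, y))  # spawn a new prototype

    def predict(self, x):
        d = [np.linalg.norm(p - np.asarray(x, float)) for p, _ in self.protos]
        return self.protos[int(np.argmin(d))][1]

clf = PrototypeClassifier(radius=1.0)
clf.partial_fit([0.0, 0.0], "A")     # learning starts from a single example
print(clf.predict([0.2, 0.1]))       # -> "A"
clf.partial_fit([5.0, 5.0], "B")
print(clf.predict([4.8, 5.2]))       # -> "B"
```

Because the model is just a growing set of prototypes, it needs no a priori distribution model and can classify immediately after seeing its first example.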
References:
- P. Angelov, X. Gu, Empirical Approach to Machine Learning, Studies in Computational Intelligence, vol. 800, Springer, Cham, Switzerland, ISBN 978-3-030-02383-6.
- P. P. Angelov, X. Gu, Deep rule-based classifier with human-level performance and characteristics, Information Sciences, vol. 463-464, pp. 196-213, Oct. 2018.
- P. Angelov, X. Gu, J. Principe, Autonomous learning multi-model systems from data streams, IEEE Transactions on Fuzzy Systems, 26(4): 2213-2224, Aug. 2018.
- P. Angelov, X. Gu, J. Principe, A generalized methodology for data analysis, IEEE Transactions on Cybernetics, 48(10): 2981-2993, Oct. 2018.
- P. Angelov, X. Gu, MICE: Multi-layer multi-model images classifier ensemble, IEEE International Conference on Cybernetics (CYBCONF), Exeter, UK, 2017, pp. 1-8.
- X. Gu, P. Angelov, C. Zhang, P. Atkinson, A massively parallel deep rule-based ensemble classifier for remote sensing scenes, IEEE Geoscience and Remote Sensing Letters, vol. 15(3), pp. 345-349, 2018.
- X. Gu, P. Angelov, Semi-supervised deep rule-based approach for image classification, Applied Soft Computing, vol. 68, pp. 53-68, July 2018.
- P. Angelov, Autonomous Learning Systems: From Data Streams to Knowledge in Real-time, John Wiley and Sons, Dec. 2012, ISBN: 978-1-1199-5152-0.
Evangelos Eleftheriou, Dr.
IBM Fellow, Cloud & Computing Infrastructure,
Zurich Research Laboratory, Zurich, Switzerland
Evangelos Eleftheriou received a B.S. degree in Electrical Engineering from the University of Patras, Greece, in 1979, and M.Eng. and Ph.D. degrees in Electrical Engineering from Carleton University, Ottawa, Canada, in 1981 and 1985, respectively. In 1986, he joined the IBM Research – Zurich Laboratory in Rüschlikon, Switzerland, as a Research Staff Member. After serving as head of the Cloud and Computing Infrastructure department of IBM Research – Zurich for many years, Dr. Eleftheriou returned to a research position in 2018 to strengthen his focus on neuromorphic computing and to coordinate the Zurich Lab's activities with those of the global Research efforts in this field.
His research interests focus on enterprise solid-state storage, storage for big data, neuromorphic computing, and non-von Neumann computing architecture and technologies in general. He has authored or coauthored about 200 publications, and holds over 160 patents (granted and pending applications).
In 2002, he became a Fellow of the IEEE. He was co-recipient of the 2003 IEEE Communications Society Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed IBM Fellow for his pioneering work in recording and communications techniques, which established new standards of performance in hard disk drive technology. In the same year, he was also inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE CSS Control Systems Technology Award and of the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, he received an honoris causa professorship from the University of Patras, Greece.
In 2018, he was inducted as a foreign member into the National Academy of Engineering for his contributions to digital storage and nanopositioning technologies, as implemented in hard disk, tape, and phase-change memory storage systems.
"In-memory Computing": Accelerating AI Applications
In today’s computing systems based on the conventional von Neumann architecture, there are distinct memory and processing units. Performing computations results in a significant amount of data being moved back and forth between the physically separated memory and processing units. This costs time and energy, and constitutes an inherent performance bottleneck. It is becoming increasingly clear that for application areas such as AI (and indeed cognitive computing in general), we need to transition to computing architectures in which memory and logic coexist in some form. Brain-inspired neuromorphic computing and the fascinating new area of in-memory computing or computational memory are two key non-von Neumann approaches being researched. A critical requirement in these novel computing paradigms is a very-high-density, low-power, variable-state, programmable and non-volatile nanoscale memory device. There are many examples of such nanoscale memory devices in which the information is stored either as charge or as resistance. One prominent example is phase-change memory (PCM) devices, which are particularly well suited to addressing this need, owing to their multi-level storage capability and potential scalability.
In in-memory computing, the physics of the nanoscale memory devices, as well as the organization of such devices in cross-bar arrays, are exploited to perform certain computational tasks within the memory unit. I will present how computational memories accelerate AI applications and will show small- and large-scale experimental demonstrations that perform high-level computational primitives, such as ultra-low-power inference engines, optimization solvers including compressed sensing and sparse coding, linear solvers and temporal correlation detection. Moreover, I will discuss the efficacy of this approach to efficiently address not only inferencing but also training of deep neural networks. The results show that this co-existence of computation and storage at the nanometer scale could be the enabler for new, ultra-dense, low-power, and massively parallel computing systems. Thus, by augmenting conventional computing systems, in-memory computing could help achieve orders of magnitude improvement in performance and efficiency.
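The crossbar principle behind in-memory computing can be sketched numerically: a grid of programmable conductances performs a matrix-vector multiplication in a single step, because applying input voltages to the rows makes each column current the sum of voltage-conductance products (Ohm's law plus Kirchhoff's current law). The Gaussian noise model below is purely illustrative of analog device imprecision, not a characterization of real PCM cells.

```python
import numpy as np

rng = np.random.default_rng(42)

W = rng.uniform(0.0, 1.0, (4, 3))   # target weights, mapped to conductances
v = np.array([0.3, 0.5, 0.2])       # input voltages applied to the rows

# ideal digital result of the matrix-vector product
y_exact = W @ v

# analog crossbar read-out: conductances are imprecise
# (illustrative stand-in for programming noise in the devices)
G = W + rng.normal(0.0, 0.01, W.shape)
y_analog = G @ v                    # column currents = in-memory result

print(np.max(np.abs(y_analog - y_exact)))  # small analog error
```

The appeal is that no operand ever crosses the memory/processor boundary; the price, as the sketch shows, is a bounded analog error, which is why such accelerators target error-tolerant workloads like neural network inference.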