Review: A Survey of Evaluation Techniques and Systems for Answer Set Programming [Video][Slides]

The review of the TEWI colloquium of Prof. Francesco Ricca from May 3, 2019 comprises the video and slides (below):

Abstract:

Answer set programming (ASP) is a prominent knowledge representation and reasoning paradigm that has found both industrial and scientific applications. The success of ASP is due to the combination of two factors: a rich modeling language and the availability of efficient ASP implementations. In this talk, we trace the history of ASP systems, describing the key evaluation techniques and their implementation in actual tools.

CV:

Francesco Ricca (www.mat.unical.it/ricca) is currently an Associate Professor at the Department of Mathematics and Computer Science of the University of Calabria, Italy. In the same Department he is Coordinator of the Computer Science Courses Council.
He received his Laurea Degree in Computer Science Engineering (2002) and a PhD in Computer Science and Mathematics (2006) from the University of Calabria, Italy, and received the Habilitation for Full Professor in Computer Science (INF/01) in 2017.
He is interested in declarative logic-based languages, consistent query answering, and rule-based reasoning on ontologies, and in particular in the issues concerning their practical application: system design and implementation, and development tools. He is co-author of more than 100 (peer-reviewed) publications, including articles in international research journals (30+), encyclopedia chapters, and the proceedings of conferences and workshops of national and international importance. He has served on the program committees of international conferences and workshops such as IJCAI, AAAI, KR, ICLP, LPNMR, and JELIA, and has been a reviewer for AIJ, JAIR, TPLP, JLC, and others. He is Area Editor of the Association for Logic Programming newsletter and a member of the Executive Board of the Italian Association for Artificial Intelligence.

Posted in TEWI-Kolloquium | Comments disabled for Review: A Survey of Evaluation Techniques and Systems for Answer Set Programming [Video][Slides]

Estimating Space-Time Covariance from Finite Sample Sets

Dr. Stephan Alexander Weiss | May 22, 2019 | 11:00 | B02.1.59

Abstract:

Covariance matrices are central to many adaptive filtering and optimisation problems. In practice, they have to be estimated from a finite number of samples; on this point, I will review some known results from spectrum estimation and multiple-input multiple-output communications systems, and show how properties that are assumed to be inherent in covariance matrices and power spectral densities can easily be lost in the estimation process. I will discuss new results on space-time covariance estimation, and how estimation from finite sample sets impacts factorisations such as the eigenvalue decomposition, which is often key to solving the introductory optimisation problems. The purpose of the presentation is to give you some insight into estimating statistics, as well as a glimpse of classical signal processing challenges such as the separation of sources from a mixture of signals.
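A minimal numerical sketch of one such lost property (my own toy example, not material from the talk): when fewer samples than dimensions are available, the sample covariance matrix is rank-deficient, so the positive definiteness of the true covariance is lost in the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                                  # dimensions, e.g. array sensors
A = rng.standard_normal((M, M))
R_true = A @ A.T + np.eye(M)           # a full-rank "true" covariance
L = np.linalg.cholesky(R_true)

def sample_cov(N):
    """Sample covariance from N zero-mean Gaussian snapshots."""
    X = L @ rng.standard_normal((M, N))  # snapshots with covariance R_true
    return (X @ X.T) / N

R_few = sample_cov(4)                  # fewer samples than dimensions
R_many = sample_cov(100_000)           # plenty of samples

print(np.linalg.matrix_rank(R_few))    # 4: rank-deficient, hence singular
print(np.linalg.matrix_rank(R_many))   # 8: full rank
print(np.linalg.norm(R_many - R_true) / np.linalg.norm(R_true))  # small
```

Any factorisation that assumes invertibility, e.g. an eigenvalue decomposition whose eigenvalues are subsequently inverted, breaks down in the few-sample case.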

Bio:

Stephan Weiss is a Professor at the University of Strathclyde and heads its Centre for Signal & Image Processing. His particular interests are adaptive filtering and array signal processing.


Why AI is shaping our games

Dr. Johanna Pirker | May 16, 2019 | 10:00 | B01.0.203

Abstract:

AI is used to create parts of our games: it provides intelligent enemy behavior and techniques such as pathfinding, and it can be used to generate in-game content procedurally. AI can also play our games. The idea of training computers to beat humans in game-like environments such as Jeopardy!, chess, or soccer is not a new one. But can AI also design our games? The role of artificial intelligence in the game development process is constantly expanding. In this talk, Dr. Pirker will discuss the importance of AI in the past, the present, and especially the future of game development.

Bio:

Dr. Johanna Pirker is a researcher at the Institute of Interactive Systems and Data Science at Graz University of Technology (TUG). She finished her Master's thesis during a research visit at the Massachusetts Institute of Technology (MIT), working on collaborative virtual world environments. In 2017, she finished her doctoral dissertation in computer science on motivational environments under the supervision of Christian Gütl (TUG) and John Belcher (MIT). She specializes in games and environments that engage users to learn, train, and work together through motivating tasks. She has long-standing experience in game design and development as well as virtual world development, and has worked in the video game industry at Electronic Arts. Her research interests include AI, data analysis, immersive environments (VR), games research, gamification strategies, HCI, e-learning, CSE, and IR. She has authored and presented numerous publications in her field and has lectured at universities such as Harvard, Humboldt-Universität zu Berlin, and the University of Göttingen. Johanna was listed on the Forbes 30 Under 30 list of science professionals.


Use, Misuse, and Reuse of Continuous Integration Features

Prof. Shane McIntosh | May 2, 2019 | 14:00 | N.1.42

Abstract:

Continuous Integration (CI) is a popular practice in which software systems are automatically compiled and tested as changes appear in the version control system of a project. Like other software artifacts, CI specifications, which describe the CI process, require maintenance effort. In this talk, I will describe the results of an empirical analysis of patterns of feature use and misuse in the Travis CI specifications of 9,312 open source systems. To help developers detect and remove patterns of misuse, we propose Hansel and Gretel, anti-pattern detection and removal tools for Travis CI specifications. To help developers rapidly develop and reuse common CI logic, we propose an extension to the TouchCORE modelling tool that allows users to select high-level features from CI feature models and generate an appropriate CI specification. To support this envisioned tool, we performed an initial analysis of common CI features using association rule mining, which yielded underwhelming results.
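Association rule mining, as used in that last step, reduces to computing support and confidence over co-occurring items. A hedged sketch of the idea in Python, using an invented corpus of CI feature sets (the feature names and projects below are hypothetical, not data from the study):

```python
# Hypothetical corpus: each set lists the top-level features one
# project's CI specification (e.g. a .travis.yml) uses.
projects = [
    {"language", "install", "script"},
    {"language", "script", "cache"},
    {"language", "install", "script", "deploy"},
    {"language", "script"},
]

def support(itemset):
    """Fraction of projects whose feature set contains all items."""
    itemset = set(itemset)
    return sum(itemset <= p for p in projects) / len(projects)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the corpus."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(support({"language", "script"}))      # 1.0: present in every project
print(confidence({"install"}, {"script"}))  # 1.0: install implies script here
print(confidence({"script"}, {"deploy"}))   # 0.25: a weak rule
```

A rule like install ⇒ script with high support and confidence is the kind of common CI logic a feature model could expose for reuse.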

Bio:

Shane McIntosh is an assistant professor in the Department of Electrical and Computer Engineering at McGill University, where he leads the Software Repository Excavation and Build Engineering Labs (Software REBELs). He received his PhD in Computer Science from Queen’s University, for which he was awarded the Governor General of Canada’s Academic Gold Medal. In his research, Shane uses empirical software engineering techniques to study software build systems, release engineering, and software quality. More about his work is available online at http://rebels.ece.mcgill.ca/.


A Survey of Evaluation Techniques and Systems for Answer Set Programming

Prof. Francesco Ricca | May 3, 2019 | 11:00 | S.1.42

Abstract:

Answer set programming (ASP) is a prominent knowledge representation and reasoning paradigm that has found both industrial and scientific applications. The success of ASP is due to the combination of two factors: a rich modeling language and the availability of efficient ASP implementations. In this talk, we trace the history of ASP systems, describing the key evaluation techniques and their implementation in actual tools.

CV:

Francesco Ricca (www.mat.unical.it/ricca) is currently an Associate Professor at the Department of Mathematics and Computer Science of the University of Calabria, Italy. In the same Department he is Coordinator of the Computer Science Courses Council.
He received his Laurea Degree in Computer Science Engineering (2002) and a PhD in Computer Science and Mathematics (2006) from the University of Calabria, Italy, and received the Habilitation for Full Professor in Computer Science (INF/01) in 2017.
He is interested in declarative logic-based languages, consistent query answering, and rule-based reasoning on ontologies, and in particular in the issues concerning their practical application: system design and implementation, and development tools.
He is co-author of more than 100 (peer-reviewed) publications, including articles in international research journals (30+), encyclopedia chapters, and the proceedings of conferences and workshops of national and international importance. He has served on the program committees of international conferences and workshops such as IJCAI, AAAI, KR, ICLP, LPNMR, and JELIA, and has been a reviewer for AIJ, JAIR, TPLP, JLC, and others. He is Area Editor of the Association for Logic Programming newsletter and a member of the Executive Board of the Italian Association for Artificial Intelligence.


Artificial Intelligence (AI) in media applications and services

Dr.-Ing. Christian Keimel | May 9, 2019 | 10:00 | S.1.42

Abstract: Artificial Intelligence (AI) is nowadays used frequently in many application domains. Although sometimes considered only an afterthought in the public discussion compared to domains such as health, transportation, and manufacturing, the media domain is also being transformed by AI, which enables new opportunities, from content creation (e.g., "robojournalism") and individualised content to the optimisation of content production and distribution. Underlying many of these opportunities is the use of AI, in its current reincarnation as deep learning, to understand audio-visual content by extracting structured information from unstructured audio-visual data.

This talk therefore discusses the current understanding of and trends in AI: what can be done, what is being done, and what challenges remain in the use of AI, especially in the context of media applications and services. The talk focuses not so much on the details and fundamentals of deep learning as on a practical perspective on how recent advances in this field can be utilised in use cases in the media domain, especially with respect to audio-visual content and the broadcasting domain.

Bio: Christian Keimel received his B.Sc. and Dipl.-Ing. (Univ.) in information technology from the Technical University of Munich (TUM) in 2005 and 2007, respectively. In 2014 he received a Dr.-Ing. degree from TUM for his dissertation on the "Design of video quality metrics with multi-way data analysis". Since 2013 he has been with the Institut für Rundfunktechnik (IRT), the research and competence centre of the public service broadcasters of Austria, Germany, and Switzerland, where he leads the machine learning team, working on applications of machine learning and AI in the broadcasting context. In addition, he is a lecturer at TUM for "Deep Learning for Multimedia". His current research interests include applications of data-driven models using machine learning, particularly deep learning, for audio-visual content understanding and distribution optimisation.


Towards 6DoF Adaptive Streaming Through Point Cloud Compression

Jeroen van der Hooft | March 25, 2019 | 16:00 | S.2.42

Abstract: The increasing popularity of head-mounted devices and 360-degree video cameras allows content providers to offer virtual reality video streaming over the Internet, using a relevant representation of the immersive content combined with traditional streaming techniques. While this approach allows the user to look around and move in three dimensions, the user's location is fixed by the camera's position within the scene. Recently, increased interest has been shown in free movement within immersive scenes, referred to as six degrees of freedom (6DoF). One way to realize this is by capturing one or multiple objects through a number of cameras positioned at different angles, creating a point cloud object which consists of the location and RGB color of a significant number of points in three-dimensional space. While the concept of point clouds has been around for over two decades, it recently received increased attention from MPEG, which issued a call for proposals for point cloud compression. As a result, dynamic point cloud objects can now be compressed to bit rates in the order of 3 to 55 Mb/s, allowing feasible delivery over today's mobile networks. In this talk, we use MPEG's dataset to generate different scenes consisting of multiple point cloud objects, and propose a number of rate adaptation heuristics which use information on the user's position and focus, the available bandwidth, and the buffer status to decide upon the most appropriate quality representation for each of the considered objects. Through an extensive evaluation, we discuss the advantages and drawbacks of each solution. We argue that the optimal solution depends on the considered scene and camera path, which opens interesting possibilities for future work.
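To make the idea of such a rate adaptation heuristic concrete, here is a deliberately simplified sketch (my own invention, not one of the heuristics from the talk): split the available bandwidth across visible point cloud objects, weighting nearby objects more, and snap each share to the highest affordable quality representation.

```python
# Per-object bit rates, loosely modelled on the 3-55 Mb/s range above.
QUALITIES_MBPS = [3, 10, 25, 55]

def adapt(objects, bandwidth_mbps):
    """objects: list of (name, distance, visible); returns name -> bit rate."""
    visible = [(n, d) for n, d, v in objects if v]
    # Hidden objects get the lowest representation by default.
    choice = {n: QUALITIES_MBPS[0] for n, _, _ in objects}
    if not visible:
        return choice
    weights = {n: 1.0 / max(d, 0.1) for n, d in visible}  # nearer = heavier
    total = sum(weights.values())
    for n, w in weights.items():
        budget = bandwidth_mbps * w / total
        affordable = [q for q in QUALITIES_MBPS if q <= budget]
        choice[n] = affordable[-1] if affordable else QUALITIES_MBPS[0]
    return choice

scene = [("dancer", 1.0, True), ("chair", 4.0, True), ("statue", 2.0, False)]
print(adapt(scene, bandwidth_mbps=40))
# {'dancer': 25, 'chair': 3, 'statue': 3}
```

The talk's point stands even in this toy: which weighting is best (distance, viewing focus, buffer occupancy) depends on the scene and the camera path.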

Bio: Jeroen van der Hooft obtained his M.Sc. degree in Computer Science Engineering from Ghent University, Belgium, in July 2014. In August of that year, he joined the Department of Information Technology at the same university, where he is currently active as a Ph.D. student. His main research interests include the end-to-end Quality of Experience optimization in adaptive video streaming, and low-latency delivery of immersive video content. During the first months of 2019, he worked as a visiting researcher in the Institute of Information Technology at the University of Klagenfurt, where he focused on rate adaptation for volumetric media streaming.

Website: https://users.ugent.be/~jvdrhoof/


Knitting together/Living together: Was wir vom Stricken mit Robotern lernen können

Dr. Pat Treusch | Thursday, April 11, 2019 | 18:00 | Stiftungssaal

Content (draft): In her talk "Knitting together/Living together: Was wir vom Stricken mit Robotern lernen können" ("What we can learn from knitting with robots"), Patricia Treusch speaks about the collaboration of humans and machines. Building on her current research, she examines human-machine relationships, the automation of work, and the body/mind split in the context of artificial intelligence. Using knitting as an example, she discusses the patterns of interaction between humans and robots and presents forms of feminist-critical intervention in current practices of engineering and robotics.


Dr. phil./PhD Pat Treusch completed a binational doctorate (cotutelle procedure) on the topic of "Robotic Companionship" at the Center for Interdisciplinary Women's and Gender Studies (ZIFG) and at Tema Genus, Linköping University, Sweden. From August 2015 to February 2018, as a research associate at the ZIFG, she ran the project laboratory "Wie Wissenschaft Wissen schafft. Verantwortlich Handeln in Natur- und Technikwissenschaften" ("How science creates knowledge: acting responsibly in the natural and engineering sciences") within the MINTgrün orientation programme at TU Berlin (TUB).

Within the Berlin joint programme "DiGiTal – Digitalisierung: Gestaltung und Transformation", Pat Treusch is carrying out her postdoc project "Das vernetzte Selbst" ("The networked self: a feminist-interdisciplinary study of how digitalisation processes change learning cultures in the age of the Internet of Things (IoT)") at the Chair of General and Historical Educational Science and at the ZIFG, TU Berlin. The project analyses empirically observable challenges to "our" learning cultures that arise when everyday technologies begin to learn. Smart home devices are just one current example of such intelligent everyday IoT technologies, at which novel human-machine interfaces emerge. At their core, these promise to network all areas of life. The project assumes that the emerging interfaces have an inherent quality that challenges "us" to do more than develop a media literacy 4.0. Positioned between feminist science and technology studies, with its focus on human-machine relationships, and feminist educational science, with its focus on learning theories, the project explores whether and how current digital learning environments are characterised by new entanglements of machine and human learning. This also means tracing the relations between cognition and learning, in particular between computers and cognition, across different fields of knowledge and technology shaped by digitalisation. Following from this, the project aims to capture the changing, digitalised conditions of "our" relation to self and world. Not least, this involves asking whether and how intelligent everyday technologies renegotiate, or could renegotiate, fundamental symbolic ordering schemes of society such as gender, sexuality, race, class, or ableism.


Review: Developing and Evolving a DSL-Based Approach for Runtime Monitoring of Systems of Systems [Slides]

The review of the TEWI colloquium of Priv.-Doz. Dr. Rick Rabiser from February 7, 2019 comprises the slides (below):

Abstract

Complex software-intensive systems are often described as systems of systems (SoS) due to their heterogeneous architectural elements. As SoS behavior is often only understandable during operation, runtime monitoring is needed to detect deviations from requirements. While diverse monitoring approaches exist today, most do not provide what is needed to monitor SoS, e.g., support for dynamically defining and deploying diverse checks across multiple systems. In this talk, I will describe our experiences of developing, applying, and evolving an approach for monitoring an SoS in the domain of industrial automation software that is based on a domain-specific language (DSL). I will first describe our initial approach to dynamically defining and checking constraints in SoS at runtime, including a demo of our monitoring tool REMINDS, and then motivate and describe its evolution based on requirements elicited in an industry collaboration project. I will furthermore describe the solutions we have developed to support the evolution of our approach, i.e., a code generation approach and a framework to automate testing of the DSL after changes. We evaluated the expressiveness and scalability of our new DSL-based approach using an industrial SoS. At the end of the talk, I will also present general lessons we learned and give an overview of other projects I am currently involved in, in the area of software monitoring as well as other areas such as software product lines.
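For a flavour of what checking a constraint at runtime can look like, here is a toy Python illustration only (the approach in the talk uses its own DSL; REMINDS is not driven by code like this, and the event names below are invented): the constraint is that every request issued by a system must be answered within a deadline.

```python
def check_response_deadline(events, deadline):
    """events: list of (timestamp, system, kind); returns list of violations."""
    pending = {}     # system -> timestamp of its open request
    violations = []
    for t, system, kind in sorted(events):
        if kind == "request":
            pending[system] = t
        elif kind == "response" and system in pending:
            if t - pending.pop(system) > deadline:
                violations.append((system, t))   # answered, but too late
    # Requests that never received a response also violate the constraint.
    violations.extend((s, None) for s in pending)
    return violations

log = [(0, "plc1", "request"), (2, "plc1", "response"),
       (1, "plc2", "request"), (9, "plc2", "response"),
       (3, "plc3", "request")]
print(check_response_deadline(log, deadline=5))
# [('plc2', 9), ('plc3', None)]
```

A DSL-based approach lets such constraints be written declaratively and redeployed across systems at runtime, instead of being hard-coded as above.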

Bio

Rick Rabiser (http://mevss.jku.at/rabiser) is currently a senior researcher at the Christian Doppler Laboratory for Monitoring and Evolution of Very-Large-Scale Software Systems (VLSS) at Johannes Kepler University Linz, Austria. In this lab, he heads the research module on requirements-based monitoring and diagnosis in VLSS evolution, with Primetals Technologies Austria as industry partner. He holds a Master's degree and a Ph.D. in Business Informatics as well as the venia docendi (Habilitation) in Practical Computer Science from Johannes Kepler University Linz. His research interests include, but are not limited to, variability management, software maintenance and evolution, systems and software product lines, automated software engineering, requirements engineering, requirements monitoring, and usability and user interface design. Dr. Rabiser has co-authored over 120 (peer-reviewed) publications; served on 80+ program committees and 25+ conference and workshop organization committees; and frequently reviews articles for international journals such as IEEE TSE, IEEE TSC, ACM CSUR, EMSE, JSS, and IST. He is a member of the steering committee of the Euromicro SEAA conference series, a member of the Euromicro Board of Directors (Director for Austria) and the Euromicro Executive Office (Publicity Secretary), and an elected member of the steering committee of the International Systems and Software Product Line Conference (SPLC). He is currently the representative of the non-professorial computer science faculty at JKU Linz (Fachbereichssprecher Mittelbau Informatik).


Review: Random Matrix Theory in Array Signal Processing: Application Examples [Slides]

The review of the TEWI colloquium of Prof. Xavier Mestre from February 25, 2019 comprises the slides (below):

Abstract:

Conventional tools in array signal processing have traditionally relied on the availability of a large number of samples acquired at each sensor or array element (antenna, hydrophone, microphone, etc.). Large sample size assumptions typically guarantee the consistency of estimators, detectors, classifiers, and multiple other widely used signal processing procedures. However, practical scenario and array mobility conditions, together with the need for low latency and reduced scanning times, impose strong limits on the total number of observations that can be effectively processed. When the number of collected samples per sensor is small, conventional large-sample asymptotic approaches are no longer relevant. Recently, large random matrix theory tools have been proposed in order to address the small sample support problem in array signal processing. In fact, it has been shown that the most important and longstanding problems in this field can be reformulated and studied according to this asymptotic paradigm. By exploiting the latest advances in large random matrix theory and high-dimensional statistics, a novel and unconventional methodology can be established, which provides an unprecedented treatment of the finite sample-per-sensor regime. In this talk, we will see that random matrix theory establishes a unifying framework for the study of array signal processing techniques under the constraint of a small number of observations per sensor, which has radically changed the way in which array processing methodologies have traditionally been established. We will show how this unconventional way of revisiting classical array processing has led to major advances in the design and analysis of signal processing techniques for multidimensional observations.
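A small numerical illustration of the finite sample-per-sensor regime (my own toy example, assuming the simplest possible model of white noise): even when the true covariance is the identity, the sample eigenvalues do not concentrate at 1 when the ratio c = M/N of sensors to samples is non-negligible; asymptotically they spread over the Marchenko-Pastur support [(1-√c)², (1+√c)²].

```python
import numpy as np

rng = np.random.default_rng(1)

M, N = 100, 200                       # sensors, snapshots; c = M/N = 0.5
c = M / N
X = rng.standard_normal((M, N))       # true covariance: the identity
eigvals = np.linalg.eigvalsh(X @ X.T / N)

# Marchenko-Pastur support edges for this aspect ratio:
lo, hi = (1 - c**0.5) ** 2, (1 + c**0.5) ** 2
print(round(lo, 2), round(hi, 2))     # 0.09 2.91
print(round(eigvals.min(), 2), round(eigvals.max(), 2))  # close to the edges
```

Classical large-sample reasoning would predict all eigenvalues near 1; the random matrix viewpoint describes this spread, and thereby lets one correct for it.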

Bio:

Xavier Mestre received the MS and PhD in Electrical Engineering from the Technical University of Catalonia (UPC) in 1997 and 2002, respectively, and the Licentiate Degree in Mathematics in 2011. During the pursuit of his PhD, he was the recipient of a 1998-2001 PhD scholarship granted by the Catalan Government and was awarded the 2002 Rosina Ribalta second prize for the best doctoral thesis project in the areas of Information Technologies and Communications by the Epson Ibérica foundation. From January 1998 to December 2002, he was with UPC's Communications Signal Processing Group, where he worked as a Research Assistant and participated actively in several European-funded projects. In January 2003 he joined the Telecommunications Technological Center of Catalonia (CTTC), where he currently holds a position as a Senior Research Associate and head of the Advanced Signal and Information Processing Department. During this time, he has actively participated in 8 European projects and two ESA contracts. He has been coordinator of the European ICT project EMPhAtiC (2012-15) and has participated in 6 industrial contracts, some of which have led to commercialized products. He is author of three granted patents, 9 book chapters, 41 international journal papers, and more than 90 articles in international conferences. He has been associate editor of the IEEE Transactions on Signal Processing (2008-11, 2015-present) and associate co-editor of the special issue on Cooperative Communications in Wireless Networks of the EURASIP Journal on Wireless Communications and Networking. He is an IEEE Senior Member and an elected member of the IEEE Sensor Array and Multichannel Signal Processing Technical Committee (2013-2018) and of the EURASIP Special Area Teams on "Theoretical and Methodological Trends in Signal Processing" (2015-present) and "Signal Processing in Communications" (2018-present).
He has participated in the organization of multiple conferences and scientific events, such as the "IEEE Wireless Communications and Networking Conference 2018" (general vice-chair), the "IEEE International Symposium on Power Line Communications" (technical chair), "European Wireless 2014" (general co-chair), the "European Signal Processing Conference 2011" (general technical chair), the "IEEE Winter School on Information Theory" 2011 (general co-chair), and the "Summer School on Random Matrix Theory for Wireless Communications" 2006 (general chair). He is general chair of the IEEE International Conference on Acoustics, Speech and Signal Processing 2020.
