Knitting together/Living together: Was wir vom Stricken mit Robotern lernen können

Dr. Pat Treusch | Thursday, April 11, 2019 | 6:00 p.m. | Stiftungssaal

Content (draft): In her talk “Knitting together/Living together: Was wir vom Stricken mit Robotern lernen können” (“What we can learn from knitting with robots”), Patricia Treusch discusses the collaboration of humans and machines. Building on her current research, she examines human-machine relations, the automation of labor, and the body/mind split in the context of artificial intelligence. Using knitting as an example, she discusses the interaction between humans and robots and presents forms of feminist-critical intervention in current practices of engineering and robotics.


Dr. phil./PhD Pat Treusch completed a binational doctorate (cotutelle) on the topic of “Robotic Companionship” at the Center for Interdisciplinary Women’s and Gender Studies (ZIFG), TU Berlin, and at Tema Genus, Linköping University, Sweden. From August 2015 to February 2018, she was a research associate at the ZIFG, where she ran the project lab “Wie Wissenschaft Wissen schafft. Verantwortlich Handeln in Natur- und Technikwissenschaften” (“How science creates knowledge: acting responsibly in the natural and engineering sciences”) as part of the MINTgrün orientation program (TU Berlin).

Within the Berlin joint program “DiGiTal – Digitalisierung: Gestaltung und Transformation” (Digitalization: Design and Transformation), Pat Treusch is carrying out her postdoc project “Das vernetzte Selbst” (“The Networked Self”), a feminist, interdisciplinary study of how digitalization processes change learning cultures in the age of the Internet of Things (IoT), at the Department of General and Historical Educational Science and at the ZIFG, TU Berlin. The project analyzes empirically observable challenges to “our” learning cultures that arise when everyday technologies begin to learn. Smart home devices are just one current example of such intelligent everyday IoT technologies, at which novel human-machine interfaces emerge. At their core, these interfaces promise to network all areas of life. The project assumes that the emerging interfaces have an inherent quality that challenges “us” to do more than develop a “media literacy 4.0”. Positioned between feminist science and technology studies, with its focus on human-machine relations, and feminist educational science, with its focus on theories of learning, the project explores the extent to which current digital learning environments are characterized by new entanglements of machine and human learning. This also means tracing how cognition and learning, and in particular computers and cognition, are related to each other across different fields of knowledge and technology in the course of digitalization. Building on this, the project aims to capture the changing, digitalized conditions of “our” relation to self and world. Not least, this involves asking whether and how intelligent everyday technologies (could) renegotiate fundamental symbolic ordering schemes of society, such as gender, sexuality, race, class, or ableism.

Posted in TEWI-Kolloquium | Comments disabled for Knitting together/Living together: Was wir vom Stricken mit Robotern lernen können

Review: Developing and Evolving a DSL-Based Approach for Runtime Monitoring of Systems of Systems [Slides]

The review of the TEWI colloquium of Priv.-Doz. Dr. Rick Rabiser from February 7, 2019 comprises the slides (below):

Abstract

Complex software-intensive systems are often described as systems of systems (SoS) due to their heterogeneous architectural elements. As SoS behavior is often only understandable during operation, runtime monitoring is needed to detect deviations from requirements. While diverse monitoring approaches exist today, most do not provide what is needed to monitor SoS, e.g., support for dynamically defining and deploying diverse checks across multiple systems. In this talk, I will describe our experiences of developing, applying, and evolving an approach for monitoring an SoS in the domain of industrial automation software that is based on a domain-specific language (DSL). I will first describe our initial approach to dynamically defining and checking constraints in SoS at runtime, including a demo of our monitoring tool REMINDS, and then motivate and describe its evolution based on requirements elicited in an industry collaboration project. I will furthermore describe solutions we have developed to support the evolution of our approach, i.e., a code generation approach and a framework to automate testing of the DSL after changes. We evaluated the expressiveness and scalability of our new DSL-based approach using an industrial SoS. At the end of the talk, I will also present general lessons we learned and give an overview of other projects I am currently involved in, in the area of software monitoring as well as in other areas such as software product lines.
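As a rough illustration of the kind of DSL-based checking described above, the sketch below defines a temporal constraint over an event stream and evaluates it at runtime. It is a hypothetical toy, not the REMINDS DSL; all names (`Event`, `within`, the event log) are invented for illustration.

```python
# Hypothetical sketch of DSL-style runtime constraint checking over an SoS
# event stream. All names are illustrative; the real REMINDS DSL differs.
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # which constituent system emitted the event
    name: str
    timestamp: float  # seconds

def within(trigger, expected, max_delay):
    """Constraint: every `trigger` event must be followed by an `expected`
    event (from any system) within `max_delay` seconds."""
    def check(events):
        return [e for e in events if e.name == trigger and not any(
            f.name == expected and 0 <= f.timestamp - e.timestamp <= max_delay
            for f in events)]
    return check

# Constraints are plain data/closures, so new checks can be defined and
# deployed while the monitored systems keep running.
log = [Event("plc1", "job_started", 0.0),
       Event("hmi",  "job_confirmed", 1.5),
       Event("plc1", "job_started", 10.0)]
constraint = within("job_started", "job_confirmed", max_delay=3.0)
print(len(constraint(log)))  # 1: the second job start was never confirmed
```

A real monitoring DSL would layer quantifiers over systems, data conditions, and deployment metadata on top of this core pattern.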

Bio

Rick Rabiser (http://mevss.jku.at/rabiser) is currently a senior researcher at the Christian Doppler Laboratory for Monitoring and Evolution of Very-Large-Scale Software Systems (VLSS) at Johannes Kepler University Linz, Austria. In this lab, he heads the research module on requirements-based monitoring and diagnosis in VLSS evolution, with Primetals Technologies Austria as industry partner. He holds a Master’s and a Ph.D. degree in Business Informatics as well as the venia docendi (Habilitation) in Practical Computer Science from Johannes Kepler University Linz. His research interests include, but are not limited to, variability management, software maintenance and evolution, systems and software product lines, automated software engineering, requirements engineering, requirements monitoring, and usability and user interface design. Dr. Rabiser has co-authored over 120 peer-reviewed publications; served on 80+ program committees and 25+ conference and workshop organization committees; and frequently reviews articles for international journals such as IEEE TSE, IEEE TSC, ACM CSUR, EMSE, JSS, and IST. He is a member of the steering committee of the Euromicro SEAA conference series, a member of the Euromicro Board of Directors (Director for Austria) and the Euromicro Executive Office (Publicity Secretary), and an elected member of the steering committee of the International Systems and Software Product Line Conference (SPLC). He is currently the elected representative of the non-professorial computer science staff at JKU Linz (Fachbereichssprecher Mittelbau Informatik).


Review: Random Matrix Theory in Array Signal Processing: Application Examples [Slides]

The review of the TEWI colloquium of Prof. Xavier Mestre from February 25, 2019 comprises the slides (below):

Abstract:

Conventional tools in array signal processing have traditionally relied on the availability of a large number of samples acquired at each sensor or array element (antenna, hydrophone, microphone, etc.). Large sample size assumptions typically guarantee the consistency of estimators, detectors, classifiers and multiple other widely used signal processing procedures. However, practical scenario and array mobility conditions, together with the need for low latency and reduced scanning times, impose strong limits on the total number of observations that can be effectively processed. When the number of collected samples per sensor is small, conventional large sample asymptotic approaches are no longer relevant. Recently, large random matrix theory tools have been proposed to address the small sample support problem in array signal processing. In fact, it has been shown that the most important and longstanding problems in this field can be reformulated and studied according to this asymptotic paradigm. By exploiting the latest advances in large random matrix theory and high-dimensional statistics, a novel and unconventional methodology can be established, which provides an unprecedented treatment of the finite sample-per-sensor regime. In this talk, we will see that random matrix theory establishes a unifying framework for the study of array signal processing techniques under the constraint of a small number of observations per sensor, which has radically changed the way in which array processing methodologies have traditionally been established. We will show how this unconventional way of revisiting classical array processing has led to major advances in the design and analysis of signal processing techniques for multidimensional observations.
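To make the finite sample-per-sensor effect concrete, here is a small illustrative experiment (not from the talk): for white data the true covariance is the identity, so every eigenvalue is 1, yet with a sample size comparable to the array dimension the sample covariance eigenvalues spread out roughly as random matrix theory predicts.

```python
# Illustrative numerical toy (not from the talk): with few samples per
# sensor, sample covariance eigenvalues spread out. For white N-dimensional
# data the true mean squared eigenvalue is 1; random matrix theory predicts
# that with only n samples the sample covariance gives roughly 1 + N/n.
import random

random.seed(0)
N, n = 50, 100  # sensors, snapshots per sensor (comparable sizes!)
x = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(N)]

# sample covariance C[i][j] = (1/n) * sum_t x_i(t) * x_j(t)
C = [[sum(x[i][t] * x[j][t] for t in range(n)) / n for j in range(N)]
     for i in range(N)]

# mean squared eigenvalue = tr(C^2) / N; no eigendecomposition needed
tr_C2 = sum(C[i][j] * C[j][i] for j in range(N) for i in range(N))
print(round(tr_C2 / N, 2))  # close to 1 + N/n = 1.5, not the true value 1.0
```

As n grows with N fixed, the same statistic approaches 1, which is why classical large-sample arguments fail only in the finite sample-per-sensor regime.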

Bio:

Xavier Mestre received the MS and PhD degrees in Electrical Engineering from the Technical University of Catalonia (UPC) in 1997 and 2002, respectively, and the Licentiate degree in Mathematics in 2011. During the pursuit of his PhD, he held a 1998-2001 PhD scholarship granted by the Catalan Government and was awarded the 2002 Rosina Ribalta second prize for the best doctoral thesis project in the areas of Information Technologies and Communications by the Epson Iberica foundation. From January 1998 to December 2002, he was with UPC’s Communications Signal Processing Group, where he worked as a Research Assistant and participated actively in several European-funded projects. In January 2003 he joined the Telecommunications Technological Center of Catalonia (CTTC), where he currently holds a position as a Senior Research Associate and head of the Advanced Signal and Information Processing Department. During this time, he has actively participated in 8 European projects and two ESA contracts. He has been coordinator of the European ICT project EMPhAtiC (2012-15) and has participated in 6 industrial contracts, some of which have led to commercialized products. He is the author of three granted patents, 9 book chapters, 41 international journal papers, and more than 90 articles in international conferences. He has been an associate editor of the IEEE Transactions on Signal Processing (2008-11, 2015-present) and associate co-editor of the special issue on Cooperative Communications in Wireless Networks of the EURASIP Journal on Wireless Communications and Networking. He is an IEEE Senior Member, was an elected member of the IEEE Sensor Array and Multi-channel Signal Processing technical committee (2013-2018), and is an elected member of the EURASIP Special Area Teams on “Theoretical and Methodological Trends in Signal Processing” (2015-present) and “Signal Processing in Communications” (2018-present).
He has participated in the organization of multiple conferences and scientific events, serving as general vice-chair of the IEEE Wireless Communications and Networking Conference 2018, technical chair of the IEEE International Symposium on Power Line Communications, general co-chair of European Wireless 2014, general technical chair of the European Signal Processing Conference 2011, general co-chair of the IEEE Winter School on Information Theory 2011, and general chair of the Summer School on Random Matrix Theory for Wireless Communications 2006. He is general chair of the IEEE International Conference on Acoustics, Speech and Signal Processing 2020.


Forgetful, shortsighted demons in wireless communications (in cooperation with Lakeside Labs GmbH)

Harun Siljak, PhD | February 26, 2019 | 15:30 | B04.1.114 (Lakeside B04, entrance b, 1st floor)

Abstract:

The common theme of the results presented in this talk is the control of complex systems in wireless communications subject to information loss, either because of noise and equipment limitations or because of the controller’s inability to wait long enough or see far enough. Can we reconstruct the past and/or predict the future based on imperfect information, and why would we want to do so in the first place?
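As a toy illustration of estimation under imperfect information (not taken from the talk), the sketch below runs a scalar Kalman filter against noisy observations of a hidden state and compares it with naively trusting the raw measurements.

```python
# Hypothetical toy: reconstructing a hidden state despite information loss.
# A scalar Kalman filter tracks x(t) = a*x(t-1) + process noise from noisy
# observations z(t) = x(t) + measurement noise.
import random

random.seed(1)
a, q, r = 0.95, 0.1, 0.5   # dynamics, process noise var, measurement noise var
x, est, p = 1.0, 0.0, 1.0  # true state, estimate, estimate variance
err_naive = err_kf = 0.0
for _ in range(500):
    x = a * x + random.gauss(0, q ** 0.5)        # hidden true dynamics
    z = x + random.gauss(0, r ** 0.5)            # noisy observation
    est, p = a * est, a * a * p + q              # predict
    k = p / (p + r)                              # Kalman gain
    est, p = est + k * (z - est), (1 - k) * p    # update
    err_kf += (x - est) ** 2
    err_naive += (x - z) ** 2                    # trusting raw measurements
print(err_kf < err_naive)  # True: the filtered estimate beats the raw data
```

The filter wins because it fuses the known dynamics with each imperfect observation instead of relying on either alone.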

Bio:

Harun Siljak obtained his BoE and MoE degrees in control engineering from the University of Sarajevo in 2010 and 2012, respectively, and his PhD in electrical engineering from International Burch University Sarajevo in 2015. After working at International Burch University and Bell Labs Ireland, he joined Trinity College Dublin as an EDGE Marie Curie Fellow in 2017 to work on his project on complexity and control in distributed massive MIMO. His research interests include the physics of computation, reversibility, wave propagation, and nonlinear dynamics. His other interests include popular-science and science-fiction writing, as well as collaborations with artists and writers.


Random Matrix Theory in Array Signal Processing: Application Examples

Prof. Xavier Mestre | February 25, 2019 | 11:00 | S.1.42

Abstract:

Conventional tools in array signal processing have traditionally relied on the availability of a large number of samples acquired at each sensor or array element (antenna, hydrophone, microphone, etc.). Large sample size assumptions typically guarantee the consistency of estimators, detectors, classifiers and multiple other widely used signal processing procedures. However, practical scenario and array mobility conditions, together with the need for low latency and reduced scanning times, impose strong limits on the total number of observations that can be effectively processed. When the number of collected samples per sensor is small, conventional large sample asymptotic approaches are no longer relevant. Recently, large random matrix theory tools have been proposed to address the small sample support problem in array signal processing. In fact, it has been shown that the most important and longstanding problems in this field can be reformulated and studied according to this asymptotic paradigm. By exploiting the latest advances in large random matrix theory and high-dimensional statistics, a novel and unconventional methodology can be established, which provides an unprecedented treatment of the finite sample-per-sensor regime. In this talk, we will see that random matrix theory establishes a unifying framework for the study of array signal processing techniques under the constraint of a small number of observations per sensor, which has radically changed the way in which array processing methodologies have traditionally been established. We will show how this unconventional way of revisiting classical array processing has led to major advances in the design and analysis of signal processing techniques for multidimensional observations.

Bio:

Xavier Mestre received the MS and PhD degrees in Electrical Engineering from the Technical University of Catalonia (UPC) in 1997 and 2002, respectively, and the Licentiate degree in Mathematics in 2011. During the pursuit of his PhD, he held a 1998-2001 PhD scholarship granted by the Catalan Government and was awarded the 2002 Rosina Ribalta second prize for the best doctoral thesis project in the areas of Information Technologies and Communications by the Epson Iberica foundation. From January 1998 to December 2002, he was with UPC’s Communications Signal Processing Group, where he worked as a Research Assistant and participated actively in several European-funded projects. In January 2003 he joined the Telecommunications Technological Center of Catalonia (CTTC), where he currently holds a position as a Senior Research Associate and head of the Advanced Signal and Information Processing Department. During this time, he has actively participated in 8 European projects and two ESA contracts. He has been coordinator of the European ICT project EMPhAtiC (2012-15) and has participated in 6 industrial contracts, some of which have led to commercialized products. He is the author of three granted patents, 9 book chapters, 41 international journal papers, and more than 90 articles in international conferences. He has been an associate editor of the IEEE Transactions on Signal Processing (2008-11, 2015-present) and associate co-editor of the special issue on Cooperative Communications in Wireless Networks of the EURASIP Journal on Wireless Communications and Networking. He is an IEEE Senior Member, was an elected member of the IEEE Sensor Array and Multi-channel Signal Processing technical committee (2013-2018), and is an elected member of the EURASIP Special Area Teams on “Theoretical and Methodological Trends in Signal Processing” (2015-present) and “Signal Processing in Communications” (2018-present).
He has participated in the organization of multiple conferences and scientific events, serving as general vice-chair of the IEEE Wireless Communications and Networking Conference 2018, technical chair of the IEEE International Symposium on Power Line Communications, general co-chair of European Wireless 2014, general technical chair of the European Signal Processing Conference 2011, general co-chair of the IEEE Winter School on Information Theory 2011, and general chair of the Summer School on Random Matrix Theory for Wireless Communications 2006. He is general chair of the IEEE International Conference on Acoustics, Speech and Signal Processing 2020.


Developing and Evolving a DSL-Based Approach for Runtime Monitoring of Systems of Systems

Priv.-Doz. Dr. Rick Rabiser | February 7, 2019 | 10:00 | S.2.42

Abstract

Complex software-intensive systems are often described as systems of systems (SoS) due to their heterogeneous architectural elements. As SoS behavior is often only understandable during operation, runtime monitoring is needed to detect deviations from requirements. While diverse monitoring approaches exist today, most do not provide what is needed to monitor SoS, e.g., support for dynamically defining and deploying diverse checks across multiple systems. In this talk, I will describe our experiences of developing, applying, and evolving an approach for monitoring an SoS in the domain of industrial automation software that is based on a domain-specific language (DSL). I will first describe our initial approach to dynamically defining and checking constraints in SoS at runtime, including a demo of our monitoring tool REMINDS, and then motivate and describe its evolution based on requirements elicited in an industry collaboration project. I will furthermore describe solutions we have developed to support the evolution of our approach, i.e., a code generation approach and a framework to automate testing of the DSL after changes. We evaluated the expressiveness and scalability of our new DSL-based approach using an industrial SoS. At the end of the talk, I will also present general lessons we learned and give an overview of other projects I am currently involved in, in the area of software monitoring as well as in other areas such as software product lines.

Bio

Rick Rabiser (http://mevss.jku.at/rabiser) is currently a senior researcher at the Christian Doppler Laboratory for Monitoring and Evolution of Very-Large-Scale Software Systems (VLSS) at Johannes Kepler University Linz, Austria. In this lab, he heads the research module on requirements-based monitoring and diagnosis in VLSS evolution, with Primetals Technologies Austria as industry partner. He holds a Master’s and a Ph.D. degree in Business Informatics as well as the venia docendi (Habilitation) in Practical Computer Science from Johannes Kepler University Linz. His research interests include, but are not limited to, variability management, software maintenance and evolution, systems and software product lines, automated software engineering, requirements engineering, requirements monitoring, and usability and user interface design. Dr. Rabiser has co-authored over 120 peer-reviewed publications; served on 80+ program committees and 25+ conference and workshop organization committees; and frequently reviews articles for international journals such as IEEE TSE, IEEE TSC, ACM CSUR, EMSE, JSS, and IST. He is a member of the steering committee of the Euromicro SEAA conference series, a member of the Euromicro Board of Directors (Director for Austria) and the Euromicro Executive Office (Publicity Secretary), and an elected member of the steering committee of the International Systems and Software Product Line Conference (SPLC). He is currently the elected representative of the non-professorial computer science staff at JKU Linz (Fachbereichssprecher Mittelbau Informatik).


Effective model-based approaches for automated software testing

Prof. Giorgio Brajnik | January 23, 2019 | 11:00 | N.1.42 (Germanistik)

Abstract

Testing lies at the heart of software development. Tightly woven with requirements engineering, the testing process influences how software is developed and its quality. With the adoption of agile and DevOps approaches, continuous testing has to rely on a multi-level testing strategy that balances test automation and exploratory testing. Because so many things need to be tested, and because the system under test changes often and rapidly, the testing process must be effective and sustainable.

I will present an approach for automating end-to-end testing that is based on UML specifications of the behavior of the system and a toolkit that automatically generates source code supporting the definition of high-level test cases and related artifacts. In this way, a software development team can avoid dealing with low-level details and focus instead on what needs to be tested, which test conditions need to be covered, and how test results affect requirements coverage. This kind of information then constitutes living documentation of the system specification, which can be used to guide exploratory testing. The approach is currently being used in mobile apps (in the area of workforce management) and web apps (in the financial domain).
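The idea of testing against a behavioral model rather than low-level details can be sketched in a few lines. This is a hypothetical miniature, not Prof. Brajnik's toolkit: a small state machine stands in for the UML specification, and each path through it is a high-level test case.

```python
# Hypothetical miniature of model-based test generation (not the actual
# toolkit from the talk). A state machine stands in for the UML behavioral
# specification; every path through it is a high-level test case.
transitions = {  # tiny model of a login flow
    ("LoggedOut", "submit_valid"):   "LoggedIn",
    ("LoggedOut", "submit_invalid"): "LoggedOut",
    ("LoggedIn",  "logout"):         "LoggedOut",
}

def run(start, actions):
    """Replay an action sequence through the model; a KeyError means the
    test case exercises a transition the specification does not allow."""
    state = start
    for action in actions:
        state = transitions[(state, action)]
    return state

# A generator would enumerate such paths and emit concrete, executable
# test scripts for the real UI; here we just replay one path by hand.
path = ["submit_invalid", "submit_valid", "logout"]
print(run("LoggedOut", path))  # LoggedOut
```

Because test cases refer only to model states and actions, they survive UI-level changes as long as the generated glue code is regenerated.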

Bio

Giorgio Brajnik is an associate professor at the Computer Science Department of the University of Udine, Italy. He holds a degree in Computer Science from the University of Udine and a PhD in Computer Science from the University of Manchester. After working on information search systems, since 1999 his focus has been on methods for effectively assessing the accessibility and quality of websites and web applications, and more recently on model-based techniques for the analysis of user interfaces.

At the university he teaches courses on object-oriented programming and on accessibility and user-centered web development. In 1992 and 1995-96 he was a visiting scholar at the University of Texas at Austin. He has been an invited lecturer, panelist, and visiting professor in Europe, the U.S., and New Zealand. He participated in several W3C working groups dealing with accessibility. He also supervised the development of accessibility testing tools while working with Usablenet Inc., a company he co-founded. Currently he is a scientific advisor for Interaction Design Solutions, a startup he co-founded that specializes in model-driven techniques for software system testing.

He is a program committee member of several conferences, including the International Cross-Disciplinary Conference on Web Accessibility and ACM ASSETS, for which he was co-chair of the Doctoral Consortium and also General Chair, and he is a regular reviewer for several journals. Additional details are available at www.dimi.uniud.it/giorgio/vitae.html.


Review: A Distributed Approach for Bitrate Selection in HTTP Adaptive Streaming [Slides]

The review of the TEWI colloquium of Abdelhak Bentaleb from December 13, 2018 comprises the slides (below):

Abstract: Past research has shown that concurrent HTTP adaptive streaming (HAS) players behave selfishly, and the resulting competition for shared resources leads to underutilization or oversubscription of the network, presentation quality instability, and unfairness among the players, all of which adversely impact the viewer experience. While coordination among the players, as opposed to all being selfish, has its merits and may alleviate some of these issues, a fully distributed architecture is still desirable in many deployments and better reflects the design spirit of HAS. In this study, we propose a distributed bitrate adaptation scheme for HAS that borrows ideas from consensus and game theory frameworks. Experimental results show that the proposed distributed approach provides significant improvements in terms of viewer experience, presentation quality stability, fairness, and network utilization, without using any explicit communication between the players.
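The no-communication intuition can be sketched with a toy simulation (illustrative only, not the paper's actual scheme): each player reacts solely to the throughput it observes on a shared bottleneck, backing off when its request is throttled and probing upward when it is fully served, and the bitrates drift toward an equal share without any message exchange.

```python
# Toy simulation of distributed bitrate adaptation (illustrative only, not
# the paper's algorithm). Players never exchange messages: each one reacts
# to the throughput it observes on the shared bottleneck link.
CAPACITY = 12.0  # Mbps available at the bottleneck

def max_min_alloc(demands, capacity):
    """Max-min fair bandwidth allocation (water-filling): flows demanding
    less than an equal split keep their demand; the rest split what remains."""
    alloc, active, cap = [0.0] * len(demands), set(range(len(demands))), capacity
    while active:
        share = cap / len(active)
        satisfied = {i for i in active if demands[i] <= share}
        if not satisfied:               # everyone left gets an equal share
            for i in active:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            cap -= demands[i]
        active -= satisfied
    return alloc

bitrates = [1.0, 9.0, 6.0]              # selfish, unequal starting requests
for _ in range(60):
    got = max_min_alloc(bitrates, CAPACITY)
    # fully served -> probe upward; throttled -> back off to observed rate
    bitrates = [g + 0.5 if g >= b - 1e-9 else g
                for b, g in zip(bitrates, got)]
print([round(b, 1) for b in bitrates])  # all players end near the 4.0 fair share
```

The actual scheme draws on consensus and game theory frameworks; this toy captures only the shared idea that implicit feedback from the network can substitute for explicit coordination.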

Bio: Abdelhak Bentaleb is a PhD candidate in Computer Science at the School of Computing, National University of Singapore (NUS). He is advised by Prof. Roger Zimmermann, and his research interests include video streaming architectures, content delivery, multimedia systems, and computer networks. He is a member of the Media Management Research Lab, where he works on the Streaming Media project. He has designed and developed a novel suite of adaptive bitrate (ABR) solutions to address key challenges of video delivery, including quality instability, unfairness, and network resource under-/over-utilization, in HTTP adaptive streaming (HAS) and HAS-like (DASH) systems.


A Distributed Approach for Bitrate Selection in HTTP Adaptive Streaming

Abdelhak Bentaleb | Thursday, December 13, 2018 | 14:30 | S.1.42 (formerly known as E.1.42)

Abstract: Past research has shown that concurrent HTTP adaptive streaming (HAS) players behave selfishly, and the resulting competition for shared resources leads to underutilization or oversubscription of the network, presentation quality instability, and unfairness among the players, all of which adversely impact the viewer experience. While coordination among the players, as opposed to all being selfish, has its merits and may alleviate some of these issues, a fully distributed architecture is still desirable in many deployments and better reflects the design spirit of HAS. In this study, we propose a distributed bitrate adaptation scheme for HAS that borrows ideas from consensus and game theory frameworks. Experimental results show that the proposed distributed approach provides significant improvements in terms of viewer experience, presentation quality stability, fairness, and network utilization, without using any explicit communication between the players.

Bio: Abdelhak Bentaleb is a PhD candidate in Computer Science at the School of Computing, National University of Singapore (NUS). He is advised by Prof. Roger Zimmermann, and his research interests include video streaming architectures, content delivery, multimedia systems, and computer networks. He is a member of the Media Management Research Lab, where he works on the Streaming Media project. He has designed and developed a novel suite of adaptive bitrate (ABR) solutions to address key challenges of video delivery, including quality instability, unfairness, and network resource under-/over-utilization, in HTTP adaptive streaming (HAS) and HAS-like (DASH) systems.


Review: New Media Services from a Mobile Chipset Vendor and Standardization Perspective [Slides]

The review of the TEWI colloquium of Dr. Thomas Stockhammer from November 30, 2018 comprises the slides (below):

Abstract: The media landscape has changed significantly over the last few years through new content formats, new service offerings, additional consumption devices, and new monetization models. Think of Netflix, DAZN, broadcasters’ online media libraries (Mediatheken), mobile devices, interactive content, smart TVs, virtual and augmented reality, and so on. Many of these efforts have been realized with only limited use of standards, but are standards therefore irrelevant? Secondly, more and more services and experiences are enabled by the latest mobile compute platforms. This presentation will provide an overview of some of these trends and will motivate the development of global interoperability standards. Specific aspects will include the move of linear TV services to the Internet (both mobile and fixed) as well as recent advances in extended reality and immersive media.

Bio: Thomas Stockhammer received the Dipl.-Ing. and Dr.-Ing. degrees from the Munich University of Technology, Munich, Germany. He was a visiting researcher at Rensselaer Polytechnic Institute (RPI), Troy, NY, USA, and at the University of California San Diego (UCSD), San Diego, CA, USA. After acting as co-founder and CEO of Novel Mobile Radio (NoMoR) Research for 10 years and as a consultant for Siemens mobile, BenQ mobile, LG Electronics, and Digital Fountain, he joined Qualcomm in 2014 as Director, Technical Standards. In his different roles, he has co-authored more than 200 research publications, more than 150 patents, and thousands of contributions to standardization efforts. In his day job, he is active in, and holds leadership and rapporteur positions in, 3GPP, DVB, MPEG, IETF, ATSC, CTA, ETSI, the VR Industry Forum, and the DASH Industry Forum, in the areas of multimedia communication, TV distribution, content delivery protocols, immersive media representation, and adaptive streaming. Among others, he leads the MPEG-I efforts in MPEG, chairs the DASH-IF technical working group, was the rapporteur of the first completed 3GPP VR work, and chairs the DVB CM-I group. Thomas received the INCITS Technical Excellence Award 2013 for his MPEG DASH work and the 3GPP Excellence Award 2017 for his work on Enhanced TV.
