MPEG-V: Bridging the Virtual and Real World
Jae Joon Han, Seungju Han, Sang-Kyun Kim, Marius Preda, Kyoungro Yoon

eBook download: PDF | EPUB
2015 | 1st edition, 220 pages
Elsevier Science (publisher)
978-0-12-420203-0 (ISBN)
This book is the first to cover the recently developed MPEG-V standard, explaining the fundamentals of each part of the technology and exploring potential applications. Written by experts in the field who were instrumental in the development of the standard, this book goes beyond the scope of the official standard documentation, describing how to use the technology in a practical context and how to combine it with other information such as audio, video, images, and text. Each chapter follows an easy-to-understand format, first examining how each part of the standard is composed, then covering the intended uses and applications of each particular effect. With this book, you will learn how to:
  • Use the MPEG-V standard to develop applications
  • Develop systems for various use cases using MPEG-V
  • Synchronize the virtual world and real world
  • Create and render sensory effects for media
  • Understand and use MPEG-V in research on new types of media-related technology and services
Key features:
  • The first book on the new MPEG-V standard, which enables interoperability between virtual worlds and the real world
  • Provides the technical foundations for understanding and using MPEG-V for various virtual world, mirrored world, and mixed world use cases
  • An accompanying website features schema files for the standard, with example XML files, source code from the reference software, and example applications

Kyoungro Yoon is a professor in the School of Computer Science and Engineering at Konkuk University, Seoul, Korea. He received his Ph.D. in computer and information science from Syracuse University, USA, in 1999. From 1999 to 2003, he was a Chief Research Engineer and Group Leader at the LG Electronics Institute of Technology, in charge of developing various product-related technologies and standards in the field of image and audio processing. In 2003, he joined Konkuk University as an assistant professor and has been a full professor since 2012. He actively participated in the development of standards such as MPEG-7, MPEG-21, MPEG-V, JPSearch, and TV-Anytime, serving as co-chair of the Ad Hoc Group on User Preferences; chair of the Ad Hoc Groups on MPEG Query Format, MPEG-V, and JPSearch; and chair of the Metadata Subgroup of ISO/IEC JTC1 SC29 WG1 (a.k.a. JPEG). He also served as an editor of various international standards, including ISO/IEC 15938-12, ISO/IEC 23005-2/5/6, and ISO/IEC 24800-2/5. He has co-authored over 40 conference and journal publications in the field of multimedia information systems and is an inventor or co-inventor of more than 30 US patents and 70 Korean patents.

Front Cover
MPEG-V
Copyright Page
Contents
Acknowledgment
Author Biographies
Preface
1 Introduction to MPEG-V Standards
1.1 Introduction to Virtual Worlds
1.2 Advances in Multiple Sensorial Media
1.2.1 Basic Studies on Multiple Sensorial Media
1.2.2 Authoring of MulSeMedia
1.2.3 Quality of Experience of MulSeMedia
1.2.3.1 Test Setups
1.2.3.2 Test Procedures
1.2.3.3 Experimental QoE Results for Sensorial Effects
1.3 History of MPEG-V
1.4 Organizations of MPEG-V
1.5 Conclusion
References
2 Adding Sensorial Effects to Media Content
2.1 Introduction
2.2 Sensory Effect Description Language
2.2.1 SEDL Structure
2.2.2 Base Data Types and Elements of SEDL
2.2.3 Root Element of SEDL
2.2.4 Description Metadata
2.2.5 Declarations
2.2.6 Group of Effects
2.2.7 Effect
2.2.8 Reference Effect
2.2.9 Parameters
2.3 Sensory Effect Vocabulary: Data Formats for Creating SEs
2.4 Creating SEs
2.5 Conclusion
References
3 Standard Interfacing Format for Actuators and Sensors
3.1 Introduction
3.2 Interaction Information Description Language
3.2.1 IIDL Structure
3.2.2 DeviceCommand Element
3.2.3 SensedInfo Element
3.2.4 InteractionInfo Element
3.3 DCV: Data Format for Creating Effects Using Actuators
3.4 SIV: Data Format for Sensing Information Using Sensors
3.5 Creating Commands and Accepting Sensor Inputs
3.6 Conclusion
References
4 Adapting Sensory Effects and Adapted Control of Devices
4.1 Introduction
4.2 Control Information Description Language
4.2.1 CIDL Structure
4.2.2 SensoryDeviceCapability Element
4.2.3 SensorDeviceCapability Element
4.2.4 USPreference Element
4.2.5 SAPreference Element
4.3 Device Capability Description Vocabulary
4.4 Sensor Capability Description Vocabulary
4.5 User’s Sensory Effect Preference Vocabulary
4.6 Sensor Adaptation Preference Vocabulary
4.7 Conclusion
References
5 Interoperable Virtual World
5.1 Introduction
5.2 Virtual-World Object Metadata
5.2.1 Introduction
5.2.2 Sound and Scent Types
5.2.3 Control Type
5.2.4 Event Type
5.2.5 Behavior Model Type
5.2.6 Identification Type
5.3 Avatar Metadata
5.3.1 Introduction
5.3.2 Appearance Type
5.3.3 Animation Type
5.3.4 Communication Skills Type
5.3.5 Personality Type
5.3.6 Motion Control Type
5.3.7 Haptic Property Type
5.4 Virtual Object Metadata
5.4.1 Introduction
5.4.2 Appearance Type
5.4.3 Animation Type
5.4.4 Virtual-Object Components
5.5 Conclusion
References
6 Common Tools for MPEG-V and MPEG-V Reference SW with Conformance
6.1 Introduction
6.2 Common Types and Tools
6.2.1 Mnemonics for Binary Representations
6.2.2 Common Header for Binary Representations
6.2.3 Basic Data and Other Common Types
6.3 Classification Schemes
6.4 Binary Representations
6.5 Reference Software
6.5.1 Reference Software Based on JAXB
6.5.2 Reference Software for Binary Representation
6.6 Conformance Test
6.7 Conclusion
References
7 Applications of MPEG-V Standard
7.1 Introduction
7.2 Information Adaptation from VW to RW
7.2.1 System Architecture
7.2.2 Instantiation A: 4D Broadcasting/Theater
7.2.3 Instantiation B: Haptic Interaction
7.3 Information Adaptation From the RW into a VW
7.3.1 System Architecture
7.3.2 Instantiation C: Full Motion Control and Navigation of Avatar or Object With Multi-Input Sources
7.3.3 Instantiation D: Facial Expressions and Body Gestures
7.3.4 Instantiation E: Seamless Interaction Between RW and VWs
7.4 Information Exchange Between VWs
7.4.1 System Architecture
7.4.2 Instantiation F: Interoperable VW
References
Terms, Definitions, and Abbreviated Terms
Index

Chapter 2

Adding Sensorial Effects to Media Content


The provision of sensory effects in addition to audiovisual media content has recently gained attention, because richer sensorial stimulation deepens the immersiveness of the user experience. For the successful industrial deployment of multiple sensorial media (MulSeMedia), it is important to provide an easy and efficient means of producing MulSeMedia content. In other words, standard descriptions of sensorial effects (SEs) are one of the key success factors for the MulSeMedia industry. In this chapter, the standard syntax and semantics of MPEG-V, Part 3 for describing such SEs are introduced along with valid instances.

Keywords


Sensorial effects; sensorial effect rendering; sensory effect metadata


2.1 Introduction


MPEG-V, Part 3: Sensory information (ISO/IEC 23005-3) specifies the Sensory Effect Description Language (SEDL) [1], an XML schema-based language that enables one to describe sensorial effects (SEs) such as light, wind, fog, and vibration that trigger human senses. The actual SEs are not part of the SEDL but are defined within the Sensory Effect Vocabulary (SEV) for extensibility and flexibility, allowing each application domain to define its own SEs. A description conforming to the SEDL is referred to as Sensory Effect Metadata (SEM) and may be associated with any type of multimedia content (e.g., movies, music, Web sites, games). The SEM is used to steer actuators such as fans, vibration chairs, and lamps via an appropriate mediation device in order to enhance the user experience. That is, in addition to the audiovisual (AV) content of a movie, for example, the user will also perceive effects such as those described above, giving the user the sensation of being part of the particular media content and resulting in a more immersive user experience. The concept of receiving SEs in addition to AV content is depicted in Figure 2.1.


Figure 2.1 Concept of MPEG-V SEDL [1].
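The overall shape of such a SEM description can be sketched as follows. This is an illustrative sketch only: the namespace URNs and the light-effect attributes follow the style of ISO/IEC 23005-3 as discussed in this chapter, but are not quoted from the standard.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch: a SEM description wrapping a single light effect
     that accompanies AV content. URNs and attribute names are assumptions
     modeled on the SEDL/SEV conventions, not normative syntax. -->
<sedl:SEM xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:sedl="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
          xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS">
  <!-- One sensorial effect: a light, activated at full intensity -->
  <sedl:Effect xsi:type="sev:LightType" activate="true"
               intensity-value="100" intensity-range="0 100"/>
</sedl:SEM>
```

The key point is the split the chapter describes: the wrapper elements come from the SEDL, while the concrete effect type (here a light effect) is drawn from the SEV via xsi:type.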

The media and the corresponding SEM may be obtained from a Digital Versatile Disc (DVD), a Blu-ray Disc (BD), or any type of online service (i.e., download/play or streaming). The media processing engine, also referred to as the adaptation engine, acts as the mediation device and is responsible for playing the actual media resource and the accompanying SEs in a synchronized way, based on the user’s setup for both the media content and the rendering of the SEs. Therefore, the media processing engine may adapt both the media resource and the SEM according to the capabilities of the various rendering devices.

The SEV defines a clear set of actual SEs to be used with the SEDL in an extensible and flexible way. That is, it can be easily extended with new effects or through a derivation of existing effects thanks to the extensibility feature of the XML schema. Furthermore, the effects are defined based on the authors’ (i.e., creators of the SEM) intention independent from the end user’s device setting, as shown in Figure 2.2.


Figure 2.2 Mapping of author’s intentions to SE data and actuator capabilities (ACs) [2].

The sensory effect metadata elements or data types are mapped to commands that control the actuators based on their capabilities. This mapping is usually provided by the virtual-to-real adaptation engine and was deliberately left undefined in the standard, i.e., it is left open to industry competition. It is important to note that there is not necessarily a one-to-one mapping between the elements or data types of the SE data and the ACs. For example, a hot/cold wind effect may be rendered on a single device with two capabilities, i.e., a heater or air conditioner combined with a fan or ventilator.

As shown in Figure 2.3, the SEs can be adjusted into adapted SEs (i.e., defined in MPEG-V, Part 5, as device commands) in accordance with the capabilities of the actuators (ACs, defined in MPEG-V, Part 2) and actuation preferences (APs, defined in MPEG-V, Part 2, as user sensory preferences).


Figure 2.3 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with ACs and user’s APs.

Figure 2.4 shows an example of combining SEs (defined in MPEG-V, Part 3) with sensed information (SI, defined in MPEG-V, Part 5) to generate adapted actuator commands (ACmd, defined in MPEG-V, Part 5). For example, the SE for a scene might be cooling the temperature to 5°C and adding a wind effect at 100% intensity. Assume, however, that the current room temperature is 12°C. It would be unwise to deploy the cooling and wind effects exactly as described in the SE data: the temperature inside the room is already low, and the user may feel uncomfortable with the generated SEs. Therefore, a sensor measures the room temperature, and the adaptation engine generates adapted SEs (i.e., ACmds), for instance a reduced wind effect (20% intensity) and a heating effect (20°C).


Figure 2.4 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with SI.
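The author's intended (pre-adaptation) description for such a scene might look roughly as follows. The type names echo the SEV effects discussed in Section 2.3, but the exact attribute syntax shown here is an illustrative assumption:

```xml
<!-- Illustrative fragment: the author's intended effects for the scene,
     before the adaptation engine moderates them against sensed room data.
     Attribute names are a sketch, not normative syntax. -->
<sedl:GroupOfEffects
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:sedl="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
    xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS">
  <!-- Cooling toward 5 degrees Celsius -->
  <sedl:Effect xsi:type="sev:TemperatureType" activate="true"
               intensity-value="5"/>
  <!-- Wind at full (100%) intensity -->
  <sedl:Effect xsi:type="sev:WindType" activate="true"
               intensity-value="100" intensity-range="0 100"/>
</sedl:GroupOfEffects>
```

It is this intended description, not the adapted commands, that travels with the media; the adaptation engine combines it with the sensed 12°C room temperature to emit the moderated actuator commands of the example.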

This chapter is organized as follows. Section 2.2 describes the details of the SEDL. Section 2.3 presents the SEV, which specifies the data formats used for creating SEs. Section 2.4 presents XML instances using SEDL and SEV. Finally, Section 2.5 concludes the chapter.

2.2 Sensory Effect Description Language


2.2.1 SEDL Structure


The SEDL is an XML-based language, defined by the MPEG-V standard, that provides the basic building blocks with which content providers can author sensory effect metadata.

2.2.2 Base Data Types and Elements of SEDL


There are two base types in the SEDL. The first, SEMBaseAttributes, comprises six base attributes and one base attribute group. The schema definition of SEMBaseAttributes is shown in Table 2.1. The activate attribute describes whether the SE shall be activated.

Table 2.1

Schema definition of SEMBaseAttributes

The duration attribute describes how long the SE is rendered. The fade attribute describes the fade time within which the defined intensity is to be reached. The alt attribute describes an alternative effect, identified by a uniform resource identifier (URI); an alternative effect may be chosen, for example, when the originally intended effect cannot be rendered owing to a lack of devices supporting it. The priority attribute describes the priority of an effect relative to the other effects in the same group of effects that share the same point in time at which they should become available for consumption; a value of 1 indicates the highest priority, and larger values indicate lower priorities. The location attribute describes where, from the user’s perspective, the effect is expected to be received along the X, Y, and Z axes, as depicted in Figure 2.5. A classification scheme that may be used for this purpose is the LocationCS defined in Annex A of ISO/IEC 23005-6. For example, urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway defines the location as left on the X-axis, any position on the Y-axis, and midway on the Z-axis; that is, it describes all effects on the left-midway side of the user.

The SEMAdaptabilityAttributes group contains two attributes related to the adaptability of SEs. The adaptType attribute describes the preferred type of adaptation with the following possible values: strict, i.e., an adaptation by approximation may not be performed; under, i.e., an adaptation by approximation may be performed with a smaller effect value than the one specified; over, i.e., an adaptation by approximation may be performed with a greater effect value than the one specified; and both, i.e., an adaptation by approximation may be performed between the upper and lower bounds specified by adaptRange. The adaptRange attribute describes these upper and lower bounds as percentages.


Figure 2.5 Location model for SEs and reference coordinate system.
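Condensing the above, the two attribute groups can be sketched in XML schema form as follows. This is a simplified illustration (the built-in types and the adaptType enumeration comment are assumptions); the normative definition is the schema shown in Table 2.1:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Sketch of the six base attributes plus the adaptability group -->
  <xs:attributeGroup name="SEMBaseAttributes">
    <xs:attribute name="activate" type="xs:boolean"/>
    <xs:attribute name="duration" type="xs:positiveInteger"/>
    <xs:attribute name="fade" type="xs:positiveInteger"/>
    <xs:attribute name="alt" type="xs:anyURI"/>
    <xs:attribute name="priority" type="xs:positiveInteger"/>
    <!-- A term from a classification scheme such as LocationCS -->
    <xs:attribute name="location" type="xs:anyURI"/>
    <xs:attributeGroup ref="SEMAdaptabilityAttributes"/>
  </xs:attributeGroup>
  <xs:attributeGroup name="SEMAdaptabilityAttributes">
    <!-- Preferred adaptation: strict | under | over | both -->
    <xs:attribute name="adaptType" type="xs:string"/>
    <!-- Upper/lower bound, in percent, for approximation -->
    <xs:attribute name="adaptRange" type="xs:unsignedInt"/>
  </xs:attributeGroup>
</xs:schema>
```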

There are five base elements (Table 2.2), i.e., Declaration, GroupOfEffects, Effect, ReferenceEffect, and Parameter, all extended from the abstract SEMBaseType; they are explained in detail in the following sections...

Publication date (per publisher): 2 April 2015
Language: English
ISBN-10: 0-12-420203-9 / 0124202039
ISBN-13: 978-0-12-420203-0 / 9780124202030
