Metaverse offline drafting, Oct 11, 2022, North America/Europe-friendly time (17:00-18:30 CEST)

Minutes by: Erik Guttman, Samsung

Attendees list: (incomplete, but included the following) - Ki-Dong Lee (LG Electronics), Mona Mustafa (Apple), Quang Ly (Convida Wireless), Viktor Kueh (Huawei), Gregg Larsen (Xylen), Egemen Cetinkaya (Verizon Wireless)

We agreed on the agenda.

1 Terminology

2 Open issues

3 Next Steps

Please see the input document https://ftp.3gpp.org/Email_Discussions/SA1/S1-223zzz-22856-pCR-Terminology-03.docx
Please see the output document https://ftp.3gpp.org/Email_Discussions/SA1/S1-223zzz-22856-pCR-Terminology-04.docx

The -04 version takes into account the first and second calls. Please review it and send comments either to the reflector or to me directly.

Action items identified

These actions were identified in the first call and confirmed in the second call.

Comments

Ki-Dong discussed the term ‘pose’ and asked whether a pose necessarily refers to a person.

In the course of the discussion it became clear that a pose covers both the orientation of the user’s body and the relative positions of the body’s parts (extremities, etc.), and that poses also apply to the objects presented to the user. For the user, the pose is what a metaverse service uses to determine how to represent the user, how the user is interacting with the service, etc. For the objects that the user interacts with, the pose is important to determine how to represent the object to the user. We noted that a pose can belong to any kind of articulated thing - a person, an animal, an architectural feature (e.g. a door can be open or closed), or even a virtual being (ally, adversary, etc.) in a VR game.
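To make the discussion concrete, the following is a minimal sketch in Python of what a pose along these lines could look like as a data structure. It is purely illustrative; all names and fields are assumptions for this example, not proposed TR text:

    # Illustrative only: a 'pose' as discussed above combines an overall
    # position and orientation with the relative placement of the parts
    # of any articulated thing (a person, an animal, a door, a virtual
    # being). Names and fields are hypothetical, not TR terminology.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    Vec3 = Tuple[float, float, float]         # x, y, z position
    Quat = Tuple[float, float, float, float]  # w, x, y, z orientation

    @dataclass
    class Pose:
        position: Vec3      # where the thing is located in the scene
        orientation: Quat   # which way the thing as a whole is facing
        # Relative pose of each articulated part (limb, joint, door
        # leaf, ...), expressed relative to its parent, keyed by name.
        parts: Dict[str, Tuple[Vec3, Quat]] = field(default_factory=dict)

    # Not limited to people: a door whose one articulated part (the leaf)
    # is rotated 90 degrees about the vertical axis, i.e. the door is open.
    door_open = Pose(
        position=(2.0, 0.0, 0.0),
        orientation=(1.0, 0.0, 0.0, 0.0),     # identity rotation
        parts={'leaf': ((0.0, 0.0, 0.0), (0.7071, 0.0, 0.7071, 0.0))},
    )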

Mona suggested that a figure be provided to help readers understand the new terminology, as it is challenging and very abstract. This could be provided as a discussion paper, though Erik was not sure it makes sense to include a ‘model’ in the TR.

Egemen asked about the list of media following the definition of metaverse media: shouldn’t it include haptics? We agreed that this is a very good idea given the scope of the study. What about olfactory media? Erik mentioned that there is currently no clear way to support this in ‘display devices,’ so it seems out of scope. Another medium that is out of scope for the Rel-19 study is the hologram, because the technology is not sufficiently advanced in development for there to be any possibility of standardizing it.

Ki-Dong discussed the utility of coming up with ‘ambitious KPIs’ for such things as advanced XR and hologram scenarios. Since this has already been done in other studies and features, it may not be useful to press for ‘super high capacity’ KPIs in Rel-19. Erik pointed out that the ‘metaverse’ as a means to provide interactive media is an old goal; what distinguishes this study is the need to provide diverse media to multiple users at the same time, so that the interactive aspects are acceptable (at the service level). This ‘acceptability’ need not mean ‘realistic’ media: the constraints will differ depending on the service. Some metaverse services may need only a relatively low-bandwidth data service, e.g. to deliver metadata that is rendered locally.