Revision as of 12:33, 8 June 2016

X3D V4.0 Open Workshop / Meeting June 8th 2016


Topics

  • What level of X3D integration into HTML5 do we want?
    • Do we want to be fully integrated like SVG?
  • Do we want/need a DOM spec? If so:
    • Which DOM version should it be based on?
    • Do we want to fully support all DOM/HTML features?
  • Do we want to maximize the backwards compatibility of V4.0 with V3.3? Or break away completely?
    • Do we want to retain SAI?
  • What features do we want? For example,
    • How is animation to be handled? The X3D way of TimeSensor and ROUTEs, or an HTML way, such as CSS3 animations, or else JavaScript?
    • How is user interaction to be handled? The X3D way of Sensors, or the HTML way with event handlers?
    • Do we need any different nodes? One example might be a mesh node?
    • Do we want Scripts and Prototypes in HTML5?
    • How do we want to handle styling?
  • What profile(s) do we need for HTML?


Attendees and contributors

E-mail contributors: Don Brutzman, Leonard Daly, Andreas Plesch

Meeting Attendees:

Apologies: Don Brutzman, Andreas Plesch


Prior e-mail contributions:

Contribution 1

I think the bigger question is what should be done with X3D: is X3D solely going to exist within HTML, or will X3D have a separate life both inside and outside of HTML?

If the life is solely within HTML, then the questions below become inclusive of all X3D. If there are separate existences, then the first question is what is the cross-compatibility between X3D/HTML and X3D/other.

Contribution 2

Relevant working-group references follow.  A lot of excellent work has been accomplished already.

	X3D Version 4
	http://www.web3d.org/x3d4

	Web3D Consortium Standards Strategy
	http://www.web3d.org/strategy

	X3D Graphics Standards: Specification Relationships
	http://www.web3d.org/specifications/X3dSpecificationRelationships.png

	X3D Version 4.0 Development
	http://www.web3d.org/wiki/index.php/X3D_version_4.0_Development

A 5-10 minute quicklook discussion across these resources might help.  We are pretty far up X3D4 Mountain already!

The posted discussion-topics list is a good start for renewed activity, and an important way to keep track of everyone's many valuable ideas.  Suggestion: create some kind of topics-discussion page, probably easily linked off the preceding wiki page.

My general inputs for each of these topics are guiding questions:

a. What do the HTML5/DOM/CSS/SVG/MathML specifications actually say?

b. How is cross-language HTML page integration actually accomplished, as shown in best practices by key exemplars?

c. What is the minimal addition needed to achieve a given technical goal using current X3D capabilities?

Editorial observation: the word "want" appears 9 times in this list...  Understandable from common usage, but not a very good way to achieve consensus over a long-term effort.  Also not very useful for measuring successful resolution.

Pragmatic engineering rephrase: "what problem are you trying to fix?"

Over 20 years of successful working-group + community efforts can guide us in these endeavors - we know how to succeed together.  An effective path for building consensus is to:
- define goals that are illustrated by use cases,
- derive technical requirements,
- perform gap analysis, and then
- execute loosely coordinated task accomplishment according to each participant's priorities.

How to execute each specification addition: write prose, create examples, implement, evaluate. Repeat until done, topic by topic.


Contribution 3

The discussion on introducing an id field seemed to point towards the need for fuller integration, in the sense that it is difficult to isolate features. It may be necessary to define an X3D DOM similar to the SVG DOM, with the corresponding interfaces. SVG is very successful on the web, but it took a long time to arrive there.

x3dom has a dual-graph approach: there is the X3D graph and, in parallel, the page DOM graph, which are kept in sync but are both fully populated. Johannes Behr would know better how to explain the concept.
It looks like FHG decided that x3dom is now considered community (only?) supported. This probably means it will fall out of sync as newer web browsers arrive or WebGL is updated.

I explored A-Frame a bit more. It will be popular for VR. It is still in flux and evolves rapidly. The developers (Mozilla) focus on its basic architecture (which is non-hierarchical, a composable component system) and expect users to use JavaScript to develop more advanced functionality (in the form of shareable components). So it is quite different: fun for developers and, for basic scenes, easy for consumers. Since most mobile VR content at this point is basic (mostly video spheres and panos), it is a good solution for many.

(As a test I also implemented IndexedFaceSet as an A-Frame component, and it was pretty easy, after learning some Three.js. So it would be possible to have X3D geometry nodes on top of A-Frame. Protos, events and ROUTEs are another matter, but may not be impossible.)
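The core of such a component can be sketched in plain JavaScript (this is not the actual component mentioned above, just an illustration of the conversion involved): an X3D IndexedFaceSet lists per-face vertex indices terminated by -1, while Three.js wants a flat triangle index list, so each face is fan-triangulated.

```javascript
// Fan-triangulate an X3D IndexedFaceSet coordIndex, where -1 ends each face,
// into a flat triangle index list of the kind Three.js BufferGeometry expects.
function triangulateCoordIndex(coordIndex) {
  const triangles = [];
  let face = [];
  const emitFan = () => {
    // Triangle fan anchored at the face's first vertex.
    for (let i = 1; i + 1 < face.length; i++) {
      triangles.push(face[0], face[i], face[i + 1]);
    }
    face = [];
  };
  for (const idx of coordIndex) {
    if (idx === -1) emitFan();   // face terminator
    else face.push(idx);
  }
  emitFan();                     // handle a trailing face without a final -1
  return triangles;
}

// A quad [0,1,2,3] becomes two triangles sharing vertex 0.
console.log(triangulateCoordIndex([0, 1, 2, 3, -1])); // [0, 1, 2, 0, 2, 3]
```

Fan triangulation only handles convex faces, which is also the assumption the X3D `convex` field defaults to.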

There is still space for X3D as a more permanent, and optionally sophisticated, 3D content format on the web.

Event system: My limited understanding is that on a web page, the browser emits events when certain things happen. Custom events can also be emitted by JavaScript code (via DOM functions) for any purpose. (All?) events have a timestamp and can have data attached. Events can then be listened to. There is no restriction on listening, e.g. all existing events are available to any listener. A listener then invokes a handler which does something related to the event. JavaScript code can consume, cancel, or relay events as needed (via DOM functions). It is not unusual for many events to be managed on a web page. Events can be used to guarantee that there is a sequence of processing.
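The mechanics described above can be sketched with the standard EventTarget API. In a browser one would typically dispatch a CustomEvent with a `detail` payload on a DOM element; here an Event subclass carries the data so the example also runs under Node.js, and the event name "x3d-touch" is purely illustrative.

```javascript
// Sketch of the page-event mechanics: emit a timestamped event with data
// attached, and let any listener observe it via the EventTarget API.
class DataEvent extends Event {
  constructor(type, detail) {
    super(type);
    this.detail = detail;        // arbitrary data attached to the event
  }
}

const bus = new EventTarget();
const received = [];

// Any number of listeners may observe the same event type.
bus.addEventListener("x3d-touch", (ev) => {
  received.push({ at: ev.timeStamp, data: ev.detail });
});

bus.dispatchEvent(new DataEvent("x3d-touch", { node: "TouchSensor_1" }));
console.log(received.length);         // 1
console.log(received[0].data.node);   // "TouchSensor_1"
```

Dispatch is synchronous: the handler has run before dispatchEvent returns, which is one way a page can rely on a sequence of processing.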

So how does the X3D event system relate? There is a cascade, and directivity. How long does an event live? One frame? Until it has fully cascaded through the scene graph?

Since x3dom and Cobweb are currently the only options, from a practical standpoint a question to ask may be this: what is needed to make x3dom and Cobweb easy to use and interact with on a web page? Typically, the web page would provide a UI and the connection to databases or other sources of data, while the X3D scene is responsible for rendering and interacting with the 3D content. For VR, the UI would need to be in the scene, but connections and data sources would still be handled by the web page.

Cobweb in effect allows use of the defined SAI functions. Is it possible to define a wrapper around these functions to allow a DOM-like API (createElement, element.setAttribute, ..., element = null)? It may be, since they are similar anyway, and it would go a long way. But it still would not be sufficient to let other JavaScript libraries such as D3.js or React control and modify a scene, since they would expect X3D nodes to be real DOM elements.
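The wrapper idea can be sketched as follows. Note the `sai` object below is a stand-in for an SAI implementation, and its function names (`createNode`, `setFieldValue`) are illustrative only, not Cobweb's actual API; the point is only that a DOM-flavored facade can map onto SAI-style calls.

```javascript
// Stand-in for an SAI implementation (hypothetical function names).
const sai = {
  createNode(type) {                 // SAI-style node creation
    return { type, fields: {} };
  },
  setFieldValue(node, name, value) { // SAI-style field access
    node.fields[name] = value;
  },
};

// DOM-flavored facade: createElement / setAttribute map onto SAI calls.
function createElement(tagName) {
  const node = sai.createNode(tagName);
  return {
    tagName,
    setAttribute(name, value) { sai.setFieldValue(node, name, value); },
    getAttribute(name) { return node.fields[name]; },
  };
}

const mat = createElement("Material");
mat.setAttribute("diffuseColor", "1 0 0");
console.log(mat.getAttribute("diffuseColor")); // "1 0 0"
```

As noted above, objects like `mat` would still not be real DOM elements, so libraries such as D3.js or React could not manage them.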

VR: A current issue is control devices. It would probably be useful to go over the spec and see where there is an implicit assumption that mouse or keyboard input is available. VR HMDs have different controls (head position and orientation (pose), one button), and hand-held controllers (gamepads, special sticks with their own positions/orientations) or the tracked hands themselves are becoming more popular. In VR, you do want to use your hands in some way.

Perhaps it makes sense to have <RightHand/> and <LeftHand/> nodes paralleling <Viewpoint/>, with position/orientation fields which can be routed to Transforms to manipulate objects? How a browser would feed the <Hand> nodes would be up to the browser. InstantReality has a generic IOSensor.
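The routing idea can be sketched in JavaScript. Everything here is hypothetical: the Hand node, its field names, and the per-frame routing step are illustrations of the proposal above, not part of any spec; the browser would be responsible for filling the hand's pose from its tracking input.

```javascript
// Hypothetical <RightHand> node: the browser writes the tracked pose here.
const rightHand = { position: [0, 0, 0], orientation: [0, 1, 0, 0] };

// Transform carrying a grabbed object.
const grabbedTransform = { translation: [0, 0, 0], rotation: [0, 1, 0, 0] };

// ROUTE rightHand.position TO grabbedTransform.translation (and likewise
// orientation -> rotation), applied once per frame by the browser.
function routeHandToTransform(hand, transform) {
  transform.translation = hand.position.slice();
  transform.rotation = hand.orientation.slice();
}

// Simulate one frame of tracking input feeding the Hand node.
rightHand.position = [0.2, 1.4, -0.3];
rightHand.orientation = [0, 1, 0, 1.57];
routeHandToTransform(rightHand, grabbedTransform);
console.log(grabbedTransform.translation); // [0.2, 1.4, -0.3]
```

This keeps the device-specific part (how the pose is obtained) entirely on the browser side, in the spirit of InstantReality's generic IOSensor.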