[semantics-public] Film and Shape languages, what are they?
John Carlson
yottzumm at gmail.com
Mon Jun 22 20:52:05 PDT 2020
Hmm. So I'm thinking there are film and shape semantics, and film and shape
semiotics. Can we think of X3D semantics in terms of shape semantics and
film semantics? And what do we do about semiotics? When I look at my glass,
do I see "something I am drinking from"? Or do I see it topologically? Do I
somehow recall every time I've had ice water from a glass? All of the above?
What causes us to choose one meaning over another? Microtheories?
Time to pull out some linguistics classes...
On Sun, Jun 21, 2020 at 11:38 PM John Carlson <yottzumm at gmail.com> wrote:
> We have very good languages for describing virtual worlds in text or
> binary: VRML, X3D, JSON, ...
>
> Those are combinations of three mechanisms: symbols, objects, and arrays
> (see the sketch after this paragraph). What I would like to discuss is
> describing virtual worlds in film and shape languages; essentially, the
> snake eating its tail. We have seen the recursive/meta aspect of computer
> languages, but do we really have a concept of the recursive/meta aspect
> in film and shape languages?
>
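> Going back to those three mechanisms: as a rough sketch (Python here, and
> only loosely modeled on the X3D JSON encoding, so the key names are my
> assumptions, not the official grammar), a virtual world built from nothing
> but symbols, objects, and arrays might look like:
>
>     # symbols -> dictionary keys and string values
>     # objects -> dictionaries
>     # arrays  -> lists
>     scene = {
>         "Scene": {                                     # object
>             "children": [                              # array
>                 {"Shape": {
>                     "geometry": {"Box": {"size": [2.0, 2.0, 2.0]}},
>                     "appearance": {
>                         "Material": {"diffuseColor": [1.0, 0.0, 0.0]}},
>                 }},
>             ],
>         },
>     }
>
>     import json
>     print(json.dumps(scene, indent=2))                 # back to text
>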
> This was one of the things I wanted to explore at DSVL 2001, but that
> workshop mostly focused on meta-object (and perhaps meta-text) languages;
> of course, it was held at OOPSLA, so that's probably understandable. What
> is meta-film? What is meta-shape? Equations? Are these metas mostly
> ignored by the object and object-language communities?
>
> One thing that comes to mind for film is the storyboard, that is, shapes
> describing the film.
>
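> As a sketch of that idea (the names here are my own invention, not from
> any spec): a storyboard is just an array of shots, each pairing a scene
> or shape with a duration, and "compiling" it yields a timeline.
>
>     from dataclasses import dataclass
>
>     @dataclass
>     class Shot:
>         scene: str        # e.g. a reference to an X3D scene or shape
>         seconds: float
>
>     storyboard = [Shot("establishing_shot.x3d", 3.0),
>                   Shot("closeup_glass.x3d", 1.5)]
>
>     t = 0.0               # compile the storyboard into start times
>     for shot in storyboard:
>         print(f"{t:5.1f}s  {shot.scene}")
>         t += shot.seconds
>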
> Another thing that comes to mind is fractals and procedural generation.
> How can we describe fractals or procedural generation in a drag-and-drop
> fashion? I've seen it done.
>
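> For what it's worth, here is a minimal sketch of procedural shape
> generation, a Koch-curve L-system expanded in Python (my own illustration,
> not tied to any particular tool); a drag-and-drop editor would essentially
> be manipulating the rule table:
>
>     # Koch curve as an L-system: each 'F' (draw forward) is rewritten
>     # into a bumped segment; '+' and '-' are 60-degree turns.
>     rules = {"F": "F+F--F+F"}
>
>     def expand(axiom, rules, depth):
>         for _ in range(depth):
>             axiom = "".join(rules.get(ch, ch) for ch in axiom)
>         return axiom
>
>     print(expand("F", rules, 2))
>     # -> F+F--F+F+F+F--F+F--F+F--F+F+F+F--F+F
>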
> I am primarily interested in generating artifacts, but the other side,
> recognizing artifacts, is also worth examining. There have been strides
> in computer vision with neural networks. Can we convert these models to
> code with something like TransCoder?
>
> We've probably all seen video feedback, and it was featured prominently
> in Douglas Hofstadter's book, "I Am a Strange Loop." Is this the only
> video self-reference we can achieve?
>
> What would shape feedback be? Is it essential to robotic sensing?
>
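> As a toy model of what feedback could mean computationally (my own sketch,
> using only NumPy): video feedback is an image repeatedly fed through a
> transform of itself; shape feedback would substitute geometry for pixels.
>
>     import numpy as np
>
>     def feedback_step(frame):
>         # "Point the camera at the monitor": shrink the frame to half
>         # size, paste it back into the center, and dim it slightly.
>         h, w = frame.shape
>         small = frame[::2, ::2] * 0.9
>         out = frame.copy()
>         out[h//4:h//4 + h//2, w//4:w//4 + w//2] = small
>         return out
>
>     frame = np.random.rand(64, 64)
>     for _ in range(10):        # each pass nests the image inside itself
>         frame = feedback_step(frame)
>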
> What would X3D be like if we had a video encoding for virtual worlds,
> *as input* to the computer? This is where OpenAI is moving with Gym and
> Universe.
>
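> For concreteness, in the Gym loop (2020-era API; this assumes gym is
> installed, and I picked CarRacing-v0 because its observations are raw
> pixels) the rendered video frame literally is the program's input:
>
>     import gym
>
>     env = gym.make("CarRacing-v0")   # observations are RGB frames
>     obs = env.reset()                # numpy array, shape (96, 96, 3)
>     done = False
>     while not done:
>         action = env.action_space.sample()           # random policy
>         obs, reward, done, info = env.step(action)   # frame in, action out
>     env.close()
>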
> What exactly is shape encoding?
>
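> One concrete answer X3D already has is the IndexedFaceSet: a shape encoded
> as a flat array of points plus index lists, with -1 ending each face.
> Written as Python data for readability, a unit square is:
>
>     point = [(0.0, 0.0, 0.0),      # Coordinate point: one (x, y, z)
>              (1.0, 0.0, 0.0),      # per vertex
>              (1.0, 1.0, 0.0),
>              (0.0, 1.0, 0.0)]
>     coordIndex = [0, 1, 2, 3, -1]  # one quad face, terminated by -1
>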
> Just to be clear, I am speaking of a concrete specification rather than
> an abstract specification. I'm asking whether we can translate the X3D
> abstract specs into movies and shapes.
>
> This would be primarily for people with reading difficulties.
>
> John
>