[Sigia-l] 3-D workflows

Listera listera at rcn.com
Wed Jul 10 20:19:02 EDT 2002


"Christopher Fahey [askrom]" wrote:
> ... QTVR is just a 2d image with fancy scroll mechanisms and fancy distortion
> algorithms ...

QTVR can also contain embedded movie objects in-line, and you can certainly
manipulate them in 3D. A QTVR panorama can contain nodes that allow you to
jump from one perspective to another, and a QTVR pano can contain a variable,
independent QT movie in-line and in-perspective. Also, many 3D apps can
create fully or partially synthetic 3D worlds in QTVR; conceptually you could
generate these in real time via user interactivity. So the line is
blurred.
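
For what it's worth, the node-and-hotspot idea reduces to a tiny graph. A
toy sketch (my own conceptual model, not the actual QuickTime API):

    class PanoNode:
        """A panorama with named hotspots linking to other panoramas."""
        def __init__(self, name):
            self.name = name
            self.hotspots = {}              # hotspot label -> destination node

        def link(self, label, node):
            self.hotspots[label] = node

    lobby, atrium = PanoNode("lobby"), PanoNode("atrium")
    lobby.link("north door", atrium)
    atrium.link("south door", lobby)

    # "Jumping" to another perspective is just following a link.
    print(lobby.hotspots["north door"].name)   # atrium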
  
> The question is this: Is the globe a 2D map projected onto a sphere, or is it
> a 3D dataset like a CAD model.

Again, technically speaking, a 2D texture map could be animated (even via
user interactivity) and have changing depth. For the viewer, the end result
may be indistinguishable. In the end, *everything* gets rendered as 2D on
the monitor. Even 3D-looking holograms and some volumetric displays are
built from slices of 2D planes.
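
To make that concrete, here is a minimal perspective-projection sketch (my
own illustration, not any particular renderer) of how a 3D point ends up as
a 2D screen coordinate:

    def project(point, focal_length=1.0):
        """Project a 3D point (x, y, z) onto the z = focal_length plane."""
        x, y, z = point
        if z <= 0:
            raise ValueError("point is behind the camera")
        # The perspective divide collapses depth into 2D.
        return (focal_length * x / z, focal_length * y / z)

    # Two points on the same line of sight land on the same 2D spot,
    # which is exactly why well-faked depth can pass for "real" 3D.
    print(project((1.0, 2.0, 4.0)))   # (0.25, 0.5)
    print(project((2.0, 4.0, 8.0)))   # (0.25, 0.5)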

I have also seen a demo of a system that can take a 2D picture, create a 3D
model out of it, and provide a walkthrough, all in real time. Sort of like
the Carrara product (whoever has the rights to it now) on steroids.
 
> Causing a 3D model of a tree to rotate is semantically the same as
> walking around the tree.

Only in isolation. And therein lie many of the issues in 3D navigation.

Here's the problem (and the reason behind my questions): the spinning globe
has its own axis to revolve around. The viewer has his own axis. The globe
and the viewer can also revolve around another, arbitrary axis, in unison or
otherwise. When you put more objects into the scene, the permutations of
POVs get very complex.
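
To sketch what I mean (hypothetical code, not any shipping system): each
axis is just a rotation matrix, every extra axis multiplies into the chain,
and the order of multiplication changes the result:

    import math

    def rot_y(theta):
        """Rotation about the y axis (say, the globe spinning in place)."""
        c, s = math.cos(theta), math.sin(theta)
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

    def rot_x(theta):
        """Rotation about the x axis (say, the viewer tilting)."""
        c, s = math.cos(theta), math.sin(theta)
        return [[1, 0, 0], [0, c, -s], [0, s, c]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    # Rotations about different axes do not commute, so each added
    # object/axis multiplies the distinct viewpoints to keep track of.
    globe, viewer = rot_y(0.5), rot_x(0.3)
    print(matmul(globe, viewer) == matmul(viewer, globe))   # False

Rotating the model and orbiting the viewer give the same picture only while
that chain has a single term; that's the "only in isolation" above.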

3D navigation systems (which ideally should be able to reveal a lot in
terms of absolute and relative relationships among objects in a system)
cannot really render these arbitrarily complex relationships in real time,
given our current technology.

It's one thing to take a single object and revolve it. It's something
entirely different to suddenly attempt to render *all* the potential
relationships among objects that can come into view at any given time. And
obviously, those more distant objects have relationships too, etc.
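
Even the crude arithmetic is sobering; the pairwise relationships alone
grow quadratically with the object count (toy numbers, purely for
illustration):

    from math import comb

    # Pairwise relationships alone grow quadratically with object count.
    for n in (10, 100, 1000):
        print(n, "objects ->", comb(n, 2), "pairs")
    # 10 objects -> 45 pairs
    # 100 objects -> 4950 pairs
    # 1000 objects -> 499500 pairs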

> The differentiator, then, is the ability to change perspective on the dataset.

Yes and no. Since currently everything gets rendered as 2D on a monitor at
any given time, changing the perspective on the dataset and rendering it on
screen one frame at a time may be doable. What's not doable is *continuously*
updating it as you change the perspective. The amount of data that needs to
be pulled out of a database, travel over the network, and be computed and
rasterized on the screen is beyond what we have today. Doing the simple
stuff is easy; doing the complex stuff is nearly impossible.
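
A back-of-the-envelope sketch of why (every number here is made up purely
for illustration):

    # Naive continuous updates, with hypothetical numbers throughout.
    objects_in_view = 10_000        # assumed scene complexity
    bytes_per_object = 2_000        # assumed geometry + attribute payload
    frames_per_sec = 30             # assumed minimum for smooth interaction

    bytes_per_sec = objects_in_view * bytes_per_object * frames_per_sec
    print(bytes_per_sec / 1e6, "MB/s")   # 600.0 MB/s, before culling/caching

Culling and caching chip away at that, but the naive requirement shows
where the wall is.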

> both of your examples are clearly intended to blur the distinction between 2D
> and 3D datasets, and hence to trip me up!

I'd never do that :-)

> I think we can agree that there is a significant qualitative difference
> between an isometric projection of a building that you can only look at and a
> scale model of a building that you can stick your head into and peek around
> the corners.

Of course. But only as a function of time. :-)

Best,

Ziya




