LightWave 2018


Export Scene As VRML97

The VRML97 Exporter plugin (File > Export > Export Scene as VRML97) creates a VRML97 World based on the current scene. The VRML output complies with the ISO-VRML97 specification. The objects in the scene may be saved as separate files into the Content Directory or an alternate path. The following list shows some highlights:

  • Accurate Translation
  • Keyframed hierarchical animation
  • Light intensity envelopes, including ambient
  • Non-linear fog
  • Color image texture mapping using projection or UV maps
  • Solid, non-linear gradient and image backgrounds
  • Support for SkyTracer warp image environments
  • Particle animation with single-point-polygon object to PointSet node conversion
  • Two-point-polygon object to IndexedLineSet node conversion
  • SubPatch object morph capture (for capturing morph, displacement map and bone effects on SubPatch objects).
  • High-performance output
  • 3D Sounds
  • Level-of-detail object replacement animation
  • Object instancing
  • Vertex color and lighting support
  • Multiple custom viewpoints
  • Custom VRML nodes
  • Touch activated behaviors
  • Viewer proximity activated behaviors
  • Object visibility activated behaviors
  • Objects output as prototypes (PROTO) definitions (optional)
  • Scene object ignore
  • Standard object viewpoints (optional)
  • Optional embedded objects for single file scene output!
  • Optional lowercase conversion for embedded object/image filenames
  • Direct avatar navigation speed control
  • Improved compliance for export to VRML97 editors, including conversion of illegal VRML97 names (like 2Legs or My Light), and reflection of illegal negative scaling
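Many of these features map directly onto standard VRML97 nodes. For orientation, a minimal world of the kind the exporter writes might look like this sketch (all node values are illustrative, not literal exporter output):

```vrml
#VRML V2.0 utf8
# Minimal world sketch -- values are illustrative
WorldInfo { title "MyScene" info [ "Author comment" ] }
NavigationInfo { type [ "WALK", "ANY" ] headlight TRUE speed 2 }
Viewpoint { position 0 1.6 10 description "Camera" }
Transform {
  translation 0 1 0
  children [
    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.8 0.8 0.8 }
      }
      geometry Box { size 1 1 1 }
    }
  ]
}
```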

VRML Creation Settings

Output .wrl is the file path for the VRML97 World.

Author is comment text to embed in the file.

Use Prototypes defines objects once and instances them, which lets the VRML scene use objects more efficiently. Some older importers may not support prototypes, but they are required for morph capture.
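In VRML97 terms, a prototype wraps geometry in a PROTO definition that can then be instanced any number of times. A minimal sketch (node and field names here are hypothetical, not the exporter's actual naming):

```vrml
#VRML V2.0 utf8
# Hypothetical PROTO: define the geometry once, instance it many times
PROTO ColoredBox [ field SFColor boxColor 0.8 0.8 0.8 ] {
  Shape {
    appearance Appearance {
      material Material { diffuseColor IS boxColor }
    }
    geometry Box { }
  }
}
ColoredBox { boxColor 1 0 0 }   # red instance
ColoredBox { boxColor 0 0 1 }   # blue instance
```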

When Lowercase Filenames is enabled, all filenames used in the file are converted to lowercase. This can be helpful on UNIX-based Web servers, where filenames are case sensitive and links with mismatched case will fail.

Use Embed Objects to include the geometry for all meshes in the main VRML97 World file. This may be more convenient, but for complex worlds, or reusing objects, it is less efficient. Using external object files allows the main world to load faster, and display bounding boxes while the objects are loaded. This option must be off for morphing objects as well as LoD objects - loading them all at once would defeat their purpose!

If Overwrite Objects is enabled and the Embed Objects option is not used, external object files will be created for objects in the scene. If the objects already exist, this option must be enabled to overwrite the objects, thus updating surface or morph changes.

Local .wrl Path is the file path on your machine where external VRML objects will be found and/or saved. This defaults to the current LightWave Content Directory.

VRML Object URL is the URL where browsers will search for external objects; it should be the Web equivalent of the local path (e.g., http://<your server>/vrml_objects/).

Text entered into the Texture URN field will be prepended to texture map image filenames, as an alternate texture location. This should facilitate work with libraries like the Universal Media textures. This information, when specified, will appear in addition to the regular URL elements.
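In the output this typically appears as an ordered url list on the texture node, which browsers try in sequence. A sketch, with hypothetical URN and URL values:

```vrml
ImageTexture {
  url [
    "urn:media:textures/wood.jpg",          # hypothetical URN entry
    "http://www.example.com/images/wood.jpg",
    "images/wood.jpg"                       # relative fallback
  ]
}
```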

On the Scene Item pop-up menu, select a scene element to which you want to apply the settings on this tab.

The Sensor Type is the sensor used to start the item’s animation.

For some sensor types, like Proximity, a distance range is required. When the viewer approaches the item within the Range, the animation is triggered.
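In VRML97 terms, a proximity trigger becomes a ProximitySensor whose box-shaped region is derived from the Range. A sketch of the kind of wiring involved (DEF names are hypothetical):

```vrml
DEF NearBox ProximitySensor { center 0 0 0 size 10 10 10 }
DEF Clock TimeSensor { cycleInterval 2 }
# Entering the region starts the animation clock
ROUTE NearBox.enterTime TO Clock.startTime
```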

Alternate Trigger is an alternate item to serve as the animation trigger for this item.

On the Object pop-up menu, select an object whose settings will be edited on this tab.

The Ignore Object option will exclude a selected object and its children objects from export.

Use Attach Sound to add a sound effect, triggered with the selected object’s animation. Enter the URL for the audio file triggered in the URL field. You can also set the volume and whether the sound should be looped once it has started.

The Record Morph option saves a Morph Object - a special animated Proto object - in place of the standard external object files. This requires that the exporter step through the animation and capture the deformed mesh at different times. The deformed positions are used in a CoordinateInterpolator node hidden in the morph object. This currently works for SubPatch objects.

First Frame is the starting frame for the Morph Object animation. Last Frame is the final frame in the morph capture for this object. Frame Step is the number of frames between captured morph keys. Making this too small results in huge objects; making it too large results in an animation that is not smooth or points with motion that is too linear.
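For example, with First Frame 0, Last Frame 30, and Frame Step 10, keys are captured at fractions 0, 1/3, 2/3, and 1 of the animation cycle. The captured meshes drive a CoordinateInterpolator roughly like this sketch (point values and the Clock/Pts names are hypothetical; each key carries one full copy of the mesh's points):

```vrml
DEF Morph CoordinateInterpolator {
  key [ 0, 0.333, 0.667, 1 ]
  # One full set of mesh points per key (a three-point mesh here)
  keyValue [
    0 0 0,    1 0 0,    0 1 0,
    0 0.2 0,  1 0.1 0,  0 1.1 0,
    0 0.4 0,  1 0.2 0,  0 1.2 0,
    0 0 0,    1 0 0,    0 1 0
  ]
}
# Clock is a TimeSensor and Pts a Coordinate node DEF'd elsewhere
ROUTE Clock.fraction_changed TO Morph.set_fraction
ROUTE Morph.value_changed TO Pts.set_point
```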

Enable Loop to repeat the morph animation, once it has been triggered.

Enable the AutoStart option to make the animation start as soon as the world is loaded.

Use the Navigation Mode pop-up menu to set the initial navigation mode for Web browsers. Enable Headlight for good defaults in dark places.

Standard Viewpoints creates extra Viewpoint nodes (top, left, etc.) for scene and external objects.

Avatar Size lets the browser set appropriate movement for the dimensions of your world.
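Navigation Mode, Headlight, and Avatar Size all correspond to fields of the world's NavigationInfo node, roughly as in this sketch (values illustrative):

```vrml
NavigationInfo {
  type [ "WALK", "ANY" ]          # initial navigation mode
  speed 2.0                       # avatar navigation speed
  headlight TRUE
  avatarSize [ 0.25, 1.6, 0.75 ]  # collision radius, eye height, step height
}
```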

Global Light Scale globally scales all light intensities.

Environment Images are warp images generated by SkyTracer. These map nicely to VRML’s idea of environment mapping. Enter only the basename portion of the image files. (For example, if you had skyWarp_back.jpg, skyWarp_front.jpg, etc., you would enter skyWarp.) Note that any panoramic images should be compatible, provided they are renamed to match the SkyTracer file naming convention.
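The images end up as the directional URL fields of a VRML97 Background node, along the lines of this sketch (using the hypothetical basename skyWarp; the exact filename suffixes follow SkyTracer's naming convention):

```vrml
Background {
  frontUrl  "skyWarp_front.jpg"
  backUrl   "skyWarp_back.jpg"
  leftUrl   "skyWarp_left.jpg"
  rightUrl  "skyWarp_right.jpg"
  topUrl    "skyWarp_top.jpg"
  bottomUrl "skyWarp_bottom.jpg"
}
```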

The text in the Image URN field will be prepended to the image filename and added to the list of URLs for the environment image.

So What is VRML?

VRML, also known as ISO-VRML97 (ISO/IEC 14772-1:1997), stands for Virtual Reality Modeling Language. It is a standard for describing 3D objects and scenes via the Internet.

Like HTML-based web pages, VRML worlds can contain links to remote files. However, rather than using text or images for links, VRML uses 3D objects. As a result, the Web browser for VRML resembles a 3D animation program or video game more than a word processing program.

VRML worlds can be embedded in HTML pages and vice versa. VRML models are based on either primitives, like spheres, cubes, and cones, or, more likely, sets of points and polygons. Since the latter is basically the approach used by LightWave 3D’s polygonal models, there is a pretty good match between LightWave scenes and VRML worlds.

Before you can view any of your VRML creations, you’ll need to get a VRML97 browser. The VRML files produced by LightWave are text files that follow LightWave’s style of separate object and scene files. This is not a requirement of VRML, but a powerful feature that lets a VRML scene include objects from different files, even from some remote library.

These external objects in the scene file consist of a file URL, a bounding box, and a set of position, rotation and scaling transformations. The bounding box information is used by browsers to render stand-ins while the objects are loaded.

VRML scenes also include multiple point lights, directional lights, and spot lights with adjustable cones. The VRML equivalent of the LightWave camera is a viewpoint. The exporter will add a named viewpoint for each camera in the LightWave scene, which browsers can use to jump between points of interest or standard views. In addition, VRML objects created by LightWave may include a set of standard viewpoints for the object.


Objects in your LightWave scene that have keyframes in any motion channels will be given linear motion keys in the VRML file, through PositionInterpolator and OrientationInterpolator nodes.

The Pre Behaviour and Post Behaviour set for the channels in the LightWave motion has a critical influence on the VRML behaviour of an object. If the Pre behaviour is set to Repeat, the motion will begin when the world is loaded and keep on playing. Otherwise the motion will begin when the item is triggered. If the Post behaviour is set to Repeat, the animation will loop until re-triggered, otherwise it will stop after playing.

The default triggering is a click (TouchSensor) on the object that causes the animation to run from the beginning. Currently, the TouchSensor switch is placed on the highest-level animated object in a hierarchy, and triggers the animation of all the children simultaneously (as one would expect).
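The wiring behind this default trigger can be sketched as follows: a TouchSensor placed alongside the geometry starts a TimeSensor, which in turn drives the interpolators (DEF names and key values are hypothetical):

```vrml
DEF Mover Transform {
  children [
    DEF Touch TouchSensor { }
    Shape { geometry Box { } }
  ]
}
DEF Clock TimeSensor { cycleInterval 3 loop FALSE }
DEF PosInt PositionInterpolator {
  key [ 0, 1 ]
  keyValue [ 0 0 0,  0 2 0 ]
}
# Click starts the clock; the clock drives the motion
ROUTE Touch.touchTime TO Clock.startTime
ROUTE Clock.fraction_changed TO PosInt.set_fraction
ROUTE PosInt.value_changed TO Mover.set_translation
```

With the Post behaviour set to Repeat, the TimeSensor's loop field would be TRUE instead, so the animation cycles until re-triggered.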

Morphing in VRML uses a CoordinateInterpolator node. The node is part of the Proto in the object file, if morph data has been captured. For this reason, Prototypes should be enabled and embedded objects disabled for morphing worlds.


Double-sided surfaces are not supported in VRML97. Thus LightWave objects with polygons whose surfaces are double-sided are translated as if they weren’t double-sided. VRML objects that seem to be missing polygons may actually have double-sided surfaces that need to be either flipped or aligned in Modeler. If the surface is truly meant to be double-sided, you will need to model the geometry with double-sided polygons.

If your model has a texture map image associated with it (color only, not diffuse, specular, etc.), there are a few tricks that can minimise the nuisance of hand-editing your VRML models. Since some browsers will have to load the image named in the object, that image name, saved in the LightWave object, is critical.

It pays to use LightWave’s Content Directory system properly, so that the image path will be relative to that content directory (i.e., images\wood.jpg rather than C:\NewTek\images\wood.jpg). You may also want to move the image to the Content Directory so that the name in the object will have no path, and browsers will seek the image in the same directory as the object.

In any event, wherever the VRML object finally resides, you will want a matching directory hierarchy where the browser will find the image or you can just edit the VRML file.

Another image issue is that of file format. JPEG and GIF images are almost universally supported on the Web, but the PNG format is gaining acceptance as a modernised, yet unencumbered, replacement for GIF. JPEG images are nice and small, and compression artifacts should be virtually invisible at Web/VRML resolutions. If you have nice high quality texture images for your rendering work and want VRML versions, make smaller JPEG versions of the images for the Web. Large textures may be limited by the browser’s rendering engine in most cases anyway. When you install the VRML model, just use the smaller JPEG image or edit the VRML file.

LightWave VRML Implementation

The organization of LightWave’s VRML object output follows that of LightWave’s own object format. A list of XYZ coordinate triples describes the vertices in the object. For each surface, there is also an IndexedFaceSet node that holds the polygons with that surface, described as a number for each point in the polygon, which refers to an entry in the main list of point coordinates. There may also be an IndexedLineSet node or a PointSet node containing any two-point and one-point polygons.
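Schematically, a two-surface object comes out along these lines, with both IndexedFaceSet nodes indexing into one shared Coordinate node (surface names and values are illustrative):

```vrml
Shape {   # surface "Red"
  appearance Appearance { material Material { diffuseColor 0.8 0.2 0.2 } }
  geometry IndexedFaceSet {
    coord DEF Pts Coordinate {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0,  0 0 1,  1 0 1 ]
    }
    coordIndex [ 0, 1, 2, 3, -1 ]   # one quad; -1 ends the polygon
  }
}
Shape {   # surface "Blue" reuses the same point list
  appearance Appearance { material Material { diffuseColor 0.2 0.2 0.8 } }
  geometry IndexedFaceSet {
    coord USE Pts
    coordIndex [ 0, 1, 5, 4, -1 ]
  }
}
```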

If the original LightWave object had a color texture map image, there will be an image file name and a set of texture coordinates. Texture coordinates, also known as UV coordinates, are 2D pixel positions in an image. They describe how the image lies on the 3D surface by pinning certain pixels to each polygon’s vertices. These values can be calculated from LightWave’s mapping and texture size settings.

In the case of planar mapping, U and V are simply X and Y (for Z-axis planar). Spherical mapping yields U and V coordinates somewhat analogous to longitude and latitude, with the U values bunching up at the poles. Cylindrical mapping uses the U of the spherical case, while V is the coordinate lying along the texture axis. If the LightWave texturing already uses UV mapping, those coordinates are used directly; since VRML texture coordinates are assigned in a per-polygon fashion, discontinuous UVs are preserved.
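The resulting values appear in a TextureCoordinate node, with a parallel texCoordIndex assigning a UV pair to each polygon vertex; this per-polygon assignment is what allows discontinuous UVs across shared points. A sketch (coordinates and image name illustrative):

```vrml
Shape {
  appearance Appearance { texture ImageTexture { url "images/wood.jpg" } }
  geometry IndexedFaceSet {
    coord Coordinate { point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ] }
    coordIndex    [ 0, 1, 2, 3, -1 ]
    texCoord TextureCoordinate { point [ 0 0,  1 0,  1 1,  0 1 ] }
    # One UV index per polygon vertex, parallel to coordIndex
    texCoordIndex [ 0, 1, 2, 3, -1 ]
  }
}
```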

The entire object may be embedded in a VRML Anchor, which makes it an active link on the Web. If you supply a URL for the object when you create it, then anytime that object appears in a scene, it will act as a clickable link to some other page. This should be used sparingly, as it can be quite annoying to keep jumping around the web when you’re just inspecting an object.
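An Anchor wrapper looks roughly like this (URL and description are hypothetical):

```vrml
Anchor {
  url "http://www.example.com/widget-info.html"
  description "Widget data sheet"
  children [
    Shape { geometry Sphere { radius 1 } }
  ]
}
```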

The uses for URLs in your objects can range from booby traps or ads for your favorite Web site, to inventory data for some widget. A nice example is a VRML origami site, where each step in the folding of a paper menagerie has a simple model with a link to the next stage. This is similar to the VRML level-of-detail mode, where multiple models are grouped together and the viewer’s distance determines which model, if any, is actually rendered.

Performance Notes

Although the VRML format is capable of describing complex scenes, current 3D browsers are limited by the real-time rendering capabilities of their underlying computers. Thus, exquisitely crafted models with painstaking detail, suitable for those print-res close-ups, may fail painfully when they enter the realm of VRML renderers. To avoid the twin perils of long download times and slow rendering, remember: the first key to VRML success is efficient, low-polygon-count modeling.

Similarly, elaborate layers of diffuse, specular, and luminosity textures, whether images or algorithmic, will not survive any conversion to VRML. Don’t even ask about bump maps, displacement maps, or surface shaders. Love it or leave it, VRML supports a single image map for a color texture, as well as diffuse, color, specular, and transparency values. Since that texture image may very well have to fly through a modem, you’ll probably want to keep it small.

Elaborate textures and lighting can, however, be baked into a model’s image map, and lighting effects and coloring can also be baked into vertex color maps.

PointSet objects are stored most efficiently if there is only one surface per object. Otherwise, duplicate references to the vertices are required. For large scenes, this could be significant.

Scene Tags

Many of the VRML attributes set in the exporter UI are stored in the LightWave scene file as comments. These comments can be viewed and edited on an item-by-item basis with the Comments (Layout Generic) plugin. These Comments should be formatted as <Tag>=<value> where the Tag is one of the following:
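A few comment entries in this <Tag>=<value> form might look like the following (all filenames, URLs, and values are illustrative):

```
URL="http://www.example.com/info.html"
SOUND="sounds/click.wav" 0.8 TRUE
LOD=robot_med.wrl 10
LOD=robot_low.wrl 50
NAVIGATE=WALK 2.0
```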


Item Tags - Tag Name/Usage and Description

URL - URL=<url> (URL="http://etc.")

Item URL, overrides object and children’s URLs.


SOUND - SOUND=<url> [<volume> <loop?>]

Sounds can be added to objects. Currently these are triggered with any animation.


Trigger when mouse is over object (mouse grope).

Trigger if viewer enters active region (WxHxD).

Trigger when viewer sees object.


INCLUDE - INCLUDE=<filename>

Dump contents of file directly into output.


Skip this object and its children.


TRIGGER - TRIGGER=<object>

Other object for sensor.


VRML - VRML=nodeName{node fields}

Node creator, dump node from comment into file.


LABEL - LABEL=<text>

Create text node.


MORPH - MORPH=<start> <end> <step> <loop?>

Morph animation capture. Creates external MorphObject.


LOD - LOD=<object filename> [<range>]

Level of Detail node. Use multiple tags in order of decreasing complexity (increasing range).

Camera - Tag Name/Usage Description (stored in first camera in scene)


NAVIGATE - NAVIGATE=<type> [<speed>]

NavigationInfo Type is one of the VRML97 navigation types (WALK, EXAMINE, FLY, or NONE). Browsers may restrict user navigation with this.



Override scene background image with URLs for front, back, left, right, and top images named like basename_front.jpg, etc.



NavigationInfo Headlight on, if present.