How do we create our QTVR movies?




Creating single-node QTVR movies

There are seven steps in making a single-node QTVR movie [2].

1. Plan the scene

Fifteen locations on the Texas A&M University main campus were chosen for photographing our QTVR nodes. These locations were selected because we consider them among the most interesting spots representative of the character of the campus, and because a fairly clear route can be constructed through them to give the user a virtual tour of the campus. Figure 2 shows these locations on the campus map, which is a screen capture from our VRML world. The locations are:

(1) In front of the University Administration Building
(2) Inside the Administration Building
(3) At the center of the field surrounded by the O&M Building, Administration Building, etc.
(4) At the corner of the Anthropology Building
(5) At the corner of HRBB and the Chemistry Building North Wing
(6) At the corner of the Engineering/Physics Building
(7) Inside the Zachry Engineering Center
(8) In front of the Halbouty Geosciences Building
(9) At the corner of the Harrington Education Center and the Chemistry Building
(10) In front of the Academic Building
(11) At the corner of the Rudder Theater/Rudder Tower Complex and the Memorial Student Center
(12) At the center of the Simpson Drill Field
(13) In the MSC Flag Room
(14) Outside the MSC and Kyle Field
(15) Inside Kyle Field

Figure 2. Locations chosen to photograph QTVR movies.

2. Photograph the scene

The equipment used to photograph the scene includes a Canon Elan IIe QD single-lens reflex camera with a Tokina 20-35 mm zoom lens and a special tripod system. The tripod system consists of a regular tripod and a special tripod head that allows the photographer to adjust the camera position to find the nodal point of the lens and to align the center of the lens with the pivot point of rotation. The tripod head also allows the user to level the camera.

Such a system can be purchased from several vendors at prices ranging from $500 to over $1,000. However, the system used in this project was built by the author and served quite well throughout the project. Figure 3 shows the complete system.

Figure 3: Photographic equipment used to photograph the scene

The manual from Apple recommends a 15 mm lens for indoor shooting and a longer lens for outdoor shooting, since a longer lens gives better depth of field [3]. When taking the pictures, the lens was set at 20 mm. We also tried the 24 mm setting but observed no noticeable difference. Portrait mode was chosen since it gives a better vertical field of view (vFOV) than taking pictures in landscape mode. At 20 mm in portrait mode, the vFOV is 84° and the horizontal field of view (hFOV) is 62°. For each node, 12 pictures were taken for a full 360° sweep, making consecutive pictures 30° apart, or about 50% overlap between two consecutive pictures. The 50% overlap is recommended by the QTVR manual so that the "blend" option of the stitch tool can be used. The Aperture Priority mode of the camera was chosen to give consistent lighting effects on all pictures photographed at one location. Kodak Gold 100 film was chosen for outdoor scenes and 400 film for indoor scenes.
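These numbers can be checked from the lens geometry, assuming a standard 35 mm film frame of 24 x 36 mm. In portrait orientation the 36 mm side is vertical, so vFOV = 2 arctan(18/20) ≈ 84°, and the 24 mm side is horizontal, so hFOV = 2 arctan(12/20) ≈ 62°. With 12 pictures per 360° sweep, consecutive pictures are 360°/12 = 30° apart; since each picture spans 62° horizontally, adjacent pictures share 62° - 30° = 32° of coverage, an overlap of 32/62 ≈ 52%, which meets the 50% recommendation.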

3. Digitize the images

After the negatives were developed at a local shop, the images were scanned into digital form using an HP color scanner. Two resolutions were chosen for each image: 260x360 and 520x720. The choice of resolution deserves further discussion.

Recall from Section 1, where the shortcomings of QTVR were discussed, that even though one can zoom in or out in a QTVR movie to get closer to a far-away object, the resolution of the movie limits the effectiveness of this feature. To see a far-away object clearly, one has to create a very high resolution movie (normally larger than 1600x1600, depending on the characteristics of the scene). It is difficult to create such a high resolution QTVR movie for two reasons. First, the time to make the movie is significantly longer than that for a lower resolution movie (normally below 512x760), and at very high resolution the time cost of creating a QTVR movie is unrealistic for a medium to large project that consists of many nodes. For example, it took nearly 60 hours to create a 3072x3072 pixel QTVR movie on a Power PC 8100/80 equipped with 72 MB of physical RAM and with 480 MB of virtual memory turned on [4], not to mention the time to create a multi-node movie with as many as 15 nodes, as in our project. Second, the size of such a high resolution movie is very large (over 8 MB) compared to a movie made from our 520x720 images (700 KB), and such a big movie results in a much lower frame rate (about 2 frames per second) when played on the system mentioned above. The download time for such a movie would also be significantly longer, which is not practical over the Internet.

4. Stitch the images

From this step on, Apple's QuickTime VR Authoring Tools Suite 1.1 was used. The suite provides programs to process images into a QTVR movie, an MPW shell for running these programs, a HyperCard stack to facilitate making multi-node QTVR movies, and a set of pre-written scripts. Since these pre-written scripts are calibrated for a 15 mm lens, they could not be used in this project, where a different lens was used.

After many tests, the parameters for stitching the images were set for our 20 mm lens. The most important parameter is "-fovy", which indicates the vFOV of the lens; in this case it is 84°. Some other parameters, such as "-offset", "-range", "-outWidth" and "-outHeight", differ from node to node. The stitched image is a PICT file; an example is shown in Figure 4.

Figure 4: A panoramic PICT file stitched from 12 single images of one location
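For illustration only, a stitching command issued from the MPW shell might look roughly like the line below. This is a sketch, not our actual script: the tool name, the file names and the values in angle brackets are assumed placeholders, and only the parameter names mentioned above ("-fovy", "-offset", "-range", "-outWidth", "-outHeight") come from our settings; the suite's documentation gives the exact syntax.

# Sketch only: tool name, file names and <...> values are illustrative placeholders.
stitch Node01.shots -fovy 84 -offset <per-node offset> -range <per-node range> -outWidth <width> -outHeight <height> Node01.pano.pict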

5. Create object hot spots

Since object movies are not used in this project, this step is skipped.

6. Dice the stitched image files

Again, instead of using the scripts supplied with the QTVR Authoring Tools Suite, a low-level tool named p2mv was used to dice the image files. The dicing process compresses the panoramic PICT files: the dicer creates source MooV files, which are standard, linear QuickTime movie files that can play the panorama back tile by tile. The "vertical tiles" and "horizontal tiles" were set to 1 and 24 respectively in this case, and the "cvid" (Cinepak) compressor was chosen since the movies were to be ported to other platforms such as Windows.
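As a rough sketch of this step, a p2mv invocation might look like the line below. The option spellings here are placeholders invented for illustration (only the tile counts, 24 horizontal by 1 vertical, and the "cvid" compressor come from our settings), so the suite's documentation should be consulted for the real p2mv syntax.

# Sketch only: option names are illustrative placeholders, not verified p2mv syntax.
p2mv Node01.pano.pict Node01.src.moov -horizontalTiles 24 -verticalTiles 1 -compressor cvid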

7. Make single-node panoramic movies

In this step, a low-level tool called "msnm" was used to take the diced source MooV files and create a VR panoramic movie. Among the most important parameters, "-windowSize" was set to 440x260 for the low resolution movies and 760x520 for the high resolution ones, and "-defaultView" was set to "60 0 75" (i.e. horizontal pan angle = 60, vertical pan angle = 0 and zoom = 75) to give a good initial view of each single-node movie. The resulting single-node QTVR movies are about 230 KB and 700 KB for the low and high resolutions respectively.
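As with the earlier tools, the exact command syntax is given in the suite's documentation; the line below is only a sketch of an msnm invocation for one high resolution node, where the file names and the argument formatting are assumptions and only the "-windowSize" and "-defaultView" parameters and their values come from our settings.

# Sketch only: file names and argument formatting are assumed, not verified msnm syntax.
msnm Node01.src.moov Node01.pano.moov -windowSize 760x520 -defaultView 60 0 75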

The system used to create the movies was a PowerPC 8500/120 with 48 MB of physical RAM and 150 MB of virtual memory assigned to the MPW shell. For the resolutions chosen in this project, the net computation time spent on creating a node was about 15 minutes.

Since the movies will be ported to non-Mac platforms such as MS Windows, one extra step is needed: each movie is loaded into the QuickTime VR player on the Macintosh, the "Make movie self-contained" option is selected, and the "Playable on non-Apple computers" checkbox is checked when the movie is saved.


Creating multi-node QTVR movies

To build a multi-node movie, one needs many resource files that store link hot spot information, individual node information, etc. [5] Although one can create and edit these resources by hand, doing so is a very tedious and ineffective way to handle the process. The QuickTime VR Authoring Tools Suite provides a tool named "Scene Editor" to automate the generation of the Multi-Node Resource Files. The Scene Editor is a HyperCard stack and uses standard HyperCard navigation. One can size the Scene Editor pages and move object buttons and fields using standard HyperCard techniques. HyperCard automatically saves the changes that have been made as one works with the Scene Editor. The scene data is stored in HyperCard elements within the stack itself, so the application and the data become more and more closely intertwined as the process goes on.

We first tried to build a low resolution multi-node movie. To work with the Scene Editor, a copy of the Scene Editor called "Scene Editor Campus" was created first. A set of 480x148 PICT files was created from the original stitched PICT files; these serve as "Link Backdrop Pictures", the link hot spot reference files. All source pictures (stitched panorama pictures) and single-node movies were then placed into the appropriate directories. Our setup page is shown in Figure 5. The constants were then set on the Scene Constants Page based on the information acquired during the creation of the single-node movies. Figure 6 shows the Scene Constants Page for this project. The floor plan containing the fifteen locations was then created in "Node Mode", where all single-node movies were added to the scene following the procedure described in the manual. Next, link hot spots were edited in "Link Mode". Figure 7 shows a screen shot of the Scene Editor displaying a node (upper right picture) that contains a link to another node (middle right picture). The link hot spot areas (three in total) are shown as squares in the Link Backdrop Picture (lower picture).

Figure 5: Scene Editor Campus, File Specification Page

Figure 6: Scene Editor Campus, Constant Page

Figure 7: Scene Editor Campus, Node with Links added

After all links had been tested and verified within the Scene Editor, the data was exported from the Scene Editor. Three files, the Multi-Node Resource file, the Node List file and the Build Worksheet, are created when the data is exported. After exporting the data, one can close the Scene Editor and execute the Build Worksheet within the MPW shell. Appendix A shows a copy of such a Build Worksheet. The resulting fifteen-node movie has a size of about 3.2 MB.

1. Find links in a multi-node QTVR movie

While navigating within a multi-node movie, when the mouse cursor passes over a region that contains a link hot spot, the graphical representation of the cursor changes, indicating that there is a link in that area. While this cue helps the user find hot spots, it is far from perfect, especially when the scene is large (like ours); relying entirely on intuition to find the links is not enough. The first version of our movie had no help icons, and several users reported that it was very difficult, sometimes nearly impossible, to find all the links in the scene. Some kind of extra visible indication besides the cursor change is needed to help the user find links during navigation. At the same time, these visible indications (e.g. icons) must have an intuitive and obvious meaning within the context of the scene, yet they cannot be so large that they alter the character of the scene. Based on these considerations, a set of three icons was used. They were added to the areas of the original panorama pictures where the hot spots reside, and the multi-node movie was then rebuilt from these modified panorama images. Figure 8 shows a scene with icons. Users may intuitively find the right link without the help of the icon, but imagine how difficult it would be to find the left link without the icon!

Figure 8: Navigation in a multi-node movie with help of icons