Re-doing Papyrus

A case study of a 2D animated game


Table of Contents

Introduction
Vectorizing the drawings
Assembling the drawings to animate them
Exposure sheet
Animating the drawings
Assembling several animations and a background
Adding interactivity
Testing in web browsers and in mobile phones
Comments on 'external use'
Conclusion
Thanks
Bibliography

In 1995, I was commissioned by a French publishing house to prepare 2D animations for the edutainment game Papyrus on CD-i and CD-ROM.

The interactivity was programmed with Macromedia Director. To get fluid animation, the target resolution was 512x373 pixels with 256 colors. All the animations were done with bitmaps.

As a stylistic choice (cartoon-like), the animations were hand-drawn, then computer-colored and assembled.

To be able to benefit from future technical evolution, we chose to scan the drawings at a high resolution. It was difficult to get a good low-resolution result from a high-resolution scan without anti-aliasing (only 256 colors), but that is another subject (Figure 1).


We also had all the exposure sheets. An exposure sheet is used by cartoon animators to define, for each time step, which drawings are assembled to compose the scene. It is very similar to the timeline panel in the Flash editor.

We then produced an XML version of the exposure sheets.

So, with the drawings, our vectorizer, the exposure sheets and a bit of programming, we were ready to generate SVG files for all the animated scenes.

Hundreds of drawings and dozens of exposure sheets. Is SVG a good solution? Yes, as we will see.

The first step was to get an SVG file for each original drawing. We had the scanned files for each drawing. For a target resolution of 512x373, we started with a 1800x1200 resolution.

The drawings were in a proprietary format that we translated into PNG. Then we adjusted our vectorizer to translate PNG files to SVG files.

Our vectorizer scans a black-and-white bitmap line pair by line pair, using 16 patterns. The patterns are the 16 possible values of the 4 pixels in a 2x2 window (see Figure 2):

In the figure, green points are starting points for two paths. Green arrows are new segments in a path, starting either from a starting point or from a previous segment in the path (in red).

Red points are ending points for two paths, where two paths are joined to complete a border. Additional logic is used to join paths that end at the same point.
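To make the scanning step more concrete, here is a minimal sketch of the pattern classification only. It is not the VectoSVG code, and it omits the path-building and path-joining logic described above; the bitmap layout and the bit ordering of the code are assumptions.

   // Minimal sketch: classify each 2x2 window of a black-and-white bitmap
   // into one of the 16 patterns of Figure 2 (assumed layout: an array of
   // rows containing 0 for white and 1 for black).
   function patternCodes(bitmap) {
      const height = bitmap.length, width = bitmap[0].length;
      const codes = [];
      // Walk successive line pairs; each 2x2 window gives a 4-bit code 0..15.
      for (let y = 0; y < height - 1; y++) {
         const row = [];
         for (let x = 0; x < width - 1; x++) {
            const code = (bitmap[y][x]     << 3) |   // top-left pixel
                         (bitmap[y][x + 1] << 2) |   // top-right pixel
                         (bitmap[y + 1][x] << 1) |   // bottom-left pixel
                          bitmap[y + 1][x + 1];      // bottom-right pixel
            row.push(code);
         }
         codes.push(row);
      }
      // Codes 0 and 15 are uniform areas; the other 14 codes mark border
      // crossings where the tracer starts, extends or closes a path
      // (the green and red points of Figure 2).
      return codes;
   }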

 

As a result, we got an SVG version for each original drawing.


The code will be available on SourceForge as VectoSVG.

This method generates many small horizontal and vertical segments; it is simple to filter the result into a more compact representation with fewer segments and points. For example, for each chunk of a path, if we represent a one-unit horizontal segment by 0 and a one-unit vertical segment by 1, a simple filter can take a byte representing a group of eight unit segments and give a more compact representation: at most two segments are enough to approximate the eight unit segments. Figure 3 gives some samples; a sketch of such a filter follows the samples.


x00 → 8 horizontal segments → 1 segment (+8, 0)

x55 → mix of h and v segments → 1 segment (+4, +4)

x0F → 4 h segments, then 4 v segments → 2 segments (+4, 0)(0, +4)
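A minimal sketch of such a filter is given below. It is not the production filter, only the idea illustrated by the samples above; the assumed encoding is that bit 7 of the byte is the first unit segment, that 0 means a one-unit horizontal step and 1 a one-unit vertical step, and that mixed bytes are approximated by a single diagonal segment.

   // Minimal sketch of the byte-based segment filter (assumed bit layout:
   // bit 7 is the first unit segment; 0 = horizontal unit, 1 = vertical unit).
   function filterByte(b) {
      // Split the byte into maximal runs of identical bits.
      const bits = [];
      for (let i = 7; i >= 0; i--) bits.push((b >> i) & 1);
      const runs = [];
      for (const bit of bits) {
         if (runs.length && runs[runs.length - 1].bit === bit) {
            runs[runs.length - 1].len++;
         } else {
            runs.push({ bit: bit, len: 1 });
         }
      }
      // A segment is a relative (dx, dy) step.
      const seg = r => (r.bit === 0 ? [r.len, 0] : [0, r.len]);
      if (runs.length <= 2) {
         // Pure runs keep their exact shape.
         return runs.map(seg);
      }
      // Mixed bytes are approximated by one diagonal segment.
      const vertical = bits.reduce((sum, bit) => sum + bit, 0);
      return [[8 - vertical, vertical]];
   }

   // filterByte(0x00) -> [[8, 0]]         (8 horizontal units -> 1 segment)
   // filterByte(0x55) -> [[4, 4]]         (mixed -> 1 diagonal segment)
   // filterByte(0x0F) -> [[4, 0], [0, 4]] (4 h then 4 v -> 2 segments)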


SVG has many possibilities for defining parametric animation, mainly by using different modes for interpolating parameters. But 2D cartoon animation is rarely described efficiently with parametric animation. In Figure 4, we see three successive drawings in a sequence. It is very difficult or impossible to design functions that interpolate drawing 2 from drawings 1 and 3. This sample is extreme, but it is representative and illustrates well why cartoon animation resists parametric description.

We will now present the exposure sheet. It is the traditional way for animators to specify when each drawing is used and for how long. Each line of the table represents a unit of time. Each column represents a level of drawings. Typically, the first column is for the background and the other levels are for animated characters.

Figure 5 is a detail of the traditional exposure sheet for the starting scene of the game. Each animation team has its own notation for adding other directions: a dynamic change of framing, important synchronization points with the audio...

As you can see, it is just a table with a specific meaning for each row and each column. It is very similar to the timeline panel in the Flash editor.


Classical cartoons, like the ones produced by Disney or in the style of Tex Avery, are generally made of a background on which some characters move. The traditional way to compose the pictures of such a cartoon is to draw the background and the characters on separate transparent celluloid sheets. The sheets are then stacked on top of each other and filmed.

The drawings representing the characters are generally made of zones of uniform color surrounded by a border line of a different color. The characters are drawn separately and their compositing is specified by the exposure sheet.

The exposure sheet describes, for each frame, the way to compose the final picture. It tells which background to use and which drawings to stack on top of it.

From a technical perspective, this means that we can extract the following points from the exposure sheet:

  • a list of actions to perform for each frame: the addition of a new element to display, the removal of an element present in the previous frame, or a transformation to apply to an element;

  • a list of elements which need to be referenced more than twice during the whole cartoon, and a list of elements which will be referenced only twice (display and removal).

We have created an XML format to represent an exposure sheet.

To simplify, we will show a sample of a small exposure sheet in version 2 of our XML format. Version 3 adds the possibility of grouping several levels and giving an ID to the group. For example, one level can be used for the animation of the boy's arm and one level for his body; we can then create a group named BOY and apply a transformation (scaling, translation...) to the group. Levels are displayed in the order in which they appear in the file, so the last level in the file is displayed on top.

            
<fexp version="2.0" repeat="infinite">
   <level end="cycle">
      <image duration="0.08">F0000.svg</image>
      <image duration="0.16">F0001.svg</image>
      <image duration="0.08">F0002.svg</image>
      <image duration="0.08">F0003.svg</image>
      <image duration="0.08">F0004.svg</image>
      <image duration="0.08">F0005.svg</image>
      <image duration="0.08">F0006.svg</image>
   </level>
</fexp>

         

In this example, we have a level consisting of seven drawings, each one with a duration. These drawings define a loop: when the last ends, the first begins.

We will show in the next section how such a description can be translated into SVG.

Now, we need to see how successive SVG drawings can be displayed with SVG and whether we obtain an acceptable level of performance.

The exposure sheet sample in the previous section defines a very simple animation with seven drawings, each one with a duration of 0.08 seconds, except for F0001.svg (0.16 seconds).

First we need to include each elementary drawing in the SVG file which will give us the animation.

We have three ways to do that:

  • the <use> element with an external reference (named 'external use' in the following),

  • the <animation> element,

  • the creation of a new file by aggregating the elementary files.

The <use> element and the <animation> element are good solutions for testing the animation while keeping the elementary files independent. But cascading external use is prohibited in Tiny 1.2, and support for 'external use' is currently uneven (at the time of writing, Opera supports only one level of 'external use', and Firefox and Chrome do not support 'external use' at all).

The <use> element can only reference a fragment in an SVG file and cannot reference the rootmost <svg> element. So, the referenced drawings need to be built with a <g> element whose ID can be referenced in the xlink:href attribute, like this:

    <use id="lev0c1" xlink:href="F0001.svg#F1" />

where F0001.svg is an elementary drawing and all the elements it contains are grouped in a <g> with ID F1.

The <animation> element can only reference an entire SVG file, but you must specify where to display it and at what size. As we have a direct correspondence between the size of the scene for the elementary drawings and the size of the animated scene, we give (0,0) as the position and the whole scene size as the size. In our example, we can include the same file as above like this:

  <animation x='0' y='0' width='1600' height='1200' id="lev0c1" xlink:href="F0001.svg"/>

where (1600,1200) is the size of the current viewBox.

The third solution, building a file containing all the elementary files, is just a matter of process, except that we need to be sure that no duplicate IDs appear in the aggregated file. In our case, this is no problem because all the files are generated by a process which takes care of that. We generate this file from the one with the <use> elements: we replace each <use> with a group containing all the content referenced by the xlink:href attribute of the <use>.
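As an illustration of this aggregation step, here is a minimal sketch, not the authors' tool. It assumes a DOM environment providing DOMParser and XMLSerializer (a browser, or Node.js with an xmldom-style package) and a hypothetical loadText() helper returning the text of a referenced SVG file.

   // Minimal sketch: replace each <use xlink:href="file.svg#id"> by a <g>
   // carrying the same attributes and a copy of the referenced content.
   const SVG_NS = 'http://www.w3.org/2000/svg';
   const XLINK_NS = 'http://www.w3.org/1999/xlink';

   function inlineExternalUses(masterText, loadText) {
      const parser = new DOMParser();
      const doc = parser.parseFromString(masterText, 'image/svg+xml');
      for (const use of Array.from(doc.getElementsByTagName('use'))) {
         const href = use.getAttributeNS(XLINK_NS, 'href') ||
                      use.getAttribute('xlink:href') || '';
         const [file, fragment] = href.split('#');
         if (!file) continue;                 // internal reference: keep as is
         const extDoc = parser.parseFromString(loadText(file), 'image/svg+xml');
         const target = extDoc.getElementById(fragment);
         if (!target) continue;
         const group = doc.createElementNS(SVG_NS, 'g');
         for (const attr of Array.from(use.attributes)) {
            if (attr.name !== 'xlink:href') group.setAttribute(attr.name, attr.value);
         }
         // The generation process must still guarantee unique IDs in the result.
         group.appendChild(doc.importNode(target, true));
         use.parentNode.replaceChild(group, use);
      }
      return new XMLSerializer().serializeToString(doc);
   }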

Now, we have all the steps of the animation and we have to animate them. We have three ways to do that:

  • with a pair of <set> elements for each drawing, one to display it, one to remove it,

  • with an <animate> element to display the drawing and to return to the previous state (display="none"),

  • with a script.

     

In the following, we suppose that each elementary drawing has been included in the file with a <g> element, but the animation methods are directly transposable to the <use> and <animation> elements.

Each line of the previous XML exposure sheet can be translated using one of the three following methods.

The first method, with <animate>:

         
<g id="lev0c1">
   <animate id="l0c1" attributeName="display"</para>
         from="inline" to="inline" calcMode="discrete" begin="l0c0.end" dur="0.16s" />
   <!-- …the elements which define the drawing must be included here -->
</g>
                
      

By default, the effect of the <animate> ends by returning to the default state of the group (display="none").

To loop after the last element, we have to animate the first drawing differently; it must play at the beginning of the sequence and again when the last step of the animation ends:

         
   <animate id="l0c0" attributeName="display"</para>
      from="inline" to="inline" calcMode="discrete" begin="0.0;l0c6.end" dur="0.08s" />
   
   

The second method uses two <set> elements on the 'display' attribute of a drawing, as illustrated in the following code:

         
   <g id='level0cell2' display='none' >
      <set id='l0c2' attributeName='display' to='inline' begin='l0c1.end' dur='0.08s'/>
      <set attributeName='display' to='none' begin='l0c2.end'/>
      <!-- …the elements which define the drawing must be included here -->
   </g>
   
   

The third method uses scripting. For Firefox, we have generated for each animation a script which works like the previous pair of <set> elements, based on the setTimeout method.
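The generated scripts are not reproduced here, but the following minimal sketch shows the idea of such a setTimeout-based fallback. The assumptions are ours: the drawings are <g> elements with known IDs, all initially set to display='none', and setAttribute is used instead of the uDOM setTrait.

   // Minimal sketch: cycle through the drawings of one level with setTimeout,
   // mimicking the pair of <set> elements shown above.
   function playLevel(ids, durations, repeat) {
      let current = 0;
      function step() {
         // Hide the previous drawing, show the current one.
         const prev = (current + ids.length - 1) % ids.length;
         document.getElementById(ids[prev]).setAttribute('display', 'none');
         document.getElementById(ids[current]).setAttribute('display', 'inline');
         const dur = durations[current];
         current++;
         if (current < ids.length) {
            setTimeout(step, dur * 1000);
         } else if (repeat) {
            current = 0;              // loop: the first drawing follows the last
            setTimeout(step, dur * 1000);
         }
      }
      step();
   }

   // playLevel(['lev0c0', 'lev0c1', 'lev0c2', 'lev0c3', 'lev0c4', 'lev0c5', 'lev0c6'],
   //           [0.08, 0.16, 0.08, 0.08, 0.08, 0.08, 0.08], true);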

We think that fully declarative animation is the best way to define the animation and a good base on which to apply tools, for example to optimize, to check, or to provide streaming methods (see our team's article about streaming SVG [SVGSTREAM]). A declarative language seems to offer better support for the semantics of the domain. Moreover, our tests confirm that the same animation consumes much more CPU with scripting than with <set> or <animate> elements (see below).

Next, we need to generate an SVG file which groups a background and several animations, as defined by the columns of the exposure sheets.

First, we generate an SVG file with references to the background and to each external animation created during the previous step. Each reference is an xlink:href in a <use> or <animation> element.

But, as we would like the result to be executable on mobile phones, it needs to be compatible with SVG Tiny 1.2. In that specification, the file referenced by a <use> can't contain a <use> with an external reference.

Our first solution was not compatible: a complete scene made several external references to animations, and each animation made several external references to drawings, resulting in two levels of external <use>.

Below, in the section 'Comments on external use', we will explain why it is interesting to create animations chunk by chunk and to assemble the chunks later.

 

We did not have access to the Macromedia Director sources defining the global structure of the game, so we decided to start work on representing such a structure in a declarative way. We chose an XML format. For each page, we describe the events that change the state of the page or lead to another page. This XML format is useful to present the global structure of the game, to add interactivity to the animations, to apply some checks... But this part of the work is still in progress and will be explained in a later publication.


Figure 6 is a detail of the flowchart obtained from the XML file, limited to the transitions between pages.

Some actions are very simple to add to the previously created SVG files. Some are more difficult.

We will show several interactivity examples and how they are obtained.

A first category of interactivity is like hyperlinking in HTML. You have an SVG page and you want to display a new page when an event occurs, typically a click on a graphical element. The basic case is very simple to code with the <a> element, which works like the <a> tag in HTML. Just put an <a> element around the elements which must respond to the click (or other navigation event), like this:

         
<a xlink:href="girlActivity.svg">
   <g id="girlIntro">
      <!-- …here you put the elements which define the sensitive elements -->
   </g>
</a>
   
   

But we encountered a small variation on that interactivity: a click on an object activates an animation and then, when the animation ends, a new page is displayed. In Papyrus, this happens in the introductory sequence; if you click on the boy, he walks to the house, then his room is displayed and he enters. We did not find a way to do that declaratively in Tiny 1.2, so we use a script. When the animation ends, we call the following function:

         
   function toRoom() {
         document.getElementById('openDoor').setTrait("display", "none");
         gotoLocation('boyRoom.svg');
   }
   
      

and here is how we attach the handler:

         
   <ev:listener target='g1c21' event='endEvent' handler='#toRoom' />
   
   

where g1c21 is the ID of the <animate> element which controls the display of the last step of the boy's animation.

Such a transition is very common in games: think of the last spaceship exploding and, at the end, a transition to the next level of the game (or the final panel of the game :-))...

Note that the specification is very open about what activates (validates) the <a> link. The choice seems to be implementation dependent. So, it is difficult to be sure that the animation will start on the same events as those which validate the <a>. In our current version of the game, the animation starts on a click event.

Another sequence illustrates a common situation: at different steps, while the girl is going to a party, it is possible to press a button to view an animated song and then to return to the previous step. Since the song can be reached from several steps, the SVG file for the animated song cannot use the 'gotoLocation' method with a fixed target to go back to the previous sequence, and such 'go back' functionality is not supported by the SVG specification. It is possible to get this behavior with the JavaScript method window.history.back(), if it is supported by the user agent (which is the case for recent versions of Firefox, Opera and Chrome).
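A minimal sketch of such a 'go back' handler follows; the handler name and the target ID are hypothetical, and the feature test is a defensive assumption since support depends on the user agent.

   // Hypothetical wiring, following the listener pattern used above:
   // <ev:listener target='backButton' event='click' handler='#backToPreviousStep'/>
   function backToPreviousStep() {
      // Returns to the previously displayed SVG file when the user agent
      // supports window.history (recent Firefox, Opera and Chrome).
      if (window.history && typeof window.history.back === 'function') {
         window.history.back();
      }
   }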

The main problem concerning compatibility with smartphones has been an activity entirely based on drag and drop. For smartphones with a touch screen, it is not a problem. But for smartphones controlled with keys, we do not know how to provide a good user experience for such an activity.

We have tested the game with [GPAC], Firefox 3.0, Opera 9, Chrome 2.0.

In practice, all the Papyrus animations in SVG work on a laptop PC. Sound and interactivity are less successful, except with GPAC. As a reference, here are some figures for the starting sequence of the Papyrus game.

   Original MOV                                        1830 KB
   Uncompressed SVG                                    2417 KB
   Compressed SVG (gzip)                                616 KB
   Elementary drawings used to define the sequence        51
   Path count                                            1324
   Points                                              335612
   Frames per second in Osmo4                              30

Here are performance results for a Toshiba laptop running Windows XP SP3, with an Intel Core 2 Duo P8600 at 2.4 GHz and 2.86 GB of RAM.

   Playing                              CPU, original size (512x373)   CPU, full screen (1280x800)

   Original MOV in QuickTime player     3.00%                          3.00%
   SVG in Osmo4 (GPAC)                  7.00%                          13.00%
   SVG in Opera 9                       20.00%                         29.00%
   SVG in Safari                        28.00%                         35.00%
   SVG in Chrome                        18.00%                         24.00%
   SVG in Firefox (*)                   40.00%                         50.00%

   (*) Animation emulated with a script in Firefox (see the comments below).

Some complementary results can be seen in [SVGCARTOONS].

From one level (column) of the exposure sheet, we know which drawing is to be displayed at which instant and for how long, sometimes using a loop of several drawings. We need to translate these timings into SVG.

GPAC supports SVG Tiny 1.2 and is our reference software. Our main goal is to demonstrate the feasibility of a multi-platform game with a multi-platform SVG player, including on mobile phones: GPAC [PERFPLAY]. But we have also tried the game with other players and tried to get it working in all of them.

On some mobile phones (Samsung i780, Glofish V900), we have done some tests with GPAC. For example, the starting sequence of Papyrus plays at 16 fps, consuming 90% of the CPU.

Our game is very portable and scalable, thanks to the S of SVG (Scalable).

The main problems that have been solved or that still remain concern:

  • audio support (still an issue; available in GPAC and Opera),

  • support for going to another SVG file,

  • support for going to the previously displayed SVG file,

  • lack of support of <set> or <animate> in Firefox.

As shown above, animations are possible in different ways. The clearest solution is a pair of <set> tags for each drawing, one to display the drawing, one to remove it. As Firefox 3.0 does not support <set>, we have used a few lines of script to emulate the pair of <set> tags, which works quite well. The main drawback is the significant CPU consumption of the script version (see the CPU figures for Firefox above).

For cartoon-like animations, 'external use' (the <use> element with an external reference) is very practical. We have series of static drawings. We assemble a series to produce an animation file with 'external use' (for example, the animation of a character). When all the animations for a scene are ready, we assemble them on a background with 'external use' of the animations. Then, we add interactivity. In this case, it is useful to have access to each static drawing (to add or remove a detail, to change a color...), as well as to each animation (to modify a duration) and to the whole scene. So we have a need for cascading 'external use', and clearly 'external use' is a very useful feature.

For example, the starting sequence of Papyrus is made with interactivity added to a scene with four animations (father, mother, boy and girl). Each animation is built from static SVG files and each static SVG file is available. In our tests, we have simplified each drawing with Inkscape and rebuilt the whole animation. Each animation is tested separately and is available for further improvement.

But we have to accept that SVG Tiny has had to limit the possibilities of 'external use' (perhaps for performance reasons). On the one hand, 'external use' allows reuse of external components without duplicating the data. On the other hand, it requires more complex processing by the player. The limitations in Tiny 1.2 are clearly stated in the specification:

“the referenced fragment must not contain scripting, hyperlinking to animations or any externally referenced 'use' or 'animation' elements”[1].

As shown above, we can have four distinct levels, which can be named 'animations with interactivity', 'animations', 'animation' and 'drawing'. So, we suggest keeping the structure based on 'external use', using tools to resolve the external references and to build an integrated SVG file without them. In this way, we preserve the structure and build a file usable on Tiny players.

We have created such a tool to resolve external references found in <use> elements. With an XSLT transformation, we replace each <use> with an external reference by a <g> element with the same attributes and content as the <use>, except for the xlink:href attribute. If cascading 'external use' is present, the tool must be applied starting from the deepest SVG files.

 

Two drawbacks arise from the elimination of 'external use'. First, complex scenes need to be rebuilt each time there is a modification of an elementary file. Second, each re-used element is duplicated in the process.

Another reason, besides the rule in Tiny 1.2, argues in favor of eliminating 'external use' in a file: the <discard> element has no effect on 'external use'. So, if you need <discard> to preserve memory, external <use> could be a bad thing. For example, if you have long animations to stream, the <discard> element is useful to free the memory occupied by elements which were shown and will not be displayed again (see [SVGSTREAM] for more explanations).

Our first conclusion is that SVG is usable to develop games that are complex enough to be successful.

Our second conclusion is that SVG is very close to being playable on a lot of platforms. With the GPAC player, SVG Tiny 1.2 offers all the visual and interactive capabilities we need for Papyrus and it works on Windows, Linux, Macintosh, Windows Mobile...

For some players, we are very close to reaching our goal. For others, there are still issues, e.g. animation in Firefox and sound in browser-based players. But we anticipate that we are near our goal: playing SVG with sound and animation anywhere.

Our third conclusion is that we have to continue work on the management of 'external use' and the management of IDs (see above for discussion). The power of reusing the same elements can be better exploited if we clarify these points.

Finally, when managing projects with media, we optimistically try to imagine and prepare for a very unpredictable future. Our procedure is to maintain the very best quality as long as it is reasonably feasible (here, scanning the drawings at a much higher resolution than was finally necessary). Who would have been able to predict in 1995 that very large and very small screens would co-exist, as is the case today with wide-screen TVs and smartphones?

 

Thanks to Institut TELECOM for supporting this work as part of the JEMTU project.



[1] This prevents the use of external fragments which are themselves structured from fragments; for example, it prevents the integration of a widget set built with 'external use' by another producer.