A comparative analysis of some considerations for browser performance in SVG.

For the SVG Open 2007 conference

Keywords: timing, SVG, browser, efficiency, animation

Dr. David P. Dailey
Associate Professor
Slippery Rock University
Department of Computer Science
Slippery Rock
PA
USA
david.dailey@sru.edu

Biography

David Dailey is Associate Professor of Computer Science at Slippery Rock University where he teaches courses on web development and does research in graph theory and other areas. Receiving his doctorate from the University of Colorado, he has held faculty appointments in mathematics, psychology and computer science at the Universities of Wyoming, Tulsa, Alaska, and Williams College prior to Slippery Rock. His interest in SVG started up some years ago since it offered considerable promise for his interests in both graph visualization and art.


Abstract


The paper addresses several aspects of web browser performance in the context of SVG (Scalable Vector Graphics). Scalable Vector Graphics is a relatively new web and wireless standard from the World Wide Web Consortium (W3C) for conveying rich graphical information. Most web browsers and many mobile devices now support it, in some cases using it instead of its much weaker cousin HTML.

Presented are objective measurements of the time to perform certain tasks in Safari, Firefox 2, Opera, and Internet Explorer (with the Adobe ASV3 plugin), with implications for "best practices" in code development.

This paper compares the effect of document complexity on time efficiency in the context of relationships between the following sets of SVG features:

simple drawing through Document Object Model methods,

SMIL (Synchronized Multimedia Integration Language) animation,

various clipping and masking operations (including alternate strategies for carving bitmaps),

simple and compound filters,

JavaScript animation.


Table of Contents


1. Introduction
2. Browser differences in string handling
3. SVG Efficiency in the Browser
4. The SVGChamber
5. The Experiments
6. Experiment 1
7. Experiment 2
8. Experiment 2 Results
9. Experiment 3
10. Experiment 4
11. Experiment 5
12. Experiment 6
13. Experiment 7
14. Experiment 8
15. Experiment 9
Appendix 1. Source code of SVGChamber
Appendix 2. Source code of Animation Chamber

1. Introduction

To conduct these comparisons, an "SVG chamber" was constructed in HTML, allowing flexible manipulation of the SVG DOM through radio buttons and select menus controlling the number and type of SVG objects, the presence or absence of clip-paths and transparency, and the presence and type of filters applied to those objects.

While Firefox 2 does not yet support SMIL, Opera and IE/ASV3 both appear to give precedence to SMIL animation over JavaScript animation when documents contain both sorts of animation.

The comparisons of SMIL with JavaScript animation were done using an "animation chamber" in which differing amounts of each type of animation could be systematically added through the DOM to create complex documents. Because precise calibration of when an effect has finished rendering cannot be performed, the methodology used here involved "overloading the browser" by inserting numerous (sometimes thousands of) animations and checking the performance as monitored through successful loops through a window.setTimeout, compared against real-time measures. That is, the extent to which desired and actual performance differ gives an objective measure of the actual work performed, in a way that can be compared across browsers and across situations.
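The "overloading" idea can be summarized as a small metric. The following sketch (the function name is illustrative, not taken from the paper's actual source) expresses the ratio of observed to ideal elapsed time for a setTimeout loop:

```javascript
// Sketch of the "overload" metric described above (names are illustrative).
//
// If we schedule `loops` passes through setTimeout with a delay of `looptime`
// milliseconds each, the ideal elapsed time is loops * looptime. The ratio of
// the real elapsed time to that ideal estimates how much extra work
// (rendering, animation, DOM updates) the browser actually performed.
function overloadRatio(loops, looptime, startMs, endMs) {
  var ideal = loops * looptime   // what an unloaded browser would take
  var actual = endMs - startMs   // wall-clock time actually observed
  return actual / ideal          // 1.0 = keeping up; larger = overloaded
}
```

For example, a loop of 100 passes at 10 msec each that takes 2.5 real seconds yields a ratio of 2.5, indicating the browser fell well behind the requested schedule.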

2. Browser differences in string handling

In an experiment (visible on the web at http://srufaculty.sru.edu/david.dailey/javascript/stringtimer.html), two fundamentally different approaches to the creation of large HTML strings in JavaScript are investigated. In the first approach a loop is defined in which a string is repeatedly concatenated with itself to eventually produce a large string (an HTML table with between 400 and 250,000 cells each filled with a bgcolor equal to a randomly constructed color).

Simple string concatenation in JavaScript

var U= "<td height=2 width=2 bgcolor='"
var V="'></td>"
var s="<table cellspacing=0 cellpadding=0>"
for (var i=0;i<rows;i++){
  s+="<tr>"
  for (var j=0;j<cols;j++){s+=U+color()+V}  //color() returns a random color string
  s+="</tr>"
}
s+="</table>"
document.getElementById("R").innerHTML=s

In contrast, another approach builds a large array of (between 400 and 250,000) small strings. Concatenation is performed once by applying the ".join" method to the resulting array.

array join

var U= "<td height=2 width=2 bgcolor='"
var V="'></td>"
var A=new Array()
var k=0
A[k++]="<table cellspacing=0 cellpadding=0>"
for (var i=0;i<rows;i++){
  A[k++]="<tr>"
  for(var j=0;j<cols;j++)
  {A[k++]=U+color()+V}
  A[k++]="</tr>"
}
A[k++]="</table>"
var s= A.join("")  //a single concatenation via join
document.getElementById("R").innerHTML=s

Conducting this test, particularly in different browser environments, is illuminating, since one might expect the two algorithms for creating large strings to have roughly equivalent run-time behavior.

In experiments (that the reader may easily replicate), both Firefox and Opera are relatively immune to differences between these approaches. On the machine I'm currently using, Opera takes about 400 milliseconds to produce a table of 10,000 cells compared to about 500 for Safari or 1500 milliseconds for Firefox, regardless of which of these approaches one uses. However, in Internet Explorer, the array-based concatenation takes about 1200 milliseconds while the string-based concatenation takes a whopping 8700 milliseconds.
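For readers who wish to replicate the comparison outside the page cited above, the two listings can be wrapped into functions and timed head-to-head. This is an illustrative harness, not the stringtimer.html page itself; buildByConcat, buildByJoin, and timeIt are names introduced here:

```javascript
// Hypothetical harness for the two string-building strategies above.
function buildByConcat(rows, cols, color) {
  var U = "<td height=2 width=2 bgcolor='"
  var V = "'></td>"
  var s = "<table cellspacing=0 cellpadding=0>"
  for (var i = 0; i < rows; i++) {
    s += "<tr>"
    for (var j = 0; j < cols; j++) { s += U + color() + V }
    s += "</tr>"
  }
  return s + "</table>"
}

function buildByJoin(rows, cols, color) {
  var U = "<td height=2 width=2 bgcolor='"
  var V = "'></td>"
  var A = [], k = 0
  A[k++] = "<table cellspacing=0 cellpadding=0>"
  for (var i = 0; i < rows; i++) {
    A[k++] = "<tr>"
    for (var j = 0; j < cols; j++) { A[k++] = U + color() + V }
    A[k++] = "</tr>"
  }
  A[k++] = "</table>"
  return A.join("")
}

// Time either builder in milliseconds; a fixed color keeps runs comparable.
function timeIt(builder, rows, cols) {
  var t0 = new Date().valueOf()
  builder(rows, cols, function () { return "#808080" })
  return new Date().valueOf() - t0
}
```

With a fixed color() both builders return identical markup, so any timing difference is attributable purely to the concatenation strategy rather than to the content produced.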

A relatively simple experiment can leave one with a new sense of what works best and in the context of which browser.

3. SVG Efficiency in the Browser

The cross-browser comparison of computational efficiency is motivated by prior experimentation. Applying such timing tests to the browsers' SVG performance follows as a natural investigation.

An "SVG Chamber" with a control panel was constructed. This web page, consisting of an HTML implementation of the user's control panel along with a blank SVG document, allows the user to control the number and types of SVG objects to be inserted into the DOM. The SVG Chamber also includes the ability to time the process in each of the four browsers tested.

The page is available for testing at http://srufaculty.sru.edu/david.dailey/svg/SVGChamber.html, allowing the reader to verify these experiments and/or to conduct experiments of her own.

We begin with a simple analysis of the basic graphic elements: 'path', 'polygon', 'polyline', 'text', 'rect', 'circle', 'ellipse', 'line', 'image', and 'use'. We examine, across the four browsers, the time associated with creating new elements, assigning attributes, and appending them to the DOM.

The first experiment presented should help to explain why the methodology for these experiments was selected. Suppose one issues instructions through ECMAScript (JavaScript) to calculate how long it takes to do certain things in SVG, using commands like:

D0=(new Date()).valueOf() 	//start a timer to get the current time (at the beginning)
insertStuff()				//insert elements into the DOM using commands 
							//like createElementNS, setAttributeNS, and appendChild
D1=new Date().valueOf()		//calculate elapsed time
display(D1-D0)				//show result

One may observe that browsers sometimes report that the JavaScript task has been accomplished, despite the fact that the objects have not yet finished being drawn on the screen. If one places, by way of illustration, 1000 rectangles into the SVG Document Root, each with a clip-path, then in the four browsers, one will see quite different results: Internet Explorer and Firefox both finish the timing events in just over half a second, while the drawing of the objects is actually finished much later in a way most obvious to the eye. IE appears to take about three extra seconds to complete the drawing while Firefox requires about 7 more seconds for completion. For Safari, the completion of the timing event and the display of the material appears to occur at the same time (the timing shows about 0.25 seconds for 1000 clipped rectangles). For Opera, the completion of the drawing and the timing appear to coincide (about 15 seconds) but Opera renders the 1000 objects in six separate passes (drawing what looks like about 180 to 200 per screen update).

From the above experiment, using any number of combinations of graphic elements, with or without clip-paths, transparency, or filters, it becomes clear that in order to get an objective measure of the total time the browsers take to build certain objects, one must devise a more robust methodology.

Accordingly, and consistent with some of my previous work, in order to measure the effective run time of certain computationally intensive uses of SVG in the various browsers, repeated or iterative embedding of clusters of elements into the DOM was investigated by putting the iterative calls inside calls to the JavaScript setTimeout method of the window object. This means, effectively, that only when a particular set of JavaScript operations has been performed AND the window has been updated (that is, the rendering has finished as well) will the next stage of iteration begin. Since both IE and FF appear to measure the run time of the relevant JavaScript functions, excluding the actual render time, while Opera and Safari seem to include render time, the use of setTimeout gives a way of more fairly comparing the browsers. Certain methodological objections to this may seem natural, and some will be addressed shortly. The basic coding structure, then, resembles this:

D0=(new Date()).valueOf() 	//start a timer to get the current time (at the beginning)
iterate()				//begin timed loop to insert things.

function iterate(){
	insertStuff()			//insert elements into the DOM
	if (stillrunning){		//boolean test for completion of work
		 window.setTimeout("iterate()", looptime)		
	}						//looptime can be adjusted depending
							//on the complexity of the operation.
	else{
		D1=new Date().valueOf()	//calculate elapsed time
		display(D1-D0)			//show result
	}
}	

A few differences between the above proto-code and the actual code may be noted. For example, in order to get timing effects properly in some of the browsers, the else clause in "iterate()" had to be moved to a separate function.

The main purpose of the approach outlined above, though, is that the browsers will not begin the n-th pass through the function iterate until the (n-1)-th pass (including rendering) has been completed. One exception to this might be predicted to occur if the DOM call together with the rendering takes more than the time specified by looptime. Generally, it was found that any delay less than 10 msec failed to increase the run time, while longer intervals proved sufficient to slow the overall run time. In all cases, the browsers did complete whatever work was required of them, regardless of whether the timeout interval allocated was sufficient or not.

Another issue concerning the "fairness" of the chosen methodology exists: if we are interested in a true comparison between browsers of SVG-related time efficiency, then making part of this comparison dependent upon the browsers' implementation of setTimeout might contaminate some of our findings with artifacts of the implementation of ECMAScript rather than of SVG itself. As we will see in the presentation of results, however, whatever differences the browsers may have in this area are quite insignificant compared to the differences between the types of operations and elements concerned.

4. The SVGChamber

While the equivalent of radio buttons, select menus, and text input boxes may be created in SVG, HTML comes with such user-interaction widgets already built into the language. A small web application named "SVGChamber" was therefore created, having an HTML control panel which activates ECMAScript functions that, in turn, create new SVG elements and embed them into the DOM. Having given an overview of the timing and sequencing of those creation events, I will now describe the control panel and what it does.

In Figure 1, we see a screen shot of the control panel (which looks virtually the same in all four browsers).

Panel.jpg

Figure 1: Screen shot of control panel of SVGChamber

The features of this panel are described from left to right.

  1. Time. This is where the single dependent variable for these experiments is displayed. The value displayed here represents the time, in seconds, that the operation took to complete.
  2. Loop. This is the value of what above we called 'looptime', namely the duration (by default 10 milliseconds) of the call to setTimeout. As previously mentioned, 10 msec appeared to be sufficient for the purposes of these experiments.
  3. Opaque vs. clear. By default, the experiments were done with opaque objects, but if "clear" is chosen then each object is drawn at 70% opacity.
  4. Clip vs. noclip. This determines whether the object is drawn within a clip-path or not. If "clip" is selected, then each object, I, is drawn with a clip-path attached as follows:
    if (clip) {
    	I.setAttributeNS(null, "clip-path","url(#CP)")
    }
    

    The particular clip-path used in all cases is created via:
    
    	var CP="M 100 100 0 10 101 105 40 10 105 110 50 50 110 115 60 100 115 120 150 150 120 120 200 150 120 125 220 200 115 130 180 200 110 130 120 180 400 200 300 100 405 200 330 110 410 205 380 110 420 210 440 120 420 215 500 215 420 220 500 230 420 230 500 300 420 235 440 500 415 235 380 450 410 225 z"
    	var CPO=SD.createElementNS(svgns,"clipPath")
    	CPO.setAttributeNS(null,"id","CP")
    	CPP=SD.createElementNS(svgns,"path")
    	CPP.setAttributeNS(null,"d",CP)
    	CPO.appendChild(CPP)
    	Root.appendChild(CPO)
    

    The path was selected to be rather complex and to visit many parts of its bounding rectangle almost as a space-filling curve.
  5. Translate vs. notran. Some of the objects (polygon and polyline) are drawn with absolute and constant coordinates. In order to allow multiple objects to appear at separate locations, the effect of the attribute transform="translate(rx,ry)" for random (rx,ry) coordinates could be investigated.
  6. node type (shown in illustration: polygon). Values of this select menu are the basic graphic elements of SVG: 'path', 'polygon', 'polyline', 'text', 'rect', 'circle', 'ellipse', 'line', 'image', and 'use'. For most, random values are chosen for the positioning (for example, a circle's cx and cy are chosen randomly, with a fixed radius r of 40) and coloration (both fill and stroke).
    In the case of polygon, path, and polyline, predefined sequences of coordinates are chosen. The path is a simple right triangle defined by four random numbers:
    I.setAttributeNS(null, "d", "M "+x1+" "+y1+" L "+x2+" "+y1+" "+x2+" "+y2+" z"). The polygon and polyline use the same coordinate sequence as defined by the clip-path above.
  7. filter effect (shown in illustration: feGaussianBlur). As of this writing, neither Firefox nor Safari implements the majority of filter primitives, but it was desirable to investigate at least a few filter effects. Those investigated were defined as specified in the following arrays of primitive names and attribute values:
    	filterPrimitives[0]=new Array("feGaussianBlur","stdDeviation",40)
    	filterPrimitives[1]=new Array("feColorMatrix","type","matrix","values","-1 0  0 0 0  0 -1  0 0 0 0  0 -1 0 0 1  1  1 0 0")
    	filterPrimitives[2]=new Array("feGaussianBlur","stdDeviation",5)
    	filterPrimitives[3]=new Array("feTurbulence","baseFrequency",.01,"numOctaves",2,"type", "turbulence" )
    	filterPrimitives[4]=new Array("feDisplacementMap", "in","SourceGraphic", "in2", "BackgroundImage" ,"scale","16", 
    		"xChannelSelector","R", "yChannelSelector","B")
    	filterPrimitives[5]=new Array("none")
    

    The particular filter primitive and its associated values are then easily selected by the investigator.
  8. #object (number of objects). This user-chosen number takes on values of 1, 2, 3, 5, 10, 20, 50, 100, 500, 1000, or 10,000. In the default case, when iters (below) is 1, this parameter controls how many of the basic elements will be inserted into the DOM by the script. Since all objects are (generally) rendered at the same time, this variable allows us to investigate the effects of the complexity of the DOM manipulation.
  9. iters (number of iterations). This variable controls how many times the setTimeout loop will be activated. That is, with this variable the number of distinct rendering events may be controlled, allowing investigation of the total time for both rendering and DOM manipulation. The user may choose values of 1, 2, 3, 5, 10, 20, 50, 100, 500, 1000, or 10,000.
  10. redo. Redraws the screen with new objects, clearing out existing elements first. Each time the value of node type, number of objects, or number of iterations is manually changed by the user, new material is appended to existing material. The redo button redraws the chosen number and type of object, but allows the DOM to first be emptied of previous objects.
  11. clear. This button simply empties the DOM of all objects. Effectively, it does the following:
    function erase(){
    	for (i=Root.childNodes.length;i>0;i--) {
    		Root.removeChild(Root.childNodes.item(i-1))
    	}
    }
    

    Provisions for rebuilding the filter object and clip-path are also made, since those objects may be required in the next experiment.
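Taken together, the control-panel options drive a per-object construction routine along the following lines. This is a simplified sketch, not the actual SVGChamber source reproduced in Appendix 1: pickAttributes and insertObject are hypothetical names, only two node types are shown, and coloration is omitted; SD, Root, and svgns follow the paper's own snippets.

```javascript
var svgns = "http://www.w3.org/2000/svg"

// Hypothetical helper: choose geometry attributes for a node type.
// (Coloration and the remaining element types are omitted for brevity.)
function pickAttributes(nodeType, rnd) {
  if (nodeType === "circle") {
    // random center, fixed radius of 40, as described above
    return { cx: Math.floor(rnd() * 600), cy: Math.floor(rnd() * 400), r: 40 }
  }
  // path: a simple right triangle defined by four random numbers
  var x1 = Math.floor(rnd() * 600), y1 = Math.floor(rnd() * 400)
  var x2 = Math.floor(rnd() * 600), y2 = Math.floor(rnd() * 400)
  return { d: "M " + x1 + " " + y1 + " L " + x2 + " " + y1 + " " +
              x2 + " " + y2 + " z" }
}

// Build one element, apply the control-panel options, and append it.
// SD is the SVG document and Root its documentElement, as in the paper.
function insertObject(SD, Root, nodeType, opts, rnd) {
  var I = SD.createElementNS(svgns, nodeType)
  var attrs = pickAttributes(nodeType, rnd)
  for (var name in attrs) { I.setAttributeNS(null, name, attrs[name]) }
  if (opts.clip) { I.setAttributeNS(null, "clip-path", "url(#CP)") }   // clip vs. noclip
  if (opts.clear) { I.setAttributeNS(null, "opacity", "0.7") }         // opaque vs. clear
  if (opts.translate) {                                                // translate vs. notran
    I.setAttributeNS(null, "transform",
      "translate(" + Math.floor(rnd() * 500) + "," + Math.floor(rnd() * 300) + ")")
  }
  Root.appendChild(I)
  return I
}
```

The #object and iters settings then simply determine how many times insertObject is called per pass and how many setTimeout passes occur.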

5. The Experiments

We now present the results of a series of experiments which will, hopefully, appear well-motivated by both the initial questions and the data and analysis as they unfold. In each case the SVG Chamber was used to answer questions about the comparative difficulty of drawing certain SVG objects (in the different browsers) as well as the effects of such things as object complexity, transparency, filters, or clipping. The SVGChamber consists of a single empty SVG document located inside an <embed>. <embed> was used because of difficulties with both <object> and <iframe> and SVG scripting across browsers. As the SVG DOM becomes available to the web page, the embed containing the SVG is resized to fit the available on-screen real estate. Generally, then, drawing takes place within the available space exclusive of the control panel.

Generally speaking, we were interested in getting a sense of the relative magnitudes of the timing effects and of how those effects might interact with one another and with browsers, without performing rigorous statistical tests of significance. The magnitude of the effects relative to the variability associated with conditions is so large that the rigor of, say, a mixed-model analysis of variance is not required. In each experiment, three data points were gathered per observation. Presented are averages as well as, in the first case, ranges, to give the reader a sense of the reliability and variability of the data gathered. It should be pointed out that visual inspection of the raw data revealed (as in Experiment 1) relatively little inter-trial variability, reinforcing the conclusion that these results are not artifacts of randomness. While significance tests were not performed, our confidence that the results would be significant, based on the magnitude of effects relative to the magnitude of error variance, is very high.

6. Experiment 1

The first experiment was performed to get an overall sense of how efficiently the four browsers handled the creation and rendering of the basic graphic elements. 50 objects of a given nodeName were built and rendered (using a setTimeout interval of 10 msec). All objects were 100% opaque, with no filters, clip-paths, or transformations.

Table1.jpg

Figure 2: Means and ranges for elapsed time (seconds) for building basic graphic elements, by browsers.

It is noteworthy that the overall efficiency of the browsers for repeated embedding of simple objects goes from Safari (most efficient) to IE to Opera to Firefox, an ordering we will designate as SIOF. Of the ten graphical elements investigated, half show exactly this ordering (SIOF) in terms of browser efficiency. That Safari appears to outperform the other browsers, given that its SVG support was available only in nightly builds a few months ago, is impressive to say the least. Safari's efficiency with SVG text appears less than that of either Opera or IE, moving it out of first position for that node type alone. Remarking on other departures from the SIOF performance ordering, we note that Firefox has relatively better performance for objects whose bounding boxes are relatively small (rect, circle, ellipse, and image), even outperforming Internet Explorer in the case of the <image> node. I believe this is consistent with the way the Gecko rendering engine used by Firefox paints the screen by updating rectangles that bound the affected area.

7. Experiment 2

While it is straightforward to verify the claim that Firefox, and to some extent Internet Explorer, tend to report the completion of the JavaScript loop prior to the actual rendering of the objects, an experiment to confirm this visual impression is in order. In this experiment, the path object was chosen as being somewhat typical of the SIOF browser ordering shown in Experiment 1. 1000 simple path objects (random right triangles, all opaque with no filters or clip-paths) were drawn either simultaneously or in n=2, n=5, or n=10 successive iterations of 1000/n objects each.

Figure 2 demonstrates that both FF and to some extent IE are indeed accomplishing much of their rendering after the JavaScript loop appears to have finished.

8. Experiment 2 Results

Figure2.jpg

Figure 3: Means and ranges for elapsed time (seconds) for building 1000 path objects in varying numbers of iterations, by browser.

Overall, Experiment 2 presents results consistent with the conclusion that the iterative approach chosen in Experiment 1 is indeed appropriate for accurately measuring render time as well as DOM access time. It is also consistent with the observation from Experiment 1 that whatever effect setTimeout may have differentially on the browsers is quite insignificant relative to the types and number of objects being created and drawn.

Another conclusion that one might be tempted to draw from the results presented here is that it is more efficient to render many objects by using few iterations rather than many. This, however, appears not always to be true and will be examined in more detail in Experiment 3, below.

9. Experiment 3

The results of Experiment 2 suggest that the fewer the calls to actually render the image as built through the DOM, the better the overall performance of inserting new objects into the browser. However, in the midst of preparing for an experiment that would look directly at the effects of the number of objects per render and the number of renderings (for a given number of objects), it was observed that, in some cases, k renderings of n/k objects were faster than j renderings of n/j objects for j < k, contrary to the bulk of the evidence examined so far. Specifically, it was discovered that for the Firefox browser it was sometimes quicker to render one path object in each of m iterations than to render 2 objects in each of m/2 iterations. Investigated were values of m={20,40,50,100,200,500}, and as shown in Table 2, Firefox seemed to prefer more iterations for the intermediate ranges of m (40 through 200). No such examples were found in any of the other three browsers.

Table2.jpg

Figure 4: Table 2: Times in seconds for Firefox to build m paths (random right triangles) in either one or two renderings.

The same phenomenon in Firefox was replicated in experiments with <image> and <rect> though the few values chosen for investigation for <use> and <text> failed to find these somewhat counterintuitive ranges of values.

This suggests the following rule:

* Rule: Rendering n objects in k renderings (of n/k objects per render) is fastest when k is smallest.

Subject to the following constraint:

Exception: When the browser is Firefox and the value of n is between 20 and 500, then k=n may prove slightly faster than k=n/2.

The reason Firefox behaves this way probably has to do with the way it updates the screen through rectangular subregions. As one observes the drawing of a very complex DOM, it is possible to see the time difference associated with the rendering of very large objects (those with large bounding rectangles) versus smaller objects which are rendered much more quickly.

10. Experiment 4

The results of Experiment 2 also make it natural to inquire about the overall effects of the number of objects, object complexity, and the phasing of the construction of those objects into the DOM through multiple iterated DOM calls (as opposed to a single DOM call with many items). There is evidence from Experiment 2 that the use of multiple DOM calls slows all the browsers, but perhaps we may tease out a bit of this effect by independently varying, as main effects, the number of objects and the number of iterations.

To look at the effect of the number of objects (with one render per object), each browser was tested on the path object (again, our random right triangles) at values of 10, 110, 210, 310, 410, and 510 iterations. Table 3 presents the data collected by running each browser three times for each number of iterations.

Table3.jpg

Figure 5: Table 3: Times in seconds to build varying numbers of path objects iteratively, by browser.

Generally, as the number of objects increases, the time required appears to grow at a fairly sizable rate. By the time 500 of these objects (many overlapping) are drawn, the time to render such a page approaches the prohibitively large range when they are embedded iteratively. It is clearly more time-consuming to add objects when the DOM (and screen) are already cluttered with existing objects.

A natural set of questions concerns the growth rate of these numbers (linear vs. nonlinear) and the degree to which browsers do or do not differ in ways other than overall mean efficiency. Accordingly, the above data were adjusted by dividing each row by that browser's total time. That is, we may then investigate how the browsers respond to increases in DOM size independent of the main effect of browser differences. Figure 3 presents these results.

Figure3.jpg

Figure 6: Figure 3. Time to render independent of browser speed.

From Figure 3 we may observe that, apart from the fact that some browsers appear to be slower than others and that browsers in general take more time to build more complex documents, the effect of that complexity on the browsers is rather consistent. While there would appear to be slight nonlinear growth for smaller numbers of objects, the curves appear to settle into what looks like a rather linear growth rate.
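The row adjustment used for Figure 3 amounts to dividing each browser's timings by that browser's row total, removing the overall-speed main effect so the shapes of the curves can be compared. A minimal sketch (normalizeRows is a name introduced here, not from the SVGChamber source):

```javascript
// Divide each row of a timing table by that row's total, so each row sums
// to 1 and only the shape of its growth curve remains.
function normalizeRows(table) {
  return table.map(function (row) {
    var total = row.reduce(function (a, b) { return a + b }, 0)
    return row.map(function (t) { return t / total })
  })
}
```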

Next, the sheer effect of the number of objects to be rendered in a single iteration was investigated. For this purpose, the script was rewritten slightly: after completing the first iteration, a single setTimeout is issued prior to gathering a time stamp, simply to make sure the rendering has been completed.

For this particular investigation, either 10, 110, 210, 310, 410, or 510 path objects were inserted into the DOM in a single iteration. The raw data, as presented in Table 4, while consistent with the browser ordering of the iterated results of Table 3, show that all browsers are considerably faster when not required to make repeated updates of the DOM and the on-screen representation. Instead of taking three minutes to render 510 objects iteratively, Firefox required only 3 seconds. Interestingly, the performance advantage shown by Safari over IE seems to disappear when the rendering is all concurrent, suggesting that a good part of Safari's advantage comes from its ability to update both the DOM and the screen very rapidly as well as repeatedly.

Table4.jpg

Figure 7: Table 4. Times to build and render differing numbers of path objects in a single iteration.

The data as plotted are strongly suggestive of linear growth in render time, at least for the moderate range of values considered.

Figure4.jpg

Figure 8: Figure 4. Time to build multiple objects.
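One simple way to quantify the near-linear appearance noted above is to fit a least-squares line to the (object count, seconds) pairs and inspect how well it tracks the data. The following sketch (fitLine is a name introduced here, not part of the SVGChamber) computes the ordinary least-squares slope and intercept:

```javascript
// Ordinary least-squares fit of y = slope * x + intercept, for checking
// how linear the growth of render time with object count actually is.
function fitLine(xs, ys) {
  var n = xs.length, sx = 0, sy = 0, sxx = 0, sxy = 0
  for (var i = 0; i < n; i++) {
    sx += xs[i]; sy += ys[i]
    sxx += xs[i] * xs[i]; sxy += xs[i] * ys[i]
  }
  var slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
  var intercept = (sy - slope * sx) / n
  return { slope: slope, intercept: intercept }
}
```

Applied to a column of Table 4, the slope estimates the marginal cost in seconds per additional object, and small residuals around the fitted line would support the linearity reading of Figure 4.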

11. Experiment 5

While the first several experiments have focused on differing types of objects and on the number of objects, it is natural to ask how various attributes of the objects might affect the performance of the browsers.

To some extent, we may tease out a bit of that information already from the results of Experiment 1. In particular, the difference between the path and the polygon (or polyline) object in this particular case has primarily to do with the complexity of the shape. In the case of the path, we have defined a varying value for the "d" attribute (defining the actual points through which the path passes). Likewise, in the case of the polygon, we have a "points" attribute with three dozen x-y pairs. The difference between the build times for paths and polygons is likely due to the complexity of the objects being rendered.

But other aspects of the drawing of objects are expected to be computationally expensive, or perhaps efficient, such as transparency, re-use, clippaths and filters. We begin with a simple experiment to track the effect of transparency.

In this experiment, each of the ten basic graphic elements was constructed 200 times (100 objects per each of 2 iterations) in all four browsers, both at the level of 100% opacity and the level of 70% opacity (30% transparent). It was thought that this would give a bit of a mix of both number of objects and rendering complexity. Each observation was conducted three times with the results recorded using the median of the three data points.

Overall, the results appeared much the same as in Experiment 1. For example, Safari performed fastest overall. Opera was consistently fast with opaque text (though transparent text did not fare so well). Transparent objects generally took longer to render than opaque ones (though presumably only minimally longer to build in the DOM), averaging 1.695 times as long over all trials and conditions. Interestingly, Opera and Firefox were more affected by transparency than Safari and IE, as the ratios of construction times below indicate:

Table5.jpg

Figure 9: Table 5. Ratio of times to construct transparent images to opaque ones.

However, if we look more carefully at the data, we will see that both Opera and Firefox are detrimentally impacted by transparency more for certain types of objects than for others. Figure 5 breaks this down by presenting the ratios of times for transparent objects to opaque ones as a function of both browser and type of object.

Figure5.jpg

Figure 10: Figure 5. Ratio of times to construct transparent images to opaque ones.

From it we can see that Opera's extraordinary efficiency in rendering text is overcome when the text must be rendered transparently; furthermore this seems to be the only data point at which Opera departs from the pack. Firefox seems quite consistent with the other browsers in handling transparency except when it comes to the rect object (which Firefox handles superbly when it is opaque), and also losing ground with the quite complex use, polygon and polyline objects. In fact, if these objects mentioned for Opera and Firefox are removed from the analysis, the overall ratios for all browsers become 1.20 plus or minus 0.14: all ratios quite comparable to one another.

One would perhaps not be completely surprised by these results given how Opera and Firefox perform so well on text and rectangles respectively and given Firefox's apparent trouble with highly complex objects.

12. Experiment 6

It is interesting to note from Experiment 1 that the <use> object takes less time, in general, than the polygon and polyline, which rely on precisely the same set of coordinates. Adding to this extra efficiency is the somewhat counterintuitive fact that the <use> actually includes a transform=translate(x,y) attribute, shifting its coordinates according to the random numbers x and y, while all of the polygons and polylines are drawn (for better or worse) in exactly the same location. It is therefore natural to investigate the temporal effect of this single transformation, to see what sort of impact it may have. A very small experiment was performed (100 paths per 2 iterations) with random translations applied to each object. The data (presented in Table 6 below) show that these simple translations have no noticeable effect on construction time.

Table6.jpg

Figure 11: Table 6. Time to render 200 paths in two iterations with and without translations applied.

Clearly we might expect more sophisticated transformations, such as rotations involving floating point arithmetic, to take noticeable time; still, since the transformation is calculated before the object reaches either the DOM or the screen, it is not surprising that any effect here is negligible.
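For concreteness, the random translations used in this experiment can be sketched as follows; the helper name and the injectable random source are my own illustrative choices, not code from the experiment itself.

```javascript
// Build a transform attribute value of the kind applied in Experiment 6.
// The random source is injectable so the string builder can be tested
// deterministically outside the browser.
function translateAttr(maxX, maxY, rand) {
    rand = rand || Math.random;
    var x = Math.ceil(rand() * maxX);
    var y = Math.ceil(rand() * maxY);
    return "translate(" + x + "," + y + ")";
}
// In the browser one would then write, e.g.:
//   node.setAttributeNS(null, "transform", translateAttr(800, 600));
```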

13. Experiment 7

In this experiment, we attempt to gauge the effect of a clip-path on rendering time. The situation is a bit complex, since the clippath both makes the calculations more complex and reduces the portion of the screen that actually needs to be redrawn. In this particular experiment a single pre-built clippath having the same 37 coordinates as the polygon and polyline objects (as well as the use) was chosen. A more thorough experiment might have built separate clippaths with coordinates somehow yoked to those of the object being constructed, so as to ensure that the object and the clippath in fact overlap. If the object turns out to lie outside the clippath, then a savings of time is realized, since that object does not need to be rendered. So this quite simple experiment has in fact confounded the effects of computation and rendering. Some effort was made to tease these apart, by comparing objects clipped to a common clippath with objects transformed and clipped so that the clips do not overlap.

It should be noted that alternative ways of "carving" an object exist within SVG. An obvious alternative to the clip-path is the mask. Suppose, for example, we are interested in carving a bitmapped image into smaller rectangles, as in a jigsaw puzzle. One might use multiple clipPaths or multiple masks. Additionally, in Opera and IE (the two browsers that currently support filters), one can apply one image to another as a fill, using the feComposite and feImage filter primitives as follows:

<filter id="Compo" filterUnits="userSpaceOnUse">
	<feImage xlink:href="pic.jpg" x="40" y="20" width="240" height="300" />
	<feComposite operator="in" in2="SourceGraphic"/>
</filter>
<ellipse filter="url(#Compo)" id="E" cx="150" cy="150" rx="40" ry="90"/>

This approach was considered since it might avoid the necessity of housing separate <image> tags in the DOM, and hence in RAM, but the results of Table 7 suggest this was not the case. Yet another approach to carving an image would be to use feTile, feOffset and feMergeNode, as discussed in the next section.

Table7.jpg

Figure 12: Table 7. Times to build 200 paths in two iterations with and without clippaths.

From Table 7 we may conclude that the calculations involved in clipping seem to outweigh any savings that might be expected from reduced rendering (since much of the rendered content lies outside the clipping region). Overall, clippaths increase the time it takes to do things, and are somewhat more efficiently implemented in some browsers than in others. In order to draw more robust conclusions, one would need considerably more elaborate experimentation. The use of various filters to simulate clippath-like operations does not seem to be computationally effective.

14. Experiment 8

Finally we turn our attention to two sorts of effects which, as of this investigation, may only be produced in two of the four browsers under scrutiny: filters and animation. Experiment 8 concerns itself with filters. For the sake of programming convenience, five different examples of filter primitives were investigated: feGaussianBlur (using standard deviations of either 5 or 40), feColorMatrix (of type matrix with values="-1 0 0 0 0 0 -1 0 0 0 0 0 -1 0 0 1 1 1 0 0"), feTurbulence (with baseFrequency of .01, numOctaves of 2, and type "turbulence"), and a composite filter consisting of feImage and feComposite as discussed above. An attempt was made to use feDisplacementMap, but I was unable to get the BackgroundImage to come from previously established content so as to make the displacement "interesting."

Another approach considered, because of its possible use in carving images into jigsaw puzzles, was to use feTile, feOffset and feMergeNode, as outlined in the following code:

// Note: imwide and imhi (the image's dimensions) and svgDocument are
// globals defined elsewhere in the enclosing document; "eye1", "MERGE"
// and "Compo1" are elements of a pre-built filter.
function carve(m,n){
	var tilew=imwide/n-1
	var tileh=imhi/m-1
	var eye1=svgDocument.getElementById("eye1")
	var MERGE=svgDocument.getElementById("MERGE")
	var Compo1=svgDocument.getElementById("Compo1")
	var eyeOff, eye11
	for (var i=0;i<m;i++){
		for (var j=0;j<n;j++){
			var eye=svgDocument.createElement("feTile")
			eye.setAttribute("height",tileh)
			eye.setAttribute("x",100+(tilew+1)*j)
			eye.setAttribute("y",50+(tileh+1)*i)
			if(i==4){//demonstrating the use of offsets on row 4.
				eye.setAttribute("width",2*tilew+1)
				eyeOff=svgDocument.createElement("feOffset")
				eyeOff.setAttribute("result","R"+((i*n)+j))
				eyeOff.setAttribute("dx",tilew)
			}
			else{
				eye.setAttribute("result","R"+((i*n)+j))
				eye.setAttribute("width",tilew)
			}
			eye.setAttribute("in","face")
			if (i==4) Compo1.insertBefore(eyeOff,MERGE)
			Compo1.insertBefore(eye,MERGE)
			eye11=svgDocument.createElement("feMergeNode")
			eye11.setAttribute("in","R"+((i*n)+j))
			MERGE.appendChild(eye11)
		}
	}
}

Experiments with this, however (visible at http://srufaculty.sru.edu/david.dailey/svg/later/offsets7.svg), suggest that it is considerably more time-consuming than any of the other approaches considered, so it was not, in the end, implemented here.

Because the filters are indeed time-consuming to apply, only 20 objects rendered in each of two passes were used. Two passes were useful since, at least for Internet Explorer, our best efforts to time the completion of rendering seemed not to include the render time for the Gaussian blur after a single pass. The objects chosen were of types text, rect, and image, to allow the effects to be applied to a reasonably diverse set of objects, including those, like image and text, for which the effects are particularly noticeable in these particular browsers.

Table8.jpg

Figure 13: Table 8. Seconds to render 20 objects in two iterations using filters.

From these data we may observe that the application of filters indeed takes time, sometimes adding a considerable factor to the construction of an object. We also note that the effect varies as a function of the type of filter, the type of object, and the browser. Gaussian filters with a larger standard deviation seem to take longer in almost (but not all) instances, and turbulence filters are the most expensive in Opera, but not in IE, where the larger Gaussian blur takes more time. Interestingly, the composite filter involving feImage, discussed above as an alternative to using clippaths, falls in the middle of the range of computational cost for the filters investigated.

The overall results are inconclusive, but suggest that as browsers mature, their implementations of filter primitives may become faster and more consistently predictable.

15. Experiment 9

As with filters, SMIL animation is supported at the current time only within the IE/ASV or Opera environments. As many in the SVG community may not be familiar with SMIL, let me offer some introductory remarks on the topic.

Since its first public working draft in 1997, SMIL, or Synchronized Multimedia Integration Language, has seen a relatively slow adoption rate, in part because of the widespread use of the proprietary methods of Flash animation from Adobe/Macromedia. SMIL 1.0 became a W3C Recommendation in 1998, and SMIL 2.0 in August 2001. While SMIL is being put to use for complex multimedia projects involving story-boarding and the integration of audio and video clips, for most of those interested in SVG, SMIL allows simple attributes of objects to be changed smoothly over specified value ranges and durations, with much of the complexity of the programming handled by SMIL itself.

SMIL uses what has become known as declarative animation.

In JavaScript, the window method setTimeout() is typically used to repeatedly (seemingly recursively) update the screen after changing certain attributes of the objects on it. For example, JavaScript may change which file is displayed in an image tag, or the x and y coordinates of an absolutely positioned <div>. In the latter case, on every refresh of the screen (happening every dT units of time) we move the <div> dX pixels horizontally and dY pixels vertically. The author of such an animation must guess the screen refresh rate of a typical visitor's client software and then adjust dT, dX, and dY accordingly, so that dX and dY are kept as small as possible subject to the constraint that the browser must do all that it needs to in dT units of time. That is, considerable guesswork is needed on the part of the author as to what will produce a smooth animation. Also, setting up multiple independent animations running in parallel has been rather notorious for its difficulty. In contrast, SMIL lets the browser software handle all these decisions, with the locus of the animation kept directly affiliated with the animated object.

The following (which does not include the accompanying HTML markup) represents a typical JavaScript approach to moving an object in a circle of radius 60 around the point (x=150,y=140):

Circular movement in JavaScript

var pos=0;
var incr=(2*Math.PI)/60;	// radians per step: 60 steps per revolution
var delay=10;			// milliseconds between steps
function move(){
	pos+=incr;
	var x=60*Math.cos(pos)+150;
	document.getElementById("c").style.left=x+"px";	// units required in standards mode
	var y=60*Math.sin(pos)+140;
	document.getElementById("c").style.top=y+"px";
	window.setTimeout("move()",delay);
}

As most who are familiar with this type of animation would probably agree, determining the values of incr (which controls, in this case, both the x and y coordinates) and delay is crucial to the success of the animation, and difficult to do. We can anticipate that the animation may look different on different machines, given different clock speeds, video cards, RAM, browser software, operating systems, and screen resolutions.

If incr is too large, the object will appear to jump too far around the screen. If it is too small, the animation will move too slowly. If delay is too small (typically less than 10 milliseconds), the browser will not be able to keep up with the animation and may fall behind, potentially interfering with the success of complex animations. If delay is too large, the animation will appear to pause intermittently rather than run smoothly. Many of us have experienced setting delay very small, in hopes that the animation might move just a bit faster, and then watched in alarm as the page runs far too fast on a machine faster than the one on which we first tested it.
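The arithmetic linking incr and delay can be made explicit. With incr = 2π/60 radians per step, a full revolution requires 60 timeouts, so at delay = 10 ms the circle takes about 600 ms per revolution, assuming the browser keeps pace. A sketch of that calculation (the function name is mine, not part of the original example):

```javascript
// With a step of incr radians every `delay` milliseconds, one full circle
// (2*PI radians) takes (2*PI/incr) steps, i.e. (2*PI/incr)*delay ms --
// assuming the browser actually honors the requested timeout.
function msPerRevolution(incr, delay) {
    return (2 * Math.PI / incr) * delay;
}
```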

The same animation can be written in SMIL including SVG markup as follows:

Circular movement with SMIL

<ellipse id="E" cx="150" cy="200" rx="10" ry="10" fill="red">
<animateTransform attributeName="transform" type="rotate" dur="2s" 
from="0 150 140" to="360 150 140" repeatCount="indefinite"/>
</ellipse>

Similar and more dramatic comparisons between the two can be made. Much debate has gone on (particularly within the W3C HTML working group) about whether declarative approaches are "better" or not. Let us not enter this debate here, but merely agree that the two approaches are quite different, cognitively and as we shall show, behaviorally.

The experiments performed on these topics were of two primary sorts: to what extent can too much demand for SMIL animation interfere with JavaScript processing in the browser (either IE or Opera), and likewise, to what extent does JavaScript interfere with SMIL processing?

I created an "animation chamber" AC in which arbitrary numbers of separate objects each animated with one or more graphically intensive SMIL animations might be created. Following the creation of these SMIL animations, relatively simple but graphically intensive JavaScript animations might then be requested of the browser in addition to its current workload. The environment would allow the analysis of the temporal efficiency of both the JavaScript and SMIL animations in terms of the amount of animation requested of the browser and the actual amount of work the browser is able to accomplish in the associated time frame.

The source code of the HTML with SVG embed is included as an appendix at the end of this document, though the code used in preliminary experiments is available on the web at http://srufaculty.sru.edu/david.dailey/svg/timer.html.

The SVG complexity is adjusted by the user. An <image> tag in the SVG document is associated with a <clipPath>. The user can select how many entities are inserted into the clipPath. In version 1 of the AC, the user chooses a number n (between 1 and 50), resulting in an n by n checkerboard in which every other square of the n² clippings of the <image> is inserted into the <clipPath>. In version 2 of the AC, the user chooses between 1 and 1000 ellipses (with random radii and centers) to insert into the <clipPath>.
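The "every other square" selection of version 1 amounts to a parity test on a cell's row and column indices; the following is an illustrative reconstruction, not the AC's actual code.

```javascript
// A cell (row i, column j) of the n-by-n grid belongs to the checkerboard
// when i+j is even; roughly half of the n*n cells are therefore inserted
// into the <clipPath>.
function onCheckerboard(i, j) {
    return (i + j) % 2 === 0;
}
function countCheckerboardCells(n) {
    var count = 0;
    for (var i = 0; i < n; i++)
        for (var j = 0; j < n; j++)
            if (onCheckerboard(i, j)) count++;
    return count;
}
```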

In both cases certain attributes of the clipping objects are then randomly selected, and an <animate> tag is created for those attributes, with values randomly chosen within reasonable minima and maxima.

Once the SMIL animations are constructed, the JavaScript can be started with the ability to control the delay of the timing loop and, hence, the amount of JavaScript activity requested of the browser.

In these experiments, the JavaScript merely changed the xlink:href attribute of the SVG <image> to one of a variety of other images. Note that the existing clipPath is automatically, and on the fly, applied to the new image file.

Figure6.jpg

Figure 14: Figure 6: The appearance of the screen of the Animation Chamber (AC).

SMIL vs JavaScript: results

Several experiments were performed. Among the primary investigations were:

the degree of SMIL interference on JavaScript performance as a function of the amount of JavaScript requests;

the degree of SMIL interference on JavaScript performance as a function of the amount of SMIL animation;

the degree of JavaScript interference with SMIL animation.

Tables 9 and 10 investigate the degree that JavaScript timing is adversely affected by SMIL as functions of the timeout delay required by the JavaScript. Table 9 considers relatively lightweight SMIL animation (25 animated objects each with separate timing constraints) while Table 10 presents similar data for slightly heavier SMIL loads (400 animated objects each with separate timing constraints).

Table9.jpg

Figure 15: Table 9: The Effect of loop delay on number of successful JavaScript iterations. Internet Explorer; two different machines: Complexity=25 (5 by 5) effects of loop delay; animate for 5 seconds; actual iterations.

With relatively light SMIL animation (25 objects), we may observe from Table 9 that when the JavaScript timing delays were slow (200 to 500 milliseconds), the browsers on both machines tested were able to perform 100 percent of the requested animation in the time allotted. When the speed demand was increased to 50 iterations per second (a 10 millisecond delay), performance became erratic, with only about 39% of the requested JavaScript activity completed. Further, it should be pointed out that random activity (operating system and network activity) seemed to add a good deal of variance to the observations, with one of the machines (running Windows 2000) actually crashing on a number of occasions, and both machines showing great variability in performance when high JavaScript demands were placed on them.
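The efficiency percentages quoted here are simply completed iterations divided by requested iterations, where the number requested over a run is its duration divided by the loop delay. A sketch of that computation (the names are illustrative, not from the harness):

```javascript
// Requested iterations over a run of durationMs with a timeout of delayMs
// is durationMs/delayMs; efficiency is the fraction of those the browser
// actually completed (1.0 means it kept up perfectly).
function jsEfficiency(completed, durationMs, delayMs) {
    var requested = durationMs / delayMs;
    return completed / requested;
}
```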

Table10.jpg

Figure 16: Table 10: Effect of loop delay on number of successful JavaScript iterations. Internet Explorer; two different machines: Complexity=400 (20 by 20) effects of loop delay; animate for 5 seconds;

Table 10 investigates the amount of degradation of JavaScript by SMIL as a function of the amount of JavaScript requested, with a medium amount of SMIL animation (400 animated objects each with its own timing parameters). Compared with Table 9, we see that degradation of JavaScript performance was more pronounced even for relatively slow animation speeds (100 milliseconds). By the time the JavaScript loop is fast enough to do precise animation of the two-dimensional movement variety (20 milliseconds), the JavaScript loops were being performed at only about one fourth of the requested rate.

Next we look at the effect that the amount of SMIL animation has on JavaScript animation. In this experiment we chose a medium fast screen update of 50 milliseconds and then varied the number of SMIL animations from 10 (1 row by 10 columns) to 500 (50 rows by 10 columns). The decision to vary rows rather than columns is motivated by the results of Table 12 (in which it is shown that the browser keeps up with vertical complexity better than horizontal complexity, as might be expected given the natural way of drawing the screen).

Table11.jpg

Figure 17: Table 11: Effect of SVG complexity on number of successful JavaScript iterations. Internet Explorer; two different machines: Loop delay=50 msecs; animate for 5 seconds; actual iterations(expected=100). Two different machines.

What we see in Table 11 is a fairly pronounced degradation of JavaScript throughput (iterations accomplished per iterations requested), directly attributable to SMIL complexity. By the time the number of SMIL animations is high (500), JavaScript performance has fallen to less than 60% efficiency. From Tables 9 and 10, we would clearly expect this effect to be more pronounced at faster timing loops (than the 50 msec investigated here).

Table 12 merely demonstrates that rows and columns are not symmetrical in this analysis. It is far easier for the browser to have more rows and fewer columns to update.

Table12.jpg

Figure 18: Table 12: Number of iterations as a function of row vs. column complexity. Internet Explorer with Complexity = 100; loop-delay=50; run=5 secs; expected iters=100

In Table 13 we investigate the effects of SMIL complexity on JavaScript performance in both IE and Opera. In all cases, if JavaScript were keeping up with its requests, 100 iterations would have been performed. With even as few as 20 SMIL animations we can see that neither browser is keeping up with its requests, and that IE is managing the <image> source swaps considerably better than Opera, possibly because of closer proximity to low-level graphics system calls.

Table13.jpg

Figure 19: Table 13: Timing=50 msecs; effect of complexity animate for 5 seconds; actual iterations(expected=100). Opera and IE

The effect of SMIL complexity on JavaScript performance is quite pronounced for both browsers.

Similarly, Table 14 reveals the degree to which JavaScript timing is a factor in its own degradation. With 50 animated objects (for an expected 150 separate animate tags), Opera reaches a maximum of 30 updates during the 5 seconds, while IE, though better, still accomplishes only about 40% of the cycles requested.

Table14.jpg

Figure 20: Table 14. Complexity=50;effects of loop delay; animate for 5 seconds; actual iterations

Additional analyses (not shown) revealed that preloading of images had little effect on the overall performance in either browser.

Finally we note that SMIL itself shows relatively little degradation as a function of the amount of JavaScript. SMIL animations are going to terminate in roughly the amount of time that has been requested, regardless of the other demands placed on the browser.

Table15.jpg

Figure 21: Table 15. Internet Explorer. Effects of JavaScript animation on SMIL termination times. Note that with 1000 clippaths and no JavaScript, times for three trials were 11.9, 12.2 and 11.8.

Table16.jpg

Figure 22: Table 16. Opera. Effects of JavaScript animation on SMIL termination times. 8 seconds for each animate object. Note: Opera 9.0 fails to run JavaScript after 500 animated clipPaths have been inserted.

In conclusion, both Opera and Internet Explorer divert high priority to the accomplishment of SMIL animation, sometimes to the detriment of JavaScript animation. This is probably due to the way declarative animation requires that performance be kept "on schedule," and to a long-standing awareness that browsers have never been able to keep up with all the JavaScript animations that people might like them to do.

Nevertheless, the results have some interesting implications for programmers who might like to use both types of animation.

Appendix 1. Appendix One: Source code of SVGChamber

<html>
<head>
	<title>SVG Chamber</title>
<script>
var svgns = 'http://www.w3.org/2000/svg';
var xlinkns = 'http://www.w3.org/1999/xlink';
var SD;
var Root;
var workH=0
var D0
Pix=new Array(2,4,5,9,11,17,18,24,28,29,31,33,34,35,38,41,44,45,47,52,55,56,59,61,62,71,72,73,74,75,76,77)
Graphic_Elements=new Array('path', 'text', 'rect', 'circle', 'ellipse', 'line', 'image', 'use', 'polygon', 'polyline') 
filter="none"
function ready(){
	
	for (var i=0;i<Graphic_Elements.length;i++){
		document.f.G.options[i+1] =  new Option(Graphic_Elements[i],Graphic_Elements[i]);
	}
	for (var i=0;i<filterPrimitives.length;i++){
		document.f.FR.options[i+1] =  new Option(filterPrimitives[i][0],filterPrimitives[i][0]);
	}
	var offset=measure(document.getElementById("Table"))
	var S=document.getElementById("E")
	S.height= workH=height - offset
	S.width=rightedge
	S.top=0
	SD=S.getSVGDocument()
	Root=SD.documentElement
	makeCP()
}


function measure(O){
	try{
		rightedge=document.body.clientWidth;
		height=document.body.clientHeight;
		Oheight=O.clientHeight
	}
	catch(e){
		rightedge=window.innerWidth;
		height=window.innerHeight;
		Oheight=O.innerHeight
	}
	return Oheight
}

function change(evt){
	var O=evt.target
	nodeN=O.nodeName
	var C = "rgb("+parseInt(Math.random()*255)+","+parseInt(Math.random()*255)+","+parseInt(Math.random()*255)+")";
	if (nodeN=="text") {
		O.firstChild.nodeValue=C
		O.setAttributeNS(null, "stroke", C)
	}
	else if (nodeN=="image"){
		var r=Math.floor(Math.random()*Pix.length)
		var f="../p"+Pix[r]+".jpg"
		O.setAttributeNS(xlinkns,"xlink:href",f)
	}
	else O.setAttributeNS(null, "fill", C)
}

opa=1
clip=false
tran=false
var Colors=new Array("red","yellow", "orange", "green", "purple")
var CPP=null
function makeCP(){
	var CP="M 170 140 A -35 30 0 1 1 170 141 M  208 128  A -75 7 0 1 1  208 129"
	var CP="M 170 140 A -35 30 0 1 1 170 141 M  208 128  A -75 7 0 1 1  208 129 L300 300 400 400 100 400 100 395 395 300 z"
	var CP="M 100 100 0 10 101 105 40 10 105 110 50 50 110 115 60 100 115 120 150 150 120 120 200 150 120 125 220 200 115 130 180 200 110 130 120 180 z"
	var CP="M 100 100 0 10 101 105 40 10 105 110 50 50 110 115 60 100 115 120 150 150 120 120 200 150 120 125 220 200 115 130 180 200 110 130 120 180 400 200 300 100 405 200 330 110 410 205 380 110 420 210 440 120 420 215 500 215 420 220 500 230 420 230 500 300 420 235 440 500 415 235 380 450 410 225 z"
	var CPO=SD.createElementNS(svgns,"clipPath")
	CPO.setAttributeNS(null,"id","CP")
	CPP=SD.createElementNS(svgns,"path")
	CPP.setAttributeNS(null,"id","CPP")
	CPP.setAttributeNS(null,"d",CP)
	//CPP.setAttributeNS(null,"clip-rule","evenodd") //not sure this makes any difference
	CPO.appendChild(CPP)
	Root.appendChild(CPO)
	//Root.appendChild(CPP) //in case we want to see the path of our clip-path

}
filterPrimitives=new Array()
filterPrimitives[0]=new Array("feGaussianBlur","stdDeviation",40)
filterPrimitives[1]=new Array("feColorMatrix","type","matrix","values","-1 0  0 0 0  0 -1  0 0 0 0  0 -1 0 0 1  1  1 0 0")
filterPrimitives[2]=new Array("feGaussianBlur","stdDeviation",5)
filterPrimitives[3]=new Array("feTurbulence","baseFrequency",.01,"numOctaves",2,"type", "turbulence" )
filterPrimitives[4]=new Array("feDisplacementMap", "in","SourceGraphic", "in2", "BackgroundImage" ,"scale","16", "xChannelSelector","R", "yChannelSelector","B")
filterPrimitives[5]=new Array("feImage")
filterPrimitives[6]=new Array("none")

	var FPRIM=null
function makeFilter(n){
	TY=filterPrimitives[n]
	filter=TY[0]
	//alert(filter)
	var FIL=SD.createElementNS(svgns,"filter")
	FIL.setAttributeNS(null, "id","tooi")
	FIL.setAttributeNS(null, "x","0%")
	FIL.setAttributeNS(null, "y","0%")
	FIL.setAttributeNS(null, "width","180%")
	FIL.setAttributeNS(null, "height","110%")
	FPRIM=SD.createElementNS(svgns,filter)
	for (i=1;i<TY.length;i+=2){
		FPRIM.setAttributeNS(null, TY[i],TY[i+1])
	}
	FIL.appendChild(FPRIM)
	if (filter=="feImage"){
		var r=Math.floor(Math.random()*Pix.length)
		var f="../p"+Pix[r]+".jpg"
		FPRIM.setAttributeNS(xlinkns,"xlink:href",f)
		F2=SD.createElementNS(svgns,"feComposite")
		F2.setAttributeNS(null, "operator","in")
		F2.setAttributeNS(null, "in2","SourceGraphic")
	/* --- if we wish clips to be from different parts of a large image
	FPRIM.setAttributeNS(null,"width",rightedge) 
	FPRIM.setAttributeNS(null,"height",workH)
	FPRIM.setAttributeNS(null,"x",0)
	FPRIM.setAttributeNS(null,"y",0)
	*/	
		FIL.appendChild(F2)
	}

	Root.appendChild(FIL)
	var k=Root.getElementsByTagName("filter")
}

function erase(){
	//alert(Root.getChildNodes.length)
	try{suspendHandle = Root.suspendRedraw(1000000000000);}
	catch(e){}
	for (i=Root.childNodes.length;i>0;i--) {
		Root.removeChild(Root.childNodes.item(i-1))
	}
	try{Root.unsuspendRedraw(suspendHandle);}
	catch(e){}
	makeCP()
	if (filter!="none") makeFilter(document.f.FR.selectedIndex-1)
}
iterate=false
countit=0
D0=0
function build(nodeN,n){
	try{suspendHandle = Root.suspendRedraw(1000000000000);}
	catch(e){}
	timing=document.f.loop.value
	iters=parseInt(document.f.iters.value)
	countit=1
	if (iters>1) iterate=true
	D0=(new Date()).valueOf()
	dostuff(nodeN,n)
	try{Root.unsuspendRedraw(suspendHandle);}
	catch(e){}
}
stopping=false
function dostuff(nodeN,n){
stopping=false
	for (i=0;i<n;i++)addNode(nodeN)
	if ((iterate)&&(iters>countit++)) window.setTimeout("dostuff('"+nodeN+"',"+n+")",timing)
	else stop(nodeN)
}
function stop(nodeN){
	//addNode(nodeN)
	if (!stopping) {
		//one last iteration with no new drawing simply to make sure the rendering is complete
		stopping=true
		window.setTimeout("stop()",10)
	}
	else {
		D1=new Date().valueOf()
		dif=((D1-D0)/1000).toString().substring(0,6)
		document.getElementById("time").value=dif
	}
}
function addNode(nodeN){
	var I=SD.createElementNS(svgns,nodeN)
	I.setAttributeNS(null, "stroke-width", 2)
	var r=Math.floor(Math.random()*Pix.length)
	var f="../p"+Pix[r]+".jpg"

	x1=Math.ceil(Math.random()*(rightedge))
	x2=Math.ceil(Math.random()*(rightedge))
	y1=Math.ceil(Math.random()*(workH))
	y2=Math.ceil(Math.random()*(workH))
	I.setAttributeNS(null, "id","TU")
	I.setAttributeNS(null, "x", x1)
	I.setAttributeNS(null, "y", y1)
	I.setAttributeNS(null, "stroke-width", 2)
	r=Math.floor(Math.random()*Colors.length)
	var S = Colors[r]
	I.setAttributeNS(null, "stroke", S)
	I.setAttributeNS(null, "opacity", opa)
	var C = "rgb("+parseInt(Math.random()*255)+","+parseInt(Math.random()*255)+","+parseInt(Math.random()*255)+")";
	I.setAttributeNS(null, "fill", C)
	I.setAttributeNS(null, "onclick", "top.change(evt)")

	if (nodeN=="text") {
		I.appendChild(SD.createTextNode(C))
		I.setAttributeNS(null,"font-size",32)
	}
	else if (nodeN=="line") {
		I.setAttributeNS(null, "x1", x1)
		I.setAttributeNS(null, "x2", x2)
		I.setAttributeNS(null, "y1", y1)
		I.setAttributeNS(null, "y2", y2)
	}
	else if (nodeN=="path") {
		I.setAttributeNS(null, "d", "M "+x1+" "+y1+" L "+x2+" "+y1+" "+x2+" "+y2+" z")
	}
	else if ((nodeN=="polygon")||(nodeN=="polyline")) {
		var route="100 100 0 10 101 105 40 10 105 110 50 50 110 115 60 100 115 120 150 150 120 120 200 150 120 125 220 200 115 130 180 200 110 130 120 180 400 200 300 100 405 200 330 110 410 205 380 110 420 210 440 120 420 215 500 215 420 220 500 230 420 230 500 300 420 235 440 500 415 235 380 450 410 225"
		I.setAttributeNS(null, "points", route)
	}

	else if ((nodeN=="circle")||(nodeN=="ellipse")) {
		I.setAttributeNS(null, "cx", x2)
		I.setAttributeNS(null, "cy", y1)
		I.setAttributeNS(null, "r", 40)
		I.setAttributeNS(null, "rx", 20)
		I.setAttributeNS(null, "ry", 40)
	}
	else if (nodeN=="use") {
		I.setAttributeNS(xlinkns,"xlink:href","#CPP")
		I.setAttributeNS(null, "fill", C)
		Root.appendChild(I)
	}
	else if ((nodeN=="rect")||(nodeN=="image")) {
		I.setAttributeNS(null, "height", 100)
		I.setAttributeNS(null, "width", 100)
		y1=Math.ceil(Math.random()*(workH-100))
		I.setAttributeNS(null, "y", y1)
		x1=Math.ceil(Math.random()*(rightedge-100))
		I.setAttributeNS(null, "x", x1)
	}
	if (nodeN=="image") I.setAttributeNS(xlinkns,"xlink:href",f)

	if (tran){
		x3=Math.ceil(Math.random()*(rightedge/2-300))
		y3=Math.ceil(Math.random()*(workH/2-250))
		I.setAttributeNS(null, "transform", "translate("+x3+","+y3+")")
	}
	if (clip) {
		I.setAttributeNS(null, "clip-path","url(#CP)")
	}
	if (filter!="none") {
		var k=Root.getElementsByTagName("filter")
		I.setAttributeNS(null, "filter","url(#tooi)")
	}
	Root.appendChild(I)
}
time=100


</script>
<style type="text/css">
body { 
  margin-top: 0px;
  margin-right: 0px;
  margin-bottom: 0px;
  margin-left: 0px
}
</style>
</head>

<body onload="ready()">
<div id="status" style="position:absolute">
<form name="f">
<table id="Table" cellspacing="2" cellpadding="2" border="1">
<tr>
	<td>
	time<input id="time" size="9" value="0"><br>
	loop<input name="loop" size="5" value="10">
	</td>
    <td align="right">
	opaque<input type="radio" onclick="opa=1.0" checked name="r"><br>
	clear<input type="radio" onclick="opa=0.75" name="r"><br>
	</td>
	<td align="right">
	clip<input type="radio" onclick="clip=true" name="clp"><br>
	noclip<input type="radio" onclick="clip=false" checked name="clp"><br>
	</td>
	<td  align="right">
	translate<input type="radio" onclick="tran=true" name="trn"><br>
	notran<input type="radio" onclick="tran=false" checked name="trn"><br>
	</td>
	<td  colspan="2" align="center">
<select id="G" name="G" onchange="build(G.value,s.value)">
	<option>--node type--</option>
</select><br>
<select id="FR" name="FR" onchange="makeFilter(this.selectedIndex-1)">
	<option>--filter--</option>
</select>
	</td>
	<td  align="center">
	#objects<hr>
	<select name="s" onchange="build(G.value,s.value)">
		<option value="1" selected>1</option>
		<option value="2">2</option>
		<option value="3">3</option>
		<option value="5">5</option>
		<option value="10">10</option>
		<option value="20">20</option>
		<option value="50">50</option>
		<option value="75">75</option>
		<option value="100">100</option>	
		<option value="200">200</option>
		<option value="500">500</option>
		<option value="1000">1000</option>
		<option value="10000">10000</option>
	</select>
	</td>
    <td align="center">
	iters<hr>
	<select name="iters" onchange="build(G.value,s.value)">
		<option value="1" selected>1</option>
		<option value="2">2</option>
		<option value="3">3</option>
		<option value="5">5</option>
		<option value="10">10</option>
		<option value="20">20</option>
		<option value="50">50</option>
		<option value="75">75</option>
		<option value="100">100</option>	
		<option value="200">200</option>
		<option value="500">500</option>
		<option value="1000">1000</option>
		<option value="10000">10000</option>
	</select>
	</td>
    <td align="center">
<input type="button" onclick="erase();build(G.value,s.value)" value="redo">
<br>
<input type=button id="clear" value="clear" onclick="erase()">
</td>
</tr>
</table>

<embed id="E" src="empty.svg" width="800px" height="600px"></form></div>
</body>
</html>


Appendix 2. Source code of Animation Chamber

<html>
<head>
	<title>Animation Timer</title>
	<script>
	
var svgns = 'http://www.w3.org/2000/svg';
var xlinkns = 'http://www.w3.org/1999/xlink';
var SD;
var CP
var D0
url="http://srufaculty.sru.edu/david.dailey/svg/"
var imwide=240
var imhi=300
var first=true
var count=0
var chosen=0
Conditions=new Array()
Conditions[0]="stopped"
Conditions[1]="count>document.f.s1.value"
Conditions[2]="DN=new Date().valueOf();DN-D0>eval(document.f.s2.value)*1000"
Conditions[3]="yip==animsrunning"
Pix=new Array(2,4,5,9,11,17,18,24,28,29,31,33,34,35,38,41,44,45,47,52,55,56,59,61,62,71,72,73,74,75,76,77,78) // full image set
Pix=new Array(17,11,2,78,76,74,72,62,56,47,45) // overrides the full set with a shorter subset actually used
function prepare(){
   var S=document.getElementById("SS")
  	SD=S.getSVGDocument();
	CP=SD.getElementById("CP")
	document.f.p.value=5
	document.f.q.value=Pix[document.f.p.value]
	preload()
}
var received=0
function preload(){
	
	IM=new Array(Pix.length)
	for (r=0;r<Pix.length;r++) {
		IM[r]=new Image()
		IM[r].onload=function(){ready()}
		IM[r].src=url+"p"+Pix[r]+".jpg"
	}

}
function ready(){
	received++
	if (received>Pix.length-1)carve(document.f.s.value)
}
var smil=false
function prep(r){
	for (i=0;i<4;i++){
		if (i!=r) document.f.elements["s"+i].style.visibility="hidden"
		else {
			document.f.elements["s"+i].style.visibility="visible"
			chosen=i
		}
	}
	if (r==3) smil=true
	else smil=false
	document.f.go.style.visibility="visible"
}

built=false
var stopped=false
time=100
function runit(){
	count=0
	stopped=false
	time=document.f.time.value
	D0=(new Date()).valueOf()
	iterate(Conditions[chosen])
}

function iterate(co){
	if (eval(co)) {stop();return}
	r=Math.floor(Math.random()*Pix.length)
	//var f=url+"p"+Pix[r]+".jpg"
	var f=IM[r].src
	count++
	o=SD.getElementById("I")
	o.setAttributeNS(xlinkns,"xlink:href",f)
	window.setTimeout("iterate('"+co+"')",time)
}
function stop(){
	D1=new Date().valueOf()
	dif=((D1-D0)/1000).toString().substring(0,6)
	var s=eval(document.f.s.value)
	expe="; expected:"+((dif*1000)/time).toString().substring(0,7)+"\n"
	document.ff.t.value+="loop-delay: "+time+" ; secs: "+dif+"; SVGcomplex: "+s+";iters:"+count+expe
	
}
function restart(o){
	var copy = o.cloneNode(false);
	o.parentNode.replaceChild(copy, o);
	return copy
}
ovals=0
animsrunning=0
yip=0
P=new Array("cx","cy","rx","ry")
Q=new Array
Q["cx"]=imwide
Q["cy"]=imhi
Q["rx"]=50
Q["ry"]=60
function carve(n){
	yip=0
	animsrunning=0
	if (CP.hasChildNodes()) CP=restart(CP) // childNodes is always truthy; test for children explicitly
	for (j=0;j<n;j++){
		R=SD.createElementNS(svgns,"ellipse")
		for (i in P){
			var v=Math.ceil(Q[P[i]]*Math.random())
			R.setAttribute(P[i],v)
			if(Math.random()<.75){
				addanim(SD,R, P[i],n,v)
				animsrunning++
			}
		}
		CP.appendChild(R)
	}
	
	DSA=new Date().valueOf()
}

size=Math.ceil(Math.random()*4)+1
function startsmil(){
	if (smil) runit()

}
function addanim(SD,o,prop,n,v){
	var an = SD.createElementNS(svgns,"animate");
	an.setAttribute("attributeName",prop)
	mini=Math.ceil(Math.random()*v)
	if (smil){
		time=eval(document.f.s3.value)
		an.setAttribute("onend","top.counter()")
		an.setAttribute("begin","I.mousedown")
	}
	else {
		time=2+Math.ceil(Math.random()*10)
		an.setAttribute("repeatCount","indefinite")
	}
	an.setAttribute("values",v+"; "+mini+"; "+v)
	an.setAttribute("dur",time+"s")
	o.appendChild(an)
}

function counter(){
	yip++
	if (yip==animsrunning) {
		DSX=new Date().valueOf()
		document.ff.t.value+="time for all "+animsrunning+" : "+(DSX-DSA)/1000+" secs.\n"
	}
}
function advance(p){
	if (p==1||p==-1)p=eval(document.f.p.value)+p
	if (p<0)p=Pix.length-1
	if (p>Pix.length-1) p=0
	document.f.p.value=p
	document.f.q.value=Pix[document.f.p.value]
	var f=IM[p].src
	var o=SD.getElementById("I")
	o.setAttributeNS(xlinkns,"xlink:href",f)
}
	</script>
</head>

<body onload="prepare()">
<form name="f">

<table cellspacing="2" cellpadding="2" border="1">
<tr>
    
    <td  colspan="2" align="right" onclick="prep(r.value)" id="radio" style="visibility:visible">
	run until click<input type="radio" onclick="r.value=0" name="r"><br>
	run n iterations<input type="radio" onclick="r.value=1"  name="r"><br>
	run until time<input type="radio"  onclick="r.value=2"  name="r"><br>
	check SMIL<input type="radio"  onclick="r.value=3"  name="r">
	</td>
	<td>
	<input name="s0" type=button value="stop" onclick="stopped=true" style="visibility:hidden"><br>
	<select id="s1" onchange="" style="visibility:hidden">
	<option value="10">10</option>
	<option value="100" selected>100</option>
	<option value="1000">1000</option>
	<option value="10000">10000</option>
	</select><br>
	<select id="s2" onchange="" style="visibility:hidden">
	<option value="1">1 second</option>
	<option value="5" selected>5 seconds</option>
	<option value="10">10 seconds</option>
	<option value="20">20 seconds</option>
	<option value="60">60 seconds</option>
	</select>
	<input name="s3" value="8" size=3 style="visibility:hidden">
	</td><td><input type=button id="go" value="animate" style="visibility:hidden" onclick="runit()"></td>
</tr>
<tr>
    <td colspan="2" align="right">
	

<br>
loop-delay(msec)<input name="time" size="5" value="100">
<hr>
Change complexity of SVG<select name="s" onchange="carve(s.value)">
<option value="1">1</option>
<option value="2" selected>2</option>
<option value="3">3</option>
<option value="5">5</option>
<option value="10">10</option>
<option value="20">20</option>
<option value="50">50</option>
<option value="100">100</option>
<option value="200">200</option>
<option value="500">500</option>
<option value="1000">1000</option>
</select>
<hr>
<input type=button value="-" onclick="advance(-1)">
<input size=3 name="p">
<input type=button value="+" onclick="advance(1)">
<br>
<input size="3" name="q">
<input type=button value="img" onclick="advance(p.value)">
</form>
	</td>
	<td colspan="2">
	<embed id="SS" src="test5.svg" height="310" width="250"/>

	</td>
</tr>
<tr>
<td colspan="4"><form name="ff">
<textarea name="t" cols=80 rows=6></textarea></form>
</td>
</tr>
</table>
</body>
</html>

Source code of embed (test5.svg)

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
	width="100%" height="100%" onload="startup(evt)"
>
<script>
<![CDATA[
var svgDoc=null;
var I=null, svgRoot=null;
var svgns = 'http://www.w3.org/2000/svg';
var xlinkns = 'http://www.w3.org/1999/xlink';
function startup(evt) {
	var O=evt.target
	svgDoc=O.ownerDocument
	svgRoot=svgDoc.documentElement
	addanim=top.addanim
	//addanim(svgDoc,'bg')
}
//]]>
</script>
<defs>
<clipPath id="CP">
<ellipse id="bg" cx="135" cy="130" rx="40" ry="85"/>
</clipPath>
</defs>
<g onclick="addanim(svgDoc,svgDoc.getElementById('bg'),'rx',1)">
<image id="I" x="20" y="0" height="300" width="240"
xlink:href="http://srufaculty.sru.edu/david.dailey/svg/p78.jpg" clip-path="url(#CP)"/>
</g>
</svg>
