Modelling the Reactive Behaviour of SVG-based Scoped User Interfaces with Hierarchically-linked Statecharts


There are many challenges that developers face during the development of a complex User Interface (UI). Desired behaviour may be autonomous or reactive, and possibly real-time. Each UI component may be required to exhibit a radically different behaviour from that of any other component and the behaviour of components may be inter-related. These complex behavioural relationships between components are often difficult to express, and are even more difficult to encode and maintain.

There are also difficulties related to the development process: the developer must be able to rapidly adapt the structure and behaviour of the UI to changing system requirements. Unfortunately, conventional code-centric approaches fall short. We suggest that a developer should specify the behaviour of a UI using a representation which minimizes "accidental complexity".

Our solution uses Model-Driven Engineering (MDE). By modelling every aspect of the system-to-be-built, at the most appropriate level of abstraction, using the most appropriate formalism(s), it becomes possible to completely capture the behaviour of a UI, to rapidly generate prototype implementations, to easily adapt the UI as project requirements change, and, finally, to synthesize a UI and maintain it. More specifically, in this paper, we introduce a class of UIs known as Scoped User Interfaces, and illustrate how one may model them using Hierarchically-linked Statecharts (HlS). A Scoped User Interface is one in which reactive visual components (widgets) such as buttons and windows are hierarchically nested. Statecharts is a modelling formalism initially proposed by David Harel as an extension of Finite State Machines. It was intended as a solution for modelling complex, state-based behaviour, and is now included as a part of the OMG Unified Modeling Language (OMG UML) Superstructure specification. HlS applies an actor-based approach to modelling the behaviour of UIs, explicitly modelling each hierarchically-scoped UI component. A default event model that implements hierarchical event capturing and bubbling is used to bind the system together, and allows widgets at the highest level of the hierarchy to exhibit a general behaviour, while widgets deeper in the hierarchy exhibit more specific behaviour.

We have used the techniques described in this paper to develop AToMPM, an experimental, SVG-based visual modelling environment. SVG has provided a complete platform for the development of this application. In our approach, we use the SCCJS compiler to translate our Statechart models to ECMAScript. Because SVG supports scripting interactivity, it has been possible to bind each Statechart instance to a newly-created, SVG-based widget with minimal "glue" code. The SVG DOM is inherently hierarchical, and was hence used to encode hierarchical relationships between UI components. The SVG event model, in turn, has enabled hierarchical event handling for arbitrarily complex, dynamically created widgets. Finally, the SVG event model also provides an event "vocabulary" that we have used as triggers in our Statechart models.

The paper will give a step-by-step description of our approach by means of a concrete example, as we illustrate the work we have done to develop, using HlS, an SVG-based environment for visually editing and simulating Statechart models.

Table of Contents

Overview of Modelled User Interfaces
Overall Goals
Sources of Complexity in UI Development
Motivation for an MDA Approach and Proposal for a Possible Solution
Hierarchically-linked Statecharts
Developing an SVG-based Diagram Editor Front-end to a Multi-formalism Meta-Modelling Environment
Project Goals
Defining the Abstract Syntax and Concrete Syntax of a Simple Drawing Tool
Describing the Reactive Behaviour of AToMPM with HlS
SVG as a Platform for HlS
Future work

Unfortunately, conventional code-centric approaches fall short. Hence, a developer needs to capture the structure and behaviour of a UI such that "accidental complexity" is minimized [Brooks.SilverBullet]. We claim that an elegant solution to these problems may be found in Multi-Paradigm Modelling [TSCS.GrandChallenge]. By modelling every aspect of the system-to-be-built, at the most appropriate level of abstraction, using the most appropriate formalisms, it becomes possible to completely capture the structure, behaviour and visual appearance of a UI, to rapidly generate prototype implementations, to easily adapt the UI as project requirements change, and, finally, to synthesize a UI and maintain it.

Hierarchically-linked Statecharts (HlS) is a formalism for visually describing the structure and behaviour of Scoped UIs based on a combination of UML Class Diagrams and Statecharts [Harel1987].

As will be demonstrated in the following sections, HlS makes it possible to develop applications with complex UI behaviour faster and more reliably. This is because HlS allows the developer to treat UI development as a language engineering problem. Specifically, HlS entails the following work-flow:

  1. One uses an appropriate formalism, such as UML Class Diagrams, to specify the Abstract Syntax of the visual language. This entails specifying all elements in the domain one wishes to model, and qualifying their relationships with other elements. This Class Diagram, together with constraints over its elements, is commonly known as a meta-model.

  2. Subsequently, one models the Concrete Visual Syntax by associating a visual entity (such as an iconic shape [VisualLanguages]) with each element of the Abstract Syntax model.

  3. One finally specifies UI behaviour using Statecharts, such that each class is associated with a Statechart, which specifies the reactive behaviour of each instance of that class. The Statechart "glues" together:

    • Reactions to user events such as mouse clicks and key-presses;

    • Interactions with the non-visual part of the language. In particular, checking the well-formedness of constructs against the Abstract Syntax specification, as well as reflecting the Semantics of the language, which is often encoded as transformation rules;

    • Layout operations which act exclusively on the Concrete Visual Syntax.

The concepts from the section called “Overview of Modelled User Interfaces” will be illustrated using an example based on the development of AToMPM, the multi-paradigm modelling environment, developed by the McGill University Modelling, Simulation, and Design Lab (MSDL), that targets SVG for its UI front-end.

The goal of our project is to create a multi-paradigm modelling environment, the diagram-editor front-end of which targets SVG, and may run in a web browser. This tool will be similar to other multi-formalism, meta-modelling environments such as AToM^3, Marama, and DiaMeta.

We have based the UI behaviour of AToMPM's diagram editor primarily on the UI behaviour of the SVG drawing tool Inkscape. Later in the paper it will be made clear what, exactly, this has entailed.

The diagram editor of AToMPM is designed to communicate with an "Abstract Syntax" model, possibly hosted on the server, to facilitate formal verification and syntax-directed editing. The behaviour that facilitates syntax-directed editing may also be captured using HlS, and is included as a part of AToMPM; however, it will not be covered in this paper.

The first step when constructing an application using HlS is to describe the Abstract Syntax model (sometimes called the "domain model") of the domain we are modelling. This model attempts to classify the entities in that domain and their relationships to one another. Here we present a meta-model for a very simple drawing tool, using the Class Diagrams formalism.

In this model, there are four classes: CSCanvas, CSConnectionCurve, CSGroup, and CSRectElement. The prefix "CS" in this case stands for "Concrete Syntax", as we are, in effect, modelling the Abstract Syntax of a very simple Concrete Syntax domain.

A CSCanvas is a Canvas which may contain CSConnectionCurve and CSGroup objects.

A CSGroup is an abstract class, which refers to objects that are dynamically created, updated and deleted on the CSCanvas. We provide a CSRectElement to serve as an example of a concrete subclass of CSGroup.

A CSConnectionCurve is analogous to an SVGPathElement. Its shape is defined via a path segment list, which may contain moveto, lineto, curveto, quadratic curveto, and closepath path commands. This is identical to the SVG Path language, and a complete description of the language may be found in the W3C Scalable Vector Graphics (SVG) 1.1 Specification.

The default "Drawing Mode" behaviour of CSConnectionCurve is derived from the behaviour of the Inkscape Bezier Tool. This behaviour will be examined in the section called “CSConnectionCurve Drawing Behaviour”.

A CSConnectionCurve may also be used to connect two CSGroups. This behaviour extends beyond Inkscape's behaviour for manipulation of SVGPathElements, and will be examined in the section called “CSConnectionCurve Interaction with CSGroup”.

In this section, we will first present two examples of non-trivial UI behaviour, in increasing order of complexity. Our goal is to illustrate how HlS may, in general, be used to concisely model the behaviour of highly reactive UI elements with nontrivial behavioural requirements.

First, we may note that there are two methods for bringing a CSGroup from an Idle state to a Selected state. One may either:

  • MOUSEDOWN-MOUSEUP (mouseclick) on the CSGroup

  • MOUSEDOWN-MOUSEMOVE*-MOUSEUP (mousedrag-and-release) on the CSGroup

There is some subtlety to this behaviour, as the above transitions may lead to different states. When a CSGroup is Idle, MOUSEDOWN-MOUSEUP will always bring it to Scale Mode. On the other hand, MOUSEDOWN-MOUSEMOVE*-MOUSEUP will bring the CSGroup back to the state it was in when it last transitioned from Selected to Idle. For example, if the CSGroup had been put in Rotate Mode, and then brought back into Idle mode, then MOUSEDOWN-MOUSEMOVE*-MOUSEUP would bring CSGroup into Rotate Mode. Another way to describe this is that CSGroup maintains a history of the state it was in when it left Selected.

In order to differentiate between mouseclick and mousedrag-and-release, we add a "transitional state" which the CSGroup will enter from Idle on MOUSEDOWN. Adding a "transitional" state in this way is a fairly common pattern that will be used frequently throughout the paper. We will call this state Before Mode is Chosen. We must also add a History state to encode the behaviour mentioned above. Finally, we set Translating_R and Translating_S as default states. This ensures that when the History state is entered, the CSGroup will enter a mode in which it may continue translating (dragging) on further MOUSEMOVE events.
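
The transitional-state-plus-history pattern described above can be sketched in plain JavaScript. This is our own illustrative simplification, not SCCJS-generated code; all names (makeCSGroupMachine, the state strings) are hypothetical, and the sketch collapses the Translating_R/Translating_S defaults into a single remembered mode:

```javascript
// Minimal sketch of the CSGroup mode-selection behaviour: MOUSEDOWN moves an
// Idle CSGroup into the transitional "Before Mode is Chosen" state; an
// immediate MOUSEUP (mouseclick) always selects Scale Mode, while a MOUSEMOVE
// (mousedrag) restores the last remembered mode via the history.
function makeCSGroupMachine() {
  var state = "Idle";
  var history = "ScaleMode"; // simplified default mode entered via history
  return {
    getState: function () { return state; },
    dispatch: function (event) {
      if (state === "Idle" && event === "MOUSEDOWN") {
        state = "BeforeModeIsChosen";            // transitional state
      } else if (state === "BeforeModeIsChosen" && event === "MOUSEUP") {
        state = "ScaleMode";                     // mouseclick always scales
      } else if (state === "BeforeModeIsChosen" && event === "MOUSEMOVE") {
        state = history;                         // mousedrag restores history
      } else if ((state === "ScaleMode" || state === "RotateMode") &&
                 event === "DESELECT") {
        history = state;                         // remember mode when leaving
        state = "Idle";
      } else if (state === "ScaleMode" && event === "TOGGLE") {
        state = "RotateMode";
      } else if (state === "RotateMode" && event === "TOGGLE") {
        state = "ScaleMode";
      }
    }
  };
}
```

A mouseclick from Idle thus always leads to Scale Mode, while a mousedrag leads back to whichever mode was remembered by the history.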

Once in Idle_R, in order to begin Rotating the CSGroup, the user dispatches a MOUSEDOWN event on the CSGroup's rotation handle. This is encoded by the special event R_HANDLE_MOUSEDOWN.

Once in Rotating, every MOUSEMOVE event will trigger a series of actions which cause the CSGroup to rotate.

MOUSEUP will bring the CSGroup back to Idle_R.

Scale Mode behaves in an analogous fashion.

CSGroup can also be translated when in Rotate Mode and Scale Mode. When in Idle_R, mousedrag will bring the CSGroup into the Translating_R state. Once in Translating_R, every MOUSEMOVE will translate the object. MOUSEUP will bring the object back to Idle_R.

A mouseclick allows the user to toggle from Idle_R to Selected_R. Because both mouseclick and mousedrag are initiated by MOUSEDOWN, we add a transitional state to Rotate Mode called Ready To Translate_R.

Finally, to bring the CSGroup from Selected to Idle, the user may dispatch MOUSEDOWN on the CSCanvas. The CSCanvas must then inform the CSGroup that the CSGroup should bring itself into an Idle state. This is an example of an interaction between two UI objects, which will be further explored in the section called “Interactions Between Multiple Canvas Objects ”.

This UI behaviour description presents an early example of nontrivial reactive behaviour. Through the use of states, substates, transitions, and History, the statechart is able to accurately and concisely capture, express, and encode the natural-language behavioural description. Because of the executable semantics of the Statechart formalism, it is possible to generate code from the statechart, yielding an implementation of this UI behavioural specification.

Additionally, because each CSGroup is an actor, which encapsulates its own state and behaviour, each object will be able to react to user input events concurrently with other UI objects.

A MOUSEDOWN on the CSCanvas creates the CSConnectionCurve, and puts it into Drawing mode. This initial MOUSEDOWN may be followed by either an immediate MOUSEUP (mouseclick), or a MOUSEMOVE (mousedrag). These two events will bring the CSConnectionCurve into different states.

Both mouseclick and mousedrag create an initial "moveto" path command, such that the path's x and y parameters are set to the event's x and y properties.

Both mouseclick and mousedrag will then immediately create a new "lineto" path command.

Mousedrag will immediately put the curve into a special state for Drawing First Line. While in Drawing First Line, on every MOUSEMOVE event, the x1, y1 parameters of the new lineto path command will be updated to the MOUSEMOVE's coordinates. Visually, this makes the endpoint of the path appear to follow the mouse.
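
The segment creation and endpoint tracking just described can be sketched in plain JavaScript operating on a path segment list. The names (startCurve, trackMouse) and the segment representation are our own hypothetical illustrations, not part of AToMPM:

```javascript
// A MOUSEDOWN creates a "moveto" at the event coordinates, immediately
// followed by a "lineto" whose endpoint will later track the mouse.
function startCurve(x, y) {
  return {
    segments: [
      { cmd: "M", x: x, y: y },   // moveto at the MOUSEDOWN position
      { cmd: "L", x: x, y: y }    // lineto, endpoint updated on MOUSEMOVE
    ],
    // serialize the segment list as SVG path data for the "d" attribute
    toPathData: function () {
      return this.segments.map(function (s) {
        return s.cmd + " " + s.x + " " + s.y;
      }).join(" ");
    }
  };
}

// While in Drawing First Line, every MOUSEMOVE updates the endpoint of the
// last lineto segment, so that it visually follows the mouse.
function trackMouse(curve, x, y) {
  var last = curve.segments[curve.segments.length - 1];
  last.x = x;
  last.y = y;
}
```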

MOUSEUP will bring the CSConnectionCurve out of Drawing First Line and into a new state which will be called Before Setting Start Point. In this state, on every MOUSEMOVE, the coordinates of the final control point will be updated to the MOUSEMOVE's x, y coordinates. MOUSEDOWN will bring the CSConnectionCurve to a new state, Before Selecting Next Segment Type. The CSConnectionCurve's behaviour then varies depending on whether one exits this state with an immediate MOUSEUP or a MOUSEMOVE. If the user dispatches a MOUSEUP to the curve, then the curve creates a new lineto segment, and enters a state in which it is "drawing" the line. We call this state Drawing Line. As in Drawing First Line, while in Drawing Line, every MOUSEMOVE sets the endpoint to the MOUSEMOVE event coordinates, visually causing the last line segment endpoint to track the mouse cursor. A MOUSEDOWN will bring the curve back to the Before Selecting Next Segment Type state.

If, instead of dispatching a MOUSEUP, the user dispatches a MOUSEMOVE while in Before Selecting Next Segment Type, the curve will create a new quadratic curveto segment; the curve will also add a control point to the previous segment, in essence "upgrading" the previous segment from a linear to a quadratic, or from a quadratic to a cubic segment. The CSConnectionCurve will then enter a mode where it is able to adjust the control points of the new quadratic segment and of the previous, newly "upgraded" segment. This state will be called Drawing Curve. While in Drawing Curve, every MOUSEMOVE causes the control point of the new quadratic segment, and the new control point of the previous "upgraded" segment, to be updated: the new quadratic segment's control point is set to the MOUSEMOVE's coordinates, and the previous segment's new control point is reflected about the new quadratic segment's start point. A MOUSEUP will bring the CSConnectionCurve back into Before Setting Start Point.
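
The reflection of the previous segment's new control point about the quadratic segment's start point is simple vector arithmetic: the reflected point is 2·startPoint − controlPoint. A minimal sketch (the helper name is hypothetical):

```javascript
// Reflect a control point about a segment's start point, as used when
// "upgrading" the previous segment: the previous segment's new control point
// is the mirror image of the new quadratic segment's control point.
function reflectAbout(startPoint, controlPoint) {
  return {
    x: 2 * startPoint.x - controlPoint.x,
    y: 2 * startPoint.y - controlPoint.y
  };
}
```

This reflection keeps the two control points collinear with the shared start point, which is what makes the joint between the two segments appear smooth.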

Finally, to return to the original choice between the MOUSEUP and MOUSEMOVE, when the CSConnectionCurve is first created, if the user dispatches an immediate MOUSEUP event, then the CSConnectionCurve will be brought directly into the Drawing Line state, rather than the Drawing First Line state.

The requirements for the CSConnectionCurve serve as another example of nontrivial UI behaviour. Based upon the length of its natural-language description, one may state that it is, in fact, more complex than the CSGroup behaviour. However, its statechart still captures its reactive behaviour very concisely, in a way that is easy to read and understand. Because it is actor-based, it still maintains the same advantages with regard to concurrency.

There are many cases in which the behaviour of UI objects may need to be defined in terms of other UI objects. In HlS, this is typically modelled by allowing UI objects to dispatch special semantic events, which may then be handled by other objects' statecharts. Rather than allow communication through direct object references, we impose a restriction that UI objects must communicate hierarchically, through the references to their hierarchical parents and children. Three examples of the way UI objects interact with one another, and how this behaviour may be encoded using HlS, will be explored.

While the previous two interactions were fairly simple, and served to demonstrate how UI objects may communicate with one another through their hierarchical relationships, the third example is more complex and requires a new technique for modelling behaviour. Note that this behaviour diverges from that found in Inkscape, for the reasons mentioned in the section called “Reasons for Choosing Inkscape as a Basis for UI Behaviour”.

When we wish to connect a CSConnectionCurve to a CSGroup, the CSGroup's behaviour will be affected by the CSConnectionCurve's state. In order to coordinate, the CSConnectionCurve and CSGroup send events to one another via their parent CSCanvas. When engaged in this sequence of communications, we say that the CSConnectionCurve is "interacting" with CSGroup, or that CSConnectionCurve is engaged in an interaction with CSGroup.

Specifically, when the CSConnectionCurve is in a state in which it is ready to be "dropped" on the CSGroup, a MOUSECLICK may be used to "drop" the CSConnectionCurve onto the CSGroup. A MOUSECLICK is also accepted by a CSGroup in several of its states (Idle, Idle_S, Idle_R). We do not want a CSGroup to change states in response to a MOUSECLICK, when a CSConnectionCurve is ready to connect to it, and so we say that CSConnectionCurve enters into an interaction with the CSGroup. That is to say, CSConnectionCurve sends a special event to request that the CSGroup enter a state to indicate that it is involved in an interaction with the CSConnectionCurve. When the CSConnectionCurve has finished its interaction, it sends a semantic event to the CSGroup to allow it to exit its interaction state, and return to the state it was formerly in.

First, we provide the CSConnectionCurve with a new state, called Ready to Snap, distinct from the Drawing state. The purpose of this state is to separate cleanly the UI logic required to connect a CSConnectionCurve to a CSGroup. We will give the user the opportunity to toggle between Drawing and Ready to Snap modes by dispatching a SPACEBAR event.

In Ready To Snap, we would like the CSConnectionCurve to visually "snap" to a CSGroup whenever it gets within a certain range of the CSGroup's anchor points. To this end we allow the CSGroup to define certain "select areas": closed, transparent SVGPathElements associated with a particular CSGroup. Their purpose is to define a threshold around the CSGroup's anchor point, to inform the CSConnectionCurve when and where to snap.

When a CSGroup's "select areas" receive a MOUSEOVER event, the event bubbles up to the parent CSCanvas, which then passes the MOUSEOVER event down to its CSConnectionCurve children.
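
This bubble-up-then-pass-down routing can be sketched in plain JavaScript. The makeCanvas and onCanvasEvent names are hypothetical; in AToMPM the upward bubbling itself is supplied by the SVG/DOM event flow:

```javascript
// Sketch of hierarchical event routing: children never hold direct
// references to one another; an event that bubbles up to the parent canvas
// is re-dispatched downward to the canvas's children.
function makeCanvas() {
  var children = [];
  return {
    add: function (child) {
      children.push(child);
      child.parent = this;              // communication is via the hierarchy
    },
    // called when an event (e.g. MOUSEOVER on a select area) bubbles up
    bubble: function (eventName, detail) {
      children.forEach(function (child) {
        if (typeof child.onCanvasEvent === "function") {
          child.onCanvasEvent(eventName, detail);
        }
      });
    }
  };
}
```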

The CSConnectionCurve, upon receiving a MOUSEOVER event from a select area, is able to change from a state in which it is Not Ready to Drop to a state in which it is Ready to Drop. When it enters Ready to Drop, the CSConnectionCurve visually "snaps" to the CSGroup's anchor points.

When the curve enters Ready to Drop, it begins the interaction with the CSGroup associated to the selection area. It sends a request through the parent CSCanvas to begin the interaction with the CSGroup.

When the CSGroup receives the CSCONNECTIONCURVE_REQUESTS_READY_TO_DROP_INTERACTION_START event from the parent CSCanvas, it exits whatever state it was in, and enters into an In Ready To Drop Interaction with CSConnectionCurve state. When the CSGroup receives the CSCONNECTIONCURVE_REQUESTS_READY_TO_DROP_INTERACTION_END event from the parent CSCanvas, it exits In Ready To Drop Interaction with CSConnectionCurve and enters a deep history state. This allows the CSGroup to return to the state that it left when it entered the interaction with the CSConnectionCurve.

Because CSGroup does not react to MOUSEDOWN events while in In Ready To Drop Interaction with CSConnectionCurve, any MOUSEDOWN dispatched on the CSGroup will simply be ignored. This effectively allows CSConnectionCurve to "capture" the MOUSEDOWN events from CSGroup for the duration of the interaction.
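
The start/end interaction pattern, with its saved state standing in for the deep history, can be sketched in plain JavaScript (our own simplification with hypothetical names, not the generated statechart code):

```javascript
// Sketch of the interaction pattern: on the ..._INTERACTION_START event the
// CSGroup saves its current state and enters an interaction state in which
// MOUSEDOWN is ignored; ..._INTERACTION_END restores the saved state, playing
// the role of the deep history state in the Statechart.
function makeInteractionAwareGroup() {
  var state = "Idle_R";
  var saved = null;
  return {
    getState: function () { return state; },
    dispatch: function (event) {
      if (event === "CSCONNECTIONCURVE_REQUESTS_READY_TO_DROP_INTERACTION_START") {
        saved = state;                             // remember where we were
        state = "InReadyToDropInteraction";
      } else if (event === "CSCONNECTIONCURVE_REQUESTS_READY_TO_DROP_INTERACTION_END") {
        state = saved;                             // "deep history" restore
      } else if (state === "InReadyToDropInteraction" && event === "MOUSEDOWN") {
        // ignored: the CSConnectionCurve captures MOUSEDOWN for now
      } else if (state === "Idle_R" && event === "MOUSEDOWN") {
        state = "ReadyToTranslate_R";
      }
    }
  };
}
```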

Finally, CSConnectionCurve is able to safely handle a MOUSEDOWN event in order to drop its point. This is simply a matter of adding a new transition from Ready To Drop to a new state, Idle.

Exiting the Ready To Drop Interaction with CSGroup will trigger CSCONNECTIONCURVE_REQUESTS_READY_TO_DROP_INTERACTION_END, which will allow the CSGroup to finish the interaction and return to its previous state.

This example is important because it illustrates the pattern that will be required to model even more complex interactions between objects. For even more complex interactions, it may be desirable to encapsulate the interaction in a single class. The interaction, when encapsulated in its own class, would be modelled as an Association Class between two other classes in the Abstract Syntax model, and, like all other classes in the Abstract Syntax, the Association Class would be associated with a statechart that would describe its behaviour. In the behaviour statecharts of the classes linked by the Association Class, when one statechart initiates an interaction, it would instantiate a new interaction object. Both classes would then delegate their events to the new interaction object for the duration of the interaction. When the interaction ends, the interaction object would be destroyed. An example of how this may occur is described in [HarelExecutable1997].

Although the interaction described in this example is not sufficiently complex to require the use of an interaction object, it is clear where this logic would be injected. An interaction object would be instantiated on entering Ready to Drop Interaction with CSGroup, and destroyed on exit. CSConnectionCurve would delegate to the interaction object while in Ready to Drop Interaction with CSGroup, and CSGroup would delegate to the interaction object while in In Ready To Drop Interaction with CSConnectionCurve. Thus, this example illustrates the pattern that will be required to model complex interactions between objects.

SVG, as an XML-based, retained-mode graphics API, has provided a complete platform for the development of AToMPM. There have been a number of features of SVG that we have been able to productively leverage during our development with HlS.

HlS relies heavily on using hierarchical object references and message-passing interfaces for inter-object communication. The DOM Level 3 Event Specification, which SVG imports, provides a default Event Flow. Leveraging this event flow provides a "default behaviour" that does not need to be explicitly specified in each individual Statechart. This has allowed us to make our Statecharts more concise, and thus easier to read and maintain.

Specifically, the DOM Level 3 Event Specification states that a DOM event may be in one of three event phases: the capture, target, and bubble phases. By default, each UI entity in HlS uses the bubble phase to allow events to bubble up to its parents. In this way, it is possible to allow communication from children to parents without needing to explicitly define this action in the statechart.
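
The bubble phase itself can be simulated in a few lines of plain JavaScript. This is a sketch of the propagation rule only, with a hypothetical node/handler representation; in AToMPM the propagation is supplied by the SVG DOM itself:

```javascript
// Sketch of the DOM bubble phase that HlS relies on: an event dispatched on a
// node is delivered to the node itself and then to each of its ancestors in
// turn, so parents observe their children's events "for free".
function dispatchBubbling(node, eventName) {
  var visited = [];
  for (var n = node; n; n = n.parent) {
    visited.push(n.name);
    if (n.handlers && n.handlers[eventName]) {
      n.handlers[eventName]();          // an ancestor sees the child's event
    }
  }
  return visited;                       // propagation path, target first
}
```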

The SVG event model has thus enabled hierarchical event handling for arbitrarily complex, dynamically-created widgets, and has simplified the resulting statechart descriptions of UI behaviour.

Tooling is a very important aspect of any model-driven approach to software engineering. It is therefore important to describe both the tools that we are developing, and tooling as it exists today.

There are primarily two tools that are required to develop HlS: a diagram editor for the HlS formalism (essentially, a Class Diagram editor and a Statecharts editor), and a compiler that is able to compile HlS models to executable code for a particular target language and environment.

At the MSDL, we are currently working to create the diagram editor and compiler tools. It is our vision to allow the development of SVG-based user interfaces using HlS, in an environment which has itself been modelled using HlS.

In order to achieve this, we have used a number of tools to bootstrap the new AToMPM tool. The Statechart models shown in this paper were developed using AToM^3, the MSDL's environment for multi-formalism meta-modelling. The Statecharts were compiled to executable JavaScript using SCCJS, a Statechart-to-JavaScript compiler built on top of SCC, a statechart compiler developed by Thomas Feng at the MSDL.

"Glue code" was written by hand in order to bind Statecharts to JavaScript objects. This involved a pattern of including code that instantiated and initialized a new Statechart model inside of appropriate JavaScript constructor functions. This pattern is illustrated below with the following code snippet:

function createNewCSGroup(){
	/* initialize the representation in DOM */ 
	var newGroup = document.createElementNS(svgNS,"g");
	/* other DOM initialization goes here */


	/* hook up behaviour */
	var newGroupModel = new CSGroupBehaviour_MDL();

	/* set a statechart property on the object */
	newGroup.statechart = newGroupModel;

	return newGroup;
}

In the future, more advanced tooling that supports modelling both Class Diagrams and Statecharts, and the associations between the two, will obviate the need to write this glue code by hand.

There are many problems we still face, on both practical and theoretical levels.

On a practical level, we intend to continue to develop AToMPM. While it is currently still in the prototyping phase, we would like the tool to evolve into a mature multi-paradigm modelling environment, employing an SVG-based UI front-end, and backed by a server-side compiler and full meta-modelling kernel. The tool will be open source, and we would like to reach a point where it is of general interest to the developer community.

With regard to theory, we continue to seek "optimal" formalisms for UI specification and synthesis. We feel that it may be possible to use HlS as an "assembly language" for higher-level specification languages (such as Task Models). In order to develop this, further work must be done to classify UI structural and behavioural patterns.

I would like to acknowledge Denis Dubé, whose Master's thesis [Dube.MSthesis] laid the foundation for this work.