Active Object Computing
Most of the
systems that application programmers deal with are
reactive (event-driven). Such a system may perform some actions (or none), and possibly change its state, as a
reaction to events, external (received from the
environment) or internal (generated by its own previous
actions). This reaction can vary depending on the
current state of the system and, possibly, even on state
history. It can also send events/messages to the
environment as a part of the reaction. It’s natural
then, while implementing such systems, to decompose them
into subsystems that are reactive themselves. This leads
us to using active objects as the most natural basic
modules of a software application. In addition to
regular data and methods/actions of the traditional OOP
objects, active objects possess a fundamentally
different quality: they are endowed with their own
thread of execution or process, i.e. they are “alive”.
Actions can be run and events can happen in an active
object even in the absence of any communication with its
environment. Thus, active objects allow much better and
more natural modeling of real-life objects than
traditional OOP objects.
Consider a
Husband/Wife “system” under the following
circumstances: the Wife is cooking dinner and discovers
that she is missing some ingredient—an event for Wife,
but not for Husband, who is watching football and has no
knowledge about this misfortune. She asks her Husband
(sends a message) to go to a store and buy that stuff.
Once received, the message now becomes an (unpleasant) event for the Husband. What’s going to happen next? The Husband might refuse to go because he is busy (sending the corresponding message to the Wife). But, most likely, he
will not dare to argue and will go to the store. Unless the Husband is completely clueless, the Wife will not have to explain to him how many steps to take, in which direction, or what to do if he sees a red light when crossing a street (all of that is internal behavior information of the Husband object), right? Moreover, the
Wife can be doing something else while the Husband is
out (concurrency).
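To make the picture concrete, here is a minimal C++ sketch of one possible shape of an active object: an event queue plus a private thread that consumes it. The class, event names, and the "need ingredient" message are illustrative assumptions for this example, not taken from any particular framework.

    #include <condition_variable>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <utility>

    struct Event { std::string name; };

    // Each active object owns an event queue and a thread; other objects
    // interact with it only by posting events, never by calling its
    // internal actions directly.
    class ActiveObject {
    public:
        explicit ActiveObject(std::function<void(const Event&)> handler)
            : handler_(std::move(handler)) {}

        void start() { worker_ = std::thread([this] { run(); }); }
        void stop()  { post({"quit"}); worker_.join(); }

        // Asynchronous send: returns immediately; the event is processed
        // later on the receiving object's own thread.
        void post(Event e) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(e)); }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !q_.empty(); });
                Event e = std::move(q_.front()); q_.pop();
                lock.unlock();
                if (e.name == "quit") return;
                handler_(e);   // the object's reaction, run in its own thread
            }
        }

        std::function<void(const Event&)> handler_;
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<Event> q_;
        std::thread worker_;
    };

    int main() {
        // The Husband reacts to the request on his own thread, while the
        // Wife's thread is free to keep cooking (concurrency).
        ActiveObject husband([](const Event& e) {
            if (e.name == "need ingredient") std::cout << "going to the store\n";
        });
        husband.start();
        husband.post({"need ingredient"});   // the Wife's message
        husband.stop();
    }

The important point is that post() returns immediately: the sender never executes the receiver’s actions, it only delivers an event.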
However, what
should be the internal mechanism driving active objects?
How do we describe and code the object’s behavior
information?
Traditional Finite State Machine
Model Limitations
As it turns out, complex behavior
cannot easily be described by simple, “flat”
state-transition diagrams (finite state machines or FSMs).
The FSM model works well for simple, state-driven
systems, but doesn’t “scale up” to larger systems. The
lack of scalability in FSMs stems from two fundamental
problems: the flatness of the state model and its lack
of support for concurrency [Douglas 01].
Flat state machines do not provide
the means for constructing layers of abstraction.
Consider modeling a car, which is definitely a complex
system, but on the highest level of abstraction can be
described as either “moving” or “stopped”. On a much
lower level of abstraction and, hence, a much higher
level of detail, the state of the car can be described as
a combination of states of all the cylinders: whether
each of them is currently being
filled with the gas/air mixture, compressing it,
exploding, or emptying. Clearly, for some events,
like “the driver has pushed the brake pedal”, the
reaction of the system (the car must stop) will be the
same regardless of the states of particular cylinders,
while for other events, like “valve 3 opened” the
reaction may depend on the state of a particular
cylinder. If we need our model to provide adequate
reactions to both kinds of events, the flat model
doesn’t leave us a choice other than describing the car
in terms of cylinder states. It means we need to
explicitly assign a reaction to the event “the driver has pushed the brake pedal” in every possible cylinder-state combination separately! The traditional state machine formalism thus forces repetition. Such “modeling” results
in an unstructured, unrealistic and chaotic state
diagram. Classical FSMs can easily become unmanageable,
even for moderately involved systems. The state “moving”
in the car example above obviously contains all the
states for all the cylinders. So, “moving” and “stopped”
should not be considered at the same level of abstraction/detail as, say, “emptying cylinder 1”.
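To make the repetition visible, here is a small sketch of a flat encoding of the car. The names are hypothetical, and only two of the many cylinder-state combinations are shown.

    // A flat model must enumerate cylinder-state combinations, so the
    // single reaction to "brake pedal pushed" has to be spelled out
    // once per combination.
    enum class CarState {
        C1Filling_C2Compressing,
        C1Compressing_C2Exploding,
        // ... every other cylinder-state combination ...
        Stopped
    };

    CarState onBrakePedalPushed(CarState s) {
        switch (s) {
        case CarState::C1Filling_C2Compressing:   return CarState::Stopped;
        case CarState::C1Compressing_C2Exploding: return CarState::Stopped;
        // ... the same "stop" reaction repeated for each remaining
        //     combination, because the flat model cannot state it once ...
        default:                                  return s;
        }
    }

    int main() {
        CarState s = CarState::C1Filling_C2Compressing;
        s = onBrakePedalPushed(s);   // -> Stopped
    }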
Another serious problem with
traditional state machines is the lack of support for
concurrency. This leads to a combinatorial explosion in
the number of states to model. Consider modeling a
traffic light. It can be thought of as being Green, Yellow
or Red. Imagine that you also need to take into account
that it can run off a battery or from mains. To describe
this system with a traditional FSM we will need the
following states: Green-Battery, Yellow-Battery,
Red-Battery, Green-Mains, Yellow-Mains, Red-Mains. The
fact that the light is Green is obviously independent of
whether it is running from battery or mains. However,
since traditional FSMs have no notion of independence,
we must combine the independent states together. This is
called the “combinatorial state explosion” because the
modeling of multiple concurrent state sets requires the
multiplication of the number of states in each to model
all conditions
[Douglas 01].
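As an illustration, a flat encoding of this traffic light might look like the following sketch (the event names are assumptions made for the example). Color logic and power-source logic are independent, yet every transition must be written out per combined state: 3 × 2 = 6 states instead of 3 + 2.

    enum class State {
        GreenBattery, YellowBattery, RedBattery,
        GreenMains,   YellowMains,   RedMains
    };
    enum class Event { TimerExpired, MainsLost, MainsRestored };

    State next(State s, Event e) {
        switch (s) {
        case State::GreenMains:
            if (e == Event::TimerExpired)  return State::YellowMains;
            if (e == Event::MainsLost)     return State::GreenBattery;
            break;
        case State::GreenBattery:
            if (e == Event::TimerExpired)  return State::YellowBattery;
            if (e == Event::MainsRestored) return State::GreenMains;
            break;
        // ... four more cases repeating the same color transitions for
        //     Yellow and Red, once for each power source ...
        default:
            break;
        }
        return s;
    }

    int main() {
        State s = State::GreenMains;
        s = next(s, Event::MainsLost);     // GreenMains  -> GreenBattery
        s = next(s, Event::TimerExpired);  // GreenBattery -> YellowBattery
    }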
Hierarchical State Machines
(Statecharts)
Theoretical
foundations on how to construct software for non-trivial
event-driven systems have been around for more than 20
years! David Harel invented statecharts, or Hierarchical State Machines (HSMs), in 1983 as a powerful way of
specifying complex reactive systems
[Harel 87].
HSMs allow
nesting states within states. States that contain other
states are called composite states; conversely, states
without internal structure are called simple states. If
a system is in a nested state (called a substate), it is
also (implicitly) in the surrounding state (the
superstate). Structuring of the state space in this
manner provides the ability to consider the system at
different levels of abstraction which, as is widely
known, is a powerful way of coping with complexity
[Samek
02].
Composite states
not only hide, but also reduce, complexity through the
reuse of behavior. An HSM will attempt to handle any event in the context of the substate (which is at the lower level of the hierarchy). However, if the substate does not prescribe how to handle the event, the event is automatically handled in the higher-level context of the superstate. This is what is meant by the system being in the substate as well as in the superstate. Of course, state
nesting is not limited to one level only, and the simple
rule of event processing applies recursively to any
level of nesting. In addition, the semantics of state
nesting allows substates to define only differences in
behavior from the superstates—behavioral inheritance. If
a reaction (transition) for some event is defined for a
superstate, it is automatically defined for all its
substates (unless it is explicitly overridden in a
substate). Thus, the HSM formalism relaxes the requirement of
classical state machines to define explicitly the
transitions for every possible state/event combination,
economizing on the description of the system’s behavior.
Avoiding repetitions allows HSMs to grow proportionally
to system complexity as opposed to the explosive
increase in states and transitions typical for
traditional FSMs
[Samek 02].
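A minimal sketch of this event-bubbling rule, using the car example and hypothetical types rather than any real statechart framework, might look like this: each state knows its superstate, and an event that a substate does not consume is retried one level up.

    #include <cstdio>
    #include <cstring>

    using Handler = bool (*)(const char* event);  // returns true if consumed

    struct State {
        const char* name;
        const State* super;   // nullptr at the top of the hierarchy
        Handler handler;
    };

    bool movingHandler(const char* event) {
        if (std::strcmp(event, "BrakePressed") == 0) {
            std::puts("stopping the car");
            return true;
        }
        return false;                      // not handled at this level
    }

    bool cylinder3Handler(const char* event) {
        if (std::strcmp(event, "Valve3Opened") == 0) {
            std::puts("filling cylinder 3");
            return true;
        }
        return false;                      // defer to the superstate
    }

    const State moving    = {"Moving", nullptr, movingHandler};
    const State cylinder3 = {"Cylinder3Filling", &moving, cylinder3Handler};

    // Dispatch: try the current (sub)state first, then walk up the chain.
    void dispatch(const State* s, const char* event) {
        for (; s != nullptr; s = s->super)
            if (s->handler(event)) return;
        std::puts("event ignored");        // no state in the chain handled it
    }

    int main() {
        // Cylinder3Filling says nothing about braking, yet the event
        // bubbles up and is handled once, in the Moving superstate.
        dispatch(&cylinder3, "BrakePressed");   // prints "stopping the car"
        dispatch(&cylinder3, "Valve3Opened");   // handled by the substate
    }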
Statecharts also
introduce state entry and exit actions. The normal order
of execution is that entry actions of the superstate are
performed first, followed by the entry actions of the
nested state. Exit actions are performed in reverse
order. Since states may be nested arbitrarily deeply, these rules apply recursively
[Douglas 01].
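The ordering follows directly from the state hierarchy; a small sketch (again with hypothetical types) is:

    #include <cstdio>

    struct State {
        const char* name;
        const State* super;   // nullptr at the root of the hierarchy
    };

    void enter(const State* s) {
        if (s == nullptr) return;
        enter(s->super);                          // superstate entry first
        std::printf("enter %s\n", s->name);
    }

    void leave(const State* s) {
        while (s != nullptr) {                    // innermost exit first
            std::printf("exit %s\n", s->name);
            s = s->super;
        }
    }

    int main() {
        const State moving    = {"Moving", nullptr};
        const State cylinder3 = {"Cylinder3Filling", &moving};
        enter(&cylinder3);   // prints: enter Moving, enter Cylinder3Filling
        leave(&cylinder3);   // prints: exit Cylinder3Filling, exit Moving
    }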
The concept of
orthogonal regions (AND states)
[Harel 87] is another
cornerstone of statecharts. The practical realization of this concept is rather heavyweight, though. Instead of AND states, concurrency within a system can easily be modeled by breaking it into concurrently running active objects/subsystems that themselves have no internal concurrency, i.e. have a hierarchy of OR states only and can be in only one state at a time.
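As a sketch of this alternative (with the thread and event-queue machinery of the earlier active-object example omitted for brevity), the traffic light could be split into two independent objects whose state spaces simply add up, 3 + 2 states, instead of multiplying into the six combined states of the flat model.

    enum class LightState { Green, Yellow, Red };
    enum class PowerState { Mains, Battery };

    struct LightObject {                   // active object #1: color only
        LightState state = LightState::Red;
        void onTimerExpired() {
            switch (state) {
            case LightState::Green:  state = LightState::Yellow; break;
            case LightState::Yellow: state = LightState::Red;    break;
            case LightState::Red:    state = LightState::Green;  break;
            }
        }
    };

    struct PowerObject {                   // active object #2: power source only
        PowerState state = PowerState::Mains;
        void onMainsLost()     { state = PowerState::Battery; }
        void onMainsRestored() { state = PowerState::Mains; }
    };

    int main() {
        LightObject light;        // each would run on its own thread in practice
        PowerObject power;
        light.onTimerExpired();   // Red -> Green
        power.onMainsLost();      // Mains -> Battery; the color is unaffected
    }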