Multi-Cloud Deployment of Airline Scheduling Application Prototype


Year after year, the airline industry faces the challenge of operating ever more cost-effectively. Today's airlines must continuously revise their schedule plans in response to competitor actions or updated sales and marketing plans, while constantly maintaining operational integrity. This makes schedule management a highly complex process.

Cloud computing provides the required flexibility through elasticity and scalability combined with pay-per-use cost models.

This use case is related to the commercial aircraft schedule planning software NetLine/Sched. NetLine/Sched supports all aspects of schedule development and schedule management. It offers powerful and easy-to-use schedule visualization and modification, supports alternative network strategies and schedule scenarios, and measures the profitability impact of alternative scheduling scenarios.

Use case description

One scenario covered by the LHS use case is described by the “Follow-the-Sun pattern”. This term describes a way of working on a specific problem by passing the remaining work from an employee located in one time zone (i.e. geographical region) to another employee located in another region, who continues the work during their regular office hours. This process is repeated until the problem has been solved, ensuring short problem resolution times together with low personnel costs.
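The routing decision behind the Follow-the-Sun pattern can be sketched in a few lines of Python. The region names, UTC offsets and office hours below are purely illustrative assumptions, not part of the actual LHS deployment:

```python
from datetime import datetime, timezone

# Hypothetical Follow-the-Sun routing: pick the region whose office hours
# (09:00-17:00 local time, modelled via a fixed UTC offset) contain "now".
REGIONS = {"APAC": 8, "EMEA": 1, "AMER": -5}  # illustrative UTC offsets only

def active_region(now_utc):
    """Return the region that should currently own the work item."""
    for region, utc_offset in REGIONS.items():
        local_hour = (now_utc.hour + utc_offset) % 24
        if 9 <= local_hour < 17:
            return region
    return "EMEA"  # arbitrary fallback outside all office hours

# e.g. at 02:00 UTC it is 10:00 in the APAC region, so APAC is active:
print(active_region(datetime(2016, 5, 1, 2, tzinfo=timezone.utc)))
```

In a real deployment this decision would also drive which near-edge location serves the user interface, as described below.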

To ensure a responsive system (i.e. the airline scheduling application), the services and software components related to the user interface, together with some (pre-aggregated) data stores, need to be located close to the currently active group of users (a near-edge location), while the other components of the application can remain in a private cloud.

For the scope of the BEACON project, LHS provides a prototype with reduced (business) functionality compared to the existing NetLine/Sched classic software, for several reasons:

  • The current version of NetLine/Sched was not designed as a cloud-ready application that could benefit from a cloud ecosystem in the way we want to demonstrate in BEACON.
  • The focus of the use case is to demonstrate the use of the new federated cloud functionality provided by BEACON to fulfil the application's distribution requirements, not its specific business functionality.

Architectural Description

The prototype of the cloud-enabled airline flight scheduling application used for demonstrating BEACON moves beyond standalone software functionality and is designed to support SaaS operations in cloud environments. In this delivery model, application services are cloud-hosted and offered to customers on demand over the internet, with specific pricing and licensing options.

Server-side performance, high availability and scalability are key non-functional requirements in architecting the prototype. In a multi-tenant environment, users share the same code base, and services are required to react adequately to high traffic loads. Virtualized cloud environments offer scale-on-demand infrastructure: virtual nodes can be incrementally added to or removed from the cluster. Elastic scalability is well suited to maintaining high performance when workload increases, and to freeing up resources when demand decreases. As a SaaS architecture, the application prototype provides central flight-planning services. Common infrastructure services such as user and configuration management, security and an integration bus are developed centrally and are ideally provided in a PaaS environment.

The scheduling application prototype is a multi-layered, distributed web application and provides a scalable platform of loosely coupled, component-oriented building blocks. In contrast to a monolithic approach, the server-side application is decomposed into self-contained, collaborating and independently scalable business components, each capable of running in its own process and interacting via lightweight REST-style communication protocols.
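The idea of a self-contained component exposing a REST-style interface can be illustrated with Python's standard library alone. The endpoint path, payload and component name are hypothetical stand-ins, not the prototype's actual API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-alone "flight query" component running in its own
# process space and exposing a small REST-style endpoint.
class FlightQueryHandler(BaseHTTPRequestHandler):
    FLIGHTS = [{"flight": "LH400", "dep": "FRA", "arr": "JFK"}]  # toy data

    def do_GET(self):
        if self.path == "/flights":
            body = json.dumps(self.FLIGHTS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def start_component(port=0):
    """Start the component on a free port; returns the running server."""
    server = HTTPServer(("127.0.0.1", port), FlightQueryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Any other component can then consume this interface over plain HTTP, without sharing code or a process with it, which is what makes the building blocks independently deployable and scalable.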

The overall architecture combines several major architectural styles: REST, Domain-Driven Design (DDD) with hexagonal layering, Command-Query Responsibility Segregation (CQRS) and Event Sourcing (ES). These styles are complementary and cover non-functional criteria at different architectural levels. In turn, a layered architecture enables us to address various non-functional and conceptual requirements with explicit design decisions at each level.
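The interplay of CQRS and Event Sourcing can be sketched in a few lines of Python. All class and event names here are hypothetical, and the event store is a plain in-memory list rather than a real persistence layer; the point is only the shape: commands go through a write model that appends events, and read models are projections built from the event log:

```python
from collections import defaultdict

class EventStore:
    """Append-only event log with replay for late subscribers (sketch)."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def append(self, event):
        self.events.append(event)
        for handler in self.subscribers:
            handler(event)

    def subscribe(self, handler):
        self.subscribers.append(handler)
        for event in self.events:  # replay history so projections catch up
            handler(event)

class ScheduleWriteModel:
    """Command side: validates commands and emits events, never state."""
    def __init__(self, store):
        self.store = store
        self.scheduled = set()

    def schedule_flight(self, flight_no, day):
        key = (flight_no, day)
        if key in self.scheduled:
            raise ValueError("flight already scheduled")
        self.scheduled.add(key)
        self.store.append({"type": "FlightScheduled",
                           "flight": flight_no, "day": day})

class FlightsPerDayReadModel:
    """Query side: a pre-aggregated projection over the event log."""
    def __init__(self, store):
        self.counts = defaultdict(int)
        store.subscribe(self.apply)

    def apply(self, event):
        if event["type"] == "FlightScheduled":
            self.counts[event["day"]] += 1

    def flights_on(self, day):
        return self.counts[day]
```

Because the read model is rebuilt purely from events, any number of differently shaped projections can be derived from the same log, which is exactly what makes the geographically distributed read models below possible.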

For modules acting as read models, latency is critical, so they are deployed in the datacentre closest to the users to minimize network latency. These read models must therefore be available on as many nodes as needed to provide adequate availability. Besides the read model itself (which serves the queries), these modules comprise a web server for the GUI, a RESTful web API, Kafka messaging and a local database storing pre-aggregated data.
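The replication of read models across regions can be sketched as follows. The region names are illustrative, a plain list stands in for the Kafka topic, and a dictionary stands in for the local database; each replica independently consumes the shared log and answers queries from its own pre-aggregated store:

```python
from collections import defaultdict

class ReadModelReplica:
    """Hypothetical per-region read-model replica (sketch).

    Each replica keeps its own local, pre-aggregated store and polls the
    shared event log, so user queries never cross datacentre boundaries.
    """
    def __init__(self, region):
        self.region = region
        self.flights_per_day = defaultdict(int)  # local "database"
        self.offset = 0                          # position in the event log

    def catch_up(self, event_log):
        """Consume events appended since the last poll (stands in for Kafka)."""
        for event in event_log[self.offset:]:
            if event["type"] == "FlightScheduled":
                self.flights_per_day[event["day"]] += 1
        self.offset = len(event_log)

    def query(self, day):
        return self.flights_per_day[day]
```

Adding a region is then just starting another replica and letting it catch up from offset zero, with no coordination between replicas.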


Write models require extremely high availability to ensure data integrity. This can be achieved by using multiple datacentres in parallel, so that data loss in case of a single datacentre outage is kept as small as possible. The write model implements the business logic and is supported by Kafka messaging to publish and persist events, a RESTful web API to access the write model, and a local database.
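The durability argument can be made concrete with a small sketch. Here plain Python lists stand in for the per-datacentre event logs, and the write is only acknowledged once every replica holds the event, so losing any one datacentre loses no acknowledged data (the class and its behaviour are illustrative, not the prototype's actual replication mechanism):

```python
class ReplicatedEventLog:
    """Sketch: synchronously replicate each event to several datacentre logs."""

    def __init__(self, n_datacentres=2):
        self.logs = [[] for _ in range(n_datacentres)]

    def append(self, event):
        # Acknowledge only after every replica has stored the event.
        for log in self.logs:
            log.append(event)
        return len(self.logs[0]) - 1  # offset of the committed event

    def recover(self, failed_datacentre):
        """After an outage, any surviving replica holds the full log."""
        survivors = [log for i, log in enumerate(self.logs)
                     if i != failed_datacentre]
        return survivors[0]
```

Synchronous replication trades write latency for durability; a real deployment (e.g. Kafka with replication across clusters) makes the same trade-off via acknowledgement settings.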


The use case application is containerized using Docker. Each component, as well as infrastructure components such as Kafka, the database, the web server and the central gateway, runs in its own container. To support the installation of these components and to ease the process of VM image creation, a set of bash scripts is provided.


The BEACON framework makes it much easier for us to set up our use case application in such a cross-cloud scenario. By employing the cloud federation capabilities of BEACON, the use case does not have to deal with all the low-level configuration needed to connect the application components across the two cloud testbeds we use here: the OpenNebula installation and the OpenStack testbed of our project partners ONS and UME.

Therefore we were able to concentrate on the application specifics themselves, such as setting up Kafka mirroring, testing the scaling behaviour, and fine-tuning the architecture of such a decoupled system, e.g. having (many) separate read models connected to the event store and the messaging component of the write model.
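The essence of Kafka mirroring is a consumer in one cluster republishing everything into another cluster, so that read models can subscribe locally instead of across the federation link. A minimal sketch of that idea, with plain lists standing in for the source and target topics (a real setup would use Kafka's MirrorMaker tooling rather than this function):

```python
def mirror(source_topic, target_topic, offset=0):
    """Copy events appended to source_topic since `offset` into target_topic.

    Returns the new offset, so repeated calls copy only the delta --
    mirroring is just an incremental consume-and-republish loop.
    """
    new_events = source_topic[offset:]
    target_topic.extend(new_events)
    return offset + len(new_events)
```

Running this periodically against the write model's event topic keeps a remote cluster's copy eventually consistent with the source, which is exactly the behaviour the federated read models rely on.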