BEACON Represented at Poster Session at SummerSOC 2017, Crete

Craig Sheridan, Managing Director of industrial partner flexiOPS, and Massimo Villari, Associate Professor at the University of Messina, represented BEACON at the poster session at this year's Symposium on Service Oriented Computing on Crete. The well-established summer school proved to be a great opportunity for generating interest in, and feedback on, not only the concepts but also the results of BEACON.

1st IEEE International Workshop on Federated Networking, Clouds and IoT

Several BEACON partners presented papers at FENCI 2017 (the 1st IEEE International Workshop on Federated Networking, Clouds and IoT), co-located with the SmartComp 2017 conference in Hong Kong.

Dario Bruneo from the University of Messina presented the paper “Orchestrated multi-cloud application deployment in OpenStack with TOSCA”. The paper explored using the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) to deploy applications and virtual network functions in an OpenStack-based federated cloud.

Anna Levin presented the paper “Network Monitoring in Federated Cloud Environment”. The paper explores using Skydive, an open-source real-time network topology and protocol analyzer, to monitor federated cloud networks.

Philippe Massonet presented the paper “End-to-end Security Architecture for Federated Cloud and IoT Networks”, a collaboration between CETIC, the University of Messina and IBM Research in Israel. The paper proposes connecting IoT devices and cloud services through a federated cloud network, and then a security architecture that protects this network using network function virtualisation (NFV) and service function chaining.

Philippe Massonet also presented the paper “Deployment-time multi-cloud application security” at the FENCI 2017 workshop. The paper explores protecting applications deployed in a federated cloud network by performing vulnerability analysis when virtual machines are deployed and by deploying application-level firewalls.

This work has been supported by the BEACON project, grant agreement number 644048, funded by the European Union's Horizon 2020 Programme under topic ICT-07-2014.

BEACON Plenary Meeting, Messina

The BEACON Consortium met for the last time before the final review in what turned out to be a highly productive and energising two-day event. The meeting was hosted by the University of Messina at its Faculty of Engineering in Sicily, and allowed partners to consolidate code, bring demo materials together and present the work done in each of the work packages.

Confidence is high throughout the consortium, and the overall project looks in good shape for the final phase. The review is scheduled to take place in October.

NET FUTURES 2017

Internet, the economy and society in 2027

We are going through a technological revolution that will fundamentally change the way we live, work, and relate to one another. This transformation, led by the Internet, will be unlike anything humans have experienced before in its scale, scope, and complexity. We do not know exactly where this 'internetisation' will lead us: what will our society look like in 2027?

One thing we do know, however: people simply expect a lot more from the Internet, in terms of quality and reach, and in terms of security and privacy; an Internet that is inclusive, supports openness and diversity, and responds to the needs of individuals.

We feel that not all European policy actors share this sense of urgency for action. However, it is imperative to address these challenges, or Europe's voice on the future of the Internet will disappear.

The 2017 edition of NET FUTURES will serve as a wake-up call for policy makers and technologists alike, for civil society and for the young, whose future we will influence. It will be the place for deep-dive conversations and learning, right at a time when Europe is on the brink of entering the next industrial revolution: the Net.

The BEACON Consortium are proud to collaborate in and attend this concertation meeting of H2020 projects.

Automated HA across Datacenters

Eduardo Huedo Cuesta - Universidad Complutense de Madrid

Availability is calculated as the percentage of time an application and its services are available over a given time interval. High Availability (HA) is achieved when the service downtime is no more than about 5.26 minutes per year, i.e. an availability of at least 99.999%. The cloud with the best uptime in 2015 was Amazon Web Services, with a downtime of 150 minutes, far from this so-called “five nines” availability. To achieve “five nines” availability or better, it may be necessary to deploy HA services using a combination of two availability zones (multi-zone HA), which are isolated locations within a cloud infrastructure, or even two different cloud providers (multi-cloud HA).
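
As a quick sanity check on these figures, here is a minimal, illustrative Python snippet that converts an availability target into a yearly downtime budget and back (the 150-minute figure above works out to roughly 99.97%):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes in an average year

def downtime_budget(availability):
    """Maximum yearly downtime (minutes) allowed for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

def availability_for(downtime_minutes):
    """Availability achieved with a given yearly downtime (minutes)."""
    return 1.0 - downtime_minutes / MINUTES_PER_YEAR

print(f"'five nines' budget: {downtime_budget(0.99999):.2f} min/year")  # ~5.26
print(f"150 min of downtime: {availability_for(150):.4%}")              # ~99.9715%
```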


HA of a system is achieved by incorporating specific features to reduce service downtime, typically redundancy for failover and replication for load balancing. These techniques can be incorporated into services, such as clusters or multi-tier applications. In this post, we explore and compare different deployments of HA clusters using single- and multi-cloud setups. First, we discuss a sample multi-tier service and give some details about the deployment configuration. Then, we analyze the advantages of the multi-cloud approach regarding the availability of the cluster.

Single-cloud HA

To illustrate these deployments, we have chosen a paradigmatic multi-tier application that showcases the main characteristics and requirements of an HA deployment. In our case, we will use a classical web application consisting of the following components:

  • The load balancer tier, which distributes the traffic over the different application servers. The load balancers must themselves be deployed in an HA configuration, requiring a floating IP address associated with the fully qualified domain name of the application and shared across the load balancer cluster. Usually, this layer is implemented with a combination of TCP/HTTP load balancing (e.g. with HAProxy) and VRRP failover (e.g. with Keepalived).

  • The web or application tier, consisting of several web servers that expose the application's HTTP interface. Each web server spawns one or more worker processes to handle the actual requests from the clients.

  • The cache tier, consisting of in-memory cache nodes providing read-only data to speed up database access. This tier is usually included to scale out the application, and is sometimes co-located with the application servers. A common setup is a distributed hash table (e.g. memcached), which requires the clients (the workers in the web tier) to implement a consistent hashing algorithm so that cache nodes can be added or removed; a sketch of such an algorithm is shown after this list.

  • The data tier, consisting of one or more database servers that provide data access and persistence mechanisms. To provide HA in this tier, the database is replicated to one or more additional database servers. The database servers adopt a master-master replication mode, so that write updates can be directed to the backup database servers in case of failure.
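
To make the cache tier's requirement concrete, below is a minimal Python sketch of a consistent hash ring. This is illustrative only, not an actual memcached client implementation (real client libraries use tuned variants of this idea), and the node names such as cache-a:11211 are hypothetical:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent hash ring, as used by memcached client libraries.

    Each cache node is mapped to several points ("virtual nodes") on a
    ring; a key is served by the first node found clockwise from the
    key's own position. Adding or removing a node only remaps the keys
    in that node's arcs, instead of rehashing the whole key space.
    """

    def __init__(self, nodes, vnodes=100):
        self._vnodes = vnodes
        self._ring = []  # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self._vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def node_for(self, key):
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a:11211", "cache-b:11211"])
print(ring.node_for("session:42"))  # stable mapping to one node
ring.add("cache-c:11211")           # only a fraction of the keys move
```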


The following figure shows a single-cloud deployment of this web application. To improve availability, the application components are replicated in two different zones of the same cloud. These zones can be seen as two separate physical clusters within the same cloud infrastructure, so, if one cluster fails, the service is not interrupted. Note that component replication provides two benefits, namely:

  • Scale out: the application and cache nodes can scale horizontally to increase the overall capacity of the web service. Replicating these nodes also provides those tiers with the required HA functionality.

  • HA: the load balancer and database tiers include active-passive nodes to provide pure HA functionality. Note that the main workload is processed by the other two tiers.

[Figure: single-cloud deployment of the web application across two availability zones]

This kind of HA service should be deployed taking into account that failures occur at different levels: VM instance, physical server, and availability zone. Therefore, the different service components must be deployed with specific placement constraints. The use of affinity rules is traditionally considered an effective mechanism to implement HA strategies. However, traditional affinity rules cannot deal with complex multi-tier services and different availability zones. So, one of the main challenges in the deployment of multi-zone HA services is the orchestration of the service considering both zone-based and role-based placement constraints (i.e. constraints for groups of related VMs, or roles). To address this challenge, we are adding new affinity mechanisms and placement heuristics to OpenNebula for multi-zone scenarios; a simplified sketch of such a placement heuristic is shown below.
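
As an illustration of the idea, the following Python snippet places replicated roles under a role-based anti-affinity rule that forbids two replicas of the same role from sharing a zone. This is a simplified, hypothetical sketch of a greedy placement heuristic, not OpenNebula's actual scheduler code:

```python
# Hypothetical sketch of a zone- and role-aware placement heuristic,
# not OpenNebula's actual scheduler: each VM carries a role, and an
# anti-affinity rule forbids replicas of a role from sharing a zone.

VMS = [
    {"name": "lb-1", "role": "lb"},
    {"name": "lb-2", "role": "lb"},
    {"name": "db-1", "role": "db"},
    {"name": "db-2", "role": "db"},
]
ZONES = ["zone-a", "zone-b"]
ANTI_AFFINE_ROLES = {"lb", "db"}  # role-based placement constraints

def place(vms, zones):
    placement = {}
    used = set()  # (role, zone) pairs already occupied
    for vm in vms:
        for zone in zones:
            key = (vm["role"], zone)
            if vm["role"] in ANTI_AFFINE_ROLES and key in used:
                continue  # this zone already hosts the role; try the next
            placement[vm["name"]] = zone
            used.add(key)
            break
        else:
            raise RuntimeError(f"no feasible zone for {vm['name']}")
    return placement

print(place(VMS, ZONES))
# {'lb-1': 'zone-a', 'lb-2': 'zone-b', 'db-1': 'zone-a', 'db-2': 'zone-b'}
```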

Multi-cloud HA

The following figure shows a multi-cloud deployment of the same web application. In this case, the application components are replicated in two different clouds, so that, in case of a cloud outage, service continuity is guaranteed. As in the single-cloud deployment, the application is also deployed in two different availability zones to increase the fault-tolerance capabilities of the service.

[Figure: multi-cloud deployment of the web application across two clouds]

The different service components are distributed or replicated across both clouds, so each cloud scheduler should receive the description of the service components that must be deployed locally, along with their location constraints (affinity rules); each cloud then makes its own independent placement decisions according to these constraints. So, regarding the orchestration problem, the multi-cloud scenario poses no new challenges beyond those of the multi-zone case.


However, the multi-cloud scenario does raise additional challenges. The first is access to the service, which is performed through the Internet using a global load balancing and failover mechanism. For example, the Domain Name System (DNS) can include multiple address records for the service to distribute the client calls across the load balancers. When a load balancer fails, web clients will retry using the next address returned by the DNS servers. Since a client usually picks the first address provided, the sequence of addresses is permuted to provide round robin, or it can be sorted following some distance metric. Health checks can also be used to remove failing services. This is the simplest and probably most effective solution; more advanced techniques rely on anycast networks or global networks of reverse proxies. A minimal client-side failover loop is sketched below.
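
For illustration, here is a minimal Python sketch of this client-side behaviour, using the standard socket module to iterate over all the addresses DNS returns for a service name (the hostname app.example.org is hypothetical):

```python
import socket

def connect_with_failover(hostname, port=80, timeout=3.0):
    """Try each address returned by DNS in order until one accepts.

    Illustrative sketch of DNS-based failover: the resolver returns
    all address records for the service; if the first load balancer
    is down, the client simply moves on to the next address.
    """
    last_error = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock  # first reachable load balancer wins
        except OSError as exc:
            last_error = exc
            sock.close()
    raise ConnectionError(f"all addresses for {hostname} failed") from last_error

# sock = connect_with_failover("app.example.org")  # hypothetical FQDN
```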


Second, to interconnect the different elements within a tier, it is necessary to configure several private networks. In the single-cloud deployment, all the networks are internal, so they can be configured as private VLANs within the cloud. However, in the multi-cloud deployment, the data tier requires the configuration of a cross-site private network for multi-master database replication. This may involve multicast UDP monitoring traffic to promote a slave in case of master failure, so it may be necessary to provide L2 connectivity at the virtual network level. For this, the BEACON framework for federated cloud networking is used.


Summing up, there are three main challenges for multi-cloud HA services, namely: multi-zone service orchestration with placement constraints, global load balancing and failover, and cross-cloud private networking.

BEACON Meeting in Madrid

The team met recently in Madrid at the OpenNebula offices to discuss the final phase of the project. The team are happy to say that everything is on track, and they look forward to the OpenStack Summit in Boston in May and also the BEACON workshop (FENCI 2017), which is part of the SmartComp conference in Hong Kong.


The call for papers for this workshop is still open, the deadline being April 9th.  
See more here: http://fenci2017.unime.it/