SCM Architecture

SCM has four major components:

  • Scyld Cloud Portal
  • Scyld Cloud Controller
  • Scyld Cloud Auth
  • Scyld Cloud Accountant

Each of these is a web application written in Python using the Pyramid web framework, and served by Apache with mod_wsgi. The Cloud Controller, Cloud Auth, and Cloud Accountant all present an application programming interface (API) to be consumed by the Cloud Portal or by individual clients. The Cloud Portal presents a secure web-based interface (HTTPS) for both users and administrators.
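As an illustration of this serving arrangement, a Pyramid application running under Apache with mod_wsgi is commonly wired up with a VirtualHost stanza along the following lines. The host name, paths, and process settings here are assumptions for illustration, not SCM's shipped configuration:

```apache
<VirtualHost *:443>
    ServerName portal.example.com
    SSLEngine on

    # Run the Pyramid app in its own daemon process group
    WSGIDaemonProcess cloudportal processes=2 threads=15
    WSGIScriptAlias / /opt/scyld/cloudportal/app.wsgi

    <Directory /opt/scyld/cloudportal>
        WSGIProcessGroup cloudportal
        Require all granted
    </Directory>
</VirtualHost>
```

The WSGI script named by WSGIScriptAlias is a small Python file that builds and exposes the Pyramid application object as `application`, which mod_wsgi then serves.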

Logical Architecture

The following diagram shows the logical architecture of SCM. Each Scyld Cloud application requires access to a MySQL database. The Cloud Portal is the “front end” to the other Scyld Cloud applications, and users typically interact with both the Cloud Portal and the VMs to which they have access.

The Cloud Controller interfaces with OpenStack, storage clusters, and virtual machines. It authenticates all API requests with Cloud Auth.
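The general pattern of authenticating every API request against a shared secret can be sketched as below. This is a minimal illustration of HMAC-based request signing, not SCM's actual Cloud Auth protocol; the secret, payload format, and function names are all assumptions:

```python
import hashlib
import hmac

# Hypothetical shared secret between the API service and the auth service.
SHARED_SECRET = b"example-secret"

def sign(payload: str) -> str:
    """Return a hex HMAC-SHA256 signature for a request payload."""
    return hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()

def is_valid(payload: str, signature: str) -> bool:
    """Check a presented signature using a constant-time comparison."""
    return hmac.compare_digest(sign(payload), signature)

token = sign("GET /v1/instances user=alice")
print(is_valid("GET /v1/instances user=alice", token))    # True
print(is_valid("GET /v1/instances user=mallory", token))  # False
```

A real deployment would also bind the signature to a timestamp or nonce to prevent replay; this sketch shows only the validation step.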


Physical Architecture

Single Controller

When SCM is deployed with a single controller node, all SCM components and all OpenStack controller services run on that node. In this scenario, the controller is a single point of failure: if it becomes unavailable, access to both the portal and the virtual machines is lost. The controller also performs network address translation (NAT) for the VMs, providing access to the VMs from external sources.
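The NAT performed by the controller can be illustrated with rules of the following shape. The interface name, VM subnet, addresses, and port numbers are assumptions for illustration, not SCM's actual rules:

```shell
# Masquerade outbound VM traffic onto the controller's external interface
iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

# Forward an external port to a specific VM (e.g., SSH to 10.10.0.5)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2205 \
         -j DNAT --to-destination 10.10.0.5:22
```

The first rule gives VMs outbound connectivity; DNAT rules like the second are what make individual VMs reachable from outside the cluster.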


Dual Controller

When SCM is deployed with dual controllers, the controllers form a high-availability (HA) pair. HAProxy and keepalived are used to manage a virtual IP (VIP) address that can be applied to either controller. HAProxy then load balances incoming requests across the SCM applications and the OpenStack APIs. If the controller holding the VIP fails, the VIP migrates to the other node. HAProxy is configured to perform health checks on each of the services it load balances, and automatically removes any failed service from its pool.
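The load-balancing and health-check behavior described above corresponds to an HAProxy configuration of roughly the following shape, shown here for a single service. The VIP, backend addresses, ports, and health-check path are assumptions, not SCM's shipped configuration:

```
# Frontend bound to the keepalived-managed VIP
frontend portal_https
    bind 10.0.0.100:443
    default_backend portal_servers

# One backend server per controller; 'check' enables health checking,
# so a failed node is automatically removed from the pool
backend portal_servers
    balance roundrobin
    option httpchk GET /healthcheck
    server ctrl1 10.0.0.11:443 check
    server ctrl2 10.0.0.12:443 check
```

A similar frontend/backend pair would exist for each SCM application and OpenStack API endpoint being balanced.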


Additionally, the MySQL database is run in an “active/active” configuration using Galera. This means that reads and writes can be directed to either MySQL node, with changes replicated synchronously between the nodes.
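An active/active Galera pair is configured through wsrep settings in the MySQL configuration, along these lines. The cluster name, node addresses, provider path, and SST method here are assumptions for illustration:

```ini
[mysqld]
# Settings Galera requires of MySQL itself
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# Galera replication settings (one stanza per node; addresses vary)
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name=scm_cluster
wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.12
wsrep_node_address=10.0.0.11
wsrep_sst_method=rsync
```

Note that a two-node Galera cluster cannot by itself maintain quorum if one node fails; deployments of this shape typically add a Galera arbitrator (garbd) on a third host to break ties.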