
2.5 Artifacts Across the Cloud Provisioning Chain

Our report on the Cloud Accountability Conceptual Framework [1] describes several cloud scenarios that serve as a basis for discussion. Provisioning chains in which a single service offering depends on several SaaS, PaaS and IaaS providers, operated by a mix of small and large IT organisations, are common. From the point of view of the cloud customer, such a provisioning chain should behave as a single, coherent service whose internal integration is invisible.

The exchange of the accountability artifacts described in section 2.4, at the various phases of the service lifecycle, enables the establishment and continuous evaluation of accountability between any two accountable cloud actors directly engaged in a service relationship. This model extends to cloud service provisioning chains of arbitrary length and branching complexity, with each actor exchanging accountability artifacts with its neighbours. Figure 9 illustrates how this pairwise exchange of accountability artifacts can establish accountability for the agreed context across the entirety of a given cloud service provisioning chain.

Figure 9: A model for end-to-end accountability in cloud service provisioning chains.
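The pairwise-exchange model can be sketched in a few lines of code. This is a minimal illustration, not part of the framework itself: the actor names, the artifact string, and the transform step are all assumptions made for the example. Each actor receives an objective artifact from its upstream neighbour, adapts it to its own role, and forwards it downstream, so that the agreed context propagates hop by hop across the whole chain.

```python
# Illustrative sketch (all names are assumptions): an "objective" artifact
# propagating downstream through a provisioning chain, one pairwise
# exchange at a time, being transformed at each hop.

chain = ["cloud-customer", "saas-provider", "paas-provider", "iaas-provider"]

def propagate(chain, objective, transform):
    """Walk the chain pairwise; `transform` models how each receiving
    actor adapts the artifact before it is forwarded further downstream."""
    exchanges = []
    for upstream, downstream in zip(chain, chain[1:]):
        objective = transform(downstream, objective)
        exchanges.append((upstream, downstream, objective))
    return exchanges

# Trivial transform: tag the artifact with the scope of the receiving actor.
result = propagate(chain, "store-eu-only", lambda actor, o: f"{o}@{actor}")
for up, down, artifact in result:
    print(f"{up} -> {down}: {artifact}")
```

The same loop run in the opposite direction over the chain models the upstream flow of assurance artifacts.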

This flow of information is not a simple store-and-forward relay: artifacts are seldom transmitted unchanged across the provisioning chain. At each stage, artifacts are transformed according to the roles each actor plays and the distribution of data processing functions across the provisioning chain.

To illustrate this point for artifacts flowing downstream (cf. objectives in Figure 9), consider policies which specify limitations on where a data object classified as personal data can be stored and processed. These policies may initially be associated with the object instance as metadata, stored in a policy-aware object-oriented database, and processed by a policy-aware application. Further down the provisioning chain, however, a SaaS provider may elect to use a third-party IaaS provider for the deployment of its application. In this case, the policies associated with all the personal data instances being processed need to be aggregated into a policy which relates to the SaaS-IaaS interface, e.g. virtual machines and storage, and which specifies the allowable locations for servers and disk farms. Alternative models are possible, such as dynamically distributing processing jobs based on the policies associated with the personal data (matching data policies to IaaS capabilities), or statically selecting locations which comply with all personal data location policies the application will ever process (using static policies). This simple example demonstrates that adopting interoperability standards covering the specification and exchange of artifacts, itself a complex and challenging task, is not enough. To automate the propagation of policies, the industry will also need to converge on interoperable, machine-readable functional descriptions suitable for automated reasoning. These are clearly topics for future research and effort.
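The static-policy variant of the aggregation just described reduces to a set intersection: a location is acceptable at the SaaS-IaaS interface only if every personal data object's policy permits it. The following sketch illustrates this under stated assumptions; the `LocationPolicy` type, the `aggregate_policies` function, and the country codes are all hypothetical names for the example, not taken from any standard.

```python
# Hypothetical sketch: aggregating per-object location policies into one
# interface-level policy (the "static policies" model from the text).

from dataclasses import dataclass

@dataclass(frozen=True)
class LocationPolicy:
    """Allowed storage/processing locations for one personal data object."""
    object_id: str
    allowed_locations: frozenset  # e.g. ISO country codes

def aggregate_policies(policies):
    """Intersect per-object constraints: a location is allowed at the
    SaaS-IaaS interface only if every data object's policy permits it."""
    allowed = None
    for p in policies:
        allowed = p.allowed_locations if allowed is None else allowed & p.allowed_locations
    return allowed or frozenset()

policies = [
    LocationPolicy("rec-001", frozenset({"DE", "FR", "IE"})),
    LocationPolicy("rec-002", frozenset({"DE", "IE", "NL"})),
]
print(sorted(aggregate_policies(policies)))  # ['DE', 'IE']
```

The dynamic-distribution alternative would instead match each object's `allowed_locations` against the capabilities of candidate IaaS resources at scheduling time, rather than computing one intersection up front.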

The situation is similar for artifacts exchanged upstream (cf. assurance in Figure 9), such as notification reports. Consider an IaaS provider which experiences a severe disk crash, leading to the temporary unavailability of data until the faulty hardware is replaced and the data restored. Information on this outage is certainly relevant to the users of the affected services, but which cloud subjects need to be notified about the incident depends on whether they are actually affected by the outage, and this can only be determined by mapping the distribution of cloud subject data and processing onto the infrastructure. Furthermore, the notification report must be rewritten to refer to the cloud subject's data, since the cloud subject has no knowledge of storage volumes or virtual machines.
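This upstream transformation amounts to a lookup through the mapping just mentioned: from the failed infrastructure component, through the data objects stored on it, to the cloud subjects who own them. The sketch below is purely illustrative; the volume identifiers, subject names, and mapping tables are invented for the example, and a real provider would derive them from its resource inventory.

```python
# Hypothetical sketch of upstream notification transformation: an IaaS-level
# incident names a storage volume; the provider maps the volume to the
# cloud-subject data stored on it and rewrites the notification in the
# subject's own terms. All identifiers here are illustrative.

volume_to_objects = {
    "vol-17": ["invoice-batch-2013-q4", "customer-ledger"],
    "vol-18": ["marketing-assets"],
}
object_to_subject = {
    "invoice-batch-2013-q4": "acme-corp",
    "customer-ledger": "acme-corp",
    "marketing-assets": "globex",
}

def transform_incident(failed_volume):
    """Produce one subject-facing notification per affected cloud subject,
    referring to their data rather than to infrastructure details."""
    affected = {}
    for obj in volume_to_objects.get(failed_volume, []):
        affected.setdefault(object_to_subject[obj], []).append(obj)
    return [
        {"subject": s,
         "message": f"Temporary unavailability of your data: {', '.join(objs)}"}
        for s, objs in affected.items()
    ]

for note in transform_incident("vol-17"):
    print(note["subject"], "->", note["message"])
```

Subjects whose data lives on unaffected volumes produce no notification at all, which captures the point that notification scope is determined by the data-to-infrastructure mapping.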

In current practice, the transformation of artifacts along the provisioning chain is performed manually or by static (hard-coded) processes.