4.6 Metrics and Evidence
In accountable organisations, the evidence element can be used to assess, indirectly, the suitability of organisational policies, IT controls and operations. Accountability metrics that perform this kind of indirect assessment therefore make the relationship between metrics and evidence explicit. The following example illustrates this relationship:
Let us consider an accountability metric that computes the percentage of customer PII [50] records with a timestamp out of all the customer PII records kept by an organisation. This metric can be used, in combination with other relevant metrics, to evaluate indirectly how well the organisation meets the verifiability attribute of accountability with respect to a data retention policy, which discloses how long (the data retention period) the PII will be kept before being deleted. Verifiability requires that the behaviour of the data retention process be verifiable against the policy. In this case, the timestamp held by a PII record allows verifying whether the storage of the record conforms to the retention policy, by comparing the timestamp with the retention period and the current date. The PII creation date thus serves as the body of evidence on which the actual measurement, as defined by the metric, is performed.
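As a minimal sketch of how such a metric could be computed, the following Python fragment (with a hypothetical record layout and an assumed one-year retention period) derives the timestamp-coverage percentage and flags records kept beyond the retention period:

```python
from datetime import date, timedelta

# Hypothetical record structure: each customer PII record may carry a
# creation timestamp, which is the body of evidence for the metric.
pii_records = [
    {"id": "r1", "created": date(2013, 1, 15)},
    {"id": "r2", "created": date(2014, 6, 1)},
    {"id": "r3", "created": None},  # record lacking a timestamp
]

RETENTION_PERIOD = timedelta(days=365)  # assumed policy: keep PII one year

def timestamp_coverage(records):
    """Percentage of PII records carrying a creation timestamp."""
    with_ts = sum(1 for r in records if r["created"] is not None)
    return 100.0 * with_ts / len(records)

def retention_violations(records, today):
    """Records whose age exceeds the retention period. Only records
    with a timestamp are verifiable against the policy at all."""
    return [r for r in records
            if r["created"] is not None
            and today - r["created"] > RETENTION_PERIOD]

print(f"timestamp coverage: {timestamp_coverage(pii_records):.1f}%")
print("overdue:", [r["id"] for r in retention_violations(pii_records, date.today())])
```

Note how a record without a timestamp lowers the coverage metric and, at the same time, cannot be verified against the policy: the metric quantifies exactly how much of the data is verifiable.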
From the perspective of the accountability framework, the previous example shows that metrics are a means of demonstrating accountability through its attributes, based on the notion of quantifiable evidence of the application of proper practices and the performance of operational processes. Based on this notion, A4Cloud proposes the following definition for the evidence element in the context of accountability metrics:
Evidence: collection of information, with tangible representation, that can be used for assessing a cloud service attribute.
Note 1 to entry: A metric does not measure an attribute of a given cloud service directly; rather, it measures the attribute indirectly through the evidence associated with that cloud service. The evidence element therefore supports both the measurement result and the repeatability of the measurement.
Note 2 to entry: Evidence may come from sources with different levels of certainty and validity, depending on the method of collection or generation. All these sources can be analysed and used as a body of facts for assessing a given attribute.
The proposed definition of evidence is broad enough to encompass not only experimental observations, but also system logs, certifications asserted by a trusted party, and textual descriptions of a procedure. Evidence can therefore take different forms and can be provided at different levels: (i) at the organisational level, to demonstrate that the policies are correct and appropriate for the context; (ii) at the operational level, to demonstrate that the right practices and mechanisms have been deployed to implement the policies; and (iii) at the technical level, to demonstrate that the systems and processes in place behave as planned.
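To make the breadth of this definition concrete, the following Python sketch models the evidence element as a small data structure; the field names and enumeration values are illustrative, not prescribed by A4Cloud:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceLevel(Enum):
    ORGANISATIONAL = "organisational"  # policies are correct and appropriate
    OPERATIONAL = "operational"        # right practices and mechanisms deployed
    TECHNICAL = "technical"            # systems and processes behave as planned

@dataclass
class Evidence:
    content: str             # tangible representation: log excerpt, certificate, text
    source: str              # who collected or generated it
    collection_method: str   # e.g. "system log", "third-party certification"
    level: EvidenceLevel

# A technical-level piece of evidence taken from a (fictional) system log:
log_evidence = Evidence(
    content="2014-06-01T12:00:00Z DELETE pii_record r2",
    source="storage subsystem",
    collection_method="system log",
    level=EvidenceLevel.TECHNICAL,
)
```

Capturing the source and collection method alongside the content is what later allows a level of certainty to be attached to the evidence, as discussed in Note 2 above.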
The evidence element also contributes to the S.M.A.R.T. characteristics of an accountability metric, namely:
- S - Specific (or Significant): Evidence allows metrics to be specific and targeted to the area being measured. For example, evidence provides information about the actors involved in the measurement, the purpose or benefits of the metric, etc.
- M - Measurable (or Manageable, Meaningful): Evidence supports the elicitation of meaningful metrics by providing clear information about the input parameters.
- A - Achievable (or Appropriate, Attainable): Metrics should not be developed if accurate or complete data cannot be collected. Here, evidence helps stakeholders assess the quality of the inputs and, ultimately, evaluate the assurance associated with the measurement result.
- R - Repeatable: The provided evidence is essential for tracing the measurement process the assessor applied to obtain the result. Evidence should provide enough information to repeat the measurement, as many times as required, in order to validate the obtained result.
- T - Timely: Timely metrics are those for which data are available when needed; this feature is directly related to the quality of the provided evidence.
Depending on the characteristics of the evidence associated with each metric, it is possible to derive a level of confidence that indicates the assurance in the metric's result. This confidence is based on two factors:
- Consistency, which indicates how systematic and regular the evaluation process is. This dimension is directly related to the Repeatable characteristic of accountability metrics. The level varies depending on whether the evidence indicates an informal assessment with no formal evaluation procedure (level 1), a structured but manual assessment (level 2), or an automated and systematic evaluation (level 3).
- Source of Assessment, which indicates how independent the evaluation of the metric is from the object of assessment. Self-provided assessments are considered of low value (level 1), while third-party assessments and user/publicly verifiable evaluations are considered of higher value (levels 2 and 3, respectively).
Based on these two dimensions, an aggregated measure of the level of confidence can be constructed according to the Metrics Confidence Matrix in Figure 17. This measure indicates the confidence in a metric's results according to the following levels:
- Level 0 (Unreliable). There is no confidence in the metric's results, since both the independence and the consistency of the assessment are very low.
- Level 1 (Insufficient). One of the two factors reaches only the lowest level, so the overall confidence is considered insufficient. Confidence in a metric is insufficient whenever the assessment is self-made or the process is informal.
- Level 2 (Essential). This is the minimum desired level of confidence: the assessment guarantees an acceptable degree of independence and consistency.
- Level 3 (Maximum). This is the preferred level of confidence. However, achieving it is presumably costly, since it implies automating the evaluation and making it publicly verifiable.
Both self-assessed and informally performed evaluations are insufficient for a reliable metric; in either case the maximum attainable level of confidence is 1. In the particular case where the evaluation is both informal and self-assessed, the confidence is considered non-existent (level 0). Once both factors reach at least level 2, an acceptable level of confidence is achieved (level 2). When the evaluation is both publicly verifiable and automated, the maximum level of confidence is reached (level 3). Note that the confidence level defined above is only a coarse-grained aggregation of the two factors; a finer-grained indicator would be possible, but it would have more levels, which would complicate its interpretation. This scale was therefore chosen for the sake of simplicity and clarity.
| Source of Assessment \ Consistency | Informal (Level 1) | Structured (Level 2) | Automated (Level 3) |
|---|---|---|---|
| Self-assessment (Level 1) | 0 | 1 | 1 |
| Third party assessment (Level 2) | 1 | 2 | 2 |
| User/Publicly Verifiable (Level 3) | 1 | 2 | 3 |
Figure 17: Metrics confidence matrix.
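Read as a lookup from the two factor levels to an aggregated confidence level, the matrix can be encoded directly. The following Python sketch is an illustrative encoding of Figure 17, not part of the A4Cloud framework itself:

```python
# Illustrative encoding of the Metrics Confidence Matrix (Figure 17).
# Keys are (source_of_assessment, consistency) levels, both in 1..3;
# values are the aggregated confidence levels.
CONFIDENCE_MATRIX = {
    (1, 1): 0, (1, 2): 1, (1, 3): 1,  # self-assessment
    (2, 1): 1, (2, 2): 2, (2, 3): 2,  # third party assessment
    (3, 1): 1, (3, 2): 2, (3, 3): 3,  # user/publicly verifiable
}

def confidence_level(source: int, consistency: int) -> int:
    """Aggregated confidence (0 = Unreliable .. 3 = Maximum)."""
    return CONFIDENCE_MATRIX[(source, consistency)]

# A structured assessment performed by a third party reaches level 2 (Essential):
assert confidence_level(source=2, consistency=2) == 2
```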
The degree of confidence associated with accountability metrics becomes useful for mechanisms like certification, where multi-assurance schemes such as the CSA Open Certification Framework (OCF) have been proposed by industry. The idea behind the CSA OCF is to offer three levels of assurance (Level 1, self-assessment; Level 2, third-party attestation; Level 3, continuous monitoring/continuous audit), each with a different degree of confidence in the provided evidence. A more detailed discussion of certification and accountability is presented in the next section.
[50] Personally Identifiable Information.