This initiative, launched by Inovement Spain, aims to draw attention to the importance of the concepts used in security consultants' work
Madrid, February 20th 2014
Inovement launches the O-ISM3 Challenge, an equivalent in the consulting arena to the popular “Capture the Flag” competitions of the ethical hacking scene. Participants solve a Use Case in which they play the role of a consultant charged with finding the security needs of a company. To solve the Challenge, participants may use either traditional concepts like confidentiality, integrity and availability, or new concepts like O-ISM3's security objectives; the two options are mutually exclusive. The purpose of the Challenge is to compare how successfully participants using each option solve the Use Case.
The Challenge will run from the 28th of February to the 14th of March, and the winners will be published on the 24th of March. Among those who pass the test, a prize of 500 euros and a spot in an O-ISM3 course will be randomly assigned. Furthermore, the conclusions of the Challenge will be published; these will depend on the number of participants, their approach to solving the Challenge, and their comparative success in solving it.
Inovement is a European consulting company. We provide consulting services in Security Architecture, IT Governance, Risk Management, Information Security, Business Continuity and Compliance (ISO 27001, PCI-DSS, O-ISM3).
The Open Group Information Security Management Maturity Model (O-ISM3) Standard defines security processes for managing an enterprise’s Information Security Management System (ISMS). The O-ISM3 Standard places the onus on the business to define its required business security targets in its Security Policy, and then offers a set of security management processes from which the business selects which ones to deploy in a coherent ISMS. Each security control process in the ISMS then returns metrics to indicate how well that process is contributing towards achieving the business's security targets.
The O-ISM3 Standard metrics feedback is a major differentiating feature compared with other ISMS systems, because it enables the ISMS manager to present quantitative evidence (as opposed to qualitative subjective judgments) to show:
Which security control processes are revealing IT operational areas that are under-achieving regarding security targets
Which processes need to be tuned to improve performance to achieve or exceed critical targets
Which processes are not contributing sufficiently to justify continuing to use them
The ISMS manager is thereby informed on which processes to retire and which to add to their ISMS. More importantly, they are also informed with metrics that they can report as objective evidence to their CxO-level management on how well their ISMS is performing based on their security targets, and how effective their investment in security is.
I introduced the O-ISM3 Risk Assessment Method and Spreadsheet. We learnt how to model the business and the information technology, the dependencies between them, the threat level and the protection level, arriving at a qualitative evaluation of risk using the spreadsheet tool.
A metric is a quantitative measurement that can be interpreted in the context of a series of previous or equivalent measurements. Metrics are necessary to show how security activity contributes directly to security goals; measure how changes in a process contribute to security goals; detect significant anomalies in processes and inform decisions to fix or improve processes. Good management metrics are said to be S.M.A.R.T:
Specific: The metric is relevant to the process being measured.
Measurable: Metric measurement is feasible with reasonable cost.
Relevant: Improvements in the metric meaningfully enhance the contribution of the process towards the goals of the management system.
Timely: The metric measurement is fast enough to be used effectively.
Metrics are fully defined by the following items:
Name of the metric;
Description of what is measured;
How the metric is measured;
How often the measurement is taken;
How the thresholds are calculated;
Range of values considered normal for the metric;
Best possible value of the metric;
Units of measurement.
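The items above can be collected into a simple record; a minimal sketch in Python (the field names and the example metric are my own illustration, not taken from the O-ISM3 standard):

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """Full definition of a metric; field names are illustrative."""
    name: str                # name of the metric
    description: str         # what is measured
    method: str              # how the metric is measured
    frequency: str           # how often the measurement is taken
    threshold_method: str    # how the thresholds are calculated
    normal_range: tuple      # range of values considered normal
    best_value: float        # best possible value of the metric
    units: str               # units of measurement

# Hypothetical example metric:
viruses_cleaned = MetricDefinition(
    name="Viruses cleaned per week",
    description="Malware instances removed by the antivirus in the window",
    method="Sum of 'cleaned' events in the antivirus logs",
    frequency="Weekly, 9:00pm Sunday",
    threshold_method="Mean +/- 2 standard deviations over 26 weeks",
    normal_range=(0.0, 120.0),
    best_value=0.0,
    units="incidents/week",
)
```

A metric defined this completely can be handed to a different analyst and measured the same way.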
Security Metrics are difficult to come by
Unfortunately, it is not easy to find metrics for security goals like security, trust and confidence. The main reason is that security goals are “negative deliverables”. The absence of incidents for an extended period of time leads us to think that we are safe. If you live in a town where neither you nor anyone you know has ever been robbed, you feel safe. Incidents prevented can't be measured the way a positive deliverable can, like the temperature of a room.
Metrics for goals are not just difficult to find; they are also not very useful for security management. The reason is the indirect relationship between security activity and security goals. Intuitively, most managers think there is a direct link between what we do (our results or outputs) and what we want to achieve (the most important things: our goals). This belief is supported by real-life experiences like making a sandwich. You buy the ingredients, go home, arrange them, perhaps toast them, and voilà: a warm sandwich ready to eat. The output sought (the sandwich) and the goal (eating a homemade sandwich) match beautifully.
Unfortunately, the link is not always direct. A good example is research. There is no direct relationship between the goals (discoveries) and the activity (experiments, publication). You can try hundreds of experiments and still not discover a cure for cancer. The same happens with security. The goals (trust, confidence, security) and the activity (controls, processes) are not directly linked.
When there is a direct link between activity and goal, like the temperature in a pot and the heat applied to that pot, we know what decision to take if we want the temperature to drop: stop applying heat. But how do we make a network safer: by adding filtering rules (more accurate filtering) or by summarizing them (less complexity)? We don't know. If a process produces dropped packets, more or fewer dropped packets won't necessarily make the network more or less secure, just like a change in the firewall rules won't necessarily make the network safer or otherwise.
The disconnect present in information security between goals and activity prevents goal metrics from being useful for management, as you can never tell if you are closer to your goals because of decisions recently taken on the security processes.
Goal metric examples:
Instances of secret information disclosed per year. What can you do to prevent people with legitimate access from disclosing that information?
Use of systems by unauthorized users per month. What can you do to prevent people from letting other users use their accounts?
Customer reports of misuse of personal data to the Data Protection Agency. Even if you are compliant, what can you do to prevent a customer from filing a report?
Risk reduction per year of 10%. As risk depends on internal and external factors, what can you do to actually modify risk?
Prevent 99% of incidents. How do you know how many incidents didn’t happen?
Actually useful security metrics
If metrics for goals are difficult to get, and are not very useful; what is a security manager to do? Measuring process outputs can be the answer. Measuring outputs is not only possible but very useful, as outputs contribute directly or indirectly to achieve security, trust and confidence. Using output metrics you can:
Measure how changes in a process contribute to outputs;
Detect significant anomalies in processes;
Inform decisions to fix or improve the process.
There are seven basic types of process output metrics:
Activity: The number of outputs produced in a time period;
Scope: The proportion of the environment or system that is protected by the process. For example, AV could be installed in only 50% of user PCs;
Update: The time since the last update or refresh of process outputs.
Availability: The time since a process has performed as expected upon demand (uptime), the frequency and duration of interruptions, and the time interval between interruptions.
Efficiency / Return on security investment (ROSI): Ratio of losses averted to the cost of the investment in the process. This metric measures the success of a process in comparison to the resources used.
Efficacy / Benchmark: Ratio of outputs produced in comparison to the theoretical maximum. Measuring efficacy of a process implies the comparison against a baseline.
Load: Ratio of available resources in actual use, like CPU load, repository capacity, bandwidth, licenses and overtime hours per employee.
Examples of use of these metrics:
Activity: Measuring the number of new user accounts created per week: a sudden drop could reveal that the new administrator is lazy, or that users have started sharing accounts and so are no longer requesting them.
Scope: In an organization with a large number of third-party connections, measuring the number of connections with third parties protected by a firewall could lead to a management decision not to create more unprotected connections.
Update: Measuring the update level of the servers in a DMZ could lead to investigating the root cause if it drifts beyond a certain threshold.
Availability: Measuring the availability of a customer service portal could lead to rethinking the High Availability Architecture used.
Efficiency / Return on security investment (ROSI): Measuring the cost per seat of the Single Sign-On systems of two companies being merged could lead to choosing one system over the other.
Efficacy / Benchmark: Measuring the backup speed of two different backup systems could lead to choosing one over the other.
Load: Measuring and projecting the minimum load of a firewall could lead to taking the decision to upgrade pre-emptively.
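Several of these metrics are plain ratios and are easy to compute; a minimal sketch (all figures are invented for illustration):

```python
def scope(protected: int, total: int) -> float:
    """Scope: proportion of the environment protected by the process."""
    return protected / total

def rosi(losses_averted: float, cost: float) -> float:
    """Efficiency / ROSI: losses averted per unit of cost."""
    return losses_averted / cost

def load(in_use: float, capacity: float) -> float:
    """Load: ratio of available resources in actual use."""
    return in_use / capacity

# Invented figures:
print(scope(50, 100))         # AV on only 50% of PCs -> 0.5
print(rosi(200_000, 50_000))  # 4.0 euros averted per euro invested
print(load(3.2, 8.0))         # 40% of firewall capacity in use
```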
There is an important issue to tackle when using output metrics, which I call the Comfort Zone. When there are too many false positives, the metric is quickly dismissed, as it is not possible to investigate every single warning. On the other hand, when the metric never triggers a warning, there is a feeling that it is not working or providing value. The Comfort Zone (not too many false positives, pseudo-periodic warnings) can be achieved using an old tool from Quality Management, the control chart. There are rules used in Quality Management to tell a warning, a condition that should be investigated, from normal statistical variation (the Western Electric, Donald J. Wheeler's and Nelson rules), but for security management the best practice is adjusting the multiple of the standard deviation that defines the range of normal values for the metric until we achieve the Comfort Zone: pseudo-periodic warnings without too many false positives.
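The tuning just described can be sketched as follows: compute control limits as the mean plus or minus k standard deviations, then widen or narrow k until warnings become pseudo-periodic without too many false positives (the data and starting k are invented):

```python
import statistics

def control_limits(history, k):
    """Control chart limits: mean +/- k standard deviations."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - k * sd, mean + k * sd

def warnings_outside(history, new_values, k):
    """Measurements outside the limits should be investigated."""
    low, high = control_limits(history, k)
    return [v for v in new_values if v < low or v > high]

history = [98, 102, 97, 101, 99, 103, 100, 96, 104, 100]
this_month = [99, 118, 101, 95]

# Too many warnings? Increase k. Never a warning? Decrease k.
print(warnings_outside(history, this_month, 2))  # -> [118]
```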
Using Security Management Metrics
There are five steps in the use of metrics: measurement, interpretation, investigation, representation and diagnosis.
Measurement: The measurement of the current value of the metric is periodic and normally refers to a window, for example: “9:00pm Sunday reading of the number of viruses cleaned in the week since the last reading”. Measurements from different sources and different periods need to be normalized before integration in a single metric.
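The normalization step can be as simple as converting each source's count to a common rate before combining; a minimal sketch with invented figures:

```python
def per_day(count: float, window_days: float) -> float:
    """Normalize a count taken over an arbitrary window to a daily rate."""
    return count / window_days

# One source reports weekly, another daily; normalize before integrating:
weekly_feed = per_day(70, 7)   # -> 10.0 events/day
daily_feed = per_day(12, 1)    # -> 12.0 events/day
combined = weekly_feed + daily_feed
print(combined)  # 22.0 events/day across both sources
```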
Interpretation: The meaning of a measured value is evaluated comparing the value of a measurement with a threshold, a comparable measurement, or a target. Normal values (those within thresholds) are estimated from historic or comparable data. The results of interpretation are:
Anomaly: When the measurement is beyond acceptable thresholds.
Success: When the measurement compares favourably with the target.
Trend: General direction of successive measurements relative to the target.
Benchmark: Relative position of the measurement or the trend with peers.
Incidents or poor performance take process metrics outside normal thresholds. Shewhart-Deming control charts are useful to indicate whether the metric value is within the normal range, as values within the arithmetic mean plus/minus twice the standard deviation make up about 95.4% of the values of a normally distributed population. Fluctuations within the “normal” range would not normally be investigated.
Investigation: The investigation of abnormal measurements ideally ends with the identification of either a common cause (for example, changes in the environment or the results of management decisions) or a special cause (error, attack, accident) for the current value of the metric.
Representation: Proper visualization of the metric is key for reliable interpretation. Metrics representation will vary depending on the type of comparison and distribution of a resource. Bar charts, pie charts and line charts are most commonly used. Colours may help to highlight the meaning of a metric, such as the green-amber-red (equivalent to on-track, at risk and alert) traffic-light scale. Units, the period represented, and the period used to calculate the thresholds must always be given for the metric to be clearly understood. Rolling averages may be used to help identify trends.
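The rolling averages mentioned above can be sketched in a few lines; the incident counts are invented:

```python
def rolling_average(values, window):
    """Simple moving average; smooths noise so trends stand out."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

weekly_incidents = [4, 7, 3, 6, 9, 8, 11]
print(rolling_average(weekly_incidents, 3))  # a rising trend emerges
```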
Diagnosis: Managers should use the results of the previous steps to diagnose the situation, analyse alternatives and their consequences and make business decisions.
Fault in Plan-Do-Check-Act cycle leading to repetitive failures in a process -> Fix the process.
Weakness resulting from lack of transparency, partitioning, supervision, rotation or separation of responsibilities (TPSRSR) -> Fix the assignment of responsibilities.
Technology failure to perform as expected -> Change / adapt technology.
Inadequate resources -> Increase resources or adjust security targets.
Security target too high -> Revise the security target if the effect on the business would be acceptable.
Incompetence, dereliction of duty -> Take disciplinary action.
Inadequate training -> Institute immediate and/or long-term training of personnel.
Change in the environment -> Make improvements to adapt the process to the new conditions.
Previous management decision -> Check if the results of the decision were sought or unintended.
Error -> Fix the cause of the error.
Attack -> Evaluate whether the protection against the attack can be improved.
Accident -> Evaluate whether the protection against the accident can be improved.
What management practices become possible?
A side effect of an Information Security Management System (ISMS) lacking useful security metrics is that security management becomes centred on activities like Risk Assessment and Audit. Risk Assessment considers assets, threats, vulnerabilities and impacts to get a picture of security and prioritize design and improvements, while Audit checks the compliance of the actual information security management system with the documented management system, an externally defined management system, or an external regulation. Risk Assessment and Audit are valuable, but there are more useful security management activities, like monitoring, testing, planning, improvement, assessment and benefits realisation, that become possible with output metrics. These activities can be described as follows:
Monitoring: Using metrics to watch process outputs, detect abnormal conditions and assess the effect of changes in the process.
Testing: Checking whether inputs to the process produce the expected outputs.
Improving: Making changes in the process to make it more suitable for its purpose, or to reduce the use of resources.
Planning: Organizing and forecasting the amount, assignment and milestones of tasks, resources, budget, deliverables and performance of a process.
Assessment: How well the process matches the organization's needs and compliance goals expressed as security objectives; how changes in the environment or management decisions change the quality, performance and use of resources of the process; whether bottlenecks or single points of failure exist; points of diminishing returns; benchmarking of processes between process instances and other organizations; trends in quality, performance and efficiency.
Benefits realisation: Shows how achieving security objectives contributes to achieving business objectives, measures the value of the process for the organization, or justifies the use of resources.
While audits can be performed without metrics, monitoring, testing, planning, improvement and benefits realisation are not feasible without them.
What needs to be done?
S.M.A.R.T security managers need metrics that actually help them perform management activities.
While it is not necessary to drop goal metrics altogether, the day-to-day focus of information security management should be on monitoring, testing, design & improvement and optimization using output metrics. These are the metrics that show the effect of management decisions, whether things are getting better or worse, whether processes work as designed, and whether changes outside our direct control are causing abnormal conditions in security processes. All these activities are perfectly feasible using output metrics and control charts.
The usage of ISM3 within the Information Assurance program of the Swiss Armed Forces is threefold:
first, there is the need to comply with a number of regulations, inter alia the ISO 27k family, ISO 31000 and ISO 20000;
second, developing measurable and achievable security processes is very demanding in such a high-security environment, while the mere implementation of ISO 27k, especially its controls, is not sufficient to prove a Return on Security Investment (ROSI);
finally, the governance of security in a highly decentralized organization needs clever structuring.
Basically, in all of these action areas ISM3 gives us a helping hand, saving us the time and money of developing our own interpretation of ISO 27k. ISM3 came into the focus of the Swiss Armed Forces during a study on a business-driven implementation of an ISMS, conducted in order to regain management attention and acceptance for the restructuring of security. ISM3 itself is not a new invention but a straightforward, enabling approach that draws together existing security frameworks in order to make security understandable for the rest of the (business) world. During the long process of aligning diverse security initiatives within the Swiss Armed Forces, ISM3 is and will be the central repository and cornucopia for establishing security processes which are measurable, acceptable and achievable in the sense of ROSI. The methodology ISM3 provides helps to achieve ROSI, while the ISM3 security processes in detail help to focus on the servicing and maintenance of security at all levels: operational, tactical and strategic.
“CajaMadrid started using ISM3 in the security process of ethical hacking of systems and applications. Several enhancements were made in order to measure the metrics of this process, and a Service Level Agreement was established based on those metrics. Using ISM3 criteria, the classification of information systems, which determines how frequently the systems are tested, was improved.
As a result of ISM3 implementation, the team’s productivity doubled during the first year. The follow-up reports with metrics made collaboration between developers, system administrators and security personnel easier and more productive. The information available is so detailed that CajaMadrid now uses objective criteria to give an award to the manager whose applications or systems present the fewest vulnerabilities, and vulnerabilities that are found are fixed faster.
The methodology’s orientation on deliverables makes the entire evidence-generating activity possible, which means that it is auditable, measurable and manageable. As the management system is metrics based, daily operations do not require audits for improvement, speeding up the improvement cycle significantly. When metrics correlate well with pursued goals, process improvement leads directly to the achievement of goals with greater effectiveness, efficiency and quality. Metrics make the status and progress of activities clearly visible, making it easier to reach agreements with our internal client and partners, and communicate our achievements to upper management. Ultimately, this methodology allows information security to be managed using best practices that apply to business in general. Based on the success of the methodology, we are extending the use of the method to other processes.
CajaMadrid is ISO27001 and ISO20000 certified, and uses CMMI-3 and standards like ITIL, OSSTMM and many others. ISM3 integrates seamlessly with all of them and we have even received a positive note about our ISMS certification thanks to ISM3. We are very interested in the method, and once it has been set up, it is simple to use.”
In this video the pros and cons of the Compliance approach and the Continuous Improvement approach are weighed.
Most ISMS standards emphasize Risk Assessment and Audit. These management practices leave other information security management practices in the shadows, which is especially unfair if you consider the limitations risk assessments face. Creating a risk assessment method is very easy, as there are many choices you can make:
The scope (what's in, what's out)
The depth (think of the OSI levels and above, up to business processes)
The way you model the parts/objects of the organization, their relationships, and the states of their lifecycles.
Your threat taxonomy (there is no single widely accepted one at all depth levels)
The way you score the impact on assets (dollars, high-medium-low or 1-5 Confidentiality, Integrity, Availability scales and expansions or combinations thereof)
Controls taxonomy (there is no single widely accepted one at all detail levels; many use the ISO 17799 list)
How you combine threats, their probability, controls, their quality, and impact to reach a Risk figure.
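To make the point concrete, here is one arbitrary way, among countless possible ones, of combining threat probability, control quality and impact into a risk figure; every scale and formula below is an invented assumption, not part of any standard:

```python
# Arbitrary design choices throughout; illustrative only.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(threat_probability: str, control_quality: str, impact: str) -> int:
    """Toy risk figure: probability x impact, reduced by control quality."""
    p = LEVELS[threat_probability]
    c = LEVELS[control_quality]
    i = LEVELS[impact]
    return max(p * i - c, 0)

print(risk_score("high", "low", "high"))    # 3*3 - 1 -> 8
print(risk_score("medium", "high", "low"))  # 2*1 - 3 -> 0
```

Change any of the scales or the formula and the resulting figures are no longer comparable with anyone else's.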
The multiplicity of risk assessment methods and standards makes it exceedingly difficult to reuse or compare risk assessments, a problem compounded by changes in a method's design or even in the way it is used. Very seldom is it possible to compare this year's RA with last year's, and comparing RAs from different companies becomes an unattainable Holy Grail. A good risk assessment standard should meet the following criteria:
Reproducibility. This means two different independent practitioners should get virtually the same work products and results.
Productivity (added value). The work products should serve as inputs for:
Gauging how safe the organization is;
Identifying threats and weaknesses;
Choosing what processes are appropriate for fulfilling the security objectives;
Prioritizing investment in security processes;
Quantifying investment in security processes.
Cost-effectiveness. Setting up an ISM system should be cheaper than operating it, just like the cost of choosing a security tool should be small in comparison with the cost of purchasing and using the tool.
Added value. This means the result of the process selection should be learnt from the selection exercise itself. If the result is known beforehand, and the process selection is just a justification for a previously taken decision, the added value is nil, which negates any cost-effectiveness.
ISM3 considers the following management activities:
Risk Assessment (part of GP-3) - Considers assets, threats, vulnerabilities, impacts to get a picture of security and prioritize design and improvements.
Audit. Using the GP-2 ISM System and Business Audit process, checks are made on whether the process inputs, activities and results match their documentation.
Certification: Evaluates whether process documentation, inputs, outputs and activities comply with a pre-defined standard, law or regulation. The certificate provides independent proof of compliance that third parties can trust. This practice is also performed using GP-2 ISM System and Business Audit.
Additionally, and on an equal footing:
Testing. Assessment of whether process outputs are as expected when test data is input. This is an aspect of TSP-4 Service Level Management.
Monitoring. Checking whether the outputs of the process and the resources used are within normal range for the purpose of detecting significant anomalies. This is also performed using TSP-4 Service Level Management.
Improving. Making changes in the process to make it better fit for the purpose, or to lead to a saving in resources. The removal of faults before they produce incidents, the removal of bottlenecks that hamper performance, and making trade-offs are examples of process improvements. This management practice needs information gained from evaluating, testing or monitoring the process. The gains from the changes (if any) can be diagnosed with subsequent testing, monitoring or evaluation. GP-3 ISM Design and Evolution provides a framework for improvement.
Planning. Organizing and forecasting the amount, assignment and milestones of tasks, resources, budget, deliverables and performance of a process. This is performed using TSP-4 Service Level Management.
Evaluation. Required periodically to assess the outcomes of the ISM system.
Assessment. Using the GP-3 ISM Design and Evolution process, the following areas are assessed:
How well the process matches the organization's needs and compliance goals expressed as security objectives.
How changes in the environment or management decisions in a process change the quality, performance and use of resources of the process;
Whether bottlenecks or single points of failure exist;
Points of diminishing returns;
Benchmarking of processes between instances and other organizations.
Trends in quality, performance and efficiency.
Benefits realisation. Shows how achieving security objectives contributes to achieving business objectives, measures the value of the process for the organization, or justifies the use of resources. This is performed using TSP-4 Service Level Management.
So, do audit and assess your risks, but don't let this drain all your energy from real management.
There are two reasons for using an information system model in Information Security. The first is that in order to understand the threats faced by information systems, we need to understand their components, their behaviour and how they relate to each other. The second is that security policies are too often tied too closely to actual hardware and software components and rendered obsolete by technological advances. Using a good model of an information system provides a necessary layer of abstraction that can help make security policies more durable.
Process. This is equivalent to the “State register”, the “Table”, and the “Head”.
As early information systems were not networked, the components needed for inter-system communication were not modelled. As a matter of fact, operating systems like Inferno try to keep the file/process model alive.
ISM3 represents information systems using a reduced set of components, but one complex enough to illustrate how networked information systems behave.
Repositories: Any temporary or permanent storage of information, including RAM, databases, file systems, and any kind of portable media;
Interfaces: Any input/output device, such as screens, printers and faxes;
Channels: Physical or logical pathways for the flow of messages, including buses, LAN networks, etc. A network is a dynamic set of channels;
Borders: The limits of the system;
Services: Any value provider in an information system, including services provided by the BIOS, operating systems and applications. A service can collaborate with other services or lower-level services to complete a task that provides value, like accessing information from a repository;
Sessions: A temporary relationship of trust between services. The establishment of this relationship can require the exchange of Credentials;
Messages: Any meaningful information exchanged between two services or between a user and an interface.
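The component set above can be modelled as simple types; a minimal sketch (the class structure and example are my own, only the component names come from ISM3):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Repository:
    """Temporary or permanent storage of information."""
    name: str

@dataclass
class Channel:
    """Physical or logical pathway for the flow of messages."""
    name: str

@dataclass
class Service:
    """Any value provider; may collaborate with other services."""
    name: str
    collaborators: List["Service"] = field(default_factory=list)

@dataclass
class Session:
    """Temporary relationship of trust between two services."""
    client: Service
    server: Service
    credentials: Optional[str] = None

@dataclass
class Message:
    """Meaningful information exchanged between services."""
    payload: str
    channel: Channel

# A web application service trusting a database service:
db = Service("database")
web = Service("web app", collaborators=[db])
session = Session(client=web, server=db, credentials="api-key")
msg = Message(payload="a query", channel=Channel("LAN"))
```

Modelling at this level of abstraction keeps a security policy valid even when the concrete hardware and software change.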