Risk, Investment and Maturity

In this video we examine how exactly risk decreases with increasing investment, and how this correlates to maturity.

Compliance vs Continuous Improvement

In this video the pros and cons of the Compliance approach and the Continuous Improvement approach are weighed.

Most ISMS standards emphasize Risk Assessment and Audit. These management practices leave other information security management practices in the shadows, which is especially unfair if you consider the limitations risk assessments face. Creating a risk assessment method is very easy, as there are many choices you can make:

  • The scope (what's in, what's out)
  • The depth (think OSI levels and above to business processes)
  • The way you model the parts/objects of the organization, their relationships, and the states of their lifecycles.
  • Your threat taxonomy (there is no single widely accepted one at all depth levels)
  • The way you score the impact on assets (dollars, high-medium-low or 1-5 Confidentiality, Integrity, Availability scales and expansions or combinations thereof)
  • Controls taxonomy (there is no single widely accepted one at all detail levels; many use the ISO17799 list)
  • The way you combine threats, their probability, controls, their quality, and impact to reach a Risk figure.

The multiplicity of risk assessment methods and standards makes it exceedingly difficult to reuse or compare risk assessments, a problem compounded by changes in the method's design or even in the way it is used. It is very seldom possible to compare this year's RA with last year's, and comparing RAs from different companies becomes an unattainable Holy Grail. A good risk assessment standard should meet the following criteria:

  • Reproducibility. This means two different independent practitioners should get virtually the same work products and results.
  • Productivity (Added value). This means the work products should serve as inputs for:
      • Gauging how safe the organization is;
      • Identifying threats and weaknesses;
      • Choosing which processes are appropriate for fulfilling the security objectives;
      • Prioritizing investment in security processes;
      • Quantifying investment in security processes.
  • Cost-effectiveness. Setting up an ISM system should be cheaper than operating it, just as the cost of choosing a security tool should be small in comparison with the cost of purchasing and using the tool.
  • Added value. This means the result of the process selection should be learnt from the process selection itself. If the process selection result is known beforehand, and the process selection is just a justification for a previously taken decision, the added value is nil, which negates any cost-effectiveness.

ISM3 considers the following management activities:

  • Risk Assessment (part of GP-3) - Considers assets, threats, vulnerabilities, impacts to get a picture of security and prioritize design and improvements.
  • Audit. Using the GP-2 ISM System and Business Audit process, checks are made on whether the process inputs, activities and results match their documentation.
  • Certification. Evaluates whether process documentation, inputs, outputs and activities comply with a pre-defined standard, law or regulation. The certificate provides independent proof of compliance that third parties can trust. This practice is also performed using GP-2 ISM System and Business Audit.

Additionally, and on an equal footing:

  • Testing. Assessment of whether process outputs are as expected when test data is input. This is an aspect of TSP-4 Service Level Management.
  • Monitoring. Checking whether the outputs of the process and the resources used are within normal range for the purpose of detecting significant anomalies. This is also performed using TSP-4 Service Level Management.
  • Improving. Making changes in the process to make it better fit for the purpose, or to lead to a saving in resources. The removal of faults before they produce incidents, the removal of bottlenecks that hamper performance, and the making of trade-offs are examples of process improvements. This management practice needs information gained from evaluating, testing or monitoring the process. The gains from the changes (if any) can be diagnosed with subsequent testing, monitoring or evaluation. GP-3 ISM Design and Evolution provides a framework for monitoring.
  • Planning. Organizing and forecasting the amount, assignment and milestones of tasks, resources, budget, deliverables and performance of a process. This is performed using TSP-4 Service Level Management.
  • Evaluation. Required periodically to assess the outcomes of the ISM system.
  • Assessment. Using the GP-3 ISM Design and Evolution process, the following areas are assessed:
      • How well the process matches the organization's needs and compliance goals expressed as security objectives;
      • How changes in the environment or management decisions in a process change the quality, performance and use of resources of the process;
      • Whether bottlenecks or single points of failure exist;
      • Points of diminishing returns;
      • Benchmarking of processes between instances and against other organizations;
      • Trends in quality, performance and efficiency.
  • Benefits realisation. Shows how achieving security objectives contributes to achieving business objectives, measures the value of the process for the organization, or justifies the use of resources. This is performed using TSP-4 Service Level Management.

So, do audit, and assess your risks, but don't let this drain all your energy from real management.

Information System Models

There are two reasons for using an information system model in Information Security. The first is that in order to understand the threats faced by information systems, we need to understand their components, how they behave, and how they relate to each other. The second is that security policies are too often tied too closely to actual hardware and software components and rendered obsolete by technological advances. Using a good model of an information system provides a necessary layer of abstraction that can help to make security policies more durable.

There are quite a few Turing-complete computer models; the Turing machine is the best known.

A Turing machine has the following components:

  • Tape: It stores information.

  • Head: It reads or writes information.

  • Table: It specifies what will be the next action of the machine depending on the State and the current value under the Head.

  • State register
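The four components above can be sketched as a deliberately minimal Turing machine in code. The transition table shown (a binary increment program over a least-significant-bit-first tape) is purely my own illustrative choice:

```python
# A minimal Turing machine sketch mapping the four components above to code.
def run_turing_machine(table, tape, state, head=0, max_steps=1000):
    """table: (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))           # Tape: stores information
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, "_")       # Head: reads information
        new_symbol, move, state = table[(state, symbol)]  # Table consulted, State register updated
        tape[head] = new_symbol            # Head: writes information
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Illustrative program: binary increment of a number written least-significant-bit first.
table = {
    ("inc", "0"): ("1", "R", "HALT"),   # flip 0 -> 1 and stop
    ("inc", "1"): ("0", "R", "inc"),    # carry: flip 1 -> 0, keep moving
    ("inc", "_"): ("1", "R", "HALT"),   # ran past the end: write the carry bit
}
print(run_turing_machine(table, "111", "inc"))  # "0001": 7 + 1 = 8, LSB first
```

However small, this machine already has everything the list above demands: a tape, a head, a table and a state register.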

Nowadays, most computers follow the Von Neumann Architecture. Operating systems that run on Von Neumann machines often follow the Unix convention, where the information system components are:

  • File. This is equivalent to the “Tape”.

  • Process. This is equivalent to the “State register”, the “Table”, and the “Head”.

As early information systems were not networked, the components needed for inter-system communication were not modelled. As a matter of fact, operating systems like Inferno try to keep the file/process model alive.

ISM3 represents information systems using a reduced set of components, but a set complex enough to illustrate how networked information systems behave.

  • Repositories: Any temporary or permanent storage of information, including RAM, databases, file systems, and any kind of portable media;

  • Interfaces: Any input/output device, such as screens, printers and fax;

  • Channels: Physical or logical pathways for the flow of messages, including buses, LAN networks, etc. A Network is a dynamic set of channels;

  • Borders: They define the limits of the system.

  • Services. Any value provider in an information system, including services provided by BIOS, operating systems and applications. A service can collaborate with other services or lower level services to complete a task that provides value, like accessing information from a repository;

  • Sessions. A temporary relationship of trust between services. The establishment of this relationship can require the exchange of Credentials.

  • Messages. Any meaningful information exchanged between two services or a user and an interface.
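As a sketch only, the component set above could be modelled in code like this; every class name, field and sample value here is my own illustrative choice, not part of the ISM3 standard:

```python
# Illustrative model of the ISM3 information system components.
from dataclasses import dataclass, field

@dataclass
class Repository:            # temporary or permanent storage of information
    name: str
    contents: list = field(default_factory=list)

@dataclass
class Message:               # meaningful information exchanged between services
    payload: str

@dataclass
class Channel:               # physical or logical pathway for the flow of messages
    name: str
    log: list = field(default_factory=list)
    def send(self, msg: Message):
        self.log.append(msg)

@dataclass
class Service:               # any value provider in an information system
    name: str
    def read(self, repo: Repository):
        return list(repo.contents)

@dataclass
class Session:               # temporary relationship of trust between services
    client: Service
    server: Service
    credential_ok: bool = False

# A service reads a repository over an established session:
repo = Repository("customer-db", ["alice", "bob"])
api = Service("api")
db = Service("db-frontend")
session = Session(api, db, credential_ok=True)
print(api.read(repo))  # ['alice', 'bob']
```

The value of the model is that a policy phrased in terms of Repositories, Channels and Sessions survives the replacement of any particular database, network or login technology.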

Beyond Authentication, Authorization and Accounting

A very common oversimplification of access control is: "authentication, authorization and accounting". This narrow view obscures many subtle aspects of corporate practices for providing fine grained access to information resources.

Information systems represent users using user accounts or certificates and implement digital equivalents to guarded doors, records and signatures. While user accounts sometimes represent services or information systems instead of people, I will use the term "user" alone for simplicity.

Authentication is seen as a way to check a user's identity, but real implementations do not guarantee this. Every user account has an associated credential that is presented to and validated by the system that controls access to resources before the user of the user account can access those resources (this is called "login"). Upon login, we can't assert the identity of the user, or even whether the user account's owner is the current user of the user account. What we do know is that someone who knows the user ID of a valid user account has presented to the system both the user ID and the associated credentials.
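A minimal sketch makes the point concrete: login verifies knowledge of a valid user ID and its matching credential, nothing about the person presenting them. The salted-hash storage scheme below is an illustrative assumption, not a prescription:

```python
# Sketch of what "login" actually checks: possession of a matching ID/credential pair.
import hashlib, hmac, os

accounts = {}  # user_id -> (salt, credential_hash)

def register(user_id, password):
    salt = os.urandom(16)
    accounts[user_id] = (salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000))

def login(user_id, password):
    if user_id not in accounts:
        return False
    salt, stored = accounts[user_id]
    presented = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(stored, presented)  # constant-time comparison

register("abel", "s3cret")
print(login("abel", "s3cret"))   # True: proves knowledge of the credential...
print(login("abel", "guess"))    # False: ...but says nothing about who typed it
```

Nothing in this check links the keyboard to a person, which is exactly why the complementary controls below are needed.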

When we need to link user accounts and certificates to identifiable users, making users accountable for the use of the user account, we have to complement technical authentication with other controls.

The first control is checking the actual identity of the user before providing him with the user ID and credential. In countries where national ID cards exist, this control can be pretty straightforward. In other countries, widely different methods are used, normally checking whether the user has information, papers and cards that only the legitimate user should have. I call this control User Registration.

The second control is preventing the user from sharing the credentials with other users. This can be achieved by using multiple credentials (biometric and password, for example). Other enforcement controls are not as effective; for example, one can warn the user of the consequences of sharing the user ID and credential, and find ways to detect when this happens.

When protecting the anonymity of users is more important than making them accountable, we often need to guarantee that user accounts and certificates are not linked to identifiable users. This does not mean that anonymous User Registration is trivial, as we will often want the users to fulfill certain criteria, which I call the Personality of the user. Common examples of personality tests are: "Are you over 18?", "Do you live in Europe?", or "Are you human?". Anonymous user accounts are not intended to check who the user is, but whether the user is the same one who used the user account the last time it was logged in.

It is not rare to find online applications that don't use user accounts at all, checking only the personality of the user before providing access.

An additional aspect of authentication is session control, which limits:

  • The number of login attempts with an invalid credential or user ID; what happens after the limit is reached; the time lapse between login attempts.
  • The number of simultaneous sessions with the same user ID;
  • The longest duration of a session.
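The three limits above can be sketched in a few lines; all thresholds and names here are invented for illustration:

```python
# Sketch of session control: attempt limits with lockout, a cap on simultaneous
# sessions, and a maximum session duration. Thresholds are illustrative.
import time

MAX_ATTEMPTS, LOCKOUT_SECONDS = 3, 300
MAX_SESSIONS, MAX_DURATION = 2, 3600

failed = {}    # user_id -> (failed_attempt_count, last_attempt_time)
sessions = {}  # user_id -> list of session start times

def record_failed_login(user_id, now=None):
    now = now or time.time()
    count, _ = failed.get(user_id, (0, 0))
    failed[user_id] = (count + 1, now)

def may_attempt_login(user_id, now=None):
    now = now or time.time()
    count, last = failed.get(user_id, (0, 0))
    if count >= MAX_ATTEMPTS and now - last < LOCKOUT_SECONDS:
        return False                       # locked out after too many failures
    return True

def open_session(user_id, now=None):
    now = now or time.time()
    active = [t for t in sessions.get(user_id, []) if now - t < MAX_DURATION]
    if len(active) >= MAX_SESSIONS:        # cap on simultaneous sessions
        return False
    sessions[user_id] = active + [now]     # expired sessions drop off (max duration)
    return True

for _ in range(3):
    record_failed_login("mallory")
print(may_attempt_login("mallory"))  # False: locked out for five minutes
print(open_session("abel"), open_session("abel"), open_session("abel"))  # True True False
```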

So under the umbrella of "Authentication", we find:

  • Validation of the actual identity of identified users OR validation of the Personality of anonymous users (User Registration)
  • Provision of user IDs and credentials to users.
  • Validation of the match between user ID and credentials before granting any access to a resource.
  • Limits on the usage of the system with a user ID, with or without a matching credential (Session Control)

"Authentication" therefore has three component proofs, which are mixed and matched as needed:

  • Proof of identity.
  • Proof of personality.
  • Proof of ownership of the user account or digital certificate.

The traditional list of types of proof of ownership of the user account is:

  • Something you know (passwords)
  • Something you have (tokens and private keys)
  • Something you are (biometrics)

This list is not complete; we could also include:

  • Something you like. What if we could test the taste for colors or music of a user, and reliably check it when the user wants to log in?
  • Something you can do. What if we could test some ability of a user, like typing patterns, and reliably check it when the user wants to log in?
  • Something you think. What if we could test the scale of values of a user, and reliably check it when the user wants to log in? (I know, it's far-fetched)

Authorization is seen as granting the use of resources to authorized users and denying it to unauthorized users. But how does a user become authorized to use a resource? A very simple scenario is one with a Resource Owner, an Administrator who manages the system, and a User. The user sends an "access rights request" to the resource owner, who can grant the request and ask the administrator to technically grant those rights on the resource to the user's user ID. At a later time, the authorized user can use his user ID and credential to authenticate to the system, and use the resource with the rights he has been granted.
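The Owner/Administrator/User flow just described can be sketched like this; the ACL structure and function names are my own illustrative choices:

```python
# Sketch of the access-rights lifecycle: the Resource Owner decides, the
# Administrator records the decision, and the system enforces it at access time.
acl = {}  # resource -> {user_id: set of rights}

def request_access(resource, user_id, right, owner_approves):
    """User asks the Resource Owner; on approval the Administrator records the right."""
    if owner_approves:                      # Resource Owner's decision
        acl.setdefault(resource, {}).setdefault(user_id, set()).add(right)  # Administrator's action
        return True
    return False

def authorize(resource, user_id, right):
    """At access time: has this (already authenticated) user ID been granted this right?"""
    return right in acl.get(resource, {}).get(user_id, set())

request_access("payroll-db", "abel", "read", owner_approves=True)
print(authorize("payroll-db", "abel", "read"))    # True
print(authorize("payroll-db", "abel", "write"))   # False
```

Note that the grant and the enforcement are distinct steps performed by distinct roles, which is exactly the distinction the next list draws.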

So under the umbrella of "Authorization" we find:

  • Access Rights Control
  • Authorization

Under the traditional Accounting concept of access control there are two distinct processes, Recording and Signing.

Recording accurately registers the results of access to resources, so these can be investigated and will, intent, or responsibilities determined. The recording process will normally have to meet objectives for accuracy, including date and time. Recording normally registers:

  • Interface ID and Location;
  • User account or certificate ID;
  • Signature;
  • Type of Access Attempt (login, logout, change password, change configuration, connect/disconnect systems, repositories I/O interfaces, enabling/disabling admin access or logging, etc)
  • Date and Time of Access attempt;
  • Access attempt result;
  • Resource accessed.
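A recording function capturing these fields might look like the following sketch; the field names and the JSON-lines format are assumptions of mine, not a prescribed schema:

```python
# Sketch of a recording function for the fields listed above.
import datetime, json

def record_access(interface_id, location, account_id, access_type, result, resource):
    entry = {
        "interface": interface_id,
        "location": location,
        "account": account_id,
        "type": access_type,            # login, logout, change password, ...
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "result": result,               # granted / denied / error
        "resource": resource,
    }
    return json.dumps(entry)            # in practice: append to tamper-evident storage

print(record_access("vpn-gw-1", "Madrid", "abel", "login", "granted", "intranet"))
```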

Signing records the will and intent of the owner of the user account or certificate concerning a "document" (or a mail, video, song, photo, etc.), such as agreeing with, witnessing, or claiming authorship of documents like original works, votes, contracts and agreements. Digital signatures are a special kind of record.

A digital signature using public key encryption allows for authentication of documents, but what does a digital signature actually authenticate? You can sometimes assert a third party's authorship. For example: "I know that this was written by Abel because it is signed with his private key, no one contests his authorship, and Abel doesn't claim that his private key has been stolen".

But what you can't do is assert your own authorship. Why? Because you could take a message from Abel, remove the digital signature and add your own, asserting that you are the author.

This is the reason why signing contracts can potentially become complicated. I could send you a signed contract, which you sign and send back for me to sign, which I could again sign to show I agree with your signing it, and send it to you so you can prove that I am committed to the contract, as the first signature only shows agreement to my own writing.
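The strip-and-re-sign problem can be illustrated with a toy keyed hash standing in for a real public-key signature; the names and keys are invented, and a keyed hash is only a stand-in, not a real signature scheme:

```python
# Toy illustration: a signature over a document can be stripped and replaced,
# so a lone signature cannot prove who authored the content first.
import hashlib

def toy_sign(secret_key, text):
    # Stand-in for signing with a private key; NOT a real signature scheme.
    return hashlib.sha256((secret_key + text).encode()).hexdigest()

abel_doc = {"text": "Original work", "signer": "Abel",
            "sig": toy_sign("abel-key", "Original work")}

# Mallory removes Abel's signature and attaches her own over the same text:
stolen = {"text": abel_doc["text"], "signer": "Mallory",
          "sig": toy_sign("mallory-key", abel_doc["text"])}

# Both documents now carry a signature that verifies over identical content;
# the signatures alone cannot say whose claim of authorship came first.
print(stolen["sig"] == toy_sign("mallory-key", stolen["text"]))  # True: verifies fine
```

This is why the contract protocol above needs the counter-signing round trips: each party must end up holding a document signed by the other over content both have seen.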

Summarizing, "Authentication, Authorization and Accounting" is an oversimplification for:

  • User Registration.
  • Provision of user ID's and credentials to users.
  • Authentication.
  • Session Control
  • Access Rights Control
  • Authorization
  • Recording
  • Signing

I hope UPASAARS doesn't become a popular acronym...it looks ugly, doesn't it?

A final word on the equivalence between user accounts and digital certificates. A digital certificate serves the same purpose a user account does: providing and denying access to resources to people depending on what we know about them and how much we trust them.

For user accounts:

  • User Registration and the provision of user IDs and credentials to users are performed in house.
  • The user account has a user ID and a credential or set of credentials.
  • Session Control is configured normally system by system.
  • Access Rights are granted or denied depending on the user's identity and personality.

For digital certificates:

  • User Registration, provision of user ID's and credentials to users is performed by a certification authority.
  • The digital certificate has a Distinguished Name (equivalent to the user ID) and a credential (a public key signed by a certification authority plus the private key), which is a credential of the type "something you have".
  • Session Control is still configured system by system.
  • Access Rights are granted or denied depending on how much we trust the guarantees the certification authority provides about the user's identity and personality.

The advantage of digital certificates over user accounts is that they scale: we can trust users that haven't gone through our own User Registration. The disadvantage is that we can trust them only as much as we trust their Certification Authority's User Registration quality, in terms of how they check their users' identity and personality.

This is what OpenID ultimately tries to accomplish, without relying solely on digital certificates.

What is O-ISM3 good for?

There are several ways to take advantage of O-ISM3:

  • For someone who is using ISO9001: Build your ISMS using ISO9001 principles and infrastructure you already have and understand;
  • For someone who has no IS Management System: Build your ISMS in stages around your Business Goals, not some external or artificial goals;
  • For someone who wants to outsource security processes: Find out exactly what to outsource, how to link it to internal processes, and how to create SLAs;
  • For someone who wants to show commitment to security: Get a meaningful certificate that is not only compliant but useful (furthers business goals);
  • For someone who is already spending loads on IS: Use Security Targets to learn at least whether the IS management system is working, or use Metrics and manage your IS management system with or without Auditors around you;
  • For someone who is experiencing pains using other approaches: Suit your processes to your needs on an environment-by-environment basis. Stop applying Production Environment requirements to your Development Environment;
  • For a CISO: Get to tell Top Management, Middle Management and Administrators what their security responsibilities are, in a more specific way than "Security is everyone's responsibility";
  • For businesses that are going out to tender for their services;
  • For businesses that require a consistent approach by all service providers in a supply chain;
  • For service providers to benchmark their IT service management;
  • As the basis for an independent assessment;
  • For an organization which needs to demonstrate the ability to provide services that meet customer requirements;
  • For organizations which aim to improve service through the effective application of processes to monitor and improve service quality.

Return On Security Investment

The information security industry recognizes both the necessity and the difficulty of carrying out a quantitative evaluation of ROSI, return on security investment.

The main reason for investing in security measures is to avoid the cost of accidents, errors and attacks. Direct costs of an incident may include lost revenues, damages and property loss, or direct economic loss. The total cost can be considered to be the direct cost plus the cost of restoring the system to its state before the incident. Some incidents can cost information, fines, or even human lives.

The indirect cost of an incident may include damage to a company’s public image, loss of client and shareholder confidence, cash-flow problems, breaches of contract and other legal liabilities, failure to meet social and moral obligations, and other costs.

Measuring Return

What do we know intuitively about the risk and cost of security measures? First, the relationship between the factors that affect risk - such as window of opportunity, value of the asset and its value to the attacker, combined assets, number of incidents and their cost, etc. - is quite complex.  We also know that when measures are implemented to reduce risk, the ease of using and managing systems also decreases, generating an indirect cost of the security measures.

How do we go from this intuitive understanding to quantitative information? There is some accumulated knowledge of the relationship between investment in security measures and their results. First, there is the Mayfield paradox, according to which the cost of both universal access to a system and absolutely restricted access is infinite, with more acceptable costs corresponding to the intermediate cases.

An empirical study was also done by the CERT at Carnegie Mellon University, which states that the greater the expenditure on security measures, the smaller the effect of the measures on security. This means that after a reasonable investment has been made in security measures, doubling security spending will not make the system twice as secure.

The study that is most easily found on the Internet on this subject cites the formulas created during the implementation of an intrusion detection system by a team from the University of Idaho.

R: losses.
E: prevented losses
T: total cost of security measures

(R-E)+T= ALE

R-ALE = ROSI, therefore ROSI = E-T

The problem with this formula is that E is merely an estimate, and even more so if the measure involved is an IDS, which simply collects information on intrusions, which means that there is no cause-effect relationship between detecting an intrusion and preventing an incident. Combining this type of estimate with basing it on mathematical formulas is like combining magic with physics.
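The University of Idaho formulas are trivial to check in code; the figures below are invented purely to exercise the arithmetic:

```python
# The Idaho ROSI formulas in code: (R - E) + T = ALE, and R - ALE = ROSI = E - T.
def rosi_idaho(R, E, T):
    """R: losses, E: prevented losses, T: total cost of security measures."""
    ale = (R - E) + T          # (R - E) + T = ALE
    rosi = R - ale             # R - ALE = ROSI
    assert rosi == E - T       # therefore ROSI = E - T
    return rosi

print(rosi_idaho(R=100_000, E=40_000, T=15_000))  # 25000
```

The algebra is sound; the criticism above is aimed at where E comes from, not at the formula itself.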

What problems do we face in calculating return on investment of security measures? The most important is the lack of concrete data, followed closely by a series of commonly accepted suppositions and half-truths, such as that risk always decreases as investment increases, and that the return on the investment is positive for all levels of investment.

Nobody invests in security measures to make money; they invest in them because they have no choice. Return on investment serves to demonstrate that investing in security is worthwhile, to select the best security measures with a given budget, and to determine whether the budget allocated to security is sufficient to fulfill the business objectives, but not to demonstrate that companies make money off of the investment.

In general, and also from the point of view of return on investment, there are two types of security measures: measures to reduce vulnerability and measures to reduce impact.

  • Measures that reduce vulnerability barely reduce the impact when an incident does occur. These measures protect against a narrow range of threats. They are normally known as Preventive Measures. Some of these measures are firewalls, padlocks, and access control measures. One example of the narrowness of the protection range is the use of firewalls, which protect against access to unauthorized ports and addresses, but not against the spread of worms or spam.
  • Measures that reduce impact do very little to reduce vulnerability when an incident does occur. These measures protect against a broad range of threats and are commonly known as Corrective Measures. Examples of these measures include RAID disks, backup copies, and redundant communication links. One example of the range of protection is the use of backups, which do not prevent incidents, but do protect against effective information losses in the case of all types of physical and logical failures.

The profitability of both types of measures is different, as the rest of the article will show.

Preventive or Vulnerability-Reduction Measures

A reduction in vulnerability translates into a reduction in the number of incidents. Security measures that reduce vulnerability are therefore profitable when they prevent incidents for a value that is higher than the total cost of the measure during that investment period.

The following formula can be used:

ROSI = CTprevented / TCP

CT = Cost of Threat = Number of Incidents * Per-Incident Cost.
TCP = Total Cost of Protection.

When ROSI > 1, the security measure is profitable.

Several approximations can be used to calculate the prevented cost. One takes the prevented cost to be the difference between the cost of the threat in a period of time before the implementation of the security measure and an equal period after it.

CTprevented = ( CTbefore – CTafter)
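Putting the ratio and the prevented-cost approximation together, with invented numbers:

```python
# ROSI = CTprevented / TCP, with CTprevented = CTbefore - CTafter.
def rosi(ct_before, ct_after, tcp):
    ct_prevented = ct_before - ct_after     # threat cost before vs. after the measure
    return ct_prevented / tcp

# Illustrative figures: 12 incidents at 500 each before the measure, 3 after;
# the measure costs 2000 over the same period.
value = rosi(ct_before=12 * 500, ct_after=3 * 500, tcp=2000)
print(value, value > 1)   # 2.25 True -> the measure is profitable
```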

Calculating the cost of the threat as the number of incidents multiplied by the cost of each incident is an alternative to the traditional calculation of the incident probability multiplied by the incident cost, provided that the number of incidents in the investment period is more than 1. To calculate a probability mathematically, the number of favorable cases and the number of possible cases must be known. Organizations rarely have information on the possible cases of incidents (as opposed to the "favorable" cases). It is impossible to calculate the probability without this information. However, it is relatively simple to determine the number of incidents that occur within a period of time and their cost.

For a known probability to be predictive, it is also necessary to have a large enough number of cases, and conditions must also remain the same. Taking into account the complexity of the behavior of attackers and the organizations that use information systems, it would be foolish to assume that conditions will remain constant. Calculating the cost of a threat using probability information is therefore unreliable in real conditions.

One significant advantage of calculating the cost of a threat as the product of the number of incidents and their unit cost is that this combines the cost of the incidents, the probability, and the total assets (since the number of incidents partly depends on the quantity of the total assets) into a single formula. To make a profitability calculation like this, real information on the incidents and their cost is required, and gathering this information generates an indirect cost of an organization’s security management. If this information is not available, the cost of the threats will have to be estimated to calculate the ROSI, but the value of the calculation result will be low as the estimate can always be changed to generate any desired result.

The profitability of a vulnerability reduction measure depends on the environment. For example, in an environment in which many incidents occur, a security measure will be more profitable than in the case of another environment in which they do not occur. While using a personal firewall on a PC connected to the Internet twenty-four hours a day may be profitable, using one on a private network not connected to the Internet would not. Investing in a reinforced door would be profitable in many regions of Colombia, but in certain rural areas of Canada, this investment would be a waste of money.

Sample profitability calculation:

  1. Two laptops out of a total of 50 are stolen in a year.
  2. The replacement cost of a laptop is 1800 euros.
  3. The following year, the company has 75 laptops.
  4. The laptops are protected with 60€ locks.
  5. The following year only one laptop is stolen.

ROSI = ( Rbefore – Rafter) / TCP

ROSI = ( ( 1800+Vi )*3 - (( 1800+Vi )*1+75*60) )/( 75*60 )

(The number of incidents is adjusted for the increase in the number of targets).

If a laptop was worth nothing beyond its replacement cost (Vi=0), the security measure would not be profitable (ROSI < 1). In this example, the 60€ locks are profitable when the information on a laptop is worth more than 2700€, or when, based on historical information, the theft of 5 laptops can be expected for the year in question.

Using this type of analysis, we could:

  • Use locks only on laptops with valuable information.
  • Calculate the maximum price of locks for all laptops (24€ when Vi=0).
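The whole worked example can be checked in a few lines; `laptop_rosi` is my own helper name, and it simply encodes the formula above (with the cost of the locks included in the after-measure losses, as the example does):

```python
# The laptop example, computed. Vi is the value of the information on a laptop
# beyond its 1800-euro replacement cost.
def laptop_rosi(vi, lock_price=60, laptops=75):
    tcp = laptops * lock_price                  # Total Cost of Protection
    r_before = (1800 + vi) * 3                  # 2 thefts per 50 laptops, scaled to 75
    r_after = (1800 + vi) * 1 + tcp             # 1 theft plus the cost of the locks
    return (r_before - r_after) / tcp

print(round(laptop_rosi(vi=0), 2))      # -0.2: 60-euro locks are not profitable
print(round(laptop_rosi(vi=2700), 2))   # 1.0: break-even when Vi reaches 2700 euros

# Maximum profitable lock price when Vi = 0: solve ROSI = 1 for the price.
# (1800*3 - 1800 - 75*p) / (75*p) = 1  ->  3600 = 150*p  ->  p = 24
print(round(laptop_rosi(vi=0, lock_price=24), 2))  # 1.0
```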

Corrective or Impact-Reduction Measures

Since impact-reduction measures do not prevent incidents, the previous calculation cannot be applied. In the best-case scenario, these measures are never used, while if there were two incidents that could each result in the destruction of the protected assets, the measures would apparently be worth twice the value of the assets. Now then, who would spend twice the value of an asset on security measures? The profitability of corrective measures cannot be measured. These measures are like insurance policies: they put a limit on the maximum loss suffered in the case of an incident.

What is important in the case of impact-reduction measures is the protection that you get for your money. The effectiveness of this protection can be measured, for example depending on the recovery time after an incident. Depending on their effectiveness, there are measures that range from backup copies (with some added cost) to fully redundant systems (which cost more than double).

One interesting alternative to calculating the ROSI of a specific security measure is to measure the ROSI of a set of measures, including detection, prevention, and impact reduction, that protect an asset. In this case, the total cost of protection (TCP) is calculated as the sum of the cost of all of the security measures, while the effort to obtain the information on the cost of the threats is practically identical.

Budget, cost, and selection of measures

The security budget should be at most equal to the annual loss expectancy (ALE) caused by attacks, errors, and accidents in information systems for a tax year. Otherwise, the measures are guaranteed not to be profitable. The graph below shows the expected losses as the area under the curve. To clarify the graph, it represents a company with enormous expected losses, of almost 25% of the value of the company. In the case of an actual company, legibility of the graph could be improved using logarithmic scales.

An evaluation of the cost of a security measure must take into account both the direct costs of the hardware, software, and implementation, as well as the indirect costs, which could include control of the measure by evaluating incidents, ethical hacking (attack simulation), audits, incident simulation, forensic analysis, and code audits.

Security measures are often chosen based on fear, uncertainty and doubt, or out of paranoia, to keep up with trends, or simply at random. However, the calculation of the profitability of security measures can help to select the best measures for a particular budget. Part of the budget must be allocated to the protection of critical assets using impact-reduction measures, and part to the protection of all of the assets using vulnerability-reduction measures and incident and intrusion detection measures.


The main conclusions that can be drawn from all of this are that:

  • To guarantee maximum effectiveness of an investment, it is necessary, and possible if the supporting data is available, to calculate the return on the investment of vulnerability-reduction measures.
  • In order to make real calculations, real information is needed regarding the cost of the incidents for a company or in comparable companies in the same sector.
  • Both incidents and security measures have indirect and direct costs that have to be taken into account when calculating profitability.


Standards, standards, standards. Are they any good?

In this video we take an overall view of the information security management process, linking Goals, Situational Awareness, Resources, Priorities and Plans, etc.

Conventional wisdom seems to assume that being intelligent is about having all the answers, but I beg to disagree. An intelligent manager is one who asks the right questions, as these make evident what he knows and what he has to learn about the complex landscape of his company. The right questions will put a manager on the right track towards a well-thought-out security strategy.

My favourite set of questions, seasoned with my own answers, follows.

  1. How do you know where you are? Perform assessments that compare your model of the company with theoretical models, which can be standards or compliance requirements.
  2. Where are you? This is answered by the assessment results, ranging from the result of a PenTest to finding your current ISM3 maturity level.
  3. How safe is the organization? This depends on what the security targets are, how mature the organization's security management is, and the context of the organization. A risk assessment can give a rough idea of where the organization stands.
  4. How capable is the organization of remaining safe? The higher the ISM3 capability level achieved, the more capable it is.
  5. Where would you like to be? An objective answer is to state your goals explicitly, among them business goals, legal and standards compliance goals, and technical goals.
  6. How close to your goal can you afford to be? Unless your organization has unlimited resources, you can express this using security targets.
  7. How much should be spent on security? The minimum needed to achieve the security targets. There is normally no need to achieve invulnerability.
  8. How can you get there? Get management commitment, procure resources, and plan the implementation of the security processes you can afford, starting with knowledge management.
  9. How do you stay there when you manage it? Make decisions that bring you closer to your security targets and use metrics to monitor your results.
  10. How do you stay there when you get someone to manage it for you? Agree on metrics-based SLAs with your providers and use them to monitor their results.
  11. How do you improve your ISMS effectiveness and efficiency? Enhance the capability of your security processes using metrics and control charts.
  12. How good are you at staying there? Make an assessment of the capability of your security processes.
  13. How do you prove to others where you are? Get your ISMS certified.

What is the Maturity of your ISMS?

Find out the maturity of your ISMS with five simple questions.

How can you Measure how Secret a Secret is?

When a few know something and want to keep others from learning it, that's a secret. Everyone has secrets: some are small, like eating a bar of chocolate when you are on a diet; some are personally important, like an embarrassing personal or professional mistake in the past. There are as many types of secrets as people and organizations that keep them, among them:

• Personal secrets and family secrets, normally related to the morals and taboos of the culture people live in.
• Business secrets, like financial information, strategy and trade secrets.
• Law enforcement secrets, like forensic methods, investigation information and details about ongoing investigations.
• Crime secrets, like insider trading, organized crime and gangs.
• Political secrets (most nations have some form of Official Secrets Act and classify material according to the level of protection needed), like:

o Weapon designs and technology (nuclear, cryptographic, stealth).
o Military plans.
o Diplomatic negotiation positions.
o Intelligence information, sources and methods.
o International relations, secret treaties like:
• Molotov-Ribbentrop pact.
• Cuba crisis agreement.
• Dover treaty.
• Quadripartite agreement.
• Sykes-Picot agreement.

• Social secrets, like those of certain religions or secret societies such as Freemasonry.
• Professional secrets, like those kept by health workers, social workers and journalists.
• Other, like video tape rental and sale records in the USA.

While we all have an intuitive way to distinguish small secrets from big ones, so far there has been no way to measure secrecy.

Secrecy can be measured using the following formula:

S = Log( C * (Sum Tdk / Sum Tk) ) = Log C + Log( Sum Tdk / Sum Tk )

where Log is the base-10 logarithm, C is the quantity of information, Tk is the time each person who knows the secret has known it, and Tdk is the time each interested person has wanted to know it; the sums run over everyone who knows the secret and everyone interested in it, respectively.

Taking C = 1, we can look at some examples:

Who did "Famous for Nothing" go away with for a dirty weekend last summer? Suppose two people have known since the 1st of August and 48 more since the 1st of September, while 100,000 couch potatoes would like to know and find out after five months. Just before the revelation, the secrecy is:

S = Log( (100,000 * 5) / (2 * 5 + 48 * 4) ) = 3.39

If only 8 people, instead of 48, had found out in September, the secret would be S = 4.08.

Who killed Kennedy? Let's suppose two people have known since 1963 and 300 million Americans would like to know. After 42 years:

S = Log( (300 million * 42) / (2 * 42) ) = 8.18

Who was Deep Throat? Just before the secret came out, 4 people knew and 300 million were interested. After 33 years:

S = Log( (300 million * 33) / (4 * 33) ) = 7.88

What if 2 more people had known it for 30 of those 33 years?

S = Log( (300 million * 33) / (4 * 33 + 2 * 30) ) = 7.71

At a business, perhaps a secret is important for a few years, and all your competitors would be eager to know it. If 10 people in the company have known it for a year, and 150 people from other companies would like to know:

S = Log( (150 * 1) / (10 * 1) ) = 1.18

If after two years the market is more competitive and more people (1,500) are interested:

S = Log( (1,500 * 1) / (10 * 1) ) = 2.18
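The worked examples above can be reproduced with a few lines of code. This is a minimal sketch of the formula with C = 1; the function and parameter names are my own, as the article defines only the formula itself.

```python
import math

def secrecy(c: float, sum_t_interested: float, sum_t_known: float) -> float:
    """S = Log( C * (Sum Tdk / Sum Tk) ), using the base-10 logarithm."""
    return math.log10(c * sum_t_interested / sum_t_known)

# Celebrity example: 100,000 interested for 5 months; 2 people in the know
# for 5 months and 48 more for 4 months.
print(round(secrecy(1, 100_000 * 5, 2 * 5 + 48 * 4), 2))  # 3.39

# Business example: 150 competitors interested for a year, 10 insiders
# who have known for a year.
print(round(secrecy(1, 150 * 1, 10 * 1), 2))  # 1.18
```

Because the logarithm is base 10, each whole unit of S means the interest in the secret outweighs the exposure by another factor of ten.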

For the sake of example, I will use the following group sizes for measuring secrets:
• Family 10
• Social environment 100
• High school 300
• Competition 300
• Gang 50
• Police 100,000
• Army 1,000,000
• Population 30,000,000
• Foreign Armies 15,000,000

Applying the formula, the following approximate values can serve as examples of measured secrets:
• A social secret, like the confidence of a friend or an alibi of where you slept, 0.70
• Secret signs and telling signs of a gang, 0.95
• Hiding a mistake, like breaking something, 1.00
• Keeping the privacy of others, like a confession or the social situation of someone, 1.00
• Keeping the privacy of an average person, like a list of video rentals or records of library use, 1.04
• Cheating on your wife/husband, 1.44
• Secrets of Masonry, Scientology or Mithraism, 1.48
• A regular trade secret, 2.18
• A mistake or wrongdoing of a politician, 2.40
• Mafia, Yakuza or insider trading, 3.30
• Keeping the privacy of a politician, like a list of video rentals or records of library use, 3.48
• Identity of a witness in a criminal case, 5.70
• A journalistic source, 5.78
• The Coca-Cola formula, 6.00
• Corruption or misuse of public funds, 7.57
• Nuclear weapons, cryptography, 8.00
• Osama Bin Laden's location, 8.50

Mysteries, that is, secrets known by no one, like those uncovered by Champollion when deciphering Egyptian hieroglyphics, have S = infinity.

Ignorance of the existence of a secret makes it less secret, as the interest in learning it is lower, and the effort needed to keep it secret is lower as well. Unfortunately, it is very difficult to estimate the number of people interested in a secret, so the accuracy of measuring secrecy won't normally be very high. This way of measuring secrets can lead to some interesting exercises, like adding a factor to the formula for how intense the interest in learning the secret is, or analyzing the diffusion of a secret in a group depending on the likelihood of each member of the group revealing it.

Measuring secrecy can help us understand how secret the information we handle is and what kind of effort to make to keep it secret. A clear understanding of the reasons for keeping the secrecy and of the influence of time, interest and the group of people who know the secret can give insights on how to manage secrets properly. Two conclusions can easily be drawn from the formula. Firstly, preventing others from knowing of the existence of the secret makes it easier to keep; secondly, keeping the group of people who know the secret as small as possible prevents leaks more effectively than any technical measure.

A clear understanding of the types of secrets in an organization, how secret they seem to be, the impact of their revelation and a measure of their secrecy is the first step towards a cost-effective and efficient classification and protection of secrets.

Download the Article

Review of "Inviting Disaster" and "The Logic of Failure"

Review of the books "Inviting Disaster" and "The Logic of Failure"

