American automobile executive Lee Iacocca, perhaps best known for conceptualizing the Ford Pinto and Mustang vehicles, once observed that every business and every product has its own set of risks that cannot be avoided. It is what it is. However, smart organizations understand that they can minimize risks, and the gravity of their impact on the company's operations and reputation, if they manage them in a systematic manner.
Risk analysis is the process of identifying and probing prospective issues that can negatively affect a company's profitability and credibility. Information risk assessment is not much different.
Information risk assessment is a process designed to pinpoint possible problems that could compromise or adversely impact an organization’s IT assets, infrastructure, and architecture.
Introducing the FAIR Risk Assessment or FAIR Model
FAIR stands for Factor Analysis of Information Risk. It is a pragmatic risk management methodology that seeks to explore and estimate risks to a company's operational and cybersecurity framework. Developed in 2005 by Jack Jones, who currently chairs The FAIR Institute, the FAIR model is considered the foremost Value at Risk (VaR) framework for operational risk and cybersecurity. The Open FAIR standard is maintained by The Open Group, a globally recognized consortium that develops information technology standards to realize different business goals and objectives; it is currently the only internationally recognized quantitative model for assessing cybersecurity risk.
In addition, the FAIR model is universally applicable: it can be adopted by any organization that contends with perceived or tangible risks. Part of the purpose of the FAIR model is to provide a quantifiable, numerical perspective on how cyber-based threats can affect an organization on a multitude of levels.
The FAIR model identifies and compiles the different factors that make up a risk to an organization, and closely analyzes how these factors relate to, or trigger, one another. Factor analysis of information risk also entails thoroughly evaluating the significance of each specific risk scenario, both individually and in combination with other identified risks. It also catalogs methods for measuring the aspects and elements that can instigate risk, using a scenario-modeling construct to simulate and observe probable risk scenarios.
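The scenario-modeling idea can be illustrated with a small Monte Carlo sketch in Python. This is a generic illustration, not the Open FAIR computation itself; the Poisson and lognormal distribution choices, the parameter values, and the function names are all assumptions made for demonstration:

```python
import math
import random

def _poisson(rng, lam):
    # Knuth's multiplication method for sampling a Poisson count.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(tef, median_loss, sigma, years=10_000, seed=7):
    """Monte Carlo sketch of a single risk scenario.

    tef         -- expected loss events per year (events assumed Poisson)
    median_loss -- median loss per event (losses assumed lognormal)
    sigma       -- dispersion of the log of per-event losses
    """
    rng = random.Random(seed)
    mu = math.log(median_loss)
    annual = []
    for _ in range(years):
        events = _poisson(rng, tef)
        annual.append(sum(rng.lognormvariate(mu, sigma) for _ in range(events)))
    annual.sort()
    return {
        "mean": sum(annual) / years,       # expected annual loss
        "p95": annual[int(0.95 * years)],  # a "bad year" at the 95th percentile
    }

# Half a loss event per year on average, median loss of 50,000 per event.
result = simulate_annual_loss(tef=0.5, median_loss=50_000, sigma=1.0)
```

Simulating many hypothetical years like this yields a loss distribution rather than a single number, which is what lets decision-makers talk about both expected losses and worst-case years.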
In contrast to other risk assessment frameworks that rely on numerical weighted scales and qualitative color charts, the FAIR model utilizes a precise and scientific approach to information risk management. Moreover, it works well with other methodologies like ISO/IEC 27002:2005, ITIL, OCTAVE, COBIT, and COSO, among many others, serving as an analytical and computational engine that complements other available risk assessment models.
The Risk of Not Being FAIR
The Factor Analysis of Information Risk model also addresses and prevents a number of causes linked to inaccurate risk analysis, such as:
- Poorly defined scope of study and identification of scenarios
- Broken models, wherein the relationships between risk factors are not clear
- Substandard estimates and measurements of required data
- Imbalanced study of probability and possibility, especially for worst-case scenarios
- Flawed normalization of risk identification across different domains
- Impaired communication lines and channels between divisions and business units of a specific organization
The insight that the FAIR model provides is obtained through the use of empirical research methods that effectively perform risk identification, analysis, and mitigation.
The FAIR model has been recognized by highly-respected organizations such as ISACA, an international professional group that specializes in IT governance, and the Build Security In project of the United States Department of Homeland Security.
Here are some scenarios that FAIR risk assessment employs to obtain more information using robust scientific processes:
Probability vs. Possibility
The main purpose of FAIR risk analysis is to identify probable risks, as well as the chances of these risks transitioning from potential threats into real loss events. Having this foresight allows companies to take remedial measures to anticipate or mitigate probable issues.
The study of probability offers the opportunity to understand the level of certainty of any given scenario. Unlike the binary nature of possibility, where a specific occurrence either can or cannot happen, probability expresses how likely an incident is to happen, a likelihood that shifts with the remedial and preventive actions a company exercises.
Precision vs. Accuracy
In measuring probabilities, the difference between precision and accuracy can, quite literally, translate to a life-or-death situation. Accuracy refers to how close a measurement sits to the true value it is trying to capture.
Precision, on the other hand, is a measure of statistical variability: how close a set of measurements are to each other, regardless of whether they sit near the true value. In the field of risk management, and in the FAIR model, accuracy is preferred over precision, as it allows enough flexibility and room for uncertainty while still yielding correct and useful information.
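A small numerical example makes the distinction concrete; the measurement values below are invented for illustration. The first set of readings is precise but inaccurate (tightly clustered, yet biased away from the true value), while the second is accurate but imprecise (widely spread, yet centered on the truth):

```python
# Five repeated measurements of a quantity whose true value is 100.
true_value = 100.0
precise_but_inaccurate = [90.1, 90.2, 90.0, 90.1, 89.9]    # clustered, but biased low
accurate_but_imprecise = [95.0, 104.0, 98.0, 106.0, 97.0]  # spread out, but centered on truth

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    # Range of the measurements: a crude stand-in for (im)precision.
    return max(xs) - min(xs)

# Accuracy: how far the average sits from the true value.
print(abs(mean(precise_but_inaccurate) - true_value))  # roughly 9.9: inaccurate
print(abs(mean(accurate_but_imprecise) - true_value))  # 0.0: accurate
# Precision: how tightly the readings cluster.
print(spread(precise_but_inaccurate))                  # about 0.3: precise
print(spread(accurate_but_imprecise))                  # 11.0: imprecise
```

For risk estimates, the second set is the more useful one: a wide range that brackets the truth supports better decisions than a narrow range that misses it.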
Understanding the key components of the Factor Analysis of Information Risk model
The FAIR model is divided into four key components, with the primary objective of accurately identifying risk scenarios and their correlation with each other, as well as thoroughly understanding the implications they pose to the organization.
Threats

Threats are defined as components, whether human, object, or substance, that can potentially cause harm to an asset. It is important to note that a threat applies force against the asset to instigate a loss event. A threat may be something as random as a strong typhoon causing a flash flood in a data center, or something more targeted, like a computer engineer introducing erroneous syntax into a line of code to cause a system error.
The FAIR model provides for the proper profiling of potential threats to get a better understanding of the scale and magnitude of damage they can inflict. The profiling activity drills down into motives, sponsorship, primary intent, preferred targets and their corresponding characteristics, risk tolerance, and concern for collateral damage.
Assets

Assets may be tangible, such as computers, servers, and other electronic devices connected to an organization's IT framework, or intangible, like data files. Both types of assets are integral components of a company's information environment, and can also be considered risk factors, depending on their assigned controls, liability, and value.
The Organization

Simply put, the organization is the entity whose risks are being observed and analyzed, since any one of these risk situations, or a combination of them, may cause harm to the company. If any of these situations takes place, an organization's credibility, profitability, and unique selling propositions will likely be affected.
Worse, these identified risks may even cause an organization to cease operating and doing business, possibly due to the erosion of trust and confidence in them.
The Factor Analysis of Information Risk also seeks to distinguish the direct and indirect origins of these risks, as well as their frequency, to acknowledge and act on their probable repercussions to the organization.
The External Environment
An organization also has external factors to contend with, which may or may not be within its control but may nonetheless pose likely risk factors. These include industry competitors, legislative roadblocks, regulatory frameworks, and the like.
Knowing the stages of FAIR Risk Assessment
The basic form of the Factor Analysis of Information Risk model is composed of ten steps spread across four stages.
Stage 1. Identification of inherent components of the risk scenario(s)
This stage comprises two action points, namely the identification of assets at risk, and sources of threat(s) or threat communities that are being looked into. Did the cyber-incident affect one system only or a specific network? Who could have prompted this action, and what were their motivations for doing so?
Evaluating risk scenarios using a probability-based model seeks to provide further knowledge of how frequently these risks occur, as well as the probable magnitude of losses should they happen again. Top management and decision-makers of an organization must be involved in discussions of the financial cost and implications of these risks, so that budget and resources can be duly allocated.
Stage 2. Evaluation of Loss Event Frequency
This stage collects information and makes estimations on plausible Threat Event Frequencies (TEF), Threat Capability (TCap), Control Strength (CS), Vulnerability (Vul), and Loss Event Frequency (LEF).
Threat Event Frequencies (TEF) are defined as probable frequencies, within a given time frame, that threat agents will perform acts that may result in losses for the organization. Hacking is a common threat event that companies, especially those engaged in high-risk industries, have to protect themselves from. If a hacker is able to penetrate a company's systems, they may be able to control, deface, or damage the website, as well as steal vital and proprietary information.
Threat Capability (TCap) assesses the ability of an individual or a group of threat agents to create a loss. This is measured on a percentile scale ranging from 1 to 100, representing the range of skills and resources a given group of threat agents can bring to bear to generate their desired outcome.
Control Strength (CS), otherwise known as Resistance Strength, is the level of difficulty that a threat agent must overcome to cause a loss. The difficulty level is measured against the same scale as TCap. Let's say yours is a small real estate start-up firm with a website hosted on Wix. You probably chose Wix because it has been named one of the more secure platforms, with a strong arsenal of security features – and that's all good.
What Control Strength measures is how tough or time-consuming it would be for, say, a hacker at the 65th percentile of capability globally to penetrate Wix's security settings and do significant damage to the platform, as well as the company websites hosted on it.
Vulnerability (Vul) assesses the probability that the actions of a threat agent will result in losses, and is appraised by studying specific types of threats and controls. Case in point – the most sophisticated of anti-virus controls will not serve any benefit if the issues at hand concern lapses in audit and compliance, or internal fraud.
Loss Event Frequency (LEF) is a standard of measurement that aims to determine how often losses are likely to happen within a specific time period, stemming from the actions of different threat agents. Having a clear-cut time frame is imperative: over an unbounded time frame, almost any loss event becomes highly probable.
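Under the definitions above, Vulnerability can be treated as the probability that a threat agent's capability exceeds the Control Strength, and LEF as the Threat Event Frequency discounted by that probability. The Python sketch below illustrates this relationship; the function names, the sampled threat community, and the figures are illustrative assumptions, not values prescribed by the Open FAIR standard:

```python
def vulnerability(threat_capabilities, control_strength):
    """Fraction of the threat community whose capability percentile
    (1-100 scale) exceeds the Control Strength on the same scale."""
    exceed = sum(1 for tcap in threat_capabilities if tcap > control_strength)
    return exceed / len(threat_capabilities)

def loss_event_frequency(tef, vul):
    """LEF = TEF x Vul: threat events per year, discounted by the
    probability that any one of them succeeds."""
    return tef * vul

# Ten sampled threat agents spread across the capability scale, and
# controls that stop anyone at or below the 65th percentile.
agents = [5, 15, 25, 35, 45, 55, 65, 75, 85, 95]
vul = vulnerability(agents, 65)        # 3 of 10 agents exceed the controls -> 0.3
lef = loss_event_frequency(12.0, vul)  # 12 threat events/year -> about 3.6 loss events/year
```

Strengthening controls moves the Control Strength up the percentile scale, shrinking the fraction of the threat community capable of causing a loss, and with it the Loss Event Frequency.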
Stage 3. Evaluating Probable Loss Magnitude (PLM)
How much loss can an organization expect from a primary or secondary loss event? What are the probable and worst-case loss scenarios? Will the impact be tangible yet one-time, or will it resonate within and outside the industry? These are the specific details that this stage will set out to discover.
The difference between primary and secondary loss lies in the stakeholders involved and how they view or feel about the probability and incidence of loss. Primary stakeholder loss is something that takes place as a direct result of the loss event. This could be in the form of lost revenues or the need to replace or upgrade company assets.
Secondary stakeholders, on the other hand, may not be directly linked to the primary stakeholders but may react negatively to the loss event involving the latter, in turn harming the organization and its businesses. Secondary risks also have their own dedicated measurements and analysis, due to the impact they may inflict on an organization.
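The loss-magnitude side can be sketched the same way: expected annual loss is loss event frequency times the probable loss per event, with primary and secondary losses tallied separately. All names and figures below are illustrative assumptions, not Open FAIR prescriptions:

```python
def probable_loss_magnitude(primary, secondary):
    """Total expected loss per event: direct primary-stakeholder losses
    plus secondary-stakeholder fallout."""
    return sum(primary.values()) + sum(secondary.values())

def annualized_risk(lef, plm):
    # Expected annual loss = loss events per year x loss per event.
    return lef * plm

primary = {"lost_revenue": 80_000, "asset_replacement": 20_000}      # direct losses
secondary = {"customer_churn": 40_000, "regulatory_fines": 10_000}   # fallout losses
plm = probable_loss_magnitude(primary, secondary)  # 150_000 per loss event
print(annualized_risk(0.4, plm))                   # about 60,000 expected annual loss
```

Keeping primary and secondary losses in separate ledgers like this is what lets an organization see whether the bigger exposure is the direct hit or the fallout.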
Stage 4. Deriving and articulating risk
FAIR risk assessment provides for the reasonable articulation of risk to decision-makers through the following methods:
- Providing a precise classification of factors that comprise information risk
- Offering a sound method for measuring the above-mentioned factors and their corresponding levels of loss
- Presenting a computational engine that accurately reflects the relationships between the identified factors
- Displaying a simulation model that allows the application of the measurement methods and computational engine to accurately analyze risk scenarios regardless of size, scope, and complexity
With the interconnected manner in which trade and business are conducted nowadays, companies are hard-pressed to secure their information assets. Organizations that wish to future-proof their operations, and ensure their continued existence and success, will benefit greatly from investing in IT safety and security professionals and agencies that perform thorough audits utilizing the FAIR model. By doing so, they can be prepared to effectively address both probable and potential threats, and the substantial financial impacts that come along with them.
Contact RSI Security today to see how we can help!