Welcome to the CISSP study notes. You know the type of study guides to expect by now. Prepare for a wall of formatted text.
The information in this guide is organized by the CISSP exam objectives, at least by domain. I have filled in the blanks with my notes from the general content I learned from Mike Chapple and Wikipedia.
Know going into this that you won’t retain all industry knowledge at all times. I’ll happily admit I don’t have this entire page of notes memorized. What’s more important is taking notes and knowing where to look when you need to recall something or solve a problem.
Treat these notes as a review. You should be nodding your head, yes, as you go through them. Learn and retain as many concepts as possible. There's no shortcut to being a security pro. Put in the work, and do great.
Let me know how you do. Good luck!
Table of Contents
1. Security and Risk Management
The first domain starts us off with the basics of information security and risk management. Expect to see principles of confidentiality, availability, and integrity here. Risk management is also huge for threat modeling and making decisions.
- Confidentiality – encryption
- Integrity – hash
- Availability – resiliency, HA, load balancing, failover clustering, and fault tolerance
Planning
- Strategic – defines the organization’s security purpose. Long term is 5 years.
- Tactical – midterm plan developed to provide more details on how to accomplish goals.
- Operational – short term, highly detailed plan based on the strategic and tactical plan.
Processes
- Acquisitions – state of IT integration can be difficult to determine.
- Divestitures – how to split IT services, especially user management.
- Governance committees – governing vendors, projects, state of IT, architecture, and more.
Due Care
Reasonable care to protect the interest of an organization. Due care is a legal liability concept that defines the minimum level of information protection that a business must achieve. Sometimes called Prudent Man Rule.
Due Diligence
Effort to maintain due care. Practicing due diligence is a defense against negligence.
Note: Wikipedia redirects Due Care to Due Diligence. For the exam, these are different definitions/topics.
Compliance Requirements
- Civil Law (legal system) – the most common legal system worldwide; a judge's ruling typically does not set precedent
- Common Law – used in US, Canada, UK, former British Colonies, judge ruling can set precedent
- Criminal Law – laws against society
- Civil Law (civil disputes) – person vs. person disputes, such as contract and liability cases
- Religious Law – religious foundation
- Customary Law – common, local, and otherwise accepted practices that sometimes form laws
- Regulatory Requirements – compliance factors from laws, regulations, and standards: SOX, GLBA, HIPAA, FISMA, and the standard PCI DSS.
Privacy Requirements
GDPR and Privacy Shield
GDPR is a privacy regulation in EU law for data protection on all individuals within the European Union (EU) and the European Economic Area (EEA). It also deals with the transfer of data outside the EU. In short, if you do business with European citizens, you need to know about this, regardless of whether you live in the EU or not.
The goal is to put control back in the hands of ordinary citizens and simplify the regulatory environment. Main items include:
- In case of data breach, the companies must inform the authorities within 72 hours.
- Every EU country must create a central data authority.
- Individuals must have access to their own data.
- An individual's information must be transferable from one service provider to another (data portability).
- Individuals have the right to be forgotten. All their information should be able to be deleted.
EU–US Privacy Shield
In October 2015 the European Court of Justice declared the previous framework (International Safe Harbor Privacy Principles) invalid. The European Commission and the U.S. Government then began talks about a new framework, and the resulting EU–US Privacy Shield was announced on February 2, 2016.
Intellectual Property
- Trademark is a recognizable sign, design, or expression. It identifies products or services, and rights must be maintained through lawful use. Rights will cease if a mark is not actively used, normally after 5 years.
- Dilution occurs when someone uses a famous mark in a manner that blurs or tarnishes the mark.
- Patents protect inventions for 20 years, after which they become public domain. To qualify for a patent, an invention:
- Must be new
- Must be useful
- Must not be obvious
- Copyrights protect artistic, literary, and musical works, and even source code.
- Licensing covers how the software will be used.
- Trade Secret – competitive advantage, damaging if leaked
Professional Ethics
ISC2 Code of Professional Ethics:
- Protect society, the common good, necessary public trust and confidence, and the infrastructure.
- Act honorably, honestly, justly, responsibly, and legally.
- Provide diligent and competent service to principals.
- Advance and protect the profession.
Documentation Types
- Policies are a high-level overview of the company's security program. A policy must contain:
- Purpose of the policy.
- Scope of the policy.
- Responsibility of the people involved.
- Compliance of the policy, how to measure it, and clear consequences of non-compliance.
- Make them short, understandable, and use clear, authoritative language, like must and will.
- Standards define hardware and software that are required for use.
- Procedures explain in detail how to achieve a task, step-by-step.
- Guidelines are discretionary, recommended advice to users.
- Baselines provide a security minimum, automating standards.
Operations Security
- Threat – event that could cause harm.
- Vulnerability – weakness in a system.
- Asset – computer resource.
Risk Management
Risk = Threats x Vulnerabilities x Impact (or asset value)
- Risk Avoidance – change course to avoid risk entirely.
- Risk Mitigation – move forward with risk after installing a safeguard to lessen the blow.
- Risk Assignment – or Risk Transfer, placing risk in the hands of another organization. Requirements usually have to be met as this isn’t a silver bullet.
- Risk Acceptance – based on Risk Tolerance, occurs when it’s more expensive to protect an asset than it is to lose it outright. This must be documented.
- Risk Deterrence – deterrence in place to warn about non-compliance.
- Residual Risk – risk leftover after safeguards are put in place.
Threat Modeling
Threat modeling is the process of identifying, understanding, and categorizing potential threats, including threats from attack sources.
DREAD was previously used at Microsoft and OpenStack to assess threats against the organization. The mnemonic helps you remember the five categories used to rate security threats. A score of 0 to 10 is given to each category, then the scores are added and divided by 5 to calculate the final risk score (a scoring sketch follows the list). The categories are:
- Damage – how bad would an attack be?
- 0 = no damage
- 10 = complete destruction
- Reproducibility – how easy is it to reproduce the attack?
- 0 = impossible
- 10 = easy and without authentication
- Exploitability – how much work is it to launch the attack?
- 0 = advanced knowledge and tools
- 10 = little knowledge, a web browser
- Affected users – how many people will be affected?
- 0 = none
- 10 = all
- Discoverability – how easy is it to discover the threat?
- 0 = nearly impossible, source code or administrator access required
- 10 = visible easily, from a web browser
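A minimal DREAD scoring sketch in Python (illustrative only; the function name and example ratings are my own, not part of any official tooling):

```python
# Rate each DREAD category from 0 to 10, then average the five scores.
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    categories = [damage, reproducibility, exploitability, affected_users, discoverability]
    if any(not 0 <= c <= 10 for c in categories):
        raise ValueError("each DREAD category must be rated 0-10")
    return sum(categories) / len(categories)

# Example: easy to reproduce and exploit, but only a few users are affected.
print(dread_score(damage=6, reproducibility=9, exploitability=8,
                  affected_users=2, discoverability=7))  # 6.4
```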
PASTA is a risk-centric threat-modeling framework developed in 2012. It contains seven stages, each with multiple activities:
- Define Objectives (DO), identify Business Objectives, identify Business Compliance Requirements (PCI DSS, HIPAA, etc.)
- Define Technical Scope (DTS)
- Application Decomposition and Analysis (ADA)
- Threat Analysis (TA)
- Vulnerability & Weakness Analysis (WVA)
- Attack Modeling & Simulation (AMS)
- Risk & Impact Analysis (RIA)
STRIDE is an acronym for:
- Spoofing
- Tampering – modifying data, in transit, or stored
- Repudiation
- Information disclosure
- Denial of Service
- Elevation of privilege
VAST is a threat modeling concept based on Agile project management and programming principles.
Trike uses threat models as a risk-management tool and to satisfy the security auditing process. Threat models are based on a “requirements model.” The requirements model establishes the stakeholder-defined “acceptable” level of risk assigned to each asset class. Analysis of the requirements model yields a threat model from which threats are enumerated and assigned risk values. The completed threat model is used to construct a risk model based on assets, roles, actions, and calculated risk exposure.
Risk Assessment
Quantitative Analysis calculates the monetary loss, in dollars per year, for an asset. It then helps determine how much it is reasonable to spend to protect that asset. Here's what's involved (a worked sketch follows the list):
- AV is the cost of an asset.
- EF is the percentage of loss.
- SLE is the loss in a single event if a threat is realized.
- SLE = AV * EF
- ARO is how often a threat will be successful per year.
- ALE is the estimated loss of an asset per year.
- ALE = ARO * SLE
- Likelihood Assessment is the process of estimating the ARO, how many times a risk might materialize in a typical year.
- Controls gap is the amount of risk reduced by implementing safeguards; total risk minus the controls gap equals residual risk.
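Here's a worked sketch of the formulas above in Python (the dollar figures are made up for illustration):

```python
# SLE = AV * EF; ALE = ARO * SLE
def single_loss_expectancy(asset_value, exposure_factor):
    """Dollar loss from a single realized threat event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(aro, sle):
    """Expected yearly loss: annualized rate of occurrence times SLE."""
    return aro * sle

# Example: a $200,000 asset, 25% damaged per incident, two incidents per year.
sle = single_loss_expectancy(200_000, 0.25)   # $50,000
ale = annualized_loss_expectancy(2, sle)      # $100,000
print(sle, ale)
```

If a safeguard costs less per year than the ALE reduction it provides, it's usually worth implementing.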
Qualitative assessment is a non-monetary calculation that attempts to showcase other important factors like:
- Loss of goodwill among client base
- Loss of employees after prolonged downtime
- Social and ethical responsibilities to the community
- Negative publicity
A purely qualitative risk analysis is possible because it ranks the seriousness of threats and the sensitivity of assets into grades or classes, such as low, medium, and high.
- Low – minor inconvenience that could be tolerated for a short period of time.
- Medium – could result in damage to the organization or cost a moderate amount of money to repair.
- High – would result in loss of goodwill between the company and clients or employees. Could potentially lead to prolonged loss, fines, and other legal action.
Delphi Method is a structured communication technique or method, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. The experts answer questionnaires in two or more rounds. After each round, a facilitator or change agent provides an anonymized summary of the experts’ forecasts from the previous round as well as the reasons they provided for their judgments. Delphi is a qualitative risk analysis method.
OCTAVE is a risk assessment suite of tools, methods, and techniques that provides two alternatives to the original model, which was developed for organizations with at least 300 workers. OCTAVE-S is aimed at helping companies that don't have much in the way of security and risk management resources. OCTAVE-Allegro takes a more streamlined approach.
NIST 800-30 is a systematic methodology used by senior management to reduce mission risk. Risk mitigation can be achieved through any of the following risk mitigation options:
- Risk Assumption – to accept the potential risk and continue operating the IT system or to implement controls to lower the risk to an acceptable level.
- Risk Avoidance – to avoid the risk by eliminating the risk cause and/or consequence (not use certain system functions or power system down when something is identified).
- Risk Limitation – to limit the risk by implementing controls that minimize the adverse impact of a threat’s exercising a vulnerability (use of supporting, preventive, and/or detective controls).
- Risk Planning – to manage risk by developing a risk mitigation plan that prioritizes, implements, and maintains controls.
- Research and Acknowledgement – to lower the risk of loss by acknowledging the vulnerability or flaw and researching controls to correct the vulnerability.
- Risk Transfer – to transfer the risk by using other options to compensate for the loss, such as purchasing insurance.
MTD (Maximum Tolerable Downtime) is a measurement of how long the company can operate without a specific resource. General MTD estimates are:
- Critical – minutes to hours
- Urgent – 24 hours
- Important – 72 hours
- Normal – 7 days
- Non-essential – 30 days
Defense in Depth is a strategy to defend a system using multiple ways to defend against similar attacks. It is a layering tactic, conceived by the National Security Agency (NSA) as a comprehensive approach to information and electronic security. Even using a different type of control (physical, logical, and administrative) is an example of defense in depth.
Access Control
Administrative Access Control
- Procedures and Policies come from management. They must be enforced.
- Supervisory Structure makes the supervisor accountable for the actions of their team.
- Personnel controls are safeguards to check risky behaviors.
- Job Rotation, allows detection of fraud or improper handling of tasks.
- Separation of duties, makes sure no one individual can carry out a critical task.
- Change of Status controls indicate what security actions should be taken when an employee is hired, terminated, suspended, moved, or promoted.
- Testing
- This control states that all security controls, mechanisms, and procedures are tested on a periodic basis to ensure that they properly support the security policy, goals, and objectives.
- The testing can be a drill to test reactions to a physical attack or disruption of the network, a penetration test of the firewalls and perimeter network to uncover vulnerabilities, a query to employees to gauge their knowledge, or a review of the procedures and standards to make sure they still align with business or technology changes that have been implemented.
- Security Awareness and Training control helps employees understand how to properly access resources, why access controls are in place, and the consequences for not following policy.
- Examples:
- Information Classification
- Investigations
- Job Rotation
- Monitoring And Supervising
- Personnel Procedures
- Security Policy
- Security Awareness And Training
- Separation Of Duties
- Testing
Technical Access Control
- Sometimes called Logical Control
- Software, applications, OS features, network appliances, etc. to limit subject access to objects.
- Network access firewalls, switches, and routers can limit access to resources.
- Network Architecture defines the logical and physical layout of the network, and also the access control mechanisms between different network segments.
- System Access is based on the subject's rights and permissions (often user rights), the user's clearance level, and the data classification.
- Encryption protects data at rest (stored) or data in transit (transmitting through network).
- Auditing tracks activity through usage of hardware and software. It helps point out weaknesses in other technical controls so the necessary changes can be made.
- Examples:
- ACLs
- Alarms And Alerts
- Antivirus Software
- Audit Logs
- Encryption
- Firewalls
- IDS
- Routers
- Smart Cards
Types of Alarm systems:
- Local alarm system – an alarm sounds locally and can be heard up to 400 feet away.
- Central station system – the alarm is silent locally, but offsite monitoring agents are notified so they can respond to the security breach. Most residential security systems are like this and are well known, such as ADT and Brinks.
- Proprietary system – this is similar to a central station system with the exception that the host organization has its own onsite staff to respond to security breaches.
- Auxiliary station – after the security perimeter is breached, emergency services are notified to respond to the incident and arrive at the location. This could include fire, police, and medical services.
Physical Access Control
- Access control that physically protects the asset.
- It can also physically remove or control functionalities.
- Perimeter Security implementation is a set of physical access controls that allow the company to prevent intrusion into its buildings, factories, datacenters, etc.
- Computer Controls can be a lock on the cover of the computer to protect the internal parts of the computer, or the removal of the CD-ROM, USB ports, and more to prevent local data infiltration or exfiltration. A Faraday cage prevents electromagnetic wave leaks.
- Work Area Separation is the separation of certain employees from others. Teams working on very sensitive data should not share space with other employees.
- Network Segregation is the physical separation of certain networks. The DMZ should not be on the same switches as the users. The network racks should be accessible only by authorized users.
- Data Backups are a physical control to ensure that information can still be accessed after an emergency or a disruption of the network or a system.
- Cabling should be done in a way to maintain safety and prevent sniffing. Shielding avoids electromagnetic crosstalk between cables.
- Control Zone is a specific area that surrounds and protects network devices that emit electrical signals. These electrical signals can travel a certain distance and can be contained by a specially made material, which is used to construct the control zone.
- Examples:
- Alarms
- Backups
- Badge System
- Biometric System
- Closed-Circuit TVs
- Dogs
- Electronic Lock
- Fences
- Guards
- Laptop Locks
- Lighting
- Locks
- Mantrap Doors
- Mantraps
- Motion Detectors
- Safe Storage Area Of Backups
- Security Guards
Types of Controls
- Preventative – avoid security events through defense strategies.
- Access Control Methods
- Alarm Systems
- Antivirus Software
- Auditing
- Biometrics
- CCTV
- Data Classification
- Encryption
- Fences
- Job Rotation
- Lighting
- Locks
- Mantraps
- Penetration Testing
- Security Awareness and Training
- Security Policies
- Separation Of Duties
- Smart Cards
- Detective – find unauthorized activities.
- Audit Trails
- Guard Dogs
- Honey Pots
- IDS
- Incident Investigations
- IPS
- Job Rotation
- Mandatory Vacations
- Motion Detectors
- Reviewing CCTV
- Reviewing Logs
- Security Guards
- Violation Reports
- Deterrent – discourage security violations.
- Auditing
- Awareness and Training
- Encryption
- Fences
- Firewalls
- Locks
- Mantraps
- Security Badges
- Security Cameras
- Security Guards
- Separation Of Duties
- Tasks or Procedures
- Trespass Or Intrusion Alarms
- Corrective – correct undesirable events.
- Antivirus
- Business Continuity Planning
- Security Policies
- Recovery – restore resources and capabilities.
- Antivirus Software
- Backup and Restore
- Database Shadowing
- Fault Tolerant Drive Systems
- Server Clustering
- Directive – administrative actions designed to compel compliance.
- Awareness and Training
- Exit Signs
- NDA
- Posted Notifications
- Compensatory – alternatives to support other controls.
- Additional logging to support policy
2. Asset Security
This domain covers the classification and ownership of information, systems, and business processes (data and assets). It addresses the collection, handling, and protection of information throughout its lifecycle.
The collection and storage of information must include data retention. Retention must be considered in light of organizational, legal, and regulatory requirements.
IT asset management (ITAM) is the set of business practices that join financial, contractual, and inventory functions to support life cycle management and strategic decision making for the IT environment. Assets include software and hardware found within the business environment.
IT asset management, also called IT inventory management, is an important part of an organization’s strategy. It usually involves gathering detailed hardware and software inventory information which is used to make decisions on redistribution and future purchases.
IT inventory management helps organizations manage their systems more effectively and saves time and money by avoiding unnecessary asset purchases and promoting the reuse of existing resources. Organizations that develop and maintain an effective IT asset management program further minimize the incremental risks and related costs of advancing IT portfolio infrastructure projects based on old, incomplete, and/or less accurate information.
Inventory management deals with what the assets are, where they are, and who owns them. Configuration management is another layer on top of inventory management. The stages of the data management process are below:
- Capture/Collect
- Digitalization
- Storage
- Analysis
- Presentation
- Use
FIPS 199 helps organizations categorize their information systems. The criteria used to classify data are below:
- Usefulness
- Timeliness
- Value
- Lifetime
- Disclosure Damage Assessment
- Modification Damage Assessment
- Security Implications (of use on a broad scale)
- Storage
Security Testing and Evaluation
FISMA requires every government agency to pass Security Testing and Evaluation, a process that contains 3 categories:
- Management Controls focus on business process administration and risk management.
- Operational Controls focus on the processes that keep the business running.
- Technical Controls focus on processes or configuration on systems.
Clearance
Who has access to what? If a subject needs access to something they don’t have access to, a formal access approval process is to be followed. Furthermore, the subject must have a need to know.
Government
- Top secret
- Secret
- Confidential
- Sensitive (SBU, limited distribution)
- Unclassified
Private
- Confidential
- Private
- Sensitive
- Public
Data Ownership
- Data Owners – usually management or senior management. They approve access to data.
- Data Processors – those who read and edit the data regularly. Must clearly understand their responsibility with the data.
- Data Remanence – recoverable data after deletion. Here’s how to not make the data recoverable:
- Secure deletion by overwriting of data, using 1s and 0s.
- Degaussing – removes or reduces magnetic fields on disk drives.
- Destroying the media, by shredding, smashing, and other means.
- Collection Limitation – important security control that’s often overlooked. Don’t collect data you don’t need. Create a Privacy Policy that specifies what data is collected and how it’s used.
Data Privacy
There are 3 main ways to protect private information by modifying it so that it is much harder, if not impossible, to link the data back to the original person (a small sketch follows the list):
- Anonymization removes the personal data that can be used to identify the original subject. It is not reversible.
- Pseudonymization changes the name of the subject to a fictional name or an ID. It is reversible if the mapping between the new name and the old name exists.
- Tokenization is similar to pseudonymization, but requires a complex process to retrieve original data. It is reversible.
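A small pseudonymization sketch in Python (the token format and mapping table are hypothetical, shown only to illustrate reversibility):

```python
import secrets

pseudonym_map = {}  # keeping this mapping is what makes pseudonymization reversible

def pseudonymize(name: str) -> str:
    token = "user-" + secrets.token_hex(4)   # random, non-identifying alias
    pseudonym_map[token] = name
    return token

def re_identify(token: str) -> str:
    return pseudonym_map[token]

alias = pseudonymize("Alice Example")
print(alias, "->", re_identify(alias))
```

Anonymization, by contrast, would discard the mapping (and any other identifying attributes) so the link can never be rebuilt.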
Data Attacks
- XSRF (cross-site request forgery) is an attack that rides an existing authenticated session on a normal site. Think of it as session riding. Common defenses include anti-CSRF tokens and re-validating the user (CAPTCHA, SMS) before sensitive actions.
- Side-Channel attacks are on the system itself (hardware), as opposed to software. Information of worth include timing information, power consumption, electromagnetic leaks, and even sound. Here are a few examples:
- Cache attack – attacks based on the attacker's ability to monitor cache accesses made by the victim in a shared physical system, as in a virtualized environment or cloud service.
- Timing attack – attacks based on measuring how much time various tasks take to perform.
- Power-monitoring attack – attacks that make use of varying power consumption by the hardware during computation.
- Electromagnetic attack – attacks based on leaked electromagnetic radiation, which can directly provide plaintext and other information. Such measurements can be used to infer cryptographic keys using techniques equivalent to those in power analysis or can be used in non-cryptographic attacks. For example, TEMPEST (Van Eck phreaking or radiation monitoring) attacks.
- Acoustic cryptanalysis – attacks that exploit sound produced during a computation, rather like power analysis.
- Differential fault analysis – in which secrets are discovered by introducing faults in a computation.
- Data remanence – in which sensitive data are read after supposedly having been deleted. (Cold boot attack)
- Software-initiated fault attacks – Currently a rare class of side-channels, Row hammer is an example in which off-limits memory can be changed by accessing adjacent memory too often (causing state retention loss).
- Optical – in which secrets and sensitive data can be read by visual recording using a high resolution camera, or other devices that have such capabilities.
- Meet In The Middle Attack is a generic space–time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The meet-in-the-middle attack is the primary reason why Double DES is not used and why 3DES with a 168-bit key provides only about 112 bits of effective security.
- A skimmer is a device installed on an ATM or other machine where a user slides in a card. The skimmer reads the card's magnetic stripe or scans the card number.
3. Security Architecture and Engineering
Security engineering takes the system architecture, using the capabilities therein, and then protects against malicious acts, human error, hardware failure and natural disasters. Besides using system architecture, security engineering involves the use of secure design principles that use established security models within the scope of organizational goals, security policies, and more.
Security Models
- Bell-LaPadula Model is a model focused on confidentiality.
- No Read Up is the simple rule for this model.
- No Write Down is the star rule for this model.
- The Strong Star rule says a subject can read and write only at its own level (no reading or writing up or down).
- Biba Model is a model focused on integrity.
- No Read Down is the simple rule for this model.
- No Write Up is the star rule for this model.
- Clark-Wilson Model is a model focused on integrity.
- The Clark-Wilson model enforces separation of duties to further protect the integrity of data.
- This model employs limited interfaces or programs to control and maintain object integrity.
- Brewer-Nash is also called the Chinese Wall model. A subject can write to an object only if it cannot read another object in a different dataset. It is an information flow model that provides an access control mechanism that can change dynamically depending on a user's authorization and previous actions. The main goal is to protect against conflicts of interest arising from a user's access attempts. The Chinese Wall model is context oriented in that it prevents a worker consulting for one firm from accessing data belonging to another, thereby preventing any conflict of interest.
- Non-Interference, also named Goguen-Meseguer, is a strict multilevel security policy model, first described by Goguen and Meseguer in 1982 and amplified further in 1984. Basically, a computer is modeled as a machine with inputs and outputs. Inputs and outputs are classified as either low (low sensitivity, not highly classified) or high (high sensitivity, highly classified). A computer has the non-interference property if and only if any sequence of low inputs will produce the same low outputs, regardless of what the high-level inputs are.
- If a low (uncleared) user is working on the machine, it will respond in exactly the same manner (on the low outputs) whether or not a high (cleared) user is working with sensitive data. The low user will not be able to acquire any information about the activities (if any) of the high user.
- Graham-Denning is an Access Control Matrix model that addresses how to define a set of basic rights governing how specific subjects can execute security functions on an object.
- The model has eight basic protection rules (actions) that outline:
- How to securely create an object.
- How to securely create a subject.
- How to securely delete an object.
- How to securely delete a subject.
- How to securely provide the read access right.
- How to securely provide the grant access right.
- How to securely provide the delete access right.
- How to securely provide the transfer access right.
- Each object has an owner that has special rights on it and each subject has another subject (controller) with special rights.
- The model is based on the ACM model where rows correspond to subjects and columns correspond to objects and subjects. Each element contains a set of rights between subjects. When executing one of the 8 rules, the matrix is changed: a new column is added for that object and the subject that created it becomes its owner.
- Zachman Framework is an enterprise architecture framework created at IBM in the 1980s. It's a two-dimensional matrix that describes an architecture from different stakeholders' points of view.
- Concentric Circles of protection, sometimes called security in depth, is a concept that involves the use of multiple “rings” or “layers” of security. The first layer is located at the boundary of the site, and additional layers are provided as you move inward through the building toward the high-value assets.
- Sutherland model is based on the idea of defining a set of system states, initial states, and state transitions. Through the use of and limitations to only these predetermined secure states, integrity is maintained, and interference is prohibited. The Sutherland model focuses on preventing interference in support of integrity.
- Lipner Model combines elements of the Bell-LaPadula and Biba models to provide both confidentiality and integrity.
Access Control Models
MAC
MAC is a model based on data classification and object labels. It's important to have an accurate classification of the data to have a functional MAC system. MAC has different security modes, depending on the type of users, how the system is accessed, etc.
- Dedicated security mode
- Formal access approval for ALL info on system.
- Valid need to know for ALL info on system.
- All info, only having one security clearance.
- System high security mode
- Formal access approval for ALL info on system.
- Valid need to know for ALL info on system.
- Some info, only having one security clearance and multiple projects (need to know).
- Compartmented security mode
- Formal access approval for ALL info on system.
- Valid need to know for SOME info on system.
- Some info, multiple security clearances and multiple projects.
- Multilevel security mode
- Formal access approval for SOME info on system.
- Valid need to know for SOME info on system.
- Some info, parallel compartmented security mode.
MAC Environments
- Hierarchical environments are structured like a tree. The top of the tree gives access to the entire tree. It’s possible to read below in the tree but not in another branch.
- In compartmentalized environments, there is no relationship between the different security domains hosted in them. To gain access to an object, a subject needs the exact clearance for the object's security domain.
- Hybrid environments combine hierarchical and compartmentalized approaches. A hybrid MAC environment provides the most granular control over access but becomes difficult to manage as the environment grows.
Other Types of Access Control
- DAC is a type of access control defined "as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong." The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission, perhaps indirectly, on to any other subject unless restrained by MAC. Linux is a DAC system by default, but it's possible to layer other access controls on top.
- RBAC is an access control type defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments.
- ABAC is also known as policy-based access control. It defines an access control paradigm whereby access rights are granted to users through policies that combine attributes. The policies can use any type of attribute, and this model supports Boolean logic (a minimal sketch follows this list).
- The PEP or Policy Enforcement Point is responsible for protecting the apps and data you want to apply ABAC to. The PEP inspects the request and generates an authorization request from it which it sends to the PDP.
- The PDP or Policy Decision Point is the brain of the architecture. This is the piece which evaluates incoming requests against policies it has been configured with. The PDP returns a Permit / Deny decision. The PDP may also use PIPs to retrieve missing metadata.
- The PIP or Policy Information Point bridges the PDP to external sources of attributes, such as databases or LDAP.
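A minimal ABAC sketch in Python, assuming a made-up policy and attribute names; it only illustrates how a PEP defers to a PDP that evaluates attributes with Boolean logic:

```python
# Hypothetical PDP: evaluates attribute-based rules and returns Permit/Deny.
def pdp_decide(subject: dict, resource: dict, action: str) -> str:
    if action == "read" and resource.get("classification") == "public":
        return "Permit"
    if (action == "edit"
            and subject.get("role") == "manager"
            and subject.get("department") == resource.get("department")):
        return "Permit"
    return "Deny"

# Hypothetical PEP: intercepts the request and enforces the PDP's decision.
def pep_enforce(subject: dict, resource: dict, action: str) -> bool:
    return pdp_decide(subject, resource, action) == "Permit"

print(pep_enforce({"role": "manager", "department": "finance"},
                  {"department": "finance", "classification": "internal"},
                  "edit"))  # True
```

A real deployment would also involve a PIP supplying extra attributes (e.g., from LDAP) when the PDP lacks them.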
Security Evaluation Methods
- TCSEC is a Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information.
- TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, TCSEC was eventually replaced by the Common Criteria international standard published in 2005.
- The Red Book, also known as the Trusted Network Interpretation, is a supplement to the Orange Book that describes security evaluation criteria for networked systems.
- The rating schema of TCSEC:
- D – Minimal protection
- Reserved for those systems that have been evaluated but that fail to meet the requirements for a higher division.
- C – Discretionary protection (DAC)
- C1 – Discretionary Security Protection
- C2 – Controlled Access Protection
- B – Mandatory protection (MAC)
- B1 – Labeled Security Protection
- B2 – Structured Protection
- B3 – Security Domains
- Satisfies reference monitor requirements
- A – Verified protection
- A1 – Verified Design
- Beyond A1
- ITSEC
- Common Criteria is a framework for testing products in which computer system users can specify their Security Functional and Assurance Requirements (SFRs and SARs, respectively) in a Security Target (ST). The Common Criteria for Information Technology Security Evaluation is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 3.1 revision 5. Each TOE (Target of Evaluation) has:
- PP, a document, which identifies security requirements for a class of security devices relevant to that user for a particular purpose. Product vendors can choose to implement products that comply with one or more PPs.
- ST is a document that identifies the security properties of the target of evaluation. The ST may claim conformance with one or more PPs. The TOE is evaluated against the SFRs established in its ST. This allows vendors to tailor the evaluation to accurately match the intended capabilities of their product. This means that devices do not need to meet the same functional requirements as other devices; they may be evaluated against different lists.
- SFR specify individual security functions which may be provided by a product. The Common Criteria presents a standard catalog of such functions.
- SARs describe the measures taken during development and evaluation of the product to assure compliance with the claimed security functionality.
- EAL is the Evaluation Assurance Level achieved by a TOE after testing. EALs are separated into 7 levels:
- EAL1: Functionally Tested
- EAL2: Structurally Tested
- EAL3: Methodically Tested and Checked
- EAL4: Methodically Designed, Tested and Reviewed
- EAL5: Semiformally Designed and Tested
- EAL6: Semiformally Verified Design and Tested
- EAL7: Formally Verified Design and Tested
ITIL
ITIL is an operational framework created by the CCTA at the request of the UK government in the 1980s. ITIL provides documentation on IT best practices to improve performance and productivity and to reduce cost. It's divided into 5 main categories:
- Service Strategy
- Service Design
- Service Transition
- Service Operation
- Continual Service Improvement
ISO and BS on Security Governance
- ISO 27001 is derived from BS 7799. It’s focused on Security Governance.
- ISO 27002 is derived from BS 7799. It's a security standard that recommends security controls based on industry best practices.
The Capability Maturity Model was originally created for software development but can be adapted to security management. Each phase corresponds to a certain level of maturity in the documentation and the controls put in place. The first phase, initial, is where nothing is in place. The team handles each incident as it comes up. The last phase, optimizing, is where the processes are sophisticated and the organization is able to adapt to new threats.
- Initial (chaotic, ad hoc, individual heroics) – the starting point for use of a new or undocumented repeat process.
- Repeatable – the process is at least documented sufficiently such that repeating the same steps may be attempted.
- Defined – the process is defined/confirmed as a standard business process.
- Capable – also called managed, the process is quantitatively managed in accordance with agreed-upon metrics.
- Efficient – also called optimizing, process management includes deliberate process optimization/improvement.
Security Capabilities of Information Systems
- Memory Protection – prevents one application or service from modifying another.
- Process Isolation – prevents one process from modifying another.
- Hardware Segmentation – the OS maps processes to dedicated memory locations.
- Virtualization – prevents attacks on hypervisors and other VMs.
- Trusted Platform Module – TPM is a cryptographic chip included with computers or servers.
- Interfaces – way that 2 or more systems communicate.
- Encryption – way to communicate privately. When an interface doesn’t provide a way to do this, then IPsec or another transport mechanism can be used to encrypt the communication.
- Signing – way to provide non-repudiation.
- Fault Tolerance – way to keep system available.
Components and Security
Covert Timing Channel conveys information by altering the performance of a system component in a controlled manner. It’s very difficult to detect this type of covert channel. Covert Storage Channel is writing to a file accessible by another process. To avoid it, the read/write access must be controlled.
A nonce, short for number used once, is an arbitrary number that can be used just once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that old communications cannot be reused in replay attacks. They can also be useful as initialization vectors and in cryptographic hash functions.
An initialization vector (IV) is an arbitrary number that can be used along with a secret key for data encryption. This number, also called a nonce, is employed only one time in any session. The use of an IV prevents repetition in data encryption, making it more difficult for a hacker using a dictionary attack to find patterns and break a cipher.
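A tiny sketch of generating a fresh nonce/IV in Python (the sizes shown are common conventions, not requirements of any specific protocol):

```python
import os
import secrets

iv = os.urandom(16)              # 16 bytes is the AES block size, typical for CBC IVs
nonce = secrets.token_bytes(12)  # 12 bytes is a common nonce size for AES-GCM
print(iv.hex(), nonce.hex())     # never reuse either value with the same key
```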
DRAM uses a capacitor to store information, unlike SRAM that uses flip-flops. DRAM requires power to keep the information, as it constantly needs to be refreshed due to the capacitor’s charge leak. DRAM is cheaper and slower than SRAM.
CVE is the part of SCAP that provides a naming system to describe security vulnerabilities. CVSS is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to the threat. Scores are calculated based on a formula that depends on several metrics that approximate the ease of the exploit and the impact of the exploit. Scores range from 0 to 10, with 10 being the most severe. Here are the 3 groups of CVSS metrics:
- Base metrics indicate the inherent severity of the vulnerability and are given by the vendor or the entity that found the vulnerability. They have the largest influence on the CVSS score.
- Temporal metrics indicate the current urgency of the vulnerability; they are also given by the vendor or the entity that found the vulnerability.
- Environmental metrics are set by the end user. They indicate how an environment or entire organization is impacted. This group is optional.
The base metrics feed into the temporal score, which in turn feeds into the environmental score. XCCDF is the SCAP component that describes security checklists. Here are the layers of the SABSA Matrix:
- Contextual
- Conceptual
- Logical
- Physical
- Component
- Operational
Processor Ring
- Ring 0 – Kernel
- Ring 1 – OS components
- Ring 2 – Device drivers
- Ring 3 – User applications
- Application in Ring 0 can access data in Ring 1, Ring 2 and Ring 3.
- Application in Ring 1 can access data in Ring 2 and Ring 3.
- Application in Ring 2 can access data in Ring 3.
Boolean Operator
- AND (∧) returns true only if both inputs are true.
- OR (∨) returns true if at least one input is true (both true also returns true).
- NOT (~) takes a single input and returns the opposite value.
- XOR (⊕) returns true if the two inputs are different (see the sketch after this list).
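A quick truth-table sketch in Python; the XOR round trip at the end shows why XOR is the workhorse of stream ciphers and one-time pads:

```python
# Print the truth table for AND, OR, XOR, and NOT.
for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", a and b, "OR:", a or b, "XOR:", a ^ b, "NOT a:", not a)

# XOR is reversible: applying the same key twice restores the plaintext.
plain, key = 0b1011, 0b0110
cipher = plain ^ key
assert cipher ^ key == plain
```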
Cryptography
Apply Cryptography
The Cryptographic Lifecycle is focused on security. There are cryptographic limitations, along with algorithm and protocol governance. The older a cryptographic algorithm gets, the weaker it becomes. Computing power keeps increasing, and with enough exposure it's only a matter of time before an old algorithm gets cracked.
- Approved – NIST or FIPS recommended, algorithm and key length is safe today.
- Deprecated – algorithm and key length is OK to use, but there will be risk.
- Restricted – use of algorithm and/or key length should be avoided.
- Legacy – the algorithm and/or key length is outdated and should be avoided when possible.
- Disallowed – the algorithm and/or key length is no longer allowed for use.
Cryptographic Methods cover 3 types of encryption:
- Symmetric – uses the same key for encryption and decryption. It's faster and can use smaller keys, but you have to find a way to securely share the key, and it provides no non-repudiation. Symmetric algorithms offer stronger encryption per key bit than asymmetric algorithms (a short sketch follows this list).
- Asymmetric – uses different keys for encryption and decryption. There’s a public key and a private key. Slower, but best suited for sharing between 2 or more parties. RSA is a common asymmetric standard.
- Elliptic Curve – ECC is a newer form of asymmetric encryption. The benefit is you can use smaller keys, which is great for mobile devices. Quantum computing would put a hurtin’ on this implementation.
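A short symmetric-encryption sketch, assuming the third-party `cryptography` package (`pip install cryptography`); the message is just an example string:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the single shared secret both parties must protect
f = Fernet(key)
token = f.encrypt(b"the same key encrypts and decrypts")
print(f.decrypt(token))
```

Sharing `key` securely is exactly the distribution problem that asymmetric cryptography (or a key exchange such as Diffie-Hellman) solves.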
Key Management Practices
- Key Creation and Distribution – after the creation process, the key is sent to a user or a system. The key is stored in a secured store.
- Key Protection and Custody – split custody is a method in which 2 or more people share access to a key.
- Key Rotation – retires old keys and implements new ones.
- Key Destruction – keys can be put on temporary hold, revoked, expired, or destroyed.
- Key Escrow and Key Backup Recovery – storage of key and back up and recover process. PKIs have backup and recovery methods.
Public Key Infrastructure
Foundational technology for managing certificates. A PKI can be private (solely for your organization), you can acquire certificates from a trusted third-party provider, or you can use a combination of both.
- Can have multiple tiers.
- Should have a certificate policy and a certification practice statement (CPS).
- Certificate revocation information needs to be available to clients.
- Private keys and information about issued certificates can be stored in a database or a directory.
Key Clustering in cryptography is two different keys that generate the same ciphertext from the same plaintext by using the same cipher algorithm. A good cipher algorithm, using different keys on the same plaintext, should generate a different ciphertext regardless of the key length.
Zero Knowledge Proof is a method by which one party (the prover) can prove to another party (the verifier) that they know a value, without conveying any information apart from the fact that they know it. It is trivial to prove that one has knowledge of certain information by simply revealing it. The hard part is proving possession without revealing the hidden information or any additional information.
Encryption
- AES, also known by its original name Rijndael, is a specification for the encryption of electronic data established by NIST in 2001. It's a symmetric-key algorithm and uses a block size of 128 bits with three possible key lengths: 128, 192, and 256 bits.
- CHAP is an authentication protocol using a symmetric key. It’s protected against replay attacks and will reauthenticate the client during the session. It uses a 3-way handshake and is used in PPP and other protocols.
- Blowfish is a symmetric-key block cipher designed in 1993 by Bruce Schneier and included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variable key length from 32 bits up to 448 bits. It provides good encryption without yielding to cryptanalysis, but its small block size is now considered a weakness, and Schneier recommends Twofish for modern applications.
- Twofish is a symmetric key block cipher with a block size of 128 bits and key sizes up to 256 bits. It was one of the five finalists of the AES contest, but it was not selected for standardization. Twofish is related to the earlier block cipher Blowfish. The Twofish algorithm uses an encryption technique not found in other algorithms that XORs the plain text with a separate subkey before the first round of encryption. This method is called prewhitening.
- RSA is one of the first public key cryptosystems and is widely used for secure data transmission. It's a slow algorithm and generally isn't used to encrypt user data directly; instead it is used during key exchange, since symmetric key algorithms are faster. Replay attacks can't be done against RSA, but brute-force, mathematical, and timing attacks are fair game.
- DES is a symmetric key algorithm for the encryption of electronic data, published as a FIPS in 1977. It uses a 56-bit key, which is too short by modern standards. The block length is 64 bits. DES has multiple modes, ordered from best to worst below:
- CTR mode uses a 64-bit counter for feedback. Because the counter doesn't depend on the previous bits or block, CTR can encrypt blocks in parallel. CTR, like OFB, doesn't propagate errors.
- OFB mode makes a block cipher into a synchronous stream cipher. It generates keystream blocks, which are then XORed with the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows many error correcting codes to function normally even when applied before encryption.
- CFB is a block cipher mode that uses a memory buffer the same size as the block. Because each block must wait for the previous one to be processed, it has largely been retired. The Cipher Feedback (CFB) mode, similarly to CBC, makes a block cipher into a self-synchronizing stream cipher.
- CBC mode employs an IV and chaining to destroy cipher text patterns. Because CBC works in block mode, it decrypts a message one block at a time. Because it uses IV and chaining to prevent leaving text patterns through propagation, an error during a read or transfer could render the encrypted file unusable.
- ECB It’s the weakest DES mode. The disadvantage of this method is a lack of diffusion. Because ECB encrypts identical plaintext blocks into identical ciphertext blocks, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
- 3DES applies DES three times with two or three distinct keys (112 or 168 bits of key material) to compensate for DES's short key.
- S/MIME is a standard for public key encryption and signing of MIME data (mail). Developed by the RSA company, S/MIME provides the following cryptographic security services for electronic messaging applications:
- Authentication
- Message integrity
- Non-repudiation of origin (using digital signatures)
- Privacy
- Data security (using encryption)
- PGP is an encryption program that provides cryptographic privacy and authentication for data communication. PGP is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and whole disk partitions. Phil Zimmermann developed PGP in 1991. It uses a web of trust between users.
- SEAL is a stream cipher optimized for machines with a 32-bit word size and plenty of RAM. It uses a 160-bit key.
- Vigenère Cipher is a cipher that uses a square matrix (tabula recta) to encrypt text. It's an old cipher, first described in 1553.
- Book Cipher is a cipher that uses a known book to encipher a text.
- P2PE is a standard created by PCI DSS that encrypts the data from the bank card reader to the payment processor.
- E2EE is like P2PE, but the data may be decrypted at intermediate points before reaching the payment processor.
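A short demonstration of ECB's pattern leakage mentioned above, assuming the third-party `cryptography` package; AES is used here simply because modern libraries no longer offer DES for new designs, and the point (identical plaintext blocks produce identical ciphertext blocks under ECB) is the same for any block cipher:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
plaintext = b"REPEATED BLOCK!!" * 2   # two identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
cbc = Cipher(algorithms.AES(key), modes.CBC(os.urandom(16))).encryptor()

ecb_ct = ecb.update(plaintext) + ecb.finalize()
cbc_ct = cbc.update(plaintext) + cbc.finalize()

print(ecb_ct[:16] == ecb_ct[16:32])   # True: the repetition shows through
print(cbc_ct[:16] == cbc_ct[16:32])   # False: IV and chaining hide the pattern
```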
Hashes
- DSA is a FIPS for digital signatures. Messages are signed by the signer’s private key and the signatures are verified by the signer’s corresponding public key. The digital signature provides message authentication, integrity and non-repudiation. The three algorithms described in the FIPS are DSA, RSA, and ECDSA.
- SHA-1 is a cryptographic hash function which takes an input and produces a 160-bit hash value known as a message digest. It is typically rendered as a hexadecimal number, 40 characters long. It's deprecated due to collision attacks.
- SHA-2 is a set of cryptographic hash functions designed by the NSA. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
- HMAC is a keyed hash: a hash combined with a secret key, used to authenticate a message (see the sketch after this list).
- ECDSA is an implementation of DSA that uses an elliptic curve. For the same key size, ECDSA is more secure than DSA.
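A hashing and HMAC sketch using only Python's standard library (the key and message are example values):

```python
import hashlib
import hmac

message = b"verify me"
digest = hashlib.sha256(message).hexdigest()   # SHA-2 family, 256-bit digest

secret = b"shared-secret"                      # example key only
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# compare_digest avoids leaking information through timing side channels.
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(digest)
print(hmac.compare_digest(tag, expected))      # True
```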
Protocol/Standard
- X.509 is an ITU standard defining the format of public key certificates. This is what PKI uses. X.509 certificates are used in many Internet protocols, including TLS/SSL.
- CRL is “a list of digital certificates that have been revoked by the issuing certificate authority (CA) before their scheduled expiration date and should no longer be trusted.”
- OCSP is an internet protocol used for obtaining the revocation status of an X.509 digital certificate. It is described in RFC 6960 and is on the Internet standards track. It was created as an alternative to CRLs, specifically addressing certain problems associated with using CRLs in a public key infrastructure (PKI). Messages communicated via OCSP are encoded in ASN.1 and are usually communicated over HTTP. The “request/response” nature of these messages leads to OCSP servers being termed OCSP responders. The OCSP responder replies with “good”, “revoked”, or “unknown”. Unknown means the responder has no information about the certificate. Some web browsers use OCSP to validate HTTPS certificates; however, the most popular browser, Google Chrome, does not use OCSP by default and relies on its own revocation mechanism.
- PEM is a de facto file format for storing and sending cryptographic keys, certificates, and other data. Because DER produces a binary output, it can be challenging to transmit the resulting files through systems, like electronic mail, that only support ASCII. The PEM format solves this problem by encoding the binary data using base64. PEM also defines a one-line header, consisting of "-----BEGIN ", a label, and "-----", and a one-line footer, consisting of "-----END ", a label, and "-----". The label determines the type of message encoded. Common labels include "CERTIFICATE", "CERTIFICATE REQUEST", and "PRIVATE KEY".
- IPsec is a secure network protocol suite that authenticates and encrypts the packets of data sent over an Internet Protocol network. It's used to create VPNs. IPsec uses the following protocols:
- AH provides integrity and authentication for the IP packet, including the IP header. It is IP protocol number 51.
- ESP is a member of the IPsec protocol suite. It provides origin authenticity through source authentication, data integrity through hash functions, and confidentiality through encryption of IP packets. In transport mode, ESP does not protect the IP header; AH covers the entire packet. In tunnel mode, however, the original packet is encapsulated, encrypted, and integrity-checked inside a new IP packet. ESP is also used to provide integrity for L2TP.
- IPComp is a low level compression protocol for IP datagrams.
- IKE is the protocol used to set up an SA in the IPsec protocol suite. It uses ISAKMP and X.509.
- ISAKMP is a protocol used for establishing SAs and cryptographic keys in an Internet environment. It uses IKE or other key exchange protocols.
- Oakley Key Determination Protocol is a key-agreement protocol that allows authenticated parties to exchange keying material across an insecure connection using the Diffie–Hellman key exchange algorithm.
- S-HTTP is an obsolete alternative to the HTTPS protocol for encrypting web communications carried over HTTP. HTTPS and S-HTTP were both defined in the mid-1990s to address the need for encrypted web traffic. S-HTTP was used by Spyglass's web server, while Netscape and Microsoft supported HTTPS rather than S-HTTP, leading to HTTPS becoming the de facto standard mechanism for securing web communications. S-HTTP encrypts only the served page data and submitted data like POST fields, leaving the initiation of the protocol unchanged. Because of this, S-HTTP could be used concurrently with HTTP on the same port, as the unencrypted header would determine whether the rest of the transmission is encrypted. Encryption is done at the application layer. In contrast, HTTP over TLS wraps the entire communication within TLS, so the encryption starts before any protocol data is sent.
- SET is a communications protocol standard for securing credit card transactions over networks, specifically the Internet. SET was not itself a payment system. It failed to gain traction in the market. VISA now promotes the 3D Secure scheme. It uses RSA and DES, among others.
- DH key exchange is a method of securely exchanging cryptographic keys over a public channel and was one of the first public-key protocols, as originally conceptualized by Ralph Merkle. It was named after Whitfield Diffie and Martin Hellman and relies on the difficulty of the discrete logarithm problem (a toy sketch follows).
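A toy Diffie-Hellman sketch with deliberately tiny numbers; real deployments use 2048-bit (or larger) groups or elliptic-curve variants:

```python
p, g = 23, 5                      # public prime modulus and generator

a_private, b_private = 6, 15      # each party's secret exponent
A = pow(g, a_private, p)          # Alice publishes g^a mod p
B = pow(g, b_private, p)          # Bob publishes g^b mod p

shared_a = pow(B, a_private, p)   # Alice computes (g^b)^a mod p
shared_b = pow(A, b_private, p)   # Bob computes (g^a)^b mod p
assert shared_a == shared_b       # both sides derive the same shared secret
print(shared_a)                   # 2 with these toy values
```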
Cryptanalytic Attacks
- Brute Force – every possible combination is attempted. With enough time this attack will succeed, assuming you are searching the correct key space (see the toy sketch after this list).
- Ciphertext Only – you have samples of ciphertext. With enough ciphertext you could draw conclusions.
- Known Plaintext – you have plaintext and the matching ciphertext. The goal is to figure out what the key is so you can decrypt other messages using the same key.
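A toy brute-force sketch against a Caesar shift, checking each candidate key against a known plaintext (tying together the brute-force and known-plaintext ideas above); the key space here is only 26 keys, whereas real key spaces (e.g., 2^128) make this approach computationally infeasible:

```python
def caesar(text: str, shift: int) -> str:
    # Shift uppercase letters by `shift` positions, wrapping around the alphabet.
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

ciphertext = caesar("ATTACKATDAWN", 7)
for key in range(26):                     # try every possible key
    if caesar(ciphertext, -key) == "ATTACKATDAWN":
        print("key found:", key)          # key found: 7
```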
Certification and Accreditation
Certification involves the testing and evaluation of the technical and non-technical security features of an IT system to determine its compliance with a set of specified security requirements.
Accreditation is a process whereby a Designated Approval Authority (DAA) or other authorizing management official authorizes an IT system to operate for a specific purpose using a defined set of safeguards at an acceptable level of risk.
Additional information on Accreditation, C&A, RMF at SANS Reading Room.
Fire Extinguishers
There is no official standard in the US for the color of fire extinguishers, though they are typically red, except for the following:
- Class D extinguishers are usually yellow.
- Water and Class K wet chemical extinguishers are usually silver.
- Water mist extinguishers are usually white.
Class | Intended use |
---|---|
A | Ordinary combustibles (wood, paper, etc.) |
B | Flammable liquids and gases |
C | Energized electrical equipment |
D | Combustible metals |
K | Cooking oils and fats |
Gas-based Fire Suppression System
The Montreal Protocol (1989) limits the use of certain types of gas. Halon, for example, is no longer acceptable. See the following list below:
- FM-200 is a gas used primarily to protect server room or data center. It lowers the temperature of the room and is not dangerous to humans.
- CO₂ suppresses fire by removing the oxygen. It can kill humans, so it's not to be used around people.
- Dry Pipe setups are generally configured to start if the gas-based system failed to extinguish the fire.
- Dry Powders are mix of CO², water and chemical. It generally destroys the systems.
- FE-13 is the safest fire suppression system in an electrical environment. It’s also safe for humans.
Pipe System
- Wet Pipe systems are filled with water. When the sprinkler breaks due to the temperature increase, it releases the water.
- Dry Pipe systems are filled with compressed air. When the sprinkler breaks, the pressure drop allows the valve to open and release the water. It is used where temperatures are very low, to prevent the water from freezing in the pipe.
- Deluge Systems are almost like dry pipe systems. When the fire alarm is triggered, it also opens a valve to let the water flow.
NFPA standard 75 requires buildings hosting information technology to be able to withstand at least 60 minutes of fire exposure.
Fences and Lighting
The NIST standard pertaining to perimeter protection states that critical areas should be illuminated to a height of eight feet with two foot-candles of intensity (a foot-candle is a unit of illumination).
4. Communication and Network Security
This domain covers network architecture, transmission methods, transport protocols, control devices, and security measures used to protect information in transit.
OSI Model
The OSI model is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system. This means there is no mention of internal structure and specific technology. The model shows the interoperability of diverse communication systems with standard protocols and puts communication systems into abstraction layers.
The original version of the model defined seven layers. A layer serves the layer above it and is served by the layer below it. Two instances at the same layer are visualized as connected by a horizontal connection in that layer.
Layer | OSI Model | Details |
---|---|---|
7 | Application | HTTP, SMTP, DNS |
6 | Presentation | Compression, Encryption, Character Encoding, File Formats |
5 | Session | Managing dialog, SQL, RPC |
4 | Transport | Segments, TCP, UDP, TLS, SSL, SCTP, DCCP |
3 | Network | Datagrams/Packets, Routers, Layer 3 Switches, IPSec |
2 | Data Link | Frames, Switches, Bridges, ATM, Frame-Relay, PPTP, L2TP |
1 | Physical | Bits, Hubs, Repeaters, Cables, Hardware signals |
TCP/IP Model
TCP/IP is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundation protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). This model is divided into 4 layers:
- Application layer is where applications or processes create user data and communicate the data to other applications. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client-server model and peer-to-peer networking. This is the layer in which all higher-level protocols, such as SMTP, FTP, SSH, and HTTP, operate. Processes are addressed via ports.
- Transport layer performs host-to-host communications on either the same or different hosts and on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable datagram service. The Transmission Control Protocol provides flow-control, establishes the connection and reliable transmission of data.
- Internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also referred to as the layer that establishes internetworking. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has the connectivity to a network closer to the final data destination.
- Link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet layer datagrams to next-neighbor hosts.
TCP Connections
- Connection opening, three-way handshake
- Step 1: the client sends a SYN with its initial sequence number. Step 2: the server replies with a SYN/ACK, acknowledging the client's sequence number and sending its own. Step 3: the client replies with an ACK. At this point both client and server have received an acknowledgment of the connection.
- Steps 1 and 2 establish and acknowledge the connection parameter (sequence number) for one direction; steps 2 and 3 establish and acknowledge it for the other direction. A full-duplex communication is established.
- Connection termination, four-way handshake
- A connection can be “half-open”, in which case one side has terminated its end, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can.
- The terminating side should continue reading the data until the other side terminates as well.
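A minimal Python sketch of where the handshake happens in practice: the OS network stack performs the SYN/SYN-ACK/ACK exchange when connect() is called. The host and port below are placeholders, not anything from these notes.

```python
import socket

def open_tcp_connection(host: str, port: int) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        # At this point the three-way handshake has completed:
        #   1. client -> server  SYN     (client's initial sequence number)
        #   2. server -> client  SYN-ACK (server's initial sequence number)
        #   3. client -> server  ACK
        print("Connected:", sock.getpeername())
    # Closing the socket triggers the FIN/ACK termination sequence.

# open_tcp_connection("example.com", 80)  # example usage
```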
Software-Defined Networks
SDNs are growing due to the need for cloud services and multi-tenancy. The core network itself may not change as often, at least in a topology sense, but the edge or access devices can communicate with a number of tenants and other device types. Edge or access switches are becoming virtual switches running on a hypervisor or virtual machine manager.
It’s imperative to be able to add new subnets or VLANs to make network changes on demand. These configuration changes do not scale well on traditional hardware or their virtual counterparts. SDNs allow for changes to happen with ease across the network, even with automation and data collection built-in.
WiFi
802.11 Protocol | Modulation | Frequency | Data Stream Rate |
---|---|---|---|
a | OFDM | 5 GHz | Up to 54 Mbps |
b | DSSS | 2.4 GHz | Up to 11 Mbps |
g | OFDM | 2.4 GHz | Up to 54 Mbps |
n | OFDM | 2.4 – 5 GHz | Up to 600 Mbps |
ac | QAM | 5 GHz | Up to 3466 Mbps |
Wireless Security Standards
WEP
- 802.11a and 802.11b
- 64-bit to 256-bit keys with weak stream cipher
- Deprecated in 2004 in favor of WPA and WPA2, avoid
WPA
- 128-bit per packet key
- Pre-shared key (PSK) with TKIP for encryption
- Vulnerable to password cracking via packet spoofing on the network
- Message Integrity Check is a feature of WPA to prevent MITM attacks
- WPA Enterprise uses certificate authentication or an authentication server such as RADIUS
WPA2
- Advanced Encryption Standard (AES) cipher with message authenticity and integrity checking
- PSK or WPA2 Enterprise, WPA2 Enterprise uses a new encryption key each time a user connects
To avoid collisions, 802.11 uses CSMA/CA, a mechanism where a device that wants to start a transmission sends a jam request before sending anything else. CSMA/CA also requires that the receiving device send an acknowledgment once the data is received. If the sender doesn't receive the acknowledgment, it will try to resend the data. To avoid confusion, remember that wired networks use collision detection (CSMA/CD), not the collision avoidance used in wireless networks.
Bluetooth
Bluetooth uses FHSS; the implementation is named AFH. The cipher used is named E0. It can use a key of up to 128 bits, but it has a major problem – the key length doesn't improve security much, as some attacks have shown it can be cracked as if the key were only 32 bits long. Bluetooth attacks to know about:
- Bluebugging: the process of infecting a device to allow the attacker to listen in.
- Bluejacking: the sending of unsolicited messages via Bluetooth.
- Bluesnarfing: the unauthorized access of information from a device through a Bluetooth connection.
Network Scanning
A Port scanner is an application designed to probe a server or host for open ports, either to check all ports or a defined list. From there, services can be determined to be running or not. Such an application may be used by administrators to verify security policies of their networks and by attackers to identify network services running on a host and exploit vulnerabilities.
A port scan is a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port. This process in and of itself is not nefarious.
A port sweep is the process of checking one port but on multiple targets. The results of a port scan fall into one of the following three categories:
- Open, Accepted: the host sent a reply indicating that a service is listening on the port.
- Closed, Denied, Not Listening: the host sent a reply indicating that connections will be denied to the port.
- Filtered, Dropped, Blocked: there was no reply from the host.
Different scanning methods:
- TCP Scanning, or connect scan, checks whether a port is open by trying to open a complete connection. It's slow but doesn't require root or admin rights (see the sketch after this list).
- SYN Scanning is a mode that needs root or admin rights because the scanner forges its own packets instead of using the OS's full network stack. The scanner sends a SYN packet; if the port is open, the target replies with a SYN/ACK and the scanner immediately replies with a RST, closing the connection before the three-way handshake completes. If the port is closed, the target replies with a RST. To recap:
- Scan sends SYN to target
- Target replies SYN/ACK if the port is open (or RST if closed)
- Scan closes with RST
- UDP Scanning has no session (connectionless). A target that receives a packet on a UDP port doesn’t need to reply. A syslog port (UDP 514) just receives logs, nothing is sent. Some applications like TFTP may reply if the server receives a packet.
- ACK scanning is used to check if there is a firewall between the scanner and the target. A stateful firewall will block this scan while a TCP scan should be accepted if allowed.
- FIN scanning aims to bypass firewalls that are only watching for SYN packets. If the target's port is closed, the target replies with a RST; if the port is open, it typically doesn't reply at all.
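Here's the minimal connect-scan sketch referenced above. It uses plain connect() calls, so no special privileges are needed, but it's slow and easily logged. Only scan hosts you're authorized to test; the loopback address in the usage example is a placeholder.

```python
import socket

def connect_scan(host: str, ports: list[int]) -> dict[int, str]:
    results = {}
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(1.0)
        try:
            sock.connect((host, port))
            results[port] = "open"        # full three-way handshake completed
        except ConnectionRefusedError:
            results[port] = "closed"      # host replied with a RST
        except socket.timeout:
            results[port] = "filtered"    # no reply (likely dropped by a firewall)
        except OSError:
            results[port] = "error"
        finally:
            sock.close()
    return results

# print(connect_scan("127.0.0.1", [22, 80, 443]))  # example usage
```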
Types of Ports
- Ports 0 to 1023 are system-ports, or well known ports.
- Ports 1024 to 49151 are registered ports, or user ports.
- Ports 49152 to 65535 are dynamic ports.
- Registered ports are assigned by IANA but don't require escalated system privileges to be used.
Network Attacks
DDoS attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. Such an attack is often the result of multiple compromised systems, like a botnet.
See DDoS techniques below:
- SYN Floods are attacks that do not require completion of the TCP three-way handshake. They attempt to exhaust the destination's SYN queue or the server's bandwidth, and can come from a single source or multiple different sources.
- Smurf Attacks spoof the IP of the target and send a large number of ICMP echo requests to a broadcast address. By default, hosts on the network reply to the spoofed source, flooding the victim. This is an older attack that is no longer as big of a threat.
- Fraggle Attacks are a variation of smurf attacks where an attacker sends a large amount of UDP traffic to ports 7 (Echo) and 19 (CHARGEN) on a broadcast address. The intended victim is the spoofed source IP address.
- Teardrop Attacks consist of sending a large number of fragmented packets with overlapping payloads. They can crash the TCP/IP stack of a remote OS. It's not necessarily a distributed attack, and it's an older attack that is no longer as big of a threat.
Pharming is a DNS attack that redirects users to a fraudulent site by feeding bad entries to a DNS server (cache poisoning). If a record under attack is requested by a user, the DNS server may mistake the attacker's packets for the reply to the user's request.
Phreaking boxes are devices used by phone phreaks to perform various functions normally reserved for operators and other telephone company employees. Most phreaking boxes are named after colors, due to folklore surrounding the earliest boxes which suggested that the first ones of each kind were housed in a box or casing of that color. However, very few phreaking boxes are actually the color they are named after. Today, most phreaking boxes are obsolete due to changes in telephone technology. The colors are below:
- Green box – tone generator, emits ‘coin accept’, ‘coin return’ and ‘ringback’ tones at the remote end of an Automated Coin Toll Service payphone call. This box is obsolete.
- Blue box – Tone generator, emitted 2600 Hz tone to disconnect a long-distance call while retaining control of a trunk. Generated multi-frequency tones are then able to make another toll call which was not detected properly by billing equipment. This box is obsolete.
- White box – DTMF tone dial pad.
- Black box – a resistor bypassed with a capacitor and placed in series within the line to limit DC current on received calls. The black box was intended to trip one, but not both relays. This allows ringing to stop but not show the call as answered for billing purposes. This box is obsolete.
- Red box – tone generator, emitted an Automated Coin Toll Service tone pair (1700 Hz and 2200 Hz) to signal coins dropping into a payphone. This box is obsolete.
Network Protection
Firewalls
- 1st generation firewalls are stateless packet filters: they allow or deny packets based on header information such as addresses and ports, with no connection tracking (see the sketch after this list).
- 2nd generation are stateful filters that can read L4 (TCP/UDP or other) headers to maintain a session table. The firewall will allow packets from both directions of the session until the FIN/ACK.
- 3rd generation are application layer firewalls that work at, you guessed it, the application layer. Packets that don’t fall in line are dropped.
- 4th generation, also called Application Level Gateways or Proxy Firewalls, act like a proxy. When a user reaches for a resource, the firewall opens the resource itself and checks to make sure everything is above board. These firewalls can decrypt TLS with the proper certificate installed.
- 5th generation firewalls are local (host-based). They check everything at the OS level.
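Here's the sketch referenced in the first bullet: a toy stateless packet filter in Python. The rules and addresses are invented for illustration; it judges each packet on headers alone, which is exactly the connection state that later generations add.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

RULES = [
    # (action, protocol, destination port) -- illustrative rules only
    ("allow", "tcp", 443),
    ("allow", "tcp", 80),
    ("deny",  "tcp", 23),   # block telnet
]

def filter_packet(packet: Packet, default: str = "deny") -> str:
    # No session table: each packet is evaluated against the static rules.
    for action, proto, port in RULES:
        if packet.protocol == proto and packet.dst_port == port:
            return action
    return default  # implicit deny is the safer default

print(filter_packet(Packet("10.0.0.5", "203.0.113.7", 443, "tcp")))  # allow
print(filter_packet(Packet("10.0.0.5", "203.0.113.7", 23, "tcp")))   # deny
```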
IDS
Intrusion Detection Systems are devices or software that scan the network or behavior of a system to detect malware or forbidden activities. There are different types of IDS/IPS setups:
- Network Based, Network Intrusion Detection Systems (NIDS) are placed at strategic points within the network to monitor traffic to and from all devices on the network. It performs an analysis of passing traffic on the entire subnet, and matches the traffic that is passed on the subnets to the library of known attacks. Once an attack is identified, or abnormal behavior is sensed, the alerts can be sent.
- Host Based, Host Intrusion Detection Systems (HIDS) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert if suspicious activities are detected.
IDS can use different detection methods, but it’s not uncommon to see the use of both of the following methods:
- Signature-based IDS refers to the detection of attacks by looking for specific patterns, such as byte sequences in network traffic or known malicious instruction sequences used by malware. This terminology originates from anti-virus software, which refers to these detected patterns as signatures. Although signature-based IDS can easily detect known attacks, it is difficult to detect new attacks, as new patterns are not yet available (see the sketch after this list).
- Anomaly-based intrusion detection systems were primarily introduced to detect unknown attacks, in part due to the rapid development of malware. The basic approach is to use machine learning to create a model of trustworthy activity and then compare new behavior against this model. Although this approach enables the detection of previously unknown attacks, it may suffer from false positives. False positives are time-consuming during the detection process and degrade the performance of the IDS.
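Here's the signature-matching sketch referenced above. The signatures are made up for illustration; real IDS engines such as Snort or Suricata use far richer rule languages.

```python
# Minimal signature-based detection: scan a payload for known-bad patterns.
SIGNATURES = {
    "directory traversal": b"../..",
    "sql injection probe": b"' OR '1'='1",
    "classic NOP sled": b"\x90" * 16,
}

def match_signatures(payload: bytes) -> list[str]:
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

sample = b"GET /index.php?page=../../etc/passwd HTTP/1.1"
print(match_signatures(sample))  # ['directory traversal']
```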
Note: Wikipedia redirects IPS to the IDS page. It’s worth noting that IDS do not prevent traffic and are usually placed on a span port of a core switch. IPS on the other hand, are usually placed in-line and can prevent traffic.
VoIP Attacks
- SPIT (spam over Internet telephony) is like email spam, but delivered over VoIP.
- Caller ID falsification.
- Vishing is trying to scam users over VoIP (voice phishing).
- Remote dialing (hoteling) is the vulnerability of a PBX system that allows an external entity to piggyback onto the PBX system and make long-distance calls without being charged for tolls.
IPv6
In IPv6, FE80::/10 is used to create a unicast link-local address.
Cabling
- 100Base-FX, 802.3u-1995, Fast Ethernet over optical, Fiber 1300nm, 2km
WAN Line Type
Line | Speed | Notes |
---|---|---|
T1 | 1.544 Mbps | 2 pair of shielded copper wire |
E1 | 2.048 Mbps | 2 pair of shielded copper wire |
T3 | 44.736 Mbps | |
E3 | 34.368 Mbps |
5. Identity and Access Management (IAM)
Compromising an identity or an access control system to gain unauthorized access to systems and information is a leading cause of attacks on the confidentiality of data. This is why it's an area where information security professionals should invest a considerable amount of time.
Key topics of this domain are identity management systems, single and multi-factor authentication, accountability, session management, registration and proofing, federated identity management, and credential management systems.
Authentication
Traditional authentication systems rely on a username and password. LDAP directories are commonly used to store user information, authenticate users, and authorize users. There are newer systems that enhance the authentication experience, however. Some replace the traditional username and password systems, while others, such as single sign-on or SSO, extend them. Biometrics is an authentication method that includes, but is not limited to, fingerprints, retina scans, facial recognition, and iris scans.
LDAP
Lightweight Directory Access Protocol is a standards-based protocol (RFC 4511) that traces its roots back to X.500; LDAP itself emerged in the early 1990s. Vendors have implemented LDAP-compliant systems and directories, often with their own specific enhancements.
LDAP is popular for on-premises corporate networks. An LDAP directory stores information about users, groups, computers, and sometimes other objects such as printers and shared folders. It is common to use an LDAP directory to store user metadata, such as their name, address, phone numbers, departments, employee number, etc. Metadata in an LDAP directory can be used for dynamic authentication systems or other automation.
The most common LDAP system today is Microsoft Active Directory (Active Directory Domain Services or AD DS). It uses Kerberos (an authentication protocol that offers enhanced security) for authentication by default.
SSO
Single sign-on provides an enhanced user authentication experience as the user accesses multiple systems and data across a variety of systems. It is closely related to federated identity management. Instead of authenticating to each system individually, the recent sign-on is used to create a security token that can be reused across apps and systems.
A user authenticates once and then can gain access to a variety of systems and data without having to authenticate again. The SSO experience will last for a specified period, often enough time to do work, such as 4 to 8 hours. SSO often takes advantage of the user’s authentication to their computing device. SSO can be more sophisticated, however.
Access to resources and configuration could be separated for example. Note that using the same username and password to access independent systems is not SSO. Instead, it is often referred to as “same sign-on” because you use the same credentials. The main benefit of SSO is also its main downside – it simplifies the process of gaining access to multiple systems for everyone. This means, the bad guys can also take advantage of the convenience. Multi-factor authentication (MFA) can help mitigate this risk.
Authorization
Traditional authorization systems rely on security groups in a directory, such as an LDAP directory. Based on your group memberships, you have a specific type of access (or no access). For example, there could be different groups for reading versus writing and executing a file or directory.
Even though this system is quite old, it has remained the primary authorization mechanism for on-premises technologies. Newer authorization systems incorporate dynamic authorization or automated authorization.
Other information can be incorporated into authorization, like location-based information.
Data Classification
Military or Government
Classified by the type of damage the involuntary divulgence of data would cause.
- Top Secret is the highest level of classified information. Information is further compartmentalized so that specific access using a code word after top secret is a legal way to hide collective and important information. Such material would cause “exceptionally grave damage” to national security if made publicly available.
- Secret material would cause “serious damage” to national security if it were publicly available.
- Confidential material would cause damage or be prejudicial to national security if publicly available.
- Unclassified is technically not a classification level, but this is a feature of some classification schemes, used for government documents that do not merit a particular classification or which have been declassified. This is because the information is low-impact, and therefore does not require any special protection, such as vetting of personnel.
Private Sector
Corporate or organizational classification system. Similarly structured to military or government classification.
- Confidential is the highest level in this classification scheme. A considerable amount of damage may occur for an organization if this confidential data is divulged. Proprietary data, among other types of data, falls into this category. This category is reserved for extremely sensitive data and internal data. A “Confidential” level necessitates the utmost care, as this data is extremely sensitive and is intended for use by a limited group of people, such as a department or a workgroup, having a legitimate need-to-know.
- Private is data for internal use only whose significance is great and whose disclosure may lead to a significant negative impact on an organization. All data and information processed inside an organization is to be handled by employees only and should not fall into the hands of outsiders.
- Sensitive is data that has been classified and is not public. If this data were disclosed, there could be a negative impact on the company.
- Public is data already published outside the company or with no particular value. If this data were disclosed, there would be no impact on the company.
Identity Terms
Subjects are active entities, users or programs that manipulate Objects. A user (subject) requests a server (object).
Objects are passive, manipulated by Subjects. A database (object) is requested by a reporting program (subject). It’s important to note that an object in one situation can be a subject in another, and vice versa. If a user requests a DB, the user is the subject and the DB is the object. But the DB can request its software version management to check for an update. In this case, the DB is the subject, and version management is the object.
Need to know is a type of access management to a resource. Just because you have top classification doesn’t mean you have access to ALL information. You will only be granted access to the data you need to effectively do your job.
Least Privilege is a principle of allowing every module, such as a process, a user, or a program (depending on the subject), to have access to only what they are allowed to access.
Access Control is the measures taken to allow only the authorized subject to access an object. The goal is to allow authorized users and deny non-authorized users, or non-users in general. Separated into 3 categories:
- Administrative
- Technical
- Physical
Permissions are different from rights in that permissions grant levels of access to a particular object on a file system. Rights can be seen as broad administrative access.
Rights grant users the ability to perform specific actions on a system, such as logging in, opening preferences or settings, and more.
Authentication Types
- Type 1: something you know.
- Passwords, PINs, birthdays, etc.
- Type 2: something you have.
- Smartcards, ID cards, licenses, keyfobs, etc.
- Also called transient authentication.
- Type 3: something you are.
- Fingerprint, retina, face, etc.
- Also called biometrics.
Biometrics Method
- Retinal scan – a map of retina’s blood vessels.
- Iris recognition – a map of the Iris.
- Fingerprint – a picture of the fingerprint.
- Minutiae are the specific plot points on a fingerprint. This includes characteristics such as ridge bifurcation or a ridge ending on a fingerprint.
- Palm print or structure – a picture of the palm print or of the structure of the palm.
- Walk recognition
- Keyboard typing recognition
- Signature recognition
- Face recognition
- Voice recognition
Biometric Functions in Two Modes
- Verification (or Authentication) mode, the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be.
- Identification mode the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for ‘positive recognition’ (so that the user does not have to provide any information about the template to be used) or for ‘negative recognition’ of the person “where the system establishes whether the person is who they (implicitly or explicitly) deny to be”.
Biometrics Terms
Throughput refers to the time it takes for an authentication to be completed.
Enrollment is the process of registering a user in the system.
Cognitive Password is a form of knowledge-based authentication that requires a user to answer a question, presumably something they intrinsically know, to verify their identity.
Performance Metrics in Biometrics
- Type 1 error – FRR is the probability of type I errors or false non-match rate (FNMR).
- It’s the probability for a valid user to be rejected.
- Type 2 error – FAR is the probability of type II errors or false match rate (FMR).
- It’s the probability for an unauthorized user to be accepted.
- CER – crossover error rate, the point where the FRR and the FAR are equal. A lower CER indicates a more accurate biometric system.
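A quick Python sketch of how FRR, FAR, and the crossover point relate to the match threshold. The score lists are made-up sample data.

```python
genuine_scores  = [0.91, 0.85, 0.78, 0.95, 0.70, 0.88]   # legitimate users
impostor_scores = [0.40, 0.55, 0.62, 0.30, 0.72, 0.45]   # unauthorized users

def frr(threshold):  # Type 1 error: valid users rejected
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(threshold):  # Type 2 error: impostors accepted
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Sweep thresholds and find where the two error rates are (nearly) equal.
best = min((abs(far(t / 100) - frr(t / 100)), t / 100) for t in range(0, 101))
print(f"Approximate CER at threshold {best[1]:.2f}: "
      f"FAR={far(best[1]):.2f}, FRR={frr(best[1]):.2f}")
```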
Kerberos
Kerberos is an authentication protocol that functions within a realm and uses tickets. Users authenticate only once, so Kerberos is an SSO system. Kerberos uses UDP port 88 by default.
Kerberos also requires user machines and servers to have relatively accurate, synchronized time, because the TGT, the ticket given to an authenticated user by the KDC, is timestamped to prevent replay attacks.
Each time a client authenticates, it receives a TGT and a session key. The session key is encrypted with the client's secret key. When the client needs to access a resource in the realm, the client decrypts the session key and sends it, with the TGT, to the TGS. The TGS checks whether the user is authorized to access the resource.
Authorization Protocols
OAuth 2.0 is an open standard authorization framework defined in RFC 6749. OAuth 2.0 is not backward compatible with OAuth 1.0. It underpins the flows where sites ask users to sign in with Google or Facebook, for example (usually paired with OpenID Connect for the authentication piece).
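As a rough illustration of the authorization-code grant from RFC 6749, here's a hedged Python sketch of the token exchange step. The endpoint URL, client credentials, and authorization code are placeholders, not values from any real provider.

```python
import requests

def exchange_code_for_token(auth_code: str) -> dict:
    # POST the authorization code to the provider's token endpoint
    # (RFC 6749, section 4.1.3). All values below are placeholders.
    response = requests.post(
        "https://auth.example.com/oauth/token",          # hypothetical endpoint
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # typically contains access_token, token_type, expires_in
```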
Authorization Mechanisms
Role-Based Access Control (RBAC)
RBAC is a common access control method. RBAC is considered non-discretionary because access is assigned based on work roles rather than at an owner's discretion. The separation of work roles is what fuels this access control method, and RBAC is considered a good industry-standard practice.
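A minimal Python sketch of the idea: permissions hang off roles, and users only get permissions through role membership. The roles and permissions are invented for illustration.

```python
ROLE_PERMISSIONS = {
    "helpdesk":   {"reset_password", "view_tickets"},
    "hr_analyst": {"view_employee_records"},
    "dba":        {"backup_database", "restore_database"},
}

USER_ROLES = {
    "alice": {"helpdesk"},
    "bob":   {"hr_analyst", "helpdesk"},
}

def is_authorized(user: str, permission: str) -> bool:
    # A user is authorized only if one of their roles carries the permission.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "reset_password"))         # True
print(is_authorized("alice", "view_employee_records"))  # False
```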
Rule-Based Access Control
Rule-based access control implements access control based on predefined rules. Think of available printers for sites. This is a great way of automating access management and making the process more dynamic. Even when someone transfers sites, the old access would be automatically removed.
Mandatory Access Control (MAC)
MAC is a method to restrict access based on a user’s clearance level and the data’s classification label. The MAC method ensures confidentiality. While not as dynamic as DAC, it provides higher security since individual users cannot quickly change access on their own.
Discretionary Access Control (DAC)
DAC is useful when you need granular control over the rights of an object, such as a file share. You can also configure the rights to be inherited by child objects. DAC is decentralized, flexible, and easy to administer. As such, it’s in widespread use. Since users can change rights on the fly, it can be difficult to track all changes and overall permission levels to determine access level.
Attribute-Based Access Control (ABAC)
User attributes can be used to automate authorization to objects. Attributes can cover many different descriptors such as departments, location, and more.
Managing Identity and Access
User Access Review
Periodic access reviews are an important, but often forgotten, method of reviewing rights and permissions. Do users have appropriate access to do their jobs? If not, what is the process for increasing access? What about the revocation of access for users who have left the organization? The gamut can cover access management systems as well.
System Account Access Review
System accounts, sometimes called service accounts, are accounts that are not tied to users. They are used for running automated processes, tasks, and jobs. It’s important to not use user accounts to do this. Especially since some of the system accounts require administrative privileges, these accounts require regular review as well. Be sure to keep detailed records of what this account is, what it’s used for, who asked for it, and so on.
Provisioning and Deprovisioning Users
Provisioning and deprovisioning refer to the creation and removal of user accounts. These key tasks are important so no dormant accounts are left available to bad actors. It's best to automate them, not just for the time savings, but also to reduce human error from repetitive tasks. The automation, of course, follows organizational guidelines and other requirements.
6. Security Assessment and Testing
This domain houses the validation of assessment and test strategies using vulnerability assessments, penetration testing, synthetic transactions, code review and testing, misuse case, and interface testing against policies and procedures.
This covers all assets in order to identify and mitigate risk due to architectural issues, design flaws, configuration errors, hardware and software vulnerabilities, coding errors, and any other weaknesses. The main goal is to make sure disaster recovery and business continuity plans are up to date and capable of responding to or recovering from disaster.
Audit Strategies
Frequency is based on risk. Overall risk must be sufficient to justify the time, energy, and cost. Asset value and threats are only part of the risk.
- Internal – an internal audit strategy should be in step with daily operations. This also includes compliance initiatives as well as security of normal business operations.
- External – an external audit strategy should complement the internal strategy, providing further proof that your initiatives are actually working.
- Third-party – third-party auditing provides another set of eyes for whatever needs to be tested within your enterprise. This type of audit can also review internal and external audits.
Key elements of an audit report:
- Purpose
- Scope
- Results of the audit
Audit Events
- Security health checks from IT staff.
- Certified law enforcement personnel investigating criminal activity.
Vulnerability Assessments
Vulnerability assessments are done in order to find systems that aren’t patched or configured properly. They can also be done to assess physical security or reliance on resources.
Penetration Testing
- Reconnaissance is collecting all the available info about a target.
- Enumeration is scanning in order to gather as much info as possible about a system or subnet.
- Vulnerability Analysis is searching for vulnerabilities.
- Execution is exploiting the vulnerability.
- Reporting is done by the ethical hackers, also called white hats, who document findings and remediation recommendations.
Penetration testing should always be done with authorization from management.
- Blind testing involves giving no information to the pen-tester but the IT team knows.
- Double-Blind involves giving no information to the pen-tester and the IT team doesn’t know.
- Targeted testing involves giving technical information to the pen-tester but the IT team knows.
Log Reviews
- IT systems can log any transaction, but logging is rarely enabled across the board.
- Obvious log entries to look for are excessive failure or “deny” events.
- Successful or “allowed” events may be in excess and therefore nearly impossible to regularly comb through without a SIEM or log analyzer.
- Look for privilege escalation, account compromise, or any other anomalous action.
Synthetic Transactions
- User monitoring captures actual user actions in real time.
- Synthetic transactions, whether scripted or artificially generated, are used to test performance, stability, and/or security (see the sketch after this list).
- Code review and testing needs to be built into the application lifecycle process just as unit tests and function tests are.
- Misuse case testing is used to find whether software can be used outside its intended purpose. Important to test because hackers can:
- Reverse engineer the binaries or to access other processes through the software.
- Escalate privileges, share passwords, and access resources that should be denied by default.
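Here's the synthetic-transaction sketch referenced above: a scripted request run on a schedule to verify availability and response time. The URL and threshold are placeholders.

```python
import time
import requests

def synthetic_check(url: str = "https://app.example.com/health",
                    max_seconds: float = 2.0) -> bool:
    # One synthetic transaction: make the request and time it.
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=max_seconds)
        elapsed = time.monotonic() - start
        ok = response.status_code == 200 and elapsed <= max_seconds
    except requests.RequestException:
        ok = False
    print(f"{url}: {'PASS' if ok else 'FAIL'}")
    return ok

# synthetic_check()  # run from a scheduler (e.g., every few minutes)
```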
Test Coverage Analysis
- Black box testing refers to the tester having no prior knowledge prior to testing.
- White box testing refers to the tester having full knowledge prior to testing.
- Dynamic testing refers to the system being tested also being monitored during the test.
- Static testing refers to the system being tested not being monitored.
- Manual testing refers to tests being done manually by people.
- Automated testing refers to tests being done by a script.
- Structural testing refers to testing data flow coverage, statements, conditions, decisions, and loops.
- Functional testing refers to expected and unexpected inputs in order to validate performance, stability, and functionality.
- Negative testing refers to using the system or software with bad data and verifying that the system responds in the correct manner (see the sketch after this list).
- Interface testing refers to any server interface that supports operations. For applications:
- External interfaces can be a web browser or operating system
- Internal interfaces can be plug-ins, error handling, and more components that essentially handle data.
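Here's the negative-testing sketch referenced above, written as pytest tests: feed deliberately bad input and verify the system rejects it in a controlled way. The validate_age function is an invented example under test.

```python
import pytest

def validate_age(value: str) -> int:
    age = int(value)               # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def test_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        validate_age("abc")

def test_rejects_out_of_range_age():
    with pytest.raises(ValueError):
        validate_age("-5")

def test_accepts_valid_age():
    assert validate_age("42") == 42
```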
Security Process Data
- Review policies and procedures regularly to ensure they are not only being followed, but capable of being followed. Also, are there new risks?
- Account management involves a defined procedure for maintaining accounts. This includes all parts of the account management process.
- Management review and approval makes sure processes are followed and, once adequate supporting evidence is collected, secures the full support of the management team.
- Key performance and risk indicators refer to the data that is being collected. The risk indicators can be used to measure how risky components of the collected data are. The performance indicators can be used to capture and report on levels of success.
- Backup verification data is a must when dealing with backups. If you can’t restore your backups, you are wasting time and resources. Integrity must be checked.
Disaster Recovery (DR) and Business Continuity (BC)
Two areas that must be heavily documented and tested are disaster recovery and business continuity. It is imperative to make sure documentation is up to date and can be followed.
Training and Awareness
Don’t discount the importance of training and awareness. See below for a matrix of different types of training:
Awareness | Training | Education | |
---|---|---|---|
Knowledge Level | The “what” of a policy or procedure | The “how” of a policy or procedure | The “why” of a policy or procedure |
Objective | Knowledge retention | Ability to complete a task | Understanding the big picture |
Typical Training Methods | Self-paced elearning, web-based training, or videos | Instructor-led training, demos, or hands-on activities | Seminars and research |
Testing Methods | Short quiz after training | Application-level problem solving | Design-level problem solving and architecture exercises |
SOC Reports
- Any information of concern must be reported to management teams immediately.
- Level of detail within reports can vary depending on roles.
- Types of audits necessary can also shape how reports should be used. There are four types of SOC reports:
- SOC 1 Type 1 – report outlines the findings of an audit, as well as the completeness and accuracy of the documented controls, systems, and facilities.
- SOC 1 Type 2 – report includes the Type 1 report, along with information about the operating effectiveness of the procedures and controls over a review period.
- SOC 2 – report includes the testing results of an audit.
- SOC 3 – report outlines general audit results with a datacenter certification level.
7. Security Operations
This domain covers various investigative concepts including evidence collection and handling, documentation and reporting, investigative techniques, and digital forensics. The logging and monitoring mechanisms must be able to support investigations and provide an operational review to include intrusion detection and prevention, security information and event monitoring systems, and data leakage protection.
Key technologies include firewalls, intrusion prevention systems, application whitelisting, anti-malware, honeypots, and sandboxing to assist with managing third-party security contracts and services, patch, vulnerability, and change management processes.
The goal is to understand security operations so that incident response and recovery, disaster recovery, and business continuity can be the most effective.
Legal
Criminal Law
- Laws protect physical integrity of people and the society as a whole.
- Punishment can include incarceration, financial penalties, and even death.
- Proof should be beyond reasonable doubt.
- Some laws have been designed to protect people and society from crimes related to computers:
- The Fourth Amendment protects individuals against unreasonable searches and seizures.
- CFAA, one of the first laws (1984) about computer and network related crimes.
- ECPA of 1986 was enacted by the United States Congress to extend government restrictions on wiretaps from telephone calls to include transmissions of electronic data by computer, and added new provisions prohibiting access to stored electronic communications. The ECPA has been amended by the CALEA in 1994, the USA PATRIOT Act (2001), the USA PATRIOT reauthorization acts (2006), and the FISA Amendments Act (2008).
- PATRIOT Act (2001): in response to the September 11 attacks, Congress swiftly passed legislation to strengthen national security. It expanded the ability of U.S. law enforcement to use electronic monitoring techniques with less judicial oversight. It also amended the CFAA.
- FISA (1978, amended in 2008) regulates the use of electronic surveillance.
- FISMA requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.
- ITADA, (2003) Fraud related to activity in connection with identification documents, authentication features, and information. The statute now makes the possession of any “means of identification” to “knowingly transfer, possess, or use without lawful authority” a federal crime, alongside unlawful possession of identification documents.
- DMCA is a copyright law that criminalizes production and dissemination of technology, devices, or services intended to circumvent measures that control access to copyrighted works (commonly known as digital rights management or DRM).
- GDPR is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). It also addresses the export of personal data outside the EU and EEA areas. The GDPR primarily gives control to individuals over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.
Civil Law
- Civil law governs matters between citizens and organizations; crimes remain under criminal law.
- Civil matters can relate to contracts, estates, etc.
- The evidence standard is Preponderance of the evidence.
- One of the major differences between criminal and civil law is that criminal law is enforced by the government, whereas a civil issue must be raised by the person or organization affected.
Administrative Law
- Laws enacted to enforce administrative policies, regulations, and procedures.
- FDA Laws
- HIPAA was created (1996) primarily to modernize the flow of healthcare information. It describes how Personally Identifiable Information maintained by healthcare providers and healthcare insurers should be protected from fraud and theft, and addresses limitations on healthcare insurance coverage.
- HITECH (2009) is an act that includes new regulations and compliance requirements on top of HIPAA. The HITECH Act requires entities covered by HIPAA to report data breaches that affect 500 or more people to the Department of Health and Human Services (HHS), to the news media, and to the people affected by the data breaches.
- FAA Laws
- FCRA – The Fair Credit Reporting Act was one of the first (1970) data protection laws passed in the computer age. The purpose of FCRA is that there should be no secret databases used to make decisions about a person’s life, individuals should have a right to see and challenge the information held in such databases, and information in such a database should expire after a reasonable amount of time.
- GLBA is a law that protects private data collected by banks and financial institutions. It also repealed part of the Glass–Steagall Act of 1933, removing barriers in the market among banking companies, securities companies and insurance companies that prohibited any one institution from acting as any combination of an investment bank, a commercial bank, and an insurance company. With the bipartisan passage of the Gramm–Leach–Bliley Act, commercial banks, investment banks, securities firms, and insurance companies were allowed to consolidate.
- Privacy Act (1974), a federal law that establishes a Code of Fair Information Practice that governs the collection, maintenance, use, and dissemination of PII about individuals that is maintained in systems of records by federal agencies. Anyone can ask about the data every governmental agency has on them.
- COPPA (1998) applies to the online collection of personal information by people or entities about children under 13 years of age. It details what a website operator must include in a privacy policy, when and how to seek verifiable consent from a parent or guardian, and what responsibilities an operator has to protect children’s privacy and safety online including restrictions on the marketing of those under 13.
- FERPA is about the right of parents to access and amend their child’s educational records. It also covers the privacy of students 18 years of age and older.
- FIPS are publicly announced standards developed by the federal government for use in computer systems by non-military government agencies and government contractors.
- SOX Act of 2002 is mandatory: it requires publicly traded companies to submit to independent audits and to properly disclose financial information. All publicly traded organizations, large and small, must comply. There are new or expanded requirements for all public company boards, management, and public accounting firms. The bill, which contains eleven sections, was enacted as a reaction to a number of major corporate and accounting scandals, including Enron and WorldCom. The sections of the bill cover responsibilities of a public corporation’s board of directors, add criminal penalties for certain misconduct, and require the Securities and Exchange Commission to create regulations to define how public corporations are to comply with the law.
- The Federal Sentencing Guidelines released in 1991 formalized the prudent man rule, which requires senior executives to take personal responsibility for ensuring the due care that ordinary, prudent individuals would exercise in the same situation.
- California Senate Bill 1386 is one of the 1st state laws about privacy breach notification.
Private Regulations
Refers to compliance required by contract. This can also include standards that aren’t necessarily enforceable by law.
- PCI DSS is a standard for companies that handle credit card information. The Payment Card Industry Security Standards Council was originally formed by American Express, Discover Financial Services, JCB International, MasterCard and Visa on September 7, 2006.
- The goal is to manage the ongoing evolution of the Payment Card Industry Data Security Standard. The council itself claims to be independent of the various card vendors that make up the council.
- PCI DSS allows organizations to choose between performing annual web vulnerability assessment tests or installing a web application firewall.
- Downstream liability refers to a company’s responsibility for damages that result from a security compromise in the company’s business. For example, if hackers break into a database and steal the personal information of customers and business partners, the breached company might be held liable for the damage that arises.
Evidence
- Real Evidence, also called material evidence, is tangible and physical objects, in IT Security: Hard Disks, USB Drives, etc. This is NOT the data that resides on them. Real evidence must be either uniquely identified by a witness or authenticated through a documented chain of custody.
- Direct Evidence is testimony from a firsthand witness – what they experienced with their five senses.
- Circumstantial Evidence is evidence that relies on an inference to connect it to a conclusion of fact—such as a fingerprint at the scene of a crime. By contrast, direct evidence supports the truth of an assertion directly—i.e., without need for any additional evidence or inference. On its own, circumstantial evidence allows for more than one explanation.
- Corroborating Evidence is evidence that tends to support a proposition that is already supported by some initial evidence, therefore confirming the proposition.
- Hearsay Evidence is someone recounting a secondhand story. In certain courts, hearsay evidence is inadmissible unless an exception to the Hearsay Rule applies. Computer-generated records, and with them log files, were once considered hearsay, but case law and updates to the Federal Rules of Evidence have changed that.
- Best Evidence Rule is a legal principle that holds an original copy of a document as superior evidence. The rule specifies that secondary evidence, such as a copy or facsimile, will not be admissible if an original document exists and can be obtained. The rule has its roots in 18th-century British law.
- Documentary Evidence is any evidence that is, or can be, introduced at a trial in the form of documents, as distinguished from oral testimony. Documentary evidence is most widely understood to refer to writings on paper (such as an invoice, a contract or a will), but the term can also apply to any media by which information can be preserved, such as photographs; a medium that needs a mechanical device to be viewed, such as a tape recording or film; and a printed form of digital evidence, such as emails or spreadsheets.
The 5 Rules of Evidence
- Be Authentic
- Be Accurate
- Be Complete
- Be Convincing
- Be Admissible
To be admissible, the evidence must be relevant, material, and competent.
Search Warrants
- To obtain a search warrant, investigators must have probable cause.
- Exigent circumstances is a term that describes the seizure of evidence without a warrant. It can happen if there is a probable chance of destruction of evidence.
Electronic Discovery
Electronic discovery, also called e-discovery or eDiscovery, refers to discovery in legal proceedings such as litigation, government investigations, or Freedom of Information Act requests, where the information sought is in electronic format (often referred to as electronically stored information or ESI).
Electronic discovery is subject to rules of civil procedure and agreed-upon processes, often involving review for privilege and relevance before data are turned over to the requesting party.
Electronic information is considered different than paper information because of its intangible form, volume, transience, and persistence. Electronic information is usually accompanied by metadata that is not found in paper documents and that can play an important part as evidence. For example, the date and time a document was written could be useful in a copyright case.
EDRM
The EDRM is a ubiquitous diagram that represents a conceptual view of the stages involved in the e-discovery process.
- Identification – the identification phase is when potentially responsive documents are identified for further analysis and review. To ensure a complete identification of data sources, data mapping techniques are often employed. Since the scope of data can be overwhelming in this phase, attempts are made to reduce the overall scope during this phase – such as limiting the identification of documents to a certain date range or search term(s) to avoid an overly burdensome request.
- Preservation – a duty to preserve begins upon the reasonable anticipation of litigation. During preservation, data identified as potentially relevant is placed in a legal hold. This ensures that data cannot be destroyed. Care is taken to ensure this process is defensible, while the end-goal is to reduce the possibility of data spoliation or destruction. Failure to preserve can lead to sanctions. Even if the court rules the failure to preserve to be mere negligence, it can impose fines if the lost data puts the defense at an undue disadvantage in establishing its case.
- Collection – once documents have been preserved, collection can begin. Collection is the transfer of data from a company to their legal counsel, who will determine relevance and disposition of data. Some companies that deal with frequent litigation have software in place to quickly place legal holds on certain custodians when an event, such as a legal notice, is triggered and begin the collection process immediately. Other companies may need to call in a digital forensics expert to prevent the spoliation of data. The size and scale of this collection is determined by the identification phase.
- Processing – during the processing phase, native files are prepared to be loaded into a document review platform. Often, this phase also involves the extraction of text and metadata from the native files. Various data culling techniques are employed during this phase, such as deduplication and de-NISTing (see the sketch after this list). Sometimes native files will be converted to a petrified, paper-like format (such as PDF or TIFF) at this stage, to allow for easier redaction and bates-labeling. Modern processing tools can also employ advanced analytic tools to help document review attorneys more accurately identify potentially relevant documents.
- Review – during the review phase, documents are reviewed for responsiveness to discovery requests and for privilege. Different document review platforms can assist in many tasks related to this process, including the rapid identification of potentially relevant documents, and the culling of documents according to various criteria (such as keyword, date range, etc.). Most review tools also make it easy for large groups of document review attorneys to work on cases, featuring collaborative tools and batches to speed up the review process and eliminate work duplication.
- Production – documents are turned over to opposing counsel, based on agreed-upon specifications. Often this production is accompanied by a load file, which is used to load documents into a document review platform. Documents can be produced either as native files, or in a petrified format, such as PDF or TIFF, alongside metadata.
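Here's the deduplication sketch referenced in the processing phase above: hash each file's contents and keep one copy per hash. The paths are placeholders; real e-discovery platforms also track custodians and metadata.

```python
import hashlib
from pathlib import Path

def deduplicate(paths: list[Path]) -> dict[str, Path]:
    unique: dict[str, Path] = {}
    for path in paths:
        # Identical content produces the same SHA-256 digest.
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        unique.setdefault(digest, path)   # first occurrence wins
    return unique

# unique_docs = deduplicate(list(Path("collection/").rglob("*.msg")))  # example usage
```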
Logging and Monitoring Activities
- Intrusion detection and prevention – there are two technologies that you can use to detect and prevent intrusions. Some solutions combine them into a single software package or an appliance.
- An intrusion detection system (IDS) is a technology, typically software or an appliance, that attempts to identify malicious activity in your environment. Solutions often rely on patterns, signatures, or anomalies. There are multiple types of IDS solutions. For example, there are solutions specific to the network (network IDS or NIDS) and others specific to computers (host-based IDS or HIDS).
- An intrusion prevention system (IPS) can help block an attack before it gets inside your network. In the worst case, it can identify an attack in progress. Like an IDS, an IPS is often a software or appliance. However, an IPS is typically placed in line on the network so it can analyze traffic coming into or leaving the network, whereas an IDS typically sees intrusions after they’ve occurred.
- Security information and event management (SIEM) – companies have security information stored in logs across multiple computers and appliances. Often, the information captured in the logs is so extensive that it can quickly become hard to manage and use. Many companies deploy a security information and event management (SIEM) solution to centralize the log data and make it simpler to work with. A SIEM is a critical technology in large and security-conscious organizations.
- Continuous monitoring – the process of streaming information related to the security of the computing environment in real time (or close to real time). Some SIEM solutions offer continuous monitoring or at least some features of continuous monitoring.
- Egress monitoring – the monitoring of data as it leaves your network. One reason is to ensure that malicious traffic doesn’t leave the network, like infected computers trying to spread malware to hosts on the internet. Another reason is to ensure that sensitive data, such as customer information or HR information, does not leave the network unless authorized.
- Data loss prevention (DLP) solutions focus on reducing or eliminating sensitive data leaving the network (see the sketch after this list).
- Steganography is the art of hiding data inside another file or message. For example, a text file can be hidden inside a picture file. Because the carrier file appears innocuous, especially a detailed picture with many colors, the hidden data can be difficult to detect.
- Watermarking is the act of embedding an identifying marker in a file. For example, you can embed a company name in a customer database file or add a watermark to a picture file with copyright information.
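Here's the DLP-style egress check sketch referenced above: scan outbound text for patterns that look like sensitive data before it leaves the network. The patterns and sample message are invented and far simpler than real DLP rules.

```python
import re

PATTERNS = {
    "US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_outbound(message: str) -> list[str]:
    # Return the labels of any sensitive-data patterns found in the message.
    return [label for label, pattern in PATTERNS.items() if pattern.search(message)]

print(scan_outbound("Please update my record, SSN 123-45-6789."))  # ['US SSN']
```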
Security Operations Concepts
- Need to Know and Least Privilege
- Access should be given based on a need to know. The principle of least privilege means giving users the fewest privileges they need to perform their job tasks. Access is only granted when a specific privilege is deemed necessary. It is a good practice and almost always recommended to follow.
- Aggregation – the combining of multiple things into a single unit is often used in role-based access control.
- Transitive trust – from a Microsoft Active Directory perspective, a root or parent domain automatically trusts all child domains. Because of the transitivity, all child domains also trust each other. Transitivity makes it simpler to have trusts. But it is important to be careful. In high-security environments, it isn’t uncommon to see non-transitive trusts used, depending on the configuration and requirements.
- Separation of Duties and Responsibilities
- Separation of duties refers to the process of separating certain tasks and operations so that a single person doesn’t control everything. Administration is key, as each person would have administrative access to only their area.
- The goal with separation of duties is to make it more difficult to cause harm to the organization via destructive actions or data loss, for example. With separation of duties, it is often necessary to have two or more people working together (colluding) to cause harm to the organization.
- Separation of duties is not always practical, though, especially in small environments. In such cases, you can rely on compensating controls or external auditing to minimize risk.
- Privileged Account Management
- A special privilege is a right not commonly given to people. Actions taken using special privileges should be closely monitored.
- For high-security environments, you should consider a monitoring solution that offers screen captures or screen recording in addition to the text log.
- Job Rotation
- Job rotation is the act of moving people between jobs or duties. The goal of job rotation is to limit how long any one person stays in a certain job or handles a certain set of responsibilities.
- This minimizes the chance of errors or malicious actions going undetected. Job rotation can also be used to cross-train members of teams to minimize the impact of an unexpected leave of absence.
Information Lifecycle
Information lifecycle is made up of the following phases:
- Collect data – data is gathered from automated sources and when users produce data such as creating a new spreadsheet.
- Use data – users read, edit, and share data.
- Retain data (optional) – data is archived for the time required by the company’s data retention policies.
- Legal hold (occasional) – a legal hold requires you to maintain one or more copies of specific data in an unalterable form during a legal scenario, an audit, or government investigation. A legal hold is often narrow and in most cases, is invisible to users and administrators who are not involved in placing the hold.
- Delete data – the default delete action in most operating systems is not secure. The data is simply marked as deleted but is still in storage until overwritten. To have an effective information lifecycle, you must use secure deletion techniques such as disk wiping, degaussing, and physical destruction.
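As a rough sketch of file-level disk wiping, here is a minimal Python example that overwrites a file with random data before deleting it. Note that on SSDs and journaling or copy-on-write filesystems, old blocks can survive elsewhere, so full-disk wiping, degaussing, or physical destruction remains the authoritative approach for sensitive media.

```python
import os

def wipe_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it (illustrative only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to stable storage
    os.remove(path)
```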
Service-Level Agreements (SLAs)
An SLA is an agreement between a provider (which could simply be another department within the organization) and the business that defines the level of service considered acceptable.
You’ll most likely come across this as providing a reliable service in the 9s. This is basically an availability or coverage threshold. The focus is usually on high availability and site resiliency. Sometimes there can be financial penalties for not meeting SLA requirements.
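To get a feel for what "the 9s" mean, here is a quick back-of-the-envelope calculation (a sketch assuming a 365-day year):

```python
# Allowed downtime per year for common availability targets ("the 9s").
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime_min:8.1f} minutes/year "
          f"({downtime_min / 60:.2f} hours)")

# Three nines (99.9%) allows roughly 8.8 hours of downtime per year,
# while five nines (99.999%) allows only about 5 minutes.
```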
Security Incident Management
NIST divides incident response into the following four steps:
- Preparation
- Detection and Analysis
- Containment, Eradication and Recovery
- Post-incident Activity
These steps are often expanded into eight to give a better view of incident management.
- Preparation – what has been done to train the team and users to take responsible measures that help detect and handle an incident. The incident-handling checklist is also part of preparation.
- Detection – also called the identification phase, this is the most important part of incident management. Detection should include an automated system that checks logs. User security awareness is also paramount. Time is of the essence.
- Response – also called containment, this is the phase where the team interacts with the potential incident. The first step is to contain the incident by preventing it from affecting other systems.
- Depending on the situation, the response can be to disconnect the network, shut down the system, or isolate the system. This phase typically starts with forensically backing up the systems involved in the incident. Volatile memory is also captured and dumped in this step before the system is powered off.
- Depending on the criticality of the affected systems, production can be heavily affected or even stopped, so it is important to have management’s approval. The response team will have to update management on the severity of the incident and the estimated time to resolution.
- Mitigation – during this phase, the incident should be analyzed to find the root cause. If the root cause is not known, restoring the systems may allow the incident to occur again. Once the root cause is known, a way to prevent the incident from recurring can be applied.
- The systems can then be restored or rebuilt from scratch to a state where the incident can’t occur again. It is especially important to prevent this incident from happening to other systems. Changing the firewall rule set or patching the system is often how this is done.
- Reporting – this phase starts at detection and finishes with the addition of the incident response report to the knowledge base. The reporting can take multiple forms depending on how public the communication is.
- For the non-technical people in the organization, send a formatted mail that explains the problem without technical terms and gives the estimated time to recover. If users are required to take action, it should be clearly explained, with supporting screenshots, so everyone can follow it.
- For the technical team, the communication should include details, the estimated time to recover, and perhaps the details of the incident response team’s resolution. A bridge call may also be needed.
- Recovery – during this phase, the system is restored or rebuilt. The business unit responsible for the system decides when the system goes back into production. Depending on the actions taken during mitigation, it’s possible that a problem remains, so close monitoring is required after the system returns to production.
- Remediation – this phase begins during mitigation. Once the root-cause analysis is complete, the vulnerabilities should be mitigated; remediation continues after mitigation ends. If the vulnerabilities exist in the system’s recovery image, the recovery image needs to be regenerated with the fix applied. All systems that were not affected by the incident but are still vulnerable should be patched ASAP. It’s important to neutralize the threat in this phase.
- Lessons Learned – this phase is often the most neglected, but it can prevent a lot of future incidents. The incident should be added to a knowledge base, along with the steps taken and whether users or members of the response team need additional training. The Lessons Learned phase can improve the preparation phase dramatically.
Maintaining Detective and Preventative Measures
- Type 1 Hypervisors run directly on the bare-metal hardware, with no underlying operating system. These hypervisors often perform better.
- Type 2 Hypervisors are applications installed on top of an OS; they are called hosted hypervisors. These hypervisors often perform slower than type 1 hypervisors since the host OS has to translate each call.
- Tripwire is a HIDS.
- NIPS is like an IDS, but it’s installed inline to the network. It can modify network packets or block attacks.
- IACIS is a non-profit organization of digital forensic professionals. The CFCE credential was the first certification demonstrating competency in computer forensics in relation to Windows based computers.
- CFTT is a project created by NIST to test and certify forensics equipment.
- Software Escrow Agreement allows the customer to have access to the source code of software when the vendor stops support or is out of business.
Firewalls
The operation of firewalls involves more than modifying rules and reviewing logs. You also need to review the configuration change log to see which configuration settings have been changed recently.
Intrusion Detection and Prevention Systems
You need to routinely evaluate the effectiveness of your IDS and IPS systems. This is not a set-and-forget security solution. The alerting functionality needs to be reviewed and fine-tuned. Too many false-positive alerts, and dangerous false negatives, will impede detection and, ultimately, response.
Whitelisting and Blacklisting
Whitelisting is the process of marking applications as allowed, while blacklisting is the process of marking applications as disallowed. Maintaining these lists can be automated, and the capability is often built into other security software.
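One common way to implement application whitelisting is to allow an executable only if its cryptographic hash is on a known-good list. A minimal sketch (the allowlist value below is made up for illustration):

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for approved executables.
ALLOWED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(path: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):   # hash the file in chunks
            digest.update(chunk)
    return digest.hexdigest() in ALLOWED_HASHES
```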
Security Services Provided by Third Parties
Some vendors offer security services that ingest logs from your environment. This handles the detection and response by using artificial intelligence or a large network operations center to sort through the noise. Other services perform assessments, audits, or forensics. There are also other third-party security services that offer code reviews, remediation, or reporting.
Open Source Intelligence is the gathering of information from any publicly available resource. This includes websites, social networks, discussion forums, file services, public databases, and other online sources. This also includes non-Internet sources, such as libraries and periodicals. Besides data being available in public places, third parties can provide services to include this information in their security offerings.
Sandboxing
Sandboxing is a technique that separates software, computers, and networks from your entire environment. Sandboxes help minimize damage to a production network. Unfortunately, since sandboxes are not under the same scrutiny as the rest of the environment, they are often more vulnerable to attack. Sandboxes are also often used for honeypots and honeynets.
Honeypots and Honeynets
A honeypot or a honeynet is a computer or network that is deliberately deployed to lure bad actors so that their actions and commands are recorded. If you don’t know how something would be compromised, this is a great way to see some of the methods used so that you can better secure your environment. There are important and accepted uses, but don’t expect all unauthorized access to be malicious in nature.
It’s interesting that honeypots and honeynets can be seen as unethical due to their similarity to entrapment. It’s undeniable, though, that security-conscious organizations can still take advantage of the information gleaned from their use.
Anti-malware
Anti-malware is a broad term that encompasses all tools to combat unwanted and malicious software, messages, or traffic. Malicious software includes nearly all code, apps, software, or services that exist to trick users or cause overall harm. You should deploy anti-malware to every possible device, including servers, computers, and mobile devices. Make sure to keep this stuff updated!
Configuration Management System
A configuration management system (CMS) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. A CMS can also be used for the following purposes:
- Service Modeling
- Standardization and Compliance
- Incident Resolution
- Change Impact Analysis
- Change Control
- Event Management
- License Management
Configuration Management Process
Configuration Management Process usually involves the three following steps:
- Baselining
- Patch Management
- Vulnerability Management
Change Control or Change Management Process
Change control within information technology (IT) systems is a process—either formal or informal—used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of the software. The goals of a change control procedure usually include:
- Have all changes reviewed by management
- Minimal disruption to services
- Communication for disruption to services
- Ensure that the change has a rollback plan
- Reduction in back-out activities
- Cost-effective utilization of resources involved in implementing change
The steps within the Change Management Process include:
- Request the change
- Review the change
- Approve/Reject the change
- Test the change
- Implement the change
- Document the change
Request Control process provides an organized framework where users can request modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks.
Recovery Strategies
- A recovery operation takes place after availability is hindered. This can be an outage, security incident, or a disaster. Recovery strategies have an impact on how long your organization will be down or would otherwise be hindered. Here are the strategies (design):
- Backup storage – have a strategy and policy detailing where your backup data is stored and how long it is retained. Offsite storage is important as well.
- Backup separation – having backups stored offsite ensures that if your datacenter goes, your data doesn’t go with it. Third party organizations also offer recovery facilities to enable faster recovery.
- Long-term backup storage – offsite storage providers can offer specialized facilities with tightly controlled humidity, temperature, and light. These facilities are a good fit for long-term backup storage.
- Additional services – other managed services are generally available like tape rotation, electronic vaulting, and organization of media.
- Recovery site strategies – multiple datacenters allow for basic primary, recovery, and regional strategies in house. Thanks to the cloud, it’s possible to extend your recovery efforts on the fly. Backups to the cloud can be cost effective; however, recovery can be expensive or too slow for your needs.
- Multiple processing sites – multiple data centers with high-speed connectivity are more prevalent now than ever before. You can run multiple instances of an application across three or more data centers. This allows for backup-free designs since the data is stored, with syncing, in at least three separate locations. A processing site can include the cloud.
- System resiliency – includes high availability, quality of service (QoS), and fault tolerance as noted below:
- System resilience – ability to recover quickly. A device or system can quickly and seamlessly be added or replaced in the event of system failure. It often comes from having multiple functional components.
- High availability – since resilience is focused on recovering in a short period of time, high availability is about having multiple redundant systems that enable zero downtime for a single failure. While clusters are often the answer for high availability, there are many other methods available too. For example, you can use an HA pair. Both high availability and resiliency are needed in many organizations.
- Quality of service (QoS) – a technique that gives specified services higher priority than other services. For example, on a network, QoS might provide the highest quality of service to the phones and the lowest quality of service to social media. QoS is also used by ISPs to control the quality of services for specific partners or customers. For example, an ISP might decide to use QoS to make its own web properties or media offerings perform at their best while hamstringing the performance of its competitors. There is plenty of evidence to support this (it’s a short Google query away). This is why net neutrality is an important discussion.
- Fault tolerance – as part of providing a highly available solution, your systems must have multiples of the same type of component. Fault tolerance by itself isn’t enough; you’re just holding onto spares, and another component failure can still take you down.
Backup
Difference between the following types of backup strategies:
- Differential: copies only files that have had their data changed since the last full backup. This requires more space than incremental backup. Differential backup doesn’t clear the archive attribute, so the next differential backup will be larger than the previous one.
- Incremental: copies only files that have changed since the last full or incremental backup. This kind of backup strategy takes more time in restoration but is faster at the backup time. Incremental backup clears the archive attribute, so the next incremental backup will not save the same file if it has not changed.
Backup Strategy | Backup Speed | Restoration Speed | Space Required | Needed for recovery | Clears backup bit |
---|---|---|---|---|---|
Full | Slow | Fast | Large | Last Full backup | Yes |
Differential | Medium | Medium | Medium (grows until the next full backup) | Last Full backup + Last differential | No |
Incremental | Fast | Slow | Small | Last Full backup + All incremental since last full backup. | Yes |
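The "clears backup bit" column is easiest to see with a small simulation. Here is a sketch that models each backup type with a per-file archive flag (the file names and flow are invented for illustration):

```python
# Simulate full/differential/incremental backups with an "archive" flag
# (the flag is set whenever a file changes and cleared by some backup types).

files = {"report.doc": True, "budget.xls": True, "notes.txt": True}

def backup(kind: str) -> list:
    if kind == "full":
        selected = list(files)                        # everything, regardless of flag
    else:
        selected = [f for f, changed in files.items() if changed]
    if kind in ("full", "incremental"):               # these clear the archive bit
        for f in selected:
            files[f] = False
    return selected                                   # differential leaves flags set

print(backup("full"))           # all files; flags cleared
files["budget.xls"] = True      # file modified after the full backup
print(backup("differential"))   # ['budget.xls'] - flag stays set
print(backup("differential"))   # ['budget.xls'] again (grows until the next full)
print(backup("incremental"))    # ['budget.xls']; flag cleared
print(backup("incremental"))    # [] - nothing changed since the last incremental
```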
System Availability
RAID is a set of configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives.
- RAID 0 – striping, sending data to two or more disks to increase read and write speed. Striping is done at the block level. The downside is that if one disk in the RAID fails, your data is gone. The failure rate is multiplied by the number of disks.
- Striping
- RAID 1 – consists of an exact copy or mirror of a data set on two or more disks.
- Mirroring
- RAID 2 – striping at the bit level with Hamming-code error correction. Rarely implemented because it’s too complex: the disks have to spin at the same speed and be synchronized, so only one request can be serviced at a time.
- RAID 3 – byte-level striping with a dedicated parity disk. Rarely used.
- RAID 4 – block-level striping with a dedicated parity disk (like RAID 3, but the striping is done at the block level).
- RAID 5 – block-level striping with parity distributed across all of the disks. The minimum number of disks is three. If any single disk fails, the missing data can be rebuilt from the remaining data and parity blocks (see the parity sketch after this list).
- Distributed parity
- RAID 6 – RAID 5 plus a second, independently computed parity block per stripe. The minimum number of disks is four. Reads are about as fast as RAID 5, but writes are slower, and the array can survive two simultaneous disk failures.
- Dual parity
- Nested RAID levels – also known as hybrid RAID, combine two or more of the standard RAID levels. RAID 50 (5+0) combines the straight block-level striping of RAID 0 with the distributed parity of RAID 5.
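The parity math behind RAID 5 and RAID 6 is just XOR across the blocks in a stripe: if one block is lost, XOR-ing the survivors (including the parity block) recreates it. A minimal single-stripe sketch:

```python
# XOR parity across one stripe, as used by RAID 5 (RAID 6 adds a second,
# independently computed parity block so that two disks can fail).

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk1, disk2, disk3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(disk1, disk2, disk3)     # stored on another disk in the stripe

# Simulate losing disk2 and rebuilding it from the survivors plus parity.
rebuilt = xor_blocks(disk1, disk3, parity)
assert rebuilt == disk2
```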
Power and Interference
Electrical Power is a basic need to operate. Here are the problems you can encounter with the commercial power supply:
- Blackout – complete loss of commercial power.
- Fault – momentary power outage.
- Brownout – an intentional reduction of voltage by a power company.
- Sag/dip – a short period of low voltage.
- Spike – a sudden rise in voltage in the power supply, during a short period of time.
- Surge – a rise in voltage in the power supply, during a long period of time.
- In-rush current – the initial surge of current required by a load before it reaches normal operation.
- Transient – line noise or disturbance is superimposed on the supply circuit and can cause fluctuations in electrical power.
You can mitigate the risk by installing a UPS. A UPS has limited capacity and can power connected systems for only a short period of time. To have power for days, a diesel generator is needed.
Noise can occur on a cable:
- Transverse-mode noise happens when there is a high charge difference between the hot and neutral wires.
- EMI and RFI are caused by other electrical devices, light sources, electrical cables, etc.
DRP
The DRP is focused on IT and is part of the BCP. There are five methods to test a DRP:
- Read-through – everyone involved reads the plan. It helps find inconsistencies, errors, etc.
- Structured walk-through – also known as a table-top exercise. All of the involved parties role-play their part by reading the DRP and following a mock scenario.
- Simulation test – is when the team is asked to give a response to a virtual disaster. The response is then tested to make sure the DRP is valid.
- Parallel test – is where the DRP is tested for real. If there is a second site, it is activated, etc. The parallel test should never impact production.
- Full interruption test – production is shut down to test the DRP. It’s rarely done due to the heavy impact on production.
Type of DR sites:
- Redundant site – same as production site and there is a mechanism to activate a failover to send the traffic to the redundant site. A failure on the production site should be invisible to users and customers.
- Hot site – a mirror site of the current production site. It should be up within 3 hours after an event.
- Warm site – has everything needed in terms of hardware to run everything in production, but the data or systems are not up to date. Sometimes the telecom links are not ready either. It can take up to 3 days for a warm site to be up and running.
- Cold site – the least expensive option. It’s just an empty site with no hardware or data.
- Reciprocal agreement – where you agree with another company to host each other’s recovery if there’s a problem. It can raise concerns regarding distance, data confidentiality, and so on.
- Mobile site – essentially a datacenter on wheels: trucks or trailers with hardware and data ready to be deployed when a disaster is declared.
Business Continuity Planning
BCP is the process of ensuring the continuous operation of your business before, during, and after a disaster event. The focus of BCP is entirely on business continuation, and it ensures that all services the business provides, or critical functions the business performs, are still carried out in the wake of the disaster.
BCP should be reviewed each year or when a significant change occurs. BCP has multiple steps:
- Project initiation is the phase where the scope of the project must be defined.
- Develop a BCP policy statement.
- The BCP project manager must be named; they’ll be in charge of the business continuity planning and must test it periodically.
- The BCP team and the CPPT should be constituted too.
- It is also very important to have the top-management approval and support.
- Scope is the step where you define which assets and which kinds of emergency events are included in the BCP. Every department of the company must be involved in this step to ensure no critical assets are missed.
- Conduct a BIA. BIA differentiates critical (urgent) and non-essential (non-urgent) organization functions or activities. A function may be considered critical if dictated by law. It also aims to quantify the possible damage that can be done to the system components by disaster.
- The primary goal of BIA is to calculate the MTD for each IT asset. Other benefits of BIA include improvements in business processes and procedures, as it will highlight inefficiencies in these areas. The main components of BIA are as follows:
- Identify critical assets
- At some point, a vital records program needs to be created. This document indicates where the business critical records are located and the procedures to backup and restore them.
- Conduct risk assessment
- Determine MTD
- Failure and recovery metrics
- Identify preventive controls
- Develop Recovery strategies
- Create a high-level recovery strategy.
- The systems and services identified in the BIA should be prioritized.
- The recovery strategy must be agreed upon by executive management.
- Designing and developing an IT contingency plan
- This is where the DRP is designed. A list of detailed procedures for restoring IT systems must be produced at this stage.
- Perform DRP training and testing
- Perform BCP/DRP maintenance
8. Software Development Security
Software development security involves the application of security concepts and best practices to production and development software environments.
It’s important to address security in software development tools, source code weaknesses and vulnerabilities, configuration management as it relates to source code, the security of code repositories, and the security of application programming interfaces. Security should be integrated into the software development lifecycle, taking into account development methodologies, maturity models, operations and maintenance, and change management, as well as the need for an integrated product development team.
5 Phase SDLC Project Lifecycle
Nonfunctional Requirements define system attributes such as security, reliability, performance, maintainability, scalability, and usability.
- Initiation – problems are identified and a plan is created.
- Acquisition and development – once developers reach an understanding of the end user’s requirements, the actual product must be developed.
- Implementation – physical design of the system takes place. The Implementation phase is broad, encompassing efforts by both designers and end users.
- Operations and maintenance – once a system is delivered and goes live, it requires continual monitoring and updating to ensure it remains relevant and useful.
- Disposition – represents the end of the cycle, when the system in question is no longer useful, needed, or relevant.
This is a more detailed SDLC, containing 13 phases:
- Preliminary analysis – begin with a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations.
- Conduct the preliminary analysis: discover the organization’s objectives and the nature and scope of the problem. Even if a problem refers only to a small segment of the organization itself, find out what the objectives of the organization are and see if this fits.
- Propose alternative solutions: after digging into the organization’s objectives and specific problems, several solutions may have been discovered. However, alternate proposals may still come from interviewing employees, clients, suppliers, and/or consultants. Insight may also be gained by researching what competitors are doing.
- Cost benefit analysis: analyze and describe the costs and benefits of implementing the proposed changes. In the end, the ultimate decision on whether to leave the system as is, improve it, or develop a new system will be guided by this and the rest of the preliminary analysis data.
- Systems analysis, requirements: define project goals into defined functions and operations of the intended application. This involves the process of gathering and interpreting facts, diagnosing problems, and recommending improvements to the system. Project goals will be further aided by analysis of end-user information needs and the removal of any inconsistencies and incompleteness in these requirements. Due Care should be done in this phase. A series of steps followed by the developer include:
- Collection of facts: obtain end user requirements through documentation, client interviews, observation, and questionnaires.
- Scrutiny of the existing system: identify pros and cons of the current system in-place, so as to carry forward the pros and avoid the cons in the new system.
- Analysis of the proposed system: find solutions to the shortcomings described in step two and prepare the specifications using any specific user proposals.
- Systems design: desired features and operations are described in detail, including screen layouts, business rules, process diagrams, pseudocode, and other documentation.
- Development: the real code is written in this step.
- Documentation and common program control: how data is handled in the system, how logs are generated, and so on. This is also documented.
- Integration and testing: all the pieces are brought together into a special testing environment, then checked for errors, bugs, and interoperability.
- Acceptance: the system is tested by a third party. The testing includes functionality tests and security tests.
- Testing and evaluation controls: create guidelines to determine how the system can be tested.
- Certification: the system is compared to functional security standards to ensure the system complies with those standards.
- Accreditation: the system is approved for implementation. A certified system might not be accredited and an accredited system might not be certified.
- Installation, deployment, implementation: final stage of initial development, where the software is put into production and runs actual business.
- Maintenance: during the maintenance stage of the SDLC, the system is assessed/evaluated to ensure it does not become obsolete. This is also where changes are made to initial software.
- Disposal: plans are developed for discontinuing the use of system information, hardware, and software and making the transition to a new system. The purpose here is to properly move, archive, discard, or destroy information, hardware, and software that is being replaced, in a manner that prevents any possibility of unauthorized disclosure of sensitive data.
- The disposal activities ensure proper migration to a new system. Particular emphasis is given to proper preservation and archiving of data processed by the previous system. All of this should be done in accordance with the organization’s security requirements.
Programming Languages
Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap. Programming languages are classified by generation.
- First-generation language – made up of ones and zeros.
- Machine language
- Second Generation language – assembly language. The language is specific to a particular processor family and environment.
- Assembly language
- Third Generation language – includes features like improved support for aggregate data types and expressing concepts in a way that favors the programmer, not the computer. A third generation language improves over a second generation language by having the computer take care of non-essential details. It also uses a compiler to translate the human-readable code into machine code. Sometimes a runtime VM is used, as with C# and Java. Fortran, ALGOL, COBOL, C, C++, C#, Java, BASIC, and Pascal are 3rd generation languages.
- High-Level language
- Fourth Generation language – languages designed for a specific set of problems or tasks. MATLAB is made to work in the mathematics field. The different flavors of SQL are designed to interact with databases. XQuery is made for XML.
- Very high-level language
- Fifth Generation language – while fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. Fifth-generation languages are used mainly in artificial intelligence research. OPS5 and Mercury are examples of fifth-generation languages.
- Natural language
Software and Data Terms
- Coupling is the degree of interdependence between software modules, or how heavily one module/object depends on another. Low coupling means that changing something in one class will not affect other classes. It is a measure of how closely connected two routines or modules are; the strength of the relationships between modules.
- Coupling is usually contrasted with cohesion (how related the functions an object/module implements are; high cohesion means an object/module implements only related functions). Low coupling often correlates with high cohesion, and vice versa.
- Consistency in database systems refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.
- Cardinality refers to the uniqueness of data values contained in a particular column (attribute) of a database table. The lower the cardinality, the more duplicated elements in a column. For example, ID should be unique, so ID would have a high cardinality. A column Gender that can only accept Male or Female would have a low cardinality.
- Durability indicates that once a transaction is committed, it’s permanent. It’ll survive any crash or power off of the DB’s host. The transaction is written to the disk and in the transaction log. Like a customer entry in a database for example.
- Data Dictionary is a data structure that stores metadata (structured data about information). If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS software, it is called a passive data dictionary. Otherwise, it is called an active data dictionary.
- Test Coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. To calculate the test coverage, the formula is Number of use cases tested / Total number of use cases.
- Negative Testing is a method of testing an application or system that ensures the application behaves according to the requirements and can handle unwanted input and user behavior. Invalid data is inserted and the output is compared against the given input. Negative testing is also known as failure testing or error path testing.
- Boundary tests are done during negative testing. When performing negative testing exceptions are expected. This shows that the application is able to handle improper user behavior. Users input values that do not work in the system to test its ability to handle incorrect values or system failure.
- CRUD testing – Create, Read, Update, and Delete (CRUD) are the four basic functions of persistent storage. CRUD testing is used to validate that all four functions work as expected (see the sketch after this list).
- Heap Metadata Protection is a memory protection that forces a process to fail if a pointer is freed incorrectly.
- Pointer Encoding is a buffer overflow protection recommended by Microsoft during the Software Development Lifecycle for Independent Software Vendors, but it’s not required.
- Data Warehousing is the process of collecting large volumes of data on high-performance storage.
- Data Mining is the process of searching large volumes of data for patterns.
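A CRUD test simply exercises all four operations against the data store and checks each result. Here is a minimal sketch against an invented in-memory store (names are illustrative only):

```python
# Minimal CRUD test against a toy in-memory store.

class Store:
    def __init__(self):
        self._rows = {}

    def create(self, key, value):
        self._rows[key] = value

    def read(self, key):
        return self._rows.get(key)

    def update(self, key, value):
        if key not in self._rows:
            raise KeyError(key)
        self._rows[key] = value

    def delete(self, key):
        self._rows.pop(key, None)

def test_crud():
    store = Store()
    store.create("user:1", {"name": "Alice"})            # Create
    assert store.read("user:1") == {"name": "Alice"}     # Read
    store.update("user:1", {"name": "Bob"})              # Update
    assert store.read("user:1") == {"name": "Bob"}
    store.delete("user:1")                                # Delete
    assert store.read("user:1") is None

test_crud()
```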
Buffer Overflow and Pointer Protection
This is according to the Independent Software Vendor recommendations from Microsoft SDL.
Name | Requirement | Priority |
---|---|---|
Pointer Encoding | No | Moderate |
ASLR | Yes | Critical |
Heap Metadata Protection | Yes | Moderate |
DEP | Yes | Critical |
Data Protection
- Hardware Segmentation is memory protection that maps process in different hardware memory locations.
- Defect Density is a development metric that measures the average number of defects per line of code.
- Risk Density is a secure development metric that ranks security issues in order to quantify risk.
- Inference is the ability to deduce sensitive information from available non-sensitive information. For example, deducing a patient’s illness based on that patient’s prescription.
- Aggregation is combining benign data to reveal potentially sensitive information.
Software Development Methodologies
- Agile Software Development method is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customers/end users.
- Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design.
- Iterations, or sprints, are short time frames (time boxes) that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders.
- This minimizes overall risk and allows the product to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations might be required to release a product or new features.
- Working software is the primary measure of progress.
- CMM is a development model created after a study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. The term “maturity” relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes. The model’s aim is to improve existing software development processes, but it can also be applied to other processes. There are five maturity levels:
- Initial – it is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled, and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.
- It’s chaos. Personnel are reacting to events/requests.
- Repeatable – it is characteristic of this level of maturity that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.
- Personnel have already encountered the events/requests and are able to repeat an action or unwritten process.
- Defined – it is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place, but they may not yet have been used systematically or repeatedly enough for users to become competent or for the process to be validated in a range of situations.
- Some documentation and standards are in place.
- Managed – it is characteristic of processes at this level that, using process metrics, effective achievement of the process objectives can be evidenced across a range of operational conditions. The suitability of the process in multiple environments has been tested and the process refined and adapted. Process users have experienced the process in multiple and varied conditions and are able to demonstrate competence. The process maturity enables adaptations to particular projects without measurable losses of quality or deviations from specifications.
- The company/organization has metrics about the process. Personnel are trained and experienced.
- Optimizing – it is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes or improvements. Processes are concerned with addressing statistical common causes of process variation and changing the process to improve process performance. This is done while maintaining the likelihood of achieving the established quantitative process-improvement objectives. Only a few companies in the world have attained this level.
- Company/Organization management is constantly working on improving the process.
- CPA (Critical Path Analysis) or the Critical Path Method (CPM) is an algorithm for scheduling a set of project activities. It is commonly used in conjunction with the program evaluation and review technique (PERT). A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.
- Critical Path Analysis is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis.
- The first time CPM was used for major skyscraper development was in 1966 while constructing the former World Trade Center Twin Towers in New York City. Although the original CPM program and approach is no longer used, the term is generally applied to any approach used to analyze a project network logic diagram.
- Waterfall Model is the oldest and most common model used for SDLC methodology. It works on the principle of finishing one phase and then moving on to the next one. Every stage builds on information collected from the previous phase and has a separate project plan. Though it is easy to manage, delays in one phase can affect the whole project timeline. Moreover, once a phase is completed, there is little room for amendments until the project reaches the maintenance phase. The phases include:
- Requirement
- Design
- Implementation
- Verification
- Maintenance
- Spiral Model – a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping.
- IDEAL
- INITIATION – management support is obtained for process improvement, the objectives and constraints of the process improvement effort are defined, and the resources and plans for the next phase are obtained.
- DIAGNOSIS – identifies the appropriate appraisal method (such as CMM-based), identifies the project(s) to be appraised, trains the appraisal team, conducts the appraisal, and briefs management and the organization on the appraisal results.
- ESTABLISHMENT – an action plan is developed based on the results of Phase 2, management is briefed on the action plan, and the resources and group(s) are coordinated to implement the action plan.
- ACTION – resources are recruited for implementation of the action plan, the action plan is implemented, the improvement effort is measured, and the plan and implementation are modified based on measurements and feedback.
- LEVERAGE – ensures that all success criteria have been achieved, all feedback is evaluated, the lessons learned are analyzed, the business plan and process improvement are compared for the desired outcome, and the next stage of the process improvement effort is planned.
Processor Mode
Processors have different modes of execution.
- Ring 0 – Kernel/Supervisor/High Privilege is the mode used to execute code that has complete access to the hardware.
- It’s normally reserved for OS functions.
- Ring 3 – Users/Applications mode
- Is used to run applications.
Secure Coding Guidelines and Standards
- Many organizations have a security strategy that is focused at the infrastructure level; it deals with hardware and access. However, organizations that develop code internally should also include coding in their security strategy.
- Security weaknesses and vulnerabilities at the source-code level are important because just about every application has bugs. While not all bugs are specifically related to security, they can sometimes lead to a security vulnerability.
- Use source code analysis tools, which are also called static application security testing (SAST) tools, to find and fix bugs.
- These tools are most effective during the software development process, since it’s more difficult to rework code after it is in production.
- These tools can’t find everything and can potentially create extra work for teams if there are a lot of false positives.
- All source code should be scanned during development and again after release into production.
- Security of application programming interfaces (APIs). APIs allow applications to make calls to other applications. Without proper security, APIs are a perfect vector for malicious activity against your application or your environment as a whole.
- The security of APIs starts with requiring authentication using a method such as OAuth or API keys. Authorization should also be used and enforced.
- Many companies use an API security gateway to centralize API calls and perform checks on the calls (checking tokens, parameters, messages, etc.) to ensure they meet the organization’s requirements.
- Other common methods to secure your APIs are throttling (which protects against DoS and similar misuse), scanning your APIs for weaknesses, and using encryption (such as with an API gateway).
- Secure coding practices. There are established practices you should follow to maximize the security of your code. Some of the most common ones are:
- Input validation – validate input from untrusted sources and reject invalid input.
- Don’t ignore compiler warnings – use the highest warning level available and address all warnings that are generated.
- Deny by default – everyone should be denied by default. Grant access as needed.
- Authentication and password management – require authentication for everything that is not meant to be available to the public. Hash passwords and salt the hashes (see the sketch after this list). Mmmm, hash browns.
- Access control – restrict access using the principle of least privilege. Deny access if there are issues checking access control systems.
- Cryptographic practices – protect secrets and master keys by establishing and enforcing cryptographic standards for your organization.
- Error handling and logging – avoid exposing sensitive information in log files or error messages. Restrict access to logs.
- Data protection – encrypt sensitive information.
- Communication security – use Transport Layer Security (TLS) everywhere possible.
- System configuration – lock down servers and devices. Keep software versions up to date with fast turnaround.
- Memory management – use input and output control, especially for untrusted data. Watch for buffer size issues (use static buffers). Free memory when it is no longer required.
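For the "hash passwords and salt the hashes" practice above, here is a minimal sketch using PBKDF2 from Python's standard library. A dedicated password-hashing scheme such as bcrypt or Argon2 is usually preferable, and the iteration count below is only an illustrative work factor, so treat this as a sketch rather than a drop-in implementation.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str):
    salt = os.urandom(16)                                    # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                                       # store both, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)              # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```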
Conclusion
YEAH. We did it. Over 24K words of CISSP study notes goodness. This was probably a fraction of what you need to know, as plenty of knowledge and experience is already in my head. So be sure to make your own notes or add to these!
Let me know what was easy for you and, of course, what you had trouble with. If anything needs to be corrected or added, please sound off in the comments below.
Thanks, and good luck with the exam!
Update 9/25: I JUST PASSED. WOOHOO! What’s next? I’m not sure what 2020’s cert will be. I’m also debating whether I should create updated study guides for newer versions of exams on this website. If you come across this and have ideas, share them in the comment section below!