RemNote Community

Cybersecurity - Security Principles and Architecture

Learn the fundamentals of secure operating systems and coding, key security architecture and control models, and modern practices such as software‑defined perimeters and security engineering.


Summary

Secure Operating Systems and Secure Coding

Introduction

This material covers the foundational concepts and practices for building systems that are inherently secure. The goal is to prevent security vulnerabilities from being introduced during development and deployment, rather than patching problems after they occur. This involves selecting appropriate security architectures, implementing protective controls, and following best practices throughout the entire system lifecycle.

Secure Operating Systems

A secure operating system is designed and built with security as a primary concern from the ground up. These systems are typically verified and certified by external security-auditing organizations to demonstrate that they meet established security standards. The most common evaluation standard is Common Criteria, a formal framework that evaluates IT products against defined security requirements. Products that pass a Common Criteria evaluation receive a certification confirming that they have been independently tested and meet specific security guarantees. This certification process gives users confidence that the system has been rigorously assessed for security vulnerabilities.

Secure Coding Practices

Secure coding is the practice of writing software in ways that prevent the accidental introduction of security vulnerabilities. Rather than relying on security measures added after development is complete, secure coding embeds security thinking into the development process itself. Secure-by-design systems are designed from the ground up with security as a primary feature, not an afterthought, so security is woven into the architecture and implementation rather than bolted on later. The motivation is simple: it is far more difficult and expensive to retrofit security into an insecure system than to build it correctly from the start.
Formal Verification

Formal verification is the process of mathematically proving that an algorithm or system operates correctly according to its specification. This is particularly important for cryptographic protocols and security-critical components, where even a small error can completely undermine security. Rather than relying on testing (which can only show the presence of bugs, not their absence), formal verification provides mathematical certainty that an implementation is correct. However, formal verification is time-consuming and expensive, so it is typically reserved for the most critical security components rather than entire systems.

Computer Protection Countermeasures

Security by Design

Building secure systems requires following a set of architectural and implementation principles from the very beginning of development.

Principle of Least Privilege: Every component, user, and process should be granted only the minimum access and permissions it actually needs to perform its function. For example, a web server should not have administrative access to the operating system, and a user account should not have write access to system files. This principle limits the damage if a component is compromised.

Code Reviews and Unit Testing: When formal verification is not feasible (which is most of the time), code reviews (where other developers examine the code) and unit testing help catch security flaws before deployment. A security-aware code review focuses on identifying potential vulnerabilities such as buffer overflows, SQL injection points, or missing input validation.

Defense in Depth: Rather than relying on a single security measure, effective systems have multiple overlapping layers of protection. For example, a system might use network firewalls, authentication requirements, encryption, and application-level access controls. This ensures that if one layer is compromised, others remain in place.
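As a small sketch of the unit-testing practice described above (the validator, its allow-list rule, and the test inputs are invented for illustration, not taken from the source), a security-aware test might confirm that an input validator rejects injection-style and oversized input:

```python
import re

def is_safe_username(value: str) -> bool:
    """Hypothetical validator: accept only short alphanumeric usernames.

    Allow-listing expected input is generally safer than trying to
    block-list known attack strings.
    """
    return bool(re.fullmatch(r"[A-Za-z0-9_]{1,32}", value))

# In a real project these checks would live in a test suite run by a
# framework such as unittest or pytest; plain asserts keep the sketch short.
assert is_safe_username("alice_01")            # normal input accepted
assert not is_safe_username("admin' OR '1'='1")  # SQL-injection probe rejected
assert not is_safe_username("x" * 1000)          # oversized input rejected
```

The test exercises malicious as well as benign input, which is what distinguishes a security-aware unit test from an ordinary functional one.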
Default Secure Settings: Systems should be configured securely out of the box. Users should never have to explicitly enable security features for a system to be safe. This prevents situations where administrators forget to configure security properly.

Fail-Secure Design: If a system fails, it should fail safely. For example, a door lock should lock rather than unlock if power is lost, and a security check that crashes should deny access rather than allow it.

Audit Trails: Systems should maintain detailed logs of important activities and security events. These records serve two purposes: they enable detection of attacks in progress and support forensic investigation after an incident. Audit trails should be stored remotely or in ways that prevent tampering, even if the main system is compromised.

<extrainfo> Full Disclosure: When a security vulnerability is discovered, there are different philosophies about how quickly to disclose it publicly. Full disclosure means telling the public immediately, which shortens the window of time in which attackers can exploit the flaw before users know to patch it. However, it also gives attackers a roadmap if the vendor has not yet released a patch. Most modern practice favors coordinated disclosure, in which the vendor is given time to prepare a patch before public announcement. </extrainfo>

Security Architecture

Security architecture is the discipline of designing systems so that their structure actively reinforces security goals.
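The fail-secure principle above fits in a few lines of code: if the authorization check itself errors, the wrapper denies access rather than allowing it. The policy table and function names here are illustrative:

```python
def is_authorized(user: str, resource: str) -> bool:
    """Hypothetical policy lookup that may raise on malformed input."""
    policy = {("alice", "payroll"): True}
    if not user or not resource:
        raise ValueError("malformed request")
    return policy.get((user, resource), False)

def check_access(user: str, resource: str) -> bool:
    """Fail-secure wrapper: any error in the check results in denial."""
    try:
        return is_authorized(user, resource)
    except Exception:
        # Fail secure: a crashing security check denies access,
        # just as a door lock should stay locked when power fails.
        return False

print(check_access("alice", "payroll"))    # True: explicitly allowed
print(check_access("mallory", "payroll"))  # False: not in policy
print(check_access("", "payroll"))         # False: the check raised, so deny
```

The opposite choice (catching the error and returning True, or letting the caller proceed) would be fail-open, which is exactly what fail-secure design forbids.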
A security architect's role is to ensure that:

- Components are structured to prevent unauthorized access
- Changes and additions to the system maintain or improve security
- Security controls align with organizational risk tolerance and requirements

Key attributes of good security architecture include a clear understanding of component relationships, risk-based selection of controls (applying stronger controls where risk is highest), and standardization of controls across the system to reduce complexity and cost.

Security Measures and the CIA Triad

The Confidentiality, Integrity, and Availability (CIA) triad forms the foundation of information security:

- Confidentiality: Only authorized users can access sensitive data
- Integrity: Data has not been altered by unauthorized parties
- Availability: Systems and data are accessible when needed

Protecting the CIA triad requires three types of measures working together.

Technical Measures include:

- User account access controls, which determine which users can access which resources
- Cryptography, which protects data both in transit and at rest
- Firewalls, which filter incoming and outgoing network traffic and are implemented as either hardware appliances or software programs
- Intrusion Detection Systems (IDS), which monitor networks for signs of ongoing attacks and provide evidence for forensic investigation after an incident
- Audit logs, which record system events and activities on individual systems, serving a detection function similar to an IDS on a network
- Forward web proxies, which inspect web traffic before it reaches client machines and can block access to known malicious sites

Physical Measures include locks, surveillance, and access controls to buildings and equipment.

Administrative Measures include policies, procedures, training, and security awareness programs.

Vulnerability Management

Vulnerabilities are weaknesses in software or firmware that can be exploited to compromise security.
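One way to make an audit trail tamper-evident, as recommended above, is to chain log entries with a cryptographic hash so that altering any earlier record invalidates every later one. This sketch uses Python's standard hashlib; the record format is invented for the example:

```python
import hashlib
import json

def append_entry(log: list, event: str) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_log(log: list) -> bool:
    """Recompute the chain; any tampered entry breaks verification."""
    prev_hash = "0" * 64
    for record in log:
        payload = json.dumps(
            {"event": record["event"], "prev": record["prev"]},
            sort_keys=True,
        ).encode()
        if (record["prev"] != prev_hash
                or record["hash"] != hashlib.sha256(payload).hexdigest()):
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, "user alice logged in")
append_entry(log, "payroll file read")
print(verify_log(log))                     # True: chain intact
log[0]["event"] = "nothing happened"       # an attacker edits history
print(verify_log(log))                     # False: tampering detected
```

Shipping each entry (or each chain head) to a separate log server gives the remote, tamper-resistant storage the summary calls for: an attacker who compromises the main system cannot quietly rewrite what was already sent.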
Vulnerability management is a continuous cycle of identifying, fixing, or mitigating these flaws. Vulnerability scanners are automated tools that detect known weaknesses such as:

- Open network ports that should be closed
- Insecure configurations (for example, default passwords or unnecessary services)
- Missing security patches
- Software known to be susceptible to specific malware

Scanners must be kept up to date with current threat information and vendor patches to be effective. However, automated scanning alone is not sufficient: penetration testing performed by external security auditors simulates real attacks and often discovers vulnerabilities that automated tools miss. A comprehensive vulnerability management program uses both approaches.

Reducing Vulnerabilities Through Operational Practices

Beyond technical controls, several operational practices significantly reduce the likelihood of successful attacks.

Information Technology Security Assessments evaluate systems for risk and predict potential vulnerabilities. These assessments help prioritize which vulnerabilities pose the greatest risk and should be addressed first.

Patching and Skilled Personnel: Keeping systems updated with vendor patches closes known vulnerabilities. Additionally, hiring Security Operations Centre analysts (trained professionals who monitor systems for suspicious activity) provides human intelligence to catch attacks that automated tools might miss.

Two-Factor Authentication requires two forms of evidence of identity:

- Something you know (a password or PIN)
- Something you have (a hardware token, smart card, or mobile device)

This prevents attackers from accessing accounts even if they steal a password, since they still lack the physical token.

Security Training improves user awareness of social engineering tactics and physical security threats.
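The "something you have" factor is commonly implemented as a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps and hardware tokens. A minimal version fits in the standard library; this is an illustrative sketch, not production code (real deployments also handle clock drift and rate limiting):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, at, digits=6, step=30):
    """Minimal RFC 6238 time-based one-time password (SHA-1 variant).

    secret_b32: base32-encoded shared secret (provisioned to the token)
    at: Unix timestamp at which to compute the code
    """
    key = base64.b32decode(secret_b32)
    counter = int(at // step)                      # 30-second time window
    msg = struct.pack(">Q", counter)               # counter as big-endian u64
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" in base32; at t=59 the
# expected 6-digit SHA-1 code is 287082.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59))  # "287082"
```

A login then requires both the password check (something you know) and a matching current TOTP code (something you have); a stolen password alone is useless without the shared secret held by the token.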
Inoculation theory suggests that exposing users to simulated phishing attacks and other low-risk simulated attacks builds resistance to real attack attempts. Users who have seen a convincing fake phishing email are more likely to recognize and avoid real phishing attacks.

<extrainfo> Hardware Protection Mechanisms: Physical security devices such as hardware security dongles, Trusted Platform Modules (TPMs), intrusion-aware cases that detect tampering, drive locks, disabled USB ports, and mobile-enabled access controls add an extra layer of protection by making physical attacks more difficult. </extrainfo>

Access Control Models

Access control determines who can access which resources and what actions they can perform. Several models provide different approaches to implementing access control.

Access-Control Lists (ACLs)

An access-control list is a list of permissions attached to a resource that specifies:

- Which users or processes may access the object
- What operations they may perform (read, write, execute, delete, etc.)

For example, a file might have an ACL stating that Alice can read and write it, Bob can only read it, and Charlie cannot access it at all. ACLs are straightforward and give administrators fine-grained control, but they can become unwieldy in large organizations with thousands of users and resources.

Role-Based Access Control (RBAC)

Role-based access control groups users by their roles and grants access based on role membership. Instead of assigning permissions to individual users, you assign users to roles (such as "database administrator," "financial analyst," or "customer service representative"), and each role has specific permissions. RBAC simplifies management because adding a new user only requires assigning them the appropriate role; the role itself carries all necessary permissions.
RBAC can implement either mandatory access control (where the system enforces access rules that cannot be overridden by users) or discretionary access control (where resource owners can grant access to others).

Capability-Based Security

Capability-based security grants unforgeable tokens that describe what a process can access and do. A capability is essentially a protected key that says "this process can perform operation X on resource Y." Because capabilities are unforgeable, they provide strong guarantees about what a compromised process cannot do: it can only do what its capabilities allow. Capability-based mechanisms can be implemented at the programming language level, where they refine and enhance object-oriented design. Languages that support capability-based security make it difficult to accidentally grant excessive permissions or to forge false capabilities.

Software-Defined Perimeter

Traditional network security relied on a "perimeter" (firewalls and borders) to protect everything inside. A software-defined perimeter replaces this model with dynamic, identity-based access controls that work regardless of the network topology.

Core Concept and Benefits

In a software-defined perimeter, resources remain invisible to unauthenticated or unauthorized users. Instead of relying on network boundaries, access is controlled based on the identity of the user or device requesting access.
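The ACL and RBAC models described above can be contrasted in a few lines of illustrative code. The users, roles, and permissions here are invented for the example:

```python
# ACL: permissions attached directly to each resource.
acl = {
    "report.txt": {"alice": {"read", "write"}, "bob": {"read"}},
}

def acl_allows(user, resource, op):
    """Look up the user's permitted operations on this specific resource."""
    return op in acl.get(resource, {}).get(user, set())

# RBAC: users map to roles, and roles (not users) map to permissions.
user_roles = {"alice": {"analyst"}, "dana": {"db_admin"}}
role_perms = {
    "analyst": {("report.txt", "read")},
    "db_admin": {("customers.db", "read"), ("customers.db", "write")},
}

def rbac_allows(user, resource, op):
    """Access is granted if any of the user's roles carries the permission."""
    return any((resource, op) in role_perms.get(role, set())
               for role in user_roles.get(user, set()))

print(acl_allows("bob", "report.txt", "write"))      # False: Bob is read-only
print(rbac_allows("dana", "customers.db", "write"))  # True: via db_admin role
```

The management difference is visible in the data structures: onboarding a new database administrator under RBAC means adding one entry to `user_roles`, whereas under ACLs every relevant resource's list would need editing.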
This approach:

- Reduces the attack surface by hiding resources from unauthorized traffic
- Enables granular access control, where each user can have different permissions based on context, device, and role
- Works across hybrid environments combining on-premises systems, cloud services, and remote locations
- Supports zero-trust architectures, where no user or device is automatically trusted
- Simplifies compliance by providing clear visibility into who accessed which resources

How It Works

A software-defined perimeter consists of four main components:

- Identity Providers authenticate users and devices, verifying they are who they claim to be
- Policy Engines define access rules based on identity, context (location, time, device type), and resource attributes
- Secure Gateways enforce these policies and provide encrypted connections between users and resources
- Controllers orchestrate the entire system, integrating with existing infrastructure and adjusting policies as needed

All traffic through the perimeter is monitored and logged, enabling both real-time threat detection and after-the-fact forensic investigation.

<extrainfo> Limitations and Challenges: While powerful, software-defined perimeters require significant operational changes:

- Robust identity management is essential; if identity systems are compromised, the entire perimeter fails
- Initial deployment can be complex and requires training security teams on new concepts
- Performance overhead occurs because every access request requires authentication and encryption
- Continuous policy updates are needed as threats evolve and business requirements change </extrainfo>

Security Engineering

Security engineering is the discipline of designing and building systems that incorporate security controls throughout their entire lifecycle, from initial conception through deployment and maintenance.
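A policy engine of the kind described in the software-defined perimeter section can be sketched as a default-deny rule evaluator over identity and context. All rule fields, roles, and resource names here are illustrative:

```python
from datetime import time

# Illustrative policy: who may reach which resource, from which device
# type, and during which hours.
policies = [
    {"role": "engineer", "resource": "git.internal",
     "devices": {"managed-laptop"}, "hours": (time(0, 0), time(23, 59))},
    {"role": "finance", "resource": "erp.internal",
     "devices": {"managed-laptop"}, "hours": (time(8, 0), time(18, 0))},
]

def decide(role, resource, device, now):
    """Default deny: without a matching rule, the resource stays
    invisible to the requester."""
    for rule in policies:
        if (rule["role"] == role
                and rule["resource"] == resource
                and device in rule["devices"]
                and rule["hours"][0] <= now <= rule["hours"][1]):
            return "ALLOW"
    return "DENY"

print(decide("finance", "erp.internal", "managed-laptop", time(9, 30)))   # ALLOW
print(decide("finance", "erp.internal", "personal-phone", time(9, 30)))   # DENY
print(decide("finance", "erp.internal", "managed-laptop", time(23, 0)))   # DENY
```

In a real deployment the identity provider supplies the role, the secure gateway calls the engine on every request and logs the decision, and the controller updates the policy list as requirements change.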
The Security Engineering Process

Effective security engineering follows these steps:

1. Define security requirements based on threat models (analysis of who might attack and how) and risk assessments (analysis of what could go wrong and its impact)
2. Design the system architecture, applying security-by-design principles such as least privilege and defense in depth
3. Implement controls, including authentication (verifying identity), encryption (protecting data), and logging (recording activities)
4. Verify security through testing, code review, and formal verification of critical components
5. Deploy and monitor the system in production, applying patches when vulnerabilities are discovered and continuously improving based on operational experience

Common Standards and Frameworks

Trusted Computer System Evaluation Criteria (TCSEC) sets graded security requirements ranging from minimal to extremely strict. Systems are rated on how well they meet these criteria.

Common Criteria, mentioned earlier, provides a standardized methodology for evaluating IT products against defined security requirements.

The NIST Cybersecurity Framework outlines five core functions:

- Identify: Understand your systems and risks
- Protect: Implement safeguards
- Detect: Identify when attacks occur
- Respond: Take action during incidents
- Recover: Restore normal operations after incidents

Verification and Validation

Ensuring security requires multiple techniques:

- Penetration testing simulates real attacks to identify weaknesses
- Formal verification mathematically proves that critical components operate correctly
- Security audits assess whether systems comply with policies and security standards
- Continuous monitoring uses security information and event management tools to watch for suspicious activity in real time
Flashcards
How are secure operating systems typically certified by external security-auditing organizations?
Through the Common Criteria evaluation.
What is the primary aim of secure coding practices?
To prevent the accidental introduction of security vulnerabilities in software.
What term describes systems that are designed from the ground up to be secure?
Secure-by-design systems.
What is the core definition of security by design?
Building software from the ground up with security as a primary feature.
What does the principle of least privilege dictate regarding component permissions?
Each component is granted only the privileges it needs to function.
Which two methods improve module security when formal proofs are not feasible?
Code reviews and unit testing.
What is the requirement for a successful breach under the principle of defense in depth?
Multiple subsystems must be compromised.
Which two design strategies ensure a system remains safe by default?
Default secure settings and fail-secure design.
Where should audit trails be stored to prevent tampering?
Remotely.
What are the two primary goals of designing systems via security architecture?
Preventing initial compromise and limiting impact.
What are the three core processes of computer security?
Threat prevention, detection, and response.
In what two forms can firewalls be implemented to filter network traffic?
Hardware appliances or software programs.
What is the function of a forward web proxy regarding malicious content?
It blocks access to malicious pages and inspects content before it reaches clients.
What are the three components of the CIA triad in information security?
Confidentiality, integrity, and availability.
Which three types of measures work together to protect the CIA triad?
Administrative, physical, and technical measures.
Why is penetration testing by external auditors used alongside automated scans?
To identify additional vulnerabilities that automated scans might miss.
What are the two general categories of requirements used in two-factor authentication?
Something the user knows (e.g., password) and something the user has (e.g., token).
How does inoculation theory suggest building user resistance to real cyber attacks?
By exposing users to simulated attacks.
What specific information is contained within an access-control list (ACL)?
Permissions specifying which users/processes may access an object and what operations they can perform.
On what basis does role-based access control (RBAC) restrict system access?
The user's assigned roles.
What does capability-based security use to describe the access rights of a process?
Unforgeable tokens.
How does a software-defined perimeter (SDP) protect resources from unauthorized users?
By creating dynamic, identity-based access controls that hide resources.
What does the policy engine do within a software-defined perimeter architecture?
It defines access rules based on identity, context, and resource attributes.
What is the primary benefit of making resources invisible to unauthenticated traffic in an SDP?
It reduces the attack surface.
What is the core definition of security engineering?
Designing and building systems that incorporate security controls throughout their lifecycle.
What does the NIST Cybersecurity Framework outline for organizational security?
Functions for identifying, protecting, detecting, responding, and recovering.

Key Concepts
Security Principles
Principle of least privilege
Defense in depth
CIA triad
Access Control Mechanisms
Access‑control list (ACL)
Role‑based access control (RBAC)
Capability‑based security
Security Practices and Techniques
Secure operating system
Secure coding
Formal verification
Security architecture
Vulnerability management
Software‑defined perimeter