The Yin & Yang of Systems Security Engineering

Overview

Systems Security Engineering is Systems Engineering. Like any other engineered system, a security system follows a certain workflow as it progresses from concept through to deployment: architectural development, design, implementation, design verification, and validation. This is the classic Systems Engineering top-down development process followed by a bottom-up verification process, like any other systems engineering effort.

However, in other ways Systems Security Engineering is very unlike Systems Engineering, in that many security requirements are negative requirements, while typical systems engineering is about positive requirements and functional goals. For example, a negative requirement may state “the security controls must prevent <some bad thing>”, where a positive requirement may state “the system must do <some functional thing>”. In addition, Systems Engineering is about functional reduction, where some higher-level function is reduced to a set of lower-level functions that define what the system does. Security Engineering is about how system functions are implemented, and about things the system should not do, ideally with no impact on the overall function of the system. These two factors increase the complexity of top-down security implementation, and make the bottom-up verification much more difficult (since it is impossible to prove a negative).

In this post we are going to focus on how security systems are verified, and provide a few insights on how to verify system security more effectively.

Level 0 Verification: Testing Controls

As security engineers, we work to express every security requirement as a positive requirement, but that approach is fundamentally flawed, since a logical corollary almost never exists for a negative requirement. The best we can hope for is to reduce the scope of the negative requirements. In addition, security architectures and designs are composed of controls that have specific functions. The result is often that security verification becomes a collection of tests that functionally verify security controls, and this is misinterpreted as verification of the overall system security. This is not to say these tests are unimportant (they are important), but they represent the most basic level of testing, because testing of this nature only exercises the functional features of specific security controls. It does not test any of the negative requirements that drive the controls. For example, suppose we started out with a negative security requirement that states “implement user authentication requirements that prevent unauthorized access”. This could be implemented as a set of controls that enforce password length, complexity, and update requirements for users. These controls could then be tested to verify that they have been implemented correctly (a minimal sketch of such a test follows the takeaways below). However, if an attacker were able to get the file of authentication hashes and extract the passwords with some ridiculous GPU-based personal supercomputer or a password cracker running on EC2, that attacker would have access, since they can simply use the cracked password. The result is that the controls have been functionally tested (and presumably passed), but the negative requirement has not been satisfied. The takeaways are:

  • Testing the controls functionally is important, but don’t confuse that with testing the security of the system.
  • System security is limited by the security controls, while attackers are limited only by their creativity and capability. Your ability as a systems security engineer is directly correlated with your ability to identify threats and attack paths in the system.
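To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical policy values of my choosing) of a Level 0 functional test of a password length / complexity control. The control verifies exactly what it was built to verify, and nothing more:

```python
import re

# Hypothetical policy values, stand-ins for whatever the real
# requirements document would specify.
MIN_LENGTH = 12
COMPLEXITY_PATTERNS = [
    r"[a-z]",          # at least one lowercase letter
    r"[A-Z]",          # at least one uppercase letter
    r"[0-9]",          # at least one digit
    r"[^a-zA-Z0-9]",   # at least one special character
]

def password_meets_policy(password: str) -> bool:
    """Functional check of the length / complexity control, nothing more."""
    if len(password) < MIN_LENGTH:
        return False
    return all(re.search(p, password) for p in COMPLEXITY_PATTERNS)

# Level 0 verification: the control behaves exactly as specified...
assert not password_meets_policy("short1!")        # rejected: too short
assert password_meets_policy("Correct-Horse-42!")  # accepted: meets policy

# ...yet an attacker who cracks the hash file logs in with a fully
# policy-compliant password, so the negative requirement is untouched.
```

A real suite would also exercise the update / expiry rules, but no number of green checkmarks here says anything about the negative requirement of preventing unauthorized access.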

Level 1 Verification: Red Teams / Blue Teams

The concept of Red Team versus Blue Team has evolved from military war gaming simulations, where the blue team represents the defenders and the red team represents the attackers. Within the context of military war gaming, this is a very powerful model, since it encompasses both the static and dynamic capabilities of the conflict between the teams.

This model was adapted to system security assessment, where the blue team represents the system architects and / or system admins / ITSecOps team / system owners (collectively, the stakeholders), and the red team is some team of capable “attackers” that operates independently of the system design team. As a system testing model, this brings some significant advantages.

First and foremost, system designers / system owners have a strong tendency to see the security of their system only through the lens of the controls that exist in the system. This is an example of Schneier’s Law, an axiom that states “any person (or persons) can invent a security system so clever that she or he can’t think of how to break it.” A blue team is that group that generally cannot think of a way to break its own system. A red team, being external to the system architects / system owners, is not bound by those preconceptions and is more likely to see the system in terms of potential vulnerabilities (and much more likely to find them).

Secondary to that, since a red team is organizationally independent of the system architects / system owners, it is much less likely to be concerned about the impact of its findings on the project schedule, performance, or bruised egos of the system stakeholders. In the case of penetration test teams, it is often a point of pride to cause as much havoc as possible within the constraints of their contract.

Penetration testing teams are a form of red team, and work particularly well for some classes of systems where much of the system security is based on people. This is discussed in detail in the sections that follow.

Level 2 Verification: Black Box / White Box

In the simplest terms, black box testing is testing of a system where little or no information about the system is known by the testers. White box testing is testing where the maximum level of information about the system is shared with the testers.

From a practical viewpoint, white box testing can produce results much more quickly and efficiently, since the test team can skip the reconnaissance / discovery of the system architecture / design.

However, there are cases where white box testing will not give complete or correct results, and black box testing will likely be more effective. Two major factors can make black box testing the better methodology.

The first factor is whether or not the implemented system actually matches the architecture / design. If the implementation has additions, deletions, or modifications that do not match the documented architecture / design, white box testing may not identify those issues, since reconnaissance / discovery is not performed as part of white box testing. As a result, vulnerabilities associated with these modifications are not explored.

The second factor in determining whether black box testing is the right choice is where the security controls live. Security controls can exist in the following domains:

  1. Management – These are the people, policy, organizational, and authority controls put in place to support the system security. Requiring all employees to follow the system security rules, on pain of being fired and / or prosecuted, is a management control. A common failure of this rule is when corporate VPs share their usernames / passwords with their administrative assistants, and generally do not risk being fired. In most cases, management controls are the teeth behind the rules.
  2. Operational – These are the workflow and process controls, intended to associate authority with accountability. An example is that all purchase orders are approved by accounting, and above a certain value they must also be approved by a company officer. Another is the rule not to share your username / password. These controls are people-centric (not enforced by technology), and in most cases they present the greatest vulnerabilities.
  3. Technical – These are the nuts and bolts of system security implementation: the firewalls, network Intrusion Detection Systems (IDS), network / host anti-virus tools, enforced authentication rules, etc. This is where 90% of the effort and attention of security controls is focused, and where a much smaller percentage of the failures actually occur.

When your system is well architected, with sound, functionally verified technical controls, but with a significant portion of the system security concentrated in operational (people) controls, black box testing is merited. Much as in the first factor, where the actual system may not reflect the architecture / design and black box testing is needed to discover those deviations, people controls are often soft and variable, and black box testing is the way to test that variability.

Penetration Test Teams

Penetration Test Teams (also known as Red Teams) are composed of systems security engineers with very specialized knowledge and skills in compromising different elements of target computer systems. An effective Red Team has all of the collective expertise needed to compromise most systems. When functioning as a black box team, they operate in a manner consistent with cyber attackers, but with management endorsement and the obligatory get-out-of-jail-free documentation.

At first glance, Red Teams operating in this way may seem like a very effective approach to validating the security of any system. As discussed above, that would be a flawed assumption. More precisely, Red Team testing is effective for a specific type of system security architecture: one where the actual system could deviate from the documented system, or where much of the system security is people-centric. By understanding where the security in a system is (and where it is not), we can determine whether black box testing is the more correct approach to system security testing.

Security Control Decomposition – Where “Security” Lives

In any security solution, system, or architecture, it should be clear what makes the system secure. If it is not obvious which controls in a system provide the security, it is not really possible to assess and validate how effective that security is. To explore this question, we are going to look for parallels in another (closely related) area of cyber-security that is somewhat more mature than security engineering: cryptography.

Background: Historical Cryptography

In the dark ages of cryptography, the algorithm was the secrecy. The Caesar Cipher is a simple alphabet substitution cipher in which plaintext is converted to ciphertext by shifting each letter some number of positions in the alphabet. Conversion back to plaintext is accomplished by reversing the process. This cipher is the basis of the infamous ROT13, in which applying the 13-step substitution a second time recovers the plaintext, since the basic Latin alphabet has 26 letters.

In modern terms, the algorithm of the Caesar Cipher is to shift-substitute by some offset to encrypt (with wraparound at the end of the alphabet), and to shift-substitute by the same offset negatively to decrypt. The offset used would be considered the key for this method. The security of any cipher rests on which parts of the cipher make it secure. In the Caesar Cipher, knowledge of the method allows an attacker to try offsets until successful (a keyspace of only 25 values). If the attacker knows the key but not the method, the attack appears more challenging than testing for 1 of 25 values. Given this very trivial example, it would appear that the security of the Caesar Cipher is based more heavily on the algorithm than on the key. In a more practical sense, Caesar gained most of his security from the degree of illiteracy of his time.
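As a concrete illustration, here is a minimal Python sketch of the Caesar Cipher, with ROT13 as a special case, and the exhaustive 25-key search that knowledge of the algorithm makes trivial:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def caesar(text: str, offset: int) -> str:
    """Shift-substitute each letter by `offset`, wrapping past 'z'."""
    shifted = ALPHABET[offset:] + ALPHABET[:offset]
    return text.translate(str.maketrans(ALPHABET, shifted))

ciphertext = caesar("attack at dawn", 3)   # Caesar's fixed key of three
assert ciphertext == "dwwdfn dw gdzq"

# Decryption is the same operation with the offset negated, and ROT13
# is its own inverse because 13 + 13 wraps around the 26-letter alphabet.
assert caesar(ciphertext, -3) == "attack at dawn"
assert caesar(caesar("attack at dawn", 13), 13) == "attack at dawn"

# Knowing the algorithm but not the key, an attacker simply tries all
# 25 offsets; the keyspace is trivially small.
for key in range(1, 26):
    print(key, caesar(ciphertext, -key))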

In practice, Caesar used a fixed offset of three in all cases, with the result that his key and algorithm were fixed for all applications, which meant there was no distinction between key and algorithm.

Fast forward a few thousand years (give or take), and modern cryptography has a very clear distinction between key and algorithm. In any modern cipher, the algorithm is well documented and public, and all of the security rests on the keys used by the cipher. This was a really important development in cryptography.

Background: Modern Cryptography

The Advanced Encryption Standard (AES) was standardized by the US National Institute of Standards and Technology (NIST) in 2001. The process to develop and select an algorithm was essentially a bake-off, starting in 1997, of 15 different ciphers, along with some very intensive and competitive analysis by the cryptography community. The process was transparent, the evaluation criteria were transparent, and many weaknesses were identified in a number of ciphers. The winning cipher (Rijndael) survived this process, and by being designated the cipher of choice by NIST it gained a great deal of credibility.

Most importantly for this discussion, any attacker has access to complete and absolute knowledge of the algorithm, and even to test suites that ensure interoperability, and this results in no loss of security to any system using it. Like all modern ciphers, all of the security of a system that uses AES rests on the key used and how it is managed.
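As a sketch of this principle, the example below uses the AESGCM primitive from the widely used Python cryptography package (my choice of library, not one named in this post). The algorithm and every line of this code are public; the key is the only secret:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The algorithm (AES-256 in GCM mode) is completely public;
# the key is the only secret in the entire construction.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat for a given key
ciphertext = aesgcm.encrypt(nonce, b"the plans for the east wall", None)

# An attacker may read this code and the full AES specification, yet
# without the key the ciphertext remains computationally out of reach.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"the plans for the east wall"
```

The design consequence is that generating, storing, and rotating that key becomes the entire security problem, which is exactly the “where your security is” question this section is asking.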

Since the use of AES is completely free and open (unlicensed), over the last decade it has been implemented in numerous hardware devices and software systems. This enables interoperability between competing products and systems, and massive proliferation of AES-enabled systems. It also underscores why it is so important to have a very robust and secure algorithm.

If some cipher were developed as a closed-source algorithm with a high degree of secrecy, was broadly deployed, and a weakness / vulnerability was later discovered, the security of every system that used the cipher would be compromised. That is exactly what happened with a stream cipher known as RC4. For details, refer to the Wikipedia reference for RC4 below. The net impact is that the RC4 incident is one of the driving reasons for the openness of the AES standards process.

And now back to our regularly scheduled program…

The overall message of this discussion of cryptography is that a security solution can be viewed as a monolithic object, but viewed that way it cannot effectively be assessed and improved. The threats need to be identified, anti-patterns developed from those threats, and system vulnerabilities and attack vectors mapped. From that baseline, specific security controls can be defined and assessed for how well they mitigate these risks.

The takeaways are:

  • System security is based on threats, vulnerabilities, and attack vectors. These are mitigated explicitly by security controls.
  • System security is built from a coordinated set of security controls, where each control provides a clear and verifiable role / function in the overall security of the system.
  • The process of identifying threats, vulnerabilities, attack vectors and mitigating controls is Systems Security Engineering. It also tells you “where your security is”.

Bottom Line

In this post we highlighted a number of key points in System Security Engineering.

  • Systems Security Engineering is like Systems Engineering in that (done properly) it is based on top-down design and bottom-up verification / validation.
  • Systems Security Engineering is not like Systems Engineering in that it is usually not functional, and is expressed as negative requirements that defy normal verification / validation.
  • Security assessments can be based on red team / blue team models, and can be done using a white box or black box approach; the most effective approach will depend on the nature of the system.

As always, I have provided links to interesting and topical references (below).

References

RC4 – Wikipedia: https://en.wikipedia.org/wiki/RC4
