Category Archives: Pentesting

tools / processes and experiences in the domain of pentesting

The Yin & Yang of Systems Security Engineering


Systems Security Engineering is Systems Engineering. Like any other engineered system, a security system follows a certain workflow as it progresses from concept through to deployment: architectural development, design, implementation, design verification and validation. This is the classic Systems Engineering top-down development process followed by a bottom-up verification process – like any other systems engineering effort.

However, in other ways Systems Security Engineering is very unlike Systems Engineering, in that many security requirements are negative requirements, while typical systems engineering deals in positive requirements and functional goals. For example, a negative requirement may state “the security controls must prevent <some bad thing>”, where a positive requirement may state “the system must do <some functional thing>”. In addition, Systems Engineering is about functional reduction, where some higher-level function is reduced to a set of lower-level functions – defining what the system does. Security Engineering is about how system functions are implemented, and about things the system should not do, ideally with no impact on the overall function of the system. These two factors increase the complexity of top-down security implementation, and make the bottom-up verification much more difficult (since it is impossible to prove a negative).

In this post we are going to focus on how security systems are verified, and provide a few insights on how to verify system security more effectively.

Level 0 Verification: Testing Controls

As security engineers, we work to express every security requirement as a positive requirement, but that approach is fundamentally flawed since a logical corollary almost never exists for the negative requirements. The best we can hope for is to reduce the scope of the negative requirements. In addition, security architectures and designs are comprised of controls which have specific functions. The result is that security verification is often a collection of tests that functionally verify security controls, and this is misinterpreted as verification of the overall system security. This is not to say these tests are unimportant (they are important), but they represent the most basic level of testing, because testing of this nature only exercises the functional features of specific security controls. It does not test any of the negative requirements that drive the controls. For example, suppose we started with a negative security requirement that states “implement user authentication requirements that prevent unauthorized access”. This could be implemented as a set of controls that enforce password length, complexity and update requirements for users. These controls could then be tested to verify that they have been implemented correctly. However, if an attacker were able to get the hashed authentication datafile and extract the passwords with some ridiculous GPU-based personal supercomputer or a password cracker running on EC2, that attacker would have access, since they can simply use the cracked password. The result is that the controls have been functionally tested (and presumably passed), but the negative requirement has not been satisfied. The takeaways are:

  • Testing the controls functionally is important, but don’t confuse that with testing the security of the system.
  • System security is limited by the security controls, and attackers are only limited by their creativity and capability. Your ability as a systems security engineer is directly correlated to your ability to identify threats and attack paths in the system.
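The password example can be made concrete with a short sketch (Python standard library only; the policy values, password and wordlist are all hypothetical): the functional tests of the control pass, yet a trivial dictionary attack against an unsalted hash still recovers a fully compliant password.

```python
import hashlib
import re

def meets_policy(password: str) -> bool:
    """Functional check of a hypothetical length / complexity policy."""
    return (len(password) >= 10
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

# The control passes its functional test...
password = "Summer2024!"
assert meets_policy(password)

# ...but an attacker holding the (unsalted) hash file can still guess it.
stolen_hash = hashlib.sha256(password.encode()).hexdigest()
wordlist = ["password", "Summer2024!", "letmein"]  # toy dictionary

cracked = next((w for w in wordlist
                if hashlib.sha256(w.encode()).hexdigest() == stolen_hash), None)
print(cracked)  # the policy-compliant password is recovered anyway
```

The point is not that the control is broken – it passes its tests – but that passing those tests says nothing about the negative requirement.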

Level 1 Verification: Red Teams / Blue Teams

The concept of Red Team versus Blue Team has evolved from military war gaming simulations, where the blue team represents the defenders and the red team represents the attackers. Within the context of military war gaming, this is a very powerful model since it encompasses both the static and dynamic capabilities of the conflict between the teams.

This model was adapted to system security assessment, where the blue team represents the system architects and / or system admins / ITSecOps team / system owners (collectively – the stakeholders), and the red team is some team of capable “attackers” that operates independently from the system design team. As a system testing model this brings forward some significant advantages.

First and foremost, system designers / system owners have a strong tendency to only see the security of their system through the lens of the controls that exist in the system. This is an example of Schneier’s Law, an axiom that states “any person (or persons) can invent a security system so clever that she or he can’t think of how to break it.” A blue team is that group that generally cannot think of a way to break their system. A red team, being external to the system architects / system owners, is not bound by those preconceptions, and is more likely to see the system in terms of potential vulnerabilities (and much more likely to find them).

Secondary to that, since a red team is organizationally independent from the system architects / system owners, they are much less likely to be concerned about the impact of their findings on the project schedule, performance or bruised egos of the system stakeholders. In the case of penetration test teams, it is often a point of pride to cause as much havoc as possible within the constraints of their contract.

Penetration Testing teams are a form of red team testing, and work particularly well for some classes of systems where much of the system security is based on people. This is discussed in detail in the next sections.

Level 2 Verification: Black Box / White Box

In the simplest terms, black box testing is testing of a system where little or no information of the system is known by the testers. White box testing is where a maximum level of information on the system is shared with the testers.

From a practical viewpoint, white box testing can produce results much more quickly and efficiently, since the test team can skip past reconnaissance / discovery of the system architecture / design.

However, there are cases where white box testing will not give you complete / correct results, and black box testing will likely be more effective. There are two major factors that can drive black box testing as the better methodology over white box testing.

The first factor is whether or not the implemented system actually matches the architecture / design. If the implementation has additions, deletions or modifications that do not match the documented architecture / design, white box testing may not identify those issues, since reconnaissance / discovery is not performed as part of white box testing. As a result, vulnerabilities associated with these modifications are not explored.

The second factor in determining if black box testing is the right choice is where the security controls are. Security controls can exist in the following domains:

  1. Management – These are the people, policy, organizational and authority controls put in place to support the system security. Requiring all employees to follow the system security rules or be fired and / or prosecuted is a management control. A common failure of this rule is when corporate VPs share their usernames / passwords with their administrative assistants – and generally do not risk being fired. In most cases management controls are the teeth behind the rules.
  2. Operational – These are the workflow and process controls, intended to associate authority with accountability. An example is that all purchase orders are approved by accounting, and above a certain value they must be approved by a company officer. Another is the rule not to share your username / password. These controls are people-centric (not enforced by technology), and in most cases they present the greatest vulnerabilities.
  3. Technical – These are the nuts and bolts of system security implementation. These are the firewalls, network Intrusion Detection Systems (IDS), network / host anti-virus tools, enforced authentication rules, etc. This is where 90% of the effort and attention of security controls is focused, and where a much smaller percentage of the failures actually occur.

When your system is well architected, with integral functional controls on the technical side, but with a significant portion of the system security concentrated in operational (people) controls, black box testing is merited. Much like the first factor, where the actual system may not reflect the architecture / design and it is necessary to use black box testing to discover those issues, people controls are often soft and variable, and it is necessary to use black box testing to test that variability.

Penetration Test Teams

Penetration Test Teams (also known as Red Teams) are teams of systems security engineers with very specialized knowledge and skills in compromising different elements of target computer systems. An effective Red Team has all of the collective expertise needed to compromise most systems. When functioning as a black box team, they operate in a manner consistent with actual cyber attackers, but with management endorsement and the obligatory get-out-of-jail-free documentation.

At first glance, Red Teams operating in this way may seem like a very effective approach to validating the security of any system. As discussed above, that would be a flawed assumption. More specifically, Red Team testing is effective for particular types of system security architecture: where the actual system could deviate from the documented system, or where much of the system security is based on people-centric controls. By understanding where the security in a system is (and where it is not), we can determine whether black box testing is the more correct approach to system security testing.

Security Control Decomposition – Where “Security” Lives

In any security solution, system or architecture, it should be clear what makes the system secure. If it is not obvious which controls in a system provide the security, it is not really possible to assess and validate how effective the security is. In order to better explore this question, we are going to look for parallels in another (closely related) area of cyber-security that is somewhat more mature than security engineering – cryptography.

Background: Historical Cryptography

In the dark ages of cryptography, the algorithm was the secret. The Caesar Cipher is a simple alphabet substitution cipher where plaintext is converted to ciphertext by shifting some number of positions in the alphabet; conversion back to plaintext is accomplished by reversing the process. This cipher is the basis of the infamous ROT13, which allows the plaintext to be recovered from ciphertext by applying the 13-position substitution a second time, thanks to the 26 letters in the basic Latin alphabet.

In modern terms, the algorithm of the Caesar Cipher is to shift-substitute by some offset to encrypt (wrapping at the end of the alphabet), and to shift-substitute by the same offset in the opposite direction to decrypt. The offset used would be considered the key for this method. The security of any cipher comes down to which parts of the cipher must remain secret. With the Caesar Cipher, knowledge of the method allows an attacker to try offsets until they succeed (a keyspace of only 25 values). If the attacker knows the key but not the method, the attack appears more challenging than testing 25 values. Given this very trivial example, the security of the Caesar Cipher rests more heavily on the algorithm than on the key. In a more practical sense, Caesar gained most of his security from the degree of illiteracy of his time.
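The Caesar Cipher and its 25-value keyspace are small enough to show in a few lines; a minimal sketch (Python, illustration only):

```python
def caesar(text: str, offset: int) -> str:
    """Shift-substitute letters by `offset`, wrapping at the end of the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + offset) % 26 + base))
        else:
            out.append(ch)  # non-letters pass through unchanged
    return ''.join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)  # dwwdfn dw gdzq

# Knowing the algorithm, an attacker only has 25 offsets to try:
candidates = [caesar(ciphertext, -k) for k in range(1, 26)]
print("attack at dawn" in candidates)  # True

# ROT13 is the special case where encrypt and decrypt are the same operation:
assert caesar(caesar("Hello, world", 13), 13) == "Hello, world"
```

The brute-force list makes the point from the text directly: once the algorithm is public, the entire security burden falls on a 25-value key.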

In practice, Caesar used a fixed offset of three in all cases, with the result that his key and algorithm were fixed for all applications – meaning there was no distinction between key and algorithm.

Fast forward a few thousand years (give or take), and modern cryptography has a very clear distinction between key and algorithm. In any modern cipher, the algorithm is well documented and public, and all of the security is based on the keys used by the cipher. This is a really important development in cryptography.

Background: Modern Cryptography

Advanced Encryption Standard (AES) was standardized by the US National Institute of Standards and Technology (NIST) around 2001. The process to develop and select an algorithm was essentially a bake-off, starting in 1997, of 15 different ciphers, along with some very intensive and competitive analysis by the cryptography community. The result is that the process was transparent, the evaluation criteria were transparent, and many weaknesses were identified in a number of ciphers. The winning cipher (Rijndael) survived this process, and by being designated the cipher of choice by NIST it has a lot of credibility.

Most importantly for this discussion is the fact that any attacker has access to complete and absolute knowledge of the algorithm, and even test suites to ensure interoperability, and this results in no loss of security to any system using it. Like all modern ciphers, all of the security of a system that uses AES is based on the key used and how it is managed.

Since the use of AES is completely free and open (unlicensed), over the last decade it has been implemented in numerous hardware devices and software systems. This enables interoperability between competitive products and systems, and massive proliferation of AES enabled systems. This underscores why it is so important to have a very robust and secure algorithm.

If some cipher were developed as a closed-source algorithm with a high degree of secrecy, was broadly deployed, and later a weakness / vulnerability was discovered, that would compromise the security of every system that used the cipher. That is exactly what happened with a stream cipher known as RC4. For details, refer to the Wikipedia reference for RC4. The net impact is that the RC4 incident / story is one of the driving reasons for the openness of the AES standards process.
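For context, the RC4 algorithm itself is only a few lines once published; a minimal Python sketch (for illustration only – RC4 is broken and should never be used in new designs):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: key-scheduling (KSA) followed by keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                            # KSA: permute S using the key
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for byte in data:                               # PRGA: XOR data with keystream
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4(b"Key", b"Plaintext").hex().upper())  # BBF316E8D940AF0AD3

# A stream cipher decrypts by re-applying the same keystream:
assert rc4(b"Key", rc4(b"Key", b"Plaintext")) == b"Plaintext"
```

The irony is that the algorithm is this simple yet was kept as a trade secret for years; once it leaked, every system whose security leaned on that secrecy was exposed to the cipher's weaknesses.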

And now back to our regularly scheduled program…

The overall message from this discussion of cryptography is that a security solution can be viewed as a monolithic object, but viewed that way it cannot effectively be assessed and improved. The threats need to be identified, anti-patterns developed based on those threats, and system vulnerabilities and attack vectors mapped. From that baseline, specific security controls can be defined and assessed for how well these risks are mitigated.

The takeaways are:

  • System security is based on threats, vulnerabilities, and attack vectors. These are explicitly mitigated by security controls.
  • System security is built from a coordinated set of security controls, where each control provides a clear and verifiable role / function in the overall security of the system.
  • The process of identifying threats, vulnerabilities, attack vectors and mitigating controls is Systems Security Engineering. It also tells you “where your security is”.

Bottom Line

In this post we highlighted a number of key points in System Security Engineering.

  • Systems Security Engineering is like Systems Engineering in that (done properly) it is based on top-down design and bottom-up verification / validation.
  • Systems Security Engineering is not like Systems Engineering in that it is usually not functional, and is expressed as negative requirements that defy normal verification / validation.
  • Security assessments can be based on red team / blue team models, and can be done using a white box or black box approach; the most effective choice will be based on the nature of the system.

As always, I have provided links to interesting and topical references (below).



Installing / Using W3af


W3af is a vulnerability scanner for web applications. Arbitrarily scanning random webpages / sites without permission from the site owners could get you a visit from law enforcement of the cyber type (FBI in the US). I recently had the opportunity to scan some customer sites with this tool.

Increasing complexity is the nature of life on the Internet, and increasing complexity leads to increased flaws and security vulnerabilities. In order to “harden” their systems, Internet companies often have them penetration tested (i.e. pentested). Pentesting is a multistep process that loosely follows these steps:

  1. Network mapping: From the entry point, map out (to the greatest degree possible) everything beyond the entry point that may be accessible to an outside attacker.
  2. Discovery / Auditing: Based on this mapping, push and probe each system for known vulnerabilities and weaknesses.
  3. Exploitation: Based on the identified weaknesses / vulnerabilities, structure an attack to exploit them and validate / confirm the exposure.
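The mapping step can be illustrated with a toy TCP connect scan (Python standard library; the host and ports in the usage comment are hypothetical, and of course you should only scan systems you are authorized to test). Real tools like w3af or nmap are vastly more thorough:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port is open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical usage against an authorized target:
# print(scan_ports("192.0.2.10", [22, 80, 443, 8080]))
```

Even this toy shows why automation matters: the discovery phase is repetitive, and a scanner can cover far more ground, far more consistently, than manual probing.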

W3af is a tool that automates steps 1 and 2 of this process, enabling a much more thorough scan in a short period of time. Of course like any automation, the results from the tool need to be reviewed and analyzed, since many of the results will be false positives (not real vulnerabilities).


My first attempt to install W3af was from the Ubuntu Software Center, and I did not have much luck with it. The current version in the repository for 12.04 LTS would crash on scan every time for me. Based on reports from around the web, this is very common and the symptoms vary – but it is fixable with some effort. Not wanting to create some configuration Frankenstein, I decided a better and faster path to resolution was to go to the source. At the W3af homepage, the following directions were provided:

    git clone
    cd w3af

This was fairly painless and worked well; however, it was not complete. A number of dependencies were identified on the first run of the GUI, along with a pointer to a script to install them. After setting the execute flag on the script, this worked really well, and I was able to launch the w3af GUI. I ran the app from a terminal window so I could monitor the scrolling messages, and on my first scan a message appeared indicating that there was a failed dependency – Python-Webkit.

Usage Notes

After installing this, I discovered that the app would generate a nice graphical map of the website beneath the URL being scanned. Nice. In any case, like many open source projects, it can be very functional – but it has more than a few non-obvious gotchas. Of note:

  • If you set up a number of consecutive scans, the clear-data function does not appear to be 100% effective. I noticed that after two or three scans, the scrolling log display would sometimes stop updating, or the scrolling time graph on the log page would simply not be drawn, leaving a large blank space in the lower right corner. For the most part, exiting the W3af GUI and restarting after each run seemed to clear out whatever cruft was laying around.
  • In my attempt to be productive, I was configuring multiple scan configs for different targets while a long scan was running on the current configuration. It basically trashed the current run, and no useful results were produced. So once again, exiting and restarting the app resolved whatever state it thought it was in.
  • Check your reporting / output plugins. Essentially nothing is enabled by default. I created a ~/w3af/reports directory to contain the results, and enabled the xml, csv and html reports to point to that directory. Note – even with verbose turned off, there is a lot of data.
  • Although the app can produce a nice pictorial graph of the network (look under the Results / URLs tab), it does not provide a method to save it. Use a screenshot tool to capture / save the region.
  • After a run is complete, save your reports off to a directory that matches the scan config – consecutive runs will overwrite the files.
  • In order to filter results in the HTML report, open it and search for “Severity: low”, “Severity: medium” or “Severity: high” to count / find issues.
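That severity search is easy to script; a small sketch (Python; the report path in the usage comment is hypothetical, and the severity strings should be verified against your w3af version's report format):

```python
import re
from collections import Counter

def count_severities(report_text: str) -> Counter:
    """Tally 'Severity: low/medium/high' markers in a w3af HTML report."""
    matches = re.findall(r"Severity:\s*(low|medium|high)",
                         report_text, re.IGNORECASE)
    return Counter(m.lower() for m in matches)

# Hypothetical usage against a saved report:
# from pathlib import Path
# report = Path("~/w3af/reports/scan.html").expanduser().read_text()
# print(count_severities(report))
```

This gives a quick per-severity count across consecutive runs without opening each report by hand.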

Summary – W3af is a fairly solid webapp mapper / scanner, and installs easily per the directions on the w3af homepage in less than 30 minutes. It is not well suited to scanning multiple targets concurrently.

Links: W3af homepage

Pentesting: Day 1

Occasionally I get the opportunity to do something interesting, and today was one of those days. As part of a customer engagement, we are scanning parts of their public interfaces for vulnerabilities. We are stopping short of actual pentesting, but are going through the discovery and audit phases to identify vulnerabilities. Last week I identified two tools I wanted to look at: W3af and OpenVAS. A critical point – the services we are scanning are web apps / web services. Today was a big day because the customer provided me with an email indicating that I was authorized to scan a certain set of their addresses from a specific address of mine. Not quite a Get Out of Jail Free card, but close. This means I can go in with my scanner and be as noisy as I want to be without being too concerned about the FBI showing up at my door.

In any case, I installed and configured both of them today from their respective sources. Initial impressions are that OpenVAS looks and acts like it could support a continuous monitoring program on a network in a fairly automated manner (post setup), and could scale well. It provides a broad range of network port scanning capabilities against a large number of configured targets – all individually configurable. The W3af application, on the other hand, is focused on web-app vulnerabilities, and works best against a single URL / IP address target.

Regarding the results, OpenVAS was very fast (i.e. less than 5 minutes) to scan the target, but produced no significant vulnerabilities. Contrast this with W3af, which took about an hour to run the OWASP Top 10 scan and produced 100+ low / moderate priority findings.

Bottom line – if you are considering internal continuous monitoring for your network, OpenVAS has a low pain threshold to implement. If, on the other hand, you are looking to protect your Internet-facing webapp, regular scans with W3af are the better choice.