Good News Everyone!

To state the obvious, I have not posted to this blog in over a year. In addition, reviewing some of the posts from before I stopped blogging (in 2016), several seemed fairly derivative of earlier posts I had done, without significant new content.

For that I apologize, and hope I will not be writing a similar post a year from now.

Almost exactly a year ago, I took a job with a medical device company in a product security role. This role is one of the most challenging I have had in a while, and for me there is a level of excitement that comes with working on a dozen different tasks that range from hands on / nuts and bolts work to developing an enterprise level product security architecture to executive meetings selling the security story.

It is a different kind of thrill seeking, and so far it just keeps getting better.

As to why the blog has been dead for a year, that is the second half of this post. Initially it was about being stuck technically, since I was rehashing the same things I had been doing for a decade. I needed something new to stimulate the brain. Secondary to that, the new job had a significant learning curve that left no cognitive reserves for a blog of this nature.

In my experience, there is some finite level of daily cognitive function, and if you use that up before you get out of the office, there is nothing left for hobbies like this. For most of the last year, I was operating in that mode, but fortunately the brain (through learning) is very good at taking thinking tasks and turning them into mental reflexes which impose a much lower cognitive tax.

And here we are.

In addition to this post I have created the first of a series of pages (meant to have some longevity) – Crypto Lab: Windows Subsystem for Linux.

See you soon.

The Yin & Yang of Systems Security Engineering


Systems Security Engineering is Systems Engineering. Like any other engineered system, a security system will follow a certain workflow as it progresses from concept through to deployment: architectural development, design, implementation, design verification and validation. This is the classic Systems Engineering top down development process followed by a bottom up verification process – like any other systems engineering effort.

However, in other ways Systems Security Engineering is very unlike Systems Engineering, in that many security requirements are negative requirements, while typical systems engineering deals in positive requirements and functional goals. For example, a negative requirement may state “the security controls must prevent <some bad thing>”, where a positive requirement may state “the system must do <some functional thing>”. In addition, Systems Engineering is about functional reduction, where some higher level function is reduced to a set of lower level functions – defining what the system does. Security Engineering is about how system functions are implemented, and about what the system should not do, ideally with no impact on the overall function of the system. These two factors increase the complexity of top down security implementation, and make the bottom up verification much more difficult (since it is impossible to prove a negative).

In this post we are going to focus on how security systems are verified, and provide a few insights on how to verify system security more effectively.

Level 0 Verification: Testing Controls

As security engineers, we work to express every security requirement as a positive requirement, but that approach is fundamentally flawed since a logical corollary almost never exists for a negative requirement. The best we can hope for is to reduce the scope of the negative requirements. In addition, security architectures and designs are composed of controls which have specific functions. The result is that security verification often becomes a collection of tests that functionally verify security controls, and this is misinterpreted as verification of the overall system security. This is not to say these tests are unimportant (they are important), but they represent the most basic level of testing, because testing of this nature only exercises the functional features of specific security controls. It does not test any of the negative requirements that drive the controls.

For example, suppose we started with a negative security requirement that states “implement user authentication requirements that prevent unauthorized access”. This could be implemented as a set of controls that enforce password length, complexity and update requirements for users. These controls for length, complexity and update requirements could then be tested to verify that they have been implemented correctly. However, if an attacker were able to get the authentication hashed datafile, and extract the passwords with some ridiculous GPU based personal supercomputer or a password cracker running on EC2, that attacker would have access, since they can simply use the cracked password. The result is that the controls have been functionally tested (and presumably passed), but the negative requirement has not been satisfied. The takeaways are:

  • Testing the controls functionally is important, but don’t confuse that with testing the security of the system.
  • System security is limited by the security controls, and attackers are only limited by their creativity and capability. Your ability as a systems security engineer is directly correlated to your ability to identify threats and attack paths in the system.
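A minimal sketch of this gap between control testing and security testing: the hypothetical policy checker below stands in for the functional control tests, and the cracked-password lookup stands in for the unmet negative requirement. All names, rules and passwords here are illustrative assumptions, not a real policy.

```python
import re

# Hypothetical functional tests for the password-policy controls:
# length >= 10, mixed case, at least one digit and one symbol.
def passes_policy(pw: str) -> bool:
    return (len(pw) >= 10
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

# A tiny stand-in for a cracked-password corpus recovered offline
# (e.g. from a stolen hash file run through a GPU cracker).
cracked_corpus = {"P@ssword123!", "Winter2016!!", "Qwerty!2345"}

candidate = "P@ssword123!"
print(passes_policy(candidate))     # True: every control test passes
print(candidate in cracked_corpus)  # True: negative requirement still violated
```

Both prints are `True`: the controls verify cleanly while unauthorized access remains possible, which is exactly the distinction the takeaways above draw.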

Level 1 Verification: Red Teams / Blue Teams

The concept of Red Team versus Blue Team has evolved from military war gaming simulations, where the blue team represents the defenders and the red team represent the attackers. Within the context of military war gaming, this is a very powerful model since it encompasses both the static and dynamic capabilities of the conflict between the teams.

This model was adapted to system security assessment, where the blue team represents the system architects and / or system admins / ITSecOps team / system owners (collectively – the stakeholders), and the red team is some team of capable “attackers” that operates independently from the system design team. As a system testing model this brings forward some significant advantages.

First and foremost, system designers / system owners have a strong tendency to see the security of their system only through the lens of the controls that exist in the system. This is an example of Schneier’s Law, an axiom that states “any person (or persons) can invent a security system so clever that she or he can’t think of how to break it.” A blue team is the group that generally cannot think of a way to break their system. A red team, being external to the system architects / system owners, is not bound by those preconceptions and is more likely to see the system in terms of potential vulnerabilities (and is much more likely to find them).

Secondary to that, since a red team is organizationally independent from the system architects / system owners, they are much less likely to be concerned about the impact of their findings on the project schedule, performance or bruised egos of the system stakeholders. In the case of penetration test teams, it is often a point of pride to cause as much havoc as possible within the constraints of their contract.

Penetration Testing teams are a form of red team testing, and work particularly well for some classes of systems where much of the system security is based on people. This is discussed in detail in the next sections.

Level 2 Verification: Black Box / White Box

In the simplest terms, black box testing is testing of a system where little or no information about the system is known to the testers. White box testing is where a maximum level of information on the system is shared with the testers.

From a practical viewpoint, whitebox testing can produce results much more quickly and efficiently, since the test team can skip past the reconnaissance / discovery of the system architecture / design.

However, there are cases where whitebox testing will not give you complete / correct results and blackbox testing will likely be more effective. There are two major factors that can drive black box testing as the better methodology over white box testing.

The first factor is whether or not the implemented system actually matches the architecture / design. If the implementation has additions, deletions or modifications that do not match the documented architecture / design, whitebox testing may not identify those issues, since reconnaissance / discovery is not performed as part of whitebox testing. As a result, vulnerabilities associated with these modifications are not explored.

The second factor in determining whether blackbox testing is the right choice is where the security controls reside. Security controls can exist in the following domains:

  1. Management – These are the people policy, organizational and authority controls put in place to support the system security. Requiring all employees to follow all the system security rules, or be fired and / or prosecuted, is a management control. A common failure of this rule is when corporate VPs share their usernames / passwords with their administrative assistants – and generally do not risk being fired. In most cases management controls are the teeth behind the rules.
  2. Operational – These controls are the workflow and process controls. These are the controls that are intended to associate authority with accountability. An example is that all purchase orders are approved by accounting, and above a certain value they must be approved by a company officer. Another is to not share your username / password. These controls are people-centric controls (not enforced by technology), and in most cases they present the greatest vulnerabilities.
  3. Technical – These are the nuts and bolts of system security implementation. These are the firewalls, network Intrusion Detection Systems (IDS), network / host anti-virus tools, enforced authentication rules, etc. This is where 90% of the effort and attention of security controls is focused, and where a much smaller percentage of the failures actually occur.

When your system is well architected, with integral technical controls, but with a significant portion of the system security residing in operational (people) controls, black box testing is merited. Much like the first factor, where the actual system may not reflect the architecture / design and black box testing is needed to discover those deviations, people controls are often soft and variable, and black box testing is needed to test that variability.

Penetration Test Teams

Penetration Test Teams (also known as Red Teams) are teams composed of systems security engineers with very specialized knowledge and skills in compromising different elements of target computer systems. An effective Red Team has all of the collective expertise needed to compromise most systems. When functioning as a blackbox team, they operate in a manner consistent with cyber attackers, but with management endorsement and the obligatory get out of jail free documentation.

At first glance, Red Teams operating in this way may seem like a very effective approach to validating the security of any system. As discussed above, that would be a flawed assumption. More specifically, Red Team testing is effective for specific types of system security architecture: where the actual system could deviate from the documented system, or where much of the system security is based on people-centric controls. By understanding where the security in a system resides (and where it does not), we can determine whether black box testing is the correct approach to system security testing.

Security Control Decomposition – Where “Security” Lives

In any security solution, system or architecture it should be clear what makes the system secure. If it is not obvious which controls in a system provide the security, it is not really possible to assess and validate how effective the security is. In order to better explore this question, we are going to look for parallels in another (closely related) area of cyber-security that is somewhat more mature than security engineering – cryptography.

Background: Historical Cryptography

In the dark ages of cryptography, the algorithm was the secrecy. The Caesar Cipher is a simple alphabet substitution cipher where plaintext is converted to ciphertext by shifting some number of positions in the alphabet. Conversion back to plaintext is accomplished by reversing the process. This cipher is the basis of the infamous ROT13, which allows the plaintext to be recovered from ciphertext by applying the 13 step substitution a second time, due to the 26 letters in the basic Latin alphabet.

In modern terms, the algorithm of the Caesar Cipher is to shift-substitute by some offset to encrypt (with wraparound at the end of the alphabet), and to shift-substitute by the same offset in the other direction to decrypt. The offset used would be considered the key for this method. The security of any cipher is based on which parts of the cipher make it secure. With the Caesar Cipher, knowledge of the method allows an attacker to try offsets until successful (a keyspace of only 25 values). If the attacker knows the key but not the method, the attack appears more challenging than testing 25 values. Given this very trivial example, it would appear that the security of the Caesar Cipher is based more heavily on the algorithm than on the key. In a more practical sense, Caesar gained most of his security from the degree of illiteracy of his time.

In practice, Caesar used a fixed offset of three in all cases, with the result that his key and algorithm were fixed for all applications, which meant there was no distinction between key and algorithm.
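The cipher described above is small enough to sketch in a few lines, including the ROT13 self-inverse property and the trivial 25-offset brute force. This is a minimal illustration, not production code:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(text: str, offset: int) -> str:
    """Shift-substitute each letter by `offset`, wrapping past 'Z';
    non-letters pass through unchanged."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) + offset) % 26] if c in ALPHABET else c
        for c in text.upper()
    )

msg = "ATTACK AT DAWN"
ct = caesar(msg, 3)                  # Caesar's fixed offset of three
print(ct)                            # DWWDFN DW GDZQ
print(caesar(ct, -3))                # decrypts back to ATTACK AT DAWN

# ROT13 is its own inverse because 13 + 13 = 26:
print(caesar(caesar(msg, 13), 13))   # ATTACK AT DAWN

# Knowing only the method, an attacker exhausts the keyspace trivially:
candidates = [caesar(ct, -k) for k in range(1, 26)]
print(msg in candidates)             # True
```

The four-line brute force at the end is the whole point: with the algorithm public and only 25 keys, the "security" of this cipher lives almost entirely in keeping the method secret.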

Fast forward a few thousand years (give or take), and modern cryptography has a very clear distinction between key and algorithm. In any modern cipher, the algorithm is well documented and public, and all of the security is based on the keys used by the cipher. This is a really important development in cryptography.

Background: Modern Cryptography

Advanced Encryption Standard (AES) was standardized by the US National Institute of Standards and Technology (NIST) around 2001. The process to develop and select an algorithm was essentially a bake off, starting in 1997, of 15 different ciphers, along with some very intensive and competitive analysis by the cryptography community. The result is that the process was transparent, the evaluation criteria were transparent, and many weaknesses were identified in a number of ciphers. The resulting cipher (Rijndael) survived this process, and by being designated the cipher of choice by NIST it has a lot of credibility.

Most importantly for this discussion is the fact that any attacker has access to complete and absolute knowledge of the algorithm, and even test suites to ensure interoperability, and this results in no loss of security to any system using it. Like all modern ciphers, all of the security of a system that uses AES is based on the key used and how it is managed.

Since the use of AES is completely free and open (unlicensed), over the last decade it has been implemented in numerous hardware devices and software systems. This enables interoperability between competitive products and systems, and massive proliferation of AES enabled systems. This underscores why it is so important to have a very robust and secure algorithm.

If some cipher were developed as a closed-source algorithm with a high degree of secrecy, was broadly deployed, and a weakness / vulnerability was later discovered, this would compromise the security of any system that used that cipher. That is exactly what happened with a stream cipher known as RC4. For details, refer to the Wikipedia reference below for RC4. The net impact is that the RC4 incident / story is one of the driving reasons for the openness of the AES standards process.
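The key-centric model discussed above can be illustrated with a deliberately toy stream cipher: the algorithm below is fully public (SHA-256 run in counter mode to produce a keystream), and every bit of secrecy rests in the key. This is a homemade construction for illustration only; it has none of the analysis behind AES and should never be used for real data.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: derive a keystream from SHA-256(key || counter)
    and XOR it with the data. Encryption and decryption are the same
    operation. Illustration only -- use a vetted cipher like AES in practice."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"this-32-byte-secret-is-the-key!!"
pt = b"the algorithm is public; the key is not"
ct = keystream_xor(key, pt)

print(keystream_xor(key, ct) == pt)            # True: right key decrypts
print(keystream_xor(b"wrong key", ct) == pt)   # False: wrong key yields noise
```

Publishing this function costs the scheme nothing, exactly as publishing Rijndael cost AES nothing: an attacker with full knowledge of the algorithm still gets only noise without the key.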

And now back to our regularly scheduled program…

The overall message from this discussion of cryptography is that a security solution can be viewed as a monolithic object, but when viewed that way it cannot effectively be assessed and improved. The threats need to be identified, anti-patterns developed based on those threats, and system vulnerabilities and attack vectors mapped. From this baseline, specific security controls can be defined and assessed for how well these risks are mitigated.

The takeaways are:

  • System security is based on threats, vulnerabilities, and attack vectors. These are mitigated explicitly by security controls.
  • System security is built from a coordinated set of security controls, where each control provides a clear and verifiable role / function in the overall security of the system.
  • The process of identifying threats, vulnerabilities, attack vectors and mitigating controls is Systems Security Engineering. It also tells you “where your security is”.

Bottom Line

In this post we highlighted a number of key points in System Security Engineering.

  • Systems Security engineering is like Systems engineering in that (done properly) it is based on top down design and bottom up verification / validation.
  • Systems Security engineering is not like Systems engineering in that it is usually not functional, and is expressed as negative requirements that defy normal verification / validation.
  • Security assessments can be based on red team / blue team assessments and it can be done using a white box model / black box model, and the most effective approach will be based on the nature of the system.

As always, I have provided links to interesting and topical references (below).



2016 Personal Security Recommendations


There are millions of criminals on the Internet and billions of potential victims. You have probably not been attacked or compromised, and if so, it is due to the numbers – probably not your personal security habits.

I have a passion for cyber security. Effective cyber security is a system problem with no easy or obvious solutions, and the current state of the art leaves plenty of room for improvement. I also think that every person who uses the Internet should have a practical understanding of the risks and what reasonable steps they should take to protect themselves.

For these reasons, any conversation I am in tends toward cyber security, and I am occasionally asked what my recommendations are for personal cyber security. When not asked, I usually end up sharing my opinions anyway. My answer is generally qualified by the complexity of defending against the threats that are more ‘real’, but for most people we can make some generalizations.

The list below is what I think makes the most sense at this time. Like all guidance of this nature, its shelf life may be short. Before we can look at actionable recommendations, we need to really look at the threats we face: the foundation for any effective security recommendation must be your threat space.

  1. Threats – These are realistic and plausible threats to your online accounts and data, against which you have realistic and plausible mitigations.
    1. Cyber Criminals – Criminals who are trying to monetize whatever they can from people on the Internet. There are so many ways this can be accomplished, but in most cases it involves getting access to your online accounts or installing malware to your computer. This threat represents 99.5% of the entire threat space most users have (note – this is a made up number, but is probably not too far off).
    2. Theft or Loss – Criminals who steal your computer or phone for the device itself. If they happen to gain access to personal information on the device that enables extortion or other criminal access to your online accounts, that is a secondary goal. This threat represents 90% of the remaining threat space (so 90% of 0.5%) for laptops and smartphones (note – this number is also made up, with the same caveats).
    3. Computer Service Criminals – Anytime you take a phone / computer in for service, there is a risk that somebody copies off more interesting information for personal gain. It really does happen – search “geek squad crime” for details.
  2. Non-Threats – These are threats that are less likely, less plausible or simply unrealistic to defend against.
      1. NSA / FBI / CIA / KGB / GRU / PLA 61398 – Notwithstanding the current issue between FBI vs Apple (which is not really about technical capability but about legal precedent), big government Agencies (BGAs) have massive resources and money that they can bring to bear if you draw their attention. So my recommendation is that if you draw the attention of one or more BGAs, get a lawyer and spend some time questioning the personal choices that got you where you are.

    In order to effectively apply security controls to these threats, it is critical to understand which threat each control protects against, with some quantifiable understanding of relative risk. In other words – it is more effective to protect against the threat that is most likely.

    The threats identified above fall into online threats, device theft threats and computer service threats. For most people, the total number of times a computer / smart phone has been serviced or stolen can be counted on one hand. Comparatively, your online accounts are online and available 365 x 24 (that’s 8,760 hours/year that you are exposed), and accessible by any criminal in the world with Internet access. Simple math should show you that protecting yourself online is at least 100x more critical than any other threat identified above.
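The back-of-the-envelope comparison can be written out explicitly. The hours of online exposure are simple arithmetic; the physical-exposure figure below is an assumed, generous stand-in for a handful of service visits or theft opportunities per year:

```python
# Online accounts are reachable every hour of the year.
online_exposure_hours = 365 * 24        # 8,760 hours/year
print(online_exposure_hours)

# Assumed figure: five physical-exposure events a year, ~8 hours each
# (device in a repair shop, left unattended while traveling, etc.).
physical_exposure_hours = 5 * 8         # 40 hours/year

# Ratio of exposure windows, not of actual risk -- attacker populations
# and attack costs differ -- but it makes the "at least 100x" point.
print(online_exposure_hours / physical_exposure_hours)
```

With these assumptions the exposure ratio comes out above 200x, comfortably clearing the 100x threshold argued for in the text.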

    Threat Vectors

    In order to determine the most effective security controls for the given threats, it is important to understand what the threat vectors for each threat are. Threat vectors define the “how systems are attacked” for a given threat. Fortunately for the threats identified above, the vectors are fairly simple.

    In reverse order:

        1. Computer Service Threat: As part of the service process, you (the system owner) provide the device username and password so that the service people can access the operating system. This also happens to give these same service people fairly unlimited access to the personal files and data on the system, which they have been known to harvest for their personal gain. Keeping files of this nature in a secure container can reduce this threat.
        2. Theft or Loss: In recent years criminals have discovered that the information on a computer / phone may be worth much more than the physical device itself. In most cases, stolen computers and phones are harvested for whatever personal information can be monetized and then are sold to a hardware broker. If your system is not encrypted, all of the information on the system is accessible even if you have a complex password. Encryption of the system is really the only protection from this threat.
        3. Cyber Criminals: This is the most complex of the threats, since there are always at least two paths to the information they are looking for. Remember that the goal of this threat is to compromise your online accounts, which means that they can target the accounts directly on the Internet. However, most online Internet companies are fairly good at detecting and blocking direct attacks of this nature. So the next most direct path is to compromise a device with malware and harvest the information from this less protected device. The nature of this vector means this is also the most complex to protect. The use of Firewalls, Anti-Virus/Anti-Malware, Ad-Blockers, more secure browsers, secure password containers, and two factor authentication all contribute to blocking this attack vector. This layering of security tools (controls) is also called “defense in depth”.

    Actionable Recommendations [ranked]

    1. (Most Critical) Use Two Factor Authentication (2FA) for critical online accounts.
      1. Google: Everybody (maybe not you) has a Google account, and in many cases it is your primary email account. As a primary email account it is the target account for resetting your password for most other accounts. It is the one account to rule them all for your online world, and it needs to be secured appropriately. Use Google Authenticator on your smart phone for 2FA.
      2. Amazon: In the global first world, this is the most likely online shopping account everybody (once again – maybe not you) has. It also supports Google Authenticator for 2FA.
      3. PayPal: PayPal uses an SMS code as a second authentication factor. It is not as convenient as Google Authenticator, but is better than 1FA.
      4. Device Integration: Apple, Google and Microsoft are increasingly integrating devices in their product ecosystems into their online systems. This increases the capabilities of these devices, and it also increases the online exposure of your accounts.
        1. Microsoft Online: Enable 2FA. Microsoft unfortunately does not integrate with Google Authenticator, but does provide its own authentication app for your smart phone.
        2. Apple iTunes: Require authentication for any purchases and enable 2FA.
        3. Google Play: Require Authentication for any purchases.
      5. Banks, Credit Unions and Credit Accounts – These groups are doing their own thing for 2FA. If your banks, credit unions or credit accounts do not have some form of 2FA, contact them and request it. Or move your account.
    2. Password Manager: Use one, and offline is better than online. Remember that putting it in the cloud just means putting it on somebody else’s computer (and may represent more risk than local storage). I personally recommend KeePass since it is open source, supports many platforms, is actively supported and free.
    3. Never store credit card info online: There are many online service providers that insist each month that they really want to store my credit card information in their systems (I am talking to you Comcast and Verizon), and I have to uncheck the save info box every time. At some point in the past, I asked a few of these service providers (via customer service) if agreeing to store my information on their servers meant that they assumed full liability for any and all damages if they were compromised. The lack of any response indicated to me that the answer is “probably not”. So if they are not willing to take responsibility for that potential outcome, I don’t consider it reasonable to leave credit card information in their system.
    4. Encrypt your SmartPhone: Smart phones are becoming the ultimate repository of personal information that can be used to steal your identity / money, and nearly all smart phones have provisions for encryption and password / PIN access. Use them. They really do work and are effective. It is interesting to note that most PIN codes are 4 to 6 digits, and most patterns (when reduced to bits) are comparable to 4-digit (or smaller) codes.
    5. Encrypt your Laptop: Your second most portable device is also the second most likely to be stolen or lost. If you have a Windows laptop, use BitLocker for system encryption. It is well integrated and provides a decent level of data security. In addition, I would also recommend installing VeraCrypt, the open source successor to TrueCrypt. For that extra level of assurance, you can create a secure container on your device or removable drive to store data requiring greater security / privacy.
    6. Password protect Chrome profile: I personally save usernames and passwords in my Chrome profile purely for the convenience. This allows me to go to any of my systems, and login easily to some of my regular sites. It also means that my profile represents a tremendous security exposure. So I sync everything and secure / encrypt it with a passphrase. Chrome offers the option to secure / encrypt with Google Account credentials, but I chose to use a separate passphrase to create a small barrier between my Google account and my Chrome sync data.
    7. Ad Blocker Plus / AntiVirus / Firewall / Chrome: Malware is the most likely path to having your computer compromised. This can happen through phishing emails, or through a website or popup ads. Browsers are more effective at stopping malware than they used to be, and Chrome updates silently and continuously, decreasing your exposure risk. Chrome is the browser I recommend. In addition, I use the Ad Blocker Plus plugin in Chrome. Lastly, I am using Windows 10, so I keep Windows Defender fully enabled and updated. Pick your favorite anti-virus / anti-malware product; Defender just happens to be included and does not result in a self-inflicted Denial of Service (McAfee anyone?).
    8. Use PayPal (or equivalent) when possible: PayPal (and some other credit providers) manage purchases more securely online by doing one time transactions for purchases rather than simply passing on your credit credentials. This limits the seller to the actual purchase, and greatly reduces the risk that your card can be compromised.
    9. (Least Critical) VPN: If you have a portable device and use forms of public Wi-Fi, there is a risk that your information could be harvested as part of that first hop to the Internet. VPNs will not make you anonymous, and VPNs are not TOR, but an always-on VPN can provide some security for this first hop. I use an always-on VPN that I was able to get for $25 / 5 years. It may not provide the most advanced / best security / privacy features available, but it is probably good enough for realistic threats.
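As a footnote to the PIN / pattern comparison in item 4, the keyspace arithmetic is easy to check. The 389,112 figure below is the commonly cited count of valid 3x3 Android unlock patterns:

```python
import math

def keyspace_bits(n: int) -> float:
    """Entropy upper bound, in bits, for n equally likely choices."""
    return math.log2(n)

print(round(keyspace_bits(10**4), 1))     # 4-digit PIN:  13.3 bits
print(round(keyspace_bits(10**6), 1))     # 6-digit PIN:  19.9 bits

# All valid 3x3 unlock patterns (commonly cited count): 389,112.
print(round(keyspace_bits(389_112), 1))   # pattern upper bound: 18.6 bits

# In practice users draw from a far smaller set of popular patterns,
# pushing effective strength down toward a 4-digit PIN or below.
```

The ~18.6-bit figure is a best case assuming every valid pattern is equally likely; real-world pattern choices cluster heavily, which is what makes the "comparable to a 4-digit code" claim plausible.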

    Additional Notes

    For those who are curious, there are some security tools that purport to provide security against the big government Agencies. However, it is important to note that even if these tools are compromised by these Agencies, it is very unlikely that they would admit it since it is more useful to have people believe they are being protected by these tools.

    1. VeraCrypt: Provides standalone encryption capability for files and storage devices that is nearly unbreakable. Like any encryption, the real weakness is the key and how you manage it.
    2. KeePass: Uses standalone encryption for passwords and other credential information. Once again, it is only as good as the password credentials you use.
    3. Signal / Private Call by Open Whisper: Secure messaging and voice call apps for your smart phone. The usefulness of these is directly related to who you are chatting with / talking with since both parties involved have to buy into to the additional effort to communicate securely.

    Bottom Line

    Security should do many things, but the most important elements for practical security are:

    1. It should protect against real threats in an effective manner. The corollary: It should not protect against imaginary / non-existent threats.
    2. It should be as transparent / invisible / easy to use as possible.
    3. It should be good enough that you are an obviously harder target than the rest of the herd (e.g. there is no need to be faster than the bear chasing you, just faster than the guy next to you).

    Remember – The most effective security is the security that is used.

    Note – I apologize for my lack of tools for Apple platforms, but since I do not own one it is much more difficult to research / use.


Security Patterns & Anti-Patterns


In this post we will be exploring a very useful analysis concept in security engineering: Security Patterns and, more importantly, Anti-Patterns.

As we have discussed in earlier posts, a use case or use model is a generalized process or method to do something useful. A security pattern is a generalized solution to a use case / use model.

Security Redux

As a quick refresher, let's take a look at how we get to patterns. Security within a system can be decomposed into a set of security controls. These controls come from one of three broad categories: Management, Operational and Technical. For further information on these distinctions, look to NIST SP 800-53 and NIST SP 800-100. The management controls are essentially policy and enforcement controls. Operational controls are primarily process and workflow management. Lastly, technical controls are the nuts and bolts pieces of technology that most people associate with computer security.

These three control domains loosely map to four implementation mechanisms: People, Process, Policy and Technology. Technology maps directly to technical controls, and for the most part is the most effective part of system security design. Process is how stuff gets done, and includes the checks, balances and feedback elements to ensure stuff gets done right. Policy is the organizational policy that drives the behavior of people and process. Lastly, people are the mechanism that interfaces everything and in many cases turns a disconnected collection of policy, process and technical systems into an organizational system that provides some capability.

When we represent some overall system capability as a Pattern, we are generalizing and simplifying down so that the entire system function can be easily understood as a single system. Anti-Patterns are used to represent common failure modes of the system, and to analyze which security controls are missing or failing in a way that allows each failure.

Credit Issuance: Pattern & Anti-Pattern

In this simple example we will look at how large-purchase credit is issued to consumers. It is important to note that I do not work in the financial / credit business, and this example is massively simplified.

In this particular Pattern / Anti-Pattern discussion, the bulk of the system security is based on process and people, and the discussion will center on those elements.

First we are going to explore the use case and security pattern. Bob and Alice are car shopping, have selected a vehicle, inform the salesperson that they would like to finance the purchase, and would like the dealership to facilitate this purchase. This is essentially the use case. The next steps are that Bob and Alice provide information that authenticates who they are so that their financial identity can be verified by a financial institution. Based on Bob and Alice's identity, the financial institution procures a credit report from one of the three credit reporting agencies (or all three) to establish a credit profile for Bob and Alice. Based on Bob and Alice's current financial commitments and history, the financial institution makes a risk-based decision as to whether credit will be extended for the purchase and what the terms will be. This information is then relayed back to the car salesman, who provides it to Bob and Alice, and they then decide if they will accept the terms. If the terms are accepted, Bob and Alice fill out various contracts that commit them to a number of things, the money is transferred from the financial institution, and ownership of the car is transferred from the dealership to Bob, Alice and the financial institution.

It is important to note that this pattern and use case are idealized, and by looking at the anti-pattern for this pattern, we can make some interesting observations. An anti-pattern is not exactly the opposite of the pattern, but often represents generalized failure in the pattern that we would like to prevent.

In this particular anti-pattern, Eve is car shopping also, but rather than paying for it herself, she intends to present herself as Alice, take possession of a car, and fraudulently commit Alice to the loan for the car. All of this is occurring without Alice's involvement or awareness of these events. It turns out that this is surprisingly easy to achieve with some degree of success, requiring little more than a fabricated ID and some personal information about Alice. When successful, Eve completes the contractual paperwork (posing as Alice), money is transferred to the car dealership, and Eve takes possession of the car. Some 15 to 30 days later, Alice receives notification of her payment schedule for the loan.

In most cases this is the first indication to Alice that she is involved. From that point, Alice contacts the financial institution indicating that it is in error and that she did not take out a loan for a new car. By this time, the transfer of the money and of the car title to the bank has been completed, and is unlikely to be reversed without the return of the car (which Eve is unlikely to do voluntarily). As far as the car dealership or the financial institution is concerned, the entire process was legitimate and valid. By default, Alice is the responsible party for this fraudulent loan until she is able to legally correct the issue by having the financial institution accept the loan as fraudulent and absolve her of responsibility for it. This can often take many months, and in the meantime it is often necessary for Alice to make payments on the loan to protect her credit standing.

What Went Wrong?

I consider this to be a particularly good example to illustrate patterns and anti-patterns, so let's dissect what happened and what went wrong.

If we look at this pattern and analyze the roles of the parties involved, we have Bob and Alice – the buyers, the car salesman, and the financial institution loan officer. In addition, the car salesman is acting as a broker between the financial institution and Bob and Alice. As buyers, the role of Bob and Alice is relatively simple: they want to buy a car, and are ready to commit to a car loan within some set of terms they deem reasonable.

The loan officer has a similarly simple role. The financial institution chooses to offer a loan to the buyers under a set of terms that fall within the policy of the financial institution, based on the financial identity / history of the buyers. If we examine the goals and motives of the financial institution, it becomes somewhat more complicated. For any financial institution, it is imperative not to give out fraudulent loans. As a for-profit institution, it is also imperative to increase profits by issuing more loans. These two conflicting goals result in a risk-based trade-off that becomes part of the loan calculus at the financial institution. The probability of the loan being fraudulent is a known risk, the probability that Bob and Alice may default on the loan is also a known risk, and all of these risks are taken into consideration. However, even when these risks are known and accounted for, there is no benefit to a realized risk.

The car salesman plays a critical role in this process. The salesman (and by extension his employer) is responsible for authenticating Bob and Alice; this entire example only functions correctly if Bob and Alice are really Bob and Alice. The salesman is also responsible for representing the financial institution to the buyers. This becomes complicated by the fact that most car dealerships have relationships with dozens of financial institutions, with various forms of incentives to select one over another. The role of the car salesman is also conflicted. Fundamentally, the first and most important goal for the car salesman is to sell cars and maximize the personal incentive that results from each sale. The goal of ensuring that any particular car purchase is not fraudulent is a distant second. It is safe to assume that if one financial institution rejects the loan application because it seems excessively risky, it will be submitted to multiple other financial institutions willing to take on riskier loans. In addition, for every car dealership that rigorously reviews the application and credentials submitted by Bob and Alice to ensure that they are not party to a fraudulent loan, there are numerous other dealerships willing to be less diligent.

If we then look at the anti-pattern, we introduce an additional party to this process: Eve. When Eve impersonates Alice, Alice still plays a role (as the victim) but is not actually connected to the process in a useful manner – and therein lies the flaw in this security architecture.

The remaining part of this analysis is to examine how the pattern reacts to misrepresentation. If the financial institution misrepresents the loan terms to the buyers, the buyers are in possession of the contract signed at closing of the loan. If the financial institution fails to transfer the loan proceeds to the car dealership, the title is not transferred and possession of the car is not released. If the car salesman misrepresents the vehicle, the financial institution does check the VIN, which provides significant information about the vehicle, and no money will be transferred until the discrepancy is resolved. For both the car salesman and the financial institution there are checks and balances to ensure that they are not misrepresenting their part in the transaction. However, if the buyers misrepresent themselves as somebody else, there are no immediate system level controls to function as a check.

Bottom Line – Whenever people are key parts of the security design, it is important to assess these elements:

  • Identify the goals / motivations of all the roles. If these are conflicted, this will result in some form of trade-off at the personal level, which translates to a system security vulnerability.
  • Identify the impact of misrepresentation. What checks and balances are in place to ensure that if a role misrepresents itself, the system security functions despite this misrepresentation?


Pattern and anti-pattern analysis are often done to highlight weaknesses. This analysis showed that, for this particular example, all of the parties (or actors) need to be accounted for in the process, including in the primary pattern and any anti-patterns.


Introduction to Systems Security Engineering

There are many books, articles and websites on Systems Engineering in general, but relatively few on Systems Security Engineering. In the not so distant past, I spent more than a decade implementing IT security, developing policy and procedure for IT security, and auditing / assessing IT security in the Federal space. As part of that I spent a significant amount of time with FIPS standards and NIST Special Publications. The FIPS standards are more useful in that they define the structure of the solution and the scope of what is compliant / certifiable and what is not, which tends to encourage (but not ensure) interoperability. The NIST Special Publications, on the other hand, are much more educational, instructional and tutorial in nature. A recent example of this is NIST SP 800-160, Systems Security Engineering: An Integrated Approach to Building Trustworthy Resilient Systems.

The document provides a relatively brief overview of what Systems Security Engineering is in Chapter 2, and of how it aligns with ISO/IEC 15288 (the ISO standard for Systems Engineering processes and life cycles). This chapter really provides the most useful content of the document at this time.

Chapter 3 goes into detailed lifecycle processes for systems security engineering and maps them directly to ISO/IEC 15288, which helps develop an understanding of how Systems Security Engineering integrates with the general Systems Engineering processes. These are not separate or disjointed processes, and that needs to be explicit and clear.

The appendices are simply placeholders in the draft, but show promise. I will be extremely curious to see what goes in those in the release version.

Overall – I think this document (when completed) may integrate and update the better parts of several aging Special Publications.


PSA – Update on TrueCrypt


There are many users who have continued to use TrueCrypt 7.1a for a number of reasons; specifically:

  1. TrueCrypt is not actively being developed or supported, but there are no indications of security vulnerabilities with TrueCrypt, and
  2. There are no clear and obvious alternatives to TrueCrypt which are as good / better than TrueCrypt 7.1a.

However – neither of these reasons is still valid. In September 2015, a researcher discovered two additional security flaws in TrueCrypt 7.1a, one of which is critical (CVE-2015-7358), potentially allowing elevated privileges on a TrueCrypt system.

In addition, VeraCrypt is a fork of the TrueCrypt 7.1a codebase, is stable, and has already patched these two vulnerabilities (in addition to several others previously identified).

Bottom Line – It is time for any TrueCrypt users to remove it and replace it with VeraCrypt.


Embedded Device Security – Some Thoughts


Devices are becoming increasingly computerized and networked. That is mildly newsworthy. Most of these devices have a long history of not being computerized or networked. Once again, only mildly newsworthy. Some of the companies have limited background in designing computerized and networked devices, which is not newsworthy in any context.

However – if security researchers compromise a Jeep and disconnect the brakes on a moving vehicle, or change the dosage on a medical infusion pump – that is wildly newsworthy. Newsworthy in the sense that by connecting the dots above, we now have systems used by millions of people that represent a previously unknown and very real risk of injury and death.

Designing secure embedded systems is complex and challenging, and requires at least some level of capability and comprehension of the system level risks – a level of capability that many companies / industries have proven they do not have. Let's examine some of these cases.

Operational Issues Driving Bad Security

System security is a feature of a system as a whole, and as such it is not easily distributed to the individual pieces of a system. When this is coupled with the modern practice of subcontracting out much of the design into subsystems, this often means that the security of the system is either defined poorly or not at all within these subsystems. This should logically shift the responsibility of security validation to the systems integrator, but often does not, since the system integrator does not have an in-depth understanding of the individual subsystems. Essentially, as systems are further subdivided, the ability to develop and validate an effective security architecture becomes increasingly difficult.
A secondary aspect of this problem is that subsystems are defined in terms of what they are supposed to do – feature requirements. Security requirements express what the system is supposed to prevent, or what it is supposed to not do. This negative aspect of security requirements makes security much more difficult to define and even more difficult to validate. As a result, security requirements alone are not a good mechanism to implement system-level security.

The third and last aspect of this problem is that developing a secure system is a process that incorporates threat modeling, attack models, and security domain modeling – which ultimately drive requirements and validation. Without this level of integration into the development process, effective security cannot happen.


Device Compromises in the News

There have been a number of well publicized security compromises in the last few years, which are increasingly dramatic and applicable to the public at large. Many of these compromises are presented at conferences dedicated to security, such as the Blackhat Conference and the DefCon Conference (both held in late summer).

This association has resulted in some interesting aspects of device security compromises. The most significant is that since these compromises are presented at non-academic conferences, the demonstrations of them are increasingly dramatic to garner more attention. A secondary aspect is that many of these compromises are made public early in the summer to build interest before the conferences. This is not particularly relevant, but it is interesting.

2014 Jeep Compromise (Blackhat 2015 Talk)

In July 2015, a pair of security researchers went public with a compromise against a 2014 Jeep Cherokee that allowed them to get GPS location, disable brakes, disable the transmission, and affect steering. In addition, they were able to control the radio, wipers, seat heater, AC, and headlights.

All attacks took place through the built-in cellular radio data connection. The root compromise was based on the ability to telnet (with no password) into a D-Bus service from the cellular interface, allowing commands to be sent to any servers on the D-Bus. One of those servers happened to be the CAN bus processor, which has access to all of the computerized devices in the vehicle including the engine, transmission, braking system, and steering.

The secondary issues that allowed complete compromise of the system include a lack of system-level access controls along security domain boundaries. Or to put it bluntly – giving the entertainment system the ability to disable critical systems like brakes or steering is a bad practice and potentially dangerous. Since this same system (UConnect) is used in a large number of Fiat Chrysler vehicles (Chrysler, Dodge, Jeep and Ram), many of these security vulnerabilities are applicable to any vehicle that also uses UConnect.

The overall risk this compromise represents is that an attacker can take control of an entire class of vehicle and track the vehicle, disable the vehicle, or precipitate a high speed accident with the vehicle, from anywhere via a cellular network. Collectively these represent threats to privacy and risks of personal injury or death.

Brinks CompuSafe Galileo Compromise (DefCon 2015 Talk)

The Brinks CompuSafe is a computerized safe with a touch screen that allows multiple people to deposit money, and it will track the deposits, enabling tracking and accountability. The unfortunate reality is that it is based on Windows XP, and it has an externally exposed USB port that was fully functional (including support for mouse, keyboard and storage). In combination with a user-interface-bypass attack, administrative access allowed the researchers to modify the database of transactions, unlock the safe, and clean up any evidence of the compromise. Issues with this include: 1) using a general purpose OS in a security critical role, 2) exposing unrestricted hardware system access externally via the USB port, and 3) a user interface (kiosk) that fails insecurely. Ultimately, this computerized commercial safe is much less secure than most of the mechanical drop slot safes it was intended to replace.

BMW Lock Compromise

In the BMW compromise, the ConnectedDrive system queries BMW servers on a regular basis over HTTP. The researchers were able to implement a Man in the Middle Attack by posing as a fake cell phone tower, and were able to inject commands into the vehicle's system. In many practical attacks this was used to unlock the doors to simplify theft. The "fix" issued by BMW forced this channel to HTTPS – which is better, but still does not qualify as a highly secure solution. A more complete and secure solution would implement digitally signed updates and commands, which would provide significantly greater resistance to injection attacks.

The overall risk this compromise represents is that an attacker can inject commands into the vehicle (from close proximity) enabling it to be unlocked, started and stolen without significant effort.

Hospira Infusion Pump Compromise

An infusion pump is a medical device that administers intravenous drugs in a very controlled manner in terms of dosage and scheduling. Modern infusion pumps are networked into the clinical environment to provide remote monitoring and configuration. In addition, they often have built-in failsafe mechanisms to mitigate the risk of operator errors in dosage / scheduling. The Hospira pump that was compromised had an exposed Ethernet port with an open telnet service running, which enabled a local attacker to connect via Ethernet and gain access to the clinical network credentials. This then allowed the attacker to access the secure clinical network, and to access servers, data and other infusion pumps on that network. From either the Wi-Fi or Ethernet port, the attacker could modify the configuration and failsafes, since this infusion pump had no protections to prevent either overwriting the firmware or modifying the dosage tables. Later investigations indicate that a number of Hospira infusion pumps are likely to have the same vulnerabilities, since they use common parts and software. As a result, the FDA has issued multiple security warnings (and sometimes recalls) to hospitals and medical care providers on these types of devices.

The overall risk this compromise represents is that an attacker could: a) compromise the device locally to modify the drug tables, dosing and disabling failsafes, b) compromise the clinical secure network by harvesting the unprotected Wi-Fi credentials from the device, and c) compromise any of these infusion pumps on the same network (wirelessly). Collectively these represent a significant risk of injury and death.

Samsung Refrigerator Compromise

Samsung produces a number of smart refrigerators; a recent model has an LCD display that is linked to a Google Calendar that functions as a family calendar. Unfortunately, the operation of this device is also capable of exposing the Google credentials for the associated account. Each time the refrigerator accesses the Google account, it does not actually verify that the SSL/TLS certificate is valid, allowing a Man in the Middle Attack server to pose as the Google server and capture the login credentials.

The overall risk associated with this compromise is that the username/password for a Google account is compromised, and that is often associated with many other accounts including banking, financial, and social accounts. This can lead to all of these accounts being compromised by the attacker.

Bottom Line

There is a common thread through all of these recent newsworthy examples. In every one of these cases, these devices not only failed to follow generally accepted best practices for embedded security, but every one of them followed one or more worst / risky practices.

Risky Practices

The following are a set of practices that are not strictly "bad practices" but are risky. They are risky in the sense that they can be used effectively in a secure architecture but, like explosives, must be handled very carefully and deliberately.

Exposed Hardware Access

Many embedded devices have some hardware interfaces exposed. These include USB ports, UART ports and JTAG ports, both internal to the device and exposed externally. In many cases these are critical to some aspect of the operation of the device. However, when these interfaces are absolutely necessary, it is incredibly important that they be limited and locked down to minimize the opportunities to attack through them. In cases where they can be eliminated or protected mechanically, this should also be done.

Pre-Shared Keys

As a general rule, pre-shared keys should only be used when there are no better solutions (a solution of last resort). The issue is that sooner or later, an embedded pre-shared key will be compromised and exposed publicly, compromising the security of every single device that uses that same pre-shared key. In the world of device security, key management represents the greatest risk in general, and with pre-shared keys we also have the greatest impact from compromise. This very broad impact also creates an incentive to compromise those very keys. If a pre-shared key is the only viable solution, ensure that a) the pre-shared key can be updated securely (when provisioned or regularly), and b) the architecture allows for unique keys for each device (with keys stored separately from the firmware). By storing pre-shared keys separately from the firmware (rather than embedding them in it), keys cannot be compromised through firmware images. These features can be used to minimize the overall attack exposure and reduce the value of a successful attack.
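The unique-key-per-device idea can be sketched in a few lines. This is an illustrative assumption, not any product's actual scheme: a master secret held only in the vendor's provisioning infrastructure is combined with each device's serial number to derive a per-device key, so a key extracted from one device is useless against the rest of the fleet.

```python
import hashlib
import hmac

# Illustrative only: a master secret held in secure provisioning
# infrastructure, never embedded in distributed firmware images.
MASTER_SECRET = b"example-provisioning-master-secret"

def derive_device_key(serial_number: str) -> bytes:
    """Derive a unique 256-bit key for one device from the master secret."""
    return hmac.new(MASTER_SECRET, serial_number.encode(), hashlib.sha256).digest()

# Each device gets its own key, so one leaked key compromises one device.
key_a = derive_device_key("SN-000001")
key_b = derive_device_key("SN-000002")
assert key_a != key_b
```

Derivation like this also makes secure key rotation practical: re-provisioning a device with a new serial-bound key does not require touching any other device.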

Worst Practices

Collectively these compromises represent a number of “worst practices”, or design practices that simply should not be done. Outside of development environments, these practices serve no legitimate purpose and they represent significant security risk.

Exposed Insecure Services

As part of any security evaluation of an embedded system, unused services need to be disabled and (if possible) removed. In addition, these services need to be examined interface by interface and blocked (firewalled) whenever possible. On the Jeep Cherokee, exposing an unauthenticated telnet D-Bus service on the cellular interface was the root access point into the vehicle, and this exposure provided absolutely no purpose or value to the operation of the system.

Non-Secure Channel Communications

If a communications channel is not secured, everything on that channel is essentially public. This means that all traffic can be monitored, and with minimal effort the system can be attacked by posing as a trusted system through a Man in the Middle Attack. Examples of insecure services include any telnet or HTTP services.

Non-Authenticated End Points

A slightly more secure system may use an SSL/TLS connection to secure the channel. However, if the embedded system fails to actually validate the server certificate, it is possible to pose as that server in a Man in the Middle Attack. As described above, a recent model Samsung refrigerator has this flaw, exposing the end user's Google credentials.
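In Python, for example, the standard library's default TLS context performs exactly the certificate and hostname validation the refrigerator skipped; the anti-pattern corresponds to deliberately disabling it (a sketch of the pattern, not the device's actual code):

```python
import ssl

# Correct: the default context verifies the server certificate chain
# and checks that the hostname matches the certificate.
good_ctx = ssl.create_default_context()
assert good_ctx.check_hostname
assert good_ctx.verify_mode == ssl.CERT_REQUIRED

# The anti-pattern: disabling validation accepts ANY certificate,
# enabling exactly the Man in the Middle Attack described above.
bad_ctx = ssl.create_default_context()
bad_ctx.check_hostname = False      # must be cleared before CERT_NONE
bad_ctx.verify_mode = ssl.CERT_NONE  # never do this in production
```

Equivalent validate-or-skip switches exist in most embedded TLS stacks, which is why this flaw shows up so often in shipped devices.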

No Internal Security Boundaries / Access Controls

Many of the systems highlighted place all of their trust in a single security boundary between the system and the rest of the world, where the internal elements of the system operate in a wide open, trusted environment. The lack of strong internal boundaries between any of the subsystems is why the Jeep Cherokee was such an appealing target. Within systems are subsystems, and each of these needs to be defined by its role, with access controls enforced on the interface to each subsystem. The infotainment system should never be able to impact critical vehicle operation – ever. Subsystems need to be protected locally to a degree that corresponds to the role and risk associated with that subsystem.

Non Digitally Signed Firmware / Embedded Credentials

Many embedded systems allow for end user firmware updates, and these firmware updates are often distributed on the Internet. There are functional bugs, security bugs and emergent security vulnerabilities that need to be mitigated, or the device can become a liability. This approach to updating has the advantage that users can proactively ensure that their systems are up to date, with minimal overhead effort on the part of the product vendor.
It also allows attackers to dissect the firmware, identify any interesting vulnerabilities, and often extract built-in credentials. In addition, if the update process does not require / enforce code signing, it allows an attacker to modify the firmware to more easily compromise the system. Requiring that firmware updates be digitally signed prevents an attacker from installing modified or custom firmware that provides the attacker privileged access to further compromise the device or the network it is part of. Firmware updates need to require digital signatures, and must not include embedded credentials.
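The sign-then-verify flow can be illustrated with a toy sketch. The RSA numbers below are classic textbook values, far too small for real use; a production implementation would use a vetted library with full-size keys. The point is the division of knowledge: the device holds only the public key, the vendor holds the private key.

```python
import hashlib

# Toy RSA key pair (textbook values, NOT secure): (n, e) is the public key
# baked into the device; d is the vendor's private signing key.
n, e, d = 3233, 17, 2753

def firmware_digest(firmware: bytes) -> int:
    # Reduced modulo n only because this toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n

def sign_firmware(firmware: bytes) -> int:
    """Vendor side: sign the firmware digest with the private key d."""
    return pow(firmware_digest(firmware), d, n)

def verify_firmware(firmware: bytes, signature: int) -> bool:
    """Device side: install the update only if this check passes."""
    return pow(signature, e, n) == firmware_digest(firmware)

image = b"firmware v1.2 payload"
sig = sign_firmware(image)
assert verify_firmware(image, sig)  # genuine image is accepted
```

A modified image changes the digest and therefore fails verification (with a real hash-sized modulus this is guaranteed in practice), and an attacker cannot forge a valid signature without the vendor's private key.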


As more and more devices evolve into smart interconnected devices, there will be more and more compromises that carry increasing levels of risk to life and property. As the examples above show, many of these will be due to ignorance and arrogance on the part of some companies. Knowledge and awareness of the risks associated with embedded networked devices is critical to minimizing the risks in your systems.




CryptoCoding for Fun – Part 2 [Terminology and Concepts]

Introduction (Yak Shaving)

As you can see from the title, we are still on the "CryptoCoding for Fun" path, and although I would like to jump in and start showing how you make that happen, we need to step back a bit and take care of some details. In this case, defining and describing the terminology and concepts associated with crypto and cryptocoding is what we refer to as "Yak Shaving" – work you end up doing before you can start what you set out to do.

Many times when you start a new project / endeavor where some learning is involved, it is necessary to back the task up to a point where you can actually do something. For example, if the task is painting something in the garage, it is often necessary to clean and prep the garage, go buy some paint, and probably prep the item for painting – all prior to the actual painting. This is called Yak Shaving.

In this update, we are going to go over some of the system level concepts in modern crypto without getting bogged down in details or acronyms. The intent is that in order to understand cryptosystems, it is necessary to understand the tools (and terms) available to the cryptosystem designer.

Terms – Authentication / Authorization / Credentials / Confidentiality / Integrity / Availability

This section is all about crypto terminology, and nothing more.

  • Authentication – The process by which a system verifies that a given set of credentials is valid. If the credentials are a username / password pair, the username identifies and the password authenticates. When a computer verifies that the username is valid, and that the password matches the password associated with that username – you have been authenticated.
  • Authorization – Authorization is about access control and determining what a given authenticated user can and cannot access within some context. Authorization is what separates ‘guest’ from ‘administrator’ (and everything in between) on most computer systems.
  • Credentials – Something that identifies you to the computer system. This can be a username / password pair, or a PKI smartcard, or a simple RFID token. Each one of these represents different levels of confidence (that the credentials represent you), and this means they provide different levels of security. Bottom line – the security of most forms of credentials is based on how difficult they are to fake, break or steal.
  • Confidentiality – The ability to keep something secret. It is really that simple. If we assume Alice and Bob are exchanging private information, confidentiality is a characteristic of the communications channel that prevents Eve from listening in.
  • Integrity – The ability to ensure that the message received is the same as the message sent. If Alice and Bob are exchanging information, integrity is a characteristic of the communications channel that prevents Eve from modifying the information (without detection) over the channel.
  • Availability – The ability to ensure that the communications channel is available. If Alice and Bob are exchanging information, availability is the characteristic of the communications channel that prevents Eve from blocking information over the channel.
  • Channel – Some arbitrary mechanism between two points for exchanging information. Channels can be nested on other channels. For example, the network IP protocol layer is a channel, and TCP is another protocol channel that runs on top of IP – forming TCP/IP. In this example, neither IP nor TCP are secure. Another example is the TLS secure protocol which runs on top of the insecure TCP/IP protocol. TLS is the basis of most secure web browser sessions.
  • Ciphertext – Encrypted text or data, to be contrasted with cleartext (which is un-encrypted text).

Interesting Sidenote – In most cybersecurity scenarios, Bob and Alice are the protagonists and Eve is the antagonist. This is their story.


Encryption – Symmetric

Symmetric encryption is where plaintext is encrypted to ciphertext, and then decrypted back to plaintext using the same key used to encrypt. When used to secure some data or channel, it requires that both end points or all parties involved share the same key, which is where the term ‘pre-shared key (PSK)’ comes from.
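To make the shared-key idea concrete, here is a toy symmetric cipher in Python. It expands the pre-shared key into a keystream with SHA-256 and XORs it against the data – illustrative only, not a real cipher (production systems use vetted algorithms like AES-GCM). The key and message values are made up for the demo.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand the shared key into a pseudo-random keystream (counter construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream: the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

shared_key = b"pre-shared secret"                 # both Alice and Bob hold this
ciphertext = xor_crypt(b"attack at dawn", shared_key)
plaintext  = xor_crypt(ciphertext, shared_key)    # applying the same key reverses it
```

Note how the symmetry shows up directly in the code: encryption and decryption are literally the same function with the same key.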

Historically, symmetric encryption was the only form of encryption available (notwithstanding classified work) until about 1976, when the Diffie-Hellman key exchange algorithm was published. Every cipher before that time revolved around key generation, key management, and the algorithm itself. Prior to WWII, all encryption was done by hand or with machines, the most sophisticated and infamous of these machines being the German Enigma machine.

Encryption – Asymmetric

Asymmetric encryption is a form of encryption with two paired keys: either one can be used to encrypt, and only the other key of the pair can decrypt the result. The first form of asymmetric encryption to become generally well known was the basis of the Diffie-Hellman key exchange. There have been a number of variations of asymmetric encryption based on various arcane and complex mathematical methods, but all share the same basic characteristic of a key pair for encryption / decryption.

In common nomenclature, one of these keys is designated the ‘public key’, which is not kept secret (and is often published publicly), and the other is designated the ‘private key’, which is kept as secret as possible. On some operating systems, private keys are secured in applications called ‘keyrings’ that require some form of user credentials to access. Bottom line – private keys need to be kept private.

The value of asymmetric encryption may not be immediately obvious, but let’s take a look at an example where we compare / contrast with symmetric encryption.

If Bob and Alice need to exchange some small amount of data securely over a non-secure channel and they are relying on symmetric encryption, both need the same pre-shared encryption key, and this key must be kept secret from Eve (and everybody else who may be a threat). The problem is how Bob or Alice communicates this key to the other without a secure channel already in place. Since the primary communications channel is insecure, it cannot be used to share the encryption key, which drives the need for some secondary or out-of-band (OOB) channel that is secure. Think about that a minute – exchanging information securely over a non-secure channel requires some other secure channel to exchange keys first. This highlights the fundamental problem with symmetric encryption: key management.

Now if we take a look at asymmetric encryption, both Bob and Alice have generated their own personal public-private key pairs. This is then followed by Bob and Alice exchanging their public keys. Since these are public keys and it is not necessary to keep them secret, this is much easier than exchanging a symmetric encryption key. Once both Bob and Alice have exchanged public keys, we can start.

  1. Bob has a message he wants to send to Alice securely over a non-secure channel.
  2. Bob hashes the message, encrypts that hash with his private key, and attaches the result to the message, producing message A. The encrypted hash is known as a digital signature.
  3. Bob takes message A and then encrypts it using Alice’s public key, producing ciphertext B.
  4. Bob then sends this encrypted message B to Alice via any non-secure channel.
  5. Alice gets the message and decrypts it using her private key, producing message A. Alice is the only one who can do this since she is the only one that has her private key.
  6. Alice then takes message A, decrypts the attached digital signature using Bob’s public key, and compares the recovered hash to her own hash of the message – recovering and verifying Bob’s original message.
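The steps above can be sketched with textbook RSA in pure Python. This is a toy: the primes are tiny, there is no padding, and (as a simplification) the message and signature are encrypted separately – real systems use vetted libraries with proper padding and 2048-bit or larger keys.

```python
import hashlib

def make_keypair(p, q, e=17):
    """Textbook RSA: returns ((e, n), (d, n)). Tiny primes, demo only."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)              # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def rsa(value, key):
    """The raw RSA operation serves for encrypt, decrypt, sign, and verify."""
    exp, n = key
    return pow(value, exp, n)

def digest(m, n):
    """SHA-256 hash of the message, reduced to fit the toy modulus."""
    return int.from_bytes(hashlib.sha256(str(m).encode()).digest(), "big") % n

bob_pub, bob_priv = make_keypair(61, 53)        # n = 3233
alice_pub, alice_priv = make_keypair(67, 71)    # n = 4757

message = 42                                    # small enough for the toy modulus

# Steps 2-3: Bob signs with his private key, then encrypts for Alice.
signature  = rsa(digest(message, bob_pub[1]), bob_priv)
ciphertext = rsa(message, alice_pub)

# Steps 5-6: Alice decrypts with her private key, then verifies with Bob's public key.
recovered = rsa(ciphertext, alice_priv)
verified  = rsa(signature, bob_pub) == digest(recovered, bob_pub[1])
```

The same `pow(value, exp, n)` primitive does all four jobs – which key you use, and in which direction, is the entire difference between encrypting and signing.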

From this exchange, we can make the following significant statements:

  1. Bob knows that Alice and only Alice can decrypt the outer encryption since she is the one who has her private key.
  2. Alice knows that Bob and only Bob could have sent the message since the digital signature was verified and Bob is the only one that has his private key.
  3. Alice knows that the message was not modified since the hash code produced from the digital signature matched the contents of the message.
  4. This was achieved without sending secret keys through a second channel or over a non-secure channel.

These are some fairly significant features of public-private key encryption. But of course our example can be compromised by a Man in the Middle Attack (MITM). For further details on these operations, read the Wikipedia reference below on RSA.

Man in the Middle Attack (MITM)

As shown in the example above, public-private key encryption provides some significant advantages. However, it is also susceptible to some new attacks, including the Man in the Middle attack. In the example above, both Bob and Alice generated their own public-private key pairs and then somehow exchanged the public keys. Since they are public keys there is no need for secrecy – but there is a need for integrity. Say, for example, that Bob and Alice emailed their public keys to each other. Meanwhile, Eve was somehow able to intercept these emails, generate her own public-private key pairs, and substitute her public keys in those emails before sending them on to Bob and Alice. This means that when Bob thinks he is encrypting the message with Alice’s public key, it is really Eve’s public key. Eve intercepts the message, opens it with her private key, re-encrypts it with Alice’s real public key, and sends it on to Alice. The net result is that Eve can intercept and read every message without Bob or Alice being aware of it.

Digital Signing

The use of digital signatures is a technically interesting solution to many of the attacks on public-private key encryption. But first we need to talk about hashcodes. In the world of digital data and encryption, a ‘hashcode’ is a mathematical fingerprint of some data set. A typical hash function is SHA-2/256 (most often just SHA-256), which ‘hashes’ a dataset of any size and produces a 256-bit hashcode. Due to the mathematical processes used, it is highly unlikely that a data set could be modified and still produce the same hashcode, so hashcodes are often used to verify the integrity of datasets. When combined with public-private keys, this leads us to digital signing.
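The fingerprint property is easy to see with Python’s standard library – the two messages below (hypothetical values) differ by a single character, yet the hashcodes are completely different:

```python
import hashlib

# One character changed ($100 -> $900) produces an unrelated fingerprint
original = hashlib.sha256(b"wire $100 to account 1234").hexdigest()
tampered = hashlib.sha256(b"wire $900 to account 1234").hexdigest()

# 256 bits renders as 64 hexadecimal characters
digest_length = len(original)
```

Hashing is also deterministic: the same input always yields the same hashcode, which is what makes it usable as an integrity check.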

In this example, we are going to add a fourth party; Larry’s Certificate Authority (CA). At Larry’s CA, Larry has a special public-private keypair used just for signing things. It works just like any other public-private keypair, but is only used to sign, and is protected with a much higher degree of security than most other keys since it is used to assert the validity of many other certificates.

In this example, both Bob and Alice take their public-private key pairs to their respective local offices of Larry’s CA, along with identifying credentials – like drivers licenses, passports or birth certificates. Larry’s CA examines the credentials, determines that Bob is Bob and Alice is Alice, and then generates a Digital Public Key Certificate with their respective names, possibly addresses, email addresses, and their public keys. Larry then generates a hash of this Public Key Certificate, encrypts it with the CA’s private signing key, and attaches it to the public key certificate.

Now both Bob and Alice have upgraded from simply using self-generated public-private keypairs to using keypairs with public key certificates signed by a trusted certificate authority. So when Bob and Alice exchange these public key certificates, each can take the cert, decrypt the signature using Larry’s CA public signing key, read the recovered hashcode, and compare it to the hashcode they generate from the certificate itself.

If the hashcodes match, we can conclude a few things about these public key certificates.

  • Since Larry’s CA is known to check physical credentials, there is a certain level of trust that the personal identifying information on the public key certificate really belongs to the holder of that key.
  • Since the digital signature is based on the hashcode of the entire public key certificate, and the signature is valid – it is highly probable that the contents of the certificate have not been modified since it was signed.
  • Bottom Line – If Bob has a public key certificate for Alice signed by Larry’s CA, he can trust that this public key is trustworthy (and probably has not been replaced by Eve’s key). Since Alice can know the same things about Bob’s public key certificate signed by Larry’s CA, Bob and Alice can use each other’s public keys with a much higher degree of confidence than in the example based on self-generated keypairs.

It is important to note that any digital data can be signed by a public-private key pair. This includes public key credentials (as described above), executable code, firmware updates, and documents.
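Larry’s sign-and-verify flow can be sketched with the same toy textbook-RSA primitive (pow with tiny primes). Everything here – the CA keypair, Alice’s identity string – is a made-up demo value; real CAs use X.509 structures, proper signature padding, and large keys.

```python
import hashlib

# Larry's toy signing keypair (textbook RSA, n = 61 * 53 = 3233)
ca_public, ca_private = (17, 3233), (2753, 3233)

def _digest(data: bytes, n: int) -> int:
    """Hash the data, reduced to fit the toy modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, priv) -> int:
    """Hash the data and encrypt the hash with the signing private key."""
    d, n = priv
    return pow(_digest(data, n), d, n)

def verify(data: bytes, signature: int, pub) -> bool:
    """Decrypt the signature with the public key and compare hashcodes."""
    e, n = pub
    return pow(signature, e, n) == _digest(data, n)

# The certificate body: identity details plus Alice's public key (all made up)
alice_cert = b"CN=Alice; email=alice@example.org; pubkey=(17,4757)"
cert_signature = sign(alice_cert, ca_private)     # done once, by Larry's CA

bob_accepts = verify(alice_cert, cert_signature, ca_public)
forged      = verify(alice_cert, cert_signature + 1, ca_public)  # a bad signature fails
```

Note that `sign` works on any bytes – which is exactly why the same mechanism covers certificates, executable code, firmware updates, and documents.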

Digital Certificates

Digital Certificates are essentially what is described above in ‘Digital Signing’, but mapped to a specific structure. By mapping the data into a standard structure, the generation, signing, verification and general use of certificates can work across product / technology boundaries. In other words, it makes signed public key certificates inter-operable. The most common standard for digital certificates is X.509.

Public Key Infrastructure (PKI)

Public Key Infrastructure is an operational and inter-operable set of standards and services on a network that enable anybody to procure a signed digital certificate and use it as an authentication credential. On the Internet, this allows every website to inter-operate securely via SSL / TLS, with certificates from any number of different certificate authorities and any number of web browsers, all automatically.

Within the context of a company, consortium, or enterprise, the same type of PKI services can be operated to provide an additional level of operational security.

Diffie-Hellman Authentication

Diffie-Hellman authentication (or key exchange), published in 1976, was the first published form of asymmetric cryptography. In a Diffie-Hellman exchange, the two parties each generate their own public-private key pair and exchange the public halves through some open channel. The fundamental issue with asymmetric encryption (for bulk data) is that it does not scale well for large data sets, since it requires significant computing effort. So in Diffie-Hellman, the exchanged public values are used by both parties to derive the same shared key for a symmetric encryption session. Once this key has been derived on both sides, a much higher performance symmetric encryption secure session is established and used for all following communications in that secure session.
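A minimal Diffie-Hellman exchange fits in a few lines of pure Python. The group parameters below are toys (real deployments use standardized 2048-bit-plus groups, e.g. the RFC 7919 finite-field groups), but the mechanics are the same:

```python
import hashlib
import secrets

p = 2**127 - 1     # a Mersenne prime; still far too small for real use
g = 3

a = secrets.randbelow(p - 2) + 1     # Alice's private value (never transmitted)
b = secrets.randbelow(p - 2) + 1     # Bob's private value (never transmitted)
A = pow(g, a, p)                     # sent in the clear
B = pow(g, b, p)                     # sent in the clear

# Each side combines its own private value with the other side's public value
shared_alice = pow(B, a, p)
shared_bob   = pow(A, b, p)

# Hash the shared secret down to a symmetric session key
session_key = hashlib.sha256(str(shared_alice).encode()).digest()
```

Eve sees `p`, `g`, `A`, and `B` on the wire, but recovering the shared secret from those requires solving the discrete logarithm problem – which is what the security of the exchange rests on.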

However – it is important to note that, just like our example above with Bob and Alice, using locally generated unsigned keypairs is highly susceptible to MITM attacks, and this should never be done where that is a risk.

SSL/TLS

Most people are familiar with SSL/TLS as the keypair solution for securing sessions between webservers and web clients (browsers). SSL/TLS operates using the same logical steps as Diffie-Hellman, but with two differences. Rather than using locally generated unsigned keys, (as a minimum) the server has a signed certificate that is validated as part of the exchange. Optionally, the client may also have a signed certificate used for client authentication.
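Python’s standard library exposes exactly this behavior. The default client context refuses to talk to a server that cannot present a CA-signed certificate matching the hostname – no network connection needed to verify the defaults:

```python
import ssl

# The stdlib default client context enforces certificate validation:
# the server cert must chain to a trusted CA and match the hostname dialed.
ctx = ssl.create_default_context()

verify_required  = ctx.verify_mode == ssl.CERT_REQUIRED
hostname_checked = ctx.check_hostname
```

Downgrading either setting (as some tutorials casually suggest) reintroduces the MITM exposure of the unsigned-keypair example above.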

SSL/TLS is very widely used and considered one of the most important foundational elements of privacy and security on the Internet. However, it does have its issues. One of the most significant is that the key used to secure the symmetric encryption channel is exchanged using the certificate keypair. A server will often serve a large number of clients, and the server certificate – and thus its keypair – is the same for every one of those clients and every session. As long as the key size is large, that keypair is effectively unbreakable, and the sessions are reasonably secure.

However, if this session traffic is recorded and archived by some highly capable group, and at some later date this group is able to acquire the private key for the server certificate, every single one of those sessions can be decrypted. The private key can be used to decrypt the initial key exchange for each session, and the recovered session key then decrypts the remainder of that session.

There is however a solution – Forward Secrecy.

Forward Secrecy

Forward secrecy is one of the most interesting developments (in my opinion) in securing communications using public-private key pairs. As discussed in SSL/TLS above, if the private key for a server certificate is ever compromised, every session ever initiated with that certificate can be compromised.

In forward secrecy, a normal SSL/TLS session is initiated, producing a symmetric encryption secure channel. At each end of this channel, the server and client generate an ephemeral public-private keypair and exchange the public keys over the secure channel. A second symmetric key is then derived and exchanged over the secure channel – essentially a Diffie-Hellman key exchange run over the SSL/TLS session. This second key is used to set up a new symmetric session channel, and the initial channel is discarded.

Overall – the first key exchange authenticates the server (and possibly the client), since the session is based on signed certificates, but does not provide long term session security. The second key exchange, based on Diffie-Hellman, is vulnerable to MITM in general, but since it runs over an already authenticated secure channel, MITM is not a risk here. Most significantly, since the private keys in the second key exchange are never sent over the channel or written persistently, they cannot be recovered from an archived session, and as a result the second symmetric session key is also unrecoverable.

This means that even if the private key for the server certificate is compromised, any archived sessions are still secure – Forward Secure.
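The ephemeral property is easy to demonstrate: each session runs a fresh Diffie-Hellman exchange with private values that are generated, used, and thrown away, so every session key is independent of every other. Toy group parameters again – this only sketches the idea:

```python
import hashlib
import secrets

p, g = 2**127 - 1, 3      # toy group; real TLS uses standardized large groups

def ephemeral_session_key() -> str:
    """One forward-secret session: fresh private values, never stored anywhere."""
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    shared = pow(pow(g, a, p), b, p)          # both sides derive this value
    return hashlib.sha256(str(shared).encode()).hexdigest()

key_session_1 = ephemeral_session_key()
key_session_2 = ephemeral_session_key()       # independent of session 1
```

Since `a` and `b` exist only inside the function call, there is nothing to seize later that would let an attacker reconstruct `key_session_1` from an archived capture.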


Since 1976, cryptography and all of its associated piece parts have exploded in terms of development, applications and vulnerability research, and a massive amount of it has happened in open source. However, for most engineers and programmers it is still very inaccessible. Step one in making it accessible is to learn how it fundamentally works – and that was the purpose of this article.

Lastly – There are some very significant details on these topics that have been left out in order to generalize the concepts of operation and use. I strongly recommend at least skimming the references below to get a flavor of those details. In my experience it is very easy to get lost in the details and become frustrated, so this approach was intentional – and hopefully effective.


CryptoCoding for Fun


Inevitably, when somebody with more than a passing interest in programming develops an interest in crypto, there is an overwhelming urge to write cryptocode. Sometimes it is just the desire to implement something documented in a textbook or website. Sometimes it is the desire to implement a personal crypto design. And sometimes it is because there is a design need for crypto with no apparent ready solutions. All of these are valid desires / needs; however, there are a few (well recognized) principles of cryptocoding that need to be articulated.

Design your own Crypto

The first of these is sometimes referred to as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.” This is not to say that one person cannot design a good crypto algorithm, but that really good crypto algorithms are written by groups of people and validated by much larger groups of well educated and intelligent people. In simple terms, good crypto is conceived by people who are well versed in the crypto state of the art, and iteratively built on that. The concept is then successively attacked and refined by an increasingly larger audience of crypto-experts. At some point the product may be deemed sufficient, not because it has been proved ‘completely secure’, but because a sufficiently capable group of people has determined that it is ‘secure enough’. Bottom line – there is nothing fundamentally wrong with designing your own crypto algorithm, but without years of education and experience as a cryptanalyst, it is unlikely to be anywhere near as good as current crypto algorithms, which are the product of teams and groups who attack and compromise them in order to improve them.

Writing your own Crypto

The second of these is sometimes referred to as “never roll your own crypto”. Even when implementing well documented, well defined, best practice algorithms, writing crypto code is fraught with risks that do not exist for general coding. These risks are based on the types of attacks that have been successful in the past: timing attacks have been used to map out paths through crypto code, buffer attacks have been used to extract keys, and weak entropy / keying has produced guessable keys. Of course, this is primarily applicable in a production context, or any time the cryptocode is used to protect something of value. If this is being done for demonstration or educational purposes, these risks can often be recognized and ignored (unless dealing with those risks is part of the lesson goals). Bottom line – writing crypto code is hard, and writing high quality, secure / interoperable crypto code is much harder.
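One of those crypto-specific risks fits in a few lines. Comparing a secret authentication tag with `==` short-circuits on the first differing byte, which leaks timing information an attacker can exploit; the standard library provides a constant-time comparison for exactly this reason. The key and message are made-up demo values:

```python
import hashlib
import hmac

key = b"server-side secret"
message = b"amount=100&to=alice"
tag = hmac.new(key, message, hashlib.sha256).digest()

# hmac.compare_digest takes the same time regardless of where the
# inputs differ, unlike a naive `tag == candidate` comparison.
good = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
bad  = hmac.compare_digest(tag, bytes(32))   # an all-zero forgery
```

The functional result is identical to `==`; the entire point is the timing behavior – a distinction that never comes up in general-purpose coding, which is what makes cryptocode hard.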

Kerckhoffs’ Principle

Kerckhoffs’ Principle states: “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Essentially, in a good cryptosystem, all parts of the system (except the key) can be public knowledge without impairing its effectiveness. Another perspective is that any cryptosystem that relies on keeping the algorithm secret is less secure than one that relies only on a secret key.

Bruce Schneier stated “Kerckhoffs’s principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness—and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.”

When Kerckhoffs’ principle is paired with Schneier’s Law – which states (paraphrased) that more eyes make better crypto – the result is that better crypto comes from public and open development. Fortunately, that is the approach used for most modern crypto, which is a very fortunate circumstance for the aspiring cryptocoder / cryptographer. It allows us to learn from the best cryptosystems available.

If you are interested in the best crypto documentation the United States Government publishes, review the FIPS and NIST Special Publications listed in the references below. For example, if you are interested in the best way to generate / test public-private keys, look at FIPS 186-4. The quality of the pseudo-random numbers used in crypto is critical to security, and if you are interested in how this is done, look at NIST Special Publications 800-90A (Revision 1), 800-90B, and 800-90C.
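Python makes the random-quality distinction explicit. The `random` module is a Mersenne Twister PRNG – fine for simulation, completely predictable and never suitable for keys – while `secrets` draws from the operating system’s CSPRNG, the kind of vetted DRBG source that the SP 800-90 series standardizes:

```python
import secrets

# Key material must come from a cryptographically strong source,
# never from the `random` module.
key = secrets.token_bytes(32)       # 256 bits of symmetric key material
nonce = secrets.token_hex(12)       # 96 bits, hex-encoded (24 characters)
```

Using the wrong source here is exactly the “weak entropy / keying” failure mode mentioned above.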

Bottom Line

For years I have been espousing the principle of “never write your own crypto”, because it was clear that crypto is hard, and high quality cryptocode is only written by teams of well qualified cryptocoders. With the Snowden revelations of the last few years, it has also become obvious to me that “never write your own crypto” is exactly what a state player like the NSA would encourage, in order to more easily access communications through shared vulnerabilities in cryptosuites like OpenSSL. As a result, I have modified my recommendation: in order to be a better cryptocoder / cryptographer, you should write your own cryptocode and develop your own cryptosystems (for learning purposes), but you should only use well qualified cryptocode in production or critical systems.

It is critically important that every systems engineer and cryptocoder develop an in depth understanding of crypto algorithms and cryptosystems, and the most effective method to accomplish this is by writing, testing and evaluating cryptocode / cryptosystems.

So my stronger recommendation is that everybody who can program and who has an interest in crypto should write cryptocode, study cryptocode, and design cryptosystems; from that we will have a much stronger foundation of understanding for Systems Security Engineering.


System Security Testing and Python


A significant part of systems security is testing, and this presents a real challenge for most systems security engineers. Whether it is pen testing, forensic analysis, fuzz testing, or network testing, there can be an effectively infinite number of System Under Test (SUT) variations, multiplied by the necessary test variations.

The challenge is to develop an approach to this testing that provides the necessary flexibility without imposing an undue burden on the systems security engineer. Traditionally the options included packaged security tools, which provide an easy-to-use interface for pre-configured tests, and writing tests in some high level language, which provides a high degree of flexibility at a relatively high cost in effort and learning curve. The downsides of the packaged tools are lack of flexibility and cost.

An approach that has become increasingly popular is to take either one (or both) of these approaches and improve it through the use of Python.

For Example

A few examples of books that take this approach include:

  • Grayhat Python – ISBN 978-1593271923
  • Blackhat Python – ISBN 978-1593275907
  • Violent Python – ISBN 978-1597499576
  • Python Penetration Testing Essentials – ISBN 978-1784398583
  • Python Web Penetration Testing Cookbook – ISBN 978-1784392932
  • Hacking Secret Ciphers with Python – ISBN 978-1482614374
  • Python Forensics – ISBN 978-0124186767

In general, these take the approach of custom code based on generic application templates, or scripted interfaces to security applications.


After skimming a few of these books and some of the code samples, it has become obvious that Python has an interesting set of characteristics that make it a better language for systems work (including systems security software) than any other language I am aware of.

Over the last few decades, I have learned and programmed in a number of languages, including Basic, Fortran 77, Forth, C, Assembly, Pascal (and Delphi), and Java. Through all of these languages I have come to accept that each had a set of strengths and issues. For example, Basic was basic. It provided a very rudimentary set of language features and limited libraries, which meant there was often a very limited number of ways to do anything (and sometimes none). It was interpreted, so it was slow (way back when). It was not scalable, which encouraged small programs, and it was fairly easy to read. The net result is that Basic was used as a teaching language, suitable for small demonstration programs – and it fit that role reasonably well.

On the other hand, Java (and other strongly typed languages) are by nature painful to write in due to that strongly typed nature, though it also makes syntax errors less likely (after tracking down all of the missing semi-colons, matching braces, and type mismatches). Unfortunately, syntactical errors are usually the much simpler class of problems in a program.

Another attribute of Java (and other OO languages) is the object oriented capabilities – which really do provide advantages for upwards scalability and parallel development, but make straightforward imperative (procedural) development very difficult. Yes – everything can be an object, but that does not mean it is the most effective way to do everything.

Given that background, I spent a week (about 20 hours of it) reading books and writing code in Python. In that time I went from “hello” to a program with multiple classes that collected metrics for each file in a file system, placed that data in a dictionary of objects, and wrote it out to / retrieved it from a file – in about 100 lines of code. My overall assessment:

  • The class / OO implementation is powerful, and sufficiently ‘weakly typed’ that it is easily usable.
  • The dictionary functionality is very easy to use, performs well, is massively flexible, and becomes the go-to place to put data.
  • The included standard libraries are large and comprehensive, and they are dwarfed by the massive, high quality community-developed libraries.
  • Overall – In one week, Python has become my default language of choice for Systems Security Engineering.
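A compressed sketch of the kind of program described above – walk a file tree, collect per-file metrics into a dictionary of objects, and persist them as JSON – shows why Python fits this work so well (class names and the chosen metrics are my own illustration, not the original program):

```python
import json
import os
import tempfile

class FileMetrics:
    """Size and modification time for one file -- extend with hashes, etc."""
    def __init__(self, path):
        st = os.stat(path)
        self.size, self.mtime = st.st_size, st.st_mtime

    def as_dict(self):
        return {"size": self.size, "mtime": self.mtime}

def collect(root):
    """Walk the tree and build a dict of path -> FileMetrics objects."""
    metrics = {}                              # the go-to place to put data
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                metrics[path] = FileMetrics(path)
            except OSError:
                pass                          # skip unreadable files
    return metrics

# Demonstrate on a throwaway directory with one known file
with tempfile.TemporaryDirectory() as root:
    sample = os.path.join(root, "sample.txt")
    with open(sample, "w") as f:
        f.write("hello")
    metrics = collect(root)
    report = json.dumps({p: m.as_dict() for p, m in metrics.items()})
    collected_size = metrics[sample].size
```

The class, the dict, and the JSON round-trip each take only a handful of lines – which is roughly how a week of Python gets you from “hello” to a working metrics collector.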


Also of note, I looked at numerous books on Python and have discovered that:

  • There are a massive number of books purportedly for learning Python.
  • They are also nearly universally low value, with a few exceptions.

My criterion for this low value assessment is the number of “me-too” chapters. For example, every book I looked at for learning Python has at least one chapter on:

  • finding Python and installing it
  • the interactive mode of the Python interpreter
  • basic string functions
  • advanced string functions
  • type conversions
  • control flow
  • exceptions
  • modules, functions
  • lists, tuples, sets and dicts

In addition, each of these chapters provides only a basic level of coverage, and is virtually indistinguishable from the corresponding chapter in dozens of other books. Secondary to that, there was usually only minimal coverage of dicts, OO capabilities, and module capabilities. I wasted a lot of time looking for something that provided a more terse coverage of the basic concepts and a more complete coverage of the advanced features of Python. My recommendation to authors of computer programming books: if your unique content is much less than half of your total content, don’t publish.

From this effort I can recommend the following books:

  • The Quick Python Book (ISBN 978-1935182207): If you skip the very basic parts, there is a decent level of useful Python content for the experienced programmer.
  • Introducing Python (ISBN 978-1449359362): Very similar to the Quick Python book, with some unique content.
  • Python Pocket Reference (ISBN 978-1449357016): Simply a must have for any language. If O’Reilly has one, you should have it.
  • Learning Python (ISBN 978-1449355739): A 1500 page book that surprised me. It does have the basic “me-too” chapters, but has a number of massive sections not found in any other Python book. Specifically, Functions and Generators (200 pages), Modules in depth (120 pages), Classes and OOP in depth (300 pages), Exceptions in depth (80 pages), and 250 pages of other Advanced topics. Overall it provides the content of at least three other books on Python, in a coherent package.

Note – Although I could have provided links on Amazon for each of these books (every one of them is available at Amazon), my purpose is to provide some information on these books as resources (not promote Amazon). I buy many books directly from O’Reilly (they often have half off sales), Amazon, and Packt.