
Good News Everyone!

To state the obvious, I have not posted to this blog in over a year. In addition, in reviewing some of the posts from before I stopped blogging (in 2016), some of them seemed fairly derivative of earlier posts I had done, without significant new content.

For that I apologize, and hope I will not be writing a similar post a year from now.

Almost exactly a year ago, I took a job with a medical device company in a product security role. This role is one of the most challenging I have had in a while, and for me there is a level of excitement that comes with working on a dozen different tasks, ranging from hands-on, nuts-and-bolts work to developing an enterprise-level product security architecture to executive meetings selling the security story.

It is a different kind of thrill seeking, and so far it just keeps getting better.

As to why the blog has been dead for a year, that is the second half of this post. Initially it was about being stuck technically, since I was rehashing the same things I had been doing for a decade; I needed something new to stimulate the brain. Secondary to that, the new job had a significant learning curve that left no cognitive reserves for a blog of this nature.

In my experience, there is some finite level of daily cognitive function, and if you use that up before you get out of the office, there is nothing left for hobbies like this. For most of the last year, I was operating in that mode, but fortunately the brain (through learning) is very good at taking thinking tasks and turning them into mental reflexes, which impose a much lower cognitive tax.

And here we are.

In addition to this post I have created the first of a series of pages (meant to have some longevity) – Crypto Lab: Windows Subsystem for Linux.

See you soon.

Security Patterns & Anti-Patterns


In this post we will be exploring a very useful analysis concept in security engineering: Security Patterns and, more importantly, Anti-Patterns.

As we have discussed in earlier posts, a use case or use model is a generalized process or method to do something useful. A security pattern is a generalized solution to a use case / use model.

Security Redux

As a quick refresher, let's take a look at how we get to patterns. Security within a system can be decomposed into a set of security controls. These controls come from one of three broad categories: Management, Operational, and Technical. For further information on these distinctions, look to NIST SP 800-53 and NIST SP 800-100. The management controls are essentially policy and enforcement controls. Operational controls are primarily process and workflow management. Lastly, Technical controls are the nuts-and-bolts pieces of technology that most people associate with computer security.

These three control domains loosely map to four implementation mechanisms: People, Process, Policy, and Technology. Technology maps directly to technical controls, and for the most part is the most effective part of system security design. Process is how stuff gets done, and includes the checks, balances, and feedback elements to ensure stuff gets done right. Policy is the organizational policy that drives the behavior of people and process. Lastly, people are the mechanism that interfaces everything, and in many cases turn a disconnected collection of policy, process, and technical systems into an organizational system that provides some capability.

When we represent some overall system capability as a Pattern, we are generalizing and simplifying so that the entire system function can be easily understood as a single system. Anti-Patterns are used to represent common failure modes of the system, and to analyze which security controls are missing or failing in a way that allows those failures.

Credit Issuance: Pattern & Anti-Pattern

In this simple example we will look at how large-purchase credit is issued to consumers. It is important to note that I do not work in the financial / credit business, and this example is massively simplified.

In this particular Pattern / Anti-Pattern discussion, the bulk of the system security is based on process and people, and the discussion will center on those elements.

First we are going to explore the use case and security pattern. Bob and Alice are car shopping, have selected a vehicle, inform the salesperson that they would like to finance the purchase, and would like the dealership to facilitate it. This is essentially the use case. The next steps are that Bob and Alice provide information that authenticates who they are, so that their financial identity can be verified by a financial institution. Based on Bob and Alice's identity, the financial institution procures a credit report from one of the three credit reporting agencies (or all three) to establish a credit profile for Bob and Alice. Based on Bob and Alice's current financial commitments and history, the financial institution makes a risk-based decision as to whether credit will be extended for the purchase and what the terms will be. This information is then relayed back to the car salesman, who provides it to Bob and Alice, and they decide whether to accept the terms. If the terms are accepted, Bob and Alice fill out various contracts that commit them to a number of things, the money is transferred from the financial institution, and ownership of the car is transferred from the dealership to Bob, Alice, and the financial institution.

It is important to note that this pattern and use case are idealized, and by looking at the anti-pattern for this pattern, we can make some interesting observations. An anti-pattern is not exactly the opposite of the pattern, but often represents generalized failure in the pattern that we would like to prevent.

In this particular anti-pattern, Eve is also car shopping, but rather than paying for a car herself, she intends to present herself as Alice, take possession of a car, and fraudulently commit Alice to the loan for it. All of this occurs without Alice's involvement or awareness. It turns out that this is surprisingly easy to achieve with some degree of success, requiring little more than a fabricated ID and some personal information about Alice. When successful, Eve completes the contractual paperwork (posing as Alice), money is transferred to the car dealership, and Eve takes possession of the car. Some 15 to 30 days later, Alice receives notification of her payment schedule for the loan.

In most cases this is the first indication to Alice that she is involved. From that point Alice contacts the financial institution, indicating that they are in error and that she did not take out a loan for a new car. By this time, the transfer of the money and of the car title to the bank has been completed, and is unlikely to be reversed without the return of the car (which Eve is unlikely to do voluntarily). As far as the car dealership or the financial institution is concerned, the entire process was legitimate and valid. By default, Alice is the responsible party for this fraudulent loan until she is able to legally correct the issue by having the financial institution accept the loan as fraudulent and absolve her of responsibility for it. This can often take many months, and in the meantime it is often necessary for Alice to make payments on the loan to protect her credit standing.

What Went Wrong?

I consider this to be a particularly good example to illustrate patterns and anti-patterns, so let's dissect what happened and what went wrong.

If we look at this pattern and analyze the roles of the parties involved, we have Bob and Alice (the buyers), the car salesman, and the financial institution loan officer. In addition, the car salesman is acting as a broker between the financial institution and Bob and Alice. As buyers, the role of Bob and Alice is relatively simple: they want to buy a car, and are ready to commit to a car loan within some set of terms they deem reasonable.

The loan officer has a similarly simple role. The financial institution chooses to offer a loan to the buyers under a set of terms that fall within the policy of the financial institution, based on the financial identity / history of the buyers. If we examine the goals and motives of the financial institution, it becomes somewhat more complicated. For any financial institution, it is imperative to not give out fraudulent loans. As a for-profit institution, it is also imperative to increase profits by issuing more loans. These two conflicting goals result in a risk-based trade-off that becomes part of the loan calculus at the financial institution. The probability of the loan being fraudulent is a known risk, the probability that Bob and Alice may default on the loan is also a known risk, and all of these risks are taken into consideration. However, even when these risks are known and accounted for, there is no benefit to a realized risk.

The car salesman plays a critical role in this process. The salesman (and by extension, his employer) is responsible for authenticating Bob and Alice. The primary basis of this entire example is that it only functions correctly if Bob and Alice are really Bob and Alice. The salesman is also responsible for representing the financial institution to the buyers. This becomes complicated by the fact that most car dealerships have relationships with dozens of financial institutions, with various forms of incentives to select one over another. The role of the car salesman is also conflicted. Fundamentally, the first and most important goal for the car salesman is to sell cars and maximize the personal incentive that results from each sale. The goal of ensuring that any particular car purchase is not fraudulent is a distant second. It is safe to assume that if one financial institution rejects the loan application because it seems excessively risky, it will be submitted to multiple other financial institutions willing to take on riskier loans. In addition, for every car dealership that rigorously reviews the application and credentials submitted by Bob and Alice to ensure that they are not party to a fraudulent loan, there are numerous other dealerships willing to be less diligent.

If we then look at the Anti-Pattern, we introduce an additional party to this process; Eve. When Eve impersonates Alice, Alice still plays a role (as the victim) but is not actually connected to the process in a useful manner – and therein lies the flaw in this security architecture.

The remaining part of this analysis is to examine how the pattern reacts to misrepresentation. If the financial institution misrepresents the loan terms to the buyers, the buyers are in possession of the contract signed at closing of the loan. If the financial institution fails to transfer the loan proceeds to the car dealership, the title is not transferred and possession of the car is not released. If the car salesman misrepresents the vehicle, the financial institution checks the VIN, which provides significant information about the vehicle, and no money will be transferred until the discrepancy is resolved. For both the car salesman and the financial institution there are checks and balances to ensure that they are not misrepresenting their part in the transaction. However, if the buyers misrepresent themselves as somebody else, there are no immediate system-level controls to function as a check.

Bottom Line – Whenever people are key parts of the security design, it is important to assess these elements:

  • Identify the goals / motivations of all the roles. If these are conflicted, the result will be some form of trade-off at the personal level, which translates to a system security vulnerability.
  • Identify the impact of misrepresentation. What checks and balances are in place to ensure that if a role misrepresents itself, the system security functions despite this misrepresentation?


Pattern and Anti-Pattern analysis is often done to highlight weaknesses. This analysis showed that, for this particular example, all of the parties (or actors) need to be accounted for in the process, in both the primary pattern and any anti-patterns.


Introduction to Systems Security Engineering

There are many books, articles, and websites on Systems Engineering in general, but relatively few on Systems Security Engineering. In the not-so-distant past, I spent more than a decade implementing IT security, developing policy and procedure for IT security, and auditing / assessing IT security in the Federal space. As part of that, I spent a significant amount of time with FIPS standards and NIST Special Publications. The FIPS standards are more useful in that they define the structure of the solution and the scope of what is compliant / certifiable and what is not, which tends to encourage (but not ensure) interoperability. The NIST Special Publications, on the other hand, are much more educational, instructional, and tutorial in nature. A recent example of this is NIST SP 800-160, Systems Security Engineering: An Integrated Approach to Building Trustworthy Resilient Systems.

The document provides a relatively brief overview of what Systems Security Engineering is in chapter 2, and how it aligns with ISO/IEC 15288 (the ISO standard for Systems Engineering processes and life cycles). This chapter provides the most useful content of the document at this time.

Chapter 3 goes into detailed lifecycle processes for systems security engineering and maps those directly to ISO/IEC 15288, which is a good thing, helping develop an understanding of how Systems Security Engineering integrates with the general Systems Engineering processes. These are not separate or disjoint processes, and that needs to be explicit and clear.

The appendices are simply placeholders in the draft, but show promise. I will be extremely curious to see what goes in those in the release version.

Overall – I think this document (when completed) may integrate and update the better parts of several aging Special Publications.


PSA – Update on TrueCrypt


There are many users who have continued to use TrueCrypt 7.1a for a number of reasons; specifically:

  1. TrueCrypt is not actively being developed or supported, but there are no indications of security vulnerabilities with TrueCrypt, and
  2. There are no clear and obvious alternatives to TrueCrypt which are as good / better than TrueCrypt 7.1a.

However – neither of these reasons is still valid. In September 2015, a researcher discovered two additional security flaws in TrueCrypt 7.1a, one of which is critical (CVE-2015-7358), potentially allowing elevated privileges on a TrueCrypt system.

In addition, VeraCrypt is a fork of the TrueCrypt 7.1a codebase, is stable, and has already patched these two vulnerabilities (in addition to several others previously identified).

Bottom Line – It is time for any TrueCrypt users to remove it and replace it with VeraCrypt.


Embedded Device Security – Some Thoughts


Devices are becoming increasingly computerized and networked. That is mildly newsworthy. Most of these devices have a long history of not being computerized or networked. Once again, only mildly newsworthy. Some of these companies have limited background in designing computerized and networked devices, which is not newsworthy in any context.

However – if security researchers compromise a Jeep and disable the brakes on a moving vehicle, or change the dosage on a medical infusion pump, that is wildly newsworthy. Newsworthy in the sense that, by connecting the dots above, we now have systems used by millions of people that represent a previously unknown and very real risk of injury and death.

Designing secure embedded systems is complex and challenging, and requires at least some level of capability and comprehension of the system-level risks, a level of capability that many companies and industries have proven they do not have. Let's examine some of these cases.

Operational Issues Driving Bad Security

System security is a feature of a system as a whole, and as such it is not easily distributed to the individual pieces of the system. When this is coupled with the modern practice of subcontracting much of the design out as subsystems, the security of the system is often defined poorly or not at all within those subsystems. This should logically shift the responsibility for security validation to the systems integrator, but often does not, since the systems integrator does not have an in-depth understanding of the individual subsystems. Essentially, as systems are further subdivided, developing and validating an effective security architecture becomes increasingly difficult.

A secondary aspect of this problem is that subsystems are defined in terms of what they are supposed to do: feature requirements. Security requirements express what the system is supposed to prevent, or what it is supposed to not do. This negative aspect of security requirements makes security much more difficult to define and even more difficult to validate. As a result, security requirements alone are not a good mechanism for implementing system-level security.

The third and last aspect of this problem is that developing a secure system is a process that incorporates threat modeling, attack models, and security domain modeling – which ultimately drive requirements and validation. Without this level of integration into the development process, effective security cannot happen.


Device Compromises in the News

There have been a number of well publicized security compromises in the last few years, which are increasingly dramatic and applicable to the public at large. Many of these compromises are presented at conferences dedicated to security, such as the Blackhat Conference and the DefCon Conference (both held in late summer).

This association has resulted in some interesting aspects of device security compromises. The most significant is that, since these compromises are presented at non-academic conferences, the demonstrations and illustrations of them are increasingly dramatic, to garner more attention. A secondary aspect is that many of these compromises are made public early in the summer to build interest before the conferences. This is not particularly relevant, but interesting.

2014 Jeep Compromise (Blackhat 2015 Talk)

In July 2015, a pair of security researchers went public with a compromise against a 2014 Jeep Cherokee that allowed them to get GPS location, disable brakes, disable the transmission, and affect steering. In addition, they were able to control the radio, wipers, seat heater, AC, and headlights.

All attacks took place through the built-in cellular radio data connection. The root compromise was based on the ability to telnet (with no password) into a D-Bus service from the cellular interface, allowing commands to be sent to any servers on the D-Bus. One of those servers happened to be the CAN bus processor, which has access to all of the computerized devices in the vehicle including the engine, transmission, braking system, and steering.

The secondary issues that allowed complete compromise of the system include a lack of system-level access controls along security domain boundaries. Or to put it bluntly – allowing the entertainment system the ability to disable critical systems like brakes or steering is a bad practice and potentially dangerous. Since this same system (UConnect) is used in a large number of Chrysler, Plymouth, Dodge, Mercedes, and Jeep vehicles, many of these security vulnerabilities are applicable to any vehicles that also use UConnect.

The overall risk this compromise represents is that an attacker can take control of an entire class of vehicle and track the vehicle, disable the vehicle, or precipitate a high-speed accident with the vehicle, from anywhere via a cellular network. Collectively these represent a threat of privacy loss, personal injury, or death.

Brinks CompuSafe Galileo Compromise (DefCon 2015 Talk)

The Brinks CompuSafe is a computerized safe with a touch screen that allows multiple people to deposit money, and it will track the deposits, enabling tracking and accountability. The unfortunate reality is that it is based on Windows XP, and it has an externally exposed USB port that is fully functional (including support for mouse, keyboard, and storage). In combination with a user-interface-bypass attack, administrative access allowed the researchers to modify the database of transactions, unlock the safe, and clean up any evidence of the compromise. Issues with this include 1) using a general-purpose OS in a security-critical role, 2) exposing unrestricted hardware system access externally via the USB port, and 3) a user interface (kiosk) that fails insecurely. Ultimately, this computerized commercial safe is much less secure than most of the mechanical drop-slot safes it was intended to replace.

BMW Lock Compromise

In the BMW compromise, the ConnectedDrive system queries BMW servers on a regular basis over HTTP. The researchers were able to implement a Man in the Middle Attack by posing as a cell phone tower, and were able to inject commands into the vehicle’s system. In many practical attacks this was used to unlock the doors to simplify theft. The “fix” issued by BMW forced this channel to HTTPS, which is better, but still does not qualify as a highly secure solution. A more complete and secure solution would implement digitally signed updates and commands, which would provide significantly greater resistance to injection attacks.
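To make that recommendation concrete, below is a minimal sketch of rejecting injected commands by verifying a cryptographic tag before acting on anything received over the channel. This is illustrative only: it uses an HMAC with a symmetric key to stay self-contained, where a production vehicle would verify an asymmetric signature (e.g. ECDSA) against a public key stored in the vehicle, and all names here are invented.

```python
import hashlib
import hmac

def sign_command(key: bytes, command: bytes) -> bytes:
    """Producer side: tag a command with HMAC-SHA256 over its bytes."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, tag: bytes) -> bool:
    """Consumer side: constant-time comparison against the expected tag."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"per-vehicle-secret"   # illustrative placeholder only
cmd = b"unlock_doors"
tag = sign_command(key, cmd)

assert verify_command(key, cmd, tag)                  # legitimate command accepted
assert not verify_command(key, b"start_engine", tag)  # injected command rejected
```

With a check like this on the receiving end, an attacker posing as the server can still see traffic, but cannot fabricate or alter commands without the signing key.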

The overall risk this compromise represents is that an attacker can inject commands into the vehicle (from close proximity) enabling it to be unlocked, started and stolen without significant effort.

Hospira Infusion Pump Compromise

An infusion pump is a medical device that administers intravenous drugs in a very controlled manner in terms of dosage and scheduling. Modern infusion pumps are networked into the clinical environment to provide remote monitoring and configuration. In addition, they often have built-in failsafe mechanisms to mitigate the risk of operator errors in dosage / scheduling. The Hospira pump that was compromised had an exposed Ethernet port with an open telnet service running, which enabled a local attacker to connect via Ethernet and gain access to the clinical network credentials. This then allowed the attacker to access the secure clinical network, and to access servers, data, and other infusion pumps on that network. From either the Wi-Fi or Ethernet port, the attacker could modify the configuration and failsafes, since the infusion pump had no protections to prevent either overwriting the firmware or modifying the dosage tables. Later investigations indicate that a number of other Hospira infusion pumps are likely to have the same vulnerabilities, since they use common parts and software. As a result, the FDA has issued multiple security warnings (and sometimes recalls) to hospitals and medical care providers on these types of devices.

The overall risk this compromise represents is that an attacker could: a) compromise the device locally to modify the drug tables and dosing and to disable failsafes, b) compromise the secure clinical network by harvesting the unprotected Wi-Fi credentials from the device, and c) compromise any of these infusion pumps on the same network (wirelessly). Collectively these represent a significant risk of injury and death.

Samsung Refrigerator Compromise

Samsung produces a number of smart refrigerators; a recent model has an LCD display linked to a Google Calendar that functions as a family calendar. Unfortunately, the operation of this device is also capable of exposing the Google credentials for the associated account. Each time the refrigerator accesses the Google account, it does not actually verify that the SSL/TLS certificate is valid, allowing a Man in the Middle Attack server to pose as the Google server and expose the login credentials.
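For contrast, here is what correct certificate validation looks like in Python's standard `ssl` module, alongside the anti-pattern that reproduces the refrigerator's class of flaw. The Samsung device is obviously not written in Python; this is just a sketch of the general mistake.

```python
import ssl

# The correct default: create_default_context() turns on hostname
# checking and certificate chain verification automatically.
good = ssl.create_default_context()
assert good.check_hostname is True
assert good.verify_mode == ssl.CERT_REQUIRED

# The anti-pattern: a context that accepts any certificate, allowing a
# Man in the Middle server to pose as the legitimate endpoint. Shown
# only to illustrate the flaw; never do this outside a test rig.
bad = ssl.create_default_context()
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE
```

Any device that builds its TLS connections the second way has no real assurance about who is on the other end of the channel.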

The overall risk associated with this compromise is that the username/password for a Google account is compromised, and that is often associated with many other accounts including banking, financial, and social accounts. This can lead to all of these accounts being compromised by the attacker.

Bottom Line

There is a common thread through all of these recent newsworthy examples. In every one of these cases, these devices not only failed to follow generally accepted best practices for embedded security, but every one of them followed one or more worst / risky practices.

Risky Practices

The following are a set of practices that are not strictly “bad practices,” but are risky. They are risky in the sense that they can be used effectively in a secure architecture, but, like explosives, must be handled very carefully and deliberately.

Exposed Hardware Access

Many embedded devices have hardware interfaces exposed, including USB ports, UART ports, and JTAG ports, both internal to the device and exposed externally. In many cases these are critical to some aspect of the operation of the device. However, where these interfaces are absolutely necessary, it is critically important that they be limited and locked down to minimize the opportunities to attack through them. In cases where they can be eliminated or protected mechanically, this should also be done.

Pre-Shared Keys

As a general rule, pre-shared keys should only be used when there are no better solutions (a solution of last resort). The issue is that sooner or later an embedded pre-shared key will be compromised and exposed publicly, compromising the security of every single device that uses that same pre-shared key. In the world of device security, key management represents the greatest risk in general, and with pre-shared keys we also have the greatest impact from compromise. This very broad impact also creates an incentive to compromise those very keys. If a pre-shared key is the only viable solution, ensure that a) the pre-shared key can be updated securely (when provisioned, or regularly), and b) the architecture allows for unique keys for each device (keys stored separately from firmware). By storing the pre-shared keys separately from the firmware (rather than embedding them in it), keys cannot be compromised through firmware images. These features can be used to minimize the overall attack exposure and reduce the value of a successful attack.
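A sketch of what point b) can look like in practice: deriving a unique key per device from a factory master key and the device serial number, so that a key pulled off one device does not compromise the rest of the fleet. The derivation scheme and names here are illustrative assumptions, not any particular vendor's design.

```python
import hashlib
import hmac

def derive_device_key(master_key: bytes, serial: str) -> bytes:
    """One-step HMAC-SHA256 derivation of a per-device key.
    The master key lives only in the provisioning system and never
    ships on any device."""
    return hmac.new(master_key, serial.encode(), hashlib.sha256).digest()

master = b"factory-master-key"   # illustrative placeholder only
key_a = derive_device_key(master, "SN-000123")
key_b = derive_device_key(master, "SN-000124")

assert key_a != key_b                                     # unique per device
assert key_a == derive_device_key(master, "SN-000123")    # reproducible at provisioning
```

Each derived key is then written into the device's dedicated key storage at manufacturing time, separate from the firmware image.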

Worst Practices

Collectively these compromises represent a number of “worst practices”: design practices that simply should not be done. Outside of development environments, these practices serve no legitimate purpose, and they represent significant security risk.

Exposed Insecure Services

As part of any security evaluation of an embedded system, unused services need to be disabled and (if possible) removed. In addition, these services need to be examined interface by interface and blocked (firewalled) whenever possible. On the Jeep Cherokee, exposing an unauthenticated telnet D-Bus service on the cellular interface was the root access point into the vehicle, and this exposure served absolutely no purpose or value in the operation of the system.
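A quick way to audit for this class of exposure is simply to probe each external interface for listening services. The sketch below checks whether a TCP port answers, demonstrated against a throwaway local listener; a real audit would sweep the well-known service ports (telnet on 23, and so on) on every interface the device exposes.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstration against a throwaway local listener.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # OS assigns a free ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

assert port_is_open("127.0.0.1", port)     # service exposed
listener.close()
assert not port_is_open("127.0.0.1", port) # service gone after shutdown
```

Any port that answers on an externally reachable interface should have a documented reason to exist, and an authentication story to go with it.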

Non-Secure Channel Communications

If the wireless channel is not secured, everything on that channel is essentially public. This means that all wireless communications traffic can be monitored, and with minimal effort the system can be attacked by posing as a trusted system through a Man in the Middle Attack. Examples of these services include any telnet or HTTP services.

Non-Authenticated End Points

A slightly more secure system may use an SSL/TLS connection to secure the channel. However, if the embedded system fails to actually validate the server certificate, it is possible to pose as that system in a Man in the Middle Attack. As described above, a recent-model Samsung refrigerator has this flaw, exposing the end user's Google credentials.

No Internal Security Boundaries / Access Controls

Many of the systems highlighted place all of their trust in a single security boundary between the system and the rest of the world, while the internal elements of the system operate in a wide-open trusted environment. The lack of strong internal boundaries between any of the subsystems is why the Jeep Cherokee was such an appealing target. Within systems are subsystems, each of which needs to be defined by its role, with access controls enforced on the interface of each subsystem. The infotainment system should never be able to impact critical vehicle operation. Ever. Subsystems need to be protected locally to a degree that corresponds to the role and risk associated with each subsystem.
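A minimal illustration of such an internal boundary: a gateway on the internal bus that allows each source domain to send only the message types its role requires, denying everything else by default. The domain and message names are invented for the example.

```python
# Deny-by-default allowlist: which message types each source domain may
# send across the internal bus. (Hypothetical domains and messages.)
ALLOWED = {
    "infotainment": {"radio.volume", "hvac.set_temp"},
    "driver_controls": {"brakes.apply", "steering.assist", "radio.volume"},
}

def gateway_permits(source: str, message: str) -> bool:
    """Unknown sources and unlisted message types are dropped."""
    return message in ALLOWED.get(source, set())

assert gateway_permits("driver_controls", "brakes.apply")
assert not gateway_permits("infotainment", "brakes.apply")    # boundary enforced
assert not gateway_permits("cellular_modem", "radio.volume")  # unknown source dropped
```

With a check like this between subsystems, compromising the entertainment head unit no longer hands an attacker the brakes.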

Non Digitally Signed Firmware / Embedded Credentials

Many embedded systems allow for end-user firmware updates, and these firmware updates are often distributed on the Internet. There are functional bugs, security bugs, and emergent security vulnerabilities that need to be mitigated, or the device can become a liability. This approach to updating has the advantage that users can proactively ensure that their systems are up to date, with minimal overhead effort on the part of the product vendor.

It also allows attackers to dissect the firmware, identify any interesting vulnerabilities, and often extract built-in credentials. In addition, if the update process does not require / enforce code signing, it allows an attacker to modify the firmware to more easily compromise the system. Requiring that firmware updates be digitally signed prevents an attacker from installing modified or custom firmware that would give the attacker privileged access to further compromise the device or the network it is part of. Firmware updates need to require digital signatures, and must not include embedded credentials.
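A sketch of the update-time check this implies: refuse to install any image that fails verification. Since Python's standard library has no asymmetric signature primitives, the example checks the image against a digest from a trusted manifest; a production implementation would instead verify an RSA or ECDSA signature on that manifest (or on the image itself) against a public key baked into the device.

```python
import hashlib
import hmac

def install_firmware(image: bytes, trusted_digest: str) -> bool:
    """Install only if the image matches the digest from a trusted
    manifest; reject anything tampered with or unofficial."""
    actual = hashlib.sha256(image).hexdigest()
    if not hmac.compare_digest(actual, trusted_digest):
        return False   # verification failed: do not flash
    # ... flash the image to the device here ...
    return True

official = b"official firmware image v2.1"
digest = hashlib.sha256(official).hexdigest()

assert install_firmware(official, digest)                  # genuine image accepted
assert not install_firmware(official + b"backdoor", digest)  # modified image rejected
```

The essential property is the same either way: the device refuses to run code it cannot verify came from the vendor.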


As more and more devices evolve into smart interconnected devices, there will be more and more compromises that carry increasing levels of risk to life and property. As the examples above show, many of these will be due to ignorance and arrogance on the part of some companies. Knowledge and awareness of the risks associated with embedded networked devices is critical to minimizing the risks in your systems.




CryptoCoding for Fun – Part 2 [Terminology and Concepts]

Introduction (Yak Shaving)

As you can see from the title, we are still on the “CryptoCoding for Fun” path, and although I would like to jump in and start showing how you make that happen, we need to step back a bit and take care of some details. In this case, the details are defining and describing the terminology and concepts associated with crypto and cryptocoding. This is something we refer to as “Yak Shaving”: something you end up doing before starting what you set out to do.

Many times when you start a new project / endeavor where some learning is involved, it is necessary to back the task up to a point where you can actually do something. For example, if the task is painting something in the garage, it is often necessary to clean and prep the garage, go buy some paint, and probably prep the item for painting, all prior to the actual painting. This is called Yak Shaving.

In this update, we are going to go over some of the system-level concepts in modern crypto without getting bogged down in details or acronyms. The intent is that, in order to understand cryptosystems, it is necessary to understand the tools (and terms) available to the cryptosystem designer.

Terms – Authentication / Authorization / Credentials / Confidentiality / Integrity / Availability

This section is all about crypto terminology, and nothing more.

  • Authentication – The process by which a system verifies that a given set of credentials is valid. If these credentials are a username / password pair, the username identifies and the password authenticates. When a computer verifies that the username is valid, and that the password matches the password associated with that username – you have been authenticated.
  • Authorization – Authorization is about access control and determining what a given authenticated user can and cannot access within some context. Authorization is what separates ‘guest’ from ‘administrator’ (and everything in between) on most computer systems.
  • Credentials – Something that identifies you to the computer system. This can be a username / password pair, or a PKI smartcard, or a simple RFID token. Each one of these represents different levels of confidence (that the credentials represent you), and this means they provide different levels of security. Bottom line – the security of most forms of credentials is based on how difficult they are to fake, break or steal.
  • Confidentiality – The ability to keep something secret. It is really that simple. If we assume Alice and Bob are exchanging private information, confidentiality is a characteristic of the communications channel that prevents Eve from listening in.
  • Integrity – The ability to ensure that the message received is the same as the message sent. If Alice and Bob are exchanging information, integrity is a characteristic of the communications channel that prevents Eve from modifying the information (without detection) over the channel.
  • Availability – The ability to ensure that the communications channel is available. If Alice and Bob are exchanging information, availability is the characteristic of the communications channel that prevents Eve from blocking information over the channel.
  • Channel – Some arbitrary mechanism between two points for exchanging information. Channels can be nested on other channels. For example, the network IP protocol layer is a channel, and TCP is another protocol channel that runs on top of IP – forming TCP/IP. In this example, neither IP nor TCP are secure. Another example is the TLS secure protocol which runs on top of the insecure TCP/IP protocol. TLS is the basis of most secure web browser sessions.
  • Ciphertext – Encrypted text or data, to be contrasted with cleartext (which is un-encrypted text).
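The integrity property above can be illustrated with a short sketch: Alice attaches a keyed fingerprint (an HMAC) to her message, and Bob recomputes it on receipt to detect tampering. This is a minimal example using the Python standard library; the key and messages are made-up illustrations.

```python
import hashlib
import hmac

def mac(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message under a shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared secret between Alice and Bob"   # illustrative pre-shared key
message = b"transfer 100 to Bob"
tag = mac(key, message)

# Bob recomputes the tag on receipt; a modified message fails the check.
assert hmac.compare_digest(tag, mac(key, message))                      # intact
assert not hmac.compare_digest(tag, mac(key, b"transfer 900 to Bob"))   # tampered
```

Note that this provides integrity and authenticity between holders of the shared key, but not confidentiality – the message itself is still readable by Eve.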

Interesting Sidenote – In most cybersecurity scenarios, Bob and Alice are the protagonists and Eve is the antagonist. This is their story:


Encryption – Symmetric

Symmetric encryption is where plaintext is encrypted to ciphertext, and then decrypted back to plaintext using the same key used to encrypt. When used to secure some data or channel, it requires that both end points or all parties involved share the same key, which is where the term ‘pre-shared key (PSK)’ comes from.
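The “same key encrypts and decrypts” property can be shown with a toy XOR cipher. This is a teaching sketch only (a one-time-pad-style XOR with a random key as long as the message), not a real cipher like AES; the message is a made-up illustration.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR each data byte with the key stream.
    Encryption and decryption are the exact same operation under the same key."""
    return bytes(k ^ d for k, d in zip(key, data))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))   # the pre-shared key both parties hold

ciphertext = xor_cipher(key, plaintext)
recovered = xor_cipher(key, ciphertext)     # same key, same operation
assert recovered == plaintext
```

Real symmetric ciphers such as AES are far more sophisticated, but the defining property is the same: one shared key is used both to encrypt and to decrypt.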

Historically, symmetric encryption was the only form of encryption available until about 1976 (notwithstanding classified work), when the Diffie-Hellman key exchange algorithm was published. Every cipher prior to that time revolved around key generation, key management, and algorithms. Prior to WWII, all encryption was done by hand or with machines, the most sophisticated and infamous of these machines being the German Enigma machine.

Encryption – Asymmetrical

Asymmetrical encryption is a form of encryption where there are two paired keys; either one can be used to encrypt, and the opposite key of the pair (and only that key) is used to decrypt the data. The first form of asymmetrical encryption to become generally well known was the basis of the Diffie-Hellman key exchange. There have been a number of different variations of asymmetrical encryption based on various arcane and complex mathematical methods, but all share the same basic characteristic of a key pair for encryption / decryption.

In common nomenclature, one of these keys is designated the ‘public key’, which is not kept secret (and is often published publicly), and the other is designated the ‘private key’, which is kept as secret as possible. On some operating systems, private keys are often secured in applications called ‘keyrings’ which require some form of user credentials to access. Bottom line – private keys need to be kept private.

The value of asymmetric encryption may not be immediately obvious, but let’s take a look at an example where we compare / contrast with symmetric encryption.

If Bob and Alice need to exchange some small amount of data securely over a non-secure channel and they are relying on symmetric encryption, both Bob and Alice need to have a pre-shared encryption key, and this pre-shared key needs to be kept secret from Eve (and everybody else who may be a threat). The problem with this is how Bob or Alice communicates this key to the other without a secure channel in place. Since the primary communications channel is insecure, it cannot be used to share the encryption key, which drives the need for some secondary or out of band (OOB) channel that is secure. Think about that a minute – exchanging information securely over a non-secure channel requires some other secure channel to exchange keys first. This highlights the fundamental problem with symmetric encryption: key management.

Now if we take a look at asymmetric encryption, both Bob and Alice have generated their own personal public-private key pairs. This is then followed by Bob and Alice exchanging their public keys. Since these are public keys and it is not necessary to keep them secret, this is much easier than exchanging a symmetric encryption key. Once both Bob and Alice have exchanged public keys, we can start.

  1. Bob has a message he wants to send to Alice securely over a non-secure channel.
  2. Bob takes the message, produces a hash of the message, encrypts that hash with his private key and attaches it to the message and produces message A. The encrypted hash is known as a digital signature.
  3. Bob takes message A and then encrypts it using Alice’s public key, producing ciphertext B.
  4. Bob then sends this encrypted message B to Alice via any non-secure channel.
  5. Alice gets the message and decrypts it using her private key, producing message A. Alice is the only one who can do this since she is the only one that has her private key.
  6. Alice then takes message A, decrypts the attached digital signature using Bob’s public key to recover the hash, and compares it to a hash she computes over the message – verifying that the message is Bob’s original.
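The steps above can be traced with textbook RSA on toy numbers. This is a hedged sketch: the primes are tiny and there is no padding, so it is useful only to make the key math visible – never for real use.

```python
import hashlib

# Textbook RSA on tiny illustrative primes -- purely to trace the steps above.
# Real systems use 2048-bit keys, padding (OAEP / PSS), and a vetted library.
def make_keypair(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)      # (public, private); needs Python 3.8+

bob_pub, bob_priv = make_keypair(32749, 32719)
alice_pub, alice_priv = make_keypair(65521, 65519)

def h(msg: bytes, n: int) -> int:
    """Hash the message and reduce it into the keypair's modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg_bytes = b"OK!"
msg_int = int.from_bytes(msg_bytes, "big")

# Steps 2-3: Bob signs the hash with his private key, then encrypts both the
# message and the signature with Alice's public key.
signature = pow(h(msg_bytes, bob_pub[1]), bob_priv[0], bob_priv[1])
ct_msg = pow(msg_int, alice_pub[0], alice_pub[1])
ct_sig = pow(signature, alice_pub[0], alice_pub[1])

# Steps 5-6: Alice decrypts with her private key, then verifies the signature
# with Bob's public key and compares the recovered hash to her own.
pt_msg = pow(ct_msg, alice_priv[0], alice_priv[1])
recovered_hash = pow(pow(ct_sig, alice_priv[0], alice_priv[1]), bob_pub[0], bob_pub[1])
assert pt_msg == msg_int
assert recovered_hash == h(msg_bytes, bob_pub[1])
```

Again – textbook RSA without padding is insecure; in production these operations come from a vetted crypto library.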

From this exchange, we can make the following significant statements:

  1. Bob knows that Alice and only Alice can decrypt the outer encryption since she is the one who has her private key.
  2. Alice knows that Bob and only Bob could have sent the message since the digital signature was verified and Bob is the only one that has his private key.
  3. Alice knows that the message was not modified since the hash code produced from the digital signature matched the contents of the message.
  4. This was achieved without sending secret keys through a second channel or over a non-secure channel.

These are some fairly significant features of public-private key encryption. But of course our example can be compromised by a Man in the Middle attack (MITM). For further details on these operations read the Wikipedia reference below on RSA.

Man in the Middle Attack (MITM)

As shown in the example above, public-private key encryption provides some significant advantages. However, it is also susceptible to some new attacks, including the Man in the Middle attack. If we look to the example above, both Bob and Alice generated their own public-private key pairs and then somehow exchanged the public keys. Since they are public keys there is no need for secrecy – but there is a need for integrity. Say for example that Bob and Alice emailed their public keys to each other. Meanwhile Eve was somehow able to intercept these emails, generate her own public-private key pairs, and substitute her public keys in those emails before sending them on to Bob and Alice. This means that when Bob thinks he is encrypting the message with Alice’s public key, it is really Eve’s public key. After Eve intercepts the message, she opens it with her private key, re-encrypts it with Alice’s public key, and sends it on to Alice. The net result is that Eve can intercept and read every message without Bob or Alice being aware of it.

Digital Signing

The use of digital signatures is a technically interesting solution to many of the attacks on public-private key encryption. But first we need to talk about hashcodes. In the world of digital data and encryption, a ‘hashcode’ is a mathematical fingerprint of some data set. A typical hash algorithm is SHA-2/256 (most often just SHA256), which ‘hashes’ a dataset of any size and produces a 256 bit hashcode. Due to the mathematical processes used, it is highly unlikely that a data set could be modified and still produce the same hashcode, so hashcodes are often used to verify the integrity of datasets. When combined with public-private keys this leads us to digital signing.
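A quick way to get a feel for hashcodes is the standard library’s hashlib; the sample strings below are illustrative. Note how changing a single character produces a completely different fingerprint:

```python
import hashlib

data = b"The quick brown fox jumps over the lazy dog"
digest = hashlib.sha256(data).hexdigest()   # 256-bit fingerprint, 64 hex characters

# Changing one character ("dog" -> "cog") yields a completely different hashcode.
tampered = hashlib.sha256(b"The quick brown fox jumps over the lazy cog").hexdigest()

assert len(digest) == 64
assert digest != tampered
```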

In this example, we are going to add a fourth party to the example; Larry’s Certificate Authority (CA). At Larry’s CA, Larry has a special public-private keypair used just for signing things. It works just like any other public-private keypair, but is only used to sign things and is treated with a much higher degree of security than most other certs since it is used to assert the validity of many other certs.

In this example, both Bob and Alice take their public-private key pairs to their respective local offices of Larry’s CA along with identifying credentials – like drivers licenses, passports or birth certificates. Larry’s CA examines the credentials and determines that Bob is Bob and Alice is Alice, and then generates a Digital Public Key Certificate with their respective names, possibly addresses, email addresses, and their public keys. Larry then generates a hash of this Public Key Certificate, encrypts it with the signing private key, and attaches it to the public key certificate.

Now both Bob and Alice have upgraded from simply using a self-generated public-private keypair to using a public-private key pair with a public key certificate signed by a trusted certificate authority. So when Bob and Alice exchange these public key certificates, they can each take these certs, decrypt the signature using Larry’s CA public signing key to recover the hashcode, and compare it to the hashcode they generate from the certificate.

If the hashcodes match, we can conclude a few things about these public key certificates.

  • Since Larry’s CA is known to check physical credentials, there is a certain level of trust that the personal identifying information on the public key certificate is really associated with the holder of the key pair.
  • Since the digital signature is based on the hashcode of the entire public key certificate, and the signature is valid – it is highly probable that the contents of the public key certificate have not been modified since it was signed.
  • Bottom Line – If Bob has a public key certificate for Alice signed by Larry’s CA, he can trust that the public key really belongs to Alice (and probably has not been replaced by Eve’s key). Since Alice can know the same things about Bob’s public key certificate signed by Larry’s CA, Bob and Alice can use each other’s public keys with a much higher degree of confidence than in the example based on self-generated keypairs.

It is important to note that any digital data can be signed by a public-private key pair. This includes public key credentials (as described above), executable code, firmware updates, and documents.

Digital Certificates

Digital Certificates are essentially what is described above in ‘Digital Signing’, but mapped to a specific structure. By mapping the data into a standard structure, the generation, signing, verification and general use of the certificates can work across product / technology boundaries. In other words, it makes signed public key certificates inter-operable. The most common standard for digital certificates is X.509.

Public Key Infrastructure (PKI)

Public Key Infrastructure is an operational and inter-operable set of standards and services on a network that enable anybody to procure a signed digital certificate and use this as an authentication credential. On the Internet this allows every website to inter-operate securely with SSL / TLS, with certificates from any number of different Certificate authorities, with any number of web browsers, all automatically.

Within the context of a company, consortium, or enterprise, the same type of PKI services can be operated to provide an additional level of operational security.

Diffie-Hellman Authentication

Diffie-Hellman key exchange, published in 1976, was the first published form of asymmetric cryptography. In a Diffie-Hellman exchange, the two parties each generate their own public-private key pair and exchange the public keys through some open channel. The fundamental issue with asymmetric encryption (for bulk data) is that it does not scale well for large data sets, since it requires significant computing effort. So in Diffie-Hellman, a public-private session is established and the payload / data for the exchange is a shared key for a symmetric encryption session. Once this key has been generated and securely shared with both parties, a much higher performance symmetric encryption secure session is established and used for all following communications in that session.
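The mechanics of the exchange can be sketched in a few lines of Python. The group parameters below (a 32-bit prime and generator 2) are toy illustrations; real deployments use standardized 2048-bit groups or elliptic curves.

```python
import secrets

# Toy Diffie-Hellman key exchange. The prime p and generator g are PUBLIC
# group parameters; only the exponents a and b are secret.
p, g = 4294967291, 2               # largest 32-bit prime; generator choice is illustrative

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent (never transmitted)
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent (never transmitted)

A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

# Each side combines the other's public value with its own private exponent.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)
assert alice_shared == bob_shared  # this shared value seeds the symmetric session key
```

Eve, who sees only p, g, A, and B, would need to solve the discrete logarithm problem to recover the shared value – which is infeasible at real key sizes.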

However – it is important to note that, just like our example above with Bob and Alice, using locally generated unsigned keypairs is highly susceptible to MITM attacks, and they should never be used where that is a risk.


SSL / TLS

Most people are familiar with SSL/TLS as the public-key solution for establishing secure sessions between webservers and web clients (browsers). SSL/TLS operates using the same logical steps as Diffie-Hellman, but with two differences. Rather than using locally generated unsigned keys, (as a minimum) the server has a signed key that is validated as part of the exchange. Optionally, the client may also have a signed key that can be used for authentication.

SSL/TLS is very widely used and considered to be one of the most important foundational elements of privacy and security on the Internet. However it does have its issues. One of the most significant is that the key used to secure the symmetric encryption channel is exchanged using the certificate keypair. A server will often serve a large number of clients, and the server certificate is the same for each of these clients and for each session – so the keypair is the same in all cases. As long as the key size is large enough to be practically unbreakable, these sessions remain fairly secure.

However, if this session traffic is recorded and archived by some highly capable group, and at some later date this group is able to acquire the private key for the server certificate, every single one of those sessions can be decrypted. Essentially the private key can be used to decrypt the initial key exchange for each session, and the recovered key can then decrypt the remainder of that session.

There is however a solution – Forward Secrecy.

Forward Secrecy

Forward secrecy is one of the most interesting developments (in my opinion) in securing communications using public-private key pairs. As discussed in SSL/TLS above, if the private key for a server certificate is ever compromised, every session ever initiated with that certificate can be compromised.

In forward secrecy, a normal SSL/TLS session is initiated, resulting in a symmetric encryption secure channel. At each end of this channel the server and client generate a fresh public-private keypair and exchange the public keys over the secure channel – essentially a Diffie-Hellman key exchange over an SSL/TLS session. The second symmetric key resulting from this Diffie-Hellman exchange is then used to set up a new symmetric session channel, and the initial channel is discarded.

Overall – the first key exchange authenticates the server (and possibly the client) since the session is based on signed certificates, but does not provide long term session security. The second key exchange based on Diffie-Hellman is vulnerable to MITM, but since it runs over an already authenticated secure channel, MITM is not a risk. Most significantly, since the private keys in the second key exchange are never sent over the channel or written persistently, these keys cannot be recovered from an archived session, and as a result the second symmetric session key is also unrecoverable.

This means that even if the private key for the server certificate is compromised, any archived sessions are still secure – Forward Secure.


Since 1976, cryptography and all of its associated piece parts have exploded in terms of development, applications and vulnerability research, and a massive amount of it has happened in open source development. However, for most engineers and programmers it is still very inaccessible. Step one in making it accessible is to learn how it fundamentally works, and this was step one.

Lastly – There are some very significant details on these topics that have been left out in order to generalize the concepts of operation and use. I strongly recommend at least skimming the references below to get a flavor of these details (that have been left out of this article). In my experience it is very easy to get lost in the details and become frustrated, so this approach was intentional – and hopefully effective.


CryptoCoding for Fun


Inevitably, when somebody with more than a passing interest in programming develops an interest in crypto, there is an overwhelming urge to write cryptocode. Sometimes it is just the desire to implement something documented in a textbook or website. Sometimes it is the desire to implement a personal crypto design. And sometimes it is because there is a design need for crypto with no apparent ready solutions. All of these are valid desires / needs; however, there are a few (well recognized) principles of cryptocoding that need to be articulated.

Design your own Crypto

The first of these is sometimes referred to as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.” This is not to state that any one person cannot design a good crypto algorithm, but that really good crypto algorithms are written by groups of people, and validated by much larger groups of well educated and intelligent people. In simple terms, good crypto is conceived by people who are well versed in the crypto state of the art, and iteratively built on that. The concept is then successively attacked and refined by an increasingly larger audience of crypto-experts. At some point the product may be deemed sufficient not because it has been proven ‘completely secure’, but because a sufficiently capable group of people have determined that it is ‘secure enough’. Bottom line – There is nothing fundamentally wrong with designing your own crypto algorithm, but without years of education and experience as a cryptanalyst, it is unlikely to be anywhere near as good as current crypto algorithms. In addition, good crypto is the product of teams and groups who try to attack and compromise it in order to improve it.

Writing your own Crypto

The second of these is sometimes referred to as “never roll your own crypto”. Even when implementing well documented, well defined, best practice algorithms, writing crypto code is fraught with risks that do not exist for general coding. These risks are based on the types of attacks that have been successful in the past. For example, timing attacks have been used to map out paths through crypto code, buffer attacks have been used to extract keys from crypto code, and weak entropy / keying has produced predictable keys. Of course this is primarily applicable in a production context, or any time the cryptocode is used to protect something of value. If this is being done for demonstration or educational purposes, these risks can often be recognized and ignored (unless dealing with those risks is part of the lesson goals). Bottom line – writing crypto code is hard, and writing high quality, secure / interoperable crypto code is much harder.
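As one concrete example of these risks, consider the timing side channel in a naive comparison: comparing a secret tag with `==` returns at the first mismatching byte, so the comparison time leaks how much of an attacker’s guess is correct. The tag values below are made-up illustrations; the stdlib ships a constant-time comparison for exactly this reason.

```python
import hmac

# A classic crypto-coding pitfall: `==` on secrets is a variable-time compare,
# leaking (via timing) how many leading bytes of a guess are correct.
expected_tag = b"0123456789abcdef"       # made-up secret tag for illustration
attacker_guess = b"0123000000000000"     # partially correct guess

unsafe = expected_tag == attacker_guess                     # variable-time compare
safe = hmac.compare_digest(expected_tag, attacker_guess)    # constant-time compare
assert not unsafe and not safe   # same answer, very different timing profiles
```

This is exactly the kind of subtlety that makes production cryptocode hard: the naive version is functionally correct and still exploitable.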

Kerckhoff’s Principle

Kerckhoff’s Principle states: “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” Essentially this says that in a good cryptosystem, all parts of the system can be public knowledge (except the key) without impairing the effectiveness of the cryptosystem. Another perspective on this is that any cryptosystem that relies on maintaining the secrecy of the algorithm is less secure than one that only relies on a secret key.

Bruce Schneier stated “Kerckhoffs’s principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness—and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.”

When Kerckhoff’s principle is paired with Schneier’s Law, which states (paraphrased) that more eyes make better crypto, the result is that better crypto comes from public and open development. Fortunately that is the approach used for most modern crypto, which is a very fortunate circumstance for the aspiring cryptocoder / cryptographer. It allows us to learn cryptosystems from the best cryptosystems available.

If you are interested in the best crypto documentation that the United States Government publishes, review the FIPS and NIST Special Publications listed in the references below. For example, if you are interested in the best way to generate / test public-private keys, look at FIPS 186-4. The quality of the pseudorandom numbers used in crypto is critical to security, and if you are interested in how this is done, look at NIST Special Publications 800-90A (revision 1), 800-90B, and 800-90C.
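In Python, the practical upshot is simple: key material should come from the OS CSPRNG (the secrets module), never from the general-purpose random module. A short sketch:

```python
import random
import secrets

# Key material must come from a cryptographically secure RNG. Python's `random`
# module (Mersenne Twister) is predictable once enough outputs are observed;
# `secrets` draws from the OS CSPRNG and is the right choice for keys and nonces.
key = secrets.token_bytes(32)    # 256 bits of key material
nonce = secrets.token_hex(16)    # 128-bit nonce rendered as 32 hex characters
assert len(key) == 32 and len(nonce) == 32

weak = random.getrandbits(256)   # fine for simulations -- never for keys
```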

Bottom Line

For years I have been espousing the principle of “never write your own crypto”, because it was clear that crypto is hard, and high quality cryptocode is only written by teams of well qualified cryptocoders. With the Snowden revelations over the last few years, it has also become obvious to me that “never write your own crypto” is exactly what a state player like the NSA would encourage in order to more easily access communications through shared vulnerabilities in cryptosuites like OpenSSL. As a result I have modified my recommendation to state: “In order to be a better cryptocoder / cryptographer, you should write your own cryptocode and develop your own cryptosystems (for learning purposes), but should only use well qualified cryptocode in production or critical systems.”

It is critically important that every systems engineer and cryptocoder develop an in depth understanding of crypto algorithms and cryptosystems, and the most effective method to accomplish this is by writing, testing and evaluating cryptocode / cryptosystems.

So my stronger recommendation is that everybody who can program and who has an interest in crypto should write cryptocode, learn cryptocode, and design cryptosystems – and from that we will have a much stronger foundation of understanding of Security Systems Engineering.


System Security Testing and Python


Testing is a significant part of systems security, and it presents a real challenge for most systems security engineers. Whether it is pen testing, forensic analysis, fuzz testing, or network testing, there can be infinite variations of the System Under Test (SUT) when combined with the necessary testing variations.

The challenge is to develop an approach to this testing that provides the necessary flexibility without imposing an undue burden on the systems security engineer. Traditionally the options included packaged security tools, which provide an easy to use interface for pre-configured tests, and writing tests in some high level language, which provides a high degree of flexibility with a relatively high level of effort / learning curve. The downsides of the packaged tools are lack of flexibility and cost.

An approach which has become increasingly popular is to take either one (or both) of these approaches and improve it through the use of Python.

For Example

A few examples of books that take this approach include:

  • Grayhat Python – ISBN 978-1593271923
  • Blackhat Python – ISBN 978-1593275907
  • Violent Python – ISBN 978-1597499576
  • Python Penetration Testing Essentials – ISBN 978-1784398583
  • Python Web Penetration Testing Cookbook – ISBN 978-1784392932
  • Hacking Secret Ciphers with Python – ISBN 978-1482614374
  • Python Forensics – ISBN 978-0124186767

In general, these take the approach of custom code based on generic application templates, or scripted interfaces to security applications.


After skimming a few of these books and some of the code samples, it has become obvious that Python has an interesting set of characteristics that make it a better language for systems work (including systems security software) than any other language I am aware of.

Over the last few decades, I have learned and programmed in a number of languages including Basic, Fortran 77, Forth, C, Assembly, Pascal (and Delphi), and Java. Through all of these languages I have come to accept that each one had a set of strengths and issues. For example, Basic was basic. It provided a very rudimentary set of language features and limited libraries, which meant there was often a very limited number of ways to do anything (and sometimes none). It was interpreted, so it was slow (way back when). It was not scalable, which encouraged small programs, and it was fairly easy to read. The net result is that Basic was used as a teaching language, suitable for small demonstration programs – and it fit that role reasonably well.

On the other hand, Java (and other strongly typed languages) are by nature painful to write in due to that strongly typed nature, but also make syntax errors less likely (after tracking down all of the missing semi-colons, matching braces, and type mismatches). Unfortunately, syntactical errors are usually the much simpler class of problems in a program.

Another attribute of Java (and other OO languages) is the object oriented capabilities – which really do provide advantages for upwards scalability and parallel development, but make imperative (procedural) development very difficult. Yes – everything can be an object, but that does not mean that it is the most effective way to do it.

Given that background, I spent a week (about 20 hours of it) reading books and writing code in Python. In that time I went from “hello” to a program with multiple classes that collected metrics for each file in a file system, placed that data in a dictionary of objects, and wrote it out to / retrieved it from a file – in about 100 lines of code. My overall assessment:

  • The class / OO implementation is powerful, and sufficiently ‘dynamically typed’ that it is easily usable.
  • The dictionary functionality is very easy to use, performs well, is massively flexible, and becomes the go-to place to put data.
  • The included standard libraries are large and comprehensive, and these are dwarfed by the massive, high quality community developed libraries.
  • Overall – In one week, Python has become my default language of choice for Systems Security Engineering.
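As a rough sketch of the kind of exercise described above – walking a file system, collecting per-file metrics into a dictionary, and round-tripping the result through a file format – consider the following. The function name and metric choices are illustrative, not the original program:

```python
import json
import os
import tempfile

def collect_metrics(root: str) -> dict:
    """Walk a file tree and record per-file metrics in a dictionary."""
    metrics = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            metrics[path] = {"size": st.st_size, "mtime": st.st_mtime}
    return metrics

# Exercise the sketch against a throwaway directory with one known file.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "sample.txt"), "w") as f:
        f.write("hello")
    data = collect_metrics(root)
    blob = json.dumps(data)            # write out ...
    assert json.loads(blob) == data    # ... and retrieve
```

The dictionary-of-dictionaries shape maps naturally onto JSON, which is part of why dicts become the go-to place to put data.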


Also of note, I looked at numerous books on Python and have discovered that:

  • There are a massive number of books purportedly for learning Python.
  • They are also nearly universally low value, with a few exceptions.

My criterion for this low value assessment is the number of “me-too” chapters. For example, every book I looked at for learning Python has at least one chapter on:

  • finding python and installing it
  • interactive mode of the Python interpreter
  • basic string functions
  • advanced string functions
  • type conversions
  • control flow
  • exceptions
  • modules, functions
  • lists, tuples, sets and dicts

In addition, each of these chapters provides a basic level of coverage, and is virtually indistinguishable from the corresponding chapter in dozens of other books. Secondary to that, there was usually minimal or basic coverage of dicts, OO capabilities, and module capabilities. I wasted a lot of time looking for something that provided a more terse coverage of the basic concepts and a more complete coverage of the more advanced features of Python. My recommendation to authors of computer programming books: if your unique content is much less than half of your total content, don’t publish.

From this effort I can recommend the following books:

  • The Quick Python Book (ISBN 978-1935182207): If you skip the very basic parts, there is a decent level of useful Python content for the experienced programmer.
  • Introducing Python (ISBN 978-1449359362): Very similar to the Quick Python book, with some unique content.
  • Python Pocket Reference (ISBN 978-1449357016): Simply a must have for any language. If O’Reilly has one, you should have it.
  • Learning Python (ISBN 978-1449355739): A 1500 page book that surprised me. It does have the basic “me-too” chapters, but has a number of massive sections not found in any other Python book. Specifically, Functions and Generators (200 pages), Modules in depth (120 pages), Classes and OOP in depth (300 pages), Exceptions in depth (80 pages), and 250 pages of other Advanced topics. Overall it provides the content of at least three other books on Python, in a coherent package.

Note – Although I could have provided links on Amazon for each of these books (every one of them is available at Amazon), my purpose is to provide some information on these books as resources (not promote Amazon). I buy many books directly from O’Reilly (they often have half off sales), Amazon, and Packt.

IoT and Stuff – Cautionary Tales


IoT (Internet of Things) is an interesting phenomenon where “things” become connected and provide some control and / or sensor capability through this connection. Examples include connected thermostats, weather stations, garage door openers, smart door locks, etc.

It is an area of explosive growth, and like any other system it will have its security failures.

Tale 1 – Hacking Internet Connected Light Bulbs

LIFX lightbulbs are smart LED lights with two wireless interfaces; a Wi-Fi interface to connect to the local network and provide a control path for computers / smartphones, and an IEEE 802.15.4 mesh network to communicate between multiple LIFX smartlights. This dual wireless interface meant that any number of LIFX smartlights could be controlled and managed through a single Wi-Fi connection. Since any of the LIFX smartlights could operate as the “master” device that connected to both networks, it was necessary for each smartlight to have the Wi-Fi network access credentials.


The vulnerability involves a couple of aspects in the design. These include:

  1. When an additional LIFX smartlight was added to the network, it exchanged data over the IEEE 802.15.4 network in the clear (unencrypted), except for the Wi-Fi credentials and some configuration details, which were sent as an encrypted blob.
  2. The encryption key for this blob was a pre-shared key hard coded into the firmware for every LIFX smartlight (of that firmware revision). This key was accessible via JTAG (which was pinned out on the PCB) or through the firmware image (which was not available at the time of the compromise).
  3. The system allowed a client on the IEEE 802.15.4 network to request (and receive) this encrypted configuration / credentials blob at any time in the background.
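The core weakness in the list above is that one key, baked into every unit, protects every household's credentials. A minimal sketch of why that matters (using a toy XOR-stream construction and hypothetical values for illustration, not LIFX's actual scheme):

```python
import hashlib

# Toy stream "cipher" for illustration only -- NOT real cryptography.
# The point: every bulb ships with the same FIRMWARE_KEY, so a key
# extracted from ANY unit decrypts EVERY household's credential blob.
FIRMWARE_KEY = b"hard-coded-in-every-unit"  # hypothetical value

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a keystream by hashing a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt = encrypt  # XOR stream ciphers are symmetric

# A victim bulb encrypts its Wi-Fi credentials with the shared key...
blob = encrypt(FIRMWARE_KEY, b"nonce-1", b"ssid=Home;psk=hunter2")

# ...and an attacker who pulled the key from any other unit's firmware
# recovers the plaintext without ever touching the victim's hardware.
print(decrypt(FIRMWARE_KEY, b"nonce-1", blob))  # b'ssid=Home;psk=hunter2'
```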


The compromise allows an attacker physically close to the system to:

  1. Acquire the LIFX pre-shared encryption key from the firmware or JTAG interface.
  2. On the IEEE 802.15.4 network, request the encrypted configuration / credentials blob (masquerading as a LIFX smartlight).
  3. Crack open the blob using the encryption key from step 1.
  4. Connect to the Wi-Fi network using the credentials from the blob.
  5. Access the network and / or control the LIFX light bulbs.


From this there are at least a few poor design choices that enabled this compromise. The first of these is to use a static pre-shared key to encrypt sensitive wireless data. The ability to establish a secure channel based on PKI has been standard practice for decades, allowing the use of dynamically generated keys at a session level. The use of a static pre-shared key is just lazy design.
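Session-level keys do not require much machinery. A minimal Diffie-Hellman sketch using only the standard library (the modulus is the widely published 768-bit Oakley Group 1 prime, toy-sized by modern standards; a real design would use X25519 or a modern group):

```python
import secrets
import hashlib

# 768-bit Oakley Group 1 prime (RFC 2409), used here purely for illustration.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16)
G = 2

def dh_keypair():
    """Generate a fresh (ephemeral) private/public keypair per session."""
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

# Bulb and controller each generate an ephemeral keypair for this pairing.
bulb_priv, bulb_pub = dh_keypair()
ctrl_priv, ctrl_pub = dh_keypair()

# After exchanging public values, both sides derive the same session key.
bulb_key = hashlib.sha256(pow(ctrl_pub, bulb_priv, P).to_bytes(96, "big")).digest()
ctrl_key = hashlib.sha256(pow(bulb_pub, ctrl_priv, P).to_bytes(96, "big")).digest()
assert bulb_key == ctrl_key  # dynamic per-session key, no firmware constant
```

Because the keys are generated per session, extracting one device's firmware reveals nothing that decrypts another household's pairing traffic.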

The second is the ability to request the encrypted credential blob silently. When an additional smartlight is first configured onto the network, it is reasonable to require user confirmation before sharing this data with it. A background request for the blob should either require user confirmation or simply be rejected when it is not part of a new bulb configuration.
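Gating the blob behind an explicit pairing window is a small amount of logic. A sketch of the idea (the class and its API are hypothetical, not anything LIFX shipped):

```python
import time

class CredentialStore:
    """Serve the credential blob only during a user-confirmed pairing
    window (hypothetical design sketch, not an actual LIFX interface)."""

    PAIRING_WINDOW = 60  # seconds the window stays open after confirmation

    def __init__(self, blob: bytes):
        self._blob = blob
        self._pairing_until = 0.0  # no window open initially

    def confirm_pairing(self) -> None:
        # Invoked only from a deliberate user action (button press, app tap).
        self._pairing_until = time.monotonic() + self.PAIRING_WINDOW

    def request_blob(self) -> bytes:
        # Silent background requests outside the window are rejected.
        if time.monotonic() >= self._pairing_until:
            raise PermissionError("no active pairing session")
        return self._blob

store = CredentialStore(b"wifi-credentials")
try:
    store.request_blob()      # background request with no user action
except PermissionError:
    pass                      # rejected, as it should be
store.confirm_pairing()       # user explicitly adds a new bulb
assert store.request_blob() == b"wifi-credentials"
```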

Although having the JTAG port pinned out may seem to be a poor design choice, it does not really add significant risk. JTAG on the device's pins alone would have been more than sufficient for a physical attacker, and that assumes the same data would not have been available in a firmware download. A JTAG port does not present a significant risk if keys are managed securely and the security architecture takes this exposure into account.

Tale 2 – Smart Home Denial of Service


The vulnerability in this story is that the smart home had connected all of the smart devices in the house through a common Ethernet infrastructure, effectively rendering every device a node on a single flat network. This meant that any one device could saturate the network with packets, effectively breaking it. It also meant that any one device could monitor every packet on the network, or selectively disrupt packets. Essentially, this flat network could be compromised in multiple ways by any device on it, and the overall security of the network was only as good as its weakest device.


This particular compromise was a smartlight beaconing on the network, producing a denial of service. The event was not malicious, but if we consider the triad of confidentiality, integrity, and availability, it is still a security failure. A self-induced denial of service is still a denial of service.


The smart home described in this article makes me uncomfortable as a systems engineer. The designer indicated that he had not installed his smart door locks because he did not want to be locked out / in by them, and that the light bulb denial of service rendered all the smart devices in his house broken / unavailable.

This bothers me for a couple of reasons. The first is that it is possible to segment the network so that a failure does not propagate through the entire network, effectively setting up security domains on functional boundaries. Even a trivial level of peering management would provide some isolation without giving up the necessary control protocols.
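Security domains on functional boundaries can be modeled very simply. A sketch of a segmentation policy with explicit inter-domain peering (device and domain names are illustrative):

```python
# Devices grouped into functional security domains; traffic crosses a
# domain boundary only via an explicitly permitted peering. A chatty or
# faulty device can then only saturate its own segment.
DOMAINS = {
    "lighting": {"bulb-1", "bulb-2", "light-controller"},
    "locks":    {"front-door-lock", "lock-controller"},
    "media":    {"tv", "media-server"},
}

# Explicit peering between domains (e.g. controller-to-controller links).
PEERING = {("lighting", "locks")}

def domain_of(device: str) -> str:
    """Look up which security domain a device belongs to."""
    return next(d for d, members in DOMAINS.items() if device in members)

def allowed(src: str, dst: str) -> bool:
    """Permit traffic within a domain, or across an explicit peering."""
    a, b = domain_of(src), domain_of(dst)
    return a == b or (a, b) in PEERING or (b, a) in PEERING

assert allowed("bulb-1", "bulb-2")              # same domain: permitted
assert allowed("light-controller", "lock-controller")  # peered domains
assert not allowed("tv", "front-door-lock")     # no peering: blocked
```

In a real deployment this policy would live in VLAN tagging or firewall rules rather than application code, but the design principle is the same: isolation by default, peering by exception.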

The second part I find bothersome is that the entire system appears to have been designed around a single centralized control mechanism / scheme. Given the relatively poor reliability of networked systems compared with traditional home lighting / appliance controls, it makes sense to install a parallel control scheme based on a local (more reliable) control path that operates much closer to the device being controlled.
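A parallel control path amounts to graceful degradation: prefer the central hub, but fall back to a local path when the hub is unreachable. A minimal sketch (all class and device names are hypothetical):

```python
class HubLink:
    """Central hub control path; may be down when the network fails."""
    def __init__(self, up: bool = True):
        self.up = up
    def send(self, device: str, command: str) -> str:
        if not self.up:
            raise ConnectionError("hub unreachable")
        return f"hub->{device}:{command}"

class LocalLink:
    """Local control path right next to the device, e.g. a wall switch
    or point-to-point radio, independent of the home network."""
    def send(self, device: str, command: str) -> str:
        return f"local->{device}:{command}"

def control(device: str, command: str, hub: HubLink, local: LocalLink) -> str:
    """Prefer the hub; fall back to local control if it is unreachable."""
    try:
        return hub.send(device, command)
    except ConnectionError:
        return local.send(device, command)  # graceful degradation

# Even with the hub down, the light still turns on.
hub, local = HubLink(up=False), LocalLink()
print(control("porch-light", "on", hub, local))  # local->porch-light:on
```

With this layering, a network or hub failure degrades the system to "ordinary house" rather than "broken house".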

In summary – the architecture of this particular smart home implementation is brittle, in that a single device failure can precipitate an entire system failure. In addition it is fragile, in that the control scheme depends on a number of disparate sequential operations that present a multitude of single points of failure for every device. Lastly, the system is not robust, in that there is no alternate control scheme. In my opinion this smart home may be an interesting experiment, but it is a weak systems design with lots of architectural / system flaws.

Tale 3 – ThingBots

This is not a cautionary tale about a specific device or attack, but a cautionary tale about embedded devices in general, and by inclusion – IoT devices. Back in the last week of 2013 / first week of 2014, Proofpoint gathered data from a number of botnets sending out spam. Specifically, they identified the unique IP addresses in the botnets and characterized them forensically, finding that roughly a quarter of the zombie machines were not traditional PCs, but things like DVRs, security cameras, home routers, and at least one refrigerator. From this they coined the term ‘ThingBot’: a botnet zombie based on some ‘thing’.

The message is that when it comes to compromise and attack, no device is exempt; there is no point at which your device is not a target for a botnet. Harden all embedded devices and design defensively.

Bottom Line

The messages in these three tales are diverse, but can be summarized by:

  1. Every connected device is a target. Simply being a connected device is sufficient.
  2. Key management may be mundane, but it is even more critical on devices, since often the only interface is networked.
  3. Most importantly – system design matters. Most security issues occur at the integration interfaces between components of one type or another, and good system design reduces that exposure.