
Schadenfreude

August 20th, 2015

In case you aren’t familiar with the term, “schadenfreude” is a German word for enjoying others’ misery. I think it fits the release of Ashley Madison customer data this week.

So what should we make of this compromise and disclosure? I think there are at least two subject areas – technical/security and sociological/moral.

Technical

As has been pointed out before, this looks very much like an inside job. It just screams for better internal controls, including Privileged Access Management, data loss prevention and plain old employee and contractor screening. It’s quite possible that, despite lots of claims about motivation, this is the work of a disgruntled employee or contractor.

It’s also interesting to see what the operators of the site — Avid Life Media — got right and wrong:

  • Right:
    • Strong hashing of customer passwords (bcrypt, a hash built on the Blowfish cipher).
  • Wrong:
    • No privileged access management.
    • Retained excessive customer data: physical location (GPS coordinates, presumably from the smartphone app), phone number, personal e-mail address, security question/answer in plaintext, and detailed credit card data including mailing address.
    • Failed to delete this data, even when paid to do so.

Sociological

This discussion is just beginning, and will no doubt continue for a long time. A few observations:

  • Despite best efforts by the AM legal team, the data is out in the wild. They got it removed from a few web sites, but it’s on BitTorrent where content is essentially un-removable and un-deletable. Get over it – the data is permanently public.
  • The data appears to be quite authentic. Some had thought (hoped?) that the data might be fake – but that’s just not so.
  • The volume of data is huge – about 32,000,000 customer records.
  • It’s mostly men. Really – there aren’t many women on this site. It’s a lot of men, chasing after a few women. A completely one-sided seller’s market for women.
  • It will be interesting to see if someone can figure out how many of the profiles are real people, and how many are bogus data injected by the company. I suspect a significant number of fake or duplicate profiles, because the numbers simply beggar belief. For example, there appear to be over 100,000 profiles in Calgary, where I live. There are just over a million people here, and I don’t believe that 1 in 10 are trying to cheat on their spouse. The profiles are mostly men, so that’s really about 1 in 5 males. Subtract children, the elderly and single people, and it works out to something like a third to a half of adult males in relationships. No matter how low your view of humanity, that’s just not believable, so the data must be padded with fakes (see the back-of-the-envelope sketch below).
  • This is a treasure trove of data for various purposes. For example, someone has already published a heat map of where the users (real or fake) are and whether they are overwhelmingly male (>85%) or merely majority male (<85%).
  • This will be a bonanza for divorce lawyers. Not as big as everyone assumes, however, as there are certainly many users on the site who are not endangering an existing relationship:
    • Fake or duplicate users, as mentioned above.
    • I know at least one person who has a profile on the site, which he set up while single – he was just using it as a normal dating site. I bet there are lots of these.
    • There are probably many users on the site for whom the excuse “I was just curious” is actually true – they were curious about the market or looking for their current partner, to see if that person was on the site.
    • Another person I know pointed out that sex workers use this and similar sites, so there are likely thousands of those.
  • As always happens with disclosure of sexual behaviour that is widely considered to be immoral, public figures, especially those who spout socially conservative views, will be shamed. I’m not too sure what “family values” are other than a code word for social conservatism, but apparently someone who pushes that as a political cause has already been caught with his pants (literally?) down: some idiot public figure called Josh Duggar.
  • I bet the security establishments of many countries are looking at this data, since it gives foreign governments leverage over people in sensitive positions. I would expect some employees to be fired or shuffled to less sensitive positions as a result.
  • Employers may cross check employees or candidates against this data set, as an (unethical and almost certainly illegal) test of character.
  • I fear that physical harm may come to some people whose data was disclosed, including sex workers and people with overzealous partners.

I’m sure there’s more.
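
Here is the back-of-the-envelope sketch promised above, in Python. Every input is a rough assumption for illustration, not a measured figure:

    # Back-of-the-envelope version of the Calgary estimate.
    # All inputs are rough assumptions for illustration only.
    calgary_population = 1_000_000   # "just over a million people"
    calgary_profiles   = 100_000     # approximate profile count in the leaked data
    male_share         = 0.90        # the leak is overwhelmingly male
    partnered_share    = 0.45        # guess: fraction of males who are adults in a relationship

    male_profiles   = calgary_profiles * male_share   # ~90,000
    males           = calgary_population / 2          # ~500,000
    partnered_males = males * partnered_share         # ~225,000

    print(f"{male_profiles / males:.0%} of all males")                        # 18%, roughly 1 in 5
    print(f"{male_profiles / partnered_males:.0%} of partnered adult males")  # 40%, between 1 in 3 and 1 in 2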

The big lesson, as always, is to assume that privacy is a chimera. If there is something you don’t want to share with the world, don’t upload it to some web site!

Avid Life Media hack

July 20th, 2015

If you haven’t read this one yet, then do so now:

Online Cheating Site AshleyMadison Hacked

This is interesting on so many levels!

  • The data that was apparently exfiltrated is about people cheating on their spouses. There is a delicious moral irony involved in the possible release of this.
  • At the same time, this is a criminal event. Proprietary and personally identifying data was stolen. Theft is theft, even if it’s just a copy of data and even if it’s used to shame cheaters.
  • A company in this line of business should surely make security paramount. That they kept plaintext PII – including sexual fetishes and compromising photos – lying around is simple incompetence, applied at an industrial scale.
  • The attack seems to have been perpetrated by an insider. The ALM people seem to think they know who did this, and imply it was a contractor of some sort. If this doesn’t cry out for Privileged Access Management then I don’t know what does.
  • The societal impact of this hack could be huge. Imagine what happens if this data set is published and tens or hundreds of thousands of divorces, family breakups and job terminations ensue. That could make this the most impactful hack in history, in terms of financial and personal harm. Family lawyers will be in the money for years as a result.

It’ll be interesting to see how this story unfolds in the coming days.

So glad we don’t use Java

June 30th, 2015

Interesting news regarding litigation around Java intellectual property (IP) today:

eff.org

Basically, the courts keep bouncing decisions back and forth in a lawsuit between Google and Oracle over ownership and use of the Java API specifications.

I’m not a lawyer, but generally I think that languages and runtime environments are widely adopted when they are open and unencumbered. Nobody claims copyright over C or stdlib, for example.

Oracle has – unsurprisingly given its corporate culture – tried to make as much as possible of the Java ecosystem proprietary, so that they can generate license fees from this asset. This should cause many developers to think twice about investing in this platform — since there is a risk of undefined fees in your future.

Tread with caution. Not only is Java a terrible platform for performance, it turns out that it’s also at risk of becoming increasingly proprietary. Not a healthy place to develop.

LastPass hack

June 16th, 2015

I guess it was inevitable that a consumer-oriented password manager service would get hacked, and today we’ve learned that one did: Gizmodo.com.

So is there a lesson here for us? A few, I suppose:

  • Security is only as good as the weakest link. I don’t think plaintext passwords were exposed, and it’s not even clear that encrypted ones got leaked, but password recovery hints did, and that may be enough to compromise some passwords.
  • The size of a target matters. I’m sure hackers much prefer to compromise popular systems to obscure ones. For consumers, this leads to the following interesting guidance: see where the herd is running – and run the other way. Choose less commonly used services if you can (but subject to other constraints, like commercial viability and likelihood of the service being well/professionally operated – have fun figuring out which is which).
  • The push to federate will only accelerate. Nobody wants separate passwords for various web sites, when the operators of those sites could easily federate to Facebook, Google, etc. (sketched below). Why solve the problem yourself if you can simply farm it out, for free?
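
As a sketch of what “farming it out” looks like in practice: with OpenID Connect, the web site simply redirects the user to an identity provider and never handles a password at all. A minimal sketch in Python – the client_id and redirect_uri are placeholders you would register with the provider, and the authorization endpoint shown should be confirmed against the provider’s current documentation:

    from urllib.parse import urlencode

    # Standard OpenID Connect authorization request parameters.
    # client_id and redirect_uri are placeholders registered with the provider.
    params = {
        "response_type": "code",
        "client_id": "YOUR_CLIENT_ID",
        "redirect_uri": "https://yourapp.example.com/callback",
        "scope": "openid email",
        "state": "random-anti-csrf-token",
    }

    # Assumed endpoint; verify against the provider's documentation.
    login_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
    print(login_url)  # send the browser here; the provider handles the password, you never see it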

If you are/were a LastPass user, you have a couple of options:

  • Change everything – your master password and your hints.
  • Delete your profile. Take your business elsewhere or give up on this class of application.

Stay safe!

Appliances are Dangerous (because nobody patches them)

May 26th, 2015

Putting sensitive infrastructure on physical or virtual appliances, rather than running it as a traditional on-premise application or a newer software-as-a-service system, is a security disaster just waiting to happen.

Why? Because unlike on-premise applications and also unlike the servers running SaaS applications, there is no guarantee that anyone will apply critical security patches to your appliances, either at all or on time. Systems with unpatched security vulnerabilities are an open door to your otherwise secure infrastructure. Tolerate them at your peril.

I just recently spoke with a customer of ours who had – a few years ago – deployed a privileged access management product from one of our competitors. That product includes one or more “jump servers” which mediate login sessions from the desktops of authorized users to logins on managed endpoint systems. Such a “jump server” architecture is common in the privileged access management product category.

The problem for this customer has been that these jump servers — which have access to the most sensitive passwords in the company — run the original Windows Server 2008 OS (i.e., before the first service pack was released). Since the vendor has made custom changes to the OS to “harden” it, it has been impossible to patch the OS on these jump servers. As a result, today, these jump servers run an OS that was released on February 27, 2008 – 2,645 days, or about 7.25 years, ago. Our customer is scrambling to rip out this product, which endangers their entire infrastructure (it also has performance problems, but that’s another story).
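
As a quick sanity check of those figures, using only the two dates in question:

    from datetime import date

    os_release = date(2008, 2, 27)   # Windows Server 2008 RTM release date
    post_date  = date(2015, 5, 26)   # the date of this post

    age = post_date - os_release
    print(age.days)                  # 2645 days
    print(round(age.days / 365, 2))  # about 7.25 years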

Just think about how many security exploits have been discovered for and patched on Windows 2008+IIS since this platform was released on 2008-02-27. This recently discovered vulnerability comes to mind:

HTTP.sys

Using this particularly dangerous vulnerability, an attacker can remotely gain full SYSTEM privileges on any Windows system running IIS. Yup – including Windows 2008. This exploit is being actively leveraged in the wild, so the risk is very real.
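
If you want a rough way to tell whether a given IIS host is still exposed, the widely circulated fingerprint is to send a Range header spanning the full 64-bit space and inspect the response code. Below is a minimal sketch; the 416-versus-400 interpretation is the commonly reported heuristic rather than an authoritative test, and the hostname is a placeholder:

    # Minimal sketch of the widely published MS15-034 fingerprint check.
    import http.client

    def check_ms15_034(host, port=80):
        conn = http.client.HTTPConnection(host, port, timeout=10)
        try:
            # 18446744073709551615 == 2**64 - 1, the value unpatched HTTP.sys mishandles.
            conn.request("GET", "/", headers={"Range": "bytes=0-18446744073709551615"})
            resp = conn.getresponse()
            if resp.status == 416:    # "Requested Range Not Satisfiable"
                return "likely vulnerable - patch now"
            if resp.status == 400:    # patched HTTP.sys rejects the header outright
                return "likely patched"
            return "inconclusive (HTTP %d)" % resp.status
        finally:
            conn.close()

    print(check_ms15_034("iis-host.example.com"))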

Imagine that your privileged access management system — or any other critical infrastructure — runs on an old, unpatched OS like this. How secure would your organization be?

Is it ever OK to use appliances — physical or virtual — instead of just installing software on a well managed OS image?

I can think of only two cases where appliances are acceptable:

  1. Physical appliances which incorporate specialized hardware, to perform some task very fast. There is simply no software alternative to custom ASICs.
  2. Appliances (physical or virtual) with an automatically managed patch system. i.e., they should run a stock OS and be subject to automatic and timely patches from all the software vendors that contributed components: OS, web server, app server, DB, etc.:
    • If human intervention is required to patch, you’re likely going to forget or at least be late, which will create windows of opportunity for attackers: no good.
    • If only some components get automatically patched (say just the OS), it follows that others aren’t being patched (say the app server) and again you’ll be vulnerable.
    • If the runtime platform has been significantly customized (i.e., “hardened”) then automatic patching will likely break and you’ll achieve insecurity by trying to be too clever.

What if you’ve already deployed appliances that aren’t automatically patched?

  1. Try to patch them manually. Right now.
  2. Talk to the vendor. They are putting you at risk and had better step up and correct the error of their ways, or else you’ll be obliged to rip out their products.
  3. Look for alternatives, since these things are ticking time bombs on your infrastructure.

Roles are for entitlements and user classes are for people, and never the twain shall meet

May 21st, 2015

I keep running into bad terminology when talking to new and prospective customers about organization structure and roles. People frequently confound two quite unrelated concepts, calling both “roles.” This leads to confusion and much wasted effort trying to design unworkable systems, as I’ll explain below.

First, what are the concepts? In identity and access management (IAM) systems, we’re mainly interested in managing the lifecycles of people (identities) and of their entitlements (typically login accounts and group memberships on end systems). Confusingly, some systems use the term ‘role’ to mean ‘security group, assignable to users within this system or application.’ I can’t make vendors like Oracle change their terminology, but I’ll take it as a given that anything assignable to a user within a single system or application is either a login account or a group — even if that system thinks it’s called a role.

  1. Role. Roles are named collections of entitlements that the IAM system can assign to users. They might be assigned automatically, because of some policy, or in response to an (approved) request.
  2. User class. A user class is a set of users (i.e., identities or people). Users might be included in the user class individually, but the more common scenario is to collect users into a set by some rule — say based on their department, location, business unit, etc.
  3. Organizational hierarchy. This just means that every user should, ideally, have (at least one) manager. We like to know the manager/subordinate relationship for all users because this relationship feeds into many useful processes: change authorization, access certification and more.

Nesting is implied in both user classes and roles. When roles are nested, it means that parent roles also include the entitlements of their child roles. This is represented by attaching one or more roles as entitlements in a parent role. There should be no technical restriction on how many roles a role may contain, or how deeply nested roles can be. In practice, most implementations use this sparingly, but in theory, at least, nesting can be both broad and deep.
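
To make the nesting idea concrete, here is a minimal sketch in which a role’s effective entitlements are its own plus those collected recursively from its child roles. The class and entitlement names are illustrative only, not any particular product’s schema:

    from dataclasses import dataclass, field

    @dataclass
    class Role:
        """A named collection of entitlements; child roles are attached much like entitlements."""
        name: str
        entitlements: set = field(default_factory=set)   # e.g. accounts, group memberships
        children: list = field(default_factory=list)     # nested (child) roles

        def effective_entitlements(self):
            # A parent role grants its own entitlements plus everything its children grant.
            result = set(self.entitlements)
            for child in self.children:
                result |= child.effective_entitlements()
            return result

    # Nesting can be both broad and deep:
    unix_user = Role("unix-user", {"account:linux", "group:staff"})
    dba       = Role("oracle-dba", {"group:ora_dba"}, children=[unix_user])
    app_admin = Role("app-admin", {"group:app_admins"}, children=[dba])

    print(app_admin.effective_entitlements())
    # {'group:app_admins', 'group:ora_dba', 'group:staff', 'account:linux'}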

As for user classes, we can think of “all people sharing a given manager” as a user class. This specific type of user class represents a hierarchy, so can be thought of as being nested. We could ask the system to show us a list of people who report both directly and indirectly to a given person — as one way to exploit this hierarchy. The organizational hierarchy is just a (possibly visual) representation of this nesting of the manager/subordinate user class.

So what’s the problem?

Many people use the term ‘role’ to mean two totally unrelated things, or some muddle of the two:

  • The set of people who fit into a particular part of the organizational hierarchy — say all reporting to some manager, or all working in some department. THIS IS NOT A ROLE!
  • A set of entitlements (i.e., really a role this time).
  • Some combination of these two incompatible ideas — i.e., both a set of people and a set of entitlements, mixed up together.

I think people do this because they haven’t thought about clear definitions for roles or user classes. Quite often, they haven’t thought about user classes at all, but instead have only a very ambiguous idea of “some hierarchy of people and entitlements.”

People then make it worse by talking about ‘role nesting’ but actually meaning the nesting of these imaginary, hybrid role/user class things (that should not exist in any well designed IAM system). What does nesting mean when we’re talking about users and entitlements at the same time?

My advice is for everyone to just stop doing that. Roles are one kind of thing — collections of entitlements. User classes are another kind of thing — collections of people. Each of them can have its own hierarchy. You cannot include a role as a member in a user class. You cannot include a user class (or even a single user) as a member of a role.

By keeping the language clear, we can design much simpler, cleaner systems. For example, automatically assign some role to all members of some user class. That’s a nice way to automate access in many cases. Or ask the members of a user class to approve the manual assignment of a role to users. Or ask the members of a user class to recertify the list of people who were (manually) assigned a role. All these things are simple, clear and useful. What someone would do with weird, hybrid, role/user class things is beyond me.
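
As a small illustration of how clean this separation is in practice, here is a sketch of a rule-based user class driving automatic role assignment. The attribute and object names are made up for the example:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class User:
        username: str
        department: str
        manager: str = ""

    @dataclass
    class UserClass:
        """A set of people, normally defined by a rule over identity attributes."""
        name: str
        rule: Callable

        def members(self, users):
            return [u for u in users if self.rule(u)]

    # A role remains just a named collection of entitlements (see the Role sketch above).
    sales_staff = UserClass("sales-staff", rule=lambda u: u.department == "Sales")

    users = [
        User("alice", "Sales", manager="carol"),
        User("bob", "Engineering", manager="dave"),
    ]

    # Automatic assignment: every member of the user class is granted the role;
    # the user class never "contains" the role, and the role never "contains" people.
    grant_crm_role_to = [u.username for u in sales_staff.members(users)]
    print(grant_crm_role_to)   # ['alice']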

Keep your servers patched

April 15th, 2015

Do you automatically and promptly apply security patches to your servers?

If not, you should.

Most bug fixes and security patches address obscure problems that are hard to trigger and have limited impact. Every once in a while, however, something big comes along. Usually, there is an inverse correlation between problem severity and press coverage. Do you remember “heartbleed”? Not a serious problem in most cases.

Today we see a serious security bug: MS15-034, a remote code execution vulnerability affecting IIS on all Windows platforms. Scary stuff.

If you haven’t applied the hotfix for this bug yet — stop reading and do that now. It’s that serious.

The philosophical take-aways from this are:

  • Microsoft should know better than to embed bits of the web server in the OS kernel. Nobody else does that, and for good reason. Microsoft moved bits of IIS into the kernel for performance reasons, and this vulnerability is precisely why that was a bad idea.
  • All organizations should apply security patches automatically and promptly. If you had to wait to read this blog to apply this patch, you’re moving more slowly than your adversaries, with predictable consequences.
  • It’s a safe bet that all software has bugs, and that some of those bugs have security consequences. Build defense in depth, build heterogeneous defenses and try to compensate using well thought out business processes (such as frequent and automated password changes).

Stay safe!

Keep it short and to the point, please

March 26th, 2015

Part of my job is to review responses to requests for proposal (RFPs) that we receive from current and prospective customers. The idea is pretty simple:

  • An organization wishes to procure an IAM system.
  • They find some vendors who make products in the space. Perhaps they search Google for vendor web sites, or they contact an analyst firm like Gartner, Forrester or KuppingerCole.
  • They either independently or with the aid of a consultant write down a wish list of features, integrations and other capabilities.
  • They send this wishlist to all the candidate vendors, who respond in writing indicating whether and how they can comply.
  • Based on these responses, they down-select vendors to follow up with — via presentations, demos, a POC deployment, etc.

Sounds good in theory. We used the same process, more or less, to procure VoIP, e-mail and CRM services over the past couple of years.

But the process can go horribly wrong, and I’ve seen it do that more often than I care to think about:

  • Ask too many questions, and you may just get what you wished for. I just reviewed our response to an RFP with over 400 requirements and over 200 pages. Imagine 10 responses like that. Who will read 2000 pages of response material? Who can even comprehend so much information?
  • Ask lots of internal stake-holders to submit questions, and blindly aggregate them. Some of these requirements will be silly, others mutually contradictory, others off-topic. The person assembling the RFP should understand and consolidate the requirements, not just blindly merge them!
  • Ask questions with yes/no answers. Guess what? Every vendor will just answer “yes” to every question, and you will learn nothing.

So what’s the right way to do this?

  • Don’t ask about every conceivable requirement. Figure out which requirements you think are either critical or hard to meet, and focus on just those. If you’ve asked 100 questions, then you’ve probably asked too many and won’t be able to digest the responses.
  • Engage in a conversation with the vendors and any integrators or other third parties. Ask their advice. Maybe your requirements are ill-conceived? Maybe there is a better way to solve the problem? You’ll never know if you stick to a formal, no-discussions-allowed process!
  • Invite vendors to give presentations and product demos before issuing an RFP. You’ll get some ideas this way, including how to refine your requirements and which vendor approaches you like. You can then send an RFP to fewer vendors, with more targeted questions.
  • Hire someone to help. I hear Gartner does that. Other analyst firms will as well. Integrators have lots of good ideas, especially if they are vendor-neutral. One caution though: be careful of integrators that are strongly affiliated with a particular vendor. For example, I hear that Deloitte likes to push Oracle products, because they get lots of business from Oracle and frankly because Oracle products require huge amounts of consulting to deploy. This is great for the integrator, but terrible for the customer.
  • Figure out how the market has segmented features into product categories. Only ask about one product category in a single RFP. If you have requirements that span multiple categories – fine – send out multiple RFPs, probably to different, though likely overlapping, lists of vendors.

Good luck out there! Keep it short and simple, if you can!

Can your product do “X?”

March 15th, 2015

Frequently I get this question – from customers, prospects, partners and even internally: “can you do X?” This is a deceptively simple question and often exactly the wrong thing to ask.

Why is it wrong? It’s not because I can’t or won’t answer. I often answer “yes” and sometimes “no” and I usually elaborate. That’s not the trouble.

The trouble is that people are trying to solve a problem. Their thought process goes something like this: (a) they have a problem; (b) they have identified a possible solution; (c) their solution requires some feature “X” so (d) they go shopping for “X.”

It doesn’t matter what “X” is here – any feature in any product will do. The problem is that by asking for a particular feature, the person doing the asking is not revealing the problem (a), which is what actually needs to be solved. Perhaps there is a better solution to (a) and the subject matter expert (sometimes – that might be me!) will point it out. Sometimes the proposed solution (b) has subtle problems that haven’t been anticipated yet. Again – if I don’t know about it, I can’t warn the person I’m responding to about that or help them find a better approach.

What I wind up having to do in such conversations is try to figure out (a) and (b) – the problem and proposed solution – by inferring what the person I’m talking to really wants when they ask for “X.” That works out fine (if a bit time consuming) in a voice conversation. It’s a bit slower but still yields the same result in e-mail threads. In a formal RFP process, however, there is no real conversation. There is a single broadcast set of requirements, and a single collected set of responses. There may be a single exchange of questions and answers in between those two bookends. What there isn’t, really, is an interactive conversation, but that’s where everyone learns the most. That’s a real weakness of formal RFPs, incidentally – no conversations.

It would be so much better for current and prospective customers and partners to include some background in their questions. What problems are they trying to solve? How do they propose to solve these problems? Transparency and conversations create value – vendors and customers are not adversaries and withholding information does not create advantage in some conflict.

Sometimes, there aren’t even problems to be solved, but rather an aggregate of features from one or more vendors. I wish people would stop asking for things they don’t need, just because some vendor somewhere says they can do it. Products should solve problems, not compete in some checklist match. But that’s a rant for another day.

So what do we call this thing, anyways?

March 5th, 2015

Every vendor in the privileged access management (PAM) market seems to refer to the product category using a different name and acronym. Some analysts simply refer to the market as PxM in recognition of this situation.

I’d like to put forward the argument, here, that the most appropriate term is PAM, as above. Our own product, Hitachi ID Privileged Access Manager (HiPAM), connects authorized and authenticated users, temporarily, to elevated privileges. This may be accomplished through a shared account, whose password is stored in the HiPAM credential vault and may be periodically randomized. It may also be via privilege escalation, such as temporarily assigning the user’s pre-existing (directory) account to security groups or temporarily placing the user’s SSH public key in a trusted keys file of a privileged account. In all cases, HiPAM assigns privileged access, to users, temporarily.
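
To make “temporarily connecting users to privileges” concrete, here is a deliberately simplified sketch of one such mechanism: temporary SSH trust, granted by placing a user’s public key in a privileged account’s trusted keys file and removing it when the access window closes. This illustrates the general idea only and is not how HiPAM implements it; the path and key material are placeholders:

    import time
    from pathlib import Path

    # Trusted-keys file of the privileged account (placeholder path).
    AUTHORIZED_KEYS = Path("/root/.ssh/authorized_keys")
    MARKER = "  # temporary-pam-grant"

    def grant_temporary_trust(user_pubkey, duration_seconds):
        """Append the requester's public key, wait out the access window, then remove it."""
        with AUTHORIZED_KEYS.open("a") as f:
            f.write(user_pubkey.rstrip() + MARKER + "\n")
        try:
            time.sleep(duration_seconds)   # a real product would schedule the revocation, not block
        finally:
            lines = AUTHORIZED_KEYS.read_text().splitlines()
            kept = [line for line in lines if not line.endswith(MARKER)]
            AUTHORIZED_KEYS.write_text("\n".join(kept) + "\n")

    # Example: one hour of SSH access to the privileged account for an authorized user.
    # grant_temporary_trust("ssh-ed25519 AAAA... user@workstation", 3600)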

Following are other names for this product category, along with the reasons that I think each is incomplete or simply wrong:

  • Privileged Identity Management (PIM):

    Identity management means creating and deleting identities — such as login accounts — on managed endpoint systems. No PAM product actually performs this function. Rather, a PAM system connects authorized, pre-existing users to managed, pre-existing accounts for short time periods, subject to authentication, authorization and audit. There is simply no identity management, in the sense of Create/Update/Delete (CRUD) operations, in the current generation of PAM products.

  • Identity and Access Management (IAM):

    IAM systems can, in principle, manage the lifecycle of privileged accounts. In practice, there is rarely a need to do so, as most privileged accounts come pre-installed on the device, operating system, hypervisor, database, or other system that must be managed. The main IAM use case relating to privileged accounts is to create and track metadata, such as account ownership.

    Architecturally, typical IAM systems can scale to a few thousand endpoints. Enterprise PAM deployments, on the other hand, scale to hundreds of thousands of managed endpoints. Few IAM products, even with lots of custom code to close the functional gap, could deploy at PAM scale.

    In short, IAM is complementary to PAM, but the two product categories address distinct problem categories with distinct functionality at different scales.

  • Privileged User Management (PUM):

    The same argument presented vis-a-vis PIM above holds. User management — of privileged or other accounts — is simply not what PAM products available today do.

  • Privileged Password Management (PPM):

    Hitachi ID Systems previously used this term, before offering other methods to connect users, temporarily, to privileges. While this label may still apply to some products, today HiPAM also allows for temporary group membership and temporary SSH trust relationships, making the term obsolete.

  • Privileged Account Management (PAM):

    For some products in the market, this is probably an accurate description, since authorized users are connected to specific, pre-existing, privileged accounts for defined time intervals. This is also a fair description of one of the methods that HiPAM uses to connect users to privileges. Since HiPAM also supports temporary trust and temporary group membership (i.e., privilege escalation), this description would be incomplete for Hitachi ID.

  • Privileged Session Management (PSM):

    Some analyst firms refer to this as a distinct product category — software that establishes sessions connecting users to shared accounts on managed endpoints, typically via a proxy appliance. It’s not clear to me how such a product could function independently of a credential vault, presumably complete with a password change process. In short, this is a subset of what HiPAM actually provides and I’m pretty sure it’s a subset of what our competitors do too. Not a real, standalone product.

  • Application to Application Password Management (AAPM):

    Another subset of HiPAM functionality — modifying programs so that passwords embedded in scripts and configuration files are eliminated and instead fetched on demand from a secure, authenticating vault (see the sketch after this list). Not a product category: just a subset of functionality.

  • Superuser Privilege Management (SUPM):

    A somewhat complementary product category, where a central policy server controls what commands a user can issue, to execute as root or Administrator, on individual Unix, Linux or Windows systems. HiPAM accomplishes much the same thing through a combination of temporary group membership, along with local policies linking groups to commands (via Linux sudo or Windows GPOs). In the Unix/Linux environment, I’m not sure I buy that products like this are actually effective. If you give me the ability to run something like sed or grep on a Linux box, as root, then I can do pretty much anything. Many programs would let me shell out to run a sed or grep command, so I suspect that this whole product category is more about optics than actual security. YMMV.
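
As promised above, here is a sketch of the AAPM idea: the application fetches its credential from a vault at run time instead of embedding it in a script or configuration file. The vault URL, request format and token handling are hypothetical; every real product has its own API:

    import json
    import os
    import urllib.request

    VAULT_URL = "https://vault.example.com/api/checkout"   # hypothetical vault endpoint

    def fetch_password(account):
        """Check a password out of the vault instead of hardcoding it."""
        req = urllib.request.Request(
            VAULT_URL,
            data=json.dumps({"account": account}).encode(),
            headers={
                "Content-Type": "application/json",
                # The calling application still authenticates itself somehow; here a token
                # from the environment, in real deployments often a client certificate.
                "Authorization": "Bearer " + os.environ["VAULT_TOKEN"],
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["password"]

    # db_password = fetch_password("oracle-prod/sys")   # fetched on demand, never stored on disk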

So what do you think? Anyone care to refute my ideas here, and support use of a different category name and acronym? :-)
