Archive for the ‘Strategy’ Category

#33C3 – my “must have”-sessions

December 27, 2016

Like every year, the annual Chaos Communication Congress takes place to cover the dead week between Christmas and New Year. I took a look at the schedule to identify my "must have" sessions. These are:

As every year, I'll spend my scarce time with my family back home, but I'll try to catch at least a couple of my "must have" sessions via the livestreams. Once the event is over, I'll download all my sessions to watch them offline during hotel nights.

The full schedule can be found here: https://fahrplan.events.ccc.de/congress/2016/Fahrplan/schedule.html
Live stream will be available here: https://streaming.media.ccc.de/33c3
Downloads will be available here: https://media.ccc.de/c/33c3

All of you onsite: have a great event. 

[Update: 01.01.2017]
I've added the URLs of the recordings to the list of my "must have" sessions. But I'll download all of the sessions to watch them bit by bit. I'll pick the one or other talk to dig into a little deeper and post my thoughts about it.

PoCs – are they really an inefficient use of time and money?

This blog post is meant as an answer to one of the latest blog posts from Identropy, "IAM Proofs of Concepts (POC) – An Inefficient Use of Time and Money", written by Luis Almeida (by the way: congratulations and all the best for your new job). It took me a while to gather all my arguments to reply to it.

The first argument that Luis brings up is that PoCs are costly and deliver very limited value.

The cost argument is an interesting one. Typically (at least for the majority of PoCs I did in my career), costs are borne by the vendor or integrator attending the PoC at the customer's site. It's in the interest of the vendor or the system integrator to show the capabilities of their product within the customer's environment. This is why most vendors or system integrators do not charge their potential customers for PoC attendance. The only cost effect on the customer's side (and I do agree that this should not be underestimated) is to define the PoC scenario and to establish a lab environment to run the PoC in. Typically, there are already testing environments alongside the potential customer's production network, where development work and software testing happen before IT systems are moved to production.

The PoC scenario should already be defined (at least in parts) by the potential customer's daily processes for onboarding, updating, or offboarding employee identities. If the customer is a green-field customer (having no legacy IDM solution in place), the approval processes are mostly captured in request forms available in the potential customer's intranet or as printouts, which can be leveraged to gather the information currently used for request fulfillment. I've also experienced PoCs where it was part of the PoC to demonstrate the conceptual approach of migrating a manual process into a request-driven, automated workflow, in order to give the customer some insight into the approaches used and the way they are implemented in the solution.

The claim of limited value is a bit tricky. For sure, the teams sent by vendors or integrators to run PoCs are mostly pure pre-sales consultants with a dedicated skill set for delivering a successful PoC in order to sell the product. Mostly, but not every time: in smaller organizations, PoCs are delivered by project-proven consultants and architects. Depending on the background of the vendor's or integrator's PoC delivery team, PoCs deliver different outcomes. It should be the duty of the vendor or integrator to set up a team of pre-sales consultants as well as professional services consultants, to not only demonstrate the solution but also make the customer aware of potential pitfalls and the arising complexity of the solution to be implemented, and to identify strategies to mitigate the risks that are about to affect the upcoming implementation project.

The next argument that Luis brings up is the handover of responsibility from the sales/pre-sales team to the services team after the PoC in order to deliver a successful project.

Indeed, this is the most complex part of the transition of an IAM implementation project. Information might get lost, the SOW created by the pre-sales team might not fit the customer's requirements, and the services team might get caught in a financial frame that is too tight to do the project in a safe way. But there is a solution out there for this problem: integrate the services organization into your pre-sales cycle as soon as possible. What's that good for? The services consultants can assist the pre-sales consultants onsite during the PoC by bringing in real-life experience instead of pure trained pre-sales know-how of how to achieve goals in a short amount of time to get the PoC scenario done. The services organization can also assist in the creation of the SOW by bringing in real-life experience from ongoing or former implementations, and justify potentially higher project costs with their experience of the pitfalls, complexity, and data-driven issues of implementation projects.

The third argument brought up by Luis is the significant opportunity to get a better outcome from a product evaluation phase by engaging a services partner. I totally agree with this approach, as it allows the customer organization to focus on their needs and living processes while having a partner in place that is dedicated to the IAM implementation and the search for the perfect product for the customer's situation, their architecture, and their team. The only thing I have to mention for this scenario: are service providers or solution integrators really unbiased? They are probably a bit more unbiased than a vendor's sales and pre-sales team, but not completely unbiased, as they typically partner with a couple of vendors to distribute and customize their solutions. From a customer's perspective, the best approach would be to engage one service provider for the product evaluation phase and the definition of use cases and requirements, but to have another service provider or solution integrator, maybe even the vendor itself, do the project implementation.

The last argument that comes up in Luis' blog post is "If POCs were an effective means to evaluate IAM solutions, there would not be so many failed implementations in the market."

For sure, there are failed implementations out there that went through a PoC and suffered from a suboptimal handling of the transition from the sales cycle into the implementation phase executed by the services organization. The implementation might also fail if the product evaluation was done by a service provider, as there are so many factors that can be a source of project failure.

As a final conclusion from both blog posts, I think we agree that things have to change around the product evaluation process, on the customer side as well as on the side of vendors and system integrators. And last but not least: this does not only affect IAM projects.

Categories: IAG, IAM, IDM, Strategy

Passwords must die – we’re on the way

For several months now, and across several identity-related conferences, there has been one hot topic going strong, represented by a popular hashtag within the IAM crowd (some call them / us identirati): #PasswordsMustDie

I already spent some time in March blogging a few lines on #PasswordsMustDie in the article "Passwords must die – but how?". And over the past weeks I spent some time looking around in various places to see how it's going on the way to kill passwords. There's a bunch of news in that space that I'd like to wrap up pretty quickly.

The FIDO Alliance

The FIDO Alliance (FIDO stands for Fast IDentity Online) was formed as a non-profit organization in summer 2012 to change the nature of user authentication. Some well-known names among the members of the FIDO Alliance are:

  • Google
  • Lenovo
  • PayPal
  • PingIdentity
The alliance is still growing, making its way to bring a FIDO plugin supporting various FIDO authenticators, such as hardware-based tokens, fingerprints, and voice identification, as well as combinations of those. It differentiates two kinds of tokens:
  1. Identification tokens as unique identifiers associated with an online identity
  2. Authentication tokens for identity proofing

Mozilla Persona

In April 2013, the Mozilla Identity team announced the second beta of Persona as a simple way to log in to various services and websites using any modern internet browser. Their simple goal: eliminate passwords on the web. Although the base of supported services and websites is still small, I expect it to grow over the coming months.


Both the FIDO Alliance and Mozilla Persona show that there is something going on to kill passwords. These initiatives will see a major boost in usage as soon as some bigger services start supporting their technology and approach. As long as services like Twitter and LinkedIn just enable their users to use two-factor authentication as a result of various security incidents, passwords remain in use, even if just as a single part of authentication. Once the first popular service starts to adopt technologies such as those offered by FIDO or Mozilla, we might see some real security improvements.

Categories: Cloud, IAG, IAM, Identity, Security, Strategy

Continuity vs. Reinvention

In a lead architect role, I'm currently involved in a project to design the migration of a very mature IAM implementation to a newer release of the same IAM solution suite within a complex infrastructure. I don't want to shed light on the technical details here, but I would like to discuss the impact on the end user in such a migration.

The customer is using the web-based end user portal heavily, with a user base beyond 100,000 end users all over the world. Due to the lack of out-of-the-box (OOB) features in the currently used release, the customer spent a lot of time and money extensively customizing the web portal. In fact, there are NO standard modules in use anymore; the whole web portal was reimplemented by the customer and their current service provider to offer best-of-breed value to their end users.

Facing the challenge of migrating the existing solution to a newer release of the same IAM suite, we're facing some issues here:

  • since we plan to jump over two major releases, lots of the backend technology and database structure, as well as the web portal designer engine and its controls, have changed, been replaced, or been discontinued
  • to enhance the end user experience, the customer's service provider implemented a bunch of custom controls in the web portal using the existing web portal designer engine, some of them based on OOB controls
  • the web portal implementation does some tasks in a very special way: not all tasks initiated through the web portal are handled and executed by the IAM solution's backend engine (as would be the usual way); some are driven directly out of the web portal itself without ever interacting with the backend engine (e.g. several web service calls)

All of these issues and their consequences have to be taken into consideration when planning the migration strategy as well as the technical implementation itself. Leaving effort and cost aside, it comes down to the following decision:

Rebuild the web portal as it exists today OR reinvent the web portal using the newer technology and OOB controls?

Or, rephrasing the question: continuity or reinvention? Let's discuss both a bit before coming to my final conclusion.

Continuity

Keeping in mind the huge end user base of more than 100,000 users, it might be worth spending the effort to rebuild the same heavily customized web portal as it exists today. This minimizes the impact on the end user in the final phase of the solution migration and might keep the help desk and the IAM team in a comfortable situation. But the price to pay is high: all controls that can't be migrated automatically using the IAM suite's migration features have to be rebuilt on the new platform. So effort has to be spent that the customer has already spent and paid for once. Keeping a high level of customization increases the complexity of the final solution implementation and the potential maintenance effort, but also the risk when migrating to newer versions of the IAM suite in a few years. It also makes the customer dependent on the implementing service provider, as the implementation know-how will stay on the service provider's side. The solution vendor might not be able to support all of the implementation, as it might be too heavily customized, which doesn't bring any value to the customer either.

Reinvention

Reinvention means migrating the complete web portal to the standard solution features, limiting customization to the lowest level possible (which in the best case would be the customer's CI). The closer the solution implementation is to the delivery standard, the better the supportability by the solution vendor instead of the implementing service provider, which puts the customer in a much more comfortable situation for future business with the vendor as well as with the service provider landscape. On the other hand, there is an impact on the huge end user base, as they will have to relearn and/or adapt to the newer solution's web portal and its functionality, since it will have a different look and behavior. This will also have an impact on the customer's help desk and the IAM team, as they will be the point of contact for all the end users who have trouble adjusting to the new solution and the new way of handling things. This can be mitigated by providing training material, webcasts, and regular updates during the implementation process to make key users and power users aware of the upcoming changes. Utilizing the group of key users and power users streamlines the process of sharing knowledge and information from the help desk and the IAM team through the key users and power users to the regular end users.

Conclusion

From an architect's view, my conclusion is pretty clear: I'm recommending the reinvention of the end user web portal, although there will be an impact on the huge end user base. Why do I want to do that?

  1. Bringing the solution back towards the OOB standard as far as possible makes the solution less complex and improves the maintenance situation for the customer itself as well as for the service provider (which might not necessarily be the implementing service provider)
  2. The implementation effort is not as high as keeping the web portal as it is, which would mean spending a lot of time and material on continuity by reimplementing everything that has been implemented so far
  3. The end user impact will peak at the beginning of the solution's roll-out but will decrease quickly

From a realist's view, knowing the customer for a while, I'm pretty sure we will end up with a kind of mixture between continuity and reinvention. But the strategy I'd like to propose to my customer is clear: decrease customization over time. Maybe it's worth spending the money on reimplementing the existing solution in the newer release and then starting a process of moving back to standard, feature by feature.

Categories: IAG, IAM, Identity, IDM, Migration, Strategy

MDM in the context of IAM

While BYOD is not the newest phenomenon in the IT and security area, I just had my first project that did more than merely touch a Mobile Device Management (MDM) platform. As part of the identity lifecycle, it's necessary to get control over the mobile devices used by the end user. While this is pretty easy regarding mail integration (as soon as the user gets deprovisioned, mailbox access is no longer possible), it's not that easy to handle regarding profiles and custom apps.

In my customer's case, they have MobileIron deployed in their infrastructure. As part of the deprovisioning process, they came up with the requirement to retire the devices used by a terminated employee within their MobileIron instance, which revokes all the certificate-based access from those devices to the customer's wifi and network resources.

As the MobileIron API supports HTTP requests to retire devices, it was necessary to have the device ID of a device in order to retire it. Luckily, there is also an HTTP request to get a decent set of device attributes out of MobileIron. We chose the most convenient and quickest approach: extending our IAM database model with a table that stores the mobile device data, with a foreign key link to the employees table. Calling a dedicated HTTP request against MobileIron, we get a CSV back carrying that set of attributes. This CSV then gets imported into the IAM system. This process runs every hour.
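For illustration, a minimal sketch of such an hourly sync could look like the following. The endpoint path, credentials, and CSV column names are assumptions made up for this sketch (the real MobileIron API calls differ), and SQLite stands in for the IAM database:

```python
import csv
import io
import sqlite3

import requests

# Assumed endpoint and credentials; NOT the actual MobileIron API.
MI_BASE = "https://mobileiron.example.com/api"
MI_AUTH = ("api_user", "api_password")

def sync_devices(db: sqlite3.Connection) -> None:
    """Hourly job: fetch the device inventory as CSV from the MDM
    platform and mirror it into the IAM database."""
    resp = requests.get(f"{MI_BASE}/devices/export", auth=MI_AUTH, timeout=60)
    resp.raise_for_status()
    for row in csv.DictReader(io.StringIO(resp.text)):
        # Column names are assumptions for this sketch.
        db.execute(
            "INSERT OR REPLACE INTO mobile_devices"
            " (device_id, employee_id, platform, os_version)"
            " VALUES (?, ?, ?, ?)",
            (row["deviceId"], row["userId"], row["platform"], row["osVersion"]),
        )
    db.commit()

db = sqlite3.connect("iam.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS mobile_devices ("
    " device_id TEXT PRIMARY KEY,"
    " employee_id TEXT REFERENCES employees(employee_id),"  # FK to employees
    " platform TEXT, os_version TEXT)"
)
sync_devices(db)
```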

When an employee now gets terminated, we also kick off a process that retires all devices known for this employee in MobileIron. So far, this satisfies the customer's requirements.
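Continuing the sketch above (the retire endpoint path is again an assumption, not the documented MobileIron call), the deprovisioning hook could look like this:

```python
import sqlite3

import requests

MI_BASE = "https://mobileiron.example.com/api"  # assumed, see above
MI_AUTH = ("api_user", "api_password")

def retire_devices_of(db: sqlite3.Connection, employee_id: str) -> None:
    """Deprovisioning hook: retire every device known for the employee,
    which revokes the certificate-based wifi and network access."""
    device_ids = [
        device_id
        for (device_id,) in db.execute(
            "SELECT device_id FROM mobile_devices WHERE employee_id = ?",
            (employee_id,),
        )
    ]
    for device_id in device_ids:
        # Hypothetical retire call; the real MobileIron path differs.
        resp = requests.put(
            f"{MI_BASE}/devices/{device_id}/retire", auth=MI_AUTH, timeout=60
        )
        resp.raise_for_status()
```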

For a later phase, this also satisfies additional requirements that will come up (or already came up while defining upcoming phases of the IAM strategy): being able to use the data from a governance and access management perspective also answers questions such as "Who is accessing the enterprise network with what kind of devices?" or "Are there devices with a software release that is not safe enough to let them touch the enterprise network and enterprise resources?".
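With the device table in place, those questions boil down to simple queries. A sketch (the version baseline is made up, and the naive string comparison would need proper version parsing in a real implementation):

```python
import sqlite3

db = sqlite3.connect("iam.db")

# Who is accessing the enterprise network with what kind of devices?
for employee_id, platform, n in db.execute(
    "SELECT employee_id, platform, COUNT(*) FROM mobile_devices"
    " GROUP BY employee_id, platform"
):
    print(employee_id, platform, n)

# Which devices run a software release below an approved baseline?
for device_id, os_version in db.execute(
    "SELECT device_id, os_version FROM mobile_devices WHERE os_version < ?",
    ("7.0",),  # assumed baseline; naive string comparison for brevity
):
    print("out of policy:", device_id, os_version)
```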

To cite a good IAM guy I did a project with: "Building an IAM implementation is like building a house: it's all based on a strong foundation."

I expect to see more projects coming up with even deeper integration between IAM and MDM as the BYOD wave is still rising…

Ian Glazer – Killing IAM in Order to Save it

There's just nothing more to say, it's just Ian Glazer: Killing IAM in Order to Save it

Categories: IAM, Identity, Strategy

Passwords must die – but how?

It's been a while since I had a chance to find some time for my blog. I still have several topics in my pipeline, but as today is going to be a lazy Sunday staying in my hotel in Louisville, KY, I decided to take a shot at a burning hot topic that's been on top for a while: #PasswordsMustDie

As you might have seen in the news, on Twitter, or in various blogs, Evernote got hacked these days, and it's reported that usernames and passwords (no matter whether they were encrypted or not) were taken. I wonder how many of those passwords are used with other services as well, in the worst case with the same email address. I'd guess at least 60% of the taken password/email combinations are used for other services too, as this is the most common mistake we all make: we reuse passwords. We shouldn't reuse them, but as a matter of fact, we do. By doing so, we're exposing ourselves to an enormous risk. But we accept it.

What could be ways out of this misery?

  • Biometric Authentication
  • Token-Based Authentication
  • Multi-Factor Authentication

Let’s take a look at all of those.

Biometric Authentication

Some laptops today have a fingerprint reader that can be used to authenticate a user. My HP laptop issued by my employer has one, and I use it at least for Windows authentication. The same laptop also offers face recognition via the built-in webcam; I've never tried that feature since the webcam is not working properly. On the other hand, I remember a presentation on a laptop (I won't name the vendor) where the face recognition could be bypassed with a good photo print. Looking at the brand new MacBook I'm currently using to type this article, there is no fingerprint reader, but it does have a webcam. Looking at other devices or computers in my family, there are also devices that have neither. On those devices, there is no way to do biometric authentication without additional hardware. So as long as not all devices have a fingerprint reader by default, there is no way to roll out biometric authentication across all the mainstream services.

Token-Based Authentication

Token-based authentication requires a service to issue a token to the user. But how is the user identified by the service? The user has to provide something to prove the identity, so this potentially requires a username and a password or other attributes to be communicated between user and service. Sure, there are SSO applications doing that in an enterprise infrastructure. But what about home users? What do they use to initiate a token-based authentication? They could, for example, use accounts from other services that integrate with the service they want to use, like Google or Facebook. And how are the Google or Facebook accounts secured? Typically by a username and a password. Only a small number of people I know use two-factor authentication with their phone to authenticate with Google.

Multi-Factor Authentication

Multi-factor authentication requires not only a password or a PIN; it also requires at least one more factor to authenticate with. This could be a certificate, a security token calculated by a device or an application, or a biometric component such as a fingerprint or a face recognition scan. As discussed earlier, fingerprints and face recognition are pretty much out of the game, as not all devices support those features. Most of you know the RSA tokens issued by our security departments or customers for logging into the network or remotely into a customer's network. As these are bound to a network-specific infrastructure component, this would result in a bunch of tokens I'd have to carry with me all the time. And certificates? More than one major certificate authority has been hacked in recent times.
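To make the "security token calculated by a device or an application" concrete, here's a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the kind of six-digit code apps like Google Authenticator derive from a shared secret and the current time:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval        # 30-second time step
    msg = struct.pack(">Q", counter)              # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Demo secret (base32); user and service derive the same code without
# a password ever crossing the wire.
print(totp("JBSWY3DPEHPK3PXP"))
```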

So what’s the conclusion out of this?

As much as I agree that passwords must die, I don't see any chance of making that happen soon. It would require at least a standard for a mixture of token-based and multi-factor authentication. But no hardware tokens, please. How about a multi-platform token application for smartphones? Oh, right… what about non-smartphone users? You see, it's a long way to go, and we've just started it with the first services offering alternative ways to authenticate, like SSO with Google.

What could be done to make passwords more secure in the meantime?

  1. Use strong passwords (a combination of lowercase and uppercase letters, numbers, and special characters; see the sketch below)
  2. Don't reuse passwords (I know, it's not very convenient)
  3. Services need to enforce password policies and password change intervals
  4. If you're using SSO with Google or Facebook, ensure that your passwords are ultra-strong and force yourself to change them from time to time
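As a toy illustration of point 1, a checker enforcing exactly those character classes might look like this (the 12-character minimum is my own assumption; pick a length that suits you):

```python
import re

def is_strong(password: str, min_length: int = 12) -> bool:
    """Check the policy from the list above: length plus lowercase,
    uppercase, digit, and special characters."""
    return (
        len(password) >= min_length               # assumed minimum length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(is_strong("correct-Horse-battery-9"))  # True
print(is_strong("password"))                 # False
```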

All of this is nothing new, but as the Evernote case reminds us to reset our account data and authentication information, we need to remind ourselves how to deal with passwords in a safe way until they die.

Categories: Cloud, IAG, IAM, Identity, Security, Strategy