Archive for the ‘IAG’ Category

Dell One Identity Manager 7.0 Rollup Package 1 released

November 2, 2015

Dell Software just released the first rollup package for Dell One Identity Manager 7.0, which comes with a bunch of resolved issues as well as a couple of cool new features. These are features I've been waiting for due to customer issues:

  • Support for encrypted emails using TLS, S/MIME and PGP
  • Reading and Assigning SAP security policies
  • Support for Powershell v3 and later
  • Transport of UNSRoot definitions including all necessary settings
  • Applied sort order of change labels during transports
  • Support for transporting Compliance Rules

But there are a couple of additional cool features included as well:
The REST API was extended to support additional capabilities such as calling methods, scripts, customizer methods and events as well as support for different collection load types. This makes the REST API a bigger part in the upcoming API economy in IAM / IAG. There have been some additional SAP HCM info types added to make the SAP HCM sync more powerful and to avoid additional programming effort.
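As a rough illustration of what such REST capabilities enable, here is a minimal sketch of a client preparing a call to fire an event on an object. The endpoint path and payload layout are assumptions for illustration only, not the documented D1IM REST interface:

```python
import json

# Hypothetical sketch only: the endpoint path and payload layout below are
# assumptions for illustration, not the documented D1IM REST interface.
def build_event_call(table, uid, event):
    """Build an HTTP request description to fire an event on an object."""
    return {
        "method": "POST",
        "path": f"/api/entity/{table}/{uid}/event/{event}",
        "body": json.dumps({"parameters": {}}),
    }

req = build_event_call("Person", "42", "CHECK_PERSON")
```

Sending such a request (e.g. with any HTTP client) would then let external applications trigger backend logic instead of just reading and writing entity data.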

The rollup package is currently available through Dell Support, not yet on the download page.

Seems like I have to use a night in the hotel to upgrade my testing environment this week.

Categories: D1IM, IAG, IAM, IDM, Security, Tools

Sailpoint addressing Data Access Governance

August 4, 2015

With the acquisition of Whitebox Security, Sailpoint is extending its portfolio into the emerging Data Access Governance market. The Whitebox Security suite will be rebranded and renamed into the Sailpoint product naming schema as SecurityIQ. The plan is to correlate identity information with the data-centric view of the IT infrastructure and bring visibility into "Who has access to what, using which entitlement?". According to the press release, SecurityIQ will be integrated into IdentityIQ and IdentityNow. Sailpoint might offer the same depth of integration between Identity and Access Management (IAM) / Identity Access Governance (IAG) and Data Access Governance (DAG) as Dell has with its Dell One Identity Manager and the Dell One Identity Manager Data Governance Edition, which was built on the foundation of the former Quest Access Manager. The market will stay heated up…

Categories: Access Governance, DAG, IAG, IAM, IDM, Tools

A D1IM programming snippet

August 1, 2015

A discussion about an implementation detail within Dell One Identity Manager came up with two colleagues during my family vacation, and I took it on after being back at my desk on Thursday this week. It all started with the simple question of how to catch the name of the event that triggered a process. The initial answer was "there's no way to get there", but this answer is at least outdated. There is indeed a way to catch the event name in a D1IM process by using the EventName property. So in case you have a process that is raised by two different events but want a process step to be generated only for one dedicated event, the generation condition would look like this:

Value = CBool(EventName = "<Name of the Event>")

Just wanted to share this; it might be a helpful snippet in one or the other project implementation.

Categories: D1IM, IAG, IAM, IDM, Programming, Tools

PoCs – are they really an inefficient use of time and money?

This blog post is meant as an answer to one of the latest blog posts from Identropy, "IAM Proofs of Concepts (POC) – An Inefficient Use of Time and Money", written by Luis Almeida (by the way: congratulations and all the best in your new job). It took me a while to gather all my arguments to reply to it.

The first argument Luis brings up is that PoCs are costly and deliver very limited value.

The cost effect is an interesting one. Typically (at least for the majority of PoCs I did in my career), costs are borne by the vendor or integrator attending the PoC at the customer's site. It is in the interest of the vendor or system integrator to show the capabilities of their product within the customer's environment, which is why most vendors and system integrators do not charge potential customers for PoC attendance. The only cost effect on the customer's side (and I do agree that this should not be underestimated) is defining the PoC scenario and establishing a lab environment to run the PoC in. Typically there are already testing environments beside the production network where development and software testing is done before moving IT systems to production. The PoC scenario should already be defined (at least in parts) by the daily processes the potential customer uses to onboard, update or offboard employee identities. If the customer is a green-field customer (having no legacy IDM solution in place), the approval processes are mostly kept in request forms available in the customer's intranet or as printouts, which can be leveraged to gather the information currently used for request fulfillment. I've also experienced PoCs where part of the scope was demonstrating the conceptual approach of migrating a manual process into a request-driven, automated workflow, in order to give the customer some insight into the approaches used and the way they are implemented in the solution.

The delivery of limited value is a bit tricky. For sure, the teams sent by vendors or integrators to do PoCs are mostly pure pre-sales consultants with a dedicated skill set geared towards delivering a successful PoC in order to sell the product. Mostly, but not every time. In smaller organizations PoCs are delivered by project-proven consultants and architects. Depending on the background of the vendor's or integrator's PoC delivery team, PoCs deliver different outcomes. It should be a duty of the vendor or integrator to set up a team of pre-sales consultants as well as professional services consultants to not only demonstrate the solution but also make the customer aware of potential pitfalls and of the arising complexity of the solution to be implemented, and to identify strategies to mitigate the risks that will affect the upcoming implementation project.

The next argument that Luis brings up is the handover of responsibility from the sales / pre-sales team to the services team after the PoC in order to deliver a successful project.

Indeed, this is the most complex part of the transition of an IAM implementation project. Information might get lost, the SOW created by the pre-sales team might not fit the customer's requirements, and the services team might get caught in a financial frame that is too tight to do the project in a safe way. But there is a solution to this problem: integrate the services organization into your pre-sales cycle as early as possible. What's that good for? They can assist the pre-sales consultants onsite during the PoC by bringing in real-life experience instead of purely trained pre-sales know-how on how to achieve goals in a short amount of time to get the PoC scenario done. The services organization can also assist in creating the SOW by bringing in real-life experience from ongoing or former implementations, and by justifying potentially higher project costs with their experience of the pitfalls, complexity and data-driven issues of implementation projects.

The third argument Luis brings up is the significant opportunity to get a better outcome from a product evaluation phase by engaging a services partner. I totally agree with this approach, as it allows the customer organization to focus on their needs and living processes while having a partner in place that is dedicated to the IAM implementation and the search for the perfect product for the customer's situation, architecture and team. The only thing I have to mention for this scenario: are service providers or solution integrators really unbiased? They are probably a bit more unbiased than a vendor's sales and pre-sales team, but not completely unbiased, as they typically partner with a couple of vendors to distribute and customize their solutions. From a customer's perspective, the best approach would be to engage a service provider for the product evaluation phase and the definition of use cases and requirements, but to have another service provider or solution integrator, maybe even the vendor itself, do the project implementation.

The last argument that comes up in Luis' blog post is "If POCs were an effective means to evaluate IAM solutions, there would not be so many failed implementations in the market."

For sure, there are failed implementations out there that came through PoCs and a suboptimal handling of the transition from the sales cycle into the implementation phase executed by services organizations. The implementation might also fail if the product evaluation was done by a service provider, as there are so many factors that can be a source of project failure.

As a final conclusion from both blog posts, I think we agree that things have to change around the product evaluation process. Things have to change on the customer side as well as on the side of vendors and system integrators. And last but not least: this does not only affect IAM projects.

Categories: IAG, IAM, IDM, Strategy

Passwords must die – we’re on the way

For several months and across identity-related conferences there has been one hot topic, still ongoing and represented as a popular hashtag within the IAM crowd (some call them / us identirati): #PasswordsMustDie

I already spent some time in March blogging a few lines on #PasswordsMustDie in the article "Passwords must die – but how". And over the past weeks I spent some time looking around in various places to see how things are going on the way to kill passwords. There's a bunch of news in that space that I'd like to wrap up pretty quickly.

The FIDO alliance

The FIDO alliance (FIDO stands for Fast IDentity Online) was formed as a non-profit organization in summer 2012 to change the nature of user authentication. Some well-known names among the members of the FIDO alliance are:

  • Google
  • Lenovo
  • PayPal
  • PingIdentity
The alliance is still growing, making its way to bring a FIDO plugin supporting various FIDO authenticators, such as hardware-based tokens, fingerprints and voice identification, as well as combinations of those, differentiating them into two kinds of tokens:
  1. Identification tokens as unique identifiers being associated with an online identity
  2. Authentication tokens for identity proofing

Mozilla Persona

In April 2013, the Mozilla Identity team announced the second beta of Persona as a simple way to log in to various services and web sites using any modern internet browser. Their simple goal: eliminate passwords on the web. Although the base of services and web sites is still small, I expect them to grow their services base over the coming months.


Both the FIDO alliance and Mozilla Persona show that there is something going on to kill passwords. These initiatives will see a major boost in usage as soon as some bigger services start supporting their technology and approach. As long as services like Twitter and LinkedIn merely enable two-factor authentication as a result of various security incidents, the password remains in use, although just as a single part of authentication. Let's see which popular service is the first to adopt technologies such as those offered by FIDO or Mozilla; then we might see some real security improvements.

Categories: Cloud, IAG, IAM, Identity, Security, Strategy

Continuity vs. Reinvention

In a lead architect role I'm currently involved in a project to design the migration of a very mature IAM implementation towards a newer release of the same IAM solution suite within a complex infrastructure. I don't want to shed light on the technical details here, but I would like to discuss the impact on the end user in such a migration.

The customer is using the web-based end user portal heavily, with a user base beyond 100,000 end users all over the world. Due to the lack of oob features in the currently used release, the customer spent a lot of time and money extensively customizing the web portal. In fact, there are NO standard modules used anymore; the whole web portal was reimplemented by the customer and their current service provider to offer the best-of-breed value for their end users.

Facing the challenge of migrating the existing solution to a newer release of the same IAM suite, we're facing some issues here:

  • Due to the fact that we plan to jump over two major releases, lots of the backend technology, the database structure as well as the web portal designer engine and controls have changed, were replaced or were discontinued
  • To enhance the end user experience, the customer's service provider implemented a bunch of custom controls in the web portal using the existing web portal designer engine, which are sometimes based on oob controls
  • The web portal implementation is doing some tasks in a very special way; not all tasks initiated through the web portal are handled and executed by the IAM solution's backend engine (as would be the usual way), but some are driven directly out of the web portal itself without ever interacting with the backend engine (e.g. several web service calls)

All of these issues and their consequences have to be taken into consideration when planning the migration, as a strategy as well as for the technical implementation itself. Leaving effort and cost aside, we're coming down to the following decision:

Rebuild the web portal as it exists today OR reinvent the web portal using the newer technology and oob controls?

Or just rephrasing the question: Continuity or Reinvention? Let’s discuss both of them a bit first before coming to my final conclusion.


Keeping an eye on the huge end user base of more than 100,000 users, it might be worth spending the effort to rebuild the same heavily customized web portal as it exists today. This will minimize the impact on the end user in the final phase of the solution migration and might keep the help desk and the IAM team in a comfortable situation. But the price to be paid is high: all controls that can't be migrated automatically using the IAM suite's migration features have to be rebuilt on the new platform. So effort has to be spent that has already been spent and paid for by the customer. Keeping a high level of customization increases the complexity of the final solution implementation as well as the potential maintenance effort, but also the risk when migrating towards newer versions of the IAM suite in a few years. It also makes the customer dependent on the implementing service provider, as the implementation know-how will reside with them. The solution vendor might not be able to support all of the implementation, as it might be too heavily customized, which also does not bring any value to the customer.


Reinvention means migrating the complete web portal towards the standard solution features by limiting customization to the lowest level possible (which in the best case would be the customer's CI). The nearer the solution's implementation is to the delivery standard, the better the supportability by the solution vendor instead of the implementing service provider, which brings the customer into a much more comfortable situation for future business with the vendor as well as with the service provider landscape. On the other hand, there is an impact on the huge end user base, as they will have to relearn and / or adapt to the newer solution's web portal and its functionality, as it will have a different look and behavior. This will also have an impact on the customer's help desk and the IAM team, as they will be the point of contact for all the end users having trouble adjusting to the new solution and the new way of handling things. This can be mitigated by providing training material, web casts and regular updates during the implementation process to make key users and power users aware of the upcoming changes. Utilizing the group of key users and power users streamlines the process of sharing knowledge and information from the help desk and the IAM team through the key users and power users to the regular end users.


From an architect's view, my conclusion is pretty clear: I'm recommending the reinvention of the end user web portal, although there will be an impact on the huge end user base. Why do I want to do that?

  1. Bringing the solution back towards the oob standard as far as possible makes the solution less complex and improves the maintenance situation for the customer itself as well as for the service provider (which might not necessarily be the implementing service provider)
  2. The implementation effort is not as high as keeping the web portal as it is, which would mean spending a lot of time and material on continuity by reimplementing everything that has been implemented so far
  3. The end user impact will peak at the beginning of the solution's roll-out but will decrease quickly

From a realist's view, knowing the customer for a while, I'm pretty sure we will end up with some kind of mixture between continuity and reinvention. But the strategy I'd like to propose to my customer is clear: decrease customization over time. Maybe it's worth spending the money on reimplementing the existing solution in the newer release and then starting a process of moving feature by feature back to the standard.

Categories: IAG, IAM, Identity, IDM, Migration, Strategy

MDM in the context of IAM

While BYOD is not the newest phenomenon in the IT and security area, I just had my first project touching a Mobile Device Management platform. As part of the identity lifecycle, it's necessary to get control over mobile devices used by the end user. While this is pretty easy regarding mail integration (as soon as the user gets deprovisioned, mailbox access is no longer possible), it's not that easy to handle regarding profiles and the user's own apps.

In my customer's case, they have MobileIron deployed in their infrastructure. As part of the deprovisioning process, they came up with the requirement to retire devices used by the terminated employee within their MobileIron instance, which removes all certificate-based access from that device to the customer's Wi-Fi and network resources.

While the MobileIron API does support HTTP requests to retire devices, it is necessary to have the ID of a device in order to retire it. Luckily, there is an HTTP web request to get a decent set of device attributes from MobileIron. We chose the most convenient and quickest approach: extending our IAM database model with a table to store the data of mobile devices, with a foreign key link to the employees table. Calling a dedicated HTTP request within MobileIron, we get a CSV back carrying that set of attributes. This CSV then gets imported into the IAM system. This process happens every hour.
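The import step can be sketched roughly as follows. The CSV column names ("deviceId", "userId", "platform") are assumptions for illustration; the actual export layout depends on the MobileIron request being called:

```python
import csv
import io

# Sketch of the hourly import described above. The CSV column names
# ("deviceId", "userId", "platform") are assumptions for illustration,
# not the exact layout of MobileIron's export.
def parse_device_export(csv_text):
    """Map the CSV export to rows for the custom mobile device table."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "DeviceID": rec["deviceId"],
            # foreign key link into the employees table
            "EmployeeID": rec["userId"],
            "Platform": rec.get("platform", ""),
        })
    return rows

sample = "deviceId,userId,platform\nD-100,E-007,iOS\nD-101,E-008,Android\n"
rows = parse_device_export(sample)
```

In the real job, the rows would then be written into the custom device table so the IAM system always has an hourly-fresh device-to-employee mapping.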

As an employee now gets terminated, we also kick off a process to retire all devices known for this employee in MobileIron. So far, this satisfies the customer's requirements.
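The deprovisioning step above can be sketched as preparing one retire call per known device. The endpoint path and parameter names below are assumptions for illustration, not MobileIron's documented API:

```python
# Hypothetical sketch of the deprovisioning step: the endpoint path and
# parameter names are assumptions for illustration, not MobileIron's
# documented API.
def build_retire_requests(base_url, device_ids, reason="employee terminated"):
    """Prepare one retire call per device known for the terminated employee."""
    return [
        {
            "method": "POST",
            "url": f"{base_url}/api/devices/retire",
            "params": {"deviceId": did, "reason": reason},
        }
        for did in device_ids
    ]

reqs = build_retire_requests("https://mi.example.com", ["D-100", "D-101"])
```

The device IDs come from the custom device table filled by the hourly import, so the termination process only has to look up the employee's foreign-key-linked devices and fire the calls.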

For a later phase, this also satisfies additional requirements that will come up (or already came up while defining upcoming phases of the IAM strategy): being able to use the data from a governance and access management perspective also answers questions such as "Who is accessing the enterprise network with what kind of devices?" or "Are there devices with a software release that is not safe to let them touch the enterprise network and enterprise resources?".

To cite a good IAM guy I did a project with: "Building an IAM implementation is like building a house: it's all based on a strong foundation."

I expect to see more projects coming up with even deeper integration between IAM and MDM as the BYOD wave is still rising…