Radovan Semančík's Weblog

Monday, 27 June 2016

MidPoint 3.4, code-named "Heisenberg", was released a few days ago. This is the sixteenth midPoint release since the project started all these long years ago. MidPoint has come a very long way since then.

The Heisenberg release is the best midPoint release yet. We have finished the access certification functionality, which makes midPoint the very first open source product to enter the identity governance and compliance playing field. We have also improved midPoint internals to better handle inconsistencies in resource data and made many small internal improvements to increase robustness. This was one of the inspirations for the code name. Similarly to Heisenberg's uncertainty principle, midPoint accepts that there is some degree of uncertainty when it comes to processing identity data. It may not be practically possible to always base decisions on authoritative data. A practical identity management system needs to accept that identity data are always in a state of flux, and midPoint does just that. And it manages the data reliably even in situations where other systems fail miserably.

So, midPoint now has governance features. This is really big news. Much bigger than you may expect. Why? Because midPoint is a brilliant identity management system. Identity provisioning runs in midPoint's veins. The release of midPoint 3.4 makes the term "closed-loop remediation" obsolete. Any governance decision is immediately reflected in a provisioning action, because it all happens inside one system. There is no need to painfully integrate provisioning and governance engines any more. MidPoint does it all!

Even though the governance features in midPoint are really big news, there is an even more important improvement in midPoint 3.4: the user interface. The midPoint user interface went through a major facelift during the last two releases. And the Heisenberg release brings the results. The user interface is much more streamlined, it is consistently color-coded, it is much more user-friendly and it just looks good. See it for yourself:

Even though midPoint currently has the richest user interface among all the open source IDM systems, there are still more user interface improvements planned for the future, and usability is one of our big priorities. Usability is something that needs to be continuously improved. And it will be. There are also big plans to expand the governance and compliance features in upcoming midPoint versions. MidPoint is by far the richest open source IDM system and it improves all the time.

The Heisenberg release is without any doubt a major milestone in midPoint history. It comes after long years of very hard work. But it was worth it. Every second of it. And the midPoint team is very proud of the result. So, just give it a try.

(Reposted from Evolveum blog)

Posted by rsemancik at 6:10 PM in Identity
Friday, 27 May 2016

It isn't. That's how it is. Why? Take any study describing potential information security threats. What do you see among the top threats there? Take another study. What do you see there? Yes. That's the one. It is consistently marked as one of the most serious threats in the vast majority of studies published over (at least) the last couple of decades. Yet it looks like nobody really knows what to do about this threat. So, who is this supervillain? He's right under your nose. It is the insider.

It all makes perfect sense. The employee, contractor, partner, serviceman: they all get access rights to your systems easily and legally. But do you really know who has access to what? Do you know whether that access is still needed? Maybe this particular engineer was fired yesterday, but he still has VPN access and administration rights to the servers. And as he might not be entirely satisfied with the way he left the company, the chances are he is quite inclined to make your life a bit harder. Maybe leaking some of the company records to which he still has access would do the trick? It certainly will. And who is the one to blame for this? Is the security officer doing his job properly? Do we know who has access to what right now? Do we know if the access is legal? Are we sure there are no orphaned accounts? Are we sure there are no default or testing accounts with trivial passwords? Can we disable the accounts immediately? Maybe we can disable password authentication, but are you sure that there is no other way around that? What about SSH keys? What about email-based or help-desk password resets?

If you do not have good answers to these questions then your information security is quite weak. I'm sorry. That's how it really is. Do you remember the weakest-link idiom that is taught in every information security training? Now you know where your weakest link is.

But what to do about it? Obviously, you need to manage the access. So maybe Access Management (AM) software can help here? Actually, the primary purpose of Access Management software is not security. Its purpose is to make users' lives easier by implementing convenience mechanisms such as single sign-on (SSO). Yes, AM might improve authentication by adding a second factor, making the authentication adaptive and so on. But that won't help a bit. Authentication is not your problem. The insider already has all the credentials to pass the authentication. He got the credentials legally. So even the strongest authentication mechanism in the world will do absolutely nothing to stop this attack. No, authentication is not the problem, and therefore Access Management is not going to make any significant difference.

The root of the problem is not in authentication, authorization, encryption or any other security buzzword. It is a plain old management issue. People have access where they should not have access. That's it. And what turns this into a complete disaster is the lack of visibility: the people responsible for security do not know who has access to what. Therefore improvements in "information security proper" are not going to help here. What needs to be improved is the management side: management of identities and access rights. And (surprise, surprise) there is a whole field which does exactly that: Identity Management (IDM).
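At its core, the visibility problem is a reconciliation problem: compare who *should* have access against who *actually* has an account. A minimal sketch of the idea, using made-up data (this is an illustration, not any particular IDM product's API):

```python
# Hypothetical example: reconcile accounts found in a target system
# against the authoritative record of current employees.

hr_records = {"alice", "bob", "carol"}               # currently employed
server_accounts = {"alice", "bob", "dave", "test"}   # accounts on a server

# Accounts with no matching identity are orphaned: a fired engineer,
# a forgotten testing account, a default account nobody remembers.
orphaned = server_accounts - hr_records

print(sorted(orphaned))  # ['dave', 'test']
```

A real IDM system does this continuously, across all connected systems, and can act on the result (disable, report, escalate) instead of just printing it.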

Therefore there is no real security without Identity Management. I mean it. And I've been saying this for years. I thought that everybody knew it. But obviously I was wrong. So recently I have been putting it openly in my presentations. But still everybody is crazy about deploying Access Management, SSO and OpenID Connect and OAuth and things like that. And people are surprised that it costs a fortune and yet does not bring any substantial security improvement. Don't get me wrong, I'm not telling you that the AM technologies are useless. Quite the contrary. But you need to think about how to manage them first. Implementing SSO or OAuth without identity management is like buying a super expensive sports car with an enormous engine but completely forgetting about the steering wheel.

Don't make such dangerous and extremely expensive mistakes. Think about identity management before heading full speed into the identity wilderness.

(Reposted from Evolveum blog)

Posted by rsemancik at 2:16 PM in Identity
Wednesday, 18 May 2016

Test-Driven Development (TDD) tells us to write the tests first and only then develop the code. It may seem like a good idea. Like a way to force lazy developers to write tests. A way to make sure that the code is good and does what it should do. But there's the problem. If you are doing something new, something innovative, how the hell are you supposed to know what the code should do?

If you are doing something new you probably do not know what the final result will be. You are experimenting, improving the code, changing the specification all the time. If you try to use TDD for that you are going to fail miserably. You will have no idea how to write the tests. And if you manage to write them somehow, you will be changing them all the time. This is wasted effort. A lot of wasted effort. But we need the tests, don't we? And there is no known force in the world that will make a developer write good and complete tests for an implementation once the implementation is finished. Or ... is there?

What we are using in the midPoint project is Test-Driven Bugfixing (TDB). It works like this:

  1. You find a bug.
  2. You write an (automated) test that replicates the bug.
  3. You run the test and you check that the test is failing as expected.
  4. You fix the bug.
  5. You run the test and you check that the test is passing.

That's it. The test remains in the test suite to avoid future regressions. It is a very simple method, but a very efficient one. The crucial part is writing the test before you try to fix the bug. Even if the bugfix is a one-liner and the test takes 100 lines to write. Always write the test first and see that it fails. If you do not see this test failure, how can you be sure that the test replicates the bug?
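The five steps can be sketched with a deliberately trivial bug. This is an illustration of the method only, not midPoint's actual test suite (midPoint is Java; Python is used here for brevity):

```python
# Test-Driven Bugfixing, step by step, on a deliberately trivial bug.

# Step 1: a bug is found - this implementation crashes on an empty list.
def average(values):
    return sum(values) / len(values)

# Step 2: write an automated test that replicates the reported bug.
def test_average_of_empty_list():
    assert average([]) == 0  # expected behaviour: empty input yields 0

# Step 3: run the test and check that it fails as expected.
try:
    test_average_of_empty_list()
    replicated = False
except ZeroDivisionError:
    replicated = True
assert replicated  # the test really does replicate the bug

# Step 4: fix the bug.
def average(values):
    return sum(values) / len(values) if values else 0

# Step 5: run the test again and check that it passes.
test_average_of_empty_list()  # no exception: the fix works
# The test stays in the suite to catch future regressions.
```

Seeing the failure in step 3 is the whole point: it proves the test is actually exercising the bug before any fix is attempted.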

We have been following this method for more than five years. It works like a charm. The number of tests keeps increasing and we currently have several times more tests than our nearest competition. The subjective quality of the product is also steadily increasing. And the effort to create and maintain the tests is more than acceptable. That is one of the things that make midPoint great.

(Reposted from Evolveum blog)

Posted by rsemancik at 5:18 PM in Software
Wednesday, 11 May 2016

I like OpenLDAP. The OpenLDAP server is famous for its speed and good open source character. But it is really infamous for its ease of management. Or rather the lack of anything that could be called "easy" when it comes to managing OpenLDAP. Managing OpenLDAP content is not that difficult. For manual management there is the excellent Apache Directory Studio. For automated management and synchronization there is our very own midPoint. No, it is not the content that is a problem. It is the configuration.

OpenLDAP has a really cool but quite cumbersome and very under-documented OLC-style configuration. It is configuration of the LDAP server using the LDAP protocol itself, which seems to be becoming quite a standard feature of all good LDAP servers. But it is really a pain to use in practice: you have to prepare an LDIF file, figure out the correct DN, compose a long ldapmodify command line, etc. Not very practical at all. But what about this?

$ sudo slapdconf list-suffixes
dc=evolveum,dc=com
dc=example,dc=com
$ sudo slapdconf get-suffix-prop dc=example,dc=com
olcDatabase : {2}mdb
olcDbDirectory : /var/lib/ldap/example
.... (shortened for clarity) ....
$ sudo slapdconf set-server-prop idle-timeout:120
$ sudo slapdconf get-server-prop
olcIdleTimeout : 120
olcLogLevel :
  stats
  stats2

This is my micro-project slapdconf. A very handy tool. I have been using it and maintaining it for the last two years, but some parts go back more than 10 years. But now I'm looking for help with this little project. Firstly, it is written in Perl. Perl was the cool thing when this all started. I'm an old Perl monk, and when you have a hammer every problem looks like a nail. So I've nailed it in Perl. But despite the long-awaited Perl 6 release, Perl is not a very suitable tool any more. So I'm looking for someone who is fluent in Python to rewrite these little scripts in Python. I can maintain them in Python, but I'm not confident enough to get the Python style right when starting from scratch. I'm also looking for someone who would help me to properly package this in deb packages so it can be easily distributed. Anyone willing to help?

(Reposted from Evolveum blog)

Posted by rsemancik at 8:49 PM in Identity
Wednesday, 9 March 2016

In identity management there is a class of petty issues that appear and re-appear all the time. Even though these issues are easy to understand, they are tricky to completely eliminate and they often have very nasty consequences. These seemingly unimportant issues frequently result in nights spent resolving a total breakdown of an IDM system. What is this devil that kills sleep and keeps engineers away from their families? It is the demon of case insensitivity and his friends.

It works like this: an IDM system often maps its own username to the usernames used by a target application. The IDM usernames are often clean and pretty alphanumeric lowercase strings such as 'foobar'. Many applications are perfectly happy with that. But there are some notorious systems that insist that lowercase is no good and that uppercase is the only proper case. So they silently transform 'foobar' to 'FOOBAR'. Now, these are two different strings for an IDM system, and that's where the trouble begins. The naïve IDM solution is to always treat the username as case-insensitive. But that won't work. There are systems that strictly insist on case sensitivity. E.g. for a UNIX system 'foobar', 'fooBAR' and 'FOOBAR' are three very different identifiers. That leads to even worse trouble if the IDM system fails to recognize it: e.g. it is quite easy to get mass duplication of UNIX accounts. The naïve solution will not work here.

And it gets even more complicated. E.g. an LDAP distinguished name (DN) is quite a slippery beast. It is (usually, but not always) case-insensitive. But it also has an internal structure that tolerates whitespace. E.g. 'cn=foobar, dc=example, dc=com' and 'cn=foobar,dc=example,dc=com' are equivalent. There are similar rules for other formats as well. E.g. the URIs 'http://example.com/foo%20bar' and 'http://example.com/foo+bar' may be treated as equivalent by applications that decode '+' as a space. Obviously, there is no simple solution to this little problem with a nasty head. And really bad things are bound to happen if the IDM system fails to recognize that two identifiers are in fact the same. What is even worse, these issues are often overlooked at the beginning, when the IDM system is tested and deployed. It is only when the system is filled with data and real operation begins that the disaster strikes.
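The equivalences described above can be demonstrated with a few lines of stdlib Python. The DN normalizer is deliberately simplified (real DN matching also handles escaping and multi-valued RDNs); it is a sketch of the problem, not a production-grade comparator:

```python
from urllib.parse import unquote_plus

# Naive string comparison considers these two DNs different...
dn1 = "cn=foobar, dc=example, dc=com"
dn2 = "CN=foobar,DC=example,DC=com"

def normalize_dn(dn):
    # Simplified sketch: lowercase, strip spaces around commas.
    # Real DN matching is more involved (escaping, multi-valued RDNs).
    return ",".join(part.strip().lower() for part in dn.split(","))

# ...but after normalization they match, as LDAP semantics require.
assert normalize_dn(dn1) == normalize_dn(dn2)

# Similarly, in form-encoded data '%20' and '+' both decode to a space.
assert unquote_plus("foo%20bar") == unquote_plus("foo+bar")
```

An IDM system that compares such identifiers with plain string equality will treat these pairs as different identities, which is exactly how duplicates creep in.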

MidPoint has had a solution to these issues for many years already. It is called matching rules. Simply speaking, matching rules are little algorithms that compare values. These algorithms can be attached to individual attributes. Then midPoint knows that 'foobar' and 'FOOBAR' are in fact the same thing. This makes the operation of midPoint reliable even if the connected applications are doing crazy things with the data. The matching rules can also normalize the value, so midPoint can do efficient large-scale searches and matching. These are very useful little thingies. And they are essential for the reliable operation of any IDM system.
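Conceptually, a matching rule is just a comparison algorithm attached to a specific attribute. A hypothetical miniature of the idea (this is not midPoint's actual API, just a sketch of the concept):

```python
# A matching rule is a normalization algorithm attached to an attribute.
# Attributes without a rule fall back to exact comparison.

matching_rules = {
    "username": str.lower,   # case-insensitive matching
    "uid": str.strip,        # exact value, ignore stray whitespace
}

def values_match(attribute, a, b):
    rule = matching_rules.get(attribute, lambda v: v)  # default: exact
    return rule(a) == rule(b)

# 'foobar' and 'FOOBAR' are the same account...
assert values_match("username", "foobar", "FOOBAR")
# ...but attributes without a rule stay strictly case-sensitive.
assert not values_match("homeDirectory", "/home/a", "/home/A")
```

Because the rule also produces a normalized form, the same function can be used to index values for efficient large-scale searching and matching.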

So, what is so interesting about all of this if there is already a solution that has worked for several years? Well, quite a lot. The matching rules are not very easy to configure. They are the little things that the engineer always forgets to configure until all these huge chunks of data are migrated into the IDM system. And the duplicates slowly (but persistently) start to appear. But at that point it is quite late to configure the matching rules, as the data need to be re-normalized and re-evaluated. This made the use of matching rules quite tricky.

The curious part of this story is that many systems can actually tell that the value of a certain attribute is case-insensitive, or that it is a DN or a UUID. And an identity connector can easily detect that. Yet the original Identity Connector Framework developed by Sun Microsystems had absolutely no means for the connector to tell that to the IDM system. This was simply insane. This insanity has now been fixed in the ConnId project. And it is already supported in the midPoint development code. Smart connectors can detect value subtypes and midPoint will automatically determine matching rules based on that. No need for explicit configuration. This is one more nasty, tricky thing that is going to be eliminated in midPoint 3.3.1 and 3.4. And this is how midPoint continually improves its practical usability and deployment efficiency. MidPoint is indeed built to make the engineer's life easier.

(Reposted from Evolveum blog)

Posted by rsemancik at 12:16 PM in Identity
Wednesday, 17 February 2016

Identity Management (IDM) systems usually provide quite a broad mix of features. But there is one thing that no other system can do: management of access rights. No other system comes even close, even if they often pretend to do so. Access rights, privileges, role assignments, authorities, authorizations ... whatever these things are called, they need to be managed. They need to be assigned to the right people in the right systems at the right time. And that is no easy task.

The naïve solution would be to adopt a precise model such as Role-Based Access Control (RBAC) and automatically assign the roles based on deterministic rules (policies). That looks great on paper and auditors love it. But it almost never works in practice. The problem is that nobody can really define rules that cover every single privilege in every single system for every single user in every single situation. Even if someone spends the tremendous effort to define such rules, it will not take long until the rules are outdated. It usually takes just the first re-organization to almost completely ruin all the effort.
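The deterministic-policy approach can be pictured in miniature. The rules below are hypothetical, invented for illustration; the point is how brittle such a rule set is, not how any particular product encodes it:

```python
# Hypothetical deterministic role-assignment rules: each rule maps user
# attributes to a role. Fine on paper; in practice the rule set never
# covers every system, user and situation, and it goes stale fast.

rules = [
    (lambda u: u["department"] == "sales", "crm-user"),
    (lambda u: u["department"] == "engineering", "vcs-developer"),
    (lambda u: u["is_manager"], "approver"),
]

def assigned_roles(user):
    return [role for condition, role in rules if condition(user)]

user = {"department": "engineering", "is_manager": True}
print(assigned_roles(user))  # ['vcs-developer', 'approver']

# After a re-organization renames "engineering" to "R&D", the second
# rule silently stops matching - the failure mode described above.
```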

Therefore the IDM systems implement a less formal but much more practical process. The users request the access rights that they think they need to do their work. Someone else reviews the request and approves or rejects it. These approvers are usually managers, security officers, owners of the requested role, owners of the affected system, or any combination of these. This method admits that some decisions can only be made by people because they are difficult to automate. This process may not seem entirely ideal from a security and compliance point of view. But the choice is between a fully automated, formal, but infeasible model and a semi-formal, partially automated process that is perfectly feasible. This choice is not that difficult, is it? In fact the approval process is very successful in practice, and if implemented properly it yields surprisingly good results.

Of course, there is a catch.

The request-approval-provision process works very well. In fact it works a bit too well. Users can request and get privileges easily, in a matter of hours or even minutes. There is an easy way to add privileges and no practical way to remove them. It ends up pretty much as expected: privileges accumulate. A lot.

Clearly, there also needs to be a process to remove privileges. But the same approach will not work here. If the user does not have a privilege that he needs he is motivated to request it. But if someone has a privilege that he no longer needs - what is the motivation to drop it? Obviously someone else needs to do it. That's where the access certification campaigns come in.

An access certification campaign is usually executed at regular intervals. The IDM system determines a list of reviewers and a list of privileges that need to be re-certified. The reviewers are usually managers or system owners. They are selected in a similar way as approvers in the role request-and-approval process. The privileges are distributed to the reviewers. Each reviewer has to decide whether the privilege is still required for the specific user to do his work. This is done in a fashion that allows a lot of decisions to be made very efficiently. Like this: (screenshot: Access certification in midPoint) If a privilege is not re-certified in this way, then it is automatically removed when the campaign ends. This is the way to keep privilege accumulation under control.
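The campaign mechanics described above can be sketched in a few lines. The data and function names here are hypothetical illustrations of the process, not midPoint's actual data model:

```python
# Hypothetical sketch of certification-campaign mechanics: privileges
# are distributed to reviewers, and anything not explicitly certified
# is removed when the campaign ends.

privileges = [
    ("alice", "crm-admin"),
    ("bob", "crm-admin"),
    ("bob", "vpn-access"),
]

# Reviewer decisions collected during the campaign. A privilege with
# no recorded decision is treated the same as a revoked one.
decisions = {
    ("alice", "crm-admin"): "certified",
    ("bob", "crm-admin"): "revoked",
    # ("bob", "vpn-access"): reviewer never responded
}

def close_campaign(privileges, decisions):
    # Keep only privileges that were explicitly certified.
    return [p for p in privileges if decisions.get(p) == "certified"]

print(close_campaign(privileges, decisions))  # [('alice', 'crm-admin')]
```

The crucial design choice is the default: silence means removal, which is what actually counteracts privilege accumulation.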

The certification campaigns are usually implemented in specialized systems that belong to the Governance, Risk Management and Compliance (GRC) category. It is still not entirely common for an IDM system to implement this feature. And this feature is especially rare in the open source IDM systems. In fact there is only one open source system that implements it out-of-the-box: Evolveum midPoint.

Access certification has been a part of midPoint since version 3.2. In that version the feature was available as a technology preview. Usability is a critical part of the process, therefore we first implemented it as a preview to gather user feedback and incorporate it into the final implementation. Two versions later, the feature is implemented in its entirety. The implementation is now finished and it is already merged into the midPoint master branch. It will be released in midPoint version 3.4, which is due in a couple of months. This is the final step that makes midPoint the only open source IDM system capable of handling complete identity lifecycle management out of the box.

(Reposted from Evolveum blog)

Posted by rsemancik at 1:42 PM in Identity
Tuesday, 17 November 2015

The LDAP conference was held in Edinburgh this year. And it was fascinating.

It was the first time I had visited Scotland. Despite the infamous weather conditions it was a very pleasant experience. Edinburgh is a really impressive city. And Scotland has much to offer in the form of food and drink, which pretty much compensates for the weather.

It was also my first time at LDAPcon. And now I regret that I missed the previous conferences. I have decided that I will not repeat that mistake ever again. The conference size is just right: enough people to make it interesting and not too many to make it a crowded place. There were LDAP hardcore topics, engineering topics, standards talks and even an excursion into digital humanities and a violin performance. Overall it was a very interesting mix.

My talk was about the way to construct a complete open source IAM solution without vendor lock-in. First of all I described why we need a "complete solution" and not just LDAP. While this motivation is quite clear to veteran IAM practitioners, it is still not common knowledge. I also described the idea of an "ecosystem" that can form a platform for open source companies to cooperate.

Even though the formal part of the ecosystem is still forming, the technology works today. Right now. Katka Valalikova demonstrated that right away with a live demo of midPoint and OpenLDAP. So, we are doing it differently than most of the commercial world: we have working technology before we start selling it.

Perhaps the most important takeaway for me is the overview of what other people are doing. And this is really excellent news for midPoint. It looks like midPoint is far ahead of all the other presented activities when it comes to provisioning and synchronization. The other presented projects were interesting in their own right. But it looks like there are only very few solutions for keeping directory service content consistent with the outside world. Except for systems such as midPoint. There were also many interesting discussions about midPoint after my talk. I take that as a confirmation that we have made good choices and midPoint is going in the right direction.

So, see you in 2017 at the next LDAPcon.

Posted by rsemancik at 1:02 PM in Identity
Thursday, 13 August 2015

There are not many occasions when a CxO of a big software company speaks openly about sensitive topics. A few days ago that happened at Oracle. Oracle's CSO Mary Ann Davidson posted a blog entry about reverse engineering of Oracle products. Although it was perhaps not the original intent of the author, the blog post quite openly described several serious problems of closed-source software. That might be the reason why the post was taken down very shortly after it was published. Here is the Google cached copy and a copy on seclist.org.

So, what are the problems of closed-source software? Let's look at the Davidson's post:

"A customer can’t analyze the code ...". That's right. The customer cannot legally analyze the software that is processing his (sensitive) data. The customer cannot contract an independent third party to do this analysis. The customer must rely on the work done by organizations that the vendor chooses. But how independent are these organizations if the vendor selects them and very often pays them?

"A customer can’t produce a patch for the problem". Spot on. The customer is not allowed to fix the software. Even if the customer has all the resources and all the skills, he cannot do it. The license does not allow fixing a broken thing. Only the vendor has the privilege to do that. And the customer is not even allowed to fully check the quality of the fix.

"Oracle’s license agreement exists to protect our intellectual property." That's how it is. Closed-source license agreements are here to protect the vendors. They are not here to make the software better. They are not here to promote knowledge or cooperation. They are not here to prevent damage to the software itself or to the data processed by the software. They are not helping the customer in this way. Quite the contrary. They are here for the purpose of protecting the vendor's business.

In the future, children will learn about the historical period of the early 21st century. The teacher might mention the prevailing business practices as a curiosity to attract the attention of the class. The kids won't believe that people in the past agreed to such draconian terms, known as the "license agreement".

(Reposted from Evolveum blog)

Posted by rsemancik at 12:48 PM in security
Tuesday, 9 June 2015

A significant part of open source software is developed by small independent companies. Such companies have small and highly motivated teams that are incredibly efficient. The resulting software is often much better than comparable software created by big software vendors. Especially in the Identity and Access Management (IAM) field there are open source products that are much better than the average commercial equivalent. And the open source products are much more cost-efficient! This is exactly what the troubled IAM field needs, as the closed-source IAM deployment projects struggle for better solution quality and (much) lower cost.

It is obvious that small independent open source companies can deliver great software. But the usual problem is that software created by a small company is a "point solution". Such a product is a remarkable tool for solving a very specific set of problems. But no small company really provides a complete solution. Every engineer knows what it takes to integrate products from several companies. It is no easy task. So this was an obstacle that kept the open source IAM technologies from reaching their full potential. But this obstacle is a thing of the past. It does not exist any more.

Several open source IAM vendors have joined together in a unique cooperative group that has the working name "Open Source Identity Ecosystem". This includes companies such as Evolveum, Symas and Tirasa. The ecosystem members have agreed to support each other in activities that involve product integration. The primary goal of the ecosystem is to create and maintain a complete IAM solution (or rather a set of solutions) that will match and surpass all the closed source IAM solution stacks.

The ecosystem is much more than yet another technology stack. The ecosystem is a completely revolutionary concept.

A stack is usually a simple set of products piled on top of each other and roughly integrated together. E.g. if a customer needs an identity management component from the stack, he usually has only one option. The freedom of choice is severely limited. This leads to vendor lock-in, lack of flexibility and a very high cost.

But an ecosystem is different. The ecosystem adds a whole new dimension. There are several options for each component. E.g. if a customer needs an identity management component from the ecosystem, there are several options to choose from: Apache Syncope supported by Tirasa and midPoint supported by Evolveum. There is no vendor lock-in. If one of them fails to meet the expectations, there is always a second choice. Evolveum and Tirasa are competing companies, yet they have agreed on a common set of interfaces to make crucial parts of their products interoperable. Therefore both products can seamlessly live in the same ecosystem. But the internal competition still keeps the incentive for both products to evolve and improve. This concept provides a completely new experience and freedom for the customers. It also brings an enormous number of new opportunities to system integrators, value-added partners, OEM-like vendors and so on.

The ecosystem is completely open. If you like this idea you can join the ecosystem. This can be especially attractive for companies that maintain open source projects in the IAM field. But open-source-friendly system integrators and service providers are also more than welcome. Please see the discussion in the ecosystem mailing list for more details.

(Reposted from https://www.evolveum.com/open-source-identity-ecosystem-idea/)

Posted by rsemancik at 4:15 PM in Identity
Monday, 25 May 2015

My recent posts about ForgeRock attracted a lot of attention. The reactions filled the spectrum almost completely. I've seen agreement, disagreement, peaceful and heated reactions. Some people expressed thanks, others were obviously quite upset. Some people seem to have taken it as an attack on ForgeRock. This was not my goal. I didn't want to harm ForgeRock or anyone else personally. All I wanted was to express my opinion about software that I'm using and to write down the story of our beginnings. But looking back I can understand that this kind of expression might be too radical. I hadn't thought about that. I'm an engineer, not a politician. Therefore I would like to apologize to all the people that I might have hurt. It was not intentional. I didn't want to declare a war or anything like that. If you have understood it like that, please take this note as an offer of peace.

A friend of mine gave me some very wise advice recently. What has happened is history. What was done cannot be undone. So, let it be. And let's look into the future. After all, if it hadn't been for all that history with Sun, Oracle and ForgeRock, we probably would not have had the courage to start midPoint as an independent project. Therefore I think I should be thankful for it. Do not look back, look ahead. And it looks like there are great things silently brewing under the lid ...

(Reposted from https://www.evolveum.com/pax/)

Posted by rsemancik at 4:50 PM in Identity
Monday, 27 April 2015

MidPoint 3.1.1 was released a few days ago. It is formally an update to "Sinan" (midPoint 3.1). But it is actually quite a substantial release, as the original goal of a "small and quick" update took on a life of its own. This is a lesson for us in what can happen when development is driven by customer requirements. Nevertheless, the midPoint 3.1.1 release is here. And it is a good release.

MidPoint 3.1.1 builds on the previous release. The resource wizard, and actually the entire user interface, has received usability improvements. The most significant improvement is the addition of the "lookup" object. This object can be used to define a set of legal values for a property that the user can choose from. It can be used to provide a list of employee types, role types, timezones, languages, etc. In accordance with the midPoint philosophy, this only needs to be specified once and all the midPoint components automatically adapt to it. This feature makes midPoint deployments even more efficient than before.
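The idea behind the lookup object can be pictured as a single definition of legal values that both validation and the UI consult. The structure and names below are hypothetical illustrations, not midPoint's actual schema:

```python
# Hypothetical sketch of the "lookup" idea: legal values for a property
# are defined once, and validation and UI choices both derive from it.

lookup_employee_type = {
    "FTE": "Full-time employee",
    "CTR": "Contractor",
    "TMP": "Temporary worker",
}

def validate(value, lookup):
    # Any component storing the property checks against the same table.
    if value not in lookup:
        raise ValueError(f"illegal value: {value!r}")
    return value

def choices(lookup):
    # What a UI drop-down would render, derived from the same definition.
    return sorted(lookup.values())

assert validate("CTR", lookup_employee_type) == "CTR"
assert choices(lookup_employee_type)[0] == "Contractor"
```

Defining the value set once and deriving everything else from it is what keeps the components consistent without repeated configuration.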

There is also new support for Python scripting (in addition to Groovy, JavaScript/ECMAScript and XPath2). MidPoint reporting is significantly improved by much better integration with Jasper. There is also a bunch of smaller additions: workflow handlers are improved, there are slight policy improvements, there is a new validation API for complex GUI validations, etc. See the release notes for the details.

MidPoint 3.1.1 is a significant achievement and I want to thank all the Evolveum team members who made it possible. However, I would like to express special thanks to our contributors. It was during the development of midPoint 3.1.1 that we noticed increased contributor activity. We appreciate every single contribution to the midPoint project, whether it is a simple bugfix, a translation or a major feature. Therefore I would like to thank all the midPoint contributors regardless of what they have contributed. But there are two companies that deserve special thanks: Biznet Bilişim and AMI Praha. They have been part of the midPoint community for a couple of years and they provide the energy for continued midPoint development.

It looks like the midPoint community is growing. MidPoint is no longer a technology created by Evolveum alone. MidPoint is a true open source project that is a product of several cooperating companies. We also see increased customer interest in the technology that we have created together with our partners. I take this as a sign that word about midPoint has already spread far and wide enough for our project to make a mark on the IAM market. That was our initial goal: to make a difference. To improve the terrible state of established IDM technology. We are getting very close to achieving that goal. We have had the technology to do that for some time already. But now we are also gaining the audience.

(Reposted from https://www.evolveum.com/midpoint-3-1-1/)


Posted by rsemancik at 5:03 PM in Identity
Tuesday, 24 March 2015

A month ago I described my disappointment with OpenAM. My rant obviously attracted some attention in one way or another. But perhaps the best reaction came from Bill Nelson. Bill does not agree with me. Quite the contrary. And he has some good points that I can partly agree with. But I cannot agree with everything that Bill points out and I still think that OpenAM is a bad product. I'm not going to discuss each and every point of Bill's blog. I would summarize it like this: if you build on a shabby foundation, your house will inevitably turn to rubble sooner or later. If a software system cannot be efficiently refactored, it is as good as dead.

However, this is not what I wanted to write about. There is something much more important than arguing about the age of the OpenAM code. I believe that OpenAM is a disaster. But it is an open source disaster. Even though it is bad, I was able to fix it and make it work. It was not easy and it consumed some time and money. But it is still better than my usual experience with the support of closed-source software vendors. Therefore I believe that any closed-source AM system is inherently worse than OpenAM. Why is that, you ask?

Firstly, I was able to fix OpenAM just by looking at the source code, without any help from ForgeRock. Nobody can do this for a closed-source system, except the vendor. A running system is extremely difficult to replace. Vendors know that. The vendor can ask for an unreasonable sum of money even for a trivial fix. Once the system is up and running, the customer is trapped. Locked in. No easy way out. Maybe some vendors will be really nice and won't abuse this situation. But I would not bet a penny on that.

Secondly, what are the chances of choosing a good product in the first place? Anybody can have a look at the source code and see what OpenAM really is before committing any money to deploy it. But if you are considering a closed-source product you won't be able to do that. The chances are that the product you choose is even worse. You simply do not know. And what is worse, you do not have any realistic chance of finding out until it is too late and there is no way out. I would like to believe that all software vendors are honest and that all glossy brochures tell the truth. But I simply know that this is not the case...

Thirdly, you may be tempted to follow "independent" product reviews. But there is a danger in getting advice from someone who benefits from cooperation with the software vendors. I cannot speak about the whole industry, as I'm obviously not omniscient. But at least some major analysts seem to use evaluation methodologies that are not entirely transparent. And there might be a lot of motivations at play. Perhaps the only way to be sure that the results are sound is to review the methodology. But there is a problem: the analysts usually do not publish details about their methodologies. So what is the real value of the reports that the analysts distribute? How reliable are they?

This is not really about whether product X is better than product Y. I believe that this is an inherent limitation of the closed-source software industry. The risk of choosing an inadequate product is just too high, as the customers are not allowed to access the data that are essential to making a good decision. I believe in this: a vendor that has a good product does not need to hide anything from the customers. So there is no problem for such a vendor to go open source. If a vendor does not go open source, then it is possible (maybe even likely) that there is something it needs to hide from the customers. I recommend avoiding such vendors.

It will be the binaries built from the source code that will actually run in your environment. Not the analyst charts, not the pitch of the salesmen, not even the glossy brochures. The source code is the only thing that really matters. The only thing that is certain to tell the truth. If you cannot see the source code, then run away. You will probably save a huge amount of money.

(Reposted from https://www.evolveum.com/comparing-disasters/)


Posted by rsemancik at 7:09 PM in Identity
Tuesday, 17 March 2015

There was a nice little event in Bratislava called Open Source Weekend, organized by the Slovak Society for Open Information Technologies. It had been quite a long time since I gave a public talk, so I decided that this was a good opportunity to change that. And I had quite an unusual presentation for this kind of event. The title was: How to Get Rich by Working on Open Source Project?.

This was really an unusual talk for an audience that is used to talks about Linux hacking and Python scripting. It was also an unusual talk for me, as I still consider myself an engineer and not an entrepreneur. But it went very well. For all of you who could not attend, here are the slides.

OSS Weekend photo

The bottom line is that it is very unlikely that you will ever get really rich by working on open source software. I also believe that the usual "startup" method of funding based on venture capital is not very suitable for open source projects (I have written about this before). A self-funded approach looks much more appropriate.

(Reposted from https://www.evolveum.com/get-rich-working-open-source-project/)


Posted by rsemancik at 11:50 AM in Software
Thursday, 12 March 2015

Industry analysts have been producing their studies and fancy charts for decades. There is no doubt that some of them are quite influential. But have you ever wondered how the results of these studies are produced? Do the results actually reflect reality? How are the positions of individual products in the charts determined? Are the methodologies based on subjective assessments that are easy to influence? Or are there objective data behind them?

Answers to these questions are not easy. The methodologies of industry analysts seem to be something like trade secrets. They are not public. They are not open to broad review and scrutiny. Therefore there is no way to check a methodology by looking "inside" and analyzing the algorithm. So, let's have a look from the "outside". Let's compare the results of proprietary analyst studies with a similar study that is completely open.

But it is tricky to make a completely open study of commercial products. Some product licenses explicitly prohibit evaluation. Other products are almost incomprehensible. Therefore we have decided to analyze open source products instead. These are completely open and there are no obstacles to evaluating them in depth. Open source has been mainstream for many years and numerous open source products are market leaders. Therefore this can provide a reasonably good representative sample.

As our domain of expertise is Identity Management (IDM), we have conducted a study of IDM products. And here are the results of the IDM product feature comparison in a fancy chart:

We have taken great care to make a very detailed analysis of each product. We have very high confidence in these data. The study is completely open and therefore anyone can repeat it and check the results. But these are still data based on feature assessment done by several human beings. Even though we have tried hard to be as objective as possible, this can still be slightly biased and inaccurate ...

Let's take it one level higher. Let's base the second part of the study on automated analysis of the project source code. These are open source products. All the dirty secrets of the software vendors are there in the code for anyone to see. Therefore we have analyzed the structure of the source code and also the development history of each product. These data are not based on glossy marketing brochures. These are hard data taken from the actual code of the actual system that the customers are going to deploy. We have compiled the results into a familiar graphical form:

Now, please take the latest study of your favorite industry analyst and compare the results. What do you see? I leave the conclusion of this post to the reader. However I cannot resist the temptation to comment that the results are pretty obvious.
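Because the products are open source, anyone can reproduce the automated part of such a study. As a trivial, hypothetical illustration (not our actual methodology, which is more involved), here is a sketch of the simplest metric one can extract from a code base: counting lines of Java source under a directory tree. The class name `LocCounter` is made up for this example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LocCounter {

    public static void main(String[] args) throws IOException {
        // Root of the source tree to analyze; defaults to the current directory.
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        long totalLines;
        try (Stream<Path> files = Files.walk(root)) {
            totalLines = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .mapToLong(LocCounter::countLines)
                    .sum();
        }
        System.out.println("Java LOC: " + totalLines);
    }

    private static long countLines(Path p) {
        try (Stream<String> lines = Files.lines(p)) {
            return lines.count();
        } catch (IOException | java.io.UncheckedIOException e) {
            return 0; // skip unreadable or non-UTF-8 files
        }
    }
}
```

Raw line counts say little by themselves, of course; it is trends over the commit history and the structure of the code that carry the signal. But the point stands: with open source, the raw data for any such metric are available to everyone.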

But what to do about this? Is our study correct? We believe that it is. And you can check that yourself. Or have we made a mistake and the truth is closer to what the analysts say? We simply do not know, because the analysts keep their methodologies secret. Therefore I have a challenge for all the analysts: open up your methodologies. Publish your algorithms, your data and a detailed explanation of the assessment. Exactly as we did. Be transparent. Only then can we see who is right and who is wrong.

(Reposted from https://www.evolveum.com/analysts/)


Posted by rsemancik at 11:48 AM in Identity
Friday, 27 February 2015

I have been dealing with OpenAM and its predecessors for a very long time. I remember Sun Directory Server Access Management Edition (DSAME) in the early 2000s. After many years and (at least) three rebrandings the product was finally released as OpenSSO. That's when Oracle struck and killed the product. ForgeRock picked it up. And that's where the story starts to get interesting. But we will get to that later.

I was working with DSAME/SunAM/OpenSSO/OpenAM on and off during all the time that it existed. A year ago one of our best partners called and asked for help with OpenAM. They needed to do some customizations. OpenAM is no longer my focus, but you cannot refuse a good partner, can you? So I agreed. The start was easy. Just some custom authentication modules. But then it got a bit complicated. We figured out that the only way forward was to modify the OpenAM source code. So we did that. Several times.

That was perhaps the first time in all that long history that I needed to have a close look at the OpenAM source code. And I must honestly say that what I saw scared me:

  • OpenAM is formally Java 6. That is a problem in itself: Java 6 has not had any public updates for almost two years. But what is worse is that the bulk of the OpenAM code is effectively still Java 1.4 or even older. E.g. generics are almost never used! The vast majority of the OpenAM code looks like it was written before 2004.
  • OpenAM is huge. It consists of approx. 2 million lines of source code. It is also quite complicated. There is some component structure, but it does not make much sense at first sight. OpenAM also does not have any documents describing the system architecture from a developer's point of view. The only link that I was able to find still points to a Sun OpenSSO document. And it has been 5 years since ForgeRock took over the development!
  • OpenAM is in fact (at least) two somewhat separate products. There is the "AM" part and the "FM" part. And these two were not integrated in the cleanest way. The divide is still very obvious. And it gets in the way whenever you want to do something with "federation". E.g. the SAML assertion is available in the "FM" part, but not in the authentication modules. The session is a central concept in the "AM" part, but it is not available in the code that processes the assertion. So, if you need some custom code that works with the assertion and affects the session, you are out of luck (no, a custom attribute mapper will not help either). And the most bizarre thing is that OpenAM sometimes obviously creates two or even three sessions for the same user. Then it discards the extra sessions. But whatever you do in the authentication modules is discarded with them. This is a mess.
  • OpenAM debugging is a pain. It is almost uncontrollable, it floods log files with useless data and the little pieces of useful information are lost in them. And to understand most of the diagnostic output you just have to look into the source code. This is 20th-century technology. The Java logging API has been available since Java 1.4 (February 2002). But OpenAM is not using it. This suggests that the OpenAM core may be even older than I previously thought.
  • OpenAM is still using obsolete technologies such as JAX-RPC. JAX-RPC is a really bad API. It was a big mistake. Even the Sun engineers obviously knew that, and they deprecated the API in December 2006. That was more than 8 years ago. But OpenAM is still using it. Unbelievable. And worse than that: this effectively ruins any attempt to use modern web services. E.g. if you need your authentication handler to invoke a SOAP service that uses WS-Security with a SAML token retrieved from an STS, then you are in trouble. This is a pretty standard thing today. E.g. with recent versions of Apache CXF it takes only a handful of lines of code. But not in OpenAM.
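To make the generics and logging points above concrete, here is a minimal sketch (not OpenAM code, just an illustration with made-up names) contrasting the pre-2004 idioms that dominate the code base with what the same code looks like in post-Java-5 style using the java.util.logging API that has shipped with the JDK since 1.4:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ModernizationSketch {

    private static final Logger LOG = Logger.getLogger(ModernizationSketch.class.getName());

    public static void main(String[] args) {
        // Java 1.4 style: raw collection types require explicit casts,
        // and element-type mistakes surface only at runtime.
        List rawNames = new ArrayList();
        rawNames.add("alice");
        String first = (String) rawNames.get(0);

        // Java 5+ style: the compiler checks the element type, no casts needed.
        List<String> names = new ArrayList<>();
        names.add("alice");
        String checked = names.get(0);

        // java.util.logging instead of ad-hoc debug files: log levels can be
        // tuned per logger, so fine-grained output does not flood production logs.
        LOG.log(Level.FINE, "loaded {0} names", names.size());

        System.out.println(first.equals(checked));
    }
}
```

Neither idiom is hard to adopt; the point is that adopting them across 2 million lines of code is a refactoring effort that, as far as I can see, never happened.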

Using some software archeology techniques I estimate that the core of the current OpenAM originated between 1998 and 2002 (it has collections, but no logging and no generics). And the better part of the code is stuck in that time as well. So, now we have this huge pile of badly structured, poorly documented and obsolete code that was designed at the time when people believed in Y2K. Would you deploy that into your environment?

I guess that most of these problems were caused by the original Sun team. E.g. JAX-RPC was already deprecated when Sun released OpenSSO, but it was not replaced. The logging API had already been available for many years, but they never migrated to it. Anyway, that is what one would expect from a closed-source software company such as Sun. But when ForgeRock took over, I expected that they would do more than just take the product, re-brand it and keep it barely alive on life support. ForgeRock should have invested in a substantial refactoring of OpenAM. But it obviously hasn't. ForgeRock has been the maintainer of OpenAM for 5 years. That is a lot of time to do what had to be done. But the product is technologically still stuck in the early 2000s.

I also guess that the support fees for OpenAM are likely to be very high. Maintaining 2M lines of obsolete code is not an easy task. It looks like it takes approx. 40 engineers to do it (plus other support staff). ForgeRock also has a mandatory code review process for every code modification. I experienced that process first-hand when we were cooperating on OpenICF. This process heavily impacts efficiency, and that was one of the reasons why we separated from the OpenICF project. All of this is likely to be reflected in the support pricing. Another guess of mine is that the maintenance effort is very likely to increase. I think that all the chances to efficiently re-engineer the OpenAM core are gone now. Therefore I believe that OpenAM is a development dead end.

I quite liked OpenSSO and its predecessors in the early 2000s. At that time the product was slightly better than the competition. The problem is that OpenAM is mostly the same as it was ten years ago. But the world has moved on. And OpenAM hasn't. I have been recommending DSAME, Sun Identity Server, Sun Java System Access Manager, OpenSSO and also OpenAM to our customers. But I will not do it any more. And looking back, I have to publicly apologize to all the customers to whom I have ever recommended OpenAM.

Everything in this post is just my personal opinion, based on more than a decade of experience with DSAME/SunAM/OpenSSO/OpenAM. But these are still just opinions, not facts. Your mileage may vary. You do not need to believe me. OpenAM is open source. Go and check it out yourself.

UPDATE: There is a follow-up: Comparing Disasters

(Reposted from https://www.evolveum.com/hacking-openam-level-nightmare/)


Posted by rsemancik at 10:58 AM in Identity