
Radovan Semančík's Weblog

Tuesday, 24 March 2015

A month ago I described my disappointment with OpenAM. My rant obviously attracted some attention, one way or another. But perhaps the best reaction came from Bill Nelson. Bill does not agree with me. Quite the contrary. And he has some good points that I can somewhat agree with. But I cannot agree with everything that Bill points out, and I still think that OpenAM is a bad product. I'm not going to discuss each and every point of Bill's blog. I would summarize it like this: if you build on a shabby foundation, your house will inevitably turn to rubble sooner or later. If a software system cannot be efficiently refactored, it is as good as dead.

However, this is not what I wanted to write about. There is something much more important than arguing about the age of the OpenAM code. I believe that OpenAM is a disaster. But it is an open source disaster. Even though it is bad, I was able to fix it and make it work. It was not easy, and it consumed some time and money. But it is still better than my usual experience with the support of closed-source software vendors. Therefore I believe that any closed-source AM system is inherently worse than OpenAM. Why is that, you ask?

Firstly, I was able to fix OpenAM just by looking at the source code, without any help from ForgeRock. Nobody can do this for a closed-source system, except the vendor. A running system is extremely difficult to replace, and vendors know that. The vendor can ask for an unreasonable sum of money even for a trivial fix. Once the system is up and running, the customer is trapped. Locked in. No easy way out. Maybe some vendors will be really nice and won't abuse this situation. But I would not bet a penny on that.

Secondly, what are the chances of choosing a good product in the first place? Anybody can have a look at the source code and see what OpenAM really is before committing any money to deploy it. But if you are considering a closed-source product, you won't be able to do that. The chances are that the product you choose is even worse. You simply do not know. And what is worse still, you have no realistic chance to find out until it is too late and there is no way out. I would like to believe that all software vendors are honest and that all glossy brochures tell the truth. But I simply know that this is not the case...

Thirdly, you may be tempted to follow "independent" product reviews. But there is a danger in getting advice from someone who benefits from cooperation with the software vendors. I cannot speak for the whole industry, as I'm obviously not omniscient. But at least some major analysts seem to use evaluation methodologies that are not entirely transparent. And there might be a lot of motivations at play. Perhaps the only way to be sure that the results are sound is to review the methodology. But there is a problem: the analysts usually do not publish details about their methodologies. So what is the real value of the reports that the analysts distribute? How reliable are they?

This is not really about whether product X is better than product Y. I believe this is an inherent limitation of the closed-source software industry. The risk of choosing an inadequate product is just too high, because customers are not allowed to access the data that are essential for making a good decision. I believe in this: a vendor that has a good product does not need to hide anything from the customers. So there is no problem for such a vendor to go open source. If a vendor does not go open source, then it is possible (maybe even likely) that there is something it needs to hide from the customers. I recommend avoiding such vendors.

It is the binaries built from the source code that will actually run in your environment. Not the analyst charts, not the pitch of the salesmen, not even the glossy brochures. The source code is the only thing that really matters. The only thing that is certain to tell the truth. If you cannot see the source code, then run away. You will probably save a huge amount of money.

(Reposted from https://www.evolveum.com/comparing-disasters/)

Posted by rsemancik at 7:09 PM in Identity
Tuesday, 17 March 2015

There was a nice little event in Bratislava called Open Source Weekend, organized by the Slovak Society for Open Information Technologies. It had been quite a long time since I last gave a public talk, so I decided this was a good opportunity to change that. I had quite an unusual presentation for this kind of event. The title was: How to Get Rich by Working on Open Source Project?

This was really an unusual talk for an audience that is used to talks about Linux hacking and Python scripting. It was also an unusual talk for me, as I still consider myself an engineer rather than an entrepreneur. But it went very well. For all of you who could not attend, here are the slides.

OSS Weekend photo

The bottom line is that you are very unlikely to ever get really rich by working on open source software. I also believe that the usual "startup" method of funding based on venture capital is not very suitable for open source projects (I have written about this before). A self-funded approach looks much more appropriate.

(Reposted from https://www.evolveum.com/get-rich-working-open-source-project/)

Posted by rsemancik at 11:50 AM in Software
Thursday, 12 March 2015

Industry analysts have been producing their studies and fancy charts for decades. There is no doubt that some of them are quite influential. But have you ever wondered how the results of these studies are produced? Do the results actually reflect reality? How are the positions of individual products in the charts determined? Are the methodologies based on subjective assessments that are easy to influence? Or are there objective data behind them?

Answers to these questions are not easy. The methodologies of industry analysts seem to be something like trade secrets. They are not public. They are not open to broad review and scrutiny. Therefore there is no way to check a methodology by looking "inside" and analyzing the algorithm. So, let's have a look from the "outside". Let's compare the results of proprietary analyst studies with a similar study that is completely open.

But it is tricky to make a completely open study of commercial products. Some product licenses explicitly prohibit evaluation. Other products are almost incomprehensible. Therefore we have decided to analyze open source products instead. These are completely open and there are no obstacles to evaluating them in depth. Open source has been mainstream for many years, and numerous open source products are market leaders. Therefore this can provide a reasonably representative sample.

As our domain of expertise is Identity Management (IDM), we have conducted a study of IDM products. And here are the results of the IDM product feature comparison in a fancy chart:

We have taken great care to make a very detailed analysis of each product, and we have very high confidence in these data. The study is completely open, so anyone can repeat it and check the results. But these are still data based on feature assessment done by several human beings. Even though we have tried hard to be as objective as possible, this can still be slightly biased and inaccurate...

Let's take it one level higher and base the second part of the study on automated analysis of the projects' source code. These are open source products. All the dirty secrets of software vendors are there in the code for anyone to see. Therefore we have analyzed the structure of the source code and also the development history of each product. These data are not based on glossy marketing brochures. These are hard data taken from the actual code of the actual system that the customers are going to deploy. We have compiled the results into a familiar graphical form:

Now, please take the latest study of your favorite industry analyst and compare the results. What do you see? I leave the conclusion of this post to the reader. However I cannot resist the temptation to comment that the results are pretty obvious.
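As an aside, the development-history part of such an analysis can be approximated by anyone from a project's version-control log. Here is a minimal sketch in Python; the log format and the particular metrics are my own illustration, not the actual methodology of our study:

```python
from collections import Counter

def commit_stats(log_lines):
    """Compute simple development-history metrics from git log output.

    Each line is expected in the format '<author-email>|<year>', as produced by:
        git log --pretty=format:'%ae|%ad' --date=format:'%Y'
    """
    authors = Counter()  # commits per author
    years = Counter()    # commits per calendar year
    for line in log_lines:
        author, _, year = line.partition("|")
        authors[author] += 1
        years[year] += 1
    return {
        "commits": sum(authors.values()),
        "contributors": len(authors),
        "active_years": len(years),
    }

# Tiny synthetic example; real input would come from running git log:
sample = [
    "alice@example.com|2014",
    "bob@example.com|2014",
    "alice@example.com|2015",
]
stats = commit_stats(sample)
# stats == {"commits": 3, "contributors": 2, "active_years": 2}
```

Metrics like these say nothing about code quality by themselves, but combined with an analysis of the code structure they give a picture of how actively and broadly a product is developed.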

But what to do about this? Is our study correct? We believe that it is. And you can check that yourself. Or have we made a mistake, and the truth is closer to what the analysts say? We simply do not know, because the analysts keep their methodologies secret. Therefore I have a challenge for all the analysts: open up your methodologies. Publish your algorithms, your data and a detailed explanation of the assessment. Exactly as we did. Be transparent. Only then can we see who is right and who is wrong.

(Reposted from https://www.evolveum.com/analysts/)

Posted by rsemancik at 11:48 AM in Identity