Radovan Semančík's Weblog
Tuesday, 11 December 2012
MidPoint version 2.1 code-named Coeus was released yesterday. This sixth midPoint release focuses on practical features, code quality and robustness. The major changes include:
- Password policies used to both validate and generate passwords
- Provisioning consistency allows execution of provisioning operations even if the target resource is down. The operations will be replayed when it comes up again. This feature also handles other situations, such as an attempt to create an account that already exists. The handling of provisioning errors is integrated into the midPoint synchronization engine. This feature is quite unique to midPoint; it is not common in the identity management field.
- Numerous synchronization improvements and fixes. Synchronization situations are now recorded directly in the shadows, the synchronization engine was tested in numerous scenarios, it supports more configuration options (e.g. tolerant attributes) and the overall code quality was significantly improved.
- Introduction of the mapping mechanism provides a much more reliable and flexible way to pass values from users to accounts and vice versa. It takes advantage of the relative change model that is a basic principle of midPoint operation. The introduction of mappings allows better implementation of usual IDM requirements such as flexible RBAC hybrids and Rule-Based RBAC (RB-RBAC).
- Support for assignments was improved by introducing several modes of assignment enforcement. This means broader applicability of midPoint synchronization, assignment and RBAC mechanisms. MidPoint can now be deployed in almost any IDM scenario and this also allows more flexibility during initial IDM deployment and migration.
- Numerous GUI and usability improvements. There is a first version of the preview changes page that will be improved even more in later releases. It is easier to work with remote connectors now, error reporting and logging have been improved, basic resource-centric views were introduced, etc.
- MidPoint includes experimental reporting and workflow integrations based on JasperReports and Activiti respectively. This is meant as a preview of features that will come in later releases, but this functionality is already partially usable for some deployments.
The internal quality of the midPoint code was also significantly improved. The midPoint build contains a lot of automated integration tests that are executed all the time in a continuous integration fashion. There were some internal re-engineering efforts to get rid of historical code parts. The architecture of the product is well established and has proven to be flexible and efficient enough for almost any IDM challenge.
The Coeus release is a major step forward. MidPoint is considerably stable now. However, we want to make sure that the quality is more than just acceptable. The next midPoint release will be a maintenance release. We plan to work much more on testing, quality and usability improvements. The primary goal of midPoint is to be an IDM solution that can be efficiently deployed and maintained while keeping the total cost reasonable. MidPoint seems to be well positioned to be one of the very few IDM products to reach that goal.
Monday, 30 July 2012
When I was a young university student I learned TCP/IP by reading RFCs. It gave me an exact idea of how the network worked. It trained me to recognize a good specification. And it also somehow persuaded me to believe in standards. I have maintained that belief for most of my professional life. However, it started to vanish a few years ago. And recently I have lost that faith completely. There were two "last drops" that sent my naïveté down the drain.
The first drop was SCIM. I was interested in that protocol as I hoped that having a standard interface in midPoint would be a good thing. But as I went through the specification I recognized quite a lot of issues. This is a clear telltale of an interface which is still under development and not suitable for real-world use, let alone ready to become a standard. I concluded that SCIM is a premature standardization effort and was ready to forget about it. But there was a suggestion to post the comments on the SCIM mailing list, and in an attempt to be a good netizen I did just that. There was some discussion on the mailing list. But it ended in vain. What I figured out is that there is no will to improve the protocol, to make the specification more concrete and useful. SCIM is not a protocol, it is not an interface. It is a framework that can be changed almost beyond recognition and one can still call it SCIM. All hopes for practical interoperability are lost. Well, there is some public interoperability testing. But I have checked the scenarios that were actually tested. And these are the most basic, simplest cases. These are miles away from reality. The folks on the SCIM mailing list argue that most of the "advanced" features are to be done as protocol extensions, which most likely requires "profiling" the protocol for a specific use case. Which means practically no interoperability out of the box. Every real-world deployment will need some coding to make it work. I believe that SCIM is lost both as a protocol and as a standard.
The other drop was OAuth 2. I was not watching that one so closely, but recently a friend pointed me to Eran Hammer's blog entry. Eran describes a situation that is very similar to SCIM: a specification that does not really specify anything and a lack of will to fix it. That was the point when I realized that I have seen this scenario in various other cases during the last few years. It looks like premature standardization is the method and vague specifications are the tools of current standardization efforts. I no longer believe in standards. They just don't work.
But we need interoperability. We need protocols and interfaces. How can we do that without standards? I think that open specifications are the way to go. Specifications that are constructed outside of the standardization bodies. Specifications backed by (open source) software that really works in practical situations before they are fixed and "standardized". Specifications based on something that really works. That seems to be the only reasonable way.
But there is also a danger down this road. Great care should be taken to do the design responsibly, to specify it well, to reuse (if possible) instead of reinventing, and to learn from the experiences of others. To avoid creating abominations such as OpenID.
Monday, 16 July 2012
Clouds are everywhere. We got pretty much used to that buzzword. Open API Economy is quite new. But it is almost the same. What seems to be the mantra behind "Cloud" and "Open API Economy" is: Do not do it yourself. Scrap whatever solution you have now and replace it with the magic service from the cloud. It is a perfect, easy, cheap and simple solution. Or ... is it?
What most of the proponents of cloud APIs have in mind is this:
The cloud companies publish an API that makes their services available to the consumers. Consumers do not need to understand the intricacies of how the service is implemented. They just consume the API which is far simpler. So far so good. It is quite easy to do for one service. Or two. But how about eight?
Poor little Alice will need to create (and maintain) a lot of client software. Oh yes, it is still easier than hosting all these services internally. Unless, of course, the internal implementation can be customized to the specific needs that Alice has. And unless the internal implementation can expose a more suitable interface. Anyway, the complexity will not magically go away with migration to the cloud. It can even complicate things much more, especially if each of the cloud services has a different mechanism for security, consistency, redundancy, ...
There is also one deadly trap in the clouds: vendor lock-in. It is not the regular vendor lock-in as we know it today. This is much worse. If you have software and you stop paying your annual support fee, nothing really happens. You still have the right to use the software. The software may break or you may need to change it. But there may be several companies that can do this for you. The situation is quite different in the cloud. If you stop paying for the cloud, the service stops. Immediately. You have no right to use the service any more. You may own the data, but how do you migrate it to a different service? The APIs are not compatible, data formats are not compatible and processes are not compatible. Actually, a cloud service is inherently difficult to customize, therefore the usual software replacement strategy is almost useless. Once the idea of a cloud sinks in, the service fee may quite easily become a ransom.
To make things even worse current cloud services are not really cloud at all. They are not lightweight, not omnipresent and they cannot really move that well either. They are more like petrocumuli. They are dangerous.
The problem behind all of this is a basic misunderstanding of the purpose of the interface in software systems. Interface, that's the "I" in API, yet too many people designing APIs do not understand the principles well enough. One of the purposes of the interface is to hide the implementation from the clients. That's what the API folks get right. But there is more. Much more. The reason why we want to hide the implementation is that we want freedom in changing that implementation. The changes may happen in time, e.g. a new version of the service implementation. But they may also happen in space, e.g. a switch to an alternative service implementation. And it is the latter case that cloud providers seem to ignore. Accidentally? Or is there a purpose?
For an example see how Microsoft reinvents the semantic web using its own Graph API. Experience has taught me that whatever Microsoft does, it does with a purpose.
So, what is the solution? It is too early to tell. We do not know enough about distributed systems yet. But one thing is almost certain. The use of cloud APIs should be similar to the use of interfaces in any well-designed software system. When applied to the cloud it might look like this:
We know this concept well. It is the concept of a protocol: an agreement between communicating parties that abstracts the actual implementation. And that's it. The cloud APIs should not really look like APIs. They should be protocols.
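To make the "change in space" argument a bit more concrete, here is a minimal sketch of a consumer coding against a neutral contract rather than against one provider's API. All names are made up for illustration; this is not any particular cloud SDK:

```java
// Hypothetical sketch: the consumer depends only on a neutral contract.
public interface MailService {
    void send(String to, String subject, String body);
}

// One possible implementation, backed by some cloud provider's proprietary API.
class ProviderAMailService implements MailService {
    public void send(String to, String subject, String body) {
        // translate the neutral call into provider A's API here (assumed, illustrative)
    }
}

// Another implementation, backed by a different provider or an in-house server.
class InHouseMailService implements MailService {
    public void send(String to, String subject, String body) {
        // talk to the local SMTP relay here
    }
}

class Application {
    private final MailService mail;

    Application(MailService mail) {
        this.mail = mail; // the implementation can be swapped without touching the consumer
    }

    void notifyUser(String address) {
        mail.send(address, "Welcome", "Hello!");
    }
}
```

The point is that the "protocol" (the MailService contract) stays stable while the implementation can change both in time and in space; a cloud API designed only around one provider's implementation gives up exactly that freedom.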
Tuesday, 3 July 2012
Identity management is mainstream now. Sometimes I have a feeling that everybody in "the industry" understands identity management quite well. But that's obviously not entirely the case. When I discuss identity management with other people they somehow do not realize that it is not a single technology. They do not realize the complex labyrinth of technologies that forms the "identity management" buzzword. E.g. people frequently try to apply access management to a task that it just cannot solve. People think that provisioning will give them SSO. And so on. There are obviously some misunderstandings ...
Instead of answering the same questions and explaining the same concepts again and again I have compiled a text that explains it all. All the basic concepts of enterprise identity management. The text is in midPoint wiki: Enterprise Identity Management
Monday, 25 June 2012
MidPoint version 2.0 code-named Rhea was released last week. This is the first midPoint release that has features and quality appropriate for production use. It is a result of more than two years of development, therefore the code is mature enough to enter that stage. The major changes include:
- Repository based on a relational database. Hibernate is used as a mapping layer and also for SQL dialect abstraction. The mapping is implemented efficiently, therefore even relatively large datasets can be stored (we have tested 500k users with good results). The efficient data storage applies also to run-time schema extensions, which was quite a difficult thing to do.
- Brand new GUI. It has a new look&feel and ease of use is also improved. But the most significant change is the change in technology. We have moved from JSF to Apache Wicket, which was an excellent choice.
- All midPoint code is now based on Prism Objects. We had enough of all the problems of XML and JSON which make them almost unusable for any serious software development. Therefore we have created a data abstraction that takes the good features of these data formats but otherwise is independent of them and avoids the critical pitfalls. The work started in the previous two versions but it was fully integrated in this version. Although all the import/export formats of midPoint remain XML, we can easily switch to JSON or another format if needed.
- Most schemas are stabilized and moved to version number 2. We expect only compatible changes to the schema in the next few releases, maintaining backwards compatibility and making upgrades easier.
- Extensible schema. The midPoint schema can now be extended at startup time by placing an XSD file in midPoint's home directory. GUI forms take this schema into consideration, therefore the forms will show new fields from the schema. There is also a couple of new annotations to control how the form fields are displayed.
- PolyString is an unusual kind of animal. It is a built-in data type for a polymorphic string. This string maintains extra values in addition to its original value. The extra values are derived from the original value automatically using normalization code. It is currently used to support national characters in strings. The PolyString contains both the original value (with national characters) and the normalized value (without national characters). This can be used in expressions, e.g. to generate a username that does not contain national characters or is a transliteration of the national characters. It removes the need for custom conversion routines in each expression and therefore it brings some consistency into the integration code. But the most important reason is data storage. All the values are stored in the repository, therefore they can be used to look for the object. Search that ignores differences in diacritics or search by transliterated value can be used even if the repository itself does not support that feature explicitly. PolyString has not reached its full potential yet, but it makes the system more useful now and it will be improved in later versions. (A small sketch of the idea follows after this list.)
- Provisioning robustness. MidPoint can now handle provisioning to resources that are not available during the provisioning operation. MidPoint will queue the operation in the repository and it will replay it later when the resource becomes available. Also some other resource failures are handled in quite an intelligent way. This mechanism is a part of a greater and much more powerful consistency effort. It is a result of Katka Valalikova's diploma thesis work which will be published shortly. Although the whole mechanism is essentially complete even now, it still needs a bit of polishing to become a part of the product and will become fully available in the next midPoint version.
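Returning to the PolyString item above, here is a minimal, purely illustrative sketch of the idea in Java. It is not midPoint's actual class, just the concept of keeping the original and a normalized value side by side:

```java
// Illustrative sketch of a polymorphic string: original value plus a derived, normalized form.
import java.text.Normalizer;

public final class PolyString {
    private final String orig;  // original value, may contain national characters
    private final String norm;  // normalized value, diacritics stripped and lowercased

    public PolyString(String orig) {
        this.orig = orig;
        this.norm = normalize(orig);
    }

    private static String normalize(String s) {
        // decompose accented characters and drop the combining marks
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", "").toLowerCase();
    }

    public String getOrig() { return orig; }
    public String getNorm() { return norm; }
}
```

With such a type, new PolyString("Semančík").getNorm() yields "semancik", so the repository can match the object by the normalized value even when it cannot do diacritics-insensitive search itself.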
MidPoint is shaping up pretty well. We are successful in introducing a lot of unique and useful features. And we still maintain a clean and simple architecture. MidPoint is maturing. And I think it is now pretty clear that this development path was the right one.
Wednesday, 16 May 2012
All software is bad and it is not likely to change anytime soon. There is not a substantial difference between open source and commercial software when it comes to product quality. Both are difficult to use, very hard to diagnose and unsuitable for any practical purpose without a good deal of ugly hacking. But there is one little detail that actually makes a huge difference: source code.
I have spent most of today fighting with a code generation plugin that is part of our build. The code gave all kinds of helpful error messages such as "Index out of bounds: -1" and "null". There were no logs and no diagnostics output. The -verbose option was most likely provided just for the sake of completeness and had no practical effect. It was simply a dream of every engineer. A very bad dream.
I have been in such a situation numerous times, mostly with commercial software. That was a nasty experience in the vast majority of cases. Usually I had to spend many hours reading the useless documentation provided with the product and trying to diagnose the problem using any available tool ... just to fail miserably. Then I would file a trouble ticket and play a long ping-pong match with the support team. If I was really lucky, a few weeks later, after many exchanges (and my nerves almost lost in the process), I might have received a hint what the solution might look like. But the most likely outcome is that the support team provides no useful information and I would need to create an ugly workaround all by myself. This has happened too many times already.
But today the situation was different. The package that I was using was not commercial software. It was open source. So I downloaded the source code, fought with it for a few minutes and finally I had a fresh build of my own. I navigated the labyrinth of ugly uncommented code and dropped a few debug messages here and there. After many attempts and failures I figured out what was wrong. And solved the problem with only a minimal amount of ugly hacking. In just one day.
A few weeks compared to one day. That looks like a huge difference to me. That's one of many reasons why I have stopped using almost all commercial software. It is just not worth the time. If you don't have buildable and modifiable source code you have nothing. Nothing at all.
May the Source be with you.
Friday, 11 May 2012
I see evidence in favor of this all the time. My colleagues, who work on a variety of projects and with quite a wild assortment of products, also agree that it holds. It looks like this might be a law:
No matter what it is, no matter how big it is, no matter how many people work on it, it always takes at least two years to create a working software product.
Friday, 13 April 2012
SCIM seems to be a new specification with the ambition to succeed where SPML has failed. The SCIM effort seems to be more realistic and practical, yet it is still struggling with issues similar to those of SPML. As an architect of midPoint I'm looking at SCIM from the point of view of a potential implementer and also partially as a researcher. Here is a list of issues that immediately struck me when I was reading the core schema specification of SCIM:
There is an externalId attribute for user. It may seem like a single attribute but it is not. In fact "The Service Provider MUST always interpret the externalId as scoped to the Service Consumer's tenant". Which means that the provider needs to store one value for every client. This is extra state that has nothing to do with the provider itself. It is a transfer of the client's responsibility to the server. A wrong application of the separation of concerns principle. This is not even made optional. Therefore it will complicate all server deployments, regardless of whether it is necessary or not.
It looks like a change of the user's userName is not supported. It seems quite limiting to have two persistent identifiers for users (id and userName). Also, username changes are very common. If the username is based on familyName it changes after most weddings for approximately half of the population.
The familyName and givenName attributes have culture-neutral names. This is a nice take from FOAF. But the middleName is not that good. It forces an "American" point of view onto the schema. Maybe "additionalName" would be more appropriate.
User has displayName and also name/formatted attributes. It seems like these two are used for the same purpose. Or maybe it is displayName and userName? It looks like SCIM is following the LDAP and SAP anti-patterns where users have just too many names to choose from. It is perhaps good for the entity that displays them, but terrible for the one that needs to manage them. The protocol should be more balanced in this aspect.
User has nickName as a top-level attribute. But isn't the "name" complex attribute a better place for nickName? Especially considering the fact that a nickname is frequently formatted as a part of the full name.
Does profileUrl represent an application profile maintained by the application that is being provisioned? Or some other external profile? Should "profileUrl" be multivalued? The specification is not clear about that.
User has title and userType in the core schema. But these seem to better fit into the "enterprise" extension.
User has phoneNumbers, but no canonical phone number format is specified. This limits the usability of the specification, especially in telco environments.
The type meta-attribute in the multi-valued attributes is a plain string. This is prone to conflicts, especially in ims and similar "open" attributes. A URL instead of a plain string may be a better choice.
And probably the most important one: Both groups and roles can be considered entitlements. The groups attribute is read-only, but it can be manipulated through the Group Resource. Should such a group also appear in the entitlements user attribute? If it cannot, then the correct name of that attribute should rather be "otherEntitlements" and the specification should make that clear. If it can, then we have a redundancy: a group can be manipulated both through the Group Resource and the "entitlements" attribute. Similarly for roles. The specification does not say whether a role should only appear in "roles" and not in "entitlements", or whether it can appear in both. The "SCIM Group Schema" also defines that roles may be represented as groups, which adds to the confusion. The ignorance of the complexity of entitlement management was one of the time bombs in SPML. Now it is a time bomb silently ticking in SCIM.
The members attribute in group does not scale. Groups with thousands of members are very difficult to manage this way. And it is typical that a group such as "Generic Employee" has more members than that, not even speaking about telcos. This is one of the common problems in LDAP and now also in SCIM. There is also a corollary: creating a user as a member of a group requires two operations: add user, modify group. This complicates the implementation in case the second operation fails. Should a provisioning system report that as a failure or a success? The user is created but not assigned to a group. A good provisioning system should handle that, but how many good provisioning systems are out there?
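To make the corollary concrete, here is a small sketch of the two-step operation from a client's perspective. ScimClient and its methods are made-up names for illustration, not part of any real SDK:

```java
// Hypothetical sketch of the "create user as a group member" two-step problem.
public class ProvisioningExample {

    interface ScimClient {
        String createUser(String userName) throws ScimException;          // returns new user id
        void addGroupMember(String groupId, String userId) throws ScimException;
    }

    static class ScimException extends Exception {
        ScimException(String message, Throwable cause) { super(message, cause); }
    }

    // The caller has to decide what a half-finished operation means.
    static String createUserInGroup(ScimClient client, String userName, String groupId)
            throws ScimException {
        String userId = client.createUser(userName);        // operation 1 succeeded
        try {
            client.addGroupMember(groupId, userId);         // operation 2 may still fail
        } catch (ScimException e) {
            // The user now exists but has no group membership. Is this success or failure?
            // A robust provisioning system has to remember this state and retry or compensate.
            throw new ScimException("User " + userId + " created, but group assignment failed", e);
        }
        return userId;
    }
}
```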
Minor issue: Canonical types for members are capitalized while other types start with a lowercase letter.
Should authenticationSchema not be outside of the SCIM core schema? The schema defines no transport protocol and the authentication types clearly depend on transport protocols. Maybe a binding specification is a better place for the authenticationSchema definition?
Resource schema has a name attribute that obviously points to the object type. But it seems to be a plain string. Namespacing is not obvious here. If any SCIM extension adds new object types (which seems likely) this may be very confusing. A URL may be a better choice here.
User has userName, displayName and name/formatted. Group has displayName. Resource has name as a string. It is confusing. It is also quite inconsistent and makes it difficult to support a uniform representation of "objects" in the underlying SCIM implementation.
Resource Schema has description, but user and groups do not. A description may come handy in any object.
Can endpoint in the Resource Schema be only relative? Or may it be absolute? Base URL is not a good concept, especially when it comes to different representations of the schema (e.g. see here).
The attributes/type definition in the Resource Schema does not specify whether the value is a URL, a QName or a plain string. If it is a plain string (which seems to be the case judging by the examples), how does one map that string to XSD QNames? Are only XSD data types possible? The specification says "SHOULD not", not "MUST NOT", therefore an extension mechanism should be specified here.
The multiValuedAttributeChildName attribute and the associated way of representing data in XML seem to add to the redundancy of the data format. Strictly speaking, this one is also quite specific to XML and should not be in the generic core schema.
When defining an attribute in a resource schema, how is the attribute schema used? Is it a namespace that applies to the attribute type? Or to the attribute name? An attribute has schema but a sub-attribute does not. Why? Does it inherit it from the parent? The specification should make that clear.
Sub-attributes cannot have sub-attributes?
If the type is mandatory for every multi-valued attribute (it is "hardcoded" in the core schema specification), is there any point in defining it explicitly in all the resource schemas?
I have noticed the meta attribute in the examples, but it looks like it is not defined in the specification.
Is ordering of multi-value attribute values significant?
Critical problems in JSON-like representations: as there are no namespaces in JSON, naming conflicts can happen. If two schemas define an "employeeNumber" attribute, one of them as a string and the other as a number, such schemas cannot be used together. Is this a known limitation of SCIM?
Overall I perceive SCIM as an effort in a very early stage of development. It is also a typical example of the premature standardization anti-pattern. That anti-pattern is seen way too often and gives us marvels of software engineering such as CORBA and the WS-* stack. I hope that the authors of SCIM will try to correct the obvious problems of the specification and focus on proving that it works before going any further. The only reasonable way to go is: working software first, standards second. If tried the other way around, the result will be yet another incarnation of SPML. You know what the most delicious piece of the SPML specification is? The SPMLv2 schemas do not pass even a simple XSD validation. I hope SCIM will not repeat such mistakes.
Tuesday, 7 February 2012
MidPoint version 1.10 code-named Phoebe was released today. Although the changes with respect to the previous version may not be that obvious, there is much that has happened under the hood. There are especially two things that I'm quite proud of: relative change model and RBAC.
MidPoint is now built entirely on the concept of relative changes. All the legacy OpenIDM code was removed and there is a brand new implementation of the IDM logic. Using relative changes seems to be the best approach in identity management, where there is no locking and no other support for consistency. MidPoint now treats all changes as deltas that describe a relative change to the user or account. This allows doing quite a broad variety of things without being too limited by consistency. Deltas work without locking, therefore there is not much harm if they are stuck for days waiting for a boss to return from vacation and approve the request. Deltas are easy to merge, therefore the order in which they are applied usually does not matter (we have been careful to have only unordered data in midPoint to allow this). The switch to completely relative changes is a critical move in midPoint evolution. It was planned almost from the very beginning of OpenIDM development and it was finally implemented in midPoint (it seems that this model was dropped in OpenIDMv2, which I consider to be a major mistake). The results that we see in midPoint confirm that we are following the right path.
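To illustrate what a relative change looks like, here is a tiny, purely illustrative sketch in Java. It is not midPoint's actual delta implementation, just the principle of describing additions and deletions instead of absolute state:

```java
// Illustrative "delta" over a multi-valued attribute model.
import java.util.*;

public final class Delta {
    // attribute name -> values to add / values to remove
    private final Map<String, Set<String>> valuesToAdd = new HashMap<>();
    private final Map<String, Set<String>> valuesToDelete = new HashMap<>();

    public void add(String attribute, String value) {
        valuesToAdd.computeIfAbsent(attribute, k -> new HashSet<>()).add(value);
    }

    public void delete(String attribute, String value) {
        valuesToDelete.computeIfAbsent(attribute, k -> new HashSet<>()).add(value);
    }

    // Applying a delta needs no lock: it only touches the values it names,
    // so a concurrent delta to a different attribute (or value) is unaffected.
    public void applyTo(Map<String, Set<String>> object) {
        valuesToAdd.forEach((attr, vals) ->
                object.computeIfAbsent(attr, k -> new HashSet<>()).addAll(vals));
        valuesToDelete.forEach((attr, vals) -> {
            Set<String> existing = object.get(attr);
            if (existing != null) {
                existing.removeAll(vals);
            }
        });
    }
}
```

Two such deltas that touch unordered multi-valued attributes can be applied in either order with the same result, which is why a delta can wait for days for an approval without blocking anything else.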
The second major improvement of midPoint is its RBAC implementation. Traditional RBAC models suffer from a critical problem known as role explosion. The number of roles keeps increasing and multiplying until it exceeds the number of managed identities. Then the hard problem of managing identities transforms into an even harder problem of role management. MidPoint solves this problem in two ways that work together very well: flexible role definition and support for RBAC exceptions.
The roles in midPoint are quite smart. Similarly to roles in Waveset they may contain expressions that set account attributes, assign accounts to groups, etc. This little feature is quite effective in reducing the number of roles. However we are going one step further. MidPoint roles can be parametric. A role can be customized not only at the time the role definition is created, it can also be adjusted at the time the role is assigned to a user. E.g. we can have a single Assistant role that is parametrized by the department name. The expressions in the role then algorithmically derive the names of the appropriate groups from the name of the department. One role is enough to support assistants of all departments. When the role is assigned to a user, an additional department parameter is specified. Therefore the role can derive appropriate group names for each case. This role can even be assigned several times to the same user, usually with different parameters. This can model a situation when an employee acts as an assistant for several departments.
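A rough sketch of how a parametric role can be thought of, with made-up names and no relation to midPoint's actual role model:

```java
// Illustrative sketch of a role parametrized at assignment time.
import java.util.List;
import java.util.function.Function;

public class ParametricRoleExample {

    // A role parametrized by a single string; the "expression" derives group names.
    record ParametricRole(String name, Function<String, List<String>> groupExpression) {}

    // One assignment of a role to a user, carrying the parameter value.
    record Assignment(ParametricRole role, String parameter) {
        List<String> groups() {
            return role.groupExpression().apply(parameter);
        }
    }

    public static void main(String[] args) {
        // Single "Assistant" role definition for all departments.
        ParametricRole assistant = new ParametricRole("Assistant",
                department -> List.of(
                        department + "-calendar",
                        department + "-documents"));

        // The same role assigned twice to the same user, with different parameters.
        Assignment sales = new Assignment(assistant, "sales");
        Assignment hr = new Assignment(assistant, "hr");

        System.out.println(sales.groups()); // [sales-calendar, sales-documents]
        System.out.println(hr.groups());    // [hr-calendar, hr-documents]
    }
}
```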
What we have learned during all these long years of IDM deployments is that there is no rule without an exception. Completely formalizing access control policies into an RBAC structure is a work similar to that of Sisyphus. As Pareto told us, it may be efficient to cover 80% of the cases with 20% of the roles. But what to do with the rest? MidPoint allows specifying quite fine-grained exceptions to the RBAC rules. This allows specifying a "legal exception" from the RBAC rules for each user. In fact, the mechanism for specifying an exception is almost the same as the mechanism used to define a role. It is in fact a kind of "private role" for each user. Therefore the same role analysis and mining principles that work with roles may also work on such exceptions, so the exceptions are not a way towards a dead end as it might appear.
Although both relative changes and RBAC are somehow hidden under the hood now, they work surprisingly well. The next midPoint version Rhea will bring an improved GUI to better present them to the public. Stay tuned. There is more to come ...
Tuesday, 10 January 2012
I have compiled a list and an evaluation of open source identity management systems. This document will be continually updated as the systems evolve.
Friday, 9 December 2011
Thursday, 1 December 2011
Monday, 24 October 2011
It is time to publicly announce a project that I was working on recently. It is an open-source identity management system named midPoint. It is based on the OpenIDM version 1 code developed by our part of the team. MidPoint aims to be a usable, pragmatic IDM product. We have based it on many years of experience deploying other IDM products. We have learned from what worked and what has been failing during real deployments. The bad thing is that too many things failed - and that's something we want to improve with midPoint.
MidPoint is a user provisioning tool. It can do basic provisioning as well as provisioning driven by expressions. It uses the Identity Connector Framework (ICF) to connect to other systems. It has a live synchronization capability (similar to Sun ActiveSync) and other synchronization methods are under development. There is basic RBAC support that is continually improving. A lot of time and effort was invested into diagnostics and support for deployment, such as good error reporting and logging. We know where the pain points of IDM deployments are and we are working hard to improve what we can.
MidPoint version 1.9 was released a few days ago. It is the third version developed under the Evolveum brand name. This version is worth checking out as a preview of the final product that is planned as version 2.0 for early next year.
Friday, 8 July 2011
I was a very young student when I came across a book named Programátorské poklesky (Programmer's Misdemeanours) by Ivan Kopeček and Jan Kučera. The authors describe in a humorous way the results of programming errors. It was probably my very first book about programming that was not a programming language manual. It was a year after our country woke up from the communist era and programming books were difficult to come by. I think the book influenced me more than I anticipated or was willing to admit at the time.
One of the parts that I particularly remember was the software "psychology". The authors observed four temperaments of programs:
- Sanguine programs provide readable and helpful error messages, have useful help texts, try to recover from errors and try to communicate reasonably in general. Yet, user interaction is maybe the only useful part of such programs.
- Choleric programs do their job well. Such programs do not crash, but the error messages are very dense and cryptic. They do not provide any additional information and there is no help text. They do not try to recover from errors - they expect that the user will know what to do. Experts find these programs easy to use, but everybody else hates them.
- Melancholic programs get very sad when they encounter the smallest of problems. The program just crashes and does not provide any message or description. They refuse to communicate about the problem any further and usually do not even provide a way to resolve it.
- Phlegmatic programs ignore any errors. They just carry on no matter the cost. No error message, no indication, they just keep working. Of course they may provide wrong results from time to time, but they run. That's the most important thing.
All of that came to my mind as I was discussing the error handling approach in mainstream programming languages (mostly Java). It usually boils down to handling exceptions.
The original approach in Java was to use checked exceptions. The programmer has to either catch them or declare them to be thrown. The authors of Java hoped that this would lead to better error handling. But it looks like there is a glitch: error handling is very difficult to do right. It takes a lot of time and the error handling code may well be a significant part of the system. This leads to sanguine programs: they provide good information about errors, but they do little else. There is just not enough time and resources to do everything right.
Laziness is one of the three great virtues of the programmer. Therefore programmers soon started to focus on the "meat" and simplified the exception handling. The easiest way at hand was to ignore all the exceptions: catch all exceptions and handle them with an empty code block. This obviously leads to a phlegmatic program. It will run no matter what happens. But the results may not be the best.
The current trend is to switch all the exceptions to runtime exceptions. These do not enforce checking and handling. The usual outcome is that nobody checks or handles them. Any exception will bubble up through the call stack to the upper layers until it is caught by the framework. That may be an application server that displays a nicely formatted error message that essentially says "something somewhere went wrong" and terminates the request. The user has no idea what went wrong and where, or how to recover from the problem. This is a melancholic program.
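For the sake of illustration, the two approaches just described usually look something like this in Java (toy snippets, not from any particular codebase):

```java
// Two of the temperaments above, in their usual Java form.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TemperamentExamples {

    // Phlegmatic: swallow the checked exception and carry on, no matter the cost.
    static String readConfigPhlegmatic(Path file) {
        try {
            return Files.readString(file);
        } catch (IOException e) {
            // ignored - the program keeps running, possibly with wrong results
            return "";
        }
    }

    // Melancholic: wrap everything in an unchecked exception and let it bubble up
    // until some framework far above prints "something somewhere went wrong".
    static String readConfigMelancholic(Path file) {
        try {
            return Files.readString(file);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```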
Luckily, some programmers display at least the exception type and message to the user. But what will the user do when presented with the message "ConsistencyException: Consistency constraint violated"? It is not really helpful. Most programmers also display or log a complete stack trace. But that won't help the user a bit. Even members of a core programming team have problems understanding it; the user does not stand a chance. That gives us a choleric program.
Obviously, one size does not fit all. There is no single right way to do it. If good error handling is required then a sanguine approach is needed. But there is a cost to pay: either reduced functionality or much more effort to do the "same" thing. A robust system asks for a somewhat phlegmatic approach, while cheap code is best done melancholic. However, the usual approach is choleric code: errors are reported, but nobody really understands them. You just can't always win.
Friday, 3 June 2011
I'm still quite young and my "professional memory" does not even span two decades. But I just cannot help seeing some recurring patterns. Quite scary patterns.
I was a student when Sun RPC was the cool thing. It had all that a C programmer needed at that time to create a distributed system. But obviously it was too simple.
CORBA was taking the place of the "cool thing" as I was finishing university. It had all that a C++ programmer may wish for to create a distributed system. Interfaces, object-orientation, "interoperable" references, ... But it was obviously too complicated to use.
XML took over during the dot-com bubble. Or better to say, it was XML-RPC as a mechanism for Internet-scale distributed systems. It had all that a PHP programmer would want. It had the "feature" of seamlessly passing firewalls. It was the cool thing for the Internet. But obviously, it was too simple.
SOAP came shortly after that: the mechanism by which the Java and .NET architectures promised to bridge the enterprise and the Internet. Originally designed as a simple thing to do something with objects, it ended up as a maze of WS-WhatEver specifications that are far from simple and actually have nothing to do with objects. This is obviously too complex to use.
The RESTful religion is the current trend, with JSON as its holy prophet, worshiped by the scripting crowd. It is based on the idealistic and internally inconsistent principles of Web Architecture, with a loud promise of simplicity. But obviously, this is too simple to be practical.
Now we see JSON schema, namespaces, security and actually all the things that we have already seen in SOAP/WS-* and CORBA. I expect we will see formal RESTful interface definitions soon. Will this be too complex to use, again?
What we see are cycles. Each new generation of engineers is re-inventing what the previous generation invented, making all the mistakes all over again. Can this eventually converge? How long are the customers going to tolerate this? And what do we really know about distributed systems?
Sorry guys. I just refuse to participate in this insanity.