Radovan Semančík's Weblog

Thursday, 13 August 2015

There are not many occasions when a CxO of a big software company speaks openly about sensitive topics. A few days ago that happened at Oracle. Oracle's CSO Mary Ann Davidson posted a blog entry about reverse engineering of Oracle products. Although it was perhaps not the original intent of the author, the blog post quite openly described several serious problems of closed-source software. That might be the reason why the post was taken down very shortly after it was published. Here is a Google cached copy and a copy on seclist.org.

So, what are the problems of closed-source software? Let's look at Davidson's post:

"A customer can’t analyze the code ...". That's right. The customer cannot legally analyze the software that is processing his (sensitive) data. Customer cannot contract independent third party do to this analysis. Customer must rely on the work done by the organizations that the vendor choses. But how independent are these organization if the vendor is selecting them and very often the vendor pays them?

"A customer can’t produce a patch for the problem". Spot-on. The customer is not allowed to fix the software. Even if the customer has all the resources and all the skills he cannot do it. The license does not allow fixing a broken thing. Only vendor has the privilege to do that. And customer is not even allowed to fully check the quality of the fix.

"Oracle’s license agreement exists to protect our intellectual property." That's how it is. Closed-source license agreements are here to protect the vendors. They are not here to make the software better. They are not here to promote knowledge or cooperation. They are not here to prevent damage to the software itself or to the data processed by the software. They are not helping the customer in this way. Quite the contrary. They are here for the purpose of protecting vendor's business.

In the future, children will learn about the historical period of the early 21st century. The teacher might mention the prevailing business practices as a curiosity to attract the attention of the class. The kids won't believe that people in the past agreed to such draconian terms, known as a "license agreement".

(Reposted from Evolveum blog)

Posted by rsemancik at 12:48 PM in security
Tuesday, 4 November 2014

The "insider" has been indicated as a the most severe security threat for decades. Almost every security study states that the insiders are among the highest risk in almost any organization. Employees, contractors, support engineers - they have straightforward access to the assets, they know the environment and they are in the best position to work around any security controls that are in place. Therefore it is understandable that the insider threat is consistently placed among the highest risks.

But what has the security industry really done to mitigate this threat? Firewalls, VPNs, IDS and cryptography are of no help here. Two-factor authentication does not help either. The insiders already have the access they need, therefore securing that access is not going to help. There is not much that traditional information security can do about the insider threat. So we have a threat that is consistently rated among the top risks, and nothing to do about it?

The heart of the problem is in the assets that we are trying to protect. The data are stored inside applications, and typically data of all sensitivity levels are stored in the same application. Therefore network-based security techniques are almost powerless. Network security can usually control only whether a user has access to an application or not. It is almost impossible to discriminate between the individual parts of the application that the user is allowed to access - let alone individual assets. The network perimeter is long gone, so there is no longer even a place to put network security devices as the data move between cloud applications and mobile devices. This is further complicated by the defense-in-depth approach: a significant part of the internal network communication is encrypted, so there is very little an Intrusion Detection System (IDS) can do, because it simply does not see inside the encrypted stream. Network security is just not going to do it.

Can application security help? Enterprises usually have quite strict requirements for application security. Each application has to have proper authentication, authorization, policies, RBAC, ... you name it. If we secure the application then we also secure the assets, right? No. Not really. This approach might have worked in the early 1990s when applications were isolated. But now the applications are integrated. Approaches such as Service-Oriented Architecture (SOA) bring in industrial-scale integration. The assets travel almost freely from application to application. There are even composite applications that are just automated processes living somewhere "between applications" in the integration layer. Therefore it is no longer enough to secure a couple of sensitive applications. All the applications, the application infrastructure and the integration layers need to be secured as well.

As every security officer knows, there is an aspect which is much more important than high security: consistent security. It makes no sense to have high security in one application while another application that works with the same data is left unsecured. The security policies must be applied consistently across all applications. And this cannot be done in each application individually, as that would be a daunting and error-prone task. It has to be automated. As applications are integrated, the security needs to be integrated as well. If it is not, the security effectively disappears.

Identity Management (IDM) systems are designed to integrate security policies across applications and infrastructure. The IDM system is the only component that can see inside all the applications. It can make sure that the RBAC and SoD policies are applied consistently in all the applications. It can make sure that accounts are deleted or disabled in time. And as the IDM system can correlate data across many applications, it can check for illegal accounts (e.g. accounts without a legal owner or sponsor).
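
To make this concrete, here is a minimal sketch of the kind of correlation an IDM system runs continuously. The data shapes, application names and the find_orphan_accounts helper are hypothetical illustrations, not any specific product's API:

    # Minimal sketch of orphan-account detection, the kind of correlation
    # an IDM system automates. Data shapes are hypothetical.

    hr_records = {"alice", "bob"}  # people with a valid contract (legal owners)

    app_accounts = {
        "crm":     ["alice", "bob", "admin2"],
        "billing": ["alice", "temp-consultant"],
    }

    def find_orphan_accounts(hr, apps):
        """Return (application, account) pairs without a legal owner in HR."""
        return [(app, account)
                for app, accounts in apps.items()
                for account in accounts
                if account not in hr]

    for app, account in find_orphan_accounts(hr_records, app_accounts):
        print(f"illegal account '{account}' in application '{app}'")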

IDM systems are essential. It is perhaps not possible to implement a reasonable information security policy without one. However, the IDM technology has a very bad reputation. It is considered a very expensive and never-ending project. And rightfully so. The combination of inadequate products, vendor hype and naive deployment methods contributed to a huge number of IDM project failures in the 2000s. The Identity and Access Management (IAM) projects ruined many security budgets. Luckily this first-generation IDM craze is drawing to an end. The second-generation products of the 2010s are much more practical. They are lighter, open and much less expensive. Iterative and lean IDM deployments are finally possible.

Identity management must be an integral part of the security program. There is no question about that. Any security program is shamefully incomplete without the IDM part. The financial reasons to exclude IDM from the security program are gone now. The second generation of IDM systems finally delivers what the first generation promised.

(Reposted from https://www.evolveum.com/security-insider-threat/)

Posted by rsemancik at 11:46 AM in security
Friday, 29 January 2010

Quite an interesting scam appeared on Facebook. It was just a matter of time before something like that popped up, yet I was quite surprised when I actually saw it. The scam works like this: There is a simple HTML page that promises to provide nude photos in a zip file if you click on a button. However, if you click on the button you will see no butts and tits. A link to the tricky page will be posted to your Facebook profile instead. If you want to try it go to http://homeslices.org/f2.html (if the page is still around). But you have been warned.

The trick is simple. The page creates an iframe containing a pretty standard Facebook form to share a link. However, the frame is almost invisible, therefore you cannot see it. But the browser still thinks you can see it and processes it. The tricky page has a "View" button in the same location as the "Share" button on the invisible Facebook page. You think you are clicking on the "View" button, but you are actually clicking on the "Share" button on Facebook. The iframe is fetched by your browser, therefore it is your identity that is used on Facebook to post the link.

This page is pretty innocent. All it does is a bit of humiliation for the victims, amusement for experts and undoubtedly a lot of fun for the author. But imagine that this very same method is used to subvert your Internet banking. I guess that the method could be adapted to subvert many of current Internet banking applications. It won't be that funny any more.

This is the price we pay for flexible presentation formats. There are two basic principles of the trick:

  1. Mix the content from two sites in one window. Content from Facebook is displayed in a page where you do not expect it, in a wrong context, with a wrong URL in the URL bar.
  2. Create an ambiguous display of information. The browser thinks you can see the "Share" button: it has 1% opacity, therefore it is still somewhat opaque and ergo visible. So the browser assumes that when you click on the place where the "Share" button is, you want to submit information to Facebook. But in fact you do not see the "Share" button, because at 1% opacity it is practically invisible. You are clicking in that area because you see the "View" button that is behind it.
The first problem is a specific problem of HTML. It could be fixed quite easily if there were enough "political will" to do it. But the second problem is the real problem. How opaque must something be to be considered opaque enough? Should 1% grey text on a white background be considered visible? Can a 2pt font be considered readable?
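
For the first problem, some browsers have in fact started to honor the X-Frame-Options response header, which lets a server refuse to have its pages framed by a foreign site. A minimal sketch using Python's standard library (host and port are just examples):

    # Minimal sketch: a server that refuses to be framed by other sites.
    # X-Frame-Options: DENY tells supporting browsers to drop the page if
    # it is loaded inside a frame/iframe - the invisible-iframe trick fails.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class NoFramingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><form>Share...</form></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("X-Frame-Options", "DENY")  # never render framed
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), NoFramingHandler).serve_forever()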

Probably the most serious implication of this problem is somewhat independent of the Web. Presentation formats are very dangerous when used in a legally binding way. For example, when you sign a document with a digital signature. If you sign a contract and it contains a paragraph written in light grey text on a white background, should such a text be considered part of the contract or not? Some devices may display that text as perfectly readable, while on other devices it cannot be seen at all. This opens a huge door to scams of all sizes.

This problem applies universally to any data format that includes rich presentation features: HTML, Microsoft Word documents, RTF, OpenDocument and many more. But maybe the worst aspect of all of this is that our government, as well as many other governments in Europe, explicitly allows such data formats for legally binding documents signed by a "guaranteed digital signature". I'm really lucky that I have no qualified certificate to create such a signature.

Posted by rsemancik at 7:05 PM in security
Thursday, 4 December 2008

Ben Laurie is discussing the nature of passwords. He claims:

If your password is unphishable, then it is obviously the case that it can be the same everywhere. Or it wouldn’t be unphishable. The only reason you need a password for each site is because we’re too lame to fix the real problem. Passwords scale just fine. If it wasn’t for those pesky users (that we trained to do the wrong thing), that is.

I can see where Ben is leading us: using a device that can take a password and convert it to some form of more secure authenticator or protocol exchange. Well, that could work. But there's a catch, as always.

The password itself may be very difficult to phish, because it is never shared with anything but the secure device (under Ben's password utopia). However, the device communicates with the rest of the world using some kind of "secure" protocol. This protocol interaction may be vulnerable to man-in-the-middle attacks. And it surely will be, unless two mechanisms are in place:

  • The device must verify the identity of the authenticating party, the web site that accepts the authentication. Think about the lesson of Needham-Schroeder and compare that with Otway-Rees. If this is not done, the following attack is possible: the attacker Mallory can just pretend he is authenticating the user for the user's usual daily dose of gossip (read: social site). But the attacker will actually take the authentication challenge from the user's Internet banking application, feed that to the user's secure device and lure the user into providing a valid authenticator for Internet banking. The user will think he is entering a password to read the gossip, while in fact he will be authorizing a transfer of his savings to support Mallory's Luxury Vacation Charity Fund.
  • The device must bind the authentication to the actual communication. Think about how certificates are used to generate the session keys in SSL/TLS. If this is not done, it is trivial for Mallory to just wait while the user executes a proper authentication and then hijack his (authenticated) connection. The user may not even notice, as Mallory may fake a network failure. Or Mallory can let the user do whatever he does and wait for a logout command. He will silently discard the logout while pretending it happened. When the user leaves, Mallory can easily afford to buy a new house in the Mediterranean. (A sketch of both mechanisms follows this list.)
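
Here is a minimal sketch of both mechanisms together. The protocol is hypothetical: an HMAC over a shared key stands in for whatever a real device and bank would actually use. The point is that the signed response names the relying party and the channel, so a relayed response does not verify:

    # Minimal sketch: the device's response names WHO it authenticates to
    # (relying_party) and binds to WHICH channel (channel_id, e.g. a hash
    # of the TLS session). Hypothetical protocol, shared-key HMAC.
    import hmac, hashlib, os

    device_key = os.urandom(32)  # secret shared between device and the bank

    def device_sign(challenge: bytes, relying_party: str, channel_id: bytes) -> bytes:
        message = b"|".join([challenge, relying_party.encode(), channel_id])
        return hmac.new(device_key, message, hashlib.sha256).digest()

    # Mallory relays a bank challenge while posing as a gossip site.
    challenge = os.urandom(16)
    real = device_sign(challenge, "mybank.com", b"tls-session-with-bank")
    relayed = device_sign(challenge, "gossip.example", b"tls-session-with-mallory")

    # The bank recomputes over its own name and its own channel, so the
    # relayed response fails verification and the man-in-the-middle gains nothing.
    assert not hmac.compare_digest(real, relayed)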

Unless these attacks are prevented, the whole system will still be inherently vulnerable to man-in-the-middle attacks. No secure device can solve all the issues (although it can improve the situation a bit).

I see the solution like this: the user authenticates to his communication device (computer, mobile phone) with any appropriate combination of I know / I have / I am. When the device is persuaded of the user's identity, it relays that authentication to other systems. That may be strong authentication, not necessarily based on passwords. This forms a chain of authentication that can have quite a lot of links. However, to get a secure system, the user must inevitably believe that the device that displays information to him (workstation, notebook, mobile phone) is operating as expected. Failing that, all attempts to secure anything are useless. The bad news is that we are far, far away from that.

Posted by semancik at 11:11 AM in security
Monday, 19 November 2007

Trust simplifies our lives. Human lives. Trust is a relationship that is built on emotions. If you trust someone, you expect that he will behave in some specific way without being forced or highly motivated to do so. You rely on the fact that the trustee's feelings will not allow him to betray your trust.

Trust applies only to human beings. It makes no sense to think about trusting computers. Computers do not have emotions, do not have feelings. Computers do only what they are programmed to do.

When you think that you trust your computer, you in fact trust a lot of people: the engineers that designed and manufactured the hardware, the architects and developers that provided the software, the distributors that delivered the computer, the network operators that maintain the network you used to download the software ... and lots of other people involved in creating the thing that you are looking at right now.

We should not trust computers. Firstly, it is not the smartest thing you can do. To trust a computer you have to trust the software developers at the very minimum. And that's a very foolish thing to do (been there, done that). Secondly, it makes no sense to trust a non-human object.

The correct way of thinking is: how strong is my belief that my computer operates as I would expect? Belief is not a binary value and does not imply any emotions on the other (non-human) side of the relation. I pretty much believe there will be snow in the winter (there usually is). But I do not trust the weather to bring the snow. I believe that a stone will fall down when I drop it, but I do not trust the stone to fall. Got the idea?

The consequence of this is that the usage of the word trust in IT is all wrong. Names like WS-Trust or Trusted Computing are incorrect (although they sound great from a marketing perspective).

Maybe all of this sounds strange and simplistic, but I believe there is more to it. I will try to follow up on this topic in the next blog posts.

Posted by semancik at 9:10 PM in security
Thursday, 15 November 2007

HTTPS should stand for "HTTP Secure". But how secure is HTTPS really? And how secure are the applications that rely on HTTPS?

HTTPS is based on SSL. Based on my basic cryptology knowledge I can believe that the mechanisms of the SSL protocol, as well as the commonly used cryptosystems, are secure. The problems seem to be in the higher layers: in the servers, the browsers and the applications.

Servers send X.509 certificates during HTTPS connection setup. Browsers use the Common Name part of the certificate to check against the host part of the URL (which is a DNS name). The browser also checks that the server's certificate is issued by one of the "trusted" certificate authorities configured in the browser.
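
For illustration, here is a minimal sketch of the same checks using Python's standard library. The host name is just an example; the library verifies the certificate chain against the configured set of "trusted" CAs and matches the certificate name against the host:

    # Minimal sketch of what the browser does on HTTPS connection setup:
    # verify the certificate chain against the trusted CA set and check
    # that the certificate name matches the host part of the URL.
    import socket, ssl

    host = "www.example.com"  # stand-in for www.mybank.com

    context = ssl.create_default_context()  # loads the trusted CA set
    context.check_hostname = True           # compare certificate to host name

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("certificate accepted for", host)
            print(tls.getpeercert()["subject"])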

Now, if I open "https://www.mybank.com/" and I see the page and there is no warning, what does it really mean? Almost nothing. Nothing that I would really trust. Why do I consider that untrustworthy? To explain that, you have to know how X.509 certificates for HTTPS purposes are issued.

Certificate authorities are expected to issue certificates that have the Common Name (CN) property set to the value of the correct host (which is www.mybank.com in our case). The only thing that the certificate authority can check is whether the guy asking for the certificate is the owner of the appropriate DNS domain (for example by checking the RIPE database). It can request some papers from the organization stating that it really exists. And that's it. The certificate is issued.

Now, how difficult is it to twist that? You can legally register a similar DNS domain, something like my-bank.com, mybank-access.com or mybank.cc. Then you get the certificate legally. And how do users know whether the correct domain is mybank.com or my-bank.com? Or I can just pretend that I want a certificate for mybank.com and falsify the papers. How could a CA based in the USA check papers that seem to be issued by the government of South Uzbekistan? There were approx. 40 certificate authorities pre-configured in my Firefox. I would bet that at least one of them would issue a certificate based on some barely readable fax message showing a (seemingly) valid legal document. And this certificate will be "trusted" by the browser. In fact, a browser with default settings will not distinguish between a certificate from a lousy CA and the most secure certificate from Verisign.

Some banks ask users to check the fingerprint of the certificate at the beginning of each access. The funny thing is that they provide the "authoritative copy" of the certificate fingerprint on a page that is protected by the very certificate that is to be checked. Or do they really expect users to remember the fingerprint? And to check it on each access? No. It is just CYA, not a security measure.
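
A proper check would compute the fingerprint independently and compare it with a value obtained out-of-band (on paper, at a branch office) - not from the very page the certificate protects. A minimal sketch, with an example host standing in for the bank:

    # Minimal sketch: fetch the server certificate and compute its
    # fingerprint locally, to be compared with a value obtained
    # out-of-band - never from the HTTPS page itself.
    import hashlib, ssl

    host = "www.example.com"  # stand-in for the bank's host

    pem = ssl.get_server_certificate((host, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()
    print(":".join(fingerprint[i:i + 2] for i in range(0, len(fingerprint), 2)))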

Now consider that the prevailing Internet application security mechanism is authentication by passwords over an HTTPS channel. How secure are Internet applications (including Internet banking applications) really?

The scary thing is that there are "identity systems" (see previous post) that rely exclusively on HTTPS ... Shouldn't we rather stop the nonsense, stand back, take a few deep breaths and start thinking and talking about how to make it all right this time?

Posted by semancik at 8:21 PM in security
Tuesday, 30 May 2006

In the last posts [1] [2] I've written about inherent security problems of current information technologies. Today I want to write about possible solutions.

To make the long story short (a.k.a. "Management Summary"): I can see no short-term solution at all. If we work really hard, we can have at least some security in the first half of the next decade. But I really doubt that.

And now the full story:

Perimeter security does not work. Firewalls are not effective. And I believe that they cannot be made effective and practical at the same time. We should not rely on firewalls to provide host security. Hosts should be secure on their own. Especially mobile hosts, because they cannot count on firewalls protecting them. We should re-engineer operating systems to build security into their network layers.

Workstations are insecure. Anyone can do anything. Any process can ruin system security. This has to change. An operating system should not be designed to "just work"; it has to support non-functional requirements as well, such as security and reliability. Some features of multi-level secure systems should also be implemented in conventional operating systems. Well, it may be a little bit difficult to figure out which features to migrate and how to implement them so they remain usable. But I believe we can figure it out. Sooner or later. Probably later than sooner.

Windows Vista may be heading in the right direction (*). And it looks like Microsoft is quite alone in the effort. But I'm not naive enough to believe that security can be done right anytime soon. It will take a lot of thinking, designing and testing. And that testing will be done on real customers, I suppose, like you and me. I think that the first release of Windows Vista will not be much more secure than the current operating systems. Because for a system to be secure, everything must change: the approach, the technology, the people. And that will take a long time.

I would not expect that we will see any widespread secure operating system until 2010. 2015 or even 2020 are more probable. But at that time, the low-level software that runs on computing devices may not even be called "operating system" anymore.

(*) It's really ridiculous that such a strong opponent of the Microsoft approach as myself states that Microsoft is doing something that is heading in the right direction. Well, I would gladly admit that I was all wrong and that Microsoft is a really great technology company. But I have a strange feeling that somehow things are not all that ideal. Time will tell.

Posted by semancik at 10:48 PM in security
Friday, 19 May 2006

I estimate that at least 95% of all workstations used in home and enterprise environments are insecure. I do not mean insecure like "there's a hole in the OS". I mean insecure as "not designed to be secure".

Consider a common Windows XP workstation. How difficult is it to infect it with a virus? A teenage kid can do that. How difficult is it to steal data from a PC that is left unattended? Usually as easy as "reboot and insert a USB key". How difficult is it to steal a user's password? As easy as "install a keylogger" (use a virus, if necessary).

Attacking the workstation is the easiest way to get what you want. The workstations are the second weakest part of any system (the weakest part is the thing that usually occupies the space between the chair and the keyboard). Current workstations were designed for usability, not for security. Any application can write to the entire screen. We need that, because we want full-screen games and screensavers. Any application can read the keyboard. We need that, as we want all the fancy pop-up thingies and devious keyboard shortcuts. Most applications can read and write anywhere on the filesystem. We need that because we want to make software installation and maintenance as easy as possible. That means that any application can do almost anything. Mix that with ineffective network security and the low quality of standard software products ... what do you get? A disaster in waiting.

That's scary. And the most dreadful thing is that some people try to build "secure" systems in this environment. They venture to make legally-binding digital signatures on such platforms. They store classified information. They process personal data in large quantities. And they have the nerve (or ignorance?) to call these systems "secure".

This is the last entry from the "all sucks" series. I promise. I will write more about possible solutions next time.

Posted by semancik at 5:48 PM in security
Thursday, 18 May 2006

To have a "Perimeter Security" you need two things: a perimeter and a security.

Let's think about the "security" part of perimeter security first. The most common device used there is a firewall. Firewall. The word is much abused nowadays. Seven years ago I wrote a paper (sorry, Slovak language only) providing an overview and evaluation of network security mechanisms. I tried to make a clean distinction between "application-level gateways" and "packet filters" there, especially with respect to the ability to see and understand network protocols. All of these different shades of gray are called "firewalls" now. The security industry evolved towards ease of use, not towards security. And nobody really seems to care much about the distinction anymore.

Back at the InfoSeCon conference, Marcus Ranum had an excellent presentation about firewalls. He presented the reasons why today's firewalls do not work. I can agree with him completely. Current firewalls do not enforce protocol correctness. Yes, they understand some of the protocols (like FTP or HTTP), but that is primarily to let them pass, not to restrict them. Yes, firewalls can do URL filtering, antivirus and so on ... but those are "enumerating badness" approaches that do not really scale. Firewalls are designed to pass traffic, not to block traffic. That's not quite the right approach for a security device, is it? One way or another, there's no considerable security in a firewall any more.
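
The "enumerating badness" point is easy to show in code. A deny-list must name every bad thing in advance; a default-deny allow-list only names what is actually needed. A toy sketch, with all names hypothetical:

    # Toy sketch of the design difference Ranum points at.
    # "Enumerating badness": pass everything except a known-bad list.
    BAD = {"evil.example"}
    def blacklist_allows(destination: str) -> bool:
        return destination not in BAD  # unknown badness sails through

    # Default deny: block everything except what is explicitly needed.
    NEEDED = {"mail.mycorp.example", "www.mycorp.example"}
    def whitelist_allows(destination: str) -> bool:
        return destination in NEEDED   # unknown traffic is blocked

    new_threat = "freshly-registered-malware.example"
    print(blacklist_allows(new_threat))  # True - the deny-list never heard of it
    print(whitelist_allows(new_threat))  # False - default deny needs no update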

Let's look at the "perimeter" part of perimeter security now. We are at the beginning of the age of mobility. It is a common thing to work at home, to read your mail anywhere, to browse the Internet using a mobile phone. In a world like this, can you tell where your perimeter is? Does it only cover the network equipment you own? Does it include all the portable computers that your employees use at home? Does it include your CEO's notebook connected to some strange ISP in a hotel room somewhere near the end of the world? Does it include a WiFi network created by a misconfigured PC of one of your employees? Does it include mobile phones? And what about fridges, TV sets and toasters? ... Only one thing about the network perimeter seems to be certain: it does not follow the edge of your network.

Now we can see that we do not have security. We do not have a perimeter either. Do we have perimeter security?

Disclaimer: I'm not trying to tell you to scrap your firewall as an unneeded piece of old junk. Firewalls are still needed to maintain a minimal level of protection at the very least. I'm just trying to tell you that the protection the perimeter security approach provides is just that: minimal.

Posted by semancik at 7:40 PM in security
Sunday, 14 May 2006

The InfoSeCon 2006 conference is over. It was a really great conference with a unique atmosphere. The opportunity to talk at length to other speakers and to share ideas was priceless. I also appreciate that the conference was vendor-neutral. That's something we cannot see that often at our longitude. It was unquestionably the best conference I've attended in Central/East Europe.

The presentations and discussions with other attendees provided a lot of insight and tons of material for thought. I will follow up with more in-depth meditations later. Now I only want to present the overall "look & feel".

Marcus Ranum perfectly summarized the current state of information security in two words: "all sucks". That's exactly what most of the presentations were about (including mine) - at least partially. Firewalls do not really work, workstations are insecure, it is really difficult to get the security management processes right ... nothing really helps. But what is even worse: nobody really knows what to do about it.

There were a lot of good presentations by the "risk management" folks focused on methods to get the security processes right. Marcus Ranum talked about the fallacy of "generation 2" and "generation 3" firewalls, while hinting at what went wrong and what can be done about it. There was an excellent presentation by Vince Gallo describing the promise and limitations of the security system of Windows Vista. But one way or another, no satisfactory short-term solution seems to exist.

Maybe we should call this the "Security Crisis" ...
(gee, I hope I haven't just created a new buzzword)

Posted by semancik at 9:59 PM in security
Friday, 28 April 2006

All the local news are full of it. The National Security Office of the Slovak Republic was hacked. You can look at the hackers' description of the attack (Slovak only, sorry). The attack was trivial: The attackers probed the system using a bug in the webmail system. They got a suspicious username, tried to guess the password and ... it just worked. There was a "public" SSH connection and the same password worked on several other systems. Too easy ...

The National Security Office is quite an important organization of the Slovak government. It supervises the use of classified information and administers most of the security checks and clearances. It even hosts the national (root) certificate authority for qualified digital certificates and sets the digital signature regulations. You can imagine the panic that started after the announcement.

The real impact of the attack was minimal. The hackers gained control over several servers in the DMZ, stole a few gigs of data, could read and spoof mail and do similar things. The Office denies that any classified information was stolen. But it is not the impact of this particular attack that troubles me most. The scary thing is that the attack was so easy and straightforward. I would not wonder if it eventually turned out that a teenager did it. The triviality of the attack means that the failure goes quite a bit deeper than just a "one weak password" problem.

Every system can be compromised. That's a fact that any security expert knows. The trick is to make the compromise infeasible. To make it difficult, time-consuming, expensive. To combine systems and procedures in such a way that a compromise is either very improbable or its impact is negligible. The fact that the Office was compromised so easily, that the attack was not detected and that the attackers gathered quite a lot of information speaks of a severe system failure. I'm not talking about the operating system, the firewalls or any other technical system, but about the "security system" as an organizational process.

If the system worked as it should, the hole in the webmail interface would not be there. It would have been fixed by regular patching. If the system worked, the public SSH access would not be there. It would be limited to some IP address range, would use public-key authentication only, or it just would not be there at all. If the system worked, the user with the weak password would not be there. It would be detected by a regular audit and deleted (or at least the password would be made stronger). If the system worked, the same password would not be used across several systems. Any of these could have hindered or at least limited the attack.
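
The password-related checks are exactly the kind of audit that is easy to automate. A minimal sketch, with hypothetical data shapes and a toy guessing dictionary standing in for a real cracking run:

    # Minimal sketch of a password audit: flag accounts that fall to a
    # small guessing dictionary, and credentials reused across systems.
    # All data shapes here are hypothetical.
    import hashlib

    GUESSES = ["password", "admin", "123456"]  # toy guessing dictionary

    def h(password: str) -> str:
        return hashlib.sha256(password.encode()).hexdigest()

    # (system, username) -> password hash, as collected from several hosts
    credentials = {
        ("webmail", "svc-user"): h("password"),
        ("dmz-ssh", "svc-user"): h("password"),   # same password reused
        ("intranet", "jan"):     h("7kP#x!2qLm"),
    }

    for (system, user), digest in credentials.items():
        if any(h(g) == digest for g in GUESSES):
            print(f"weak password: {user}@{system}")

    seen = {}
    for key, digest in credentials.items():
        seen.setdefault(digest, []).append(key)
    for digest, accounts in seen.items():
        if len(accounts) > 1:
            print("password reused across:", accounts)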

It is not a failure of the system administrators. In an organization like this, the security system should address even a deliberate attempt of a system administrator to lower the security level of the system, not to mention common unintentional mistakes. The multi-level security and separation of duties principles are good for just that.

The fact that all of these weaknesses existed in the system is glaring evidence that no effective security system was in place. And that is the thing that really troubles me. This attack was just for fun. The attackers had no real intention to do harm. The next attack might not be that friendly ...

Do not look for the www.nbusr.sk webpage for a while. It looks like it was torn down as a means to secure the agency. In fact, the whole agency seems to be disconnected from the Net.

Posted by semancik at 1:24 PM in security
Monday, 27 February 2006

Bob Blakeley, one of my favorite bloggers, recently blogged about the evil nature of passwords:

Static passwords are an unacceptable hazard, good alternatives exist, we should get rid of static passwords in favor of those alternatives, and we should do it fast.

He also issued a call for action:

I believe that this community should commit itself to achieving the goal, before this decade is out, of providing every computer user with a strong authentication device and the infrastructure required for its universal acceptance.

While I can understand Bob's motives, I'm afraid that he is too optimistic and maybe even partially wrong. I think we just can't get rid of passwords. Not in the near future. The reason is quite simple, but the explanation is quite long. Here it goes:
(for all of you impatient readers, you may skip directly to the point)

It is common knowledge that we have three types of authentication:

  • Something you know: passwords, PINs, ...
  • Something you have: tokens, mobile phones, ...
  • Something you are: biometrics

Another (but not-so-common) piece of knowledge is that just one type of authentication is not enough. Why?

  • Something you know: can usually be easily compromised. See all of Bob Blakeley's arguments.
  • Something you have: can be stolen. Even if we accept Bob's requirement that the theft has to be quickly noticed, "quickly" may easily be several hours. Consider that you are asleep in a hotel and one of the hotel employees steals your device. You will detect that in the morning at the earliest. And that may be too late.
  • Something you are: There is nothing about you that one device can read and another cannot. You leave your fingerprints all around you, and it takes just a few gummi bears to exploit that. The iris takes a little more effort, and you even leave lots of your DNA around. It seems that once you get an inexpensive biometric reader device, only a few steps lead to an inexpensive method of fooling that device.
Using only one type of authentication is a risk. It does not matter much which one you choose. The first one (passwords) is the most frequently used in the digital world, hence the attacks against it are the most advanced. But who can tell that the other two are more secure? We have not tried them at the same scale yet.

So called "two-factor" authentication can address the vulnerability of single authentication mechanism. You just have to use two types of authentication to lower the risk of breaking one of them. For example combine tokens and PINs or biometrics and passwords. Now, you have three different combinations of two-factor authentication mechanisms, and only one does not involve passwords: tokens + biometrics. And how would we implement that? Putting fingerprint reader in your notebook does not help much. As Bob correctly said: the workstation is not secure. And even if it was, fingerprint authentication is not. And I can't really imagine portable token with DNA analyzer being affordable anytime soon. And I don't even dare to think about consumer acceptance.

Well, what do we have left? Tokens + passwords and biometrics + passwords. I will not ponder the feasibility of these in detail. All we need to know is that they both involve passwords. Be they in the form of a PIN, a passphrase or a whistled-morse-code signal, these are still passwords.

(There is another issue with using tokens for authentication, and that is the number of tokens needed in day-to-day business. Just recall how many keys are on your keyring. Why do you think you would not end up with that many tokens? But more about this later. Maybe.)

One way or another, we cannot get rid of passwords anytime soon. But the one thing that we can change is the way we use and manage them. First of all, we need to get rid of one-factor password-only authentication for all important transactions. We should use two-factor authentication instead. And we should make sure that we enter our passwords into secure devices, not into our workstations. We should have the secure device do the "strong" authentication, not the notebook.
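
One concrete shape such a secure device can take is an HMAC-based one-time password generator in the spirit of RFC 4226 (HOTP): the PIN unlocks the device, and the device derives short-lived codes from a secret that never reaches the workstation. A minimal sketch, not a production implementation:

    # Minimal sketch of a device-side "strong" authenticator in the
    # spirit of RFC 4226 (HOTP): a PIN unlocks the device, the device
    # derives one-time codes from a secret the workstation never sees.
    import hmac, hashlib, struct

    class SecureToken:
        def __init__(self, secret: bytes, pin: str):
            self._secret, self._pin, self._counter = secret, pin, 0

        def one_time_code(self, pin: str) -> str:
            if pin != self._pin:                    # "something you know"
                raise PermissionError("wrong PIN")
            msg = struct.pack(">Q", self._counter)  # moving counter
            digest = hmac.new(self._secret, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F              # dynamic truncation
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            self._counter += 1
            return f"{code % 10**6:06d}"            # 6-digit one-time code

    token = SecureToken(secret=b"shared-with-the-bank", pin="1234")
    print(token.one_time_code("1234"))  # changes on every use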

We have to set realistic goals. We cannot get rid of passwords, but we can change the way we use them. This should be the goal of the decade.

Posted by semancik at 10:37 PM in security
Friday, 18 November 2005

I've recently found Dan Blum's Identerati blog, and found there a piece that explains why "strong" authentication will not fix phishing. And it really struck me. How could anyone ever think that one-way authentication can fix a man-in-the-middle attack? What kind of people are out there?

Some environments can really surprise me. It is only a few years ago that I learned that some American bank used only simple passwords for Internet banking access. "What foolishness", I thought. Here, in barbaric Eastern Europe, no bank would ever risk that. Even the technologically least advanced bank deployed at least some kind of "strong" auth before the break of the millennium. And even with strong auth there were some breaches. Nothing public, of course :-)

Only later did I learn that it is common practice in the US to use passwords only. Real foolishness. I'm no fan of so-called "strong authentication", because it is usually just a one-way dynamic password authentication scheme(*) packaged in a nice box. But even that is much better than static passwords.

(*) Oh yeah, you can "secure" the "strong" auth by wrapping the HTML form in SSL. But have you ever seen the list of "trusted" Certificate Authorities in your browser? No? Then go on and have a look. I would bet that there are many of them that you've never heard of. Do you trust them? I'm sure you do.

Posted by semancik at 10:24 AM in security