Three of my current assignments share an interesting similarity: attribute-based access control (ABAC), all at the same time. And for all of these customers, the choice for ABAC is related to rebuilding the internal application landscape. These customers are not implementing ABAC to facilitate federation of external identities; they are implementing ABAC for internal users of internal systems. There are multiple drivers for this change. There is, of course, the desire to publish internal services to external parties, some day. There is also the move toward service orientation and the use of service buses and web services. And the concept of separating identity provisioning from service provisioning is gaining ground as well. Explaining the pros and cons of ABAC for both business and ICT is no longer a mission impossible.
But at the same time I encountered an interesting and unexpected blocking factor: the application developer. It seems that while both the business user and the ICT operator appreciate the added value of federation and ABAC, the application developer has trouble grasping the new paradigm:
There is no login page for an app anymore, no users table, no account name or password to manage. How do you implement access control without a user database and an authorization table with roles? And if users don't log on to your app, how do you know who they are?
An unexpected problem for sure. We need to educate developers as well, to ease the paradigm shift.
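To make the shift concrete for developers, here is a minimal sketch of the idea. All attribute names and policy rules in it are made up for illustration: the point is that the app never looks a user up in a local table, it receives verified attributes (for example claims from a federation token) and evaluates a policy against them.

```python
# Minimal ABAC sketch: no users table, no password check.
# The app receives verified attributes (e.g. claims from a
# SAML/OIDC token issued elsewhere) and evaluates a policy.

def may_access(resource: dict, attributes: dict) -> bool:
    """Grant access if the caller's attributes satisfy the
    resource's policy. Attribute names are illustrative only."""
    return (
        attributes.get("department") == resource["owning_department"]
        and attributes.get("clearance", 0) >= resource["min_clearance"]
    )

# Attributes as they might arrive in a token:
claims = {"department": "finance", "clearance": 2}
invoice_archive = {"owning_department": "finance", "min_clearance": 2}

print(may_access(invoice_archive, claims))                # True
print(may_access(invoice_archive, {"department": "hr"}))  # False
```

The app still knows who the user is: the identity (and the attributes) come with the token from the identity provider, not from a local account database.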
Wednesday, 20 April 2016
Wednesday, 6 April 2016
How to manage non-personal (system) accounts
Customers often ask me for best practices regarding the management of non-personal or highly privileged accounts in the process of implementing an Identity and Access Management (IAM) solution. This is an interesting question, because in an IAM project we try to manage all kinds of accounts, but this type of account is different from accounts that are owned by end users. Such accounts can't be related directly to a uniquely identifiable person, nor are they the result of the 'joiner - mover - leaver' HR processes in an organization. So how do you manage the existence of such an account?
Types of non-personal accounts
There are Non-Personal Accounts (NPAs) and Non-Personal System Accounts (NPSAs). We can identify:
- Admin or root account
The admin or root account of a Windows, Linux or Unix server is the highly privileged system account on the respective platform:
o It is authorized at the highest level.
o It has access to every file and process running on the platform.
o 'root' or 'Admin' has the permissions to change the behavior of the component.
o Commands can be run from it, and it can react to responses of the system.
o Operational use of the account needs to be monitored continuously.
- Superuser account
A business information system or application account that looks a lot like 'root'. It is there from the moment the system is installed; it is a system account. The superuser has permission to modify the system, making it a risk-critical account in an information system. An example is SAP* in an SAP environment.
- Service account
Accounts for middleware processes like DBMSs, ESBs, or other ICT components that run on top of the Windows or Linux operating system. A special form of non-personal account is an application account in a DBMS that gives database access to an application.
- Batch user account
An account used by a batch job process. It is most commonly used for scheduled batch jobs, like nightly file transfers.
NPA characteristics
NPSAs have a few characteristics in common. They are non-personal: they are not directly connected to a person. Logging in with such an account doesn't leave an audit trace showing which person is actually using it. And, of course, NPSAs are very powerful, so their use should be tightly controlled.
Service and batch accounts also have a specific similarity: one typically doesn't log in with such an account; these accounts are not used interactively. In most cases such an account is only used as a placeholder with some permissions to perform specific tasks, like running a web server with the limited capability to process HTTPS requests and write log files.
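As an illustration of such a placeholder account (the account and directory names here are made up), on a typical Linux system a non-interactive service account can be created like this:

```shell
# Create a system account for a hypothetical web service:
# no home directory and no usable login shell, so nobody can
# log in with it interactively. It only exists to own the
# process and its files.
sudo useradd --system --no-create-home \
     --shell /usr/sbin/nologin webserver-svc

# The service's files are owned by the account; the account
# itself never "logs in".
sudo mkdir -p /var/log/webserver
sudo chown webserver-svc: /var/log/webserver
```

The `nologin` shell is exactly the "placeholder" property described above: the account carries permissions, not a person.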
Modern IAM solutions can be implemented to facilitate the provisioning of personal accounts for specific user functionality. But since non-personal accounts are not an attribute of an identity, there is no single user that can be connected to an NPA, so an IAM solution is not suitable to manage them.
If not an IAM problem, then what?
These accounts belong to the component that they manage. The Windows operating system comes with the Administrator account; Linux comes with root. You cannot install Linux without a root account. You may not be able to log in with it by default (as on Ubuntu), but the account is there. So by installing an OS, you automatically get the 'God'‑account. There is no choice: it is the result of the change process that leads to the implementation of the OS. Such an NPSA should only be used in a controlled manner from specific processes, like an incident management process (admin may be needed to assist in a catastrophe) or the change management process (admin permissions may be required to perform an infrastructural change).
The same is true for service accounts: when installing a middleware component, like a database management system, the account is created to enable the service, hence the name service account. Again, you have no choice. You might install the service under a 'root'-type account, but that would be a security violation: thou shalt not run any service as root!
And for batch accounts the same is true again: a batch process is created as the result of a change request. The batch tasks are created to support an information system or a business process. The batch job is created to make it possible to schedule the automatic execution of those tasks, and a batch account is created to make it possible to use resources on the system.
This leads to the following conclusion:
Non-personal accounts have to be managed in the change management process.
This has the following implications:
- The account has to be registered in the configuration management database; it is an attribute of the component that it belongs to. Admin belongs to the Active Directory. Root belongs to a Linux server. The account named 'Oracle' probably belongs to an Oracle DBMS instance: the DBMS is a managed component and the account name is Oracle.
- The account has an owner, who is accountable for the use of the account. Admin and root belong to the manager of the ICT department; the SAP* account is owned by the system owner of the SAP system.
- The interactive accounts should only be used for infrastructural changes or calamities.
- The non-personal system accounts should never be used interactively for operational tasks.
- The passwords of these accounts must remain secret. They should be secured by means of an envelope procedure, a password vault, or by using a Privileged Account Management system (PAM, like CyberArk, Hitachi PAM, Thycotic or CA PAM to name just a few). Any use has to be related to a service management ticket (an incident or a change).
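As an illustration of that last point, here is a minimal sketch of what a vault-style check-out guard could look like. Everything in it, including the ticket number format, is hypothetical; real PAM products like CyberArk have their own APIs and far richer controls.

```python
import re
from datetime import datetime

# Hypothetical vault: releasing a privileged password requires a
# valid service management ticket, and every release is logged.
class PasswordVault:
    # Illustrative ticket format: an incident or change number.
    TICKET_PATTERN = re.compile(r"^(INC|CHG)\d{6}$")

    def __init__(self):
        self._secrets = {}   # account name -> password
        self.audit_log = []  # (when, who, which account, which ticket)

    def store(self, account, password):
        self._secrets[account] = password

    def check_out(self, account, requester, ticket):
        if not self.TICKET_PATTERN.match(ticket):
            raise PermissionError("a valid incident or change ticket is required")
        self.audit_log.append((datetime.now(), requester, account, ticket))
        return self._secrets[account]

vault = PasswordVault()
vault.store("root@linux-42", "s3cr3t")

# Allowed: use is tied to a change ticket and leaves an audit trail.
pw = vault.check_out("root@linux-42", "alice", "CHG123456")

# Refused: no ticket, no password.
try:
    vault.check_out("root@linux-42", "bob", "just curious")
except PermissionError as e:
    print(e)
```

The essential property is the coupling: the password is never released without a ticket reference, so every use of the NPA can be traced back to an incident or change.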
So, there you have it. Non-personal accounts must not be managed in an IAM solution; they have to be managed by the change management processes in an organization. They are owned either by ICT or by the system owner of the information system that the account is used for. You should not manage privileged accounts in an IAM solution. And if you have to execute tasks with one of these accounts: use a Privileged Account Management system to secure it.
Monday, 2 November 2015
No business case for Identity Providers (part 3)
Would I like to be an identity provider?
Well, of course. I would make sure that my identities were very reliable and reusable. My identities must be trusted in order to make them reusable by different service providers, the government, banks and other websites. This, of course, requires the use of open standards and an auditable governance and trust framework. To achieve this, I need a business model, because someone has to pay for all this. In my opinion there are several models:
- The citizen/customer pays for his digital identity
- The citizen/customer gets a digital identity for free
- We license a trust framework
$Identity
Anyone requiring a digital identity from a trustworthy Identity Provider needs to pay for the use of that digital identity. The question is whether I, as a consumer, would be willing to pay for a digital identity. If I can't use the identity, I don't want to pay for it: 'What's in it for me?'. This model requires a convincing story: as a consumer I need the assurance of the reuse potential.
If I were an identity provider, would you pay for my digital identity if I could guarantee reuse? If so, how much? That's difficult to calculate. There are many costs attached to running a trustworthy identity management system, and most of them are fixed costs. The more identities I can sell, the lower the management and security costs per identity, and thus the lower the price of a digital identity. How about $20 for every identity? And with a periodic renewal every 2 years? Because identities erode.
Zero$ identities
There are different variants of this model.
1) Like I mentioned in an earlier post, the Dutch DigiD is an example of a trustworthy free identity. The identity is free; its costs are covered by the identity provider. Because of DigiD, Dutch citizens can perform a lot of G-C transactions online, and data entry is moved from the civil service to the citizen. The disadvantage of this model is that the reuse potential is low. The identity can only be used at a few service providers within the trust framework, like local government and a limited number of legally appointed third parties.
2) Another instance of this model is a company that pays the costs of identity management and provisioning for its own customers. Just like the DigiD case mentioned above, but with a larger reuse objective. All parties in the trust framework abide by its rules and guarantee conformance to them. This means that there should be auditable quality and trust criteria, resulting in some kind of a seal of approval... It looks a lot like the OpenID+ model I wrote about in a previous post.
An identity that can be used often has a higher value than an identity without reuse potential. Hence an identity provider with high-reuse-value identities will have a better reputation and may be willing to invest in this identity provisioning service. What will this cost? The trust framework will be expensive, so the costs of such an identity will be higher than in the first model, let's say $50 per identity. That is an investment with a positive Return on Investment, even more so if the service results in frequent customer contact as well, for instance because of periodic renewal of the identity.
Commercial providers of free identities, like Facebook, Twitter and LinkedIn, implement this model in some way. The reuse potential of this model is moderate to low, because of the lack of a trust framework. Only service providers within the trust framework of the identity provider (think of Blogspot, which enables you to use your Gmail account to log on) offer reuse potential. Other SPs, who don't require trust but just rely on the identification and authentication of a customer, may allow the use of a free account.
What is the business case of identity provisioning for these commercial IdPs? They offer a free digital ID, but who is paying for it? Because when you log in using open protocols like OAuth, there is no transaction fee for authentication. This is an interesting question. These IdPs seem to earn a lot of money by monetizing your digital ID in a different way. Managing and securing identities is costly, but their business model has an enormous ROI because of the services they offer by analysing the value of your identity, your profile. Your behavior is valuable…
Is such an identity a good match for all purposes? Obviously not. There is no trust in your digital ID because, no matter what 'real name' policy applies, the IdP doesn't really know you; it only knows your profile. And the provider knows every service provider you use, based on your logon.
You could upgrade the value of an untrusted digital ID by using a third-party verification scheme, for instance upgrading your Twitter account by having it validated by another trust framework. This of course creates a larger reuse potential (in the other trust framework) with your simple logon feature. But of course, someone will have to pay for the added trust of verification in a third-party trust framework. There's no free lunch...
3) The third instance of this model is an identity provider that gives out free identities, but makes the service providers who trust the identity pay the fee. That could be a per-use fee or some kind of subscription fee. This creates a high reuse potential within the trust framework. In this way the service provider doesn't have to pay all the costs of identity provisioning, thereby saving a lot of money and limiting compliance risks: if you don't manage identity data, you can't lose it… How much should this cost? Hard to say, but I think that $0.10 per reliable authentication could well be feasible. Or a subscription fee of, let's say, $10 per customer per year?
For IdPs there is a real incentive to create as much reuse potential as possible. The more often an identity is used, the higher the profit. But reuse potential is a result of reliability and reputation, and identity provisioning is an expensive business model. If a digital identity is not used often enough, it will result in a financial loss.
Last model...
Let's just create a trust framework and have anyone use it. Both identity providers and service providers pay a license fee and can start using it. The trust framework guarantees reuse and every party can decide their own business model (I wrote about this long ago...). But the trust framework has to be developed, managed and monitored, according to open standards and governed by legal standards. Someone has to pay for this model too. And there is an example: OIX, by the Open Identity Exchange.
Lingering business case problems...
There are some other problems for identity providers and service providers. In the business case, the main driver for profit is the reuse potential of digital identities. Only if there is real reuse potential can operating an IdP be affordable. If not, there is no business case. If an identity cannot be reused, it may well be too expensive for the customer, the IdP or the SP.
But there is a strange paradox… The better the reuse potential, the less I am inclined to use other identities; the one with the best reuse potential will be my preferred ID. This means that I don't need another IdP. And the same is true for other consumers as well, which means that there is limited room for other IdPs. (I know, you may want to use more than one identity, but that's out of scope for this post :) )
Is there a business case for IdPs?
No trust framework, no reuse. No reuse, no business case. No business case, no digital identities. No digital identities, no trust framework. No trust framework, no reuse, no business case.
I may want to be an Identity Provider, but I don't believe that there is a business case, unless you manage to be in the same league as Facebook and friends...
(based on my Dutch language post https://www.cqure.nl/kennisplatform/digidem-4-het-opbrengstenmodel)
Monday, 27 July 2015
The business case for Identity providers (part 2)
In my previous post I wrote about the costs of identity provisioning. Yes, a digital identity doesn't come for free, although you may experience otherwise. Lots of the digital identities you get are free for you, as a consumer or citizen. But the costs connected with your identity can be quite high. As I showed in my previous posts, the costs of compliance and governance are high. And depending on the trust model that comes with the identity, the value of an identity can be high too. An identity is valuable if you can use it often and reuse it as well. The better the reuse potential, the higher the value of the digital identity that you experience. And the higher the value that you experience, the more you will be inclined to use it.
But not every identity is equally valuable for us as citizens or consumers. In my opinion there are two major factors that impact the value: Trustworthiness and Reusability. Let me expand on this:
Trustworthiness is an interesting concept. In my country, the Netherlands, a few digital identities are trusted by almost everyone. A good example is a bank account. I can use my bank account at almost every webshop to perform transactions, limited only by my bank balance. The banks in our country created a strong trust framework. They have to, of course, as they have to comply with lots of (international) rules and regulations. They made agreements with several trust brokers, so that even small shops could be part of the trust framework. Yet the reuse potential of my bank ID is very low. I cannot use my bank ID to log in to other sites, or webshops, or to log in to a governmental site. Banks don't want you to reuse the identity. In fact, it is just an authorization ID; it only lets you perform a financial transaction... Don't ask me why...
Interestingly: the bank ID may look free, but we have to pay a subscription fee every year in order to be able to use it.
The Dutch digital government identity is less trustworthy, mostly because the provisioning takes place without visual verification of the identity of the citizen. But although the trust level is quite low, the reuse potential is better than that of the bank ID, because the government wants citizens to use the citizen ID to perform transactions with all kinds of governmental sites, and even some external parties can be accessed with 'DigiD'.
The best part of this ID is that it's free... Until you remember that it is free because you, as a citizen, perform several tasks that, until a few years ago, were performed by civil servants. The cost savings for the government must be enormous. That more than pays for the costs of ID compliance and ID governance.
There are other free digital identities. Just look at this account, a Google account, or Facebook or Twitter. These accounts can be reused, but reuse is limited to parties within the trust framework of the identity provider. I can use my Gmail account to create posts on Blogger, but not to post a Twitter status update. Although OAuth kind of obfuscates the reuse boundaries; thank you, OAuth ;)
Strangely I cannot recall a paid trustworthy digital identity that can be reused. Could that be a feasible option? I feel that there could well be a paid model. Of course there should be a trust model and of course that will be expensive. But perhaps there could be a business case for such a proposition.
To sum it up:
- We do have free digital IDs that we can reuse, but with little trust
- We do have paid trustworthy digital IDs that we cannot reuse
So, there may be room for:
- Free trustworthy IDs that we can reuse
- Paid IDs that we can reuse
But... do we need all that?
I will try to answer this question in my next post.
(this post is a translated version of my earlier Dutch language post)
Monday, 6 July 2015
The business case for Identity providers (part 1)
In the Netherlands the government provides a reusable digital identity, DigiD, to its citizens. DigiD can be used for different G-C transactions: tax-return forms, getting certain licenses from local government, communicating with health insurance companies and pension funds. The uses are strictly defined by law; you can't use DigiD for commercial transactions, like webshops. And DigiD can only be used in the Netherlands, not abroad. There is another 'minor' problem with DigiD: any Dutch citizen can request a DigiD, and the DigiD Identity Provider (IdP) sends an activation code by snail mail. Not the most secure way of identity provisioning, and there have been incidents of criminals fishing the activation letters from the mailbox. And if a criminal first requested a DigiD on behalf of a victim, he of course knows when to look out for the mail to capture it…
Anyway, the Dutch government is in the process of building a new identity framework, making it possible for third parties to act as an identity provider within the Dutch eID framework.
In a series of blog posts (Dutch: first post and second one) I asked myself the question: is there a business case to become an IdP? Is there any commercial driver? Or are there other drivers?
I found out that it is very hard to answer these questions positively…
A few years ago I was at the European Identity Conference in Munich, and on one of the panels Kim Cameron remarked that everyone wants to be an IdP. If consumers use your identity, the single fact that people use one of your identities creates brand value. You grow a valuable reputation.
But if everyone becomes an IdP, what does that mean? Can you use and reuse every one of those identities? That makes for an interesting problem: at this moment no one wants to accept just any third-party digital identity. The reuse capability is very small, too small to make people use a third-party identity. We have a deadlock situation. Why is that?
Of course some identities can be reused. Think about Facebook, Twitter and LinkedIn identities. These can be reused, but only to a certain extent. These (free) identities are only trusted by service providers if there's something in it for them... For a service provider these identities make it possible to have an authenticated account without the need to store and protect identity information, like passwords. You can't lose what you don't have. So accepting third-party identities is not only useful for consumers, but even more for SPs, as a kind of preventive privacy control against data leakage. But no service provider will let you make financial transactions using a Facebook account. Facebook (as a service provider) may do so, but independent third parties will be very reluctant.
Why is the reuse capability of third-party identities limited? Well, in my opinion it's the lack of a transparent, and trusted, trust framework.
A few years ago we tried to create a trust framework based on the OpenID standard; we called it OpenID+, the + being the trust framework. The group that worked to create OpenID+ consisted of a government body, a few financial corporations, some e-business companies and media corporations, all prominently present in the Dutch internet space. Interestingly, both IdPs and SPs took part in the development of the trust framework, the main principle being that any OpenID+ SP would have to accept any identity that was provided by any OpenID+ IdP!
We started building the policies and procedures in our spare time (I know, I was one of the authors), and when we deemed the framework sufficiently mature, we decided to go ahead and start a few OpenID+ proofs of concept. In order to build the trust framework, we defined the technical extensions for OpenID, the policies for the identity provisioning processes, for deprovisioning, for auditing, and for legal issues. These were some of the questions that had to be answered in order to create the trust framework:
- Should we create a (secure and trusted) white list of trustworthy OpenID+ IdP's?
- How should an OpenID provider apply for the white list?
- Should there be an audit guideline for audit or self-assessment?
- Can any service provider access the white list, or should we allow only connected OpenID+ service providers?
- How could we guarantee that all providers would interpret claims and attributes in the correct manner?
- What should happen if an incident occurs? For instance in case of misuse or theft of an OpenID+ identity, or wrong interpretation of an attribute of an OpenID+ identity by a service provider?
- How about liability in case of an incident?
- How long should a white list entry be valid?
- Would we need an arbitration committee?
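To illustrate the white-list questions above, here is a minimal sketch of what an IdP white list with limited-validity entries could look like. It is purely hypothetical (the URLs and the one-year validity are made up) and was not part of the actual OpenID+ work; a real white list would also need signing, distribution and revocation.

```python
from datetime import date, timedelta

# Hypothetical white list of trusted IdPs: each entry carries an
# expiry date, so trust has to be renewed periodically (one of the
# open questions: how long should an entry be valid?).
whitelist = {
    "https://idp.example-bank.nl": date.today() + timedelta(days=365),
    "https://idp.example-gov.nl": date.today() - timedelta(days=1),  # lapsed
}

def is_trusted(idp_url, on=None):
    """An IdP is trusted only if it is listed and its entry
    has not yet expired."""
    on = on or date.today()
    expiry = whitelist.get(idp_url)
    return expiry is not None and on <= expiry

print(is_trusted("https://idp.example-bank.nl"))  # True
print(is_trusted("https://idp.example-gov.nl"))   # False: entry lapsed
print(is_trusted("https://idp.unknown.example"))  # False: not listed
```

Even this toy version shows why the questions were hard: somebody has to decide who gets on the list, audit them, and pay for maintaining it.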
Quite a lot of questions, and these were only a few of them. And that is when the trouble started. Defining the standards was okay, but implementing them proved very difficult. We found out that building such a trust framework was very expensive. Especially the documenting and auditing of the processes and techniques proved so costly that the parties became afraid of what would come next. Who should pay the costs of such a trust framework?
As a result, the OpenID+ framework was never implemented. There was no positive business case for any of the participants.
But that's not the end of it yet. There are several financial models for IdPs. In my next post I will introduce the different models and expand on the business case for identity providers. And I will try to explain why the reuse capability of digital identities is critical for the success of IdPs.
Sunday, 28 June 2015
Fighting Android insecurity FUD
This week Dutch newspaper de Volkskrant warned against a severe leak in Android that enables attackers to install software on an Android device without the consent of the user and without even touching the device. The journalist of the Volkskrant wrote an article about some Dutch scientists who claim to have discovered a leak in Android security. They posted their findings with a demo of the leak in a video (which to this day can be downloaded here: https://drive.google.com/file/d/0B73YUDeOq3OWTG93enVYVWN3TXc/view?pli=1).
The video shows some convincing and exciting insights into the hack. From an infected web browser, the scientists install malicious software on a cell phone using the browser version of the Google Play Store, thereby enabling all sorts of abuse on the phone. The video contains some very alarming demonstrations and scenarios: abuse of PayPal accounts, the option of reading SMS messages, e-mails, etc.
The issue exists because of the tight integration between all Google services, from Gmail to the Play Store, and that integration extends to Android devices. It is based on the fact that all Google services are bound to one Google account. The scientists show that, using a stolen Gmail account, an attacker could push malicious software to an Android device using only the Play Store web front end, without touching the device itself. The user then acts on behalf of the attacker by activating the malware, thereby opening up the device for the attacker.
The scientists end their performance with the statement that you should build security into a product, instead of bolting it on. This statement stems from their claim that they reported the issue to Google but never got a reply. Anyway, all major Dutch media posted the item as well: Android is a big risk.
If I were a scientist or a journalist, I would check the facts before posting these statements and ask some questions first. My questions would look like this:
How does a Google account get hacked?
Did you examine this exploit on your own systems? Because criticising scientists and journalists without evidence is only too easy...
My bad, I'm not a hacker; I failed miserably. I could not log in to Google Play services without entering the text message code on my mobile phone. I could not even push regular software from the Play Store to my device. Oh no, I am not a scientist or journalist: I couldn't replicate the findings, I can't exploit the Android leak as an attacker. Or... is activating two-factor authentication enough to mitigate the risk?
So, dear scientists and journalists, before posting FUD, please investigate the problem, not the symptom. If you claim that there is a vulnerability (not even a leak), do so from different perspectives. First check the facts. Then check if the issue is new. Then doubt your own findings. That's science. That's investigative journalism. If you don't, you just create FUD.
Disclaimer: I'm not an Android user.
The video shows some convincing and exciting insights into the hack. From an infected web browser, the scientists install malicious software on a cell phone using the browser version of the Google Play Store, thereby enabling all sorts of abuse on the phone. The video contains some very alarming demonstrations and scenarios: abuse of PayPal accounts, the option of reading SMS messages, e-mails, etc.
The issue exists because of the tight integration between all Google services, from Gmail to the Play Store, and that integration extends to Android devices. It is based on the fact that all Google services are bound to a single Google account. The scientists show that, using a stolen Gmail account, an attacker could push malicious software to an Android device using only the Play Store web front end, without touching the device itself. The user then acts on behalf of the attacker by activating the malware, thereby opening up the device to the attacker.
The scientists end their performance with the statement that you should build security into a product instead of bolting it on. This statement stems from their claim that they reported the issue to Google but never got a reply. All major Dutch media picked up the item as well: Android is a big risk.
If I were a scientist or a journalist, I would check the facts before posting these statements and ask some questions first. My questions would look like this:
How does a Google account get hacked?
The scientists claim that a man-in-the-browser attack could be used, but they never say how, when or why. I believe it could be done, but the first conclusion must be: this hole, or whatever it is they found, can only be exploited if a Google account is stolen, by whatever means. That is clearly not an Android problem. If a Google account is stolen, there are bigger problems than just uploading malware.
How does the malware get installed on an Android device?
But when you think further, the first and foremost issue is that if a browser gets infected with evil code, an attacker controls that browser (and sometimes more). That means that people who use online banking, PayPal, webmail or online shops through that browser can expect an attacker to harvest all their data. And yes, obviously, to use all sorts of logins for evil purposes: Twitter, Facebook, Microsoft and of course Google. This isn't new. Investigative journalist Brenno de Winter (@brenno) demonstrated this a year ago and explained how such a weakness can be abused. In his case he used a KLM domain that the company had forgotten [http://www.nu.nl/internet/3733033/vergeten-klm-domein-opende-weg-phishing.html] (Dutch). The server the domain name pointed to was vulnerable to all sorts of attacks. This way an attacker could install web pages filled with malware, make look-alike (phishing) websites, etc. Using freely available tools the journalist created a fake Google login page to harvest credentials, or to use the credentials that the malware harvests. He then installed the Cerberus app [http://www.cerbereusapp.com/] on the Android device. With that software you can control the phone, read SMS messages, record audio, record video, take pictures and, even worse, hide the app from the drawer, so the user won't notice he is being spied on. This looks remarkably like the new leak found by the scientists. New? No way. Science? No way. Is this an Android or Google issue? No way.
The scientists claim that you can install malware on an Android device using Google Play services. That is clearly not the case. There is (almost) no malware in the Play Store: Google's security controls on the Play Store are strong enough that malicious software can hardly be published. Repackaging regular software with a malicious payload and uploading it to the Google Play Store is not feasible.
So, no Android issue here either.
Will Android users activate software on their device?
Who knows. People are curious and not always security-aware; they might just install malware. But for an attacker, building a business case on this scenario is not realistic.
But again, this is not an Android issue, as all phishing tests prove.
How is the tight integration on other platforms?
Microsoft and Apple use the same kind of integration on Windows Phone and iOS. I have no experience with those platforms, but one differentiator is that they don't offer remote push of apps, so the vulnerability may differ from Android's.
This Android feature could be a risk.
Is there no work-around? What should end users do to protect themselves against these leaks?
No idea; science gives no answer... and the journalists don't offer any hints either.
Did you examine this exploit on your own systems? Because criticising scientists and journalists without evidence is only too easy...
Here I go: I installed a fresh browser on my Windows PC (in order to act as an attacker).
Next I browsed to play.google.com and behold, all apps are visible.
Next I tried to log into the Play service using my single Google account.
Yes, logged in... almost: Google's two-factor authentication function popped up, reporting it had sent a text message to my mobile device...
My bad, I'm not a hacker; I failed miserably. I could not log in to Google Play services without entering the code from the text message on my mobile phone. I could not even push regular software from the Play Store to my device. Oh no, I am not a scientist or journalist: I couldn't replicate the findings, and I can't exploit the Android leak as an attacker. Or... is activating two-factor authentication enough to mitigate the risk?
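As an aside: the text-message check that stopped my "attack" is a one-time password. The closely related TOTP scheme (RFC 6238, the one authenticator apps use instead of SMS) can be sketched in a few lines of Python; the secret below is the RFC's own test value, not anything Google-specific:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", time 59s -> "287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # → 287082
```

The point of the sketch: the code an attacker would need changes every 30 seconds and is derived from a secret that never leaves the phone, which is exactly why a stolen password alone was not enough in my experiment.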
So, dear scientists and journalists, before posting FUD, please investigate the problem, not the symptom. If you claim that there is a vulnerability (not even a leak), do so from different perspectives. First check the facts. Then check whether the issue is new. Then doubt your own findings. That's science. That's investigative journalism. If you don't, you just create FUD.
Disclaimer: I'm not an Android user.
zaterdag 23 mei 2015
Using Passbook for Attribute Management
By now we know all there is to know about managing digital identities, so the next level of access control is to further investigate managing (defining, granting and revoking) authorizations. The idea of granting access to resources based on certain (user-owned) attributes is gaining ground. In the future I will get access to documents, files, databases and locations based on attributes, more than because of who I am (one of my many identities).
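As a toy illustration of such an attribute-based access decision (every name, attribute and policy below is made up for the example; this is not any particular ABAC product or the full XACML model):

```python
# Minimal sketch of an attribute-based access decision: the resource owner
# publishes required attributes, and access is granted to whoever presents
# matching attributes -- no user database or role table on the resource side.
def is_permitted(user_attrs, resource_policy):
    """Grant access only if every attribute the policy requires matches."""
    return all(user_attrs.get(name) == required
               for name, required in resource_policy.items())


# Hypothetical policy and users, purely illustrative.
policy = {"department": "finance", "clearance": "confidential"}
alice = {"name": "Alice", "department": "finance", "clearance": "confidential"}
bob = {"name": "Bob", "department": "sales", "clearance": "confidential"}

print(is_permitted(alice, policy))  # → True: all required attributes match
print(is_permitted(bob, policy))    # → False: wrong department
```

Note that the decision never asks *who* Alice is, only *what* she can prove about herself, which is the shift in thinking this post is about.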
A while back I wrote some posts ("I need a Pall or Pass" and "Attribute management") about managing attributes and about the lack of information on this issue. And I found one interesting entity providing attributes: ISACA issues attributes in the form of OpenBadges, an open standard for managing arbitrary attributes in a digital wallet, like Mozilla Persona.
Only recently did I come across another digital wallet system, Passbook by Apple. According to Wikipedia "Passbook is an application in iOS that allows users to store coupons, boarding passes, event tickets, store cards, credit cards as well as debit cards via Apple Pay." That's interesting. I didn't know about Passbook, because I don't own or use any iThings, but someone crafted an app for the Sailfish OS on my Jolla smartphone. So, thank you :)
A little about the purpose of Passbook: it is there to manage coupons, tickets and the like. Those items are valuable, so they have to be protected; passes are therefore inherently secured to a certain level and Passbook must facilitate that. These items give access to certain features defined by the coupon or ticket provider; the permissions were defined by the owner of the resource that the ticket holder wants to access.
This looks a lot like the owner's responsibilities that we see in regular IAM environments. Someone, the owner of a resource, a file, a database, a room, defines access rules and decides which identities can have access. Yes, not unlike any theater ticket. And yes, I did write that I need a Personal Attribute Storage System, a Pass. It could well be a Passbook...
Can we use an app like Passbook for attribute management? Yes, of course, by all means. But I am curious whether Apple created an open standard that makes it feasible to use the platform elsewhere too.
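For the curious: a .pkpass bundle is, as far as the published format goes, a zip archive containing a pass.json payload, a manifest.json listing the SHA-1 hash of every file, and a PKCS#7 signature over that manifest, which is what makes a pass tamper-evident. A minimal sketch of the payload and manifest, with placeholder identifiers (a real pass additionally needs an Apple-issued signing certificate, which this sketch does not cover):

```python
import hashlib
import json

# Hedged sketch of the pass.json inside an Apple .pkpass bundle.
# passTypeIdentifier and teamIdentifier below are placeholders, not real values.
pass_json = {
    "formatVersion": 1,
    "passTypeIdentifier": "pass.example.demo",  # placeholder
    "teamIdentifier": "ABCDE12345",             # placeholder
    "serialNumber": "0001",
    "organizationName": "Example Theater",
    "description": "Demo event ticket",
    "eventTicket": {
        "primaryFields": [
            {"key": "event", "label": "EVENT", "value": "IAM Conference"}
        ]
    },
}

payload = json.dumps(pass_json, indent=2).encode()

# manifest.json maps each bundle file to its SHA-1 hash; the manifest itself
# is then signed, so neither the attributes nor the hashes can be altered.
manifest = {"pass.json": hashlib.sha1(payload).hexdigest()}
print(json.dumps(manifest))
```

Seen through the attribute-management lens, the interesting part is that the issuer (the resource owner) signs the attributes and the holder merely carries them, exactly the split between identity provisioning and service provisioning discussed above.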