Wednesday, December 18, 2013

Security Advisory CVE-2013-4517 released

A new security advisory for the Apache Santuario XML Security for Java library has been released:

"The Apache Santuario XML Security for Java project is vulnerable to a Denial of Service (DoS) type attack leading to an OutOfMemoryError, which is caused by allowing Document Type Definitions (DTDs) when applying Transforms. From the 1.5.6 release onwards, DTDs will not be processed at all when the "secure validation" mode is enabled."

This issue is fixed (when secure validation is enabled) in Apache Santuario XML Security for Java 1.5.6. This release is picked up by new releases of Apache WSS4J (1.6.13), and Apache CXF (2.7.8 and 2.6.11).

Friday, November 8, 2013

Apache CXF STS client configuration options

Apache CXF provides a Security Token Service (STS), which can issue (as well as validate, renew + cancel) security tokens using the WS-Trust protocol. A common SOAP security scenario is where a service provider requires that a client authenticate itself to the service by getting a token from an STS and including it in the service request. In this article, we will explore different ways of configuring the client with details of how to communicate with the STS, as well as how the service provider can provide these details to the client.

1) IssuedToken policies

The service provider can require that a client retrieve a security token from an STS by specifying a WS-SecurityPolicy IssuedToken policy (for example, in the WSDL of the service). The following IssuedToken policy fragment (see here for a concrete example) tells the client that it must include a SAML 1.1 token in the request. In addition, the token must include the client's PublicKey, the corresponding private key of which must be used to secure the request in some way (SOAP Message Signature, TLS client authentication):

<sp:IssuedToken
    sp:IncludeToken=".../IncludeToken/AlwaysToRecipient">
    <sp:RequestSecurityTokenTemplate>
        <t:TokenType>http://.../oasis-wss-saml-token-profile-1.1#SAMLV1.1</t:TokenType>
        <t:KeyType>http://.../ws-trust/200512/PublicKey</t:KeyType>
    </sp:RequestSecurityTokenTemplate>
</sp:IssuedToken>

2) STSClient configuration

So the CXF client will know that it must get a SAML 1.1 token from an STS when it sees the above policy, and that it must present the STS with an X.509 Certificate, so that the STS can embed it in the issued token. How does it know how to contact the STS? Typically, this is defined in an STSClient bean. The following configuration (see here for a concrete example) specifies the WSDL location of the STS, the Service + Port QNames to use, and some additional security configuration:

<bean id="stsClient" class="org.apache.cxf.ws.security.trust.STSClient">
       <property name="wsdlLocation"
                 value="https://localhost:8443/SecurityTokenService/Transport?wsdl"/>
       <property name="serviceName"
                 value="{http://.../ws-trust/200512/}SecurityTokenService"/>
       <property name="endpointName"
                 value="{http://.../ws-trust/200512/}Transport_Port"/>
       <property name="properties">
           <map>
               <entry key="ws-security.username" value="alice"/>
               <entry key="ws-security.callback-handler"
                      value="org.apache.cxf.systest.sts.common.CommonCallbackHandler"/>
               <entry key="ws-security.sts.token.username" value="myclientkey"/>
               <entry key="ws-security.sts.token.properties" value="clientKeystore.properties"/>
               <entry key="ws-security.sts.token.usecert" value="true"/>
           </map>
        </property>
</bean>

While this type of configuration works well, it has a few drawbacks:
  • The client must have the WSDL (location) of the STS (as well as service + port QNames). 
  • The service can't communicate to the client which STS address to use (as well as service + port QNames).
Apache CXF provides two ways of addressing the limitations given above, which will be described in the next two sections.

3) Communicating the STS address + service/port QNames to the client

From Apache CXF 2.7.8 it is possible for the service to communicate the STS address and service/port QNames to the client in a simple way (as opposed to using WS-MEX, which will be covered in the next section). This is illustrated by the IssuerTest.testSAML1Issuer in the CXF source. The IssuedToken policy of the service provider has an additional child policy:

<sp:Issuer>
    <wsaw:Address>https://.../SecurityTokenService/Transport</wsaw:Address>
    <wsaw:Metadata xmlns:wst="http://.../ws-trust/200512/">
        <wsam:ServiceName EndpointName="Transport_Port">
            wst:SecurityTokenService
        </wsam:ServiceName>
    </wsaw:Metadata>
</sp:Issuer>

The "Address" Element communicates the STS address, and the "ServiceName" Element communicates the Service + Endpoint QNames to the client. With this configuration in the WSDL, the client configuration is greatly simplified. Instead of specifying the WSDL Location + Service/Port QNames, the client now only has to specify the security policy to be used in communicating with the STS. It also must set the security configuration tag "ws-security.sts.disable-wsmex-call-using-epr-address" to "true", to avoid using WS-MEX.

An obvious disadvantage of this method is that it still requires the client to have access to the security policy of the STS. However, it is at least a simple way of avoiding hardcoding the STS host and port in the client configuration. A more sophisticated method is to use WS-MEX, as described in the next section.

4) Using WS-MetadataExchange (WS-MEX)

The best way for the service to communicate details of the STS to the client is via WS-MetadataExchange (WS-MEX). This is illustrated by the IssuerTest.testSAML2MEX in the CXF source. The IssuedToken policy of the service provider has an Issuer policy that looks like:

<sp:Issuer>
    <wsaw:Address>https://.../SecurityTokenService/Transport</wsaw:Address>
    <wsaw:Metadata>
        <wsx:Metadata>
            <wsx:MetadataSection>
                <wsx:MetadataReference>
                    <wsaw:Address>https://.../SecurityTokenService/Transport/mex</wsaw:Address>
                </wsx:MetadataReference>
            </wsx:MetadataSection>
        </wsx:Metadata>
    </wsaw:Metadata>
</sp:Issuer>

The client uses the Metadata address to obtain the WSDL of the STS. A CXF Endpoint supports WS-MetadataExchange via a "/mex" suffix, when "cxf-rt-ws-mex" is on the classpath. Note that the client configuration does not need to define an STSClient Object at all any more, only to provide some security configuration for the request to the STS. The client first makes a call to the STS to get the WSDL that looks like:

<soap:Envelope>
    <soap:Header>
        <Action xmlns=".../addressing">http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</Action>
       <MessageID xmlns=".../addressing">urn:uuid:1db606be-695b-46d4-8759-fe9d41746b42</MessageID>
       <To xmlns=".../addressing">https://.../SecurityTokenService/Transport/mex</To>
       <ReplyTo xmlns=".../addressing"><Address>.../addressing/anonymous</Address></ReplyTo>
   </soap:Header>
   <soap:Body/>
</soap:Envelope>

The STS responds with the WSDL embedded in the SOAP Body under a "Metadata" Element. The client then matches the Address defined in the Issuer policy with an endpoint address in the WSDL and uses the corresponding Service/Endpoint QName to invoke on the STS. Alternatively, the service could specify explicit QNames as per the previous example.


Monday, November 4, 2013

XKMS functionality in Apache CXF

Talend has recently donated an XKMS 2.0 implementation to Apache CXF, which is available from the CXF 2.7.7 release. It is documented on the CXF wiki here. The XKMS implementation consists of two parts. Firstly, an XKMS service is provided that exposes a SOAP interface that allows users to register X.509 certificates, as well as to both locate and validate X.509 certificates. Secondly, an implementation of the WSS4J Crypto interface is provided which allows a (WS-Security based) client to both find and validate certificates via the XKMS service.

This blog post will focus on a simple WS-Security system test in CXF, and how it uses XKMS for certificate location and validation. For more detailed information on XKMS, and how it is implemented in CXF, check out my Talend colleague Andrei Shakirin's excellent blog posts on XKMS.

1) XKMS system test with WS-Security

WS-Security is used to secure SOAP service requests/responses at the message level. How this is done is defined by a WS-SecurityPolicy fragment, which is usually defined in the WSDL of the service. Some additional configuration is required by CXF clients and services, such as what username to use, a CallbackHandler implementation to get passwords, and the location of property files which contain WSS4J Crypto configuration for signing and encrypting. See the following wiki and system tests for more information on using WS-Security and WS-SecurityPolicy in CXF.

The WS-SecurityPolicy specification defines two security "bindings" for use in securing SOAP messages at the message level - the "Symmetric" and "Asymmetric" bindings. With the Symmetric binding, the client creates a secret key to secure the message, and then uses the certificate of the service to encrypt the secret key, so that only the service can decrypt the secured message. The Asymmetric binding assumes that both the client and service have public/private key-pairs, and hence the client signs the request using its private key, and encrypts the request using the service's public key.

In each case, the client (typically) must have access to an X.509 certificate for the service. It must "locate" a certificate for the service on the outbound side for both the Symmetric and Asymmetric bindings, and it must "validate" the certificate used by the service to sign the response in the "Asymmetric" case. Up to now, the default WSS4J Crypto implementation (Merlin) uses a local Java keystore to obtain certificates. However, given that the tasks the client must perform ("locate" and "validate") map directly to XKMS operations, it's possible to use the new XKMS Crypto implementation instead.

The XkmsCryptoProvider is configured with a JAX-WS client proxy which can locate/validate X.509 Certificates from a remote XKMS SOAP service. Additionally, it can be composed with another Crypto implementation from which private keys can be retrieved (for example, when used for the Asymmetric Binding). It can also be configured with a cache to prevent repeated remote calls to the XKMS service to locate or validate a particular certificate. The default cache is based on EhCache.
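
To make the mapping concrete, the following sketch shows the two WSS4J (1.6) Crypto calls that the XkmsCryptoProvider translates into XKMS "Locate" and "Validate" requests. The Subject DN is the one used in the test configuration below, and "xkmsCrypto" is assumed to be a configured XkmsCryptoProvider instance:

import java.security.cert.X509Certificate;
import org.apache.ws.security.components.crypto.CryptoType;

// "Locate": find the recipient's certificate by Subject DN (used on the outbound side)
CryptoType cryptoType = new CryptoType(CryptoType.TYPE.SUBJECT_DN);
cryptoType.setSubjectDN("CN=bob, OU=eng, O=apache.org");
X509Certificate[] certs = xkmsCrypto.getX509Certificates(cryptoType);

// "Validate": check that a certificate (e.g. the one used to sign a response) is trusted
boolean trusted = xkmsCrypto.verifyTrust(certs, false);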

Aside from the obvious advantages of being able to centralize certificate management by using XKMS, a cool additional advantage is that the client need have no local key information at all for the Symmetric binding, as it only requires the X.509 certificate of the message recipient, and no private key. This could greatly simplify key management in a large deployment.

Let's tie all of the above information on combining WS-Security with XKMS together by looking at some CXF system tests. The tests are available in the CXF WS-Security systests directory. The test source is here and test configuration files are available here. The tests show how to deploy an XKMS Service, and how to configure a JAX-WS client with the XKMS Crypto implementation to locate and validate certificates for a service invocation over both the Symmetric and Asymmetric bindings.

2) The XKMS Service

The XKMS service configuration for the test is available here. It describes a JAX-WS endpoint that can only be accessed over TLS. The endpoint implements the standard XKMS interface. A certificate repository is defined as follows:

<bean id="certificateRepo"
    class="org.apache.cxf.xkms.x509.repo.file.FileCertificateRepo">
    <constructor-arg value="src/test/resources/certs/xkms" />
</bean>

This is a simple file based certificate repository, where certificates are stored in a single directory on the filesystem (see here). A more appropriate implementation for the enterprise is the LDAP certificate repository, which is documented on the wiki. The service defines a single XKMS "locator":

<bean id="x509Locator"
    class="org.apache.cxf.xkms.x509.handlers.X509Locator">
    <constructor-arg ref="certificateRepo" />
</bean>

In other words, any "locate" query will try to find certificates in the file certificate store we have defined above. Similarly, two XKMS "validators" are configured:

<bean id="dateValidator"
    class="org.apache.cxf.xkms.x509.validator.DateValidator" />
<bean id="trustedAuthorityValidator"
    class="org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator">
    <constructor-arg ref="certificateRepo" />
</bean>

The first will validate that a given certificate is "in date". The second will look for a trusted certificate authority for the certificate in the certificate repo (under the "trusted_cas" subdirectory).

3) The JAX-WS client

Here we will just focus on the configuration for the client for the Symmetric binding use-case. It is configured as follows:

<jaxws:client
    name="{http://www.example.org/contract/DoubleIt}DoubleItSymmetricPort"
    createdFromAPI="true">
    <jaxws:properties>
        <entry key="ws-security.encryption.crypto" value-ref="xkmsCrypto"/>
        <entry key="ws-security.encryption.username" value="CN=bob, OU=eng, O=apache.org"/>
    </jaxws:properties>
</jaxws:client>

The client specifies an encryption username that corresponds to the Subject DN of the message recipient. It can also search for a certificate by the service "{serviceNamespace}serviceName" QName. The encryption Crypto object is a reference to the XkmsCryptoProvider, which in turn is simply configured with the URL of the XKMS Service:

<bean id="xkmsClient"
    class="org.apache.cxf.xkms.client.XKMSClientFactory"
    factory-method="create">
    <constructor-arg>
        <value>https://localhost:${testutil.ports.XKMSServer}/XKMS</value>
    </constructor-arg>
    <constructor-arg ref="cxf"/>
</bean>

<bean id="xkmsCrypto"
    class="org.apache.cxf.xkms.crypto.impl.XkmsCryptoProvider">
    <constructor-arg>
        <ref bean="xkmsClient" />
    </constructor-arg>
</bean>

Before the service request, the client queries the XKMS service to locate an appropriate certificate using the configured encryption username:

Payload:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
        <ns2:LocateRequest xmlns:ns2="http://www.w3.org/2002/03/xkms#"
            xmlns:ns3="http://www.w3.org/2001/04/xmlenc#"
            xmlns:ns4="http://www.w3.org/2000/09/xmldsig#"
            xmlns:ns5="http://www.w3.org/2002/03/xkms#wsdl"
            Id="6a17ae45-21a2-4484-b5ec-c71b04dda0b2"
            Service="http://cxf.apache.org/services/XKMS/">
            <ns2:QueryKeyBinding>
                <ns2:UseKeyWith Application="urn:ietf:rfc:2459" Identifier="CN=bob, OU=eng, O=apache.org"/>
            </ns2:QueryKeyBinding>
        </ns2:LocateRequest>
    </soap:Body>
</soap:Envelope>

The XKMS Service responds with:

Payload:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
        <ns2:LocateResult xmlns:ns2="http://www.w3.org/2002/03/xkms#"
            xmlns:ns3="http://www.w3.org/2001/04/xmlenc#"
            xmlns:ns4="http://www.w3.org/2000/09/xmldsig#"
            xmlns:ns5="http://www.w3.org/2002/03/xkms#wsdl"
            ResultMajor="http://www.w3.org/2002/03/xkms#Success"
            RequestId="6a17ae45-21a2-4484-b5ec-c71b04dda0b2"
            Id="I-2984733849747242496"
            Service="http://cxf.apache.org/services/XKMS/">
            <ns2:UnverifiedKeyBinding>
                <ns4:KeyInfo>
                    <ns4:X509Data>
                        <ns4:X509Certificate>MIIC...</ns4:X509Certificate>
                    </ns4:X509Data>
                </ns4:KeyInfo>
            </ns2:UnverifiedKeyBinding>
        </ns2:LocateResult>
    </soap:Body>
</soap:Envelope>

This certificate is then used to secure the service request using the Symmetric binding policy. For the Asymmetric testcase, the client also "validates" the certificate used by the service to secure the response.

Monday, September 9, 2013

XML Encryption support in Apache Camel 2.12.0

Apache Camel supports using XML Encryption (and decryption) in your Camel routes via the XML Security Data Format. I have contributed some additions to this component for the recent 2.12.0 release that may be of interest to existing or new users.

1) Upgrade to Apache Santuario 1.5.5

The Apache Santuario (XML Security for Java) dependency has been upgraded from 1.5.1 to 1.5.5. In addition, "secure validation" is now enabled by default. This property imposes some restrictions on acceptable XML Encryption Elements to limit potential attacks (although it applies more to XML Signature). See here for more information.

2) Switch to using RSA-OAEP as the Asymmetric Key Cipher algorithm

From Apache Camel 2.12.0, the default Asymmetric Key Cipher algorithm is now the RSA-OAEP algorithm. Previously it was RSA v1.5, which is vulnerable to attack. In addition, requests that use RSA v1.5 will be rejected by default, unless RSA v1.5 has been explicitly configured as the key cipher algorithm.

3) Support for some XML Encryption 1.1 algorithms

Support has been added for some XML Encryption 1.1 algorithms. Essentially this means the following (a configuration sketch follows the list):
  • You can now use "http://www.w3.org/2009/xmlenc11#rsa-oaep" as the Key Cipher Algorithm.
  • You can specify a stronger value for the digest algorithm when using RSA-OAEP. For example, you can use "http://www.w3.org/2001/04/xmlenc#sha256" instead of the default SHA-1 algorithm.
  • Support has been added for "gcm" symmetric cipher modes. For example, you can now set "http://www.w3.org/2009/xmlenc11#aes128-gcm" as the "xmlCipherAlgorithm" parameter.
  • Support has been added for MGF algorithms with stronger digest algorithms. For example, you can define "http://www.w3.org/2009/xmlenc11#mgf1sha256" for the "mgfAlgorithm" configuration parameter.
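
Tying these options together, a route that encrypts part of a message using the new algorithms might be configured roughly as follows. This is a sketch only: the XPath, keystore details and recipient alias are made up for illustration, and the setter names correspond to the configuration parameters described above (check them against the 2.12.0 API):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.xmlsecurity.XMLSecurityDataFormat;
import org.apache.camel.util.jsse.KeyStoreParameters;

public class XmlEncryptionRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Truststore containing the recipient's certificate (illustrative values)
        KeyStoreParameters trustStore = new KeyStoreParameters();
        trustStore.setResource("recipient.jks");
        trustStore.setPassword("password");

        XMLSecurityDataFormat xmlEnc = new XMLSecurityDataFormat();
        xmlEnc.setSecureTag("//customers/customer");   // element(s) to encrypt (illustrative XPath)
        xmlEnc.setSecureTagContents(true);
        xmlEnc.setRecipientKeyAlias("recipient");
        xmlEnc.setKeyOrTrustStoreParameters(trustStore);
        // XML Encryption 1.1 algorithms described above
        xmlEnc.setXmlCipherAlgorithm("http://www.w3.org/2009/xmlenc11#aes128-gcm");
        xmlEnc.setKeyCipherAlgorithm("http://www.w3.org/2009/xmlenc11#rsa-oaep");
        xmlEnc.setDigestAlgorithm("http://www.w3.org/2001/04/xmlenc#sha256");
        xmlEnc.setMgfAlgorithm("http://www.w3.org/2009/xmlenc11#mgf1sha256");

        from("direct:start").marshal(xmlEnc).to("mock:encrypted");
    }
}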

Tuesday, August 20, 2013

Apache Syncope tutorial - part IV

In the first tutorial on Apache Syncope, we showed how to deploy Syncope to Apache Tomcat, using MySQL as the internal storage mechanism. In the second and third tutorials, we showed how to import some users and roles into Syncope from database and directory backend resources. In this tutorial, we will show how an external client can query Syncope's REST API for the purposes of authentication and authorization. This tutorial assumes that Syncope is set up as described in tutorial I, and that Users + Roles have been imported as per tutorials II and III.

1) Syncope's REST API

Apache Syncope exposes its functionality via a rich REST API. Apache Syncope 1.1 features a new REST API, which is powered by Apache CXF. The new API has been created with the aim of applying RESTful best practices.

2) Querying Syncope's REST API

I've created some simple test-cases (hosted on github) based around a CXF SOAP client/service invocation, which show how to use Syncope's REST API for authentication and authorization.

a) Authentication

The Authentication test uses Syncope as an IDM for authentication. A CXF client sends a SOAP UsernameToken to a CXF Endpoint. The CXF Endpoint has been configured to validate the UsernameToken via the SyncopeUTValidator, which dispatches the username/password to Syncope for authentication via Syncope's REST API. Run the test via:
  • git clone git://github.com/coheigea/cxf-syncope.git
  • cd cxf-syncope
  • mvn test -Dtest=AuthenticationTest
Look at the console output to see how the CXF service dispatches the received Username/Password to Syncope for authentication. 
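
Under the hood, the validator essentially invokes Syncope's REST API using the received credentials as HTTP Basic authentication, and treats a successful response as a successful authentication. A rough, self-contained sketch of that idea follows; the REST path "users/self" and the host/port are assumptions for illustration only, so check the Syncope 1.1 REST documentation and the test source for the exact resource used:

import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

public final class SyncopeAuthSketch {

    public static boolean authenticate(String username, String password) throws Exception {
        // Assumed Syncope REST endpoint; a 200 response means the credentials are valid
        URL url = new URL("http://localhost:8080/syncope/rest/users/self");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String creds = DatatypeConverter.printBase64Binary(
            (username + ":" + password).getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + creds);
        return conn.getResponseCode() == 200;
    }
}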

b) Authorization

The Authorization test uses Syncope as an IDM for authorization. It exploits the fact that we synchronized users' roles into Syncope in tutorial III. A CXF client sends a SOAP UsernameToken to a CXF Endpoint. The CXF Endpoint has configured the SyncopeRolesInterceptor, which authenticates the Username/Password to Syncope as per the authentication test. If authentication is successful, it then gets the roles of the user and populates a CXF SecurityContext with the user's name + roles.

The CXF Endpoint has also configured the SimpleAuthorizingInterceptor, which reads the current Subject's roles from the SecurityContext, and requires that a user must have role "boss" to access the "doubleIt" operation ("alice" has this role, "bob" does not). Run the test via:
  • git clone git://github.com/coheigea/cxf-syncope.git
  • cd cxf-syncope
  • mvn test -Dtest=AuthorizationTest


Friday, August 9, 2013

Apache Syncope tutorial - part III

In the first tutorial on Apache Syncope, we showed how to deploy Syncope to Apache Tomcat, and how to set up MySQL as the internal storage mechanism. In the second tutorial, we showed how to import some users into Syncope from a backend (Apache Derby) database resource. In this tutorial, we will look at synchronizing user and role data from an LDAP backend into Syncope, in this case Apache DS. This tutorial assumes that Syncope is set up as described in tutorial I, and that a Surname attribute has been added to the User attributes in Syncope, as described in tutorial II.

1) Apache DS

The basic scenario is that we have a directory that stores user and role information that we would like to import into Apache Syncope. In the previous tutorial, we only imported User data into Syncope. For the purposes of this tutorial, we will work with Apache DS. The first step is to download and launch Apache DS. I recommend installing Apache Directory Studio for an easy way to view the data stored in your directory.

Import the following ldif file into your Apache DS instance. Essentially this describes two users, "cn=alice,ou=users,ou=system" and "cn=bob,ou=users,ou=system", as well as two groups, "cn=employee,ou=groups,ou=system" and "cn=boss,ou=groups,ou=system". Both Alice and Bob are employees, but only Alice is the boss. We will import this user information into Syncope as per tutorial II. However, this time we will go further and import the group information as roles into Syncope. This will enable us to perform authorization checks against Syncope, as will be described in the next tutorial.

2) Synchronize user data into Apache Syncope

The next task is to import (synchronize) the user data from Apache DS into Apache Syncope. See the Syncope wiki for more information on this topic, as well as the following blog post. Launch Apache Syncope as per tutorial I/II.

a) Define a Connector

The first thing to do is to define a Connector. In tutorial I we configured two Connector bundles to use for Syncope, one for a DB backend, and one for an LDAP backend. In this section we select the LDAP Connector, and configure it to connect to the DS instance we have set up above. Go to "Resources/Connectors", and create a new Connector of name "org.connid.bundles.ldap". In the "Configuration" tab select:
  • Host: localhost
  • TCP Port: 10389
  • Principal: uid=admin,ou=system
  • Password: <password>
  • Base Contexts: ou=users,ou=system and ou=groups,ou=system
  • LDAP Filter for retrieving accounts: cn=*
  • groupObjectClasses: groupOfNames
  • Group member attribute: member
  • Uid attribute: cn
  • Base Context to Synchronize: ou=users,ou=system and ou=groups,ou=system
  • Object Classes to Synchronize: inetOrgPerson and groupOfNames
  • Status Management Class: org.connid.bundles.ldap.commons.AttributeStatusManagement
  • Tick "Retrieve passwords with search".
In the "Capabilities" tab select the following properties:
  • ONE_PHASE_CREATE
  • ONE_PHASE_UPDATE
  • ONE_PHASE_DELETE
  • SEARCH
  • SYNC
Click on the "helmet" icon in the "Configuration" tab to check to see whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, then consult the logs.

b) Define a Resource

Next we need to define a Resource that uses the LDAP Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Roles. Go into the "Resources" tab and select "Create New Resource". In the "Resource Details" tab select:
  • Name: (Select a name)
  • Connector: (Connector display name you have configured previously)
  • Enforce mandatory condition
  • Propagation Primary
  • Propagation Mode (see here): ONE_PHASE
  • Select "LDAPMembershipPropagationActions" for the "Actions class"
The next step is to create User mappings. Click on the "User mapping" tab, and create the following mappings:

Enable "Account Link" and enter "'cn=' + username + ',ou=users,ou=system'", as shown in the screenshot. The next step is to create Role mappings. Click on the "Role mapping" tab, and create the following mapping:

Enable "Account Link" and enter "'cn=' + name + ',ou=groups,ou=system'" as shown in the screenshot.

c) Create a synchronization task

Having defined a Connector and a Resource to use that Connector, with mappings to map User/Role information to and from the backend, it's time to import the backend information into Syncope.  Go to "Tasks" and select the "Synchronization Tasks" tab. Click on "Create New Task". On the "Profile" tab enter:
  • Name: (Select a name)
  • Resource Name: (The Resource name you have created above)
  • Actions Class: LDAPMembershipSyncActions
  • Create new identities
  • Updated matched identities
  • Delete matching identities
  • Status
  • Full reconciliation
Save the task and then click on the "Execute" button. Now switch to the Users tab. You should see the users stored in the backend, and if you edit a user, you should be able to see what roles are associated with the user. In the "Roles" tab, you should see the imported Roles. For example, here are the roles that are associated with the user "Alice":



Friday, July 26, 2013

Apache Syncope tutorial - part II

In the previous tutorial on Apache Syncope, we described how to create a standalone application deployed in Apache Tomcat, and using MySQL as the persistent storage. In this tutorial we will show how to set up a basic schema for Syncope that describes the users that will be created in Syncope. Then we will show how to import users from a Database backend, which will be Apache Derby for the purposes of this tutorial.

1) Creating a Schema attribute

The first thing we will do is add a simple attribute for all users that will exist in Syncope. Launch Apache Syncope as per tutorial I. Click on the "Schema" tab, and then "Create New Attribute" in the Users/Normal subsection. Create a new attribute called "surname" which is of type "String" and "mandatory". So users in our Syncope application must have a "surname". Obviously, the schema allows you to do far more complex and interesting things, but this will suffice for the purposes of this tutorial.


2) Apache Derby

The basic scenario is that we have a SQL database that stores user information that we would like to import into Apache Syncope, to integrate into a BPEL workflow, expose via a RESTful interface, associate with roles, etc. For the purposes of this tutorial, we will work with Apache Derby. The first step is to download and launch Apache Derby, and then to populate it with a table with some user data. Hat tip to my Apache CXF colleague Glen Mazza for an excellent tutorial on setting up Apache Derby.

a) Launch Apache Derby

Download Apache Derby and extract it into a new directory ($DERBY_HOME). Create a directory to use to store Apache Derby databases ($DERBY_DATA). In $DERBY_DATA, create a file called 'derby.properties' with the content:

derby.connection.requireAuthentication=true
derby.user.admin=security

In other words, authentication is required, and a valid user is "admin" with password "security". Now launch Apache Derby in network mode via:

java -Dderby.system.home=$DERBY_DATA/ -jar $DERBY_HOME/lib/derbyrun.jar server start

b) Create user data

Create a new file called 'create-users.sql' with the following content:

SET SCHEMA APP;
DROP TABLE USERS;

CREATE TABLE USERS (
  NAME   VARCHAR(20) NOT NULL PRIMARY KEY,
  PASSWORD  VARCHAR(20) NOT NULL,
  STATUS  VARCHAR(20) NOT NULL,
  SURNAME  VARCHAR(20) NOT NULL
);

INSERT INTO USERS VALUES('dave', 'password', 'true', 'yellow');
INSERT INTO USERS VALUES('harry', 'password', 'true', 'blue');

Launch the Derby interactive SQL tool (ij) via $DERBY_HOME/bin/ij. Then connect to the server via:

connect 'jdbc:derby://localhost:1527/SYNCOPE;create=true;user=admin;password=security;';

Populate user data via: run 'create-users.sql';

You can now see the user data via: select * from users;

3) Synchronize user data into Apache Syncope

The next task is to import (synchronize) the user data from Apache Derby into Apache Syncope. See the Syncope wiki for more information on this topic.

a) Define a Connector

The first thing to do is to define a Connector. In tutorial I we configured two Connector bundles to use for Syncope, one for a DB backend, and one for an LDAP backend. In this section we select the DB Connector, and configure it to connect to the Derby instance we have set up above. Go to "Resources/Connectors", and create a new Connector of name "org.connid.bundles.db.table". In the "Configuration" tab select:
  • User: admin
  • User Password: security
  • Table: app.users
  • Key Column: name
  • Password Column: password
  • Status Column: status
  • JDBC Driver: org.apache.derby.jdbc.ClientDriver
  • JDBC Connection URL: jdbc:derby://localhost:1527/SYNCOPE
  • Enable 'Retrieve Password'
Note that the Derby JDBC driver must be available on the classpath as per tutorial I. In the "Capabilities" tab select the following properties:
  • ONE_PHASE_CREATE
  • ONE_PHASE_UPDATE
  • ONE_PHASE_DELETE
  • SEARCH
  • SYNC
Click on the "helmet" icon in the "Configuration" tab to check to see whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, then consult the logs.
b) Define a Resource

Next we need to define a Resource that uses the DB Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Roles. Go into the "Resources" tab and select "Create New Resource". In the "Resource Details" tab select:
  • Name: (Select a name)
  • Connector: (Connector display name you have configured previously)
  • Enforce mandatory condition
  • Propagation Primary
  • Propagation Mode (see here): ONE_PHASE
  • Select "DefaultPropagationActions" for the "Actions class"
The next step is to create User mappings. Click on the "User mapping" tab, and create the following mappings:


c) Create a synchronization task

Having defined a Connector and a Resource to use that Connector, with mappings to map User information to and from the backend, it's time to import the backend information into Syncope. Go to "Tasks" and select the "Synchronization Tasks" tab. Click on "Create New Task". On the "Profile" tab enter:
  • Name: (Select a name)
  • Resource Name: (The Resource name you have created above)
  • Actions class: DefaultSyncActions
  • Create new identities
  • Updated matched identities
  • Delete matching identities
  • Status
  • Full reconciliation
Save the task and then click on the "Execute" button. Now switch to the Users tab. You should see the users stored in the backend. Click on one of the users, and see that the "surname" attribute is populated with the value mapped from the column stored in the backend:






    Apache Syncope tutorial - part I

    Apache Syncope is a new open source Identity Management project at Apache. This is the first of a planned four-part set of tutorials on how to get Apache Syncope up and running, how to integrate it with various backends, and how to interact with its REST API.

    In this tutorial we will explain how to create a new Apache Syncope project and how to deploy it to a container. We will also cover how to set up internal storage with a database, and how to install Connector bundles to communicate with backend resources. Please note that if you wish to download and play around with Apache Syncope, without going through the steps detailed in this tutorial to set it up for a standalone deployment, it is possible to download the 1.1.3 distribution and use an embedded Tomcat instance. In that case you can completely skip this tutorial and go straight to the next one.

    1) Set up a database for Internal Storage

    The first step in setting up a standalone deployment of Apache Syncope is to decide what database to use for Internal Storage. Apache Syncope persists internal storage to a database via Apache OpenJPA. In this article we will set up MySQL, but see here for more information on using PostgreSQL, Oracle, etc. Install MySQL in $SQL_HOME and create a new user for Apache Syncope. We will create a new user "syncope_user" with password "syncope_pass". Start MySQL and create a new Syncope database:
    • Start: sudo $SQL_HOME/bin/mysqld_safe --user=mysql
    • Log on: $SQL_HOME/bin/mysql -u syncope_user -p
    • Create a Syncope database: create database syncope; 
    2) Set up a container to host Apache Syncope

    The next step is to decide which container to deploy Syncope to. In this demo we will use Apache Tomcat, but see here for more information about installing Syncope in other containers. Install Apache Tomcat to $CATALINA_HOME. Now we will add a datasource for internal storage in Tomcat's 'conf/context.xml'. If Syncope does not find a datasource called 'jdbc/syncopeDataSource', it will connect to internal storage by instantiating a new connection per request, which carries a performance penalty. Add the following to 'conf/context.xml':

    <Resource name="jdbc/syncopeDataSource" auth="Container"
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
        testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
        validationQuery="SELECT 1" validationInterval="30000"
        maxActive="50" minIdle="2" maxWait="10000" initialSize="2"
        removeAbandonedTimeout="20000" removeAbandoned="true"
        logAbandoned="true" suspectTimeout="20000"
        timeBetweenEvictionRunsMillis="5000" minEvictableIdleTimeMillis="5000"
        jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
        org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
        username="syncope_user" password="syncope_pass"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://localhost:3306/syncope?characterEncoding=UTF-8"/>

    Uncomment the "<Manager pathname="" />" configuration in context.xml as well. Next, download the JDBC driver jar for MySQL and put it in Tomcat's 'lib' directory. As we will be configuring a connector for a Derby resource in a future tutorial, also download the JDBC driver jar for Apache Derby and put it in Tomcat's 'lib' directory as well.

    3) Create a new Apache Syncope project

    To create a new Apache Syncope project, we must start with a Maven Archetype. For the purposes of this tutorial we will use Apache Syncope 1.1.3. Create a new directory to hold all of the project artifacts ($SYNCOPE_HOME). Create a new project in this directory as follows:

    mvn archetype:generate
        -DarchetypeGroupId=org.apache.syncope
        -DarchetypeArtifactId=syncope-archetype
        -DarchetypeRepository=http://repo1.maven.org/maven2/
        -DarchetypeVersion=1.1.3

    Enter values as requested for the 'groupId', 'artifactId', 'version', 'package' and 'secretKey'. A new Syncope project will be created in a directory corresponding to the value entered for 'artifactId'. Build the project by going into the newly created directory and typing "mvn clean package". To launch Syncope in "embedded" mode, go into the "console" subdirectory and type "mvn -Pembedded". This provides a quick way to start up Syncope with some test data and connectors pre-installed, along with an H2 instance for internal storage. However, instead of doing this we will set up a proper standalone deployment.

    4) Install ConnId bundles

    Apache Syncope uses bundles supplied by the ConnId project to communicate with backend resources. In part III of this series of tutorials we will cover importing users from a database backend, and part IV will cover using a directory backend. Therefore, we need two ConnId bundles to handle these scenarios. Create two new directories in $SYNCOPE_HOME, one called "bundles" to store the ConnId bundles, and another called "logs" to store logging information. Go to the ConnId download page, and download the relevant jars to the bundles directory you have created. It should have the following files:
    • org.connid.bundles.ldap-1.3.6.jar
    • org.connid.bundles.db.table-2.1.5.jar.
    5) Configure the Apache Syncope project

    After creating the Apache Syncope project, we need to configure it to use the ConnId bundles we have downloaded, the logging directory we have created, and the MySQL instance for internal storage.

    Edit 'core/src/main/resources/persistence.properties' in your Syncope project, and replace the existing configuration with the following:

    jpa.driverClassName=com.mysql.jdbc.Driver
    jpa.url=jdbc:mysql://localhost:3306/syncope?characterEncoding=UTF-8
    jpa.username=syncope_user
    jpa.password=syncope_pass
    jpa.dialect=org.apache.openjpa.jdbc.sql.MySQLDictionary
    quartz.jobstore=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
    quartz.sql=tables_mysql.sql
    logback.sql=mysql.sql

    Edit 'core/src/main/webapp/WEB-INF/web.xml' and uncomment the "resource-ref" section (this is required as we are using a DataSource in Tomcat). If your Tomcat instance is starting on a port other than 8080, edit 'console/src/main/resources/configuration.properties' and change the port number.

    6) Deploy Apache Syncope to Tomcat

    Package everything up by executing the following command from the Syncope project:

    mvn clean package -Dbundles.directory=${bundles.dir} -Dlog.directory=${log.dir}

    Deploy 'core/target/syncope.war' and 'console/target/syncope-console.war' to the Tomcat 7 container. Start Tomcat and point a browser at "localhost:8080/syncope-console", logging in as "admin/password". You should see the following:


    At this point you should have successfully deployed Apache Syncope in the Apache Tomcat container, using MySQL as the internal storage. If this is not the case then consult the Tomcat logs, as well as the Syncope logs in the directory you have configured. In the next couple of tutorials we will look at importing data into our Syncope application from various backend resources.

    Thursday, June 27, 2013

    Denial of Service attacks on Apache CXF

    A significant new paper has emerged called "A new Approach towards DoS Penetration Testing on Web Services" by Andreas Falkenberg of SEC Consult Deutschland GmbH, and Christian Mainka, Juraj Somorovsky and Joerg Schwenk of Ruhr-University Bochum. In this paper, the authors developed a suite of automated tests for various Denial of Service (DoS) attacks on Web Services, and ran them against different web service stacks. In this post I will describe the attacks that were successful on Apache CXF and how they were fixed.

    The authors found that Apache CXF (prior to 2.7.4/ 2.6.7/ 2.5.10) was vulnerable (see CVE-2013-2160) to the following attacks:
    • Coercive Parsing Attack: The attacker sends a deeply nested XML document to the service.
    • Attribute Count Attack: The attacker sends a message with a (very) high attribute count.
    • Element Count Attack: The attacker sends a message with a (very) high number of non-nested elements.
    • DJBX31A Hash Collision: A specific hash collision attack.
    The effects of these attacks can vary from causing high CPU usage, to causing the JVM to run out of memory. Clearly the latter is a critical vulnerability. Prior to CXF 2.7.4 / 2.6.7 / 2.5.10, a CXF service was vulnerable to these attacks "out of the box". However, it was possible to avoid some of the attacks using CXF's DepthRestrictingInterceptor. If this was added to the InInterceptor chain, then it was possible to control the stack depth + the number of elements in the request. However it was not possible to control the number of attributes with this interceptor, and it also came with a performance cost.

    CXF uses Woodstox by default as the StAX XML Processor. It was decided that the best place to fix the vulnerabilities was at this level, both to offer protection to other stacks that use Woodstox, and also to remove the performance penalties associated with the DepthRestrictingInterceptor. From Woodstox 4.2.0, new functionality has been added to prevent Denial of Service attacks by restricting the size of XML. It uses the following defaults:
    • Maximum Attributes per Element: 1000.
    • Maximum Children per Element: (no effective bound).
    • Maximum Stack Depth: 1000.
    • Maximum Element Count: (no effective bound).
    • Maximum Number of Characters: (no effective bound).
    These bounds are quite loose to preserve backwards compatibility. CXF 2.7.4 / 2.6.7 / 2.5.10 pick up Woodstox 4.2.0. Other parser implementations are defined as "insecure parsers" and are rejected by default from CXF 2.7.5 unless a System property is set. CXF restricts the Woodstox default values further:
    • Maximum Attributes per Element: 500.
    • Maximum Children per Element: 50000.
    • Maximum Stack Depth: 100.
    CXF 2.7.4 / 2.6.7 / 2.5.10 are not vulnerable to any of the DoS attacks listed above due to these restrictions.
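
    For illustration, the new Woodstox 4.2.0 limits can also be configured directly on a Woodstox input factory, which may be useful for non-CXF stacks. A sketch follows; the property constants are those I would expect in com.ctc.wstx.api.WstxInputProperties, so verify them against your Woodstox version (when using CXF 2.7.4 / 2.6.7 / 2.5.10 the restricted defaults above are applied for you):

    import javax.xml.stream.XMLInputFactory;
    import com.ctc.wstx.api.WstxInputProperties;
    import com.ctc.wstx.stax.WstxInputFactory;

    // Apply the CXF-style limits to a standalone Woodstox factory
    XMLInputFactory factory = new WstxInputFactory();
    factory.setProperty(WstxInputProperties.P_MAX_ATTRIBUTES_PER_ELEMENT, 500);
    factory.setProperty(WstxInputProperties.P_MAX_CHILDREN_PER_ELEMENT, 50000);
    factory.setProperty(WstxInputProperties.P_MAX_ELEMENT_DEPTH, 100);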



    Tuesday, June 25, 2013

    Apache XML Security for Java 1.4.8 and 1.5.5 released

    Two new versions of the Apache XML Security for Java project have been released and are available for download. These releases contain a fix for a critical security advisory CVE-2013-2172, which involves an XML Signature spoofing attack. Thanks to James Forshaw for reporting the vulnerability to the Apache Santuario project.

    Wednesday, May 22, 2013

    Apache CXF 2.7.5 released

    Apache CXF 2.7.5 has been released. The list of issues fixed is available here. The following security fixes of note have been made in this release:
    • The OpenSAML dependency has been upgraded from 2.5.1 to 2.5.3.
    • A change was made to the logic the STS uses to encrypt tokens that it issues. Previously, it threw an exception if a key could not be found (at either the service level or a more generic level) to use to encrypt the token. Now it only encrypts the token if a matching key can be found. This allows the ability to only encrypt tokens to specific "AppliesTo" endpoint addresses.
    • LDAP groups are now (better) supported as claims in the STS. See the following blog entry for more detail.
    • The CryptoCoverageChecker interceptor has been enhanced so that you can disable coverage checking for SOAP Faults. This is useful for testing/debugging if you want to figure out the root cause of a remote exception.

    Thursday, April 25, 2013

    Apache CXF 2.7.4 released

    Apache CXF 2.7.4 (and 2.6.7 + 2.5.10) have been released. Users are strongly encouraged to upgrade to the latest versions, due to a critical security issue which must remain undisclosed for the moment. These latest releases pick up Apache Santuario 1.5.4 and Apache WSS4J 1.6.10. In addition to the fixes in these projects, CXF 2.7.4 contains a number of security fixes of interest.

    1) WS-SecurityPolicy fixes

    A large number of negative tests for WS-Security(Policy) were added to CXF to try to smoke out some remaining issues surrounding validating a request against a defined security policy. The following issues were fixed as a result:
    • Layout policies "LaxTimestampFirst" and "LaxTimestampLast" were not validated correctly.
    • X509Token "PKI" policies were not validated correctly.
    • The "OnlySignEntireHeadersAndBody" policy was not validated correctly.
    • The "ProtectTokens" policy was not validated correctly in conjunction with EndorsingSupportingTokens.
    • SAML Token versions weren't validated against policy versions in certain circumstances.
    2) Populating the SecurityContext from a JAAS Subject from WSS4J

    An additional improvement to the WS-Security layer in CXF is that the SecurityContext is now populated from a JAAS Subject from WSS4J, if one is available. For example, if you are using a custom UsernameTokenValidator with WSS4J to validate a received UsernameToken via JAAS, and are returning the Subject (as per WSS4J's JAASUsernameTokenValidator), then CXF will attempt to extract roles from the Subject and populate the SecurityContext accordingly. The advantage of this is that a user can check the standard SecurityContext methods (e.g. "isUserInRole") to perform authorization.

    This is controlled by two JAX-WS properties (see the documentation for more information), as shown in the sketch after this list:
    • ws-security.role.classifier - The Subject Role Classifier to use.  If this value is not specified, then it tries to get roles using the DefaultSecurityContext in cxf-rt-core. Otherwise it uses this value in combination with the "ws-security.role.classifier.type" property to get the roles from the Subject.
    • ws-security.role.classifier.type - The Subject Role Classifier Type to use. Currently accepted values are "prefix" or "classname". Must be used in conjunction with the "ws-security.role.classifier". The default value is "prefix".
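
    As a sketch, these properties can be set like any other CXF endpoint property, for example when publishing an endpoint programmatically; the service bean and the role Principal classname below are illustrative only:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

    JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
    factory.setServiceBean(new DoubleItPortTypeImpl());     // illustrative service implementation
    factory.setAddress("http://localhost:9000/doubleit");

    Map<String, Object> properties = new HashMap<String, Object>();
    properties.put("ws-security.role.classifier", "org.example.security.RolePrincipal");  // illustrative
    properties.put("ws-security.role.classifier.type", "classname");
    factory.setProperties(properties);
    factory.create();
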
    3) STS fixes

    The SecurityTokenService (STS) fixes contained in this release are:
    • The STS Client was not always sending an "AppliesTo" address in a request to the STS, depending on how the STS Client was deployed.
    • The STS Client was always using the "old" WS-Policy namespace for "AppliesTo", instead of getting the namespace from the policy.
    • The STS now supports processing Claims in a request that are retrieved from a security policy as "IssuedToken/Claims". Previously, it would only issue claims that were contained in a "RequestSecurityTokenTemplate" policy. 
    4) CVE-2012-5575

    Finally, a note on security advisory CVE-2012-5575 was added to the CXF security advisories page. This attack exploits the fact that Apache CXF will attempt to decrypt arbitrary ciphertexts, without first checking to see if the algorithm corresponds to the given encryption algorithm defined by the WS-SecurityPolicy AlgorithmSuite definition. This can be exploited by chosen ciphertext attacks to retrieve the plaintext. Please note that this issue has been fixed since CXF 2.5.7, 2.6.4, and 2.7.1.


    Friday, April 19, 2013

    Apache Santuario 1.5.4 and Apache WSS4j 1.6.10 released

    Two new bug-fix releases of note in Apache security products:

    Apache Santuario 1.5.4 has been released. Amongst the issues fixed is a thread-safety problem when secure validation is enabled, and a possible NPE due to ThreadLocal storage when an application is deployed in certain containers.

    Apache WSS4J 1.6.10 has also been released. The issues fixed are available here. A performance issue was fixed in the MemoryReplayCache, which is used to guard against replay attacks. An interop issue was fixed with older Axis 1.x stacks. UsernameTokens with no password elements have been explicitly disallowed by default (although this is configurable). Finally, "time-to-live" functionality has been added to disallow "stale" UsernameTokens (with older Created values).

    Thursday, March 14, 2013

    Signature and Encryption Key Identifiers in Apache WSS4J

    The Apache WSS4J configuration allows you to specify how to reference a public key or certificate when signing or encrypting a SOAP message via the following configuration items:
    • WSHandlerConstants.SIG_KEY_ID ("signatureKeyIdentifier")
    • WSHandlerConstants.ENC_KEY_ID ("encryptionKeyIdentifier")
    This blog entry will explain what values are valid for each of these configuration items, and what each of these values means. Firstly, let's look at what these configuration items refer to.

    When creating a Signature you have the option of adding content to the Signature KeyInfo Element. This lets the recipient know what certificate/public key to use to verify the signature. Specifying a value for WSHandlerConstants.SIG_KEY_ID allows you to change how to refer to the key.

    When encrypting some part of the message, a session key is typically generated and used to encrypt the message part, which is then wrapped in an EncryptedData Element. The EncryptedData refers (typically via a Direct Reference) to an EncryptedKey Element in the security header, where the session key is encrypted using the public key of the recipient. Specifying a value for WSHandlerConstants.ENC_KEY_ID allows you to change how to refer to the public key of the recipient in the EncryptedKey KeyInfo element.

    The valid values for these configuration items are as follows (a configuration sketch follows the list):
    • IssuerSerial (default)
    • DirectReference
    • X509KeyIdentifier
    • Thumbprint
    • SKIKeyIdentifier
    • KeyValue (signature only)
    • EncryptedKeySHA1 (encryption only)
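
    Before looking at each value in turn, here is a sketch of how these configuration items are typically set, in this case on a CXF client using the WSS4JOutInterceptor; the aliases, property files and CallbackHandler class below are illustrative only:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.cxf.ws.security.wss4j.WSS4JOutInterceptor;
    import org.apache.ws.security.handler.WSHandlerConstants;

    Map<String, Object> props = new HashMap<String, Object>();
    props.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE + " " + WSHandlerConstants.ENCRYPT);
    props.put(WSHandlerConstants.USER, "myclientkey");                        // signature key alias
    props.put(WSHandlerConstants.ENCRYPTION_USER, "myservicekey");            // recipient's key alias
    props.put(WSHandlerConstants.PW_CALLBACK_CLASS, "org.example.CommonCallbackHandler");
    props.put(WSHandlerConstants.SIG_PROP_FILE, "clientKeystore.properties");
    props.put(WSHandlerConstants.ENC_PROP_FILE, "clientKeystore.properties");
    props.put(WSHandlerConstants.SIG_KEY_ID, "DirectReference");              // how to reference the signing cert
    props.put(WSHandlerConstants.ENC_KEY_ID, "Thumbprint");                   // how to reference the recipient's cert
    client.getOutInterceptors().add(new WSS4JOutInterceptor(props));          // "client" is a CXF Client instance
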
    1) IssuerSerial

    This (default) key identifier method means that the Issuer Name and Serial Number of an X.509 Certificate are included directly in the KeyInfo Element. For example:

    <ds:KeyInfo>
        <wsse:SecurityTokenReference>
            <ds:X509Data>
                <ds:X509IssuerSerial>
                    <ds:X509IssuerName>CN=XYZ</ds:X509IssuerName>
                    <ds:X509SerialNumber>124124....</ds:X509SerialNumber>
                </ds:X509IssuerSerial>
            </ds:X509Data>
        </wsse:SecurityTokenReference>
    </ds:KeyInfo>

    The certificate is not included in the message and so the recipient will have to have access to the certificate matching the given Issuer Name and Serial Number in a keystore.

    2) DirectReference

    This key identifier method is used when the X.509 Certificate is included in the message, unlike the IssuerSerial case above. The certificate is Base-64 encoded and included in the request via a BinarySecurityToken element, e.g.:

    <wsse:BinarySecurityToken EncodingType="...#Base64Binary"
        ValueType="...#X509v3">MIIBY...
    </wsse:BinarySecurityToken>

    This Certificate is then referred to directly from the KeyInfo Element as follows:

    <ds:KeyInfo>
        <wsse:SecurityTokenReference>
            <wsse:Reference URI="#X509-..." ValueType="...#X509v3"/>
        </wsse:SecurityTokenReference>
    </ds:KeyInfo>

    3) X509KeyIdentifier 

    This key identifier method is similar to DirectReference, in that the certificate is included in the request. However, instead of referring to a certificate, the certificate is included directly in the KeyInfo element, e.g.:

    <ds:KeyInfo>
        <wsse:SecurityTokenReference>
            <wsse:KeyIdentifier EncodingType="...#Base64Binary"
                ValueType="...#X509v3">MIIB...
            </wsse:KeyIdentifier>
        </wsse:SecurityTokenReference>
    </ds:KeyInfo>

    4) Thumbprint

    This Key Identifier method refers to the Certificate via a SHA-1 Thumbprint. The certificate may or may not be included in the request. For example:

    <ds:KeyInfo>
        <wsse:SecurityTokenReference>
            <wsse:KeyIdentifier EncodingType="...#Base64Binary"
                ValueType="...#ThumbprintSHA1">
                5epW9GhL6s0kC9X9egsRZ90ooeE=
            </wsse:KeyIdentifier>
        </wsse:SecurityTokenReference>
    </ds:KeyInfo>

    5) SKIKeyIdentifier

    This Key Identifier method refers to a Certificate via a Base-64 encoding of the Subject Key Identifier, e.g.:

    <ds:KeyInfo>
        <wsse:SecurityTokenReference>
            <wsse:KeyIdentifier EncodingType="...#Base64Binary"
                ValueType="...#X509SubjectKeyIdentifier">
                2DUoN4ppxJz/RNgcCDsJ4SocPdk=
            </wsse:KeyIdentifier>
        </wsse:SecurityTokenReference>
    </ds:KeyInfo>

    6) KeyValue

    This Key Identifier method only applies for Signatures. It includes the (RSA) PublicKey directly in the Signature KeyInfo Element as follows:

    <ds:KeyInfo>
        <ds:KeyValue>
            <ds:RSAKeyValue>
                <ds:Modulus>tfJ29N0G1...</ds:Modulus>
                <ds:Exponent>AQAB</ds:Exponent>
            </ds:RSAKeyValue>
        </ds:KeyValue>
    </ds:KeyInfo>

    7) EncryptedKeySHA1

    This Key Identifier method only applies for Encryption. Unlike the previous methods it refers to the way the EncryptedData references the EncryptedKey Element, rather than the way the EncryptedKey Element refers to the public key. For example:

    <ds:KeyInfo>
        <wsse:KeyIdentifier EncodingType="...#Base64Binary"   
            ValueType="...#EncryptedKeySHA1">
            X/8wvCY...
        </wsse:KeyIdentifier>
    </ds:KeyInfo>

    Thursday, February 28, 2013

    Recent security advisories for Apache CXF

    Apache CXF 2.7.3 (release notes), 2.6.6, and 2.5.9 have been released and are available for download. These releases contain fixes for a number of critical security issues, which I will describe below.

    1) CVE-2012-5633

    A security advisory has been issued in relation to a possible circumvention of WS-Security processing of an inbound request, due to the URIMappingInterceptor in CXF. This is a legacy interceptor (largely made redundant by JAX-RS) that allows some basic "rest style" access to a simple SOAP service. The vulnerability occurs when a simple SOAP service is secured with the WSS4JInInterceptor. WS-Security processing is completely bypassed in the case of an HTTP GET request, and so unauthenticated access to the service can be enabled by the URIMappingInterceptor.

    This is a critical vulnerability if you are using a WS-Security UsernameToken or a SOAP message signature via the WSS4JInInterceptor to authenticate users for a simple SOAP service. Please note that this advisory does not apply if you are using WS-SecurityPolicy to secure the service, as the relevant policies will not be asserted. Also note that this attack is only applicable to relatively simple services that can be mapped to a URI via the URIMappingInterceptor.

    Although this issue is fixed in CXF 2.5.8, 2.6.5 and 2.7.2, due to a separate security vulnerability (CVE-2013-0239), CXF users should upgrade to either 2.5.9, 2.6.6, or 2.7.3.

    2) CVE-2013-0239

    A security advisory has been issued in relation to an authentication bypass involving a UsernameToken WS-SecurityPolicy requirement. If a UsernameToken element is sent with no password child element, then authentication is bypassed by default. This is due to the use-case of supporting deriving keys from a UsernameToken, where a password element would not be sent in the token.

    The vulnerability does not apply in any of the following circumstances:
    • You are using a custom UsernameTokenValidator which does not allow the 'verifyUnknownPassword' use-case, or that otherwise insists that a password must be present in the token (such as the 'JAASUsernameTokenValidator' in WSS4J).
    • You are using a 'sp:HashPassword' policy that requires a hashed password to be present in the token.
    • You are using the older style of configuring WS-Security without using WS-SecurityPolicy.
    If you are relying on WS-SecurityPolicy enabled plaintext UsernameTokens to authenticate users, and neither of the first two points above applies, then you must upgrade to either CXF 2.5.9, 2.6.6 or 2.7.3, or else configure a custom UsernameTokenValidator implementation to insist that a password element must be present.

    3) Note on CVE-2011-2487

    A security advisory 'note' was also published to the CXF Security Advisories page, giving details on an attack against XML Encryption that affects users of older versions of CXF (prior to 2.5.3 and 2.4.7). It carries the following recommendation:
    It is recommended that the use of the RSA v1.5 key transport algorithm be discontinued. Instead the RSA-OAEP key transport algorithm should be used. This algorithm is used by default from WSS4J 1.6.8 onwards. If you are using WS-SecurityPolicy, then make sure not to use the AlgorithmSuite policies ending in "Rsa15".

    Saturday, February 23, 2013

    WS-Federation support in Apache CXF

    Apache CXF is a leading web services stack with excellent support for a long list of security protocols such as WS-Security, OAuth, etc. A recent addition to this list is support for WS-Federation via the Apache CXF Fediz subproject. In this post, we will introduce Fediz and illustrate how to secure a web application with Fediz via an example.

    1) Introducing Apache CXF Fediz

    The Apache CXF Fediz subproject provides an easy way to secure your web applications via the WS-Federation Passive Requestor Profile. A good place to start in understanding this profile is to look at the Fediz architecture page. A typical scenario is that a client browser will attempt to access a web application secured with a Fediz plugin. Fediz will redirect the browser to an Identity Provider (IdP) for authentication, if no token/cookie is present in the request. In this way, authentication is externalized from the web application to a third party IdP.

    Fediz ships with its own IdP component, but the Fediz container plugin is also tested with other IdP implementations. The IdP prompts the user for her credentials, and authenticates them via a Security Token Service (STS). On successful authentication, the IdP requests that the STS issue a signed SAML Token. The IdP is configured to request that the STS insert certain "Claim" attributes into the SAML Token, where the specific "Claim" types are configured per realm. The client browser is then redirected back to the application server with the SAML Token included.

    The Fediz plugin parses the received SAML Token. Authentication is established via the trust verification of the signature on the token. Role-based Access Control (RBAC) is supported by configuring the plugin with a "roleURI" attribute (which should correspond to the standard Claims URI for a role). The corresponding attribute values are extracted from the SAML Token and stored as the role(s) of the user. The actual security enforcement is delegated to the underlying container / application server. The containers that are (planned to be) supported are:
    • Apache Tomcat - See here.
    • Jetty - See here, available in the forthcoming 1.1 release.
    • Spring - See here, available in the forthcoming 1.1 release.
    • JBoss - Planned for the forthcoming 1.1 release. 
    An important point to note is that the Fediz container plugin is actually protocol-agnostic. WS-Federation is the only protocol supported to date, but support for the SAML Web SSO Profile may be added at a later date. For more detailed information on Fediz you should refer to the documentation, and also to the excellent series of blog articles by Oliver Wulff, who is the main developer on Fediz.

    2) Fediz example

    Download the latest Apache CXF Fediz release (currently 1.0.3) here, and extract it to a new directory (${fediz.home}). It ships with two examples, 'simpleWebapp' and 'wsclientWebapp'. We will cover the former as part of this tutorial. We will use an Apache Tomcat 7 container to host both the IdP/STS and the service application - this is not recommended for production, but it is an easy way to get the example to work. Please see the associated README.txt of the simpleWebapp example for more information about how to deploy the example properly. Most of the deployment information in this section is based on the Fediz Tomcat documentation, which I recommend reading for a more in-depth treatment of deploying Fediz to Tomcat.

    a) Deploying the IdP/STS

    To deploy the IdP/STS to Tomcat:
    • Create a new directory: ${catalina.home}/lib/fediz
    • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
    • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
    • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
    Now we need to set up TLS:
    • Copy ${fediz.home}/examples/samplekeys/tomcat-idp.jks to ${catalina.home}.
    • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="tomcat-idp.jks" keystorePass="tompass" clientAuth="false" sslProtocol="TLS" URIEncoding="UTF-8" />
    Now start Tomcat, and check that the STS is live by opening the STS WSDL in a web browser: 'https://localhost:8443/fediz-idp-sts/STSService?wsdl'

    b) Deploying the service

    To deploy the service to Tomcat:
    • Copy ${fediz.home}/examples/samplekeys/tomcat-rp.jks to ${catalina.home}.
    • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${catalina.home}/conf/
    • Edit ${catalina.home}/conf/fediz_config.xml and replace '9443' with '8443'.
    • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
    • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${catalina.home}/webapps.

    c) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/  (this is not secured) 
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    With the latter URL, the browser is redirected to the IDP and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.

    Finally, you can see the metadata of the service via the standard URL:
    • https://localhost:8443/fedizhelloworld/FederationMetadata/2007-06/FederationMetadata.xml

    Thursday, January 24, 2013

    Recent security enhancements in Apache CXF 2.7.x

    In this post, I will cover some new security features and enhancements that are contained in Apache CXF 2.7.2 (release notes), as well as the previous 2.7.1 release (release notes).

    1) STS Enhancements
    • The STS ClaimsManager used to call all ClaimsHandler implementations for processing. Now it only calls the implementations that support the requested claim (CXF-4461).
    • New functionality was added to the STS to support processing 'primary' and 'secondary' claims, and to merge claims with the same dialects (CXF-4664).
    • It is now possible for the STSClient to send a 'Claims' Element to the STS via a CallbackHandler (CXF-4638).
    • You can now configure signature + encryption Crypto objects in the STS via a URL or properties object, as per the runtime (CXF-4705).
    2) SAML Enhancements
    • SAML SubjectConfirmation requirements are now enforced for the non-policy case (CXF-4655).
    • The JAX-RS SAML interceptors have been enhanced to allow sending an existing SAML Token, rather than always creating one (CXF-4639).
    3) XACML Enhancements

    A new 'cxf-rt-security' module was introduced in CXF 2.7.1, for security functionality that is common to several of the runtime modules (WS-Security, RS-Security, etc.). For now this module contains some new security functionality based around XACML:
    • It contains some helper classes to construct XACML Request statements using OpenSAML, both for the XACML 2.0 core specification (see here) and for the SAML Profile of XACML 2.0 (see here).
    • It contains an interface to create an XACML request given a Principal, a list of roles and a CXF Message object, as well as a default implementation. This implementation follows the SAML 2.0 profile of XACML 2.0. The principal name is inserted as the Subject ID, and the list of roles associated with that principal is inserted as the Subject roles. The action to send defaults to "execute". The resource is the WSDL Operation for a SOAP service, and the request URI for a REST service. The current DateTime is also sent in an Environment.
    • An abstract interceptor is provided that wraps the XACML request creation functionality given above. It can perform an XACML authorization request to a remote PDP, and make an authorization decision based on the response. It takes the principal and roles from the CXF SecurityContext, and uses the XACMLRequestBuilder to construct an XACML Request statement. A rough sketch of such an interceptor is given after this list.
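
    To make the last point above more concrete, the following is a rough sketch of a concrete subclass that delegates the PDP call to a client abstraction. The class name 'AbstractXACMLAuthorizingInterceptor' and the 'performRequest' signature are assumptions based on the 2.7.x cxf-rt-security module and should be checked against the actual API; 'PolicyDecisionPointClient' is not a CXF class, just a placeholder for whatever mechanism you use to reach your PDP:

    import org.apache.cxf.message.Message;
    import org.apache.cxf.rt.security.xacml.AbstractXACMLAuthorizingInterceptor;
    import org.opensaml.xacml.ctx.RequestType;
    import org.opensaml.xacml.ctx.ResponseType;

    public class RemotePDPAuthorizingInterceptor extends AbstractXACMLAuthorizingInterceptor {

        /** Hypothetical abstraction for calling a remote Policy Decision Point. */
        public interface PolicyDecisionPointClient {
            ResponseType evaluate(RequestType request) throws Exception;
        }

        private final PolicyDecisionPointClient pdpClient;

        public RemotePDPAuthorizingInterceptor(PolicyDecisionPointClient pdpClient) {
            this.pdpClient = pdpClient;
        }

        @Override
        public ResponseType performRequest(RequestType request, Message message) throws Exception {
            // The parent class builds the XACML request from the principal and
            // roles in the SecurityContext (via the XACMLRequestBuilder), and
            // evaluates the decision in the response returned here.
            return pdpClient.evaluate(request);
        }
    }

    The interceptor would then be added to the service's inbound interceptor chain in the usual way.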

    Monday, January 7, 2013

    Apache WSS4J 1.6.9 released

    Apache WSS4J 1.6.9 has been released. This release contains a single (critical) fix for a bug which prevented WSS4J 1.6.8 from working correctly in an OSGi container.