Wednesday, October 26, 2016

Switching authentication mechanisms in the Apache CXF Fediz STS

Apache CXF Fediz ships with an Identity Provider (IdP) that can authenticate users via either the WS-Federation or SAML SSO protocols. The IdP delegates user authentication to a Security Token Service (STS) web application using the WS-Trust protocol. The STS implementation in Fediz ships with some sample user data for use in the tests. For a real-world scenario, deployers will have to swap the sample data out for an identity backend (such as Active Directory or LDAP). This post will explain how this can be done, with a particular focus on some recent changes to the STS web application in Fediz to make the process easier.

1) The default STS that ships with Fediz

First let's explain a bit about how the STS is configured by default in Fediz to cater for the testcases.

a) Endpoints and user authentication

The STS must define two distinct sets of endpoints to work with the IdP. Firstly, the STS must be able to authenticate the user credentials that are presented to the IdP. Typically this is a Username + Password combination. However, X.509 client certificates and Kerberos tokens are also supported. Note that by default, the STS authenticates usernames and passwords via a simple file local to the STS.

After successful user authentication, a SAML token is returned to the IdP. The IdP then gets another SAML token "on behalf of" the authenticated user for a given realm, authenticating using its own credentials. So we need a second endpoint in the STS to issue this token. By default, the STS requires that the IdP authenticate using TLS client authentication. The security policies are defined in the STS WSDLs.

b) Realms

The Fediz IdP and STS support the concept of authenticating users in different realms. By default, the IdP is configured to authenticate users in "Realm A". This corresponds to a specific endpoint address in the STS. The STS also defines user authentication endpoints in "Realm B" for use in test scenarios involving identity federation between two IdPs.

In addition, the STS defines some configuration to map user identities between realms. In other words, how a principal in one realm should map to another realm, and how the claims in one realm map to those in another realm.

2) Changing the STS in Fediz 1.3.2 to use LDAP

From the forthcoming 1.3.2 release onwards, the Fediz STS web application is a bit easier to customize for your specific deployment needs. Let's see how easy it is to switch the STS to use LDAP.

a) Deploy the vanilla IdP and STS to Apache Tomcat

To start with, we will deploy the STS and IdP containing the sample data to Apache Tomcat.
  • Create a new directory: ${catalina.home}/lib/fediz
  • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
  • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb-2.3.4.jar) to ${catalina.home}/lib 
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/sampleKeys to ${catalina.home}
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat and navigate to the IdP URL in a web browser, authenticating with "alice/ecila" in "realm A". You should be directed to the URL for the default service application (which returns a 404, as we have not configured it).


b) Change the STS authentication mechanism to Active Directory

To simulate an Active Directory instance for demonstration purposes, we will modify some LDAP system tests in the Fediz source that use Apache Directory. Check out the Fediz source and build it via "mvn install -DskipTests". Now go into "systests/ldap" and edit the LDAPTest. Add "@Ignore" to the existing test and uncomment the test which just "sleeps". Also change the "@CreateTransport" annotation to start the LDAP port on "12345" instead of a random port, as sketched below.
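For reference, the annotation change might look something like the following sketch (any other attributes on the annotation stay as they are):

  // Fix the LDAP port of the embedded Apache Directory server instead of using a random one
  @CreateTransport(protocol = "LDAP", port = 12345)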

Next we'll configure the Fediz STS to use this LDAP instance for authentication. Edit 'webapps/fediz-idp-sts/WEB-INF/cxf-transport.xml' and change "endpoints/file.xml" to "endpoints/ldap.xml". Next edit 'webapps/fediz-idp-sts/WEB-INF/endpoints/ldap.xml' and just change the port from "389" to "12345".

Now we need to configure a JAAS configuration file, which the STS uses to validate the received Username + Password against LDAP. Copy this file to the "conf" directory of Tomcat, substituting "12345" for "portno". Now restart Tomcat, this time specifying the location of the JAAS configuration file, e.g.:
  • export JAVA_OPTS="-Xmx2048M -Djava.security.auth.login.config=$CATALINA_HOME/conf/jaas.conf"
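For reference, a minimal JAAS configuration file might look like the sketch below, using the JDK's LdapLoginModule. The entry name ("ldap"), the file name/path used in JAVA_OPTS above, the base DN and the user filter are all assumptions here and must match what the STS endpoint configuration expects and how the test LDAP instance is populated:

  ldap {
      com.sun.security.auth.module.LdapLoginModule REQUIRED
      userProvider="ldap://localhost:12345/ou=users,dc=fediz,dc=org"
      userFilter="(uid={USERNAME})"
      useSSL=false;
  };

Note that "useSSL=false" is needed here, as the test LDAP instance is not using TLS on port 12345.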
That's all the changes required to switch over to using an LDAP instance for authentication.

Wednesday, September 28, 2016

Securing an Apache Kafka broker - part IV

This is the fourth in a series of articles on securing an Apache Kafka broker. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. The third post showed how Apache Ranger could be used instead to create and enforce authorization policies for Apache Kafka. In this post we will look at an alternative authorization solution called Apache Sentry.

1) Build the Apache Sentry distribution

First we will build and install the Apache Sentry distribution. Download Apache Sentry (1.7.0 was used for the purposes of this tutorial). Verify that the signature is valid and that the message digests match. Now extract and build the source and copy the distribution to a location where you wish to install it:
  • tar zxvf apache-sentry-1.7.0-src.tar.gz
  • cd apache-sentry-1.7.0-src
  • mvn clean install -DskipTests
  • cp -r sentry-dist/target/apache-sentry-1.7.0-bin ${sentry.home}
Apache Sentry has an authorization plugin for Apache Kafka, amongst other big data projects. In addition it comes with an RPC service which stores authorization privileges in a database. For the purposes of this tutorial we will just configure the authorization privileges in a configuration file local to the broker. Therefore we don't need to do any further configuration to the distribution at this point.

2) Configure authorization in the broker

Configure Apache Kafka as per the first tutorial. To enable authorization using Apache Sentry we also need to follow these steps. First edit 'config/server.properties' and add the Sentry authorizer configuration.
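A sketch of this addition is shown below; "authorizer.class.name" is the standard Kafka broker setting, while the fully qualified Sentry authorizer class name should be verified against the Sentry version you are using:

  • authorizer.class.name=org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer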
Next copy the jars from the "lib" directory of the Sentry distribution to the Kafka "libs" directory. Then create a new file in the config directory called "sentry-site.xml" with the following content:
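A sketch of what this file might contain is shown below. The provider class itself ships with Sentry, but the property names here are assumptions based on Sentry's file-based provider and should be checked against the Sentry Kafka binding documentation for your version:

  <configuration>
    <!-- File-based authorization provider shipped with Sentry (property names are assumed) -->
    <property>
      <name>sentry.kafka.provider</name>
      <value>org.apache.sentry.provider.file.LocalGroupResourceAuthorizationProvider</value>
    </property>
    <!-- Location of the privileges file created in the next step -->
    <property>
      <name>sentry.kafka.provider.resource</name>
      <value>config/sentry.ini</value>
    </property>
  </configuration>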

This is the configuration file for the Sentry plugin for Kafka. It essentially says that the authorization privileges are stored in a local file, and that the groups for authenticated users should be retrieved from this file. Finally, we need to specify the authorization privileges. Create a new file in the config directory called "sentry.ini" with the following content:
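A sketch of such a file follows. The user name on the left-hand side of the "[users]" section must match the authenticated principal exactly as the broker presents it (shortened here for readability), and the group/role names and privilege syntax are assumptions to adapt to your deployment:

  [users]
  # map the authenticated principal to a local group (principal name shortened here)
  client = clientgroup

  [groups]
  clientgroup = clientrole

  [roles]
  # privilege syntax is an assumption - check the Sentry Kafka documentation
  clientrole = host=*->topic=test->action=read, host=*->topic=test->action=describe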

This configuration file contains three separate sections. The "[users]" section maps the authenticated principals to local groups. The "[groups]" section maps the groups to roles, and the "[roles]" section lists the actual privileges. Now we can start the broker as in the first tutorial:
  • bin/kafka-server-start.sh config/server.properties
3) Test authorization

Now let's test the authorization logic. Start the producer:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial. 

Monday, September 26, 2016

Securing an Apache Kafka broker - part III

This is the third in a series of blog posts about securing Apache Kafka. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. However, this approach is not recommended for non-trivial deployments. In this post we will show how we can create flexible authorization policies for Apache Kafka using the Apache Ranger admin UI. Then we will show how to enforce these policies at the broker.

1) Install the Apache Ranger Kafka plugin

The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • tar zxvf apache-ranger-incubating-0.6.1.tar.gz
  • cd apache-ranger-incubating-0.6.1
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
  • mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
Now go to ${ranger.kafka.home} and edit "install.properties". You need to specify the following properties:
  • COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KafkaTest".
Save "" and install the plugin as root via "sudo ./". The Apache Ranger Kafka plugin should now be successfully installed (although not yet configured properly) in the broker.

2) Configure authorization in the broker

Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
  • Edit 'config/server.properties' and add the Ranger authorizer configuration (a sketch of this addition is shown after this list).
  • Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
  • Copy the Apache Commons Logging jar into $KAFKA_HOME/libs. 
  • The ranger plugin will try to store policies by default in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root" make sure that this directory is accessible to the user that is running the broker.
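For the first step above, the addition to 'config/server.properties' is the entry that switches the broker over to the Ranger authorizer - a sketch (verify the fully qualified class name against your Ranger release):

  • authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer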
Now we can start the broker as in the first tutorial:
  • bin/kafka-server-start.sh config/server.properties
3) Configure authorization policies in the Apache Ranger Admin UI 

At this point we should have configured the broker so that the Apache Ranger plugin is used to communicate with the Apache Ranger admin service to download authorization policies. So we need to install and configure the Apache Ranger admin service. Please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".

First let's add some new users that match the SSL principals we created in the first tutorial. Click on "Settings" and "Users/Groups". Add new users for the principals:
  • CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
Now go back to the Service Manager screen and click on the "+" button next to "KAFKA". Create a new service called "KafkaTest". Click "Test Connection" to make sure it can communicate with the Apache Kafka broker. Then click "add" to save the new service. Click on the new service. There should be an "admin" policy already created. Edit the policy and give the "broker" principal above the rights to perform any operation and save the policy. Now create a new policy called "TestPolicy" for the topic "test". Give the service principal the rights to "Consume, Describe and Publish". Give the client principal the rights to "Consume and Describe" only.

4) Test authorization

Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long or restart the broker to download the newly created policies. Start the producer:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial.

Friday, September 23, 2016

Integrating Apache Camel with Apache Syncope - part III

This is the third in a series of blog posts about integrating Apache Camel with Apache Syncope. The first post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0, and gave an example of how we can modify the default behaviour to send an email to an administrator when a user was created. The second post showed how an administrator can keep track of user password changes for auditing purposes. In this post we will show how to integrate Syncope with Apache ActiveMQ using Camel.

1) The use-case

The use-case is that Apache Syncope is used for Identity Management in a large organisation. When users are created we would like to be able to gather certain information about the new users and process it dynamically in some way. In particular, we are interested in the age of the new users and the country in which they are based. Perhaps at the reception desk of the company HQ we display a map with the number of employees in each country highlighted. To decouple whatever applications are processing the data from Syncope itself, we will use a messaging solution, namely Apache ActiveMQ. When new users are created, we will modify the default Camel route to send a message to two queues corresponding to the age and location of the user.

2) Download and configure Apache ActiveMQ

The first step is to download Apache ActiveMQ (currently 5.14.0). Unzip it and start it via:
  • bin/activemq start 
Now go to the web interface of ActiveMQ - 'http://localhost:8161/admin/', logging in with credentials 'admin/admin'. Click on the "Queues" tab and create two new queues called 'age' and 'country'.

3) Download and configure Apache Syncope

Download and extract the standalone version of Apache Syncope 2.0.0. Before we start it, we will copy the jars needed to get Camel working with ActiveMQ in Syncope. Copy the following jars into the "webapps/syncope/WEB-INF/lib" directory of the Apache Tomcat instance bundled with Syncope:
  • From $ACTIVEMQ_HOME/lib: activemq-client-5.14.0.jar + activemq-spring-5.14.0.jar + hawtbuf-1.11.jar + geronimo-j2ee-management_1.1_spec-1.0.1.jar
  • From $ACTIVEMQ_HOME/lib/camel: activemq-camel-5.14.0.jar + camel-jms-2.16.3.jar
  • From $ACTIVEMQ_HOME/lib/optional: activemq-pool-5.14.0.jar + activemq-jms-pool-5.14.0.jar + spring-jms-4.1.9.RELEASE.jar
Next we need to create a Camel Spring configuration file containing a bean with the address of the broker. Add a file called "camelRoutesContext.xml" to the Tomcat lib directory:
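A minimal sketch of such a file is shown below, assuming the ActiveMQ broker is running on its default URL (tcp://localhost:61616). The bean id "activemq" is what allows the "activemq:" endpoint URIs in the Camel routes further down to resolve:

  <?xml version="1.0" encoding="UTF-8"?>
  <beans xmlns="http://www.springframework.org/schema/beans"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.springframework.org/schema/beans
                             http://www.springframework.org/schema/beans/spring-beans.xsd">

      <!-- Registers the "activemq" Camel component, pointing at the local broker -->
      <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
          <property name="brokerURL" value="tcp://localhost:61616"/>
      </bean>

  </beans>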

Now we can start the embedded Apache Tomcat instance. Open a browser and navigate to 'http://localhost:9080/syncope-console', logging in with 'admin/password'. The first thing we need to do is to configure user attributes for "age" and "country". Go to "Configuration/Types" in the left-hand menu, and click on the "Schemas" tab. Create two plain (mandatory) schema types: "age" of type "Long" and "country" of type "String". Now click on the "AnyTypeClasses" tab and create a new AnyTypeClass, selecting the two plain schema types we just created. Finally, click on the "AnyType" tab and edit the "USER". Add the new AnyTypeClass you created and hit "save".

Now we will modify the Camel route invoked when a user is created. Click on "Extensions/Camel Routes" in the left-hand configuration menu. Edit the "createUser" route and add the following above the "bean method" part:
  • <setBody><simple>${body.plainAttrMap[age].values[0]}</simple></setBody>
  • <to uri="activemq:age"/>
  • <setBody><simple>${exchangeProperty.actual.plainAttrMap[country].values[0]}</simple></setBody>
  • <to uri="activemq:country"/>
This should be fairly straightforward to follow. We are setting the message body to be the age of the newly created User, and dispatching that message to the "age" queue. We then follow the same process for the "country". We also need to change "body" in the "bean method" line to "exchangeProperty.actual", because we have redefined what the body is for each of the Camel routes above.

Now let's create some new users. Click on the "Realms" menu and select the "USER" tab. Create new users "alice" in country "usa" of age "25" and "bob" in country "canada" of age "27". Now let's look at the ActiveMQ console again. We should see two new messages in each of the queues, ready to be consumed.

Thursday, September 22, 2016

Using SHA-512 with Apache CXF SOAP web services

XML Signature is used extensively in SOAP web services to guarantee message integrity and non-repudiation, as well as client authentication via PKI. A digest algorithm crops up in XML Signature both as part of the Signature Method (rsa-sha1 for example), as well as in the digests of the data that are signed. As recent weaknesses have emerged with the use of SHA-1, it makes sense to use the SHA-2 digest algorithm instead. In this post we will look at how to configure Apache CXF to use SHA-512 (i.e. SHA-2 with 512 bits) as the digest algorithm.

1) Configuring the STS to use SHA-512

Apache CXF ships with a SecurityTokenService (STS) that is widely deployed. The principal function of the STS is to issue signed SAML tokens, although it supports a wide range of other functionalities and token types. The STS (for more recent versions of CXF) uses RSA-SHA256 for the signature method when signing SAML tokens, and uses SHA-256 for the digest algorithm. In this section we'll look at how to configure the STS to use SHA-512 instead.

You can specify the signature and digest algorithms via the SignatureProperties class in the STS. To specify SHA-512 for the signature and digest algorithms for generated tokens in the STS, add the following bean to your Spring configuration:
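A sketch of such a bean is shown below. The bean id just needs to match the reference in the next step; the values are the standard XML Signature URIs for RSA-SHA512 and the SHA-512 digest:

  <bean id="sigProps" class="org.apache.cxf.sts.SignatureProperties">
      <!-- RSA-SHA512 signature algorithm -->
      <property name="signatureAlgorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512"/>
      <!-- SHA-512 digest algorithm -->
      <property name="digestAlgorithm" value="http://www.w3.org/2001/04/xmlenc#sha512"/>
  </bean>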

Next you need to reference this bean in the StaticSTSProperties bean for your STS:
  • <property name="signatureProperties" ref="sigProps" />
2) Configuring WS-SecurityPolicy to use SHA-512

Service requests are typically secured at a message level using WS-SecurityPolicy. It is possible to specify the algorithms used to secure the request, as well as the key sizes, by configuring an AlgorithmSuite policy. Unfortunately the latest WS-SecurityPolicy spec is quite dated at this point, and lacks support for more modern algorithms as part of the default AlgorithmSuite policies that are defined in the spec. The spec only supports RSA-SHA1 for signature, and only SHA-1 and SHA-256 for digest algorithms.

Luckily, Apache CXF users can avail of a few different ways to use stronger algorithms with web service requests. In CXF there is a JAX-WS property called 'ws-security.asymmetric.signature.algorithm' for AsymmetricBinding policies (similarly 'ws-security.symmetric.signature.algorithm' for SymmetricBinding policies). This overrides the default signature algorithm of the policy. So for example, to switch to use RSA-SHA512 instead of RSA-SHA1 simply set the following property on your client/endpoint:
  • <entry key="ws-security.asymmetric.signature.algorithm" value=""/>
There is no corresponding property to explicitly configure the digest algorithm, as the default AlgorithmSuite policies already support SHA-256 (although one could be added if there was enough demand). If you really need to support SHA-512 here, an option is to use a custom AlgorithmSuite (which will obviously not be portable), or to override one of the existing ones.

It's pretty straightforward to do this. First you need to create an AlgorithmSuiteLoader implementation to handle the policy. Here is one used in the tests that creates a custom AlgorithmSuite policy called 'Basic128RsaSha512', which extends the 'Basic128' policy to use RSA-SHA512 for the signature method, and SHA-512 for the digest method. This AlgorithmSuiteLoader can be referenced in Spring via:
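One way this might be wired up is simply to define the loader as a bean in the Spring configuration, so that CXF can pick it up when resolving the AlgorithmSuiteLoader bus extension (the class name below is purely illustrative - use your own implementation class):

  <bean id="algorithmSuiteLoader" class="org.example.security.CustomAlgorithmSuiteLoader"/>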

The policy in question looks like:
  • <cxf:Basic128RsaSha512 xmlns:cxf=""/>

Wednesday, September 21, 2016

Invoking on the Talend ESB STS using SoapUI

Talend ESB ships with a powerful SecurityTokenService (STS) based on the STS that ships with Apache CXF. The Talend Open Studio for ESB contains UI support for creating web service clients that use the STS to obtain SAML tokens for authentication (and also authorization via roles embedded in the tokens). However, it is sometimes useful to be able to obtain tokens with a third party client. In this post we will show how SoapUI can be used to obtain SAML Tokens from the Talend ESB STS.

1) Download and run Talend Open Studio for ESB

The first step is to download Talend Open Studio for ESB (the current version at the time of writing this post is 6.2.1). Unzip it and start the container via:
  • Runtime_ESBSE/container/bin/trun
The next step is to start the STS itself:
  • tesb:start-sts
2) Download and run SoapUI

Download SoapUI and run the installation script. Create a new SOAP Project called "STS" using the WSDL:
  • http://localhost:8040/services/SecurityTokenService/UT?wsdl
The WSDL of the STS defines a number of different services. The one we are interested in is the "UT_Binding", which requires a WS-Security UsernameToken to authenticate the client. Click on "UT_Binding/Issue/Request 1" in the left-hand menu to see a sample request for the service. Now we need to do some editing of the request. Remove the 'Context="?"' attribute from RequestSecurityToken. Then paste the following into the Body of the RequestSecurityToken:
  • <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
  • <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
  • <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
Now we need to configure a username and password to use when authenticating the client request. In the "Request Properties" box in the lower left corner, add "tesb" for the "username" and "password" properties. Now right click in the request pane, and select "Add WSS Username Token" (Password Text). Now send the request and you should receive a SAML Token in response.

Bear in mind that if you wish to re-use the SAML Token retrieved from the STS in a subsequent request, you must copy it from the "Raw" tab and not the "XML" tab of the response. The latter adds in whitespace that breaks the signature on the token. Another thing to watch out for is that the STS maintains a cache of the Username Token nonce values, so you will need to recreate the UsernameToken each time you want to get a new token.

3) Requesting a "PublicKey" KeyType

The example above uses a "Bearer" KeyType. Another common use-case, as is the case with the security-enabled services developed using the Talend Studio, is when the token must have the PublicKey/Certificate of the client embedded in it. To request such a token from the STS, change the "Bearer" KeyType as above to "PublicKey". However, we also need to present a certificate to the STS to include in the token.

As we are just using the test credentials used by the Talend STS, go to the Runtime_ESBSE/container/etc/keystores directory and extract the client key with:
  • keytool -exportcert -rfc -keystore clientstore.jks -alias myclientkey -file client.cer -storepass cspass
Edit client.cer and remove the first and last lines (which contain BEGIN/END CERTIFICATE). Now go back to SoapUI and add the following to the RequestSecurityToken Body:
  • <t:UseKey xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512"><ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:X509Data><ds:X509Certificate>...</ds:X509Certificate></ds:X509Data></ds:KeyInfo></t:UseKey>
where the content of the X.509 Certificate is the content in client.cer. This time, the token issued by the STS will contain the public key of the client embedded in the SAML Subject.

Monday, September 19, 2016

Securing an Apache Kafka broker - part II

In the previous post, we looked at how to configure an Apache Kafka broker to require SSL client authentication. In this post we will add authorization to the example, making sure that only authorized producers can send messages to the broker. In addition, we will show how to enforce authorization rules per-topic for consumers.

1) Configure authorization in the broker

Configure Apache Kafka as per the previous tutorial. To enforce some custom authorization rules in Kafka, we will need to implement the Kafka Authorizer interface. This interface contains an "authorize" method, which is passed a Session object (from which you can obtain the current principal), as well as the Operation and Resource upon which to enforce an authorization decision.

In terms of the example detailed in the previous post, we created broker, service (producer) and client (consumer) principals. We want to enforce authorization decisions as follows:
  • Let the broker principal do anything
  • Let the producer principal read/write on all topics
  • Let the consumer principal read/describe only on topics starting with "test".
There is a sample Authorizer implementation, CustomAuthorizer, available in some Kafka unit tests I wrote on github that can be used in this example.

Next we need to package up the CustomAuthorizer in a jar so that it can be used in the broker. You can do this by checking out the testcases github repo, and invoking "mvn clean package jar:test-jar -DskipTests" in the "apache/bigdata/kafka" directory. Now copy the resulting test jar in "target" to the "libs" directory in your Kafka installation. Finally, edit the "config/server.properties" file and add the following configuration item:
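The configuration item points Kafka at the custom Authorizer implementation, e.g. (the package name shown is illustrative - use the fully qualified name of the CustomAuthorizer class in the test jar):

  # fully qualified name of the custom Authorizer (illustrative)
  authorizer.class.name=com.example.kafka.CustomAuthorizer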
2) Test authorization

Now let's test the authorization logic. Restart the broker and the producer:
  • bin/kafka-server-start.sh config/server.properties
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial. Now we will create a new topic called "messages":
  • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic messages
Restart the producer to send messages to "messages" instead of "test". This should work correctly. Now try to consume from "messages" instead of "test". This should result in an authorization failure, as the "client" principal can only consume from the "test" topic according to the authorization rules.