
JEUS application server: The story continues

In a previous blog post I wrote about JEUS, a Java EE 7 certified application server that is popular in Korea but pretty much unknown elsewhere. In this follow up I'll report about the latest findings and take a look at how to deploy an application to JEUS and do some debugging.

The previous installment was quite a story, but there's more. The US domain that had previously just disappeared briefly flickered into existence again, and it indeed offered the much sought-after JEUS 7 download link: jeus70_linux_x86.bin. In order to get this I once again had to register a new account, and this time the signup process limited my password to 12 characters and did not allow capitals or non-alphanumerics (the 4 other signup processes that I went through did allow these). Unfortunately, after a few hours the site suddenly featured completely different content. It's now largely, but not entirely, identical to the main corporate site, and no longer features the JEUS 7 download link.

The tech site appeared to have a forum "hidden" behind the Q & A link. With some 5 to 10 posts per day it's clear that JEUS is very much alive in Korea. (Google Translate unfortunately stops working after you click on a post, so the forum is best viewed directly using Chrome's auto-translation.)

Just like the entire site, the forum posts are mainly about JEUS 5 (J2EE 1.4, 2003) and JEUS 6 (Java EE 5, 2006). Apparently Java EE 6 (2009) hasn't fully penetrated the Korean market yet, although occasionally Korean users too wonder why JEUS 7 is not out in the open. After I asked on the forum, the kind people from TmaxSoft were friendly enough to upload a JEUS 7 trial version again, this time on the English variant of the tech site (which is rather empty otherwise, but now does feature this unique download).

There appears to be an Eclipse plug-in available after all on the download page, called JMaker, which in the typical ancient-versions trend is of course for Eclipse 3.2 (2006) and supports JEUS 5. Next to the download page there's also a "resource room", which has its own download page with another Eclipse plug-in called the Jeus Bridge. This one supports Eclipse 3.5 (2009), which is slowly creeping towards the modern world, but unfortunately it too only supports JEUS 5. I got in touch with JEUS' lead developer Yoon Kyung Koo and he let me know they're attempting to open up the plug-in, which I take to mean it will eventually appear in the Eclipse marketplace and such.

Downloads

So far I've found the following download links:

| JEUS binary | Size | Initial release | Java EE version | Documentation | Dev preview | Public download | Site |
|---|---|---|---|---|---|---|---|
| jeus50-unix-generic.bin | 265 MB | May 2005 | J2EE 1.4 | JEUS 5 | X | X | Korean tech site |
| jeus50-win.exe | 205 MB | | | | X | X | Korean tech site |
| jeus60-winx86-en.exe | 147 MB | June 2007 | Java EE 5 | JEUS v6.0 Fix8 Online Manual (11/2011) | X | X | Korean tech site |
| jeus60-winx64-ko.exe | 190 MB | | | | X | X | Korean tech site |
| jeus60_unix_generic_ko.bin | 268 MB | | | | X | X | Korean tech site |
| jeus70_linux_x86.bin | 133 MB | June 2012 | Java EE 6 | JEUS v7.0 Fix#1 Korean Online Manual (04/2013); JEUS v7.0 Fix#1 English PDF set (08/2013) | V | X | US corporate site (dead link, no other source) |
| jeus70_unix_generic.bin | 546 MB | | | | X | X | English tech site |
| jeus70_win_x86.exe | 441 MB | | | | X | X | English tech site |
| jeus70_win_x64.exe | 445 MB | | | | X | X | English tech site |
| jeus80_unix_generic_ko.bin | 165 MB | Est. Q4 2014 | Java EE 7 | [no documentation yet] | V | V | Int. corporate site |

 

Deploying to JEUS 8

Just as with GlassFish, WebLogic and Geronimo, among others, it seems it's only possible to deploy an application to JEUS via a running server. The simplicity that JBoss, TomEE and Tomcat offer by simply copying an archive to a special directory is unfortunately not universally embraced by Java EE implementations.

In order to deploy an app to JEUS 8 we first start it again:


./bin/startDomainAdminServer -domain jeus_domain -u administrator -p admin007
After it's started we connect to the running instance via the jeusadmin command:

./bin/jeusadmin -u administrator -p admin007
In my home directory I have a small war that I normally use for quick JASPIC tests. In order to deploy this we first have to install it via the admin console that we just opened:

[DAS]jeus_domain.adminServer>installapp /home/arjan/abc.war
Successfully installed the application [abc_war].
This command will copy the .war to [jeus install dir]/domains/jeus_domain/.uploaded and [jeus install dir]/domains/jeus_domain/.applications. After this kind of pre-staging step we can do the actual deploy:

[DAS]jeus_domain.adminServer>deploy abc_war -all
deploy the application for the application [abc_war] succeeded.
Note that the first argument for the deploy command is "abc_war". This is the application ID that was assigned to the application when it was installed; it defaults to the full archive name with all periods replaced by underscores. More information is available for each command via the help command, e.g. help deploy will list all arguments with a short explanation.

The deploy command will add a new entry to [jeus install dir]/domains/jeus_domain/config/domain.xml:


<deployed-application>
    <id>abc_war</id>
    <path>/home/arjan/jeus8/domains/jeus_domain/.applications/abc_war/abc.war</path>
    <type>WAR</type>
    <targets>
        <all-target/>
    </targets>
    <options>
        <classloading>ISOLATED</classloading>
        <fast-deploy>false</fast-deploy>
        <keep-generated>false</keep-generated>
        <shared>false</shared>
        <security-domain-name>SYSTEM_DOMAIN</security-domain-name>
    </options>
</deployed-application>

Requesting http://localhost:8808/abc/index.jsp?doLogin=true in a browser did load the correct test page, but JASPIC itself didn't work as expected: we should have seen the authenticated user's name and role, but this didn't happen.

This is not entirely unexpected, as most servers need special configuration (typically a vendor specific group-to-role mapping) before JASPIC starts working, and I hadn't added any such configuration for JASPIC yet.

Debugging JEUS 8

In order to find out where things go wrong it's always handy to do a little debugging. Maybe it's not the configuration at all; perhaps the @WebListener that installs the SAM isn't called, or maybe it is, but then the SAM itself isn't called, etc. But without any tooling support, how do we debug an application running on JEUS? Using good old System.out debugging gave me some hints: the SAM is not called for public pages (it should be), but it is indeed called for protected pages. Unfortunately, despite it being called for a protected page, there still wasn't any sign of the authenticated name or roles. Obviously we need some real debugging.

The most straightforward method to do that in this case is remote debugging, which means we need to add a special agentlib parameter when starting up JEUS. [jeus install dir]/bin/startDomainAdminServer seemed like an obvious place, as this starts a Java application that looks to be the server. Unfortunately it appeared to be only a launcher that spawns a new process representing the real server and then exits. After some digging I found out that additional parameters for this "real server" can be specified in the same [jeus install dir]/domains/jeus_domain/config/domain.xml file that we saw before, via the jvm-option element:




<servers>
    <server>
        <name>adminServer</name>
        ...
        <jvm-config>
            <jvm-option>-Xmx1024m -XX:MaxPermSize=128m</jvm-option>
            <jvm-option>-agentlib:jdwp=transport=dt_socket,address=1044,server=y,suspend=y</jvm-option>
        </jvm-config>
        ...
    </server>
</servers>





Starting JEUS 8 again will cause it to halt and wait for a debug connection:


[launcher-1] [Launcher-0012] Starting the server [adminServer] with the command
/opt/jdk1.7.0_40/jre/bin/java -DadminServer -Xmx1024m -XX:MaxPermSize=128m -agentlib:jdwp=transport=dt_socket,address=1044,server=y,suspend=y -server -Xbootclasspath/p:/home/arjan/jeus8/lib/system/extension.jar -classpath /home/arjan/jeus8/lib/system/bootstrap.jar -Djava.security.policy=/home/arjan/jeus8/domains/jeus_domain/config/security/policy -Djava.library.path=/home/arjan/jeus8/lib/system -Djava.endorsed.dirs=/home/arjan/jeus8/lib/endorsed -Djeus.properties.replicate=jeus,sun.rmi,java.util,java.net -Djeus.jvm.version=hotspot -Djava.util.logging.config.file=/home/arjan/jeus8/bin/logging.properties -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.util.logging.manager=jeus.util.logging.JeusLogManager -Djeus.home=/home/arjan/jeus8 -Djava.net.preferIPv4Stack=true -Djeus.tm.checkReg=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Djeus.domain.name=jeus_domain -Djava.naming.factory.initial=jeus.jndi.JNSContextFactory -Djava.naming.factory.url.pkgs=jeus.jndi.jns.url -Djeus.server.protectmode=false -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=/home/arjan/jeus8/domains/jeus_domain/servers/adminServer/logs/jvm.log jeus.server.admin.DomainAdminServerBootstrapper -domain jeus_domain -u administrator -server adminServer .
[2013.10.18 23:36:56][2] [launcher-1] [Launcher-0014] The server[adminServer] is being started ...
Listening for transport dt_socket at address: 1044

After this point we can attach to it via Eclipse. If everything went well, you'll see JEUS 8's threads appearing in the debug view.

Setting a break-point inside the SAM and requesting the protected page again indeed breaks us into the debugger.

Without diving much further into the details, it appears that despite passing the JASPIC TCK, the handler (a JeusCallbackHandler instance) does exactly *nothing*, so a standard JASPIC module can never really work. I confirmed with JEUS' lead developer that this is indeed the case. Remember though that the JEUS version being tested here is just a developer preview, and the final version will most likely implement the required behavior correctly.

Interestingly, after playing around with JEUS' authentication some more, it appeared that JEUS uses JASPIC auth modules to implement the built-in Servlet authentication methods. This is almost certainly also without using the CallbackHandler (as it does absolutely nothing, what would be the point?), so it's not done in a fully JASPIC compliant way, but it's nevertheless really interesting. All other Java EE servers that I looked at don't actually use Java EE authentication as their native authentication system, but just wrap the JASPIC interfaces on top of whatever else they're using natively. JEUS has an exclusive here by basing its authentication directly on JASPIC. If only they can make it fully compliant, JEUS will be one of the most interesting servers for JASPIC out there.

Redeploying

Deploying a single war that's already in your home directory once is one thing, but how do you easily see your changes after editing in your IDE? Unfortunately there's no real easy solution for this without tooling support, but I cooked up the following admittedly very crude script that, when executed from the (Maven) project folder, somewhat gets the job done:


#!/bin/sh

WAR_HOME=/home/arjan/eclipse43ee/workspace/auth/target/

# Rebuild project
mvn package
cp ${WAR_HOME}/auth-0.0.1-SNAPSHOT.war ${WAR_HOME}/auth.war

# Redeploy to JEUS
~/jeus8/bin/jeusadmin -u administrator -p admin007 "undeploy auth_war"
~/jeus8/bin/jeusadmin -u administrator -p admin007 "uninstall auth_war"
~/jeus8/bin/jeusadmin -u administrator -p admin007 "installapp ${WAR_HOME}/auth.war"
~/jeus8/bin/jeusadmin -u administrator -p admin007 "deploy auth_war -all"

Meanwhile I have a tail on the log in another console:


tail -f domains/jeus_domain/servers/adminServer/logs/JeusServer.log

It works, but it's not exactly the experience you'd normally expect when working with an IDE.

Finally, we stop the server again via the following command:


./bin/stopServer -host localhost -u administrator -p admin007

Conclusion

In this article we haven't really tested JEUS extensively; we just went through the steps of finding the download links, installing it (see the previous article), deploying a really small application, and doing some debugging on the Java EE 7 developer preview. I didn't yet test a bigger application such as the OmniFaces showcase, something I still plan to do for JEUS 7 (the production ready Java EE 6 implementation).

By putting up English documentation and downloadable trial versions of JEUS 7, as well as a developer preview of JEUS 8, TmaxSoft has made some small but important steps to get JEUS out of international obscurity. Still, they have some way to go. While we saw that deployment and debugging can be done without tool support, very few developers would consider this a viable option and because of that may not be really eager to try out JEUS.

An important next step for TmaxSoft will be to indeed make the required tooling easily and freely available.

Arjan Tijms


JAAS in Java EE is not the universal standard you may think it is

The Java EE security landscape is complex and fragmented. There is the original Java SE security model, there is JAAS, there is JASPIC, there is JACC and there are the various specs like Servlet, EJB and JAX-RS that each have their own sections on security.

For some reason or another it's too often thought that there's only a single all-encompassing security framework in Java EE, and that the name of that framework is JAAS. This is however not the case. Far from it. Raymond K. NG explained this a long time ago in a series of articles.

In short, Java EE has its own security model and over the years has tried to bridge this to the existing Java SE model, but this has never worked flawlessly. One particular issue that we'll be taking a more in-depth look at in this article concerns the JAAS Subject.

The JAAS Subject represents an entity like a person and is implemented as a so-called "bag of Principals". This means it holds a collection of arbitrary Principals, where a Principal can be anything that identifies the entity, like an email address, telephone number or an identification number. This gives it great flexibility, but since none of those Principals are standardized in any way it's also difficult to work with and open to interpretation. In a way one might almost just as well pass a plain HashMap along, which after all is very flexible too.
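To illustrate the "bag of Principals" idea, here's a minimal sketch of constructing a Subject by hand (the NamedPrincipal class and the principal values are made up for the example):

import java.security.Principal;
import java.util.HashSet;
import java.util.Set;
import javax.security.auth.Subject;

// A trivial Principal: anything with a name can identify the entity
class NamedPrincipal implements Principal {
    private final String name;
    NamedPrincipal(String name) { this.name = name; }
    @Override public String getName() { return name; }
}

Set<Principal> principals = new HashSet<>();
principals.add(new NamedPrincipal("test@example.com"));   // an email address
principals.add(new NamedPrincipal("+31 6 12345678"));     // a telephone number

// A Subject is essentially this bag of principals, plus a set of
// public credentials and a set of private credentials
Subject subject = new Subject(false, principals, new HashSet<>(), new HashSet<>());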

For Java EE most of the flexibility that the Subject offers is lost however. The predominant APIs only deal with 2 specific principals: the caller/user principal and the group/role principal.

An enormous amount of complexity and rather arcane API workarounds have spun around a simple but very unfortunate fact: those two core principals have never been standardized in any way in Java EE. This means that a JAAS login module, which populates a Subject with principals when it commits, can never be used directly with an arbitrary Java EE server, as it has no idea how a particular Java EE server represents the caller/user and group/role principals.

As explained in a previous article JASPIC has introduced a workaround for this, where a so-called CallerPrincipalCallback and GroupPrincipalCallback do represent the data for those two principals in a standardized way. The idea is that a container can read this data from those callbacks in this standardized way and then construct container specific principals which are subsequently stored in a container specific way into the Subject. This works to some degree, but it remains a crude workaround.
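As an illustration, the following shows roughly how a JASPIC auth module (SAM) hands over this data via those callbacks. This is a minimal sketch, not a complete ServerAuthModule; the caller name "test" and group "architect" are simply the example values used elsewhere in these articles:

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;

public class TestAuthModule /* implements javax.security.auth.message.module.ServerAuthModule */ {

    private CallbackHandler handler; // handed to the module by the container in initialize()

    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject, Subject serviceSubject) throws AuthException {
        try {
            // Pass the caller name and groups to the container in the standardized way.
            // The container's handler turns these into its own proprietary Principals
            // and stores those into clientSubject in its own proprietary way.
            handler.handle(new Callback[] {
                new CallerPrincipalCallback(clientSubject, "test"),
                new GroupPrincipalCallback(clientSubject, new String[] { "architect" })
            });
        } catch (Exception e) {
            AuthException authException = new AuthException();
            authException.initCause(e);
            throw authException;
        }
        return AuthStatus.SUCCESS;
    }
}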

The problem is that the JASPIC CallbackHandler to which these Callbacks are passed is under no obligation to just read the data and directly store it into the Subject without any further side effects. In practice this is thus indeed not what happens. Handlers will set global security contexts, call into services to execute all kinds of login logic and what have you. Furthermore, there is nothing available to directly read those Principals back from a given Subject. Indirectly you can get the user/caller principal back via e.g. the Servlet API's HttpServletRequest#getUserPrincipal, but this only works for the currently authenticated user and not for an arbitrary Subject that you may have lying around. Likewise, the role/group principals can be obtained via the JACC API. Unfortunately JACC itself is a rather obscure API that, while being part of Java EE, does not necessarily have to be active at runtime by default, and in fact doesn't even have to work out of the box, since Java EE implementations are not mandated to ship with an actual default JACC provider (they only have to provide the SPI to plug such a provider into the container).

The big question is then of course why these two core principals have never been standardized. Do containers use such different representations that standardization is not feasible, or did it just never occur to anyone in all those years that it might be handy to standardize these?

In order to see if vastly different representations might be the issue I took a look at how actual containers exactly store the Principals inside a Subject. This was primarily done by examining an instance of a Subject via a debugger, but I also took a quick look at some example JAAS modules for a couple of servers. After all, at some point the JAAS module has to add the Principals to the given Subject and the example may indicate how this is done for a specific server.

Examining an instance of a Subject via a debugger


For authentication I used the JASPIC auth module that was introduced in step 5 of the aforementioned previous article and the same test application. In order to retrieve the Subject I used the following JACC code in the Servlet:

Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");

For JBoss EAP 6.2, GlassFish 4 and Geronimo 3.0.1 this worked perfectly. There were however issues with WebLogic 12.1.2, WebSphere 8.5 and JEUS 8.

In WebLogic 12.1.2 JACC is not enabled by default. You have to explicitly enable it by starting the container with a huge string of command line options, including one that references the directory where WebLogic is installed:


./startWebLogic.sh -Djava.security.manager -Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl -Djava.security.policy==/opt/weblogic12.1.2/wls12120/wlserver/server/lib/weblogic.policy

A particularly nasty requirement here is that WebLogic requires the Java SE security manager to be activated. The security manager is used for code level protection, a level of protection that is rarely if ever needed in Java EE, as it's extremely rare for a server to run untrusted code (unlike e.g. a browser running an untrusted Applet from the Internet). Activating the security manager can have a huge performance impact and is typically not recommended for application servers. Yet WebLogic, as the only server among those I tested, for some reason requires this.

After starting WebLogic this way it crashed immediately with an exception about not having access to read some internal file. To quickly remedy this I took the extreme measure of just granting everything to everyone, which kind of defeats the purpose of using the security manager in the first place and more or less proves that WebLogic's security manager requirement is unnecessary. To do this I put the following in the referenced policy file:


grant {
    permission java.security.AllPermission;
};

Needless to say this is for testing only and should not be used in a production environment where code level security really is required. After this workaround WebLogic booted, but then started to throw out of memory exceptions. These appeared to be caused by a lack of memory in the permanent generation. After I boosted this from 256MB to 512MB WebLogic was stable again.

WebSphere 8.5 proved to be even worse. Here too JACC is not enabled by default, but it can be enabled via the admin console in the security section. However, WebSphere does not seem to ship with any default JACC provider. There's only a provider available that's a client for something called the Tivoli Access Manager, which after some research appeared to be some kind of authorization server that needs to be installed separately. For a moment I pondered whether I should try to install it, but the WebSphere installation had already taken a horrendous amount of time and it would be immense overkill to run an external authorization server just to get the thread local Subject inside a Servlet.

"Luckily" there's also an IBM proprietary way to get the Subject, so I used that instead:


Subject subject = com.ibm.websphere.security.auth.WSSubject.getCallerSubject();

Finally, in JEUS 8 too, JACC is not enabled by default. It can be enabled by commenting out or removing the existing repository-service and custom-authorization-service elements in the authorization section of [jeus install dir]/domains/jeus_domain/config/domain.xml and adding the empty jacc-service element. It then becomes just this:


<authorization>
    <jacc-service/>
</authorization>

Although there's no security manager used here, and thus no overhead from that, TmaxSoft warns in its JEUS 7 security manual that the default JACC provider activated by the above configuration is mainly for testing, and advises against using it in production. Curiously though, I found after some experimentation that the non-JACC default authorization provider in JEUS is pretty much just like JACC, but with some small differences. This may be an interesting thing to take a deeper look at in a future article.

Additionally JEUS 8 also offered a proprietary way to get hold of the Subject when JACC is not enabled:


Subject subject = jeus.security.impl.login.CommonLoginService.doGetCurrentSubject().toJAASSubject();

Sadly, JEUS 8 is the only application server tested that, despite being Java EE 7 certified, doesn't implement JASPIC in such a way that it actually works: an authentication module can be installed and is correctly called for each request, but the container then just does nothing with the user/caller and group/roles that are passed to the handler. Because of this a native login module had to be used for JEUS 8 (the default file based username/password module).

The following shows the result of inspecting the Subject on every server:

JBoss EAP 6.2


Principals
    org.jboss.security.SimpleGroup (name=CallerPrincipal)
        members
            org.jboss.security.SimplePrincipal (name=test)

    org.jboss.security.SimpleGroup (name=Roles)
        members
            org.jboss.security.SimplePrincipal (name=architect)

GlassFish 4.0


Principals
    org.glassfish.security.common.PrincipalImpl (name = test)
    org.glassfish.security.common.Group (name = architect)

Geronimo 3.0.1


Principals
    org.apache.geronimo.security.realm.providers.GeronimoUserPrincipal (name="test")
    org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal (name="architect")
    org.apache.geronimo.security.IdentificationPrincipal
        (name = org.apache.geronimo.security.IdentificationPrincipal[[1392054106031:0x0f9d4c68befbfbee189e0ca0dadd3757d577da8b]],
         id = SubjectId (name = [1392054106031:0x0f9d4c68befbfbee189e0ca0dadd3757d577da8b], subjectId = 1392054106031))

WebLogic 12.1.2


Principals
    weblogic.security.principal.WLSUserImpl (name = test)
    weblogic.security.principal.WLSGroupImpl (name = architect)

WebSphere 8.5


Principals
    com.ibm.ws.security.common.auth.WSPrincipalImpl (username = test, fullname = defaultWIMFileBasedRealm/test)

PublicCredentials
    com.ibm.ws.security.auth.WSCredentialImpl
        (accessId = user:defaultWIMFileBasedRealm/uid=test,o=defaultWIMFileBasedRealm,
         groupIds = [group:defaultWIMFileBasedRealm/cn=architect,o=defaultWIMFileBasedRealm],
         realmuniqueusername = defaultWIMFileBasedRealm/uid=test,o=defaultWIMFileBasedRealm,
         uniqueusername = uid=test,o=defaultWIMFileBasedRealm,
         username = test)

JEUS 8


Principals
    jeus.security.resource.PrincipalImpl (name = test)
    jeus.security.resource.GroupPrincipalImpl
        (name = architect,
         description = ,
         individuals = {Principal test=Principal test} (Hashtable of jeus.security.resource.PrincipalImpl to jeus.security.resource.PrincipalImpl),
         subgroups = [] (empty Vector))
    jeus.security.base.Subject$SerializedSubjectPrincipal (bytes = …)

PrivateCredentials
    jeus.security.resource.Password (algorithm = "base64", password = "YWRtaW4wMDc=", plainPassword = "admin007")
    jeus.security.resource.SystemPassword (algorithm = null, password = "globalpass", plainPassword = "globalpass")

So there you have it; 6 different servers implementing pretty much the same thing in 6 different ways.

What we see is that there are several methods to distinguish between a Principal that represents a user/caller name and one that represents a group/role name. JBoss EAP 6.2, as the only tested server, uses a named Group for this: the name of the Group indicates the type of the Principal. Not entirely consistently, this is "CallerPrincipal" for the user/caller Principal and "Roles" for the Group/Roles Principals. Pretty much every other server uses the class type for this distinction. Of course every server uses its own type. Things like org.glassfish.security.common.PrincipalImpl, org.apache.geronimo.security.realm.providers.GeronimoUserPrincipal, weblogic.security.principal.WLSUserImpl and jeus.security.resource.PrincipalImpl are completely identical: a simple Principal implementation with a single String attribute called "name".
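To make the fragmentation concrete, below is a sketch of what code that merely wants the caller principal would have to resort to in order to work on just the servers tested here (the class names are the ones observed in the dumps above; JBoss additionally requires looking inside the group named "CallerPrincipal"):

import java.security.Principal;
import java.security.acl.Group;
import java.util.Enumeration;
import javax.security.auth.Subject;

public final class CallerPrincipalExtractor {

    // Caller principal class names as observed in the Subject dumps above
    private static final String[] CALLER_TYPES = {
        "org.glassfish.security.common.PrincipalImpl",                         // GlassFish
        "org.apache.geronimo.security.realm.providers.GeronimoUserPrincipal",  // Geronimo
        "weblogic.security.principal.WLSUserImpl",                             // WebLogic
        "com.ibm.ws.security.common.auth.WSPrincipalImpl",                     // WebSphere
        "jeus.security.resource.PrincipalImpl"                                 // JEUS
    };

    public static Principal getCallerPrincipal(Subject subject) {
        for (Principal principal : subject.getPrincipals()) {
            // JBoss: the caller principal is the member of the group named "CallerPrincipal"
            if (principal instanceof Group && "CallerPrincipal".equals(principal.getName())) {
                Enumeration<? extends Principal> members = ((Group) principal).members();
                if (members.hasMoreElements()) {
                    return members.nextElement();
                }
            }

            // All other tested servers: distinguish by the vendor specific class name
            for (String callerType : CALLER_TYPES) {
                if (callerType.equals(principal.getClass().getName())) {
                    return principal;
                }
            }
        }
        return null;
    }
}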

WebSphere 8.5 does do things a little differently. It only models the user/caller name as a Principal, and when doing so for some reason needs an additional "fullname" attribute, but otherwise this is still pretty much the same thing as what the other servers use. Things are more different when it comes to the group/roles. As the only server tested, WebSphere doesn't put these in the Principals collection, but uses the public credentials one for this. This collection is of the general type Object, and thus can hold other things than just Principals. It can be argued whether this is more correct or not. Principals are supposed to identify an entity, but does a group/role identify an entity, or is it more a security related attribute? At any rate, WebSphere is the only one that thinks the latter.

We also see that two servers store extra internal data in the Subject. Geronimo for some reason needs an org.apache.geronimo.security.IdentificationPrincipal, while JEUS needs an extra jeus.security.base.Subject$SerializedSubjectPrincipal. In the case of JEUS the extra Principal seems to be used to convert from its own proprietary Subject type to the standard JAAS Subject type and back again. It looks like JEUS is the only server that thinks it absolutely needs its very own Subject type. From a quick glance at the Geronimo code it wasn't clear why Geronimo needs the IdentificationPrincipal, but I'm sure there is a solid reason for it.

Finally, the group/role Principal in JEUS 8 is special in that it holds a reference to the collection of all individuals who are in that group/role. It may be that this specific linkage is only done when using a small built-in user repository like the local file based one. In the case of an external provider like Facebook it would of course not be feasible. Also note that the credentials are there in JEUS 8 because we actually presented a password when we used the server provided login module. In all other cases we used JASPIC and didn't provide a password.

Looking at example JAAS login modules

As mentioned above, JAAS example login modules are another source that's worth looking at. Specifically interesting is the commit method, where the module is supposed to do the transfer from the user/caller name and group/role names into the Subject.

Geronimo 3.0.1 comes with several example modules. Of those the commit methods are conceptually identical. Given below is the method from the PropertiesFileLoginModule:


public boolean commit() throws LoginException {
    if (loginSucceeded) {
        if (username != null) {
            allPrincipals.add(new GeronimoUserPrincipal(username));
        }
        for (Map.Entry<String, Set<String>> entry : groups.entrySet()) {
            String groupName = entry.getKey();
            Set<String> users = entry.getValue();
            for (String user : users) {
                if (username.equals(user)) {
                    allPrincipals.add(new GeronimoGroupPrincipal(groupName));
                    break;
                }
            }
        }
        subject.getPrincipals().addAll(allPrincipals);
    }
    // Clear out the private state
    username = null;
    password = null;

    return loginSucceeded;
}

In the security manual of JEUS 7 a couple of JAAS login modules are given as well. Given below is the commit method for the DBRealmLoginModule (there is an English version available as PDF, but it's often moved around, requires a login, and is thus unfortunately very hard to link to):


public boolean commit() throws LoginException {
    if (succeeded == false) {
        return false;
    } else {
        userPrincipal = new PrincipalImpl(username);
        if (!subject.getPrincipals().contains(userPrincipal))
            subject.getPrincipals().add(userPrincipal);

        ArrayList roles = getRoleSets();
        for (Iterator i = roles.iterator(); i.hasNext();) {
            String roleName = (String) i.next();
            logger.debug("Adding role to subject : username = " + username +
                    ", roleName = " + roleName);
            subject.getPrincipals().add(new RolePrincipalImpl(roleName));
        }

        userCredential = new Password(password);
        subject.getPrivateCredentials().add(userCredential);

        username = null;
        password = null;
        domain = null;
        commitSucceeded = true;
        return true;
    }
}

JBoss ships with a number of login modules, which are essentially JAAS login modules as well. They have abstracted the common commit method to the base class AbstractServerLoginModule, which is shown below:


public boolean commit() throws LoginException {
    PicketBoxLogger.LOGGER.traceBeginCommit(loginOk);
    if (loginOk == false)
        return false;

    Set<Principal> principals = subject.getPrincipals();
    Principal identity = getIdentity();
    principals.add(identity);

    // add role groups returned by getRoleSets.
    Group[] roleSets = getRoleSets();
    for (int g = 0; g < roleSets.length; g++) {
        Group group = roleSets[g];
        String name = group.getName();
        Group subjectGroup = createGroup(name, principals);
        if (subjectGroup instanceof NestableGroup) {
            /*
             * A NestableGroup only allows Groups to be added to it so we need to add a SimpleGroup to subjectRoles to contain the roles
             */
            SimpleGroup tmp = new SimpleGroup("Roles");
            subjectGroup.addMember(tmp);
            subjectGroup = tmp;
        }
        // Copy the group members to the Subject group
        Enumeration<? extends Principal> members = group.members();
        while (members.hasMoreElements()) {
            Principal role = (Principal) members.nextElement();
            subjectGroup.addMember(role);
        }
    }

    // add the CallerPrincipal group if none has been added in getRoleSets
    Group callerGroup = getCallerPrincipalGroup(principals);
    if (callerGroup == null) {
        callerGroup = new SimpleGroup(SecurityConstants.CALLER_PRINCIPAL_GROUP);
        callerGroup.addMember(identity);
        principals.add(callerGroup);
    }

    return true;
}

As can be seen, the JAAS modules all use the server specific way to store the Principals inside the Subject, and this is done in pretty much the same way as the JASPIC container code does it. A remarkable difference is that the JBoss JAAS module inserts the caller/user Principal twice: once directly in the root of the principals set and once in a named group, while the JASPIC auth code only used the named group. In the example DBRealmLoginModule of JEUS there's no trace of a reference to all users in a group/role. So this is either just an extra thing, or perhaps JEUS adds this information to the Subject after the JAAS login module has committed. For this article I did not investigate that further.

Conclusion

We looked at a relatively large number of servers here, nearly all the ones that implement the full Java EE profile (and thus support both JASPIC and JACC where the Subject type comes into play). There are more servers out there like Tomcat, TomEE, Jetty and Resin, but I did not look into them. From previous experience I know that Tomcat doesn't use the Subject type internally and only has support to read from a Subject via its JAAS realm. There are a small number of rather obscure other full Java EE implementations like the Hitachi and NEC offerings which I might investigate in a future article.

For the servers tested it seems that most could simply use two standardized Principals for the caller/user and group/role principal, e.g.:

  • javax.security.CallerPrincipal
  • javax.security.GroupPrincipal

Each principal would only need a single attribute, name, of type String. GlassFish, Geronimo and WebLogic could use such types as a direct replacement. JBoss would have to switch from looking at a group node to the class type of the Principal. JEUS could directly use the standard type for the caller/user principal, but may have to stuff the extra information it now puts into the group/role principal somewhere else (assuming that it really needs this info and we did not just observe some coincidental side effect of the particular login module that we used). The only server that really does things differently is WebSphere. Its caller/user principal is mostly equivalent to those of the other servers, but the group/role one is radically different.

To accommodate the extra info a few servers may need to store in the Principal, it might be feasible to define an extra Map<String, Object> attribute on the standardized Principals where this extra server specific info could be stored.
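Such standardized types could be almost trivially simple. The sketch below uses the hypothetical names suggested above (a GroupPrincipal would look identical), including the optional attributes map just mentioned:

package javax.security;

import java.security.Principal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical standardized caller principal: a single String name,
// plus an optional map for any extra server specific information.
public class CallerPrincipal implements Principal {

    private final String name;
    private final Map<String, Object> attributes = new HashMap<>();

    public CallerPrincipal(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return name;
    }

    public Map<String, Object> getAttributes() {
        return attributes;
    }
}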

All in all it thus seems this standardization should be possible without too many problems. The introduction of these two small types would likely simplify the far too abstract and complex Java EE security landscape a lot. The question remains though why this wasn't done long ago.

Arjan Tijms

Implementing container authorization in Java EE with JACC

A while back we looked at how container authentication is done in Java EE by using the JASPIC API. In this article we'll take a look at its authorization counterpart; JACC/JSR 115.

JACC, which stands for Java Authorization Contract for Containers (and for some reason also for Java Authorization Service Provider Contract for Containers), is a specification that according to the official Java EE documentation "defines a contract between a Java EE application server and an authorization policy provider" and which "defines java.security.Permission classes that satisfy the Java EE authorization model".

 

Public opinion

While JASPIC was only added to Java EE as late as Java EE 6, JACC has been part of Java EE since the dark old days of J2EE 1.4. Developers should thus have had plenty of time to get accustomed to JACC, but unfortunately this doesn't quite seem to be the case. While preparing for this article I talked to a few rather advanced Java EE developers (as in, those who literally wrote the book, work or have worked on implementing various aspects of Java EE, etc). A few of their responses:

  • "I really have no idea whatsoever what JACC is supposed to do"
  • (After telling I'm working on a JACC article) "Wow! You really have guts!"
  • "[the situation surrounding JACC] really made me cry"
Online there are relatively few resources to be found about JACC. One that I did found said the following:
  • " [...] the PolicyConfiguration's (atrocious) contract [...] "
  • "More ammunition for my case that this particular Java EE contract is not worth the paper it's printed on."

 

What does JACC do?

The negativity aside, the prevailing sentiment seems to be that it's just not clear what JACC brings to the table. While JASPIC is not widely used either, people do tend to easily get what JASPIC primarily does: provide a standardized API for authentication modules, where those authentication modules are things that check credentials against LDAP servers, a database, a local file, etc, and where those credentials are requested via mechanisms like an HTML form or HTTP BASIC.

But what does a "contract between an AS and an authorization policy provider" actually mean? In other words, what does JACC do?

In a very practical sense JACC offers the following things:

  • Contextual objects: A number of convenient objects bound to thread local storage that can be obtained from "anywhere" like the current http request and the current Subject.
  • Authorization queries: A way for application code to ask the container authorization related things like: "What roles does the current user have?" and "Will this user have access to this URL?"
  • A hook into the authorization process: A way to register your own class that will:
    • receive all authorization constraints that have been put in web.xml, ejb-jar.xml and corresponding annotations
    • be consulted when the container makes an authorization decision, e.g. to determine if a user/caller has access to a URL
We'll take a more in-depth look at each of those things below.

 

Contextual objects

Just as the Servlet spec makes a number of (contextual) objects available as request attributes, and JSF does something similar via EL implicit objects, so does JACC make a number of objects available.

While those objects seem to be primarily intended to be consumed by the special JACC policy providers (see below), they are in fact specified to work in the context of a "dispatched call" (the invocation of an actual Servlet + Filters or an EJB + Interceptors). As such they typically do work when called from user code inside e.g. a Servlet, but the spec could be a bit stricter here and specifically mandate this.

Contrary to Servlet's request/session/application attributes, you don't store instances of objects directly in the JACC policy context. Instead, just as can optionally be done with JNDI, you store for each key a factory that knows how to produce an object corresponding to that key. E.g. in order to put a value "bar" in the context using the key "foo" you'd use the following code:


final String key = "foo";

PolicyContext.registerHandler(key,
    new PolicyContextHandler() {
        @Override
        public Object getContext(String key, Object data) throws PolicyContextException {
            return "bar";
        }

        @Override
        public boolean supports(String handlerKey) throws PolicyContextException {
            // Compare against the outer "key"; don't shadow it with the parameter,
            // otherwise this method would trivially always return true
            return key.equals(handlerKey);
        }

        @Override
        public String[] getKeys() throws PolicyContextException {
            return new String[] { key };
        }
    },
    true
);

// result will be "bar"
String result = PolicyContext.getContext("foo");
For general usage it's a bit unwieldy that the factory (called handler) must know its own key(s). Most containers first get the factory by looking it up in a table using the key it was registered with, and then ask the factory if it indeed supports that key. Since a factory has to be registered individually for each key it supports anyway, it's debatable whether the responsibility for ensuring the correct keys are used shouldn't be with the one doing the registration. That way the factory interface could have been made a bit simpler.

By default the following keys are defined:

  • javax.security.auth.Subject.container - Returns the current Subject or null if user not authenticated.
  • javax.servlet.http.HttpServletRequest - Returns the current request when requested from a Servlet.
  • javax.xml.soap.SOAPMessage - Returns the SOAP message when requested from an EJB when JAX-WS is used.
  • javax.ejb.EnterpriseBean - Returns the EJB 1.0 "EnterpriseBean" interface. It's best forgotten that this exists.
  • javax.ejb.arguments - Returns the method arguments of the last call to an EJB that's on the stack.
  • java.security.Policy.supportsReuse - Indicates that a container (server) supports caching of an authorization outcome.

Getting the Subject is especially crucial here, since it's pretty much the only way in Java EE to get a hold of this, and we need this for our authorization queries (see below).

For general usage the other objects are mostly of value to actual policy providers, as in e.g. JSF there are already ways to get the current request from TLS. In practice however, semi-proprietary JAAS login modules quite often use the PolicyContext to get access to the HTTP request. It should be noted that it's not entirely clear what the "current" request is, especially when servlet filters are used that do request wrapping. Doing an authorization query first (see below) may force the container to set the current request to the one that's really current.
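For instance, getting the "current" request from anywhere during a dispatched call is a one-liner (assuming the container has indeed set it, as it should during Servlet dispatches):

HttpServletRequest request = (HttpServletRequest) PolicyContext.getContext("javax.servlet.http.HttpServletRequest");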

 

Authorization queries

JACC provides an API and a means to ask the container several authorization related things. Just like JASPIC, JACC provides an API that's rather abstract and seems infinitely flexible. Unfortunately this also means it's infinitely difficult for users to find out how to perform certain common tasks. So while JACC enables us to ask the aforementioned things like "What roles does the current user have?" and "Will this user have access to this URL?", there aren't any convenience methods such as "List<String> getAllUserRoles();" or "boolean hasAccess(String url);". Below we'll show how these things are done in JACC.

Get all user roles

We'll start with obtaining the Subject instance corresponding to the current user. For simplicity we assume the user is indeed logged-in here.

Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
After that we'll get the so-called permission collection from the container that corresponds to this Subject:

PermissionCollection permissionCollection = Policy.getPolicy().getPermissions(
    new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null),
        null, null,
        subject.getPrincipals().toArray(new Principal[subject.getPrincipals().size()])
    )
);
Most types shown here originate from the original Java SE security system (pre-JAAS even). Their usage is relatively rare in Java EE so we'll give a quick primer here. A more thorough explanation can be found in books like e.g. this one.


CodeSource

A code source indicates from which URL a class was loaded and with which certificates (if any) it was signed. This class was used in the original security model to protect your system from classes that were loaded from websites on the Internet; i.e. when Applets were the main focus area of Java. The roles associated with a user in Java EE do not depend in any way on the location from which the class asking this was loaded (they are so-called Principal-based permissions), so we provide a null for both the URL and the certificates here.

ProtectionDomain

A protection domain is primarily a grouping of a code source, and a set of static permissions that apply for the given Principals. In the original Java SE security model this type was introduced to be associated with a collection of classes, where each class part of the domain holds a reference to this type. In this case we're not using the protection domain in that way, but merely use it as input to ask which permissions the current Subject has. As such, the code source, static permissions and class loader (third parameter) are totally irrelevant. The only thing that matters is the Subject, and specifically its principals.

The reason why we pass in a code source with all its fields set to null instead of just a null directly is that the getPermissions() method of well known Policy implementations like the PolicyFile from the Oracle JDK will call methods on it without checking if the entire thing isn't null. The bulky code to transform the Principals into an array is an unfortunate mismatch between the design of the Subject class (which uses a Set) and the ProtectionDomain (which uses a native array). It's extra unfortunate since the ProtectionDomain's constructor specifies it will copy the array again, so two copies of our original Set will be made here.

Policy

A policy is the main mechanism in Java SE that originally encapsulated the mapping between code sources and a global set of permissions that were given to that code source. Only much later in Java 1.4 was the ability added to get the permissions for a protection domain. Its JavaDoc has the somewhat disheartening warning that applications should not actually call the method. In case of JACC we can probably ignore this warning (application server implementations call this method too).

PermissionCollection

At first glance a PermissionCollection is exactly what its name implies (no pun intended): a collection of permissions. So why does the JDK have a special type for this and doesn't it just use a Collection<Permission> or a List<Permission>? Maybe part of the answer is that PermissionCollection was created before the Collection Framework in Java was introduced.

But this may be only part of the answer. A PermissionCollection and its most important subclass Permissions make a distinction between homogeneous and heterogeneous Permissions. Adding an arbitrary Permission to a Permissions instance is not supposed to just add it in sequence, but to add it internally to a special "bucket". This bucket is another PermissionCollection that stores permissions of the same type; it's typically implemented as a Class-to-PermissionCollection Map. This somewhat complex mechanism is used to optimize checking for a permission: iterating over every individual permission would not be ideal, and we should at least be able to go straight to the right type of permission. E.g. when checking for permission to access a file, it's useless to ask every socket permission whether we have access to that file.
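A small sketch demonstrating this bucketing behavior (the file and socket permissions are just arbitrary examples):

Permissions permissions = new Permissions();

// Each add() internally files the permission into a bucket per permission class
permissions.add(new FilePermission("/tmp/-", "read"));
permissions.add(new SocketPermission("localhost:8080", "connect"));

// implies() only consults the FilePermission bucket; the socket permission
// is never even looked at for this check
boolean canRead = permissions.implies(new FilePermission("/tmp/test.txt", "read")); // true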


After this we call the implies() method on the collection:


permissionCollection.implies(new WebRoleRefPermission("", "nothing"));
This is a small trick, a hack if you will, to get rid of a special type of permission that might be in the collection: the UnresolvedPermission. This is a special type of permission that may be used when permissions are read from a file. Such a file typically contains the fully qualified name of a class that represents a specific permission. If this class hasn't been loaded yet, or has been loaded by another class loader than the one from which the file is read, an UnresolvedPermission will be created that just contains this fully qualified class name as a String. The implies() method checks if the given permission is implied by the collection and therefore forces the actual loading of at least the WebRoleRefPermission class. This class is the standard permission type that corresponds to the non-standard representation of the group/roles inside the collection of principals that we're after.

Finally we iterate over the permission collection and collect all role names from the WebRoleRefPermission:


Set<String> roles = new HashSet<>();
// list() is java.util.Collections#list; "request" is the current HttpServletRequest
for (Permission permission : list(permissionCollection.elements())) {
    if (permission instanceof WebRoleRefPermission) {
        String role = permission.getActions();

        if (!roles.contains(role) && request.isUserInRole(role)) {
            roles.add(role);
        }
    }
}
A thing to note here is that there's no such thing as a WebRolePermission, only a WebRoleRefPermission. In the Servlet spec a role ref is the thing you use when, inside a specific Servlet, a role name is used that's different from the role names in the rest of your application. In theory this could be handy for secured Servlets from a library that you include in your application. Role refs are fully optional and when you don't use them you can simply use the application wide role names directly.
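For reference, this is what such a role ref looks like in web.xml (a minimal made-up example, where the name "manager" used inside servlet1 links to the application wide role "architect"):

<servlet>
    <servlet-name>servlet1</servlet-name>
    <servlet-class>com.example.Servlet1</servlet-class>
    <security-role-ref>
        <role-name>manager</role-name>
        <role-link>architect</role-link>
    </security-role-ref>
</servlet>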

In JACC however there are only role refs. When a role ref is not explicitly defined, they are simply defaulted to the application role names. Since a role ref is per servlet, the number of WebRoleRefPermission instances that will be created is *at least* the number of roles in the application plus one (for the '**' role), times the number of servlets in the application plus three (for the default servlet, the JSP servlet, and the so-called unmapped context). So given an application with two roles "foo" and "bar" and two Servlets named "servlet1" and "servlet2", the WebRoleRefPermission instances that will be created are as follows:

  1. servlet1 - foo
  2. servlet1 - bar
  3. servlet1 - **
  4. servlet2 - foo
  5. servlet2 - bar
  6. servlet2 - **
  7. default - foo
  8. default - bar
  9. default - **
  10. jsp - foo
  11. jsp - bar
  12. jsp - **
  13. "" - foo
  14. "" - bar
  15. "" - **
In order to filter out the duplicates the above code uses a Set and not e.g. a List. Additionally, to filter out any role refs other than those for the current Servlet from which we're calling the code, we do the request.isUserInRole(role) check. Alternatively we could have checked the name attribute of each WebRoleRefPermission, as that corresponds to the name of the current Servlet, which can be obtained via GenericServlet#getServletName. If we're sure that there aren't any role references being used in the application, or if we explicitly want all global roles, the following code can be used instead of the last fragment given above:

Set<String> roles = new HashSet<>();
for (Permission permission : list(permissionCollection.elements())) {
    if (permission instanceof WebRoleRefPermission && permission.getName().isEmpty()) {
        roles.add(permission.getActions());
    }
}

Typically JACC providers will create the total list of WebRoleRefPermission instances when an application is deployed and then return a sub-selection based on the Principals that we (indirectly) passed in our call to Policy#getPermissions. This however requires that all roles are statically and upfront declared. But a JASPIC auth module can dynamically return any amount of roles to the container and via HttpServletRequest#isUserInRole() an application can dynamically query for any such role without anything needing to be declared. Unfortunately such dynamic role usage typically doesn't work when JACC is used (the Java EE specification also forbids this, but on servers like JBoss it works anyway).

All in all the above shown query needs a lot of code and a lot of useless types and parameters for which nulls have to be passed. This could have been a lot simpler by any of the following means:

  • Availability of a "List<String> getAllUserRoles();" method
  • Standardization of the group/role Principals inside a subject (see JAAS in Java EE is not the universal standard you may think it is)
  • A convenience method to get permissions based on just a Subject or Principal collection, e.g. PermissionCollection getPermissions(Subject subject); or PermissionCollection getPermissions(Collection<Principals> principals);

This same technique with slightly different code is also explained here: Using JACC to determine a caller's roles

Has access

Asking whether a user has permission to access a given resource (e.g. a Servlet) luckily requires a bit less code:

Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");

boolean hasAccess = Policy.getPolicy().implies(
    new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null),
        null, null,
        subject.getPrincipals().toArray(new Principal[subject.getPrincipals().size()])
    ),
    new WebResourcePermission("/protected/Servlet", "GET")
);
We first get the Subject and create a ProtectionDomain in the same way as we did before. This time around we don't need to get the permission collection, but can make use of a small shortcut in the API. Calling implies on the Policy instance effectively invokes it on the permission collection that this instance maintains. Besides being ever so slightly shorter in code, it's presumably more efficient as well.

The second parameter that we pass in is the actual query; via a WebResourcePermission instance we can ask whether the resource "/protected/Servlet" can be accessed via a GET request. Both parameters support patterns and wildcards (see the JavaDoc). It's important to note that a WebResourcePermission only checks permission for the resource name and the HTTP method. There's a third aspect for checking access to a resource and that boils down to the URI scheme that's used (http vs https) and which corresponds to the <transport-guarantee> in a <user-data-constraint> element in web.xml. In JACC this information is NOT included in the WebResourcePermission, but a separate permission has been invented for that; the WebUserDataPermission, which can be individually queried.
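Querying that third aspect separately would look roughly like this (a sketch analogous to the query above; since a WebUserDataPermission check is independent of the caller, an empty principal array suffices here):

boolean transportOk = Policy.getPolicy().implies(
    new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null),
        null, null,
        new Principal[] {}
    ),
    new WebUserDataPermission("/protected/Servlet", "GET")
);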

Although this query requires less code than the previous one, it's probably still more verbose than many users would want to cope with. Creating the ProtectionDomain remains a cumbersome affair, and getting the Subject via a String that needs to be remembered, plus the required cast, unfortunately all add to the feeling of JACC being an arcane API. Here too, things could be made a lot simpler by:

  • Availability of "boolean hasAccess(String resource);" and "boolean hasAccess(String resource, String method);" methods
  • A convenience method to do a permission check based on the current user, e.g. boolean hasUserPermission(Permission permission);
  • A convenience method to do a permission check based on just a Subject or Principal collection, e.g. boolean hasPermission(Subject subject, Permission permission); or boolean hasPermission(Collection principals, Permission permission);
As it happens, several JACC implementations in fact internally have methods quite like these.

 

A hook into the authorization process

The most central feature of JACC is that it allows a class to be registered that hooks into the authorization process. "A class" is actually not entirely correct, as typically 3 distinct classes are needed: a factory (there's of course always a factory), a "configuration" and the actual class (the "policy") that's called by the container (together: the provider classes).

The factory and the policy have to be set individually via system properties (e.g. -D on the commandline). The configuration doesn't have to be set this way since it's created by the factory.

The following table gives an overview of these:

| Class | System property | Description | Origin |
|---|---|---|---|
| javax.security.jacc.PolicyConfigurationFactory | javax.security.jacc.PolicyConfigurationFactory.provider | Creates and stores instances of PolicyConfiguration | JACC |
| javax.security.jacc.PolicyConfiguration | - | Receives Permission instances corresponding to authorization constraints and can configure a Policy instance | JACC |
| java.security.Policy | javax.security.jacc.policy.provider | Called by the container for authorization decisions | Java SE |
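Registering a custom provider thus comes down to passing something like the following when starting the server (the com.example class names are of course hypothetical):

-Djavax.security.jacc.policy.provider=com.example.MyPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=com.example.MyPolicyConfigurationFactory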

 

Complexity

While JACC is not the only specification that requires the registration of a factory (e.g. JSF has a similar requirement for setting a custom external context), it's perhaps debatable whether the requirement to implement 3 classes and set 2 system properties isn't yet another reason why JACC is perceived as arcane. Indeed, when looking at a typical implementation of the PolicyConfigurationFactory it doesn't seem to do anything that the container wouldn't be able to do itself.

Likewise, the need to have a separate "configuration" and policy object isn't entirely clear either. Although just an extra class, such a seemingly simple requirement does greatly add to the (perceived) complexity. This is extra problematic in the case of JACC, since the "configuration" has an extremely heavyweight interface, with an associated requirement to implement a state machine that controls the life cycle in which its methods can be called. Reading it one can only wonder why the container doesn't implement this required state machine itself and just lets the user implement the bare essentials. And if this all isn't enough, the spec also hints at users having to make sure their "configuration" implementation is thread safe, again a task one would think a container would be able to take care of.

Of course we have to realize that JACC originates from J2EE 1.4, where a simple "hello, world" EJB 2 required implementing many (for most users) useless container interfaces, the use of custom tools and verbose registrations in XML deployment descriptors, and where a custom component in JSF required the implementation and registration of many such moving parts as well. Eventually the EJB bean was simplified to just a single POJO with a single annotation, and while the JSF custom component didn't become a POJO, it did become a single class with a single annotation as well.

JACC had some maintenance revisions, but never got a really major revision for ease of use. It thus still strongly reflects the J2EE 1.4 era thinking where everything had to be ultimately flexible, abstract and under the control of the user. Seemingly admirable goals, but unfortunately in practice leading to a level of complexity very few users are willing to cope with.

Registration

The fact that the mentioned provider classes can only be registered via system properties is problematic as well. In a way it's maybe better than having no standardized registration method at all and leaving it completely up to the discretion of an application server vendor, but having it as the only option means the JACC provider classes can only be registered globally for the entire server. In particular it thus means that all applications on a server have to share the same JACC provider, even though authorization is often quite specific to an individual application.

At any rate, not having a way to register the required classes from within an application archive seriously impedes testing, makes tutorials and example applications harder, and doesn't really work well with cloud deployments either. Interestingly, JASPIC, which was released much later, ditched the system properties method again and left a server wide registration method unspecified, but did specify a way to register its artifacts from within an application archive.
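
For comparison, the sketch below shows the JASPIC style of registration from within an application archive. AuthConfigFactory and its registerConfigProvider method are the real JASPIC API; TestAuthConfigProvider is a hypothetical provider class (imports omitted for brevity):

@WebListener
public class JaspicInitializer implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Registers an authentication artifact from within the application;
        // JACC has no equivalent of this
        AuthConfigFactory.getFactory().registerConfigProvider(
            new TestAuthConfigProvider(), // hypothetical provider class
            "HttpServlet", null, "Test authentication config provider");
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
    }
}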

Default implementation

Arguably one of the biggest issues with JACC is that the spec doesn't mandate a default implementation of a JACC provider to be present. This has two very serious consequences. First of all users wanting to use JACC for authorization queries can not just do this, since there might not be a default JACC provider available for their server, and if there is it might not be activated.

Maybe even worse is that users wanting to provide extended authorization behavior have to implement the entire default behavior from scratch. This is bad. Really bad. Even JSF 1.0, which definitely had its own share of issues, allowed users to provide extensions by overriding only what they wanted to be different from what a default implementation provided. JACC is complex enough as it is; not providing a default implementation makes the complexity barrier go through the roof.

Role mapping

While we have seen some pretty big issues with JACC already, by far the biggest issue, and a complete hindrance to any form of portability of JACC policy providers, is the fact that there's a huge gaping hole in the specification: the crucial concept of "role mapping" is not specified. The spec mentions it, but then says it's the provider's responsibility.

The problem is twofold; many containers (but not all) have a mechanism in place where the role names that are returned by a (JASPIC) authentication module (typically called "groups" at this point) are mapped to the role names that are used in the application. This can be a one to one mapping, e.g. "admin" is mapped to "administrator", but it can also be a many to many mapping. For example when "admin" and "super" both map to "super-users", but "admin" is also mapped to "administrator". JACC has no standard API available to access this mapping, yet it can't work without this knowledge if the application server indeed uses it.

In practice JACC providers thus have to be paired with a factory to obtain a custom role mapper. This role mapper has to be implemented again for every application server that uses role mapping and every JACC provider. Especially for the somewhat lesser known application servers and/or closed source ones it can be rather obscure to find out how to implement a role mapper for it. And the task is different for every other JACC policy provider as well, and some JACC policy providers may not have a factory or interface for it.

Even if we ignore role mapping altogether (e.g. assume role "admin" returned by an auth module is the same "admin" used by the application, which is not at all uncommon), there is another crucial task that's typically attributed to the role mapper which we're missing: the ability to identify which Principals are in fact roles. As we've seen above in the section that explained the authorization queries, we're passing in the Principals of a Subject. But as we've explained in a previous article, this collection contains all sorts of principals and there's no standard way to identify which of those represent roles.

So the somewhat shocking conclusion is that given the current Java EE specification it's not possible to write a portable and actually working JACC policy provider. At some point we just need the roles, but there's no way to get to them via any Java EE API, SPI, convention or otherwise. In the sample code below we've implemented a crude hack where we hardcoded the way to get to the roles for a small group of known servers. This is of course far from ideal.

Sample code

The code below shows how to implement a JACC provider that's as simple as possible and implements the default behavior for Servlet and EJB containers. It thus doesn't show how to provide specialized behavior, but should give some idea of what's needed. The required state machine was left out as well, as the article was already getting way too long without it. The code is thus only intended to give a general impression of what JACC requires to be implemented. It's not an actual working and compliant JACC provider, and it's definitely not recommended for use in any actual application.

The factory
We start with creating the factory. This is the class that we primarily register with the container. It stores instances of the configuration class TestPolicyConfiguration in a static concurrent map. The code is more or less thread-safe with respect to creating a configuration; in case of a race we'll create an instance for nothing, but since it's not a heavy instance there won't be much harm done. For simplicity's sake we won't be paying much if any attention to thread-safety after this. The getPolicyConfiguration() method will be called by the container whenever it needs the instance corresponding to the "contextID", which in its simplest form identifies the (web) application for which we are doing authorization.

import static javax.security.jacc.PolicyContext.getContextID;

// other imports omitted for brevity

public class TestPolicyConfigurationFactory extends PolicyConfigurationFactory {

    private static final ConcurrentMap<String, TestPolicyConfiguration> configurations = new ConcurrentHashMap<>();

    @Override
    public PolicyConfiguration getPolicyConfiguration(String contextID, boolean remove) throws PolicyContextException {

        if (!configurations.containsKey(contextID)) {
            configurations.putIfAbsent(contextID, new TestPolicyConfiguration(contextID));
        }

        TestPolicyConfiguration configuration = configurations.get(contextID);

        if (remove) {
            // Per the JACC contract, "remove" means the policy statements of the
            // returned configuration have to be deleted (not the entire map cleared)
            configuration.delete();
        }

        return configuration;
    }

    @Override
    public boolean inService(String contextID) throws PolicyContextException {
        return getPolicyConfiguration(contextID, false).inService();
    }

    public static TestPolicyConfiguration getCurrentPolicyConfiguration() {
        return configurations.get(getContextID());
    }

}
The configuration
The configuration class has a ton of methods to implement, and additionally, according to its JavaDocs, a state machine has to be implemented as well.

The methods that need to be implemented can be placed into a few groups. We'll first show the implementation of each group of methods and then after that show the entire class.

The first is simply about the identity of the configuration. It contains the method getContextID that returns the ID for the module for which the configuration is created. Although there's no guidance where this ID has to come from, a logical way seems to be to pass it in from the factory via the constructor:


private final String contextID;

public TestPolicyConfiguration(String contextID) {
    this.contextID = contextID;
}

@Override
public String getContextID() throws PolicyContextException {
    return contextID;
}
The second group concerns the methods that receive from the container all permissions applicable to the application module, namely the excluded, unchecked and per role permissions. No fewer than 6 methods have to be implemented. 3 of them each take a single permission of the aforementioned types, while the other 3 take a collection (PermissionCollection) of these permissions. Having these two sub-groups of methods seems rather unnecessary. Maybe there was a deeper reason once, but all of the several existing implementations I studied just iterated over the collection and added each individual permission separately to an internal collection.

At any rate, we just collect each permission that we receive in the most straightforward way possible.



private Permissions excludedPermissions = new Permissions();
private Permissions uncheckedPermissions = new Permissions();
private Map<String, Permissions> perRolePermissions = new HashMap<String, Permissions>();

// Group 2a: collect single permission

@Override
public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
    excludedPermissions.add(permission);
}

@Override
public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
    uncheckedPermissions.add(permission);
}

@Override
public void addToRole(String roleName, Permission permission) throws PolicyContextException {
    Permissions permissions = perRolePermissions.get(roleName);
    if (permissions == null) {
        permissions = new Permissions();
        perRolePermissions.put(roleName, permissions);
    }

    permissions.add(permission);
}

// Group 2b: collect multiple permissions

@Override
public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
    for (Permission permission : list(permissions.elements())) {
        addToExcludedPolicy(permission);
    }
}

@Override
public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
    for (Permission permission : list(permissions.elements())) {
        addToUncheckedPolicy(permission);
    }
}

@Override
public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
    for (Permission permission : list(permissions.elements())) {
        addToRole(roleName, permission);
    }
}

The third group are the life-cycle methods, partly belonging to the above-mentioned state machine. Much of the complexity here is caused by the fact that multi-module applications have to share some authorization data but also still need to have their own, and that some modules have to be available before others. The commit method signals that the last permission has been given to the configuration class. Depending on the exact strategy used, the configuration class could now e.g. start building up some specialized data structure, write the collected permissions to a standard policy file on disk, etc.

In our case we have nothing special to do. For this simple example we only implement the most basic requirement of the aforementioned state machine, and that's making sure the inService method returns true when we're done. Containers may check for this via the previously shown factory, and will otherwise either not hand over the permissions to our configuration class or keep doing that forever.


@Override
public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
}

boolean inService;

@Override
public void commit() throws PolicyContextException {
    inService = true;
}

@Override
public boolean inService() throws PolicyContextException {
    return inService;
}

The fourth group concerns methods that ask for the deletion of previously given permissions. Here too we see some redundant overlap; there's 1 method that deletes all permissions and 1 method for each type. These methods are supposedly needed because programmatic registration of Servlets can change the authorization data during container startup, and every time(?) this happens all previously collected permissions would have to be deleted. Apparently the container can't delay the moment of handing permissions to the configuration class, since during the time when it's legal to register Servlets a call can be made to another module (e.g. a ServletContextListener could call an EJB in a separate EJB module). Still, it's questionable whether it's really needed to have a kind of back-tracking permission collector in place and whether part of this burden should really be placed on the user implementing an authorization extension.

In our case the implementations are pretty straightforward. The Permissions type doesn't have a clear() method, so we'll replace the instance with a new one. For the Map we can call a clear() method, but there's a special protocol to be executed concerning the "*" role: if the collection explicitly contains the role name "*", remove just that one, otherwise clear the entire collection.


@Override
public void delete() throws PolicyContextException {
    removeExcludedPolicy();
    removeUncheckedPolicy();
    perRolePermissions.clear();
}

@Override
public void removeExcludedPolicy() throws PolicyContextException {
    excludedPermissions = new Permissions();
}

@Override
public void removeUncheckedPolicy() throws PolicyContextException {
    uncheckedPermissions = new Permissions();
}

@Override
public void removeRole(String roleName) throws PolicyContextException {
    if (perRolePermissions.containsKey(roleName)) {
        perRolePermissions.remove(roleName);
    } else if ("*".equals(roleName)) {
        perRolePermissions.clear();
    }
}

Finally we added 3 getters ourselves to give access to the 3 collections of permissions that we have been building up. We'll use these below to give the Policy instance access to the permissions.


public Permissions getExcludedPermissions() {
    return excludedPermissions;
}

public Permissions getUncheckedPermissions() {
    return uncheckedPermissions;
}

public Map<String, Permissions> getPerRolePermissions() {
    return perRolePermissions;
}

Although it initially looks unwieldy, the PolicyConfiguration in our case becomes just a data structure to add, get and remove three different but related groups of objects. All together it becomes this:


import static java.util.Collections.list;

// other imports omitted for brevity

public class TestPolicyConfiguration implements PolicyConfiguration {

    private final String contextID;

    private Permissions excludedPermissions = new Permissions();
    private Permissions uncheckedPermissions = new Permissions();
    private Map<String, Permissions> perRolePermissions = new HashMap<String, Permissions>();

    // Group 1: identity

    public TestPolicyConfiguration(String contextID) {
        this.contextID = contextID;
    }

    @Override
    public String getContextID() throws PolicyContextException {
        return contextID;
    }


    // Group 2: collect permissions from container

    // Group 2a: collect single permission

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        excludedPermissions.add(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        uncheckedPermissions.add(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        Permissions permissions = perRolePermissions.get(roleName);
        if (permissions == null) {
            permissions = new Permissions();
            perRolePermissions.put(roleName, permissions);
        }

        permissions.add(permission);
    }

    // Group 2b: collect multiple permissions

    @Override
    public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToExcludedPolicy(permission);
        }
    }

    @Override
    public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToUncheckedPolicy(permission);
        }
    }

    @Override
    public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToRole(roleName, permission);
        }
    }

    // Group 3: life-cycle methods

    @Override
    public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
    }

    boolean inService;

    @Override
    public void commit() throws PolicyContextException {
        inService = true;
    }

    @Override
    public boolean inService() throws PolicyContextException {
        return inService;
    }

    // Group 4: removing all or specific collection types again

    @Override
    public void delete() throws PolicyContextException {
        removeExcludedPolicy();
        removeUncheckedPolicy();
        perRolePermissions.clear();
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        excludedPermissions = new Permissions();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        if (perRolePermissions.containsKey(roleName)) {
            perRolePermissions.remove(roleName);
        } else if ("*".equals(roleName)) {
            perRolePermissions.clear();
        }
    }

    @Override
    public void removeUncheckedPolicy() throws PolicyContextException {
        uncheckedPermissions = new Permissions();
    }

    // Group 5: extra methods

    public Permissions getExcludedPermissions() {
        return excludedPermissions;
    }

    public Permissions getUncheckedPermissions() {
        return uncheckedPermissions;
    }

    public Map<String, Permissions> getPerRolePermissions() {
        return perRolePermissions;
    }
}
The policy
To implement the policy we inherit from the Java SE Policy class and override the implies and getPermissions methods. Where the PolicyConfiguration is the "dumb" data structure that just contains the collections of different permissions, the Policy implements the rules that operate on this data.

Per the JDK rules the policy instance is a single instance for the entire JVM, so central to its implementation in Java EE is that it first has to obtain the correct data corresponding to the "current" Java EE application, or module within such an application. The key to obtaining this data is the Context ID that is set in thread local storage by the container prior to calling the policy. The factory that we showed earlier stored the configuration instance under this key in a concurrent map, and here we use the extra method that we added to the factory to conveniently retrieve it again.

The policy implementation also contains the hacky method that we referred to earlier for extracting the roles from a collection of principals. It just iterates over the collection and matches the principals against the known group names for each server. Of course this is a brittle and incomplete technique. Newer versions of servers can change the class names, and we have not covered all servers; e.g. WebSphere is missing, since via the custom method to obtain a Subject we saw earlier that the roles are stored in the credentials (we may need to study this further).


package test;

import static java.util.Collections.list;
import static test.TestPolicyConfigurationFactory.getCurrentPolicyConfiguration;

// other imports omitted for brevity

public class TestPolicy extends Policy {

    private Policy previousPolicy = Policy.getPolicy();

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();

        if (isExcluded(policyConfiguration.getExcludedPermissions(), permission)) {
            // Excluded permissions cannot be accessed by anyone
            return false;
        }

        if (isUnchecked(policyConfiguration.getUncheckedPermissions(), permission)) {
            // Unchecked permissions are free to be accessed by everyone
            return true;
        }

        if (hasAccessViaRole(policyConfiguration.getPerRolePermissions(), getRoles(domain.getPrincipals()), permission)) {
            // Access is granted via role. Note that if this returns false it doesn't mean the permission is not
            // granted. A role can only grant, not take away permissions.
            return true;
        }

        if (previousPolicy != null) {
            return previousPolicy.implies(domain, permission);
        }

        return false;
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {

        Permissions permissions = new Permissions();

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(domain), permissions, excludedPermissions);
        }

        // If there are any static permissions, add those next
        if (domain.getPermissions() != null) {
            collectPermissions(domain.getPermissions(), permissions, excludedPermissions);
        }

        // Thirdly, get all unchecked permissions
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        // Finally get the permissions for each role *that the current user has*
        Map<String, Permissions> perRolePermissions = policyConfiguration.getPerRolePermissions();
        for (String role : getRoles(domain.getPrincipals())) {
            if (perRolePermissions.containsKey(role)) {
                collectPermissions(perRolePermissions.get(role), permissions, excludedPermissions);
            }
        }

        return permissions;
    }

    @Override
    public PermissionCollection getPermissions(CodeSource codesource) {

        Permissions permissions = new Permissions();

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(codesource), permissions, excludedPermissions);
        }

        // Secondly get the unchecked permissions. Note that there are only two sources possible here; without
        // knowing the roles of the current user we can't check the per role permissions.
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        return permissions;
    }

    private boolean isExcluded(Permissions excludedPermissions, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return true;
        }

        for (Permission excludedPermission : list(excludedPermissions.elements())) {
            if (permission.implies(excludedPermission)) {
                return true;
            }
        }

        return false;
    }

    private boolean isUnchecked(Permissions uncheckedPermissions, Permission permission) {
        return uncheckedPermissions.implies(permission);
    }

    private boolean hasAccessViaRole(Map<String, Permissions> perRolePermissions, List<String> roles, Permission permission) {
        for (String role : roles) {
            if (perRolePermissions.containsKey(role) && perRolePermissions.get(role).implies(permission)) {
                return true;
            }
        }

        return false;
    }

    /**
     * Copies permissions from a source into a target, skipping any permission that's excluded.
     *
     * @param sourcePermissions the permissions to copy
     * @param targetPermissions the collection the permissions are copied into
     * @param excludedPermissions the permissions that should not be copied
     */
    private void collectPermissions(PermissionCollection sourcePermissions, PermissionCollection targetPermissions, Permissions excludedPermissions) {

        boolean hasExcludedPermissions = excludedPermissions.elements().hasMoreElements();

        for (Permission permission : list(sourcePermissions.elements())) {
            if (!hasExcludedPermissions || !isExcluded(excludedPermissions, permission)) {
                targetPermissions.add(permission);
            }
        }
    }

    /**
     * Extracts the roles from the vendor specific principals. SAD that this is needed :(
     *
     * @param principals the principals from which the roles are extracted
     * @return the roles found among the given principals
     */
    private List<String> getRoles(Principal[] principals) {
        List<String> roles = new ArrayList<>();

        for (Principal principal : principals) {
            switch (principal.getClass().getName()) {

                case "org.glassfish.security.common.Group": // GlassFish
                case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
                case "weblogic.security.principal.WLSGroupImpl": // WebLogic
                case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                    roles.add(principal.getName());
                    break;

                case "org.jboss.security.SimpleGroup": // JBoss
                    if (principal.getName().equals("Roles") && principal instanceof Group) {
                        Group rolesGroup = (Group) principal;
                        for (Principal groupPrincipal : list(rolesGroup.members())) {
                            roles.add(groupPrincipal.getName());
                        }

                        // There should only be one group holding the roles, so we can exit the loop early
                        return roles;
                    }
            }
        }

        return roles;
    }

}

Summary of JACC's problems

Unfortunately we've seen that JACC has a few problems. It has a somewhat arcane and verbose API which easily puts (new) users off. We've also seen that it puts too many responsibilities on the user, especially when it comes to customizing the authorization system.

A fatal flaw in JACC is that it didn't specify how to access roles from a collection of Principals. Since authorization is primarily role based in Java EE this makes it downright impossible to create portable JACC policy providers and much harder than necessary to create vendor specific ones.

Another fatal flaw of JACC is that there's not necessarily a default implementation of JACC active at runtime. This means that general Java EE applications cannot just use JACC; they would have to instruct their users to make sure JACC is activated for their server. Since not all servers ship with a simple default implementation this would not even work for all servers.

On the one hand it's impressive that JACC has managed to integrate so well with Java SE security. On the other hand it's debatable whether this has been really useful in practice. Java EE containers can now consult the Java SE Policy for authorization, but when a security manager is installed Java SE code will consult the very same one.

Java SE permissions are mostly about what code coming from some code location (e.g. a specific library directory on the server) is allowed to do, while Java EE permissions are about the things an external user logged-in to the server is allowed to do. With the current setup the JACC policy replaces the low level Java SE one, and thus all Java SE security checks will go through the JACC policy as well. This may be tens of checks per second or even more. All that a JACC policy can really do is delegate these to the default Java SE policy.

Conclusion

The concept of having a central repository holding all authorization rules is by itself a powerful addition to the Java EE security system. It allows one to query the repository and thus programmatically check if a resource will be accessible. This is particularly useful for URL based resources; if a user would not have access we can omit rendering a link to that resource, or perhaps render it with a different color, with a lock icon, etc.
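
As a minimal sketch, assuming the server has an active JACC provider and that we somehow obtained the principals of the current user, such a query could look as follows (imports omitted for brevity):

public boolean hasAccessToResource(String uri, Principal[] principals) {
    // Ask the installed policy whether a GET request for the given URI
    // would be granted to a user with the given principals
    return Policy.getPolicy().implies(
        new ProtectionDomain(
            new CodeSource(null, (Certificate[]) null), null, null, principals),
        new WebResourcePermission(uri, "GET"));
}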

Although not explicitly demonstrated in this article it shouldn't require too much imagination to see that the default implementation that we showed can easily do something else instead and thus tailor the authorization system to the specific needs of an application.

As shown, it might not be that difficult to reduce the verbosity of the JACC API by introducing a couple of convenience methods for common functionality. Specifying which principals correspond to roles would be the most straightforward solution for the roles problem, but a container provided role mapper instance that returns a list of group/role principals given a collection of strings representing application roles (and the other way around) might be a workable solution as well.
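
Purely as a speculative sketch (no such type exists in Java EE today), such a container provided role mapper could look like:

public interface RoleMapper {

    // Maps application roles (e.g. "admin") to the container's group/role principals
    List<Principal> getPrincipalsForRoles(Collection<String> roles);

    // The reverse mapping; this would also solve the problem of identifying
    // which principals in a Subject actually represent roles
    List<String> getRolesForPrincipals(Collection<Principal> principals);
}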

While tedious for users to implement, container vendors should not have much difficulty implementing a default JACC policy provider and activating it by default. Actually mandating vendors to USE JACC themselves for their Servlet and EJB authorization decisions may be another story though. A couple of vendors (specifically Oracle themselves for WebLogic) claim performance and flexibility issues with JACC and actually more or less encourage users not to use it. In case of WebLogic this advice probably stems from the BEA days, but it's still in the current WebLogic documentation. With Oracle being the steward of JACC it's remarkable that they effectively suggest their own users not to use it, although GlassFish, which is also from Oracle, is one of the few or perhaps the only server that in fact uses JACC internally itself.

As it stands JACC is not used a lot and not universally loved, but a few relatively small changes may be all that's needed to make it much more accessible and easy to use.

Arjan Tijms

Further reading:

What should be the minimum dependency requirements for the next major OmniFaces version?

For the next major version of OmniFaces (2.x, planned for a few months from now) we're currently looking at determining the minimum version of its dependencies. Specifically the minimum Java SE version is tricky to get right.

OmniFaces 1.x currently targets JSF 2.0/2.1 (2009/2010), which depends on Servlet 2.5 (2006) and Java SE 5 (2004).

In practice relatively few people appear to be using Servlet 2.5, so for OmniFaces 1.x we primarily test on Servlet 3.0. Nevertheless, Servlet 2.5 is the official minimum requirement and we sometimes had to go to great lengths to keep OmniFaces 1.x compatible with it. As for the Java SE 5 requirement, we noted that so few people were still using this (especially among the people who were also using JSF 2.1) that we set Java SE 6 as the minimum. So far we never had a complaint about Java SE 6 being the minimum.

For OmniFaces 2.x we'll be targeting JSF 2.2 (2013), which itself has Servlet 3.0 (2009) as the minimum Servlet version and Java SE 6 (2006) as the minimum JDK version.

So this begs the question which Servlet/Java EE and Java SE versions we're going to require as the minimum this time. Lower versions mean more people can use the technology, but it also means newer features can't be supported or can't be used by us. The latter isn't directly visible to the end-user, but it often means OmniFaces development is simply slower (we need more code to do something, or need to reinvent a wheel ourselves that's already present in a newer JDK or Java EE release).

One possibility is to strictly adhere to what JSF 2.2 is depending on. This thus means Servlet 3.0 and Java SE 6. Specifically Java SE 6 is nasty though. It's 8 years old already, currently even EOL, and we'll probably have to keep using it during the entire JSF 2.2 time-frame. When that ends it will be over 11 years old. At OmniFaces and ZEEF (the company we work at) we're big fans of being on top of recent developments, and being forced to work with an 11 year old JDK doesn't exactly fit in with that.

The other possibility is to do things just like we did with OmniFaces 1.x; support the same Servlet version as the JSF version we target, but move up one JDK version. This would thus mean OmniFaces 2.x will require JDK 7 as a minimum.

Yet another possibility would be to be a little more progressive with OmniFaces 2.x; have the technology from Java EE 7 as the minimum requirement (JSF 2.2 is part of Java EE 7 after all). This would mostly boil down to having Servlet 3.1 and CDI 1.1 as dependencies. Taking being progressive even one step further would be to start using JDK 8 as well, but this may be too much for now.

Besides the version numbers, another question that we've been struggling with is whether we should require CDI to be present or not. In OmniFaces 1.x we do use CDI, but mainly because Tomcat doesn't have CDI by default we jump through all kinds of hoops to prevent class not found exceptions; e.g. using reflection, or using otherwise needless indirections via interfaces that don't import any CDI types, which are then implemented by classes that do, etc. (see the sketch below).
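
A minimal sketch of that indirection pattern; the names are hypothetical and not the actual OmniFaces code (imports omitted for brevity):

// The calling code only ever references this CDI-free interface
public interface BeanStorage {
    Object getBean(String name);
}

// Only this class imports CDI types; it's instantiated reflectively
// (e.g. via Class.forName) after a check that CDI is actually available
public class CdiBeanStorage implements BeanStorage {

    @Override
    public Object getBean(String name) {
        try {
            BeanManager manager = (BeanManager) new InitialContext().lookup("java:comp/BeanManager");
            Bean<?> bean = manager.resolve(manager.getBeans(name));
            return manager.getReference(bean, Object.class, manager.createCreationalContext(bean));
        } catch (NamingException e) {
            return null;
        }
    }
}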

As CDI is becoming increasingly more foundational to Java EE, and JSF is on the verge of officially deprecating its own managed beans (they're pretty much effectively deprecated in JSF 2.2 already), plus the fact that CDI is pretty easy to add to Tomcat, such a requirement may not be that bad. Nevertheless we're still pretty much undecided about this.

Summing up, should OmniFaces 2.x require as a minimum:

  1. Exactly what JSF 2.2 does: Servlet 3.0 and JDK 6.0
  2. Same pattern as OmniFaces 1.x did: Servlet 3.0 and JDK 7.0
  3. The versions of Java EE 7 of which JSF 2.2 is a part: Servlet 3.1, CDI 1.1 and JDK 7.0
  4. An extra progressive set: Java EE 7 and JDK 8.0

What do you think? If you want you can vote for your favorite option using the poll at the top right corner of this blog.

Arjan Tijms

Implementation components used by various Java EE servers

There are quite a lot of Java EE server implementations out there. There are a bunch of well known ones like JBoss, GlassFish and TomEE, and some less known ones like Resin and Liberty, and a couple of obscure ones like JEUS and WebOTX.

One thing to keep in mind is that all those implementations are not all completely unique. There are a dozen or so Java EE implementations, but there are most definitely not a dozen JSF implementations (in fact there are only two: Mojarra and MyFaces).

Java EE implementations in some way are not entirely unlike Linux distributions; they package together a large amount of existing software, which is glued together via software developed by the distro vendor and where some software is directly developed by that vendor, but then also used by other vendors.

In Java EE for example JBoss develops the CDI implementation Weld and uses that in its Java EE servers, but other vendors like Oracle also use this. The other way around, Oracle develops Mojarra, the aforementioned JSF implementation, and uses this in its servers. JBoss on its turn then uses Mojarra instead of developing its own JSF implementation.

In this post we'll take a deeper look at which of these "components" the various Java EE servers are using.

One source that's worth looking at to dig up this information is the Oracle Java EE certification page. While this does list some implementations for each server, it's unfortunately highly irregular and incoherent. Some servers will list their JSF implementation, while others don't do this but do list their JPA implementation. It gives one a start, but it's a very incomplete list, and a list that's thus different for each server.

Another way is to download each server and just look at the /lib or /modules directory and look at the jar files being present. This works to some degree, but some servers rename jars of well known projects. E.g. Mojarra becomes "glassfish-jsf" in WebLogic. WebSphere does something similar.

Wikipedia, vendor product pages and technical presentations sometimes do mention some of the implementation libraries, but again only a few implementations are mentioned, if they are mentioned at all. A big exception to this is a post from Arun Gupta about WildFly 8 (the likely base of a future JBoss EAP 7), which I somehow missed when doing my initial research, and which very clearly lists and references nearly all component implementations used by that server.

A last resort is to hunt for several well known interfaces and/or abstract classes in each spec and then check by which class these are implemented in each server. This is fairly easy for specs like JSF, e.g. FacesContext is clearly implemented by the implementation. However for JTA and JCA this is somewhat more difficult as it contains mostly interfaces that are to be implemented by user code.

For reference, I used the following types for this last resort method:

  • Servlet - HttpServletRequest
  • JSF - FacesContext
  • CDI - BeanManager
  • JPA - EntityManager
  • BV - javax.validation.Configuration, ParameterNameProvider
  • EJB - SessionContext
  • JAX-RS - ContextResolver, javax.ws.rs.core.Application
  • JCA - WorkManager, ConnectionManager, ManagedConnection
  • JMS - Destination
  • EL - ELContext, ValueExpression
  • JTA - TransactionManager
  • JASPIC - ServerAuthConfig
  • Mail - MimeMultipart
  • WebSocket - ServerWebSocketContainer, Encoder
  • Concurrency - ManagedScheduledExecutorService
  • Batch - JobContext
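
As an illustration of this last resort method, the sketch below prints the implementation classes of two of these types from within a deployed test application; the class names printed then reveal which component is used (imports omitted for brevity):

@WebServlet("/spec-impls")
public class SpecImplServlet extends HttpServlet {

    @Inject
    private BeanManager beanManager;

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Servlet - HttpServletRequest; e.g. an io.undertow... class indicates Undertow
        response.getWriter().println(request.getClass().getName());

        // CDI - BeanManager; e.g. an org.jboss.weld... class indicates Weld
        response.getWriter().println(beanManager.getClass().getName());
    }
}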

Without further ado, here's the matrix of Java EE implementation components used by 10 Java EE servers:

Spec/AS | JBoss/WildFly (Red Hat) | GlassFish (Oracle) | WebLogic (Oracle) | Geronimo (Apache) | TomEE+ (Apache) | WebSphere (IBM) | Liberty (IBM) | JEUS (TMax) | Resin (Caucho) | JOnAS (OW2)
Servlet | Undertow | Tomcat derivative & Grizzly | Nameless internal | Tomcat/Jetty | Tomcat | Nameless internal | Nameless internal | Nameless internal (jeus servlet) | Nameless internal | Tomcat + Jetty*
JSF | Mojarra* | Mojarra | Mojarra | MyFaces | MyFaces | MyFaces* | MyFaces* | Mojarra* | Mojarra* | Mojarra (def) + MyFaces*
CDI | Weld | Weld* | Weld* | OWB | OWB | OWB* | OWB* | Weld | CanDI (semi internal) | Weld*
JPA | Hibernate | EclipseLink+ | EclipseLink+ | OpenJPA | OpenJPA | OpenJPA* | OpenJPA* (future version will be EclipseLink*) | EclipseLink* | EclipseLink* | Hibernate (def) + EclipseLink (certified with)*
BV | Hibernate Validator | Hibernate Validator* | Hibernate Validator* | BVal | BVal | BVal* | BVal* | Hibernate Validator* | Hibernate Validator* | Hibernate Validator*
EJB | Nameless internal | Nameless internal (EJB-container) | Nameless internal | OpenEJB | OpenEJB | Nameless internal | Nameless internal | Nameless internal | Nameless internal | EasyBeans
JAX-RS | RESTEasy | Jersey | Jersey | Wink | CXF | Wink* | Wink* | Jersey* | - | Jersey*
JCA | IronJacamar | Nameless internal (Connectors-runtime) | Nameless internal | Nameless internal (Geronimo Connector) | Nameless internal (Geronimo Connector) | Nameless internal | - | Nameless internal | Nameless internal | Nameless internal
JMS | HornetQ | OpenMQ | WebLogic JMS (closed source) | ActiveMQ | ActiveMQ | SiBus (closed source) | Liberty messaging (closed source) | Nameless internal | Nameless internal | JORAM
EL | EL RI* | EL RI | EL RI | Apache/Jasper/Tomcat(?) EL | Apache/Jasper/Tomcat(?) EL | Apache/Jasper/Tomcat(?) EL* | Apache/Jasper/Tomcat(?) EL* | EL RI* | Nameless internal | EL RI + Apache/Jasper/Tomcat(?) EL*
JTA | Narayana | Nameless internal | Nameless internal | Nameless internal (Geronimo Transaction) | Nameless internal (Geronimo Transaction) | Nameless internal | Nameless internal | Nameless internal (jeus tm) | Nameless internal | JOTM
JASPIC | Part of PicketBox | Nameless internal | Nameless internal | Nameless internal (Geronimo Jaspi) | - | Nameless internal | - | Nameless internal | - | -
Mail | JavaMail RI* | JavaMail RI | JavaMail RI | (Geronimo?) JavaMail | (Geronimo?) JavaMail | JavaMail RI* | - | JavaMail RI* | JavaMail RI* | JavaMail RI*
WebSocket | Undertow | Tyrus | - | - | - | - | - | Nameless internal (jeus websocket) | - | -
Concurrency | Concurrency RI* | Concurrency RI | - | - | - | - | - | Nameless internal (jeus concurrent) | - | -
JBatch | JBeret | JBatch RI (IBM)* | - | - | - | - | - | JBatch RI (IBM)* | - | -

(an asterisk behind a component name means the vendor in the given column uses an implementation from another vendor; a plus behind a name means the implementation used to be from the vendor in that column, but the vendor donated it to some external organization)

Looking at the matrix we can see there are mainly 3 big parties creating separate and re-usable Java EE components; Red Hat, Oracle and Apache. Apache is maybe a special case though, as it's an organization hosting tons of projects and not a vendor with a single strategic goal.

Next to these big parties there are two smaller ones producing a few components. Of those, OW2 has a separate and re-usable implementation of EJB, JMS and JTA, while Resin has its own implementation of CDI. In the case of Resin it looks like it's only semi re-usable though. The implementation has its own name (CanDI), but there isn't really a separate artifact or project page available for it, nor are there really any instructions on how to use CanDI on e.g. Tomcat or Jetty (like Weld has).

Apart from using (well known) open source implementations of components, all servers (both open and closed source) had a couple of unnamed and/or internal implementations. Of these, JASPIC was most frequently implemented by nameless internal code, namely 4 out of 5 times, although the one implementation that was named (PicketBox) isn't really a direct JASPIC implementation but more a security related project that includes the JASPIC implementation classes. JTA and EJB followed closely, with 8 and 7 out of 10 implementations respectively being nameless and internal. Remarkable is that all closed source servers tested had a nameless internal implementation of Servlet.

At the other end of the spectrum in the servers that I looked at there were no nameless internal and no closed source implementations of JSF, JPA, Bean Validation, JAX-RS, JavaMail and JBatch.

It's hard to say what exactly drives the creation of nameless internal components. One explanation may be that J2EE started out having Servlet and EJB as the internal foundation of everything, meaning a server didn't just include EJB, but more or less WAS EJB. In that world it wouldn't make much sense to include a re-usable EJB implementation. With the rise of open source Java EE components it made more sense to just reuse these, so all newer specs (JSF, JPA, etc) are preferably re-used from open source. One exception to this is however JEUS, which despite being in a hurry to be the first certified Java EE 7 implementation still felt the need to create its own implementations of the brand new WebSocket and Concurrency specs. It will be interesting to see what the next crop of Java EE 7 implementations will do with respect to these two specs.

An interesting observation is that WebSphere, which by some people may be seen as the poster child of the closed source and commercial AS, actually uses relatively many open source components, and of those nearly all of them are from Apache (which may also better explain why IBM sponsored the development of Geronimo for some time). JavaMail for some reason is the exception here. Geronimo has its own implementation of it, but WebSphere uses the Sun/Oracle RI version.

Another interesting observation is that servers don't seem to randomly mix components, but either use the RI components for everything, or use the Apache ones for everything. There's no server that uses say JMS from JBoss, JSF from Oracle and JPA from Apache. An exception to the rule is when servers allow alternative components to be configured, or even ship with multiple implementations of the same spec like JOnAS does.

We do have to realize that a Java EE application server is quite a bit more than just the set of spec components. For one there's always the integration code that's server specific, but there are also things like the implementation of pools for various things, the (im)possibility to do fail-over for datasources, (perhaps unfortunately) a number of security modules for LDAP, Database, Kerberos etc, and lower level server functionality like modular kernels (osgi or otherwise) that dynamically (e.g. JBoss) or statically (e.g. Liberty) load implementation components.

JEUS for instance may look like GlassFish as it uses a fair amount of the same components, but in actuality it's a completely different server at many levels.

Finally, note that not all servers were investigated and not all components. Notably the 3 Japanese servers NEC WebOTX, Fujitsu Interstage and Hitachi Cosminexus were not investigated, the reason being they are not exactly trivial to obtain. At the component level things like JAX-RPC, JAX-WS, SAAJ, JNDI etc are not in the matrix. They were mainly omitted to somewhat reduce the research time. I do hope to find some more time at a later stage and add the remaining Java EE servers and some more components.

Arjan Tijms

JASPIC improvements in WebLogic 12.1.3

Yesterday WebLogic 12.1.3 was finally released. First of all congratulations to the team at Oracle for getting this release out of the door! :)

Among the big news is that WebLogic 12.1.3 is now a mixed Java EE 6/EE 7 server by (optionally) supporting several Java EE 7 technologies like JAX-RS 2.0.

Next to this there are a ton of smaller changes and fixes as well. One of those fixes concerns the standard authentication system of Java EE (JASPIC). As we saw some time back, the JASPIC implementation in WebLogic 12.1.1 and 12.1.2 wasn't entirely optimal (in WebLogic's defense, very few JASPIC implementations were at the time).

One particular problem with JASPIC is that its TCK almost can't be anything but rather incomplete; implementations that don't actually authenticate or that are missing the most basic functionality got certified in the past. For this purpose I have created a small set of tests that checks for the most basic capabilities. Note that these tests have since been contributed to Arun Gupta's Java EE 7 samples project, and have additionally been extended. Since those tests have Java EE 7 as a baseline requirement, we unfortunately can't use them directly to test WebLogic 12.1.3.

For WebLogic 12.1.2 we saw the following results for the original Java EE 6 tests:



[INFO] jaspic-capabilities-test .......................... SUCCESS [1.140s]
[INFO] jaspic-capabilities-test-common ................... SUCCESS [1.545s]
[INFO] jaspic-capabilities-test-basic-authentication ..... FAILURE [7.533s]
[INFO] jaspic-capabilities-test-lifecycle ................ FAILURE [3.825s]
[INFO] jaspic-capabilities-test-wrapping ................. FAILURE [3.803s]
[INFO] jaspic-capabilities-test-ejb-propagation .......... SUCCESS [4.624s]

FAILURES:

testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)

testPublicPageNotRememberLogin(org.omnifaces.jaspictest.BasicAuthenticationPublicIT)
java.lang.AssertionError: null
at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotLoggedin(BasicAuthenticationPublicIT.java:44)
at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotRememberLogin(BasicAuthenticationPublicIT.java:64)

testBasicSAMMethodsCalled(org.omnifaces.jaspictest.AuthModuleMethodInvocationIT)
java.lang.AssertionError: SAM methods called in wrong order
at org.omnifaces.jaspictest.AuthModuleMethodInvocationIT.testBasicSAMMethodsCalled(AuthModuleMethodInvocationIT.java:54)

testResponseWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Response wrapped by SAM did not arrive in Servlet.
at org.omnifaces.jaspictest.WrappingIT.testResponseWrapping(WrappingIT.java:53)

testRequestWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Request wrapped by SAM did not arrive in Servlet.
at org.omnifaces.jaspictest.WrappingIT.testRequestWrapping(WrappingIT.java:45)


WebLogic 12.1.3 does quite a bit better as we now see the following:



[INFO] jaspic-capabilities-test .......................... SUCCESS [1.172s]
[INFO] jaspic-capabilities-test-common ................... SUCCESS [1.802s]
[INFO] jaspic-capabilities-test-basic-authentication ..... FAILURE [6.811s]
[INFO] jaspic-capabilities-test-lifecycle ................ SUCCESS [3.847s]
[INFO] jaspic-capabilities-test-wrapping ................. SUCCESS [3.777s]
[INFO] jaspic-capabilities-test-ejb-propagation .......... SUCCESS [4.800s]

FAILURES:

testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)


In particular WebLogic 12.1.1 and 12.1.2 didn't support request/response wrapping (a feature that curiously not a single server supported), called a lifecycle method at the wrong time (the method secureResponse was called before a Servlet was invoked instead of after) and remembered the username of a previously logged-in user (within the same session, but JASPIC is supposed to be stateless).

As of WebLogic 12.1.3 the lifecycle method is called at the correct moment and request/response wrapping is actually possible. This now brings the total number of servers where the request/response can be wrapped to 3 (GlassFish since 4.0 and JBoss since WildFly 8 can also do this).
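
As a rough and abbreviated sketch (a real SAM has more methods and should of course do actual authentication), the wrapping that now works boils down to a SAM doing something like the following in its validateRequest method:

@Override
public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject, Subject serviceSubject) throws AuthException {

    HttpServletRequest request = (HttpServletRequest) messageInfo.getRequestMessage();

    // Replace the request that the Servlet will eventually see by a wrapper
    messageInfo.setRequestMessage(new HttpServletRequestWrapper(request) {
        // overridden methods here
    });

    // ... authenticate, populate clientSubject via callbacks, etc.
    return AuthStatus.SUCCESS;
}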

It remains a curious thing that the JASPIC TCK seemingly catches so few issues, but slowly the implementations of JASPIC are getting better. The JASPIC improvements in WebLogic 12.1.3 may not have made the headlines, but it's another important step for Java EE authentication.

Arjan Tijms

JSF 2.3 wish list part I - Components

Over the last days several Java EE specs have published JSR proposals for their next versions. Today JSF published its proposal on the JSF mailing list, titled: Let's get started on JSF 2.3

The JSR groups improvements into 4 categories:

  • Small scale new features
  • Community driven improvements
  • Platform integration
  • Action oriented MVC support

An interesting aspect is the "Community driven improvements" category, which means it's basically up to the community what will be done exactly. In practice this mostly boils down to issues that have been entered into the JSF issue tracker. It's remarkable how many community-filed issues JSF has compared to several other Java EE specs; clearly JSF has always been a spec that's very much community driven. At ZEEF.com we're more than happy to take advantage of this opportunity and contribute whatever we can to JSF 2.3.

Taking a look at this existing issue tracker we see there are quite a lot of ideas indeed. So what should the community driven improvements focus on? Improving JSF's core strengths further, adding more features, incorporating ideas of other frameworks, clarifying/fixing edge cases, performance? All in all there's quite a lot that can be done, but there's as always only a limited amount of resources available so choices have to be made.

One thing that JSF has been working towards is pushing away functionality that became available in the larger Java EE platform, thereby positioning itself more as the default MVC framework in Java EE and less as an out of the box standalone MVC framework for Tomcat et al. Examples are ditching its own managed bean model, its own DI system, and its own expression language. Pushing away these concerns means more of the available resources can be spent on things that are truly unique to JSF.

Important key areas of JSF for which there are currently more than a few issues in the tracker are the following:

  • Components
  • Component iteration
  • State
  • AJAX

In this article I'll look at the issues related to components. The other key areas will be investigated in follow-up articles.

Components

While JSF is about more than just components, and it's certainly not idiomatic JSF to have a page consisting solely of components, arguably the component model is still one of JSF's most defining features. Historically components were curiously tedious to create in JSF, but in current versions creating a basic component is pretty straightforward.

The simplification efforts should however not stop here as there's still more to be done. As shown in the above reference, there's e.g. still the required family override, which for most simple use cases doesn't make much sense to provide. This is captured by the following existing issues:

A more profound task is to increase the usage of annotations for components in order to make a more "component centric" programming model possible. This means that the programmer works more from the point of view of a component, and treats the component as a more "active thing" instead of something that's passively defined in XML files and assembled by the framework.

For this at least the component's attributes should be declared via annotations, making it no longer "required" to do a tedious registration of those in a taglib.xml file. Note that this registration is currently not technically required, but without it tools like a Facelet editor won't be able to do any autocompletion, so in practice people mostly define them anyway.

Besides simply mimicking the limited expressiveness for declaring attributes that's now available in taglib.xml files, some additional features would be really welcome. E.g. the ability to declare whether an attribute is required, its range of valid values and more advanced things like declaring that an attribute is an "output variable" (like the "var" attribute of a data table).

A nonsensical component to illustrate some of the ideas:


@FacesComponent
public class CustomComponent extends UIComponentBase {

    @Attribute
    @NotNull
    private ComponentAttribute<String> value;

    @Attribute
    @Min(3)
    private int dots;

    @Attribute(type=out)
    @RequestScoped
    private String var;

    private String theDots;

    @PostConstruct
    public void init() {
        theDots = createDots(dots);
    }

    @Override
    public void encodeBegin(FacesContext context) throws IOException {
        ResponseWriter writer = context.getResponseWriter();
        writer.write(value.getValue().toUpperCase());
        writer.write(theDots);

        if (var != null) {
            getRequestParameterMap().put(var, theDots);
        }
    }
}
In the above example there are 4 instance variables, of which 3 are component attributes and marked with @Attribute. These last 3 could be recognized by tooling to perform auto-completion in e.g. tags associated with this component. Constraints on the attributes could be expressed via Bean Validation, which can then partially be processed by tooling as well.

Attribute value in the example above has as type ComponentAttribute, which could be a relatively simple wrapper around a component's existing attributes collection (obtainable via getAttributes()). The reason this should not directly be a String is that it can now be transparently backed by a deferred value expression (a binding that is lazily resolved when its value is obtained). Types like ComponentAttribute shouldn't be required when the component designer only wants to accept literals or immediate expressions. We see this happening for the dots and var attributes.
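
As a guess at what such a wrapper could look like (to be clear, no ComponentAttribute type exists in JSF today), it could be little more than:

public class ComponentAttribute<T> {

    private final UIComponent component;
    private final String name;

    public ComponentAttribute(UIComponent component, String name) {
        this.component = component;
        this.name = name;
    }

    @SuppressWarnings("unchecked")
    public T getValue() {
        // The attributes map transparently evaluates a deferred value
        // expression when one is set for the given name
        return (T) component.getAttributes().get(name);
    }
}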

Finally, the example does away with declaring an explicit name for the component. In a fully annotation centric workflow a component name (which is typically used to refer to it in XML files) doesn't have as much use. A default name (e.g. the fully qualified class name, which is what we always use in OmniFaces for components anyway) would probably be best.

This is captured by the following issues:

Creating components is one thing, but the ease with which existing components can be customized is just as important, or perhaps even more important. With all the moving parts that components had in the past this was never really simple. With components themselves being simplified, customizing existing ones could be simplified as well, but here too there's more to be done. For instance, oftentimes a user only knows a component by its tag and sees this as the entry point to override something. Internally however there's still the component name, the component class, the renderer name and the renderer class. Either of these can be problematic, but particularly the renderer class can be difficult to obtain.

E.g. suppose the user did get as far as finding out that <h:outputText> is the tag for component javax.faces.component.html.HtmlOutputText. This however uses a renderer named javax.faces.Text as shown by the following code fragment:


public class HtmlOutputText extends javax.faces.component.UIOutput {

    public HtmlOutputText() {
        setRendererType("javax.faces.Text");
    }

    public static final String COMPONENT_TYPE = "javax.faces.HtmlOutputText";
How does the user find out which renderer is associated with javax.faces.Text? And why is the component name javax.faces.HtmlOutputText as opposed to its fully qualified classname javax.faces.component.html.HtmlOutputText? To make matters somewhat worse, when we want to override the renderer of a component but keep its existing tag, we also have to find out the render-kit-id. (It's a question whether the advantages that all these indirection names offer really outweigh the extra complexity users have to deal with.)
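
For those who do want to hunt down the renderer class at runtime, one way is to ask the render kit directly; a small sketch, using the fact that the component family of HtmlOutputText is javax.faces.Output:

FacesContext context = FacesContext.getCurrentInstance();
Renderer renderer = context.getRenderKit().getRenderer("javax.faces.Output", "javax.faces.Text");

// Prints the implementation class, e.g. a com.sun.faces... class for Mojarra
System.out.println(renderer.getClass().getName());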

For creating components we can, if we want, ignore these things, but if we customize an existing component we often can't. Tooling may help us to discover those names, but in absence of such tools, and/or to reduce our dependencies on them, JSF could optionally just let us specify more visible things like the tag name instead.

This is captured by the following issues:

Although strictly speaking not part of the component model itself, one other issue is the ease with which a set of components can easily be grouped together. There's the composite component for that, but this has as a side-effect that a new component is created that has the set of components as its children. This doesn't work for those situations where the group of components is to be put inside a parent component that does something directly based on its children (like h:panelGrid). There's the Facelet tag for this, but it still has the somewhat old fashioned requirements of XML registrations. JSF could simplify this by giving a Facelet tag the same conveniences as were introduced for composite components. Another option might be the introduction of some kind of visit hint, via which things like a panel grid could be requested to look at the children of some component instead of that component itself. This could be handy to give composite components some of the power for which a Facelet tag is needed now.

This is partially captured by the following issues:

Finally there's an assortment of other issues on the tracker that aim to simplify working with components or make the model more powerful. For instance there's still some confusion about the encodeBegin(), encodeChildren() and encodeEnd() methods vs the newer encodeAll() in UIComponent. Also, dynamic manipulation of the component tree (fairly typical in various other frameworks that have a component or element tree) is still not entirely clear. As it appears, such modification is safe to do during the preRenderView event, but this fact is not immediately obvious and the current 2.2 spec doesn't mention it. Furthermore, even if it's clear to someone that manipulation has to happen during this event, the code to register for this event and handle it is still a bit tedious (see previous link).

Something annotation based may again be used to simplify matters, e.g.:


@FacesComponent
public class CustomComponent extends UIComponentBase {

    @PreRenderView // or @ModifyComponentTree as alias event
    public void init() {
        this.getParent().getChildren().add(...);
    }
}

This and some other things are captured by the following issues:

That's it for now. Stay tuned for the next part that will take a look at component iteration (UIData, UIRepeat, etc).

Arjan Tijms

High time to standardize Java EE converters?

A common task when developing Java EE applications is that of converting data. In JSF we convert objects to a string representation for rendering inside an (HTML) response, and convert them back to objects after a postback. In JPA we convert objects from and to types known by our database, in JAX-RS we convert request parameter strings into objects, etc.

So given the pervasiveness of this task, is there any common converter type or mechanism in the Java EE platform?

Unfortunately it appears such a common converter type is not there. While rather similar in nature, many specs in Java EE define their very own converter type. Below we take a look at the various converter types that are currently in use by the platform.

JSF

One of the earlier converter types in the Java EE platform is contributed by JSF. This converter type is able to convert from String to Object and the other way around. Because it pre-dates Java SE 5 it doesn't use a generic type parameter. While its name and methods are very general, the signatures of both methods take two JSF specific types. These specific types however are rarely if ever needed for the actual conversion, but are typically used to provide feedback to the user after validation has failed.

The main API class looks as follows:


public interface Converter {
    Object getAsObject(FacesContext context, UIComponent component, String value);
    String getAsString(FacesContext context, UIComponent component, Object value);
}
See: javax.faces.convert.Converter

JAX-RS

JAX-RS too defines its very own converter type: ParamConverter. Just like the JSF Converter it's able to convert from a String to any Java Object, but this time there is a generic type parameter in the interface. There's also a method defined to convert the Object back into a String, but this one is curiously reserved for future use.

The main API class looks as follows:


public interface ParamConverter<T> {
    T fromString(String value);
    String toString(T value);
}
See: javax.ws.rs.ext.ParamConverter

JPA

One of the most flexible converters in terms of its interface is the JPA converter AttributeConverter. This one is able to convert between any two types in both directions, as denoted by its 2 generic type parameters. The naming of the converter methods is very specific though.

The main API class looks as follows:


public interface AttributeConverter<X, Y> {
    Y convertToDatabaseColumn(X attribute);
    X convertToEntityAttribute(Y dbData);
}
See: javax.persistence.AttributeConverter
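As an illustration, a minimal implementation could look as follows (a sketch; the Boolean-to-Y/N mapping is just an example):


import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Maps a Boolean entity attribute to a Y/N database column and back.
@Converter(autoApply = true)
public class BooleanYNConverter implements AttributeConverter<Boolean, String> {

    @Override
    public String convertToDatabaseColumn(Boolean attribute) {
        return Boolean.TRUE.equals(attribute) ? "Y" : "N";
    }

    @Override
    public Boolean convertToEntityAttribute(String dbData) {
        return "Y".equals(dbData);
    }
}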

WebSocket

WebSocket has its own converters as well. Architecturally they are a bit different. In contrast with the converters shown above, WebSocket defines separate interfaces for both directions of the conversion, whereas the other specs just put two methods in the same type. WebSocket also defines a separate interface for each of the several supported target types, whereas the other converters support either String or an Object/generic type parameter.

The two supported target types are String and ByteBuffer, with each having a variant where the converter doesn't provide the converted value via a return value, but writes it to a Writer instance that's passed into the converter method as an extra parameter.

Another thing that sets the WebSocket converters apart from the other Java EE converters is that instances have an init and destroy method and are guaranteed to be used by one thread at a time only.

The String to Object API classes look as follows:


public static interface Decoder.Text<T> extends Decoder {
    T decode(String s) throws DecodeException;
    boolean willDecode(String s);
}

public static interface Encoder.Text<T> extends Encoder {
    String encode(T object) throws EncodeException;
}
See: javax.websocket.Decoder.Text
See: javax.websocket.Encoder.Text
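For illustration, a minimal text decoder might look as follows (a sketch; ChatMessage stands for an arbitrary application type):


import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

public class ChatMessageDecoder implements Decoder.Text<ChatMessage> {

    @Override
    public ChatMessage decode(String s) throws DecodeException {
        return new ChatMessage(s); // ChatMessage is a made-up application type
    }

    @Override
    public boolean willDecode(String s) {
        return s != null;
    }

    @Override
    public void init(EndpointConfig config) {
        // One-time initialization; instances are used by one thread at a time
    }

    @Override
    public void destroy() {
        // One-time cleanup
    }
}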

PropertyEditor (Java SE)

Java SE actually has a universal converter API as well, namely the PropertyEditor. This API converts Objects from and to String, just as the JSF converter does. As demonstrated before, this type of converter is often used in Java EE code as well.

A PropertyEditor converter is almost always registered globally and is inherently stateful. You first set a source value on an instance and then call another method to get the converted value. Remarkable for this converter type is that it contains lots of unrelated methods, including a method specifically for painting in an AWT environment: paintValue(Graphics gfx, Rectangle box). This highly unfocused set of functionality makes the PropertyEditor a less than ideal converter for general usage, but in most cases the nonsense methods can simply be ignored and the ubiquitous availability in Java SE is of course a big plus.

The main API class and main conversion methods look as follows:


public interface PropertyEditor {

    // String to Object
    void setAsText(String text) throws IllegalArgumentException;
    Object getValue();

    // Object to String
    void setValue(Object value);
    String getAsText();

    // Tons of other useless methods omitted
}
See: java.beans.PropertyEditor
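Basic usage then looks roughly as follows (a sketch; the JDK registers default editors for the primitive types, so int.class is a safe example):


import java.beans.PropertyEditor;
import java.beans.PropertyEditorManager;

public class PropertyEditorDemo {

    public static void main(String[] args) {
        PropertyEditor editor = PropertyEditorManager.findEditor(int.class);

        // String to Object
        editor.setAsText("42");
        Object value = editor.getValue(); // Integer 42

        // Object to String
        editor.setValue(13);
        String text = editor.getAsText(); // "13"

        System.out.println(value + " " + text);
    }
}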

Other

There are some specs that use a more implicit notion of conversion and could take advantage of a platform conversion API if there happened to be one. This includes remote EJB and JMS. Both are capable of transferring objects in binary form using what is essentially also a kind of conversion API: Java SE serialization. Finally, JAXB has a number of converters as well, but they are built in and only defined for a fixed set of types.

Conclusion

We've seen that there are quite a number of APIs available in Java EE as well as Java SE that deal with conversion. The APIs we looked at differ somewhat in capabilities, and use different terminology for what are essentially similar concepts. The platform as a whole would certainly benefit from having a single unified conversion API; this could eventually somewhat reduce the size of individual specs, make it easier to have a library of converters available and would surely give the Java EE platform a more consistent feel.
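Purely as a thought experiment, such a unified type could be as simple as the following (a hypothetical sketch, not part of any current spec):


// A hypothetical platform-wide converter type with two generic type
// parameters and direction-neutral method names.
public interface Converter<S, T> {

    T convert(S source);

    S reverse(T target);
}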

Arjan Tijms


Getting the target of value expressions

In the Java EE platform programmers have a way to reference values in beans via textual expressions. These textual expressions are then compiled by the implementation (via the Expression Language, AKA EL spec) to instances of ValueExpression.

E.g. the following EL expression can be used to refer to the named bean "foo" and its property "bar":


#{foo.bar}

Expressions can be chains of arbitrary length, and can include method calls as well. E.g.:


#{foo.bar(1).kaz.zak(test)}

An important aspect of these expressions is that they are highly contextual, specifically where it concerns the top level variables. These consist of the object that starts the chain ("foo" here) and any EL variables used as method arguments ("test" here). Because of this, it's not a totally unknown requirement to want to resolve the expression while it's still in context, in order to obtain the so-called final base and the final property/method, the latter including the resolved and bound parameters.

Now the EL API does provide a method to get the final base and property of an expression if there is one, but this one unfortunately only supports properties, not methods. When method invocations were introduced in EL 2.2 for usage in ValueExpressions and chains (which is subtly different from the MethodExpression that existed before that), this seems to have been done in the most minimal way possible. As a result, a lot of JavaDoc and supporting APIs were seemingly not updated.
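The method in question is ValueExpression.getValueReference(), which for an expression ending in a property works fine (a sketch; the ExpressionFactory and ELContext are assumed to be obtained from the environment, e.g. from the FacesContext in a JSF application):


import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.el.ValueReference;

public class ValueReferenceDemo {

    public static void demo(ExpressionFactory expressionFactory, ELContext elContext) {
        ValueExpression valueExpression = expressionFactory.createValueExpression(
            elContext, "#{foo.bar}", Object.class);

        // Resolves all nodes except the last one; works for a final property,
        // but there's no equivalent for a final method.
        ValueReference reference = valueExpression.getValueReference(elContext);

        System.out.println(reference.getBase());     // the resolved "foo" bean
        System.out.println(reference.getProperty()); // "bar"
    }
}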

For instance, the JavaDoc for ValueExpression still says:

For any of the five methods, the ELResolver.getValue[...] method is used to resolve all properties up to but excluding the last one. This provides the base object.
There is no mention here that ELResolver.invoke is used as well when any of the intermediate nodes in the chain is a method invocation (like bar(1) in #{foo.bar(1).kaz.zak(test)}).

The fact that there's a ValueReference only supporting properties and no corresponding MethodReference is extra curious, since method invocations in chains and ValueExpressions and the ValueReference type were both introduced in EL 2.2.

So is there any hope of getting the final base and method if a ValueExpression happens to be pointing to a method? There appears to be a way, but it's a little tricky. The trick in question consists of using a special tracing ELResolver and taking advantage of the fact that some methods on ValueExpression are specified to resolve the expression "up to but excluding the last [node]". Using this we can use the following approach:

  • Instantiate an EL context which contains the special tracing EL resolver
  • Call a method on the ValueExpression that resolves the chain until the next to last node (e.g. getType()) using the special EL context
  • In the tracing EL resolver count each intermediate call, so when getType() returns the length of the chain is known
  • Call a method on the ValueExpression that resolves the entire chain (e.g. getValue()) using the same special EL context instance
  • When the EL resolver reaches the next to last node (determined by counting intermediate calls again), wrap the return value from ElResolver.getValue or ElResolver.invoke
  • If either ElResolver.getValue or ElResolver.invoke is called again later with our special wrapped type, we know this is the final node and can collect all details that we need; the base, property or method name and the resolved method parameters (if any). All of these are simply passed to us by the EL implementation
The return value wrapping of the next to last node (at call count N) may need some extra explanation. After all, why not just wait until we're called the Nth + 1 time? The issue is that this Nth + 1 call may be for resolving variables that are passed as parameters into the final node, if this final node is a method invocation. The number of such parameters is unknown and each parameter can consist of a chain of arbitrary length.

E.g. consider the following expression:


#{foo.bar.kaz(test.a.b.c(x.r), bean.x.y.z(o).p)}
In such a case the first pass of the approach given above will count the calls up until the point of resolving "bar", which is thus at call count N. If "kaz" were a simple property, our EL resolver would be asked to resolve [return value of "bar"]."kaz" at call count N + 1. However, since "kaz" is not a simple property but a complex method invocation with EL variables, the next call after N will be for resolving the base of the first EL variable used in the method invocation ("test" here).

One may also wonder why we don't "simply" get the textual EL representation of an EL expression, chop off the last node using simple string manipulation and resolve that. The reason is twofold. First, it may work for very simple expressions (like #{a.b.c}), but it doesn't work in general for complex ones (e.g. #{empty foo? a.b.c : x.y.z}). A second issue is that a given ValueExpression instance all too often contains state (like an embedded VariableMapper instance), which is lost when we just get the EL string from a ValueExpression and evaluate that.

The approach outlined above has been implemented in OmniFaces 2.0. For completeness the most important bit of it, the tracing EL resolver is given below:


class InspectorElResolver extends ELResolverWrapper {

    private int passOneCallCount;
    private int passTwoCallCount;

    private Object lastBase;
    private Object lastProperty; // Method name in case VE referenced a method, otherwise property name
    private Object[] lastParams; // Actual parameters supplied to a method (if any)

    private boolean subchainResolving;

    // Marker holder via which we can track our last base. This should become
    // the last base in a next iteration. This is needed because if the very last property is a
    // method node with a variable, we can't track resolving that variable anymore since it will
    // not have been processed by the getType() call of the first pass.
    // E.g. a.b.c(var.foo())
    private FinalBaseHolder finalBaseHolder;

    private InspectorPass pass = InspectorPass.PASS1_FIND_NEXT_TO_LAST_NODE;

    public InspectorElResolver(ELResolver elResolver) {
        super(elResolver);
    }

    @Override
    public Object getValue(ELContext context, Object base, Object property) {

        if (base instanceof FinalBaseHolder) {
            // If we get called with a FinalBaseHolder, which was set in the next to last node,
            // we know we're done and can set the base and property as the final ones.
            lastBase = ((FinalBaseHolder) base).getBase();
            lastProperty = property;

            context.setPropertyResolved(true);
            return ValueExpressionType.PROPERTY;
        }

        checkSubchainStarted(base);

        if (subchainResolving) {
            return super.getValue(context, base, property);
        }

        recordCall(base, property);

        return wrapOutcomeIfNeeded(super.getValue(context, base, property));
    }

    @Override
    public Object invoke(ELContext context, Object base, Object method, Class<?>[] paramTypes, Object[] params) {

        if (base instanceof FinalBaseHolder) {
            // If we get called with a FinalBaseHolder, which was set in the next to last node,
            // we know we're done and can set the base, method and params as the final ones.
            lastBase = ((FinalBaseHolder) base).getBase();
            lastProperty = method;
            lastParams = params;

            context.setPropertyResolved(true);
            return ValueExpressionType.METHOD;
        }

        checkSubchainStarted(base);

        if (subchainResolving) {
            return super.invoke(context, base, method, paramTypes, params);
        }

        recordCall(base, method);

        return wrapOutcomeIfNeeded(super.invoke(context, base, method, paramTypes, params));
    }

    @Override
    public Class<?> getType(ELContext context, Object base, Object property) {

        // getType is only called on the last element in the chain (if the EL
        // implementation actually calls this, which might not be the case if the
        // value expression references a method)
        //
        // We thus do know the size of the chain now, and the "lastBase" and "lastProperty"
        // that were set *before* this call are the next to last now.
        //
        // Alternatively, this method is NOT called by the EL implementation, but then
        // "lastBase" and "lastProperty" are still the next to last.
        //
        // Independent of what the EL implementation does, "passOneCallCount" should thus represent
        // the total size of the call chain minus 1. We use this in pass two to capture the
        // final base, property/method and optionally parameters.

        context.setPropertyResolved(true);

        // Special value to signal that getType() has actually been called (this value is
        // not used by the algorithm now, but may be useful when debugging)
        return InspectorElContext.class;
    }

    private boolean isAtNextToLastNode() {
        return passTwoCallCount == passOneCallCount;
    }

    private void checkSubchainStarted(Object base) {
        if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && base == null && isAtNextToLastNode()) {
            // If "base" is null it means a new chain is being resolved.
            // The main expression chain likely has ended with a method that has one or more EL variables
            // as parameters that now need to be resolved.
            // E.g. a.b().c.d(var1)
            subchainResolving = true;
        }
    }

    private void recordCall(Object base, Object property) {

        switch (pass) {
            case PASS1_FIND_NEXT_TO_LAST_NODE:

                // In the first "find next to last" pass, we'll be collecting the next to last element
                // in an expression.
                // E.g. given the expression a.b().c.d, we'll end up with the base returned by b() and "c" as
                // the last property.

                passOneCallCount++;
                lastBase = base;
                lastProperty = property;

                break;

            case PASS2_FIND_FINAL_NODE:

                // In the second "find final node" pass, we'll be collecting the final node
                // in an expression. We need to take care that we're not actually calling / invoking
                // that last element as it may have a side-effect that the user doesn't want to happen
                // twice (like storing something in a DB etc).

                passTwoCallCount++;

                if (passTwoCallCount == passOneCallCount) {

                    // We're at the same call count as the first phase ended with.
                    // If the chain has resolved the same, we should be dealing with the same base and property now

                    if (base != lastBase || property != lastProperty) {
                        throw new IllegalStateException(
                            "First and second pass of resolver at call #" + passTwoCallCount +
                            " resolved to different base or property.");
                    }

                }

                break;
        }
    }

    private Object wrapOutcomeIfNeeded(Object outcome) {
        if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && finalBaseHolder == null && isAtNextToLastNode()) {
            // We're at the second pass and at the next to last node in the expression chain.
            // "outcome" which we have just resolved should thus represent our final base.

            // Wrap our final base in a special class that we can recognize when the EL implementation
            // invokes this resolver later again with it.
            finalBaseHolder = new FinalBaseHolder(outcome);
            return finalBaseHolder;
        }

        return outcome;
    }

    public InspectorPass getPass() {
        return pass;
    }

    public void setPass(InspectorPass pass) {
        this.pass = pass;
    }

    public Object getBase() {
        return lastBase;
    }

    public Object getProperty() {
        return lastProperty;
    }

    public Object[] getParams() {
        return lastParams;
    }

}
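
Driving the two passes could then look roughly as follows (a sketch; InspectorElContext is assumed to be an ELContext wrapper that installs the InspectorElResolver shown above and delegates setPass(), getBase(), getProperty() and getParams() to it):


public static void inspect(ValueExpression valueExpression, FacesContext facesContext) {

    InspectorElContext inspectorElContext = new InspectorElContext(facesContext.getELContext());

    // Pass 1: resolve up to but excluding the last node, so the resolver
    // learns the length of the chain.
    valueExpression.getType(inspectorElContext);

    // Pass 2: resolve the entire chain; the resolver wraps the outcome of the
    // next to last node and thereby captures the final base, property/method
    // name and parameters without actually invoking the final node.
    inspectorElContext.setPass(InspectorPass.PASS2_FIND_FINAL_NODE);
    valueExpression.getValue(inspectorElContext);

    Object base = inspectorElContext.getBase();
    Object propertyOrMethod = inspectorElContext.getProperty();
    Object[] params = inspectorElContext.getParams();

    // ... use base / propertyOrMethod / params
}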

As seen, the support for ValueExpressions that point to methods is not optimal in the current EL specification. With some effort we can work around this, but arguably such functionality should be present in the specification itself.

Arjan Tijms

Java EE process cycles and server availability

When we normally talk about the Java EE cycle time, we talk about the time it takes between major revisions of the spec. E.g. the time between Java EE 6 and Java EE 7. While this is indeed the leading cycle time, there are two additional cycles that are of major importance:
  1. The time it takes for vendors to release an initial product that implements the new spec revision
  2. The time it takes vendors to stabilize their product (which incidentally is closely tied to the actual user adoption rate)

In this article we'll take a somewhat closer look at the time it takes vendors to release their initial product. But first let's take a quick look at the time between spec releases. The following table lists the Java EE version history and the delta time between versions:

Java EE delta times between releases
Version | Start date | Release date | Days since last release | Days spent on spec
1.2 | - | 12 Dec, 1999 | - | -
1.3 | 18 Feb, 2000 | 24 Sep, 2001 | 653 days (1 year, 9 months) | 584 (1 year, 7 months)
1.4 | 22 Oct, 2001 | 12 Nov, 2003 | 779 days (2 years, 1 month) | 751 (2 years)
5 | 10 May, 2004 | 11 May, 2006 | 911 days (2 years, 6 months) | 731 (2 years)
6 | 16 Jul, 2007 | 10 Dec, 2009 | 1310 days (3 years, 7 months) | 878 (2 years, 4 months)
7 | 14 Mar, 2011 | 28 May, 2013 | 1266 days (3 years, 5 months) | 806 (2 years, 2 months)
8 | 17 Aug, 2014 | ~May, 2017 (*) | 1461 days (4 years) (*) | 1015 (2 years, 9 months) (*)
*(estimated)

As can be seen, the time between releases has been steadily increasing, but seemed to have stabilized at approximately three and a half years. The original plan was to release Java EE 8 at the same pace, meaning we would have expected it around the end of 2016, but this was later changed to H1 2017. "H1" more often than not means the last month of H1 (often certainly not the first 3 months, or otherwise Q1 would have been used). This means around May 2017 is a likely release date, pushing the time to a solid 4 years.

It may be worth emphasizing that the time between releases is not fully spent on Java EE. Typically there is, where it concerns spec work, what one may call The Big Void™ between releases. It's a period of time in which no spec work is being done. This void starts right after the spec is released and the various EGs are disbanded. The time is used differently by everyone, but typically it's used for implementation work, and for cleaning up and refactoring code, project structures, tests and other artifacts.

After some time (~1 year for Java EE 6, ~5 months for Java EE 7) initial discussions start where just some ideas are pitched and the landscape is explored. After that it still takes some time until the work really kicks off for the front runners (~1 year and 5 months for Java EE 6, ~1 year and 3 months for Java EE 7).

Those numbers are however for the front runners; a bunch of sub-specs of Java EE start even later than this, and some of them even finish well before the release date of the main umbrella spec. So while the time between releases seems like a long time, it's important to realize that by far not all of this time is actually spent on the various specifications. As can be seen in the table above, the time actually spent on the specification has been fairly stable at around 2 years. 1.3 was a bit below that and 6 a bit above it, but it's all fairly close to these two years. What has been increasing is the time taken up by The Void (or uptake as some call it); from less than a month between 1.3 and 1.4 to well over a year between 5 and 6, and 6 and 7.

As mentioned previously, finalizing the spec is only one aspect of the entire process. With the exception of GlassFish, the reference implementation (RI) that is made available at the same time that the new spec revision becomes available, the implementation cycle of Java EE starts right after a spec release.

A small complication in tracking Java EE server products is that several of these products are variations of each other, or just different versions taken from the same code line. E.g. WASCE is (was) an intermediate release of Geronimo. JBoss AS 6 is obviously just an earlier version of JBoss AS 7, which is itself an earlier version of JBoss EAP 6 (although JBoss markets it as a separate product). NetWeaver is said to be a version of TomEE, etc.

Also complicating the certification and first version story is that a number of vendors chose to have beta or technical preview versions certified. On one occasion a vendor even certified a snapshot version. Obviously those versions are not intended for any practical (production) use. It's perhaps somewhat questionable that servers that in the eyes of their own vendors are very far from the stability required by their customers can be certified at all.

The following two tables show how long it took the Java EE 6 Full- and Web Profile to be implemented for each server.

Java EE 6 Full Profile server implementation times
Server | Release date | Days since spec released
GlassFish 3.0 | 10 Dec, 2009 | 0
* JEUS 7 Tech Preview 1 | 15 Jan, 2010 | 36
WebSphere 8.0 | 22 June, 2011 | 559 (1 year, 6 months)
* Geronimo 3.0 BETA 1 | 14 November, 2011 | 704 (1 year, 11 months)
WebLogic 12.1.1 | 1 Dec, 2011 | 721 (1 year, 11 months)
Interstage AS 10.1 | 27 December, 2011 | 747 (2 years)
* JBoss AS 7.1 | 17 Feb, 2012 | 799 (2 years, 2 months)
(u)Cosminexus 9.0 | ~16 April, 2012 | 858 (2 years, 4 months)
JEUS 7.0 | ~1 June, 2012 | 904 (2 years, 5 months)
JBoss EAP 6 | 20 June, 2012 | 923 (2 years, 6 months)
Geronimo 3.0 | 13 July, 2012 | 946 (2 years, 7 months)
WebOTX AS 9.1 | 30 May, 2013 | 1267 (3 years, 5 months)
InforSuite AS 9.1 | ~July, 2014 | ~1664 (4 years, 6 months)
* denotes a server that's a tech preview, community, developer preview, beta, etc version

Java EE 6 Web Profile server implementation times
Server | Release date | Days since spec released
* JBoss AS 6.0 | 28 December, 2010 | 322 (10 months)
Resin 4.0.17 | May, 2011 | 507 (1 year, 4 months)
* JBoss AS 7.0 | 12 July, 2011 | 579 (1 year, 7 months)
* TomEE beta | 4 Oct, 2011 | 663 (1 year, 9 months)
TomEE 1.0 | 08 May, 2012 | 880 (2 years, 4 months)
* JOnAS 5.3.0-M8-SNAPSHOT | [14 Nov, 2012 ~ 07 Jan 2013] | 1070~1124 (~3 years)
Liberty 8.5.5 | 14 Jun, 2013 | 1282 (3 years, 6 months)
JOnAS 5.3 | 04 Oct 2013 | 1394 (3 years, 9 months)
* denotes a server that's a tech preview, community, developer preview, beta, etc version

As we can see here, excluding GlassFish and the tech preview of JEUS, it took 1 year and 6 months for the first production ready (according to the vendor!) Java EE 6 full profile server to appear on the market, while most other servers appeared after around two and a half years.

Do note that "production ready according to the vendor" is a state that cannot easily be quantified with respect to quality. What one vendor calls 1.0 Final may correspond to what another vendor calls 0.5 Beta. The above table thus doesn't say that e.g. WebLogic 12.1.1 (production ready according to its vendor) is either more or less stable than JEUS 7 Tech Preview 1 (not production ready according to its vendor).

The Java EE 7 spec was released at 28 May, 2013, which is 522 days (1 year, 5 months) ago at the time of writing. So let's take a look at what the current situation is with respect to available Java EE 7 servers:

Java EE 7 Full Profile server implementation times
Server | Release date | Days since spec released
GlassFish 4.0 | 28 May, 2013 | 0
* JEUS 8 developer preview | ~26 Aug, 2013 | 90 (2 months, 29 days)
* JBoss WildFly 8.0 | 11 Feb, 2014 | 259 (8 months, 14 days)
Hitachi AS (Cosminexus) 10.0 | 19 Dec, 2014 | 570 (1 year, 6 months)
Liberty 8.5.5.6 | 25 Jun, 2015 | 758 (2 years, 1 month)
* denotes a server that's a tech preview, community, developer preview, beta, etc version

Although there are just a few entries, those largely follow the same pattern as the Java EE 6 implementation cycle. (I'll be updating the table above as new certified servers come in)

GlassFish is by definition the first release, while JEUS is again the second one with a developer preview (a pattern that goes all the way back to J2EE 1.2). There's unfortunately no information available on when the JEUS 8 developer preview was exactly released, but a blog posting about it was published on 26 Aug, 2013, so I took that date.

For JBoss the situation for Java EE 7 compared to EE 6 is not really that much different either. WildFly 8 was released after 259 days (the plan was 167 days), which is not that different from JBoss AS 6, which was released after 322 days. One big difference here though is that AS 6 was only certified for the web profile, while in fact practically implementing the full profile. The similarities don't end there, as just as with Java EE 6 the eventual production version (JBoss EAP 6) wasn't based on JBoss AS 6.x, but on the major new version JBoss AS 7. This time around it again strongly looks like JBoss EAP 7 will not be based on JBoss WildFly 8.x, but on the major new version JBoss WildFly 10.

Hitachi AS was the first implementation of Java EE 7 that is commercially supported by its own vendor, but outside Japan Hitachi AS is not that well known. For the rest of the world IBM's Liberty was the first one.

If history is anything to go by, we may see one or two additional Java EE 7 implementations in a few months, while after a little more than a year from now most servers should be available in a Java EE 7 flavor. At the moment of writing it looks like Web Profile implementation TomEE 2.0 indeed isn't that far away, while Oracle WebLogic shouldn't take much longer than a few months.

Arjan Tijms

OmniFaces 2.0 RC1 available for testing

We are happy to announce that we have just released OmniFaces 2.0 release candidate 1.

OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC1 can be tested by adding the following dependency to your pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.0-RC1</version>
</dependency>

Alternatively the jar files can be downloaded directly.

If no major bugs surface we hope to release OmniFaces 2.0 final in about one week from now.

Arjan Tijms

Header based stateless token authentication for JAX-RS

Authentication is a topic that comes up often for web applications. The Java EE spec supports authentication for those via the Servlet and JASPIC specs, but doesn't say too much about how to authenticate for JAX-RS.

Luckily JAX-RS is simply layered on top of Servlets, and one can therefore just use JASPIC's authentication modules for the Servlet Container Profile. There's thus not really a need for a separate REST profile, as there is for SOAP web services.

While using the same basic technologies as authentication modules for web applications, the requirements for modules that are to be used for JAX-RS are a bit different.

JAX-RS is often used to implement an API that is used by scripts. Such scripts typically do not engage in an authentication dialog with the server, i.e. it's rare for an API to redirect to a form asking for credentials, let alone to ask to log in with a social provider.

An even more fundamental difference is that in web apps it's commonplace to establish a session, among others for authentication purposes. While it's possible to do this for JAX-RS as well, it's not exactly a best practice. RESTful APIs are supposed to be fully stateless.

To prevent the need for going into an arbitrary authentication dialog with the server, it's typical for scripts to send their credentials upfront with a request. For this BASIC authentication can be used, which does actually initiate a dialog, albeit a standardised one. Another option is to provide a token as either a request parameter or as an HTTP header. It should go without saying that in both these cases all communication should be done exclusively via https.

Preventing a session from being created can be done in several ways as well. One way is to store the authentication data in an encrypted cookie instead of storing that data in the HTTP session. While this surely works, it does feel somewhat weird to "blindly" accept the authenticated identity from what the client provides. If the encryption is strong enough it *should* be okayish, but still. Another method is to quite simply authenticate every time over again with each request. This however has its own problem, namely the potential for bad performance. An in-memory user store will likely be very fast to authenticate against, but anything involving an external system like a database or LDAP server probably is not.

The performance problem of authenticating with each request can be mitigated though by using an authentication cache. The question is then whether this isn't really the same as creating a session.

While both an (http) session and a cache consume memory at the server, a major difference between the two is that a session is a store for all kinds of data, which includes state, but a cache is only about data locality. A cache is thus by definition never the primary source of data.

What this means is that we can throw data away from a cache at arbitrary times, and the client won't know the difference except for the fact that its next request may be somewhat slower. We can't really do that with session data. Setting a hard limit on the size of a cache is thus a lot easier than it is for a session, and it's not mandatory to replicate a cache across a cluster.
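To illustrate that last point, a hard size limit can be as simple as the following (a minimal sketch using plain Java SE collections; the User type is the application's own, and a real implementation would more likely use a cache library as shown further below):


import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class AuthCache {

    private static final int MAX_ENTRIES = 10_000;

    // Access-ordered map that evicts the least recently used entry when the
    // hard limit is exceeded; evicted users simply authenticate again.
    private final Map<String, User> cache = Collections.synchronizedMap(
        new LinkedHashMap<String, User>(16, 0.75f, true) {
            private static final long serialVersionUID = 1L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<String, User> eldest) {
                return size() > MAX_ENTRIES;
            }
        });

    public User get(String token) {
        return cache.get(token);
    }

    public void put(String token, User user) {
        cache.put(token, user);
    }
}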

Still, as with many things it's a trade off; having zero data stored at the server, but having a cookie sent along with the request and needing to decrypt that every time (which for strong encryption can be computationally expensive), or having some data at the server (in a very manageable way), but without the uneasiness of directly accepting an authenticated state from the client.

Here we'll be giving an example of a general stateless auth module that uses header based token authentication and authenticates with each request. This is combined with an application level component that processes the token and maintains a cache. The auth module is implemented using JASPIC, the Java EE standard SPI for authentication. The example uses a utility library that I'm incubating called OmniSecurity. This library is not a security framework itself, but provides several convenience utilities for the existing Java EE security APIs (like OmniFaces does for JSF and Guava does for Java).

One caveat is that the example assumes CDI is available in an authentication module. In practice this is the case when running on JBoss, but not when running on most other servers. Another caveat is that OmniSecurity is not yet stable or complete. We're working towards an 1.0 version, but the current version 0.6-ALPHA is as the name implies just an alpha version.

The module itself looks as follows:


public class TokenAuthModule extends HttpServerAuthModule {

    private final static Pattern tokenPattern = compile("OmniLogin\\s+auth\\s*=\\s*(.*)");

    @Override
    public AuthStatus validateHttpRequest(HttpServletRequest request, HttpServletResponse response, HttpMsgContext httpMsgContext) throws AuthException {

        String token = getToken(request);
        if (!isEmpty(token)) {

            // If a token is present, authenticate with it whether this is strictly required or not.

            TokenAuthenticator tokenAuthenticator = getReferenceOrNull(TokenAuthenticator.class);
            if (tokenAuthenticator != null) {

                if (tokenAuthenticator.authenticate(token)) {
                    return httpMsgContext.notifyContainerAboutLogin(tokenAuthenticator.getUserName(), tokenAuthenticator.getApplicationRoles());
                }
            }
        }

        if (httpMsgContext.isProtected()) {
            return httpMsgContext.responseNotFound();
        }

        return httpMsgContext.doNothing();
    }

    private String getToken(HttpServletRequest request) {
        String authorizationHeader = request.getHeader("Authorization");
        if (!isEmpty(authorizationHeader)) {

            Matcher tokenMatcher = tokenPattern.matcher(authorizationHeader);
            if (tokenMatcher.matches()) {
                return tokenMatcher.group(1);
            }
        }

        return null;
    }
}
Below is a quick primer on Java EE's authentication modules:
A server auth module (SAM) is not entirely unlike a servlet filter, albeit one that is called before every other filter. Just as a servlet filter it's called with an HttpServletRequest and HttpServletResponse, is capable of including and forwarding to resources, and can wrap both the request and the response. A key difference is that it also receives an object via which it can pass a username and optionally a series of roles to the container. These will then become the authenticated identity, i.e. the username that is passed to the container here will be what HttpServletRequest.getUserPrincipal().getName() returns. Furthermore, a server auth module doesn't control the continuation of the filter chain by calling or not calling FilterChain.doFilter(), but by returning a status code.

In the example above the authentication module extracts a token from the request. If one is present, it obtains a reference to a TokenAuthenticator, which does the actual authentication of the token and provides a username and roles if the token is valid. It's not strictly necessary to have this separation and the authentication module could just as well contain all required code directly. However, just like the separation of responsibilities in MVC, it's typical in authentication to have a separation between the mechanism and the repository. The first contains the code that interacts with the environment (aka the authentication dialog, aka authentication messaging), while the latter doesn't know anything about an environment and only keeps a collection of users and roles that are accessed via some set of credentials (e.g. username/password, keys, tokens, etc).
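The TokenAuthenticator plays the repository role here. Going by the methods used in this article, its contract looks roughly as follows (a sketch; the actual OmniSecurity interface may contain additional methods):


import java.util.List;

public interface TokenAuthenticator {

    // Returns true when the given token maps to a known user
    boolean authenticate(String token);

    // Only meaningful after a successful authenticate() call
    String getUserName();

    List<String> getApplicationRoles();
}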

If the token is found to be valid, the authentication module retrieves the username and roles from the authenticator and passes these to the container. Whenever an authentication module does this, it's supposed to return the status "SUCCESS". By using the HttpMsgContext this requirement is largely made invisible; the code just returns whatever HttpMsgContext.notifyContainerAboutLogin returns.

If authentication did not happen for whatever reason, it depends on whether the resource (URL) that was accessed is protected (requires an authenticated user) or is public (does not require an authenticated user). In the first situation we always return a 404 to the client. This is a general security precaution. According to HTTP we should actually return a 403 here, but if we did, users could attempt to guess what the protected resources are. For applications where it's already clear what all the protected resources are, it would make more sense to indeed return that 403. If the resource is a public one, the code "does nothing". Since authentication modules in Java EE need to return something, and there's no status code that indicates nothing should happen, doing nothing in fact requires a tiny bit of work. Luckily this work is largely abstracted by HttpMsgContext.doNothing().

Note that the TokenAuthModule as shown above is already implemented in the OmniSecurity library and can be used as is. The TokenAuthenticator however has to be implemented by user code. An example of an implementation is shown below:


@RequestScoped
public class APITokenAuthModule implements TokenAuthenticator {

    @Inject
    private UserService userService;

    @Inject
    private CacheManager cacheManager;

    private User user;

    @Override
    public boolean authenticate(String token) {
        try {
            Cache<String, User> usersCache = cacheManager.getDefaultCache();

            User cachedUser = usersCache.get(token);
            if (cachedUser != null) {
                user = cachedUser;
            } else {
                user = userService.getUserByLoginToken(token);
                usersCache.put(token, user);
            }
        } catch (InvalidCredentialsException e) {
            return false;
        }

        return true;
    }

    @Override
    public String getUserName() {
        return user == null ? null : user.getUserName();
    }

    @Override
    public List<String> getApplicationRoles() {
        return user == null ? emptyList() : user.getRoles();
    }

    // (Two empty methods omitted)
}
This TokenAuthenticator implementation is injected with both a service to obtain users from, as well as a cache instance (Infinispan was used here). The code simply checks whether a User instance associated with a token is already in the cache, and if it's not, gets it from the service and puts it in the cache. The User instance is subsequently used to provide a user name and roles.

Installing the authentication module can be done during startup of the container via a Servlet context listener as follows:


@WebListener
public class SamRegistrationListener extends BaseServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Jaspic.registerServerAuthModule(new TokenAuthModule(), sce.getServletContext());
    }
}
After installing the authentication module as outlined in this article in a JAX-RS application, it can be tested as follows:

curl -vs -H "Authorization: OmniLogin auth=ABCDEFGH123" https://localhost:8080/api/foo

As shown in this article, adding an authentication module for JAX-RS that's fully stateless and doesn't store an authenticated state on the client is relatively straightforward using Java EE authentication modules. Big caveats are that the most straightforward approach uses CDI which is not always available in authentication modules (in WildFly it's available), and that the example uses the OmniSecurity library to simplify some of JASPIC's arcane native APIs, but OmniSecurity is still only in an alpha status.

Arjan Tijms

OmniFaces 2.0 RC2 available for testing

After an intense debugging session following the release of OmniFaces 2.0, we have decided to release one more release candidate; OmniFaces 2.0 RC2.

For RC2 we mostly focused on TomEE 2.0 compatibility. Even though TomEE 2.0 is only available in a SNAPSHOT release, we're happy to see that it passed almost all of our tests and was able to run our showcase application just fine. The only place where it failed was with the viewParamValidationFailed page, but this appeared to be an issue in MyFaces and unrelated to TomEE itself.

To repeat from the RC1 announcement: OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC2 can be tested by adding the following dependency to your pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.0-RC2</version>
</dependency>

Alternatively the jar files can be downloaded directly.

We're currently investigating one last issue; if that's resolved and no other major bugs appear we'd like to release OmniFaces 2.0 at the end of this week.

Arjan Tijms

OmniFaces 2.0 released!

After a poll regarding the future dependencies of OmniFaces 2.0 and two release candidates, we're proud to announce that today we've finally released OmniFaces 2.0.

OmniFaces 2.0 is a direct continuation of OmniFaces 1.x, but has started to build on newer dependencies. We also took the opportunity to do a little refactoring here and there (specifically noticeable in the Events class).

The easiest way to use OmniFaces is via Maven by adding the following to pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.0</version>
</dependency>

A detailed description of the biggest items of this release can be found on the blog of BalusC.

One particular new feature not mentioned there is a new capability that has been added to <o:validateBean>: class level bean validation. While JSF core and OmniFaces both have had a validateBean for some time, one thing it curiously did not do, despite its name, is actually validate a bean. Instead, those existing versions just controlled various aspects of bean validation. Bean validation itself was then only applied to individual properties of a bean, namely those that were bound to input components.

With OmniFaces 2.0 it's now possible to specify that a bean should be validated at the class level. The following gives an example of this:


<h:inputText value="#{bean.product.item}" />
<h:inputText value="#{bean.product.order}" />

<o:validateBean value="#{bean.product}" validationGroups="com.example.MyGroup" />

Using the existing bean validation integration of JSF, only product.item and product.order can be validated, since these are the properties that are directly bound to an input component. Using <o:validateBean> the product itself can be validated as well, and this will happen at the right place in the JSF lifecycle. The right place in the lifecycle means the "process validations" phase. True to the way JSF works, if validation fails the actual model will not be updated. In order to prevent this update, class level bean validation is performed on a copy of the actual product (with a plug-in structure to choose between multiple ways of copying the model object).

More information about this class level bean validation can be found on the associated showcase page. A complete overview of all that's new can be found on the what's new page.

Arjan Tijms

JSF and MVC 1.0, a comparison in code

One of the new specs that will debut in Java EE 8 will be MVC 1.0, a second MVC framework alongside the existing MVC framework JSF.

A lot has been written about this. Discussions have mostly been about the why, whether it isn't introduced too late in the game, and what the advantages (if any) above JSF exactly are. Among the advantages that were initially mentioned were the ability to have different templating engines, have better performance and the ability to be stateless. Discussions have furthermore also been about the name of this new framework.

This name can be somewhat confusing. Namely, the term MVC to contrast with JSF is perhaps technically not entirely accurate, as both are MVC frameworks. The flavor of MVC intended to be implemented by MVC 1.0 is actually "action-based MVC", most well known among Java developers as "MVC the way Spring MVC implements it". The flavor of MVC that JSF implements is "Component-based MVC". Alternative terms for this are MVC-push and MVC-pull.

One can argue that JSF since 2.0 has been moving to a more hybrid model; view parameters, the PreRenderView event and view actions have been key elements of this, but the best practice of having a single backing bean back a single view, and things like injectable request parameters and eager request scoped beans, have been contributing to this as well. The discussion of component-based MVC vs action-based MVC is therefore a little less black and white than it may initially seem, but of course at its core JSF clearly remains a component-based MVC framework.

When people took a closer look at the advantages mentioned above it became quickly clear they weren't quite specific to action-based MVC. JSF most definitely supports additional templating engines, there's a specific plug-in mechanism for that called the VDL (View Declaration Language). Stacked up against an MVC framework, JSF actually performs rather well, and of course JSF can be used stateless.

So the official motivation for introducing a second MVC framework in Java EE is largely not about a specific advantage that MVC 1.0 will bring to the table, but first and foremost about having a "different" approach. Depending on one's use case, either one of the approaches can be better, or suit one's mental model (perhaps based on experience) better, but very few claims are made about which approach is actually better.

Here we're also not going to investigate which approach is better, but will take a closer look at two actual code examples where the same functionality is implemented by both MVC 1.0 and JSF. Since MVC 1.0 is still in its early stages I took code examples from Spring MVC instead. It's expected that MVC 1.0 will be rather close to Spring MVC, not as to the actual APIs and plumbing used, but with regard to the overall approach and idea.

As I'm not a Spring MVC user myself, I took the examples from a Reddit discussion about this very topic. They are shown and discussed below:

CRUD

The first example is about a typical CRUD use case. The Spring controller is given first, followed by a backing bean in JSF.

Spring MVC


@Named
@RequestMapping("/appointments")
public class AppointmentsController {

    @Inject
    private AppointmentBook appointmentBook;

    @RequestMapping(value = "/new", method = RequestMethod.GET)
    public String getNewForm(Model model) {
        model.addAttribute("appointment", new Appointment());
        return "appointment-edit";
    }

    @RequestMapping(value = "/new", method = RequestMethod.POST)
    public String add(@Valid Appointment appointment, BindingResult result, RedirectAttributes redirectAttributes) {
        if (result.hasErrors()) {
            return "appointments/new";
        }
        appointmentBook.addAppointment(appointment);
        redirectAttributes.addFlashAttribute("message", "Successfully added " + appointment.getTitle());

        return "redirect:/appointments";
    }
}

JSF


@Named
@ViewScoped
public class NewAppointmentsBacking {

    @Inject
    private AppointmentBook appointmentBook;

    private Appointment appointment = new Appointment();

    public Appointment getAppointment() {
        return appointment;
    }

    public String add() {
        appointmentBook.addAppointment(appointment);
        addFlashMessage("Successfully added " + appointment.getTitle());

        return "/appointments?faces-redirect=true";
    }
}

As can be seen from the two code examples, there are at a first glance quite a number of similarities. However there are also a number of fundamental differences that are perhaps not immediately obvious.

Starting with the similarities, both versions are @Named and have the same service injected via the same @Inject annotation. When a URL is requested (via a GET) then in both versions there's a new Appointment instantiated. In the Spring version this happens in getNewForm(), in the JSF version this happens via the instance field initializer. Both versions subsequently make this instance available to the view. In the Spring MVC version this happens by setting it as an attribute of the model object that's passed in, while in the JSF version this happens via a getter.

The view typically contains a form where a user is supposed to edit various properties of the Appointment shown above. When this form is posted back to the server, in both versions an add() method is called where the (edited) Appointment instance is saved via the service that was previously injected and a flash message is set.

Finally both versions return an outcome that redirects the user to a new page (PRG pattern). Spring MVC uses the syntax "redirect:/appointments" for this, while JSF uses "/appointments?faces-redirect=true" to express the same thing.

Despite the large number of similarities as observed above, there is a big fundamental difference between the two; the class shown for Spring MVC represents a controller. It's mapped directly to a URL and it's pretty much the first thing that is invoked. All of the above runs without having determined what the view will be. Values computed here will be stored in a contextual object and a view is selected. We can think of this store as pushing values (the view didn't ask for it, since it's not even selected at this point). Hence the alternative name "MVC push" for this approach.

The class shown for the JSF example is NOT a controller. In JSF the controller is provided by the framework. It selects a view based on the incoming URL and the outcome of a ResourceHandler. This will cause a view to execute, and as part of that execution a (backing) bean at some point will be pulled in. Only after this pull has been done will the logic of the class in question start executing. Because of this the alternative name for this approach is "MVC pull".

Over to the concrete differences; in the Spring MVC sample instantiating the Appointment had to be explicitly mapped to a URL and the view to be rendered afterwards is explicitly defined. In the JSF version, both URL and view are defaulted; it's the view from which the bean is pulled. A backing bean can override the default view to be rendered by using the aforementioned view action. This gives it some of the "feel" of a controller, but doesn't change the fundamental fact that the backing bean had to be pulled into scope by the initial view first (things like @Eager in OmniFaces do blur the lines further by instantiating beans before a view pulls them in).

The post back case shows something similar. In the Spring version the add() method is explicitly mapped to a URL, while in the JSF version it corresponds to an action method of the view that pulled the bean in.

There's another difference with respect to validation. In the Spring MVC example there's an explicit check to see if validation has failed and an explicit selection of a view to display errors. In this case that view is the same one again ("appointments/new"), but it's still provided explicitly. In the JSF example there's no explicit check. Instead, the code relies on the default of staying on the same view and not invoking the action method. In effect, the exact same thing happens in both cases but the mindset to get there is different.

Dynamically loading images

The second example is about a case where a list of images is rendered first and where subsequently the content of those images is dynamically provided by the beans in question. The Spring code is again given first, followed by the JSF code.

Spring MVC


<c:forEach items="${thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <img src="/thumbnails/${thumbnail.id}" />
        </div>
        <c:out value="${thumbnail.caption}" />
    </div>
</c:forEach>

@Controller
public class ThumbnailsController {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView images() {
        ModelAndView mv = new ModelAndView("images");
        mv.addObject("thumbnails", thumbnailsDAO.getThumbnails());
        return mv;
    }

    @RequestMapping(value = "/thumbnails/{id}", method = RequestMethod.GET, produces = "image/jpeg")
    public @ResponseBody byte[] thumbnail(@PathVariable("id") long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

JSF


<ui:repeat value="#{thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <o:graphicImage value="#{thumbnailsBacking.thumbnail(thumbnail.id)}" />
        </div>
        #{thumbnail.caption}
    </div>
</ui:repeat>

@Model
public class ThumbnailsBacking {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @Produces @RequestScoped @Named("thumbnails")
    public List<Thumbnail> getThumbnails() {
        return thumbnailsDAO.getThumbnails();
    }

    public byte[] thumbnail(Long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

Starting with the similarities again, we see that the markup for both views is fairly similar in structure. Both have an iteration tag that takes values from an input list called thumbnails and during each round of the iteration the ID of each individual thumbnail is used to render an image link.

Both the classes for Spring MVC and JSF call getThumbnails() on the injected DAO for the initial GET request, and both have a nearly identical thumbnail() method where getThumbnail(id) is called on the DAO in response to each request for a dynamic image that was rendered before.

Both versions also show that each framework has an alternative way to do what they do. In the Spring MVC example we see that instead of having a Model passed-in and returning a String based outcome, there's an alternative version that uses a ModelAndView instance, where the outcome is set on this object.

In the JSF version we see that instead of having an instance field + getter, there's an alternative version based on a producer. In that variant the data is made available under the EL name "thumbnails", just as in the Spring MVC version.

On to the differences, we see that the Spring MVC version is again using explicit URLs. The otherwise identical thumbnail() method has an extra annotation for specifying the URL to which it's mapped. This very URL is the one that's used in the img tag in the view. JSF on the other hand doesn't ask to map the method to a URL. Instead, there's an EL expression used to point directly to the method that delivers the image content. The component (o:graphicImage here) then generates the URL.

While the producer method that we showed in the JSF example (getThumbnails()) looked like JSF was declaratively pushing a value, it's in fact still a pull. The method will not be called, and therefore a value not produced, until the EL variable "thumbnails" is resolved for the first time.

Another difference is that the view in the JSF example contains two components (ui:repeat and o:graphicImage) that adhere to JSF's component model, and that the view uses a templating language (Facelets) that is part of the JSF spec itself. Spring MVC (of course) doesn't specify a component model, and while it could theoretically come with its own templating language it doesn't have that one either. Instead, Spring MVC relies on external templating systems, e.g. JSP or Thymeleaf.

Finally, a remarkable difference is that the two very similar classes ThumbnailsController and ThumbnailsBacking are annotated with @Controller respectively @Model, two completely opposite responsibilities of the MVC pattern. Indeed, in JSF everything that's referenced by the view (via EL expressions) is officially called the model. ThumbnailsBacking is from JSF's point of view the model. In practice the lines are a bit more blurred, and the backing bean is more akin to a plumbing component that sits between the model, view and controller.

Conclusion

We haven't gone in depth into what it means to have a component model and what advantages that has, nor have we discussed in any detail what a RESTful architecture brings to the table. In passing we mentioned the concept of state, but did not look at that either. Instead, we mainly focused on code examples for two different use cases and compared and contrasted those. In that comparison we tried as much as possible to refrain from any judgement about which approach is better, component-based MVC or action-oriented MVC (as I'm one of the authors of the JSF utility library OmniFaces and a member of the JSF EG, such a judgement would always be biased of course).

We saw that while the code examples at first glance have remarkable similarities there are in fact deep fundamental differences between the two approaches. It's an open question whether the future is with either one of those two, with a hybrid approach of them, or with both living next to each other. Java EE 8 at least will opt for that last option and will have both a component based MVC framework and an action-oriented one.

Arjan Tijms


Java EE authorization - JACC revisited part I

A while ago we took a look at container authorization in Java EE, which we saw was taken care of by a specification called JACC.

We saw that JACC offered a clear standardized hook into what's often seen as a completely opaque and container specific process, but that it also had a number of disadvantages. Furthermore we provided a partial (non-working) implementation of a JACC provider to illustrate the idea.

In this part of the article we'll revisit JACC by taking a closer look at some of the mentioned disadvantages and dive a little deeper into the concept of role mapping. In part II we'll be looking at the first element of a more complete implementation of the JACC provider that was shown before.

To refresh our memory, the following were the disadvantages that we previously discovered:

  • Arcane & verbose API
  • No portable way to see what the groups/roles are in a collection of Principals
  • No portable way to use the container's role to group mapper
  • No default implementation of a JACC provider active or even available
  • Mixing Java SE and EE permissions (which protect against totally different things) when the security manager is used
  • JACC provider has to be installed for the entire AS; can not be registered from or for a single application

As it later appeared though, there's a little more to say about a few of these items.

Role mapping

While it's indeed the case that there's no portable way to get to either the groups or the container's role to group mapper, it appeared there was something called the primary use case for which JACC was originally conceived.

For this primary use case the idea was that a custom JACC provider would be coupled with a (custom) authentication module that only provided a caller principal (which contains the user name). That JACC provider would then contact an (external) authorization system to fetch authorization data based on this single caller principal. This authorization data can then be a collection of roles or anything that the JACC provider can either locally map to roles, or something to which it can map the permissions that a PolicyConfiguration initially collects. For this use case it's indeed not necessary to have portable access to groups or a role to groups mapper.

Building on this primary use case, it also appears that JASPIC auth modules in fact do have a means to put a specific implementation of a caller principal into the subject. JASPIC being JASPIC, with its bare minimum of TCK tests, this of course didn't work on all containers, and there's still a gap present where the container is allowed to "map" that principal (whatever that means), but the basic idea is there. A JACC provider that knows about the auth module being used can then unambiguously pick out the caller principal from the set of principals in a subject. All of this would be so much simpler though if the caller principal was simply standardized in the first place, but alas.
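
As a rough sketch, such an auth module hands over its specific caller principal via the standard CallerPrincipalCallback (a fragment only; MyCallerPrincipal is a made-up type, and handler is the CallbackHandler the container passed to the module's initialize method):

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.callback.CallerPrincipalCallback;

// Fragment of a ServerAuthModule
public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject, Subject serviceSubject) throws AuthException {

    // ... authenticate the caller in some way first ...

    try {
        // Put OUR caller principal type into the subject. A JACC provider that knows
        // about this auth module can then unambiguously pick it out again (modulo the
        // container "mapping" it, as mentioned above).
        handler.handle(new Callback[] {
            new CallerPrincipalCallback(clientSubject, new MyCallerPrincipal("someuser"))
        });
    } catch (Exception e) {
        throw (AuthException) new AuthException().initCause(e);
    }

    return AuthStatus.SUCCESS;
}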

To illustrate the basic process for a custom JACC provider according to this primary use case:


Auth module——provides——► Caller Principal (name = "someuser")

JACC provider——contacts—with—"someuser"——► Authorization System

Authorization System——returns——► roles ["admin", "architect"]

JACC provider——indexes—with—"admin"——► rolesToPermissions
JACC provider——indexes—with—"architect"——► rolesToPermissions

As can be seen above there is no need for role mapping in this primary use case.
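
To make this concrete, the following is a minimal sketch of how a provider following this primary use case could make its access decisions. ExternalAuthorizationSystem and rolesToPermissions are stand-ins for the provider's own plumbing; this is an illustration of the idea, not an actual JACC API:

import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Principal;
import java.util.Collection;
import java.util.Map;

public class PrimaryUseCaseSketch {

    // Stand-in for the (external) authorization system mentioned above
    interface ExternalAuthorizationSystem {
        Collection<String> getRoles(String callerName);
    }

    private ExternalAuthorizationSystem externalAuthorizationSystem;

    // The permissions the PolicyConfiguration initially collected, keyed by role
    private Map<String, PermissionCollection> rolesToPermissions;

    // Decides whether the caller may perform the action expressed by the requested permission
    public boolean implies(Principal callerPrincipal, Permission requestedPermission) {

        // Fetch the authorization data based on the caller principal alone
        Collection<String> roles = externalAuthorizationSystem.getRoles(callerPrincipal.getName());

        // Index rolesToPermissions with each returned role; no group to role mapping involved
        for (String role : roles) {
            PermissionCollection permissions = rolesToPermissions.get(role);
            if (permissions != null && permissions.implies(requestedPermission)) {
                return true;
            }
        }

        return false;
    }
}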

For the default implementation of a proprietary JACC provider that ships with a Java EE container the basic process is a little bit different, as shown next:

Role to group mapping in place

Role        | Groups
"admin"     | ["admin-group"]
"architect" | ["architect-group"]
"expert"    | ["expert-group"]


JACC provider——calls—with—["admin", "architect", "expert"] ——► Role Mapper
Role mapper——returns——► ["admin-group", "architect-group", "expert-group"]

Auth module——provides——► Caller Principal (name = "someuser")
Auth module——provides——► Group Principal (name = "admin-group", name = "architect-group")

JACC provider maps "admin-group" to "admin"
JACC provider maps "architect-group "to "architect"

JACC provider——indexes—with—"admin"——► rolesToPermissions
JACC provider——indexes—with—"architect"——► rolesToPermissions

In the second use case the role mapper and possibly knowledge of which principals represent groups is needed, but since this JACC provider is the one that ships with a Java EE container it's arguably "allowed" to use proprietary techniques.

Do note that the mapping technique shown maps a subject's groups to roles, and uses that to check permissions. While this may conceptually be the most straightforward approach, it's not the only way.

Groups to permission mapping

An alternative approach is to remap the roles-to-permission collection to a groups-to-permission collection using the information from the role mapper. This is what both GlassFish and WebLogic implicitly do when they write out their granted.policy file.

The following is an illustration of this process. Suppose we have a role to permissions map as shown in the following table:

Role-to-permissions

Role    | Permission
"admin" | [WebResourcePermission("/protected/*", "GET")]

This means a user that's in the logical application role "admin" is allowed to do a GET request for resources in the /protected folder. Now suppose the role mapper gave us the following role to group mapping:

Role-to-groups

Role    | Groups
"admin" | ["admin-group", "adm"]

This means the logical application role "admin" is mapped to the groups "admin-group" and "adm". What we can now do is first reverse the last mapping into a group-to-roles map as shown in the following table:

Group-to-roles

Group         | Roles
"admin-group" | ["admin"]
"adm"         | ["admin"]

Subsequently we can then iterate over this new map and look up the permissions associated with each role in the existing role to permissions map to create our target group to permissions map. This is shown in the table below:

Group-to-permissions

Group         | Permissions
"admin-group" | [WebResourcePermission("/protected/*", "GET")]
"adm"         | [WebResourcePermission("/protected/*", "GET")]

Finally, consider a current subject with principals as shown in the next table:

Subject's principals

Type                                | Name
com.somevendor.CallerPrincipalImpl  | "someuser"
com.somevendor.GroupPrincipalImpl   | "admin-group"
com.somevendor.GroupPrincipalImpl   | "architect-group"

Given the above shown group to permissions map and subject's principals, a JACC provider can now iterate over the group principals that belong to this subject and via the map check each such group against the permissions for that group. Note that the JACC provider does have to know that com.somevendor.GroupPrincipalImpl is the principal type that represents groups.
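
In code, the remapping and the final check could look roughly as follows (a sketch only; the class, map types and the com.somevendor.GroupPrincipalImpl check mirror the tables above):

import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Principal;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.security.auth.Subject;

public class GroupToPermissionsSketch {

    // Remap the role-to-permissions map to a group-to-permissions map, using the
    // role-to-groups map obtained from the role mapper
    public static Map<String, Permissions> remap(Map<String, Permissions> roleToPermissions, Map<String, List<String>> roleToGroups) {

        // First reverse role-to-groups into group-to-roles
        Map<String, List<String>> groupToRoles = new HashMap<>();
        for (Map.Entry<String, List<String>> roleEntry : roleToGroups.entrySet()) {
            for (String group : roleEntry.getValue()) {
                if (!groupToRoles.containsKey(group)) {
                    groupToRoles.put(group, new ArrayList<String>());
                }
                groupToRoles.get(group).add(roleEntry.getKey());
            }
        }

        // Then look up the permissions of each role to build group-to-permissions
        Map<String, Permissions> groupToPermissions = new HashMap<>();
        for (Map.Entry<String, List<String>> groupEntry : groupToRoles.entrySet()) {
            Permissions permissions = new Permissions();
            for (String role : groupEntry.getValue()) {
                PermissionCollection permissionsOfRole = roleToPermissions.get(role);
                if (permissionsOfRole != null) {
                    for (Permission permission : Collections.list(permissionsOfRole.elements())) {
                        permissions.add(permission);
                    }
                }
            }
            groupToPermissions.put(groupEntry.getKey(), permissions);
        }

        return groupToPermissions;
    }

    // The eventual check: iterate over the subject's group principals and test each
    // group's permission collection. Note that we have to know which principal type
    // represents groups (here the fictitious vendor type from the table above).
    public static boolean implies(Subject subject, Map<String, Permissions> groupToPermissions, Permission requested) {
        for (Principal principal : subject.getPrincipals()) {
            if ("com.somevendor.GroupPrincipalImpl".equals(principal.getClass().getName())) {
                PermissionCollection permissions = groupToPermissions.get(principal.getName());
                if (permissions != null && permissions.implies(requested)) {
                    return true;
                }
            }
        }
        return false;
    }
}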

Principal to permission mapping

Yet another alternative approach is to remap the roles-to-permission collection to a principals-to-permission collection, again using the information from the role mapper. This is what both Geronimo and GlassFish's optional SimplePolicyProvider do.

Principal to permission mapping basically works like group to permission mapping, except that the JACC provider doesn't need to have knowledge of the principals involved. For the JACC provider those principals are pretty much opaque then, and it doesn't matter if they represent groups, callers, or something else entirely. All the JACC provider does is compare (using equals() or implies()) principals in the map against those in the subject.

The following code fragment taken from Geronimo 3.0.1 demonstrates the mapping algorithm:


for (Map.Entry<Principal, Set<String>> principalEntry : principalRoleMapping.entrySet()) {
    Principal principal = principalEntry.getKey();
    Permissions principalPermissions = principalPermissionsMap.get(principal);

    if (principalPermissions == null) {
        principalPermissions = new Permissions();
        principalPermissionsMap.put(principal, principalPermissions);
    }

    Set<String> roleSet = principalEntry.getValue();
    for (String role : roleSet) {
        Permissions permissions = rolePermissionsMap.get(role);
        if (permissions == null) {
            continue;
        }
        for (Enumeration<Permission> rolePermissions = permissions.elements(); rolePermissions.hasMoreElements();) {
            principalPermissions.add(rolePermissions.nextElement());
        }
    }
}

In the code fragment above rolePermissionsMap is the map the provider created before the mapping, principalRoleMapping is the mapping from the role mapper, and principalPermissionsMap is the final map that's used for access decisions.

Default JACC provider

Several full Java EE implementations do not ship with an activated JACC provider, which makes it extremely troublesome for portable Java EE applications to just make use of JACC for things like asking if a user will be allowed to access, say, a URL.
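
For reference, such a check looks roughly as follows with plain JACC (assuming the caller's principals have been obtained in some way):

import java.security.CodeSource;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

import javax.security.jacc.WebResourcePermission;

public class CanAccessSketch {

    // Will the caller (represented by its principals) be allowed to GET the given URL?
    public static boolean canAccess(Principal[] principals, String uri) {
        return Policy.getPolicy().implies(
            new ProtectionDomain(
                new CodeSource(null, (Certificate[]) null),
                null, null,
                principals),
            new WebResourcePermission(uri, "GET"));
    }
}

Naturally, this only yields meaningful results when a JACC provider is actually installed and used by the container, which is exactly the problem discussed here.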

As it appears, Java EE implementations are actually required to ship with an activated JACC provider and are even required to use it for access decisions. Clearly there's no TCK test for this, so just as we saw with JASPIC, vendors take different approaches in the absence of such a test. In the end it doesn't matter so much what the spec says, as it's the TCK that has the final word on compatibility certification. In this case, the TCK clearly says it's NOT required, while as mentioned the spec says it is. Why both JASPIC and JACC have historically been tested so little is still not entirely clear, but I have it on good authority (no pun ;)) that the situation is going to be improved.

So while this is theoretically not a spec issue, it is still very much a practical issue. I looked at 6 Java EE implementations and found the following:

JACC default providers

Server          | JACC provider present | JACC provider activated | Vendor discourages activating JACC
JBoss EAP 6.3   | V                     | V                       | X
GlassFish 4.1   | V                     | V                       | X
Geronimo 3.0.1  | V                     | V                       | X
WebLogic 12.1.3 | V                     | X                       | V
JEUS 8 preview  | V                     | X                       | V
WebSphere 8.5   | X                     | X                       | - (no provider present, so nothing to discourage)

As can be seen, only half of the servers investigated have JACC actually enabled. WebLogic 12.1.3 and JEUS 8 preview both do ship with a JACC policy provider, but it has to be enabled explicitly. Both WebLogic and JEUS 8 somewhat advise against using JACC in their documentation. TmaxSoft warns in its JEUS 7 security manual (there isn't one for JEUS 8 yet) that the default JACC provider that will be activated is mainly for testing, and advises against using it for real production usage.

WebSphere does not even ship with any default JACC policy provider, at least not that I could find. There's only a Tivoli Access Manager client, for which you have to install a separate external authorization server.

I haven't yet investigated Interstage AS, Cosminexus and WebOTX, but I hope to be able to look at them at a later stage.

Conclusion

Given the historical background of JACC it's a little more understandable why access to the role mapper was never standardized. Still, it is something that's needed for use cases other than the historical primary use case, so after all this time it's still something that would be welcome to have. Another huge disadvantage of JACC, the fact that it's simply not always there in Java EE, appeared to be yet another case of incomplete TCK coverage.

Continue reading at part II.

Arjan Tijms

Java EE authorization - JACC revisited part II

This is the second part of a series where we revisit JACC after taking an initial look at it last year. In the first part we somewhat rectified a few of the disadvantages that were initially discovered and looked at various role mapping strategies.

In this second part we'll take an in-depth look at obtaining the container specific role mapper and the container specific way of how a JACC provider is deployed. In the next and final part we'll be bringing it all together and present a fully working JACC provider.

Container specifics

The way in which to obtain the role mapper and what data it exactly provides differs greatly for each container, and is something that containers don't really document either. Also, although the two system properties that need to be specified for the two JACC artifacts are standardized, it's often not at all clear how the jar file containing the JACC provider implementation classes has to be added to the container's class path.
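
For reference, these are the two standardized system properties, shown here with the example classes that are used later in this article:

-Djavax.security.jacc.policy.provider=test.TestPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory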

After much research I obtained the details on how to do this for the following servers:

  • GlassFish 4.1
  • WebLogic 12.1.3
  • Geronimo 3.0.1
This list is admittedly limited, but as it appeared, the process of finding out these details can be rather time-consuming and frankly maddening. Given the amount of time that already went into this research I decided to leave it at these three, but hope to look into additional servers at a later date.

The JACC provider that we'll present in the next part will use a RoleMapper class that at runtime tries to obtain the native mapper from each known server using reflection (so as to avoid compile time dependencies). Whatever the native role mapper returns is transformed to a group to roles map first (see part I for more details on the various mappings). In the section below the specific reflective code for each server is given first. The full RoleMapper class is given afterwards.
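
To give an idea of how the provider will end up using this class, a typical interaction is sketched below (a fragment; contextID, allDeclaredRoles and domain are assumed to be available at the call sites):

// When the permissions for a module are put in service, instantiate the mapper once:
TestRoleMapper roleMapper = new TestRoleMapper(contextID, allDeclaredRoles);

// Later, e.g. inside Policy#implies(ProtectionDomain, Permission), map the current
// subject's principals to application roles:
List<String> roles = roleMapper.getMappedRolesFromPrincipals(domain.getPrincipals());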

GlassFish

The one server where the role mapper was simple to obtain was GlassFish. The code showing how to do this is clearly visible in the in-memory example JACC provider that ships with GlassFish. One small confusing thing is that the example class and its interface contain many methods that aren't actually used. Based on this example the reflective code and mapping became as follows:


private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

    try {
        Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

        Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                .invoke(null, SecurityRoleMapperFactoryClass);

        Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                .invoke(factoryInstance, contextID);

        @SuppressWarnings("unchecked")
        Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                .getMethod("getRoleToSubjectMapping")
                .invoke(securityRoleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToSubjectMap.containsKey(role)) {
                Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                List<String> groups = getGroupsFromPrincipals(principals);
                for (String group : groups) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).add(role);
                }

                if ("**".equals(role) && !groups.isEmpty()) {
                    // JACC spec 3.2 states:
                    //
                    // "For the any "authenticated user role", "**", and unless an application specific mapping has
                    // been established for this role, the provider must ensure that all permissions added to the
                    // role are granted to any authenticated user."
                    //
                    // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                    // and groups is not empty, then there's an application specific mapping and "**" maps only to
                    // those groups, not to any authenticated user.
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Finding out how to install the JACC provider took a bit more time. For some reason the documentation doesn't mention it, but the location to put the mentioned jar file is simply:


[glassfish_home]/glassfish/lib

GlassFish has a convenience mechanism to put a named JACC configuration in the following file:

[glassfish_home]/glassfish/domains/domain1/config/domain.xml

This name has to be added to the security-service element, together with a jacc-provider element that specifies both the policy and factory classes, as follows:

<security-service jacc="test">
    <!-- Other elements here -->
    <jacc-provider policy-provider="test.TestPolicy" name="test" policy-configuration-factory-provider="test.TestPolicyConfigurationFactory"></jacc-provider>
</security-service>

WebLogic

WebLogic turned out to be a great deal more difficult than GlassFish. Being closed source you can't just look into any default JACC provider, but as it happens the WebLogic documentation mentions (actually, requires) a pluggable role mapper:


-Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl

Unfortunately, even though an option for a role mapper factory class is used, there's no documentation on what one's own role mapper factory should do (which interfaces it should implement, which interfaces the actual role mapper it returns should implement, etc.).

After a fair amount of Googling I did eventually find that what appears to be a super class is documented. Furthermore, the interface of a type called RoleMapper is documented as well.

Unfortunately that last interface does not contain any of the actual methods to do role mapping, so you can't use an implementation of just this. This all was really surprising; WebLogic gives the option to specify a role mapper factory, but key details are missing. Still, the above gave just enough hints to do some reflective experiments, and after a lot of trial and error I came to the following code that seemed to do the trick:


private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

    try {
        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
        Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

        // The RoleMapperFactory implementation class always seems to be the value of what is passed on the commandline
        // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
        // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
        Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                .invoke(null);

        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
        Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                .invoke(roleMapperFactoryInstance, contextID);

        // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
        // distinguish between the two.
        // If a user now has a name that happens to be a role as well, we have an issue :X
        @SuppressWarnings("unchecked")
        Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                .getMethod("getRolesToPrincipalNames")
                .invoke(roleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToPrincipalNamesMap.containsKey(role)) {

                List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                for (String groupOrUserName : roleToPrincipalNamesMap.get(role)) {
                    // Ignore the fact that the collection also contains user names and hope
                    // that there are no user names in the application with the same name as a group
                    if (!groupToRoles.containsKey(groupOrUserName)) {
                        groupToRoles.put(groupOrUserName, new ArrayList<String>());
                    }
                    groupToRoles.get(groupOrUserName).add(role);
                }

                if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                    // JACC spec 3.2 states: [...]
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Adding the two standard system properties for WebLogic appeared to be done most conveniently in the file:


[wls_home]/user_projects/domains/mydomain/bin/setDomainEnv.sh

There's a comment in the file that says to uncomment a section to use JACC, but that is completely wrong. If you do indeed uncomment it, the server will not start: it consists of a few -D options, each at the beginning of a line, but at that point in the file you can't specify -D options that way. Furthermore it suggests that it's required to activate the Java SE security manager, but LUCKILY this is NOT the case. From WebLogic 12.1.3 onwards the security manager is no longer required (which is a huge win for working with JACC on WebLogic). The following does work though for our own JACC provider:

JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=test.TestPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "

JAVA_PROPERTIES="${JAVA_PROPERTIES} ${EXTRA_JAVA_PROPERTIES} ${JACC_PROPERTIES}"
export JAVA_PROPERTIES
For completeness and future reference, the following definition for JACC_PROPERTIES activates the provided JACC provider:

# JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "
(Do note that WebLogic violates the Java EE spec here. Such activation should NOT be needed, as a JACC provider should be active by default.)

The location of where to put the JACC provider jar was not as straightforward. I tried the [wls_home]/user_projects/domains/mydomain/lib folder, and although WebLogic did seem to detect "something" here, as it would log during startup that it encountered a library and was adding it, it would not actually work and class not found exceptions followed. After some fiddling I got around this by adding the following at the point where the CLASSPATH variable is exported:


CLASSPATH="${DOMAIN_HOME}/lib/jacctest-0.0.1-SNAPSHOT.jar:${CLASSPATH}"
export CLASSPATH
I'm not sure if this is the recommended approach, but it seemed to do the trick.

Geronimo

Where WebLogic was a great deal more difficult than GlassFish, Geronimo unfortunately was vastly more difficult still. In 2 decades of working with a variety of platforms and languages I think getting this to work ranks pretty high on the list of downright bizarre things that are required to get something to work. The only thing that comes close is getting some obscure undocumented ActiveX control to work in a C++ Windows app around 1997.

The role mapper in Geronimo is not directly accessible via some factory or service as in GlassFish and WebLogic; instead there's a map containing the mapping, which is injected into a Geronimo specific JACC provider that extends something and implements many interfaces. As we obviously don't have, or want to have, a Geronimo specific provider, I tried to find out how this injection exactly works.

Things start with a class called GeronimoSecurityBuilderImpl that parses the XML that expresses the role mapping. Nothing too obscure here. This class then registers a so-called GBean (a kind of Geronimo specific JMX bean) to which it passes the previously mentioned Map, and then registers a second GBean that it gives a reference to this first GBean. Meanwhile, the Geronimo specific policy configuration factory, called GeronimoPolicyConfigurationFactory, "registers" itself via a static method on one of the GBeans mentioned before. Those GBeans at some point start running, and use the factory that was set by the static method to get a Geronimo specific policy configuration, and then call a method on that to pass it the Map containing the role mapping.

Now this scheme is not only rather convoluted to say the least, there's also no way to get to this map from anywhere else without resorting to very ugly hacks and using reflection to break into private instance variables. It was possible to programmatically obtain a GBean, but the one we're after has many instances and it didn't prove easy to get the one that applies to the current web app. There seemed to be an option if you know the maven-like coordinates of your own app, but I didn't want to hardcode these and didn't find an API to obtain them programmatically. Via the source I noticed another way was via some metadata about a GBean, but there was no API available to obtain that either.

After spending far more hours than I'm willing to admit, I finally came to the following code to obtain the Map I was after:


private void tryGeronimoAlternative() {
    Kernel kernel = KernelRegistry.getSingleKernel();

    try {
        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();

        Field registryField = kernel.getClass().getDeclaredField("registry");
        registryField.setAccessible(true);
        BasicRegistry registry = (BasicRegistry) registryField.get(kernel);

        Set<GBeanInstance> instances = registry.listGBeans(new AbstractNameQuery(null, Collections.EMPTY_MAP, ApplicationPrincipalRoleConfigurationManager.class.getName()));

        Map<Principal, Set<String>> principalRoleMap = null;
        for (GBeanInstance instance : instances) {

            Field classLoaderField = instance.getClass().getDeclaredField("classLoader");
            classLoaderField.setAccessible(true);
            ClassLoader gBeanClassLoader = (ClassLoader) classLoaderField.get(instance);

            if (gBeanClassLoader.equals(contextClassLoader)) {

                ApplicationPrincipalRoleConfigurationManager manager = (ApplicationPrincipalRoleConfigurationManager) instance.getTarget();
                Field principalRoleMapField = manager.getClass().getDeclaredField("principalRoleMap");
                principalRoleMapField.setAccessible(true);

                principalRoleMap = (Map<Principal, Set<String>>) principalRoleMapField.get(manager);
                break;
            }

            // process principalRoleMap here
        }

    } catch (InternalKernelException | IllegalStateException | NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e1) {
        // Ignore
    }
}
Note that this is the "raw" code, not yet converted to be fully reflection based like the GlassFish and WebLogic examples, and not yet converting the principalRoleMap to the uniform format we use.

In order to install the custom JACC provider I looked for a config file or startup script, but there didn't seem to be an obvious one. So I just supplied the standardized options directly on the command line as follows:


-Djavax.security.jacc.policy.provider=test.TestPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory
I then tried to find a place to put the jar again, but simply couldn't find one. There just doesn't seem to be any mechanism to extend Geronimo's class path for the entire server, which is (perhaps unfortunately) what JACC needs. There were some options for individual deployments, but this cannot work for JACC since the Policy instance is called at a very low level and for everything that is deployed on the server. Geronimo by default deploys about 10 applications for all kinds of things. Messing with each and every one of them just isn't feasible.

What I eventually did is perhaps one of the biggest hacks ever; I injected the required classes directly into the Geronimo library that contains the default JACC provider. After all, this provider is already used, so surely Geronimo has to be able to load my custom provider from THIS location :X

All libraries in Geronimo are OSGi bundles, so in addition to just injecting my classes I also had to adjust the MANIFEST. After doing that, Geronimo was FINALLY able to find my custom JACC provider. The MANIFEST was updated by copying the existing one from the jar and adding the following to it:


test;uses:="org.apa
che.geronimo.security.jaspi,javax.security.auth,org.apache.geronimo.s
ecurity,org.apache.geronimo.security.realm.providers,org.apache.geron
imo.security.jaas,javax.security.auth.callback,javax.security.auth.lo
gin,javax.security.auth.message.callback"
And then running the zip command as follows:

zip /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar META-INF/MANIFEST.MF
From the root directory where my compiled classes live I executed the following command to inject them:

jar uf /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar test/*
I happily admit it's pretty insane to do it like this. Hopefully this is not really the way to do it, and there's a sane way that I just happened to miss, or that someone with deep Geronimo knowledge would "just know".

Much to my dismay, the absurdity didn't end there. As it appears, the previously mentioned GBeans act as a kind of protection mechanism to ensure that only Geronimo specific JACC providers are installed. Since the entire purpose of the exercise is to install a general universal JACC provider, turning it into a Geronimo specific one obviously wasn't an option. The scarce documentation vaguely hints at replacing some of these GBeans or the security builder specifically for your application, but since JACC is installed for the entire server this just isn't feasible.

Eventually I tricked Geronimo into thinking a Geronimo specific JACC provider was installed by instantiating (via reflection) a dummy Geronimo policy provider factory and putting intercepting proxies into it to prevent a NPE that would otherwise ensue. As a side effect of this hack to beat Geronimo's "protection" I could capture the map I previously grabbed via reflective hacks somewhat easier.

The code to install the dummy factory:


try {
    // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
    // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
    // will statically register itself with an internal Geronimo class
    geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
    geronimoContextToRoleMapping = new ConcurrentHashMap<>();
} catch (Exception e) {
    // ignore
}
The code to put the capturing policy configurations in place:

// Are we dealing with Geronimo?
if (geronimoPolicyConfigurationFactoryInstance != null) {

    // PrincipalRoleConfiguration

    try {
        Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

        Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] { geronimoPolicyConfigurationClass }, new InvocationHandler() {

            @SuppressWarnings("unchecked")
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                // Take special action on the following method:

                // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                if (method.getName().equals("setPrincipalRoleMapping")) {
                    geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                }

                return null;
            }
        });

        // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:

        // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
        Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        // Ignore
    }
}
And finally the code to transform the map into our uniform target map:

private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
    if (geronimoContextToRoleMapping != null) {

        if (geronimoContextToRoleMapping.containsKey(contextID)) {
            Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

            for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

                // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                // (for Geronimo we know that using the default role mapper it's always zero or one group)
                for (String group : principalToGroups(entry.getKey())) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).addAll(entry.getValue());

                    if (entry.getValue().contains("**")) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }
        }

        return true;
    }

    return false;
}

The role mapper class

After having taken a look at the code for each individual server in isolation above, it's now time to show the full code for the RoleMapper class. This is the class that the JACC provider that we'll present in the next part will use as the universal way to obtain the server's role mapping, as if this was already standardized:


package test;

import static java.util.Arrays.asList;
import static java.util.Collections.list;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.security.Principal;
import java.security.acl.Group;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.auth.Subject;

public class TestRoleMapper {

    private static Object geronimoPolicyConfigurationFactoryInstance;
    private static ConcurrentMap<String, Map<Principal, Set<String>>> geronimoContextToRoleMapping;

    private Map<String, List<String>> groupToRoles = new HashMap<>();

    private boolean oneToOneMapping;
    private boolean anyAuthenticatedUserRoleMapped = false;

    public static void onFactoryCreated() {
        tryInitGeronimo();
    }

    private static void tryInitGeronimo() {
        try {
            // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
            // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
            // will statically register itself with an internal Geronimo class
            geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
            geronimoContextToRoleMapping = new ConcurrentHashMap<>();
        } catch (Exception e) {
            // ignore
        }
    }

    public static void onPolicyConfigurationCreated(final String contextID) {

        // Are we dealing with Geronimo?
        if (geronimoPolicyConfigurationFactoryInstance != null) {

            // PrincipalRoleConfiguration

            try {
                Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

                Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] { geronimoPolicyConfigurationClass }, new InvocationHandler() {

                    @SuppressWarnings("unchecked")
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                        // Take special action on the following method:

                        // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                        if (method.getName().equals("setPrincipalRoleMapping")) {
                            geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                        }

                        return null;
                    }
                });

                // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:

                // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
                Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                        .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                        .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

            } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                // Ignore
            }
        }
    }


    public TestRoleMapper(String contextID, Collection<String> allDeclaredRoles) {
        // Initialize the groupToRoles map

        // Try to get a hold of the proprietary role mapper of each known
        // AS. Sad that this is needed :(
        if (tryGlassFish(contextID, allDeclaredRoles)) {
            return;
        } else if (tryWebLogic(contextID, allDeclaredRoles)) {
            return;
        } else if (tryGeronimo(contextID, allDeclaredRoles)) {
            return;
        } else {
            oneToOneMapping = true;
        }
    }

    public List<String> getMappedRolesFromPrincipals(Principal[] principals) {
        return getMappedRolesFromPrincipals(asList(principals));
    }

    public boolean isAnyAuthenticatedUserRoleMapped() {
        return anyAuthenticatedUserRoleMapped;
    }

    public List<String> getMappedRolesFromPrincipals(Iterable<Principal> principals) {

        // Extract the list of groups from the principals. These principals typically contain
        // different kind of principals, some groups, some others. The groups are unfortunately vendor
        // specific.
        List<String> groups = getGroupsFromPrincipals(principals);

        // Map the groups to roles. E.g. map "admin" to "administrator". Some servers require this.
        return mapGroupsToRoles(groups);
    }

    private List<String> mapGroupsToRoles(List<String> groups) {

        if (oneToOneMapping) {
            // There is no mapping used, groups directly represent roles.
            return groups;
        }

        List<String> roles = new ArrayList<>();

        for (String group : groups) {
            if (groupToRoles.containsKey(group)) {
                roles.addAll(groupToRoles.get(group));
            }
        }

        return roles;
    }

    private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

        try {
            Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

            Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                    .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                    .invoke(null, SecurityRoleMapperFactoryClass);

            Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                    .invoke(factoryInstance, contextID);

            @SuppressWarnings("unchecked")
            Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                    .getMethod("getRoleToSubjectMapping")
                    .invoke(securityRoleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToSubjectMap.containsKey(role)) {
                    Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                    List<String> groups = getGroupsFromPrincipals(principals);
                    for (String group : groups) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).add(role);
                    }

                    if ("**".equals(role) && !groups.isEmpty()) {
                        // JACC spec 3.2 states:
                        //
                        // "For the any "authenticated user role", "**", and unless an application specific mapping has
                        // been established for this role, the provider must ensure that all permissions added to the
                        // role are granted to any authenticated user."
                        //
                        // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                        // and groups is not empty, then there's an application specific mapping and "**" maps only to
                        // those groups, not to any authenticated user.
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

        try {
            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
            Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

            // The RoleMapperFactory implementation class always seems to be the value of what is passed on the commandline
            // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
            // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
            Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                    .invoke(null);

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
            Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                    .invoke(roleMapperFactoryInstance, contextID);

            // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
            // distinguish between the two.
            // If a user now has a name that happens to be a role as well, we have an issue :X
            @SuppressWarnings("unchecked")
            Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                    .getMethod("getRolesToPrincipalNames")
                    .invoke(roleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToPrincipalNamesMap.containsKey(role)) {

                    List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                    for (String groupOrUserName : roleToPrincipalNamesMap.get(role)) {
                        // Ignore the fact that the collection also contains user names and hope
                        // that there are no user names in the application with the same name as a group
                        if (!groupToRoles.containsKey(groupOrUserName)) {
                            groupToRoles.put(groupOrUserName, new ArrayList<String>());
                        }
                        groupToRoles.get(groupOrUserName).add(role);
                    }

                    if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
        if (geronimoContextToRoleMapping != null) {

            if (geronimoContextToRoleMapping.containsKey(contextID)) {
                Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

                for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

                    // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                    // (for Geronimo we know that using the default role mapper it's always zero or one group)
                    for (String group : principalToGroups(entry.getKey())) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).addAll(entry.getValue());

                        if (entry.getValue().contains("**")) {
                            // JACC spec 3.2 states: [...]
                            anyAuthenticatedUserRoleMapped = true;
                        }
                    }
                }
            }

            return true;
        }

        return false;
    }

    /**
     * Extracts the groups from the vendor specific principals. SAD that this is needed :(
     *
     * @param principals the principals of the current subject
     * @return the groups (in vendor specific form) contained in the given principals
     */
    public List<String> getGroupsFromPrincipals(Iterable<Principal> principals) {
        List<String> groups = new ArrayList<>();

        for (Principal principal : principals) {
            if (principalToGroups(principal, groups)) {
                // return value of true means we're done early. This can be used
                // when we know there's only 1 principal holding all the groups
                return groups;
            }
        }

        return groups;
    }

    public List<String> principalToGroups(Principal principal) {
        List<String> groups = new ArrayList<>();
        principalToGroups(principal, groups);
        return groups;
    }

    public boolean principalToGroups(Principal principal, List<String> groups) {
        switch (principal.getClass().getName()) {

            case "org.glassfish.security.common.Group": // GlassFish
            case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
            case "weblogic.security.principal.WLSGroupImpl": // WebLogic
            case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                groups.add(principal.getName());
                break;

            case "org.jboss.security.SimpleGroup": // JBoss
                if (principal.getName().equals("Roles") && principal instanceof Group) {
                    Group rolesGroup = (Group) principal;
                    for (Principal groupPrincipal : list(rolesGroup.members())) {
                        groups.add(groupPrincipal.getName());
                    }

                    // Should only be one group holding the roles, so we can exit
                    // the loop early
                    return true;
                }
        }
        return false;
    }

}

Server mapping overview

Each server essentially provides the same core data; a role to group mapping, but each server puts this data in a different format. The table below summarizes this:

Role to group format per server
ServerMapKeyValue
GlassFish 4.1Map<String, Subject>Role nameSubject containing Principals representing groups and users (different class type for each)
WebLogic 12.1.3Map<String, String[]>Role nameGroups and user names (impossible to distinguish which is which)
Geronimo 3.0.1Map<Principal, Set<String>>Principal representing group or user (different class type for each)Role names

As we can see above, GlassFish and WebLogic both have a "role name to groups and users" format. In the case of GlassFish the groups and users are for some reason wrapped in a Subject. A Map<String, Set<Principal>> would perhaps have been more logical here. WebLogic unfortunately uses a String to represent both group- and user names, meaning there's no way to know if a given name represents a group or a user. One can only guess at what the idea behind this design decision must have been.

Geronimo finally does the mapping exactly the other way around; it has a "group or user to role names" format. After all the insanity we saw with Geronimo this actually is a fairly sane mapping.

Conclusion

As we saw, obtaining the container specific role mapping for a universal JACC provider is no easy feat. Finding out how to deploy a JACC provider appeared to be surprisingly difficult as well, and in the case of Geronimo even nearly impossible. It's hard to say what can be done to improve this. Should JACC define an extra standardized property where you can provide the path to a jar file? E.g. something like

-Djavax.security.jacc.provider.jar=/usr/lib/myprovider.jar
At least for testing, and probably for regular usage as well, it would be extremely convenient if JACC providers could additionally be registered from within an application archive.

Arjan Tijms

The most popular upcoming Java EE 8 technologies according to ZEEF users

I maintain a page on zeef.com about the upcoming Java EE 8 specification. On this page I collect all interesting links about the various sub-specs that will be updated or newly introduced in EE 8. The page has been up since April last year and therefore currently has almost 10 months' worth of data (at the moment 8.7k views, 5k clicks).

While there still aren't any discussions, and thus links, available for quite a few specs, it does give us some early insight into what's popular. At the moment the ranking is as follows:

Position | Link                                                     | Category
1        | Java EE 8 roadmap [png]                                  | Java EE 8 overall
2        | JSF MVC discussion                                       | JSF 2.3
3        | Let's get started on JSF 2.3                             | JSF 2.3
4        | Servlet 4.0                                              | Servlet 4.0
5        | Java EE 8 Takes Off!                                     | Java EE 8 overall
6        | An MVC action-based framework in Java EE 8               | MVC 1.0
7        | JSF and MVC 1.0, a comparison in code                    | MVC 1.0
8        | Let's get started on Servlet 4.0                         | Servlet 4.0
9        | JavaOne Replay: 'Java EE 8 Overview' by Linda DeMichiel  | Java EE 8 overall
10       | A CDI 2 Wish List                                        | CDI 2.0

If we look at the single highest ranking link for each spec, we'll get to the following global ranking:

  1. Java EE 8 overall
  2. JSF 2.3
  3. Servlet 4.0
  4. MVC 1.0
  5. CDI 2.0
  6. JAX-RS 2.1
  7. JSON-B 1.0
  8. JCache 1.0
  9. JMS 2.1
  10. Java (EE) Configuration
  11. Java EE Security API 1.0
  12. JCA.next
  13. Java EE Management API 2.0
  14. JSON-P 1.1
Interestingly, when we don't look at the single highest clicked link per spec, but aggregate the clicks for all top links, we get a somewhat different ranking as shown below (the relative positions compared to the first ranking are shown behind each spec):

  1. Java EE 8 overall (=)
  2. MVC 1.0 (+2)
  3. JSF 2.3 (-1)
  4. CDI 2.0 (+1)
  5. Servlet 4.0 (-2)
  6. JCache 1.0 (+2)
  7. Java (EE) Configuration (+3)
  8. JAX-RS 2.1 (-2)
  9. JMS 2.1 (=)
  10. JSON-B 1.0 (-3)
  11. Java EE Security API 1.0 (=)
  12. JCA.next (=)
  13. Java EE Management API 2.0 (=)
  14. JSON-P 1.1 (=)

As we can see, the specs that occupy the top 5 are still the same, but whereas JSF 2.3 was the most popular sub-spec where it concerned a single link, looking at all links together it's now MVC 1.0. The umbrella spec Java EE however is still firmly on top. The bottom segment is even exactly the same, but for most of those specs very little information is available, so a block is basically the same as a link. Specifically for the Java EE Management API and JSON-P 1.1 there's no information at all available beyond a single announcement that the initial JSR was posted.

While the above ranking does give us some data points, we have to take into account that it's not just about the technologies themselves but also about a number of other factors. E.g. the position on the page does influence clicks. The Java EE 8 block is on the top left of the page and will be seen first by most visitors. Then again, CDI 2.0 is at a pretty good position at the top middle of the page, but got relatively few clicks. JSF 2.3 and especially MVC 1.0 are at a less ideal position at the middle left of the page, below the so-called "fold" of many screens (meaning, you have to scroll to see it). Yet, both of them received the most clicks after the umbrella spec.

The observant reader may notice that some key Java EE technologies such as JPA, EJB, Bean Validation and Expression Language are missing. It's likely that these specs will either not be updated at all for Java EE 8, or will only receive a very small update (called a MR or Maintenance Release in the JCP process).

Oracle has indicated on multiple occasions that this is almost entirely due to resource issues. Apparently there just aren't enough resources available to be able to update all specs. Even though there are e.g. dozens of JPA JIRA issues filed and persistence is arguably one of the most important aspects of the majority of (web) applications, it's just not possible to have a major update for it, unfortunately.

Conclusion

In general we can say that for this particular data point the web technologies gather the most interest, while the back end/business and supporting technologies are a little less popular. It will be interesting to see if, and if so how, the numbers change when more information becomes available. Java EE Management API 2.0 for one seems really unpopular now, but there simply isn't much to measure yet.

Arjan Tijms

The most popular Java EE servers in 2014/2015 according to OmniFaces users

For a little over 3 months (from half of November 2014 to late February 2015) we had a poll on the OmniFaces website asking what AS (Application Server) people used with OmniFaces (people could select multiple servers).

The response was quite overwhelming for our little project; no less than 840 people responded, choosing a grand total of 1108 servers.

The final results are as follows:

Position | Server                 | Votes (Percentage)
1        | JBoss (AS/EAP/WildFly) | 395 (47%)
2        | GlassFish              | 206 (24%)
3        | Tomcat/Mojarra/Weld    | 186 (22%)
4        | TomEE                  | 85 (10%)
5        | WebSphere              | 55 (6%)
6        | WebLogic               | 49 (6%)
7        | Tomcat/MyFaces/OWB     | 33 (3%)
8        | Jetty/Mojarra/Weld     | 19 (2%)
9        | Geronimo               | 13 (1%)
10       | JEUS                   | 11 (1%)
11       | Liberty                | 9 (1%)
12       | Jetty/MyFaces/OWB      | 9 (1%)
13       | JOnAS                  | 8 (0%)
14       | NetWeaver              | 8 (0%)
15       | Resin                  | 6 (0%)
16       | InforSuite             | 5 (0%)
17       | WebOTX                 | 4 (0%)
18       | Interstage AS          | 4 (0%)
19       | (u)Cosminexus          | 3 (0%)

As can be seen, the clear winner here is JBoss, which gets nearly half of all votes and nearly twice the amount of the runner-up, GlassFish. Just slightly below GlassFish at number 3 is Tomcat in the specific combination with Mojarra and Weld.

It has to be noted that Mojarra & Weld are typically but a small part of a homegrown Java EE stack, which often also includes things like Hibernate, Hibernate Validator and many more components. For the specific case of OmniFaces however the Servlet, JSF and CDI implementations are what matter most, so that's why we specifically included these in the poll. Another homegrown stack based on Tomcat, but using MyFaces and OWB (OpenWebBeans) instead, scores significantly lower and ends up at place 7.

We acknowledge that people don't necessarily have to use Mojarra and Weld together, but can also use Mojarra with OWB, or MyFaces with Weld. However we wanted to somewhat limit the options for homegrown stacks, and a little research beforehand hinted these were the more popular combinations. In a follow-up poll we may zoom in on this and specifically address homegrown stacks by asking which individual components people use.

An interesting observation is that the entire top 4 consists solely of open source servers, together good for 103% relative to the number of people who voted (remember that 1 person could vote for multiple servers), or a total of 79% relative to all servers voted for.

While these are certainly impressive numbers, we do have to realize that the voters are self-selected and specifically concern those who use OmniFaces. OmniFaces is an open source library without any form of commercial support. It's perhaps not entirely unreasonable to surmise that environments that favor closed source commercially supported servers are less likely to use OmniFaces. Taking that into account, the numbers thus don't necessarily mean that open source servers are indeed used that much in general.

That said, the two big commercial servers WebSphere and WebLogic still got a fair amount of votes; 104 together which is 9% relative to all servers voted for.

The fully open source and once much talked about server Geronimo got significantly fewer votes; only 13. The fact that Geronimo has more or less stopped developing its server and the lack of a visible community (people blogging about it, writing articles, responding to issues etc.) probably contributes to that.

It's somewhat surprising that IBM's new lightweight AS Liberty got only 9 votes, where the older (and heavier) AS WebSphere got 55 votes. Maybe Liberty indeed isn't used that much yet, or maybe its name recognition isn't that big at the moment. A potential weakness in the poll is that we left out the company names. For well known servers such as JBoss and GlassFish you rarely see people calling them Red Hat JBoss or Oracle GlassFish, but in the case of Liberty it might have been clearer to call it "IBM Liberty (WLP)".

Another small surprise is that the somewhat obscure server JEUS got as many votes as it did; 11 in total. This is perhaps extra surprising since creator TmaxSoft for some unknown reason consistently calls it a WAS instead of an AS, and the poll asked for the latter.

The "Japanese obscure three" (WebOTX, Interstage AS and (u)Cosminexus) are at the bottom of the list, yet at least 3 to 4 persons each claim to be using it with OmniFaces. Since not all of these servers are trivial to obtain, we've never tested OmniFaces on any of them so frankly have no idea how well OmniFaces runs on them. Even though according to this poll it concerns just a small amount of people, we're now quite eager to try out a few of these servers in the future, just to see how things work there.

Conclusion

For the particular community of those who use OmniFaces, we've seen that open source servers in general, and particularly JBoss, GlassFish and TomEE, are the most popular Java EE servers. Tomcat and Jetty were included as well, but aren't officially Java EE (although one can build stacks on them that get close).

A couple of servers, which really are complete Java EE implementations just as well and which one might think take just as much work to build and maintain, only see a very low number of users according to this poll. That's of course not to say that they aren't used much in general; they may just cater to a different audience.

Arjan Tijms

Java EE authorization - JACC revisited part III

This is the third and final part of a series where we revisit JACC after taking an initial look at it last year.

In the first part we mainly looked at various role mapping strategies, while the main topic of the second part was obtaining the container specific role mapper and the container specific way of how a JACC provider is deployed.

In this third and final part we'll be bringing it all together and present a fully working JACC provider for a single application module (e.g. a single war).

Architecture

As explained before, implementing a JACC provider requires implementing three classes:

  1. PolicyConfigurationFactory
  2. PolicyConfiguration
  3. Policy

Zooming into these, the following is what actually has to be implemented:
  1. A factory that provides an object that collects permissions
  2. A state machine that controls the life-cycle of this permission collector
  3. Linking permissions of multiple modules and utilities
  4. Collecting and managing permissions
  5. Processing permissions after collecting
  6. An "authorization module" using permissions for authorization decisions

In the implementation given before we put all this functionality in the three specified classes. Here we'll split each item out into a separate class (we'll skip linking though, which is only required for EARs where security constraints are defined in multiple modules). This results in more classes in total, but each class is hopefully easier to understand.
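
Before diving into the individual classes, a quick note on wiring: a JACC provider is activated via two standard system properties defined by the JACC spec. Where exactly these properties have to be set differs per server (that was the topic of part II), but in their basic form they look as follows (using the test package that the classes in this article happen to live in):

-Djavax.security.jacc.policy.provider=test.TestPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory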

A factory that provides an object that collects permissions

The factory is largely as given earlier, but contains a few fixes and makes use of the state machine that is shown below.


import static javax.security.jacc.PolicyContext.getContextID;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyConfigurationFactory;
import javax.security.jacc.PolicyContextException;

public class TestPolicyConfigurationFactory extends PolicyConfigurationFactory {

    private static final ConcurrentMap<String, TestPolicyConfigurationStateMachine> configurators = new ConcurrentHashMap<>();

    @Override
    public PolicyConfiguration getPolicyConfiguration(String contextID, boolean remove) throws PolicyContextException {

        if (!configurators.containsKey(contextID)) {
            configurators.putIfAbsent(contextID, new TestPolicyConfigurationStateMachine(new TestPolicyConfiguration(contextID)));
        }

        TestPolicyConfigurationStateMachine testPolicyConfigurationStateMachine = configurators.get(contextID);

        if (remove) {
            testPolicyConfigurationStateMachine.delete();
        }

        // According to the contract of getPolicyConfiguration() every PolicyConfiguration returned from here
        // should always be transitioned to the OPEN state.
        testPolicyConfigurationStateMachine.open();

        return testPolicyConfigurationStateMachine;
    }

    @Override
    public boolean inService(String contextID) throws PolicyContextException {
        TestPolicyConfigurationStateMachine testPolicyConfigurationStateMachine = configurators.get(contextID);
        if (testPolicyConfigurationStateMachine == null) {
            return false;
        }

        return testPolicyConfigurationStateMachine.inService();
    }

    public static TestPolicyConfiguration getCurrentPolicyConfiguration() {
        return (TestPolicyConfiguration) configurators.get(getContextID()).getPolicyConfiguration();
    }

}
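
For a bit of context, the following sketch shows roughly how a container interacts with this factory. It only uses methods that the JACC API actually defines, but the context ID is made up and a real container obviously does a lot more:

package test;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyConfigurationFactory;

public class ContainerInteractionSketch {

    public static void main(String[] args) throws Exception {
        // The container resolves the factory from this system property
        // (set programmatically here only for demo purposes)
        System.setProperty("javax.security.jacc.PolicyConfigurationFactory.provider", "test.TestPolicyConfigurationFactory");

        PolicyConfigurationFactory factory = PolicyConfigurationFactory.getPolicyConfigurationFactory();

        // Obtain a PolicyConfiguration in the OPEN state for a module, removing any previously collected permissions
        PolicyConfiguration policyConfiguration = factory.getPolicyConfiguration("test /myapp", true);

        // ... the container then adds permissions and finally calls policyConfiguration.commit()
    }
}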

A state machine that controls the life-cyle of this permission collector

The state machine as required by the spec was left out in the previous example, but we've implemented it now. A possible implementation could have been to actually use a generic state machine that's given some kind of rules file, and indeed some implementations take this approach. But as the rules are actually not that complicated and there are not many transitions to speak of, I found that just adding a few checks was a much simpler method.

A class such as this would perhaps be better provided by the container, as it seems unlikely that individual PolicyConfigurations would often, if ever, need to do anything specific here.


import static test.TestPolicyConfigurationStateMachine.State.DELETED;
import static test.TestPolicyConfigurationStateMachine.State.INSERVICE;
import static test.TestPolicyConfigurationStateMachine.State.OPEN;

import java.security.Permission;
import java.security.PermissionCollection;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyConfigurationFactory;
import javax.security.jacc.PolicyContextException;

public class TestPolicyConfigurationStateMachine implements PolicyConfiguration {

    public static enum State {
        OPEN, INSERVICE, DELETED
    }

    private State state = OPEN;
    private PolicyConfiguration policyConfiguration;


    public TestPolicyConfigurationStateMachine(PolicyConfiguration policyConfiguration) {
        this.policyConfiguration = policyConfiguration;
    }

    public PolicyConfiguration getPolicyConfiguration() {
        return policyConfiguration;
    }


    // ### Methods that can be called in any state and don't change state

    @Override
    public String getContextID() throws PolicyContextException {
        return policyConfiguration.getContextID();
    }

    @Override
    public boolean inService() throws PolicyContextException {
        return state == INSERVICE;
    }


    // ### Methods where state should be OPEN and don't change state

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToExcludedPolicy(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToUncheckedPolicy(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToRole(roleName, permission);
    }

    @Override
    public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToExcludedPolicy(permissions);
    }

    @Override
    public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToUncheckedPolicy(permissions);
    }

    @Override
    public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.addToRole(roleName, permissions);
    }

    @Override
    public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.linkConfiguration(link);
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.removeExcludedPolicy();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.removeRole(roleName);
    }

    @Override
    public void removeUncheckedPolicy() throws PolicyContextException {
        checkStateIs(OPEN);
        policyConfiguration.removeUncheckedPolicy();
    }


    // ### Methods that change the state
    //
    // commit() can only be called when the state is OPEN or INSERVICE and the next state is always INSERVICE
    // delete() can always be called and the target state will always be DELETED
    // open() can always be called and the target state will always be OPEN

    @Override
    public void commit() throws PolicyContextException {
        checkStateIsNot(DELETED);

        if (state == OPEN) {
            // Not 100% sure; allow double commit, or ignore double commit?
            // Here we ignore and only call commit on the actual policyConfiguration
            // when the state is OPEN
            policyConfiguration.commit();
            state = INSERVICE;
        }
    }

    @Override
    public void delete() throws PolicyContextException {
        policyConfiguration.delete();
        state = DELETED;
    }

    /**
     * Transition back to open. This method is required because of the {@link PolicyConfigurationFactory} contract, but is
     * mysteriously missing from the interface.
     */
    public void open() {
        state = OPEN;
    }


    // ### Private methods

    private void checkStateIs(State requiredState) {
        if (state != requiredState) {
            throw new IllegalStateException("Required state is " + requiredState + " but actual state is " + state);
        }
    }

    private void checkStateIsNot(State undesiredState) {
        if (state == undesiredState) {
            throw new IllegalStateException("State must not be " + undesiredState + " but actual state is " + state);
        }
    }

}
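
To make the life cycle concrete, here's a small usage sketch of the class above. It assumes the TestPolicyConfiguration shown further below, and the context ID is again made up:

package test;

import javax.security.jacc.WebResourcePermission;

public class StateMachineSketch {

    public static void main(String[] args) throws Exception {
        TestPolicyConfigurationStateMachine policyConfiguration =
            new TestPolicyConfigurationStateMachine(new TestPolicyConfiguration("test /myapp"));

        // State is OPEN; collecting permissions is allowed
        policyConfiguration.addToRole("architect", new WebResourcePermission("/protected/*", "GET"));

        policyConfiguration.commit(); // OPEN -> INSERVICE

        // State is INSERVICE; collecting is no longer allowed and this throws an IllegalStateException
        policyConfiguration.addToRole("architect", new WebResourcePermission("/protected/*", "POST"));
    }
}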

Linking permissions of multiple modules and utilities

As mentioned, we did not implement linking (perhaps we'll look at this in a future article), but as it's an interface method we have to put an (empty) implementation somewhere. At the same time JACC curiously requires us to implement a couple of variations on the permission collecting methods that don't even seem to be called in practice by any container we looked at. Finally, the PolicyConfiguration interface requires an explicit life-cycle method and an identity method. The life-cycle method is not implemented either, since all life-cycle management is done by the state machine that wraps our actual PolicyConfiguration.

All these "distracting" methods were conveniently shoved into a base class as follows:


import static java.util.Collections.list;

import java.security.Permission;
import java.security.PermissionCollection;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyContextException;

public abstract class TestPolicyConfigurationBase implements PolicyConfiguration {

    private final String contextID;

    public TestPolicyConfigurationBase(String contextID) {
        this.contextID = contextID;
    }

    @Override
    public String getContextID() throws PolicyContextException {
        return contextID;
    }

    @Override
    public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToExcludedPolicy(permission);
        }
    }

    @Override
    public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToUncheckedPolicy(permission);
        }
    }

    @Override
    public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToRole(roleName, permission);
        }
    }

    @Override
    public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
        // Linking is not implemented; see the explanation above
    }

    @Override
    public boolean inService() throws PolicyContextException {
        // Not used, taken care of by the TestPolicyConfigurationStateMachine
        return true;
    }

}

Collecting and managing permissions

The next step concerns a base class for a PolicyConfiguration that takes care of the actual collection of permissions, and making those collected permissions available later on. For each permission that the container discovers it calls the appropriate method in this class.

This kind of permission collecting, like the state machine, is actually pretty generic. One wonders if it wouldn't be a great deal simpler if the container just called a single init() method once (or even better, used injection) with a simple data structure containing collections of all permission types. Looking at some container implementations, it indeed appears that the container already has those collections and just loops over them, handing them one by one to our PolicyConfiguration.
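
Purely to illustrate that thought, such a hypothetical contract could look like the sketch below. To be clear: this interface does not exist in JACC; it's only what one could imagine instead of the many separate callbacks:

package test;

import java.security.PermissionCollection;
import java.util.Map;

// Hypothetical alternative contract; NOT part of JACC
public interface PermissionInitializable {

    /**
     * Imagined single callback the container would invoke once, after it has
     * collected all permissions for a module.
     */
    void init(String contextID,
              PermissionCollection excludedPermissions,
              PermissionCollection uncheckedPermissions,
              Map<String, PermissionCollection> perRolePermissions);
}

Back to the actual contract; the permission collecting part of our provider looks as follows: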


import java.security.Permission;
import java.security.Permissions;
import java.util.HashMap;
import java.util.Map;

import javax.security.jacc.PolicyContextException;

public abstract class TestPolicyConfigurationPermissions extends TestPolicyConfigurationBase {

    private Permissions excludedPermissions = new Permissions();
    private Permissions uncheckedPermissions = new Permissions();
    private Map<String, Permissions> perRolePermissions = new HashMap<>();

    public TestPolicyConfigurationPermissions(String contextID) {
        super(contextID);
    }

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        excludedPermissions.add(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        uncheckedPermissions.add(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        Permissions permissions = perRolePermissions.get(roleName);
        if (permissions == null) {
            permissions = new Permissions();
            perRolePermissions.put(roleName, permissions);
        }

        permissions.add(permission);
    }

    @Override
    public void delete() throws PolicyContextException {
        removeExcludedPolicy();
        removeUncheckedPolicy();
        perRolePermissions.clear();
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        excludedPermissions = new Permissions();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        if (perRolePermissions.containsKey(roleName)) {
            perRolePermissions.remove(roleName);
        } else if ("*".equals(roleName)) {
            perRolePermissions.clear();
        }
    }

    @Override
    public void removeUncheckedPolicy() throws PolicyContextException {
        uncheckedPermissions = new Permissions();
    }

    public Permissions getExcludedPermissions() {
        return excludedPermissions;
    }

    public Permissions getUncheckedPermissions() {
        return uncheckedPermissions;
    }

    public Map<String, Permissions> getPerRolePermissions() {
        return perRolePermissions;
    }

}
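
As an illustration of what the container effectively does with this class, consider the following sketch. The URL patterns and role name are made up; a real container derives these from web.xml and annotations:

package test;

import javax.security.jacc.WebResourcePermission;

public class CollectPermissionsSketch {

    public static void main(String[] args) throws Exception {
        TestPolicyConfigurationPermissions policyConfiguration = new TestPolicyConfiguration("test /myapp");

        // Everyone may do a GET for /public/*
        policyConfiguration.addToUncheckedPolicy(new WebResourcePermission("/public/*", "GET"));

        // Only callers in the "architect" role may do a GET or POST for /protected/*
        policyConfiguration.addToRole("architect", new WebResourcePermission("/protected/*", "GET,POST"));

        // Nobody may access /forbidden/* with any HTTP method (null actions means all methods)
        policyConfiguration.addToExcludedPolicy(new WebResourcePermission("/forbidden/*", (String) null));

        // Prints the collected per-role permissions
        System.out.println(policyConfiguration.getPerRolePermissions());
    }
}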

Processing permissions after collecting

The final part of the PolicyConfiguration concerns a kind of life-cycle method again, namely a method that the container calls to indicate that all permissions have been handed over to the PolicyConfiguration. In a more modern implementation this might have been an @PostConstruct annotated method.

In contrast to most methods of the PolicyConfiguration that we've seen until now, what happens here is pretty specific to each custom policy provider. Some implementations do a lot of work here and generate a .policy file in the standard Java SE format, writing it to disk. This file is then intended to be read back by a standard Java SE Policy implementation.

Other implementations use this moment to optimize the collected permissions by transforming them into their own internal data structure.

In our case we keep the permissions as we collected them and just instantiate a role mapper implementation at this point. The full set of roles that appear in the collected per-role permissions is passed into the role mapper.


import javax.security.jacc.PolicyContextException;

public class TestPolicyConfiguration extends TestPolicyConfigurationPermissions {

    private TestRoleMapper roleMapper;

    public TestPolicyConfiguration(String contextID) {
        super(contextID);
    }

    @Override
    public void commit() throws PolicyContextException {
        roleMapper = new TestRoleMapper(getContextID(), getPerRolePermissions().keySet());
    }

    public TestRoleMapper getRoleMapper() {
        return roleMapper;
    }

}
The role mapper referenced in the code shown above was presented in part II of this article and didn't change between parts, but for completeness we'll present it here again:

import static java.util.Arrays.asList;
import static java.util.Collections.list;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.security.Principal;
import java.security.acl.Group;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.auth.Subject;

public class TestRoleMapper {

    private static Object geronimoPolicyConfigurationFactoryInstance;
    private static ConcurrentMap<String, Map<Principal, Set<String>>> geronimoContextToRoleMapping;

    private Map<String, List<String>> groupToRoles = new HashMap<>();

    private boolean oneToOneMapping;
    private boolean anyAuthenticatedUserRoleMapped = false;

    public static void onFactoryCreated() {
        tryInitGeronimo();
    }

    private static void tryInitGeronimo() {
        try {
            // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
            // This protection can be beaten by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
            // will statically register itself with an internal Geronimo class
            geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
            geronimoContextToRoleMapping = new ConcurrentHashMap<>();
        } catch (Exception e) {
            // Ignore
        }
    }

    public static void onPolicyConfigurationCreated(final String contextID) {

        // Are we dealing with Geronimo?
        if (geronimoPolicyConfigurationFactoryInstance != null) {

            // PrincipalRoleConfiguration

            try {
                Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

                Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {

                    @SuppressWarnings("unchecked")
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                        // Take special action on the following method only:
                        // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                        if (method.getName().equals("setPrincipalRoleMapping")) {
                            geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                        }

                        return null;
                    }
                });

                // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:
                // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration)
                Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                     .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                     .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

            } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                // Ignore
            }
        }
    }


    public TestRoleMapper(String contextID, Collection<String> allDeclaredRoles) {
        // Initialize the groupToRoles map by trying to get a hold of the proprietary
        // role mapper of each known AS. Sad that this is needed :(
        if (tryGlassFish(contextID, allDeclaredRoles)) {
            return;
        } else if (tryWebLogic(contextID, allDeclaredRoles)) {
            return;
        } else if (tryGeronimo(contextID, allDeclaredRoles)) {
            return;
        } else {
            oneToOneMapping = true;
        }
    }

    public List<String> getMappedRolesFromPrincipals(Principal[] principals) {
        return getMappedRolesFromPrincipals(asList(principals));
    }

    public boolean isAnyAuthenticatedUserRoleMapped() {
        return anyAuthenticatedUserRoleMapped;
    }

    public List<String> getMappedRolesFromPrincipals(Iterable<Principal> principals) {

        // Extract the list of groups from the principals. These principals typically contain
        // different kinds of principals, some groups, some others. The groups are unfortunately vendor
        // specific.
        List<String> groups = getGroupsFromPrincipals(principals);

        // Map the groups to roles. E.g. map "admin" to "administrator". Some servers require this.
        return mapGroupsToRoles(groups);
    }

    private List<String> mapGroupsToRoles(List<String> groups) {

        if (oneToOneMapping) {
            // There is no mapping used, groups directly represent roles.
            return groups;
        }

        List<String> roles = new ArrayList<>();

        for (String group : groups) {
            if (groupToRoles.containsKey(group)) {
                roles.addAll(groupToRoles.get(group));
            }
        }

        return roles;
    }

    private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

        try {
            Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

            Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                                          .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                                          .invoke(null, SecurityRoleMapperFactoryClass);

            Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                                                                              .invoke(factoryInstance, contextID);

            @SuppressWarnings("unchecked")
            Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                                                                                .getMethod("getRoleToSubjectMapping")
                                                                                .invoke(securityRoleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToSubjectMap.containsKey(role)) {
                    Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                    List<String> groups = getGroupsFromPrincipals(principals);
                    for (String group : groups) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).add(role);
                    }

                    if ("**".equals(role) && !groups.isEmpty()) {
                        // JACC spec 3.2 states:
                        //
                        // "For the any "authenticated user role", "**", and unless an application specific mapping has
                        // been established for this role,
                        // the provider must ensure that all permissions added to the role are granted to any
                        // authenticated user."
                        //
                        // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                        // and groups is not empty, then there's an application specific mapping and "**" maps only to
                        // those groups, not to any authenticated user.
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

        try {
            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
            Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

            // The RoleMapperFactory implementation class always seems to be the value of what is passed on the command line
            // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
            // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
            Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                                                                     .invoke(null);

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
            Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                                                              .invoke(roleMapperFactoryInstance, contextID);

            // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
            // distinguish between the two.
            // If a user now has a name that happens to be a role as well, we have an issue :X
            @SuppressWarnings("unchecked")
            Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                                                                                         .getMethod("getRolesToPrincipalNames")
                                                                                         .invoke(roleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToPrincipalNamesMap.containsKey(role)) {

                    List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                    for (String groupOrUserName : roleToPrincipalNamesMap.get(role)) {
                        // Ignore the fact that the collection also contains user names and hope
                        // that there are no user names in the application with the same name as a group
                        if (!groupToRoles.containsKey(groupOrUserName)) {
                            groupToRoles.put(groupOrUserName, new ArrayList<String>());
                        }
                        groupToRoles.get(groupOrUserName).add(role);
                    }

                    if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
        if (geronimoContextToRoleMapping != null) {

            if (geronimoContextToRoleMapping.containsKey(contextID)) {
                Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

                for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

                    // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                    // (for Geronimo we know that using the default role mapper it's always zero or one group)
                    for (String group : principalToGroups(entry.getKey())) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).addAll(entry.getValue());

                        if (entry.getValue().contains("**")) {
                            // JACC spec 3.2 states: [...]
                            anyAuthenticatedUserRoleMapped = true;
                        }
                    }
                }
            }

            return true;
        }

        return false;
    }

    /**
     * Extracts the groups from the vendor specific principals. SAD that this is needed :(
     *
     * @param principals the (mostly vendor specific) principals associated with the authenticated user
     * @return a list of groups extracted from the given principals
     */
    public List<String> getGroupsFromPrincipals(Iterable<Principal> principals) {
        List<String> groups = new ArrayList<>();

        for (Principal principal : principals) {
            if (principalToGroups(principal, groups)) {
                // A return value of true means we're done early. This can be used
                // when we know there's only 1 principal holding all the groups
                return groups;
            }
        }

        return groups;
    }

    public List<String> principalToGroups(Principal principal) {
        List<String> groups = new ArrayList<>();
        principalToGroups(principal, groups);
        return groups;
    }

    public boolean principalToGroups(Principal principal, List<String> groups) {
        switch (principal.getClass().getName()) {

            case "org.glassfish.security.common.Group": // GlassFish
            case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
            case "weblogic.security.principal.WLSGroupImpl": // WebLogic
            case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                groups.add(principal.getName());
                break;

            case "org.jboss.security.SimpleGroup": // JBoss
                if (principal.getName().equals("Roles") && principal instanceof Group) {
                    Group rolesGroup = (Group) principal;
                    for (Principal groupPrincipal : list(rolesGroup.members())) {
                        groups.add(groupPrincipal.getName());
                    }

                    // There should only be one group holding the roles, so we can exit the loop early
                    return true;
                }
        }
        return false;
    }

}

An "authorization module" using permissions for authorization decisions

At long last we present the actual "authorization module" (called Policy in Java SE and JACC). Compared to the version we presented before, this one delegates extracting the list of roles from the principals that are associated with the authenticated user to the role mapper shown above. In addition we also handle the so-called "any authenticated user" role, where it doesn't matter which roles a user has; only the fact that the user is authenticated counts.

This authorization module implements the default authorization algorithm defined by the Servlet and JACC specs, which does the following checks in order:

  1. Is permission excluded? (nobody can access those)
  2. Is permission unchecked? (everyone can access those)
  3. Is permission granted to every authenticated user?
  4. Is permission granted to any of the roles the current user is in?
  5. Is permission granted by the previous (if any) authorization module?

The idea of a custom authorization module is often to do something specific authorization-wise, so this would be the most likely place for custom code. In fact, if only this particular class could be injected with the permissions that now have to be collected by our own classes as shown above, JACC would be massively simplified in one fell swoop.

In that case only this class would have to be implemented. Even better would be if the default algorithm was also provided in a portable way. With that we could potentially implement only the parts that are really different for our custom implementation and leave the rest to the default implementation.
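
Again purely as a thought experiment, such a portable default could look like the hypothetical sketch below. None of this exists in JACC today, and the injected fields are pure fantasy:

package test;

import java.security.Permission;
import java.security.PermissionCollection;
import java.security.ProtectionDomain;
import java.util.Map;

// Hypothetical API, NOT part of JACC: the container injects the collected permissions
// and provides the default algorithm, so a custom module only overrides what differs
public abstract class AuthorizationModule {

    protected PermissionCollection excludedPermissions;             // imagined to be injected
    protected PermissionCollection uncheckedPermissions;            // imagined to be injected
    protected Map<String, PermissionCollection> perRolePermissions; // imagined to be injected

    // Default algorithm following the checks listed above; subclasses override only the steps that differ
    public boolean implies(ProtectionDomain domain, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return false;
        }
        if (uncheckedPermissions.implies(permission)) {
            return true;
        }
        return hasAccessViaRoles(domain, permission);
    }

    protected abstract boolean hasAccessViaRoles(ProtectionDomain domain, Permission permission);
}

For now though we have to do everything ourselves: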


import static java.util.Arrays.asList;
import static java.util.Collections.list;
import static test.TestPolicyConfigurationFactory.getCurrentPolicyConfiguration;

import java.security.CodeSource;
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.util.List;
import java.util.Map;

public class TestPolicy extends Policy {

    private Policy previousPolicy = Policy.getPolicy();

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        TestRoleMapper roleMapper = policyConfiguration.getRoleMapper();

        if (isExcluded(policyConfiguration.getExcludedPermissions(), permission)) {
            // Excluded permissions cannot be accessed by anyone
            return false;
        }

        if (isUnchecked(policyConfiguration.getUncheckedPermissions(), permission)) {
            // Unchecked permissions are free to be accessed by everyone
            return true;
        }

        List<Principal> currentUserPrincipals = asList(domain.getPrincipals());

        if (!roleMapper.isAnyAuthenticatedUserRoleMapped() && !currentUserPrincipals.isEmpty()) {
            // The "any authenticated user" role is not mapped, so available to anyone, and the current
            // user is assumed to be authenticated (we assume that an unauthenticated user doesn't have
            // any principals, whatever they are)
            if (hasAccessViaRole(policyConfiguration.getPerRolePermissions(), "**", permission)) {
                // Access is granted purely based on the user being authenticated
                // (the actual roles, if any, the user has are not important)
                return true;
            }
        }

        if (hasAccessViaRoles(policyConfiguration.getPerRolePermissions(), roleMapper.getMappedRolesFromPrincipals(currentUserPrincipals), permission)) {
            // Access is granted via role. Note that if this returns false it doesn't mean the permission is not
            // granted. A role can only grant, not take away permissions.
            return true;
        }

        // Access not granted via any of the JACC maintained Permissions. Check the previous (default) policy.
        // Note: this is likely to be called in case it concerns a Java SE type permission.
        // TODO: Should we not distinguish between JACC and Java SE Permissions at the start of this method? It seems
        // very unlikely that JACC would ever say anything about a Java SE Permission, or that the Java SE
        // policy says anything about a JACC Permission. Why are these two systems even combined in the first place?
        if (previousPolicy != null) {
            return previousPolicy.implies(domain, permission);
        }

        return false;
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {

        Permissions permissions = new Permissions();

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        TestRoleMapper roleMapper = policyConfiguration.getRoleMapper();

        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(domain), permissions, excludedPermissions);
        }

        // If there are any static permissions, add those next
        if (domain.getPermissions() != null) {
            collectPermissions(domain.getPermissions(), permissions, excludedPermissions);
        }

        // Thirdly, get all unchecked permissions
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        // Finally get the permissions for each role *that the current user has*
        //
        // Note that the principals that are put into the ProtectionDomain object are those from the current user.
        // (for a server application, passing in a Subject would have been more logical, but the Policy class was
        // made for Java SE with code-level security in mind)
        Map<String, Permissions> perRolePermissions = policyConfiguration.getPerRolePermissions();
        for (String role : roleMapper.getMappedRolesFromPrincipals(domain.getPrincipals())) {
            if (perRolePermissions.containsKey(role)) {
                collectPermissions(perRolePermissions.get(role), permissions, excludedPermissions);
            }
        }

        return permissions;
    }

    @Override
    public PermissionCollection getPermissions(CodeSource codesource) {

        Permissions permissions = new Permissions();

        TestPolicyConfigurationPermissions policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(codesource), permissions, excludedPermissions);
        }

        // Secondly get the static permissions. Note that there are only two sources possible here; without
        // knowing the roles of the current user we can't check the per role permissions.
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        return permissions;
    }

    private boolean isExcluded(Permissions excludedPermissions, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return true;
        }

        for (Permission excludedPermission : list(excludedPermissions.elements())) {
            if (permission.implies(excludedPermission)) {
                return true;
            }
        }

        return false;
    }

    private boolean isUnchecked(Permissions uncheckedPermissions, Permission permission) {
        return uncheckedPermissions.implies(permission);
    }

    private boolean hasAccessViaRoles(Map<String, Permissions> perRolePermissions, List<String> roles, Permission permission) {
        for (String role : roles) {
            if (hasAccessViaRole(perRolePermissions, role, permission)) {
                return true;
            }
        }

        return false;
    }

    private boolean hasAccessViaRole(Map<String, Permissions> perRolePermissions, String role, Permission permission) {
        return perRolePermissions.containsKey(role) && perRolePermissions.get(role).implies(permission);
    }

    /**
     * Copies permissions from a source into a target, skipping any permission that's excluded.
     *
     * @param sourcePermissions the permissions to copy from
     * @param targetPermissions the collection to copy into
     * @param excludedPermissions the permissions that must not be copied
     */
    private void collectPermissions(PermissionCollection sourcePermissions, PermissionCollection targetPermissions, Permissions excludedPermissions) {

        boolean hasExcludedPermissions = excludedPermissions.elements().hasMoreElements();

        for (Permission permission : list(sourcePermissions.elements())) {
            if (!hasExcludedPermissions || !isExcluded(excludedPermissions, permission)) {
                targetPermissions.add(permission);
            }
        }
    }

}
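
To see the whole thing in action outside a container, the following sketch simulates the work the container normally does and then asks the policy a question. It assumes all classes from this article are on the classpath together with the JACC API; the context ID and URL patterns are made up:

package test;

import java.security.CodeSource;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyContext;
import javax.security.jacc.WebResourcePermission;

public class PolicySmokeTest {

    public static void main(String[] args) throws Exception {

        // Simulate the container: collect permissions for a module and commit
        PolicyConfiguration policyConfiguration = new TestPolicyConfigurationFactory().getPolicyConfiguration("test /myapp", true);
        policyConfiguration.addToUncheckedPolicy(new WebResourcePermission("/public/*", "GET"));
        policyConfiguration.commit();

        // The container normally sets the context ID per request
        PolicyContext.setContextID("test /myapp");

        // An empty principal array represents an unauthenticated user here
        ProtectionDomain domain = new ProtectionDomain(
            new CodeSource(null, (Certificate[]) null), null, null, new Principal[0]);

        Policy policy = new TestPolicy();

        // Prints true: /public/foo is implied by the unchecked /public/* GET permission
        System.out.println(policy.implies(domain, new WebResourcePermission("/public/foo", "GET")));
    }
}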

Conclusion

This concludes our three-parter on revisiting JACC. In this third and final part we looked at an actual Policy Provider. We broke the implementation up into several parts that each focus on a particular responsibility. While the Policy Provider is complete and working (tested on GlassFish, WebLogic and Geronimo), we did not implement module linking yet, so it comes with the caveat that it only works within a single war.

To implement another custom Policy Provider, many of these parts can probably be re-used as-is, and likely only the Policy itself has to be customized.

Arjan Tijms
