Sunday, November 18, 2012


Watching a talk by Square's CTO Bob Lee, I just learned about Dagger, a new dependency injection framework for Java and Android which is currently in the works at Square, Inc.

Considering the number of existing DI solutions in the Java space – e.g. CDI, Google Guice and Spring – one might wonder whether the world really needs yet another DI framework. According to Bob's talk, Dagger (a pun on "directed acyclic graph") is the attempt to create a modern and fast DI framework based on the insights gained during development and usage of Guice (Bob was the founder of the Guice project at Google). And indeed, Dagger comes up with some quite interesting ideas which I'd like to discuss in more detail in the following.

Overview

Dagger is centered around the annotations for dependency injection defined by JSR 330 (which Bob Lee co-led). This is a good thing because it increases portability of your code between different DI solutions.

Dependencies are retrieved by annotating fields or constructors with @Inject:

public class Circus {

    private final Artist artist;

    @Inject    
    public Circus(Artist artist) {
        this.artist = artist;
    }

    //...
}

To satisfy dependencies, Dagger creates the required objects using their @Inject-annotated constructor (in turn creating and passing any dependencies) or the default no-args constructor.
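For instance, given the following two made-up classes, Dagger can instantiate Ringmaster all by itself, first creating the Megaphone dependency the same way:

public class Megaphone {

    // no dependencies of its own; Dagger uses this
    // @Inject-annotated constructor to create instances
    @Inject
    public Megaphone() {
    }
}

public class Ringmaster {

    private final Megaphone megaphone;

    // Dagger creates the Megaphone first and passes it in here
    @Inject
    public Ringmaster(Megaphone megaphone) {
        this.megaphone = megaphone;
    }
}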

Where that's not possible (e.g. when an implementation of an interface needs to be injected), provider methods can be used. Provider methods must be annotated with @Provides and be defined in a class annotated with @Module like this:

@Module
public class CircusModule {
    @Provides Artist provideArtist() {
        return new Juggler();
    }
}

The @Module annotation is also used to define the entry point of an application:

@Module( entryPoints=CircusApp.class )
public class CircusModule {
    //...
}

This entry point represents the root of the object graph managed by Dagger. As we'll see in a moment, explicitly defining the root allows for compile-time validation of the dependency graph. An instance of the entry point type can be retrieved from the ObjectGraph class, passing the module(s) to create the graph from:

ObjectGraph objectGraph = ObjectGraph.create(new CircusModule());
CircusApp circus = objectGraph.get(CircusApp.class);
circus.startPerformance();

Dagger also provides support for qualifiers, lazy injection, injection of providers and more; the project's web site gives a good overview, and a quick sketch follows the list below. Apart from that, it's interesting to see what Dagger deliberately does not support in order to avoid increased complexity:

  • Circular dependencies between objects
  • Method injection
  • Custom scopes (Objects are either newly created for each injection or singleton-scoped)
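To give an impression of some of the supported features mentioned above, here is a brief sketch (the Show class and the field names are made up, and matching Artist bindings are assumed to exist in a module):

public class Show {

    // qualifiers (such as JSR 330's @Named) distinguish between
    // several bindings of the same type
    @Inject @Named("headliner") Artist headliner;

    // lazy injection: the Artist is only created upon the first
    // call to get()
    @Inject Lazy<Artist> lazyArtist;

    // provider injection: each call to get() asks the object
    // graph for an instance
    @Inject Provider<Artist> artistProvider;
}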

Code generation

DI frameworks usually make intensive use of reflection to examine annotations, find injection points, create managed objects etc. While reflection today isn't as expensive as it used to be in earlier years, it still can take a considerable amount of time to create large object graphs with lots of dependencies.

Dagger tries to improve upon that with the help of code generation. It provides a JSR 269 based annotation processor which is used at compile time to create an adapter class for each managed type. These adapter classes contain all the logic required at run time to set up the object graph by invoking constructors and populating references to other objects, without making use of reflection.
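Just to illustrate the idea – the following is a simplified, hand-written sketch, not Dagger's actual generated code – such an adapter for the Circus class from above might conceptually boil down to something like this:

// simplified sketch of the concept, not Dagger's real output;
// the essential point is that the object is created via a plain
// constructor invocation, without any reflection
public final class Circus$InjectAdapter {

    public static Circus get(Artist artist) {
        return new Circus(artist);
    }
}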

This approach promises performance benefits over the reflection-based strategy for creating object graphs typically employed by DI frameworks. On my machine, Dagger needed roughly half the time to initialize the graph of the CoffeeApp example using the generated classes compared to using reflection (which it also supports as a fallback). Of course this is by no means a comprehensive benchmark and can't be compared with other frameworks, but it surely shows the potential of the code generation approach.

The annotation processor also performs a validation of the object graph and its dependencies at compile time. So if, for instance, no matching type (or more than one) can be found for a given injection point, the build will fail with an error message describing the problem. This helps in reducing turn-around times compared to discovering this sort of error only at application start-up. Implementing these checks using an annotation processor makes them available in IDEs (which typically can integrate annotation processors) as well as in headless builds, e.g. on a CI server.
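As an example, the following (deliberately broken) variation of the module from above omits the provider method for Artist; since Artist is an interface, Dagger has no way to create an instance, and the build fails right away (the exact error wording may differ):

// deliberately broken for illustration: CircusApp requires an
// Artist, but there is no @Provides method for the Artist
// interface, so the annotation processor rejects the graph at
// compile time
@Module( entryPoints=CircusApp.class )
public class BrokenCircusModule {

    // missing: @Provides Artist provideArtist()
}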

Object graph visualization

Though not mentioned in the documentation, Dagger also provides an annotation processor which generates a GraphViz file visualizing the object graph. This can be useful for getting an understanding of unfamiliar object graphs. The following shows the graph from the CoffeeApp example:

Summary

Dagger is a new dependency injection framework for Java and Android.

While it's still in the works (the current version is 0.9 and there are still a few apparent bugs), I find the concept of using an annotation processor for validating the object graph at compile time and generating code for a faster initialization at runtime very interesting. In particular on mobile devices, fast start-up times are essential for a good user experience.

I also like the idea of leaving out features which might provide some value but would add much complexity. One thing I'm missing, though, is some sort of interceptor or decorator mechanism, which would be helpful for implementing typical cross-cutting concerns.

It'll definitely be interesting to see how the code generation approach works out in practice and whether other DI solutions possibly adopt that idea.

Wednesday, August 29, 2012


Note: This post originally appeared on beanvalidation.org. Please post any feedback over there.

Now that everybody is returning from their summer holidays, the Bean Validation team, too, is getting back to their desks to work at full steam towards revision 1.1.

As you know, the largest new feature will be method validation, that is the validation of method parameters and return values using constraint annotations. Bean Validation 1.1 early draft 1 lays the groundwork for this, and right now we're tackling some advanced questions still open in that area (btw. if you haven't yet tried out the reference implementation of ED1, this is the perfect time to do so and give us your feedback).

The problem

One question the EG is currently discussing is whether and, if so, how a refinement of method constraints should be allowed in sub-types. That is, if a class implements a method of an interface or overrides a method from a super class, should the sub-type be allowed to place any additional constraints on that method?

The current draft defines the following rules for such cases (see the draft document for all the gory details):

  • No parameter constraints may be specified in addition to those constraints defined on the method in the interface or super class.
  • Return value constraints may be added in sub-types.

The rationale

The rationale behind this is the principle of behavioral sub-typing, which demands that wherever a given type T is used, it should be possible to replace T with a sub-type S of T. This means that a sub-type must not strengthen a method's preconditions (by adding parameter constraints), as this might cause client code working correctly against T to fail when working against S. A sub-type must also not weaken a method's postconditions. It may, however, strengthen them (by adding return value constraints), as client code working against T will still work against S.

Can you show me some code, please?

To give you an example, the following shows a constraint declaration considered illegal as of the current draft, as parameter constraints are added to the placeOrder() method in a sub-class of OrderService:

public class OrderService {
    public void placeOrder(@NotNull String customerCode, @NotNull Item item, int quantity) { ... }
}

public class SimpleOrderService extends OrderService {

    @Override
    public void placeOrder(
        @Size(min=3, max=20) String customerCode,
        Item item,
        @Min(1) int quantity) { ... }
}
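
In contrast, the following refinement is legal, as the sub-class only adds a return value constraint and thus strengthens the method's postcondition (the findOrder() method is made up for the sake of the example):

public class OrderService {
    public Order findOrder(String orderCode) { ... }
}

public class SimpleOrderService extends OrderService {

    // legal: clients programming against OrderService can only
    // benefit from the guarantee of a non-null return value
    @Override
    @NotNull
    public Order findOrder(String orderCode) { ... }
}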

Alternatives

While this approach works, follows the principles of clean OO design and is also employed by other Programming by Contract solutions, some voices in the EG expressed concerns that the handling of parameter constraints may be too restrictive and thus limit innovation in that area. In particular with respect to legacy code, the question was raised whether it shouldn't be allowed to add parameter constraints in sub-types.

One example may be a legacy interface which technically has no constraints (that is, no parameter constraints are placed on its methods), but comes with a verbal description of preconditions in its documentation. In this case an implementor of that interface might wish to formalize this contract by placing corresponding constraint annotations on the implementation.
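A sketch of that scenario (interface and implementation are made up):

// legacy interface: no constraint annotations, but the JavaDoc
// demands a non-null customer code of 3 to 20 characters
public interface LegacyOrderService {
    void placeOrder(String customerCode, Item item, int quantity);
}

public class LegacyOrderServiceImpl implements LegacyOrderService {

    // formalizes the documented contract, but is illegal as per
    // the current draft, since parameter constraints are added
    // in a sub-type
    @Override
    public void placeOrder(
        @NotNull @Size(min=3, max=20) String customerCode,
        @NotNull Item item,
        @Min(1) int quantity) { ... }
}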

An open question in this situation is what the behavior should be if constraints are added to the interface afterwards.

Give us your feedback!

So what do you think, should such a refinement of parameter constraints be allowed or not? Possible alternatives:

  • allow such a refinement by default
  • have some sort of switch controlling the behavior (either standardized or provider-specific)

As there are pros and cons to either approach, we'd be very interested in user feedback on this.

Let us know what you think by posting a comment directly to this blog, shooting a message to the mailing list or participating in this Doodle vote. Which use cases have you encountered where the possibility to refine parameter constraints would help you?

Sunday, July 1, 2012


While thinking about how to take advantage of Bean Validation within JavaFX 2 based applications, I just learned that JavaFX has actually been part of the JDK installation since Java SE 7 Update 2. The latest JDK (Update 5) comes with JavaFX 2.1.1.

This makes it very easy to use the JavaFX API within Maven based applications; all that's required is to add the following dependency to your POM file:

<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>javafx-runtime</artifactId>
    <scope>system</scope>
    <version>2.1.1</version>
    <systemPath>${java.home}/../jre/lib/jfxrt.jar</systemPath>
</dependency>

That was simple :)
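To verify the setup, a minimal JavaFX application like the following made-up example should compile and run right away:

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.stage.Stage;

public class HelloFx extends Application {

    @Override
    public void start(Stage stage) {
        // a trivial scene, just to prove that the JavaFX runtime
        // is on the class path and working
        stage.setScene(new Scene(new Label("Hello, JavaFX 2!"), 320, 240));
        stage.setTitle("HelloFx");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}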

Now to the bad news: it seems the installation doesn't contain the JavaFX API sources or JavaDocs. But that's not really a problem, as a ZIP with the sources can be downloaded from OpenJFX's Mercurial server and the latest JavaDocs can be found here.

Sunday, June 10, 2012


The Spring framework in general has excellent documentation. One exception to me is the reference guide's chapter on Spring AOP, which I personally find not as comprehensible as other parts of the documentation.

What I'm missing in particular is a complete example demonstrating how to use Spring AOP together with AOP Alliance MethodInterceptors. Where possible, I prefer to use AOP Alliance compliant interceptors over other Spring AOP advice types, as they foster interoperability with other AOP frameworks compatible with the AOP Alliance API, such as Google Guice.

So without further ado, an example for using AOP Alliance method interceptors with Spring AOP is shown in the following.

First define an interceptor by implementing the interface org.aopalliance.intercept.MethodInterceptor:

import java.util.concurrent.atomic.AtomicLong;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.stereotype.Component;

@Component
public class InvocationCountInterceptor implements MethodInterceptor {

    private final AtomicLong invocationCount = new AtomicLong();

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        invocationCount.incrementAndGet();
        return invocation.proceed();
    }

    public AtomicLong getInvocationCount() {
        return invocationCount;
    }
}

Thanks to the @Component stereotype annotation, the interceptor can be picked up automatically by the Spring container via component scanning (e.g. configured using <context:component-scan base-package="..."/>).

Now the types/methods to which the interceptor shall be applied need to be configured. Using Spring's AOP schema, this is done by defining an advisor which references the interceptor bean and links it with a pointcut expression describing the targeted methods:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.1.xsd      
    ">

    ...
    <aop:config>
        <aop:advisor advice-ref="invocationCountInterceptor" pointcut="within(de.gmorling.moapa.springaop.service..*)"/>
    </aop:config>
    ...

</beans>

Here the interceptor is applied to all invocations of methods defined within the de.gmorling.moapa.springaop.service package and its sub-packages. But the pointcut could also be more fine-grained and match only methods on specific types or even single methods.
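To see the interceptor in action, the application context can be bootstrapped and the counter examined like this (GreetingService stands for a made-up bean residing in the service package matched by the pointcut, and applicationContext.xml for the configuration file shown above):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class InterceptorDemo {

    public static void main(String[] args) {

        ApplicationContext context =
            new ClassPathXmlApplicationContext("applicationContext.xml");

        // this invocation matches the pointcut and thus passes
        // through InvocationCountInterceptor
        GreetingService service = context.getBean(GreetingService.class);
        service.greet();

        InvocationCountInterceptor interceptor =
            context.getBean(InvocationCountInterceptor.class);
        System.out.println(interceptor.getInvocationCount()); // prints "1"
    }
}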

If several interceptors shall be bound to the same pointcut, the pointcut expression can also be defined separately and referenced from multiple advisors like so:

...
<aop:config>
    <aop:pointcut id="serviceLayer" expression="within(de.gmorling.moapa.springaop.service..*)"/>
    <aop:advisor advice-ref="invocationCountInterceptor" pointcut-ref="serviceLayer"/>
    <aop:advisor advice-ref="anotherInterceptor" pointcut-ref="serviceLayer"/>
</aop:config>
... 

Note that the AspectJ weaver JAR and – when applying interceptors to classes – CGLIB 2 need to be on the class path. A complete sample Maven project can be found on GitHub.