
Gradle to the Rescue, Part 1 – We Need a Better Build Tool

In this three-part Blog series I will share my very personal view on the problems of the two most common Java build tools, Ant and Maven, and the improvements brought by the new competitor called Gradle.

Introduction

Every Java developer with some experience will have encountered the classic Ant build tool during their career. Most of you will also have at least some experience with Maven, which in many aspects achieved its goal to provide a “better” Java build tool.

However, if you are seriously concerned with the automated build of complex Java applications, you will know all too well that neither Ant nor Maven is a really satisfactory solution.

But lately a new tool made its appearance in the Java open source world which claims:

Gradle combines the power and flexibility of Ant with the dependency management and conventions of Maven into a more effective way to build. (Gradle Homepage)

Amazingly, Gradle keeps this bold promise. But before we take a closer look at Gradle in the following articles, this part will be concerned with the two traditional Java build tools: Ant and Maven.

Any Tool is Better than None

If you started out with Java 1.0 in the 1990s like me, you will remember the time before Ant. After typing javac aa/acme/foo/MyFooMain.java followed by CLASSPATH=. java aa.acme.foo.MyFooMain a few times, I quickly started to leverage the power of my Solaris Bourne shell and wrote a two-line script to compile and run my Java application.

In 1997 Java 1.1 introduced JAR archives; an automated build had to produce these JARs from the compiled classes and any resources. Applications, applets and now also servlets got more complex. Developers started to use third party open source libraries which forced them to cope with dependency management (though for a long time many Java projects handled this by simply throwing all third-party JARs into a single lib directory).

After the actual build, applets or servlets had to be deployed to web servers and the servers restarted. More and more projects had a growing “history”, triggering the need to handle project versions and, for example, filter resources so the current version would show up in the README file or the about dialog of the generated applications. Some developers started to use automated tests which needed to be compiled and executed during the build.

Some phases of the default Maven build lifecycle illustrating typical build tasks:

  • validate – Validate that the project is correct and all necessary information is available.
  • compile – Compile the source code of the project.
  • test – Test the compiled source code using a suitable unit testing framework.
  • package – Take the compiled code and package it in its distributable format, such as a JAR.
  • integration-test – Process and deploy the package if necessary into an environment where integration tests can be run.
  • verify – Run any checks to verify the package is valid and meets quality criteria.
  • install – Install the package into the local repository, for use as a dependency in other projects locally.
  • deploy – Done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.

(Apache Maven Documentation: Introduction to the Build Lifecycle)

Developers extended their shell and batch scripts or used (and abused) existing build tools like make on Unix and Linux to manage the growing complexity of builds. In many cases this worked reasonably well.

But while one of the great achievements of Java was its platform independence, this did not extend to shell, batch or make scripts. While it is not difficult to create shell or make scripts that run well on different Unix and Linux platforms, these would typically not work on Windows. Compiling and installing make on Windows, or maintaining equivalent versions of your build scripts as Unix shell and Windows batch scripts, was cumbersome and error-prone.

So as a Java developer you could easily edit your source code on any platform and execute your built applications on it, but actually building the application in a platform independent way was more difficult. This especially affected the growing Java open source scene where developers around the world using all kinds of operating systems contributed to a project. A “native” Java build tool was needed.

The Ant Revolution

In the year 2000, Ant revolutionized the way Java projects were built. Finally there was a build tool written in and for Java. Ant quickly gained momentum, and soon it was the build tool used to build virtually every serious Java project on the planet.

The core of Ant is an XML DSL, a domain specific, procedural scripting language in XML syntax. Ant scripts are programs written in that DSL describing what operations to execute to reach one (or multiple) of the “targets” defined in the script. Each target defines a sequence of “tasks” to be executed to complete the target. You could write any kind of application in Ant, but the intended purpose clearly is to build some “artifacts” (such as Java JAR, WAR or EAR files) from some sources (such as Java source files).

Sample Ant Buildfile:

<?xml version="1.0"?>

<project name="MyProject" default="dist" basedir=".">
    <description>simple example build file</description>

  <!-- set global properties for this build -->
  <property name="src" location="src"/>
  <property name="build" location="build"/>
  <property name="dist"  location="dist"/>

  <target name="init">
    <!-- Create the time stamp -->
    <tstamp/>
    <!-- Create the build directory structure used by compile -->
    <mkdir dir="${build}"/>
  </target>

  <target name="compile" depends="init"
          description="compile the source">
    <!-- Compile the java code from ${src} into ${build} -->
    <javac srcdir="${src}" destdir="${build}"/>
  </target>

  <target name="dist" depends="compile"
        description="generate the distribution" >
    <!-- Create the distribution directory -->
    <mkdir dir="${dist}/lib"/>

    <!-- Put everything in ${build} into the MyProject-${DSTAMP}.jar file -->
    <jar jarfile="${dist}/lib/MyProject-${DSTAMP}.jar" basedir="${build}"/>
  </target>

  <target name="clean"
        description="clean up" >
    <!-- Delete the ${build} and ${dist} directory trees -->
    <delete dir="${build}"/>
    <delete dir="${dist}"/>
  </target>
</project>

(Apache Ant Documentation: Using Apache Ant)

But Ant was not the solution to all problems. The Ant scripting language is quite peculiar:

  • There are no classic data types or local variables but global string “properties” which are neither real constants (since they may be initialized to some arbitrary value at runtime) nor real variables, since once initialized they (normally) cannot be modified anymore. That said, if you execute a target from within another target using AntCall (for a target in the same Ant script) or Ant (for a target in another Ant script) you actually are allowed to redefine properties for the execution of the called target. (Though in that case you will sooner or later discover that there is no easy way to send any values back from the called target to the caller.)

  • There is no way to express a loop, and no easy way to conditionally execute some tasks. For conditions, certain tasks (for example fail) define parameters which prevent execution of the task depending on whether some property is set. Conditional execution depending on the existence of a property is also possible for a target as a whole. If your task(s) do not have built-in conditional execution (many don’t), you will be forced to introduce two extra targets to express a simple if statement: one which conditionally sets a property, and another which executes the task(s) or not depending on the state of that property, which has previously been set (or not) by the first target.

  • A big issue is the difficulty of writing reusable code, short of writing your own custom Ant tasks in Java. Over time, numerous ways to reuse existing Ant code in the same or another script have been built into Ant. Apart from dependencies between targets, there are several tasks like Ant, AntCall, MacroDef, Import or Include intended to help reuse Ant code. However, the number of different Ant tasks attempting to solve the same problem over and over again is a telling sign that none of them succeeded in providing a satisfactory solution. In fact, all of them are cumbersome to use and riddled with subtle difficulties that take the Ant novice by surprise.

  • On top of that, the XML syntax chosen for Ant scripts, while human readable, is certainly anything but human friendly; dissecting complex Ant scripts is a tedious occupation.
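As an illustration of the two-extra-targets workaround for a simple if statement mentioned above, a sketch might look like this (the target names, property name and checked file are made up for illustration):

```xml
<!-- Target "check" sets the property only if the file exists. -->
<target name="check">
  <condition property="config.present">
    <available file="config.properties"/>
  </condition>
</target>

<!-- Target "report" runs its tasks only when the property was set. -->
<target name="report" depends="check" if="config.present">
  <echo message="config.properties found"/>
</target>
```

Two targets, a property and a condition, just to express what would be a one-line if in any ordinary language.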

Over time Ant itself has evolved, and extensions have been made to work around some of these limitations. Third-party libraries like Ant Contrib went even further to provide features like modifiable variables or loops, and today Ivy even allows for sophisticated dependency management (as you will see in part 2 of this Blog). But none of these improvements managed to really overcome the basic deficiencies, which is probably not possible if you want to retain at least a reasonable degree of backwards compatibility.
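To give an impression of what such third-party extensions offer, a loop with the Ant Contrib for task might be sketched like this (assuming the ant-contrib JAR is on Ant’s classpath; target name and list values are made up):

```xml
<!-- Load the Ant Contrib tasks (for, var, if, ...). -->
<taskdef resource="net/sf/antcontrib/antlib.xml"/>

<target name="greet-all">
  <!-- Iterate over a comma-separated list; @{name} is the loop parameter. -->
  <for list="alice,bob,carol" param="name">
    <sequential>
      <echo message="Hello, @{name}!"/>
    </sequential>
  </for>
</target>
```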

Ant’s built-in tasks are great, and the availability of vendor-supplied Ant tasks for practically every Java development tool is very welcome. However, using Ant as a scripting language to write “programs” for anything but very simple, static build functionality will inevitably lead to severe headaches.

You might consider writing your own custom Ant tasks in Java to implement complex logic. At least Ant makes this easy by providing a very simple API which any Java developer can master quickly. On the other hand, it is often overkill to set up a separate “meta-project” just to realize some non-standard actions during a build. Furthermore, you end up writing your build logic in Java, which is possible but certainly not the most appropriate language for this domain.

There have been attempts to overcome Ant’s limitations by developing a “better” Ant. One of the most notable is Gant, which allows writing your Ant scripts in Groovy instead of the classic Ant XML scripting language. (Note that Gant was an important inspiration which led to the development of Gradle.)

Gant might have become the successor to Ant, but I guess it came too late. In the meantime Maven, which initially started out very slowly, had picked up speed and was about to prove that there might be more efficient ways to express build functionality than a classic “program” written in a scripting language.

The Maven Revolution

When writing yet another Ant script, one inevitably experiences déjà vu. In every Java project you will find yourself writing targets to compile the production and test sources, copy resources, execute unit tests, produce Javadocs, and generate JAR files and possibly other Java archive types like WAR or EAR. Wouldn’t it be nice instead to simply tell your build tool that your project is a Java project, and have the tool figure out how to compile the sources, run the unit tests and finally package a JAR file? This is what Maven is about.

Instead of writing a program which describes the dynamic process of how to build your project, Maven lets you write a configuration (called a “project object model” or “POM” for short) describing the static properties of your project. Specifically you configure a “packaging” (like jar or war) telling Maven what kind of project you want to build.

Just the packaging by itself would not be sufficient to build a project. To build a Java project with a JAR file as its product, for example, Maven needs additional information, such as where to find the Java sources and resources, which classes to execute as unit test suites, where to put the compiled classes before packaging, or how to name the JAR file. Here another fundamental Maven concept comes into play: “conventions” based on “best practices”. The Maven conventions dictate that the Java sources for production code are found in the subdirectory src/main/java and the corresponding resources in src/main/resources, while Java sources for unit tests and their resources belong in src/test/java and src/test/resources, and all classes from the test sources with names ending in …Test are executed as unit test suites during the build.
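Laid out as a directory tree, the conventional structure described above looks like this (the project name is made up):

```
my-maven-sample/
├── pom.xml
└── src/
    ├── main/
    │   ├── java/        production Java sources
    │   └── resources/   production resources
    └── test/
        ├── java/        unit test sources (…Test classes run during the build)
        └── resources/   test resources
```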

Maven allows you to override such defaults, but it is very much recommended to stick with the predefined conventions and the standard directory layout. On the one hand this will save you a lot of work when configuring your projects; on the other hand it sets a standard for any Java project built with Maven. Once you know the Maven conventions it should be trivial to find your way through any conforming project.

A Maven POM may contain more than just elements describing the actual build process. There is a plethora of project attributes like name, version, license, contact information of the contributing developers, version control and artifact repositories, to name just a few. Some of this information is used by Maven or third-party plugins and influences the build behaviour. Other attributes contain meta information intended for human readers, either in the POM file itself or in generated documentation.

Sample Maven Buildfile:

<?xml version="1.0"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>xx.acme.mavensamples</groupId>
  <artifactId>my-maven-sample</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>

  <name>My Maven Sample</name>
  <description>A sample project to demonstrate Maven.</description>
  <url>http://www.acme.xx/mavensamples/</url>
  <inceptionYear>2013</inceptionYear>
  <organization>
    <name>Acme Switzerland</name>
  </organization>

  <licenses>
    <license>
      <name>The Apache Software License, Version 2.0</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
    </license>
  </licenses>

  <dependencies>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.8</version>
      <scope>compile</scope>
    </dependency>

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Apart from the revolutionary way to describe build logic, one of the most impressive features of Maven is certainly its dependency handling combined with the public Maven Central artifact repository. Many developers would probably name it as the single most important reason why they use Maven.

All in all, Maven introduced and formalized key ideas and concepts about building Java (and other) projects. But not all was well. The implementation of these ideas often lacked the brilliance of the concepts behind them. Many Maven plugins contain a lot of bugs. The documentation, at least in parts, is equally buggy and often outdated and incomplete. Running into plugins which behave differently from what’s documented is not uncommon. And that pattern continues with the Maven Central repository, which for many groups is quite an underdocumented, historically grown mess, possibly with buggy or at least less than optimal dependency information, and, to my personal disappointment, even nowadays often without the source packages for OSS projects.

I am happy that Maven and Maven Central have been slowly but steadily improving over the years. But it will be challenging to overcome some of the more fundamental design decisions which proved to be problematic in practice. For example, defining the build by configuring plugins and predefined properties and binding goals to lifecycle phases forces you to learn a lot about the inner workings of Maven and of specific plugins to gain better control of your build. Once you have mastered that, you will find yourself constantly applying obscure constructs and non-intuitive tricks.

Rather sooner than later you will hit the limits of Maven plugins. Maybe you want to tweak the behaviour of an existing plugin in a way not intended by its authors. Maybe you want to do something completely different for which no one yet wrote a Maven plugin. Maybe you need to modify an existing build lifecycle or define your own one. Maybe you would like to handle more exotic kinds of project dependencies which cannot be expressed with the predefined set of Maven scopes. At that moment you will discover that the central idea of Maven is at the same time its biggest limitation: Maven helps you with some guidance on how to structure your projects and set up your build, but it also confines you to these structures.

The point is: when building software projects, there are occasions where you have to literally “program” parts of your build in a more or less generic programming or scripting language. Like:

  • the management of a complex environment for automated integration tests,
  • the dynamic location of some system dependency,
  • generating localized resources using a custom-built translation DB,
  • or triggering the fanfare following your one millionth successful build.

Maven provides no good solution for this kind of custom functionality. Either you use the antrun plugin, or you write your own Maven plugin in Java. While the former leads you back to Ant with all its problems, the latter is a tough road as well. (Maven is terribly overengineered, the internals are not very well documented, and the API is difficult to understand and use.)

Maven also makes it difficult to define when exactly a custom build action (a “goal”) should be executed. You can “bind” your action to an existing lifecycle “phase”, but your influence on the execution order of the goals bound to a phase is very limited. (Goals bound to a phase by a parent project, for example, will always be executed before goals bound by the child project to the same phase.) And to simply insert an additional phase into a built-in lifecycle for your action, you will have to package a plugin JAR file containing a decent amount of somewhat esoteric configurations in several XML files. Defining your own lifecycle from scratch is equally tedious.
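To illustrate what binding a custom action to a phase looks like, here is a minimal sketch using the antrun plugin mentioned above (the execution id, echoed message and plugin version are made up for illustration):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.7</version>
      <executions>
        <execution>
          <id>custom-check</id>
          <!-- Bind the goal to the "verify" lifecycle phase. -->
          <phase>verify</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <!-- An embedded Ant fragment, with all of Ant's problems. -->
            <target>
              <echo message="running custom verification"/>
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Note that this only lets you pick a phase; where within that phase your goal runs relative to other bound goals remains largely out of your hands.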

Maven lifecycles were initially intended to be simple sequences of phases. This contrasts with the build order of projects, for which Maven constructs a directed acyclic graph, allowing for tree-like build order dependencies. Not surprisingly, the strictly sequential lifecycles prove too limited in practice. Maven therefore allows “forking” new lifecycle threads from the executing thread. But you will have to go through the pains of writing a custom Maven plugin to achieve this. And, like so many of Maven’s features, forking lifecycles is only a crude workaround for a problematic initial design decision.

Even one of Maven’s strengths, the dependency handling, has significant limitations. On the one hand, for simple projects it is often overkill to depend on a full-fledged Maven repository and dependency configurations. Sometimes you would like to throw some JARs into a lib folder and put them all on the classpath like in the “good old times”. But Maven won’t let you do this easily.

On the other hand, for any serious project decent dependency handling certainly pays off very quickly. Needless to say, Maven supports all the basic necessities when it comes to dependency handling. However, Maven’s six dependency scopes seem to lack consistency. It is difficult to understand why there are compile and runtime scopes, but only a single test and a single system scope. A complete set of scopes would probably contain compileTest, runtimeTest, compileSystem, runtimeSystem and maybe even compileTestSystem and runtimeTestSystem scopes. You might want to define your own custom scopes to fix this inconsistency (or for many other uses), but this is not possible with Maven.

So for all the light which Maven brought into the business of building Java projects, there is also plenty of shadow. Maven was an important step in the right direction, but it is much more of an experimental prototype and proof of concept than a mature, well-designed tool.

We Need a Better Build Tool

Many developers, mostly the ones who never wrote an Ant script or a Maven POM themselves, easily underestimate the complexity of building even moderately big software projects. Those of us challenged with Ant and Maven day by day know better. Every build is a “meta-project” which has to be developed, tested and documented like any other software project.

Given this complexity, we should have tools which support us as much as possible in reducing and breaking down this complexity into manageable and reusable parts. But neither Ant nor Maven is really successful in this respect.

Until recently there was simply no serious alternative to Ant and Maven in the Java world. But about a year ago, version 1.0 of the new Java build tool named Gradle was released. Gradle’s promise is to combine the flexibility of Ant with the configurative approach of Maven, but to omit most – if not all – of their shortcomings.

In part 2 and part 3 (coming soon) I will explore if and how Gradle is able to fulfill this promise.
