Gradle survival tricks

Gradle is here to stay. Although it existed before Android Studio, it was Google's IDE that popularised the tool. But are we making the most of it?

After meeting colleagues and other developers at different conferences, I have realised that the community is underusing Gradle and not exploiting all its possibilities. With this article, I want to share a few tricks I have learned, focused on improving productivity and helping with the Continuous Integration process:

Keeping multiple builds with different icons, names and package extensions

Very likely, at your organisation the Continuous Integration process will create different binaries depending on the build type (alpha, beta and release). It is important for your team to keep a copy of the different versions, so that, for instance, you can keep the last beta version alongside the release version. By default, Android keeps a single package name (so each install replaces the previous one) and the same icon. We can easily change this to allow all the different builds to co-exist.

We can set an applicationIdSuffix and a versionNameSuffix in each buildType.

 
     buildTypes {
        debug {
            debuggable true
            applicationIdSuffix '.debug'
            versionNameSuffix '-debug'
            signingConfig signingConfigs.debug
        }
    }

For each build type we have created, we must also add, under the src folder, a new folder with the name of the build type. Any content within this folder will override the original one when the application is compiled for that particular build type (i.e., if we add a different icon there, it will be taken from this folder).
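For example, a project with debug and beta build types might lay out its launcher icons as follows (paths are illustrative, not a requirement):

```
app/src/
├── main/res/mipmap-hdpi/ic_launcher.png    (default icon)
├── debug/res/mipmap-hdpi/ic_launcher.png   (used by debug builds)
└── beta/res/mipmap-hdpi/ic_launcher.png    (used by beta builds)
```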

 


And as we can see, all the different build types co-exist on the same device.


Using different values depending on the build type

A very recurrent problem: we want to handle different values that correspond to different build types (for example, we may want to call a different URL, or track to a different Google Analytics account, depending on whether the application is the production one or not). Using buildConfigField, this task is trivial with Gradle (and removes all the risk associated with handling it manually!)

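A minimal sketch of how this looks in build.gradle (the URLs and the tracking ID below are placeholders, not real endpoints):

```groovy
android {
    buildTypes {
        debug {
            // Generated into BuildConfig.BASE_URL and BuildConfig.ANALYTICS_ID
            buildConfigField "String", "BASE_URL", '"https://staging.example.com/api"'
            buildConfigField "String", "ANALYTICS_ID", '"UA-00000000-1"'
        }
        release {
            buildConfigField "String", "BASE_URL", '"https://www.example.com/api"'
            buildConfigField "String", "ANALYTICS_ID", '"UA-00000000-2"'
        }
    }
}
```

In code you then simply read BuildConfig.BASE_URL, and the correct value is baked in for each build type at compile time.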

Increasing the version code automatically

We have already written about automatically increasing the versionCode, and an extended version of that approach. The idea behind it is that every Jenkins build takes care of increasing the versionCode and then notifies the recipients that a new version is available. This is a really useful feature that helps us keep real track of all the different APKs we are generating.

 

Handling duplicates in dex files

Did you ever see an error saying something like Multiple dex files define Lwhatever/package? It really sucks. It means you are adding a file twice, because it is included in more than one of your dependencies. In a large project this sucks twice, since you might have trouble identifying exactly which package is causing it. We can call gradlew dependencies to find where this is happening in the root project, but this does not work for subprojects (and in the current version there is no --recursive or --include-submodules flag).

There is however a small workaround. We can define this in our root build.gradle file:

 
subprojects {
    task listAllDependencies(type: DependencyReportTask) {}
}

which basically executes gradlew dependencies for every subproject. This shows us immediately where our dependencies come from, so we can eliminate the duplicates. Calling gradlew listAllDependencies will produce something similar to the following:

 
+--- com.actionbarsherlock:actionbarsherlock:4.4.0
+--- project :libraries:ViewPagerIndicator
|    \--- com.android.support:support-v4:19.1.+ -> 19.1.0
+--- project :libraries:sixtappkit-android
|    +--- com.google.android.gms:play-services:4.3.23
|    |    \--- com.android.support:support-v4:19.0.1 -> 19.1.0
|    +--- com.android.support:support-v4:19.1.0
|    +--- com.actionbarsherlock:actionbarsherlock:4.4.0
|    +--- com.google.maps.android:android-maps-utils:0.3.1
|    |    \--- com.google.android.gms:play-services:4.3+ -> 4.3.23 (*)
|    +--- com.github.chrisbanes.photoview:library:1.2.2
|    +--- org.projectlombok:lombok:1.12.4
|    +--- org.roboguice:roboguice:2.0
|    +--- com.squareup.retrofit:retrofit:1.4.1
|    +--- com.octo.android.robospice:robospice:1.4.11
|    |    \--- com.octo.android.robospice:robospice-cache:1.4.11
|    |         +--- org.apache.commons:commons-lang3:3.2.1
|    |         \--- org.apache.commons:commons-io:1.3.2
|    |              \--- commons-io:commons-io:1.3.2
|    \--- com.octo.android.robospice:robospice-retrofit:1.4.11
|         +--- com.octo.android.robospice:robospice:1.4.11 (*)
|         \--- com.squareup.retrofit:retrofit:1.3.0 -> 1.4.1
+--- com.github.chrisbanes.actionbarpulltorefresh:extra-abs:+ -> 0.9.9
|    +--- com.android.support:support-v4:[18.0,) -> 19.1.0
|    +--- com.actionbarsherlock:actionbarsherlock:[4.4,) -> 4.4.0
|    \--- com.github.chrisbanes.actionbarpulltorefresh:library:0.9.9
|         \--- com.github.castorflex.smoothprogressbar:library:0.4.+ -> 0.4.0
+--- com.crashlytics.android:crashlytics:1.+ -> 1.1.13
+--- de.greenrobot:eventbus:2.2.0
+--- com.jakewharton:disklrucache:2.0.2
+--- com.j256.ormlite:ormlite-android:4.48
|    \--- com.j256.ormlite:ormlite-core:4.48
+--- com.j256.ormlite:ormlite-core:4.48
\--- de.keyboardsurfer.android.widget:crouton:1.8.4

I always add this task to my projects. Since running into the multiple dex error is likely, I know I can always use this task rather than digging through the latest changes in the project.
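Once the duplicate is located (say, a library pulled in twice in a tree like the one above), it can be excluded from the offending dependency. A sketch, with the coordinates taken from the example output (yours will differ):

```groovy
dependencies {
    compile('com.octo.android.robospice:robospice-retrofit:1.4.11') {
        // We already depend on retrofit directly; avoid pulling in a second copy
        exclude group: 'com.squareup.retrofit', module: 'retrofit'
    }
}
```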

Your build process will grow and become more complex as your company or organisation does; it is not static at all. These tricks have helped me increase productivity, and I have now included them by default in every project I work on.

Increasing the performance of Gradle builds

Lately, I have been immersed in adding a bunch of new projects to our CI server. Although we have been using a distributed system to achieve parallel builds, at some point our builds were requiring a considerable amount of time. To provide some numbers: consider a scenario with 49 different projects (backend, frontend, mobile), in different branches (production, development), constantly building and deploying. There was an average waiting list of more than 20 minutes to build a project, with some projects taking more than 10 minutes to build and deploy. Something against the spirit of CI, really. After doing some research, I increased the performance of my builds by about one third. This is how I achieved it:

With the problem detected, the next step was to find a solution: how to improve the performance of the builds. The first platform to improve was Android (15 of our projects are Android-based, around one third of the total). We use the Gradle build system and Android Studio. While it is great, it is an evolving product with constant releases that has not yet reached its peak of performance. The first important point was to identify the bottlenecks. I used the following script in our build.gradle file to detect which tasks were most problematic:

import org.gradle.util.Clock

class TimingsListener implements TaskExecutionListener, BuildListener {
    private Clock clock
    private timings = []

    @Override
    void buildFinished(BuildResult result) {
        println "Task timings:"
        for (timing in timings) {
            if (timing[0] >= 50) {
                printf "%7sms  %s\n", timing
            }
        }
    }

    @Override
    void buildStarted(Gradle gradle) {}

    @Override
    void projectsEvaluated(Gradle gradle) {}

    @Override
    void projectsLoaded(Gradle gradle) {}

    @Override
    void settingsEvaluated(Settings settings) {}

    @Override
    void beforeExecute(Task task) {
        clock = new org.gradle.util.Clock()
    }

    @Override
    void afterExecute(Task task, TaskState state) {
        def milliseconds = clock.timeInMs
        timings.add([milliseconds, task.path])
        task.project.logger.warn "${task.path} took ${milliseconds}ms"
    }
}

gradle.addListener new TimingsListener()

This code is relatively straightforward. For each task executed by Gradle, it measures the time required, and when the build has finished it prints the time each task needed (ignoring tasks that took less than 50 ms).

To benchmark properly, I used my computer with no program running other than the console, and ran gradle clean assembleRelease. I ran this on one of our shipping projects, with a structure quite typical for our company: a single project with 6 Maven dependencies and 2 local libraries.

My first experiment showed nothing really surprising: mergeReleaseResources, preDexRelease and dexRelease were the most time-consuming tasks.


Pre-dexing is used to speed up incremental builds: it pre-dexes the dependencies of a module, so that they can simply be merged together into the final dex file. It won't affect the release build (since you should be doing clean builds for release builds anyway), so we can get rid of this process during the release build:

  android {
    dexOptions {
      preDexLibraries = false
    }
  }
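A common refinement (the property name disablePreDex here is just a convention, not anything built into the plugin) is to turn pre-dexing off only when a flag is passed, so local incremental builds keep the speed-up while the CI server skips it:

```groovy
// build.gradle: pre-dex locally, but let CI pass -PdisablePreDex to skip it
android {
    dexOptions {
        preDexLibraries = !project.hasProperty('disablePreDex')
    }
}
```

The CI job then calls ./gradlew -PdisablePreDex clean assembleRelease, while developer machines build as usual.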

While doing some research I found two options that can be used with gradlew:
--parallel executes a parallel build for decoupled projects, leading to an increase in performance.
--daemon runs Gradle as a daemon, speeding up the build time.

These options can be passed from the console:

./gradlew --parallel --daemon clean assembleRelease

Or they can be included in a gradle.properties file:

org.gradle.parallel=true
org.gradle.daemon=true

Combining all these points, I ran the same command again and measured the result.


The performance increase has also held in subsequent builds, which are on average about 30% faster than the non-optimised version.

Automatically increasing versionCode with Gradle

Continuous Integration means, above all, automation. The user should not be in charge of the distribution or deployment process. Everything should be scripted!

When deploying new versions of an Android application, one of the common tasks is to increase the versionCode to identify a particular build. With the new Gradle build system, this can also be automated.

import java.util.regex.Pattern

def getVersionCodeAndroid() {
    def manifestFile = file("src/main/AndroidManifest.xml")
    def pattern = Pattern.compile("versionCode=\"(\\d+)\"")
    def manifestText = manifestFile.getText()
    def matcher = pattern.matcher(manifestText)
    matcher.find()
    def version = Integer.parseInt(matcher.group(1)) + 1
    println sprintf("Returning version %d", version)
    return version
}

task writeVersionCode {
    // Wrap the work in doLast so it runs at execution time,
    // not on every Gradle configuration pass
    doLast {
        def manifestFile = file("src/main/AndroidManifest.xml")
        def pattern = Pattern.compile("versionCode=\"(\\d+)\"")
        def manifestText = manifestFile.getText()
        def matcher = pattern.matcher(manifestText)
        matcher.find()
        def versionCode = Integer.parseInt(matcher.group(1))
        def manifestContent = matcher.replaceAll("versionCode=\"" + ++versionCode + "\"")
        manifestFile.write(manifestContent)
    }
}

tasks.whenTaskAdded { task ->
    if (task.name == 'generateReleaseBuildConfig' || task.name == 'generateDebugBuildConfig') {
        task.dependsOn 'writeVersionCode'
    }
}

In our defaultConfig, we need to specify that the versionCode must be read from the newly added function:

  versionCode getVersionCodeAndroid()
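In context, the defaultConfig block would look something like this (the versionName value is just an example):

```groovy
android {
    defaultConfig {
        // Read the next versionCode from the manifest via our helper function
        versionCode getVersionCodeAndroid()
        versionName "1.0"
    }
}
```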

Testing Asynchronous Tasks on Android

Recently, at Sixt we have been migrating our development environment from Eclipse to Android Studio. This has meant moving to the new build system, Gradle, and applying TDD and CI to our software development process. This is not the place to discuss the benefits of applying CI to a software development plan, but rather to talk about a problem that arises when testing tasks that run on threads other than the UI thread in Android.

 

A test in Android is (broadly speaking) an extension of a JUnit TestCase. It includes setUp() and tearDown() for initialising and closing the tests, and the test methods are inferred using reflection (starting with JUnit 4 we can use annotations to specify the priority and execution of the tests). A typical test structure looks like this:

public class MyManagerTest extends ActivityTestCase {

	public MyManagerTest(String name) {
		super(name);
	}

	protected void setUp() throws Exception {
		super.setUp();
	}

	protected void tearDown() throws Exception {
		super.tearDown();
	}

	public void testDummyTest() {
		fail("Failing test");
	}

}

This is a very simple instance: in a practical case we would like to test things such as HTTP requests, SQL storage, etc. At Sixt we follow a Manager/Model approach: each Model contains the representation of an entity (a Car, a User...) and each Manager groups a set of functionality using different models (for example, our LoginManager might require the User model to interact with users). Most of our managers perform HTTP requests intensively in order to retrieve data from our backend. As an example, we would perform the login of a user using the following code:

 

	mLoginManager.performLoginWithUsername("username", "password", new OnLoginListener() {
		@Override
		public void onFailure(Throwable throwable) {
			fail();
		}

		@Override
		public void onSuccess(User customer) {
		//..
		}
	});

When applying this to our own test case, we simply make the test fail() when the result is not what we expected. That is why we call fail() in the onFailure() method.

However, even when I used a wrong username, the test still passed. Digging around, it turned out that the test executed the code sequentially and did not wait for the result of the callbacks to come back. This is certainly a bad approach, since a modern application makes intense use of asynchronous tasks and callback methods to retrieve data from a backend! I tried applying @UiThreadTest, but it still didn't work.

I found the following working method: I simply use a CountDownLatch signal object to implement the wait-notify mechanism (you could use synchronized(lock){ ... lock.notify(); }, but this results in ugly code). The previous code then looks as follows:

	final CountDownLatch signal = new CountDownLatch(1);
	mLoginManager.performLoginWithUsername("username", "password", new OnLoginListener() {
		@Override
		public void onFailure(Throwable throwable) {
			fail();
			signal.countDown();
		}

		@Override
		public void onSuccess(User customer) {
			signal.countDown();
		}
	});
	signal.await();
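One caveat: a bare signal.await() will hang the test forever if the callback never fires. A safer variant uses the timed await() overload and asserts on its result. Here is a minimal, self-contained sketch of the pattern, with a plain Thread standing in for the asynchronous manager call:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchTimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch signal = new CountDownLatch(1);

        // Stand-in for an asynchronous call such as performLoginWithUsername(...)
        new Thread(new Runnable() {
            @Override
            public void run() {
                // ... background work happens here, then the callback fires ...
                signal.countDown();
            }
        }).start();

        // Wait at most 30 seconds; await() returns false on timeout,
        // so a missing callback fails the test instead of blocking it
        boolean received = signal.await(30, TimeUnit.SECONDS);
        System.out.println("callback received: " + received);
    }
}
```

In a real test you would wrap the timed await in an assertion, so a timeout is reported as a failure rather than a hang.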