Displaying JavaDoc in Android Studio

Android Studio does not display Javadoc by default when you hover over an element, which is a burden for productivity. However, this can easily be set up:

Go to Preferences and then Editor. When the window appears, check the last option:

[Screenshot: the Editor preferences window]

Alternatively, you can press “Control + J” with the caret on a method to display the Javadoc in a floating window.

Swipeable-cards, a library to provide Tinder-like cards for Android

Swipeable-cards is a native Android library that provides a Tinder-like card effect. Cards can be constructed from an image, displayed with animation effects, dismissed to like or dislike, and sorted with different mechanisms.

The library is compatible with Android 3.0 (API level 11) and upwards.

A library and a sample application are provided with the code.




Download the library with git and import it into your project (right now only Gradle is supported), adding the following to your build.gradle:

 compile project(':AndTinder')

and in your settings.gradle

include ':AndTinder'

Once you have included the library in your project, proceed as follows. First, create a container to store the cards:

<com.andtinder.view.CardContainer xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/layoutview"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" />

From your Activity, inflate the container you declared in your XML into a CardContainer:

mCardContainer = (CardContainer) findViewById(R.id.layoutview);

The card container can deliver the cards either in order or at random.

Now you need to create your cards. The procedure is quite simple: create a CardView object and provide the image resource you want to use:

CardView card = new CardView(mCardContainer, new CardModel(R.drawable.picture1));

Additionally, you can set up a delegate to be notified when a card is liked or disliked:

card.setOnCardDimissedDelegate(new CardView.OnCardDimissedDelegate() {
    @Override
    public void onLike(CardView cardView) {
        Log.d("AndTinder", "I liked it");
    }

    @Override
    public void onDislike(CardView cardView) {
        Log.d("AndTinder", "I did not like it");
    }
});

Finally, add the cards to the container.

There are many things that can still be done with this library:

* Allow custom templates
* Extend image personalization options
* Recreate the container when it has been emptied

If you want to collaborate on the project or have an idea to be implemented, feel free to submit a pull request or open an issue!

Increasing the performance of Gradle builds

Lately, I have been immersed in adding a bunch of new projects to our CI server. Although we have been using a distributed system to achieve parallel builds, at some point our builds were requiring a considerable amount of time. To provide some numbers: consider a scenario with 49 different projects (backend, frontend, mobile) in different branches (production, development), constantly building and deploying. There was an average waiting list of more than 20 minutes to build a project, with some projects taking more than 10 minutes to build and deploy. Something against the spirit of CI, really. After doing some research, I increased the performance of my builds by about one third. This is how to achieve it.

So with the problem detected, the next step was to find a solution: how to improve the performance of the builds. The first platform to improve was Android (15 of all our projects are Android based, which is around one third of the total). We are using the Gradle build system and Android Studio. While Gradle is still great, it is an ongoing product with constant releases and has not reached its performance peak yet. First, the important point was to identify the bottlenecks. I used the following script in our build.gradle file to detect which tasks were most problematic:

class TimingsListener implements TaskExecutionListener, BuildListener {
    private Clock clock
    private timings = []

    void buildFinished(BuildResult result) {
        println "Task timings:"
        for (timing in timings) {
            // Only report tasks that took at least 50 ms.
            if (timing[0] >= 50) {
                printf "%7sms  %s\n", timing
            }
        }
    }

    void buildStarted(Gradle gradle) {}

    void projectsEvaluated(Gradle gradle) {}

    void projectsLoaded(Gradle gradle) {}

    void settingsEvaluated(Settings settings) {}

    void beforeExecute(Task task) {
        clock = new org.gradle.util.Clock()
    }

    void afterExecute(Task task, TaskState state) {
        def milliseconds = clock.timeInMs
        timings.add([milliseconds, task.path])
        task.project.logger.warn "${task.path} took ${milliseconds}ms"
    }
}

gradle.addListener new TimingsListener()

This code is relatively straightforward: for each task executed by Gradle, it measures the time required, and when the build finishes it prints how long each task took.

In order to benchmark properly, I used my computer with no program running other than the console, and ran gradle clean assembleRelease. I ran this on one of our shipping projects with a structure quite typical for our company: a single project, containing six Maven libraries and two local ones.

My first experiment showed nothing really surprising: when I ran the build, mergeReleaseResources, preDexRelease and dexRelease were the most time-consuming tasks. Particularly:

[Screenshot: task timings for the initial build]

Pre-dexing is used to speed up incremental builds. It pre-dexes the dependencies of a module so that they can simply be merged together into the final dex file, but it won’t affect the release build (since you should be doing clean builds for releases anyway). So we can get rid of this process during the release build:

android {
    dexOptions {
        preDexLibraries = false
    }
}

While doing some research, I came across two options to be used with gradlew:

--parallel executes a parallel build for decoupled projects, leading to an increase in performance.
--daemon runs Gradle as a daemon, speeding up build times.

These options can be passed from the console:

./gradlew --parallel --daemon clean assembleRelease

Or they can be included in a gradle.properties file:
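Assuming the standard Gradle property names for these two options, the file would contain:

```properties
org.gradle.parallel=true
org.gradle.daemon=true
```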


Combining all the points, I ran the same command again and got the following:

[Screenshot: task timings for the optimized build]

The increase in performance has also held in subsequent builds, which are on average 30% faster than the non-optimized version.

Export XML files to CSV (or Android I18N files to iPhone I18N files)

Android and iOS have two different peculiarities when it comes to I18N: they use different file formats. Google’s platform uses XML, whereas Cupertino’s uses its own .strings format. Lately we ran into the problem of unifying some of our I18N resources. In order to do that, we wanted a common format to easily compare whether the strings were correctly modified or not.

This Ruby script was designed to export Android strings into a .CSV file (and from there it is really easy to script a comparison with its Apple counterpart). Remember that you need to install Nokogiri to make it work.

Call it with ruby scriptname.rb inputfile.xml outputfile.csv:

require "nokogiri"

file = File.open(ARGV[0])
xml_doc = Nokogiri::XML(file)
array = xml_doc.xpath("//string")
str = ""
array.each do |xpath_node|
  str.concat("#{xpath_node.attribute('name')}, #{xpath_node.text}\n")
end
puts str
File.open(ARGV[1], 'w') { |file| file.write(str) }
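If installing Nokogiri is not an option, the same transformation can be sketched with Ruby’s bundled REXML library. This is a minimal sketch with inline sample data (the resource names are hypothetical), not the script above:

```ruby
require "rexml/document"

# Sample Android strings.xml content (hypothetical resource names).
xml = <<~XML
  <resources>
    <string name="app_name">MyApp</string>
    <string name="greeting">Hello</string>
  </resources>
XML

# Collect every <string> element as a "name, value" CSV line.
doc = REXML::Document.new(xml)
csv = ""
doc.elements.each("//string") do |node|
  csv << "#{node.attributes['name']}, #{node.text}\n"
end
puts csv
```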

Which city has the most intense Android scene in Europe?

StackExchange Data Explorer is an open source tool to run SQL queries against public data from StackOverflow. Since StackOverflow is the biggest development forum in the world, there is surely a lot of information that companies could retrieve from the system to support business decisions (it is actually a brilliant place to apply Big Data).

Moving on to a different issue: I was discussing with some event organizers the possibility of bringing an Android event from the USA to Europe. Since I live in Munich (which, besides being a trendy mobile city, I think is a really cool place to organize such events), I was trying to convince them that Munich was the right choice. They were reluctant about it, so I needed to prove with some data that Munich would be a really good pick. I started thinking about how I could use Data Explorer and Big Data to support my thesis.

I remembered two occasions when I had used it before, to display the most active developers from Barcelona (the city I lived in before), from Munich, and from the two cities combined. Something similar could be a valid approach. In those SQL queries I ranked the top developers from each city based on their contributions to questions tagged “android”. So I could group all the developer contributions from a certain city on questions with the same tag. I came up with this script:

SELECT u.Location, COUNT(*) AS UpVotes
FROM Tags t
INNER JOIN PostTags pt ON pt.TagId = t.id
INNER JOIN Posts p ON p.ParentId = pt.PostId
INNER JOIN Votes v ON v.PostId = p.Id AND VoteTypeId = 2
INNER JOIN Users u ON u.Id = p.OwnerUserId
WHERE (LOWER(Location) LIKE '% germany%' OR
       LOWER(Location) LIKE '% spain%' OR
       LOWER(Location) LIKE '% holland%' OR
       LOWER(Location) LIKE '% france%' OR
       LOWER(Location) LIKE '% italy%' OR
       LOWER(Location) LIKE '% netherland%' OR
       LOWER(Location) LIKE '% united kingdom%' OR
       LOWER(Location) LIKE '% poland%' OR
       LOWER(Location) LIKE '% sweden%')
  AND TagName = 'android'
GROUP BY u.Location



I decided to search in Germany, Spain, Holland/Netherlands, France, Italy, the United Kingdom, Poland and Sweden (not that Sweden is a big country in terms of population, but I do work with a bunch of Swedish colleagues :-) ). I did some little experiments to get rid of some nomenclature errors and statistical noise (for instance, I also tried England, thinking that some developers might have registered themselves as English residents instead of UK ones). After some refining, I came up with the following result:


There is a major preponderance of cities from Germany and the UK: only five of the top 20 cities come from other countries, against nine British and six German ones. It is not my purpose here to give a full sociological analysis (for instance, many cities are from the UK since StackOverflow is an English-based community and other countries have their own local communities, although 80% of Internet documentation is in English), but to give a rough approach.

So, now we have a bunch of cities with some numbers. We still need a normalization step (cities with a larger population will always have more UpVotes than smaller ones). Since StackOverflow does not provide population statistics for cities (it is not their task, either), I correlated these values manually. Thus, I assigned to each city the coefficient that relates UpVotes to population. For the population values, I used Wikipedia.

  1. London: 1.783.457 / 8.174.000 = 0.21
  2. Reading: 701.683 / 145.700 = 4.815
  3. Berlin: 570.379 / 3.502.000 = 0.16
  4. Paris: 381.560 / 2.234.105 = 0.17
  5. Amsterdam: 335.999 / 779.808 = 0.43
  6. Cambridge: 328.603 / 123.900 = 2.65
  7. Munich: 252.516 / 1.378.000 = 0.18
  8. Frankfurt: 139.674 / 691.518 = 0.20
  9. Manchester: 126.113 / 510.700 = 0.24
  10. Lyon: 124.057 / 474.946 = 0.26
  11. Warsaw: 116.670 / 1.717.000 = 0.06
  12. Oxford: 100.504 / 150.200 = 0.66
  13. Hamburg: 97.930 / 1.799.000 = 0.05
  14. Madrid: 96.906 / 3.234.000 = 0.02
  15. Karlsruhe: 95.848 / 297.488 = 0.32
  16. Ulm: 92.675 / 123.672 = 0.74
  17. Edinburgh: 84.748 / 495.370 = 0.17
  18. Brighton: 81.612 / 155.919 = 0.52
  19. Ulverston: 79.056 / 11.524 = 6.86
  20. Barcelona: 78.130 / 1.621.000 = 0.04

And if we now sort it by the coefficient:

  1. Ulverston (UK): 6.86
  2. Reading (UK): 4.815
  3. Cambridge (UK): 2.65
  4. Ulm (Germany): 0.74
  5. Oxford (UK): 0.66
  6. Brighton (UK): 0.52
  7. Amsterdam (Netherlands): 0.43
  8. Karlsruhe (Germany): 0.32
  9. Lyon (France): 0.26
  10. Manchester (UK): 0.24
  11. London (UK): 0.21
  12. Frankfurt (Germany): 0.20
  13. Munich (Germany): 0.18
  14. Paris (France): 0.17
  15. Edinburgh (UK): 0.17
  16. Berlin (Germany): 0.16
  17. Warsaw (Poland): 0.06
  18. Hamburg (Germany): 0.05
  19. Barcelona (Spain): 0.04
  20. Madrid (Spain): 0.02
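The normalization and sorting above can be sketched in a few lines of Ruby, using three of the city figures from the table (rounding to two decimals, so the results may differ slightly from the truncated figures in the post):

```ruby
# UpVotes and population for a few cities from the table above.
cities = {
  "London"  => [1_783_457, 8_174_000],
  "Reading" => [701_683, 145_700],
  "Munich"  => [252_516, 1_378_000],
}

# Coefficient = UpVotes / population, sorted in descending order.
ranked = cities
  .map { |name, (votes, pop)| [name, (votes.to_f / pop).round(2)] }
  .sort_by { |_, coeff| -coeff }

ranked.each { |name, coeff| puts "#{name}: #{coeff}" }
```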

There is some very interesting information in this ranking:

  • The results of the UK cities are brilliant: Reading, Cambridge and Oxford, famous for their universities, are all in the top 5.
  • Ulverston, a tiny town in North West England, scores first in the ranking. It would be interesting to determine why, since there is no well-known university or industry there. It is probably explained by a single top StackOverflow user, so it can be discarded as statistical noise.
  • For most cities, a value between 0 and 1 seems to be the norm.
  • Big capitals generally sit in the middle of the table, with the exception of Amsterdam.

After this sampling, Munich does not score that badly (although it does better in absolute terms). There are, however, a bunch of other reasons to choose a place for a certain event (proximity to other places, infrastructure, transport connections, hosting prices, the expected number of attendees, etc.). But after this little experiment, I can only suggest that the organizers move to the city of Ulverston (even if I still think that Munich offers better beer). Follow me on Twitter @eenriquelopez!