Developers are always looking for better ways to test the performance of hybrid mobile applications across devices. Modus teams use Travis, Jenkins, and AWS Device Farm to run these tests. That's the beauty of Continuous Integration (CI), and it's something we strongly adhere to here at Modus Create.
For this article, we will assume you already have an Android Package (APK) built. We'll push that .apk file through Device Farm and execute some basic tests against it using the AWS CLI. In the articles that follow, we'll go into detail on the Travis and Jenkins implementations.
Let’s begin!
We have an Ionic hybrid mobile app repo in the Modus Create GitHub account called "notes-app-ionic-pro". Every time a developer pushes to a PR on the notes-app-ionic-pro repo, we should be able to:
- Use a CI tool to provision a suitable build environment for Android.
- Produce an Android Package (APK, .apk extension) by building the code in the created environment.
- Run a specific set of tests against the .apk file using AWS Device Farm.
- View the test reports.
- If step #3 ran without failure, do something specific, like publish it somewhere, or store it somewhere, etc.
For the CI tool, both Travis and Jenkins were used so that each implementation could be demonstrated individually. AWS Device Farm tests were triggered using shell scripts, and the reports can be viewed in the CI output. The code for all of the above is contained in the Dockerfile, Jenkinsfile, and .travis.yml files, and in the ci directory.
In Travis, the environment was provisioned via the .travis.yml file, where we specified things like environment variables, build tools, and required packages. Shell scripts were then run to perform the actual build and to trigger the AWS Device Farm tests. The same was done for Jenkins, in a Jenkinsfile, except that the build environment was created with Docker via this Dockerfile.
For both Jenkins and Travis, shell scripts were used to install some dependencies, build the code and execute the AWS Device Farm tests. AWS S3 was used to store the configuration files that contain the kinds of AWS Device Farm tests to be run and the devices to test against. The shell script grabs this configuration file from S3 and uses it to prepare tests and run them.
To build an Android APK, the build environment needs the following installed (a rough provisioning sketch follows the list):
- Java
- Android SDK
- Gradle
- Ionic Framework
- Node.js
- A package manager for Node.js (NPM or even Yarn, perhaps?)
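To make that concrete, here is a provisioning sketch for a Debian-based environment. The package names, versions, and SDK components below are illustrative assumptions, not the exact ones from our Dockerfile:

# Illustrative provisioning sketch; package names and versions are
# assumptions, not the exact ones used in our Dockerfile.
$ apt-get update && apt-get install -y openjdk-8-jdk gradle curl unzip
# Node.js via the NodeSource setup script, then Ionic and Cordova:
$ curl -sL https://deb.nodesource.com/setup_8.x | bash -
$ apt-get install -y nodejs
$ npm install -g ionic cordova
# Android platform and build tools (assumes the SDK command line tools
# are already installed and sdkmanager is on the PATH):
$ sdkmanager "platforms;android-26" "build-tools;26.0.2"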
Once built, the .apk file, along with other artifacts such as logs, will be uploaded to AWS S3. This is important to note because of the different ways an S3 upload is achieved in Travis vs. Jenkins.
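As a rough illustration of this step (ionic cordova build android is the standard Ionic CLI command, but the APK output path varies by cordova-android version, and the bucket name below is a hypothetical):

# Build the debug APK; the exact output path depends on the
# cordova-android version in use.
$ ionic cordova build android
# Upload the build artifact to S3 (hypothetical bucket name).
$ aws s3 cp \
    platforms/android/build/outputs/apk/android-debug.apk \
    s3://my-build-artifacts/android-debug.apk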
Prior to implementing the build in either CI tool, we determined the environment we would need by first getting the app to build successfully in Docker on our local machines. The resulting Dockerfile was very helpful when implementing this in Jenkins, as we'll see later.
AWS Device Farm Tests
The AWS Device Farm page explains what it does best:
AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. View video, screenshots, logs, and performance data to pinpoint and fix issues and increase quality before shipping your app.
Device Farm runs your tests on actual devices, and it's relatively easy to get started using it. The aws_device_farm_run.sh shell script is where we trigger the Device Farm run. What follows is an explanation of the steps required to get started with Device Farm. In the next two articles, we'll show how we tied this into Travis and Jenkins.
Configure AWS Device Farm Tests
AWS Device Farm is quite flexible: it lets you create a "device pool", a list of devices against which your build will be tested, and specify the types of tests to run against that build on the pool. This is how we "configure" our tests.
The configuration files for the above were placed in S3 so that the shell scripts can grab them via the AWS CLI and use them to set up the Device Farm tests. Here's what we have in our device-farm-configs-976851222302 S3 bucket:
$ export S3_CONFIG_BUCKET=device-farm-configs-976851222302
$ cd $(mktemp -d)
$ aws s3 cp s3://$S3_CONFIG_BUCKET/ . --recursive
download: s3://device-farm-configs-976851222302/tests/BUILTIN_EXPLORER.jinja2 to tests/BUILTIN_EXPLORER.jinja2
download: s3://device-farm-configs-976851222302/android/device-pool.json to android/device-pool.json
$ tree .
.
├── android
│   └── device-pool.json
└── tests
    └── BUILTIN_EXPLORER.jinja2

2 directories, 2 files
Our android/device-pool.json contains the list of devices, while tests/BUILTIN_EXPLORER.jinja2 contains the types of tests to run.
The AWS Device Farm documentation for the create-device-pool command has some good examples of how to define a device pool. A list of devices can also be displayed via aws devicefarm list-devices --region us-west-2.
AWS Device Farm Devices
How many devices are available in AWS Device Farm? Well, we can use jq to count them:
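Piping the full list-devices output into jq gives us the total:

$ aws devicefarm list-devices --region us-west-2 | jq '.devices | length'
367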
As you can see, there are 367 devices available (a number that changes over time as AWS adds and retires devices). Just for kicks, we also wanted to see how many of those devices are running Android. To do this, we filtered the output with the AWS CLI's --query option:
$ aws devicefarm list-devices --query 'devices[?platform==`ANDROID`]' --region us-west-2 | jq '. | length'
178
The output shows that out of those 367 devices, 178 of them run Android. Not a bad number of devices!
NOTE: The --query option is used quite a lot in our AWS CLI commands, so make sure you have a good understanding of how it works.
Now that we are aware of what devices are available to us, we can better understand what android/device-pool.json and tests/BUILTIN_EXPLORER.jinja2 contain.
This is what android/device-pool.json looks like:
[ { "attribute": "ARN", "operator": "IN", "value": "[\"arn:aws:devicefarm:us-west-2::device:5931A012CB1C4E68BD3434DF722ADBC8\",\"arn:aws:devicefarm:us-west-2::device:042BB3B077B745348CC7BE5D4703D585\",\"arn:aws:devicefarm:us-west-2::device:D18BF26972B842729CDE481BA22AE6D8\"]" } ]
The above JSON defines a list of devices by ARN (Amazon Resource Name), which can be extracted via the list-devices command used previously (see the lookup sketch after this list). The ARNs above are for the following devices:
- Samsung Galaxy S5 (AT&T), model SM-G900A running Android 6.0.1
- Samsung Galaxy S5 (T-Mobile), model SM-G900T running Android 4.4.2
- Samsung Galaxy S5 (Verizon), model SM-G900V running Android 6.0.1
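As a quick way to double-check what a given ARN points to, you can filter list-devices by ARN with --query (a sketch using the first ARN above):

$ aws devicefarm list-devices \
    --query 'devices[?arn==`arn:aws:devicefarm:us-west-2::device:5931A012CB1C4E68BD3434DF722ADBC8`].[name,model,os]' \
    --output text \
    --region us-west-2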
This JSON is passed in via the --rules parameter of the create-device-pool command. Again, take a look at the create-device-pool examples to see the different ways a device pool can be configured.
The tests/BUILTIN_EXPLORER.jinja2 file is a Jinja2 template that contains:
{ "type":"BUILTIN_EXPLORER", "testPackageArn":"{{ upload_arn }}" }
This JSON is passed in via the --test parameter of the AWS Device Farm schedule-run command; more details on defining tests are available in the schedule-run documentation. As you can see, the BUILTIN_EXPLORER test is specified, and a Jinja2 placeholder, {{ upload_arn }}, stands in for the ARN of the uploaded file, the .apk we built (the upload happens in the next few steps). Once the file is uploaded via the AWS CLI, the ARN of that upload is printed on screen, and it replaces the Jinja2 placeholder in the tests/BUILTIN_EXPLORER.jinja2 file.
Now that you’ve seen the structure of the configuration used here, it’s time to run some AWS CLI commands to get started with Device Farm!
NOTE: Keep in mind that AWS Device Farm only works in the us-west-2 region, so when using the AWS CLI to access Device Farm, make sure you append --region us-west-2 to your commands so that the AWS CLI sends your requests to the correct regional endpoint. Also note that the shell scripts described here are shared by the Travis and Jenkins implementations, so they won't be covered separately for each.
Get Configs
First, let’s change our working directory to a clean one and download our configs from the S3 bucket:
$ cd $(mktemp -d)
$ S3_CONFIG_BUCKET='device-farm-configs-976851222302'
$ aws s3 cp \
    s3://"${S3_CONFIG_BUCKET}"/android/device-pool.json \
    ./android/device-pool.json
$ aws s3 cp \
    s3://"${S3_CONFIG_BUCKET}"/tests/BUILTIN_EXPLORER.jinja2 \
    ./tests/BUILTIN_EXPLORER.jinja2
As previously mentioned, device-pool.json defines the devices we want to test against, and BUILTIN_EXPLORER.jinja2 is a Jinja2 template that defines the kinds of tests we want to run against our device pool.
Create Project
Using create-project, create a new project, making sure to save the project.arn from the output to PROJECT_ARN because it will be used later on:
$ PROJECT_ARN=$(aws devicefarm create-project \
    --name android-debug.apk \
    --query 'project.arn' \
    --output text \
    --region us-west-2)
The value for --name can be anything you like. In this case, the project was named android-debug.apk, which is the same name as our .apk file.
Create Device Pool
Then use create-device-pool with the project.arn from the previous command, specifying the path to the downloaded android/device-pool.json file; we name this device pool "ANDROID-devices". Here again, devicePool.arn is saved, this time to DEVICE_POOL_ARN, for later use.
$ DEVICE_POOL_ARN=$(aws devicefarm create-device-pool \
    --project-arn "${PROJECT_ARN}" \
    --name 'ANDROID-devices' \
    --rules file://./android/device-pool.json \
    --query 'devicePool.arn' \
    --output text \
    --region us-west-2)
Create an Upload
Before uploading our .apk file, we need to tell Device Farm what we are planning to upload, because, among other things, you can also upload external data for testing or test packages such as an Appium test package. We do this with create-upload, and the AWS CLI responds with a signed URL that serves as the endpoint to upload the .apk file to. Note that --type is set to ANDROID_APP: that is what tells Device Farm we are uploading an APK file and not, say, an Appium test package. There are more upload types in the create-upload documentation, if you're interested:
$ IFS=$'\t' read -ra UPLOAD_META <<< $(aws devicefarm create-upload \
    --name android-debug.apk \
    --type ANDROID_APP \
    --project-arn "${PROJECT_ARN}" \
    --query 'upload.[url,arn]' \
    --output text \
    --region us-west-2)
The output, an array assigned to UPLOAD_META, contains two elements:
- the signed URL
- the ARN of the upload, which we'll need in the next step
Let’s assign them to two variables for ease of use:
$ UPLOAD_URL="${UPLOAD_META[0]}"
$ UPLOAD_ARN="${UPLOAD_META[1]}"
Finally, we can go ahead and upload our android-debug.apk file to the signed URL:
$ curl -T ./android-debug.apk "${UPLOAD_URL}"
Schedule a Run
A project with a device pool has been created and the .apk file uploaded, so we are all set to run our tests. All that's left is to schedule a run using schedule-run. As you can see in the documentation for schedule-run, we need to provide --test, which contains the information about the test for the run being scheduled. Recall that we downloaded this file from our S3 bucket at the beginning. Right now it's a Jinja2 template, so we need to replace the placeholder in the file with the right value (i.e. $UPLOAD_ARN from the previous AWS CLI command):
$ echo "{\"upload_arn\":\"$UPLOAD_ARN\"}" > ./upload_arn.json $ TEST_FILE=$(jinja2 \ ./tests/BUILTIN_EXPLORER.jinja2 \ ./upload_arn.json \ --format=json)
At this point, TEST_FILE will contain the test type we want to run as well as the ARN for the file we uploaded. So, we can go ahead and schedule a run now!
$ RUN_ARN=$(aws devicefarm schedule-run \
    --project-arn "${PROJECT_ARN}" \
    --app-arn "${UPLOAD_ARN}" \
    --device-pool-arn "${DEVICE_POOL_ARN}" \
    --name "My scheduled run" \
    --test "${TEST_FILE}" \
    --query 'run.arn' \
    --output text \
    --region us-west-2)
Get Run Information
At this point, the tests have begun running. A run can take some time, so it's prudent to get information about it and monitor the tests as they progress. To achieve this, get-run can be called in a while loop (sketched below, after the sample output) so that we can display the run information as it progresses:
$ aws devicefarm get-run \
    --arn "$RUN_ARN" \
    --query 'run.[status,arn,result,counters]' \
    --output json \
    --region us-west-2
If you run the above immediately after the schedule-run command, you will likely see something similar to this:
[ "RUNNING", "arn:aws:devicefarm:us-west-2:041440807701:run:...", "PENDING", { "skipped": 0, "warned": 0, "failed": 0, "stopped": 0, "passed": 0, "errored": 0, "total": 0 } ]
On the completion of the run, the get-run output would look more like this if all tests pass:
[ "COMPLETED", "arn:aws:devicefarm:us-west-2:041440807701:run:...", "PASSED", { "skipped": 0, "warned": 0, "failed": 0, "stopped": 0, "passed": 9, "errored": 0, "total": 9 } ]
List Jobs
Once the tests have completed, information about them can be viewed using list-jobs, which shows a wealth of detail, including the devices the tests ran against, the specifications of those devices, and how long the tests took:
$ aws devicefarm list-jobs \
    --arn "${RUN_ARN}" \
    --output json \
    --region us-west-2
In the articles that follow in this series, we will show you how we generated a neat little report at the end of our Travis/Jenkins runs using jq and the output of list-jobs. The report covers the same devices specified in android/device-pool.json; with jq, we can format the output so that it's more readable.
Managing Artifacts
After the tests have completed, the artifacts need to be collected and uploaded to S3. These artifacts, which range from log files to screenshots to videos of the tests, can be enumerated using list-artifacts and then downloaded to disk in preparation for that upload.
If you wanted to list just the file artifacts generated during the Device Farm run, you could do something like this:
$ aws devicefarm list-artifacts \
    --arn "${RUN_ARN}" \
    --type 'FILE' \
    --output json \
    --region us-west-2
Since --type is 'FILE', only artifacts of type FILE will be listed; pass 'SCREENSHOT' or 'LOG' instead to list screenshots or device logs. Go ahead, try it out yourself!
The output contains URLs from which you can download the artifacts using cURL. The full download code appears in the next part of this series, but a minimal sketch might look like this (the --query projection is an assumption, and collisions between artifact names are ignored):
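# List name, extension, and URL for each FILE artifact as tab-separated
# rows, then download each one with cURL.
$ aws devicefarm list-artifacts \
    --arn "${RUN_ARN}" \
    --type 'FILE' \
    --query 'artifacts[].[name,extension,url]' \
    --output text \
    --region us-west-2 \
  | while IFS=$'\t' read -r NAME EXT URL; do
      curl -s -o "${NAME}.${EXT}" "${URL}"
    done

For now, here are the artifacts we were able to download from our run: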
$ tree .
.
├── DEVICE_LOG
│   ├── Logcat-13659.logcat
│   ├── Logcat-14880.logcat
│   ├── Logcat-1580.logcat
│   ├── Logcat-16819.logcat
│   ├── Logcat-2215.logcat
│   ├── Logcat-24887.logcat
│   ├── Logcat-25828.logcat
│   ├── Logcat-27014.logcat
│   └── Logcat-8700.logcat
├── EXPLORER_EVENT_LOG
│   ├── Explorer Event Log-2121.txt
│   ├── Explorer Event Log-3694.txt
│   └── Explorer Event Log-4349.txt
├── EXPLORER_SUMMARY_LOG
│   ├── Explorer Summary Log-11291.json
│   ├── Explorer Summary Log-23756.json
│   └── Explorer Summary Log-30811.json
├── INSTRUMENTATION_OUTPUT
│   ├── Instrumentation Output-13971.txt
│   ├── Instrumentation Output-150.txt
│   └── Instrumentation Output-26447.txt
├── list-jobs.json
├── RAW_FILE
│   ├── TCP dump log-1077.txt
│   ├── TCP dump log-12503.txt
│   ├── TCP dump log-13818.txt
│   ├── TCP dump log-19300.txt
│   ├── TCP dump log-2152.txt
│   ├── TCP dump log-22789.txt
│   ├── TCP dump log-28897.txt
│   ├── TCP dump log-6871.txt
│   └── TCP dump log-9938.txt
├── SCREENSHOT
│   ├── 0-17202.jpg
│   └── 2-9285.jpg
├── android-debug.apk
└── VIDEO
    ├── Video-13046.mp4
    ├── Video-17413.mp4
    └── Video-27312.mp4

7 directories, 34 files
Now, you have everything you need from the test run. If you’d like to delete your Device Farm project in order to keep your dashboard clean, go ahead and delete it:
$ aws devicefarm delete-project \
    --arn "${PROJECT_ARN}" \
    --output json \
    --region us-west-2
Conclusion
AWS Device Farm was completely new to me, but thanks to the excellent documentation on the AWS site, the process wasn't hard to figure out. Once you know the commands you need, running an AWS Device Farm test is quite simple. There's a lot more that can be done, but this is a straightforward example of how you can get started on Device Farm.
What I love about Device Farm is that you're running your tests against real physical devices in the cloud, you get to see the progress of a run using get-run, and you can then list your artifacts using list-artifacts. How cool is that?
In the next part of this series, we'll be using Travis CI to build our .apk and then run tests against it on AWS Device Farm.