This year several team members from Modus had the opportunity to attend Amazon’s annual Web Services event in Las Vegas – AWS re:Invent. One of the stars of this year’s conference was Amazon’s new hardware platform DeepLens. DeepLens is an IoT (Internet of Things) and Machine Learning (ML) device that allows engineers to leverage Amazon’s AWS platform for developing image recognition applications.
The Modus team attended a workshop on DeepLens and was incredibly lucky: each of us received a device from the Amazon team. Its expected launch date is April 2018, so our team has around four months to test the device before it hits the shelves.
Due to the combination of hardware and a range of tools available in AWS, the scope of what can be achieved with this device is extensive. For example, the DeepLens would be perfect for converting sign language into written text or perhaps triggering an Alexa skill.
The DeepLens measures 6.5 inches high by 1.5 inches across and 4 inches deep. It currently comes only in white, with the 4MP 1080p camera positioned on top. The box includes the power unit, detachable plug prongs and a microSD card.
On the front of the device, in addition to the camera lens, are three blue LEDs denoting power, WiFi connection status and whether the camera is currently on, along with the On/Off button. On my device, however, the button only seems to switch the DeepLens on, not off.
The back of the device is where you can find the various ports. Running from top to bottom we have:
- USB 3.0
- USB 3.0
- Audio out
- Power supply
Although not noted in the device’s packaged documentation, the DeepLens also supports Bluetooth in hardware; you will, however, need to enable it yourself once logged in. The CPU is an Intel Atom, and the device packs 8GB of RAM, 16GB of onboard storage and the Intel Gen9 Graphics Engine. For those wishing to extend the storage capacity, the microSD card can be used.
Video from the device can be streamed to a monitor via HDMI or in theory over an SSH connection with the right configuration and software.
Note: neither an HDMI-to-microHDMI adapter nor a microHDMI cable is included in the box. If you wish to view the streaming images, you will therefore need one of these in addition to a keyboard, mouse and HDMI-compatible monitor. That is, unless you are prepared to figure out how to stream the images over your network.
The DeepLens comes pre-installed with the Ubuntu 16.04 LTS operating system. Developers can thus SSH into the device and/or connect via the graphical desktop and install extra Linux packages as needed.
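As a minimal sketch of what that looks like: the early setup guides document a default `aws_cam` user on the device. The IP address below is a placeholder, and the remote commands are shown as comments because they only make sense with a DeepLens on your network.

```shell
# Hypothetical address for the DeepLens on the local network -- replace with yours.
DEEPLENS_IP=${DEEPLENS_IP:-192.168.1.50}

# Log in as the default user (the password is set during device registration):
#   ssh aws_cam@$DEEPLENS_IP
#
# Once logged in, extra Linux packages install as on any Ubuntu 16.04 box:
#   sudo apt-get update && sudo apt-get install -y htop

echo "Connect with: ssh aws_cam@$DEEPLENS_IP"
```

From there the device behaves like any other headless Ubuntu machine on your network.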
During setup I opted not to use a mouse and keyboard. Instead, I installed Synergy on the device, which allows me to control the Ubuntu desktop using the keyboard and trackpad of my MacBook.
You can read more about Synergy here: https://symless.com/synergy
When the DeepLens camera is turned on it produces two video output streams. The first, the device stream, is the raw video with no processing applied. The second, the project stream, carries the output of the software processing the video in combination with our machine learning model. To use the project stream, you will first need to deploy a project to your DeepLens device via AWS.
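For reference, the early DeepLens guides describe viewing both streams with mplayer while logged into the device itself. The file paths below are taken from those guides and should be treated as assumptions; they may well change before launch.

```shell
# On the DeepLens (e.g. over SSH), both streams are exposed as MJPEG files.
# Paths per the early AWS documentation -- treat them as assumptions.
RAW_STREAM=/opt/awscam/out/ch2_out.mjpeg
PROJECT_STREAM=/tmp/results.mjpeg

# 1. The raw device stream, straight from the camera:
#   mplayer -demuxer lavf "$RAW_STREAM"
#
# 2. The project stream, annotated by the deployed ML model:
#   mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 "$PROJECT_STREAM"

echo "Raw stream:     $RAW_STREAM"
echo "Project stream: $PROJECT_STREAM"
```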
Using the DeepLens
When initially setting up a DeepLens project at AWS re:Invent we had the help of Amazon staff and pre-configured hardware. On returning home with the DeepLens, however, we relied on the initial documentation that Amazon provides via its website.
Since December 2017, Amazon has provided a handy 10-minute setup guide in preparation for more users coming onboard in Q1 2018. This guide walks users through:
- Logging into the AWS console and locating DeepLens relevant content
- Registering the DeepLens and setting up relevant permissions
- Downloading the certificate bundle
- Connecting to and configuring the DeepLens
In addition to this, Amazon has a number of pre-canned AWS DeepLens projects you can try out. This includes a Hotdog/Not Hotdog application based upon the famous Silicon Valley episode.
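To make the idea concrete, here is a hedged, self-contained sketch of the kind of post-processing a project’s function performs on the model’s output. The label map, threshold and function name are illustrative assumptions, not Amazon’s actual code; on the device, the per-class probabilities would come from the deployed model rather than a hand-written dictionary.

```python
# Hypothetical post-processing for a two-class (hotdog / not hotdog) classifier.
# On the DeepLens, a deployed project's code receives raw per-class
# probabilities from the model and maps them to a human-readable label.

LABELS = {0: "not hotdog", 1: "hotdog"}

def top_label(probabilities, threshold=0.5):
    """Return the most probable label, or None if below the confidence threshold."""
    best = max(probabilities, key=probabilities.get)
    if probabilities[best] < threshold:
        return None
    return LABELS[best]

print(top_label({0: 0.2, 1: 0.8}))  # -> hotdog
```

Raising the threshold trades recall for precision, which matters when the camera faces ambiguous objects, as we found out below.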
In fact, when setting up the project at AWS re:Invent we tested the Hotdog/Not Hotdog project out with the following little guy:
Much to our amusement the DeepLens classified this as a hotdog! You might want to try this yourself and probe the limits of the project’s ability to tell a real edible item from a fake one.
Having experimented with some of the existing applications you will then be ready to learn more about DeepLens or start building your own project.
For those with early units of the DeepLens wishing to build out their own application, AWS is running an open competition called the DeepLens Challenge. Prizes for winning entries include $15,000 USD in cash, custom DeepLens devices and tickets to AWS re:Invent.
The competition runs until February 14, 2018, so good luck!
At $249 USD, the DeepLens is one of the first deep learning devices commercially available to home users on the market.
In addition to its great hardware specs, Amazon has provided a rich development environment and tool set via AWS to combine with it. The ability to use multiple Amazon services from SageMaker to Lambda gives budding engineers the opportunity to use these services in interesting development projects and learn along the way.
Getting up and running does require some familiarity with the AWS ecosystem and a degree of development skill. For engineers wishing to get into Machine Learning without first having to absorb large amounts of academic material, however, this is the device for them.
Where Amazon will take this, and the types of services we will see built on it, is anyone’s guess. However, opening up Machine Learning to the public can only pay dividends.
We’ll be looking forward to AWS re:Invent 2018 to see what exciting new features Amazon announces for the growing world of DeepLens and AI.