I received my Amazon DeepLens yesterday.
Advances in deep neural networks have brought higher computing-power and memory requirements, not just during training but also at inference time (using the trained model to make predictions).
It is not always possible (or desirable) in real-world applications to constantly send data to the cloud for inference. To run inference at the edge, we will need to embed trained models into low-power, low-memory platforms. The DeepLens is neither low-power nor low-memory yet, but it is an example of what needs to happen for offline inference.
First impressions of the DeepLens:
- Setting up the WiFi is quite slow compared to a CCTV camera or Philips Hue.
- The typical workflow is to train a model in Amazon SageMaker, then use the AWS console to import the trained model and deploy it onto the DeepLens.
- The deployment process is seamless. Once the model is deployed, the DeepLens can run inference without connecting to the Amazon cloud.
- None of the sample models I tried (head pose detection, activity recognition) worked very well. The frame rate is pretty good, but the accuracy looks very low to my eye, even though these are small models with simple tasks: recognizing only 9 head poses and 30-odd activities.
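The train-in-SageMaker, deploy-to-DeepLens workflow above can be sketched with the SageMaker Python SDK. This is only a rough sketch: the entry-point script, IAM role ARN, S3 bucket, and hyperparameters below are all placeholders, and the exact parameter names depend on the SDK version.

```python
# Sketch of launching an MXNet training job via the SageMaker Python SDK (v1-era API).
# All names below (train.py, the role ARN, the S3 bucket) are hypothetical placeholders.
from sagemaker.mxnet import MXNet

estimator = MXNet(
    entry_point='train.py',              # placeholder: your MXNet training script
    role='arn:aws:iam::123456789012:role/SageMakerRole',  # placeholder IAM role
    train_instance_count=1,
    train_instance_type='ml.p2.xlarge',  # a GPU instance for training
    hyperparameters={'epochs': 10, 'learning-rate': 0.01},  # placeholder values
)

# Launch the training job; the resulting model artifact is written to S3,
# from where the AWS console can import it and deploy it to the DeepLens.
estimator.fit('s3://my-bucket/training-data')  # placeholder S3 path
```

Running this requires AWS credentials and incurs instance charges, which is presumably where the "hopefully not too expensive" part comes in.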
I have a few ideas I want to try next. For those, I'll finally need to learn how to train MXNet models on SageMaker. Should be fun! (And hopefully not too expensive.)