Building Your Own Voice Activated Skynet with Amazon Web Services

Hey all,

Here are a couple of videos from this year’s Sirius Madness event, where @virtualtacit and I demonstrated using Amazon Web Services and Alexa (Amazon Echo) to flip a Raspberry Pi-controlled drone. Below you can find some of the background on how we built it.

Oh, and aside from the fact that voice-controlled drones are awesome, our goal was to show people how easy and inexpensive it can be to accomplish some very complex tasks with very little infrastructure. In fact, it can be free: Amazon provides access to AWS free for the first year.

This video shows a little bit of the Heads Up Display (HUD) interaction. A node.js web server on the Raspberry Pi processes IoT changes and shows the corresponding HUD page (Dashboard, HUD, System Status).
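If you are curious how a HUD server like this could work, here is a minimal sketch. It is not our exact code: Express, the hud-pages directory, the /current-page endpoint, and the showPage helper are all illustrative assumptions.

    // hud-server.js - a minimal sketch of a HUD web server (not our exact code).
    // Assumes Express is installed (npm install express); page names and
    // the /current-page endpoint are hypothetical.
    const express = require('express');
    const app = express();

    let currentPage = 'dashboard'; // updated by the IoT shadow handler

    app.use(express.static('hud-pages')); // static HTML for each HUD view

    // The browser polls this endpoint and swaps in the matching view.
    app.get('/current-page', (req, res) => {
      res.json({ page: currentPage });
    });

    // Called by the IoT handler when uberJARVIS asks for a different view.
    function showPage(page) {
      currentPage = page;
    }

    module.exports = { showPage };
    app.listen(8080, () => console.log('HUD server listening on :8080'));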

We were a little concerned that there wasn’t enough headroom to carry out the flip maneuver. You can see the drone clips the ceiling slightly. Fortunately, it recovers.

Inspiration: AWS re:Invent 2015

We saw some really cool hacks in the makerspace at Amazon’s AWS re:Invent conference.

So, naturally, we had to build something.

The Goal – What do we need?

  • Voice Activation
  • Send Commands to control devices
    • Activate Lights and Fog Machine
    • Wirelessly Communicate with a Raspberry Pi
    • Activate Heads Up Display

What services/hardware do we need?

We decided to build a voice-activated control system called uberJARVIS. You can see a breakdown of the services and devices we used below.

The flow of the control system can be seen below.

  1. The Amazon Echo accepts the commands and passes them to the Alexa Skills Kit (ASK).
  2. ASK breaks down the commands into simple JSON and passes the output to Amazon’s Lambda service. Check out the ASK/JSON sample below.
  3. Lambda runs a node.js script to process the JSON output and update a ‘thing’ we created in AWS IoT. A sketch of this handler appears after this list.
  4. The thing, which is a Raspberry Pi, subscribes to the ‘thing’ shadow in the AWS IoT registry and processes changes like:
    • Deploy mark 42 – Deploys the drone
    • Power On – Powers the system on (fog machine, lights, and relays)
    • Move Up|Down|Left|Right – Moves the drone
  5. When the Raspberry Pi detects a change, it executes the necessary response. In the case of ‘deploy mark 42’, it uses the ar-drone npm library to wirelessly launch the drone. Big shoutout to the developers of this module: ar-drone npm module
  6. Finally, we added a small web server to the Raspberry Pi to provide visual feedback in a webpage. Similar to how we launch the drone, we can tell uberJARVIS to show a specific page in the browser.
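As promised, here is a rough sketch of what the Lambda handler in step 3 could look like. This is not lifted from the repo: the thing name ‘uberJARVIS’, the intent-to-command mapping, and the endpoint are illustrative, and it assumes the aws-sdk for node.js.

    // lambda-handler.js - a rough sketch of the Lambda side (not the repo's code).
    // Assumes the aws-sdk IotData client; thing name and intent names are
    // illustrative.
    const AWS = require('aws-sdk');

    const iotData = new AWS.IotData({
      endpoint: 'XXXXXXXX.iot.us-east-1.amazonaws.com' // your AWS IoT endpoint
    });

    exports.handler = (event, context, callback) => {
      // ASK hands us the parsed intent, e.g. { name: 'PowerOn' }
      const intent = event.request.intent.name;

      // Map the intent onto the desired state of the thing shadow.
      const payload = JSON.stringify({
        state: { desired: { command: intent } }
      });

      iotData.updateThingShadow(
        { thingName: 'uberJARVIS', payload: payload },
        (err) => {
          if (err) return callback(err);
          // Minimal Alexa response so the Echo confirms the command.
          callback(null, {
            version: '1.0',
            response: {
              outputSpeech: { type: 'PlainText', text: 'As you wish.' },
              shouldEndSession: true
            }
          });
        }
      );
    };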

Raspberry Pi with relay mounted. I took apart this inexpensive RF controller and connected the GPIO from the Pi to control the on/off functions. These are activated by the node.js script running on the Pi to turn on a fog machine.
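To give a feel for the Pi side (steps 4 and 5 above), here is a hedged sketch of the shadow subscription and GPIO handling. The pin number, certificate paths, command names, and timing are all assumptions; it leans on the aws-iot-device-sdk, onoff, and ar-drone npm modules.

    // pi-subscriber.js - a sketch of the Pi side (not the repo's code).
    // Pin numbers, cert paths, and command names are illustrative.
    const awsIot = require('aws-iot-device-sdk');
    const Gpio = require('onoff').Gpio;
    const arDrone = require('ar-drone');

    const relay = new Gpio(17, 'out');   // drives the RF remote / fog machine
    const drone = arDrone.createClient(); // connects over the drone's wifi

    const shadow = awsIot.thingShadow({
      keyPath: 'certs/private.pem.key',
      certPath: 'certs/certificate.pem.crt',
      caPath: 'certs/root-CA.crt',
      clientId: 'uberJARVIS-pi',
      host: 'XXXXXXXX.iot.us-east-1.amazonaws.com'
    });

    shadow.on('connect', () => shadow.register('uberJARVIS'));

    // AWS IoT emits a 'delta' whenever desired state differs from reported.
    shadow.on('delta', (thingName, stateObject) => {
      const command = stateObject.state.command;

      if (command === 'PowerOn') {
        relay.writeSync(1);              // fire the fog machine and lights
      } else if (command === 'DeployMark42') {
        drone.takeoff();
        setTimeout(() => drone.animate('flipAhead', 1000), 5000);
      }

      // Report back so the delta clears.
      shadow.update('uberJARVIS', { state: { reported: { command } } });
    });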

Alexa Skills Kit (ASK) and JSON make it Simple

Here is an example of what the ASK request/response looks like. In this example, we send a simple command, ‘Tell Jarvis to power on’, which ASK parses into a very easy-to-handle response that gets sent to Lambda. Take a look at the ASK interaction model if you are interested in building your own skill for Amazon’s Echo. uberJARVIS is the skill we created, and it has an invocation phrase of ‘jarvis’. To invoke it, we simply tell the Echo, ‘Alexa, tell jarvis to do something’.
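For reference, an ASK IntentRequest for that command would look roughly like this. The intent name PowerOn comes from our interaction model; the IDs are placeholders, and the session fields are trimmed for brevity.

    {
      "version": "1.0",
      "request": {
        "type": "IntentRequest",
        "requestId": "amzn1.echo-api.request.xxxxxxxx",
        "timestamp": "2016-04-01T12:00:00Z",
        "intent": {
          "name": "PowerOn",
          "slots": {}
        }
      }
    }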

The service simulator allows you to test your skill without having to talk to your Echo. It is very helpful for troubleshooting and testing during development.

uberJARVIS GitHub Repo

Full disclosure: I am more of a pinch-hit developer, so don’t laugh too hard at some of the spaghetti code.

