Check out Sentinel for Linux, Windows, and OS X on GitHub
I recently got my roommate a Dream Cheeky Thunder USB missile launcher. It was fun to play around with for a bit, but the included software was very limited in its functionality. Just when I was ready to dismiss it as an overpriced toy, my roommate came up with a great idea: could we mount a camera to the turret and make it automatically aim and fire at faces?
After a couple weeks of experimentation, we came up with Sentinel, a Python script that does just that, making heavy use of the excellent OpenCV computer vision library. Here’s how we did it.
But first, a video demo!
Note: The above video was filmed while our computers were rather bogged down, and Sentinel usually runs significantly faster than that (especially on Windows and OS X, where it can go as fast as 3 iterations per second).
Step Zero. The concept
The main loop of the program is conceptually quite simple:
At each iteration, the camera takes a picture, which is then processed to detect any faces. If a face is detected, the turret adjusts itself to bring the face closer to the center of the camera’s field of view. If the face is close enough to the center and the turret is armed, it then fires a missile at its target. The camera’s output is also sent to the screen, with an ominous red reticule drawn over any detected face.
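To make the structure concrete, here’s a stripped-down sketch of that loop in Python. The names (`camera.capture`, `turret.adjust`, and so on) are illustrative rather than the exact ones in the script:

```python
# Illustrative only -- the real script adds options, error handling, etc.
while turret.missiles_remaining > 0:
    frame = camera.capture()                      # take a picture
    img, x_adj, y_adj, face_found = camera.face_detect(frame)
    camera.display(img)                           # show the annotated image
    if face_found:
        turret.adjust(x_adj, y_adj)               # center the face in view
        turret.ready_aim_fire(x_adj, y_adj)       # fire if centered and armed
```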
Note: I’m only showing bits and pieces of our source code here, and I’m simplifying it where I can to keep the code chunks relevant. You can see the full source code here.
Step One. Assembling the hardware
We had a small Logitech C270 webcam lying around, so we taped it to the top of the turret, aiming it roughly parallel to the trajectory of the turret’s missiles, as shown in the photo above.
The webcam’s position about an inch above the turret means that when the camera is pointing toward a target’s face, the turret usually ends up shooting them in the neck or chest, which is a nice side effect, since neither of us wanted to get shot in the eye.
Step Two. Controlling the turret
Controlling USB devices from Python requires the PyUSB library, which in turn needs a low-level USB driver; we used libusb.
After that, connecting to a USB device is just a matter of finding its vendor and product IDs. In the case of this missile launcher, `lsusb` reported these IDs to be `2123:1010`. In Linux, you also need to detach the kernel driver if it is active:
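Here’s roughly what that setup looks like with PyUSB (interface number and error handling simplified):

```python
import usb.core

# Find the launcher by the vendor/product IDs that lsusb reported.
dev = usb.core.find(idVendor=0x2123, idProduct=0x1010)
if dev is None:
    raise RuntimeError("Missile launcher not found -- is it plugged in?")

# On Linux, the kernel's HID driver claims the device, so detach it first.
if dev.is_kernel_driver_active(0):
    dev.detach_kernel_driver(0)

dev.set_configuration()
```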
Now that the device is configured, we can send commands to it. Fortunately for us, others had already discovered the commands that the device accepts to position its turret, fire missiles, and toggle its built-in LED:
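The control transfers look roughly like the sketch below, using the `dev` handle from the previous snippet. The command bytes are the ones commonly reported for the Dream Cheeky Thunder, so treat them as an assumption and verify them against your own launcher:

```python
# Movement/fire command bytes reported for the Dream Cheeky Thunder (assumed):
DOWN, UP, LEFT, RIGHT, FIRE, STOP = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def send_cmd(cmd):
    # Each command goes out as an 8-byte control transfer; byte 0 selects
    # the "movement" report and byte 1 carries the command itself.
    dev.ctrl_transfer(0x21, 0x09, 0, 0,
                      [0x02, cmd, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])

def set_led(on):
    # The LED uses a separate report (byte 0 = 0x03) with an on/off flag.
    dev.ctrl_transfer(0x21, 0x09, 0, 0,
                      [0x03, 0x01 if on else 0x00,
                       0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
```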
Step Three. Controlling the camera
Now that we’ve set up the rocket launcher, the next step is accessing the webcam to make it take a picture once per loop iteration. We ended up doing this in a few different ways.
Linux: streamer
On Linux, I was excited to find streamer, a fast and simple-to-use photo/video capture tool. Even after we switched to using OpenCV’s photo capture capabilities on Windows, streamer still ended up giving the best performance on Linux.
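For reference, grabbing a single frame from the script amounts to shelling out to streamer, something like the snippet below. The device path and flags are typical values, not necessarily the exact ones Sentinel uses:

```python
import subprocess

# Grab one JPEG frame with streamer; /dev/video0 is an assumption --
# check `man streamer` and your camera's device node.
subprocess.call(["streamer", "-c", "/dev/video0", "-o", "capture.jpeg"])
```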
Windows and OS X: OpenCV
Capturing photos on Windows proved trickier. Initially, we used CommandCam, which gave good results but was a little too slow to use. We finally switched to using OpenCV’s image capture methods, but ran into a serious problem.
The images that OpenCV was processing were always a few frames out of date compared to the images the camera was taking, which made the turret oscillate back and forth instead of homing in on its target, because it was adjusting its position based on outdated information.
This behavior is actually intentional in OpenCV: images are stored in a buffer and retrieved as quickly as possible, and the latest image taken is not generally the image retrieved. Since OpenCV is generally used in a situation where continuous footage is being taken from a video camera, this is not usually a problem. However, since we were taking only one picture at a time and then repositioning the turret based on each picture, this behavior was unacceptable.
Our solution was a little hackish but succeeded in correcting the problem: we simply made a `clear_buffer` method that repeatedly grabs images from the buffer until only the latest image is left, slowing the process down slightly but greatly improving the turret’s behavior:
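A simplified version of that idea, assuming a small fixed buffer size (the real method may drain the buffer differently):

```python
def clear_buffer(self, buffer_size=4):
    # cv2.VideoCapture.grab() pulls a frame off OpenCV's internal buffer
    # without decoding it. Draining a few frames this way means the next
    # read() returns something close to real time. buffer_size is a guess;
    # tune it for your camera.
    for _ in range(buffer_size):
        self.webcam.grab()
```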
The webcam is set up for use with OpenCV via `self.webcam = cv2.VideoCapture(int(self.opts.camera))` within the `Camera` class’s initializer, and frames are captured like this:
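In simplified form (the method name and error handling here are illustrative):

```python
def capture(self):
    self.clear_buffer()                  # throw away stale buffered frames
    success, frame = self.webcam.read()  # grab and decode the latest frame
    if not success:
        raise RuntimeError("Failed to capture a frame from the webcam")
    return frame
```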
We’re currently using OpenCV for photo capture in OS X as well, though it doesn’t seem to work as well as in Windows, so we’re looking for alternative tools we can use to capture photos from within the script.
Step Four. Face recognition with OpenCV
Once a photo is captured, it’s handed off to OpenCV for face detection. A lot of things are happening in this method, so I’ve tried to annotate it as much as possible. `draw_reticule` is a helper method that draws targets of various styles.
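The core of it is a standard OpenCV Haar-cascade detection pass, roughly like the sketch below. The cascade file path, the detection parameters, and the plain rectangle standing in for `draw_reticule` are all simplifications:

```python
import cv2

# Path to the cascade file is an assumption -- point it at your OpenCV install.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def face_detect(frame):
    # Detection runs on a grayscale copy of the frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=4)

    # Convert the grayscale image back to BGR so a red target can be drawn
    # on it (draw_reticule does something fancier than a rectangle).
    display = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    for (x, y, w, h) in faces:
        cv2.rectangle(display, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return display, faces
```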
Step Five. Displaying the target
After OpenCV has detected any faces, we display the modified image (converted to grayscale, with red targets drawn over faces).
This method has a great deal of platform-dependent code, to make it play equally nicely with Linux, OS X, and Windows:
- In Linux, we open up ImageMagick display windows. These windows do not refresh automatically, so we kill any existing windows each time we open a new one.
- In OS X, we open a Preview window. Conveniently, calling `open -a Preview [path]` refreshes the current Preview window.
- In Windows, we open Windows Photo Viewer (this might not work in older versions of Windows). It refreshes itself automatically, so we only open a window the first time `Camera.display` is called.
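Putting those branches together, the display logic looks roughly like this. The file path handling, viewer-launch details, and the `_viewer_opened` flag are illustrative, not the exact code:

```python
import os
import platform
import subprocess

def display(self, img_path):
    system = platform.system()
    if system == "Linux":
        # ImageMagick's display window doesn't refresh, so kill any old
        # window before opening a new one.
        subprocess.call("killall display 2> /dev/null", shell=True)
        subprocess.Popen(["display", img_path])
    elif system == "Darwin":
        # Re-running `open -a Preview` refreshes the existing Preview window.
        subprocess.call(["open", "-a", "Preview", img_path])
    elif system == "Windows":
        # Windows Photo Viewer refreshes itself, so only open a viewer once.
        if not getattr(self, "_viewer_opened", False):
            os.startfile(img_path)  # opens the default image viewer
            self._viewer_opened = True
```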
Step Six. Aiming and firing
If a face is detected, the turret adjusts itself to try to bring the face into the center of its field of view. The `Camera.face_detect` method returns `x_adj` and `y_adj`, which correspond to the horizontal and vertical distance of the most prominent face from the center of the photo (expressed as fractions of the photo’s total width and height, respectively). These values are passed to the `Turret.adjust` method:
Note that there is no way to tell the turret to move by a specific amount; the only option is to estimate how long a rotation should take and send move and stop commands with the appropriate timing.
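A simplified sketch of that timing-based adjustment, reusing the `send_cmd` helper and command constants from Step Two. The dead zone, sign conventions, and per-axis timing constants are all guesses to be tuned:

```python
import time

def adjust(self, x_adj, y_adj):
    # Ignore tiny offsets so the turret doesn't jitter around the target.
    if abs(x_adj) > 0.05:
        send_cmd(RIGHT if x_adj > 0 else LEFT)
        time.sleep(abs(x_adj) * 0.8)   # rough seconds per half frame-width
        send_cmd(STOP)
    if abs(y_adj) > 0.05:
        send_cmd(UP if y_adj > 0 else DOWN)
        time.sleep(abs(y_adj) * 0.5)   # rough seconds per half frame-height
        send_cmd(STOP)
```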
Finally, `Turret.ready_aim_fire` checks whether the face is close enough to the center of the camera’s view to fire, turning on the turret’s LED as a warning before firing a missile (if the `--disarm` flag is passed to the script, the LED is turned on, but no missile is fired). Then the loop continues, until the turret has fired all four of its missiles:
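In outline, again using the helpers sketched in Step Two, and with the firing threshold, delay, and missile bookkeeping as assumptions:

```python
def ready_aim_fire(self, x_adj, y_adj):
    # Only fire when the face is close enough to the center of view.
    if abs(x_adj) < 0.05 and abs(y_adj) < 0.05:
        set_led(True)                        # warning light
        if not self.opts.disarm and self.missiles_remaining > 0:
            send_cmd(FIRE)
            time.sleep(3)                    # firing takes a few seconds
            self.missiles_remaining -= 1
        set_led(False)
```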
Try it yourself!
If you have a Dream Cheeky brand USB missile launcher (though it wouldn’t take much work to support other brands of missile launchers), a compact webcam, and a desire to build your very own defense system for your home or workplace, check out our GitHub repository.
We’ve gotten Sentinel working on Windows, OS X, and several Linux distros, though installing the dependencies (OpenCV, PyUSB, and others, depending on platform) can take some work.
We’re currently hard at work on some more features, including:
- different modes of operation (such as a “sweep mode”, in which the turret continually pans until it locates a face, rather than staying idle when it doesn’t see a face)
- a “kill-cam” feature that stores the pictures that it takes right as it shoots a target
- easier installation of dependencies (especially on Windows)
Got any comments, questions, or suggestions? Be sure to let me know, either here or on GitHub.