This is the second post on my experience running the MX Ruby apprenticeship program. The previous post focused on our goals and plans for the apprenticeship.

With the dates planned, it was time to gather some applications for the apprenticeship. How should someone apply? How would we know which candidates best fit our criteria of being self-motivated to learn new things and to tackle hard problems? Who would bother applying in the first place?

The Application

Since our selection criteria were fairly ambiguous, we decided to use a similarly ambiguous application medium.

Make us a page, anywhere on the internet, that provides evidence that you are self-motivated, learn new things easily, and like to solve complex problems.

The requirement of making a page helps enforce our baseline criteria for the skills an apprentice should have coming into the program. But it also gives applicants a lot of freedom in choosing how to present themselves. They could use videos, animations, GIFs, and so on.

The MX apprenticeship has been a lot of work and fun so far. This blog post is the beginning of an effort to document what I’ve learned along the way.

The Goal

I initially got interested in the idea of an apprenticeship while listening to the Ruby Rogues episode with Joe Mastey and Jill Lynch. I help organize a local Ruby meetup group, so I liked the idea of helping new engineers. I also love my job at MX and want to see the team grow.

My early motivation was focused on facilitating the learning that comes with a first professional programming job, but I knew that the goal of the apprenticeship needed to be focused on the company sponsoring it. I decided that the primary objective was to add amazing engineers to the MX team.

Now that I have some basic facial recognition working, the next step is to be able to control the direction of the camera. That means connecting some wires from my laptop to a servo and sending signals that control the servo's motion.

Ingredients

Artoo.io has a really simple interface for handling GPIO devices like servos. In my case I used a Digispark to get access to some GPIO pins that my laptop can control. Eventually I will use the BeagleBone Black to control the servos, but the Digispark gives me an easy way to test this out for just $9, as the sketch below shows.
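
Here is roughly what that code can look like with Artoo. The adaptor name and pin number are assumptions based on the artoo-digispark examples, so check the docs against your own wiring:

    require 'artoo'

    # Assumes the artoo-digispark gem is installed. Pin 5 is a placeholder --
    # use whichever GPIO pin the servo's signal wire is actually attached to.
    connection :digispark, adaptor: :digispark
    device :servo, driver: :servo, pin: 5

    work do
      # Sweep the servo to a random angle once a second as a smoke test.
      every(1.second) do
        servo.move(rand(180))
      end
    end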

The first step in working toward Friendly Bot is to get some code working that can capture an image from a webcam and detect a face in it. I knew that OpenCV had facilities for doing facial recognition, but I was hoping to avoid some of the documentation pain that I have heard about from other people. A quick search for RubyGems that wrap OpenCV turned up spyglass by André Medeiros.

Spyglass makes a serious attempt to simplify the OpenCV API and so far it looks very promising. There is even a great example of doing facial recognition that got me started almost immediately.
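
From memory, that example looks something like the sketch below. Treat the class and method names as assumptions and defer to the spyglass README for the real API:

    require 'spyglass'

    # Load a captured frame and a Haar cascade trained for frontal faces.
    # The cascade XML ships with OpenCV; adjust the paths for your setup.
    image    = Spyglass::Image.load('capture.jpg')
    detector = Spyglass::CascadeClassifier.new('haarcascade_frontalface_alt.xml')

    # detect should return the bounding rectangles of any faces it finds.
    faces = detector.detect(image)
    puts "Found #{faces.size} face(s)"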

In many ways robotics was my first tech love. I remember my dad bringing home a broken printer from work; I took it apart and used the parts to make a little machine with LED eyes that drove itself forward and backward. It wasn't really a robot, but it got me so interested in robotics that I started learning to program.

TL;DR: Want to play with an object database? Skip Maglev and play with PStore.

My last few posts have been about my adventures trying to get Puma to run on Maglev. That adventure began because Johnny T infected my brain with the idea of persistent objects.
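
If persistent objects sound appealing, PStore gives you a tiny taste of the idea using nothing but Ruby's standard library:

    require 'pstore'

    # PStore marshals Ruby objects to a file, wrapped in transactions.
    store = PStore.new('robots.pstore')

    store.transaction do
      store[:robots] ||= []
      store[:robots] << 'friendly_bot'
    end

    # Passing true opens a read-only transaction.
    store.transaction(true) do
      puts store[:robots].inspect
    end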

TL;DR: Puma runs on Maglev with a few caveats, and full support may be just around the corner.

My last post was a perfect example of why you shouldn’t post something until you are sure…

But it turns out that I wasn't too far off. Tim's IO.select fix got us past the first hurdle; once that was in place, Puma would run, but it would hang on every request.

Edit: After writing this I discovered that there was a problem with my GEM_PATH environment variable, so Puma was actually running under MRI. No wonder it was running at almost exactly the same speed as MRI :) Sorry for the false information.
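
A one-line sanity check would have caught this, since RUBY_ENGINE reports which interpreter is actually executing:

    # MRI prints "ruby" here; Maglev should print "maglev".
    puts RUBY_ENGINE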

In my last post I talked about debugging the problems I hit while trying to get Puma running under Maglev. In just a few short days I've gotten help on the mailing list, at a users group, and on the Maglev GitHub repo. The result of that help is that I can run Puma under Maglev!

TL;DR: Maglev still has a way to go, but it provides some very low-level information to help with debugging.

Puma is a really powerful webserver, and that is something Maglev really needs. As far as I can find, the only webserver option Maglev users have today is WEBrick. I love that Ruby comes with WEBrick out of the box as a fast and easy way to make an HTTP server, but the benchmark below tells the story of why Maglev also needs a production-ready webserver.
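
To be clear about what "out of the box" means, this is the sort of minimal, standard-library-only server WEBrick gives you:

    require 'webrick'

    # A complete HTTP server with no gems beyond the standard library.
    server = WEBrick::HTTPServer.new(Port: 8080)

    server.mount_proc('/') do |req, res|
      res.body = 'hello from webrick'
    end

    # Shut down cleanly on Ctrl-C.
    trap('INT') { server.shutdown }
    server.start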

TL;DR: If you want to run Maglev on a small AWS instance you probably need to reduce the shared page cache size. Try adjusting the SHR_PAGE_CACHE_SIZE_KB setting in system.conf.
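
As a rough illustration (the number below is a placeholder, not a recommendation; size it to your instance's memory):

    # system.conf -- shrink Maglev's shared page cache to fit a small instance.
    # 131072 KB (128 MB) is an illustrative value only.
    SHR_PAGE_CACHE_SIZE_KB = 131072;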