In the latest issue of HackSpace magazine, we met creator, 3D designer, and self-taught robot maker Jorvon Moss.
Jorvon Moss is an artist by training. From his workshop in LA, he makes robots – not to do anything like send people into space or fight crime or any of the stereotypical things that comic book fans are supposed to want to do, but just for fun. He’s also an obsessive documenter, so if you’ve ever wanted to know how to build (or how not to build) anything with embedded AI, or servos, or face recognition, he’s a good person to ask. We spoke to him in the early hours one morning, to ask him what he’s been up to recently, how to get into building with artificial intelligence, and how he approaches the creative process.
HackSpace: So Jorvon, you’re a full-time maker now. How did that come about?
Jorvon Moss: I was working full-time at an access control company, and making things in my free time. But then my company got bought by Motorola. Everyone got stock in the buyout, so that gave me extra money. I stayed for about another month, just to see if I could get used to it, but I honestly couldn’t.
I had a very serious back-and-forth with my boss and he told me that it just wasn’t for me. So, that same day, I put in my two-week notice. I asked the internet brain for help, because I was like, hey, I want to be a full-time maker now because I can’t do nine-to-five anymore. And the people at Adafruit actually were the first ones to reach out and be like, hey, you sound awesome for this position. They reached out to Digi-Key, and I started working for Digi-Key for a while as a freelancer. Their technical content creator job came up and they offered it to me, so now I make for a living.
During the lockdowns, I’ve had more time to work on stuff and I was able to develop my skills better.
I got a lot more into wearables over the years, but I still focus mainly on robots. That’s my thing. I’m currently working on one right now, which I literally took apart last night again.
It’s not done yet – I have to add an extra jaw joint to give it a mouth. I’m hoping to have this done soon so I can get pictures of it and document it properly. Most of it has been, like, upgrading software I use.
I’ve been practising with AI. I’ve mastered facial recognition, now I’m moving on to other stuff.
HS: How did you get started on the road to AI and robotics?
JM: I started with simple small builds, like everyone else does. I made this little cup thing, I designed and 3D-printed it myself. And then of course, I saw Alex Glow’s owl, which inspired me to build my spider. But then I wanted more. So I kept going. I just kept building and building and building and building and building. And now I do this, but I started with just a 3D printer and CAD in my room. I’m all self-taught when it comes to making all these different forms.
HS: How does someone with no mathematics or computer science background get into facial recognition with AI? Because that sounds really hard.
JM: I was trying to teach myself years ago before I got into it, because at one point I felt like I had mastered using certain boards like Arduino. I mastered basic Raspberry Pi stuff so I could make servos move in a way that I thought was useful, and I wanted to make my robots do more, and AI was the next step into that.
So again, I used the power of the internet, and found tutorials on how to do AI, and I’ve just been slowly picking it up one by one. And then eventually, I started testing my AI systems, just to see if I can actually get facial recognition to work.
I got help from friends, of course, colleagues who are able to offer input here and there and be like, ‘Oh, this line of code is wrong. You did this wrong, you should have done this’. And I’m like, ‘Oh, well, thank you. I’ll try this.’
It was great. Honestly. It took a while – I think it took me about four months to finally get the facial recognition software working on one of my robots. But now I keep that stuff together; that way, if I ever want a robot with facial recognition working efficiently again, I can easily just set it up. I want to keep moving forward, I want to keep adding stuff.
So now I know facial recognition, I want to get into other things, like voice recognition, and making smarter robots that can be worn – wearable-type things. I actually built a robot backpack, which is easier than it sounds.
I plan to do a little bit more travelling this year so I built this bag. That way I can have a very artistic and custom way to carry my robot. And also something I learned from a design point of view, having a robot strapped to your shoulder causes problems. It looks cool from far away, but when you get close, you always have the leaning problem with the robot leaning forward, or off to the side a bit. So I built this backpack to have an extra little stand on it. So my robot actually stands on the bag more than on my shoulder, which actually helps visually, because it’s looking over my shoulder. It gives it a vantage point to look around.
HS: So it can look around and recognise and react to faces? I guess that’ll get some reactions.
JM: Yeah, I’m working on two different versions of this robot, one with AI in it that’s doing facial recognition. But the current one, I’m just controlling with my phone, because I don’t want certain people to get scared.
I took a prototype of my facial recognition bot to SiliCon last year, and it got some love, but it also got some bad reactions too.
And also, I found out how flimsy facial recognition really is. Because the thing is, the robot wouldn’t notice you if you were wearing a mask. So if you had your mask on, you’re invisible to my robot. It couldn’t tell you had a face, but as soon as you pulled your mask down for a second: oh look, there’s a human.
HS: We can forgive that in a robot though, right? Less so in a self-driving car.
JM: Luckily, this doesn’t do anything too crazy or important, it’s just designed to inspire. That’s one of the biggest parts of my brand, literally just making for fun, and I really want to continue to, like, push that ideal. I really don’t make with actual purpose like most people do.
The thing in the big engineering spaces is always like ‘making to make the world a better place’, or making to do this or that. I like to think that I want to just make because making is fun. You shouldn’t have to have a mission to save the world with your project. Sometimes it’s just nice to build something. Make for fun. Make for kicks and giggles. Why not? Go ahead and build what you want. Who’s going to stop you?
HS: What sort of reaction do you get when you take your companion bots out into the world to show the public?
JM: I had a talk in San Francisco last year, where I showed my robots to kids – they got to see the facial recognition, see how the robots move around and stuff like that. Kids love them. A little bit too much, because I had a few kids try to steal them.
I really want to do more talks in general – I am 110% here to do that. But I think that’s one of the main reasons I want to make this one cell-phone-controlled; that way I could actually interact with people a little bit more. Because with facial recognition AI – at least this is what I’ve learned – so much has to happen for it to work correctly: perfect lighting, for example. Sometimes when you’re out and about [with] the sun behind you, my robot can’t see you. It’s very finicky.
HS: What sort of hardware do you use to get facial recognition into a little box that sits on your shoulder?
JM: I use two different things, depending on the size of the project. I use the Raspberry Pi with the camera attached to it, run the whole open-source facial recognition thing, hook it up to a simple servo for pan and tilt, and power the two components separately; or I use the OpenMV (openmv.io) and a Pololu Maestro servo controller. I have those two boards talking to each other, and then that can also do facial recognition very well.
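Jorvon doesn’t share his code in the interview, but the pan-and-tilt idea he describes can be sketched in plain Python. This is an illustrative sketch, not his actual implementation: it assumes an OpenCV-style face bounding box (x, y, w, h) in a 640×480 camera frame, and maps the face centre to angles in the 0–180° range that hobby servos typically accept.

```python
# Sketch of face-tracking pan/tilt logic (illustrative, not Jorvon's code).
# Assumes a face bounding box (x, y, w, h) from a detector such as
# OpenCV's Haar cascades, and a 640x480 camera frame.

FRAME_W, FRAME_H = 640, 480

def face_to_servo_angles(x, y, w, h):
    """Map the centre of a face bounding box to (pan, tilt) in degrees."""
    cx = x + w / 2               # face centre, in pixels
    cy = y + h / 2
    pan = cx / FRAME_W * 180     # left edge -> 0 deg, right edge -> 180 deg
    tilt = cy / FRAME_H * 180    # top edge -> 0 deg, bottom edge -> 180 deg
    return round(pan), round(tilt)

# A face centred in the frame should give roughly centred servos.
print(face_to_servo_angles(280, 200, 80, 80))  # -> (90, 90)
```

In the Raspberry Pi setup he describes, angles like these would be written to the pan and tilt servos each frame; on the OpenMV/Maestro route, the Maestro would receive equivalent position commands instead.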
If you really want to get into easy AI stuff, I would definitely recommend the OpenMV cam, especially for the fact that it comes with some stuff pre-loaded already. So you can plug it into your computer and instantly start seeing some example code for facial recognition, identification, and stuff like that. It’s a perfect starter board.
HS: I flew back into the UK from abroad last November, and the facial recognition passport scanner was absolutely stumped by the fact that I’d grown a beard and was wearing glasses.
JM: You have to see facial recognition as being in its infant stage – that’s how I see it. It’s still a baby. It’s like a toddler who finally hits three and has a decent type of brain to figure out what stuff is. It’s still learning. It’s going to be asking ‘What is this? What is that?’ Trust me, when I first took my first facial recognition robot out, it didn’t recognise any people at all, and I thought I’d done something wrong. Turns out that everyone around me was wearing a mask – it couldn’t see anyone wearing a mask.
It’s those little things. I just assumed it was looking for facial shape. No: it’s looking at everything. Eyes, nose, mouth… it’s looking for all those things to be present, or it won’t recognise an object as technically a face. Unless, that is, you teach it yourself to look for other things, like eyes, or the presence of a person’s body, or something like that.
HS: If you’re mostly self-taught, does that mean that you’ve spent a lot of time hanging around makerspaces?
JM: Not particularly. I do my best learning by myself. I’m that nerd kid who grabs a whole bunch of books and stuff and then just, like, sits at a friend’s computer and writes things down and starts testing things like crazy. Makerspaces are great resources for people who don’t have the equipment to do the things they need to do. And luckily enough, I have the equipment. But my main problem is my focus – I feel like I have ADHD brain. I focus better at home, in front of my desk, because every time I go into a makerspace, I end up doing more talking and hanging out than actual work.
When I’m studying, I try to make a single goal per day. So it’ll be something like ‘Alright, let’s get the camera set up today.’ Once the camera is set up and it can see me, I’ll look up what I have to do next, and the next day I try to do that.
I build upon trial and error on a daily basis. Every single day I’m trying to build this, trying to build that, and some days are good; some days work perfectly. And some days, things have to be redesigned and reworked. This is the fifth time I’ve rebuilt the robot I’m working on currently. I literally took it apart last night because I didn’t like the way it was moving. I started to add some spare parts to it, reattaching them, and it worked. I just have to make it a little prettier.
HS: Speaking of things looking pretty, how do you finish a project off? How do you decide that something is finished?
JM: Fun fact: I don’t decide when something is finished. The internet does! I personally believe that nothing I do is perfect – nothing I’ve done is good enough in my opinion. So, what I end up doing most of the time is going back and rebuilding it, at least getting it to a point where it’s functional and it looks good. And then I’ll post it, and how badly the internet wants it determines whether I develop it further or not.
But to me, nothing is ever really done. It’s always based on deadlines; if a deadline says something has to be done by a certain date, I’ll take features out of a build to get it done. Because personally, I don’t think any of my work is actually to the point where I’m happy with it. Perfection is an illusion.
When I’m working for a company, I publish work more often. But for my personal work, it all depends on the internet, the community. For example, the goggles that I made were really popular. Once I made them, I made another pair to make sure that everything worked, and then I published it, because the internet wanted it. But to me, it’s still not done. Like I could add more moving parts to it and add this cool thing and this cool thing… That’s why I just call them prototypes. That way, if anything doesn’t work, it’s OK.
HS: You’ve taken a lot of pressure off yourself by using that word.
JM: Exactly. People have a lot of preconceptions of what a perfect project is thanks to Apple et al. And then, of course, you have movies – all of these tech characters like to build things in like five minutes without any real parts. They somehow glue imaginary material together and make a thing.
I’m by myself; I don’t have a team of people who are building these things for me; I’ve got to do the best I can. And, usually, I just think of one thing that I want to be the main functionality and work from there.
HS: Am I right in thinking that you’re following the convention of naming your builds after mythological creatures? You’ve got robots called Odin, Prometheus, Asi, Helen…
JM: No, Helen was based off of Helen [Leigh] from Crowd Supply because they bought me a very expensive part. And I appreciated it so much that I named a robot after them. That’s actually how I got my first OpenMV Cam, because it was really expensive and at the time I couldn’t afford it.
But I posted on Twitter (I love Twitter — it’s like a giant internet brain) like ‘Hey, I’m looking for some help. Does anyone have an extra one of these lying around?’, and Helen sorted me out with one.
HS: Let’s keep that quiet or everyone will want one. It sounds like the internet is pretty useful for you as a giver of second opinions, not just for finding facts.
JM: I feel like currently, the internet is as complicated as a human. And what I mean by that is that there are different layers to a person – no person is perfectly good and no person is perfectly bad. I feel like the internet’s, kind of, slowly becoming human that way.
I love the internet. I love the fact that I can, like, entertain myself for hours, watch movies, build things, reach out to people that I could not talk to in person. I love these things, but at the same time, it’s also a place for people to escape from reality. It’s 100% a tool for me that I try to use for good.
HS: What do you have planned for your next build?
JM: I got a Raspberry Pi 4 recently, which I’m working with to build my own AI, to put into an R2-D2-sized personal robot. I want it to be more intelligent and more useful than my other things, so it’s going to take a while. I want to have this really big robot to walk or roll beside me, and have it [be] intelligent [enough] to look up when people are talking to it, so it looks like it’s following the conversation.
I’ve talked to a few of my programming friends and we discussed the best method of teaching, which is probably going to be to treat it like a baby – showing it the items and getting it to repeat what those items are back to you. By necessity, that’s going to be a slow process. So, it’s gonna take a while. That’s fine. I mean, we all like babies. They’re small. They’re cute – it’s just that mine will have an off-switch.