Camera competition winners!
After a lot of swearing and arguing, we have managed to boil down nearly 700 entries to just ten winners. It was a very hard decision, and if you didn’t win please don’t feel too disheartened; we had some really exceptional entries for this competition and found it really hard to come to a final decision.
As you’ll remember if you entered, we were looking for camera projects which would involve the winners in writing some software and doing something interesting with the cameras. The winners, who will receive a rare-as-hen’s-teeth pre-production Raspberry Pi camera board, are (in no particular order):
Brian’s was one of several cat-thwarting project ideas we were sent, but we liked his best. Brian’s cat has a habit of bringing half-dead things home for his master through the catflap, and he’s been trying to deal with the problem by using a webcam/Arduino/PC solution with some custom Python code to detect feline mouthfuls of rat. He wants to replace the setup with a Pi and a camera module so he can get better framerate and better resolution at lower cost and lower power. (You’re right, Brian. Using an Arduino just to control a relay is overkill.)
We liked your Nerf gun idea, Brian, but we also felt sorry for your cat, so it’s the rat-detecting catflap we’d like to see developed, please!
We had several outstanding entries from younger contestants. Matthew’s was one of them. He’s 12, and has an idea for using the camera as a communication tool – we got very excited about this, because we can think of all kinds of useful applications for what Matthew’s planning to build. He says:
Some of my recent builds are a cleaning robot, and an arcade machine. However my most recent build was a Morse code key. The key is hooked up to three blue LEDs. I would like to get the Raspberry Pi camera to read those flashes of light so I can communicate with my Raspberry Pi via Morse code.
Matthew says he also has some ideas about using the camera in a one-player chess board. We’re really looking forward to seeing what he develops.
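For the curious, the light-flash decoding Matthew describes can be sketched in a few lines of Python: threshold each frame's brightness, run-length encode the on/off signal, and map short runs to dots and long runs to dashes. Everything here (the threshold, the frame timings, the synthetic signal) is invented for illustration; a real version would sample frames from the camera instead of the made-up list at the bottom.

```python
# Decode Morse from a stream of per-frame brightness readings.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "...": "S", "---": "O"}

def decode_flashes(samples, threshold=128, dot_max=3):
    """samples: per-frame brightness values. Runs of 'on' frames no longer
    than dot_max count as dots, longer runs as dashes; an 'off' gap of at
    least 2*dot_max frames ends the current letter."""
    runs = []  # run-length encoding of the signal: [is_on, length]
    for s in samples:
        on = s > threshold
        if runs and runs[-1][0] == on:
            runs[-1][1] += 1
        else:
            runs.append([on, 1])
    letters, symbol = [], ""
    for on, length in runs:
        if on:
            symbol += "." if length <= dot_max else "-"
        elif length >= 2 * dot_max and symbol:
            letters.append(MORSE.get(symbol, "?"))
            symbol = ""
    if symbol:
        letters.append(MORSE.get(symbol, "?"))
    return "".join(letters)

# "SOS": three short flashes, three long, three short, with letter gaps.
sig = [200]*2 + [0]*2 + [200]*2 + [0]*2 + [200]*2 + [0]*8 \
    + [200]*6 + [0]*2 + [200]*6 + [0]*2 + [200]*6 + [0]*8 \
    + [200]*2 + [0]*2 + [200]*2 + [0]*2 + [200]*2
print(decode_flashes(sig))  # -> SOS
```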
Ross is a postdoctoral fellow in a microbiology lab, who studies novel ways of combating antibiotic-resistant bacteria. He tests compounds to investigate their effect on bacterial growth, and their ability to enhance the body’s own defences against pathogens. One of the tests he uses on a daily basis involves exposing bacteria to white blood cells isolated from healthy volunteers. White blood cells are the main line of defence against pathogens and will kill most bacteria they are exposed to; by adding drugs, Ross can either enhance or inhibit their bactericidal capacity.
The tedious part of these experiments involves counting the tiny colonies bacteria form on petri dishes after overnight incubation. Counting visually is time-consuming and prone to error. Ross’s winning project idea is to use a Raspberry Pi with a camera to count the colonies using ImageJ image analysis software with a custom colony-counting module. He says the small size of the colonies will make this type of analysis a challenge, but says that if he can get the system to work it would save his lab massive amounts of time, freeing up attention for more experimental work.
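The heart of automated colony counting is connected-component labelling: threshold the dish image, then count the distinct blobs. Ross will be using ImageJ for the real thing; this toy Python version, run on a made-up binary grid rather than a real petri-dish image, just shows the idea.

```python
def count_colonies(grid):
    """Count 4-connected blobs of 1s in a 2D list of 0/1 values."""
    seen = set()
    count = 0
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            if grid[y][x] and (y, x) not in seen:
                count += 1
                stack = [(y, x)]  # flood-fill this colony
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen:
                        continue
                    seen.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                                and grid[ny][nx] and (ny, nx) not in seen:
                            stack.append((ny, nx))
    return count

# A tiny stand-in for a thresholded dish image: three separate "colonies".
dish = [[0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 0, 1],
        [1, 0, 0, 0, 0]]
print(count_colonies(dish))  # -> 3
```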
Sebastian’s working on an augmented reality project. He’s developing a cross-platform SDK that can recover the location and orientation of a camera relative to a known piece of the environment. The camera can be “taught” about local 3D objects, or “taught” to recognise 2D patterns. He wants to integrate the video stream from the camera into the SDK, then track a 2D planar pattern and display a virtual character on it, using the HDMI output to display the scene on an external monitor. He got our attention when he proposed doing this with a virtual model of a Raspberry Pi on the Raspberry logo.
Stephen works for a charity designing and implementing medical engineering training courses for developing countries. He specialises in medical devices, with a particular interest in equipment related to premature babies – such as incubators, monitors, and jaundice detection. He says:
One of the potential issues with pre-term babies in incubators (used to keep them warm) is jaundice. A very cheap and effective method to cure this is by using a series of blue LEDs [blue light phototherapy]. However, within Uganda (where our main project is running), there aren’t enough LED devices (mainly because the current medical-grade ones are expensive), and the staff aren’t trained well enough to know when to use the device and when to switch it off (and what light levels to use).
If I was to win the camera, I would use it, connected to the Pi and the LEDs, to image the baby, looking for jaundice. If it finds evidence of jaundice, the LEDs will come on, and as the jaundice reduces, so does the LED brightness, until the jaundice is gone, at which point the user is made aware and the device switches off the lights.
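Very roughly, Stephen's closed loop might look like the sketch below: estimate a yellowness score from the image, scale the LED brightness from it, and switch the lights off as the score drops. To be clear, the yellowness measure, the thresholds and the sample pixels here are all invented placeholders for illustration, not a clinical method.

```python
def jaundice_score(pixels):
    """Crude yellowness proxy over (R, G, B) skin pixels: how far the
    average of red and green sits above blue. Purely illustrative."""
    vals = [(r + g) / 2.0 - b for r, g, b in pixels]
    return max(0.0, sum(vals) / len(vals))

def led_duty_cycle(score, score_max=100.0):
    """Scale LED brightness (0..1) with the jaundice score; 0 when clear."""
    return min(score / score_max, 1.0)

# Made-up "yellowish skin" pixels standing in for a real camera frame.
skin = [(200, 180, 90), (210, 185, 95), (205, 178, 88)]
score = jaundice_score(skin)
print(round(led_duty_cycle(score), 2))  # -> 1.0
```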
Matt had a fantastic idea while researching how to build a theremin. A regular theremin gives you control of pitch and volume only, with two hands and two aerials. His competition entry was an idea for a camera theremin, where, using a camera and some machine vision instead of an aerial, both pitch and volume can be controlled with one hand, leaving the other one free to fiddle with synth parameters.
I’m thinking a camera, a bit of blob detection and separation, and control of pitch, volume in xy, and filter cutoff by blob area…and for a first cut, probably using some distastefully coloured gloves to make the image processing easy on the Pi.
I did a little dance around the room when I read this one. Thanks Matt!
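The mapping Matt describes is simple once you have a blob: x position to pitch, y position to volume, blob area to filter cutoff. This sketch does the parameter mapping only; a real version would get the blob centroid and area from OpenCV, and the frequency ranges and frame size here are our assumptions.

```python
def blob_to_synth(cx, cy, area, width=640, height=480):
    """Map a blob in a width x height frame to (pitch_hz, volume, cutoff_hz)."""
    pitch = 110.0 * 2 ** (4.0 * cx / width)      # 110 Hz at the left edge,
                                                 # four octaves up at the right
    volume = 1.0 - cy / height                   # top of frame = loudest
    cutoff = 200.0 + 8000.0 * min(area, 20000) / 20000.0
    return pitch, volume, cutoff

# A glove-sized blob in the middle of the frame, a quarter of the way down.
pitch, volume, cutoff = blob_to_synth(cx=320, cy=120, area=10000)
print(round(pitch), round(volume, 2), round(cutoff))  # -> 440 0.75 4200
```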
Amy is 11, and she has a really excellent blog that we’ve been enjoying at http://appler.net/amy/. She’s already a seasoned Pi hacker. She says:
I have a house on a lake, and my mom loves it when birds come in the cove near our house. I want to connect a camera to my Raspberry Pi and have it detect when there are birds in our cove. When it detects that there are birds in our cove, I want the Raspberry Pi to notify my mom by sending her a text message or email so she can come outside and look at the birds.
This would definitely be a challenging project to do, but I’m excited to give it a try!
We have a feeling that Amy will complete her project with flying colours. (And we’re hoping for some great pictures too!)
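A natural starting point for Amy's detector is frame differencing: flag motion when enough pixels change between consecutive frames, then fire off the alert. The tiny grids below are made-up stand-ins for real camera frames, and the notification function is just a stub where the email or text-message call would go.

```python
def motion_detected(prev, curr, pixel_delta=30, min_changed=3):
    """Compare two greyscale frames (2D lists of pixel values); True if at
    least min_changed pixels differ by more than pixel_delta."""
    changed = 0
    for row_a, row_b in zip(prev, curr):
        for a, b in zip(row_a, row_b):
            if abs(a - b) > pixel_delta:
                changed += 1
    return changed >= min_changed

def notify_mom(message):
    print("ALERT:", message)  # stand-in for a real email/SMS call

still = [[10, 10, 12], [11, 10, 10], [10, 12, 11]]  # empty cove
birds = [[10, 90, 95], [11, 88, 10], [10, 92, 11]]  # something moved!
if motion_detected(still, birds):
    notify_mom("Birds in the cove!")
```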
Andy’s project involves a life-sized robotic Dalek which he’s already built. Gordon was open-mouthed with awe and fear at the photo which Andy sent in:
Andy sent us the code for the Arduino-based Dalek voice modulator he’s been working on to demonstrate his chops; you can download it at GitHub. His camera application will use OpenCV to do facial recognition on the Pi, which will then drive two motors in the head to rotate the dome so that the Dalek can keep eye contact with a person while talking to them. (I can only think of one thing the Dalek is likely to say, but Andy promises us a much larger vocabulary than the usual “Exterminate!”) He plans to open source the results.
Adam’s entry made us laugh, but it’s got serious potential as a useful addition to everybody’s living room (and promises to build healthy relationships everywhere).
Me and my girlfriend argue. She is the calmest person I know and such a sweetheart, but when I get home from work you can guarantee she’s pinched the best spot in the living room for watching TV so I have to sit at the other end of the room, and it just ain’t as good!
So what I’ve been putting together is a little unit that sits between the TV and the stand, and moves the TV to the best position for everyone in the room – ahhh peace!
But how do I do this? Well so far I’ve been reliant on a webcam that sits on top of the telly to take an image of whoever is sat there, be it 1 or however many people. It then uses OpenCV to detect all the faces, work out the midpoint for all the people sat there and, using WiringPi, move the TV n degrees to the left or right and settle on a great angle for all to be happy with, where jubilation will forth ensue.
At the minute I’m activating the code using a web front end so I can hit ‘Track’ from any device. I’ve also got SiriProxy running on the Pi so I can nicely ask my iPhone and iPad to track us and move the TV.
The problem with the webcam is it takes a long time to get the image, take the image, check the image and work out the details.
What I’d love is to be able to utilise realtime-ish video, plug that into OpenCV and display this on the TV through picture in picture over HDMI so I can make the final decision on who should get the best angle, and also through a streamed link to my iDevice.
We’d love it too, Adam. And if you can now come up with something that outsources the argument about whether you watch Grand Designs or allow your partner to flick wildly between music TV channels for the next hour, we’d be even happier.
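Adam's pipeline boils down to some simple geometry: find the midpoint of the detected faces, then turn the TV toward it. This sketch does just the maths; a real version would get the face boxes from OpenCV and drive the motor via WiringPi, and the field-of-view figure is our assumption.

```python
def tv_angle(faces, frame_width=640, fov_degrees=60.0):
    """faces: list of (x, y, w, h) boxes. Return degrees to rotate the TV
    (negative = left, positive = right) to face the group's midpoint."""
    if not faces:
        return 0.0
    centres = [x + w / 2.0 for x, y, w, h in faces]
    midpoint = sum(centres) / len(centres)
    # Offset from frame centre, scaled to the camera's horizontal FOV.
    return (midpoint - frame_width / 2.0) / frame_width * fov_degrees

# Two viewers sat symmetrically about the centre: no rotation needed.
print(tv_angle([(100, 200, 80, 80), (460, 210, 80, 80)]))  # -> 0.0
```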
David Crawley (and the HackerDojo robotics team)
David is heading up a robot project at Hacker Dojo in Silicon Valley, which he describes as “a makerspace filled with free thinkers”. (I’ve never met a constrained thinker in a hackspace/makerspace. There’s something curiously liberating about milling machines.)
Hercules, Hacker Dojo’s Pi-powered robot, is newly under development, and is working through 12 challenges that David has set the team of volunteers. These range from seeing and following a person, to delivering a cooler of beer (the prize for the team that accomplishes this is that they get to keep the cooler of beer), to driving around the Hacker Dojo for a week saying hello to known people and finding and identifying a persona non grata (this requires algorithms to move efficiently around the Dojo and find people who might be trying to hide, as well as all the person detection/face detection/face recognition work). He says:
We already have developed the capability for our robot to do simple face tracking and face following using the Raspberry Pi with a USB camera. However, the performance is very, very slow and the latency is extraordinarily high. There are several reasons for this:
1) We are running our face detection and face recognition algorithms on the processor only (not using the GPU)
2) We haven’t optimized the code yet
3) The USB interface with our camera seems to be raising a lot of interrupts for the processor that take time away from face detection
So we plan to:
1) Write a shader for the face detection and face recognition so that we can get face detection functioning on the GPU
2) Optimize our code and the camera interface for smooth performance, introduce PID control etc.
3) Test a camera that has a better interface to the Pi
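Point 2 of the plan mentions introducing PID control, which is what will keep the dome tracking a face smoothly rather than twitching toward it. A minimal PID loop looks like this; the gains and the toy "dome" model are made up for illustration.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Return a control output that drives the error toward zero."""
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: the dome angle moves by the control output each tick.
pid = PID(kp=0.5, ki=0.1, kd=0.05)
angle, target = 0.0, 90.0  # degrees; face is 90 degrees to the right
for _ in range(300):
    angle += pid.step(target - angle, dt=0.1)
print(round(angle))  # -> 90
```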
We’ll be watching to see how Hercules gets on with the tasks he’s been set once he’s equipped with his new eyeball!
Thank you so much to everyone who entered. We were pretty overwhelmed by the quality and number of entries; this is an amazing community packed with some amazingly smart people. We’re looking forward to hearing from all the winners when they’re further along with their projects, and learning how the camera coped in these very different applications.
I’m hoping for some video follow up on the Dalek :)
Tell Eben the trick with arguments is only to start winnable ones – which means he needs to watch Grand Designs with good grace.
No no no! Keep quiet on the winnable ones – only start the ones you can lose – then you get kudos for giving in gracefully
These all sound like interesting projects! (Bit scary to be relying on a camera with AWB for jaundice detection, but I assume they’ll be using RAW output and standard color reference patches for calibration.) Is there any particular timeframe established for when we might get updates on their progress?
You can turn off the AWB, but you would be at the mercy of lighting.
If anything is a controlled environment, you would think a medical treatment facility for premature babies would be one, even in a developing country. So the lighting should be easily kept to a known color temperature. It should also be possible to place a panel with known colors in the camera field of view to allow for comparison of skin image color or custom color balancing. The lighting and camera field of view could be limited to a limb within a housing so that the preemie isn’t disturbed during hours of natural darkness to prevent interference with optimal periods of sleep.
I’m assuming the ambient lighting will be controlled by the Pi as well – hurrah for LEDs!
It seems like they won’t rely solely on the camera detection. But it will hopefully assist their resources :) Also, looking forward to updates.
First thought is: if the bedding is white then AWB should be a doddle, and blue LEDs going on and off would have screwed up non-AWB colour-sensing anyway.
Second thought: is jaundice very obvious given the variability of natural skin colour?
Just as an example, depending on the shutter speed, it is possible for a video camera to show slowly changing color shifts under fluorescent light. This is (roughly) because the color of light coming from a standard fixture (which is a UV gas-discharge tube with various phosphors on the inner side of the glass envelope) shifts from moment to moment, based on the instantaneous current from the 50-Hz or 60-Hz AC power. The eye averages out these fast color shifts, but the camera may not. This is less true, and maybe not true at all, with the “compact fluorescent” fixtures that use high-frequency ballast circuits.
Wow, great projects. Really looking forward to seeing the results!
I would like to develop an application that would scan for possible skin cancer. You would take a picture of the skin to use as a baseline then future pictures would be compared against the baseline for changes. I am not sure where to start. Any suggestions?
You could start here: http://www.cancer.gov/cancertopics/wyntk/skin although if you are in the US, be aware that the FDA rules, and the legal liability issues with such a device probably make it an impossible product right now.
I can see where I went wrong with my proposal … I’m older than 12 and am now incapable of the level of innovative thinking demonstrated by the winners. I suppose I should have seen the writing on the wall when I’ve found that I have to consult pre-teens on an increasing basis to solve problems like getting the flashing 12:00 on my VCR to read the current time (yes, I have even bigger problems given that I still have a VCR powered up).
I’m really looking forward to the video theremin – 1950s science fiction movie soundtrack composers, run for your lives! :D
Andy Grove & Adam Farah should merge their projects & add a few lasers. “Change the channel, or move the TV where I can’t see it right, then you will suffer.” http://www.youtube.com/watch?v=YQLbwOGT8eM
The first part of David Crawley’s group’s project (face detection and tracking) is similar to what I’m trying to do with my RPi at the minute. They sound a bit more capable though :$ is the code open sourced at all? On GitHub perhaps? I’d very much like to have a look.
Yes, I too would like to see any code on face detection and tracking. I’m assuming all the code written for this contest will be open.
OpenCV has a pretty decent face recognition class ready-to-go. Well, human face anyway. I had to roll my own cat chin recognition for the cat flap project.
With awesome projects like this, normal people like me didn’t stand a chance in that competition :)
I really hope to see more of Dalek!
Matthias Van Gestel
Cool projects, I love it that you guys are promoting education.
Damn, I really hoped I could win one. Will the camera be available next month? We could really use it as soon as possible.
Matthias Van Gestel
Actually I coded the LinkSprite UART camera today, with MAX compression enabled and an optimal data frame request, with a modified size (320×240), and we still need 400 msec to 1500 msec to get one frame. On the bright side, OpenCV only takes 40 msec to generate line-following data (YAY). Too bad for the UART camera.
Congratulations to the winners… All projects look very special… It truly must have been a very hard decision.
Just FYI – Your link for http://www.hackerdojo.com/ is broken, you left the http:// off it and it’s linking to http://raspberrypi.org/www.hackerdojo.com/.
Also – is there a date for the cameras yet? I’m flying out to a robotics competition ( http://usfirst.org and our team site is at http://roboroos.org ) in a few days, and I’d love to make use of them.
Please hurry and get past beta on this product, or I might just be forced to steal one from one of these winners ; )
Can’t wait until it supports OpenCV (or vice versa, more to the point) so I can get stuck in. Got 100 ideas for things to do when it gets to that stage.
You guys cannot imagine how happy I was when I saw my name on the winners list! I still can’t believe it!! I called my girlfriend straight away and she loved it!
I’ll be posting all the code and 3D printed part files (though at the moment the prints are Samsung-specific!). Hopefully this should be pretty soon as everything is printed and ready to go. Just need to make my code accept the new board.
Man I’m happy today!!!
Thanks Adam – we really liked your entry; it was one of the few we all agreed on!
Thank you all! I feel privileged!
Congratulations to all the winners. I can imagine it was a difficult challenge to reduce them down to 10!
So happy to hear that my project was accepted :D Any idea of the time frame for delivery? I’ll be off to Uganda about the 8th of April and would love to be able to take the camera with me for some initial experiments. Though I’ll also be able to work on it in the UK when I get back (about 4 weeks) before travelling to Uganda with a small prototype later in the year.
I’m more than happy to make any work I do open-source, as I’ll be doing with other Pi-based projects such as an ECG simulator and a Pi-controlled, bulb-heated incubator.
Can’t wait to get started and look forward to seeing updates from the other projects as well.
I think Emma got them in the mail (at least to everyone who sent in an address) yesterday, so it should be with you soon.
The colony counting is hardly ground-breaking, as it has already been done (to some extent) – and grabbing the images off the camera to feed into ImageJ is hardly expected to be challenging. Or is it the case that getting ImageJ to run on the Pi is the challenge?
http://rsb.info.nih.gov/ij/plugins/colony-counter.html refers and I’m sure there was a recent publication somewhere on this. There are numerous threads already online regarding these aspects, including not only colony counting but blue/white or pink/white counting. The challenges are in the image processing, not the Pi hardware aspects of it.
Liz, should I email you my postal addy? I can’t remember whether I included it or not? Fanks…
Hmm – I thought Emma had mailed everyone who hadn’t sent in an address (you guys are *rubbish* at reading rules), but if she’s not mailed you yet, yes, please send it to me. I think you know where my address is!
Wow, I stumbled across an article on Google+ just now with a picture of my Dalek and discovered that I was a winner. I *never* win competitions.
I hope to make rapid progress with my project and yes, I will open source it. If anyone wants to follow my progress please follow me on Google+ (https://plus.google.com/u/0/104092699297917090903/posts).
Wow! I can’t believe I was one of the ten people that won this contest out of seven hundred! I can’t wait to get started with this project. My dad and I are going to have lots of fun doing it. If you want to monitor my progress, I’ll be posting updates on my .
Thanks Amy – and congratulations! We loved your entry (and we’ve really been enjoying your blog).
I am so excited!!!!!!! How long do you think it will take to ship to Canada?
I assume these were sent out a week ago, so I’m looking forward to seeing some unboxing photos / vids and initial impressions soon!