This is my first attempt at a GPS navigation robot (even though I started more than 3 years ago!)... It's meant to participate in the local ORE Magellan competition.
|Adapt SMC's constants for the new encoders/drive train|
|Migrate source code from ARM to x86 so I can use a normal laptop (mostly endianness issues)|
|Find a way to add encoders to the output shaft of the Lazslo motors.|
|Replace kludgy low-encoder-count encoders|
|Integrate gpsd daemon with robot code|
|Mount SRX2's camera in front of robot so it can easily spot the cone|
|Fix broken sonar sensor|
|Test compass input and associated motor schemas|
|Rewrite planner's high-level FSM for basic waypoint navigation|
|Integrate AVOID schema to avoid obstacles|
|Test GPS-based driving outside (for 1st time)|
|Build a good bumper system. Preferably a full bumper skirt that covers all around.|
|Use camera tilt/pan base as a rotating turret to decouple camera tracking from base...|
|Replace SLA battery with lighter LiPo from SRX2.|
|Replace capricious belt-drive with direct or chain drive to reduce slack.|
|Integrate basic speech synth (eSpeak, Flite, Festival?) to signal the cone was reached.|
|Integrate libconfig so we can migrate most variables/definitions (behaviours,MS, PS, etc...) to config files.|
|Replace those old loose wheels with something softer and more precise|
To avoid performance-related issues I ditched my trusty old NSLU2 ARM (ixp425) board and simply popped my Dell laptop in there; this way I get LOTS of processing power for vision algorithms. It's currently running plain Fedora 13, but could theoretically use a custom Linux to speed up booting, etc...
I'm using one of those gigantic 17 Ah SLAs. Heavy but reliable, although I could probably replace it with that nice set of LiPos I've got gathering dust...
I'm reusing the old Xtreme Overkill Splatbot base, which lets me navigate over rougher terrain.
Because I'm reusing the Splatbot base, I use a set of those very powerful "Lazslo" 12V motors. The power is transmitted to the lawnmower wheels using a cogged belt.
My first attempt at adding encoders was less than satisfactory so I ended up "stealing" the home-made ones I made for SRX1! Who knew those would still be in use after so many years!
And because they are glued to the motors' output shafts, I get a decent max_tick_per_pid_period of 38. More than enough for an outdoor robot, especially considering how much slack there is in those belt-driven wheels.
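A tick budget like that is easy to sanity-check: ticks per PID period convert directly to wheel speed. In the sketch below, only the 38 ticks/period figure comes from the robot; the encoder resolution, PID rate and wheel circumference are made-up placeholders.

```python
# Only the 38 ticks-per-PID-period figure comes from the robot; the
# encoder resolution, PID rate and wheel circumference are assumptions.
TICKS_PER_REV = 128      # assumed ticks per output-shaft revolution
PID_PERIOD_S = 0.05      # assumed 20 Hz PID loop
WHEEL_CIRC_M = 0.60      # assumed lawnmower-wheel circumference (metres)

def ticks_to_speed(ticks_per_period):
    """Convert encoder ticks seen in one PID period into wheel speed (m/s)."""
    revs = ticks_per_period / float(TICKS_PER_REV)
    return revs * WHEEL_CIRC_M / PID_PERIOD_S
```

With these assumed numbers, the 38-tick ceiling corresponds to roughly 3.6 m/s at the wheel, which leaves plenty of measurement resolution at sane outdoor speeds.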
Peripheral Access Module (PAM)
I'm reusing SRX2's PAM module to let the laptop communicate with the I2C devices via USB.
Smart Motor Controller (SMC)
Why change what isn't broken, right? I'm using a slightly modified SMC codebase (adjusted for the different drivetrain).
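For reference, the core of a motor controller like the SMC is a per-wheel PID loop on encoder ticks. This is a generic textbook sketch, not the SMC's actual code, and the gains are placeholders rather than the retuned drivetrain constants:

```python
class PID:
    """Textbook PID loop on encoder ticks per period.
    Gains are placeholders, not the SMC's actual constants."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target_ticks, measured_ticks):
        # One PID period: returns a motor command correction.
        err = target_ticks - measured_ticks
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Retuning for a new drivetrain mostly means rescaling these gains to the new tick rate and motor response, which is what the "slightly modified" above amounts to.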
Input/Output Module (IOM)
The IOM is reused to simplify access to the Sonar and compass.
The Compass is a CMPS03 from Devantech.
It is read by the IOM using a software-based I2C master (my hardware I2C bus runs at 400 kHz, which causes trouble for the CMPS03).
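For what it's worth, the CMPS03 exposes the bearing two ways: register 1 as a coarse 0-255 byte, and registers 2-3 as a 16-bit value in tenths of a degree (0-3599). Decoding the 16-bit pair is trivial; the function name below is mine, not from the robot's code:

```python
def cmps03_bearing_deg(reg2, reg3):
    """Combine CMPS03 register 2 (high byte) and register 3 (low byte).
    The pair holds the bearing as 0-3599 tenths of a degree."""
    return ((reg2 << 8) | reg3) / 10.0
```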
An old Devantech SRF04. A more recent model would be nicer because I'm forced to read this one using software I2C (on the IOM).
Might consider using a second one later to better avoid obstacles.
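Whatever the transport, the SRF04 ultimately reports an echo time, and range follows from the speed of sound (the familiar ~58 µs/cm rule of thumb). A minimal conversion, assuming air at roughly 20 °C:

```python
SPEED_OF_SOUND_M_PER_S = 343.0   # in air at roughly 20 degrees C

def echo_us_to_cm(echo_us):
    """Convert a sonar echo time in microseconds to range in cm."""
    # The echo time covers the round trip, so halve it before converting.
    one_way_s = (echo_us * 1e-6) / 2.0
    return one_way_s * SPEED_OF_SOUND_M_PER_S * 100.0
```

An echo of about 583 µs therefore means an obstacle roughly 10 cm away.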
I've changed cameras once more, this time opting for a Logitech QuickCam 9000 with a high-quality Zeiss glass lens and a UVC interface to simplify driver support under Linux.
It's quite good, and although I tried hard to counter the automatic exposure adjustment, it ended up being a blessing: I noticed that once my colour thresholds are set, they remain almost dead-on from indoors to outdoors!
The only side effect is that the frame rate varies with the amount of available light, so indoors it can drop as low as 5 fps!
To counter this I've tried to decouple the vision thread as much as possible from the faster-running sensor thread. In brighter lighting conditions, or outdoors, it runs at 15 fps+.
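The decoupling boils down to a single-slot "latest data" buffer: the vision thread overwrites it at whatever rate the camera allows, and the faster sensor loop just grabs the newest result without ever waiting on it. A minimal Python sketch of that idea (the actual robot code is C with NPTL threads, and these names are illustrative):

```python
import threading

class LatestFrame:
    """Single-slot buffer between the vision and sensor threads.
    The vision thread overwrites; readers never block on a slow producer."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = None

    def publish(self, data):
        # Called by the (slow, light-dependent) vision thread.
        with self._lock:
            self._data = data

    def latest(self):
        # Called by the (fast) sensor loop; returns the newest result, or
        # None if vision hasn't produced anything yet.
        with self._lock:
            return self._data
```

The lock only guards a pointer swap, so the sensor loop's worst-case wait is negligible even when the camera drops to 5 fps.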
The software is mostly reused from SRX2's unfinished codebase.
In short, it's a 3-layer architecture: a motor-schema-based reactive lower level, a simple schema sequencer in the middle, and a static planner (merely an FSM for now) at the top.
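In a motor-schema architecture the reactive layer's output is just a gain-weighted vector sum of whatever schemas are active (e.g. MOVE-TO-GOAL plus AVOID). A minimal sketch of that blending step, with illustrative names:

```python
def blend(schemas):
    """Gain-weighted vector sum of active motor schemas.
    Each entry is (gain, (vx, vy)); returns the combined command vector."""
    vx = sum(gain * vec[0] for gain, vec in schemas)
    vy = sum(gain * vec[1] for gain, vec in schemas)
    return vx, vy
```

The sequencer layer then amounts to switching which schemas are in the list (and with what gains) as the planner's FSM moves between waypoint states.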
The current prototype uses NPTL threading to separate the various modules into threads to maximize performance.
There is a machine vision thread that provides the perceptual schemas with vision data they can use to guide their decisions.
There is an executive thread that runs a TCP server, which responds to queries from a Tcl/Tk GUI over Ethernet/Wi-Fi.
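A query/response protocol like that can stay line-oriented and tiny, which keeps the Tcl/Tk side trivial. The sketch below is not the robot's actual protocol; the `POSE` command and its reply format are invented for illustration:

```python
import socket

def handle_query(query, state):
    """Map a one-line GUI query to a one-line reply.
    The "POSE" command and reply format are invented for illustration."""
    if query == "POSE":
        x, y, heading = state["pose"]
        return "{:.2f} {:.2f} {:.1f}".format(x, y, heading)
    return "ERR unknown query"

def serve(port, state):
    # Blocking, one-client-at-a-time server: fine for a single GUI client.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    while True:
        conn, _addr = srv.accept()
        with conn:
            line = conn.makefile().readline().strip()
            conn.sendall((handle_query(line, state) + "\n").encode())
```

Keeping the protocol to plain text lines means the Tcl side can drive it with nothing more than `socket`, `puts` and `gets`.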