Right from the beginning I knew that the Sheldon robot would have a multi-processor architecture. This appeals to me the same way object-oriented programming appeals – you can create a modular system in which each part does a specific task and can be treated like a black box.
The structure will be hierarchical. At the top – what I’m going to be calling the core processor – will be a single-board Linux computer like a Raspberry Pi, a BeagleBone Black or maybe something from Odroid, depending on how much grunt or how many UARTs I need. This core will carry out the high-level decision-making and, probably, communications with the outside world.
Communicating with that will be nodes dedicated to certain tasks. I haven’t yet decided what the communications protocol will be – the Raspberry Pi, for example, has only a single UART by default, which limits conventional serial communications. I2C or SPI are obvious candidates. Or maybe it’ll be something bespoke (because even if not perfect, it will be interesting to develop). We’ll see.
One of the reasons for this structure is that each module can focus on a specific job. With the earlier robots, for example, a single microcontroller – typically an Arduino of some kind – would do everything. That meant the main loop of the program was polling sensors, controlling DC motors, actuating servos and heaven knows what else. With such an approach, response times suffer. For example, a proximity sensor might trigger but the robot would still run into the obstacle because it was busy doing something else in the loop.
Each node will be semi-autonomous, meaning that it will be able to carry out actions in response to, say, sensor inputs without first referring to the core processor. However, the node will also respond to commands from the core as well as reporting data to it.
The first of the modules is the motor controller. I’ve chosen a Teensy 3.5 for this. It has a fast 32-bit processor and lots of GPIO. I was originally going to use a Teensy 3.2, a board I’ve used for lots of things, and I can’t for the life of me remember why I chose to upgrade to the bigger, more powerful and more expensive device, but there you go. I’ve bought it now…
(I do remember that I chose the 3.5 rather than the 3.6 because the former’s GPIO pins are 5V-tolerant. I like tolerance – it’s a good thing.)
I foresee a navigation node, perhaps using rangefinders of various types. This is likely to be separate to an imaging node. And so it goes on… Hopefully, the modular approach will allow me to swap in and out all manner of capabilities, depending on what I feel like playing with at any one time.
The core–node structure also means I can use single-board computers where I need muscle, multi-threading or other features of an operating system and microcontrollers where things need to be done in real time.
One final point: I’ve chosen the ‘core–node’ nomenclature partly because I think it describes the relationship between the parts accurately, and partly because, over the years, I’ve become increasingly uncomfortable with the terms ‘master’ and ‘slave’ so commonly used in the field of computing. Others might consider these terms neutral and devoid of any overtones, but I don’t, and it seems I’m not alone. In any case, core and node more precisely define the roles these systems are playing.