Competition Results

A little bit late, but the blog wouldn’t be complete without this retrospective.

The competition weekend this year was of course very different from last time: without any of the stress (and also the excitement and human contact…) of a physical competition, we could just relax and watch all the funny, brilliant, crazy videos from around the world.

We did hope to do better than last time, but still, this was a very pleasant surprise: in the overall results, we came out third in the Intermediate category!

Many thanks to the sponsors for all the cool prizes – and most of all, to the organisers for running this awesome competition and for all the inspiration and motivation it brings!

Now we’ll review what we did for each challenge.

But first, here is our submission for Technical and Artistic Merit:

Tidy Up the Toys

This is probably the challenge we spent the most time on initially. The thinking went something like this:

  • We know the exact dimensions of the arena we’re in. We also know where the boxes are initially and exactly where to move them.
  • We also know our initial position and orientation in the arena.
  • If we could somehow know, at every moment, our position in the arena and which way we’re facing, tidying up would be easy!

This could be achieved by dead reckoning, but in practice it would probably be very hard to make that reliable and stable enough, especially with all the somewhat erratic leg movements. (As opposed to, say, wheels on a flat surface in ideal conditions: perfectly controllable, always moving in a predictable way.)

So instead of that, we made a very, very basic first draft of SLAM (well, actually just the L) using only five distance sensors and a compass. Writing a little simulation tool helped enormously: it made it possible to test many different scenarios with various positions and angles – and by introducing sensor noise and delays, it got pretty close to the real world. Debugging and tuning this localisation method took a long time, and it would probably have been impossible without the simulation, where you can immediately see what’s going on, pause, retry quickly, and so on.
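To give a flavour of the noise-and-delay part, here is a minimal Python sketch of wrapping an ideal simulated sensor – all names and values are invented for illustration, not taken from our real tool (apart from the 1.28 m usable range mentioned later):

```python
import random
from collections import deque

class NoisySensor:
    """Wrap an ideal simulated distance sensor with Gaussian noise, a
    limited range and a fixed reading delay. All values are invented
    for illustration, not taken from our real calibration."""

    def __init__(self, ideal_fn, noise_sd=0.01, max_range=1.28, delay_steps=3):
        self.ideal_fn = ideal_fn    # returns the true distance in metres
        self.noise_sd = noise_sd
        self.max_range = max_range  # beyond this the sensor "sees" nothing
        self.buffer = deque([None] * delay_steps)

    def read(self):
        noisy = random.gauss(self.ideal_fn(), self.noise_sd)
        self.buffer.append(noisy if noisy <= self.max_range else None)
        return self.buffer.popleft()   # the reading from delay_steps ago
```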

The solution for this challenge was also a good test for our little software architecture idea: behaviours running concurrently, cooperating and building on each other. These behaviours all operate on a single global “robot state”, reading or setting values in it. Going roughly from “top to bottom”, and greatly simplified, the behaviours look something like this (a rough code sketch follows the list):

  • run-tidy-up is the main behaviour running the challenge. It knows the path we need to follow to pick up and carry each box, as a predefined list of points to visit. It sets speed to a constant value and, for each point, sets arena-target to that point’s coordinates, then calls the go-to-target behaviour directly.
  • go-to-target knows how to steer the robot to a given target position. It reads the robot’s current position in the arena (arena-x and arena-y), calculates the angle from there to arena-target and stores it as set-heading. Then it waits until the current position is close enough to the target (assuming we have already set a non-zero speed).
  • find-pos-in-arena (always running) is the behaviour that figures out our current position: it looks at the dist values (the readings from all the distance sensors) and heading (the current compass heading), and sets arena-x and arena-y if it can find a solution.
  • go-towards-set-heading (always running) knows how to actually maintain a given heading while moving: it compares set-heading (the desired value) with heading (the actual value), works out how fast and in which direction to turn, and sets dir (effectively the turn velocity).
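Purely as an illustration – the names are transliterated into Python and every detail is invented, so this is a sketch of the idea rather than our real code – the top two behaviours cooperating through the shared state could look like this:

```python
import asyncio
import math

state = {}   # the single global "robot state" every behaviour reads/writes

async def run_tidy_up():
    # Hypothetical coordinates; the real path picks up and carries boxes too.
    path = [(0.5, 0.3), (0.5, 1.2), (1.3, 1.2)]
    state["speed"] = 0.2                  # constant forward speed
    for point in path:
        state["arena_target"] = point
        await go_to_target()              # called directly, not spawned

async def go_to_target():
    tx, ty = state["arena_target"]
    while True:
        x, y = state.get("arena_x"), state.get("arena_y")
        if x is not None and y is not None:
            if math.hypot(tx - x, ty - y) < 0.05:   # close enough: done
                return
            # bearing from where we are to where we want to be
            state["set_heading"] = math.degrees(math.atan2(ty - y, tx - x))
        await asyncio.sleep(0.05)   # yield so the always-on behaviours
                                    # (find-pos-in-arena, go-towards-set-heading)
                                    # can keep updating the same state
```

The important part is that the behaviours never talk to each other directly: everything goes through the robot state.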

Finally, at a lower level, “below” the behaviours listed above:

  • speed and dir are ultimately translated to velocities for the left and right motors, which are continuously sent to the microcontroller so that it can deal with the actual motor controller hardware.
  • Sensor readings (dist from the five distance sensors, heading from the compass) are continuously updated in the robot state.
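That bottom layer is essentially the standard differential-drive mix; a minimal sketch (the sign convention and limits here are assumptions, not our real code):

```python
def mix(speed, dir, max_v=1.0):
    """Translate forward speed and turn velocity (dir) into left and right
    motor velocities, clamped to the motors' range. Positive dir meaning
    a right turn is just the convention chosen for this sketch."""
    left = max(-max_v, min(max_v, speed + dir))
    right = max(-max_v, min(max_v, speed - dir))
    return left, right
```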

Here is what all this achieved. Alongside the actual challenge run video, we’re also showing a simulated run: it doesn’t match perfectly, but you can see roughly the same behaviour and the effects of sensor noise and delays – and it reveals some of the internals.

A little explanation of what’s shown in those simulation boxes:

On the left, the simulated world is shown, with some extra information superimposed:

  • The blue lines are the distances (dist) as seen by the five sensors (some of them disappearing when a wall is out of range).
  • The green line is set-heading, the direction the robot is currently trying to follow.
  • The flashing green dot is arena-target, the next point on the predetermined path we’re trying to reach.
  • The red rectangle is what the robot currently “thinks” is the arena around it.

On the right, more details are shown from the find-pos-in-arena behaviour:

  • The grey robot outlines are all the possible robot positions and orientations we determine from the distance measurements.
  • The green outline is the currently accepted solution (arena-x and arena-y + orientation).
  • When the outline turns red, it means we have lost our bearings: we have no good solution (not enough walls are in range to determine position – or our position drifted too much from the last accepted one due to noise or lag).
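To make that accept/reject logic concrete, here is a deliberately dumb sketch of the idea – a brute-force grid scan, which is not how the real find-pos-in-arena computes its candidates, with all constants and the arena size made up:

```python
import math

ARENA_W, ARENA_H = 2.0, 2.0   # assumed arena size in metres (illustrative)
MAX_JUMP = 0.15               # reject solutions that drift further than this

def predicted_dist(x, y, world_angle):
    """Distance from (x, y) to the nearest arena wall along world_angle,
    for an ideal sensor in an empty rectangular arena."""
    dx, dy = math.cos(world_angle), math.sin(world_angle)
    ts = []
    if dx > 1e-9:
        ts.append((ARENA_W - x) / dx)
    elif dx < -1e-9:
        ts.append(-x / dx)
    if dy > 1e-9:
        ts.append((ARENA_H - y) / dy)
    elif dy < -1e-9:
        ts.append(-y / dy)
    return min(t for t in ts if t > 0)

def find_pos(dists, sensor_angles, heading, last_pos, step=0.02):
    """Score candidate positions by how well their predicted wall distances
    match the measured ones, then accept the best candidate only if it is
    close to the last accepted position (angles in radians)."""
    usable = [(a, d) for a, d in zip(sensor_angles, dists) if d is not None]
    if len(usable) < 2:
        return None                      # not enough walls in range: lost
    best, best_err = None, float("inf")
    y = step
    while y < ARENA_H:
        x = step
        while x < ARENA_W:
            err = sum((predicted_dist(x, y, heading + a) - d) ** 2
                      for a, d in usable)
            if err < best_err:
                best, best_err = (x, y), err
            x += step
        y += step
    if last_pos is not None and math.dist(best, last_pos) > MAX_JUMP:
        return None                      # drifted too far: outline turns red
    return best                          # accepted: the green outline
```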

The end result could have been more precise and faster, but all in all, it wasn’t too bad!

Feed the Fish

At first, we didn’t expect this challenge to be particularly difficult. (After checking that our idea for a shooter mechanism could indeed shoot things: how hard can it be to actually build it and shoot something into a fairly close target…?)

In reality, it took many weeks to actually make it usable!

Many thanks to Daniel for his fish tank! Without it, we couldn’t have achieved first place in this challenge – not only because of how awesome the 3D fish tank is, but also because of the motivation: it would have been a shame to leave the fish starving because of an unfinished or unreliable shooter. So building it properly was the primary focus for a long time, without distractions or jumping ahead to the other challenges.

And lastly, after sorting out the shooter, the small detail of autonomously moving back and forth between the firing position and the start position also proved tricky. Again, how hard can it be to move about 75 cm in a straight line and then back…?

In practice, short, fine-tuned movements with these legs are nearly impossible! Continuous movement is fine – we can maintain a given speed – the problem is starting. When the motors start on each side, they never move both sets of legs exactly the same way, so initially the robot swings a bit to one side, then compensates, and eventually straightens out as it keeps going. But this means that when we repeat the sequence a few times, our end positions drift a lot, and we end up far outside the starting rectangle!

This simulation demonstrates the issue: it uses the naive control approach, relying only on the front distance – but with some randomness added to the speed of each motor. The effect is slightly exaggerated here, but it’s close to what happens in the real world.
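The core of such a demo is tiny. A hedged sketch – the noise range, track width and motion model are all invented, and much cruder than our simulator:

```python
import math
import random

def demo_drift(repeats=6, steps=200, dt=0.02, base_speed=0.25, track=0.15):
    """Each run, give the left and right motors a random speed factor
    (exaggerated, as in the video) and integrate a crude differential-drive
    model: the end positions scatter further and further from the ideal
    straight line."""
    for run in range(repeats):
        x = y = theta = 0.0
        kl = 1 + random.uniform(-0.2, 0.2)   # left legs run a bit off-speed
        kr = 1 + random.uniform(-0.2, 0.2)   # right legs too, independently
        for _ in range(steps):
            vl, vr = base_speed * kl, base_speed * kr
            v, w = (vl + vr) / 2, (vr - vl) / track
            theta += w * dt
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
        print(f"run {run}: ended at x={x:.2f} m, y={y:.2f} m")
```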

Adding a compass can improve this, as it helps us keep our orientation and correct any deviation from a straight line, but it’s still not enough: the robot’s position can still drift along the Y axis over time.

Looking at one distance only and following the compass isn’t enough, because that only gives us one dimension – and we need to move between two points in 2D!

If only we could use the find-pos-in-arena and go-to-target behaviours from Tidy Up the Toys! We could just tell the robot to move to any arbitrary (x,y) position – it might not be absolutely accurate, but it wouldn’t accumulate errors over time! There are two reasons we can’t do that:

  • There is a fish tank in the middle of the arena, so sometimes our sensors would see that instead of a wall, completely confusing our sense of position.
  • We had to relocate our distance sensors to the front, at a very low position, to be out of the way of the shooter at the top. This means the left and right sensors are blocked by the legs, so they are unusable.

But we can “cheat”! Because we know our initial position and orientation, and because our movement is extremely limited in this particular challenge, we don’t need the full, generic localisation logic. Assuming (hoping) we’re always roughly facing right and moving back and forth in the lower part of the arena, then just by looking at our distances forward and 45 degrees to the right (which isn’t obstructed by the legs), we can determine our position in 2D!

This is where layers of behaviours and composability help a lot: all we need to do here is swap out the generic find-pos-in-arena for a simpler behaviour, find-pos-in-arena-for-fish. It provides the same values (arena-x and arena-y), but uses a simpler, more limited method that’s good enough for this situation. Everything else remains the same: go-to-target and all the other behaviours work just fine!
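Under the assumptions above – heading 0° meaning we face the right-hand wall, and the 45-degree beam landing on the bottom wall – the replacement behaviour boils down to two lines of trigonometry. A sketch with made-up names and arena size:

```python
import math

ARENA_W = 2.0   # assumed arena width in metres (illustrative)

def find_pos_in_arena_for_fish(dist_forward, dist_45_right, heading_deg):
    """Simplified 2D localisation for this challenge only: the forward
    sensor is assumed to be looking at the right-hand wall, the 45-degree
    sensor at the bottom wall. Produces the same kind of values (arena-x,
    arena-y) that go-to-target expects."""
    theta = math.radians(heading_deg)
    arena_x = ARENA_W - dist_forward * math.cos(theta)
    arena_y = dist_45_right * math.sin(math.radians(45) - theta)
    return arena_x, arena_y
```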

And here is the end result:

Up the Garden Path

This challenge probably got the least attention. The solution was supposed to be simple, though: the goal is to follow a path in an empty arena. Because the path and all the dimensions are known in advance, generic line following or ad-hoc decisions about where to turn aren’t really necessary: if we have a way of precisely following a predetermined path, that should be the most efficient solution. (Assuming we don’t worry too much about scoring the highest points in this challenge by using a more “advanced” method!)

So we can rely on what we’ve already developed for Tidy Up the Toys! It’s basically the same thing, just simpler: in both cases we have a series of points we want to visit in order, but here we don’t even need to stop to pick up or drop boxes!

The main behaviour for this challenge, run-garden-path, is just a simpler version of run-tidy-up: it has a definition of the path as a list of coordinates and goes through them, setting arena-target and calling go-to-target for each.
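Reusing the names from the behaviour sketch earlier (and with invented coordinates), run-garden-path could be as small as this:

```python
# Reuses `state` and `go_to_target` from the earlier behaviour sketch.
async def run_garden_path():
    """Just a simpler run-tidy-up: visit the path's points in order,
    with no stops for picking up or dropping boxes."""
    path = [(0.3, 0.3), (0.3, 1.5), (1.0, 1.5), (1.6, 0.4)]
    state["speed"] = 0.2
    for point in path:
        state["arena_target"] = point
        await go_to_target()
```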

However, this challenge quickly exposed some issues and limitations in our control methods. Here we need to follow the path more precisely than in Tidy Up the Toys, and the path includes some difficult points and orientations: the usable range of the VL53L0X distance sensors is about 1.28 m, which is of course slightly less than the diagonal of the arena! So there are a few “blind spots” near the corners where, at certain orientations, we don’t see enough walls to determine our location. This is usually not a problem when moving in a straight line, as we quickly get a “lock” again, but the path for this challenge happens to have a few difficult turns where Phantom can easily wander off. (Note that we didn’t include any dead reckoning, so we can completely lose track of where we are if we deviate too much from the last known matching point!)

Another unexpected problem involved precise movements with lots of turns: our motor control logic probably wasn’t quite adequate for this. Stopping, restarting or reversing is always problematic, especially with legs, as these introduce delays and errors in movement. So we have to be careful how “aggressively” we try to turn: it’s better to allow small deviations and keep moving, correcting as we go, than to stop!
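One way to express that “don’t turn too aggressively” rule – purely illustrative, not our actual controller – is a proportional turn with a hard clamp:

```python
def turn_rate(set_heading, heading, gain=0.02, max_turn=0.3):
    """Proportional turning with a hard clamp: wrap the heading error into
    the -180..180 degree range, then never ask for more than max_turn.
    A low max_turn keeps the robot moving and correcting gently instead
    of stopping to spin (all constants invented)."""
    error = (set_heading - heading + 180) % 360 - 180
    return max(-max_turn, min(max_turn, gain * error))
```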

The combination of the above issues – and possibly some other mistakes somewhere – made it really hard to achieve smooth line following, and as usual, there wasn’t enough time to investigate further. In the end, keeping the original, generic control method and with a little “hacking” of various parameters, we managed a not great but more or less acceptable run.

DIY Obstacle Course

Thinking about obstacles, we figured that the greatest obstacle for Dad (or his robot) is definitely his overenthusiastic family (preventing him from living a quiet, peaceful life). So we put objects in the obstacle course that represent our, erm, quite unorthodox family life.

First, Phantom needed to overcome paper obstacles representing our unshakable belief that no matter how modern the world gets, we keep reading paper books, writing letters on paper by hand (with fountain pens!), keeping reminders written on paper, using traditional paper calendars and diaries, etc.

The toy animals, representing our youngest family member’s immense love for nature, kept an eye on Phantom’s (highly successful) efforts.

Next, Phantom needed to push the ducks (representing Pi Wars) out of the way, manoeuvring around the three kids’ first shoes and their most important first toys.

Following that, Phantom needed to go through the tunnel under a small skeleton army (representing our middle child’s obsession with Warhammer and the like), under the eyes of our cat (or rather the Cat who owns our family): Sir Tihamer Odysseus Denes-Dulo Bel et Bon of Mars, First of His Name.

The yellow balls in the corner represent Dad’s and his son’s heroic efforts to create a sophisticated (yet still effective) shooter.

The globe that Phantom had to avoid knocking off the Bear Hill represents our love for wandering.

On the hill, before greeting Babbage Bear, Phantom had to avoid getting caught in the Periodic Table or the Trainee Barista T-shirt representing our two sons’ vocations, while admiring our talented (just not particularly in painting) son-in-law’s artwork.

Descending from Bear Hill, Phantom was greeted by our 2018 Pi Wars photobook (on paper!).

Phantom needed to gently place a golf ball into our first child’s wooden baby toy box (their mother having firmly refused to let them play with anything made of plastic), guarded by a houseplant (the little one’s signature) with a worm (a present from the big one) and the painting of our future cafe named after Tihamer.

And finally, back to square one!
