TinyG is a multi-axis motion control system. TinyG components execute Gcode directly without the need for a general-purpose microcomputer. TinyG is meant to be a complete embedded solution for small motor control. The design goals are to build a board that can handle most motors up through NEMA23, and that can be networked for multi-axis motion control well beyond just 3 or 4 axes. Technical details can be found here. In this post I’d like to discuss the goals in a little more detail and lay out some of the design choices that support them.
In short – TinyG should be a cheap, reliable way to drive DIY CNC projects of various kinds:
- “classic” CNC robots like milling machines, 3D printers, and other systems that typically require 3 or 4 motors
- multi-motor robotic CNC like 6 axis arms and other complex constructions that can take dozens of motors
- motion control projects that take dozens to hundreds of coordinated motors – for example:
- a CNC marimba (ok, so you could use solenoids)
- a multi-machine automated production line
- a moving head conference room simulation
The biggest challenges are:
COST: The all-in cost for a controlled stepper axis (control CPU + software + stepper driver) is optimistically about $25 per axis for a low-end 3 axis solution – e.g. three Allegro A4983 boards, an Arduino (Adafruit, Makershed), and grbl for Gcode control. If you want better power handling and other features you can move to Makerbot electronics ($35 per axis + Arduino), or to commercial systems like the Geckodrive 540 four axis controller ($299, or about $75 per axis, plus whatever you want to drive it with). As the price goes up, so do the power handling, features, and reliability. So any solution you could afford to deploy with a large number of motors had better be towards the low end of the range – and be prepared to ride the cost curve downward over time. (I suppose you could bit-bang some MOSFETs or H-bridges directly from a CPU, but you probably want circuit protection, and perhaps microstepping, and I don’t know that the costs are significantly cheaper when you add it all up.)
We chose the Atmel xmega processor and the TI DRV8811 stepper drivers as a good base to work from, figuring this provided a good balance between cost and capability. If a better CPU or driver chip becomes available we’ll probably use it. We ported Simen Svale Skogsrud’s grbl code to the xmega as an excellent way to do 3 axis CNC. It’s driven from serial input, so parallel ports are not needed. The board accepts USB or direct TTL serial from an Arduino or similar.
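To make the serial-driven model concrete, here is a minimal sketch (not TinyG's actual code; the names are hypothetical) of how a grbl-style interpreter typically consumes its input: bytes arrive one at a time over serial and are accumulated into a line buffer, and a complete Gcode line is dispatched when a newline arrives.

```c
#include <string.h>

#define LINE_MAX_LEN 64

/* Hypothetical sketch: accumulate serial bytes into a Gcode line buffer,
   dispatching on newline, the way a grbl-style interpreter reads input. */
typedef struct {
    char buf[LINE_MAX_LEN];
    int  len;
} line_reader_t;

/* Feed one received byte; returns 1 when a complete line is copied to out. */
int line_reader_feed(line_reader_t *r, char c, char *out)
{
    if (c == '\n' || c == '\r') {
        if (r->len == 0)
            return 0;                    /* ignore blank lines / CRLF pairs */
        r->buf[r->len] = '\0';
        strcpy(out, r->buf);
        r->len = 0;                      /* reset for the next line */
        return 1;
    }
    if (r->len < LINE_MAX_LEN - 1)
        r->buf[r->len++] = c;            /* silently drop overflow bytes */
    return 0;
}
```

Because the interface is just lines of text over a byte stream, the same parser works whether the bytes come from USB, a TTL UART, or an Arduino acting as a front end.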
CONTROL: Running the Gcode interpreter directly on the chip avoids the expense and problems of trying to precisely control step timing with a general purpose computer that has other things on its mind (and non-deterministic interrupt latencies – “Index files now?”). PC based CNC controllers are happy if pulse jitter (uncertainty between pulses) is less than 20 microseconds. The less jitter, the smoother the movement and the faster the motor can be run, particularly at high microstep values. The pulse timing is therefore the most heavily optimized code in the system. There is also a dedicated timer for each axis (no DDA required), so the timing accuracy is about 35 ns (32 MHz clock) – though this can degrade to about 2 microseconds if the step interrupts collide (still pretty happy with the result). This means the board can maintain very high motion accuracy, very short arc segments, and so forth. Limiting the workload of each CPU to 4 axes means at least that part of the motion control won’t degrade as things scale up. So doing 12 axes requires 3 boards, each with their own CPU.
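The dedicated-timer-per-axis scheme can be illustrated with the underlying arithmetic (a sketch of the math, not TinyG's firmware): each axis converts its desired step rate into a timer period counted in CPU clock ticks, and the quantization error of that conversion is one tick – roughly the sub-microsecond resolution figure above.

```c
#include <stdint.h>

#define F_CPU_HZ 32000000UL   /* 32 MHz xmega clock, from the post */

/* Convert a desired step rate into a hardware timer period in clock ticks.
   With a dedicated timer per axis there is no DDA: the timer reloads this
   period and fires a step interrupt each time it expires. */
uint32_t steps_to_ticks(uint32_t steps_per_sec)
{
    return F_CPU_HZ / steps_per_sec;   /* truncation error: <= 1 tick */
}

/* Worst-case timing error of one tick, in nanoseconds (integer approx). */
uint32_t tick_resolution_ns(void)
{
    return 1000000000UL / F_CPU_HZ;    /* ~31 ns at 32 MHz */
}
```

A 1 kHz step rate becomes a period of 32,000 ticks; at 200 kHz the period is only 160 ticks, which is where single-tick resolution (and avoiding interrupt collisions between axes) starts to matter.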
SCALABILITY: Now that we’ve got a network of motion controllers, the next problem is, well, networking. Again, latency and jitter are the enemy. TinyG uses an RS485 network running at 500 Kbps and a fairly efficient packet protocol – in total supporting about 4000 packets per second with true broadcast. That’s about 250 microseconds per packet, so for some applications that’s OK; for others it won’t be. The simplest way we’ve come up with to control a network of controllers is to put the entire interpreter in each CPU, broadcast the command stream to the entire array, have each board build the full model, but only actuate the axes it actually has. This is similar to the way the big screens in Times Square work – each LED panel gets the full video signal but only displays the portion of the image it’s responsible for. In this environment the latency doesn’t really matter – as long as it’s consistent across the broadcast (so you can’t use a ring network; it must actually be a broadcast). If that doesn’t work (too slow, model too big for each unit, or some other barrier) we will need a time-triggered protocol like OSC or an OSC-lite.
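The broadcast model above can be sketched in a few lines (hypothetical names, not TinyG's protocol code): every board applies each broadcast move to its full machine model, but only the axes in that board's ownership mask actually get actuated – the Times Square panel idea in code.

```c
#include <stdint.h>

#define MAX_AXES 12

/* Every board keeps the full machine model, even for axes it doesn't own. */
typedef struct {
    double target[MAX_AXES];
} machine_model_t;

/* axis_mask: bit i is set if this board owns axis i
   (e.g. 0x007 = axes 0-2 on board 1, 0x038 = axes 3-5 on board 2, ...). */
void apply_move(machine_model_t *m, const double *targets,
                uint16_t axis_mask, double *actuated, int *n_actuated)
{
    *n_actuated = 0;
    for (int i = 0; i < MAX_AXES; i++) {
        m->target[i] = targets[i];               /* model tracks every axis */
        if (axis_mask & (1u << i))
            actuated[(*n_actuated)++] = targets[i]; /* only ours move */
    }
}
```

Because every board sees the identical command stream, coordination falls out for free: a fixed broadcast latency shifts all boards equally, which is why consistency matters more than the latency itself.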
STATUS: The units are running a fairly reliable Gcode system right now. We are working to fully test it and round out the features we think are necessary for DIY or manual Gcode operation – good stop/start/end control (harder than it sounds), zeroing and table homing, easy configuration, error trapping for hand-generated Gcode, and a few others. Focus then shifts to networking and to building other interpreters.