I’ve been working on the C code for TinyG for 2 years now and I still feel like a rank newbie. The more I get into the domain, the more I find I just don’t know. This weekend I was refactoring part of the planner to get rid of a bug in some complex feedhold cases. As I got further into it I realized more of the implications of the fact that the fundamental variables in CNC (Gcode) are really just time and position. This is obvious if you look at the math for the motion equations – the base equation is V = dr/dt – velocity is the derivative of position (r) with respect to time (t) – with further derivatives taking you on down to acceleration, jerk, (then snap, crackle and pop! Really. I’m not making this up.)
What was not obvious is how this manifests itself in Gcode and in the underlying motor controller code. First off, Gcode is typically stated as position and velocity, aka feed rate. So G1 F500 X100 Y100 Z-5 moves X, Y and Z from some starting position, say (0,0,0), to (100,100,-5) at a velocity of 500 mm per minute (or perhaps inches per minute, depending). This is the most common way to generate gcode.
But this gets really complicated when you add rotary axes. What exactly does G1 F500 X100 Y100 Z-5 A3600 B1800 mean? Kramer spells this out in the Gcode “Gold Source” specification (NIST RS274/NGCv3) in the section on Feed Rate. You can read more about that here if you want: Gcode Language Support – Feed Rates and Traverse Rates, but in short, feed rate is a complicated mashup of linear and rotary motion that somehow gets all the axes in the move to the right end point at the right time.
So it’s really just position and time, when it’s all said and done. My *real* machinist friends tell me that this is why machine-generated Gcode for complicated machining centers is often produced using Inverse Time Mode – where the F word is interpreted as time, not as velocity. That way you don’t have to mess with these crazy transformations, you just tell the machine where to be, when.
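Sketched out (my illustration, with hypothetical function names), the transformation becomes trivial: in inverse time mode the F word is the inverse of the move duration in minutes, every axis shares that one duration, and per-axis velocity just falls out as distance over time.

```c
/* Sketch (hypothetical names): in inverse time mode the F word means the
 * move must complete in 1/F minutes, no matter how far each axis travels.
 * All axes share that one duration, so per-axis velocity is just
 * distance / time. */
void inverse_time_axis_velocities(const double dist[], int n_axes,
                                  double f_word, double vel_out[])
{
    double t_minutes = 1.0 / f_word;        /* shared move duration */
    for (int i = 0; i < n_axes; i++) {
        vel_out[i] = dist[i] / t_minutes;   /* units (or degrees) per minute */
    }
}
```

With F2 (complete the move in half a minute), an axis moving 100 mm runs at 200 mm/min while a rotary axis sweeping 3600 degrees runs at 7200 deg/min – both arrive together without any mashup math.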
I discovered a while ago that this has ramifications in the controller code. Basically, you want to ditch velocity as a variable as soon as you can, and do all your computation in time and position. These need to stay accurate or errors will creep into the work. Velocity, acceleration, jerk? Not so much. Errors here are much more forgiving.
Another implication is that there is a quanta for time, or a Planck’s constant for the controller. It’s not 5.391 × 10⁻⁴⁴ seconds like in the real world, it’s the minimum interpolation interval that the firmware/hardware system can sustain. In EMC2 this is about 1 millisecond. In TinyG it’s closer to 2.5 milliseconds. This is the shortest interval at which you can update the parameters for the motor drivers – the maximum update rate. If you try to go faster than this you won’t be able to compute the next update in time and the system will fail. Likewise there is a minimum quanta for position, which is basically how far an axis moves for one step (or for one microstep, if you don’t mind the increased relative error that introduces – microsteps are not perfect).
So the whole system needs to be constructed around these limits. Thinking about the problem this way opens up a bunch of new possibilities. Like easier ways of doing splining, or better ways of coordinating axes in parallel robots. Or how to coordinate many-axis machines working in the same space – think multiple adaptive exclusion regions all competing for access to the same endpoints or regions.