I have two function implementations that I expected to work the same:
#define HEAD_STEP_DELAY 1000

unsigned long HeadLastMicros;

void HeadUpdate()
{
    if ((micros() >= HeadLastMicros) && ((micros() - HeadLastMicros) < HEAD_STEP_DELAY)) {
        return;
    }
    HeadLastMicros = micros();
    HeadStep(true);
}
and
#define HEAD_STEP_DELAY 1

unsigned long HeadLastMillis;

void HeadUpdate()
{
    if ((millis() >= HeadLastMillis) && ((millis() - HeadLastMillis) < HEAD_STEP_DELAY)) {
        return;
    }
    HeadLastMillis = millis();
    HeadStep(true);
}
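For comparison, the pattern I usually see recommended for this kind of non-blocking wait is the plain unsigned-subtraction check below. This is only a sketch with a hypothetical name (HeadUpdateReference), calling the same HeadStep() as above with the same 1000 µs delay; I'm not sure whether the difference from my version actually matters, which is partly why I'm asking:

unsigned long HeadLastRefMicros;

void HeadUpdateReference()
{
    unsigned long now = micros();                     // read the timer once per call
    if (now - HeadLastRefMicros < 1000UL) {           // 1000 us, same delay as the micros() version;
        return;                                       // unsigned subtraction is rollover-safe
    }
    HeadLastRefMicros = now;
    HeadStep(true);
}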
I know that the resolution of micros() is limited to about 8 µs, but given that I'm waiting 1000 µs, I figured both of the above ought to behave roughly the same. I'm driving a stepper motor from this code, and the stepper driven by the micros() version was behaving erratically. I haven't put the scope on it yet, so I don't know for sure, but the torque was significantly lower (the motor stalled easily) and it sounded wonky.
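In case it's useful, this is the kind of throwaway sketch I was going to use to sanity-check the step period in software before putting the scope on it. It is only a measurement stub (the baud rate and the print-every-1000-steps counter are arbitrary, and HeadStep() here just records timing instead of pulsing the driver):

#define HEAD_STEP_DELAY 1000

unsigned long HeadLastMicros;
unsigned long PrevStepMicros;

void HeadStep(bool fwd)
{
    // Stand-in for the real step routine: record the interval since the
    // previous step, and only print every 1000th one so the Serial
    // traffic doesn't distort the timing too much.
    static unsigned int count = 0;
    unsigned long now = micros();
    unsigned long delta = now - PrevStepMicros;
    PrevStepMicros = now;
    if (++count >= 1000) {
        count = 0;
        Serial.println(delta);    // observed step period in microseconds
    }
}

void HeadUpdate()
{
    // Same micros()-based check as in the first version above.
    if ((micros() >= HeadLastMicros) && ((micros() - HeadLastMicros) < HEAD_STEP_DELAY)) {
        return;
    }
    HeadLastMicros = micros();
    HeadStep(true);
}

void setup()
{
    Serial.begin(115200);
}

void loop()
{
    HeadUpdate();
}

The idea is just to see whether the printed intervals cluster near 1000 µs or somewhere else.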
I'm using an ATmega324A with the MightyCore Arduino core.
Is there anything wrong with my approach? Thanks.
EDIT: I did put the scope on it after my initial post, and the step period of the micros() version is approximately half that of the millis() version.
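To put rough numbers on that: with a 1000 µs delay I expected a step about every millisecond (roughly a 1 kHz step rate), so if the millis() version is near its intended 1 ms period, the micros() version is stepping roughly every 500 µs, i.e. about 2 kHz instead of 1 kHz.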