Take a look at the function SHA1Transform, taken from a SHA1 implementation on GitHub. Assuming SHA1HANDSOFF is defined, the function looks like this:
void SHA1Transform(
    uint32_t state[5],
    const unsigned char buffer[64]
)
{
    uint32_t a, b, c, d, e;
    typedef union
    {
        unsigned char c[64];
        uint32_t l[16];
    } CHAR64LONG16;
    CHAR64LONG16 block[1]; /* use array to appear as a pointer */
    memcpy(block, buffer, 64);
    /* Copy context->state[] to working vars */
    a = state[0];
    b = state[1];
    c = state[2];
    d = state[3];
    e = state[4];
    /* 4 rounds of 20 operations each. Loop unrolled. */
    R0(a, b, c, d, e, 0);
    R0(e, a, b, c, d, 1);
    [LOTS OF SIMILAR STATEMENTS REMOVED FOR READABILITY]
    /* Add the working vars back into context.state[] */
    state[0] += a;
    state[1] += b;
    state[2] += c;
    state[3] += d;
    state[4] += e;
    /* Wipe variables */
    a = b = c = d = e = 0;
    memset(block, '\0', sizeof(block));
}
As you can see, before the function exits, the variables a, b, c, d, e are reset to zero and the contents of block are zeroed as well. Since these are local variables, this looks like unnecessary overhead to me, because they'll go out of scope as soon as the function returns anyway.
Nevertheless, I'm wondering whether there's a reason why the author of the code chose to reset those locals. Is there one, or do you agree that resetting these local variables is unnecessary overhead?
The memset() can be optimized out: block is never read again before it goes out of scope, so the compiler is free to drop the call as a dead store under the as-if rule. The a = b = c = d = e = 0; assignments can be elided too, as they have no observable effect.
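If the intent really is to scrub sensitive data from the stack, the wipe has to be written so the compiler cannot prove it is dead. Here is a minimal sketch of one common workaround, assuming only standard C; the helper name secure_wipe is hypothetical and does not appear in the original code. C11 Annex K's memset_s, or the non-standard explicit_bzero found on some platforms, gives a similar guarantee where it is available.

#include <stddef.h>

/* Hypothetical helper for illustration: write through a pointer to
 * volatile so each store counts as an observable side effect and
 * cannot be removed as a dead store. */
static void secure_wipe(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
    {
        *vp++ = 0;
    }
}

With a helper like this, the end of SHA1Transform could call secure_wipe(block, sizeof(block)); instead of the plain memset, and the working variables could be cleared the same way (e.g. secure_wipe(&a, sizeof(a));). As the code stands, though, neither the memset nor the assignments are guaranteed to survive optimization.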