Don't use macros and constants.
Sometimes you have no choice but to use macros, but try everything you can
to avoid them. There are many problems with macros.
The most insidious problem is that macro calls look like function calls
but behave subtly differently, and this can lead to bugs. Example:
#define max(a,b) ((a>b)?a:b)
If I now write the statement x = max( f(a) , f(b) ); the result is
different depending on whether max is a macro or a function. If max is a
macro, the function f gets called 3 times, whereas if max is a function,
f gets called only 2 times! This is because when max is a macro, the
statement is expanded into:
x = ( (f(a) > f(b)) ? f(a) : f(b) ) ;
Note that if f is a heavy-duty function, this macro version may actually
be less efficient than using max as a function.
In any case, reliability is a much more important attribute than speed:
it is better for code to be reliable than for it to be fast. Generally,
function calls are very safe; macro calls can be very unsafe.
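For comparison, here is roughly what max looks like written as an ordinary
function; the name max_int is just mine for illustration. The arguments are
evaluated before the call, so f runs exactly once per argument:
/* max as a plain function: each argument is evaluated exactly once */
int max_int( int a , int b )
{
    return ( a > b ) ? a : b ;
}
/* so in  x = max_int( f(a) , f(b) ) ;  f is called exactly twice */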
I'll give just one further problem with macros: macros are not
retargetable. If you redefine a macro, you really have to recompile
everything to be sure the new definition reaches all its usages.
Basically, write the program with no macros at all; only once the program
is up and running and debugged, perhaps change some usages to macros if that
appears to increase speed. If possible, use macros only in #?.c files
and not in header files; that way you contain their usage.
Don't use constants!
Note firstly that constants are really a special form of macro.
In gs8 there is a problem that the source code uses constants,
eg to define the memory overhead. The problem with this is that the
memory overhead of gs8 is hard wired into the program: if you want to
change the memory overhead you have to recompile the whole program.
If instead they had used a function to define the memory overhead, eg
int minimum_free() to determine the minimum amount of memory
to leave free for other programs, then that function could read an
env variable, and the user could retarget the memory overhead without
recompiling.
Note that minimum_free() is an example of programming-by-exclusion,
and as such is bad. A much better programming-by-inclusion approach is
int maximum_usage() to determine the maximum amount of memory the program
may use: you want to limit how much a program uses, not how much it
doesn't use! If you only limit how much it doesn't use, the program will
stretch itself out and gobble up all your resources. You increase your
system's memory to be able to run more programs, but then find you can
still only run one, because it has decided the minimum it will leave
free is ½ meg!
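A minimal sketch of what I mean, assuming a hypothetical env variable
GS_MAX_USAGE and a made-up default; this is not gs8's actual code:
#include <stdlib.h>

/* maximum bytes the program may use; the user can retarget this at
   run time via the (hypothetical) env variable, no recompile needed */
long maximum_usage( void )
{
    const char *s = getenv( "GS_MAX_USAGE" ) ;
    return ( s != NULL ) ? atol( s ) : 5000000L ;
}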
Now you may be wondering: why not retarget the constant
#define MINIMUM_FREE 500000
to a function by defining
#define MINIMUM_FREE minimum_free()
instead?
(Yes, gs8 operates on leaving you ½ meg free! Sad but true.
I have changed this to leaving 5 meg free, which slows it down a bit
but is much healthier.)
OK, the problem with retargeting the constant to a function is that
it's no good: the gs8 source code uses the constant in constant
structure initializers! So the constant (renamed here for clarity)
really has to be a compile-time constant; it cannot be retargeted to a
function call.
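Here is a tiny sketch of why (the table name is made up): a static
initializer in C must be a compile-time constant, so the function-call
version of the define does not compile.
#define MINIMUM_FREE 500000
static long reserve[ 2 ] = { MINIMUM_FREE , 0 } ;   /* fine: compile-time constant */
/* but with  #define MINIMUM_FREE minimum_free()  the same line becomes          */
/* static long reserve[ 2 ] = { minimum_free() , 0 } ;  -- initializer not constant */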
This brings us to the problem of other constants such as structure
initializers; these also cause problems. People think they are clever
writing, eg,
struct xyz abc = { 0 , 1 , 12 , 100 , 41 , ... } ;
This is not clever. It was another source of problems in gs8: if the
definition of struct xyz is ever altered, the above initialization
becomes garbage!
Always try to initialize structures dynamically via named fields,
eg:
struct xyz *abc ;
abc = calloc( 1 , sizeof( struct xyz ) ) ;
abc->name = "abc" ; abc->width = 12 ; abc->height = 15 ;
It requires a bit more typing but is much more bug resistant: if someone
rearranges the definition of struct xyz, the code will still be correct.
It is also easier to verify correctness merely by reading it! It says
abc->height = 15 ; well, does that sound right, is the height 15?
Also, if someone removes the height field from the definition of
struct xyz, the compiler will catch it; with the original
constant-initializer approach this won't be caught. This problem
happened in gs8, so it is a real problem.
My own source code does in fact use some macros and constants, but I do
everything I possibly can to avoid them. Some macros are forced on me,
because gs8's internal mechanisms often involve macros.
However, I define maximum and minimum as functions, for example.
Generally I now replace constants by env variables that override default
values, and to keep the code fast I design things so each env variable
only gets read once: this allows the user to change program behaviour at
run time.
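A minimal sketch of the read-once idea, with a made-up env variable name
and default value:
#include <stdlib.h>

/* the env variable is read on the first call only, then cached */
long page_cache_bytes( void )
{
    static long cached = -1 ;
    if ( cached < 0 )
    {
        const char *s = getenv( "MY_PAGE_CACHE" ) ;
        cached = ( s != NULL ) ? atol( s ) : 1000000L ;
    }
    return cached ;
}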
Think about multiple compiles: 68020, 68040, and 68060 versions of a
program. If instead you had just one program, but with CPU variants of
the speed-critical bits, then you could select your CPU version via an
env variable, or better still the program could determine what CPU it is
on and do this automatically! gs8, however, will be done as multiple
compiles.
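A sketch of how the single-program approach could work, selecting a
function pointer once at startup; the env variable and routine names are
made up, and real code would detect the CPU itself:
#include <stdlib.h>
#include <string.h>

static void blit_020( void ) { /* plain 68020 version */ }
static void blit_060( void ) { /* tuned 68060 version */ }

/* the speed-critical routine, chosen once at startup */
static void (*blit)( void ) = blit_020 ;

void select_cpu_variant( void )
{
    const char *cpu = getenv( "MY_CPU" ) ;
    if ( cpu != NULL && strcmp( cpu , "68060" ) == 0 )
        blit = blit_060 ;
}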
By religiously avoiding macros and constants, a lot of bugs
simply don't happen in my code! When I first began programming I would
sometimes waste a week searching for a bug caused by macros. Avoiding
macros really speeds up development, no doubt about it. It was when I was
forced to learn Modula-3 as a student that I began not using macros.
Once I found that I could often write bug-free code first time round
without them, I realized that macros are a false economy. With gs8
I see many cases where we are paying, in an ongoing way, for someone's
usage of macros and constants.
General conclusion: macros + constants create more problems than they
solve.