You'll need to look in <limits.h> (or one of the files it includes, e.g., sys/syslimits.h on OS X) for the #define of UID_MAX.
Most recent operating systems (Solaris 2.x, OS X, BSD, Linux, HP-UX 11i, AIX 6) can handle up to two billion (2^31-2), so I would assume that and make a workaround for the more obscure systems that don't.
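From the shell, a quick way to hunt for that definition is to grep the headers. This is only a sketch (header locations vary by platform, and some libcs, glibc among them, don't define UID_MAX in their headers at all, so an empty result is possible):

% grep -rw UID_MAX /usr/include 2>/dev/null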
glibc provides definitions for all those system types. You can check /usr/include/bits/typesizes.h:

% grep '#define __UID_T_TYPE' /usr/include/bits/typesizes.h
#define __UID_T_TYPE __U32_TYPE

Next you look into /usr/include/bits/types.h:

% grep '#define __U32_TYPE' /usr/include/bits/types.h
#define __U32_TYPE unsigned int
This lets you find out the C type. Since you need the size in bytes, your best option is parsing the typedef name according to the specification in types.h:
We define __S<SIZE>_TYPE and __U<SIZE>_TYPE for the signed and unsigned
variants of each of the following integer types on this machine.
16 -- "natural" 16-bit type (always short)
32 -- "natural" 32-bit type (always int)
64 -- "natural" 64-bit type (long or long long)
LONG32 -- 32-bit type, traditionally long
QUAD -- 64-bit type, always long long
WORD -- natural type of __WORDSIZE bits (int or long)
LONGWORD -- type of __WORDSIZE bits, traditionally long
So, here is a one-liner:
% grep '#define __UID_T_TYPE' /usr/include/bits/typesizes.h | cut -f 3 | sed -r 's/__([US])([^_]*)_.*/\1 \2/'
U 32
Here U means unsigned (it can also be S for signed) and 32 is the size token; look it up in the list above. Most of the time the token is simply the size in bits, so dividing by 8 gives the size in bytes, but if you want your script to be fully portable it is better to do a case switch on this value, since tokens like QUAD, WORD and LONGWORD aren't numeric (see the sketch below).
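For instance, a case switch along these lines would turn the token into a byte count. This is only a sketch building on the one-liner above; the WORD/LONGWORD branch assumes that getconf LONG_BIT agrees with __WORDSIZE on your machine:

# reuse the one-liner: $1 becomes U or S, $2 the size token
set -- $(grep '#define __UID_T_TYPE' /usr/include/bits/typesizes.h |
         cut -f 3 | sed -r 's/__([US])([^_]*)_.*/\1 \2/')
sign=$1 size=$2
case "$size" in
    16)            bytes=2 ;;
    32|LONG32)     bytes=4 ;;
    64|QUAD)       bytes=8 ;;
    WORD|LONGWORD) bytes=$(( $(getconf LONG_BIT) / 8 )) ;;
    *)             echo "unknown size token: $size" >&2; exit 1 ;;
esac
echo "uid_t: $sign, $bytes bytes"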
In this link the same question is asked, and a responder uses a trial-and-error method to determine that the system in question stores the value in a signed long int, leaving 31 bits for the value, with a maximum of 2,147,483,647.
# groupadd -g 42949672950 testgrp
# more /etc/group
testgrp:*:2147483647:
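The stored value is exactly the largest signed 32-bit integer, and notably not what a plain modulo-2^32 wrap of the requested GID would give, so groupadd here clamps to the maximum rather than wrapping. You can check both numbers with shell arithmetic (assuming a shell with 64-bit arithmetic, as bash has):

% echo $(( (1 << 31) - 1 ))
2147483647
% echo $(( 42949672950 % (1 << 32) ))
4294967286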
That's an interesting question. I'd be surprised if there was a standard, portable method to determine this.
I don't have a Linux box handy, but the id command on FreeBSD 8.0 wraps back to zero:
# id 4294967296
uid=0(root) gid=0(wheel) groups=0(wheel),5(operator)
I'm sure this is undefined behavior, but I'd wager that most versions of id would either wrap to zero at 65,536 (with a 16-bit UID) or at 4,294,967,296 (with a 32-bit UID), or error out if you went beyond the system limit.
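The wrap itself is just modular arithmetic. A quick sketch of what a 32-bit uid_t does to out-of-range values (again assuming 64-bit shell arithmetic):

% echo $(( 4294967296 % (1 << 32) ))
0
% echo $(( 4294967297 % (1 << 32) ))
1
% echo $(( 65536 % (1 << 16) ))
0

The first wraps 2^32 to 0, which is why id reported root above; a 16-bit uid_t would wrap the same way at 65,536.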