

xdr_util - XDR Functions and Utilities

The XDR_UTIL package provides additional XDR functions and utilities that I've needed or thought I needed.


Enumerated Types

Enumerated types in C are problematic with respect to XDR. According to the C Standard, enumerated values are integer constants. Therefore, the storage allocated for a variable of a given enumerated type only needs to be large enough to hold the enumerated values of that type. For example, variables of type

    enum GuitarString { broken = 0,
                        Elow = 1, A = 2, D = 3,
                        G = 4, B = 5, Ehigh = 6 } ;

only require one byte to hold the values 0-6. On the other hand, variables of type

    enum GalaxySize { blackhole =          0LL,
                      dwarf =      0x5F5E100LL,		// 10^8
                      giant = 0x5AF3107A4000LL } ;	// 10^14

need at least 48 bits to hold the possible values (from Wikipedia's Galaxy entry).

Sun's original RPC assumed enums were signed, 16-bit shorts or signed, 32-bit longs, to be exchanged with xdr_short() or xdr_long(), respectively. The xdr_enum() function begins with a simple type declaration:

    enum sizecheck { SIZEVAL };

The size of this type is compared to sizeof (short) and sizeof (long) and xdr_short() or xdr_long() is called as necessary. On a compiler that optimizes such things, however, enum sizecheck only requires one byte, a size unlikely to be equal to the size of a short or long integer. The BSD-XDR library I'm using adds a little more nuance to the code when an int and long differ in size, but it's still basically the same as the original Sun RPC implementation.

The xdr_enum() function takes a pointer to a variable of type enum_t, which is defined as a 32-bit, signed integer. By using an intermediate enum_t variable, this function can be clumsily used to encode/decode 8-, 16-, and 32-bit enumerated types:

    enum  GuitarString  which = Elow ;
    enum_t  whatever ;
    XDR  xdrStream ;

    ... initialize XDR stream structure ...

    whatever = (enum_t) which ;			// 8-bit enum to 32-bit value.
    xdrStream.x_op = XDR_ENCODE ;
    xdr_enum (&xdrStream, &whatever) ;		// Encode before sending.

    ... send and then receive same type back ...

    xdrStream.x_op = XDR_DECODE ;
    xdr_enum (&xdrStream, &whatever) ;		// Decode after receiving.
    which = (enum GuitarString) whatever ;	// 32-bit value to 8-bit enum.

That is a workable solution, but it doesn't handle enumerated types wider than 32 bits. So I wrote some XDR functions for size-specific, signed/unsigned, enumerated types:

    xdr_enum8_t(), xdr_enum8u_t(), ...,
    ..., xdr_enum64_t(), xdr_enum64u_t()

These functions encode/decode their arguments as intN_ts or uintN_ts. Knowing the range of enumerated values of an enum type, an application can choose the appropriate function to call. This is fine if both sides of a transmission stream agree on the width of the enum. If they don't, the chosen function can still be called since it covers the range of enumerated values, but one or both sides of the stream will have to use the intermediate value hack above.

The 3-argument xdr_enumN_t() and xdr_enumNu_t() functions (where "N" is the character "N" and not a stand-in for a bit width) have the same problem as the functions above if the two sides don't agree on the width of an enum.

TRIVIA: I was testing some XDR-based programs on the Palm Pilot emulator (POSE) and they would crash almost right away. I knew the GCC cross-compiler used one-byte enums and I thought that was my problem. Therefore, I wrote these XDR enum functions. It turned out that the problem was with one-byte enums, but not with XDR. (Yet. The one-byte enum still needed to be handled correctly with XDR.) The GCC cross-compiler aligns function arguments on even addresses on the call stack. The PalmOS SDK's header file for variable-length argument lists, "unix_stdarg.h", adds the size of the current argument to the stack address to get the location of the next argument. Advancing to the argument following a one-byte enum results in an odd address and a crash. The solution was to use GCC's <stdarg.h>, not the SDK's header file.

time_t, timespec, and timeval

UNIX struct timevals have two fields: (i) number of seconds and (ii) number of microseconds. Early on, both fields were long integers, typically 32 bits wide:

    struct  timeval {
        long  tv_sec ;
        long  tv_usec ;
    } ;

Now, the standard definition of this structure is as follows:

    struct  timeval {
        time_t  tv_sec ;
        suseconds_t  tv_usec ;
    } ;

where suseconds_t is a signed integer type wide enough to hold numbers in the range -1..1,000,000. There is some confusion about the representation of time_t. The ISO C17 draft standard says "The integer and real floating types are collectively called real types" and "The types declared are ... clock_t and time_t which are real types capable of representing times"; the C2x draft standard has exactly the same wording. It sounds as if time_t can be an integer or a floating-point type. (You can buy the official C Standard documents or find free online copies of the draft proposals. Yes, you have to buy the standard for one of the most widely used programming languages in the world.)

The Open Group Base Specifications 2004 (a nice, extensive, publicly available website) said, "time_t and clock_t shall be integer or real-floating types". At some point between 2004 and 2018, the Open Group changed this to "clock_t shall be an integer or real-floating type[;] time_t shall be an integer type".

Regardless of the confusion, the size of time_t can vary on different platforms, most likely between the old, but still used, convention of a signed, 32-bit integer (which will overflow in January 2038) and the more recent convention of a signed, 64-bit integer. To handle both the 32- and 64-bit sizes somewhat transparently, my xdr_time_t() function transfers the time as a 64-bit, IEEE 754 double-precision, floating-point number. (IEEE 754 is also the standard, over-the-wire representation of floating-point numbers used by XDR.)

IEEE double-precision floats have a 53-bit, unsigned mantissa (52 explicit bits plus the implied bit for normalized numbers); the sign bit is stored separately. IEEE floats can therefore handle over 9 quadrillion consecutive integers:

    0 .. 2^53 - 1 = 9,007,199,254,740,991

Dividing by 60, 60, 24, and 365.2422 will show that an IEEE float can represent, in integer seconds, times more than 285 million years in the future and an equal number of years in the past.

Programs on platforms with 32-bit time_ts still face the possibility of receiving times that are too large. Hmmm, maybe I should do some more bit twiddling in xdr_time_t() and just use plain, old 64-bit integers ...

Public Procedures

xdr_enumN_t() - encodes/decodes variable-width, signed enumerations.
xdr_enumNu_t() - encodes/decodes variable-width, unsigned enumerations.
xdr_enum8_t() - encodes/decodes 8-bit signed enumerations.
xdr_enum8u_t() - encodes/decodes 8-bit unsigned enumerations.
xdr_enum16_t() - encodes/decodes 16-bit signed enumerations.
xdr_enum16u_t() - encodes/decodes 16-bit unsigned enumerations.
xdr_enum32_t() - encodes/decodes 32-bit signed enumerations.
xdr_enum32u_t() - encodes/decodes 32-bit unsigned enumerations.
xdr_enum64_t() - encodes/decodes 64-bit signed enumerations.
xdr_enum64u_t() - encodes/decodes 64-bit unsigned enumerations.
xdr_time_t() - encodes/decodes a time_t time in seconds.
xdr_timespec() - encodes/decodes a timespec structure.
xdr_timeval() - encodes/decodes a timeval structure.
xdr_timeval32() - encodes/decodes a timeval structure with 32-bit fields.

Source Files


Alex Measday  /  E-mail