
GLoC's DECnet Interface to EPOCH

September 30, 1997

Three methods of linking the Telstar 4 Ground Loop Control (GLoC) system's DECnet interfaces to EPOCH are presented and evaluated. But first, brief descriptions of DECnet and of GLoC's network interfaces ...

(Note: The information about GLoC was gleaned from the GLoC requirements and design documents and is, I hope, correct.)
(Also see KOREASAT's GLC.)

DECnet and TCP/IP Networking

From a conceptual standpoint, DECnet is very similar to TCP/IP: servers listen for network connection requests from clients, and clients send connection requests to servers. After a server accepts a connection request from a client, data can be exchanged between the two over the newly established network connection.

At the programming level, similar steps are taken to establish either a DECnet connection or a TCP/IP connection, albeit with different system library calls. For TCP/IP networking, Digital's VMS/Ultrix Connection (UCX) package on the GLoC VAX provides the VAX/VMS programmer with the standard UNIX socket calls.
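As a concrete (and hypothetical) example, a client might establish a TCP/IP connection with the standard socket calls shown below; under VMS, UCX supplies these functions. The host name and port number in the usage note afterwards are made-up values:

    /* Minimal sketch of establishing a TCP/IP client connection
    ** using the standard socket calls (native on UNIX; provided
    ** on the VAX by UCX).  Returns a socket ready for send() and
    ** recv(), or -1 on any error. */

    #include <string.h>             /* memset(), memcpy() */
    #include <netdb.h>              /* gethostbyname() */
    #include <netinet/in.h>         /* sockaddr_in, htons() */
    #include <sys/socket.h>         /* socket(), connect() */

    int connectToServer (const char *host, int port)
    {
        struct hostent *entry ;
        struct sockaddr_in address ;
        int fd ;

        entry = gethostbyname (host) ;      /* Look up the host. */
        if (entry == NULL)  return (-1) ;

        memset (&address, 0, sizeof address) ;
        address.sin_family = AF_INET ;
        address.sin_port = htons ((unsigned short) port) ;
        memcpy (&address.sin_addr, entry->h_addr, entry->h_length) ;

        fd = socket (AF_INET, SOCK_STREAM, 0) ;     /* Create ... */
        if (fd < 0)  return (-1) ;
                                                    /* ... and connect. */
        if (connect (fd, (struct sockaddr *) &address, sizeof address) < 0)
            return (-1) ;

        return (fd) ;
    }

For example, connectToServer ("hawly1", 2001) - both values invented - would connect to a server listening at port 2001 on host hawly1.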

The location of a TCP/IP server is typically specified as

    service@host

where service identifies the "port" at which the server is listening for connection requests and host is the computer on which the server is running. The location of a DECnet server is specified as

    host::"TASK=service"

Same type of information, different syntax.

GLoC and the Network

In an excellent design decision, VMS mailboxes (or message queues) are used for all communications between the core GLoC software and the outside world. A number of ancillary C processes surrounding the GLoC core link the mailboxes to the network. Messages in outgoing message queues are forwarded to the appropriate network destination (e.g., CRT or RTS). Messages received over the network are, in turn, routed to the appropriate incoming message queue (from which GLoC will read them). In the following example, the PKTRCV process reads messages from the RTS over DECnet and queues them up to GLoC:

     (DECnet)
RTS ---------> PKTRCV -----> Status_Request_Q

And, in the other direction, TCPSND sends outgoing messages (queued up by GLoC) over a TCP/IP network connection to CRT:

     (TCP/IP)
CRT <--------- TCPSND <----- TCP_Send_Q
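To make the data flow concrete, below is a rough sketch of what a TCPSND-like forwarder might look like. Lacking the GLoC sources, I've had to invent the mailbox name, the maximum message size, and the error handling; only the general pattern - a QIO read from the mailbox followed by a send() on the socket - is meant seriously:

    /* Hypothetical sketch of a TCPSND-style process: read each
    ** message from a VMS mailbox and forward it to an already-
    ** connected TCP/IP socket.  The mailbox name and maximum
    ** message size are assumptions. */

    #include <descrip.h>            /* $DESCRIPTOR */
    #include <iodef.h>              /* IO$_READVBLK */
    #include <iosbdef.h>            /* IOSB */
    #include <starlet.h>            /* sys$assign(), sys$qiow() */
    #include <sys/socket.h>         /* send() (via UCX) */

    #define MAX_MESSAGE 512         /* Assumed maximum message size. */

    void forwardMessages (int sock)
    {
        $DESCRIPTOR (mbxName, "TCP_SEND_Q") ;   /* Assumed mailbox name. */
        unsigned short channel ;
        char message[MAX_MESSAGE] ;
        unsigned int status ;
        IOSB iosb ;
                                    /* Open a channel to the mailbox. */
        status = sys$assign (&mbxName, &channel, 0, 0) ;
        if (!(status & 1))  return ;        /* Odd status = success. */

        for ( ; ; ) {               /* One QIO read per queued message. */
            status = sys$qiow (0, channel, IO$_READVBLK, &iosb, 0, 0,
                               message, sizeof message, 0, 0, 0, 0) ;
            if (!(status & 1) || !(iosb.iosb$w_status & 1))  break ;
                                    /* Forward the whole message. */
            if (send (sock, message, iosb.iosb$w_bcnt, 0) < 0)  break ;
        }
    }

PKTRCV-style input processes would run the same loop in reverse: read messages from the network connection and write each one to the appropriate incoming mailbox.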

Some important questions for us on the EPOCH side - questions that apply to both the DECnet and TCP/IP communications - are:

  1. For a given network link, who is the client and who is the server?

  2. What are the service names for the servers?

  3. On network links for which GLoC is the client, how do we redirect GLoC's connection requests to the EPOCH front-end?

In scanning the executables, command procedures, and debug logs on ISI's in-house GLoC VAX, I found references to only two DECnet server addresses:

    HAWLY1::"TASK=DECRCV"
    HAWLY2::"TASK=DECRCV"

This suggests that all DECnet communications between GLoC and a given machine are routed through a single server process.

GLoC's DECnet Interface with EPOCH

We have come up with three possible solutions for handling DECnet communications between GLoC and EPOCH:

  1. Buy a third-party "DECnet on UNIX" library (Ki NETWORKS' DNA), which we would use to write DECnet-TCP/IP conversion software. Since the DNA package under consideration does not run on the EPOCH front-end machines, a separate Solaris workstation would be purchased and dedicated to DECnet-TCP/IP conversion:
    
           (DECnet) |       Conversion        | (TCP/IP)
    GLoC <----------|---->     Task      <----|----------> EPOCH
                    |      (Workstation)      |
    
    
  2. Replace the ancillary DECnet I/O processes on the VAX (e.g., PKTXMT and PKTRCV) with new processes that use TCP/IP network connections.

  3. Write and run a DECnet-TCP/IP conversion process on the VAX. The GLoC's DECnet I/O processes would establish DECnet connections with the conversion process, which would, in turn, establish TCP/IP connections with EPOCH:
    
          (DECnet)  Conversion      | (TCP/IP)
    GLoC <-------->    Task    <----|----------> EPOCH
                                    |
    

All of the proposed solutions would require more information about GLoC internals than is currently available to us - we need source code! In addition to needing answers to the questions asked earlier about network service names, etc., we need to know:

 

Option #1: DECnet on UNIX

Vlad will discuss the "DECnet on UNIX" option more fully. Running a conversion task on a UNIX workstation is equivalent to running it on the VAX (option #3) with the added benefits of:

 

Option #2: Replacing Ancillary I/O Processes

Replacing the GLoC's existing DECnet I/O processes with ones that speak TCP/IP has two big advantages over the other proposed solutions:

On the other hand,

An earlier form of this proposal would have replaced all of the ancillary I/O processes, both DECnet and TCP/IP, with processes that speak the EPOCH protocol using an existing C (not C++) library. Despite its attractions (simplified software on the EPOCH side and a possible reduction in extraneous network traffic), this solution would have resulted in a tight coupling of GLoC and EPOCH, making the GLoC ancillary I/O processes vulnerable to changes in EPOCH (e.g., in the communications protocol).

 

Option #3: DECnet-TCP/IP Conversion on the VAX

A DECnet-to-TCP/IP conversion process running on the VAX appears to be the simplest, least costly, and most robust solution to linking GLoC's DECnet communications to EPOCH's TCP/IP interfaces. Arbitrary messages received by the conversion task over a DECnet connection would be output as is to the corresponding TCP/IP connection and vice-versa. The EPOCH software running on the front ends would be responsible for interpreting messages received from GLoC and formatting (in GLoC format) messages sent to GLoC.

This VAX-based solution offers:

(In fairness, each of the items listed above would apply to the "DECnet on UNIX" option as well. The conversion process running on the UNIX workstation could be a near-clone of the VAX conversion program, with different system calls for performing DECnet I/O and with the third-party DNA library as a wild card.)

Some minor drawbacks of the VAX-based conversion process:

DECnet-TCP/IP Conversion Task Design

The conversion task provides a data link between a DECnet connection to GLoC and a TCP/IP connection to EPOCH. Messages received on one connection are transmitted on the other connection and vice-versa:

  (DECnet)   Conversion   (TCP/IP)
<---------->    Task    <---------->

Depending on how GLoC reads DECnet messages, the conversion task may need to ensure that messages sent from EPOCH to GLoC are aligned with the GLoC's QIO read calls. To do so, the conversion task will need to know the record boundaries of messages received from EPOCH, information supplied by the length field (bytes 41-42) in the GLoC standard packet header. (Ouch! The conversion task was this close to being truly generic ...)
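As an illustration, the EPOCH-side input handler might use that length field to assemble exactly one GLoC message per network read, as in the sketch below. The header size, the byte order of the length field, and the assumption that the field holds the total message length are all guesses on my part:

    /* Sketch of message framing on the EPOCH-to-GLoC path: read a
    ** complete GLoC message from the TCP/IP socket so that it can
    ** be written to the DECnet connection (and hence to GLoC's QIO
    ** reads) as a single unit.  The header size, byte order, and
    ** meaning of the length field are assumptions. */

    #include <sys/socket.h>         /* recv() */

    #define HEADER_SIZE 42          /* Assumed: header ends at byte 42. */

    static int readFully (int fd, char *buffer, int length)
    {   /* Loop until LENGTH bytes arrive; TCP may deliver fewer
        ** bytes per recv() than were sent. */
        int count, total = 0 ;
        while (total < length) {
            count = recv (fd, &buffer[total], length - total, 0) ;
            if (count <= 0)  return (-1) ;
            total += count ;
        }
        return (total) ;
    }

    int readGlocMessage (int fd, char *buffer)
    {                               /* Read the fixed-size header. */
        int length ;
        if (readFully (fd, buffer, HEADER_SIZE) < 0)  return (-1) ;
                                    /* Bytes 41-42 hold the length;
                                    ** big-endian order and total-
                                    ** message-length semantics are
                                    ** assumed. */
        length = ((unsigned char) buffer[40] << 8) |
                  (unsigned char) buffer[41] ;
        if (length < HEADER_SIZE)  return (-1) ;
                                    /* Read the rest of the message. */
        if (readFully (fd, &buffer[HEADER_SIZE], length - HEADER_SIZE) < 0)
            return (-1) ;
        return (length) ;           /* One whole message; a single QIO
                                    ** write now delivers it intact. */
    }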

The conversion program will consist of the following components (the use of the term "object" does not imply the use of C++):

  1. TCP/IP object

  2. DECnet object

    These objects represent network connections of the specified type. Each DECnet object contains a link to the TCP/IP object with which it is associated and vice-versa. Development of these components will be aided by the fact that I've written network daemons like this before and by the similarity between the two types of objects: once you've written one, the other quickly follows.

    (Not explicitly listed above are the trivial-to-implement server objects which, when connection requests are received, create connection objects of the appropriate type. Component #4 below, the TCP/IP API, already implements the TCP/IP server objects.)

  3. I/O Multiplexing Dispatcher (existing and well-tested)

    Patterned after the X Windows Toolkit main loop, the I/O dispatcher monitors sockets for I/O and keeps track of timers. When an I/O or timer event occurs, a user-registered callback is invoked to process the event. The UNIX version uses the select(2) system call to monitor sockets; the VMS version uses event flags, as does the DECwindows implementation of X Windows. It's a fairly simple matter to write callbacks that are portable to both UNIX and VMS.

    Input-available callbacks would be used to answer connection requests from clients and to read incoming messages. In the former case, the input callback would create a DECnet or TCP/IP object for the new client connection. In the latter case, the input callback would read the message and output it to the linked connection. (A sketch of such a callback follows this list.)

  4. High-Level TCP/IP API (existing and rock-solid)

  5. High-Level DECnet API

    These Application Programmer Interfaces (API) hide the system-level details of establishing and communicating over networking connections. The DECnet API I have in mind is described in more detail below.
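Below is a rough sketch of the input-available callback mentioned under component #3. The Connection structure and the buffer size are inventions of mine, and the dispatcher's actual registration call is not shown, since I've described that library only in outline:

    /* Sketch of an input-available callback that forwards data
    ** between linked connections.  The Connection structure and
    ** buffer size are hypothetical.  For simplicity, both ends
    ** are shown as sockets read with recv(); the DECnet side
    ** would use the DECnet API instead, and messages bound for
    ** GLoC would be framed as described earlier. */

    #include <unistd.h>             /* close() */
    #include <sys/socket.h>         /* recv(), send() */

    typedef struct Connection {
        int fd ;                        /* Socket (or channel). */
        struct Connection *linked ;     /* The partner connection. */
    } Connection ;

    static void inputCallback (void *userData)
    {   /* Invoked by the dispatcher when input is available;
        ** USERDATA was supplied when the callback was registered. */
        Connection *conn = (Connection *) userData ;
        char buffer[512] ;              /* Assumed maximum size. */
        int length ;

        length = recv (conn->fd, buffer, sizeof buffer, 0) ;
        if (length <= 0) {              /* EOF or error: drop both ends. */
            close (conn->linked->fd) ;
            close (conn->fd) ;
            return ;
        }                               /* Forward to the partner. */
        send (conn->linked->fd, buffer, length, 0) ;
    }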

Note that the conversion program could be targeted equally easily to either the VAX or a "DECnet on UNIX" workstation.

 

High-Level DECnet API

The high-level DECnet Applications Programming Interface is designed to shield the applications programmer from the complexity of the low-level system calls required to establish and talk over DECnet connections. The ability to specify event flags in some calls makes it easy to program non-blocking and/or timed operations in conjunction with an event-flag-based I/O event dispatcher.

High-level pseudocode, specifying which VMS system calls are made and in what order, has been mapped out for each of these functions.

(Note: Details about DECnet programming were found in Digital's DECnet for OpenVMS Networking Manual and in some example programs downloaded from DECUS.)

Client Calls

Endpoint connection ;

status = dnetCall ("taskName[@node]", eventFlag, &connection) ;

Establishes a DECnet client connection to the named task on the specified host. If an event flag greater than 0 is specified, the connection request is issued, but dnetCall() does not wait for the response from the server; the application is responsible for calling dnetComplete() to complete the connection.

status = dnetComplete (connection) ;

Waits for the connection acceptance or rejection from the server.
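For example, a non-blocking connect might be coded as follows; the task name, node, and event flag number are arbitrary examples:

    Endpoint connection ;
    int status ;
                                /* Issue the request, but don't wait
                                ** for the server's answer; event
                                ** flag 3 is an arbitrary choice. */
    status = dnetCall ("DECRCV@HAWLY1", 3, &connection) ;

    ... do other work while the request is outstanding ...

                                /* Block until accepted or rejected. */
    status = dnetComplete (connection) ;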

Server Calls

status = dnetListen ("taskName", eventFlag, &listener) ;

Creates a listening channel for a DECnet server and declares its service, taskName, to the operating system. If an event flag greater than 0 is specified, an asynchronous read of the first connection request is initiated, with the completion of the read to be posted to the event flag.

status = dnetAnswer (listener, &client) ;

Waits for and reads the next connection request received in the listening channel's mailbox. A new Endpoint is created for the client connection. If an event flag was specified in the original dnetListen() call, a new asynchronous read of the next connection request is then initiated; its completion will be posted to the same event flag.
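A server would typically combine the two calls in an accept loop; for example, assuming VMS-style odd-for-success status values:

    Endpoint listener, client ;
    int status ;
                                /* Declare service DECRCV and listen. */
    status = dnetListen ("DECRCV", 0, &listener) ;

    for ( ; ; ) {               /* Each client gets its own endpoint. */
        status = dnetAnswer (listener, &client) ;
        if (!(status & 1))  break ;

        ... exchange data with the client (see Input/Output below),
            then hang up with dnetDestroy (client) ...

    }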

Input/Output

status = dnetRead (connection, eventFlag, numBytesToRead, buffer, &numBytesRead) ;

Reads the requested amount of data from a connection. (Or however much data is available in the incoming message.) If an event flag greater than 0 is specified, an asynchronous read of the data is initiated; the completion of the read will be posted to the event flag.

status = dnetWrite (connection, eventFlag, numBytesToWrite, buffer, &numBytesWritten) ;

Writes the requested amount of data to a connection. If an event flag greater than 0 is specified, an asynchronous write of the data is initiated; the completion of the write will be posted to the event flag.

iosb = dnetIOStatus (connection) ;

Returns the I/O status of the most recently completed, asynchronous operation on the connection.

status = dnetDestroy (endpoint) ;

Closes a listening socket or data connection. Any outstanding QIOs are cancelled.
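Putting the pieces together, a synchronous client exchange might look like the following. The task name, node, and message contents are made-up examples, and I'm again assuming VMS-style odd-for-success status values:

    Endpoint connection ;
    char reply[512] ;           /* Assumed maximum reply size. */
    int numRead, numWritten ;
    int status ;
                                /* Event flag 0 = synchronous calls. */
    status = dnetCall ("DECRCV@HAWLY1", 0, &connection) ;
    if (!(status & 1))  return (status) ;
                                /* Send a (made-up) 12-byte request. */
    status = dnetWrite (connection, 0, 12, "STATUS_REQST", &numWritten) ;
    if (status & 1)             /* Read whatever reply comes back. */
        status = dnetRead (connection, 0, sizeof reply, reply, &numRead) ;

    dnetDestroy (connection) ;  /* Hang up. */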

Alex Measday