|
|
|
Three methods of linking the Telstar 4 Ground Loop Control (GLoC) system's DECnet interfaces to EPOCH are presented and evaluated. But first, brief descriptions of DECnet and of GLoC's network interfaces are in order.
(Note: The information about GLoC was gleaned from the GLoC requirements and design documents and is, I hope, correct.)
(Also see KOREASAT's GLC.)
From a conceptual standpoint, DECnet is very similar to TCP/IP: servers listen for network connection requests from clients, and clients send connection requests to servers. After a server accepts a connection request from a client, data can be exchanged between the two over the newly established network connection.
At the programming level, similar steps are taken to establish either a DECnet connection or a TCP/IP connection, albeit with different system library calls. For TCP/IP networking, Digital's VMS/Ultrix Connection (UCX) package on the GLoC VAX provides the VAX/VMS programmer with the standard UNIX socket calls.
The location of a TCP/IP server is typically specified as
service@host
where service identifies the "port" at which the server is listening for connection requests and host is the computer on which the server is running. The location of a DECnet server is specified as
host::"TASK=service"
Same type of information, different syntax.
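As a quick illustration (a sketch only, not code from GLoC or EPOCH), here is how a client might connect to a TCP/IP server at service@host using the standard socket calls that UCX provides; tcpCall() and its error handling are mine:

    /* Sketch only: connect to the TCP/IP server at "service@host" using the
       standard socket calls provided by UCX.  The function name and error
       handling are illustrative, not taken from GLoC or EPOCH. */

    #include <string.h>
    #include <netdb.h>                  /* gethostbyname(), getservbyname() */
    #include <netinet/in.h>             /* struct sockaddr_in */
    #include <sys/socket.h>             /* socket(), connect() */
    #include <unistd.h>                 /* close() */

    int tcpCall (const char *service, const char *host)
    {
        struct hostent *hp = gethostbyname (host) ;
        struct servent *sp = getservbyname (service, "tcp") ;
        struct sockaddr_in address ;
        int sock ;

        if ((hp == NULL) || (sp == NULL))  return (-1) ;

        memset (&address, 0, sizeof address) ;
        address.sin_family = AF_INET ;
        address.sin_port = sp->s_port ;         /* Already in network order. */
        memcpy (&address.sin_addr, hp->h_addr, hp->h_length) ;

        sock = socket (AF_INET, SOCK_STREAM, 0) ;
        if (sock < 0)  return (-1) ;
        if (connect (sock, (struct sockaddr *) &address, sizeof address) < 0) {
            close (sock) ;
            return (-1) ;
        }

        return (sock) ;                         /* Connected to service@host. */
    }

Establishing the equivalent DECnet connection to host::"TASK=service" is exactly what the dnetCall() routine proposed later in this document is for.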
In an excellent design decision, VMS mailboxes (or message queues) are used for all communications between the core GLoC software and the outside world. A number of ancillary C processes surrounding the GLoC core link the mailboxes to the network. Messages in outgoing message queues are forwarded to the appropriate network destination (e.g., CRT or RTS). Messages received over the network are, in turn, routed to the appropriate incoming message queue (from which GLoC will read them). In the following example, the PKTRCV process reads messages from the RTS over DECnet and queues them up to GLoC:
                  (DECnet)
       RTS ---------> PKTRCV -----> Status_Request_Q
And, in the other direction, TCPSND sends outgoing messages (queued up by GLoC) over a TCP/IP network connection to CRT:
                  (TCP/IP)
       CRT <--------- TCPSND <----- TCP_Send_Q
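To illustrate the pattern (a sketch based on my reading of the design documents, not TCPSND's actual code), a forwarding process of this kind boils down to a loop that reads a message from the mailbox with SYS$QIOW and writes it to the network connection. The mailbox logical name, the maximum message size, and the socket header name below are assumptions:

    /* Sketch of a TCPSND-style forwarding loop (NOT the actual GLoC code).
       The TCP_SEND_Q logical name and the 512-byte maximum message size are
       assumptions; the socket is assumed to be already connected to CRT. */

    #include <descrip.h>                /* $DESCRIPTOR */
    #include <iodef.h>                  /* IO$_READVBLK */
    #include <starlet.h>                /* sys$assign(), sys$qiow() */
    #include <socket.h>                 /* send() - header name assumed (UCX) */

    void forwardQueueToSocket (int sock)
    {
        $DESCRIPTOR (mbxName, "TCP_SEND_Q") ;   /* Assumed mailbox name. */
        unsigned short channel ;
        unsigned short iosb[4] ;
        char message[512] ;                     /* Assumed maximum size. */
        int status ;

        status = sys$assign (&mbxName, &channel, 0, 0) ;
        if (!(status & 1))  return ;            /* VMS: odd status = success. */

        for ( ; ; ) {                           /* Forward messages forever. */
            status = sys$qiow (0, channel, IO$_READVBLK, iosb, 0, 0,
                               message, sizeof message, 0, 0, 0, 0) ;
            if (!(status & 1) || !(iosb[0] & 1))  break ;
            send (sock, message, iosb[1], 0) ;  /* iosb[1] = bytes read. */
        }
    }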
Some important questions for us on EPOCH - questions which apply to both DECnet and TCP/IP communications - are:
In scanning the executables, command procedures, and debug logs on ISI's in-house GLoC VAX, I only found references to two DECnet server addresses:
    HAWLY1::"TASK=DECRCV"
    HAWLY2::"TASK=DECRCV"
These make it seem as if all DECnet communications between GLoC and the given machine are routed through a single server process.
We have come up with three possible solutions for handling DECnet communications between GLoC and EPOCH:
            (DECnet)     |  Conversion  |     (TCP/IP)
    GLoC <---------------|----> Task <--|---------------> EPOCH
                         | (Workstation)|
          (DECnet)   Conversion  |    (TCP/IP)
    GLoC <---------->   Task  <--|--------------> EPOCH
                                 |
All of the proposed solutions would require more information about GLoC internals than is currently available to us - we need source code! In addition to needing answers to the questions asked earlier about network service names, etc., we need to know:
Vlad will discuss the "DECnet on UNIX" option more fully. Running a conversion task on a UNIX workstation is equivalent to running it on the VAX (option #3) with the added benefits of:
Replacing the GLoC's existing DECnet I/O processes with ones that speak TCP/IP has two big advantages over the other proposed solutions:
On the other hand,
An earlier form of this proposal would have replaced all of the ancillary I/O processes, both DECnet and TCP/IP, with processes that speak the EPOCH protocol using an existing C (not C++) library. Despite its attractions (simplified software on the EPOCH side and a possible reduction in extraneous network traffic), this solution would have resulted in a close coupling of GLoC and EPOCH, making the GLoC ancillary I/O processes vulnerable to changes in EPOCH (e.g., the communications protocol).
A DECnet-to-TCP/IP conversion process running on the VAX appears to be the simplest, least costly, and most robust solution to linking GLoC's DECnet communications to EPOCH's TCP/IP interfaces. Arbitrary messages received by the conversion task over a DECnet connection would be output as is to the corresponding TCP/IP connection and vice-versa. The EPOCH software running on the front ends would be responsible for interpreting messages received from GLoC and formatting (in GLoC format) messages sent to GLoC.
This VAX-based solution offers:
(In fairness, each of the items listed above would apply to the "DECnet on UNIX" option as well. The conversion process running on the UNIX workstation could be a near-clone of the VAX conversion program, with different system calls for performing DECnet I/O and with the third-party DNA library as a wild card.)
Some minor drawbacks of the VAX-based conversion process:
The conversion task provides a data link between a DECnet connection to GLoC and a TCP/IP connection to EPOCH. Messages received on one connection are transmitted on the other connection and vice-versa:
           (DECnet)   Conversion   (TCP/IP)
        <------------->   Task  <------------->
Depending on how GLoC reads DECnet messages, the conversion task may need to ensure that messages sent from EPOCH to GLoC are aligned with the GLoC's QIO read calls. To do so, the conversion task will need to know the record boundaries of messages received from EPOCH, information supplied by the length field (bytes 41-42) in the GLoC standard packet header. (Ouch! The conversion task was this close to being truly generic ...)
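The framing logic itself is small. Here is a sketch of one way it could work; the zero-based offset of the length field, its byte order (VAX little-endian), and whether the length counts the entire packet (header included) are all assumptions that would have to be verified against the GLoC documents:

    /* Sketch of the framing logic for EPOCH-to-GLoC traffic.  Assumptions:
       the 16-bit length field at bytes 41-42 of the GLoC standard packet
       header is at zero-based offsets 40-41, is little-endian (VAX byte
       order), and gives the total packet length, header included. */

    #include <string.h>                 /* memcpy() */

    #define LENGTH_OFFSET  40           /* Bytes 41-42, zero-based (assumed). */
    #define MIN_HEADER     42           /* Enough to reach the length field. */
    #define MAX_PACKET     4096         /* Assumed maximum packet size. */

    static char packet[MAX_PACKET] ;
    static int numBuffered = 0 ;

    static int packetLength (void)
    {   /* Extract the 16-bit length field (little-endian assumed). */
        return ((unsigned char) packet[LENGTH_OFFSET]) |
               ((unsigned char) packet[LENGTH_OFFSET+1] << 8) ;
    }

    /* Call with each chunk of bytes read from the EPOCH connection;
       sendToGLoC() (not shown) writes one complete record to GLoC. */

    void bufferEpochInput (const char *chunk, int length,
                           void (*sendToGLoC) (const char *, int))
    {
        while (length > 0) {
            int needed, copied ;

            if (numBuffered < MIN_HEADER)       /* Still collecting header. */
                needed = MIN_HEADER - numBuffered ;
            else
                needed = packetLength () - numBuffered ;

            copied = (length < needed) ? length : needed ;
            memcpy (&packet[numBuffered], chunk, copied) ;
            numBuffered += copied ;
            chunk += copied ;
            length -= copied ;

            if (numBuffered < MIN_HEADER)  continue ;

            if ((packetLength () < MIN_HEADER) || (packetLength () > MAX_PACKET)) {
                numBuffered = 0 ;               /* Bad length - resynchronize. */
                return ;
            }

            if (numBuffered == packetLength ()) {
                sendToGLoC (packet, numBuffered) ;  /* One whole GLoC record. */
                numBuffered = 0 ;
            }
        }
    }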
The conversion program will consist of the following components (the use of the term "object" does not imply the use of C++):
These objects represent network connections of the specified type. Each DECnet object contains a link to the TCP/IP object with which it is associated and vice-versa. Development of these components will be aided by the fact that I've written network daemons like this before and by the similarity between the two types of objects: once you've written one, the other quickly follows.
(Not explicitly listed above are the trivial-to-implement server objects which, when connection requests are received, create connection objects of the appropriate type. Component #4 below, the TCP/IP API, already implements the TCP/IP server objects.)
Patterned after the X Windows Toolkit main loop, the I/O dispatcher monitors sockets for I/O and keeps track of timers. When an I/O or timer event occurs, a user-registered callback is invoked to process the event. The UNIX version uses the select(2) system call to monitor sockets; the VMS version uses event flags, as does the DECwindows implementation of X Windows. It's a fairly simple matter to write callbacks that are portable to both UNIX and VMS.
Input-available callbacks would be used to answer connection requests from clients and to read incoming messages. In the former case, the input callback would create a DECnet or TCP/IP object for the new client connection. In the latter case, the input callback would read the message and output it to the linked connection.
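As a concrete illustration, the UNIX version of such a dispatcher might look roughly like the sketch below. The registration interface shown (ioRegister(), ioDispatch()) is illustrative only, not the actual API, and timer handling is omitted:

    /* Sketch of a select(2)-based I/O dispatcher (UNIX version).  The
       ioRegister()/ioDispatch() interface is illustrative, not the actual
       API; timers are omitted for brevity. */

    #include <stddef.h>                 /* NULL */
    #include <sys/select.h>             /* select(), fd_set */

    #define MAX_SOURCES  64

    typedef void (*IOCallback) (int fd, void *clientData) ;

    static struct {                     /* Registered input sources. */
        int fd ;
        IOCallback callback ;
        void *clientData ;
    } source[MAX_SOURCES] ;
    static int numSources = 0 ;

    /* Register a callback to be invoked when input is available on a socket. */

    int ioRegister (int fd, IOCallback callback, void *clientData)
    {
        if (numSources >= MAX_SOURCES)  return (-1) ;
        source[numSources].fd = fd ;
        source[numSources].callback = callback ;
        source[numSources].clientData = clientData ;
        numSources++ ;
        return (0) ;
    }

    /* Monitor the registered sockets and invoke callbacks as input arrives. */

    void ioDispatch (void)
    {
        for ( ; ; ) {
            fd_set readMask ;
            int i, maxFD = -1 ;

            FD_ZERO (&readMask) ;
            for (i = 0 ;  i < numSources ;  i++) {
                FD_SET (source[i].fd, &readMask) ;
                if (source[i].fd > maxFD)  maxFD = source[i].fd ;
            }

            if (select (maxFD + 1, &readMask, NULL, NULL, NULL) <= 0)  continue ;

            for (i = 0 ;  i < numSources ;  i++) {
                if (FD_ISSET (source[i].fd, &readMask))
                    source[i].callback (source[i].fd, source[i].clientData) ;
            }
        }
    }

On VMS, the dispatch loop would presumably wait on event flags (e.g., with SYS$WFLOR) instead of calling select(2), but the callback interface seen by the application would be the same.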
These Application Programming Interfaces (APIs) hide the system-level details of establishing and communicating over network connections. The DECnet API I have in mind is described in more detail below.
Note that the conversion program could be targeted equally easily to either the VAX or a "DECnet on UNIX" workstation.
The high-level DECnet Application Programming Interface (API) is designed to shield the applications programmer from the complexity of the low-level system calls required to establish and talk over DECnet connections. The ability to specify event flags in some calls makes it easy to program non-blocking and/or timed operations in conjunction with an event-flag-based I/O event dispatcher.
High-level pseudocode specifying which VMS system calls are made, and in what order, has been mapped out for each of the functions.
(Note: Details about DECnet programming were found in Digital's DECnet for OpenVMS Networking Manual and in some example programs downloaded from DECUS.)
Endpoint connection ;
status = dnetCall ("taskName[@node]", eventFlag, &connection) ;
dnetCall() does not wait for the response from the server; the application is responsible for calling dnetComplete() to complete the connection.
status = dnetComplete (connection) ;
status = dnetListen ("taskName", eventFlag, &listener) ;
status = dnetAnswer (listener, &client) ;
An Endpoint is created for the client connection. If an event flag was specified in the original dnetListen() call, a new asynchronous read of the next connection request is then initiated; its completion will be posted to the same event flag.
status = dnetRead (connection, eventFlag, numBytesToRead,
                   buffer, &numBytesRead) ;
status = dnetWrite (connection, eventFlag, numBytesToWrite,
                    buffer, &numBytesWritten) ;
iosb = dnetIOStatus (connection) ;
status = dnetDestroy (endpoint) ;
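To give a feel for how these calls fit together, here is a sketch of the conversion task's GLoC-facing side: listen for a connection, answer it, and read one message. The header name, the task name, the use of event flag 0 to request synchronous operation, and the odd-status-equals-success convention are all assumptions about how the API will behave:

    /* Sketch only.  "dnet.h" is an assumed header name for the proposed API;
       GLOC_CONV is an assumed task name; passing 0 for the event flag is
       assumed to request synchronous (blocking) operation. */

    #include <stdio.h>
    #include "dnet.h"

    void acceptAndRead (void)
    {
        Endpoint listener, client ;
        char buffer[4096] ;
        int numBytesRead, status ;

        status = dnetListen ("GLOC_CONV", 0, &listener) ;
        if (!(status & 1))  return ;            /* VMS: odd status = success. */

        status = dnetAnswer (listener, &client) ;
        if (!(status & 1)) {
            dnetDestroy (listener) ;
            return ;
        }

        status = dnetRead (client, 0, sizeof buffer, buffer, &numBytesRead) ;
        if (status & 1)
            printf ("Read %d bytes over DECnet\n", numBytesRead) ;

        dnetDestroy (client) ;
        dnetDestroy (listener) ;
    }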