A number of people have been pleading ignorance with regard to interfacing with the TPOCC parser (Steve, of course, doesn't need to plead). This memo will do two things. First, it gives a brief introduction to networking. Second, it shows how the parser and your process can route messages to each other.
Just kidding! Network communications are really very easy to understand. Simply think of the network as a telephone system. Two people can talk to each other over a phone line. Two processes can talk to each other over a network connection. One person calls the other on the phone; the other person answers at the other end. A client process calls a server process over the network; the server process answers at the other end. It's this simple:
                   phone connection
    Steve *---------------------------> Alex (with ear plugs)

                  network connection
    Client <---------------------------> Server
"*" would be an arrow if Alex could get a word in edgewise. And yes, Luan, there is a networking equivalent to 1-900-976-PARTY.
In UNIX, there is a socket at each end of a network connection. A socket is just a file descriptor that a process can read from or write to. Whatever the client writes to its socket can be read by the server from its socket. In the same manner, whatever the server writes to its socket can be read by the client from its socket.
The data transferred over the network can take any form. On TPOCC, we try to use Sun's eXternal Data Representation (XDR) protocol for passing data. XDR defines a standard layout for various data types that travel across the network. Integers are represented as 4-byte numbers and floating point values adhere to the IEEE floating point format. Character strings are represented as: (i) a 4-byte length field, (ii) the string data, and (iii) the fill bytes necessary to round the string length up to a multiple of 4 bytes. Standard representations for additional data types are also defined.
With the appropriate XDR software on both sides, two computers with dissimilar architectures can trade data back and forth. The XDR software on machine A translates the data from machine A's representation to XDR format. The data is transmitted in XDR representation to machine B. The XDR software on machine B then translates the data from XDR format to B's representation. The data is now ready for use on machine B. In one of those curious coincidences that sometimes occur in nature, most of the XDR data representations map directly into the corresponding Sun data representations.
Reads and writes on a socket are performed using the UNIX read(2) and write(2) system calls. One caveat regarding I/O on a socket: the number of bytes of data requested may not equal the actual number of bytes read/written in a single read()/write() call. For example, if you try to write out 100 bytes to the network, write() might return saying it only transferred 53 bytes. You must issue one or more additional write()'s to output the remaining 47 bytes. The network routines found in the TPOCC library (net_read(), net_write(), and the various net_read_xdr_type() and net_write_xdr_type()) automatically loop to input/output the requested amounts of data.
Reading and writing data between two processes over a network connection is no big deal; the hard part is establishing the initial connection between the two processes. Returning to the telephone analogy, the client process "calls" the server process which, in turn, must "answer" the call. Two TPOCC library routines, net_call() and net_answer(), implement these functions and greatly simplify the task of connecting a client and a server.
To "call" a server, a client process must know the computer on which the server can be found (the server's host) and the server's name. For example, the parser would contact Barbara's MCCSIM process as follows:
    char  *host_name = "blue_box" ;
    char  *server_name = "mccsim_by_barbara" ;
    int  server_socket ;

    if (net_call (host_name, server_name, &server_socket)) {
        vperror ("[%s] Error calling \"%s\":\"%s\".\n",
                 argv[0], host_name, server_name) ;
    }
If the MCCSIM process "answers" the parser's "call", net_call() returns a file descriptor in server_socket that the parser can use to read messages from and write messages to MCCSIM.
To pick up the receiver, the MCCSIM process simply does the following:
    char  *server_name = "mccsim_by_barbara" ;
    int  client_socket, server_socket = -1 ;

    if (net_answer (server_name, 99, &server_socket, &client_socket)) {
        vperror ("[%s] Error answering connection request.\n", argv[0]) ;
    }
If all goes well, net_answer() returns a file descriptor in client_socket that MCCSIM can use to read messages from and write messages to the parser.
At this point, you're probably thinking to yourself, "Wow! The rest of my life should be this easy! I'm gonna be sure to mention these library routines to Russ 'CEO' Talcott at the next ISI pizza bash." Whoa! Life is not quite so easy for servers, or even clients for that matter.
A client process is one that requests a service. A server process is one that performs a service. Much as a poor, hard-working, silently-suffering, X-Windows programmer may have a multitude of living-in-the-lap-of-luxury managers telling him what to do, a server process is liable to have many clients requesting its service.
Using net_answer(), a server process sits and listens at an assigned network port for connection requests from clients. The network port itself has a socket bound to it, the server_socket returned by net_answer(). When a network connection request from a net_call()'ing client is accepted, UNIX automatically creates a new socket, the client_socket returned by net_answer(). The listening port socket is dedicated to fielding connection requests from clients; the client socket is used for actual data communications between the server and its new client.
Having established a connection with a client, the server process must choose one among several paths to follow: it can service its clients one at a time, it can fork a child process to service each new client, or it can use select(2) to juggle old and new clients simultaneously.
Skeleton code for the above servicing paradigms is presented in the following paragraphs.
Sequential servicing of clients is the easiest method. Answer a client call, service the client, and, finally, hang up the call:
    char  *server_name = "mccsim_by_barbara" ;
    int  client_socket, server_socket = -1 ;

    for ( ; ; ) {                               /* FOR ever */
        if (net_answer (server_name, 99, &server_socket, &client_socket)) {
            vperror ("[%s] Error answering connection request.\n", argv[0]) ;
            continue ;
        }
        ...
        ... Service the client by net_read()ing and
        ... net_write()ing on client_socket.
        ...
        shutdown (client_socket, 2) ;
        close (client_socket) ;
    }
Forking a child process to service a client is a clean and frequently-used technique. A forked process inherits all the open file descriptors from its parent. The forked process closes its copy of the listening port socket and talks to the client over the data socket; the main server process closes its copy of the data socket and continues to listen at the listening port socket:
    char  *server_name = "mccsim_by_barbara" ;
    int  client_socket, server_socket = -1 ;

    for ( ; ; ) {                               /* FOR ever */
        if (net_answer (server_name, 99, &server_socket, &client_socket)) {
            vperror ("[%s] Error answering connection request.\n", argv[0]) ;
            continue ;
        }
        if (fork () == 0) {             /* Am I the forked process? */
            close (server_socket) ;
            ...
            ... Service the client by net_read()ing and
            ... net_write()ing on client_socket.
            ...
            shutdown (client_socket, 2) ;
            close (client_socket) ;
            exit (0) ;
        } else {                        /* No - I am the main process. */
            close (client_socket) ;
        }
    }
Trying to walk and chew gum at the same time might be difficult for some people, but simultaneously servicing old clients and listening for new clients is not really all that hard. The UNIX system call, select(2), allows you to monitor multiple I/O channels for the availability of input data (including connection requests). select() uses bit masks to specify the one or more file descriptors it must examine; macros for manipulating bit masks are defined in the system "types.h" file.
One of the arguments to select() is a timeout value. If no timeout value is specified, select() blocks until data becomes available on one of the I/O channels. A timeout value of zero causes select() to return immediately, in effect a quick poll to see if any of the I/O channels is ready for reading. Other timeout values limit the amount of time select() will wait if all channels are idle.
The following example illustrates how a server program could listen for and accept connection requests from new clients while servicing previously connected clients. Note how net_answer() is initially called to create the listening port socket without actually accepting any connection requests:
    #ifdef PRE_SUN_OS_40
    #    include  "fd.h"                /* File descriptor set definitions. */
    #else
    #    include  <sys/types.h>         /* System type definitions. */
    #endif

    char  *server_name = "mccsim_by_barbara" ;
    int  client, num_clients = 0 ;
    int  client_socket[FD_SETSIZE], server_socket = -1 ;
    fd_set  read_mask ;

    /* Create and bind a socket to the listening port. */

    if (net_answer (server_name, -99, &server_socket, NULL)) {
        vperror ("[%s] Error setting up server socket.\n", argv[0]) ;
        exit (errno) ;
    }

    /*********************************************************************
        Field connection requests from new clients and service old clients.
    *********************************************************************/

    for ( ; ; ) {                               /* FOR ever */

        /* Monitor all the sockets for input. */

        FD_ZERO (&read_mask) ;
        FD_SET (server_socket, &read_mask) ;
        for (client = 0 ;  client < num_clients ;  client++)
            FD_SET (client_socket[client], &read_mask) ;

        while (select (FD_SETSIZE, &read_mask, NULL, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue ;      /* SELECT interrupted by signal - try again. */
            vperror ("[%s] Error checking network for input.\nselect: ",
                     argv[0]) ;
            exit (errno) ;
        }

        /* If a new client is requesting a connection, then accept the
           request and add the client to the list of clients. */

        if (FD_ISSET (server_socket, &read_mask)) {
            if (net_answer (server_name, 99, &server_socket,
                            &client_socket[num_clients++])) {
                vperror ("[%s] Error answering connection request.\n",
                         argv[0]) ;
                num_clients-- ;
            }
        }

        /* Service any old clients that need to be serviced. */

        for (client = 0 ;  client < num_clients ;  client++) {
            if (FD_ISSET (client_socket[client], &read_mask)) {
                ...
                ... Service the client by net_read()ing and
                ... net_write()ing on client_socket[client].
                ...
            }
        }

    }
select(), as used in the examples above, blocks until data is received on one of the file descriptors. This, of course, would be a problem if your program must also react to events outside the I/O system. For example, earlier versions of the events server had to (i) listen for connection requests from client display processes, (ii) read event class messages from connected clients, and (iii) read event logger messages from a message queue. Message queues are not part of the UNIX I/O system and, therefore, can not be monitored by select().
Unfortunately, we are not using the VAX/VMS (Valuable, Mature, Sophisticated) operating system, otherwise we'd have AST's and event flags at our disposal. All you have available under UNIX is signals and semaphores. If data comes in on any one of your 87 I/O channels, all you'll get is a faceless SIGIO interrupt - you'll have to poll all 87 file descriptors to determine which one to read. And the only way to be alerted to the presence of new messages in a message queue is to depend on the sending process to issue a UNIX signal or to set a semaphore.
One scheme for monitoring multiple types of events is for the process to wait on an event semaphore. Whenever an event occurs, the semaphore should be signalled somehow. When the process' wait on the event semaphore completes, it then goes out and polls all of its event sources.
How is the event semaphore set? In the case of message queue input and similar events, the sending process can signal the receiving process' event semaphore. In the case of I/O system events, a SIGIO signal handler in the receiving process can field the SIGIO interrupt and signal the event semaphore.
The following code fragments were taken from the now-defunct Router server. The main router process, whose code is shown below, monitored (i) its input message queue, (ii) its network server port, and (iii) standard input, for "connection requests". Whenever a connection request was received, the server would fork a child router process to act as a bridge between a message queue link and a network connection.
    ...
    #include  <fcntl.h>                 /* File control definitions. */
    #include  <signal.h>                /* Signal definitions. */
    ...

    main ()
    {
        ...
    /**************************************************************************
        Create an input message queue and an event semaphore for the main
        router process.  Set up a network socket to receive connection
        requests.  Configure the socket and standard input so that incoming
        requests will cause the I/O handler to be invoked.  (The event
        semaphore, signalled by the I/O handler or by other processes sending
        messages, enables the router to monitor the network, standard input,
        and its message queue simultaneously.)
    **************************************************************************/

        if (create_msq (router_key, &my_input_msq))
            vperror ("[%s] Error creating input message queue.\n", pname) ;

        if (create_sem (router_key, &my_event_sem))
            vperror ("[%s] Error creating event semaphore.\n", pname) ;

        if (net_answer (server_name, -99, &server_socket, &client_socket))
            vperror ("[%s] Error setting up server socket.\n", pname) ;

    /* Set up a handler to receive SIGIO interrupts (generated by file I/O)
       and configure the input streams (network and standard input) to
       generate these interrupts.  The FCNTL parameters are described in
       FCNTL(2V) in the UNIX documentation.  NOTE: FCNTL calls vary between
       operating systems, and even between Sun OS 4.0 and Sun OS 3.4. */

        saved_IO_handlers = set_handler (IO_handler, SIGIO, 0) ;

        if (fcntl (server_socket, F_SETOWN, pid))
            vperror ("[%s] Error redirecting network I/O interrupts.\nfcntl: ",
                     pname) ;
        if (fcntl (server_socket, F_SETFL, FASYNC))
            vperror ("[%s] Error enabling asynchronous network I/O.\nfcntl: ",
                     pname) ;
        if (fcntl (fileno (stdin), F_SETOWN, pid))
            vperror ("[%s] Error redirecting TTY I/O interrupts.\nfcntl: ",
                     pname) ;
        if (fcntl (fileno (stdin), F_SETFL, FASYNC))
            vperror ("[%s] Error enabling asynchronous TTY I/O.\nfcntl: ",
                     pname) ;

    /**************************************************************************
        Wait for a "connection request" (via the network or via the message
        queue) and accept it.  Then fork a separate process to handle the
        connection and go back to waiting.
    **************************************************************************/

        for ( ; ; ) {

            if (wait_on_flag (&my_event_sem, -1)) {
                vperror ("[%s] Error waiting on event semaphore.\n", pname) ;
                exit (errno) ;
            }
            ...
            ... Poll the input message queue, standard input,
            ... and the server's listening port.  If a connection
            ... request was received, then fork a process to
            ... handle the connection.
            ...
            if (fork () == 0) {         /* Am I the forked process? */
                reset_handlers (saved_IO_handlers) ;
                close (server_socket) ;
                ...
                ... A child router process simply acts as a go-between
                ... between a message queue and a network connection.
                ... Anything read from one gets written out to the other.
                ...
                shutdown (client_socket, 2) ;
                close (client_socket) ;
                exit (0) ;
            } else {                    /* No - I am the main process. */
                close (client_socket) ;
            }

        }       /* Keep waiting for more connection requests. */

    }

    /**************************************************************************
        I/O Handler.  Invoked by the system in response to a SIGIO signal:

            IO_handler (sig, code, scp, addr)

        where (see SIGVEC(2) for detailed descriptions of the arguments):
            <sig>   is the signal causing invocation of the handler.
            <code>  provides additional information for certain signals.
            <scp>   points to the signal context prior to the signal.
            <addr>  provides additional address information for certain
                    signals.
    **************************************************************************/

    void  IO_handler (sig, code, scp, addr)

        int  sig, code ;
        struct  sigcontext  *scp ;
        char  *addr ;

    {
        signal_flag (&my_event_sem) ;
    }
The program above is really a good example of how not to design your program. An alternative to mixing message queues and network sockets is to use FIFO's instead of message queues. FIFO's, or named pipes, provide the same functionality as message queues, but with all the bells and whistles of normal file I/O. Their mention in the Sun documentation does not exactly stand out, so I think we should forgive the poor soul who, enamored of VMS mailboxes, pushed message queues.
When a client does a net_call(), how does the system map the host and server names to a server port on a given machine? When a server does a net_answer(), how does the system map the server's name to a server port? Quite simply, the system looks up the information in some system files.
The /etc/hosts system file contains a list of remote computer system names and their corresponding network addresses. Server names and their corresponding port numbers are listed in /etc/services. Note that host names and addresses are unique on a network. Server names and port numbers are local to a particular host; there is no requirement that they be unique across different hosts. A fully-qualified network address for a server is constructed from the server's host address and the server's port number.
net_call() looks up the host and server names in their respective files, constructs the server's complete network address from its host address and its port number, binds the full address to a socket, and then connects to the server through the socket. net_answer() looks up the local host name and the name of the server in the two files, constructs the server's complete network address, binds the address to a socket, and then listens to that socket for connection requests.
In order for a client to contact a server: (i) the server's host machine must appear in the client computer's /etc/hosts file, and (ii) the server's name and port number must be added to the /etc/services files on both the client and server computers. The /etc/hosts files are usually updated whenever a new computer is attached to the network, so you probably don't have to worry about that file. To add your server to the /etc/services file, edit the file, and position down near the bottom:
    ...
    #
    # TPOCC Services
    #
    ticl_server         2468/tcp
    netecho             2469/tcp
    router_server       2470/tcp
    tlmgen              2471/tcp
    ...
As you can see, NETECHO is assigned port number 2469 and uses the "tcp" network protocol. Simply pick an unused port number (0-1023 are reserved) and add your server to the file. Be sure to use the same port number on both the client and server machines.
Some of you are probably scratching your heads, "Why the need for identical entries in the /etc/services files on the client and server computers?" The reason is that net_call() is cheating a little bit by making these assumptions. In reality, the client's system should look up the server name in the server host's /etc/services file, not in its own services file. To do so, however, requires a distributed database of hosts and server names. UNIX provides this capability with the Yellow Pages network lookup service. Someday we can set this up, but the present method will do for now. Upgrading to Yellow Pages should not affect the application programs, since any changes will be hidden inside net_call() and net_answer().
Meng has been trying to punch holes in the TPOCC Library networking routines for the past year and never misses an opportunity to discredit them in front of my managers. So far, she has been unsuccessful. (Steve, Nancy, Barbara, and Gordon have had more success, however, with little quips like "Get any sleep last night, Alex?" and "I'm glad ISI's payroll program doesn't core dump on decimal points, Alex.") Trust me! The networking utilities work pretty well, IMHO. There are some pitfalls in UNIX networking, however, that you should watch out for.
First, as I mentioned much earlier, don't expect the amount of data that you write out to the network to bear any relationship to the amount of data the receiving process reads in at one shot. If you write two 100-byte records to a socket connection, the other program might read() the data in 34-, 147-, and 19-byte chunks. (And, by the same token, don't assume that only two write()'s were required to output the 200 bytes.)
net_read() and net_write() automatically loop to input or output the requested amount of data, so you usually don't have to be concerned with this problem at that level. You do, however, need to superimpose a protocol on the data transfers between two processes; they need to have some way of knowing exactly how much data to read or write and where the message boundaries are. If a client and server always exchange fixed-length, 20-byte messages, they'll always know how much data to read/write for a full message.
XDR comes in handy for more complicated data transfers. The parser, for instance, transmits and receives variable-length messages. Using XDR strings, each message is sent out as a 4-byte length field followed by the variable-length message text. The receiving program reads 4 bytes for the length field and then the specified amount of text; the reader always knows how much to read. The data and events servers utilize even more complex C structures composed of integers, reals, character strings, etc. and take advantage of a full range of XDR data types and record delimiters.
Speaking of XDR, the net_xdr_util utilities in the TPOCC Library provide a simplified network interface for some of the primitive XDR data types. If you or your requirements are more sophisticated, then you ought to check out Sun's XDR routines, including their XDR code generator, rpcgen - Nancy is the expert on these.
Some final caveats deal with terminating network connections. UNIX's low-level read() function returns zero on end-of-file (broken connection) or if there is no data available; using ioctl() to check how much data is waiting to be read suffers from the same ambiguity. The read() problem will only affect you if your socket is in non-blocking mode; ioctl() will have the problem whatever your mode is. It's possible to distinguish between end-of-file and no-data by performing a select() before doing the read() or ioctl(). If select() says there is data to read and the immediately following read()/ioctl() says there is no data, then you've lost your connection.
To terminate a connection, both the client and the server processes need to shutdown() and close() their respective sockets:
    shutdown (socket, 2) ;
    close (socket) ;
The shutdown() call discards any data in transit on the socket connection and prevents any more reads and writes. It's important that both sides of a connection shutdown and close their sockets. Suppose a server process goes down while the client process remains up with an open socket connection. If you try starting up the server again, its net_answer() will fail with a "bind: address already in use" message. This error will persist until the client's socket is shutdown and closed. If the client only closes its socket, the system will automatically shut it down - after a few minutes!
When Steve's not busy casting data types, he casts aspersions on the parser program. Officially titled the TPOCC Interactive Command Language (TICL) Interpreter, the parser is actually a very versatile program. The parser functions as both a client and a server:
    (Client)             (Server)
    Display <----------> Parser <----------> Applications
                        (Client)             (Server)
As a server, the parser receives and processes commands from Display. As a client, the parser sends out commands to application tasks for processing.
The parser is implemented as a network-based server. A parser server for the desired language (e.g., ICE, MCCSIM, etc.) is brought up and it continually listens for connection requests from display processes. When a connection request is received and accepted, the parser server forks a child process that is dedicated to serving the new connection. The parser server returns to listening for connection requests and the child process begins processing commands received over the connection.
Normally, the display process will need to establish a connection to a parser process whenever a new page containing an input command line is brought up. Establishing a connection is as simple as:
    if (net_call (host_name, server_name, &server_socket)) {
        vperror ("[%s] Error calling \"%s\":\"%s\".\n",
                 argv[0], host_name, server_name) ;
    }
where host_name is the name of the computer on which the parser server is running (NULL can be specified if the display and parser processes are running on the same machine); server_name is the name of the parser server (e.g., "parser_ice", "parser_mccsim", etc.). server_socket returns the network connection to the forked parser process.
All communications between Display and a forked parser process are done using Sun's XDR protocol for ASCII strings. The code shown below uses the TPOCC library routines for reading and writing XDR strings, but you could just as well use the standard Sun routines.
All operator input should be sent to the parser process as-is, i.e., "PAGE CMDSTATUS" should be sent over as "PAGE CMDSTATUS". The parser process does not currently handle the old-style bracketed display information, so don't send it. The TICL_TALK program uses the following bit of code to send operator input to the parser:
    length = strlen (input_buffer) ;
    if (net_write_xdr_string (server_socket, input_buffer, length)) {
        vperror ("[%s] Error writing message to network.\n", argv[0]) ;
    }
All output from the parser to the display process has a type tag enclosed in brackets at the beginning of each message (for example, "[OI] This text was entered by the operator."). The following message types are currently handled or envisioned for the future:
- [AS] - Prompt String
- The text of this message should be used to prompt the operator for input. The next operator input is treated by the parser as the operator's response to the prompt.
- [NI] - Network Input
- The text of this message was just received by the parser as input from a remote application process (e.g., the spacecraft command server, telemetry decom, etc.).
- [NO] - Network Output
- The text of this message was just output by the parser to a remote application process.
- [OI] - Operator Input
- The text of this message was just received by the parser as input from the operator (i.e., the operator just entered it at the keyboard).
- [OO] - Operator Output
- The text of this message is to be displayed on the operator's screen. This message type encompasses error messages, informational messages, etc.
- [PI] - Procedure Input (to be executed)
- The text of each line read from a STOL procedure file and executed by the parser is echoed to the display process with this type tag. Each PI message is actually output immediately before the line is executed.
- [PN] - Procedure Input (NOT to be executed)
- Not all lines read from a procedure file are executed, e.g., the unexecuted parts of IF-THEN-ELSE blocks, etc. Lines read from a procedure file but not executed are echoed to the display process with the "[PN]" tag.
- [PO] - Procedure Output
- This message class consists of operator output generated by STOL directives input from a procedure. This message class is not currently implemented and its potential usefulness is not clear.
- [XQ] - Executable Command
- The text of this message is a command to be executed by the display process, e.g., page, clear, etc.
The type tags can be used by the display process to channel the different messages to different parts of the operator display. Additional message classes will probably be created as we exercise the system and find the need for them.
When the display process receives an executable command ("[XQ]"), it should return a status message to the parser. For right now, use the old-style status message, "%STATUS error_number error_text". An error number of zero indicates no error.
The display process net_call()'s the parser server and the parser server must net_answer() Display's call. In a similar fashion, the parser server net_call()'s an application task and the application task must net_answer() Parser's call. As mentioned in the networking basics, the server programs (the parser and the application tasks) should be entered into the /etc/services files on the various machines.
To paraphrase an earlier example, the application tasks should answer as follows when the parser calls:
    char  *server_name = "NASCOM_slow_speed_interface" ;
    int  client_socket, server_socket = -1 ;

    if (net_answer (server_name, 99, &server_socket, &client_socket)) {
        vperror ("[%s] Error answering connection request.\n", argv[0]) ;
    }
server_socket returns a socket for the application task's listening port; this socket will be used by net_answer() the next time you call it. client_socket returns a socket connection to a parser; this socket is used for exchanging messages between the parser and the application.
Once I've got your ear, how can I make you understand what I'm saying? XDR ASCII strings, of course. To read a message from the parser:
    char  message[128] ;
    int  length ;

    length = sizeof message ;
    if (net_read_xdr_string (server_socket, message, &length)) {
        vperror ("[%s] Error reading message from parser.\n", argv[0]) ;
    }
    printf ("Message from Parser = %.*s\n", length, message) ;
To write a message to the parser:
    char  *message = "DUMP CORE" ;

    if (net_write_xdr_string (server_socket, message, strlen (message))) {
        vperror ("[%s] Error writing message to parser.\n", argv[0]) ;
    }
Once you've read a message from the parser, what do you do with it? There are a number of ways of decoding a character string: sscanf(), strtok(), getword(), ... Ask Alex if you need help. If you have an aversion to character strings (known in the medical literature as gould-weenia), talk to me. I'd prefer not to get involved in discussions such as "The one millisecond it takes to scan a page command from the parser is severely impacting the two-minute time it takes my program to bring up a display page." We can at least negotiate, however.
What commands will Parser be sending your application? Don't ask me - tell me. What commands can your application send to the parser? Anything listed in the SRD STOL Appendix and anything you tell me you want to send. I need to hear from you what commands you need implemented in the parser.
Whenever an application program receives an executable command, it should return a status message to the parser. For right now, send "%STATUS error_number error_text". An error number of zero indicates no error.
To summarize, your application task should be capable of the following actions, in approximately this order:
- net_answer() connection requests directed to your server.
- net_read_xdr_string() a command from the parser.
- net_write_xdr_string() a status message back to the parser.
- net_write_xdr_string() a command to the parser.
- net_read_xdr_string() a status message from the parser.
- shutdown() the listening port and parser connections.
- close() the listening port and parser connections.
Running the parser and your program together is possible now, but not particularly useful. The parser can establish connections with other processes and transmit and receive messages, but you're probably better off using a utility program, TALKNET, at this time. TALKNET (see my memo on the TPOCC utilities) provides a terminal interface to the network and can function as either a client or a server. I use it as a server to test out parser. You'll most likely use TALKNET as a client:
                 (tty)               (network)
    Operator <----------> TALKNET <----------> Application
To run TALKNET, first start up your application. If you want to see debug from your program, you should run it in another window or on another terminal. If your program doesn't output anything, you can just run it in the background. Then, start up TALKNET:
% talknet -xX [host_name] server_name
If not specified, host_name defaults to the machine you're running on. TALKNET will connect to the server, tell you so, and then sit waiting for input from you at the keyboard or from your application via the network. Input from you is written out to the network; input from the network is written out to your screen. The little "x" option instructs TALKNET to dump the network input to your screen in parallel hexadecimal and ASCII formats. The big "X" option tells TALKNET to write your keyboard input out to the network as an XDR ASCII string. Type "control-C" to exit TALKNET.
UNIX is a pretty complicated operating system, so it's understandable that Wind River's VxWorks falls a little bit short in the compatibility department. Thanks to Luan's valiant efforts, the net_util and net_xdr_util utilities have been ported to VxWorks. There are some differences between UNIX and VxWorks that you should be aware of, however.
select() under VxWorks does not have the full functionality of its UNIX cousin. Only the read bit mask is examined and waiting is implemented by periodic polling at a user-configurable rate. But select() should work well enough for any of the cases outlined in this memo.
Of more concern is the fact that VxWorks has no SIGIO signalling mechanism, so your options with regard to asynchronous I/O are rather limited. UNIX uses fcntl(2) to configure file descriptors for asynchronous I/O. There is no fcntl() in VxWorks and ioctl(2) does not have the needed capabilities.
Gordon has expressed some reservations about the need for VxWorks-resident applications to accept and service connections from multiple parsers. This might not be a major problem since the command controller parser will likely be the only parser talking to those applications.
In any case, Gordon suggested that there be a central "router" on the blue box that acts as the focal point of the communications between the parsers and the applications. A suitable moniker for this scheme would be "The Gordon Knot". [If you don't get it, the Gordian Knot refers to an ancient legend that said that anyone who could unravel Steve's spaghetti code would become the president (whether of ISI or of the USA nobody knows). Well, along comes Alexander the Great (no relation on either count), wielding a mighty VMS sword, and he made quick work of turning that dishevelled pasta into nicely-layered lasagna. The rest is history.]
Gordon's idea has a certain appeal and someone familiar with VxWorks ought to give some thought to it. While simple in concept, the "router" won't be a trivial program, as there must be some way of deciding which applications are to receive messages from which parser, and vice-versa.