WebSynth 
Audio 
Release 3.0


Introduction

Server Components

The Applet

Controlling the Applet:
    WebSynth API

The Player

WebSynth Live:
   
Single Session
   
Multiple Sessions

Notes

 

 

Server Components



The server architecture is composed of three main components. Each component may run on the same machine or be distributed across a LAN. Not all of the components are necessary for a live broadcast to be successful. Any of the application components may be run with command line switches to modify its behaviour. Use the -h switch to get a list of available options.

1. LiveEncoder (Java/JNI Solaris Sparc and Win32 - necessary)


Purpose
This is the recording/encoding engine, to be run on Solaris Sparc and/or Win32 machines equipped with audio card(s). It may also be used to bridge GSM data from a socket, in either client or server mode. Once run, it will start sending multicast packets through the LAN.
This release supports both 14.4 and 28.8 quality modes, but not multiple sessions at 28.8.
You will need multicast packets to be enabled on the network card and on the local network.
You should disable forwarding of these packets on the gateway to the Internet and on any other unwanted route.
You may have multiple instances of this component running on the same machine or on different machines, broadcasting different sessions.

 

Syntax
java com.sonicle.wsaudio.LiveEncoder [options]

 

Options Reference

-n name

Name of the current live session (default name is default). You may specify a new name if you want to have multiple live sessions. This option is not necessary for a single live session. The name must be resolved by the LiveSessionServer. If you provide a wrong name or the LiveSessionServer is not running, you will get an error message. Read below for further information regarding the LiveSessionServer.
You may use the special session name default28 to start a 28.8 quality session. The LiveSessionServer will not be queried in this case, and a default multicast IP/port will be used. This release does not support multiple sessions at 28.8 quality.

-p port

You must specify this option only if you changed the default port of the LiveSessionServer (default is 9123). Be careful to use the same port as a running LiveSessionServer. If you provide a wrong port or the LiveSessionServer is not running, you will get an error message. Read below for further information regarding the LiveSessionServer.

-t level

Set the silence detection threshold (default is 256). The value range is 0-32767. This level is also affected by the volume system settings of the audio card. Set to 0 to disable silence detection. A value of 32767 means a muted session.

-v

Set verbose output (default disabled). Useful for debugging.

-d name
-i devport

Audio device name for the live session (default is the system default).
Audio device port: 0=microphone, 1=line in (default is 0).
These parameters are valid on Solaris only, and let you specify different /dev/audio devices. On Win32 you will have to set them through the system settings.

-f file.sa

File name of a recorded session (default is the live session).

-b port
-b addr:port

Activate a bridge listening at port (server bridge for GSM packets).
Activate a bridge connecting to addr:port (client bridge for GSM packets).

-h

Print help.

 

Usage Notes

The -i option only makes sense when the sound card device is used. The -b, -f, and -d options cannot be used together.

 

Example

You can run a live recording session with default parameters with:
java com.sonicle.wsaudio.LiveEncoder

You can view help with:
java com.sonicle.wsaudio.LiveEncoder -h
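
You can run a named live session with a higher silence detection threshold (the session name mysession is only an example and must be known to a running LiveSessionServer):
java com.sonicle.wsaudio.LiveEncoder -n mysession -t 512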

 

2. LiveBridge (Java/JNI Solaris Sparc and Win32 - optional)


Purpose
This component substitutes for the LiveEncoder when the audio source is not a sound card but a network service capable of transmitting uLaw/ALaw 8KHz data in TCP or UDP packets. This component routes these packets into the native encoding engine, pushing them out the same way as a LiveEncoder does. It is to be run on Solaris Sparc and/or Win32 machines. Once run, it will start sending multicast packets on the local network.
Because uLaw 8KHz data would be too poor for 28.8 encoding, this release supports only 14.4 mode bridging.
You will need multicast packets to be enabled on the network card and on the local network.
You should disable forwarding of these packets on the gateway to the Internet and on any other unwanted route.
You may have multiple instances of this component running on the same machine or on different machines, broadcasting different sessions.

Source format specification

The format of the source audio data is the same for TCP and UDP mode. While in UDP mode you construct actual UDP packets, in TCP mode you stream these packets into the TCP socket.
Each packet must be 168 bytes long. The first 8 bytes contain header information, the remaining 160 bytes are uLaw/ALaw audio data.

[0-3  ]*: Reserved for future use (any value)
[4-7  ]*: Packet count (0=start of transmission)
[8-167] : uLaw/ALaw data

*
These are unsigned integer values to be written in reverse order (least significant byte first). They may be considered as Java long values, as they have to be assigned to Java long variables to be manipulated. You may use the following function as a reference for filling these bytes in Java:

/* Fill the specified byte buffer at the specified position
   using the long value as a 4 bytes unsigned integer value */
void writeLongAsU32(byte b[], int i, long l) {
    int uc, uc2, uc3, uc4;
    uc4=(int) ((l & 0xff000000) >> 24);
    uc3=(int) ((l & 0x00ff0000) >> 16);
    uc2=(int) ((l & 0x0000ff00) >> 8);
    uc =(int) (l & 0x000000ff);
    b[i+0]=(byte)uc;
    b[i+1]=(byte)uc2;
    b[i+2]=(byte)uc3;
    b[i+3]=(byte)uc4;
}
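
As an illustration, the following self-contained sketch uses the function above to build 168-byte packets and streams one second of uLaw silence to a LiveBridge assumed to be running in TCP server mode; the host name bridgehost and port 9090 are placeholders for your own bridge address:

import java.io.OutputStream;
import java.net.Socket;

public class SendBridgePackets {

    /* Fill the specified byte buffer at the specified position
       using the long value as a 4 bytes unsigned integer value */
    static void writeLongAsU32(byte b[], int i, long l) {
        int uc, uc2, uc3, uc4;
        uc4=(int) ((l & 0xff000000) >> 24);
        uc3=(int) ((l & 0x00ff0000) >> 16);
        uc2=(int) ((l & 0x0000ff00) >> 8);
        uc =(int) (l & 0x000000ff);
        b[i+0]=(byte)uc;
        b[i+1]=(byte)uc2;
        b[i+2]=(byte)uc3;
        b[i+3]=(byte)uc4;
    }

    public static void main(String args[]) throws Exception {
        // Connect to a LiveBridge started in TCP server mode, e.g.
        // java com.sonicle.wsaudio.LiveBridge -tcp 9090
        Socket socket = new Socket("bridgehost", 9090);
        OutputStream out = socket.getOutputStream();

        byte packet[] = new byte[168];
        // 50 packets of 160 samples = one second of 8KHz audio;
        // a real source would pace packets at about one every 20 ms.
        for (long count = 0; count < 50; count++) {
            writeLongAsU32(packet, 0, 0);      // bytes 0-3: reserved (any value)
            writeLongAsU32(packet, 4, count);  // bytes 4-7: packet count (0 = start of transmission)
            for (int n = 8; n < 168; n++)      // bytes 8-167: 160 bytes of uLaw data (silence here)
                packet[n] = (byte) 0xFF;
            out.write(packet);                 // stream the whole 168-byte packet into the TCP socket
        }
        out.flush();
        socket.close();
    }
}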


Notes

The bridging architecture implements software logic that is inherently present when working with a true sound card. Because the audio source is a network transmission, no assumption is made about the reliability of the source. The timings of the source packets are tracked and checked in real time against acceptable delay bounds (you may change these bounds through command line switches). Packets may be discarded or queued depending on their frequency, both at the LiveBridge entry point (the source is not reliable) and at the front-end end point (the receiving applet is reading too slowly).
While the entry point is detected and corrected very quickly, the end point may require seconds before it knows that the applet is slow (because of network buffering through the Internet). When this happens, the end point will stop feeding the applet until the entire output buffer is empty, so input packets in between are discarded and the applet will restart synchronized.

The packet count plays a special role in this scenario. The start of transmission (packet count = 0) is a signal to the Bridge to reset its internal logic. When this is detected, any subsequent packet will be synchronized with respect to this moment.
 

Syntax
java com.sonicle.wsaudio.LiveBridge [options]

 

Options Reference

-n name

Name of the current live session (default name is default). You may specify a new name if you want to have multiple live sessions. This option is not necessary for a single live session. The name must be resolved by the LiveSessionServer. If you provide a wrong name or the LiveSessionServer is not running, you will get an error message. Read below for further information regarding the LiveSessionServer.

-p port

You must specify this option only if you changed the default port of the LiveSessionServer (default is 9123). Be careful to use the same port as a running LiveSessionServer. If you provide a wrong port or the LiveSessionServer is not running, you will get an error message. Read below for further information regarding the LiveSessionServer.

-t level

Set the silence detection threshold (default is 256). The value range is 0-32767. This level is also affected by the volume system settings of the audio card. Set to 0 to disable silence detection. A value of 32767 means a muted session.

-md millis

Maximum delay for output packets (default is 1000 milliseconds). This is the time tracking parameter that affects the acceptable error bounds of the input source.

-ulaw
-alaw

Input is uLaw data.
Input is ALaw data.

-tcp port

-tcp host:port

Start bridging in TCP server mode (default). Once run, it listens on the specified port for a TCP socket connection. After the connection is established, it starts encoding uLaw data coming from the socket. When the connection is dropped by the remote host, it automatically restarts listening for a new connection.
Start bridging in TCP client mode. Once run, it tries to connect to the specified host:port via a TCP socket. After the connection is established, it starts encoding uLaw data coming from the socket. When the connection is dropped by the remote host, it exits.

-udp host
-udp host:port

Start bridging in UDP mode. If you do not specify a port number, any free available port will be automatically chosen. Once run, it starts receiving UDP packets coming from the specified host. Each packet's uLaw data is queued internally, and a separate thread flushes this data into the encoding engine. Each packet has to be 164 bytes long. The first 4 bytes must contain a Java long value to be used for packet numbering. The other 160 bytes are interpreted as uLaw 8KHz data.

-v

Set verbose output (default disabled). Useful for debugging.

-h

Print help.

 

Example

You can run a live bridging session with default parameters with:
java com.sonicle.wsaudio.LiveBridge

You can view help with:
java com.sonicle.wsaudio.LiveBridge -h

You can run a live bridging session in UDP mode with:
java com.sonicle.wsaudio.LiveBridge -n mysession -udp ivr:8765

You can run a live bridging session in TCP client mode with:
java com.sonicle.wsaudio.LiveBridge -n mysession -tcp ivr:9090
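
You can run a live bridging session in TCP server mode, waiting for the audio source to connect on a port of your choice (9090 here is only an example):
java com.sonicle.wsaudio.LiveBridge -n mysession -tcp 9090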

 

3. The Front end Server (Pure Java, necessary)


Purpose
This component bridges the multicast packets on the local network through pure socket connections to the Internet. These sockets will be initiated by the WebSynth Audio Applet, through standard HTTP requests.
It supports 28.8 quality transmissions. This mode is activated when the applet requests a live session named default28.
You may have multiple instances of this component running on different machines. It is important that the applet contacting a front end is downloaded from the same front end, because of security restrictions. You may implement each front end in two ways:

  • LiveServlet
    If your front end server already runs an HTTP server with a servlet engine, you will need to make the wsaudio.jar file visible to this engine. Make sure the /servlet directory is assigned to servlets on your HTTP server. Put the applet jar file into the HTTP documents directory to feed the WSAudio applet. Once running in a browser, it will automatically contact the correct servlet on your site.

    Syntax
    http://host/servlet/com.sonicle.wsaudio.LiveServlet
    (This will be actually issued by the WebSynth Audio Applet)
     
  • LiveServer
    If your front end server does not run any HTTP server, you will need to run this component. It implements a simplified HTTP server, so it is capable of serving a directory structure to the web, but it also directly detects WebSynth Audio applet requests for live streams to start bridging packets. Put the applet jar file into the HTTP documents directory.

    Syntax
    java com.sonicle.wsaudio.LiveServer [options]

    Options Reference
     

-p port

Port number to listen on (default 80).

-r root

Document root directory (default .)

-i index

Default index file name (default index.html)

-s dir

Servlet directory (default /servlets)

-v

Set verbose output (default disabled). Useful for debugging.

-h

Print help.


Notes about Servlet

LiveServlet looks for one parameter in the servlet engine, sessionserverport, in which you may specify a different port for session name resolution (LiveSessionServer).
Note that servlet parameters are usually available only to alias instances of the servlet. When assigning a parameter directly to the servlet, the servlet engine may choose not to pass it to the running servlet.
In this case, use your servlet engine administration tool to:
- Create an alias of com.sonicle.wsaudio.LiveServlet and call it WebSynthServlet (or any other name)
- Assign the parameter to the alias
- Remember to use the SERVLETNAME parameter on the applet, so that it will use the correct alias (PARAM NAME=SERVLETNAME VALUE=WebSynthServlet).

Example
You can view help with:
java com.sonicle.wsaudio.LiveServer -h

You can run on a different port:
java com.sonicle.wsaudio.LiveServer -p 8080
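
You can serve a specific document root (the path below is only an example):
java com.sonicle.wsaudio.LiveServer -p 8080 -r /home/wsaudio/htdocs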

 

4. LiveSessionServer (Pure Java, optional)


Purpose
This component simplifies the management of multiple sessions of live audio, so it is necessary only when running multiple sessions. All of the other components will search for a session server on the local network when referencing live sessions by name, whenever this name is not "default" or "default28".
This server assigns session names to multicast IP/ports, creating the correct link between the Encoder and the front end.
You load a text file with these associations once and run the Session Server. Once these associations are known to the Session Server, you can always refer to a session by name without worrying about IPs and ports.
This release does not support naming of 28.8 quality sessions. Any session resolved by the Session Server is considered a 14.4 session.

 

Syntax
java com.sonicle.wsaudio.LiveSessionServer [options]


Options Reference

-f file

Set the properties file to load (default is './sessions.properties'). See the Usage Notes below for how to create the properties file.

-p port

Use a specific listen port (default is 9123). You may change the default number if you want to run the LiveSessionServer on a different UDP broadcast port or if the default port is already in use. Remember to run LiveEncoder with the same port (see the -p option of LiveEncoder).

-v

Set verbose output (default disabled). Useful for debugging.

-h

Print help.

 

Usage Notes

LiveSessionServer is programmed with a properties file. The properties file is a text file with the following format:

ConferenceName1=MulticastIPAddr:MulticastIPPort
...
ConferenceNameN=MulticastIPAddr:MulticastIPPort
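
For example, a file defining two sessions could look like this (the session names, multicast addresses and ports are only placeholders to adapt to your own network):

news=225.1.2.3:7770
music=225.1.2.4:7772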
 

 

Example

You can view help with:
java com.sonicle.wsaudio.LiveSessionServer -h

You can load your properties file with:
java com.sonicle.wsaudio.LiveSessionServer -f myfile.props
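
You can also run the session name resolution on a non-default port (9200 here is only an example; remember to start LiveEncoder and LiveBridge with the same -p value):
java com.sonicle.wsaudio.LiveSessionServer -f myfile.props -p 9200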

 

5. LiveMulticastPlayer (Pure Java, optional)


Purpose
This component is a tool for monitoring live sessions running on the local network. You may use it on any Java-enabled machine equipped with an audio card, to listen to specific multicast transmissions. It will help you understand what the front end is receiving before sending data to applets.

 

Syntax
java com.sonicle.wsaudio.LiveMulticastPlayer MulticastIP port [type]


Options Reference

MulticastIP

The IP of any transmission running from a LiveEncoder or LiveBridge

port

The port of any transmission running from a LiveEncoder or LiveBridge

[type]

The quality type of the transmission (0=14.4 , 1=28.8 , default=0)
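
Example

You can monitor a 14.4 transmission with (the multicast IP and port below are only examples; use the values of your own LiveEncoder or LiveBridge session):
java com.sonicle.wsaudio.LiveMulticastPlayer 225.1.2.3 7770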

 

6. Notes

The server options are based on multicast and broadcast. Accepted values are:

  • Broadcast port range is 1-65535.
  • Multicast port range is 1-65535.
  • Multicast IP address range is 224.0.0.1 - 239.255.255.255


WebSynth Audio is a product of Sonicle (tm), Srl.
Copyright © 1998, 1999, Sonicle, Srl.
20090 Via Enrico Fermi, Assago, Milano ITALY.
All rights reserved.