Broadcasting sound over the network using Java

    I wanted to experiment with transmitting sound over the network, and I chose Java for this.
    As a result, I wrote three components: a transmitter for Java SE, a receiver for Java SE, and a receiver for Android.

    In Java SE, classes from the javax.sound.sampled package were used to work with sound; in Android, the classes android.media.AudioFormat, android.media.AudioManager and android.media.AudioTrack were used.
    For networking, the standard Socket and ServerSocket classes were used.

    Using these components, I successfully conducted a voice communication session between the Russian Far East and the Netherlands.

    One more possible application: if you install a virtual sound card, for example Virtual Audio Cable, you can stream music to other devices and thus listen to it simultaneously in several rooms of an apartment (given enough devices).


    1. The transmitter.



    The method of broadcasting sound is trivial: read a stream of bytes from the microphone and write it to the output stream of the socket.

    Working with the microphone and transmitting data over the network happen in separate threads:

    mr = new MicrophoneReader();
    mr.start();
    ServerSocket ss = new ServerSocket(7373);
    while (true) {
    	// One Sender thread per connected client
    	Socket s = ss.accept();
    	Sender sndr = new Sender(s);
    	senderList.add(sndr);
    	sndr.start();
    }
    


    The microphone thread:

    public void run() {
    	try {
    		DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    		microphone = (TargetDataLine) AudioSystem.getLine(info);
    		microphone.open(format);
    		data = new byte[CHUNK_SIZE];
    		microphone.start();
    		while (!finishFlag) {
    			synchronized (monitor) {
    				// Wake the senders only when all of them are waiting for data
    				if (senderNotReady == sendersCreated) {
    					monitor.notifyAll();
    					continue;
    				}
    				numBytesRead = microphone.read(data, 0, CHUNK_SIZE);
    			}
    			System.out.print("Microphone reader: ");
    			System.out.print(numBytesRead);
    			System.out.println(" bytes read");
    		}
    	} catch (LineUnavailableException e) {
    		e.printStackTrace();
    	}
    }
    


    UPD: it is important to choose the CHUNK_SIZE parameter correctly. If the value is too small, stuttering will be heard; if it is too large, the sound delay becomes noticeable.
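    To put numbers on that trade-off: each chunk corresponds to a fixed amount of audio time, determined by the format. A small sketch (the format matches the one chosen in section 4; the CHUNK_SIZE value here is only illustrative, not the one from the original code):

```java
public class ChunkLatency {
    public static void main(String[] args) {
        int sampleRate = 16000;   // Hz, as chosen in section 4
        int channels = 2;         // stereo
        int bytesPerSample = 2;   // 16-bit PCM

        int chunkSize = 8192;     // illustrative CHUNK_SIZE, in bytes

        // Raw PCM byte rate of the stream
        int bytesPerSecond = sampleRate * channels * bytesPerSample;

        // How many milliseconds of audio one chunk holds
        double latencyMs = 1000.0 * chunkSize / bytesPerSecond;

        System.out.printf("Chunk of %d bytes = %.1f ms of audio%n",
                chunkSize, latencyMs);
    }
}
```

    With these numbers, one chunk holds 128 ms of audio, which is roughly the minimum delay added by buffering on the transmitter side.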

    The sender thread:

    public void run() {
    	try {
    		OutputStream os = s.getOutputStream();
    		while (!finishFlag) {
    			synchronized (monitor) {
    				// Announce that this sender is waiting for the next chunk
    				senderNotReady++;
    				monitor.wait();
    				os.write(data, 0, numBytesRead);
    				os.flush();
    				senderNotReady--;
    			}
    			System.out.print("Sender #");
    			System.out.print(senderNumber);
    			System.out.print(": ");
    			System.out.print(numBytesRead);
    			System.out.println(" bytes sent");
    		}
    	} catch (Exception e) {
    		e.printStackTrace();
    	}
    }
    


    Both thread classes are nested; the outer-class variables data, numBytesRead, senderNotReady, sendersCreated and monitor should be declared volatile.
    The monitor object is used to synchronize the threads.
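    For clarity, the shared state of the outer class might be declared like this (a sketch only; the field names come from the snippets above, while the class name and the CHUNK_SIZE value are assumptions):

```java
public class AudioTransmitter {
    static final int CHUNK_SIZE = 8192;   // illustrative; tune per the note above

    // State shared between the MicrophoneReader and the Sender threads;
    // declared volatile so writes are visible across threads
    volatile byte[] data;
    volatile int numBytesRead;
    volatile int senderNotReady;
    volatile int sendersCreated;

    // All threads synchronize and wait/notify on this single object
    final Object monitor = new Object();

    // Nested thread classes (bodies shown in the snippets above)
    class MicrophoneReader extends Thread { /* ... */ }
    class Sender extends Thread { /* ... */ }
}
```

    Because the nested classes are non-static, both threads see the same outer instance and therefore the same data buffer and monitor.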

    2. Receiver for Java SE.



    The method is also trivial: read the stream of bytes from the socket and write it to the audio output.

    try {
    	InetAddress ipAddr = InetAddress.getByName(host);
    	Socket s = new Socket(ipAddr, 7373);
    	InputStream is = s.getInputStream();
    	DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, format);
    	speakers = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
    	speakers.open(format);
    	speakers.start();
    	int numBytesRead;
    	byte[] data = new byte[204800];
    	// Stop when the transmitter closes the connection (read returns -1)
    	while ((numBytesRead = is.read(data)) != -1) {
    		speakers.write(data, 0, numBytesRead);
    	}
    } catch (Exception e) {
    	e.printStackTrace();
    }
    


    3. Receiver for Android.



    The method is the same.
    The only difference is that instead of javax.sound.sampled.SourceDataLine we use android.media.AudioTrack.
    Keep in mind that in Android, network operations must not be performed on the main application thread.
    I decided not to bother with creating a Service; the worker thread is started directly from the main Activity.

    toogle.setOnClickListener(new View.OnClickListener() {
    	@Override
    	public void onClick(View v) {
    		if (!isRunning) {
    			isRunning = true;
    			toogle.setText("Stop");
    			rp = new ReceiverPlayer(hostname.getText().toString());
    			rp.start();
    		} else {
    			toogle.setText("Start");
    			isRunning = false;
    			rp.setFinishFlag();
    		}
    	}
    });
    


    The worker thread itself:

    class ReceiverPlayer extends Thread {
    	volatile boolean finishFlag;
    	String host;
    	public ReceiverPlayer(String hostname) {
    		host = hostname;
    		finishFlag = false;
    	}
    	public void setFinishFlag() {
    		finishFlag = true;
    	}
    	public void run() {
    		try {
    			InetAddress ipAddr = InetAddress.getByName(host);
    			Socket s = new Socket(ipAddr, 7373);
    			InputStream is = s.getInputStream();
    			int bufferSize = AudioTrack.getMinBufferSize(16000, 
    					AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    			int numBytesRead;
    			byte[] data = new byte[bufferSize];
    			AudioTrack aTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 
    						16000, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
    						bufferSize, AudioTrack.MODE_STREAM);
    			aTrack.play();
    			while (!finishFlag) {
    				numBytesRead = is.read(data, 0, bufferSize);
    				aTrack.write(data, 0, numBytesRead);
    			}
    			aTrack.stop();
    			s.close();
    		} catch (Exception e) {
    			StringWriter sw = new StringWriter();
    			PrintWriter pw = new PrintWriter(sw);
    			e.printStackTrace(pw);
    			Log.e("Error",sw.toString());
    		}
    	}
    }
    


    4. Note on audio formats.



    Java SE uses the javax.sound.sampled.AudioFormat class.

    In Android, audio parameters are passed directly to the constructor of the android.media.AudioTrack object.

    Here are the constructors of these classes as used in my code.

    Java SE:

    AudioFormat (float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian)
    Constructs an AudioFormat with a linear PCM encoding and the given parameters.

    Android:

    AudioTrack (int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)

    For successful playback, the corresponding receiver and transmitter parameters must match: sampleRate / sampleRateInHz, sampleSizeInBits / audioFormat, and channels / channelConfig.

    In addition, the mode value for Android must be set to AudioTrack.MODE_STREAM.

    I also established experimentally that for successful playback on Android, the data must be transferred in signed little-endian format, that is:
    signed = true; bigEndian = false.
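    Concretely, "signed little-endian" means each 16-bit sample is sent low byte first, in two's-complement form. A quick illustration using the standard java.nio classes (not taken from the original code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        short sample = -12345;   // a signed 16-bit PCM sample (0xCFC7)

        // Pack the sample the way the Android receiver expects it on the wire
        ByteBuffer buf = ByteBuffer.allocate(2).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort(sample);
        byte[] bytes = buf.array();

        // Low byte comes first in little-endian order
        System.out.printf("0x%02X 0x%02X%n", bytes[0] & 0xFF, bytes[1] & 0xFF);
        // prints: 0xC7 0xCF
    }
}
```

    If the byte order is wrong, every sample is misinterpreted, which is exactly the kind of noise described in the testing section below.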

    As a result, the following formats were selected:

    // Java SE:
    AudioFormat format = new AudioFormat(16000.0f, 16, 2, true, false);
    // Android: 
    AudioTrack aTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 
    						16000, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
    						bufferSize, AudioTrack.MODE_STREAM);
    


    5. Testing.



    Between a laptop running Windows 8 and a desktop running Debian Wheezy, everything worked right away without any problems.

    The receiver on Android initially produced only noise, but this was resolved after choosing the correct signed and bigEndian parameters for the audio format.

    On the Raspberry Pi (Raspbian Wheezy), stuttering was initially heard; a workaround was needed in the form of installing the Avian lightweight Java virtual machine.

    I wrote the following startup script:

    case "$1" in
        start)
            java -avian -jar jAudioReceiver.jar 192.168.1.50 &
            echo "kill -KILL $!">kill_receiver.sh
            chmod +x kill_receiver.sh
            ;;
        stop)
            ./kill_receiver.sh
            ;;
    esac
    


    The source code for all components is available here:

    github.com/tabatsky/NetworkingAudio

