A Raspberry Pi - thing of beauty

While I would be the last one to identify as an audiophile, I do have certain audio-centric pet peeves that tend to set me on edge. Out-of-sync audio is one of those. For years I simply ran dedicated wires whenever I wanted multiple audio sinks to stay in sync (not often during my dorm days, but it happened on occasion). It was messy, but I didn’t have to worry about audio being out of phase, and smaller living quarters necessitated a higher tolerance for technological clutter.

Now that I’m the proud renter of a multi-room apartment, multi-room audio is a must - preferably without unsightly wires snaking around corners and over door frames. Drilling is also out of the question, leading inevitably to my gazing into the electromagnetic abyss (with only a touch of apprehension).

I tinkered around with Bluetooth for a bit, but not only did I have issues with multiple devices paired at the same time, the audio was never in sync and the entire system felt no more reliable than a house of cards. The poor man’s streaming service IceCast did OK, but between client-side configuration, unreliable failure recovery, and variations in buffering it was quickly relegated to the “cool, but not it” bin. I still use it as a shared radio stream so my fiancée and I can listen to the same music throughout the day, but onwards my search continued.

And then came RTP. Imagine a protocol specially designed for the broadcast of media, with mechanisms for ensuring that content is played by multiple devices at the same time. Sounds pretty sweet, right? That’s what I thought too. With a little help from Pulseaudio, MPD, and a Raspberry Pi or two, multi-room audio is a snap.


Materials I used

  • (Optional) MPD - Music Player Daemon
  • Pulseaudio - Standard on most common Linux distros
  • Raspberry Pi - I used 3B, but any should work
  • (Optional) Dedicated Ethernet/wireless network
  • USB Sound Adapter

I’m using MPD to queue up and play media, which is then fed through a custom Pulseaudio sink set to broadcast RTP traffic throughout a dedicated LAN. Any device on the network can then receive and process the media. MPD (more on what it does below) is not required - almost any media program can be configured to use the new sink, though I’ll be talking specifically about MPD in this article. I used a Raspberry Pi as my media receiver because of its price, ease of configuration, and onboard WiFi - plus I had one lying around. I used the 3B model, but I imagine any other would also work (the Zero would be a convenient solution once any additional hardware is installed). I recommend using a dedicated USB audio adapter, as I had buffering issues with the onboard DAC.

As mentioned above, my setup uses RTP broadcast (vs unicast), which can create a fair amount of network traffic. If you are on an entirely wired network you’ll probably be fine, but if you intend to broadcast RTP traffic over your main WiFi network then you’re going to have a bad time. During RTP broadcast each packet is received by each device on the network, meaning that your poor wireless devices will constantly be receiving and processing packets. This is true for any device associated with the network, as the packets still need to be received and processed just to determine that they can be discarded. This can dramatically drain the battery life of mobile devices and leaves much less bandwidth available to other devices. As such, I highly recommend that you utilize a dedicated network for RTP broadcast.
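To put rough numbers on that traffic: a back-of-the-envelope sketch for the uncompressed stream format used later in this article (s16be, stereo, 44.1 kHz) works out to roughly 1.4 Mbit/s of payload, before RTP/UDP/IP headers add their few percent on top:

```python
# Back-of-the-envelope bandwidth for the raw PCM payload of the RTP stream.
# Format matches the null-sink defined later: s16be, 2 channels, 44100 Hz.
BYTES_PER_SAMPLE = 2      # 16-bit samples
CHANNELS = 2              # stereo
SAMPLE_RATE = 44100       # samples per second

payload_bytes_per_sec = BYTES_PER_SAMPLE * CHANNELS * SAMPLE_RATE
payload_mbit_per_sec = payload_bytes_per_sec * 8 / 1_000_000

print(f"{payload_mbit_per_sec:.2f} Mbit/s")  # ~1.41 Mbit/s before packet headers
```

That is a constant 1.4 Mbit/s hitting every associated wireless client, which is why a dedicated network pays off.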

You could use unicast, configuring Pulseaudio to send targeted packets to selected devices; however, this results in duplicate transmissions and requires both client- and server-side setup whenever a device is added. Using broadcast means that adding new devices only requires association with the WiFi network and client-side RTP config.

My Setup


I was already using Music Player Daemon to play and manage my music library, so it was even more awesome to find that adding an additional audio output is as easy as editing its config file. I won’t be covering the setup and/or use of MPD in this article, but the ArchLinux Wiki page has some great setup and configuration tips. In a nutshell, MPD is a daemon that can be controlled by various client applications (I use ncmpcpp) to search, edit, play, and otherwise manage your media library.

NCMPCPP MPD client

I already had outputs configured for my system’s local Pulseaudio sink and IceCast, so adding yet another Pulseaudio output to pipe into RTP was a breeze. Other media applications will probably create a single Pulseaudio source by default, so you will either need to dig through their documentation to find out how to create a secondary source, or you can manually map the single source to the new RTP sink via the pactl command-line tool or a suitable Pulseaudio GUI - Pavucontrol works wonders for me.
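As a sketch of that manual mapping: list the active playback streams, then move your player’s stream to the RTP sink defined in the next section (the stream index 42 below is just an example - use whatever index the listing shows for your player):

```shell
# List active playback streams; note the index of your player's stream
pactl list short sink-inputs

# Move that stream onto the "rtp" null-sink (index is an example)
pactl move-sink-input 42 rtp
```

This is exactly what Pavucontrol does under the hood when you change a stream’s output device from its Playback tab.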

Relevant MPD config:

audio_output {
  type "pulse"
  name "RTP"
  sink "rtp"
}
This adds an additional output choice that can be selected from within MPD-compatible clients. Here ncmpcpp lists it as another output that can be toggled on and off. For now, this new output will likely generate a duplicate stream on your system’s default sink; we’ll handle the magic mapping in the next section.
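ncmpcpp isn’t the only way to flip outputs - the mpc command-line client (assuming you have it installed) can toggle them from a script. The output ids below are examples; take the real ids from the listing:

```shell
# List MPD's outputs with their ids and enabled/disabled state
mpc outputs

# Enable the RTP output and disable the local one (ids from the listing above)
mpc enable 2
mpc disable 1
```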


New output option in NCMPCPP


I’m not much of a Pulseaudio guru, but that’s OK, because all we’re doing is plumbing our new audio source into the built-in RTP module. Pulseaudio manages the “audio plumbing” of our system, mapping audio sources to various sinks, which translate into audio cards, network devices, or any other means of transmitting audio information. We want to create a virtual sink (called a null-sink) that we can ship our new audio source out to, which will then be handed over to the RTP module. We are using the module-rtp-send module with a source address so it knows which network card to use.

This is all done via the system’s Pulseaudio config files, typically located in /etc/pulse/ (default.pa for a per-user daemon, system.pa for a system-wide one).

Relevant Pulseaudio config - Sender:


load-module module-native-protocol-unix
load-module module-suspend-on-idle timeout=1
load-module module-null-sink sink_name=rtp format=s16be channels=2 rate=44100 sink_properties="device.description='RTP Multicast Sink'"
load-module module-rtp-send source_ip=<NIC_ip> source=rtp.monitor
# Optional - Send and receive on sender to play in time with other receivers
load-module module-rtp-send source_ip=<NIC_ip> source=rtp.monitor destination_ip=<NIC_ip>
load-module module-rtp-recv sap_address=<NIC_ip>

The last two lines configure a unicast transmission to and from the sending device (the source, destination, and SAP IPs are all the same). module-rtp-send is given a destination IP so packets are directed to a single device (ourselves) instead of being broadcast, while module-rtp-recv is instructed to listen for incoming RTP traffic. This works provided that MPD has only the RTP output enabled and the local output disabled, else you’ll have two sinks playing the same media (for sure out of phase).
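Before committing anything to the config files, you can try the whole chain at runtime with pactl - modules loaded this way disappear on the next daemon restart, so it’s a safe sandbox. <NIC_ip> is your sender’s address on the RTP network:

```shell
# Virtual sink for the player to write into
pactl load-module module-null-sink sink_name=rtp format=s16be channels=2 rate=44100

# Broadcast the sink's monitor stream as RTP
pactl load-module module-rtp-send source_ip=<NIC_ip> source=rtp.monitor

# Undo everything when done experimenting
pactl unload-module module-rtp-send
pactl unload-module module-null-sink
```

Once you’re happy with the behavior, move the load-module lines into the Pulseaudio config so they survive restarts.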


pavucontrol showing new RTP source paired with null-sink, and local RTP stream listener

On each receiver, configuration is trivial: add the following line to the Pulseaudio config file and any broadcast RTP traffic will be received and processed.

Relevant Pulseaudio config - Receiver:

load-module module-rtp-recv

Again, this applies to any device connected to the network that is running Pulseaudio.
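The same runtime trick from the sender side works here too - load the module with pactl for a quick test before touching the receiver’s config file:

```shell
# Temporarily receive broadcast RTP traffic (gone after a daemon restart)
pactl load-module module-rtp-recv
```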


As stated above, running a dedicated network is entirely optional, but I feel it will provide the best results. There are multiple ways to solve this depending on existing infrastructure, but the underlying idea is that you want to segment any RTP traffic into a separate broadcast domain, so that the transmission medium shared with other devices is not flooded with RTP traffic. By far the easiest method is to simply add a secondary NIC to the media server and configure a simple DHCP server to hand out IP addresses to connected clients. This allows any normal traffic to come in on the existing interface while outbound traffic is split between the two interfaces based on the host’s routing table.
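As a sketch of the server side, give the secondary NIC a static address to anchor the new broadcast domain (the interface name eth1 and the 10.10.0.0/24 subnet are assumptions - substitute your own):

```shell
# Bring up the second NIC and give it a static address on the RTP LAN
ip link set eth1 up
ip addr add 10.10.0.1/24 dev eth1
```

The DHCP server below then hands out leases from this subnet, while the primary interface keeps its existing configuration.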

I recommend that you add a DHCP reservation for the new interface and for any networking equipment (such as wireless AP) so that you run into fewer headaches from a management perspective. Media clients can just use a random DHCP lease.

Example DHCPD config:

option subnet-mask <subnet_mask>;
option routers <server_ip>;

subnet <network_address> netmask <subnet_mask> {
  range <first_ip> <last_ip>;

  host media_server {
    hardware ethernet <MAC_ADDRESS>;
    fixed-address <media_server_ip>;
  }

  host AP {
    hardware ethernet <MAC_ADDRESS>;
    fixed-address <AP_ip>;
  }
}

You can then start dhcpd via an init script or a SystemD unit file - just be sure to specify the correct interface. You don’t want it handing out leases on your home network. If you are starting via a script, use the command dhcpd <interface>; otherwise, use a templated unit file similar to the one provided below:

[Unit]
Description=IPv4 DHCP server on %I
Wants=network.target
After=network.target

[Service]
Type=forking
PIDFile=/run/dhcpd4.pid
ExecStart=/usr/bin/dhcpd -4 -q -pf /run/dhcpd4.pid %I

[Install]
WantedBy=multi-user.target
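With a templated unit installed (dhcpd4@.service here is an assumed name, matching the Arch convention), binding the server to the dedicated interface is one command (eth1 is an assumed interface name):

```shell
# Start the DHCP server on the RTP LAN interface and enable it at boot
systemctl enable --now dhcpd4@eth1.service
```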


If you want to add WiFi, a wireless access point in bridge mode can be used to bridge your wireless devices onto your newly created wired network (way too broad a topic to cover here, but if you keep your DHCP server on your media machine, you definitely want the AP in bridge mode). DHCP leases should still be handed out with ease, and the new AP will reduce interference with your standard WiFi network (most APs should be smart enough to auto-tune to an available channel, else you can break out your favorite channel analyzer and set them manually to minimize overlap).

Example config for joining a Raspberry Pi to WiFi. First, /etc/network/interfaces:

auto wlan0

allow-hotplug eth0
iface eth0 inet dhcp

allow-hotplug usb0
iface usb0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

And the matching /etc/wpa_supplicant/wpa_supplicant.conf (SSID and passphrase are placeholders for your own network):

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="<SSID>"
    psk="<passphrase>"
}


And that’s it! I hope this guide was useful and will help you on your way to your own awesome multi-room setup. Someday I’d like to add the ability to stream/cast media from laptops and/or phones, but that’ll be a project for the future. In the meantime I manage to get by with ncmpcpp and MALP for Android.
