Rewrite the whole thing in ruby and add volume control
ahayworth committed Feb 16, 2021
1 parent 00c37d3 commit 858ef7b
Showing 17 changed files with 720 additions and 770 deletions.
1 change: 1 addition & 0 deletions .ruby-version
@@ -0,0 +1 @@
2.7.2
7 changes: 7 additions & 0 deletions Gemfile
@@ -0,0 +1,7 @@
source 'https://rubygems.org'

gem 'async', '~> 1.28'
gem 'async-io', '~> 1.30'
gem 'dry-struct', '~> 1.4'
gem 'dry-inflector', '~> 0.2'
gem 'json-rpc-objects', '~> 0.4.6'
65 changes: 65 additions & 0 deletions Gemfile.lock
@@ -0,0 +1,65 @@
GEM
  remote: https://rubygems.org/
  specs:
    abstract (1.0.0)
    addressable (2.7.0)
      public_suffix (>= 2.0.2, < 5.0)
    async (1.28.7)
      console (~> 1.10)
      nio4r (~> 2.3)
      timers (~> 4.1)
    async-io (1.30.2)
      async (~> 1.14)
    concurrent-ruby (1.1.8)
    console (1.10.1)
      fiber-local
    dry-configurable (0.12.0)
      concurrent-ruby (~> 1.0)
      dry-core (~> 0.5, >= 0.5.0)
    dry-container (0.7.2)
      concurrent-ruby (~> 1.0)
      dry-configurable (~> 0.1, >= 0.1.3)
    dry-core (0.5.0)
      concurrent-ruby (~> 1.0)
    dry-inflector (0.2.0)
    dry-logic (1.1.0)
      concurrent-ruby (~> 1.0)
      dry-core (~> 0.5, >= 0.5)
    dry-struct (1.4.0)
      dry-core (~> 0.5, >= 0.5)
      dry-types (~> 1.5)
      ice_nine (~> 0.11)
    dry-types (1.5.0)
      concurrent-ruby (~> 1.0)
      dry-container (~> 0.3)
      dry-core (~> 0.5, >= 0.5)
      dry-inflector (~> 0.1, >= 0.1.2)
      dry-logic (~> 1.0, >= 1.0.2)
    fiber-local (1.0.0)
    ice_nine (0.11.2)
    json-rpc-objects (0.4.6)
      abstract (>= 1.0.0)
      addressable (>= 2.2.2)
      json-rpc-objects-json (>= 0.1.1)
      ruby-version (>= 0.4.0)
    json-rpc-objects-json (0.1.1)
      multi_json
    multi_json (1.15.0)
    nio4r (2.5.5)
    public_suffix (4.0.6)
    ruby-version (0.4.3)
    timers (4.3.2)

PLATFORMS
  ruby
  x86_64-darwin-20

DEPENDENCIES
  async (~> 1.28)
  async-io (~> 1.30)
  dry-inflector (~> 0.2)
  dry-struct (~> 1.4)
  json-rpc-objects (~> 0.4.6)

BUNDLED WITH
   2.2.3
57 changes: 42 additions & 15 deletions README.md
@@ -1,28 +1,55 @@
## snapcast-autoconfig

This script watches for pre-defined streams on a snapcast server; and if any of them are playing it will then ensure that a group with the configured clients is playing that stream.
snapcast-autoconfig watches pre-defined streams on a [`snapcast`](https://github.com/badaix/snapcast) server. If any of them are playing, it will then ensure that a group with the configured clients is playing that stream (and can optionally manipulate the volume of each client within that group).

### Wait, why is this necessary / useful?

Well, maybe it's better to just explain how I use it. I have multiple rooms in my house that have speakers I'd like to listen to:

- Living Room
- Kitchen
- Office
- Bedroom
- Bathroom
- Deck

But, there are actually *additional* logical groupings of those speakers that are useful:
- "Great Room" (Kitchen + Living Room)
- "Master Suite" (Bedroom + Bathroom)
- "Great Room + Outside" (Kitchen + Living Room + Deck ... in the summer!)
- "Whole House"
- "Whole House + Outside"

And what's more, I want these speakers *and* these zones to be available as Airplay targets! Me and mine like to use Airplay to play our music, because it's convenient and most people we know have iPhones. And we don't want to tell people "Oh wait, let me re-configure the sound server"... we want it to just happen automatically.

To solve this problem, I run multiple instances of [`shairport-sync`](https://github.com/mikebrady/shairport-sync) on my server, each one for a different speaker or logical zone (eg: I have an "Office" stream, a "Kitchen" stream... and also a "Whole House" stream, etc). And, I have snapcast set up to source music from each one of these airplay streams (shairport-sync instances). This brings us to the *why* of `snapcast-autoconfig`: it watches snapcast to see if any of these airplay streams become active, and then does the re-grouping of the clients behind the scenes.

All so that we can just say "airplay to the Great Room, mom!" when we're playing music at home.

Believe it or not, it actually works pretty well.

### Requirements

Node (tested with v15)
- Ruby >= 2.7.2 (I have tested with 2.7.2 and 3.0.0).
- A valid configuration file describing the streams to monitor, and the clients that should follow them. See the example in the repo.
- Your snapcast server *must* have the TCP API exposed and available (it is, by default) - see the sketch below for a quick way to check that it's reachable.
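
For a quick sanity check, something like this minimal Ruby sketch (not part of this project - the host, port, and output formatting are just examples, and 1705 is snapcast's default TCP control port) asks the server for its status over the JSON-RPC API:

```ruby
require 'json'
require 'socket'

# Connect to the snapcast TCP control API; it speaks newline-delimited JSON-RPC.
socket = TCPSocket.new('192.168.1.29', 1705)
socket.puts({ id: 1, jsonrpc: '2.0', method: 'Server.GetStatus' }.to_json)

# One JSON object comes back per line; list the streams the server knows about.
status = JSON.parse(socket.gets)
status.dig('result', 'server', 'streams').each do |stream|
  puts "#{stream['id']}: #{stream['status']}"
end
socket.close
```

If that prints your streams, snapcast-autoconfig should be able to reach the server too.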

### Installation

`npm install`
`bundle install`

### Deploying
### Deployment / Operation

I'm planning on systemd - but you do you.
I manually install this on my server, and have set up a systemd service to run it. I've included an example systemd unit file.
My own personal config file is also provided, and can be used as a reference.

### FAQ

- **Does it do x/y/z?** Probably not, but PRs are welcome.
- **You just put your personal config into git?!** I'm lazy, and the info is not sensitive. It's also an instructive example of how I use it.
- **There are bugs!!** I'm not surprised - help me fix them!
- **I need help!** Feel free to open an issue, but I'm basically just putting this out as-is unless anyone else is interested in helping.
- **What does the priority field mean?** If there are two streams playing that have overlapping configured clients (ie: the 'kitchen' and 'whole house' streams are both playing, and they both claim the 'kitchen' client) - then the stream with the lowest priority wins.

### Other notes

- This expects that your clients have the ID set to something memorable; not the name.
- This might blow up, who knows. It's not well tested outside of my own living room.
- **Does it do x/y/z?** Probably not, but PRs are welcome!
- **Couldn't you just use the 'meta' stream type?** Almost!! Snapcast doesn't allow you to define "home" (or "default", or "initial") groups for clients, and that's still useful and important.
- **Wait, couldn't you just use the pre/post-play scripts in shairport-sync?** Yes, actually, but I didn't know about them when I first started the project and now I kinda like this.
- **Hold on, wouldn't Airplay2 make this obsolete?** Yes, basically. That'd be really nice to be honest. Hopefully someone cracks it eventually.
- **You just put your personal config into git?!** I'm lazy, and the info is not sensitive. It's also an instructive example of how I use it!
- **There are bugs!!** I'm not surprised - help me fix them! I've really only tested this at my house!
- **Didn't this used to be written in nodejs?** Yes, but that was a bad move on my part; I know ruby a lot better. I just didn't know much about Ruby event-loop programming at the time and node was an easy quick fix.
- **I need help!** Please open an issue, and I'll try to help if possible.
119 changes: 119 additions & 0 deletions autoconfig.rb
@@ -0,0 +1,119 @@
require 'logger'
require 'yaml'
require_relative './lib/snapcast'

@config = {
  'loglevel' => 'info',
  'polling_interval' => 2,
}.merge(YAML::load(File.read('./config.yml')))

@logger = Logger.new(STDOUT, level: Logger::INFO)
if @config['loglevel'] != 'info'
  new_level = "#{@config['loglevel']}!".to_sym
  @logger.send(new_level) if @logger.respond_to?(new_level)
end
Async.logger = @logger

Async(annotation: "autoconfig.rb", logger: @logger) do |task|
  begin
    server = Snapcast::Server.new(@config['server'])
    server.logger = @logger
    server.poll!(@config['polling_interval'])

    loop do
      if server.groups.any?
        @logger.debug("\n" + [
          "-" * 10,
          server.groups.map { |g|
            "#{g.id} (name: '#{g.name}', stream: '#{g.stream&.id}' / #{g.stream&.status}) #{g.clients.map(&:id).inspect}"
          },
          "-" * 10,
        ].join("\n"))
      end

      # For each playing stream, determine if any changes need to be made.
      # Ignore streams we're not managing (not in the config file).
      managed_streams = server.streams.select do |stream|
        stream.playing? && @config['streams'].has_key?(stream.id)
      end

      # Sort the streams so that they're ordered the same way as the config file.
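      # (Order matters: streams that appear earlier in config.yml take precedence
      # over streams further down when claiming clients.)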
      managed_streams.sort_by! { |stream| @config['streams'].keys.index(stream.id) }

      managed_streams.each do |stream|
        # Find the configuration for this stream.
        stream_config = @config['streams'][stream.id]

        # Filter out any clients from the config that should be in higher-priority streams.
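        # For example: if 'office' and 'wholehouse' are both playing and 'office' is
        # listed first in config.yml, the 'office' client is rejected here while we
        # process 'wholehouse', because the higher-priority 'office' stream claims it.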
        desired_client_ids = stream_config['clients'].reject do |client_id|
          idx = @config['streams'].keys.index(stream.id)
          more_important_streams = @config['streams'].first(idx).to_h.select do |k, _|
            server.streams.select { |s| s.playing? }.map(&:id).include?(k)
          end
          more_important_clients = more_important_streams.flat_map { |_, v| v['clients'] }

          more_important_clients.include?(client_id)
        end
        desired_clients = server.clients.select { |c| desired_client_ids.include?(c.id) }

        # Bail out if we shouldn't actually move any clients to this stream.
        next if desired_client_ids.none? || desired_clients.none?

        # Now, find a candidate group to manage
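        # (we pick the smallest existing group that already contains at least one
        # of the desired clients)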
        candidate_group = server.groups
          .select { |g| (g.clients.map(&:id) & desired_client_ids).any? }
          .sort_by { |g| g.clients.size }
          .first

        # Bail if we can't find a group (I don't think this should happen)
        next unless candidate_group

        # Pull out any desired volume changes, with a default of '100'.
        volume_config = stream_config['volume'] || {}
        volume_config.default = 100

        # Next, determine if this group is misconfigured in some way.
        correct_stream = candidate_group.stream.id == stream.id
        correct_clients = candidate_group.clients.map(&:id).to_set == desired_client_ids.to_set
        correct_name = candidate_group.name == stream.id
        correct_volume = candidate_group.clients.all? { |c| c.config.volume.percent == volume_config[c.id] }
        unless correct_stream && correct_clients && correct_name && correct_volume
          # We need to make changes, so let's log that proposal.
          @logger.info "MISCONFIGURED: #{stream.id}"
          @logger.info <<~EOF
            Going to reconfigure group '#{candidate_group.id}':
            Name: #{candidate_group.name} -> #{stream.id}
            Stream: #{candidate_group.stream.id} -> #{stream.id}
            Clients: #{candidate_group.clients.map(&:id).sort} -> #{desired_client_ids.sort}
          EOF
          candidate_group.clients.each do |client|
            @logger.info " #{client.id} volume: #{client.config.volume.percent} -> #{volume_config[client.id]}"
          end

          # Actually make the changes now.
          candidate_group.stream = stream unless correct_stream
          candidate_group.clients = desired_clients unless correct_clients
          candidate_group.name = stream.id unless correct_name

          # We wouldn't have had correct client info for volume manipulation
          # before, so we should break now and get new info before trying volume manipulation.
          break unless correct_clients

          candidate_group.clients.each do |client|
            new_volume = volume_config[client.id]
            client.volume = new_volume unless client.config.volume.percent == new_volume
          end

          # Break out of the loop if we've made changes (we have, by this point) so we
          # can start the next round of modifications with correct info.
          break
        end
      end

      task.sleep @config['polling_interval']
    end
  rescue => e
    puts e.backtrace.inspect
    raise Snapcast::Error.new(e)
  end
end
60 changes: 0 additions & 60 deletions config.js

This file was deleted.

52 changes: 52 additions & 0 deletions config.yml
@@ -0,0 +1,52 @@
---
loglevel: debug
# For now, we only support TCP connections. This could always be on localhost!
server: tcp://192.168.1.29:1705
# We only explicitly manage the streams and clients referenced in this file - however,
# that doesn't mean that snapcast-autoconfig won't inadvertently mess up a manual grouping
# you've made. Snapcast groups are dynamic, fuzzy things. YMMV.
#
# For each stream, the clients that should be grouped together when it starts playing are listed.
# The order of the streams is important - streams higher up take precedence over streams further
# down when deciding which group should claim clients. For example, consider a scenario where
# the 'office' and 'wholehouse' streams are playing simultaneously. In that case, the 'office' client
# would always be grouped with the 'office' stream as configured here, because it has the highest priority
# in the list. The 'wholehouse' stream wouldn't get the 'office' client; it'd technically be incomplete.
streams:
  office:
    clients:
      - office
  bedroom:
    clients:
      - bedroom
  bathroom:
    clients:
      - bathroom
  kitchen:
    clients:
      - kitchen
  livingroom:
    clients:
      - livingroom
  mastersuite:
    clients:
      - bedroom
      - bathroom
  greatroom:
    clients:
      - kitchen
      - livingroom
    # Here we've configured the 'kitchen' client to have its volume lowered to 70
    # when it becomes part of this group. Volumes default to '100' if they are not explicitly
    # listed; and will always overwrite any manual configuration you've done in the UI.
    volume:
      kitchen: 70
  wholehouse:
    clients:
      - bedroom
      - bathroom
      - kitchen
      - livingroom
      - office
    volume:
      kitchen: 70