Commit 6fadc10

forgot a couple slides
1 parent d93b02b commit 6fadc10

File tree

1 file changed: +18 -1 lines changed

2017_hashiconf/hashiconf.slide (+18 -1)
@@ -101,6 +101,7 @@ We needed a way to share these backend processing nodes for multiple customers
 - Easy to operate
 - Easy to use API
 - Can schedule jobs based on CPU, Memory and Disk resources
+- Easy to integrate with existing infrastructure components
 
 : easy to operate...we were going to be deploying this everywhere (cloud, on-prem, etc.) and operational ease was a primary concern. We couldn't waltz into a new customer's infrastructure and expect them to deploy and manage a complicated solution
 : easy to use API - wanted to make the development experience as easy as possible
@@ -130,13 +131,29 @@ We needed a way to share these backend processing nodes for multiple customers
 
 If the docs weren't clear, there was a full client implementation (the Nomad CLI) to borrow from
 
+* Integration with existing infrastructure components
+
+Our biggest concern was centralized logging. We were able to add a wildcarded path to our filebeat configuration on the Nomad clients:
+
+    /data/nomad/alloc/*/alloc/logs/*.std*.*
+
+We also turned on telemetry in the Nomad configuration to get metrics to statsite, which is then configured to send them to graphite:
+
+    {
+      "telemetry": {
+        "statsite_address": "statsite.example.com:8125"
+      },
+      ...
+    }
+
+
 * Re-architecting to use Nomad
 
 .image images/diagrams/with_nomad.png
 
 Decided the jobs running in containers wouldn't touch the source storage directly
 
-* Accessing the API
+* Accessing our API
 
 - Container is given a narrowly scoped OAuth token (scoped to a single source file)
 - Allows the job in the container to download the file and store the results for only that file
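
A note on the logging path added above: in Filebeat it plugs into a prospector's paths list. A minimal sketch, assuming a Filebeat 5.x-era configuration; only the wildcard path itself comes from this commit:

    # filebeat.yml (hypothetical excerpt)
    # Nomad writes task logs under <data_dir>/alloc/<alloc-id>/alloc/logs/
    # as <task>.stdout.N / <task>.stderr.N, so one wildcard covers every job.
    filebeat.prospectors:
      - input_type: log
        paths:
          - /data/nomad/alloc/*/alloc/logs/*.std*.*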
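
The statsite-to-graphite hop mentioned on the same slide is configured on the statsite side, not in Nomad. A sketch assuming statsite's bundled graphite sink script and placeholder host names, none of which appear in the commit:

    # statsite.conf (hypothetical; hosts and ports are placeholders)
    [statsite]
    port = 8125
    udp_port = 8125
    # forward aggregated metrics to Graphite via the bundled sink script
    stream_cmd = python sinks/graphite.py graphite.example.com 2003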
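
Finally, the narrowly scoped OAuth token in the last two bullets implies a download-process-upload flow inside the container. Purely as an illustration, with invented endpoint paths and an invented SOURCE_TOKEN/SOURCE_ID convention:

    # hypothetical entrypoint; the real API surface is not shown in this commit
    curl -H "Authorization: Bearer ${SOURCE_TOKEN}" \
         -o input.dat "https://api.example.com/sources/${SOURCE_ID}"
    # ...process input.dat into results.json...
    curl -H "Authorization: Bearer ${SOURCE_TOKEN}" \
         -X PUT --data-binary @results.json \
         "https://api.example.com/sources/${SOURCE_ID}/results"

Because the token is scoped to a single source file, the job can download that file and store results for it, and nothing else.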
