Around the time in my life when I stopped ordering drinks made with more than one ingredient, I was woken up for the last time by a hypochondriac Nagios monitoring installation. If you are on-call long enough, you cultivate a violent reaction to the sound of your cell phone's text message alert. If your monitoring is overconfigured, that reaction boils over quickly, because it will interrupt you during meals, sex, sleep — all of the basics — with the excruciating operational details of your web site.
I've since developed, with the help of some noble systems administrators, a theory of service monitoring: a monitor can be informative, actionable, both, or neither. By informative, I mean that the alert must tell you categorically that there is a problem. By actionable, I mean that receiving the alert must prompt some kind of immediate response. Alerts therefore break down like this:
Neither Informative nor Actionable
I call these types of alerts Cool Story, Bro for short. These are bits of information that do not indicate any sort of problem state, and do not prompt any action. Cool Stories are things that you should not even have alerts for. They waste your time and make you paranoid. If you want to track a metric whose state is neither informative nor actionable, make it a graph, not an alert. Cool Story Bro alerts are things like:
- The load average on a server is above 20. This doesn't actually indicate a problem. In Linux, the load average is just the number of processes in the kernel's run queue (plus those stuck in uninterruptible sleep), and as long as the CPUs, disk channels, network interfaces, and memory are running at less than 100% capacity, the machine is keeping up with its work. A high load average by itself is nothing to worry about. I don't even have alerts on load average. I have production machines with load averages well over 100 that are working just fine. If you want to track it, graph it; there's a sketch of that after this list.
- A job queue has more than X work units in it. Congratulations, dipshit, your queue is doing exactly what it is supposed to do. Whether or not that counts as a failure state is a subjective judgment call, which is one of the reasons I hate queues.
- Some metric is greater than an empirically determined mean. I get personally offended by shit like this. "Have it alert if it's 10 more than the average." First of all, read Zed Shaw's Programmers Need To Learn Statistics Or I Will Kill Them All. Second, alerts based on empirical data will frequently give false positives, because the measurements the alert takes are new empirical data you haven't seen before. Furthermore, any up/down check program that needs to keep state about the return results of previous checks is a recipe for tears.
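If you want to see what graphing instead of alerting looks like, here is a rough sketch: a Linux-only Python snippet that reads the one-minute load average, normalizes it by CPU count, and prints it as a Graphite-style plaintext line. The metric name and the idea of shipping it to a grapher are assumptions; the point is that nothing here pages anybody.

```python
#!/usr/bin/env python
# Cool Story, Bro: graph it, don't alert on it.
# Linux-only sketch: /proc/loadavg holds "1min 5min 15min running/total lastpid".
import os
import time

def load_per_cpu():
    with open("/proc/loadavg") as f:
        one_minute = float(f.read().split()[0])
    return one_minute / os.cpu_count()

if __name__ == "__main__":
    # Emit a Graphite-style plaintext line; where you send it is up to you.
    # There is deliberately no threshold and no exit-code drama here.
    print("servers.db01.load_per_cpu %.2f %d" % (load_per_cpu(), int(time.time())))
```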
Informative, but not Actionable
Informative but unactionable alerts are ones that indicate an abnormal state, but are not things that you need to handle immediately, so they should be e-mail alerts, not pager messages. They are things that you can handle during the workday, and don't need your surly, undivided attention at four o'clock in the morning. For example:
- The primary disk on your database server is at 90% capacity. Is the site down? No? Then fuck off, I'll get to it. (There's a sketch of a check like this after the list.)
- Memory on your MongoDB server is at 80% capacity. For those of you working at Foursquare: when a critical server is approaching its physical memory limit, you want to know about it, because bad shit is going to happen if it keeps growing.
- One out of three load-balanced web servers is down. Good to know, but I don't have to get off my ass just yet.
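As a sketch of what an informative-but-not-actionable check might look like, here's a Nagios-plugin-style disk capacity check in Python. The path and thresholds are made up; the point is that it exits WARNING rather than CRITICAL, and whether that ends up as an e-mail or a page is a decision you make in your notification configuration, not in the check itself.

```python
#!/usr/bin/env python
# Sketch of a Nagios-plugin-style check: exit 0 = OK, 1 = WARNING, 2 = CRITICAL.
# Path and thresholds are assumptions for illustration.
import os
import sys

PATH = "/var/lib/mysql"   # hypothetical: the primary disk on your database server
WARN = 0.90               # informative: start planning
CRIT = 0.97               # actionable: you are about to have a bad night

def used_fraction(path):
    st = os.statvfs(path)
    # Approximate "df" math: fraction of blocks not available to ordinary users.
    return 1.0 - (st.f_bavail / float(st.f_blocks))

if __name__ == "__main__":
    used = used_fraction(PATH)
    if used >= CRIT:
        print("CRITICAL: %s is %.0f%% full" % (PATH, used * 100))
        sys.exit(2)
    if used >= WARN:
        print("WARNING: %s is %.0f%% full" % (PATH, used * 100))
        sys.exit(1)
    print("OK: %s is %.0f%% full" % (PATH, used * 100))
    sys.exit(0)
```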
Informative and Actionable
This is your meat and potatoes. When one of these fuckers goes off, it's battlestations. Drop your cocks and grab your socks, we got shit to fix. Yes, these are the alerts that should be waking you up in the middle of the night. Milo's production Nagios config has roughly 10 of these, double that in e-mail-only alerts, and quite a few cool stories to satisfy some paranoid delusions. Some example alerts that should get your lazy ass out of bed (a rough check for the first two is sketched after the list):
- The search handler of your public site serves HTTP 500. Your users probably weren't looking for that.
- Your API's latency is outside of SLA. Welcome to losing-moneyville. Population: you.
- The CSS statics on your home page fail to load. It's a really obscure HTTP status code; you've probably never heard of it.
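To make the first two concrete, here's a rough sketch of a stateless, deterministic check for them: hit the search handler, fail on a 5xx, and fail if the response takes longer than your SLA. The URL and the SLA number are placeholders.

```python
#!/usr/bin/env python
# Sketch: page-worthy check for the search handler. URL and SLA are placeholders.
import sys
import time
import urllib.error
import urllib.request

URL = "https://www.example.com/search?q=pumpkin"  # hypothetical search handler
SLA_SECONDS = 2.0                                  # hypothetical latency SLA

if __name__ == "__main__":
    start = time.monotonic()
    try:
        resp = urllib.request.urlopen(URL, timeout=10)
        status = resp.getcode()
    except urllib.error.HTTPError as e:
        status = e.code
    except urllib.error.URLError as e:
        print("CRITICAL: search handler unreachable: %s" % e)
        sys.exit(2)
    elapsed = time.monotonic() - start

    if status >= 500:
        print("CRITICAL: search handler returned HTTP %d" % status)
        sys.exit(2)
    if elapsed > SLA_SECONDS:
        print("CRITICAL: search took %.2fs, SLA is %.2fs" % (elapsed, SLA_SECONDS))
        sys.exit(2)
    print("OK: HTTP %d in %.2fs" % (status, elapsed))
    sys.exit(0)
```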
Systems Design Considerations
When you design a new system, design it to be monitorable. The basic criterion is this: there must be a stateless, deterministic way to check the system's health.
- stateless: The check run at time t must not depend on the outcome of the check run at time t - 1.
- deterministic: There is no random variability or subjective judgment in determining whether or not the system is healthy. Health is a binary value.
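One way to design for this from the start is to give the service a health endpoint that performs one representative unit of work and answers with a plain yes or no. Here's a Flask-style sketch; do_representative_query() is a hypothetical stand-in for whatever canonical operation your service exists to perform.

```python
# Sketch of designing a system to be monitorable: a /health endpoint that does
# one representative unit of work and reports health as a binary value.
# Flask is used for brevity; do_representative_query() is hypothetical.
from flask import Flask

app = Flask(__name__)

def do_representative_query():
    # Hypothetical: run the same kind of work the service exists to do,
    # e.g. a trivial search or a SELECT 1 against the primary database.
    return True

@app.route("/health")
def health():
    # No state files, no comparison with the last check, no rolling averages.
    # Healthy or not, decided right now, from this request alone.
    if do_representative_query():
        return "OK", 200
    return "FAIL", 500
```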
Producer → Queue → Blocking Consumers

End to end, how do you check that this system is OK? The logical entry point for a monitor is the producer: send a job through the system and check that it gets processed by a consumer. But the asynchronous queue makes that determination a judgment call. What if it takes a minute? What if it takes an hour? Is the system still OK if the time from job production to job completion is a day?
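To see why, consider what a monitor for that pipeline has to look like. Below is a sketch with hypothetical enqueue_canary() and canary_finished() helpers; notice that the deadline is a number you pull out of thin air, which is exactly the subjective judgment the queue forces on you.

```python
# Sketch of an end-to-end check for Producer -> Queue -> Blocking Consumers.
# enqueue_canary() and canary_finished() are hypothetical helpers.
import sys
import time

DEADLINE_SECONDS = 60  # a minute? an hour? a day? pure judgment call

def enqueue_canary():
    """Hypothetical: push a do-nothing job into the queue, return its id."""
    raise NotImplementedError

def canary_finished(job_id):
    """Hypothetical: did some consumer mark this job as done?"""
    raise NotImplementedError

if __name__ == "__main__":
    job_id = enqueue_canary()
    deadline = time.monotonic() + DEADLINE_SECONDS
    while time.monotonic() < deadline:
        if canary_finished(job_id):
            print("OK: canary processed")
            sys.exit(0)
        time.sleep(1)
    # Is this a failure, or is the queue just doing its job? Exactly.
    print("CRITICAL: canary not processed within %ds" % DEADLINE_SECONDS)
    sys.exit(2)
```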
If you need the asynchronous model, what you generally want is a spool, where you accept that work doesn't have to be processed as soon as possible, but rather in an offline batch job. In this case, you can simply monitor that the spool has not overflowed the capacity of the physical device it lives on, and that your processing batch job has run successfully.
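Here's a sketch of what monitoring a spool might look like, assuming a spool directory and a batch job that touches a success marker file when it finishes cleanly. The paths and the freshness window are invented for illustration.

```python
#!/usr/bin/env python
# Sketch: monitoring a spool instead of a queue. Paths and windows are assumptions.
import os
import sys
import time

SPOOL_PATH = "/var/spool/mything"            # hypothetical spool directory
SUCCESS_MARKER = "/var/run/mything.last_ok"  # hypothetical: batch job touches this on success
MAX_SPOOL_FRACTION = 0.80                    # don't overflow the device the spool lives on
MAX_MARKER_AGE = 2 * 3600                    # batch job runs hourly, allow one miss

if __name__ == "__main__":
    st = os.statvfs(SPOOL_PATH)
    used = 1.0 - (st.f_bavail / float(st.f_blocks))
    if used >= MAX_SPOOL_FRACTION:
        print("CRITICAL: spool device %.0f%% full" % (used * 100))
        sys.exit(2)

    age = time.time() - os.path.getmtime(SUCCESS_MARKER)
    if age > MAX_MARKER_AGE:
        print("CRITICAL: batch job hasn't succeeded in %d seconds" % age)
        sys.exit(2)

    print("OK: spool at %.0f%%, batch job ran %d seconds ago" % (used * 100, age))
    sys.exit(0)
```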
If you want work processed as it comes in, think about this:
Producer → Load Balancer → Consumers

You still have a fixed number of consumers, except now the work is being done synchronously, and the load balancer decides which of the consumers gets the work. If all consumers are busy, the load balancer refuses incoming work, because your system is over capacity. This is a failure state, and it can be deterministically measured.
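Here's a toy sketch of that consumer side: a fixed pool of worker slots, and if none of them is free when work arrives, the work is refused on the spot. The slot count and the handler are made up; the point is that "we are over capacity" becomes an immediate, deterministic answer instead of a silently growing backlog.

```python
# Toy sketch of synchronous consumers behind a load balancer: a fixed number of
# worker slots, and work is refused immediately when all of them are busy.
# Slot count and the do_work() body are assumptions.
import threading

WORKER_SLOTS = threading.BoundedSemaphore(8)  # hypothetical: 8 consumers

def do_work(job):
    pass  # hypothetical: the actual work

def handle(job):
    # Non-blocking acquire: either a consumer is free right now, or we refuse.
    if not WORKER_SLOTS.acquire(blocking=False):
        return 503  # over capacity -- a deterministic, measurable failure state
    try:
        do_work(job)
        return 200
    finally:
        WORKER_SLOTS.release()
```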
"But Ted," you say, "what if there's a lot of traffic and the consumers get backed up?" Aye, welcome to the world of capacity planning. In this case, an asynchronous queue is a crutch that helps you avoid thinking about your system's actual resource utilization, and it will come back to burn you because it is unmonitorable. In the face of increased traffic, if you're so confident that you can "spin up more queue workers", then you can sure as hell "spin up more synchronous workers".