
A practical intro to Prometheus

There are two terms used to describe monitoring: whitebox and blackbox. An example of blackbox monitoring is a Nagios check, like pinging a gateway to see if it responds. It's called a blackbox because we can probe a program from the outside, but we don't have any visibility into its internal state or how it interacts with the rest of the system.
Whitebox monitoring, on the other hand, means having data about the internal state of your program. An example of this is adding a counter like requests_processed to your webserver and making it available to a time series database. A great introduction to whitebox vs. blackbox monitoring is Jamie Wilkinson's talk at PuppetConf '12.
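To make that concrete, here is a minimal sketch (mine, not from the Prometheus docs) of a webserver exposing such a counter in the Prometheus text format, using only Python's standard library; the metric name and port are illustrative assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

requests_processed = 0  # the internal state we are exposing


def render_metrics(count):
    # Prometheus text exposition format: HELP/TYPE comments, then a sample.
    return (
        "# HELP requests_processed_total Requests handled by this server.\n"
        "# TYPE requests_processed_total counter\n"
        "requests_processed_total %d\n" % count
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global requests_processed
        requests_processed += 1  # whitebox: the app counts its own work
        body = render_metrics(requests_processed).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)


# To serve it as a scrape target, you would run:
#   HTTPServer(("", 8000), MetricsHandler).serve_forever()
```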
Prometheus is both a time series database and a framework for instrumenting your code. First, you make the metrics available over HTTP in a format that the Prometheus server understands. Then Prometheus scrapes that endpoint at a specified interval and stores the metrics in its database. Once Prometheus has the data, you can analyse it, create alerts, or expose certain stats to a dashboard.
For example, here are some metrics that Prometheus itself exposes at http://prometheus/metrics:
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="graph",method="get"} 6
http_requests_total{code="200",handler="label_values",method="get"} 6
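The format is line-oriented and easy to parse; as a rough sketch (real Prometheus clients handle escaping, timestamps, and other cases this ignores):

```python
import re

# The exposition sample from above.
SAMPLE = """# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="graph",method="get"} 6
http_requests_total{code="200",handler="label_values",method="get"} 6
"""

# Metric name, optional {label="value",...} block, then the sample value.
METRIC_RE = re.compile(r'^(?P<name>\w+)(\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

samples = {}
for line in SAMPLE.splitlines():
    if not line or line.startswith("#"):  # skip HELP/TYPE comments
        continue
    m = METRIC_RE.match(line)
    if m:
        samples[(m["name"], m["labels"])] = float(m["value"])
```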
Unless you're writing the application yourself, it's unlikely to expose metrics in a Prometheus format. Luckily, there are dozens of 'exporters' out there that will convert data from another format into something Prometheus can understand. Today, I'll focus on caching_exporter, an exporter that I wrote to monitor OS X Caching Server, and mtail, the utility it's based on.

mtail

mtail is a daemon that tails a log file and exposes various metrics over HTTP, so that Prometheus can scrape them. To use mtail, we first need to write a set of rules for processing the log file. Here is a basic mtail rule from the project README:
# ~/linecounter.mtail
# simple line counter
counter line_count
/$/ {
  line_count++
}
All the above rule says is: when we encounter an end-of-line anchor (`$`), increment the counter line_count by one.
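For intuition, here is the same logic as a Python loop over log lines (a sketch of what the rule does, not how mtail is implemented):

```python
import re

line_count = 0
END_OF_LINE = re.compile(r"$")  # the $ anchor matches at the end of any line

# Stand-in for tailing a log file: every line matches, so every line counts.
for line in ["first line", "second line", "third line"]:
    if END_OF_LINE.search(line):
        line_count += 1

print("line_count", line_count)
```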
Now that we have a filter, we can start mtail.
mtail --progs linecounter.mtail --logs /Library/Server/Caching/Logs/Debug.log
Now if we open our browser and go to http://localhost:3903/metrics we should see:
# TYPE line_count counter
line_count{prog="linecounter.mtail",instance="mylaptop.example.net"} 1124
The counter value on the right starts at 0 and increments by one every time a new line is written to Debug.log.
The above example is simple, but it doesn't do anything useful. Let's write a more complicated rule. If we look at Debug.log, we will see something like this:
2015-08-02 21:01:14.932 Cleanup succeeded.
2015-08-02 21:01:15.483 Request for registration from https://lcdn-registration.apple.com/lcdn/register succeeded
2015-08-02 21:01:15.488 This server has 0 peers
2015-08-02 21:05:17.364 #leDkqrU0GiHl Request by "itunesstored/1.0" for http://a1443.phobos.apple.com/us/r1000/169/Purple1/v4/4d/2c/16/4d2c169d-7aa6-df87-1c86-ff1f37251be5/hlw1128731461172829049.D2.pd.ipa
All of the above information can be turned into useful metrics with some regex magic.
# caching.mtail
counter caching_parsed_log_lines
counter caching_cleanups
counter caching_registrations
counter caching_requests by request_source, file_type

/^(?P<date>\d+-\d+-\d+ \d+:\d+:\d+\.\d+)/ {
    strptime($date, "2006-01-02 15:04:05.000")
    caching_parsed_log_lines++

    # Registration
    /(\bRegistration\b \bsucceeded\b\.)/ {
        caching_registrations++
    }
    # Cleanup
    /(\bCleanup\b \bsucceeded\b\.)/ {
        caching_cleanups++
    }

    # Requests
    /\#.* (Request by \"(?P<request_source>\w+\/\d+\.\d+)\" for http:\/\/).*\/[\w\.]+\.(?P<file_type>\w+)/ {
        caching_requests[$request_source][$file_type]++
    }
}
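To see what those rules do, here is a rough Python re-implementation (my sketch, not part of the exporter) run against lines from the Debug.log excerpt above; the timestamp and request regexes mirror the rule file:

```python
import re
from collections import Counter

# Timestamp prefix, as in the outer mtail rule.
LINE_RE = re.compile(r"^(?P<date>\d+-\d+-\d+ \d+:\d+:\d+\.\d+)")
# Request lines: client user agent and the file extension of the asset.
REQUEST_RE = re.compile(
    r'#.* Request by "(?P<request_source>\w+/\d+\.\d+)" for http://'
    r".*/[\w.]+\.(?P<file_type>\w+)"
)

caching_parsed_log_lines = 0
caching_cleanups = 0
caching_requests = Counter()

log = [
    "2015-08-02 21:01:14.932 Cleanup succeeded.",
    "2015-08-02 21:01:15.488 This server has 0 peers",
    '2015-08-02 21:05:17.364 #leDkqrU0GiHl Request by "itunesstored/1.0" '
    "for http://a1443.phobos.apple.com/us/r1000/169/Purple1/v4/4d/2c/16/"
    "4d2c169d-7aa6-df87-1c86-ff1f37251be5/hlw1128731461172829049.D2.pd.ipa",
]

for line in log:
    if not LINE_RE.match(line):
        continue  # lines without a timestamp are skipped, as in the rule
    caching_parsed_log_lines += 1
    if re.search(r"\bCleanup\b \bsucceeded\b\.", line):
        caching_cleanups += 1
    m = REQUEST_RE.search(line)
    if m:
        caching_requests[(m["request_source"], m["file_type"])] += 1
```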
Now this is what Prometheus sees at http://caching.example.net:3903/metrics:
# TYPE caching_parsed_log_lines counter
caching_parsed_log_lines{prog="caching.mtail",instance="mylaptop.example.net"} 1124
# TYPE caching_registrations counter
# TYPE caching_cleanups counter
caching_cleanups{prog="caching.mtail",instance="mylaptop.example.net"} 182
# TYPE caching_requests counter
caching_requests{file_type="ipa",request_source="itunesstored/1.0",prog="caching.mtail",instance="mylaptop.example.net"} 2

caching_exporter

With a bit more work, we can turn everything in Debug.log into useful data. This post is specifically about Prometheus and mtail, but the above could also be achieved with Logstash and another metrics database. However, if you're familiar with Caching Server, you know that some of the useful information is also stored in /Library/Server/Caching/Config/Config.plist and /Library/Server/Caching/Logs/LastState.plist. To get the data that I wanted out of the two plist files, I modified mtail a little and created my own exporter: https://github.com/groob/caching_exporter
It's still mtail, and it obeys the same command line flags and the same *.mtail rule files, but it also collects some metrics from the plists. Here are a few metrics exposed from the Config plist:
# HELP caching_data data cached by server.
# TYPE caching_data gauge
caching_data{type="Books"} 3.9385975e+07
caching_data{type="Mac Software"} 7.3859516e+07
caching_data{type="Movies"} 0
# HELP caching_status_active whether caching server is currently running
# TYPE caching_status_active gauge
caching_status_active 1
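The plist-to-gauge step can be sketched with the standard library's plistlib; the key name below is a made-up stand-in, not the real Config.plist/LastState.plist schema:

```python
import io
import plistlib

# A made-up plist fragment standing in for the server's state plist.
PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Active</key>
    <true/>
</dict>
</plist>
"""

state = plistlib.load(io.BytesIO(PLIST))
# Gauges are point-in-time values, so booleans map naturally to 0/1.
caching_status_active = 1 if state["Active"] else 0
print("caching_status_active %d" % caching_status_active)
```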

Prometheus

Now that we have an exporter running, let's configure Prometheus. The official repo provides binaries for Linux and OS X, but I prefer the Docker container:
docker pull prom/prometheus
docker run -d --name prometheus -p 9090:9090 -v $(pwd)/config:/prometheus-config prom/prometheus -config.file=/prometheus-config/prometheus.yml
And a sample config file:
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  labels:
    monitor: 'devbox'

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'caching-server'

    # Override the global default and scrape targets from this job every 30 seconds.
    scrape_interval: 30s
    scrape_timeout: 10s

    target_groups:
      - targets: ['caching-server-url:3903']
