
Monitoring and metrics for the Play Framework with Dropwizard Metrics
At some point in the development of an application, each of us reaches the moment when we need more insight into what is happening inside the application, or the ability to monitor it. For the Play Framework there is already a ready-made solution: the excellent open-source library Kamon paired with the kamon-play module.
But today we are going to look at an alternative solution: integrating Dropwizard Metrics (previously known as Codahale Metrics) with the Play Framework.
Integration
So I started looking for ready-made solutions that could help me integrate these two tools.
I found some incomplete solutions:
- metrics-scala - An excellent library with an elegant API and good Scala support, but it lacks Play Framework integration.
- metrics-play - One of the first results Google returns for this query, but the library is no longer maintained and is not compatible with the latest versions of the Play Framework and Dropwizard Metrics. However, there is a fork that has been updated to the latest versions, so I decided to try it.
Unfortunately, the metrics-play module provides only a basic subset of what the Dropwizard Metrics ecosystem offers. This may be enough if you need simple metrics accessible through a REST API, but I had higher requirements, so I decided to supplement this module by writing the following ones:
- metrics-reporter-play - Support for Metrics reporters in the Play Framework.
- metrics-annotation-play - Metrics annotation support for the Play Framework via Guice AOP.
We will look at both of these in more detail below.
Support for Metrics reporters in the Play Framework
Metrics provides a powerful toolkit for monitoring the behavior of critical components in a production environment. It also provides a means of sending measured data through reporters. Metrics reporters are a great way to send data from the application itself to your preferred metrics storage and visualization system.
At the time of writing, the supported reporters are as follows:
- console - Periodically sends data to the application's standard output stream.
- graphite - Periodically sends data to Graphite.
Dropwizard Metrics and the community also provide other reporters, for example the Ganglia Reporter, CSV Reporter, InfluxDB Reporter, ElasticSearch Reporter, and others.
Adding factories for further reporters to the library is an easy task.
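As a rough illustration, a reporter factory in this style might look like the sketch below. The ReporterFactory trait and its signature are my own assumptions for the sake of the example, not the library's actual API; the ConsoleReporter builder calls are the standard Dropwizard Metrics ones.
import java.util.concurrent.TimeUnit

import com.codahale.metrics.{ConsoleReporter, MetricRegistry, ScheduledReporter}

// Hypothetical factory abstraction; the real library's interface may differ.
trait ReporterFactory {
  def build(registry: MetricRegistry): ScheduledReporter
}

class ConsoleReporterFactory extends ReporterFactory {
  def build(registry: MetricRegistry): ScheduledReporter =
    ConsoleReporter.forRegistry(registry)        // standard Dropwizard builder
      .convertRatesTo(TimeUnit.SECONDS)          // rates reported per second
      .convertDurationsTo(TimeUnit.MILLISECONDS) // durations reported in ms
      .build()
}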
Metrics annotation support for Play Framework via Guice AOP
By default, in order to use metrics you need to ask the MetricRegistry for a metric, create a timing context, and manage it manually. For instance:
def doSomethingImportant() = {
  val timer = registry.timer(name(classOf[WebProxy], "get-requests"))
  val context = timer.time()
  try {
    // critical business logic
  } finally {
    context.stop()
  }
}
To keep everything DRY, the metrics-annotation-play module will create and correctly invoke a Timer for @Timed, a Meter for @Metered, a Counter for @Counted, and a Gauge for @Gauge. @ExceptionMetered is also supported; it creates a Meter that measures the rate at which exceptions are thrown.
The previous example can be rewritten as follows:
@Timed
def doSomethingImportant = {
  // critical business logic
}
or you can annotate the entire class, which will create metrics for all of its declared methods:
@Timed
class SuperCriticalFunctionality {
  def doSomethingImportant = {
    // critical business logic
  }
}
This functionality is supported only for classes instantiated by Guice, and the usual limitations of Guice AOP apply (for example, intercepted classes and methods must not be final, and instances must be created by the injector).
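For completeness, here is a minimal sketch of @Gauge and @ExceptionMetered in use. The JobQueue class and its methods are hypothetical; the annotations themselves come from the standard com.codahale.metrics.annotation package.
import java.util.concurrent.ConcurrentLinkedQueue

import com.codahale.metrics.annotation.{ExceptionMetered, Gauge}

class JobQueue {
  private val queue = new ConcurrentLinkedQueue[String]()

  // Registers a gauge that reports the current queue depth each time it is read.
  @Gauge(name = "queue-size")
  def queueSize: Int = queue.size()

  // Creates a Meter that is marked each time this method throws an exception.
  @ExceptionMetered
  def next(): String = {
    val head = queue.poll()
    if (head == null) throw new NoSuchElementException("queue is empty")
    head
  }
}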
Usage example
Let's try to use the library in a real application and see how everything works. The source code of the application can be found here.
I am using the activator play-scala template with the sbt plugin. We need to add JCenter to the resolvers list, along with the dependencies:
name := """play_metrics_example"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.8"
resolvers += Resolver.jcenterRepo
libraryDependencies ++= Seq(
  "de.khamrakulov.metrics-reporter-play" %% "reporter-core" % "1.0.0",
  "de.khamrakulov" %% "metrics-annotation-play" % "1.0.2",
  "org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test
)
For this example I will use the console reporter; let's add its configuration to application.conf:
metrics {
  jvm = false
  logback = false
  reporters = [
    {
      type: "console"
      frequency: "10 seconds"
    }
  ]
}
As you can see, I deactivated the jvm and logback metrics and, so that our own metrics do not get lost among them, added a reporter that prints the metrics to stdout at 10-second intervals.
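For comparison, a graphite reporter entry might look like the following; the exact keys here are my assumption, modeled on Dropwizard's own graphite reporter configuration, so check the module's documentation before relying on them.
metrics {
  reporters = [
    {
      type: "graphite"
      host: "localhost"
      port: 2003
      prefix: "play-app"
      frequency: "1 minute"
    }
  ]
}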
Now we can start using annotations. I will annotate the index method of HomeController:
import javax.inject.{Inject, Singleton}

import com.codahale.metrics.annotation.{Counted, Metered, Timed}
import play.api.mvc._

@Singleton
class HomeController @Inject() extends Controller {

  @Counted(monotonic = true)
  @Timed
  @Metered
  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }
}
In practice you should not use all of these annotations at once, since @Timed already combines a Counter and a Meter, but I did so to demonstrate the possibilities.
After starting the application and requesting the main page, the reporter should print the metrics to stdout:
-- Counters --------------------------------------------------------------------
controllers.HomeController.index.current
             count = 1

-- Meters ----------------------------------------------------------------------
controllers.HomeController.index.meter
             count = 1
         mean rate = 0.25 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second

-- Timers ----------------------------------------------------------------------
controllers.HomeController.index.timer
             count = 1
         mean rate = 0.25 calls/second
     1-minute rate = 0.00 calls/second
     5-minute rate = 0.00 calls/second
    15-minute rate = 0.00 calls/second
               min = 14.59 milliseconds
               max = 14.59 milliseconds
              mean = 14.59 milliseconds
            stddev = 0.00 milliseconds
            median = 14.59 milliseconds
              75% <= 14.59 milliseconds
              95% <= 14.59 milliseconds
              98% <= 14.59 milliseconds
              99% <= 14.59 milliseconds
            99.9% <= 14.59 milliseconds
Of course, you can also view the metrics via the REST API; to do so, you need to add the following to the routes file:
GET /admin/metrics com.kenshoo.play.metrics.MetricsController.metrics
What's next?
Automatic Health Checks
Metrics also supports automatic health checks. More information can be found in the official documentation.
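As a taste of what that looks like, below is a minimal health check built on the standard com.codahale.metrics.health API. The DatabaseHealthCheck class and its ping function are hypothetical.
import com.codahale.metrics.health.{HealthCheck, HealthCheckRegistry}

// Hypothetical check that reports healthy only while the database answers pings.
class DatabaseHealthCheck(ping: () => Boolean) extends HealthCheck {
  override def check(): HealthCheck.Result =
    if (ping()) HealthCheck.Result.healthy()
    else HealthCheck.Result.unhealthy("database did not respond to ping")
}

object HealthCheckExample extends App {
  val healthChecks = new HealthCheckRegistry()
  healthChecks.register("database", new DatabaseHealthCheck(() => true))
  // runHealthChecks() executes every registered check and returns the results.
  println(healthChecks.runHealthChecks())
}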
More reporters
Building a proper metrics environment requires support for more reporters; this should be another area of development for the library.
Proper Future Support
At the moment, in order to measure the execution time of a Future, you have to perform all the steps by hand. Proper Future support would suit the asynchronous Play Framework well and would make a good addition.
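To make the current situation concrete, a hand-rolled helper for timing a Future might look like this; timedFuture is my own hypothetical helper, not something the library provides.
import scala.concurrent.{ExecutionContext, Future}

import com.codahale.metrics.MetricRegistry

object FutureMetrics {
  // Times an asynchronous block from invocation until the Future completes,
  // whether it succeeds or fails.
  def timedFuture[T](registry: MetricRegistry, name: String)(body: => Future[T])
                    (implicit ec: ExecutionContext): Future[T] = {
    val context = registry.timer(name).time()
    val future = body
    future.onComplete(_ => context.stop())
    future
  }
}
With such a helper, timedFuture(registry, "web-client.get") { callBackend() } would record the full asynchronous latency rather than just the time needed to construct the Future.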
HdrHistogram Support
HdrHistogram provides an alternative high-quality reservoir implementation that can be used for Histogram and Timer metrics.
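A sketch of what that could look like, assuming the hdrhistogram-metrics-reservoir library is on the classpath; treat the artifact and class names as assumptions to verify against that project.
import com.codahale.metrics.{MetricRegistry, Timer}
import org.mpierce.metrics.reservoir.hdrhistogram.HdrHistogramReservoir

object HdrTimerExample {
  val registry = new MetricRegistry()
  // A Timer backed by an HdrHistogram reservoir instead of the default
  // exponentially decaying one.
  val timer: Timer = new Timer(new HdrHistogramReservoir())
  registry.register("requests.hdr-timer", timer)
}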