Separate logging level for each request

Original author: Ivan Iakimov
  • Translation
While reading the ThoughtWorks Technology Radar, I came across the "Log level per request" technique. We use logging extensively at Confirmit, so I became curious about how this functionality could be implemented.

Description of the problem


Let's see what this is all about. Suppose we have a web service. At some point it starts failing on production servers, but only for some requests. For example, crashes occur only for a specific user, or only at a particular access point... We need to find the reason, and this is where logging comes to the rescue.

We can insert enough logging statements into our code to understand the cause of the problem. These statements usually associate a level with each message (Debug, Info, Warning, ...). In addition, the log itself has its own level. All messages with levels at or above the log level are written to the log sink (file, database, ...); messages below the log level are simply dropped.
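To make the filtering rule concrete, here is a minimal sketch (a hypothetical `LevelFilter` class, not log4net code) of how a level threshold decides whether a message is written:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of level-threshold filtering; not log4net's actual
// implementation. Levels are ordered from most to least verbose.
public static class LevelFilter
{
    static readonly List<string> Order = new List<string>
        { "DEBUG", "INFO", "WARN", "ERROR" };

    // A message is written only if its level is at or above the sink's level.
    public static bool ShouldLog(string messageLevel, string sinkLevel)
        => Order.IndexOf(messageLevel) >= Order.IndexOf(sinkLevel);
}
```

log4net orders its levels the same way: a logger set to Warning writes Warning and Error messages and drops Info and Debug ones.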

When our application is working normally, we want as few entries in the log as possible so that its size stays small. At the same time, when the application fails, we want enough records in the log to understand the cause of the problem. The difficulty is that we usually set a single logging level for all loggers in the application. While everything is in order, we keep this level high (for example, Warning). When we need to find the cause of a failure, we set it lower (for example, Debug).

One level of logging per application


When we set the application's logging level low, our log storage fills up with a mass of messages. These messages come from different requests and are intermixed, since requests can be serviced simultaneously. This leads to several potential problems:

  • How do we separate the messages of problematic requests from the messages of all other requests?
  • Requests that do not fail still spend time writing to log storage, even though these messages will never be used.
  • Messages from successful requests occupy space in log storage, again without ever being used.

Frankly, these difficulties are not insurmountable. To separate the messages of "good" requests from those of "bad" requests, you can use a correlation id. All modern log processing systems allow filtering records by various criteria, including this one.
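As an illustration of that approach (the `[cid:...]` line format and the `LogFilter` helper are my assumptions, not part of the article's code), pulling one request's entries out of a mixed log is a simple filter:

```csharp
using System;
using System.Linq;

// Hypothetical illustration: if every log line carries a correlation id,
// the entries of one "bad" request can be extracted with a simple filter.
public static class LogFilter
{
    public static string[] ForCorrelationId(string[] lines, string correlationId)
        => lines.Where(l => l.Contains($"[cid:{correlationId}]")).ToArray();
}
```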

Performance is also not usually a major issue: logging systems support asynchronous writes, so the effect of massive logging is rarely destructive.

And disk space is now relatively cheap, so storing many records is not a problem, especially if we can delete old records from time to time.

However, can we improve this system? Can we set a separate logging level for each request, depending on certain conditions? That is the task I would like to explore here.

Separate logging level for each request


Let me spell out the requirements for the solution I will implement. There should be a way to set a separate logging level for each request. There should be a flexible way to specify the conditions that determine the choice of level for a particular request. And it should be possible to change these conditions at runtime, without restarting the application.

So, the task is set. Let's start.

I will create a simple .NET Core based web service. It will have a single controller:

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    ...

    // GET api/values
    [HttpGet]
    public ActionResult<IEnumerable<string>> Get()
    {
        Logger.Info("Executing Get all");
        return new[] { "value1", "value2" };
    }

    // GET api/values/5
    [HttpGet("{id}")]
    public ActionResult<string> Get(int id)
    {
        Logger.Info($"Executing Get {id}");
        return "value";
    }
}

We will discuss the implementation of the Logger property later. For logging in this application I will use the log4net library. This library provides an interesting feature: level inheritance. In short, if the library's configuration says that the logger named X must have the Info logging level, then all loggers whose names start with X (for example, X.Y, X.Z, X.A.B) inherit this level.
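A simplified model of this inheritance (my own sketch, not log4net internals) is a walk up the dotted logger name until a configured ancestor is found:

```csharp
using System;
using System.Collections.Generic;

// Simplified model of log4net's level inheritance (not the real code):
// a logger without an explicitly configured level walks up its dotted
// name until it finds an ancestor with a configured level, falling back
// to the root level.
public static class LevelHierarchy
{
    public static string EffectiveLevel(
        string loggerName, IDictionary<string, string> configured, string rootLevel)
    {
        var name = loggerName;
        while (true)
        {
            if (configured.TryGetValue(name, out var level)) return level;
            var dot = name.LastIndexOf('.');
            if (dot < 0) return rootLevel;       // reached the top: use root
            name = name.Substring(0, dot);       // strip the last segment
        }
    }
}
```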

From here comes the original idea. For each request, I will somehow determine the required logging level. Then I could create a new named logger in the log4net configuration with that level, and all logger objects created within the current request would get names starting with the name of this new logger. The only problem is that log4net never deletes loggers: once created, they exist for the entire lifetime of the application. For this reason, I instead create one logger for each possible logging level up front:

<?xml version="1.0" encoding="utf-8" ?>
<log4net>
  <appender name="Console" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <!-- Pattern to output the caller's file name and line number -->
      <conversionPattern value="%5level [%thread] (%file:%line) - %message%newline" />
    </layout>
  </appender>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="RequestLoggingLog.log" />
    <appendToFile value="true" />
    <maximumFileSize value="100KB" />
    <maxSizeRollBackups value="2" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%level %thread %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="WARN" />
    <appender-ref ref="Console" />
    <appender-ref ref="RollingFile" />
  </root>
  <logger name="EdlinSoftware.Log.Error">
    <level value="ERROR" />
  </logger>
  <logger name="EdlinSoftware.Log.Warning">
    <level value="WARN" />
  </logger>
  <logger name="EdlinSoftware.Log.Info">
    <level value="INFO" />
  </logger>
  <logger name="EdlinSoftware.Log.Debug">
    <level value="DEBUG" />
  </logger>
</log4net>

Now I have several loggers named EdlinSoftware.Log.XXXX. These names will serve as prefixes for the logger names used in requests. To avoid collisions between requests, I store the computed prefix for the current request in an AsyncLocal<string> instance. The value of this object is set in a new ASP.NET Core middleware:

app.Use(async (context, next) =>
{
    try
    {
        LogSupport.LogNamePrefix.Value = await LogSupport.GetLogNamePrefix(context);
        await next();
    }
    finally
    {
        LogSupport.LogNamePrefix.Value = null;
    }
});
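The reason AsyncLocal avoids collisions between concurrent requests can be shown with a small self-contained sketch (the names below are my assumptions, not the article's code): each async flow keeps its own value, even across awaits.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal demonstration that AsyncLocal keeps a separate value per async
// flow, which is why concurrent requests don't see each other's prefix.
public static class AsyncLocalDemo
{
    static readonly AsyncLocal<string> Prefix = new AsyncLocal<string>();

    public static async Task<string> HandleRequest(string prefix)
    {
        Prefix.Value = prefix;      // set the "per-request" value
        await Task.Delay(10);       // simulate asynchronous work
        return Prefix.Value;        // still this flow's own value
    }

    public static async Task<(string, string)> RunTwoRequests()
    {
        // Start two "requests" that overlap in time.
        var a = HandleRequest("EdlinSoftware.Log.Debug");
        var b = HandleRequest("EdlinSoftware.Log.Info");
        return (await a, await b);
    }
}
```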

When this value is set, it is very easy to create loggers with the desired name prefix:

public static class LogSupport
{
    public static readonly AsyncLocal<string> LogNamePrefix = new AsyncLocal<string>();

    public static ILog GetLogger(string name)
    {
        return GetLoggerWithPrefixedName(name);
    }

    public static ILog GetLogger(Type type)
    {
        return GetLoggerWithPrefixedName(type.FullName);
    }

    private static ILog GetLoggerWithPrefixedName(string name)
    {
        if (!string.IsNullOrWhiteSpace(LogNamePrefix.Value))
        {
            name = $"{LogNamePrefix.Value}.{name}";
        }

        return LogManager.GetLogger(typeof(LogSupport).Assembly, name);
    }

    ....
}

Now it is clear how to get a logger instance in our controller:

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    private ILog _logger;

    private ILog Logger
    {
        get => _logger ?? (_logger = LogSupport.GetLogger(typeof(ValuesController)));
    }

    ....
}

There is only one thing left to do: define the rules by which we choose the logging level for each request. This should be a fairly flexible mechanism. The basic idea is to use C# scripting. I will create a LogLevelRules.json file, where I define a set of "rule - logging level" pairs:

[
  {
    "logLevel": "Debug",
    "ruleCode": "context.Request.Path.Value == \"/api/values/1\""
  },
  {
    "logLevel": "Debug",
    "ruleCode": "context.Request.Path.Value == \"/api/values/3\""
  }
]

Here logLevel is the desired logging level, and ruleCode is C# code that returns a boolean value for a given request. The application runs the rule codes one by one. The first rule whose code returns true determines the logging level. If all rules return false, the default level is used.
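This first-match-wins evaluation can be sketched without Roslyn by standing plain delegates in for the compiled scripts (all names below are my assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of first-match-wins rule evaluation, with plain delegates
// standing in for the compiled C# scripts.
public sealed class LevelRule
{
    public string LogLevel { get; }
    public Func<string, bool> Matches { get; }   // request path -> does rule apply?

    public LevelRule(string level, Func<string, bool> matches)
    {
        LogLevel = level;
        Matches = matches;
    }
}

public static class RuleEvaluator
{
    // Returns the level of the first matching rule, or null (use the default).
    public static string LevelFor(string requestPath, IReadOnlyList<LevelRule> rules)
        => rules.FirstOrDefault(r => r.Matches(requestPath))?.LogLevel;
}
```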

To create delegates from the string representation of the rules, the CSharpScript class is used:

public class Globals
{
    public HttpContext context;
}

internal class LogLevelRulesCompiler
{
    public IReadOnlyList<LogLevelRule> Compile(IReadOnlyList<LogLevelRuleDescription> levelRuleDescriptions)
    {
        var result = new List<LogLevelRule>();

        foreach (var levelRuleDescription in levelRuleDescriptions ?? new LogLevelRuleDescription[0])
        {
            var script = CSharpScript.Create<bool>(levelRuleDescription.RuleCode, globalsType: typeof(Globals));
            ScriptRunner<bool> runner = script.CreateDelegate();
            result.Add(new LogLevelRule(levelRuleDescription.LogLevel, runner));
        }

        return result;
    }
}

internal sealed class LogLevelRule
{
    public string LogLevel { get; }
    public ScriptRunner<bool> Rule { get; }

    public LogLevelRule(string logLevel, ScriptRunner<bool> rule)
    {
        LogLevel = logLevel ?? throw new ArgumentNullException(nameof(logLevel));
        Rule = rule ?? throw new ArgumentNullException(nameof(rule));
    }
}

Here the Compile method receives the list of objects read from the LogLevelRules.json file and creates a runner delegate for each rule.

This list of delegates can be saved:

LogSupport.LogLevelSetters = new LogLevelRulesCompiler().Compile(
    new LogLevelRulesFileReader().ReadFile("LogLevelRules.json")
);

and used further:

public static class LogSupport
{
    internal static IReadOnlyList<LogLevelRule> LogLevelSetters = new LogLevelRule[0];

    ...

    public static async Task<string> GetLogNamePrefix(HttpContext context)
    {
        var globals = new Globals
        {
            context = context
        };

        string result = null;

        foreach (var logLevelSetter in LogLevelSetters)
        {
            if (await logLevelSetter.Rule(globals))
            {
                result = $"EdlinSoftware.Log.{logLevelSetter.LogLevel}";
                break;
            }
        }

        return result;
    }
}

Thus, when the application starts, we read LogLevelRules.json, convert its contents into a list of delegates using the CSharpScript class, and save this list in the LogSupport.LogLevelSetters field. Then, for each request, we execute the delegates from this list to get the logging level.

The only thing left to do is track changes to the LogLevelRules.json file. If we want to set the logging level for some requests, we add a new rule to this file. To make the application apply these changes without restarting, we need to watch the file:

var watcher = new FileSystemWatcher
{
    Path = Directory.GetCurrentDirectory(),
    Filter = "*.json",
    NotifyFilter = NotifyFilters.LastWrite
};
watcher.Changed += (sender, eventArgs) =>
{
    // Wait until the application that is modifying the file releases it.
    Thread.Sleep(1000);
    LogSupport.LogLevelSetters = new LogLevelRulesCompiler().Compile(
        new LogLevelRulesFileReader().ReadFile("LogLevelRules.json")
    );
};
watcher.EnableRaisingEvents = true;

It should be noted that for brevity I omitted the thread synchronization code for access to the LogSupport.LogLevelSetters field. In practice, such synchronization is required.
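One possible shape for that synchronization (my own sketch, not code from the article's repository) is to publish the freshly compiled list atomically with Volatile, so readers always see either the old or the new complete list, never a torn state:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: publish a freshly compiled, immutable rule list with
// Volatile.Write; readers use Volatile.Read. Because the list itself is
// never mutated after publication, no locks are needed.
public static class RuleStore
{
    static IReadOnlyList<string> _rules = Array.Empty<string>();

    public static void Publish(IReadOnlyList<string> newRules)
        => Volatile.Write(ref _rules, newRules);

    public static IReadOnlyList<string> Current
        => Volatile.Read(ref _rules);
}
```

The same pattern applies to LogSupport.LogLevelSetters: the FileSystemWatcher callback would call Publish with the newly compiled rules.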

The complete application code can be found on GitHub .

Disadvantages


The code shown above solves the problem of setting a separate logging level for each request. But it has a number of drawbacks. Let's discuss them.

1. This approach changes the names of the loggers. In the log storage, instead of "MyClassLogger", something like "EdlinSoftware.Log.Debug.MyClassLogger" will be stored. You can live with this, but it is not very convenient. Perhaps the problem can be overcome by changing the log layout.

2. It is now impossible to use static instances of logger classes, since loggers must be created separately for each request. The most serious problem here, in my opinion, is that developers must always keep this in mind: someone can accidentally create a static field with a logger and get strange results. You can address this by creating a wrapper class for loggers instead of using the log4net classes directly. Such a wrapper can create a new log4net logger instance for each operation, so it can safely be used in static fields.
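The wrapper idea can be sketched as follows (the `Demo` and `PerRequestLogger` types are hypothetical; `Demo.Prefix` stands in for LogSupport.LogNamePrefix): the prefixed name is resolved on every call, not when the static field is initialized.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for LogSupport: a mutable per-request prefix and the
// name-prefixing logic from the article.
public static class Demo
{
    public static string Prefix;  // stands in for LogSupport.LogNamePrefix

    public static string FullName(string name)
        => string.IsNullOrEmpty(Prefix) ? name : $"{Prefix}.{name}";
}

// Safe to keep in a static field: it resolves the prefixed logger name
// on every logging call, so the current request's prefix is honored.
public sealed class PerRequestLogger
{
    readonly string _name;
    public readonly List<string> Written = new List<string>();

    public PerRequestLogger(string name) { _name = name; }

    public void Info(string message)
        => Written.Add($"{Demo.FullName(_name)}: {message}");
}
```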

3. The described approach creates many instances of loggers. This consumes memory and processor cycles during creation and garbage collection. Depending on your application, this may or may not be a significant problem.

4. When we change the JSON file with the rules, the rule code may contain errors. It is simple enough to use try-catch blocks so that these errors do not crash the main application. However, we still have to find out that something has gone wrong. There are two types of errors:

  • Compile-time errors when converting the rule code into delegates.
  • Runtime errors thrown by these delegates.

We need to learn about these errors somehow; otherwise our logging will simply not work, and we will not even know about it.
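A minimal sketch of this error-collection idea (all names are my assumptions; real code would catch the compilation exception thrown by the scripting API around CSharpScript.Create): wrap each rule compilation in try-catch and surface the failures instead of dropping them silently.

```csharp
using System;
using System.Collections.Generic;

// Sketch: each "compilation" is a factory that may throw. Good rules are
// kept; failures are collected so they can be logged or reported.
public static class SafeCompiler
{
    public static (List<Func<bool>> Rules, List<string> Errors) Compile(
        IEnumerable<Func<Func<bool>>> factories)
    {
        var rules = new List<Func<bool>>();
        var errors = new List<string>();

        foreach (var factory in factories)
        {
            try { rules.Add(factory()); }
            catch (Exception ex) { errors.Add(ex.Message); }
        }

        return (rules, errors);
    }
}
```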

5. The rule code in the JSON file can, in principle, contain arbitrary instructions. This can potentially lead to security issues, so it is necessary to somehow limit what this code can do. On the other hand, if an attacker can already write directly to your application's files, you have a security problem anyway.

Conclusion


Overall, I did not get the impression that this solution should replace the existing approach to logging. A good tool with the ability to filter log entries can help even when a single logging level is used for the entire application. Nevertheless, I hope this analysis gives you food for thought.
