You’re tailing logs from a pod that’s misbehaving. You run kubectl logs and get hit with this:

{"timestamp":"2026-03-28T14:32:01.443Z","level":"error","message":"request failed","method":"POST","path":"/api/orders","status":500,"duration_ms":342,"trace_id":"abc123","error":"connection refused"}

That’s one log line. One. You’re squinting at it trying to find the status code buried somewhere in the middle of a wall of curly braces and quoted keys. Now imagine a hundred of these scrolling past. It’s technically structured, but it’s hostile to the person who actually needs to read it.

Or worse, the other extreme — unstructured text:

2026-03-28 14:32:01 ERROR request failed POST /api/orders 500 342ms abc123 connection refused

Good luck writing a parser for that. Which field is which? Is 342ms the duration or the request body size? Did someone add a new field last week that shifted everything over by one column? Unstructured logs are easy to write and nearly impossible to use at scale.

There’s a middle ground, and it’s been around for over a decade.

Containers log to stdout — but the format is up to you

The Twelve-Factor App established the convention: applications write logs to stdout, and the execution environment handles routing and storage. In Kubernetes, the container runtime captures stdout and stderr, stores them as files on the node, and kubectl logs reads them back. Cluster-level log shippers — Fluentd, Fluent Bit, Vector — collect from every container and forward to your aggregator of choice.

This is settled. Nobody seriously argues that containerized services should write to log files on disk. The interesting question isn’t where you log — it’s what format.

And most teams pick one of two defaults without thinking much about it: unstructured text (because it’s easy) or JSON (because someone said “structured logging” in a meeting). Both have real problems.

Three formats, same log event

Here’s the same event in three formats. Judge for yourself which one you’d want to read at 2 AM.

Plain text:

2026-03-28 14:32:01 ERROR request failed POST /api/orders 500 342ms

JSON:

{"ts":"2026-03-28T14:32:01Z","level":"error","msg":"request failed","method":"POST","path":"/api/orders","status":500,"duration_ms":342}

logfmt:

ts=2026-03-28T14:32:01Z level=error msg="request failed" method=POST path=/api/orders status=500 duration_ms=342

The logfmt version is immediately scannable. You can see every field and its value at a glance. No braces, no commas, no quoted keys. It reads left to right like a sentence. And unlike the plain text version, there’s no ambiguity about what each value means — every value has a key.

Why logfmt works for containers

logfmt was created at Heroku and documented by Brandur Leach. The format is dead simple: key=value pairs separated by spaces, one event per line. Values with spaces get quoted. That’s it. There’s no spec to memorize because there’s barely a spec at all.
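The whole format fits in a few lines of code. Here's an illustrative encoder in Python (a sketch, not logfmt.net) showing the single rule that exists: key=value pairs, space-separated, quoting only when a value needs it.

```python
def encode_logfmt(fields: dict) -> str:
    """Serialize a flat dict as one logfmt line."""
    parts = []
    for key, value in fields.items():
        value = str(value)
        # Quote only when the value would be ambiguous otherwise
        if " " in value or "=" in value or '"' in value or value == "":
            value = '"' + value.replace('"', '\\"') + '"'
        parts.append(f"{key}={value}")
    return " ".join(parts)

print(encode_logfmt({
    "ts": "2026-03-28T14:32:01Z",
    "level": "error",
    "msg": "request failed",
    "status": 500,
}))
# ts=2026-03-28T14:32:01Z level=error msg="request failed" status=500
```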

Here’s why it’s a particularly good fit for container environments:

Human-readable in terminal output. When you run kubectl logs or docker logs, you’re reading raw text in a terminal. logfmt reads naturally — your eyes can scan key-value pairs without mentally parsing nested structure. JSON requires either a pretty-printer or pattern matching against walls of punctuation.

Grep-friendly. This is the one that wins me over every time:

# Find all 500 errors
kubectl logs deploy/api | grep "status=500"

# Find slow requests (duration over 1000 ms)
kubectl logs deploy/api | awk -F'duration_ms=' 'NF > 1 && $2+0 > 1000'

grep "status=500" just works. With JSON, you’re reaching for jq every time, which is powerful but not what you want when you’re triaging an incident.

Compact. logfmt trims the punctuation JSON insists on: for every key, two quotes, a colon, and usually a comma. For the sample event above that works out to roughly 18% fewer bytes, and the saving grows with the number of fields. When you’re shipping millions of log lines per minute from a cluster to a remote aggregator, that bandwidth and storage difference adds up.
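The size difference is easy to measure against the sample lines above, with nothing but the standard library:

```python
import json

# The same event from the examples above, serialized both ways.
event = {"ts": "2026-03-28T14:32:01Z", "level": "error", "msg": "request failed",
         "method": "POST", "path": "/api/orders", "status": 500, "duration_ms": 342}

as_json = json.dumps(event, separators=(",", ":"))
as_logfmt = ('ts=2026-03-28T14:32:01Z level=error msg="request failed" '
             'method=POST path=/api/orders status=500 duration_ms=342')

print(len(as_json), len(as_logfmt))   # JSON is the larger of the two
print(f"{1 - len(as_logfmt) / len(as_json):.0%} smaller")
```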

Flat structure maps to index fields. Log aggregators — Loki, Splunk, Datadog — index logs as flat key-value fields. logfmt maps directly to that model with no transformation. JSON supports nesting, which means your log pipeline either needs to flatten it (lossy, configuration-heavy) or index it nested (expensive, rarely useful). logfmt avoids the problem entirely by not supporting nesting in the first place. The constraint is the feature.
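To make the nesting problem concrete, here's the kind of flattening step a JSON pipeline has to bolt on before indexing. This is a sketch; the underscore-joining convention is an arbitrary choice, which is exactly the kind of decision logfmt never forces on you.

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Collapse nested JSON objects into flat keys (http.status -> http_status)."""
    flat = {}
    for key, value in obj.items():
        full = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, full + "_"))
        else:
            flat[full] = value
    return flat

nested = {"http": {"method": "POST", "status": 500}, "msg": "request failed"}
print(flatten(nested))
# {'http_method': 'POST', 'http_status': 500, 'msg': 'request failed'}
```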

Eliminates format design decisions. With unstructured text, you’re constantly making implicit choices: what order do fields go in? What delimiter do you use? How do you handle values with spaces? With logfmt, the answer is always the same: key=value. New field? Add another pair. Done.

Natively parsed by the tools you’re already running. Grafana Loki has a logfmt parser stage. Splunk auto-parses key-value pairs. Fluentd and Fluent Bit have logfmt parser plugins. Vector has a logfmt codec. Datadog auto-detects it. You don’t need custom parsing rules — the format is recognized out of the box.
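Parsing is about as trivial as writing. Here's a sketch of a serviceable logfmt decoder in a few lines of Python; shlex handles the quoting, and the rest is string splitting. The tools above are doing essentially this.

```python
import shlex

def parse_logfmt(line: str) -> dict:
    # shlex.split respects double quotes, so msg="request failed" stays one token
    return dict(token.split("=", 1) for token in shlex.split(line) if "=" in token)

line = 'ts=2026-03-28T14:32:01Z level=error msg="request failed" status=500'
print(parse_logfmt(line))
# {'ts': '2026-03-28T14:32:01Z', 'level': 'error', 'msg': 'request failed', 'status': '500'}
```

Note that every value comes back as a string; deciding that status should be numeric is the aggregator's job, not the format's.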

logfmt.net — bringing it to .NET

I built logfmt.net because when I went looking for a logfmt implementation in .NET, I didn’t find one that worked the way I wanted. Most .NET logging libraries default to JSON or their own bespoke formats, and configuring them to emit logfmt meant fighting the abstraction rather than using it.

logfmt.net is slim by design — it plugs straight into Microsoft.Extensions.Logging and ships an OpenTelemetry exporter, so there’s no new framework to learn and nothing weird to configure. It handles both encoding and decoding, and it’s fast enough that you won’t think about it.

Basic setup

using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Add logfmt as the console logging format
builder.Logging.AddLogfmt();

var app = builder.Build();
app.Run();

That’s it. Your application now emits logfmt to stdout, which is exactly where Kubernetes expects it.

Using ILogger

public class OrderController : ControllerBase
{
    private readonly ILogger<OrderController> _logger;

    public OrderController(ILogger<OrderController> logger) => _logger = logger;

    [HttpPost]
    public IActionResult CreateOrder(OrderRequest request)
    {
        _logger.LogInformation("Order created for {CustomerId} with {ItemCount} items",
            request.CustomerId, request.ItemCount);

        // Output:
        // ts=2026-03-28T14:32:01Z level=info msg="Order created for cust-42 with 3 items" CustomerId=cust-42 ItemCount=3

        return Ok();
    }
}

The structured parameters you pass to ILogger become logfmt key-value pairs. No serialization ceremony, no format strings to maintain.

OpenTelemetry integration

If you’re already using OpenTelemetry for your logging pipeline:

builder.Logging.AddOpenTelemetry(otel =>
{
    otel.AddLogfmtConsoleExporter();
});

This plugs logfmt into the OpenTelemetry exporter model, so it plays nicely with the rest of your observability stack.

Severity filtering — zero-alloc fast path

One thing I cared about when building this: if a log call is filtered out by severity (you’re logging at Debug but the level is set to Information), the library should do zero work. No string formatting, no allocation, nothing. logfmt.net achieves that — filtered calls are a no-op.
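The same concern exists in any logging stack. As an analogy from outside .NET: in Python's stdlib logging, a filtered-out call still evaluates its arguments before the call is dropped, unless you guard it yourself. That hidden cost is what a no-op fast path is designed to eliminate.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

calls = 0
def expensive_summary():
    global calls
    calls += 1
    return "snapshot"

# Filtered by level, but the argument is still computed before the call is dropped.
log.debug("state=%s", expensive_summary())

# Guarding skips the work entirely: the moral equivalent of a zero-alloc no-op.
if log.isEnabledFor(logging.DEBUG):
    log.debug("state=%s", expensive_summary())

print(calls)  # 1, only the unguarded call paid the cost
```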

The library is on NuGet and listed in BetterStack’s logfmt implementations guide. It’s MIT-licensed. If you’re doing .NET in containers, give it a look.

An old format for a modern problem

logfmt isn’t new. Heroku was using it over a decade ago. But it’s underappreciated — especially in ecosystems like .NET where JSON has become the unquestioned default for structured logging. JSON is fine for APIs. It’s less fine for the thing you’re reading at 2 AM in a terminal while your pager is going off.

If you’re building containerized services and you haven’t deliberately chosen your log format, you probably ended up with whatever your framework defaults to. That’s worth revisiting. The format you write to stdout is the interface between your application and your entire observability pipeline — it affects what you can grep, what your aggregator can parse, how much you pay for log storage, and how fast you can find the problem when something breaks.

logfmt won’t change your architecture. But it might make your next incident a little less painful, and honestly, that’s the kind of small improvement that compounds.