
Modern Web Observability in Home Lab


Monitoring a self-hosted stack in a home lab isn’t just about uptime; it’s about knowing exactly who is knocking on your digital door. Also, I’m a data nerd and I like to collect data to analyze later. After refining my setup, I’ve moved from basic text logs to a fully enriched, geographical observability dashboard using the PLG stack (Promtail, Loki, Grafana).

I’ll show you how to build an Nginx gateway that logs JSON with GeoIP data and visualizes it in Grafana.

The Architecture #

Nginx + PLG Stack

  1. Nginx identifies the visitor’s location using MaxMind databases and logs it as JSON
  2. Promtail ships these JSON objects to Loki
  3. Loki 3.0 handles the storage and retention
  4. Grafana parses the coordinates and plots them in real-time

Step 1: Nginx JSON Logging & GeoIP #

Standard Nginx logs are hard to parse. Switching to JSON makes them machine-readable. By integrating the libnginx-mod-http-geoip2 module and MaxMind databases, we can inject location data directly into every log line.

Nginx configuration #

First, we need the libnginx-mod-http-geoip2 module.
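On Debian/Ubuntu the setup looks roughly like this — a sketch, not a definitive recipe: the package names are the Debian ones, and the `/etc/nginx/geo` path is simply where my config below expects the databases.

```shell
# Install the GeoIP2 dynamic module for Nginx (Debian/Ubuntu package name)
sudo apt install libnginx-mod-http-geoip2

# Directory the nginx.conf below expects the MaxMind databases in
sudo mkdir -p /etc/nginx/geo

# The GeoLite2 databases require a (free) MaxMind account; geoipupdate
# can fetch and refresh them once your license key is in /etc/GeoIP.conf.
# By default it writes to /var/lib/GeoIP, so copy or symlink the .mmdb
# files into /etc/nginx/geo afterwards.
sudo apt install geoipupdate
sudo geoipupdate

# The module is loaded via /etc/nginx/modules-enabled/; verify and reload
sudo nginx -t && sudo systemctl reload nginx
```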

In my /etc/nginx/nginx.conf, I defined a specific JSON format:

geoip2 /etc/nginx/geo/GeoLite2-Country.mmdb {
    auto_reload 5m;
    $geoip2_data_country_code country iso_code;
}

geoip2 /etc/nginx/geo/GeoLite2-City.mmdb {
    auto_reload 5m;
    $geoip2_data_city_name    city names en;
    $geoip2_data_latitude     location latitude;
    $geoip2_data_longitude    location longitude;
    $geoip2_data_country_code country iso_code;
}

log_format json escape=json '{'
    '"time_local": "$time_local", '
    '"remote_addr": "$remote_addr", '
    '"request_uri": "$request_uri", '
    '"status": "$status", '
    '"server_name": "$server_name", '
    '"request_time": "$request_time", '
    '"request_method": "$request_method", '
    '"bytes_sent": "$bytes_sent", '
    '"http_host": "$http_host", '
    '"http_x_forwarded_for": "$http_x_forwarded_for", '
    '"http_cookie": "$http_cookie", '
    '"server_protocol": "$server_protocol", '
    '"upstream_addr": "$upstream_addr", '
    '"upstream_response_time": "$upstream_response_time", '
    '"ssl_protocol": "$ssl_protocol", '
    '"ssl_cipher": "$ssl_cipher", '
    '"http_user_agent": "$http_user_agent", '
    '"remote_user": "$remote_user", '
    '"geoip_country_code": "$geoip2_data_country_code", '
    '"geoip_city_name": "$geoip2_data_city_name", '
    '"geoip_lat": "$geoip2_data_latitude", '
    '"geoip_lon": "$geoip2_data_longitude"'
'}';

access_log /var/log/nginx/access.log json;
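With this format, every access-log line is a single self-contained JSON object. A minimal sketch of what downstream consumers see (the field values here are made-up illustrative data, not real traffic):

```python
import json

# One access-log line as emitted by the log_format above (sample data)
sample = (
    '{"time_local": "10/May/2024:13:55:36 +0000", '
    '"remote_addr": "203.0.113.42", '
    '"request_uri": "/index.html", '
    '"status": "200", '
    '"geoip_country_code": "DE", '
    '"geoip_city_name": "Berlin", '
    '"geoip_lat": "52.5200", '
    '"geoip_lon": "13.4050"}'
)

entry = json.loads(sample)

# Note: Nginx logs every field as a string, including the coordinates.
# Grafana's geomap panel needs numbers, so the dashboard (or a Promtail
# stage) has to cast them.
lat, lon = float(entry["geoip_lat"]), float(entry["geoip_lon"])
print(entry["geoip_country_code"], entry["geoip_city_name"], lat, lon)
```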

Step 2: Loki #

Loki is our log storage. On a small server (like a 5GB LXC container) you can’t keep logs forever, so Loki’s Compactor is configured to enforce retention and keep disk usage in check.

Here is my optimized config.yml for 14-day retention:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: debug
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

compactor:
  working_directory: /var/lib/loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 50
  delete_request_store: filesystem

limits_config:
  retention_period: 14d
  max_entries_limit_per_query: 5000
  ingestion_rate_mb: 4
  ingestion_burst_size_mb: 8

table_manager:
  retention_deletes_enabled: true
  retention_period: 14d

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

frontend:
  encoding: protobuf
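Once Loki is running, it can be sanity-checked over its HTTP API. A quick sketch, assuming Loki listens on localhost:3100 as configured above:

```shell
# Readiness probe - returns "ready" once the ingester is up
curl -s http://localhost:3100/ready

# The push endpoint Promtail will use (it only accepts POSTs,
# so a bare GET being rejected is expected)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3100/loki/api/v1/push

# Count nginx log lines from the last hour via a LogQL instant query
curl -s -G http://localhost:3100/loki/api/v1/query \
  --data-urlencode 'query=sum(count_over_time({job="nginx"}[1h]))'
```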

Step 3: Promtail #

Promtail watches the /var/log/nginx/access.log file. Its job is to detect the JSON structure and pass it to Loki.

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/promtail/positions.yaml

clients:
  - url: http://grafana.local:3100/loki/api/v1/push

scrape_configs:
- job_name: nginx_json
  static_configs:
  - targets:
      - localhost
    labels:
      job: nginx
      host: nginx.local
      __path__: /var/log/nginx/access.log

  pipeline_stages:
  - json:
      expressions:
        time_local: time_local
        status: status
        request_method: request_method
        server_name: server_name
        request_time: request_time
        remote_addr: remote_addr
        http_user_agent: http_user_agent

  - timestamp:
      source: time_local
      format: "02/Jan/2006:15:04:05 -0700"

  - labels:
      status:
      request_method:
      server_name:

Step 4: Visualize with Grafana #

If you have a Grafana instance, you can add a new data source pointing to Loki and create a dashboard.
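As a starting point, these are the kinds of LogQL queries my panels are built on — a sketch assuming the field names from the log_format above; tune the time ranges to your dashboard:

```logql
# Requests per country over the last hour (for a bar gauge or pie chart)
sum by (geoip_country_code) (count_over_time({job="nginx"} | json | __error__="" [1h]))

# Log lines with coordinates for the geomap panel; the json parser
# extracts geoip_lat/geoip_lon as fields the panel can use
{job="nginx"} | json | geoip_lat != ""
```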

You can download my Grafana Dashboard JSON here. It’s a variation of some dashboard I found on the internet, but I can’t recall the source.