Grepping through logs is a pain,

but sometimes unavoidable when reconstructing a user's path through your app. How did they raise those last 4 million errors you just got emailed about?

If you have multiple processes writing to your logfile, request logs and background-process logs get intermingled, and trying to sort out what really happened to a user was hopeless until Rails 3.2.

ActiveSupport::TaggedLogging to the rescue. 37signals wrote a great article about it here:

We're going to go a mile further and aggregate our logs in Graylog2, a little UI on top of Elasticsearch that allows easy searching and filtering of logs.

Also, we're going to do our own custom log tagging with log4r, since we'll be using it to ship our logs to Graylog. I started by reading this article by the guys over at Backupify.

First, I installed Graylog2,

which is surprisingly difficult.

I set up Elasticsearch on a single node, same for MongoDB. The tricky part for me was getting Graylog to talk to Elasticsearch.

On Ubuntu this seems to work:
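Something along these lines (the version number and package URL are illustrative, and will certainly have moved on; check elasticsearch.org for the current release):

```shell
# illustrative: install a JRE, then grab and install the Elasticsearch .deb
sudo apt-get install -y openjdk-7-jre-headless
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.6.deb
sudo dpkg -i elasticsearch-0.20.6.deb
sudo service elasticsearch start
```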


Notable config changes:

# my config was in /usr/local/share/elasticsearch/config/elasticsearch.yml
cluster.name: graylog2



Download, extract, copy over the config and run:

# download extract and move to your install directory
#  i put it in /opt/graylog2-server

# copy config
cp graylog2.conf.example /etc/graylog2.conf
cp elasticsearch.yml.example /etc/graylog2-elasticsearch.yml
# Test run in the foreground
java -jar graylog2-server.jar --debug
# Run in the background
cd bin/
./graylog2ctl start

Notable config changes:

# in /etc/graylog2-elasticsearch.yml

# this must be the same as for your elasticsearch cluster
cluster.name: graylog2

# you could also leave this out, but it makes it easier to identify the graylog2 client instance
node.name: "graylog2-server"

# we don't want the graylog2 client to store any data, or be master node
node.master: false
node.data: false

# i turned off multicast
# adapt these for discovery to work in your network! multicast can be tricky

# i only have 1 node
discovery.zen.minimum_master_nodes: 1

# still turning off multicast
# 1. Disable multicast discovery (enabled by default):
discovery.zen.ping.multicast.enabled: false

# set this to your servername or ip:9300
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
discovery.zen.ping.unicast.hosts: ["YOURSERVERHERE:9300"]

Ok, Graylog server success.

You'll also need to install the web app, but since you're already on Rails, I'll leave that exercise to you. I am running a fork of the Graylog web app that deploys via Capistrano, next to our other Rails apps. It took a little nginx config and installing the right Ruby with rbenv. Full directions are available over at Graylog's site.

Now let's make our app awesome.

First, add the gems.

gem 'log4r-exceptionable'
gem 'log4r-gelf'
gem 'lumber'

bundle install

Add some config to your config/environments/production.rb

config.lumber.enabled = true

Note: you may want to test by adding the above to config/environments/development.rb as well

Add a helper to your config/application.rb

# Put this before the module definition

# Print out all MDC items as a space-delimited key=value string; this is used in our
# log4r pattern formatter so that each log message shows the context active when it was called
def log4r_context
  ctx = Log4r::MDC.get_context.collect {|k, v| k.to_s + "=" + v.to_s }.join(" ")
  ctx.gsub!('%', '%%') # escape out embedded %'s so the pattern formatter doesn't get confused
  return ctx
end
Add a before filter to your application_controller.rb

  before_filter :log_context
  # Make sure you keep your protected section at the end of the class definition
    def log_context
      # Clear out any existing context from previous requests
      Log4r::MDC.get_context.keys.each {|k| Log4r::MDC.remove(k) }

      # Set the context
      #  the GelfOutputter will read the hash generated by Log4r and add these additional fields to the gelf message
      Log4r::MDC.put('request_at', Time.now.iso8601)
      Log4r::MDC.put('app', Rails.application.class.parent_name)
      Log4r::MDC.put('env', Rails.env[0...3])
      Log4r::MDC.put('uuid', SecureRandom.uuid)
      Log4r::MDC.put('user_id', current_user.id) if current_user.present?
      Log4r::MDC.put('remote_ip', request.remote_ip)
    end
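For context on where those values end up: GELF prefixes custom fields with an underscore, so assuming the outputter forwards each MDC entry as an additional field, the payload for one message ends up shaped roughly like this (values are illustrative, not captured from a real server):

```ruby
# Illustrative GELF payload: each MDC entry becomes an additional field,
# which the GELF format prefixes with an underscore
mdc = { 'app' => 'Myapp', 'env' => 'pro', 'uuid' => 'abc-123', 'user_id' => 42 }

gelf = {
  'version'       => '1.0',
  'host'          => 'web1',
  'short_message' => 'Processing UsersController#show',
  'level'         => 6, # syslog severity for "info"
}
mdc.each { |k, v| gelf["_#{k}"] = v }

puts gelf['_user_id']  # => 42
```

Searching Graylog for _user_id:42 then pulls up every line that user triggered, across requests and processes.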

Configure log4r in config/log4r.yml

log4r_config:
  pre_config:
    root:
      level: DEBUG
  loggers:
    - name: "rails"
      level: DEBUG
      outputters:
        - logfile
        - gelf
  outputters:
    - type: FileOutputter
      name: logfile
      filename: "#{log_file}"
      trunc: 'false'
      formatter   :
        date_pattern: '%H:%M:%S'
        pattern     : '%-5l %d %C [#{log4r_context}]: %M'
        type        : PatternFormatter
    - type: GelfOutputter
      name: gelf
      gelf_server: '[YOUR GELF SERVER IP OR HOSTNAME]'
      gelf_port: '12201'
      # Optional - showing default values
      # facility: "gelf-rb"
      # host: "#{Socket.gethostname}"
      # max_chunk_size: 'LAN'
      # level: INFO
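To make that pattern concrete, here's roughly how '%-5l %d %C [#{log4r_context}]: %M' renders a single message; the logger name and context values are made up for illustration:

```ruby
# Mimics the PatternFormatter output for one message (illustrative values)
level   = "INFO"                          # %-5l: level name, left-justified to 5 chars
time    = Time.now.strftime('%H:%M:%S')   # %d with date_pattern '%H:%M:%S'
logger  = "rails::UsersController"        # %C: full logger name
context = "app=Myapp env=pro user_id=42"  # what log4r_context interpolates
message = "Processing show"               # %M: the log message itself

line = format('%-5s %s %s [%s]: %s', level, time, logger, context, message)
puts line
```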

Optionally, mount the lumber server in config/routes.rb to allow temporarily changing the log level per logger.

# at the top add:
require 'lumber/server'

  # in your routes draw block add:
  #  I'm using devise + cancan; if a user can manage lumber, let them get to /lumber
  # to mount the route without permissions, just:
  #  mount Lumber::Server, :at => "/lumber"
  lumber_constraint = lambda do |request|
    (current_user = request.env['warden'].user).present? && current_user.can?(:manage, :lumber)
  end
  constraints lumber_constraint do
    mount Lumber::Server, :at => "/lumber"
  end

All set, now just deploy your app and watch the logs roll in.