It’s just about as bad as it can get, unless you’ve actually done something to make it better.
Do you know how much hassle you have to go through to find out what happened at 2am yesterday in the reporting module that runs every night to index data and produce nice reports?
Let’s take a look at how you access your logs today, and see what a better and more accessible way of logging would be worth to you.
First of all, it is not enough to only look at the log of this particular reporting module of yours. Your software is probably not just this single, self-contained module. It probably has a couple of other modules that together make up the distributed masterpiece of an architecture you envisioned.
You need to correlate what happened in the reporting module with something that happened in the download module – and you probably have lots of other relevant log sources. Do you even know how many, and can you get a complete overview of what happened throughout your entire system at 2am last night?
Log sources quickly become a mess because there are so many of them – and some you don’t even control.
When do you need to view log files?
You probably don’t use log files for anything other than troubleshooting – primarily because you have to go through so much hassle just to find the correct file, let alone dig through to the correct time of day.
But couldn’t you use log files to proactively make your software better, spot trends, and use the data to make decisions when building new features or improving what’s already there?
On average, how big are the files your users upload? How many do they upload at a time? Oh, they can only upload a single file in one go – but how many times do they consecutively upload files then?
Quantitative data like this can be drawn from your log files, if they are easy to access and query. And it lets you play a whole different game when you don’t have to guess all the time, but can rely on real insights!
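If your log lines carry structured fields, pulling out a metric like average upload size can be a few lines of code. A minimal sketch, assuming a hypothetical format where the upload module writes lines like `upload user=alice size=1048576` – adjust the regex to whatever your logger actually produces:

```python
import re

# Hypothetical log format -- adapt the pattern to your real one.
LINE = re.compile(r"upload user=(?P<user>\S+) size=(?P<size>\d+)")

def upload_sizes(lines):
    """Extract upload sizes (in bytes) from raw log lines."""
    return [int(m.group("size")) for line in lines if (m := LINE.search(line))]

log = [
    "2024-01-05T02:00:01 INFO upload user=alice size=1048576",
    "2024-01-05T02:00:09 INFO upload user=bob size=524288",
    "2024-01-05T02:01:33 INFO index run started",
]
sizes = upload_sizes(log)
print(sum(sizes) // len(sizes))  # average upload size in bytes -> 786432
```

The same pattern answers the other questions too: count matches per user for upload frequency, or group by timestamp for consecutive uploads.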
How do you access logs?
You copy the files from the server, of course! How many servers do you have? How big are the log files? Is your connection to the server fast enough to download that 2 gig log file you so desperately need?
The process of using log files is just completely broken. That’s why most log files stay deserted, and wind up being deleted because they take up too much storage.
You also need to know exactly where different services and modules store their logs. Most use rolling log files, so you also need to find the exact file by comparing timestamps.
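Finding the right rolled file is mechanical enough to script. A small sketch, assuming a common rolling scheme where the roller embeds a date stamp in the filename (e.g. `reporting.2024-01-05.log`) – both the filenames and the stamp format here are illustrative:

```python
from datetime import datetime

def file_for(names, moment, stamp_format="%Y-%m-%d"):
    """Pick the rolled log file whose name carries the date of the incident.

    Assumes the roller embeds a date stamp in the filename; adjust
    stamp_format to match your logging setup.
    """
    stamp = moment.strftime(stamp_format)
    return next((n for n in names if stamp in n), None)

files = ["reporting.2024-01-04.log", "reporting.2024-01-05.log"]
incident = datetime(2024, 1, 5, 2, 0)
print(file_for(files, incident))  # reporting.2024-01-05.log
```

In practice you would feed it `Path(log_dir).glob("reporting.*.log")` instead of a hand-written list – but you still have to write one of these per service, which is exactly the problem.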
How do you read log files?
If it’s 2 gigs, can you even open and browse it in a timely manner?
Most of the time you probably scroll through a huge file to see if you can find anything interesting buried there.
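Instead of opening the whole file, you can stream it and keep only the time window you care about. A minimal sketch, assuming each line starts with an ISO-8601 timestamp (so string comparison equals chronological comparison) – the sample lines are made up:

```python
def lines_between(lines, start, end):
    """Yield only lines whose ISO timestamp prefix falls in [start, end).

    Streams line by line, so a 2 GB file never has to fit in memory.
    Assumes each line starts with an ISO-8601 timestamp; adapt the
    slice length to your format.
    """
    for line in lines:
        stamp = line[:19]  # e.g. "2024-01-05T02:00:41"
        if start <= stamp < end:
            yield line

log = [
    "2024-01-05T01:59:58 INFO nightly index starting",
    "2024-01-05T02:00:41 ERROR report generation failed",
    "2024-01-05T03:10:02 INFO cleanup done",
]
hits = list(lines_between(log, "2024-01-05T02:00:00", "2024-01-05T03:00:00"))
print(hits)
```

Pass it an open file handle instead of a list and it works the same way, one line at a time.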
If your log files comply with a standard format, you might be able to use a log parser that lets you query several files at a time – but not all log files do, and you still have difficulty correlating events across sources. Querying is what lets you spot trends.
How do you share your findings, or get help?
Analyzing log files is rarely a team effort. You often sit by yourself, trying to find the needle in the haystack.
But when you do want help from your peers, what do you do? Send a fragment of the log in an e-mail? Share it in a Gist?
Uploading the whole thing to source control and using something like GitHub with your team is probably one of the best ways to coherently analyze log files – much better than keeping the files to yourself.
Let’s be honest: discussing and analyzing the events in a log file together is even more difficult, and you’ve probably only done it a couple of times.
Time to go home, how do you save your work?
It’s late, and you need to go home and continue your investigations. What do you do?
What if you concluded that nothing serious happened, but you found a little noise and a few smells you wanted to save for later, when a serious issue does occur? Do you just copy the log file to a shared drive or source control? Do you write a little essay about what you found? Maybe you create an issue in your favorite bug tracker?
The process for saving this kind of work for later doesn’t really exist. You make up a new way every time, and trust your instinct to remind you later when it’s relevant.
A new issue occurred, where did the old insight go?
Or even worse, you forgot to save the log file from the download module last time, so you can’t determine if something has changed.
You also didn’t capture the complete context of your previous findings, so you can’t tell whether this only happens to admin users or all customers are affected.
We’ve seen how much hassle you encounter when using log files. Not only do you waste an enormous amount of time, you also miss out on opportunities to use log entries for something more useful and proactive than troubleshooting.
So how much would more accessible, centralized management of log files be worth to you? Of course you can’t put a price on it, but I bet your life would improve – maybe not on a daily basis, but those sleepless nights and stressful days when customers are constantly calling support, and you get all the blame, are not exactly entertaining!
Sign up for my Product Hacking series
I’m actually working on a project where I set out to solve some of the problems above, and I’m sharing everything along the way. Sign up for my Product Hacking series with stories and examples that take you from an empty solution to a shipping product!