My Data Analysis Day with Claude
I decided to share how I use Claude in my work. Right now I'm embedding a data-driven process at Larixon, a company that publishes classifieds in various countries. My job is to connect the metrics that product teams work with (focused in part on making our non-paying users happy) with unit-economics metrics, financials, and ultimately with our profit.
How I used to work
Previously, I manually studied available metrics, built metric trees, unit economics models, and then figured out how to explain to teams like those improving the ad search experience that their product needs to make money for the company.
The difficulty is that there are many teams. I currently work with 34 of them (the division is loose, but that's how many groups are responsible for metrics in my pool). There are even more metrics: product teams work with over 150 of them, and the metric tree containing only the unit-economics metrics and the product metrics linked to them has more than 180 nodes.
Keeping all these connections in your head manually is quite a challenge. And tracking mutual influences, progress, and so on is practically impossible.
Claude
With the arrival of agents, routine work can be automated. For example, finding connections between one document (a list of tasks in product teams) and a metric tree can be handed off to an LLM, which handles it well and, more importantly, fast.
To do this, you ask the agent to take one file and link it to another. Crafting the prompt is a skill in itself: some people even ask one agent to prepare a prompt for another agent. Either way, the machine quickly completes the task: having estimated it upfront at roughly 5–7 days of solid work, it traditionally finishes in 20 minutes. The artifact you get is an MD file that still needs to be read.
Some people then ask another agent to read that file and produce a summary, which again turns into an MD file. You quickly end up with a large number of assorted MD files and, in my case, since I also work with financial reports, a large number of CSV files too. And since my agent is fairly advanced, I also get ECSV files, which you need to know how to open (though they're just CSVs with comments).
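If "a CSV with comments" sounds opaque, opening one is straightforward. Here is a minimal sketch, assuming (as is common for such files) that comment and metadata lines are prefixed with `#`; the sample data and the `read_ecsv` helper are my own illustration, not part of any real tool:

```python
import csv
import io

def read_ecsv(text: str) -> list[list[str]]:
    """Parse a CSV where lines starting with '#' are comments/metadata.

    The comment lines are simply skipped; the rest is ordinary CSV.
    """
    data_lines = [line for line in text.splitlines() if not line.startswith("#")]
    return list(csv.reader(io.StringIO("\n".join(data_lines))))

# Hypothetical example of the kind of file an agent might produce:
sample = "# revenue export, Q3\nmetric,value\nRevenue,120\n"
rows = read_ecsv(sample)
# rows -> [["metric", "value"], ["Revenue", "120"]]
```

With pandas the same effect can be had via `pd.read_csv(path, comment="#")`.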
In the course of my work I quickly accumulated a pile of such files, and things didn't get easier, only harder. Before, progress toward a goal was limited by the complexity of the task itself; now the bottleneck is me, and my inability to quickly read a large volume of material, understand it, and draw conclusions.
How I work now
My work has changed. First of all, I started using loOom, a special utility that acts as a layer between me, the data, and the agent.
What does loOom do? First, it keeps a list of all documents created by the agent. It also knows how to configure the agent (in my case, Claude) to work with loOom itself. This means that when the agent creates an artifact (MD, CSV, ECSV), loOom immediately extracts all useful entities from that artifact, places them on a graph, and links them together.
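I don't know loOom's internals, but the idea of "extract entities from an artifact and link them on a graph" can be sketched in a few lines. Everything here, including the `EntityGraph` class and the file names, is a hypothetical illustration of the concept:

```python
from collections import defaultdict

class EntityGraph:
    """Sketch of an artifact/entity graph: each entity is linked to the
    artifact it came from, so both directions can be queried later."""

    def __init__(self):
        self.edges = defaultdict(set)

    def ingest(self, artifact: str, entities: list[str]):
        # Bidirectional edges: artifact <-> each extracted entity.
        for entity in entities:
            self.edges[entity].add(artifact)
            self.edges[artifact].add(entity)

    def neighbors(self, node: str) -> set[str]:
        return self.edges[node]

g = EntityGraph()
g.ingest("q3_report.md", ["Revenue", "ARPU"])      # hypothetical artifact
g.ingest("team_tasks.md", ["ARPU", "Search CTR"])  # hypothetical artifact
# g.neighbors("ARPU") now contains both source files
```

In a real tool the entity extraction itself would be done by the LLM; the graph is just bookkeeping on top of its output.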
This way, I can quickly see what artifacts the agent has prepared for me. Additionally, loOom can call the agent from within itself and pass context to it. As a result, I work with the agent from inside loOom and keep the agent in context. Essentially, loOom acts as external memory for the agent.
loOom can also work on its own: it simply analyzes files, finds connections between them, and builds a graph. You can trace chains of relationships and see how entities are connected to one another. And you don't need to spend any of your own tokens for this.
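"Tracing a chain of relationships" is, at bottom, a shortest-path search over the entity graph. A minimal sketch with breadth-first search, using a made-up graph of metrics (this is a generic algorithm, not loOom's actual code):

```python
from collections import deque

def trace_chain(graph: dict[str, set[str]], start: str, goal: str):
    """Return the shortest chain of linked entities from start to goal,
    or None if no connection exists. Plain BFS over an adjacency map."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], ()):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical slice of a metric graph:
metric_graph = {
    "Revenue": {"ARPU"},
    "ARPU": {"Revenue", "Search CTR"},
    "Search CTR": {"ARPU"},
}
# trace_chain(metric_graph, "Revenue", "Search CTR")
# -> ["Revenue", "ARPU", "Search CTR"]
```

The returned path is exactly the kind of chain you want when explaining to a team why their product metric ultimately touches revenue.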
My process now looks like this:
Launch loOom.
Find an entity of interest: for example, from a financial report I notice that amid an overall Revenue decline, sales in one particular direction are growing.
I immediately see what other entities are connected to that metric.
I see which people are responsible for the related metrics.
If something is unclear, I press A and pass the context to the agent, then switch to it; the agent is already in context and ready to provide additional information.
As a result, I get a new MD file, and the agent knows it needs to be placed on the graph.
After returning from the agent back to loOom, I update the entity graph and...
Move on to the next task.
Using agents has greatly accelerated data analysis work, and the emergence of tools that prevent you from drowning in information has made the work much simpler.