We built a low-code IoT energy management system for an agricultural client — turning raw sensor telemetry into traceable cost allocation, reducing 16+ monthly hours to minutes, and enabling same-day invoice reporting.
What Is a Low-Code IoT Energy Management System?
A low-code IoT energy management system connects raw sensor telemetry to a structured allocation engine — determining, automatically, how electricity costs are distributed across multiple consumers. This case study describes one built for a family-owned agricultural business. The stack: Docker, n8n, Python, and Postgres on Supabase. The outcome: a 16-hour monthly manual process reduced to under 10 minutes per week, with same-day invoice allocation capability.
At a Glance
Challenge
A family-owned agricultural company spent over 16 hours each month manually reconciling IoT telemetry, utility files, and irrigation records to allocate energy costs among land renters — producing outputs that were regularly questioned.
Solution
Built an end-to-end data pipeline using n8n, Python, and Postgres on Supabase, surfacing validated allocation logic through a Looker dashboard that replaced all downstream spreadsheet work.
Results
16+ hours/month → minutes/week in recurring operational effort; same-day invoice allocation capability; reporting cadence shifted from monthly to weekly.
The Challenge: Manual Allocation in a Data-Rich Environment
The client operated a structured, data-oriented agricultural business. Their operational challenge was specific: determining, with enough precision and traceability, how electricity costs should be distributed across multiple land renters. Before automation, this required physically checking electric meters in hard-to-reach locations, then reconciling those readings against irrigation schedules reconstructed from emails, spreadsheets, and partial records.
When the client introduced IoT devices to improve visibility, they solved one problem and created another. Data quality improved — but incoming file volumes exceeded what their existing Excel-based workflows could handle without significant manual effort. The process consumed more than 16 hours each month, ran only once per month, and still required enough approximation that land renters regularly questioned the outputs.
How We Built It: IoT Data Pipeline Architecture
We designed the solution around a clear separation of responsibilities across three functional layers. Each tool was assigned a role that matched its strengths — keeping process logic auditable, file-handling clean, and analysis self-contained.
Self-hosted n8n served as the orchestration layer. It coordinates file collection, triggers ingestion jobs, sequences database operations, and manages retries and branching. By concentrating process logic in one place, the workflow became auditable and controllable without requiring technical expertise to monitor.
Python handled the ingestion boundary. Raw inputs — IoT telemetry, official energy provider files, and configuration files — arrived in mixed formats. Python normalized their structure and bulk-loaded them into staging tables in Postgres. This kept file-handling complexity outside n8n while maintaining a clean handoff between collection and storage.
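As a rough illustration of that normalization step, here is a minimal Python sketch. The source names, column mappings, and canonical schema below are invented for the example, not the client's actual files; the production code additionally bulk-loads the normalized rows into Postgres staging tables.

```python
import csv
import io
from datetime import datetime, timezone

# Per-source column mappings: each provider names the same fields differently.
# Both the source names and the column names here are hypothetical.
SOURCE_MAPS = {
    "iot": {"device": "meter_id", "timestamp": "read_at", "consumption_kwh": "kwh"},
    "utility": {"meter": "meter_id", "reading_date": "read_at", "kwh_total": "kwh"},
}

def normalize(raw_csv: str, source: str) -> list[dict]:
    """Map a provider-specific CSV onto one canonical staging schema."""
    mapping = SOURCE_MAPS[source]
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        row = {mapping[k]: v for k, v in rec.items() if k in mapping}
        row["kwh"] = float(row["kwh"])  # numeric normalization
        # Normalize timestamps to UTC so sources can be reconciled.
        row["read_at"] = datetime.fromisoformat(row["read_at"]).astimezone(timezone.utc)
        rows.append(row)
    return rows

iot_file = "device,timestamp,consumption_kwh\nM-101,2024-05-01T06:00:00+00:00,12.4\n"
staged = normalize(iot_file, "iot")
print(staged[0]["meter_id"], staged[0]["kwh"])  # M-101 12.4
```

The point of the pattern is that every source gets its own mapping, while everything downstream of staging sees a single schema.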
Postgres, deployed on Supabase, acted as the semantic layer. It stores historical data, applies upsert logic, manages tariff calendars, and produces stable reporting structures. This is where the allocation logic lives — traceable from meter reading to tariff rule to final renter share.

Looker completed the stack as the analysis layer. The dashboard was not designed as a visualization add-on; it was designed to replace all downstream spreadsheet investigation. Once operational, every analytical task could be performed directly within it.
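To make the allocation step concrete, here is a simplified Python sketch of the idea. The tariff windows, rates, and renter readings are all invented for illustration; in the actual system this logic lives in SQL inside Postgres, keyed to dated tariff rules.

```python
from datetime import datetime

# Hypothetical tariff calendar: (start_hour, end_hour, eur_per_kwh).
TARIFF_CALENDAR = [
    (0, 8, 0.10),    # off-peak
    (8, 22, 0.20),   # peak
    (22, 24, 0.10),  # off-peak
]

def rate_for(ts: datetime) -> float:
    """Look up the tariff rate in force at a given timestamp."""
    for start, end, rate in TARIFF_CALENDAR:
        if start <= ts.hour < end:
            return rate
    raise ValueError("hour not covered by tariff calendar")

def allocate(readings: list[tuple[str, datetime, float]]) -> dict[str, float]:
    """Sum each renter's cost from interval readings: kWh times tariff rate."""
    shares: dict[str, float] = {}
    for renter, ts, kwh in readings:
        shares[renter] = shares.get(renter, 0.0) + kwh * rate_for(ts)
    return shares

readings = [
    ("renter_a", datetime(2024, 5, 1, 6), 10.0),   # off-peak: 10 * 0.10
    ("renter_a", datetime(2024, 5, 1, 12), 5.0),   # peak:      5 * 0.20
    ("renter_b", datetime(2024, 5, 1, 23), 20.0),  # off-peak: 20 * 0.10
]
shares = allocate(readings)
print(round(shares["renter_a"], 2), round(shares["renter_b"], 2))  # 2.0 2.0
```

Because every euro of a renter's share decomposes into a specific reading multiplied by a specific tariff rule, the allocation can be interrogated line by line rather than taken on faith.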
Docker provided the infrastructure layer that made secure internal communication possible. Both n8n and the Python API run as separate containers within a shared Docker network. They communicate directly over that internal network — Python has no external exposure. Only n8n is reachable from outside, which eliminates an entire class of surface-area risk without adding operational complexity.
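A minimal docker-compose sketch of that topology might look like the following. Service names, ports, and the build path are assumptions for illustration, not the client's actual configuration.

```yaml
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"      # the only service published to the host
    networks: [internal]
  ingest-api:
    build: ./ingest      # hypothetical Python ingestion service
    expose:
      - "8000"           # visible to other containers, never to the host
    networks: [internal]
networks:
  internal:
    driver: bridge
```

The key property is that the Python service uses `expose` rather than `ports`, so it is reachable only over the shared bridge network.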
The project also required a change-management track running in parallel with the technical build. Metric definitions, tariff rules, and allocation logic had to be stabilized before anything could be automated reliably. We ran training sessions over approximately two months to ensure the responsible team member could operate, interpret, and trust the system independently.
Impact
- Recurring operational effort reduced from 16+ hours/month to under 10 minutes per week for file uploads
- Monthly configuration work — tariff updates and invoice value adjustments — capped at 30 minutes/month
- Equivalent analytical detail would have required 8 additional hours/week to produce manually
- Reporting cadence shifted from monthly to weekly, enabling faster pattern detection and issue resolution
- Same-day energy cost allocation now possible: invoice arrives, distribution is ready
Before and After
| Dimension | Before Automation | After Automation |
|---|---|---|
| Data source | Manual meter readings + fragmented spreadsheets | IoT telemetry + utility files, ingested automatically |
| Monthly effort | 16+ hours of reconciliation | Under 10 minutes/week for file uploads |
| Reporting cadence | Monthly (when time allowed) | Weekly, with same-day invoice capability |
| Allocation traceability | Approximation-based, regularly questioned | Meter-level and tariff-level logic, fully auditable |
| Analytical depth | Limited by manual capacity | Detail equivalent to 8+ additional hours/week of manual work |
Why It Worked
This engagement succeeded because we treated automation and organizational change as a single project, not two separate tracks. The technical stack produced a reliable and scalable pipeline. The normalization work — stabilizing KPIs, tariff logic, and reporting cadence before any code ran — ensured that the system encoded the right logic, not just the existing one.
The result is not a faster version of a manual process. It is a fundamentally different operating model: weekly reporting with same-day invoice capability, allocation traceable to meter and tariff level, and a BI layer that has replaced downstream spreadsheet analysis entirely. Land renters now receive allocations they can interrogate — not estimates they are asked to accept.
Frequently Asked Questions
Why would an agricultural company need an IoT energy management system?
IoT devices solve the visibility problem — they replace manual meter readings with continuous telemetry. But raw telemetry alone does not produce an allocation. The system described here adds the processing layer: ingesting sensor data, reconciling it with utility files and tariff logic, and producing a traceable cost distribution for each land renter.
What tools does a low-code IoT energy management stack use?
The stack comprises Docker for containerized infrastructure and internal networking, n8n for orchestration, Python for file normalization and ingestion, Postgres on Supabase as the data and transformation layer, and Looker as the reporting and analysis environment.
What types of IoT data files does the system process?
Three input categories: raw IoT telemetry files from the sensor provider, official consumption files from the energy utility, and configuration files covering tariff rules and renter assignments. The Python ingestion layer normalizes mixed formats before loading.
How does the system handle tariff or configuration changes?
Tariff rule updates are applied as monthly configuration edits through Google Sheets. The responsible team member makes these adjustments in approximately 30 minutes. The rest of the pipeline re-applies the updated logic automatically.
How long does it take to implement this type of automation?
In this engagement, the technical build and a parallel change-management track ran together over approximately two months. That included metric normalization, tariff logic validation, and training the team member responsible for ongoing operations.
Can this IoT pipeline scale to more meters or renters?
Yes. The data model accommodates new meter configurations and renter assignments through the configuration file layer. Expanding scope does not require changes to the core pipeline logic.
Work With Us
If your operation produces data but still relies on manual processes to extract business meaning from it, that gap is worth closing. We work with organisations that want structured, traceable answers — not automation for its own sake.
To discuss where your current setup stands, contact us at pedrorcosta@walkertrust.com or connect on LinkedIn.
Pedro Ribeiro da Costa is a Partner at WalkerTrust, specialising in AI-powered process automation and digital transformation. He has led automation and intelligence initiatives across Supply Chain and Business Intelligence in environments up to €1.2B in revenue.
