Not every legacy headache needs a consulting engagement. Five fixes most internal teams can ship in 30-90 days, plus a clear line for when to stop and call for help.

1. Kill the manual data exports

Almost every legacy system has someone running a daily SQL export, opening it in Excel, pivoting it, and emailing it to three people. Replace it with a scheduled query and an automated dashboard (Metabase, Redash, or Power BI on a fresh extract). Time investment: 2-5 days. Saves: 20-40 hours per month, and eliminates a dozen "I forgot to send the file" incidents per year.
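
The replacement can be as small as a script on a scheduler: run the same SQL, land the result in a reporting table, and point the dashboard at that table. A minimal sketch, using sqlite3 as a stand-in for the real source database (the `orders` table and column names are invented for illustration):

```python
import sqlite3

def refresh_daily_sales(conn: sqlite3.Connection) -> int:
    """Re-run the old 'manual export' query and land the result in a
    reporting table the dashboard reads. Returns the number of rows written."""
    cur = conn.cursor()
    # Rebuild the extract in one shot so the dashboard only ever
    # sees a complete snapshot, never a half-written one.
    cur.execute("DROP TABLE IF EXISTS report_daily_sales")
    cur.execute("""
        CREATE TABLE report_daily_sales AS
        SELECT order_date, region, SUM(amount) AS total
        FROM orders
        GROUP BY order_date, region
    """)
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM report_daily_sales").fetchone()[0]

if __name__ == "__main__":
    # Demo data standing in for the production source system.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (order_date TEXT, region TEXT, amount REAL);
        INSERT INTO orders VALUES
            ('2024-01-01', 'EU', 100.0),
            ('2024-01-01', 'EU', 50.0),
            ('2024-01-01', 'US', 75.0);
    """)
    print(refresh_daily_sales(conn), "rows written")
```

Schedule it with cron or the orchestrator you already have; the "automation" is nothing more exotic than that.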

Stop here when: the source data lives in a system that doesn't expose a query interface, or the export contains regulated PII that can't leave the source system without a DPO sign-off.

2. Add monitoring to the cron jobs nobody owns

Every legacy stack has 15-50 cron jobs. About a third fail silently from time to time, and the team only finds out when someone downstream complains. Fix: register a check per job with a service like Healthchecks.io or Cronitor, and have each job ping its check URL when it finishes. Time investment: a week. Saves: roughly one production incident per month and a lot of weekend "why didn't the report run" calls.

Stop here when: the jobs run on hosts that can't make outbound HTTPS calls, or are buried inside vendor packages you don't control.

3. Document the "tribal" data model

For the five tables that drive your business reports, write a one-page doc explaining: what each column means, what values are valid, what historical changes happened (renames, type changes, sentinel values), and who fields clarifying questions. Time investment: a week per table. Pays off every time someone joins, every time you debug a report, every time you scope an AI project.
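
This needs no special tooling; a file per table, checked into the same repo as the reports, is enough. A hypothetical example (table, columns, and owner all invented for illustration):

```yaml
table: orders
owner: jane.doe@example.com   # who fields clarifying questions
columns:
  status:
    meaning: Order lifecycle state
    valid_values: [pending, shipped, cancelled]
    history: >
      Before 2019 the value 99 was used as a sentinel for "unknown";
      rows older than 2019-03 may still carry it.
  amount_eur:
    meaning: Order total in euros, VAT included
    history: Renamed from "amount" in the 2021 migration.
```

The exact format matters far less than having one agreed place where the answer lives.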

Stop here when: the data model is so tangled that documenting it would take a quarter — at that point you're scoping a real refactor, not documentation.

4. Cap your batch windows

Most legacy systems have a 4am batch that runs for 2.5 hours, blocking everything until it finishes. Half the time, 80% of the runtime is one slow query. Spend two days profiling the batch (most platforms have query timing built in), find the dominant queries, and add indexes or rewrite them. This often cuts runtime by 40-70%.
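
If the platform's built-in query timing is awkward to reach, even a crude wall-clock wrapper around each batch step will expose the dominant one. A sketch where the step names and bodies are placeholders for the real batch:

```python
import time

def profile_batch(steps):
    """Run each (name, fn) batch step, record its wall-clock time,
    and return the timings sorted slowest-first."""
    timings = []
    for name, fn in steps:
        start = time.perf_counter()
        fn()
        timings.append((name, time.perf_counter() - start))
    return sorted(timings, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    # Stand-ins for the real 4am batch steps.
    steps = [
        ("load_orders", lambda: time.sleep(0.05)),
        ("rebuild_rollups", lambda: time.sleep(0.20)),  # the usual suspect
        ("send_report", lambda: time.sleep(0.01)),
    ]
    for name, secs in profile_batch(steps):
        print(f"{name:20s} {secs:6.2f}s")
```

The top line of the output is where the two days of optimization effort should go.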

Stop here when: rewriting touches stored procedures that haven't been changed in a decade and have no test coverage. That's where you call for help.

5. Replace one fragile shell-and-FTP integration

Pick the most embarrassing one — the integration that breaks every time someone changes a filename. Replace it with a properly authenticated API call (or, if no API exists, a containerized script with retry logic and structured logs). Time investment: 1-3 weeks. Pays off every quarter for the next decade.
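
The retry logic is the part most shell-and-FTP scripts are missing, and it is small. A sketch with exponential backoff; `send` here is a placeholder for whatever actually delivers the payload (an API client call, an upload function):

```python
import time

def push_with_retry(send, payload, attempts=5, base_delay=1.0):
    """Call send(payload), retrying with exponential backoff instead of
    the old 'scp and hope' approach. Re-raises after the final attempt."""
    for attempt in range(attempts):
        try:
            return send(payload)
        except Exception as exc:
            if attempt == attempts - 1:
                raise  # out of attempts: fail loudly, not silently
            delay = base_delay * (2 ** attempt)
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
```

Pair it with structured logs of every attempt and the integration stops being the thing that pages someone every time a filename changes.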

Stop here when: the receiving system requires file-based input by contract, or the partner is a mainframe operator who literally cannot accept anything else.

The line where in-house stops being enough

Three signs you've hit the limit of DIY fixes:

  • The fix touches systems you can't take down for testing — meaning every change is a production gamble.
  • The fix requires changing a data model that other teams depend on without their knowledge.
  • The fix is "stop using vendor X" but the contract has 4 years left and €8M of switching cost.

None of those are reasons to give up. They're reasons to bring in someone whose job is exactly this — and who has done it 30 times before, not just once.