Applied AI

LLM-driven scope extraction for engineering estimates

A working pattern for using a small local model to turn a FEED scope document into a structured estimate input — without sending anything to a cloud API.

There is a useful, narrow application of LLMs in engineering project work that almost no one is talking about: scope extraction.

A FEED scope document is a 40–80 page Word file. Inside it are the inputs an estimating engineer needs — equipment counts by class, line-meter totals, instrumentation tag density, control narrative complexity. Today, an engineer reads the document, opens a spreadsheet, and types those numbers in by hand. It takes a day per scope. It is the most expensive form of data entry in the industry.

A small open-weight model running locally can do this in 90 seconds.
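To make "structured" concrete: the extraction target is a fixed set of typed fields, not prose. A minimal sketch in stdlib Python follows; the field names are illustrative assumptions, not a real estimating schema, and the point is only that any model response can be checked mechanically before a human ever sees it.

```python
import json

# Hypothetical schema -- these field names are assumptions for illustration,
# loosely following the scope inputs named above (equipment counts,
# line-meter totals, instrument tags, control narrative complexity).
SCOPE_SCHEMA = {
    "equipment_counts": dict,             # e.g. {"pump": 14, "vessel": 6}
    "line_meters_total": (int, float),    # total piping length in metres
    "instrument_tag_count": int,
    "control_narrative_complexity": str,  # e.g. "low" / "medium" / "high"
}

def validate(payload: dict) -> list[str]:
    """Check a model response against the schema; return a list of problems."""
    problems = []
    for field, expected in SCOPE_SCHEMA.items():
        if field not in payload:
            problems.append(f"missing: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(f"bad type: {field}")
    return problems

# A well-formed response parses and validates cleanly.
response = json.loads(
    '{"equipment_counts": {"pump": 14}, "line_meters_total": 2350.0,'
    ' "instrument_tag_count": 412, "control_narrative_complexity": "medium"}'
)
print(validate(response))  # [] -- conforms, ready for engineer review
```

In practice the schema would live in one place and drive both the prompt (the model is shown the exact fields to fill) and the validation gate.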

The pattern that works:

  1. Local model, not cloud. Anything containing client-confidential material stays on a local model. A small open-weight model running on the engineer’s laptop produces structured JSON output reliably enough for production estimating use.
  2. Structured prompt with explicit schema. The model is not asked to “summarise the scope”. It is asked to fill a known schema. Schema validation rejects malformed output and triggers a retry.
  3. Human in the loop, in the right place. The output is not the estimate. The output is the input to the estimate. A senior engineer reviews the extracted tags in 5 minutes and approves or overrides.
  4. Audit trail. Every extracted value carries a citation back to the source paragraph. No tag is in the estimate without a paragraph reference.
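Steps 2–4 compose into one short loop: call the local model, retry on malformed output, and refuse any value that arrives without a paragraph citation. The sketch below is a minimal illustration, not a production implementation; `call_model` stands in for whatever local inference endpoint is in use, and the field names are assumptions.

```python
import json

# Every extracted value must carry both a number and its source citation
# before it is allowed into the estimate input (step 4: audit trail).
REQUIRED_KEYS = ("value", "source_paragraph")

def extract_with_retry(call_model, prompt: str, max_retries: int = 3) -> dict:
    """Call a local model until it returns valid JSON in which every value
    cites a source paragraph. `call_model` is any callable taking a prompt
    string and returning text, e.g. a thin wrapper around a local endpoint.
    """
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output -> retry (step 2)
        if all(
            isinstance(v, dict) and all(k in v for k in REQUIRED_KEYS)
            for v in data.values()
        ):
            return data  # every value is cited; hand off to review (step 3)
    raise RuntimeError(f"no valid extraction after {max_retries} attempts")

# Stub standing in for the local model so the loop is runnable as-is.
def fake_model(prompt: str) -> str:
    return json.dumps({
        "pump_count": {"value": 14, "source_paragraph": "4.2.1"},
        "line_meters_total": {"value": 2350, "source_paragraph": "5.1.3"},
    })

result = extract_with_retry(fake_model, "Extract scope quantities as JSON.")
print(result["pump_count"]["source_paragraph"])  # 4.2.1
```

The retry budget matters: a local model that fails validation three times in a row is flagged for the engineer rather than silently producing an uncited number.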

The result is not “AI does estimating”. The result is “the engineer’s day starts with the spreadsheet already populated”. The judgement layer — what gets it right or wrong — stays with the engineer.

This is the shape of useful AI in EPC project work. Not autonomous agents. Not chatbots. A grinding, reliable upstream tool that removes the part of the workflow that adds no judgement.