Hi @RZhu59, you’re running into a reporting nuance that trips up a lot of people with line items.
What you’re seeing is almost certainly quote versioning, not true duplicates. In HubSpot, every time a quote is created, updated, or re-published, a new quote version is created under the hood. Line items are copied to each version and get new internal record IDs, even though from a user perspective you still see “one quote” with the same visible items.
The “Line items by average net price” template does not de-duplicate line items across quote versions. It counts line items at the record level, not at the “final published quote” level. So if that quote was edited or re-published once, your 6 line items can legitimately appear as 12 in the report, with two sets created at the same timestamp but different record IDs. That matches exactly what you described.
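To make the mechanics concrete, here’s a minimal Python sketch of what the report sees. The field names (`record_id`, `quote_id`, `quote_version`) are illustrative stand-ins, not HubSpot’s actual internal schema:

```python
# Simulated line-item records for one quote that was re-published once.
# Each version carries its own copies of the 6 line items.
line_items = [
    # version 1 of the quote (superseded after the edit)
    *[{"record_id": f"v1-{i}", "quote_id": "Q-100", "quote_version": 1} for i in range(6)],
    # version 2 (the one you see when you open the quote in the UI)
    *[{"record_id": f"v2-{i}", "quote_id": "Q-100", "quote_version": 2} for i in range(6)],
]

# Record-level count: what the report template effectively does.
print(len(line_items))  # 12

# Counting only the latest version per quote: what you expected to see.
latest = max(li["quote_version"] for li in line_items)
final_items = [li for li in line_items if li["quote_version"] == latest]
print(len(final_items))  # 6
```

Both sets share a creation timestamp but have distinct record IDs, which is exactly the pattern you described.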
This also explains why clicking into the 12 doesn’t cleanly link back to a single quote view. Reports operate on the CRM objects, not on the human concept of “the final quote the customer saw.” HubSpot documents that quotes can have multiple versions and that line items are duplicated across versions here: https://knowledge.hubspot.com/quotes/manage-quotes
It feels wrong because the quote UI hides this complexity. In the quote record, you only see the latest version, so you correctly see 6 line items. The reporting engine, however, aggregates all historical line-item records unless you explicitly filter them out. In 2025, this is still a known limitation of line-item reporting.
If your goal is “how many times was this product actually quoted to customers,” the safest workaround is to add a filter like “Quote status is Published” or “Quote is signed” and exclude draft or superseded versions, depending on what you’re measuring. HubSpot also calls out that line-item reports can include historical versions unless filtered carefully: https://knowledge.hubspot.com/create-line-item-revenue-reports
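The filtering logic amounts to deduplicating per quote and keeping only the statuses you care about. A rough Python sketch, again with made-up field names (`product`, `quote_id`, `quote_status`) rather than HubSpot’s real schema:

```python
from collections import Counter

# Illustrative records: one quote with a superseded draft version,
# plus a second published quote for the same product.
line_items = [
    {"product": "Widget", "quote_id": "Q-100", "quote_status": "Published"},
    {"product": "Widget", "quote_id": "Q-100", "quote_status": "Draft"},  # superseded version
    {"product": "Widget", "quote_id": "Q-101", "quote_status": "Published"},
]

# Keep only published quotes, and count each (product, quote) pair once.
quoted = {(li["product"], li["quote_id"])
          for li in line_items
          if li["quote_status"] == "Published"}
counts = Counter(product for product, _ in quoted)
print(counts["Widget"])  # 2
```

This answers “how many published quotes included this product” rather than “how many line-item records exist,” which is usually the question people actually mean.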
So the data isn’t wrong; it’s just answering a slightly different question than the one you intended. Once you account for quote versions, the numbers usually line up.