Salesforce connector API calls

karenkaz
Contributor

We recently changed our association management system (think CRM but specific to associations) from one where we custom built an API to sync to HubSpot, to a Salesforce-based product.

 

We are now using the native HubSpot-Salesforce connector. It did NOT behave the same in production as it did in sandbox, and as a result when we first turned it on, it tried to sync so many records that we hit our custom object record limit. 

We've since deleted enough records to avoid running into that issue again. However, because we hit that limit, we have 344k records for one of our custom objects that got paused and are listed as affected in the Sync Health section of the connector.

We want to hit the re-sync button but it's unclear to me how many API calls that would use. When I asked the help chat AI, it said to assume 3 calls per record but I think that's referring to contacts, and these records are just for a single custom object; the contacts already exist.

 

This 3x rule also doesn't match our experience when we turned on the sync: we hit the custom object record limit of 1.5M but never went over our Salesforce org's 24-hour API call limit of 171k. We were watching this happen live when we turned on the sync, and we never came close to that limit.
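As a sanity check on the 3x rule, here's the arithmetic using our own numbers from above:

```python
# If the sync really used 3 API calls per record, syncing enough records
# to hit the 1.5M custom object limit would have blown far past our
# 24-hour API call limit -- but it never did.
records_synced = 1_500_000   # custom object record limit we hit
calls_per_record = 3         # the help chat AI's assumption
daily_call_limit = 171_000   # our Salesforce org's 24-hour limit

implied_calls = records_synced * calls_per_record
print(implied_calls)                     # 4500000
print(implied_calls > daily_call_limit)  # True -> the 3x rule can't describe what happened
```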

 

So it seems to me that the connector must be making some sort of composite/batch requests that update or create multiple records in a single API call, yet I can't find any documentation about whether this is true and, if so, what we could expect in terms of API usage. Does anyone have any detailed information about how the native connector works with regard to API calls?
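If the connector does use something like Salesforce's sObject Collections endpoints, which accept up to 200 records per request (an assumption about the connector, not documented behavior), a rough call estimate for a resync would be:

```python
import math

def estimate_api_calls(num_records: int, records_per_call: int = 200) -> int:
    """Estimate API calls for a batched sync.

    records_per_call=200 matches Salesforce's sObject Collections limit;
    whether the native connector actually batches at that size is an
    assumption, not documented behavior.
    """
    return math.ceil(num_records / records_per_call)

print(estimate_api_calls(344_000))       # 1720 calls at 200 records/call
print(estimate_api_calls(344_000, 10))   # 34400 calls at 10 records/call
```

The spread between those two figures is exactly why the batch size matters so much when budgeting against a daily limit.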

 

Alternatively, in addition to adjusting how many API calls to allocate to Salesforce, is there a way to control the order in which these records are synced? Or to batch them based on criteria we define? Or do we just have to hit the resync button and see what happens?

2 Accepted solutions
karenkaz
Solution
Contributor

Replying to my own post in case it helps anyone else who shares this question. 

 

I reduced the API calls allocated to HubSpot to minimize the chance of impacting our other integrations and clicked the resync button on the 340k records that needed to be resynced. Some things I learned:

 

1. When there's a large number of records, the integration apparently syncs them in batches of 1k. There does not appear to be any way to adjust this; you have to manually click resync for every 1k batch.

2. The integration is definitely not using 1 API call per record, assuming the "API calls used" number in Sync Health is accurate. It looked more like 80-100 API calls per 1,000 records.

3. Each 1k batch of resynced records could take anywhere from 5 seconds to 2+ hours to resync. Without knowing more about what's happening under the hood, I have no idea why there's so much variation per batch.
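Taking the Sync Health numbers at face value, the implied batch size is easy to back out (90 is just the midpoint of the 80-100 range I observed):

```python
calls_per_1k = 90                      # midpoint of the observed 80-100 range
records_per_call = 1000 / calls_per_1k
print(round(records_per_call, 1))      # 11.1 records per API call on average
```

That's well below the 200-record maximum of Salesforce's sObject Collections endpoints, so presumably some of those calls go to lookups, associations, or retries rather than record writes, but that's speculation on my part.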

 

Anyway, hope that helps others with this same question!

RubenBurdin
Solution
Guide

Hi @karenkaz , thanks for coming back and documenting what you observed. Honestly, posts like this save other teams weeks of guesswork. What you’re seeing lines up with how the native connector behaves these days, especially in 2025 with larger custom object volumes.

 

A few clarifications that may help close the loop. The HubSpot Salesforce connector does use batching internally, but it’s not exposed or configurable. HubSpot abstracts Salesforce Composite and Bulk APIs behind the connector, which is why you never saw a 1:1 relationship between records and API calls, and why the “3 calls per record” guidance falls apart for custom objects. Sync Health API usage is aggregated and approximate, not a literal per-transaction counter, which explains the 80–100 calls per 1k records pattern you observed.
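To make the batching idea concrete, here's a purely illustrative sketch of what collection-style batching looks like at the API level. The 200-record chunking and the composite/sobjects endpoint are from Salesforce's public sObject Collections documentation; how HubSpot actually shapes its requests is not observable:

```python
def chunk_records(records, size=200):
    """Split records into sObject Collections-sized chunks (max 200 per call)."""
    return [records[i:i + size] for i in range(0, len(records), size)]

# Each chunk would become one PATCH to /services/data/vXX.X/composite/sobjects,
# i.e. a single API call updating up to 200 records at once.
records = [{"attributes": {"type": "My_Object__c"}, "Id": f"a01{i:015d}"}
           for i in range(1000)]
chunks = chunk_records(records)
print(len(chunks))     # 5 calls for 1000 records
print(len(chunks[0]))  # 200 records in the first call
```

The object name and Id format here are placeholders; the point is simply that records-to-calls is many-to-one, which is why per-record estimates overshoot.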

 

HubSpot doesn’t document this in detail, but it’s consistent with how their integrations team prioritizes resilience over transparency (https://knowledge.hubspot.com/salesforce/resolve-salesforce-integration-sync-errors).

 

On ordering and control, unfortunately your conclusion is correct. There’s no supported way to define sync priority, filter batches, or stage resyncs by criteria once records are marked “affected.” The only levers you really have are API allocation throttling on the Salesforce side and manual resync initiation per batch.

 

The large variance in batch duration usually comes from Salesforce-side contention: validation rules, flows, triggers, sharing recalculations, or row locks that HubSpot can’t see or report back cleanly.

Small disclosure since I’m close to this space: I work on Stacksync. Situations like yours are exactly why some teams move away from the native connector once custom objects and scale enter the picture, since having explicit control over batching, ordering, and retry behavior can make these resync events far less stressful.

Really appreciate you sharing real numbers here. Hope this helps others avoid surprises.

Did my answer help? Please mark it as a solution to help others find it too.

Ruben Burdin
HubSpot Advisor
Founder @ Stacksync
Real-Time Data Sync between any CRM and Database


5 Replies
karenkaz
Contributor

Thank you, Ruben, for your detailed response confirming our experiences! It's nice to know that it's not just us and that I'm interpreting our observations correctly. I appreciate your time replying to my question.

seosiri
Contributor

Is there no option available to select groups of records from the data source?

Victor_Becerra
Community Manager
Community Manager

Hi @karenkaz 
Thank you for reaching out to the Community!
I'd like to invite some community members who are subject matter experts to join this conversation.
@seosiri @tmcginnis @GiantFocal - Would you be able to share any insights on this? Your expertise would be greatly appreciated.
Best,
Victor

