Large Deal Volume, CPU Overload
06-02-2019 08:10 - edited 06-02-2019 08:11
We have developed a Java application that runs overnight and synchronises HubSpot with our main front-end system.
We currently have around 500,000 deals (and growing) in HubSpot, and to avoid duplication we need access to all of them at the very start of the Java application. Currently, we load all 500,000 deals into memory by bringing them down via the 'Get All Deals' API (GSON stream mode into an array). This worked for a while, but the CPU works too hard and the process eventually fails once the JVM heap space fills up.
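One way to stay in Gson's streaming mode without filling the heap is to never build the full array at all: walk the response with `JsonReader` and keep only the deal IDs (a few MB for 500,000 longs) instead of the full deal objects. A minimal sketch, assuming a response shaped like `{"deals":[...]}` with a numeric `dealId` field — both names are placeholders for whatever the real payload uses:

```java
import com.google.gson.stream.JsonReader;

import java.io.Reader;
import java.io.StringReader;
import java.util.HashSet;
import java.util.Set;

public class DealIdStreamer {

    // Stream the deals array one object at a time, retaining only the IDs.
    public static Set<Long> collectDealIds(Reader source) throws Exception {
        Set<Long> ids = new HashSet<>();
        try (JsonReader reader = new JsonReader(source)) {
            reader.beginObject();                       // { "deals": [ ... ] }
            while (reader.hasNext()) {
                if ("deals".equals(reader.nextName())) {
                    reader.beginArray();
                    while (reader.hasNext()) {          // one deal at a time
                        reader.beginObject();
                        while (reader.hasNext()) {
                            if ("dealId".equals(reader.nextName())) {
                                ids.add(reader.nextLong());
                            } else {
                                reader.skipValue();     // discard all other fields
                            }
                        }
                        reader.endObject();
                    }
                    reader.endArray();
                } else {
                    reader.skipValue();
                }
            }
            reader.endObject();
        }
        return ids;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"deals\":[{\"dealId\":1,\"name\":\"A\"},{\"dealId\":2,\"name\":\"B\"}]}";
        Set<Long> ids = collectDealIds(new StringReader(json));
        System.out.println(ids.size());
    }
}
```

Because each deal object is discarded as soon as it has been read, peak heap usage is driven by the ID set rather than by 500,000 deserialised deals, and the duplicate check becomes a `Set.contains` lookup.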
This process runs perfectly in our Citrix environment, but not on the server we use for our overnight jobs, which strongly suggests our environment is also contributing to the problem.
The underlying problem remains the same, though: I need to rethink the software we run overnight and build a more scalable solution as our business grows. My question to the community: does anyone else hold such a large volume of deals in HubSpot? How do you bring them into memory without exhausting the memory/CPU of the machine your integration runs on?
(We are currently developing a solution that stores the 500,000 deals in a temporary database at the start, which the main body of the code can then query.)
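The temporary-store idea in the previous paragraph can even be approximated with nothing but the JDK, as a stopgap before a real database: spill each deal's JSON to a scratch file as it streams in, and keep only an id → file offset index in memory. A sketch under those assumptions — the class, file layout, and field names are mine, not HubSpot's:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class DealSpillStore implements AutoCloseable {

    private final RandomAccessFile file;
    private final Map<Long, Long> offsets = new HashMap<>(); // dealId -> byte offset

    public DealSpillStore(Path scratch) throws IOException {
        this.file = new RandomAccessFile(scratch.toFile(), "rw");
    }

    // Called once per deal while streaming the API response: append the raw
    // JSON to disk and remember where it starts.
    public void put(long dealId, String dealJson) throws IOException {
        long offset = file.length();
        file.seek(offset);
        file.write((dealJson + "\n").getBytes(StandardCharsets.UTF_8));
        offsets.put(dealId, offset);
    }

    // Cheap duplicate check against the in-memory index only.
    public boolean contains(long dealId) {
        return offsets.containsKey(dealId);
    }

    // Random-access read of one deal body when the main pass needs it.
    public String get(long dealId) throws IOException {
        Long offset = offsets.get(dealId);
        if (offset == null) return null;
        file.seek(offset);
        return file.readLine(); // byte-per-char decoding; fine for ASCII JSON
    }

    @Override
    public void close() throws IOException {
        file.close();
    }
}
```

The heap then holds one `Long` pair per deal rather than the deal bodies, while lookups remain a single seek; an embedded database such as H2 or SQLite gives you the same shape with real querying on top.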
Thanks in advance for any advice you could give!