I've successfully integrated my Copilot Studio agent with mcp.hubspot.com, and I can get simple data as long as the prompt only needs the Query parameter.
However, when the prompt requires filters (a simple example: "show companies with hs_object_id equal to 5xxxxxxxxx"), the Copilot Studio agent returns an error, and it seems it doesn't know how to correctly construct the call to the MCP tool.
I've tried multiple agent instructions, some generic and some very strict, but I couldn't make the agent use filterGroups. It's not that the format is incorrect; the agent fails BEFORE calling the MCP tool because it doesn't "know" how to use it, and I suspect it's somehow related to the features the MCP supports.
I wonder if anyone has faced such an issue and can recommend a solution.
Thank you for bringing up the issue regarding MCP filters in Copilot Studio. You're correct: whenever a prompt tries to use filters, like "show companies with hs_object_id equal to 5xxxxxxxxx", the Copilot Studio agent often returns an error because it doesn't properly construct the corresponding MCP API call.
The underlying problem seems to be that the agent doesn't recognize how to build filterGroups or advanced queries with MCP, especially for fields such as hs_object_id. As a result, even though basic queries work, more complex filtering logic fails at the construction stage rather than at HubSpot's endpoint. This challenge isn't unique; several users have run into the same roadblock, and so far there's no official fix except manually building the payload for the MCP call.
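For anyone building that payload by hand, the filter the agent fails to construct follows HubSpot's CRM Search shape: a list of filterGroups, each holding filters with propertyName, operator, and value. A minimal sketch of the equality filter from the example prompt (the helper function name and the limit value are illustrative, not part of the API):

```python
import json

def build_company_filter(property_name: str, value: str) -> dict:
    """Build a HubSpot CRM Search payload with a single equality filter."""
    return {
        "filterGroups": [
            {
                "filters": [
                    {
                        "propertyName": property_name,
                        "operator": "EQ",  # equality; other operators include NEQ, GT, CONTAINS_TOKEN
                        "value": value,
                    }
                ]
            }
        ],
        "limit": 10,  # illustrative page size
    }

payload = build_company_filter("hs_object_id", "5xxxxxxxxx")
print(json.dumps(payload, indent=2))
```

Filters inside one filterGroup are ANDed together, while separate filterGroups are ORed, which is worth keeping in mind if you extend the sketch to multiple conditions.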
If you’re looking for a more robust AI experience, I’d also recommend you give Claude Desktop a try—it’s often better at handling nuanced or complex integrations. Let me know if you have any questions or need help getting started!
UPDATE: the issue was verified 100% on the Microsoft side. It looks like when you move too fast while creating the Custom Connector, it doesn't recognize the MCP's responses. I could see the MCP response in the Activity trace of the agent, but the agent just didn't "get" this result for processing.
After struggling with it for days, I recreated the Custom Connector, but this time I waited several seconds before each step: after loading the app to HubSpot with the redirect link, while entering data into the Copilot agent, etc. Every step was followed by 3-5 seconds of inactivity, and it worked! It looks like these actions take some time to complete in the background, and if they haven't finished and you've already taken the next step, the connector gets out of sync with the agent.
One more thing: "fixing" connector issues after the connector has already been created still leaves the agent<>connector communication broken. Everything must happen correctly, and on the first try.
I didn't have such issues with the OpenAI client (using the same MCP server, with the /openai entry), so it looks like the MS Copilot agent needs some further improvements.
Unfortunately, I ran into the same problem again and can't solve it the way I did before. The connector returns the result, but it's not being "passed" to the Copilot Studio agent, which keeps referring to the response as Null. The result is OK, and I can see the data in the Activity tab of the agent, but the agent just ignores it. Has anyone else faced this issue?
I’m sorry to hear that the issue has resurfaced. Since you can see the correct JSON payload in the Activity tab but Copilot Studio still treats it as Null, this confirms it’s almost certainly a schema validation or parsing issue on the Microsoft side, rather than a connection failure.
A few targeted suggestions to debug this "silent failure":
Schema Mismatch: Copilot is extremely strict about schema definitions. If the MCP returns a field that isn’t explicitly defined in the connector's schema (or if a field is nullable but defined as required), Copilot will often silently discard the entire object. I’d recommend capturing the exact JSON output from the Activity tab and validating it against the schema you defined in the Custom Connector to ensure they are 100% identical.
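One quick way to do that comparison offline is a small script that checks the captured payload's fields against the schema's declared and required properties. This is a sketch, not a real validator: the schema fields and the sample JSON below are stand-ins for whatever your connector actually defines.

```python
import json

def find_schema_gaps(payload: dict, schema_props: dict, required: list) -> list:
    """Report fields that would trip strict schema validation:
    required fields that are missing, and fields the schema does not declare."""
    gaps = []
    for field in required:
        if field not in payload:
            gaps.append(f"missing required field: {field}")
    for field in payload:
        if field not in schema_props:
            gaps.append(f"undeclared field: {field}")
    return gaps

# Hypothetical connector schema for one search result:
schema_props = {"id": {}, "properties": {}}
required = ["id", "properties"]

# JSON captured from the Activity tab (note the extra "archived" field):
activity_json = json.loads(
    '{"id": "512", "properties": {"name": "Acme"}, "archived": false}'
)

print(find_schema_gaps(activity_json, schema_props, required))
# -> ['undeclared field: archived']
```

An extra undeclared field like "archived" is exactly the kind of mismatch that can make Copilot discard the whole object without surfacing an error.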
Try a Middleware Approach: Since direct connection is flaky, you might get more stability by placing a middleware (like an n8n webhook or a simple proxy) between Copilot and HubSpot. This lets you sanitize and simplify the JSON response before passing it back to the agent, ensuring it matches exactly what Copilot expects.
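If you go the middleware route, the core of it is just a transform that whitelists the fields the connector declares before handing the response back to the agent. A sketch of that step (the field names and response shape are assumptions based on a typical CRM search result, not your actual connector schema):

```python
import json

def sanitize_mcp_response(raw: dict, allowed_fields: tuple = ("id", "properties")) -> dict:
    """Middleware step: keep only whitelisted fields on each result,
    so the agent receives exactly the shape its schema expects."""
    return {
        "results": [
            {k: item[k] for k in allowed_fields if k in item}
            for item in raw.get("results", [])
        ]
    }

# Raw response with extra metadata the agent might choke on:
raw = {
    "results": [{"id": "512", "properties": {"name": "Acme"}, "archived": False}],
    "paging": {"next": {"after": "10"}},
}
clean = sanitize_mcp_response(raw)
print(json.dumps(clean))
```

In n8n this would live in a Code node between the webhook trigger and the response node; as a standalone proxy it would run on whatever HTTP framework you already use.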
Alternative Client: As I mentioned before, this kind of "silent dropping" of data is a known frustration with Copilot Studio's current MCP implementation. If your use case allows, testing the same flow in Claude Desktop (which handles MCP natively) could help confirm if the issue is purely with how Copilot parses the MCP response.
Hopefully, one of these angles helps you pin this down!