
AI Adventurers

Join our vibrant community of AI enthusiasts driven by curiosity and passion, exploring the latest advancements, applications, and innovations in artificial intelligence.

DianaGomez
Community Manager

Recap: AI Hangout: What Information Should and Shouldn’t Be Shared with AI

 


 

The discussion quickly centered on the tension between the utility of AI/LLMs and the critical need for data protection, particularly client and private information.

 

Key Takeaways & Perspectives:

 

  • Security Over Convenience: Several participants, particularly those serving clients in highly regulated markets (e.g., Germany), emphasized a "Fort Knox" approach to data: intentionally limiting the input of any personal or private client data into public LLMs like ChatGPT or Claude, and relying instead on high-level, generalized prompts for brainstorming and strategy.

 

  • The "Lethal Trifecta" Concern: Participants flagged the risk that an LLM, even one integrated via a connector with "super admin" privileges, could go "off the rails" and corrupt or exfiltrate sensitive CRM data. Mistrust of connectors was a recurring theme: many opt not to link public LLMs to live HubSpot portals for exactly this reason.

 

  • Advanced, Safe AI Use Cases: The call included examples of sophisticated, low-risk AI adoption:

    • Structured Note-Taking: Using AI note-takers (like AskElephant) with custom-sculpted prompts and workflows to generate structured, tailored meeting notes. These are fed to HubSpot in a controlled way, minimizing data exposure.

    • Automated Process Documentation: Using AI to generate step-by-step process documents and visual flowcharts from recorded client demonstrations. This non-sensitive, domain-specific knowledge is then placed in Breeze Assistant knowledge vaults in HubSpot, offering clients an internal, safe, on-demand reference tool.

 

  • The Problem of Over-Confidence: There was consensus that many users dangerously overestimate AI's competence and accuracy outside of limited, defined tasks. This leads to risky behavior like feeding real client names or sensitive data to general-purpose LLMs without realizing the failure modes or data retention risks.

 

  • Prompt Engineering & Fact-Checking: Strategies discussed for getting better results while minimizing risk include:

    • Treating the LLM as a "site thinker" or advisor to evaluate an existing strategic approach.

    • Asking the AI to strictly base results "on the facts that you know" and not "create something that is not true."

    • Documenting the AI process (as one participant does via Google Docs) to maintain a complete historical follow-up, similar to an audit trail.
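The redaction and grounding strategies above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not anything shared on the call: the regex patterns, placeholder tokens, and example note are all assumptions, and a production setup would use a proper PII-detection tool rather than simple patterns.

```python
import re

# Illustrative patterns only -- real PII detection needs more than regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str, client_names: list[str]) -> str:
    """Replace emails, phone numbers, and known client names with placeholders
    before the text ever reaches a general-purpose LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for name in client_names:
        text = text.replace(name, "[CLIENT]")
    return text

# A grounding instruction in the spirit of "base results on the facts you know":
GROUNDED_PREFIX = (
    "Answer strictly from the facts provided below. "
    "If something is not stated, say you don't know; do not invent details.\n\n"
)

# Hypothetical meeting note for demonstration.
note = "Call with Jane Doe (jane@acme.example, +1 555 010 1234) about renewal."
prompt = GROUNDED_PREFIX + redact(note, ["Jane Doe"])
print(prompt)
```

The point is the order of operations: sanitize first, then wrap the sanitized text in an explicit grounding instruction, so neither the identifiers nor an invitation to speculate ever leaves your side of the API call.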

 

Loved these insights? We're excited to bring you more opportunities to connect, learn, and discuss the future of AI. Join our community to be the first to know about upcoming events!



2 Replies
DanielleGriffin
Top Contributor


Sounds like it was a great chat! I'm sorry to have missed it.

DianaGomez
Community Manager


No worries! This will definitely be an ongoing conversation. We’ll have more hangouts about it, so there’ll be plenty of chances to jump in next time! 😀

