How to Set Up Guardrails for Open Agentic Systems
Lessons from running AI agents in production
Running an open agentic system with access to your entire machine sounds powerful — until you think about what could go wrong. Jack shares his team's hard-won approach to deploying AI agents safely, and why the answer starts with one simple rule: don't run it where your sensitive data lives.
Rule Number One: Isolate Everything
Jack's team has deliberately avoided running open-source AI agents on local machines. Instead, they run everything on fresh virtual machines in the cloud. Every agent gets its own identity — its own email, its own permissions — running on a clean VM with no pre-existing sensitive data to worry about.
They chose GCP specifically because it integrates with their existing Google Workspace, letting them manage all permissions and access controls from a single place. It's more expensive than some alternatives, but the centralized control is worth it.
Infrastructure as Code for Full Auditability
All deployment and configuration is managed through Terraform. Every resource is explicitly declared and fully auditable, which gives the team complete visibility into what each agent is doing at any time. As Jack puts it — everything lives in a safe, isolated bubble in the cloud.
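The article doesn't show Jack's actual configuration, but a minimal sketch of what a per-agent setup might look like in Terraform on GCP follows. The resource names, machine type, and image are illustrative assumptions; the point is that both the agent's identity (a dedicated service account) and its clean VM are declared in code, so every permission is visible in version control:

```hcl
# Each agent gets its own identity — a dedicated service account
# with only the scopes it needs (names here are hypothetical).
resource "google_service_account" "crm_agent" {
  account_id   = "crm-agent"
  display_name = "CRM update agent"
}

# A fresh, isolated VM per agent: no pre-existing data on the boot disk.
resource "google_compute_instance" "crm_agent_vm" {
  name         = "crm-agent-vm"
  machine_type = "e2-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }

  service_account {
    email  = google_service_account.crm_agent.email
    scopes = ["cloud-platform"]
  }
}
```

Because the whole deployment is declarative, tearing down and recreating an agent's environment from scratch is a single `terraform destroy` / `terraform apply` cycle — which is exactly what makes the "fresh VM every time" discipline cheap to maintain.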
His advice for anyone thinking about running agents on their local machine? Don't. Or if you must use local hardware, wipe it clean and start fresh. The critical thing is making sure there's no sensitive data anywhere the agent can reach.
The On-Prem Pendulum Is Swinging Back
Jack sees a broader trend emerging. The industry has cycled between on-prem and off-prem for years, and his thesis is that we're entering a new phase where people will get excited about on-prem again. The reason: with models like Kimi K2 becoming powerful enough to run locally, teams can now hyper-control their environment and manage costs in ways that weren't possible before.
The key insight is thinking of your entire machine as a context window. When you control the hardware, you can pull in all the data you need and connect only the specific APIs you want — nothing more, nothing less.
A Real-World Example: Automated CRM Updates Without Writing Code
Jack's team has built a practical workflow where their agent connects to a Notion board, its own Gmail account, and meeting notes from Gemini. Every four hours, the agent checks all meeting notes, reviews anything in their Notion table, scans connected emails, and automatically updates their CRM board in Notion.
The remarkable part? It was set up almost entirely through prompting rather than custom code. The team has written virtually no code since the initial setup; the workflow is orchestrated through the agent's natural-language understanding of the connected tools and data sources.
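To be clear, none of this lives as code in Jack's setup — the agent performs the merge through prompts. But the logic it carries out every four hours can be sketched as a plain function, to make the data flow concrete. All field names, inputs, and the `sync_crm` helper below are hypothetical illustrations, not their actual schema:

```python
def sync_crm(crm_rows, meeting_notes, emails):
    """Merge fresh meeting notes and emails into CRM rows keyed by contact.

    This is what the agent effectively does on its four-hour cycle:
    read everything connected, then update (or create) CRM entries.
    """
    # Index existing CRM rows by contact for easy lookup.
    rows = {r["contact"]: dict(r) for r in crm_rows}

    # Meeting notes update the last-meeting date and, if present, the status.
    for note in meeting_notes:
        row = rows.setdefault(note["contact"], {"contact": note["contact"]})
        row["last_meeting"] = note["date"]
        if "next_step" in note:
            row["status"] = note["next_step"]

    # Emails just stamp the most recent contact date.
    for mail in emails:
        row = rows.setdefault(mail["from"], {"contact": mail["from"]})
        row["last_email"] = mail["date"]

    return sorted(rows.values(), key=lambda r: r["contact"])
```

In the prompt-driven version, this whole function collapses into an instruction like "every four hours, review the notes, emails, and Notion table, and update the CRM board" — the agent supplies the merge logic itself.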
Key Takeaways
- Never run open agents on machines with sensitive data. Use fresh, isolated VMs instead.
- Use infrastructure as code (like Terraform) so every configuration is auditable and reproducible.
- Give each agent its own identity with scoped permissions managed from a central place.
- Consider on-prem for cost control and data sovereignty as local models become more capable.
- Think of your machine as a context window — connect only the APIs and data sources the agent actually needs.
- You don't always need to write code. Prompt-driven orchestration across connected tools can get you surprisingly far.