✦ Egnyte ✦ Designer + Product Manager ✦ Data Governance ✦ AI Safeguards ✦ Secure Content Collaboration ✦  
AI Governance & Data Control

Giving enterprise admins granular control over which data their AI can and cannot touch.

Released

As Product Designer, worked on WebUI & Mobile app (iOS, Android).

"AI can't be a black box" was one of the most repeated customer asks.

"AI can't be a black box" was one of the most repeated customer asks.

"AI can't be a black box" was one of the most repeated customer asks.

AI Safeguards adds a policy-driven validation layer applied consistently across every AI entry point (Egnyte web UI, mobile app and desktop app, MCP, public API, and Microsoft 365 integrations).


Admins define who can use AI and on which content, so sensitive data never reaches an LLM without explicit permission.

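The shared validation layer can be pictured as a single gate that every surface calls before any content reaches the model. This is a minimal sketch under that assumption; the function and policy names are invented for this write-up, since Egnyte's internal architecture is not public.

```python
# Minimal sketch of one shared validation gate, assuming policies are
# simple deny-predicates. All names are illustrative, not Egnyte's API.

def safeguard_check(user: str, file_path: str, policies: list) -> bool:
    """Return True only if no policy denies this user/file pair."""
    return not any(denies(user, file_path) for denies in policies)

# Every entry point (web, mobile, desktop, MCP, public API, M365) calls
# the same gate, so a policy change takes effect everywhere at once.
deny_hr = lambda user, path: path.startswith("/Shared/HR/")

print(safeguard_check("alice", "/Shared/HR/payroll.xlsx", [deny_hr]))     # False
print(safeguard_check("alice", "/Shared/Marketing/brief.docx", [deny_hr]))  # True
```

Routing every surface through one gate is what makes the "applied consistently" promise cheap to keep: there is exactly one place where policy is enforced.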

Key Product Decisions


1 | Designing the safeguard policy builder around user needs


Problem: Role-based restrictions alone can't cover real-world governance.

Decision: Safeguards support multiple criteria: location, sensitive content classification, users/groups, and metadata such as last accessed date or file types.

Why: Users think in folders, labels, and teams, not permission matrices, so the criteria builder matches that mental model directly.

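To make the four criteria types concrete, one safeguard might look like the sketch below. The schema and match logic are invented for this write-up (OR semantics are assumed), not Egnyte's actual policy format.

```python
from datetime import date, timedelta

# Hypothetical safeguard combining the four criteria types from the builder:
# location, classification, users/groups, and file metadata.
policy = {
    "name": "Restrict stale legal files from AI",
    "locations": ["/Shared/Legal/"],
    "classifications": ["PII"],
    "groups": ["External Counsel"],
    "file_types": [".pdf", ".docx"],
    "last_accessed_before": date.today() - timedelta(days=365),
}

def is_restricted(policy: dict, file: dict, group: str) -> bool:
    """A file is restricted if any criterion matches (OR semantics assumed)."""
    return (
        any(file["path"].startswith(loc) for loc in policy["locations"])
        or file["classification"] in policy["classifications"]
        or group in policy["groups"]
        or (file["type"] in policy["file_types"]
            and file["last_accessed"] < policy["last_accessed_before"])
    )

old_contract = {
    "path": "/Shared/Legal/contract.pdf",
    "classification": "General",
    "type": ".pdf",
    "last_accessed": date(2020, 1, 1),
}
print(is_restricted(policy, old_contract, "Sales"))  # True: location matches
```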

2 | AI is not completely blocked for end users; a partial response is provided

Problem: When AI silently restricts access to certain files without explanation, it creates confusion and frustration.

Decision: A neutral AI response is returned with sensitive content redacted, applied consistently across all AI entry points.

Why: End users shouldn't need to understand policy infrastructure. The response just needs to communicate the right thing without causing alarm.

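One way to picture that behaviour: restricted files are dropped before the prompt is built, and the answer carries a neutral note. This is a sketch of the idea only; the helper names are hypothetical.

```python
# Hypothetical sketch: restricted files never reach the LLM, and the user
# gets a neutral note rather than an error or a policy explanation.

def answer_with_safeguards(files, is_restricted, ask_llm):
    allowed = [f for f in files if not is_restricted(f)]
    answer = ask_llm(allowed)
    if len(allowed) < len(files):
        # Neutral wording: no policy names, no hint at what was withheld.
        answer += "\n\nSome content was not included in this answer."
    return answer

response = answer_with_safeguards(
    files=["roadmap.docx", "salaries.xlsx"],
    is_restricted=lambda f: f == "salaries.xlsx",
    ask_llm=lambda allowed: f"Summary based on {len(allowed)} file(s).",
)
print(response)
```

The design choice the sketch encodes is that the user still gets an answer, just a smaller one, which is what keeps a block from feeling like a failure.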

3 | Audit reports are generated from day one

Problem: Without logs, users couldn't prove compliance or investigate incidents.

Decision: Every file processed by AI is logged, giving users and compliance teams full visibility into data utilisation from day one of the 'Labs' release.

Why: Auditability is a prerequisite for enterprise trust. Regulated customers won't adopt AI without it, so delaying it would delay adoption entirely.

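A per-file audit record might look like the sketch below; the field names are illustrative, not Egnyte's actual log schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record written for every file an AI request touches.
def audit_entry(user, file_path, action, policy=None):
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "file": file_path,
        "action": action,          # e.g. "processed" or "redacted"
        "policy_applied": policy,  # which safeguard fired, if any
    })

print(audit_entry("alice", "/Shared/HR/payroll.xlsx", "redacted", "HR Safeguard"))
```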

The hardest part wasn't designing the controls; it was designing the experience so end users don't feel frustrated when AI is blocked for them.

© 2026 All Rights Reserved | Parmi Mehta
