
SOC 2, GDPR, and EU AI Act checklist for AI executive assistant evaluation in 2026
By David Williams · Published Mar 12, 2026 · 10 min read · Security

AI Executive Assistant Vendor Due Diligence: SOC 2, GDPR & EU AI Act (2026)

If an AI executive assistant can read inboxes, parse calendars, summarize meetings, and draft external communication, then it is not just a productivity tool. It is part of your operating environment. That means the evaluation standard should look less like "nice AI demo" and more like "vendor handling sensitive workflows on behalf of the business." This page is deliberately a vendor diligence playbook, not a general security explainer. The right question is not whether the assistant looks smart. It is whether the vendor can support procurement, legal review, and internal governance without making your team do guesswork.

A practical definition:

A compliant AI executive assistant is not one that claims to be secure. It is one that can document how it controls data, how it limits execution, and how it proves human oversight when the workflow demands it.

This guide breaks that due diligence into three frameworks that matter most for many B2B buyers: SOC 2 Type II, GDPR Article 28 processor obligations, and EU AI Act oversight and documentation expectations. It also explains why approval-first product design and exportable audit trails are not just product features; they are diligence evidence. For broader security background, see security and compliance for AI executive assistants. For adjacent governance context, see approval workflow governance and why approval-first AI assistants are becoming the enterprise default.

Why This Has Become a Hard Procurement Issue

The enterprise market is moving from experimentation to scrutiny. AI tools are no longer evaluated as isolated copilots; they are being assessed as systems that touch records, workflows, and personal data. Research summarized by CloudEagle.ai and Glacis shows the same problem from two angles: legal and procurement teams increasingly want AI-specific due diligence, but many vendors still answer with generic security language instead of AI-specific controls.

That gap matters because an executive assistant sits at an awkward intersection of risk domains:

  • security risk because it can access email, calendars, chat, and meeting context
  • privacy risk because it may process personal data from employees, candidates, customers, and partners
  • operational risk because it can influence external communication and scheduling
  • governance risk because poorly designed assistants blur who actually made the decision

A normal SaaS questionnaire is no longer enough. You need a vendor who can explain the product's control model.

SOC 2 Type II: What It Actually Tells You

SOC 2 is an assurance framework from the AICPA for service organizations handling customer data. For buyers, the useful distinction is usually between Type I and Type II:

  • Type I says controls are designed appropriately at a point in time.
  • Type II says controls were not only designed, but operated effectively over a review period.

That is why Type II generally matters more for an AI assistant vendor. It signals evidence over time, not just a policy binder.

What to ask for on SOC 2

  • Do you have a current SOC 2 Type II report? Tells you whether controls were tested over time.
  • Which trust services criteria are covered? Security is baseline; confidentiality/privacy may matter depending on your use case.
  • What systems are in scope? A narrow report can create false confidence.
  • Can you share the report under NDA? Serious buyers need the actual evidence, not just a badge.
  • What has changed since the report period ended? AI products change quickly; stale reports can hide current risk.

What SOC 2 does and does not prove

SOC 2 Type II is important, but it is not a complete AI governance answer. It helps validate that the vendor has tested controls around security and operations. It does not automatically prove:

  • that the model behaves safely in your use case
  • that human oversight is built into consequential actions
  • that the product's AI-specific risks are acceptable to your organization

So treat SOC 2 as table stakes, not the whole table.

GDPR Article 28: The Processor Test Buyers Should Actually Use

If the assistant processes personal data on your behalf, the vendor is usually acting as a processor. The European Commission's guidance on controllers and processors and the commonly cited summary of GDPR Article 28 processor obligations make the core requirement clear: the controller-processor relationship needs to be governed by a binding agreement, and the processor must provide sufficient guarantees around security and handling.

What your DPA review should cover

A good DPA check is not just "do they have one?" It is "does it answer the operational questions we actually care about?"

  • Processing instructions: vendor only processes data on documented instructions
  • Sub-processors: sub-processor list is available and changes are disclosed
  • Security measures: contract references appropriate technical and organizational controls
  • Data subject rights: vendor can support access, deletion, correction, export, and restriction requests
  • Deletion/return: contract states what happens to data at termination
  • Audit/cooperation: vendor will provide information needed for compliance review
  • International transfers: SCCs or equivalent safeguards are clearly stated when relevant

Questions that expose weak GDPR readiness

  • Where is customer data stored by default?
  • Which sub-processors can access content, metadata, or support tickets?
  • Can we opt out of model training on our data?
  • How do you handle deletion requests across logs, backups, and sub-processors?
  • What is your default retention period for content and metadata?

If a vendor cannot answer those questions cleanly, the issue is not just legal. It is operational immaturity.

EU AI Act: Why Human Oversight Is Now a Product Requirement

The EU AI Act Article 14 guidance focuses on human oversight for high-risk AI systems. Not every AI executive assistant will fall into the formal high-risk bucket, but buyers should still pay attention because the direction of regulation is clear: organizations need AI systems that let humans understand capabilities, monitor performance, and intervene appropriately.

That matters in executive assistant workflows because the assistant may not be making life-changing decisions, but it may still:

  • influence hiring communication
  • shape client or partner messaging
  • coordinate external meetings
  • draft follow-up content tied to sensitive business context

The safest interpretation for buyers is simple: if a workflow could create reputational, regulatory, or commercial consequences, you want meaningful human oversight and a record of that oversight.

What to ask vendors about EU AI Act readiness

  • How does the product enable human review before consequential external actions?
  • What documentation exists for model limits, failure modes, or escalation rules?
  • Can users see why a recommendation was made?
  • What logs exist for proposed actions, approvals, edits, and execution?
  • What roadmap exists for EU AI Act alignment if your product is sold into the EU?

You are not just checking a regulatory box. You are checking whether the product was designed for accountable use.

The Control Layer Most Buyers Miss: Approval + Audit Trail

This is where many AI assistant evaluations stay too abstract. Buyers review SOC 2, skim the DPA, ask whether the vendor is "EU AI Act aware," and then miss the core product control question: how does the assistant behave when it wants to act?

For executive assistants, the strongest answer is approval-first behavior with an exportable audit trail.

What to require in the product itself

  • Approval-first execution: no external email, message, or booking executes without explicit user approval
  • Editable approvals: user can modify content before approving
  • Unified queue: pending actions are visible in one place
  • Audit trail: proposal, approver, timestamp, edit history, and final outcome are logged
  • Permission boundaries: clear distinction between read, draft, and execute permissions
  • Explainability: enough context is shown for the user to make an informed approval decision

Why this matters:

  • SOC 2 review becomes easier when access and actions are demonstrably controlled.
  • GDPR accountability is easier when processing tied to an action can be linked to a documented human decision.
  • EU AI Act oversight narratives are stronger when the product visibly supports human intervention before execution.
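To make those product controls concrete, here is a minimal sketch in Python of what approval-first execution with an audit trail reduces to. Every name here (ApprovalQueue, PendingAction, and so on) is hypothetical and stands in for whatever data model a real vendor uses; the point is the properties a buyer should check for, namely that there is no execute path that bypasses approval, and that the log records proposer, approver, edits, and outcome.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: nothing executes until a human approves it,
# and every state transition is logged for later export.

@dataclass
class AuditEvent:
    action_id: str
    event: str       # "proposed", "edited", "approved", "executed"
    actor: str       # "assistant" or the approving user's id
    timestamp: str
    detail: str = ""

@dataclass
class PendingAction:
    action_id: str
    kind: str        # e.g. "send_email", "book_meeting"
    draft: str
    status: str = "proposed"

class ApprovalQueue:
    def __init__(self):
        self.pending = {}      # unified queue of actions awaiting approval
        self.audit_log = []    # exportable trail of every transition

    def _log(self, action_id, event, actor, detail=""):
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(AuditEvent(action_id, event, actor, stamp, detail))

    def propose(self, action):
        # The assistant may only *propose*; it has draft permission, not execute.
        self.pending[action.action_id] = action
        self._log(action.action_id, "proposed", "assistant", action.kind)

    def edit(self, action_id, user, new_draft):
        # Editable approvals: the user can modify content before approving.
        self.pending[action_id].draft = new_draft
        self._log(action_id, "edited", user)

    def approve_and_execute(self, action_id, user):
        # The only execute path runs through explicit human approval.
        action = self.pending.pop(action_id)
        self._log(action_id, "approved", user)
        action.status = "executed"
        self._log(action_id, "executed", user, f"executed {action.kind}")
        return action
```

A vendor's real implementation will look nothing like this, and that is fine. What diligence should establish is that these invariants hold in their product, and that the audit log is rich enough to reconstruct who approved what, when, and with which edits.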

A Better Vendor Evaluation Checklist

Use this short-form checklist when evaluating AI executive assistant vendors.

Governance and security

  • Current SOC 2 Type II report available under NDA
  • Clear description of systems and controls in scope
  • Encryption in transit and at rest documented
  • Role-based access controls documented
  • Incident response and breach notification policy available

Privacy and data handling

  • DPA available and aligned to Article 28 requirements
  • Sub-processor list published or available on request
  • Data residency / transfer mechanism explained clearly
  • Retention and deletion policy documented
  • Model training defaults and customer opt-out position explained

AI-specific operational controls

  • Approval-first for consequential external actions
  • Exportable audit trail of proposals and approvals
  • Clear statement of what the system can do without approval
  • Human-review workflow for high-risk or ambiguous actions
  • Documentation on model limits and expected failure modes
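One cheap way to pressure-test the "exportable audit trail" item is to request a sample export and check it mechanically. A sketch of such a check, assuming a JSON Lines export and hypothetical field names (a real vendor's schema will differ):

```python
import json

# Hypothetical required fields; adapt to whatever schema the vendor documents.
REQUIRED_FIELDS = {"action_id", "event", "actor", "timestamp"}

def check_audit_export(jsonl_text):
    """Return a list of problems found in a JSON Lines audit export."""
    problems = []
    for lineno, line in enumerate(jsonl_text.strip().splitlines(), start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            problems.append(f"line {lineno}: not valid JSON")
            continue
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            problems.append(f"line {lineno}: missing {sorted(missing)}")
    return problems

# Example export: the second record lacks a timestamp, which a reviewer should flag.
sample = "\n".join([
    '{"action_id": "a1", "event": "proposed", "actor": "assistant", "timestamp": "2026-03-12T09:00:00Z"}',
    '{"action_id": "a1", "event": "approved", "actor": "user-7"}',
])
```

If the vendor cannot produce an export that survives even this level of scrutiny, the "audit trail" is a UI feature, not diligence evidence.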

Red Flags That Should Slow or Stop the Deal

  • SOC 2 badge on the website, but no willingness to share the report.
  • Generic DPA language without clear sub-processor, deletion, or transfer answers.
  • No explanation of whether customer content is used for model training.
  • Product can send or schedule autonomously, but the vendor cannot clearly describe the approval rules.
  • No exportable audit trail.
  • Legal or security answers that sound copied from a generic AI FAQ rather than tied to the product's actual behavior.

The Practical Buyer Position

The strongest procurement position is not "AI is too risky." It is "AI is acceptable when the vendor can show specific controls, product boundaries, and documented human oversight." That keeps your standard high without forcing you into endless theoretical debate.

For most executive-assistant evaluations, the buyer hierarchy should look like this:

  1. Can the vendor document baseline security and operational controls?
  2. Can the vendor document processor obligations and privacy handling?
  3. Can the product enforce approval-first behavior and preserve an audit trail?
  4. Can the vendor explain the model's boundaries well enough for your team to trust the workflow?

If the answer breaks at step three, the vendor is not really enterprise-ready for this category.

Summary

  • SOC 2 Type II matters because it shows tested controls over time, not just good intentions. See the AICPA SOC 2 overview.
  • GDPR Article 28 matters because the AI assistant vendor is often a processor and must provide sufficient guarantees, contractual controls, and support for data subject rights.
  • EU AI Act oversight expectations matter because buyers increasingly need systems that enable real human review and intervention, not just post-hoc monitoring.
  • The most useful product-level control is approval-first execution with a full audit trail.
  • A credible vendor should be able to answer not just "are you compliant?" but also "how does your product limit, log, and govern AI actions in practice?"

Alyna is designed around that buyer standard: approval-first workflows, auditability, and enterprise-friendly governance for email, calendar, and messaging. For adjacent reading, see why approval-first AI assistants win, security and compliance for AI executive assistants, and approval workflows for executives.


Alyna is built for teams that want AI leverage without weak controls: draft-first, approve-then-send, and a workflow you can actually defend in procurement. Get access.