Quick Overview
🔐 Data Privacy & Handling
Only text prompts, including curated instructions and user input, are shared with Claude via Amazon Bedrock, not directly with Anthropic.
No customer or PII data is used to train or fine-tune the model.
Customers can fully opt out of AI features. Prompt engineering enforces strict data-sharing protocols.
Data is processed in AWS regions (Sydney, Frankfurt, Oregon, and Montreal) and retained until deleted by users/admins.
🧠 AI Model & Architecture
Enboarder uses Anthropic Claude via Amazon Bedrock (closed-source), with potential to use other supported models.
No model fine-tuning occurs on customer data. Enboarder has access only to its own prompts/responses via secure, isolated environments.
👥 Access & Permissions
Only trained and vetted Support and CS team members have access to AI-generated content.
Admin users can use the AI Workflow Builder; lower permissions cannot.
Audit logs exist but are not shared externally.
🔒 Security & Compliance
Enboarder is SOC 2 and ISO 27001 compliant.
Defenses include input sanitization, prompt structuring, and response validation.
Data isolation is enforced via IAM roles; AI guardrails can be configured to block unsafe outputs.
🛠️ Customization & Control
AI outputs undergo two layers of human review before deployment.
AI use is optional per workflow, and users can fully edit outputs.
Visibility into prompts, outputs, and scoring helps track AI behavior, though full explainability is inherently limited.
📜 Legal & Ethical Standards
Enboarder retains IP of AI-generated content; customers are licensed users.
AI is provided “as-is”—users must validate accuracy and bias.
Ethical AI is enforced through human oversight, data minimization, bias mitigation, and regular reviews.
🔐 Data Privacy & Handling
❓ What data is sent to the AI model?
Enboarder utilizes a method known as "prompt engineering" to elicit responses from the large language model (LLM). When interacting with Claude via Amazon Bedrock, only the text prompts that we create and submit through the Amazon Bedrock API are sent to the AI model. This includes our instructions, any customer data that we intentionally incorporate into the prompts, and the answers provided by users during each conversation. This data is processed through Amazon's infrastructure rather than directly reaching Anthropic, which adds an extra layer of enterprise security. Amazon Bedrock serves as the intermediary service that manages these interactions with Claude. Importantly, no customer data is used to train or fine-tune the model; we strictly use prompt engineering techniques to guide Claude's responses based on its existing capabilities.
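To make this concrete, here is a minimal sketch of what such a prompt-engineered request might look like using the AWS SDK for Python (boto3) and the Bedrock Converse API. The region, model ID, prompt text, and inference settings are illustrative assumptions, not Enboarder's actual configuration.

```python
# Minimal sketch of a prompt-engineered request to Claude via Amazon Bedrock,
# using boto3's Converse API. Region, model ID, prompts, and inference
# settings are illustrative assumptions, not Enboarder's configuration.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")  # e.g. Oregon

# Curated instructions are kept separate from the user's conversational input.
system_prompt = "You are an onboarding workflow assistant. Use only the data provided."
user_input = "Suggest a three-step onboarding sequence for new engineers."

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    system=[{"text": system_prompt}],
    messages=[{"role": "user", "content": [{"text": user_input}]}],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```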
❓ What aspect of the software is AI/generative AI and what type of output is being provided? How are customers expected to use the outputs?
There are two main generative AI features: a chatbot and a workflow builder.
1) Chatbot: This component operates similarly to other large language models and is built on Anthropic’s Claude. It provides customers with written responses based on their specific data within Enboarder.
2) Workflow Builder: The workflow builder is designed to create and optimize workflows within Enboarder. It allows customers to seamlessly add functionality to their implementations.
❓ Is any customer or end-user data used to train the model?
No, it’s not. Please see the section “What data is sent to the AI model?”. We rely on carefully crafted prompts, rather than training, to achieve the best results from LLMs.
❓ Can we control what data is shared with the AI (e.g., opt-out of sharing PII)?
You can completely opt out of our AI functionality; if you do use it, no further opt-out is needed, because Enboarder has complete control over what data is shared with Claude and determines exactly what information is included in each prompt sent to the model. We have established clear protocols for what information should never be included in prompts (such as certain types of PII or proprietary information), and we implement these restrictions through our prompt design practices. This gives us multiple technical controls to prevent unauthorized data sharing.
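To illustrate the kind of technical control described above, the sketch below shows a hypothetical allow-list approach to prompt construction: only pre-approved, non-PII fields can ever reach a prompt. The field names and the ALLOWED_FIELDS set are assumptions for demonstration only.

```python
# A hypothetical allow-list approach to prompt construction: only pre-approved,
# non-PII fields can ever be interpolated into a prompt. Field names and the
# ALLOWED_FIELDS set are assumptions for demonstration.
ALLOWED_FIELDS = {"role_title", "department", "start_date"}  # never email, phone, etc.

def build_prompt(template: str, record: dict) -> str:
    """Interpolate only pre-approved, non-sensitive fields into the template."""
    safe_values = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    return template.format(**safe_values)

record = {
    "role_title": "Software Engineer",
    "department": "Platform",
    "start_date": "2025-02-03",
    "email": "jane@example.com",  # silently excluded: not on the allow list
}
prompt = build_prompt(
    "Draft a welcome message for a new {role_title} in {department} starting {start_date}.",
    record,
)
```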
❓ Where is the data processed (geolocation of data centers)?
Data is processed at our AWS data centers located in Sydney, Frankfurt, Oregon, and Montreal.
❓ How long is data retained when used in the AI workflow builder?
The information will be kept until the user or administrator explicitly requests its deletion.
🧠 Model Usage & Architecture
❓ Are you using third-party LLMs (e.g., OpenAI, Anthropic, Azure OpenAI), or do you host your own?
Enboarder utilizes multiple models available through Amazon Bedrock; please refer to the model support by AWS Region in Amazon Bedrock. We currently use Anthropic’s Claude but reserve the right to use other models available through Amazon Bedrock in the future. We do not host our own models.
❓ Do you use a closed or open-source model?
Enboarder utilizes various models through the AWS Bedrock platform; these include closed-source options such as Anthropic’s Claude 3.5 Sonnet and Claude 3.7 Sonnet.
❓ Is the AI model fine-tuned on our data? If yes, how is that done securely?
Enboarder is not fine-tuning or training any AI models.
❓ What level of access does Enboarder have to the model outputs or prompts?
As a user of Claude through Amazon Bedrock, Enboarder has access only to the specific prompts we input and the responses that Claude generates within our account. Amazon Bedrock provides enterprise-grade isolation of our data and interactions. Neither Anthropic nor other Amazon Bedrock customers can access our prompts or the outputs we receive. Our access is limited strictly to our own conversations with the model through our Amazon Bedrock implementation, and we implement internal controls regarding how these outputs are stored, shared, and managed within our organization. Amazon Bedrock also provides logging and monitoring capabilities to maintain oversight of all interactions with the model.
👥 User Access & Permissions
❓ Who at Enboarder has access to our AI-generated content or underlying data?
Only our Support and CS teams. Enboarder ensures that these individuals have passed background checks and completed AI security-related training.
❓ Can we set access controls for who in our organization can use the AI features?
Partially. All admin users who have permission to create workflows will be able to use the AI workflow generator; lower permission levels without workflow creation rights will not be able to access it.
❓ Are audit logs available for prompts submitted to the AI and the responses?
Yes, but audit logs are not shared with customers.
🔄 Security & Compliance
❓ Is the AI workflow builder SOC 2 / ISO 27001 compliant?
Yes, our core processes are in compliance with ISO 27001 and SOC 2 standards.
❓ How do you prevent prompt injection or other prompt manipulation attacks?
We implement several layers of protection against prompt injection and manipulation attacks while using Claude through Amazon Bedrock (a simplified sketch of the first three controls follows this list):
Input validation and sanitization: We validate and sanitize all user inputs before they're incorporated into prompts sent to Claude, removing or escaping potentially dangerous sequences.
Prompt structure enforcement: We use a consistent, secure prompt template structure that clearly separates system instructions from user inputs, making it harder for malicious inputs to override our controls.
Output verification: We validate Claude's responses before presenting them to ensure they conform to expected patterns and don't contain signs of successful prompt manipulation.
Permission restrictions: We limit which Enboarder personnel can create or modify prompts that interact with Claude, implementing proper authorization controls.
Regular security reviews: We conduct periodic reviews of our prompt templates and injection prevention mechanisms to identify and address potential vulnerabilities.
We utilize Amazon Bedrock's security controls and compliance features as an additional layer of protection.
While Enboarder does incorporate customer data into prompts, these controls ensure that only authorized data is included, and external parties cannot inject unauthorized instructions or manipulate the model's behavior.
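The sketch below illustrates, in simplified form, how the first three controls might fit together. The specific patterns and delimiters are assumptions for demonstration, not our production rules.

```python
# A simplified sketch of input sanitization, a fixed template that separates
# instructions from user input, and a basic output check. Patterns and
# delimiters are assumptions for demonstration.
import re

INJECTION_PATTERNS = re.compile(
    r"(ignore (all )?previous instructions|reveal the system prompt|</?instructions>)",
    re.IGNORECASE,
)

def sanitize(user_text: str) -> str:
    """Strip control characters and neutralize known injection phrases."""
    cleaned = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", user_text)
    return INJECTION_PATTERNS.sub("[removed]", cleaned)

def build_prompt(user_text: str) -> str:
    """Keep system instructions and user input in clearly delimited sections."""
    return (
        "<instructions>Answer only questions about onboarding workflows.</instructions>\n"
        f"<user_input>{sanitize(user_text)}</user_input>"
    )

def looks_manipulated(model_response: str) -> bool:
    """Flag responses that echo the instruction delimiters, a sign of injection."""
    return "<instructions>" in model_response
```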
❓ How do you handle data isolation in multi-tenant environments?
Each tenant's data is isolated using IAM roles, preventing any cross-contamination of customer information.
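As a rough illustration of this pattern, the sketch below shows how a per-tenant IAM role could be assumed before any Bedrock call, so that credentials only ever grant access to a single tenant's resources. The account ID and role naming convention are hypothetical.

```python
# A rough sketch of per-tenant isolation: every request assumes an IAM role
# scoped to a single tenant before calling Bedrock. The account ID and role
# naming convention are hypothetical.
import boto3

def bedrock_client_for_tenant(tenant_id: str):
    """Return a Bedrock client whose credentials are scoped to one tenant."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::123456789012:role/tenant-{tenant_id}",  # hypothetical
        RoleSessionName=f"bedrock-{tenant_id}",
    )["Credentials"]
    return boto3.client(
        "bedrock-runtime",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```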
❓ What safeguards are in place to avoid the AI generating inappropriate, biased, or offensive content?
We've implemented multiple layers of protection to prevent Claude from generating inappropriate, biased, or offensive content:
Model selection: We specifically use Claude through Amazon Bedrock, which has been designed with built-in safeguards against generating harmful content.
Pre-release testing: Before deploying any prompt templates, we test them extensively to ensure they don't produce problematic responses across a variety of inputs.
Continuous monitoring: We regularly review samples of AI-generated content to identify and address any patterns of bias or inappropriate responses that might emerge over time.
Feedback mechanisms: We've implemented user feedback channels to quickly identify and address any inappropriate content that might slip through our safeguards.
The combination of Claude's built-in safety features and our own review processes creates a robust system for preventing inappropriate, biased, or offensive content.
🛠️ Customization & Control
❓ Can we restrict what the AI can generate (e.g., via guardrails or filters)?
Yes, we can explicitly restrict what Claude generates through guardrails configured in Amazon Bedrock. This content filtering system allows us to:
Define custom content policies that prevent Claude from generating responses on specific topics or domains
Block responses containing harmful, offensive, or inappropriate content based on configurable thresholds
Apply contextual filtering that considers the entire conversation history when evaluating content
These guardrails work as an additional layer on top of Claude's built-in safety features, giving us precise control over what content can and cannot be generated. We can adjust these restrictions as needed to balance usability with appropriate content boundaries within our application.
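For illustration, attaching a pre-configured guardrail to a Bedrock model call can be as simple as the sketch below; the guardrail identifier, version, and model ID shown are hypothetical placeholders.

```python
# A minimal sketch of attaching a pre-configured Bedrock guardrail to a model
# call. The guardrail identifier, version, and model ID are hypothetical
# placeholders, not Enboarder's actual configuration.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": "Draft a welcome note."}]}],
    # The guardrail evaluates both the incoming prompt and the model's response
    # against the content policies configured for it.
    guardrailConfig={
        "guardrailIdentifier": "example-guardrail-id",  # hypothetical
        "guardrailVersion": "1",
    },
)
```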
❓ Can we review/edit the AI’s responses before they’re deployed in workflows?
Yes. The workflow generated by the AI passes through two layers of human review. The first review takes place in the “AI Designer”, where users can interact with the AI to remove, edit, or add sequences and modules. Once the user is happy with the AI-generated content, the workflow is finalized, and the user can manually edit any information within it, exactly as they work with workflows today.
❓ Can the AI be disabled or turned off for certain workflows?
Even when the AI feature has been enabled for a customer, its use is not mandatory. For any workflow where AI is not needed, customers can continue to build and maintain the workflow manually, as they do today.
❓ Do we have visibility into what the AI is doing and why it produced a specific output?
We have several layers of visibility into Claude's outputs through our Amazon Bedrock implementation, though large language models like Claude do have inherent limitations in full explainability:
Prompt design transparency: We maintain complete visibility into the inputs we provide to Claude, allowing us to understand how our prompting strategy influences outputs.
Amazon CloudWatch integration: We can monitor and log all interactions with Claude through Bedrock, providing an audit trail of prompts and responses (see the configuration sketch below).
Workflow recommendation scoring: We have configured our recommendation engine to provide confidence levels with its responses, helping you understand when the recommendation model is more or less certain.
Output patterns analysis: We conduct regular reviews of Claude's outputs to identify patterns and refine our implementation.
While these measures provide significant operational visibility, it's important to note that large language models like Claude cannot always offer complete explanations of specific word choices or reasoning paths. We address this limitation through careful prompt engineering, rigorous testing, and human review of outputs.
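For illustration, the sketch below shows how Bedrock model-invocation logging to CloudWatch might be enabled via boto3; the log group name and role ARN are hypothetical placeholders.

```python
# A minimal sketch of enabling Bedrock model-invocation logging so prompts and
# responses flow to CloudWatch for auditing. The log group name and role ARN
# are hypothetical placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")  # control-plane client

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",                 # hypothetical
            "roleArn": "arn:aws:iam::123456789012:role/bedrock-logging",  # hypothetical
        },
        "textDataDeliveryEnabled": True,
    }
)
```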
📜 Legal & Ethical Considerations
❓ Who owns the IP of AI-generated content?
Per our terms and conditions, because all AI-generated content is specific to the Enboarder platform, Enboarder retains full ownership of AI-generated content, and customers have a license to use the content as outlined in the terms.
❓ Does Enboarder assume any liability for AI-generated content errors?
No, Enboarder does not. While we use cutting-edge methodology and software to build our AI features, all AI features are provided “AS IS”, and customers are required to review all AI-generated content for accuracy, completeness, bias, and other issues.
❓ What’s your approach to ethical AI use within the product?
Our approach to ethical AI use within our product is built on several foundational principles:
Transparency with users: We clearly communicate when AI is being used in the product and set appropriate expectations about its capabilities and limitations.
Human oversight: We maintain human review processes for our AI applications, ensuring appropriate supervision of the technology rather than full automation.
Responsible data handling: We carefully consider what data is shared with Claude through Amazon Bedrock, minimizing unnecessary exposure of personal or sensitive information.
Bias mitigation: We regularly test and monitor outputs for potential biases and work to address them through improved prompt engineering and guardrail configurations.
User feedback channels: We provide mechanisms for users to flag concerning AI outputs, ensuring continuous improvement based on real-world experiences.
Regular ethical reviews: We conduct periodic assessments of our AI implementations against evolving ethical standards and best practices.
Purpose-driven implementation: We only deploy AI for use cases where it adds meaningful value while respecting human agency and autonomy.
This ethical framework guides all decisions about when and how we integrate Claude’s capabilities into our product, ensuring responsible innovation that respects our users’ rights and expectations.
❓ Is there a way to explain or interpret AI decisions in the workflow builder (Explainability)?
Within our workflow builder, we've implemented several features to provide explainability for Claude's AI-generated outputs:
Process visualization: The workflow builder visually displays how inputs flow through to AI-generated outputs, showing the sequence of operations and decision points.
Prompt visibility: During an AI conversation, users can view the underlying prompts being sent to Claude, providing transparency into how queries are structured. Where appropriate, we present the key factors that influenced the LLM’s response, helping users understand which inputs had the greatest impact.
Edit capability: Users can always adjust AI-generated content with clear visibility into what was AI-generated versus human-edited.
Enboarder best practice (Discover) workflows: Where appropriate, we have also incorporated Enboarder’s own IP, specifically, our Discover workflows, to inform the AI of what a good workflow could look like.