AI Guardrails and Preferences

Guardrails define the boundaries of AI behavior in your practice: what it can act on, what requires human approval, and how it communicates.

DPC Pro gives practice managers full control over the AI assistant’s capabilities. Guardrails determine which features the AI can use, what level of autonomy it has, and what topics or actions are off-limits. Preferences shape the AI’s communication tone, vocabulary, and approach.

This page covers how to configure guardrails and preferences, what the default settings are, and how to adjust them as your comfort with the AI grows. You can start with conservative settings and gradually expand the AI’s responsibilities.

Every guardrail change takes effect immediately and is logged in the audit trail.


When the AI assistant is first enabled for your practice, it starts with these default settings:

  • AI assistant enabled: On. The AI chat is available to authorized staff.
  • Patient context: Allowed. The AI can access patient demographics and medical information when answering questions.
  • Document search (RAG): Off per conversation. Document search must be turned on for each conversation where you want the AI to reference uploaded documents.
  • Daily message limit: 100 messages per day. This is the maximum number of AI messages across all users in your practice.
  • Human approval for messages: Required. AI-drafted patient messages must be reviewed and sent by a human.
  • Autonomous patient contact: Not available. The AI cannot send any communication directly to patients.
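The defaults above can be pictured as a single configuration record. This is a hypothetical sketch for orientation only; the field names are illustrative and do not reflect DPC Pro's actual internal schema.

```python
# Hypothetical default guardrail configuration (illustrative field names,
# not DPC Pro's real schema).
DEFAULT_GUARDRAILS = {
    "ai_assistant_enabled": True,       # AI chat available to authorized staff
    "patient_context": "allowed",       # AI may access patient data on request
    "document_search": False,           # RAG stays off until toggled per chat
    "daily_message_limit": 100,         # shared across all users in the practice
    "human_approval_required": True,    # drafted patient messages need review
    "autonomous_patient_contact": False,  # AI never messages patients directly
}
```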

Regardless of your guardrail settings, the AI assistant always:

  • Requires a logged-in user to interact with it
  • Restricts data access to your current practice (no cross-practice queries)
  • Encrypts all conversation content at rest
  • Logs every interaction in the HIPAA audit trail
  • Refuses to provide definitive diagnoses
  • Reminds users to verify clinical information against guidelines
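The practice-scoping rule in particular is the kind of invariant that can be checked before every data access. The sketch below is a hypothetical illustration of that idea; the function name and identifiers are assumptions, not part of DPC Pro's API.

```python
def assert_practice_scope(user_practice_id: str, record_practice_id: str) -> None:
    """Reject any data access that reaches outside the user's current practice.

    Hypothetical guard illustrating the no-cross-practice-queries rule.
    """
    if user_practice_id != record_practice_id:
        raise PermissionError("cross-practice access is not permitted")
```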


Practice managers can enable or disable individual AI features from the AI Assistant settings page. To turn the assistant on or off for the entire practice:

  1. Navigate to Settings -> AI Assistant.
  2. Set AI Assistant to Enabled or Disabled.
  3. Select Save.

When disabled, the AI chat interface is hidden from all users and no AI processing occurs.

The AI assistant can include patient information in its responses when answering questions about specific patients. To control this access:

  1. Navigate to Settings -> AI Assistant.
  2. Find the Patient Context setting.
  3. Choose one of:
    • Allowed: The AI can access patient demographics, medical history, clinical notes, prescriptions, and appointments when a user asks about a specific patient.
    • Disabled: The AI cannot access any patient-specific information. It can still answer general practice questions about scheduling, billing totals, and membership plans.
  4. Select Save.
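The effect of the two settings can be sketched as a filter over the categories of patient data listed above. This is a hypothetical illustration; the field names and function are assumptions, not DPC Pro's implementation.

```python
# Patient-specific data categories named in this setting (illustrative).
PATIENT_DATA_FIELDS = {
    "demographics", "medical_history", "clinical_notes",
    "prescriptions", "appointments",
}

def visible_fields(patient_context: str, requested: set) -> set:
    """Return only the data categories the AI may see under the setting.

    Hypothetical sketch: 'allowed' passes everything through; 'disabled'
    strips every patient-specific category, leaving general practice data
    (scheduling, billing totals, membership plans) untouched.
    """
    if patient_context == "allowed":
        return requested
    return requested - PATIENT_DATA_FIELDS
```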

Document search allows the AI to reference your practice’s uploaded documents (clinical protocols, medical references, practice policies) when answering questions.

Document search is toggled per conversation, not globally. When starting a new conversation:

  1. Open the AI chat.
  2. Select the Document Search toggle in the conversation header.

When document search is enabled, the AI searches relevant documents and cites sources in its responses.

If your practice has uploaded sensitive documents, be aware that enabling document search gives the AI access to their content for the current conversation.
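Because the toggle is per conversation, each new chat starts with document search off, regardless of what was set in previous chats. A hypothetical sketch of that behavior (class and method names are illustrative, not DPC Pro's API):

```python
class Conversation:
    """Hypothetical per-conversation state: document search defaults to off."""

    def __init__(self) -> None:
        self.document_search = False  # must be enabled for each new conversation

    def toggle_document_search(self) -> bool:
        """Flip the toggle and return the new state."""
        self.document_search = not self.document_search
        return self.document_search
```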

To adjust the daily message limit:

  1. Navigate to Settings -> AI Assistant.
  2. Find the Daily Message Limit field.
  3. Enter the maximum number of AI messages per day for your practice (applies across all users).
  4. Select Save.

The default is 100 messages per day. If your practice has high AI usage, increase this limit. If you want to limit usage during an evaluation period, decrease it.
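The limit is a practice-wide counter, not a per-user one. A minimal hypothetical sketch of the check (the function is illustrative, not DPC Pro's implementation):

```python
def within_daily_limit(messages_sent_today: int, daily_limit: int = 100) -> bool:
    """True if one more AI message fits under the practice-wide daily cap.

    messages_sent_today counts messages from ALL users in the practice.
    """
    return messages_sent_today < daily_limit
```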


The AI assistant’s communication style is guided by its system prompt, which instructs it to be professional, accurate, and cautious in a healthcare context. The default behavior:

  • Professional and clinical tone: Responses are written in clear, professional language appropriate for healthcare staff
  • Caution with clinical information: The AI notes when information should be verified with clinical guidelines and never provides definitive diagnoses
  • HIPAA awareness: When patient data is included, the AI avoids reproducing Protected Health Information unnecessarily and summarizes rather than quoting records verbatim
  • Source citation: When document search is enabled, the AI references which documents informed its answer

When the AI drafts patient-facing messages (see AI-Drafted Message Replies), the tone should match your practice’s communication style. Currently, you adjust drafted messages by editing them before sending. The AI learns from the patterns in your existing messages over time.


The AI assistant has built-in restrictions that cannot be overridden by configuration:

  • Definitive diagnoses: The AI does not diagnose conditions. It may reference conditions listed in a patient’s record, but it will not state “this patient has X” as a clinical determination.
  • Treatment prescriptions: The AI does not recommend specific treatments or medication changes. It can look up what a patient is currently prescribed and report what clinical notes say, but treatment decisions are always made by providers.
  • Legal or regulatory advice: The AI does not provide legal, compliance, or regulatory guidance beyond general references to HIPAA and DPC practice norms.
  • Sending messages to patients: All drafted messages require human review and explicit sending by a staff member.
  • Modifying patient records: The AI can read but not write to patient records, clinical notes, prescriptions, or billing records.
  • Canceling or changing memberships: Billing modifications must be performed by authorized staff through the normal billing workflow.
  • Accessing other practices: Even if your organization has multiple practices, the AI only sees data from the practice you are currently working in.
  • Contacting external services: The AI does not have internet access and cannot reach external databases, APIs, or websites.
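The distinction between configurable guardrails and hard limits can be summarized in one rule: hard-limited actions are denied no matter what the configuration says. The sketch below is hypothetical; the action names and function are assumptions for illustration.

```python
# Actions that no configuration can enable (illustrative names).
HARD_LIMITED_ACTIONS = {
    "send_patient_message", "modify_patient_record",
    "change_membership", "cross_practice_query", "external_network_call",
}

def is_permitted(action: str, guardrails: dict) -> bool:
    """Hypothetical permission check: hard limits always win over settings."""
    if action in HARD_LIMITED_ACTIONS:
        return False  # cannot be overridden by any guardrail setting
    return bool(guardrails.get(action, False))
```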

If a patient requests that AI not be used in their care (or if your practice has a policy for certain patient categories), note this in the patient’s record and instruct your team accordingly. The AI does not automatically exclude specific patients from its data access.


Review your AI guardrails:

  • After the first two weeks: Check whether the defaults match your team’s actual usage patterns
  • When adding new staff: Ensure new team members understand what the AI can and cannot do
  • After changing practice workflows: If you start using a new billing process or messaging approach, verify that the AI’s behavior aligns
  • Quarterly: A brief review every quarter helps catch any settings that should be tightened or expanded

Every guardrail change is recorded in the audit log. To review recent changes:

  1. Navigate to Compliance -> Audit Log.
  2. Filter by action type Update and search for “AI” or “provider config.”
  3. The log shows who made the change, when it was made, and what the previous and new values were.
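If you export the audit log, the same filter can be expressed programmatically. This is a hypothetical sketch over an assumed entry schema (action, target, user fields); DPC Pro's export format may differ.

```python
def ai_guardrail_changes(audit_log: list) -> list:
    """Filter audit entries to AI-related Update actions.

    Assumes each entry is a dict with 'action' and 'target' keys
    (illustrative schema, not DPC Pro's actual export format).
    """
    return [
        entry for entry in audit_log
        if entry.get("action") == "Update"
        and ("AI" in entry.get("target", "")
             or "provider config" in entry.get("target", ""))
    ]
```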

See Review AI Actions for full audit log instructions.

Guardrail configuration is typically managed by practice managers, but input from clinicians helps set appropriate boundaries. Consider:

  • Asking providers which AI features they find useful and which ones they prefer to have disabled
  • Discussing the patient context setting, as some providers prefer the AI to have full patient access, while others prefer to look up patients manually
  • Reviewing AI-drafted messages as a team during the first few weeks to calibrate expectations

If you need help configuring AI guardrails, reach out to the DPC Pro support team at [email protected] or visit the troubleshooting guide.