1. Why this page exists
We build AI systems for SMBs. That means we sit in the middle of decisions that touch real customers — patients, families, buyers, candidates. We'd rather be explicit about how we use AI, what we'll never do with it, and who's on the hook for what than rely on a generic privacy policy to cover it.
This AI Policy supplements (and forms part of) our Privacy Policy and Terms of Service. If anything here conflicts with a signed Master Services Agreement for a specific engagement, the MSA controls.
2. AI providers we use
Below is the current list of AI and infrastructure providers we route data through. We update the list whenever we add or remove a provider; the “Last updated” date at the top of this page reflects the most recent change.
| Provider | Used for | Data sent | Training rights |
|---|---|---|---|
| OpenAI | Site assistant chat replies (gpt-4o-mini); occasional document tasks for clients | Conversation text from the floating chat widget; transient — not persisted by us | Does not train on our data (business-tier terms) |
| Anthropic | Available as an alternative model for client engagements (Claude family) | Only client data sent during an active engagement, per the MSA | Does not train on our data per signed agreement |
| Vapi.ai | Voice-agent infrastructure for client engagements (WebRTC, telephony orchestration) | Call audio, transcripts, and metadata during a client's voice-agent calls | Does not train on our data per signed agreement |
| ElevenLabs | Text-to-speech voices for voice agents | Text sent for speech synthesis — typically agent-side scripts and dynamic replies | Does not train on our data per signed agreement |
| Twilio | Telephony (phone numbers, call routing, SMS) for client voice agents and SMS workflows | Call metadata, audio streams (if media streams enabled), SMS content | Does not train on our data per signed agreement |
| Resend | Transactional email from the Site contact form | Name, email, message text submitted through the contact form | Does not train on our data (business-tier terms) |
| Vercel | Hosting and edge runtime for the Site and client web deployments | Standard server/CDN logs (IPs, timestamps, request paths) | Does not train on our data per signed agreement |
Clients with regulated data (PHI, NPI, FERPA-protected records, etc.) receive the full sub-processor list — including BAAs, DPAs, sub-processor flow-downs, and SCCs where applicable — as part of engagement onboarding.
3. What the AI we build will do
- Always identify itself as AI when asked, and unprompted at the start of voice/chat conversations where appropriate.
- Offer a human handoff path at any point a customer or user asks.
- Refuse to give medical, legal, financial, or other professional advice — and route those questions to your humans.
- Stay within the scope of the documented prompts, knowledge base, and tools we configured for the engagement.
- Log decisions and conversations for the retention period defined in your MSA, in an auditable form.
4. What the AI we build will not do
- Pretend to be a human. Period.
- Diagnose patients, provide specific investment advice, prescribe medication, or interpret test results.
- Make hiring, promotion, or termination decisions, or any other decision that should belong to a person.
- Be deployed without a documented human escalation path.
- Be used for impersonation, deceptive practices, or any unlawful purpose.
- Be trained on your data, your customers' data, or any private data you process through it.
- Be deployed in regulated contexts (PHI, NPI, FERPA, FINRA, etc.) without the appropriate BAA/DPA/SCC paperwork and configuration in place.
5. Human review and escalation
Every AI we deploy has a defined human-escalation path. We track “human request rate” as a deployment KPI — if customers are regularly asking for a person, the system needs tuning or the workflow is wrong for AI.
For client engagements, the responsible human reviewers, their roles, and the cadence of review are documented in the Statement of Work.
6. AI outputs are not professional advice
Nothing produced by the AI systems we build constitutes medical, legal, financial, regulatory, tax, or other professional advice. AI outputs are tools. You and your customers must consult qualified humans for those decisions.
7. Your responsibility as a deployer
When we build AI for your business and you deploy it, the deployment context is yours. That means:
- You decide whether AI is appropriate for the workflow. You must consult your own legal, regulatory, and compliance counsel about your jurisdiction and industry.
- You are responsible for monitoring the AI's outputs in production and for maintaining the human escalation path we design with you.
- You are responsible for any regulatory, contractual, or third-party consequences of the AI's actions in your business.
- We build the technology. We do not assume responsibility for damages arising from your business use of the technology — see the limitation-of-liability and indemnification sections of our Terms of Service.
8. How to challenge an automated decision
If you believe an AI system we built has made a decision about you (or refused to take an action you asked for) that should be reviewed by a human, contact the operator of that system (your service provider) and request human review. Every AI we deploy supports human handoff.
If you cannot reach the operator, or want to raise an AI-quality concern about a Code 7-built system directly with us, email bechor@code7talentsolutions.com with “AI Concern” in the subject line. We respond within five business days.
9. Changes to this policy
We update this AI Policy when we add or remove providers, change data-handling commitments, or refine our AI principles. The “Last updated” date at the top of this page tracks every material change.
10. Contact
Email bechor@code7talentsolutions.com for any AI Policy question.
VM TECHNOLOGIES LLC · New York, NY