By leveraging AI technologies, organisations can automate routine tasks, extract deeper insights from data, and deliver bespoke services to their clients.
Many organisations have already integrated AI into their operations and service lines, but client trust has not kept pace.
Concerns around data security, intellectual property, and ethics are valid, and acting on them is overdue. While regulators are beginning to set AI-specific rules (for example, the EU’s new AI Act) and to enforce existing laws, the global legal framework remains uneven and in flux, meaning it falls to companies to develop their own protocols and demonstrate responsible data management. The answer is not to avoid AI, but to design for safety from the start, through both technical controls and clear guidelines for how people are expected to use it.
At Anvil, we’ve built our AI practices on three pillars: technical control, guided accountability, and a culture of safety.
Hard Systems and Technical Safeguards
Operating AI within a secure ecosystem is critical to ensuring that client data is always accounted for and properly managed.
- Data Isolation: All AI processing occurs within our own or our client’s Microsoft Azure environment, a secure platform configured so that client data is neither stored outside that boundary nor used to train external models unless explicitly permitted.
- Encryption and Access Control: Vector embeddings and databases are encrypted, and access is strictly controlled: permissions are tightly managed so that only authorised users can interact with sensitive information. Sensitive data is anonymised, redacted, or encrypted before processing, particularly where personal data is involved.
- Client Opt-Outs: Clients retain full control over their data. They can opt out of sensitive data being used to train internal ML models, and can have their data fully returned at the end of a contract. Where a client has given no clear specification, we default to our strict seven-year retention policy.
- System of Validation: It’s imperative to have a process for catching hallucinations, one of the most serious risks in AI systems. Ours includes running processes multiple times and comparing outputs, applying rule-based checks in critical sections, flagging inconsistencies for expert review against source documents, and running automated sampling to validate AI output against known ground truth on an ongoing basis (a minimal sketch follows this list). Our testing strategy deliberately errs on the side of false positives: we’d rather flag a correct output for human review than miss an erroneous one.
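To make this concrete, below is a minimal sketch of the multi-run comparison in Python. The `generate` function is a hypothetical stand-in for any model call; the names and thresholds are assumptions for illustration, not our production implementation.

```python
# Minimal sketch of a multi-run consistency check for catching hallucinations.
# `generate` is a hypothetical stand-in for any model call.
from collections import Counter
from typing import Callable

def consistency_check(generate: Callable[[str], str],
                      prompt: str, runs: int = 3) -> dict:
    """Run the same prompt several times and flag disagreement for review."""
    outputs = [generate(prompt) for _ in range(runs)]
    top_answer, votes = Counter(outputs).most_common(1)[0]
    return {
        "answer": top_answer,
        "agreement": votes / runs,
        # Err on the side of false positives: anything short of unanimous
        # agreement is routed to an expert with the source documents.
        "needs_review": votes < runs,
    }
```

In practice the comparison would be semantic rather than exact string equality, but the shape of the check, and the deliberate bias towards flagging, is the same.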
Governance Frameworks in Action
Anvil currently maintains and updates its own Acceptable Use Policy for AI, which ensures transparency for clients and clear guidance for employees. We’re actively aligning with the more rigorous NIST AI Risk Management Framework, but we’re candid that this work is in progress, not complete. Responsible organisations are honest about where they are in that journey.
- Accountability by Name: AI risk is owned by a clearly designated individual with defined responsibility for oversight and decision-making. This ensures accountability is not diffused across committees or documents, and that there is a direct, accessible point of ownership for managing risk, enforcing policy, and responding to issues as they arise. Anvil also has named AI champions, who act as additional points of assistance for training, adoption, and development.
- Clear Labels and Inventories: All AI-generated content is clearly labelled and accompanied by a disclaimer about its potential inaccuracies. In addition to labels, we maintain a comprehensive inventory of all AI models and datasets, with clear ownership and purpose, ensuring traceability and compliance. Companies that obscure or fail to track where and how AI is used in their products invite unchecked use of outputs and increased legal and reputational risk, eroding trust between vendor and client.
- Data Protection Impact Assessments (DPIAs): For any data containing personal information, we document where it’s processed, how long it’s stored, and whether it’s used for model training; an illustrative record structure for the inventory and DPIA follows this list. This is a mandatory step that all employees must complete before processing any personal client data. We also limit the enrichment or merging of personal data unless there is a valid legal basis.
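As a rough illustration of what these governance records might capture, the sketch below defines the kinds of fields involved. The structures and field names are assumptions made for this example, not our actual schema.

```python
# Illustrative record shapes for the model inventory and the DPIA log.
# All field names are assumptions made for this sketch.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryEntry:
    model_name: str
    owner: str                 # the named individual accountable for this model
    purpose: str               # why the model exists, for traceability
    datasets: list[str] = field(default_factory=list)  # data the model touches

@dataclass
class DPIARecord:
    dataset: str
    processing_location: str   # e.g. the client's or our own Azure tenant
    retention_until: date      # defaults follow the seven-year retention policy
    used_for_training: bool    # must reflect the client's opt-out decision
    legal_basis: Optional[str] = None  # required before enriching or merging personal data
```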
Culture of AI Safety
Once the hard structures and governance frameworks are in place, people are the last line of defence in ensuring that AI is used with caution and integrity. For many organisations, promoting and maintaining a culture of AI safety is the hardest part.
- Mandatory Employee Training: Under our policy, every member of staff undergoes mandatory AI training. All new hires receive training before being given access to client data, as do temporary contracted staff and integrated partners. Dedicated AI champions act as support for staff who need additional help adopting safe AI practices or who want to integrate AI into their daily operations.
- Human Oversight: From start to finish, employees must be thoughtful and critical about their AI use. Alongside culture, procedures such as mandatory impact assessments actively enforce added caution; the intent is to account for where data goes and to catch and remediate hallucinations and bias in AI outputs (a simple illustration follows this list). As we don’t develop our own AI models, we partially rely on the bias mitigation built into OpenAI’s and Microsoft’s systems (content moderation, safety filters, and ongoing bias-reduction work), but we supplement this with our own procedures for bias detection and the double-checking of outputs.
- Human Value and AI Resilience: AI is highly effective at accelerating and supporting work, but it should not come at the cost of reducing people to replaceable operators. We actively avoid approaches that flatten expertise or dilute human contribution. Our product and our client understanding exist because of human insight, judgment, and creativity, and we see that as a core competitive advantage, not something to be automated away. We are proud of the work our employees produce, and we are intentional about ensuring AI is used to enhance capability rather than create insecurity, disengagement, or loss of ownership over meaningful work.
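As a simple illustration of how human oversight can be enforced in code rather than left to habit, the toy sketch below refuses to release output without a named reviewer and attaches the label and disclaimer described earlier. All names and wording here are hypothetical.

```python
# Toy sketch of a human-oversight gate: AI output is labelled and cannot
# enter a deliverable without a named reviewer's sign-off.
from typing import Optional

DISCLAIMER = "Note: AI-generated content may contain inaccuracies."

def release_output(output: str, reviewer: Optional[str]) -> str:
    if reviewer is None:
        # Enforce the policy in code rather than trusting habit alone.
        raise PermissionError("AI output requires human sign-off before release.")
    return f"[AI-generated; reviewed by {reviewer}]\n{output}\n{DISCLAIMER}"
```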
Alongside these systems and processes sits a wider commitment to responsible use. As a certified B Corporation™, we have an added responsibility to ensure our use of AI aligns with high standards of social and ethical accountability, as well as clear legal and operational frameworks. Rather than seeing this as a constraint, we see it as something that strengthens how we build and use AI. We also recognise that AI governance is not a finished state and won’t be for quite some time. It continues to evolve alongside the technology itself, which is why our focus is on building systems and fostering behaviours that are robust enough to manage today’s risks, with room for flexibility to improve over time.
AI is now a core part of how organisations operate. Those that treat it as a shortcut will continue to create risk, while those that avoid it altogether may miss out on efficiency and innovation gains. We choose to embrace AI, and we believe that treating it as a capability requiring discipline will help shape a better future for procurement and supply chain analytics.
