Most enterprise teams evaluate their Drupal support partner by what didn't break. Tickets resolved on time. Patches applied on schedule. Those metrics confirm that nothing visible has gone wrong. They don't tell you whether anything is being prevented.
Over the last 15 years, we've inherited many Drupal websites that were built and previously supported by someone else. The pattern across those takeovers is consistent enough to name: most enterprise Drupal sites are being maintained, not managed.
At Vardot, we draw a distinction that matters here: we don't maintain Drupal sites; we steward them. Our scope is not to keep a website alive; it is to keep it moving forward month by month, making it safer, faster, and more capable over time.
Maintenance is reactive; something breaks, a ticket gets filed, the ticket gets closed, and the SLA is met. Stewardship is different; it is the ongoing work of treating a website as a living organism that is continuously monitored, validated, and improved. That work happens between tickets, not because of them.
If the only thing your current Drupal support and maintenance model is producing is green dashboards and closed tickets, it is worth asking what is being done with the rest of the time.
The Three Shapes of Reactive Support
When a new client comes to Vardot from another partner, their previous support model almost always falls into one of three patterns.
The first is a single freelancer, often technically capable, almost always a single point of failure. Sometimes there is no dedicated support partner at all, and the website is being kept alive by an internal team that built it.
The second is an offshore team operating on a ticket-based model, with no named owner and no human face on the provider side, no one who understands the client's business or their website. Tickets come in, tickets get closed, and the work runs as a transactional loop that never accumulates into anything larger.
The third is an internal team that built the website and is now being squeezed to also support it. The same people who should be advancing the platform are absorbed in keeping it running.
The problem is rarely the people; the freelancer may be excellent, the offshore team may be technically sharp, the internal team may know the platform deeply. The problem is the model. None of these three produces dedicated ownership. No one is treating the website as a living organism that requires continuous attention.
And when no one is, in very simple terms, the work is not getting done.
What a Reactive Model Quietly Misses
The specific form this takes is predictable. There will be a security update that no one is working on. There will be a search function that has stopped working, and no one has noticed. There will be components across the site that are not being monitored, are not being kept current, are not being checked, and no one owns the visibility to catch any of it.
None of these triggers a ticket, because the problem is the absence of visibility itself.
A website is not a single piece of software. In Drupal specifically, it is built from many components: core, contributed modules, custom code, themes, and third-party packages.
In technical terms, that list is the SBOM, the software bill of materials. With hardware, the bill of materials is fixed once the product is built; with software, the SBOM changes day by day: security patches, improvements, newly discovered gaps. Every item on that list has its own release cycle and its own vulnerabilities.
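For a Composer-managed Drupal site, the composer.lock file is effectively that SBOM: every component and its pinned version in one place. The sketch below is a minimal illustration of enumerating it; the lock structure follows Composer's standard format, but the inline data is invented for the example.

```python
import json

def sbom_from_lock(lock_text: str) -> list[dict]:
    """List every component pinned in a composer.lock: name, version, type."""
    lock = json.loads(lock_text)
    components = []
    for section in ("packages", "packages-dev"):
        for pkg in lock.get(section, []):
            components.append({
                "name": pkg["name"],
                "version": pkg["version"],
                "type": pkg.get("type", "library"),
            })
    return sorted(components, key=lambda c: c["name"])

# Invented lock data; a real composer.lock on an enterprise Drupal site
# typically lists a hundred or more entries.
example_lock = json.dumps({
    "packages": [
        {"name": "drupal/core", "version": "10.2.5", "type": "drupal-core"},
        {"name": "drupal/pathauto", "version": "1.12.0", "type": "drupal-module"},
    ],
    "packages-dev": [
        {"name": "phpunit/phpunit", "version": "9.6.19"},
    ],
})

for c in sbom_from_lock(example_lock):
    print(f'{c["name"]:24} {c["version"]:10} {c["type"]}')
```

From there, Composer itself can cross-check the same list against published security advisories with `composer audit` (Composer 2.4 and later), which is one concrete form the "who is monitoring this" question can take.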
The questions a VP of Technology should be able to answer are:
Who owns the monitoring that keeps this organism healthy?
Who is checking that it is not becoming more vulnerable than it was last month?
Who is making sure it is gaining capability rather than losing it?
Under a reactive model, the answer is almost always that no one is.
Stewardship as a Standard
What we replace that with is a different standard for what "support" includes.
We treat enterprise change management as the floor, not a feature. Most of the takeovers we see arrive with one production environment and a developer's local machine, and that is it, which runs against accepted enterprise practice. There should be development, staging, and pre-production environments. Changes get tested and validated before they go live. Deployments have specific windows. There is communication around all of it. We call these our Vardot standards because, in practice, they are the things we don't see implemented in freelancer, offshore, or ticket-based models.
We build operational processes around automated tooling. We have spent years developing tools to measure the health of a Drupal site, including its components, its security posture, and its performance. The point is that nothing is left to chance. We don't wait for failures to surface. We predict them, monitor for them, and prevent them before they become incidents.
We engage senior expertise on a cadence. Every client has regular check-ins (weekly, bi-weekly, monthly, or quarterly, depending on how demanding and business-critical the site is), and those meetings include a senior member of our team. Not an account manager who escalates, but a practitioner who can discuss the findings, answer the technical questions, and recommend what to work on next. That level of senior engagement is what is usually missing in other support models.
Where AI Sits in Our Workflow
The shift from reactive to proactive support is not primarily about adding more headcount. It is about changing what the people you already have are spending their time on. AI is what makes that shift viable at enterprise scale.
At Vardot, AI is embedded in our managed services workflow in a few specific places.
The first is automated testing. We have built testing frameworks that validate hundreds, if not thousands, of aspects of a website every time we change something, so that fixing one thing in one corner of the site does not quietly break something in another. With AI, we extend that coverage toward near-100% of the website: every code change, feature update, or bug fix generates its own automated test. For clients who come to us with no automated testing in place, we know that is a big ask to take on at once, so we will often establish CI/CD first, layer in basic AI-generated test coverage from day one, and grow the coverage with every subsequent change.
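In miniature, a suite that grows with every change can be sketched like this. This is an illustrative harness, not our production framework; the checks and the stubbed site responses are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    probe: Callable[[], bool]  # returns True while the behavior still holds

def run_suite(checks: list[Check]) -> list[str]:
    """Run every check; return the names of the ones that failed."""
    return [c.name for c in checks if not c.probe()]

# In practice each probe would hit the live site (status codes, search
# results, rendered markup). A stubbed response table keeps this sketch
# self-contained.
site = {"/": 200, "/search?q=drupal": 200, "/contact": 200}

suite = [
    Check("homepage responds", lambda: site.get("/") == 200),
    Check("search still returns 200", lambda: site.get("/search?q=drupal") == 200),
    # Added alongside a bug fix, so the regression cannot silently return:
    Check("contact form reachable", lambda: site.get("/contact") == 200),
]

failed = run_suite(suite)
print("all green" if not failed else f"failing: {failed}")
```

The design point is the append-only suite: every fix or feature adds a probe, and the whole list runs on every deployment, which is what lets coverage grow with each change rather than being built all at once.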
The second is ticket analysis. Every ticket filed by every client gets analyzed with AI. That helps us make sense of patterns across the engagement and serve clients better over time.
The third is internal tooling. We continuously develop micro-applications using AI that extend our monitoring and our ability to provide value to clients. The economics of building internal tooling have changed enough that we build more of it than we used to. (I have written separately on Forbes about how the build-versus-buy equation has shifted.)
There is also a set of work that sits adjacent to support and is increasingly part of what an enterprise Drupal engagement should include: AI readiness for the website itself. The llms.txt standard. Markdown versions of content that make the site easily available to AI agents and chatbots. Edge configuration that permits controlled, healthy bot access rather than blocking AI crawlers outright, which many performance-focused deployments do by default, and which actively harms AI presence. As a Cloudflare partner, we configure the right combination of features to maintain performance and AI visibility at the same time.
We also advise on AI strategy as it pertains to the website itself, and that advice is shaped by the industry. We work primarily with nonprofits and international organizations (including UN agencies), healthcare providers, B2B commercial enterprises, and universities. The metric that matters depends on the sector: impact and fundraising for a nonprofit, lead generation for a B2B company, reach and Google News visibility for a publisher. AI strategy for the website has to start from what the website is actually for.
There is a forward direction here that I will mention briefly. We have built internal AI tools, like a brand-compliance reviewer, that we use in our own workflows. The natural next step is bringing tools like that inside the CMS itself. Imagine an editor publishing an article in Drupal and getting on-brand-or-not feedback in the moment. Those tools exist, and we are building the bridge into the CMS now.
It is worth saying that Drupal is well-positioned for all of this. Its architecture enforces structured content, data governance, and revisions, characteristics that matter when integrating AI at enterprise scale, where structured data and editorial control determine output quality. Vardot is a Gold Sponsor of the Drupal AI Initiative and contributes to the Drupal AI Maker program, alongside a handful of other companies working to accelerate AI capability within the platform.
Seven Questions for Your Next Quarterly Review
If you suspect your current Drupal partner is reactive but cannot prove it, here are the seven questions I would take into your next QBR. The answers will tell you more about the shape of the engagement than any SLA report.
An analogy first. When you speak with an experienced doctor, three or four questions are usually enough for them to produce a reasonable diagnosis. Drupal experts work the same way. The first three questions below are how we diagnose a site before we even have access to the code.
1. Ask for the Software Bill of Materials and how it is being monitored
If no one can produce a current SBOM, no one is monitoring your attack surface.
2. Ask for the three standard Drupal reports: the module list, the content types and fields list, and the status report
Any Drupal expert can read the health of a site from these three documents alone, and they don't require code access.
3. Ask what the technical governance process looks like
Automated test coverage, a CI/CD pipeline, and separated environments for development, staging, pre-production, and production are the floor, not the ceiling.
4. Ask where AI is integrated in the support workflow today, specifically
Not "do you use AI" as a general question. Ask what AI is catching, predicting, or automating today as it pertains to your site.
5. Ask about Drupal certification status
Drupal certifications are an attestation that a partner actually knows Drupal, not a marketing badge.
6. Ask about the SLA and subject matter expertise across the ecosystem
Managed services do not end at the CMS; they extend to hosting, CDN, WAF, and observability.
7. Ask about the roadmap for advancing the site, not just maintaining it
If your partner does not have a view on where your site should be going next, they are scoped for maintenance, not for stewardship.
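As a concrete illustration of question two: much of a Drupal site's content model can be read straight from a standard configuration export, because Drupal's config file names encode the structure. The sketch below is a stdlib-only illustration under that assumption; the example files are invented, and the naming follows Drupal's standard `node.type.*` and `field.field.node.*` conventions.

```python
from collections import defaultdict
from pathlib import Path
import tempfile

def content_model(config_dir: Path) -> dict[str, list[str]]:
    """Recover the content-type and field inventory from a Drupal config export.

    The file names alone carry the structure: node.type.<type>.yml declares a
    content type, and field.field.node.<type>.<field>.yml attaches a field.
    """
    model: dict[str, list[str]] = defaultdict(list)
    for f in config_dir.glob("node.type.*.yml"):
        model[f.name.split(".")[2]]  # touching the key registers the type
    for f in config_dir.glob("field.field.node.*.yml"):
        parts = f.name.split(".")
        bundle, field = parts[3], parts[4]
        model[bundle].append(field)
    return {bundle: sorted(fields) for bundle, fields in model.items()}

# Invented config export for illustration:
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d)
    for name in ("node.type.article.yml", "node.type.page.yml",
                 "field.field.node.article.body.yml",
                 "field.field.node.article.field_tags.yml"):
        (cfg / name).touch()
    print(content_model(cfg))
```

The point of the exercise is the one made above: an experienced practitioner can read a great deal about a site's health and complexity from these inventories alone, before ever touching the code.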
How We Do It at Vardot
Everything above is diagnostic. This section is what we actually do once a client signs.
The first step is our flagship site audit. It is a post-contract activity because it requires access to your code base and files. Once you hand us the keys, we can x-ray the site properly. The audit covers architecture, modules, configurations, performance, security, hosting and infrastructure, backups, content lifecycle, documentation, user experience, accessibility, risk, code quality, content governance, and technical governance. The output is typically 100 pages or more.
The findings are distilled into a recommendation quadrant in four categories: must-haves (high value, low effort), should-haves (high value, high effort), nice-to-haves (low value, low effort), and haves (low value, high effort). The quadrant becomes a decision framework for the client. The principle is that we don't hand you a buffet of options and ask you to pick. We tell you what we recommend for your budget and your business, and we work from there. Expertise is the value.
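The quadrant logic itself is simple enough to sketch. The bucket labels match the four categories above; the example findings and their value/effort scores are invented for illustration, since in practice each score comes out of the audit.

```python
def quadrant(value: str, effort: str) -> str:
    """Map a finding's value/effort pair ("high"/"low") to its bucket."""
    buckets = {
        ("high", "low"): "must-have",
        ("high", "high"): "should-have",
        ("low", "low"): "nice-to-have",
        ("low", "high"): "have",
    }
    return buckets[(value, effort)]

# Invented audit findings, scored on value and effort:
findings = [
    ("enable caching on listing pages", "high", "low"),
    ("rebuild the search backend", "high", "high"),
    ("tidy the unused admin theme", "low", "low"),
    ("migrate a rarely used microsite", "low", "high"),
]

for name, value, effort in findings:
    print(f"{quadrant(value, effort):12} {name}")
```

The value of the quadrant is not the bucketing itself but the recommendation attached to it: the must-haves become the first phase of the roadmap rather than one option among many.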
Ongoing engagement runs on a regular cadence: weekly or bi-weekly for high-demand sites, monthly or quarterly for others. Every cadence meeting includes a senior member of our team.
Our SLA is built for proactive coverage. The team works 24/7 shifts with on-call engineers, and response times on alerts are up to one hour. The word for what enables that is observability, and it is central to the model. We use observability tools that give us visibility across logs, performance, CPU, and usage, so that we know about a problem before the client does. We also ask for access to Google Search Console and Google Analytics for the same reason, even when the client has dedicated SEO and analytics teams elsewhere. It gives us context we would otherwise be missing.
We also operate as a single agency of record across the website ecosystem, wherever a client is open to that model. We hold partnerships with Cloudflare, Acquia, Pantheon, Upsun, and New Relic, which let us take responsibility for hosting, CDN, WAF, and observability alongside Drupal itself. The alternative, which we see often, is split responsibility across vendors, and the blame-throwing that follows when something goes wrong at the seam between them.
A recent example: we have been onboarding a global chemicals manufacturer, inheriting their web ecosystem and applying our standards across it. Some of those properties sit on a complex hosting setup with multiple vendors and a RACI matrix between them, which is good to have on paper but slows things down. For this client, we are moving to our single-agency-of-record model, where we handle support, hosting, CDN, WAF, and observability through our own subcontractors and partners. The point is not the move itself. The point is that the alternative (multiple vendors, multiple SLAs, multiple escalation paths) is something we are actively unwinding for an enterprise client because the operational cost is real.
The Bar Is Rising
If your current Drupal partner can give you clear answers to the seven questions above, you are in a defensible position.
If they cannot, those answers are telling you something your SLA dashboard is not.
Maintenance keeps a Drupal site alive. Stewardship keeps it moving forward. They are not the same engagement, and they don't produce the same outcomes. From where we sit, the gap between the two is widening every quarter.
Frequently Asked Questions
What does a stewardship engagement include that a standard SLA does not?
A standard SLA contract measures success by ticket response time and resolution rate. Stewardship measures success by what was prevented, what was advanced, and what was discovered before it became an incident. The two models can carry similar headline pricing but produce very different outcomes: a stewardship engagement includes site health monitoring, automated test coverage, AI-aided detection, senior practitioner cadence calls, and a living roadmap. None of these typically appears on a ticket-based SLA, which means the work either does not get done or gets billed separately as project work.
What does the flagship Vardot site audit cover?
A flagship Vardot site audit examines fifteen areas of the platform: architecture, modules, configurations, performance, security, hosting and infrastructure, backups, content lifecycle, documentation, user experience, accessibility, risk, code quality, content governance, and technical governance. The output is typically a 100-page-plus report distilled into a recommendation quadrant (must-haves, should-haves, nice-to-haves, and haves) sorted by value and effort. The audit is a post-contract activity because it requires access to the codebase, files, and infrastructure. Timeline depends on site complexity, but most audits run two to four weeks from access to delivered report.
Does Vardot take over Drupal sites built by other teams?
Yes. A large part of Vardot's managed services portfolio is inherited platforms. The takeover process starts with the flagship site audit, which gives both sides a shared understanding of what the platform looks like underneath. The audit also surfaces the gaps the previous support model left behind (outdated modules, missing test coverage, undocumented customizations, environments that do not match enterprise change management standards) and produces a roadmap for closing them. Inherited sites typically need three to six months of structured remediation before they reach the stewardship steady state.
Which metrics show whether a Drupal support engagement is working?
The wrong metrics are tickets closed and SLA compliance: both can stay green while the platform decays underneath. The right metrics measure what the engagement is producing, not what it is responding to. We track security advisories actioned within their disclosure window, automated test coverage as a percentage of the codebase, the number of recommendations advanced from the audit roadmap each quarter, mean time to detect for incidents, and senior practitioner hours engaged per cadence cycle. Together those measures show whether the platform is moving forward, not just whether the lights are still on.