Patching On A Larger Scale.

Patching larger, more complex environments requires focus and consistent dedication to the patching schedule. This is a critical part of patch management. Below are the key areas that need to be right in order to put together a comprehensive and stable patching schedule.

1) CMDB – Your asset register.

Like many aspects of the IT infrastructure, the CMDB is critical. It is the underlying component that ensures the patch schedule is accurate and covers the entire estate, and, provided it is kept up to date, it also supports change management.

There are, of course, many sources of asset data – Active Directory, monitoring tools, VM consoles and antivirus consoles, as well as tools that scan the network for assets. However, it is critical that there is a single source of truth, and in most cases that will be the company CMDB.
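The reconciliation between secondary sources and the CMDB can be automated. A minimal sketch, assuming you can export hostname lists from both the CMDB and a discovery source such as Active Directory (the hostnames below are purely illustrative):

```python
# Sketch: reconcile an asset list from a discovery source against the CMDB.
# Hostnames and source names here are hypothetical examples.

def find_gaps(cmdb_hosts, source_hosts):
    """Return (hosts the source sees but the CMDB lacks,
               hosts in the CMDB the source cannot see)."""
    cmdb = {h.lower() for h in cmdb_hosts}   # normalise case before comparing
    seen = {h.lower() for h in source_hosts}
    return sorted(seen - cmdb), sorted(cmdb - seen)

cmdb_hosts = ["SQL01", "WEB01", "WEB02"]
ad_hosts = ["sql01", "web01", "web02", "web03"]  # web03 is unknown to the CMDB

missing_from_cmdb, stale_in_cmdb = find_gaps(cmdb_hosts, ad_hosts)
print(missing_from_cmdb)  # ['web03']
print(stale_in_cmdb)      # []
```

Anything in the first list is an asset the patch schedule would silently miss; anything in the second is a candidate for decommissioning in the CMDB.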

2) Ownership and stakeholders.

In a larger environment this can be tricky for multiple reasons. Servers are usually supported by a central team – normally the server team or another infrastructure team – but there also tend to be multiple stakeholders when it comes to server ownership.

Understanding server ownership is important, as there needs to be a form of sign-off and agreement by the server owners and stakeholders in order to avoid unplanned or unnecessary downtime or service disruption. Whose job is it to inform stakeholders? These can be other IT teams, the business, or internal and external users.

Reviewing and keeping an accurate record of server ownership is vital, and the place to do this is the CMDB.

3) Grouping and maintenance windows.

Once we understand the environment and who the owners and stakeholders are, we can start to put together the server groups and maintenance windows. Grouping is important, particularly in large environments, as it gives you optimum throughput and reduces service impact – especially if you separate out clustered and paired servers. If you have test, dev and DR environments, you will also want to consider how you group these servers for an optimum patching schedule and minimal service impact.

Once you have grouped the servers together based on a mix of ownership, business and service impact, and criticality, you are ready to put your schedule together.
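The rule of keeping cluster members in separate maintenance windows lends itself to a simple round-robin. A minimal sketch, with entirely illustrative server names and cluster labels:

```python
from collections import defaultdict

# Sketch: build patch groups so that members of the same cluster never
# share a maintenance window. Server and cluster names are illustrative.

servers = [
    {"name": "web01", "cluster": "web"},
    {"name": "web02", "cluster": "web"},
    {"name": "app01", "cluster": None},   # standalone server
    {"name": "sql01", "cluster": "sql"},
    {"name": "sql02", "cluster": "sql"},
]

def build_groups(servers, n_groups=2):
    groups = defaultdict(list)
    next_slot = defaultdict(int)  # per-cluster round-robin counter
    for s in servers:
        key = s["cluster"] or s["name"]
        slot = next_slot[key] % n_groups   # alternate cluster members
        next_slot[key] += 1
        groups[slot].append(s["name"])
    return dict(groups)

print(build_groups(servers))
# {0: ['web01', 'app01', 'sql01'], 1: ['web02', 'sql02']}
```

Each resulting group maps to one maintenance window; no window ever takes down both halves of a pair. A fuller version would also weight by criticality and owner sign-off.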

4) The schedule.

Scheduling a complex environment requires accurate data, resources and good communication. The schedule is not simply a list of dates, times and servers. It brings together all of the elements above and provides a clear view not only of what is being patched, but of what has been patched. Historic information is important, as it aids troubleshooting of post-patching support issues.

In the event of the infrastructure team needing to roll out an emergency patch – for example, in response to an outbreak such as WannaCry – the schedule, along with a well-maintained CMDB, allows you to analyse rollout options, such as how long a rollout will take across your environment and the opportunities for an expedited rollout.
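With server counts from the CMDB, a back-of-the-envelope rollout estimate is straightforward. A minimal sketch, where batch size and per-batch time are illustrative assumptions you would replace with your own figures:

```python
import math

# Sketch: estimate emergency rollout duration from CMDB server counts.
# Batch size and per-batch timings below are illustrative assumptions.

def rollout_estimate(server_count, batch_size, minutes_per_batch):
    """Servers are patched in parallel batches; each batch covers
    install, reboot and post-checks."""
    batches = math.ceil(server_count / batch_size)
    return batches * minutes_per_batch

# e.g. 400 servers, 50 at a time, ~45 minutes per batch
hours = rollout_estimate(400, 50, 45) / 60
print(f"{hours:.1f} hours")  # 6.0 hours
```

Re-running the estimate with a larger batch size or relaxed maintenance windows shows what an expedited rollout buys you before you commit to it.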

5) Change management and communication.

When it comes to patching, change management can take different forms and processes from one company to another. An overly rigid change management process presents risk to the patching schedule, and without management – and the change management team itself – being fully on board with their own processes, the schedule can become onerous to manage.

If the change process is too loose, then accountability and communication can be compromised, leading to a poor and untrusted service. Getting the balance right can be tricky, but the key point is to ensure that the change management team, who generally dictate the process, are also available in the event the change process breaks down and leads to failed patching events.

Wrapped into the change process is communication, and in the case of patching you cannot over-communicate. Ensuring that the owners and stakeholders are aware of upcoming maintenance windows is crucial, as is ensuring resources are in place to monitor systems. Pre- and post-patching communication is a critical part of the patching process and is closely linked to change management, as it gives all stakeholders visibility of the upcoming outages.

6) Analysing and reporting.

Sitting just outside the patching schedule itself are analysis and reporting. These are important, as they allow you to review the effectiveness of the schedule and how well it aligns with compliance requirements.

The tools used to carry out the patching can be used to check compliance and patch status. However, relying on the deployment tools for reporting can become cumbersome, particularly when you need to do more advanced analysis, such as predicting compliance and comparing it against the schedule. In addition, if you are running multiple deployment tools in a multi-operating-system environment, it can be difficult to bring the data together for quick, streamlined reporting.
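Bringing that data together usually means merging per-tool exports into one view. A minimal sketch, assuming each deployment tool can export a hostname-to-missing-patch-count mapping (tool names and figures are hypothetical):

```python
# Sketch: merge patch-status exports from multiple deployment tools into
# a single compliance view. Tool names and counts are hypothetical.

def merge_compliance(*tool_exports):
    """Each export maps hostname -> count of missing patches.
    A host is compliant only if its total missing count is zero."""
    combined = {}
    for export in tool_exports:
        for host, missing in export.items():
            combined[host] = combined.get(host, 0) + missing
    return combined

windows_tool = {"web01": 0, "web02": 3}   # e.g. a Windows deployment tool
linux_tool = {"db01": 1}                  # e.g. a Linux patch manager

combined = merge_compliance(windows_tool, linux_tool)
compliant = sum(1 for m in combined.values() if m == 0)
print(f"{compliant}/{len(combined)} servers compliant")  # 1/3 servers compliant
```

Joining the merged view back to the CMDB (owner, group, maintenance window) is what turns raw patch counts into a schedule-aligned compliance report.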


This is why PalisadeSECURE developed AR3. AR3 allows you to manage patching schedules, communication, change management and real-time compliance, and to report to technical staff as well as at board level.

With built-in schedule management you can assign servers to groups, assign owners to groups and link the schedule to change management. AR3 automatically sends out pre- and post-patching communications and assists with resource management, ensuring that resources are available for the patching schedule.

In addition, AR3 has an inbuilt ticketing system to allow you to track failed patching events, such as patches that failed to install.

Join Our Community of Security Professionals

Get blogs, cyber security industry news, updates, and articles delivered right to your inbox. We email once a week with curated topics just for you. No spam, just fun.