Since first envisioned in the early 2000s, the development and evaluation of WAF tools by suppliers and researchers has focused commercially on protecting specific aspects of Internet-facing applications.  Broadly, a web application firewall (WAF) protects the family of HTTP technologies by applying a pre-defined set of rules to HTTP traffic, blocking requests that would allow exploitation by a remote attacker.  Generally these rules cover common gaps such as cross-site scripting, RFC violations, code injection, protocol non-compliance and signature errors; yet they fail to address the majority of security issues discovered during penetration testing, which are left to development teams to remediate.  Compared with proxy servers, which generally disconnect end-points from applications, current-generation WAFs protect servers and specific classes of web application.
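The rule-matching described above can be sketched in a few lines. This is a minimal, hypothetical illustration of signature-based WAF rules; the patterns are far simpler than a production rule set and are assumptions for demonstration only.

```python
import re

# Hypothetical signature rules for a few common attack classes.
RULES = {
    "xss": re.compile(r"<script\b|javascript:", re.IGNORECASE),
    "sql_injection": re.compile(r"(\bUNION\b\s+\bSELECT\b|'\s*OR\s+'1'\s*=\s*'1)", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def classify_request(path: str, query: str, body: str) -> list:
    """Return the names of all rules matched by any part of the request."""
    payload = " ".join([path, query, body])
    return [name for name, pattern in RULES.items() if pattern.search(payload)]

# A request matching any rule would be blocked or alerted on.
print(classify_request("/search", "q=<script>alert(1)</script>", ""))  # ['xss']
```

The limitation the article goes on to describe is visible even here: rules match known textual signatures, so novel exploit classes and application-logic flaws pass straight through.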

A WAF is software that detects predefined traffic signatures, while an NG-WAF adds risk scoring to blocking.  In many respects a WAF could be considered a reverse proxy server, though in most instances it is run from a SPAN port[1], delivered as an application, a server component, or a traffic filter; and it is always customised to fit a given application or programming set.  The resources to perform and maintain this customisation are significant, hence current WAF suppliers focus on certain application types or development environments.

The main limitation of a generic WAF, one to be deployed in all circumstances ‘instantly’, remains compatibility with legitimate traffic; this has limited the benefit-scope of what is arguably a high-value toolset for users of web applications.  Additionally, even when tightly managed, WAFs by themselves are insufficient to address the majority of discoverable vulnerabilities in an organisation’s internet-facing perimeter.

Practical experience

The RedShield service uses a cloud-based platform to deliver the firewall and shielding functions between the customer’s web installation and the Internet.  We describe this as ‘Shielding’ to differentiate it from current WAF tools, as for the first time it provides a 100% barrier against attack.

Our service comes in two variants: most commonly, cloud servers localised to geographic clusters of customers; and on-premise, where a Shield server is located within a specific customer’s boundary or network.  This may be for reasons of traffic volume, number of web applications, or security policy.  These servers operate over a VPN back to the management and development centres, currently in Wellington, New Zealand and Denver, USA.

Our differentiator is how the Shield-Codes are originated, then deployed or adapted, making 100% Shielding commercially possible for the first time.  See the section ‘The Functional Difference’ below.

In simple terms, the protection process is:

1st, a digital footprint or perimeter scan captures background meta-data about systems a customer may, or may not, be aware of;

2nd, an interactive scan of the Internet access point(s) themselves for specific vulnerabilities, accompanied by a fact-finding process covering existing ISMS practices and penetration test results;

3rd, an algorithmically sourced shielding plan defines which rules to apply and which Shield-Code component to use, and is capable of automated deployment;

4th, any components in that shielding plan not meeting the evaluation criteria for protecting against a vulnerability are exception-reported for the development team to respond to.  Practically, this means a Shield-Code has not met the determined levels of protection and may require adaptation, or, in the event of a brand-new vulnerability, a new Shield-Code needs to be created; and

5th, deeper ongoing vulnerability and exploit discovery, using either penetration tests or bug-bounty approaches.
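Steps 3 and 4 above can be sketched as a matching-and-exception process. The library entries, names and `min_coverage` threshold below are illustrative assumptions, not RedShield’s actual data or evaluation criteria.

```python
# Hypothetical Shield-Code library: vulnerability id -> shield and its
# assessed coverage against that vulnerability.
SHIELD_LIBRARY = {
    "CVE-2017-5638": {"shield": "struts2-ognl-block", "coverage": 1.0},
    "reflected-xss": {"shield": "output-encoding-wrap", "coverage": 1.0},
    "weak-session-id": {"shield": "session-token-rewrite", "coverage": 0.8},
}

def build_shield_plan(findings, min_coverage=1.0):
    """Match scanner findings to library shields; exception-report the rest."""
    plan, exceptions = [], []
    for vuln in findings:
        entry = SHIELD_LIBRARY.get(vuln)
        if entry and entry["coverage"] >= min_coverage:
            plan.append((vuln, entry["shield"]))   # ready for automated deployment
        else:
            exceptions.append(vuln)                # needs adaptation or a new Shield-Code
    return plan, exceptions

plan, exceptions = build_shield_plan(
    ["CVE-2017-5638", "weak-session-id", "brand-new-vuln"]
)
# CVE-2017-5638 is covered; the under-coverage shield and the unknown
# vulnerability are exception-reported to the development team.
```

The exception list is what drives step 4: each entry represents either a Shield-Code needing adaptation or, for a brand-new vulnerability, one that must be created.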

The current library of root/source[2] Shield-Codes is over 50,000 strong, spanning the widest range of web technologies and platforms.  This means we now rarely have to create a new Shield-Code from scratch, except for zero-day exposures; the library has been anticipated and grown organically over the last decade of penetration testing.  Historically, due to tightly managed baseline policies, most zero-day vulnerabilities have not required policy changes to detect and block exploitation attempts.

Where a new shield object is required, our standard contractual service level follows a three-day cycle[3] from discovery or awareness of a critical vulnerability to a customer being fully protected[4].  This cycle deploys Shields according to a risk plan, anticipating how quickly a fresh vulnerability becomes known to those who might exploit it.  In emergencies, expedited shields are available 24×7, reducing deployment to under fifteen minutes, with Shield developers and security analysts immediately engaged to scope and deliver Shielding.  See Figure 1 for an example Shield plan.


  • Extends the life of an application.
  • Helps meet launch and compliance audit deadlines.
  • Set-up creates no customer Tech Team workload.
  • Integrates continuous vulnerability and digital footprint scanning.
  • Defers 3rd-party software security-related upgrades (e.g. SAP).
  • Takes minimal hours to deploy for generic Shielding.
  • Delivers zero-day responses within a working day.
  • Makes customer-specific shields available in days.
  • Makes the risk-cost patching equation more flexible.
  • Protects during migration to or from cloud hosting.
  • Protects during outsourced ICT supplier transfers.
  • Can redefine in-house patching to cover only critical items.

The Functional Difference

A WAF, by strict definition, doesn’t adapt application traffic; instead it behaves like a network firewall, classifying traffic as either safe or risky, and then blocks the risky traffic, generates an alert, or ignores it!

The problems are that:

1. misclassification (false positives) is rife, meaning mistaken blocks of legitimate application requests result in a regime that only ever alerts or, worse, is abandoned.

2. only a third of reported application exploits can be protected against by passive or active WAFs, according to independent research[5].

3. the skills to maintain generic WAFs are scarce and prohibitively expensive.

A reverse proxy server effectively breaks the client-server communication and is hence a place where application content and logic flows can be manipulated.  This has led a number of vendors to promote WAF technology built on top of a reverse proxy, adapting basic items, for example sanitising server response headers.
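The “basic item” mentioned, sanitising server response headers, can be sketched as follows. The header names and the added defensive headers are common real-world examples, but which headers any given product strips or adds is an assumption here.

```python
# Headers that fingerprint the backend and help an attacker target exploits.
FINGERPRINT_HEADERS = {"server", "x-powered-by", "x-aspnet-version"}

def sanitize_response_headers(headers: dict) -> dict:
    """Strip backend-fingerprinting headers and add defensive ones."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in FINGERPRINT_HEADERS}
    # Add hardening headers the origin server may have omitted.
    cleaned.setdefault("X-Content-Type-Options", "nosniff")
    cleaned.setdefault("X-Frame-Options", "DENY")
    return cleaned

upstream = {"Server": "Apache/2.4.7", "X-Powered-By": "PHP/5.5.9",
            "Content-Type": "text/html"}
print(sanitize_response_headers(upstream))
```

In a real deployment this function would sit in the proxy’s response path, so the client never sees the origin server’s identifying headers.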

We have taken this concept to the next logical step, adopting a stateful, programmable reverse proxy to host Shield-Code objects that perform advanced manipulation of both message content and application logic.  This approach addresses point 2 above; what about points 1 and 3?  These are overcome using a programmed, test-centric and managed OODA[6] loop capability for a given dynamic environment, which is integrated into the service.
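To make the distinction concrete, here is a hypothetical sketch of a Shield-Code object hosted on a stateful proxy: it rewrites message content (output encoding) and enforces a piece of application logic the application itself lacks (a per-session rate limit). The class, method names and limits are illustrative assumptions, not RedShield’s implementation.

```python
import time

class ShieldCode:
    """One shield: inspect and rewrite requests while tracking per-session state."""

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.sessions = {}  # session_id -> list of request timestamps

    def process(self, session_id, body):
        now = time.time()
        recent = [t for t in self.sessions.get(session_id, [])
                  if now - t < self.window_seconds]
        recent.append(now)
        self.sessions[session_id] = recent
        if len(recent) > self.max_requests:
            return None, "blocked: rate limit exceeded"            # logic shield
        safe = body.replace("<", "&lt;").replace(">", "&gt;")      # content shield
        return safe, "ok"

shield = ShieldCode(max_requests=2)
print(shield.process("sess-1", "<b>hello</b>"))  # encoded body, status "ok"
```

Because the proxy is stateful and programmable, shields like this can encode business rules (sequencing, quotas, field validation) rather than only matching traffic signatures, which is what lets them cover vulnerability classes a signature WAF cannot.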

Through focus, and using elements of AI and automation, we achieve speed and accuracy, with Shield-Code productivity thousands of times above the industry baseline to date; this means we achieve scale without an army of analysts and developers to support it.

Why not AI on its own?  For the same reasons machines aren’t yet leaders in writing beautiful applications, they can’t reverse-engineer, and hence protect, applications on their own.  Strategically, our goal is to automate increasing numbers of functions and to infer causation from AI correlations.  We are operating with, and machine-learning from, expertly trained and machine-informed analysts.

First published and Copyright in 2017

[1] SPAN (Switched Port Analyzer) sends a copy of all network packets seen on one switch port to another port optimised for analysis.

[2] A single Shield-Code may give rise to multiple operational variations, which are not included in the code count.

[3] This is a long time; typically we are much faster.  For example, within five days Equifax would already have been breached via Struts2.

[4] Breaches at Oracle took 43 days to remediate across all its installations.  2017 saw breaches and major losses of personal data at Equifax: 18 days just to conduct the vulnerability scan, 116 days to patch the application, and 143 days elapsed before it was secured against that single vulnerability.

[5] Ramon Krikken: Future of AppSec, Washington DC Security Symposium 2017, Gartner Research.  He specifically linked exploit defence to WAF and RASP, and abuse of functionality and access violations to code remediation.

[6] Observe, Orient, Decide, Act:  a tool developed by military strategist John Boyd to explain how organisations cope with uncertainty and chaotic environments.
