Modern networks are often sprawling, with multiple points of ingress. The growth of remote working and cloud computing has removed traditional boundaries, and networks can contain internal resources and remote cloud services within the same environment. These advances in internal infrastructure have enabled massive growth and new ways of working across nearly all industries. However, have the testing and scoping of these services kept up?
In this blog post, I will talk about traditional infrastructure testing versus newer testing methodologies. With this information and background, you should be able to scope, build and demand better penetration tests.
These tests ought to provide verifiable information, a comparison to real-life attack scenarios, and do away with vulnerability-scanner filler reports. I will predominantly talk about infrastructure testing; however, there are still numerous takeaways for scoping and planning application testing.
A lot of the concepts mentioned in this post are not new; a number of other testers and companies also push this way of testing. I’d also like to credit Tim Medin (SANS instructor and Principal at Red Siege), who brought these ideas to my attention in my early years of testing through his podcasts and the conference talks I was fortunate enough to attend.
How is Traditional Testing Carried Out?
Traditional penetration testing has long used a single Kali/attack host plugged into the internal network. This host is often placed in or alongside management networks, able to view all networks and VLANs within scope, allowing complete coverage and easy scanning and enumeration. However, this configuration and deployment makes significant assumptions and fails to work within the confines of a realistic attack scenario, which is often not reflected in the reported issues. Key mitigations or contextual information may not be accounted for, as the position on the network bypasses them, resulting in inflated severity ratings.
Authenticated testing is often part of this testing type, with a highly privileged account provided to the testing team to allow authenticated scanning and audit tools. However, accounts are not always provided; it’s not uncommon to hear “But you’re a hacker, shouldn’t you find a way in?” in response to asking for credentials. Numerous tests also simply apply this “authenticated testing” to a sample set of hosts and use the information to carry out patch audits and identify out-of-date software. Although this information can prove useful, it should not be the sole takeaway from a third-party penetration test.
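At its simplest, the patch-audit portion of authenticated testing boils down to comparing installed software versions against the versions in which known vulnerabilities were fixed. A minimal sketch of that comparison is below; the package names and version data are hypothetical, and real audit tooling draws on vendor advisory feeds rather than a hand-written dictionary:

```python
# Minimal patch-audit sketch: flag installed packages older than the
# version in which a known vulnerability was fixed.
# Package names and version data below are illustrative, not real advisories.

def parse_version(v: str) -> tuple:
    """Turn a simple dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def audit(installed: dict, fixed_in: dict) -> list:
    """Return (package, installed, fixed) for packages predating the fix."""
    findings = []
    for pkg, version in installed.items():
        if pkg in fixed_in and parse_version(version) < parse_version(fixed_in[pkg]):
            findings.append((pkg, version, fixed_in[pkg]))
    return findings

if __name__ == "__main__":
    installed = {"openssh": "8.9.1", "exampled": "2.4.0"}  # gathered via authenticated access
    fixed_in = {"exampled": "2.4.7"}                       # hypothetical advisory data
    for pkg, have, need in audit(installed, fixed_in):
        print(f"{pkg} {have} is below fixed version {need}")
```

This is exactly the kind of output a sampled “authenticated test” produces: useful hygiene data, but nothing about how an attacker would actually chain access.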
These traditional penetration tests also look to exploit known vulnerabilities. They often follow the scan, verify, exploit, elevate and pivot flow, which is expected and can often yield useful results. However, in well-established environments with internal vulnerability scanning programmes and robust, proactive patching cycles, this can often leave several stones unturned.
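Underneath the tooling, the “scan” step of that flow is just probing hosts for listening services. A minimal sketch of a TCP connect scan illustrates the idea; real engagements would use dedicated tooling such as nmap rather than anything this basic:

```python
# Minimal TCP connect-scan sketch: the first step of the traditional
# scan > verify > exploit flow. Real engagements use dedicated tooling
# (e.g. nmap); this only shows the underlying idea.
import socket

def tcp_scan(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scanning localhost as a harmless example target.
    print(tcp_scan("127.0.0.1", [22, 80, 443]))
```

The point is that this whole flow starts from unauthenticated network reachability, which is exactly the assumption a well-patched environment is designed to frustrate.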
Modern networks also utilise hybrid and cloud deployments. These frequently intersect and interact with on-premise systems and servers, yet they are commonly removed from the scope or simply not tested at all. The traditional scan > identify > exploit process is often useless against these hosts, as the systems almost always require authentication to interact with. Authentication and access to these cloud services are often left out of scope because the focus is on “internal testing”. However, the line between cloud and internal systems is blurred within most modern networks. If you utilise a hybrid deployment and are not testing both internal and cloud systems in tandem, then you are severely limiting the scope of your tests.
Pros and Cons of Traditional Testing
To summarise the pros and cons of traditional testing:

Pros:

- Good coverage of all internal assets or assets in scope.
- Useful for gaining an understanding of vulnerabilities and their impact if there is currently no means to measure this.
- Authenticated testing can be used to highlight missing patches, server and host configuration issues and potentially undetected means of exploit.

Cons:

- Fails to apply context and misses mitigations within findings.
- Makes massive assumptions about how attacks are likely to happen, including a local device with unauthenticated access being physically plugged into the internal network.
- If no credentials are provided for authenticated testing, the testing can miss huge amounts of data and provide a false sense of security.
- Traditional scan > identify > exploit methods are often redundant in developed networks with their own vulnerability management processes.
The above lists are not exhaustive but highlight some of the pros and cons that come with this traditional means of testing. Traditional internal testing still has its place within some networks and, when scoped correctly with clearly defined goals around the expected outcome, it can still yield useful actionable information to help enhance overall security. However, it is not the only way to do this.
Assumed Breach Testing
Assumed breach testing takes the authenticated testing approach and runs with it. The premise is simple: consider the likely exploit paths and scenarios of actual attacks, and assume that a breach will happen at some point, somewhere in any network. This could be via a 0-day; take the recent Exchange or FortiGate exploits as potential starters. Alternatively, access could be achieved via phishing, with cloud or local host access gained, as in the recent Uber breaches.
In either scenario, the attacker is likely to start with credentialed access to the network, within the confines of a network device, server or appliance. The approach takes the priority away from achieving that initial foothold and places the spotlight on what happens next. It is this area in which most networks I have tested fail. Even the most robust perimeter, with the highest levels of patching and cutting-edge protections, is not exempt from host and user breaches.
Assumed breach testing then looks to move onto the wider network; its goals (scope-dependent) will often be to achieve the highest level of privilege or gain access to sensitive information. Both example goals provide a tangible, real-world aim for the assessment. These exploit scenarios and findings put the failings of defence-in-depth controls into context and allow an organisation to understand how it might perform in a realistic attack. Assumed breach testing acts as a nice stopgap between normal vulnerability-identification testing and a full-on Red Team. Testing does not emphasise stealth but looks to utilise similar tooling and objectives to a real threat.
Cloud environments also benefit from this form of testing. Starting within the authenticated confines of a network allows the testers to enumerate and move within cloud systems. It’s not unusual during assumed breach testing for the team to move from on-premise systems into cloud-only users or systems to achieve goals or highlight impact.
As with traditional testing, assumed breach testing has its own pros and cons.
Pros and Cons of Assumed Breach Testing
Pros:

- A more realistic approach to testing, with the emphasis on post-breach risk and defence-in-depth controls.
- Removes the focus on initial footholds and allows testing within the authenticated confines of networks. This often highlights entirely new or unknown issues.
- Much better at testing cloud-integrated or hybrid environments.
- Works within the confines of current network and domain controls; this adds context and realism to findings, as well as potentially showing what is working well.
- Helps to prioritise the remediation of issues that could be used by a real threat.

Cons:

- Rarely utilises the scan > identify > exploit method, which may mean known exploits are missed.
- Will not give entire system or network coverage, or vulnerability scan-styled output.
Overall, for most organisations, an assumed breach methodology will provide better value from testing. These tests can be used to pinpoint real issues in realistic scenarios without the overhead or resources required by more advanced testing methodologies such as a full Red Team.
The Best of Both Worlds
Why not do both?
A combination of the two creates a testing approach with the best of both worlds. A simple unauthenticated network penetration assessment can be used to highlight known exploits and unauthenticated attack paths before moving on to assumed breach testing. This combination ensures better coverage of all concerns and can create alternative exploit chains.
A joined-up approach covers the fringe risk of an unauthenticated rogue device, while allowing testers to prioritise post-breach activities should the unauthenticated testing yield no access.
Building Better Tests
Regardless of the chosen testing method, there are several things organisations can do to ensure they get the most from testing. It all starts with scoping. Countless clients and organisations come forward wanting or needing testing without considering why, or what the desired output is. When building testing scopes or engaging third-party providers, always try to ensure systems, networks or applications are tested as they would be used.
This is one of the reasons assumed breach testing has such value. It’s very common to restrict testing to only certain areas or key systems out of a desire to ensure they are secure. However, this blinkered approach often does a disservice to the overall security of the target and can create a false sense of security.
By removing realistic use of, or access to, the target you compromise the test: all it demonstrates is that, within a very specific and constrained scenario, the target is secure. This may be advantageous for simply ticking a box but results in a lower overall level of security. An example would be testing Azure Active Directory separately from on-premise Active Directory even though the two are linked and accounts are synchronised and used for access between them. By imposing false restrictions, you limit the findings, outcome and usefulness of testing.
In place of limiting access, consider limiting the scenario or goal. If you want to test a specific system, identify its common use, access and management. Build a test that follows this and gives the testing team a legitimate starting point. Provide strict and structured goals, limiting the time an engagement takes and ensuring the team stays on target rather than sprawling across the network.
This goal-and-scenario approach to testing will always be superior to false restrictions. However, it requires planning and transparency from both parties. The testing team must stay on target and ensure testing activities always bear the goal in mind. It also requires honesty and forward thinking from the target: the scenario must be designed realistically. If all real internal users are given local admin rights on their laptops but the testing team is not given the same access, you are again implementing false controls to alter the outcome. With both sides working together this can be avoided, leading to a well-structured, well-scoped and outcome-driven test that truly helps to elevate security within an organisation.