Friday, March 4, 2022

Carry out professional red and blue team drills, Part 18: Several less common types of purple teams

In the previous chapter, we discussed some of the challenges of working with purple teams and the common types of purple teams. In this chapter, I'll build on that discussion by describing a few less common types of purple teams.

  Catch-on-the-spot testing

The catch-on-the-spot test is a play on red team terminology: the red team agrees to be caught in the act. During a catch-on-the-spot test, the evaluators intend to be discovered by the blue team on purpose. This type of purple team test focuses on the different stages of an assessment, each for different reasons and with different responses: the portion of the evaluation before the red team is discovered, the point in the red team operation at which it is discovered, and the portion of the evaluation after it is identified. I have participated in this kind of purple team drill and found it very useful for client organizations. As mentioned earlier, though, this type of testing can lead to conflict between teams.

Catch-on-the-spot testing can of course be done manually or through automation. In the manual version, the evaluators gradually degrade their tradecraft until they clumsily let the blue team catch what they are doing. At this juncture, the purple team's work can go down one of two paths. In the first, once the blue team finds the red team, the purple team's activities stop, and the blue team works with the red team to reconstruct what happened before the red team was found and why it wasn't caught earlier. Alternatively, the blue team tells the red team what it is able to monitor, and the red team slowly sharpens its tradecraft and hides itself until its activity becomes indistinguishable from legitimate activity to the blue team. As an ongoing purple team exercise, these two steps are repeated as the blue team learns and improves at monitoring and identifying red team activity. One problem with this approach is that it is not a standardized, repeatable, or defensible way of improving blue teams, and it may not work for all organizations.

Another way to conduct catch-on-the-spot testing is to automate it using the tradecraft of ethical hacking. In this form of the test, most of the human factor is removed, and the results of the exercise are more an assessment of the prevention and monitoring capabilities the organization already has.

I first saw this technique implemented by a red team member at a very large enterprise. After we had escalated access to almost every device in the target data center, we began automated catch-on-the-spot testing. Tradecraft came into play in selecting where the tools would be executed: some targets were selected within the same data center, some communicated between data centers, and some were chosen in the DMZ and other management subnets within the target scope. The toolset we installed performed a series of offensive security actions of varying levels of sophistication, such as adding keys to SSH users, persisting binaries or scripts, creating users, and other such operations. The actions were performed in order from least likely to be discovered to most likely to be discovered, and were executed simultaneously on each device. After the tools finished running, we worked with the blue team to determine which actions hit their monitoring rules, which actions were blocked, and which alerts were triggered. This gave the blue team members a clear understanding of their defensive capabilities. In some cases, they thought they would be alerted to a detected attack operation 100% of the time, but due to misconfigured network taps and other issues, their monitoring did not detect certain operations.
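The ordered execution described above can be sketched as a small harness. This is a hypothetical illustration, not the tool we actually used: the action names, the `detectability` scale, and the `run_all` interface are all invented, and the real offensive steps are stubbed out with no-ops.

```python
# Sketch of an automated catch-on-the-spot run: a catalogue of
# offensive actions, ordered from least to most likely to be detected,
# is executed and the timeline recorded so it can later be correlated
# with the blue team's alerts. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    name: str
    detectability: int  # 1 = stealthy, 5 = noisy (invented scale)
    run: Callable[[], None]

@dataclass
class TestRun:
    actions: List[Action]
    log: List[str] = field(default_factory=list)

    def run_all(self) -> List[str]:
        # Execute from least to most likely to be discovered,
        # logging each step for the post-exercise debrief.
        for action in sorted(self.actions, key=lambda a: a.detectability):
            action.run()
            self.log.append(action.name)
        return self.log

noop = lambda: None  # placeholder for the real offensive steps
run = TestRun(actions=[
    Action("create-local-user", 4, noop),
    Action("append-ssh-authorized-key", 1, noop),
    Action("persist-binary-in-cron", 2, noop),
])
print(run.run_all())
# -> ['append-ssh-authorized-key', 'persist-binary-in-cron', 'create-local-user']
```

The point of keeping an execution log is that the debrief can then walk the blue team through the timeline action by action and ask, for each entry, whether an alert fired and how long it took.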

Such catch-on-the-spot testing can be performed with the blue team providing its monitoring and defense rules ahead of time, and the red team entering the test knowing which moves the defenders expect to capture. This strategy provides an instant picture of what needs remediation, because any missed alerts are directly tied to activity the blue team believed it would catch. Another option is to have the red team devise the attacks itself, include them in a debrief at the end of the engagement, and review them with the blue team so the defenders can adjust their capabilities to deal with such attacks. This also allows remediation to be ordered from highest risk to lowest, since missed attacks that are extremely dangerous or obvious should be handled first.
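The highest-risk-first ordering of missed detections can be expressed in a few lines. This is a minimal sketch with invented action names and risk scores, just to make the prioritization rule concrete:

```python
# Actions the blue team missed are queued for remediation from
# highest risk to lowest. The data below is invented for illustration.
actions = [
    {"name": "dump-credentials", "risk": 9, "detected": False},
    {"name": "create-user", "risk": 5, "detected": True},
    {"name": "modify-sshd-config", "risk": 7, "detected": False},
    {"name": "port-scan", "risk": 2, "detected": False},
]

missed = [a for a in actions if not a["detected"]]
remediation_order = sorted(missed, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in remediation_order])
# -> ['dump-credentials', 'modify-sshd-config', 'port-scan']
```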

The two conventional catch-on-the-spot testing methods above focus on signature-based security operations, where the organic blue team identifies weaknesses in its handling of known operations that should be logged or blocked. A newer idea I've discussed with a colleague is to use machine learning to automatically test an organization's monitoring devices. This admittedly strays from the theme of the book, but I feel it's worth mentioning alongside the other catch-on-the-spot purple team concepts. Essentially, a tool is installed at various locations across the organization and the Internet, and it listens to and learns the organization's baseline network traffic. It then does the exact opposite of what heuristic monitoring software does: it starts sending its own traffic and slowly gets "dumber." As the traffic the tool sends looks less and less like the network baseline, monitoring software of varying sophistication should flag it as anomalous at different points. Placing and executing such a tool at key points in the network flow can give an organization a factual understanding of whether its heuristic, or even signature-based, traffic monitoring is configured to capture what it hopes to capture.
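The "slowly gets dumber" idea can be illustrated with a toy model. This is an assumption-laden sketch, not any real product: the baseline statistics are invented, the drift is deterministic (a real tool would add noise and model far more than one feature), and the detectors are reduced to simple z-score thresholds.

```python
# Toy model of traffic that drifts away from a learned baseline: each
# step deviates half a standard deviation more, so a looser detector
# (3-sigma) fires later than a stricter one (2-sigma). All numbers
# are illustrative.
BASELINE_MEAN, BASELINE_STDDEV = 500.0, 50.0  # e.g. request size in bytes

def drifting_value(step: int) -> float:
    # Deterministic drift: half a sigma further from baseline per step.
    return BASELINE_MEAN + step * 0.5 * BASELINE_STDDEV

def z_score(value: float) -> float:
    return abs(value - BASELINE_MEAN) / BASELINE_STDDEV

samples = [drifting_value(s) for s in range(20)]
first_over_2 = next(i for i, v in enumerate(samples) if z_score(v) > 2)
first_over_3 = next(i for i, v in enumerate(samples) if z_score(v) > 3)
print(first_over_2, first_over_3)  # -> 5 7
```

The gap between the two trigger points is the interesting measurement: it tells the organization how much extra "dumbness" an attacker can afford before each layer of monitoring notices.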

  Catch and release

Catch-on-the-spot testing is more focused on improving the blue team's approach or identifying gaps in it. Catch and release is a purple team exercise designed to test the resiliency of red team operations and the ability of the blue team to identify and track red team activity. In this type of purple team drill, the red team gets caught. When this happens, the red team is told what the blue team discovered about the actions it took, and is then given a short head start before the blue team begins actively trying to isolate its tools and kick it out of the network. The amount of time between notification and capture-or-defend activity should correlate with how long it takes a monitoring device to raise an alert on a recorded red team action, plus how long it takes an analyst to notice the alert and initiate incident response. A "catch" in catch and release can be a simulated alarm or a genuine recognition of red team activity by the blue team. The "release" in this assessment is the time allowed for the red team to mitigate the capture and continue infiltrating the network.
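The sizing rule for the release window can be written down directly. A minimal sketch, assuming the organization has measured (or estimated) its own alert latency and analyst triage time; the specific durations below are invented:

```python
# The red team's head start should mirror the organization's real
# detection timeline: alert latency plus analyst triage time.
from datetime import timedelta

def release_window(alert_latency: timedelta, analyst_triage: timedelta) -> timedelta:
    # Time between "you've been caught" and active blue team response.
    return alert_latency + analyst_triage

# Illustrative values: the alert fires 12 minutes after the action,
# and an analyst typically takes 25 minutes to notice and act on it.
window = release_window(timedelta(minutes=12), timedelta(minutes=25))
print(window)  # -> 0:37:00
```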

The benefit of this is that the red team can carry out whatever redundancy and resilience-building activities it likes in an attempt to maintain a foothold in the organization. Additionally, the blue team can conduct a true incident response, actively trying to remove attackers from the network, knowing that an incident has been declared. Of all the purple team events discussed, this type allows both red and blue teams to exercise creativity and improve their processes. Again, this is more likely to happen in an organization with its own red and blue teams. However, I think it's a very beneficial way of practicing the purple team concept, as it fully exercises both attack simulation and the assessment of an organization's incident response.

The catch-and-release test also highlights a very valuable point for red, blue, purple, and offensive security in general: being caught does not mean the threat has been defeated. Usually the blue team tells us they've caught us while the drill is still going on, and then I sit through a debrief or a conversation once the drill is over. From personal experience, alerts on an action are often raised hours or even days after the activity occurs, and in almost every case, catching the activity alone does not remove the attacker's presence. If a blue team discovers that I've executed a risky privilege escalation exploit on a host and then discovers that I'm mining information on that host, but does so two hours after I ran the exploit and more than an hour after I stopped interacting with the machine, that hardly means I've been defeated; I may have already jumped to several other hosts. I urge defensive and offensive security practitioners alike to understand that catching an attack is useless, both in assessments and in real-world activity, if the subsequent incident response fails to defeat the attacker in the organization. It's very frustrating when a blue team suggests an assessment wasn't sophisticated because they caught one attack, or dismisses the red team's later, quieter actions as irrelevant. I've come across this time and time again, and it misses the point. These exercises are a great opportunity for organizations to learn their own limitations and practice incident response against an attacker who, unlike a real hacker, will not expose their data and vulnerabilities to the public.

  Helpful hacker

The least adversarial and easiest purple team activity to implement is what happens after an offensive security assessment, during remediation and mitigation of findings. Whether acting as a deliberate purple team or simply taking exercise reporting to a higher level, attacker input on remediation and mitigation strategies is invaluable. This input ensures that defenders remediate the security issues discovered by the exercise in a way that defeats the attacker, not merely the simulation of the attacker. It also helps to effectively prioritize and address the list of security issues the exercise identified. While participating in purple team drills, I've witnessed many instances where a client's security staff came up with a fix that prevented the red team's simulated attack but didn't address the root cause of the problem. This is like treating the symptoms rather than the cause of an infection. The following are real-world examples I've come across where the initial solution proposed by security staff addressed a symptom, not the cause we ultimately worked with them to treat.

Two similar examples involve security products in target organizations that red teams used to move laterally across the enterprise during exercises. One was Linux-based enterprise configuration management software that centrally manages most of the organization's systems. The other was Windows endpoint antivirus software centrally managed from a server. In the Linux case, credential reuse allowed remote access, and a kernel privilege escalation gave the red team a foothold on the configuration management server. From there, the red team could push changes across the enterprise, such as installing backdoors, changing passwords, and other actions, since the server had privileged access to every managed node. In the Windows case, compromising a semi-privileged user account on a single computer let the evaluators turn their attention to the antivirus management server. From there, the red team was able to "decrypt" the locally stored password for the antivirus web console, and once that password was validated, the red team could execute binaries with SYSTEM privileges on every computer in the domain, including domain controllers. In both cases, at the debrief, the customer's security staff suggested mitigations that focused only on the symptoms of the problem. For the Linux issue, they recommended upgrading the kernel and changing the relevant user credentials. For the Windows issue, they recommended upgrading the antivirus software to the latest version, which better protects the web console password. In both examples, the red team evaluators instead recommended fixing the true weaknesses in the organization's security posture: such powerful management tools should be isolated from other machines and should use their own authentication, specific to the management machines themselves. The lack of separation around privileged machines like these was the real problem; the other vulnerabilities merely allowed the evaluators to reach them. That's not to say the security staff's recommendations shouldn't be implemented; those fixes matter too. However, the red team's attacker mindset produced additional recommendations that block the general class of attack rather than one specific attack path.

A simpler example comes from a Linux-only data center within an organization. The red team took over the entire data center by gaining root on one machine and reusing the same root credentials to access every other Linux server. The security team simply disabled root logins over SSH, blocking the exact path the red team had used. In a further debrief with the blue team members, the red team pointed out that they could still log in remotely as another user and "switch to root" locally in each shell to install whatever tools they needed, because the root password was the same on every machine. The red team recommended also restricting which SSH users could switch to root, or giving each server different root credentials to prevent credential reuse.
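The root-cause check in this example, whether root credentials are shared across servers, is easy to audit once the data is gathered. A minimal sketch: the host inventory and hashes below are invented, and in practice the hashes would be collected from each host's /etc/shadow via configuration management rather than hardcoded.

```python
# Group hosts by their root password hash; any hash shared by more
# than one host means one compromised machine compromises them all.
from collections import defaultdict
from typing import Dict, List

def reused_root_hashes(host_hashes: Dict[str, str]) -> Dict[str, List[str]]:
    by_hash: Dict[str, List[str]] = defaultdict(list)
    for host, root_hash in host_hashes.items():
        by_hash[root_hash].append(host)
    # Only hashes seen on multiple hosts indicate credential reuse.
    return {h: hosts for h, hosts in by_hash.items() if len(hosts) > 1}

# Invented inventory for illustration.
inventory = {
    "db01": "$6$aaaa",
    "db02": "$6$aaaa",
    "web01": "$6$bbbb",
}
print(reused_root_hashes(inventory))  # -> {'$6$aaaa': ['db01', 'db02']}
```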

The last example illustrates the difference between mitigation advice from blue and red teams, and the benefit of the two working as a purple team. This example involves the execution of binary files. In the debrief, the red team highlighted that during the attack they were able to execute an .exe tool via a scheduled task on a Windows machine. The defenders proposed writing a signature for scheduled tasks that launch new .exe binaries. Again, this fix is worthwhile, but it only addresses one specific attack method, not the root cause. The red team worked with the security staff to help them understand that an attacker could simply write a .dll instead and execute it by scheduling rundll32.exe, or even use another file extension such as .tlb. The underlying problem is that scheduled tasks were allowed to launch binaries in the SYSTEM context at all. This example also shows, in passing, how a red team working with a blue team can mitigate the threat itself rather than a single manifestation of it.
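Auditing for the root cause rather than the symptom means flagging any scheduled task that runs in the SYSTEM context, regardless of the payload's file extension. A hedged sketch: the task records below are invented, and a real audit would pull them from the Task Scheduler (e.g. `schtasks /query /v /fo CSV`) rather than a hardcoded list.

```python
# Flag scheduled tasks that launch under SYSTEM, whatever the payload
# extension (.exe, a .dll via rundll32.exe, .tlb, ...). Task data is
# invented for illustration.
from typing import Dict, List

SYSTEM_ACCOUNTS = ("SYSTEM", "NT AUTHORITY\\SYSTEM")

def system_context_tasks(tasks: List[Dict[str, str]]) -> List[str]:
    return [t["name"] for t in tasks if t["run_as"].upper() in SYSTEM_ACCOUNTS]

tasks = [
    {"name": "UpdateCheck", "run_as": "NT AUTHORITY\\SYSTEM", "command": "update.exe"},
    {"name": "Backup", "run_as": "svc-backup", "command": "backup.exe"},
    {"name": "Loader", "run_as": "SYSTEM", "command": "rundll32.exe payload.dll,Start"},
]
print(system_context_tasks(tasks))  # -> ['UpdateCheck', 'Loader']
```

Note that the filter keys on the task's security context, not on the command line, which is exactly the shift from signature thinking to root-cause thinking that the example describes.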

In every case, it should be clear that it is extremely beneficial for an organization's security staff and offensive evaluators to work together to develop remediation and mitigation strategies. How purple teams are run is limited only by imagination, and each organization should explore the best ways to use the concept to improve its overall security posture, enhance the skills of blue team and red team members, and help each side better understand the other's thinking.

  Summary

This chapter discussed the concept of the purple team, the challenges it faces, and some of the different types of purple team exercises. The unique advantages and disadvantages of each purple team type were also covered to highlight the situations in which each works best. Real-life scenarios reinforced the message of the chapter.

