Cracks and Hacks

...A private matter is something one doesn't want the whole world to know, but a secret matter is something one doesn't want anybody to know. Privacy is the power to selectively reveal oneself to the world. If two parties have some sort of dealings, then each has a memory of their interaction. Each party can speak about their own memory of this; how could anyone prevent it? One could pass laws against it, but the freedom of speech, even more than privacy, is fundamental to an open society; we seek not to restrict any speech at all. ... - Eric Hughes, A Cypherpunk's Manifesto

FIPS & OpenSSL: what will the security checkbox vendors do?

I am watching the OpenBSD community tear down and build back up OpenSSL. I have to say, the commit comments are enlightening and entertaining.

remove FIPS mode support. people who require FIPS can buy something that meets their needs, but dumping it in here only penalizes the rest of us. ok miod

I am curious to see the fallout in the security-checkbox vendor space. Many vendors rely on OpenSSL's FIPS 140 compliance to sell to .gov entities.

Please donate to a worthy crypto security cause

If you have ever used OpenSSL, please donate money to this worthy cause. Your donation will go towards security and cryptographic researchers who are financially (or egotistically) motivated to discover security-related defects in OpenSSL's code base. Trust me, OpenSSL needs it! See the picture below for a quick static code review of OpenSSL's latest release, 1.0.1g.

[Image: static analysis results for OpenSSL 1.0.1g]

What we see is typical of an older, open-source C / C++ code base. Overall, there are code quality issues in addition to common C / C++ software security defects. Fortunately, some of the bugs require unique situations to be exploitable. Unfortunately, as we saw with Heartbleed, other defects are straightforward and easily exploitable.

OpenSSL April Fools?

Is it still April Fools' Day for the OpenSSL team? Taken from http://heartbleed.com/:

"...The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet...."

Checkbox AWS assurance testing?

Scout2 is a security tool that lets AWS administrators assess their environment's security posture. Using the AWS API, Scout2 gathers configuration data for manual inspection and highlights high-risk areas automatically. Rather than poring through dozens of pages on the web, Scout2 supplies a clear view of the attack surface automatically.

Need help figuring out a Snapchat username? I have your back.

Jessica asks “I just got a snap chat n I want a cool username my names Jessica I tried to do so many user names wit my name but there all taken I want a rely unique one but not weird so I need ideas!!!”  - http://answers.yahoo.com/question/index?qid=20121225093158AAMOpu2 .

 

Well, Jessica, let me help you. I can't tell you what makes a good Snapchat username. But what I can tell you, from a data set of 4.6 million leaked Snapchat accounts, is what makes a popular Snapchat username.

 

Top 10 base words in usernames

chris = 779 (0.02%)

alex = 744 (0.02%)

mike = 691 (0.01%)

ashley = 612 (0.01%)

nick = 585 (0.01%)

anthony = 547 (0.01%)

matt = 521 (0.01%)

jess = 504 (0.01%)

steph = 491 (0.01%)

amanda = 490 (0.01%)

 

Username length distribution

3 = 1605 (0.03%)

4 = 15460 (0.34%)

5 = 80193 (1.74%)

6 = 230707 (5.0%)

7 = 419731 (9.11%)

8 = 594500 (12.9%)

9 = 644745 (13.99%)

10 = 635258 (13.78%)

11 = 563808 (12.23%)

12 = 484685 (10.51%)

13 = 385229 (8.36%)

14 = 297672 (6.46%)

15 = 256023 (5.55%)

18 = 1 (0.0%)

20 = 2 (0.0%)

26 = 1 (0.0%)

29 = 1 (0.0%)

 

The most popular character patterns (masks) used to create a username

allstring: 2076601 (45.05%)

stringdigit: 1619213 (35.13%)

stringspecialstring: 509677 (11.06%)

othermask: 191237 (4.15%)

stringspecialdigit: 120925 (2.62%)

stringdigitstring: 91968 (2.0%)
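
As a rough illustration of how such masks can be classified, here is a short Ruby sketch; the exact rules behind the numbers above are an assumption on my part:

#!/usr/bin/ruby
# Collapse a username into a pattern of character-class runs, then bucket it:
# runs of letters -> s, runs of digits -> d, anything else -> x.
def mask(username)
  pattern = username.gsub(/[a-z]+/i, 's').gsub(/[0-9]+/, 'd').gsub(/[^sd]+/, 'x')
  { 's'   => 'allstring',
    'sd'  => 'stringdigit',
    'sxs' => 'stringspecialstring',
    'sxd' => 'stringspecialdigit',
    'sds' => 'stringdigitstring' }.fetch(pattern, 'othermask')
end

puts mask('jessica2013')  # => stringdigit
puts mask('gala.pardo')   # => stringspecialstring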

 

If you wanted to end your username in digits, these are the ten most popular four-digit suffixes

2013 = 5340 (0.12%)

1234 = 4750 (0.1%)

2000 = 3048 (0.07%)

2012 = 2432 (0.05%)

1991 = 2045 (0.04%)

1990 = 2019 (0.04%)

1994 = 2010 (0.04%)

2345 = 1988 (0.04%)

1992 = 1926 (0.04%)

1993 = 1916 (0.04%)

 

 

On a more serious note

It is worth mentioning that very few usernames are tied to more than one phone number. Out of 4.6 million phone numbers, only a handful map to a shared username. This is interesting. Why? I am not certain. I wonder if these are internal test accounts:

baten_tp = 2 (0.0%)

giggless14 = 2 (0.0%)

majestick666 = 2 (0.0%)

queenofthisshit = 2 (0.0%)

spoon4real = 2 (0.0%)

dorabuggz = 2 (0.0%)

gala.pardo = 1 (0.0%)

erinspickles = 1 (0.0%)

flyinghorses = 1 (0.0%)

saraelizabeth98 = 1 (0.0%)

 

Yet another nail in SSL/TLS's coffin

Via Ivan Ristić - "...RC4 has long been considered problematic, but until very recently there was no known way to exploit the weaknesses. After the BEAST attack was disclosed in 2011, we—grudgingly—started using RC4 in order to avoid the vulnerable CBC suites in TLS 1.0 and earlier. This caused the usage of RC4 to increase, and some say that it now accounts for about 50% of all TLS traffic.

Last week, a group of researchers (Nadhem AlFardan, Dan Bernstein, Kenny Paterson, Bertram Poettering and Jacob Schuldt) announced significant advancements in the attacks against RC4, unveiling new weaknesses as well as new methods to exploit them. Matthew Green has a great overview on his blog, and here are the slides from the talk where the new issues were announced.

At the moment, the attack is not yet practical because it requires access to millions and possibly billions of copies of the same data encrypted using different keys. A browser would have to make that many connections to a server to give the attacker enough data. A possible exploitation path is to somehow instrument the browser to make a large number of connections, while a man in the middle is observing and recording the traffic.

We are still safe at the moment, but there is a tremendous incentive for researchers to improve the attacks on RC4, which means that we need to act swiftly...."

 

 

Vulnerability assessment vs. penetration test

Truer words could not have been written. A worthwhile read, if one has the time and fortitude.

 

The Difference Between a Vulnerability Assessment and a Penetration Test

There are many views on what constitutes a Vulnerability Assessment versus a Penetration Test. The main distinction, however, seems to be that some believe a thorough Penetration Test involves identifying as many vulnerabilities as possible, while others feel that Penetration Tests are goal-oriented and are mostly unconcerned with what other vulnerabilities may exist.

I am in the latter group, and what follows is my argument for why you should be too.

Language Matters

Language is important, and we have two terms for a reason. We already have an (aptly named, I might add) security test for compiling a complete list of vulnerabilities, i.e. a Vulnerability Assessment. If there isn't a clear, communicable distinction between this test type and a penetration test then we shouldn't be using separate terms. Such a distinction does exist, however, and it's a crucial one.

Clarified Definitions

Vulnerability Assessments are designed to yield a prioritized list of vulnerabilities and are generally for clients who already understand they are not where they want to be in terms of security. The customer already knows they have issues and simply need help identifying and prioritizing them.

The more issues identified the better, so naturally a white box approach should be embraced when possible. The deliverable for the assessment is, most importantly, a prioritized list of discovered vulnerabilities (and often how to remediate).

Penetration Tests are designed to achieve a specific, attacker-simulated goal and should be requested by customers who are already at their desired security posture. A typical goal could be to access the contents of the prized customer database on the internal network, or to modify a record in an HR system.

The deliverable for a penetration test is a report of how security was breached in order to reach the agreed-upon goal (and often how to remediate).

A Physical Analog

A good analog for this is a Tiger Team working for the government, like Richard Marcinko used to run with Red Cell. Think about what his missions were: things like gain control of a nuclear submarine and bring it out into the bay.

So imagine that he's getting debriefed after a successful mission where he broke in through the east fence, and someone were to ask him about the security of the western side of the building. The answer would be simple:

We didn't even go to the west side. We saw an opening on the east-facing fence and we went after our target.

If the person doing the debrief were to respond with, "You didn't check the other fences? What kind of security test is it where you didn't even check all the fences?", the answer would be equally direct:

Listen, man, I could have come in a million ways. I could have burrowed under the fences altogether, parachuted in, got in the back of a truck coming in--whatever. You told me to steal your sub, and that's what I did. If you wanted a list of all the different ways your security sucks, you should have hired an auditor--not a SEAL team.

The Question of Exploitation

Another mistake people make when discussing vulnerability assessments vs. penetration tests is to pivot immediately to exploitation. The basic narrative is:

Finding vulnerabilities is a vulnerability assessment, and exploiting them is a penetration test.

This is incorrect.

Exploitation can be imagined as a sliding bar between none and full, which can be leveraged in both vulnerability assessments and penetration tests. Although most serious penetration tests lean heavily towards showing rather than telling (i.e. heavy on the exploitation side), it's also the case that you can often show that a vulnerability is real without full exploitation.

A penetration testing team may be able to simply take pictures standing next to the open safe, or to show they have full access to a database, etc., without actually taking the complete set of actions that a criminal could. And vulnerability assessments can slide along this scale as well for any subset of the list of issues discovered.

This could be time consuming, but exploitation doesn't, by definition, move you out of the realm of vulnerability assessment. The only key attributes of a VA vs. PT are list-orientation vs. goal-orientation, and the question of exploitation is simply not part of that calculation.

The Notion that Penetration Tests Include Vulnerability Assessments

It's also inaccurate to say that penetration tests always include a vulnerability assessment. Recall that penetration tests are goal-based, meaning that if you achieve your goal then you are successful. So, you likely perform something like a vulnerability assessment to find a good vuln to attack during a pentest, but you could just as easily find a vuln within 20 minutes that gets you to your goal.

It is accurate to say, in other words, that penetration tests rely on finding one or more vulnerabilities to take advantage of, and that people often use some sort of process to systematically discover vulns for that purpose, but because they stop when they have what they need, and don't give the customer a complete and prioritized list of vulnerabilities, they didn't actually do a vulnerability assessment.

Summary

Vulnerability Assessment
Customer Maturity Level: Low to Medium. Usually requested by customers who already know they have issues, and need help getting started.
Goal: Attain a prioritized list of vulnerabilities in the environment so that remediation can occur.
Focus: Breadth over depth.

Penetration Test
Customer Maturity Level: High. The client believes their defenses to be strong, and wants to test that assertion.
Goal: Determine whether a mature security posture can withstand an intrusion attempt from an advanced attacker with a specific goal.
Focus: Depth over breadth.

http://danielmiessler.com/writing/vulnerability_assessment_penetration_test/ 

International contract negotiation tips

"As an experienced contractor who mainly deals with overseas clients, all I have to say is that if you go into this sort of work without fully understanding the risks them you should give it all up and go do something else. Some basic rules.

1) Agree a payment plan. Stage payments for specific milestones.

2) Use an accountant who knows what they are doing with international business

3) Suggest using an escrow account. Client pays all the money up front, then agrees to release parts as deliveries are made. Also agree a timeout clause so that if they don't agree the final payment you get it automatically after 6 months. Russians are very bad at agreeing the final payment.

4) Use a Lawyer who knows about international contract law.

5) Both parties to agree that the laws of ONE country shall apply to the contract. UNLESS it is with former parts of the USSR that are not in the EU. Then agree Swiss Law.

6) Every change no matter how small must be agreed in writing and signed off with agreed costs. Do not do anything as a freebie. This habit is endemic in many countries, especially in Moscow.

7) Make sure that the person on the other side is actually authorized to sign the contract. I've had clients try to wriggle out of payment saying 'He was not authorised to sign the contract so we can't pay you'

8) If you are going to sign away the title of the stuff you develop then make the transfer of title a separate contract. Agree in the original contract that title will change hands only when the job has been completed AND full payment made.

9) Learn the language especially the swear words. A few curses in their language can work wonders when a client is being awkward."

Want a simple way to keep your cloudy big data private at little cost?

An interesting spin on an old technique, brought to Hadoop:

http://eprint.iacr.org/2012/398.pdf

"...Retrieval of previously outsourced data in a privacy-preserving manner is an important requirement in the face of an untrusted cloud provider. PIRMAP is the rst practical PIR mechanism suited to real-world cloud computing.  In the case, where a cloud user wishes to privately retrieve large les from the untrusted cloud, PIRMAP is communication ecient. Designed for prominent MapReduce clouds, it leverages their parallelism and aggregation phases for maximum performance. Our analysis shows that PIRMAP is an order of magnitude more ecient than trivial PIR and introduces acceptable overhead over non-privacy-preserving data retrieval. Additionally, we have shown that our scheme can scale to cloud stores of up to 1 TB on Amazon's Elastic MapReduce service..."

Random thought for an exploding honey token

I remember when Nuxi and I would create computationally compact compressed files and see which mail servers would attempt to inspect the contents. Typically, the MTA would fall over due to exhausted heap space, heavy swapping, insanely large disk I/O, and other resource utilization problems. During the school year, exploding the mail gateways was a great way to take the university's mail server down and buy a few days' extra time. So why not harness the same reckless behavior, and make a large blip happen when an inside actor attempts to inspect a honey token?

Name the compressed file CreditCard_Customers.zip, <insert juicy file name>.zip, etc. Then place it somewhere available to the intended audience. Or somewhere not available. For instance, put it in the confidential file share. Then watch the asset's system logs for resource utilization errors related to unpacking a 35 PB zip file. Or see if someone attempts to email it to their personal email address (assuming the MTA will cough and die when inspection occurs). You did make your MTA rugged, right?
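
If you want to roll your own, here is a minimal Ruby sketch, assuming the Info-ZIP zip CLI is installed; the file names are illustrative:

#!/usr/bin/ruby
# Build a small zip that unpacks into a huge file. A long run of zeros
# compresses at roughly 1000:1 under deflate, so 1 GB of zeros yields a
# zip of about a megabyte. Scale up (or nest zips) to taste.
payload = 'CreditCard_Customers.csv'

File.open(payload, 'wb') do |f|
  1024.times { f.write("\0" * (1024 * 1024)) } # 1 GB of zeros
end

system('zip', '-9', 'CreditCard_Customers.zip', payload)
File.delete(payload) # keep only the small, juicy-looking artifact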

Here are two test files.  Modify to your liking. 

Lazy AWS devops

I am seeing too much echo-chamber, saber-rattling, foolish dogma about agile SA / devops. "Just use <insert configuration tool name here>. All of your problems will be solved." Yeah, right. And unicorns talk to virgins. A DevOps setup isn't simple. One will need to think in a different paradigm. As a result, one's mindset will change. For better or worse, one will slowly morph into the Bastard Operator from Hell.

Scenario time: a tornado takes out your data center. Or, if you were at Google last year, aliens land and take over the United States. You are at the symphony. Don't worry! Good thing you have an AWS EC2 account ready to bring up production disaster recovery (DR) and IT-based business continuity (BC).

All accomplished without a touch of the keyboard or a service ticket. Back to wrapping your arm around your date at the symphony.

If you want to read more about each layer, continue reading… If not, go outside and enjoy a beer.

The first layer is bootstrapping. In traditional practice, one would file a ticket with the DC techs to rack the new machine. The DC techs would rack, cable, and power on the asset. The machine would PXE boot to pull the image onto the system via FTP and / or TFTP. Such a pain. This procedure would take 2-3 hours for each machine. One could get 20-30 machines up in parallel. Maybe much less time if one could get the systems pre-imaged from the hardware vendor. Too much time and effort wasted. With newer SA methods, bootstrapping can be accomplished in less than 5 minutes. Pick one's infrastructure-as-a-service provider. Scalr, GoGrid, AWS EC2, Rackspace Cloud, OpenQRM, and Engine Yard come to mind. Grab Ruby's fog gem or the vendor's toolkit. If you are feeling sassy, utilize a build system such as Jenkins or Buildbot. Grab your API keys from your IaaS vendor. Place the credentials in the toolkit. Then put the automation script in the build system or run it from the command line. The script below uses fog to bootstrap a pre-configured, low-powered, logical machine on EC2.

#!/usr/bin/ruby
require 'rubygems'
require 'fog'
require './secrets.rb' # defines @aws_access_key_id and @aws_secret_access_key

# Connect to EC2 through fog's compute abstraction
cloud = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => @aws_access_key_id,
  :aws_secret_access_key => @aws_secret_access_key
)

# Boot a small instance from a pre-built image (illustrative AMI ID)
server = cloud.servers.create(:image_id => 'ami-323', :flavor_id => 'm1.small')
server.wait_for { ready? } # block until the instance is reachable

puts "Private IP: #{server.private_ip_address}"
puts "Public IP: #{server.public_ip_address}"

Nifty, now you have a machine running. But you have to take your methods to the next level and configure the machine for the task assigned to it. You can log in to the machine, manually configure packages and settings, and test the system for assurance and compliance. If you are lucky, you will have written automation scripts to handle this for you. Most likely you haven't, nor would you periodically re-run those scripts on the machine. Over time, everything will drift out of sync. As the number of systems increases, it becomes harder to manage. Utilizing configuration tools such as Chef, Puppet, CFEngine, or Bcfg2, each machine can continuously ask "Am I like the standard image?" When it is not, and it will happen, the tool will alert and automatically correct the issue. It becomes a manageable problem. Completely hands-off. Now, I wouldn't go so far as to say this tool automation solves all your needs. You will still have Three Mile Island cascading failures.
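
For a flavor of the declarative style, here is a minimal Chef recipe sketch; the package, template, and service names are illustrative. The client re-converges on it every run, so drift gets corrected automatically:

# Declare desired state; the chef-client run enforces it on every interval.
package 'ntp'

template '/etc/ntp.conf' do
  source   'ntp.conf.erb'
  owner    'root'
  group    'root'
  mode     '0644'
  notifies :restart, 'service[ntp]' # drift in the file restarts the daemon
end

service 'ntp' do
  action [:enable, :start]
end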

Congrats, one now has an application running on a logical asset. But wait, Product needs to reconfigure the application service to fix a bug. Many will blindly apply the change across all systems in a rolling fashion. Orchestration by blind faith is great, but will fall short. Capistrano, MCollective, ControlTier, and Ironfan are great tools to use to this effect. Not everyone can acquire Yahoo's Limo. Orchestrate the organization's processes and procedures into a Frankenstein tool. For instance, when I bring a new Cassandra node online, I use the following code:

# Advertise this node, then discover its peers (Ironfan-style service discovery)
announce(:cassandra, :server, :compliance)
neighbors = discover_all(:cassandra, :server, :compliance).map(&:private_ip)

# Inside the ERB configuration template, the discovered peers become the seed list
nodes = <%= neighbors.join(',') %>

Configure monitoring to know the good state and health of the service(s). Then have monitoring integrated with orchestration and bootstrapping to provision / deprovision instances until the entire data center is running at an optimal level. Monit, Munin, and Nagios come to mind. In spirit, the feedback loop looks like the toy sketch below.
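
A toy Ruby sketch of that feedback loop; the /health endpoint, the hardcoded fleet list, and the replacement step are all assumptions:

#!/usr/bin/ruby
# Poll each node's health endpoint; hand unhealthy nodes back to the
# orchestration / bootstrapping layers for replacement.
require 'net/http'
require 'uri'

def healthy?(ip)
  Net::HTTP.get_response(URI("http://#{ip}/health")).is_a?(Net::HTTPSuccess)
rescue StandardError
  false
end

loop do
  fleet = %w[10.0.0.11 10.0.0.12 10.0.0.13] # in practice, discovered
  fleet.reject { |ip| healthy?(ip) }.each do |ip|
    puts "unhealthy node #{ip}: deprovision it and bootstrap a replacement"
    # hand off to orchestration / bootstrapping here
  end
  sleep 60
end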

An amazing benefit of the modern methods above is that one's infrastructure heals itself and doesn't depend on a single point of failure.

DevOps interview questions

directory '/etc/elasticsearch' do
        recursive :true
        owner   'elasticsearch'
        group   'root'
        mode    755
end

directory '/var/log/elasticsearch' do
        recursive :true
        owner   'elasticsearch'
        group   'root'
        mode    755
end

Can you spot the five fundamental flaws in the above config for Elasticsearch?

1.  The directory paths are hardcoded.  You want to avoid this at all costs.  Otherwise, what is the point of your agile and maintainable software-defined infrastructure?

2.  Setting the owner to elasticsearch is a security hole: the daemon gets a writeable configuration directory.  Ensure your software follows the principle of least privilege.

3.  Setting the group to root fails on BSD, where the superuser group is wheel.

4.  The owner / group settings are repeated rather than kept DRY.

5.  The filesystem permissions are incorrect: mode 755 passes a decimal integer, not octal 0755.  Quote it as '0755' (or prefix it with 0) to get the intended permissions.
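
One possible corrected sketch. The node attributes are assumptions, and the config directory stays root-owned so the daemon cannot rewrite its own configuration:

# Attribute-driven, DRY version; the node attributes are assumed to be
# defined elsewhere in the cookbook.
{ node['elasticsearch']['conf_dir'] => 'root',
  node['elasticsearch']['log_dir']  => node['elasticsearch']['user'] }.each do |dir, dir_owner|
  directory dir do
    recursive true                           # a real boolean, not :true
    owner     dir_owner                      # least privilege for the config dir
    group     node['elasticsearch']['group'] # no hardcoded 'root' group
    mode      '0755'                         # quoted octal string
  end
end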

Google Glass Developer program - DoS and XSS

There were two very simple DoS and XSS vulnerabilities in the Google Glass Mirror API quickstart.  The fixes were introduced in changeset https://github.com/googleglass/mirror-quickstart-java/commit/738352eb5b5b73aa7bb911d0aeee3386f40dbf26

The DoS fix is rather simple: limit the request to 1000 lines.  The XSS fix is hackish, but it works.  Instead of reflecting the client's input back to the user, the error is directed to the error-logging infrastructure.  Let's hope the error-logging infrastructure is anti-XSS enabled.
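
The shape of both fixes, sketched in Ruby rather than the quickstart's Java; all names below are illustrative:

#!/usr/bin/ruby
require 'logger'
require 'stringio'

MAX_LINES = 1000
LOG = Logger.new($stderr)

# DoS guard: cap how many lines are read from an untrusted request body.
def read_request(io)
  lines = []
  io.each_line do |line|
    raise ArgumentError, 'request too large' if lines.size >= MAX_LINES
    lines << line
  end
  lines
end

# XSS guard: route untrusted input to the log instead of reflecting it.
def handle_unknown_operation(operation)
  LOG.warn("unknown operation: #{operation.inspect}")
  'Unknown operation' # static response; nothing attacker-controlled is echoed
end

puts read_request(StringIO.new("a\nb\n")).length # => 2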

Enterprise Risk Management competition

For those who believe in competition and free markets, I propose an evolutionary risk algorithm competition.  

Rule set:

  • Patentable
  • Equal or better than a publication in a recognized journal
  • Recorded results are better than, or at least generally accepted by, professional risk analysts. The results must reduce risk, which is complex and hard.
  • Publishable upon generation
  • Better than human-created results, for long-term challenges, and better than the traditional solution. Wins versus humans or versus algorithms created by humans.

Bug Age - Pattern series

I love standards. My blackhat persona says this makes it easy to break into systems (a mono-risk culture): everyone must buy the same machine, same software, same configuration. My whitehat persona says this leads to fewer configuration flaws. Then opponents must move further up the stack and delve into code insecurity. One would think we would be prepared for when attackers are forced to move on to code insecurity. Mind you, this is a 2-5 year evolution. But 2-5 years is a lot of time to prepare for code insecurity. The challenge is: how does one build secure code cost-effectively? I am amazed at all the clever ways one can break poorly written PHP / Java / Perl / JavaScript / ActionScript / C / Ruby / Python code. Software insecurity is a well understood challenge. I have never met a software developer who wanted to create insecure code. It is not a people problem in the sense that programmers set out to write insecure code. And there exist tools (behavioral, developmental, and thought) to reduce or eliminate classes of vulnerabilities. Lost long ago were formal proofs. These computing systems were formally designed at a higher level, assigned appropriate interfaces, and could show with some mathematical confidence that no permutation of interfaces could be utilized by a hacker in the right order to enact unexpected behavior. Formal proofs have disappeared. Ultimately, that is the next problem to solve. The Age of Bugs is dead. Academia and other hackers have moved into the Age of Systems. Eventually software developers will move beyond common software vulnerabilities and utilize mechanisms that eliminate them. Until then, software developers have a number of patterns to recognize and formally solve.

In the coming entries, we will cover the following patterns in detail:

Code correctness – incentives to get code right, not secure.
Old code is scary – threat models change after years of use
Holistic security – All encompassing
Open source lesson – many hands in the kitchen
Never ending security – the never ending story
Today’s XSS is tomorrow’s CSRF
Retire unused code – poor financial investment
Tools are tools – nothing more, nothing less
Laziness – automation

Prehistoric code metric

When I think of application security metrics, I think of an immature field with no actuarial data sets. As a result, I see some interesting numbers come out on the number of vulnerabilities per thousand lines of code, code churn, insecure programming languages, etc. Rarely are they even close to statistically sound (lacking standard deviations, error rates, validated assumptions, etc.). So, when I attempt to triage which area of the code base to focus on, there is one tried and true method: code age.

This process is built upon the following assumptions:

  • Old code is likely to have more security vulnerabilities than newer code, due to increased application security vulnerability awareness training, secure libraries, and other engineering process improvements.
  • Threats evolve. Functionality that never expected automation is now easily automated. Do a diff between the OWASP Top 10 for 2010 / 2007 / 2004; the proof is in that diff pudding. Third-party library dependencies have had their threat models and vulnerabilities change and, as a result, your code is now at greater risk. Attackers have gotten smarter, with better tool sets and additional resources. Think of the analogy as code fermenting: its security posture doesn't get better with age.

First, identify all applicable source code files / binaries. Proactively prevent new security vulnerabilities from being introduced in the source code / binaries. Then rank files / binaries by age where age is when the source was originally created. Then perform static analysis on the old code. Then begin the security push phase of the SDLC – hands-on review for security vulnerabilities. This phase forces engineering to look for vulnerabilities in old code.