ERM - How did WOPR decide the only winning move is not to play?

"A strange game.  The only winning move is not to play."

WOPR evolved and learned while playing against itself. Nifty! As WOPR drew additional power, it was presumably able to evolve through extrinsic and intrinsic features. Extrinsic evolution uses a software simulation of the hardware to evaluate the effectiveness of each new model. This works well when the threat is abstract or not yet too specific, but because it abstracts away from the underlying hardware, it tends to produce a less optimal model. Intrinsic evolution is implemented in the hardware itself. Each model is evaluated and implemented based upon the threat, vulnerability, and other quantitative data. This is extremely useful for deducing properties of a risk which cannot be known through traditional risk methodologies. Imagine each variant in the model being downloaded to the chip as a design configuration, where fitness is evaluated by applying test vectors and calculating a fitness value from the response.

Assuming threat characteristics, for evolvable modelling design issues, an evolutionary algorithm determines some of the structure or parameters of a reconfigurable item. This item may exist in software, although it could be a simulation of the hardware of a final implementation. The reconfigurable item might alternatively be physically changeable hardware. Typically, the item is embedded in some sort of environment, where it responds, influences, and behaves. The evolutionary model creator devises a fitness evaluation procedure that monitors and possibly manipulates the environment and the items, returning objective function metrics. An algorithm generates structural / parametric variations of the risk by applying variation operators (mutation, crossover, etc.) to some representation of the object's configuration. All the system gets back are the measured objective values. Another way of thinking about the evaluation / environment / object process is as a black-box system. WOPR played each scenario and came to the same conclusion for all of them: "The only winning move is not to play."
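To make the loop concrete, here is a minimal sketch of that evaluate / vary / select cycle in Python. It is an illustration only, not WOPR's implementation or a production risk model; the population layout, operators, and stand-in fitness function are all assumptions, and in a real evolvable model the fitness call would apply test vectors to the reconfigurable item (or its simulation) and score the response.

# Minimal evolutionary-algorithm sketch: the fitness function is treated as a
# black box that scores a candidate configuration. All names are illustrative.
import random

POP_SIZE, GENERATIONS, N_PARAMS, MUTATION_SIGMA = 20, 50, 5, 0.1

def fitness(config):
    # Stand-in objective: a real model would evaluate the candidate against
    # the hardware, a simulation, or threat / vulnerability data.
    return -sum((x - 0.5) ** 2 for x in config)

def mutate(config):
    return [x + random.gauss(0, MUTATION_SIGMA) if random.random() < 0.3 else x
            for x in config]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]                         # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]     # variation
    population = parents + children                          # next generation

best = max(population, key=fitness)
print("best configuration:", best, "fitness:", fitness(best))

The loop never looks inside the evaluation; it only sees the returned objective values, which is exactly the black-box framing above.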

Evolutionary risk modeling series

When I see organizations perform threat modeling, rarely do I see them model threats which evolve over time or react to the organization's actions. Why? I do not know. I hear it is too hard, but it really isn't. I model these evolutionary risks using novel evolutionary algorithms. Generally speaking, evolutionary risk modeling techniques can be split into two categories: evolutionary algorithms and evolvable hardware. At their core, these techniques solve problem sets in much the same manner as a human would. In the public sector, evolvable hardware is extremely immature. There are many practical and cost-related challenges which must be overcome before one can reap the benefits of large-scale applications of this technology. But that doesn't mean it isn't being done.

Yet Another Risk Management series

In today's rapid-paced, ever-changing economy, the topic of enterprise risk management has gained significant interest beyond the financial industry and academia, especially with the latest buzzwords surrounding cloud security and cloud risk. Fortunately for blackhats, risk management remains in its infancy and is handled in an informal manner.

When was the last time you attended a formal risk management meeting? Did it look like this?

 

 Or did it look like this?

 

Worse yet, there are no actuarial datasets to use. DatalossDB comes close, but much work needs to be done to ensure the integrity of the data is beyond reproach. Verizon's DBIR is better than nothing, but leaves much to be desired when trying to arrive at the same conclusions. To this end, I will propose a comprehensive approach to enterprise risk management based on academic and business research.

In the coming months, I look forward to constructive feedback. We shall begin by exploring the qualities of state-of-the-art qualitative and quantitative risk management methodologies in information technology, followed by the business reasons why risk management remains institutionally neglected. Along the way, we shall draw takeaways from several conceptual frameworks, and explore risk management tools which have been used or could be, such as IBM OpenPages, RiskAoA, custom Excel spreadsheets, and other items. Our research will draw ideas from fields not normally associated with enterprise risk management. In order to isolate important risk drivers, certain perspectives will be taken, i.e. regulatory and political. One could say this series on risk management is meant to promote a greater preemptive organizational outlook, assisting institutions to foresee and exploit a business environment's inefficiencies and reservations. On the other hand, an evolutionary market perspective is used to articulate a novel way to uncover data in the domain of risk management. We will find there are many ways to skin a cat to produce creative solutions.

Management Wednesday: BPM Modeling - not charts anymore #bpm

After one has completed the scoping phase, the team should move on to modeling. Due to the large amount of time spent scoping, many scenarios will come to light: “What if I have 50% of the resources to accomplish the same task?” “What if we were successful only because of a natural disaster which caused our competition’s supply to dwindle?”

One’s team will model how the process(es) might operate under different assumptions and multivariate scenarios. Thanks to the birth of the transistor, round-trip engineering and simulation are becoming a reality. Back in the 20th century, entities utilized PERT diagrams, Gantt charts, and other flow-chart visual aids. Thought leaders built on S. Williams’ 1967 article about business process modeling in creating UML, the Unified Modeling Language, which is commonly found in the software engineering landscape. It wasn’t until the 1990s that universities started to teach UML. Personally, I use Octave in conjunction with probabilistic graph modeling (a statistical analysis tool and methodology), BlueWorks (SaaS-based software), and WebSphere (application) to get the information my team needs at their fingertips whenever they need to “play with the numbers.”

At the most basic level, a capitalistic business process model is the base model by which a corporation defines how it generates revenue through its position in the value chain. Younger organizations will not spend much time modeling because they are too busy trying to raise capital. Mature organizations will spend too much time modeling; to what degree depends on the analytical personas of the executives. “What if we spent X% more on lead generation?” “What if we cross-sold to our partner channel while reducing sales commissions on our direct sales?”
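As a rough illustration of this kind of "play with the numbers" exercise, here is a toy Monte Carlo scenario in Python. It is not the Octave / BlueWorks / WebSphere workflow mentioned above, and every figure, distribution, and name in it is a made-up assumption.

# Toy what-if model: simulated quarterly profit under different lead-generation
# spend levels. All figures and distributions are hypothetical.
import random

def simulate_quarter(lead_spend, runs=10_000):
    profits = []
    for _ in range(runs):
        leads = max(random.gauss(lead_spend / 250, lead_spend / 1000), 0)  # ~1 lead per $250
        close_rate = random.uniform(0.02, 0.05)                            # leads that become deals
        deal_size = max(random.gauss(12_000, 3_000), 0)                    # average deal value
        profits.append(leads * close_rate * deal_size - lead_spend)
    profits.sort()
    return {"mean": sum(profits) / runs,
            "p05": profits[int(0.05 * runs)],    # pessimistic case
            "p95": profits[int(0.95 * runs)]}    # optimistic case

# "What if we spent X% more on lead generation?"
for spend in (50_000, 62_500, 75_000):
    print(spend, simulate_quarter(spend))

The point is not the numbers but the shape of the exercise: each what-if question becomes a parameter sweep whose output distribution, not a single figure, feeds the discussion.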

A common business process model relies upon resource scenarios, capital scenarios, and other multivariate analysis, which then feed back into previous internal and industry metrics. The nifty part about modeling: as a result, there will be transparency into business processes, as well as the centralization of business process models and execution metrics. This is extremely useful during mergers and acquisitions. With this clean slate, the organization is able to fundamentally rethink how it accomplishes its work to improve some metric(s): reduce operational expenditures, improve customer satisfaction, remove redundant overhead, increase competitive intelligence, and more. An interesting multiplier in this clean-slate phase is the use of mature information services. Technology allows entities to crunch numbers and crunch them fast. No longer does one need buildings full of accountants to take over your competition with their sailboat building. Beware though: just because your models are sound does not mean they will happen in real life.  http://www.verisk.com/Verisk-Review/Articles/The-U.S.-Mortgage-Crisis-What-th...

Gribodemon on SpyEye 2.x - I expected better

Saturday, I noticed my application honeypot collected an interesting sample. The cracker took my bait and attempted to hack the planet via a SpyEye 2.x variant. Apparently, the limit of its sandbox testing was to look for known virtualized drivers, MAC addresses, and other signatures typically found in / on virtualized sandboxes. Just another arrow in the quiver for changing every default in a virtualized sandbox, everything from PCI driver labels to Ethernet MAC addresses.

I am utterly amazed at the kit’s insecure coding. The small Windows executable is vulnerable to numerous buffer overflows, poor error handling, and a poor cryptographic implementation. Don’t even get me started on their alleged “performance optimization.” I traced the outbound calls and dummy data exfiltration to a web-based C&C system. Fortunately for me, it is a poorly coded web application. By poorly coded, I mean 300+ XSS vulnerabilities, 60+ SQL injections, and numerous other poor secure coding practices. Gribo's response:  "run in a sefe place."

A typical example from the certificate handling code:

$id = $_GET['id'];   // attacker-controlled input, used with no sanitization
if (!$id) exit;
// ….
$dbase = db_open();
// $id is interpolated straight into the query string: a textbook SQL injection
$sql = "SELECT data, bot_guid, name, date_rep FROM cert WHERE id = $id LIMIT 1";
$res = mysqli_query($dbase, $sql);

Needless to say, the C&C website was taken care of with no effort at all. While I commend Gribodemon and team for offering free support, their efforts would be better spent securing their kit from other crackers.

Great Git security story and suggested work arounds

People wonder why Microsoft keeps their repositories secured with armed guards and whatnot...

http://mikegerwitz.com/docs/git-horror-story.html

"...You quickly check the history. git log --patch 3bc42b. “Added missing docblocks for X, Y and Z.” You form a puzzled expression, raising your hands from the keyboard slightly before tapping the space bar a few times with few expectations. Sure enough, in with a few minor docblock changes, there was one very inconspicuous line change that added the back door to the authentication system. The commit message is fairly clear and does not raise any red flags — why would you check it? Furthermore, the author of the commit was indeed you! Thoughts race through your mind. How could this have happened? That commit has your name, but you do not recall ever having made those changes. Furthermore, you would have never made that line change; it simply does not make sense. Did your colleague frame you by committing as you? Was your colleague’s system compromised? Was your host compromised? It couldn’t have been your local repository; that commit was clearly part of the merge and did not exist in your local repository until your pull on that morning two months ago. Regardless of what happened, one thing is horrifically clear: right now, you are the one being blamed...."

 

Security is hard. Security Tools are harder. Cloud Security Tools are hardest.

There are tools, security tools, and then there are cloud security tools, especially in the realm of security orchestration.  Many cloud snake oil tools were never designed for the cloud.  See RSA from three years ago to today, where vendors slapped "cloud" on the marketing material for pre-existing on-premise software.  Or better yet: they took their CFEngine setup and applied it to all of their customers' AWS instances.  A great example is the vulnerability managers / scanners.  Set up a DNS hostname or IP to scan, and the vulnerability "management" portion of the scanner will track the DNS name / IP with metadata about the machine.  But what about when the IP or DNS name changes, yet the machine instance stays the same?  Many service-based security tools' pricing structures are based upon some static concept (IP address, DNS entry, etc.).  So imagine an infrastructure where new machines are created and destroyed every few minutes.  It will get quite expensive.  Not to mention, the vulnerability / GRC management software doesn't have the concept of a machine instance jumping around the infrastructure with different IPs / DNS names while still representing the same machine scanned moments earlier.  Well, their business model understands this concept, and it means more licenses and billable expenses.  This is assuming you are able to scan instances which exist for a few minutes and are then terminated; you did solve that problem, right?
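A minimal sketch of that tracking problem, assuming a hypothetical inventory rather than any real scanner or cloud API: key the results by an immutable instance identifier instead of by IP or DNS name.

# Key scan results by an immutable instance ID, not by IP / DNS name.
# The events below are hypothetical; real scanners and cloud APIs differ.
from collections import defaultdict

scan_events = [
    {"instance_id": "i-0abc123", "ip": "10.0.1.15", "findings": ["CVE-2011-3192"]},
    {"instance_id": "i-0abc123", "ip": "10.0.3.77", "findings": ["CVE-2011-3192"]},  # same box, new IP
    {"instance_id": "i-0def456", "ip": "10.0.1.15", "findings": []},                 # new box, recycled IP
]

by_ip, by_instance = defaultdict(list), defaultdict(list)
for event in scan_events:
    by_ip[event["ip"]].append(event["findings"])
    by_instance[event["instance_id"]].append(event["findings"])

# An IP-keyed tracker thinks 10.0.1.15 was remediated; an instance-keyed one
# correctly shows i-0abc123 still carrying the finding on its new address.
print("assets by IP:      ", len(by_ip))
print("assets by instance:", len(by_instance))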

Very few cloud snake oil tools have any type of API or programmatic interface by which to interact with the service or tool.  Imagine if you wanted to correlate information on everyone piggybacking into your office.  A simple correlation involves seeing who didn't swipe into the office but logged on locally to an office networked machine.  If you have to resort to scraping the building access system to get your swipes, then it doesn't have an API or programmatic interface.  One would expect to see start, stop, restart, running status, credential management, alerting, reporting, auditing, and so on.
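Continuing the piggybacking example, the correlation itself is trivial once both systems expose their data programmatically; the records below are invented exports, not any real badge or logging API.

# Flag users who logged on locally to an office machine but never badged in.
# Both datasets are hypothetical exports; real systems need an API or, failing
# that, scraping.
badge_swipes = {"alice", "bob"}                     # badge IDs seen at the door today
local_logons = {"alice", "bob", "charlie", "dana"}  # console logons on office machines

piggybackers = local_logons - badge_swipes
for user in sorted(piggybackers):
    print(f"{user}: local logon with no badge swipe today")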

One's mileage will vary on the time it takes to construct / destroy the cloud security orchestration tooling.  For many software-based tools, it will require a complex host or network agent.  Look at the build complexity required to run Chef: MongoDB, Solr, Rails, Ruby, etc.  Best case, the tool will require credentials or sit at some trusted point in the architecture.  This is where orchestration tools will succeed: once you can do it for one environment, it is simple to transition the orchestration to a new environment, assuming one is building a mirror of the other environment.

While interoperability will always be an issue with security tools, orchestration is another beast.  Rarely will one find a tool that natively interoperates with others; hence the business need for Bromium, CloudPassage, High Cloud Security, HyTrust, and other cloud security corporations.  Ask yourself: does your cloud security tool have the ability to push / pull information from your ArcSight instance, correlate with Splunk's output, push into your GRC tool, pull the latest scan from Qualys, maintain policy compliance, and push out signatures to your Imperva instances?  How about a simpler question: how will you pull your Puppet / Chef logs from Splunk or OSSEC and correlate them with one's security checklist automation documentation to verify whether what one is seeing is a policy violation or an intrusion?  By the way, the asset which caused the violation has now been destroyed by your orchestration software.  I hope your incident response team understands how to investigate cloud instances and is able to perform forensic investigations.
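As a rough sketch of that Puppet / Chef log correlation, assume the change events have already been exported as plain records (no real Splunk or OSSEC API calls are shown); the core of it is just separating approved configuration changes from everything else, ideally before the orchestration layer destroys the instance.

# Separate expected configuration drift (pushed by puppet / chef) from
# unexplained changes that deserve incident response. All records are
# hypothetical exports, not real Splunk / OSSEC output.
approved_changes = {           # from security checklist automation documentation
    ("i-0abc123", "/etc/ssh/sshd_config"),
    ("i-0abc123", "/etc/ntp.conf"),
}

observed_changes = [           # parsed from configuration / integrity logs
    {"instance": "i-0abc123", "path": "/etc/ssh/sshd_config", "by": "chef-client"},
    {"instance": "i-0abc123", "path": "/etc/passwd", "by": "unknown"},
]

for change in observed_changes:
    key = (change["instance"], change["path"])
    if key not in approved_changes:
        # By the time a human reviews this, the instance may already be
        # terminated, so capture whatever evidence you can immediately.
        print(f"possible intrusion: {key} changed by {change['by']}")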

 

Airing one's dirty development laundry - You are doing it wrong

I received a lovely Google Alert this weekend.

http://www.pastebay.net/1046168

Even with the most secret of secrets, the private key of a public / private key pair, entities manage to show their secrets to the world.  Humans err.

 

Kinda reminds me of digging through development-oriented copy / paste services, e.g. http://pastebin.com/search?cx=partner-pub-4339714761096906%3A1qhz41g8k4m&cof=FORID%3A10&ie=UTF-8&q=username+password&sa.x=0&sa.y=0&sa=Search&siteurl=http%3A%2F%2Fpastebin.com%2F to find juicy credentials.

 You would be surprised what one would find in Web Services debugging information.... 

http://pastebin.com/search?cx=partner-pub-4339714761096906%3A1qhz41g8k4m&cof=FORID%3A10&ie=UTF-8&q=wsdl+username&sa.x=0&sa.y=0&sa=Search&siteurl=http%3A%2F%2Fpastebin.com%2F

 

Management Wednesday: Competitor acquires one of your customers

When a competitor acquires one’s customer(s), keep an eye on their usage. Common sense would dictate the newly acquired customer would quickly transition to a new service provider. If the acquired customer does not part ways, this allows for some interesting marketing / PR material. Better yet, when the competitor renews the contract, make sure to take note of the renewal. One could use the renewal in aggressive marketing.

A lovely case study: SAP becomes NetSuite’s latest customer.

http://www.enterpriseirregulars.com/48979/netsuite-runs-sap/

http://www.zdnet.com/blog/saas/successfactors-swaps-netsuite-for-bydesign/1561

 

Giving back - whitehat security training for free

When possible, I try to give back to a community which has given me much. My latest endeavor has been to assist with and provide free whitehat training. For those of you who need to refresh your skillset or want to see another perspective on incident response, risk management, ethics, legal issues, and other whitehat topics, please visit teexwmdcampus and sign up for a course or two. All of the courses are electronic-based, and you are able to proceed at your own pace. I would love any feedback, positive or negative.

Hilarious law enforcement educational video - Who is a hacker?

These videos are why it is easier to explain "I am a rent-a-cop" than to describe information security. The scary part: This 1995 video was used to educate law enforcement.  The scarier part: the updated videos are not much better. I wonder if this fictional hacker meets his end in Untraceable.

 "Hacking is easy."

"You might as well give me the keys to your front door. I'm going to get into your system."

Management Wednesday: BPM scoping

In business process management, there is no defined starting point. The solutions are transposable, adaptive, and can be set into motion regardless of the other solutions’ states. In its most minimalist scoping form, business process management is set into motion by a timeline approach. The timeline will start with a simple set of process models and evolve into degrees of automation, real-time auditing, and dynamic execution. At a minimum, scoping should account for:

• Available human capital

• Processes’ complexity

• Workflows

• Integration / disruption complexity

As with basic project management fundamentals, the inability to properly quantify the available resources / human capital will lead to many BPM project failures. Yes, there will always be shortages, but do not get involved with a project which is set up for failure from the start. Beware: engineers and developers are eternal optimists.

Expect to have multiple discovery sessions with various stakeholders. Beware of going too deep. Stop discovery when you are discovering for the sake of discovering. If one has discovered the entire organization and its processes, then one has gone too far. Walking out of these meetings, one will have an idea of how active the current processes are.

From the discovered processes and activity, one will be able to start modeling workflows. This is where one’s subject matter expertise will greatly speed up the phase. The more time one spends here, the better, but beware of spending too many cycles: one will find that 9 times out of 10, one’s specs / workflows will change after prototyping with the customer. If one decides to utilize use cases, spend less time here; one will make up for the lacking details when one begins to prototype with the customer. Attempt to be creative, enabling innovative processes which align with customers’ needs to remain adaptive, agile, and competitive.

The recipe for figuring out integration / disruption complexity: one bit of disruptive change, two bits of human nature resistant to change, and a pinch of integration / disruption complexities. Mix the ingredients together and bake in some time. You will end up with something which doesn’t look like what you imagined. Unfortunately, time has proven no one knows the final state of the process model. The better BPM experts will get close.