Malware authors are only human: stupid encryption mistakes

Criminal developers eager to deploy their latest nefarious wares will often cut corners on QA, offering hope to observant security professionals.  Bug-checking and regression-testing their code is probably the last thing many criminal malware developers have the time or resources for.  Often relying on toolkits and plagiarism, they’ll cobble together a “solution” without tedious attention to quality, design, and performance.

In the case of crypto-ransomware, these shortcuts (when applied to the delicate nuances of deploying quality encryption) can be their undoing.  In a recent paper presented at the Virus Bulletin conference in Denver, Check Point’s Yaniv Balmas and Ben Herzog document their analysis of many shortcuts taken by these criminal masterminds.  Let’s look at the top encryption blunders made by these developers:

Lack of Understanding

“Malware authors compose primitives based on gut feeling and superstition; jump with eagerness at opportunities to poorly reinvent the wheel; with equal eagerness, at opportunities to use ready-made code that perfectly solves the wrong problem.” – Balmas and Herzog

Encryption is difficult, and with only a cursory understanding of the principles behind reliable encryption techniques, you will make amateur mistakes.  Analysis shows that many malware authors have trouble using encryption effectively, which may allow security defenders to break the encryption and stop the malware.

Cargo cult programming

We’ve all done this: take working code, tweak it a bit, then repurpose it for another use.  It’s like those Thanksgiving leftovers that become a great casserole a week later; it gets the job done.  But if you are trying to make something criminally impenetrable and foolproof, why would you take random parts and piece them together?  Criminals who reuse bits and pieces from previously identified ransomware (e.g., the makers of CryptoDefense used parts of CryptoLocker) are ensuring that they get stopped, because defenders already know how to identify the means to decrypt files.

Something old something new

If you are writing software for an international criminal mastermind, don’t statically link third-party code into your solution.  Syndication has its upsides, but when you’re a criminal those business relationships are not that strong; no SLAs here.  It was discovered that the groups who created Petya and DirCrypt to attack nuclear power plants relied on an operation on a remote server that made a bonehead mathematical mistake, effectively rendering the encryption keys useless by setting the key size to “0”.

“It’s all pretend”

Why spend any time crafting an impenetrable solution when you can just fake it?  The makers of Nemucod decided to threaten their victims with a ransom note before actually encrypting their files.  Any anti-malware worth its salt could recognize the creation of the ransom-note file and prevent any further downloads.  Researchers noted that Nemucod “sets the gold standard for minimal effort”.  Moreover, the creators of Poshcoder used symmetric AES encryption while claiming to have used asymmetric RSA-2048 and RSA-4096 encryption.  Just to explain, larger key sizes (e.g., 2048, 4096, etc.) are harder to break.
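
The distinction matters because a symmetric cipher uses the same key to encrypt and decrypt, so a key recovered from the malware sample unlocks everything.  Here is a toy sketch (an illustrative XOR “cipher” standing in for AES, with a hypothetical hardcoded key, not anything from an actual sample) of why shipping a symmetric key inside the binary is a single point of failure:

```python
# Toy illustration: a symmetric "cipher" (XOR here, standing in for AES)
# uses one key for both directions. If analysts extract that key from
# the malware binary, every victim's files can be decrypted.

def xor_cipher(data, key):
    """Symmetric toy cipher: applying it twice with the same key is identity."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

HARDCODED_KEY = b"oops-shipped-in-the-binary"  # hypothetical embedded key

plaintext = b"quarterly-report.xlsx contents"
ciphertext = xor_cipher(plaintext, HARDCODED_KEY)

# A defender who pulls HARDCODED_KEY out of the sample recovers everything:
recovered = xor_cipher(ciphertext, HARDCODED_KEY)
assert recovered == plaintext
```

With genuine asymmetric RSA, only the attacker’s private key (which never ships with the malware) could decrypt the files, which is exactly why the bluff is tempting and why it fails under analysis.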

If the security community takes the time to analyze these common mistakes, we can turn them to our advantage; until then, we can rest assured that most of these bad guys are only human.


Active Directory Permissions Go Missing

The other day I encountered a situation with an obscure Active Directory process called “AdminSDHolder” that, by design, is intended to keep things protected and secure, but it is not well known by administrators and can cause a real headache.

The Situation

A user reported a permissions problem with an Active Directory group they manage. Normally, the user can control membership of the group, but now they were being denied access and couldn’t perform this operation.

Looking at the Advanced Security Settings of the group, I found that the setting to “Include inheritable permissions from this object’s parent” had been disabled and that all the Access Control Entries (ACEs) in the group’s Discretionary Access Control List (DACL) were set to a generic list. Since the user was assigned permissions through inheritance, their rights were effectively removed, resulting in the inability to perform any functions on the group.

The Workaround

The fix was to check the inheritance box and apply the permissions from the parent. Checking with the user, all things were back to normal and they could do their work.

Or so I thought…

Later that day, the same user reported the same problem. Looking again at the DACLs in the Advanced Security Settings, I found the same condition as before…inheritance was denied and ACEs were set back to generic.

I set up an Audit Policy on the Domain Controllers to audit all Directory Service Access and applied an audit control setting to the group to monitor all access and changes made to the group. I then tested my settings by making modifications to the group and saw an audit entry in the security log showing where I had made the change and what change I had made.

A few hours later, I checked the object again and saw the same conditions as before; inheritance was unchecked and security entries were reset.

Looking through the security log, there was no evidence of anything done…no entries showing changes or access to the group in question. If there are no entries in the security log for either users or services making changes to Active Directory, then the change is coming from a process operating outside of normal user mode.

The Cause

As it turns out, Active Directory has an internal process called “AdminSDHolder” that runs every hour to maintain the security settings of protected groups and their nested groups (i.e., groups that are members of protected groups). The permissions settings used are defined by the security-descriptor of the AD object cn=AdminSdHolder,cn=System,dc=yourdomain,dc=com.

When this process runs, it checks all “protected groups” in Active Directory for an attribute known as adminCount. If the value of adminCount is greater than 0, it changes the permissions on the object and sets the flag to disable inheritance from parent objects. It does this for all protected groups and the groups nested within the membership of those groups.
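
The sweep described above can be sketched as a small simulation (hypothetical Python data structures standing in for AD objects; the real process stamps security descriptors via the Directory Service, not dicts):

```python
# Minimal simulation of the AdminSDHolder sweep described above.
# Objects and ACE names are hypothetical placeholders.

TEMPLATE_ACL = ["generic-admin-ace-list"]  # stands in for AdminSDHolder's DACL

def sdprop_sweep(objects):
    """Reset ACLs on every object whose adminCount flags it as protected."""
    for obj in objects:
        if obj.get("adminCount", 0) > 0:
            obj["acl"] = list(TEMPLATE_ACL)     # stamp the template DACL
            obj["inherit_from_parent"] = False  # disable inheritance

directory = [
    {"name": "HelpDesk-Admins", "adminCount": 1,
     "acl": ["custom-delegation-ace"], "inherit_from_parent": True},
    {"name": "Marketing", "adminCount": 0,
     "acl": ["custom-delegation-ace"], "inherit_from_parent": True},
]

sdprop_sweep(directory)
# The protected group loses its custom ACEs and its inheritance flag;
# the unprotected group is untouched.
```

This mirrors exactly what I was seeing: any custom delegation on a flagged group silently disappears on the next hourly pass.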

While it makes sense to have an automated process to keep standard permission levels set on protected groups within Active Directory, it can become a challenge to Administrators when they begin to nest groups and users together to form chains of permissions on objects.

Active Directory protects the following built-in groups by default:

  • Enterprise Admins
  • Schema Admins
  • Domain Admins
  • Administrators
  • Account Operators
  • Server Operators
  • Print Operators
  • Backup Operators
  • Cert Publishers

Any groups affiliated with these protected groups through membership are automatically flagged as protected by AdminSDHolder.

So back to my situation with the user group losing permissions. It turns out that the group being managed was a member of a protected group and therefore was having its permissions reset every hour. By removing the group membership from the protected group it was no longer subject to the AdminSDHolder process.


Love Affair with the AD Recycle Bin

Microsoft has finally made something amazing without a lot of hype and publicity.  I’m talking about the Active Directory Recycle Bin.

Imagine you delete a user object in Active Directory without much thought.  I know it’s a sin, but it happens.  Now, if you are at the Windows Server 2003 functional level, you can “reanimate” that object, which basically means you’ve resurrected it from the graveyard of deleted objects and brought it back into operation.  You’ve gone from hero to goat and then back to hero (or at least not goat).  Congratulations!!!

All of a sudden, that previously deleted user tries to log on and access some files or permissions assigned to them but cannot.  You look at the AD account’s permissions and attributes and see that they are no longer a member of the Accounting, Marketing, Operations, etc. groups, and many custom attributes are gone.  What the heck…after all, you brought them back from the dead!!!

Well yes…BUT!!!

By default, Active Directory strips many of the attributes of an object when it is deleted and places it in a “tombstoned” state until it gets physically removed.

During that time you can successfully restore the object, but the problem is that many of the important attributes are gone, including user-group memberships.  In a pinch, you can restore these objects authoritatively from a system-state backup, which works but is complex and disruptive since the DC has to go offline.

Now fast forward to the Windows Server 2008 R2 AD Recycle Bin.  Before an object is physically removed, and before the old “tombstone” (which is now called the “recycled” state), there is a “deleted” state.  This allows a period of time during which an object can be fully restored without any loss or disruption.

The Deleted Object Lifetime is defined in the AD schema and sets how long an object remains in the Recycle Bin before it is finally removed when the garbage collection process cleans things up.
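
The lifecycle described above can be sketched as a small state model (the 180-day windows below reflect common defaults; the actual values live in the AD schema, and the attribute names and ages here are illustrative):

```python
# Illustrative model of the 2008 R2 object lifecycle described above:
# live -> "deleted" (fully restorable) -> "recycled" (stripped, like the
# old tombstone) -> purged by garbage collection.

DELETED_OBJECT_LIFETIME_DAYS = 180  # restorable window (assumed default)
RECYCLED_LIFETIME_DAYS = 180        # until garbage collection purges it

def object_state(days_since_delete):
    """Return which lifecycle state a deleted object is in."""
    if days_since_delete < DELETED_OBJECT_LIFETIME_DAYS:
        return "deleted"   # all attributes intact, fully restorable
    if days_since_delete < DELETED_OBJECT_LIFETIME_DAYS + RECYCLED_LIFETIME_DAYS:
        return "recycled"  # attributes stripped, no longer fully restorable
    return "purged"        # physically removed by garbage collection
```

The win is that first window: restore inside it and group memberships and custom attributes come back with the object, no system-state backup or DC downtime required.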

It is simple yet elegant.


The Psychology of Web Design

This is a summary of an article from the February 2009 issue of Website Magazine.

When building a web presence, one tends to forget that the overall strategy of this medium is to capture and direct your visitors through an organized and well-crafted experience.  Usually, folks don’t stumble across your site as they would an ad in a newspaper or on television.  They are actually taking the time to go to your site and open the door for a visit.  It is your job to make that visit both pleasant for the user and productive for you, whether the goal is to share, sell, or persuade.  In his article The Psychology of Web Design, Peter Prestipino touches on some of the major elements of good design that offer both the designer and clients some room to share in the crafting of a compelling web presence.

Schools of thought

  1. Listen to and monitor the actions and feedback of your customers
  2. As a designer, you should dictate best practices

The designer’s job is to build a website that fulfills the objectives of the client and parallels the existing brand value.

Find a balance between properly satisfying the needs of the brand and creating an effective web presence (i.e., one that drives customers to take specific actions, feel certain emotions, and form certain thoughts).

Express the objectives of the website through layout, form, color, and theme.  Address the psychology of design from the perspective of purpose, balance, and branding.

Being able to romanticize the experience while remaining in line with fundamental artistry achieves a certain mastery of design psychology and gives a website dramatically better odds of success.

Use URL shortening to drive traffic to specific landing pages based on the source being captured.  Make a strong first impression regardless of the drop-off point.
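
One way to read the URL-shortening advice: map each short code to a landing page plus a recorded traffic source, so every campaign lands visitors on a purpose-built page and the source survives for analytics.  A minimal sketch (the codes, pages, and `src` parameter are hypothetical):

```python
# Hypothetical short-link table: each code resolves to a dedicated
# landing page and tags the traffic source for later analysis.

SHORT_LINKS = {
    "a1": ("/landing/spring-sale", "newsletter"),
    "b2": ("/landing/spring-sale", "twitter"),
    "c3": ("/landing/whitepaper", "conference-badge"),
}

def resolve(code):
    """Resolve a short code to its landing page, with the source tagged on."""
    page, source = SHORT_LINKS[code]
    return f"{page}?src={source}"
```

The design choice is that two codes can share one landing page while still reporting distinct sources, which is what makes per-channel first-impression testing possible.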


How much do individual pages relate to the overall purpose of the site and, in turn, to the customer?

What is the aim of the site?

  1. Straying too far from the core mission of the site can be detrimental.
  2. Evaluate all widgets and elements of a site for how distracting they are.
  3. Question whether elements of the site are distracting visitors from taking the actions we really want them to take.

Present navigational cues as a primary purpose of a page.

  1. Visitors are coming to the site to search for something to see, read, hear or purchase.
  2. Use WordPress plug-ins to help users find content (popular, recent, relevant, etc.)
  3. The result is More Interaction!!!

White Space

Use white space to achieve balance, provide a sense of elegance through simplicity, and focus the reader’s eye on a desired part of the page.

  • It provides a sense of breathing room for the viewer.


The presentation of your content has a major impact on how it is consumed.

  • People usually don’t react well to rooms full of clutter.  When they feel spatially constricted, they usually look for a way out.
  • Use heat maps to track where visitors are going on your site.
  • Layout and structure of content should address the specifics of influencing how viewers approach, consume, and act on the page.
  • Each page should be designed to elicit a specific primary action.


Make an effort to explain the essence of your own brand; the most notable brands convey a certain cultural significance, a shared lifestyle and, most importantly, attitude.

Those with the ability to differentiate their brand can rise above the noise and create an enterprise of significant consequence.

  • Logos communicate the essence of an organization
  • Colors and design weigh heavily as consistent elements throughout the brand.
  • Strike a balance between design principles, client preferences and trial-and-error tests based on end user analytics data.

Virtual Infrastructure Management

What does a VMware Virtual Infrastructure (VI) administrator have to do to get their head around the dozens of host servers, subnets, and virtual machines in their environment?  Stop…take a deep breath…move forward.  VI sanity can be achieved with a reasonable amount of planned activity, equal in part to the regular due diligence administrators apply to their physical infrastructure counterparts: host clean-up, log reviews, monitoring, etc.

Who Moved My Cheese: VI Change Management

First, get your bearings.  In a well-defined VI, a Virtual Machine (VM) resides on a dynamic cluster of host servers, those servers are part of a group, and that group is composed of hardware (e.g., servers, SAN, networks, disks, etc.).  That being said, your VMs can jump from host to host and group to group at any point during the day or night.  As shared resources (CPU, RAM, etc.) are required and contention for those resources becomes an issue, VMs can automatically be evacuated from one host to another.  If your cluster is configured with Distributed Resource Scheduler (DRS) or High Availability (HA), your VMs can reside on any host in the cluster at any time.  In this configuration, the VI dynamically makes room on a host if the intensive processing of one or more VMs becomes an issue and then re-balances those resources later on as things settle down.

Given the dynamic nature of this architecture, chasing these moving targets is often like the old adage: “herding cats”.  So what should be considered an official “Request For Change” (RFC) in a Virtual Infrastructure environment?  Here are some common changes that make sense to account for in your formal Change Control process:

  1. Manual vMotion or movement of a VM between hosts
  2. VM configuration changes – extending the permanent allocation of virtual hardware or resource share changes
  3. Deployment / introduction of new VMs into the environment
  4. Host configuration changes – maintenance changes
  5. Patches and updates to ESX hosts, host hardware maintenance, etc
  6. DRS – Automatic load leveling on hosts using vMotion, could occur daily or hourly
  7. Cluster change – Addition of LUNs, rescan of storage
  8. Cluster change – Removal of LUNs
  9. Cluster change – Host upgrade (major change; VM downtime not always required)
  10. VMtools upgrades (after major host version upgrade, VM restarts after tools install)
  11. Addition of hosts to an existing cluster
  12. vCenter upgrades – no VM changes, but possible loss of access to VMs via vCenter; critical to performance and stability

Walk The Talk: The Practice of VI Maintenance

Like any administrative function, keeping an active eye on your hosts and VMs will give you a finger on the pulse of your environment.  Remember the three key concepts in VI management: shared storage, shared resources (CPU, RAM, networks), and VM placement.  As things progress, disks (SAN LUNs) will fill up, VMs will need care and feeding, and performance monitoring will tell you how things are progressing.  Here are some best practices that most VI admins perform on a regular basis to keep their head in the game.  I’m sure you’ll see many similarities to your regular duties when managing a physical environment.

Daily Tasks

  1. Gather statistics and review the previous day’s performance and utilization data
  2. Look for changes between current and previous day data on both VMs and hosts

Weekly Tasks

  1. Review host and vSphere logs; document errors or issues to troubleshoot
  2. Review VMFS volume capacity; do not deploy VMs to LUNs with <20% available space
  3. Look for VMs with open snapshots; these can grow too big and cause performance issues or lock-ups
  4. Monitor host drive space
  5. Decommission test/dev VMs to reclaim unused space
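
The 20% free-space rule in item 2 above is easy to automate.  A minimal sketch (the datastore names and capacities are hypothetical; a real script would pull these figures from vCenter rather than hardcode them):

```python
# Flag VMFS datastores that fall below 20% free space, per the weekly
# checklist above. Figures are hypothetical stand-ins for vCenter data.

MIN_FREE_FRACTION = 0.20  # do not deploy VMs below this threshold

datastores = {
    "LUN-prod-01": {"capacity_gb": 2048, "free_gb": 512},  # 25% free, OK
    "LUN-prod-02": {"capacity_gb": 2048, "free_gb": 300},  # ~15% free, flag
}

def low_space(stores):
    """Return the datastores unsafe for new VM deployments (<20% free)."""
    return [name for name, d in stores.items()
            if d["free_gb"] / d["capacity_gb"] < MIN_FREE_FRACTION]

flagged = low_space(datastores)  # deploy no new VMs to these LUNs
```

Running a check like this weekly (and before any new VM deployment) keeps the capacity review from depending on someone remembering to eyeball every LUN.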

Monthly Tasks

  1. Create capacity reports for IT management; there is a great tool for this from vKernel called Capacity Analyzer
  2. Update your VM templates with the latest hotfixes and patches approved for the environment