Sunday, September 27, 2015

Mounting NTFS .vmdk file in Linux for Forensic Analysis

Okay, so my forensics is a little rusty and this one left me scratching my head.

Here's the scenario:  I have a Windows Vista VMware virtual machine with the disk drive broken into multiple .vmdk files, which I wanted to mount in a Linux VM for forensic analysis.  Things started off easily enough . . .

Step 1: Combine the .vmdk files into one using vmware-vdiskmanager, from VMware's Virtual Disk Development Kit (VDDK).  It may also ship with VMware Workstation and Fusion, but I had to pull down the free VDDK from VMware.  I did this on a Windows box, but you could do it in Linux as well.

c:\> c:\VDDK\bin\vmware-vdiskmanager.exe -r <path to master .vmdk file> -t 2 <full path to target location, with filename>


c:\> c:\VDDK\bin\vmware-vdiskmanager.exe -r ...\Desktop\VistaVM\VirtualDisk.vmdk -t 2 ...\Desktop\temp\Vista.vmdk

Step 2: Convert the monolithic .vmdk to a raw disk image file using qemu-img (on a Linux system).  First I copied the file over to a Linux VM, then issued the following:

# qemu-img convert -O raw /media/thumbdrive/Vista.vmdk  ~/images/diskimg.raw

At this point I thought I could mount the drive . . .

# mount -o loop,ro,show_sys_files,streams_interface=windows -t ntfs /root/diskimg.raw /mnt/windows_mount

NTFS signature is missing.
Failed to mount '/dev/loop2': Invalid argument
The device '/dev/loop2' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

Hmmm . . . no luck.  I tried a couple of other things (using kpartx to split out the partitions, and ntfsck to check the health of the image file).  No love with either of those.
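In hindsight, the "NTFS signature is missing" error makes sense: the NTFS boot sector, with its 8-byte "NTFS    " OEM ID at byte offset 3, sits at the start of the partition, not at the start of the whole-disk image.  Here is a hypothetical demonstration (not from the original post) using a synthetic file; against the real image you would run only the probing dd commands, pointed at diskimg.raw.

```shell
# Demonstration only: build a synthetic "disk" whose NTFS volume begins at
# sector 2048 (byte 1048576), then probe for the signature.
img=$(mktemp)
# Write 3 junk bytes followed by the 8-byte OEM ID "NTFS    " at the
# partition's first sector (dd extends the file sparsely up to that point)
printf 'xxxNTFS    ' | dd of="$img" bs=1 seek=$((2048 * 512)) conv=notrunc 2>/dev/null
# Probing byte 3 of the image finds nothing (hence the mount error) ...
sig_at_zero=$(dd if="$img" bs=1 skip=3 count=8 2>/dev/null | tr -d '\0')
# ... but probing byte 3 of the partition finds the "NTFS    " signature
sig_at_part=$(dd if="$img" bs=1 skip=$((2048 * 512 + 3)) count=8 2>/dev/null)
echo "at image start: '$sig_at_zero'  at partition start: '$sig_at_part'"
rm -f "$img"
```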

Then I hit upon the requirement to identify the location of the partition within the image using the offset option to mount.  This is something I have done before, but again, my forensics is rusty.

Step 3: To find the offset, use fdisk to find the partition's start sector:

# fdisk -l diskimg.raw 

Disk diskimg.raw: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9eba1015

      Device Boot      Start         End      Blocks   Id  System
diskimg.raw1   *        2048    20969471    10483712    7  HPFS/NTFS/exFAT

So the boot partition starts at sector 2048 and the sectors are 512 bytes in size.  The offset is, therefore, (2048 * 512) = 1048576.  Now we can go about mounting the drive:

# mount -o loop,ro,show_sys_files,streams_interface=windows,offset=1048576 -t ntfs /root/diskimg.raw /mnt/windows_mount

And voila - the drive is mounted!  Now let's do some forensicating.
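For repeat use, the offset arithmetic can be scripted rather than done by hand.  The snippet below is a hypothetical helper (not from the original post) that pulls the start sector out of an fdisk partition line; the sample line is copied from the fdisk listing above, and on a live system you would feed in `fdisk -l diskimg.raw` output directly.

```shell
# Compute the mount offset (start sector * sector size) from fdisk output.
fdisk_line='diskimg.raw1   *        2048    20969471    10483712    7  HPFS/NTFS/exFAT'
# Strip the optional boot flag '*' so the start sector is always field 2
start_sector=$(echo "$fdisk_line" | sed 's/\*//' | awk '{print $2}')
sector_size=512
offset=$(( start_sector * sector_size ))
echo "mount offset: $offset"
# -> mount offset: 1048576
```

The computed value can then be passed straight to mount, e.g. `-o loop,ro,offset=$offset`.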

Thursday, April 30, 2015

Paradoxes of (Cyber) Counterinsurgency

This is a shameless cross-post.  Okay, not completely shameless - it is a refined and expanded version of a piece I originally posted a couple of months ago.  It incorporates some feedback on the original post and a detailed review by a colleague.  It is an interesting topic that I may continue to refine.  Certainly interested in further feedback.

The U.S. Army’s Field Manual 3-24, Counterinsurgency, broke the mold for Army doctrine, providing insights into counterinsurgency operations that were largely unknown to U.S. military professionals and offering techniques that could be applied at both the operational and tactical levels to improve local conditions.  The manual also highlighted the complex nature of counterinsurgency operations, providing a list of paradoxes, or seemingly contradictory truths, that highlight the difficulties inherent in this type of military operation.  Many parallels can be drawn between counterinsurgency and cyber operations, and practitioners of both face challenges even more complex than those encountered in more traditional, kinetic military operations.  Herein we provide a list of cyber paradoxes in the spirit of the counterinsurgency paradoxes given in FM 3-24.  Through these paradoxes, we hope to highlight the inherent complexity of cyber operations and provide insights to those who hope to be successful in this new operational domain.

The publication of the Army’s Field Manual 3-24, Counterinsurgency, in 2006 was a watershed event in the history of US Army doctrine. Previously published Army manuals, and much of the doctrine1 published since, tend to take a very high-level view of military operations.  These manuals often provide lots of theoretical background with little practical applicability.  Many military practitioners see them as abstract tomes handed down from the ivory tower of the Combined Arms Doctrine Directorate at Fort Leavenworth, KS.  Many Army officers even pride themselves on having avoided reading most of the doctrine that underpins their profession.

The new counterinsurgency manual was different.  The primary authors were then Lieutenant General David Petraeus and Lieutenant Colonel John Nagl2 at the Combined Arms Center at Fort Leavenworth.  It is unusual for a senior officer like LTG Petraeus to have such a hands-on role in writing doctrine, but Petraeus never shied away from the unusual.  A highly decorated Infantry Ranger with a Ph.D. in international relations from Princeton, Petraeus had just been promoted after successfully commanding one of the most storied divisions in the Army, the 101st Airborne Division.  His command included a year-long deployment to Mosul, Iraq during Operation Iraqi Freedom where Petraeus quickly learned how to successfully engage in counterinsurgency operations.  His primary co-author, John Nagl, was a Rhodes Scholar, having graduated near the top of his class at West Point in 1988, with a doctorate from Oxford University where he studied counterinsurgency.  Nagl published a revised version of his doctoral dissertation in 2002 under the title Learning to Eat Soup with a Knife, a well-received history of counterinsurgency lessons from Malaya and Vietnam.  The title was meant to convey unlikely successes in the seemingly impossible task of successful counterinsurgency operations.  Nagl had also served in Operation Iraqi Freedom as an Armor Battalion Operations Officer in the 1st Armored Division.  FM 3-24 was a refreshingly practical manual that was based on historical counterinsurgency doctrine and lessons learned by both officers from their own experiences fighting the deepening insurgency in Iraq in 2003 and 2004.

Like most doctrine, FM 3-24 was based on military theory developed over centuries, and from writings by insurgent leaders and counterinsurgents alike.  However, it is also full of practical tips for the tactical commander and small unit leader in successfully prosecuting a counterinsurgency.  One of the most useful sections, and one that breaks the mold of most Army doctrine, is a section entitled “Paradoxes of Counterinsurgency Operations”3.  This section provides a list of counterintuitive examples that make it clear to the reader, from Private to General, how the approach to counterinsurgency is different from other military operations.  For example, the paradox “sometimes the more force is used, the less effective it is” highlights the fact that in counterinsurgencies, unlike in most conventional conflict, increasing use of force provides opportunities for insurgents to cast the counterinsurgents as brutal and violent, thereby drawing more of the local population to the insurgent cause.

The inherent asymmetry of cyber conflict, where small groups or individuals regularly penetrate large corporate networks, makes it easy to draw parallels between malicious hackers and insurgents.  Malicious insider threats, those individuals who target networks from inside the organization, resemble insurgents even more closely.  In both scenarios network defenders and incident handling teams are the counterinsurgents.  Like most analogies, this one works in some places and not in others.  However, inasmuch as cyber defense is a counterinsurgency, there are similar paradoxes, many of which closely mirror military counterinsurgency paradoxes, which are helpful for the cyber defender to understand.  In this paper we draw on many of the paradoxes in FM 3-24 and highlight their applicability to cyber defense.  We then highlight a handful of similar paradoxes that are specific to cyber operations.

Paradoxes of Cyber Operations.
A similarity between counterinsurgency operations and cyber operations is the complex and often unfamiliar set of mission considerations presented to the practitioner.  The paradoxes of counterinsurgency operations offered in FM 3-24 are intended to stimulate thinking and to provide examples of the different mindset required to solve problems under these complex circumstances.  Here we offer a list of cyber operations paradoxes in the spirit of the counterinsurgency paradoxes offered in FM 3-24.  Many of our paradoxes are taken directly from the FM 3-24 list intact, while others are used with minor rephrasing.  A few paradoxes are completely new, owing to the unique nature of the cyberspace domain.  We believe that our list will help cyber operators gain a better understanding of the complexities of operating in cyberspace.

Sometimes, the more you protect your perimeter, the less secure you may be4.  Early network defenders focused on building strong perimeter defenses using devices that would scan and filter potentially malicious traffic at network entry points.  Devices such as network-layer firewalls and intrusion prevention systems gave way to application-layer proxies and sophisticated content-monitoring systems, giving many network administrators and their managers a false sense of security.  Many still mistakenly equate increased network security budgets with a direct and corresponding reduced vulnerability to cyber threats.  Most security professionals now recognize that perimeter defense is only one part of the solution.  Successful defense requires a combination of layered defenses, well-trained and rehearsed incident handlers, user education, and ‘hunt’ activities to locate and eradicate adversaries that have already found their way into your network.  In fact, if given the choice, most of today’s network defenders would opt to bolster their intrusion analysis and incident handling processes rather than further enhance perimeter defenses5.

Sometimes, the more destructive the cyber weapon, the less effective it is6.  Some of the best cyber weapons are subtle, intended to achieve effects without adversaries even realizing that they have been targeted.  Consider Stuxnet, perhaps one of the most effective cyber weapons ever deployed.  While no one has taken direct credit for the development of Stuxnet, analysis of the malware reveals that the intended target was almost certainly centrifuges at Iran’s nuclear enrichment facility at Natanz7.  Stuxnet seems to have been carefully crafted, not only to evade detection, but to cause damage that would be mistaken for system design flaws or operational errors.  This allowed the malware to remain effective over a long period, damaging many devices over time and having a significant cumulative and potentially enduring effect.  Once Stuxnet found the specific devices it was designed to target, it would lie dormant for two weeks, recording operational data from the centrifuge cascades that it would play back later to indicate continued normal conditions to system operators8.  If the malware had been designed to quickly damage equipment without this careful deception and subtlety, it would likely have been discovered quickly and would have had a much reduced overall impact on the Natanz facility.  In this case, a subtle, prolonged attack was much more effective than a quick and obvious cyber attack.

Sometimes doing nothing is the best reaction9.  Signs of a network intrusion bring an almost visceral response from incident handlers and network defenders.  Any evidence of compromise is normally met with rapid action to extricate intruders and, hopefully, to reconfigure systems to prevent further similar intrusions.  While this solves the immediate problem of fixing the compromise, it can tell the attacker a lot about the methods used by the network defense team to identify intruders and the methods intruders used to gain access.  On the other hand, a better response might be to observe and contain the attacker.  While an attacker often has the upper hand, network defenders enjoy a home field advantage that they can leverage to isolate and observe an intruder.  The longer defenders can observe the attacker, the more intelligence they can develop regarding tactics, techniques, and procedures (TTPs), and the more information they can glean regarding the attacker’s target in the network.  A defender that can reliably contain and observe an intruder also buys time that can be used to develop protective measures and prevent further penetrations.  Military intelligence professionals refer to tradeoffs between the risks and the benefits of data collection as “intelligence gain/loss calculus,” and they make such assessments routinely.

Some of the best weapons for cyber operators do not shoot10.  Most cyber operators are not in a position to conduct offensive operations and therefore trust the defense of their networks to cyber weapons that do not “shoot.”  Sound network defense relies on skilled, experienced professionals who understand what standard network conditions look like and are able to anticipate and identify intrusions, then handle them appropriately.  One effective weapon in enterprise network defense is the fusion center, a collaborative workspace staffed with experienced network defenders and intelligence experts that gather information on cyber threats faced by other similar organizations, along with TTPs from adversaries that might target them, in order to inform defenses and mitigate exposure before their organization is targeted. 

If a tactic works this week, it might not work next week; if it works in this network, it might not work in the next11.  Most cyber weapons rely on very specific network conditions, and unlike physical terrain, cyber terrain is man-made and can change drastically over time.  Exploits are matched with vulnerabilities that must be present for the exploit to be successful, and effective network defenders constantly patch and update systems to eliminate existing vulnerabilities.  Similarly, defenders must be able to function in an environment where attackers discover new vulnerabilities routinely, and those vulnerabilities are exploitable until patched.  Even when a defender becomes aware of a new vulnerability, it takes time for software vendors to develop and distribute patches to fix them.  The market for zero-day vulnerabilities almost guarantees that defenders will face exploits that they are not equipped to handle12.

Many important decisions are not made by Generals13.  In counterinsurgency operations, young leaders interact with the population to improve local conditions through grass-roots change.  Senior leaders must ensure that Soldiers are equipped not only with an understanding of service doctrine, but also with sufficient information on their local situations and an understanding of the legal and ethical implications of their actions.  Soldiers are then empowered to take action locally that collectively improves overall conditions for the local population and reduces the insurgent’s influence in an area one village at a time.  Cyber operations can be very similar.  Soldiers and leaders will take direct tactical action on the keyboard in a way that most senior leaders aren’t able to do, nor even fully understand.  Those Soldiers must be equipped through proper training and education to understand the moral, ethical, technical, and legal implications of their actions in order to make sound decisions based on commander’s intent.  Most cyber operations will have the potential for far-reaching international implications since they traverse systems that exist in a variety of friendly, neutral, and adversary countries.  A few careless keystrokes could literally cause an international incident.

It is often easier to penetrate a computer thousands of miles away than it is to attack a computer in the next room.  Unobserved physical access to target computer systems is rare and risky; most unauthorized access relies on logical connections.  Physical proximity to a target is therefore rarely relevant in cyber operations.  A system on the same network segment as you might be more readily accessible through the network, but recent high-profile compromises have relied more often on phishing or watering hole attacks that install malicious software on victim systems, causing them to call back to the attacker’s command and control infrastructure.  Educating your users to avoid such social engineering attacks will go a long way toward preventing these sorts of compromises.  Other ways to make your network more secure are to restrict administrative accounts to appropriate personnel, deploy software such as Microsoft’s Enhanced Mitigation Experience Toolkit to prevent vulnerabilities in software from being successfully exploited, and configure email and other services to flag potential phishing messages and disable links in emails.  These steps will make your systems much more secure than systems in other, similar organizations regardless of location.

Collateral damage can be orders of magnitude worse than the intended effect.  In traditional combat, collateral damage from weapon systems is often a concern.  Bombs and missiles don’t always hit their mark and blast radii often extend beyond intended targets.  Damage from collateral effects, however, is largely predictable and normally significantly less severe than the damage to the intended target.  Kinetic effects are generally well understood and commanders can make informed risk decisions based on known probabilities of unintended consequences.  A cyber weapon, however, can cause collateral effects that are unpredictable and severe.  A virus intended to infect and influence an adversary’s command and control infrastructure can easily spread far beyond its intended network and infect thousands or millions of systems.  Even if the virus recognizes that the system it has infected is not the target and the payload is never activated, companies will still invest significant resources to investigate the intrusion and eliminate the infection from their systems.  By its nature, malware is unpredictable.  Even if payloads are not activated, malicious software can cause critical systems to crash, or it can introduce vulnerabilities that would not have existed otherwise.  Any attempt to argue that malicious software is benign unless it reaches its intended target is either naïve or purposely misleading.  Furthermore, a cyber weapon may never reach its intended target, causing collateral damage without ever achieving its intended purpose.

Using a cyber weapon can immediately render it, and a whole class of related weapons, obsolete.  To use an exploit is to risk having an adversary identify its use and the vulnerabilities it exploits, and then patch those vulnerabilities so the weapon can no longer be leveraged against their systems.  If the affected software is commercial off-the-shelf, a clever opponent might patch their own systems and then use a similar exploit to target systems belonging to you or your allies.  It is also possible, and perhaps more likely, that the exploit is identified by a third party who then publicizes the vulnerability and causes the vendor to create a patch for all users of the affected software, rendering the exploit more universally obsolete.  For example, the exploits used by Stuxnet were discovered and publicized by commercial security companies and the vulnerabilities, mostly in Microsoft products, were quickly patched14.  Other malware that took advantage of those same vulnerabilities was immediately rendered useless on patched systems.  It is useful to note, however, that many individuals and organizations do a poor job of keeping their systems fully patched, and that can make even known vulnerabilities exploitable.  A recent report by Secunia found that 11% of Internet Explorer installations are not fully patched, and 12.6% of users are running unpatched operating system software.  Furthermore, almost 6% of users are running unpatched end-of-life software such as Windows XP15.

The more junior a cyber operator is, the more experienced she is likely to be in cyber operations.  Cyber operations are largely the realm of “digital natives,” young people who have grown up with and are completely comfortable in a fully-connected, digital existence. Even senior leaders with technical backgrounds are at a disadvantage compared to junior officers and Soldiers who have fresh training, newly minted degrees, and recent operational experience.  Recently, two new college graduates with Computer Science degrees and significant outside-the-classroom cyber experiences (including conference attendance, security training, internships, and cyber club involvement) spent 6 months immediately after graduation as interns at Army Cyber Command where they served as technical advisors to the ARCYBER Commanding General before starting their Basic Officer Leader Courses (BOLC).  The CG relied heavily on their expertise, and commented that these junior officers’ technical understanding surpassed that of almost every officer currently assigned to the command.  These officers are well on their way to making names for themselves in the Army’s newly created Cyber branch.

Cyberspace operations and counterinsurgency operations are both unlike the more traditional and primarily kinetic combat practiced by the U.S. Army.  After years of fighting counterinsurgencies in Iraq and Afghanistan, however, an Army that was primarily trained for large-scale maneuver warfare was able to master the intricacies of counterinsurgency operations.  This evolution required a concerted effort among leaders at all levels and required a massive retooling of Army training curricula.  The paradoxes of counterinsurgency outlined in the 2006 Counterinsurgency field manual highlighted the challenge that the Army faced in refocusing to counterinsurgency operations.  It is encouraging to look back on 15 years of operations in Iraq and Afghanistan and see how the Army evolved to face this emerging threat.  Perhaps highlighting similar challenges in cyberspace operations will help lead to successes in this new form of warfare.

1 U.S. Department of Defense Joint Publication 1-02, Dictionary of Military and Associated Terms, defines doctrine as “fundamental principles by which the military forces or elements thereof guide their actions in support of national objectives.  It is authoritative but requires judgment in application.”
2 John Nagl provides a brief history of the development of FM 3-24 in a 2005 article on the University of Chicago website.
3 Headquarters, Department of the Army, “Field Manual 3-24: Counterinsurgency”, Dec., 2006, page 1-26.
4 The original counterinsurgency operations paradox is “Sometimes, the More You Protect Your Force, the Less Secure You May Be”.
5 As far back as 1999, Winn Schwartau, in Time Based Security (Interpact Press, 1999), decried the failed “fortress mentality” espoused by many security vendors.  More recently, in the preface of The Practice of Network Security Monitoring: Understanding Incident Detection and Response (No Starch Press, 2013), Richard Bejtlich points out the need for monitoring for indications of compromise inside the network.
6 Original counterinsurgency paradox: “Sometimes the More Force Is Used, the Less Effective It Is”.
7 K. Zetter, “Countdown to Zero Day,” Crown Publishers, New York, NY.  2014.
8 Ibid.
9 Kept from FM 3-24 with same wording.
10 Original counterinsurgency paradox: “Some of the Best Weapons for Counterinsurgents Do Not Shoot.”
11 Original counterinsurgency paradox: “If a Tactic Works this Week, It Might Not Work Next Week; If It Works in this Province, It Might Not Work in the Next.”
12 Even noted black-hat to white-hat to now gray-hat hacker Kevin Mitnick has gotten into the zero-day exploit market.
13 Kept from FM 3-24 with same wording.
14 Symantec Corporation, “W32.Stuxnet Dossier,” Feb. 2011.
15 Secunia ApS, “Secunia Emphasizes Patch Importance During National Cyber Security Awareness Month,” 1 Oct. 2014.

Friday, February 13, 2015

ISIS as Cyber Threat?

In January of this year, the group known in the West as the Islamic State of Iraq and Greater Syria, or ISIS, made news by "hacking U.S. Central Command", taking control of CENTCOM's social media feeds, and posting internal documents [1].  In reality, someone sympathetic to ISIS' cause gained temporary control of the CENTCOM Twitter and YouTube accounts, probably after managers of those accounts fell victim to phishing emails and bad password practices, and posted documents that seem to have been readily available elsewhere online.  Why a sympathizer?  Do ISIS members call themselves ISIS?  And would they use as an avatar an image with the line "I love you isis"?  Seems more like the work of a fanboy than a terrorist.

The accounts were quickly taken offline and were back under the control of CENTCOM within a few hours, but not before creating a bit of an embarrassment for Central Command.
More recently, the ISIS splinter group calling itself the "Cyber Caliphate" launched a similar attack against a military spouses' group called Military Spouses of Strength and posted threats against several members [2].  This may have been a more successful campaign if their goal was to spread terror, as personal threats to military spouses could certainly result in someone looking over their shoulder.

Although these attacks seem to have been fairly low-level cyber vandalism, they do raise the question of what sort of threat ISIS poses from a cyber perspective.

ISIS has been particularly adept in their social media campaign, using sites such as Facebook, Twitter, and YouTube to disseminate video footage of executed hostages and to communicate their message to potential sympathizers.  This helps drive recruiting and fundraising, resulting in an estimated 20,000 - 30,000 fighters helping to expand their presence in the Middle East.

Despite their social media prowess, experts in and outside of the U.S. government are largely in agreement that ISIS doesn't pose a significant cyber threat to the United States [3].  At least not yet.  A major attack on the U.S. might involve attacks on the energy industry or financial sector to cause large-scale power outages or a financial crisis.  These sorts of attacks require significant infrastructure and a long-term campaign to infiltrate large numbers of computer systems within these respective sectors.  Such a campaign requires hard-core programmers who can create specialized software and a large, skilled team of cyber professionals working together from a facility with significant technological infrastructure.  There are currently a handful of nation-states that might meet these criteria, but terrorist groups like ISIS haven't demonstrated the capacity to do this yet.  Even if they could, it is not at all clear that such an effort would bring them closer to their goal of creating an Islamic Caliphate in the Middle East.  Instead, ISIS is focusing their energy on recruiting fighters and expanding their footholds in Syria and Iraq.

Monday, February 9, 2015

Is Attribution Important?

First, let me say that I am biased.  As a career Army officer, I have internalized, over 25+ years, the importance of threat intelligence.  It started when I was a young Lieutenant in Germany, facing off across the Fulda Gap against the Red Horde.  After that was the first Gulf War, then on to peacekeeping operations in Bosnia and Kosovo, and finally Operations Iraqi Freedom and Enduring Freedom.  All of these different kinds of military operations (armored warfare, peacekeeping, counterinsurgency, and security force assistance) require a robust but tailored intelligence effort.  Cyber operations, whether in the public or private sector, are no different.  Network defense is by its nature adversarial.  You are defending against someone who wants to do harm to your organization, and approaching such a situation without some sort of intelligence on potential threats is likely doomed to failure.

An intelligence effort can have multiple components.  One important aspect, one that some don't consider to be part of the intelligence effort, is to know yourself.  The more you know, the better.  Having a full accounting of all of the devices on your network and what software is running on them, including versions and patch levels, is extremely helpful when a new vulnerability is announced.  This inventory helps your security team prioritize efforts when taking action to mitigate new threats.  Another potential component of a security intelligence apparatus is a "threat intelligence" capability.  Here you might keep the pulse of real-time threats and exposed vulnerabilities from various CERTs, ISACs, and other sources.  Any way you slice it, some level of intelligence collection and analysis is critical.

Attacker attribution is an intelligence effort that has recently garnered increased attention.  Like many cybersecurity topics, there are lots of opinions regarding its value, and even some diametrically opposed positions on the matter.  On one end of the spectrum is CrowdStrike, a company that has built a large part of its business on threat attribution and has provided these services to both public and private sector entities.  CrowdStrike argues that threat attribution is essential.  Their tagline, "You Don't Have a Malware Problem, You Have an Adversary Problem," reflects their assertion that once you know exactly who is attacking you, you can take defensive measures against that specific individual's or organization's known tactics, techniques, and procedures (TTP).

At the other end of the spectrum are Jack Daniel, Paul Asadoorian, and the crew at Security Weekly, who recently discussed threat attribution during episode 399 of their weekly podcast.  Jack devoted one of his "rants" to an eWeek article entitled "Best Defense Against a Cyber-Attack Is to Know Your Adversary", which describes the philosophy of Tom Chapman, director of cyber operations at EdgeWave Security (the article referenced in their show notes for episode 399 is an excerpt with the same title).  Jack Daniel scoffs at this notion, arguing that knowing your adversary is of little value in network defense.  "You have to know your own sh*t," he says.

During the 2014 National Science Foundation Cybersecurity Summit for Large Facilities and Cyberinfrastructure I moderated a panel on "Threat Profile for NSF Large Facilities and Cyberinfrastructure."  Among the panelists was Matthew Rosenquist, Cybersecurity Strategist for Intel Corporation, who had just given the keynote address.  Matt's talk on "Strategic Leadership for Managing Evolving Cybersecurity Risks" included prediction as a component of defense-in-depth.  He argued that predicting future attacks against your organization can come from an analysis of the most likely attackers, targets, and methods, something that threat attribution would certainly facilitate.  Matthew described a process by which threat actors are divided into "archetypes" with common attributes in terms of resources, skills, targets, tactics, etc.  Understanding who targeted you in the past helps in predicting which archetype is likely to target you in the future, and therefore informs your protective posture and helps you prioritize your approach to defense.  After all, it is difficult, some would say impossible, to completely protect everything in your network.

So, who's right?  I leave it up to you to decide whether threat attribution is relevant.  I'll state the obvious here: you must prioritize your intelligence effort according to the resources you can devote to it.  Of critical importance is knowing what hardware and software is on your network so that you know how new threats might affect you.  Next, subscribe to the intelligence feeds necessary to learn of new threats and vulnerabilities.  Then analyze potential threat archetypes and keep track of those that you think would target you.  Finally, once you can do all that, begin to focus on threat attribution to refine your understanding of who is targeting you and why.

And I got through this without a single Sun Tzu quote . . .

Saturday, January 24, 2015

Book Review - The Innovators by Walter Isaacson

The Innovators
by Walter Isaacson
Simon & Schuster, 2014

Walter Isaacson’s The Innovators provides a superb history of computing technology, from its earliest beginnings in the mid-19th century, through the invention of the transistors and integrated circuits that replaced the vacuum tubes of World War II-era computers, to today’s smartphones and global Internet.  Isaacson explores the circumstances and personalities surrounding each technological leap and weaves them together into an entertaining narrative, leaving the reader with both an understanding of the history of innovation and the context in which these innovations were made possible.

Most of the names and the stories behind them are familiar.  For example, Isaacson begins with Ada Lovelace and her work with Charles Babbage’s Analytical Engine in the 1840s.  More than just recounting the history and sequence of events, he describes Ada’s parents, her father, the poet Lord Byron, and her mother, a mathematics tutor, and posits that Ada inherited elements of both of their sensibilities, leading to her fascination with both the scientific and philosophical elements of early computing theory.  Isaacson takes this approach throughout, drawing some tenuous conclusions about how early influences led to later events.  His connections are at least plausible, however, and as a whole don’t detract significantly from the storyline.

Other familiar names include Alan Turing, Claude Shannon, John von Neumann, Grace Hopper, Gordon Moore, Bill Gates, Tim Berners-Lee, Steve Jobs, Larry Page, Sergey Brin, and many others whose contributions are well known.  In addition to these famous innovators, Isaacson delves into contributions made by less famous instigators of change, such as Stewart Brand, who fostered connections between northern California’s technology community and the ‘hippie’ culture of the 1960s and 70s.  He shows how these connections led to communal aspects of technological progress, such as the early bulletin boards (which gave way to modern forums and blogs) and the open source software movement.  Isaacson makes the case well that these innovations are just as important as the technical ones that make them possible.

Throughout the book, Isaacson highlights the collaborative process behind most of the innovations described.  Most scientific advancements require visionaries, who can see the possible future created by new technology, and skilled craftsmen and engineers, who buy into the vision and are able to make it a reality.  To help make this point, he repeats a sentiment often attributed to Thomas Edison: “vision without execution is hallucination.”  Additionally, Isaacson draws the unsurprising conclusion that both the idea and the timing are important.  A brilliant idea conceived before supporting technologies exist to implement it is often lost to history, or at least has to sit on the sidelines until the timing is right.

A well-researched and skillfully written narrative of the history of computing from its very beginnings, The Innovators, by Walter Isaacson, should be required reading for any computing or information technology professional.

Monday, January 12, 2015

A Clausewitzian Approach to Graduate Studies

While reading an article on Slashdot today about the debate over the importance of grit vs. intelligence in academic success, I was reminded of an old paper idea on a Clausewitzian approach to grad school.  It is one of those papers that I never got around to writing, but I always thought it might be helpful to Army graduate students going through programs behind me.  It is the approach I used with great success, both as a Master's student at Duke University (Computer Science, '96 - '98) and during my Ph.D. studies at Virginia Tech (Computer Engineering, '05 - '08).

Despite my lackluster grades in my History of the Military Art class during my senior year at West Point, I have always had an interest in military history.  One of the great military theorists of the 19th century was Carl von Clausewitz, a Prussian general who studied the campaigns of Frederick the Great and Napoleon and wrote extensively on the nature and philosophy of warfare.  In his most famous work, On War, he describes war as "a fascinating trinity" composed of "violent emotion, chance, and rational calculation."  He goes on to (loosely) describe how these elements are embodied by a country's population, its army, and its government, and how success in war relies on a balance of the three, not on any one alone.

While a Master's student at Duke, I proposed that success as a graduate student relied on a similar "fascinating trinity," and I often used it as a device to advise the graduate students who followed me.  My "trinity" included raw intelligence, work ethic, and politics.  A student who mastered two of the three was very likely to succeed and finish school on time and on budget.  Possessing all three made it almost impossible to fail.  Possessing only one of the three, even to the extreme, was usually not sufficient for success.  My evidence is circumstantial: time after time I observed students whose intelligence shone like a bright star against the pale flicker of me and my ilk, but who, through laziness and either disinterest in or misunderstanding of the political landscape, toiled for years in the trenches of research assistantships with little or no hope of ever finishing their degrees.

The Trinity:

Intelligence.  This is pretty straightforward.  People assume that Ph.D. holders are exceedingly intelligent.  Once you are in the club, you realize that this isn't necessarily the case.  Don't get me wrong - I'm not claiming you can earn a Ph.D. without some level of intelligence.  I just know that my pre-Ph.D. perception of how smart Doctors of Philosophy are was a bit off base.  It sure helps to be super-smart, though.  The best thing it does is save you time that you can devote to research and writing.  Many a weekend and evening I toiled over readings, math problems, and programming assignments when I could otherwise have been working on my research.  What raw intelligence doesn't necessarily do is give you a corner on the market of clever solutions to interesting problems.  Those come to anyone willing to put in the effort, which brings me to the second leg of the stool . . .

Work Ethic.  Success in grad school, especially in a Ph.D. program, requires a hell of a lot of work, especially if you are time-constrained.  I completed both of my graduate degrees in the Army's Advanced Civil Schooling program, which gives you only two years as a full-time student to finish a Master's degree and three years for a Ph.D.  (Three years for a Ph.D. seems awfully quick, but keep in mind that most Army Ph.D. students are building on a Master's degree.)  The willingness and ability to put in the time can also make up for deficiencies in your background.  I entered my CS Master's program with little experience in C and virtually no experience in C++, but I was expected to use both starting in the very first term.  I audited an undergrad C++ course and put in significant effort to bring myself up to speed.

Politics.  This is probably the least understood, but I would argue the most important, aspect of a successful graduate experience.  Perhaps 'politics' isn't the most appropriate term, but to me it is the term that gets closest to my point.  A graduate student has to do the basic mission analysis to understand course requirements and research/thesis/dissertation parameters, but this provides only a baseline understanding.  You have to understand the whole degree-granting system within your department and the role individuals play in it.

Of critical importance is the selection of your graduate adviser.  For you to be successful, this must be someone who understands your constraints (for example, the Army requirement to finish a Ph.D. in three years), and who has the standing in the department to declare, unilaterally if necessary, that you have completed the requirements for your degree.  Be wary of anyone who is not at least an associate professor, well liked in the department, with a proven history of graduating students of whatever flavor you are (Master's or Ph.D.).  Don't pick the 'hungry' assistant professor who is trying to make a name for himself on the backs of his wage-slave grad students and who hasn't graduated a single student of your type!  I don't care if his interests line up exactly with yours - this is a recipe for disaster.

Be careful, also, in selecting your committee members.  They should either be close colleagues of your adviser, or young faculty who will do what he or she says.  Don't pick your adviser's sworn enemy, even if he is your neighbor and you like him.  He may value screwing your adviser more than he does taking care of you.  There are lots of other political considerations, from the courses that you take (and who teaches them) to the fellow graduate students you partner with on projects.  Be sure to do the complete mission analysis early.  There are some things you can do early on that are hard, or impossible, to undo.

So there it is.  I could perhaps expound more on each aspect of the trinity, but I'm not sure it would clarify much.  Maybe there's a reason I never got around to writing the paper . . .

Saturday, January 3, 2015

New Blog

Okay, new blogger here.  After years of cyber security research and writing, I have decided to take the leap into this format.  As much for me as for you, this blog will (I hope) serve as a central location to consolidate research notes, book reviews, and other writings that I find myself creating, then losing to obscurity in a tree of subfolders on various network and local hard drives.  (Now I just need to remember to put that stuff here, not there!  Wish me luck!)

David Raymond
January 2015