Flying Cars & Why The Hypervisor Is A Ride-On Lawnmower In Comparison

September 23rd, 2011 18 comments

I wrote a piece a while ago (in 2009) titled “Virtual Machines Are Part Of the Problem, Not the Solution…” in which I described the fact that hypervisors, virtualization and the packaging that supports them — Virtual Machines (VMs) — were actually kludges.

Specifically, VMs still contain the bloat (nee: cancer) that are operating systems and carry forward all of the issues and complexity (albeit now with more abstraction cowbell) that we already suffer.  Yes, they bring a lot of GOOD stuff, too, but tolerate the analogy for a minute, m’kay.

Moreover, the move in operational models such as Cloud Computing (leveraging the virtualization theme) and the up-stack crawl from IaaS to PaaS (covered also in a blog I wrote titled: Silent Lucidity: IaaS – Already A Dinosaur?) seem to indicate a general trend toward a reduction in the number of layers in the overall compute stack.

Something I saw this morning reminded me of this and its relation to how the evolution and integration of various functions — such as virtualization and security — directly into CPUs themselves are going to dramatically disrupt how we perceive and value “virtualization” and “cloud” in the long run.

I’m not going to go into much detail because there’s a metric crapload of NDA-type stuff associated with the details, but I offer you this as something you may have already thought about and toward which the industry is gingerly crawling across multiple platforms.  You’ll have to divine and associate the rest:

Think “Microkernels”

…and in the immortal words of Forrest Gump “That’s all I’m gonna say ’bout that.”

/Hoff

* Ray DePena humorously quipped on Twitter that “…the flying car never materialized,” to which I retorted “Incorrect. It has just not been mass produced…” I believe this progression will — and must — materialize.


Cloud Security Start-Up: Dome9 – Firewall Management SaaS With a Twist

September 12th, 2011 No comments

Dome9 has poked its head out from under the beta covers and officially launched their product today.  I got an advance pre-brief last week and thought I’d summarize what I learned.

As it turns out, I enjoy a storied past with Zohar Alon, Dome9’s CEO.  Back in the day, I was responsible for architecture and engineering of Infonet’s (now BT) global managed security services, which included a four-continent deployment of Check Point Firewall-1 on Sun SPARCs.

Deploying thousands of managed firewall “appliances” (if I can even call them that now) and managing each of them individually with a small team posed quite a challenge for us.  It seems it posed a challenge for many others also.

Zohar was at Check Point and ultimately led the effort to deliver Provider-1 which formed the basis of their distributed firewall (and virtualized firewall) management solution which piggybacked on VSX.

Fast forward 15 years and here we are again — cloud and virtualization have taken the same set of security and device management issues and amplified them.  Zohar and his team looked at the challenge we face in managing the security of large “web-scale” cloud environments and brought Dome9 to life to help solve this problem.

Dome9’s premise is simple – use a centralized SaaS-based offering to solve your distributed cloud access-control (read: firewall) management challenge using either an agent (in the guest) or agent-less (API) approach across multiple cloud IaaS platforms.

Their first iteration of the agent-based solution focuses on Windows and Linux-based OSes and can pretty much function anywhere.  The API version currently is limited to Amazon Web Services.

Dome9 seeks to fix the “open hole” access problem created when administrators create rules to allow system access and forget to close/remove them after the tasks are complete.  This can lead to security issues as open ports invite unwanted “guests.”  In their words:

  • Keep ALL administrative ports CLOSED on your servers without losing access and control.
  • Dynamically open any port On-Demand, any time, for anyone, and from anywhere.
  • Send time and location-based secure access invitations to third parties.
  • Close ports automatically, so you don’t have to manually reconfigure your firewall.
  • Securely access your cloud servers without fear of getting locked out.

The unique spin/value-proposition with Dome9 in its initial release is the role-, VM- and user-focused, TIME-LIMITED access policies you put in place to enable either static (always-open) or dynamic (time-limited) access control for authorized users.

Administrators can set up rules in advance for access, or authorized users can request time-based access dynamically to previously-configured ports by clicking a button.  Dome9 quickly opens access and closes it once the time limit has been reached.
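To make the agent-less, API-driven flavor concrete, here’s a rough sketch (emphatically not Dome9’s actual implementation) of what a time-limited port opening might look like against an AWS security group using the boto library; the region, group name, address and port below are invented for illustration:

```python
# Hypothetical sketch only: open a single admin port in an AWS security
# group for one caller, then close it again after a time limit.  Uses the
# boto EC2 API; all names/addresses below are made up.
import time
import boto.ec2

def open_port_temporarily(region, group_name, port, cidr, ttl_seconds):
    conn = boto.ec2.connect_to_region(region)

    # Punch the hole: allow TCP <port> from the requester's address only.
    conn.authorize_security_group(group_name=group_name, ip_protocol='tcp',
                                  from_port=port, to_port=port, cidr_ip=cidr)
    try:
        time.sleep(ttl_seconds)          # the "lease" on the open port
    finally:
        # Close it again even if something blows up while we wait.
        conn.revoke_security_group(group_name=group_name, ip_protocol='tcp',
                                   from_port=port, to_port=port, cidr_ip=cidr)

# e.g. give one admin SSH access for 15 minutes:
# open_port_temporarily('us-east-1', 'web-servers', 22, '203.0.113.7/32', 900)
```

A real service obviously tracks the lease out-of-band rather than sleeping in-process, but the authorize/revoke pair is the whole trick.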

Basically Dome9 allows you to manage and reconcile “network” based ACLs and — where used — AWS security groups (across regions) with guest-based firewall rules.  With the agent installed, it’s clear you’ll be able to do more in both the short and long term (think vulnerability management, configuration compliance, etc.) although they are quite focused on the access control problem today.

There are some workflow enhancements I suggested during the demo to enable requests from “users” to “administrators” for access to ports not previously defined — imagine if port 443 is open to allow a user to install a plug-in that then needs a new TCP port to communicate.  If that port is not previously known/defined, there’s no automated way to open it without an out-of-band process, which makes things clumsy.

We also discussed the issue of importing/supporting identity federation in order to define “users” from the Enterprise perspective across multiple clouds.  They could use your input if you have any.

There are other startups with similar models today such as CloudPassage (I’ve written about them before here) who look to leverage SaaS-based centralized security services to solve IaaS-based distributed security challenges.

In the long term, I see Cloud security services being chained together to form an overlay of sorts.  In fact, CloudFlare (another security SaaS offering) announced a partnership with Dome9 for this very thing.

Dome9 has a 14-day free trial and two available pricing models:

  1. “Personal Server” – a FREE single protected server with a single administrator
  2. “Business Cloud” – Per-use pricing with 5 protected servers at $20 per month

If you’re dealing with trying to get a grip on your distributed firewall management problem, especially if you’re a big user of AWS, check out Dome9.

/Hoff


VMware vCloud Architecture ToolKit (vCAT) 2.0 – Get Some!

September 8th, 2011 No comments

Here’s a great resource for those of you trying to get your arms around VMware’s vCloud Architecture:

VMware vCloud Architecture ToolKit (vCAT) 2.0

This is a collection of really useful materials, clearly painting a picture of cloud rosiness, but valuable for understanding how to approach the various deployment models and options for VMware’s cloud stack.


VMware’s vShield – Why It’s Such A Pain In the Security Ecosystem’s *aaS…

September 4th, 2011 15 comments

I’ve become…comfortably numb…

Whilst attending VMworld 2011 last week, I attended a number of VMware presentations, hands-on labs and engaged in quite a few discussions related to VMware’s vShield and overall security strategy.

I spent a ton of time discussing vShield with customers — some who love it, some who don’t — and thought long and hard about writing this blog.  I also spent some time discussing it on SiliconAngle’s The Cube, here.

I have dedicated quite a lot of time discussing the benefits of VMware’s security initiatives, so it’s important that you understand that I’m not trying to be overtly negative, nor am I simply pointing fingers as an uneducated, uninterested or uninvolved security blogger intent on poking the bear.  I live this stuff…every day, and like many, it’s starting to become messy. (Ed: I’ve highlighted this because many seem to have missed this point. See here for example.)

It’s fair to say that I have enjoyed “up-to-the-neck” status with VMware’s various security adventures since the first marketing inception almost 4 years ago with the introduction of the VMsafe APIs.  I’ve implemented products and helped deliver some of the ecosystem’s security offerings.  My previous job at Cisco was to provide the engineering interface between the two companies, specifically around the existing and next generation security offerings, and I now enjoy a role at Juniper which also includes this featured partnership.

I’m also personal friends with many of the folks at VMware on the product and engineering teams, so I like to think I have some perspective.  Maybe it’s skewed, but I don’t think so.

There are lots of things I cannot and will not say out of respect for obvious reasons pertaining to privileged communications and NDAs, but there are general issues that need to be aired.

Geez, enough with the CYA…get on with it then…

As I stated in The Cube interview, I totally understand VMware’s need to stand alone and provide security capabilities atop their platform; they simply cannot expect to move forward and be successful if they are to depend solely on synchronizing the roadmaps of dozens of security companies with theirs.

However, the continued fumbles and mismanagement of the security ecosystem and their partnerships, as well as the continued competitive nature of their evolving security suite, make this difficult.  Listening to VMware espouse that they are in the business of “security ecosystem enablement” when there are so few actually successful ecosystem partners involved beyond antimalware is disingenuous…or at best, a hopeful prediction of some future state.

Here’s something I wrote on the topic back in 2009: The Cart Before the Virtual Horse: VMware’s vShield/Zones vs. VMsafe APIs that I think illustrates some of the issues related to the perceived “strategy by bumping around in the dark.”

A big point of confusion is that “vShield” simultaneously describes both an ecosystem program and a set of products that goes well beyond anti-malware capabilities, even though anti-malware is where the bulk of integration today is placed.

Analysts and journalists continue to miss the fact that “vShield” is actually made up of 4 components (not counting the VMsafe APIs):

  • vShield Edge
  • vShield App
  • vShield Endpoint
  • vShield Manager

What most people often mean when they refer to “vShield” are the last two components, completely missing the point that the first two products — which are now monetized and marketed/sold as core products for vSphere and vCloud Director — make it very difficult for the ecosystem to partner effectively, since it’s becoming harder and harder to swap vShield solutions out for someone else’s.

An important reason for this is that VMware’s sales force is incentivized (and compensated) on selling VMware security products, not the ecosystem’s — unless of course it is in the way of a big deal that only a partnership can overcome.  This is the interesting juxtaposition of VMware’s “good enough” versus incumbent security vendors “best-of-breed” product positioning.

VMware is not a security or networking company, and ignoring the fact that big companies with decades of security and networking products are not simply going to fade away is silly.  This is as true of networking as it is of security (see software-defined networking as an example).

Technically, vShield Edge is becoming more and more a critical piece of the overall architecture for VMware’s products — it acts as the perimeter demarcation and multi-tenant boundary in their Cloud offerings and continues to become the technology integration point for acquisitions as well as networking elements such as VXLAN.

As a third party tries to “integrate” a product which is functionally competitive with vShield Edge, the problems start to become much more visible and the partnerships more and more clumsy, especially in the eyes of the most important party privy to this scenario: the customer.

Jon Oltsik wrote a story recently in which he described the state of VMware’s security efforts: “vShield, Cloud Computing, and the Security Industry”

So why aren’t more security vendors jumping on the bandwagon? Many of them look at vShield as a potentially competitive security product, not just a set of APIs.

In a recent Network World interview, Allwyn Sequeira, VMware’s chief technology officer of security and vice president of security and network solutions, admitted that the vShield program in many respects “does represent a challenge to the status quo” … (and) vShield does provide its own security services (firewall, application layer controls, etc.)

Why aren’t more vendors on-board? It’s because of this positioning of VMware’s own security products, which enjoy privileged and unobstructed access to the platform that ISVs in the ecosystem do not have.  You can’t move beyond the status quo when there’s not a clear plan for doing so and the past and present are littered with the wreckage of prior attempts.

VMware has its own agenda: tightly integrate security services into vSphere and vCloud to continue to advance these platforms. Nevertheless, VMware’s role in virtualization/cloud and its massive market share can’t be ignored. So here’s a compromise I propose:

  1. Security vendors should become active VMware/vShield partners, integrate their security solutions, and work with VMware to continue to bolster cloud security. Since there is plenty of non-VMware business out there, the best heterogeneous platforms will likely win.
  2. VMware must make clear distinctions among APIs, platform planning, and its own security products. For example, if a large VMware shop wants to implement vShield for virtual security services but has already decided on Symantec (Vontu) or McAfee DLP, it should have the option for interoperability with no penalties (i.e., loss of functionality, pricing/support premiums, etc.).

Item #1 sounds easy enough, right? Except it’s not.  If the way in which the architecture is designed effectively locks the ecosystem out of equal access to the platform, except perhaps for a privileged few, then “integrating” security solutions in a manner that makes those solutions competitive and not platform-specific is a tall order.  It also limits innovation in the marketplace.

Look how few startups still exist who orbit VMware as a platform.  You can count them on fewer fingers than exist on a single hand.  As an interesting side note, Catbird — a company that used to produce its own security enforcement capabilities along with its strong management and compliance suite — has OEM’d VMware’s vShield App product instead of bothering to compete with it.

Now, item #2 above is right on the money.  That’s exactly what should happen; the customer should match his/her requirements against the available options, balance the performance, efficacy, functionality and costs and ultimately be free to choose.  However, as they say in Maine…”you can’t get there from here…” at least not unless item #1 gets solved.

In a complementary piece to Jon’s, Ellen Messmer writes in “VMware strives to expand security partner ecosystem”:

Along with technical issues, there are political implications to the vShield approach for security vendors with a large installed base of customers as the vShield program asks for considerable investment in time and money to develop what are new types of security products under VMware’s oversight, plus sharing of threat-detection information with vShield Manager in a middleware approach.

…and…

The pressure to make vShield and its APIs a success is on VMware in some respects because VMware’s earlier security API, the VMsafe APIs, weren’t that successful. Sequiera candidly acknowledges that, saying, “we got the APIs wrong the first time,” adding that “the major security vendors have found it hard to integrate with VMsafe.”

Once bitten, twice shy…

So where’s the confidence that guarantees it will be easier this time? Basically, besides the anti-malware functionality provided by integration with vShield Endpoint, there’s not really a well-defined, ecosystem-wide option for integration with VMware beyond that right now.  Even VMware’s own roadmaps for integration are confusing.  In the case of vCloud Director, while vShield Edge is available as a bundled (and critical) component, vShield App is not!

Also, forcing security products to now integrate directly with vShield Manager makes for even more challenges.

There are a handful of security products besides anti-malware in the market based on the VMsafe APIs, which are expected to be phased out eventually. VMware is reluctant to pin down an exact date, though some vendors anticipate end of next year.

That’s rather disturbing news for those companies who have invested in the roadmap and certification that VMware has put forth, isn’t it?  I can name at least one such company for whom this is a concern. 🙁

Because VMware has so far reserved the role of software-based firewalls and data-loss prevention under vShield to its own products, that has also contributed to unease among security vendors. But Sequiera says VMware is in discussions with Cisco on a firewall role in vShield.   And there could be many other changes that could perk vendor interest. VMware insists its vShield APIs are open but in the early days of vShield has taken the approach of working very closely with a few selected vendors.

Firstly, that’s not entirely accurate regarding firewall options. Cisco and Juniper have both had VMware-specific “firewalls” on the market for some time, albeit using different delivery vehicles.  Cisco uses the tightly co-engineered effort with the Nexus 1000v to provide access to their VSG offering and Juniper uses the VMsafe APIs for the vGW (nee’ Altor) firewall.  The issue is now one of VMware’s architecture for integration moving forward.

Cisco has announced their forthcoming vASA (virtual ASA) product, which will work with the existing Cisco VSG atop the Nexus 1000v, but this isn’t something that is “open” to the ecosystem as a whole, either.  To suggest that the existing APIs are “open” is inaccurate, and without an API-based capability available to anyone who has the wherewithal to participate, we’ll see more native “integration” in private deals, the likes of which we’re already witnessing with the inclusion of RSA’s DLP functionality in vShield/vSphere 5.

Not being able to replace vShield Edge with an ecosystem partner’s “edge” solution is really a problem.

In general, the potential for building a new generation of security products specifically designed for VMware’s virtualization software may be just beginning…

Well, it’s a pretty important step and I’d say that “beginning” still isn’t completely realized!

It’s important to note that these same vendors who have been patiently navigating VMware’s constant changes are also looking to emerging competitive platforms to hedge their bets. Many have already been burned by their experience thus far and see competitive platform offerings from vendors who do not compete with their own security solutions as much more attractive, regardless of how much market share they currently enjoy.  This includes community and open source initiatives.

Given their druthers, with a stable, open and well-rounded program, those in the security ecosystem would love to continue to produce top-notch solutions for their customers on what is today the dominant enterprise virtualization and cloud platform, but it’s getting more frustrating and difficult to do so.

It’s even worse at the service provider level where the architectural implications make the enterprise use cases described above look like cake.

It doesn’t have to be this way, however.

Jon finished up his piece by describing how the VMware/ecosystem partnership ought to work in a truly cooperative manner:

This seems like a worthwhile “win-win,” as that old tired business cliche goes. Heck, customers would win too as they already have non-VMware security tools in place. VMware will still sell loads of vShield product and the security industry becomes an active champion instead of a suspicious player in another idiotic industry concept, “coopitition.” The sooner that VMware and the security industry pass the peace pipe around, the better for everyone.

The only thing I disagree with is how this seems to paint the security industry as the obstructionist in this arms race.  It’s more than a peace pipe that’s needed.

Puff, puff, pass…it’s time for more than blowing smoke.

/Hoff


Quick Blip: Hoff In The Cube at VMworld 2011 – On VMware Security

September 1st, 2011 No comments

John Furrier and Dave Vellante from SiliconAngle were kind enough to have me on the Cube, live from VMworld 2011, on the topic of virtualization/cloud security, specifically VMware…:


Watch live video from SiliconANGLE.com on Justin.tv

Thanks for having me, guys.

/Hoff


Unsafe At Any Speed: The Darkside Of Automation

July 29th, 2011 5 comments

I’m a huge proponent of automation. Taking rote processes from the hands of humans & leveraging machines of all types to enable higher agility, lower cost and increased efficacy is a wonderful thing.

However, there’s a trade-off; as automation matures and feedback loops become more closed, with higher and higher clock rates yielding less time between executions, our ability to both detect and recover — let alone prevent — within a cascading failure domain is diminished.

Take three interesting, yet unrelated, examples:

  1. The premise of the W.O.P.R. in War Games — Joshua goes apeshit and almost starts WWIII by invoking a simulated game of global thermonuclear war
  2. The Airbus A380 failure – the luck of having 5 pilots on board, and their skill in overriding hundreds of cascading automation failures after an engine failure, prevented a crash that would have killed hundreds of people.*
  3. The AWS EBS outage — the cloud version of Girls Gone Wild; automated replication caught in a FOR…NEXT loop

These weren’t “maliciously initiated” issues, they were accidents.  But how about “events” like Stuxnet?  What about a former Gartner analyst having his home automation (CASA-SCADA) control system hax0r3d!? There’s another obvious one missing, but we’ll get to that in a minute (hint: Flash Crash)

How do we engineer enough failsafe logic up and down the stack that can function at the same scale as the decision and controller logic does?   How do we integrate/expose enough telemetry that can be produced and consumed fast enough to actually allow actionable results in a timeframe that allows for graceful failure and recovery (nee survivability)?
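There’s no single answer, but one piece of it at the controller layer is classic failsafe plumbing: rate limits, dead-man switches and circuit breakers wrapped around every automated action.  Here’s a minimal, illustrative sketch of the circuit-breaker piece; the thresholds and the wrapped action are invented for the example:

```python
# Illustrative sketch of a circuit breaker around an automated action, so a
# fast control loop stops acting when its own actions start failing in bulk
# instead of cascading.  Thresholds are arbitrary for the example.
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, cool_down=30.0):
        self.max_failures = max_failures   # trip after this many consecutive failures
        self.cool_down = cool_down         # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, action, *args):
        # While "open", refuse to act until the cool-down has elapsed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cool_down:
                raise RuntimeError("breaker open: automation suspended")
            self.opened_at = None          # half-open: let one attempt through

        try:
            result = action(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()   # trip: stop the loop
            raise
        else:
            self.failures = 0              # success resets the count
            return result
```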

One last example that is pertinent: high frequency trading (HFT) —  highly automated, computer driven, algorithmic-based stock trading at speeds measured in millionths of a second.

Check out how this works:

[Check out James Urquhart’s great Wisdom Of the Clouds blog post: “What Cloud Computing Can Learn From Flash Crash“]

In the use-case of HFT, ruthlessly squeezing nanoseconds from the processing loops — removing as much latency as possible from every element of the stack — literally has implications in the millions of dollars.

Technology vendors are doing many very interesting and innovative things architecturally to achieve these goals — some of them quite audacious — and anything that gets in the way or adds latency is generally not considered “useful.”  Security is usually one of those things.

There are most definitely security technologies that allow for very low latency insertion of things like firewalls with low single-digit microsecond latency figures (small packet), but interestingly enough we’re also governed by the annoying laws of physics, and things like propagation delay, serialization delay and TCP/IP protocol overhead all add up.
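For a rough sense of scale, here’s a back-of-the-envelope sketch of that physics; the figures are illustrative, not measurements of any particular product:

```python
# Back-of-the-envelope numbers behind the "laws of physics" point: serialization
# and propagation delay eat into an HFT latency budget before any security
# device is even inserted.  Illustrative figures only.
frame_bits = 64 * 8                 # minimum-size Ethernet frame
link_bps = 10e9                     # 10 Gb/s link
serialization = frame_bits / link_bps           # ~51.2 nanoseconds

fiber_speed = 2e8                   # roughly 2/3 c in fiber, metres/second
propagation_per_km = 1000 / fiber_speed         # ~5 microseconds per kilometre

firewall_hop = 3e-6                 # a "low single-digit microsecond" firewall

print("serialization: %.1f ns" % (serialization * 1e9))              # ~51.2 ns
print("propagation:   %.1f us per km" % (propagation_per_km * 1e6))  # ~5.0 us
print("firewall hop:  %.1f us" % (firewall_hop * 1e6))               # 3.0 us
```

A few metres of fibre or one extra serialization hop already costs a measurable slice of the budget, which is why inserting anything in-line is such a hard sell here.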

Thus traditional approaches to “in-line” security — both detective and preventative — are not generally sustainable in these environments, and providing solutions that will scale as well as these HFT systems do requires some deep thought…no small order.

I think this is another good use for “big data” and security data analytics.  Consider very high-speed side-band systems that run alongside these HFT platforms and could potentially leverage the logic in the transactional trading systems themselves to get us closer to solving the challenges of these environments.  Integrate these signaling and telemetry planes with “fabric-enabled” security capabilities and we might get somewhere useful.
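As a thought experiment only, the shape of that side-band approach might look something like the sketch below: the hot path copies each event onto a queue and never blocks, while the analytics run entirely out of band.  The event fields and the threshold are invented for illustration:

```python
# Sketch of a side-band telemetry plane: the trading path only enqueues a copy
# of each event (non-blocking, so it never waits on security), and a separate
# consumer analyzes the stream and alerts out of band.
import queue
import threading

telemetry = queue.Queue(maxsize=100000)

def emit(event):
    """Called from the hot path: never block; drop telemetry if the queue is full."""
    try:
        telemetry.put_nowait(event)
    except queue.Full:
        pass                               # losing telemetry beats adding latency

def analyze():
    """Runs out of band and can be as slow and thorough as it likes."""
    orders_per_account = {}
    while True:
        event = telemetry.get()
        count = orders_per_account.get(event["account"], 0) + 1
        orders_per_account[event["account"]] = count
        if count > 10000:                  # crude rate anomaly, purely illustrative
            print("ALERT: runaway order rate from", event["account"])

threading.Thread(target=analyze, daemon=True).start()
# hot path: emit({"account": "ACCT-42", "symbol": "XYZ", "qty": 100})
```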

This tees up nicely my buddy James Arlen’s talk at Blackhat on the insecurity of high frequency trading systems: “Security when nano seconds count.”  You should plan on checking it out…I know I will.

/Hoff

*H/T to @reillyusa who also pointed me to “Questions Raised About Airbus Automated Control System” regarding the doomed Air France 447 flight.  Also, serendipitously, @etherealmind posted a link to a story titled “Volkswagen demonstrates ‘Temporary Auto Pilot'” — what could *possibly* go wrong? 😉


More On Security & Big Data…Where Data Analytics and Security Collide

July 22nd, 2011 No comments

My last blog post, “InfoSec Fail: The Problem With Big Data Is Little Data,” prattled on a bit about how large data warehouses (or data lakes, from “Big Data Requires A Big, New Architecture”) and the intersection of next-generation data centers, mobility and cloud computing were putting even more stress on “security”:

As Big Data and the databases/datastores it lives in interact with the proliferation of PaaS and SaaS offerings, we have an opportunity to explore better ways of dealing with these problems — this is the benefit of mass centralization of information.

Of course there is an equal and opposite reaction to the “data gravity” property: mobility…and the replication (in chunks) and re-use of the same information across multiple devices.

This is when Big Data becomes Small Data and the ability to protect it gets even harder.

With the enormous amounts of data available, mining it — regardless of its source — and turning it into actionable information (nee intelligence) is really a strategic necessity, especially in the world of “security.”

Traditionally we’ve had to use tools such as security information and event management (SIEM) tools or specialized visualization* suites to make sense of what ends up being telemetry that is often disconnected from the transactions and the value of the assets from which it emanates.

Even when we do start to be able to integrate and correlate event, configuration, vulnerability or logging data, it’s very IT-centric.  It’s very INFRASTRUCTURE-centric.  It doesn’t really include much value about the actual information in use/transit or the implication of how it’s being consumed or related to.

This is where using Big Data and collective pools of sourced “puddles” as part of a larger data “lake” and then mining it using toolsets such as Hadoop come into play.

We’re starting to see the commercialization of Hadoop outside of vertical use cases for financial services and healthcare and more broadly adopted for analytics across entire lines of business, industry and verticals.  Combine the availability of cheap storage with ever more powerful and cost-effective compute and network and you’ve got a goldmine ready to tap.
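To make “mining it” a bit more concrete, here’s a toy sketch of the kind of job such a platform might run against pooled security telemetry: a Hadoop Streaming mapper/reducer pair that counts failed SSH logins per source address.  The log format and field positions are assumptions for the example, not anyone’s actual schema:

```python
# Toy Hadoop Streaming job: count failed SSH logins per source IP across a
# large corpus of auth logs.  In practice the mapper and reducer ship as two
# scripts; they're combined here for brevity.  Field positions are assumed.
import sys

def mapper():
    # Emits "<source_ip>\t1" for every failed-login line on stdin.
    for line in sys.stdin:
        if "Failed password" in line:
            fields = line.split()
            src = fields[-4]               # assumes "... from <ip> port <n> ssh2"
            print("%s\t1" % src)

def reducer():
    # Hadoop sorts by key, so identical IPs arrive adjacent on stdin.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = key, 0
        total += int(value)
    if current is not None:
        print("%s\t%d" % (current, total))

if __name__ == "__main__":
    reducer() if "reduce" in sys.argv else mapper()
```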

One such solution you’ll hear more about is Zettaset, which commercializes and productizes Hadoop to enable the construction of enormously powerful data security warehouses and analytics.

Zettaset is a key component of a solution offering that is doing what I describe above for the CISO of a large company, who integrates enormous amounts of disparate and seemingly unrelated data into managed risk decisions that are fed to humans and automated processes alike.

These data are sourced from all across the business — including IT — and allow the teams and constituent interested parties from across the company to slice and dice petabytes of information which previously would have been siloed.  Powerful.

Look for more announcements about this solution around the Blackhat timeframe.  It’s cool stuff.

This is one example where Big Data and “security” are paired in the positive.

/Hoff

* Ken Oestreich (@Fountnhead) tweeted an interesting and pertinent comment regarding my points related to SIEM and visualization tools that summarized the general idea I was getting at in referencing these existing toolsets:

…which of course was underscored by the clearly-bored Christian Reilly, who has Citrix’s Cloud strategy already wrapped up tighter than a piñata at a Mexican wedding:



InfoSecFail: The Problem With Big Data Is Little Data

June 26th, 2011 1 comment

(on my iPhone while my girls shop…)

While virtualization and cloud security concerns continue to capture the imagination of pundits everywhere as they focus on how roles and technology morph yet again, a key perspective is often missing.

The emergence (or, more specifically, the renewed focus and prominence) of “big data” means that we are at yet another phase shift on the Hamster Security Sine Wave of Pain: the return of Information Centric Security.

(It never really went away; it’s just a long-term problem.)

Breach after breach featuring larger amounts of exfiltrated information shows we have huge issues with application security and even larger issues identifying, monitoring and protecting information (which I define as data with value) across its lifecycle.

This will bring about a resurgence of DLP and monitoring tools using a variety of deployment methodologies via virtualization and cloud, which were at first seen as a hindrance but will now be an incredible boon.

As Big Data and the databases/datastores it lives in interact with the proliferation of PaaS and SaaS offerings, we have an opportunity to explore better ways of dealing with these problems — this is the benefit of mass centralization of information.

Of course there is an equal and opposite reaction to the “data gravity” property: mobility…and the replication (in chunks) and re-use of the same information across multiple devices.

This is when Big Data becomes Small Data and the ability to protect it gets even harder.

Do you see new and innovative information protection capabilities emerging today? What form do they take?

Hoff


OpenFlow Is the New “Cloud…”

June 24th, 2011 4 comments

SecurityAutomata: A Reference For Security Automation…

June 24th, 2011 No comments

The SecurityAutomata Project is aimed at enabling consumers, service and technology solution providers to collectively share knowledge on how to automate and focus on the programmability of “security” across physical, virtual and cloud environments.

It’s a bit of an experiment, really. I want to enable better visibility into the state-of-the-art (as it were) of security automation by providing a neutral ground to discuss and demonstrate how security can be automated in physical, virtual and cloud computing environments.

There are many solutions available today but it’s often difficult to grasp how the approaches differ from one another and what sort of capabilities must exist to get them to work.

Please help us organize and contribute content to the SecurityAutomata Wiki here.

/Hoff

