Wednesday, August 25, 2010

EMC Unisphere Quick Notes

EMC is announcing a new management interface for its mid-tier storage line that sounds and looks very promising.

Unisphere integrates fully with vCenter and consolidates management of CLARiiON and Celerra; it is intended to replace Navisphere and Celerra Manager. The framework also manages RecoverPoint/SE, will include PowerPath, and is linked to Replication Manager. It also integrates Analyzer and Quality of Service Manager, and it ties into support resources, including PowerLink, without having to log into them separately.

Unisphere is Adobe Flex-based, so it is fairly lightweight (not Java), and it is web-enabled. There is a client for Windows and a server component, like off-array Navisphere, which is helpful with geographically dispersed infrastructure. You don't have to upgrade to FLARE 30 if you use the server client, which is the other way to start using Unisphere.

I will elaborate on this when I know more.

Here are some links to whitepapers, flash demos and videos.

http://www.emc.com/products/technology/unified-storage.htm?pid=Cloudcampaign-unified_campaign-070610

http://www.emc.com/collateral/software/white-papers/h8017-unisphere-element-manager.pdf

http://www.youtube.com/watch?v=mACVdai9YwE (Unisphere demo)

http://www.youtube.com/watch?v=oKubyt6XBcI&NR=1 (vCenter plug in demo)

On another topic, quickly...

EMC is also building compression into the CLARiiON to reduce capacity consumption, in the 2:1 range, and it is intended for use on production data. Technically, this isn't deduplication; it is more like white-space reduction. FCoE support will be included in Unified Storage, to be announced this fall.

Friday, August 20, 2010

Defining the Cloud: Litmus Test

Here is a collection of thoughts I pulled together from recent conversations with colleagues and customers, about what constitutes a "cloud solution" or whether an application is a "cloud application". I'd love to have some comments on this.

Cloud Storage
I have been working on use cases for one aspect of the cloud, storage (besides backup). Four seem to make sense; a rough sketch of how their policies might be expressed follows the list. Can anyone think of anything else?

1. Archival
o Geographic dispersal
o Data stored on low cost media
o Apply WORM policy for compliance
2. Web Content
o Content stored on low cost media
o Use policies to position data
o Users access data directly
3. Disaster Recovery
o Replicate Data into the Cloud
o Apply policies to geographically disperse data
o Production data can be recalled when disaster strikes
4. Dynamic Data Allocation
o Directed at dynamic content such as video and delayed live feed
o Data replicated in geo-specific clouds (e.g., NA, SA, EU, Africa)
o Rapid rollout and tear down required
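
To make the "policy" idea above a bit more concrete, here is a minimal sketch (in Python) of what placement and retention policies for the first three use cases might look like. This is purely illustrative; the field names, regions, and retention values are my own assumptions, not an API or feature of any particular storage product or cloud provider.

```python
# Purely illustrative sketch -- field names and values are assumptions,
# not taken from any vendor's product or cloud API.

POLICIES = {
    "archival": {
        "regions": ["us-east", "eu-west"],    # geographic dispersal
        "media_tier": "low-cost",             # data stored on low-cost media
        "worm_retention_days": 2555,          # ~7 years, WORM for compliance
    },
    "web-content": {
        "regions": ["us-east", "eu-west", "apac"],
        "media_tier": "low-cost",
        "direct_user_access": True,           # users access data directly
    },
    "disaster-recovery": {
        "regions": ["us-west"],               # replica kept away from production
        "media_tier": "standard",
        "recall_on_disaster": True,           # production data can be recalled
    },
}

def placement_for(use_case):
    """Return the (hypothetical) placement/retention rules for a use case."""
    return POLICIES[use_case]

if __name__ == "__main__":
    print(placement_for("archival"))
```

The point is only that each use case reduces to a small set of declarative rules; the real work is in the cloud infrastructure that carries those rules out.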


Some other thoughts from a recent presentation I made:
1. Remember principles of data management
o Performance
o Capacity
o Workflow
2. Cloud Storage works when
o Performance is not an issue
o Capacity is needed
o It fits the workflow
3. Cheap storage is easy; we can all do this, but…
o Geographically replicated, efficiently managed, AND cheap?
4. A useful archive or deep storage tier
o Data mining & re-analysis done in-situ with local cloud server resources if needed
5. “Downloader pays” good model to offset costs

Is the Cloud "right" for you?
As we (data storage architects) attempt to design and implement solutions, we are trying to solve a customer’s business problems while setting them up with a foundation for “smart growth” in technology. We should be thinking about these questions:
1. Will the customer benefit from a self-provisioning architecture that allows business users to set up their own servers/applications/storage/networking, and does that make sense from a business perspective?
2. Will the customer’s business benefit from IT that grows and shrinks in performance and capacity as needed?
3. Will the business benefit from “pay for use” by the end user, as a way to track investment and IT resource utilization?
4. Does the customer need to allow access to its business from anywhere, at any time?
5. Should the customer's IT infrastructure be rebuilt so that it consists of "pools" of virtualized computing, networking and storage, and does that make sense from a business perspective?

The answers to these questions are not always “Yes”. Most companies do not need to build a private cloud now, or in the immediate future, but they need to understand what these changes could mean to them, both as IT professionals and as business enablers.

By Who and When will the Cloud be Adopted?
As others have pointed out, adoption of cloud-like computing will likely happen first within IT and at the consumer and small-business level, although latency issues will remain for business computing in the public cloud until the technology finishes maturing and content can be made ubiquitous (cheaply pushed out to the edge where it is needed). IT staff at larger companies will want cloud-like computing in the datacenter to ease power and space crunches and to become more efficient with a self-provisioning, elastic, network-based infrastructure, but as EMC has noted, it is harder to push enterprise business applications into a cloud. That will come with time, as power, cooling and staffing constraints force efficiency on small businesses and datacenters alike. It will be enabled by the successes we can make happen now, and by the "hump" of the bell curve moving closer to the elastic infrastructure and virtualized computing environment we now call the Cloud.

The Litmus Test
Ask yourself whether a technology meets most or all of these requirements; if not, it is not a cloud technology. (A quick sketch of this checklist as code follows the list.)
• Elastic?
• Self provisioning?
• Pay as you go?
• Pooled resources?
• Ubiquitous network access?
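
Read as a scoring exercise, the litmus test might look something like the toy sketch below. The five criteria come straight from the list; the "most or all" threshold of four out of five is my own arbitrary choice, not a formal definition.

```python
# Toy sketch of the litmus test above; the five criteria come from the list,
# the threshold for "most or all" is an assumption on my part.

CRITERIA = [
    "elastic",
    "self_provisioning",
    "pay_as_you_go",
    "pooled_resources",
    "ubiquitous_network_access",
]

def is_cloud_technology(answers, must_meet=4):
    """Return True if the technology meets most or all of the criteria."""
    met = sum(1 for c in CRITERIA if answers.get(c, False))
    return met >= must_meet

if __name__ == "__main__":
    # A hosted app with a web front end but fixed, dedicated hardware:
    hosted_app = {"ubiquitous_network_access": True, "pay_as_you_go": True}
    print(is_cloud_technology(hosted_app))  # False -- it fails the litmus test
```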

Example: SharePoint
SharePoint would be a great application to have in the cloud, but the resource pooling and provisioning would be infrastructure-based, and that is where the cloud technology surfaces.
A service provider could allow you to request what is needed to deploy or scale SharePoint, but what is the technology behind the portal? BMC? Surgient? Something else? What is the storage behind it? How will the content and computing be geographically dispersed so that latency won’t be an issue?

I think any web-based interface to an application can be considered "enabling", since it meets the "ubiquitous access" criterion. What will it be used for? How will it enable self-provisioning? Can what is being done through the interface scale?

Ask yourself (or your customer) this:
It all comes back to the same thing… what are you moving (and why) from the old paradigm of datacenter computing to the new paradigm, regardless of whether it is private, public or federated?