EMC has many ways to replicate SAN data. Here is a short list of some of the technologies and where they're used:
- MirrorView:
- MirrorView is used on Clariion and Celerra arrays and comes in two flavors: /A (asynchronous) and /S (synchronous).
- Celerra Replicator:
- Used to replicate data between Celerra unified storage platforms.
- RecoverPoint:
- Appliance-based Continuous Data Protection solution that also provides replication between EMC arrays. This is commonly referred to as a "TiVo for the datacenter" and is the latest and greatest technology when it comes to data protection and replication.
Each of these replication technologies replicates LUNs between arrays, but they have different overhead requirements that you should consider.
MirrorView
- MirrorView requires 10 to 20% overhead to operate. So if you have 10TB of data to replicate, you are going to need an additional 1 to 2TB of storage space on your production array.
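To make the arithmetic concrete, here is a minimal Python sketch of that sizing math. The function name is my own for illustration, not part of any EMC tool:

```python
def extra_capacity_tb(protected_tb, overhead_pct):
    """Estimate the additional storage (in TB) needed for a given
    protected capacity and replication overhead percentage."""
    return protected_tb * overhead_pct / 100.0

# MirrorView: roughly 10-20% overhead on the production array
low = extra_capacity_tb(10, 10)   # 1.0 TB
high = extra_capacity_tb(10, 20)  # 2.0 TB
print(f"MirrorView on 10TB: {low:.0f} to {high:.0f} TB extra")
```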
Celerra Replicator
- Celerra Replicator can require up to 250% overhead. This number varies depending on what you are replicating and how you plan to do it. That means 10TB of replicated data could require an additional 25TB of disk space spread across the production and DR arrays.
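Plugging the worst case into the same helper defined in the sketch above:

```python
# Celerra Replicator: up to 250% overhead, split across the
# production and DR arrays depending on the configuration
worst_case = extra_capacity_tb(10, 250)  # 25.0 TB
print(f"Celerra Replicator on 10TB: up to {worst_case:.0f} TB extra")
```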
RecoverPoint
- While RecoverPoint is certainly used as a replication technology, it provides much more than that. The ability to roll back to any point in time (similar to a TiVo) provides the ultimate granularity for DR. This is accomplished via a write splitter that is built into Clariion arrays. RecoverPoint is also available for non-EMC arrays.
- RecoverPoint can be deployed in three different configurations: CDP (local only, not replicated), CRR (remote) and CLR (local and remote).
- CRR replicates data to your DR site, where your "TiVo" capability resides. You essentially have an exact copy of the data you want to protect/replicate plus a "journal" which keeps track of all the writes and changes to that data. That is an exact copy of your protected data plus roughly 10 to 20% overhead for the journal, so 10TB of "protected" data would require around 11 to 12TB of additional storage on the DR array.
- CLR is simply a combination of local CDP protection and remote CRR protection together. This provides the ultimate in protection and granularity at both sites and requires additional storage at both sites for the CDP/CRR "copy" and the journal, as sketched after this list.
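Here is the same back-of-the-envelope math for RecoverPoint, again reusing the extra_capacity_tb helper from the first sketch. The 15% journal figure is just a midpoint of the 10 to 20% range above, and the assumption that CLR needs a similar copy-plus-journal footprint at each site is mine:

```python
def crr_storage_tb(protected_tb, journal_pct=15):
    """DR-side storage for CRR: a full copy of the protected data
    plus a journal of roughly 10-20% (15% used as a midpoint)."""
    return protected_tb + extra_capacity_tb(protected_tb, journal_pct)

def clr_storage_tb(protected_tb, journal_pct=15):
    """CLR = local CDP plus remote CRR, so assume a copy and a
    journal are needed at each site."""
    per_site = crr_storage_tb(protected_tb, journal_pct)
    return {"production": per_site, "dr": per_site}

print(f"CRR on 10TB: ~{crr_storage_tb(10):.1f} TB on the DR array")   # ~11.5 TB
print(f"CLR on 10TB: {clr_storage_tb(10)}")                           # per-site totals
```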
This is obviously a very simplified summary of replication and replication overhead. The amount of additional storage required for any replicated solution will depend on the amount, type and change rate of the data being replicated. There is also a lot to consider around bandwidth between sites and the type of replication, sync or async, that you need.
2 comments:
Thanks for posting this up on your blog!
To add to RecoverPoint splitting: it not only depends on Clariion-based splitting but also on host-based splitting (Solaris, HP-UX and AIX) as well as fabric splitting (Cisco SANTap, Brocade SAS).