An organisation about to merge with another or sell a subsidiary faces many problems and challenges. Divesting part of your company is rather like carrying out an elaborate surgical transplant – the correct parts of the existing entity have to be identified, isolated and then meticulously extracted, to ensure that nothing extraneous is inadvertently transferred from the source to the destination.
The business side of M&A can be gruelling – countless meetings between executives, lawyers and bankers, as well as a mountain of paperwork and red tape. But the division of data is becoming a near-impossible step in the M&A process, as rapidly expanding data volumes overwhelm shrinking IT teams.
Why is this? Because 80% of organisational data is unstructured (source: IDC), and unstructured data is a mystery to most organisations. They don’t know which data is used or unused, who is or isn’t using it, who it belongs to, what it contains, which data is sensitive, and who should or shouldn’t have access.
Even if some of these questions can be answered, moving the data without introducing service disruption, corrupting data or risking leakage is no easy task. Terabytes and petabytes of information take time to move around, so either technology is required to move “live” data safely, or the data has to be moved while no one is using it (and when is that?).
Permissions don’t transfer easily between domains or across platforms, so technology is needed to help with that, or permissions need to be recreated during the transition. A single terabyte of data typically holds 50,000 folders, of which around 2,500 have unique permissions; each uniquely permissioned folder is governed by 3–5 Active Directory groups, and each group has between 5 and 50 members. Manual re-creation might take a bit more time than you think.
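A quick back-of-the-envelope calculation, using only the figures quoted above (midpoints taken for the ranges), shows why manual re-creation doesn’t scale:

```python
# Estimate of the permission re-creation workload per terabyte,
# using the article's figures; midpoints are assumptions for illustration.
folders_total = 50_000        # folders in a typical terabyte
unique_acl_folders = 2_500    # of those, folders with unique permissions
groups_per_folder = 4         # midpoint of the 3-5 Active Directory groups
members_per_group = 27        # rough midpoint of 5-50 members per group

# Each uniquely permissioned folder needs its group entries rebuilt...
acl_entries = unique_acl_folders * groups_per_folder
# ...and every membership in those groups needs to be verified.
memberships = acl_entries * members_per_group

print(f"ACL entries to recreate per TB: {acl_entries:,}")
print(f"Group memberships to verify per TB: {memberships:,}")
```

Even with conservative midpoints, that is 10,000 ACL entries and over a quarter of a million group memberships to check, per terabyte.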
And permissions aren’t necessarily where they should be to begin with: in a recent survey on data migrations, only 21% of organisations reported that they regularly make sure folders and SharePoint sites are safe from global access groups such as Everyone and Domain Users.
There are, of course, other reasons to migrate data: the purchase of new storage devices, the retirement of legacy storage, the adoption of new platforms, the clean-up of stale data and the removal of specific content. In fact, according to the same survey, 95% of organisations move data around at least once per year, and 44% move data more than five times per year.
A successful data migration requires you to identify exactly what content is going to be moved, decide whether to move it all at once or gradually, when to move it, and what to do about permissions. You’ll also need to identify the data owners, determine who uses the data, and work out whether the migration will affect those users while the data is moved to your new network attached storage device, domain or SharePoint server. These are all vital tasks, and they take a lot of time to do manually. Could you do all of this automatically?
To do so, you would first need metadata to identify data that should or should not be moved – e.g. data that is stale; data that is created, accessed or accessible by certain groups or individuals; data that contains specific content; or any combination of those metadata attributes. Once these data sets are identified, automation is needed to move or archive them – whether their destination is a server in another domain or even on a completely different platform.
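The selection logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: the `File` record, the one-year staleness threshold and the action names are all assumptions made for the example.

```python
# Minimal sketch of metadata-driven selection for a migration.
# The File record, threshold and action labels are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # assumed staleness cut-off

@dataclass
class File:
    path: str
    last_accessed: datetime
    owner_group: str          # e.g. the business unit being divested
    contains_sensitive: bool  # set by content classification

def select_for_migration(files, groups_to_move, now=None):
    """Pick files owned by the divested groups and decide what to do with each."""
    now = now or datetime.now()
    selected = []
    for f in files:
        if f.owner_group not in groups_to_move:
            continue  # data staying with the parent company
        if f.contains_sensitive:
            action = "review"   # sensitive content gets a human check first
        elif (now - f.last_accessed) > STALE_AFTER:
            action = "archive"  # stale data goes to an archive tier
        else:
            action = "move"
        selected.append((f.path, action))
    return selected
```

In practice the metadata attributes (owner, access activity, classification) would come from a crawl of the file systems and directory services rather than being supplied by hand, but the combination logic is the same.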
The following boxes must be ticked in order to move data securely and intelligently:
- Ability to schedule one-time or ongoing migrations
- Option for incremental migrations for large data sets
- Automatically maintain, enhance and/or translate permissions for migrated data
- Ability to migrate data between servers in different domains
- Migration simulation and real time monitoring to avoid unwanted surprises
- Detailed reporting on migrated data, permissions and new groups
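The checklist above can be pictured as a single job definition. The field names here are illustrative, not taken from any particular product, but real migration tools expose similar knobs:

```python
# Illustrative migration-job configuration covering the checklist above.
# All field names and paths are hypothetical examples.
migration_job = {
    "source": r"\\olddomain\finance\shared",
    "destination": r"\\newdomain\finance\shared",
    "schedule": {"type": "recurring", "cron": "0 2 * * 6"},  # Saturdays, 02:00
    "incremental": True,          # copy only changes after the first full pass
    "cross_domain": True,         # source and destination are different domains
    "permission_translation": {   # map source-domain groups to the target domain
        "OLDDOMAIN\\Finance-RW": "NEWDOMAIN\\Finance-RW",
        "OLDDOMAIN\\Finance-RO": "NEWDOMAIN\\Finance-RO",
    },
    "simulate_first": True,       # dry run and monitoring before any data moves
    "report": {"migrated_data": True, "permissions": True, "new_groups": True},
}
```

Each key maps to one tick-box: scheduling, incremental copies, permission translation, cross-domain support, simulation and reporting.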
Does such technology exist? Yes. Enterprises are now able to configure one-time or ongoing migrations, defining the destination path, folder, permissions translation, and when the migration should take place. This technology enables the rapid, safe execution of complex data migrations.
Users can easily implement and enforce policies for data retention and location based on content, accessibility and activity. The same metadata used to facilitate the migration also helps identify and remediate exposure of sensitive data and excessive permissions, identify owners and stale data, and determine who has access – and who should and should not have it.
Intelligent, automated migration of large-scale data sets can reduce complexity and improve IT service by limiting user disruption and avoiding data breaches. Automatically identifying data sets for migration based on path, permissions, actual access and content classification simplifies the implementation and enforcement of compliance policies, while providing the flexibility to meet the business goals of any migration project.
Until recently, splitting large, complex data sets was about as precise as medieval surgery – the surgeon was as likely to take off his own thumb, and a few errant limbs from the hapless patient, during the process. In some cases the procedure was even less accurate – a Solomon approach in which a ‘virtual’ baby is divided down the middle. Now the procedure is as accurate and scientific as keyhole surgery, and for the first time data migration can be automatic, accurate and swift.