Access Zone Migration Admin Guide

Data migration is not just a one-time operation.  It is a continuous operation that moves data between clusters, within access zones on a cluster, and from one access zone to another cluster’s access zone.  This feature moves the data along with its configuration data (shares, exports, quotas, NFS aliases) and updates the path and access zone on the target.



Typical Use Cases:

  1. Split an access zone into two for failover granularity reasons

  2. Move an application to its own access zone for security

  3. Split data and application load between clusters

  4. Move data + configuration data to new access zone in the DR cluster for testing

  5. Move data + configuration data to new access zone to achieve active active clusters

  6. Migrate data + configuration from several remote clusters to a central cluster, either into the same access zone (fan-in) or into separate access zones


What’s New

  1. New in 1.9: Access Zone Migration now allows the SyncIQ policy it creates to persist after the initial copy and config sync phase.

    1. This allows a phased cutover, with incremental data syncs before the final cutover of the migrated data to the new access zone or cluster.

    2. The policy will appear in the Jobs window to support incremental config sync changes as well as data sync.



Requirements

  1. Enterprise licensed Eyeglass appliance

Supported Clusters

  1. Isilon all models

  2. IsilonSD

  3. See the Release Notes for the feature support matrix and supported OneFS releases


Planning Migrations between Access zones


Various options exist to move data between access zones and clusters with this feature.  Because data and configuration data can be moved in various combinations, some planning is required.  When planning a data migration, review the source and destination paths and access zones you plan to move data from and to, based on the rules below.

  1. Source path - The source access zone is selected when the configuration is submitted, based on the path matching an access zone base path

  2. Target path - The target access zone (on the same or a different cluster) is auto-detected based on the path matching an access zone base path.


THE TARGET ACCESS ZONE MUST USE THE SAME AUTHENTICATION PROVIDERS AS THE SOURCE ACCESS ZONE, as Eyeglass will not be able to translate user and group SIDs between AD providers.
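The target-zone auto-detection described above amounts to matching the target path against each access zone's base path and taking the deepest match.  A minimal sketch of that idea (the zone names and base paths are made-up examples, not Eyeglass internals):

```python
# Sketch: pick the access zone whose base path is the deepest prefix of the
# target path.  Zone names and base paths are hypothetical examples.
def detect_access_zone(path, zones):
    """Return the zone whose base path is the longest prefix of path."""
    best = None
    for name, base in zones.items():
        b = base.rstrip("/")
        if path == b or path.startswith(b + "/"):
            if best is None or len(b) > len(zones[best].rstrip("/")):
                best = name
    return best

zones = {
    "System": "/ifs",
    "testzone2": "/ifs/testzone2",
}
print(detect_access_zone("/ifs/testzone2/div1/group1", zones))  # testzone2
print(detect_access_zone("/ifs/other/data", zones))             # System
```

This also shows why the source zone cannot be auto-detected on clusters that allow overlapping base paths: more than one zone can match the same source path.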

Data and Configuration Migration Workflows


Use Case #1 -  System to Other Access zone Same cluster



Use Case #2 - System to Other Access zone Remote cluster


Use Case #3 - Merge Access Zones Configuration




Use Case #4 - Overlapping Access Zones Configuration


4 Access Zones with overlapping paths:

In each access zone there are configuration objects such as shares, exports, NFS aliases and quotas.

The goal is to migrate the overlapping access zones to a NEW access zone path that does not overlap, on the same or a different cluster.

On Eyeglass, open the Jobs window, click Add New Job and select the Migration Job tab.  Select each source access zone path and zone name as a source for migration; you can check the keep-SyncIQ-policy option for incremental sync.  Note: the initial SyncIQ policy is created with the Copy option.  It must be changed to Sync and given a schedule to maintain sync between the source and destination paths.


Check if the migration job has finished successfully.

Check the configuration objects on target Access Zone:


Note:  The overlapping paths will cause the migrated quotas to be duplicated on the new access zone paths.
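The duplication noted above can be illustrated: when base paths overlap, a quota that falls under two zones is picked up by each zone's migration and lands twice under the new, non-overlapping targets.  A purely illustrative sketch (zone names, quota paths and the re-rooting rule are made-up examples):

```python
# Illustration of quota duplication with overlapping access zone base paths.
# The quota under /ifs/data/app is visible to both zoneA and zoneB, so
# migrating each zone to its own new base path copies the quota twice.
overlapping_zones = {"zoneA": "/ifs/data", "zoneB": "/ifs/data/app"}
quotas = ["/ifs/data/app/q1"]

migrated = []
for zone, base in overlapping_zones.items():
    for q in quotas:
        if q == base or q.startswith(base + "/"):
            # Re-root the quota path under the zone's new non-overlapping base.
            migrated.append(q.replace(base, f"/ifs/new/{zone}", 1))

print(migrated)  # ['/ifs/new/zoneA/app/q1', '/ifs/new/zoneB/q1']
```

The same quota ends up twice on the target, once per source zone, which is the duplication the note warns about.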


Pre-requisites to Use the Migration Feature

The Eyeglass appliance must have the initial state for Quota Jobs (type QUOTA) set to Enabled to run an Access Migration Job.  By default these are Disabled.  To enable them, follow these steps:

  1. ssh to the Eyeglass appliance and log in as the admin user.

  2. Enter the following CLI command:

igls adv initialstate set --quota=enabled

  3. Enter the following CLI command to check the settings:

igls adv initialstate show

  4. Check that you see the following in the list:
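If you prefer to script the check, the output of the show command can be scanned for the quota entry.  A small sketch; the sample output format below is an assumption and may differ by release, so adjust the match to what your appliance actually prints:

```python
# Hypothetical check that the QUOTA initial state is enabled.  The sample
# output text is an assumed format, not guaranteed to match your release.
sample_output = """
SHARES  enabled
EXPORTS enabled
QUOTA   enabled
"""

def quota_enabled(show_output):
    """Scan `igls adv initialstate show`-style output for the QUOTA state."""
    for line in show_output.splitlines():
        parts = line.split()
        if parts and parts[0].upper().startswith("QUOTA"):
            return parts[-1].lower() == "enabled"
    return False

print(quota_enabled(sample_output))  # True
```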




How To create Access Migration Jobs


IMPORTANT NOTE:  Since the data copy phase can take hours to complete, each step in a migration has a timeout, hard coded to 15000 minutes (approximately 10 days).  Any data migration that runs longer than this will fail.
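Because the run-policy step is hard coded to 15000 minutes, it is worth estimating up front whether the initial copy can finish inside that window.  A rough back-of-the-envelope helper; the dataset size and sustained throughput figures are assumed examples, substitute whatever your clusters actually achieve:

```python
# Rough feasibility check against the hard-coded 15000-minute step timeout.
TIMEOUT_MINUTES = 15000  # approximately 10 days

def copy_minutes(dataset_gib, throughput_mib_per_s):
    """Estimated minutes to copy dataset_gib GiB at a sustained MiB/s rate."""
    seconds = dataset_gib * 1024 / throughput_mib_per_s
    return seconds / 60

# Example: 50 TiB at an assumed sustained 200 MiB/s
est = copy_minutes(50 * 1024, 200)
print(round(est), "minutes; fits timeout:", est < TIMEOUT_MINUTES)
```

If the estimate approaches the limit, plan to seed the data with a separate SyncIQ run before creating the migration job.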

IMPORTANT NOTE: See above for access zone detection based on path entered.


  1. Open Jobs icon.

  2. Click Add New  Job.

  3. Select Migration Job tab.

  4. (Screenshot: Migration Job tab)

  5. Enter the source path to migrate on the source cluster

    1. Note: All configuration data (shares, exports, NFS aliases, quotas) must exist at the path or below to be included in the migration

  6. Select the source cluster in the drop-down (only clusters managed by Eyeglass are listed)

    1. Source Access Zone is a drop-down list of all detected access zones.  Select the zone where the configuration data exists.  NOTE: The zone is not auto-detected because some clusters allow overlapping access zone base paths, which means a source path can contain configuration from one or MORE access zones.

  7. Select the “Enable source write access” option when the source path is already protected by a SyncIQ policy.  (WARNING: if it is de-selected, the lock policy that blocks writes to the source folder during migration will fail to be created, because the path is already under a SyncIQ domain, and the migration job will fail.  Also note that when this option is de-selected, the default top-level ACLs must be re-applied post migration; see the section in this guide.)

  8. If the “Enable Source Write Access” option is unchecked (blocking access to the source path)

    1. De-selecting this option locks the source path and denies all I/O regardless of share or export access settings.  The migrated data will inherit the locking SyncIQ policy ACLs on the parent folder when the migration is done.

    2. The target path entered in the migration job will have modified permissions that must be restored BEFORE users can access the data in the new location.  This provides a second level of data locking before the new data is in production.

    3. See the detailed steps below to restore the ACL settings on the parent folder.

  9. Keep SyncIQ Policy (used for incremental data sync after the initial sync of data)

    1. Release 1.9 or later required: check this box to retain the SyncIQ policy after it runs.  Retaining the policy allows multiple runs from the OneFS UI, or a schedule can be set to keep the target path updated.

      1. The configuration data is only synced once, on the first job run, using Copy mode on the SyncIQ policy.

      2. (Screenshot: SyncIQ policy in Copy mode)

      3. Data can be synced incrementally by setting the SyncIQ policy to Sync mode and setting a schedule on the migration policy created by Eyeglass, or by running it manually.

        1. (Screenshot: SyncIQ policy schedule)

      4. The policy will appear in the Jobs window under the ZoneMigration section and can be used to incrementally sync configuration data that has changed.

      5. (Screenshot: ZoneMigration section of the Jobs window)

    2. The target access zone is auto-detected based on matching the path against access zone base paths (for local or remote cluster migrations).

    3. Note: This can be changed; the configuration path will be updated on shares, exports, quotas and aliases during the migration.

    4. Note: The path can be on the same cluster or a remote cluster (the SyncIQ policy will copy data to the target cluster, which must be IP reachable by the source cluster).

    5. Note: The path cannot be the target of an existing SyncIQ policy, as it would be in a read-only state that blocks migration.  It must be a writable location on the target cluster.

    6. Note: No path or data can exist on the target cluster.  The target path is checked; if it exists, the migration will not continue.  An empty target path is required.

  10. Select the destination cluster from the drop-down (must be a cluster managed by Eyeglass)

  11. Select the Preview option to verify which shares, exports, quotas and aliases were discovered for migration, and validate that this is expected.

  12. Click the Submit button to start the migration

  13. Monitor from the Running Jobs tab of the Jobs Icon.
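The target-path rules in the notes above (the path must fall under an access zone base path, must not already exist, and must not be a read-only SyncIQ target) can be summarized as a pre-flight check.  This is only a sketch of the documented validations, not product code; the paths and zone list are hypothetical:

```python
# Sketch of the target-path pre-flight checks described in the steps above.
def validate_target(path, zone_base_paths, existing_paths, synciq_target_paths):
    """Return a list of rule violations for a proposed migration target path."""
    errors = []
    if not any(path == b or path.startswith(b.rstrip("/") + "/")
               for b in zone_base_paths):
        errors.append("path does not fall under any access zone base path")
    if path in existing_paths:
        errors.append("target path already exists; an empty target is required")
    if path in synciq_target_paths:
        errors.append("path is a SyncIQ target and is read-only")
    return errors

errs = validate_target(
    "/ifs/testzone2/div1/group1",
    zone_base_paths=["/ifs", "/ifs/testzone2"],
    existing_paths=set(),
    synciq_target_paths=set(),
)
print("OK" if not errs else errs)  # OK
```

Running the same checks mentally before submitting the job avoids the most common causes of a failed migration.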


How To Re-apply Default SMB Share ACL Post Migration - only if Enable write access was disabled


Use this procedure ONLY if you de-selected the “Enable write access” option on the migration job.  This migration option applies ACLs on the target that block all access.  The ACLs are applied only at the top-level folder path.  Any ACLs that were applied to child paths of the parent migration path are retained post migration.

The modified top-level parent ACL blocks access to the data even if the share- or export-level permission allows write access or full control.  It is the combination of share/export permissions and ACLs that allows write access to the data.
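The point about combined permissions can be made concrete: effective access is the intersection of what the share or export grants and what the filesystem ACL grants, so a full-control share over a locked top-level ACL still yields no access.  This is a deliberately simplified model (it ignores deny-ACE ordering and inheritance), intended only to illustrate the sentence above:

```python
# Simplified model: effective rights are the intersection of share-level
# rights and filesystem ACL rights.  Ignores deny-ACE ordering/inheritance.
def effective_access(share_rights, acl_rights):
    return share_rights & acl_rights

share = {"read", "write", "full_control"}
locked_acl = set()               # migration lock: the ACL grants nothing
restored_acl = {"read", "write"}

print(effective_access(share, locked_acl))    # empty: no access despite share
print(effective_access(share, restored_acl))  # read and write restored
```

This is why the default ACLs must be re-applied to the parent folder before users regain access, regardless of the share security settings.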


How to Re-Apply SMB Share Default ACLs Post Migration

  1. Add back the original Microsoft default ACLs

    1. M8000A-1# chmod  +a# 0  group Administrators allow dir_gen_all,object_inherit,container_inherit group1

    2. M8000A-1# chmod  +a# 1 creator_owner allow dir_gen_all,object_inherit,container_inherit,inherit_only group1

    3. M8000A-1# chmod  +a# 2 everyone allow dir_gen_read,dir_gen_execute group1

    4. M8000A-1# chmod  +a# 3 group Users allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit group1

    5. M8000A-1# chmod  +a# 4 group Users allow std_synchronize,add_file,add_subdir,container_inherit group1


  1. Check the ACLs after the changes to the migrated parent folder

    1. M8000A-1# ls -lze

    2. total 2

    3. drwxrwxr-x +  4 root  wheel  99 Nov 17 20:42 group1

    4. OWNER: user:root

    5. GROUP: group:wheel

    6. 0: group:Administrators allow dir_gen_all,object_inherit,container_inherit

    7. 1: creator_owner allow dir_gen_all,object_inherit,container_inherit,inherit_only

    8. 2: everyone allow dir_gen_read,dir_gen_execute

    9. 3: group:Users allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit

    10. 4: group:Users allow std_synchronize,add_file,add_subdir,container_inherit

    11. 5: user:root allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child

    12. 6: group:wheel allow dir_gen_read,dir_gen_write,dir_gen_execute,delete_child

    13. 7: everyone allow dir_gen_read,dir_gen_execute

  2. Delete the extra ACLs

    1. M8000A-1# chmod  -a# 7 group1

    2. M8000A-1# chmod  -a# 6 group1

    3. M8000A-1# chmod  -a# 5 group1

  3. Check the ACL deletion

    1. M8000A-1# ls -lze

    2. total 2

    3. drwxrwxr-x +  4 root  wheel  99 Nov 17 20:42 group1

    4. OWNER: user:root

    5. GROUP: group:wheel

    6. 0: group:Administrators allow dir_gen_all,object_inherit,container_inherit

    7. 1: creator_owner allow dir_gen_all,object_inherit,container_inherit,inherit_only

    8. 2: everyone allow dir_gen_read,dir_gen_execute

    9. 3: group:Users allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit

    10. 4: group:Users allow std_synchronize,add_file,add_subdir,container_inherit

  4. Connect to the SmartConnect name to test mount and write access to the data

    1. This step verifies the ACLs and that a mount via the SmartConnect name (SPN) succeeds.

    2. Using an AD account that has permissions to the share, mount the FQDN of the SmartConnect name at the new location of the data.

    3. Test write access to the share.

    4. If successful, the ACLs were applied correctly.

    5. Done.



Review the default ACLs applied to shares by OneFS

  1. Review the default ACLs on shares created with the default Microsoft ACLs

  2. Create the share (directory does not exist)

  3. Folder permissions after the folder is automatically created

  4. Share security settings:







Planning Timeouts for Migration jobs


These timers can help debug timeouts that may occur for long-running jobs.  The default timers below should be sufficient for most migrations.

  • Run policy: 15000 minutes (10 days)

  • Wait for migration of config to Complete: 50 sec

  • Wait for locking policy to Complete: 50 sec  

  • Wait for opened files to be closed: 300 sec (this delays the start of the job; the job will fail if the force flag is not enabled)

  • Cleanup migration policy: 75 sec (only if keep policy unchecked)

  • Cleanup locking policy : 75 sec  (only if allow source access is unchecked)


Interop Issues

  • Migrations from 7.1.1.x releases, which are not access zone aware for NFS exports, will place all exports into the System access zone on any target cluster selected.  The exports can be migrated again into another access zone once the migration completes.

Known Limitations

  • Cannot migrate igls-dfs shares.  In this case a writable copy of the data with non-prefixed shares must be migrated.




End to End Data Migration Steps to Move Data/Config and Users to new Access Zone




Steps Outline

Details of Test Setup


Does not delete original shares/exports

Use the same smartconnect zone to access data post migration


Same cluster access zone migration

testzone1 -> testzone2


Partial data covered by syncIQ policy div1

Share g1: /ifs/testzone1/div1/group1

Share g1_1: /ifs/testzone1/div1/group1/group1_1

Share g2:/ifs/testzone1/div1/group2

Share g3:/ifs/testzone1/div1/group3



Share g1: /ifs/testzone1/div1/group1 to /ifs/testzone2/div1/group1

Share g1_1: /ifs/testzone1/div1/group1/group1_1



Set up new access zone with AD provider

testzone2 setup



Prevent users from writing to the data being migrated


Method to be determined by customer



Do not disable Eyeglass job



Eyeglass job should be on all the time

E.g. not all data covered by a policy are migrated





Access zone migration from testzone1  to testzone2

Note: ‘Enable Source Write Access’ must be selected if the source path is covered by a SyncIQ policy in order to proceed


Data replicated

Shares/exports/quotas created for the new access zone on the source



Isilon: associate new Access Zone to IP pool

For both the source and target clusters, set up an IP pool in the new access zone


Isilon: Set up a schedule to incrementally sync data using the ZoneMigration policy.

This policy is created by Eyeglass.


Eyeglass: Run an incremental config sync from the Jobs window using the ZoneMigration policy name created by the migration job.

Note: the directory paths referenced by the synced configuration must already exist on the target, so run the data sync first.


Steps 6A and 6B should be repeated up until the final day of the cutover to the new access zone.  This must be done before moving the SmartConnect zone name from the old IP pool to the new IP pool created in the steps below.



Schedule Maintenance Window:

  1. Repeat step 6C.

  2. Rename or delete the SmartConnect zone name on the source access zone IP pool.

  3. Create the SmartConnect zone name on the new access zone’s IP pool.

  4. Verify DNS resolves correctly to the new IP pool using nslookup on the FQDN of the SmartConnect name.

  5. Verify from OneFS that the shares and exports in the target access zone exist as expected.
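The DNS check in the maintenance-window steps can be scripted: resolve the SmartConnect FQDN and confirm every returned address falls inside the new pool's range.  The pool range and resolved addresses below are placeholder values; in practice the list would come from a resolver call such as socket.getaddrinfo:

```python
# Sketch of the cutover DNS verification: every address the SmartConnect
# FQDN resolves to should belong to the new access zone's IP pool range.
# The pool range and resolved addresses here are placeholder examples.
import ipaddress

def addresses_in_pool(resolved, pool_low, pool_high):
    """True if every resolved address lies within [pool_low, pool_high]."""
    low = ipaddress.ip_address(pool_low)
    high = ipaddress.ip_address(pool_high)
    return all(low <= ipaddress.ip_address(a) <= high for a in resolved)

# e.g. resolved = [a[4][0] for a in socket.getaddrinfo("nas.example.com", None)]
print(addresses_in_pool(["10.0.2.11", "10.0.2.12"], "10.0.2.10", "10.0.2.20"))
```

An address outside the range usually means DNS is still delegating to the old pool, so repeat the check after the delegation change propagates.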




User: able to access the new share using the same SmartConnect zone name.

Note: there is a known issue where the client connection is not updated after the SmartConnect name is associated with the new access zone.  This appears to be a Windows client problem; the correct shares in the zone are shown only after a reboot.




Old shares: not accessible using the same SmartConnect zone name, since the old SmartConnect zone name has been renamed.

Users left behind who were still using that SmartConnect zone name have no access and now need a new one; this requires a new pool and a new SPN for the new SmartConnect zone name.





Reprotect data to DR cluster






Create new synciq policy


Create new policy with new path

Run new policy




Create new corresponding access zone on target cluster




Enable Config Replication Job

Run new job - shares/exports are replicated to target cluster


Note: the original job is still running, protecting the shares left behind.

If ‘Delete Source Configuration’ is selected, the original shares are deleted after the access zone migration, and the old shares on the target cluster are deleted after config replication.



Readiness job

Re-run readiness



mirror policy

Re-run failover


Successful Migration Job View - Example