Converting a fabric cluster while preserving configuration
There is no specific command that can convert a fabric cluster to a logical chassis cluster while preserving current configurations, but you can accomplish this task as follows:
- Be sure that all nodes are running the same firmware version. Logical chassis cluster functionality is supported in Network OS 4.0 and later.
- Make sure all the nodes that you intend to transition from a fabric cluster to a logical chassis cluster are online. Run either the show vcs or show vcs detail command to check the status of the nodes.
- Determine which node contains the global configuration you want to use on the logical chassis cluster, and make a backup of this configuration by running the copy global-running-config command and saving the configuration to a file on a remote FTP, SCP, SFTP, or USB location:
copy global-running-config location_config_filename
NOTE: If you need to combine the global configurations of two or more nodes, manually combine the required files into a single file, which will be replayed after the transition to logical chassis cluster mode. Refer to the section Taking precautions for mode transitions.
- Back up the local configurations of all individual nodes in the cluster by running the copy local-running-config command on each node and saving the configuration to a file on a remote FTP, SCP, SFTP, or USB location:
copy local-running-config location_config_filename
- Perform the mode transition from fabric cluster to logical chassis cluster by running the vcs logical-chassis enable rbridge-id all default-config command, as shown in Converting a fabric cluster to a logical chassis cluster.
The nodes automatically reboot in logical chassis cluster mode. Allow for some downtime during the mode transition.
- Run either the show vcs or the show vcs detail command to check that all nodes are online and are now in logical chassis cluster mode (listed as "Distributed" in the command output).
The show vcs command output can also be used to determine which node has been assigned as the cluster principal node.
switch# show vcs
R-Bridge  WWN                       Switch-MAC         Status
___________________________________________________________________
1  >      11:22:33:44:55:66:77:81   AA:BB:CC:DD:EE:F1  Online
2         11:22:33:44:55:66:77:82   AA:BB:CC:DD:EE:F2  Online
3         11:22:33:44:55:66:77:83*  AA:BB:CC:DD:EE:F3  Online
The RBridge ID with the arrow pointing to the WWN is the cluster principal. In this example, RBridge ID 1 is the principal.
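If you are scripting against this output, the principal can be picked out by looking for the row whose second field is the ">" marker. A minimal Python sketch, assuming the column layout shown above (the function name is hypothetical):

```python
def find_principal_rbridge(show_vcs_output):
    """Return the RBridge ID of the row marked with '>' (the cluster
    principal) in 'show vcs' output, or None if no marker is found."""
    for line in show_vcs_output.splitlines():
        fields = line.split()
        # A principal row looks like: "1  >  <WWN>  <MAC>  Online".
        if len(fields) >= 2 and fields[1] == ">" and fields[0].isdigit():
            return int(fields[0])
    return None
```

Parsing by whitespace-separated fields keeps the sketch independent of exact column widths, but it still assumes the layout of this Network OS release; verify against your own device output.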
- While logged on to the principal node in the logical chassis cluster, copy the saved global configuration file from the remote location to the principal node as follows:
copy location_config_filename running-config
- Verify that the global configuration is available by running the show global-running-config command.
- While logged on to the principal node in the logical chassis cluster, copy each saved local configuration file from the remote location to the principal node as follows:
copy location_config_filename running-config
NOTE: You must run this command for each local configuration file you saved (one for each node).
The configuration file is automatically distributed to all nodes in the logical chassis cluster. After the previous steps are performed, each node contains the same global configuration, as well as the local configuration information of all the other nodes.
- Verify that the local configurations are available by running the show local-running-config command.
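Because the same copy command must be repeated once per saved local configuration file, it can help to generate the command sequence up front. A hedged Python sketch (the function name and file names are illustrative, not part of Network OS):

```python
def replay_commands(local_config_files):
    """Build the list of CLI commands that replay each saved local
    configuration file on the principal node, one command per node."""
    return [f"copy {filename} running-config"
            for filename in local_config_files]

# Example: the files saved earlier with "copy local-running-config".
for cmd in replay_commands(["node1-local.cfg", "node2-local.cfg"]):
    print(cmd)
```

You would then paste or send each generated command to the principal node's CLI session.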
- Log in to the principal node of the cluster and make any desired global and local configuration changes. These changes are then distributed automatically to all nodes in the logical chassis cluster.
NOTE: You can enter the RBridge ID configuration mode for any RBridge in the cluster from the cluster principal node. You can change the principal node by using the logical-chassis principal-priority and logical-chassis principal-switchover commands. For more information about cluster principal nodes, refer to Selecting a principal node for the cluster.