Eyeglass Clustered Agent vAPP Install Guide

Eyeglass Clustered Agent vAPP Install and Upgrade Guide






Release:  1.9.5


Abstract:

This guide provides a step-by-step procedure for installing the Superna Eyeglass Clustered Agent vAPP used by Ransomware Defender and Easy Auditor.

August, 2017


Table of Contents





What's New

  1. Syslog forwarding of ECA logs to Eyeglass

    1. Uses a FluentD container for local logging and forwarding

  2. Dual CEE instances per VM for higher-rate audit processing.  See the Performance CEE Configuration section for how to enable this.

  3. Cluster startup now checks the HDFS configuration before starting and provides user feedback on validations

Definitions

  1. ECA - Eyeglass Clustered Agent - the entire stack that runs in separate VMs outside of Eyeglass and processes CEE data

  2. CEE - Common Event Enabler - the EMC-specific event protocol (XML based)

Deployment and Topology Overview

Deployment Diagram

This diagram shows a three node ECA cluster



Topologies for Data Flow Between the ECA Cluster and the Isilon Cluster

The diagrams below show the data flow between the ECA cluster and the Isilon cluster for two possible ECA deployment topologies.  Both CEE and HDFS are on the same network.  The ECA cluster processes CEE events and stores the results on the Isilon cluster over HDFS.  The ECA cluster also needs an IP connection, via either the Management Network or the Data Network, to the Eyeglass appliance, which is typically deployed at the DR location over a wide area network.  






Ransomware basic install images form (4).png


ECA Cluster Deployment Topologies with Distributed Isilon Clusters


Review the diagram below for the choice between a centralized and a distributed ECA cluster deployment.

Considerations:

  1. A centralized ECA deployment is easier to manage and monitor.  For a DR scenario, the ECA cluster would need to be deployed at the DR location; see the failover steps in the admin guide.

  2. CEE bandwidth is low; the HTTP stream sent from remote sites to the ECA tolerates latency well.

  3. Higher audit event rates may perform better if the ECA cluster is located closer (lower latency) to the Isilon cluster.  Topology #1

  4. Best Practice:  Centralize the ECA cluster at one site and send CEE data over the WAN link to the central site.  Topology #2


Eyeglass Clustered Agent vAPP Install Guide topologies.png



Firewall Port Requirements

Blue lines = Service Broker communication heartbeat, port 23457

Orange lines = Isilon REST API over TLS port 8080 and SSH

Green lines = CEE messages from the cluster, port 12228

Purple lines = HDFS ports to store audit data and security events

Pink lines = HBase query ports from Eyeglass to the ECA cluster

Red lines = support logging from the ECA cluster to Eyeglass



IP Connection and Pool Requirements for Analytics database


Ransomware basic install images form.png


Sizing and Performance Considerations

ESX Compute

CEE processing is a real-time, processing-intensive task. Audit workload increases with file IO, and the number of users is a good metric for estimating file IO workload. The table below is based on an assumption of 1.25 events per second per user, with a peak of 1.5 events per second, and can be used as a guideline to estimate how many events per second your environment will produce.  This will help you determine the sizing of the VMs and their placement on ESX hardware.

Undersized hardware could result in a backlog of events to process.  Consult the CPU sizing section for the ECA cluster in the admin guide to change the default CPU resource limits (guide here).



Number of Users per Cluster   ECA VM per Physical Host Recommendation   Estimated Events Guideline
---------------------------------------------------------------------------------------------------
1 to 5,000                    1 Host                                    5,000 * 1.25 = 6,250 events per second
5,000 - 10,000                3 Hosts                                   10,000 * 1.25 = 12,500 events per second
> 10,000                      3 Hosts                                   Number of users * 1.25 events per second


ECA Cluster Node Network Requirements

Each ECA node processes audit events and writes data to the analytics database using HDFS on the same network interface.  Therefore the combined TX and RX constitutes the peak bandwidth requirement per node.  The table below is an example calculation of the minimum bandwidth requirement per ECA VM.


See the ECA event sample section in the installation steps to capture the total events over 5 minutes, and use that sample with the estimate below to size the bandwidth needed to write ECA events to the Analytics database on Isilon.

Peak Bandwidth Requirement

Guideline:  For each 1,000 events, allocate 40 Mbps per ECA node.

Example:    2,000 events total in 5 minutes = 2000/1000 * 40 Mbps = 80 Mbps per ECA node
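To turn your own event sample into a per-node bandwidth figure, the guideline above can be applied directly.  Below is a minimal sketch, assuming the count comes from the 5-minute audit sample described in the installation steps; the variable name and sample value are illustrative only:

EVENTS_IN_SAMPLE=2000   # replace with your 5-minute audit event count
awk -v events="$EVENTS_IN_SAMPLE" 'BEGIN { printf "Estimated peak bandwidth per ECA node: %.0f Mbps\n", events / 1000 * 40 }'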


Pre-requisites

Eyeglass VM

  1. Upgrade the Eyeglass VM memory to 16 GB from the default 8 GB (the CLI portion is summarized in the sketch after these steps)

    1. Login to the Eyeglass VM as admin using ssh

    2. sudo -s

    3. Type "shutdown"

    4. Login to vCenter and wait until the VM shows "powered off"

    5. Edit the VM settings and increase memory to 16 GB

    6. Start the VM

    7. Verify login after waiting 1-2 minutes for boot time
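For reference, the CLI portion of the steps above as a short sketch (the IP address is a placeholder; the memory change itself is made in the vCenter VM settings):

ssh admin@<eyeglass_ip>    # step 1: login to the Eyeglass VM as admin
sudo -s                    # step 2: switch to root
shutdown                   # step 3: shut down the appliance, then continue in vCenter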


ESX Host Hardware Recommendation

  1. A single-host configuration should be, at minimum, a dual-socket host with 8 cores per socket and 64 GB of RAM

  2. The ECA OVA cluster requires:

    1. 18 GB of RAM

    2. 4 vCPU per VM, for a total of 12 vCPU for the OVA cluster

    3. 50 GB per VM, for a total of 150 GB for the OVA cluster

NOTE: The OVA sets a default resource limit of 12000 MHz for the OVA, shared by all ECA VM nodes in the cluster.  This limit can be increased if the CEE event load requires more CPU processing.  Consult support before making any changes in VMware.

Deployment Overview

The Eyeglass appliance must already be installed and configured. The ECA Cluster runs in a separate group of VMs from Eyeglass. The ECA Cluster is provisioned as a CEE handler on the Isilon cluster and receives all file change notifications.


Superna Eyeglass Isilon Edition Overview v68.png


Easy Auditor.png

Detection of security events is contained strictly within the ECA Cluster. Eyeglass is responsible for taking action against the cluster and notifying users.

  1. Isilon cluster  stores analytics database (this can be the same cluster that is monitored for audit events)

  2. Eyeglass appliance with Ransomware Defender agent licenses or Easy Auditor Agent Licenses

  3. Isilon cluster with HDFS license to store the Analytics database (shared database between Ransomware Defender and Easy Auditor)

  4. Overview of steps to install and configure:

    1. Configure Access Zone for Analytics database using an Access Zone with HDFS enabled

    2. Configure SmartConnect on the Access Zone

    3. Create an Eyeglass API token for the ECA to authenticate to Eyeglass

    4. Install ECA cluster

    5. Configure ECA cluster master config

    6. Push config to all nodes from master with ECA cli

    7. Start cluster

    8. Verify cluster is up and database is created

    9. Verify Eyeglass Service heartbeat and ECA cluster nodes have registered with Eyeglass

Preparation of Analytics Database Cluster

Prepare the Isilon Cluster for HDFS.

  1. Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.

  2. Create the "eyeglass" Access Zone with path "/ifs/data/igls" for the HDFS connections from the Hadoop compute clients (the ECA nodes), and under Available Authentication Providers select only the Local System authentication provider (a hedged CLI example follows the notes below).  


NOTE: Ensure that Local System provider is at the top of the list. Additional AD providers are optional and not required.

NOTE: In OneFS 8.0.1 the Local System provider must be added using the command line.  After adding, the GUI can be used to move the Local System provider to the top of the list.

isi zone zones modify eyeglass --add-auth-providers=local:system
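If you prefer to create the Access Zone from the CLI instead of the GUI, a hedged example of the create command is shown below (the positional syntax is for OneFS 8.x; flag names differ between OneFS releases, so verify against the isi zone zones create help for your version), followed by the add-auth-providers command above:

isi zone zones create eyeglass /ifs/data/igls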


  1. Create a directory on the cluster that will be set as the HDFS root directory.

    1. Example: mkdir /ifs/data/igls/eca

  2. Set the HDFS root directory on the eyeglass Access Zone that supports HDFS connections.

Command:

(OneFS 7.2)

isi zone zones modify access_zone_name_for_hdfs --hdfs-root-directory=path_to_hdfs_root_dir


Example:

isi zone zones modify eyeglass --hdfs-root-directory=/ifs/data/igls/eca


(Onefs 8.0)

isi hdfs settings modify --root-directory=path_to_hdfs_root_dir --zone=access_zone_name_for_hdfs


Example:

isi hdfs settings modify --root-directory=/ifs/data/igls/eca --zone=eyeglass


  1. Create one IP pool for HDFS access with at least 3 nodes in the pool to ensure highly available access for each ECA node.  The pool will be configured with static load balancing and will be used by the ECA cluster for datanode and storage node access to the Analytics database.

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --static


(Onefs 8.0)

isi network pools create groupnet0.subnet0.hdfspool  --ranges=172.22.1.22-172.22.1.22 --ifaces 1-4:10gige-1  --access-zone eyeglass --sc-dns-zone hdfs-mycluster.ad1.test --alloc-method static



Screen Shot 2017-04-26 at 1.49.18 PM.png


A virtual HDFS rack is a pool of nodes on the Isilon cluster associated with a pool of Hadoop compute clients. To configure virtual HDFS racks on the Isilon Cluster:


NOTE: The ip_address_range_for_client = the IP range used by the ECA cluster VMs.

Command:

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20  --ip-pools=subnet0:hdfspool


isi networks modify pool --name  subnet0:hdfspool --access-zone=eyeglass


(Onefs 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20 --ip-pools=subnet0:hdfspool --zone=eyeglass


isi hdfs racks list --zone=eyeglass

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 172.22.1.18-172.22.1.20 subnet0:hdfspool

-------------------------------------------------------------

Total: 1



  1. Create a local Hadoop user in the System access zone.  

NOTE: The user name must be eyeglasshdfs.

Command:

(OneFS 7.2)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system


Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system


(Onefs 8.0)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system


Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system



  1. Login via SSH to the Isilon cluster to change the ownership and permissions on the HDFS path that will be used by the Eyeglass ECA cluster.

    1. chown -R eyeglasshdfs:"Isilon Users" /ifs/data/igls/eca

    2. chmod -R 755 /ifs/data/igls/eca

  2. Analytics Cluster setup complete (an optional verification sketch follows below).
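Optionally, the account and permissions can be spot-checked from the Isilon CLI before moving on.  A minimal verification sketch; the isi syntax shown is for OneFS 8.x and may differ on 7.2:

isi auth users view eyeglasshdfs --zone=system    # confirm the local eyeglasshdfs user exists
ls -ld /ifs/data/igls/eca                         # confirm owner eyeglasshdfs and 755 permissions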

Installation and Configuration ECA Cluster

System Requirements:


3 VM nodes, each requiring:


  • vSphere 5.5 or higher

  • 1x IP address on the same subnet for each node

  • Gateway IP

  • DNS server IP that can resolve the SmartConnect name for the analytics database

  • NTP server

  • IP address of the Eyeglass appliance

  • API token created on the Eyeglass appliance to authenticate

  • A unique ECA (Eyeglass Clustered Agent) cluster name

Installation Procedure

The deployment is based on a three-node ECA appliance.

  1. Download the Superna Eyeglass™ OVF from https://www.supernaeyeglass.com/downloads

  2. Unzip into a directory on a machine with vSphere client installed

  3. Install the OVA using the OVF online installer, deploying from a file or URL where the OVA was saved:
    deployovf.png
    Picture1.png


  1. Using vSphere deploy the OVA to build the cluster.  Follow screenshots below:

Picture1.png

Screen Shot 2017-02-17 at 8.44.07 PM.png

Screen Shot 2017-02-17 at 8.44.14 PM.png


  1. Complete the networking sections as follows:

    1. ECA Cluster name

    2. All VMs are on the same subnet

    3. Enter the network mask (will be applied to all VMs)

    4. Gateway IP

    5. DNS server (must be able to resolve igls.<your domain name here>; use the nameserver IP address)

NOTE: Agent node 1 is the master node where all ECA CLI commands are executed for cluster management

  1. Power on the vAPP

  2. Ping each ip address to make sure each node has finished booting

  1. Login via SSH to the Master Node (Node 1) using the "ecaadmin" account (default password 3y3gl4ss) and run the following command:

ecactl components install eca

NOTE: During this step a passphrase for SSH between nodes is generated; press the Enter key to accept an empty passphrase.


NOTE: A prompt to enter the Node 2 and Node 3 passwords appears on first boot only.  Enter the same default password "3y3gl4ss" when prompted.


Refer to the output below for the "passphrase" step:


Generating an ssh key for passwordless ssh access between cluster nodes...

Generating public/private rsa key pair.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/ecaadmin/.ssh/id_rsa.

Your public key has been saved in /home/ecaadmin/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:gbsudR30t2bbnpRJU1egkO7KJnHP1Eke2b9nIHpDb7g ecaadmin@eca194-1

The key's randomart image is:

+---[RSA 2048]----+

|          ..  ...|

|       .  o. .  .|

|      . .o ..o  o|

|       . .o = o o|

|      . So = + = |

|      o.o + * * =|

|     ..= = o * B.|

|    ... + + + =.=|

|     ..o   .E+ +o|

+----[SHA256]-----+

 

Distributing the key to other nodes in the cluster.

You will have to supply the eca password to the other nodes when prompted.

 

The authenticity of host '172.22.1.95 (172.22.1.95)' can't be established.

ECDSA key fingerprint is SHA256:ohMZK1A+2FtgD/vgWHY7dyBUvCv4LDtMd1VSMLLHQak.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '172.22.1.95' (ECDSA) to the list of known hosts.

Password:

id_rsa                                                                                             100% 1679     1.6KB/s   00:00

id_rsa.pub                                                                                         100%  399     0.4KB/s   00:00

authorized_keys                                                                                    100%  399     0.4KB/s   00:00

known_hosts                                                                                        100%  173     0.2KB/s   00:00

The authenticity of host '172.22.1.96 (172.22.1.96)' can't be established.

ECDSA key fingerprint is SHA256:ohMZK1A+2FtgD/vgWHY7dyBUvCv4LDtMd1VSMLLHQak.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '172.22.1.96' (ECDSA) to the list of known hosts.

Password:

Password:

id_rsa                                                                                             100% 1679     1.6KB/s   00:00

id_rsa.pub                                                                                         100%  399     0.4KB/s   00:00

authorized_keys                                                                                    100%  399     0.4KB/s   00:00

known_hosts                                                                                        100%  346     0.3KB/s   00:00

The authenticity of host '172.22.1.94 (172.22.1.94)' can't be established.

ECDSA key fingerprint is SHA256:ohMZK1A+2FtgD/vgWHY7dyBUvCv4LDtMd1VSMLLHQak.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '172.22.1.94' (ECDSA) to the list of known hosts.

passwordless ssh access initialized

 

Success


  1. Step complete

  2. On the Eyeglass appliance: generate a unique API token from the Superna Eyeglass REST API window. Once a token has been generated for the ECA Cluster, it can be used in that ECA's startup command for authentication, along with the location of Eyeglass. (Eyeglass main menu -> Eyeglass REST API menu item)


  1. On the ECA Cluster master node (node 1)

    1. Login to that VM. From this point on, commands will only be executed on the master node.

    2. On the master node, edit the file /opt/superna/eca/eca-env-common.conf (using vi) and change the following settings to reflect your environment. Replace each variable accordingly.


Set the IP address or FQDN of the Eyeglass appliance and the API token, and uncomment the parameter lines before saving the file. For example:

      • export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance

      • export EYEGLASS_API_TOKEN=Eyeglass_API_token


Verify the IP addresses for the nodes in your cluster. It is important that NODE_1 be the master (i.e. the IP address of the node you are currently logged into).

      • export ECA_LOCATION_NODE_1=ip_addr_of_node_1 (set by first boot from the OVF)

      • export ECA_LOCATION_NODE_2=ip_addr_of_node_2 (set by first boot from the OVF)

      • export ECA_LOCATION_NODE_3=ip_addr_of_node_3 (set by first boot from the OVF)

Set the HDFS path to the SmartConnect name set up in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with <your domain here>.  

NOTE: Do not change any other value.  Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.

      • export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/ecahbase'


Performance CEE Configuration

This performance tuning value enables 2 CEE servers per VM to process higher event rates.  Best practice is to enable this when more than 1000 users access Isilon data.

Verify the extra services setting is enabled and set to true:

export LAUNCH_EXTRA_SERVICES=true

NOTE:  To leverage all 6 CEE servers, 3 more CEE URLs are added to Isilon to load-share CEE events over the 6 instances.  See the Auditing Configuration on Isilon section below.  A complete example configuration file is shown below.
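Putting the settings from this section together, a hedged example of a completed /opt/superna/eca/eca-env-common.conf is shown below.  All IP addresses, the token value, and the SmartConnect name are placeholders for illustration; substitute your own values:

export EYEGLASS_LOCATION=172.22.1.10                  # Eyeglass appliance IP or FQDN
export EYEGLASS_API_TOKEN=<your_api_token>            # token generated in the Eyeglass REST API window
export ECA_LOCATION_NODE_1=172.22.1.18                # master node (the node you run ECA CLI commands on)
export ECA_LOCATION_NODE_2=172.22.1.19
export ECA_LOCATION_NODE_3=172.22.1.20
export ISILON_HDFS_ROOT='hdfs://hdfs-mycluster.ad1.test:8020/ecahbase'
export LAUNCH_EXTRA_SERVICES=true                     # enables the second CEE instance per VM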

  1. On master ECA Node, startup the ECA Cluster

    1. NOTE: This step starts the containers on each node, connects to the Analytics HDFS SmartConnect FQDN, creates the Analytics database if one is not detected, then starts up the ECA code in each container.

      • ecactl cluster up (Note: this can take 30 seconds to 1 minute to complete)

The script does the following:

  1. Reads the config file and checks that the config data is not empty

  2. Checks if the HDFS pool SmartConnect zone name is resolvable

  3. Checks if the ECA can connect to Isilon using netcat on port 8020

  4. Mounts HDFS data as the "eyeglasshdfs" user and checks if the user has permissions


      • Configuration pushed

      • Starting services on all cluster nodes.


      • Checking HDFS connectivity

      • Starting HDFS connectivity tests...

      • Reading HDFS configuration data...

      • ********************************************************************

      • HDFS root path: hdfs://hdfsransomware.ad3.test:8020/eca1/

      • HDFS name node: hdfsransomware.ad3.test

      • HDFS port: 8020

      • ********************************************************************

      • Resolving HDFS name node....

      • Server: 192.168.1.249

      • Address: 192.168.1.249#53


      • Non-authoritative answer:

      • Name: hdfsransomware.ad3.test

      • Address: 172.31.1.124


      • Checking connectivity between ECA and Isilon...

      • Connection to hdfsransomware.ad3.test 8020 port [tcp/intu-ec-svcdisc] succeeded!

      • ********************************************************************

      • Initiating mountable HDFS docker container...


  1. Verifying ECA Cluster

    1. On the master node run these commands:

  1. Run the following command: "ecactl db shell"

  2. Once in the shell, execute the command: "status"

  3. Output should show 1 active master and 2 backup masters

Screen Shot 2017-02-18 at 4.34.06 PM.png

  1. Type ‘exit’

  1. Verifying ECA containers are running

    1. Command: “ecactl containers ps”

Screen Shot 2017-02-18 at 4.19.12 PM.png

  1. Check cluster status and that all analytics tables exist

    1. Command: ‘ecactl cluster status’

    2. This command verifies all containers are running on all nodes and verifies each node can mount the tables in the Analytics database.

    3. Sample output.

Checking service status on all cluster nodes.


Printing container status on node: 172.31.1.133

Status StartedAt Name

==========================================================

running 2017-04-26T19:34:20.288787049Z /eca_ceefilter_1

running 2017-04-26T19:34:20.327191299Z /eca_fastanalysis_1

running 2017-04-26T19:34:20.446988172Z /eca_cee_1

running 2017-04-26T19:34:18.328104719Z /eca_iglssvc_1

running 2017-04-26T19:32:16.123403209Z /eca_rmq_1

running 2017-04-26T19:32:15.910507548Z /db_ecademo_1



Printing container status on node: 172.31.1.134

Status StartedAt Name

==========================================================

running 2017-04-26T19:34:44.612397346Z /eca_ceefilter_1

running 2017-04-26T19:34:44.547856619Z /eca_fastanalysis_1

running 2017-04-26T19:34:39.274899421Z /eca_cee_1

running 2017-04-26T19:34:39.010306288Z /eca_iglssvc_1

running 2017-04-26T19:32:54.616204257Z /eca_rmq_1

running 2017-04-26T19:32:54.367833165Z /db_ecademo_2



Printing container status on node: 172.31.1.135

Status StartedAt Name

==========================================================

running 2017-04-26T19:35:07.317100091Z /eca_fastanalysis_1

running 2017-04-26T19:35:05.603106726Z /eca_ceefilter_1

running 2017-04-26T19:35:05.611508228Z /eca_iglssvc_1

running 2017-04-26T19:35:03.034694607Z /eca_cee_1

running 2017-04-26T19:33:39.672505904Z /eca_rmq_1

running 2017-04-26T19:33:38.375351913Z /db_ecademo_3



Verifying db connectivity on node 172.31.1.133

This should print that the user table does exist...

2017-04-26 21:16:08,022 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'user'

Table user does exist

0 row(s) in 0.2610 seconds


This should print that the signal table does exist...

2017-04-26 21:16:36,959 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'signal'

Table signal does exist

0 row(s) in 0.2820 seconds



Verifying db connectivity on node 172.31.1.134

This should print that the user table does exist...

2017-04-26 21:16:43,681 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'user'

Table user does exist

0 row(s) in 0.2700 seconds


This should print that the signal table does exist...

2017-04-26 21:17:12,389 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'signal'

Table signal does exist

0 row(s) in 0.2410 seconds


Verifying db connectivity on node 172.31.1.135

This should print that the user table does exist...

2017-04-26 21:17:17,540 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'user'

Table user does exist

0 row(s) in 0.2310 seconds


This should print that the signal table does exist...

2017-04-26 21:17:45,815 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017


exists 'signal'

Table signal does exist

0 row(s) in 0.2060 seconds


Note: Containers should be running on all nodes, and the user and signal tables should exist on all nodes in the cluster, as the output above indicates.  

  1. On the Eyeglass appliance, check the Manage services icon:

    1. Login to Eyeglass as admin user

    2. Check the status of the ECA Cluster: click the 'Manage Services' icon and click + to expand the containers or services for each ECA node; review the image below.  

    3. Verify the IP addresses of the ECA nodes are listed.

Time Configuration Isilon, Eyeglass, ECA cluster

Overview: Accurate auditing for Ransomware Defender or Easy Auditor requires time synchronization between all components.   NTP should be used on all VMs, all pointing at the same NTP source.


  1. Verify the Isilon clusters being monitored are using an NTP server.  Many Internet time sources exist, or use an internal enterprise NTP server IP address.

    1. Enable NTP on all Isilon clusters

  2. On the Eyeglass VM, configure the same NTP servers used by Isilon by following this guide http://documentation.superna.net/eyeglass-isilon-edition/install/quickinstall#TOC-Setup-Time-zone-and-NTP

  3. On each ECA VM, repeat the YaST steps above to configure NTP (a hedged command sketch follows below).
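A hedged sketch of the NTP configuration on an ECA VM.  The module and service names assume the openSUSE base image shipped with the ECA OVA; the NTP server address is a placeholder:

sudo yast ntp-client                                      # interactive YaST NTP configuration
# or non-interactively:
echo "server ntp.example.com iburst" | sudo tee -a /etc/ntp.conf
sudo systemctl enable ntpd && sudo systemctl restart ntpd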




Isilon Protocol Audit Configuration

Overview

This section configures the Isilon file auditing required to monitor user behaviour.   The CEE protocol can be enabled independently on each Access Zone that requires monitoring.   The CEE endpoints should be configured to point at each node of the ECA cluster.  


NOTE: If you have a CEE server for external auditing applications, see the next section on how to configure the CEE server messaging files to send RabbitMQ events to the ECA cluster.

Enable and configure Isilon protocol audit


  1. Enable Protocol Access Auditing.

Command:

(OneFS 7.2)

isi audit settings modify --protocol-auditing-enabled {yes | no}


Example:

isi audit settings modify --protocol-auditing-enabled=yes


(OneFS 8.0)

isi audit settings global modify --protocol-auditing-enabled {yes | no}


Example:



isi audit settings global modify --protocol-auditing-enabled=yes


  1. Select the access zone that will be audited. These audited access zones are accessed by the SMB/NFS clients.

Command:

(OneFS 7.2)

isi audit settings modify --audited-zones=audited_access_zone


Example:

isi audit settings modify --audited-zones=sales


(OneFS 8.0)

isi audit settings global modify --audited-zones=audited_access_zone


Example:

isi audit settings global modify --audited-zones=sales,system



  1. OneFS 7.2 or 8.0 GUI Auditing Configuration:

    1. Click Cluster Management > Auditing

    2. In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.

    3. In the Audited Zones area, click Add Zones.

    4. In the Select Access Zones dialog box, select one or more access zones, and click Add Zones.

    5. In the Event forwarding area, specify the ECA nodes to forward CEE events to.


CEE Server URL (point to the ECA nodes, using default port 12228).

NOTE:  If dual CEE per VM is enabled, 6 URLs are required, two per ECA node using two different ports (an illustrative URL list follows the example image below).   The second CEE server listens on port 12229.


Dual CEE Example

Screen Shot 2017-09-15 at 8.03.55 PM.png
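As an illustration, with dual CEE enabled on a 3-node ECA cluster the event forwarding list contains six entries, two per node.  The IP addresses below are placeholders and the /cee path is the conventional CEE endpoint; confirm the exact URI format against the example image above:

http://172.22.1.18:12228/cee
http://172.22.1.18:12229/cee
http://172.22.1.19:12228/cee
http://172.22.1.19:12229/cee
http://172.22.1.20:12228/cee
http://172.22.1.20:12229/cee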


    1. For OneFS 7.2, in Storage Cluster Name, specify the Isilon cluster name.

    2. Click Save Changes.

(Mandatory Step) Isilon Audit Event Rate Validation for Sizing ECA cluster

This is a required step to determine the ECA configuration that matches your performance requirements.

  1. Once auditing is enabled, wait 10 minutes.

  2. Then change the dates in the example below to cover a 5-minute period after CEE was enabled.  (NOTE: this should be repeated if only 1 access zone was enabled, and repeated again with all access zones enabled.)

  3. isi_for_array 'isi_audit_viewer -t protocol -s "2017-09-08 12:41:00" -e "2017-09-08 12:46:00"' | wc -l  (where the dates are a 5-minute period of time to sample in the past)

  4. Provide this output number to the support installation team.  

  5. It counts the number of audit events recorded on all nodes in the cluster (a conversion sketch follows below).
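To convert the 5-minute count into the events-per-second figure used in the sizing tables above, divide by 300 seconds.  A minimal sketch; the count value is illustrative only:

EVENT_COUNT=6000    # output of the isi_audit_viewer sample above
awk -v c="$EVENT_COUNT" 'BEGIN { printf "Average events per second: %.1f\n", c / 300 }'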

(Optional - skip if external CEE servers are not required) Configure an External CEE Server to Send Events to the ECA Cluster


NOTE:  This set of steps is not required unless existing auditing products are installed and an existing or shared CEE environment exists.

  1. Go to the path where CEE was installed and edit the file "MsgSys.xml" (in this example the path to MsgSys.xml is C:\Program Files\EMC\CEE\MsgSys.xml).

Update the file to look like this:

<?xml version="1.0" encoding="utf-8"?>

<MsgSys>

<MsgBus enabled="1">

 <Host name="<HOST>" port="5672" username="eyeglass" password="eyeglass">

   <Exchange name="CEE_Events" vhost="/eyeglass" type="topic">

     <Message persistent="1" />

   </Exchange>

 </Host>

</MsgBus>

</MsgSys>

  1. Replace <HOST> with the master node IP of the ECA cluster. After the .xml file is changed, reboot your CEE server.

  2. On cluster go to: Cluster Management -> Auditing -> Event Forwarding

    1. Verify the cluster you are monitoring is sending events to the CEE server IP configured above.




ECA Cluster Upgrade Procedures

This section covers the steps to upgrade ECA clusters using the offline method.


NOTE: If upgrading to ECA 1.9.5 or later, see the prerequisites for the firewall port changes required for log collection.

Offline ECA upgrade

  1. Login to the master node (node 1) via ssh

  2. ecactl cluster down

  3. Download the offline file from the support site and scp the file to the node

  4. Copy the file into /opt/superna (it must be copied to this directory)

  5. cd /opt/superna

  6. chmod 755 eyeXXXX (the name of the downloaded file)

  7. ./eyeglass-offline-1.9.2-17112.run  (example only)  

  8. This will automatically run ECA upgrade

  9. You will be prompted to enter the ecaadmin password for the other nodes to complete the cluster upgrade. (The prompt says root, but sudo has been used to run root-level commands.)

  10. Once completed successfully

  11. ecactl cluster up

  12. Login to Eyeglass and verify the Manage Services icon shows the VMs with green health

Screen Shot 2017-06-13 at 8.19.16 PM.png

Note: this can take 2-3 minutes after cluster up (a consolidated sketch of these steps follows below)

  1. Completed
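The same steps as a consolidated sketch run from the master node (the .run file name is an example only, as noted above):

ssh ecaadmin@<master_node_ip>
ecactl cluster down
# scp the offline upgrade file into /opt/superna before continuing
cd /opt/superna
chmod 755 eyeglass-offline-1.9.2-17112.run
./eyeglass-offline-1.9.2-17112.run     # enter the ecaadmin password for the other nodes when prompted
ecactl cluster up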


Ransomware IGLS CLI command Reference


See Eyeglass CLI command for Ransomware

ECA Cluster OS Suse 42.2 to 42.3 Upgrade Procedures - Offline


Use this procedure when no Internet access is available for the online OS upgrade option.  The procedure requires a new OVF to be deployed and the configuration file to be used to restore all settings.


  1. Login to the master node of current ECA cluster as ecaadmin user

  2. Copy the contents of /opt/superna/eca/eca-env-common.conf to your local computer as a backup of the configuration. (SCP the file or copy and paste it into a text file; see the sketch after this list.)

  3. Shutdown the cluster

    1. ecactl cluster down

  4. Power down all the VMs from vCenter

  5. Deploy the new OVF (1.9.4 or later) and re-use the same IP addresses and ECA name during deployment.

  6. Login to node 1 as ecaadmin

  7. Edit the main conf file

    1. nano /opt/superna/eca/eca-env-common.conf

    2. Paste the backup file contents into the file (note: you can use SCP to copy the file back to the cluster using the ecaadmin user to login)

    3. Ctrl + x

    4. Answer yes to save the file on exit

  8. ecactl cluster up

  9. Verify normal cluster boot process

  10. Login to Eyeglass

  11. Open Service Manager Icon

  12. Wait up to 5 minutes and verify all cluster nodes are green/active

  13. Done
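A hedged sketch of the configuration backup and restore from steps 2 and 7, using scp from your workstation (IP addresses are placeholders):

# before redeploying the OVF (step 2): back up the configuration
scp ecaadmin@<current_master_ip>:/opt/superna/eca/eca-env-common.conf ./eca-env-common.conf.bak
# after the new OVF is deployed with the same IPs (step 7): restore it
scp ./eca-env-common.conf.bak ecaadmin@<master_ip>:/opt/superna/eca/eca-env-common.conf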


ECA Cluster OS Suse 42.2 to 42.3 Upgrade Procedures - Internet Online

This procedure requires Internet access from the ECA nodes to complete.  If Internet access is not available, use the offline OS upgrade procedure.

IMPORTANT: This procedure must be run AFTER the Offline ECA Upgrade

  1. ssh ecaadmin@x.x.x.x of master node

  2. ecactl cluster down

  3. sudo -s

  4. Enter the ecaadmin password

  5. zypper refresh (requires internet)

  6. zypper update (requires internet) (applies current updates)

  7. Change all remaining repo URLs to the new version of the distribution (needs to be run as root)

    1. cp -Rv /etc/zypp/repos.d /etc/zypp/repos.d.Old   (makes a backup of the existing repositories)

  8. Update the repository URLs from 42.2 to 42.3

    1. sed -i 's,42\.2,42.3,g' /etc/zypp/repos.d/*  

  9. Refresh new repositories (you might be asked to accept new gpg key)

    1. zypper --gpg-auto-import-keys ref

  10. Upgrade to 42.3

    1. zypper dup --download-in-advance

  11. Repeat on all 3 nodes

  12. Reboot all 3 nodes

  13. Login to master node with ssh after reboot

  14. ecactl cluster up

  15. Verify startup is normal and tables exist

  16. ecactl cluster status


Advanced 2 pool HDFS configuration.

This describes a 2-pool configuration with a namenode pool and a datanode pool.

  1. Create an hdfspool-namenode pool. It should be used by Hadoop clients to connect to the HDFS namenode service on Isilon and should use the dynamic IP allocation method to minimize connection interruptions in the event that an Isilon node fails. For an HDFS workload, round robin is likely to work best. Create a delegation record so that DNS requests for the SmartConnect zone name (hdfs-mycluster.ad1.test, for example) are delegated to the service IP that will be defined on your Isilon cluster.

Note: dynamic IP allocation requires a SmartConnect Advanced license.

Example:

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool-namenode --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --dynamic

  2. Create an hdfspool-datanode pool. It should be used for HDFS datanode connections and should use the static IP allocation method to ensure that datanode connections are balanced evenly among all Isilon nodes.

This pool is for cluster-internal communication and does not require a SmartConnect zone name.

To assign specific SmartConnect IP address pools for datanode connections, use the "isi hdfs racks modify" command.

Note: If you do not have a SmartConnect Advanced license, you may choose to use a single static pool for namenode and datanode connections. This may result in some failed HDFS connections immediately after Isilon node failures.

Note: The ip_address_range_for_client = the IP range used by the ECA cluster VMs.

Command

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-rack0 --client-ip-ranges=0.0.0.0-255.255.255.255  --ip-pools=subnet0:hdfspool

isi networks modify pool --name subnet0:hdfspool --access-zone=eyeglass

(Onefs 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool

Example:

isi hdfs racks create /hdfs-rack0 --client-ip-ranges=0.0.0.0-255.255.255.255 --ip-pools=subnet0:hdfspool-datanode --zone=eyeglass

isi hdfs racks list --zone=eyeglass

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 0.0.0.0-255.255.255.255 subnet0:hdfspool

-------------------------------------------------------------

Total: 1