Eyeglass Clustered Agent vAPP Install Guide

Eyeglass Clustered Agent vAPP Install and Upgrade Guide

Release:  2.5.1

Abstract:

This guide provides a step-by-step procedure for installing the Superna Eyeglass Clustered Agent (ECA) vAPP used by Ransomware Defender and Easy Auditor.




Table of Contents

  1. What's New
  2. Definitions
  3. Deployment and Topology Overview
    3.1 Deployment Diagram
    3.2 Topologies for Data Flow Between the ECA Cluster and the Isilon Cluster
    3.3 ECA Cluster Deployment Topologies with Distributed Isilon Clusters
    3.4 Considerations
    3.5 Firewall Port Requirements Ransomware Defender
    3.6 Additional Firewall Ports for Easy Auditor
  4. IP Connection and Pool Requirements for Analytics database
  5. Sizing and Performance Considerations
    5.1 ESX Compute
    5.2 ECA Cluster Node Network Requirements to Isilon
  6. Pre-requisites
    6.1 Eyeglass VM
    6.2 ESX Host Hardware Recommendation and VM Requirements
    6.3 Deployment Overview
  7. Preparation of Analytics Database Cluster
  8. Installation and Configuration ECA Cluster
    8.1 System Requirements
    8.2 Installation Procedure
  9. Auditing Configuration
    9.1 How to Configure Performance CEE Configurations
      9.1.1 How to configure Turbo Audit Very High Event Rate
        9.1.1.1 Prerequisites for all mount methods
      9.1.2 Instructions
        9.1.2.1 Configure eca-env-common.conf
    9.2 Manual Mount with Turbo audit
    9.3 Configure and Verify NFS automounter with Turbo Audit
      9.3.1 How To check for successful mount with NFS Auto Mount only
  10. Time Configuration Isilon, Eyeglass, ECA cluster
  11. Backup the Audit Database with SnapshotIQ (Required for Easy Auditor)
  12. Isilon Protocol Audit Configuration
    12.1 Overview
    12.2 Enable and configure Isilon protocol audit
    12.3 (Mandatory Step) Isilon Audit Event Rate Validation for Sizing ECA cluster
  13. How to Purge old Audit logs on Isilon
  14. ECA Cluster Upgrade Procedures
    14.1 Offline ECA upgrade (from 1.9.6 to 2.5.x)
    14.2 Expanding Auditor Cluster for Higher Performance
      14.2.1 How to expand Easy Auditor cluster size
    14.3 How to Enable Real-time Monitoring of ECA Cluster Performance
  15. Ransomware IGLS CLI command Reference
  16. ECA Cluster OS SUSE 42.2 to 42.3 Upgrade Procedures - Offline
  17. ECA Cluster OS SUSE 42.2 to 42.3 Upgrade Procedures - Internet Online
  18. Advanced 2 pool HDFS configuration
  19. ECA Config file tag Definitions

What's New

  1. Syslog forwarding of ECA logs to Eyeglass

    1. Uses a FluentD container for local logging and forwarding

  2. Dual CEE instances per VM for higher-rate audit processing.  See the Performance CEE Configuration section for how to enable this.

  3. Cluster startup now validates the HDFS configuration before starting and provides user feedback on the validation results

Definitions

  1. ECA: Eyeglass Clustered Agent - the entire stack, running in separate VMs outside of Eyeglass, that processes CEE data

  2. CEE: Common Event Enabler - an EMC-specific, XML-based audit event protocol

Deployment and Topology Overview

Deployment Diagram

This diagram shows a three-node ECA cluster.



Topologies for Data Flow Between the ECA Cluster and the Isilon Cluster

The diagrams below show the data flow between the ECA cluster and the Isilon cluster, with two possible deployment topologies for the ECA cluster.  Both CEE and HDFS are on the same network.  The ECA cluster processes CEE events and stores them on the cluster with HDFS.  The ECA cluster needs an IP connection, via either the Management Network or the Data Network, to the Eyeglass appliance, which is typically deployed at the DR location over a wide area network.






[Diagram: ECA cluster deployment topologies]


ECA Cluster Deployment Topologies with Distributed Isilon Clusters


Review the diagram below for the choice between deploying a centralized ECA cluster or a distributed one.

Considerations:

  1. A centralized ECA deployment is easier to manage and monitor.  For a DR scenario, the ECA would need to be deployed at the DR location; see the failover steps in the admin guide.

  2. CEE traffic is a low-bandwidth HTTP stream sent from remote sites to the ECA and tolerates latency well.

  3. Higher audit event rates may perform better if the ECA cluster is located with lower latency to the Isilon cluster (Topology #1).

  4. Best Practice:  Centralize the ECA cluster at one site and send CEE data over the WAN link to the central site (Topology #2).


[Diagram: Centralized vs. distributed ECA cluster deployment topologies]

Firewall Port Requirements Ransomware Defender

Blue lines = Service Broker communication heartbeat (port 23457)

Orange lines = Isilon REST API over TLS (port 8080) and SSH

Green lines = NFS v3 over UDP to retrieve audit events

Purple lines = HDFS ports to store audit data and security events

Pink lines = HBase query ports from Eyeglass to the ECA cluster

Red lines = ECA to Eyeglass support logging



Additional Firewall Ports for Easy Auditor

IP Connection and Pool Requirements for Analytics database


[Diagram: IP connection and pool requirements for the Analytics database]


Sizing and Performance Considerations

ESX Compute

CEE processing is a real-time, compute-intensive task. The auditing workload increases with file IO, and the number of users is a good metric for estimating the file IO workload. The table below is based on an assumption of 1.25 events per second per user, with a peak of 1.5 events per second, and can be used as a guideline to help determine how many events per second your environment will produce.  This will help you determine the sizing of the VMs and their placement on ESX hardware.

Undersized hardware could result in a backlog of events to process.  Consult the CPU sizing of the ECA cluster in the admin guide to change the default CPU resource limits (guide here).



Number of active concurrent users per cluster (1) | ECA VM per physical host recommendation | Estimated events guideline
1 to 5,000 | 1 host | 5,000 x 1.25 = 6,250 events per second
5,000 - 10,000 | 3 hosts | 10,000 x 1.25 = 12,500 events per second
> 10,000 | 3 hosts | number of users x 1.25 events per second

(1) Active TCP connection with file IO to the cluster.
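Applying the guideline above to an arbitrary user count is simple multiplication. The sketch below is illustrative only and assumes a hypothetical 8,000 active concurrent users with the 1.25 events per second per user figure from the table:

# Hypothetical sizing estimate: 8,000 active users x 1.25 events/second/user
echo $(( 8000 * 125 / 100 ))    # prints 10000, the estimated events per second

An estimate of 10,000 events per second would place this example in the 5,000 - 10,000 user row, which recommends spreading the ECA VMs across 3 hosts.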

ECA Cluster Node Network Requirements to Isilon

Each ECA node processes audit events and writes data to the analytics database using HDFS on the same network interface.  Therefore the combined TX and RX constitutes the peak bandwidth requirement per node.  The table below shows the minimum bandwidth requirements per ECA VM with example calculations.

See the ECA event sample section in the installation steps to capture the total events per 5 minutes; use that number with this bandwidth estimation for writing ECA events to the Analytics database on Isilon.

HDFS Bandwidth estimates and guidelines for Analytics database access to Isilon.



Product Configuration | Audit event rate per second (events per second per ECA cluster; input: NFS reading events from Isilon to the ECA cluster) | Peak bandwidth requirement (audit data writes in Mbps per ECA cluster; output: HDFS writing events)
Ransomware Defender only | 1000 events | Into ECA: 50 Mbps; Out of ECA: < 150 Mbps
Unified Ransomware Defender and Easy Auditor - steady state storing events | 2000 events | Into ECA: 125 Mbps; Out of ECA: < 350 Mbps
Easy Auditor Analysis Reports (long reports) | NA | Into ECA (HDFS reads from Isilon): 800 Mbps - 1.5 Gbps



Pre-requisites

Eyeglass VM

  1. Eyeglass must be deployed with or upgraded to the correct compatible release for the ECA release.

  2. Upgrade the Eyeglass VM to 16 GB of memory from the default 8 GB

    1. Login to the Eyeglass VM as admin using ssh

    2. sudo -s

    3. Type ”shutdown”

    4. Login to vCenter and wait until the VM shows “powered off”

    5. Edit the VM settings and increase memory to 16 GB

    6. Start the VM

    7. Verify login after waiting 1-2 minutes for boot time

  3. Set spark.driver.maxResultSize (see the consolidated command sketch after this prerequisites list)

    1. On the Eyeglass appliance, sudo su - to assume the root user.

    2. vim /opt/spark-2.1.1-bin-hadoop2.7/conf/spark-defaults.conf

    3. Add this new line

spark.driver.maxResultSize=0

    4. Save your changes

:wq!

    5. Restart the sca service

systemctl restart sca

  4. Create a custom firewall rule to open port 2013, required for Easy Auditor Wiretap

    1. On the Eyeglass appliance, sudo su - to assume the root user.

    2. yast

    3. Go to Security and Users

    4. Go to Firewall

    5. Enter

    6. Go to Custom Rules

    7. Enter

    8. Tab to Add

    9. Create Custom Rule

    10. Source: 0/0

    11. Destination port: 2013

    12. Everything else default

    13. Save and Exit



  5. Disable the EventAuditProgress task

    1. From the Eyeglass appliance CLI

    2. igls admin schedules set --id EventAuditProgress --enabled false

  6. Eyeglass pre-requisites are done.
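For reference, the command-line portions of steps 3 and 5 above can be applied from a single SSH session on the Eyeglass appliance. This is a minimal sketch that only repeats the values already listed in this section; the firewall change in step 4 still has to be made interactively in yast.

sudo su -                     # become root on the Eyeglass appliance
# Step 3: allow unlimited Spark result sizes (appends the line; check the file first if it may already be set)
echo 'spark.driver.maxResultSize=0' >> /opt/spark-2.1.1-bin-hadoop2.7/conf/spark-defaults.conf
systemctl restart sca         # restart the sca service to pick up the change
exit
# Step 5: disable the EventAuditProgress task from the admin CLI
igls admin schedules set --id EventAuditProgress --enabled false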

ESX Host Hardware Recommendation and VM Requirements

  1. A single-host configuration should be a dual-socket host with 8 cores per socket and 64 GB of RAM as a minimum


Configuration | Memory | CPU | Disk
Unified (Ransomware Defender and Easy Auditor) | 16 GB per ECA node | 4 vCPU per ECA node | 80 GB per ECA node (240 GB total)
Ransomware Defender only | 16 GB per ECA node | 4 vCPU per ECA node | 80 GB per ECA node (240 GB total)
Easy Auditor only (for best performance) | 16 GB per ECA node | 4 vCPU per ECA node | 80 GB per ECA node (240 GB total)




NOTE: The OVA sets a default resource limit of 18,000 MHz, shared by all ECA VM nodes in the cluster.  This limit can be increased if the audit event load requires more CPU processing.  Consult support before making any changes in VMware.

Deployment Overview

The Eyeglass appliance must be installed and configured first. The ECA cluster runs in a separate group of VMs from Eyeglass. The ECA cluster is provisioned as a CEE handler on the Isilon cluster and receives all file change notifications.



Detection of security events is contained strictly within the ECA cluster. Eyeglass is responsible for taking action against the Isilon cluster and notifying users.

  1. An Isilon cluster stores the analytics database (this can be the same cluster that is monitored for audit events)

  2. Eyeglass appliance with Ransomware Defender agent licenses or Easy Auditor agent licenses

  3. Isilon cluster with an HDFS license to store the Analytics database (a shared database between Ransomware Defender and Easy Auditor)

  4. Overview of steps to install and configure:

    1. Configure Access Zone for Analytics database using an Access Zone with HDFS enabled

    2. Configure SmartConnect on the Access Zone

    3. Create Eyeglass api token for ECA to authenticate to Eyeglass

    4. Install ECA cluster

    5. Configure ECA cluster master config

    6. Push config to all nodes from master with ECA cli

    7. Start cluster

    8. Verify cluster is up and database is created

    9. Verify Eyeglass Service heartbeat and ECA cluster nodes have registered with Eyeglass

Preparation of Analytics Database Cluster

Prepare the Isilon Cluster for HDFS.

  1. Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.

  2. Create an “eyeglass” Access Zone with the path “/ifs/data/igls/analyticsdb” for the HDFS connections from the Hadoop compute clients (the ECA nodes). Under Available Authentication Providers, select only the Local System authentication provider.

    1. Select “Create zone base directory”



NOTE: Ensure that Local System provider is at the top of the list. Additional AD providers are optional and not required.

NOTE: In OneFS 8.0.1 the Local System provider must be added using the command line.  After adding, the GUI can be used to move the Local System provider to the top of the list.

isi zone zones modify eyeglass --add-auth-providers=local:system


  3. Set the HDFS root directory in the eyeglass access zone that supports HDFS connections.

Command:

(OneFS 7.2)

isi zone zones modify access_zone_name_for_hdfs --hdfs-root-directory=path_to_hdfs_root_dir


Example:

isi zone zones modify eyeglass --hdfs-root-directory=/ifs/data/igls/analyticsdb


(Onefs 8.0)

isi hdfs settings modify --root-directory=path_to_hdfs_root_dir --zone=access_zone_name_for_hdfs


Example:

isi hdfs settings modify --root-directory=/ifs/data/igls/analyticsdb/  --zone=eyeglass


  4. Create one IP pool for HDFS access with at least 3 nodes in the pool to ensure high-availability access for each ECA node; the pool will be configured with static load balancing.  This pool will be used for namenode and datanode access by the ECA cluster for the Analytics database.

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --static


(Onefs 8.0)

isi network pools create groupnet0.subnet0.hdfspool  --ranges=172.22.1.22-172.22.1.22 --ifaces 1-4:10gige-1  --access-zone eyeglass --sc-dns-zone hdfs-mycluster.ad1.test --alloc-method static










A virtual HDFS rack is a pool of nodes on the Isilon cluster associated with a pool of Hadoop compute clients. To configure virtual HDFS racks on the Isilon Cluster:


NOTE: ip_address_range_for_client = the IP range used by the ECA cluster VMs.

Command:

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20  --ip-pools=subnet0:hdfspool


isi networks modify pool --name  subnet0:hdfspool --access-zone=eyeglass


(Onefs 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


Example:

isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20 --ip-pools=subnet0:hdfspool --zone=eyeglass


isi hdfs racks list --zone=eyeglass

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 172.22.1.18-172.22.1.20 subnet0:hdfspool

-------------------------------------------------------------

Total: 1



  5. Create the local Hadoop user in the System access zone.

NOTE: The user name must be eyeglasshdfs.

Command:

(OneFS 7.2)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system


Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system


(Onefs 8.0)

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system


Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system



  6. Login via SSH to the Isilon cluster to change the ownership and permissions on the HDFS path that will be used by the Eyeglass ECA cluster.

    1. chown -R eyeglasshdfs:'Isilon Users' /ifs/data/igls/analyticsdb/

    2. chmod -R 755 /ifs/data/igls/analyticsdb/

  7. Analytics cluster setup is complete (an optional verification sketch follows this list).
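Before continuing, the settings above can be spot-checked from the Isilon CLI. This is an optional verification sketch; the isi hdfs settings view form shown is the OneFS 8.0 syntax, and the expected output is simply the values configured in the previous steps (root directory /ifs/data/igls/analyticsdb, owner eyeglasshdfs, mode 755).

# Confirm the HDFS root directory assigned to the eyeglass access zone (OneFS 8.0 syntax)
isi hdfs settings view --zone=eyeglass
# Confirm ownership and permissions on the database path
ls -ld /ifs/data/igls/analyticsdb/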

Installation and Configuration ECA Cluster

System Requirements:


Three VM nodes, each requiring:

  • vSphere 5.5 or higher

  • One IP address per node, all on the same subnet

  • Gateway IP

  • DNS server IP that can resolve the SmartConnect name for the analytics database

  • NTP server

  • IP address of the Eyeglass appliance

  • API token created on the Eyeglass appliance to authenticate

  • A unique ECA (Eyeglass Clustered Agent) cluster name

Installation Procedure

The deployment is based on a three-node ECA appliance.

  1. Download the Superna Eyeglass™ OVF from https://www.supernaeyeglass.com/downloads

  2. Unzip into a directory on a machine with the vSphere client installed

  3. Install the OVA using the OVF online installer: deploy from a file or URL where the OVA was saved


  4. Using vSphere, deploy the OVA to build the cluster.


  5. Complete the networking sections as follows:

    1. ECA Cluster name (NOTE: must be lowercase, fewer than 8 characters, and contain only letters with no special characters)

IMPORTANT: The ECA Cluster name cannot include an underscore ( _ ), as this will cause some services to fail

    2. All VMs are on the same subnet

    3. Enter the network mask (applied to all VMs)

    4. Gateway IP

    5. DNS server (must be able to resolve igls.<your domain name here>) (use the nameserver IP address)

NOTE: Agent node 1 is the master node where all ECA CLI commands are executed for cluster management

  6. Power on the vAPP

  7. Ping each IP address to make sure each node has finished booting

  8. Login via SSH to the master node (node 1) using the “ecaadmin” account (default password 3y3gl4ss) and run the following command:

ecactl components install eca

  9. During this step a passphrase for SSH between nodes is generated; press the Enter key to accept an empty passphrase.

  10. A prompt to enter the node 2 and node 3 passwords appears on first boot only.  Enter the same default password “3y3gl4ss” when prompted.

  11. On the Eyeglass appliance: generate a unique API token from the Superna Eyeglass REST API window (Eyeglass main menu > Eyeglass REST API menu item). Once a token has been generated for the ECA cluster, it is used by that ECA for authentication, along with the location (IP or FQDN) of the Eyeglass appliance.

  12. On the ECA cluster master node (node 1):

    1. Login to that VM. From this point on, commands will only be executed on the master node.

    2. On the master node, edit the file /opt/superna/eca/eca-env-common.conf (using vim), and change the following settings to reflect your environment. Replace the variables accordingly.


Set the IP address or FQDN of the Eyeglass appliance and the API token; uncomment the parameter lines before saving the file. For example:

      • export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance

      • export EYEGLASS_API_TOKEN=Eyeglass_API_token


Verify the IP addresses for the nodes in your cluster. It is important that NODE_1 is the master (i.e. the IP address of the node you are currently logged into).

      • export ECA_LOCATION_NODE_1=ip_addr_of_node_1 (set by first boot from the OVF)

      • export ECA_LOCATION_NODE_2=ip_addr_of_node_2 (set by first boot from the OVF)

      • export ECA_LOCATION_NODE_3=ip_addr_of_node_3 (set by first boot from the OVF)

Set the HDFS path to the SmartConnect name set up in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with the SmartConnect name in your domain.

NOTE: Do not change any other value.  Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.

      • export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/eca1'
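Putting the settings above together, a completed fragment of /opt/superna/eca/eca-env-common.conf would look similar to the sketch below. Every value shown (IP addresses, token and SmartConnect name) is an illustrative placeholder to be replaced with your own environment's values; only the variable names come from this guide.

# Example fragment of /opt/superna/eca/eca-env-common.conf (placeholder values only)
export EYEGLASS_LOCATION=192.0.2.10                       # IP or FQDN of the Eyeglass appliance
export EYEGLASS_API_TOKEN=replace-with-your-api-token     # token generated in the Eyeglass REST API window
export ECA_LOCATION_NODE_1=192.0.2.21                     # master node (the node you are logged into)
export ECA_LOCATION_NODE_2=192.0.2.22
export ECA_LOCATION_NODE_3=192.0.2.23
export ISILON_HDFS_ROOT='hdfs://hdfs-mycluster.ad1.test:8020/eca1'   # SmartConnect zone of the HDFS pool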








Auditing Configuration

How to Configure Performance CEE Configurations

This performance tuning value enables 2 CEE servers per VM to process higher event rates.  Best practice: enable this when more than 1000 users access Isilon data.

Verify the extra services setting is enabled and set to true, and set the default event ingestion rate recommended by the install technician.  This value sets how many events per second enter the cluster per node for processing; a value of 500 means 1500 events per second will be allowed into the 3-node cluster for processing.

export LAUNCH_EXTRA_SERVICES=true

export TURBOAUDIT_MAX_INPUT_RATE=500


NOTE: To leverage all 6 CEE servers, 3 more CEE URLs are added to Isilon to load-share CEE events over 6 instances.  See the Isilon protocol audit configuration section below.

How to configure Turbo Audit Very High Event Rate

This option is for Isilon clusters with thousands of users connected to the cluster, or very high IO rates that generate a large number of audit events per second.

Prerequisites for all mount methods:

  1. A SmartConnect name configured in the System zone for the NFS export created on /ifs/.ifsvar/audit/logs

  2. The IP pool used for the NFS mount by the ECA cluster nodes is set to dynamic allocation for an HA NFS mount

  3. The NFS export is mounted read-only by each ECA node.

  4. Follow either the manual ECA mount method OR the automounter option (not both)

Instructions:

  1. Configure eca-env-common.conf

    1. Login to eca node 1

    2. vim /opt/superna/eca/eca-env-common.conf

    3. Add line

      • export USE_TURBOAUDIT=true

    4. :wq

    5. end

  2. Create a read-only export on the Isilon cluster, using the following syntax (replacing <ECA_IP_1> with the IP address of nodes 1, 2, 3):
     isi nfs exports create /ifs/.ifsvar/audit/logs --root-clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --read-only=true -f --description "Easy Auditor Audit Log Export"

  3. Manual Mount with Turbo audit

    1. NOTE: only the manual mount or the automounter should be used, not both.  See the auto mount section below.  A filled-in /etc/fstab example appears after this instruction list.

    2. Login to eca node 1 ecaadmin (repeat steps on all ECA nodes)

      • sudo -s (enter ecaadmin password)

      • mkdir -p /opt/superna/mnt/audit/<cluster-guid>/<cluster-name>

        • Repeat for each cluster this ECA will monitor

        • TIP: from the Isilon cluster - this CLI command can be used to get the Isilon Cluster GUID

        • grep -A1 serial /etc/ifs/array.xml | grep guid

      • (example only mkdir -p /opt/superna/mnt/audit/0050569f9a9f4d819b58261e950907a632ad/sourcein8)

      • echo "<system-zone-pool-ssip or FQDN in system zone>:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/<cluster-guid>/<cluster-name> nfs nfsvers=3 0 0" >> /etc/fstab

      • Then type ‘mount -a’

        • Verify output

      • Repeat for each cluster the ECA will monitor

      • Verify mount by typing ‘mount’

        • All mounts should be shown to each cluster

        • Type ls -R /opt/superna/mnt/

        • Each cluster mount path should be listed with GUID and cluster name

        • cd into each mount and type ls to verify connectivity.

      • exit (to return to ecaadmin user session)

      • Done for this ECA node.

      • Repeat on other 2 ECA nodes.

  4. Configure and Verify NFS automounter with Turbo Audit

    1. Use this instead of the manual mount steps, not both.

      • To configure: requires an NFS auto mount release and eca-env-common.conf to have the following set

      • export USE_AUDIT_NFS_WATCH=true

      • Then cluster down

        • ecactl cluster down

      • Cluster up  

        • ecactl cluster up

    2. Wait 1-2 minutes to ensure licensed cluster list is downloaded from Eyeglass to the ECA.   The ECA uses the licensed cluster list to automount the exports from the previous step.  Use the steps below to verify the mounts have been created.  This solution allows a single ECA cluster to manage more than one cluster using Turbo CEE processing.

    3. NOTE:  The IP address or FQDN used to add clusters to Eyeglass will be used for the NFS Audit log export mount.   

    4. How To check for successful mount with NFS Auto Mount only

      • sudo -s  (enter ecaadmin password)

      • ls -R /opt/superna/mnt/

      • ls (the command should list files on the exported mount)

      • Exit (to return to ecaadmin shell)

      • You should see a cluster GUID folder under the mnt directory  for each cluster that is licensed

      • Run logs command to verify mount was successful.

        • ecactl logs --follow  audit-nfs-watch

        • Verify output shows mount was successful

        • If not successful, double-check that the export and the client list are correct
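For the manual mount option in step 3 above, a filled-in /etc/fstab entry would look like the hypothetical example below. The SmartConnect name is a placeholder, and the GUID and cluster name are the sample values already used earlier in this guide; substitute your own values.

# Hypothetical /etc/fstab entry for the read-only Turbo Audit mount (single line)
auditlogs.system.example.com:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/0050569f9a9f4d819b58261e950907a632ad/sourcein8 nfs nfsvers=3 0 0
# Apply and verify the mount
mount -a
mount | grep /opt/superna/mnt/audit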



  1. On the master ECA node, start up the ECA cluster

    1. NOTE: This step starts the containers on each node, connects to the Analytics HDFS SmartConnect FQDN, creates the Analytics database if one is not detected, and then starts the ECA code in each container.

      • ecactl cluster up (Note: this can take 30 seconds to 1 minute to complete)

The startup script does the following:

  1. Reads the config file and checks that the config data is not empty

  2. Checks that the HDFS pool SmartConnect zone name is resolvable

  3. Checks that the ECA can connect to Isilon on port 8020 using netcat

  4. Mounts the HDFS data as the "eyeglasshdfs" user and checks that the user has permissions


Sample startup output:

Configuration pushed
Starting services on all cluster nodes.
Checking HDFS connectivity
Starting HDFS connectivity tests...
Reading HDFS configuration data...
********************************************************************
HDFS root path: hdfs://hdfsransomware.ad3.test:8020/eca1/
HDFS name node: hdfsransomware.ad3.test
HDFS port: 8020
********************************************************************
Resolving HDFS name node....
Server: 192.168.1.249
Address: 192.168.1.249#53

Non-authoritative answer:
Name: hdfsransomware.ad3.test
Address: 172.31.1.124

Checking connectivity between ECA and Isilon...
Connection to hdfsransomware.ad3.test 8020 port [tcp/intu-ec-svcdisc] succeeded!
********************************************************************
Initiating mountable HDFS docker container...


  2. Verifying the ECA cluster

    1. On the master node run these commands:

      1. Run the following command: ecactl db shell

      2. Once in the shell, execute the command: status

      3. The output should show 1 active master and 2 backup masters

      4. Type ‘exit’

  3. Verifying the ECA containers are running

    1. Command: “ecactl containers ps”

  4. Check the cluster status and that all analytics tables exist

    1. Command: ‘ecactl cluster status’

    2. This command verifies all containers are running on all nodes and verifies each node can mount the tables in the Analytics database.

    3. If there are any error conditions, open a support case to resolve, or retry with:

      1. ecactl cluster down

      2. ecactl cluster up

      3. Send the boot text to support


  5. On the Eyeglass appliance, check the Manage Services icon:

    1. Login to Eyeglass as the admin user

    2. Check the status of the ECA cluster: click the ‘Manage Services’ icon and click + to expand the containers or services for each ECA node.

    3. Verify the IP addresses of the ECA nodes are listed.


Time Configuration Isilon, Eyeglass, ECA cluster

Overview: For accurate auditing with Ransomware Defender or Easy Auditor, time synchronization between all components is a critical step.  NTP should be used on all VMs, with all components using the same NTP source.


  1. Verify the Isilon clusters being monitored are using an NTP server.  Many Internet time sources exist, or use an internal enterprise NTP server IP address.

    1. Enable NTP on all Isilon clusters

  2. On the Eyeglass VM, configure the same NTP servers used by Isilon by following this guide: http://documentation.superna.net/eyeglass-isilon-edition/install/quickinstall#TOC-Setup-Time-zone-and-NTP

  3. On each ECA VM, repeat the YaST steps above to configure NTP.
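A quick way to confirm the clocks agree after NTP is configured is to compare the time reported by each component at roughly the same moment. The sketch below only uses the standard date command plus the isi_for_array utility already used elsewhere in this guide; the reported times should agree to within a second or two.

# On the Isilon cluster: show the current time on every node
isi_for_array date
# On the Eyeglass appliance and on each ECA node
date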


Backup the Audit Database with SnapshotIQ (Required for Easy Auditor)

Use the native Isilon SnapshotIQ feature to back up the audit data.  The procedure is documented here.

Isilon Protocol Audit Configuration

Overview

This section configures the Isilon file auditing required to monitor user behaviour.  The CEE protocol can be enabled independently on each Access Zone that requires monitoring.  The CEE endpoints should be configured to point at each node of the ECA cluster.


NOTE: If you have a CEE server for external auditing applications, see the next section on how to configure the CEE server messaging files to send RabbitMQ events to the ECA cluster.

Enable and configure Isilon protocol audit


  1. Enable Protocol Access Auditing.

Command:

(OneFS 7.2)

isi audit settings modify --protocol-auditing-enabled {yes | no}


Example:

isi audit settings modify --protocol-auditing-enabled=yes


(OneFS 8.0)

isi audit settings global modify --protocol-auditing-enabled {yes | no}


Example:



isi audit settings global modify --protocol-auditing-enabled=yes


  2. Select the access zones that will be audited. The audited access zones are the zones accessed by the SMB/NFS clients.

Command:

(OneFS 7.2)

isi audit settings modify --audited-zones=audited_access_zone


Example:

isi audit settings modify --audited-zones=sales


(OneFS 8.0)

isi audit settings global modify --audited-zones= audited_access_zone


Example:

isi audit settings global modify --audited-zones=sales,system



  3. OneFS 7.2 or 8.0 GUI auditing configuration:

    1. Click Cluster Management > Auditing

    2. In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.

    3. In the Audited Zones area, click Add Zones.

    4. In the Select Access Zones dialog box, select one or more access zones, and click Add Zones (do not add the Eyeglass access zone).

    5. NOTE: If you have configured Turbo Audit, SKIP the remaining steps that add CEE URLs.

    6. In the Event Forwarding area, specify the ECA nodes to forward CEE events to.


CEE Server URL (point to the ECA, with default port 12228).

NOTE: If dual CEE per VM is enabled, 6 URLs are required in total (two per ECA node, using two different ports).  The second CEE server listens on port 12229.


Dual CEE Example

[Screenshot: Isilon event forwarding configured with six CEE URLs]
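With dual CEE enabled on a 3-node ECA cluster, the six event-forwarding entries are simply the three ECA node addresses listed twice, once for each CEE port (12228 and 12229). The list below is an illustration with placeholder IP addresses; the exact URL format to use is the one expected by the Isilon event forwarding dialog, and the /cee path shown here is an assumption rather than a confirmed requirement.

http://192.0.2.21:12228/cee
http://192.0.2.21:12229/cee
http://192.0.2.22:12228/cee
http://192.0.2.22:12229/cee
http://192.0.2.23:12228/cee
http://192.0.2.23:12229/cee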


    7. For OneFS 7.2, in Storage Cluster Name, specify the Isilon cluster name.

    8. Click Save Changes.

(Mandatory Step) Isilon Audit Event Rate Validation for Sizing ECA cluster

This is a required step to determine the ECA configuration needed to match performance requirements.

  1. Once auditing is enabled, wait 10 minutes.

  2. Then change the dates in the example below to cover a 5 minute period after CEE was enabled.  (NOTE: this should be repeated if only 1 access zone was enabled, and repeated again with all access zones enabled.)

  3. isi_for_array 'isi_audit_viewer -t protocol -s "2017-09-08 12:41:00" -e "2017-09-08 12:46:00"' | wc -l  (where the dates are a 5 minute period of time to sample in the past)

  4. Provide this output number to the support installation team.

  5. It counts the number of events on all nodes in the cluster that are recorded.
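Because the sample covers a 5 minute (300 second) window, dividing the line count by 300 gives the approximate sustained events per second, which can then be compared against the sizing tables earlier in this guide. A minimal sketch, assuming a hypothetical sample count of 360,000 lines:

# 360,000 audit events counted over a 300 second sample window
echo $(( 360000 / 300 ))    # prints 1200, the approximate events per second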


How to Purge old Audit logs on Isilon


Isilon stores audit messages in archived compressed files and does not have an automatic purge process.  These steps should be used to correctly remove the old GZ files and to ensure that protocol auditing is operating normally on all nodes in the cluster after the purge process.


CAUTION!

This procedure will stop capturing audit events on the cluster during the time auditing is disabled.

IMPORTANT!

This procedure must be performed using the "root" account on the cluster.

  1. Stop the ECA cluster

    1. SSH to the ECA master node as ecaadmin

    2. ecactl cluster down

  2. Run the following commands to turn off audit logging

    1. OneFS 7.1.0 - 7.2.1:

    2. isi audit settings modify --protocol-auditing-enabled=No

    3. isi audit settings modify --config-auditing-enabled=No (only if enabled before)

  3. Run the following commands to stop the isi_audit_d, isi_audit_cee and isi_audit_syslog processes from automatically restarting:

    1. isi services -a isi_audit_d ignore

    2. isi services -a isi_audit_cee ignore

    3. isi services -a isi_audit_syslog ignore

  4. Run the following commands to end the isi_audit_d and isi_audit_cee processes:

    1. isi_for_array 'pkill isi_audit_d'

    2. isi_for_array 'pkill isi_audit_cee'

    3. isi_for_array 'pkill isi_audit_syslog'

  5. Run the following command to ensure that no isi_audit processes are running on the cluster:

    1. isi_for_array pgrep -l isi_audit

  6. Run the following commands to change directory to the audit directory.

    1. cd /ifs/.ifsvar/audit

  7. Run the following command to backup the audit directory and allow for the files to be recreated:

    1. mv /ifs/.ifsvar/audit /ifs/.ifsvar/audit.bak

    2. Run the following commands to inform the Master Control Program (MCP) to resume monitoring the audit daemons. MCP automatically restarts the audit daemons and reconstructs the audit directory on each node when the isi_audit_d process is running.

      1. isi services -a isi_audit_d monitor

      2. isi services -a isi_audit_cee monitor

      3. isi services -a isi_audit_syslog monitor

  8. Run the following command to check that audit processes have restarted:

    1. isi_for_array -s pgrep -l isi_audit

  9. Run the following command to verify that audit data was removed and reconstructed:

    1. find /ifs/.ifsvar/audit

  10. Run the following command to re-enable audit logging:

    1. isi audit settings modify --protocol-auditing-enabled=Yes

    2. isi audit settings modify --config-auditing-enabled=Yes (only if enabled before)

  11. Run the following command to verify log files are being populated after audit processes have restarted:

    1. Reset audit log to current day and time

      1. isi audit settings global modify --cee-log-time "Protocol@2017-11-21 04:13:00" (use a current date and time)

    2. isi_audit_viewer -t protocol  

      1. Verify output from this command returns correctly last logged event.

  12. Run the following command to delete the audit backup if they are not needed

    1. rm -rf /ifs/.ifsvar/audit.bak

  13. On ECA master node

    1. ecactl cluster up

    2. Login to Eyeglass and verify the Managed Services icon shows the ECA nodes as active and green.  NOTE: heartbeats take 2-5 minutes before the ECA cluster is completely up

    3. If running Ransomware Defender, run the Security Guard feature to test that audit messages are being processed correctly

  14. End procedure


ECA Cluster Upgrade Procedures

This section covers the steps to upgrade ECA clusters using the offline method.


NOTE: If upgrading to ECA 1.9.5 or later, see the prerequisites for the firewall port changes required for log collection.

Offline ECA upgrade (from 1.9.6 to 2.5.x)

Pre-requisites:

  • Eyeglass Upgraded to same 2.5.x release

  • Follow the instructions posted here for additional Eyeglass pre-requisites.

Overview of steps:

  • Stop ECA

  • Increase memory allocation to each ECA node to 16G (requires vCenter access and privileges)

  • Remount high performance NFS mount with new mount path

  • Upgrade

  • Modify ECA configuration required for 2.5.x

  • Start ECA

Detailed Steps

  1. Login to the master node (node 1) via ssh

  2. ecactl cluster down

  3. Upgrade each ECA node to 16 GB of memory from the default 8 GB

    1. Login to the master ECA node VM as ecaadmin using ssh

    2. sudo -s

    3. Type ”shutdown”

    4. Login to vCenter and wait until the VM shows “powered off”

    5. Edit the VM settings and increase memory to 16 GB

    6. Start the VM

    7. Verify login after waiting 1-2 minutes for boot time

    8. Repeat for each ECA node.


  4. Remount the high-performance NFS mount before continuing (repeat on all 3 ECA VMs).  This step is required to remove the old host mount because a new mount path is used in 2.5.x.

    1. sudo -s

    2. Enter ecaadmin password

    3. vim /etc/fstab

    4. Remove the mount nfs lines to the Isilon clusters that were added for ECA.

    5. Save file

    6. Now unmount the export

      1. Type ‘mount’

      2. Verify the mount path in the output to the Isilon clusters.

      3. umount  /opt/superna/mnt/ (note use mount path from the step above)

      4. Verify that unmounted by typing mount again - the path that was unmounted should no longer be listed.

    7. run the following command (required for upgrade from 1.9.x)

sudo chown -R ecaadmin:ecaadmin /opt

    1. Repeat for each ECA node.

    2. Now follow instructions here in this document to remount with the new mount path.


  5. Download the offline file from the support site and scp the file to the node

  6. Copy the file into /opt/superna (it must be copied to this directory)

  7. cd /opt/superna

  8. chmod 755 <name of the eca-offline .run file>

  9. Exit back to the ecaadmin user - the upgrade must be run as the ecaadmin user

  10. ./eca-offline-1.9.2-17112.run  (example only)

  11. This will automatically run the ECA upgrade

  12. You will be prompted to enter the ecaadmin password for the other nodes to complete the cluster upgrade. (The prompt says root, but sudo has been used to run root-level commands.)

  13. Once completed successfully:

  14. Edit the /opt/superna/eca/eca-env-common.conf file to update the configuration for 2.5.x

    1. Modify the ECA Cluster Name if it contains an underscore or uppercase letters (not supported for 2.5.1)

      1. Verify the cluster name - it is assigned to this property:  ECA_CLUSTER_ID

      2. If the name in ECA_CLUSTER_ID includes underscore ( _ ) or upper case letters, edit /opt/superna/eca/eca-env-common.conf and change the name to remove underscore and uppercase letters.

      3. Login to the Eyeglass web page and open Manage Services.   Remove each of the entries with the old ECA Cluster name by clicking on the ‘x’ .

    2. Modify  configuration to create a new database (REQUIRED)

      1. Change the database name in this property: ISILON_HDFS_ROOT

        1. Example original configuration

        2. export ISILON_HDFS_ROOT='hdfs://dssim8003hdfs.ad1.test:8020/ecahbase'

        3. Then this part ecahbase needs to be changed to a new name

        4. Example new configuration

        5. export ISILON_HDFS_ROOT='hdfs://dssim8003hdfs.ad1.test:8020/hbase251'

    3. Remove these environment variables (REQUIRED) - (values now set in default configuration file or are no longer required)

      1. Section ## EXTRA SERVICES

      2. Section ## FASTCEE

      3. All Java settings for memory allocation that match the defaults (check /opt/superna/eca/eca-env-defaults.conf).  If custom memory allocation is configured, review it before removing.

      4. export REGISTRY   (set in defaults)

      5. export TURBOAUDIT_CLUSTER_ID

      6. export TURBOAUDIT_CLUSTER_NAME

      7. export TURBOAUDIT_MAX_INPUT_RATE=2000  (if set to any other value than 2000 then do NOT delete)

    4. Comment out this environment variable (REQUIRED)

      1. export ECA_CEEFILTER_BYPASS_HBASE=true

    5. Save your changes.

  15. ecactl cluster up

  16. Login to Eyeglass and verify the Manage Services icon shows green, healthy VMs

Note: this can take 2-3 minutes after cluster up

  17. ECA upgrade completed

  18. Back up the Audit database (required for Easy Auditor) following the instructions here.


Expanding Auditor Cluster for Higher Performance


The ECA cluster is based on a cluster technology for reading and writing data (Apache HBASE) and searching (Apache Spark).

Expand the ECA cluster to increase search performance for large databases or when more than 50 scheduled queries are used.  (A large database contains over 1 billion records.)

NOTE: ECA clusters can be 3, 6 or 9 nodes in size.



How to expand Easy Auditor cluster size


Follow these steps to add 3 or 6 more VMs to increase analytics performance for higher event rates or long-running queries on a large database.

  1. Login to the master ECA node

  2. ecactl cluster down
    Deploy one or two more ECA vAPPs (OVAs). No special config needs to be added on the newly deployed ECA OVA.
    Edit /opt/superna/eca/eca-env-common.conf to add the new node locations:
    export ECA_LOCATION_NODE_4=<IP of node 4>
    export ECA_LOCATION_NODE_5=<IP of node 5>
    Add entries for nodes 4 through 9 as needed.
    ecactl components install eca (accept all defaults)
    ecactl cluster up

  3. This will expand HBASE and Spark containers for faster read and analytics performance

  4. Login to eyeglass and open managed services


  5. Now HBASE needs to balance the load across the cluster for improved read performance.

    1. Login to ECA node 1

    2. ecactl db shell <enter>

    3. balancer <enter>

    4. This command will relocate regions of the database to the new HBASE ECA nodes

  6. done.


How to Enable Real-time Monitoring of ECA Cluster Performance


Use this procedure to enable container monitoring to determine whether the CPU GHz limits are set correctly for query and Isilon write performance.

  1. To enable cadvisor, add the following line to eca-env-common.conf:

  2. export LAUNCH_MONITORING=true

  3. This will launch cadvisor on all cluster nodes.

  4. If you want to launch it on a single node, login to that node and execute:

  5. ecactl containers up -d cadvisor

  6. Once the cadvisor service is running, login to http://<IP OF ECA NODE>:9080 to see the web UI.

  7. Done.
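A minimal sketch of enabling monitoring cluster-wide, assuming (as with the other settings in this guide) that changes to eca-env-common.conf are picked up on the next cluster start:

vim /opt/superna/eca/eca-env-common.conf    # add the line: export LAUNCH_MONITORING=true
ecactl cluster down
ecactl cluster up
# then browse to http://<IP OF ECA NODE>:9080 on any node to open the cadvisor UI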



Ransomware IGLS CLI command Reference


See Eyeglass CLI command for Ransomware

ECA Cluster OS SUSE 42.2 to 42.3 Upgrade Procedures - Offline


Use this procedure when no Internet access is available for the online OS upgrade option.  The procedure requires deploying a new OVF and using a saved copy of the config file to restore all settings.


  1. Login to the master node of current ECA cluster as ecaadmin user

  2. Copy the contents of /opt/superna/eca/eca-env-common.conf to your local computer as backup of the configuration. (SCP the file or copy and paste to a text file)

  3. Shutdown the cluster

    1. ecactl cluster down

  4. Power down all the VM’s from vcenter

  5. Deploy the new OVF (1.9.4 or later) and re-use the same IP addresses and ECA name during deployment.

  6. Login to node 1 as ecaadmin

  7. Edit the main conf file

    1. nano /opt/superna/eca/eca-env-common.conf

    2. Paste backup file contents into the file (note can use SCP to copy the file back to the cluster using ecaadmin user to login)

    3. Ctrl + x

    4. Answer yes to save the file on exit

  8. ecactl cluster up

  9. Verify normal cluster boot process

  10. Login to Eyeglass

  11. Open Service Manager Icon

  12. Wait up to 5 minutes and verify all cluster nodes are green active

  13. Done


ECA Cluster OS SUSE 42.2 to 42.3 Upgrade Procedures - Internet Online

This procedure requires Internet access from the ECA nodes.  If Internet access is not available, use the offline OS upgrade procedure.

IMPORTANT: This procedure must be run AFTER the Offline ECA Upgrade

  1. ssh ecaadmin@x.x.x.x of master node

  2. ecactl cluster down

  3. sudo -s

  4. Enter the ecaadmin password

  5. zypper refresh (requires internet)

  6. zypper update (requires internet) (applies current updates)

  7. Make a backup of the repository definitions (run as root):

    1. cp -Rv /etc/zypp/repos.d /etc/zypp/repos.d.Old

  8. Change all remaining repo URLs to the new version of the distribution:

    1. sed -i 's,42\.2,42.3,g' /etc/zypp/repos.d/*

  9. Refresh new repositories (you might be asked to accept new gpg key)

    1. zypper --gpg-auto-import-keys ref

  10. Upgrade to 42.3

    1. zypper dup --download-in-advance

  11. Repeat on all 3 nodes

  12. Reboot all 3 nodes

  13. Login to master node with ssh after reboot

  14. ecactl cluster up

  15. Verify startup is normal and tables exist

  16. ecactl cluster status


Advanced 2 pool HDFS configuration

This describes a 2 pool configuration with a namenode pool and a datanode pool.  Use only if directed by Support.

  1. Create an hdfspool-namenode pool. It should be used by Hadoop clients to connect to the HDFS namenode service on Isilon, and it should use the dynamic IP allocation method to minimize connection interruptions in the event that an Isilon node fails. For an HDFS workload, round robin is likely to work best. Create a DNS delegation record so that requests for the SmartConnect zone name (hdfs-mycluster.ad1.test, for example) are delegated to the SmartConnect service IP defined on your Isilon cluster.

Note: dynamic IP allocation requires a SmartConnect Advanced license.

Example:

Command:

(OneFS 7.2)

isi networks create pool --name subnet0:hdfspool-namenode --ranges=172.16.88.241-172.16.88.242 --ifaces 1-4:10gige-1 --access-zone eyeglass --zone hdfs-mycluster.ad1.test --sc-subnet subnet0 --dynamic

  2. Create an hdfspool-datanode pool. It should be used for HDFS datanode connections, and it should use the static IP allocation method to ensure that datanode connections are balanced evenly among all Isilon nodes.

This pool is for cluster-internal communication and does not require a SmartConnect zone name.

To assign specific SmartConnect IP address pools for data node connections, you will use the “isi hdfs racks modify” command.

Note: If you do not have a SmartConnect Advanced license, you may choose to use a single static pool for namenode and datanode connections. This may result in some failed HDFS connections immediately after Isilon node failures.

Note: ip_address_range_for_client = the IP range used by the ECA cluster VMs.

Command

(OneFS 7.2)

isi hdfs racks create /hdfs_rack_name --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool


isi networks modify pool --name subnet:pool --access-zone=access_zone_name_for_hdfs


Example:

isi hdfs racks create /hdfs-rack0 --client-ip-ranges=0.0.0.0-255.255.255.255  --ip-pools=subnet0:hdfspool

isi networks modify pool --name subnet0:hdfspool --access-zone=eyeglass

(Onefs 8.0)

isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool

Example:

isi hdfs racks create /hdfs-rack0 --client-ip-ranges=0.0.0.0-255.255.255.255 --ip-pools=subnet0:hdfspool-datanode --zone=eyeglass

isi hdfs racks list --zone=eyeglass

Name        Client IP Ranges        IP Pools

-------------------------------------------------------------

/hdfs-rack0 0.0.0.0-255.255.255.255 subnet0:hdfspool

-------------------------------------------------------------

Total: 1


ECA Config file tag Definitions

Note: Use only if directed by support


Tag | Definition | Impact
export ECA_CEEFILTER_BYPASS_HBASE=TRUE | Disables storing audit events with Ransomware Defender | The last hour of history will not work if set to true.  Use only if support directs you to change defaults.
export LAUNCH_EXTRA_SERVICES=true | Enables 2 CEE instances per ECA node using port 12229 | Only use if directed by support
< 1.9.6: export TURBOAUDIT_CLUSTER_NAME=, export TURBOAUDIT_CLUSTER_ID= ; > 2.0: export TURBOAUDIT_MAX_QUEUE_DEPTH=1000, export TURBOAUDIT_OUTPUT_RMQ_MULTIPLICITY=0, export TURBOAUDIT_BATCH_MS=0, export TURBOAUDIT_BATCH_SIZE=0 | TurboCEE tuning values | Do not change; leave at defaults.
export TURBOAUDIT_MAX_INPUT_RATE=2000 | Rate limit on the ingress audit rate for Turbo Audit | Can be changed to increase processing speed; this will increase CPU utilization of the cluster.
export ECA_BUFFER_DB_FLUSH_MILLIS=1000, export RMQ_MAX_QUEUE_LENGTH=50000, export DEAD_LETTER_EXCHANGE_NAME=eca_dead_letter, export DEAD_LETTER_ROUTE_KEY=route_eca_dead_letter | - | Use only if directed by support
export DEAD_LETTER_ROUTE_KEY=route_eca_dead_letter