DS210: DataStax Enterprise 6 Operations with Apache Cassandra™

Configuring clusters: YAML #

cassandra.yaml - the main configuration file

  • Cassandra nodes read this file on start-up
    • restart of the node is needed for the changes to take effect
  • Located in the following directories:
    • cassandra package installations: /etc/dse/cassandra
    • cassandra tarball installations: ${install_root}/resources/cassandra/conf

Minimal properties #

  • cluster_name
  • listen_address
  • native_transport_address (the IP address that clients use to connect to the node)
  • seeds
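A minimal cassandra.yaml sketch covering the four properties above (the addresses are hypothetical, for illustration only):

cluster_name: 'MyCluster'
listen_address: 10.0.0.11
native_transport_address: 10.0.0.11
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.11,10.0.0.12"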

Commonly used YAML settings #

  • endpoint_snitch

  • initial_token / num_tokens

  • commitlog_directory

  • data_file_directories

  • hints_directory

  • saved_caches_directory

  • hinted_handoff_enabled

  • max_hint_window_in_ms

  • row_cache_size_in_mb

  • file_cache_size_in_mb

  • memtable_heap_space_in_mb/memtable_offheap_space_in_mb

Cluster sizing #

  • estimates are only a rough order of magnitude due to metadata
  • things to consider when estimating cluster size:
    • throughput - how much data per second?
    • growth rate - how fast does capacity increase?
    • latency - how quickly must the cluster respond?

Throughput:

  • measure throughput in data movement per time period (e.g. GB/s)
  • consider reading and writing separately
  • a function of:
    • operation generators (e.g. users)
    • rate of operation generations (e.g. 3 clicks per minute)
    • size of the operations (number of rows X row width)
    • operation mix (read/write ratio)
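A rough worked example with made-up numbers: 10,000 users × 3 writes/minute × 2 KB per write ≈ 500 writes/s ≈ 1 MB/s of write throughput (estimate reads separately).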

Growth rate:

  • how big must the cluster be just to hold the data?
  • given the write throughput, we can calculate growth
    • what is the new/update ratio?
    • what is the replication factor?
    • additional headroom for operations
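Continuing the hypothetical example above: 1 MB/s of writes ≈ 86 GB/day raw; with RF=3 that is ≈ 260 GB/day, and if roughly half of those writes are updates rather than new data, growth is ≈ 130 GB/day before operational headroom.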

Latency:

  • calculating cluster capacity is not enough
  • understand your SLAs
    • in terms of latency
    • in terms of throughput
  • relevant factors:
    • IO rate
    • workload shape
    • access patterns
    • table width
    • node profile (i.e. cores, memory, storage, network)
  • improve estimates with benchmarking

Cassandra stress #

cassandra-stress = utility for benchmarking/load testing a cluster

  • simulates a user-defined workload
  • use the cassandra-stress to:
    • determine schema performance
    • understand how your database scales
    • optimize your data model and settings
    • determine production capacity
  • try out your database before you go into production

You can configure cassandra-stress with a YAML profile file:

  • define your schema
  • specify any compaction strategy
  • create a characteristic workload
  • without writing a custom tool

config sections (see the example profile after this list):

  • schema description
    • defines the keyspace and table information
    • if the schema is not yet defined the test will create it
    • if the schema already exists, only defines the keyspace and table names
  • column description
    • describes how to generate the data for each column
    • the data values are meaningless, but simulate the patterns in terms of size and frequency
    • generated values follow a specified distribution such as Uniform, Exponential, Gaussian
    • parameters include:
      • data size
      • value population
      • cluster distribution (the number of values for the column appearing in a single partition (cluster columns only))
        • EXP(min…max) - an exponential distribution over the range
        • EXTREME(min…max, shape) - an extreme value distribution over the range
        • GAUSSIAN(min…max, stdvrng) - a gaussian/normal distribution, where mean=(min+max)/2 and stdev is (mean-min)/stdvrng
        • GAUSSIAN(min…max, mean, stdev) - a gaussian/normal distribution, with explicitly defined mean and stdev
        • UNIFORM(min…max) - a uniform distribution over the range
        • FIXED(val) - a fixed distribution, always returning the same value
  • batch description
    • specifies how the test inserts data
    • for each insert operation, specifies the following distributions/ratios:
      • partition distribution - number of partitions to update per batch (default FIXED(1))
      • select distribution ratio - portion of rows from a partition included in a particular batch (default FIXED(1)/1)
      • batch type - the type of CQL batch to use; either LOGGED/UNLOGGED (default LOGGED)
  • query description
    • you can specify any CQL query on the table by naming them under the queries section
    • fields specifies if the bind variables should be from the same row or across all rows in the partition
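A minimal cassandra-stress profile sketch tying these sections together (keyspace, table, and column names are hypothetical):

keyspace: stress_ks
keyspace_definition: |
  CREATE KEYSPACE stress_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
table: readings
table_definition: |
  CREATE TABLE readings (id uuid, ts timestamp, value text, PRIMARY KEY (id, ts));
columnspec:
  - name: value
    size: gaussian(50..500)
  - name: ts
    cluster: uniform(10..100)
insert:
  partitions: fixed(1)
  select: fixed(1)/1
  batchtype: UNLOGGED
queries:
  read1:
    cql: select * from readings where id = ? limit 10
    fields: samerow

Run it with something like: cassandra-stress user profile=stress.yaml ops(insert=3,read1=1) n=100000 -rate threads=50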

Nodetool for performance analysis #

nodetool sub-commands:

  • info
  • compactionhistory
  • gcstats (gets Java’s GC statistics)
  • gossipinfo
  • ring (gets info about tokens range assignments)
  • tablestats
  • tablehistograms
  • tpstats

Low GC times are desirable so Cassandra can spend more time servicing requests.
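Typical invocations when investigating performance (the keyspace/table names are placeholders):

nodetool tpstats
nodetool tablestats my_keyspace.my_table
nodetool tablehistograms my_keyspace my_table
nodetool compactionhistory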

System and output logs #

  • by default, the log file is in /var/log/cassandra/system.log
  • also check debug.log in the same directory
  • system.log logs INFO messages and above
  • debug.log logs all messages
  • change the location by adding the following line to /etc/dse/cassandra/jvm.options:
-Dcassandra.logdir=${PATH_TO_NEW_LOG_DIR}

Logging levels:

  • OFF
  • ERROR
  • WARN
  • INFO (default)
  • DEBUG
  • TRACE
  • ALL

Logging configuration:

  • logback.xml (in the same directory as cassandra.yaml)
  • nodetool setlogginglevel - sets log level for particular Java class, until node restart
nodetool getlogginglevels
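For example, to raise gossip logging to TRACE on one node without restarting it, and then verify the change:

nodetool setlogginglevel org.apache.cassandra.gms.Gossiper TRACE
nodetool getlogginglevels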

JVM GC logging #

Turn on GC logging:

  • statically by editing jvm.options
    • -XX:+PrintGC - simple, prints a line for every GC and every full GC
    • -XX:+PrintGCDetails - detailed, young generation as well as old and perm gen
    • -XX:+PrintGCTimeStamps - adds time to a simple or detailed GC log
    • -XX:+PrintGCDateStamps - adds date to a simple or detailed GC log
  • dynamically by using jinfo
    • jinfo -flag +PrintGC ${NODE_PID}
    • jinfo -flag +PrintGCTimeStamps ${NODE_PID}
    • jinfo -flag +PrintGCDateStamps ${NODE_PID}

In either case, edit /etc/dse/cassandra/jvm.options to specify the GC log file:

-Xloggc:${PATH_TO_GC_LOG_FILE}

Adding/removing nodes #

  • reached data capacity problem
    • your data has outgrown the node’s hardware capacity
  • reached traffic problem
    • your application needs more rapid response with less latency
  • to increase operational headroom
    • need more resources for node repair, compaction, and other resource intensive operations

Adding nodes, best practices #

  • Single-Token nodes

    • double the size of your cluster
  • VNodes

    • we can add nodes incrementally
  • adding a single node at a time will:

    • result in more data movement
    • will have a gradual impact on cluster performance
    • will take longer to grow cluster
  • adding multiple nodes at the same time:

    • is possible
    • use extreme caution

Bootstrapping #

Bootstrapping is the process of a new node joining a cluster:

  • the joining node contacts a seed node
  • the seed node communicates cluster info, including token ranges, to the joining node
  • cluster nodes prepare to stream necessary SSTables
  • cluster nodes stream SSTables to the joining node (can be time consuming)
  • existing cluster nodes continue to satisfy writes, but also forward writes to the joining node
  • when streaming is complete, joining node changes to normal state and handles read/write requests

To bootstrap a node:

  • set up the node’s configuration files (cassandra.yaml, etc.)
    • four main parameters:
      • cluster_name
      • native_transport_address
      • listen_address
      • -seeds
    • start up the node normally

A seed node is just one of the cluster’s nodes.
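While the new node is joining, you can watch its progress from any node; a joining node shows up with status UJ (Up/Joining) and netstats shows the active streams:

nodetool status
nodetool netstats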

When bootstrapping fails, we have two scenarios:

  • bootstrapping node could not connect to cluster
    • examine the log file to understand what’s going on
    • change config and try again
  • streaming portion fails
    • node exists in cluster in joining state
    • first, try restarting the node
    • if restarting fails, try deleting data directories and rebooting
    • or, worst case, remove the node from the cluster and try again

Node cleanup #

  • perform cleanup after a bootstrap on the OTHER nodes
  • reads all SSTables to make sure there is no data whose token is out of range for that particular node
  • if an SSTable has no out-of-range data, cleanup just copies it
  • there are options for running these operations in parallel
bin/nodetool cleanup
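For example (the keyspace name is a placeholder; -j, if supported by your version, limits how many SSTables are cleaned in parallel):

nodetool cleanup -j 2 my_keyspace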

Removing a node #

Other nodes need to pick up the removed node’s data.

The cluster needs to know the node is gone.

Three options for dealing with the data:

  • redistribute data from the node that is going away
    • nodetool decommission
      • use this when you need to decrease the size of the cluster
      • the node must still be active
      • decommission will transfer the data from the decommissioned node to other active nodes in the cluster
        • with VNodes, the rebalance happens automatically
        • with Single-token nodes, you will need to manually rebalance the token ranges on the remaining nodes
      • after running the nodetool decommission command:
        • the node is offline
        • the JVM process is still running (use dse cassandra-stop to kill the process)
        • the data is not deleted from the decommissioned node
        • if you want to add the node back to the cluster, delete the data first!
          • not deleting the data may cause data resurrection issues
  • redistribute the data from replicas
    • nodetool removenode
      • do this if node is offline and never coming back
      • you can run this command only from another node
      • nodetool removenode will:
        • make the remaining nodes in the cluster aware that the node is gone
        • copy data from online nodes to the appropriate replicas to satisfy the replication factor
  • don’t redistribute the data, just make the node go away
    • nodetool assassinate
      • do this as a last resort if the node is offline and never coming back
      • nodetool assassinate will:
        • make the remaining nodes in the cluster aware that the node is gone
        • NOT copy any data
      • you should use nodetool repair on the remaining nodes to fix the data replication
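Command sketches for the three options (the host ID and IP address are placeholders; decommission runs on the node that is leaving, the other two run from a surviving node):

nodetool decommission
nodetool removenode 1a2b3c4d-aaaa-bbbb-cccc-1234567890ab
nodetool assassinate 10.0.0.15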

Replacing a down node #

Benefits of replacing a downed node:

  • you don’t have to move the data twice
  • backup for a node will work for a replaced node, because same tokens are used to bring replaced node into cluster
  • best option is to replace rather than remove and add

Replacing a downed node using nodetool:

  • configure a new node for the cluster normally with one additional step:
    • in jvm.options add a replace_address JVM option with the IP address of the replaced node:
-Dcassandra.replace_address=${DEAD_NODE_IP_ADDRESS}
  • once you have configured the node, start the node in the cluster
  • monitor the bootstrapping process using nodetool netstats
  • after the new node is bootstrapped, you need to remove this option from jvm.options manually

What if the downed node was also a seed node?

  • make sure the old IP address does not appear in seeds list in cassandra.yaml
  • also make sure the new IP address is not in the seeds list in cassandra.yaml
  • perform a rolling restart on all nodes so the nodes are aware of the changes to the seeds list
  • start the replacement node using replace_address in the jvm.options file
  • once the replacement node is fully up:
    • remove replace_address from jvm.options
    • add the replacement node’s IP to the seeds lists in all the nodes' cassandra.yaml

Compaction #

Leveled compaction #

  • leveled compaction uses a multiplier of 10 per level by default

  • SSTable max size is 160MB (sstable_size_in_mb)

  • SSTables can exceed this size to ensure the last partition written is complete

  • Leveled compaction is best for read-heavy workload

    • occasional writes but high reads
  • each partition resides in only one SSTable per level (max)

  • generally reads handled by just a few SSTables

    • partitions group together in a handful of levels as they compact down
    • 90% of the data resides in the lowest level (due to 10x rule)
    • unless the lowest level is not yet full
  • leveled compaction wastes less disk space

  • obsolete records compact out quickly

    • a single partition’s records group as they compact down
    • updated records merge with older records due to this grouping

Disadvantages:

  • IO intensive
  • compacts many more SSTables at once than size-tiered compaction
  • compacts more frequently than size-tiered
  • can’t ingest data at high insert speeds
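Switching a table to leveled compaction is done per table, for example (the table name is a placeholder; 160 MB is the default sstable_size_in_mb):

ALTER TABLE my_keyspace.my_table
WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};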

Size tiered compaction #

Default compaction type

Size tiered compaction triggers compaction based on the number of SSTables.

  • groups similarly sized tables together

  • tiers with less than min_threshold (four) SSTables are not considered for compaction

  • the smaller the SSTables, the “thinner” the distance between min_threshold and max_threshold

  • SSTables qualifying for more than one tier distribute randomly amongst buckets

  • buckets with more than max_threshold SSTables are trimmed to just that many SSTables

    • 32 by default
    • coldest SSTables dropped
  • Size tiered compaction chooses the hottest tier first to compact

  • SSTable hotness is determined by the number of reads per second per partition key

  • cassandra compacts several tiers concurrently

  • concurrent_compactors

    • defaults to the smaller of the number of disks or the number of cores, with a minimum of 2 and a maximum of 8 per CPU core
  • tables concurrently compacting are not considered for new tiers

Triggering a compaction:

  • compaction starts every time a MemTable flushes to an SSTable
  • MemTable too large, commit log too large or manual flush
  • or when the cluster streams SSTable segments to the node
    • Bootstrap, rebuild, repair
  • Compaction continues until there are no more tiers with at least min_threshold tables in it

Tombstones

  • if no eligible buckets, size tiered compaction compacts a single SSTable
  • this eliminates expired tombstones
  • the number of expired tombstones must be above 20%
  • largest SSTable chosen first
  • table must be at least one day old before considered
    • tombstone_compaction_interval
  • compaction ensures that tombstones DO NOT overlap old records in other SSTables

Absorbs high write-heavy workloads by postponing compaction as long as possible

Other compaction strategies don’t handle ingesting data as well as size tiered

compaction_throughput_mb_per_sec controls the compaction IO load on a node
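A size-tiered configuration sketch showing the thresholds discussed above (the table name and values are illustrative; 4 and 32 are the defaults):

ALTER TABLE my_keyspace.my_table
WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 4, 'max_threshold': 32};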

Major compaction #

  • you can issue a major compaction via nodetool
  • compacts all SSTables into a single SSTable
  • new monolithic SSTable will qualify for the largest tier
  • future updates/deletes will fall into smaller tiers
  • data in the largest tier will become obsolete yet still hog a lot of disk space
  • takes a long time for changes to propagate up to the large tier
  • major compactions not recommended
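If you do need to trigger one manually anyway (e.g. in a lab), the keyspace and table arguments are optional:

nodetool compact my_keyspace my_table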

Time window compaction #

Built for time series data

An SSTable spanning two windows simply falls into the second window

Good practice to aim for 50ish max SSTables on disk:

  • 20ish for active window
  • 30ish for all past windows combined

for example: one month of data would use a window of one day

Tuning:

  • expired_sstable_check_frequency_seconds determines how often to check for fully expired (tombstoned) SSTables
  • good to tune when using a TTL
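An illustrative time window configuration for roughly one month of data with one-day windows (the table name is a placeholder):

ALTER TABLE my_keyspace.my_table
WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                   'compaction_window_unit': 'DAYS',
                   'compaction_window_size': 1};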

Repair #

Repair is a consistency check across all nodes that verifies all replicas hold the correct data.

  • Think of repair as synchronizing replicas
  • Repair ensures that all replicas have identical copies of given partition
  • Repair occurs:
    • if necessary when detected by reads (e.g. CL=QUORUM)
    • randomly with non-quorum reads (table property read_repair_chance or dclocal_read_repair_chance)
    • manually using nodetool repair

How does repair work?

  • nodes build Merkle trees from partitions to represent how current the data values are
  • nodes exchange the Merkle trees
  • nodes compare the Merkle trees to identify specific values that need synchronization
  • nodes exchange data values and update their data

Merkle tree

  • a binary tree of hash values
  • the leaves of the tree represent hashes of the values in the partition
  • each tree node is a hash of its children’s hash values
  • when tree-node hashes are the same, the sub-trees are the same

When to perform a repair:

  • if node has been down for a while
  • on a regular basis:
    • once every gc_grace_seconds
    • make sure the repair can complete within the gc_grace_seconds window
    • schedule for lower utilization periods

Is repair a lot of work for the node?

  • a full repair can be a lot of work
  • but there are ways to mitigate the work:
    • primary range repair
    • sub-range repair

Primary range repair:

  • the primary range is the set of tokens the node is assigned
  • repairing only the node’s primary range will make sure that data is synchronized for that range
  • repairing only the node’s primary range will eliminate redundant repairs

Sub-range repair:

  • repairs can consume significant resources depending on how much data is under consideration
  • targeting sub-ranges of the table will reduce the amount of work done by a single repair
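Example invocations (the keyspace name and tokens are placeholders): -pr restricts repair to the node's primary range, and -st/-et restrict it to a sub-range:

nodetool repair -pr my_keyspace
nodetool repair -st -9223372036854775808 -et -3074457345618258603 my_keyspace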

Nodesync #

DSE 6+ replacement for repair.

Behaves like a continuous background repair that delivers:

  • low overhead
  • consistent performance
  • easy to use

How to use:

  • create a cluster with at least 2 nodes
  • create keyspace with RF >= 2
CREATE KEYSPACE MyKeyspace
WITH replication={'class': 'SimpleStrategy', 'replication_factor': 2}; 
  • create a table within the keyspace with NodeSync enabled
CREATE TABLE MyTable (k int PRIMARY KEY)
WITH nodesync={'enabled': 'true'};
  • NodeSync will now automatically make sure the table data is synchronized

sstablesplit #

Breaks large SSTable files into pieces. You need to stop the node before using this tool.
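A usage sketch, assuming the node is already stopped and that your version supports the -s (max size in MB) option; the data file path is a placeholder:

sstablesplit -s 50 /var/lib/cassandra/data/my_keyspace/my_table-*/mc-1-big-Data.db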

Multi DC concepts #

  • node - the virtual or physical host of a single Cassandra instance
  • rack - a logical grouping of physically related nodes
  • DC - a logical grouping of a set of racks
  • enables geographically aware read and write request routing
  • each node belongs to one rack in one DC
  • the identity of each node’s rack and DC may be configured in its conf/cassandra-rackdc.properties file
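A cassandra-rackdc.properties sketch (the DC and rack names are up to you):

dc=DC1
rack=RACK1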

implementing a multi DC cluster:

  • use the NetworkTopologyStrategy rather than SimpleStrategy
  • use LOCAL_* consistency level for read/write operations to limit latency
  • specify the snitch
ALTER KEYSPACE MyKeyspace
WITH replication={'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 2};
nodetool rebuild -- name_of_existing_data_center

CQL copy #

  • Cassandra expects every row in the delimited input to contain the same number of columns
  • the number of columns in the delimited input is the same as the number of columns in the Cassandra table
  • empty data for a column is assumed to be a NULL value by default
  • COPY FROM is intended for importing small datasets (a few million rows or less) into Cassandra
  • for importing larger datasets, use DSBulk

options:

  • DELIMITER (default is comma)
  • HEADER (default is false)
  • CHUNKSIZE - set the size of chunks passed to worker process (default value is 1000)
  • SKIPROWS - the number of rows to skip (default value is 0)
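An illustrative import (the file, keyspace, table, and column names are placeholders):

COPY my_keyspace.my_table (id, name, value)
FROM 'data.csv'
WITH HEADER = true AND DELIMITER = ',';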

sstabledump #

dumps the content of the specified SSTable in the JSON format

you may wish to flush the table to disk before dumping its contents
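A usage sketch (the keyspace, table, and SSTable path are placeholders):

nodetool flush my_keyspace my_table
sstabledump /var/lib/cassandra/data/my_keyspace/my_table-*/mc-1-big-Data.db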

sstableloader #

provides the ability to:

  • bulk load external data into a cluster
  • load into a pre-existing cluster or a new cluster
  • a cluster with the same number of nodes or a different number of nodes
  • a cluster with different replication strategy or partitioner

it doesn’t simply copy the set of SSTables to every node, but transfers the relevant parts of the data to each node, conforming to the replication strategy of the cluster.
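A usage sketch (the target node addresses and the path, which must end in .../keyspace_name/table_name/, are placeholders):

sstableloader -d 10.0.0.11,10.0.0.12 /tmp/load/my_keyspace/my_table/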

DSE DsBulk #

Moves Cassandra data to/from files in the file system

Supports both CSV and JSON formats
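Illustrative load/unload commands (the keyspace, table, and file names are placeholders):

dsbulk load -url data.csv -k my_keyspace -t my_table -header true
dsbulk unload -k my_keyspace -t my_table -url /tmp/export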

Backup #

Cassandra uses snapshots to back up data, because:

  • they don’t copy out all the data from the DB

  • it’s a distributed system; every node has only a portion of the data

  • SSTables are immutable, which makes them easy to back up

  • snapshots create hard links on the file system as opposed to copying data

    • this is different from copying actual data files (takes less disk space)
  • therefore very fast

  • represents the state of the data files at a particular point in time

  • can consist of a single table, a single keyspace, or multiple keyspaces

incremental backup:

  • create a hard link to every SSTable upon flush
    • user must manually delete them after creating a new snapshot
  • incremental backups are disabled by default (cassandra.yaml, incremental_backups: true)
  • need a snapshot before taking an incremental backup
  • snapshot information is stored in a snapshots directory under each table directory

backups are stored per node and contain only data from that node.

nodetool snapshot
nodetool clearsnapshot
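For example, taking and later removing a tagged snapshot of one keyspace (the tag and keyspace names are placeholders):

nodetool snapshot -t backup_2019_01_01 my_keyspace
nodetool clearsnapshot -t backup_2019_01_01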

JVM settings #

JVM memory areas:

  • code

  • stack

  • heap (where Java programs allocate and deallocate transient memory)

  • GC refers to when the JVM reclaims the deallocated memory in the heap

Settings:

  • MAX_HEAP_SIZE (set to a max of 8 GB)
    • large heaps can introduce GC pauses that lead to latency
  • HEAP_NEWSIZE (set to 100 MB per core)
    • the larger this is, the longer GC pause times will be; the shorter it is, the more frequently GC will run
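These sizes are usually set via jvm.options (or cassandra-env.sh); a sketch for a hypothetical 8-core node, with values chosen purely for illustration:

-Xms8G
-Xmx8G
-Xmn800M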

Garbage collection #

DSE 6+ keeps one core available for GC and other maintenance activities

What to consider when tuning GC:

  • pause time
    • length of time the collector stops the application while it frees up memory
  • throughput
    • determined by how often the GC runs and pauses the application
    • the more often the GC runs, the lower the throughput
  • we want to minimize length of pauses as well as frequency of collection

JVM available memory:

  • Permanent generation
  • new generation (ParNew)
    • contains:
      • eden
      • 2 survivor spaces
    • once eden fills up with new objects, the JVM triggers a minor GC
    • a minor GC stops execution, iterates over the objects in eden, copies any objects that are not (yet) garbage to the active survivor space, and clears eden
    • if the minor GC fills up the active survivor space, it performs the same process on the survivor space
    • objects that are still active are moved to the other survivor space, and the JVM clears the old survivor space
    • it’s a stop-the-world algorithm
    • fast:
      • finding and removing garbage
    • slow:
      • moving active objects from eden to survivor space
      • moving active objects from survivor spaces to the old gen
  • old generation (CMS)
    • contains objects that have survived long enough to not be collected by a minor GC
    • the CMS collector runs when the old generation is 75% full

Full GC:

  • multi-second GC pauses = Major collections happening
  • if the old gen fills up before the CMS collector can finish, the application is paused while a full GC runs
  • checks everything: new gen, old gen and perm gen
  • significant (multi-second) pauses

Heap dump #

  • useful when troubleshooting high memory utilization or OutOfMemoryErrors
  • shows exactly which objects are consuming most of the heap
  • Cassandra starts Java with the option -XX:+HeapDumpOnOutOfMemoryError
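You can also capture a heap dump manually with the JDK's jmap tool (the PID and output path are placeholders):

jmap -dump:format=b,file=/tmp/cassandra-heap.hprof ${NODE_PID}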

Tuning the kernel #

Time sync

  • Cassandra nodes identify valid data using timestamps
    • all nodes within a Cassandra cluster need to have synchronized clocks
  • Time Stamp Counter (TSC) is a simple register within the CPU that counts the number of clock cycles
    • over time, TSC will drift because the clock cycles may vary between CPUs
  • Network Time Protocol (NTP) is a way to synchronize CPU clocks
    • nodes communicate with a hierarchy of time servers to adjust their clocks
    • adjustments occur every 1-20 minutes
# to view current limits
ulimit -a

Since Cassandra nodes don’t need to share resources, these limits are not helpful. Turn them off globally by editing limits.conf. Limits take effect when you log in. For Ubuntu, use root instead of *.

  • * - nofile 1048576
  • * - memlock unlimited
  • * - fsize unlimited
  • * - data unlimited
  • * - rss unlimited
  • * - stack unlimited
  • * - cpu unlimited
  • * - nproc unlimited
  • * - as unlimited
  • * - locks unlimited
  • * - sigpending unlimited
  • * - msgqueue unlimited

Swap:

  • for Cassandra, swapping is a very bad event
  • you are better having a node go down than limp along swapping
  • to thoroughly disable swap:
    • turn off swap for the current kernel process
    • remove swap entries from fstab
    • change the swappiness setting
  • you can check the current list of swap devices by:
swapon -s
  • you can turn off swap without rebooting
  • this command will not persist (i.e. will not survive a reboot):
swapoff -a
  • to look at the current swappiness settings use:
cat /proc/sys/vm/swappiness
  • this value has a range of 0-200 (0 is low and 200 is high)
  • to make sure your kernel disables swapping after a reboot, edit /etc/sysctl.conf
  • change or add a line to set vm.swappiness = 0
  • use sysctl -p to get the kernel to reload the changes made to /etc/sysctl.conf

Changing network kernel settings

  • net.ipv4.ip_local_port_range = 10000 65535
  • net.ipv4.tcp_window_scaling = 1
  • net.ipv4.tcp_rmem = 4096 87380 16777216
  • net.ipv4.tcp_wmem = 4096 65536 16777216
  • net.core.rmem_max = 16777216
  • net.core.wmem_max = 16777216
  • net.core.netdev_max_backlog = 2500
  • net.core.somaxconn = 65000

Hardware selections #

  • persistent storage type
    • avoid:
      • SAN storage
      • NAS devices
      • NFS
    • Need to use SSDs
  • memory
    • for both bare metal and VMs:
      • prod: 16-64GB; the minimum is 8GB
      • dev in non-loading testing environments: no less than 4GB
    • more memory means:
      • better read performance due to caching
      • memtables hold more recently written data
  • CPU
    • Cassandra is highly concurrent and uses as many CPU cores as available
    • prod:
      • for bare metal: 16-core CPUs are the current price-performance sweet spot
    • dev:
      • 2-4 core CPUs
  • network
    • you should bind your OS interface to a separate Network Interface Card (NIC)
    • recommended bandwidth is 1000 Mbit/s or greater
    • native protocols use the native_transport_address
    • cassandra’s internal storage protocol uses the listen_address

Security considerations #

Authentication #

  • disabled by default
  • when enabled, client programs must supply a username and password:
  • enable in dse.yaml

Apache Cassandra supports only pluggable authentication mechanisms.
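A sketch of enabling the internal scheme with DSE's DseAuthenticator, assuming the usual DSE layout: set the authenticator in cassandra.yaml and enable it in dse.yaml:

# cassandra.yaml
authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator

# dse.yaml
authentication_options:
    enabled: true
    default_scheme: internal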

DseAuthenticator supports three schemes:

  • internal
    • need to restart the node(s)
    • log in as cassandra with password cassandra
    • the cassandra user is a superuser - has all permissions:
      • change the default cassandra password
      • Cassandra stores the credentials in the system_auth keyspace, so losing data here could be disastrous
ALTER USER cassandra WITH PASSWORD 'new_pass';

ALTER KEYSPACE "system_auth"
WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': 2};
  • LDAP
  • Kerberos

Cassandra users:

CREATE TABLE system_auth.roles (
  role text PRIMARY KEY,
  can_login boolean,
  is_superuser boolean,
  member_of set<text>,
  salted_hash text
)

Role operations:

CREATE ROLE SomeRole
WITH PASSWORD = 'some_pass'
AND LOGIN = true;

LIST ROLES;

DROP ROLE SomeRole;

Auth best practices:

  • create a second superuser role
  • change the default cassandra password and forget it
  • be sure to replicate system_auth keyspace

Authorization #

GRANT SELECT ON someKeyspace.someTable TO SomeRole;

Permissions:

  • ALTER
    • A. KEYSPACE
    • A. TABLE
    • CREATE INDEX
    • DROP INDEX
  • AUTHORIZE
    • GRANT
    • REVOKE
  • CREATE
    • C. KEYSPACE
    • C. TABLE
  • DROP
    • D. KEYSPACE
    • D. TABLE
  • MODIFY
    • INSERT
    • DELETE
    • UPDATE
    • TRUNCATE
  • SELECT
    • SELECT
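For example, reviewing and revoking permissions for a role (keyspace, table, and role names match the earlier examples):

LIST ALL PERMISSIONS OF SomeRole;
REVOKE SELECT ON someKeyspace.someTable FROM SomeRole;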

OpsCenter and Lifecycle #

WebUI for DSE

Lifecycle Manager (LCM) - mostly configuration and deployment

OpsCenter Monitoring - monitoring and management