system

Run operations that affect the whole system or return overall status information.
system appliance update <image-name> [ ignoreoff ] [ factory ]
        	WARNING: The 'factory' parameter will delete all data on this system
        	NOTE: The image must be copied to /tmp on the primary manager node
system appliance update list
system appliance update reattach
system appliance version
system blackout [ minutes ]
system chassis beacon <chassis> on | off
system chassis setup
system chassis status
system cmp beacon <chassis>/<cmp> on | off
system diagnostics [ fast | full ]
system diagnostics event <event-time> [ event-window ]
system diagnostics list
system factory
       WARNING: The 'system factory' command will delete all data on this system
system failover [ force ]
system maintenance [ on | off ]
system register <tenant-name> <system-name>
system register list
system register <certificate>
system session kill <id>
system session show
system shutdown
system status
system timestamp [ on | off ]
appliance update
Update the Yellowbrick appliance. Specify the software image, which must first be copied to the /tmp directory on the primary manager node. The ignoreoff option causes any blades that are powered off to be ignored; no attempt is made to power-cycle them.
Note: Any permissions that were granted to users on the sys schema are not preserved when the software is upgraded. The DBA will need to reapply these permissions.
The factory option resets the system to its factory defaults.
Warning: Using the factory option will delete all of the data on the database system.
appliance update list
List the installers that exist on the system. For example:
YBCLI(29459) (PRIMARY - yb00-mgr0)> system appliance update list

The following Yellowbrick installers exist on this system:
	Yellowbrick installer: /tmp/ybd-3.0.0-12050-release
	Yellowbrick installer: /tmp/ybd-3.0.0-12086-release
...
appliance update reattach
Use the reattach option to reattach to an installation that was interrupted because of a loss of network connectivity. For example:
YBCLI(22827) (PRIMARY - yb100-mgr0)> system appliance update reattach
 
This command will look for previous system update sessions still running and attach to them.
Note: Only use this command if a previous install was interrupted due to loss of network connectivity.
Continue (yes/no)? yes
...
appliance version
Return the appliance and ybcli versions that are running. This command also returns the date and time of the last software upgrade. For example:
YBCLI(4885) (PRIMARY - yb00-mgr0)> system appliance version
YBCLI version                : 3.1.0-1190
YBD appliance version        : 3.1.0-1190-release
YBD appliance SHA            : 4b4f4821a66dabb06e7f8cdb005592fc15e984d1
Software update in-progress  : NO
Software last updated        : 10-29-2019 at 11:21:03
Software update history      :
	 3.0.5-14201 -> 3.1.0-1031   (succeeded) - 10-15-2019 at 13:51:35
	  3.1.0-1031 -> 3.1.0-1167   (succeeded) - 10-28-2019 at 05:12:34
	  3.1.0-1167 -> 3.1.0-1179   (succeeded - FACTORY) - 10-28-2019 at 14:41:20
	  3.1.0-1179 -> 3.1.0-1187   (succeeded) - 10-28-2019 at 21:14:19
	  3.1.0-1187 -> 3.1.0-1190   (succeeded - FACTORY) - 10-29-2019 at 11:21:03
blackout [ minutes ]
Suppress alerts on the system. The default blackout period is 30 minutes.
YBCLI(16533) (PRIMARY - yb00-mgr0)> system blackout

Doing a system blackout means all alerts will be suppressed for 30 minutes.
Continue (yes/no)? yes

This system has entered an alert blackout period of 30 minutes
You can also specify the length of time for the blackout, using a range of 30 to 1440 minutes. For example:
YBCLI(17528) (PRIMARY - yb00-mgr0)> system blackout 60

Doing a system blackout means all alerts will be suppressed for 60 minutes.
Continue (yes/no)? yes

This system has entered an alert blackout period of 60 minutes
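The argument check described above can be sketched as a small helper (hypothetical; ybcli performs this validation internally):

```python
# Hypothetical sketch of the blackout argument check: the period
# defaults to 30 minutes and must be between 30 and 1440 minutes.
DEFAULT_MINUTES = 30
MIN_MINUTES, MAX_MINUTES = 30, 1440

def blackout_minutes(arg=None):
    """Return the validated blackout period in minutes."""
    if arg is None:
        return DEFAULT_MINUTES
    minutes = int(arg)
    if not MIN_MINUTES <= minutes <= MAX_MINUTES:
        raise ValueError(f"blackout must be {MIN_MINUTES}-{MAX_MINUTES} minutes")
    return minutes
```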
chassis beacon
Specify the chassis number (0, 1, or all) and on or off:
  • on: turns on the blue LEDs for all installed blades in the specified chassis. The blue LED on the front panel of the chassis is also turned on if it is not already blinking.
  • off: turns off the blue LEDs.

The command also accepts chassis0 and chassis1 as alternatives to 0 and 1, and 0-1 as an alternative to all.

For example, turn on the LEDs for chassis 0:
YBCLI (PRIMARY)> system chassis beacon 0 on

Chassis 0 beacon turned on
For example, turn off the LEDs on both chassis:
YBCLI (PRIMARY)> system chassis beacon all off

Chassis 0 beacon turned off
Chassis 1 beacon turned off
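The accepted chassis spellings listed above can be sketched as a normalization helper (hypothetical; the actual parsing is internal to ybcli):

```python
# Hypothetical sketch: normalize the accepted chassis arguments
# (0, 1, chassis0, chassis1, all, 0-1) to a list of chassis numbers.
def parse_chassis(arg):
    arg = arg.lower()
    if arg in ("all", "0-1"):
        return [0, 1]
    if arg.startswith("chassis"):
        arg = arg[len("chassis"):]
    number = int(arg)
    if number not in (0, 1):
        raise ValueError("chassis must be 0, 1, or all")
    return [number]
```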
chassis setup
Detect and configure an additional chassis on the appliance. Yellowbrick appliances support single-chassis and dual-chassis configurations. After running this command, respond to the prompts, as shown in the following example.
Note: You cannot run this command by accessing the system through the floating IP address. Use the dedicated IP address for the primary manager node.
YBCLI (PRIMARY)> system chassis setup

This command detects and sets up additional chassis on an existing system.
Note: All chassis must be connected to the system and powered up.
WARNING: While a chassis is being configured, the database will be shut down.

Type yes to continue: yes
Running system chassis setup

Note: In the field, Yellowbrick appliances support expansion only (the addition of chassis or blades).
Are you sure you want to expand the number of chassis on this system?

Type yes to continue: yes

Stopping Yellowbrick services prior to system chassis setup. Standby...  Done
Preparing network for multi-chassis detection. Standby... Done

Detecting chassis configuration...

Manager node is ready for multi-chassis configuration

Remote manager node
-------------------

Manager node is ready for multi-chassis configuration

Configuring chassis on this cluster
Supported chassis : 2
Detected chassis  : 2
	Chassis: 0 - Address: 192.168.2.4
	Chassis: 1 - Address: 192.168.5.4
	Chassis: 2 - Not Installed
	Chassis: 3 - Not Installed

Configuring HA. Standby... Waiting for HA stack to initialize. Standby... Done

2 chassis have been configured successfully.

The database can now be started with the 'database start' command.
chassis status
Return the status of the chassis configuration on the appliance. Yellowbrick appliances support single-chassis and dual-chassis configurations. For example:
YBCLI (PRIMARY)> system chassis status

Chassis configuration
---------------------
Found: 2 - Configured: 2

Retrieving chassis wiring details...

Chassis processor wiring
------------------------
Chassis: 0 -> CMP1 - Serial: TAB18050311170 - MAC: 38:D2:69:45:65:7E
Chassis: 0 -> CMP2 - Serial: TAB18050311164 - MAC: 38:D2:69:44:C2:21
Chassis: 1 -> CMP1 - Serial: TAB1803281113C - MAC: 38:D2:69:45:56:3A
Chassis: 1 -> CMP2 - Serial: TAB18050311174 - MAC: 38:D2:69:45:65:27

Retrieving chassis blade details...

Chassis: 0
----------
Blades installed: 11

Chassis: 1
----------
Blades installed: 11
cmp beacon <chassis>/<cmp> on | off
Turn the beacon LED for the specified chassis management processor (CMP) on or off. Specify the chassis number and the CMP number. For example:
YBCLI (PRIMARY)> system cmp beacon 0/1 on
diagnostics [ fast | full ]
Send a diagnostics report to Yellowbrick for Customer Support to investigate. A copy of the report is also left in the /tmp directory on the manager node.

The fast option does not check the status of the hardware or the blades. Because the analysis covers fewer subsystems, the report is generated much faster. The full option retrieves more detailed diagnostics and can take a significant amount of time. (It may also disrupt other operations on the system.) Do not specify fast or full unless requested to do so by Customer Support.

For example:
YBCLI (PRIMARY)> system diagnostics

This command will gather system diagnostics information and send it to Yellowbrick Data.
Are you sure you want to do this?

Type yes to continue: yes

Retrieving system log...Done
Retrieving installer log...Done
Retrieving cluster manager log...Done
Retrieving front-end database log...Done
Retrieving kernel log...Done
Retrieving HW events from blades...Done
Retrieving HW events from managers...Done
Retrieving possible blade asserts/crashes...Done
Retrieving minidumps...Done
Retrieving cluster manager status...Done
Retrieving ybstack details...Done
Retrieving stack traces...Done
Retrieving YBDB contents...Done
Retrieving full hardware and system status...Done
Compressing data...Done

System diagnostics has been collected and submitted for phonehome.

The diagnostics data has been left on this system for manual copy at: /tmp/20180706155223-ybdiag-155223.tar.gz
Note: If the database is not running, only a partial report can be submitted.
diagnostics event
Return the logs surrounding a specified event. Specify the event-time parameter in YYYY-MM-DDTHH:MM:SS format.
The optional event-window parameter specifies the number of minutes before and after event-time for which logs are collected. The event-window parameter has a default of 15 minutes and a maximum of 480 minutes (8 hours).
For example:
YBCLI(20258) (PRIMARY - yb00-mgr0)> system diagnostics event 2020-04-02T23:00:00

Starting dump for event with event time = 2020-04-02 23:00:00
Printing logs between 2020-04-02 22:45:00 and 2020-04-02 23:15:00
This command will gather system diagnostics information and send it to Yellowbrick Data.
Are you sure you want to do this?
Response (yes/no): yes
WARNING: The database may be unresponsive, and may reconfigure while full system diagnostics is executing.
Are you sure you want to do this?
Response (yes/no): yes

Retrieving system log...Done
Retrieving installer log...Done
Retrieving installer screen log...Done
Retrieving YBCLI log...Done
Retrieving DCS ybdiag details...Done
Retrieving SMC ybdiag details...Done
Retrieving LIME ybdiag details...Done
Retrieving cluster manager log...Done
Retrieving front-end database log...Done
Retrieving DCS log...Done
Retrieving external logs...Done
Retrieving SMC log...Done
Retrieving kernel log...Done
Retrieving blade (kernel.Worker) (HW-DAEMON) (REPRINT) logs...Done
Retrieving cmp logs...Done
Retrieving HW events from blades...Done
Retrieving HW events from managers...Done
Retrieving sensor readings from managers...Done
Retrieving possible blade asserts/crashes...Done
Retrieving minidumps...Done
Retrieving cluster manager...queries...status...meminfo...workers...ybstatus...smartdata...drives...Done
Retrieving database activity and locks...Done
Retrieving database session time out info...Done
Retrieving stack traces...Done
Retrieving replication details...Done
Skipping YBDB content
Retrieving full hardware and system status...Done
Retrieving networking stats...Done
Retrieving network routes...Done
Retrieving CPU/process stats...Done
Retrieving netstat status...Done
Retrieving manager nvme list (local) ...Done
Retrieving manager nvme list (remote) ...Done
Compressing data...Done
The size of the file /tmp/20200403001211-ybdiag-event-001211.tar.gz is 1.6 MB

System diagnostics has been collected and submitted for phonehome.

The diagnostics data has been left on this system for manual copy at: /tmp/20200403001211-ybdiag-event-001211.tar.gz
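The window arithmetic shown in the transcript (22:45:00 through 23:15:00 for a 23:00:00 event with the default 15-minute window) can be sketched as:

```python
from datetime import datetime, timedelta

# Sketch of the event-window arithmetic: logs are collected for
# event-window minutes (default 15, documented maximum 480) on
# each side of event-time.
def log_window(event_time, event_window=15):
    if not 0 < event_window <= 480:
        raise ValueError("event-window must be 480 minutes (8 hours) or less")
    center = datetime.strptime(event_time, "%Y-%m-%dT%H:%M:%S")
    delta = timedelta(minutes=event_window)
    return center - delta, center + delta

start, end = log_window("2020-04-02T23:00:00")
# start -> 2020-04-02 22:45:00, end -> 2020-04-02 23:15:00
```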
diagnostics list
Return a list of the diagnostics packages that exist on the system. For example:
YBCLI(31444) (PRIMARY - yb100-mgr1)> system diagnostics list

The following Yellowbrick diagnostics packages exist on this system:
	Diagnostics package: /tmp/20190222061328-ybdiag-061328.tar.gz
	Diagnostics package: /tmp/20190222061758-ybdiag-061758.tar.gz
	Diagnostics package: /tmp/20190222062303-ybdiag-062303.tar.gz
	Diagnostics package: /tmp/20190222062040-ybdiag-062040.tar.gz
	Diagnostics package: /tmp/20190222115628-ybdiag-115628.tar.gz
factory
Wipe all data from the system and return it to its factory defaults. This command runs a low-level format on all NVMe drives. When you run the system factory command, a series of warnings and prompts is displayed for your protection. The final prompt asks you to enter the manager node's hostname to ensure that you are wiping the correct node; entering the incorrect hostname aborts the system factory command.
Warning: This command will delete all of the data on the database system.
YBCLI(20440) (PRIMARY - yb98-mgr1)> system factory

WARNING: Performing a factory reset will delete all user data, including all 
tables, databases, statistics, users, keys and configuration information.
This operation cannot be undone. 
Are you sure you want to do this?
Response (yes/no): yes

All data on this system will now be deleted, and the system will be reset to its factory defaults.
Please verify again that you want to complete this operation.
Continue (yes/no)? yes

Please verify whether system factory should be run on this system:
System factory about to be performed on:

System IP                  : 10.10.198.10
Local manager node hostname: yb98-mgr1
Local manager node IP      : 10.10.198.14

Remote manager node hostname: yb98-mgr0
Remote manager node IP      : 10.10.198.12

Database running        : YES
Database ready          : YES
Database read-only      : NO
Database uptime         : 00:15:24
Database users connected: 4 (including system users)

Enter local manager node hostname to continue: yb98-mgr1
In the background, the system factory command performs the following actions:
  • Runs a ybinit (database initialization command) to clean out the database.
  • Removes old log files from the manager nodes.
  • Removes all users (except ybdadmin) from the manager nodes.
  • Does a low-level erase of all SSDs on the blades (if requested at the prompt).
  • Generates new SSH keys for the manager nodes.
After this command runs, the system is clean, but the manager nodes are not restored fully to their defaults (the command cannot detect custom changes that an administrator made to the manager nodes). The compute blades are provisioned as worker nodes, and the system is ready for use.
failover [ force ]
Fail over to the other manager node. First, connect to one of the manager nodes directly. You cannot perform a failover while connected to the floating IP address for the HA cluster.

The force option forces a system failover regardless of other commands that are currently executing on the manager node. This option is occasionally needed in emergency situations.

[ybdadmin@yb100-mgr0 ~]$ ybcli system failover

Current cluster roles:
Local node  : PRIMARY   - ACTIVE 
Remote node : SECONDARY - ACTIVE 
Failing over to another node is a disruptive process and not guaranteed to work
This should only be done if the current manager node is malfunctioning

Are you sure you want to do this?
Type yes to continue: yes

Initiating system failover
Monitoring completion. This can take 2 minutes. Notifications may appear. Standby...

System failover was successful. Yellowbrick database started.
Primary manager node is now: Remote node (yb100-mgr1.ybtest.io)

WARNING: A SYSTEM NODE ROLE CHANGE WAS DETECTED
Current roles
-------------
LOCAL NODE  : SECONDARY (ACTIVE)
REMOTE NODE : PRIMARY   (ACTIVE)
After the failover, logging into ybcli returns:
[ybdadmin@yb100-mgr1 ~]$ ybcli
...
No redundant manager node detected
YBCLI is currently running on the PRIMARY manager node. 
Local manager node : yb100-mgr1.ybtest.io -> (PRIMARY ACTIVE)
Remote manager node: NOT PRESENT
maintenance
Put the system into maintenance mode (on) or take it out of maintenance mode (off). In maintenance mode, the system does not accept database client connections (however, you can still log in to the manager nodes through SSH).
YBCLI (PRIMARY)> system maintenance on

Enabling system maintenance mode will also shut down the database layer

Are you sure you want to do this?
Type yes to continue: yes

Stopping YBD services for maintenance mode. Standby... Done
Successfully enabled system maintenance mode

Run system status to find out if the database is currently in maintenance mode.

register <tenant-name> <system-name>
Register the system for the Yellowbrick phonehome application. Enter a user-defined tenant name and the name of the system. For example:
system register YB yb100

Respond to the prompts and provide the requester name (your full name).

register list
List all compatible certificate files under /tmp for in-field registration. For example:
YBCLI(69050) (PRIMARY - yb98-mgr0)> system register list
 
The following p12 certificates exist on this system:
       newcustomer-newcluster1.p12
register <certificate>
Register the system with the phonehome application by using a certificate file. This method is for customers who do not have direct access to phonehome. For example:
YBCLI(69050) (PRIMARY - yb98-mgr0)> system register newcustomer-newcluster1.p12
 
This cluster appears to already be registered. Do you want to re-register?
Response (yes/no): yes
 
Are you sure you want to register this system using:
Certificate bundle : /tmp/newcustomer-newcluster1.p12
Response (yes/no): yes
 
Performing system registration. Standby... Done
System registration was successful
The database must be restarted for the new certificate to take effect
session kill <id>
Kill a system session by providing the session ID. Use session show to list session IDs.
YBCLI(31444) (PRIMARY - yb100-mgr1)> system session kill 31444
...

YBCLI(31444) (PRIMARY - yb100-mgr1)> system session kill 31444

YBCLI cannot kill its own session.

YBCLI(31444) (PRIMARY - yb100-mgr1)> system session kill 40000

No YBCLI session with id: 40000 is currently executing.
session show
Return a list of active ybcli sessions and their session IDs:
YBCLI(31444) (PRIMARY - yb100-mgr1)> system session show

YBCLI ID:   3904 User: ybdadmin
YBCLI ID:   6123 User: user2
YBCLI ID:  22712 User: user2
YBCLI ID:  22893 User: ybdadmin
YBCLI ID:  24690 User: ybdadmin
YBCLI ID:  25025 User: user3
YBCLI ID:  31444 User: ybdadmin (this session)

7 YBCLI session(s) found running on this manager node.
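If you need the session IDs programmatically (for example, to feed them to system session kill), the listing above can be parsed with a sketch like this (hypothetical helper, based only on the line format shown in the example):

```python
import re

# Hypothetical helper: extract (id, user) pairs from
# 'system session show' output, per the example format above.
SESSION_RE = re.compile(r"YBCLI ID:\s*(\d+)\s+User:\s+(\S+)")

def parse_sessions(text):
    return [(int(m.group(1)), m.group(2)) for m in SESSION_RE.finditer(text)]

sample = """YBCLI ID:   3904 User: ybdadmin
YBCLI ID:  31444 User: ybdadmin (this session)"""
# parse_sessions(sample) -> [(3904, 'ybdadmin'), (31444, 'ybdadmin')]
```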
shutdown
Shut down the entire appliance and all blades. See also Powering the Appliance Off and On.
YBCLI(70095) (PRIMARY - yb98-mgr0)> system shutdown
 
Shutting down the system will halt all blades and all manager nodes.
When in this state, all components may safely be powered off.
To start the system again, all components will have to be power cycled manually.
Continue (yes/no)? yes
 
Stopping services. Standby...  Done
 
Shutting down all blades...
Gracefully shutting down blade in bay:  1 -> OK
Gracefully shutting down blade in bay:  2 -> OK
Gracefully shutting down blade in bay:  3 -> OK
Gracefully shutting down blade in bay:  4 -> OK
Gracefully shutting down blade in bay:  5 -> OK
Gracefully shutting down blade in bay:  6 -> OK
Gracefully shutting down blade in bay:  7 -> OK
Gracefully shutting down blade in bay:  8 -> OK
Gracefully shutting down blade in bay:  9 -> OK
Gracefully shutting down blade in bay: 10 -> OK
Gracefully shutting down blade in bay: 11 -> OK
Gracefully shutting down blade in bay: 12 -> OK
Gracefully shutting down blade in bay: 13 -> OK
Gracefully shutting down blade in bay: 14 -> OK
Gracefully shutting down blade in bay: 15 -> OK
Gracefully shutting down blade in bay:  1 -> OK
Gracefully shutting down blade in bay:  2 -> OK
Gracefully shutting down blade in bay:  3 -> OK
Gracefully shutting down blade in bay:  4 -> OK
Gracefully shutting down blade in bay:  5 -> OK
Gracefully shutting down blade in bay:  6 -> OK
Gracefully shutting down blade in bay:  7 -> OK
Gracefully shutting down blade in bay:  8 -> OK
Gracefully shutting down blade in bay:  9 -> OK
Gracefully shutting down blade in bay: 10 -> OK
Gracefully shutting down blade in bay: 11 -> OK
Gracefully shutting down blade in bay: 12 -> OK
Gracefully shutting down blade in bay: 13 -> OK
Gracefully shutting down blade in bay: 14 -> OK
Gracefully shutting down blade in bay: 15 -> OK
Gracefully shutting down blade in bay:  1 -> OK
Gracefully shutting down blade in bay:  2 -> OK
Gracefully shutting down blade in bay:  3 -> OK
Gracefully shutting down blade in bay:  4 -> OK
Gracefully shutting down blade in bay:  5 -> OK
Gracefully shutting down blade in bay:  6 -> OK
Gracefully shutting down blade in bay:  7 -> OK
Gracefully shutting down blade in bay:  8 -> OK
Gracefully shutting down blade in bay:  9 -> OK
Gracefully shutting down blade in bay: 10 -> OK
Gracefully shutting down blade in bay: 11 -> OK
Gracefully shutting down blade in bay: 12 -> OK
Gracefully shutting down blade in bay: 13 -> OK
Gracefully shutting down blade in bay: 14 -> OK
Gracefully shutting down blade in bay: 15 -> OK
 
Powering off all blades...
Blade(s) in chassis: 0 were instructed to power off
Blade(s) in chassis: 1 were instructed to power off
Blade(s) in chassis: 2 were instructed to power off
Waiting for blade(s) to power off
Blades powered off: 0/45
Blades powered off: 1/45
Blades powered off: 30/45
Blades powered off: 34/45
Blades powered off: 45/45
 
Shutting down remote manager node. Standby...
Initiating shutdown. This process can take 60 seconds
Shutting down local manager node. Standby...
Initiating shutdown. This process can take 60 seconds
Shutting down YBD services if running
Shutting down cluster services. This can take up to 120 seconds
Stopping Cluster (pacemaker)... Stopping Cluster (corosync)...
Connection to yb98-mgr0 closed by remote host.
Connection to yb98-mgr0 closed.
status
Return overall system status information. For example:
YBCLI(31031) (PRIMARY - yb98-mgr0)> system status

Manager nodes configured: 2
---------------------------
Node 1 (PRIMARY   - LOCAL NODE  ) : yb98-mgr0 -> ONLINE
Node 2 (SECONDARY - REMOTE NODE ) : yb98-mgr1 -> ONLINE

Database system running            : YES
Database system ready              : YES (Responding: YES)
Database system read-only          : NO
Database system rowstore           : NORMAL
Database system storage used       : 20%
Database system uptime             : 00:00:33
System work status                 : gc:idle  parityrebuild:idle  analyzer:idle  system:idle  user:idle 
Data collection running            : YES
Blade parity                       : Enabled
Blade parity rebuilding            : NO (Progress: N/A)
Blade data check in-progress       : NO
Cluster degraded mode              : NO
Maintenance mode                   : NO
Software update in-progress        : NO (Version: 3.3.0-20548)
Floating system IP                 : 10.10.198.10 - 255.255.255.0
System registered                  : YES (Tenant: yb - System: yb98)
LDAP status                        : Not configured
Encryption keystore                : Available - Status: Ready - Locked: NO
Chassis configuration              : Found: 3 - Configured: 3
timestamp on | off
Turn timestamp display on or off for ybcli commands. When display is on, Execution Start and Execution End times are shown for each command. The setting applies per user, per ybcli session. The default is off. For example:
YBCLI(5422) (PRIMARY - yb100-mgr0)> system timestamp on

System execution timestamp has been turned on.
Execution End: Fri Apr  5 12:25:03 PDT 2019
...
YBCLI(5422) (PRIMARY - yb100-mgr0)> status blade 0/1

Execution Start: Fri Apr  5 12:51:13 PDT 2019
Chassis:  0
-----------
Blade Bay:  1 -> BOOTED  UUID: 00000000-0000-0000-0000-38B8EBD00578 - Version: YBOS-2.0.2-DEBUG   
		 BIOS: v05.04.21.0038.00.011 - Memory total/free: 65587652/1788928 KiB
		 CPU: Intel(R) Xeon(R) CPU E5-2618L v4 @ 2.20GHz - Cores: 10 - Load: 66%
		 Address: 192.168.10.10 - Uptime: 0 day(s), 01:06:45 - Worker: Running
		 Encryption Supported: YES - Encryption Enabled: NO - locked: N/A
		 Cluster status: OPERATIONAL - Cluster role: MEMBER - Last seen: just now

Execution End: Fri Apr  5 12:51:15 PDT 2019
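The per-command timestamping behaves like a simple wrapper around command execution; a sketch (hypothetical, not ybcli's implementation):

```python
from datetime import datetime

# Hypothetical sketch: when timestamp display is on, an Execution Start
# line is printed before the command runs and an Execution End line after.
def run_with_timestamps(command, enabled=True):
    fmt = "%a %b %d %H:%M:%S %Y"
    if enabled:
        print("Execution Start:", datetime.now().strftime(fmt))
    result = command()
    if enabled:
        print("Execution End:", datetime.now().strftime(fmt))
    return result
```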