
About This Club

Tools that are used in the IT world
  1. What's new in this club
  2. Nagios is a great tool, especially for free. Find your config:
     find / -name nagios.cfg
     Test Nagios against that config (below is where my nagios.cfg was located, but change it to wherever yours is):
     nagios -v /etc/nagios/nagios.cfg
     When Nagios is checking processes on a system and shows RSZDT and you are wondering what that means: R = running, S = interruptible sleep (waiting to complete), Z = defunct ("zombie") process, D = uninterruptible sleep, T = stopped. If the count looks high, check whether the check is running against a cluster (a total across all the servers it is checking) or a single system (in which case there may be a problem). Find what plugins are installed:
     ls /usr/lib64/nagios/plugins/
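The state codes above are simply the first letter of the STAT column in ps. A minimal sketch of tallying them (the here-doc sample stands in for real `ps -eo stat=` output so the sketch is self-contained):

```shell
#!/bin/sh
# Tally process states by the first letter of the STAT column.
# On a real system, feed this from: ps -eo stat=
# The here-doc below is hypothetical sample data.
cut -c1 | sort | uniq -c | sort -rn <<'EOF'
Ss
S
R+
Z
S
D
T
EOF
```

On a live box, `ps -eo stat= | cut -c1 | sort | uniq -c | sort -rn` gives the same roll-up, and a pile of Z or D states is what you would dig into.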
  3. Here you will find some examples of how to utilize Splunk in different ways. Example of how to find all hostnames and source files that are reporting data for a sourcetype:
     index=* sourcetype="f5:bigip:syslog" hostname="*" | stats count by hostname host source
     This example will show you hostname, source, and the event count per device, so you can identify whether all your devices are reporting to Splunk as you thought, and also which devices are reporting a lot of data (maybe debug is turned on). Another pretty quick query that I prefer is this one:
     | tstats count as totalCount earliest(_time) as firstTime latest(_time) as lastTime where index="*" sourcetype="f5:bigip:syslog" by host sourcetype | fieldformat firstTime=strftime(firstTime,"%Y/%m/%d %H:%M:%S") | fieldformat lastTime=strftime(lastTime,"%Y/%m/%d %H:%M:%S")
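Outside Splunk, you can approximate the same count-by-host roll-up on a raw syslog file with awk. This is a hedged sketch: the sample lines and the assumption that the host is the 4th whitespace-separated field are illustrative, not your real log format.

```shell
#!/bin/sh
# Count events per host, like Splunk's "| stats count by host".
# Assumes the host is field $4 of a classic syslog line; adjust as needed.
# The here-doc is hypothetical sample data standing in for a log file.
awk '{ count[$4]++ } END { for (h in count) print count[h], h }' <<'EOF'
Jun 10 12:00:01 lb01 tmm[1234]: event one
Jun 10 12:00:02 lb02 tmm[1234]: event two
Jun 10 12:00:03 lb01 tmm[1234]: event three
EOF
```

The same idea spots a device that is flooding the collector: one host with a count far above its peers.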
  4. Just about any appliance you receive from the enterprise world comes with tcpdump, especially if the host operating system is Linux based. Here are some commands that I run that have proven helpful, and they may prove to help you as well. My main use is on our F5 appliances or our Linux application servers. Below you will find different uses of tcpdump.
     NO DNS RESOLUTION
     To disable name resolution, use the -n flag as in the following examples:
     tcpdump -n
     tcpdump -ni 0.0
     CAPTURE TO FILE
     To save the tcpdump output to a binary file, type the following command:
     tcpdump -w dump1.pcap
     Note: The tcpdump utility does not print data to the screen while it is capturing to a file. To stop the capture, press Ctrl + C.
     READ CAPTURED FILE
     To read data from a binary tcpdump file (that you saved by using the tcpdump -w command), type the following command:
     tcpdump -r dump1.pcap
     In this mode, the tcpdump utility reads stored packets from the file, but otherwise operates just as it would if it were reading from the network interface. As a result, you can use formatting commands and filters.
     FILTER ON HOST ADDRESS
     To view all packets that are traveling to or from a specific IP address, type the following command:
     tcpdump host 10.40.89.188
     To view all packets that are traveling from a specific IP address, type the following command:
     tcpdump src host 10.40.89.188
     To view all packets that are traveling to a particular IP address, type the following command:
     tcpdump dst host 10.40.89.188
     ***NOTE: To get accurate IP addresses on the F5, I like to check existing connections on the F5 device first, so you aren't just waiting for something that isn't ever going to happen. Looking for connections to a member IP address:
     # tmsh show sys connection | grep 10.40.89.188
     10.40.89.188:48520 10.40.212.23:2075 10.40.89.188:48520 10.40.212.23:2075 tcp 37 (slot/tmm: 1/9) none
     Explanation of columns:
     cs-client-addr:cs-client-port | cs-server-addr:cs-server-port | ss-client-addr:ss-client-port | ss-server-addr:ss-server-port
     Computer IP & PORT | Virtual Server IP & PORT | SNAT IP & PORT | Server IP & PORT
     FILTER ON PORT
     To view all packets that are traveling through the BIG-IP system and are either sourced from or destined to a specific port, type the following command:
     tcpdump port <port number>
     For example: tcpdump port 80
     To view all packets that are traveling through the BIG-IP system and sourced from a specific port, type the following command:
     tcpdump src port <port number>
     For example: tcpdump src port 80
     To view all packets that are traveling through the BIG-IP system and destined to a specific port, type the following command:
     tcpdump dst port <port number>
     For example: tcpdump dst port 80
     FILTER ON TCP FLAGS
     To view all packets that are traveling through the BIG-IP system that contain the SYN flag, type the following command:
     tcpdump 'tcp[tcpflags] & (tcp-syn) != 0'
     To view all packets that are traveling through the BIG-IP system that contain the RST flag, type the following command:
     tcpdump 'tcp[tcpflags] & (tcp-rst) != 0'
     Isolate TCP RST flags:
     tcpdump 'tcp[13] & 4!=0'
     tcpdump 'tcp[tcpflags] == tcp-rst'
     Isolate TCP SYN flags:
     tcpdump 'tcp[13] & 2!=0'
     tcpdump 'tcp[tcpflags] == tcp-syn'
     Isolate packets that have both the SYN and ACK flags set:
     tcpdump 'tcp[13]=18'
     Isolate TCP URG flags:
     tcpdump 'tcp[13] & 32!=0'
     tcpdump 'tcp[tcpflags] == tcp-urg'
     Isolate TCP ACK flags:
     tcpdump 'tcp[13] & 16!=0'
     tcpdump 'tcp[tcpflags] == tcp-ack'
     Isolate TCP PSH flags:
     tcpdump 'tcp[13] & 8!=0'
     tcpdump 'tcp[tcpflags] == tcp-psh'
     Isolate TCP FIN flags:
     tcpdump 'tcp[13] & 1!=0'
     tcpdump 'tcp[tcpflags] == tcp-fin'
     COMBINING FILTERS
     You can use the and operator to filter for a mixture of output. Following are some examples of useful combinations:
     tcpdump host 10.40.89.188 and port 80
     tcpdump src host 172.67.134.121 and dst port 80
     tcpdump src host 172.67.134.121 and dst host 10.40.89.188
     AND: and, &&
     OR: or, ||
     EXCEPT: not, !
     Let's find all traffic from 10.40.89.188 going to any host on port 3389:
     tcpdump -nnvvS src 10.40.89.188 and dst port 3389
     Let's look for all traffic coming from 192.168.x.x and going to the 10.x or 172.67.x.x networks, showing hex output with no hostname resolution and one level of extra verbosity:
     tcpdump -nvX src net 192.168.0.0/16 and dst net 10.0.0.0/8 or 172.67.0.0/16
     tcpdump 'src 10.0.2.4 and (dst port 3389 or 22)'
     Find HTTP host headers:
     tcpdump -vvAls0 | grep 'Host:'
     Find HTTP cookies (note the -E, since plain grep treats | literally):
     tcpdump -vvAls0 | grep -E 'Set-Cookie|Host:|Cookie:'
     Cleartext GET requests:
     tcpdump -vvAls0 | grep 'GET'
     As an example of cleartext GET requests:
     tcpdump -vvAls0 -i 0.0 | grep 'GET'
     tcpdump: listening on 0.0, link-type EN10MB (Ethernet), capture size 65535 bytes
     (}.x.......4..ZP...Ye.."Splunk Logging"|"usfnt1slbdz02.thezah.com"|"1591809773553"|"1591809773553621"|"2.21.5.15"|"33289"|"/Production/vs.secure-prodd.thezah.com"|"192.168.8.64"|"443"|"/Production/pool.secure-prodd.thezah.com"|"10.43.197.68"|"33289"|"10.40.64.89"|"443"|"GET"|"/resources/apps/mobile/ipad/content/PersistentSectionVersions.json"|"HTTP/1.1"|""|"Mobile/1 CFNetwork/1125.2 Darwin/19.4.0"|"200"|"9"|257|"9693" (}......$...r.gP...Y..."Splunk"
     Find HTTP user agents:
     tcpdump -vvAls0 | grep 'User-Agent:'
     As an example of capturing the user agent:
     tcpdump -vvAls0 -i 0.0 | grep 'User-Agent:'
     tcpdump: listening on 0.0, link-type EN10MB (Ethernet), capture size 65535 bytes
     User-Agent: Go-http-client/1.1
     User-Agent: Go-http-client/1.1
     User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36
     User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36
     Both SYN and RST set:
     tcpdump 'tcp[13] = 6'
     Find SSH connections
     This one works regardless of what port the connection comes in on, because it's getting the banner response:
     tcpdump 'tcp[(tcp[12]>>2):4] = 0x5353482D'
     Find DNS traffic:
     tcpdump -vvAs0 port 53
     Find FTP traffic:
     tcpdump -vvAs0 port ftp or ftp-data
     Find NTP traffic:
     tcpdump -vvAs0 port 123
     Find cleartext passwords:
     tcpdump port http or port ftp or port smtp or port imap or port pop3 or port telnet -lA | egrep -i -B5 'pass=|pwd=|log=|login=|user=|username=|pw=|passw=|passwd=|password=|pass:|user:|username:|password:|login:|pass |user '
     Find traffic with the evil bit
     There's a bit in the IP header that never gets set by legitimate applications, which we call the "evil bit". Here's a fun filter to find packets where it's been toggled:
     tcpdump 'ip[6] & 128 != 0'
     SNARF/SNAPLEN
     The tcpdump utility provides the -s (snarf/snaplen) option, which lets you specify the amount of each packet to capture. To capture the entire packet, use a value of 0 (zero). For example:
     tcpdump -s0 src host 172.67.134.121 and dst port 80
     Alternatively, you can specify a length large enough to capture the packet data you need to examine. For example:
     tcpdump -s200 src host 172.67.134.121 and dst port 80
     If you are examining the output on the console during capture, or reading from an input file with the -r option, you should also use the -X flag to display ASCII-encoded output along with the default hex-encoded output. For example:
     tcpdump -r dump1.pcap -X src host 172.67.134.121 and dst port 80
     Disable DNS resolution with -n, and also disable port lookups with a second n, giving -nn. For example:
     tcpdump -nn src host 172.67.134.121 and dst port 80
     STOPPING tcpdump
     You can stop the tcpdump utility using the following methods: If you run the tcpdump utility interactively from the command line, you can stop it by pressing the Ctrl + C key combination.
     If you run the tcpdump utility in the background, you can return the tcpdump session to the foreground by typing the following command:
     fg
     To stop the session, press Ctrl + C. If you run multiple instances of the tcpdump utility in the background, you can terminate all instances at the same time by typing the following command:
     killall tcpdump
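The tcp[13] values used above are just bit masks over the TCP flags byte (FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32). A quick shell sanity check of the combined values that appear in the filters:

```shell
#!/bin/sh
# The TCP flags byte (offset 13 in the TCP header) is a bit field:
# FIN=1 SYN=2 RST=4 PSH=8 ACK=16 URG=32
SYN=2; RST=4; ACK=16
echo "SYN+ACK = $((SYN | ACK))"   # 18, matching tcpdump 'tcp[13]=18'
echo "SYN+RST = $((SYN | RST))"   # 6, matching tcpdump 'tcp[13] = 6'
```

So any flag combination you want to isolate is just the OR of these values, tested with `tcp[13] & mask != 0` (any of the bits) or `tcp[13] = value` (exactly those bits).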
  5. I ran into a ton of issues using Cacti (mainly, no one really supports the templates, plugins, etc. for F5), so I'm trying a different flavor of monitoring solution called Zabbix, another open source monitoring solution. It has a few articles referencing F5, and a lot of the articles I'm finding are more recent (2017 and newer), where the Cacti material was pre-2017. So let's start with some instructions.
     Install the Zabbix repository
     Download the RPM:
     wget https://repo.zabbix.com/zabbix/4.4/rhel/7/x86_64/zabbix-release-4.4-1.el7.noarch.rpm --no-check-certificate
     Install the RPM:
     rpm -Uvh zabbix-release-4.4-1.el7.noarch.rpm
     PHP 5.x will no longer work. I had downgraded my box to PHP 5 for Cacti, but Zabbix wants PHP 7.2 or newer, so we need to do some work here. NOTE: this will probably break Cacti if you have it running.
     Disable the PHP 5 repository:
     yum-config-manager --disable remi-php54
     Enable the PHP 7.2 repo:
     yum-config-manager --enable remi-php72
     Clean up yum:
     yum clean all
     Install PHP:
     yum install -y php php-pear php-cgi php-common php-mbstring php-snmp php-gd php-pecl-mysql php-xml php-mysql php-gettext php-bcmath
     Modify the PHP time zone by editing the php.ini file:
     vim /etc/php.ini
     Uncomment the following line and add your time zone (note: if you already had PHP configured like I did for Cacti, even with an older version, then this is probably already set for you):
     date.timezone = America/Detroit
     Install MariaDB
     Check to see if you have MariaDB installed:
     mysql -u root -p
     (If you get prompted to enter a password, then it's installed and you don't need to do this.) If you don't have MariaDB installed on your server, then run the following command:
     yum --enablerepo=remi install mariadb-server
     Start the MariaDB service:
     systemctl start mariadb.service
     Enable MariaDB on system boot:
     systemctl enable mariadb
     Run the following command to secure MariaDB:
     mysql_secure_installation
     Add a new root password and continue. Then it will ask a few questions; type "Y" to agree to them.
     Configure the database for Zabbix
     Create the Zabbix database (you will be prompted for the root password):
     mysql -u root -p
     create database zabbix character set utf8 collate utf8_bin;
     create user 'zabbixuser'@'localhost' identified by 'OMGsup3Rs3cret!!';
     grant all privileges on zabbix.* to zabbixuser@localhost identified by 'OMGsup3Rs3cret!!';
     flush privileges;
     quit;
     On the Zabbix server host, import the initial schema and data. You will be prompted to enter your newly created password.
     cd /usr/share/doc/zabbix-server-mysql-4.4.6/
     Import the MySQL file:
     zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -u zabbixuser -p zabbix
     Configure the database for the Zabbix server:
     vim /etc/zabbix/zabbix_server.conf
     Modify the following parameters:
     DBHost=localhost
     DBName=zabbix
     DBUser=zabbixuser
     DBPassword=OMGsup3Rs3cret!!
     Then save and exit the file. Restart the Zabbix service:
     systemctl restart zabbix-server.service
     Enable Zabbix on system boot:
     systemctl enable zabbix-server.service
     Modify firewall rules:
     firewall-cmd --add-service={http,https} --permanent
     firewall-cmd --add-port={10051/tcp,10050/tcp} --permanent
     firewall-cmd --reload
     Now restart the httpd service:
     systemctl restart httpd
     Install Zabbix and any needed dependencies
     Use yum to install the Zabbix server, frontend, and agent:
     yum -y install zabbix-server-mysql zabbix-web-mysql zabbix-agent zabbix-get
     Configure Zabbix
     Update the time zone:
     vim /etc/httpd/conf.d/zabbix.conf
     Uncomment php_value date.timezone and add your correct time zone (for me it's America/Detroit). Restart the httpd service:
     systemctl restart httpd.service
     Setup Zabbix
     You can access Zabbix using the following URL: http://Server-Host-Name-or-IP/zabbix/ and you should see the welcome page. The default login name is "Admin" and the password is "zabbix". You will go to the Zabbix dashboard. YES, the user name and password are case sensitive, so please remember to use a capital A for Admin.
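After editing zabbix_server.conf, it is easy to typo a DB parameter. A minimal sketch of pulling the active DB settings back out for review (the here-doc stands in for the real /etc/zabbix/zabbix_server.conf so the sketch is self-contained):

```shell
#!/bin/sh
# Show only the active (uncommented) database parameters from the config.
# On the real server, run instead:
#   grep -E '^DB(Host|Name|User|Password)=' /etc/zabbix/zabbix_server.conf
grep -E '^DB(Host|Name|User|Password)=' <<'EOF'
# Zabbix server configuration (sample fragment)
DBHost=localhost
DBName=zabbix
DBUser=zabbixuser
DBPassword=OMGsup3Rs3cret!!
# DBSocket=/tmp/mysql.sock
EOF
```

If a parameter is missing from the output, it is still commented out, which is a common reason zabbix-server fails to connect to MariaDB.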
  6. If you are using an older version of Cacti than version 1, you can use Weathermap; it is currently not available for anything 1.x or newer. Something helpful is to add Weathermap by going to this website, clicking Downloads, and downloading the latest php-weathermap.zip file, but if you are being blocked because it is hosted on GitHub, then here is the latest as of Jan 2020: php-weathermap-0.98a.zip
     Upload the zip file to /var/tmp/ and type the following commands:
     cd /usr/src
     unzip /var/tmp/php-weathermap-0.98a.zip
     mv weathermap /usr/share/cacti/plugins/
     cd /usr/share/cacti/plugins/weathermap/
     Log into the GUI as admin. Under the Console tab, click Configuration - Plugins.
     Update permissions for user admin to view & edit Weathermap:
     In the GUI menu on the left, under Utilities, click User Management.
     Click admin (and any other users you've created that need Weathermap access).
     Under Realm Permissions, check Plugin->Weathermap: Configure/Manage.
     Under Realm Permissions, check Plugin->Weathermap: View.
     Click Save.
  7. Seems like this should be easy, especially since there is a yum install for Cacti, but oh no, nothing is what it seems. If you follow the below instructions (well, as of January 2020), then you have a good chance at being successful. Please note, this is best case and no guarantee it will work, but it worked for me. This also will help me keep track of some helpful commands I used, for the future.
     DISABLE SELINUX
     Open and edit the SELinux configuration file:
     vim /etc/sysconfig/selinux
     Change SELINUX=enforcing to SELINUX=disabled. Save and exit. Reboot the system:
     reboot
     ENABLE REPOS (where you need software installed from)
     Head over to the Fedora website and copy the download link of the latest EPEL file. Download the EPEL repository:
     wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm --no-check-certificate
     Install the EPEL repository:
     rpm -Uvh epel-release-latest-7.noarch.rpm
     Download the REMI repository:
     wget http://rpms.famillecollet.com/enterprise/remi-release-7.rpm --no-check-certificate
     Install the REMI repository:
     rpm -Uvh remi-release-7.rpm
     List repositories:
     yum repolist
     Loaded plugins: fastestmirror, priorities
     Loading mirror speeds from cached hostfile
      * base: centos.host-engine.com
      * centos-sclo-rh: bay.uchicago.edu
      * centos-sclo-sclo: linux.mirrors.es.net
      * epel: fedora-epel.mirror.lstn.net
      * extras: centos.sonn.com
      * remi-safe: fr2.rpmfind.net
      * updates: centos.mirrors.hoobly.com
     repo id                  repo name                                                   status
     Tuleap/x86_64            Tuleap                                                          65
     base/7/x86_64            CentOS-7 - Base                                             10,097
     centos-sclo-rh/x86_64    CentOS-7 - SCLo rh                                           8,968
     centos-sclo-sclo/x86_64  CentOS-7 - SCLo sclo                                           878
     cwp/x86_64               CentOS Web Panel repo for Linux 7 - x86_64                      76
     epel/x86_64              Extra Packages for Enterprise Linux 7 - x86_64              13,212
     extras/7/x86_64          CentOS-7 - Extras                                              335
     mariadb                  MariaDB                                                         85
     remi-safe                Safe Remi's RPM repository for Enterprise Linux 7 - x86_64   3,690
     updates/7/x86_64         CentOS-7 - Updates                                           1,487
     repolist: 38,893
     Install APACHE
     Install Apache and start the service.
     Now I thought I could just install httpd, but boy was I wrong. You need httpd-devel to make things happen with Cacti, which is another reason you need to enable both repositories.
     yum install -y httpd httpd-devel
     You can confirm the install by running:
     httpd -v
     Server version: Apache/2.4.6 (CentOS)
     Server built:   Aug 8 2019 11:41:18
     Now let's make it active by starting Apache:
     systemctl start httpd
     Install SNMP and RRDTool
     To install SNMP and RRDTool, enter the following command:
     yum install -y net-snmp net-snmp-utils net-snmp-libs rrdtool
     Start SNMP:
     systemctl start snmpd
     Install MariaDB Server
     Use the following command to install the MariaDB server:
     yum -y install mariadb-server
     Now start the MariaDB service:
     systemctl start mariadb
     Secure the MariaDB installation by running:
     mysql_secure_installation
     Set root password? [Y/n] Y
     Remove anonymous users? [Y/n] Y
     Disallow root login remotely? [Y/n] Y
     Remove test database and access to it? [Y/n] Y
     Reload privilege tables now? [Y/n] Y
     Install PHP and needed packages
     Run the following command to install PHP and the required packages:
     yum --enablerepo=remi install -y php-mysql php-pear php-common php-gd php-devel php php-mbstring php-cli php-intl php-snmp
     To see what version of PHP your system is running, run the following:
     php -v
     PHP 5.4.45 (cli) (built: Oct 22 2019 13:26:02)
     Copyright (c) 1997-2014 The PHP Group
     Zend Engine v2.4.0, Copyright (c) 1998-2014 Zend Technologies
     You can see what modules (packages) are installed by running:
     php -m
     [PHP Modules]
     bz2 calendar Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext gmp hash iconv intl json ldap libxml mbstring mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix readline Reflection session shmop SimpleXML snmp sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter xsl zip zlib
     [Zend Modules]
     Create a Cacti Database
     Import the time zone SQL file:
     mysql -u root -p mysql < /usr/share/mysql/mysql_test_data_timezone.sql
     Log in to the database server with the password you configured when you secured your installation:
     mysql -u root -p
     Create a database and user (the password must match what you put in config.php below):
     MariaDB [(none)]> create database zahlinuxcacti;
     MariaDB [(none)]> CREATE USER 'zahlinuxuser'@'localhost' IDENTIFIED BY 'OMGsup3Rs3cret!!';
     Grant permissions and flush privileges:
     MariaDB [(none)]> grant all privileges on zahlinuxcacti.* to zahlinuxuser@localhost ;
     MariaDB [(none)]> GRANT SELECT ON mysql.time_zone_name TO zahlinuxuser@localhost ;
     MariaDB [(none)]> FLUSH PRIVILEGES;
     Optimize Database
     We need to modify database parameters for better performance. Use the following command:
     vim /etc/my.cnf.d/server.cnf
     Add the following lines to the [mysqld] section.
     [mysqld]
     collation-server = utf8mb4_unicode_ci
     init-connect='SET NAMES utf8'
     character-set-server = utf8mb4
     max_heap_table_size = 128M
     max_allowed_packet = 16777216
     tmp_table_size = 64M
     join_buffer_size = 128M
     innodb_file_per_table = on
     innodb_file_format = Barracuda
     innodb_large_prefix = 1
     innodb_buffer_pool_size = 932M
     innodb_doublewrite = on
     innodb_additional_mem_pool_size = 80M
     innodb_lock_wait_timeout = 50
     innodb_flush_log_at_trx_commit = 2
     innodb_flush_log_at_timeout = 3
     innodb_read_io_threads = 32
     innodb_write_io_threads = 16
     innodb_io_capacity = 5000
     innodb_io_capacity_max = 10000
     Now that you have made changes to the MariaDB config, you have to restart the service:
     systemctl restart mariadb.service
     Install and configure Cacti
     Now that you did all the prep work, let's install Cacti using yum:
     yum -y install cacti
     Import the default Cacti database file into the database you created:
     cd /usr/share/doc/cacti-1.2.10/
     Import the SQL file provided by Cacti:
     mysql -u root -p zahlinuxcacti < cacti.sql
     Edit the Cacti configuration file, which includes the database and password details, etc.:
     vim /usr/share/cacti/include/config.php
     Modify the database details:
     $database_type = 'mysql';
     $database_default = 'zahlinuxcacti';
     $database_hostname = 'localhost';
     $database_username = 'zahlinuxuser';
     $database_password = 'OMGsup3Rs3cret!!';
     $database_port = '3306';
     $database_ssl = false;
     Set Cron for Cacti
     Open the Cacti cron file:
     vim /etc/cron.d/cacti
     Uncomment the following line:
     */5 * * * * apache /usr/bin/php /usr/share/cacti/poller.php > /dev/null 2>&1
     Save and exit the file.
     Configure Apache for Cacti
     This will help us to do a remote installation. Edit the Cacti config file:
     vim /etc/httpd/conf.d/cacti.conf
     Change "Require host localhost" to "Require all granted" and "Allow from localhost" to "Allow from all." Here is an example of what I have (not positive it's correct, but it does work for me):
     Alias /cacti /usr/share/cacti/
     <Directory /usr/share/cacti/>
         <IfModule mod_authz_core.c>
             # httpd 2.4
             Require all granted
         </IfModule>
         <IfModule !mod_authz_core.c>
             # httpd 2.2
             Order deny,allow
             Deny from all
             Allow from all
         </IfModule>
     </Directory>
     <Directory /usr/share/cacti/install>
     </Directory>
     Change the time zone:
     vim /etc/php.ini
     date.timezone = your time zone
     E.g.: date.timezone = Australia/Sydney
     Restart Apache:
     systemctl restart httpd.service
     Restart MariaDB:
     systemctl restart mariadb.service
     Restart SNMP:
     systemctl restart snmpd.service
     Configure the Firewall
     Use these commands:
     firewall-cmd --permanent --zone=public --add-service=http
     firewall-cmd --reload
     Start the Cacti installation
     Open a web browser and use the following URL to access the Cacti web interface:
     http://Your-Server-IP/cacti
     License Agreement: click Accept GPL License Agreement and click Begin. Pre-installation checks: click Next if there are no issues. The next window is Installation Type; it will show the database connection details. Click Next to continue. Verify Critical Binary Locations and Versions, and then click Next. Verify Directory Permissions and continue. In the Template Setup window, you can select all templates, and click Finish to complete the installation.
     IMPORTANT: As of Cacti 1.2.10, you will get a screen like the one below that has a blank line with a checkmark to the far right. Uncheck this box, or your install will get to 42% and hang there forever. The only way I figured out how to start the install over again was to delete the entire Cacti database and re-create the database, import the SQL, create the user, etc.
     After the installation, it will redirect to the login page. Using the default user name "admin" and default password "admin", you can log in to the Cacti server.
     You should be asked to change the password after that. Change the password and click Save. Then you should see the Cacti dashboard. You can add new devices from Management -> Devices, then click the plus mark "+" at the top right-hand corner. That's it! You have successfully configured Cacti 1.2.10 on CentOS 7!
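The installer's pre-checks fail if a required PHP extension is missing, so it is worth checking `php -m` before opening the web installer. A small self-contained sketch (the here-doc sample stands in for real `php -m` output; the required list is illustrative, not Cacti's official minimum):

```shell
#!/bin/sh
# Check a php -m style module listing for extensions the steps above install.
# On a real box, feed live output: php -m > modules.txt and cat that instead.
required="mysqli gd snmp mbstring intl"
modules=$(cat <<'EOF'
gd
intl
mbstring
mysqli
snmp
xml
EOF
)
for m in $required; do
  if echo "$modules" | grep -qx "$m"; then
    echo "OK      $m"
  else
    echo "MISSING $m"
  fi
done
```

Anything reported MISSING maps back to a `yum --enablerepo=remi install php-<module>` from the PHP step earlier.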
  8. This is what I have working at the moment. All the remote devices just point to an Ubuntu box that is running syslog-ng.
     $ cat /etc/syslog-ng/syslog-ng.conf
     @version: 3.5
     @include "scl.conf"
     @include "`scl-root`/system/tty10.conf"
     # Syslog-ng configuration file, compatible with default Debian syslogd
     # installation.
     # First, set some global options.
     options { flush_lines(0); use_dns(persist_only); use_fqdn(yes);
               owner(root); group(adm); perm(0640); stats_freq(0);
               bad_hostname("^gconfd$"); normalize_hostnames(yes);
               keep_hostname(yes); create_dirs(yes); };
     ########################
     # Sources
     ########################
     source s_local { system(); internal(); };
     source s_stunnel {
         # tcp(ip("127.0.0.1") port(1000) max-connections(100));
         tcp(port(1000) max-connections(100));
     };
     source s_udp { udp(); };
     ########################
     # Filters
     ########################
     filter f_emerg { level (emerg); };
     filter f_alert { level (alert .. emerg); };
     filter f_crit { level (crit .. emerg); };
     filter f_err { level (err .. emerg); };
     filter f_warning { level (warning .. emerg); };
     filter f_notice { level (notice .. emerg); };
     filter f_info { level (info .. emerg); };
     filter f_debug { level (debug .. emerg); };
     # Facility filters
     filter f_kern { facility (kern); };
     filter f_user { facility (user); };
     filter f_mail { facility (mail); };
     filter f_daemon { facility (daemon); };
     filter f_auth { facility (auth); };
     filter f_syslog { facility (syslog); };
     filter f_lpr { facility (lpr); };
     filter f_news { facility (news); };
     filter f_uucp { facility (uucp); };
     filter f_cron { facility (cron); };
     filter f_local0 { facility (local0); };
     filter f_local1 { facility (local1); };
     filter f_local2 { facility (local2); };
     filter f_local3 { facility (local3); };
     filter f_local4 { facility (local4); };
     filter f_local5 { facility (local5); };
     filter f_local6 { facility (local6); };
     filter f_local7 { facility (local7); };
     # Custom filters
     filter f_user_none { not facility (user); };
     filter f_kern_debug { filter (f_kern) and filter (f_debug); };
     filter f_daemon_notice { filter (f_daemon) and filter (f_notice); };
     filter f_mail_crit { filter (f_mail) and filter (f_crit); };
     filter f_mesg { filter (f_kern_debug) or filter (f_daemon_notice) or filter (f_mail_crit); };
     filter f_authinfo { filter (f_auth) or program (sudo); };
     ########################
     # Destinations
     ########################
     destination l_authlog { file ("/var/log/authlog"); };
     destination l_messages { file ("/var/log/messages"); };
     destination l_maillog { file ("/var/log/maillog"); };
     destination l_info { file ("/var/log/info"); };
     destination l_ipflog { file ("/var/log/ipflog"); };
     #destination l_debug { file ("/var/log/debug"); };
     destination l_imaplog { file ("/var/log/imaplog"); };
     destination l_syslog { file ("/var/log/syslog"); };
     destination l_console { file ("/dev/console"); };
     destination r_authlog { file ("/var/log/clients/$YEAR/$MONTH/$HOST/authlog"); };
     destination r_messages { file ("/var/log/clients/$YEAR/$MONTH/$HOST/messages"); };
     destination r_maillog { file ("/var/log/clients/$YEAR/$MONTH/$HOST/maillog"); };
     destination r_info { file ("/var/log/clients/$YEAR/$MONTH/$HOST/info"); };
     destination r_ipflog { file ("/var/log/clients/$YEAR/$MONTH/$HOST/ipflog"); };
     #destination r_debug { file ("/var/log/clients/$YEAR/$MONTH/$HOST/debug"); };
     destination r_imaplog { file ("/var/log/clients/$YEAR/$MONTH/$HOST/imaplog"); };
     destination r_console { file ("/var/log/clients/$YEAR/$MONTH/$HOST/consolelog"); };
     destination r_syslog { file ("/var/log/clients/$YEAR/$MONTH/$HOST/syslog"); };
     destination r_fallback { file ("/var/log/clients/$YEAR/$MONTH/$HOST/$FACILITY-$LEVEL"); };
     ########################
     # Log paths
     ########################
     # Local sources
     log { source (s_local); filter (f_authinfo); destination (l_authlog); };
     log { source (s_local); filter (f_mail); destination (l_maillog); };
     log { source (s_local); filter (f_info); destination (l_info); };
     log { source (s_local); filter (f_local0); destination (l_ipflog); };
     #log { source (s_local); filter (f_debug); destination (l_debug); };
     log { source (s_local); filter (f_local1); destination (l_imaplog); };
     log { source (s_local); filter (f_syslog); destination (l_syslog); };
     log { source (s_local); filter (f_emerg); filter (f_user_none); destination (l_console); };
     log { source (s_local); filter (f_mesg); filter (f_user_none); destination (l_messages); };
     # All sources, since we want to archive local and remote logs
     log { source (s_local); source (s_stunnel); filter (f_authinfo); destination (r_authlog); };
     log { source (s_local); source (s_stunnel); filter (f_mail); destination (r_maillog); };
     log { source (s_local); source (s_stunnel); filter (f_info); destination (r_info); };
     log { source (s_local); source (s_stunnel); filter (f_local0); destination (r_ipflog); };
     #log { source (s_local); source (s_stunnel); filter (f_debug); destination (r_debug); };
     log { source (s_local); source (s_stunnel); filter (f_local1); destination (r_imaplog); };
     log { source (s_local); source (s_stunnel); filter (f_syslog); destination (r_syslog); };
     log { source (s_local); source (s_stunnel); filter (f_emerg); filter (f_user_none); destination (l_console); };
     log { source (s_local); source (s_stunnel); filter (f_mesg); filter (f_user_none); destination (l_messages); };
     ###
     # Include all config files in /etc/syslog-ng/conf.d/
     ###
     @include "/etc/syslog-ng/conf.d/*.conf"
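The $YEAR/$MONTH/$HOST macros in the remote destinations (together with create_dirs(yes)) give each client its own dated directory tree. A sketch of what a resolved path looks like, built the same way syslog-ng expands the macros (the host name is a hypothetical client):

```shell
#!/bin/sh
# Build the path syslog-ng would expand for
# /var/log/clients/$YEAR/$MONTH/$HOST/messages on a given client host.
host="lb01.example.com"   # hypothetical remote client
year=$(date +%Y)
month=$(date +%m)
echo "/var/log/clients/$year/$month/$host/messages"
```

This layout makes archiving easy: compress or prune /var/log/clients/$YEAR/$MONTH once the month rolls over.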
  9. >hpiLO-> help
     status=0
     status_tag=COMMAND COMPLETED
     Mon Aug 18 18:39:01 2014
     DMTF SMASH CLP Commands:
     help    : Used to get context sensitive help.
     show    : Used to display values of a property or contents of a collection target.
     show -a : Recursively show all targets within the current target.
     show -l : Recursively show targets within the current target based on 'level' specified. Valid values for 'level' is from 1 to 9.
     create  : Used to create new instances in the name space of the MAP. Example: create /map1/accounts1 username= password= name= group=
     delete  : Used to destroy instances in the name space of the MAP. Example: delete /map1/accounts1/
     load    : Used to move a binary image from an URL to the MAP. Example: load /map1/firmware1 -source http://192.168.1.1/images/fw/iLO4_100.bin
     reset   : Causes a target to cycle from enabled to disabled and back to enabled.
     set     : Used to set a property or set of properties to a specific value.
     start   : Used to cause a target to change state to a higher run level.
     stop    : Used to cause a target to change state to a lower run level.
     cd      : Used to set the current default target. Example: cd targetname
     date    : Used to get the current date.
     time    : Used to get the current time.
     exit    : Used to terminate the CLP session.
     version : Used to query the version of the CLP implementation or other CLP elements.
     oemhp_ping : Used to determine if an IP address is reachable from this iLO. Example: oemhp_ping 192.168.1.1, where 192.168.1.1 is the IP address that you wish to ping.
     oemhp_loadSSHKey : Used to authorize a SSH key file from an URL. Example: oemhp_loadSSHKey -source http://UserName:password@192.168.1.1/images/SSHkey1.pub
     oemhp_deleteSSHKey : Used to remove a SSH key associated with a user. Example: oemhp_deleteSSHKey
     HP CLI Commands:
     POWER    : Control server power.
     UID      : Control Unit-ID light.
     NMI      : Generate an NMI.
     VM       : Virtual media commands.
     LANGUAGE : Command to set or get default language.
     VSP      : Invoke virtual serial port.
     TEXTCONS : Invoke Remote Text Console.
     >hpiLO->
  10. What a cool command you can run on your Cisco IOS switches:
     Switch#sho int capabilities mod 5
     GigabitEthernet5/1
       Model:                 WS-X4548-GB-RJ45V-RJ-45
       Type:                  10/100/1000-TX
       Speed:                 10,100,1000,auto
       Duplex:                half,full,auto
       Auto-MDIX:             no
       Trunk encap. type:     802.1Q,ISL
       Trunk mode:            on,off,desirable,nonegotiate
       Channel:               yes
       Broadcast suppression: percentage(0-100), sw
       Flowcontrol:           rx-(off,on,desired),tx-(off,on,desired)
       VLAN Membership:       static, dynamic
       Fast Start:            yes
       Queuing:               rx-(N/A), tx-(1p3q1t, Shaping)
       CoS rewrite:           yes
       ToS rewrite:           yes
       Inline power:          yes (Cisco Voice Protocol/IEEE Protocol 802.3af)
       SPAN:                  source/destination
       UDLD:                  yes
       Link Debounce:         no
       Link Debounce Time:    no
       Port Security:         yes
       Dot1x:                 yes
       Maximum MTU:           1552 bytes (Baby Giants)
       Multiple Media Types:  no
       Diagnostic Monitoring: N/A
  11. Send job to background
     Syntax
       bg [PID...]
     Options:
       If PID is specified, the jobs with the specified group ids are put in the background.
     Send the specified jobs to the background. A background job is executed simultaneously with fish, and does not have access to the keyboard. If no job is specified, the last job to be used is put in the background. The PID of the desired process is usually found by using process expansion.
     Example
     Put the job with job id 0 in the background:
       bg %0
     "I'm not kidding myself, my voice is ordinary. If I stand still while I'm singing, I might as well go back to driving a truck" - Elvis Presley
     Related bash commands: fg - Send job to foreground
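Job control is normally an interactive-shell feature (you stop a foreground job with Ctrl+Z, then type bg). A minimal bash sketch of the same idea that works in a script, using & to background the job directly:

```shell
#!/bin/bash
# Start a command in the background with &, list it, then wait for it.
# Interactively you would instead press Ctrl+Z on a running job and type: bg
sleep 1 &        # run in the background (what bg achieves for a stopped job)
jobs -l          # list background jobs with their PIDs
wait             # block until all background jobs finish
echo "done"
```

`%1`, `%2`, ... in `bg %1` / `fg %1` / `kill %1` refer to the job numbers shown in the `jobs` listing.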
  12. An arbitrary precision calculator language

Syntax
      bc [options] [file...]

Options
      file               A file containing the calculations/functions to perform. May be piped from standard input.
      -h, --help         Print the usage and exit.
      -i, --interactive  Force interactive mode.
      -l, --mathlib      Define the standard math library.
      -w, --warn         Give warnings for extensions to POSIX bc.
      -s, --standard     Process exactly the POSIX bc language.
      -q, --quiet        Do not print the normal GNU bc welcome.
      -v, --version      Print the version number and copyright and quit.

bc is a language that supports arbitrary precision numbers with interactive execution of statements. bc starts by processing code from all the files listed on the command line, in the order listed. After all files have been processed, bc reads from the standard input. All code is executed as it is read. (If a file contains a command to halt the processor, bc will never read from the standard input.)

The most common use of bc is within a shell script, using a "here" document to pass the program details to bc.

Example shell script

#!/bin/bash
# bcsample - An example of calculations with bc
if [ $# != 1 ]
then
   echo "A number argument is required"
   exit
fi
bc << END-OF-INPUT
scale=6
/* first we define the function */
define myfunc(x){
   return(sqrt(x) + 10);
}
/* then use the function to do the calculation */
x=$1
"Processing ";x;" result is ";myfunc(x)
quit
END-OF-INPUT
echo "(to 6 decimal places)"

Run the script above with:

$ chmod a+x bcsample
$ ./bcsample 125

Standard functions supported by bc

      length ( expression )   The number of significant digits in the expression.
      read ( )                Read a number from the standard input, regardless of where the function occurs. Beware, this can cause problems with the mixing of data and program in the standard input. The best use for this function is in a previously written program that needs input from the user, but never allows program code to be input from the user.
      scale ( expression )    The number of digits after the decimal point in the expression.
      sqrt ( expression )     The square root of the expression.

Most standard math expressions are of course supported: + - / * % ^

      ++ var      Increment the variable by one; the new value is the result of the expression.
      var ++      The result of the expression is the value of the variable; the variable is then incremented by one.
      -- var      Decrement the variable by one; the new value is the result of the expression.
      var --      The result of the expression is the value of the variable; the variable is then decremented by one.
      ( expr )    Brackets alter the standard precedence to force the evaluation of an expression.
      var = expr  The variable var is assigned the value of the expression.

Relational expressions and Boolean operations are also legal; see the full bc man page for more.

Comments
      /* In-line comments */
      # Single-line comment. The end-of-line character is not part of the comment and is processed normally.

"If I were again beginning my studies, I would follow the advice of Plato and start with mathematics" - Galileo

Related bash commands:
      dc - Desk Calculator
  13. Find and Replace text, database sort/validate/index

Syntax
      awk 'Program' Input-File1 Input-File2 ...
      awk -f PROGRAM-FILE Input-File1 Input-File2 ...

Key
      -F FS, --field-separator FS
            Use FS for the input field separator (the value of the `FS' predefined variable).
      -f PROGRAM-FILE, --file PROGRAM-FILE
            Read the `awk' program source from the file PROGRAM-FILE, instead of from the first command line argument.
      -mf NNN, -mr NNN
            The `f' flag sets the maximum number of fields, and the `r' flag sets the maximum record size. These options are ignored by `gawk', since `gawk' has no predefined limits; they are only for compatibility with the Bell Labs research version of Unix `awk'.
      -v VAR=VAL, --assign VAR=VAL
            Assign the variable VAR the value VAL before program execution begins.
      -W traditional, -W compat, --traditional, --compat
            Use compatibility mode, in which `gawk' extensions are turned off.
      -W lint, --lint
            Give warnings about dubious or non-portable `awk' constructs.
      -W lint-old, --lint-old
            Warn about constructs that are not available in the original Version 7 Unix version of `awk'.
      -W posix, --posix
            Use POSIX compatibility mode, in which `gawk' extensions are turned off and additional restrictions apply.
      -W re-interval, --re-interval
            Allow interval expressions in regexps.
      -W source=PROGRAM-TEXT, --source PROGRAM-TEXT
            Use PROGRAM-TEXT as `awk' program source code. This option allows mixing command line source code with source code from files, and is particularly useful for mixing command line programs with library functions.
      --
            Signal the end of options. This is useful to allow further arguments to the `awk' program itself to start with a `-'. This is mainly for consistency with POSIX argument parsing conventions.
      'Program'
            A series of patterns and actions: see below.
      Input-File
            If no Input-File is specified, `awk' applies the Program to standard input (the piped output of some other command, or the terminal). Typed input will continue until end-of-file (typing `Control-d').

Basic functions

The basic function of awk is to search files for lines (or other units of text) that contain a pattern. When a line matches, awk performs a specific action on that line. The Program statement tells `awk' what to do; it consists of a series of "rules". Each rule specifies one pattern to search for, and one action to perform when that pattern is found. For ease of reading, each rule in an `awk' program is normally a separate line, like this:

      pattern { action }
      pattern { action }
      ...

e.g. Display lines from my_file containing the string "123" or "abc" or "some text":

      awk '/123/ { print $0 }
           /abc/ { print $0 }
           /some text/ { print $0 }' my_file

A regular expression enclosed in slashes (`/') is an `awk' pattern that matches every input record whose text belongs to that set. e.g. the pattern /foo/ matches any input record containing the three characters `foo', *anywhere* in the record.

`awk' patterns may be one of the following:

      /Regular Expression/          - Match
      Pattern && Pattern            - AND
      Pattern || Pattern            - OR
      ! Pattern                     - NOT
      Pattern ? Pattern : Pattern   - If, Then, Else
      Pattern1, Pattern2            - Range start - end
      BEGIN                         - Perform action BEFORE input file is read
      END                           - Perform action AFTER input file is read

In addition to simple pattern matching, `awk' has a huge range of text and arithmetic Functions, Variables and Operators.

`gawk' will ignore newlines after any of the following:
      , { ? : || && do else

Comments start with a `#' and continue to the end of the line:
      # This program prints a nice friendly message

Examples

This program prints the length of the longest input line:
      awk '{ if (length($0) > max) max = length($0) } END { print max }' data

This program prints every line that has at least one field. This is an easy way to delete blank lines from a file (or rather, to create a new file similar to the old file but from which the blank lines have been deleted):
      awk 'NF > 0' data

This program prints seven random numbers from zero to 100, inclusive:
      awk 'BEGIN { for (i = 1; i <= 7; i++) print int(101 * rand()) }'

This program prints the total number of bytes used by FILES:
      ls -lg FILES | awk '{ x += $5 } ; END { print "total bytes: " x }'

This program prints a sorted list of the login names of all users:
      awk -F: '{ print $1 }' /etc/passwd | sort

This program counts lines in a file:
      awk 'END { print NR }' data

This program prints the even numbered lines in the data file. If you were to use the expression `NR % 2 == 1' instead, it would print the odd numbered lines:
      awk 'NR % 2 == 0' data

"Justice is such a fine thing that we cannot pay too dearly for it" - Alain-Rene Lesage

Related:
      GNU Awk User Guide - awk examples
      awk one liners - Eric Pement
      awk one liners explained & pt2 - Peteris Krumin (CatOnMat.net)
      Patrick Hartigan - How to use awk
      `awk', `oawk', and `nawk' - Alternative, older and newer versions of awk
      egrep - egrep foo FILES ... is essentially the same as awk '/foo/' FILES ...
      expr - Evaluate expressions
      eval - Evaluate several commands/arguments
      for - Expand words, and execute commands
      grep - Search file(s) for lines that match a given pattern
      m4 - Macro processor
      tr - Translate, squeeze, and/or delete characters
Equivalent Windows command: FOR - Conditionally perform a command several times.
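The pieces above (field separators, actions, BEGIN/END blocks) combine naturally in one-liners. A small sketch over made-up CSV data:

```shell
# Sum the second field of a tiny CSV, with BEGIN/END bookends:
# -F, sets the field separator, the middle rule runs per line,
# and END prints the accumulated total.
printf 'apples,3\npears,5\noranges,2\n' |
awk -F, 'BEGIN { total = 0 }
         { total += $2 }
         END { print "total:", total }'
# prints: total: 10
```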
  14. Create an alias. Aliases allow a string to be substituted for a word when it is used as the first word of a simple command.

Syntax
      alias [-p] [name[=value] ...]
      unalias [-a] [name ...]

If arguments are supplied, an alias is defined for each name whose value is given. If no value is given, `alias' will print the current value of the alias. Without arguments or with the `-p' option, alias prints the list of aliases on the standard output in a form that allows them to be reused as input. `unalias' will remove each name from the list of aliases. If `-a' is supplied, all aliases are removed.

Examples

      alias ls='ls -F'

Now issuing the command 'ls' will actually run 'ls -F'.

      alias la='ls -lAXh --color=always|less -R'

Now issuing the command 'la' will actually run a long listing, in color, sorted by extension.

Make an alias permanent

Use your favorite text editor to create a ~/.bash_aliases file, and type the alias commands into the file. On many distributions ~/.bashrc sources ~/.bash_aliases at login; otherwise add a line to source it, or run it in the current shell with `. ~/.bash_aliases`.

Details

The first word of each simple command, if unquoted, is checked to see if it has an alias. If so, that word is replaced by the text of the alias. The alias name and the replacement text may contain any valid shell input, including shell metacharacters, with the exception that the alias name may not contain `='. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to "ls -F", for instance, and Bash does not try to recursively expand the replacement text. If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion.

There is no mechanism for using arguments in the replacement text, as in csh. If arguments are needed, a shell function should be used.

Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt.

The rules concerning the definition and use of aliases are somewhat confusing. Bash always reads at least one complete line of input before executing any of the commands on that line. Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. The commands following the alias definition on that line are not affected by the new alias.

This behavior is also an issue when functions are executed. Aliases are expanded when a function definition is read, not when the function is executed, because a function definition is itself a compound command. As a consequence, aliases defined in a function are not available until after that function is executed. To be safe, always put alias definitions on a separate line, and do not use alias in compound commands.

`alias' and `unalias' are BASH built-ins. For almost every purpose, shell functions are preferred over aliases.

"The odds against there being a bomb on a plane are a million to one, and against two bombs a million times a million to one. Next time you fly, cut the odds and take a bomb." - Benny Hill

Related:
      export - Set an environment variable
      env - Display, set, or remove environment variables
      echo - Display message on screen
      readonly - Mark variables/functions as readonly
      shift - Shift positional parameters
Equivalent Windows command: SET - Display, set, or remove Windows environment variables.
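The last point deserves a sketch: an alias cannot use arguments, but a function can place them anywhere. `mkcd` here is a hypothetical helper, not a standard command:

```shell
# Aliases only substitute fixed text at the start of a command; a function
# can use its arguments anywhere. Hypothetical helper: mkdir + cd in one step.
mkcd() {
    mkdir -p -- "$1" && cd -- "$1"
}
```

After defining it (e.g. in ~/.bashrc), `mkcd projects/new` creates the directory tree and changes into it in one step, something no alias can express.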