Posts posted by rev.dennis

  1. Turns out dnf has changed the way it deals with proxies. If you're using basic proxy authentication, then you need to specify it:

    vi /etc/dnf/dnf.conf
    # proxy settings
    proxy=http://proxy.domain.com:3128/
    proxy_username=username
    proxy_password=password
    proxy_auth_method=basic
  2. On a Mac, instead of running ifconfig you can run networksetup as below to show all your interfaces and their assigned MAC addresses:

    dennis$ networksetup -listallhardwareports
    
    Hardware Port: Wi-Fi
    Device: en0
    Ethernet Address: 78:4f:43:8d:54:d8
    
    Hardware Port: Bluetooth PAN
    Device: en6
    Ethernet Address: 78:4f:43:90:0b:f0
    
    Hardware Port: Thunderbolt 1
    Device: en1
    Ethernet Address: 82:dc:af:e0:cc:01
    
    Hardware Port: Thunderbolt 2
    Device: en2
    Ethernet Address: 82:dc:af:e0:cc:00
    
    Hardware Port: Thunderbolt 3
    Device: en3
    Ethernet Address: 82:dc:af:e0:cc:05
    
    Hardware Port: Thunderbolt 4
    Device: en4
    Ethernet Address: 82:dc:af:e0:cc:04
    
    Hardware Port: Thunderbolt Bridge
    Device: bridge0
    Ethernet Address: 82:dc:af:e0:cc:01
    
    VLAN Configurations
    ===================
    dennis$

    Anyhow, it appears I just had a typo in my command: I used the whole word network instead of the net keyword, like below.

    dennis$ sudo tcpdump -i any -n net 192.168.2.0/24
    
    tcpdump: data link type PKTAP
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on any, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
    12:38:26.702344 ARP, Request who-has 192.168.2.1 (7a:4f:43:d8:9d:64) tell 192.168.2.2, length 28
    12:38:26.702350 ARP, Request who-has 192.168.2.1 (7a:4f:43:d8:9d:64) tell 192.168.2.2, length 28
    12:38:26.702360 ARP, Reply 192.168.2.1 is-at 7a:4f:43:d8:9d:64, length 28
    12:38:26.702362 ARP, Reply 192.168.2.1 is-at 7a:4f:43:d8:9d:64, length 28

    Just kept plugging away.

  3. Tried to capture packets for a NAT address (192.168.2.0/24 is the NAT pool) for my VMware Fusion session.  When I ran the following command on my Mac, I got some weird error messages.

    dennis$ sudo tcpdump -i any -v network 192.168.2.0/24
    
    tcpdump: data link type PKTAP
    tcpdump: listening on any, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
    pktap_filter_packet: pcap_add_if_info(en9, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(bridge100, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(en0, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(en0, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(bridge100, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(en9, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(en9, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(bridge100, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed
    pktap_filter_packet: pcap_add_if_info(en0, 1) failed: pcap_if_info_set_add: pcap_compile_nopcap() failed

    Anyone have any ideas?

  4. I use a terminal program on my Mac called ZOC by EmTec, and compared to all the other terminal programs I have used, it's by far the best all-around program.  On the Mac I have also tried iTerm2 (garbage and very featureless), MacTerm,

    In your home directory (I just type cd and press Enter, which brings me there) type vi .bash_profile; my bash profile looks like the one below, which gives me color.  It's really mainly about the PS1 export.

    # .bash_profile
    if [ -f ~/.bashrc ]; then
            . ~/.bashrc
    fi
    
    # User specific environment and startup programs
    
    PATH=$PATH:$HOME/.local/bin:$HOME/bin
    
    export PATH
    export {http,https,ftp}_proxy="http://NAO\dhosang:qMtzWRhSTZD8rNHm@10.43.196.154:80"
    [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
    
    parse_git_branch() {
         git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
    }
    # PS1 Base :x
    # Origin [\u@\h \W]\$
    export PS1="\u@\h \[\033[32m\]\w\[\033[33m\]\$(parse_git_branch)\[\033[00m\] $ "
    LS_COLORS="di=4;33"

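    The parse_git_branch() helper is just a sed filter over `git branch` output. A quick way to see what it feeds into the prompt (the branch names below are made up for the demo):

```shell
# Feed parse_git_branch()'s sed expression some sample `git branch` output.
# Non-current branches (lines not starting with *) are deleted, and the
# current branch's "* " prefix is rewritten as " (branch)".
printf '  main\n* feature/login\n  develop\n' \
  | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
# prints " (feature/login)"
```

    That " (feature/login)" suffix is what PS1 appends after the working directory.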

  5. Okay, today I'm having a bad day.  Tried to do a yum update and it locked out my user account.

    dhosang@usdet1lvdwb001:$ sudo yum update -y
    Loaded plugins: fastestmirror, product-id, search-disabled-repos
    Determining fastest mirrors
    Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64&infra=stock&content=centos error was
    14: curl#22 - "Invalid file descriptor"
     * base: mirror.dal.nexril.net
     * centos-sclo-rh: centos.mirrors.tds.net
     * centos-sclo-sclo: repos.lax.layerhost.com
     * epel: mirror.arizona.edu
     * extras: centos.mirrors.tds.net
     * remi-php72: mirror.team-cymru.com
     * remi-safe: mirror.team-cymru.com
     * rpmfusion-free-updates: mirror.math.princeton.edu
     * updates: repos.mia.quadranet.com
    https://ci.tuleap.net/yum/tuleap/rhel/6/dev/x86_64/repodata/repomd.xml: [Errno 14] curl#22 - "Invalid file descriptor"
    Trying other mirror.
    http://mirror.chpc.utah.edu/pub/centos/7.9.2009/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 407 - Proxy Authentication Required
    Trying other mirror.
    http://centos.mirror.lstn.net/7.9.2009/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 407 - Proxy Authentication Required
    Trying other mirror. 

    The above is definitely cut down from the pages of Proxy Authentication Required error messages that eventually lock my account out.

    So once I unlocked the account (the one displayed when I type echo $http_proxy), I do a quick test to see if I have internet access by running:

    dhosang@usdet1lvdwb001:$ curl -I https://thezah.com
    HTTP/1.1 200 Connection established
    
    HTTP/1.1 200 OK
    Date: Wed, 02 Dec 2020 18:56:36 GMT
    Content-Type: text/html;charset=UTF-8
    Pragma: no-cache
    X-IPS-LoggedIn: 0
    Vary: cookie,Accept-Encoding,User-Agent
    X-XSS-Protection: 0
    X-Frame-Options: sameorigin
    Expires: Wed, 02 Dec 2020 18:57:06 GMT
    Cache-Control: max-age=30, public
    Last-Modified: Wed, 02 Dec 2020 18:56:36 GMT
    CF-Cache-Status: DYNAMIC
    cf-request-id: 06c66958fd0000f36115800000000001
    Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
    Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=8ihM44Rlq7k6fVvaopR4jDTQ6o5jmyxqBw4Lkp%2B2TKsSw4dqrJ1IWbMRD%2FMy%2Fp9pnYGRTyUMgWdMbbQNcNWfIiHIIS9qDhdN1ux9"}],"group":"cf-nel","max_age":604800}
    NEL: {"report_to":"cf-nel","max_age":604800}
    Server: cloudflare
    CF-RAY: 5fb744d4cb58f361-ATL
    Proxy-Connection: Keep-Alive
    Connection: Keep-Alive
    Set-Cookie: __cfduid=d75cc7060b11361cce1c54a2cf72f113d1606935396; expires=Fri, 01-Jan-21 18:56:36 GMT; path=/; domain=.thezah.com; HttpOnly; SameSite=Lax; Secure
    Set-Cookie: ips4_IPSSessionFront=49e2f0fb3cb3740e6db92fccb9b7b35c; path=/; secure; HttpOnly
    Set-Cookie: ips4_guestTime=1606935396; path=/; secure; HttpOnly

    All that really matters is that you got HTTP/1.1 200 OK.

    So what's next?  You proved that you can reach the internet fine, but your yum update or yum upgrade is failing proxy authentication.

    Try doing a search for something like wireshark:

    dhosang@usdet1lvdwb001:~$ sudo yum search wireshark
    Loaded plugins: fastestmirror, product-id, search-disabled-repos
    Loading mirror speeds from cached hostfile
     * base: mirror.dal.nexril.net
     * centos-sclo-rh: centos.mirrors.tds.net
     * centos-sclo-sclo: repos.lax.layerhost.com
     * epel: mirror.arizona.edu
     * extras: centos.mirrors.tds.net
     * remi-php72: mirror.team-cymru.com
     * remi-safe: mirror.team-cymru.com
     * rpmfusion-free-updates: mirror.math.princeton.edu
     * updates: repos.mia.quadranet.com
    =========================================================================== N/S matched: wireshark ============================================================================
    wireshark-devel.i686 : Development headers and libraries for wireshark
    wireshark-devel.x86_64 : Development headers and libraries for wireshark
    wireshark-gnome.x86_64 : Gnome desktop integration for wireshark
    wireshark.i686 : Network traffic analyzer
    wireshark.x86_64 : Network traffic analyzer
    
      Name and summary matches only, use "search all" for everything.

    This clearly shows that yum is getting through the proxy... but wait, you are still getting proxy authentication errors when trying to do a yum update?
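    One thing worth checking at that point (a sketch, assuming basic proxy authentication; the host, port, and credentials below are placeholders): sudo typically strips the http_proxy variables from the environment, so yum may need its own proxy settings in /etc/yum.conf rather than relying on the shell variables that made curl work:

```ini
; /etc/yum.conf -- add under the existing [main] section
; (placeholder values; use your real proxy host and credentials)
[main]
proxy=http://proxy.domain.com:3128
proxy_username=username
proxy_password=password
```

    These are the same three settings dnf takes in /etc/dnf/dnf.conf, just read from yum's own config.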

  6. First I tried: sudo dnf remove --duplicates

    Tried again: 

    sudo dnf install 'dnf-command(config-manager)' --allowerasing

    Running transaction check
    Error: transaction check vs depsolve:
    (flatpak-selinux = 1.6.2-3.el8_2 if selinux-policy-targeted) is needed by flatpak-1.6.2-3.el8_2.x86_64
    rpmlib(RichDependencies) <= 4.12.0-1 is needed by flatpak-1.6.2-3.el8_2.x86_64
    To diagnose the problem, try running: 'rpm -Va --nofiles --nodigest'.
    You probably have corrupted RPMDB, running 'rpm --rebuilddb' might fix the issue.
    The downloaded packages were saved in cache until the next successful transaction.
    You can remove cached packages by executing 'dnf clean packages'.
    [dhosang@net1 ~]$    

     

    Here is what's in /etc/yum.repos.d (ls -l output):

    -rw-r--r-- 1 root root  173 Jul 12  2019 google-chrome.repo
    -rw-r--r-- 1 root root 1203 Dec 18  2019 epel-testing.repo
    -rw-r--r-- 1 root root 1266 Dec 18  2019 epel-testing-modular.repo
    -rw-r--r-- 1 root root 1104 Dec 18  2019 epel.repo
    -rw-r--r-- 1 root root 1249 Dec 18  2019 epel-playground.repo
    -rw-r--r-- 1 root root 1167 Dec 18  2019 epel-modular.repo
    -rw-r--r-- 1 root root  928 Jun  2 21:02 CentOS-Media.repo
    -rw-r--r-- 1 root root  338 Jun  2 21:02 CentOS-fasttrack.repo
    -rw-r--r-- 1 root root  756 Jun  2 21:02 CentOS-Extras.repo
    -rw-r--r-- 1 root root  668 Jun  2 21:02 CentOS-Debuginfo.repo
    -rw-r--r-- 1 root root 1043 Jun  2 21:02 CentOS-CR.repo
    -rw-r--r-- 1 root root  712 Jun  2 21:02 CentOS-Base.repo
    -rw-r--r-- 1 root root  731 Jun  2 21:02 CentOS-AppStream.repo
    -rw-r--r-- 1 root root 1075 Nov  3 15:15 epel.repo.rpmsave
    -rw-r--r-- 1 root root  798 Nov  3 15:21 CentOS-centosplus.repo
    -rw-r--r-- 1 root root  738 Nov  3 15:24 CentOS-HA.repo
    -rw-r--r-- 1 root root  736 Nov  3 15:25 CentOS-PowerTools.repo
    -rw-r--r-- 1 root root 1382 Nov  3 15:25 CentOS-Sources.repo
    -rw-r--r-- 1 root root  743 Nov  3 15:27 CentOS-Devel.repo

    Official CentOS Repos

    [Base] – The packages that make up CentOS, as it is released on the ISOs. It is enabled by default.

    [Updates] – Updated packages to [Base] released after the CentOS ISOs. These will be Security, BugFix, or Enhancement updates to the [Base] software. It is enabled by default.

    [Addons] – Contains packages required in order to build the main distribution, or packages produced by SRPMs built in the main distribution but not included in the main Red Hat package tree (mysql-server in CentOS-3.x falls into this category). Packages contained in the addons repository should be considered essentially a part of the core distribution, but may not be in the upstream package tree. It is enabled by default.

    [Contrib] – Packages contributed by CentOS users, which do not overlap with any of the core distribution packages. These packages have not been tested by the CentOS developers and may not track upstream version releases very closely. It is disabled by default.

    [Centosplus] – Packages contributed by CentOS developers and users. These packages might replace RPMs included in the core distribution. You should understand the implications of enabling and using packages from this repository. It is disabled by default.

    [csgfs] – Packages that make up the Cluster Suite and Global File System. It is disabled by default.

    [Extras] – Packages built and maintained by the CentOS developers that add functionality to the core distribution. These packages have undergone some basic testing, should track upstream release versions fairly closely, and will never replace any core distribution package. It is enabled by default.

    [Testing] – Packages that are being tested prior to release; you should not use this repository except for a specific reason. It is disabled by default.

    You can have a look at the packages here:
    http://dev.centos.org/centos/6/
    http://dev.centos.org/centos/7/


    Then tried 

    sudo rpm --rebuilddb

  7. Total                                                                                                                                                                                             1.3 MB/s | 755 MB     09:36
    Running transaction check
    Error: transaction check vs depsolve:
    (flatpak-selinux = 1.6.2-3.el8_2 if selinux-policy-targeted) is needed by flatpak-1.6.2-3.el8_2.x86_64
    rpmlib(RichDependencies) <= 4.12.0-1 is needed by flatpak-1.6.2-3.el8_2.x86_64
    To diagnose the problem, try running: 'rpm -Va --nofiles --nodigest'.
    You probably have corrupted RPMDB, running 'rpm --rebuilddb' might fix the issue.
    The downloaded packages were saved in cache until the next successful transaction.
    You can remove cached packages by executing 'dnf clean packages'.
    [dhosang@net1 ~]$ 

  8. When I try to run sudo dnf update I get a bunch of errors that state

    ... conflicts with file from package ...

    So, researching the wonderful world of the web, I found a suggestion to check for duplicates; if running the following command produces any results, you are in a bad way.

    sudo dnf repoquery --duplicated

    [dennis@net1 ~]$ sudo dnf repoquery --duplicated
    Extra Packages for Enterprise Linux 8 - x86_64                                                                                                                                                    0.0  B/s |   0  B     00:00
    Docker CE Stable - x86_64                                                                                                                                                                         0.0  B/s |   0  B     00:00
    Failed to synchronize cache for repo 'epel', ignoring this repo.
    Failed to synchronize cache for repo 'docker-ce-stable', ignoring this repo.
    Last metadata expiration check: 0:39:17 ago on Tue 03 Nov 2020 11:49:50 AM EST.
    kernel-devel-0:3.10.0-1127.10.1.el7.x86_64
    kernel-devel-0:3.10.0-1127.13.1.el7.x86_64
    kernel-devel-0:3.10.0-1127.18.2.el7.x86_64
    kernel-devel-0:3.10.0-1127.19.1.el7.x86_64
    kernel-devel-0:3.10.0-1127.el7.x86_64
    [dennis@net1 ~]$ 

     So as you can see, I'm in a bad way.  Since I'm running this server on Proxmox, I went to the GUI and backed up this VM before running the next command, which "could" render the server inaccessible (so I need the ability to restore).

    sudo dnf --disableplugin=protected_packages remove $(sudo dnf repoquery --duplicated --latest-limit -1 -q)

  9. Here you will find some examples of how to utilize splunk in different ways.

    Example of how to find all hostnames and source files that are reporting data for a sourcetype

    index=* sourcetype="f5:bigip:syslog" hostname="*" | stats count by hostname host source

    This example shows hostname and source, plus the counts per device, so you can identify whether all your devices are reporting to Splunk as you thought, and which devices are reporting a lot of data (maybe debug is turned on).

    Another pretty quick query that I prefer is this one:

    |  tstats count as totalCount earliest(_time) as firstTime latest(_time) as lastTime where index="*" sourcetype="f5:bigip:syslog" by host sourcetype
    |  fieldformat firstTime=strftime(firstTime,"%Y/%m/%d %H:%M:%S")
    |  fieldformat lastTime=strftime(lastTime,"%Y/%m/%d %H:%M:%S")
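    In the same spirit, a variant I find handy (a sketch; the 24-hour window is arbitrary and the index/sourcetype follow the query above) flags hosts that have gone quiet:

```
|  tstats count as totalCount latest(_time) as lastTime where index="*" sourcetype="f5:bigip:syslog" by host
|  where lastTime < relative_time(now(), "-24h")
|  fieldformat lastTime=strftime(lastTime,"%Y/%m/%d %H:%M:%S")
```

    Anything it returns has reported at some point but sent nothing in the last day.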

     

  10. First, it would be helpful to get a list of users that are already on your Linux box.

    Get a List of All Users using the /etc/passwd File

    Local user information is stored in the /etc/passwd file. Each line in this file represents login information for one user.

    less /etc/passwd

    Below is an example

    $ less /etc/passwd
    root:x:0:0:root:/root:/bin/bash
    bin:x:1:1:bin:/bin:/sbin/nologin
    daemon:x:2:2:daemon:/sbin:/sbin/nologin
    adm:x:3:4:adm:/var/adm:/sbin/nologin
    lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
    sync:x:5:0:sync:/sbin:/bin/sync
    shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
    halt:x:7:0:halt:/sbin:/sbin/halt
    mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
    operator:x:11:0:operator:/root:/sbin/nologin
    games:x:12:100:games:/usr/games:/sbin/nologin
    ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
    nobody:x:99:99:Nobody:/:/sbin/nologin
    systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
    dbus:x:81:81:System message bus:/:/sbin/nologin
    polkitd:x:999:997:User for polkitd:/:/sbin/nologin
    postfix:x:89:89::/var/spool/postfix:/sbin/nologin
    sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
    tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
    nginx:x:998:996:nginx user:/var/cache/nginx:/bin/sh
    mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin
    apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
    dockerroot:x:997:993:Docker User:/var/lib/docker:/sbin/nologin
    netadm1n:x:1000:1000:netadm1n:/home/netadm1n:/bin/bash

    Each line has seven fields delimited by colons that contain the following information:

    1. User name
    2. Encrypted password (x means that the password is stored in the /etc/shadow file)
    3. User ID number (UID)
    4. User’s group ID number (GID)
    5. Full name of the user (GECOS)
    6. User home directory
    7. Login shell (defaults to /bin/bash)
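    To see those fields split out, you can run one sample line (the netadm1n line from the listing above) through awk:

```shell
# Split a sample /etc/passwd line into its fields; the field numbers match
# the numbered list above ($2 is the password placeholder, $5 is GECOS).
echo 'netadm1n:x:1000:1000:netadm1n:/home/netadm1n:/bin/bash' \
  | awk -F: '{ printf "user=%s uid=%s gid=%s home=%s shell=%s\n", $1, $3, $4, $6, $7 }'
# prints: user=netadm1n uid=1000 gid=1000 home=/home/netadm1n shell=/bin/bash
```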

    If you want to display only the username, you can use either the awk or cut command to print just the first field, which contains the username:

    Using awk example:

    $ awk -F: '{ print $1}' /etc/passwd
    
    root
    bin
    daemon
    adm
    lp
    sync
    shutdown
    halt
    mail
    operator
    games
    ftp
    nobody
    systemd-network
    dbus
    polkitd
    postfix
    sshd
    tss
    nginx
    mysql
    apache
    dockerroot
    netadm1n

    Using cut example:

    $ cut -d: -f1 /etc/passwd
    
    root
    bin
    daemon
    adm
    lp
    sync
    shutdown
    halt
    mail
    operator
    games
    ftp
    nobody
    systemd-network
    dbus
    polkitd
    postfix
    sshd
    tss
    nginx
    mysql
    apache
    dockerroot
    netadm1n
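    If you only care about human accounts rather than service accounts, you can also filter on the UID field. A sketch, assuming the common convention that regular users start at UID 1000 (some older distros use 500); the sample lines are taken from the listing above, and for real use you would point awk at /etc/passwd itself:

```shell
# Filter a user list down to "regular" accounts by UID (field 3),
# assuming human users get UIDs of 1000 and up.
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin' \
  'netadm1n:x:1000:1000:netadm1n:/home/netadm1n:/bin/bash' \
  | awk -F: '$3 >= 1000 { print $1 }'
# prints: netadm1n
```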

    So you may have identified that your Linux system doesn't have a user on it that needs to exist.  Let's go to the next section, which describes how to add a user.

    How to Create Users in Linux

    In Linux, you can create a user account and assign the user to different groups using the useradd command.

    The general syntax for the useradd command is as follows:

    useradd [OPTIONS] USERNAME

    NOTE: To be able to use the useradd command and create new users you need to be logged in as root or a user with sudo access.

    To create a new user account type useradd followed by the username.

    For example, to create a new user named username you would run:

    useradd username

    The command adds an entry to the following files:

    • /etc/passwd
    • /etc/shadow
    • /etc/group
    • /etc/gshadow

    To be able to log in as the newly created user, you need to set the user password. To do that run the passwd command followed by the username:

    passwd username

    You will be prompted to enter and confirm the password.

    In many Linux distros, when creating a new user account with the useradd command, the user's home directory is not created by default.

    Use the -m (--create-home) option to create the user home directory as /home/username:

    useradd -m username

    The command above creates the new user’s home directory and copies files from /etc/skel directory to the user’s home directory.

     

  11. I tried changing the save path to /tmp with no luck.  Then I started to look at some of my other sites; most settings were the same, but the memory and file upload sizes were different, so I decided to just copy the php.ini from a working site to the broken sites, and they all started to work.

    A working php.ini for me for IPB 3.4.x:

    ; cPanel-generated php ini directives, do not edit
    ; Manual editing of this file may result in unexpected behavior.
    ; To make changes to this file, use the cPanel MultiPHP INI Editor (Home >> Software >> MultiPHP INI Editor)
    ; For more information, read our documentation (https://go.cpanel.net/EA4ModifyINI)
    
    allow_url_fopen = Off
    allow_url_include = Off
    asp_tags = Off
    display_errors = Off
    enable_dl = Off
    file_uploads = On
    max_execution_time = 30
    max_input_time = 60
    max_input_vars = 1000
    memory_limit = 128M
    post_max_size = 80M
    session.gc_maxlifetime = 1440
    session.save_path = "/var/cpanel/php/sessions/ea-php56"
    upload_max_filesize = 20M
    zlib.output_compression = Off

    Once this was pasted into each broken site, I was able to save settings once again.

  12. I know this version is no longer supported, and we are working on upgrading our applications to support IPB 4.  No emails are being sent out, so I'm trying to switch from PHP mail to SMTP, but anytime I click Save it goes back to the original; it doesn't save any settings.

    I have also tried to update someone's password and it won't let me.

    I have tried Chrome and Safari, I have cleared all cookies and cache and tried again, and I'm really just out of ideas.

    We do not use friendly URLs (so no .htaccess file).....

    It says Settings saved, but nothing changes; it reverts back to what it was.  It doesn't matter if it's Email or SEO settings; the system just will not allow anything new to be saved.

    I tried changing my PHP session save path from /var/cpanel/php/sessions/ea-php56 to /tmp, thinking it might be an issue, but no difference.

    I looked at the error log under the admin directory and I see things like:

    [19-Jul-2019 15:37:50 UTC] PHP Warning:  session_start(): open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in /home/mirf/admin/applications_addon/ips/nexus/sources/support.php on line 1637
    [19-Jul-2019 15:37:50 UTC] PHP Warning:  session_start(): open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in /home/mirf/admin/applications_addon/ips/nexus/sources/support.php on line 1640
    [19-Jul-2019 15:37:50 UTC] PHP Warning:  Unknown: open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in Unknown on line 0
    [19-Jul-2019 15:37:50 UTC] PHP Warning:  Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/cpanel/php/sessions/ea-php56) in Unknown on line 0
    [19-Jul-2019 15:49:38 UTC] PHP Warning:  session_start(): open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in /home/mirf/admin/applications_addon/ips/nexus/sources/support.php on line 1637
    [19-Jul-2019 15:49:38 UTC] PHP Warning:  session_start(): open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in /home/mirf/admin/applications_addon/ips/nexus/sources/support.php on line 1640
    [19-Jul-2019 15:49:38 UTC] PHP Warning:  Unknown: open(/var/cpanel/php/sessions/ea-php56/sess_p7ql10krrei9ak22fnvc9qtai1, O_RDWR) failed: No such file or directory (2) in Unknown on line 0
    [19-Jul-2019 15:49:38 UTC] PHP Warning:  Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/cpanel/php/sessions/ea-php56) in Unknown on line 0

    Any ideas?

  13. Prerequisite: you have OpenSSL installed on your Linux box.  I use CentOS 7, but you can use any Linux distribution you prefer.

    Let's first discuss the different formats.

    PEM Format
    The PEM format is the most common format in which Certificate Authorities issue certificates. PEM certificates usually have extensions such as .pem, .crt, .cer, and .key. They are Base64-encoded ASCII files and contain "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" statements. Server certificates, intermediate certificates, and private keys can all be put into the PEM format.

    Apache and other similar servers use PEM format certificates. Several PEM certificates, and even the private key, can be included in one file, one below the other, but most platforms, such as Apache, expect the certificates and private key to be in separate files.

    DER Format
    The DER format is simply a binary form of a certificate instead of the ASCII PEM format. It sometimes has a file extension of .der, but it often has a file extension of .cer, so the only way to tell the difference between a DER .cer file and a PEM .cer file is to open it in a text editor and look for the BEGIN/END statements. All types of certificates and private keys can be encoded in DER format. DER is typically used with Java platforms. If you need to convert a private key to DER, use the OpenSSL commands below.
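    That BEGIN/END check can be scripted. Here is a small sketch (cert_format is a made-up helper name) that classifies a certificate file as PEM or DER:

```shell
# Classify a certificate file as PEM or DER, automating the "look for the
# BEGIN statement" check described above. Pass it the path to inspect.
cert_format() {
  if grep -q -- '-----BEGIN CERTIFICATE-----' "$1" 2>/dev/null; then
    echo PEM
  elif openssl x509 -inform der -in "$1" -noout 2>/dev/null; then
    echo DER
  else
    echo unknown
  fi
}
```

    Usage is just `cert_format mystery.cer`; anything that is neither a PEM nor a parseable DER certificate comes back as unknown.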

    PKCS#7/P7B Format
    The PKCS#7 or P7B format is usually stored in Base64 ASCII format and has a file extension of .p7b or .p7c. P7B certificates contain "-----BEGIN PKCS7-----" and "-----END PKCS7-----" statements. A P7B file only contains certificates and chain certificates, not the private key. Several platforms support P7B files, including Microsoft Windows and Java Tomcat.

    PKCS#12/PFX Format
    The PKCS#12 or PFX format is a binary format for storing the server certificate, any intermediate certificates, and the private key in one encryptable file. PFX files usually have extensions such as .pfx and .p12. PFX files are typically used on Windows machines to import and export certificates and private keys.

    When converting a PFX file to PEM format, OpenSSL will put all the certificates and the private key into a single file. You will need to open the file in a text editor and copy each certificate and private key (including the BEGIN/END statements) to its own individual text file and save them as certificate.cer, CACert.cer, and privateKey.key respectively.

    Now for the commands....

    Convert x509 to PEM

    openssl x509 -in certificatename.cer -outform PEM -out certificatename.pem


    Convert PEM to DER

    openssl x509 -outform der -in certificatename.pem -out certificatename.der


    Convert DER to PEM

    openssl x509 -inform der -in certificatename.der -out certificatename.pem
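    A quick sanity check for the two conversions above, using a throwaway self-signed certificate (all file names are scratch paths for the demo): converting PEM to DER and back should reproduce the original file byte for byte.

```shell
# Round-trip a throwaway self-signed cert PEM -> DER -> PEM and confirm
# nothing changed. The /tmp paths are scratch files for this demo only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.test" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null
openssl x509 -outform der -in /tmp/demo.pem -out /tmp/demo.der
openssl x509 -inform der -in /tmp/demo.der -out /tmp/demo2.pem
cmp /tmp/demo.pem /tmp/demo2.pem && echo "round trip OK"
```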


    Convert PEM to P7B

    Note: The PKCS#7 or P7B format is stored in Base64 ASCII format and has a file extension of .p7b or .p7c.
    A P7B file only contains certificates and chain certificates (Intermediate CAs), not the private key. The most common platforms that support P7B files are Microsoft Windows and Java Tomcat.

    openssl crl2pkcs7 -nocrl -certfile certificatename.pem -out certificatename.p7b -certfile CACert.cer


    Convert PKCS7 to PEM

    openssl pkcs7 -print_certs -in certificatename.p7b -out certificatename.pem


    Convert pfx to PEM

    Note: The PKCS#12 or PFX format is a binary format for storing the server certificate, intermediate certificates, and the private key in one encryptable file. PFX files usually have extensions such as .pfx and .p12. PFX files are typically used on Windows machines to import and export certificates and private keys.

    openssl pkcs12 -in certificatename.pfx -out certificatename.pem


    Convert PFX to PKCS#8
    Note: This requires 2 commands

    STEP 1: Convert PFX to PEM

    openssl pkcs12 -in certificatename.pfx -nocerts -nodes -out certificatename.pem


    STEP 2: Convert PEM to PKCS8

    openssl pkcs8 -in certificatename.pem -topk8 -nocrypt -out certificatename.pk8


    Convert P7B to PFX
    Note: This requires 2 commands

    STEP 1: Convert P7B to CER

    openssl pkcs7 -print_certs -in certificatename.p7b -out certificatename.cer


    STEP 2: Convert CER and Private Key to PFX

    openssl pkcs12 -export -in certificatename.cer -inkey privateKey.key -out certificatename.pfx -certfile cacert.cer

    another example

    openssl pkcs12 -export -inkey zahsystems.com.key -in zahsystems.crt -certfile L1k.Chain.Bundle.2018.crt -out zahsystems.pfx

     

  14. This is a topic I need help on, so I'm starting a topic where I'll post what I find and experience during my discovery of Linux containers.

    First, I can start off by describing what a Linux container is and why I'm interested.

    Linux containers are technologies that allow you to package and isolate applications with their entire runtime environment—all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.

    Both full machine virtualization and containers have their advantages and disadvantages. Full machine virtualization offers greater isolation at the cost of greater overhead, as each virtual machine runs its own full kernel and operating system instance. Containers, on the other hand, generally offer less isolation but lower overhead through sharing certain portions of the host kernel and operating system instance. In my opinion full machine virtualization and containers are complementary; each offers certain advantages that might be useful in specific situations.

    Linux containers help reduce conflicts between your development and operations teams by separating areas of responsibility. Developers can focus on their apps and operations can focus on the infrastructure. And, because Linux containers are based on open source technology, you get the latest and greatest advancement as soon as they’re available. Container technologies—including CRI-O, Kubernetes, and Docker—help your team simplify, speed up, and orchestrate application development and deployment.

    Docker - by Docker, Inc

    a container system making use of LXC containers so you can: Build, Ship, and Run Any App, Anywhere (http://www.docker.com)

    LXD - by Canonical, Ltd

    a container system making use of LXC containers so that you can: run LXD on Ubuntu and spin up instances of RHEL, CentOS, SUSE, Debian, Ubuntu and just about any other Linux too, instantly.

    Docker vs LXD

    Docker specializes in deploying apps

    LXD specializes in deploying (Linux) Virtual Machines


    Some great reference materials include:

    https://linuxcontainers.org

    https://opensource.com/resources/what-are-linux-containers

  15. Here are some notes regarding using containers in Ubuntu via lxd

    Install was pretty simple: lxd said it was already installed when I tried to run sudo apt-get install lxd.

    I got no errors running: newgrp lxd

    Also no errors running: sudo lxd init

    You need images before you can create lxd instances. Here is a manual way of adding an image to the image store:

    lxc image import <file> --alias <name>

    If you want to see a list of images you can simply run: lxc image list images:

    By default lxd comes with three image stores built in

    1. ubuntu: (for stable Ubuntu images)
    2. ubuntu-daily: (for daily Ubuntu images)
    3. images: (for a bunch of other distros)

    So if you wanted to check out what images are available at the default image stores you could run

    1. lxc image list ubuntu:
    2. lxc image list ubuntu-daily:
    3. lxc image list images:

    Once you've found an image you'd like to install, you simply type:

    1. lxc launch ubuntu:14.04 my-ubuntu
    2. lxc launch ubuntu-daily:16.04 my-ubuntu-dev
    3. lxc launch images:centos/6/amd64 my-centos

    But let's say you already have an image and it's not in one of these stores.  This is how you import it:

    • Import it with: lxc image import <file> --alias my-alias
    • Run it with: lxc launch my-alias my-container

    Now if you wanted to run a command against one of your containers (let's say my-ubuntu that you created above) you would run: lxc exec my-ubuntu -- /bin/bash

    Here are some other examples:

    • lxc exec my-ubuntu -- apt-get update
    • lxc file pull my-ubuntu/etc/hosts .
    • lxc file push hosts my-ubuntu/tmp/

    You may want to stop the container and to do so you would run: lxc stop my-ubuntu

    And to delete the container: lxc delete my-ubuntu
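    The whole workflow above can be sketched as one shell function. This is an illustrative sketch only: it assumes lxd is installed and `lxd init` has been run, and web01 is a hypothetical container name.

    ```shell
    # Sketch of the LXD container lifecycle described above.
    # Assumes lxd is installed and initialized; "web01" is a hypothetical name.
    # Defined as a function so nothing runs until you call lxd_lifecycle_demo.
    lxd_lifecycle_demo() {
      lxc launch ubuntu:14.04 web01              # create and start from the ubuntu: store
      lxc exec web01 -- apt-get update           # run a command inside the container
      lxc file push /etc/hosts web01/tmp/hosts   # copy a file into the container
      lxc file pull web01/etc/hosts .            # copy a file out of the container
      lxc stop web01                             # stop it
      lxc delete web01                           # and remove it
    }
    ```

    Calling lxd_lifecycle_demo on a host with lxd set up walks through the same launch / exec / file / stop / delete steps listed above.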

     

  16. So how do you locate, update, refresh, or rescan the iTunes library?

    Easy, fast way (not always successful)

    1. First quit iTunes app.
    2. Re-launch iTunes app while holding down the Option key.
    3. Select “choose library.”
    4. Go to the iTunes folder and choose it.

    Another method that always works for me (just takes forever)

    1. Launch iTunes app on your computer
    2. Navigate to File -> Add to library
    3. Select the location of your iTunes library (ex: iTunes folder)

    iTunes will relocate or refresh all the metadata and files already in your library. Don’t worry, it won’t create duplicate files.

  17. Anytime I try to upgrade my Apple Watch it says it can't because there is no internet connection.  I verified I have an internet connection.  I also went to airplane mode with just Wi-Fi and Bluetooth enabled, validated my internet (Wi-Fi) worked, and it still won't upgrade.

    Any ideas?

  18. You utilize EC2 (which stands for Elastic Compute Cloud)

    Launch Instance (I personally choose Red Hat since it's used more in business environments)

    Create a Key Pair (it will download automatically and this file is very important for access)

     

    Accessing site

    chmod 600 /path/to/your_keyname.pem

    Get your EC2 name (something of the form *.compute.amazonaws.com). 

    Now we can SSH into the EC2 instance by running the following command

    ssh -i /path/to/your_keyname.pem ec2-user@your_instance.compute.amazonaws.com

    (The default username depends on the AMI: ec2-user for Red Hat and Amazon Linux images, ubuntu for Ubuntu images.)
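    The chmod 600 step above matters because ssh refuses a private key that is readable by group or other. A quick check of the resulting mode, demonstrated here on a temporary file standing in for your_keyname.pem:

    ```shell
    # ssh rejects private keys with permissions looser than 600.
    # Demonstrated on a temp file standing in for your_keyname.pem.
    key=$(mktemp)
    chmod 600 "$key"
    stat -c '%a' "$key"   # GNU stat; macOS uses different stat flags
    rm -f "$key"
    ```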

     

     

  19. netcat is known as the Swiss Army knife of network tools on Linux.  It's used for monitoring, testing, and sending data across network connections, and it's free.

    By default, netcat (nc) operates by initiating a TCP connection to a remote host.  The basic syntax is:

    nc [options] host port

    This attempts a TCP connection to the defined host on the port number specified.  Something very similar to the old telnet command. Keep in mind your connection is entirely unencrypted.

    If you prefer to test a UDP connection instead of TCP, you can use the -u option

    nc -u host port

    Sometimes you prefer to test a range of ports

    nc host firstport-lastport

    Of course this is typically used with other flags.

     

    Netcat for Port Scanning

    nmap is a better tool for this sort of thing, but you can also use netcat to find which ports are open.

    We simply specify a range of ports to scan along with the -z option, which tells netcat to report listening daemons without sending them any data.

    For example, we can scan all ports up to 1000 by issuing:

    nc -z -v domain.com 1-1000

    Notice I also threw in the -v option to tell netcat to provide more verbose information.

    You would get an output like the following:

    nc: connect to domain.com port 1 (tcp) failed: Connection refused
    nc: connect to domain.com port 2 (tcp) failed: Connection refused
    nc: connect to domain.com port 3 (tcp) failed: Connection refused
    nc: connect to domain.com port 4 (tcp) failed: Connection refused
    nc: connect to domain.com port 5 (tcp) failed: Connection refused
    nc: connect to domain.com port 6 (tcp) failed: Connection refused
    nc: connect to domain.com port 7 (tcp) failed: Connection refused
    . . .
    Connection to domain.com 22 port [tcp/ssh] succeeded!
    . . .

    If you know the IP address you want to scan, the scan goes much faster because DNS resolution can be skipped entirely; you have to add the -n option to tell netcat not to resolve.  Something like this:

    nc -z -n -v 12.34.56.78 1-1000

    You can also start filtering on the results like this...

    nc -z -n -v 12.34.56.78 1-1000 2>&1 | grep succeeded

    Now only the successful connections are shown.
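    Since netcat writes its scan results to stderr, the 2>&1 redirection is what lets grep see them. The filtering can be demonstrated on simulated scan output (the addresses and ports here are just sample values):

    ```shell
    # Simulate the scan output shown above (real nc writes these lines to
    # stderr, hence the 2>&1 in the command) and keep only the open ports.
    printf '%s\n' \
      'nc: connect to 12.34.56.78 port 21 (tcp) failed: Connection refused' \
      'Connection to 12.34.56.78 22 port [tcp/ssh] succeeded!' \
      'nc: connect to 12.34.56.78 port 23 (tcp) failed: Connection refused' \
      | grep succeeded
    ```

    Only the "succeeded" line survives the filter.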

     

    usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
              [-P proxy_username] [-p source_port] [-q seconds] [-s source]
              [-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
              [-x proxy_address[:port]] [destination] [port]
         The options are as follows:
    
         -4      Forces nc to use IPv4 addresses only.
         -6      Forces nc to use IPv6 addresses only.
         -b      Allow broadcast.
         -C      Send CRLF as line-ending.
         -D      Enable debugging on the socket.
         -d      Do not attempt to read from stdin.
         -h      Prints out nc help.
         -I length
                 Specifies the size of the TCP receive buffer.
         -i interval
                 Specifies a delay time interval between lines of text sent and received.  Also causes a delay time between connections
                 to multiple ports.
         -k      Forces nc to stay listening for another connection after its current connection is completed.  It is an error to use
                 this option without the -l option.
         -l      Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host.  It
                 is an error to use this option in conjunction with the -p, -s, or -z options.  Additionally, any timeouts specified with
                 the -w option are ignored.
         -n      Do not do any DNS or service lookups on any specified addresses, hostnames or ports.
         -O length
                 Specifies the size of the TCP send buffer.
         -P proxy_username
                 Specifies a username to present to a proxy server that requires authentication.  If no username is specified then
                 authentication will not be attempted.  Proxy authentication is only supported for HTTP CONNECT proxies at present.
         -p source_port
                 Specifies the source port nc should use, subject to privilege restrictions and availability.
         -q seconds
                 after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.
         -r      Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in
                 the order that the system assigns them.
         -S      Enables the RFC 2385 TCP MD5 signature option.
         -s source
                 Specifies the IP of the interface which is used to send the packets.  For UNIX-domain datagram sockets, specifies the
                 local temporary socket file to create and use so that datagrams can be received.  It is an error to use this option in
                 conjunction with the -l option.
         -T toskeyword
                 Change IPv4 TOS value.  toskeyword may be one of critical, inetcontrol, lowcost, lowdelay, netcontrol, throughput,
                 reliability, or one of the DiffServ Code Points: ef, af11 ... af43, cs0 ... cs7; or a number in either hex or decimal.
         -t      Causes nc to send RFC 854 DON'T and WON'T responses to RFC 854 DO and WILL requests.  This makes it possible to use nc
                 to script telnet sessions.
         -U      Specifies to use UNIX-domain sockets.
         -u      Use UDP instead of the default option of TCP.  For UNIX-domain sockets, use a datagram socket instead of a stream
                 socket.  If a UNIX-domain socket is used, a temporary receiving socket is created in /tmp unless the -s flag is given.
         -V rtable
                 Set the routing table to be used.  The default is 0.
         -v      Have nc give more verbose output.
         -w timeout
                 Connections which cannot be established or are idle timeout after timeout seconds.  The -w flag has no effect on the -l
                 option, i.e. nc will listen forever for a connection, with or without the -w flag.  The default is no timeout.
         -X proxy_protocol
                 Requests that nc should use the specified protocol when talking to the proxy server.  Supported protocols are "4" (SOCKS
                 v.4), "5" (SOCKS v.5) and "connect" (HTTPS proxy).  If the protocol is not specified, SOCKS version 5 is used.
         -x proxy_address[:port]
                 Requests that nc should connect to destination using a proxy at proxy_address and port.  If port is not specified, the
                 well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).
         -Z      DCCP mode.
         -z      Specifies that nc should just scan for listening daemons, without sending any data to them.  It is an error to use this
                 option in conjunction with the -l option.
    
         destination can be a numerical IP address or a symbolic hostname (unless the -n option is given).  In general, a destination
         must be specified, unless the -l option is given (in which case the local host is used).  For UNIX-domain sockets, a destination
         is required and is the socket path to connect to (or listen on if the -l option is given).
    
         port can be a single integer or a range of ports.  Ranges are in the form nn-mm.  In general, a destination port must be
         specified, unless the -U option is given.

     

  20. iOS Roaming

    Trigger threshold

    The trigger threshold is the minimum signal level a client requires to maintain the current connection.  

    iOS clients monitor and maintain the current BSSID’s connection until the RSSI crosses the -70 dBm threshold. Once crossed, iOS initiates a scan to find roam candidate BSSIDs for the current ESSID.

    This information is important to consider when designing wireless cells and their expected signal overlap. For example, if 5 GHz cells are designed with a -67 dBm overlap:

    1. iOS uses -70 dBm as the trigger and will therefore remain connected to the current BSSID longer than you expect.
    2. Review how the cell overlap was measured. The antennas on a portable computer are much larger and more powerful than a smartphone or tablet, so iOS devices see different cell boundaries than expected. It is always best to measure using the target device.

    Roam scan

    A roam scan is when stations check the available channels in a given band (either 2.4 or 5 GHz) for access points (APs) that support the current ESSID.

    The time it takes to scan depends on a variety of factors, but the best way to streamline this process is to enable 802.11k on your control plane since iOS leverages the first 6 entries in the neighbor report and reviews the candidates to prioritize its scanning. Without 802.11k iOS has to scan more methodically, potentially adding several seconds to the discovery process.

    For example, if a user is on a call and walks to the other side of the building, the device crosses the -70 dBm threshold and looks for roam targets. Using the neighbor report provided by 802.11k, it knows there are APs supporting the current ESSID on channels 36, 44 and 11. It immediately scans those channels, finds the AP on channel 44 has the appropriate signal strength, and roams. However, without 802.11k the client must scan all of the various channels on each band to find a roam target, adding several seconds to the process.

    Roam candidate selection criteria

    iOS 8 and later selects target BSSIDs based on two criteria:

    1. Whether the client is transmitting or receiving a series of 802.11 data packets.
    2. The difference in signal strength relative to the current BSSID’s RSSI.

    iOS 8 and later selects target BSSIDs whose reported RSSI is 8 dB or greater than the current BSSID’s RSSI if the client is transmitting or receiving data. Clients not sending or receiving data, for example sitting idle in a pocket, use a 12 dB differential.

    For example, if the RSSI of the current connection drops to -75 dBm, and the user is engaged in a VoWLAN call, then iOS 8 searches for BSSIDs with an RSSI of -67 dBm or better.

    If that same user isn't in a call, or transmitting or receiving a series of data packets, then iOS 8 only considers BSSIDs with an RSSI of -63 dBm or better.

    802.11 Management and Control frames do not count as data.
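    The candidate-selection arithmetic above can be sketched as a small shell helper. This is purely an illustration of the stated criteria, not any Apple API; the function name and arguments are hypothetical.

    ```shell
    # Minimum RSSI (dBm) a roam candidate must report, per the iOS 8+ criteria
    # above: current RSSI + 8 dB while data is flowing, + 12 dB when idle.
    roam_candidate_threshold() {
      rssi=$1
      active=$2
      if [ "$active" = "yes" ]; then
        echo $((rssi + 8))
      else
        echo $((rssi + 12))
      fi
    }
    roam_candidate_threshold -75 yes   # -67, the VoWLAN call example above
    roam_candidate_threshold -75 no    # -63, the idle example above
    ```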

    Understanding the selection criteria of iOS allows administrators to reevaluate their current wireless design to make sure that it provides the expected and required performance to support real-time services like voice or video.

    Roam performance

    Roam performance indicates the time a client requires to successfully authenticate to a new BSSID.

    Finding a valid roam candidate is only part of the process—the client has to actually complete the roam process quickly and unobtrusively so the user experiences no interruption in service. Roaming itself involves the client authenticating against the new BSSID and deauthenticating from the current BSSID. The security and authentication method dictates how quickly this can be achieved.

    802.1X-based authentication requires the client to complete the entire EAP key exchange before it can deauthenticate from the current BSSID. This can take several seconds, depending on the environment’s authentication infrastructure, and translates into interrupted service to the end user in the form of dead air.

    The best way to streamline this process is to utilize the fast roam capabilities of 802.11r if this is supported by your networking equipment. 802.11r allows clients to pre-authenticate against potential access points, reducing the authentication time from potential seconds to milliseconds.

    Measuring Client RSSI using AirPort Utility

    Apple’s AirPort Utility for iOS 1.3.4 includes a wireless scanning feature that provides a log of the client’s view of the network. Administrators can use this feature to validate the iOS client’s view of the network at a given location or walking a path as the scanner maintains a log of scan events for review.

    Don't attempt to use AirPort Utility on the same device that you're running your application on, as that can produce inaccurate results. Apple recommends using a separate device (of the same model) dedicated to the scanning process.

    The scanning feature is enabled in the AirPort preferences pane in the iOS Settings app.

     

    Cached Roaming with WPA2-Enterprise.pdf

  21. Download your ISO first

    Open the Terminal (in /Applications/Utilities/ or query Terminal in Spotlight)

    Convert the .iso file to .img using the convert option of hdiutil

    hdiutil convert -format UDRW -o /Users/dennis/Downloads/ubuntu-14.04.3-desktop-i386.img /Users/dennis/Downloads/ubuntu-14.04.3-desktop-i386.iso

    Reading Master Boot Record (MBR : 0)…
    Reading Ubuntu 14.04.3 LTS i386 (Apple_ISO : 1)…
    Reading (Windows_NTFS_Hidden : 2)…
    .......................................................................................................................................................................
    Elapsed Time: 4.670s
    Speed: 217.3Mbytes/sec
    Savings: 0.0%
    created: /Users/dennis/Downloads/ubuntu-14.04.3-desktop-i386.img.dmg

    Run diskutil list to get the current list of devices

    diskutil list

    /dev/disk0 (internal, physical):
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:      GUID_partition_scheme                     *751.3 GB   disk0
       1:                        EFI EFI                  209.7 MB   disk0s1
       2:          Apple_CoreStorage Macintosh HD         750.4 GB   disk0s2
       3:                 Apple_Boot Recovery HD          650.0 MB   disk0s3

    /dev/disk1 (internal, virtual):
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:                  Apple_HFS Macintosh HD        +750.1 GB   disk1
                                     Logical Volume on disk0s2
                                     C1E17208-C4A2-4DF9-8B91-CAC2275AAE42
                                     Unlocked Encrypted

    Insert your flash media

    Run diskutil list again and determine the device node assigned to your flash media

    diskutil list

    /dev/disk0 (internal, physical):
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:      GUID_partition_scheme                     *751.3 GB   disk0
       1:                        EFI EFI                  209.7 MB   disk0s1
       2:          Apple_CoreStorage Macintosh HD         750.4 GB   disk0s2
       3:                 Apple_Boot Recovery HD          650.0 MB   disk0s3

    /dev/disk1 (internal, virtual):
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:                  Apple_HFS Macintosh HD        +750.1 GB   disk1
                                     Logical Volume on disk0s2
                                     C1E17208-C4A2-4DF9-8B91-CAC2275AAE42
                                     Unlocked Encrypted

    /dev/disk2 (external, physical):
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:     FDisk_partition_scheme                     *8.1 GB     disk2
       1:                 DOS_FAT_32 MYLINUXLIVE          8.1 GB     disk2s1

    Run diskutil unmountDisk /dev/diskN

    diskutil unmountDisk /dev/disk2

    Unmount of all volumes on disk2 was successful

    Execute sudo dd if=/path/to/downloaded.img of=/dev/rdiskN bs=1m

    (replace /path/to/downloaded.img with the path where the image file is located; for example, ./ubuntu.img or ./ubuntu.dmg).

    Using /dev/rdisk instead of /dev/disk may be faster

    If you see the error dd: Invalid number '1m', you are using GNU dd. Use the same command but replace bs=1m with bs=1M

    If you see the error dd: /dev/diskN: Resource busy, make sure the disk is not in use. Start the 'Disk Utility.app' and unmount (don't eject) the drive

    sudo dd if=/Users/dennis/Downloads/ubuntu-14.04.3-desktop-i386.img of=/dev/rdisk2 bs=1m
    Password:
    1015+0 records in
    1015+0 records out
    1064304640 bytes transferred in 238.599983 secs (4460623 bytes/sec)
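    As a sanity check, the byte count dd reports should equal the record count times the block size (bs=1m on BSD dd is 1 MiB, i.e. 1,048,576 bytes):

    ```shell
    # 1015 one-MiB blocks should match the "bytes transferred" figure above.
    echo $((1015 * 1024 * 1024))   # 1064304640
    ```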

    Run diskutil eject /dev/diskN and remove your flash media when the command completes.

    diskutil eject /dev/disk2
    Disk /dev/disk2 ejected

    Restart your Mac and press alt/option key while the Mac is restarting to choose the USB stick.

  22. Yes CentOS 7 is out but it won't work on my old HP Proliant DL380 G4 so I am using my old CentOS 6.5 install CD and it is working great.

    So what did I do....

    First, if you have a Smart Array: I configured two of the six drives as RAID 1+0 for the boot drive and the other four as RAID 5. Once these logical drives are configured you can move on; if you don't have a Smart Array, you can also move on.

    The CentOS installer booted, I picked my location information, and I configured the network device since it makes things easier later.

    When CentOS booted up, I had no network connection. I ran ifconfig and all it showed was the loopback (lo) interface.

    vi /etc/sysconfig/network-scripts/ifcfg-eth0

    I had to change ONBOOT=no to ONBOOT=yes and save my changes.

    Ran

    /etc/init.d/network restart

    I was able to ping 8.8.8.8 (which is Google's DNS server)
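    The ONBOOT edit can also be scripted instead of done in vi. Here it's demonstrated on a temporary copy; on the real system the file is /etc/sysconfig/network-scripts/ifcfg-eth0.

    ```shell
    # Flip ONBOOT=no to ONBOOT=yes, shown on a temp stand-in for
    # /etc/sysconfig/network-scripts/ifcfg-eth0.
    cfg=$(mktemp)
    printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=no\n' > "$cfg"
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' "$cfg"
    grep '^ONBOOT' "$cfg"   # ONBOOT=yes
    rm -f "$cfg"
    ```

    After editing the real file, /etc/init.d/network restart applies the change, as described above.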

    Next is to update the old CentOS system so I ran

    yum -y update

    This installed 561 updates on my system.
