7 Replies Latest reply on Mar 13, 2015 10:10 AM by btlyric

    Proxy Monitoring

    btlyric

      I know that this isn't an approved/supported methodology, but in version 7.2 I was able to implement additional monitoring with nagios-plugins and nrpe, installed as RPMs from the EPEL repo.

       

      In 7.4, the same (unsupported) install from the EPEL repo fails.

       

      nagios-plugins installs correctly; however, the nrpe installation returns errors:

       

      ---> Package nrpe.x86_64 0:2.14-5.el5 will be installed

      --> Processing Dependency: libssl.so.6()(64bit) for package: nrpe-2.14-5.el5.x86_64

      --> Processing Dependency: libcrypto.so.6()(64bit) for package: nrpe-2.14-5.el5.x86_64

      epel/filelists_db                                                                                                                                  | 4.7 MB     00:00

      --> Finished Dependency Resolution

      Error: Package: nrpe-2.14-5.el5.x86_64 (epel)

                 Requires: libcrypto.so.6()(64bit)

      Error: Package: nrpe-2.14-5.el5.x86_64 (epel)

                 Requires: libssl.so.6()(64bit)

      You could try using --skip-broken to work around the problem

      You could try running: rpm -Va --nofiles --nodigest

       

      Obviously, since they are used by the MWG code/daemons/functions/etc., I don't want to modify the SSL/crypto libraries on the base system. I haven't entirely decided what my next step is, but I do know that I want more monitoring flexibility than the core product affords.

       

      If anyone has run into a similar issue and/or successfully implemented this on 7.3, 7.4 or 7.5, please let me know.

       

      Cheers!

        • 1. Re: Proxy Monitoring
          otruniger

          Maybe you could install the missing libraries into a different location and (temporarily) adjust the library path when installing and running the plugins.

          • 2. Re: Proxy Monitoring
            michael_schneider
            1. If you would like to continue using the SSL Scanner and the product in general, don't mess with the SSL libs.
            2. Again: I would not, under any circumstances, touch the SSL libs.
            3. You could, however, try getting a copy of the libs you need, placing them into the Nagios folder, and setting LD_LIBRARY_PATH or LD_PRELOAD specifically for the Nagios plugins so that only the copies in that folder are used (see the sketch below). Other than that, see 1 and 2.
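
            For example, here's a minimal (untested) wrapper sketch — the /opt/nagios-libs path and the wrapper name are just placeholders:

                #!/bin/bash
                # check_wrapper.sh - run a Nagios plugin against private copies of
                # libssl/libcrypto so the system libraries stay untouched.
                # Assumes the required .so files have been copied to /opt/nagios-libs.
                export LD_LIBRARY_PATH=/opt/nagios-libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
                exec "$@"

            NRPE would then invoke a plugin as, e.g., check_wrapper.sh /usr/lib64/nagios/plugins/check_disk -w 20% -c 10%.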

             

            thanks,

            Michael

            • 3. Re: Proxy Monitoring
              trishoar

              Hi,

               

              All our MWG servers are monitored over SNMP by our Nagios server. What is it you are monitoring that SNMP does not provide?

               

              Tris

              • 4. Re: Proxy Monitoring
                mkr81

                Tris, we are using Nagios as well and want to monitor MWG over SNMP. Glad to hear someone has managed that. Do you have a guide at hand on how to implement this? And did you use snmptrap via the MWG error handler policy, or something like a Nagios Perl script that polls via SNMP requests?

                 

                Looking forward to your reply,

                Marco

                • 5. Re: Re: Proxy Monitoring
                  trishoar

                  Hi Marco,

                   

                  Our monitoring is done by polling. At some point I might add traps, but so far that has not really been necessary for us.

                  Here are the scripts / commands we are using for polling the servers.

                  This is a shell script we call from Nagios to check memory; there are lots of existing Nagios ones, but I did not like the way they formatted the data: [Bash] check_mem.sh - Pastebin.com
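
                  For reference, the general approach is just to parse free and print a Nagios status line plus perfdata. A stripped-down sketch, not the actual script from the link (thresholds are percent used):

                      #!/bin/bash
                      # Minimal memory check in the Nagios plugin style.
                      WARN=${1:-80}; CRIT=${2:-90}
                      # "free -m" Mem: row columns: total used free shared buffers cached
                      read -r total used free shared buffers cached \
                          <<< "$(free -m | awk '/^Mem:/ {print $2,$3,$4,$5,$6,$7}')"
                      real_used=$((used - buffers - cached))   # don't count cache/buffers
                      pct=$((real_used * 100 / total))
                      msg="${pct}% used (${real_used}MB of ${total}MB)|used=${real_used}MB;;;0;${total}"
                      if   [ "$pct" -ge "$CRIT" ]; then echo "CRITICAL - $msg"; exit 2
                      elif [ "$pct" -ge "$WARN" ]; then echo "WARNING - $msg";  exit 1
                      else echo "OK - $msg"; exit 0; fi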

                   

                  These are bits that are written directly into the Nagios config:

                   

                  Check CPU

                  command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -p 9161 -P2c -C $ARG1$ -o ".1.3.6.1.4.1.2021.10.1.3.1 .1.3.6.1.4.1.2021.10.1.3.2 .1.3.6.1.4.1.2021.10.1.3.3 .1.3.6.1.4.1.2021.11.9.0 .1.3.6.1.4.1.2021.11.10.0 .1.3.6.1.4.1.2021.11.11.0" -w $ARG2$ -c $ARG3$ -l 1-min -l 5-min -l 15-min -l User -l System -l Idle -l CPU
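
                  Those are the standard UCD-SNMP OIDs: the load averages under .1.3.6.1.4.1.2021.10.1.3 and the raw user/system/idle CPU counters under .1.3.6.1.4.1.2021.11. You can sanity-check that the MWG agent answers on port 9161 with something like this (substitute your own host and community string):

                      snmpget -v 2c -c Public 10.0.0.1:9161 .1.3.6.1.4.1.2021.10.1.3.1 .1.3.6.1.4.1.2021.11.9.0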
                  

                   

                  Check Vend

                  command_line    $USER1$/check_http -H $HOSTADDRESS$ -p 80 -u http://www.google.co.uk/intl/en-GB/policies/terms/regional.html -s "Terms of Service"
                  

                  (I will change this to use mwginternal.com rather than abusing Google, when I have some time)

                   

                  Check Filter

                  command_line    $USER1$/check_http -H $HOSTADDRESS$ -p 80 -u http://www.playboy.com -e "403 URLBlocked"
                  

                   

                  We also have a webpage that polls things like real-time RPS / ingress / egress traffic. The scripts behind that can be found here:

                  [Ruby] proxy-table.rb - Pastebin.com

                  [CSS] proxy-table.css - Pastebin.com

                   

                  That script reads a file that looks a bit like this, which defines the list of servers to poll. It supports Blue Coat, MWG 6 and MWG 7:

                  [root@srvman ~]# cat /usr/local/share/bgfl-monitor/proxy-servers.txt

                  host,port,community,cluster,shortName,type,pollTime,rpsTotal,clientTotal,serverTotal
                  10.0.0.1,9161,Public,Production,Proxyname,mwg7,0,0,0,0
                  10.0.0.2,9161,Public,Production,Proxyname,mwg7,0,0,0,0
                  10.0.0.3,9161,Public,Production,Proxyname,mwg7,0,0,0,0
                  10.0.0.4,9161,Public,Production,Proxyname,mwg7,0,0,0,0
                  

                  The Ruby script is run every 5 minutes by cron.
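
                  The crontab entry looks something like this (paths and output location here are illustrative rather than our exact setup):

                      */5 * * * * root /usr/local/share/bgfl-monitor/proxy-table.rb > /var/www/html/proxy-table.html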

                   

                  We don't bother monitoring disk space in Nagios, as we do not keep logs on the servers for more than 24 hours. We do have Cacti probing them, though, and that also gives historical info on throughput etc.

                   

                  Finally, I also have some scripts that load data directly into RRD files.

                  This script creates the RRD files: [Bash] mkrrd.sh - Pastebin.com

                  This one puts the data in by scraping a block page from the proxy server that exposes internal MWG counters: [Bash] mwg-stats.sh - Pastebin.com

                  And this is an example of producing the RRD graph: [Bash] big-prod-filter-rps.sh - Pastebin.com
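
                  For anyone new to RRD, the general shape of that create/update/graph pipeline, with made-up names and values, is:

                      # create an RRD with one counter data source, 5-minute step
                      rrdtool create rps.rrd --step 300 \
                          DS:requests:COUNTER:600:0:U \
                          RRA:AVERAGE:0.5:1:2016
                      # feed it a sample; N means "now"
                      rrdtool update rps.rrd N:1234567
                      # render a one-day graph
                      rrdtool graph rps.png --start -1d \
                          DEF:rps=rps.rrd:requests:AVERAGE \
                          LINE2:rps#0000FF:"requests/sec"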

                   

                  As a disclaimer, I'm not much of a programmer, so there could be bugs or errors in some or all of these, but they do work for us. Hope this is of some use.

                   

                  Tris

                  • 6. Re: Re: Proxy Monitoring
                    mkr81

                    Wow, thank you Tris. This has been very helpful indeed! I appreciate your input.

                     

                    Best regards,

                    Marco

                    • 7. Re: Proxy Monitoring
                      btlyric

                      Figured out the problem.

                       

                      I have a deployment package that I use to perform tasks that I don't want to have to manually do whenever I build a new proxy.

                       

                      The deployment package was originally written for 7.2 (aka MLOS 1.0, aka Red Hat/CentOS 5.x) and included the correct EPEL repo for that version, but not for MLOS 2.x. Once I installed the correct EPEL repo, the nrpe installation succeeded.
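
                      For anyone else who hits this: the el5 nrpe package wants libssl.so.6/libcrypto.so.6 (EL5's OpenSSL 0.9.8), which MLOS 2.x no longer ships, so you need the EL6-flavoured EPEL repo instead. Roughly like this — verify that the EPEL release version actually matches your MLOS base before installing:

                          # MWG 7.4.x / MLOS 2.x (EL6-based); on 7.2 / MLOS 1.x use the el5 package
                          rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
                          yum install nagios-plugins nrpe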

                       

                      Things that I monitor on MWG via nrpe include:

                       

                      - Is iptables enabled and running?

                      - Is the RAID array behaving? This is done using OpenIPMI-tools-2.0.16-11.mlos1 and the check_megaraid_sas Nagios plugin, which is available on the Nagios Exchange.

                      - Other points of interest that I cannot obtain via SNMP queries.

                       

                      The RAID queries are extremely useful, as they return data about the BBU, the number of detected drives, the state of the drives, and the number of errors.
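
                      In case it helps anyone, the NRPE side of these checks boils down to a couple of command definitions; roughly like this (plugin paths are examples, and the iptables script is a sketch rather than my exact check):

                          # /etc/nagios/nrpe.cfg
                          command[check_iptables]=/usr/local/nagios/plugins/check_iptables.sh
                          command[check_raid]=/usr/local/nagios/plugins/check_megaraid_sas

                      And the iptables script itself can be as simple as:

                          #!/bin/bash
                          # Crude service check; nrpe may need sudo rights for this.
                          if service iptables status >/dev/null 2>&1; then
                              echo "OK - iptables is running"; exit 0
                          else
                              echo "CRITICAL - iptables is not running"; exit 2
                          fi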