krzysztof.anzorge Newcomer, 19 posts since Mar 18, 2011

Aug 29, 2013 2:27 PM

SSH connection refused to NODE01 in cluster config

Hi,

I have an Active/Passive cluster of MFE 8.3.1 virtual firewalls.

cluster IP is 10.0.0.50
NODE01 IP is 10.0.0.111
NODE02 IP is 10.0.0.222

I can connect through the Admin Console (port 9003) to the cluster IP.
The connection is redirected to the active node (via ARP) and everything works OK, even when the cluster fails over.

Problem:
When NODE01 acts as primary, I have a problem with SSH connections to NODE01 on the cluster IP (10.0.0.50).
When NODE02 acts as primary, I can connect via SSH on the cluster IP (10.0.0.50) without any problem.


When NODE01 acts as primary and I try an SSH connection to 10.0.0.50, I get this error in the audit log:
============================
2013-08-29 14:54:49 -0400 f_kernel a_nil_area t_netprobe p_minor
hostname: NODE01.mcafee.lab event: TCP netprobe srcip: 10.0.0.150
srcport: 2109 srczone: internal dstip: 10.0.0.50 dstport: 22 protocol: 6
interface: em1
reason: Received a TCP connection attempt destined for a service that the current policy does not support.
============================
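
(For a client-side view of the same failure, a verbose OpenSSH attempt from the internal host, 10.0.0.150 in the audit above, shows whether the TCP connection is reset or times out; the admin user name here is just an example:)

$> ssh -v admin@10.0.0.50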

When NODE02 acts as primary, SSH connections to 10.0.0.50 work OK, with this audit log:
============================
2013-08-29 15:03:17 -0400 f_ssh_server a_server t_auth_attempt p_major
pid: 1416 logid: 0 cmd: 'sshd' hostname: NODE02.mcafee.lab
event: auth allow user_name: admin auth_method: Password
reason: Authentication succeeded.
============================

I have enabled the standard (default) SSH policy rule on both firewall nodes:
=====================================
policy add table=rule name='Secure Shell Server' rulegroup=Administration \
    pos=5 action=allow appdefense=defaultgroup:defaultgroup \
    application='custom:SSH Server' audit=standard \
    authenticator=authenticator:Password authgroups='*' dest=all:v4 \
    dest_zones=zone:internal disable=no exclude_capability='' ipsresponse='' \
    nat_addr='' nat_mode=none redir='' redir_port='' sign_category_grp='' \
    source=all:v4 source_zones=zone:internal ssl_ports='' tcp_ports='' \
    timeperiod='*' ts_enable=no ts_reputation=medium_unverified_threshold \
    udp_ports='' description='Allow SSH server access from the internal zone' \
    last_changed_by='admin on Thu Aug 29 12:37:02 2013'
======================================
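
(One way to double-check that this rule really is identical on both nodes is to dump the rule table on each node and diff the two listings. The listing above looks like 'cf policy q' output, but that exact command form is an assumption here; substitute whatever command produced the paste:)

$> cf policy q > /tmp/policy_node01.txt
-- Run the same on NODE02, copy the file over, then:
$> diff /tmp/policy_node01.txt /tmp/policy_node02.txt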

Question:
Why can't I log in via SSH to NODE01 on the cluster IP?

Best regards
Krzysztof Anzorge

  • vetterous Newcomer, 9 posts since May 27, 2011

    Do you have 10.0.0.50 within /etc/ssh/sshd_config as a listen address?
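
    (For reference, ListenAddress is the standard OpenSSH sshd_config directive in question. An illustrative excerpt using the addresses from this thread; note that the firewall software may generate this file, so treat a manual edit as a diagnostic check only:)

    ====================================
    # If only the node's own address is present, sshd will not
    # accept connections on the shared cluster IP:
    ListenAddress 10.0.0.111
    # The cluster address must also be listed (or no ListenAddress
    # lines at all, which makes sshd listen on every address):
    ListenAddress 10.0.0.50
    ====================================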

  • sliedl McAfee SME, 535 posts since Nov 3, 2009

    If you have an active-standby cluster, are you turning OFF Node2 when you try to log in to Node1?


    What might be happening is that you think Node1 is the primary but it has not fully 'taken over' as primary. This could happen if an interface that is configured for 'Link Monitoring' is unplugged: the link is down, so the firewall does not take over.

    When Node1 is primary (and Node2 is off) run these commands to see if there is anything listening on port 22:

    $> sockstat -4lp 22

    $> lsof -nPi :22
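
    (Illustrative output only; if sshd is bound to the cluster IP you would expect lines like these, with your own PID/FD values:)

    ====================================
    USER   COMMAND   PID   FD  PROTO  LOCAL ADDRESS   FOREIGN ADDRESS
    root   sshd      1416  3   tcp4   10.0.0.50:22    *:*
    root   sshd      1416  4   tcp4   10.0.0.111:22   *:*
    ====================================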

    When Node1 is Primary, what does 'cf clus stat' say and what does 'cf clus fail' report back on the screen?

    If none of those things work you should open a Support ticket as there could be a myriad of things going on here.  It 'seems' simple enough, as this is a netprobe, which means you do not have a rule configured on that port (but you DO).  This means to me that either the policy is incorrect on one firewall or that the SSH server is not starting on Node1 for whatever reason.

  • sliedl McAfee SME, 535 posts since Nov 3, 2009

    Capture an audit and restart the SSH server on Node1, then look at the audit to see if it fails to start for some reason (it's obviously not listening on port 22 on that IP):

    $> acat -kb > audit.raw&

    -- The & puts the command into the background

    $> cf daemond restart agent=ssh_server

    $> fg

    -- Now hit CTRL+C to stop the capture to audit.raw

    -- Look at the audit file:

    $> acat audit.raw | less

    Also look at the /var/log/daemond.log file to see if the SSH server fails to start for some reason.
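
    (As a hypothetical quick filter, since that log can be long; the exact message text depends on your release:)

    $> grep -i ssh /var/log/daemond.log | tail -20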

    Can you please paste the output of 'cf int q' from node1 and also 'ifconfig -a'?
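
    (While Node1 is primary, a quick illustrative check is to grep that interface listing for the cluster address, to confirm it is actually bound on Node1:)

    $> ifconfig -a | grep '10.0.0.50'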

  • sliedl McAfee SME, 535 posts since Nov 3, 2009

    The audit says it's listening on port 22 on 10.0.0.50 now. Strange.


    What is the subnet mask on your PC? Not that it should really matter, since the traffic seems to be hitting the firewall.
