Ivy Consultants Inc.

Consulting Services for Security, Networking, Wi-Fi and Windows Server

I recently started designing and developing a Juniper/Mist SD-WAN POC for an enterprise customer and thought it would be nice to share it with you all in case you are working on something similar.

Customer Background

The customer is a national retailer with hundreds of branches/stores throughout Canada. All branches currently connect through MPLS to two of its data centers. The customer was interested in better network performance, reliability, and manageability, along with lower administrative costs. We presented the Juniper/Mist SD-WAN solution, and before making a firm decision they wanted to confirm that SD-WAN technology would help them achieve all of these objectives, so they asked for a POC.

In discussion with the Customer, we identified one typical site + two DCs for this POC.

The following hardware (BOM) was ordered and received:

Juniper SRX 4100: qty 4 (two for a cluster at each DC)

Juniper SRX 320: qty 2 (one cluster at the Branch location)

Juniper EX4300: qty 2 (for the Branch site; the two DCs already have existing switches)

Mist AP 41: qty 4 (for the Branch site)

High Level Diagram of the POC

The Branch connects to the two DCs with two internet links for redundancy, as shown in the diagram above. The required physical connections are also shown.

Chassis Cluster

In HA mode, two SRX devices are joined into a chassis cluster and act as a single logical device. The Flexible PIC Concentrator (FPC) numbering starts at zero (0) on one device and continues through to the other device's last FPC number. For example, in the given figure, the FPC numbering starts at zero on device A and ends at FPC nine on device B.
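You will see this renumbering later in the branch configuration: node 0's ports keep their original slot numbers, while node 1's ports continue the count (starting at FPC 3 on the SRX 320), so the same physical port is referenced differently on each node:

# Node 0 port, original slot numbering

set interfaces ge-0/0/3 gigether-options redundant-parent reth1

# The matching port on node 1; its slot numbering continues at FPC 3 on the SRX 320

set interfaces ge-3/0/3 gigether-options redundant-parent reth1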

Control Plane

On the SRX, the control plane and data plane are separated. In HA, only one Routing Engine (RE) is active at a time; if the primary RE fails, the secondary device takes over the primary RE role. The control plane synchronizes state between the two devices by exchanging hello messages. Two processes on the RE handle this: jsrpd and ksyncd. jsrpd, the Junos Stateful Redundancy Protocol daemon, is responsible for exchanging the hello messages and performing failover between the devices. ksyncd, the kernel state synchronization daemon, is responsible for synchronizing kernel state between the two devices.
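Once the cluster is formed, you can confirm that these control-plane hello messages are being exchanged with a standard operational command:

user@host> show chassis cluster control-plane statistics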

Data Plane

Recall from the discussion of traffic sessions and device configuration that the second device must know the state of the traffic when the first device fails. This information is exchanged between the devices by the data plane, which synchronizes sessions and services between them. A session is the current state information for a traffic flow; for example, if a user is browsing Gmail, the firewall maintains a session for that flow, and this session information is synchronized between the two devices. With these Junos high availability concepts covered, let's move on to the device configuration.
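Session synchronization over the fabric link can also be checked with standard operational commands; a quick sketch:

user@host> show chassis cluster data-plane statistics

user@host> show security flow session summary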

SRX 4100 Initial Configuration

Connect to the Console Port

  1. Plug the RJ-45 end of the DB9-to-RJ-45 cable into the console port on your services gateway.
  2. Connect the other end of the cable to the serial port on the management device, and configure the serial port as follows: baud rate 9600, parity none, data bits 8, stop bits 1, flow control none.

To enable J-Web (GUI)

Before you can use J-Web to configure the device, you must first configure root authentication, and that must be done from the CLI.

1. Log in to the device as root. When the device is powered on with the factory default configuration, you do not need to enter a password.

2. At the (%) prompt, type cli to start the CLI and press Enter. The prompt changes to an angle bracket (>) when you enter CLI operational mode.
root% cli
root>

3. At the (>) prompt, type configure and press Enter. The prompt changes from > to # when you enter configuration mode.
root> configure
Entering configuration mode
root#

4. Set the root authentication password by entering a cleartext password, an encrypted password, or an SSH public key string (DSA or RSA).
root# set system root-authentication plain-text-password
New password: password
Retype new password: password

5. Configure a route for the management interface (optional; required only if you do not connect the MGMT port directly to the management device). An illustrative example is provided after these steps.
root# set routing-options static route <destination-prefix> next-hop <gateway-address>

6. Commit the configuration changes.
root# commit

7. Connect the MGMT port on the device to the Ethernet port on the management device using an RJ-45 cable.

8. Configure an IP address on the 192.168.1.0/24 subnetwork for the management device. By default, the management interface is configured with the 192.168.1.1/24 IP address.

9. Launch a Web browser from the management device and access the services gateway using the URL https://192.168.1.1.

NOTE: As the system-generated certificate is not trusted by default, an alert is displayed. You can ignore this alert and proceed to access the services gateway.

10. The J-Web login page is displayed. This indicates that you have successfully completed the initial configuration and that your SRX4100 Services Gateway is ready for use.
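Here is the illustrative static route mentioned in step 5. The prefix and next hop below are hypothetical; substitute the network your management station actually sits on and a gateway reachable from the MGMT port:

# Hypothetical management network and gateway, for illustration only

root# set routing-options static route 172.16.10.0/24 next-hop 192.168.1.254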

I prefer to use the CLI, so I will provide the CLI commands moving forward.

DC-1 and DC-2 SRX 4100 Cluster Configuration

An SRX Series chassis cluster is created by physically connecting two identical cluster-supported SRX Series devices together using a pair of the same type of Ethernet connections. The connection is made for both a control link and a fabric (data) link between the two devices. Control links in a chassis cluster are made using specific ports.
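The control link typically uses a dedicated or designated port on the platform, while the fabric link is defined in configuration. Below is a minimal sketch of the fabric link for the SRX 4100 pair, assuming xe-0/0/2 on each node is cabled back-to-back for the fabric and that node 1's slots are renumbered starting at FPC 7 on this platform; verify both the port choice and the slot numbering against the hardware guide for your model.

# Assumed fabric port on node 0

set interfaces fab0 fabric-options member-interfaces xe-0/0/2

# Same physical port on node 1 (assuming node 1 slot numbering starts at FPC 7)

set interfaces fab1 fabric-options member-interfaces xe-7/0/2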

Below is how we connected the two SRX4100s to form a cluster

When a device joins a cluster, it becomes a node of that cluster. With the exception of unique node settings and management IP addresses, nodes in a cluster share the same configuration.

As per the Juniper user guide, the following are the prerequisites for building a cluster:

  • Confirm that hardware and software are the same on both devices.
  • Confirm that license keys are the same on both devices.

The first step in building a cluster is to configure the cluster ID and node ID.

A cluster is identified by a cluster ID (cluster-id) specified as a number from 1 through 255. Setting a cluster ID to 0 is equivalent to disabling a cluster. Keep in mind that a cluster ID greater than 15 can only be set when the fabric and control link interfaces are connected back-to-back.

Here is how to do it step by step.

Connect to the console port of Node 0 and enter:

user@host> set chassis cluster cluster-id 1 node 0 reboot

Connect to the console port of Node 1 and enter:

user@host> set chassis cluster cluster-id 1 node 1 reboot

After the reboot, you can check the status with the following command:

user@host> show chassis cluster status

Configuring Management Interface

You must assign a unique IP address to each node in the cluster to provide network management access. This part of the configuration is not replicated across the two nodes; until the fxp0 interfaces are configured, you cannot manage the nodes individually over the network.

Don't forget that after cluster formation both chassis act as one device.

Use the following commands to configure the host names and IP addresses of the two nodes:

user@host#

# Configure the name of node 0 and assign an IP address.

set groups node0 system host-name node0-router                                         

set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24

# Configure the name of node 1 and assign an IP address.

set groups node1 system host-name node1-router                                         

set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24

# Apply the groups configuration to the nodes

set apply-groups "${node}"

Always remember to "commit".

Commands to test the configuration above:

show groups

show apply-groups

show interfaces terse | match fxp0

show configuration groups node0 interfaces

Branch SRX 320 Cluster Configuration

This is slightly different from the SRX 4100 configuration at the DCs because of the platform; the difference is in the interfaces used for the control and fabric links.
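As with the DC clusters, first enable clustering from the console of each node; the cluster ID below is only an example. Note that on the SRX300 line, enabling the cluster repurposes ge-0/0/0 as fxp0 (management) and ge-0/0/1 as the control link, so cable the two ge-0/0/1 ports together.

# On node 0:

user@host> set chassis cluster cluster-id 2 node 0 reboot

# On node 1:

user@host> set chassis cluster cluster-id 2 node 1 reboot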

You can use the configuration below for a quick cut and paste (don’t forget to edit the IP addresses)

set groups node0 system host-name srx320-1

set groups node0 interfaces fxp0 unit 0 family inet address 192.16.35.46/24

set groups node1 system host-name srx320-2

set groups node1 interfaces fxp0 unit 0 family inet address 192.16.35.47/24

set groups node0 system backup-router <backup next-hop from fxp0> destination <management network/mask>

set groups node1 system backup-router <backup next-hop from fxp0> destination <management network/mask>

set apply-groups "${node}"

set interfaces fab0 fabric-options member-interfaces ge-0/0/2

set interfaces fab1 fabric-options member-interfaces ge-3/0/2

set chassis cluster redundancy-group 0 node 0 priority 100

set chassis cluster redundancy-group 0 node 1 priority 1

set chassis cluster redundancy-group 1 node 0 priority 100

set chassis cluster redundancy-group 1 node 1 priority 1

set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255

set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255

set chassis cluster redundancy-group 1 interface-monitor ge-3/0/3 weight 255

set chassis cluster redundancy-group 1 interface-monitor ge-3/0/4 weight 255

set chassis cluster reth-count 2

set interfaces ge-0/0/3 gigether-options redundant-parent reth1

set interfaces ge-3/0/3 gigether-options redundant-parent reth1

set interfaces reth1 redundant-ether-options redundancy-group 1

set interfaces reth1 unit 0 family inet address 203.0.113.233/24

set interfaces ge-0/0/4 gigether-options redundant-parent reth0

set interfaces ge-3/0/4 gigether-options redundant-parent reth0

set interfaces reth0 redundant-ether-options redundancy-group 1

set interfaces reth0 unit 0 family inet address 198.51.100.1/24

set security zones security-zone Untrust interfaces reth1.0

set security zones security-zone Trust interfaces reth0.0

Commit the configuration.
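Once the configuration is committed, verify the branch cluster and the redundant Ethernet interfaces:

user@host> show chassis cluster status

user@host> show chassis cluster interfaces

user@host> show interfaces terse | match reth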

Reference: Juniper SRX User Guide