I have been using 2x2 MIMO Nv2 on a few ~1 km, ~2 km, and ~5 km links for months, and I have always wanted to quantify my perception of how good it is.
This post contains some low-effort bandwidth and latency measurements on a 2x2 MIMO wireless link using 802.11n and Nv2.
The link under test is a point-to-point ~1 km link in the 5 GHz band, part of the Athens Wireless Metropolitan Network (AWMN).
Later this week I will add another wireless interface on the other side and use the same hardware on my side
for one more point-to-point link. Then I will repeat the same measurements.
We are trying to find out how good Nv2 really is.
The Nv2 wireless protocol is a proprietary Time Division Multiple Access (TDMA) protocol available on MikroTik RouterOS systems with Atheros-based cards. Ubiquiti has a comparable TDMA protocol called airMAX.
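For reference, switching a RouterOS wireless interface between the two modes is a single setting; a minimal sketch, assuming the radio is named wlan1:

# plain 802.11 (CSMA/CA) operation
/interface wireless set wlan1 wireless-protocol=802.11
# MikroTik's proprietary TDMA protocol
/interface wireless set wlan1 wireless-protocol=nv2

Both ends of the link need a matching protocol setting for the link to come up.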
RouterOS systems come with a Bandwidth Test Server by default. However, I think that putting the Bandwidth Test Server and the Bandwidth Test Client on other systems `behind` the routers produces more accurate results, especially for TCP throughput testing, where running the test on already somewhat busy routers skews the numbers.
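As a rough sketch of that ideal setup (addresses and host names here are placeholders, not the ones from this link): keep the router's built-in server available for the UDP runs, and push the TCP streams end to end between hosts behind the routers, e.g. with iperf:

# on the RouterOS side: the Bandwidth Test Server is on by default; this just confirms it
/tool bandwidth-server set enabled=yes authenticate=yes

# TCP end to end between PCs behind the routers (classic iperf)
pc-behind-ap$     iperf -s
pc-behind-client$ iperf -c 10.0.0.1 -t 60   # 10.0.0.1 = placeholder address of the iperf server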
80 cm offset (off-axis) satellite dishes with
dual-polarization `awmn-nvac` type feeders are used on both sides.
One of the routers is a PC and the other router is a MIPS machine (RB433GL).
Unfortunately, at the moment the PC router is connected to the iperf server on its LAN through a Fast Ethernet 100 Mb/s NIC. Hence, I have to use the Bandwidth Server and the Bandwidth Tester made by MikroTik.
Thankfully, the standalone MikroTik Windows Bandwidth Test runs on Linux under Wine, and the LAN bottleneck is only on the PC router side. The MIPS machine has a Gigabit NIC and is connected to a Gigabit LAN.
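A minimal sketch of that workaround, assuming the standalone tool is MikroTik's btest.exe sitting in the current directory; the server address, protocol, and direction are then set from its GUI:

# launch MikroTik's standalone Bandwidth Test tool on Linux
$ wine btest.exe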
The PC router is an x86 machine at 2673 MHz and the MIPS router is an RB433GL at 680 MHz.
For the UDP tests I will just use the Bandwidth Servers and Clients on the routers.
For the TCP tests I will use the Bandwidth Server on the PC router and a Bandwidth Client on a PC behind the MIPS router.
PCrouter <-air-> MIPSrouter <-GigabitLAN-> PC-BWclient
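Concretely, the UDP runs are launched from the RouterOS console of one router toward the other (a full run is shown further down), while the TCP run is driven from btest on the PC-BWclient against the Bandwidth Test Server on the PC router:

# UDP, router to router, one direction at a time
> /tool bandwidth-test address=10.27.224.237 protocol=udp direction=transmit
> /tool bandwidth-test address=10.27.224.237 protocol=udp direction=receive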
Nv2 N, One Client
Latency with Low Amounts of Traffic (no BW tests running)
From the AP to the Client
$ ping 10.21.241.67 -c 100
PING 10.21.241.67 (10.21.241.67) 56(84) bytes of data.
64 bytes from 10.21.241.67: icmp_req=1 ttl=63 time=2.27 ms
64 bytes from 10.21.241.67: icmp_req=2 ttl=63 time=6.70 ms
64 bytes from 10.21.241.67: icmp_req=3 ttl=63 time=6.27 ms
...
64 bytes from 10.21.241.67: icmp_req=100 ttl=63 time=2.67 ms
--- 10.21.241.67 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99141ms
rtt min/avg/max/mdev = 1.610/5.470/9.517/2.111 ms
Latency with Large Amounts of UDP Traffic (while BW testing)
From the AP to the Client
$ ping 10.21.241.67 -c 100
PING 10.21.241.67 (10.21.241.67) 56(84) bytes of data.
64 bytes from 10.21.241.67: icmp_req=1 ttl=63 time=16.6 ms
64 bytes from 10.21.241.67: icmp_req=2 ttl=63 time=9.52 ms
64 bytes from 10.21.241.67: icmp_req=3 ttl=63 time=9.02 ms
64 bytes from 10.21.241.67: icmp_req=4 ttl=63 time=5.60 ms
...
64 bytes from 10.21.241.67: icmp_req=100 ttl=63 time=22.2 ms
--- 10.21.241.67 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99148ms
rtt min/avg/max/mdev = 4.754/10.402/37.854/4.615 ms
while testing UDP bandwidth
Latency with Low Amounts of Traffic (no BW tests running)
From the Client to the AP.
$ ping 10.27.224.237 -c 100
PING 10.27.224.237 (10.27.224.237) 56(84) bytes of data.
64 bytes from 10.27.224.237: icmp_seq=1 ttl=63 time=3.42 ms
64 bytes from 10.27.224.237: icmp_seq=2 ttl=63 time=3.06 ms
64 bytes from 10.27.224.237: icmp_seq=3 ttl=63 time=7.85 ms
...
64 bytes from 10.27.224.237: icmp_seq=100 ttl=63 time=5.79 ms
--- 10.27.224.237 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99116ms
rtt min/avg/max/mdev = 1.617/4.341/15.546/2.508 ms
Latency with Large Amounts of UDP Traffic (while BW testing)
From the Client to the AP
$ ping 10.27.224.237 -c 100
PING 10.27.224.237 (10.27.224.237) 56(84) bytes of data.
64 bytes from 10.27.224.237: icmp_seq=1 ttl=63 time=5.73 ms
64 bytes from 10.27.224.237: icmp_seq=2 ttl=63 time=2.75 ms
64 bytes from 10.27.224.237: icmp_seq=3 ttl=63 time=6.67 ms
64 bytes from 10.27.224.237: icmp_seq=4 ttl=63 time=10.9 ms
...
64 bytes from 10.27.224.237: icmp_seq=100 ttl=63 time=8.68 ms
--- 10.27.224.237 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99070ms
rtt min/avg/max/mdev = 2.752/7.739/24.250/3.273 ms
while running a UDP bandwidth test
> /tool bandwidth-test protocol=udp direction=transmit address=10.27.224.237
status: running
duration: 2m27s
tx-current: 205.3Mbps
tx-10-second-average: 189.3Mbps
tx-total-average: 184.0Mbps
random-data: no
direction: transmit
tx-size: 1500
-- [Q quit|D dump|C-z pause]
Ping while testing TCP bandwidth throughput.
From the client to the AP.
$ ping 10.27.224.237 -c 100
PING 10.27.224.237 (10.27.224.237) 56(84) bytes of data.
64 bytes from 10.27.224.237: icmp_seq=1 ttl=63 time=3.83 ms
...
64 bytes from 10.27.224.237: icmp_seq=100 ttl=63 time=4.02 ms
--- 10.27.224.237 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99108ms
rtt min/avg/max/mdev = 2.726/5.791/21.109/3.734 ms
802.11n, One Client
Latency with Low Amounts of Traffic (no BW testing)
Client to the AP
$ ping 10.27.224.237 -c 100
PING 10.27.224.237 (10.27.224.237) 56(84) bytes of data.
64 bytes from 10.27.224.237: icmp_seq=1 ttl=63 time=0.934 ms
64 bytes from 10.27.224.237: icmp_seq=2 ttl=63 time=0.664 ms
64 bytes from 10.27.224.237: icmp_seq=3 ttl=63 time=0.572 ms
...
64 bytes from 10.27.224.237: icmp_seq=100 ttl=63 time=14.0 ms
--- 10.27.224.237 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99029ms
rtt min/avg/max/mdev = 0.486/1.043/15.114/1.977 ms
Latency with Large Amounts of UDP Traffic (while BW testing)
Client to the AP
$ ping 10.27.224.237 -c 100
PING 10.27.224.237 (10.27.224.237) 56(84) bytes of data.
64 bytes from 10.27.224.237: icmp_seq=1 ttl=63 time=12.4 ms
64 bytes from 10.27.224.237: icmp_seq=2 ttl=63 time=9.88 ms
64 bytes from 10.27.224.237: icmp_seq=3 ttl=63 time=7.68 ms
64 bytes from 10.27.224.237: icmp_seq=5 ttl=63 time=89.8 ms
...
64 bytes from 10.27.224.237: icmp_seq=100 ttl=63 time=45.4 ms
--- 10.27.224.237 ping statistics ---
100 packets transmitted, 89 received, 11% packet loss, time 99089ms
rtt min/avg/max/mdev = 7.688/42.859/141.709/16.173 ms
while testing UDP bandwidth throughput
> /tool bandwidth-test protocol=udp direction=receive address=10.27.224.237
status: running
duration: 2m51s
rx-current: 207.5Mbps
rx-10-second-average: 200.0Mbps
rx-total-average: 108.7Mbps
lost-packets: 2308
random-data: no
direction: receive
rx-size: 1500
With one client on the AP, Nv2 increases the latency on the link when there is no traffic,
but behaves well, in terms of latency, when we stress the link with a bandwidth test.
Plain 802.11n has lower idle latency, but under a UDP bandwidth test its latency and jitter climb sharply and it starts dropping packets (11% loss above), while Nv2 stays low with no loss.
Hmm, Nv2 seems better already... more tests from me, and hence some suffering (sorry), are coming for the West and South-West Athens neighborhoods.
References:
MikroTik Nv2
2x2 MIMO nv2 measurements