winbox debian 64b jessie

Install winbox on a 64-bit Debian Jessie system:
# uname -a
Linux some-desktop 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux
# cat /etc/debian_version 
8.1


Uninstall wine and remove its configuration directory
# apt-get remove wine --purge
# rm -r ~/.wine


Install wine32
# apt-get install wine32
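
If apt cannot find wine32 on a 64-bit install, the i386 architecture probably has to be enabled first --a sketch, assuming multiarch is not already set up:
# dpkg --add-architecture i386
# apt-get update
# apt-get install wine32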


Run winbox
$ wine winbox.exe




Install winbox on debian jessie

svn immediates

Notes on svn commits (e.g. adding a directory to a project) without having to download the whole project.

Check out the project skeleton --the second-level project directories will be empty
$ svn co --username=user --depth=immediates http://myrepo.net/svn/dev/

 --depth immediates 

Include the immediate target of the operation and any of its immediate file or directory children. The directory children will themselves be empty.


add a directory
$ svn add mydir


$ svn ci -m "importing mydir"


For more complicated sparse checkouts look up
co --depth files ,
up --set-depth infinity dir1 dir2 ,
up --set-depth empty dir3 dir4
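
For example, a sparse working copy that pulls down only two subdirectories in full might look roughly like this (same placeholder repository URL as above, made-up directory names):
$ svn co --depth empty http://myrepo.net/svn/dev/ dev
$ cd dev
$ svn up --set-depth infinity dir1 dir2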






svn immediates

iperf

Messing around with iPerf in a 1000BASE-T Ethernet LAN



Set an iPerf TCP server
beryllium:~# iperf -s -B 172.31.1.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.31.1.2
TCP window size: 85.3 KByte (default)
------------------------------------------------------------




Measure achievable TCP bandwidth with iPerf from another host
lithium:~# iperf -c 172.31.1.2
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 44159 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   982 MBytes   823 Mbits/sec



Seen on server
beryllium:~# iperf -s -B 172.31.1.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.31.1.2
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.31.1.2 port 5001 connected with 172.31.1.1 port 44159
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   982 MBytes   823 Mbits/sec



Set an iPerf UDP server
beryllium:~# iperf -s -u -B 172.31.1.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.31.1.2
Receiving 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------



Measure achievable UDP bandwidth with iPerf from another host
lithium:~# iperf -u -c 172.31.1.2
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 60431 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 893 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.055 ms    0/  893 (0%)

This is not a good way to measure UDP bandwidth, because iperf only sends at the default (or explicitly set) rate, and UDP has no mechanism for discovering the pipe capacity or reacting to congestion.

Since I know that both hosts have gigabit ethernet interfaces and sit in the same gigabit LAN, I should set the target bandwidth to something closer to 1000 Mb/s
lithium:~# iperf -u -c 172.31.1.2 -b 1000M
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 49712 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   909 MBytes   763 Mbits/sec
[  3] Sent 648395 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   908 MBytes   762 Mbits/sec   0.012 ms  549/648394 (0.085%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order


Hmm, what if I try a lower rate, say 800 Mb/s?
# iperf -u -c 172.31.1.2 -b 800M
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 46421 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   919 MBytes   771 Mbits/sec
[  3] Sent 655661 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   919 MBytes   771 Mbits/sec   0.012 ms    1/655660 (0.00015%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order


A higher rate, or just better timing --less traffic in the LAN at that moment?
We need a few more tests ...
lithium:~# iperf -u -c 172.31.1.2 -b 1000M
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 56018 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   942 MBytes   790 Mbits/sec
[  3] Sent 671758 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   940 MBytes   789 Mbits/sec   0.008 ms  941/671757 (0.14%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order
lithium:~# iperf -u -c 172.31.1.2 -b 1100M
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 48590 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   941 MBytes   789 Mbits/sec
[  3] Sent 671138 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   937 MBytes   786 Mbits/sec   0.016 ms 2547/671137 (0.38%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order


Still less bandwidth than in the TCP tests. Can I get UDP to report more? What if I try 2 or 3 client threads?
lithium:~# iperf -u -c 172.31.1.2 -b 1000M -P 2
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local 172.31.1.1 port 44169 connected with 172.31.1.2 port 5001
[  3] local 172.31.1.1 port 46647 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   570 MBytes   478 Mbits/sec
[  4] Sent 406535 datagrams
[  3]  0.0-10.0 sec   570 MBytes   478 Mbits/sec
[  3] Sent 406862 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec
[  4] Server Report:
[  4]  0.0-10.0 sec   570 MBytes   478 Mbits/sec   0.023 ms  268/406534 (0.066%)
[  4]  0.0-10.0 sec  1 datagrams received out-of-order
[  3] Server Report:
[  3]  0.0-10.0 sec   570 MBytes   478 Mbits/sec   0.112 ms  169/406861 (0.042%)
[  3]  0.0-10.0 sec  25 datagrams received out-of-order
lithium:~# iperf -u -c 172.31.1.2 -b 1000M -P 3
------------------------------------------------------------
Client connecting to 172.31.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  5] local 172.31.1.1 port 42214 connected with 172.31.1.2 port 5001
[  4] local 172.31.1.1 port 43282 connected with 172.31.1.2 port 5001
[  3] local 172.31.1.1 port 45155 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   380 MBytes   319 Mbits/sec
[  5] Sent 271093 datagrams
[  3]  0.0-10.0 sec   380 MBytes   319 Mbits/sec
[  3] Sent 271140 datagrams
[  4]  0.0-10.0 sec   380 MBytes   319 Mbits/sec
[  4] Sent 271280 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec
[  5] Server Report:
[  5]  0.0-10.0 sec   380 MBytes   319 Mbits/sec   0.162 ms   46/271092 (0.017%)
[  5]  0.0-10.0 sec  69 datagrams received out-of-order
[  3] Server Report:
[  3]  0.0-10.0 sec   380 MBytes   318 Mbits/sec   0.021 ms  284/271139 (0.1%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order
[  4] Server Report:
[  4]  0.0-10.0 sec   380 MBytes   319 Mbits/sec   0.282 ms  242/271279 (0.089%)
[  4]  0.0-10.0 sec  10 datagrams received out-of-order


What if I try TCP bandwidth tests with 2 or more parallel client threads?
lithium:~# iperf -c 172.31.1.2
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 44165 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   977 MBytes   819 Mbits/sec
lithium:~# iperf -c 172.31.1.2 -P 2
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  4] local 172.31.1.1 port 44167 connected with 172.31.1.2 port 5001
[  3] local 172.31.1.1 port 44166 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   564 MBytes   473 Mbits/sec
[  3]  0.0-10.0 sec   553 MBytes   464 Mbits/sec
[SUM]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec
lithium:~# iperf -c 172.31.1.2 -P 3
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  5] local 172.31.1.1 port 44170 connected with 172.31.1.2 port 5001
[  4] local 172.31.1.1 port 44169 connected with 172.31.1.2 port 5001
[  3] local 172.31.1.1 port 44168 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   373 MBytes   313 Mbits/sec
[  3]  0.0-10.0 sec   374 MBytes   313 Mbits/sec
[  5]  0.0-10.0 sec   375 MBytes   314 Mbits/sec
[SUM]  0.0-10.0 sec  1.10 GBytes   940 Mbits/sec
lithium:~# iperf -c 172.31.1.2 -P 4
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  5] local 172.31.1.1 port 44173 connected with 172.31.1.2 port 5001
[  6] local 172.31.1.1 port 44174 connected with 172.31.1.2 port 5001
[  3] local 172.31.1.1 port 44171 connected with 172.31.1.2 port 5001
[  4] local 172.31.1.1 port 44172 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   281 MBytes   236 Mbits/sec
[  4]  0.0-10.0 sec   280 MBytes   235 Mbits/sec
[  5]  0.0-10.0 sec   280 MBytes   235 Mbits/sec
[  6]  0.0-10.0 sec   281 MBytes   236 Mbits/sec
[SUM]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
lithium:~# iperf -c 172.31.1.2 -P 5
------------------------------------------------------------
Client connecting to 172.31.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.31.1.1 port 44176 connected with 172.31.1.2 port 5001
[  5] local 172.31.1.1 port 44178 connected with 172.31.1.2 port 5001
[  4] local 172.31.1.1 port 44175 connected with 172.31.1.2 port 5001
[  6] local 172.31.1.1 port 44179 connected with 172.31.1.2 port 5001
[  7] local 172.31.1.1 port 44180 connected with 172.31.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   223 MBytes   187 Mbits/sec
[  4]  0.0-10.0 sec   225 MBytes   189 Mbits/sec
[  6]  0.0-10.0 sec   225 MBytes   188 Mbits/sec
[  7]  0.0-10.0 sec   224 MBytes   188 Mbits/sec
[  5]  0.0-10.0 sec   226 MBytes   189 Mbits/sec
[SUM]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec





Next, measure in the same setup with iperf3.
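
For reference, the rough iperf3 equivalents of the commands above would be the following (note that iperf3 listens on port 5201 by default):
beryllium:~# iperf3 -s -B 172.31.1.2
lithium:~# iperf3 -c 172.31.1.2
lithium:~# iperf3 -u -c 172.31.1.2 -b 1000M
lithium:~# iperf3 -c 172.31.1.2 -P 3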



Options Used


-B, --bind <host>
    bind to <host>, an interface or multicast address

-s, --server
    run in server mode

-D, --daemon
    run the server as a daemon

-b, --bandwidth n[KM]
    set target bandwidth to n bits/sec (default 1 Mbit/sec).  This setting requires UDP (-u).

-u, --udp
    use UDP rather than TCP

-P, --parallel n
    number of parallel client threads to run
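
The server-side options combine as expected; for example, a UDP server bound to an address and left running as a daemon --just a sketch:
beryllium:~# iperf -s -u -D -B 172.31.1.2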







iperf

debian add printer

Add an IP printer in debian

Install the Common UNIX Printing System --PPD/driver support, web interface, and the client programs
# apt-get install cups cups-client


Use a web browser to go to http://localhost:631/
and then:
Adding Printers and Classes -> Add Printer
--you may use the root account credentials or add a user to the lpadmin group.
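
For example, to let a regular user administer printers through the CUPS web interface (the username here is just a placeholder):
# adduser alice lpadmin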

There is a good chance that you will not see the appropriate PPD there.

For HP printers you may use hplip: go to http://hplipopensource.com/hplip-web/supported_devices/index.html and select the printer model, e.g. http://hplipopensource.com/hplip-web/models/officejet/hp_officejet_pro_8610.html for the OfficeJet Pro 8610.



Download hplip and run it; the automatic installation worked for me on Debian Jessie.
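
For reference, the installer from the HPLIP site is a .run script and is started roughly like this (the version number is just a placeholder):
$ sh hplip-3.x.x.run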



debian add printer



wpa debian

Notes on connecting to an 802.11 WPA AP with wpa_cli

enable interface
# ifconfig wlan0 up


scan
# iwlist wlan0 scan
or

# iwlist wlan0 scan |egrep -i "ssid|signal|frequency|authenti"
to see just the ESSID, Signal Strength, Quality, Frequency, and Authentication suites for each cell

Interactive configuration of wpa_supplicant with wpa_cli
# echo "ctrl_interface=/run/wpa_supplicant" >> /etc/wpa_supplicant/t.conf
# echo "update_config=1" >> /etc/wpa_supplicant/t.conf
# wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/t.conf


Associate/Authenticate with the "thESSID" ssid
# wpa_cli
> scan
> scan_results
> add_network
0
> set_network 0 ssid "thESSID"
OK
> set_network 0 psk "thePASSWD"
OK
> enable_network 0
OK
> save_config
OK
> quit


and then request an IP address from the DHCP server or set one manually
# ifconfig wlan0 192.168.168.13/24
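
The DHCP alternative, assuming the ISC dhclient that Debian ships:
# dhclient wlan0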




wpa_cli debian

unmount all samba filesystems

Unmount all cifs ( samba ) shares.
# umount -a -t cifs -l
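
To check which cifs shares are mounted before (or after) running it:
# mount -t cifs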






unmount samba shares



GLBer

GLBer Notes



GLBer creates the RouterOS configuration commands and a RouterOS script for the g0 Load BalancER, aka GLBer. With them, a Mikrotik RouterOS router with multiple point-to-point or point-to-multipoint uplinks balances traffic among all its uplinks without using source-based policy routing.



You need to copy the configuration commands and the RouterOS script that GLBer produces from a host that has bash to the RouterOS router, e.g. from a bash shell in a terminal to a winbox terminal on the RouterOS router.



RouterOS flushes the routing table every 10 minutes, which gives the masqueraded connections a good chance of being reset. The RouterOS script created by GLBer also runs every 10 minutes and resets the equal-cost multipath route, further raising the chance that masqueraded connections reset within a 10-minute period.



Install GLBer
# wget https://raw.githubusercontent.com/ipduh/glber/master/glber -O /usr/bin/glber && chmod 755 /usr/bin/glber




Create the RouterOS GLBer Configuration For 3 point-to-point uplinks
$ glber 

GLBer, g0 2014
Quick How-To: http://sl.ipduh.com/glber

Enter gateways: alfa beta gama
Enter interfaces: 

If all the uplink interfaces are point-to-point, just enter their names when asked for gateways and hit enter when glber asks for interfaces.



Create the RouterOS GLBer configuration for 4 point-to-point uplinks and an uplink available in the LAN through the router's eth5 interface.
$ glber 

GLBer, g0 2014
Quick How-To: http://sl.ipduh.com/glber

Enter gateways: 10.21.241.101 alfa beta gama delta
Enter interfaces: eth5 alfa beta gama delta





GLBer logs all runs in ~/glber/UTC-UNIX-EPOCH.log

To clean the GLBer configuration off a RouterOS router, find the UTC-UNIX-EPOCH that GLBer left in the RouterOS, e.g. for the epoch 1420624338 you would run
$ glber file ~/glber/1420624338.log
and run the GLBer RouterOS commands under
###RouterOS commands to remove the GLBer configuration###
in the RouterOS terminal.
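
If you do not remember the epoch, the per-run logs under ~/glber/ can point you to it, e.g.
$ ls ~/glber/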











old glber



glber

Virtualbox or VMware vmdk to KVM qcow2

Migrate Virtualbox or VMware guest (on vmdk) to KVM




See disk image information.
# qemu-img info lwa-flat.vmdk 
image: lwa-flat.vmdk
file format: raw
virtual size: 50G (53687091200 bytes)
disk size: 50G




Convert the vmdk image to a qcow2 image.
# qemu-img convert -O qcow2 lwa-flat.vmdk lwa-flat.qcow2




Create a guest definition and start the guest.
# virt-install --connect qemu:///system --import -n lwa \
--vcpus=1 --ram=2048 \
--disk path=/home/vm/fromvbox/lwa-flat.qcow2,device=disk,format=qcow2 \
--vnc --noautoconsole --os-type linux --description lwa \
--network=bridge:b0 --hvm
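
After the import, the guest can be checked and its graphical console opened with the usual libvirt client tools (assuming virsh and virt-viewer are installed):
# virsh -c qemu:///system list --all
# virt-viewer -c qemu:///system lwa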




Migrate VMware or Virtualbox vmdk to KVM qcow2



wpa_passphrase linux

Associate with an 802.11 WPA cell with wpa_passphrase in Linux

# wpa_passphrase 
usage: wpa_passphrase <ssid> [passphrase]

If passphrase is left out, it will be read from stdin


# wpa_passphrase thESSID passphrase
network={
 ssid="thESSID"
 #psk="passphrase"
 psk=a104b1103fd32169a27080273828e77120234f9e113a5f15ff37e17729a0c266
}


To associate with the thESSID network
# wpa_supplicant -B -i wlan0 -c <(wpa_passphrase thESSID passphrase)
or
# echo "update_config=1" > /etc/wpa_supplicant/thESSID.conf
# echo "fast_reauth=1" >> /etc/wpa_supplicant/thESSID.conf
# echo "ap_scan=1" >> /etc/wpa_supplicant/thESSID.conf
# wpa_passphrase thESSID passphrase >> /etc/wpa_supplicant/thESSID.conf
# wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/thESSID.conf




wpa_passphrase debian voyage

convert qcow2 ( kvm ) images to vdi ( virtual box )

Convert qcow2 ( kvm ) images to vdi ( virtual box )

You will need at least qemu-utils
# apt-get install qemu-utils


Check the qcow2 image
# qemu-img info apollo.qcow2 
image: apollo.qcow2
file format: raw
virtual size: 50G (53687091200 bytes)
disk size: 50G


Convert the qcow2 image to vdi
# qemu-img convert -O vdi apollo.qcow2 apollo.vdi
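
The result can be sanity-checked with qemu-img and then attached to a VirtualBox VM, either through the GUI or with VBoxManage --a sketch, with a made-up VM and controller name:
# qemu-img info apollo.vdi
$ VBoxManage storageattach myvm --storagectl "SATA" --port 0 --device 0 --type hdd --medium apollo.vdi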








convert qcow2 kvm images to vdi virtualbox images