Droid mipmap sizes

expert@punkinnovation

xlarge screens are at least 960dp x 720dp
large screens are at least 640dp x 480dp
normal screens are at least 470dp x 320dp
small screens are at least 426dp x 320dp
Generalised Dpi values for screens:

ldpi Resources for low-density (ldpi) screens (~120dpi)
mdpi Resources for medium-density (mdpi) screens (~160dpi). (This is the baseline density.)
hdpi Resources for high-density (hdpi) screens (~240dpi).
xhdpi Resources for extra high-density (xhdpi) screens (~320dpi).
Therefore the generalised sizes of your resources (assuming they are full screen):

ldpi
Vertical = 426 * 120 / 160 = 319.5px
Horizontal = 320 * 120 / 160 = 240px
mdpi
Vertical = 470 * 160 / 160 = 470px
Horizontal = 320 * 160 / 160 = 320px
hdpi
Vertical = 640 * 240 / 160 = 960px
Horizontal = 480 * 240 / 160 = 720px
xhdpi
Vertical = 960 * 320 / 160 = 1920px
Horizontal = 720 * 320 / 160 = 1440px

px = dp*dpi/160
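The formula drops straight into a small helper; a quick sketch using awk so the fractional ldpi case comes out exact (function name is mine):

```shell
# px = dp * dpi / 160, done in awk to keep fractional results (e.g. 319.5).
dp_to_px() {
  awk -v dp="$1" -v dpi="$2" 'BEGIN { print dp * dpi / 160 }'
}

dp_to_px 426 120   # ldpi small-screen height in px
dp_to_px 960 320   # xhdpi xlarge-screen height in px
```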

EZ git merge


git checkout master
git pull origin master
git merge branch
git push origin master
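The same flow, exercised end to end in a throwaway repo (branch and file names here are made up for illustration; real merges may of course conflict and need resolving before the push):

```shell
# Demo of the merge flow above in a temp repo; nothing outside /tmp is touched.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com -c commit.gpgsign=false \
    commit -q --allow-empty -m "init"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on config
git checkout -qb feature                # hypothetical work branch
echo "new work" > feature.txt
git add feature.txt
git -c user.name=demo -c user.email=demo@example.com -c commit.gpgsign=false \
    commit -qm "add feature"
git checkout -q "$main"                 # back on the mainline ("master" above)
git merge -q feature                    # fast-forwards here; no conflicts
cat feature.txt
```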

Increase VM hdd size



Nagios warning shows root partition full

In vSphere, increase the virtual disk size while the VM is running

ls /sys/class/scsi_device/

echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

cfdisk /dev/sda (create a new partition, e.g. /dev/sda3, from the new free space)

pvdisplay

pvcreate /dev/sda3

if pvcreate fails, run: partx -v -a /dev/sda

pvcreate /dev/sda3

vgdisplay

vgextend hdd_vg /dev/sda3

pvscan

lvdisplay

lvextend /dev/hdd_vg/root_lv /dev/sda3

resize2fs /dev/hdd_vg/root_lv
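The non-interactive steps above, collected into one function as a sketch. It assumes the new space became /dev/sda3 and the VG/LV names from this box (hdd_vg/root_lv), and must run as root; the cfdisk step is interactive, so the partition must already exist. The block only defines the function, nothing runs until you call it:

```shell
# Sketch only: assumes /dev/sda3 and the hdd_vg/root_lv names from the notes.
grow_root_lv() {
  echo 1 > /sys/class/scsi_device/0:0:0:0/device/rescan  # pick up new disk size
  partx -v -a /dev/sda || true   # make the kernel see the new partition entry
  pvcreate /dev/sda3             # turn the new partition into a physical volume
  vgextend hdd_vg /dev/sda3      # grow the volume group
  lvextend /dev/hdd_vg/root_lv /dev/sda3  # hand the new extents to the root LV
  resize2fs /dev/hdd_vg/root_lv  # grow the ext filesystem online
}
```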

linux screen


.screenrc

deflog off
defutf8 on
defscrollback 30000
logfile /dev/null
caption always "%3n %t - [%-w(%n %t)%+w]"

screen

libvirt virsh troubleshooting PXE VM


Things to check for PXE DHCP issues.

* Check for a DHCP helper on the source L3 interface

* Check the KVM host routing. PXE/DHCP to server routing must be in place

* Check L3 interface info, no duplicate IPs, correct mask, static routes, bonding for mgmt

* Check virtual bridging

brctl show – check for MAC address and interface

if no MAC address, run ifconfig virbr0, add the MAC to the XML, destroy the network and restart it

virsh net-edit default
virsh net-destroy default
virsh net-start default

* Check VM XML

virsh dumpxml

* Check network settings vs ovs-vsctl list – add any missing ports
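The virbr0 MAC step above is easy to script: pull the MAC out of the interface output so it can be pasted into the `<mac address='...'/>` element of the network XML. The sample below is canned (MAC is made up); in practice pipe `ifconfig virbr0` in instead:

```shell
# Extract the HWaddr field from ifconfig-style output for the network XML.
sample='virbr0    Link encap:Ethernet  HWaddr 52:54:00:a1:b2:c3
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0'
mac=$(printf '%s\n' "$sample" | awk '/HWaddr/ {print $NF}')
echo "$mac"
```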

uptime


login@o2r1.ewr1:~$ uptime
14:05:37 up 1467 days, 3:55, 2 users, load average: 0.01, 0.08, 0.08

Openstack & Neutron DHCP troubleshoot


Log in to Fuel

source the openrc credentials file

neutron subnet-list | grep subnet

neutron router-list | grep

ip netns exec qrouter- ifconfig

--------------------

grep for vrouters

ssh to vrouter

ssh to compute

Ping across underlay IPs

Check route table

Add static route if needed

root@node-30:~# ping 172.29.7.19
PING 172.29.7.19 (172.29.7.19) 56(84) bytes of data.
^C
--- 172.29.7.19 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1006ms

root@node-30:~# route add -net 172.29.7.0/24 gw 172.29.0.3

root@node-30:~# ping 172.29.7.19
PING 172.29.7.19 (172.29.7.19) 56(84) bytes of data.
64 bytes from 172.29.7.19: icmp_seq=1 ttl=61 time=0.147 ms
64 bytes from 172.29.7.19: icmp_seq=2 ttl=61 time=0.133 ms

Verify tunnel is up

ovs-vsctl show

Port "gre-ac1d000b"
Interface "gre-ac1d000b"
type: gre
options: {df_default="false", in_key=flow, local_ip="172.29.7.17", out_key=flow, remote_ip="172.29.0.11"}

root@node-30:~# ifconfig br-mesh
br-mesh Link encap:Ethernet HWaddr a0:36:9f:35:e8:88
inet addr:172.29.0.11 Bcast:172.29.0.255 Mask:255.255.255.0

root@node-40:~# ovs-vsctl show | grep 0.11
options: {df_default="false", in_key=flow, local_ip="172.29.7.17", out_key=flow, remote_ip="172.29.0.11"}
--------------------

On compute host

nova reboot --hard

Check for DHCP DISCOVER, OFFER, REQUEST, ACK (e.g. tcpdump -n port 67 or port 68 on the VM's tap interface)

--------------------

Check console

nova console-log

Single Interface, multiple subnets DHCP & DNSmasq


Set conf-dir to /etc/dnsmasq.d

Put each DHCP range's config in a separate file in the conf-dir

[root@fuel ~]# cat /etc/dnsmasq.d/range_*

# Generated automatically by puppet
# Environment: 98
# Nodegroup: rack3
# IP range: ["172.29.215.10", "172.29.215.11"]
dhcp-range=range_34d32145,172.29.215.10,172.29.215.11,255.255.255.192,120m
dhcp-option=net:range_34d32145,option:router,172.29.215.1
dhcp-boot=net:range_34d32145,pxelinux.0,boothost,172.29.214.75
dhcp-match=set:ipxe,175
dhcp-option-force=tag:ipxe,210,http://172.29.214.75/cobbler/boot/

# Generated automatically by puppet
# Environment:
# Nodegroup:
# IP range: ["172.29.214.80", "172.29.214.126"]
dhcp-range=range_a06a0a6f,172.29.214.80,172.29.214.126,255.255.255.192,120m
dhcp-option=net:range_a06a0a6f,option:router,172.29.214.75
dhcp-boot=net:range_a06a0a6f,pxelinux.0,boothost,172.29.214.75
dhcp-match=set:ipxe,175
dhcp-option-force=tag:ipxe,210,http://172.29.214.75/cobbler/boot/
[root@mtn98fueld2 ~]#
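For reference, the conf-dir switch itself belongs in the main config; a minimal sketch of /etc/dnsmasq.conf under the layout assumed above:

```
# /etc/dnsmasq.conf (relevant lines only)
conf-dir=/etc/dnsmasq.d
# dnsmasq serves every dhcp-range found in conf-dir on the one interface;
# the net:/tag: labels keep each subnet's options scoped to its own range.
```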

hpsscli commands


ESXi Install

Install from ESXi host, with offline bundle on ESXi host:
esxcli software vib install -d

HP SMART ARRAY CLI COMMANDS ON ESXI
Show configuration
/opt/hp/hpssacli/bin/hpssacli ctrl all show config

Controller status
/opt/hp/hpssacli/bin/hpssacli ctrl all show status

Show detailed controller information for all controllers
/opt/hp/hpssacli/bin/hpssacli ctrl all show detail

Show detailed controller information for controller in slot 0
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 show detail

Rescan for New Devices
/opt/hp/hpssacli/bin/hpssacli rescan

Physical disk status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show status

Show detailed physical disk information
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show detail

Logical disk status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld all show status

View Detailed Logical Drive Status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 show

Create New RAID 0 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0

Create New RAID 1 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

Create New RAID 5 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,2I:1:6,2I:1:7,2I:1:8 raid=5

Delete Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 delete

Add New Physical Drive to Logical Volume
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 add drives=2I:1:6,2I:1:7

Add Spare Disks
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array all add spares=2I:1:6,2I:1:7

Enable Drive Write Cache
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify dwc=enable

Disable Drive Write Cache
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify dwc=disable

Erase Physical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd 2I:1:6 modify erase

Turn on Blink Physical Disk LED
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 modify led=on

Turn off Blink Physical Disk LED
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 modify led=off

Modify smart array cache read and write ratio (cacheratio=readratio/writeratio)
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify cacheratio=100/0

Enable smart array write cache when no battery is present (No-Battery Write Cache option)
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify nbwc=enable

Disable smart array cache for certain Logical Volume
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=disable

Enable smart array cache for certain Logical Volume
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=enable

Enable SSD Smart Path
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array a modify ssdsmartpath=enable

Disable SSD Smart Path
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array a modify ssdsmartpath=disable

Open vSwitch Cheat Sheet


Base commands
ovs-vsctl : Used for configuring the ovs-vswitchd configuration database (OVSDB)
ovs-ofctl : A command line tool for monitoring and administering OpenFlow switches
ovs-dpctl : Used to administer Open vSwitch datapaths
ovs-appctl : Used for querying and controlling Open vSwitch daemons

ovs-vsctl -V : Prints the current version of Open vSwitch.
ovs-vsctl show : Prints a brief overview of the switch database configuration.
ovs-vsctl list-br : Prints a list of configured bridges.
ovs-vsctl list-ports <bridge> : Prints a list of ports on a specific bridge.
ovs-vsctl list interface : Prints a list of interfaces.
ovs-vsctl add-br <bridge> : Creates a bridge in the switch database.
ovs-vsctl add-port <bridge> <port> : Binds an interface (physical or virtual) to a bridge.
ovs-vsctl add-port <bridge> <port> tag=<vlan> : Makes the port an access port on the specified VLAN (by default all OVS ports are VLAN trunks).
ovs-vsctl set interface <port> type=patch options:peer=<peer-port> : Creates a patch port, used to connect two or more bridges together.

ovs-ofctl show <bridge> : Shows OpenFlow features and port descriptions.
ovs-ofctl snoop <bridge> : Snoops traffic to and from the bridge and prints it to the console.
ovs-ofctl dump-flows <bridge> [flow] : Prints flow entries of the specified bridge. With a flow specified, only the matching flow is printed; if the flow is omitted, all flow entries of the bridge are printed.
ovs-ofctl dump-ports-desc <bridge> : Prints port statistics and detailed information about the interfaces on the bridge, including state, peer, and speed. Very useful for viewing port connectivity and detecting errors in your NIC-to-bridge bonding.
ovs-ofctl dump-table-desc <bridge> : Similar to the above but prints the descriptions of the flow tables belonging to the stated bridge (OpenFlow 1.4+).
ovs-ofctl add-flow <bridge> <flow> : Adds a static flow to the specified bridge. Useful for defining conditions for a flow (e.g. prioritize, drop).
ovs-ofctl del-flows <bridge> [flow] : Deletes flow entries from the flow table of the stated bridge. If the flow is omitted, all flows in the specified bridge are deleted.

ovs-dpctl is very similar to ovs-ofctl in that they both show flow table entries. The flows that ovs-dpctl prints are always an exact match and reflect packets that have actually passed through the system within the last few seconds. ovs-dpctl queries a kernel datapath and not an OpenFlow switch. This is why it’s useful for debugging flow data.
ovs-dpctl add-dp dp1
ovs-dpctl add-if dp1 eth0
ovs-dpctl dump-flows

ovs-appctl bridge/dump-flows <bridge> : Dumps OpenFlow flows, including hidden flows. Useful for troubleshooting in-band issues.
ovs-appctl dpif/dump-flows <bridge> : Dumps datapath flows for only the specified bridge, regardless of the type.
ovs-appctl vlog/list : Lists the known logging modules and their current levels. Use ovs-appctl vlog/set to change a module's log level.
ovs-appctl ofproto/trace <bridge> <flow> : Shows the entire flow field of a given flow (flow, matched rule, action taken).

Troubleshooting

One of the most common issues I’ve encountered has been problems with linking an interface to an OVS bridge. Take this configuration for example:

ovs-vsctl add-br brbm
ovs-vsctl add-port brbm eth2

The above configuration creates an OVS bridge (brbm) and links the physical interface eth2 to brbm. If you’ve enabled ip_forwarding and have created the bridge interfaces in your network interfaces file but have zero connectivity to the new interface, then how do you troubleshoot? Let’s use some of the tools above to verify our configuration:

root@testnode1:~# ovs-vsctl show
cae63bc8-ba98-451a-a652-a3b0e8a0f553
Bridge brbm
Port “eth2”
Interface “eth2”
Port brbm
Interface brbm
type: internal

root@testnode1:~# ovs-vsctl list-ports brbm
eth2

root@testnode1:~# ovs-ofctl dump-ports brbm
OFPST_PORT reply (xid=0x2): 1 ports
port LOCAL: rx pkts=23, bytes=1278, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=369369, bytes=62820789, drop=0, errs=0, coll=0

root@testnode1:~# ovs-ofctl dump-ports-desc brbm
OFPST_PORT_DESC reply (xid=0x2):
LOCAL(brbm): addr:78:e7:d1:24:73:85
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

root@testnode1:/etc/network# ifconfig
brbm Link encap:Ethernet HWaddr 78:e7:d1:24:73:85
inet addr:10.23.32.15 Bcast:0.0.0.0 Mask:255.255.248.0
inet6 addr: fe80::16:e1ff:fe1f:f3e4/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:369369 errors:0 dropped:159944 overruns:0 frame:0
TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:62820789 (62.8 MB) TX bytes:1278 (1.2 KB)

eth2 Link encap:Ethernet HWaddr 78:e7:d1:24:73:85
inet6 addr: fe80::7ae7:d1ff:fe24:7385/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14148 errors:0 dropped:68 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2198636 (2.1 MB) TX bytes:648 (648.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:17701 errors:0 dropped:0 overruns:0 frame:0
TX packets:17701 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1487216 (1.4 MB) TX bytes:1487216 (1.4 MB)

root@testnode1:/etc/network# cat /proc/sys/net/ipv4/ip_forward
1
From the output above I can see that, although the OVS config and the interfaces file look correct, there is no port traffic aside from LOCAL. LOCAL traffic is traffic generated by the host itself (ICMP in/out, ARP, etc.). What is missing from the original OVS configuration is a restart of the networking stack. After the restart, we can see proper flow generation:

root@testnode1:~# ovs-ofctl dump-ports-desc brbm
OFPST_PORT_DESC reply (xid=0x2):
1(eth2): addr:78:e7:d1:24:73:85
config: 0
state: 0
current: 10GB-FD FIBER
advertised: 10GB-FD FIBER
supported: 10GB-FD FIBER
speed: 10000 Mbps now, 10000 Mbps max

root@testnode1:~# ovs-ofctl dump-ports brbm
OFPST_PORT reply (xid=0x2): 3 ports
port 5: rx pkts=6071934, bytes=37086750067, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6888905, bytes=626021363, drop=0, errs=0, coll=0
port 1: rx pkts=32317009, bytes=32290813174, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=25212056, bytes=83553302356, drop=0, errs=0, coll=0
port LOCAL: rx pkts=12293904, bytes=1780442549, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=24816664, bytes=31363410668, drop=0, errs=0, coll=0
Now that’s a proper link :)
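The empty-bridge symptom is easy to spot mechanically: if dump-ports-desc reports no numbered ports, nothing is bound to the bridge. A small sketch, parsing a canned copy of the broken output above (in practice, pipe `ovs-ofctl dump-ports-desc brbm` in instead):

```shell
# Count numbered (non-LOCAL) OpenFlow ports in dump-ports-desc output.
# The sample mirrors the broken state shown earlier: only LOCAL is present.
sample='OFPST_PORT_DESC reply (xid=0x2):
 LOCAL(brbm): addr:78:e7:d1:24:73:85
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max'
nports=$(printf '%s\n' "$sample" | grep -cE '^[[:space:]]*[0-9]+\(' || true)
if [ "$nports" -eq 0 ]; then
  echo "no ports bound to bridge - restart networking or re-check add-port"
fi
```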


Sourcefire vs Palo Alto UTM Appliances

Unified threat management solutions from Sourcefire and Palo Alto


Version 4.10 from Sourcefire was a stable, robust, competent piece of software. The detection engines performed their duties and the IPS/IDS functionality worked as expected.

