
Showing posts from 2013

IPv6 reverse lookup configuration

named.conf

//
// Sample named.conf BIND DNS server 'named' configuration file
// for the Red Hat BIND distribution.
//
// See the BIND Administrator's Reference Manual (ARM) for details, in:
//   file:///usr/share/doc/bind-*/arm/Bv9ARM.html
// Also see the BIND Configuration GUI: /usr/bin/system-config-bind and
// its manual.
//
options {
        directory "/var/named";
        forwarders { "IP address of external DNS"; };
};

zone "blr.stglabs.ibm.com" {
        type master;
        file "/var/named/my.ibmisl.zone.db";
};

zone "139.126.9.in-addr.arpa" {
        type master;
        file "/var/named/9.126.139.rev";
};

zone "9.in-addr.arpa" {
        type master;
        file "/var/named/9.rev";
};

zone "0.10.in-addr.arpa" {
        type master;
        file "
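The zones above handle the IPv4 (in-addr.arpa) side of reverse lookups. For the IPv6 side, a minimal sketch of what the matching stanza and zone file could look like, assuming a made-up 2001:db8:0:1::/64 prefix and a hypothetical host10; the nibble-reversed zone name, file name and PTR owner must be adjusted to the real prefix and hosts:

zone "1.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa" {
        type master;
        file "/var/named/2001.db8.0.1.rev";
};

$TTL 86400
@   IN  SOA dns.blr.stglabs.ibm.com. root.blr.stglabs.ibm.com. (
        2013010101  ; serial
        3600        ; refresh
        900         ; retry
        604800      ; expire
        86400 )     ; minimum
    IN  NS  dns.blr.stglabs.ibm.com.
; 2001:db8:0:1::10 -> host10.blr.stglabs.ibm.com
0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  host10.blr.stglabs.ibm.com.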

Client LDAP settings in pam.d

[root@hmc64 ~]# vi /etc/pam.d/login
[root@hmc64 ~]# vi /etc/pam.d/system-auth
[root@hmc64 ~]# vi /etc/pam.d/password-auth
[root@hmc64 ~]# vi /etc/pam.d/login
[root@hmc64 ~]# vi /etc/pam.d/su
[root@hmc64 ~]# vi /etc/pam.d/login
[root@hmc64 ~]# authconfig-tui
Starting nslcd:                                            [  OK  ]
Starting oddjobd:                                          [  OK  ]
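For reference, a rough sketch of the pam_ldap entries that authconfig typically adds to /etc/pam.d/system-auth and password-auth once LDAP authentication is enabled; the exact control flags differ between releases, so treat this as an illustration rather than the file generated on this host:

auth        sufficient    pam_ldap.so use_first_pass
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
password    sufficient    pam_ldap.so use_authtok
session     optional      pam_ldap.so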

NUMA node with PCI mapping

[root@islpfdkvm12 ~]# dmesg | grep -i numa
NUMA: Allocated memnodemap from a000 - a140
NUMA: Using 31 for the hash shift.
pci_bus 0000:00: on NUMA node 0 (pxm 0)
pci_bus 0000:80: on NUMA node 1 (pxm 1)
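The same mapping can be read per device from sysfs; a quick sketch, using the 0000:06:00.0 NIC address that appears in the lspci output of the next post (substitute your own device address):

# prints the NUMA node number of the device (-1 if the platform does not report one)
cat /sys/bus/pci/devices/0000:06:00.0/numa_node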

PCI details with CPU mapping

[root@islpfdkvm12 ~]# lspci | grep -i ether
06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
1b:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
1b:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
1b:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
1b:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
[root@islpfdkvm12 ~]# cat /sys/devices/pci0000\:
pci0000:00/  pci0000:80/
[root@islpfdkvm12 ~]# cat /sys/devices/pci0000\:00/
0000:00:00.0/  0000:00:02.2/  0000:00:04.0/  0000:00:04.3/  0000:00:04.6/  0000:00:05.2/  0000:00:1c.0/  0000:00:1e.0/  000
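To see which CPUs are local to a given NIC, sysfs also exposes a CPU list per PCI device; a small sketch using one of the I350 ports above (the device address comes from the lspci output, the rest is illustrative):

# CPUs on the same NUMA node as the adapter, as a list
cat /sys/bus/pci/devices/0000:06:00.0/local_cpulist
# the same information as a bitmask
cat /sys/bus/pci/devices/0000:06:00.0/local_cpus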

Command to check CPU details

[root@islpfdkvm12 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Stepping:              7
CPU MHz:               1200.000
BogoMIPS:              5399.21
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
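A common use of this output is pinning a workload to one socket's CPUs; a sketch using the node0 CPU list reported above (./myapp is only a placeholder):

# run a process only on the CPUs of NUMA node 0
taskset -c 0-7,16-23 ./myapp
# or change the affinity of an already running PID
taskset -cp 0-7,16-23 <pid>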

NUMA commands

numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 131045 MB
node 0 free: 34018 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 131072 MB
node 1 free: 15748 MB
node distances:
node   0   1
  0:  10  11
  1:  11  10

[root@islpfdkvm12 ~]# free -mt
             total       used       free     shared    buffers     cached
Mem:        258451     208978      49472          0        309     141901
-/+ buffers/cache:      66768     191682
Swap:         9999        458       9541
Total:      268451     209436      59014

[root@islpfdkvm12 ~]# numastat
                           node0           node1
numa_hit               155199455       157169588
numa_miss               18363861        35640093
numa_foreign            35640093        18363861
interleave_hit             57466           57452
local_node             155198395       157104460
other_node              18364921        35705221
[root@islpfdkvm12
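numactl can also bind a process to a node rather than just report the topology; a minimal sketch (the command and node number are placeholders):

# run a command with both its CPUs and its memory restricted to node 0
numactl --cpunodebind=0 --membind=0 ./myapp
# or spread allocations evenly across both nodes instead
numactl --interleave=all ./myapp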

Linux Tools


Processor types & locations

[root@intels3e3601 node1]# cat /proc/cpuinfo
processor   : 0     <logical cpu #>
physical id : 0     <socket #>
siblings    : 16    <logical cpus per socket>
core id     : 0     <core # in socket>
cpu cores   : 8     <physical cores per socket>

# cat /sys/devices/system/node/node*/cpulist
node0: 0-3
node1: 4-7

# cat /proc/cpuinfo | grep -i "processor\|processor id\|core\|sibling"
processor       : 0
siblings        : 8
core id         : 0
cpu cores       : 4
processor       : 1
siblings        : 8
core id         : 1
cpu cores       : 4
processor       : 2
siblings        : 8
core id         : 2
cpu cores       : 4
processor       : 3
siblings        : 8
core id         : 3
cpu cores       : 4
processor       : 4
siblings        : 8
core id         : 0
cpu cores       : 4
processor       : 5
siblings        : 8
core id         : 1
cpu cores       : 4
processor       : 6
siblings        : 8
core id         :
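The same socket/core placement can be read per logical CPU from sysfs, which avoids parsing /proc/cpuinfo; a small sketch (cpu0 is just an example):

# socket the logical CPU sits in
cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
# core within that socket
cat /sys/devices/system/cpu/cpu0/topology/core_id
# hyper-thread siblings sharing that core
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list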

LVM volume path missing ... recovering the VG volume path

[root@gfscluster1 ~]# ls -l /dev/vgvol/lv_brick1
ls: cannot access /dev/vgvol/lv_brick1: No such file or directory

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgvol/lv_brick1
  LV Name                lv_brick1
  VG Name                vgvol
  LV UUID                xdWayg-cZMy-CsFG-KZR1-y1Ox-Ntgi-FMpET1
  LV Write Access        read/write
  LV Creation host, time gfscluster1, 2013-07-26 17:27:54 -0400
  LV Status              suspended
  # open                 0
  LV Size                65.00 GiB
  Current LE             16639
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

Answer:
[root@gfscluster1 lvm]# vgscan --mknodes
  The link /dev/vgvol/lv_brick1 should have been created by udev but it was not found. Falling back to direct link creation.
[root@gfscluster1 lvm]# ls -l /dev/vg
vga_arbiter  vgvol/
[root@gfscluster1 lvm]# ls -l /dev/vgvol/lv_brick1
lrwxrwxrw
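A couple of related commands that can help in the same situation; a sketch, not what was run on this host:

# reactivate the volume group so suspended LVs come back online
vgchange -ay vgvol
# recreate any missing /dev/<vg>/<lv> links and /dev/mapper nodes
vgmknodes vgvol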

Another way to see LVM volumes: the dmsetup command

[root@gfscluster1 ~]# dmsetup info
Name:              VolGroup-lv_swap
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-bz62bsGCF0kQGQZXZqkuJcm5LXN94FEr5RPQnqBPwRxTupWMYJOFU1w0PIfkPemh

Name:              VolGroup-lv_root
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-bz62bsGCF0kQGQZXZqkuJcm5LXN94FErSBYAwCAc9MttPuFK6BgX4hD1NJzSGv1k

Name:              VolGroup-lv_home
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-bz62bsGCF0kQGQZXZqkuJcm5LXN94FErvHTieePAhYfZoYArYtqD2ejy5cyMe2SQ

Name:              vgvol-lv_brick1
State:             SUSPENDED
Read Ahead:        256
Tables present:    None
Open count:        0
Event number:
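Two more dmsetup views that are useful alongside dmsetup info; a sketch (the device name is taken from the output above):

# show the device-mapper table for a single mapped device
dmsetup table vgvol-lv_brick1
# show all mapped devices as a dependency tree
dmsetup ls --tree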

How to enable logging for LVM

cat /etc/lvm/lvm.conf

    # Should we send log messages through syslog?
    # 1 is yes; 0 is no.
    syslog = 1

    # Should we log error and debug messages to a file?
    # By default there is no log file.
    file = "/var/log/lvm2.log"

    # Should we overwrite the log file each time the program is run?
    # By default we append.
    overwrite = 0

    # What level of log messages should we send to the log file and/or syslog?
    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
    # 7 is the most verbose (LOG_DEBUG).
    # level = 0
    level = 7
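These directives live inside the log { } section of lvm.conf; a sketch of the section in context, plus one way to confirm what LVM actually picked up (paths and values as above):

log {
    syslog = 1
    file = "/var/log/lvm2.log"
    overwrite = 0
    level = 7
}

# print the effective log settings as LVM sees them
lvm dumpconfig log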

LVM showing many error messages

# lvdisplay
/dev/sda: read failed after 0 of 4096 at 0: Input/output error
/dev/sda: read failed after 0 of 4096 at 37580898304: Input/output error
/dev/sda: read failed after 0 of 4096 at 37580955648: Input/output error
/dev/sda: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc: read failed after 0 of 4096 at 37580898304: Input/output error
/dev/sdc: read failed after 0 of 4096 at 37580955648: Input/output error
/dev/sdc: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 37580898304: Input/output error
/dev/sdf: read failed after 0 of 4096 at 37580955648: Input/output error
/dev/sdf: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdh: read failed after 0 of 4096 at 0: Input/output error
/dev/sdh: read failed after 0 of 4096 at 37580898304: Input/output error
/dev/sdh: r
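These errors usually mean LVM is still scanning block devices whose paths have gone away, for example LUNs that were unmapped on the storage side. A hedged sketch of how such a stale device can be dropped so the warnings stop; the device name is only an example, and this must not be done on a disk that is still in use:

# tell the kernel to remove the stale SCSI device node
echo 1 > /sys/block/sda/device/delete
# or let the sg3_utils rescan script remove devices that no longer respond
rescan-scsi-bus.sh -r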

Command to rescan LUNs

[root@gfscluster1 ~]# rpm -qf /usr/bin/rescan-scsi-bus.sh
sg3_utils-1.28-4.el6.x86_64
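So the rescan script ships with sg3_utils; a short sketch of installing the package and running a rescan (same flags as used further down this page):

# install the package that provides rescan-scsi-bus.sh
yum install -y sg3_utils
# scan for new LUNs on all SCSI hosts
rescan-scsi-bus.sh -l -w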

Find UUID & mount the file system

acb:~ # blkid
/dev/sdk1: UUID="57c5a0f3-ba9c-40bb-ab13-1520a7667d5f" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda1: UUID="e446ceab-b704-4b22-91d1-b5eb84c4f4b5" TYPE="swap"
/dev/sdi1: UUID="1b8c1063-4fdd-4606-859a-44e840935cf4" TYPE="ext3"
/dev/sda2: UUID="19e8e2fb-1eb3-4376-8c0f-9b646969c75a" TYPE="ext3"
/dev/sdd1: UUID="97f10709-c33e-4232-8235-6b26eb4ec185" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdc1: UUID="1b8c1063-4fdd-4606-859a-44e840935cf4" TYPE="ext3"
/dev/sdf1: UUID="1b8c1063-4fdd-4606-859a-44e840935cf4" TYPE="ext3"
/dev/sdg1: UUID="97f10709-c33e-4232-8235-6b26eb4ec185" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdj1: UUID="97f10709-c33e-4232-8235-6b26eb4ec185" SEC_TYPE="ext2" TYPE="ext3"
You have new mail in /var/mail/root
acb:~ # ls -l ../../sda2
ls: cannot access ../../sda2: No such
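With the UUID known, the filesystem can be mounted (or listed in fstab) without depending on the sdX name, which can change across reboots; a sketch using the first UUID from the blkid output and a hypothetical mount point /mnt/data:

# one-off mount by UUID
mount UUID=57c5a0f3-ba9c-40bb-ab13-1520a7667d5f /mnt/data
# persistent entry in /etc/fstab
UUID=57c5a0f3-ba9c-40bb-ab13-1520a7667d5f  /mnt/data  ext3  defaults  0 0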

Rescan LUNs with a command

echo "- - -" > /sys/class/scsi_host/host0/scan to know what will come in place of "host0 or host1" Run the below command to know this [root@fspnfs Nagios]# systool -c fc_host -v Class = "fc_host" Class Device = "host5" Class Device path = "/sys/class/fc_host/host5" fabric_name = "0x100000051ed40e68" issue_lip = node_name = "0x2000001b32901fd0" port_id = "0x010400" port_name = "0x2100001b32901fd0" port_state = "Online" port_type = "NPort (fabric via point-to-point)" speed = "4 Gbit" supported_classes = "Class 3" supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit" symbolic_name = "QLE2460 FW:v5.03.16 DVR:v8.03.07.03.05.07-k" system_hostname = "" tgtid_bind_type = &qu

Check the state of the FC link

[root@fspnfs Nagios]# cat /sys/class/scsi_host/host5/state
Link Up - F_Port
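The generic FC transport class exposes similar information independent of the HBA driver; a sketch using the same host number (these attributes also appear in the systool output earlier):

# Online / Linkdown / Offline, etc.
cat /sys/class/fc_host/host5/port_state
# negotiated link speed
cat /sys/class/fc_host/host5/speed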

Rescan LUNs in Linux with a script

Create a file rescan-scsi-lun.sh and copy the contents below into it, then run:

./rescan-scsi-bus.sh -l -w

#!/bin/bash
# Skript to rescan SCSI bus, using the
# scsi add-single-device mechanism
# (w) 1998-03-19 Kurt Garloff (c) GNU GPL
# (w) 2003-07-16 Kurt Garloff (c) GNU GPL
# $Id: rescan-scsi-bus.sh,v 1.15 2004/05/08 14:47:13 garloff Exp $

setcolor ()
{
  red="\e[0;31m"
  green="\e[0;32m"
  yellow="\e[0;33m"
  norm="\e[0;0m"
}

unsetcolor ()
{
  red=""; green=""
  yellow=""; norm=""
}

# Return hosts. sysfs must be mounted
findhosts_26 ()
{
  hosts=
  if ! ls /sys/class/scsi_host/host* >/dev/null 2>&1; then
    echo "No SCSI host adapters found in sysfs"
    exit 1;
    # hosts=" 0"
    #return
  fi
  for hostdir in /sys/class/scsi_host/host*; do
    hostno=${hostdir#/sys/class/scsi_h
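Once saved, a typical run looks like this; a sketch assuming the script was saved as rescan-scsi-bus.sh in the current directory:

chmod +x rescan-scsi-bus.sh
./rescan-scsi-bus.sh -l -w
# verify that the new LUNs showed up
cat /proc/scsi/scsi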