August 14, 2018

Running an Intel P4500 SSD on ESXi 6.7

For various reasons the user specifically requested an Intel SSD to boost I/O speed. Here is a rough write-up of the month-long debugging process.

Tools used

  • AS SSD benchmark
  • ATTO disk benchmark

The original environment was ESXi 6.5 with an Intel P4500 SSD to be installed. It works out of the box, but performance is poor: AS SSD benchmark measured only about 500 MB/s. A colleague stressed that this SSD can reach close to 4 GB/s read and 2 GB/s write, so we set out to improve it. All test records follow.

  1. Online documentation indicates Intel no longer ships an SSD driver for ESXi, and the only NVMe driver on the VMware site is for ESXi 6.7. Tried forcing ESXi 6.5 to use the ESXi 6.7 NVMe driver.
    Result: failed
  2. Upgraded to ESXi 6.7; SSD read/write performance was still poor.
    Result: failed
  3. Installed the NVMe driver released on the VMware site, intel-nvme-vmd version 1.4.0.1016 (install sketch after this list). Read/write performance was good, but... the host crashes.
    Result: complete failure

    ...endless SSD testing
  4. Applied the latest released ESXi 6.7 patches; still crashes.
    Result: failed
  5. Updated the SSD firmware, which requires installing Intel® SSD Data Center Tool 3.0.14.
    Result: still running continuous tests; damn... no effect
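For reference, the driver in step 3 is installed with esxcli; the datastore and bundle file name below are placeholders for wherever the downloaded package was uploaded:

# install from the downloaded offline bundle (use -v <file>.vib instead for a single VIB)
esxcli software vib install -d /vmfs/volumes/datastore1/intel-nvme-vmd-1.4.0.1016-offline_bundle.zip
# reboot the host so the new driver is loaded
reboot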
-------------------------
SSD firmware update log
  1. vib install (sketch below)
  2. /opt/intel/isdct/isdct show -all -intelssd
    show lists the SSDs so you can find the index of the drive to upgrade
  3. /opt/intel/isdct/isdct load -intelssd 0
    the load command upgrades the SSD firmware; the trailing 0 is that drive's index
The firmware is applied in several stages, so ESXi reboots repeatedly during the process.
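Step 1 ("vib install") means installing the Intel SSD Data Center Tool VIB on the host; a sketch, with the VIB file name as a placeholder for whatever the downloaded package is actually called:

esxcli software vib install -v /vmfs/volumes/datastore1/intel-ssd-data-center-tool-3.0.14.vib
# after installation the isdct binary used in steps 2-3 lives under /opt/intel/isdct/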




April 25, 2018

VCSA issue: Unable to load resource module from /vsphere-client/VDP2/locales/VDP-zh_TW.swf

VDP does not ship a zh_TW locale, so once VDP is installed, vCenter will always complain that the zh_TW locale cannot be found.
There is no permanent fix, only the workaround below (it is reverted after a reboot, but fortunately vCenter is rarely rebooted).

1. SSH into the VCSA host
2. Locate the directory holding the locale files
3. Copy zh_CN to zh_TW (one-shot sketch after the commands below)


root@VCSA [ ~ ]# find / -name VDP-en_US.swf

/usr/lib/vmware-vsphere-client/server/work/deployer/s/global/124/0/vdr-ui-war-6.1.6.war/locales/VDP-en_US.swf

root@VCSA [ ~ ]# cd /usr/lib/vmware-vsphere-client/server/work/deployer/s/global/124/0/vdr-ui-war-6.1.6.war/locales/

root@VCSA [ /usr/lib/vmware-vsphere-client/server/work/deployer/s/global/124/0/vdr-ui-war-6.1.6.war/locales ]# cp VDP-zh_CN.swf VDP-zh_TW.swf
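The same workaround as a one-shot sketch; the war directory version changes between VDP/VCSA builds, so the path is located with find instead of being hard-coded:

LOCALES=$(dirname "$(find / -name VDP-en_US.swf 2>/dev/null | head -1)")
cp "$LOCALES/VDP-zh_CN.swf" "$LOCALES/VDP-zh_TW.swf"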


March 8, 2018

ESXi upgrade method (offline bundle)

1. Download the offline bundle and upload it to a datastore
2. esxcli software sources profile list -d "offline bundle path"
3. esxcli software profile update -d "offline bundle path" -p "profile name"
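A concrete example of steps 2-3, assuming the bundle sits on datastore1; the bundle file and profile names below are placeholders, so take the real profile name from the output of the list command, and put the host into maintenance mode before updating:

esxcli system maintenanceMode set --enable true
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-update-bundle.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-update-bundle.zip -p ESXi-6.0.0-20180704001-standard
reboot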

Or use the following site:
https://esxi-patches.v-front.de/ESXi-6.0.0.html



April 21, 2017

Finding large files on Linux

1.
du -h --max-depth=1
max-depth sets how many levels of subdirectories to report.
This shows how much space each directory uses; descend into the larger directories and repeat the command to track down the files taking up the most space (example below).
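For example, starting from /var and listing the heaviest subdirectories first (sort -h understands the human-readable sizes that du -h prints):

du -h --max-depth=1 /var | sort -hr | head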

Booting a VM into the BIOS menu

Since I was recently assigned to manage the VMs, here are fixes for some common small VM issues.

1. How do I get into the BIOS menu when a VM boots?
A: Edit the VM's .vmx configuration file, append the following line at the end, then save: bios.bootDelay = "10000" (see the snippet after this list)

2. Cluster alert: a host has no network redundancy
A: Add the HA advanced option das.ignoreRedundantNetWarning = true on the cluster

3. vSphere updates
A: https://esxi-patches.v-front.de/
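The .vmx addition from item 1 as a small sketch; edit the file while the VM is powered off. bios.forceSetupOnce is an extra option I believe exists for dropping straight into the BIOS setup on the next boot only, so treat it as an assumption:

# appended to the end of the VM's .vmx file
bios.bootDelay = "10000"        # hold the POST screen for 10000 ms so F2/Esc can be pressed
# bios.forceSetupOnce = "TRUE"  # (assumption) enter BIOS setup automatically on the next boot only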


May 8, 2015

brocade ntp time

Synchronizing local time using NTP

Perform the following steps to synchronize the local time using NTP.
  1. Log into the switch using the default password, which is password.
  2. Enter the tsClockServer command:
    switch:admin> tsclockserver "<ntp1;ntp2>"
    
    where ntp1 is the IP address or DNS name of the first NTP server, which the switch must be able to access. ntp2 is an optional second NTP server. The "<ntp1;ntp2>" operand itself is optional; by default the value is LOCL, which uses the local clock of the principal or primary switch as the clock server.
    switch:admin> tsclockserver
    LOCL
    switch:admin> tsclockserver "132.163.135.131"
     
    switch:admin> tsclockserver
    132.163.135.131
    switch:admin>
    The following example shows how to set up more than one NTP server using a DNS name:
    switch:admin> tsclockserver "10.32.170.1;10.32.170.2;ntp.localdomain.net"
    Updating Clock Server configuration...done.
    Updated with the NTP servers
    Changes to the clock server value on the principal or primary FCS switch are propagated to all switches in the fabric.

April 28, 2015

fio performance testing notes

FIO installation and test example
[root@supermic2 ~]#
[root@supermic2 ~]# mkdir fio
[root@supermic2 ~]# cd fio/
[root@supermic2 fio]# wget ftp://195.220.108.108/linux/dag/redhat/el7/en/x86_64/dag/RPMS/fio-2.1.10-1.el7.rf.x86_64.rpm
[root@supermic2 fio]# rpm -ivh fio-2.1.10-1.el7.rf.x86_64.rpm
warning: fio-2.1.10-1.el7.rf.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
error: Failed dependencies:
        libc.so.6(GLIBC_2.14)(64bit) is needed by fio-2.1.10-1.el7.rf.x86_64
        libm.so.6(GLIBC_2.15)(64bit) is needed by fio-2.1.10-1.el7.rf.x86_64
Don't make my silly mistake of grabbing the EL7 package when the host is running RHEL 6
[root@supermic2 fio]# wget ftp://195.220.108.108/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fio-2.1.10-1.el6.rf.x86_64.rpm
[root@supermic2 fio]# rpm -ivh fio-2.1.10-1.el6.rf.x86_64.rpm
warning: fio-2.1.10-1.el6.rf.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
error: Failed dependencies:
        libibverbs.so.1()(64bit) is needed by fio-2.1.10-1.el6.rf.x86_64
[root@supermic2 fio]# yum install libibverbs
[root@supermic2 fio]# rpm -ivh fio-2.1.10-1.el6.rf.x86_64.rpm

PowerPath
[root@supermic2 ~]# wget https://www.dropbox.com/s/na23jm0w4s6hikx/EMCPower.LINUX-6.0.0.00.00-158.RHEL6.x86_64.rpm
[root@supermic2 ~]# rpm -ivh EMCPower.LINUX-6.0.0.00.00-158.RHEL6.x86_64.rpm
[root@supermic2 ~]# emcpreg -i
Reboot
[root@supermic2 ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=CETV2150900078 [SG_Test]
Logical device ID=60060160E1603C00BC6C544A52EDE411 [SPMIC4-2_1TB_IOPS_Test]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP B, current=SP B       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   1 bfa                    sda        SP A0     active   alive      0      0
   1 bfa                    sdb        SP B0     active   alive      0      0

[root@supermic2 ~]# cd /dev/
[root@supermic2 dev]# mkfs -t ext4 /dev/emcpowera
[root@supermic2 dev]# mkdir /emcdisk
[root@supermic2 dev]# mount /dev/emcpowera /emcdisk/
[root@supermic2 dev]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc3       450G  2.1G  426G   1% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
/dev/sdc1       477M   63M  389M  14% /boot
/dev/emcpowera 1008G   72M  957G   1% /emcdisk
Running the FIO performance test
[root@supermic2 emcdisk]# fio -filename=/dev/emcpowera -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=64G -numjobs=30 -runtime=100 -group_reporting -name=mytest
mytest: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.1.10
Starting 30 threads
Jobs: 30 (f=30): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [116.6MB/52576KB/0KB /s] [7460/3286/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=8028: Tue Apr 28 11:47:30 2015
  read : io=16586MB, bw=169621KB/s, iops=10601, runt=100130msec
    clat (usec): min=140, max=367472, avg=1137.68, stdev=2374.99
     lat (usec): min=140, max=367472, avg=1137.94, stdev=2374.99
    clat percentiles (usec):
     |  1.00th=[  374],  5.00th=[  470], 10.00th=[  524], 20.00th=[  604],
     | 30.00th=[  692], 40.00th=[  772], 50.00th=[  852], 60.00th=[  948],
     | 70.00th=[ 1048], 80.00th=[ 1176], 90.00th=[ 1368], 95.00th=[ 1576],
     | 99.00th=[ 9920], 99.50th=[13632], 99.90th=[27008], 99.95th=[34560],
     | 99.99th=[62720]
    bw (KB  /s): min=  504, max= 8032, per=3.34%, avg=5673.48, stdev=882.68
  write: io=7098.6MB, bw=72595KB/s, iops=4537, runt=100130msec
    clat (usec): min=414, max=325774, avg=3931.50, stdev=4172.32
     lat (usec): min=415, max=325775, avg=3932.92, stdev=4172.32
    clat percentiles (usec):
     |  1.00th=[ 1064],  5.00th=[ 1592], 10.00th=[ 2008], 20.00th=[ 2512],
     | 30.00th=[ 2896], 40.00th=[ 3216], 50.00th=[ 3568], 60.00th=[ 3920],
     | 70.00th=[ 4384], 80.00th=[ 4960], 90.00th=[ 5792], 95.00th=[ 6560],
     | 99.00th=[10048], 99.50th=[14144], 99.90th=[48384], 99.95th=[79360],
     | 99.99th=[168960]
    bw (KB  /s): min=   94, max= 3424, per=3.34%, avg=2427.76, stdev=361.93
    lat (usec) : 250=0.07%, 500=5.30%, 750=20.54%, 1000=20.27%
    lat (msec) : 2=24.73%, 4=16.11%, 10=11.99%, 20=0.78%, 50=0.19%
    lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%
  cpu          : usr=0.28%, sys=1.53%, ctx=1533832, majf=0, minf=6
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1061507/w=454307/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=16586MB, aggrb=169620KB/s, minb=169620KB/s, maxb=169620KB/s, mint=100130msec, maxt=100130msec
  WRITE: io=7098.6MB, aggrb=72594KB/s, minb=72594KB/s, maxb=72594KB/s, mint=100130msec, maxt=100130msec

Disk stats (read/write):
  emcpowera: ios=1060910/454033, merge=0/2, ticks=1184825/1769804, in_queue=2952885, util=99.95%

[root@supermic2 emcdisk]#
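For readability, what each flag in the fio command above does:

-filename=/dev/emcpowera   # test the PowerPath pseudo device directly (raw device, not the mounted filesystem)
-direct=1                  # O_DIRECT, bypass the page cache
-ioengine=psync -iodepth 1 # synchronous pread/pwrite, effectively queue depth 1 per job
-rw=randrw -rwmixread=70   # mixed random I/O, 70% reads / 30% writes
-bs=16k -size=64G          # 16 KB blocks, up to 64 GB per job
-thread -numjobs=30        # 30 worker threads instead of forked processes
-runtime=100               # stop after 100 seconds
-group_reporting           # aggregate results for the whole job group, as shown above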
================================
Reference blog:
http://www.cnblogs.com/Skyar/p/3488100.html