VMware Communities: Message List

Re: vSAN node restart for disk expansion


Hello Thouvou,

 

 

"Would there be data loss when we will place in maintenance mode a vsan node (Ensure accessibility option), due to the small free space which is less that 30% of the total, as stated in vSAN best practices?"

When you place a node in MM with the EA option, you are essentially pausing updates to the data components residing on that node's disks - thus there is only a single up-to-date data replica until that node is taken out of MM and the data it missed while it was gone is resynced. If a physical disk fails while you are running on a single copy of the data then yes, you will lose that data, which is why it is best practice to have current back-ups before doing this. If that is not possible, keep the data cold (e.g. if no components are being updated then they can't get out of sync, even though the data is effectively FTT=0 while the node is away) - but this would of course require a maintenance window with all VMs powered off, which is why the former option is what most do.
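For reference, entering MM with the EA behaviour can also be done from the ESXi shell; this is only a minimal sketch assuming the 6.x esxcli options, so verify the flags on your build before relying on them:

# esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

and later, once the disk expansion is done:

# esxcli system maintenanceMode set --enable false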

 

"Could we avoid data loss by setting FTT=0 on all VMs, in order to "erase" the replicas and consume less space which would then lead to increasing the free percentage (more than 30%)?"

No, that wouldn't help, as you would be in the same situation of only having one data replica, which could of course be affected by the same scenario above. As the data is currently stored as RAID-1 FTT=1, there would be no available Fault Domain (node, here) to rebuild the Absent third component when you put a host in MM, so no resync will occur regardless of how long the node stays in MM. With one node not contributing storage, the capacity of the cluster will be 2/3 of its current size, and the used and free space will both be ~2/3 of their current values - so the proportions stay the same.

 

 

Bob


Re: vSAN node restart for disk expansion


Hello TheBobkin,

 

Thank you very much for the explanation. I am mainly worried about the 12.5TB of free space (<30%) and, more specifically, about what happens during MM when some of the host's data and metadata are evacuated to the rest of the nodes: is there a chance that the free capacity won't be adequate and not all data would be evacuated? We are thinking of shutting down all the affected VMs prior to entering MM, as we have a large enough time window.

 

Thank you again

Horizon Blast Gateway


Hello,

I wanted to change my Connection Server configuration, but unfortunately I am having difficulties and I can't understand where the problem is.

 

Currently the configuration is "Use Blast Secure Gateway for all Blast connections", and I wanted to change it to "Do not use Blast Secure Gateway".

 

Unfortunately, if I change this setting, something happens that I can't understand: whether from the zero client or from the Windows client, as soon as I enter the login credentials, the VM assigned to the user goes into an error state (blue screen). This happens to all of us, on numerous VMs.

 

Can you help me figure out what's going on?

 

Thanks Alessandro

Re: VMware ESXi control panel | VM Management & Bandwidth Monitoring | AutoVM


New Auto update: (Version 188)

* Bug fixes.

Re: Deploy Error : A required disk image was missing.


I had the same issue, and it turned out to be the ISO image attached via the emulated CD-ROM drive.

Re: Virtual machine are always been deleted for me.


I don't have Synaptic open or anything else.

Re: VMware “Unable to connect to the MKS: Login (username/password) incorrect.” after Upgrade


Is it safe to assume at this point that this bug is simply going to remain present in VMware Workstation? It still has not been fixed.

Re: vSAN node restart for disk expansion


As I said above: the used and free space should stay approximately proportional, e.g. you will have ~8.2TB free space after placing one node in MM with the EA option in a 3-node cluster. There should be no resync or evacuation of data unless you have some FTT=0 data in the cluster - FTT=1 Objects require 3 usable Fault Domains for component placement and only 2 are available with one node in MM/rebooting.
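If you want to sanity-check this from the CLI before and after entering MM, a minimal sketch (the command names assume a 6.0 U2 or later build; verify them on your hosts):

# esxcli storage filesystem list
(look at the Size and Free columns of the vsanDatastore entry)

# esxcli vsan debug resync summary get
(with only FTT=1 objects and one host in MM with EA, this should show nothing resyncing)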

 

"We are thinking to shutdown all the affected VMs as we have enough time window, prior to entering MM."

Note that merely shutting down the VMs running on one node won't make the data on that node cold - you would need to shut down all VMs in the cluster for the duration of the maintenance window, which is why most just take current back-ups and then perform the maintenance with rolling MM EA.

 

 

Bob


Re: Datastores disappeared after a reboot


#offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk {'print $3'}`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((

0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done

/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0

gpt

243 255 63 3913728

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

---------------------

/vmfs/devices/disks/naa.600605b009617b801cfe4186271b2743

gpt

243133 255 63 3905945600

1 2048 3905945566 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Checking offset found at 2048:

0200000 d00d c001

0200004

1400000 f15e 2fab

1400004

0140001d  64 69 73 6b 32 00 00 00  00 00 00 00 00 00 00 00  |disk2...........|

0140002d  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

---------------------

/vmfs/devices/disks/naa.600605b009617b801cfe4186271b7c33

gpt

364733 255 63 5859442688

1 2048 5859442654 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Checking offset found at 2048:

0200000 d00d c001

0200004

1400000 f15e 2fab

1400004

0140001d  64 69 73 6b 31 00 00 00  00 00 00 00 00 00 00 00  |disk1...........|

0140002d  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

---------------------

# fdisk -l /dev/disks/naa.600605b009617b801cfe4186271b2743

 

 

***

*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil

***

 

 

Found valid GPT with protective MBR; using GPT

 

 

Disk /dev/disks/naa.600605b009617b801cfe4186271b2743: 3905945600 sectors, 3725M

Logical sector size: 512

Disk identifier (GUID): ffa051fd-46a5-4151-a6d3-b62a3a5b9ec0

Partition table holds up to 128 entries

First usable sector is 34, last usable sector is 3905945566

 

 

Number  Start (sector)    End (sector)  Size       Code  Name

   1            2048      3905945566       3724M   0700

#fdisk -l /dev/disks/naa.600605b009617b801cfe4186271b7c33

 

 

***

*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil

***

 

 

fdisk: device has more than 2^32 sectors, can't use all of them

Found valid GPT with protective MBR; using GPT

 

 

Disk /dev/disks/naa.600605b009617b801cfe4186271b7c33: 4294967295 sectors, 4095M

Logical sector size: 512

Disk identifier (GUID): cd7dd043-9548-4f87-bba7-9b900622d9e4

Partition table holds up to 128 entries

First usable sector is 34, last usable sector is 5859442654

 

 

Number  Start (sector)    End (sector)  Size       Code  Name

   1            2048      5859442654       5587M   0700

# esxcli storage core path list

usb.vmhba32-usb.0:0-mpx.vmhba32:C0:T0:L0

   UID: usb.vmhba32-usb.0:0-mpx.vmhba32:C0:T0:L0

   Runtime Name: vmhba32:C0:T0:L0

   Device: mpx.vmhba32:C0:T0:L0

   Device Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

   Adapter: vmhba32

   Channel: 0

   Target: 0

   LUN: 0

   Plugin: NMP

   State: active

   Transport: usb

   Adapter Identifier: usb.vmhba32

   Target Identifier: usb.0:0

   Adapter Transport Details: Unavailable or path is unclaimed

   Target Transport Details: Unavailable or path is unclaimed

   Maximum IO Size: 122880

 

 

unknown.vmhba2-unknown.2:1-naa.600605b009617b801cfe4186271b7c33

   UID: unknown.vmhba2-unknown.2:1-naa.600605b009617b801cfe4186271b7c33

   Runtime Name: vmhba2:C2:T1:L0

   Device: naa.600605b009617b801cfe4186271b7c33

   Device Display Name: Local LSI Disk (naa.600605b009617b801cfe4186271b7c33)

   Adapter: vmhba2

   Channel: 2

   Target: 1

   LUN: 0

   Plugin: NMP

   State: active

   Transport: parallel

   Adapter Identifier: unknown.vmhba2

   Target Identifier: unknown.2:1

   Adapter Transport Details: Unavailable or path is unclaimed

   Target Transport Details: Unavailable or path is unclaimed

   Maximum IO Size: 131072

 

 

unknown.vmhba2-unknown.2:0-naa.600605b009617b801cfe4186271b2743

   UID: unknown.vmhba2-unknown.2:0-naa.600605b009617b801cfe4186271b2743

   Runtime Name: vmhba2:C2:T0:L0

   Device: naa.600605b009617b801cfe4186271b2743

   Device Display Name: Local LSI Disk (naa.600605b009617b801cfe4186271b2743)

   Adapter: vmhba2

   Channel: 2

   Target: 0

   LUN: 0

   Plugin: NMP

   State: active

   Transport: parallel

   Adapter Identifier: unknown.vmhba2

   Target Identifier: unknown.2:0

   Adapter Transport Details: Unavailable or path is unclaimed

   Target Transport Details: Unavailable or path is unclaimed

   Maximum IO Size: 131072

# esxcli storage core device list

naa.600605b009617b801cfe4186271b7c33

   Display Name: Local LSI Disk (naa.600605b009617b801cfe4186271b7c33)

   Has Settable Display Name: true

   Size: 2861056

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/naa.600605b009617b801cfe4186271b7c33

   Vendor: LSI

   Model: MR9240-4i

   Revision: 2.13

   SCSI Level: 5

   Is Pseudo: false

   Status: on

   Is RDM Capable: false

   Is Local: true

   Is Removable: false

   Is SSD: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: unsupported

   Other UIDs: vml.0200000000600605b009617b801cfe4186271b7c334d5239323430

   Is Local SAS Device: false

   Is USB: false

   Is Boot USB Device: false

   No of outstanding IOs with competing worlds: 32

 

 

mpx.vmhba32:C0:T0:L0

   Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

   Has Settable Display Name: false

   Size: 1911

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0

   Vendor: General

   Model: USB Flash Disk

   Revision: 1.00

   SCSI Level: 2

   Is Pseudo: false

   Status: on

   Is RDM Capable: false

   Is Local: true

   Is Removable: true

   Is SSD: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: unsupported

   Other UIDs: vml.0000000000766d68626133323a303a30

   Is Local SAS Device: false

   Is USB: true

   Is Boot USB Device: true

   No of outstanding IOs with competing worlds: 32

 

 

naa.600605b009617b801cfe4186271b2743

   Display Name: Local LSI Disk (naa.600605b009617b801cfe4186271b2743)

   Has Settable Display Name: true

   Size: 1907200

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/naa.600605b009617b801cfe4186271b2743

   Vendor: LSI

   Model: MR9240-4i

   Revision: 2.13

   SCSI Level: 5

   Is Pseudo: false

   Status: on

   Is RDM Capable: false

   Is Local: true

   Is Removable: false

   Is SSD: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: unsupported

   Other UIDs: vml.0200000000600605b009617b801cfe4186271b27434d5239323430

   Is Local SAS Device: false

   Is USB: false

   Is Boot USB Device: false

   No of outstanding IOs with competing worlds: 32

Re: Help diagnosing purple screen


Hello David,

 

 

System has encountered a Hardware Error - Please contact the hardware vendor

 

If you can reproduce the issue readily and/or under load, you may be able to narrow down which component is potentially broken/failing by whether you get a consistent backtrace and/or things such as specific cores always being indicated (e.g. always cores from one CPU but not the other if dual-socket) - then again, this could indicate a slot or other board failure as Chip said above, or potentially even the memory bank local to that socket.

Either way, switching components around would likely be the only way to deduce it further, e.g. seeing whether it always follows a CPU when switched. It is probably a good idea to check your out-of-band management and call your hardware vendor before doing the above, of course.

 

 

Bob

Re: Datastores disappeared after a reboot


One of the first things I tried.

 

vmkernel.log output when rescanning:

2019-04-07T19:11:25.957Z cpu3:38976)VC: 2059: Device rescan time 16 msec (total number of devices 6)

2019-04-07T19:11:25.957Z cpu3:38976)VC: 2062: Filesystem probe time 59 msec (devices probed 6 of 6)

2019-04-07T19:11:25.957Z cpu3:38976)VC: 2064: Refresh open volume time 0 msec

...

2019-04-07T19:15:06.493Z cpu2:32791)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e8081b380, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE

2019-04-07T19:15:06.493Z cpu2:32791)ScsiDeviceIO: 2338: Cmd(0x412e8081b380) 0x1a, CmdSN 0x418 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

...

2019-04-07T19:16:27.080Z cpu6:34890)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x85 (0x412e8081db80, 34362) to dev "naa.600605b009617b801cfe4186271b7c33" on path "vmhba2:C2:T1:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2019-04-07T19:16:27.080Z cpu6:34890)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x85, CmdSN 0x6 from world 34362 to dev "naa.600605b009617b801cfe4186271b7c33" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2019-04-07T19:16:27.080Z cpu6:34890)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x4d, CmdSN 0x7 from world 34362 to dev "naa.600605b009617b801cfe4186271b7c33" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2019-04-07T19:16:27.081Z cpu6:34890)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x1a, CmdSN 0x8 from world 34362 to dev "naa.600605b009617b801cfe4186271b7c33" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2019-04-07T19:16:27.083Z cpu6:32795)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x85 (0x412e8081db80, 34362) to dev "naa.600605b009617b801cfe4186271b2743" on path "vmhba2:C2:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2019-04-07T19:16:27.083Z cpu6:32795)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x85, CmdSN 0x9 from world 34362 to dev "naa.600605b009617b801cfe4186271b2743" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2019-04-07T19:16:27.083Z cpu6:32795)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x4d, CmdSN 0xa from world 34362 to dev "naa.600605b009617b801cfe4186271b2743" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2019-04-07T19:16:27.083Z cpu6:32795)ScsiDeviceIO: 2338: Cmd(0x412e8081db80) 0x1a, CmdSN 0xb from world 34362 to dev "naa.600605b009617b801cfe4186271b2743" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

Datastores disappeared after a reboot


Guys,

 

Had to shut down the host to put more memory in. The new memory is detected fine, but both my datastores disappeared, so I can't bring the VMs back online.

 

Can't figure out why this happened or how to fix it.


ESXi v5.5. Running off USB (mpx.vmhba32:C0:T0:L0)

 

Terminal outputs:

 

# esxcfg-scsidevs -c

Device UID                            Device Type      Console Device                                            Size      Multipath PluginDisplay Name

mpx.vmhba32:C0:T0:L0                  Direct-Access    /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0                  1911MB    NMP     Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

naa.600605b009617b801cfe4186271b2743  Direct-Access    /vmfs/devices/disks/naa.600605b009617b801cfe4186271b2743  1907200MB NMP     Local LSI Disk (naa.600605b009617b801cfe4186271b2743)

naa.600605b009617b801cfe4186271b7c33  Direct-Access    /vmfs/devices/disks/naa.600605b009617b801cfe4186271b7c33  2861056MB NMP     Local LSI Disk (naa.600605b009617b801cfe4186271b7c33)

 

# partedUtil getptbl /vmfs/devices/disks/naa.600605b009617b801cfe4186271b2743

gpt

243133 255 63 3905945600

1 2048 3905945566 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

 

# partedUtil getptbl /vmfs/devices/disks/naa.600605b009617b801cfe4186271b7c33

gpt

364733 255 63 5859442688

1 2048 5859442654 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Edit:
Things attempted so far:
VMware Knowledge Base - Recreating a missing VMFS datastore partition in VMware vSphere 5.x and 6.x (2046610)

VMware Knowledge Base - LUN suddenly becomes unavailable to one or more ESXi/ESX hosts and LUN needs to be reset (1000044)
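One more thing that is often worth checking when datastores vanish after a reboot but the partition tables and VMFS magic still look intact: whether the volumes are being detected as snapshot/unresolved VMFS volumes and therefore not auto-mounted. A minimal check from the ESXi shell (commands only; the mount example uses a placeholder label, not your actual datastore name):

# esxcli storage vmfs snapshot list
# esxcfg-volume -l
# vmkfstools -V

If a volume shows up in the snapshot list, it can usually be mounted with something along the lines of:

# esxcli storage vmfs snapshot mount -l "datastore1"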

Re: Datastores disappeared after a reboot


Can you share the vmkernel.log and hostd.log covering the time around the reboot?

Re: ESXi 6.5 - HP P420i - HBA mode - Failed to create VMFS datastore


So I went to the DC last week to finally make some progress with these controllers.

Along with the P420i controllers, we also have some SmartArray P220i controllers installed, which I had disabled via the BIOS.

These controllers had a 512MB FBWC module installed each, so I was hoping to be able to use that Caching module with the P420i.

The FBWC modules did fit and the P420i would recognise the newly fitted Caching module.

 

OK, next step, just to rule out that the previously missing caching module was the reason the pass-through option failed for that controller: set the controller to HBA mode and try to claim the disks for VSAN. It failed again (which was to be expected, most likely).

 

Next try: set the controller to RAID0 mode and create an R0 array for each single disk. This worked as expected, although I discovered some discrepancies between using the Storage Administrator from the latest SPP Gen8 ISO ( P03093_001_spp-Gen8.1-SPPGen81.4.iso ) and the Onboard Array Configuration Utility - ACU ( hit F5 when the P420i is being initialised ).

I preferred using the ACU - F5 method for configuring the P420i as this had the following advantages for me:

  • Able to control the Read/Write Cache per LogicalVolume
  • Resulting queue depth of 1024 instead of 1020 when compared to using the SA from the ISO, which is really weird, as the queue depth should be a result of the controller's actual capabilities no matter which array config tool you use.
    Screen Shot 2019-04-05 at 17.48.04.png
  • The array config actually gets saved. Yes, that's right: when using the ISO method, my controller config gets overwritten as soon as the server boots and initialises the P420i. It will re-create a RAID0 array for the disk in the first slot only ( the SSD in my case ). When you re-enter the CLI config utility for the controller ( F8, not F5, on controller boot-up ), you can re-create the other R0 arrays and then they will be persistent across reboots.

The only thing I have found missing when using the ACU - F5 method is the option to turn off "SSD Smart Path", so I am unsure whether this is enabled by default or not. I know it's enabled by default when using the ISO SA method, but there you can simply disable it after creating the SSD R0 array, so I might reboot the servers into that mode to double-check this setting.
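In case it helps, the SSD Smart Path state can also be checked (and, if needed, changed) from within ESXi using HP's hpssacli utility, assuming the HP/HPE offline bundle is installed. This is only a sketch - the slot and array letters are placeholders and the exact modify syntax should be verified against your hpssacli version:

# /opt/hp/hpssacli/bin/hpssacli ctrl all show config detail
(the per-array output includes the current HP SSD Smart Path setting)

# /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array A modify ssdsmartpath=disable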

 

For reference, below are the complete settings I use with the ACU - F5 method:

Screen Shot 2019-04-05 at 14.49.40.png

 

Creating a VSAN Datastore with those controller settings worked flawlessly, and after enabling the VSAN Performance service and the CEIP, I prepared my config for running some benchmarks using HCIBench later.

 

The first test I did was just creating a simple 64-bit CentOS VM, installing open-vm-tools, and running a couple of dd's. Now, bear in mind that this cannot really be considered benchmarking or performance testing, as there are so many variables in this equation influencing your results.

 

Below are the results from using dd with "direct I/O for data" and "synchronised I/O for data" (example invocations of this kind are sketched after the hardware list below), which literally did not make any difference. The results are actually pretty underwhelming, considering I am using:

  • 1 x "HPE 240GB SATA 6G READ INTENSIVE SFF (2.5IN) SC DS SSD - PartNo. 875652-001" for Caching
  • 3 x "HPE 600GB SAS 12G ENTERPRISE 10K SFF (2.5IN) SC DIGITALLY SIGNED FIRMWARE HDD - PartNo. 872736-001" for Capacity

on each of my 3-Node VSAN boxes.
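For reference, the kind of dd runs meant above would look roughly like the following inside the CentOS guest - a sketch only, with an example target path and size rather than the exact commands behind the screenshot below:

# dd if=/dev/zero of=/root/ddtest.bin bs=1M count=4096 oflag=direct
# dd if=/dev/zero of=/root/ddtest.bin bs=1M count=4096 oflag=sync

(oflag=direct requests direct I/O for data and oflag=sync synchronised I/O for data, i.e. the two variants mentioned above.)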

Screen Shot 2019-04-05 at 23.24.05.png

 

Anyway, moving right along to HCIBench, which will actually simulate intended Workloads for your installation.

( I have turned off the Linux testbox while using HCIBench to get a more accurate result. HCI-Bench was the only VM running on my VSAN Datastore during all those tests. )

I also used FIO as the Benchmark tool instead of VDBENCH, for those who are interested.

 

1st Test: IOPS heavy, using the following settings:

  • 15 VMs, each VM with 8 data VMDKs of 10GB/VMDK
  • 8 Threads per Disk, WORKSET=100%, READ=0%, RANDOM=100%
  • BlockSize=4K, TestTime=3600s

15vms_8x10GBvmdks_8tpd_100workset_0read_100random_4KBS.png

 

2nd Test: IOPS heavy, using the following settings:

  • 15 VMs, each VM with 8 data VMDKs of 10GB/VMDK
  • 8 Threads per Disk, WORKSET=100%, READ=100%, RANDOM=0%
  • BlockSize=4K, TestTime=3600s

15vms_8x10GBvmdks_8tpd_100workset_100read_0random_4KBS.png

 

3rd Test: Throughput heavy, using the following settings:

  • 15 VMs, each VM with 8 data VMDKs of 10GB/VMDK
  • 8 Threads per Disk, WORKSET=100%, READ=0%, RANDOM=100%
  • BlockSize=256K, TestTime=3600s

15vms_8x10GBvmdks_8tpd_100workset_0read_100random_256KBS.png

 

4th Test: Throughput heavy, using the following settings:

  • 15 VMs, each VM with 8 data VMDKs of 10GB/VMDK
  • 8 Threads per Disk, WORKSET=100%, READ=100%, RANDOM=0%
  • BlockSize=256K, TestTime=3600s

15vms_8x10GBvmdks_8tpd_100workset_100read_0random_256KBS.png

 

All in all, I can say that the results are somewhat mediocre. On the 4K blocksize tests, the throughput is just through the floor in my opinion. What's even more disappointing is that the read/write latency is nowhere near acceptable. Agreed, the tests I ran put some real load on the VSAN Datastore, but I had really hoped for better results. The only positive result for me is the amount of IOPS this configuration can push.

 

Lastly, to rule out that I had overstretched the capabilities of my VSAN config, I ran a final test with the following settings:

 

5th Test: Throughput heavy, using the following settings:

  • 3 VMs, each VM with 8 data VMDKs of 10GB/VMDK
  • 4 Threads per Disk, WORKSET=100%, READ=50%, RANDOM=50%
  • BlockSize=1024K, TestTime=3600s

3vms_8x10GBvmdks_4tpd_100workset_50read_50random_1024KBS.png


Re: Problem Booting Win10 in VMWare Workstation for First Time


Number 6 (BIOS) is under "Settings": the first tab is "Hardware", the second tab is "Options".

Select "Options" --> Advanced --> go to Firmware Type and select BIOS. (15.02)

Only enable vmware virtual webcam if an actual webcam is connected to the device


We are currently getting ready to deploy Horizon Windows 10 desktops to our workforce; however, in testing I have found that the VMware Virtual Webcam device causes web meetings in our 3CX phone system to fail to launch when the thin client doesn't have a webcam attached.

 

I am able to mitigate this on machines without a webcam by disabling the VMware Virtual Webcam in Device Manager, which allows the web meeting to launch successfully.

 

I am wondering if there is a way within VMware to disable the virtual webcam device by default, unless an actual webcam is attached to the client device?

Re: Problem Booting Win10 in VMWare Workstation for First Time


When creating your machine:

1-Create New Machine, Custom (Advanced) -->

2-Customise your Hardware to your needs or leave default.-->

3-Select the bottom option, which is "I will install the OS later" -->

4-Select your guest OS-->

5-Name Your OS and select the location-->

6-NOW SELECT FIRMWARE  "BIOS"-->

( Under "Settings", the first tab is "Hardware" and the second tab is "Options": select "Options" --> Advanced --> go to Firmware Type and select BIOS. (15.02) )

 

7-Select your Processor Config or leave default--->

8-Adjust your memory or leave default-->

9-Select your Network Type or leave default (NAT)-->

10-Leave I/O Controller Types as is default-->

11-Select your Disk Type or leave default-->

12-Select Disk or leave default-->

13-Specify your disk Capacity or leave default-->

14-Specify Disk File or leave default-->

15-Finish

16-Now go back to your newly created machine, right-click and choose "Settings"-->

17 Go to "CD/DVD and select your CD Driver or ISO image --> Hit Ok or Apply

18-Now select your VM and click the "Play" button, then follow the regular MS operating system installation with your preferences.
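For reference, the firmware choice from step 6 is stored as a single setting in the VM's .vmx file, so you can confirm or change it outside the wizard as well (the file name below is just an example; on a Windows host simply open the .vmx in a text editor):

# grep -i firmware "Windows 10 x64.vmx"

The relevant line is firmware = "efi" for UEFI; firmware = "bios" (or no firmware line at all) means legacy BIOS, which is what the steps above select.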

Re: Using VMware on Windows 1903 host machine cannot turn on guest machine


It doesn't work.

It just shows a black screen and nothing responds, like a deadlock...

Re: Manual Removal of Collector from Skyline


Hi,

Can you please remove collector ID: 1703bf3a-e981-4975-9355-514145fc7f1b

from my account ( Customer ID: 5103612754 ).

Thank you

Meir
