Friday, 7 August 2015

Configuring SNMP traps for the vCenter Server



Steps to configure the vCenter Server to generate SNMP traps:

A. In the Home page of the vSphere Client
B. Select vCenter Server Settings
C. Select SNMP Configuration
D. Enable one of the SNMP receivers
E. Provide the following details (a scripted equivalent is sketched after this list):

  • Receiver URL: the host name of the management server (the target SNMP server / monitoring tool) that vCenter Server will send its traps to.
  • Receiver port: use 162, the standard SNMP trap port.
  • Community string: the SNMP community string (the default is "public"). SNMP versions v1, v2 and v3 are supported.
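If you prefer to script this step rather than click through the vSphere Client, the same receiver settings can be written through the vSphere API. The sketch below uses pyVmomi and assumes the receivers are exposed as the advanced-setting keys snmp.receiver.1.* (verify the exact key names and value types on your vCenter build); the host names and credentials are placeholders.

    # Minimal pyVmomi sketch: set the first SNMP receiver on vCenter Server.
    # Assumption: the receiver is stored in the advanced settings snmp.receiver.1.*;
    # value types (bool/int vs. string) may need to match what your vCenter already stores.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    VCENTER = "vcenter.example.com"        # placeholder vCenter Server
    RECEIVER = "monitor.example.com"       # placeholder management/monitoring server

    ctx = ssl._create_unverified_context()     # lab only; use verified certificates in production
    si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        option_mgr = si.RetrieveContent().setting   # vCenter Server OptionManager
        option_mgr.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="snmp.receiver.1.enabled", value=True),
            vim.option.OptionValue(key="snmp.receiver.1.name", value=RECEIVER),
            vim.option.OptionValue(key="snmp.receiver.1.port", value=162),
            vim.option.OptionValue(key="snmp.receiver.1.community", value="public"),
        ])
    finally:
        Disconnect(si)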


That is all that is needed on the vCenter side of the configuration. Next, configure an alarm to generate the SNMP traps: whenever there is a relevant change in the environment (a host state change, a VM state change, and so on), the alarm is triggered and sends an alert to the monitoring server.

Configure the Alarms

After you have set up the external SNMP receiver, vCenter Server is ready to send traps to it. The default vCenter Server alarms, however, are not preconfigured with actions, so your SNMP server will not receive anything until you add a "Send a notification trap" action to the alarms you care about.
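To see which alarm definitions already carry a trap action, you can enumerate them through the AlarmManager. This is a hedged pyVmomi sketch: it assumes an active connection object si like the one in the first sketch, and that the trap action type is exposed as vim.action.SendSNMPAction.

    # Sketch: list the alarm definitions on the vCenter root folder and flag the
    # ones that already include a "send a notification trap" action.
    from pyVmomi import vim

    content = si.RetrieveContent()           # 'si' from the earlier connection sketch
    for alarm in content.alarmManager.GetAlarm(content.rootFolder):
        info = alarm.info
        actions = []
        if isinstance(info.action, vim.alarm.GroupAlarmAction):
            # Each entry is normally an AlarmTriggeringAction wrapping the real action.
            actions = [getattr(a, "action", None) for a in info.action.action]
        has_trap = any(isinstance(a, vim.action.SendSNMPAction) for a in actions)
        print(f"{info.name}: trap action {'present' if has_trap else 'missing'}")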

Steps: 


  • Add an alarm to monitor the changes related to VM state and vCenter Server status, and then add the appropriate action (that is, send a notification trap).
  • In the Home page of the VMware vSphere Client, select Hosts and Clusters and right-click the VMware vCenter Server, a datacenter or an individual virtual machine to set the alarm. You can set the alarm at the individual virtual machine level, at the datacenter level or at the entire VMware vCenter Server level; setting it at the VMware vCenter Server level is recommended.
  • In the General tab, provide the alarm details, with the alarm type set to monitor virtual machines:
  • Alarm Name: the name of the alarm.
  • Description: additional information about the alarm.
  • Alarm Type: select Virtual Machines in the Monitor drop-down list.
  • Select the "Monitor for specific events occurring on this object" option (for example, VM powered on) and ensure that the "Enable this alarm" check box is selected.
  • In the Triggers tab, add the required triggers to monitor the states of the virtual machine, for example VM created, VM migrated, VM powered on, VM powered off, VM suspended, and so on.

Provide information on when to send the notification trap.

In the Actions tab of the Alarm Settings panel, click Add to add a new action, and in the Action drop-down list select the "Send a notification trap" option. A scripted equivalent of this alarm definition is sketched below.
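The same kind of alarm can also be defined through the API. The sketch below is only an outline under several assumptions: it watches a single example event type (VmPoweredOnEvent), attaches a SendSNMPAction that fires on the green-to-yellow transition, uses a hypothetical alarm name, and reuses an active si connection like the one in the first sketch; check the exact field names against your vCenter API version.

    # Sketch: an event-based alarm at the vCenter root folder that sends a
    # notification trap when a virtual machine is powered on.
    from pyVmomi import vim

    content = si.RetrieveContent()           # 'si' from the earlier connection sketch

    # Trigger: fire on VmPoweredOnEvent raised by any virtual machine (assumed field usage).
    trigger = vim.alarm.EventAlarmExpression(
        eventType=vim.event.VmPoweredOnEvent,
        objectType=vim.VirtualMachine,
        status="yellow",                     # alarm state to enter when the event occurs
    )

    # Action: send a notification trap to the receivers configured earlier,
    # whenever the alarm moves from green to yellow.
    trap_action = vim.alarm.AlarmTriggeringAction(
        action=vim.action.SendSNMPAction(),
        transitionSpecs=[vim.alarm.AlarmTriggeringAction.TransitionSpec(
            startState="green", finalState="yellow", repeats=False)],
    )

    spec = vim.alarm.AlarmSpec(
        name="VM powered on (SNMP trap)",    # hypothetical alarm name
        description="Send a notification trap when a VM is powered on",
        enabled=True,
        expression=vim.alarm.OrAlarmExpression(expression=[trigger]),
        action=vim.alarm.GroupAlarmAction(action=[trap_action]),
        setting=vim.alarm.AlarmSetting(toleranceRange=0, reportingFrequency=300),
    )
    content.alarmManager.CreateAlarm(entity=content.rootFolder, spec=spec)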

That's it. You will now be able to see the alerts in the monitoring tool's dashboard.
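If nothing shows up, a quick sanity check is to listen on the receiver side and confirm that trap datagrams are actually arriving. The sketch below is only a reachability check, not an SNMP decoder; run it on the management server (or on a test host you temporarily configure as the receiver), and note that binding to port 162 usually requires elevated privileges and a free port.

    # Sketch: bare UDP listener to confirm SNMP trap datagrams arrive on port 162.
    # It prints the sender and payload size only; use a real SNMP tool to decode them.
    import socket

    TRAP_PORT = 162     # the standard trap port configured in vCenter above

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", TRAP_PORT))        # needs root/administrator rights on most systems
    print(f"Listening for SNMP traps on UDP {TRAP_PORT} (Ctrl+C to stop)")
    while True:
        data, (addr, port) = sock.recvfrom(4096)
        print(f"Trap datagram from {addr}:{port}, {len(data)} bytes")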

Cheers!

Wednesday, 25 March 2015

Explore vSphere 6.0

VVOLs
Perhaps the most wanted feature in vSphere 6 is Virtual Volumes, or VVOLs. VVOLs extends the VMware software-defined storage (SDS) story to its storage partners and radically changes how storage is presented, consumed and managed by the hypervisor. Virtual machine (VM) storage is no longer bound by the attributes of the LUN, as each VM disk (VMDK) can have its own policy-driven SLA. VMware has a passel of storage vendors on board to equip their storage with the ability to offer VVOLs storage to the VMware hypervisor. I'm sure this feature will get much press and customer attention in the coming days.

vMotion
vSphere vMotion just got 10 times better, and a lot more interesting. For one thing, it supports live VM migration across vCenter Servers, and over long distances. It used to support round-trip times (RTTs) of 10 ms, but now supports RTTs of 100 ms. A ping from Portland, Ore., to Boston, Mass., is 90 ms; so, in theory, you could move a live VM across the entire United States using vMotion. I'm not sure which of these I find more interesting: long-distance vMotion, cross-vCenter vMotion or how shared storage will span a continent.

Fault Tolerance
Multi-processor fault tolerance is a feature unique to the VMware hypervisor. It allows a workload to run simultaneously on two different ESXi servers; if a server or VM goes down, the other one continues running the load uninterrupted. Until now, this feature supported only one vCPU; it can now protect a four-vCPU VM in the Enterprise Plus edition and a two-vCPU VM in other editions. VMware stressed that, to be effective, a 10Gb connection is required between the ESXi servers. From what the engineers have told me, this required a major rewrite of the FT code base.

Bigger VMs
VMware VMs have gotten even bigger. VMware has doubled the number of vCPUs a VM can have from 64 to 128, and has quadrupled the amount of RAM from 1TB to 4TB. This opens up some unique possibilities; consider, for example, resource-intensive applications such as the SAP HANA in-memory database.

As more powerful hosts come online, vSphere will be ready to support them, because an ESXi host is capable of supporting 480 CPUs and 6TB of RAM.

vSphere now supports 64-node clusters. This change has been a long time coming, and it's good that VMware is finally supporting larger clusters. This should be a big boon to the 1,000-plus VMware customers running Virtual SAN.

Instant Clone 
VMware has a feature called Instant Clone that I'm dying to try in my lab, as it creates clones 10 times faster than vSphere does presently. This is a welcome relief for all the test and dev shops that have been hampered by waiting for clones to finish.

A new feature in the stack is vSphere Content Library. For those who have ISOs, VM templates and virtual appliances stored in multiple locations, this will be a nice central repository that can be synchronized and accessed from different sites and vCenter instances. In its initial release, vSphere Content Library has basic versioning and a publish-and-subscribe mechanism to keep things in sync.

On the network side, vSphere now supports Network I/O control on a per-VM basis, and can reserve bandwidth to guarantee SLAs.

Ready for VDI
VMware has also thrown in support for NVIDIA GRID vGPU, which allows an individual VM to take advantage of all the goodness of a physical GPU housed in an ESXi server. vSphere has had vDGA for a while, but whereas vDGA ties a GPU to one guest, vGPU allows the GPU to be shared among eight. A lot of the people I've talked to have been waiting for this feature.

This change shows how serious VMware is about the virtual desktop infrastructure market, as it joins soft 3D, vSGA and vDGA as a way to make desktop VMs more performant.

As I mentioned before, this release is, for the most part, evolutionary rather than revolutionary -- and there's nothing wrong with that. Yes, it has VVOLs, and it might be the game changer the community believes it to be. But I also hope that people finally fully utilize fault tolerance to protect critical applications, use long-distance vMotion to move workloads as needed and create some truly monstrous VMs to run their in-memory databases.

vSphere 6 demonstrates that VMware wants to maintain its leadership in hypervisor development.