Wednesday 14 August 2013

Configuring FreeNAS to create a virtual lab environment



What is FreeNAS you say? Put simply, it's an operating system based on FreeBSD that brings with it a snazzy web interface for management, and all the protocols you need to share files between Windows, Mac and Linux. In other words, a perfect solution for your digital bookshelf. Let's get to it.
Once you've got your hardware squared away we have to get some things out in the open:
  • FreeNAS needs to be installed on a USB drive separate from the disks you intend to use for your storage volumes. Put simply, if you were to install FreeNAS (which only requires 2GB of storage) on a spankin' new 1TB HD, you'd lose 1022GB of said hard disk. FreeNAS cannot utilize the drive on which it's installed for storage. So, that's why you need that USB stick.
  • Think about where you want to keep your FreeNAS box. Once you install the OS you can throw the box in a closet with power and a network connection, and let 'er run. Once the initial setup is complete, you can manage the configuration using the web interface. Just a thought.
  • Forget about WiFi. We know your little wheels are spinning -- just forget it. Trust us on this one.
FreeNAS Installation

1. The very first step is to download the FreeNAS ISO image and burn it to a blank CD-R/CD-RW. You can get the file here.
2. Place the USB stick into a USB port that's attached directly to your system board. Don't insert it into one of those front panel sockets; to be safe it should be in the back of the PC. Yours truly had some weird results using front panel USB ports, which included installations crashing and very slow operation.

3. Power up your machine and head directly to your BIOS config. Do not pass go, do not collect $200. We have to be sure to set the boot devices in the proper order. Since BIOS options vary from device to device, here's the basic order you want: CD/DVD drive, USB HDD, disable all other devices. Save your settings, place the freshly baked CD in your drive and reboot.

4. If everything went well with the last step you should now be booting to the first bootloader. You'll see some text scrolling and gibberish like so:
Next you'll get to the boot loader, which looks like this:

5. At this point you can either press Enter or allow the timer to count down. Whichever you choose, you'll end up in the actual FreeNAS installer here:
6. Odds are your device will be listed as da0 on this screen as well. Double check the description and size to be sure. As you can see, in our case it plainly reads, "SanDisk Cruzer 8.02 -- 7.5GiB," the name of our USB stick. Select your device and press Enter.
7. The installer here gives us a nice little warning which states that all data will be wiped from your drive for installation. Hit "Yes" to proceed.
8. As soon as you press Enter you'll notice the dialogue beginning at the bottom of the screen. Man, that's flashy. Eventually, you'll see a message reassuring you the installation is complete and that it's time to reboot again.
9. Hit Enter and remove the CD from your drive so you boot to your newly minted FreeNAS installation. Once your computer reboots, you'll be inside the FreeNAS OS.
At this point, if you see this screen, go ahead and let out a single "woot!" You deserve it. Congratulations, you've now got FreeNAS installed. Okay, now get a hold of yourself, as we've still gotta carve out some disk volumes and share 'em.
Create disk volumes
1. Make note of the next-to-last line on the screen (highlighted in green below): http://192.168.11.48/. That's telling us the URL through which we can access the FreeNAS management interface.
Sidenote: By default, FreeNAS uses DHCP to obtain its IP address, so yours is almost certainly going to be different. In most home environments, DHCP is used for serving out IP addresses, so it's easiest to leave the FreeNAS configuration as is to avoid any IP conflicts on your home network. If a storm knocks out power to your home and everything reboots, you may have to check this screen again if your DHCP lease table gets wiped out, as the address may change. If you happen to be running a network where you statically set IP addresses, good for you. You'll of course need to set a static address on your FreeNAS system by choosing option one on the Console Setup screen. We won't cover configuring static addresses in this how-to, so you're on your own there.

2. Let's open up the management interface now. From another computer on your network, open up a web browser and enter the address you see on your FreeNAS machine. You should see this:

After successfully configuring FreeNAS, add the NAS storage as a logical drive (datastore) in your VMware virtualization setup.

Once that process is complete, you can install multiple VMs on the shared storage, which is very useful for creating a virtual lab environment.

Monday 5 August 2013

PXE BOOT installation

Mount the ISO file (create the mount point first if it doesn't already exist):
mkdir -p /mnt/iso
mount -o loop /mnt/e/centos5/CentOS-5.0-i386-bin-DVD.iso /mnt/iso

Configure the DHCP server (/etc/dhcpd.conf):

#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
ddns-update-style interim;

subnet 10.10.10.0 netmask 255.255.255.0 {
        range 10.10.10.98 10.10.10.99;
        default-lease-time 3600;
        max-lease-time 4800;
        option routers 10.10.10.29;
        option domain-name-servers 10.10.10.29;
        option subnet-mask 255.255.255.0;
        option domain-name "qa.net";
        option time-offset -8;
        next-server 10.10.10.29;
}

host test {
        hardware ethernet 00:10:F3:09:89:48;
        fixed-address 10.10.10.98;
        option host-name "test";
        filename "/linux-install/pxelinux.0";
}

Configure HTTP server...................
<Directory /mnt/iso>

Options Indexes

AllowOverride None

</Directory>

Alias /linux /mnt/iso

Enable the TFTP server using chkconfig, then restart xinetd:

chkconfig tftp on
service xinetd restart
Configure PXE Installation Service

pxeos -a -i "test" -p HTTP -D 0 -s 10.10.10.29 -L /linux test
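For reference, pxeos writes a PXELINUX configuration under /tftpboot/linux-install/pxelinux.cfg/. If you'd rather create one by hand, a minimal default file looks roughly like the sketch below. The test/vmlinuz and test/initrd.img paths are assumptions (they presume you copied the installer kernel and initrd from the ISO's images/pxeboot directory into /tftpboot/linux-install/test/), and the method URL points at the Apache alias configured above:

```
default linux
prompt 1
timeout 100

label linux
  kernel test/vmlinuz
  append initrd=test/initrd.img method=http://10.10.10.29/linux ip=dhcp
```

With this in place, a host that PXE-boots and loads pxelinux.0 will pull the kernel over TFTP and fetch the install tree over HTTP.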

VMware HA Best Practices

Use the VMware HA best practices in this section that are applicable to your ESX Server implementation and networking architecture.

Networking Best Practices
The configuration of ESX Server host networking and name resolution, as well as the networking infrastructure external to ESX Server hosts (switches, routers, and firewalls), is critical to optimizing VMware HA setup. The following suggestions are best practices for configuring these components for improved HA performance:

Ensure that the following firewall ports are open for communication by the service console for all ESX Server 3 hosts:
Incoming Port: TCP/UDP 8042-8045
Outgoing Port: TCP/UDP 2050-2250
For better heartbeat reliability, configure end-to-end dual network paths between servers for service console networking. You should also configure shorter network paths between the servers in a cluster; routes with too many hops can delay heartbeat packets. If redundant service consoles are on separate subnets, specify an isolation address for each service console on its own subnet. By default, the gateway address for the network is used as the isolation address.
Disable VMware HA (in VirtualCenter, deselect the Enable VMware HA check box in the cluster's Settings dialog box) when performing any networking maintenance that might disable all heartbeat paths between hosts.
Use DNS for name resolution rather than the error-prone method of manually editing the local /etc/hosts file on ESX Server hosts. If you do edit /etc/hosts, you must include both long and short names.
Use consistent port names on VLANs for public networks on all ESX servers in the cluster. Port names are used to reconfigure access to the network by virtual machines. If the names used on the original server and the failover server are inconsistent, virtual machines are disconnected from their networks after failover.


Setting Up Networking Redundancy

Networking redundancy between cluster nodes is important for VMware HA reliability. Redundant service console networking on ESX Server allows the reliable detection of failures and prevents isolation conditions from occurring, because heartbeats can be sent over multiple networks.
You can implement network redundancy at the NIC level or at the service console or VMkernel port level. In most implementations, NIC teaming provides sufficient redundancy, but you can add service console or port redundancy if you need additional redundancy.

NIC Teaming
Two NICs connected to separate physical switches can improve the reliability of a service console (or, in ESX Server 3i, VMkernel) network. Because servers connected to each other through two NICs (and through separate switches) have two independent paths for sending and receiving heartbeats, the cluster is more resilient.
To configure a NIC team for the service console, configure the vNICs in the vSwitch configuration for the ESX Server host in an active/standby configuration. The recommended parameters for the vNICs are:

Rolling Failover = Yes
Default Load Balancing = route based on originating port ID


The following example illustrates the use of a single service console network with NIC teaming for network redundancy:

You assume some risk if you configure hosts in the cluster with only one service console network (subnet 10.20.XX.XX), so this example uses two teamed NICs to protect against NIC failure. The default timeout is increased to 60 seconds (das.failuredetectiontime = 60000).
Secondary Service Console Network
As an alternative to NIC teaming for providing redundancy for heartbeats, you can create a secondary service console network (or VMkernel port for ESX Server 3i), then attach that port to a separate virtual switch. The primary service console network is still used for network and management purposes. When you create the secondary service console network, VMware HA sends heartbeats over both the primary and secondary service console networks. If one path fails, VMware HA can still send and receive heartbeats over the other path.
By default, the gateway IP address specified in each ESX Server host's service console network configuration is used as the isolation address. Each service console network should have one isolation address it can reach. When you set up service console network redundancy, you must specify an additional isolation response address (das.isolationaddress2) for the secondary service console network. When you specify this secondary isolation address, VMware also recommends that you increase the das.failuredetectiontime setting to 20000 milliseconds or greater.
Also, make sure you configure isolation addresses properly for the redundant service console network that you create, following the networking best practices above when designating isolation addresses. A further optimization you can make (if you have already configured a VMotion network) is to add the secondary service console network to the VMotion vSwitch, since that vSwitch can be shared between the VMotion network and a secondary service console network. In that setup, each host in the cluster is configured with two service consoles, each connected to a separate physical NIC, with the two networks on different subnets.
Use the default gateway for the first network and specify das.isolationaddress2 = 192.168.1.103 as the additional isolation address for the second network. Increase the default timeout to 20 seconds (das.failuredetectiontime = 20000).
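Putting the two settings from this example together, the cluster's HA advanced options (entered as name/value pairs in the VMware HA Advanced Options dialog in VirtualCenter) would look something like the following sketch; the isolation address is just the example value used above:

```
das.isolationaddress2 = 192.168.1.103
das.failuredetectiontime = 20000
```

Remember that these are cluster-level settings, so they apply to every host in the HA cluster.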

Other HA Cluster Considerations

Other considerations for optimizing the performance of your HA cluster include:
•Use larger groups of homogeneous servers to allow higher levels of utilization across an HA-enabled cluster (on average). Clusters with more nodes can tolerate multiple host failures while still guaranteeing failover capacity.
Admission control heuristics are conservatively weighted so that large servers with many virtual machines can fail over to smaller servers.
•To define the sizing estimates used for admission control, set reasonable reservations for the minimum resources needed.
Admission control can overestimate failover capacity when reservations are not set; otherwise, VMware HA uses the largest reservation specified for a virtual machine in the cluster when deciding failover capacity. At a minimum, set reservations for a few virtual machines considered average.
Admission control may be too conservative when host and virtual machine sizes vary widely. You may choose to do your own capacity planning by selecting "Allow virtual machines to be powered on even if they violate availability constraints" in the cluster settings.

Saturday 3 August 2013

Which VM backup is more reliable?

Types of VM backup through Symantec
Agent-based backup is also known as guest-based backup. An agent is installed in every virtual machine, and each virtual machine is treated as if it were a physical server. The agent in this scenario reads data from disk and streams it to the backup server.
Advantage-
  • Both physical and virtual machines are protected using the same method
  • Application owners can manage backups and restores to guest OS
  • Time tested and proven solution
  • Meets their recovery needs
  • This is the only way to protect VMware Fault Tolerant virtual machines and VMs with physical Raw Device Mappings (RDMs)
Disadvantage-
  • Significantly higher CPU, memory, I/O and network resources utilization on virtual host machines when backups run.
  • Need to install and manage agents on each virtual machine
  • Cost may be high for solutions that license on a per agent basis as opposed to per hypervisor based licensing
  • lack of visibility into changing virtual infrastructure
  • No visibility for backups from VM administrators’ point of view; for example, backups are not visible at vSphere client level
  • Complex disaster recovery strategies
  • Lack of SAN transport backups to offload backup processing job from virtual infrastructure
  • No protection for offline virtual machines and virtual machine templates
  • Slow file-by-file backups, with the agent sending even unchanged data over and over again

Agentless backup-
Agentless backup, also known as host-based backup, refers to solutions that do not require an agent to be installed on each VM by the administrator. However, it’s important to note that the software may be injecting an agent onto the guest machines without your knowledge.
Advantage-
  • VMs can be backed up online or offline
  • Less CPU, memory, I/O and network impact on the virtual host
  • An agentless architecture doesn't require the management of agent software
  • No per VM agent licensing fees
Disadvantage-
  • Extremely difficult to recover granular object data - you must first restore the entire VM and its virtual disks
  • Relies on traditional login techniques to log into the server
  • Temporary “injected” drivers can destabilize the system and compromise data integrity
  • Troubleshooting is more complex when using injected (temporary) agents
  • A centralized controller is a single-point-of-failure
  • Requires a fully-virtualized environment. Physical machines still require agent-based backup. If you have physical and virtual you will need two backup solutions – one for physical and the other for virtual.

Agent-Assisted Backup-
Agent assisted backups are also known as host based backup and integrate with VMware’s VADP and Microsoft VSS to provide fast and efficient online and offline backups of ESX, vSphere and Hyper-V. The primary difference between agentless and this design is its perspective: it pairs the VMware VADP or Microsoft VSS with an agent that gathers application metadata to enable multiple avenues of recovery (full VM, applications, databases, files, folders and granular objects).
  • The backup is for the entire virtual machine. This is important because it means the entire VM can be recovered from the image.  It also means that products like Backup Exec & NetBackup can offer “any-level” of recovery from the image contents: Files / Folders, Databases and Granular database contents, like email and documents.
  • The backup can be offloaded from both VM as well as the hypervisor. This means that Backup Exec & NetBackup have the flexibility to offload VM backup onto an existing backup server, instead of burdening the hypervisor.  It also means that users have the option of deploying a dedicated VM, e.g. a virtual appliance, when a physical backup server is not practical.
  • Application owner can self-serve restore requests: The application owner can request restores directly back to the application.
  • Enhanced security: The agent installed for assisting with VM backup can be managed by the application owner, so you avoid sharing guest OS credentials with the backup administrator.
Best way to store backups-
  1. Back up VMs directly from the storage location (for example SAN, iSCSI, or NAS) without having to install any software agent inside the VMs
  2. Centralize backups for virtual machines
  3. Keep all schedules simple and easy to understand; think through worst-case scenarios and build the backup plan around them
  4. Document every backup policy

Virtual PC Performance Checklist

• Make sure your Host Operating System's disk is defragmented.
  This includes the System Disk (the disk your OS boots off of) as well as the Disk that holds your Virtual Hard Disk File.
• Run Fewer Applications.
I'm continually amazed when folks complain about VM performance and when I get to their desk I see that they are running Outlook. That 200+ megs could be better used by the system. Are you running a VM or checking your email? Consider checking your email on a schedule, or using Outlook Web Access while you work on your VM.
• Enable Hardware Assisted Virtualization
If you've got this on your computer, turn it on. There IS some concern about really sophisticated Trojans that can use this technology for evil, but for me, it's all good as it speeds most Guest Operating Systems (especially non-Microsoft ones) up quite a bit.
• Give your Virtual Machines LESS MEMORY
o I've found that 512 megs is just about the Ideal Amount of memory for 90% of your Virtual Machines. Don't bother trying to give them 1024 megs, it's just not worth the pressure it'll put on the Host Operating System.
• Consider making a custom Windows install for your VMs.
Rather than going to all the effort to REMOVE things, why not create a Windows installation that can be shared across your organization that doesn't include the crap ahead of time. There's a Windows Installation Customizer called nLite that lets you prepare Windows installations so they never include the stuff you don't want. Makes it easier if Solitaire is never installed.
• Make sure the Guest Operating System is defragmented.
Use a disk defragmenter that runs in that "Text Mode" place before Windows really starts up. This allows it to get at files that don't always get defragmented.
• Don't use NTFS Compression on the Virtual Machine Hard Drive File in the Host Operating System.
NTFS Compression doesn't work on files larger than 4 gigs, and can cause corruption.
• Don't Remote Desktop or VNC into Host Operating Systems that are hosting Virtual Machines.
If you're remoting into a machine where THAT machine is running a VM, note that to the Remote Desktop protocol (and VNC) the VM just looks like a big square bitmap that is constantly changing. That guarantees you slow performance. If you can, instead, Remote Desktop into the Virtual Machine itself.
• Make sure you've installed the Virtual Machine Additions (or Tools, or Utilities, or whatever).
Virtual PC and VMware and Parallels all include drivers and tools that improve the performance of your Virtual Machine. They are there for good reason, so make sure you've installed them. Also, if you're running a Virtual Machine created under an older version, like Virtual PC 2004, and you're now running under a newer one, like 2007, pay attention to the upgrade warnings and install the latest drivers and Virtual Machine Additions.

Host CPU spikes at 100 percent when installing a new VM

You experience high CPU usage in the guest operating system. However, when you examine Task Manager, no CPU usage issues are displayed in the host operating system.
There are instances where performance problems or symptoms may arise whose cause lies in the VM environment or configuration. The information and screenshots provided below will help you determine whether the performance problem exists at the virtual-machine level instead of at the Traveler level.
Please check the following areas to determine if the VM is the cause of the performance related issue.
Check for VM Alarms
Click the Alarms tab to determine if there are any alerts available.

This example shows a high CPU alarm condition for the timeframe being investigated. Engage your VMware team as soon as possible to investigate errors such as these.

Check the CPU of the Traveler server by following these steps:
Click the Performance tab.
Click the "Advanced" button

Click the "Chart Options..." link
For CPU, choose "Past week" or the appropriate time frame for investigation.
Under the Counters section, check the boxes for "Usage" and "Ready"

Click Apply / OK
Save the chart (screenshot or click the save icon in the top right)
Notice that one of these (the Usage one) is in percentage of CPU. The key is to look for critical thresholds and peak times. Look for patterns.

Usage = CPU Usage as a percentage during the interval (during the amount of time that was selected)
Ready = Percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU.
A short spike in CPU usage or CPU ready indicates that the system is making the best use of the host resources. However, if both values are constantly high, the hosts are probably overcommitted. Generally, if the CPU usage value for a virtual machine is above 90% and the CPU ready value is above 20%, performance is impacted.
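As a rough sketch of how the Ready counter relates to that 20% threshold: the raw Ready value is reported in milliseconds per sampling interval (real-time charts sample every 20 seconds), so you can convert it to a percentage like this. The 4000 ms figure is just a made-up example value, not from any real chart:

```shell
# Convert a CPU Ready summation value (milliseconds per sample interval)
# into a percentage. Real-time performance charts sample every 20 seconds.
ready_ms=4000    # hypothetical Ready value read from the chart
interval_s=20    # sampling interval in seconds
ready_pct=$(awk -v ms="$ready_ms" -v s="$interval_s" \
  'BEGIN { printf "%.0f", ms / (s * 1000) * 100 }')
echo "CPU Ready: ${ready_pct}%"   # 4000 ms of a 20 s interval is 20%
```

A value like this, sustained above 20% together with Usage above 90%, is the pattern to look for.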
There can be many reasons for a host CPU spike to 100 percent; a hardware compatibility issue is one possible cause.
 
Poor performance when virtual machines reside on local storage on VMware ESXi 5.0

Affected configurations
The system may be any of the following IBM servers:
  • BladeCenter HS23, type 7875, any model
The system is configured with at least one of the following:
  • UpdateXpress Service Pack Installer, any version
  • VMware vSphere Hypervisor 5.0 with IBM Customization Installable, Base Install
This tip is not option specific.
The mpt2sas device driver for the VMware ESXi 5.0 is affected.
  • VMware ESXi 5.0
Solution
Issue the following esxcli commands in the ESXi Shell to remove the ilfu and LSIProvider VIBs (removing VIBs typically requires a host reboot to take effect):

esxcli software vib remove -n ilfu
esxcli software vib remove -n LSIProvider

Wednesday 17 July 2013

How to install VMware Tools



VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system and improves management of the virtual machine. Without VMware Tools installed in your guest operating system, you lose important functionality and performance. Installing VMware Tools eliminates or improves these issues:
  • Low video resolution
  • Inadequate color depth
  • Incorrect display of network speed
  • Restricted movement of the mouse
  • Inability to copy and paste and drag-and-drop files
  • Missing sound
  • Provides the ability to take quiesced snapshots of the guest OS
VMware Tools includes these components:
  • VMware Tools service
  • VMware device drivers
  • VMware user process
  • VMware Tools control panel
Installing VMware Tools

The following are general steps used to start the VMware Tools installation in most VMware products. Certain guest operating systems may require different steps, but these steps work for most operating systems. Links to more detailed steps for different operating systems are included in this article. Make sure to review the VMware documentation for the product you are using.

To install VMware Tools in most VMware products:
  1. Power on the virtual machine.
  2. Log in to the virtual machine using an account with Administrator or root privileges.
  3. Wait for the desktop to load and be ready.
  4. Click Install/Upgrade VMware Tools. There are two places to find this option:
    • Right-click on the running virtual machine object and choose Install/Upgrade VMware Tools.
    • Right-click on the running virtual machine object and click Open Console. In the Console menu click VM and click Install/Upgrade VMware Tools.
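After clicking Install/Upgrade VMware Tools, the installer is presented to the guest as a virtual CD. In a Linux guest, finishing the install typically looks something like the following sketch run as root; the exact tarball name varies by Tools version, and the mount point is just a conventional choice:

```
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
tar -zxvf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
./vmware-install.pl --default
```

In a Windows guest there is nothing like this to do by hand: the mounted virtual CD autoruns a graphical installer, and you just click through it.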