May 2011


FCoE vs. iSCSI – Making the Choice

A well-written article by Stephen Foskett describing the narrowing gap between Fibre Channel and iSCSI as LAN speeds increase and costs fall.

“The notion that Fibre Channel is for data centers and iSCSI is for SMBs and workgroups is outdated. Increases in LAN speeds and the coming of lossless Ethernet position iSCSI as a good fit for the data center. Whether your organization adopts FC or iSCSI depends on many factors like current product set, future application demands, organizational skill set and budget. In this session we will discuss the different conditions where FC or iSCSI are the right fit, why you should use one and when to kick either to the curb.”



Cloud Computing Comparison

Cloud computing can be defined as the delivery of data, software and storage services in which the end user is unaware of the physical location and configuration of the systems that provide them. A comparison can be made with the electricity power grid, where the consumer is mostly unaware of the component devices needed to deliver the service.
Cloud computing has evolved from virtualization, autonomic computing and utility computing, as well as service-oriented architecture.
Here is a comparison with several computing concepts that share characteristics with cloud computing but should not be confused with it:
1. Autonomic computing – a computer system capable of managing itself.
2. Grid computing – a form of distributed parallel computing in which a cluster of loosely coupled, networked computers acts as one virtual supercomputer, working in unison to perform large tasks.
3. Client-server model – a general term for any distributed application that distinguishes between service requesters (clients) and service providers (servers).
4. Mainframe computer – very powerful computers, found mostly in large organizations, that run critical applications, usually bulk data processing such as censuses, consumer and industry statistics, financial transactions and resource planning.
5. Utility computing – the packaging of computing resources such as computation and storage as a metered service, similar to a public utility like electricity.
6. Peer-to-peer – a distributed architecture without central coordination, in which participants act as both suppliers and consumers of resources, unlike the client-server model.
7. Service-oriented computing – computing techniques built around software services. Cloud computing, in turn, relies on services related to computing.
One notable characteristic of cloud computing is that processing and data are dynamic, meaning they are not tied to a fixed location. The model is quite different from those in which processing takes place on known, specified servers rather than “in the cloud”. In other words, all the other concepts act as complements or supplements to cloud computing.
The cloud computing comparisons don’t end there. The system architecture involved in delivering cloud computing consists of multiple cloud components communicating with one another over application programming interfaces, typically through web services on a three-tier architecture. The principle follows that of UNIX, where multiple programs work concurrently over universal interfaces.
The two most significant components of cloud computing architecture are the front end and the back end. The front end is what the user sees: the client computer and the applications, such as a web browser, used to access the cloud. The back end, at the other end of the architecture, is the ‘cloud’ itself, comprising data storage devices, servers and various other computers.
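As a toy illustration of this front end / back end split, here is a minimal Python sketch. All class and method names are hypothetical, and simple in-process calls stand in for the web-service APIs a real cloud would use:

```python
# Toy sketch only: the front end sees an API, never the physical
# location or layout of the back end's storage.

class BackEnd:
    """The 'cloud' side: storage devices and servers behind an API."""

    def __init__(self):
        self._storage = {}  # stands in for data storage devices

    def put(self, key, value):
        self._storage[key] = value

    def get(self, key):
        return self._storage.get(key)


class FrontEnd:
    """The user-facing side (e.g. a browser); it talks only to the API."""

    def __init__(self, backend):
        self._backend = backend

    def save_document(self, name, text):
        self._backend.put(name, text)

    def open_document(self, name):
        return self._backend.get(name)


cloud = BackEnd()
client = FrontEnd(cloud)
client.save_document("notes.txt", "hello cloud")
print(client.open_document("notes.txt"))  # -> hello cloud
```

The point of the sketch is the indirection: the user could not tell, from the front end alone, where or how the data is actually stored.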
Those are some of the cloud computing comparisons.


Operating System Deployment with MS Deployment Toolkit

I just deployed an 8GB image to four HP/Compaq 6510b laptops.
Amazingly, the image write over the LAN to all four machines took under 8 minutes! This was significantly faster than the 20 minutes of earlier attempts. The hardware used was as follows:

  1. 1 Cisco Catalyst 3560G 24 port gigabit network switch
  2. 1 HP / Compaq 8000 Elite Ultra-Lite workstation
    • Intel Core 2 Duo @ 3.00 GHZ
    • 3.46 GB RAM
    • 1 Gbps onboard NIC
    • Hitachi Travelstar Hard Drive (HTS725025a98a364) 2.5″ SFF SATA Hard Drive, 250GB, 7200RPM, 16MB Buffer
  3. 4 HP / Compaq 6510b laptops


The software used was:

  1. Windows XP Service Pack 3 (x86)
  2. Microsoft Deployment Workbench Version 5.1.1642.01
  3. Windows Automated Installation Kit 6.1.7600.16385
  4. 8GB Windows XP SP3 Image, originally ported from a Ghost capture.

Above all, I think the major speed improvement came from using the enterprise-grade Cisco Catalyst 3560G switch.
I used WinPE and the WAIK to convert the original Ghost image to ImageX format and easily added the image file to the Operating Systems node of the Deployment Workbench. I configured the task sequence with the defaults for deploying an operating system, but excluded State Capture, Windows Update (Pre/Post-Application Installation) and Capture Image. Placing the boot media on a USB drive proved highly efficient.
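For reference, ImageX capture and apply command lines look roughly like the ones built in this Python sketch. The drive letters, image path and image name are hypothetical examples, not the exact values from this deployment:

```python
# Sketch: build ImageX command strings of the form documented for the WAIK.
# Paths, names and the /compress setting below are illustrative only.

def imagex_capture(drive, wim_path, name, compress="fast"):
    """Build the ImageX command that captures a drive into a .wim file."""
    return f'imagex /capture {drive} {wim_path} "{name}" /compress {compress}'

def imagex_apply(wim_path, index, drive):
    """Build the ImageX command that applies image <index> to a drive."""
    return f"imagex /apply {wim_path} {index} {drive}"

capture = imagex_capture("C:", r"D:\images\xpsp3.wim", "Windows XP SP3")
apply_cmd = imagex_apply(r"D:\images\xpsp3.wim", 1, "C:")
print(capture)
print(apply_cmd)
```

In practice these commands run from a WinPE boot environment, and the Deployment Workbench task sequence wraps the apply step for you.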
Being very pleased with these results, I will investigate more complicated deployments, as well as scripting unattended post-sysprep operations.


Web Hosting Is for Long-Term Use – Be Cautious When Choosing It

by: Dwsbharat

It is true that web hosting is a long-term commitment, and thus one should be careful when choosing the right company to host a site. Caution here has two meanings. First, one should avoid websites that may carry illegally copyrighted content. The emergence of the internet has brought both benefits and harm to the world; scams such as the Nigerian email scam have grown with the evolving technology, so we should be careful when handling some sites on the internet so that we do not fall prey to them.
On the other hand, web hosting is long-term because it may be your lasting investment, and there are several factors to consider before choosing a company to host your site. First, look at the location of the web hosting company. It is advisable to look for a company within your locality, one that can serve your customers without any problems. Since your aim is to stay with that company for a long time, it is good to look for a web hosting company in your area. You should also check whether the company of your choice can serve all the countries you need to reach. This enables easy access for potential customers all over the world, and hence the long-term business starts to grow.
It is also advisable to look for an affordable host so that you do not spend money that could have been used elsewhere. Avoid wasting resources on high-priced companies when you know very well that a long-term business can start with little capital and yield higher returns as time goes on. The choice of company will also depend on what your site is about; identify the reasons for having that particular site so that you do not regret it later if a loss occurs. Customer support available 24/7 is another factor to consider in a long-term online business, and the support team should communicate in a way that is easy to understand. All these cautions should be taken into consideration when choosing a long-term company to host your site.


The Benefits of Completing a Network Audit

by: Derek Rogers
The benefits of completing a network audit on your computer network are numerous. Not only does it help keep your network in optimal condition by analyzing power consumption, needed equipment upgrades and security issues, but it can also help establish an asset base and project future cash flow needs for equipment and office space planning.
Typically, a network audit keeps you informed of software versions and licenses, helping you detect shortages or plan mass upgrades. Network audits can locate hard drives, network adapters, CPU details, motherboard specifications and peripherals, and detect security issues, such as where antivirus software or firewalls need to be installed.
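To make the inventory idea concrete, here is a minimal sketch using only the Python standard library. It collects a few of the fields mentioned above from the local host; a real audit tool would query many machines remotely (for example via WMI or SNMP), and the field names here are just illustrative:

```python
# Gather a small local inventory record of the kind a network audit compiles.
import platform
import socket

def local_inventory():
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        # platform.processor() can be empty on some systems; fall back
        # to the machine architecture string.
        "cpu": platform.processor() or platform.machine(),
        "architecture": platform.machine(),
    }

record = local_inventory()
for field, value in record.items():
    print(f"{field}: {value}")
```

Run against every host on the network and stored centrally, records like this become the inventory database discussed below.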
One of the chief benefits of completing a network audit is a comprehensive, no-hassle network inventory database that is extremely helpful in determining future hardware and software needs. Another benefit of a network audit is the analysis of security needs, which is especially important for avoiding costly downtime or complete data loss, particularly on large, scattered networks.
Since many networks are built over a period of time by adding offices, computers and software, it can become difficult to determine whether upgrades will be easy and affordable or costly and lengthy if you don’t have a network audit. Cost is often a factor when thinking about upgrades and keeping up with new technology, and a network audit can give a full picture of what your network will need to stay efficient.
Certain businesses are subject to government regulations that require them to protect secured and private information, such as credit card numbers, or to encrypt their emails, for example. Depending on your business, a security breach could cost hundreds of thousands of dollars if a hacker gets hold of confidential information that a customer entrusted to you for a transaction.
Through a network audit, any possible areas where breaches could occur can be uncovered and dealt with. In a time where so much business is transacted over the Internet, security is of utmost importance, and the benefits of a network audit can help ensure that you are in compliance with protecting information, especially if you transmit financial transactions electronically.
A network audit can analyze physical networks, such as routers, telecom equipment, network switches and ports, as well as audit capacity management, configurations and network database populations in the case of remote users. A network audit should be performed by an expert in the field who has in-depth telecom knowledge and comprehensive network audit experience.
This is not a field for amateurs, although there are some programs on the market that are do-it-yourself network audit software packages, which might be fine for a small office or home network audit.
If you have a number of computers or locations, a more comprehensive network audit would be needed to get sound advice on assets, upgrades, future needs and technological advancement options, in addition to security needs.
The benefits of completing a network audit are many, and an audit is necessary to assess future expenses and security weaknesses for any business that handles sensitive information electronically.


Storage Area Networks For Dummies

by: William Hauselberg

A Storage Area Network (SAN) is a collection of storage devices tied together via a high-speed network to create one large storage resource that can be accessed by multiple servers. SANs are typically used to store, share and access enterprise data more securely and efficiently than traditional dedicated storage models. With dedicated storage, each server is equipped with, and uses, its own attached storage. A SAN, by contrast, acts as a common, shared storage resource for multiple servers. The storage devices in a SAN can include disk arrays, tapes and optical jukeboxes, all of which can be accessed and shared by the attached servers.
How a Storage Area Network Works
In a storage area network, the pooled storage resources and the servers that access them are separated by a layer of management software. The software allows IT administrators to centralize and manage multiple storage resources as if they were one consolidated resource. Because of the software, each server sees just one storage device, while each storage device in the SAN sees just one server. Data can be moved at will to any of the devices on a SAN.
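The pooling idea can be sketched with a toy model: the management layer presents many devices as one consolidated resource and places data on whichever device has room. This is purely illustrative, with hypothetical names; real SAN software also handles zoning, multipathing, replication and much more:

```python
# Toy model of a management layer pooling several storage devices.

class StoragePool:
    def __init__(self):
        self.devices = {}  # device name -> [capacity_gb, used_gb]

    def add_device(self, name, capacity_gb):
        self.devices[name] = [capacity_gb, 0]

    @property
    def free_gb(self):
        # Servers see a single consolidated total, not individual devices.
        return sum(cap - used for cap, used in self.devices.values())

    def allocate(self, size_gb):
        """Place a volume on any device with enough free space."""
        for name, (cap, used) in self.devices.items():
            if cap - used >= size_gb:
                self.devices[name][1] += size_gb
                return name
        raise RuntimeError("pool exhausted")

pool = StoragePool()
pool.add_device("array-1", 100)
pool.add_device("array-2", 50)
pool.allocate(80)          # lands on array-1
print(pool.free_gb)        # -> 70
```

Note how the caller asks the pool, not a specific device, for space; that indirection is what lets spare capacity on any device be used, as discussed below.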
Factors Driving SAN Adoption
A variety of factors have been driving enterprise adoption of SAN architectures over the past few years. One of the biggest has been increased cost-efficiency. Storage area networks allow companies to optimize the utilization of their storage resources. With an attached storage disk, any extra capacity on that disk would remain unused because no other server could use it. With a SAN, on the other hand, all storage resources are pooled, resulting in better use of existing capacity. Since SANs allow data to be moved freely, enterprises can also move old and outdated data to inexpensive storage devices, freeing up the more costly devices for more important data.
Storage area networks make it easier for companies to expand their storage capacity, add resources on the fly, allot additional resources to an application, and maintain systems far more easily than traditional storage technologies. In addition, SANs allow companies to swap out a disk or tape-drive more easily and enable faster data replication. Importantly, SAN architectures allow storage devices from multiple vendors to be tied together into a common shared storage pool. Another advantage of SAN architectures is that they allow the storage network to be located at long distances away from the server hardware, thereby enabling greater disaster recoverability.
SAN Security a Big Concern
Despite such benefits, there are some caveats associated with implementing SAN architectures. Storage area network security is by far the biggest issue that companies need to deal with when moving to a SAN storage model. With a SAN, companies are literally putting all of their most important data in one central resource. As a result, the need for security controls such as firewalls, intrusion detection systems, SAN encryption and network monitoring is greatly heightened.