Cloud computing can be defined as the delivery of data, software, and storage services where the end user is unaware of the physical location and configuration of the systems that deliver them. A useful comparison is the electricity grid, where the consumer is largely unaware of the component devices needed to provide the service.
Cloud computing has evolved from virtualization, autonomic computing, utility computing, and service-oriented architecture.
Here are some cloud computing comparisons: concepts that share characteristics with cloud computing but should not be confused with it:
1. Autonomic computing - a computer system capable of managing itself.
2. Grid computing - a form of distributed parallel computing in which a cluster of loosely coupled, networked computers acts as a single "super and virtual" computer, working in unison to perform large tasks.
3. Client-server model - a general term for any distributed application that distinguishes between service requesters (clients) and service providers (servers).
4. Mainframe computer - a very powerful computer, found mostly in large organizations, used for critical applications and bulk data processing such as censuses, consumer and industry statistics, financial transactions, and resource planning.
5. Utility computing - the packaging of computing resources, such as computation and storage, as a metered service similar to a public utility like electricity.
6. Peer-to-peer - a distributed architecture without central coordination, in which participants act as both suppliers and consumers of resources, unlike the client-server model.
7. Service-oriented computing - techniques modeled around software services. Cloud computing, in turn, relies on services that relate to computing.
One notable characteristic of cloud computing is that processing and data are dynamic, meaning they are not tied to a static location. This model differs entirely from ones in which processing takes place on known, specified servers rather than "in the cloud." In other words, all of the concepts above act as complements or supplements to cloud computing.
The cloud computing comparisons don't end there. The system architecture involved in delivering cloud computing consists of multiple cloud components that communicate with one another over application programming interfaces, typically through web services in a three-tier architecture. The principle follows that of UNIX, where multiple programs work concurrently over universal interfaces.
The front end and back end are the two most significant components of cloud computing architecture. The front end is what the user sees: the client computer and the applications, such as a web browser or other interface, used to access the cloud. The back end, at the other end of the architecture, is the "cloud" itself, comprising data storage devices, servers, and various computers.
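As a toy illustration of this front-end/back-end split (all class and method names here are hypothetical, not from any real cloud API), the sketch below puts a narrow interface between a client-facing front end and the storage back end, so the front end never needs to know which physical device holds the data:

```python
# Hypothetical sketch of the front-end / back-end split in a cloud
# architecture: the front end talks to storage only through a narrow
# interface and never learns which physical device holds the data.

class StorageBackEnd:
    """The 'cloud': pooled storage devices hidden behind one interface."""
    def __init__(self):
        self._devices = {}  # device name -> {key: value}

    def put(self, key, value):
        # Place data on some device; the caller never learns which one.
        device = self._devices.setdefault("disk-%d" % (hash(key) % 3), {})
        device[key] = value

    def get(self, key):
        for device in self._devices.values():
            if key in device:
                return device[key]
        raise KeyError(key)

class FrontEnd:
    """What the user sees: an application used to access the cloud."""
    def __init__(self, backend):
        self._backend = backend

    def save_document(self, name, text):
        self._backend.put(name, text)

    def open_document(self, name):
        return self._backend.get(name)

cloud = StorageBackEnd()
app = FrontEnd(cloud)
app.save_document("notes.txt", "hello cloud")
print(app.open_document("notes.txt"))
```

The point of the narrow `put`/`get` interface is that storage devices can be added, removed, or relocated inside the back end without the front end changing at all, which is exactly the location transparency the article describes.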
Those are some of the cloud computing comparisons.
I just deployed an 8GB image to four HP/Compaq 6510b laptops.
Amazingly, writing the image over the LAN to all four machines took under 8 minutes! This was significantly faster than the 20 minutes of earlier attempts. The specs of the setup are as follows:
- 1 Cisco Catalyst 3560G 24 port gigabit network switch
- 1 HP / Compaq 8000 Elite Ultra-Lite workstation
- Intel Core 2 Duo @ 3.00 GHZ
- 3.46 GB RAM
- 1GB onboard NIC
- Hitachi Travelstar Hard Drive (HTS725025a98a364) 2.5″ SFF SATA Hard Drive, 250GB, 7200RPM, 16MB Buffer
- 4 HP / Compaq 6510b laptops
- Windows XP Service Pack 3 (x86)
- Microsoft Deployment Workbench Version 5.1.1642.01
- Windows Automated Installation Kit 6.1.7600.16385
- 8GB Windows XP SP3 Image, originally ported from a Ghost capture.
I think the major speed improvement came from using the enterprise-grade Cisco Catalyst 3560G switch.
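As a sanity check on those numbers, the back-of-the-envelope arithmetic below (assuming 8 GB means 8 × 1024 MiB and that all four writes ran concurrently) shows the deployment sat comfortably within a gigabit link's capacity:

```python
# Back-of-the-envelope throughput check for the deployment above.
# Assumes 8 GB = 8 * 1024 MiB and that all four writes ran concurrently.

image_mib = 8 * 1024       # image size in MiB
seconds = 8 * 60           # "under 8 minutes"
clients = 4

per_client = image_mib / seconds        # MiB/s seen by one laptop
aggregate = per_client * clients        # MiB/s leaving the server
gigabit_ceiling = 1e9 / 8 / 2**20       # ~119 MiB/s theoretical max

print(f"per client: {per_client:.1f} MiB/s")             # ~17.1
print(f"aggregate:  {aggregate:.1f} MiB/s")              # ~68.3
print(f"link usage: {aggregate / gigabit_ceiling:.0%}")  # ~57%
```

At roughly 57% of the gigabit ceiling there is headroom left, which is consistent with the switch upgrade, rather than the link itself, being the bottleneck fix.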
I used WinPE and the WAIK to convert the original Ghost image to ImageX and easily added the image file to the Operating Systems node of the Deployment Workbench. I configured the Task Sequence with the defaults for deploying an operating system, but excluded State Capture, Windows Update (Pre/Post-Application Installation), and Capture Image. Placing the boot media on a USB drive proved highly efficient.
Being very pleased with these results, I will investigate more complicated deployments, as well as scripting unattended post-sysprep operations.
Web hosting is a long-term commitment, so one should be careful when choosing the right company to host a site. Caution here has two meanings. First, one should avoid sites that carry illegally copyrighted content. The rise of the internet has brought both benefits and harm to the world; scams, such as the Nigerian email scam, have grown alongside the evolving technology, so we should be careful when dealing with sites on the internet so that we do not fall prey to them.
Second, web hosting should be chosen cautiously simply because it may be a long-lasting investment. There are several factors to consider before choosing a hosting company. First, look at the company's location: it is advisable to find one within your locality that can serve your customers without problems. Since your aim is to stay with the company for a long time, a provider in your area is preferable. You should also check whether the company can deliver service to all the countries you need to serve. This gives potential customers all over the world easy access, and the long-term business starts to boom.
It is advisable to look for an affordable host so that you do not spend money that could be used elsewhere. Avoid wasting resources on high-priced companies when you know very well that a long-term business lets you invest little capital and earn higher returns over time. The right host also depends on what your site is about: identify the reasons for having that particular site so that you do not later regret any loss. Customer support available 24/7 is another factor to consider for a long-term online business; the support team should communicate in a way that is easy to understand. All of these cautions should be considered when choosing a long-term company to host your site.
by: Derek Rogers
The benefits of completing a network audit on your computer network are numerous. Not only does it help keep your computer network at optimal condition through analyzing power consumption, needed equipment upgrades or security issues, but can help establish an asset base and future cash flow needs for equipment and office space planning.
A network audit can keep you informed of software versions and licenses, helping you detect shortages or plan mass upgrades. It can locate hard drives, network adapters, CPU details, motherboard specifications, and peripherals, and detect security issues, such as where antivirus software or firewalls need to be installed.
One of the chief benefits of a network audit is a comprehensive, no-hassle network inventory database that is extremely helpful in determining future hardware and software needs. Another benefit is an analysis of security needs, which is especially important for preventing costly downtime or complete data loss, particularly on large, widely distributed networks.
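A minimal sketch of the inventory side of such an audit, using only Python's standard library, is shown below. A real audit tool would collect far more (installed software, licenses, peripherals, open ports), but the shape of an inventory record is the same:

```python
# Minimal, stdlib-only sketch of a network-audit inventory record for
# the local machine; a real audit tool would collect far more detail.
import json
import platform
import socket

def inventory_record():
    """Collect a few basic facts about this host."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "architecture": platform.machine(),
        "python": platform.python_version(),
    }

record = inventory_record()
print(json.dumps(record, indent=2))
```

Running a collector like this on every host and aggregating the JSON records is essentially how the "network inventory database" described above gets built.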
Since many networks are built over a period of time by adding offices, computers, and software, it can become difficult to determine whether upgrades will be easy and affordable or costly and lengthy if you don't have a network audit. Cost is often a factor when considering upgrades and keeping up with new technology, and a network audit can give a full picture of future needs, keeping your network efficient.
Certain businesses are subject to government regulations that require them to protect secure and private information, such as credit card numbers, or to encrypt their emails, for example. Depending on your business, a security breach could cost hundreds of thousands of dollars if a hacker gets hold of confidential information entrusted to you by a customer for a transaction.
Through a network audit, any areas where breaches could occur can be uncovered and dealt with. In a time when so much business is transacted over the Internet, security is of utmost importance, and a network audit can help ensure that you comply with information-protection requirements, especially if you transmit financial transactions electronically.
A network audit can analyze physical networks, such as routers, telecom equipment, network switches, and ports, as well as capacity management, configurations, and network database populations in the case of remote users. A network audit should be performed by an expert in the field who has in-depth telecom knowledge and comprehensive network audit experience.
This is not a field for amateurs, although there are do-it-yourself network audit software packages on the market that might be fine for a small office or home network.
If you have a number of computers or locations, a more comprehensive network audit would be needed to get sound advice on assets, upgrades, future needs and technological advancement options, in addition to security needs.
The benefits of completing a network audit are many, and an audit is essential for assessing future expenses and security weaknesses for businesses that handle sensitive information electronically.
A Storage Area Network (SAN) is a collection of storage devices that are tied together via a high-speed network to create one large storage resource that can be accessed by multiple servers. SANs are typically used to store, share and access enterprise data in a more secure and efficient manner compared to traditional dedicated storage models. With dedicated storage, each server is equipped with, and uses an attached storage capability. A SAN meanwhile basically acts as a common, shared storage resource for multiple servers. The storage devices in a SAN can include disk arrays, tapes and optical jukeboxes all of which can be accessed and shared by all of the attached servers.
How a Storage Area Network Works
In a storage area network, the pooled storage resources and the servers that access them are separated by a layer of management software. The software allows IT administrators to centralize and manage multiple storage resources as if they were one consolidated resource. Because of the software, each server sees just one storage device, while each storage device in the SAN sees just one server. Data can be moved at will to any of the devices on a SAN.
Factors Driving SAN Adoption
A variety of factors have been driving enterprise adoption of SAN architectures over the past few years. One of the biggest has been increased cost-efficiency. Storage area networks allow companies to optimize the utilization of their storage resources. With an attached storage disk, any extra capacity on that disk remains unused because no other server can use it. With a SAN, on the other hand, all storage resources are pooled, resulting in better usage of existing capacity. Since SANs allow data to be moved freely, enterprises can also move old and outdated data to inexpensive storage devices while freeing up the more costly devices for more important data.
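The utilization argument can be made concrete with a small calculation (the capacities and usage numbers below are purely illustrative): with dedicated disks, free space on one server is stranded and unusable by the others, while a pooled SAN exposes all free capacity to every server.

```python
# Illustrative comparison of stranded capacity: dedicated per-server
# disks vs one pooled SAN. All numbers are made up for the example.

capacity_per_server_gb = 1000
used_gb = [900, 200, 300, 100]   # four servers' actual usage

# Dedicated model: each server can only use its own disk's free space.
free_per_server = [capacity_per_server_gb - u for u in used_gb]
# Server 0 is nearly full even though the others have plenty free:
stranded_gb = sum(free_per_server[1:])   # free space server 0 cannot touch

# Pooled SAN model: every server can draw on all remaining capacity.
pool_free_gb = len(used_gb) * capacity_per_server_gb - sum(used_gb)

print(f"dedicated: server 0 has {free_per_server[0]} GB left, "
      f"{stranded_gb} GB stranded on other servers")
print(f"pooled SAN: every server can draw on {pool_free_gb} GB")
```

In this toy scenario the nearly full server has only 100 GB of headroom under the dedicated model, while 2400 GB sits idle elsewhere; pooling makes the full 2500 GB of free capacity available to all servers.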
Storage area networks make it easier for companies to expand their storage capacity, add resources on the fly, allot additional resources to an application, and maintain systems far more easily than traditional storage technologies. In addition, SANs allow companies to swap out a disk or tape-drive more easily and enable faster data replication. Importantly, SAN architectures allow storage devices from multiple vendors to be tied together into a common shared storage pool. Another advantage of SAN architectures is that they allow the storage network to be located at long distances away from the server hardware, thereby enabling greater disaster recoverability.
SAN Security a Big Concern
Despite such benefits, there are some caveats associated with implementing SAN architectures. Storage area network security is by far the biggest issue companies need to deal with when moving to a SAN storage model. With a SAN, companies are literally putting all of their most important data in one central resource. As a result, the need for security controls such as firewalls, intrusion detection systems, SAN encryption, and network monitoring is greatly heightened.
In the ever-evolving world of business, companies are often challenged to keep up with service level agreements (SLAs) while their budgets continue to shrink. These challenges often translate into large capital requirements, both human and monetary, simply to keep pace. One piece of technology that has risen to this challenge is virtualization. VMware, a leading virtualization technology in the market, tackles these issues in a number of ways.
Efficiency, Control, and Scalability
Traditionally, most companies had a "one server, one application" philosophy. With VMware, however, multiple applications can run on one server, which translates into a high level of efficiency. In addition, virtualization means there are fewer pieces of hardware to manage, so keeping control over the VMware deployment is easier. Another significant advantage of virtualization is flexibility, or scalability: as the business grows, there is no need to buy physical upgrades, just a simple, seamless scaling of the existing cloud infrastructure.
Costs: Real and Incidental
This leads us to another critical aspect of IT infrastructure: cost. Two costs driven up by the high requirements of traditional server configurations are CapEx (Capital Expenditure) and OpEx (Operational Expenditure). In traditional models, buying new servers, setting up applications, migrating data, and so on required large amounts of capital expenditure. This is not the case with VMware.
Virtualization requires minimal expenditure on hardware because the major infrastructure is located in the cloud, slashing CapEx by close to 80%. Next, running and maintaining traditional server configurations required large numbers of IT personnel and thousands of work hours per year spent on maintenance alone. VMware reduces these OpEx costs by around 80% as well.
One other advantage worth mentioning is the ability of companies using VMware technology to go green. Because resources are pooled within the cloud, companies around the world do not need to set up huge server resources individually; they simply use the available cloud resources on demand. This significantly lowers each company's energy footprint.
Virtualization as a technology and as a service has arrived at a critical time, when the economy is in dire straits and companies the world over are turning to innovation to sustain their growth. Together with other great innovations such as cloud computing and SaaS (Software as a Service), companies may rest assured that, as far as their IT needs are concerned, they are adequately covered.
File and data recovery can be an emergency situation. Less important than whether the device is repairable is whether your important data, documents, and family pictures can be saved. Any data recovery technician and associated lab should be able to demonstrate a consistent, reliable, and, most of all, industry-trusted reputation.
Faulty Hard Drive
There are two main reasons your data may be inaccessible: mechanical and non-mechanical. Mechanical failures are physical problems with the hard drive's heads, platters, or even its main board. Here there are two main services to perform: first, repair the drive and try to read its contents; second, transfer the recovered data to a new drive. Non-mechanical issues usually mean data has been corrupted and/or there are bad sectors on the drive; other drive problems can also lead to data access issues. For this repair type it is usual to attempt to bring the drive to a readable level, then start the recovery process to obtain the drive's data.
Backup Software Tools
A current backup can be one of the most time- and money-saving assets when a data loss event happens. A recent backup can mitigate the need for file recovery services, often saving hundreds and sometimes thousands of dollars in lost productivity and valuable documents. Many cloud (online) backup tools exist, and even a simple USB portable hard drive can help with some additional tools. If important documents change constantly, keeping a recent backup may not be convenient, so consider setting up an email account (such as Hotmail) and emailing dated versions of the documents to it. This gives you a dated archive as well as a current version in case of a hard drive failure, which can be ideal.
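The same dated-archive idea works locally too. Below is a minimal sketch, using only Python's standard library (the file names are hypothetical), that copies a document to a timestamped backup so every version survives:

```python
# Minimal sketch of the dated-archive idea: copy a document to a
# timestamped backup file so every version survives. Stdlib only.
import shutil
from datetime import datetime
from pathlib import Path

def dated_backup(source, backup_dir):
    """Copy `source` into `backup_dir` under a timestamped name."""
    source = Path(source)
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = backup_dir / f"{source.stem}-{stamp}{source.suffix}"
    shutil.copy2(source, target)   # copies contents and metadata
    return target

# Example: back up a report before editing it further.
Path("report.txt").write_text("quarterly numbers")
copy = dated_backup("report.txt", "backups")
print(copy)
```

Pointing `backup_dir` at a USB drive or a cloud-synced folder gives you the off-machine copy that actually protects against hard drive failure.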
Estimated Recovery Prices
Generally, a data recovery lab should be able to give you an estimate of costs, or at least an idea of what recovery could cost depending on the issues involved. The cost is usually around $100-$250 for low-level recoveries, $250-$700 for medium and upper-level recoveries, and $700 or more for extensive damage and likely physical malfunctions of the hard drive itself. As always, keep a backup to save yourself this expense and stress, and if you do have such a problem, call around to find a trusted repair centre in your area.
What is a Trojan virus?
A Trojan is a program that looks harmless and useful but actually contains code that can destroy data or install adware or spyware on your computer. Trojans can arrive through email attachments, downloaded programs, or even operating system vulnerabilities on your computer.
A recent tactic hackers use is to hide the malicious code in pictures. Never download anything you do not recognize. Unlike regular computer viruses, Trojans do not replicate themselves.
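One concrete defence when you do download something is to verify it against the checksum the vendor publishes before running it. A short sketch using Python's `hashlib` (the "download" and "published" digest here are just sample bytes, not a real file):

```python
# Sketch of verifying a download against a published SHA-256 checksum
# before trusting it; the 'download' here is just sample bytes.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

downloaded = b"harmless-looking installer"
# In practice this digest comes from the vendor's download page:
published = sha256_of(b"harmless-looking installer")

if sha256_of(downloaded) == published:
    print("checksum OK")
else:
    print("checksum mismatch: do not run this file")
```

A mismatched digest means the file is not the one the vendor shipped, which is exactly how a Trojan smuggled into a legitimate-looking download gets caught.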
What does a Trojan virus do to your computer?
Trojans can do serious damage to your computer, or worse: hackers can read files and personal information from your computer and steal your identity. They can also install unwanted spyware and adware and deliver unsolicited pop-ups and advertising, all without your consent.
So how can you get rid of them?
Most, if not all, antivirus programs will detect and remove Trojans, viruses, and other unwanted programs from your computer automatically. Email attachments may need to be scanned individually, though there are programs you can download that will do this automatically. Plenty of antivirus software, both free and paid, is available; paid programs usually have more features, such as a built-in registry cleaner.
Whatever your choice, make sure you have installed the latest version, perform system scans periodically, and keep the software updated to keep your computer protected.
One of the most evil and insidious things ever invented is the Trojan. What kind of person thought of this is beyond comprehension. A Trojan infiltrates your entire computer system and degrades its ability to browse the Internet quickly; in the worst cases it can crash the entire system and permanently delete data stored on your hard drive.
There are some steps you can take to remove Trojans while saving your data and installed software. If you take your computer to a repair shop or the Geek Squad, they will typically wipe the entire hard drive after saving your valuable documents, then reinstall the operating system to make your computer virus-free.
The problem with this method is that it is expensive and time consuming, and you can never recover the installed applications. Unless you have the original installation discs and the registration numbers to go with them, you have to buy new programs. That can run to hundreds or even thousands of dollars if you have many expensive programs installed, such as Photoshop, Dreamweaver, InDesign, ProSeries, and others.
You can also simply download a virus cleaner and run it. That will solve most problems and perform the desired Trojan removal, even if your computer crashes or cannot connect to the Internet, which happens a lot with Trojan infections.