Tuesday, March 26, 2013

Wireless Networks (Nostalgia Version)

The following is another article from my old website.  This one's about wireless networking, from 2007.  The information is a little dated but for the most part is still valid.  Hard to believe 802.11 is still so important to our mobile connectivity.  Most iPads would be worthless without it.

Enjoy!


Wireless Networking


Ubiquitous is a favorite description in many articles on the subject, and it's almost true.  Wireless connectivity has rapidly become a standard feature of many of our electronic devices.  Cellular phones have offered some form of internet access in addition to voice functionality for years, wireless connectivity in your laptop is a given, and even waiting for a table in a restaurant can expose you to wireless devices in the form of those vibrating coasters that alert you when your table is ready.  There are many ways to interact with your world without wires.

It's likely the form of wireless access you're most familiar with is Wi-Fi.  Wi-Fi (often said to stand for Wireless Fidelity) refers to a set of standards for wireless equipment meant to serve small private areas.  If you've ever been in a coffee shop that offered a "wireless hotspot," this form of wireless connectivity is likely what was on offer.

It may be confusing to think of a private network operating in a public space but the distinction has more to do with the physical range of coverage available.  Many Wi-Fi devices have a limited range of less than 500 feet from the wireless access point.

Wireless networks can be designed in 3 basic configurations.  The most common is referred to as an "Infrastructure" or single point of access configuration, and it utilizes a centrally located "access point" that all wireless clients connect to for wireless services.  The second type, called an "Ad-Hoc" network, is generally found where a centralized access point isn't needed.  Simple file sharing between two laptops with wireless cards, or sharing the internet connection of an internet-connected computer that has a wireless adapter, would fall into this category.  The third type is commonly found in business locations with multiple access points.

This configuration is known as a "Multiple Access Point" or floating access point configuration.  It is much like the "Infrastructure" configuration but, as the name implies, employs multiple access points that can "hand off" a wireless client from one access point's coverage area to another as the client moves.  This is similar to a cell phone call made in a moving vehicle: as the phone moves from a weaker signal area into a stronger one, the stronger "tower" (our access point in the Wi-Fi context) picks up the signal and allows the communication to continue uninterrupted.
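To make the hand-off idea concrete, here is a small, purely illustrative Python sketch of the decision a roaming client makes: it simply associates with whichever access point currently reports the strongest signal.  The access point names and signal readings are made up for the example; real hand-off logic in Wi-Fi gear is considerably more involved.

```python
# Toy illustration of the hand-off decision in a multiple access point
# network: the client associates with whichever access point currently
# reports the strongest signal. Names and readings are made up.

def pick_access_point(signal_readings):
    """Return the access point with the strongest signal.

    signal_readings maps access point name -> signal strength in dBm
    (values closer to zero are stronger).
    """
    return max(signal_readings, key=signal_readings.get)

# A roaming client re-evaluates as it moves and signal strengths change.
readings = {"AP-Lobby": -72, "AP-Conference": -48, "AP-Warehouse": -85}
print(pick_access_point(readings))  # AP-Conference
```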

A wireless access point is a device that provides a kind of bridge for wireless Wi-Fi devices to connect to wired networks.  In this way many wireless client devices can access wired network devices without needing to be cabled to them.  Wireless client devices use wireless adapters that are matched to the type of wireless access points they're meant to connect to.  The reason that range is so limited has to do with the radio frequencies that Wi-Fi operates in.  Generally, Wi-Fi operates in the 2.4GHz or 5GHz radio frequency range.
           
These are very high radio frequency ranges which offer relatively high speed and data-carrying capability (or bandwidth) but cannot traverse long distances.  A basic rule of radio is that the higher the frequency, the more power is needed to drive the signal the same distance, so range decreases.  This is much akin to why low-frequency AM radio can be heard hundreds of miles from its point of broadcast while FM radio, broadcast at much higher frequencies and often with more power, covers only around 10% of that distance.  Generally speaking, to make Wi-Fi devices reach farther you'd either have to increase the power of the radio transmitter to an impractical level or create a very large, cumbersome antenna.
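For the more technically inclined, the frequency-versus-range trade-off can be illustrated with the standard free-space path loss formula.  The sketch below is illustrative only: it assumes an unobstructed path, ignores antennas, walls and interference, and the example frequencies and distance are my own choices for comparison.

```python
# Illustrative only: free-space path loss (FSPL) grows with both distance
# and frequency, which is why higher-frequency signals need more power
# (or bigger antennas) to cover the same distance.
import math

def fspl_db(distance_km, frequency_mhz):
    """Free-space path loss in dB (standard textbook formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

# Same 1 km path, different frequencies:
print(round(fspl_db(1.0, 1.0), 1))     # ~32.4 dB at 1 MHz (AM broadcast band)
print(round(fspl_db(1.0, 100.0), 1))   # ~72.4 dB at 100 MHz (FM broadcast band)
print(round(fspl_db(1.0, 2400.0), 1))  # ~100.0 dB at 2.4 GHz (Wi-Fi)
```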

The Standards


There are currently three viable wireless options available for Wi-Fi.  All of these are variations on the IEEE 802.11 specification and vary in speed and/or the frequency range used.  Connection speeds are expressed in Megabits or Mbits, the number of millions of bits of data that can be transmitted in one second.  This should not be confused with Megabytes, which is commonly used to specify storage capacity for devices such as hard disks.  A simple way to remember the difference between Megabits and Megabytes is to keep in mind that one byte is comprised of 8 bits.  That means a quantity expressed in Megabits is always 1/8th as large as the same number expressed in Megabytes; a Megabit is always going to be a fraction of a Megabyte.  When we are working with wireless devices using the 802.11 standard we will always be talking about Megabits for the foreseeable future.
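Here's that arithmetic in a short snippet, using the 11Mbit and 54Mbit figures discussed below as examples:

```python
# Quick arithmetic behind the Megabit/Megabyte distinction: one byte is
# 8 bits, so a connection speed quoted in Megabits per second moves only
# one-eighth that many Megabytes per second.
def megabits_to_megabytes(mbits):
    return mbits / 8.0

for speed in (11, 54):  # the 802.11B and 802.11G maximums discussed below
    print(f"{speed} Mbit/s is about {megabits_to_megabytes(speed):.2f} MB/s")
# 11 Mbit/s is about 1.38 MB/s
# 54 Mbit/s is about 6.75 MB/s
```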

Up front we should mention that quoted wireless speeds are theoretical maximums under ideal conditions.  Wireless connection quality can vary due to distance, obstructions in the line of sight to the wireless signal source, and interference from other wireless devices such as cordless phones.  Most newer 802.11 devices will automatically adjust their speed downward when signal quality degrades in order to maintain a reliable connection between wireless devices.  For example, a wireless access point 200 feet away from a wireless client in an open room will likely provide full signal strength at maximum speed.  Those same devices placed the same distance apart but in separate rooms will suffer a loss of signal strength and possibly connection speed.  Many factors go into the strength of a Wi-Fi signal, but the most critical is to locate your wireless sources (such as access points) in an area that is high and relatively free of obstructions between them and client locations.  Suffice it to say, putting a wireless access point under your desk is probably not the best location for optimal coverage.

The most common and lowest-cost Wi-Fi option, due to its length of time on the market, is 802.11B.
802.11B offers up to 11Mbits (Mb = Megabits) of speed and operates in the 2.4GHz frequency range.  This was the first commercially viable wireless standard and is still very popular for economical wireless networking.  Its effective range is roughly 300 feet from the access point or wireless peer device.
802.11B devices are currently the most widely supported of the three popular standards because newer 2.4GHz equipment, such as 802.11G, is designed to interoperate with them.

Next up in the standards is 802.11G.  802.11G offers up to 54Mbits of speed and also operates in the 2.4GHz frequency range.  It is the newer of the two more popular standards.  Along with speed, 802.11G introduced enhanced security features over 802.11B.  As mentioned earlier, most "G" devices can communicate with "B" devices, but there can be a performance penalty when doing so.  The reason is that any "G" device will drop its speed back to "B" levels (11Mbits) in order to allow communication.  This can cause other connected "G" devices to drop their speed back as well, eliminating much of the speed benefit of a "G" device.  Some manufacturers have devised methods to minimize this effect, but as a rule it's best not to mix "B" and "G" devices if possible.

One drawback to "B" and "G" devices concerns interference.  The 2.4GHz frequency range is commonly used by devices other than wireless network equipment, and "B" and "G" gear can suffer interference that effectively decreases performance.  Consideration should be given to locating this equipment away from 2.4GHz cordless phones and microwave ovens.

The last standard is not as popular due to issues with range and relatively high cost.  This standard is known as 802.11A and is also a 54Mbit connection, but it operates in the 5GHz range.  It was actually developed at the same time as 802.11B to address the speed limitations and interference issues present in 802.11B's 2.4GHz range.  802.11A is hindered by a short range of roughly 100 feet on average, but it tends to suffer less of the speed drop-off that can occur on 802.11B networks.  Because of its different operating frequency range, 802.11A devices do not interoperate with 802.11B or 802.11G devices.  This lack of interoperability is most likely the reason for the lesser popularity of the 802.11A standard.  802.11A equipment is found primarily in corporate environments where speed and reduced interference are more of a factor than distance.
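For quick reference, here is a compact summary of the figures cited above as a small snippet.  Note that a range for 802.11G isn't stated explicitly above; I've assumed it is comparable to 802.11B since both operate at 2.4GHz.

```python
# Summary of the figures cited above for the three 802.11 standards
# discussed (speeds are theoretical maximums; ranges are rough averages).
# The 802.11G range is an assumption, taken as comparable to 802.11B.
WIFI_STANDARDS = {
    "802.11B": {"max_speed_mbit": 11, "band_ghz": 2.4, "approx_range_ft": 300},
    "802.11G": {"max_speed_mbit": 54, "band_ghz": 2.4, "approx_range_ft": 300},
    "802.11A": {"max_speed_mbit": 54, "band_ghz": 5.0, "approx_range_ft": 100},
}

for name, specs in WIFI_STANDARDS.items():
    print(f"{name}: {specs['max_speed_mbit']} Mbit/s, "
          f"{specs['band_ghz']} GHz, ~{specs['approx_range_ft']} ft")
```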

When purchasing wireless networking equipment it’s best not to mix devices of different standards.  Often for a small network it’s advisable to stay with the same manufacturer as well.  Many manufacturers have optimized their product offerings to offer enhanced speed, security and administrative features when products are within the same family.  Match components as closely as availability and budget allow.

Security

Another consideration with wireless networks concerns the security of the wireless conversation between host and client.  To some this may be a surprise, but security is one of the most important aspects of wireless networking.  Why? Simply put, when you access your wired resources via a wireless network you are connecting over the air.  A wired network in the typical small office has limited access to the outside world.  Generally, access from the internet or other public networks is controlled via routing and/or firewall devices that employ complex mechanisms to control access.  Access from clients on the wired network can be controlled by addressing schemes, passwords and enhanced security settings set at the network server level.

Wireless connectivity by default has no security mechanisms enabled.  Out of the box, a wireless access point and wireless card will successfully connect to each other and create a connection "over the air" that can be easily compromised.  Perhaps you've heard the term "war driver" used when discussing wireless networking.  This term refers to a malicious or opportunistic individual who attempts to make unauthorized use of wireless (and by extension wired) network resources.  Most home computer networks are left relatively unsecured, mostly for reasons of convenience.  Many small offices aren't configured any better, and the introduction of a wireless access point can provide easy access to sensitive data by unauthorized individuals.

The reason wireless equipment is delivered in this configuration is to ensure ease of installation for end users.  Security settings can be difficult to implement and misconfiguration can cause issues with connectivity to wireless clients.  Since wireless manufacturers are more concerned with selling product than wireless security it’s no surprise that they keep configurations simple.

Wireless security settings can be complex and mobile users often find themselves reconfiguring them to match the settings of the wireless resource they’re attaching to.

There are three basic types of wireless security mechanisms in common use today.  These are WEP or Wired Equivalent Privacy, WPA or Wi-Fi Protected Access, and WPA-PSK, which is the same as WPA but primarily meant for home and small office use.  Along with these mechanisms, another setting commonly found in the settings dialogs of wireless devices concerns whether network access is negotiated via "Open" or "Shared Key" authentication.

A network that authenticates as "Open" does not attempt to encrypt data between the wireless client and the wireless host during connection negotiations.  A shared key will initiate a process of sending encrypted data during connection negotiations.  The natural security choice would seem to be the shared key option; however, it's been shown that the negotiation process in this method is imperfect and at one point sends unencrypted data over the link.  This could allow an eavesdropping malicious user to gather enough information to deduce the key by comparing the unencrypted data to the encrypted data.  Therefore it's generally recommended to run an "Open" system, since there is no negotiation process for the malicious user to eavesdrop on.

WEP


Wired Equivalent Privacy was one of the first attempts to secure wireless networks.  It uses a 40-, 64- or 128-bit encryption "key" to validate wireless clients to wireless access hosts.  This method works by configuring a matching string of characters (its length determined by the size of the key used) on both the wireless client and the wireless hosting device.  The key is only known to the users, and connectivity cannot be established from client to host without the proper entry of this key in the wireless settings of both devices.
This was an effective method of security but had one basic flaw: the key was manually set and did not change until the configuration was manually changed on both the host and the clients.  This made it possible for a malicious "war-driver" to conduct a "brute force" attack against the wireless access point.  The malicious user could run a program that would continually send keys to the device until it finally found one that was accepted, thus allowing access to the wireless network.
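To get a feel for why a key that never changes is a problem, here is some illustrative arithmetic.  The guessing rate is an assumption chosen purely for the example, and real attacks on WEP exploited protocol flaws that made them far faster than raw brute force.

```python
# Illustrative arithmetic only: the size of the key space for the key
# lengths mentioned above, and how long exhaustive guessing would take
# at an assumed rate of one million guesses per second. Because a WEP
# key is static, an attacker has unlimited time to work through it.
SECONDS_PER_DAY = 60 * 60 * 24
GUESSES_PER_SECOND = 1_000_000  # assumed figure for illustration

for key_bits in (40, 64, 128):
    keyspace = 2 ** key_bits
    days = keyspace / GUESSES_PER_SECOND / SECONDS_PER_DAY
    print(f"{key_bits}-bit key: {keyspace:.2e} possible keys, "
          f"~{days:.2e} days to exhaust at {GUESSES_PER_SECOND:,} guesses/sec")
```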

WPA & WPA2


Wi-Fi Protected Access is the successor to WEP.  WPA has two flavors.  WPA-PSK is meant for home and SOHO users; it uses a shared passphrase and periodically changes the encryption key automatically.  The second type is WPA-802.1x, which is generally used in larger enterprises and relies on a separate RADIUS server (a specialized authentication server) and an authentication framework called EAP (Extensible Authentication Protocol).
There's also a newer standard called WPA2 that uses a different encryption scheme.  WPA uses a rekeying process known as TKIP (Temporal Key Integrity Protocol), while WPA2 replaces TKIP with stronger encryption based on AES (the Advanced Encryption Standard).

Other options


There are other options available to secure wireless networks, and standards continue to evolve to improve security.  One interesting option we've deployed involves a device known as a Wireless Authentication Server.  We had a client that wanted to offer free wireless internet access to their guests but didn't want to sacrifice the security of their wired business network.  They also wanted their guests to have easy access to the new amenity while still retaining control over the connection.

The solution was a stand-alone hardware device (the Wireless Authentication Server) that connected directly to the client's internet connection, independent of any internal business network connections.  We connected the Wireless Authentication Server to a dedicated network switch with Power over Ethernet (PoE) capability.  We used the PoE feature to power two wireless access points also connected to the switch.  PoE allowed us more freedom to position the access points since we didn't need to stay near power outlets when mounting them.


The wireless security features of the access points were disabled to simplify client connectivity.  This was done because, while there are standards for wireless security, not all wireless manufacturers implement these features in the same way.  This can be confusing to less experienced users and can even cause connection failures.  Since the two major requirements were security and ease of access, we had to make the system user friendly as well as secure.  While disabling security features on an access point is generally not good security practice, for this project there was no compromise in security because we were able to transfer the security functions to the Wireless Authentication Server.  The Wireless Authentication Server encrypts all authentication and controls access to the resource via its own internal authentication mechanisms, including username and password combinations.  The wireless access points were positioned to provide maximum signal coverage in the desired areas without bleeding coverage into areas that didn't require it.

We should also mention the many wireless broadband options available to mobile users.  These services differ in that they are meant to provide internet access to one client via the service provider’s private network.  Configuration and security features are controlled by the service provider usually utilizing customized software installed on the client device.

Wireless broadband, as it's called, has improved in the past few years.  Not so long ago slow data rates and high cost made these services impractical.  Now, with mature technology and better wireless signal coverage, we are seeing more reasonable pricing and data rates rivaling high-speed wired connections.  Services such as Sprint's mobile broadband service or AT&T's and Verizon's 3G networks all use similar wireless technology.  These services require specialized wireless cards that have no interoperability with Wi-Fi networks.  Their use is more point-to-point in nature and not meant to form a peer network as Wi-Fi does.  Think of such services as being more akin to your DSL or cable modem than to the network card that connects to your home network.

The Future (as in now, ha ha)


Wireless access continues to evolve and the next standard on the horizon is Wireless N.  Wireless N promises speeds up to 108Mbits using the same 2.4GHz frequency band as 802.11B and G.  Enhanced security features, better speeds and longer range are the major improvements with the latest iteration of the standard.  One of the ways "N" devices improve speed is by using additional antennas.  The additional antennas allow "N" devices to transmit and receive multiple signals on multiple channels, a technique known as MIMO (Multiple Input, Multiple Output).  In addition, each channel is able to carry more data than on previous "A", "B" or "G" devices.  There are devices utilizing this technology now, but the standard is still not official, making the purchase of such equipment potentially risky.  There is a risk of a "pre-N" device not being compatible with devices produced after the standard is official.  For business purposes our recommendation is to wait for the standard to become official.  If you wish to try out the technology at home, it's a good idea to stay with the same vendor to ensure compatibility.

Hopefully you have a better idea about Wi-Fi and some of the options available to you.  Wireless standards are always evolving and the next few years will likely see speeds approaching the fastest wired connections.

    

PC upgrade or replacement (nostalgia version)

The following is from my old website.  It's a little humorous in light of where technology is now, and product cycles are much shorter today.  Even though the article is 5 years old, a lot of it is still relevant.

Enjoy a bit of geek nostalgia!



March 2008


The question of upgrade or replacement.

Could it be the software?

Oftentimes I'll be working on a project at a client's site and will be asked to look at a pc that just doesn't seem to be working as well as it used to.  In almost every case the primary complaint is poor performance, with reliability issues running a close second.

When I spend some time with the afflicted “patient” I often find the performance issue has less to do with the hardware and more to do with an errant application or overdue maintenance tasks. 
In many cases the offending pc can be restored to acceptable performance by performing simple maintenance tasks such as defragmenting the hard disk, applying operating system and application updates and removing unused applications that load a portion of themselves into memory but are never used. 

Many Internet security suites are guilty of this behavior.  Perhaps you bought the security suite to protect you from spyware and virus infection but ended up getting extra pop-up blockers and search toolbars that you either didn’t need or didn’t use.  They’re installed and enabled by default and will use memory and slow performance.  This will be more evident on pc’s with less powerful processors and smaller amounts of RAM.
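If you're curious what loads itself automatically on a Windows pc, a quick read-only peek at the standard "Run" registry keys will show you.  The Python sketch below is only an illustration and assumes Python is installed on the machine; anything you decide to remove should still be removed through the program's own uninstaller or a tool such as msconfig.

```python
# A minimal sketch (Windows only) of how to see what loads itself at
# startup: list the entries under the standard "Run" registry keys.
# This only reads the registry; it does not change anything.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for root, path in RUN_KEYS:
    try:
        with winreg.OpenKey(root, path) as key:
            index = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, index)
                    print(f"{name}: {value}")
                    index += 1
                except OSError:
                    break  # no more values under this key
    except OSError:
        pass  # key not present
```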
At this point you may be asking, "What does this have to do with upgrading or replacing my pc?"  The answer is that software issues can often appear as a hardware problem.  Often, just addressing maintenance and software issues can return a pc to better-than-new performance.  This is especially true if there are applications present from the original pc manufacturer's configuration.  This is a common practice, and these applications can eat up system resources unnecessarily.

I recently had just such an issue with a brand new pc preloaded with extra applications that would never be used in a business setting.  One of these was a proprietary encryption program that the client neither wanted nor needed;  even worse, this application used 25% of the processor’s resources! 
The net effect was to make a brand new pc act like one with a serious hardware issue.  Removal of the extraneous applications greatly improved system performance.

It’s not the software…

Ok, so we’ve gone through the pc, got rid of all the extra software clutter, updated all your programs, defragmented your hard drive and tweaked settings in the operating system.  “Great! Now we’ve done all that and it’s still too slow!”

Now comes the question of whether a business pc should be upgraded or just replaced.

The criteria for making this decision should include the age of the pc and your current and future usage. 
The short answer is that if the pc is over 3 years old, it's best to replace it rather than attempt an upgrade.  The lifecycle for computer hardware is still very short, and parts availability for an older pc can be limited.  These parts can be quite expensive, especially when the hardware was based on a platform that was only available for a short period of time, such as Intel 850 chipset-based pc's using the RAMBUS memory platform.  Once the industry moves on to the next hardware platform, the previous platform is abandoned rather quickly by parts manufacturers.
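To put the rule of thumb in black and white, here is a toy sketch of the decision.  The thresholds come from the discussion above and are illustrative only; a real decision should also weigh parts availability and downtime.

```python
# Toy codification of the rule of thumb discussed above: a pc over 3
# years old that no longer performs adequately is usually a candidate
# for replacement rather than upgrade.
def upgrade_or_replace(age_years, performs_adequately):
    if age_years > 3 and not performs_adequately:
        return "replace"
    if not performs_adequately:
        return "consider maintenance or an upgrade first"
    return "keep as-is"

print(upgrade_or_replace(4, False))  # replace
print(upgrade_or_replace(2, False))  # consider maintenance or an upgrade first
```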

I recently had a client with an ailing 2 year old pc.  It had performed satisfactorily until recently, when it began to exhibit reliability and response issues.  Finally this pc completely failed and would no longer boot.  Fortunately I was able to obtain compatible hardware to repair it, but had this problem occurred 6 months later the outcome would have been much different.  In that case I would have advised replacement, since the cost of parts, their limited availability and the resulting downtime would have made repair the less cost-effective option.  This pc was, in effect, on the cusp of the repair/replace trigger.

In a business setting it's not uncommon to see a three to five year replacement cycle for desktop pc's, with shorter cycles for larger firms.  Most manufacturers offer one year warranties with longer terms available at an additional cost.  Rarely do these warranties extend past 3 years.  The reason for this has to do with the rate of technology change and the cost to the manufacturer of maintaining a service inventory for platforms that become relatively obsolete within one year's time.  The computer industry has very thin profit margins and a business model based on having just enough inventory to process active orders.  The major players don't want to maintain a large cache of inventory that they can't immediately liquidate.

Businesses know that after the warranty expires on a pc, the cost of repairing hardware issues can exceed the costs associated with replacement.  It's more than simple hardware costs; it's the cost of lost productivity from the end user as the problem is addressed, as well as the cost of labor to deal with the hardware issue.  It's far more cost effective to replace rather than repair or upgrade a pc in this case.

What works for a larger business doesn’t always work for a smaller one, however.  A larger business will usually have the capital to absorb the initial costs of hardware purchases over shorter periods of time.  Many times a larger business’s assets are depreciated over shorter periods of time than those of a smaller business.  Computer hardware tends to lose its value quickly and thus quickly erases any depreciation benefit past one year.  Couple this with potential costs mentioned earlier in this article and it’s easy to see the reason for the turnover.

What's different about a small business pc?

A smaller business may not have the luxury of short turnover periods and may choose to depreciate the cost of a new pc over a longer period of time.  In this case the best path is to buy as much pc as you can reasonably afford.  That means a better processor such as the Intel or AMD dual or quad core based processors (leave single core chips and Semprons out of the mix as their presence in a configuration indicates a low-end system), at least 2GB of RAM and a hard disk at least 250GB (preferably 500GB or more) in size.

For newer pc's that will eventually run Windows Vista or Windows 7, a good mid-level graphics card such as the current Nvidia GTX 250 or ATI 5650 is mandatory.  While thought of as mainstream gaming cards, these cards actually handle Microsoft Vista's and Windows 7's 3D graphics capability much more effectively than any integrated graphics option.  Vista makes use of the 3D capabilities of these cards and can offload much of the image processing duty from the system processor.  While not the only operating system option available, it's likely that any new pc in the near future will come pre-installed with Vista and with applications expecting to make use of its more graphics-intensive interface.

The advantage of buying as much pc as you can afford is of course having a faster pc and a longer depreciation period.  What you also gain is a bit more "future proofing," allowing the possibility of upgrades in the future as your needs change.  Changing the video card, adding memory and even processor upgrades are much more likely with the higher end configurations.  Manufacturers' higher end pc offerings often use more robust system-level components that are unavailable in lower end models.  This can help extend the useful life of the pc and may be offered with a longer warranty term.  The rule of diminishing returns still applies, however, and eventually any upgrade potential will evaporate.

For a small business that time will likely occur in the 4th or 5th year due either to major hardware failure or requirements of a critical business application that the hardware cannot meet.
I should mention that there is such a thing as "too much of a good thing" when it comes to buying a pc for business.  Generally speaking, a well-equipped business pc should not exceed a $1,200.00 base price.  Extra network adapters, high-end video cards, special paint jobs and the like do nothing for the business user.  On the opposite end of the spectrum, a pc offered for $399.99 is likely to have insufficient RAM, a low-end processor and video system, and offer disappointing overall performance.

As an example: I own three pc's, not counting my laptop.  Two are used exclusively for my business and one is reserved for Saturday night recreation, driving a virtual Ferrari ENZO around Hawaii.
The cost to build the gaming pc is within a few dollars of what it is going to cost me to replace my two production pc’s.  That’s because my business machines do not require the level of hardware of my recreational pc.  In fact the gaming pc would be overkill for anything but gaming and an inefficient use of resources.  That doesn’t mean I plan to have inferior hardware in my production rigs, quite the contrary.  My production pc’s are purpose built for the jobs I need them to perform.  Luckily those purposes don’t require the hardware expense of a dedicated gaming pc.

What about laptops?


For the most part many of the above rules apply, but the time periods are much shorter.  Laptops are not designed for the long term, and their design severely limits upgrade potential as well as longevity.  For business use most laptops are outmoded within 2 years, with the only upgrade paths available being RAM, the hard drive and wireless adapters.  Some models do have modular video cards, but these are generally reserved for the high end enthusiast market and offer little value for the business user.  Again, buy as much as you can afford, but realize that after a few years items such as batteries, power supplies, keyboards and LCD displays will often fail.  Replacing them can be costly, often approaching the cost of a replacement laptop, with the added frustration of poor parts availability and user downtime.

Final Word

The choice of upgrade or replacement of a business pc is governed primarily by practical factors.  Time is always a factor no matter what size the business.  Generally, a pc that is over 3 years old and no longer performing adequately for the applications installed should be replaced.  The cost of an upgrade will often exceed the value of the pc and bring little or no improvement.  While many pc's will function reliably after the 3 year mark, their viability will decrease as software places increased demands on hardware and replacement parts become scarce.
Your best option for upgrade of a brand name pc is at the point of purchase when the pc is new.  Often, hardware upgrades can be more economical when the system is initially configured than after delivery.

In troubling economic times it's more important than ever to get as much as you can out of all of your business assets.  As with any purchase, the best value isn't always the cheapest option.  I hope I've been of some assistance.  Feel free to contact us if you'd like help with this or any other IT need.