
Deerfield Hosting, Inc.

High Performance Web Hosting


The Need for Speed

April 10, 2014 by dennis

When a Facebook post gets several million likes and is backed by a WordPress post on one of our servers, we handle some serious traffic.  Maintaining the fast response we strive for can be a challenge.

We continually search for ways to increase server performance for our busiest sites. For the sake of reliability and stability, most of our servers run CentOS 6, but CentOS has become a source of frustration: its packages are simply too old. CentOS 7 (based on Red Hat Enterprise Linux 7) is in the works, and it will be a big relief when it finally appears. When I read that RHEL 7 would be based on Fedora 19, I decided it was time to try Fedora as a server. Fedora lists "First" as one of its core values and tries to ship the most recent innovations, with releases twice per year.

Our latest server is running Fedora 20. I can hardly believe the performance gain. To test, we moved 5 very busy sites onto this server. A server's load average is a rough measure of how many processes were running or waiting to run, averaged over the last 1, 5, and 15 minutes. It correlates directly with web site performance: the speed a site visitor perceives. At quiet times on the previous server I was seeing loads of 1 and 2. At the same times on this server, the load is 0.0, sometimes spiking up to 0.2. At busy times I was seeing loads of 2 and 3, sometimes spiking to 5 and 7. On this server it's 0.2 and 0.3 with spikes up to 0.5. On top of that, spikes disappear much more rapidly: where I had been seeing spikes fade away in 5 or 10 seconds, I now see them fade in 2 or 3 seconds. That's huge. It means not only that site visitors are getting page loads 10 times (or more) faster, but that the fastest performance is being seen by 5 times as many visitors.
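
If you want to watch these numbers on your own server, Linux exposes them in /proc/loadavg, and Python wraps the same data. A minimal sketch; the 0.5 threshold is just our illustration, not any kind of standard:

    import os

    # The same numbers uptime(1) prints: load averaged over the
    # last 1, 5, and 15 minutes.
    one, five, fifteen = os.getloadavg()

    cores = os.cpu_count() or 1
    print(f"load: {one:.2f} {five:.2f} {fifteen:.2f} on {cores} core(s)")

    # Rough rule of thumb from this post: anything much above a few
    # tenths on one of these servers means visitors start to wait.
    if one > 0.5:
        print("load spike: requests may be queueing")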

The gains came from a long list of improvements.  Before getting into what they are, a disclaimer.  To make this understandable to people who aren’t techno nerds, I’m oversimplifying a bit. 

Where load comes from can get really complicated, with so many things going on at the same time in a server. It matters what those things are, because they interact dynamically. For example, under certain conditions we see load more than double when the visitor hit rate doubles; sometimes the growth is quadratic. But load can also grow by less than double, closer to logarithmic. It depends on the server software, the site application software, and the interaction between them. Having said that, here's the list:

  • The Linux kernel, v3.13 – recent optimizations have a substantial effect.
  • The network stack and drivers have been improved.
  • Web server software: Apache 2.4.9 or Nginx 1.5.13 – Nginx is a bit faster, but when certain features are essential, Apache is the better choice.  Apache 2.4 outperforms 2.2.
  • XFS file system – this governs disk reads and writes.  Under load it's twice as fast as EXT4, and disk operations are the most common performance bottleneck.
  • MariaDB (a community fork of MySQL 5.5) – this version is said to be 2 to 3 times faster in the most common use cases.
  • PHP 5.5.10 – by being less tolerant of errors and by changing some semantics, this release often makes performance gains of 20 to 40% possible.
  • Zend OPcache – holds compiled script code in memory, saving both the step of compiling scripts into executable code and a disk read.
  • PHP-FPM – keeps PHP worker processes up and running so that the web server can pass them script names to execute and get the results back, instead of starting PHP fresh for every request.  A toy illustration of this saving follows the list.
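
To make that last item concrete: the win from both OPcache and PHP-FPM is paying a startup cost once instead of on every request. Here is a toy Python analogy, not how PHP-FPM is actually implemented, just the shape of the saving; the 0.05-second startup cost is invented:

    import time
    from multiprocessing import Pool

    STARTUP_COST = 0.05   # stand-in for loading an interpreter + extensions

    def run_script(name):
        # The actual per-request work; trivially small in this toy.
        return "rendered " + name

    def cgi_style(requests):
        # Old CGI model: pay the interpreter startup cost on every request.
        results = []
        for r in requests:
            time.sleep(STARTUP_COST)
            results.append(run_script(r))
        return results

    def fpm_style(requests):
        # FPM model: a resident pool of workers pays startup once,
        # then stays up to handle request after request.
        with Pool(processes=4) as pool:
            return pool.map(run_script, requests)

    if __name__ == "__main__":
        reqs = ["page%d.php" % i for i in range(20)]
        t0 = time.time(); cgi_style(reqs); t1 = time.time()
        fpm_style(reqs); t2 = time.time()
        print("per-request startup: %.2fs, persistent pool: %.2fs"
              % (t1 - t0, t2 - t1))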

We are now offering virtual private servers set up this way.  We are also offering the opportunity to run sites in a shared VPS set up this way.   You can find it in our ordering system.  This is a strong value for sites too busy to run on a cPanel shared hosting server.

I was asked, "Where's Varnish?" Varnish is a page cache which resides in memory. It saves a disk access when a static file (an HTML page, an image, etc.) has been read recently. This is useful on proxy servers with a server farm behind them. In other situations it's counterproductive. Modern operating systems already take full advantage of available memory: all memory not in immediate use is allocated to disk cache buffers, and when a program needs more, pages are reclaimed starting with the least recently used (LRU). In other words, the operating system is already doing what Varnish does, and much more efficiently in terms of the whole system. Adding Varnish is more likely to increase disk reads than to reduce them.
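
For readers who want the mechanics, least-recently-used eviction is easy to model. A sketch of the policy, assuming a toy load_from_disk callback; the kernel's real page cache is far more sophisticated than this:

    from collections import OrderedDict

    class LRUCache:
        # A tiny model of least-recently-used eviction, the policy the
        # kernel's page cache approximates for disk blocks held in RAM.

        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()

        def read(self, key, load_from_disk):
            if key in self.pages:
                self.pages.move_to_end(key)     # recently used: keep it
                return self.pages[key]          # hit: no disk access
            value = load_from_disk(key)         # miss: one real disk read
            self.pages[key] = value
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            return value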

You will find many pages on the Internet recommending Varnish incorrectly.  Something like 80% of the information and recommendations about getting the most out of your web site or server is dead wrong, too incomplete to be useful or doesn’t apply to the most common environments.   People like to write about the wonderful new things they’ve learned, but often fail to realize they don’t know enough about the bigger picture.  Be careful what you believe.


NSO Transparency Report

January 28, 2014 by dennis

On January 27th, the United States Department of Justice announced new rules regarding the disclosure of National Security Orders, including National Security Letters (NSLs) received by a company. The DOJ and the Director of National Intelligence (DNI) now allow a company to disclose the number of letters and orders it has received as a single figure in bands of 250; the first band is 0-249. It remains illegal to disclose the exact number.
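
The banding itself is simple integer bucketing; a minimal sketch (the function name is ours):

    def nso_band(count, width=250):
        # Map an exact count onto the 250-wide band the rules allow,
        # e.g. 3 -> "0-249", 251 -> "250-499".
        low = (count // width) * width
        return "%d-%d" % (low, low + width - 1)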

At Deerfield Hosting, we believe that disclosing the exact number of orders a company has received poses no threat to national security.

Many people assume that tech companies receive and comply with large numbers of such orders daily.  However, the real numbers are probably quite small.  For example, Apple has just reported a band of 0-249.

In keeping with the new rules, our report is this:

  • National Security Orders Received: 0 – 249
  • Total Accounts Affected: 0 – 249

We believe that the new rules are a step in the right direction but do not go far enough. We also believe that orders of these types violate due process and are unconstitutional. It is our policy to refuse to comply with such orders unless they are issued by a court and not merely by the DOJ.

Moreover, if you wish to know of any orders we have received which pertain to any accounts you may have with us, simply ask us.  We are quite prepared to break the law and give you an answer.


DDOS and Your Web Site

April 12, 2013 by dennis

At this moment, Friday, April 12, 2013, at 2:15 EDT, global Internet traffic is well above normal, in some places more than 100% above normal. The trouble is, it isn't normal traffic: the excess is attacks on web sites.

DDOS stands for "distributed denial of service". It is the most difficult threat to defend against because it comes from thousands of computers simultaneously, each making service requests. Usually the requests are designed to be as resource intensive as possible, such as attempts to log in to services.

It's not hard to account for where the attacks are coming from. More and more computers are connected to the Internet full time and owned by ever less sophisticated users, which makes them ripe targets for hackers. Literally hundreds of thousands of such machines have been compromised and assembled into large networks.

You may not think so, but yours may be among them. We have been seeing more compromised sites lately than ever before, and the explanation tends to be hackers getting in with stolen passwords. Some very effective viruses circulating lately silently steal passwords, then watch and wait for accounts to compromise with them. Some are sophisticated enough to disable virus scanners unnoticed. This means it is essential to occasionally scan your machine using software newly installed on it; probably less than one tenth of one percent of users do this.

You may have noticed a rise in spam to your inbox lately, and a decline over the last few days. Last week, the main service we use to help filter out spam was hit by a DDOS attack on a scale never seen before. In the past, attack traffic has generally peaked at about 100 billion bits per second. Yes, 100 Gbps. That's about 10 times faster than typical networks can go. The attack on Spamhaus peaked at 300 Gbps. This meant that we could barely reach them to do the usual spam checks. Rather than lose email, we sometimes had no choice but to let it in unchecked.
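
For the curious, the "usual spam checks" are DNS blocklist lookups: reverse the sending address's octets and look them up under the blocklist's zone. A minimal sketch of that convention (IPv4 only; real mail filters check several zones and interpret the returned address rather than just yes/no):

    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        # DNSBL convention: 203.0.113.7 is queried as
        # 7.113.0.203.zen.spamhaus.org; any answer means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:      # NXDOMAIN: not on the list
            return False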

DDOS attacks are a serious threat to the entire Internet and they are going to get worse.

Currently, WordPress blogs are a particular target. Some sources report as many as 90,000 to 100,000 different IP addresses (individual computers) launching login attempts against the sites on a single server. The default installation uses "admin" as the login name, so all an attacker has to do is keep trying different passwords. The goal is to further enlarge the networks of compromised computers.

We run strong firewalls and scan every web request against a list of more than 10,000 known attacks. Even so, it is not possible to prevent every compromise while still allowing normal activity like user logins: entry is gained through vulnerable scripts and weak passwords.

What you can do is make sure your passwords are up to the threat. A good password is at least 8 characters long and contains a number, upper and lower case letters, and a special character (#!@$ for example); a quick check for those rules is sketched below. If you are running WordPress, install the limit-login-attempts plugin. If you are using a weak password, please change it right now.
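
Those rules are mechanical enough to check in a few lines; a minimal sketch:

    import re

    def is_strong(password):
        # The rules above: at least 8 characters, with a digit, an upper
        # case letter, a lower case letter, and a special character.
        return (len(password) >= 8
                and bool(re.search(r"[0-9]", password))
                and bool(re.search(r"[A-Z]", password))
                and bool(re.search(r"[a-z]", password))
                and bool(re.search(r"[^0-9A-Za-z]", password)))

    assert not is_strong("password1")    # no upper case, no special char
    assert is_strong("c0rrec7-Horse!")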

We have added failed-login checking to our firewalls: when more than a few login attempts fail, the source is blocked (the logic is sketched below). This may cause some inconvenience, but it will help considerably with server performance. Note that this covers only service logins, not logins you may have on your web site.
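
The idea is the one fail2ban popularized: count recent failures per source address and block past a threshold. A simplified model of the logic; the numbers are illustrative, and our firewalls do the real blocking:

    import time
    from collections import defaultdict

    MAX_FAILURES = 5        # "more than a few"; the real threshold varies
    WINDOW = 600            # forget failures older than ten minutes

    failures = defaultdict(list)   # source IP -> recent failure times
    blocked = set()

    def record_failed_login(ip):
        now = time.time()
        # Keep only recent failures so a forgetful but legitimate
        # user is not blocked forever.
        failures[ip] = [t for t in failures[ip] if now - t < WINDOW]
        failures[ip].append(now)
        if len(failures[ip]) > MAX_FAILURES:
            blocked.add(ip)    # a real firewall would insert a drop rule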

Nonetheless, you may notice some sluggishness as these attacks escalate. Please understand that we are on it.

Hard Drive Failure!

February 6, 2013 by dennis

100% uptime is impossible. All we can do is get close.

Last week, two hard disks failed (simultaneously!) in node 2 of cluster 2. Besides being backed up in real time on a second cluster node, each node runs RAID-5 with hot-swap drives. A single drive can fail and be replaced with no down time, but if a second drive fails, it's fatal.
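
Why one failure is survivable and a second is fatal comes down to parity. RAID-5 stores the XOR of the data blocks in each stripe, so any single missing block is just the XOR of the survivors. A toy illustration (real RAID-5 rotates parity across the drives):

    from functools import reduce

    def xor_blocks(blocks):
        # Byte-wise XOR of equal-length blocks: RAID-5's parity operation.
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # Three data blocks on three drives, their parity on a fourth.
    d1, d2, d3 = b"web!", b"site", b"data"
    parity = xor_blocks([d1, d2, d3])

    # Lose any ONE block and XOR of the survivors rebuilds it exactly...
    assert xor_blocks([parity, d2, d3]) == d1

    # ...but lose TWO blocks and no combination of the survivors can
    # recover either one. That second failure is the fatal one.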

It wasn't a clean failure, and the performance of the fail-over system and failure reporting was less than perfect. Initial symptoms seemed to point at a network card failure. The cluster software did fail over properly, but we had to clean up some databases. Some sites were not in good shape for 2 or 3 hours.

The next day, the remaining node was bombarding us with emails about the failed node. I had to shut everything down and power it back up outside the cluster; total down time for this was probably 10 minutes. It was necessary because otherwise we could easily have missed emails about failures we were not yet aware of: the signal-to-noise ratio was way too low.

Monday we replaced all the hard drives in the failed node, re-installed the operating system and all the cluster software, and began manually syncing the drives from the node still in operation. Synchronization completed overnight Tuesday.

This morning at 5 AM I began the task of moving services back into the cluster. I will spare you the details, but it’s a nasty and error prone process. All the safeguards, checks and balances in the cluster software really get in the way while doing this. Sites were up and down several times. My guess at total down time today was something like 30 minutes.

Everything is completely back to normal now.

This was the first major real-world test of the clustered live fail-over system we put in place 18 months ago. I'm not totally happy with it. Previous tests were done by pulling plugs, simulating total failures, and in that situation performance was flawless: down time was so short no one noticed. Real-world failures are usually messy, like this one was. The fail-over system worked, but it needed a little help. It was still a big win compared to re-installing a server and restoring backups, which could take a day or more.

There is a recurring pattern with problems like these: a period of a few days or a week during which problems come up and quickly or gradually get ironed out. In retrospect these periods feel much longer than they really were, because the worry and frustration when a server is down are intense. An hour is remembered as half a day; related problems recurring a few times over several days are remembered as lasting a week or more. It's human nature. Problem periods are followed by long stretches, many months or a year, during which everything runs smoothly.

If you look at our up time over longer periods, it's actually very good: something over 99.99%. My perfectionist nature often makes me lose sight of that. But nobody does any better, so it's worth a reminder.


10 YEARS!!

November 14, 2012 by dennis

The domain names deerfieldhosting.com and deerfieldhosting.net were registered on November 16th, 2002.

A lot has changed since then.

We had a reseller account on a server which was overloaded and provided terrible service. BUT, the pay-per-click advertising was working: the initial $300 was getting recycled over and over, and the revenue from new signups was slightly more than the advertising cost. Since it was clear that the business model was working, we took the plunge and rented our first dedicated server.

We were running a control panel called Ensim, having taken a cue from our former host. It was (is?) so bad that the number of accounts you could add to a server was constrained not by how busy the sites were, but by the overhead imposed by the control panel itself. And I mean by a factor of 10: the control panel hogged server resources at easily 10 times the rate of the sites themselves.

We limped along with Ensim for several years, adding server after server to get good performance.  What a relief it was to ditch it and move to cPanel and FreeBSD.  It felt like I’d gone to heaven.

Of the first 5 customers to sign up, we still have 3.  The other 2 are no longer on the Internet.  I like to think that means we are doing something right.

Anyway, Happy Birthday, Deerfield Hosting!!

