Saturday, June 12, 2010

Rethinking analysis of Google's AP data capture

In "Rethink things in light of Google's Gstumbler report," Kim Cameron asks that we rethink our analysis of Google's wireless data capture in light of the third-party analysis of the gstumbler data capture software. He seems to have a particular fondness for the phrases "wrong," "completely wrong," and "wishful thinking" when referring to my comments on the topic. In my defense, I will say that there was no "wishful thinking" going on in my mind. I was simply examining the published information rather than jumping to conclusions -- something that I will always advocate. In this case, after examining the published report, it does appear that those who jumped to conclusions happened to be closer to the mark, but I still think they were wrong to jump to those conclusions before the actual facts had been published.

I read through the entire report and have to say that the information in it is quite different from the information that had been published at the time I expressed my opinions on the events at hand. The differences include:

  1. We had been led to believe that Google had only captured data on open wireless networks (networks that broadcast their SSIDs and/or were unencrypted). The analysis of the software shows that to be incorrect -- Google captured data on every network regardless of the state of openness. So no matter what the user did to try to protect their network, Google captured data that the underlying protocols required to be transmitted in the clear.
  2. We had been led to believe that Google had only captured data from wireless access points (APs). Again the analysis shows that this was incorrect -- Google captured data from any device whose wireless traffic it was able to intercept (AP or user device). So portable devices that were transmitting as the Street View vehicle passed would have their data captured.

    One factor that is potentially in the user's favor is that the typical wireless configuration would encourage portable devices to transmit at just enough power for the AP to hear them (devices on wireless networks do not talk directly to each other). Depending upon the household configuration, it is possible (probable?) that a number of devices would not be transmitting strongly enough to be detected from a vehicle out in the middle of the street. However, if Google had a big honking antenna on the vehicle with lots of gain in the right frequencies, it could have detected every device within the house.

Given this new information I would have to agree that Google has clearly stepped into the arena of doing something that could be detrimental to the user's privacy.

That said, however, we need to be a little careful about the automatic assumption that the intent was to put all of this data into some global database. In fact, the way the data was captured -- the header of every data packet was captured, many of which would contain duplicate information -- makes it clear that Google intended to do some post-processing of the data. One could hope that they would use this post-processing step to restrict the data making it into any general, world-wide database. Of course, we don't know whether or not they would do this, and even if they would, they still have the raw data capture, which contains information that could clearly be used to the user's detriment.

In addition, the fact that we know Google did this doesn't preclude others from doing the same (or having already done so) without publicizing it -- especially those who do intend to use this information for nefarious purposes.

We should take this incident as a wake-up call to start building privacy into the foundations of our programs and protocols.


Monday, June 07, 2010

Kim vs. Google summary...

As Kim's and my ongoing blog discussion seems to have gone off on various tangents (what some might call "rat holes"), I thought it best to try to bring things together in a single summary (which I'm sure will probably generate more tangents).

Let's list some of the facts/opinions that have come out in the discussion:

  1. MAC addresses typically are persistent identifiers that by the definition of the protocols used in wireless APs can't be hidden from snoopers, even if you turn on encryption.
  2. By themselves, MAC addresses are not all that useful except to communicate with a local network entity (so you need to be nearby on the same local network to use them).
  3. When you combine MAC addresses with other information (locality, user identity, etc.) you can be creating worrisome data aggregations that when exposed publicly could have a detrimental impact on a user's privacy.
  4. SSIDs have some of these properties as well, though the protocol clearly gives the user control over whether or not to broadcast (publicize) their SSID. The choice of the SSID value can have a substantial impact on its use as a privacy invading value -- a generic value such as "home" or "linksys" is much less likely to be a privacy issue than "ConorCahillsHomeAP".
  5. Google purposely collected SSID and MAC Addresses from APs which were configured in SSID broadcast mode and inadvertently collected some network traffic data from those same APs. Google did not collect information from APs configured to not broadcast SSIDs.
  6. Google associated the SSID and MAC information with some location information (probably the GPS vehicle location at the time the AP signal was strongest).
  7. There is no AP protocol defined means to differentiate between open wireless hotspots and closed hotspots which broadcast their SSIDs.
  8. I have not found out if Google used the encryption status of the APs in its decision about recording the SSID/MAC information for the AP.
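As an aside on those MAC-address facts: the address itself has visible structure, which is part of why it works as a persistent identifier. A quick shell sketch (the address below is invented):

```shell
# A MAC address is 48 bits: the first three octets (the OUI) identify the
# hardware vendor, the last three are assigned per-device by that vendor.
mac="00:1f:3b:8a:2c:11"                    # invented example address
oui=$(echo "$mac" | cut -d: -f1-3)         # vendor prefix
dev=$(echo "$mac" | cut -d: -f4-6)         # device-specific part
echo "OUI: $oui  device: $dev"
```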

Now we get to the point where there are differences of opinion.

  1. Kim believes that since there's no way for the user to configure whether or not to expose their MAC address, and because the association of the MAC address with other information could be privacy invasive, Google should not have collected that data without express user consent -- and in this case Google did not have user consent.

    I believe that Google's treatment of the user's decision to broadcast their SSID as an implicit consent for someone to record that SSID and the associated MAC address is a valid and reasonable interpretation. If the user doesn't want their SSID and MAC address collected, they should configure their system to not broadcast their SSID.

    Yes, even with the SSID broadcast turned off, some other party can easily determine the AP's MAC address, and this would clearly have potential negative impacts on the user's privacy, but that's a technical protocol issue, not Google's issue, since they clearly interpreted SSID silence to be a user's decision to keep their information private and respected that decision.

  2. In "What harm can come from a MAC address?" Kim seems to argue that because there's some potential way for an entity to abuse a piece of data, any and all uses of that data should be prohibited. So, because an evil person could capture your phone's MAC address and then drive around the neighborhood to find that MAC address and therefore find your home, any use of MAC addresses other than their original intent is evil and should be outlawed.

    I believe that it's much better to outlaw what would clearly be illegal activity rather than trying to outlaw all possible uses. So, in this particular case, the stalker should be prohibited from using *any* means to track/identify users with the intent of committing a crime (or something like that).

    Blindly prohibiting all uses will block useful features. For example, giving my device a means of establishing its location to obtain location services, without revealing to me the basis for that location, is a useful feature that I have made use of on my iPhone, and I don't believe that I've violated anyone's privacy in using this type of information to know where I am (to do things such as get a list of movies playing at the nearest theatre via the Fandango application).

  3. Kim doesn't seem to have responded at all to my criticism of the privacy advocates failing to use this case as a learning experience for users to help them configure their APs in a way that best protects their privacy.

In summary, I do agree that MAC addresses could be abused if associated with an end-user and used for some nefarious purpose. However, I don't believe that Google was doing either of these.


Sunday, June 06, 2010

House numbers vs SSIDs

In "Are SSIDs and mac addresses like house numbers?" Kim Cameron argues against my characterization of SSIDs and mac addresses being like house numbers:

Let’s think about this. Are SSIDs and MAC addresses like house numbers?

Your house number is used - by anyone in the world who wants to find it - to get to your house. Your house was given a number for that purpose. The people who live in the houses like this. They actually run out and buy little house number things, and nail them up on the side of their houses, to advertise clearly what number they are.

So let’s see:

  1. Are SSIDS and MAC addresses used by anyone in the world to get through to your network? No. A DNS name would be used for that. In residential neighborhoods, you employ a SSID for only one reason - to make it easier to get wireless working for members of your family and their visitors. Your intent is for the wireless access point’s MAC address to be used only by your family’s devices, and the MACs of their devices only by the other devices in the house.
  2. Were SSIDS and MAC addressed invented to allow anyone in the world to find the devices in your house? No, nothing like that.
  3. Do people consciously try to advertise their SSIDs and MAC addresses to the world by running to the store, buying them, and nailing them to their metaphorical porches? Nope again. Zero analogy.

So what is similar? Nothing.

That’s because house addresses are what, in Law Four of the Laws of Identity, were called “universal identifiers”, while SSIDs and MAC addresses are what were called “unidirectional identifiers” - meaning that they were intended to be constrained to use in a single context.

Keeping “unidirectional identifiers” private to their context is essential for privacy. And let me be clear: I’m not refering only to the privacy of individuals, but also that of enterprises, governments and organizations. Protecting unidirectional identifiers is essential for building a secure and trustworthy Internet.

This argument confuses house address with house number. A house number cannot be used as a universal identifier: I presume that there are many houses out there with the number 15, even in the same town, many times even on the same street in the same zip code (where the only difference is the N.W. or S.E. at the end of the street name).

Like SSIDs and mac addresses, the house number is only usable as an identifier once you get to the neighborhood and very often only once you get to the street.

People choose to advertise SSIDs so they themselves and others will have an easy time connecting to their network once they are within range of the AP -- as evidenced by Mike's comment on my previous article (and the reason why I have chosen to configure my SSID as broadcast). Yes, many people don't know enough to make that decision and perhaps sometimes choose what others might consider a wrong thing, but that's part of my issue with the wireless AP industry and with the privacy folks not using this as a good educational example.

So while people don't need to go to the hardware store to buy the number to put up on their house, they can, and many do, choose the electronic equivalent when they setup their AP.

House numbers are very much unidirectional identifiers used within the context of a given address (street, city, state, country, postal code) just as SSIDs and MAC addresses are.

I will admit that there are some differences with the mac address because of how basic Ethernet networking was designed. The mac address is designed to be unique (though, those in networking know that this isn't always the case and in fact most devices let you override the mac address anytime you want). So this could be claimed to be some form of a universal identifier. However, it's not at all usable outside of the local neighborhood. There is no way for me to talk to a particular mac address unless I am locally on the same network with that device.
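Since most devices do let you override the MAC address, a user can even assign a random one. A rough sketch (the helper function, and the interface name eth0 in the final comment, are illustrative assumptions; only the generation step runs here):

```shell
# Sketch: generate a random "locally administered" MAC. A first octet of
# 0x02 sets the locally-administered bit, so the address cannot collide
# with any vendor-assigned OUI.
r() { od -An -N1 -tu1 /dev/urandom | tr -d ' '; }   # one random byte 0-255
mac=$(printf '02:%02x:%02x:%02x:%02x:%02x' $(r) $(r) $(r) $(r) $(r))
echo "$mac"
# to apply it on Linux (as root, interface name assumed):
#   ifconfig eth0 hw ether "$mac"
```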

I do believe that a more privacy enabled design of networking would have allowed for scenarios where mac addresses were more dynamic and thus reducing the universal-ness and persistence of the mac address itself. However, that's an issue for network design and I don't think that what Google did was a substantial privacy issue for the user.


Friday, June 04, 2010

Privacy Theatre

In a series of blog articles, Kim Cameron and Ben Adida discuss Google's capturing of open access point information as part of its Street View project.

Kim's assertion that Google was wrong to do so is based upon two primary factors:

  • Google intended to capture the SSID and mac address of the access points
  • SSIDs and mac addresses are persistent identifiers

And it seems that this has at least gotten Ben re-thinking his assertion that this was all about privacy theater and even him giving Kim a get-out-of-jail-free card.

While I agree that Kim's asserted facts are true, I disagree with his conclusion.

  • I don't believe Google did anything wrong in collecting SSIDs and mac addresses (capturing data, perhaps). The SSIDs were configured to *broadcast* (to make something known widely). However, SSIDs and mac addresses are local identifiers more like house numbers. They identify entities within the local wireless network and are generally not re-transmitted beyond that wireless network.
  • I don't believe that what they did had an impact on the user's privacy. As I pointed out above, it's like capturing house numbers and associating them with a location. That, in itself, has little to do with the user's privacy unless something else associates the location with the user.
  • I hold the wireless AP industry responsible for the fact that many users don't have their APs setup in SSID stealth and data encrypted mode. The AP industry should have designed things so that they were encrypted by default with hidden SSIDs and required the user to do something to create an open network if they wanted to.
  • The user has to assume some responsibility here, though I really don't expect my mother to know how to configure encryption on an AP (nor do I expect her to know enough to know it's necessary). So I'm back to the AP industry.
  • And, perhaps most of all, I fault the various privacy pundits and all the news outlets who did not take this as an opportunity to teach the users and the industry about how to protect their data. Not one report that I read/saw went into any detail on how users could protect themselves (and if they still broadcast their SSIDs and leave their networks unencrypted, they are open to much worse attacks than Google capturing their SSID & mac address).

Perhaps my view is contrarian for one who is somewhat active on the privacy side. However, I think it is a much more pragmatic view that will ultimately bring value to the user far beyond giving Google a hard time for capturing SSIDs and mac addresses which have little privacy value (in my opinion).


Saturday, January 23, 2010

Consent for my own software to look at my data???

Here in the US (and I presume elsewhere) the annual rite of passage of doing one's taxes is upon us yet again. Once you've done something crazy like getting married, having kids or buying a house, the whole process gets more and more complicated as you try to minimize the amount of taxes you owe Uncle Sam.

I've always done my own taxes and for the past 10 or 15 years, I've used Intuit's Turbo Tax software to do so. I still can't bring myself to do the taxes online -- I can't help feeling that there's just something wrong about not keeping that data in house.

Anyway, yesterday I installed TurboTax to start working on my 2009 taxes (yeah, I'm a bit early, but I like to do it piecemeal as I receive tax reports and have spare cycles here and there).

Following the installation, I was prompted with the following consent screen:

If you look carefully, the screen is essentially asking whether you give consent to release data to Intuit so that they can figure out whether they should offer you extra paid tax preparation services (paid out of your return) and/or a chance for you to get some portion of your return on a debit card. So they want to use the information on your return to market additional services to you.

What isn't clear to me from the disclosure is: are you actually giving the information to Intuit (in other words, is it being transferred to an Intuit server off-system) or is the consent about the software that you've purchased and installed on your computer looking at the data locally?

If the former, then I think that they should explicitly say that the data is being transferred to an Intuit server as that isn't clear in the disclosure.

If the latter, why the heck is that necessary? It's my software that I purchased and it's keeping the data locally on my system. Intuit never sees the data unless I specifically send it to them for one reason or another. If this is really required by law, how does that match up with the "instant refund" or "refund anticipation loans" offered by the likes of H&R Block or Jackson Hewitt? Do they have to get you to sign a similar consent form before they can "notice" that you're getting a loan and offer to give it to you instantly (at great cost to you, of course)?

Even if it is the crazy latter situation, the consent should clearly state that the data is not leaving the system, that it is only being used by the software I just installed.

In any case, I did not consent to any data release... I'll wait for my money to show up when it shows up (assuming I even get a refund, which isn't always the case).


Sunday, January 10, 2010

Setting up a new ubuntu server

I've been running my own mail server for close to 20 years... Through the years, I've gone from Interactive Unix (how many of you remember that one!) to Red Hat Linux to Fedora Linux, and now I'm moving to Ubuntu (in part thanks to the strong recommendations I've gotten from friends, especially Pat Patterson).

I host several services on my server and, because we're at the end of a relatively slow pipe, I use a dedicated server hosted at Superb Hosting. I use a dedicated server rather than the more typical web hosting or shared hosting because it gives me better control over my services and because I host a bunch of email domains for friends (some of which I simply forward to their current ISP and some who actually get their mail from my system).

So, I needed to setup the following services on my server:

  • DNS services for the 12 or so domains I manage (2 of my own and the rest friends & family).
  • Web server for my personal site.
  • Email services for something like 12 domains as well.

Sounds simple, doesn't it?

Well, it wasn't that simple, but mostly because a) I was learning new ways that things are done on Ubuntu vs. Fedora, b) the tweaks of how I want to do things typically involve manual configuration changes that aren't always easily discerned from reading the man pages, and c) I like to understand the why as well as the how when doing administrative stuff, so I spend a lot of time reading/reviewing/searching as I make my changes.

BTW - I'm not only willing, but actually want to do this so that I can keep my hands a bit dirty (and maintain some basic understanding of the current technologies used on servers). At work, they keep my grubby little hands far away from the system administration side of the house.

Anyway, I thought it would be useful to document what I went through as I set up the server, as it may help others trying to do the same things.

One note about the commands shown below: I logged in as root to do the configuration, so you don't see "sudo (command)" for all of the various commands. Some would say this might be a more dangerous way to configure the system, and I would agree for onesy-twosey administrative commands. However, for a long-term session where you're doing nothing other than administrative commands, sudo just gets in the way. And yes, you need to be careful when you're logged in as root.


OS Updates

First step with any new system is to ensure that I have the latest and greatest software installed -- this is especially important on an internet-visible server.

This involved running the following commands:

apt-get update         # to update the apt-get configuration/data files
apt-get dist-upgrade   # to upgrade all installed packages to latest versions

This made sure I had the latest patches for this release of the OS. However, I wanted also to make sure I had the latest OS version. For Ubuntu, they have two development lines for servers: a somewhat frequently changing/evolving line and a more stable Long Term Support (LTS) line. Both lines get security patches regularly but LTS gets them for several years longer while the fast changing line will more frequently require you to upgrade to the latest OS version for patches.

Given what I do with the server, using the LTS line is the right thing for me to do (which is the version that was installed by my provider). So I ran the following commands to ensure I had the latest version:

apt-get install update-manager-core
do-release-upgrade

The latter command reported that there was "No new release found", which is correct as 8.04 LTS is the latest LTS.

If, on the other hand, I wanted the latest OS rev (not just the latest LTS OS rev), I could have edited the update-manager configuration file and changed the line "Prompt=lts" to "Prompt=normal".

Miscellaneous Tools

As I went through the installation and setup, I found a number of tools were missing that I had to install to do the things I wanted to do, so I'll list them here...

  1. System V Configuration files

    I like to use the System V commands for managing the system (including the service command to start/stop init.d services).

    apt-get install sysvconfig
  2. Make

    I use a lot of Makefiles for managing the build and installation of software and packages. I was a bit surprised that my server didn't include make by default, but I presume that was because it is a server and doesn't have the development system installed either.

    apt-get install make


Firewall

First thing to do is get the firewall up and running. While I plan to tightly control which services are exposed on which ports, I still feel much more comfortable having an explicit list of ports which are accessible from the internet at large. I also like to set up and test services locally while they are still blocked (including only opening up access from my own systems so I can do remote testing without worrying about others getting into the server while it is a work-in-progress).

I use an iptables based firewall that is manually configured for the system. I've been using pretty much the same setup for years, though I continuously tweak it. The script is written as an init.d service script so that I can install it there and have it run automatically at system startup.
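The skeleton of such an init.d script looks roughly like the following sketch (the echo lines stand in for the real rule loading shown below; it's wrapped in a function here purely so it can be exercised inline, while in /etc/init.d it would be the top-level case statement):

```shell
#!/bin/sh
# Minimal init.d-style wrapper for a firewall script (a sketch).
firewall() {
    case "$1" in
        start)   echo "loading iptables rules" ;;   # the iptables commands go here
        stop)    echo "flushing iptables rules" ;;  # e.g. iptables -F; iptables -X
        restart) firewall stop; firewall start ;;
        *)       echo "Usage: firewall {start|stop|restart}"; return 1 ;;
    esac
}
firewall start
```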

In addition to the typical port protections, I also keep a blacklist of IPs for which I block all access to my server. Systems get on this list when I see that they are trying to hack into my system via repeated SSH login attempts.
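As an illustration of how such a blacklist might be fed, here's a hypothetical one-liner that tallies the source IPs from failed-SSH-login lines (sample log lines are inlined below; in practice you'd read /var/log/auth.log, whose exact format varies a bit by distro and release):

```shell
# Tally failed-login source IPs, most frequent first.
offenders=$(awk '/Failed password/ {print $(NF-3)}' <<'EOF' | sort | uniq -c | sort -rn
May  1 12:00:01 myhost sshd[101]: Failed password for root from 10.0.0.5 port 4822 ssh2
May  1 12:00:03 myhost sshd[102]: Failed password for root from 10.0.0.5 port 4823 ssh2
May  1 12:00:09 myhost sshd[103]: Failed password for admin from 10.0.0.9 port 5100 ssh2
EOF
)
echo "$offenders"
```

The top entries are candidates for the blocked-IPs file the firewall script reads.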

The core iptables rules in the script include:

# Create a new chain named "filter"
iptables -N filter                # add the new chain

# allow established connections
iptables -A filter -m state --state ESTABLISHED,RELATED -j ACCEPT

# if there are any ports to be dropped
if [ -f "${FILE_DroppedPorts}" ]; then
  grep -v "^#" "${FILE_DroppedPorts}" | while read proto port
  do
      # for non-blank lines
      if [ "x${proto}" != "x" ]; then
          iptables -A filter -i eth0 -p ${proto} --dport ${port} -j DROP
      fi
  done
fi

# if there are any blocked IPs
if [ -f "${FILE_BlockedIPs}" ]; then
  grep -v "^#" "${FILE_BlockedIPs}" | while read ip
  do
      if [ "x${ip}" != "x" ]; then
          iptables -A filter -s ${ip} -j LOG
          iptables -A filter -s ${ip} -j DROP
      fi
  done
fi

# allow ssh to this host from anywhere
iptables -A filter -p tcp --dport ssh -j ACCEPT

# allow HTTP/HTTPS to this host
iptables -A filter -i eth0 -p tcp --dport http  -j ACCEPT
iptables -A filter -i eth0 -p tcp --dport https -j ACCEPT

# allow SMTP, SMTPS and SMTP/TLS to this host
iptables -A filter -i eth0 -p tcp --dport smtp  -j ACCEPT
iptables -A filter -i eth0 -p tcp --dport smtps -j ACCEPT
iptables -A filter -i eth0 -p tcp --dport 587   -j ACCEPT

# allow IMAPs & POP3s to this host
iptables -A filter -i eth0 -p tcp --dport 993 -j ACCEPT
iptables -A filter -i eth0 -p tcp --dport 995 -j ACCEPT

# Allow DNS lookups to this host
iptables -A filter -i eth0 -p tcp --dport domain -j ACCEPT
iptables -A filter -i eth0 -p udp --dport domain -j ACCEPT
iptables -A filter -i eth0 \
             -p udp --sport domain --dport 1024: -j ACCEPT

# allow outgoing ftp connections
iptables -A filter  -p tcp --sport 21 \
              -m state --state ESTABLISHED -j ACCEPT
iptables -A filter -p tcp --sport 20 \
              -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A filter -p tcp --sport 1024: --dport 1024:  \
              -m state --state ESTABLISHED -j ACCEPT

# let people ping us
iptables -A filter -p icmp -j ACCEPT

# Log all else
iptables -A filter -j LOG

# drop all else
iptables -A filter -j DROP

# install the filter chain for incoming traffic
iptables -A INPUT   -j filter

If you're interested, you can download the script and associated files here.

Note that at this point, while I'm setting up the system, many of the ports opened above are commented out; then, as I install the various components (such as Apache2), I open the respective port.

Once completed, I installed the script in /etc/init.d using the install directive in my Makefile (make install) and then used the following command to setup the necessary /etc/rc*.d files to ensure the firewall started as necessary when the system was booted.

update-rc.d firewall defaults


Backups

Whether or not we actually do it, we all know that we should be backing up our systems and data. This is especially true for remote systems where we don't have easy local access.

My hosting provider does have backup options, but they cost recurring money that I don't want to spend if I don't have to. So, my solution is to back up my remote server onto one of my home servers (where I have TBs of space anyway).

Since I have a firewall at home, I have to do the backup across a two-step ssh tunnel similar to what I described in Backing up using SSH & Rsync. The first connection goes from my remote server to my firewall and the second connection goes from the remote server through the first connection to my backup server. I then rsync a number of directories on the remote server to the backup server, including:

/etc, /var, /usr/local, /home

For security reasons, I require private key authentication for this connection on both my gateway and my backup server. I use a user account which has no login shell and no login directory, and I configure things so that the only service that can be accessed is the rsync service. Not perfect, but it's good enough that I can get some sleep at night.

One problem with this setup is that the second ssh tunnel connects to a port on localhost in order to establish the connection to the remote system, which can be a problem if there are other ssh tunnels set up similarly. To get around that, I add an alias for my backup server to the localhost entry in the /etc/hosts file. So, rather than connecting to localhost, the second tunnel connects to the host backup_server and thus keeps all of the SSH host keys separate.
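Putting the pieces together, the two hops look roughly like this; the host names, user name, and local port are all placeholders, and the commands are built as strings here rather than executed:

```shell
# The /etc/hosts alias that keeps the tunnelled host's ssh key separate:
hosts_alias="127.0.0.1   localhost backup_server"
# Hop 1: from the remote server, forward local port 2222 through the home
# gateway to the backup server's sshd (hypothetical names):
hop1="ssh -f -N -L 2222:backup_server:22 backupuser@gateway.example.com"
# Hop 2: rsync over the tunnel; "backup_server" resolves to 127.0.0.1:
hop2="rsync -az -e 'ssh -p 2222' /etc /var /usr/local /home backup_server:/backups/remote/"
printf '%s\n%s\n%s\n' "$hosts_alias" "$hop1" "$hop2"
```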

If you're interested, you can download a modified version of the script (I removed any credentials & system names) from here.

Bind9 (DNS Server)

I host DNS for most of the domains for which I host mail (a few of my friends host their own DNS, but use my mail server). A long time ago, I wrote a shell script that creates the necessary configuration files for the set of domains I manage (which makes it easy to add new domains which follow the same rules and makes it easy to change things around when I change my configuration).

Preparation for the move

Since nameserver changes can take some time to propagate through the internet, this is the first service that I installed, configured and exposed on the new system. In preparation for the move, I went to my old nameserver and cranked down the caching settings for the domains I hosted there in order to reduce the propagation time. My typical settings are:

@     IN    SOA (
      2010010200      ; serial number
      86400           ; refresh every 24 hours
      3600            ; retry after an hour
      604800          ; expire after 7 days
      86400 )         ; keep 24 hours

In preparation for the move, about a week in advance I reduced these settings to:

@     IN    SOA (
      2010010800      ; serial number
      3600            ; refresh every hour
      1800            ; retry after a half hour
      7200            ; expire after 2 hours
      3600 )          ; keep 1 hour
And finally, the day before the switch, I moved to:

@     IN    SOA (
      2010010900      ; serial number
      1800            ; refresh every half hour
      1800            ; retry after a half hour
      600             ; expire after 10 mins
      600 )           ; keep 10 mins

Installation and configuration

I installed the nameservice daemon software and utilities using:
apt-get install bind9 dnsutils bind9-doc resolvconf

I then copied my setup files from the old server to the new server. The way that named.conf is managed has changed. On my old server all of the settings were in that one file. However, in Ubuntu, that file (/etc/bind/named.conf) is intended to be left unchanged: the local options are supposed to be placed into /etc/bind/named.conf.options while the zone references are intended to be placed into /etc/bind/named.conf.local. So I changed my scripts to match the new model and modified the Makefile to correctly install the new components.
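Under the split layout, each hosted domain gets its own stanza in named.conf.local; a minimal master-zone entry looks roughly like this (the domain and zone-file path are placeholders):

```
// one stanza per hosted domain in named.conf.local
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
};
```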

I've always run my named (the nameservice daemon) within a chrooted environment, and every time I do this I have to yet again figure out what pieces need to be there in order to get things working. So this time, I wrote a script and ran it to create the chroot environment for me (and now I don't have to figure it out from scratch the next time!). In addition to creating the chroot environment, I had to change the OPTIONS directive in /etc/default/bind9 to include "-t /var/cache/bind" so that the file now looks like:

OPTIONS="-u bind -t /var/cache/bind"
#OPTIONS="-u bind"
# Set RESOLVCONF=no to not run resolvconf
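The chroot-creation script itself boils down to steps like the following sketch (the directory layout is my assumption from a typical Debian/Ubuntu bind9 install; this version targets a temp directory rather than the live jail at /var/cache/bind, which would need root):

```shell
# Populate a skeleton chroot tree for named (sketch).
jail=$(mktemp -d)    # stand-in for the real jail directory
mkdir -p "$jail/etc/bind" "$jail/var/cache/bind" "$jail/var/run/named" "$jail/dev"
# named also wants device nodes inside the jail, e.g. (as root):
#   mknod "$jail/dev/null"   c 1 3
#   mknod "$jail/dev/random" c 1 8
ls "$jail"
```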

In first setting up the new server, I made no changes other than to add a new entry for my new server. So my new nameserver had pretty much the same host entries that were on the old server. So I ran my script for creating and installing my named configuration and restarted the bind9 service.

At this point, I opened the DNS TCP & UDP ports on my firewall so that I could accept incoming nameservice requests. In order to test the service, I went to my old server and used nslookup to test the new server:

# nslookup
> server
Default server:

This showed that things were working as I intended.

The Switchover

At this point, everything was ready to go, so I went to my domain registrar (Network Solutions) and changed the host records for my nameservers to make the new server my primary DNS server and the old server the secondary.

This worked fine (though they warned me it could take 72 hours to propagate) and I ran a bunch of tests from my home network, my work network and my old server and everything was peachy keen.

Web Server

I run a web server for my own family web site. It's all hand-coded html (yeah, kinda old fangled, but I haven't had the time, energy or inclination to re-architect it). Setting it up on the new server was pretty simple.

First step was to copy over the directory hierarchy from the old server to the new server. I just tar'd it up, scp'd it over to the new server and untar'd it within the /home/www directory.

Next step involved getting apache2 installed...

apt-get install apache2

The configuration for the web servers is located in /etc/apache2/sites-available, which comes with a single default file. I renamed this file (allowing for more sites at some point in the future) and edited it to match up the settings from the old server.

Server Side Includes (SSI)

SSI is a capability on the server which allows an html file to include html from other files on the same web server. I use this feature extensively to maintain a consistent menu structure by placing the menu in one file and including it in all the html files on the server.

To enable this feature, I did the following:

  1. Set the Includes option within the configuration section for my virtual host.
  2. Set the +XBitHack option as well. This allows me to indicate to the server that there's an include directive in the file by simply setting the executable bit on the file (rather than having to have a particular suffix on the html file).
  3. Enabled mod-include by running the following command:
    a2enmod include
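
To illustrate how the pieces fit together (the file names here are made up): with Includes and +XBitHack in place, a page pulls in the shared menu via an SSI comment directive, and the executable bit is what tells Apache to parse the file:

```shell
# Build a page that includes a shared menu (illustrative names only)
cd "$(mktemp -d)"            # stand-in for the real document root
cat > index.html <<'EOF'
<html><body>
<!--#include virtual="menu.html" -->
<p>Page content here...</p>
</body></html>
EOF
chmod +x index.html          # XBitHack: the exec bit marks the file for SSI parsing
```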


Proxy Servers

I run a few proxy servers on my remote server that I have found useful when I'm behind some crazy firewalls or when an ISP has tight controls on the number of outgoing connections -- I've run into ratcheted-down connection limits on my former home ISP (RoadStar Internet) and at some hotels while on the road.

So I set up the proxies on my remote server, SSH to the server, and then tunnel my services through it.

WARNING: You have to be very careful when you set up proxies so that you don't end up creating an open proxy that others can use to make it appear that bad things are coming from your server.

Socks 5 Proxy

Socks 5 is used for proxying many of my different Instant Messenger connections (I have like 5 of them). For Ubuntu, the common/best choice seems to be dante-server, which I installed using:

apt-get install dante-server

I configured it to only allow connections from the local system (since I will have an SSH tunnel to the server). This prevents others from using it unless they have internal access to my server.

*** /etc/danted.conf.orig       2009-12-31 11:29:41.000000000 -0500
--- /etc/danted.conf    2009-12-31 11:39:16.000000000 -0500
*** 37,43 ****

# the server will log both via syslog, to stdout and to /var/log/lotsoflogs
#logoutput: syslog stdout /var/log/lotsoflogs
! logoutput: stderr

# The server will bind to the address, port 1080 and will only
# accept connections going to that address.
--- 37,43 ----

# the server will log both via syslog, to stdout and to /var/log/lotsoflogs
#logoutput: syslog stdout /var/log/lotsoflogs
! logoutput: syslog

# The server will bind to the address, port 1080 and will only
# accept connections going to that address.
*** 45,54 ****
--- 45,58 ----
# Alternatively, the interface name can be used instead of the address.
#internal: eth0 port = 1080

+ internal: port=1080
# all outgoing connections from the server will use the IP address

+ external:
# list over acceptable methods, order of preference.
# A method not set here will never be selected.
*** 57,66 ****

# methods for socks-rules.
! #method: username none #rfc931

# methods for client-rules.
! #clientmethod: none

#or if you want to allow rfc931 (ident) too
#method: username rfc931 none
--- 61,70 ----

# methods for socks-rules.
! method: username none #rfc931

# methods for client-rules.
! clientmethod: none

#or if you want to allow rfc931 (ident) too
#method: username rfc931 none
*** 106,112 ****
# can be enabled using the "extension" keyword.
# enable the bind extension.
! #extension: bind

--- 110,116 ----
# can be enabled using the "extension" keyword.
# enable the bind extension.
! extension: bind

*** 162,167 ****
--- 166,178 ----
#     method: rfc931 # match all idented users that also are in passwordfile

+ #
+ # Allow any connections from localhost (they will get here via SSH tunnels)
+ #
+ client pass {
+       from: port 1-65535 to:
+ }
# This is identical to above, but allows clients without a rfc931 (ident)
# too.  In practise this means the socksserver will try to get a rfc931
# reply first (the above rule), if that fails, it tries this rule.

Web (HTTP/HTTPS) Proxy Server

Since I already had the web server up and running, setting up a web proxy was easy. First I had to ensure that the necessary modules were installed and enabled:

a2enmod proxy
a2enmod proxy_connect
a2enmod proxy_ftp
a2enmod proxy_http

Then I edited the /etc/apache2/httpd.conf file and added the following entries:

ProxyRequests On

<Proxy *>
  AddDefaultCharset off
  Order deny,allow
  Deny from all
  Allow from
</Proxy>

AllowConnect 443 563 8481 681 8081 8443 22 8080 8181 8180 8182 7002

The AllowConnect option is necessary if you're going to proxy other connections (such as HTTPS). Most of those numbers are legacy from some point in the past. The really necessary one is 443 (for HTTPS); some of the 8xxx ones were from when I was doing web services testing from behind a firewall at work (so I could invoke the web service endpoint from my test application). Not sure about all the others, but I'm not too worried about it since I only accept proxy requests from the local system.

Mail Server

Setting up a mail server can be somewhat complex, especially when you throw in the fact that I was moving a running mail server to a new system and adding new client capabilities. On my old server, all of my users had to SSH into my server with private key authentication and then tunnel POP & SMTP over the SSH connection. This could be a pain (to say the least) and restricted access for clients like the iPhone or other devices. Most of my users (family & friends) are using an ssh tunnelling product from VanDyke that was discontinued back in 2004.


First step is to install the necessary components. Some of these were already installed with the server OS package (e.g. Postfix), but there's nothing wrong with making sure...
apt-get update
apt-get install postfix
apt-get install courier courier-pop-ssl courier-imap-ssl courier-doc 
apt-get install spell mail

Before actually accepting and processing mail, I thought it best to get the client protocols all working, so on to the clients.

Mail Clients

I needed to enable support for your typical mail clients such as Outlook and Thunderbird (which require IMAP or POP3 to retrieve mail and SMTP to send mail) as well as web browser clients. In the past, I did not support web clients and I required mail clients to tunnel their POP3 & SMTP over ssh tunnels. With the new server, I wanted to allow access without requiring ssh tunnels so that other clients (such as my iPhone) that didn't have ready support for ssh tunneling could get to the mail server. I also wanted to add browser-based support so that people could check their email from other locations (such as a friend's computer).

This involved the following steps:

Secure Sockets Layer (SSL)

For remote access to my server I needed to enable SSL so that user credentials were protected. My intent was to enable SSL on all the standard mail client protocols (SMTP, IMAP and POP) and to enable browser based access to mail via HTTPS and a web server based mail client.

Certificate Generation

In order to support SSL, I needed to get an SSL certificate. I could have created my own certificate and signed it myself, but that would have led to error messages from the clients telling my users that perhaps they shouldn't trust my server. Instead, I signed up for an SSL certificate from GoDaddy, which was running a special: $12.95/year for up to 5 years.

In order to create the certificate, I had to generate my private key and then a certificate signing request using the following commands:

*** make sure openssl is installed
# apt-get install openssl

*** Generate 4096 bit RSA server key
# openssl genrsa -des3 -out server.key 4096
Generating RSA private key, 4096 bit long modulus
e is 65537 (0x10001)
Enter pass phrase for server.key: abcd
Verifying - Enter pass phrase for server.key: abcd

*** Generate certificate signing request for server key (note that the
*** "Common Name" must be the name of the host that the clients will connect
*** to if you don't want to get ssl errors)
# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key: abcd
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]: US
State or Province Name (full name) [Some-State]: Virginia
Locality Name (eg, city) []: Waterford
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Cahills
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

At this point, I took the server signing request, server.csr, and sent it to GoDaddy to get them to sign it and create my certificate. If, on the other hand, I wanted to do a self-signed certificate, I would have performed the following steps:

*** sign the csr using our own server key (making this a self-signed cert)
# openssl x509 -req -days 1825 -in server.csr \
  -signkey server.key -out server.crt
Signature ok
Getting Private key
Enter pass phrase for server.key: abcd

To test this, I configured Apache2 to support SSL and tested access to the site. I first needed to enable the SSL module using the following command:

a2enmod ssl

I took the server key and server certificate and placed them into a secure non-standard location (no need to advertise where) and set the access modes on the directory to restrict it to root only. In order for the server key to be used without a pass phrase, I ran the following commands to remove the pass phrase from the file:

mv server.key
openssl rsa -in -out server.key
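
The file names in the two commands above are elided; in generic form, stripping a pass phrase looks something like this sketch (the throwaway key, pass phrase, and file names are illustrative):

```shell
# Generic sketch of removing a pass phrase from an RSA key
cd "$(mktemp -d)"
# create a throwaway encrypted key to demonstrate with
openssl genrsa -des3 -passout pass:abcd -out server.key.secure 2048
# rewrite it without the pass phrase
openssl rsa -passin pass:abcd -in server.key.secure -out server.key
```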

I copied the default Apache2 site file into one for and set it up using the following commands:

cp /etc/apache2/sites-available/default /etc/apache2/sites-available/
ln -s /etc/apache2/sites-available/ /etc/apache2/sites-enabled/mail.cahillfamil

I then edited the configuration file to enable SSL and to point to the newly installed certificate and key files:

NameVirtualHost *:443
<VirtualHost *:443>
      ServerAdmin webmaster

      DocumentRoot /home/www/
      ErrorLog /var/log/apache2/error.log

      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      LogLevel warn

      CustomLog /var/log/apache2/access.log combined
      ServerSignature On

      SSLEngine On
      SSLCertificateFile /path-to-ssl-files/server.crt
      SSLCertificateKeyFile /path-to-ssl-files/server.key
</VirtualHost>

I also wanted to automatically redirect any http: access to https:, so I added the following section to the default site file, which uses the RedirectPermanent directive to automatically redirect access on port 80:

<VirtualHost *:80>
      ServerAdmin webmaster
      RedirectPermanent /
</VirtualHost>


Courier Mail Server

After poking about some, I came to the conclusion that the right mail server to expose IMAP and POP interfaces for my mail clients is the Courier Mail Server.

Courier requires that you use the Maildir structure for user mailboxes, while Postfix uses the mbox structure by default. So I changed Postfix to use the Maildir structure by adding the following setting to /etc/postfix/

home_mailbox = Maildir/

I manually created an empty Maildir structure for all my user accounts.
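
An empty Maildir is just three subdirectories, so the per-user creation amounts to something like this (the path is a stand-in for a real home directory; Courier also ships a maildirmake utility that does the same job):

```shell
# Sketch: create an empty Maildir for one user
home=$(mktemp -d)            # stand-in for the real /home/<user>
mkdir -p "$home/Maildir/cur" "$home/Maildir/new" "$home/Maildir/tmp"
```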

For SSL, Courier requires the key and the certificate to be in a single .pem file. So I concatenated server.key and server.crt into a single server.pem file.
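
The concatenation itself is a one-liner, demonstrated here with stand-in files (the real server.key/server.crt live in the protected directory mentioned earlier):

```shell
cd "$(mktemp -d)"
printf 'KEY\n' > server.key    # stand-in for the real private key
printf 'CRT\n' > server.crt    # stand-in for the real certificate
cat server.key server.crt > server.pem
chmod 600 server.pem           # the .pem contains the private key; protect it
```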

I edited the /etc/courier/imapd-ssl file to make the following changes:

  • Set SSLPort to 993.
  • Set both IMAPDSSLSTART and IMAPDSTARTTLS options to YES to allow both IMAP over SSL and TLS within IMAP (the latter being a TLS session that's started from within the IMAP session while the former is a plain IMAP session over an SSL tunnel).
  • Set IMAP_TLS_REQUIRED to 0 so that local connections from the web mail server could use IMAP without having to do TLS on the local (same system) connection. (I planned to still block the standard IMAP port (143) in the firewall, so remote clients would not be able to access their mail without SSL/TLS.)
  • Set TLS_CERTFILE to point to the recently created server.pem file.
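
Assuming the stock Courier variable names, the resulting /etc/courier/imapd-ssl settings look roughly like this (the certificate path is illustrative):

```
SSLPORT=993
IMAPDSSLSTART=YES
IMAPDSTARTTLS=YES
IMAP_TLS_REQUIRED=0
TLS_CERTFILE=/path-to-ssl-files/server.pem
```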

I edited the /etc/courier/imapd file to make the following changes:

  • Added "AUTH=PLAIN" to the IMAP_CAPABILITY setting so that plain text authentication is allowed on non-tls connections to the imap server. This is necessary for the local connection from some web server mail clients which don't come with support for CRAM-MD5 or other non-PLAIN authentication mechanisms.

I edited the /etc/courier/pop3d-ssl file to make the following changes:

  • Set SSLPort to 995.
  • Set both POP3DSSLSTART and POP3DSTARTTLS options to YES to allow both POP3 over SSL and TLS within POP3 (the latter being a TLS session that's started from within the POP3 session while the former is a plain POP3 session over an SSL tunnel).
  • Set POP3_TLS_REQUIRED to 0 so that local connections from the web mail server could use POP3 without having to do TLS on the local (same system) connection. (I planned to still block the standard POP3 port (110) in the firewall, so remote clients would not be able to access their mail without SSL/TLS.) However, this would still let my existing clients, which ssh to the server and then use non-TLS POP, get their email.
  • Set TLS_CERTFILE to point to the recently created server.pem file.

Restarted the courier related services:

service courier-imap stop
service courier-imap-ssl stop
service courier-pop stop
service courier-pop-ssl stop
service courier-imap start
service courier-imap-ssl start
service courier-pop start
service courier-pop-ssl start

Yeah, I probably could have simply used the "restart" command on each of them, but I wanted to have them all stopped and then start them all so I was sure that they all came up cleanly under the same configuration.

Now it was time to test things. First a quick telnet connection to the local imap port (143):

# telnet server 143
Copyright 1998-2005 Double Precision, Inc.  See COPYING for distribution information
01 LOGIN username password
0000 logout
* BYE Courier-IMAP server shutting down
0000 OK LOGOUT completed

So that worked. I ran a similar test for POP3, which also worked. Now I was ready for some remote testing. First step was to go back to my firewall and open ports 993 (IMAPS) and 995 (POP3S) to allow incoming connections to the IMAP and POP services.

Then I went to an online testing tool and ran several tests with the POP3S implementation (with test accounts, of course), which all worked fine.

I didn't see a similar testing tool for IMAP, so I ran some tests from one of my home computers using the following command:

openssl s_client -crlf -connect

This worked like a charm (with some finagling of the /etc/hosts file to override the server's IP address), so at this point I figured I had IMAP and POP up and running.

Authenticated SMTP

When setting up an SMTP server, you have to be very careful that you don't configure your server as an open relay (where it will send mail from anyone to anyone). It seems that hackers, scammers and spammers are forever looking for new open relays that they can use to send out spam and shortly after opening an SMTP port on the internet you can usually find attempts to make use of the server as a relay.

For basic unauthenticated SMTP (e.g. where there's no local user authentication within the SMTP session), I configured the server to only accept incoming mail whose delivery address is within one of my managed domains. Any mail with a destination address outside of my domains is rejected before we accept the message itself.

However, that configuration wouldn't work very well for my users, who typically do want to send mail to people outside of my domain. In the past, my solution was simple: ssh tunnel to my host, then send mail via SMTP on the localhost interface, where I could treat any local connection as, by default, authenticated.

While I am continuing to allow that configuration with the new server setup, it wouldn't work for those users trying to use a mail client without the ssh tunnel. So I had to enable authenticated SMTP and I had to configure it to require such sessions over SSL.

The SMTP server is managed by Postfix itself. So the first step was to modify the /etc/postfix/ configuration file to only accept mail with recipients in my networks:

# restrict smtp operations on unauthenticated (port 25) connections
smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination
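
Note that permit_mynetworks hinges on how mynetworks is defined; something like the following (the addresses here are illustrative, not my actual networks):

```
mynetworks =
```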

Then I modified the /etc/postfix/ configuration file to enable both TLS within SMTP sessions and SMTP over SSL/TLS by including the following directives:

submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

These settings, along with the base configuration, should give me server to server SMTP on port 25 and client to server user authenticated SMTP over TLS/SSL on ports 465 and 587.

Now that I had SMTP which allows for authentication, I had to install and configure the SASL authentication daemon as follows:

  1. I installed the package using:

    apt-get install libsasl2 sasl2-bin
  2. I edited /etc/default/saslauthd to make the following changes:

    • Set START=yes so the daemon will start.
    • Configured saslauthd to place its runtime information underneath the postfix chroot environment by changing the OPTIONS parameter and adding the following lines:
      OPTIONS="-c -m ${PWDIR}"
  3. I created the saslauthd run directory using:
    mkdir -p /var/spool/postfix/var/run/saslauthd
  4. Configured saslauthd to leave its files readable by postfix (so postfix could communicate with the daemon) using the following command:
    dpkg-statoverride --force --update --add root sasl 755 \
  5. Created /etc/postfix/sasl/smtpd.conf file and added the following lines:
    pwcheck_method: saslauthd
    mech_list: plain login
  6. Restarted both saslauthd and postfix

Now I was ready to test, so I went back to my firewall and opened ports 25 (SMTP), 465 (SMTP over SSL) and 587 (TLS within SMTP).

To test all of this you could use a mail client, or if you're a bit more adventurous (and want to see exactly what's going on) you can do it manually within a telnet/openssl connection. The following is an example test session:

$ telnet localhost 25
Connected to localhost.
Escape character is '^]'.
220 ESMTP Postfix (Ubuntu)
ehlo localhost
250-SIZE 10240000
250 DSN
mail from: user@localhost
250 2.1.0 Ok
rcpt to:
250 2.1.5 Ok
data
354 End data with <CR><LF>.<CR><LF>
Subject: Test message

Sending yet another test... hope it gets there...

.
250 2.0.0 Ok: queued as B28C461A0A9
221 2.0.0 Bye
Connection closed by foreign host.

That's a standard, unauthenticated SMTP session. I find using a manual session for testing makes it easier to identify the problem when there is one. For example, in the list of responses after the "ehlo" command, you would see "250-STARTTLS" -- this indicates that TLS is enabled within the server.

To test an authenticated SMTP session, you will need to enter a command similar to the following (I usually do this right after the "ehlo" command, though I'm not sure if it has to be exactly there):

auth plain AHVzZXJpZABwYXNzd29yZA==
235 2.7.0 Authentication successful

The "AHVzZXJpZABwYXNzd29yZA==" parameter is a base64 encoding of a plain text SASL authentication string. You can generate one manually using the following perl command:

perl -MMIME::Base64 -e 'print encode_base64("\000userid\000password")'

where userid is the test user's id and password is the test user's password. If you have a special character in either string (such as an @ in the user id, e.g. user@host), you need to escape it (e.g. "\@").
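
If perl isn't handy, the same string can be produced with printf and base64 (here userid/password are the literal placeholder strings):

```shell
# SASL PLAIN string is NUL-separated: \0userid\0password, base64 encoded
printf '\0userid\0password' | base64
# -> AHVzZXJpZABwYXNzd29yZA== (matching the example above)
```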

So, now that I have all that, I ran the following tests:

  • Test local unauthenticated SMTP connection to send mail to remote system (for my clients that ssh to server and send out from there)
    telnet localhost 25 
    and then run through SMTP session described above.
  • Test remote unauthenticated SMTP connection doesn't allow sending mail to remote locations. Go to a remote system and run:
    telnet 25
    and try the SMTP session above -- it should fail with either "permission denied" or "relay access denied" when you enter the "rcpt to" command.
  • Test remote unauthenticated SMTPS connection as follows:
    openssl s_client -starttls smtp -crlf -connect

    and try the SMTP session above -- it should also fail, this time with permission denied, since we only set up authenticated SASL connections on this port.

  • Test remote authenticated SMTPS connection using the following:
    openssl s_client -starttls smtp -crlf -connect
    and this time include the "AUTH PLAIN" command at the start of the session. This should succeed.
  • Test remote authenticated SMTP over TLS connection as follows:
    openssl s_client -crlf -connect
    and include the "AUTH PLAIN" command at the start of the session. This should succeed.

Web Server Mail Client

For browser clients, there are a couple of obvious possibilities that come to mind:

  • SqWebMail - a component of the Courier Mail Server which provides access to mail files via direct access to the mailboxes.
  • Squirrel Mail - a web server based mail client that gets lots of good recommendations as being one of the best open source solutions. This tool uses the IMAP interface to access the user's mail files rather than direct manipulation.

    As a bonus, this tool also has an available Outlook-like plug-in that gives users the look/feel of Outlook 2003.

I took a look at the two tools and decided to go with SquirrelMail and, for now, just install the base toolkit. I'll explore the Outlook plug-in at some point in the future. Ubuntu has SquirrelMail available as a standard package, so I installed it using the following command:

apt-get install squirrelmail

I then modified the /etc/apache2/sites-available/ configuration file to use the squirrelmail application as the document root, so my users go straight into the application when they visit the site in a browser. The modified file looks as follows:

NameVirtualHost *:443
<VirtualHost *:443>
      ServerAdmin webmaster@localhost

      DocumentRoot /usr/share/squirrelmail
      ErrorLog /var/log/apache2/error.log

      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      LogLevel warn

      CustomLog /var/log/apache2/access.log combined
      ServerSignature On

      SSLEngine On
      SSLCertificateFile /etc/ssl/server.crt
      SSLCertificateKeyFile /etc/ssl/server.key

      Include /etc/squirrelmail/apache.conf
</VirtualHost>


Used my browser to test it and everything seems kosher.

SPAM filtering

To filter or not to filter.... For many years I ran my server with no server-side filtering and instead relied on client filtering. However, the abundance of crap that keeps on coming only seems to grow exponentially every year, and I finally convinced myself that server-side filtering was not just useful but mandatory. This is especially evident when you're trying to download mail after having been disconnected for a day or so and find that you have hundreds of email messages, most of which are clearly SPAM.

I use spamassassin for spam filtering. Looking around at most of the how-tos/docs, I see that most people recommend using spamassassin to just flag spam but then go ahead and deliver it to the user's mailbox. This is probably the best solution if you don't want to lose any emails that have incorrectly been marked as SPAM. However, that means that my clients would have to download hundreds of spam messages just to throw them out when they got to the client.

For my system, I'd rather have Spamassassin get rid of at least some spam and then let the questionable stuff through. So I've set things up such that mail messages that get a Spamassassin grade of 10 or higher get saved off into a directory on the server (one directory for each day to ease management). For messages that have a grade between 5 and 10, the subject gets rewritten to include a SPAM indicator, but the message is still delivered to the intended recipient.

I've been doing it this way for the past 2 years. We get on the order of five thousand (yeah: 5,000) messages culled this way each day and I've yet to find or get a report of any false positives. Note that there's still a bunch of email that gets through with grades between 5 and 10.
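
My spamchk script isn't reproduced here, but the grading policy described above amounts to something like this sketch (the function name and threshold handling are mine, not the script's):

```shell
# Sketch of the policy: score >= 10 -> quarantine to a dated directory,
# 5 <= score < 10 -> tag the subject and deliver, otherwise deliver as-is.
# The integer part of the SpamAssassin score is enough for the comparison.
classify() {
    score=${1%.*}                      # drop any decimal portion
    if   [ "$score" -ge 10 ]; then echo quarantine
    elif [ "$score" -ge 5  ]; then echo tag-and-deliver
    else                           echo deliver
    fi
}
```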

Anyway, to set this up on the new server:

  • Install the latest version of spamassassin using:
    apt-get update
    apt-get install spamassassin spamd
  • Installed the spamchk script (not sure where I originally got it, but I've been using it on my old mail server for several years now) in /usr/local/bin/spamchk
  • Created /var/spool/postfix/spam/save and /var/spool/postfix/spam/tmp directories for processed messages
  • Edited the /etc/postfix/ file to add a content filter for mail coming in on the default smtp connection (we don't need it on the SSL connections since they are authenticated) and to add the spamchk invocation. The modified lines look as follows:
    smtp      inet  n       -       -       -       -       smtpd
      -o content_filter=spamchk:dummy

    And at the end of the file, added:

    # SpamAssassin check filter
    spamchk   unix  -       n       n       -       10      pipe
      flags=Rq user=spamd argv=/usr/local/bin/spamchk -f ${sender} -- ${recipient}
  • By default, Spamassassin places a detailed spam report in any message that is flagged as spam (spam score >= 5) and moves the original message to an attachment. I find this cumbersome, so instead I like to flag the subject of the message with a "[SPAM]" marker and otherwise leave the message alone (you still get the Spamassassin headers added to the message, but they are hidden from the default view in most mailers).

    To achieve this, I edited the /etc/mail/spamassassin/ file and made the following changes:

    ***       2010-01-02 10:45:58.000000000 -0500
    ---    2010-01-09 20:54:46.000000000 -0500
    *** 9,21 ****
    #   Add *****SPAM***** to the Subject header of spam e-mails
    ! # rewrite_header Subject *****SPAM*****
    #   Save spam messages as a message/rfc822 MIME attachment instead of
    #   modifying the original message (0: off, 2: use text/plain instead)
    ! # report_safe 1
    #   Set which networks or hosts are considered 'trusted' by your mail
    --- 9,21 ----
    #   Add *****SPAM***** to the Subject header of spam e-mails
    ! rewrite_header Subject [SPAM]
    #   Save spam messages as a message/rfc822 MIME attachment instead of
    #   modifying the original message (0: off, 2: use text/plain instead)
    ! report_safe 0
    #   Set which networks or hosts are considered 'trusted' by your mail
  • Spamassassin likes to learn from its mistakes (both false positives and false negatives). Since my users don't have local access to the system, I needed to add aliases which allow people to forward mail attachments that are or are not spam so that Spamassassin can use that information in its learning.

    First step was to get the script from Stefan Jakobs. This script has a dependency on the perl module MIME::Tools, which I downloaded and installed (along with a bunch of its dependencies) using:

    cpan -i MIME::Tools

    Then I set up the aliases in /etc/aliases as follows:

    # Spam training aliases
    spam: "|/usr/local/bin/ -L spam"
    ham: "|/usr/local/bin/ -L ham"

    When I tested it, the script failed because it couldn't open/write to the log file. I manually created the log file and set it to be writable by the tool.

The Switchover

The switchover had to be handled carefully in an attempt not to lose any mail as I moved things (or as little as possible). The sequence I worked out and used was as follows:

  1. Stop mail services on both the old and the new servers -- ALL mail services: SMTP, POP3, IMAP, etc.
  2. On the old server, tar up all of the existing user accounts and user mailboxes and transfer them to the new server.
  3. Copy the /etc/passwd and /etc/shadow files to the new server and copy out the user accounts that are moving and add them to the existing /etc/passwd and /etc/shadow files on the new server.
  4. Copy the /etc/postfix configuration files from the old server to the new server and merge in any of the local settings from the old server. In particular the virtual domains information for all of the domains I host had to be incorporated into the new setup.
  5. Copy the /etc/aliases file from the old server to the new server editing the file to remove any extraneous/old/useless entries. Run newaliases to notify Postfix of the changes.
  6. Untar the user accounts in /home on the new server and set the owner/group ownership as necessary.
  7. Convert Mbox mailboxes to the new Maildir format on the new server.

    While I do a lot of relaying of mail, there are a number of people who actually get their mail off of my server, so I needed to move their incoming mail to the new server; and because we changed from mbox format to Maildir format, I needed to split the mail up into individual files.

    I found a perl script (mb2md) to do the conversion. I ran a few tests and figured out that I would use the command as follows:

    mb2md -s "full path to mbox file" -d "full path to Maildir directory"
    And, since I was doing this as root, I would need to:
    chown -R <user> "full path to Maildir directory"
    so that the right user owned all the files.
  8. Create Maildir structures for those users who didn't have mail in their mailboxes.

    For those users who didn't have mail sitting in their mbox files on the old system, I needed to create the correct hierarchy within their login directory for Maildir delivery. So I ran a script similar to the following (I just did it from the command line, so I don't have an actual copy of the script) in /home:

    for user in user_list
    do
        mkdir $user/Maildir $user/Maildir/cur $user/Maildir/new $user/Maildir/tmp
        chown -R $user $user/Maildir
    done
  9. On both servers: edit the DNS records to point the mail server name at the new server's IP address and assign a new name to the old server. And, of course, publish these changes.
  10. Enable mail services on the new server (do not, for at least a day or so, enable mail services on the old server in order to force any mail in other SMTP queues to go to the new server).
  11. Test the setup by sending emails to various users in my hosted domains from local clients, clients in my home, and from my work email account to ensure that the changes had propagated out to the real world.
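For step 3, pulling out just the entries for the moving accounts can be done with a short helper. This is a sketch, not the exact commands I ran; the function name, file paths, and user names are all illustrative:

```shell
#!/bin/sh
# Append the passwd/shadow lines for the named users to an output file.
# Works on any colon-delimited account file; run against /etc/passwd and
# /etc/shadow in turn (the latter needs root to read).
extract_accounts() {
    account_file=$1; outfile=$2; shift 2
    for user in "$@"; do
        # Each passwd/shadow line starts with "username:"
        grep "^$user:" "$account_file" >> "$outfile"
    done
}

# Example: extract_accounts /etc/passwd passwd.moving alice bob
```

The resulting passwd.moving and shadow.moving fragments can then be copied to the new server and appended to its existing files.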
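Steps 7 and 8 can be collapsed into one small script: convert the mbox if the user has stored mail, otherwise just create the empty Maildir tree. This is a sketch under my assumptions (mb2md on the PATH, a /home-style layout); the paths and names are placeholders, not a copy of what I actually ran:

```shell
#!/bin/sh
# Convert one user's mbox to Maildir, or create an empty Maildir if the
# user had no stored mail. "base" is the parent of the home directories
# (e.g. /home); "mbox" is the user's old mbox file (e.g. /var/spool/mail/alice).
migrate_user() {
    base=$1; user=$2; mbox=$3
    if [ -s "$mbox" ] && command -v mb2md >/dev/null 2>&1; then
        # mb2md splits the single mbox file into one file per message
        mb2md -s "$mbox" -d "$base/$user/Maildir"
    else
        # No stored mail: just create the hierarchy Maildir delivery expects
        mkdir -p "$base/$user/Maildir/cur" "$base/$user/Maildir/new" "$base/$user/Maildir/tmp"
    fi
    # chown needs root; skip quietly when the account doesn't exist here
    if id "$user" >/dev/null 2>&1; then
        chown -R "$user" "$base/$user/Maildir" 2>/dev/null || true
    fi
}

# Example: migrate_user /home alice /var/spool/mail/alice
```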


That's about it... At least what I remember. I'm sure that there are things I did during the move that I forgot to write down, but I did try to record everything. I'll update this if/when I figure out anything I did wrong or forgot to note.

I hope someone out there finds this useful. I know I will the next time I need to move the mail server to a new system.


Tuesday, March 03, 2009

Cool gadget #14

I've always been an anti-multifunction office device kind of person. If you got a good printer, it sucked at scanning or faxing. If you got a good fax, it sucked at printing or scanning. If you wanted to print a lot inexpensively, you used a monochrome laser printer. If you wanted to print color, you used an ink jet type printer. None of the multifunction devices seemed to be good enough to replace multiple dedicated devices.

In my home office, I've had a good monochrome laserjet printer (HP 4000TN), a good inkjet printer (HP 1200DN), an excellent fax machine (Xerox something or other), a good copier (again a Xerox something or other), and a decent scanner.

Well, that has finally changed. The quality of all-in-one devices has gotten good enough that I now find them acceptable for most office tasks. Well, I guess I should clarify that I find the higher end devices satisfactory. The low end devices still are missing or have brain dead implementations of many of the core features that I require. The HP Color Laserjet CM2320fxi Multifunction Printer is one such all-in-one printer. The cost is a bit high for some home purchases (I paid a discounted $850), but the functionality gives me all the magic features I needed and does them all well enough that I can get rid of the existing multiple devices I have lying about which, together, accomplish some of the same tasks.

This device does the following tasks very well:

  • Built-in network printing from any computer in the house.
  • Black/white laser printing
  • Color laser printing -- looks as good as anything I've gotten off inkjets
  • Automatic duplex printing (printing both sides of the paper).
  • Black/white copying (single or multi-page)
  • Color copying (single or multi-page)
  • Fax sending/receiving with auto document feed
  • Automatic Scanning to email of multi-page documents (PDF)
  • Print directly from camera memory cards

My only complaints are:

  • it is somewhat noisier than my old laserjet printer, though after a few weeks I've gotten used to it and don't notice it all that much
  • it is tall (because of the scanner unit on top with space for paper outputs and with 2 input trays). So tall that I haven't hooked up the 2nd input tray or the top would hit the cabinets above. It would be nice if the scanner/control unit could be separated and placed to the side of the printer. Yeah that would look like two devices, but it would make it easier for my kids to see the top buttons.

Those are relatively minor nits. We are extremely happy with this printer and all of its features.

That said, I do continue to own a desktop flatbed photo scanner and a dedicated film scanner. I could probably do most of what I want to do with the flatbed scanner with the new device. However, there's a lot of convenience in having it easily reachable on my desk when scanning many prints, and I can take it with me when I go to my parents' house to scan old pictures there.

So while I have gotten rid of the fax machine, copier, laser printer and inkjet, I still have some specialized devices lying about. And, BTW, I sold the old devices for $100 and sent back the inkjet printer to HP for an upgrade rebate of another $100, so the net cost to me was just $650.
