How to configure an SMTP Black Hole

Requirement:  Build a mail server that will receive email for a particular domain and discard it immediately (and silently).

I started with a canned Debian 9 system at OVH, but this should work with other and older Debian versions too.

Log in as root and make sure you’re all up to date:

apt-get update && apt-get upgrade

Set your hostname.

hostname fake
echo "fake.fakemail.ca" > /etc/hostname

Install postfix

apt-get -y install postfix
# Use the below as a guide for any configuration questions.  You will probably only see the first two unless you do a 'dpkg-reconfigure postfix'.
# on the "General type of mail configuration" screen, select "Internet Site"
# on the "System mail name:" screen, accept the default of "localdomain".
# on the "Root and postmaster mail recipient:" screen, leave the field blank.
# on the "Other destinations to accept mail for (blank for none):" screen, remove your hostname from the list so that it reads "localhost, localhost.localdomain"
# on the "Force synchronous updates on mail queue?" screen, select "No" because we don't care about the mail queue.  Like at all.
# on the "Local networks:" screen, accept the default "127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128" entry.
# on the "Mailbox size limit (bytes):" screen, accept the default "0".
# on the "Local address extension character:" screen, accept the default "+".
# on the "Internet protocols to use:" screen, choose whatever applies to you.  The default of "all" is probably OK.

Now we can configure aliases and postfix.

echo "devnull: /dev/null" > /etc/aliases
newaliases
echo "@fakemail.ca devnull@localhost" > /etc/postfix/virtual
postmap /etc/postfix/virtual
echo "@fakemail.ca /dev/null" > /etc/postfix/vmailbox
postmap /etc/postfix/vmailbox
echo "virtual_mailbox_domains = fakemail.ca" >> /etc/postfix/main.cf
echo "virtual_mailbox_base = /var/mail/vhosts" >> /etc/postfix/main.cf
echo "virtual_mailbox_maps = hash:/etc/postfix/vmailbox" >> /etc/postfix/main.cf
echo "virtual_minimum_uid = 100" >> /etc/postfix/main.cf
echo "virtual_uid_maps = static:5000" >> /etc/postfix/main.cf
echo "virtual_gid_maps = static:5000" >> /etc/postfix/main.cf
echo "virtual_alias_maps = hash:/etc/postfix/virtual" >> /etc/postfix/main.cf
postfix reload
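
As an optional extra sanity check, you can confirm the lookup tables resolve the way we expect before pointing any real mail at the box (standard postconf/postmap queries):

postconf -n | grep virtual
# confirms the virtual_* settings above took effect
postmap -q "@fakemail.ca" hash:/etc/postfix/virtual
# should print devnull@localhost
postmap -q "@fakemail.ca" hash:/etc/postfix/vmailbox
# should print /dev/null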

Now watch the mail log and see everything addressed to @fakemail.ca go to /dev/null:

tail -f /var/log/mail.log
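
If you don't want to wait for real mail to arrive, a small SMTP test tool like swaks can generate a message to watch go down the drain (swaks isn't installed by default):

apt-get -y install swaks
swaks --to blah@fakemail.ca --server localhost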

Example log entries for an email going to /dev/null:

Jun 27 15:09:09 fake postfix/smtpd[3449]: connect from mail-bl2nam02on0124.outbound.protection.outlook.com[104.47.38.124]
Jun 27 15:09:09 fake postfix/smtpd[3449]: 355BB1F4F6: client=mail-bl2nam02on0124.outbound.protection.outlook.com[104.47.38.124]
Jun 27 15:09:09 fake postfix/cleanup[3454]: 355BB1F4F6: message-id=<CO2PR16MB0028E0F1D3FAF4817E66B622B0DC0@CO2PR16MB0028.namprd16.prod.outlook.com>
Jun 27 15:09:09 fake postfix/qmgr[3446]: 355BB1F4F6: from=<jeremy@jeremycole.com>, size=19947, nrcpt=1 (queue active)
Jun 27 15:09:09 fake postfix/local[3455]: 355BB1F4F6: to=<devnull@localhost>, orig_to=<blah@fakemail.ca>, relay=local, delay=0.22, delays=0.22/0.01/0/0, dsn=2.0.0, status=sent (delivered to file: /dev/null)
Jun 27 15:09:09 fake postfix/qmgr[3446]: 355BB1F4F6: removed
Jun 27 15:09:09 fake postfix/smtpd[3449]: disconnect from mail-bl2nam02on0124.outbound.protection.outlook.com[104.47.38.124] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7

How to remotely run Disk Repair (fsck) on OS X

The first thing you should do is make sure your remote control program is running properly at startup.  I’m using TeamViewer so I just added it to the main user’s startup items.  If you’re using VNC on your mac you probably don’t have to do anything special once it’s enabled.

The problem with Disk Repair (fsck) is that it needs to be run in single user mode, but you won’t have a GUI to work with on the remote side so the system needs to reboot, run fsck automatically, then reboot back to your normal GUI.

Here’s how:
Read more

Remove junk from a default Debian Linux install

Just some stuff to get rid of when building a basic-ish LAMP server in a VM:

apt-get --purge remove gnome* baobab bluez brasero brasero-common cdrdao cheese cheese-common dvd+rw-tools ekiga empathy empathy-common eog espeak espeak-data evolution evolution-common evolution-data-server evolution-data-server-common evolution-exchange evolution-plugins evolution-webcal fancontrol foo2zjs foomatic-db foomatic-db-engine foomatic-filters foomatic-filters-ppds gstreamer0.10-* hamster-applet hp-ppd hpijs hplip hplip-cups hplip-data inkscape isc-dhcp-client isc-dhcp-common laptop-detect lm-sensors mousetweaks openoffice.org* pm-utils ppp telepathy-* tomboy totem* transmission-* uno-libs3 vino xsane*

apt-get autoremove
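
If you'd rather preview what a purge like that will drag out with it before committing, apt-get can simulate the run:

# -s (--simulate) prints the actions without performing them; use the full package list from above
apt-get -s --purge remove gnome* baobab bluez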

 

Move swap partitions off of RAID1 array

After upgrading my new server from Lenny to Squeeze, I noticed that the iWeb default install on this machine had the swap partition on a mirror.  Not that I expect this machine to be doing much swapping, but I figured I should fix it anyways.  To destroy the array and make those partitions “plain” swap partitions, follow these steps:

### find out which array is the swap partition and see who the members are
cat /etc/fstab
cat /proc/mdstat
### fstab told us md1 is the swap partition and xvda5 and xvdb5 are the member partitions in that array
### turn off swap
swapoff -a
### stop the array and delete the superblocks.  without zeroing the superblocks the array will still be auto-assembled at boot and we'll get no swap
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/xvda5
mdadm --zero-superblock /dev/xvdb5
### use fdisk to change the partition types for xvda5 and xvdb5 to "82  Linux swap / Solaris"
fdisk /dev/xvda #(t, 5, 82, w)
fdisk /dev/xvdb #(t, 5, 82, w)
mkswap /dev/xvda5
mkswap /dev/xvdb5
swapon /dev/xvdb5
swapon /dev/xvda5
### take the reference to /dev/md1 out of mdadm.conf (1 less error message at boot)
vi /etc/mdadm/mdadm.conf
### modify fstab - change the reference to /dev/md1 to /dev/xvda5, then add another line for /dev/xvdb5
vi /etc/fstab
### reboot if you want to make sure it's all happy at boot time
reboot
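
Once it's back up, a quick optional check that both swap partitions are in use and the old array is really gone:

swapon -s
### both /dev/xvda5 and /dev/xvdb5 should be listed
cat /proc/mdstat
### md1 should no longer appear
free -m
### total swap should now be the sum of the two partitions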

I mainly wanted to do this so that if the swap space is ever actually being used, the system won’t have to mirror every write to the other drive.  I’ve also read that Linux is smart enough to distribute writes to swap space between partitions if more than one is available, which makes sense to me but I’d have to confirm that rumor.  This change will potentially cut down on a bunch of CPU usage and IO at some point in the future, and a side benefit is that the amount of swap space on this system has increased from 2GB to 4GB.

Still don’t want to ever use it though…  😉

Upgrade iWeb “SmartServer” from Debian Lenny to Squeeze

iWeb currently only offers Debian Lenny as a pre-install option on these servers, but since Lenny is dead as of February 2012 I wanted to start with Squeeze.  I tried the upgrade process from the debian.org site linked below first, but my server didn’t reboot properly after the GRUB2 install and I couldn’t even connect to the VNC console of my VM.  Since the hardware is about a bazillion miles away I don’t know if there’s anything I could have done from the console to fix grub and rescue the install; my only option was the auto re-image in the iWeb control panel.  I re-imaged the server with Lenny and while that was happening I did a bunch of reading about other people having fun with the new grub.

This is just a quick step-by-step.  Basically the standard instructions break grub-pc (GRUB2) on this Xen-based system.  Follow along until the end, then remove grub-pc and re-install grub-legacy.  Your system will then be ready to go!

All of this information is here: http://www.debian.org/releases/stable/amd64/release-notes/ch-upgrading.en.html but this page is far less reading.

Also, if your system is not a “stock” Lenny install, i.e. with 3rd party deb sources and a bunch of custom stuff, your mileage may vary.  I did this to a fresh, new server before anything else.

apt-get purge splashy
apt-get update
apt-get upgrade
vi /etc/apt/sources.list
### Change all references of "lenny" to "squeeze"
apt-get update
apt-get upgrade
uname -r
### Need to find our architecture which turns out to be "xen-amd64"
apt-get install linux-image-2.6-xen-amd64
apt-get install udev
reboot
apt-get dist-upgrade
### When asked if you want to chainload GRUB2, say "NO"*
apt-get install grub-legacy
update-grub
apt-get autoremove
reboot
### enjoy Debian Squeeze!

*Answering “NO” here tells the installation script to just go ahead and fully install GRUB2 and not mess with the legacy grub .conf file and fart around with chainloading and stuff.  I just felt better uninstalling a “complete” GRUB2 install rather than a half-assed hodge-podge of grubbery.
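
If you want to double-check which boot loader actually ended up on the system once the dust settles, a quick sanity check like this should confirm it:

dpkg -l | grep -i grub
### grub (i.e. grub-legacy) should be installed and grub-pc should be gone
ls /boot/grub/menu.lst
### grub-legacy uses menu.lst (grub-pc uses grub.cfg)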

Sheevaplug – 512MB is just not enough…

OK, so I still have about 200MB of free space on the on-board flash in my Sheevaplug, but there would be much less shuffling things around and cleaning of things like the apt cache if there was a bigger flash chip in there. And it would be cool to have room for X and Gnome and Apache and MySQL and a bunch of junk, just like a real computer.

You’d think I could find one (a Hynix H27UAG8T2B that is) on eBay or something… or find someone to send me a sample even. Maybe Hynix just developed the 16Gb version and didn’t actually manufacture any… who knows.

Anyways, I thought I’d post some scans of the Sheevaplug motherboard, since I couldn’t seem to find any good ones anywhere and had to crack mine open to see what kind of chip I was going to need.

phpMyAdmin – Wrong permissions on configuration file, should not be world writable!

If you install phpMyAdmin on your web host and all you see when you access www.yoursite.com/phpmyadmin (or whatever) is “Wrong permissions on configuration file, should not be world writable!” you are supposed to just change the permissions of /phpmyadmin/config.inc.php to not be world writable (i.e. chmod 755 config.inc.php, or by using your FTP client).

Some hosts (Primus for one) do not let you change the permissions on your files, so there is no way to set this up properly.  But if you are in a hurry and need to back up a database so you can get the site migrated to a decent web host, you can still get phpMyAdmin to run.

Edit /phpmyadmin/libraries/Config.class.php (yes, there is a capital “C” on this file name for some reason), and comment out the line that checks the permissions.  (Line 390 in the source code for phpMyAdmin version 3.4.3.1-english.)

Change

$this->checkPermissions();

to

//$this->checkPermissions();

and re-upload the file.  Now you should be able to log in, assuming you have setup the proper information in your /phpmyadmin/config.inc.php file in the first place!

In order for it to work, you must also listen to this while you edit your files. Feel free to sing along!

How to change RAID1 superblock from 1.2 back to 0.9 to install grub (debian squeeze)

For some reason, it seems like everything I do is not like what everyone else does… or at least not what the people writing the software I use do.  I started writing this as a how-to for others in this situation, but in the end it turned out to be more of an amusing story.  Maybe someone will find it useful anyways…

The background:

I was building a new Linux server for my home office and since I have been having good luck with Debian in the last couple of years, I decided to use it as the OS on this box too. Intending to keep it as simple as possible, I created a basic partitioning scheme on all of the drives (the same scheme I have been using for years now) and ran into fatal errors when I got to the installation of grub.

Here’s how I partition the drives:
Partition 1: Primary, 8GB, Linux RAID – going to use RAID1 for the /boot file system
Partition 2: Primary, 20GB, Linux RAID – going to use RAID5 for the / file system
Partition 3: Primary, 1960GB, Linux RAID – going to use RAID5 for the /data file system
Partition 4: Primary, 1GB, Linux Swap
Plus a little bit of slack at the end of the drive.

All of the drives are identical 2TB Western Digital Green SATA Drives.  There are now 7 in the system.

The error:

[screenshot of the grub error from the installer]

If you press ALT-F4, you will switch over to the install log console and you will see some mumbo-jumbo about grub not finding anything it can use to live on. I didn’t copy down the error message, but it’s cryptic and scary like any good linux error message should be. Read more
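
For what it's worth, the general idea behind the fix the title describes is to recreate the /boot array with the older 0.90 metadata so grub can find it. A rough sketch, with placeholder device names (assuming /dev/md0 is the /boot RAID1 array on /dev/sda1 and /dev/sdb1 – adjust to your own members, and note that recreating the array wipes it, so do this before putting data on it):

mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
### then re-create the file system on it
mkfs.ext3 /dev/md0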

Powershell: Automated Weekly Status Reports

This is a pretty simple Powershell script that generates a random weekly status report and emails it. I wrote this script because it is important that I produce a weekly status report at my place of employment. The report is graded on whether it was submitted at all and, secondly, whether it was submitted on time. As far as I know nobody actually reads these silly things, as the process of checking them is automated. So if the “checker” is automated, why shouldn’t the “reporter” be automated? The script is executed via a Scheduled Task.

Here is the script.

#
# Cobbled together by Don
#
# Lazy weekly reports
#

# Set the possible weekly activities
$wrkItems = @("Activity 1",
"Activity 2",
"Activity 3",
"Activity 4",
"Activity 5",
"Activity 6",
"Activity 7",
"Activity 8",
"Activity 9",
"Activity 10",
"Activity 11",
"Activity 12"
)

# Create the email body using three random elements
$outData = "My activities for the week may include but are not limited to:`n`n"
foreach ($ele in get-random -input 0,1,2,3,4,5,6,7,8,9,10,11 -count 3) {
    $outData += " - "
    $outData += $wrkItems[$ele]
    $outData += "`n"
}

# Email the weekly report to the appropriate email address
$SMTPserver = "mail.yours.com"
$from = "don@nowhere.com"
$to = "WeeklyReports@nowhere.com"
$subject = "Don's Weekly Status Report"
$emailbody = $outData
$mailer = new-object Net.Mail.SMTPclient($SMTPserver)
$msg = new-object Net.Mail.MailMessage($from, $to, $subject, $emailbody)
$msg.IsBodyHTML = $false
$mailer.send($msg)

Beagleboard: Upgrading from Debian 5 to Debian 6

I recently wanted/needed to upgrade Debian on my Beagleboard. The original Debian install was done following the instructions over at elinux.org. When I update distributions I usually prefer doing a clean install, but since 90+% of my time on the Beagleboard is spent working remotely I thought I would give the upgrade route a try.

Before beginning I backed up my system. I took a copy of /etc, /var, /root, /home. I also made backups of my webmin config. Lastly I also took copies of all the binaries I have compiled (some needed major tuning). Once I was satisfied I could rebuild should I encounter a catastrophic meltdown during the upgrade I decided to proceed.

My first stop was a Google search on the subject. This yielded an excellent x86 centric guide over at HowtoForge. The first section of the guide detailing package cleanup was helpful as aptitude identified 27 packages that could be removed. My second avenue for information, given the unsatisfactory results from my Google searches, was the Beagleboard mailing list. I posted a message requesting steps for a Debian upgrade and got some immediate feedback. The Beagleboard group is great!

So, to recap, here is what I assembled from the HowtoForge and Beagleboard group posts as my upgrade procedure.

Clean up the apt sources list file /etc/apt/sources.list; mine looked like this post-cleanup:

deb http://ftp.ca.debian.org/debian lenny main
deb-src http://ftp.ca.debian.org/debian lenny main

deb http://security.debian.org/ lenny/updates main
deb-src http://security.debian.org/ lenny/updates main

deb http://volatile.debian.org/debian-volatile lenny/volatile main
deb-src http://volatile.debian.org/debian-volatile lenny/volatile main

# webmin
# deb http://download.webmin.com/download/repository sarge contrib

Next I cleaned up all the packages, beginning with making sure the current distribution was up to date:

apt-get update
apt-get upgrade
apt-get dist-upgrade

Now I regularly update my system so no actions were required for the above commands. (I’ve written about apt-* before as I was learning about it)

Next was package cleanup; I followed the instructions by Deninix here exactly as he wrote them.

Ensure that no packages are on hold with:

dpkg --audit
dpkg --get-selections | grep hold

For the final go-ahead test, use:

aptitude

Press g and the list shows which packages need your attention. In my case there were 27 packages listed as needing to be removed. So I removed them and then I was clean.

Next I followed the advice from the Beagleboard group.

I upgraded to the latest 2.6.35.x kernel for lenny using:

wget http://rcn-ee.net/deb/lenny/v2.6.35.9-x9/install-me.sh
/bin/bash install-me.sh

*I had to remove the “sudo” commands from the script

and rebooted.
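
A quick check after the reboot that the new kernel is actually the one running doesn't hurt:

uname -r
### should now report a 2.6.35-series kernel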

Then I updated my sources list for squeeze; here's what it looks like now:

deb http://ftp.ca.debian.org/debian squeeze main
deb-src http://ftp.ca.debian.org/debian squeeze main

deb http://security.debian.org/ squeeze/updates main
deb-src http://security.debian.org/ squeeze/updates main

# deb http://volatile.debian.org/debian-volatile squeeze/volatile main
# deb-src http://volatile.debian.org/debian-volatile squeeze/volatile main

# webmin
# deb http://download.webmin.com/download/repository sarge contrib

Then I started the upgrade process with:
sudo apt-get update
sudo apt-get install apt aptitude udev
sudo aptitude update

The next recommended step was:
sudo aptitude safe-upgrade

I did run into some fairly significant issues with “aptitude safe-upgrade”. On the first pass just about all running processes on the Beagleboard became defunct and nothing was working correctly. So I rebooted with an absolute minimal system running little more than the kernel and sshd and ran “aptitude safe-upgrade” again. This time I let it run for 18+ hours, and during that time the CPU was pegged at 100% and I was getting into some pretty serious swapping, so I decided it wasn’t likely working as intended. I decided to move on with

sudo aptitude dist-upgrade

Here, dist-upgrade wanted to remove the “sysvconfig” package. I didn’t have any issues with this so I said “Yes”. The dist-upgrade command completed successfully and took about an hour.

I rebooted to make sure everything was sane. Next I decided to upgrade to the latest stable squeeze kernel with:

wget http://rcn-ee.net/deb/squeeze/v2.6.37.2-x4/install-me.sh
/bin/bash install-me.sh

*I had to remove the “sudo” commands from the script

I rebooted to make sure everything was sane. Next I tested a few of my applications:

Apache : ok
Webmin : Requested I re-detect the OS, after that it was OK
Anyterm : ok
munin : ok
Various scripts : ok

All in all it was a fairly painless process and I was able to complete it without needing the console.

My .docx file opens as a .zip in Outlook Web Access – WTF?

TF is that Exchange 2003 doesn’t know about these new-fangled file extensions so it looks at the headers and sees that they are zip-compressed and assumes they are ZIP files… We just need to add the proper MIME types to Exchange which will in turn set up the attachment properly in OWA.

  1. Open Exchange System Manager
  2. Open the Global Settings branch
  3. Right-click on Internet Message Formats and choose Properties. This brings up the current list of known MIME-types.
  4. Click the Add button. Add the MIME type descriptions (application/vnd.openxmlformats-officedocument.presentationml.presentation) below into the first textbox, and then enter the extension (pptx) in the other textbox.
  5. Click OK and get out of all the dialogs, close ESM.
  6. When you’ve added all 8 of the new MIME-types, restart the Microsoft Exchange Information Store service.

MIME-types:

  • application/vnd.openxmlformats-officedocument.presentationml.presentation pptx
  • application/vnd.openxmlformats-officedocument.presentationml.slide sldx
  • application/vnd.openxmlformats-officedocument.presentationml.slideshow ppsx
  • application/vnd.openxmlformats-officedocument.presentationml.template potx
  • application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx
  • application/vnd.openxmlformats-officedocument.spreadsheetml.template xltx
  • application/vnd.openxmlformats-officedocument.wordprocessingml.document docx
  • application/vnd.openxmlformats-officedocument.wordprocessingml.template dotx

After doing this on the server side, close and re-open Internet Explorer and re-login to OWA.  I cleared my cache in between just to be on the safe side and now .docx files download properly.  Yay!

FWIW, when searching for this solution I found a number of people suggesting that these MIME-types needed to be set up in IIS.  They were already there on my system, just not in ESM.  Also, there is this KB article about opening documents in or not in IE here: http://support.microsoft.com/kb/162059 if that is part of your problem.  Just sayin’.

Make Windows Server 2003 use external NTP servers in just a few steps:

For future reference, running these commands in a shell is much faster than trying to follow Microsoft’s Knowledge Base instructions (http://support.microsoft.com/kb/816042).

C:> w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org",0x8 /syncfromflags:MANUAL
C:> w32tm /config /update
C:> net stop w32time
C:> net start w32time
C:> w32tm /resync /nowait

Thanks to Dave Nickason who posted this in 2008 to the Microsoft Forums!

Blackberry Pearl 9100 Enterprise Activation Icon Missing

OK, so I know these cell phone companies think they’re doing everyone a favor by setting up their own custom software loads and wallpaper and crap, but come on…

Basically, if you go to Telus and buy a Pearl 9100, they will helpfully install your SIM card in it and make sure you can make calls and surf the web and all that jazz, then they hand it to you and away you go. They will not ask you what you intend to do with it – like say, use it WITH A BLACKBERRY SERVER. So you take your new phone home, install the latest version of BlackBerry Professional on your Exchange server, create your account, set your activation password, pick up your Pearl and hunt (and hunt, and Google) for the Enterprise Activation icon.

Stop looking, it’s not there. But don’t worry.

Hopefully since you just brought the phone home, there’s not much on it. Go into “Options” -> “Security Options” and choose “Security Wipe”. Check off “Emails, Contacts, etc.” and “User Installed Applications”, enter the word “blackberry” in the field provided, and click on the “Wipe” button.

Let the phone reboot, and when it turns on and asks if you want to connect to the network, say “No”. Whip through the setup wizard and get to the home screen. You will find that the Telus wallpaper is there and the standard welcome message is in your Inbox, but you won’t find all of their custom crap. You will also find Enterprise Activation under “Options” -> “Advanced Options”. Enter your email address and activation password and tell the phone to activate. It will ask you to connect to the network. Say “Yes” now, and the activation should succeed if your server is working properly.

Oh, and before you tell me that you have to have an Enterprise (BES) plan from Telus in order for the icon to show up, or for BES to work, don’t bother. The new version of BES (5.0.1 at the moment) doesn’t require a BES data plan, and the removal of the Enterprise Activation program on the phone is just asinine.

At some point the Telus network may push their crapware down to my phone again, and hopefully it doesn’t break the already activated enterprise setup that I have going on. If it does, there will be another post. Trust me.

Ubuntu from the command line: Package Management

My Linux background centers around RedHat and CentOS. I have been using yum for a long time and I am very comfortable with it. One of the greatest frustrations I have with Ubuntu is having difficulty finding the packages I need easily from the command line.

I realize there are a bunch of “apt-*” how-to articles/blog posts out there, but none of the ones I have read provided the golden nugget I needed, which is: “I need file X, what package contains this file?” Specifically, I needed mkimage to complete my boot image for a BeagleBoard[1] and I did not know what package I needed to install to get it. The required command is apt-file, which is not installed by default on 10.04, so let’s go through some more basic commands first.

It’s probably a good idea to always start package management by running ‘apt-get update’, which fetches the latest package lists and version numbers.

apt-get install –> installs packages and resolves dependencies
apt-get remove –> remove a package, leaves configuration files intact
apt-get purge –> remove package and configuration files

OK, so now we can install apt-file (sudo apt-get install apt-file); once the package is installed, apt-file update must be run.

apt-file update –> updates file cache, takes a while

When the update has completed, we can search using apt-file search <filename>. Let’s look at my example above: what package do I need to install to get mkimage?


don@S10:~$ apt-file search mkimage
cvsgraph: /usr/share/doc/cvsgraph/examples/mkimage.php3
grub-efi-amd64: /usr/bin/grub-mkimage
grub-efi-amd64: /usr/share/man/man1/grub-mkimage.1.gz
grub-efi-ia32: /usr/bin/grub-mkimage
grub-efi-ia32: /usr/share/man/man1/grub-mkimage.1.gz
grub-pc: /usr/bin/grub-mkimage
grub-pc: /usr/share/man/man1/grub-mkimage.1.gz
jigit: /usr/bin/jigit-mkimage
jigit: /usr/share/man/man1/jigit-mkimage.1.gz
lupin-support: /usr/share/lupin-support/grub-mkimage
opennebula: /usr/lib/one/tm_commands/nfs/tm_mkimage.sh
opennebula: /usr/lib/one/tm_commands/ssh/tm_mkimage.sh
opennebula: /usr/share/doc/opennebula/examples/tm/tm_mkimage.sh
python-freevo: /usr/share/pyshared/freevo/helpers/mkimagemrss.py
uboot-mkimage: /usr/bin/mkimage
uboot-mkimage: /usr/share/doc/uboot-mkimage/changelog.gz
uboot-mkimage: /usr/share/doc/uboot-mkimage/copyright
don@S10:~$

The output is quite verbose and some interpretation is required. In my case the required package is uboot-mkimage. This was fairly easy to identify since U-Boot is the GRUB equivalent on the BeagleBoard.

In my personal quest to learn about apt-* I came across a a few more useful commands which I will summarize here:

apt-get upgrade -u –> get a list of what can be upgraded (run apt-get update first)
apt-get upgrade –> install available upgraded packages
dpkg-query -l “search_string” –> query package database (of installed packages)
dpkg-query -l –> list all installed packages
dpkg -i <*.deb> –> manually install a package, use this with care and make sure the package is trusted
deborphan –> with no arguments lists orphaned packages (it must be installed first); apt-get remove can then be used to remove orphans manually, use care with this!
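
One more that fits the list above: if the file is already on disk from an installed package, dpkg -S answers the same “which package owns this file” question without needing apt-file:

dpkg -S /usr/bin/mkimage
### prints the owning package, e.g. "uboot-mkimage: /usr/bin/mkimage"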

I can now say that I can do all my required package management with apt-* as well as I can with yum, which makes using Ubuntu somewhat less irritating.

————————
[1] The Beagleboard is an embedded ARM platform that can run many flavours of Linux; I run Debian Lenny on mine (http://beagleboard.org/)

BIND9 on Debian Squeeze and problems with zone transfers.

That title might be longer than this post, but if you’re running into problems with zone transfers that don’t appear to happen, check to make sure you are putting your zone files in /var/cache/bind.
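
For illustration, a minimal slave-zone stanza along those lines, with the zone file kept under /var/cache/bind (example.com and the master IP are placeholders):

zone "example.com" {
        type slave;
        masters { 192.0.2.1; };
        file "/var/cache/bind/db.example.com";
};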


OpenVPN Bridge under VMware ESXi

So, as these things go, I spent 2 days looking at my OpenVPN config files and bridging setup before finding that the solution to my problem was elsewhere.

A little background: I created a new OpenVPN VM using the Debian Squeeze net install CD, configured it to match what was already working on a physical Windows XP box, but only had limited success. I was able to connect to the VPN and ping the OpenVPN server on the network, but couldn’t connect to anything else. Trying to ping another server from the VPN client, and running tcpdump on that other server, showed that it was receiving ICMP requests and replying, but the replies were not making it back to the VPN client. I tried a hundred different ways of creating the bridge on the OpenVPN server, but nothing worked. Finally, good old Google found the answer: the ESXi virtual switch drops promiscuous packets by default. To fix it, open the vSphere Client, click on the ESXi host on the left side, click on the “Configuration” tab on the right, click “Networking” in the Hardware box, then click “Properties…” at the top-right of your “Virtual Switch: vSwitch#” graphic. Now on the “Tools” tab of this popup window, select the “vSwitch” and click the “Edit…” button. In this popup, click on the “Security” tab and change “Promiscuous Mode” from “Reject” to “Accept”. Click “OK” then “Close” and you should be all set.

Quick way to disable IPv6 – Debian Squeeze (6)

echo "# Disable IPv6" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "#" >> /etc/sysctl.conf
reboot
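
If you'd rather not reboot, the same settings can be applied (and checked) on the spot:

sysctl -p
ip a | grep inet6
# the grep should come back empty once IPv6 is off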

Oh yeah, and if exim freaks out,
vi /etc/exim4/update-exim4.conf.conf
and change dc_local_interfaces so it reads
dc_local_interfaces='127.0.0.1'
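
The exim change only takes effect once the configuration is regenerated and the daemon restarted, so something like:

update-exim4.conf
invoke-rc.d exim4 restart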

VMware Performance Tuning : Disabling System Restore in Windows Guests

Snapshots are available with VMware Workstation, VMware Fusion, VMware Server and ESX/ESXi.

Windows has a feature called System Restore.  The idea behind System Restore is to revert to a point in history in case something goes wrong – e.g. Patch Tuesday. It’s an undo feature for your Windows OS. There is overhead for this undo feature, especially on the IO front, which is generally acceptable for a dedicated desktop but not ideal for VMs.

One can approximate the function of System Restore with snapshots. Snapshots should be used only when modifying a VM, i.e. patch Tuesday, new software, etc… The idea is “I know I’m making a change so I snapshot the VM”, I execute the change, and test. Once testing reveals no issues and everything works as expected, the snapshot is removed.  (Of course this is now a manual process instead of an auto magic one so it’s a compromise)

Snapshots should not be persistent! They should only have a life of a few hours to a few days.  It is important to remove snapshots since each one creates a delta VMDK file that can grow quite large.  Also, if one creates many snapshots this can affect performance of the guest.
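
For Workstation or Fusion, the same take/test/delete cycle can be scripted with vmrun instead of clicked through the GUI (the .vmx path and snapshot name here are placeholders):

vmrun snapshot "/path/to/guest.vmx" pre-patch-tuesday
# ...apply patches, test...
vmrun deleteSnapshot "/path/to/guest.vmx" pre-patch-tuesday
# or, if things went sideways:
vmrun revertToSnapshot "/path/to/guest.vmx" pre-patch-tuesday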

Personally I usually snapshot when the guest is powered down – this gives me the cleanest possible snapshot and I can revert to it with confidence.  It also means the snapshot is created instantly since the file system does not need to be quiesced.  Additionally, it is important to remember your VMware hypervisor can quiesce the file system (so long as VMware Tools is installed) but not necessarily the applications running in Windows.  i.e.:  SQL cannot be snapshotted reliably.

It is also very important to remember that snapshots are not backups in any way, shape, or form. For “backup” purposes the entire VM package should be copied, compressed, and stored on a medium other than the host running the VM.  i.e.:  I store my VM archives on my DIY NAS.

Disabling System Restore in XP: Right-Click My Computer and select properties and click the system restore tab, Select “Turn off System Restore” and click OK.
Disabling System Restore in Windows 7: Right-Click My Computer and select properties, click the System Protection link, Click the Configure Button and select “Turn off system Protection”, OK, OK

In conclusion, one can use snapshots in lieu of system restore to gain some performance in the Windows guest but it’s a compromise since the taking and deleting of snapshots is a manual process.

Examining the Western Digital Raptor WD360GD

It is not a mystery to performance enthusiasts that hard drives are typically the slowest part of any computer.  I’ve always been curious about the Western Digital Raptor series of hard drives.  The original series was released in 2003, which makes this technology six years old.  Being “old” technology, one can now purchase WD Raptor hard drives on eBay for a very reasonable sum, which I recently did.  I was curious whether these drives still hold up against the current crop of 7200 RPM hard drives.

I was using two test beds for this particular round of tests.  A 2006 Mac Mini running Snow Leopard and an Intel D945GCLF2 Atom 330 board running CentOS 5.4.

The hard drives tested were the aforementioned Western Digital Raptor, Western Digital Caviar Blue, Western Digital Green and an OEM Toshiba 2.5″ 5400 RPM drive.

The tests carried out were very simple: I used dd to write out a 4 GB file and then used dd to read it back.  Each test was carried out 3 times after a clean boot, once the system had settled down.  The three passes were averaged to produce the final results.  Here are the dd commands I used:

time dd if=/dev/zero bs=1024k of=tstfile count=4096
time dd if=tstfile bs=1024k of=/dev/null

Throughput was calculated using base 2 (i.e.:  divide by 1024)
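
For example, if the 4 GB write took 60 seconds, that works out to 4096 MiB / 60 s ≈ 68 MB/sec (illustrative numbers only, not one of the measured results).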

I also decided to add a couple of tests using mdadm RAID 0 sets on the Linux test host.  The RAID was created with mdadm and formatted ext3 – I didn’t bother aligning the partitions.
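
For reference, a two-drive RAID 0 set like that is created along these lines (device names are placeholders for whichever drives were in the set):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0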

[Chart: Hard drive performance in MB/sec]

The results are really quite surprising.  First and foremost it is quite refreshing to see six year old technology still holding its own.  Though the WD Raptors are no longer setting speed records they are certainly fast enough to be usable.  We can also see that 7200 RPM hard drives have come a long way as far as performance goes.  Secondly, we can see that the results vary quite a bit between the Mac Mini and the Intel D945GCLF2 motherboard.  Now it is impossible to compare the two directly due to OS and file system differences, but we can certainly put some credibility in the results since both platforms are based on the Intel 945 chipset.

All in all the Raptors are impressive considering their age and will be used.

We got a robot – a Roomba 530

Sometimes you get an idea, Canadian Tire has a sale and tada you have a brand new Roomba to play with.  In this case a Roomba 530.  The Roomba has been very well documented and there are plenty of reviews out there so I’m not going to do that here.

With a 16-month-old & a cat running around the house, there is little time to clean most days and it doesn’t take long till the main floor gets full of crumbs, Cheerios and assorted pet bits.  It’s still early days but our Roomba has been able to clean up very well after our little monsters, and with significantly less effort than pulling out the hose for the central vacuum.  It’s even been profitable, as there was some loose change in its dust bin this morning.