SSH public RSA key errors

Seen these before when trying to log in via SSH with your new RSA public key?

Nov 2 12:09:17 hostname sshd[12712]: error: buffer_get_ret: trying to get more bytes 257 than in buffer 73
Nov 2 12:09:17 hostname sshd[12712]: error: buffer_get_string_ret: buffer_get failed
Nov 2 12:09:17 hostname sshd[12712]: error: buffer_get_bignum2_ret: invalid bignum
Nov 2 12:09:17 hostname sshd[12712]: error: key_from_blob: can't read rsa key
Nov 2 12:09:17 hostname sshd[12712]: error: key_read: key_from_blob AAAAB3N[...] failed

In my case these were the result of copying a public key from e-mail, which tends to mangle long text lines. I usually don’t have this problem because I use the ssh-copy-id script to copy my keys to a remote host before attempting to log in.
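
If you do have to paste a key by hand, it's worth sanity-checking the file before blaming the server; ssh-keygen will only print a fingerprint if the key still parses cleanly. A rough sketch, with the key path and remote host as placeholders:

$ ssh-keygen -lf ~/.ssh/id_rsa.pub                           # prints the key's fingerprint only if it parses
$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote.example.com   # appends the key to the remote authorized_keys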


Apache custom logging

Aren’t you interested in seeing what requests users, bots, or script kiddies make of your site, especially those things that client-side JavaScript-based analytics packages don’t tell you?

Under Apache, custom logging can give you lots of information you might not have seen otherwise. I’ll let the documentation for Apache’s mod_log_config say most of this, but as a quick preview, you could try defining a custom log format up near the top of your httpd.conf with

LogFormat "%a %t %{Host}i \"%r\"" hostlog

for example, then in each of your VirtualHost containers, you could do

CustomLog logs/forest-monsen-site-host-log hostlog

Then, in my case, /var/log/httpd/forest-monsen-site-host-log would contain lines like
192.168.0.3 [31/Aug/2010:08:53:24 -0500] www.forestmonsen.com "GET /aggregator/sources/2 HTTP/1.0"
192.168.0.5 [31/Aug/2010:08:53:24 -0500] www.forestmonsen.org "GET /images/house.gif HTTP/1.1"

And I’d be able to tell which hostname was originally requested by the user — before any of my mod_rewrite rules got to it. Good stuff.
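
Putting the pieces together, a rough sketch of how that might look (the hostname, paths, and log file name here are only examples):

# %a is the client IP, %t the request time, %{Host}i the Host request header,
# and %r the first line of the request.
LogFormat "%a %t %{Host}i \"%r\"" hostlog

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example/htdocs
    CustomLog logs/example-host-log hostlog
</VirtualHost>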


sftp chroot jail in Ubuntu

(Update 16 Mar 2011: Since writing this post, I’ve learned of an easier way to create this chroot jail. Newer versions of OpenSSH support the “ChrootDirectory” configuration directive. I recommend that you take a look at George Ornbo’s tutorial on chrooting sftp users in Intrepid for the details.)
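
For reference, the ChrootDirectory approach amounts to a few lines in /etc/ssh/sshd_config, roughly like this sketch (the group name and path are just examples; OpenSSH also insists that the chroot directory and every component of its path be owned by root and writable only by root):

# Use the in-process SFTP server so no helper binaries are needed inside the jail
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /var/www/sites/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no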

(Updated 08 Feb 2011 to reflect xplicit’s experience on Ubuntu 10.04.)

I wanted to give a buddy access to a website hosted on my box. So I tried scponly, since I only wanted to provide SFTP access to that particular directory, using a chroot jail. The steps are as follows.

  1. Install the scponly package using Ubuntu’s APT package management system.
  2. Use the script provided to set up your first jail and your user’s home directory. For the location of the user’s jail, give the path of the directory you want to share.
  3. Provide a password for the new user.
  4. Ensure that the new user has permissions to read and write all the necessary directories in your Web site.


$ sudo apt-get install scponly
$ gzip -dc /usr/share/doc/scponly/setup_chroot/setup_chroot.sh.gz > /tmp/setup_chroot.sh
$ cp /usr/share/doc/scponly/setup_chroot/config.h /tmp

The previous step copies the “config.h” file to help things go more smoothly, as Luke found.

$ chmod +x /tmp/setup_chroot.sh
$ cd /tmp
$ sudo ./setup_chroot.sh


Next we need to set the home directory for this scponly user.
please note that the user's home directory MUST NOT be writeable
by the scponly user. this is important so that the scponly user
cannot subvert the .ssh configuration parameters.
For this reason, a writeable subdirectory will be created that
the scponly user can write into.

Note that I removed the /incoming subdirectory created by this script. There was no need for a separate directory for my buddy to upload files. He could have permissions over the whole site tree.


-en Username to install [scponly]
bob
-en home directory you wish to set for this user [/home/bob]
/var/www/sites/bobsite/htdocs
-en name of the writeable subdirectory [incoming]


-e
creating /var/www/sites/bobsite/htdocs/incoming directory for uploading files


Your platform (Linux) does not have a platform specific setup script.
This install script will attempt a best guess.
If you perform customizations, please consider sending me your changes.
Look to the templates in build_extras/arch.
- joe at sublimation dot org


please set the password for bob:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
if you experience a warning with winscp regarding groups, please install
the provided hacked out fake groups program into your chroot, like so:
cp groups /var/www/sites/bobsite/htdocs/bin/groups

This script added certain directories to the site root (/var/www/sites/bobsite/htdocs). Every other directory needed to be writable by Bob. So let’s add Bob to a special group, and allow that group write access on all the website’s files.


$ sudo adduser bob www-data

Working from the site root, we can leave the root-owned /bin, /etc, /lib and other directories the script added to the chroot jail alone, and hand everything else in the website filesystem over to that group:


$ sudo find . \! -user root -exec chgrp www-data \{\} \;
$ sudo find . \! -user root -exec chmod g+w \{\} \;
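
A quick way to confirm the jail behaves as expected is to open an SFTP session as the new user (testing from the box itself here):

$ sftp bob@localhost

You should land in the jailed site root and be able to upload and download anywhere the www-data group has write access.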

Good to go!


Server move complete

I migrated a bunch of stuff from a CentOS 4 server to Ubuntu 8.04 LTS over the last couple of days.

  • Five websites: One Moodle and one Drupal site backed by MySQL databases, and three static sites. SSL setup.
  • Added some software. How can I work without vim and slocate?
  • Security hardening, including a service review, permissions, firewall setup, administrative access through SSH, sudo config, and Postfix with spam filtering.
  • Nagios server monitoring config.

I checked my work logs and decided that I did pretty well, considering I got it all done in 10 hours 35 minutes.


Set Debian or Ubuntu server timezone

This one’s an easy one, from the tzselect(1) manpage:

sudo dpkg-reconfigure tzdata
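
To confirm the change took effect, a couple of quick checks (assuming a Debian-style system, which keeps the zone name in /etc/timezone):

cat /etc/timezone
date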


Flush DNS cache in Ubuntu

Interested in flushing your Ubuntu DNS cache? Note: I’m running Jaunty Jackalope as of the date of this post.

Well, Ubuntu doesn’t cache DNS by default; the caching happens on your router or on your assigned DNS servers. You could restart your router, if you have access to it, or wait until the records’ time-to-live has expired.

You can install a local cache for DNS lookups, if you like. It will speed up your Web access slightly, since name lookups will be answered from the local cache before going out to the network. I imagine the time you save will be measured in milliseconds.

Do that with:

sudo apt-get update && sudo apt-get install nscd

And to clear your local cache, restart the service:

sudo /etc/init.d/nscd restart
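
If you’d rather not bounce the whole daemon, nscd can also invalidate a single cache table; host lookups live in the hosts table:

sudo nscd -i hosts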


Recursively find and list filesize and full path on the command line

Can’t beat the command line for flexibility and power in accomplishing system administration tasks. Here’s one way to recursively list the filesizes and full paths of files with a particular extension from the command line:

nice find . -name "*.swf" -type f -print0 | xargs -0r ls -skS | less

This is a succinct way to say:
“Show me all Flash files in the current directory hierarchy, descending to unlimited depth. Print the full filename on standard output followed by a null character. Send each filename in turn to the ‘ls’ command, which will look up each file’s size and print that in 1K blocks followed by the filename, sorting the output largest-first. (If there aren’t any results from the first command, don’t even run the ‘ls’ command, since that would just give us a list of all the files in the current directory.) Finally, send all that output to the ‘less’ command, which will allow me to page through and view it easily.”

EDIT: Added -r switch to xargs command to ensure we don’t see a list of all files, if the first ‘find’ command doesn’t find any. That sort of thing could be confusing.
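
An alternative sketch that skips xargs entirely, assuming GNU find (its -printf directive prints the size in 1K blocks and the full path directly, and sort -rn puts the largest files first):

nice find . -name "*.swf" -type f -printf '%k\t%p\n' | sort -rn | less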


Still filtering spam primarily using the “From:” header? Then read this.

I’m working with an organization that has been refusing “share this” e-mails from our Web site; specifically, e-mails that originate at our Web server that have that organization’s domain name in the “From:” header.

Here’s the problem with this. Let’s say that Joe Bloggs works at Bloggy Spot, and his e-mail address is “joe@bloggyspot.com.” His coworker Carl really wants to forward him a relevant article from the Time Magazine Web site, so he fills out the form, enters his e-mail address (which is required), and Joe’s, and hits “send.”

But since that message from Time Magazine does not originate from inside your network — as far as you can tell — and it claims to come from Joe’s coworker Carl (“carl@bloggyspot.com”), you refuse that message. “Sorry, can’t deliver to Joe,” you say. “There’s no way you could be Carl. Carl wouldn’t send e-mail from anywhere other than here.”

Don’t refuse those e-mails. Allow them. Rely on other, more reliable methods, and be happy.

Why shouldn’t you base your filtering on the From: header?

For two reasons.

First, you’re trying to fight against something that has been part of the nature of e-mail since its beginning, and second, you’re trying to fight against the nature of the Web today.

  1. This has been the nature of e-mail since its beginning.
    The e-mail protocol standard has always allowed e-mail clients, and hence people, to put whatever they want in the “from” box — so from the beginning, conscientious system administrators have had to rely on much more robust methods of content and spam filtering. Looking in the “From:” header for an e-mail supposedly sent from “bloggyspot.com,” and rejecting e-mail that way, will only make things harder on users. The system administrator at the organization I’m negotiating with did point out that they already have multiple other layers of filtering and spam protection in place. I argued that since those methods are much more reliable, they should be relying on them instead.

    Perhaps you see the issue: a system that relied only on this level of filtering would be quite easy to defeat, and a system that relied on more filtering than this wouldn’t need this type of quasi-effective filtering anyway.

  2. This is the nature of the Web today.
    When you visit a Web site and forward an article to someone you know, your message in the vast majority of cases comes “from” your e-mail address. Obviously, this is done so that the recipient will be more likely to accept the e-mail when it arrives. The Web’s most popular sites all follow this practice.

    The New York Times, Time Magazine, CNN, and Fox News sites, for example, allow — and in the case of the Times, require — a user to enter their own e-mail address as the “From:” address. Yahoo!, the Web’s third most visited site, does this as well. I’m sure there are many, many more examples.

Spam is a big problem for organizations, but when filtering spam, you’ve got to choose your battles carefully. If you hamstring your users too much, the costs probably won’t be worth the benefits.


Use crawl-delay in your robots.txt file to slow down robots

You can use the “Crawl-delay” directive in your robots.txt file to slow down Web crawlers:
User-agent: *
Crawl-delay: 15

The time is specified in seconds.