Keep an eye out for this one. When using ssh-copy-id to copy my public key to a remote host, I found that it had not properly appended my key to the remote ~/.ssh/authorized_keys file. Instead of starting a new line, it concatenated the key onto the end of the last existing line, with no linefeed between them. Just be sure to check for that!
Original remote ~/.ssh/authorized_keys:
ssh-dss AAA...== forest@machine
After ssh-copy-id did its work:
ssh-dss AAA...== forest@machinessh-dss AAAAB...gdA== forest@laptop
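A quick way to guard against this is to make sure the file ends with a newline before anything appends to it. A minimal sketch, assuming the standard authorized_keys location:

```shell
# If ~/.ssh/authorized_keys does not end with a newline, append one,
# so the next key lands on its own line. tail -c 1 prints the last byte;
# command substitution strips a trailing newline, leaving "" when one is there.
f="$HOME/.ssh/authorized_keys"
if [ -s "$f" ] && [ "$(tail -c 1 "$f")" != "" ]; then
    echo >> "$f"
fi
```

Run it (or the check by hand) before and after ssh-copy-id and you will catch the concatenation problem immediately.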
Way-cool batch photo processing on Ubuntu, GNU/Linux, Windows, and Mac with Phatch.
Seen these before when trying to log in via SSH with your new RSA public key?
Nov 2 12:09:17 hostname sshd: error: buffer_get_ret: trying to get more bytes 257 than in buffer 73
Nov 2 12:09:17 hostname sshd: error: buffer_get_string_ret: buffer_get failed
Nov 2 12:09:17 hostname sshd: error: buffer_get_bignum2_ret: invalid bignum
Nov 2 12:09:17 hostname sshd: error: key_from_blob: can't read rsa key
Nov 2 12:09:17 hostname sshd: error: key_read: key_from_blob AAAAB3N[...] failed
In my case these were the result of copying a public key from e-mail, which tends to mangle long text lines. I usually don’t have this problem because I use the ssh-copy-id script to copy my keys to a remote host before attempting to log in.
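If you do have to paste a key by hand, you can ask ssh-keygen whether it still parses before appending it. A small sketch; "pasted_key.pub" is just a placeholder filename:

```shell
# ssh-keygen -l prints the key's fingerprint only if the key parses cleanly,
# so its exit status tells you whether the paste survived intact.
# "pasted_key.pub" is a placeholder for wherever you saved the pasted key.
if ssh-keygen -l -f pasted_key.pub; then
    echo "key parses cleanly"
else
    echo "key is mangled; re-copy it as a single unbroken line"
fi
```

A key that an e-mail client has wrapped onto several lines will fail this check, which is exactly the failure mode behind the sshd errors above.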
To limit the number of rows dumped by mysqldump, you can do:
mysqldump -u [user] -p[password] --where="true LIMIT 5" [database] [tablename] > outputfilename.sql
You could select other criteria as well:
mysqldump -u [user] -p[password] --where="userid > 24" [database] [tablename] > outputfilename.sql
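To load such a partial dump back in, feed it to the mysql client (same bracketed placeholders as above; this assumes the target database already exists):

```shell
# Restore the dump into a database; the table is recreated from the dump file.
# [user], [password], and [database] are placeholders, as in the examples above.
mysql -u [user] -p[password] [database] < outputfilename.sql
```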
I archived a directory. It took two hours, then tar exited with a non-zero exit status (that means an error). Hmm. I was just testing something; I only cared whether certain subdirectories had made it into the archive. So I needed a way to look deep inside, quickly, and find those particular directories.
GNU tar will let you “test” an archive with -t, but I only wanted a list of the directories archived. Then I wanted that sorted. So…
$ nice tar -tjvf data.tar.bz2 | tr -s ' ' | cut -d' ' -f 6- | cut -d / -f -2 > tardirs.txt
$ sort tardirs.txt > tardirs_sorted.txt
$ uniq tardirs_sorted.txt > tardirs_sorted_uniq.txt
The -tjvf arguments to tar let you look inside, the “tr” command collapses adjacent spaces so that the first “cut” command will output only the sixth (file name) field, and the second “cut” command will reduce a path like “folder/folder/folder/fun.txt” to “folder/folder.” Then “sort” orders the list and “uniq” collapses the duplicates. Note that uniq only removes adjacent duplicate lines, which is why the sort has to happen first.
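Incidentally, if you drop the -v, tar prints only the pathname, so none of the field surgery is needed, and sort -u sorts and de-duplicates in one pass. A shorter equivalent, using the same example archive:

```shell
# Without -v, tar emits just the pathname per entry, so tr/cut on the
# verbose fields is unnecessary; sort -u sorts and removes duplicates at once.
nice tar -tjf data.tar.bz2 | cut -d/ -f-2 | sort -u
```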
Okay, so you very likely have the ‘split’ utility installed (it’s in the GNU coreutils package, so… very likely). If you want to burn a file to multiple media, but you don’t have kdar installed on your desktop… don’t worry about it. Just open a terminal and do:
$ split --bytes=600MB --numeric-suffixes filename.zip filename_part_
In my case, I have a 2.8GB file, but I only have 700MB CDs on hand for my burner. So this command will ensure that I get several 600MB (600 × 1000 × 1000 bytes) pieces, named “filename_part_00,” “filename_part_01,” “filename_part_02,” et cetera.
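When it's time to put the pieces back together, plain cat does the job, because the shell glob expands the numeric suffixes in order (filenames from the example above):

```shell
# The glob filename_part_* expands in sorted order (00, 01, 02, ...),
# so the pieces are concatenated back into the original sequence.
cat filename_part_* > filename.zip
```

Comparing a checksum of the rebuilt file against the original (md5sum, for instance) is a cheap way to confirm nothing was dropped along the way.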
How about “recursively look through the logs of hostnames used to request my site content. Sort them and ensure that only unique IP address and hostname combinations are counted. Find how many use my ‘.biz’ hostname to land on my site”:
find . -iname '*ecommerce-host_log*' | nice xargs cut --delimiter=' ' -f 1,4 | nice sort | nice uniq | nice grep '\.biz' | nice wc -l
I wasn’t sure which commands would be most processor-intensive, so I used “nice” on all of them.
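For what it's worth, the same count can be had with a couple fewer pipeline stages: sort -u merges the sort and uniq steps, and grep -c replaces grep piped to wc -l. A sketch against the same logs:

```shell
# sort -u replaces the separate sort | uniq; grep -c replaces grep | wc -l.
# -print0 / xargs -0 keeps filenames with spaces intact.
find . -iname '*ecommerce-host_log*' -print0 \
    | xargs -0 cut -d' ' -f 1,4 \
    | sort -u \
    | grep -c '\.biz'
```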