Thursday, April 3, 2014

Developers vs Humans

The bits of modern desktop environments I run into these days send me into the depths of despair. All the familiar ways of modifying the system's behaviour fail. No more scripts. No more pipes. No more rc-files.

My monitor is usually rotated 90° clockwise. Books and magazines look better this way. To rotate the picture, I normally edit /etc/X11/xorg.conf; the exact spell depends on the video card. Right now it says:

Option "Rotate" "left"

Once upon a time, after an OS upgrade, it stopped working. That is, GDM was displayed in the usual landscape mode. If Xorg was started with 'startx', the orientation was correct, but GDM refused to honour the Rotate option, and the display stayed landscape even after control passed from GDM to the window manager. For many days I had to type xrandr --rotate left every time I logged in. I was sure the problem was in the Xorg settings. Then, one day, I decided to install a different display manager, Slim, and the login screen suddenly appeared the right way, in portrait mode! So the problem was in the GDM settings; I just couldn't find them. I still haven't and I never will. But the logic of the developers who made that piece of crap ignore the X11 configuration files escapes me.

By the way, the switch to Slim solved another problem. I never liked the dumb default notification daemons in Ubuntu, NotifyOSD and KNotify. I wanted to switch to the much more configurable 'dunst' notifications, but I couldn't get rid of knotify4, which was started automatically. In the end I found the place where it was defined: /usr/share/dbus-1/system-services/org.kde.knotify.service. I can only ask: how was I supposed to find this place or to change the defaults? Or was I supposed not to change them at all?
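
For the curious, a D-Bus service file of that kind looks roughly like the sketch below (the Exec path is from memory and may differ on your system). D-Bus uses it to launch knotify4 on demand, which is why the daemon kept coming back.

[D-BUS Service]
Name=org.kde.knotify
Exec=/usr/bin/knotify4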

Now, tell me, please. Did the developers think they were doing the right thing when they made these decisions? Why do distribution owners accept these weird, uncomfortable and unconfigurable solutions? And, finally, isn't this the reason why modern Linux failed on the desktop?

Monitoring multiline logs by email in full color

I use tools like 'logwatch' and 'logcheck' to monitor new events in logs, but they have serious drawbacks, most obvious when it comes to multiline logs like the PHP error log or the MySQL slow log.

I ended up writing a trivial script that sends email messages with the new events from a log. On top of that, the messages are htmlized/colorized, with the SQL/PHP syntax highlighted.

So, here's the script:

#!/bin/bash
# Usage: logmail <recipients> <log file> <syntax>

L=$(/usr/sbin/logtail2 -f "$2")
if [ "x$L" != "x" ]; then
        echo "${L}" | source-highlight -s "$3" -f html \
                | mail "$1" -a "Content-type: text/html" -s "$(hostname): $2"
fi

The syntax is: logmail <recipients> <log file> <syntax>

To extract only the new events from the log, I use 'logtail2' from the 'logcheck' package. For syntax highlighting I chose 'source-highlight' (it was also used to highlight the script code above).

So, to process the MySQL slow log, call the script like this:

logmail name@mail.host /var/log/mysql/mysql.slow.log sql

Or, to produce a report from the php-fpm slow log:

logmail name@mail.host /var/log/php-fpm.slow.log php
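
To receive the reports regularly, the script can be run from cron. The entry below is in /etc/cron.d style (note the user field); the schedule, script location and addresses are just placeholders:

*/30 * * * * root /usr/local/bin/logmail name@mail.host /var/log/mysql/mysql.slow.log sql
0 * * * *    root /usr/local/bin/logmail name@mail.host /var/log/php-fpm.slow.log php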

The results may look like this:

# Time: 140403 12:01:14
# Thread_id: 12983054  Schema: dsa  Last_errno: 0  Killed: 0
# Query_time: 12.672162  Lock_time: 0.000246  Rows_sent: 300  Rows_examined: 5604906  Rows_affected: 0  Rows_read: 5604906
# Bytes_sent: 14625
SET timestamp=1396512074;
SELECT
       document_id
     , external_document_id
     , DATE_FORMAT(created_at, "%Y-%m-%d") AS created_at
     , source_id
    FROM
     document
    WHERE
     (
      #(
      # status = 'preparsed'
      # AND flag = ""
      #)
      #OR (
       status = 'converted'
      #)
     )
     
     
    ORDER BY
     document.source_priority DESC
    LIMIT
     0, 300;

Monday, April 29, 2013

Extending HP SmartArray RAID array and expanding root file system using LVM

Reasonably modern Linux distributions use LVM even when installed on a RAID array (I used to confuse LVM with RAID some time ago). The multi-tiered construction becomes a bit overcomplicated: RAID merges the hard drives into one logical drive, which is split into partitions, which are then merged into logical volumes. So what do we do to grow the root file system onto new hard drives when the RAID size is no longer enough?

First, insert the new drives into the slots, then use the hpacucli utility to check that they are now recognized:

hpacucli ctrl all config show

The new drives should appear in the 'unassigned' section. Now merge them into the existing array:

hpacucli ctrl slot=1 array A add drives=allunassigned

Now wait until the controller finishes the transformation; it may take several hours. When it's over, grow the existing logical drive to use the added space:

hpacucli ctrl slot=1 ld 1 modify size=max

If you try to run fdisk/gdisk now, you will not see the free space, because it was not available to the operating system at boot time, so you'll have to reboot. After the reboot, use fdisk or gdisk to create a new partition on the free disk space; the partition must be given type 8E00 (Linux LVM). The operating system will not recognize the new partition immediately either. You may try the partprobe command, but I preferred to reboot once again.
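
If you prefer a non-interactive tool, sgdisk can do the same in one step. The device name and partition number below simply mirror the example in the next step and are assumptions about your layout:

sgdisk -n 3:0:0 -t 3:8E00 /dev/cciss/c0d0    # new partition 3 filling the free space, type Linux LVM
partprobe /dev/cciss/c0d0                    # or just reboot, as above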

Next, in sequence, create an LVM physical volume, extend the LVM volume group onto the new physical volume, and extend the LVM logical volume over the free space:

pvcreate /dev/cciss/c0d0p3
vgextend VolGroup00 /dev/cciss/c0d0p3
lvextend -l+429214 /dev/mapper/VolGroup00-LogVol01

To determine the number of free physical extents to pass to lvextend, look at the 'Free PE' line in the output of vgdisplay:

vgdisplay | grep Free
  Free  PE / Size       429214 / 1.64 TiB

And, finally, grow the file system. If you are conservative enough to use the default ext4 file system, you can do this with:

resize2fs -p /dev/mapper/VolGroup00-LogVol01
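
When resize2fs finishes, it doesn't hurt to confirm that the extra space is really there:

df -h /          # the root file system should show the new size
lvs VolGroup00   # the logical volume should now span the extra extents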

Thursday, August 23, 2012

Find files modified on certain date or in a date range

When you want to find files matching certain conditions, this is most probably a job for the find command. The problem, though, is that you cannot specify an exact date directly.

Options like -atime, -ctime or -mtime receive an argument that specifies the number of 24-hour periods. This means that when you run the following command:

find /var/log/mysql -type f -mtime -1 -exec ls -l {} \;

you will get not only the files modified today, but all files modified within the last 24 hours. You can change this behaviour by adding the -daystart option, which makes the time periods count from the beginning of the current day:

find /var/log/mysql -type f -daystart -mtime -1 -exec ls -l {} \;

This command will produce the list of files modified today. IMPORTANT! Note that the -daystart option must precede all date-comparing options to have an effect.

To find files modified between two dates, you can join two conditions using the -a option:

find /var/log/mysql -type f -daystart -mtime -3 -a -mtime +0 -exec ls -l {} \;

The result will include the files modified yesterday or the day before yesterday.

Sometimes, though, you may want to specify the dates as they are, not as a relative number of days from today. Traditionally, this was done using an awkward technique: create two empty files whose modification times correspond to the lower and upper bounds of the range (using touch -t), then compare against them with options like -newer or -cnewer:

touch temp -t 200604141130
touch ntemp -t 200604261630
find /data/ -cnewer temp -and ! -cnewer ntemp

(example taken from here)

Newer versions of find allow you to do just that using -newerXY. The letters X and Y stand for one-letter codes corresponding to various comparison types. The combinations are pretty much incomprehensible, but the one we need is -newermt. With this option, life gets simple and sunny!

find /var/log/mysql -type f -newermt 2012-08-21 ! -newermt 2012-08-23 -exec ls -l {} \;

This command produces the list of files modified exactly between the beginning of August 21 and the beginning of August 23.
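
Since -newermt accepts anything that GNU date-style parsing understands, the range can also be narrowed down to specific times of day; the timestamps below are just an illustration:

find /var/log/mysql -type f -newermt "2012-08-21 09:00" ! -newermt "2012-08-21 18:00" -exec ls -l {} \;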

And a little bonus for those who made it to the end! To sum the sizes of the found files (so you can find out, for example, how many gigabytes were written to binlogs in the last two days) use du -c:

du -c `find /var/log/mysql -type f -newermt 2012-08-21 ! -newermt 2012-08-23`|tail -n1
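
If the file names may contain spaces, a safer variant (assuming GNU find and du) passes them NUL-separated instead of relying on backticks:

find /var/log/mysql -type f -newermt 2012-08-21 ! -newermt 2012-08-23 -print0 | du -ch --files0-from=- | tail -n1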

Thursday, July 26, 2012

Chroot, but don't chroot

FTP, SFTP and SCP accounts are often restricted to their home directories, so the users don't mess around with the system. This is done using 'chroot': you can set up the SSH daemon or the FTP server so that the user cannot leave his home directory. But what if you want to give him access to some other directory outside his home?

The first thing that comes to mind is links. Soft links don't work, though, because the user cannot see anything outside his directory. Hard links won't work either, because you usually cannot hardlink directories; they'll do if all you want to share is a single file, though. You could also change the user's home directory to the directory you want him to modify, but you may want to grant access to more than one directory. Besides, the chroot implementation requires that if the user is chrooted to /var/data/lib/img, all directories up the tree (i.e. /var, /var/data and /var/data/lib) must belong to root and nobody else may have write permission on them. This is not always possible.

The right solution is to mount the directory into the user's home directory using the bind option. Create the mount point inside the home directory and mount:

mkdir /home/remote/img
mount --bind /var/data/lib/img /home/remote/img

Now, the user will be able to work with /var/data/lib/img, but not with any other data on the server.

To make the mount persistent across reboots, add the corresponding entry to /etc/fstab:

/var/data/lib/img /home/remote/img none bind 0 0
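
To check that the bind mount is in place (and to detach it again if needed), findmnt from util-linux comes in handy, if your system has it:

findmnt /home/remote/img     # shows the mount if it is active
umount /home/remote/img      # detaches it again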

Friday, February 17, 2012

More than one 'exec' action in 'find' command

When you run the find command, you can pass the names of the found files to an arbitrary command using the -exec option:

find /tmp -mtime +3 -exec rm {} \;

The curly braces are replaced with the name of each found file, and the command is executed once per file. However, if you want to run more than one command on the file, or to use the filename more than once within a single command, find won't let you:

Only one instance of `{}' is allowed within the command.

To bypass the limitation, you can execute a shell, passing the filename as an argument. In the commands executed by the shell, the argument will be available as $0:

find /tmp -mtime +3 -exec sh -c 'ls -ld "$0"; rm "$0"' {} \;
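
A variation on the same trick (not from the original example, but plain POSIX sh) passes an explicit placeholder for $0 and ends -exec with '+', so one shell invocation handles many files:

find /tmp -mtime +3 -exec sh -c 'for f; do ls -ld "$f"; rm "$f"; done' sh {} +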

Tuesday, January 3, 2012

Getting rid of GUI password prompt in EasyPG

EasyPG is a package used by GNU Emacs to simplify work with files encrypted by GnuPG (GPG). An extremely useful package, it has the bad habit of relying on external tools to prompt for the password. One such tool is 'gpg-agent'. In Debian/Ubuntu this agent usually replaces 'ssh-agent'. It's a good tool, but it uses a GUI dialog to prompt for the password, which makes working with EasyPG in Emacs rather uncomfortable.

There are a number of tips on how to suppress that GUI dialog, but I preferred another approach: I simply removed 'gpg-agent'. This did not affect the functionality, since it was replaced by 'ssh-agent'. Now EasyPG prompts for the password in the Emacs mini-buffer.
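
On my Debian/Ubuntu machines that meant removing the package; at the time the agent lived in a package called gnupg-agent, though the name may differ in other releases:

sudo apt-get remove gnupg-agent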