Copyright Notice

This text is copyright by CMP Media, LLC, and is used with their permission. Further distribution or use is not permitted.

This text has appeared in an edited form in WebTechniques magazine. However, the version you are reading here is as the author originally submitted the article for publication, not after their editors applied their creativity.

Please read all the information in the table of contents before using this article.

Web Techniques Column 9 (December 1996)

Yeah, sure, there are random link scripts, and random ``quote of the day'' scripts out there. But I wanted to tackle the subject as well. And I wanted to give it a twist... some ``memory''.

The random program I describe here ``knows'' what random items have been given out recently, and biases older items with a higher probability of being selected. And it does it in a way that allows a large number of entries to run in parallel, not tying up a central database while making a decision. ``Cool'', you say? Read on!

Not only that, it could be equally easily used for random links, random ads (bleh, but they pay the bills for the coolest free sites), or even just random graphics. ``Now how much would you pay?'' It's free, and can be found in Listing 1 [below]. Much of it is commented, but let me hit the high points here:

Lines 1 and 2 begin nearly every program I write these days. Line 1 enables taint checks and warnings, while line 2 forces me to think about the variables I am using and why.

Line 3 brings in a neat utility module that I use to get the directory part (dirname) and file part (basename) of some of the UNIX paths and URLs. Line 16 forces a particular PATH, necessary if I invoke any child processes.

Lines 19 to 23 create a subroutine called ``ent''. This routine takes an arbitrary string and HTML-encodes it so that <, &, and > appear correctly. Of course, in past columns, I simply brought in HTML::Entities, but I wanted to show you a lightweight solution that doesn't require dragging the LWP library into place. Line 21 is where the hard work takes place. Each character in the target class is replaced with ``&#'' followed by the ordinal value of the character and then a semicolon.
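Just to see that substitution in action, here's a tiny standalone sketch (not part of Listing 1); the sample string is made up:

        sub ent {
          local $_ = shift;
          $_ =~ s/[<&>]/"&#" . ord($&) . ";"/ge;  # each <, & or > becomes &#NN;
          $_;
        }
        print ent("if x < y & y > z"), "\n";
        ## prints: if x &#60; y &#38; y &#62; z

So < becomes &#60;, & becomes &#38;, and > becomes &#62;, which is all we need to keep an error message from being mistaken for markup.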

Lines 26 through 32 define a ``death handler'', causing the error message to come out in a nice format. In past columns, I relied on an ``eval block'' to handle this, but once again, I am trying a different method to compare its relative difficulty, and give you ``more than one way to do it'' (the Perl Creed).

Line 26 assigns an anonymous sub, created in-line, as the handler for both the special __DIE__ and __WARN__ pseudo-signals. Sure, you could have stuck this into a normal named sub, but this is cooler.

Line 27 takes the death reason (what would have gone into $@ on an eval block), and shoves it into a local $why variable. Line 28 takes off the possible newline on the end of the string. Line 29 converts the string to its HTML-safe version, using the ``ent'' routine defined above.
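So if the script later dies (say, the opendir on line 58 fails), the browser ends up seeing something along these lines. This is illustrative only: the exact text depends on the error, die appends the script's path and line number, and ``/path/to/script'' here is just a placeholder:

        Content-type: text/html

        [opendir: No such file or directory at /path/to/script line 58.]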

Lines 35 through 37 fetch some CGI/SSI interface environment variables. $DOCUMENT_URI is the invoking document's URL. I'll be using this later to validate the selected ``random'' directory.

$PATH_INFO is the trailing part of the URL invoking this script. For example, if this script was invoked as: /cgi/random/some/path/fred.random, then $PATH_INFO is /some/path/fred.random.

$PATH_TRANSLATED is a Unix file-path corresponding to the document that would have been served if we had asked the web-server for the URL found in $PATH_INFO. I'll be using this later to access the random directory and its contents. The nice thing about this particular solution is that I don't have to know what the rules are for the server to perform this translation (from URL to directory name), unlike some of the programs I've written for previous columns. The server here is performing the translation for me.
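To make that concrete, here's roughly what those three variables contain for the example I'll build below. The document root shown here is made up; yours will differ:

        ## Illustrative values only (this DocumentRoot is hypothetical).
        ## Given <!--#include virtual="/cgi-bin/random/some/path/picture.random"-->
        ## inside /some/path/thisdoc.html, the script sees something like:
        ##   DOCUMENT_URI    = /some/path/thisdoc.html
        ##   PATH_INFO       = /some/path/picture.random
        ##   PATH_TRANSLATED = /home/httpd/htdocs/some/path/picture.random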

Lines 40 and 41 get the directory portion (everything but the filename, via dirname) of both the invoking document's URL and the requested ``random'' path. Lines 47 and 48 ensure that the invoking document's URL is pointing at the same directory as the requested ``random'' document's URL. This is an important security check: without it, a nefarious user could quite possibly ask for someone else's random directory.

If you are customizing this script, you may also choose a standard set of ``random'' directories that everyone can share on your web server. For example, you may consider /random/WHATEVER.random to be legal automatically. In that case, a simple comparison of $info_dir against ``/random'' would do it.
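As a sketch, that relaxed check might replace lines 47 and 48 with something like this (``/random'' being whatever shared area you pick):

        ## allow either the same-directory case or the shared /random area:
        $doc_dir eq $info_dir or $info_dir eq "/random"
          or die "security error: $info_dir not allowed from $doc_dir";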

Lines 51 through 54 check the translated path to ensure that the path ends in ``.random'', and represents a valid directory. This further verifies that this directory was deliberately intended to be a random deal, and not just some directory an evil person wanted to scan and mangle.

Next, I need a list of ``.log'' files in that directory. Lines 58 through 60 take care of this. Line 59 performs a complex series of interesting things, and is best read from right to left. The readdir operator in a list context returns an unsorted list of names from the directory. The grep operator keeps only those names that end in ``.log''. The map operator glues the PATH_TRANSLATED pathname in front of each name, yielding an absolute pathname.
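If the right-to-left reading is hard to follow, here's line 59 unrolled into separate steps (equivalent, just more verbose):

        ## equivalent to line 59, one step at a time:
        my @names     = readdir PT;                      # all entries in the directory
        my @log_names = grep /\.log$/, @names;           # keep only the *.log names
        my @files     = map "$PATH_TRANSLATED/$_", @log_names;  # prepend the directory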

Lines 63 through 70 compute a weighted random selection of the candidates in a one-pass algorithm. It's a bit tricky, but at the same time nicely elegant, so let me spend a moment to describe it.

Lines 63 and 64 create and initialize two variables. $total_weight keeps track of the ``weight'' of all elements we've seen so far. This starts out at zero. $winner will ultimately be the chosen element of @files. Initially, it's undef. Line 65 spins the random dial so that we get a different initial starting point for each invocation of the program.

Lines 66 to 70 loop through once for each element of @files, sticking a candidate in $_. Remember, at this point, @files has a list of absolute Unix paths to filenames ending in ``.log'' in the selected random directory.

Line 67 computes a ``weight'' for a particular candidate. In this case, I chose to select a weight consisting of the square of the age in seconds of the particular logfile. This means that an item selected 20 seconds ago (with a weight of 400) is 4 times as likely to be selected as an item selected 10 seconds ago (with a weight of 100). This seemed fairest to me, and in some ad-hoc tests, caused a fairly even distribution of selections. To be fairer, I could have based something on the size of the log files, or maybe some ratios read from a file, or something. Line 68 updates the total weight (so far) to include the weight computed for this particular item.

Line 69 is where the cool stuff happens. If a random number between 0 and the total weight so far is less than the weight of this particular item, then this item becomes the potential winner. Put another way, this item gets chosen (this weight)/(total weight so far) of the time. For example, if the total weight was 10 before this pass, and this item has a weight of 3, then it has a 3/13 chance of being picked here. Even if this element is chosen, a later element may still override it, but again, only in proportion to that later item's weight. Note that on the first pass, the two weight values are identical, meaning that $winner is definitely assigned $_ the first time around.
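Here's the same one-pass weighted selection as a standalone sketch you can play with. The items and weights are made up, but the algorithm is exactly the one in lines 63 through 70:

        #!/usr/bin/perl -w
        use strict;

        ## standalone sketch of the one-pass weighted pick from lines 63-70,
        ## with made-up items and weights instead of log-file ages:
        my %weight = (apple => 1, banana => 3, cherry => 6);

        my $total_weight = 0;
        my $winner;
        srand;
        for my $item (keys %weight) {
          my $this_weight = $weight{$item};
          $total_weight += $this_weight;
          ## keep this item with probability this_weight / total_weight_so_far
          $winner = $item if rand($total_weight) < $this_weight;
        }
        print "winner: $winner\n";  # over many runs: apple ~10%, banana ~30%, cherry ~60%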

When the loop is finished, $winner is a ``.log'' file (or possibly undef). However, since it came from the result of a readdir, it's still considered tainted. Since I already know that it's a pretty safe filename, I untaint the entire value using a straightforward cheat in line 75. Lines 76 and 77 attempt to open this log file for appending.
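The untainting idiom in line 75 is worth a second look. In general form, it's just this sketch (the variable names here are placeholders); only do this when, as here, you already know the value is safe:

        ## launder a value we've already decided to trust:
        ## capture the whole string, then use the capture, which Perl considers clean
        my $suspect = $ENV{"PATH_INFO"};  # any tainted value
        $suspect =~ /(.*)/s;              # /s lets . match newlines, so nothing is lost
        my $clean = $1;                   # $1 is untainted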

Lines 79 through 85 form a standard ``flock and wait'', as described in a few of my previous columns. I want to write a timestamp and host address to the log file, but I need to have this script be the only one doing it at the time. So, I stole the example directly from the perlfunc manpage at the flock entry (once again). Note that because I'm writing to the ``.log'' file, the timestamp gets updated, making this particular entry the least likely to be re-used on the next pass. This is exactly what we want -- a fairly fair dealing of items.
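If the bare numbers in lines 80 and 83 look too magic, here's the same sequence spelled out with the symbolic constants (not in the listing; it assumes LOG is already open for append, as in lines 76 and 77):

        use Fcntl qw(:flock :seek);     # symbolic names for the magic numbers

        flock LOG, LOCK_EX;             # 2: wait for an exclusive lock
        seek LOG, 0, SEEK_END;          # (0, 2): jump to the end in case someone appended
        print LOG scalar localtime, " to $ENV{'REMOTE_ADDR'}\n";
        close LOG;                      # closing the handle releases the lock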

Line 89 computes a URL for the selected item. This will be the PATH_INFO (not PATH_TRANSLATED) name, appended with the basename of the selected candidate, minus the ``.log'' suffix.
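Carrying through the same illustrative values from earlier (the document root is still made up), that computation looks like this:

        use File::Basename;

        ## line 89 again, with made-up example values filled in:
        my $PATH_INFO = "/some/path/picture.random";
        my $winner    = "/home/httpd/htdocs/some/path/picture.random/whatever.html.log";
        my ($winner_url) = ("$PATH_INFO/" . basename $winner) =~ /(.*)\.log$/s;
        print "$winner_url\n";   # prints /some/path/picture.random/whatever.html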

Line 93 sends a ``Location'' directive to the web server. This causes the web server to fetch the selected URL and insert its contents into place in the SSI document.

So, how to use this thing? Best illustrated by an example. Let's work it from the top down...

I have a document in the URL http://www.myserver.com/some/path/thisdoc.html, containing the SSI invocation snippet of:

        <P>And here's the random picture of the day:
        <!--#include virtual="/cgi-bin/random/some/path/picture.random"-->

Note that the ``/some/path'' here in the invocation has to be the same as the document in which it gets included. This document also has to be enabled for SSI processing. You'll have to figure out how this is done. On my server, I just make the file executable (Apache XBitHack is turned on).
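Exactly how you enable SSI depends on your server. As one possibility, on an Apache server that allows .htaccess overrides, something like the following (together with chmod +x on the document for the XBitHack route) is typical; check your own server's documentation:

        # .htaccess sketch for Apache (assumes overrides are allowed here)
        Options +Includes
        XBitHack on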

I then need a subdirectory picture.random, containing three different things for each picture to be randomly dealt:

  1. The picture itself (I'm using whatever.jpg here).

  2. An HTML snippet that, when included, creates the proper <IMG SRC=``...''> to select that picture. We'll call that ``whatever.html'', corresponding to ``whatever.jpg''.

  3. A log file, writable by the web server, corresponding to that particular HTML snippet. That'd be ``whatever.html.log'' here.

Luckily, I have a nice GNU Makefile that does all that:

        J = $(wildcard *.jpg)
        H = $(J:.jpg=.html)
        L = $(J:.jpg=.html.log)
        D = $(shell basename `pwd`)
        
        all: $H $L
                rm -f $(filter-out $H, $(wildcard *.html))
                rm -f $(filter-out $L, $(wildcard *.log))
        
        $H $L: GNUmakefile
        
        %.html.log: %.html
                echo -n >$@
                chmod 646 $@
        
        %.html: %.jpg
                echo "<table><tr><th>$<<tr><td><IMG SRC='$D/$<' ALT='$<'></table>" >$@

This Makefile is nice because I can add and delete whatever.jpg files at will, and it automatically creates or destroys the log files and HTML snippets.

When someone selects thisdoc.html, the server-side include invokes my random CGI script, which looks at the ``.log'' files in the picture.random directory, and picks one of them, biased towards the ones that haven't been selected recently.

Let's say it picks ``whatever.html.log''. That means that the script will then add the date and time and IP source address to that particular logfile, then issue a redirect to cause the server to go fetch ``whatever.html''. That file contains:

        <table><tr><th>whatever.jpg<tr><td>
        <IMG SRC='picture.random/whatever.jpg' ALT='whatever.jpg'>
        </table>

Notice the table markup enclosing the picture reference, which gives a cute label above the picture. If I had wanted to keep it simple, I'd have just used the <IMG SRC=``...''> part.

The server then includes this snippet into thisdoc.html, and the user's browser finally sees:

        <P>And here's the random picture of the day:
        <table><tr><th>whatever.jpg<tr><td>
        <IMG SRC='picture.random/whatever.jpg' ALT='whatever.jpg'>
        </table>

which will be different (probably) on each reload!

Wow. Lots of work, but hopefully, I've illustrated a couple of interesting techniques. See ya next time...

Listing 1

        =1=     #!/usr/bin/perl -Tw
        =2=     use strict;
        =3=     use File::Basename;
        =4=     
        =5=     ## random dealer SSI-CGI, by Randal L. Schwartz (c) 1996
        =6=     
        =7=     ## <!--#include virtual="/cgi-bin/random/DOC-PATH/SUBDIR.random"-->
        =8=     
        =9=     ## where DOC-PATH has to be the *same* as the path to the including
        =10=    ## doc and SUBDIR.random has to exist below that directory.
        =11=    ## SUBDIR.random contains *.log files which must be httpd-writeable.
        =12=    ## Random fairly-dealt *.log will cause return of corresponding *
        =13=    ## (sans .log) inline.  Of course, these notes are cryptic. :-)
        =14=    
        =15=    ## set the path
        =16=    $ENV{"PATH"} = "/usr/local/bin:/usr/ucb:/bin:/usr/bin";
        =17=    
        =18=    ## return $_[0] encoded for HTML entities
        =19=    sub ent {
        =20=      local $_ = shift;
        =21=      $_ =~ s/[<&>]/"&#".ord($&).";"/ge;  # entity escape
        =22=      $_;
        =23=    }
        =24=    
        =25=    ## death handler, presumes no output yet
        =26=    $SIG{"__DIE__"} = $SIG{"__WARN__"} = sub {
        =27=      my $why = shift;
        =28=      chomp $why;
        =29=      $why = ent $why;
        =30=      print "Content-type: text/html\n\n[$why]\n";
        =31=      exit 0;
        =32=    };
        =33=    
        =34=    ## get CGI/SSI variables
        =35=    my $DOCUMENT_URI = $ENV{"DOCUMENT_URI"};
        =36=    my $PATH_INFO = $ENV{"PATH_INFO"};
        =37=    my $PATH_TRANSLATED = $ENV{"PATH_TRANSLATED"};
        =38=    
        =39=    ## compute directory names of requested path and document path
        =40=    my $doc_dir = dirname $DOCUMENT_URI;
        =41=    my $info_dir = dirname $PATH_INFO;
        =42=    
        =43=    ## security checks:
        =44=    
        =45=    ## is user asking for immediate subdir of the doc from which we're
        =46=    ## being included?
        =47=    $doc_dir eq $info_dir
        =48=            or die "security error: $doc_dir ne $info_dir";
        =49=    
        =50=    ## is user asking for directory ending in ".random"?
        =51=    $PATH_TRANSLATED =~ /\.random$/
        =52=      or die "security error: $PATH_TRANSLATED does not end in .random";
        =53=    -d $PATH_TRANSLATED
        =54=      or die "security error: $PATH_TRANSLATED is not a directory";
        =55=    
        =56=    ## everything's validated, so go for it.
        =57=    ## get the candidate log files:
        =58=    opendir PT, $PATH_TRANSLATED or die "opendir: $!";
        =59=    my @files = map "$PATH_TRANSLATED/$_", grep /\.log$/, readdir PT;
        =60=    closedir PT;
        =61=    
        =62=    ## select a weighted random candidate:
        =63=    my $total_weight = 0;
        =64=    my $winner;
        =65=    srand;
        =66=    for (@files) {
        =67=      my $this_weight = (86400 * -M) ** 2; # age in seconds, squared
        =68=      $total_weight += $this_weight;
        =69=      $winner = $_ if rand($total_weight) < $this_weight;
        =70=    }
        =71=    ## $winner might still be undef at this point, in which case we'll die
        =72=    ## on the following...
        =73=    
        =74=    ## untaint and open log:
        =75=    $winner =~ /(.*)/s;
        =76=    open LOG, ">>$1"
        =77=      or die "Cannot append to $1: $!";
        =78=    
        =79=    ## wait for it to be ours:
        =80=    flock LOG, 2;
        =81=    
        =82=    ## record this hit:
        =83=    seek LOG, 0, 2;
        =84=    print LOG scalar localtime, " to $ENV{'REMOTE_ADDR'}\n";
        =85=    close LOG;
        =86=    
        =87=    ## compute URL for redirect
        =88=    ## (original request path and basename of winner, without log):
        =89=    my ($winner_url) = ("$PATH_INFO/" . basename $winner) =~ /(.*)\.log$/s;
        =90=    
        =91=    ## trigger an SSI redirect so that the server will fetch the data (and log it)
        =92=    ## (It had better make sense inside text/html :-)
        =93=    print "Location: $winner_url\n\n";
        =94=    
        =95=    ## done!

Randal L. Schwartz is a renowned expert on the Perl programming language (the lifeblood of the Internet), having contributed to a dozen top-selling books on the subject, and over 200 magazine articles. Schwartz runs a Perl training and consulting company (Stonehenge Consulting Services, Inc of Portland, Oregon), and is a highly sought-after speaker for his masterful stage combination of technical skill, comedic timing, and crowd rapport. And he's a pretty good Karaoke singer, winning contests regularly.

Schwartz can be reached for comment at merlyn@stonehenge.com or +1 503 777-0095, and welcomes questions on Perl and other related topics.