Init scripts are a pain in the ass

Ideally you’d ship no init script and let downstream packaging figure it out, since there are even more init systems than there are package formats: various versions of SysV init; upstart for old versions of Ubuntu; systemd for new versions of Ubuntu, Debian, and Red Hat; OS X launchd for all those Homebrew users; and if you’re generous, or a masochist, the several variants of BSD rc.d; Solaris SMF; or even Windows Services.

If you don’t have the luxury of being popular enough that distros package for you, or you’re working on in-house stuff, then you might end up writing a SysV init script. It won’t be portable. Even if you’re using daemonize to handle the hard stuff like daemonizing, PID/lock files, and running as a non-root user, there are still little matters like the helper functions you need even with daemonize to kill the process or get its status. Are they in /etc/rc.d/init.d/functions? Are they in /lib/lsb/init-functions? Does your target system actually have either, and if so, what’s in them? Do you need to run Red Hat chkconfig after installing the script? How about Debian update-rc.d? Do you need an INIT INFO block? Does your target system claim to use LSB-compliant init scripts? Which LSB version? Do you really need all those entry points? Are you going to test it on systemd or upstart systems to see if their SysV backwards compatibility actually works?

OK, for modern systems, you can probably get away with just systemd and launchd, but no wonder a lot of new kids think “background service” means “run it in tmux”.

Fractalspotting: AVISynth

I mentioned denoising as an application of fractal image processing techniques in my Review of Fractal Image Compression and Related Algorithms, and recently found out that there has been at least one implementation in the wild, called FrFun7, for the AVISynth frameserver. Sadly, it’s closed source and the author appears to have vanished.


How to package the TeamCity build agent for Debian/Ubuntu

This post brought to you by Thumbtack Engineering and the letters C and I.

We’ll use fpm to package the TeamCity 9 build agent for Debian 8 (“Jessie”) or Ubuntu Linux, including a systemd service file to start it automatically.

First, read the TeamCity 9 docs for “Setting up and Running Additional Build Agents”.

Get the TeamCity build agent

Download TeamCity 9 from JetBrains. I’m going to be using TeamCity 9.0.3. You can also get a version of the agent matched to your TeamCity install by going to the Agents tab and clicking “Install Build Agents” in the upper right.

Install fpm if you don’t have it already.

Unzip TeamCity-9.0.3.tar.gz and find the buildAgent directory. This is where all of the agent’s code and config files live. Since TeamCity wasn’t written with the FHS in mind, we’ll put it in /opt/TeamCity/buildAgent.

Make the agent scripts executable

Run chmod +x buildAgent/bin/*.sh. You won’t need to run them on the system you’re using for packaging, but fpm preserves the executable flag when it creates a package.

Create a systemd service file

Create a file named teamcity-build-agent.service and put this in it.

[Unit]
Description=TeamCity build agent

[Service]
User=teamcity
ExecStart=/opt/TeamCity/buildAgent/bin/ run

[Install]
WantedBy=multi-user.target


The run argument keeps the startup script from forking, so that systemd can manage the process itself. The User= line should be changed to reflect whatever user you plan on running TeamCity as. (See the systemd.exec man page for more process control options.)

This file will eventually be copied to /lib/systemd/system/teamcity-build-agent.service, because systemd is too cool for /etc.

Build a package with fpm

fpm has some usage examples on its wiki. We’re going to use the -s dir input type to take a directory and the -t deb output type to generate a Debian .deb package.

fpm -s dir \
    -t deb \
    --name teamcity-build-agent \
    --version 9.0.3 \
    --architecture all \
    --depends java6-runtime-headless \
    buildAgent/=/opt/TeamCity/buildAgent/ \
    teamcity-build-agent.service=/lib/systemd/system/teamcity-build-agent.service

This fpm invocation tells it the package name and version, that the agent doesn’t care what architecture you’re using but requires Java 6, and that everything should go under /opt/TeamCity, except for the service file, which goes where systemd is expecting it.

You should now have a file named teamcity-build-agent_9.0.3_all.deb.

Create a user

Before installing the package on your target machine, run sudo adduser teamcity --system --group --home /opt/TeamCity, or add something equivalent to your configuration manager if that’s where you manage users. (I’ll be using Puppet to manage users on production machines that will have this package). The --group option creates a teamcity group along with the teamcity user, so that you can then add other users to the teamcity group if they need to look at TeamCity builds or configuration.

For extra credit, you could create and destroy the user in preinstall/postremove scripts.

Install the package

Copy the file to your target machine and run sudo dpkg -i teamcity-build-agent_9.0.3_all.deb.

Configure the package

Copy /opt/TeamCity/buildAgent/conf/ to /opt/TeamCity/buildAgent/conf/, then edit serverUrl to point it at your TeamCity server. You may also want to set name to something recognizable, or the agent will default to your hostname. If you’re using Puppet, this is a good file to templatize.

Create writable directories

You’ll need to create the directories named in the scripts under bin/ and the config files under conf/, including at least logs, work, temp, and system, as well as the update and backup directories that I’ve only seen in log output. They should be writable by the teamcity user.

sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/logs
sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/work
sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/temp
sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/system
sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/update
sudo install -d -o teamcity -g teamcity -m ug=rwx /opt/TeamCity/buildAgent/backup

Alternatively, you can map them to more FHS-friendly locations in /tmp and /var. The logs directory is used both for log output and for a PID file, but that can be changed by setting the LOG_DIR and PID_FILE environment variables in the service file.
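If you relocate them, those variables can be set right in the unit file. A sketch of the [Service] additions (the directory paths and PID file name here are hypothetical examples, not TeamCity defaults):

```ini
[Service]
Environment=LOG_DIR=/var/log/teamcity-build-agent
Environment=PID_FILE=/var/run/teamcity-build-agent/
```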

More extra credit: there’s no easy way to specify file ownership in an fpm-generated .deb package. This is a known issue. This is not a limitation of the .deb format itself, which, broadly speaking, is a pair of tarballs. Solve this problem and your package can create those writable directories itself.

Start the agent

Run sudo systemctl start teamcity-build-agent. You can also run sudo systemctl status teamcity-build-agent to see if it started successfully.

Talking in both directions

Communication between a TeamCity server and a TeamCity agent is two-way. Agent→server connections can and should use HTTPS, but server→agent connections don’t yet support encryption.

You’ll need to make sure the agent can reach the server (outbound on whatever port you used in the serverUrl setting of the agent config file), and the server can likewise reach the agent (inbound on the port specified in ownPort and the IP address specified in ownAddress, if you’ve bound your agent to a specific IP). The inbound port is 9090 by default, and again, it’s not encrypted, so you’ll need to use a VPN, SSH tunnel, or equivalent between the server and agent.


How to find the name of an Elastic Beanstalk environment from inside one of its instances

At Thumbtack, we’ve started using Amazon Elastic Beanstalk (EB) to deploy web services. We already use Papertrail as a log aggregator for our existing systems, and have installed their syslog forwarding agent on our EB apps using .ebextensions. However, I didn’t have a way to group log output from multiple EB-managed EC2 instances, or distinguish EB environments by name, because the default EC2 hostnames are just prefixed IP addresses. So I decided to use the EB environment name, which is unique across all EC2 regions, and tell Papertrail’s agent to use that instead of a hostname.

EB sets the environment ID and name as the elasticbeanstalk:environment-id and elasticbeanstalk:environment-name EC2 tags on all of the parts of an EB app environment: load balancer, EC2 instances, security groups, etc. Surprisingly, EC2 tags aren’t available to instances through the instance metadata interface, but they are available through the normal AWS API’s DescribeTags call. EB app container instances are based on Amazon Linux and have Python 2.6 and Boto preinstalled, so rather than go through the shell gyrations suggested by StackOverflow, I wrote a Python script that uses Boto to fetch the instance ID and region, then uses those together with the instance’s IAM role to fetch its own tags, and prints the environment name.

You’ll need to make sure the IAM role used by your EB app environment has permission to describe tags, and to describe instances and possibly instance reservations. I’ve included an IAM policy file (EC2-Describe-IAM-policy.json) that you can apply to your IAM role to grant it permission to describe any EC2 resource.

See my gist:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
#!/usr/bin/env python
import boto.utils
import boto.ec2

# The instance identity document gives us this instance's ID and region.
iid_doc = boto.utils.get_instance_identity()['document']
region = iid_doc['region']
instance_id = iid_doc['instanceId']

# Look this instance up via the EC2 API and read the Elastic Beanstalk tag.
ec2 = boto.ec2.connect_to_region(region)
instance = ec2.get_only_instances(instance_ids=[instance_id])[0]
env = instance.tags['elasticbeanstalk:environment-name']
print env

There’s an annoying wrinkle around getting the region for an EC2 instance through the instance metadata: you can’t, at least not through a directly documented method. You can get the availability zone (AZ) from, which will be something like us-west-2a, and then drop the last letter, giving you a region name like us-west-2. However, Amazon has never documented the relationship between AZ names and region names, so it’s possible this could break in the future.
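The letter-dropping heuristic is a one-liner; here’s a sketch (the function name is mine):

```python
def az_to_region(az):
    """Heuristic: an AZ name is the region name plus a trailing zone
    letter, so strip trailing letters. Relies on the undocumented
    naming convention described above."""
    return az.rstrip("abcdefghijklmnopqrstuvwxyz")

print(az_to_region("us-west-2a"))  # -> us-west-2
```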

You can also fetch an instance identity document in JSON format from Its contents aren’t documented either, but on all the EC2 instances I’ve examined, it looks like this, and contains the region as well as the AZ:

{
  "instanceId" : "i-12abcdef",
  "billingProducts" : null,
  "architecture" : "x86_64",
  "imageId" : "ami-dc123456",
  "pendingTime" : "2014-12-18T01:20:42Z",
  "instanceType" : "m3.2xlarge",
  "accountId" : "1234567890",
  "kernelId" : null,
  "ramdiskId" : null,
  "region" : "us-west-2",
  "version" : "2010-08-31",
  "privateIp" : "",
  "availabilityZone" : "us-west-2a",
  "devpayProductCodes" : null
}

Since the instance identity document actually has a key named region, I decided to go with that method to find the region for an EC2 instance from inside that instance.

Previously, we’d set an environment variable or customize a config file to tell Papertrail what “hostname” to use. Finding the EB environment name at runtime from EC2 tags means that we don’t have to customize EB app bundles for each target environment, and thus can use the same app bundle across multiple environments; for example, from a CI build server’s test environment to a staging environment, and then to production.


A Review of Fractal Image Compression and Related Algorithms

I wrote this review paper on fractal image compression, denoising, and enlargement back in 2008 while I was at Caltech, and thought I’d lost it until discovering a copy in my backups. Fractal compression is a fascinating area, and I went on to prototype a GPGPU-accelerated image enlarger as my capstone graphics project. It actually predates OpenCL, and I hope to come back to it someday for an overhaul, using this paper for reference.

A Review of Fractal Image Compression and Related Algorithms

Jeremy Ehrhardt – 2008-03-18


Fractal image compression is a curious diversion from the main road of raster image compression techniques. Fractal compressors do not store pixel values. Rather, they store a fractal, the attractor of which is the image that was encoded. This representation of an image has a number of useful properties.

The property that drove initial development of the field is that images with fractal properties can potentially be encoded in very few bits if a close match for their generating fractal can be found. Fractal-like images are frequently found in nature, so fractal compression promised very high compression ratios for natural scenes such as clouds, landscapes, forests, etc. Another useful property is the scale independence of fractals. Fractal-encoded images may be decoded at larger or smaller sizes than the original without major distortion.

This review covers major developments unique to fractal image compression. Many published algorithms incorporate general data compression techniques such as vector quantization and entropy coding (as many non-fractal compressors do), but the details are best covered by general data compression literature, and have thus been omitted. There is also some recent work hybridizing fractal and wavelet compression, which is best left to a review of wavelet-based methods, and has also been omitted.

The first problem in the field was proving that it was possible to find a fractal representation for any image. This was proved possible with the Collage Theorem, which was followed by the development of an algorithm to generate fractal representations for arbitrary images. The algorithm was slow, so subsequent work looked at improving the speed of the search for fractal representations, and also on improving their quality. The partitioning of the image to be encoded was a common target for improvement, and several of the important schemes are discussed below.

Unfortunately, fractal compression was found to have some problems. Early research did not establish that natural images were actually self-similar to such a degree that fractal compression was the best fit. Additionally, optimal encoding was found to be NP-hard. These difficulties are the reason that fractal codecs are virtually nonexistent in the current graphics universe, having been superseded by other codecs with more general applications and higher speeds.

However, the useful properties of fractal encodings have found some use outside of the performance-sensitive area of compression. Two fractal image-processing algorithms are discussed in the last section; one for image enlargement, and one for noise removal. First, we will discuss the development of fractal image compression, from the initial mathematical development of the field in the late 1980s and the first implementation in the early 1990s, up through the various improvements over the following decades.

The Collage Theorem

Barnsley’s 1985 Collage Theorem [BARNSLEY] p. 89 made fractal image compression possible. It states that, given an iterated function system (IFS, a list of repeatedly applied contractive transforms) and some subset of a complete metric space, in order for the attractor of the IFS to be “close” to the given subset, the union of the images of the subset transformed by the member transforms of the IFS must be “close” to the original subset (for a certain definition of “close”). The Collage Theorem thus establishes a guideline for finding fractal approximations to images: work out transforms that take parts of an image to itself. The attractor of the transforms will then be close to the original image.
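Stated symbolically: if $w_1, \dots, w_n$ are contractions with contractivity factor $s < 1$ and attractor $A$, then for any compact set $L$, with $h$ the Hausdorff metric,

```latex
h(L, A) \;\le\; \frac{1}{1 - s} \, h\!\left( L,\ \bigcup_{i=1}^{n} w_i(L) \right)
```

so making the “collage” $\bigcup_i w_i(L)$ close to $L$ guarantees that the attractor of the IFS is close to $L$.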

Barnsley presents a number of examples of simple fractals for which the reader is encouraged to figure out suitable IFS representations by hand. Barnsley does not, however, elaborate on how to find such an IFS representation for an arbitrary image.

Beginnings of fractal image compression

The first algorithm capable of generating an IFS fractal representation for an arbitrary raster image was proposed by Barnsley’s graduate student Jacquin [JACQUIN1]. Jacquin’s fractal coding scheme is based on non-overlapping square block partitions of the image being encoded. It defines a “distortion” measure based on the L2 distance between the pixels of two image blocks, as well as a number of contractive linear transformations that act on image blocks. The transformations used include geometric transforms from isometries of the square blocks, as well as pixel value transforms that change the overall luminance of the block being transformed.

Figure 1: Transformations used by Jacquin’s original algorithm [JACQUIN1, Table 1].

Jacquin’s algorithm divides the image to be encoded into “domain” blocks of a given size, and classifies the blocks into classes based on their appearance: shade, midrange, simple edge, or mixed edge. Classifying the blocks in this way reduces the pool of blocks that must be searched in the next step. The algorithm then divides the image into “range” blocks containing a quarter of the number of pixels, and iterates through them. For each range block, it searches the domain blocks in the matching class, looking for a transformation or transformations that map a domain block to the range block with minimum distortion. (Domain blocks are downsampled to the same number of pixels as range blocks for the distortion calculation.)

Figure 2: The fractal encoding process [JACQUIN2, Fig. 2]. Rᵢ is a range block, D is the domain block pool, and T is the transform pool.

This process produces a list of self-similarity-producing transforms, from regions of the image to other regions of the image, that constitute an IFS with the original image as the fractal’s attractor. If no transformation can be found for some range block with a measured distortion less than a given threshold, the range block is partitioned into 4 equally sized square children, and the search is repeated with the children as range blocks.

Reconstructing an image from its IFS representation is a much simpler procedure than the above encoding algorithm. The list of transforms is applied some number of times to an arbitrary image. With each iteration, since the original image is the fixed point of the IFS, the starting image is transformed to more closely resemble the original encoded image. The image may be reconstructed at a higher or lower quality by increasing or decreasing the number of iterations.
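To make the encode/decode loop concrete, here is a toy one-dimensional analogue (my own simplification for illustration, not Jacquin’s actual 2-D coder): range blocks of two samples, domain blocks of four samples downsampled by pairwise averaging, and a least-squares affine map fitted per range block.

```python
def downsample(block):
    # Pairwise averaging halves the number of samples.
    return [(block[i] + block[i + 1]) / 2.0 for i in range(0, len(block), 2)]

def fit_affine(dom, rng):
    """Least-squares s, o minimizing sum((s*d + o - r)**2),
    with s clamped so the map stays contractive."""
    n = len(dom)
    md, mr = sum(dom) / n, sum(rng) / n
    den = sum((d - md) ** 2 for d in dom)
    s = 0.0 if den == 0 else sum((d - md) * (r - mr) for d, r in zip(dom, rng)) / den
    s = max(-0.9, min(0.9, s))
    o = mr - s * md
    err = sum((s * d + o - r) ** 2 for d, r in zip(dom, rng))
    return s, o, err

def encode(signal, rsize=2):
    """For each range block, find the domain block and affine map that
    reproduce it with minimum squared error."""
    dsize = 2 * rsize
    transforms = []
    for r0 in range(0, len(signal), rsize):
        rng = signal[r0:r0 + rsize]
        best = None
        for d0 in range(0, len(signal) - dsize + 1, rsize):
            s, o, err = fit_affine(downsample(signal[d0:d0 + dsize]), rng)
            if best is None or err < best[0]:
                best = (err, d0, s, o)
        transforms.append(best[1:])
    return transforms

def decode(transforms, length, rsize=2, iterations=30):
    """Apply the transform list repeatedly to an arbitrary starting
    signal; it converges toward the attractor of the encoded IFS."""
    img = [0.0] * length
    dsize = 2 * rsize
    for _ in range(iterations):
        out = [0.0] * length
        for i, (d0, s, o) in enumerate(transforms):
            dom = downsample(img[d0:d0 + dsize])
            for j in range(rsize):
                out[i * rsize + j] = s * dom[j] + o
        img = out
    return img

signal = [float(v) for v in range(8)]
decoded = decode(encode(signal), len(signal))
```

Decoding starts from an all-zero signal and still converges, because every fitted map is clamped to be contractive; the original signal is (approximately) the fixed point of the transform list.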

Figure 3: Reconstruction of a fractally encoded image: starting image (a), 1 iteration (b), 2 iterations (c), and 10 iterations (d). [FISHER, Fig. 1.10]

Jacquin [JACQUIN1] demonstrated encoding and decoding on the standard Lena test image. The paper does not report coding speed or memory usage for the test image, so it is impossible to discern whether its fractal coding scheme is competitive with other image compression schemes of the time. Jacquin published a clarified and expanded version of this encoding procedure two years later [JACQUIN2]. Almost all fractal image compression algorithms are extended versions of this algorithm.

Improvements in partitioning

The simple two-level image partitioning scheme used by Jacquin has been a target for replacement in later work. The motivation for changing the partitioning scheme is to reduce the error in the mapping between range and domain blocks, and thus improve the fidelity of the image encoding. Fisher describes two partitioning schemes that improve on Jacquin’s.

Figure 4: Fixed partitioning [WOHLBERG1, Fig. 2a].

Figure 5: Quadtree partitioning [WOHLBERG1, Fig. 2b].

Figure 6: HV partitioning [WOHLBERG1, Fig. 2c].

Figure 7: Delaunay partitioning [WOHLBERG1, Fig. 3c].

Fisher describes quadtree partitioning [FISHER, Ch. 3] as “a ‘typical’ fractal scheme and a good first step for those wishing to implement their own.” A quadtree is a data structure that subdivides 2D space: each node represents a square area, and nodes may be subdivided into four equally sized square child nodes. Quadtree partitioning is an “adaptive” scheme that produces different partitions for different images, based on their content. Fisher’s quadtree algorithm first subdivides the image to be encoded several times, producing a balanced quadtree several layers deep. The leaf nodes of this initial quadtree are range blocks as in Jacquin [JACQUIN2]. Domain blocks are chosen from nodes that are one level closer to the root than the range blocks, and thus four times the area. All domain blocks are compared with each range block, and if no transformation can be found that maps a domain block to the range block under consideration with less than some threshold level of distortion, the range block is subdivided. It may be recursively subdivided several times until a low-distortion mapping from a domain block is found, or a maximum tree depth is reached.
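The subdivision logic can be sketched as follows. As a simplification, this version splits a block when its pixel variance exceeds a threshold; Fisher’s actual scheme splits when no low-distortion domain match can be found.

```python
import statistics

def quadtree_partition(img, x, y, size, max_depth, var_threshold, depth=0):
    """Return leaf blocks as (x, y, size) tuples: split a square whose
    pixel variance exceeds var_threshold into four equal quadrants,
    recursing up to max_depth levels."""
    pixels = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size > 1 and depth < max_depth and statistics.pvariance(pixels) > var_threshold:
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += quadtree_partition(img, x + dx, y + dy, half,
                                             max_depth, var_threshold, depth + 1)
        return leaves
    return [(x, y, size)]

flat = [[5.0] * 4 for _ in range(4)]       # uniform: stays one block
mixed = [[10.0, 10.0, 0.0, 0.0],
         [10.0, 10.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]             # one bright quadrant: splits once
```

On the uniform image the whole square stays a single leaf; the image with one bright quadrant splits once into four quadrant leaves, each internally uniform.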

Jacquin’s original partitioning scheme is almost a special case of quadtree partitioning with a minimum depth of n and maximum depth of n+1, where n is however many partitions it takes to subdivide the image into blocks of the desired range block size, and the levels of the tree closer to the root are not actually represented in the scheme’s data structures.

HV partitioning [FISHER, Ch. 6] is similar to quadtree partitioning in that it involves recursive partitioning of rectangular areas. When a quadtree range block is split, it is always divided into four equal areas. HV partitioning has an additional adaptive element: when a HV-tree range block is split, the orientation (horizontal or vertical) and position of the dividing line is chosen to maximize the difference between average pixel values on either side of the line, with a bias applied to prefer splits that do not create very thin sub-rectangles. The goal of HV partitioning is to create a partition of the image that better corresponds to the structure of the image than a quadtree partition, and thus use fewer ranges when encoding the image, improving encoding time and quality.

A drawback of HV partitioning is the greater variety of possible transformations between range and domain blocks, since both are rectangles of essentially arbitrary dimensions, rather than squares with power-of-two dimensions. This increases both search times and the number of bits required to represent a mapping, although the generally lower number of mappings required offsets the latter.

Fisher notes a weakness in both quadtree and HV partitioning: reconstructed images suffer from visible blockiness at all but the highest reconstruction qualities. Fisher’s quadtree partitioning [FISHER, Ch. 3] deals with this by applying a heuristic deblocking filter during image reconstruction. The filter averages pixels on either side of a range block boundary, with pixels on the inside weighted by a factor proportional to the range block’s depth in the quadtree. Fisher describes a similar procedure for blocks in an HV tree [FISHER, Ch. 6]. Blocky artifacts are by no means unique to these two schemes or fractal image compression in general; IDCT-based codecs such as JPEG and the MPEG family are also susceptible to blockiness. The JPEG standard does not include a deblocking filter, but some MPEG variants do, such as H.264.

One scheme [DAVOINE] not based on rectangular blocks uses a mesh partition made up of triangles, which is generated by repeated adaptive Delaunay triangulation and triangle splitting. The goal of this scheme is to cover the image in triangles that are as large as possible while maintaining a variance in internal pixel brightness values that is less than some threshold parameter (triangles meeting this condition are “homogeneous”). Thus, regions with internal brightness boundaries are split up. Delaunay triangulation was chosen to minimize the number of thin triangles, and thus reduce numerical problems when the internal pixels of a triangle are read from the image raster.

Initially, the image is covered with a regular grid of vertices. In the splitting phase, the Delaunay triangulation of the vertices is calculated; then, for every triangle that is not homogeneous, a new vertex is added at the barycenter of that triangle. Splitting is repeated until convergence, or until some iteration limit is reached. In the merge phase, the algorithm removes vertices for which all surrounding triangles have similar pixel value mean and variance. Then a final triangulation is performed. This procedure is used to generate both domain and range sets of triangles, with domain triangles permitted a greater variance.

Adaptive triangular partitions generally use fewer blocks than quadtree or HV partitions. An additional advantage of triangular blocks is that inter-block seams do not always line up with pixel boundaries, so there is less visible blockiness in the decoded image. The major disadvantage is that both encoding and decoding require transformations between arbitrarily shaped triangular range and domain blocks; this involves substantially more interpolation than the rectangle-based schemes.

Difficulties of fractal image compression

Since the idea behind fractal image compression is that the image being compressed can be modeled as a fractal, useful fractal image compression requires that the image actually have fractal characteristics so that it can be efficiently modeled that way. Clarke & Linnett [CLARKE] pointed out that while fractal image compression is frequently proposed as a compression scheme for images of nature (such as plants, landscapes, clouds), it is not necessarily the case that those images have the affine self-similarities required for effective compression by existing fractal compression schemes, or indeed, that they have any fractal characteristics at all.

Figure 8: Fractal fern [FERN1].

Figure 9: Real fern [FERN2].

As an example, they observed that there are significant differences between a simple computer-generated fractal fern and a photo of a real fern. A fractal fern is highly idealized and may be represented very efficiently by a fractal model, but a natural fern does not have its perfect regularity, and the self-similarity breaks down at some scales. In particular, the fern leaves resemble the whole fern, but are different structures nonetheless, and can only be approximated by small copies of the whole fern.

Wohlberg & de Jager [WOHLBERG2] examined statistical properties of natural images with regard to fractal image encoding, specifically with regard to the deterministic self-similar fractal representations used by all of the above algorithms. Their conclusions seem to confirm the assertion of Clarke & Linnett: “The form of self-affinity considered here therefore does not appear to represent a more accurate characterization of image statistics than vastly simpler models such as multiresolution autoregressive models.”

It has been proved [MATTHIAS] that finding an optimal fractal encoding for an arbitrary image is NP-hard. Furthermore, it has been proved that algorithms derived from Jacquin’s original Collage Theorem-based algorithm do not generate approximations to optimal encodings. So, at least with currently known encoding methods, there is an unavoidable tradeoff between fast (polynomial-time) encoding and obtaining the highest quality representation that will fit in a given amount of space. This is a huge disadvantage relative to other image compression methods: for example, JPEG’s IDCT block coding processes one fixed-size block of pixels at a time, which results in an encoding time that is a linear function of the number of samples in the original image.

Fractal image processing

Authors focused on image compression have treated the iterative reconstruction process for fractal image encodings as a way to control the quality of a reconstructed image. Polidori & Dugelay [POLIDORI] examined the reconstruction process as a procedure for image enlargement.

The most commonly used algorithms for this purpose are variations on polynomial interpolation of samples (the pixels of the original image). The assumption underlying polynomial interpolation is that the samples are from a smooth continuous function. Polidori & Dugelay made a different assumption: that the function that the image was sampled from is fractal in nature, instead of smooth. Then a fractal encoding of that image would be an approximation of that fractal. Since fractals are scale-independent, the image could then be reconstructed at any size from the encoding.

Polidori & Dugelay first examined fractal image encoding and decoding schemes identical to those developed for image compression, based on nonoverlapping partitions of the image. They found that such schemes tended to produce blocky reconstructed images with undesirable artifacts when used to reconstruct images at a larger size than the original image. They then examined the idea of using overlapped domain blocks, which retain redundant information not desirable for compression applications, but useful for image enlargement. They proposed and tested several methods of recombining the overlapped blocks, with some methods yielding enlargements with visual quality comparable to those obtained through classical interpolation.

Another image processing technique that makes use of the scale independence of fractal image encoding is fractal denoising. Additive white Gaussian noise (AWGN) is common in images acquired from noisy sensors or transmitted through noisy communications channels. Images containing this kind of noise can be modeled as a series of samples where each sample is the sum of the value from the noise-free original and a value taken from a Gaussian distribution. Obviously, this model is scale-dependent, and one might expect that fractal image encoding poorly represents AWGN.

In fact, this is the case. Ghazel, Freeman & Vrscay [GHAZEL] noticed that “straightforward fractal-based coding performs rather well as a denoiser”. This motivated the development of their denoising algorithm, which goes a step further than that, and statistically estimates a fractal encoding of the noise-free image from a fractal encoding of an AWGN-contaminated noisy image.

In the first step of the algorithm, the noisy image is examined for areas with nearly uniform pixel values. The differences in value from pixel to pixel in such regions are likely due to noise, so the variance of the AWGN can be estimated from the variance of these regions. The image is then encoded as a series of transformations from domain blocks to range blocks in the usual fashion. (Adaptive quadtree partitioning is used, as it results in the best quality of reconstructed images.) Each transformation in the encoding that affects pixel values (“gray-level” transforms, as opposed to “geometric” transforms) is adjusted using the previously estimated variance according to a simple relation. Finally, the image is reconstructed from the modified fractal encoding.
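The first step, estimating the noise variance from flat regions, might be sketched like this (a simplified stand-in of my own, not Ghazel et al.’s exact estimator):

```python
import random
import statistics

random.seed(0)
TRUE_SIGMA = 8.0

# Synthetic flat 32x32 image contaminated with additive white Gaussian noise.
noisy = [[100.0 + random.gauss(0.0, TRUE_SIGMA) for _ in range(32)]
         for _ in range(32)]

def estimate_noise_variance(img, block=8, keep=0.25):
    """Estimate the AWGN variance from the flattest blocks: in
    near-uniform regions, pixel-to-pixel variation is mostly noise,
    so average the per-block variance over the lowest quartile."""
    variances = []
    for y in range(0, len(img), block):
        for x in range(0, len(img[0]), block):
            pixels = [img[y + j][x + i] for j in range(block) for i in range(block)]
            variances.append(statistics.pvariance(pixels))
    variances.sort()
    k = max(1, int(len(variances) * keep))
    return sum(variances[:k]) / k
```

Keeping only the lowest quartile biases the estimate slightly low; a real implementation would correct for that, but the estimate still lands near the true variance (64 here).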

Figure 10: Comparison of fractal denoising and Lee filtering [GHAZEL].

Ghazel, Freeman & Vrscay report that fractal denoising is competitive with or superior to Lee filtering, which is a common locally adaptive linear denoising algorithm. Furthermore, fractal denoising is more likely to outperform Lee filtering as the variance of the AWGN increases, making the fractal method an attractive choice for processing very noisy images.


A review of fractal image compression cannot fail to state this: despite years of improvements, such as the previously discussed partitioning schemes, fractal image compression is not competitive with current block IDCT or wavelet methods. The speed and quality issues noted above have prevented broad use of fractals for storage of compressed images.

However, fractal image processing techniques have found some application: fractal image enlargement was eventually commercialized as a product known as Genuine Fractals [GENUINEFRACTALS], which is well known within the desktop publishing industry as a high-quality image scaler suitable for making large prints from digital photos.



Barnsley, M. F. (1988). Fractals Everywhere, Academic Press Inc., US.


Jacquin, A. E. (1990). A novel fractal block-coding technique for digital images. 1990 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-90).


Jacquin, A. E. (1992). “Image coding based on a fractal theory of iterated contractive image transformations.” Image Processing, IEEE Transactions on 1(1): 18-30.


Jacquin, A. E. (1993). “Fractal image coding: a review.” Proceedings of the IEEE 81(10): 1451-1465.


Clarke, R. J. and L. M. Linnett (1993). “Fractals and image representation.” Electronics & Communication Engineering Journal 5(4): 233-239.


Saupe, D. and R. Hamzaoui (1994). “A review of the fractal image compression literature.” SIGGRAPH Comput. Graph. 28(4): 268-276.


Fisher, Y. (1995). Fractal image compression: theory and application, Springer-Verlag London, UK.


Polidori, E. and J. L. Dugelay (1995). Zooming using IFS. NATO ASI Conf. Fractal Image Encoding and Analysis, Trondheim.


Davoine, F., M. Antonini, et al. (1996). “Fractal image compression based on Delaunay triangulation and vector quantization.” Image Processing, IEEE Transactions on 5(2): 338-346.


Ruhl, M. and H. Hartenstein (1997). Optimal Fractal Coding is NP-Hard. Proceedings of the Conference on Data Compression, IEEE Computer Society.


Wohlberg, B. and G. De Jager (1999). “A review of the fractal image coding literature.” Image Processing, IEEE Transactions on 8(12): 1716-1729.


Wohlberg, B. and G. de Jager (1999). “A class of multiresolution stochastic models generating self-affine images.” Signal Processing, IEEE Transactions on 47(6): 1739-1742.


Ghazel, M., G. H. Freeman, et al. (2003). “Fractal image denoising.” Image Processing, IEEE Transactions on 12(12): 1560-1578.


onOne Software, Inc. “Genuine Fractals 5.” Retrieved 2008-03-17, from


Mihályi, A. “Fractal fern.” Retrieved 2008-03-17, from


“Olegivvit”. “Leaf of fern.” Retrieved 2008-03-17, from


Package your own mod_auth_openid 0.9 for Ubuntu

First, you’ll need the Debian New Maintainers’ Guide… just kidding. It’s totally useless. Don’t bother with it unless you really care about copyright metadata.

We’ll be using Jordan Sissel’s FPM (“Effing package management!”) utility to turn a packaging directory into an actual .deb package. The packaging directory has the same structure as /: an /etc, a /usr, a /usr/lib, and so forth, and we’ll put build products from mod_auth_openid‘s Autotools project into it.

I’ve put the instructions into a gist, which should work on Ubuntu 13.04/13.10 (Raring Ringtail / Saucy Salamander) and probably later releases.

# Ubuntu version of package:
# FPM man page:
# FPM instructions for autotools:
# install build-time dependencies
# yes, apache2 is a build-time dependency, otherwise APXS breaks:
# "checking Apache version... configure: error: /usr/bin/apxs2 says that your apache binary lives at /usr/sbin/apache2 but that file isn't executable."
sudo aptitude -y install \
git \
apache2 \
apache2-dev \
libopkele-dev \
libcppunit-dev \
autoconf \
libtool \
ruby
# install the FPM packaging utility
sudo gem install fpm
# check out my version of mod_auth_openid
# should default to the mysql-sessions branch
git clone
cd mod_auth_openid
# creates configure script, then runs it
# actually compile things
# directory for package contents to go in
mkdir pkgtmp
# APXS ignores $DESTDIR, so we can't use make install DESTDIR=... for this package
# instead, copy files manually:
# copy Apache module
mkdir -p pkgtmp/usr/lib/apache2/modules
cp src/.libs/ pkgtmp/usr/lib/apache2/modules/
# copy table maintenance tool
mkdir -p pkgtmp/usr/bin
cp src/modauthopenid_tables pkgtmp/usr/bin/
# create Apache config for module loading
mkdir -p pkgtmp/etc/apache2/mods-available
echo "LoadModule authopenid_module /usr/lib/apache2/modules/" \
> pkgtmp/etc/apache2/mods-available/authopenid.load
# set permissions for package contents
chmod -x pkgtmp/usr/lib/apache2/modules/
chmod -R u=rwX,go=rX pkgtmp
# create package with FPM
fpm \
-s dir \
-C pkgtmp \
-t deb \
--name libapache2-mod-auth-openid \
--version 0.9.0 \
--architecture native \
--depends libc6 \
--depends libgcc1 \
--depends libstdc++6 \
--depends libapr1 \
--depends libaprutil1 \
--depends libopkele3 \
--depends libcurl3-gnutls \
--depends libpcre3 \
etc usr
# package should now exist as libapache2-mod-auth-openid_0.9.0_amd64.deb

Note one wrinkle: APXS doesn’t respect the DESTDIR variable, so we can’t use make install to populate the packaging directory, and instead have to assemble its contents by hand.
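Since the packaging directory mirrors /, a handy sanity check before building is to list every staged file as the absolute path it will occupy once installed. This `staged_files` helper is hypothetical, not part of FPM:

```python
import os

def staged_files(pkgroot):
    """List a staging directory's files as the absolute paths they will occupy
    once the package is installed (staging-relative path == /-relative path)."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(pkgroot):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), pkgroot)
            paths.append("/" + rel.replace(os.sep, "/"))
    return sorted(paths)
```

Running it over pkgtmp before the fpm invocation makes it easy to spot files that landed in the wrong prefix.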

See my GDC 2014 slides!

I recently had the privilege of presenting a talk at GDC 2014 for KIXEYE: Building Customer Support and Loyalty, in which I talked about how and why KIXEYE built the Monocle customer support system as a web app, the challenges and rewards of building a uniform support API across multiple games, and how designing games with support scenarios in mind yields benefits across the whole product. Pretty exciting to get to show off my whole team’s work. The full talk will be in the GDC Vault later this month, but for now, I’ve uploaded slides:

Modifying SCons environment variables, globally

SCons, the Python-based build system, tries to isolate itself from the user’s environment as much as possible, so that one developer’s weird environment variables don’t lead to an irreproducible build. This is great until you want to use tools from places that SCons doesn’t think are standard, in which case you can make use of its site_scons extension mechanism to make them standard.

In this case, I’m using Homebrew. Homebrew normally installs into /usr/local. I think this default is completely mental, because /usr/local is the Balkans of Mac software packaging: routinely invaded and full of land mines. I’ve installed Homebrew into /opt/homebrew, where it’s safe from having parts overwritten at random by unmanaged package installers. For Make builds, I’ve included these lines at the end of my .zshrc to make Homebrew components available:

export PATH="/opt/homebrew/bin:$PATH"
export CFLAGS='-I/opt/homebrew/include'
export LDFLAGS='-L/opt/homebrew/lib'

For SCons, I’ve created the file $HOME/.scons/site_scons/ to modify the Environment object used by SCons subprocesses:

"""Decorate the SCons Environment constructor so that Homebrew paths are always included."""
from functools import wraps
import SCons.Environment

# Homebrew install directory
homebrew = '/opt/homebrew'

flag_prepends = [
    ('CFLAGS', '-I{}/include'.format(homebrew)),
    ('CXXFLAGS', '-I{}/include'.format(homebrew)),
    ('LDFLAGS', '-L{}/lib'.format(homebrew)),
]

def add_homebrew_vars(environment_init):
    @wraps(environment_init)
    def wrapper(self, *args, **kwargs):
        for var, flag in flag_prepends:
            flags_list = [flag]
            if var in kwargs:
                flags_list.append(kwargs[var])
            kwargs[var] = ' '.join(flags_list)
        environment_init(self, *args, **kwargs)
        self.PrependENVPath('PATH', '{}/bin'.format(homebrew))
    return wrapper

SCons.Environment.Environment.__init__ = add_homebrew_vars(SCons.Environment.Environment.__init__)

Specifically, the script decorates the Environment initializer to always add Homebrew’s bin directory to the PATH, so binaries like sdl-config are now available. (SCons has support for parsing the flags emitted by those config programs.) The same approach extends to CFLAGS and LDFLAGS, except that those are most likely strings instead of lists; I’ve since refined the gist to prepend CFLAGS, CXXFLAGS, and LDFLAGS in the SCons environment as well.

The end result is that I can assume that any SCons build on my system will have access to libraries and tools installed through Homebrew.
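The wrap-the-initializer trick is plain Python and works on any class, not just SCons’s Environment. Here’s a minimal, SCons-free sketch of the pattern; the `Environment` stand-in class and `prepend_flag` helper are illustrative, not from SCons:

```python
from functools import wraps

class Environment:
    """Stand-in for SCons.Environment.Environment; just records its keyword args."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

def prepend_flag(var, flag):
    """Return a decorator that makes __init__ always see `flag` prepended to kwargs[var]."""
    def decorate(init):
        @wraps(init)
        def wrapper(self, *args, **kwargs):
            kwargs[var] = flag + " " + kwargs[var] if var in kwargs else flag
            init(self, *args, **kwargs)
        return wrapper
    return decorate

# Monkey-patch the class, exactly as the site script patches SCons.
Environment.__init__ = prepend_flag("CFLAGS", "-I/opt/homebrew/include")(Environment.__init__)

env = Environment(CFLAGS="-O2")
print(env.kwargs["CFLAGS"])  # -I/opt/homebrew/include -O2
```

Because the wrapper runs before the real initializer, every constructor call sees the extra flag without any build script having to opt in, which is the whole point of the site_scons mechanism.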


connecting a Raspberry Pi to a Nokia N900 using USB networking

I’ve got a Nokia N900 going spare since I upgraded to a phone for which people actually write apps. Lots of possibilities with the N900. It’s got a shedload of radios, a decent CPU, an IR blaster, and not one but two cameras. The one thing it doesn’t have is any way to plug in more things.

Enter the Raspberry Pi I got at PyCon last week. It comes with lots of GPIO pins, and a USB port where one might plausibly plug in an N900. Clearly, these two devices were meant to be friends. Let’s get them talking.

Prepping the N900

The N900 runs Maemo, a heavily customized Debian fork, which was a fine OS for a hackable device. However, I used mine as an actual phone for several years, and it’s gotten kinda janky, so I’m going to blast it back to factory-fresh and then bring it up to the latest community-developed version of Maemo.

Reset N900 to factory state

Maemo wiki: updating the firmware

Using flasher 3.5 on Win7 x64, installed to C:\Program Files (x86)\maemo\flasher-3.5.
Install libusb from SourceForge. Copy amd64 version of libusb0.dll to flasher install dir.

Shut down N900. While holding down U key on N900 keyboard, plug USB cable into PC.

Run libusb bin\inf-wizard.exe. Select “Nokia N900 (Update mode)” from device list. Create a .inf, save it, and install the .inf using the “Install Now” button. Click past the unsigned driver warnings.

Open a console as an administrator. cd to flasher install dir. Run flasher-3.5.exe --read-device-id to see if the flasher app works.

flasher v2.5.2 (Sep 24 2009)

USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002

Flash eMMC image

This is the user filesystem. I got the latest version from

C:\Program Files (x86)\maemo\flasher-3.5>flasher-3.5.exe -F C:\Users\jeremye\Downloads\RX-51_2009SE_10.2010.13-2.VANILLA_PR_EMMC_MR0_ARM.bin -f
flasher v2.5.2 (Sep 24 2009)

Image 'mmc', size 255947 kB
        Version RX-51_2009SE_10.2010.13-2.VANILLA
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002
Booting device into flash mode.
Suitable USB device not found, waiting.
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x01c8.
Raw data transfer EP found at EP2.
[init        20 %   32768 /  255947 kB     0 kB/s]

Unplug N900’s USB cable. Remove battery. Wait until N900 shuts down. Replace battery. While holding down U, reconnect USB cable.

Flash rootfs image

This is where Linux lives. I used Maemo 5 PR 1.3.1, which is the very last official version, postdates the mirror, and can be found on Nokia’s files site by Googling parts of the filename: RX-51_2009SE_21.2011.38-1_PR_COMBINED_MR0_ARM.bin

C:\Program Files (x86)\maemo\flasher-3.5>flasher-3.5.exe -F "C:\Users\jeremye\Downloads\RX-51_2009SE_21.2011.38-1_PR_COMBINED_MR0_ARM.bin" -f -R
flasher v2.5.2 (Sep 24 2009)

SW version in image: RX-51_2009SE_21.2011.38-1_PR_MR0
Image 'kernel', size 1705 kB
        Version 2.6.28-20103103+0m5
Image 'rootfs', size 185728 kB
        Version RX-51_2009SE_21.2011.38-1_PR_MR0
Image 'cmt-2nd', size 81408 bytes
        Version BB5_09.36
Image 'cmt-algo', size 519808 bytes
        Version BB5_09.36
Image 'cmt-mcusw', size 5826 kB
        Version rx51_ICPR82_10w08
Image '2nd', size 14720 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
Image '2nd', size 14720 bytes
        Valid for RX-51: 2101, 2102, 2103
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2101, 2102, 2103
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2101, 2102, 2103
Image '2nd', size 14848 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
Image '2nd', size 14848 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
Image '2nd', size 14848 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
Image '2nd', size 14848 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
Image '2nd', size 14720 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
Image '2nd', size 14848 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
Image '2nd', size 14848 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
Image '2nd', size 14848 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
Image '2nd', size 14720 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
Image '2nd', size 14848 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002
Sending xloader image (14 kB)...
100% (14 of 14 kB, avg. 2900 kB/s)
Sending secondary image (106 kB)...
100% (106 of 106 kB, avg. 13359 kB/s)
Flashing bootloader... done.
Sending cmt-2nd image (79 kB)...
100% (79 of 79 kB, avg. 13250 kB/s)
Sending cmt-algo image (507 kB)...
100% (507 of 507 kB, avg. 25381 kB/s)
Sending cmt-mcusw image (5826 kB)...
100% (5826 of 5826 kB, avg. 31839 kB/s)
Flashing cmt-mcusw... done.
Sending kernel image (1705 kB)...
100% (1705 of 1705 kB, avg. 30459 kB/s)
Flashing kernel... done.
Sending and flashing rootfs image (185728 kB)...
100% (185728 of 185728 kB, avg. 13689 kB/s)
Finishing flashing... done
CMT flashed successfully

-R flag reboots N900 after flash. N900 goes through “5 white dots” boot seq, then asks for date and time input. Looks like all settings were wiped out, as planned.

Install community-maintained Maemo (CSSU)

Maemo wiki: Community SSU

Go to in Nokia Web to install Stable variant of CSSU. Add the catalog. Let it install the CSSU Enabler. Click through all the messages. Close the app manager and open the Community SSU app that it just installed. If it complains about HAM (the app manager) still being open, just run it again. Once it’s done, it’ll return you to HAM. Click Update All to install the CSSU Maemo update, which is a 34 MB download and may take a while over WiFi.

Fill the temporal void with fruit salad.

The phone will eventually reboot, and you’ll get an “Operating system updated” banner.

Useful apps

Install “OpenSSH Client and Server” from the Network section in HAM. Set a root password when it asks for one.

Install “rootsh” from the System section in HAM, which gives you the gainroot script (equivalent to su?).

Open an xterm. Become root (sudo gainroot). Edit /etc/passwd using vi or whatever to change the password field of the user account from ! to *, or you won’t be able to log in over SSH using pubkey auth because sshd will think the user account is locked out. (BTW, I learned this from running the OpenSSH server in debug mode with sshd -d, in which it’ll stay attached to the terminal and show useful status messages).

Raspberry Pi network interface config

My Pi is running Raspbian.

Before any N900-specific config, you can plug in the N900 and it’ll be detected as a USB Ethernet adapter, normally network interface usb0.

dmesg output:

[ 1108.249401] usb 1-1.3: new high speed USB device number 55 using dwc_otg
[ 1108.351182] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c8
[ 1108.351219] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 1108.351241] usb 1-1.3: Product: N900 (PC-Suite Mode)
[ 1108.351258] usb 1-1.3: Manufacturer: Nokia
[ 1108.365353] cdc_acm 1-1.3:1.6: This device cannot do calls on its own. It is not a modem.
[ 1108.365930] cdc_acm 1-1.3:1.6: ttyACM0: USB ACM device
[ 1108.378603] cdc_ether 1-1.3:1.8: usb0: register 'cdc_ether' at usb-bcm2708_usb-1.3, CDC Ethernet Device, 56:7d:26:7a:1a:63

ifconfig output:

usb0      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Maemo wiki: N900 USB networking

I’m using the instructions for Debian Lenny. Add a new udev rule matching the N900’s identifiers:


SUBSYSTEM=="net", ACTION=="add", ATTRS{idVendor}=="0421", ATTRS{idProduct}=="01c8", ATTRS{manufacturer}=="Nokia", ATTRS{product}=="N900 (PC-Suite Mode)", NAME="n900"

Reload the udev rules:

udevadm control --reload-rules

Unplug and replug the N900. The N900 now comes up as the n900 interface in dmesg:

[ 2275.378869] usb 1-1.3: USB disconnect, device number 55
[ 2275.397706] cdc_ether 1-1.3:1.8: usb0: unregister 'cdc_ether' usb-bcm2708_usb-1.3, CDC Ethernet Device
[ 2277.147123] usb 1-1.3: new high speed USB device number 56 using dwc_otg
[ 2277.248904] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c8
[ 2277.248940] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 2277.248961] usb 1-1.3: Product: N900 (PC-Suite Mode)
[ 2277.248979] usb 1-1.3: Manufacturer: Nokia
[ 2277.265004] cdc_acm 1-1.3:1.6: This device cannot do calls on its own. It is not a modem.
[ 2277.265592] cdc_acm 1-1.3:1.6: ttyACM0: USB ACM device
[ 2277.277117] cdc_ether 1-1.3:1.8: usb0: register 'cdc_ether' at usb-bcm2708_usb-1.3, CDC Ethernet Device, 56:7d:26:7a:1a:63
[ 2277.572415] udevd[5046]: renamed network interface usb0 to n900

and ifconfig:

n900      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The N900 expects that the USB host will be using IP address, so let’s make that happen on system boot and when the N900 is plugged in (auto and allow-hotplug respectively). Edit /etc/network/interfaces to add these lines:

allow-hotplug n900
auto n900
iface n900 inet static

Bring it online now:

ifup n900

Confirm that the IP got assigned with ifconfig n900:

n900      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          inet addr:  Bcast:  Mask:
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Note, as I just found out, that this only works if you connect the N900 in “PC Suite” or “Charging Only” mode. If you connect the N900 in “Mass Storage” mode, it emulates a hard drive instead to give you access to its microSD reader and internal storage, and you’ll see this in dmesg:

[ 3440.675342] usb 1-1.3: new high speed USB device number 64 using dwc_otg
[ 3440.777489] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c7
[ 3440.777525] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 3440.777547] usb 1-1.3: Product: N900 (Storage Mode)
[ 3440.777564] usb 1-1.3: Manufacturer: Nokia
[ 3440.777580] usb 1-1.3: SerialNumber: 372041756775
[ 3440.788540] scsi2 : usb-storage 1-1.3:1.0
[ 3441.796667] scsi 2:0:0:0: Direct-Access     Nokia    N900              031 PQ: 0 ANSI: 2
[ 3441.802419] sd 2:0:0:0: [sda] Attached SCSI removable disk
[ 3441.807315] scsi 2:0:0:1: Direct-Access     Nokia    N900              031 PQ: 0 ANSI: 2
[ 3441.813912] sd 2:0:0:1: [sdb] Attached SCSI removable disk
[ 3441.969928] sda: detected capacity change from 7948206080 to 0
[ 3444.909499] sd 2:0:0:0: [sda] 56631296 512-byte logical blocks: (28.9 GB/27.0 GiB)
[ 3444.911077] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3445.128870]  sda:
[ 3448.903605] sd 2:0:0:1: [sdb] 15523840 512-byte logical blocks: (7.94 GB/7.40 GiB)
[ 3448.904939] sd 2:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3448.913276]  sdb: sdb1

The mode setting seems to be sticky: you can get the N900 to go back to being a fake Ethernet adapter by connecting it at least once in PC Suite mode, after which it can be unplugged and replugged as much as you like, even if you leave it in Charging Only by not selecting a mode. However, the mode setting doesn’t persist across reboots. When I power-cycled my N900 while it was connected to the Pi, it didn’t come up as a network device until I manually selected PC Suite mode again.


what servers am I running?

nmap localhost will tell you what ports are open:

Nmap scan report for localhost (
Host is up (0.00035s latency).
Not shown: 993 closed ports
22/tcp    open  ssh
445/tcp   open  microsoft-ds
548/tcp   open  afp
631/tcp   open  ipp
3689/tcp  open  rendezvous
9001/tcp  open  tor-orport
10001/tcp open  scp-config

Nmap done: 1 IP address (1 host up) scanned in 3.74 seconds

but sudo lsof -i -n -P | grep LISTEN will tell you what's behind them. -i selects IPv4 and IPv6 sockets, -n turns off hostname translation, and -P turns off port name lookup, because the day /etc/services is correct or relevant will probably be the day the Sun burns out.

launchd       1           root   10u  IPv6 0x83483e3bed83a175      0t0    TCP *:548 (LISTEN)
launchd       1           root   11u  IPv4 0x83483e3bed84015d      0t0    TCP *:548 (LISTEN)
launchd       1           root   27u  IPv6 0x83483e3bed8399b5      0t0    TCP *:445 (LISTEN)
launchd       1           root   28u  IPv4 0x83483e3bed83f2ed      0t0    TCP *:445 (LISTEN)
launchd       1           root   33u  IPv6 0x83483e3bed8395d5      0t0    TCP [::1]:631 (LISTEN)
launchd       1           root   34u  IPv4 0x83483e3bed83ebb5      0t0    TCP (LISTEN)
launchd       1           root   36u  IPv6 0x83483e3bed853175      0t0    TCP *:22 (LISTEN)
launchd       1           root   37u  IPv4 0x83483e3bed83e47d      0t0    TCP *:22 (LISTEN)
kdc          50           root    6u  IPv6 0x83483e3bed852d95      0t0    TCP *:88 (LISTEN)
SpotifyWe   627      jehrhardt    6u  IPv4 0x83483e3befe90bb5      0t0    TCP (LISTEN)
SpotifyWe   627      jehrhardt    7u  IPv4 0x83483e3befe9047d      0t0    TCP (LISTEN)
Spotify     637      jehrhardt   11u  IPv4 0x83483e3bed83d60d      0t0    TCP (LISTEN)
Spotify     637      jehrhardt   12u  IPv4 0x83483e3befe912ed      0t0    TCP (LISTEN)
Spotify     637      jehrhardt   26u  IPv4 0x83483e3c00bd1ed5      0t0    TCP *:32740 (LISTEN)
Spotify     637      jehrhardt   30u  IPv4 0x83483e3bf35eea25      0t0    TCP *:57621 (LISTEN)
Dropbox     639      jehrhardt   19u  IPv4 0x83483e3befe8eed5      0t0    TCP *:17500 (LISTEN)
Dropbox     639      jehrhardt   25u  IPv4 0x83483e3befe8e065      0t0    TCP (LISTEN)
ssh        2408      jehrhardt    6u  IPv6 0x83483e3bedb83175      0t0    TCP [::1]:54334 (LISTEN)
ssh        2408      jehrhardt    7u  IPv4 0x83483e3c07168a25      0t0    TCP (LISTEN)
python    11368      jehrhardt    5u  IPv4 0x83483e3c0182ca25      0t0    TCP (LISTEN)
AptanaStu 11507      jehrhardt  151u  IPv6 0x83483e3c00dbc9b5      0t0    TCP (LISTEN)
AptanaStu 11507      jehrhardt  162u  IPv6 0x83483e3bef15fd95      0t0    TCP *:9980 (LISTEN)
AptanaStu 11507      jehrhardt  200u  IPv6 0x83483e3c00dbcd95      0t0    TCP *:10001 (LISTEN)
AptanaStu 11507      jehrhardt  201u  IPv6 0x83483e3c00dbc5d5      0t0    TCP *:9001 (LISTEN)
AptanaStu 11507      jehrhardt  329u  IPv6 0x83483e3bed30b9b5      0t0    TCP *:51636 (LISTEN)
iTunes    21480      jehrhardt   39u  IPv4 0x83483e3c01b30ed5      0t0    TCP *:3689 (LISTEN)
iTunes    21480      jehrhardt   40u  IPv6 0x83483e3bed8529b5      0t0    TCP *:3689 (LISTEN)
Skype     37217      jehrhardt   50u  IPv4 0x83483e3c0053ed45      0t0    TCP *:62671 (LISTEN)
memcached 85144      jehrhardt   18u  IPv6 0x83483e3bed30c175      0t0    TCP [::1]:11211 (LISTEN)
memcached 85144      jehrhardt   19u  IPv4 0x83483e3c07156bb5      0t0    TCP (LISTEN)
memcached 85144      jehrhardt   20u  IPv6 0x83483e3c00dbd175      0t0    TCP [fe80:1::1]:11211 (LISTEN)