



Posts with tag "python":

Making pictures using triangles (an exercise in stochastic hill-climbing algorithms) – Part 1

posted on: Wednesday, December 4, 2013 (4:02 am) by Chase Stevens

In this blog post, I’d like to introduce a project I’ve been tinkering on and off (mostly off) with for the past few months. The idea behind the project was pretty simple: to write a program which, given a set number of triangles and a blank canvas, adjusts the coordinates, color, and transparency of the triangles over time in an attempt to replicate a given image. There were two primary motivations behind my project, at least initially:

  • to create (lossily) compressed versions of the source images that still look decent (and might even reduce image noise), and
  • to have those compressed versions themselves be scalable to any size.

While I found that my program met these goals fairly adequately, what I was both surprised and delighted by was its aesthetically pleasing and almost artful representations of the source images. Therefore, before delving into the more technical aspects of the implementation, I’d like to present a few samples of the program’s results. The first example uses 50 triangles, while the other two use 100.

Example 1

Output at iteration 9,500 (image size: 3.38KB; compressed SVG size: 0.88KB)
Output at iteration 50,000 (image size: 2.95KB; compressed SVG size: 0.88KB)
Output at iteration 1,293,000 (image size: 2.74KB; compressed SVG size: 0.88KB)
Original image (size: 6.53KB)

This was one of the very first tests of my program, and is in my opinion also one of its most successful and impressive recreations. As you might infer from the stark difference between iterations 9,500 and 50,000 and the moderate difference between iterations 50,000 and 1,293,000, the program’s output approaches the target image roughly logarithmically over successive iterations; this trend is plotted in the graph below.
Graph of recreation error over iterations
The corresponding vector version of the final result (below) is actually noticeably less accurate than the JPEG posted above, because the program exploited JPEG artifacts, as well as the specific dimensions of the target, in creating its reproduction. When the triangles are rendered as an SVG, these artifacts are no longer present and the image becomes less faithful.
One of the most noticeable issues with the program’s image is that it entirely fails to capture the eyes in the source image. I explored various methods of solving this issue, which I will discuss in a later post on implementation. Overall, though, I thought that the program produced a more pleasing (if less accurate) image, kilobyte-for-kilobyte, than comparable "traditional" compression techniques (as pictured below), especially if enlarged.
Program’s final output (image size: 2.74KB; compressed SVG size: 0.88KB)
Original image (size: 6.53KB)
Image compressed with Paint.NET (size: 1.91KB; compressed size: 1.58KB)
To finish off this example: a video of the program’s progress, played at 30 frames per second with a frame captured every 500th iteration. The "jumps" in activity visible at 0:23 and 0:56 are instances when I made updates to the program.

Example 2

Output at iteration 737,500 (image size: 4.03KB; compressed SVG size: 1.54KB)
Original image (size: 3.51KB)

This example yet again highlighted an issue with the image similarity evaluation methods employed by the program. While many aspects of the source image are replicated with high fidelity, the facial features of Christian Bale are not. Below again is a video of the program’s progress over time.

Example 3

Output at iteration 769,500 (image size: 3.27KB; compressed SVG size: 1.54KB)
Original image (size: 12.9KB)

In this instance, I experimented with another evaluation method that incorporated a saliency comparison between the target image and the output. Although the results presented here seem promising, results from other tests using the method were lacklustre, especially given that the saliency map calculations added substantial processing time to each iteration.


In my next post, I’ll detail the specifics of the program. In the meantime, the program itself can be downloaded here; it requires numpy and PIL to run.
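
Until then, here’s a minimal sketch of the kind of stochastic hill-climbing loop at the heart of a program like this. This is illustrative only, not the actual implementation: mutate, render, and error are stand-ins for tweaking a triangle, rasterizing the canvas, and scoring it against the target.

def hill_climb(target, triangles, iterations, mutate, render, error):
    # stochastic hill-climbing: propose a random tweak to one triangle and
    # keep it only if the rendered canvas gets closer to the target image
    best = error(render(triangles), target)
    for _ in range(iterations):
        candidate = mutate(triangles)  # e.g. nudge a vertex, colour, or alpha
        score = error(render(candidate), target)
        if score < best:  # never accept a worse canvas
            triangles, best = candidate, score
    return triangles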

Tags: image processing, numpy, programming, python

The Anonymized Prisoner's Dilemma Challenge – Exploiting Limited Knowledge

posted on: Tuesday, August 20, 2013 (7:28 am) by Chase Stevens

The competition

Recently, a couple of friends and I entered the Brilliant.org Hunger Games competition, in which competitors must create programs that are pitted against one another in an arena of sorts. In each round of the competition, every program must choose, for each other program, whether to cooperate with it or take advantage of it. In the Hunger Games competition, these are framed as "hunting" and "slacking".

Tit-for-tat

At face value, the competition is a very standard iterated prisoner’s dilemma. For the general case, the optimal strategy for this game has already been discovered: tit-for-tat. The essence of this strategy is to reciprocate everything done to you in the previous round, and to cooperate with everyone in the first round in a show of "good faith". However, the Hunger Games competition had a slight twist: although your program would be able to see the overall reputation of each opponent (id est, how many times they hunted versus slacked), no other identifying information would be supplied. This limited knowledge makes true tit-for-tat impossible, since your opponents are effectively anonymized. Although you may know that a player you decided to cooperate with in the previous round defected against you, there’s no way to retaliate against that same player in this round with 100% confidence.
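
For reference, pure tit-for-tat is only a line or two. Here’s an illustrative sketch, using 'h' for hunt and 's' for slack as an assumed encoding rather than the competition’s exact interface:

def tit_for_tat(opponent_last_moves):
    # cooperate ('h') with everyone on the first round (move is None);
    # afterwards, mirror whatever each opponent did to us last round
    return ['h' if move in (None, 'h') else 's' for move in opponent_last_moves]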

Our strategy

My team’s strategy, nicknamed "The Underminer", both compensated for this lack of knowledge and managed to exploit it. We started with the assumption that tit-for-tat is possible, to a degree. As the number of rounds increases, the reputations of individual opponents become solidified, making reputation itself a means of identification. Although a player with a reputation of 1.0 in round one could drop to a reputation of 0.5 in round two, a player with a reputation of 1.0 in round 20 can only drop to 0.95. Based on this, one can say that the ranking of players by reputation remains mostly static: the worst-ranked opponent will have a very difficult time becoming the highest-ranked. While this doesn’t hold in the first few rounds, at a certain point changing ranks becomes almost impossible. This phenomenon can be imagined like a zipper: before any zipping has occurred, there’s a very large degree of freedom of motion available. However, as you begin to zip, the freedom of motion becomes more and more constrained, until none remains.
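
To make the zipper intuition concrete, here’s a quick sketch of the arithmetic, treating reputation simply as hunts divided by decisions (my own illustrative encoding of the numbers above):

def next_round_reputation_range(reputation, rounds):
    # a reputation built up over `rounds` rounds can move only this far next round
    hunts = reputation * rounds
    lowest = hunts / (rounds + 1.0)            # slack completely this round
    highest = (hunts + 1.0) / (rounds + 1.0)   # hunt completely this round
    return lowest, highest

# next_round_reputation_range(1.0, 1)  -> (0.5, 1.0)
# next_round_reputation_range(1.0, 20) -> (0.952..., 1.0)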

Undermining

While our program implements tit-for-tat as described above in most circumstances, there’s a very specific scenario in which it deviates from this otherwise optimal strategy. As mentioned, the "tit-for-tat" allowed by the Hunger Games competition is not foolproof, since player identities can only be approximately tracked at the beginning of the game. Assuming other players are also tracking opponents by reputation, we can exploit this limitation in knowledge by occasionally attempting to "undermine" another player, assuming their identity. This technique is probably best illustrated by example. Suppose in some round we’re ranked 2nd by reputation at 0.7, while another player is ranked 3rd at 0.6. If both we and the 3rd-ranked player slacked completely this round, there would be no way for us to displace our opponent as 3rd-ranked player, since they already have the "bad reputation" head-start. However, the likelihood of our opponent slacking completely this round is very low. In fact, our opponent’s decisions can be estimated, given their current reputation, the number of rounds elapsed so far, and a rating of how certain we want our estimate to be, using the lower bound of the Wilson score interval.

While this formula is most often used to rank items based on user reviews (and is employed perhaps most famously by Reddit’s "best" comment-sorting algorithm), in this case we can use it to infer the "true" cooperation rate of opponents and, based on this, their probable reputation at the end of the round. Supposing in this circumstance that we predict with a large amount of certainty that our opponent’s reputation at the end of this round will be at worst 0.55, and that we can manage to lower our reputation below that, then we choose to slack completely this round. Assuming that the other player remained at 0.6 reputation while we dove down to 0.5, from a third player’s perspective this is what happened: the 2nd-ranked player went from 0.7 reputation to 0.6, and the 3rd-ranked player went from 0.6 to 0.5. For the third player to assume that the 2nd-ranked player went under the 3rd-ranked player, going from 0.7 to 0.5, would be a strange and unintuitive leap of logic. So, in this way, we can take the advantageous route of fully slacking while passing off some of the negative repercussions to a worse-ranked player.
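
For reference, here’s a minimal implementation of that lower bound. This is the standard formula, not necessarily the exact code from our entry; z = 1.96 corresponds to roughly 95% confidence.

from math import sqrt

def wilson_lower_bound(hunts, decisions, z=1.96):
    # lower bound of the Wilson score interval for a binomial proportion
    if decisions == 0:
        return 0.0
    phat = hunts / float(decisions)
    z2 = z * z
    centre = phat + z2 / (2.0 * decisions)
    margin = z * sqrt((phat * (1.0 - phat) + z2 / (4.0 * decisions)) / decisions)
    return (centre - margin) / (1.0 + z2 / decisions)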

Graph of player reputation over time

In the above, we can see The Underminer repeatedly performing this tactic against a standard tit-for-tat bot. After each undermine, The Underminer reverts to a basic tit-for-tat strategy, which in this simulation caused its reputation to increase over time. As soon as it’s again in a position where it can undermine the tit-for-tat player, it does, repeatedly reaping the rewards of doing so.

Unfortunately, I can’t yet reveal whether The Underminer was successful or not – mostly because I haven’t found out myself. Hopefully, however, in two weeks’ time the results of the Hunger Games competition will be announced, at which point I’ll write another analysis on how our program fared overall. In the meantime, you can go check out the source code for our program as well as the testing/simulation program we used at this github repo.

Tags: anonymity, game theory, probability, programming, python

Generating Contrast and Saliency Maps on the GPU (Using Python and OpenCL)

posted on: Saturday, April 13, 2013 (8:21 am) by Chase Stevens

In the field of Computational Neuroscience, saliency maps are a means of graphically representing the areas of any visual scene presenting the most "bottom-up" saliency to a human observer (i.e. those most likely to draw the viewer's attention). Although the generation of these maps is not particularly difficult on a conceptual level, doing so is quite computationally expensive if using a serial approach. Below, I provide code for quickly generating the component contrast maps needed to build a saliency map by parallelizing the task on the GPU, as adapted from MATLAB code provided by Vicente Ordonez of SUNY. To run this, you'll need pyopencl v0.92, numpy, and PIL.

Original image (left); saliency map (right)

Download
from PIL import Image
import itertools
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from pyopencl.elementwise import ElementwiseKernel
import math

## initialization of GPU context, queue, kernel
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
kernel_args = "int width, int len_min_width, float *image, float *contrast"
kernel_code = '''
contrast[i/3] = contrast[i/3] +
 (!(
     (i < width) ||
     (i > len_min_width) || 
     (i % width == 0) ||
     ((i+1) % width == 0) ||
     ((i+2) % width == 0) ||
     ((i+3) % width == 0) ||
     ((i-1) % width == 0) ||
     ((i-2) % width == 0)
   )
 )?
 (
     pown((image[i] - image[i-width]), 2) +
     pown((image[i] - image[i-width-3]), 2) +
     pown((image[i] - image[i-width+3]), 2) +
     pown((image[i] - image[i-3]), 2) +
     pown((image[i] - image[i+3]), 2) +
     pown((image[i] - image[i+width]), 2) +
     pown((image[i] - image[i+width-3]), 2) +
     pown((image[i] - image[i+width+3]), 2)
 ) : 0''' # for each rgb value in image, if not part of a pixel on the image
          # border, sum the squares of the differences between it and the
          # corresponding color values of each surrounding pixel, then add
          # this to the corresponding output pixel's value
contrast_kernel = ElementwiseKernel(ctx,
                                    kernel_args,
                                    kernel_code,
                                    "contrast")

flatten = lambda l: list(itertools.chain(*l))

def contrast_map(image):
    
    width = image.size[0]
    height = image.size[1]
    
    ## creating numpy arrays
    image = np.array(flatten(list(image.getdata()))).astype(np.float32) #image array
    contrast_map = np.zeros((height*width)).astype(np.float32) #blank write array
    
    ## send arrays to the gpu:
    image_gpu = cl_array.to_device(ctx,queue,image) 
    contrast_map_gpu = cl_array.to_device(ctx,queue,contrast_map) 
    
    contrast_kernel(width*3,(image.size - width - 1),image_gpu,contrast_map_gpu) #executing kernel
    contrast_map = contrast_map_gpu.get().astype(np.float32) #retrieving contrast map from gpu
    contrast_map = np.nan_to_num(contrast_map) #conversion of NaN values to zero
    
    ## normalization:
    contrast_map += max(contrast_map.min(),0)
    contrast_map /= contrast_map.max()
    contrast_map *= 255
    return contrast_map.astype(np.uint8)

def saliency_map(image):
    
    width = image.size[0]
    height = image.size[1]
    
    resizes = int(math.floor(math.log(min(width,height),2)) - 3) #calculating necessary number of
                                                                 #images for gaussian pyramid
    resized_images = [image.resize((width/factor,height/factor),Image.BICUBIC) 
                      for factor in [2**x for x in range(resizes)]] #resizing images
    contrast_resized_images = map(contrast_map, resized_images) #generating contrast maps

    ## resizing contrast maps back to original image size:
    images = list()
    for i in range(len(contrast_resized_images)):
        image = Image.new("L",resized_images[i].size)
        image.putdata(list(contrast_resized_images[i]))
        images.append(np.array(list(image.resize((width,height)).getdata())))

    ## combining images:    
    result = np.zeros_like(images[0]).astype(np.float32)
    for image in images:
        result += image.astype(np.float32)
        
    ## normalization:
    result += max(result.min(),0)
    result /= result.max()
    result *= 255
    output = Image.new("L",(width,height))
    output.putdata(list(result.astype(np.uint8)))
    return output
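
A quick usage sketch (the file names are placeholders; note that the code above is written for Python 2, whose integer division and list-returning map it relies on):

source = Image.open('input.jpg').convert('RGB')  # hypothetical input file
saliency_map(source).save('saliency.png')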

Tags: code, cognitive science, gpgpu, image processing, numpy, opencl, programming, python

Reconstructing Images Using Damaged Copies (in Python)

posted on: Thursday, September 13, 2012 (4:01 pm) by Chase Stevens

I've recently been working on implementing various methods of image reconstruction in python. The idea is to, given several imperfect (that is to say, noisy, incomplete, photoshopped, or otherwise damaged) copies of some original image, attempt to arrive at something close to (if not precisely) the original image by combining said copies. Through these means, an approximation of the original image can be generated should the original be lost, itself damaged, irretrievable, or otherwise unavailable or useless. In writing the various functions for doing this, I implemented techniques used in signal averaging and variants thereof. I also implemented a “modal image” function which, for each pixel, uses the “modal” RGB values across all image copies or, failing that, performs a simple mean of values.

Examples and Analysis

Original Christian Bale photograph

For the following examples, I modified the above image of actor Christian Bale. Ironically enough, in testing for this article, I overwrote the original image and had to use Google's reverse image search to find it again.

Damaged images
Damaged copy #1 (delta: 105.122025)
Damaged copy #2 (delta: 88.026425)
Damaged copy #3 (delta: 68.5942)
Function results (listed with difference from original image as given by get_delta, lower is better):
average_images_add            : 155.35693875
average_images_sub            : 72.4844575
average_images                : 43.92254625
average_noisefilter_all_sub   : 51.1805645833
average_noisefilter_all_delta : 36.9071316667
modal_image                   : 42.53322
Reconstruction using modal_image (delta: 42.53322)
Reconstruction using average_noisefilter_all_delta (delta: 36.9071316667)
As is readily visible, in this example the naïve “voting”-type approach used by modal_image is deceived in any case where forms of damage across multiple images “agree” with each other. Given a larger number of copies or a less artificial form of damage, this would likely cease to be an issue; in theory, modal_image could even go so far as to reconstruct the original image perfectly. average_noisefilter_all_delta produced the best results on average although, to its detriment, its output depends on the order of the list of image copies passed to it. In addition, while it manages to come closer to the original image, the subtraction of image differences it employs creates a slightly “jarring” visual effect (as seen above).

The inherent issue in reconstructing damaged images is one of knowledge. To humans, it seems obvious that the original image of Christian Bale didn't have streaks of white across it. However, this is a mere assumption based on our vast pool of experience in viewing photographs. Who's to say that the original photo didn't have white scribbles on it? The computer is hampered, from our perspective, by its inability to make these assumptions when reconstructing the original image; so, although it produces results better than a mere superposition of the copies, they are rarely better than what could be accomplished by human hands.

Noisy images
Noisy copy #1 (delta: 92.715475)
Noisy copy #2 (delta: 93.352175)
Noisy copy #3 (delta: 93.08885)
Function results (listed with difference from original image as given by get_delta, lower is better):
average_images_add            : 103.01672125
average_images_sub            : 81.929801875
average_images                : 82.971985
average_noisefilter_all_sub   : 69.6356495833
average_noisefilter_all_delta : 75.23682
modal_image                   : 65.2880416667
De-noised using modal_image (delta: 65.2880416667)
De-noised using average_noisefilter_all_sub (delta: 69.6356495833)
In this case, modal_image produced the best results, although its output is still quite noisy. Again, having more copies would significantly improve the end product. The output also appears to be slightly brighter than would be expected. average_noisefilter_all_sub comes in at a close second, although (to the human eye) its results appear to be quite a bit noisier than modal_image’s.

I tried using these image reconstruction methods on compressed images with JPEG artifacts as well, although the results were much poorer. Output deltas ended up being worse than the least-compressed copy of a given set, although better than an average copy. modal_image and average_images_add seemed to perform best. The overall problem was again one of knowledge: if less-compressed images could be prioritized or weighted according to their closeness to the source, the results would likely be much better. However, the computer has no means by which to determine what is closest to the original, and thus fails to leverage this. Some kind of programmatic detection of JPEG artifacts could be of great help in improving this situation.

As a final note before my code, I just recently launched Broverbs. Please do check it out if you've got the time!

Download
from __future__ import division
from PIL import Image as PILImage
from Tkinter import *
from tkFileDialog import askopenfilename as openfile
from itertools import product
from random import shuffle

class Image():
    def __init__(self,filename=None,dimensions=None):
        if filename!=None:
            self.pic = PILImage.open(filename)
            self.imgData = self.pic.load()
            self.size = self.pic.size
        else:
            if dimensions!=None:
                self.size = dimensions
            else:
                self.size = [0,0]
            self.pic = None
            self.imgData = {}
            for x,y in product(range(self.size[0]),range(self.size[1])):
                self.setpixel(x,y,0,0,0)
    def setpixel(self,x,y,r,g,b):
        # clamp negative channel values up to zero
        self.imgData[x,y] = tuple(map(max,zip((r,g,b),(0,0,0))))
    def getpixellist(self):
        return product(range(self.size[0]),range(self.size[1]))
    def show(self):
        tempImage = PILImage.new('RGB',self.size)
        for x,y in self.getpixellist():
            tempImage.putpixel((x,y),self.imgData[x,y])
        tempImage.show()

def get_delta(image1,image2):
    # mean, per pixel, of the summed absolute RGB differences between two images
    if image1.size != image2.size: raise ValueError("Image sizes do not match")
    delta = 0
    for x,y in image1.getpixellist():
        delta += sum(abs_pixel_diff(image1,image2,x,y))
    return delta / (image1.size[0] * image1.size[1])

# most common value in l, falling back to the mean when no value repeats
mode = (lambda l: max([(l.count(i), i) for i in set(l)])[1] if len(set(l)) < len(l) else sum(l)/float(len(l)))

pixel_operation = lambda f: lambda image1,image2,x,y: map(f,zip(image1.imgData[x,y],image2.imgData[x,y]))
abs_pixel_diff = pixel_operation(lambda (x,y): abs(x-y))
pixel_diff = pixel_operation(lambda (x,y): x-y)
pixel_avg = pixel_operation(lambda (x,y): (x+y)/2)
pixel_sum = pixel_operation(lambda (x,y): x+y)

def image_operation(operation,image1,image2):
    if image1.size != image2.size: raise ValueError("Image sizes do not match")
    result = Image(dimensions=image1.size)
    for x,y in result.getpixellist():
        r,g,b = operation(image1,image2,x,y)
        result.setpixel(x,y,r,g,b)
    return result

get_delta_image = lambda image1,image2: image_operation(abs_pixel_diff,image1,image2)
subtract_image = lambda minuend,subtrahend: image_operation(pixel_diff,minuend,subtrahend)
average_image = lambda image1,image2: image_operation(pixel_avg,image1,image2)
add_image = lambda image1,image2: image_operation(pixel_sum,image1,image2)

def get_average_image(image_list):
    # pixel-wise mean of all images in the list
    output = Image(dimensions=image_list[0].size)
    for x,y in output.getpixellist():
        r,g,b = reduce(lambda a,b: (a[0]+b[0],a[1]+b[1],a[2]+b[2]),[image.imgData[x,y] for image in image_list])
        output.setpixel(x,y,r/len(image_list),g/len(image_list),b/len(image_list))
    return output

def generalized_average(image_list, combination_function, function1, function2):
    # split the copies into two random halves, average each half, then combine
    # the two half-averages using the supplied functions
    shuffle(image_list)
    set1 = image_list[0::2]
    image1 = get_average_image(set1)
    set2 = image_list[1::2]
    image2 = get_average_image(set2)
    return combination_function(function1(image1,image2),function2(image1,image2))

def average_images_add(image_list): 
    return generalized_average(image_list,add_image,average_image,subtract_image)

def average_images_sub(image_list):
    return generalized_average(image_list,subtract_image,average_image,subtract_image)

def average_images(image_list): 
    return generalized_average(image_list,subtract_image,average_image,get_delta_image)

def generalized_noisefilter(image_list,function):
    # estimate noise as the difference between the averages of two random halves
    # of the copies, subtract it from every copy, then average the results
    shuffle(image_list)
    set1 = image_list[0::2]
    image1 = get_average_image(set1)
    set2 = image_list[1::2]
    image2 = get_average_image(set2)
    noise = function(image1,image2)
    denoise_set = [subtract_image(image,noise) for image in image_list]
    return get_average_image(denoise_set)

def average_noisefilter_all_sub(image_list):
    return generalized_noisefilter(image_list,subtract_image)

def average_noisefilter_all_delta(image_list):
    return generalized_noisefilter(image_list,get_delta_image)

def modal_image(image_list):
    # for each pixel, take the most common value per channel across all copies
    # (falling back to the channel mean when no value repeats; see mode above)
    shuffle(image_list)
    result = Image(dimensions=image_list[0].size)
    for x,y in result.getpixellist():
        r = mode([image.imgData[x,y][0] for image in image_list])
        g = mode([image.imgData[x,y][1] for image in image_list])
        b = mode([image.imgData[x,y][2] for image in image_list])
        result.setpixel(x,y,r,g,b)
    return result

def test_func(f,images,original,trials=25):
    results = list()
    for x in range(trials):
        print "Trial #"+str(x+1)
        results.append(get_delta(f(images),original))
    return sum(results)/len(results)

function_list = [average_images_add,average_images_sub,average_images,average_noisefilter_all_sub,average_noisefilter_all_delta,modal_image]

def test_functions(image_list,original,functions=function_list,trials=10):
    out = ''
    for f in functions:
        out += (str(f) + ' ')
        out += (': ')
        out += (str(test_func(f,image_list,original,trials)))
        out += ('\n')
    print out

Tags: code, image processing, information theory, programming, python, voting

More Stupid Higher-Order Function Tricks

posted on: Thursday, March 15, 2012 (11:23 am) by Chase Stevens

A while ago, a friend of mine who was unfamiliar with python asked me how to go about parsing a CSV into a list of lists. Moreover, the data included integers which needed to be parsed as such (and not as strings). My solution was as follows:

(lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()]) #takes a file
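For instance, given a hypothetical data.csv whose first line is "spam,1,2":

parse = (lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()])
parse(open('data.csv'))[0]  # -> ['spam', 1, '2\n']
# caveats: the final field keeps its trailing newline (so it stays a string),
# and the and/or idiom leaves '0' as a string, since int('0') is falsy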
Just yesterday, another friend of mine accidentally merged and then sorted two Excel spreadsheets, overwriting one with the result. She asked if I could take the merged file and the remaining original file and reconstruct the file she had overwritten. Incorporating the CSV parsing function I had written earlier, I wrote this:
from tkFileDialog import asksaveasfile, askopenfile
full = (lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()])(askopenfile())
exclude = (lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()])(askopenfile())
(lambda f: map(f.write,map(','.join,map((lambda row: map(str,row)),filter((lambda x: x not in exclude),full)))))(asksaveasfile()) #map(str,row) needed since parsed rows may contain ints

Tags: code, csv, filter, lambda, map, programming, python, tkinter