Last night, I was reading a thread on reddit titled "When did the use of gender start showing up in language, and what purpose did it serve?". One of the top posts, by a user named ggchappell, said that the original poster seemed to have it the wrong way around: languages "start complicated and sometimes get simpler", especially when those languages are used for trade: "The simplest languages are those that ended up being used for trade between multiple groups of people." In response to this, user mojitz asked,

"Does English conform to this generalization by your estimate?"

I thought that was a really exciting question, and was prompted to write the response below.
Depending on how you define "simple", English is a relatively simple language, but probably not because of trade.
The question of how to define language complexity (or, indeed, whether languages differ in complexity in the first place) has been a hotly contested, very political issue in the world of linguistics. Because of this, it’s really only been in the last few years that we’ve made substantive advances toward finding quantitative metrics of language complexity. Some really exciting work was done a few years ago by Max Bane, who applied Kolmogorov complexity – a concept from information theory that measures how densely you can compress some data without losing any information – to the inflectional and affixal morphology of various languages. That is to say, instead of measuring how complex a language’s sound system is, or how difficult that language is to learn, Bane looked at how complex the words of a language are in terms of their internal structure: the rules dictating which prefixes and suffixes you can (or must) add to a "base" word form, e.g. "type" becoming "typed" to indicate the past tense.
If we use Bane’s metric for complexity, we end up with languages like Hungarian and Latin (which, as others have pointed out, has a very rich inflectional system) being ranked as most complex, while morphologically "isolating" languages like Vietnamese and Chinese – languages that have on average fewer units of meaning per word (contrast "establish" with "anti-dis-establish-ment-arian-ism") – are ranked as the least complex.
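Bane’s actual metric is defined over morphologically annotated corpora, but the core intuition – that regular, predictable structure compresses well – can be illustrated with a toy sketch in Python (my own illustration, not Bane’s method): the ratio of compressed size to raw size serves as a crude stand-in for Kolmogorov complexity.

```python
import zlib

def complexity_ratio(text):
    """Crude Kolmogorov-style score: compressed size divided by raw size.
    Lower values mean more internal regularity (more compressible)."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# A heavily reduplicated string (cf. pidgin "well well") compresses far
# more tightly than text without such repetition.
reduplicated = "E big well well. " * 50
varied = "the quick brown fox jumps over the lazy dog"
```

Highly repetitive text scores far lower than text with little internal repetition; Bane’s analysis applies the same idea to the structure of words rather than to raw character streams.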
Slightly more complex than isolating languages is a class of languages called "pidgins". Pidgins are communicative systems that form when two or more groups of people without a common language need to communicate. Getting back to what /u/ggchappell said, pidgins are very frequently the result of speakers of different languages coming together to engage in trade. Moreover, pidgins are widely held to be particularly simple languages – in pidgins, we very often observe things like the reduplication of words or morphemes to augment their intensity (e.g., in Nigerian Pidgin, "sikisiki", meaning "someone who is always sickly", and "E big well well" to mean "It is very big"). When children grow up learning a pidgin, it transitions into what linguists call a "creole", and usually as a result undergoes an increase in regularization and linguistic complexity. However, children don’t normally grow up learning a pidgin as a result of trade – this is much more likely to be indicative of one of two situations: either multiple groups of people without a common language have been brought together as a result of slavery, or one group of people has invaded another. In the latter scenario, we see the establishment of a "prestige" language – the language of the elite and the learned, the language of the conquerors – and a "vulgar" language, the language of the conquered masses.
Now, this is where things start to get really neat. Let’s take a quick look at the history of England (and English) from the 9th century through to the 13th century. In 9th-century England, Old English was being spoken, a Germanic language closely related to other continental European languages. In northern England, however, English was hardly being spoken at all: the Old Norse-speaking Danes had invaded and established the Danelaw. At this time Old Norse and Old English would have been mutually intelligible, much like modern-day Swedish and Norwegian – a speaker of Old English would have been able to understand a speaker of Old Norse, and vice versa, although probably with some difficulty. It’s very hard to overstate the amount of influence Old Norse had on Old English, but to give an example, the pronouns "they", "their" and "them" – core components of Modern English – derive directly from the Old Norse "þeir" ("they"). But the amount of change Old English experienced interacting with Old Norse pales in comparison to what happened next.
In 1066, William the Conqueror invaded England and, in doing so, established French as a prestige language. Now suddenly we see incredible amounts of French influence on Old English vocabulary and grammar, so much so that by the 12th or 13th century Old English isn’t even Old English anymore, it’s Middle English. To give you an idea of how radical a change this was, Modern English speakers can understand Middle English, whereas Old English might as well have no relation to what we think of as English today. Middle English is the language of Chaucer, who in the 14th century wrote lines like
"Whan that Aprill, with his shoures soote / The droghte of March hath perced to the roote / And bathed every veyne in swich licour".

Middle English is the language of Orm, who wrote in the introduction of his 12th-century book Orrmulum, "Þiss boc iss nemmned Orrmulum, forþi þatt Orrm itt wrohhte".

Old English is the language Beowulf is written in, which contains such unintelligible passages as "Hwæt! We Gardena in geardagum, þeodcyninga þrym gefrunon, hu ða æþelingas ellen fremedon".

About 90% of the words in Modern English are from either Latin or Greek, with the vast majority of these having come through French during this time period or later (although roughly half of the words used in everyday speech are still of Germanic origin).
Let’s get back to Bane. You might have noticed that I didn’t mention where English ranked in Bane’s complexity survey. Let me tell you: English is much less complex than Latin. English is much less complex than Icelandic, which has changed so little as to be mutually intelligible with Old Norse. English is less complex than French or German. In fact, in Bane’s ranking of 20 languages using his complexity metric, English was less complex than any other European language. Instead, English falls somewhere between Dutch and Maori. More notably, though, the English language’s complexity is, at 16.88%, closer to Nigerian Pidgin (at 9.8%) than French (at 23.05%). Old English underwent creolization to become Middle English.
In short, and to answer your question, yes, English can be construed as being one of the least complex European languages, not because of trade – but because of conquest.
In this blog post, I’d like to introduce a project I’ve been tinkering with, on and off (mostly off), for the past few months. The idea behind the project is pretty simple: given a set number of triangles and a blank canvas, a program adjusts the coordinates, color, and transparency of the triangles over time in an attempt to replicate a given image. There were two primary motivations behind my project, at least initially:
This was one of the very first tests of my program, and in my opinion also one of its most successful and impressive recreations. As you might infer from the stark difference between iterations 9,500 and 50,000 and the moderate difference between iterations 50,000 and 1,293,000, the program’s output converges on the target image roughly logarithmically over time; this trend is plotted in the graph below.
Output at iteration 9,500
Image size: 3.38KB
Compressed SVG size: 0.88KB
Output at iteration 50,000
Image size: 2.95KB
Compressed SVG size: 0.88KB
Output at iteration 1,293,000
Image size: 2.74KB
Compressed SVG size: 0.88KB
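The adjustment process can be sketched as a random-mutation hill climber – a guess at the general shape of the loop, not the program’s actual code. The real program renders the triangles and scores them pixel-by-pixel against the target image; this toy uses a stand-in error function so it can run on its own.

```python
import random

def mutate(triangles, scale=10.0):
    """Return a copy of the triangle list with one random parameter
    of one random triangle nudged by a small amount."""
    new = [list(t) for t in triangles]
    t = random.randrange(len(new))
    p = random.randrange(len(new[t]))
    new[t][p] += random.uniform(-scale, scale)
    return new

def hill_climb(triangles, error, iterations=20000):
    """Keep a mutation only if it lowers the error; otherwise discard it."""
    best_err = error(triangles)
    for _ in range(iterations):
        candidate = mutate(triangles)
        err = error(candidate)
        if err < best_err:
            triangles, best_err = candidate, err
    return triangles, best_err

# Toy stand-in for the image comparison: pull every parameter toward 100.
# (The real error would be a pixel-by-pixel difference against the target.)
def target_error(tris):
    return sum((x - 100.0) ** 2 for t in tris for x in t)

start = [[0.0] * 6 for _ in range(5)]  # five triangles, six coordinates each
result, err = hill_climb(start, target_error)
```

Because only improving mutations are kept, early iterations find big wins easily while later ones grind out tiny refinements, which is consistent with the roughly logarithmic convergence described above.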
The corresponding vector image for the final result (below) is actually noticeably less accurate than the JPEG posted above, because the program has exploited JPEG artifacts, as well as the specific dimensions of the target, in creating its reproduction. When the triangles are rendered as an SVG, these artifacts are no longer present and the image becomes less faithful.
One of the most noticeable issues with the program’s image is that it entirely fails to capture the eyes in the source image. I explored various methods of solving this issue, which I will discuss in a later post on implementation. Overall, though, I thought that the program produced a more pleasing (if less accurate) image, kilobyte-for-kilobyte, than comparable "traditional" compression techniques (as pictured below), especially if enlarged.
To finish off this example: a video of the program’s progress in making its output played at 30 frames per second, with frames produced every 500th iteration. The "jumps" in activity visible at 0:23 and 0:56 are instances when I made updates to the program.
Program’s final output
Image size: 2.74KB
Compressed SVG size: 0.88KB
Image compressed with Paint.NET
Compressed size: 1.58KB
This example yet again highlighted an issue with the image similarity evaluation methods employed by the program. While many aspects of the source image are replicated with high fidelity, the facial features of Christian Bale are not. Below again is a video of the program’s progress over time.
Output at iteration 737,500
Image size: 4.03KB
Compressed SVG size: 1.54KB
In this instance, I experimented with another evaluation method that incorporated a saliency comparison between the target image and the output. Although the results presented here seem promising, other tests using the method were lacklustre, especially given that the saliency map calculations added a lot of processing time to each iteration.
Output at iteration 769,500
Image size: 3.27KB
Compressed SVG size: 1.54KB
Recently, a couple of friends and I entered the Brilliant.org Hunger Games competition, in which competitors must create programs which are pitted against others in an arena of sorts. In any given round of the competition, the programs must choose for each other program whether to cooperate with them or take advantage of them. In the Hunger Games competition, these are framed as "hunting" and "slacking".
At face value, the competition is a very standard iterated prisoner’s dilemma. For the general case, the optimal strategy for this game has already been discovered: tit-for-tat. The essence of this strategy is to reciprocate in each round whatever was done to you in the previous round, and to cooperate with everyone in the first round in a show of "good faith". However, the Hunger Games competition had a slight twist: although your program would be able to see the overall reputation of each opponent (i.e., how many times they hunted versus slacked), no other identifying information would be supplied. This limited knowledge makes true tit-for-tat impossible, since your opponents are effectively anonymized. Although you may know that a player you decided to cooperate with in the previous round defected against you, there’s no way to retaliate against that same player in this round with 100% confidence.
My team’s strategy, nicknamed "The Underminer", both compensated for this lack of knowledge and managed to exploit it. We started with the assumption that tit-for-tat is possible, to a degree. As the number of rounds increases, the reputations of individual opponents become solidified, making reputation itself a means of identification. Although a player with a reputation of 1.0 in round one could drop to a reputation of 0.5 in round two, a player with a reputation of 1.0 in round 20 can only drop to about 0.95. Based on this, one can say that the ranking of players by reputation remains mostly static: the worst-ranked opponent will have a very difficult time becoming the highest-ranked. While this is far from true in the first rounds, at a certain point changing ranks becomes almost impossible. This phenomenon can be imagined like a zipper: before any zipping has occurred, there’s a very large degree of freedom of motion available. However, as you begin to zip, the freedom of motion becomes more and more constrained, until none remains.
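The arithmetic behind this zipper effect is straightforward: if reputation is the number of hunts divided by rounds played, one additional round can move it by at most 1/(rounds + 1). A small sketch (my own helper, not our competition code):

```python
def reputation_range(rep, rounds):
    """Possible reputations after one more round, given that the current
    reputation equals hunts / rounds-played."""
    hunts = rep * rounds
    return hunts / (rounds + 1), (hunts + 1) / (rounds + 1)
```

After round one, a perfect cooperator can still fall to 0.5; after round 20, they can fall no lower than 20/21 ≈ 0.952 – which is why reputation rankings freeze up over time.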
While our program implements tit-for-tat as described above in most circumstances, there’s a very specific scenario in which it deviates from this otherwise optimal strategy. As mentioned, the "tit-for-tat" allowed by the Hunger Games competition is not foolproof, since player identities can only be approximately tracked at the beginning of the game. Assuming other players are also tracking opponents by reputation, we can exploit this limitation in knowledge by occasionally attempting to "undermine" another player, assuming their identity. This technique is probably best illustrated by example. Suppose in some round we’re ranked 2nd by reputation at 0.7, while another player is ranked 3rd at 0.6 reputation. Assuming both we and the 3rd-ranked player slacked completely this round, there would be no way for us to displace our opponent as 3rd-ranked player, since they already have the "bad reputation" head start. However, the likelihood of our opponent slacking completely this round is very low. In fact, the decisions of our opponent can be estimated given their current reputation, the number of rounds elapsed so far, and how confident we want the estimate to be, using the lower bound of the Wilson score interval. While this formula is most often used to rank items based on user reviews (and is employed perhaps most famously by Reddit’s "best" comment-sorting algorithm), in this case we can use it to infer the "true" cooperation rate of opponents and, based on this, their (probable) reputation at the end of the round. Supposing in this circumstance that we predict with a large amount of certainty that our opponent’s reputation at the end of this round will be at worst 0.55, and we can manage to lower our reputation below that, then we choose to slack completely this round.
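The lower bound of the Wilson score interval is easy to compute directly. A sketch of the formula as we used it (z = 1.96 for roughly 95% confidence; an opponent’s observed hunts and total rounds stand in for positive ratings and total ratings):

```python
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the Wilson score interval: a pessimistic estimate
    of the true success (here, cooperation) rate after n observations."""
    if n == 0:
        return 0.0
    phat = successes / float(n)
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom
```

Feeding in an opponent’s hunts and rounds yields a pessimistic estimate of their true cooperation rate, from which their worst-plausible end-of-round reputation can be projected; the bound tightens toward the observed rate as more rounds are played.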
Assuming that the other player remained at 0.6 reputation, while we dove down to 0.5, from a third player’s perspective, this is what happened: the 2nd ranked player went from 0.7 reputation to 0.6 reputation, and the 3rd ranked player went from 0.6 reputation to 0.5 reputation. For the third player to make the assumption that the 2nd ranked player went under the 3rd ranked player - going from 0.7 reputation to 0.5 - would be a strange and unintuitive leap of logic. So, in this way, we can choose to take the advantageous route of fully slacking while passing off some of the negative repercussions of this to a worse-ranked player.
In the field of Computational Neuroscience, saliency maps are a means of graphically representing the areas of any visual scene presenting the most "bottom-up" saliency to a human observer (i.e. those most likely to draw the viewer's attention). Although the generation of these maps is not particularly difficult on a conceptual level, doing so is quite computationally expensive if using a serial approach. Below, I provide code for quickly generating the component contrast maps needed to build a saliency map by parallelizing the task on the GPU, as adapted from MATLAB code provided by Vicente Ordonez of SUNY. To run this, you'll need pyopencl v0.92, numpy, and PIL.
Original image
Saliency map
```python
from PIL import Image
import itertools
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from pyopencl.elementwise import ElementwiseKernel
import math

## initialization of GPU context, queue, kernel
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_args = "int width, int len_min_width, float *image, float *contrast"
kernel_code = '''
contrast[i/3] = contrast[i/3] +
    (!( (i < width) || (i > len_min_width) ||
        (i % width == 0) || ((i+1) % width == 0) || ((i+2) % width == 0) ||
        ((i+3) % width == 0) || ((i-1) % width == 0) || ((i-2) % width == 0) )
    )? (
        pown((image[i] - image[i-width]), 2) +
        pown((image[i] - image[i-width-3]), 2) +
        pown((image[i] - image[i-width+3]), 2) +
        pown((image[i] - image[i-3]), 2) +
        pown((image[i] - image[i+3]), 2) +
        pown((image[i] - image[i+width]), 2) +
        pown((image[i] - image[i+width-3]), 2) +
        pown((image[i] - image[i+width+3]), 2)
    ) : 0'''
# for each rgb value in image, if not part of a pixel on the image
# border, sum the squares of the differences between it and the
# corresponding color values of each surrounding pixel, then add
# this to the corresponding output pixel's value
contrast_kernel = ElementwiseKernel(ctx, kernel_args, kernel_code, "contrast")

flatten = lambda l: list(itertools.chain(*l))

def contrast_map(image):
    width, height = image.size

    ## creating numpy arrays
    image = np.array(flatten(list(image.getdata()))).astype(np.float32)  # image array
    contrast_map = np.zeros((height * width)).astype(np.float32)         # blank write array

    ## send arrays to the gpu:
    image_gpu = cl_array.to_device(ctx, queue, image)
    contrast_map_gpu = cl_array.to_device(ctx, queue, contrast_map)

    contrast_kernel(width * 3, (image.size - width - 1),
                    image_gpu, contrast_map_gpu)           # executing kernel
    contrast_map = contrast_map_gpu.get().astype(np.float32)  # retrieving contrast map from gpu
    contrast_map = np.nan_to_num(contrast_map)                # conversion of NaN values to zero

    ## normalization:
    contrast_map += max(contrast_map.min(), 0)
    contrast_map /= contrast_map.max()
    contrast_map *= 255
    return contrast_map.astype(np.uint8)

def saliency_map(image):
    width, height = image.size
    # calculating necessary number of images for gaussian pyramid
    resizes = int(math.floor(math.log(min(width, height), 2)) - 3)
    resized_images = [image.resize((width / factor, height / factor), Image.BICUBIC)
                      for factor in [2 ** x for x in range(resizes)]]  # resizing images
    contrast_resized_images = map(contrast_map, resized_images)        # generating contrast maps

    ## resizing contrast maps back to original image size:
    images = list()
    for i in range(len(contrast_resized_images)):
        image = Image.new("L", resized_images[i].size)
        image.putdata(list(contrast_resized_images[i]))
        images.append(np.array(list(image.resize((width, height)).getdata())))

    ## combining images:
    result = np.zeros_like(images[0]).astype(np.float32)
    for image in images:
        result += image.astype(np.float32)

    ## normalization:
    result += max(result.min(), 0)
    result /= result.max()
    result *= 255

    output = Image.new("L", (width, height))
    output.putdata(list(result.astype(np.uint8)))
    return output
```
Having learned a little about d3.js at a hackathon I attended, I decided to put it to good use. Below is an interactive implementation of the k-means clustering algorithm, visualized using d3.js. Click to place points. Centroids are initially placed on randomly chosen points, and the number of clusters is set by a supposed rule of thumb. You can step through the algorithm one iteration at a time – each iteration simultaneously assigning points to centroids and readjusting the centroids – or view the process in real time.
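For reference, the step the visualization animates – assign every point to its nearest centroid, then move each centroid to the mean of its assigned points – is only a few lines outside of d3.js. A Python sketch (the rule of thumb assumed here, k ≈ √(n/2), is one common choice and may not be the exact one the demo uses):

```python
import math

def kmeans_step(points, centroids):
    """One k-means iteration: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    clusters = [[] for _ in centroids]
    for p in points:
        nearest = min(range(len(centroids)),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2 +
                                    (p[1] - centroids[i][1]) ** 2)
        clusters[nearest].append(p)
    # Empty clusters keep their old centroid.
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)]

def default_k(n):
    """A common rule-of-thumb cluster count: k = sqrt(n / 2)."""
    return max(1, int(round(math.sqrt(n / 2.0))))
```

Iterating `kmeans_step` until the centroids stop moving is the whole algorithm; the demo simply draws each intermediate state.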