A few weeks ago I sat down to watch Ashley Blewer's internet artwork, Throttled. It's hard to adequately describe. Whimsical, nostalgic, cutesy, yet with a hard edge; Online Found Art; A performance; A website; A homage. A poem. Whatever it is, there's no doubt it's art. The piece essentially consists of a large number of GIFs stored by the Internet Archive, and a deliberately slow server connection (i.e. a connection that is throttled). Time is one of the actors in this performance - the slow accretion of layers, the speed of the GIFs themselves: both are crucial to understanding the work. It's delightful to experience Internet art that is entirely premised on the idea of slowing down when so much about technology is designed to speed things up.
The Internet has always been obsessed with time. Although I'm about a decade older, around the time Ashley was bouncing around GeoCities I was there too, doing many of the same things thanks to the home internet connection accessed via my Mum's University of Tasmania account. Of course, those were the days of dialup, so using the Web was something that was done in "the Computer room", and time online was rationed - not because of worries about "screen time" as now, but simply because it prevented anyone else in the house from making or receiving phone calls. As the baud rates increased, Web users discovered that induced demand applies as much to internet speeds as it does to car traffic. Webmasters (LOL, remember them?) rushed to install analytics scripts to measure each visitor's "time on site", using tricks to maximise it in order to serve more ads using yet more JavaScript. Visitors rarely paid for whatever was being advertised, but they certainly paid in data downloads (still capped in Australia and other developing countries) and, of course, paid in their time, as page load times stayed static or even slowed while overall Internet speeds increased.
The absurdities of modern website architecture have become their own kind of performance art. Finn Brunton, in Spam: A shadow history of the Internet, outlines how web search and spam blogs (amongst many other strange things) grew up together. The Googleverse is a sort of perpetual motion machine with web search, SEO, spam, ad-blockers, "superpixels", "AMP", and analytics scripts all feeding off each other in an endless cycle. In 2018 Max Read reported in New York magazine on a case being prosecuted by the United States Justice Department:
Fake people with fake cookies and fake social-media accounts, fake-moving their fake cursors, fake-clicking on fake websites — the fraudsters had essentially created a simulacrum of the internet, where the only real things were the ads.
As with the absurdities of Bitcoin, the main beneficiaries of this system are electricity companies and the server farms that market themselves as "The Cloud". The Cloud often has a problem with time, which is hardly surprising given that "what is the current time" is not nearly so straightforward a question as one might imagine. I mean, the Python programming language has date, time, and datetime in the standard library, which gives an indication of how complicated it is.
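To get a feel for just how slippery "now" is, here's a minimal sketch using nothing but that standard library (the timestamps in the comments are illustrative):

from datetime import datetime, timezone

# Two answers to "what is the current time?", from the same machine
# at the same moment:
naive = datetime.now()              # local wall-clock time, no timezone attached
aware = datetime.now(timezone.utc)  # the same instant, anchored to UTC

print(naive.isoformat())  # e.g. 2020-07-05T09:30:00 (but 09:30 where?)
print(aware.isoformat())  # e.g. 2020-07-04T23:30:00+00:00

# Python won't even let you compare the two: naive < aware raises
# TypeError, because without a timezone the "naive" value is ambiguous -
# exactly the kind of ambiguity that trips up calendar servers.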
I have two recurring events in my Microsoft Outlook calendar that will be there in perpetuity because Exchange Server can't work out whether or not I deleted them. If the event is still scheduled in the calendar on my phone, and I delete it on my PC, does it still exist? I would hope not, but apparently Exchange is less sure.

In the case of my zombie calendar event, "the cloud" has, in a way, erased time. The schedule has been deleted, and also hasn't. It's as if the real event of deleting is being mocked by the scheduled event that will never happen. I've started reading Matthew Kirschenbaum's Track changes: A literary history of word processing, and it's brought to mind the strange effect of all version-controlled software that tracks every keystroke, edit, and change. In one sense, it could be said that such systems have perfect recall, recording history exactly. The Draftback Chrome extension, for example, "lets you play back the revision history of any Google Doc you can edit. It's like going back in time to look over your own shoulder as you write."
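The mechanics behind that kind of playback can be sketched in a few lines - this is a toy illustration of an append-only edit log, not Google's actual data model:

# A toy append-only edit log, loosely in the spirit of Draftback:
# nothing is ever deleted, history is just replayed.

edits = []  # each entry: (position, text_deleted, text_inserted)

def apply_edit(doc, pos, delete, insert):
    edits.append((pos, doc[pos:pos + delete], insert))
    return doc[:pos] + insert + doc[pos + delete:]

def replay(upto):
    """Reconstruct the document as it looked after `upto` edits."""
    doc = ""
    for pos, deleted, inserted in edits[:upto]:
        doc = doc[:pos] + inserted + doc[pos + len(deleted):]
    return doc

doc = ""
doc = apply_edit(doc, 0, 0, "Dear hiring manager")
doc = apply_edit(doc, 5, 14, "diary")  # a change of heart
print(replay(1))  # "Dear hiring manager" - the past is never gone

Because every (position, deleted, inserted) triple is kept forever, any past state can be reconstructed - and nothing can ever truly be removed.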
On the other hand, by never forgetting what has come before, and storing that memory across a series of servers to protect against faults, cloud services like Google Docs could be said to erase history by never allowing anything to be forgotten. Whilst the "right to be forgotten" has been much debated in European courtrooms with Alphabet/Google in the dock, the focus of that effort has been on Google's web search index, rather than its database of every keystroke you ever made or unmade whilst drafting your job application, or institutional policy, or love letter, or scientific paper, or novel.

The recent public arguments over statues of dead white men across the world are ultimately about who gets forgotten and what gets remembered. When the current crop of crusty old white men say that people are "erasing history" by demanding a full accounting of the past and refusing to honour racists, slave drivers and genocidal imperialists, they're right. "History" has always been about strategically forgetting more than it is about remembering. "History" is how we make sense of the past, and which stories about it we choose to tell and to honour. When towns across Eastern Europe spent the 1990s pulling down statues of Lenin it didn't change what had happened in the past, but it certainly "rewrote history". That was the point. But those towns and regions and countries still tell stories about the past, and still have statues in their public squares, and still strategically forget some of the things that have happened. That's what history is.
Which brings me to the strangely inspiring story of JavaScript's request module. Whilst proprietary software companies often end support for software when they want to push users to pay them again - uh, I mean to upgrade - it's fairly unusual for open source software maintainers to deliberately try to kill off the software they've made. Plenty of projects die of course, but usually that's due to neglect, or maintainer burnout, or simply the world moving on and people finding something better.
In the case of the npm request module, things are slightly different. The relationship between the core http nodejs library and the request module on npm is similar to that of Python's http and requests. You can use http if you want, but most people prefer the simpler "abstractions" provided by the modules.
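In Python terms, the trade-off looks something like this (fetching the same page both ways; example.com is just a stand-in URL):

# The low-level standard library way...
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")
body = conn.getresponse().read().decode()
conn.close()

# ...and the third-party requests module's friendlier abstraction,
# which handles connections, encoding and redirects for you.
import requests

body = requests.get("https://example.com/").text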
In March last year, Mikeal Rogers - the original author of request - proposed that it be moved into "maintenance mode" and officially deprecated, specifically because it is one of the most popular modules in the npm ecosystem. A friend of mine once said of Australia's most elite universities, "They're good at being old". As one of the first modules created for nodejs and for npm, request holds a similar position. Thousands of modules and millions of users rely on it because hundreds of blog posts and Stack Overflow answers tell them to. It's popular because it's popular. What Rogers was saying was essentially that request needs to actively get out of the way so that nodejs developers can promote something more in line with modern JavaScript. He was calling time on his own creation so that something better could flourish.
Right now - as we wallow in a world of obviously broken systems, practices and structures - the world needs more Mikeal Rogers. People with power who are willing to admit that what they have been working to sustain for years is standing in the way of something better, and to work to shut it down for the benefit of the whole community.
Fewer disruptors, more deprecators.