I thought it would be fairly straightforward to get a definition of ‘algorithm’, given how ubiquitous the term is these days. It turns out, though, that (at least according to Wikipedia’s ‘Algorithm characterizations’ page) there is no “generally accepted formal definition.” This one from David Berlinski (found on the same Wikipedia page) is the most concise I could find:
“an algorithm is
a finite procedure,
written in a fixed symbolic vocabulary,
governed by precise instructions,
moving in discrete steps, 1, 2, 3, . . .,
whose execution requires no insight, cleverness,
intuition, intelligence, or perspicuity,
and that sooner or later comes to an end.” (2000, p. xviii)
Frank Pasquale likens an algorithm to a ‘black box’ in that it is
a system whose workings are mysterious. We can observe its inputs and outputs, but can’t tell how one becomes the other. Every day, we confront these two meanings. We’re tracked ever more closely by firms and the government. We often don’t have a clear idea of just how far this information can travel, how it’s used, or its consequences.
“… are replacing the manual models of content filtering and gatekeeping of previous media with complex automated tools. In the process of crawling, indexing, and structuring Web content, search engines create an informational infrastructure with specific characteristics and biases.”
[I]t’s not only – and often not primarily – the algorithms, or even the programmers of algorithms, who are to blame. The algos also serve as a way of hiding or rationalizing what top management is doing. That’s what worries me most – when “data-driven” algorithms that are supposedly objective and serving customers and users are in fact biased and working only to boost the fortunes of an elite.
Pasquale underestimates the degree to which even those on the inside can’t control the effects of their algorithms. As a software engineer at Google, I spent years looking at the problem from within, so it’s not surprising that I assign less agency and motive to megacorporations like Google, Facebook, and Apple. In dealing with real-life data, computers often fudge and even misinterpret, and the reason why any particular decision was made is less important than making sure the algorithm makes money overall. Who has the time to validate hundreds of millions of classifications? Where Pasquale tends to see such companies moving in lockstep with the profit motive, I can say firsthand just how confusing and confused even the internal operations of these companies can be.
For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.
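The second kind of answer can be made concrete with a minimal sketch. Everything here — the signal names, weights, and threshold — is invented for illustration; real systems aggregate vastly more signals, but the principle is the same: the label depends only on the summed score, so no single signal “explains” the decision.

```python
# Hypothetical example: a linear classifier that sums many weak "signals".
# All names and numbers below are invented for illustration.

signals = {
    "mentions_sarin": 0.9,     # one strong keyword signal
    "odd_login_hours": 0.3,
    "flagged_contacts": 0.4,
    "travel_pattern": 0.2,
}

weights = {
    "mentions_sarin": 2.0,
    "odd_login_hours": 0.8,
    "flagged_contacts": 1.1,
    "travel_pattern": 0.6,
}

THRESHOLD = 2.0

def classify(signals, weights, threshold=THRESHOLD):
    """Return (label, score): the label depends only on the summed score."""
    score = sum(weights[name] * value for name, value in signals.items())
    label = "terrorist" if score > threshold else "non-terrorist"
    return label, score

label, score = classify(signals, weights)
# Here the weighted signals sum to 2.6, tipping X over the 2.0 threshold,
# yet dropping any one of the weaker signals would flip the label --
# which is why "why was X classified this way?" has no simple answer.
```

With millions of such scores produced daily, auditing any individual one is exactly the validation problem the engineer above describes.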
Artificial Intuition happens when a computer and its software look at data and analyse it using computation that mimics human intuition at the deepest levels: language, hierarchical thinking — even spiritual and religious thinking. The machines doing the thinking are deliberately designed to replicate human neural networks, and connected together form even larger artificial neural networks. It sounds scary . . . and maybe it is (or maybe it isn’t). But it’s happening now. In fact, it is accelerating at an astonishing clip, and it’s the true and definite and undeniable human future.
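Stripped of the rhetoric, the “artificial neural networks” in that passage are layers of very simple units: each “neuron” computes a weighted sum of its inputs and passes it through a nonlinearity, and layers of them are chained together. A minimal sketch (weights drawn at random, purely illustrative — a real network would learn them from data):

```python
import numpy as np

def relu(x):
    """Nonlinearity: each neuron fires only if its weighted sum is positive."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 3 input features -> hidden layer of 4 neurons
W2 = rng.normal(size=(1, 4))   # hidden layer -> a single output neuron

def forward(x):
    hidden = relu(W1 @ x)      # each hidden neuron: weighted sum + nonlinearity
    return float(W2 @ hidden)  # output neuron: weighted sum of hidden activity

x = np.array([0.5, -1.0, 2.0])
score = forward(x)             # a single scalar "judgement" about the input
```

The “larger artificial neural networks” of the quote are just this pattern repeated at scale: more inputs, more layers, more neurons — which is also why their individual decisions resist simple explanation.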
So what are we humble language teachers supposed to do in the face of this ‘true and definite and undeniable human future’? I’m not sure, but I agree with Neil Selwyn that it
seems sensible to suggest that many of the inequalities and injustices associated with contemporary forms of digital education might be redressed through open discussion, open argument and critical scrutiny of the forms of educational technology that we currently have, and those that we want. (2013, ‘Repositioning Educational Technology as a Site of Public Controversy’, para 1)