Tag Archives: semantics

Logic, Computation and (f*(k?) Meming: On2logi+k,ing

10 Feb

Our human impulses are both sources for and solvers of random behaviour, chaotic order and clean representation. For organisations, what is happening online is still often unclear: an individual mix of human and computational logic failures. What is curious about the relationship between organic and circuit-based thoughts and actions is that the desire to overcome our own deficiencies and extend our reach leaves us vulnerable to the weaknesses of computing logic. On a societal level this leaves many questions. For organisational governance it poses the question: should we trust our own judgement, or should we ‘outsource it to machines’?

The #bigpaper example given in the previous post would to many have seemed a woefully creative and/or academic exercise. Merely to organise retweeted material, who applauds a workflow which includes:

  • Scrolling one’s own collection of Tweets;
  • Copying a body of tweets into a word document;
  • Printing off that word document;
  • Going to a public environment;
  • Emailing it to the peer present, given a failure to bring a wallet;
  • Printing the document and waiting for it to be printed;
  • Cutting the document into ‘Tweet sized chunks’ to include only image and message (trying to avoid cutting too close);
  • Reading each tweet again, pushing each tweet into an appropriately themed pile;
  • Finding a table and spreading Tweets evenly across a 2D plane to try and balance contexts and relationships;
  • Photographing Tweets as a population, localised and at an angle;
  • Packing away Tweets into representative piles;
  • Examining photos (not nearly enough definition; repeating the process at higher resolution);
  • Unpacking Tweet piles and rearranging them, this time with improved iterative reordering;
  • Including token signposting to provide order and visual structure;
  • Photographing again;
  • Repacking again.

Well done for having the strength to get past that unexciting workflow!

Why did this need doing, let alone summarizing? Well, firstly, when considering BIS, it’s important to have empathy concerning processes and the people who were/are burdened with onerous, repetitive tasks (much in the same way that a pilgrimage’s value comes from the journey as opposed to the destination). Secondly, it provides direct perspective concerning functions, challenging habits and providing insights and parallels for BIS environments. Thirdly, it provides the hunger for change and direction concerning what priorities and stages a solution should have.

The screencast in the other blog post highlighted, through photographic analogy, informatics weaknesses concerning technology and processes, and (seemingly) natural individual and organisational limiting factors. These may still exist as Big Data’s promises start to mature, but hopefully appropriate BIS approaches would be able to mitigate them.

However, the frustration highlighted above downplays the fact that there were gains from using physical approaches (consideration time, treating information as a durable good rather than a disposable resource). To reconcile these seemingly opposing approaches it is best to search for solutions which help to automate functions and logic steps in a fully digital context (robots tooled with scissors are not quite within commercial reach…).
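
In a fully digital context, the ‘themed piles’ step of the workflow above is the kind of function that can be automated first. Here is a minimal Python sketch; the theme names and keyword sets are invented for illustration (in the real exercise the themes emerged while reading, not up front):

```python
from collections import defaultdict

# Hypothetical themes and trigger words -- invented for illustration only.
THEMES = {
    "policy": {"tax", "welfare", "benefit"},
    "data": {"analytics", "bigdata", "metrics"},
}

def sort_into_piles(tweets):
    """Assign each tweet to the first theme whose keywords it mentions,
    mimicking the manual 'push each tweet into a themed pile' step."""
    piles = defaultdict(list)
    for tweet in tweets:
        words = set(tweet.lower().split())
        for theme, keywords in THEMES.items():
            if words & keywords:
                piles[theme].append(tweet)
                break
        else:
            piles["unsorted"].append(tweet)
    return dict(piles)

tweets = ["Bedroom tax debate heats up", "New bigdata analytics demo"]
print(sort_into_piles(tweets))
```

Even this toy version exposes the human deliberations it papers over: a tweet matching two themes lands in whichever pile is checked first, whereas on the table that ambiguity prompted ‘considerable thought’.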

One of the challenges in implementing functionality for ordering my material in a sophisticated way is that machines and computers are only pragmatically capable of operating within the functions trained into them. When arranging Tweets on a surface I had many complex and competing deliberations, which I made either with little effort (because the solution was clear) or with considerable thought (because of ambiguities, complexity or too many choices). Whether it is possible for computers to mimic these choices, let alone provide ones resembling (or improving upon!) human decision making, was highlighted cleanly by Melanie Mitchell in the book Complexity: A Guided Tour:

Easy Things Are Hard
The other day I said to my eight-year-old son, “Jake, please put your socks on.” He responded by putting them on his head. “See, I put my socks on!” He thought this was hilarious. I, on the other hand, realized that his antics illustrated a deep truth about the difference between humans and computers.

The “socks on head” joke was funny (at least to an eight-year-old) because it violates something we all know is true: even though most statements in human language are, in principle, ambiguous, when you say something to another person, they almost always know what you mean.

Melanie Mitchell compared this human ease of distinction and interpretation with supposedly ‘state of the art’ spam filters, which struggle to interpret V!a&®@ as a spammer trying to vend. This computational challenge was expressed in terms of a computer being able to observe a pattern and then make the correct inference when the answer was not initially clear. To explore how well computers can understand and solve analogies, Mitchell worked with the AI researcher Douglas Hofstadter on the “Copycat” program. This involved providing an example letter-pattern change and giving the computer exercises to make inferences. For example, logic challenges could include:

“Consider the following problem: if abc changes to abd, what is the analogous change to ijk? Most people describe the change as something
like “Replace the rightmost letter by its alphabetic successor,” and answer ijl. But clearly there are many other possible answers, among them:

• ijd (“Replace the rightmost letter by a d”—similar to Jake putting his socks “on”)

• ijk (“Replace all c’s by d’s; there are no c’s in ijk”), and

• abd (“Replace any string by abd”).

An appropriate mathematical solution was found, involving a slipnet (a network of concepts), a workspace (for the letters to reside in), codelets (agents which explore possibilities) and temperature (a measure of organisation which controls the degree of randomness with which codelets operate). Like performance management in the real world, the Copycat program had to identify the options, form an informed understanding of how the decisions would differ and make a commitment.
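
Copycat’s temperature idea can be loosely illustrated with softmax sampling: score each candidate answer, then let temperature control how random the pick is. This is only a sketch of the principle, not the actual Copycat architecture (which distributes the choice across many codelets), and the plausibility scores below are invented:

```python
import math
import random

def choose_option(scores, temperature):
    """Softmax sampling: a high temperature gives near-random exploration,
    a low temperature almost always picks the best-scoring option."""
    weights = [math.exp(s / max(temperature, 1e-6)) for s in scores.values()]
    total = sum(weights)
    r = random.uniform(0, total)
    for option, w in zip(scores, weights):
        r -= w
        if r <= 0:
            return option
    return list(scores)[-1]  # guard against floating-point drift

# Invented plausibility scores for answers to "abc -> abd, so ijk -> ?"
answers = {"ijl": 9.0, "ijd": 3.0, "abd": 1.0}
cold = [choose_option(answers, 0.1) for _ in range(20)]    # disciplined
hot = [choose_option(answers, 100.0) for _ in range(20)]   # exploratory
print(cold, hot)
```

At low temperature the sampler behaves like the person answering ijl; at high temperature it occasionally produces the ‘socks on head’ answers, which is exactly the balance Copycat had to manage.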

Mitchell referred to a point earlier in the book, considering the activities of ants (insects which are dumb in isolation but which display significant levels of intelligence once they reach a certain volume). Whilst ants would normally go for the most obvious food source (the place the other ants were going to, or the direction from which returning ants carried food), there would be a normal deviation involving some ants taking new courses. This provides an unconscious balance between the short-term expediency of food and longer-term opportunities for sustainable food sources.
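
The ants’ balance between exploitation and exploration maps neatly onto a simple epsilon-style rule: most ‘ants’ follow the strongest pheromone trail, while a small fraction wander off. A toy sketch, with invented trail strengths:

```python
import random

def forage(trails, explore_rate=0.1):
    """Most ants follow the strongest pheromone trail; a small fraction
    explore at random, trading short-term yield for the chance of
    finding new, sustainable food sources."""
    if random.random() < explore_rate:
        return random.choice(list(trails))   # exploratory ant
    return max(trails, key=trails.get)       # follow the crowd

trails = {"north": 0.7, "east": 0.2, "west": 0.1}  # pheromone strengths
choices = [forage(trails) for _ in range(1000)]
print(choices.count("north") / len(choices))
```

Around nine in ten ants head north, but the colony never entirely stops probing east and west; the deviation is the point, not a defect.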

Screenshot from 2013-02-11 00:39:38

Identifying and implementing logical and mechanical solutions for organising social media paths does take time. However, they can pay dividends if the sheer cost of not automating functions exceeds the cost of one of the following:

  • Outsourcing that functionality,
  • Buying an off the shelf solution,
  • Tinkering with/customizing available solutions,
  • Designing and implementing specific solutions.

To give a practical example, an analysis was made of a recent Guardian article on the UK’s new spare bedroom tax for those on welfare, and its corresponding 100 posts. Using a demo of a keyword text extractor it was possible to create a breakdown of key terms for the article and each post. Entered into an Excel spreadsheet, the exercise became more onerous than the Twitter arrangements. Although the tool technically sifted through appropriate and inappropriate keyword solutions, the comments in isolation created variances that it was not going to deal with. The keyword list exceeded the Twitter population in terms of volume and diversity (partly because of the lack of a word limit), especially when considering duplicates. Here is one example covering taxes and benefits:

tax 11
tax.It 1
taxes 4
poll tax 2
Poll Tax 1
council tax 6
annual council tax 1
bedroom tax 14
new bedroom tax 1
extra bewdroom tax 1
percent beedroom tax 1
housing tax 1
Negligence Tax 1
window tax 2
tax avoidance schemes 1
tax planning rules 1
income/ benefits 1
pay/benefits 1
benefits 2
benefit 1
tax credits 2
council tax benefit 1
Employment Support Allowance 1
government pay 1
government assistance 1
Work Programme 2
programmes 1
Incapacity Benefit 0
basic benefit 1
Discretionary housing payments 1
Discretionary Housing Payment 1
housing benefit 6
Housing Benefit 2
HB 4
brand new HB 1
ESA 3
PIP 1
PIP conversion 1
decision 1
benefits measure 1
home allowances 1

Aggregating separate analyses introduced problems regarding the multiple permutations arising from accidental or deliberate deviations in explanation, emphasis, plural/singular form or spelling. Given that neither the process used nor the tool’s analysis reconciles this, we end up with upper-case and lower-case keywords counted separately and with descriptors and terms welded together. In addition, parent-child relationships between terms or titles do not appear strong (perhaps through conservatism of the software that could be tweaked). Terms such as coalition or Liberals are not carried or captured with cultural sensitivity (the UK’s government in this instance).
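
A first pass at the reconciliation the extractor lacked can be sketched in a few lines: lower-case every term and crudely strip plural endings before aggregating counts. The suffix rule below is deliberately naive (it would mangle many real words), and the sample uses a handful of counts from the table above:

```python
from collections import Counter

def normalise(term):
    """Collapse case and trivial plural variants -- a crude stand-in
    for the reconciliation step the keyword extractor did not perform."""
    t = term.lower().strip()
    if t.endswith("es") and len(t) > 4:
        t = t[:-2]
    elif t.endswith("s") and len(t) > 3:
        t = t[:-1]
    return t

# A few raw counts taken from the extracted-keyword table above.
raw = {"tax": 11, "taxes": 4, "poll tax": 2, "Poll Tax": 1,
       "housing benefit": 6, "Housing Benefit": 2}

merged = Counter()
for term, count in raw.items():
    merged[normalise(term)] += count
print(dict(merged))
```

Even this collapses "tax"/"taxes" and the case variants into single counts; the harder problems (welding descriptors to terms, "tax.It", parent-child hierarchies) still need either better tooling or personal judgement.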

Copying and then breaking down the keywords into manageable or personalized themes or categories was onerous (although this is partly down to a lack of tools for reprocessing). Reordering the material takes time on a human level (although it ironically resembles the process of disk defragmenting; see the image below of extracted keywords with markers to post authors, after part of the keywords were moved to another Excel sheet for clarity).

Screenshot from 2013-02-11 01:34:33

Capturing the whole chain of appropriate keywords using this technique is imperfect (it is like considering the world as if it were a grain of sand and then commencing an audit of the universe). It is amazing, however, to examine what keyword extraction is able to offer for just one discussion thread in terms of verbal emphases, especially when related to information, point, emphasis and debate (particularly when sources such as the Guardian offer quantifiable ‘recommend’ counts).

The keywords extracted cover the individual topic pretty comprehensively. Once interpreted effectively, especially with terms synthesized and broken down to base meaning and interaction, the technique is capable of providing strong specialised meaning. Once that point of sophistication is reached, scalable and sophisticated analysis, communications and campaigning become possible at a rule-based level. As alluded to in my previous post, it is possible to map problems and issues to solutions. In many ways sentiment analysis is already offering this (although it is still prone to errors similar to those explained above). Getting to a more meaning-based level that takes in human and computing errors would provide a clearer understanding of the topic (although for many of the keyword themes in this example it would be more consistent to use personal judgement, given the cleaning required to counter the volume of computed keywords).

Perhaps it is apt to highlight the work of Joseph Weizenbaum, a member of GE’s team which in 1955 built the first computer system dedicated to banking operations, and whose technical contributions include the list-processing system SLIP and the natural language understanding program ELIZA, an important development in artificial intelligence.

“…Named for the heroine of My Fair Lady, ELIZA was perhaps the first instance of what today is known as a chatterbot program. Specifically, the ELIZA program simulated a conversation between a patient and a psychotherapist by using a person’s responses to shape the computer’s replies. Weizenbaum was shocked to discover that many users were taking his program seriously and were opening their hearts to it. The experience prompted him to think philosophically about the implications of artificial intelligence, and, later, to become a critic of it.

In 1976, he authored Computer Power and Human Reason: From Judgment to Calculation, in which he displayed ambivalence toward computer technology and warned against giving machines the responsibility for making genuinely human choices. Specifically, Weizenbaum argued that it was not just wrong but dangerous and, in some cases, immoral to assume that computers would be able to do anything given enough processing power and clever programming.

“No other organism, and certainly no computer, can be made to confront genuine human problems in human terms,” he wrote.”

In order to circumvent historic failures of intelligent comprehension in computing logic, commercial providers online stuck to using “Recommended by…” algorithms comprising aggregate or contextual navigation and consumption patterns. Perhaps, rather than reinforcing our human approaches online, we have become more like the ants?
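
Such “Recommended by…” logic can be sketched as simple co-occurrence counting over reading histories: articles read by the same people get recommended to each other, with no comprehension of either. The article titles and histories below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical reading histories: which articles each visitor read.
histories = [
    {"bedroom-tax", "housing-benefit", "welfare-reform"},
    {"bedroom-tax", "housing-benefit"},
    {"bedroom-tax", "budget-2013"},
]

# Count how often each pair of articles was read by the same person.
co_reads = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_reads[(a, b)] += 1

def recommend(article):
    """Rank other articles by how often they co-occur with `article`."""
    scores = Counter()
    for (a, b), n in co_reads.items():
        if a == article:
            scores[b] += n
        elif b == article:
            scores[a] += n
    return [title for title, _ in scores.most_common()]

print(recommend("bedroom-tax"))
```

No meaning is modelled anywhere: the ‘intelligence’ is entirely in the aggregate trail left by readers, which is precisely the pheromone-following behaviour of the ants.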

Although the keyword analysis provided a simpler, one-off demonstration, one should not discount the value of more complex and custom-built analyses. However, the concerns regarding the processes and stages of a human analysis disappear once the reality of having to automate such functions kicks in. There are tradeoffs concerning subtlety. For BIS approaches to performance management it is dangerous to assume that buying a machine solves the problems of human functionality for some cost. Without knowing what is under the hood, or at a bare minimum what the quirks are, there is a risk that complexity will create unknown risks to organisational governance.

—————–

Other blog posts in the Order From Chaos miniseries include:

  1. Order From Chaos: Performance Management and Social Media Analytics in the Age of Big Data;
  2. Abstraction, Perspective and Complexity: Social Media’s Canon of Proportions;
  3. Logic, Computation and (f*(k?) Meming: On2logi+k,ing;
  4. Transposition, Catalysts and Synthesis: Playing with iMacwells eDemon.

More than just eCoal, eSteam and ePower: The Modernizing Dynamics of Change Series

  1. Introduction;
  2. Economic requirements: Catalyst for Invention, Innovation and Progress;
  3. Not Just Invention: Change Through The Desire to Innovate, Reimagine and Expand;
  4. New Tools, New Patterns, New Thoughts: the Great Dialogue;
  5. Nobody Will Notice The Slow Death of Dissemination, They Will Be Too Busy Listening;
  6. The frictions of competition and cooperation to strategic thinking;
  7. The Hot and Cold Wars: Relationships and conflicts between big and small, propriety and open source.

—————————

If you have any suggestions, relevant links or questions to add flavour to this series then please join the dialogue below or contact me via Twitter: