Tag Archives: Quality

Framework for Evaluating IS Success

22 Feb

Group 2 Members:

Christine Coughlan, Clifton Moore, Dermot Lucid, Niamh O’ Farrell &
Ronan Murphy

Introduction

We have created a framework which allows an organisation to evaluate the success of an information system unit. To develop it, we researched a number of IS success models from key authors in the IS field, including DeLone & McLean, Sedera, Gable, Seddon and Nelson. In researching these models we identified both their value and their flaws, and we have built our own framework around what we consider the most important IS dimensions to evaluate when measuring the success of an IS unit in any organisation today.

Our framework identifies the key dimensions which must be measured to evaluate the overall success of an IS unit. Any firm, large or small, can use the framework to measure the success of its IS unit by choosing suitable metrics for each dimension. We believe the framework is both flexible and customisable: we identify possible metrics for each dimension, but it is up to the organisation to decide on the metrics that best suit its organisational context and IS strategy. It is imperative to choose appropriate, agreed measures, or the framework will fail to deliver its potential value.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Framework for Evaluating the Success of an IS Unit

IS success framework


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Explanation of Dimensions

[1] Context

Seddon et al (1999) presented Cameron and Whetten's (1983) framework for contextualising and evaluating organisational performance (Fig 1, below), adapting it to IT effectiveness. We have adopted the same seven-point framework to contextualise IS success, as endorsed by Petter, DeLone, and McLean (2008) and outlined in the earlier post ‘Relatively Successful IS’.

Though all seven points are important, we suggest, in line with Seddon et al, that three are central to contextualising success: (1) from which stakeholder's perspective is success judged?, (2) what type of information system is being evaluated?, and (7) against which referent is success to be judged? The Context dimension of our framework is designed to establish, and justify, what is to be deemed ‘successful’ from the standpoint of the stakeholder concerned, for the relevant IS, and in the particular situation or organisation. To this end, the Context aspect should be regarded as a canvas on which to identify and outline the perspective, or varying perspectives, on which the analysis is based.

Stakeholder

“A stakeholder is a person or group in whose interest the evaluation of IS success is being performed” (Seddon et al, 1999). Seddon argues that because an organisation contains a range of different individuals, they will evaluate IT success in different ways and from different perspectives. The ‘stakeholder’ section of the framework is intended to give an organisation a tool for capturing and understanding each stakeholder's view of a project's success. Each stakeholder may use their own dedicated canvas, or, if deemed useful, multiple stakeholders may outline their perspectives and concerns on a shared canvas. As with Osterwalder's Business Model Canvas, participants might post their views into the various categories using ‘stickies’, perhaps colour-coded to their individual stakeholder perspectives, to build up a visual representation of where their various priorities lie. In either case, this approach allows stakeholders' perspectives, priorities and concerns to be compared, and will lead the various parties to a more complete understanding of the strengths and weaknesses of the system in question. For example, a user might acknowledge a manager's concern for cost and the IT department's concern over reliability, versus their own concerns regarding usability, and vice versa.

The elements ‘Timeframe’, ‘Type of Data’, and ‘Purpose of Evaluation’ are important for clarity, while noting whether the system is for voluntary or mandatory use is a key factor to keep in mind as a backdrop to the evaluation. In the operational canvas these four elements (shaded) might be replaced by more relevant concerns, and so should be regarded as suggestions. Once the vision of success is established, stakeholders can turn their attention to the Quality and Impact sections and prioritise, and even assign weightings to, the various underlying dimensions. The relevant, prioritised and weighted dimensions can then be measured on a Likert scale against Sedera et al's 27 corresponding measures (Fig 3, below), as mentioned in the earlier post ‘IS Success Canvas’.
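To make the scoring step concrete, here is a minimal sketch of how weighted, Likert-scored dimensions might be rolled up into a single figure. The dimension names, weights and scores are illustrative assumptions, not values prescribed by any of the models cited.

```python
# Minimal sketch: combine per-dimension Likert means into one weighted
# score. Weights and scores below are hypothetical examples only.

# Stakeholder-agreed weights for the prioritised dimensions (sum to 1.0).
weights = {
    "information_quality": 0.30,
    "system_quality": 0.25,
    "individual_impact": 0.25,
    "organisational_impact": 0.20,
}

# Mean Likert responses (1-5) gathered per dimension from the survey.
likert_means = {
    "information_quality": 4.2,
    "system_quality": 3.1,
    "individual_impact": 3.8,
    "organisational_impact": 2.9,
}

def weighted_success_score(weights, scores):
    """Weighted mean of the per-dimension Likert scores (still on 1-5)."""
    return sum(weights[d] * scores[d] for d in weights)

print(round(weighted_success_score(weights, likert_means), 2))  # ~3.6
```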

The table borrowed from Seddon et al (1999) (Fig 2, below) contains examples of various stakeholders and information systems, and can be used to inform the Context dimension of the framework. Its column and row headings are useful as prompts, but they are not exhaustive of potential perspectives; the strength of the canvas approach is precisely that it is blank, and can therefore accommodate any stakeholder perspective and any information system. Also, though the table is informative as regards stakeholder and IS type, we favour Sedera's refined four-dimensional model and its tested measures (Fig 3, below) over the measures contained in Seddon's table.
In a nutshell, the left side of the framework is a canvas on which to establish and outline what is to be deemed IS success. The right side is concerned with evaluating the IS against this established vision of success.

[2] Quality and Impact

The original DeLone and McLean IS success model classified measures of success into six constructs: System Quality, Information Quality, Organisational Impact, Individual Impact, Satisfaction, and Use.
Gable et al (2008) later proposed that information quality and system quality, as identified by DeLone and McLean, should be elements of a greater construct, IS Quality, while individual and organisational impact should be sub-elements of an IS Impact construct.
Furthermore, Gable et al proposed that the Satisfaction and Use concepts identified by DeLone and McLean should be used only as metrics for measuring IS Impact and IS Quality, not treated as constructs in their own right.
Thus, in our framework we considered both models and consolidated the six constructs identified by DeLone and McLean into the two key constructs put forward by Gable et al: IS Impact and IS Quality. These constructs appear on the top level of the diagram.

The Impact construct is concerned with the eventual outputs delivered by an IS. Organisations invest heavily in information systems because they expect them to have positive impacts on individual users and on the organisation as a whole. Individual Impact looks at how the IS has influenced the productivity and capabilities of individual users; possible measures include individual productivity, learning and decision effectiveness.
Organisational Impact is concerned with how the IS contributes to overall organisational results and capabilities; business process change, cost reductions and overall productivity can be used to measure it.

The Quality construct is used to measure the IT artefact, or technology element, of the IS.
Information Quality concerns the quality of the information produced by the system, for example in reports and on screen. Measures which have been developed and successfully applied, according to Gable et al (2008), include importance, relevance and accuracy.
System Quality measures the success of the IS from a technical and design perspective. Tried and tested measures include reliability, flexibility, and potential for customisation.
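For illustration, the two-construct hierarchy described above can be written down as a simple data structure, using the example measures mentioned in the text. This is a sketch of one possible representation, not a canonical encoding of Gable et al's model.

```python
# The two top-level constructs, their dimensions, and example measures
# drawn from the discussion above.
IS_SUCCESS_FRAMEWORK = {
    "IS Impact": {
        "Individual Impact": [
            "individual productivity", "learning", "decision effectiveness"],
        "Organisational Impact": [
            "business process change", "cost reductions",
            "overall productivity"],
    },
    "IS Quality": {
        "Information Quality": ["importance", "relevance", "accuracy"],
        "System Quality": [
            "reliability", "flexibility", "potential for customisation"],
    },
}

for construct, dimensions in IS_SUCCESS_FRAMEWORK.items():
    for dimension, measures in dimensions.items():
        print(f"{construct} / {dimension}: {', '.join(measures)}")
```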

Underneath Quality and Impact in the diagram we have the structure of the IS unit and Net Benefits.

[3] Structure of IS Unit

The structure or make-up of an IS unit can greatly affect its success: for example, the level of commitment and support from top management, the quality of communication, the culture, and the skills of the employees. We explain each of these below to show how the structure of an IS unit can influence IS success.

Top Management Support

It is extremely important that top management do not forget about a project after the planning stage, but remain committed through system implementation. By being directly involved, top management can guide the implementation team, allocate resources, and step in to solve critical issues likely to affect implementation.

Communication

How an IS unit is managed also affects communication within an organisation, and ultimately the productivity of users. Communication in an enterprise is vital for managing the company efficiently, keeping a close watch on strategy, and maintaining strong relationships with employees, partners and clients.

Culture

Culture within an organisation is also critical in determining success, as it can shape how innovation affects IT practices and overall performance. Culture can affect organisations in three ways:
1) It can provide unwritten guidelines for employees on how to create a good workplace and strengthen relationships, improving the social system within the organisation.
2) It can affect the organisation's ability to deal successfully with issues of both internal and external integration.
3) It can determine the differentiation between in-group and out-group people.

Employee Skills and Training

Employee skills, being among the most important factors within an organisation, are critical to achieving success. If employees do not have the skills needed to carry out the required tasks, productivity and efficiency suffer. It is also important that a business has a well-established training programme so that new employees gain the specific skills the company requires.

[4] Net Benefits

As a group we felt that Net Benefits is needed within an IS framework to support management teams in determining the success of their IS unit. The Net Benefits dimension was also used in the DeLone and McLean (2003) model for organising IS success measurements. Net benefits are the extent to which IS are adding to the success of individuals, groups and organisations. The management team needs to identify what its net benefits are. Examples of organisational net benefits include improved decision making, improved productivity, increased sales, cost reductions, profits, economic development and job creation (Petter, DeLone & McLean, 2008).

Conclusion

Our framework is a synthesis of the key dimensions evident in IS success research, and we believe it is applicable or adaptable to any IS evaluation. The framework is intentionally open with regard to its dimensions and measures, making it ideal for quickly establishing and explaining, across various stakeholders, the more and less successful aspects of a system; where necessary, more thorough quantitative methods may be applied to the various dimensions, depending on the nature of the evaluation.

References:

• Seddon, P. B., Staples, S., Patnayakuni, R., & Bowtell, M. (1999). Dimensions of information systems success. Communications of the AIS, 2(3es).
• Petter, S., DeLone, W., & McLean, E. (2008). Measuring information systems success: models, dimensions, measures, and interrelationships. European Journal of Information Systems, 17(3).
• Gable, G. G., Sedera, D., & Chan, T. (2008). Re-conceptualizing information system success: the IS-Impact Measurement Model. Journal of the Association for Information Systems, 9(7).
• DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19(4).

Figure 1: Seven Questions to Answer when Measuring Organisational Performance – Cameron and Whetten (1983)

Figure 2: IS Effectiveness Measures used for different combinations of Stakeholder and System – Seddon et al (1999)

Figure 3: Gable et al (2008) Impact Measures


Logic, Computation and (f*(k?) Meming: On2logi+k,ing

10 Feb

Our human impulses are both sources of, and solvers for, random behaviour, chaotic order and clean representation. For organisations, measuring what is happening online is still often unclear, muddied by an individual mix of human and computational logic failures. What is curious about the relationship between organic and circuit-based thoughts and actions is that the desire to overcome our own deficiencies and extend our reach leaves us vulnerable to the weaknesses of computing logic. On a societal level this leaves many questions. For organisational governance it poses one in particular: should we trust our own judgement, or should we ‘outsource it to machines’?

The #bigpaper example given in the previous post would to many have seemed a woefully creative and/or academic exercise. Merely to organise retweeted material, who would applaud a workflow which includes:

  • Scrolling one's own collection of tweets;
  • Copying a body of tweets into a Word document;
  • Printing off that Word document;
  • Going to a public environment;
  • Emailing the document to a present peer, having failed to bring a wallet;
  • Printing the document and waiting for it to print;
  • Cutting the document into ‘tweet-sized chunks’, keeping only image and message (trying not to cut too close);
  • Reading each tweet again and pushing it into an appropriately themed pile;
  • Finding a table and spreading the tweets evenly across a 2D plane to try to balance contexts and relationships;
  • Photographing the tweets as a population, localised and at an angle;
  • Packing the tweets away into representative piles;
  • Examining the photos (not nearly enough definition; repeating the process at higher resolution);
  • Unpacking the tweet piles and rearranging them, this time with improved iterative reordering;
  • Including token signposting to provide order and visual structure;
  • Photographing again;
  • Repacking again.

Well done having the strength to get past that unexciting workflow!

Why did this need doing, let alone summarising? Well, firstly, when considering BIS it's important to have empathy for processes and for the people who were, or are, consigned to onerous, repetitive tasks (much as a pilgrimage's value comes from the journey rather than the destination). Secondly, it provides a direct perspective on functions, challenging habits and providing insights and parallels for BIS environments. Thirdly, it provides the hunger for change and a sense of direction concerning what priorities and stages a solution should have.

The screencast in the other blog highlighted, through photographic analogy, informatics weaknesses concerning technology and processes, and seemingly natural individual and organisational limiting factors, which may persist even as Big Data's promises mature, but which appropriate BIS approaches should be able to mitigate.

However, the frustration highlighted above downplays the fact that there were gains from the physical approach: consideration time, and treating information as a durable good rather than a disposable resource. To reconcile these seemingly opposed approaches, it is best to search for solutions which help to automate functions and logic steps in a fully digital context (robots tooled with scissors are not quite within commercial reach…).

One of the challenges in implementing functionality to order my material in a sophisticated way is that machines and computers are, pragmatically, only capable of operating within the functions they have been trained on. When arranging tweets on a surface I had many complex and competing deliberations, which I made either with little effort (because the solution was clear) or with considerable thought (because of ambiguity, complexity, or too many choices). Whether it is possible for computers to mimic these choices, let alone provide ones resembling (or improving upon!) human decision making, was highlighted cleanly by Melanie Mitchell in her book Complexity: A Guided Tour:

Easy Things Are Hard
The other day I said to my eight-year-old son, “Jake, please put your socks on.” He responded by putting them on his head. “See, I put my socks on!” He thought this was hilarious. I, on the other hand, realized that his antics illustrated a deep truth about the difference between humans and computers.

The “socks on head” joke was funny (at least to an eight-year-old) because it violates something we all know is true: even though most statements in human language are, in principle, ambiguous, when you say something to another person, they almost always know what you mean.

Melanie Mitchell compared this human ease of distinction and interpretation with supposedly ‘state of the art’ spam filters, which struggle to interpret V!a&®@ as a spammer trying to vend. The computational challenge is for a computer to observe a pattern and then make the correct inference when the answer is not initially clear. To explore how well computers can understand and solve analogies, Mitchell worked with the AI researcher Douglas Hofstadter on the “Copycat” program, which involved giving the computer an example letter-pattern change and exercises requiring it to make inferences. For example, logic challenges could include:

“Consider the following problem: if abc changes to abd, what is the analogous change to ijk? Most people describe the change as something
like “Replace the rightmost letter by its alphabetic successor,” and answer ijl. But clearly there are many other possible answers, among them:

• ijd (“Replace the rightmost letter by a d”—similar to Jake putting his socks “on”)

• ijk (“Replace all c’s by d’s; there are no c’s in ijk”), and

• abd (“Replace any string by abd”).

An appropriate mathematical solution was found, involving a slipnet (a network of concepts), a workspace (for the letters to reside in), codelets (agents which explore possibilities) and temperature (a measure of organisation which controls the degree of randomness with which the codelets operate). Like performance management in the real world, the Copycat program had to identify the options, form an informed understanding of how the decisions would differ, and make a commitment.
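To see how wide the gap is between a hard-coded rule and Copycat's fluid approach, here is a deliberately naive sketch (emphatically not the actual Copycat program) implementing only the single rule most people infer: replace the rightmost letter by its alphabetic successor.

```python
def successor(ch):
    """Next letter of the alphabet; 'z' wraps to 'a' for simplicity."""
    return "a" if ch == "z" else chr(ord(ch) + 1)

def analogy(target):
    """Apply the one rule inferred from abc -> abd to a new string."""
    return target[:-1] + successor(target[-1])

print(analogy("ijk"))  # ijl -- the answer most people give
print(analogy("xyz"))  # xya -- where the rigid rule starts to look wrong
```

The rigid rule happens to produce the popular answer for ijk, but it has no way of noticing when a different reading would be more fitting; supplying that flexibility is exactly what the slipnet, codelets and temperature were designed for.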

Mitchell referred back to an earlier point in the book about the activities of ants: insects which are dumb in isolation but collectively intelligent once they reach a certain number. Whilst ants normally head for the most obvious food source (the place other ants are going to, or the direction from which ants are returning with food), there is a normal deviation involving some ants taking new courses. This provides an unconscious balance between the short-term expediency of known food and longer-term opportunities for sustainable food sources.
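That trade-off can be caricatured in a few lines as an epsilon-greedy choice rule, a standard way of balancing exploitation against exploration. This is an illustration of the principle, not a model of real ant behaviour, and the path names and qualities are invented.

```python
import random

def choose_path(path_quality, epsilon=0.1):
    """Mostly follow the best-known path; deviate with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(path_quality))    # explore a new course
    return max(path_quality, key=path_quality.get)  # follow the trail

paths = {"north trail": 0.9, "south trail": 0.4, "unmarked grass": 0.1}
picks = [choose_path(paths) for _ in range(1000)]
print({p: picks.count(p) for p in paths})  # overwhelmingly "north trail"
```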


Identifying and implementing logical and mechanical solutions for organising social media material takes time. However, it can pay dividends if the sheer cost of not automating functions exceeds the cost of one of the following:

  • Outsourcing that functionality;
  • Buying an off-the-shelf solution;
  • Tinkering with and customising available solutions;
  • Designing and implementing a specific solution.

To give a practical example, an analysis was made of a recent Guardian article on the UK's new spare-bedroom tax for those on welfare, and of its corresponding 100 posts. Using a demo of a keyword text extractor, it was possible to create a breakdown of key terms for the article and each post. Entered into an Excel spreadsheet, the exercise became more onerous than the Twitter arrangements. Though the tool technically sifted appropriate from inappropriate keywords, the comments in isolation created variances it was never going to cope with: the keyword list exceeded the Twitter population in both volume and diversity (partly because of the lack of a word limit), especially once duplicates are considered. Here is one example covering taxes and benefits:

tax 11
tax.It 1
taxes 4
poll tax 2
Poll Tax 1
council tax 6
annual council tax 1
bedroom tax 14
new bedroom tax 1
extra bewdroom tax 1
percent beedroom tax 1
housing tax 1
Negligence Tax 1
window tax 2
tax avoidance schemes 1
tax planning rules 1
income/ benefits 1
pay/benefits 1
benefits 2
benefit 1
tax credits 2
council tax benefit 1
Employment Support Allowance 1
government pay 1
government assistance 1
Work Programme 2
programmes 1
Incapacity Benefit 0
basic benefit 1
Discretionary housing payments 1
Discretionary Housing Payment 1
housing benefit 6
Housing Benefit 2
HB 4
brand new HB 1
ESA 3
PIP 1
PIP conversion 1
decision 1
benefits measure 1
home allowances 1

Aggregating separate analyses introduced problems regarding the multiple permutations that arise from accidental or deliberate deviations in phrasing, emphasis, plurality or spelling. Because neither the process used nor the tool's analysis reconciles these, we end up with upper-case and lower-case keywords treated separately, and with descriptors and terms welded together. In addition, parent-child relationships between terms do not appear strong (perhaps through a conservatism of the software that could be tweaked), and terms such as ‘coalition’ or ‘Liberals’ are not captured with cultural sensitivity (here referring to the UK's government).
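A crude reconciliation pass recovers some of this. The sketch below, using only the standard library, case-folds terms, trims a trailing plural ‘s’ and snaps near-misspellings onto a small list of known words. The word list and similarity cutoff are assumptions for illustration; a real pipeline would want a proper stemmer and curated synonym lists.

```python
import difflib
from collections import Counter

# A few of the raw counts from the list above, variants and typos included.
raw_counts = {"tax": 11, "taxes": 4, "poll tax": 2, "Poll Tax": 1,
              "bedroom tax": 14, "extra bewdroom tax": 1,
              "percent beedroom tax": 1, "housing benefit": 6,
              "Housing Benefit": 2}

known_words = ["tax", "poll", "bedroom", "housing", "benefit"]

def normalise(term, known):
    term = term.lower().rstrip("s")  # case-fold and crudely de-pluralise
    words = [difflib.get_close_matches(w, known, n=1, cutoff=0.8) or [w]
             for w in term.split()]  # snap misspellings onto known words
    return " ".join(w[0] for w in words)

merged = Counter()
for term, count in raw_counts.items():
    merged[normalise(term, known_words)] += count

print(merged.most_common(3))
# [('tax', 15), ('bedroom tax', 14), ('housing benefit', 8)]
```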

Copying the keywords and then breaking them down into manageable or personalised themes and categories was onerous (partly a reflection of the lack of tools for reprocessing). Reordering the material takes time at a human level, though it ironically resembles disk defragmentation; see the image below of extracted keywords, with markers linking them to post authors, after part of the keyword set was moved to another Excel sheet for clarity.

[Screenshot: extracted keywords in Excel, with markers linking keywords to post authors]

Capturing the whole chain of appropriate keywords with this technique is imperfect (it is like treating the world as a grain of sand and then commencing an audit of the universe). It is nevertheless striking what keyword extraction can offer for just one discussion thread in terms of verbal emphasis, especially when related to information, point, emphasis and debate (particularly when sources such as the Guardian offer quantifiable ‘recommend’ numbers).

The keywords extracted cover the individual topic fairly comprehensively. Once interpreted effectively, especially with terms synthesised and broken down to base meaning and interaction, the technique can provide strong, specialised meaning. Once that point of sophistication is reached, scalable and sophisticated rule-based analysis, communication and campaigning become possible. As alluded to in my previous post, it becomes possible to map problems and issues to solutions. In many ways sentiment analysis already offers this, although it remains prone to errors similar to those explained above. Reaching a more meaning-based level that accounts for human and computing errors would provide a clearer understanding of a topic (although, given the cleaning required to counter the volume of computed keywords, personal judgement would have been more consistent for many of the keyword themes in this example).

Perhaps it is apt to highlight the work of Joseph Weizenbaum, a member of the GE team that in 1955 built the first computer system dedicated to banking operations, and whose technical contributions include the list-processing system SLIP and the natural language understanding program ELIZA, an important development in artificial intelligence.

“…Named for the heroine of My Fair Lady, ELIZA was perhaps the first instance of what today is known as a chatterbot program. Specifically, the ELIZA program simulated a conversation between a patient and a psychotherapist by using a person’s responses to shape the computer’s replies. Weizenbaum was shocked to discover that many users were taking his program seriously and were opening their hearts to it. The experience prompted him to think philosophically about the implications of artificial intelligence, and, later, to become a critic of it.

In 1976, he authored Computer Power and Human Reason: From Judgment to Calculation, in which he displayed ambivalence toward computer technology and warned against giving machines the responsibility for making genuinely human choices. Specifically, Weizenbaum argued that it was not just wrong but dangerous and, in some cases, immoral to assume that computers would be able to do anything given enough processing power and clever programming.

“No other organism, and certainly no computer, can be made to confront genuine human problems in human terms,” he wrote.”

In order to circumvent historic failures of intelligent comprehension in computing logic, commercial providers online have stuck to “Recommended by…” algorithms built on aggregate or contextual navigation and consumption patterns. Rather than reinforcing our human approaches online, perhaps we have become more like the ants?

Although the keyword analysis provided a simpler, one-off demonstration, one should not discount the value of more complex and custom-built analyses. However, concerns about the processes and stages of a human analysis disappear once the reality of having to automate such functions kicks in, and there are trade-offs concerning subtlety. For BIS approaches to performance management, it is dangerous to assume that buying a machine solves the problems of human functionality at some cost. Without knowing what is under the hood, or at a bare minimum what the quirks are, there is a risk that complexity will create unknown risks to organisational governance.

—————–

Other blog posts in the Order From Chaos miniseries include:

  1. Order From Chaos: Performance Management and Social Media Analytics in the Age of Big Data;
  2. Abstraction, Perspective and Complexity: Social Media’s Canon of Proportions;
  3. Logic, Computation and (f*(k?) Meming: On2logi+k,ing;
  4. Transposition, Catalysts and Synthesis: Playing with iMacwells eDemon.

More than just eCoal, eSteam and ePower: The Modernizing Dynamics of Change Series

  1. Introduction;
  2. Economic requirements: Catalyst for Invention, Innovation and Progress
  3. Not Just Invention: Change Through The Desire to Innovate, Reimagine and Expand;
  4. New Tools, New Patterns, New Thoughts: the Great Dialogue;
  5. Nobody Will Notice The Slow Death of Dissemination, They Will Be Too Busy Listening;
  6. The frictions of competition and cooperation to strategic thinking;
  7. The Hot and Cold Wars: Relationships and conflicts between big and small, propriety and open source.

—————————

If you have any suggestions, relevant links or questions to add flavour to this series, then please join the dialogue below or contact me via Twitter.

What is Information Systems quality and who beholds it?

8 Feb

In this blog we review our tentative Fig 1 model and extend it to define and include other parameters of the IS measurement model, such as quality.

The word ‘quality’ is frequently used, yet its use is somehow ubiquitous, deep and enigmatic. Quality is like beauty: it is said to be ‘in the eye of the beholder’. Quality is a comparative attribute, a relative characteristic of things which may be observed to be good, bad or ugly; high, medium or low; big, average or small. Quality attaches to things, and is also relative in the way it is observed and described by different people. Therefore we have to determine what is observed, and understand its description and attributes in the eyes of the observers. In this case there is a product to be observed by people. We have therefore derived some fundamental factors in this theme: quality comprises three major requirements, the product, the process of observation, and the people observing. The fundamental questions are therefore: what is the product, how is it observed, and who are the people observing it?

Based on these questions we shall analyse the various elements of the products and their qualities, the process of observation, and those involved. We will see the information system as a kind of product which comprises other components. We shall also consider how it could be observed and why it should be observed, perhaps because there are various misunderstandings and doubts, a kind of murky darkness which requires a brighter light, hence the need for observation. There are also various people (the beholders) involved: perhaps the public, the government, the employers and the employees, ‘in whose eyes the beauty lies’. Finally we shall see how all these fragments, composites and sub-components can be arranged systematically into a structure or framework, resulting in a meaningful expression in thought and word from which further actions can be derived.

The overall question narrows down to the ratings, gradings or qualities of the various components of the information system, which combine with certain qualities of the business functions in the environment to produce a certain quality of output service. The output is therefore a measure of all the component qualities which combine to produce a measurable result.

The system quality measure is thus the resultant of all the measures of the system's various components: organisational quality + IS quality = output quality, with the output feeding back into the others. This is shown in Fig 2 below.
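Read literally, that informal relation can be given a toy numerical form as a weighted mean of component qualities on a 0 to 1 scale. The component names and weights below are purely illustrative assumptions, not part of the model itself.

```python
def aggregate(components):
    """Resultant quality = weighted mean of (weight, quality) components."""
    total = sum(w for w, _ in components.values())
    return sum(w * q for w, q in components.values()) / total

is_quality = aggregate({           # (weight, observed quality on 0-1)
    "information quality": (2, 0.8),
    "system quality":      (1, 0.6),
})
organisational_quality = aggregate({
    "process quality": (1, 0.7),
    "people quality":  (1, 0.9),
})
output_quality = aggregate({"IS": (1, is_quality),
                            "organisation": (1, organisational_quality)})
print(round(output_quality, 2))  # 0.77
```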

Fig 2. Quality components of the IS interactive elements

Fox or Hedgehog? A Guide to Developing a Framework for Decision Makers.

6 Feb

The core objective of the blogs on “IS Quality” is to construct a conceptual framework that can support the decision-making process of top-level executives in determining the quality of their information system services and outputs. The framework will allow management teams to steer themselves cognitively towards a successful decision. It may be used as a road map or guide when collaborating between groups, helping them abstain from purely affective decision making (Nichols, 2003). Eventually a final choice must be made when determining the quality of IS outputs; it cannot be based solely on a conceptual framework, though that is not to say the framework does not contribute. Developing a conceptual framework to assist a choice or decision can be a complex task. It is vital not to be narrow-minded, adapting the framework through the lens of a single principle, but rather to be open-minded and develop the framework from a variety of sources, increasing your knowledge bank. This goes back to the idea of two very different ways of thinking: the ‘hedgehog’, the narrow-minded person who channels everything through one single organising principle, and the ‘fox’, who knows many things and pursues various unrelated, even contradictory ends (Berlin, 1953). I think one must adopt a mindset more closely related to the fox when developing a framework.

In order to construct a beneficial conceptual framework, we must understand how it has to be developed. To understand the “how” we have to define what a framework is and does. So what do we use a framework for?

  • It makes it easier for top-level management to work with complex technologies, in this case determining the quality of an organisation's information services and outputs.
  • It ties together a group of discrete objects/components into something more useful. These components need to be identified and correlated when constructing the framework.

Having a general idea of what a framework does, we need to understand the specific classifications that define one. A framework can be described in terms of three concepts. The first is a wrapper, which is a way of repackaging a function to achieve a goal (here, determining the quality of IS systems and outputs).

 


Figure 1 Wrapper Provides Added Value

 

The second is an architecture, which is the design the framework incorporates. It is separate from the collection of wrappers and from the methodology it implements, and can be seen as the association between the objects. The beauty of the architecture is that it is reusable early in a project, but once implemented it may be hard to change.


Figure 2 The architecture forming an association between a collection of objects

 

The third is a methodology, which defines the interaction between architectures, components and objects. Where the architecture deals with the associations between objects, the methodology deals with the interactions between them (Clifton, 2003).


Figure 3 Methodology defines the interaction between architectures, Components and Objects.

A framework is not all “black and white”: it needs all three classifications in order to be implemented. Taking this on board, the real challenge lies in identifying the appropriate methodology that targets our goal, and in creating an architecture that associates the correct set of components or objects.

External factors must not be neglected when developing a framework, such as the alarming pace of a changing world. The continual advancement of IT has a direct effect on IS, which may leave some frameworks outdated and misaligned with the IS outputs they assess. Determining the quality of IS outputs using a misaligned framework may prove inaccurate as a result, so it is vital that the rapid advancement of information systems is taken into consideration when conducting the necessary research. A framework that is inaccurate, unaligned with its information system outputs, and taken only at face value may contribute to direct consequences down the line. This relates to Per Bak's sand-pile hypothesis, which shows that a small event can have a momentous consequence and that seemingly stable systems can behave in highly unpredictable ways: the framework that determines the quality of the IS outputs corresponds to the ‘small event’, and the overall implementation of the IS is the ‘stable system that can behave in unpredictable ways’.


Figure 4 Per Bak's sand pile hypothesis: if grains of sand were dropped on a pile one at a time, the pile, at some point, would enter a critical state in which another grain of sand could cause a large avalanche, or nothing at all.
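Bak's pile is also easy to simulate. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile model on a small grid: grains are dropped one at a time, any cell reaching four grains topples one grain onto each neighbour, and the resulting avalanches range from nothing at all to cascades spanning the grid. Grid size and thresholds are arbitrary choices for illustration.

```python
import random

SIZE = 11
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(grid):
    """Relax the pile; return the avalanche size (number of topplings)."""
    avalanche, unstable = 0, True
    while unstable:
        unstable = False
        for y in range(SIZE):
            for x in range(SIZE):
                if grid[y][x] >= 4:
                    grid[y][x] -= 4
                    avalanche += 1
                    unstable = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < SIZE and 0 <= nx < SIZE:
                            grid[ny][nx] += 1  # edge grains fall off the pile
    return avalanche

for grain in range(5000):
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1  # one grain
    size = topple(grid)
    if size > 50:  # most grains cause nothing; a few trigger large slides
        print(f"grain {grain}: avalanche of {size} topplings")
```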

 

Sources:

Nichols, Aidan (2003). Discovering Aquinas: An Introduction to His Life, Work, and Influence. Wm. B. Eerdmans Publishing.

Joshua Cooper Ramo (2010). The Age of the Unthinkable. Little, Brown & Company.

Isaiah Berlin (1953). The Hedgehog and The Fox. Weidenfeld & Nicolson.

Marc Clifton (2003). What is a Framework?. http://www.codeproject.com.

 
