Monday, January 27, 2020

Engine Failure Of Flight 191 Engineering Essay

The loss of the engine by itself should not have been enough to cause the accident.[12] Flight 191 would have been perfectly capable of returning to the airport using its remaining two engines, as the DC-10 is capable of staying airborne with any single engine out of operation. However, several other factors combined to cause a catastrophic loss of control. The engine separation had severed the hydraulic lines that controlled the aircraft's leading-edge wing slats (retractable devices that decrease a wing's stall speed during takeoff and landing). The damage to the lines caused a loss of hydraulic pressure, which in turn led to uncommanded retraction of the outboard slats in the left wing.[1] Unlike other aircraft designs, the DC-10 did not include a separate mechanism to lock the slats in place.[1] Investigators examined the flight data recorder (FDR) and conducted wind tunnel tests and flight simulator tests to understand the trajectory of Flight 191 after the engine detached and the slats retracted. These tests established that the damage to the wing leading edge and the retraction of the slats increased the stall speed of the left wing from 124 kt to 159 kt.[1] Comparison of the FDR data and the simulator tests showed that the pilots of Flight 191 had followed the procedure for engine failure at takeoff. This procedure called for the captain to go to V2 (the takeoff safety speed), which for Flight 191 was 153 kt, 6 kt below the new stall speed.[1] At the time the engine fell off the aircraft, Flight 191 was already travelling at 165 kt, safely above the stall speed. Thus, by slowing the aircraft to 153 kt in accordance with the emergency procedure, the pilots inadvertently induced the stall which proved fatal. Following this accident, McDonnell Douglas revised the procedure, advising that if the aircraft was already flying faster than V2 plus 10 kt, the pilots should maintain a margin of 10 kt above V2.[1]

The DC-10 incorporates two warning devices which might have alerted the pilots to the impending stall: the slat disagreement warning light, which should have illuminated after the uncommanded retraction of the slats, and the stall warning system (stick-shaker), which activates close to the stall speed. Unfortunately, both of these warning devices were powered by an electric generator driven by the no. 1 engine; following the loss of that engine, they both became inoperative.[1]

Engine separation

An FAA diagram of the DC-10 engine and pylon assembly indicating the failed aft pylon attach fitting.

From an examination of the detached engine, the NTSB concluded that the pylon attachment had been damaged before the crash.[1] Investigators looked at the plane's maintenance history and found that its most recent service had taken place eight weeks before the crash, when engine number one had been removed from the aircraft; during that procedure the pylon, the structure holding the engine onto the wing, had been damaged.
The original procedure called for removal of the engine prior to the removal of the engine pylon, but American Airlines had begun to use a procedure that saved approximately 200 man-hours per aircraft and, more importantly from a safety standpoint, reduced the number of disconnects (i.e., hydraulic and fuel lines, electrical cables, and wiring) from 72 to 27.[1] The new procedure involved mechanics removing the engine together with the pylon as one unit, rather than the engine and then the pylon. A large forklift was used to support the engine while it was being detached from the wing, a procedure that was found to be extremely difficult to execute successfully because of the difficulty of holding the engine assembly straight while it was being removed. The field service representative from the manufacturer, McDonnell Douglas, said it would not encourage this procedure due to the element of risk and had so advised American. However, McDonnell Douglas does not have the authority to either approve or disapprove the maintenance procedures of its customers.[1] The accident investigation also concluded that the design of the pylon and adjacent surfaces made the parts difficult to service and prone to damage by maintenance crews.

The NTSB reported that there were two different approaches to the one-step procedure: using an overhead hoist or using a forklift. United Airlines used a hoist; American and Continental Airlines used a forklift. According to the NTSB, all the cases in which impact damage was sustained and cracks were found involved the use of the forklift.[1] Under the procedure American used, if the forklift was in the wrong position, the engine would rock like a see-saw and jam against the pylon attachment points. The forklift operator was guided by hand and voice signals; the positioning had to be exact or damage could result. Management was aware of this. The modification to the aircraft involved in Flight 191 did not go smoothly. Engineers started to disconnect the engine and pylon, but changed shift halfway through. When work continued, the pylon was jammed on the wing and the forklift had to be repositioned. This was important evidence because, in order to disconnect the pylon from the wing, a bolt had to be removed, which allowed the flange to strike the clevis. The procedure used caused an indentation that damaged the clevis pin assembly and created an indentation in the housing of the self-aligning bearing, which in turn weakened the structure sufficiently to cause a small stress fracture. The fracture went unnoticed for several flights, getting worse with each flight. During Flight 191's takeoff, enough force was generated to finally cause the pylon to fail. At the point of rotation, the engine detached and flipped over the top of the wing.

Conclusion

The findings of the investigation by the National Transportation Safety Board (NTSB) were released on December 21, 1979:[1] The National Transportation Safety Board determines that the probable cause of this accident was the asymmetrical stall and the ensuing roll of the aircraft because of the uncommanded retraction of the left wing outboard leading edge slats and the loss of stall warning and slat disagreement indication systems resulting from maintenance-induced damage leading to the separation of the No. 1 engine and pylon assembly at a critical point during takeoff. The separation resulted from damage by improper maintenance procedures which led to failure of the pylon structure.
Contributing to the cause of the accident were the vulnerability of the design of the pylon attach points to maintenance damage; the vulnerability of the design of the leading edge slat system to the damage which produced asymmetry; deficiencies in Federal Aviation Administration surveillance and reporting systems which failed to detect and prevent the use of improper maintenance procedures; deficiencies in the practices and communications among the operators, the manufacturer, and the FAA which failed to determine and disseminate the particulars regarding previous maintenance damage incidents; and the intolerance of prescribed operational procedures to this unique emergency.

The NTSB determined that the damage to the left wing engine pylon had occurred during an earlier engine change at the American Airlines aircraft maintenance facility in Tulsa, Oklahoma on March 29 and 30, 1979.[1] The evidence came from the flange, a critical part of the pylon assembly.

Aftermath

First responders survey the Flight 191 crash site in Des Plaines, Illinois.

Problems with DC-10s were identified as a cause of the accident, including deficiencies in both design specifications and maintenance procedures which made damage very likely. In response to this incident, the United States government fined American Airlines $500,000 for improper maintenance procedures.[12] Two weeks after the accident, on June 6, the FAA ordered all DC-10s to be grounded until all problems were solved. The ban was lifted on July 13.[13] The crash of another DC-10 in November 1979, Air New Zealand Flight 901, would only add to the DC-10's negative reputation at the time; however, Flight 901 was caused by several human and environmental factors not related to the airworthiness of the DC-10, and the aircraft was later completely exonerated in that accident. Although McDonnell Douglas employees participated in an "I'm proud of the DC-10" campaign, the company's shares fell more than 20% following the crash of Flight 191. In 1997, the McDonnell Douglas company was taken over by its rival, Boeing. Despite the safety concerns, the DC-10 went on to outsell its closest competitor, the Lockheed L-1011 TriStar, by nearly 2 to 1. This was due to the L-1011's launch being delayed, the introduction of the DC-10-30 long-range model without a competing TriStar variant, and the DC-10 having a greater choice of engines (the L-1011 was only available with Rolls-Royce engines, while the DC-10 could be ordered with General Electric or Pratt & Whitney engines). The DC-10 program also benefited from obtaining a U.S. Air Force contract to develop a long-range refueller, which culminated in the KC-10 Extender. Lockheed had no such support for the TriStar, and halted production in 1982.

NTSB investigation

The crash of Flight 191 brought fierce criticism from the media because it was the fourth fatal accident involving a DC-10 at the time. Six hundred and twenty-two people had died in DC-10 accidents, including those aboard Flight 191. As the weather was perfect for flying and there was no indication that a flock of birds or another aircraft had caused the crash, the remains of engine no. 1 raised serious concerns about the safety of the DC-10. The separated engine was not the only concern, as the public wanted to know whether the detached engine was the only cause of the crash.
Investigators wondered whether a fire had possibly been the cause; this was backed up by testimony from air traffic controller Ed Rucker, who said he saw a flash from the wing, and it raised concerns that Flight 191 was the result of a terrorist attack. Sixty witnesses who saw the plane on the runway ruled out a bomb, as they all saw engine no. 1 swing forward then flip up and over the top of the wing, which pointed to structural failure as the cause. The findings of the investigation by the National Transportation Safety Board (NTSB) were released on December 21, 1979. They revealed the probable cause to be damage to the left wing engine pylon that occurred during an earlier engine change at American Airlines' aircraft maintenance facility in Tulsa, Oklahoma on March 29 and 30, 1979 (NTSB report: http://amelia.db.erau.edu/reports/ntsb/aar/AAR79-17.pdf). Evidence came from the flange, a critical part of the pylon assembly. It was revealed to have been damaged before the crash, and investigators looked at the plane's maintenance history and found it had been serviced eight weeks before the crash. The pylon was damaged due to an ill-thought-out engine removal procedure.

The original procedure called for removal of the engine prior to the removal of the engine pylon. To save time and costs, American Airlines, without the approval of McDonnell Douglas, had begun to use a faster procedure. They instructed their mechanics to remove the engine with the pylon together as one unit. A large forklift was used to support the engine while it was being detached from the wing. This procedure was extremely difficult to execute successfully, due to difficulties with holding the engine assembly straight while it was being removed. This method of engine-pylon removal was used to save man-hours and was encouraged despite differences with the manufacturer's specifications on how the procedure was supposed to be performed. The accident investigation also concluded that the design of the pylon and adjacent surfaces made the parts difficult to service and prone to damage by maintenance crews. According to the History Channel documentary The Crash of Flight 191 (DVD, http://store.aetv.com/html/product/index.jhtml?id=71451), United Airlines and Continental Airlines were also using a one-step procedure. After the accident, cracks were found in the bulkheads of DC-10s in both fleets.

The procedure used for maintenance did not proceed smoothly. If the forklift was in the wrong position, the engine would rock like a see-saw and jam against the pylon attachment points. The forklift operator was guided by hand and voice signals; the positioning had to be exact or damage could result, but management was unaware of this. The modification to the aircraft involved in Flight 191 did not go smoothly; engineers started to disconnect the engine and pylon but changed shift halfway through, and when work continued, the pylon was jammed on the wing and the forklift had to be repositioned. This was important evidence because, in order to disconnect the pylon from the wing, a bolt had to be removed, which allowed the flange to strike the clevis. The procedure used caused an indentation that damaged the clevis pin assembly and created an indentation in the housing of the self-aligning bearing, which in turn weakened the structure sufficiently to cause a small stress fracture.
The fracture went unnoticed for several flights, getting worse with each flight the plane made. During Flight 191's takeoff, enough force was generated to finally cause the pylon to fail. At the point of rotation, the engine detached and flipped over the top of the wing.

The loss of the engine by itself should not have been enough to cause the accident. During an interview on Seconds From Disaster, former NTSB investigator Michael Marx noted that there had been other incidents in which an engine fell off, yet the aircraft landed without further incident. Flight 191 would have been perfectly capable of returning to the airport using its remaining two engines, as the DC-10 is capable of staying airborne with any single engine out of operation. Unfortunately, several other factors combined to cause a catastrophic loss of control. The separation of the engine severed electrical wiring and hydraulic lines which were routed through the leading edge of the wing. The damage to the lines caused a loss of hydraulic pressure, which in turn led to uncommanded retraction of the outboard slats in the port wing. The DC-10 design included a back-up hydraulic system which should have been enough to keep the slats in place; however, both lines ran too close together, a design also used on the DC-9. There should have been enough fluid to keep the slats extended, so investigators wanted to know why they were never re-extended by the pilot. The answer came from the end of the recording on the CVR. The number 1 engine powered both the recorder and the slat warning system, which left the pilot and co-pilot with no way of knowing the position of the slats.

Investigators examined the FDR to see what occurred after the engine detached. The procedure called for the captain to go to V2, which he did perfectly, but investigators found that it said nothing about incidents where the speed was already above V2, as it was in this case. Therefore, the pilot had to reduce speed. Simulator tests were done to see if this made a difference; 13 pilots followed the procedure 70 times and not one was able to recover. The NTSB concluded that reducing speed when the slats are retracted may actually have made it more difficult for the pilot to recover control of the aircraft. When a DC-10 is about to stall, it gives two warnings: the first is the stick-shaker, which causes the yoke to vibrate, and the second is a warning light that flashes. These combined warnings should have alerted the pilots to increase speed immediately. American Airlines had chosen to have the stick-shaker on the pilot's side only, but the stick-shaker did not operate because it was powered by the missing left engine. In the event of an engine failure, it is possible for the flight engineer to switch the pilot's controls to a backup power supply. However, investigators determined that in order to access the necessary switch, the engineer would have had to unfasten his seat belt, stand up, and turn around. The DC-10 hit the ground with a bank angle of 112° and a nose-down attitude of 21°. The NTSB concluded that, given the circumstances, the pilots could not reasonably be blamed for the resulting accident. In his book Blind Trust (William Morrow, 1987, ISBN 0-688-05360-2), John J.
Nance argues that the 1978 Airline Deregulation Act caused havoc and induced cost-cutting in the industry, producing a serious erosion of the margin of safety for passengers. He argues that the industry reverted from an industry under partial surveillance to an industry running on the honor system.

Aftermath

Problems with DC-10s were identified as a cause of the accident, including deficiencies in both design specifications and maintenance procedures which made damage very likely. Since the crash happened just before a Western Airlines DC-10 crashed in Mexico City and five years after a Turkish Airlines DC-10 crashed near Paris, the FAA quickly ordered all DC-10s to be grounded until all problems were solved. The result of the problem-solving was an arguably more efficient and safer DC-10. The US government fined American Airlines $500,000 for improper maintenance procedures, but the insurance settlement for the replacement of the aircraft gave American Airlines $25,000,000 beyond the amount of the fine. Although the company's employees participated in an "I'm proud of the DC-10" campaign, McDonnell Douglas shares fell more than 20% following the crash of Flight 191. The DC-10 itself had a bad reputation, but ironically its accidents were often caused by poor maintenance procedures, not design flaws. In 1997 the McDonnell Douglas company was taken over by its rival, Boeing, which moved its corporate headquarters from Seattle to Chicago. Despite the safety concerns, the DC-10 went on to outsell its closest competitor, the Lockheed L-1011, by nearly 2 to 1. This was due to the L-1011's launch being delayed and the DC-10 having a greater choice of engines (the L-1011 was only available with Rolls-Royce engines, while the DC-10 could be ordered with General Electric or Pratt & Whitney engines).

Saturday, January 18, 2020

Electronic Literature as an Information System Essay

ABSTRACT

Electronic literature is a term that encompasses artistic texts produced for printed media which are consumed in electronic format, as well as text produced for electronic media that could not be printed without losing essential qualities. Some have argued that the essence of electronic literature is the use of multimedia, fragmentation, and/or non-linearity. Others focus on the role of computation and complex processing. "Cybertext" does not sufficiently describe these systems. In this paper we propose that works of electronic literature, understood as text (with possible inclusion of multimedia elements) designed to be consumed in bi- or multi-directional electronic media, are best understood as 3-tier (or n-tier) information systems. These tiers include data (the textual content), process (computational interactions) and presentation (on-screen rendering of the narrative). The interaction between these layers produces what is known as the work of electronic literature. This paradigm for electronic literature moves beyond the initial approaches, which either treated electronic literature as computerized versions of print literature or focused solely on one aspect of the system. In this paper, we build two basic arguments. On the one hand, we propose that the conception of electronic literature as an information system gets at the essence of electronic media, and we predict that this paradigm will become dominant in this field within the next few years. On the other hand, we propose that building information systems may also lead to a shift of emphasis from one-time artistic novelties to reusable systems. Demonstrating this approach, we read works from the _Electronic Literature Collection Volume 1_ (Jason Nelson and Emily Short) as well as newer works by Mez and the team gathered by Kate Pullinger and Chris Joseph. Glancing toward the future, we discuss the n-tier analysis of the Global Poetic System and the LA Flood Project.

INTRODUCTION

The fundamental attributes of digital narrative have been, so far, mostly faithful to the origin of electronic text: a set of linked episodes that contain hypermedia elements. Whether or not some features could be reproduced in printed media has been the subject of debate by opponents and proponents of digital narratives. However, as electronic media evolve, some features truly unique to digital narrative have appeared. For instance, significant effort has been invested in creating hypertexts responsive to the reader's actions by making links dynamic; additionally, there have been efforts to create systems capable of producing fiction, with varying degrees of success. Both approaches have in common that they grant greater autonomy to the computer, thus making it an active part of the literary exchange. The increasing complexity of these systems has directed critical attention to the novelty of the processes that produce the texts. As critics produce a flood of neologisms to classify these works, the field suffers from a lack of a shared language, instead of drawing on the well-articulated terminology of information systems already available in computer science. The set {Reader, Computer, Author} forms a system in which there is flow and manipulation of information, i.e. an _information system_. The interaction between the elements of an information system can be isolated in functional tiers, for instance one or many data tiers, processing tiers, and presentation tiers.
In general we will talk about n-tier information systems. We will expand this definition in the next section. In this system, a portion of the information produced (output) is taken, totally or partially, as input, i.e. there is a feedback loop, and therefore the process can be characterized as a cybernetic process. Of course, the field has already embraced the notion of the cybertext. The term cybertext was brought to the literary world's attention by Espen Aarseth (1997). His concept focuses on the organization of the text in order to analyze the influence of media as an integral part of literary dynamics. According to Aarseth, cybertext is not a genre in itself. In order to classify traditions, literary genres and aesthetic value, Aarseth argues, we should inspect texts at a much more local level. The concept of cybertext offers a way to expand the reach of literary studies to include phenomena that are perceived today as foreign or marginal. In Aarseth's work, cybertext denotes the general set of text machines which, operated by readers, yield different texts for reading. Aarseth (1997, p. 19) refuses to narrow this definition of cybertext to "such vague and unfocused terms such as digital text or electronic literature." For the course of this paper, we will use the phrase "electronic literature," as we are interested in those works that are markedly literary in that they resonate (at least on one level) through evocative linguistic content and engage with an existing literary corpus. While we find "cybertext" to be a useful concept, the taxonomies and schematics that attend this approach interfere with interdisciplinary discussions of electronic literature. Instead of using Aarseth's neologisms such as textons, scriptons and traversal functions, we will use widely accepted terminology from the field of computer science. This shift is important because the concepts introduced by Aarseth, which are relevant to the current discussion, can be perfectly mapped to concepts developed years earlier in computer science. While the neologisms introduced by Aarseth remain arcane, the terms used in computer science are pervasive. Although the term cybertext adds a sense of increasingly complex interactivity, its focus is primarily on the interaction between a user and a single art object. Such a framework, however, insufficiently describes the constitution of such an object. Within his treatise, Aarseth is compelled to create tables of attributes and taxonomies to map and classify each of these objects. What is needed is a framework for discussing how these systems operate and how that operation contributes to an overall literary experience. We want to make a clear distinction between this notion of cybertext as a reading process and a more thorough description of a work's infrastructure. Clearly, there are many ways in which the interaction between a reader and a piece of electronic literature can happen; for instance, a piece of electronic literature could be written in HTML or in Flash, yet present the same interaction to the reader. In this paper, we adapt the notion of n-tier information systems to provide a scaffolding for reading and interpreting works of electronic literature. The fact that the field of electronic literature is largely composed of cybertexts (in the sense described above) that require some sort of processing by the computer has made this processing a defining characteristic.
Critics and public approach new works of electronic literature with the expectation of finding creativity and innovation not only at the narrative level but also at the processing level; in many cases the newness of the latter has dominated other considerations.

NEW, NEWER, NEWEST MEDIA

Until now, electronic literature, or elit, has been focused on the new, leading to a constant drive to reinvent the wheel, the word, the image, the delivery system, and consequently reading itself. However, such an emphasis raises a number of questions. To what extent does the "novel" requirement of electronic literature (as the field is currently defined) de-emphasize a textual investment in exploring the (post)human condition ("the literary")? How does this emphasis on the "new" constrain the development of New Media both for authors and for prospective authors? Or how does such an emphasis put elit authors into an artistic arms race, taking on the aesthetics of the military-industrial complex that produces their tools? Literary essays that treat electronic literature focus on Flash movies, blogs, HTML pages, dynamically generated pages, conversation agents, computer games, and other software applications. A recent edition of Leonardo Almanac (AA.VV. 2006) offers several examples. Its critics/poets analyze the "information landscapes" of David Small, the text art experiments of Suguru Ishizaki (2003), Brian Kim Stefans' 11-minute Flash performance, and Philippe Bootz's matrix poetry program. Though not all the objects are new, what they share most of all is the novelty of their surface or process or text. These works bear little resemblance to one another, a definitive characteristic of electronic literature (dissimilarity); however, their inclusion under one rubric reflects the field's fetishization of the new. This addiction, mimicking that of the hard sciences it so admires, must constantly replace old forms and old systems with the latest system. Arguably, therefore, any piece of electronic literature may only be as interesting as its form or its novel use of the form. Moreover, such an emphasis shifts the critical attention from the content (what we will call data) primarily to its rendering (or presentation plus processes). Marie-Laure Ryan (2005) raised charges against such an aesthetic in her _dichtung-digital_ article. In this piece, she rails against a certain style of new media, net.art, elit art object that follows WYSINWYG (What you see is _NOT_ what you get), where the surface presents a text that is considered interesting only because of a more interesting process beneath the surface. This approach, according to Ryan, focuses on "the meta-property of algorithmic operation." For this aesthetic, "the art resides in the productive formula, and in the sophistication of the programming, rather than in the output itself" (Ryan). This means that literary or artistic value does not reside in what appears on the screen, but in the virtuoso programming performance that underlies the text. While Ryan goes too far in her dismissal of experimentation, her critique holds, inasmuch as electronic literary criticism that puts process über alles risks not only minimizing the textual to insignificance but also losing what should be one of elit's biggest goals: developing new forms for other authors to use and explore. Such an emphasis reveals a bias that has thus far dominated new media scholarship.
This same bias leads new media scholars away from literary venues for their discourse communities and instead to Boing Boing and Siggraph, sites where curiosity or commercial technological development dominate the discussions. It is also what spells instant obsolescence for many authorware forms. The person who uses authorware as it was intended is not the new media artist. It is the person who uses it in a new way or who reconfigures the software to do something unintended. This trend means that electronic literary artists will constantly be compelled to drive their works towards the new, even while it means a perpetual pruning of all prior authorware, cutting them off from the "literary" tree. (We see this same logic in commercial software production, where the 4.0 release reconfigures the interface and removes some of the functionality we had grown to love.) A disproportionate emphasis on the new overlooks the tremendous areas of growth in authorship on the stabilizing, if rudimentary, authoring systems. The tide of productivity (in terms of textual output of all levels of quality) is not from an endless stream of innovations but from people who are writing text in established authoring formats, from traditional print to blogs. It is through the use of stabilized and reusable information systems that the greater public is being attracted to consume and produce content through digital media. Blogging is the clearest example. This is not equivalent to saying that all blogging is literary, just as not all writing is; however, blogging has created a social practice of reading and writing in digital media, thus increasing the frequency at which literary pieces appear through that venue. This increased community activity would have been impossible if each blogger had had to develop their own authoring system. To help redistribute the scholarly priorities, we propose a reconsideration of electronic literature as an n-tier information system. The consequence of this shift will be twofold. First, it will allow us to treat content and processing independently, thus creating a clear distinction between works of literary merit and works of technological craftsmanship. While this distinction is at best problematic, considering the information system as a whole will move the analysis away from over-privileging processes. Second, we claim that this approach provides a unified framework with which all pieces of electronic literature can be studied. This paper is organized as follows: in Section 1 (Introduction) we describe the problem we intend to explore and the types of systems that will be described in this paper. Section 2 (Information Systems) explores the components of an information system and compares the approaches of different researchers in the field. Section 3 (Examples) demonstrates that the n-tier information system approach can be used to describe a multifarious array of pieces of electronic literature. Section 4 (Discussion) explores the conclusions drawn from this study and sets future directions.

INFORMATION SYSTEMS

Since electronic literature is mediated by a computer, it is clear that there must exist methods to enter information into the system, to process it, and to render an output for readers; that is to say, a piece of electronic literature can be considered as an _information system_. The term "information system" has different meanings.
For instance, in mathematics an "information system" is a basic knowledge-representation matrix comprised of attributes (columns) and objects (rows). In sociology, "information systems" are systems whose behavior is determined by the goals of individuals as well as by technology. In our context, "information system" will refer to a set of persons and machines organized to collect, store, transform, and represent data, a definition which coincides with the one widely accepted in computer science. The domain-specific twist comes when we specify that the data contains, but is not limited to, literary information. Information systems, due to their complexity, are usually built in layers. The earliest antecedent of a multi-layer approach to software architectures goes back to Trygve Reenskaug, who proposed in 1979, while visiting the Smalltalk group at Xerox PARC, a pattern known as Model-View-Controller (MVC) that intended to isolate the process layer from the presentation layer. This paradigm evolved during the next decade to give rise to multi-tier architectures, in which presentation, data and processes were isolated. In principle, it is possible to have multiple data tiers, multiple process tiers, and multiple presentation tiers. One of the most prominent paradigms to approach information systems in the field of computer science, and the one we deem most appropriate for electronic literature, is the 3-tier architecture (Eckerson, 1995). This paradigm indicates that processes of different categories should be encapsulated in three different layers:

1. Presentation Layer: The physical rendering of the narrative piece, for example, a sequence of physical pages or the on-screen presentation of the text.

2. Process Layer: The rules necessary to read a text. A reader of the Latin alphabet in printed narrative, for example, must cross the text from left to right, from top to bottom, and turn the page after the last word of the last line. In digital narrative, this layer could contain the rules programmed in a computer to build a text output.

3. Data Layer: Here lies the text itself. It is the set of words, images, video, etc., which form the narrative space.

In the proposed 3-tier model, feedback is not only possible, but also a _sine qua non_ condition for the literary exchange. It is the continuation of McLuhan's mantra: "the medium is the message." In digital narrative, the media acts on the message. The cycle of feedback in digital narrative is: (i) readers receive a piece of information and, based on it, they execute a new interaction with the system; (ii) the computer then takes that input and applies logic rules that have been programmed into it by the author; (iii) the computer takes content from the data layer and renders it to the reader in the presentation layer; (iv) step (i) is repeated again. Steps (i) through (iv) describe a complete cycle of feedback, thus the maximum realization of a cybertext. N-tier information systems have had, surprisingly, a relatively short penetration in the field of electronic literature. Aarseth (1997, p. 62) introduced a typology for his textonomy that maps perfectly onto a 3-tier system: scriptons ("strings as they appear to readers") correspond to the presentation layer, textons ("strings as they exist in the text") correspond to the data layer, and the traversal function ("the mechanism by which scriptons are revealed or generated from textons and presented to the user") corresponds to the process layer.
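To make the tier division and the feedback cycle concrete, the following minimal Python sketch (our own illustration, not drawn from any of the works discussed below) separates data, process, and presentation into distinct components; the lexia texts, link names, and the link-following rule are invented placeholders.

```python
class DataTier:
    """Data layer: the texts as they exist in storage (Aarseth's textons)."""
    def __init__(self):
        # Invented placeholder lexias: key -> (text, outgoing links)
        self.lexias = {
            "start": ("You stand at the first node.", ["garden", "library"]),
            "garden": ("Paths fork among the hedges.", ["start", "library"]),
            "library": ("Shelves of unread branches.", ["start"]),
        }

    def fetch(self, key):
        return self.lexias.get(key, self.lexias["start"])


class ProcessTier:
    """Process layer: the author-programmed rules that select what is shown."""
    def __init__(self, data):
        self.data = data
        self.current = "start"

    def step(self, reader_input):
        _, links = self.data.fetch(self.current)
        # Rule: follow a link only if the reader names one that exists here.
        if reader_input in links:
            self.current = reader_input
        return self.data.fetch(self.current)


class PresentationTier:
    """Presentation layer: the rendering shown to the reader (the scriptons)."""
    def render(self, text, links):
        print(text)
        print("links: " + ", ".join(links))


if __name__ == "__main__":
    data = DataTier()
    process = ProcessTier(data)
    screen = PresentationTier()
    screen.render(*data.fetch("start"))            # (iii) initial rendering
    for reader_input in ["garden", "library"]:     # (i) stand-in for live reader input
        text, links = process.step(reader_input)   # (ii) author-programmed logic
        screen.render(text, links)                 # (iii) render; the cycle repeats (iv)
```

In Aarseth's vocabulary, the dictionary held by the data tier contains the textons, the step method plays the role of the traversal function, and whatever the presentation tier renders are the scriptons; the loop at the bottom is the feedback cycle (i)-(iv) in miniature.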
These neologisms, while necessary if we study all forms of textuality, are unnecessary if we focus on electronic literature. The methods developed in computer science permeate constantly, and at an accelerating rate, the field of electronic literature, especially as artists create pieces of increasing complexity. Practitioners in the field of electronic literature will be better equipped to benefit from the advances in information technology if the knowledge acquired in both fields can be bridged; without a common terminology, attempts to generate dialogue are thwarted. The first reference that used computer science terminology applied to electronic literature appeared in an article by Gutierrez (2002), in which the three layers (data, logic and presentation) were clearly defined and proposed as a paradigm for electronic literature. Gutierrez (2004, 2006) explored in detail the logic (middle) layer, proposing algorithms to manage the processes needed to deliver literary content through electronic media. His proposal follows the paradigm proposed by Eckerson (1995) and Jacobson et al. (1999): the system is divided into (a) stationary topological components, (b) users, and (c) transient components (processes). The processes in the system are analyzed and represented using sequence diagrams to depict how the actions of the users cause movement and transformation of information across different topological components. The next reference belongs to Wardrip-Fruin (2006); he proposes not three, but seven components: (i) author, (ii) data, (iii) process, (iv) surface, (v) interaction, (vi) outside processes, and (vii) audiences. This vision corresponds to extensive research in diverse fields, and the interpretation is given from a literary perspective. Even though Wardrip-Fruin does not use the terminology already established in computer science, nor does he make a clear distinction between topology, actors and processes, his proposal is essentially equivalent to, though independent of, Gutierrez's model. In Wardrip-Fruin's model, author (i) and audience (vii) correspond to actors in the Unified Process (UP); process (iii) and interaction (v) correspond to the process layer in the 3-tier architecture (how the actors move information across layers and how it is modified); data (ii) maps directly onto the data layer in the 3-tier model; finally, surface (iv) corresponds to the presentation layer. The emergence of these information systems approaches marks the awareness that these new literary forms arise from the world of software and hence benefit from traditional computer science approaches to software. In _The Language of New Media_, Lev Manovich called for such analysis under the rubric of Software Studies. Applying the schematics of computer science to electronic literature allows critics to consider the complexities of that literature without falling prey to the tendency to colonize electronic literature with literary theory, as Espen Aarseth warned in _Cybertext_. Such a framework provides a terminology rather than the imposition of yet another taxonomy or set of metaphors that will always prove to be both helpful and glaringly insufficient. That is not to say that n-tier approaches fit works without conflict. In fact, some of the most fruitful readings come from the pieces that complicate the n-tier distinctions.
EXAMPLES

DREAMAPHAGE 1 & 2: REVISING OUR SYSTEMS

Jason Nelson's Dreamaphage (2003, 2004) demonstrates the ways in which the n-tier model can open up the complexities and ironies of works of electronic literature. Nelson is an auteur of interfaces, and in the first version of this piece he transforms the two-dimensional screen into a three-dimensional navigable space full of various planes. The interactor travels through these planes, encountering texts on them, documentation of the disease. It is as if we are traveling through the data structure of the story itself, as if the data has been brought to the surface; though in strict terms, the data is where it was always supposed to be. Each plane is an object, rendered in Flash on the fly by the processing of the navigation input and the production of vector graphics to fill the screen. However, Nelson's work distances us, alienates us from the visual metaphors that we have taken for the physical structures of data in the computer. Designers of operating systems work hard to naturalize our relationship to our information. Opening windows and shuffling folders become not a visual manifestation but the transparent glimpse of the structures themselves. Neal Stephenson has written very persuasively on the effect of replacing the command line interface with these illusions. The story (or data) behind the piece is the tale of a virus epidemic whose primary symptom is the constant repetition of a dream. Nelson writes of the virus's "drifting eyes." Ultimately the disease proves fatal, as patients go insane and then comatose. Here the piece is evocative of the repetitive lexias of classical electronic literature, information systems that lead the reader into the same texts as a natural component of traversing the narrative. Of course, the disease also describes the interface of the planes that the user travels through, one after the other, semi-transparent planes, dreamlike visions. This version of Dreamaphage was not the only one Nelson published. In 2004, Nelson published a second interface. Nelson writes of the piece, "Unfortunately the first version of Dreamaphage suffered from usability problems. The main interface was unwieldy (but pretty) and the books hard to find (plus the occasional computer crash)" ("Dreamaphage," _ELC I_). He reconceived of the piece in two dimensions to create a more stable interface. The second version is two-dimensional, and Nelson has also "added a few more extra bits and readjusted the medical reports." In the terms of n-tier, his changes primarily affected the interface and the data layers. Here is the artist of the interface facing the uncanny return of his own artistic creation in a world where information systems do not lie in the stable binding of a book but in a contingent state that is always dependent on the environments (operating systems) and frames (browsers) in which they circulate. As the user tries to find a grounding in the spaces and lost moments of the disease, Nelson himself attempts to build stability into that which is always shifting. However, due to a particular difference in the way that Firefox 2.0 renders Flash at the processing layer, interactors will discover that the "opening" page of the second version is squeezed into a fraction of their window, rather than expanding to fill the entire window. At this point, we are reminded of the work's epigram, "All other methods are errors. The words of these books, their dreams, contain the cure. But where is the pattern?
In sleeping the same dream came again. How long before I become another lost?" ("opening"). As we compare these two versions of the same information system, we see the same dream coming again. The first version haunts the second as we ask when it, too, will become one of the lost. Though Nelson himself seems to have an insatiable appetite for novel interfaces, his own artistic practices resonate well with the ethos of this article. At speaking engagements, he has made it a practice to bring his interfaces, his .fla (Flash source) files, for the attendees to take and use as they please. Nelson presents his information systems with a humble declaration that the audience may no doubt be able to find even more powerful uses for these interfaces.

GALATEA: NOVELTY RETURNS

Emily Short's ground-breaking work of interactive fiction offers another work that, like its namesake in the piece, opens up to this discussion when approached carefully. Galatea's presentation layer appears to be straightforward IF fare. The interactor is a critic, encountering Galatea, which appears to be a statue of a woman but then begins to move and talk. In this novel work of interactive fiction, the interactor will not find the traditional spatial navigation verbs (go, open, throw) to be productive, as the action focuses on one room. Likewise, other verbs prove unhelpful, as the user is encouraged in the help instructions to "talk" or "ask" about topics. In Short's piece, the navigational system of IF, as it was originally instantiated in Adventure, begins to mimic a conversational system driven by keywords, à la Joseph Weizenbaum's ELIZA. Spelunking through a cave is replaced with conversing through an array of conversational replies. Galatea does not always answer the same way. She has moods, or rather, your relationship with Galatea has levels of emotion. The logic layer proves to be more complex than the few verbs portend. The hunt is to figure out the combination that leads to more data. Galatea uses a novel process to put the user in the position of a safe cracker, trying to unlock the treasure of answers. Notice how novelty has re-emerged as a key attribute here. Could there be a second Galatea? Could someone write another story using Galatea's processes? Technically no, since the work was released under a No-Derivs Creative Commons license. However, in many ways, Galatea is a second, coming in the experimental wave of artistic revisions of interactive fiction that followed the demise of the commercially produced text adventures from Infocom and others. Written in Z-Machine format, Galatea is already reimagining an information system. It is a new work written in the context of Infocom's interactive fiction system. Short's work is admittedly novel in its processes, but the literary value of this work is not defined by its novelty. The data, the replies, the context they describe, the relationship they create are rich and full of literary allusions. Short has gone on to help others make their own Galatea, not only in her work to help develop the natural-language IF authoring system Inform 7 but also in the conversation libraries she has authored. In doing so, she moved into the work of other developers of authoring systems, such as the makers of chatbot systems. Richard S. Wallace developed one of the most popular of these (the A.I.M.L. bot), and his work demonstrates the power of creating and sharing authorware, even in the context of the tyranny of the novel. A.L.I.C.E.
is the baseline conversational system, which can be downloaded and customized. Downloading the basic, functioning A.L.I.C.E. chatbot as a foundation allows users to concentrate on editing recognizable inputs and systematic responses. Rather than worrying about how the system will respond to input, authors, or botmasters, can focus on creating what the system will say. To gain respect as a botmaster/author, one cannot merely modify an out-of-the-box A.L.I.C.E. The user should further customize or build from the ground up using AIML, Artificial Intelligence Markup Language, the purpose-built language created for Wallace's system. They must change the way the system operates, largely because the critical attention around chatbots follows the model of scientific innovation more than that of literary depth. However, according to Wallace, despite the critics' emphasis on innovations, users have been flocking to A.L.I.C.E., as tens of thousands of users have created chatbots using the system (Be Your Own Botmaster). AIML becomes an important test case: while users may access some elements of the system, because they are not changing its fundamentals, they can only make limited forays into the scientific/innovation chatbot discussions. Thus, while our n-tier model stresses the importance of creating authorware and understanding information systems, novelty still holds an important role in the development of electronic literature. Nonetheless, interactors can at least use their pre-existing literacies when they encounter an AIML bot or a work of interactive fiction written on a familiar platform.

LITERATRONICA

Literatronica is yet another example of an n-tier system. Its design was based entirely on the concept of division between presentation, process and data layers. Every interaction of the readers is stored in a centralized database and influences the subsequent response of the system to each reader's interactions. The presentation layer employs web pages on which the reader can access multiple books by multiple authors in multiple languages. The process layer is rather complex, since it uses a specialized artificial intelligence engine to adapt the book to each reader based upon his/her interaction, i.e. an adaptive system. The data layer is a relational database that stores not only the narrative, but also readers' interactions. Since there is a clear distinction between presentation, data and process, Literatronica is a 3-tier system that allows authors in multiple languages to focus on the business of literary creation.

MEZ'S CODE: THE SYSTEMS THAT DO NOT USE A COMPUTER[1]

As with many systematic critical approaches, the place where n-tier is most fruitful is where it produces or reveals contradictions. While some works of electronic literature lend themselves to clear divisions between parts of the information system, many works in electronic literature complicate that very distinction, as articulated in such essays as Rita Raley's code.surface||code.depth, in which she traces out codeworks that challenge distinctions between presentation and processing layers. In the works of Mez (Maryanne Breeze), she creates works written in what N. Katherine Hayles has called a creole of computer and human languages. Mez, and other codework authors, display the data layer on the presentation layer. One critical response is to point out that, as an information system, the presentation layer is the lines of code and the rest of the system is whatever medium is displaying her poem.
However, such an approach misses the very complexity of Mez's work. Indeed, Mez's work is often traditional static text that puts users in the role of the processor. The n-tier model illuminates her sleight of hand.

trEm[d]o[lls]r_ [by Mez] doll_tre[ru]mor[s] = var='msg' val='YourPleading'/> " TREMOR

Consider her short codework "trEm[d]o[lls]r_", published on her site and on the Critical Code Studies blog. It is a program that seems to describe (or self-define) the birth pangs of a new world. The work, written in what appears to be XML, cannot function by itself. It appears to assign a value to a variable named "doll_tre[ru]mor[s]", a Mez-ian (Mezozoic?) portmanteau of doll_tremors and rumors. This particular rumor being defined is called the fifth world, which calls up images of the Native American belief in a perfected world coming to replace our current fourth world. This belief appears most readily in the Hopi tribe of North America. A child of this fifth world is "fractures"; or, put another way, the tremor of the coming world brings with it fractures. The first, post 2 inscription contains polymers: a user set to "YourDollUserName," a "3rdperson" set to "Your3rdPerson," a location set to "YourSoddenSelf", and a "spikey" set to "YourSpiKeySelf." The user then becomes a molecule name within the fracture, a component of the fracture. These references to dolls and 3rd person seem to evoke the world of avatars. In virtual worlds, users have dolls. If the first fracture is located in the person's avatar, the second centers on communication from this person or user. Here the user is defined with "YourPolyannaUserName," and we are in the world of overreaching optimism, in the face of a "msg" or message of "YourPleading" and a "lastword." Combining these two fractures, we have a sodden and spikey self pleading and uttering a last word, presumably before the coming rupture into the fifth world. As with many codeworks, the presentation layer appears to be the data and logic layer. However, there is clearly another logic layer that makes these words appear on whatever interface the reader is using. Thus, the presentation layer is a deception, a challenge to the very division of layers, a revelation that hides. At the same time, we are compelled to execute the presented code by tracing out its logic. We must take the place of the compiler, with the understanding that the coding structures are also meant to launch our allusive subroutines, that part of our brain that is constantly listening for echoes and whispers. To produce that reading, we have had to execute the poem, at least step through it, acting as the processor. In the process of writing poetic works as data, she has swapped our traditional position vis-a-vis n-tier systems. Where traditional poetry establishes identity through I's, Mez has us identify with a system ready to process the user who is not ready for the fifth world, whatever that may bring. At the same time, universal or even mythical realities have been systematized or simulated. There is another layer of data that is missing, supplied by the user presumably. The poem leaves its tremors in a state of potential, waiting to operate in the context of a larger system and waiting for a user to supply the names, pleading, and lastwords. The codework means nothing to the computer.
This is not to make some sort of Searlean intervention about the inability of computers to comprehend, but to point out that Mez's code is not valid XML. Of course, Mez is not writing for computer validation but instead relies on the less systematic processing of humans, who rely on a far less rigorously specified language structure. Tremors fracture even the process of assigning some signified to these doll_tre[ru]mor[s]. Mez's poem plays upon the layers of n-tier, exposing them and inverting them. Through the close-reading tools of Critical Code Studies, we can get to her inference and innuendo. However, we should not miss the central irony of the work, the data that is hidden, the notable lack of processing performed by this piece. Mez has hailed us into the system, and our compliance begins the tremors that bring about this fifth world, even as it lies in potential. N-tier is not the fifth world of interpretation. However, it is a tremor of recognition that literacy in information systems offers a critical awareness crucial to these emerging forms of literature.

FUTURE PROJECTS

Two new projects give a sense of the electronic literature to come. The authors of this paper have been collaborating to create systems that answer Hayles' call at "The Future of Electronic Literature" in Maryland to create works that move beyond the desktop. The "Global Poetic System" and "The LA Flood Project" combine GPS, literary texts, and civic spaces to create art objects that rely on a complex relationship between various pieces of software and hardware, from mobile phones to PBX telephony to satellite technology. To fully discuss such works with the same approaches we apply to video games or Flash-based literary works is to miss this intricate interaction. However, n-tier provides a scalable framework for discussing the complex networking of systems to produce an artistic experience through software and hardware. These projects explore four types of interfaces (mobile phones, PDAs, desktop clients, and web applications) and three ways of reading (literary adaptive texts, literary classic texts, and texts constructed from the interaction of the community). The central piece that glues together literary information is geolocation. When the interactor in the world is one of the input systems, critics need a framework that can handle complexity. Because of the heterogeneity of platforms on which these systems run, there are multiple presentation layers (e.g. phone, laptop, etc.), multiple parallel processing layers, and multiple sources of information (e.g. weather, traffic, literary content, user routes, etc.), thus requiring an n-tier approach for analysis and implementation. It is clear that as electronic literature becomes more complex, knowledge of the n-tier delineations will be crucial not only to the reception but also to the production of such works. Since the interaction of heterogeneous systems is the state of our world, an n-tier approach will equip critics to open up these works in ways that help identify patterns and systems in our lives.

DISCUSSION

Let us bring down the great walls of neologisms. Let us pause for reflection in the race for newer new media. Let us collaborate on the n-tiers of information systems to create robust writing forms and the possibility of extending the audiences that are literate in these systems.
In this paper, we have described an analytical framework that is useful for dividing works of electronic literature into their forming elements, in a way that is coherent with advances in computer science and information technology, while at the same time using a language that can be easily adopted by the electronic literature community. This framework places creators, technicians, and critics on common ground. The field does not have a unified method for analyzing creative works; this void is a result, perhaps, of the conviction that works of electronic literature require an element of newness and a reinvention of paradigms with every new piece. Critics are always looking for innovation. However, the unrestrained celebration of the new or novel has led New Media to the aesthetic equivalent of an arms race. In this article we found elements common to all these pieces, bridging the gap between computer science and electronic literature in the hope of encouraging the production of sustainable new forms, be they "stand alone" or composed of a conglomeration of media forms, software, and users. As works of electronic literature continue to become more complex, bringing together more heterogeneous digital forms, the n-tier model will prove scalable and nuanced enough to help describe each layer of the work without forcing it into pre-set positions for the sake of theory.

We have to ask at this point: how does this framework handle exceptions and increasing complexity? It is interesting to consider how the proposed n-tier model might be adapted to cope with dynamic data, which seems to be the most complex case. Current literary works tend to process a fixed set of data, generated by the author; it is the mode of traversal that changes. Several software solutions may be used to address how this traversal is left in the hands of the user or mediated in some way by the author through the presentation system. The n-tier model provides a way of identifying three basic ingredients: the data to be traversed, the logic for deciding how to traverse them, and the presentation which conveys to the user the selected portions at the selected moments. In this way, such systems give the impression that the reader is shaping the literary work by his or her actions. Yet this, in the simple configuration, is just an illusion. In following the labyrinth set out by the author, readers may feel that their journey through it is always being built anew. But the labyrinth itself is already fixed.

Consider what would happen when these systems leave computer screens and move into the world of mobile devices and ubiquitous art, as Hayles predicted they would at the 2007 ELO conference. How could the system cope with changing data, with a labyrinth that rebuilds itself differently each time based on the path of the user? In this endeavor, we would be shifting increasing responsibility onto the machine running the work. The data need not be modified by the system itself. A simple initial approach might be to allow a subset of the data to be drawn from the real environment outside the literary work. This would introduce a measure of uncertainty into the set of possible situations that the user and the system will be faced with, and it would force the author to consider a much wider range of alternative situations and/or means of solving them.
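As a rough illustration of that separation under dynamic data, consider the following toy sketch made under our own assumptions: the fragment texts, the environment stub, and the rainfall threshold are all invented, and no existing work is being described.

# Toy n-tier sketch: a data tier whose contents are partly drawn from the
# environment, a logic tier that decides which fragment to traverse next,
# and a presentation tier that renders the choice.
from dataclasses import dataclass
import random

# --- data tier -------------------------------------------------------------
@dataclass
class Fragment:
    title: str
    text: str
    mood: str  # authorial tag consulted by the logic tier

FRAGMENTS = [
    Fragment("dry", "The river bed remembers the flood.", "calm"),
    Fragment("storm", "Sirens rewrite the street names.", "urgent"),
]

def read_environment() -> dict:
    """Stand-in for a real sensor or feed (GPS, weather, traffic)."""
    return {"rainfall_mm": random.uniform(0.0, 20.0)}

# --- logic tier ------------------------------------------------------------
def choose_fragment(env: dict) -> Fragment:
    """The labyrinth rebuilds itself from the environment reading."""
    mood = "urgent" if env["rainfall_mm"] > 10.0 else "calm"
    return next(f for f in FRAGMENTS if f.mood == mood)

# --- presentation tier ------------------------------------------------------
def render(fragment: Fragment, env: dict) -> None:
    print(f"[{fragment.title}] {fragment.text} (rainfall: {env['rainfall_mm']:.1f} mm)")

if __name__ == "__main__":
    environment = read_environment()
    render(choose_fragment(environment), environment)

Swapping the stub for a live feed changes only the data tier; the logic and presentation tiers can stay put or grow, which is exactly the layered pressure the following paragraphs describe.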
Interesting initiatives along these lines might be found in the various systems that combine literary material with real-world information by using, for example, mobile hand-held devices provided with means of geolocation and networking. With respect to the n-tier model, the changes introduced in the data layer would force additional changes in the other layers. The process layer would grow in complexity to acquire the ability to react to the different possible changes in the data layer. It could be possible for the process layer to absorb all the required changes while retaining a version of the presentation layer similar to the one used when dealing with static data. However, this may put a heavy load on the process layer, which may result in a slightly clumsy presentation. The clumsiness would be perceived by the reader as a slight imbalance between the dynamic content being presented and the static means used for presenting it. The breaking point would be reached when readers become aware that the material they are receiving is being presented inadequately, and it is apparent that there might have been better ways of presenting it. In these cases, a more complex presentation layer is also required. In all cases, to enable the computer to deal with the new type of situations would require the programmer to encode some means of appreciating the material that is being handled, and some means of automatically converting it into an adequate format for communicating it to the user. In these tasks, current research into knowledge representation, natural language understanding, and natural language generation may provide very interesting tools. But, again, these tools would exist in processing layers, and would be dependent on data layers, so the n-tier model would still apply.

The n-tier information system approach remains valid even in the most marginal cases. It promises to provide a unified framework of analysis for the field of electronic literature. Looking at electronic literature as an information system may signal another shift in disciplinary emphasis, one from a kind of high-theory humanities criticism towards something more like Human Computer Interface scholarship, which is, by its nature, highly pragmatic. Perhaps a better way would be to try to bring these two approaches closer together and to encourage dialogue between usability scientists and the agents of interpretation and meaning. Until this shift happens, the future of "new" media may be a developmental 404 error page.

REFERENCES

AA.VV. "New Media Poetry and Poetics Special." _Leonardo Almanac_, 14:5, September 2006. URL: «http://www.leoalmanac.org/journal/vol_14/lea_v14_n05-06/index.asp» First accessed 12/2006.
AARSETH, Espen J. _Cybertext: Perspectives on Ergodic Literature_. Johns Hopkins University Press, Baltimore, MD, 1997.
CALVI, Licia. "'Lector in rebus': The role of the reader and the characteristics of hyperreading." In _Proceedings of the Tenth ACM Conference on Hypertext and Hypermedia_, pp. 101-109. ACM Press, 1999.
COOVER, Robert. "Literary Hypertext: The Passing of the Golden Age of Hypertext." _Feed Magazine_. «http://www.feedmag.com/document/do291lofi.html» First accessed 4 August 2006.
ECKERSON, Wayne W. "Three Tier Client/Server Architecture: Achieving Scalability, Performance, and Efficiency in Client Server Applications." _Open Information Systems_ 10, 1. January 1995: 3(20).
GENETTE, Gerard. _Paratexts: Thresholds of Interpretations_. Cambridge University Press, New York, NY, 1997.
GUTIERREZ, Juan B. "Literatrónica – sobre cómo y porqué crear ficción para medios digitales." In _Proceedings of the 1er Congreso ONLINE del Observatorio para la CiberSociedad_, Barcelona. «http://cibersociedad.rediris.es/congreso/comms/g04gutierrez.htm» First accessed 01/2003.
GUTIERREZ, Juan B. "Literatrónica: Hipertexto Literario Adaptativo." In _Proceedings of the 2o Congreso del Observatorio para la Cibersociedad_, Barcelona, Spain. URL: «http://www.cibersociedad.net/congres2004/index_f.html» First accessed 11/2004.
GUTIERREZ, Juan B. "Literatronic: Use of Hamiltonian cycles to produce adaptivity in literary hypertext." In _Proceedings of The Bridges Conference: Mathematical Connections in Art, Music, and Science_, pages 215-222. Institute of Education, University of London, August 2006.
HAYLES, N. Katherine. "Deeper into the Machine: The Future of Electronic Literature." _Culture Machine_, Vol 5, 2003. «http://svr91.edns1.com/~culturem/index.php/cm/article/viewArticle/245/241» First accessed 09/2004.
HAYLES, N. Katherine. "Storytelling in the Digital Age: Narrative and Data." Digital Narratives conference, UCLA, 7 April 2005.
HILLNER, Matthias. "'Virtual Typography': Time Perception in Relation to Digital Communication." New Media Poetry and Poetics Special Issue, _Leonardo Electronic Almanac_ Vol 14, No. 5-6 (2006). «http://leoalmanac.org/journal/vol_14/lea_v14_n05-06/mengberg.asp» First accessed 25 Sep. 2006.
JACOBSON, I., BOOCH, G., RUMBAUGH, J. _The Unified Software Development Process_. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1999.
LANDOW, George P. _Hypertext 2.0_. Johns Hopkins University Press, Baltimore, MD, 1997.
MANOVICH, Lev. _The Language of New Media_. MIT, Cambridge, MA, 2002.
MARINO, Mark. "Critical Code Studies." _Electronic Book Review_, December 2006. «http://www.electronicbookreview.com/thread/electropoetics/codology» First accessed 12/2006.
MEZ. "trEm[d]o[lls]r_." _Critical Code Studies_, April 2008. «http://criticalcodestudies.com/wordpress/2008/04/28/_tremdollsr_/» First accessed 04/2008.
MONTFORT, Nick. "Cybertext." _Electronic Book Review_, January 2001. URL: «http://www.altx.com/EBR/ebr11/11mon» First accessed 06/2006.
NEA. _Reading At Risk: A Survey of Literary Reading in America_. National Endowment for the Arts, 1100 Pennsylvania Avenue, NW, Washington, DC 20506-0001, 2004.
PAJARES TOSCA, Susana and Jill Walker. "Selected Bibliography of Hypertext Criticism." _JoDI_. «http://jodi.tamu.edu/Articles/v03/i03/bibliography.html» First accessed October 24, 2006.
RALEY, Rita. "Code.surface||Code.depth." _Dichtung Digital_, 2006. «http://www.dichtung-digital.org/2006/1-Raley.htm» First accessed 08/2006.
RODRÍGUEZ, Jaime Alejandro. "Teoría, Práctica y Enseñanza del Hipertexto de Ficción: El Relato Digital." Pontificia Universidad Javeriana, Bogotá, Colombia, 2003. «http://www.javeriana.edu.co/relatodigital» First accessed 09/2003.
RYAN, Marie-Laure. "Narrative and the Split Condition of Digital Textuality." 1. 2005. URL: «http://www.brown.edu/Research/dichtung-digital/2005/1/Ryan/» First accessed 4 October 2006.
VERSHBOW, Ben. "Flight Paths: a Networked Novel." _IF: Future of the Book_, December 2007. «http://www.futureofthebook.org/blog/archives/2007/12/flight_paths_a_networked_novel.html» First accessed 01/2008.
WALLACE, Richard S. "Be Your Own Botmaster." Alice AI Foundation Inc., 2nd ed., 2004.
WARDRIP-FRUIN, Noah. _Expressive Processing: On Process-Intensive Literature and Digital Media_. Brown University, Providence, Rhode Island, May 2006.
WARDRIP-FRUIN, Noah. "Christopher Strachey: The First Digital Artist?" _Grand Text Auto_, 1 August 2005. «http://grandtextauto.gatech.edu/2005/08/01/christopher-strachey-first-digital-artist/» First accessed 3 September 2006.
ZWASS, Vladimir. _Foundations of Information Systems_. McGraw-Hill College, NY, 1997.

Friday, January 10, 2020

Drawing on What You Have Learned About City Road from the Making Social Lives DVD and Learning Companion 1, Describe Some of the Ways in Which Order Is Made and Repaired on the Street Which You Know

Drawing on what you have learned about City Road from the Making Social Lives DVD and Learning Companion 1, describe some of the ways in which order is made and repaired on the street which you know.

The purpose of this assignment is to compare and contrast the social order of City Road with a local road to demonstrate how order is made and is continually repaired over time. Abington Street has changed considerably over the past 50 years, from a quiet street of individually owned shops, such as Halford Jewellers and Benefit Footwear, with only one big convenience store. Today, you will find fewer individually owned shops and many more big-name high street shops, such as Primark and Tesco Express.

Firstly, I will compare Abington Street, and how it has changed, to City Road. Abington Street used to be the main thoroughfare to the town centre, with a tram running down the centre of the street creating "invisible order"; nowadays the street remains invisibly ordered but is used differently, as it is now completely pedestrianised. It is both a daytime shopping zone and a night-time social space. Meanwhile, City Road has changed from a simple country road to a busy town through road; however, similarities still exist with Abington Street, as both are now shopping and social spaces: City Road's 1960s car showrooms have been replaced by shops, cafes, takeaways and restaurants designed for a wide range of people.

With the changes in use have come changes in visible order. Abington Street's use changes throughout the day, in a very similar way to City Road. Shopping is the daytime occupation, with people eating and drinking in the cafes; visible order is demonstrated by adherence to society's rules as people queue in orderly fashion to purchase goods. Disruption of social order occasionally happens when, for example, shoplifting occurs; however, this is deterred by CCTV keeping invisible order and is repaired by the presence of security guards preventing further incidents. Social order is present at night in Abington Street as well as in City Road: at night the shops close, as the takeaways, pubs and clubs open. Young people then use Abington Street for entertainment rather than for shopping.

Although Abington Street brings in different types of people during the different times of day, different shops and venues are aimed at certain groups of people. The younger generation use fast food takeaways or go to socialise at the pubs at night, whilst in the DVD Jose Romas Surez, from Taste Bud cafe, talks about how mostly elderly customers regularly come back to his cafe during the day, because they feel secure in there; this could be to do with the types of people using the streets during the day – the elderly or school children (Making Social Lives on City Road DVD, 2009, scene 3). Most invisible social order at night in Abington Street is maintained by the use of CCTV, whilst visible night-time order is maintained by the presence of club bouncers, the police and local community support officers. Young people may see the presence of the police and community support workers as a deterrent to their having fun, whilst the shopkeepers rely on the police to maintain social order and protect their property from drunken or accidental bad behaviour.

Social order is also affected by the influx of big business; this is demonstrated in the DVD, which shows how the arrival of Tesco Express to City Road results in the closure of smaller businesses.
There are inequalities between the local shops and the big-name supermarkets on both streets. On Abington Street there are two very dominant stores: Tesco Express, again, as well as Marks and Spencer's. Both of these shops have a large variety of products on offer at competitive prices; these stores also have a wider range of goods for the convenience of the customers. In City Road, as in Abington Street, the smaller business owner reports adverse effects; an example of this is Colin Butwell (the newsagent), who described how he had been affected, saying that Tesco moving in close to his store has resulted in a reduction in trade. On the positive side, the opening of well-known chains can have a positive effect on remaking society and social ordering, as it can bring about more jobs and encourage people to use the area more, resulting in other places such as cafes and restaurants being busier. As Georgina Blakeley points out, some people gain from the reshaping and some people lose (Making Social Lives, 2009, Scene 5).

In conclusion, social order will always need to adapt, change and be continually restructured and repaired to meet society's requirements. The effects of a single change can have a massive impact on a street and the people that it involves. This can be seen in the effect that pedestrianisation had on Abington Street's main uses; it is also clear to see that the slightest change can have a massive impact on the social ordering of the area. This can be applied to any street in the world.

814 WORDS

Bibliography:
* Blakeley, G., Bromley, S., Clarke, J., Raghuram, P., Silva, E. and Taylor, S. (2009) Learning Companion 1, Introducing the Social Sciences, Milton Keynes, The Open University.
* 'The Street' (2009) Making Social Lives [DVD], Milton Keynes, The Open University.

What have you enjoyed about starting this module? I have enjoyed getting back into studying again after leaving college. I'm definitely looking forward to the rest of this course.

What have you found difficult? Time management is my main difficulty, juggling working and writing an assignment, but I'm sure I will find this easier as time goes on.

Thursday, January 2, 2020

Treating Abused Adolescents by Eliana Gil

Many people credit them with a maturity that is actually far beyond their years and are not cognizant of their innate vulnerability, and therefore treat them with barely veiled hostility and suspicion. The shocking fact is that many professionals have the same misguided notions about adolescents. Gil (1996) tells of the case of a professional who said, "'That girl knew what she wanted and knew how to get it', in regard to a case of incest in which the father gave his daughter expensive gifts" (p. 14). She calls for a change in this attitude and bias towards adolescent victims and points out the need for further studies and research to help them.

The second chapter is entitled Theories of Adolescent Development and has been compiled with Karren Campbell. In this chapter Gil (1996) stresses that "A thorough knowledge of theories of development is essential for those who work with adolescents, particularly when it is likely that the developmental process of many such adolescents has been disrupted or compromised by maltreatment" (p. 23). Drawing from her knowledge of the available research material and referring to the work done on the subject, she analyses the factors that make adolescents vulnerable to abuse and the symptoms of abuse that are most likely to be manifested. She traces the developmental stages of adolescents and explores the hurdles and difficulties that are likely to hamper progress as the adolescent makes the journey from childhood to adulthood.

In the third chapter Gil makes a distinction between current and cumulative abuse of adolescents. Current abuse of adolescents refers to those who suffer abuse only during their adolescent years, whereas cumulative abuse refers to those who have been exposed to sustained or intermittent abuse from their early childhood. With regard to the former, abuse is probably the result of an inability to cope with the complexities of the developmental stage, for the parent as well as the adolescent. It is usually the result of adolescents chafing under rigid parental authority as they seek to establish their independence and parents who are unwilling to relinquish control. The situation can be resolved by establishing better channels of communication, defining roles, and dealing with conflict and control issues. Cumulative abuse is more serious, as the damage is far more palpable, leaving the adolescent bruised and battered. Since the trauma is more severe, these victims are likely to have deteriorated mentally and physically.