Thursday, August 27, 2020

Cooling curve Essays

A cooling curve is a line graph representing the change in the state of matter of a substance, either from solid to liquid or from liquid to solid. On this graph, time is normally plotted on the x-axis and temperature on the y-axis. The particles in solid wax gradually gain more energy when heated and begin to move more quickly, and at a particular temperature the wax changes to a liquid. When it changes from liquid back to solid, the particles in the wax lose energy and move closer together until the wax becomes solid.

Variables: independent: temperature change; dependent: time taken for the wax to solidify; controlled: environmental conditions, amount of wax taken.

Apparatus: boiling tube; 250 cm3 beaker (± 25 cm3); thermometer, range 10 °C to 110 °C (± 0.5 °C); clamp stand; Bunsen burner; tripod stand; wire gauze; paraffin wax; water for the water bath (in the beaker); stopwatch (± 0.1 s).

Method (given):
1) Heat about half a beaker of water to about 90 °C.
2) Clamp a boiling tube with paraffin wax in it and place it in the hot water together with the thermometer.
3) Measure the temperature of the molten wax and start the stopwatch.
4) Record the temperature at suitable time intervals until all the wax has solidified completely.
5) Present your results suitably and interpret them in terms of the concepts you have learned so far.

Observations: (table of readings; the columns were amount of water (ml), initial temperature for Trials 1 and 2 (°C), water temperature (°C), time intervals (s), and temperature for Trials 1 and 2 (°C); the recorded values are not reproduced here.) As the wax was heated it gradually began to turn clear, until it melted completely at 62.5 °C and 63.0 °C in the two trials respectively, at which point it was completely transparent. As it began to freeze, at 48.0 °C and 50.5 °C respectively, it started to regain its original off-white colour, and it solidified completely at 50.0 °C and 53.0 °C respectively, at which point it was completely opaque.

Graphs: cooling curves for Trial 1 and Trial 2 (not reproduced here).

Analysis: From the graphs we can see that the temperature of the wax in the test tube keeps falling until a certain point, where it stays at a constant value, and afterwards continues to fall. The particles of wax, held together by weaker intermolecular forces, need to reach a particular temperature at which all the bonds can become stronger in order for the substance to change state. When heat is taken away from a liquid substance, the energy supplied to it drops, so the particles no longer have enough energy to collide with one another and move far apart. As a result, the particles come closer together and the intermolecular forces become stronger. The temperature required for bonds to form in a given substance is the same for all particles of that substance; consequently, the temperature of the wax stays constant throughout the period in which the bonds are being formed and the substance changes state. This shows that the wax used in the experiment was a pure substance, since a substance is pure only if its melting/freezing point is constant. The temperature fell quickly in the initial stages because few or no bonds could form at such a high temperature. Later, as the temperature drops, the number of bonds formed becomes correspondingly higher, and so the temperature falls more slowly.
When the entire substance has solidified, all the bonds have been formed, and the particles of the substance have very low energy, because of which they cannot collide and produce heat. Hence the temperature of the substance continues to fall.

Possible sources of error: The readings taken from the thermometer may not have been completely accurate throughout the experiment. As the experiment was conducted in an air-conditioned room, this could have affected the results. The times taken from the stopwatch would not be completely exact.

Conclusion: From this experiment we can conclude that the temperature required for bonds to form in a given substance is the same for all particles of that substance. This principle applies to every pure substance, since all particles in a pure substance are identical and therefore bond at the same temperature, as in this experiment. An impure substance would contain other substances, so its particles would bond at different temperatures and its boiling/melting point would not be constant.
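As a rough illustration of the shape described in the analysis above, the short Python sketch below plots a hypothetical cooling curve with a flat section at the freezing point. The readings in it are invented for illustration only; they are not the values recorded in this experiment.

# Illustrative sketch: plotting a cooling curve with a plateau at the freezing point.
# The readings below are hypothetical, not the values recorded in this experiment.
import matplotlib.pyplot as plt

time_s        = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300]
temperature_c = [63, 58, 54, 51, 50, 50, 50, 49, 46, 43, 40]   # flat section = wax solidifying

plt.plot(time_s, temperature_c, marker="o")
plt.xlabel("Time (s)")
plt.ylabel("Temperature (°C)")
plt.title("Cooling curve of paraffin wax (illustrative data)")
plt.grid(True)
plt.show()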

Saturday, August 22, 2020

Effect of Violence on Television and Video Games

Question: Find something in a film, television show, book, video game, current event, personal experience, etc., and write about how it relates to psychology.

Answer:

Introduction. Practically since the dawn of television, its impact has been a significant concern for parents, teachers and legislators. One of the most prominent concerns has been the use of violence in the media as well as in video games. According to Bryant and Vorderer (2013), media violence can desensitize people to violence in the real world, and for certain individuals watching violence in the media becomes enjoyable and does not produce the anxious arousal that would normally be expected after watching it. This essay examines the effect of violent films and video games on the psychology of viewers and players.

Effect of violence on television. In a violent television programme, viewers may come to identify with a violent character, and people are more likely to behave aggressively themselves when they identify with such a character. In the case of children, whatever they hear or see in the media tends to affect them in one way or another. Learning of aggressive attitudes and behaviour, desensitization or increased callousness towards victims of violence, and an inclination towards, or exaggeration of, the fear of becoming a victim of violence are some of the outcomes that take a psychological toll on viewers (Möller et al., 2012). While all these effects reflect adverse outcomes, it is the first, an increased propensity for violent behaviour, that is at the core of the public health concern about televised violence. The statistical relationship between children's exposure to violent portrayals and their subsequent aggressive conduct has been shown to be stronger than the relationship between asbestos exposure and the risk of laryngeal cancer; there is no debate in the medical, public health, and social science communities about the risk of harmful effects from children's exposure to media violence. Rather, there is strong consensus that exposure to media violence is a significant public health concern.

According to Ramos et al. (2013), most violence on television is either glamorized or sanitized. Glamorized refers to the fact that most of the violence on television is performed by the most glamorous characters in the show, and they do not suffer remorse, criticism, or penalty for the violent behaviour. More than a third of the violence is performed by attractive characters, and more than two thirds of the violence committed goes entirely unpunished. Sanitized refers to the failure of the portrayals to show realistic harm to the victims. Immediate pain and suffering is included in less than half of the violent scenes. More than a third of violent interactions depict unrealistically mild negative consequences for the victims, which greatly understates the severity of the injury such actions would cause in the real world (Gunter and Harrison, 2013). Thus most violent shows tend to be sanitized, showing minimal visible harm to the victims, yet they have psychological effects in other ways.
Constant exposure to violent depictions can create desensitization to violence, which means that viewers may more readily accept violence from others and carry out violent acts themselves. Overexposure to violence, particularly violence depicted in a realistic way, may lead viewers, most likely children, to believe that the world is a dangerous place and probably not a safe place to live in (Gentile and Bushman, 2012). Overestimating the likelihood of eventually becoming the victim of a violent act is very common and leads to increased undue stress, tension and anxiety. Shows like CSI: Crime Scene Investigation and House M.D. have been studied for their effects on viewers. The effects can include learning and entertainment without any negative influence, or they may take the form of psychological effects that include violent streaks and the stress of fearing victimization.

Effect of violent video games. Video games like Call of Duty and Grand Theft Auto are very violent, but because the study of video game violence is newer, research on it is only slowly gaining pace compared with research on media violence effects. The whole issue raises many questions, since the video game user is not just viewing the violence but is directly involved in it by playing. Around 90-95% of adolescents play video games, using a variety of sources to do so, and most video games contain an element of violence. This has given rise to meta-analytic reviews that show the negative effects of video games. According to Shaffer and Kipp (2013), exposure to violent video games is a significant risk factor that leads to increases in aggression, uncivil behaviour and aggressive cognition, and to decreases in empathy and prosocial behaviour. This is due to the increase in negative thoughts in individuals who play such video games more often. Perhaps the most serious problem with violent video games is that they discourage players from exercising self-control within them. For instance, in the Grand Theft Auto games players can steal cars and kill other characters, including police officers, and in such situations players are frequently rewarded rather than punished (Greitemeyer, Traut-Mattausch and Osswald, 2012). Few psychological traits appear to stay stable from childhood into adulthood, and fewer still have been shown to predict success or failure in one's life. Another problem that affects the psychology of young people is that players are more likely to identify with a violent character (Mentzoni et al., 2011). If the game is a first-person shooter, the player has the same visual perspective as the killer; if the game is third-person, the player controls the actions of the violent character from a more distant visual perspective. Violent games directly reward violent behaviour by awarding points to the player or advancing them to the next level, and in some games players receive verbal praise for the way they play.
These effects tend to go unnoticed because people do not understand the psychological impact of video game violence; they see only the surface process (Montag et al., 2012). There are good theoretical reasons to believe that violent video games are considerably more harmful than violent television programmes or films. Aggressive behaviour is multiply determined, and exposure to violent video games is one of its causes; even small effects (the effect size of violent video games is small to medium) can have very damaging consequences at the societal level when many people are exposed. Moreover, children are more likely to imitate the actions of characters with whom they can easily identify. In violent video games the players, acting as the characters, also get to choose the weapons, and the process requires active participation rather than merely passive observation. The repetition of the process increases learning among players, which tends to provide behavioural rehearsal for them (Greenfield, 2014).

Conclusion. This discussion has covered the effects of violent games and television programmes on players and viewers respectively. Viewing violent television programmes results in the storage of a perceptual and cognitive representation of the event in memory, which is then brought into the person's thinking. The discussion also notes that, within the population, children are the most likely to be affected psychologically when they play violent games or watch violent television shows. It also emerges that all of these factors tend to enter the mind and may surface as violent streaks or as the stress and tension of fearing victimization. For better understanding, popular video games such as Call of Duty and Grand Theft Auto and television shows such as CSI: Crime Scene Investigation and House M.D. are taken as case studies.

References
Greenfield, P. M. (2014). Mind and media: The effects of television, video games, and computers. Psychology Press.
Montag, C., Weber, B., Trautner, P., Newport, B., Markett, S., Walter, N. T., ... Reuter, M. (2012). Does excessive play of violent first-person-shooter video games dampen brain activity in response to emotional stimuli? Biological Psychology, 89(1), 107-111.
Mentzoni, R. A., Brunborg, G. S., Molde, H., Myrseth, H., Skouverøe, K. J. M., Hetland, J., & Pallesen, S. (2011). Problematic video game use: Estimated prevalence and associations with mental and physical health. Cyberpsychology, Behavior, and Social Networking, 14(10), 591-596.
Greitemeyer, T., Traut-Mattausch, E., & Osswald, S. (2012). How to ameliorate negative effects of violent video games on cooperation: Play it cooperatively in a team. Computers in Human Behavior, 28(4), 1465-1470.
Shaffer, D., & Kipp, K. (2013). Developmental psychology: Childhood and adolescence. Cengage Learning.
Gentile, D. A., & Bushman, B. J. (2012). Reassessing media violence effects using a risk and resilience approach to understanding aggression. Psychology of Popular Media Culture, 1(3), 138.
Gunter, B., & Harrison, J. (2013). Violence on television: An analysis of amount, nature, location and origin of violence. Routledge.
Ramos, R. A., Ferguson, C. J., Frailing, K., & Romero-Ramirez, M. (2013). Comfortably numb or just yet another movie? Media violence exposure does not reduce viewer empathy for victims of real violence among primarily Hispanic viewers. Psychology of Popular Media Culture, 2(1), 2.
Möller, I., Krahé, B., Busching, R., & Krause, C. (2012). Efficacy of an intervention to reduce the use of media violence and aggression: An experimental evaluation with adolescents in Germany.

Friday, August 21, 2020

HOW TO Generate A QR Code Using Goo.gl

Updated on 19/02/2020 | Author: Pradeep Kumar | Topic: Guides | Short URL: http://hbb.me/2ozeLyw

Hope you guys are aware of the public launch of Google's URL shortener service Goo.gl. Goo.gl has features like automatic spam detection, and it has had nearly 100% uptime since its initial launch. You can use your Google account to view URL history, traffic sources, referrers and visitor profiles for countries, browsers and platforms. I came to know that Goo.gl has an awesome Easter egg which instantly turns any link into a QR code. A QR code is a matrix barcode (or 2D code), readable by QR scanners, mobile phones with a camera, and smartphones. You can easily create one using a simple URL tweak. Earlier we were also able to generate QR codes using Goo.gl, but that was not the official way; now that Goo.gl is live, we can generate them easily.

How To Create A QR Code Using Goo.gl
Really simple: just go to Goo.gl and shorten the URL, then add .qr to the end of it. For example, the shortened URL for HellBound Bloggers is http://goo.gl/A15k. Now add .qr to the URL, i.e., http://goo.gl/A15k.qr, and it will display the QR code. Also check the Konami Easter egg code for Google Docs.

What's the big deal? What's the use of this? These QR codes are great for mobile use. Google has been using them a lot for things like easy installation of Android apps.
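If you want to grab the generated QR image programmatically instead of viewing it in the browser, a minimal Python sketch along these lines should work, assuming the Goo.gl .qr endpoint is still being served; the short link used is the HellBound Bloggers example from above, and the output filename is just a placeholder.

# Minimal sketch: download the QR code image that Goo.gl serves when ".qr" is
# appended to a shortened link. Assumes the goo.gl .qr endpoint is still live.
import requests

short_url = "http://goo.gl/A15k"       # shortened link from the example above
qr_url = short_url + ".qr"             # the Easter-egg QR endpoint

response = requests.get(qr_url, timeout=10)
response.raise_for_status()

with open("hellbound_bloggers_qr.png", "wb") as f:
    f.write(response.content)          # save the QR code image locally
print("Saved QR code from", qr_url)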

Thursday, May 14, 2020

VoIP Security Internet - Free Essay Example

End to end VoIP security

Introduction

User communications applications are in high demand in the Internet user community. Two classes of such applications are of great importance and attract interest from many Internet users: collaboration systems and VoIP communication systems. In the first category reside systems like ICQ, MSN Messenger and Yahoo! Messenger, while in the latter, systems like Skype and VoipBuster dominate among the public VoIP clients. In the architectural plane, collaboration systems form a distributed network where the participants communicate with each other and exchange information. The data are either routed from the source through a central server to the recipient, or the two clients communicate directly. The participants in such networks are both content providers and content requestors. On the other hand, the data communication path in VoIP systems is direct between the peers, without any involvement of the service network in the data exchange path, with some exceptions such as Skype's supernode communications. Data are carried over public Internet infrastructures like Ethernets, WiFi hotspots or wireless ad hoc networks. Security in these networks is a critical issue that has been addressed from several different perspectives in the past. In this assignment I focus on cryptographic security implementation in VoIP. Security is implemented dynamically, in cooperation by the two (or more) peers, with no prior arrangements and requirements such as out-of-band exchanged keys or shared secrets. Ease of use (simplicity), user friendliness (no special knowledge required from the user) and effectiveness (ensuring confidentiality and integrity of the applications), combined with minimal requirements on end-user devices, are the goals achieved by our approach. We leverage security of user communications, meeting all the above requirements, by enhancing the application's architecture with VoIPSec security elements.

Over the past few years, Voice over IP (VoIP) has become an attractive alternative to more traditional forms of telephony. Naturally, with its increasing popularity in daily communications, researchers are continually exploring ways to improve both the efficiency and security of this new communication technology. Unfortunately, while it is well understood that VoIP packets must be encrypted to ensure confidentiality, it has been shown that simply encrypting packets may not be sufficient from a privacy standpoint. For instance, we recently showed that when VoIP packets are first compressed with variable bit rate (VBR) encoding schemes to save bandwidth, and then encrypted with a length-preserving stream cipher to ensure confidentiality, it is possible to determine the language spoken in the encrypted conversation. As surprising as these findings may be, one might argue that learning the language of the speaker (e.g., Arabic) only affects privacy in a marginal way. If both endpoints of a VoIP call are known (for example, Mexico City and Madrid), then one might correctly conclude that the language of the conversation is Spanish, without performing any analysis of the traffic. In this work we show that the information leaked from the combination of VBR and length-preserving encryption is indeed far worse than previously thought.
VOIP

This assignment is about security; more specifically, about protecting one of your most precious assets, your privacy. We guard nothing more closely than our words. One of the most important decisions we make every day is what we will say and what we won't. But even then it's not only what we say, but also what someone else hears, and who that person is.

Voice over IP, the transmission of voice over traditional packet-switched IP networks, is one of the hottest trends in telecommunications. Although most computers can provide VoIP and many offer VoIP applications, the term "voice over IP" is typically associated with equipment that lets users dial telephone numbers and communicate with parties on the other end who have a VoIP system or a traditional analog telephone. (The sidebar, "Current voice-over-IP products," describes some of the products on the market today.) As with any new technology, VoIP introduces both opportunities and problems. It offers lower cost and greater flexibility for an enterprise but presents significant security challenges. Security administrators might assume that because digitized voice travels in packets, they can simply plug VoIP components into their already secured networks and get a stable and secure voice network. Unfortunately, many of the tools used to safeguard today's computer networks (firewalls, network address translation (NAT), and encryption) don't work as is in a VoIP network. Although most VoIP components have counterparts in data networks, VoIP's performance demands mean you must supplement ordinary network software and hardware with special VoIP components. Integrating a VoIP system into an already congested or overburdened network can be disastrous for a company's technology infrastructure. Anyone attempting to construct a VoIP network should therefore first study the procedure in great detail. To this end, we've outlined some of the challenges of introducing appropriate security measures for VoIP in an enterprise.

End-to-End Security

In this assignment I am going to describe end-to-end security and its design principle: one should not place mechanisms in the network if they can be placed in end nodes; thus, networks should provide general services rather than services designed to support specific applications. The design and implementation of the Internet followed this design principle well. The Internet was designed to be an application-agnostic datagram delivery service. The Internet of today isn't as pure an implementation of the end-to-end design principle as it once was, but it's enough of one that the collateral effects of the network not knowing what's running over it are becoming major problems, at least in the minds of some observers. Before I get to those perceived problems, I'd like to talk about what the end-to-end design principle has meant to the Internet, technical evolution, and society. The Internet doesn't care what you do: its job is "just to deliver the bits, stupid" (in the words of David Isenberg in his 1997 paper, "Rise of the Stupid Network" [2]). The bits could be part of an email message, a data file, a photograph, or a video, or they could be part of a denial-of-service attack, a malicious worm, a break-in attempt, or an illegally shared song. The Net doesn't care, and that is both its power and its threat. The Internet (and by this I mean the Arpanet, the NSFNet, and the networks of their successor commercial ISPs) wasn't designed to run the World Wide Web. The Internet wasn't designed to run Google Earth.
It was designed to support them even though they did not exist at the time the foundations of the Net were designed. It was designed to support them by being designed to transport data without caring what that data represented. At the very first, the design of TCP/IP wasn't so flexible. The initial design had TCP and IP within a single protocol, one that would only deliver data reliably to a destination. But it was realized that not all applications were best served by a protocol that could only deliver reliable data streams. In particular, timely delivery of information is more important than reliable delivery when trying to support interactive voice over a network, if adding reliability would, as it does, increase delay. TCP was split from IP so that the application running in an end node could determine for itself the level of reliability it needed. This split created the flexibility that is currently being used to deliver Skype's interactive voice service over the same network that CNN uses to deliver up-to-the-minute news headlines and the US Patent and Trademark Office uses to deliver copies of US patents. Thus the Internet design, based as it was on the end-to-end principle, became a generative facility. Unlike the traditional phone system, in which most new applications must be installed in the phone switches deep in the phone network, anyone could create new applications and run them over the Internet without getting permission from the organizations that run the parts of the Net. This ability was exploited with "irrational exuberance" [4] during the late-1990s Internet boom. But, in spite of the hundreds of billions of dollars lost by investors when the boom busted, the number of Internet users and Web sites, the amount of Internet traffic, and the value of Internet commerce have continued to rise, and the rate of new ideas for Internet-based services hasn't noticeably diminished.

Security and privacy in an end-to-end world

The "end-to-end arguments" paper used secure transmission of data as one reason that an end-to-end design was required. The paper points out that network-level or per-link encryption doesn't actually provide assurance that a file that arrives at a destination is the same as the file that was sent, or that the data went unobserved along the path from the source to the destination. The only way to ensure end-to-end data integrity and confidentiality is to use end-to-end encryption. Thus, security and privacy are the responsibilities of the end nodes. If you want to ensure that a file will be transferred without any corruption, your data-transfer application had better include an integrity check, and if you don't want to allow anyone along the way to see the data itself, your application had better encrypt it before transmitting it.
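As a concrete illustration of pushing confidentiality and integrity to the end nodes, the sketch below protects a message at the application layer before it ever touches the network, so only the receiving application can decrypt and verify it. AES-GCM from the Python cryptography package is used purely as an example of authenticated encryption; the function names are invented for illustration and nothing here is specific to any particular product.

# Illustrative end-to-end protection at the application layer: the sender encrypts
# and integrity-protects the payload before transmission; the receiver decrypts and
# verifies it. AES-GCM is one possible choice of authenticated encryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared by the two end nodes out of band

def protect(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # fresh nonce per message; never reuse
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext               # ciphertext carries an integrity tag

def unprotect(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)   # raises if tampered with

wire_data = protect(b"hello over an untrusted network")
assert unprotect(wire_data) == b"hello over an untrusted network"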
There are more aspects to security on a network than just data encryption. For example, to ensure that communication over the network is reliable, the network itself needs to be secure against attempts, purposeful or accidental, to disrupt its operation or redirect traffic away from its intended path. But the original Internet design didn't include protections against such attacks. Even if the network is working perfectly, you need to actually be talking to the server or person you think you are. But the Internet doesn't provide a way, at the network level, to assure the identities of its users or nodes. You also need to be sure that the message your computer receives isn't designed to exploit weaknesses in its software (such as worms or viruses) or in the ways that you use the Net. Protection against such things is the end system's responsibility. Note that there is little that can be done in the Net or in your end system to protect your privacy from threats such as the government demanding the records of your use of Net-based services such as Google, which collect information about your network usage. Many of today's observers assume that the lack of built-in protections against attacks and the lack of a secure way to identify users or nodes was a result of an environment of trust that prevailed when the original Internet design and protocols were developed. If you trusted the people on the Net, there was no need for special defensive functions. But a few people who were at the scene have told me that such protections were actively discouraged by the primary sponsor of the early Internet; that is to say, the US military wasn't all that interested in having good nonmilitary security, maybe because it might make its job harder in the future. Whatever the reason, the Internet wasn't designed to provide a secure environment that included protection against the malicious actions of those who would disrupt it or attack nodes or services provided over it. End-to-end security is not dead yet, but it is seriously threatened, at least at the network layer. NATs and firewalls interfere with some types of end-to-end encryption technology. ISPs could soon be required by regulations to, by default, filter the Web sites and perhaps the protocols that their customers can access. Other ISPs want to be able to limit the protocols that their customers can access so that the ISP can give service providers an incentive to pay for the customers' use of their lines; they don't see a way to pay for the network without this ability. The FBI has asked that it be able to review all new Internet services for tappability before they're deployed, and the FCC has hinted that it will support the request. If this were to happen, applications such as Skype that use end-to-end encryption could be outlawed as inconsistent with law-enforcement needs. Today, it's still easy to use end-to-end encryption as long as it's HTTPS, but that might be short-lived. It could soon reach the point that the use of end-to-end encryption, without which end-to-end security can't exist, will be seen as an antisocial act (as a US Justice Department official once told me). If that comes to be the case, end-to-end security will be truly dead, and we will all have to trust functions in the network that we have no way of knowing are on our side.

What is VoIP end to end security?

Achieving end-to-end security in a voice-over-IP (VoIP) session is a challenging task. VoIP session establishment involves a jumble of different protocols, all of which must inter-operate correctly and securely. Our objective in this paper is to present a structured analysis of protocol inter-operation in the VoIP stack, and to demonstrate how even a subtle mismatch between the assumptions made by a protocol at one layer about the protocol at another layer can lead to catastrophic security breaches, including complete removal of transport-layer encryption. The VoIP protocol stack is shown in figure 1. For the purposes of our analysis, we will divide it into four layers: signaling, session description, key exchange and secure media (data) transport.
This division is quite natural, since each layer is typically implemented by a separate protocol. Signaling is an application-layer (from the viewpoint of the underlying communication network) control mechanism used for creating, modifying and terminating VoIP sessions with one or more participants. Signaling protocols include the Session Initiation Protocol (SIP) [27], H.323 and MGCP. Session description protocols such as SDP [20] are used for initiating multimedia and other sessions, and often include key exchange as a sub-protocol. Key exchange protocols are intended to provide a cryptographically secure way of establishing secret session keys between two or more participants in an untrusted environment. This is the fundamental building block in secure session establishment. Security of the media transport layer, the layer in which the actual voice datagrams are transmitted, depends on the secrecy of session keys and authentication of session participants. Since the established key is typically used in a symmetric encryption scheme, key secrecy requires that nobody other than the legitimate session participants be able to distinguish it from a random bit-string. Authentication requires that, after the key exchange protocol successfully completes, the participants' respective views of sent and received messages must match (e.g., see the notion of matching conversations in [8]). Key exchange protocols for VoIP sessions include SDP's Security DEscriptions for Media Streams (SDES), Multimedia Internet KEYing (MIKEY) and ZRTP [31]. We will analyze all three in this paper. Secure media transport aims to provide confidentiality, message authentication and integrity, and replay protection to the media (data) stream. In the case of VoIP, this stream typically carries voice datagrams. Confidentiality means that the data under encryption is indistinguishable from random for anyone who does not have the key. Message authentication implies that if Alice receives a datagram apparently sent by Bob, then it was indeed sent by Bob. Data integrity implies that any modification of the data in transit will be detected.

We show how to cause the transport-layer SRTP protocol to repeat the keystream used for datagram encryption. This enables the attacker to obtain the xor of plaintext datagrams or even to completely decrypt them. The SRTP keystream is generated by using AES in a stream-cipher-like mode. The AES key is generated by applying a pseudo-random function (PRF) to the session key. SRTP, however, does not add any session-specific randomness to the PRF seed. Instead, SRTP assumes that the key exchange protocol, executed as part of RTP session establishment, will ensure that session keys never repeat. Unfortunately, S/MIME-protected SDES, which is one of the key exchange protocols that may be executed prior to SRTP, does not provide any replay protection. As we show, a network-based attacker can replay an old SDES key establishment message, which will cause SRTP to repeat the keystream that it used before, with devastating consequences. This attack is confirmed by our analysis of the libsrtp implementation.
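To see why a repeated keystream is so damaging, the toy Python sketch below reuses an AES-CTR keystream (same key and nonce) for two different plaintexts and shows that the xor of the ciphertexts equals the xor of the plaintexts. This illustrates the general principle described above; it is not the actual SRTP key derivation, and the sample plaintexts are invented.

# Toy illustration of why keystream reuse is devastating: encrypting two messages
# with the same AES-CTR key and nonce lets an eavesdropper compute the xor of the
# plaintexts directly from the ciphertexts.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def ctr_encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

p1 = b"voice datagram number one.."
p2 = b"voice datagram number two.."
c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)    # same key + nonce = same keystream

xor_of_ciphertexts = bytes(a ^ b for a, b in zip(c1, c2))
xor_of_plaintexts  = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_of_ciphertexts == xor_of_plaintexts   # the keystream cancels out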
We also show an attack on the ZRTP key exchange protocol that allows the attacker to convince ZRTP session participants that they have lost their shared secret. ZID values, which are used by ZRTP participants to retrieve previously established shared secrets, are not authenticated as part of ZRTP. Therefore, an attacker can initiate a session with some party A under the guise of another party B, with whom A previously established a shared secret. As part of session establishment, A is supposed to verify that B knows their shared secret. If the attacker deliberately chooses values that cause verification to fail, A will decide, following the ZRTP specification, that B has forgotten the shared secret. The ZRTP specification explicitly says that the protocol may proceed even if the set of shared secrets is empty, in which case the attacker ends up sharing a key with A, who thinks she shares this key with B. Even if the participants stop the protocol after losing their shared secrets, but are using VoIP devices without displays, they cannot confirm the computed key by voice and must stop communicating. In this case, the attack becomes a simple and effective denial of service. Our analysis of ZRTP is supported by the AVISPA formal analysis tool. We show several minor weaknesses and potential vulnerabilities to denial of service in other protocols. We also observe that the key derived as the result of MIKEY key exchange cannot be used in a standard cryptographic proof of key exchange security. Key secrecy requires that the key be indistinguishable from a random bitstring. In MIKEY, however, the joint Diffie-Hellman value derived as the result of the protocol is used directly as the key. Membership in many Diffie-Hellman groups is easily checkable, thus this value can be distinguished from a random bitstring. Moreover, even hashing the Diffie-Hellman value does not allow the formal proof of security to go through in this case, since the hash function does not take any random inputs apart from the Diffie-Hellman value and cannot be viewed as a randomness extractor in the proof. (This observation does not immediately lead to any attacks.) While we demonstrate several real, exploitable vulnerabilities in VoIP security protocols, our main contribution is to highlight the importance of analyzing protocols in context rather than in isolation. Specifications of VoIP protocols tend to be a mixture of informal prose and pseudocode, with some assumptions, especially those about the protocols operating at the other layers of the VoIP stack, left implicit and vague. Therefore, our study has important lessons for the design and analysis of security protocols in general. The rest of the paper is organized as follows. In section 2, we describe the protocols, focusing on SIP (signaling), SDES, ZRTP and MIKEY (key exchange), and SRTP (transport). In section 3, we describe the attacks and vulnerabilities that we discovered. Related work is in section 4; conclusions are in section 5.

VoIP security different from normal data network security

To understand why security for VoIP differs from data network security, we need to look at the unique constraints of transmitting voice over a packet network, as well as the characteristics shared by VoIP and data networks. Packet networks depend on many configurable parameters: IP and MAC (physical) addresses of voice terminals and addresses of routers and firewalls. VoIP networks add specialized software, such as call managers, to place and route calls. Many network parameters are established dynamically each time a network component is restarted or when a VoIP telephone is restarted or added to the network.
Because so many nodes in a VoIP network have dynamically configurable parameters, intruders have as wide an array of potentially vulnerable points to attack as they have with data networks. But VoIP systems have much stricter performance constraints than data networks, with significant implications for security.

Threats for VoIP

VoIP security threats include eavesdropping, denial of service, session hijacking, VoIP spam, and others. For preventing these threats there are several VoIP standard protocols, which we discuss in Section 3.

Eavesdropping. VoIP service built on Internet technology faces an eavesdropping threat, in which call-setup information and audio/voice communication contents are gathered illegally. Eavesdropping can be categorized broadly into eavesdropping in a LAN (Local Area Network) environment, eavesdropping in a WAN (Wide Area Network) environment, eavesdropping through a hacked PC (personal computer), and so on.

Denial of Service. Denial of service is an attack that makes it difficult for legitimate users to use a telecommunication service normally. It is also one of the threats that is hardest to counter. Since VoIP service is based on Internet technology, it too is exposed to denial of service. Denial of service against VoIP can be divided broadly into system resource exhaustion, circuit resource exhaustion, VoIP communication interruption/blocking, and so on.

Session Hijacking. Session hijacking is an attack in which the attacker takes over control of the communication session between users by spoofing a legitimate user and then interferes in their communication, as a kind of man-in-the-middle attack. Session hijacking in VoIP communication can be categorized broadly into INVITE session hijacking, SIP registration hijacking, and so on.

VoIP Spam. VoIP spam is an attack that interrupts communication and violates user privacy by sending voice advertisement messages, and it also renders a VMS (Voice Mailing System) powerless. It can be categorized into call spam, IM (instant messaging) spam, presence spam, and so on.

Security trade-offs

Trade-offs between convenience and security are routine in software, and VoIP is no exception. Most, if not all, VoIP components use integrated Web servers for configuration. Web interfaces can be attractive, easy to use, and inexpensive to produce because of the wide availability of good development tools. Unfortunately, most Web development tools focus on features and ease of use, with less attention paid to the security of the applications they help produce. Some VoIP device Web applications have weak or no access control, script vulnerabilities, and inadequate parameter validation, resulting in privacy and DoS vulnerabilities. Some VoIP phone Web servers use only HTTP basic authentication, meaning servers send authentication information without encryption, letting anyone with network access obtain valid user IDs and passwords. As VoIP gains popularity, we'll inevitably see more administrative Web applications with exploitable errors.

The encryption process can be unfavorable to QoS

Unfortunately, several factors, including packet size expansion, ciphering latency, and a lack of QoS urgency in the cryptographic engine, can cause an excessive amount of latency in VoIP packet delivery, leading to degraded voice quality. The encryption process can be detrimental to QoS, making cryptodevices severe bottlenecks in a VoIP network. Encryption latency is introduced at two points. First, encryption and decryption take a nontrivial amount of time.
VoIP's multitude of small packets exacerbates the encryption slowdown because most of the time consumed comes as per-packet overhead. One way to avoid this slowdown is to apply computationally simple encryption algorithms to the voice data before packetization. Although this improves throughput, the proprietary encryption algorithms used (fast Fourier-based encryption, chaos-bit encryption, and so on) aren't considered as secure as the Advanced Encryption Standard [16], which is included in many IPsec implementations. AES's combination of speed and security should handle the demanding needs of VoIP at both ends. Recent studies indicate that the greatest contributor to the encryption bottleneck occurs at the cryptoengine scheduler, which often delays VoIP packets as it processes larger data packets [17]. This problem stems from the fact that cryptoschedulers are usually first-in first-out (FIFO) queues, inadequate for supporting QoS requirements. If VoIP packets arrive at the encryption point when the queue already contains data packets, there's no way they can usurp the less time-urgent traffic. Some hardware manufacturers have proposed (and at least one has implemented) solutions for this, including QoS reordering of traffic just before it reaches the cryptoengine [18]. But this solution assumes that the cryptoengine's output is fast enough to avoid saturating the queue. Ideally, you'd want the cryptoengine to dynamically sort incoming traffic and force data traffic to wait for it to finish processing the VoIP packets, even if these packets arrive later. However, this solution adds considerable overhead to a process most implementers like to keep as light as possible. Another option is to use hardware-implemented AES encryption, which can improve throughput significantly.
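The per-packet overhead argument can be seen even in a crude software measurement. The sketch below is an illustrative micro-benchmark, not a statement about any particular cryptoengine: it encrypts the same total number of bytes once as many small voice-sized payloads and once as fewer large data-sized payloads, using AES-GCM from the Python cryptography package. Absolute timings depend entirely on the machine; only the ratio is of interest.

# Crude illustration of per-packet encryption overhead: same total bytes,
# encrypted as many small voice-sized packets versus fewer large data packets.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def encrypt_stream(packet_size: int, total_bytes: int) -> float:
    payload = os.urandom(packet_size)
    start = time.perf_counter()
    for _ in range(total_bytes // packet_size):
        aead.encrypt(os.urandom(12), payload, None)   # one call per packet
    return time.perf_counter() - start

total = 1_000_000                       # one megabyte either way
small = encrypt_stream(160, total)      # 160 bytes = one 20 ms G.711 voice frame
large = encrypt_stream(1_400, total)    # typical full-size data packets
print(f"small packets: {small:.3f} s, large packets: {large:.3f} s")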
Past the cryptoengine stage, the system can perform further QoS scheduling on the encrypted packets, provided they were encrypted using ToS preservation, which copies the original ToS bits into the new IPsec header. Virtual private network (VPN) tunneling of VoIP has also become popular recently, but the congestion and bottlenecks associated with encryption suggest that it might not always be scalable. Although researchers are making great strides in this area, the hardware and software necessary to ensure call quality for encrypted voice traffic might not be economically or architecturally viable for all enterprises considering the move to VoIP. Thus far, we've painted a fairly bleak picture of VoIP security. We have no easy one-size-fits-all solution to the issues we've discussed in this article. Decisions to use VPNs instead of ALG-like solutions, or SIP instead of H.323, must depend on the specific nature of both the current network and the VoIP network to be. The technical problems are solvable, however, and establishing a secure VoIP implementation is well worth the difficulty. To implement VoIP securely today, start with the following general guidelines, recognizing that practical considerations might require adjusting them:

Put voice and data on logically separate networks. You should use different subnets with separate RFC 1918 address blocks for voice and data traffic and separate DHCP servers to ease the incorporation of intrusion detection and VoIP firewall protection.

At the voice gateway, which interfaces with the PSTN, disallow H.323, SIP, or Media Gateway Control Protocol (MGCP) connections from the data network. As with any other critical network management component, use strong authentication and access control on the voice gateway system.

Choose a mechanism to allow VoIP traffic through firewalls. Various protocol-dependent and protocol-independent solutions exist, including ALGs for VoIP protocols and session border controllers. Stateful packet filters can track a connection's state, denying packets that aren't part of a properly originated call.

Use IPsec or Secure Shell (SSH) for all remote management and auditing access. If practical, avoid remote management entirely and do IP PBX access from a physically secure system.

Use IPsec tunneling when available instead of IPsec transport, because tunneling masks the source and destination IP addresses, securing communications against rudimentary traffic analysis (that is, determining who's making the calls). If performance is a problem, use encryption at the router or other gateway to allow IPsec tunneling. Because some VoIP endpoints aren't computationally powerful enough to perform encryption, placing this burden at a central point ensures the encryption of all VoIP traffic emanating from the enterprise network.

Newer IP phones provide AES encryption at reasonable cost. Look for IP phones that can load digitally (cryptographically) signed images to guarantee the integrity of the software loaded onto the IP phone.

Avoid softphone systems (see the sidebar) when security or privacy is a concern. In addition to violating the separation of voice and data, PC-based VoIP applications are vulnerable to the worms and viruses that are all too common on PCs.

Consider methods to harden VoIP platforms based on common operating systems such as Windows or Linux. Try, for example, disabling unnecessary services or using host-based intrusion detection methods. Be especially diligent about maintaining patches and current versions of VoIP software.
Evaluate costs for additional power backup systems that might be required to ensure continued operation during power outages.

Give special consideration to E-911 emergency services communications, because E-911 automatic location service is not always available with VoIP.

VoIP can be done securely, but the path isn't smooth. It will likely be several years before standards issues are settled and VoIP systems become mainstream. Until then, organizations must proceed cautiously and not assume that VoIP components are just more peripherals for the local network. Above all, it's important to keep in mind VoIP's unique requirements, acquiring the right hardware and software to meet the challenges of VoIP security.

Methods for VoIP end to end security

Voice over IP (VoIP) security is an area where security design patterns may prove exceedingly useful. Internet telephony, or VoIP, has grown in importance and has now passed the tipping point: in 2005, U.S. companies bought more VoIP phones than they ordered new POTS lines. However, with the powerful convergence of software-based VoIP, enabling new functionality to store, copy, combine with other data, and distribute over the Internet, also come security problems that need to be solved in standard ways in order to ensure interoperability. This is further complicated by the fact that various vendors competing for market share currently drive VoIP security. Given the importance of VoIP security, we are aware of only two other efforts on VoIP security design patterns: a book chapter and an unpublished M.S. thesis supervised by Eduardo Fernandez of Florida Atlantic University.

Figure 1. VoIP Infrastructure Vulnerabilities

NIST released a report on VoIP security in January 2005. This report elaborates on various aspects of securing VoIP and the impact of such measures on call performance. The report argues that VoIP performance and security are not seamlessly compatible; in certain areas they are orthogonal. We briefly review this report and group VoIP infrastructure threats into three categories, as depicted in Figure 1: (1) protocol, (2) implementation and (3) management.

Quality of Service (QoS) Issues

A VoIP call is susceptible to latency, jitter, and packet loss. ITU-T recommendation G.114 has established 150 ms as the upper limit on one-way latency for domestic calls. If Goode's latency budget is considered, very little time (about 29 ms) is left for encryption/decryption of voice traffic. QoS-unaware network elements such as routers, firewalls, and Network Address Translators (NATs) all contribute to jitter (non-uniform packet delays). Use of IPsec both contributes to jitter and reduces the effective bandwidth. VoIP is sensitive to packet loss, with tolerable loss rates of 1-3%; however, forward error correction schemes can reduce loss rates.

Signaling and Media Protocol Security

SIP (Session Initiation Protocol) (RFC 3261) and H.323 are the two competing protocols for VoIP signaling. H.323 is an ITU-T umbrella of protocols that supports secure RTP (SRTP) (RFC 3711) for securing media traffic and Multimedia Internet KEYing (MIKEY) (RFC 3830) for key exchange. SIP supports TLS and S/MIME for signaling message confidentiality and SRTP for media confidentiality.

Firewalls and NATs

RTP is assigned a dynamic port number, which presents a problem for firewall port management. A firewall has to be made aware of the ports on which the media will flow. Thus a stateful and application-aware firewall is necessary.
However, if a client is behind a NAT, call establishment signaling messages transmit an IP address and RTP port number that are not globally reachable. NAT traversal protocols like STUN (RFC 3489), TURN (RFC 2026), and ICE [14] are necessary to establish a globally routable address for media traffic. For protocols that send call setup messages via UDP, the intermediate signaling entity must send to the same address and port from which the request arrived.

Encryption and IPsec

IPsec is preferred for VoIP tunneling across the Internet; however, it is not without substantial overhead. When IPsec is used in tunnel mode, the VoIP payload-to-packet-size ratio for a payload of 40 bytes plus RTP/UDP headers drops to roughly 30% (a 40-byte payload behind 12-byte RTP, 8-byte UDP and 20-byte IP headers is already only half of the packet, and the ESP fields and outer IP header added in tunnel mode push the useful fraction down to roughly a third). The NIST solution to avoid queuing bottlenecks at routers due to encryption is to perform encryption/decryption solely at the endpoints. SRTP and MIKEY are specified for encrypting media traffic and establishing session keys, respectively.

Categorizing VoIP Threats

The threats faced by VoIP are similar to those faced by other applications, including unwanted communication (spam), privacy violations (unlawful intercept), impersonation (masquerading), theft of service, and denial of service. Table 1 groups these threats into protocol, implementation, and management categories.

Protocol: signaling and media confidentiality and integrity (end-to-end protection as well as hop-by-hop, since proxies might be malicious); configuration confidentiality and integrity (most VoIP devices are managed remotely); identity assertion (users are concerned about whether they are talking to the real entity as opposed to a phished entity); reputation management.
Implementation: buffer overflows; insecure bootstrapping.
Management: access control (protection against unauthorized access to VoIP servers and gateways); power failures.
Table 1. Categorizing VoIP Threats

Secure VoIP call

The Secure VoIP Call pattern hides the meaning of messages by performing encryption of calls in a VoIP environment.

Context: Two or more subscribers are participating in a voice call over a VoIP channel. In public IP networks such as the Internet, it is easy to capture packets meant for another user.

Problem: When making or receiving a call, the voice packets transported between the VoIP network nodes are exposed to interception. How do we prevent attackers from listening to a voice conversation when voice packets are intercepted on public IP networks? The solution is shaped by the following forces: packets sent on a public network are easy to intercept and to read or change, so we need a way to hide their contents; the protection method must be transparent to the users and easy to apply; and the protection method should not significantly affect the quality of the call.

Solution: To achieve confidentiality we use encryption and decryption of VoIP calls.

Implementation: In cases where performance is an important issue, symmetric algorithms are preferred. Such algorithms require the same cryptographic key (a shared secret key) on both sides of the channel. If the IPsec standard is used, the participants in a call (i.e., caller and callee) must agree in advance on a data encryption algorithm (e.g., DES, 3DES, AES) and on a shared secret key. The Internet Key Exchange (IKE) protocol is used for setting up the IPsec connections between terminal devices. The caller encrypts the voice call with the secret key and sends it to the remote user. The callee decrypts the voice call and recovers the original voice packets.
Additionally, the Secure Real-time Transport Protocol (SRTP) can be used for encrypting media traffic, and Multimedia Internet KEYing (MIKEY) for exchanging keying material in VoIP. If public key cryptography is used, the caller must obtain the callee's public key before establishing a connection. The caller encrypts the voice call with the callee's public key and sends it to her. The callee decrypts the voice call with her private key and recovers the original voice packets. The class diagram of Figure 4 shows a secure-channel communication in VoIP (adapted from the Cryptographic Metapattern). This model uses the Strategy pattern to indicate the choice of encryption algorithms. Both the Caller and Callee roles use the same set of algorithms, although they are shown only on the caller side.

Consequences: The advantages of this pattern include the following. Symmetric encryption approaches provide good confidentiality. Encryption is performed transparently to the users' activities. The need to provide separate VLANs for VoIP security could possibly be removed. It may no longer be necessary to use the IPsec tunneling that was previously required in the MAN/WAN.

Figure 4. Class Diagram for a VoIP Secure Channel

Possible disadvantages include: the quality of the call can be affected if encryption is not performed very carefully [Wal05], and the pattern is hard to scale because of the need for shared keys.
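As a rough sketch of the pattern just described, the Python code below models a Caller and a Callee that share a secret key and a pluggable cipher strategy, echoing the Strategy-based secure channel of Figure 4. The class and method names are invented for illustration, and AES-GCM from the cryptography package stands in for whichever algorithm the peers actually negotiate; the pattern itself does not mandate a particular cipher.

# Illustrative sketch of the Secure VoIP Call pattern with a Strategy-style cipher
# choice. Names and the AES-GCM choice are examples only.
import os
from dataclasses import dataclass
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

@dataclass
class AeadStrategy:
    """Wraps one authenticated-encryption algorithm behind a common interface."""
    cipher: object  # e.g. an AESGCM instance; another AEAD primitive could be swapped in

    def encrypt(self, frame: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + self.cipher.encrypt(nonce, frame, None)

    def decrypt(self, blob: bytes) -> bytes:
        return self.cipher.decrypt(blob[:12], blob[12:], None)

class Peer:
    """Caller and Callee both hold the shared key via the agreed cipher strategy."""
    def __init__(self, strategy: AeadStrategy):
        self.strategy = strategy
    def send(self, voice_frame: bytes) -> bytes:
        return self.strategy.encrypt(voice_frame)
    def receive(self, packet: bytes) -> bytes:
        return self.strategy.decrypt(packet)

shared_key = AESGCM.generate_key(bit_length=128)   # agreed out of band (e.g. via IKE or MIKEY)
strategy = AeadStrategy(AESGCM(shared_key))
caller, callee = Peer(strategy), Peer(strategy)

packet = caller.send(b"20 ms voice frame")
assert callee.receive(packet) == b"20 ms voice frame"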

Wednesday, May 6, 2020

Post Traumatic Stress Disorder (PTSD) - 783 Words

Freedom bears a heavy price. Many soldiers pay with their lives, while others relive the sights, sounds, and terror of combat. Post-Traumatic Stress Disorder (PTSD) affects thousands of American veterans and their families each year. Is PTSD simply a weakness, or is it an epidemic? Though historically the validity of PTSD was argued, the pain is real, and there is a diagnosis to prove it. Combat-related PTSD stems from witnessing the suffering and death of others and from exposure to destruction, personal danger, and injury. A heightened risk may also result from a soldier's specific role in the war. One study of Vietnam soldiers provides insight on potential risk factors and reveals an unexpected contributor to the development of PTSD. This study suggests that those who suffered the worst cases of PTSD had sustained stressful and traumatic childhood abuse. The study examined two groups of Vietnam soldiers in an attempt to determine a predisposition for PTSD. The first group consisted of Vietnam soldiers who sought treatment for PTSD; the second group of Vietnam soldiers did not have PTSD. Veterans who were diagnosed with PTSD were shown to have higher rates of childhood abuse than veterans who did not have PTSD (Bremner, Southwick, Johnson, Yehuda, and Charney, 1993). PTSD has not always had an official diagnosis. Prior to the official diagnosis, there was a large gap in psychiatry. Physicians and other members of society mistreated and regularly disregarded those who
â€Å"PTSD was first brought to public attention in relation to war veterans, but it can result from a variety of traumatic incidents, such as mugging, rape, torture, being kidnapped or held captive, child abuse, car accidents, train wrecks, plane crashes, bombings, or natural disasters such as floods or earthquakes(NIMH,2015).† PTSD is recognized as a psychobiological mentalRead MorePost Traumatic Stress Disorder ( Ptsd )1423 Words   |  6 Pages Mental diseases and disorders have been around since humans have been inhabiting earth. The field of science tasked with diagnosing and treating these disorders is something that is always evolving. One of the most prevalent disorders in our society but has only recently been acknowledged is Post Traumatic Stress Disorder (PTSD). Proper and professional diagnosis and definitions of PTSD was first introduced by the American Psychiatric Association(APA) in the third edition of the Diagnostic andRead MorePost Traumatic Stress Disorder ( Ptsd ) Essay1162 Words   |  5 PagesSocial Identity, Groups, and PTSD In 1980, Post Traumatic Stress Disorder (PTSD,) was officially categorized as a mental disorder even though after three decades it is still seen as controversial. The controversy is mainly founded around the relationship between post-traumatic stress (PTS) and politics. The author believes that a group level analysis will assist in understanding the contradictory positions in the debate of whether or not PTSD is a true disorder. The literature regarding this topicRead MorePost Traumatic Stress Disorder ( Ptsd ) Essay1550 Words   |  7 PagesPost Traumatic Stress Disorder â€Å"PTSD is a disorder that develops in certain people who have experienced a shocking, traumatic, or dangerous event† (National Institute of Mental Health). Post Traumatic Stress Disorder (PTSD) has always existed, PTSD was once considered a psychological condition of combat veterans who were â€Å"shocked† by and unable to face their experiences on the battlefield. Much of the general public and many mental health professionals doubted whether PTSD was a true disorder (NIMH)Read MorePost Traumatic Stress Disorder ( Ptsd )944 Words   |  4 Pageswith Post-traumatic stress disorder (PTSD Stats). Post-Traumatic Stress Disorder is a mental disorder common found in veterans who came back from war. We can express our appreciation to our veterans by creating more support programs, help them go back to what they enjoy the most, and let them know we view them as a human not a disgrace. According to the National Care of PTSD, a government created program, published an article and provides the basic definition and common symptoms of PTSD. Post-traumaticRead MorePost Traumatic Stress Disorder ( Ptsd )1780 Words   |  8 Pagesmental illnesses. One such illness is post-traumatic stress disorder (PTSD). Post-traumatic stress disorder is a mental illness that affects a person’s sympathetic nervous system response. A more common name for this response is the fight or flight response. In a person not affected by post-traumatic stress disorder this response activates only in times of great stress or life threatening situations. â€Å"If the fight or flight is successful, the traumatic stress will usually be released or dissipatedRead MorePost Traumatic Stress Disorder ( Ptsd )1444 Words   |  6 PagesYim – Human Stress 2 December 2014 PTSD in War Veterans Post Traumatic Stress Disorder (PTSD) is a condition that is fairly common with individuals that have experienced trauma, especially war veterans. 
One in five war veterans that have done service in the Iraq or Afghanistan war are diagnosed with PTSD. My group decided to focus on PTSD in war veterans because it is still a controversial part of stressful circumstances that needs further discussion. The lifetime prevalence of PTSD amongst war

Tuesday, May 5, 2020

Atomic Bomb Essay Thesis Example For Students

Atomic Bomb Essay Thesis Just before the beginning of World War II, Albert Einstein wrote a letter to President Franklin D. Roosevelt. Urged by Hungarian-born physicists Leo Szilard, Eugene Wigner, and Edward Teller, Einstein told Roosevelt about Nazi German efforts to purify Uranium-235, which might be used to build an atomic bomb. Shortly after that the United States Government began work on the Manhattan Project. The Manhattan Project was the code name for the United States effort to develop the atomic bomb before the Germans did. The first successful experiments in splitting a uranium atom had been carried out in the autumn of 1938 at the Kaiser Wilhelm Institute in Berlin (Groueff 9) just after Einstein wrote his letter. So the race was on. Major General Wilhelm D. Styer called the Manhattan Project "the most important job in the war . . . an all-out effort to build an atomic bomb" (Groueff 5). It turned out to be the biggest development in warfare and science's biggest development this century.

The most complicated issue to be addressed by the scientists working on the Manhattan Project was the production of ample amounts of enriched uranium to sustain a chain reaction (Outlaw 2). At the time, Uranium-235 was hard to extract. Of the uranium ore mined, only about 1/500th of it ended up as uranium metal. Of the uranium metal, the fissionable isotope of uranium (Uranium-235) is relatively rare, occurring in uranium at a ratio of 1 to 139 (Szasz 15). Separating the one part Uranium-235 from the 139 parts Uranium-238 proved to be a challenge. No ordinary chemical extraction could separate the two isotopes; only mechanical methods could effectively separate U-235 from U-238 (2). Scientists at Columbia University solved this difficult problem. A massive enrichment laboratory/plant (Outlaw 2) was built at Oak Ridge, Tennessee. H. C. Urey, his associates, and colleagues at Columbia University designed a system that worked on the principle of gaseous diffusion (2). After this process was completed, Ernest O. Lawrence (inventor of the cyclotron) at the University of California in Berkeley implemented a process involving magnetic separation of the two isotopes (2). Finally, a gas centrifuge was used to further separate the Uranium-235 from the Uranium-238. The Uranium-238 is forced to the bottom because it has more mass than the Uranium-235. In this manner Uranium-235 was enriched from its normal 0.7% to weapons grade of more than 90% (Grolier 5). This uranium was then transported to the Los Alamos, N. Mex., laboratory headed by J. Robert Oppenheimer (Grolier 5).

Oppenheimer was the major force behind the Manhattan Project. He literally ran the show and saw to it that all of the great minds working on this project made their brainstorms work. He oversaw the entire project from its conception to its completion (Outlaw 3). Once the purified uranium reached New Mexico, it was made into the components of a gun-type atomic weapon. Two pieces of U-235, individually not large enough to sustain a chain reaction, were brought together rapidly in a gun barrel to form a supercritical mass that exploded instantaneously (Grolier 5). It was originally nicknamed 'Thin Man' (after Roosevelt), but later renamed 'Little Boy' (for nobody) when technical changes shortened the proposed gun barrel (Szasz 25). The scientists were so confident that the gun-type atomic bomb would work that no test was conducted, and it was first employed in military action over Hiroshima, Japan, on Aug.
6, 1945 (Grolier 5). Before the Uranium-235 Little Boy bomb had been developed to the point of seeming assured of success (Grolier 5), another bomb was proposed. The Uranium-238 that had been earlier ruled out as an option was being looked at. It could capture a free neutron without fissioning and become Uranium-239. But the Uranium-239 thus produced is unstable (radioactive) and decays first to neptunium-239 and then to plutonium-239 (Grolier 5). This proved to be useful because the newly created plutonium-239 is fissionable and it can be separated from uranium by chemical techniques (6), which would be far simpler than the physical processes needed to separate the Uranium-235 from the Uranium-238. Once again the University of Chicago, under Enrico Fermi's direction, built the first reactor. Their mission had been successfully accomplished; however, they questioned whether the equilibrium in nature had been upset, as if humankind had become a threat to the world it inhabited (Outlaw 3). Oppenheimer was ecstatic about the success of the bomb, but quoted a fragment from the Bhagavad Gita: "I am become Death, the destroyer of worlds." Many people who were involved in the creation of the atomic bomb signed petitions against dropping the bomb.

The atomic bomb has been used twice in warfare. The uranium bomb nicknamed Little Boy, which weighed over 4.5 tons, was dropped over Hiroshima on August 6, 1945. At 0815 hours the bomb was dropped from the Enola Gay. It missed Ground Zero at 1,980 feet by only 600 feet. At 0816 hours, in the flash of an instant, 66,000 people were killed and 69,000 people were injured by a 10 kiloton atomic explosion (Outlaw 4). (See blast ranges diagram.) Nagasaki fell to the same treatment as Hiroshima on August 9, 1945. The plutonium bomb, Fat Man, was dropped on the city. It missed its intended target by over one and a half miles. Nagasaki's population dropped in one split second from 422,000 to 383,000: 39,000 were killed and over 25,000 were injured. That blast was less than 10 kilotons as well. Physicists who have studied the atomic explosions conclude that the bombs utilized only 0.1% of their respective explosive capabilities (Outlaw 4).

Controversy still exists about dropping the two atomic bombs on Japan. Arguments defending the Japanese claim the atomic bomb did not win the war in the Pacific; at best, it hastened Japanese acceptance of a defeat that was viewed as inevitable (Grolier 8). Other arguments state that the United States should have warned the Japanese, or that we should have invited them to a public demonstration. In retrospect, the U.S. use of the atomic bomb may have been the first act of the cold war (Grolier 8). On the other side, advocates claimed that the invasion of the Japanese islands could and would result in over one million military casualties plus the civilian losses based on previous invasions of Japanese occupied islands

Wednesday, April 8, 2020

In The Novel Robinson Crusoe, Defoe Illustrates The Contradictions Tha

In the novel Robinson Crusoe, Defoe illustrates the contradictions that drench the thoughts and actions of man as he strives to reach for God while also forced to face the realization that he must ensure his own safety in the world. Defoe uses Crusoe's journey on the canoe to exemplify how Crusoe lives in a world where he longs to please and obey God but must also contend with his instinct, which looks to himself for his savior. In the passage in which Crusoe finally reaches land after a tumultuous experience at sea in his canoe, Crusoe falls to his "knees and gave God Thanks for [his] Deliverance, resolving to lay aside all Thoughts of [his] Deliverance by [his] boat" (103). Crusoe strives for the Christian ideal, which is to look to God for assistance and not to humans, because inevitably God holds the only power to give and take life. Crusoe appears to achieve the ideal when he drops to his knees and thanks God for his safe return; however, through the use of the word 'resolve,' Defoe shows that the ideal relationship with God contradicts man's instinct. According to the Webster's English Dictionary, resolve means "1. to come to a definite or earnest decision about; determine. 14. to come to a determination; make up one's mind" (786). Since Crusoe must come to a determination in order to lay aside his thoughts that his boat saved him and not God, Defoe shows that Crusoe's first instinct is to look to his 'self' as his savior, and only after deliberation does he determine to call it providence that saves him. Although it may on the surface appear that Crusoe achieves this ideal relationship with God, in which he praises Him and does not look to himself as having the power to save his own life, Defoe shows that this is just a superficial reading, because Crusoe never mentions that he does believe that God saved him but only that he would not think about his boat as saving him. Crusoe says that he will claim that God's providence saved him that day and delivered him back to land after the life-threatening journey around the island, but the journey itself contradicts God's providence. The journey is an act against God. The purpose of the journey in this man-made canoe is for Crusoe to obtain more knowledge. He says earlier in the novel that "the Discoveries I made in that little Journey, [on land] made me very eager to see other Parts of the Coast and now I had a Boat, I thought of nothing but sailing round the island" (100). Like the world's first man, Crusoe almost loses his life through his longing for knowledge. Crusoe's island, like the Garden of Eden, provides for all of man's needs. Crusoe has complete dominion over this island and all of its inhabitants, an island that provides for his every need and holds no life-threatening beast to terrorize him, yet he still longs to know the other parts of the island. Like Adam, after his search for knowledge Crusoe must sleep on the hard cold ground, "being quite spent with the Labor and Fatigue of the Voyage" (103). Before the fall of man, labor was not a source of fatigue. Here Defoe reminds us that God punishes man who is not content with what God provides but instead opts to look to the self for more than what God offers. Earlier in the novel Crusoe says that he "had neither the Lust of Flesh, the Lust of Eye, or the Pride of Life" (94), but this journey proves that he does indeed have the Lust of the Eye, because he longs for knowledge.
Defoe uses Crusoe's journey on the canoe to show that Crusoe lives in a world riddled with contradictions, many of which he does not even know exist. In a one-sentence paragraph, Defoe illustrates this conflict between living life according to the Bible and giving in to instinct. Through his reference to the fall of man, he shows that man's nature is like Crusoe's, whose quest for knowledge and ingratitude for what God provides lead to punishment, which eventually leads man back to God.

Monday, March 9, 2020

Santee Sioux essays

Santee Sioux essays The late 1800s were a time of critical change for both white settlers and Native Americans. By the mid 1800s, the United States government was starting to put in place a series of treaties to try and keep the whites and Native Americans in separate territories. These treaties served to ensure the Indians a certain amount of land and therefore putting that land off limits to all settlers. By 1851 many treaties had been accepted and most were violated and eventually ignored. In September of 1851, the United States government enacted the Treaty of 1851 at Fort Laramie. The Treaty of Fort Laramie in 1851 was intended to ease tension between white settlers and Native Americans; however, when the settlers crossed lines guaranteed to the Santee Sioux and the government did not provide goods promised in the treaty, violence soon followed. Prior to the Treaty of 1851, the Santee Sioux was a self sufficient tribe. As white settlers started to take up the tribes land, the Santee began to stray from their typical woodland lifestyle. They began hunting with modern weapons and had many items of European cloth. Due to the rapid growth of the settlers moving into the Santees, and other Sioux tribes land, the United States Government sought out a way to please both sides and prevent or limit violence between the two groups. The Treaty of 1851 at Fort Laramie proposed many suggestions which were eventually agreed upon by both sides. The three major provisions of the treaty were an agreement on no violence between the two sides in the future, guaranteed land for each tribe which was not to be settled by the whites, and a government ration of money to each of the tribes for 10 years. Both sides signed the treaty in which the first article states that both sides agree to peace for all time to come. Due to the hostility between the sides, the chances of this being successful even for a short time was highly u...

Friday, February 21, 2020

A Freedom Fighter or Terrorist Essay Example | Topics and Well Written Essays - 2000 words

A Freedom Fighter or Terrorist - Essay Example
His step-father was a known sheep thief, and he taught the young Saddam his trade; however, this turned tragic when Saddam was caught in the act and forced to leave and stay with a faraway uncle, Khayrallah Tulfah. His uncle enrolled him in school and tried to do the same in the military, but the young Saddam was turned away due to bad grades. Out of anger and rage, he joined the radical faction Ba'ath. One of the Ba'ath's objectives as a radical faction was to topple the existing regime of King Faisal II and form a unitary Arabic state. In 1958, after a failed assassination attempt on General Abdul Qassim by the young Saddam Hussein, Saddam fled to Egypt, where he enrolled in school to pursue a degree in law. After a short stay in Egypt, back in Iraq the Ba'ath faction managed to gain control of the city of Baghdad in 1963, and General Qassim was publicly tortured and eventually put to death. The group called Saddam back home and gave him the position of head torturer at the "Palace of the End." However, this did not last for long, because the Nationalist soldiers deposed the Ba'ath and arrested several of its members in 1964; one of them was Saddam Hussein. General Ahmad Hassan al-Bakr, Saddam's cousin, advocated for Saddam and had him released. He later endorsed Saddam for the post of assistant secretary general of the Ba'ath Party and saw to it that he formed and made effective a covert police force, the Jihaz Haneen. In 1968, while Saddam was chief of internal security as well as the head of the Revolutionary Command Council, he participated heavily in the coup led by his cousin, and he was an undercover agent, always secretly searching for those opposing his cousin and intimidating them or even at times killing them. He became highly feared and popular for the next ten years, always playing the role of his cousin's right-hand man. In 1978 he swayed his now aging cousin to step down as ruler of Iraq, citing poor health, and later had the party heads choose an heir to the throne of Iraq. He outwitted everyone by having them choose him as the heir to the throne. During the first conference of the Revolutionary Command Council in 1979, Saddam's first order of business was to have all the people he thought might pose a threat to his rule executed. These included judges, military men, legal representatives, bankers, reporters, religious leaders, his fellow party members as well as scholars. In the span of one month he had ordered the putting to death of about 450 people he claimed were foes of his regime (Arnold, 2008). These became known as the Pyramid of Skulls, and to create more intimidation and fear among those who opposed him, he had some of these executions done in public and recorded, then later had these recordings delivered to rulers of the other Arab states. The Kurds, who were a marginalized group, had been calling for their sovereignty for as long as Iraq existed, and they faced a lot of oppression and persecution under the reign of Saddam. 1987 saw the total demolition of their villages and the killing of many of their people. It is reported that between 1983 and 1988, about 180,000 Kurds were killed by Saddam. These killings mainly took place in their oil-rich province of Kirkuk, because Saddam wanted the region to be owned by another tribe and not the Kurds, who had been in that place for decades. Saddam had a lot of his people at his mercy because of external enemies like Iran who were always ready to strike.
He assured them of their safety under his rule and used this strategy to control them while at the same time oppressing them. He increased his influence over his people by always making himself and his image a constant sign of intimidation. It is said that his portrait appeared in every learning institution, learning text

Wednesday, February 5, 2020

2.Housing association governance puts the interests of the Essay

2.Housing association governance puts the interests of the organisation above those of residents. Discuss - Essay Example
This community boasts that it has moved ahead of government by being able to enforce these restrictions through the contracts signed by the homeowners, restrictions that in the public sector might run afoul of constitutional restrictions and statutory limitations. This particular circumstance underscores a dimension of housing associations: whether their administration puts the interests of the organisation above those of the residents. The very name, housing association, is misleading. Housing associations or homeowners associations are often not associations in the sense of an expression of organic life as the center of communal perceptions and common activities, nor, in many cases, are they controlled by homeowners. Nathaniel Gates (1997) argued that the inhabitants of these communities, drawn from many different backgrounds, often have little in common, and that the developer has nearly absolute control over the community (p. 253). In a way, housing associations became some sort of private governments that could one day overshadow cities in significance. The rules of the housing associations, no less than those of cities, define political spheres. An association, like any community with the power to preserve and perpetuate itself, is coercive. This paper will argue that, because of this fact, an association must assert its own interests against the interests both of outsiders and, at times, of some of its own members. The basic idea for a home association with common ownership and upkeep of open space started with Leicester Square in London in 1734, which was governed by restrictive covenants. The legal concept was exported to the New World when, in 1831, Samuel Ruggles drained a swamp in New York City and built a block of homes around a park. This community was called Gramercy Park, and it included an eight-foot-high fence. Each resident had a key to a gate in the fence for access. The residents held title to the park in trust. These gated or so-called garden communities did not really become

Tuesday, January 28, 2020

Discussing The Process Of Operations Management Information Technology Essay

Discussing The Process Of Operations Management Information Technology Essay

Operations management is the process of managing the resources required for the production and delivery of products and services. Its basic objective is to increase the amount of value-added activity in each of the processes. The part of the company that is entrusted with this process is the operations function. As each and every organization produces products or services, they are all bound to have an operations function. The people responsible for managing the operations function's resources are known as operations managers; in different types of organizations they may be called by different names, for example store managers in a supermarket.

This report demonstrates three operations management techniques which helped the companies concerned improve their business activities and performance. They were:
Supply Chain Management (SCM)
Enterprise Resource Planning (ERP)
Total Quality Management (TQM)
In this report the case studies involving the implementation of the above three techniques are explained. The benefits experienced by the companies, and any changes they could have made to maximize them, are also mentioned.

Supply Chain Management (SCM):
Supply chain management consists of coordinating the material and information flows, and the finances, between the supplier, the manufacturer and the consumer. Its main objective is inventory reduction while ensuring that products are available when they are needed. Supply networks are made up of supplier-buyer relationships. Supply chain flows can be divided into three parts: the product flow, the information flow and the financial flow.

The behavior of the supply chain is dynamic; this is known as the bullwhip effect. It means that small changes at the customer end of the supply chain cause progressively larger changes towards the start of the supply chain. The bullwhip effect can be reduced by:
Efficiently distributing information by connecting all the operations to the demand source.
Establishing a similar decision-making process along the entire supply chain.
Increasing the efficiency of the operations by eliminating sources of waste.

Supplier quality management
The basic need of any company from its suppliers is the delivery of good-quality products at the right time. The best practice for improving the quality of the product is to improve the quality of the raw materials supplied by suppliers. SQM can be implemented by following the practices mentioned below:
Estimating and finding the cost incurred due to poor supplier quality: this is also known as COPQ (cost of poor quality). The COPQ can be calculated from the costs incurred due to scrapping and reworking, the shutdown of our assembly line due to defective products, the costs of shipping back the defective products to suppliers, and the warranty costs (a simple chargeback calculation along these lines is sketched at the end of this SCM section).
Developing a system for recovering our costs: in this, the suppliers are charged back for supplying poor-quality products. Here we must include not only the material costs but also the non-material costs, like packaging the defective products, their transportation costs, etc.
Auditing and rating of suppliers: this is the most effective way of checking whether our suppliers are conforming to our specified processes, quality systems, transportation requirements, etc. It can be done once every year for all of our suppliers.

The advantages gained by companies from having an effective supply chain are:
They have low maintenance and real costs.
They can make delivery of better value and have repeat business with customers.
They can easily remove waste from the process.
They get more turnover profit and can make long-term plans for the future.

Summary of Case study for Supplying fast fashion:
This case study best demonstrates how the garment retailing business is carried out in this day and age. It shows how different fashion ideas, which would not even have been considered by a retail store before, can become must-haves in a short period. The working of top retail brands like H&M, Zara and Benetton is explained. It explains the quicker-picker-upper fashion concept which has made Zara and H&M today's leading retailers.

Reasons: To achieve this science of fast fashion, product development cycles need to be compressed, which can be done through effective supply chain management. The retail brands believed that the only way they could keep stocks to a minimum while meeting customers' demand quickly and flexibly was through the integration of processes along the supply chain. All of the top 3 brands have their supply chain divided into four stages:
Designing of garments
Manufacturing
Distribution to retail outlets
Retail operations

Designing: Designing is of extreme importance in the retailing market. The stores are supposed to deliver high and fast fashion at an inexpensive cost, not a cheap one.
H&M designing: It is carried out by a team of 100 designers in Stockholm who operate with a group of 50 pattern designers, about 100 buyers and many budget controllers.
Zara designing: Here, the design idea is derived from three different sources: the designers, the market analysts and the buyers who order consignments from suppliers. The design stage for Zara is divided into three sections: women's, men's and children's clothes. The prototype designs are created and tried out by placing all three sources (designers, market analysts and buyers) in small workshops. The market analysts capture the new happenings in the fashion market as they are always in contact with the retail stores. In this way Zara's retail stores are at the start of the supply chain and not at the end.

Distribution: The investment costs incurred by Zara and Benetton in automating their warehouses are very high, as they want them to be near production centres which can store, pack and develop independent orders for the network of retail stores around the globe. Currently, Zara only uses half of its warehousing capacity, while Benetton is still exploring the possibility of using RFID tags for tracking garments. The distribution process at H&M is still routine. The stock management is carried out internally and the physical distribution is sub-contracted. In H&M, goods are routed from the production site to the retail site through a transit terminal in Hamburg owned by H&M itself. These goods are then inspected and stored in a centralized stock room known as the call-off warehouse, where stores are replenished at item level depending on what is sold.

Manufacturing: Manufacturing costs can be significantly reduced if labor costs are reduced. Therefore, most of Benetton's manufacturing operations are carried out in Asia, North Africa and Eastern Europe. The expensive technological operations are carried out in privately owned Benetton sites, whereas the labor-intensive operations are carried out by smaller contractors. The central Benetton facility decides how much and what is to be produced by the non-Italian networks. Similar is the case with H&M, 50% of whose production is carried out in Asia.
They have 21 offices all over the world which co-ordinate the supplier activities. The healthy relationship maintained between suppliers and production offices allows them to buy fabric early, with the actual cutting and dyeing of the garments carried out at a later stage. This helps in delaying the placement of an order, thereby reducing the risk of purchasing the wrong items. Zara owns much of its manufacturing capability, which it can manipulate to meet customer demands at short notice. Almost 50% of Zara's production, most of which involves the expensive operations (cutting, dyeing), is carried out in plants owned by Zara in Spain, and, similar to Benetton, the labor-intensive operations are sub-let to contractors. Volume flexibility is maintained by Zara and their sub-contractors using a single-shift system.

Retail: The way the shops work is broadly similar across all three brands. H&M stores have an average size of 1300 sq. m and are owned and managed by the company itself. Zara stores are smaller than H&M's, at around 800 sq. m. The Benetton shops, on the other hand, are 1300-1500 sq. m; previously these stores used to be run by third parties as small shops. Though there is a difference in size, they all have the similar aim of providing the customer with a comfortable atmosphere, to make them feel at home and allow them to buy what they want.

Benefits: The retail brands were able to achieve a high level of integration using supply chain management. This allows them to react quickly to customers' demand and to be flexible with the minimum stocks possible. They were able to find the correct balance between fashion, price and quality (each brand has its own sense of fashion, price and quality). The average supply lead time achieved was about 3 weeks to 6 months. Of these 3 brands, Zara has achieved the shortest lead time, called the catwalk-to-rack time, which is as small as 15 days. This means that not a single garment in a Zara store is older than two weeks. The designs are also not repeated and are produced in small batches. This ultimately forces customers to avoid delaying their purchase and to visit the store frequently, leading to increased profits. Effective supply chain management has helped each of the companies to become a global brand in its own way while keeping production costs low.

Suggestions: In the manufacturing stage, where the raw materials are supplied by different suppliers, a star rating system can be used. In this procedure, 3 stars are given to a supplier which has a previous record of success on the supply factors set by the company itself. On the contrary, no stars are given to those with whom the company has had problems before. We can do this as shown in the supplier calculation sketched below.
Value can be added to the retail industry by personalizing the needs of the customer and improving customer service by using RFID technology. RFID can also be used to automate the supply chain. This will help in labor reduction, which accounts for about 50-80% of distribution costs. The benefits gained by implementing RFID through the supply chain are summarized in a figure adapted from Tajima (2007) (figure not reproduced here). RFID can prove extremely useful in the retail industry for improving inventory efficiency and also as a theft-protection service (Michael K, McCathie L).
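As a rough illustration of the supplier calculation referred to above, the following Python sketch combines the 0-3 star rating idea with the cost-of-poor-quality chargeback described under supplier quality management. All field names, thresholds and figures are hypothetical assumptions made for the sake of the example, not values taken from the case studies.

from dataclasses import dataclass


@dataclass
class SupplierRecord:
    name: str
    defect_rate: float          # fraction of delivered parts rejected
    on_time_rate: float         # fraction of orders delivered on time
    scrap_cost: float           # cost of scrapping/reworking their defects
    line_stoppage_cost: float   # cost of assembly-line shutdowns they caused
    return_shipping_cost: float
    warranty_cost: float


def copq(s: SupplierRecord) -> float:
    """Cost of poor quality attributable to one supplier (basis for chargeback)."""
    return (s.scrap_cost + s.line_stoppage_cost
            + s.return_shipping_cost + s.warranty_cost)


def star_rating(s: SupplierRecord) -> int:
    """Illustrative 0-3 star scheme; the thresholds are assumptions,
    not the company's actual supply factors."""
    stars = 0
    if s.defect_rate <= 0.01:
        stars += 1
    if s.on_time_rate >= 0.95:
        stars += 1
    if copq(s) == 0:
        stars += 1
    return stars


if __name__ == "__main__":
    suppliers = [
        SupplierRecord("Fabric Co A", 0.005, 0.97, 0.0, 0.0, 0.0, 0.0),
        SupplierRecord("Trim Co B", 0.040, 0.88, 12_000.0, 5_000.0, 800.0, 1_500.0),
    ]
    for s in suppliers:
        print(f"{s.name}: {star_rating(s)} star(s), chargeback = ${copq(s):,.2f}")

In practice the rating factors and thresholds would be whatever the company defines in its supplier audit procedure; the point is only that the rating and the cost-recovery calculation can share the same supplier data.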
This is what will enable them to make crucial decisions like when the activity to be done, by whom is and what is the capacity required. ERP-Enterprise Resource Planning is used to perform all the above said activities and overcome the problems arriving from them. ERP is an intelligent IT system which integrates all parts and functions of an organization to plan and control activities required for operations management. This integration also allows for transparency among all parts of organization. ERP is a complex and difficult system to implement as it is basically designed to solve problems involving fragmentation of information. An ERP system almost forces everyone to forces everyone involved in an organization to change the way they used to do their job. ERP automates the processes involved in all business operations right from taking of an order from the customer, delivering it and the billing process. In ERP, when an order is been taken by the company representative, he has full information of the customer like his credit rating and also the companys. As ERP has a single database system the new order can be accessed by all the departments and when one department is finished with the order it is automatically transferred to the next department by ERP.The location of the order can also be easily tracked using a ERP system. The ERP system make the order processing faster and the customer receive them quickly with fewer errors. The key success factors required for successful ERP implementation are: Top level management support and commitment Clear vision and proper planning Having a Project champion A set time frame to deliver the implementation strategy Project and change management Proper IT infrastructure and selecting the right ERP package Maintaining healthy relationship with the Consultant Risk management The investment required for buying and implementing the software is very high. This can be proved by the survey conducted by the META group on the Total cost of ownership of ERP involving all costs like software, hardware and all staff cost. The highest Total Cost of Ownership was of about $300 million and lowest was $400,000.the average price for user of ERP for period of two years was a massive $53,320.which proves ERP is expensive. Some of the risks involved with ERP implementation are: The chances of under estimating the overall cost are high The training and the expertise level required from the consultants will be more than expected. Under estimation of effort and time required. The project scope can be difficult to control and the need for change management may not be recognized on time 3. A case study conducted at Rolls-Royce investigating the implementation of ERP (SAP): In this case study, the Introduction and background of company along with the changes observed by them after the implementation of ERP is discussed. The risks involved with implementation of SAP are also presented. Reasons For implementation: Rolls-Royce returned back to private sector in 1987 and started acquisition of companies which enabled them to consolidate their position in industrial power .The basic reason for implementing ERP was to sort out centralized database from old legacy MRP2 systems. Before ERP, Rolls-Royce had as many as 1500 systems which were developed internally. The operation of these legacy systems was expensive maintenance was equally difficult. They did not assist for accurate and good decision making as they were unable to provide accurate accessible data. 
The systems implemented were unable to communicate between individual sites. The tracking of work in progress between sites was inaccurate and was causing inventory problems. The legacy systems were unable to communicate directly with suppliers and customers (Yahaya Y, Gunasekaran A, Abthorpe M S 2004).

Rolls-Royce then decided to outsource its IT department to EDS. This allowed Rolls-Royce to concentrate on its main area of expertise, which was developing and manufacturing aero-engines. A team of specialists from EDS, the outsourcing firm, was assigned the task of implementing the ERP project, and this team also had SAP consultants in it. The team was well equipped with managers and staff who had crucial knowledge of the old legacy systems and an understanding of cross-functional business relationships (Yahaya Y, Gunasekaran A, Abthorpe M S 2004). Although the new systems implemented were better than most of the legacy systems, they were not as fully appreciated as the older ones. The team decided to overcome this problem by conducting seminars for the staff and explaining to them the improvements the new systems had made to the company. Training was given to about 10,000 people through demonstrations, meetings and presentations.

Strategy and direction: For the project, Rolls-Royce required over 100 personal computers, and the total cost incurred was two million pounds. The scope and the outline plan for the project were made, and a team was allocated to oversee the actual implementation process. After this a prototype was created and installed. This prototype model was based on the Rolls-Royce Allison model. In this stage the following activities were carried out:
Reviewing the preliminary design: here, the strategy for design and implementation was developed along with the BPM (business process model).
Development and customization of the vanilla prototype.
Reviewing of the implementation and the technical operations.
Development of the systems and their conversions before go-live.

The main implementation stage was divided into two waves. The first wave was delayed by 6 months because:
They wanted to provide more time for line organizations to prepare and clean up data.
They needed to allocate time for pilot testing and system development.
They had to overcome difficulties faced with SAP usage.

Wave one: The main objective here was to replace all the old systems. In wave one, new manufacturing systems like SFDMs were introduced. The pilot project of SAP marked the end of wave one.

Wave two: In this wave the engine assembly was implemented. This wave lasted for one year. The second wave ended when the new systems began showing positive results.

Enterprise Resource pilot: This pilot system was a small-scale system run for 3 months, and the number 4 shop was chosen as the facility, where transmissions and structures operations were the centre of attention for the company. The reason for this facility selection was its low production capacity of only 280 parts. The pilot system was used to demonstrate processes and procedures for the business. It was also responsible for defining the role of each member and demonstrating how to manage data transfers.

Go-live: The problems encountered on going live were:
There were user authorization issues, like passwords, etc.
The route cards were not there, due to which work on the shop floor was temporarily halted.
Transaction problems were observed, and they were corrected by comparing the old and new systems.

The main part of going live was difficult, as the sheer amount of data to be transferred from the legacy systems was huge.
To achieve this, the data was required to be kept in a state of stability for up to 10 weeks. The initial data, like the list of suppliers, was to be transferred, and if any errors occurred on the old system they were recorded and passed on to the new system. The MRP system was used to complete the go-live process, which took 2 weeks. After the go-live stage the old systems were kept in view-only mode, which allowed comparisons to be made between the new and old systems.

Project risks: This project involved all the departments and had its associated risks. The ERP implementation team tried to overcome these risks by maintaining a risk register. Some of the risks mentioned on the internet page of Rolls-Royce are:
Non-delivery or unavailability of the IT hardware for some reason.
Possibility of failure while loading the data or setting priorities on ERP.
The project would have a significant impact on the accounts of the company at the year end.

Benefits: The effectiveness of such a large-scale IT project is often difficult to assess. The benefits achieved from such a huge project require at least a year to become visible. The most immediate and important benefit achieved was being able to make a promise to the customer and deliver on it on time. This led to improved customer satisfaction and boosted their confidence, which would result in increased orders in the future. The ERP system improved relationships within the supply chain, where electronic communications were used to make transactions easier. The ERP system made communications between all the parts of the business absolutely clear. The Rolls-Royce management gained a better sense of control over a number of operations, which resulted in continuous improvements. It made it possible to have accurate and timely information about their customers, business partners and suppliers.

Suggestions: In the future the company can create a large data warehouse. In this, the data can be stored centrally and extracted from many different places, like historical and external databases. The data can be stored in a user-friendly format which can be accessed by non-external users. This data warehouse will help in collecting all the new data and merging it with the old data. The management of the EIS (Enterprise Information System) to check its sustainability can be done to maximize the benefits gained from an ERP system.
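To make the single shared-database order flow described at the start of this ERP section concrete, here is a minimal Python sketch in which every department reads and updates the same order record. The department names, route and in-memory "database" are illustrative assumptions only; they are not a description of SAP or of the Rolls-Royce configuration.

from datetime import datetime

ORDERS = {}   # the single shared "database": every department reads and writes these records

ROUTE = ["sales", "credit check", "production", "shipping", "billing"]   # assumed departments


def take_order(order_id, customer, credit_rating, item):
    """The representative taking the order can already see the customer's credit rating."""
    ORDERS[order_id] = {
        "customer": customer,
        "credit_rating": credit_rating,
        "item": item,
        "stage": 0,                                  # index into ROUTE
        "history": [("sales", datetime.now())],
    }


def complete_current_stage(order_id):
    """When one department finishes, the order moves automatically to the next one."""
    order = ORDERS[order_id]
    if order["stage"] < len(ROUTE) - 1:
        order["stage"] += 1
        order["history"].append((ROUTE[order["stage"]], datetime.now()))
    return ROUTE[order["stage"]]


def track_order(order_id):
    """Any department can see where the order currently is."""
    return ROUTE[ORDERS[order_id]["stage"]]


if __name__ == "__main__":
    take_order(1001, "Acme Airlines", "AA", "turbine blade set")
    while track_order(1001) != "billing":
        print("order 1001 now at:", complete_current_stage(1001))

A real ERP package does this with shared database tables and workflow configuration rather than Python dictionaries, but the property the text relies on is the same: one record, visible to every department, handed on automatically as each step completes.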
Scenario before Implementation: The main customer of the plant was HP(Hewlett-Packard) and they were the one who pointed out the problems they were facing from the paper supplied to them. They were unable to curl the coated paper at low humidity conditions. This problem was noticed by HP personnel as there was no formal complaint made by Hps customers. The plant then hired a team which resolved the problem in the next 7-8 months. The process started producing in acceptable limits but this was due to the fact that they were only concerned about shipping the product within the specification limits. They had a culture which did not care about how close they were to the specification limits and eventually not be able to meet them. This resulted in the plant making loss of $2 million in a year even though they had buoyant sales. This was mainly due to lower productivity and high scrap and rework. To overcome them the management team hastily made a number of changes like increasing the speed of operation line to improve productivity. But still the process charts given by HP showed that the plant was not capable enough to satisfy their need for the next 3 generations. The plant was then bought by Rendall which was not happy with the plants continuing losses and the important customers dissatisfaction (HP).The plant continued to have productivity and quality problems. The full extent of the problem was made visible to the Preston quality manager by the HP engineer in a meeting at Chicago. They clearly explained him the process control charts they had which were given to them by Preston themselves. They convinced the Preston manager that people at Preston were not giving importance to the data showed by process control charts otherwise they would have realized their quality problems. The quality manager then decided to bring the plant under control. He along with his team then reviewed the decisions they made right from the start when the curl problem appeared and they adjusted the process. The team used a set of shut-down rules which enabled the operations to halt a line if they thought the product they were making was of inferior quality. This resulted in throwing away almost 64 large size rolls and about $10000 worth of scrapped product. The guidelines for shut down procedure were that they had to get rid of the defect and when that is done they are allowed to operate. This might cause the managers to tell the workers to improve their productivity but they would harshly criticize the workers if they were violating the quality process procedures. The two more change they implemented were: Daily reviewing of the control chart data The control chart data was then debated by the staff that was kept away from production while doing this. There was uncertainty among quite a few due to no production but it was vital as it got all the 3 shift operators talking about quality issues and control chart data. This caused a positive atmosphere among the workers and boosted the morale of the shop floor team. It led to remarkable improvements on quality front and improved efficiency of plant. The further progressive action taken in quality management by the plant was the implementation of Statistical process control. Then they did zero-based assessment to bring the costs down by reducing labor costs. They began downsizing process. The less number in workforce means that they should produce good quality paper in the first place to avoid inspection process. 
The plant workforce then decided to develop a portfolio of new product ideas, which would boost their confidence. The most significant idea was Protowrap, in which the new print wrap was able to be repulped.

Benefits: The Preston plant made profits after Christmas of 2000, after a period of 2 years of losses. Moreover, they had made such progress that they were beginning to get noticed at corporate level. This caused HP (Hewlett-Packard) to ask them to bid for their new product. The plant had three continuous months of profits, and they also received the new contract from HP. The plant's new quality procedures and principles allowed them to produce products more economically. The most significant benefit Preston received by implementing TQM was that they were able to reverse the decision made by Rendall, their owners, to shut them down. The plant not only survived but flourished due to the implementation of quality-based principles.

Suggestions:
Implementing a QMS (Quality Management System) with corrective actions: This will be required when we encounter problems relating to non-conformance of our suppliers' products. When faced with a problem, we must be able to locate it and find its root cause with immediate effect. This is done with CAPA (corrective and preventive action) items. The system implemented should be such that it can itself assess the cost of quality and initiate the cost-recovery process with the supplier.
Involving suppliers in quality systems: Suppliers should be encouraged to implement quality systems within their own company, so that they can more easily reach the quality of product required and also avoid paying the recovery costs. (Metric Stream, 2010)
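Since the turnaround in the Preston case hinged on taking the process control charts seriously, here is a minimal Python sketch of how the control limits for an X-bar and R chart can be computed from subgroup data. The coating-weight readings are made up for illustration, and the constants are the standard Shewhart factors for subgroups of size 5; this is a generic SPC calculation, not the plant's actual data or procedure.

# Minimal X-bar / R control chart limits for subgroups of size 5.
# The measurements below are illustrative coating-weight readings, not real plant data.

A2, D3, D4 = 0.577, 0.0, 2.114   # standard Shewhart constants for subgroup size n = 5

subgroups = [
    [10.2, 10.4, 10.1, 10.3, 10.2],
    [10.5, 10.3, 10.4, 10.6, 10.2],
    [10.1, 10.2, 10.3, 10.1, 10.4],
    [10.3, 10.5, 10.2, 10.4, 10.3],
]

xbars = [sum(g) / len(g) for g in subgroups]             # subgroup means
ranges = [max(g) - min(g) for g in subgroups]            # subgroup ranges

xbarbar = sum(xbars) / len(xbars)                        # grand mean (centre line)
rbar = sum(ranges) / len(ranges)                         # average range

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                      # R chart limits

print(f"X-bar chart: CL={xbarbar:.3f}, UCL={ucl_x:.3f}, LCL={lcl_x:.3f}")
print(f"R chart:     CL={rbar:.3f},  UCL={ucl_r:.3f}, LCL={lcl_r:.3f}")

# A point outside the limits (or a clear trend towards them) is the signal the
# Preston team was ignoring: it says the process has shifted, even while output
# is still inside the customer's specification limits.
for i, x in enumerate(xbars, start=1):
    if not (lcl_x <= x <= ucl_x):
        print(f"subgroup {i} mean {x:.3f} is out of control")

The distinction the HP engineers were pressing is visible here: specification limits describe what the customer will accept, while control limits describe what the process is actually doing, so a process can drift towards trouble while every shipment is still technically within specification.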