Monday, September 30, 2019

Digitally Divided Canada

The world is presently undergoing a major and radical transformation, driven primarily by the information and technological revolution. Almost every day, history witnesses the birth of highly sophisticated gadgets and equipment that have literally altered the lives of many individuals. The hindrances brought about by geographical, spatial, and time constraints no longer affect mankind: tasks which once took days or months to accomplish can now be addressed with a single click. Evidently, Marshall McLuhan's notion of the global village (Baran & Davis, 2006) is no longer a theoretical argument. The global village has arrived, making every individual, regardless of age, social status, race, and ethnicity, more connected and interactive than ever.

Relatedly, the establishment of the information superhighway did not merely increase the connectedness of different groups and communities. More than anything else, it has been instrumental in opening the doors to various opportunities for growth and development at the national level. Canada, for example, experienced a major economic shift with the introduction of Information and Communications Technology, more popularly known as ICT ("Canada's Journey," 2003). A country which was once heavily dependent on its marine and agricultural resources is now capitalizing on the benefits and advantages of its so-called "knowledge economy" ("Canada's Journey," 2003). As a matter of fact, the country is considered one of the most competitive forces within the information technology industry ("Canada's Journey," 2003).

However, while it is true that technology fueled Canada's economic expansion, one of the pressing issues the country must confront is the digital divide. The digital divide is a serious social concern that cannot simply be described as a battle between the technologically rich and the technologically poor. More than anything else, its implications tend to worsen social, economic, and cultural gaps. These divisions are felt most acutely between rural and urban settlers. If technology is said to govern man's life, then clearly those who cannot fully avail themselves of modern tools and equipment are also denied technology's benefits. Evidently, those living in rural areas are placed in decidedly disadvantageous situations as far as being "digitally connected" is concerned. Given this situation, one may readily ask: how does the digital divide contribute to the marginalization of rural settlers in Canada?

For this discussion, the statistics presented in Canadian Social Trends and The Daily were primarily used. Information from these sources is highly significant since it provides a wide view of how the whole Canadian populace utilizes the internet. The above-mentioned sources do not deal with internet usage alone, however; they also provide substantial discussions of the availability of personal computers in both rural and urban Canada. Another major source used to support the arguments of this report is the e-government studies of the Organisation for Economic Co-operation and Development (OECD). The OECD facts, however, are comparisons of internet usage on a global scale.
Such information is necessary to include in this analysis since it presents an overview of how far Canada has fared when it comes to global connectedness and how its current situation contributes to the digital divide the country experiences. In understanding the digital divide in Canada, it is important first to understand how information technology works in the country. It is also imperative to know the percentage of individuals who have ready access, in order to articulate the matter substantially. In addition, the inputs from the OECD contribute to a much deeper examination of how the digital divide affects not only Canada but also other parts of the world.

McLaren (2002) discussed that most of the individuals who own computers are located in Census Metropolitan Areas (CMAs) and Census Agglomerations (CAs). This is in stark contrast to those residing in rural areas and small towns. More than 50 to 60% of those living in CMAs and CAs have computers at home, whereas only 40 to 50% of those in rural areas possess such equipment (please refer to Figure 1 of the Appendix). There are two reasons that can possibly explain this situation. First, it can be argued that urban settings can easily adapt to major technological shifts and transformations. This should not come as much of a surprise, since major cities are considered centers of commerce and trade. In an area teeming with business opportunities, the use of an efficient technological platform is a must. Business endeavors that aspire to be globally competitive must take advantage of technology's positive effects and contributions. Thus, individuals in these areas become more aware of the uses and purposes of technology-related materials. The second reason is the high purchasing power of urban settlers. Suppliers of high-end technological products readily target consumers in the city, since they know that highly urbanized areas can provide them with a solid market base.

For rural residents, on the other hand, digital connectedness falls short. This stems primarily from the existing income discrepancies between the two groups. McLaren (2002) found that those who earn less than $20,000 in rural areas can hardly afford their own computer: only 20% of such rural settlers are capable of purchasing personal computers (please refer to Figure 2 of the Appendix). Among urban dwellers who also earn less than $20,000, however, more than 30% of the population own their own computers (please refer to Figure 2 of the Appendix). The same pattern holds for those earning more than $20,000. From a critical perspective, if Canadians in rural areas are literally outnumbered when it comes to having access to computers, then it follows that it is also harder for them to acquire internet access. While individuals from rural and urban areas may earn the same income, computers are more readily available in cities than in small towns. This means that an urban dweller, despite earning less than $20,000, can still own a computer, primarily because in the city one can always find cheaper alternatives. Computer providers in such areas are engaged in stiff competition that compels them to lower their prices in order to tap potential markets.
On the other hand, there are fewer computer suppliers in rural areas than in highly urbanized ones. Competition is hardly felt, and these suppliers can therefore dictate their prices. Consequently, computers sold in rural sectors are literally more expensive than those found in the city. Given this, if Canadians in rural areas cannot avail themselves of the basic equipment used to connect to the internet, then it is harder for them to participate in the digital world. It is more difficult for these individuals to stay updated on recent technological trends and developments. In addition, the lack of computers prevents them from making the most of Canada's robust knowledge economy. It is also important to note that computers nowadays do not only assist Canadians in connecting to the internet; they are also instrumental in making work processes and transactions much faster and easier compared to manual work. One must always bear in mind that the digital divide does not merely concern the capacity to connect online; it also concerns the ability to own the required technological platforms and materials.

As far as internet access itself is concerned, it should come as no surprise that urban households are more connected. According to the Organisation for Economic Co-operation and Development (2003), from 1999 to 2001, 33.6% of rural homes in Canada had an internet connection, whereas 44.4% of urban households enjoyed internet services (please refer to the Appendix). There is no doubt that the internet is a good source of knowledge and information. Relevant data and statistics found on the World Wide Web contribute to empowering and educating individuals. With the current situation of rural Canada, however, its residents are evidently left behind. Take, for example, students, who primarily use the internet as a research aid. Those located in urban areas enjoy the benefit of acquiring significant facts and figures that cannot ordinarily be found in local libraries and other academic sources. Individuals in urban areas are also able to use government services via the internet, and online business opportunities are easily accessed by urban residents since they have the tools to do so. Apparently, the digital divide tends to exclude individuals in the rural sector from using technology to further empower themselves and to seek other opportunities for growth and development.

There is an evident inequality in the digital divide that basically originates from the unequal distribution of wealth and power. More than anything else, it should always be remembered that only those who have access to a wide array of providers and are financially able to participate in the digital arena are the ones most likely to benefit. These two conditions (access to providers and financial adequacy) are commonly found in metropolitan areas. Income disparity is indeed a key factor in the proliferation of the digital divide and the marginalization of rural Canada. As a matter of fact, one of the primary reasons that prevent Canadians in rural areas from utilizing the internet is the "costs" associated with it (McLaren, 2002). The other reason corresponds to the absence of the necessary skills and training (McLaren, 2002).
In addition, the geographic and economic conditions of rural sectors in Canada are also important factors in analyzing the digital divide and its implications. Transforming small towns into digitally active communities translates into building the necessary infrastructure. As far as internet and communication providers are concerned, an area should be highly feasible for business operations and profitability before they will consider the idea of building internet-related structures (Siegan & Walzer, 2003). Unfortunately, if the area concerned does not meet the business needs of providers, then digital connectedness is unlikely to grow and flourish. Unless the Canadian government creates a solid and concrete plan to establish technological infrastructure in rural domains, the people living in these areas will remain digitally left behind.

The effects of the digital divide in rural Canada, however, are not confined to the economic disadvantages of rural residents. Aside from the tacit or unconscious information monopoly of the technologically rich, there is also an apparent exclusion of the technologically deprived from participating in issues that require utmost concern (Jones, 2003). For how can somebody participate if he or she is not well informed? Rural residents are also, in a sense, denied the chance to articulate their interests, views, and opinions. It is no secret that the internet provides forums and sites wherein participants can express their sentiments and generate possible solutions; it is through the net that groups with similar orientations converge. Yet it is difficult for rural settlers to be involved if, in the first place, they are not digitally connected. Another consideration is that the digital divide tends to prevent rural Canadians from accessing government services with ease (Marshall, Taylor & Yu, 2003). The government's use of the internet is indeed commendable; however, it remains of little use if the majority of the populace cannot readily utilize it.

The digital divide between urban and rural residents requires immediate action. The opportunities brought forth by technology should not be limited to a very few hands. If anyone must be technologically empowered, it is those in rural settings, precisely because they are the ones who really need it. Just as technological infrastructure is progressively established in urban areas, even greater efforts should be exerted in the rural sectors.

Sunday, September 29, 2019

From 1781 to 1789 the Articles of Confederation Essay

From 1781 to 1789 the Articles of Confederation provided the United States with an effective government. The main goal of the Articles was to allot as much independence as possible to the states rather than to a central government, for fear of recreating the kind of rule the states had experienced under Britain. Despite the many advantages of its systematic rule, it did not provide Congress with enough power to adequately control commerce and land expansion or to regulate taxes. This was to be expected, since the Articles of Confederation were a starting point, used to persuade the individual states to adopt a more powerful form of government in the future.

After the ratification of the Articles of Confederation, a loose confederation was formed and granted power to a controlled extent. A house of Congress was also established, which allotted each individual state one vote. Congress dealt with many important issues, such as improving the military and anything relevant to homeland security, declaring wars, and loaning money. One prominent conflict was the fact that Congress did not have adequate power to regulate commerce and trade with foreign countries. This presented a significant problem because states started enforcing individual laws over which Congress had no say. In turn, this rendered Congress helpless in making laws regarding taxation and tariffs. In a sense, the Americans were taking full advantage of the lenient government, often passing laws without consulting Congress. Many states refused to pay taxes to Congress, stating that it was preposterous and claiming that they saw many similarities to the policies of the British Parliament. In 1782, outspoken representatives from the Rhode Island assembly wrote a letter stating how ludicrous it was that they were subject to paying taxes to the government (Document A). Under the Articles of Confederation, the federal government had no power to coerce the states into complying with its tax demands. This was a dire problem, since the government needed to tax the individual states to pay the debts amassed during the war.

The Articles of Confederation also gave the government no control over the economy, thus creating much conflict within the states. In 1786, John Jay tried to negotiate with Spain's minister Diego de Gardoqui because he felt it was America's right to be able to navigate the Mississippi River. This was a difficult feat because not only was America, a newly founded nation, going against Spain, a predominant power, but America at that point did not have a strong military to defend its standpoint. The army was growing discontented as Congress repeatedly failed to pay it.

The Articles of Confederation, although flawed, provided a good foundation for the newly founded nation. They were used as a basis for the Constitution, and we still feel their effects today. They provided coverage for many important factors in a nation ruled on the basis of unity between people and states: the independence granted to states, how bills are passed, land dispersion, and many other imperative factors. It is apparent that without the Articles of Confederation there would not have been initial agreement amongst the states.

Bibliography: 5 Steps to a 5: AP US History. New York: McGraw-Hill, 2004.

Saturday, September 28, 2019

Artist Antonio Martorell Essay Example

His writing career is quite brilliant and precise at the same time. He wrote La piel de la memoria, translated by Andrew Hurley as 'Memory's Tattoo', and El libro dibujado, meaning 'the drawn book'. Presently, he is a regular columnist for Escenario, a section of the Puerto Rican newspaper El Vocero. His accomplishments as an artist are plentiful. Winner of the Bienal de Arte de San Juan, he has provided illustrations for many authors, such as Alma Rosa Flor, Heraclio Cepeda, Nicholasa Mohr, and Pura Belpré. One of his illustrated books, 'ABC de Puerto Rico', was later subjected to burning by the Education Department of Puerto Rico under the governorship of Carlos Romero Barceló. In November 2006, Antonio's house suffered a fire, leading to the loss of thousands of dollars' worth of artifacts. He currently runs workshops in Ponce and New York. Antonio Diaz-Royo's biography, 'Martorell: the Adventure of Creation', is the most elaborate work on Antonio Martorell's life to date and an ovation to his terrific contribution to the world of…

Friday, September 27, 2019

LAW OF INTERNATIONAL INSURANCE CONTRACT Coursework

The insurance also covers certain liabilities that arise where there is a collision with another ship, as well as liability for striking other objects (FFO: Fixed and Floating Objects). Typically, claims under Hull and Machinery insurance include total loss of the ship; damage to the ship, engines, and equipment; explosions and fires; groundings; collisions; and striking other objects. The scope of the damages covered by Hull and Machinery insurance has been defined by the International Hull Clauses (IHC). Clause 2.1.6 states that H & M covers losses caused to the ship by 'contact with land conveyance, dock or harbour equipment or installation'.

There are certain risks and liabilities that are not covered under Hull and Machinery insurance. A prudent ship-owner may therefore look to obtain insurance cover for liabilities to third parties. Such liabilities might arise from a third party's legal or contractual claim against the ship. P & I insurance is arranged by entering the ship into a mutual insurance association, usually referred to as a 'club'. All the members of this club are ship-owners, and the P & I club is therefore answerable only to its members; a marine insurance company, by contrast, is answerable to its shareholders. P & I clubs provide insurance cover for much broader risks than Hull and Machinery schemes.

When a ship has an accident due to the perils of the sea, Hull & Machinery insurance provides cover for the loss that has occurred to the ship. But there are many other interests connected with the ship. The crew of the ship, as employees, may be hurt and claim compensation for their injuries. The owner of the cargo carried in the ship may also claim for his loss against the ship-owner. Hull & Machinery insurance does not provide cover for such liabilities to third parties; the ship-owner can, however, obtain protection from such claims by pursuing P & I insurance.

As far as liability to the owner of the cargo is concerned, the cargo owner has a first claim against the carrier. The cargo owner may not succeed in this claim, because either the ship-owner was not responsible for the loss or he is protected under the Hague-Visby Rules. In such cases, the cargo owner claims compensation from his insurer under cargo insurance. By the right of subrogation, the insurer, after compensating his client, can then pursue the claim in his own right against the carrier. To avoid this claim against him, the carrier seeks the services of a P & I club. This means that the same cargo can be insured twice. P & I clubs also settle claims against the ship-owner when the crew is injured. There can be other 'third' parties with legal or contractual claims against the ship-owner, and P & I insurance addresses all of those claims.

There are risks that are not covered by P & I insurance because they are covered by another form of insurance. In relation to Hull & Machinery insurance, P & I insurance is able to cover almost all the risks that H & M leaves out. Even for claims that are not fully covered by H & M insurance, the portion of the claim that is left out can be covered under P & I insurance. Therefore, P & I insurance complements Hull and Machinery insurance, as the risks that are not covered by one are covered by the other. When both forms of marine insurance are…

Thursday, September 26, 2019

Demography Essay Example

I am going to compare average SAT scores, parents' education, and family income level across different races in California. I am also going to compare the proportion of each race in California, at UC Berkeley, and at CSU East Bay. The reason I included CSU East Bay is that I want to compare UC Berkeley with another public school that has different qualifications for admitting students. My hypothesis is that UC Berkeley does not follow the proportion of each race in California, since it is a very competitive school with higher tuition fees compared to CSU East Bay. In this paper, I limited the types of race to five categories: White, Black, Asian, Hispanic, and Others. I did this because they are the majority in California, so that I can have consistency when gathering data.

I began my research by comparing the proportion of each race in California, UC Berkeley, and CSU East Bay. In Figure 1, I made a clustered histogram that represents the percentage of each race. Here, I gathered the data from the US National Census Bureau, UC Berkeley's official website, and CSU East Bay's official website. According to the data, White, Black, and Others are well represented at UC Berkeley, as their proportions are not significantly different compared to their proportions in California. What captures attention in this data is that Asians are the majority group at UC Berkeley, at up to 43% of total students, compared to the California population, where they make up only 13%. Another highlight from this histogram is that Hispanics are underrepresented at UC Berkeley: California is 37.2% Hispanic, while UC Berkeley is only 13%. A different picture appeared when we compared the race proportions at CSU East Bay. The proportions of White, Asian, Hispanic, and Others are almost equal, at around 20% each, while Blacks have a lower proportion, though not significantly different from their proportion in California.
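A clustered histogram of this kind can be produced with a few lines of plotting code. The sketch below is a hypothetical illustration, not the essay's actual Figure 1: apart from the figures quoted in the text (43% Asian and 13% Hispanic at UC Berkeley; 13% Asian and 37.2% Hispanic in California), the percentages are placeholder values.

# Hypothetical sketch of the clustered bar chart described above.
# Most percentages are illustrative placeholders, not the essay's data.
import matplotlib.pyplot as plt
import numpy as np

races = ["White", "Black", "Asian", "Hispanic", "Others"]
california = [40.1, 6.2, 13.0, 37.2, 3.5]    # % of state population (37.2 and 13.0 from the essay)
uc_berkeley = [30.0, 3.0, 43.0, 13.0, 11.0]  # % of enrollment (43.0 and 13.0 from the essay)
csu_east_bay = [22.0, 10.0, 20.0, 20.0, 28.0]  # placeholder values, "almost equal" per the essay

x = np.arange(len(races))  # one cluster position per race category
width = 0.25               # bar width within each cluster
plt.bar(x - width, california, width, label="California")
plt.bar(x, uc_berkeley, width, label="UC Berkeley")
plt.bar(x + width, csu_east_bay, width, label="CSU East Bay")
plt.xticks(x, races)
plt.ylabel("Share of population / enrollment (%)")
plt.legend()
plt.show()

Plotting the three populations side by side for each race makes the over- and underrepresentation described above visible at a glance.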

Wednesday, September 25, 2019

GSK Case Study Example

GSK had several arguments in its defense. First, it argued that it was not breaching any competition laws, since national governments already controlled and restricted the pharmaceutical sector (Schultz & Killick 2006, p. 2). Secondly, GSK averred that opening up the pharmaceutical sector to parallel-market operations would negatively impact its revenues, hampering its role in research and development. In addition, parallel-market operations tended to create shortages of medicine in low-price countries, as businesses were buying medicines from such countries and shipping them off to high-price countries within the EU in order to rake in huge profits.

Nevertheless, some arguments clearly compromised GSK's position. To begin with, GSK could not be exempted from Article 81 of the EC Treaty, which outlaws any activity that restricts trade among member states (Covington & Burling 2005). Limiting parallel-market operations would amount to undermining competition within EU member states (Morgan Lewis 2009, p. 1). Secondly, pharmaceutical companies, especially GSK, were not contributing towards improving the production and distribution of medical products. Moreover, GSK had not been party to the promotion of technical and economic growth in its countries of operation. Consequently, the giant pharmaceutical corporation lost the case.

References

Covington & Burling (2005) Parallel Trade in Pharmaceutical Products in Europe: The European Court of Justice's Ruling in Syfait v. GSK. [Online]. Available from: http://www.cov.com/files/Publication/13800cb1-53df-44f7-8fc6-acef546be00b/Presentation/PublicationAttachment/02f82d54-11c6-41ce-9dad-b190bfc3309c/oid11576.pdf. [Accessed April 29, 2014].

Morgan Lewis (2009) European Court of Justice Delivers Mixed Message on Parallel Trade. [Online] October 6, 2009. Available from: http://www.morganlewis.com/pubs/ATR_EuropeanParallelTrade_LF_06oct09.pdf. [Accessed …].

Tuesday, September 24, 2019

What is the REAL link between violence and mental disorder? How do the media present mental disorder? Is this a fair and accurate representation? Essay

Although he had taken the case just for the publicity, as Vail investigates further he develops sympathy for Stampler and becomes convinced that he is innocent. His sympathy and concern for Stampler reach such a level that Vail is ready to fight to all extents against his former lover, Janet Venable, who is acting as the prosecutor. During his investigation, Vail also discovers that the Archbishop had made some powerful enemies due to his insistence on not developing and selling Church lands. Surprisingly enough, he also finds out that the Archbishop was sexually abusing altar boys. Vail is tempted to submit this evidence to the court because it would increase sympathy for the boy, but at the same time it would also provide a clear motive for murder, something which was missing from the prosecution's arguments. Vail decides not to make this evidence public and instead decides to question Aaron, who continuously insists that he is innocent and does not remember anything, despite the fact that he was found fleeing the crime scene with blood all over his clothes. When Vail intensifies his aggressive line of questioning and mentions the sex tape, Aaron finally breaks down, suddenly transforms into a violent, rude, sociopathic personality, and starts to refer to himself as "Roy". "Roy" confesses to the murder of the Archbishop and cites the molestation and abuse as the reason behind it. "Roy" also throws Vail against the wall in the heat of the moment, but when things are allowed to cool down, he transforms back into his original personality and now claims that he has no recollection of the events. It becomes apparent to Vail that Aaron is suffering from some serious mental illness. The psychiatrist examining Aaron also confirms that Aaron is suffering from multiple personality disorder. As a child, he faced mental problems due to childhood abuse from his…

Monday, September 23, 2019

Nissan Essay Example

These cars were first developed in the late 1960s and have continued to be modified to adapt to changing trends to date. The Nissan Patrol, on the other hand, is a sport utility vehicle (SUV) that was developed around 1951 to compete with models such as Toyota's Land Cruiser. This car has been advanced over the generations, and it is currently in its sixth generation, which began in 2010. The Patrol has four-wheel drive and is available in either a short-wheelbase, three-door chassis or a long-wheelbase, five-door chassis. Both cars come in a variety of models that have continued to attract customers thanks to continued development.

The development of the Nissan Skyline GT-R brand has a long history linked to the previous products developed by Nissan. The Prince Automobile Company was the first company to use the word 'Skyline'; it developed sedans that fell within a line of Skyline products. After the merger with Nissan-Datsun, Nissan adopted the Skyline series of cars. Skyline cars were developed with rear-wheel drive, an aspect that continued into the 1990s, when other manufacturers started focusing on shifting the drive to the front wheels. The adoption of GT-R cars for racing purposes gave them a ready market, while at other times some versions, such as the KPGC110 2000 GT-R, made very few sales, a situation attributed to a looming energy crisis at the time. Just before the development of the Nissan Skyline GT-R there was the S54 2000 GT-B, a powerful racecar at the time. The GT-R series saw the development of the PGC10 2000 GT-R, which made very impressive wins over a period of almost two years; a number of racing victories were associated with this particular car from 1964 to the time it was discontinued in 1972. Nissan Motorsport (Nismo) has been at the forefront of developing this car to…

Sunday, September 22, 2019

Country Risk Analysis Research Paper Example

Some risks can be diversified away by investing in portfolios of investments that diversify each other's risk. Risk that can be diversified away is called diversifiable or unsystematic risk. Systematic risk, on the other hand, cannot be diversified and is thus also called un-diversifiable risk. Economic and political risks are inherent in operating in a country. Economic risk is the risk that the economy of that country will change for the worse. This change could be due to bad management or to causes beyond anyone's control, such as a reduction in oil prices for a country which has oil as its primary export. Political risk, on the other hand, points towards the stability of the country: the risk that there will be political turmoil resulting in losses on investment.

The recent changes in the Arabian Peninsula are changing the shape of Arab politics forever. There have been major political changes in countries like Egypt, Libya, and Syria. These political changes will, in due course, bring positive change to the region and contribute to the economic stability and well-being of locals. However, these changes have also created an environment of uncertainty in the politics of the region; there is confusion as to which country will be affected by this political upheaval next. Amidst this political turmoil lies a country of 1.7 million people known as Qatar. The State of Qatar is located in the Middle East and shares its borders with the Gulf and Saudi Arabia. The country is a monarchy controlled by the Al Thani family. Other monarchies in the region are threatened by political upheavals, and Qatar, given its political system, is no different. However, Qatar is still a peaceful country, and there have been no apparent signs of political upheaval. One of the reasons is the sound economic situation of the country.

The economy of Qatar is growing rapidly and is considered one of the fastest growing economies of the world. The nation has a per capita annual GDP of 97,000 dollars, which is one of the highest in the entire world. The nation is rich in oil and gas reserves. However, instead of simply consuming these resources like many other Arab nations, Qatar has strived to build itself into a strong economic power through the development of infrastructure and industry. Economic growth is stable due to a strong inflow of foreign capital from oil exports, and this stability plays a primary role in reducing the financial risk of the country. The purchasing power parity of Qatar, according to the CIA World Factbook, is estimated to be $123 billion, an increase of approximately 20% from previous years. Because of the rich economy, locals are not an active part of the country's workforce, though efforts are being made to bring about a change in this regard. The main hindrance to these efforts is the availability of an excellent social welfare system, which allows people to live lives nearly free of poverty. This can be seen in the high per capita GDP of Qatar, which makes it one of the richest nations in the world.

The country risk of Qatar has been rated CRT-3. Risk tier 3 is defined as a developing legal environment, legal system, and business environment with developing capital markets and a developing insurance regulatory structure. This means that the country has very low economic risk. This low risk is due to the rapid growth of GDP…
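As a brief illustration of the diversification claim at the start of this essay (standard portfolio algebra, not drawn from the original paper): for a portfolio of $n$ equally weighted assets with average individual variance $\overline{\sigma}^2$ and average pairwise covariance $\overline{\mathrm{cov}}$, the portfolio variance is

\sigma_p^2 = \frac{1}{n}\,\overline{\sigma}^2 + \frac{n-1}{n}\,\overline{\mathrm{cov}}

As $n$ grows, the first term, the diversifiable (unsystematic) component, shrinks toward zero, while the second term converges to the average covariance: the systematic, un-diversifiable risk that remains no matter how many assets are held.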

Saturday, September 21, 2019

British Airways Essay Example

I remember going to parties in the late 1970s, and, if you wanted to have a civilized conversation, you didn't actually say that you worked for British Airways, because it got you talking about people's last travel experience, which was usually an unpleasant one. It's staggering how much the airline's image has changed since then, and, in comparison, how proud staff are of working for BA today. (British Airways employee, Spring 1990)

I recently flew business class on British Airways for the first time in about 10 years. What has happened over that time is amazing. I can't tell you how my memory of British Airways as a company and the experience I had 10 years ago contrasts with today. The improvement in service is truly remarkable. (British Airways customer, Fall 1989)

In June of 1990, British Airways reported its third consecutive year of record profits, £345 million before taxes, firmly establishing the rejuvenated carrier as one of the world's most profitable airlines. The impressive financial results were one indication that BA had convincingly shed its historic "bloody awful" image. In October of 1989, one respected American publication referred to them as "bloody awesome," a description most would not have thought possible after pre-tax losses totalling more than £240 million in the years 1981 and 1982. Productivity had risen more than 67 percent over the course of the 1980s. Passengers reacted highly favorably to the changes. After suffering through years of poor market perception during the 1970s and before, BA garnered four Airline of the Year awards during the 1980s, as voted by the readers of First Executive Travel. In 1990, the leading American aviation magazine, Air Transport World, selected BA as the winner of its Passenger Service award. In the span of a decade, British Airways had radically improved its financial strength, convinced its work force of the paramount importance of customer service, and dramatically improved its perception in the market. Culminating in the privatization of 1987, the carrier had undergone fundamental change through a series of important messages and events. With unprecedented success under its belt, management faced an increasingly perplexing problem: how to maintain momentum and recapture the focus that would allow them to meet new challenges.

Crisis of 1981

Record profits must have seemed distant in 1981. On September 10 of that year, then chief executive Roy Watts issued a special bulletin to British Airways staff:

British Airways is facing the worst crisis in its history . . . unless we take swift and remedial action we are heading for a loss of at least £100 million in the present financial year. We face the prospect that by next April we shall have piled up losses of close to £250 million in two years. Even as I write to you, our money is draining at the rate of nearly £200 a minute. No business can survive losses on this scale. Unless we take decisive action now, there is a real possibility that British Airways will go out of business for lack of money. We have to cut our costs sharply, and we have to cut them fast. We have no more choice, and no more time.

Just two years earlier, an optimistic British government had announced its plan to privatize British Airways through a sale of shares to the investing public.
Although airline management recognized that the 58,000 staff was too large, they expected increased passenger volumes and improved staff productivity to help them avoid complicated and costly employee reductions. While the 1978-79 plan forecast passenger traffic growth at 8 to 10 percent, an unexpected recession left BA struggling to survive on volumes which, instead, decreased by more than 4 percent. A diverse and aging fleet, increased fuel costs, and high staffing costs forced the government and BA to put privatization on hold indefinitely. With the airline technically bankrupt, BA management and the government would have to wait before the public would be ready to embrace the ailing airline.

The BA Culture, 1960-1980

British Airways stumbled into its 1979 state of inefficiency in large part because of its history and culture. In August 1971, the Civil Aviation Act became law, setting the stage for the British Airways Board to assume control of two state-run airlines, British European Airways (BEA) and British Overseas Airways Corporation (BOAC), under the name British Airways. In theory, the board was to control policy over British Airways; but, in practice, BEA and BOAC remained autonomous, each with its own chairman, board, and chief executive. In 1974, BOAC and BEA finally issued one consolidated financial report. In 1976, Sir Frank (later Lord) McFadzean replaced the group division with a structure based on functional divisions to officially integrate the divisions into one airline. Still, a distinct split within British Airways persisted throughout the 1970s and into the mid-1980s.

After the Second World War, BEA helped pioneer European civil aviation. As a pioneer, it concerned itself more with building an airline infrastructure than it did with profit. As a 20-year veteran and company director noted: "The BEA culture was very much driven by building something that did not exist. They had built that in 15 years, up until 1960. Almost single-handedly they opened up air transport in Europe after the war. That had been about getting the thing established. The marketplace was taking care of itself. They wanted to get the network to work, to get stations opened up."

BOAC had also done its share of pioneering, making history on May 2, 1952, by sending its first jet airliner on a trip from London to Johannesburg, officially initiating jet passenger service. Such innovation was not without cost, however, and BOAC found itself mired in financial woes throughout the two decades following the war. As chairman Sir Matthew Slattery explained in 1962: "The Corporation has had to pay a heavy price for pioneering advanced technologies."

Success for most involved with BEA and BOAC in the 1950s and 1960s had less to do with net income and more to do with "flying the British flag." Having inherited numerous war veterans, both airlines had been injected with a military mentality. These values combined with the years BEA and BOAC existed as government agencies to shape the way British Airways would view profit through the 1970s. As former director of human resources Nick Georgiades said of the military and civil service history: "Put those two together and you had an organization that believed its job was simply to get an aircraft into the air on time and to get it down on time."

While government support reinforced the operational culture, a deceiving string of profitable years in the 1970s made it even easier for British Airways to neglect its increasing inefficiencies.
Between 1972 and 1980, BA earned a profit before interest and tax in each year except one. "This was significant, not least because as long as the airline was returning profits, it was not easy to persuade the workforce, or the management for that matter, that fundamental changes were vital." Minimizing cost to the state became the standard by which BA measured itself. As one senior manager noted: "Productivity was not an issue. People were operating effectively, not necessarily efficiently. There were a lot of people doing other people's jobs, and there were a lot of people checking on people doing other people's jobs." As a civil service agency, the airline was allowed to become inefficient because the thinking in state-run operations was, "If you're providing service at no cost to the taxpayer, then you're doing quite well."

A lack of economies of scale and strong residual loyalties upon the merger further complicated the historical disregard for efficiency by BEA and BOAC. Until Sir Frank McFadzean's reorganization in 1976, British Airways had labored under several separate organizations (BOAC; BEA European, Regional, Scottish, and Channel), so the desired benefits of consolidation had been squandered. Despite operating under the same banner, the organization consisted more or less of separate airlines carrying the associated costs of such a structure. Even after the reorganization, divisional loyalties prevented the carrier from attaining a common focus. "The 1974 amalgamation of BOAC with the domestic and European divisions of BEA had produced a hybrid racked with management demarcation squabbles. The competitive advantages sought through the merger had been hopelessly defeated by the lack of a unifying corporate culture." A BA director summed up how distracting the merger proved: "There wasn't enough management time devoted to managing the changing environment because it was all focused inwardly on resolving industrial relations problems, on resolving organizational conflicts. How do you bring these very, very different cultures together?"

Productivity at BA in the 1970s was strikingly bad, especially in contrast to other leading foreign airlines. BA's productivity for the three years ending March 31, 1974, 1975, and 1976 had never exceeded 59 percent of the average of the other eight foreign airline leaders. Service suffered as well. One human resources senior manager recalled the "awful" service during her early years in passenger services: "I remember 10 years ago standing at the gate handing out boxes of food to people as they got on the aircraft. That's how we dealt with service." With increasing competition and rising costs of labor in Britain in the late 1970s, the lack of productivity and poor service were becoming increasingly harmful. By the summer of 1979, the number of employees had climbed to a peak of 58,000. The problems became dangerous when Britain's worst recession in 50 years reduced passenger numbers and raised fuel costs substantially.

Lord King Takes the Reins

Sir John (later Lord) King was appointed chairman in February of 1981, just a half-year before Roy Watts's unambiguously grim assessment of BA's financial state. King brought to British Airways a successful history of business ventures and strong ties to both the government and business communities. Despite having no formal engineering qualifications, King formed Ferrybridge Industries in 1945, a company which found an unexploited niche in the ball-bearing industry.
Later renamed the Pollard Ball and Roller Bearing Company, Ltd., King's company was highly successful until he sold it in 1969. In 1970, he joined Babcock International and, as chairman, led it through a successful restructuring during the 1970s. King's connections were legendary. Hand-picked by Margaret Thatcher to run BA, King counted among his close friends Lord Hanson of Hanson Trust and the Princess of Wales's family. He also knew Presidents Reagan and Carter personally. King's respect and connections proved helpful both in recruiting and in his dealings with the British government. One director spoke of the significance of King's appointment: "British Airways needed a chairman who didn't need a job. We needed someone who could see that the only way to do this sort of thing was radically, and who would be aware enough of how you bring that about."

In his first annual report, King predicted hard times for the troubled carrier: "I would have been comforted by the thought that the worst was behind us. There is no certainty that this is so." Upon Watts's announcement in September of 1981, he and King launched their Survival plan: "tough, unpalatable and immediate measures" to stem the spiraling losses and save the airline from bankruptcy. The radical steps included reducing staff numbers from 52,000 to 43,000, or 20 percent, in just nine months; freezing pay increases for a year; and closing 16 routes, eight on-line stations, and two engineering bases. It also dictated halting cargo-only services and selling the fleet, and inflicting massive cuts upon offices, administrative services, and staff clubs. In June of 1982, BA management appended the Survival plan to accommodate the reduction of another 7,000 staff, which would eventually bring the total down from about 42,000 to nearly 35,000. BA accomplished its reductions through voluntary measures, offering such generous severance that it ended up with more volunteers than necessary. In total, the airline dished out some £150 million in severance pay. Between 1981 and 1983, BA reduced its staff by about a quarter.

About the time of the Survival plan revision, King brought in Gordon Dunlop, a Scottish accountant described by one journalist as "imaginative, dynamic, and extremely hardworking," euphemistically known on Fleet Street as "forceful," and considered by King simply "outstanding." As CFO, Dunlop's contribution to the recovery years was significant. When the results for the year ending March 31, 1982, were announced in October, he and the board ensured 1982 would be a watershed year in BA's turnaround. Using creative financing, Dunlop wrote down £100 million for redundancy costs, £208 million for the value of the fleet (which would ease depreciation in future years), and even an additional £98 million for the 7,000 redundancies which had yet to be effected. For the year, the loss before taxes amounted to £114 million; after taxes and extraordinary items, it totalled a staggering £545 million. Even King might have admitted that the worst was behind them after such a report.

The chairman immediately turned his attention to changing the airline's image and further building his turnaround team. On September 13, 1982, King relieved Foote, Cone & Belding of its 36-year-old advertising account with BA, replacing it with Saatchi & Saatchi. One of the biggest account changes in British history, it was King's way of making a clear statement that BA's direction had changed.
In April of 1983, British Airways launched its "Manhattan Landing" campaign. King and his staff sent BA management personal invitations to gather employees and tune in to the inaugural six-minute commercial. Overseas, each BA office was sent a copy of the commercial on videocassette, and many held cocktail parties to celebrate the new thrust. "Manhattan Landing" dramatically portrayed the whole island of Manhattan being lifted from North America and whirled over the Atlantic before awestruck witnesses in the U.K. After the initial airing, a massive campaign was run with a 90-second version of the commercial. The ad marked the beginning of a broader campaign, "The World's Favourite Airline," reflective of BA's status as carrier of the most passengers internationally. With the financial picture finally brightening, BA raised its advertising budget for 1983-84 to £31 million, compared with £19 million the previous year, signalling a clear commitment to changing the corporate image.

Colin Marshall Becomes Chief Executive

In the midst of the Saatchi & Saatchi launch, King recruited Mr. (later Sir) Colin Marshall, who proved to be perhaps the single most important person in the changes at British Airways. Appointed chief executive in February 1983, Marshall brought to the airline a unique resume. He began his career as a management trainee with Hertz in the United States. After working his way up the Hertz hierarchy in North America, Marshall accepted a job in 1964 to run rival Avis's operations in Europe. By 1976, the British-born businessman had risen to chief executive of Avis. In 1981, he returned to the U.K. as deputy chief and board member of Sears Holdings. Fulfilling one of his ultimate career ambitions, he took over as chief executive of British Airways in early 1983. Although he had no direct experience in airline management, Marshall brought with him two tremendous advantages: first, he understood customer service, and second, he had worked with a set of customers quite similar to the airline travel segment during his car rental days.

Marshall made customer service a personal crusade from the day he entered BA. One executive reported: "It was really Marshall focusing on nothing else. The one thing that had overriding attention the first three years he was here was customer service, customer service, customer service, nothing else. That was the only thing he was interested in, and it's not an exaggeration to say that was his exclusive focus." Another senior manager added: "He has certainly put an enabling culture in place to allow customer service to come out, where, rather than people waiting to be told what to do to do things better, it's an environment where people feel they can actually come out with ideas, that they will be listened to, and feel they are much more a part of the success of the company." Not just a strong verbal communicator, Marshall became an active role model in the terminals, spending time with staff during mornings and evenings. He combined these actions with a number of important events to drive home the customer service message.

Corporate Celebrations, 1983-1987

If Marshall was the most important player in emphasizing customer service, then the Putting People First (PPF) program was the most important event. BA introduced PPF to the front-line staff in December of 1983 and continued it through June of 1984. Run by the Danish firm Time Manager International, each program cycle lasted two days and included 150 participants.
The program was so warmly received that the non-front-line employees eventually asked to be included, and a one-day "PPF II" program facilitated the participation of all BA employees through June 1985. Approximately 40,000 BA employees went through the PPF programs. The program urged participants to examine their interactions with other people, including family, friends, and, by association, customers. Its acceptance and impact were extraordinary, due primarily to the honesty of its message, the excellence of its delivery, and the strong support of management. Employees agreed almost unanimously that the program's message was sincere and free from manipulation, due in some measure to the fact that BA separated itself from the program's design. The program emphasized positive relations with people in general, focusing in large part on non-work-related relationships. Implied in the positive relationship message was an emphasis on customer service, but the program was careful to aim for the benefit of employees as individuals first.

Employees expressed their pleasure at being treated with respect and their relief that change was on the horizon. As one veteran front-line ticket agent said: "I found it fascinating, very, very enjoyable. I thought it was very good for British Airways. It made people aware. I don't think people give enough thought to people's reactions to each other. . . . It was hard-hitting. It was made something really special. When you were there, you were treated extremely well. You were treated as a VIP, and people really enjoyed that. It was reverse roles, really, to the job we do." A senior manager spoke of the confidence it promoted in the changes: "It was quite a revelation, and I thought it was absolutely wonderful. I couldn't believe BA had finally woken up and realized where its bread was buttered. There were a lot of cynics at the time, but for people like myself it was really great to suddenly realize you were working for an airline that had the guts to change, and that it's probably somewhere where you want to stay."

Although occasionally an employee felt uncomfortable with the "rah-rah" nature of the program, feeling it perhaps "too American," in general PPF managed to eliminate cynicism. The excellence in presentation helped signify the sincerity of the message. One senior manager expressed the consistency: "There was a match between the message and the delivery. You can't get away with saying putting people first is important if, in the process of delivering that message, you don't put people first." Employees were sent personal invitations, thousands were flown in from around the world, and a strong effort was made to prepare tasteful meals and treat everyone with respect. Just as important, BA released every employee for the program, and expected everyone to attend. Grade differences became irrelevant during PPF, as managers and staff members were treated equally and interacted freely. Moreover, a senior director came to conclude every single PPF session with a question-and-answer session. Colin Marshall himself frequently attended these closing sessions, answering employee concerns in a manner most felt to be extraordinarily frank. The commitment shown by management helped BA avoid the fate suffered by British Rail in its subsequent attempt at a similar program. The British Rail program suffered from a limited budget, a lack of commitment by management and of interest by staff, and a high degree of cynicism.
Reports surfaced that employees felt the program was a public relations exercise for the outside world, rather than a learning experience for staff. About the time PPF concluded, in 1985, BA launched a program for managers only called, appropriately, Managing People First (MPF). A five-day residential program for 25 managers at a time, MPF stressed the importance of, among other topics, trust, leadership, vision, and feedback. On a smaller scale, MPF stirred up issues long neglected at BA. One senior manager of engineering summarized his experience: "It was almost as if I were touched on the head. . . . I don't think I even considered culture before MPF. Afterwards I began to think about what makes people tick. Why do people do what they do? Why do people come to work? Why do people do things for some people that they won't do for others?" Some participants claimed the course led them to put more emphasis on feedback. One reported initiating regular meetings with staff every two weeks, in contrast to before the program, when he met with staff members only as problems arose.

As Marshall and his team challenged the way people thought at BA, they also encouraged changes in more visible ways. In December 1984, BA unveiled its new fleet livery at Heathrow Airport. Preparations for the show were carefully planned and elaborate. The plane was delivered to the hangar-turned-theater under secrecy of night, after which hired audio and video technicians put together a dramatic presentation. On the first night of the show, a darkened coach brought guests from an off-site hotel to an undisclosed part of the city and through a tunnel. The guests, including dignitaries, high-ranking travel executives, and trade union representatives, were left uninformed of their whereabouts. To their surprise, as the show began, an aircraft moved through the fog and laser lights decorating the stage and turned, revealing the new look of the British Airways fleet. A similar presentation continued four times a day for eight weeks for all staff to see. On its heels, in May of 1985, British Airways unveiled its new uniforms, designed by Roland Klein. With new leadership, strong communication from the top, increased acceptance by the public, and a new physical image, few on the BA staff could deny in 1985 that their working life had turned a new leaf from its condition in 1980.

Management attempted to maintain the momentum of its successful programs. Following PPF and MPF, it put on a fairly successful corporate-wide program in 1985 called "A Day in the Life" and another, less significant program in 1987 called "To Be the Best." Inevitably, interest diminished and cynicism grew with successive programs. BA also implemented an "Awards for Excellence" program to encourage employee input, and Colin Marshall regularly communicated to staff through video. While the programs enjoyed some success, not many employees felt "touched on the head" by any successor to PPF and MPF.

Friday, September 20, 2019

The Purpose Of Theory In International Relations Philosophy Essay

International Relations (IR) theory aims to provide a conceptual framework with which international relations can be analyzed. Ole Holsti describes international relations theories as acting like pairs of colored sunglasses, allowing the wearer to see only the salient events relevant to the theory. An adherent of realism may completely disregard an event that a constructivist might pounce upon as crucial, and vice versa. For Robert Cox, the purpose of theory in International Relations is not a search for truth but a tool to understand the world as it is, and to change it through the power of critique. According to Cox, theory has two purposes. One is the problem-solving purpose, which is synchronic: it deals with the givens and tries to manage the smooth functioning of the system. The other kind of theory is critical theory, whose purpose is to become aware of situations not chosen by oneself and to establish an emancipatory perspective. Looked at through the Coxian lens, it is clear that the discipline of international relations was from the very beginning loyal to the first kind of purpose in theorizing, i.e., the smooth working of the system. As Cox articulates, "Theory is always for someone and for some purpose"; this statement reflects the context in which theory is analyzed. Cox says in one of his interviews: "What I meant is that there is no theory for itself; theory is always for someone, for some purpose. There is no neutral theory concerning human affairs, no theory of universal validity. Theory derives from practice and experience, and experience is related to time and place. Theory is a part of history. It addresses the problematic of the world of its time and place." An inquirer has to aim to place himself above the historical circumstances in which a theory is propounded. Cox has analyzed various theories, and he critiques the earlier theories for their absolutism. He presents three challenges to previously established theories of IR. Firstly, he appreciates the holistic intent behind both neorealism and world-systems theory but warns against drawing conclusions which may detract from a true formulation of a holistic approach. Secondly, the state and social forces ought to be considered jointly in order to understand the route created by historical processes. Finally, he argues for an empirical-historical methodology that accommodates and explains change more effectively than the neorealists' historical positivism. All theories derive from a perspective which determines their purpose. By that Cox means all theories are colored by the time, place, and culture which produced them. Cox identifies two strains of theorizing. The first, problem-solving theory, employs the existing theoretical framework and political conditions in order to isolate and address issues. Conversely, critical theory is reflective, rejecting the false premise of a fixed social and political order, which Cox asserts is a convenience of method that constitutes an ideological bias in favor of the status quo. If the purpose of political and social inquiry is indeed to effect change, critical theory is best suited to that mandate, as a guide to strategic action cognizant of the history and ideology which inevitably impact theory. Problem-solving theory restricts the theorist into (perhaps inadvertently) perpetuating the status quo.
That being said, Cox acknowledges (in accordance with his belief that theory belongs to its historical climate) that there can be a time and place for problem-solving theory. Problem-solving theory takes the world as it is and focuses on correcting certain dysfunctions, certain specific problems. Critical theory is concerned with how the world, that is, all the conditions that problem-solving theory takes as the given framework, may be changing. Because problem-solving theory has to take the basic existing power relationships as given, it will be biased towards perpetuating those relationships, thus tending to make the existing order hegemonic. What critical theory does is question these very structural conditions that are tacit assumptions for problem-solving theory, and ask whom and which purposes such theory serves. It looks at the facts that problem-solving theory presents from the inside, that is, as they are experienced by actors in a context which also consists of power relations. Critical theory thus historicizes world orders by uncovering the purposes that problem-solving theories within such an order serve to uphold. By uncovering the contingency of an existing world order, one can then proceed to think about different world orders. Critical theory is more marginal than problem-solving theory since it does not comfortably provide policy recommendations to those in power. The strength of problem-solving theory lies in its ability to fix limits or parameters to a problem area and to reduce the statement of a particular problem to a limited number of variables which are amenable to rather close and clear examination. The ceteris paribus assumption, the assumption that other things can be ignored, upon which problem-solving theorizing relies, makes it possible to derive statements of laws and regularities which appear of general applicability. Critical theory is critical in the sense that it stands apart from the prevailing order and asks how that world came about. It does not just accept it: a world that exists has been made, and in the context of a weakening historical structure it can be made anew. Critical theory, unlike problem-solving theory, does not take institutions and social power relations for granted, but calls them into question by concerning itself with their origins, and with whether and how they might be in the process of changing. It is directed towards an appraisal of the very framework for action, the historical structure, which problem-solving theory accepts as its parameters. Critical theory is a theory of history, in the sense that it is concerned not just with the politics of the past but with the continuing process of historical change. Problem-solving theory is not historical but a-historical, in the sense that it in effect posits a continuing present; it posits the continuity of the institutions and power relations which constitute the rules of the game, which are assumed to be stable. The strength of the one is the weakness of the other: problem-solving theory can achieve great precision by narrowing the scope of inquiry and presuming stability of the rules of the game, but in so doing it can become an ideology supportive of the status quo. Critical theory sacrifices the precision that is possible with a circumscribed set of variables in order to comprehend a wider range of factors in comprehensive historical change. Cox believes that critical theory does not propound remedies or make predictions about the emerging shape of things, world order for example.
It attempts rather, by analysis of forces and trends, to discern possible futures and to point to the conflicts and contradictions in the existing world order that could move things towards one or another of those possible futures. In that sense it can be a guide for political choice and action. Cox sums up the salient features of the purpose of critical theory as follows:
1. Action is never absolutely free but takes place within a framework for action which constitutes its problematic.
2. Not only action but also theory is shaped by the problematic.
3. The framework for action changes over time, and a principal goal of critical theory is to understand these changes.
4. The framework has the form of an historical structure.
5. The framework is to be viewed from the bottom or from the outside, in terms of the conflicts which arise within it and open the possibility of its transformation.
Having outlined his theoretical perspective, Cox explicates the role of historical structure in the formation of world orders, paying particular attention to hegemony. A structure is defined by its potentials in the form of material capabilities (technological, organizational, and natural resources) and ideas (historically conditioned intersubjective meanings and conflicting collective images of social order). Institutionalization, which reflects and entrenches the power relations evident when particular institutions arose, is linked to the Gramscian concept of hegemony. In a hegemonic structure, the dominant interests secure power by co-opting the weak as they express their leadership in terms of universal or general interests. These processes are not static; rather, they are limited totalities of a particular time and space which contain the dialectical possibility of change; that is, social forces, forms of state, and world orders can all be represented as a series of dominant and emergent rival structures. Social forces, hegemony, and imperialism interact as states mediate global and local social forces, establishing the political economy perspective in which power emerges from social forces; ideas, institutions and material capabilities are assessed on these three levels. Cox then discusses the internationalization of the state, as fragments of states evolved to become the primary units of interaction in developed states; this represents the ascendancy of state ministries as independent actors, while in the periphery power rests with international organizations. International production is engendering a global class structure which co-exists with national class structures, led by the transnational managerial class. Workers have also been fragmented into non-established and established, working respectively in international and national production, creating problems for social cohesion. Future world order prospects are presented in three hypothetical situations based on configurations of social forces, with varying implications for the state system. Firstly, there is the possibility of a new hegemony based on internationalized production, suggesting a continued primacy of international capital and interests in both the core and the periphery. Conversely, a non-hegemonic world structure of conflicting power centers may emerge if neo-mercantilism rises in the core, creating a climate of cooperation with a particular core state for each periphery country.
Finally, Cox does not rule out the possibility of a counter-hegemony based in the periphery, resulting in the termination of the core-periphery relationship, which is entirely contingent on increased development in the periphery. Cox's strength lies primarily in his thorough assessment of historical examples, without downplaying the role of history as neorealists do with their "picking historical facts out of a quarry" approach. Moreover, his re-orientation and reframing of international relations theory as a normative, emancipatory exercise establishes the discipline as a source of progress rather than a cottage industry justifying the status quo. Critical theory emphasizes the political aspect of political science, reminding students and observers that each theorist (or diplomat) must contend with their own personal and cultural prejudices, as human observers of politics cannot divorce themselves from their subject matter. Ultimately, critical theory's value rests with its reflexivity and hope for progress. Let us take an example to understand the applicability of this statement in a real-life scenario: climate change. With the example of climate change, the question is not to choose between problem-solving and critical theory. Problem-solving theory is practical and necessary, since it tells us how to proceed given certain conditions (for instance, the consequences to be expected, in terms of damage to the biosphere, from the carbon generated by certain forms of behavior). Critical theory broadens the scope of inquiry by analyzing the forces favoring or opposing changing patterns of behavior. In the example of climate change, problem-solving theory asks how to support a large and ever-increasing world population by industrial means, yet with a kind of energy that is not going to pollute the planet. This requires a lot of innovative thought and has to mobilize huge, reluctant and conservative social forces within a slow-moving established order with vested interests in the political and industrial complex surrounding existing energy sources. Problem-solving theory gives the opportunity to innovate and explore new forms of energy. Critical theory would take one step further and envisage a world order focused not just on humanity but on the whole of life, taking into account the web of relations in which humanity is only one part of our world. Humans have to come to terms with what it means to be part of the biosphere, and not just its dominant feature. In fact, it is a big problem of Western religion and modernist enlightenment thinking alike that nature is seen as created in the service of humans in the first, and as a force to be dominated in the second. Both Western religion and modernism have analytically disembedded humans from nature, turning nature into something to be dominated or into an abstracted factor of production. To rethink this, to make humans part of nature, implies seeing humans as an entity with a responsibility vis-à-vis the bigger world of which they are a part. Conclusion: One has to question the intent, the goal and the purposes of those who construct theories in specific historical situations. Broadly speaking, for any theory there are two possible purposes to serve. One is guiding the solving of problems posed within the particular context, the existing structure or the status quo. This leads to a problem-solving form of theory, which takes the existing context as given and seeks to make it work better.
The other, critical theory, is more reflective on the processes of change in historical structures, on the transformations and challenges arising within the complex of forces constituting the existing historical structure, the existing common sense of reality. Critical thinking then contemplates the possibility of an alternative. We need to know the context in which theory is produced and used, and we need to know whether the aim of the user is to maintain the existing social order or to change it. Cox's work has since inspired critical students of IR and International Political Economy to think beyond the boundaries of conventional theorizing and to investigate the premises that underpin and link international politics and academic reflection on it. Recognized by many as one of the world's most important thinkers in both IR and IPE, Cox assembles impressive and complex thinking stemming from history, philosophy, and geopolitics to illuminate how politics can never be separated from economics, how theory is always linked to practice, and how material relations and ideas are inextricably intertwined to co-produce world orders.

Thursday, September 19, 2019

The Message of Moral Responsibility in To Kill a Mockingbird :: Kill Mockingbird essays

Not only is To Kill a Mockingbird a fun novel to read, it is purposeful. Harper Lee wrote the novel to demonstrate the way in which the world and its people should live together in harmony, through a basic moral attitude of treating others with respect and kindness. The novel received the Pulitzer Prize in 1960, which places it among the best adult novels ever written; although it achieved this high recognition, today's primary readers are adolescents. However, at the turn of the twenty-first century, one might wrongfully assume Harper Lee intended To Kill a Mockingbird as a novel for adolescents and ignore its lessons for adults. According to "'Fine Fancy Gentlemen' and 'Yappy Folks': Contending Voices in To Kill a Mockingbird," by Theodore Hovet and Grace-Ann Hovet, Lee's work is important because she does not supply the normal assumptions most in America harbor regarding the origins of racism. To the contrary, they argue that "Rather than ascribing racial prejudice primarily to 'poor white trash' (qtd. in Newitz and Wray), Lee demonstrates how issues of gender and class intensify prejudice, silence the voices that might challenge the existing order, and greatly complicate many Americans' conception of the causes of racism and segregation" (67). Reading To Kill a Mockingbird provides its audience with a basic moral code by which to live and to encounter individuals who appear different or make choices unlike those made by the mainstream populace. Therefore, this novel becomes part of our moral culture; regardless of age, people learn from the moral codes taught by defense attorney Atticus Finch, his children, and his community. Using the backdrop of racial tension and an episode of southern living, Lee develops To Kill a Mockingbird to point out basic morals by which people should live. By combining a fictionalization of the historic Scottsboro Trial with the novel's use of the community to morally educate two children, Lee's characters demonstrate moral responsibility. In the first part of the novel, Lee establishes conflict as Atticus Finch, the father, and the surrounding community, through various situations and conversations, enlighten Jem and Scout Finch with lessons of moral ethics. The moral responsibility of others is to express kindness and respect to others in a world where people of different races, socioeconomic statuses, and cultures exist. In setting the tone, Lee establishes the mood through mentions of the Great Depression to remind her reader of the hardships the nation endured.

Wednesday, September 18, 2019

Hard Drives :: science

Hard drives have been around longer than you think. In 1956, IBM invented a disk storage unit that was very large but did not store a lot of data. It was twenty-four inches in diameter and could hold only five megabytes, the equivalent of about three and a half floppy disks. Originally called "fixed disks," they later became known as "hard disks," as opposed to floppy disks. In 1973, IBM released a hard drive that could hold seventeen and a half megabytes. In 1980, Seagate made the first five-and-one-quarter-inch hard disk. In the late 1980s, three-and-one-half-inch hard disks were invented (PCIN). Although there are smaller hard disks, as small as two inches in diameter, the three-and-one-half-inch hard disk has become the standard and is used most often today. The capacity of hard drives has grown thousands of times over, from five megabytes to one hundred sixty gigabytes (160,000 megabytes), the equivalent of about one hundred eleven thousand one hundred eleven floppy disks. The hard drive, or hard disk, is one of the most critical components in the operation of a computer. It is also one of the only moving parts in the computer. Sadly, many people do not know the important role it plays in the storage of their data, or how it even works. When you think of your hard drive, think of it as the computer's electronic filing cabinet. Everything you load, download, or save is stored on the hard drive. In fact, ten percent of your hard drive is already used when you purchase your computer, because it needs certain operating system files that are required to make the basics work. Everything you add later, such as word processors, antivirus software, e-mail software, games, and internet software, is extra, soon leading to an overstuffed filing cabinet (Matthew Ferrara Seminars). However, many people ask, "What is the hard drive, physically?" The hard drive can be commonly thought of as "a box." That is what it looks like: a three-and-one-half-inch metal box. It is located inside your computer case or tower, where it sits in what is called a drive bay and is secured with screws. On the bottom of the hard drive is a circuit board, which holds the really technical and complicated parts of the hard drive.
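To make the capacity comparisons concrete, here is a quick sketch of the arithmetic the essay walks through. The 1.44 MB figure is the standard capacity of a 3.5-inch floppy; the drive capacities are the ones quoted above.

```python
# Floppy-disk equivalents for the drive capacities quoted in the essay,
# assuming the standard 1.44 MB capacity of a 3.5-inch floppy disk.

FLOPPY_MB = 1.44

drives_mb = {
    "IBM disk storage unit (1956)": 5,      # five megabytes
    "IBM hard drive (1973)": 17.5,          # seventeen and a half megabytes
    "Large modern drive (essay)": 160_000,  # one hundred sixty gigabytes
}

for name, capacity_mb in drives_mb.items():
    floppies = capacity_mb / FLOPPY_MB
    print(f"{name}: {capacity_mb:,} MB is about {floppies:,.0f} floppy disks")
```

Running it reproduces the essay's figures: the 1956 unit holds roughly three and a half floppies' worth of data, while a 160 GB drive holds roughly 111,111 floppies' worth.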

Tuesday, September 17, 2019

John Steinbeck's East of Eden - The Character of Adam Trask :: East Eden Essays

The Character of Adam Trask in East of Eden

In Webster's Encyclopedic Unabridged Dictionary of the English Language, the word love is defined as a profoundly tender, passionate affection for another person. Love can bring two people together, but it can also cause a person to be rejected by another. In the novel East of Eden by John Steinbeck, the main character, Adam Trask, confronts feelings of love throughout the whole book, but he either rejects the love of people who care about him or has his love rejected by the people that he cares about. When Adam was a young man at the beginning of the novel, his father, Cyrus Trask, loved him, but Adam did not love him back, and when Adam went into the army he did not come back home until his father's death. Later on in the story, Adam really loved his wife, Cathy, but she didn't love him back, and so when she tried to leave him and he would not let her, she shot him. Even though Adam survived, he was demoralized for most of his life because he still loved her. Through Adam's experiences of love in the novel, John Steinbeck shows that Adam Trask has an inability to handle love. When he first appears in the novel, Adam Trask is a young man who is not loved by his brother or mother but only by his father. Cyrus had punished Adam before and had tried to teach him to be a soldier, and so Adam hated him for that, and when Cyrus told him he loved him, Adam did not accept his love. Cyrus tells Adam, "I think you're a weakling who will never amount to a dog turd. Does that answer your question? I love you better. I always have. This may be a bad thing to tell you, but it's true. I love you better. Else why would I have given myself the trouble of hurting you?" (Steinbeck 28). Cyrus is telling Adam that he has always loved him and that the only reason he punished him is because he loved him. He wants Adam to go into the army because he knows that Adam would be courageous, and since Cyrus was in the army, he wants to pass on the legacy. When Adam came home after his discharge, he and his brother were talking about their father, and Adam told him the truth.

Monday, September 16, 2019

Cosmic Education

Rachael Jacobson

Exiled to India during World War II, Maria Montessori developed one of the basic tenets of her philosophy of education: what she called cosmic education. In To Educate the Human Potential (ed. 2007, p. 9), Montessori said that "the stars, earth, stone, life of all kind form a whole in relation to each other, and so close is this relation that we cannot understand a stone without some understanding of the great sun". This interconnectedness, the interconnectedness of every element of the universe, is at the heart of cosmic education. As Dr. Montessori explains, "all things are part of the universe, and are connected with each other to form one whole unity. The idea helps the mind of the child to become focused, to stop wandering in an aimless quest for knowledge. He is satisfied, having found the universal center of himself with all things". Montessori believed that children who received a cosmic education would grow to have a clearer understanding of themselves because they had a better understanding of the natural world and their place in it. She also believed that children are much closer to nature than adults. Therefore, the ideas of cosmic education can be impressed upon them more easily, so that they can grow up with an appreciation and sense of wonder about the natural world and keep it as adults. An awareness of the interdependence between humans and the universe, and the sense of gratitude that comes from that awareness, are absolutely necessary if a child is to grow into a peaceful human being. Montessori believed that providing a cosmic education to children would be a means to this end, because children who are exposed to all the elements and forces of nature gain a sense of importance, purpose, and responsibility, which they carry into their adult lives. It was her belief that the future was in the hands of children and that their education would determine whether the future of humankind would be a peaceful one or one fraught with destruction, violence, and war. Cosmic education is held together by a "glue" known as The Great Lessons. The Great Lessons introduce the overall scope of cosmic education. There are five Great Lessons. "Montessori believed that storytelling was an ideal way to introduce knowledge to elementary children, engaging both their imaginations and their developing powers of reason". All of these lessons are accompanied by illustrations and charts, and many by scientific demonstrations. They are all told to the children in the first months of school, and are re-told each year to the returning children. They help children build a context for the knowledge that they will acquire throughout their years as EC, E1 and E2 students. The Five Great Lessons are:

1. The Coming of the Universe: This lesson introduces scientific thought on the origins of the universe and our own planet. Using charts and experiments, this first Great Lesson describes how minerals and chemicals formed the elements, how matter transforms among the three states of solid, liquid, and gas, how particles joined together and formed the earth, how heavier particles sank to the earth's core and volcanoes erupted, and how mountains were formed and the atmosphere condensed into rain, creating oceans, lakes, and rivers.
From this story, students are introduced to lessons in physics, astronomy, geology, and chemistry. For example, they learn about light, heat, convection currents, gravity, galaxies, planetary systems, the earth's crust, volcanoes, erosion, climate and physical geography.

2. The Coming of Life: This lesson represents the beginning of life on Earth, from the simplest forms through the appearance of human beings. The second Great Lesson explains how single-cell and multi-cell forms of life became embedded in the bottom of the sea and formed fossils. It traces the Paleozoic, Mesozoic, and Cenozoic periods, beginning with the kingdom of trilobites and ending with human beings. The teacher indicates on a timeline where vertebrates began, followed by fish and plants, then amphibians, reptiles, birds, and mammals. In this lesson students are introduced to the basics of zoology and botany.

3. The Coming of Human Beings: This lesson is an introduction to prehistory and history that continues the exploration of life on Earth, with an emphasis on the development of humans. The aim is for the children to imagine what life was like for early humans. This lesson is the basis for lessons in history and the development of ancient civilizations. Children also learn how climate and topography influenced various civilizations.

4. The Story of Our Alphabet: This story follows the development of writing from its appearance in primitive cultures to its role in modern times. From this lesson, students use grammar materials, which help them examine how language is put together, and refine capitalization and punctuation. Students are introduced to the study of the origin of English words from other languages, the meanings of prefixes and suffixes, and different forms of writing such as poetry and prose.

5. The Story of Our Numerals: This story emphasizes how human beings needed a language for their inventions to convey measurement and how things were made. It describes how number systems evolved throughout time and within different civilizations. This story is the basis for the children's learning of mathematics, which is integrated into all studies.

The first three stories are what Duffy (2002, p. 30) calls "the story of our origin and past," while the last two stories are illustrations of "human cultural accomplishments and the evolution of human ideas." Stoll Lillard (2005, p. 134) calls this "a core of impressionistic knowledge that is intended to inspire the child to learn more." The Great Lessons simultaneously raise and answer questions. How did the universe come to be? Our solar system? Our planet? Our oceans, lakes, mountains, forests, flowers, and animals? The Great Lessons help children see how interrelated all things are. They instill in children the understanding that all people are one and that we must all be our brother's keeper. Most importantly, The Great Lessons provide the child with a macro view of the world. Through the stories told in each of the five lessons, the child is introduced to "the big picture". Children become aware that the universe evolved over billions of years, and that it is based on the law and order through which all the plants, animals, and the rest of creation is maintained.
From that point, students are introduced to increasing levels of detail and complexity within these broad areas and gradually understand that they are part of this order and are participants in the ongoing life of the universe. Thus, The Great Lessons provide a springboard of sorts from which children can develop their individual interests and shape their own learning. The Great Lessons allow the child to move between macro and micro levels of knowledge. The basic premise of cosmic education maintains that no subject should be taught in isolation; rather, all elements of the curriculum are viewed as interdependent upon one another. The outcome of cosmic education is that children become thankful for the world around them and gain an understanding of their place in it. They begin to understand that they have been given many gifts from the past and present. They also develop wonder, gratitude, a sense of purpose, and a feeling of responsibility to others, to the earth, and to future generations. If young children grow up with love and respect and the knowledge that they matter, they have the best chance of growing up and meeting their full potential, no matter their circumstances.

References
Duffy, M. & D. (2002). Cosmic Education in the Montessori Elementary Classroom. Hollidaysburg: Parent Child Press.
Montessori, M. (ed. 2007). To Educate the Human Potential. Amsterdam: The Montessori Series.
Stoll Lillard, A. (2005). Montessori: The Science behind the Genius. Oxford: Oxford University Press.

Benefits of Social Networking

Social media sites do more good than bad. They allow people to reconnect and create relationships, show creative expression in a new medium, and bring people who share common interests together. Mark Zuckerberg said, "At Facebook, we build tools to help people connect with the people they want and share what they want, and by doing this we are extending people's capacity to build and maintain relationships." Social media sites allow people to create new relationships and give them the opportunity to reconnect with friends and family. Increasing communication, even without being able to see a person, strengthens a relationship. Mike Chalmers wrote an article in USA Today about military families using Facebook and Skype to contact their families. Army Maj. Thomas Murphy would Skype with his wife and two daughters almost daily during his year in Iraq. "You could break away from the monotony of everyday stress and feel like you're back home for a bit," said Murphy (Chalmers). The connection made his deployment more bearable and eased his return home, said his wife. Bianca Murphy said, "He was part of their day-to-day life, so there was no adjustment that this was some stranger in a uniform" (Chalmers). Some people have been able to keep friendships going after high school with social networking sites. Even though they can't see a person as much as they once did, they can see what's still going on in their life. They've also been able to start new friendships with the people they meet at college or work. Social media sites also allow for creative expression through blogging, messaging, photo storage, and much more. A.C. Lowney and T. O'Brien presented a case of a 30-year-old patient with pontine glioblastoma multiforme. On admission to the Specialist Palliative Care Inpatient Unit, he had a complete right hemiplegia. He would communicate with the staff by using the notepad function of his iPad, and he would also use his iPad to update his blog. He updated the blog on an almost daily basis, describing his physical and psychological status (Lowney). His blog also had messages of support from others with similar diagnoses. Blogging was this patient's way to express the existential distress he had felt since his diagnosis. He felt cheated of life, and being unable to hold his 1-year-old son was dreadful to him (Lowney). Social media sites are a great way to express thoughts and feelings. Blogging can help people emotionally heal by connecting them with people who have the same problems and can offer advice. Blogging is a creative way to inspire people (Lowney). Finally, social media sites have the ability to bring people with common interests together. Highlight "works by rummaging through your Facebook account to see whom you know and what topics you like" (McCracken). "It uses your iPhone's GPS to inform you when a fellow conference attendee who's a former co-worker's buddy is in your immediate vicinity or when a good-looking patron who loves the same bands you do sits down at the other end of the bar" (McCracken). Social media sites like Facebook give people the ability to click on pages they're interested in and see other people with the same interest. Also, people who have difficulty communicating in person can be more comfortable interacting over the sites (McCracken). In conclusion, social media sites do more good than bad.
They allow people to reconnect and create relationships, show creative expression in a new medium, and bring people together who share common interests. "The thing that we are trying to do at Facebook is just help people connect and communicate more efficiently" (Zuckerberg).

Sunday, September 15, 2019

How does dispute resolution save school districts money?

School districts involve multi-party stakeholders holding different, although interrelated, interests that can clash and cause disputes. Disputes are costly, pulling time away from other management tasks and consuming resources that could be put to better use in development projects. Dispute resolution can usher in cost savings, which is important given the limited resources of school districts. One way of achieving cost savings through dispute resolution is by mitigating the further impact of leaving a dispute to fester, preventing conditions from worsening. Dispute resolution means getting at the core or root of the problem and applying the appropriate solution to stop the impact and prevent the development of more serious problems (Burgess & Burgess, 1997). Doing so means not incurring any additional costs from the extended impact of disputes or from their worsening. Another way of achieving cost savings via resolving disputes is by building better relations among the parties involved in managing school districts and affected by the actions and decisions of school district administrators. The dispute resolution process reconciles differing interests to create collaborative relations (Deutsch, Coleman & Marcus, 2006). This settles the existing conflict and prevents future conflicts, which means cost savings on potential conflicts and the non-realization of contingency plans that require expenditures. Still another way that dispute resolution saves school districts money is by enhancing the experience of school districts in recognizing potential disputes and applying the appropriate solutions (Deutsch et al., 2006). This improves the efficiency of school districts not only in handling disputes but also in strategy development. Efficiency means cost effectiveness, or optimized outcomes for every input used. Dispute resolution ushers in cost savings for school districts as a pro-active strategy that mitigates costs, prevents further costs, and allocates costs to appropriate solutions.

References
Burgess, H., & Burgess, G. M. (1997). Encyclopedia of conflict resolution. Santa Barbara, CA: ABC-Clio Inc.
Deutsch, M., Coleman, P. T., & Marcus, E. C. (2006). The handbook of conflict resolution. San Francisco, CA: Jossey-Bass.

Saturday, September 14, 2019

Disruptive Technology

Abstract: The objective of this project is to explain the emergence of a disruptive technology in the IT industry that will enable and help organizations grow in a cost-effective manner. One of the hottest topics in today's IT corridors is the uses and benefits of virtualization technologies. IT companies all over the globe are executing virtualization for a diversity of business requirements, driven by prospects to improve server flexibility and decrease operational costs. InfoTech Solutions, being a dominant IT solution provider, can benefit broadly from implementing virtualization. This paper is intended to provide the complete details of virtualization, its advantages, and strategies for SMEs to migrate.

Introduction: The 2009 IT buzzword is 'virtualization'. Small, medium and large business organizations have seriously started to reorganize their e-business strategies around the successful disruptive technology of virtualization. Virtualization of business applications permits IT operations in organizations of all sizes to decrease costs, improve IT services and reduce risk. The most remarkable cost savings are the effect of diminished hardware, space and energy use, as well as the productivity gains that lead to further savings. In the small business sector, virtualization can be defined as a technology that permits application workloads to be maintained independent of host hardware. Several applications can share a single physical server. Workloads can be moved from one host to another without any downtime. IT infrastructure can be managed as a pool of resources, rather than as a collection of physical devices.

Disruptive Technology: A disruptive technology, or disruptive innovation, is an innovation that improves a product or service in ways the market does not expect, typically by reducing the price or changing the market dramatically. Christensen (2000) stated that "disruptive technologies are typically simpler, cheaper, and more reliable and convenient than established technologies" (p. 192). Before doing any research on disruptive technology, it is useful and necessary to summarize Christensen's notion of it. Christensen was hailed as a "guru" by the business press (Scherreik, 2000). His work has been broadly cited by scholars and researchers working in different disciplines and on topics such as new product development and marketing and management strategy. In his book "The Innovator's Dilemma" (Christensen 1997), Christensen made significant observations about the circumstances under which established companies lose markets to an entrant wielding what he called a disruptive technology. This theory became extremely influential in management decision making (Vaishnav, 2008). Christensen's arguments, drawn from his academic work (Christensen 1992; Christensen and Rosenbloom 1995; Christensen, Suarez et al. 1996) rather than from his famous paperbacks (Christensen 1997; Christensen and Raynor 2003), explain that the entrant can gain an advantage over the incumbent, and that understanding this requires understanding three important forces: technological capability (Henderson and Clark 1990), organizational dynamics (Anderson and Tushman 1990), and value (Christensen and Rosenbloom 1995).
He argued further that a company's competitive strategy, and mainly its earlier choices of markets to serve, determines its perceptions of economic value in new technology and shapes the rewards it will expect to obtain through innovation. Christensen (1995) classifies new technology into two types: sustaining and disruptive. Sustaining technology depends on incremental improvements to an already established technology, while disruptive technology is new and replaces an established technology unexpectedly. Disruptive technologies may lack refinement and often have performance problems, because they are fresh and may not yet have a proven practical application. It takes a lot of time and energy to create something new and innovative that will significantly influence the way that things are done. Most organizations are concerned with maintaining and sustaining their products and technologies instead of creating something new and different that may better the situation. They make minor modifications to improve the current product. These changes give a bit of new life to those products, so that sales increase temporarily and the technology survives a bit longer. Disruptive technologies generally emerge from outside the mainstream. For example, the light bulb was not invented by the candle industry seeking to improve its results. Normally, owners of established technology businesses tend to focus on incremental improvements to their existing products and try to avoid potential threats to their business (Techcom, 2004). Compared to sustaining products, disruptive technologies step off in various directions, coming up with ideas that would compete with products in current markets and could potentially replace the mainstream products being used. In that sense this is not merely disruption but innovation: not only replacing what we have now but improving on it, making things better, quicker, and, frequently, cooler. Whether disruptive or innovative, these technologies are turning the "future wave" into reality and slowly starting to occupy the world. On one hand, the threat of disruption makes incumbents anxious about losing their markets, while emerging entrants are confident of inventing the next disruptive technology. Perhaps such hopes and worries produce more competition in the marketplace. It seems that every year there is a laundry list of products and technologies that are going to "change the world as we know it." One that seems to have the potential to earn the title of a disruptive technology is something that has been around for a while now: virtualization. Gartner (2008) describes a disruptive technology as "causing major change in the accepted way of doing things, including business models, processes, revenue streams, industry dynamics and consumer behaviors", and virtualization is one of the top ten disruptive technologies listed by Gartner (Gartner.com). Virtualization technology is not new to the world. As computers became more common, though, it became obvious that simply time-sharing a single computer was not always ideal, because the system could be misused, intentionally or unintentionally, in ways that might bring the whole machine to a halt. To avoid this, the multi-system concept emerged. The multi-system concept provided a lot of advantages in the organizational environment, such as privacy, data security, performance and isolation.
For example, in an organizational environment it is often necessary to keep certain activities running on different systems. A testing application run on a system may sometimes halt the system or crash it completely, so it makes sense to run such an application on a separate system where it won't affect the network. Likewise, placing different applications on the same system may reduce performance, as they compete for the same available system resources: memory, network input/output, hard disk input/output and priority scheduling (Barham et al., 2003). The performance of the system and of the applications will be greatly improved if the applications are placed on different systems, so that each can have its own resources. Yet it is very difficult for most organizations to invest in multiple systems; at times it is hard to keep all the systems busy to their full potential, they are difficult to maintain, and the asset value keeps depreciating. So investing in multiple systems becomes wasteful at times, even though having multiple systems obviously has its own advantages. Considering this cost and waste, IBM introduced the first virtual machine in the 1960s, making one system behave as if it were multiple systems. At the start, this fresh technology allowed individuals to run multiple applications at the same time, increasing both personal productivity and the computer's ability to multitask. Along with the multitasking ability created by virtualization, it was also a great money saver. The multitasking ability of virtualization, which allowed computers to do more than one task at a time, became more valuable to companies as a way to leverage their investments completely (VMware.com). Virtualization is a hyped and much-discussed topic recently because of its potential. First, it has the capacity to use computer resources to better effect, maximizing the company's hardware investment; it is estimated that only 25% of total resources are utilized in an average data center. Through virtualization, a large number of older systems can be replaced by a few modern, reliable and scalable enterprise servers, reducing hardware and infrastructure costs significantly. And it is not just server consolidation: virtualization offers much more, such as the ability to suspend, resume, checkpoint, and migrate running workloads (Chesbrough 1999a, 1999b). It is exceptionally useful in handling long-running jobs: if a long-running job is assigned to a virtual machine with checkpoints enabled and the job stops or hangs, it can be restarted from where it stopped instead of starting from the beginning. The main difference between today's virtualization and the older mainframe age is that a workload can now be allocated to any location of the service's choice; these Distributed Virtual Machines open up a whole range of possibilities, like network monitoring, security policy validation and content distribution (Peterson et al., 2002). The way virtual technology breaks single-operating-system boundaries is what places it in the disruptive technology group. It allows users to run multiple applications in multiple operating systems on a single computer simultaneously (VMware.com, 2009). Basically, with this new approach a single physical server's hardware is abstracted into software that uses all the available hardware resources to create a virtual mirror of itself. The replications so created can be used as software-based computers to run multiple applications at the same time. These software-based computers have the complete attributes, such as RAM, CPU and NIC interfaces, of physical computers. The only difference is that there is one physical system instead of many, running different operating systems (VMware.com, 2009); these are called guest machines.
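To illustrate the checkpointing benefit described above, here is a minimal, hypothetical sketch of checkpoint-and-resume for a long-running job. The file name, step counter and `do_work` helper are invented for illustration; a real hypervisor checkpoints the entire guest machine state rather than application-level progress like this.

```python
import json
import os

CHECKPOINT = "job.checkpoint"  # hypothetical checkpoint file


def do_work(step: int) -> None:
    pass  # stand-in for one unit of the real workload


def run_long_job(total_steps: int = 1_000_000) -> None:
    # Resume from the last checkpoint if an earlier run stopped or hung.
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = json.load(f)["step"]

    for step in range(start, total_steps):
        do_work(step)
        if step % 10_000 == 0:  # checkpoint periodically
            with open(CHECKPOINT, "w") as f:
                json.dump({"step": step}, f)


run_long_job()
```

The point mirrors the essay's claim: after a crash, the job restarts from the last saved step instead of from the beginning.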
Virtual Machine Monitor: Guest virtual machines are hosted by a component called the Virtual Machine Monitor, or VMM, and the two go hand-in-hand. In practice, the VMM is referred to as the host and the hosted virtual machines are referred to as guests. The physical resources required by the guests are offered by the software layer of the VMM, or host. (A figure in the original illustrates the relationship between the VMM and its guests.) The VMM supplies the required virtual versions of the processor and of system devices such as I/O devices, storage, memory, etc. It also provides separation between the virtual machines and their host, so that issues in one cannot affect another.
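As a way to picture the host-guest relationship and the workload mobility described above, here is a minimal, hypothetical model. The class names (`Host`, `GuestVM`), the RAM-based capacity check and the `migrate` helper are illustrative assumptions, not any vendor's API.

```python
# Toy model of a VMM host running guest VMs, with a migrate operation
# standing in for live migration between physical hosts.

from dataclasses import dataclass, field


@dataclass
class GuestVM:
    name: str
    ram_gb: int


@dataclass
class Host:
    name: str
    ram_gb: int
    guests: list = field(default_factory=list)

    def free_ram(self) -> int:
        return self.ram_gb - sum(g.ram_gb for g in self.guests)

    def start(self, vm: GuestVM) -> None:
        if vm.ram_gb > self.free_ram():
            raise RuntimeError(f"{self.name}: not enough RAM for {vm.name}")
        self.guests.append(vm)


def migrate(vm: GuestVM, src: Host, dst: Host) -> None:
    """Move a running guest from one physical host to another."""
    dst.start(vm)          # admit on the destination first
    src.guests.remove(vm)  # then release on the source


# Two physical servers sharing a pool of guest workloads.
a, b = Host("host-a", ram_gb=64), Host("host-b", ram_gb=64)
web, db = GuestVM("web", 16), GuestVM("db", 32)
a.start(web)
a.start(db)
migrate(db, a, b)  # e.g. drain host-a for planned maintenance
print(a.free_ram(), b.free_ram())  # 48 32
```

The isolation point from the text shows up in the capacity check: a guest that does not fit is refused rather than degrading the guests already running on that host.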
As per a recent Springboard Research study, spending related to virtualization software and services will reach 1.5 billion US dollars by the end of 2010. The research also finds that 50% of CIOs are interested in deploying virtualization to overcome issues like poor system performance and low capacity utilization and to face the challenges of a growing IT infrastructure. TheInfoPro, a research company, states that more than 50% of new servers installed were based on virtualization, a number expected to grow to 80% by the end of 2012. Virtualization will be the highest-impact trend changing infrastructure and operations through 2012. According to Gartner, Inc. (2008), virtualization will renovate how IT is bought, planned, deployed and managed by companies. As a result, it is generating a fresh wave of competition among infrastructure vendors that will result in market negotiation and consolidation over the coming years. The market for PC virtualization is also booming rapidly, expected to grow to 660 million, compared to 5 million in 2007.

Virtualization strategy for mid-sized businesses: Virtualization has turned out to be a significant IT strategy for small and mid-sized business (SME) organizations. It not only offers cost savings but also answers business continuity issues and allows IT managers to:
• Manage and reduce the downtime caused by planned hardware maintenance, resulting in higher system availability.
• Test, investigate and execute disaster recovery plans.
• Secure the data, with non-destructive backup and restore processes.
• Check the stability of real-time workloads.
In these competitive, demanding times, SME organizations need to simplify their IT infrastructure and cut costs. However, with various storage, server and network requirements, and sometimes without sufficient physical space to store and maintain systems, a company's options can be restricted by both limited physical space and budget concerns. Virtualization can offer solutions for these kinds of issues, and SMEs can significantly benefit not only from server consolidation but also from affordable business continuity.

What is virtualization for mid-sized businesses? In the small business sector, virtualization can be defined as a technology that permits application workloads to be maintained independent of host hardware. Several applications can share a single physical server. Workloads can be moved from one host to another without any downtime. IT infrastructure can be managed as a pool of resources, rather than as a collection of physical devices. It is often assumed that virtualization is just for large enterprises, but in fact it is not: it is a widely established technology that decreases hardware requirements, increases use of hardware resources, modernizes management and diminishes energy consumption.

Economics of virtualization for the midmarket: Research cited by VMware.com (2009) shows that SMEs that invested in a virtualization strategy received their return on investment (ROI) in less than a year; in certain cases, this can be less than seven months with the latest Intel Xeon 5500 series processors (http://www-03.ibm.com/systems/resources/6412_Virtualization_Strategy_-_US_White_Paper_-_Apr_24-09.pdf, accessed 04/09/09). (An image in the source white paper shows how virtualization simplified a large utility company's infrastructure of 1,000 systems, with their racks and cables, into a dramatically simpler form.)
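The payback claim above can be made concrete with a back-of-the-envelope calculation. All of the figures below (server count, consolidation ratio, costs) are invented for illustration; they are chosen only so the result lands near the sub-seven-month payback the essay cites, and are not taken from the VMware or IBM studies.

```python
# Back-of-the-envelope payback period for a server-consolidation project.
# Every input here is a hypothetical illustration value.

physical_servers = 40     # existing under-utilized machines
consolidation = 8         # guest VMs per virtualized host (assumed ratio)
cost_per_host = 12_000    # new enterprise server plus licences, per host
yearly_run_cost = 3_000   # power, cooling, space and support, per machine/year

hosts_needed = -(-physical_servers // consolidation)  # ceiling division -> 5
investment = hosts_needed * cost_per_host             # 60,000
yearly_savings = (physical_servers - hosts_needed) * yearly_run_cost  # 105,000

payback_months = investment / (yearly_savings / 12)
print(f"{hosts_needed} hosts, payback in about {payback_months:.1f} months")
```

With these assumed numbers, 40 machines collapse onto 5 hosts and the investment pays back in roughly seven months, which is the shape of the argument the cited studies make.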
â€Å"Technological Discontinuities and Dominant Designs – a Cyclical Model of Technological-Change. † Administrative Science Quarterly 35(4): 604-633. 3. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the art of virtualization. In Proc. 19th SOSP, October 2003. 4. Chesbrough, Hen ry (1999a). Arrested Development: The Experience of European Hard-Disk-Drive Firms in Comparison with U. S. and Japanese Firms. Journal of Evolutionary Economics 9(3):287–329. 5. Chintan Vaishnav , (2008) Does Technology Disruption Always Mean Industry Disruption, Massachusetts Institute of Technology 6. Christensen, Clayton M. (2000). The Innovator’s Dilemma. When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press. 7. Christensen, C. M. (1992). â€Å"Exploring the limits of technology S-curve: Architecture Technologies. † Production and Operations Management 1(4). 8. Christensen, C. M. and R. S. Rosenbloom (1995). â€Å"Explaining the Attackers Advantage -Technological Paradigms, Organizational Dynamics, and the Value Network. † Research Policy 24(2): 233-257. . Christensen, C. M. , F. F. Suarez, et al. (1996). Strategies for survival in fast-changing industries. Cambridge, MA, International Center for Research on the Management 10. Christensen, C. M. (1992). â€Å"Exploring the limits of technology S-curve: Component Technologies. † Production and Operations Management 1(4). 11. Christensen, C. M. (1997). The innovator's dilemma : when new technologies cause great firms to fail. Boston, Mass. , Harvard Business School Press. 12. Christensen, C. M. and M. E. Raynor (2003). The innovator's solution : creating and sustaining successful growth. Boston, Mass. , Harvard Business School Press. 13. Cohan, Peter S. (2000). The Dilemma of the ‘‘Innovator’s Dilemma’’: Clayton Christensen’s Management Theories Are Suddenly All the Rage, but Are They Ripe for Disruption? Industry Standard, January 10, 2000. 14. Gartner Says; http://www. gartner. com/it/page. jsp? id=638207 [ accessed on 04/09/09] 15. Henderson, R. M. and K. B. Clark (1990). â€Å"Architectural Innovation – the Reconfiguration of Existing Product Technologies and the Failure of Established Firms. † Administrative Science Quarterly 35(1): 9-30. 16. MacMillan, Ian C. nd McGrath, Rita Gunther (2000). Technology Strategy in Lumpy Market Landscapes. In: Wharton on Managing Emerging Technologies. G. S. Day, P. J. H. Schoemaker, and R. E. Gunther (eds. ). New York: Wiley, 150–171. 17. Scherreik, Susan (2000). When a Guru Manages Money. Business Week, July 31, 2000. 18. L. Peterson, T. Anderson, D. Culler, and T. R oscoe, â€Å"A Blueprint for Introducing Disruptive Technology into the Internet,† in Proceedings of HotNets I, Princeton, NJ, October 2002. 19. â€Å"VirtualizationBasics. † VMWare. com. http://www. vmware. com/virtualization/ [Accessed on 04/09/09] Disruptive Technology One of the most consistent patterns in business is the failure of leading companies to stay at the top of their industries when technologies or markets change. Goodyear and Firestone entered the radial-tire market quite late. Xerox let Canon create the small-copier market. Bucyrus-Erie allowed Caterpillar and Deere to take over the mechanical excavator market. Sears gave way to Wal-Mart. The pattern of failure has been especially striking in the computer industry. 
IBM dominated the mainframe market but missed by years the emergence of minicomputers, which were technologically much simpler than mainframes. Digital Equipment dominated the minicomputer market with innovations like its VAX architecture but missed the personal-computer market almost completely. Apple Computer led the world of personal computing and established the standard for user-friendly computing but lagged five years behind the leaders in bringing its portable computer to market. Why is it that companies like these invest aggressively - and successfully - in the technologies necessary to retain their current customers but then fail to make certain other technological investments that customers of the future will demand? Undoubtedly, bureaucracy, arrogance, tired executive blood, poor planning, and short-term investment horizons have all played a role. But a more fundamental reason lies at the heart of the paradox: leading companies succumb to one of the most popular, and valuable, management dogmas. They stay close to their customers. Although most managers like to think they are in control, customers wield extraordinary power in directing a company's investments. Before managers decide to launch a technology, develop a product, build a plant, or establish new channels of distribution, they must look to their customers first: Do their customers want it? How big will the market be? Will the investment be profitable? The more astutely managers ask and answer these questions, the more completely their investments will be aligned with the needs of their customers. This is the way a well-managed company should operate. Right? But what happens when customers reject a new technology, product concept, or way of doing business because it does not address their needs as effectively as a company's current approach? The large photocopying centers that represented the core of Xerox's customer base at first had no use for small, slow tabletop copiers. The excavation contractors that had relied on Bucyrus-Erie's big-bucket steam- and diesel-powered cable shovels didn't want hydraulic excavators because, initially, they were small and weak. IBM's large commercial, government, and industrial customers saw no immediate use for minicomputers. In each instance, companies listened to their customers, gave them the product performance they were looking for, and, in the end, were hurt by the very technologies their customers led them to ignore. We have seen this pattern repeatedly in an ongoing study of leading companies in a variety of industries that have confronted technological change. The research shows that most well-managed, established companies are consistently ahead of their industries in developing and commercializing new technologies - from incremental improvements to radically new approaches - as long as those technologies address the next-generation performance needs of their customers. However, these same companies are rarely in the forefront of commercializing new technologies that don't initially meet the needs of mainstream customers and appeal only to small or emerging markets. Using the rational, analytical investment processes that most well-managed companies have developed, it is nearly impossible to build a cogent case for diverting resources from known customer needs in established markets to markets and customers that seem insignificant or do not yet exist.
After all, meeting the needs of established customers and fending off competitors takes all the resources a company has, and then some. In well-managed companies, the processes used to identify customers' needs, forecast technological trends, assess profitability, allocate resources across competing proposals for investment, and take new products to market are focused - for all the right reasons - on current customers and markets. These processes are designed to weed out proposed products and technologies that do not address customers' needs. In fact, the processes and incentives that companies use to keep focused on their main customers work so well that they blind those companies to important new technologies in emerging markets. Many companies have learned the hard way the perils of ignoring new technologies that do not initially meet the needs of mainstream customers. For example, although personal computers did not meet the requirements of mainstream minicomputer users in the early 1980s, the computing power of the desktop machines improved at a much faster rate than minicomputer users' demands for computing power did. As a result, personal computers caught up with the computing needs of many of the customers of Wang, Prime, Nixdorf, Data General, and Digital Equipment. Today they are performance-competitive with minicomputers in many applications. For the minicomputer makers, keeping close to mainstream customers and ignoring what were initially low-performance desktop technologies used by seemingly insignificant customers in emerging markets was a rational decision - but one that proved disastrous. The technological changes that damage established companies are usually not radically new or difficult from a technological point of view. They do, however, have two important characteristics: First, they typically present a different package of performance attributes - ones that, at least at the outset, are not valued by existing customers. Second, the performance attributes that existing customers do value improve at such a rapid rate that the new technology can later invade those established markets. Only at this point will mainstream customers want the technology. Unfortunately for the established suppliers, by then it is often too late: the pioneers of the new technology dominate the market. It follows, then, that senior executives must first be able to spot the technologies that seem to fall into this category. Next, to commercialize and develop the new technologies, managers must protect them from the processes and incentives that are geared to serving established customers. And the only way to protect them is to create organizations that are completely independent from the mainstream business. No industry demonstrates the perils of staying too close to customers more dramatically than the hard-disk-drive industry. Between 1976 and 1992, disk-drive performance improved at a stunning rate: the physical size of a 100-megabyte (MB) system shrank from 5,400 to 8 cubic inches, and the cost per MB fell from $560 to $5. Technological change, of course, drove these breathtaking achievements. About half of the improvement came from a host of radical advances that were critical to continued improvements in disk-drive performance; the other half came from incremental advances.
The pattern in the disk-drive industry has been repeated in many other industries: the leading, established companies have consistently led the industry in developing and adopting new technologies that their customers demanded - even when those technologies required completely different technological competencies and manufacturing capabilities from the ones the companies had. In spite of this aggressive technological posture, no single disk-drive manufacturer has been able to dominate the industry for more than a few years. A series of companies have entered the business and risen to prominence, only to be toppled by newcomers who pursued technologies that at first did not meet the needs of mainstream customers. As a result, not one of the independent disk-drive companies that existed in 1976 survives today. To explain the differences in the impact of certain kinds of technological innovations on a given industry, the concept of performance trajectories - the rate at which the performance of a product has improved, and is expected to improve, over time - can be helpful. Almost every industry has a critical performance trajectory. In mechanical excavators, the critical trajectory is the annual improvement in cubic yards of earth moved per minute. In photocopiers, an important performance trajectory is improvement in number of copies per minute. In disk drives, one crucial measure of performance is storage capacity, which has advanced 50% each year on average for a given size of drive. Different types of technological innovations affect performance trajectories in different ways. On the one hand, sustaining technologies tend to maintain a rate of improvement; that is, they give customers something more or better in the attributes they already value. For example, thin-film components in disk drives, which replaced conventional ferrite heads and oxide disks between 1982 and 1990, enabled information to be recorded more densely on disks. Engineers had been pushing the limits of the performance they could wring from ferrite heads and oxide disks, but the drives employing these technologies seemed to have reached the natural limits of an S curve. At that point, new thin-film technologies emerged that restored - or sustained - the historical trajectory of performance improvement. On the other hand, disruptive technologies introduce a very different package of attributes from the one mainstream customers historically value, and they often perform far worse along one or two dimensions that are particularly important to those customers. As a rule, mainstream customers are unwilling to use a disruptive product in applications they know and understand. At first, then, disruptive technologies tend to be used and valued only in new markets or new applications; in fact, they generally make possible the emergence of new markets. For example, Sony's early transistor radios sacrificed sound fidelity but created a market for portable radios by offering a new and different package of attributes - small size, light weight, and portability. In the history of the hard-disk-drive industry, the leaders stumbled at each point of disruptive technological change: when the diameter of disk drives shrank from the original 14 inches to 8 inches, then to 5.25 inches, and finally to 3.5 inches. Each of these new architectures initially offered the market substantially less storage capacity than the typical user in the established market required.
For example, the 8-inch drive offered 20 MB when it was introduced, while the primary market for disk drives at that time - mainframes - required 200 MB on average. Not surprisingly, the leading computer manufacturers rejected the 8-inch architecture at first. As a result, their suppliers, whose mainstream products consisted of 14-inch drives with more than 200 MB of capacity, did not pursue the disruptive products aggressively. The pattern was repeated when the 5.25-inch and 3.5-inch drives emerged: established computer makers rejected the drives as inadequate, and, in turn, their disk-drive suppliers ignored them as well. But while they offered less storage capacity, the disruptive architectures created other important attributes - internal power supplies and smaller size (8-inch drives); still smaller size and low-cost stepper motors (5.25-inch drives); and ruggedness, light weight, and low power consumption (3.5-inch drives). From the late 1970s to the mid-1980s, the availability of the three drives made possible the development of new markets for minicomputers, desktop PCs, and portable computers, respectively. Although the smaller drives represented disruptive technological change, each was technologically straightforward. In fact, there were engineers at many leading companies who championed the new technologies and built working prototypes with bootlegged resources before management gave a formal go-ahead. Still, the leading companies could not move the products through their organizations and into the market in a timely way. Each time a disruptive technology emerged, between one-half and two-thirds of the established manufacturers failed to introduce models employing the new architecture - in stark contrast to their timely launches of critical sustaining technologies. Those companies that finally did launch new models typically lagged behind entrant companies by two years - eons in an industry whose products' life cycles are often two years. Three waves of entrant companies led these revolutions; they first captured the new markets and then dethroned the leading companies in the mainstream markets. How could technologies that were initially inferior and useful only to new markets eventually threaten leading companies in established markets? Once the disruptive architectures became established in their new markets, sustaining innovations raised each architecture's performance along steep trajectories - so steep that the performance available from each architecture soon satisfied the needs of customers in the established markets. For example, the 5.25-inch drive, whose initial 5 MB of capacity in 1980 was only a fraction of the capacity that the minicomputer market needed, became fully performance-competitive in the minicomputer market by 1986 and in the mainframe market by 1991. (See the graph "How Disk-Drive Performance Met Market Needs.") A company's revenue and cost structures play a critical role in the way it evaluates proposed technological innovations. Generally, disruptive technologies look financially unattractive to established companies. The potential revenues from the discernible markets are small, and it is often difficult to project how big the markets for the technology will be over the long term. As a result, managers typically conclude that the technology cannot make a meaningful contribution to corporate growth and, therefore, that it is not worth the management effort required to develop it.
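Part of what makes those long-term projections so treacherous is the speed with which a disruptive architecture closes the capacity gap. A simple compound-growth sketch, in Python, makes the point; it reuses the 8-inch figures quoted above (20 MB at introduction against a 200 MB mainframe requirement) and the 50%-a-year capacity trajectory cited earlier, while the growth rate assumed for mainstream demand is purely illustrative.

# Rough estimate of how long a disruptive drive architecture takes to
# satisfy an established market, given compound capacity growth.
from math import log

disruptive_mb = 20.0   # 8-inch drive capacity at introduction (from the text)
mainstream_mb = 200.0  # average mainframe requirement at the time (from the text)
supply_growth = 0.50   # disruptive capacity trajectory, ~50% per year (from the text)
demand_growth = 0.15   # assumed growth of mainstream demand (illustrative)

# Solve disruptive_mb * (1+s)^t = mainstream_mb * (1+d)^t for t.
t = log(mainstream_mb / disruptive_mb) / log((1 + supply_growth) / (1 + demand_growth))
print(f"capacity gap closes in roughly {t:.1f} years")  # about 8.7 years

Even under generous assumptions about how fast mainstream demand grows, the steeper trajectory guarantees a crossover within a decade or so - precisely the dynamic that rational, customer-driven forecasts discount.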
In addition, established companies have often installed higher cost structures to serve sustaining technologies than those required by disruptive technologies. As a result, managers typically see themselves as having two choices when deciding whether to pursue disruptive technologies. One is to go downmarket and accept the lower profit margins of the emerging markets that the disruptive technologies will initially serve. The other is to go upmarket with sustaining technologies and enter market segments whose profit margins are alluringly high. (For example, the margins on IBM's mainframes are still higher than those on PCs.) Any rational resource-allocation process in companies serving established markets will choose going upmarket rather than going down. Managers of companies that have championed disruptive technologies in emerging markets look at the world quite differently. Without the high cost structures of their established counterparts, these companies find the emerging markets appealing. Once the companies have secured a foothold in the markets and improved the performance of their technologies, the established markets above them, served by high-cost suppliers, look appetizing. When they do attack, the entrant companies find the established players to be easy and unprepared opponents because the opponents have been looking upmarket themselves, discounting the threat from below. It is tempting to stop at this point and conclude that a valuable lesson has been learned: managers can avoid missing the next wave by paying careful attention to potentially disruptive technologies that do not meet current customers' needs. But recognizing the pattern and figuring out how to break it are two different things. Although entrants invaded established markets with new technologies three times in succession, none of the established leaders in the disk-drive industry seemed to learn from the experiences of those that fell before them. Management myopia or lack of foresight cannot explain these failures. The problem is that managers keep doing what has worked in the past: serving the rapidly growing needs of their current customers. The processes that successful, well-managed companies have developed to allocate resources among proposed investments are incapable of funneling resources into programs that current customers explicitly don't want and whose profit margins seem unattractive. Managing the development of new technology is tightly linked to a company's investment processes. Most strategic proposals - to add capacity or to develop new products or processes - take shape at the lower levels of organizations in engineering groups or project teams. Companies then use analytical planning and budgeting systems to select from among the candidates competing for funds. Proposals to create new businesses in emerging markets are particularly challenging to assess because they depend on notoriously unreliable estimates of market size. Because managers are evaluated on their ability to place the right bets, it is not surprising that in well-managed companies, mid- and top-level managers back projects in which the market seems assured. By staying close to lead customers, as they have been trained to do, managers focus resources on fulfilling the requirements of those reliable customers that can be served profitably. Risk is reduced - and careers are safeguarded - by giving known customers what they want.
Seagate Technology's experience illustrates the consequences of relying on such resource-allocation processes to evaluate disruptive technologies. By almost any measure, Seagate, based in Scotts Valley, California, was one of the most successful and aggressively managed companies in the history of the microelectronics industry: from its inception in 1980, Seagate's revenues had grown to more than $700 million by 1986. It had pioneered 5.25-inch hard-disk drives and was the main supplier of them to IBM and IBM-compatible personal-computer manufacturers. The company was the leading manufacturer of 5.25-inch drives at the time the disruptive 3.5-inch drives emerged in the mid-1980s. Engineers at Seagate were the second in the industry to develop working prototypes of 3.5-inch drives. By early 1985, they had made more than 80 such models with a low level of company funding. The engineers forwarded the new models to key marketing executives, and the trade press reported that Seagate was actively developing 3.5-inch drives. But Seagate's principal customers - IBM and other manufacturers of AT-class personal computers - showed no interest in the new drives. They wanted to incorporate 40-MB and 60-MB drives in their next-generation models, and Seagate's early 3.5-inch prototypes packed only 10 MB. In response, Seagate's marketing executives lowered their sales forecasts for the new disk drives. Manufacturing and financial executives at the company pointed out another drawback to the 3.5-inch drives. According to their analysis, the new drives would never be competitive with the 5.25-inch architecture on a cost-per-megabyte basis - an important metric that Seagate's customers used to evaluate disk drives. Given Seagate's cost structure, margins on the higher-capacity 5.25-inch models therefore promised to be much higher than those on the smaller products. Senior managers quite rationally decided that the 3.5-inch drive would not provide the sales volume and profit margins that Seagate needed from a new product. A former Seagate marketing executive recalled, "We needed a new model that could become the next ST412 [a 5.25-inch drive generating more than $300 million in annual sales, which was nearing the end of its life cycle]. At the time, the entire market for 3.5-inch drives was less than $50 million. The 3.5-inch drive just didn't fit the bill - for sales or profits." The shelving of the 3.5-inch drive was not a signal that Seagate was complacent about innovation. Seagate subsequently introduced new models of 5.25-inch drives at an accelerated rate and, in so doing, introduced an impressive array of sustaining technological improvements, even though introducing them rendered a significant portion of its manufacturing capacity obsolete.