Wednesday, October 30, 2019

Recommendation of process routes for the separation of LPG Essay

Recommendation of process routes for the separation of LPG - Essay Example To separate these two constituents, several processes can be used. This method involves recovering the light hydrocarbons propane and butane from their mixture in liquid petroleum gas and increasing their purity. It is based on distillation of the gas by controlled heating and cooling, taking advantage of the different boiling points of the hydrocarbons. Labelled fractionating columns are used where the hydrocarbons are separated by evaporation. Liquid petroleum gas, a mixture of propane and butane that makes up to 40% of natural gas, is extracted as a liquid mixture in a fractionating column (Zlokarnik, 2002). After its extraction from natural gas, the refrigerated liquid petroleum gas is passed through an absorber column where it is mixed with lean oil at a temperature of 238 degrees Celsius to allow the liquid petroleum gas products to be absorbed. This process is accelerated by elevated pressure and low temperature, and refrigeration of the liquid petroleum gas is maintained by circulating a refrigerant in a closed loop through centrifugal compressors. The liquid petroleum gas is precooled before it enters a de-ethanizer column operated at a pressure lower than that of the liquid petroleum gas. In the de-ethanizer, ethane and other lighter components are removed. A constant temperature is maintained in the column by a reboiler placed at its bottom to supply heat. The overhead vapour is recycled to recover any propane carried off with the evaporated gas. The residue is then passed through a rich oil still column where the lean oil is separated by distillation. The liquid petroleum gas that is recovered is condensed in a reflux condenser and then directed into fractionating columns. Depropanizer and debutanizer systems are used to separate the stabilized
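The route above hinges on the gap in volatility between propane and butane. As a rough, hedged illustration (not part of the original essay), the sketch below uses the Clausius-Clapeyron relation with approximate, assumed property values (normal boiling points and heats of vaporization) to estimate how much more volatile propane is than n-butane at a given temperature; the numbers are indicative only.

```python
import math

R = 8.314  # J/(mol*K), gas constant

# Approximate, assumed property values (illustrative only):
# normal boiling point Tb [K] and heat of vaporization dHvap [J/mol]
components = {
    "propane":  {"Tb": 231.1, "dHvap": 19_000},
    "n-butane": {"Tb": 272.7, "dHvap": 22_400},
}

def vapor_pressure(Tb, dHvap, T):
    """Clausius-Clapeyron estimate of vapor pressure (atm),
    anchored at 1 atm at the normal boiling point Tb."""
    return math.exp(-dHvap / R * (1.0 / T - 1.0 / Tb))

T = 300.0  # K, an arbitrary column temperature chosen for the comparison
p = {name: vapor_pressure(c["Tb"], c["dHvap"], T) for name, c in components.items()}
for name, pressure in p.items():
    print(f"{name}: ~{pressure:.1f} atm at {T:.0f} K")

# The ratio of vapor pressures (relative volatility) drives the split
print(f"approximate relative volatility (propane/butane): {p['propane'] / p['n-butane']:.1f}")
```

With propane several times more volatile than butane at the same temperature, a depropanizer can take propane overhead while butane leaves in the bottoms.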

Monday, October 28, 2019

Technology and Social Interaction Essay Example for Free

Technology and Social Interaction Essay Throughout the years technology has gotten more and more advanced. The better the technology, the easier it is for people to stay connected with each other. There are so many ways to contact a person now: you can call or text, email, or even video chat. Social interaction is getting much easier in today's world of technology. In the past 15 years, the Internet has transitioned from a medium that's interacted with strictly through desktop computers in homes, offices and computer labs to one that a growing number of people take with them everywhere they go. Whether via laptops, ever-evolving mobile phone devices or through Internet-connected workstations in the office and at home, many are online all the time (Margolis).

The cell phone has really improved over the past few decades. It used to look like a giant brick, but now phones seem to be getting smaller and smaller. Cell phones can do basically anything, such as call people, send text messages, or even go on the Internet. It is very easy to get into contact with people these days because almost everyone has a cell phone. The Pew Research Center found that half of American teenagers (defined in the study as ages 12 through 17) send 50 or more text messages a day and that one third send more than 100 a day (Stout). Cell phones even have games that people play against other people around the world.

Computers have also improved drastically over the decades. Almost everyone has a computer or access to one, which means they most likely have an email account. Email and instant messages are among the best ways to stay connected with others. Peer networking sites and instant messaging bring people geographically removed from one another into each other's lives more casually, making it a daily interactive stream (Margolis). Facebook is one of the most popular social networks on the Internet and keeps everyone in touch with everyone they know who is on the site. Social networking also allows people to share their daily lives and thoughts with one another through outbound messages and walls that signal all of their connections. Although some use social networking sites for limited communication, others spend large periods of their day communicating with contacts through computer and mobile social networking interfaces (Feigenbaum). Technology these days has really helped us have a social life with people without even being with them in person.

Technology has led people to write more and talk less. The ubiquitous and cheap nature of email has made it the backbone of business communication. Email has the advantage of being able to transmit information instantly to numerous recipients. With the increasing use of email-enabled phones and devices, people no longer have to be at their desks to communicate (Feigenbaum). People no longer have to wait for a letter to come in the mail; email is instantaneous.

A great way to connect with people through technology these days is video chatting, where you can talk to someone face to face over the computer or even a cell phone. It is basically a phone call, but you get to actually see the person you are talking to. Video chatting is getting much more popular these days; Facebook even added it to its site so everyone can do it. There is also a website called Chatroulette, which is just for video chatting with random people from across the world.
With the proliferation of built-in cameras and microphones on computers and mobile devices, broadband connections and program refinements, an average of 300 million minutes of Skype video calls are made daily around the world, an increase of about 900 percent from 2007, according to data provided by the company. Many more calls are made using other popular software programs, like FaceTime and Google Chat (Scelfo). I believe it is much easier to have a social life in this day and age than it was years back when technology was less advanced. There are so many opportunities now to get into contact with someone: basically everyone has a cell phone, a computer, an email account, and a way to video chat. It is remarkable how far technology has come, and there will be much more in the future ahead.

Saturday, October 26, 2019

Censorship Essay -- essays research papers

Families all over America spend evenings together watching t.v. This seems to be one of America's favorite pastimes. But with all the violence that is involved in television programs, the question arises of whether or not network television should be censored. It seems illogical for this censoring to take place. Network television should not be censored because of our freedom of speech rights, because more violence is on cable, and because it is the parents' responsibility to monitor what children are viewing, not the networks'. "I do not favor censorship and I am jealous of my First Amendment Rights" (Eron, pg. 617). To impose censorship on network television would most definitely take away our First Amendment right. We as Americans deserve the right to freedom of speech. Many people fought long and hard so that we, as Americans, have such rights as they are stated in the Bill of Rights. Starting censorship on network television may seem like a small threat to our rights, but it would become so much more. With censorship, television shows and producers would not be able to freely show what they want their viewers to see. It is, in turn, exactly the same as telling someone they cannot say something they wish to say. Censorship may stop our children from seeing violent acts on t.v., but in return it will take away one of our most precious rights as Americans. In addition to the loss of one of our basic rights, it seems illogical to censor network t...

Thursday, October 24, 2019

John Steinbeck “Of Mice and Men” Character Analysis Essay

When all of the ranch hands went into town, Lennie, Crooks, Candy, and Curley's wife were left behind. This was due to discrimination and prejudice. While the ranch hands were in town, the true similarities among the others really come out. One can see that they are left out and secluded due to the fact that each one of them has either a physical or mental disability, or is considered trouble. In John Steinbeck's novel, Lennie Small is a mentally handicapped man who traveled with George Milton. George had to speak for Lennie and do a lot of babysitting and thinking for him as well. Crooks, the Negro stable buck, had been injured when a horse kicked him. He had a hard time walking around because his back was hunched over and very sore. Because he is a "nigger", the boss had a room just for him in the barn. Candy, the swamper, is an old man who had his hand injured in an accident on the ranch, making him partially handicapped. The quote "the best damn sheep-dog I ever seen" shows how Candy has become useless, like his dog. Curley's wife is a young, flirty lady who is ignored by many of the ranchers because if they talked to her, they would get into trouble with Curley.

Over the course of the novel, there is a lot of sexual prejudice towards Curley's wife. Given that she lived on a ranch where the majority of people were men, she tended to get very lonely. The quote from George, "Ranch with a bunch of guys on it ain't no place for a girl", is an example of the prejudice towards Curley's wife. Another part of the sexual prejudice towards her is the fact that none of the ranch hands will talk to her. Overall the ranch hands neither trust nor understand her. Some of the sexual prejudice she experienced was her fault; she scared the ranch hands with her femininity, but she wasn't really a tart, she just craved the attention that she didn't get from Curley. Being ignored by both the ranch hands and Curley, she ended up very lonely; the one thing she wanted most was to escape.

When all things are considered, Lennie, Crooks, Candy, and Curley's wife are all left out due to a disability or for being a possible wick to start a fire. Lennie has a mental disability that slows down his thinking process. Nobody wanted Lennie to go into town because he might do something stupid. Crooks and Candy both have a physical disability. With their disabilities, the other ranchers see them as useless because Candy has no hand, and Crooks has a hunched back. The ranchers also exclude Crooks because he is a "nigger", and in that time period "niggers" were still considered to be trash, even though slavery had been abolished. Curley's wife is left out because of the fact that she is very flirty, and if she were to come along, Curley would be very irate. Crooks, Candy, and Curley's wife all suffer from discrimination and prejudice, which creates loneliness and isolation for each one of them. They learn to deal with their loneliness by admiring Lennie and George's friendship. Crooks experiences isolation because the society he resided in was racist. "A guy goes nuts if he ain't got nobody. Don't make no difference who the guy is, long's he's with you. I tell ya a guy gets too lonely an' he gets sick" was the way Crooks found a personal connection with Lennie, letting him know he understands how he feels when George is gone. Another quote, "Cause I'm black, they play cards in there but I can't play because I'm black. They say I stink.
Well I tell you, all of you stink to me" shows that Crooks would do anything to be accepted, but because of his color he has to refrain from the urge. Throughout the story, there is a lot of discrimination and prejudice. Lennie, Crooks, Candy, and Curley's wife all deal with getting left out while living their lives. Their similarities really show when they aren't able to go places and are excluded. At times when they were excluded, they came to one another to cope with their loneliness. Each one of them wanted someone to care about them, to own their own place, and to belong somewhere.

Wednesday, October 23, 2019

Learning machine Essay

The author believes that, like a learning machine, the human brain is capable of adapting to anything new regardless of the age of the person. She deduces this from a number of arguments in the form of research done from different points of view, all leading to her conclusion. The author is not describing other people's opinions; rather, she uses their arguments as the premises to arrive at the conclusion she states at the beginning of her excerpt. The author uses information on neural plasticity from a presentation by Gregg Recanzone using animals, Michael Merzenich's research on "shaping the machinery of our brains" using the elderly, and Alison Gopnik's research on plasticity of the brain in children and its connection to the logic of imagination, all from the University of California. Columbia University's Walter Mischel discussed the ability to control our desires based on the imaginations we attach to them, and Sir Michael Rutter of King's College made a presentation on the effects of early institutional deprivation[1]. To arrive at the conclusion, the author used the data from this research as the premises to support her conclusion. Though the author does not present the raw research data to support her argument, she uses the findings of the research as her arguments. The research was done correctly: there are various experiments where comparison was needed, and there was physical experimentation in the case study. The author shows only one side of the issue. The author draws on several independent studies, which makes the findings more reliable. The author makes a valid conclusion from data that was well suited to showing that plasticity and change in the brain are indeed a lifelong process. At any time, the conclusions made should be based on concrete and sound arguments. Arguments based on facts or previously proven research are valid arguments. To make a conclusion, one needs to provide valid arguments that are in line with the conclusion to be made. The conclusion should therefore be in tandem with the arguments presented, because there can be a situation where there are valid arguments but an invalid conclusion. In the excerpt, the author has used valid arguments, in the form of completed research, to arrive at the conclusion made. The conclusion is also valid. In making a good argument, there should be no assumptions made. In the excerpt, the argument that uses an animal species, the monkey, to draw conclusions about the human situation is challengeable; real research in the human context would have been the better line of action in the case study. The education sector in the United States today is a very good example. In The New York Times of 6 July 2010, the unions accuse the government of "undermining public education". Is the conclusion right? What is the basis of their conclusion? At the National Education Association's convention that began on Saturday, no one from the Obama administration is set to speak. This is despite the president having addressed the convention in each of the previous two years. They claim that they have not seen the change they hoped for from the government.
â€Å"Today our members face the most anti-educator, anti-union, anti-student environment I have ever experienced,† Dennis Van Roekel, president of the union, the National Education Association, told thousands of members gathered at the convention center.[2] The angered teachers are being blamed for the prevailing situations in the public schools. There is a connection in the article and except, the teachers unions are deriving a conclusion from the arguments that I have briefly summarized among the many more in the article. The concept of using valid arguments to arrive at a conclusion is utilized here. Like any other animal man is no exception to nature. Nature requires that the species adapt to survive. The human brain is the control system of the human body, this makes it the first to respond to the changes and hence give directive to the whole body in order to survive. As we grow, the rate of responsiveness to changes will decrease. The brain is an organ in the human body, all the body tissues are subjected to wear and tear and old age makes the body not able to replace the worn out tissues as fast as before. Therefore, the brain will have worn out tissues that will make it unable to adapt as first as before. BIBLIOGRAPHY Dillon, Sam.2010. Teachers’ Union Shuns Obama Aides at Convention. The New York Times July 6th, 2010. Retrieved on 6th July 2010 http://www.nytimes.com/2010/07/05/education/05teachers.html?_r=1&hpw Nelson, Leah.2006. A learning machine: Plasticity and change throughout life. Retrieved on July 6th, 2006 http://www.psychologicalscience.org/observer/getArticle.cfm?id=2029 [1] Leah Nelson. 2006.   A learning machine: Plasticity and change throughout life. Retrieved on 6th July 2006   http://www.psychologicalscience.org/observer/getArticle.cfm?id=2029   [2] Sam Dillon. 2010. Teachers’ Union Shuns Obama Aides at Convention. The New York Times 6th, July 2010. Retrieved on 6th July 2010 http://www.nytimes.com/2010/07/05/education/05teachers.html?_r=1&hpw

Tuesday, October 22, 2019

The Importance Of Sanskrit In Hinduism Theology Religion Essay Example

The Importance Of Sanskrit In Hinduism Theology Religion Essay The term "Hindu" originally meant an inhabitant of the Indus River region, where the earliest roots of Hinduism began. Hindu is usually applied only to members of the Hindu religious group; nevertheless, it may still refer to anyone from India. Hinduism is different from other religions, such as Christianity. It has no Pope and it has no hierarchy. Unlike many other religions, Hinduism has no particular founder; the founder of Christianity, for instance, is Jesus Christ. This religion is better viewed as the research of various men throughout the years, who were called Rishis and who were Christ-like masters. Originally, before the Persians gave the name Hinduism to this religion, it was called Sanatana Dharma, meaning righteousness. Beyond its name, Hinduism has gone through multiple changes and developments over the years.

There are two theories which explain how Hinduism started to develop in India. For a particular reason, both of these theories draw on the celebrated verse "Ekam Sat, Viprah Bahudha Vadanti" for their force. The first theory is the Indo-European Migration Theory, which arose after the relationship between Sanskrit, Greek and Latin was discovered. This theory states that at the end of the Indus Valley Civilization (around 1700 BCE) a number of Aryans migrated into northern India from central Europe and Asia Minor. According to this theory, the Aryans began to mix with the indigenous Dravidians. Eventually the Aryan religious stream, together with the indigenous stream, is what formed and started Hinduism. The second theory is the opposite of the first. It is the Out of India Theory, which states that Hinduism began out of India. There are even passages in the Mahabharata and other Hindu texts which support this idea. According to this theory, the Aryan culture was not developed by migrants or outside invaders but emerged through the Indus Valley civilisation itself. This theory has two beliefs. The first is that Hinduism's religious development was wholly original and new. Its second belief is that the similarities between the Sanskrit, Greek and Latin languages are the consequence of an Aryan migration out of India and into Europe. At that point, Aryan tribes from India started carrying their culture, language and religion throughout Europe. In the end, it is not really important whether the Aryans came from outside or inside India. Hinduism should be seen as a religion which was born some 3,000 years ago through the Aryan culture, according to the rule of "Ekam Sat, Viprah Bahudha Vadanti". The unifying force of this verse is what created the Hinduism of today.

Hinduism has a great many scriptures. The scriptures contain the history and culture of India. All Hindu scriptures are considered revealed truths of God; in fact, Hindu scripture states that all Hindu scriptures were written by God. The Vedas, meaning knowledge, are the first sacred books of Hinduism. There are four Vedas, which are meant to teach men the highest aspects of truth, which can lead them to God. The Vedas and Upanishads are Shruti scriptures. According to the Vedas, Self-Realization is the one goal of human life. The Vedas contain a detailed treatment of the rites and ceremonies which lead to the attainment of self-realization.
There are four Vedas: the Rig Veda, Yajur Veda, Sama Veda and Atharva Veda. The very first important book of the Hindus, the Rig Veda, states "Ekam Sat, Viprah Bahudha Vadanti", which means that there is only one truth even if men describe it differently. Hindus believe that there is One and only God and One Truth. This book is a collection of prayers and praises. All four Vedas convey different knowledge. For instance, the Rig Veda conveys the knowledge of hymns, the Yajur Veda the knowledge of liturgy, and the Sama Veda the knowledge of music, while the Atharva Veda conveys the knowledge given by the sage Atharvan. Hindus believe in One and only God, who is Brahman, and who can be expressed in various forms. According to the Hindus, God has no human or any other form; however, they believe that there is still nothing wrong with believing in a God with a name and form. In fact, in the Shruti scriptures of Hinduism, Brahman has been described both as Saguna Brahman and as Nirguna Brahman, God with attributes and God without attributes, respectively. In the Upanishads, God is described as Neti. Despite this, Hindus still believe that there is only One God. Lord Krishna stated, "Call me by whatever name you like; worship me in any form you like; all of it goes to the One and only Supreme Reality." Therefore, when a Hindu worships any form of God, he is really worshipping the One and only God, Brahman. Even in Christianity, although we believe in one and only God, He expresses himself in three different forms: Father, Son and the Holy Spirit.

Language and religion are inseparably related, as are Hinduism and Sanskrit. From the very beginning, Vedic thought has been expressed through the Sanskrit language. Therefore, Sanskrit forms the basis of Hindu civilisation. Sanskrit, literally meaning cultured or refined, is one of the richest and most systematic languages in the world, and is older than Hebrew and Latin. The first words in the English language came from Sanskrit. For instance, the word mother came from the Sanskrit word mata, and father came from the Sanskrit word pita. Forbes Magazine (July 1987) wrote: "Sanskrit is the mother of all the European languages." The literature and philosophy expressed in this language have a beauty and profundity that is unexcelled. As language changes, so does religion. Although the basis of Hinduism was formed by the vocabulary and syntax of Sanskrit, modern languages such as Hindi, Gujarati, Bengali, Telugu, Kannada and others are now the primary carriers of Hindu thought within India. The shift from Sanskrit to these languages produced not only a change in the meaning of words but also a change in how the religion was interpreted. In the last century, however, Hinduism started to emerge in the West in two different forms. One began in 1896 in Chicago, from where Swami Vivekananda, a Hindu religious teacher, traveled to England and other countries in Europe and created several followings. Swami Vivekananda was a trailblazer for most of the Hindu teachers who came to the West and are still coming today. Hindu holy men have brought a new set of Hindu vocabulary and thought to western civilisation.
The second important transplantation of Hinduism into the West has occurred through the increase in immigration of Hindus who were born in India and moved to the West. These members are actively engaged in building Hindu temples and other institutions in the West. As the popularity of Hinduism increases in the West, the emerging forms of this ancient tradition are being reflected through the medium of western languages, largely English. However, the meaning of words is not easily carried from one language to another. It is said that the more two languages are separated by geography, climate and latitude, the more the meanings of words shift and, ultimately, the more the worldview shifts. There is not a great deal of difference between Sanskrit and the Indian regional languages when compared to the difference between Sanskrit and a western language such as English. The problem of the Christianization of Hinduism is the difficulty of conveying Hinduism to the West. It is a natural error to approach Hinduism with Christian, Jewish or Islamic notions of God, soul, heaven, hell and sin in mind. We translate these notions into notions in Christian thought, such as Brahman as God, atman as soul, papa as sin and Dharma as religion. However, this is not correct: Brahman is not the same as God, atman is not the same as soul, papa is not sin, and Dharma is more than merely religion. When one is reading the sacred writings of a particular religion, for instance the Upanishads or the Bhagavad-Gita, one must read them on their own terms and not from the perspective of some other religion. Because Hinduism is being refracted through Christianity, Judaism and Islam, the theological uniqueness of Hinduism is becoming wholly lost. Ideally, anyone who is interested in Hinduism and would love to understand it must first have knowledge of the Sanskrit language. However, even the first generations of Hindu immigrants did not know Sanskrit; the Hinduism of these immigrants comes through the regional languages. In fact, Hinduism is still related very closely to its Sanskrit roots through the regional languages. The problem is that these languages are not being taught to the new generation, and eventually the regional languages of India will die out after one or two generations. This means that the second generation will lose their regional cultural roots and become more westernized. This problem of religious and cultural change can be addressed by identifying and creating a lexicon of religious Sanskrit words. This will eventually stop us from translating words such as Brahman, Dharma and papa; instead, these words will become part of the common spoken language when speaking of Hindu issues. This is already happening with the words karma, yoga and Dharma. They became part of common English speech, but not with their ultimate religious significance. These words are terms taken from the sacred scriptures of the Hindus, such as the Bhagavad-Gita and the ten major Upanishads. Some of the translations of Hindu terms are: Brahman refers to the Supreme Principle; everything which is created and absorbed is a production of Brahman. The word Brahman must not be confused with Brahma, the God of creation. Dharma is also derived from Sanskrit, meaning to hold up, to carry or to sustain.
The word Dharma refers to that which upholds or sustains existence. Human society, for example, is sustained and upheld by the Dharma performed by its members. In philosophy, Dharma refers to the defining quality of an object; for instance, coldness is a Dharma of ice. In this sense we can think of the being of an object as sustained or defined by its essential attributes, its Dharmas. Yoga, also derived from Sanskrit, means to join, to unite or to attach. We can think of yoga as the joining of the atma with the paramatma, the soul with God. There are numerous means of joining with God: through action, karma-yoga; through knowledge, jnana-yoga; through devotion, bhakti-yoga; through meditation, dhyana-yoga; and so on. Yoga has many other meanings. For example, in astronomy and astrology it refers to a conjunction (union) of planets. Papa is what brings one down; it is sometimes translated as sin or evil.

Monday, October 21, 2019

Extreme conditional value at risk a coherent scenario for risk management The WritePass Journal

CHAPTER ONE

1. INTRODUCTION

Extreme financial losses that occurred during the 2007-2008 financial crisis reignited questions of whether existing methodologies, which are largely based on the normal distribution, are adequate and suitable for the purpose of risk measurement and management. The major assumptions employed in these frameworks are that financial returns are independently and identically distributed, and follow the normal distribution. However, weaknesses in these methodologies have long been identified in the literature. Firstly, it is now widely accepted that financial returns are not normally distributed; they are asymmetric, skewed, leptokurtic and fat-tailed. Secondly, it is a known fact that financial returns exhibit volatility clustering, so the assumption of independently distributed returns is violated. The combined evidence concerning the stylized facts of financial returns necessitates adapting existing methodologies, or developing new ones, that account for all the stylised facts of financial returns explicitly. In this paper, I discuss two related measures of risk: extreme value-at-risk (EVaR) and extreme conditional value-at-risk (ECVaR). I argue that ECVaR is a better measure of extreme market risk than the EVaR utilised by Kabundi and Mwamba (2009), since it is coherent and captures the effects of extreme market events. In contrast, even though EVaR captures the effect of extreme market events, it is non-coherent.

1.1 BACKGROUND

Markowitz (1952), Roy (1952), Sharpe (1964), Black and Scholes (1973), and Merton (1973) developed modern portfolio theory (MPT) and the field of financial engineering with a toolkit consisting of means, variances, correlations and covariances of asset returns. In MPT, the variance, or equivalently the standard deviation, was the panacea measure of risk. A major assumption employed in this theory is that financial asset returns are normally distributed. Under this assumption, extreme market events rarely happen. When they do occur, risk managers can simply treat them as outliers and disregard them when modelling financial asset returns. The assumption of normally distributed asset returns is too simplistic for use in financial modelling of extreme market events. During extreme market activity similar to the 2007-2008 financial crisis, financial returns exhibit behavior that is beyond what the normal distribution can model.
Starting with the work of Mandelbrot (1963), there is increasingly convincing empirical evidence suggesting that asset returns are not normally distributed. They exhibit asymmetric behavior, 'fat tails' and higher kurtosis than the normal distribution can accommodate. The implication is that extreme negative returns do occur, and are more frequent than predicted by the normal distribution. Therefore, measures of risk based on the normal distribution will underestimate the risk of portfolios and lead to huge financial losses, and potentially insolvencies of financial institutions. To mitigate the effects of inadequate risk capital buffers stemming from underestimation of risk by normality-based financial modelling, risk measures such as EVaR that go beyond the assumption of normally distributed returns have been developed. However, EVaR is non-coherent, just like the VaR from which it is developed. The implication is that, even though it captures the effects of extreme market events, it is not a good measure of risk since it does not reflect diversification, a contradiction of one of the cornerstones of portfolio theory. ECVaR naturally overcomes these problems since it is coherent and can capture extreme market events.

1.2 RESEARCH PROBLEM

The purpose of this paper is to develop extreme conditional value-at-risk (ECVaR), and propose it as a better measure of risk than EVaR under conditions of extreme market activity with financial returns that exhibit volatility clustering and are not normally distributed. Kabundi and Mwamba (2009) have proposed EVaR as a better measure of extreme risk than the widely used VaR; however, it is non-coherent. ECVaR is coherent and captures the effect of extreme market activity, thus it is better suited to model extreme losses during market turmoil, and it reflects diversification, which is an important requirement for any risk measure in portfolio theory.

1.3 RELEVANCE OF THE STUDY

The assumption that financial asset returns are normally distributed understates the possibility of infrequent extreme events whose impact is more detrimental than that of events that are more frequent. Using VaR and CVaR under this assumption underestimates the riskiness of assets and portfolios, and eventually leads to huge losses and bankruptcies during times of extreme market activity. There are many adverse effects of using the normal distribution in the measurement of financial risk, the most visible being the loss of money due to underestimating risk. During the global financial crisis, a number of banks and non-financial institutions suffered huge financial losses; some went bankrupt and failed, partly because of inadequate capital allocation stemming from underestimation of risk by models that assumed normally distributed returns. Measures of risk that do not assume normality of financial returns have been developed. One such measure is EVaR (Kabundi and Mwamba (2009)). EVaR captures the effect of extreme market events; however, it is not coherent. As a result, EVaR is not a good measure of risk since it does not reflect diversification. In financial markets characterised by multiple sources of risk and extreme market volatility, it is important to have a risk measure that is coherent and can capture the effect of extreme market activity. ECVaR is advocated to fulfil this role of measuring extreme market risk while conforming to portfolio theory's wisdom of diversification.
1.4 RESEARCH DESIGN

Chapter 2 will present a literature review of risk measurement methodologies currently used by financial institutions, in particular VaR and CVaR. I also discuss the strengths and weaknesses of these measures. Another risk measure not widely known thus far is EVaR. We discuss EVaR as an advancement in risk measurement methodologies. I argue that EVaR is not a good measure of risk since it is non-coherent. This leads to the next chapter, which presents ECVaR as a better risk measure that is coherent and can capture extreme market events. Chapter 3 will be concerned with extreme conditional value-at-risk (ECVaR) as a convenient modelling framework that naturally overcomes the normality assumption of asset returns in the modelling of extreme market events. This is followed by a comparative analysis of EVaR and ECVaR using financial data covering both the pre-crisis and the financial crisis periods. Chapter 4 will be concerned with data sources, preliminary data description, and the estimation of EVaR and ECVaR. Chapter 5 will discuss the empirical results and the implications for risk measurement. Finally, chapter 6 will give conclusions and highlight directions for future research.

CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS

2.1 Risk Measurement in Finance: A Review of Its Origins

The concept of risk was known for many years before Markowitz's Portfolio Theory (MPT). Bernoulli (1738) solved the St. Petersburg paradox and derived fundamental insights into risk-averse behavior and the benefits of diversification. In his formulation of expected utility theory, Bernoulli did not define risk explicitly; however, he inferred it from the shape of the utility function (Butler et al. (2005:134); Brachinger and Weber (1997:236)). Irving Fisher (1906) suggested the use of variance to measure economic risk. Von Neumann and Morgenstern (1947) used expected utility theory in the analysis of games and consequently deduced much of the modern understanding of decision making under risk or uncertainty. Therefore, contrary to popular belief, the concept of risk was known well before MPT. Even though the concept of risk was known before MPT, Markowitz (1952) first provided a systematic algorithm to measure risk, using the variance in the formulation of the mean-variance model for which he won the Nobel Prize in 1990. The development of the mean-variance model inspired research in decision making under risk and the development of risk measures. The study of risk and decision making under uncertainty (which is treated the same as risk in most cases) stretches across disciplines. In decision science and psychology, Coombs and Pruitt (1960), Pruitt (1962), Coombs (1964), Coombs and Meyer (1969), and Coombs and Huang (1970a, 1970b) studied the perception of gambles and how their preference is affected by their perceived risk. In economics, finance and measurement theory, Markowitz (1952, 1959), Tobin (1958), Pratt (1964), Pollatsek and Tversky (1970), Luce (1980) and others investigate portfolio selection and the measurement of the risk of those portfolios, and of gambles in general. Their collective work produces a number of risk measures that vary in how they rank the riskiness of options, portfolios, or gambles. Though the risk measures vary, Pollatsek and Tversky (1970:541) recognise that they share the following: (1) Risk is regarded as a property of choosing among options.
(2) Options can be meaningfully ordered according to their riskiness. (3) As suggested by Irving Fisher in 1906, the risk of an option is somehow related to the variance or dispersion in its outcomes. In addition to these basic properties, Markowitz regards risk as a 'bad', implying something that is undesirable. Since Markowitz (1952), many risk measures such as the semi-variance, the absolute deviation, and the lower semi-variance (see Brachinger and Weber (1997)) were developed; however, the variance continued to dominate empirical finance. It was in the 1990s that a new measure, VaR, was popularised and became the industry-standard risk measure. I present this risk measure in the next section.

2.2 Value-at-risk (VaR)

2.2.1 Definition and concepts

Beyond these basic ideas concerning risk measures, there is no universally accepted definition of risk (Pollatsek and Tversky, 1970:541); as a result, risk measures continue to be developed. J.P. Morgan and Reuters (1996) pioneered a major breakthrough in the advancement of risk measurement with the use of value-at-risk (VaR), and the subsequent Basel Committee recommendation that banks could use it for their internal risk management. VaR is concerned with measuring the risk of a financial position due to the uncertainty regarding the future levels of interest rates, stock prices, commodity prices, and exchange rates. The risk resulting from the movement of these market factors is called market risk. VaR is the expected maximum loss of a financial position with a given level of confidence over a specified horizon. VaR provides an answer to the question: what is the maximum loss that I can lose over, say, the next ten days with 99 percent confidence? Put differently, what is the maximum loss that will be exceeded only one percent of the time over the next ten days?

I illustrate the computation of VaR using one of the available methods, namely parametric VaR. I denote by r_t the rate of return and by W_t the portfolio value at time t. Then W_t is given by

W_t = W_{t-1}(1 + r_t).   (1)

The actual loss (the negative of the profit, which is ΔW_t = W_t - W_{t-1}) is given by

L_t = -ΔW_t = -W_{t-1} r_t.   (2)

When r_t is normally distributed with mean μ and standard deviation σ (as is normally assumed), the variable Z = (r_t - μ)/σ has a standard normal distribution with mean of 0 and standard deviation of 1. We can calculate VaR from the following equation:

Pr(r_t ≤ μ + z_{1-c} σ) = 1 - c,   (3)

where c implies a confidence level. If we assume a 99% confidence level, we have

Pr(Z ≤ -2.33) = 0.01.   (4)

In (4) we have -2.33 as the standard normal quantile behind our VaR at the 99% confidence level, and we will exceed this VaR only 1% of the time. From (4), it can be shown that the 99% confidence VaR is given by

VaR_{0.99} = -(μ - 2.33 σ) W_{t-1}.   (5)

Generalising from (5), we can state the α-confidence VaR of the distribution as follows:

VaR_α = -(μ + z_{1-α} σ) W_{t-1},   (6)

where z_{1-α} is the (1-α)-quantile of the standard normal distribution. VaR is an intuitive measure of risk that can be easily implemented. This is evident in its wide use in the industry. However, is it an optimal measure? The next section addresses the limitations of VaR.
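As a small, hedged illustration of the parametric calculation in equations (1)-(6) (my own sketch, not part of the original paper), the snippet below estimates a one-day 99% VaR from a simulated return series; the return parameters and portfolio value are assumed purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed inputs: a simulated daily return series and a portfolio value
returns = rng.normal(loc=0.0005, scale=0.012, size=1000)  # illustrative returns
W = 1_000_000.0                                           # portfolio value, assumed

conf = 0.99                              # confidence level
mu, sigma = returns.mean(), returns.std(ddof=1)
z = norm.ppf(1 - conf)                   # standard normal quantile, about -2.33

# Parametric (normal) VaR as in equation (6): loss not exceeded with 99% confidence
var_99 = -(mu + z * sigma) * W
print(f"one-day 99% parametric VaR: {var_99:,.0f}")
```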
2.2.2 Limitations of VaR

Artzner et al. (1997, 1999) developed a set of axioms such that, if a risk measure satisfies them, that risk measure is 'coherent'. The implication of coherent measures of risk is that "it is not possible to assign a function for measuring risk unless it satisfies these axioms" (Mitra, 2009:8). Risk measures that satisfy these axioms can be considered universal and optimal since they are founded on the same generally accepted mathematical axioms. Artzner et al. (1997, 1999) put forward the first axioms of risk measures, and any risk measure that satisfies them is a coherent measure of risk. Let ρ be a risk measure defined on two portfolios X and Y. Then the risk measure is coherent if it satisfies the following axioms:

(1) Monotonicity: if X ≤ Y then ρ(X) ≥ ρ(Y). We interpret the monotonicity axiom to mean that higher losses are associated with higher risk.

(2) Homogeneity: ρ(λX) = λρ(X) for λ ≥ 0. Assuming that there is no liquidity risk, the homogeneity axiom means that risk scales proportionally with the quantity of a stock purchased; we cannot reduce or increase the risk per unit invested by investing different amounts in the same stock.

(3) Translation invariance: ρ(X + a) = ρ(X) - a, where a is an amount invested in a riskless security. This means that investing in a riskless asset does not increase risk; it reduces the risk of the position with certainty.

(4) Sub-additivity: ρ(X + Y) ≤ ρ(X) + ρ(Y). Possibly the most important axiom, sub-additivity ensures that a risk measure reflects diversification: the combined risk of two portfolios is no greater than the sum of the risks of the individual portfolios.

VaR does not satisfy the most important axiom of sub-additivity, thus it is non-coherent. Moreover, VaR tells us what we can expect to lose if an extreme event does not occur; it does not tell us the extent of the losses we can incur if a "tail" event occurs. VaR is therefore not an optimal measure of risk. The non-coherence, and therefore non-optimality, of VaR as a measure of risk led to the development of conditional value-at-risk (CVaR) by Artzner et al. (1997, 1999), and Uryasev and Rockafellar (1999). I discuss CVaR in the next section.

2.3 Conditional Value-at-Risk

CVaR is also known as "Expected Shortfall" (ES), "Tail VaR", or "Tail conditional expectation", and it measures risk beyond VaR. Yamai and Yoshiba (2002) define CVaR as the conditional expectation of losses given that the losses exceed VaR. Mathematically, CVaR is given by the following:

CVaR_α = E[L | L ≥ VaR_α].   (7)

CVaR offers more insight concerning risk than VaR in that it tells us what we can expect to lose if losses do exceed VaR. Unfortunately, the finance industry has been slow in adopting CVaR as its preferred risk measure. This is despite the fact that "the actuarial/insurance community has tended to pick up on developments in financial risk management much more quickly than financial risk managers have picked up on developments in actuarial science" (Dowd and Black (2006:194)). Hopefully, the effects of the financial crisis will change this observation. In much of the application of VaR and CVaR, returns have been assumed to be normally distributed. However, it is widely accepted that returns are not normally distributed. The implication is that VaR and CVaR, as currently used in finance, will not capture extreme losses. This will lead to underestimation of risk and inadequate capital allocation across business units; in times of market stress, when extra capital is required, it will be inadequate. This may lead to the insolvency of financial institutions. Methodologies that can capture extreme events are therefore needed. In the next section, I discuss the empirical evidence on financial returns, and thereafter discuss extreme value theory (EVT) as a suitable framework for modelling extreme losses.
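To make the distinction concrete, here is a short, hedged sketch (again mine, not the paper's) that estimates historical VaR and CVaR from an illustrative fat-tailed return series: CVaR is simply the average of the losses that exceed the VaR threshold, so it is always at least as large as VaR.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000) * 0.01   # illustrative fat-tailed returns
losses = -returns                                   # losses are negative returns

conf = 0.99
var_hist = np.quantile(losses, conf)                # historical VaR at 99%
cvar_hist = losses[losses >= var_hist].mean()       # CVaR: mean loss beyond VaR, eq. (7)

print(f"historical 99% VaR : {var_hist:.4f}")
print(f"historical 99% CVaR: {cvar_hist:.4f}  (always >= VaR)")
```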
2.4 The Empirical Distribution of Financial Returns

Back in 1947, Geary wrote, "Normality is a myth; there never was, and never will be a normal distribution" (as cited by Krishnaiah (1980:279)). Today this remark is supported by a voluminous amount of empirical evidence against normally distributed returns; nevertheless, normality continues to be the workhorse of empirical finance. If the normality assumption fails to pass empirical tests, why are practitioners so obsessed with the bell curve? Could their obsession be justified? To uncover some of the possible responses to these questions, let us first look at the importance of being normal, and then look at the dangers of incorrectly assuming normality.

2.4.1 The Importance of Being Normal

The normal distribution is the most widely used distribution in statistical analysis in all fields that utilise statistics to explain phenomena. The normal distribution can be assumed for a population, and it gives a rich set of mathematical results (Mardia, 1980:279). In other words, the mathematical representations are tractable and easy to implement. A population can simply be described by its mean and variance when the normal distribution is assumed. The overriding advantage is that the modelling process under the normality assumption is very simple. In fields that deal with natural phenomena, such as physics and geology, the normal distribution has unequivocally succeeded in explaining the variables of interest. The same cannot be said of the finance field. The normal probability distribution has been subject to rigorous empirical rejection. A number of stylized facts of asset returns, statistical tests of normality and the occurrence of extreme negative returns dispute the normal distribution as the underlying data-generating process for asset returns. We briefly discuss these empirical findings next.

2.4.2 Deviations From Normality

Ever since Mandelbrot (1963), Fama (1963), and Fama (1965), among others, it has been a known fact that asset returns are not normally distributed. The combined empirical evidence since the 1960s points to the following stylized facts of asset returns:

(1) Volatility clustering: periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by periods of low volatility.

(2) Autoregressive price changes: a price change depends on price changes in past periods.

(3) Skewness: positive price changes and negative price changes are not of the same magnitude.

(4) Fat tails: the probabilities of extreme negative (or positive) returns are much larger than predicted by the normal distribution.

(5) Time-varying tail thickness: more extreme losses occur during turbulent market activity than during normal market activity.

(6) Frequency-dependent fat tails: high-frequency data tend to be more fat-tailed than low-frequency data.

In addition to these stylized facts of asset returns, the extreme events of the 1974 German banking crisis, the 1978 banking crisis in Spain, the 1990s Japanese banking crisis, September 2001, and the 2007-2008 US experience (BIS, 2004) would be all but impossible under the normal distribution. Alternatively, we could just have treated them as outliers and disregarded them; however, experience has shown that even those who are obsessed with the Gaussian distribution could not ignore the detrimental effects of the 2007-2008 global financial crisis. With these empirical facts known to the quantitative finance community, what is the motivation for the continued use of the normality assumption? It could be that those who stick with the normality assumption know only how to deal with normally distributed data. It is their hammer; everything that comes their way seems like a nail!
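As a brief, hedged illustration of the normality tests mentioned above (my own sketch, not the author's), the following checks skewness, excess kurtosis and the Jarque-Bera statistic on an illustrative fat-tailed return series; real index returns would be substituted in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2500) * 0.01   # illustrative fat-tailed daily returns

skew = stats.skew(returns)
excess_kurt = stats.kurtosis(returns)              # 0 for a normal distribution
jb_stat, jb_pvalue = stats.jarque_bera(returns)

print(f"skewness        : {skew:.3f}")
print(f"excess kurtosis : {excess_kurt:.3f}")
print(f"Jarque-Bera     : {jb_stat:.1f} (p-value {jb_pvalue:.4f})")
# A tiny p-value rejects normality, consistent with the fat-tail stylized fact.
```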
As Esch (2010) notes, those that do have other tools to deal with non-normal data continue to use the normal distribution on the grounds of parsimony. However, "representativity should not be sacrificed for simplicity" (Fabozzi et al., 2011:4). Better modelling frameworks have been developed to deal with the extreme values that are characteristic of departures from normality. Extreme value theory is one such methodology; it has enjoyed success in fields outside finance and has been used to model financial losses with success. In the next chapter, I present extreme value-based methodologies as a practical and better way to overcome non-normality in asset returns.

CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK?

Extreme value theory was developed to model extreme natural phenomena such as floods, extreme winds and temperatures, and is well established in fields such as engineering, insurance, and climatology. It provides a convenient way to model the tails of distributions and thereby capture non-normal activity. Since it concentrates on the tails of distributions, it has been adopted to model asset returns in times of extreme market activity (see Embrechts et al. (1997); McNeil and Frey (2000); Danielsson and de Vries (2000)). Gilli and Kellezi (2003) point out two related ways of modelling extreme events. The first describes the maximum loss through a limit distribution known as the generalised extreme value (GEV) distribution, a family of asymptotic distributions that describe normalised maxima or minima. The second provides the asymptotic distribution of scaled excesses over high thresholds, and is known as the generalised Pareto distribution (GPD). The two limit distributions give rise to two approaches to EVT-based modelling: the block of maxima method and the peaks over threshold method, respectively.

3.1 The Block of Maxima Method

Let us consider independent and identically distributed (i.i.d.) random variables X_1, X_2, ..., X_n with common distribution function F. Let M_n = max(X_1, ..., X_n) be the maximum of the first n random variables, and let x_F denote the upper end point of F. The corresponding results for the minima can be obtained from the identity

min(X_1, ..., X_n) = -max(-X_1, ..., -X_n).   (8)

M_n almost surely converges to x_F, whether it is finite or infinite. Following Embrechts et al. (1997), and Shanbhag and Rao (2003), the limit theory finds norming constants c_n > 0 and d_n and a non-degenerate distribution function H in such a way that the distribution function of the normalized version of M_n converges to H as follows:

Pr((M_n - d_n)/c_n ≤ x) = F^n(c_n x + d_n) → H(x), as n → ∞.   (9)

H is an extreme value distribution function, and F is in the domain of attraction of H (written F ∈ MDA(H)) if equation (9) holds for suitable values of c_n and d_n. It can also be said that two extreme value distribution functions H and H* belong to the same family if H*(x) = H(ax + b) for some a > 0, b, and all x. Fisher and Tippett (1928), De Haan (1970, 1976), Weissman (1978), and Embrechts et al. (1997) show that the limit distribution function H belongs to one of the following three families, for some α > 0:

Fréchet:  Φ_α(x) = 0 for x ≤ 0, and Φ_α(x) = exp(-x^(-α)) for x > 0;   (10)

Weibull:  Ψ_α(x) = exp(-(-x)^α) for x ≤ 0, and Ψ_α(x) = 1 for x > 0;   (11)

Gumbel:  Λ(x) = exp(-exp(-x)) for all real x.   (12)

Any extreme value distribution can be classified as one of the three types in (10), (11) and (12). Φ_α, Ψ_α and Λ are the standard extreme value distributions, and the corresponding random variables are called standard extreme random variables. For alternative characterizations of the three distributions, see Nagaraja (1988), and Khan and Beg (1987).
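As a hedged illustration of the block of maxima method (my sketch, not the paper's code), the snippet below splits an illustrative daily loss series into non-overlapping blocks of roughly one trading month and keeps the maximum loss of each block; these block maxima are the sample to which the GEV distribution of the next section is fitted.

```python
import numpy as np

rng = np.random.default_rng(2)
losses = -(rng.standard_t(df=4, size=2520) * 0.01)   # illustrative daily losses (~10 years)

block_size = 21                                      # assumed block length: one trading month
n_blocks = losses.size // block_size
block_maxima = losses[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

print(f"{n_blocks} block maxima, largest = {block_maxima.max():.4f}")
```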
3.2 The Generalized Extreme Value Distribution

The three distribution functions given in (10), (11) and (12) above can be combined into one three-parameter distribution called the generalised extreme value (GEV) distribution, given by

H_{ξ;μ,σ}(x) = exp{ -[1 + ξ(x - μ)/σ]^(-1/ξ) },  with 1 + ξ(x - μ)/σ > 0,   (13)

where the case ξ = 0 is understood as the limit exp{-exp[-(x - μ)/σ]}. We denote the GEV by H_ξ, and the values ξ > 0, ξ < 0 and ξ = 0 give rise to the three distribution functions in (10)-(12). In equation (13), μ, σ and ξ represent the location parameter, the scale parameter, and the tail-shape parameter respectively. ξ > 0 corresponds to the Fréchet distribution, ξ < 0 corresponds to the Weibull distribution, and the case ξ = 0 reduces to the Gumbel distribution. To obtain estimates of (μ, σ, ξ) we use the maximum likelihood method, following Kabundi and Mwamba (2009). To start with, we fit the sample of maximum losses M_1, ..., M_m to a GEV. Thereafter, we use the maximum likelihood method to estimate the parameters of the GEV from the log-likelihood function given by

ln L(μ, σ, ξ) = -m ln σ - (1 + 1/ξ) Σ_{i=1}^{m} ln[1 + ξ(M_i - μ)/σ] - Σ_{i=1}^{m} [1 + ξ(M_i - μ)/σ]^(-1/ξ).   (14)

To obtain the estimates of (μ, σ, ξ) we take the partial derivatives of equation (14) with respect to μ, σ and ξ, and equate them to zero.

3.2.1 Extreme Value-at-Risk

EVaR, defined through the maximum likelihood quantile estimator of H_ξ, is by definition given by

EVaR_α = H⁻¹_{ξ;μ,σ}(α).   (15)

This quantity is the α-quantile of H_ξ, and I denote it as the α-percent VaR, specified as follows, following Kabundi and Mwamba (2009) and Embrechts et al. (1997):

EVaR_α = μ̂ + (σ̂/ξ̂)[(-ln α)^(-ξ̂) - 1].   (16)

Even though EVaR captures extreme losses, by extension from VaR it is non-coherent. As such, it cannot be used for the purpose of portfolio optimization since it does not reflect diversification. To overcome this problem, in the next section I extend CVaR to ECVaR so as to capture extreme losses coherently.

3.2.2 Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk

I extend ECVaR from EVaR in a similar manner to that used to extend CVaR from VaR. ECVaR can therefore be expressed as follows:

ECVaR_α = E[L | L ≥ EVaR_α].   (17)
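Continuing the hedged sketch above (again my own illustration, not the paper's implementation), the GEV of equation (13) can be fitted to the block maxima by maximum likelihood and then used to read off EVaR as the α-quantile in (15)-(16), with ECVaR as the conditional mean beyond it, here approximated by simulation from the fitted GEV. Note that scipy's genextreme uses a shape parameter c equal to -ξ.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
losses = -(rng.standard_t(df=4, size=2520) * 0.01)        # illustrative daily losses
block_maxima = losses[:2520].reshape(120, 21).max(axis=1)  # monthly block maxima, as before

# Maximum likelihood GEV fit; scipy parametrizes the shape as c = -xi
c_hat, mu_hat, sigma_hat = genextreme.fit(block_maxima)
xi_hat = -c_hat

alpha = 0.99
evar = genextreme.ppf(alpha, c_hat, loc=mu_hat, scale=sigma_hat)   # EVaR: alpha-quantile, eq. (15)

# ECVaR: expected exceedance beyond EVaR under the fitted GEV, eq. (17), via simulation
sims = genextreme.rvs(c_hat, loc=mu_hat, scale=sigma_hat, size=200_000, random_state=3)
ecvar = sims[sims >= evar].mean()

print(f"xi = {xi_hat:.3f}, EVaR_99 = {evar:.4f}, ECVaR_99 = {ecvar:.4f}")
```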
3   203–228 8 Bernoulli, D.: 1954, Exposition of a new theory on the measurement of risk, Econometrica 22 (1) 23-36, Translation of a paper originally published in Latin in St. Petersburg in 1738. 9 Butler, J.C., Dyer, J.S., and Jia, J.: 2005, An Empirical Investigation of the Assumption of Risk –Value Models. Journal of Risk and Uncertainty, vol. 30 (2), pp. 133-156. 10 Brachinger, H.W., and Weber, M.: 1997, Risk as a primitive: a survey of measures of perceived risk. OR Spektrum, Vol 19 () 235-250 [1] Fisher, I.: 1906, The nature of Capital and Income. Macmillan. 1[1] von Neumann, J. and Morgenstern, O.: 1947, Theory of games and economic behaviour, 2nd ed., Princeton University Press. [1]2 Coombs, C.H., and Pruitt, D.G.: 1960, Components of Risk in Decision Making: Probability and Variance preferences. Journal of Experimental Psychology, vol. 60 () pp. 265-277. [1]3 Pruitt, D.G.: 1962, Partten and Level of risk in Gambling Decisions. Psychological Review, vol. 69 ()( pp. 187-201. [1]4 Coombs, C.H.: 1964, A Theory of Data. New York: Wiley. [1]5   Coombs, C.H., and Meyer, D.E.: 1969, Risk preference in Coin-toss Games. Journal of Mathematical Psychology, vol. 6 () p 514-527. [1]6 Coombs, C.H., and Huang, L.C.: 1970a, Polynomial Psychophysics of Risk. Journal of Experimental psychology, vol 7 (), pp. 317-338. [1]7 Markowitz, H.M.: 1959, Portfolio Selection: Efficient diversification of Investment. Yale University Press, New Haven, USA. [1]8 Tobin, J. E.: 1958, liquidity preference as behavior towards risk. Review of Economic Studies p 65-86. [1]9 Pratt, J.W.: 1964, Risk Aversion in the Small and in the Large. Econometrica, vol. 32 () p 122-136. 20 Pollatsek, A. and Tversky, A.: 1970, A theory of Risk. Journal of Mathematical Psychology 7 (no issue) 540-553. 2[1] Luce, D. R.:1980, Several possible measures of risk. Theory and Decision 12 (no issue) 217-228. 22 J.P. Morgan and Reuters.: 1996, RiskMetrics Technical document. Available at http://riskmetrics.comrmcovv.html Accessed†¦ 23 Uryasev, S., and Rockafeller, R.T.: 1999, Optimization of Conditional Value-at-Risk. Available at gloriamundi.org 24 Mitra, S.: 2009, Risk measures in Quantitative Finance. Available on line. [Accessed†¦] 25 Geary, R.C.: 1947, Testing for Normality, Biometrika, vol. 34, pp. 209-242. 26 Mardia, K.V.: 1980, P.R. Krishnaiah, ed., Handbook of Statistics, Vol. 1. North-Holland Publishing Company. Pp. 279-320. 27 Mandelbrot, B.: 1963, The variation of certain speculative prices. Journal of Business, vol. 26, pp. 394-419. 28 Fama, E.: 1963, Mandelbrot and the stable paretian hypothesis. Journal of Business, vol. 36, pp. 420-429. 29 Fama, E.: 1965, The behavior of stock market prices. Journal of Business, vol. 38, pp. 34-105. 30 Esch, D.: 2010, Non-Normality facts and fallacies. Journal of Investment Management, vol. 8 (1), pp. 49-61. 3[1] Stoyanov, S.V., Rachev, S., Racheva-Iotova, B., Fabozzi, F.J.: 2011, Fat-tailed Models for Risk Estimation. Journal of Portfolio Management, vol. 37 (2). Available at iijournals.com/doi/abs/10.3905/jpm.2011.37.2.107 32 Embrechts, P., Uppelberg, C.K.L, and T. Mikosch.: 1997, Modeling extremal events for insurance and finance, Springer 33 McNeil, A. and Frey, R.: 2000, Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach, Journal of Empirical Finance, Volume 7, Issues 3-4, 271- 300. 34 Danielsson, J. and de Vries, C.: 2000, Value-at-Risk and Extreme Returns, Annales dEconomie et deb Statistique, Volume 60, 239-270. 
[36] Gilli, G., and Kellezi, E.: 2003, An Application of Extreme Value Theory for Measuring Risk. Department of Econometrics, University of Geneva, Switzerland. Available from: gloriamundi.org/picsresources/mgek.pdf.
[37] Shanbhag, D.N., and Rao, C.R.: 2003, Extreme Value Theory, Models and Simulation. Handbook of Statistics, Vol. 21. Elsevier Science B.V.
[38] Fisher, R.A., and Tippett, L.H.C.: 1928, Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample. Proc. Cambridge Philos. Soc., Vol. 24, pp. 180-190.
[39] De Haan, L.: 1970, On Regular Variation and Its Application to the Weak Convergence of Sample Extremes. Mathematical Centre Tract, Vol. 32. Mathematisch Centrum, Amsterdam.
[40] De Haan, L.: 1976, Sample Extremes: An Elementary Introduction. Statistica Neerlandica, Vol. 30, pp. 161-172.
[41] Weissman, I.: 1978, Estimation of Parameters and Large Quantiles Based on the k Largest Observations. Journal of the American Statistical Association, Vol. 73, pp. 812-815.
[42] Nagaraja, H.N.: 1988, Some Characterizations of Continuous Distributions Based on Regressions of Adjacent Order Statistics and Record Values. Sankhyā A, Vol. 50, pp. 70-73.
[43] Khan, A.H., and Beg, M.I.: 1987, Characterization of the Weibull Distribution by Conditional Variance. Sankhyā A, Vol. 49, pp. 268-271.
[44] Kabundi, A., and Mwamba, J.W.M.: 2009, Extreme Value at Risk: A Scenario for Risk Management. South African Journal of Economics (SAJE), forthcoming.

Sunday, October 20, 2019

Seymour Cray and the Supercomputer

Seymour Cray and the Supercomputer

Many of us are familiar with computers. You're likely using one now to read this blog post, as devices such as laptops, smartphones and tablets are built on essentially the same underlying computing technology. Supercomputers, on the other hand, are somewhat esoteric, as they're often thought of as hulking, costly, energy-sucking machines developed, by and large, for government institutions, research centers and large firms. Take for instance China's Sunway TaihuLight, currently the world's fastest supercomputer according to the Top500 supercomputer rankings. It comprises 41,000 chips (the processors alone weigh over 150 tons), cost about $270 million and has a power rating of 15,371 kW. On the plus side, however, it's capable of performing quadrillions of calculations per second and can store up to 100 million books. And like other supercomputers, it'll be used to tackle some of the most complex tasks in the fields of science, such as weather forecasting and drug research.

When Supercomputers Were Invented

The notion of a supercomputer first arose in the 1960s, when an electrical engineer named Seymour Cray embarked on creating the world's fastest computer. Cray, considered the "father of supercomputing," had left his post at business computing giant Sperry-Rand to join the newly formed Control Data Corporation so that he could focus on developing scientific computers. The title of world's fastest computer was held at the time by the IBM 7030 "Stretch," one of the first to use transistors instead of vacuum tubes. In 1964, Cray introduced the CDC 6600, which featured innovations such as switching out germanium transistors in favor of silicon and a Freon-based cooling system. More importantly, it ran at a speed of 40 MHz, executing roughly three million floating-point operations per second, which made it the fastest computer in the world. Often considered to be the world's first supercomputer, the CDC 6600 was 10 times faster than most computers and three times faster than the IBM 7030 Stretch. The title was eventually relinquished in 1969 to its successor, the CDC 7600.

Seymour Cray Goes Solo

In 1972, Cray left Control Data Corporation to form his own company, Cray Research. After some time raising seed capital and financing from investors, Cray debuted the Cray 1, which again raised the bar for computer performance by a wide margin. The new system ran at a clock speed of 80 MHz and performed 136 million floating-point operations per second (136 megaflops). Other unique features included a newer type of processor (vector processing) and a speed-optimized, horseshoe-shaped design that minimized the length of the circuits. The Cray 1 was installed at Los Alamos National Laboratory in 1976. By the 1980s Cray had established himself as the preeminent name in supercomputing, and any new release was widely expected to topple his previous efforts. So while Cray was busy working on a successor to the Cray 1, a separate team at the company put out the Cray X-MP, a model billed as a more "cleaned up" version of the Cray 1. It shared the same horseshoe-shaped design but boasted multiple processors and shared memory, and is sometimes described as two Cray 1s linked together as one. The Cray X-MP (800 megaflops) was one of the first "multiprocessor" designs and helped open the door to parallel processing, wherein computing tasks are split into parts and executed simultaneously by different processors.
The Cray X-MP, which was continually updated, served as the standard bearer until the long-anticipated launch of the Cray 2 in 1985. Like its predecessors, Cray's latest and greatest took on the same horseshoe-shaped design and basic layout, with integrated circuits stacked together on logic boards. This time, however, the components were crammed so tightly that the computer had to be immersed in a liquid cooling system to dissipate the heat. The Cray 2 came equipped with eight processors, with a "foreground processor" in charge of handling storage and memory and giving instructions to the "background processors," which were tasked with the actual computation. Altogether, it packed a processing speed of 1.9 billion floating-point operations per second (1.9 gigaflops), two times faster than the Cray X-MP.

More Computer Designers Emerge

Needless to say, Cray and his designs ruled the early era of the supercomputer. But he wasn't the only one advancing the field. The early '80s also saw the emergence of massively parallel computers, powered by thousands of processors all working in tandem to smash through performance barriers. Some of the first multiprocessor systems were created by W. Daniel Hillis, who came up with the idea as a graduate student at the Massachusetts Institute of Technology. The goal at the time was to overcome the speed limitations of having a CPU direct computations among the other processors by developing a decentralized network of processors that functioned similarly to the brain's neural network. His implemented solution, introduced in 1985 as the Connection Machine or CM-1, featured 65,536 interconnected single-bit processors.

The early '90s marked the beginning of the end for Cray's stranglehold on supercomputing. By then, the supercomputing pioneer had split off from Cray Research to form Cray Computer Corporation. Things started to go south for the company when the Cray 3 project, the intended successor to the Cray 2, ran into a whole host of problems. One of Cray's major mistakes was opting for gallium arsenide semiconductors, a newer technology, as a way to achieve his stated goal of a twelvefold improvement in processing speed. Ultimately, the difficulty in producing them, along with other technical complications, ended up delaying the project for years and resulted in many of the company's potential customers eventually losing interest. Before long, the company ran out of money and filed for bankruptcy in 1995.

Cray's struggles would give way to a changing of the guard of sorts, as competing Japanese computing systems would come to dominate the field for much of the decade. Tokyo-based NEC Corporation first came onto the scene in 1989 with the SX-3, and a year later unveiled a four-processor version that took over as the world's fastest computer, only to be eclipsed in 1993. That year, Fujitsu's Numerical Wind Tunnel, with the brute force of 166 vector processors, became the first supercomputer to surpass 100 gigaflops. (Side note: to give you an idea of how rapidly the technology advances, the fastest consumer processors in 2016 can easily do more than 100 gigaflops, but at the time it was particularly impressive.) In 1996, the Hitachi SR2201 upped the ante with 2,048 processors to reach a peak performance of 600 gigaflops.

Intel Joins the Race

Now, where was Intel? The company that had established itself as the consumer market's leading chipmaker didn't really make a splash in the realm of supercomputing until towards the end of the century.
This was because the technologies were altogether very different animals. Supercomputers, for instance, were designed to jam in as much processing power as possible, while personal computers were all about squeezing efficiency out of minimal cooling capabilities and a limited energy supply. So in 1993 Intel engineers finally took the plunge by taking the bold approach of going massively parallel with the 3,680-processor Intel XP/S 140 Paragon, which by June of 1994 had climbed to the summit of the supercomputer rankings. It was the first massively parallel processor supercomputer to be indisputably the fastest system in the world.

Up to this point, supercomputing had been mainly the domain of those with the kind of deep pockets needed to fund such ambitious projects. That all changed in 1994, when contractors at NASA's Goddard Space Flight Center, who didn't have that kind of luxury, came up with a clever way to harness the power of parallel computing by linking and configuring a series of personal computers using an ethernet network. The "Beowulf cluster" system they developed comprised 16 486DX processors, was capable of operating in the gigaflops range and cost less than $50,000 to build. It also had the distinction of running Linux rather than Unix, before Linux became the operating system of choice for supercomputers. Pretty soon, do-it-yourselfers everywhere were following similar blueprints to set up their own Beowulf clusters.

After relinquishing the title in 1996 to the Hitachi SR2201, Intel came back that year with a design based on the Paragon called ASCI Red, which comprised more than 6,000 200 MHz Pentium Pro processors. Despite moving away from vector processors in favor of off-the-shelf components, the ASCI Red gained the distinction of being the first computer to break the one trillion flops barrier (1 teraflops). By 1999, upgrades enabled it to surpass three trillion flops (3 teraflops). The ASCI Red was installed at Sandia National Laboratories and was used primarily to simulate nuclear explosions and assist in the maintenance of the country's nuclear arsenal.

After Japan retook the supercomputing lead for a period with the 35.9-teraflops NEC Earth Simulator, IBM brought supercomputing to unprecedented heights starting in 2004 with the Blue Gene/L. That year, IBM debuted a prototype that just barely edged the Earth Simulator (36 teraflops). And by 2007, engineers would ramp up the hardware to push its processing capability to a peak of nearly 600 teraflops. Interestingly, the team was able to reach such speeds by using more chips that were relatively low power but more energy efficient. In 2008, IBM broke ground again when it switched on the Roadrunner, the first supercomputer to exceed one quadrillion floating-point operations per second (1 petaflops).

Saturday, October 19, 2019

Right to Die Essay Example | Topics and Well Written Essays - 3250 words

Right to Die - Essay Example

They are against organizations and people who believe that everyone has an intrinsic right and autonomy to choose life or death under any circumstances, especially in the face of emotional and physical suffering. People who choose to end their lives under any circumstances have a choice of being euthanized in hospital settings or seeking the help of physicians to commit suicide. Euthanasia is the compassionate, painless killing of an individual. This service is available to people who have terminal, painful and debilitating diseases or handicaps, with death being the only hope for them. People who choose death can opt for active euthanasia, refuse life-prolonging treatments, or choose to be assisted to commit suicide. These services are governed by various legal requirements, including the patient's state of mind and the reasons why they choose to die. Active euthanasia is the deliberate act by a doctor to end a person's life by use of lethal medicines; passive euthanasia is the withdrawal of life-saving treatments and the nourishment that sustains life. Euthanasia is voluntary and must be requested by the patient orally or through written requests. Immediate family members, or people granted power of attorney by patients, may also request the service if the patient is mentally incapacitated, clinically brain dead, or in a persistent vegetative state (PVS). Doctors and physicians who administer euthanasia or assist patients who have chosen death over treatment are persecuted by some sections of society, even in countries that have legalized euthanasia. Some have had their licenses revoked and were further punished by jail terms, without the consideration that euthanasia takes place on compassionate grounds.

Background

People are increasingly choosing to die when medical conditions become unmanageable and they suffer too much emotional and physical pain. This choice is communicated through both oral and written requests when one is fully competent, or alternatively through pre-written wills by competent people who direct that they be put to death in the event that they lose their mental faculties due to disease or accident. People who write advance directives may give instructions on what should be done in case a disease or accident makes them incompetent. Thus, they can refuse life-prolonging treatments using life-support machines, or request active euthanasia when their diseases make them incompetent, incapacitated or virtually dependent on others for survival. A person may choose death driven by hatred of the helplessness and dependence that make the quality of life poor. When in this state, many people refuse treatment and food, and some attempt suicide where euthanasia is not legal. Where euthanasia is legal, it is often the moral responsibility of the family and the patient's physicians to heed the patient's requests, upon meeting all legal requirements under which a person has the right to choose to die. Normally, it is only the patient's doctors and close family members who may decide whether the person's wish to die has any merit, based on medical prognosis, emotional status, mental competence and degree of physical pain. People against the right to choose death believe that causing death on compassionate

UK General Elections Assignment Example | Topics and Well Written Essays - 1250 words

UK General Elections - Assignment Example

Basically, this is achieved through awarding political freedom to all people, as it is the main platform for the masses to express themselves. The values of liberal democracy are reflected in its basic system, where continuous efforts are undertaken to see that no group enjoys special privileges in the society. The values of liberal democracy can therefore be found in a society which strives to develop through talent and merit rather than rank, privilege and status. The values of liberal democracy are also seen in programmes and policies aimed at restricting government intervention in the political, economic and moral matters of the citizens. To enrich a democracy with these values, the political system is generally supported by a written constitution which clearly defines the powers and responsibilities of the executive, judiciary and legislature (liberal democracy).

Presently, UK general elections are held under the First Past the Post (FPTP) voting system. It is also known as the plurality system, the relative majority system or the winner-take-all system. In this system, a voter votes for a single candidate, and the candidate with the most votes among all the contesting candidates is the winner in a particular constituency. For example, in a 1000-voter constituency, a candidate getting 400 votes would be the winner if the other 3 contesting candidates received 200 votes each. Though 400 out of 1000 votes is a clear minority, the number is higher than that of any of the other 3 candidates. This shows that the system has the flaw of electing candidates and parties on a minority vote, as the majority vote is divided among several contestants and parties. This is the most disadvantageous system, but unfortunately most of the world's democracies have used it for many years. This has brought embarrassment to the English in several elections, including the 1983 general election, in which the Conservatives bagged 397 seats in the House of Commons with a minority vote (Hallowell, 2002, p. 103). The situation repeated itself in the 2005 general election in the UK, with the Labour Party gaining power with a minority vote. The resulting disadvantage is that, though it is a representative government, the majority voice is not heard in the legislature. This system has the capacity to curtail the political freedom of the majority of people, the basic ingredient of liberal democracy. To put it another way, the total number of seats gained by a particular party in a general election is not proportionate to the total number of votes received by it. (A small worked example of this effect is given after this excerpt.)

3. The alternative systems: preferential voting

There are some alternative voting systems in a democracy. Let us discuss some of them. The preferential voting system is a method in which voters are asked to express their preference of candidates in order of priority. In this system, voters generally cast their votes by ranking the participating candidates in order of priority. On the voting slip or card, the names of all candidates are printed and empty boxes are provided against each candidate. When there are 5 candidates, a voter ranks all of them, indicating 1, 2, 3, 4 and 5 depending on his or her preference. Most
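To make the constituency arithmetic above concrete, here is a toy count, with invented ballots, showing first-past-the-post handing the seat to a candidate on a minority of first preferences, while a simple preferential (instant-runoff) count of the same ballots elects a different candidate. The candidates, the ballot numbers and the instant_runoff helper are illustrative assumptions, not material from the assignment.

```python
# Toy comparison of first-past-the-post and instant-runoff counting.
from collections import Counter

ballots = (
    [["A", "B", "C"]] * 400    # 400 voters rank A first, then B, then C
    + [["B", "C", "A"]] * 350
    + [["C", "B", "A"]] * 250
)

def fptp(ballots):
    # Count only first preferences; the plurality leader wins outright.
    return Counter(b[0] for b in ballots).most_common(1)[0]

def instant_runoff(ballots):
    # Repeatedly eliminate the weakest candidate until someone has a majority.
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader, votes
        remaining.remove(min(tally, key=tally.get))

print(fptp(ballots))            # ('A', 400) -> wins with only 40% of the vote
print(instant_runoff(ballots))  # ('B', 600) -> wins once C is eliminated
```

Here candidate A tops the first-preference count with 40% of the vote, which is enough under FPTP, but once the weakest candidate is eliminated and preferences transfer, candidate B wins with an outright majority.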

Friday, October 18, 2019

News Article Assignment Example | Topics and Well Written Essays - 250 words - 1

News Article - Assignment Example

According to the report by the National Center for Health Statistics and the newspaper, the data indicate a 76% increase in the rate of twin births in the US (Bakalar). The article educates readers about the dangers of using fertility drugs. Fertility drugs interfere with women's cycles and induce hormones in their bodies. The drugs also enhance sexual activity, leading to more sexual contact and eventual fertilization. The most important point to learn from the article is that fertility drugs interfere with women's cycles and hence may increase the rate of fertilization. The article is scientifically valid and can be supported on several points. One is the fundamental fact that hormones, which can be induced or suppressed by drugs, control the process of pregnancy. This knowledge has led to the increased use of drugs to enhance the process of pregnancy. The drugs have several effects, among them multiple births.

Bakalar, Nicholas. Twin Births in the U.S., Like Never Before. 23 January 2012. 2013.

Two-thirds of the increase is probably explained by the growing use of fertility drugs and assisted reproductive technology. The remainder is mainly attributable to a rise in the average age at which women give birth. Older women are more likely to produce more than one egg in a cycle, and 35 percent of births in 2009 were to women over age 30, up from 20 percent in 1980. This age-induced increase applies only to fraternal twins, though; the rate of identical twin births does not change with the age of the mother. From 1980 through 2004, increases in twin birth rates averaged more than 2 percent a year, but from 2004 to 2009, the increase slowed to 1 percent annually. Joyce A. Martin, the lead author of the report, suggested that better techniques in fertility enhancement procedures may have

Airbus Business Plan Essay Example | Topics and Well Written Essays - 2000 words

Airbus Business Plan - Essay Example

AS), it leases and finances about 1,680 owned and managed commercial aircraft and serves over 230 customers in over 75 countries around the world (GE Capital Aviation Services).

Results, analysis and discussion

We think that a joint venture with GE is a mutually beneficial partnership today and into the future. Strategically, each partner will be able to increase business volumes and serve more customers. The partnership has to be focussed on innovation in new cost-efficient aircraft that are lighter but able to do the same work or more. The world economic crisis did not spare the industry, as revenue streams thinned and passenger numbers dwindled or stagnated, forcing airlines to use innovative ways to remain in operation. This is expected to be short-term, and passenger traffic is expected to pick up and grow at an average of 4.7% over the industry forecast period. Growth is expected to double passenger numbers on all routes (Airbus S.A.S. 2009).

Competitors in the industry are applying the latest technology in aircraft manufacture to make lighter aircraft that consume less fuel. Our venture will focus on innovation to counter the stiff competition by making compatible engines towards this end. Research and development (R&D) is crucial in attaining this goal alongside partners such as GE. Airlines around the world buying planes from us have been finding problems in servicing and repairing them. This has forced them to fly in technicians from either our company or GE, or to fly the plane to our factory to be repaired. GE has been investing in setting up local repair and servicing centres in the countries where airlines have major operations. We believe this is the best strategy to be closer to clients, offering them first-hand services and appropriate technical advice. This will enable the airlines to cut repair costs, downtime and turnaround time, hence making more money. R&D at GE has enabled it, thus far, to produce another state-of-the-art, environmentally compatible GP7200 engine for the Airbus A380. The engine is technically advanced and fit for the world's biggest wide-body planes (GE Aviation 2012). Continued research is imperative in the areas of carbon emissions and sound pollution to reduce greenhouse gas emissions. The industry estimates that over the last forty years, carbon emissions and aircraft fuel burn have been reduced by 70%, while noise pollution has been reduced by about 75%

Thursday, October 17, 2019

Why do women only make up 6.5% of consultant surgeons in the UK Essay

Why do women only make up 6.5% of consultant surgeons in the UK - Essay Example

The idea of such a work is to find out the reasons for this, so that some solutions may be suggested in order to change the present scenario. The feminine has always been regarded as having less status and power, and has always been subordinate. Perhaps due to this reason, although sex differences in earnings, occupations, and work in the United Kingdom have decreased over the past few decades, sharp differences still persist. As in any other profession, women now constitute a large force, in terms of both number and quality, in the medical profession. However, there is a certain pattern in their choices of discipline, especially when the numbers of female professionals in different specialities are considered. The greatest convergence between women and men has occurred in labor force participation (Buyske, 2005). Yet despite this increased participation - and this may help to explain the slower progress with respect to wages and occupational segregation - women, on average, devote far more time than men to housework. One medical profession is surgery, or more specifically general surgery, where a particularly strong male predominance has been observed. As of now, as statistics indicate, only 6.5% of the consultant surgeons in the United Kingdom are women. ... In this work, answers to these questions will be sought through evidence from the literature. Broadly speaking, this research covers two areas of inquiry. The first attempts to understand the sources of sex differences in labor markets in the context of surgery as a profession, without resorting to explanations based on labor market discrimination. The dominant focus would be on how family economic decision-making regarding the allocation of time and human capital investment may generate the observed differences between women and men in occupations, participation, and nonmarket work. The second area of concern could be existing discrimination and male predominance that might have led to a situation where female doctors are comparatively less interested in pursuing a career in otherwise exciting surgery.

History of Women in Medicine

Historically, women doctors have simultaneously been a part of medicine and been placed outside it, and their presence in large numbers is actually a destabilizing one. Surgery as a medical profession has always been seen as a symbol of masculinity, and that rests on an opposition between women and medicine. The century-long history of medicine suggests that for a long time western culture was patriarchal in that it marginalised women in the profession and was reluctant to accept them on the same platform, and women as doctors have faced major hostility from the so-called social dominance of masculinity. In fact, for quite some time, women were banned from joining surgery.

Male Oriented Power and Privilege

It had previously been conventional that medical power and privilege were male oriented; the operations and status

Wk2 INTL304 Forum Coursework Example | Topics and Well Written Essays - 250 words

Wk2 INTL304 Forum - Coursework Example

Human source intelligence is considered the oldest method of information collection. The intelligence is collected from human sources. Collection of such data entails the clandestine acquisition of documents, photographs and other related materials [1]. Going to the source of information ensures that the data collected are reliable and viable. Human intelligence entails all the information that is directly obtained from various human sources. It includes a wide range of activities, from direct observation and reconnaissance to the use of spies and informants. The source of information is of the essence, since the information can be distorted when it is moved from one source to another. It is important to evaluate the target of collection before actual collection is done, to avoid confusion and to ensure the information is thoroughly collected [2]. The intelligence information may end up not being viable in cases where the wrong targets are selected. It may also take a long time to reach the source if the wrong targets are selected at the beginning. Being aware of the source of intelligence information will help to identify the magnitude of the threat and thereby to come up with effective mitigation measures. Intelligence officials are therefore tasked with a mandate of ensuring that the information collected is reliable and of


Tuesday, October 15, 2019

Economy of China Research Paper Example | Topics and Well Written Essays - 2000 words

Economy of China - Research Paper Example

Contextually, with respect to commercial activities, national limitations are lessening in terms of legitimate administration, with independent federations performing as the principal power over their respective regions (Kojima, 2002, pp. 1-2). A similar notion also holds true in the context of China's relations with major global powers, including the US, the UK, and other countries. On political and economic grounds, serious economic conflicts have transpired in recent times between China and other economies, especially the US, in numerous respects. Besides, the Chinese economy is also facing problems which are likely to have a profound impact on the world economy (Xuetong, 2010, pp. 267-269). Considering these aspects, this essay will review the world politics of international business that causes conflicts, mainly between China and the US along with other nations. Therefore, the prime focus of the essay will be on the economic problems witnessed by China concerning its relations in the global arena. Stating precisely, the objective of the essay is to evaluate the economic issues currently witnessed by China in the international context from different perspectives.

In global political history, the two most apparent changes in power identified in recent times are the rise of the European economy after the 'Industrial Revolution' and the rise of the American economy in the post-Civil War era (Zhou, 2008, p. 171). These power moves have resulted in international conflicts driven by the motive to acquire more authority in the global trade system. It was during this era that weakening nations became more likely to lose their governing position in the international business system, thereby increasing the gap in the international distribution of power.

Monday, October 14, 2019

The Philosophy of Friedrich Froebel Essay Example for Free

The Philosophy of Friedrich Froebel Essay

Friedrich Froebel was born in 1782 in Oberweissbach, Germany. His mother died when he was 9 months old, and his father was away on pastoral duties quite often, so he went to live with his uncle when he was 10 years old. Froebel was not completely interested in school but enjoyed forestry, geometry, and land surveying (Dunn 169). His upbringing and interests, along with his Christian faith, strongly influenced his educational philosophy. Friedrich used learner-centered, child-centered, experience-based ideas to develop the world's first kindergarten, a school for young children (Henson 8). "The father of kindergarten" was the title usually associated with Froebel and his philosophy. His methods allow children to grow and move on as they conquer new concepts, not when educators or administrators decide. Froebel's philosophy was influenced by the teaching methods of Pestalozzi (Dunn 169). He agreed with many of Pestalozzi's ideas but thought that there was too much focus on memorization and direct instruction. Froebel balanced group activities with individual play, direction from teachers was balanced with periods of freedom, and the studies of nature, mathematics, and art were balanced by exploring (Froebel Web). Through exploration by the child and observation by the teacher, education could be delivered as needed, in the best interest of the child. He wanted students to figure things out for themselves through discovery. If a child can discover a concept on their own, that child is more likely to grasp and clearly understand that concept, because they were the means by which they learned the information. Play was a major aspect of his philosophy because it gave children a chance to externalize their inner nature and a chance to imitate and try out various adult roles. Children had the chance to try on many faces and figures so that they could find out who they were and who they should be. Even today people try to find out who they are, because in the essence of each of us we feel that who we are, or are supposed to be, is already in our souls; we just have to discover who that is. Through play and role playing, children could learn how to solve their own problems. Much of what people learn comes through their experiences; if children are able to practice and experience certain problems, they will develop the skills necessary to solve problems. If children could work through these situations, there could be a decrease in behavioral problems as children grow, because they had the chance to develop their problem-solving skills at a young age. According to Froebel, the ultimate purpose of education is "the realization of a faithful, pure, inviolate, and hence holy life" (Dunn 170). Since Froebel's philosophy was based on idealism, he believed every person had spiritual worth and dignity. If a person assumes that each individual they encounter has worth and should be treated accordingly, more people in life would be, simply put, happier. It comes down to respecting each individual for whoever they are. Thus, like the idealists, he believed that each child had within him all he was to be at birth. As Dunn states, practice in education should be designed to develop and cultivate individuals toward attainment of their destiny (170). Starting children off in kindergarten gave them a chance to grow and become what they were destined to be, by partaking in play and role playing with plenty of space to develop properly.
In today's society there is a lot of talk about finding yourself and taking space to figure out who you are. I think a lot of that is because people never had a chance to do so when they were young. Today's society just speeds through life, trying to get one step ahead of the next person, and later in life people stop to reexamine who they have become, because they didn't take the chance to discover that person when they should have. Froebel stressed the importance of creating a happy, harmonious environment where the child can grow, and where the value of self-activity and play are the foundation of the development of the whole person (Froebel Web). Teachers should observe students during play so that they know how and what to teach and gear it toward each student, because you need to cultivate the inner person in each. It isn't all about chaos, because there is order and structure in play and free will. Play and freedom are structured through gifts and occupations. The gifts are used to help children understand concepts, and the occupations to make products. Froebel was trying to create a school that uses the imagination and creativity already in the child to foster an education plan that fits their minds and souls. We have been taught in the Bible to be like children because they are pure and clean; if more of us became like children, then the world would be a better place. The effects this theory has on the classroom can be positive and negative. A child-centered classroom is a terrific idea, but it can make the classroom seem very chaotic and haphazard, which is difficult for some teachers and parents. With a child-centered classroom, the planning a teacher puts into her lessons must be flexible and follow the needs of each individual, which is difficult because each child has different needs, so planning could involve a lot of different activities and flexibility. This philosophy allows the opportunity for all students to succeed completely, because it works with the child's strengths and educational pace. A problem with that is that children don't develop at the same rate, so children will be going over different material at the same time. By allowing children to work on their own, their behavior will improve, because they feel that they have more control over their own education and pace. As many positive effects as this free, child-centered philosophy has, it also has negative effects. Students may not reach their potential if they are not challenged by high expectations. There are also fewer concrete assessments to gauge child success and failure. The philosophy could be a huge success if employed by a highly committed teacher who is prepared to truly encourage individual growth. The teacher's role in the classroom is not just that of an observer who watches children play and explore independently, but that of a guide who leads the children to make discoveries. Open-ended questions are a great way for teachers to foster critical thinking, because the teacher does not provide the student with opinions (Froebel Web). Teachers are guides and helpers for children as they explore who and what they are to become. There are a lot of great ideas that have come from this philosophy, one being the introduction of kindergarten into the educational system. Some people today even think that it is too early to start a child in school, but when is it really a great time to start?
More people are starting to embrace the idea of a child-centered approach, because too much of education is focused on what we think children need to learn and not necessarily on what they need to learn or are ready to learn. Teachers today need to stop and look at educators and philosophers of the past to recognize the simple theories they employed. Today's education has become caught up in speed and the necessity to be better than the next guy; we have forgotten to look at the people we are teaching and the fact that some are not ready for what we think they should be. There is a need for adults to get back to a simpler way of life so that we don't forget that children are precious gifts that must be treasured and fostered.

Works Cited

Dunn, Sheila G. Philosophical Foundations of Education: Connecting Philosophy to Theory and Practice. Upper Saddle River, NJ: Merrill/Prentice-Hall, 2005.

Froebel Web. Online Resource. 1998. http://www.froebelweb.org/webindex.html.

Henson, Kenneth T. (Fall 2003). Foundations for Learner-Based Education: A Knowledge Base. Education, 1. Retrieved 10/28/06.