Saturday, August 31, 2019

Problems and Solutions with G4S

1. Abstract

Every two years, an Olympic host city becomes the subject of heated discussion in mainstream media and academic work. This paper explores the challenges and opportunities faced by the London Organising Committee of the Olympic Games (LOCOG), especially the security chaos that arose when its security contractor, G4S plc, failed to provide all the staff it had promised. After analysing the problem, the study provides detailed information and some feasible strategies, complete with critical evaluation of the merits and demerits of selected solutions.

2. Introduction

The London Organising Committee of the Olympic and Paralympic Games (LOCOG), based in Canary Wharf alongside the Olympic Delivery Authority, is responsible for preparing and staging the London 2012 Games. As the Games approached, several unforeseen problems came into public view, and the biggest concern has always been a safe and secure Games. In response to this serious issue, the world's largest security company, G4S, was contracted by LOCOG to keep the Games safe. This study analyses the problems found during the contract and examines appropriate solutions and recommendations.

The analysis consists of eight sections covering the findings, the evidence and the methods:
* First, the paper outlines the background of both LOCOG and G4S.
* Second, it describes the problems in detail, presenting the troubles encountered from different perspectives.
* Finally, it offers reasonable solutions to the problems, with some basic assessment of each.

Although the excellent image London projected during these Olympics shows that the challenges were ultimately overcome, the recommendations should still benefit both LOCOG and the host city in the long run.

3. Background to the problem

3.1 Brief description of the problems

It began with a security scandal when the contractor G4S, which held a £284m contract to provide security guards but could not supply enough personnel, apologised for mishandling its contractual obligations for the Olympic events. The G4S chief executive, Nick Buckles, said he regretted ever taking on the Olympic security contract, which was described as a "humiliating shambles". As a result, a further 3,500 troops were deployed, bringing the total to around 17,000 troops helping to keep the Olympics safe, with a further 1,200 military personnel on standby.

3.2 Details about the relevant company and organisation

'We will deliver the best possible Olympic and Paralympic Games experience for everyone involved, ensuring a real legacy and inspiring people to join in and truly make these "everyone's Games"' (LOCOG, 2012).

'With operations in more than 125 countries and 657,000 employees, we specialize in outsourced business processes and facilities in sectors where security and safety risks are considered a strategic threat. In more ways than you might realise, G4S is securing your world' (G4S, 2012).

3.3 Reasons to explore the problem

G4S claimed the schedule was tight and that deploying security for the Games is a complex business; but for such a successful and experienced security company, holding huge contracts around the world, why did the recruitment process fail so badly? Furthermore, from the organisers' (LOCOG's) perspective, should public duties be handed over to profit-driven private enterprise at all?

4. Analysis of the issue
4.1 G4S's recruitment and management 'chaos'

'Securing Your World' is G4S's maxim (G4S, 2012). However, it seems that the world's largest security company failed this time, especially with regard to its own world. Although G4S started recruiting management personnel as soon as it took over the order at the end of 2010, recruitment of on-site security guards did not begin until the beginning of the Games year (BBC, 2012). The recruitment drive hit the headlines for using a variety of online, printed and outdoor media, and G4S thereby ran a successful attraction campaign which received more than 100,000 applications (MPs, 2012). The company adopted a Just in Time (JIT) management model, meaning that recruitment was deferred until the last moment to avoid redundant management cost and to maximise efficiency. G4S hired college students and foreign nationals at low pay, £8.50 per hour, against a legal minimum hourly wage of £6.19 (BBC, 2012). The recruitment process turned out to be a complex and unprecedented undertaking that required a great deal of training, vetting and accreditation of personnel, and many people only then realised how serious an undertaking it was.

Besides, the successful candidates were offered only temporary contracts covering the Olympic Games, with wages paid by actual working day. What is more, there was no salary during training, and employees had to pay a uniform fee. The problem was not only that G4S was unable to recruit enough staff; even for those who were hired, G4S still could not guarantee final attendance. Many employees who had completed training and contracted to work for G4S were never deployed. According to recruits' accounts, many of them received no schedules, uniforms or training on x-ray machines, while others were allocated to venues hundreds of miles from where they lived (BBC, 2012).

4.2 Contract 'chaos'

G4S originally signed a contract with LOCOG in 2010 to provide 2,000 security guards; the number rose to 10,400 in December 2011, with the contract then worth £284 million (BBC, 2012). The change in the number of personnel required by LOCOG, together with the complex procedures the applicants had to complete, is key to understanding G4S's failure. It left no time, even for a giant private company such as G4S, to supply a workforce that had increased by around 400%. What is more, the government had agreed to the JIT approach under which the security workforce would be in training until the last moment, which was supposed to reduce the security budget (BBC, 2012). A report published by the Public Accounts Committee, which is responsible for evaluating whether government spending provides value for money, detailed that it was very concerned about LOCOG's request for the increase, warned that the Olympic budget could be overspent, and raised concerns about LOCOG's management of those funds (PAC, 2012).

There was a failure in negotiating the contract for further workers between the supplier, G4S, and the consumer, LOCOG. It has been noted (BBC, 2012) that delayed hiring is standard practice for any company, but that the practice struggles to cope when contract requirements change in close proximity to the event. Nevertheless, Lord Coe (2012) promised a 'safe and secure Games' despite the failure of G4S. He also insisted that 'the right and appropriate thing to do is to put the challenge in proportion and to work together to ensure sufficient security guards'.
5. Description and evaluation of some solutions

5.1 Aims of the solutions

In order to ensure the best performance at the Olympic Games, several possible suggestions are provided for reference. The private security industry is large, fast-growing and global, yet it is also troubled by high turnover and poor training, so the role G4S performs during the Olympics needs to be well formulated and kept under supervision.

5.2 As for G4S

5.2.1 Admit possible problems within a relatively short time.

G4S appeared to use brinkmanship tactics, as it did not admit its problems until a fortnight before the opening ceremony. Earlier research (Henderson, 1967) demonstrated that brinkmanship in business is pushing a negotiation to the point of nearly killing a deal in order to achieve the most favourable terms when that deal is finally agreed upon. Brinkmanship often produces a negotiator's great successes, but it can also result in the worst mistakes, which is exactly what happened to G4S; it must therefore be used carefully. Risk management (as defined in ISO 31000), or at least a risk assessment, can be carried out to mitigate the risks of a brinkmanship strategy. Developing an effective Risk Management Plan (RMP) is an important part of any project; risk management is one of the nine knowledge areas defined in the PMBOK (PMI, 2008). Firstly, a special risk management team needs to be established, which may include employees from departments such as audit, information systems, finance and human resources. Secondly, a critical risk analysis should be performed to identify, assess and measure the likelihood and impact of events during the contract period. Thirdly, proper risk strategies should be developed, including risk transfer, insurance, retention, loss control and the avoidance of risky activities; the strategy programme also needs to be monitored closely, with all changes made with the practical aspects of the programme in mind. Fourthly, a good employee training programme should be implemented. It is noticeable that G4S has really poor employee management, which has drawn much public attention, and there is no substitute for trained, knowledgeable personnel in a disaster.

An RMP therefore helps minimise losses and reduces the negative effects of risks. However, it is only effective if both the company and the staff act on its findings: it is important for the company to follow through with any actions required and to review the assessment on a regular basis. The approach also faces difficulty in allocating resources and in determining rates of occurrence, since statistical information is not available on all kinds of past incidents.

5.2.2 The poor operations management needs to be improved.

One major remedy is to improve the internal system which, as the Independent reported (BBC, 2012), was the root cause of the failure: it had not delivered accurate management information back to senior planners about the true state of recruitment, creating a barrier between the outsourcer and the service provider. It is quite unreasonable that such a big company has not established a complete internal system. Methods such as establishing a system of incident reporting, performing root cause analysis, defining intended results and establishing performance measures can be put into practice. Nevertheless, whether an organisation achieves its operational and strategic objectives may depend on factors outside the enterprise, such as technological innovation, which lie beyond the scope of any internal system.
An effective internal system therefore cannot guarantee the achievement of goals on its own. Suitable staff recruitment and training programmes need to be instituted, as they provide warning of deficiencies and allow a chance to rectify matters. The recruitment process and staff performance should be assessed against objective criteria. The company should start a complete and orderly interview process early, offer the required training three to four months before the event to the people who are still available, and at the very least keep in contact with them so it knows whether they remain available.

5.3 As for LOCOG

Cooperation and negotiation are essential during the contract period, and responsibility for the failure of the Olympic contract is shared by both sides. There is a need for constant, direct communication and renegotiation between LOCOG and G4S at a high level to check on the existing contract and its progress. Effective contract monitoring can be applied to help resolve the situation. 'When a public body purchases a service for vulnerable adults from an independent provider, the public body has moral and legal accountability for the duty of care and quality of the service. No matter how far the contract may try to locate legal responsibilities solely with the provider, the purchaser has responsibilities for what does and does not happen. Efficient and effective contract monitoring enables public bodies to fulfill and be comfortable with such responsibilities.' (Doug, 2006) Appropriate actions include inspecting and documenting results, making recommendations, providing technical assistance and training, advocating for programmes, staff or inmates, and reviewing and preparing documentation. The monitor is also in a position to provide continuous feedback both to and from the contractor and the department; it does not direct operations of the facility, but plays more of an observer or consultant role.

6. Conclusion

To sum up, this study has examined the reasons why G4S failed in its contract with LOCOG by analysing G4S's recruitment and management chaos and the contract chaos. Meanwhile, the study has put forward specific solutions to each problem: an RMP should be used to lower the risk, and staff recruitment and training programmes should be applied along with effective contract monitoring.

7. Recommendations

Since the company appeared to follow a brinkmanship strategy, it is recommended that a risk management plan be applied to reduce the potential risks. It is proposed that the company pay attention to its operations management, including recruitment management and training programmes, using techniques such as the risk management plan and the just-in-time model to improve management performance. It would also be helpful for both G4S and LOCOG to maintain a close relationship, established through constant communication or a dedicated contract monitor.

8. References

LOCOG, 2012. What is LOCOG responsible for? [online] Available at: <http://www.london2012.com/about-us/the-people-delivering-the-games/locog/> [Accessed 20 August 2012].

G4S, 2012. Who we are. [online] Available at: <http://www.g4s.com/en/Who%20we%20are/> [Accessed 20 August 2012].

BBC, 2012. G4S profile. BBC, [online] 17 July. Available at: <http://www.bbc.co.uk/news/business-18868406> [Accessed 20 August 2012].

Public Accounts Committee, 2012. Committee reports on preparations for London 2012 Olympic and Paralympic Games. London: The House of Commons.
Robert, B., Nick, H., 2012. Olympic security chaos: depth of G4S security crisis revealed. The Guardian, [online] 13 July. Available at: <http://www.guardian.co.uk/sport/2012/jul/12/london-2012-g4s-security-crisis> [Accessed 20 August 2012].

Henderson, B.D., 1967. Brinkmanship in Business. Chester: The Boston Consulting Group.

Project Management Institute, 2012. PMI Risk Management Professional. [online] Available at: <http://www.pmi.org/certification/pmi-risk-management-professional-pmi-rmp.aspx> [Accessed 24 August 2012].

James, B., 2012. Olympic task: G4S's problems were in training and vetting candidates, not in the original recruitment campaign. People Management, August Issue, p. 9.

Care Services Improvement Partnership, 2006. Improving performance through effective contract monitoring. [pdf] Available at: <http://www.puttingpeoplefirst.org.uk/_library/Resources/BetterCommissioning/BetterCommissioning_advice/Chap10DGosling.pdf> [Accessed 22 August 2012].

Hannon, C., 2012. Lessons from the G4S Olympic recruitment disaster.

Friday, August 30, 2019

First let's define externality Essay

As an example of the above definition: pollution from a factory can affect the health of nearby residents (negative), while the same factory can provide jobs to nearby residents (positive). Negative externality has two parts: production and consumption. I will be using both of these externalities in my following discussion on pollution.

Companies pollute on three different platforms: air, water and land.

Air pollution is caused by:
* Factories
* Power plants
* Vehicles
* Solvents
* Domestic/industrial chemicals
* Military
* Natural causes - volcanoes, wildfires
(ref: aboutairpollution.co.za)

Water pollution is caused by:
* Urbanization
  - Land disturbed by construction
  - Chemical pollution from mines, industries, etc.
  - Inadequate sewage and treatment
* Deforestation
* Damming of rivers
* Destruction of wetlands
(ref: www.randwater.co.za)

Land pollution is caused by:
* Domestic, nuclear and industrial wastes
* Deforestation
* Human sewage
* Mining and other factories
* Increased mechanization
* Sewage discharged into rivers instead of being treated properly
* Sanitary/hazardous landfill seepage
* Cemeteries
* Scrap yards (waste oil and chemical drainage)
(ref: www.wikianswers.com)

You will note from the above that pollution is a hot topic and one of the examples most commonly used to define negative externality. The above causes of pollution serve as a grim reminder of the visible and mostly invisible effects on consumers, and they are certainly not calculated into the economy. The costs and benefits of pollution can be calculated by economists, but this will not reduce its impact; it merely factors into the cost of production of goods. What will impact the economy is the demand for pollution-free living and the purchasing decisions that allow this. The following few examples come from my own 'green' list:
* Diesel instead of petrol
* Switching off unnecessary electricity sources
* Organic foods within my budget
* No smoking
* Living in a 'leafy' suburb

A lot of South African (and world) citizens are making conscious buying decisions to reduce the effects of global warming and the depletion of the ozone layer. The cost of living 'green' is higher than average, and the economy will see the impact of this as consumers demand 'cleaner' living conditions.

How can governments help reduce or eliminate negative externality?
* Increase taxes on domestic pollutants such as cigarettes, solvents, etc.
* Pollution tax for industries
* Pollution limits on emissions
* Focus on residential and business zoning and the effect on the surrounding environment
* Require new commercial/residential buildings to include 'green' technology and utilities
* Provide separate bins for proper product recycling (domestic, commercial and industrial)
* Abolish 'shanty towns' in favour of proper housing

(Sources: www.factsanddetails.com, www.statssa.gov.za, www.thinkquest.org; Economics: Global and South African Perspectives, Michael Parkin)

Thursday, August 29, 2019

Analysis of the Scarlet Letter 1

Meredith Byram
Mrs. Allinder
English 9 A4
23 February 2009

"It may serve, let us hope, to symbolize some sweet moral blossom, that may be found along the track, or relieve the darkening close of a tale of human fatality and sorrow" (Hawthorne 44). In Nathaniel Hawthorne's The Scarlet Letter, light and dark are used to compare and contrast the inner natures of Hester, Pearl, and Dimmesdale. All of these characters embody the theme of sin and suffering, but through their own struggles they strive, and succeed, to end up on the other end of the spectrum.

"Ah, but," she interposed, more softly, a young wife, holding a child by the hand, "let her cover the mark as she will, the pang of it will always be in her heart" (Hawthorne 47). Hester's sin will always be a part of her soul, no matter how deeply she buries it. The scarlet A helps her to stop living her life as a lie and forces her to show her sin to society, leaving her with nothing else to hide (Morey 64). Hester is forced to set everything she has hidden free because of her mistake and sin. Her life is turned into sorrow and denial, leaving her a part of the dark side according to society's view. Dimmesdale and Hester, at the time Pearl is conceived, think only of themselves and their love instead of the depth of their sin (Morey 91). Their selfishness makes Hester rebel and causes the community to view her rebellion and defiance even more harshly. Hester does not realize that she is only burying herself deeper, along with Pearl and, secretly, Dimmesdale as well.

"To Hester's eye, the Reverend Mr. Dimmesdale exhibited no symptom of positive or vivacious suffering, except that, as little Pearl had remarked, he kept his own hand over his heart" (Hawthorne 177). This hand over Dimmesdale's heart covers up his own sin and becomes his own scarlet letter. Dimmesdale's truth constantly knocks at his heart and soul, begging for a chance to be revealed. He drives himself to the point of insanity, piling all his agony on top of himself, which buries him deeper and deeper into his grave (Morey 134). Dimmesdale does not realize that he should tell the truth, not only because he is a Puritan clergyman, but also because he is only twisting and tangling his sin into a knot that soon will no longer be able to be unraveled. As the novel progresses, Dimmesdale's soul becomes dirty and scum-like, just as society sees Hester's inner self. His agony and paranoia are his own punishment and push him even farther and farther away from the truth. Before he committed his sin, Dimmesdale was fully connected to God; after the sin was committed, his godly soul starts to wither away. He becomes ever less attached, and finally he dies and becomes completely disconnected from God (Gerber 82-83). Dimmesdale pushes himself away from the Lord out of his selfishness and causes his soul to become dark and dirty.

"...She is my happiness! She is my torture, none the less! Pearl keeps me here in life! Pearl punishes me too! See ye not, she is the scarlet letter..." (Hawthorne 104). At this point in the novel, Hester is persuading the governor to let Pearl stay with her. Pearl is a constant reminder of Hester's sin. Even though Pearl brings Hester up into the light, she darkens it also. Pearl has a personality that is hostile and reflects her extreme temper. This relates Pearl to the devil, which can be seen as darkness and a dark place. This temper and hostility is Hester's biggest punishment for her sin.
Pearl is literally a living example of the scarlet letter. After living many years of sin and suffering, Hester comes to full knowledge of her sin and takes responsibility for her actions (Nagel 90). Hester realizes that her sin will be lifted from her when she is ready to accept it. As Hester unleashes the "A" from her chest, she "had not known the weight until she felt the freedom." Right after Hester reveals and finds her freedom, Hawthorne describes the sun as if it were bursting out onto the leaves and trees, transforming the dark into light (Hawthorne 191). This symbolizes a turning point in the novel because Hester recognizes her frustrations and accepts them, which turns her dark night into a bright day. Even though Dimmesdale struggles to tell the truth and his secret, his ending can be seen as having a light or bright side. He forgives Chillingworth, and the way he accepts his fate can be seen in a bright light (Nagel 153). Dimmesdale does have a frightening fate, but the way he accepts it at the end of the story reveals a glimpse of light. Also, at the end of the novel Pearl is seen crying for joy, which may foreshadow a happy life ahead of her. Society has seen Pearl as a dark spot and a nuisance for her entire life. In the end, Pearl escapes her mother's shadow and lives in Europe, away from Puritan views and living (Nagel 153). Pearl's ending can be received in a positive way, which resembles lightness.

The Scarlet Letter begins in a depressing and cold demeanor, releasing feelings of suffering and sorrow. As the story evolves and the characters develop, the inner natures of Pearl, Dimmesdale, and Hester are revealed, and not necessarily in a negative way, but in a positive one. Even though Hester's ending was not the typical "happy" ending, it still closed in the satisfying bliss of a better life. Dimmesdale finally told his truth and released his sin, and Pearl can now live her life free of her mother's troubles. "The angel and apostle of the coming revelation must be a woman, indeed, but lofty, pure, beautiful, and wise, moreover, not through dusky grief, but the ethereal medium of joy; showing how sacred love should make us happy, by the truest test of a life successful to such an end!" (Hawthorne 247). Through all the darkness, light has been found and the sorrow has been replaced with happiness. Pearl, Hester, and Dimmesdale are now free.

Works Cited

Bloom, Harold, ed. Nathaniel Hawthorne: Bloom's Major Novelists. Broomall: Chelsea House Publishers, 2000.

Gerber, John C., ed. Twentieth Century Interpretations of The Scarlet Letter. Englewood Cliffs: Prentice-Hall Inc., 1968.

Hawthorne, Nathaniel. The Scarlet Letter. New York: Penguin Books, 2003.

Morey, Eileen, ed. Readings on The Scarlet Letter. San Diego: Greenhaven Press, 1998.

Nagel, James, ed. Critical Essays on Hawthorne's The Scarlet Letter. Boston: G.K. Hall and Co., 1988.

Wednesday, August 28, 2019

Choose a topic Essay Example | Topics and Well Written Essays - 500 words

These movements not only struggled to receive formal recognition by the Kennedy administration through legislation, but also needed equal access to all benefits of the burgeoning American economy. Consequently, this essay intends to explain how the assassination of Martin Luther King Jr. was a major blow to black movements in America and in parts of the world characterized by racial discrimination and segregation. Evidence to support the arguments in the essay is quoted from two primary sources, namely Martin Luther King Jr.'s Letter from Birmingham Jail, written in 1963, and Robert F. Kennedy's Speech on the Assassination.

The black movements were at their prime in the mid-20th century, a time when racial discrimination and segregation were at their worst. During this period, Martin Luther King was the pioneer of the strategy and vision of a non-violent campaign by black movements against racism. His strategy argued that it was the moral responsibility of people to break unjust laws. As he notes, "In any nonviolent campaign there are four basic steps: collection of the facts to determine whether injustices exist; negotiation; self-purification; and direct action. We have gone through all these steps in Birmingham. There can be no gainsaying the fact that racial injustice engulfs this community. Birmingham is probably the most thoroughly segregated city in the United States. Its ugly record of brutality is widely known." (The King Center 1)

Moreover, the assassination of Martin Luther King was a sad occurrence for white people as well. White people also participated in civil rights movements in the 1960s, when the movements expanded their grievances to include equality for all in society. During the 1960s, many whites were also discriminated against in accessing the benefits of the expanding American economy. As a result, they readily joined Martin Luther King Jr.'s strategy of nonviolent campaigning, and this is noticeable in

Tuesday, August 27, 2019

Friction Lab Report Example | Topics and Well Written Essays - 1000 words

A proof of an increase or decrease in mass in Newton's experiment is determined by the force and acceleration obtained from the experiment.

Goals: The goal of the experiment is to verify Newton's second law by finding the coefficient of static friction, μs, and the coefficient of kinetic friction, μk, using the experiment stated below.

Introduction: Have you wondered what makes up mechanics? Newton's second law breaks it down into simple, understandable terms by providing a means of translating directly between the acceleration and the force acting on a given mass or object.

Theoretical background: The experiments are based on the concepts of force and Newton's laws of motion, particularly Newton's second law, which states that the acceleration of a body is directly proportional to the net force acting on the body and inversely proportional to the mass of the body. From this definition, the equation Net Force = Mass x Acceleration (Fnet = mass x acceleration) is derived. Air tracks were used to reduce friction; the little friction that remains in the system is negligible in the data. The suspended mass was subject to gravity, which has a constant acceleration of -9.81 m/s^2. The variables solved for include: acceleration of the sled, velocities of the sled at each photo-gate, net force acting on the string, and the time taken from release to the first photo-gate and between the photo-gates. Acceleration was calculated using the formula: Acceleration = Velocity / Time. The experiment is commonly used in mechanics to determine the acceleration acting on a given mass or object. ...

Theory: The variables used in the experiment are F for the force, m for the mass being used, and a for the acceleration of the object. Newton's variables lay emphasis on the net force exerted in the experiment in question. The relationship between force and motion was initially discussed by Aristotle (384-322 B.C.). He proposed that the natural state of an object was rest, and that force was required to put an object into motion; therefore, a continuous force was necessary to keep the body in motion. Galileo Galilei (1564-1642) argued that a body at rest is a special case of the more general case of constant motion. He noted that in the absence of friction acting on a body to slow it down, the body would continue to move in a straight line forever. He proposed that bodies remain at rest or in a state of constant motion unless an external force acts on them to change this motion. Frictional force, unlike other forces, slows down a moving body rather than accelerating it (Lerner & Lawrence, page 51). Isaac Newton (1642-1727) formalised the relationship between force and motion by proposing that the acceleration of a body is directly proportional to the net force acting on it and inversely proportional to the mass of the body. This law is summarized by the formula F = ma, which is verified quantitatively in this experiment. Work done, i.e. physical work defined in terms of physical quantities, is expressed as the product of positional change and the component of force Fx in the displacement direction dx: W = Fx Δx = F Δx cos θ, where θ is the angle between the direction of displacement and the direction of force. This relationship can be written in the vector dot product form W = F · Δx.
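To make the calculation concrete, here is a minimal sketch (not part of the original lab) of how the report's formulas could be checked numerically. All masses, speeds and times below are invented for illustration, since the report does not list its raw measurements.

```python
import math

# Hypothetical air-track data (all values invented for illustration;
# the lab report does not list its raw measurements).
sled_mass = 0.25       # kg, mass of the sled on the air track
hanging_mass = 0.05    # kg, suspended mass driving the system
g = 9.81               # m/s^2, magnitude of gravitational acceleration

v1, v2 = 0.30, 1.61    # m/s, sled speed at photo-gate 1 and photo-gate 2
dt = 0.80              # s, time for the sled to travel between the gates

# Acceleration from the report's formula: Acceleration = Velocity / Time
a_measured = (v2 - v1) / dt

# Newton's second law prediction: the net force is the weight of the
# hanging mass, accelerating the total (sled + hanging) mass.
a_predicted = hanging_mass * g / (sled_mass + hanging_mass)

# Work done over displacement d at angle theta to the force: W = F*d*cos(theta)
F, d, theta = 0.49, 1.20, 0.0   # N, m, rad (force along the motion here)
work = F * d * math.cos(theta)

print(f"measured a  = {a_measured:.3f} m/s^2")   # ~1.638
print(f"predicted a = {a_predicted:.3f} m/s^2")  # ~1.635, close agreement
print(f"work done   = {work:.3f} J")
```

The close agreement between the measured and predicted accelerations is exactly the quantitative verification of F = ma that the report describes.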

Monday, August 26, 2019

Introduction to Organisations and Management Essay

Hence, in any good organization there should be a clear road map that defines the specific roles of people at different tiers, so that good coordination is possible, facilitating maximum output in unit time. In general, several principles of management play a vital role in giving any organization solid strength, proper direction and an element of sustainability. Effective management also facilitates good organizational design and structure, effective team work, an ideal organizational culture and dynamic leadership. To understand this properly, practical studies of organizations and comparison of their operation and management styles are of immense help. With this in mind, a comparative analysis of two different firms, Watson's Engine Components and H&M Consulting, has been made in terms of their style of operation and nature of management. The operation and management of both firms is discussed with special emphasis on the structure and design of the organization, team work, leadership issues and organizational culture.

Good organizational design always facilitates better coherence and coordination among employees and is also instrumental in the production of high-quality end products. In the present case study, the organizational design of Watson's Engine Components is hierarchical and unsatisfactory, as it lacks proper direction. In any organizational design, the founder or leader has to provide a clear road map on which the organization can concentrate to achieve its predetermined targets. Moreover, an effective organizational structure requires functional and divisional components (Hax and Majluf, 1981). At Watson's Engine Components, Watson could not offer any forward-looking model; it is a completely family-owned company with little diversification, and specialization is also wanting, as there is no proper division of work into sections or departments. Consequently, there is no proper communication, no clear operation of individual employees' authority, span of control or accountability, and this has resulted in the poor performance of Watson's Engine Components. There is no coherence among the employees, and flexibility is also lacking, which makes Watson's Engine Components less competitive in the present-day market. To meet the present needs of the market, the design has to be modified, as in any successful organization (Kikulis et al., 1995). But this has never happened at Watson's Engine Components. Staff absenteeism and turnover rates of technical staff are quite high at Watson's Engine Components, resulting in poor organizational efficiency, and this needs to be addressed immediately. The Managing Director Gordon

Starbucks in Mexico Global Business Plan Research Paper

Currency fluctuation is an issue that threatens the profitability of Starbucks in Mexico. Starbucks in Mexico can be affected by changes in the value of currencies because of the company's exports and imports. When conducting a business transaction in a different country, the venture has to convert currencies at the prevailing exchange rate (Evans). The company uses two approaches to reduce exchange rate risk: netting and hedging. Through hedging, the company uses a standard instrument, the interest rate swap. Interest rate swaps are arrangements whereby two companies in different countries agree to exchange their debt obligations (Evans). Through interest rate swaps, the business avoids changes in foreign exchange rates. The other approach used in dealing with foreign exchange risk is netting. Under the netting approach, the company maintains an equal level of foreign payables against foreign receivables (Evans), so that changes in the foreign currency are offset by the balance between payables and receivables.

Starbucks in Mexico would choose from various domestic and foreign sources of finance on its path to global expansion. A company decides on the form of financing depending on the capital requirement and the period for which it is needed (Timimi, 2010). One source of financing for Starbucks in Mexico would be ownership capital, which involves raising substantial funds through the company's shareholders.
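A small numerical sketch may help show why netting reduces the exposure described above. All figures are hypothetical; the plan does not disclose Starbucks' actual payables, receivables or the exchange rates involved.

```python
# Hypothetical netting illustration: only the net position stays exposed
# to exchange-rate moves (all figures are invented for illustration).
fx_receivables_mxn = 12_000_000   # expected foreign-currency inflows (MXN)
fx_payables_mxn = 11_400_000      # expected foreign-currency outflows (MXN)

net_exposure_mxn = fx_receivables_mxn - fx_payables_mxn

usd_per_mxn_today = 0.055         # assumed spot rate
usd_per_mxn_later = 0.050         # assumed rate after a peso depreciation
rate_drop = usd_per_mxn_today - usd_per_mxn_later

# Loss if the receivables were unhedged vs. loss on the netted position
gross_loss_usd = fx_receivables_mxn * rate_drop
net_loss_usd = net_exposure_mxn * rate_drop

print(f"net exposure: {net_exposure_mxn:,} MXN")
print(f"unhedged loss on receivables: ${gross_loss_usd:,.0f}")   # $60,000
print(f"loss on netted position:      ${net_loss_usd:,.0f}")     # $3,000
```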

Sunday, August 25, 2019

Oxfam Assignment Example | Topics and Well Written Essays - 250 words

In all that Oxfam does, the organization regards the contribution of partner organizations as essential, and the inclusion of vulnerable men and women as indispensable, in bringing the injustices that underlie poverty to an end. Oxfam's Geneva office bears the same objective: to alleviate poverty (Oxfam, 1). This paper introduces the specific activities entrusted to the Geneva, Switzerland offices within the collective goal of poverty reduction. In addition, this paper highlights the different ways in which Oxfam interacts with partner organizations and works with poor people to achieve its objective.

Oxfam's Geneva offices are advocacy offices that work towards influencing key organizations from various parts of the world to take part in poverty alleviation. Oxfam Geneva approaches and interacts with organizations such as the WTO, UNHCR, ICRC, UNCTAD and OCHA, directly and indirectly. Oxfam also incorporates Geneva-based civil society organizations and groups in its activities. The structure of the Geneva, Switzerland offices allows the organization to assess and develop strategies regarding global humanitarian matters, specifically humanitarian system development, disaster risk minimization, and global reactions to disasters and humanitarian crises in which the organization takes part on the ground. Other activities that take place in the Geneva offices include lobbying and the development of an alliance to support agriculture and food security, access to medications, and action on climate change. Additionally, Oxfam's Geneva offices conduct petitioning at the World Trade Organization (WTO) concerning poverty alleviation (Oxfam,

Saturday, August 24, 2019

Training needs analysis Essay Example | Topics and Well Written Essays - 3750 words

Training needs analysis involves assessing the needs of the Hospital Services Advisors through a training programme and evaluating the training needs analysis. The organisation which will be the focus of this process is Ramsay Health Care UK.

RAMSAY HEALTH CARE UK

Ramsay Health Care (RHC) was founded in 1964. The company has developed and now has over 117 hospitals, with day surgery facilities in France, Australia, Indonesia and the United Kingdom. This has given the organisation the status of a global private healthcare operator. In England, the company acquired Capio UK and its group of hospitals in 2007. By September 2010, RHC employed more than 3,500 people, making it one of the top health care providers in the UK. This achievement provided a solid foundation for expansion. The company benefits from its established trade name because of its widespread network, and it competes effectively with regional players. Currently, RHC is in a lasting affiliation with the National Health Service (NHS), under which it provides surgical and diagnostic services to both private and self-sponsored patients. In the year 2010, RHC had a 10% increase in revenue, after earning revenue of 350.2 million in the 2009 financial year. In the Medibank Private Members' Survey, RHC was ranked among those with the top honours. Moreover, RHC was among the five finalists for the best Health and Wellbeing Strategy in the Australian Human Resources Awards. Furthermore, it has an excellent ability to satisfy its customers' needs; for this reason, it has consistently emerged in the top three in ratings of two hundred service companies in the private and public sectors.

There has been an escalation of birth rates, growth in outpatient treatment and an increase in the ageing population. Consequently, the economic environment and the demand for private health care are steadily growing. This compels RHC to expand its hospitals in the UK so as to succeed in catering to the needs of the public. Additionally, RHC has successfully maintained staff satisfaction by ensuring that workplace health and safety risks are kept in check; consequently, RHC has seen a reduction in time lost to injuries.

The aim of RHC is to be the leader in offering the highest quality clinical services among private healthcare hospitals. Furthermore, the organisation responds to the requirements of its patients by providing the best customer service. To meet its objectives, RHC conducts operational measurement every year aimed at improving patients' experiences and clinical effectiveness. Additionally, it acknowledges the importance of investing in human capital, which has been the reason for its current achievements and developments. Moreover, RHC offers scholarships to staff who wish to further their studies; for example, the company recently offered sixty thousand dollars for staff to pursue postgraduate study.

The strengths of the organization include its wide network, since it has several outlets that offer the necessary services to its client base. This enhances its ability to compete within the health industry. Furthermore, its personnel are well versed in operational processes, making it simple to embrace emerging technologies that enhance their skills. An additional strength is the organization's ability to attain increased revenues, as illustrated in its financial outcomes.
The most significant weakness arises from the inability to determine the patient’

Friday, August 23, 2019

Link Layer Protocol Services to the Network Layer Assignment

The services offered by the link layer to the network layer are usually hidden from the network layer, which sees them only as a reliable communication channel that can send and receive data packets as frames (Keith 2008). Framing, addressing, error detection, error correction, flow control, link management and acknowledgement of frames are some of the services provided by link protocols. The link layer groups the bits of the physical layer into frames, enabling transmission of data in a form that can be understood by the network layer. Since various network stations operate at different speeds, the link layer protocol provides a flow control service that ensures no station in the network is swamped with data from faster devices. In addition, the link layer provides link management services through collision handling and avoidance. (Nancy 1988)

One major service offered by the link layer protocol is error detection and correction. An error check included in the frame header provides sophisticated error detection and correction, since it can detect single-bit errors and a wide range of common multiple-bit errors. Error detection in the link layer uses checksums of the same kind as those used by IP in the network layer and TCP in the transport layer. ...

The frames are organized into seven fields: PREAMBLE, which is 7 bytes long and is used to warn the receiver that data is coming; SOF, which indicates that the next bits will be the destination address; DA, the destination address, which identifies the receiver of the data; SA, which identifies the source address; Length, which indicates the length of the payload data; DATA, which contains the transmitted data and whose length varies with the size of the data; and FCS, which is used for error checking. In 10BASE-T the PRE field is used for receiver synchronization, while 100BASE-T does not require the PRE field for transmission, since it uses a different electrical encoding from 10BASE-T. The signal of a 10BASE-T frame is zero when it is not transmitting, while 100BASE-T transmits an idle signal between frames.

Collision detection (Figure 5.14)

According to the figure, node D detects the collision before node B does, since node B's transmission start time is later than node D's: B started transmitting after D had already started, and node D also has a shorter round-trip time than node B. Node A does not realize that a collision has occurred between nodes B and D, since it is not involved in the transmission; node A simply waits for the signal from B, because after the collision node B will retry the transmission after random back-off times until it reaches its destination. A jamming signal is a signal sent by a data station to inform the other stations not to transmit; in this case the jamming signal is sent by node D, since it is the node that detects the collision.
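As an illustration of the frame layout just described, here is a minimal sketch of parsing those fields from a raw frame. It is a simplification: the preamble/SOF are assumed to have been stripped by the hardware, minimum-payload padding is ignored, and the FCS is treated as a plain big-endian CRC-32 (real 802.3 transmits the FCS with different byte and bit ordering).

```python
import struct
import zlib

def parse_frame(frame: bytes) -> dict:
    """Split a simplified 802.3 frame (preamble/SOF already stripped)
    into the fields named above: DA, SA, Length, DATA and FCS."""
    da, sa = frame[0:6], frame[6:12]               # destination / source MAC
    (length,) = struct.unpack("!H", frame[12:14])  # payload length, big-endian
    payload = frame[14:14 + length]
    (fcs,) = struct.unpack("!I", frame[-4:])       # frame check sequence
    # The FCS covers DA..DATA; recomputing it detects corrupted bits.
    fcs_ok = zlib.crc32(frame[:-4]) == fcs
    return {"dst": da.hex(":"), "src": sa.hex(":"),
            "length": length, "payload": payload, "fcs_ok": fcs_ok}

# Build a tiny example frame and check that it parses cleanly.
payload = b"hello"
header = (bytes.fromhex("02aabbccddee")    # DA (locally administered address)
          + bytes.fromhex("021122334455")  # SA
          + struct.pack("!H", len(payload)))
body = header + payload
frame = body + struct.pack("!I", zlib.crc32(body))
print(parse_frame(frame))
```

Flipping any bit of the payload before the check makes fcs_ok come out False, which is the error-detection service the FCS field provides.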

Thursday, August 22, 2019

Racism in the Tuskegee Experiment Essay Example for Free

The Tuskegee experiment, begun in 1932 by the United States Public Health Service in Macon County, Alabama, used 400 black men who suffered from advanced stages of syphilis. The study was not a means of finding a cure; the patients were offered no preventative measures to prolong or better their lives. Although the history and nature of syphilis were well understood, certain scientists believed that more research could certainly be done. In terms of whom to study, the doctors developing the format discovered a "ready-made situation" (Jones 94).

Macon County, Alabama was impoverished, like much of the country in 1932. The selection process began during the Depression, a time of separation and intolerance. In the rural South, where we find Tuskegee, the men chosen were not seen, at the time, as equal in any sense of the word. Jones refers to prominent doctors of the region who, in the late 1800s, scientifically defined diseases that were peculiar to the race. One such disease, Cachexia Africana, caused the subject to eat dirt. The public did not question such obviously ridiculous claims at the time. In fact, the public heralded these doctors and requested a manual for treating blacks in order to save slave-owners and the like the money paid to doctors (17). Given the distaste for the ethnicity of the subjects, could their ethnicity have been a factor in the selection process?

At the time, the medical profession had already made some false assumptions about the African American race in general. Jones reiterates the white-held theory that black men had larger penises and few constraints when it came to sexual intercourse (23). It was also believed that they were harder to treat for syphilis because African Americans were stupid. In examining this mindset, it becomes clear why the government erringly felt it should go to the poorer black communities in rural Alabama to conduct a syphilis study. Believing them to be an immoral, sex-centered culture placed at the level of animals, the government would put them in league with mice and rats. As disgusting as the premise is, the doctors needed lab animals and set out to find them.

If this were true, how could the government get away with it? Blatant disregard for humanity and life could not go unnoticed. However, the geographical area in question had, in 1928, been the last state in the union to discontinue the use of chain gangs in its penitentiaries. The South had not yet begun to consider African Americans as people, not in the slightest meaning of the word. Jones reiterates the sentiment of the doctors at that time and place: "short of a 'quick-fix' by science requiring no behavior changes by blacks, there was no hope for the race" (26).

The Health Service claimed it informed the subjects of their disease, although Dr. J.W. Williams, an intern at the time the experiments began, stated that the men received no such information. He also claims the interns registered the data collected without understanding the nature of the experiment either (Jones 5).

The term 'racist' as defined in the Random House Webster's College Dictionary reads, "a belief or doctrine that inherent differences among various human races determine cultural or individual achievement, usually involving the idea that one's own race is superior" (1072). Given this definition, it is clear that the Tuskegee experiments were racist.
To withhold from the subjects the nature of the experiments, the name of the disease and the treatment of its symptoms, and to feel no remorse in inflicting this sort of medical indictment on fellow human beings, is not just racist, but also immoral and unjust. Jones points out that the Health Service did investigate the treatment of these patients through an ad hoc committee. The resulting medical treatment for the wives and children of the male subjects was offered with no cash restitution allowed (214). In the end, the government did agree to $10 million in payments to the "living syphilitics", the next of kin of those already dead, the "living controls" and the next of kin of the dead controls. If you had been living with the disease and never treated, you would get a grand total of $37,500; a paltry amount for the pain and suffering caused by neglect and racist bigotry (217).

Works Cited

Jones, James H. Bad Blood: The scandalous story of the Tuskegee experiment - when government doctors played God and science went mad. New York, NY: The Free Press, 1981.

Random House Webster's College Dictionary, 2nd Ed. New York, NY: Random House, 1997.

Wednesday, August 21, 2019

Assessment Philosophy Essay Example for Free

In my opinion, assessment is what teachers do in order to better understand where their students are on the cognitive learning level of a subject matter. Assessment is a continuous process that takes time and understanding. As a teacher, I will constantly assess my students by getting feedback from them in class. I feel that it is important for me to do this as a teacher so I know which students need more attention in certain areas of the subject. By assessing my students, I will gain knowledge of how to use types of differentiated instruction where necessary. I believe assessment is a tool used to evaluate both the teaching and the learning of content by the student. A variety of assessment tools should be utilized to effectively reach students' strengths. I realize that children learn differently and at their own pace. The types of assessment I will use to determine if my students have gotten where I want them to go will vary. The formative assessment tools I would use in the classroom include informal and formal questioning, oral presentations, peer evaluations, a variety of projects, quizzes, tests, demonstrations, drawings, and web quest observations. We all have strengths and weaknesses, and we all have different means of demonstrating each. If I use a lot of assessments, and vary the types I use, it gives my students the best opportunity to show me what they have. I will allow students the opportunity to pick from various projects that will enhance their learning ability, so I can see what they are able to accomplish. As a teacher, I will need to use assessments in my classroom to determine how to act upon each assessment to improve my students' learning. I think assessments are an important part of being a successful teacher, and I hope to encourage my students and show that I care about each one of them.

Tuesday, August 20, 2019

Credit Risk Dissertation

CREDIT RISK

EXECUTIVE SUMMARY

The future of banking will undoubtedly rest on risk management dynamics. Only those banks that have efficient risk management systems will survive in the market in the long run. The major cause of serious banking problems over the years continues to be directly related to lax credit standards for borrowers and counterparties, poor portfolio risk management, or a lack of attention to deterioration in the credit standing of a bank's counterparties. Credit risk is the oldest and biggest risk that a bank, by virtue of the very nature of its business, inherits. It has, however, acquired greater significance in the recent past for various reasons. There have been many traditional approaches to measuring credit risk, such as the logit and linear probability models, but with the passage of time new approaches have been developed, such as CreditRisk+ and the KMV model. The Basel I Accord was introduced in 1988 to provide a framework for regulatory capital for banks, but its "one size fits all" approach led to a shift to a new and comprehensive approach, Basel II, which adopts a three-pillar approach to risk management. Banks use a number of techniques to mitigate the credit risks to which they are exposed. The RBI has prescribed the adoption of the comprehensive approach for the purpose of CRM, which allows fuller offset of collateral against exposures by effectively reducing the exposure amount by the value ascribed to the collateral. In this study, a leading nationalized bank is examined to study the steps taken to implement the Basel II Accord and the entire framework developed for credit risk management. The bank under study uses the credit scoring method to evaluate the credit risk involved in various loans and advances. The bank has set up special software to evaluate each case under various parameters, and a monitoring system to continuously track each asset's performance against the evaluation parameters.

CHAPTER 1 INTRODUCTION

1.1 Rationale

Credit risk management in today's deregulated market is a big challenge. Increased market volatility has brought with it the need for smart analysis and specialized applications in managing credit risk. A well-defined policy framework is needed to help operating staff identify each risk event, assign a probability to it, quantify the likely loss, assess the acceptability of the exposure, price the risk and monitor it right to the point where it is paid off. Generally, banks in India evaluate a proposal through the traditional tools of project financing: computing maximum permissible limits, assessing management capabilities and prescribing a ceiling for industry exposure. As banks move into a new, high-powered world of financial operations and trading, with new risks, the need is felt for more sophisticated and versatile instruments for risk assessment, monitoring and control. It is, therefore, time that bank managements equip themselves fully to grapple with the demands of creating tools and systems capable of assessing, monitoring and controlling risk exposures in a more scientific manner. According to one estimate, credit risk accounts for about 70% of total risk, with the remaining 30% shared between the other two primary risks, namely market risk (changes in market prices) and operational risk (failure of internal controls, etc.). Quality borrowers (Tier-I borrowers) were able to access the capital market directly without going through the debt route.
Hence, the credit route is now more open to lesser mortals (Tier-II borrowers). With margin levels going down, banks are unable to absorb the level of loan losses. Even in banks which regularly fine-tune credit policies and streamline credit processes, it is a real challenge for credit risk managers to correctly identify pockets of risk concentration, quantify the extent of risk carried, identify opportunities for diversification and balance the risk-return trade-off in their credit portfolios. Bank managements should strive to embrace the notion of 'uncertainty and risk' in their balance sheets and instill the need to approach credit administration from a risk perspective across the system, by placing well-drafted strategies in the hands of the operating staff with due material support for their successful implementation. There is a need for a strategic approach to credit risk management (CRM) in Indian commercial banks, particularly in view of: (1) higher NPA levels in comparison with the global benchmark; (2) the RBI's stipulation about dividend distribution by banks; (3) revised NPA-level and CAR norms; and (4) the New Basel Capital Accord (Basel II) revolution.

1.2 OBJECTIVES

To understand the conceptual framework for credit risk.
To understand credit risk under the Basel II Accord.
To analyze the credit risk management practices in a leading nationalised bank.

1.3 RESEARCH METHODOLOGY

Research Design: In order to define the problem more comprehensively and to become familiar with the issues, an extensive literature survey was done to collect secondary data for locating the relevant variables, identifying probable contemporary issues and clarifying concepts.

Data Collection Techniques: The data collection technique used is interviewing. Data has been collected from both primary and secondary sources.

Primary Data: collected by making personal visits to the bank.

Secondary Data: collected from research papers, working papers and white papers published by various agencies like ICRA, FICCI and IBA, articles from the internet, and various journals.

1.4 LITERATURE REVIEW

* Merton (1974) applied the option pricing model as a technology to evaluate the credit risk of an enterprise, and the approach has drawn a lot of attention from Western academic and business circles. Merton's model is the theoretical foundation of structural models. The model is not only based on a strict and comprehensive theory, but also uses market information (the stock price) as an important variable to evaluate credit risk. This allows credit risk to be monitored in real time at a much higher frequency. This advantage has made the model widely applied in academic and business circles for a long time. Other structural models try to refine the original Merton framework by removing one or more of its unrealistic assumptions.

* Black and Cox (1976) postulate that default occurs as soon as the firm's asset value falls below a certain threshold. In contrast to the Merton approach, default can occur at any time. The paper by Black and Cox (1976) is the first of the so-called First Passage Models (FPM). First passage models specify default as the first time the firm's asset value hits a lower barrier, allowing default to take place at any time. When the default barrier is exogenously fixed, as in Black and Cox (1976) and Longstaff and Schwartz (1995), it acts as a safety covenant to protect bondholders. Black and Cox also introduce the possibility of more complex capital structures, with subordinated debt.
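Before moving to the later refinements, a minimal numerical sketch of the Merton mechanism reviewed above may be useful: equity is treated as a call option on the firm's assets, and default occurs if the asset value ends up below the face value of debt at maturity. The figures below are invented for illustration, and the code is a simplified reading of Merton (1974), not the full model.

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function (no external libraries).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_pd(V: float, F: float, mu: float, sigma: float, T: float) -> float:
    """Probability that the asset value V, following a lognormal diffusion
    with drift mu and volatility sigma, ends below the debt face value F
    at horizon T -- the default trigger in the Merton framework."""
    d2 = (log(V / F) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(-d2)  # default: assets below F at maturity

# Hypothetical firm: assets 120, debt of 100 due in one year, 25% asset vol.
pd_1y = merton_pd(V=120.0, F=100.0, mu=0.05, sigma=0.25, T=1.0)
print(f"one-year probability of default = {pd_1y:.1%}")  # roughly 21%
```

The first passage models that follow change only the trigger: instead of checking the asset value at maturity, they ask whether it touches the barrier at any time before maturity.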
* Geske (1977) introduces interest-paying debt to the Merton model.
* Vasicek (1984) introduces the distinction between short-term and long-term liabilities, which now represents a distinctive feature of the KMV model. Under these models, all the relevant credit risk elements, including default and recovery at default, are a function of the structural characteristics of the firm: asset levels, asset volatility (business risk) and leverage (financial risk).
* Kim, Ramaswamy and Sundaresan (1993) suggest an alternative approach which still adopts the original Merton framework as far as the default process is concerned but, at the same time, removes one of the unrealistic assumptions of the Merton model, namely that default can occur only at maturity of the debt, when the firm's assets are no longer sufficient to cover the debt obligations. Instead, it is assumed that default may occur at any time between the issuance and maturity of the debt, and that default is triggered when the value of the firm's assets reaches a lower threshold level. In this model the recovery rate (RR) in the event of default is exogenous and independent of the firm's asset value; it is generally defined as a fixed ratio of the outstanding debt value and is therefore independent of the PD. The attempt to overcome the shortcomings of structural-form models gave rise to reduced-form models. Unlike structural-form models, reduced-form models do not condition default on the value of the firm, and parameters related to the firm's value need not be estimated to implement them.
* Jarrow and Turnbull (1995) assume that, at default, a bond would have a market value equal to an exogenously specified fraction of an otherwise equivalent default-free bond.
* Duffie and Singleton (1999) followed with a model that, when the market value at default (i.e. the RR) is exogenously specified, allows for closed-form solutions for the term structure of credit spreads.
* Zhou (2001) attempts to combine the advantages of structural-form models (a clear economic mechanism behind the default process) with those of reduced-form models (unpredictability of default). This model links RRs to the firm value at default, so that the variation in RRs is endogenously generated and the correlation between RRs and credit ratings, reported first in Altman (1989) and Gupton, Gates and Carty (2000), is justified.

Lately a portfolio view of credit losses has emerged, recognising that changes in credit quality tend to co-move over the business cycle and that one can diversify part of the credit risk through a clever composition of the loan portfolio across regions, industries and countries. Thus, in order to assess the credit risk of a loan portfolio, a bank must not only investigate the creditworthiness of its customers but also identify the concentration risks and possible co-movements of risk factors in the portfolio.
* CreditMetrics (Gupton et al., 1997) was published in 1997 by JP Morgan. Its methodology is based on the probability of moving from one credit quality to another within a given time horizon (credit migration analysis). In the estimation of portfolio Value-at-Risk due to credit (Credit-VaR) through CreditMetrics, a rating system with probabilities of migrating from one credit quality to another over a given time horizon (the transition matrix) is the key component of the Credit-VaR proposed by JP Morgan. The specified credit risk horizon is usually one year.
* Sy (2007) states that the primary cause of credit default is loan delinquency due to insufficient liquidity or cash flow to service debt obligations. In the case of unsecured loans, delinquency is assumed to be a necessary and sufficient condition. In the case of collateralized loans, delinquency is a necessary but not sufficient condition, because the borrower may be able to refinance the loan out of positive equity or net assets to prevent default. In general, for secured loans, both delinquency and insolvency are assumed necessary and sufficient for credit default.

CHAPTER 2: THEORETICAL FRAMEWORK

2.1 CREDIT RISK
Credit risk is risk due to uncertainty in a counterparty's (also called an obligor's or credit's) ability to meet its obligations. Because there are many types of counterparties, from individuals to sovereign governments, and many different types of obligations, from auto loans to derivatives transactions, credit risk takes many forms, and institutions manage it in different ways. Although credit losses naturally fluctuate over time and with economic conditions, there is (ceteris paribus) a statistically measurable, long-run average loss level. Losses can be divided into two categories: expected losses (EL) and unexpected losses (UL). EL is based on three parameters:
* the likelihood that default will take place over a specified time horizon (probability of default, or PD);
* the amount owed by the counterparty at the moment of default (exposure at default, or EAD);
* the fraction of the exposure, net of any recoveries, which will be lost following a default event (loss given default, or LGD).

EL = PD x EAD x LGD

EL can be aggregated at various levels (e.g. individual loan or entire credit portfolio), although it is typically calculated at the transaction level; it is normally expressed either as an absolute amount or as a percentage of transaction size. It is also both customer- and facility-specific, since two different loans to the same customer can have very different EL due to differences in EAD and/or LGD. It is important to note that EL (or, for that matter, credit quality) does not by itself constitute risk; if losses always equaled their expected levels, there would be no uncertainty. Instead, EL should be viewed as an anticipated "cost of doing business" and should therefore be incorporated in loan pricing and ex ante provisioning. Credit risk, in fact, arises from variation in actual loss levels around the expectation, which gives rise to the so-called unexpected loss (UL). Statistically speaking, UL is simply the standard deviation of losses around EL:

UL = σ(loss) = σ(PD x EAD x LGD)

Once the bank-level credit loss distribution is constructed, credit economic capital is determined simply by the bank's tolerance for credit risk, i.e. the bank needs to decide how much capital it wants to hold in order to avoid insolvency due to unexpected credit losses over the next year. A safer bank must have sufficient capital to withstand losses that are larger and rarer, i.e. losses that extend further out in the loss distribution tail. In practice, therefore, the choice of confidence interval in the loss distribution corresponds to the bank's target credit rating (and related default probability) for its own debt.
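To make these definitions concrete, here is a minimal Python sketch with invented loan parameters. It computes EL by the formula above and a stand-alone UL for each loan under two simplifying assumptions (fixed severity EAD x LGD, and independent defaults); none of the figures are from the bank under study.

```python
import math

# Illustrative only: hypothetical loans, fixed severity (EAD x LGD),
# independent Bernoulli defaults.

def expected_loss(pd, ead, lgd):
    """EL = PD x EAD x LGD, per the definition above."""
    return pd * ead * lgd

def unexpected_loss(pd, ead, lgd):
    """Stand-alone UL: std. deviation of a Bernoulli loss with fixed severity."""
    severity = ead * lgd
    return severity * math.sqrt(pd * (1.0 - pd))

loans = [
    {"pd": 0.020, "ead": 1_000_000, "lgd": 0.45},  # weaker Tier-II borrower
    {"pd": 0.005, "ead": 2_000_000, "lgd": 0.30},  # better rated, more collateral
]

portfolio_el = sum(expected_loss(**loan) for loan in loans)
# With independent defaults, portfolio variance is the sum of loan variances.
portfolio_ul = math.sqrt(sum(unexpected_loss(**loan) ** 2 for loan in loans))

print(f"Portfolio EL: {portfolio_el:,.0f}")
print(f"Portfolio UL: {portfolio_ul:,.0f}")
```

In practice the independence assumption understates UL, which is precisely why the portfolio models discussed later (CreditMetrics, CreditRisk+) model co-movement between obligors.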
Graphically, economic capital is the difference between EL and the selected confidence level at the tail of the loss distribution; it is equal to a multiple K (often referred to as the capital multiplier) of the standard deviation of losses (i.e. UL). The shape of the loss distribution can vary considerably depending on product type and borrower credit quality. For example, high-quality (low-PD) borrowers tend to have proportionally less EL per unit of capital charged, meaning that K is higher and the shape of their loss distribution is more skewed (and vice versa).

Credit risk may arise in the following forms:
* direct lending
* guarantees and letters of credit
* treasury operations
* securities trading businesses
* cross-border exposure

2.2 The need for Credit Risk Rating
The need for credit risk rating has arisen for the following reasons:
1. With the dismantling of state control, deregulation, globalisation and allowing things to take shape on the basis of market conditions, Indian industry and Indian banking face new risks and challenges. Competition results in the survival of the fittest. It is therefore necessary to identify these risks, measure them, monitor them and control them.
2. It provides a basis for credit risk pricing, i.e. fixing the rate of interest on lending to different borrowers based on their credit risk rating, thereby balancing risk and reward for the bank.
3. The Basel Accord and the consequent Reserve Bank of India guidelines require that the level of capital maintained by the bank be in proportion to the risk of the loans on the bank's books, for the measurement of which a proper credit risk rating system is necessary.
4. Credit risk rating can be a risk management tool for prospecting fresh borrowers, in addition to monitoring the weaker parameters and taking remedial action.

Types of risk captured in the bank's credit risk rating model: the model provides a framework to evaluate the risk emanating from the following main risk categories/areas:
* Industry risk
* Business risk
* Financial risk
* Management risk
* Facility risk
* Project risk

2.3 WHY CREDIT RISK MEASUREMENT?
In recent years a revolution has been brewing in how credit risk is both managed and measured. There are seven reasons for this surge in interest:
1. Structural increase in bankruptcies: Although the most recent recession hit at different times in different countries, most statistics show a significant increase in bankruptcies compared with prior recessions. To the extent that there has been a permanent or structural increase in bankruptcies worldwide, due to the increase in global competition, accurate credit analysis becomes even more important today than in the past.
2. Disintermediation: As capital markets have expanded and become accessible to small and mid-sized firms, the firms or borrowers "left behind" to raise funds from banks and other traditional financial institutions (FIs) are likely to be smaller and to have weaker credit ratings. Capital market growth has produced a "winner's curse" effect on the portfolios of traditional FIs.
3. More competitive margins: Almost paradoxically, despite the decline in the average quality of loans, interest margins or spreads, especially in wholesale loan markets, have become very thin. In short, the risk-return trade-off from lending has gotten worse.
A number of reasons can be cited, but an important factor has been the enhanced competition for low-quality borrowers, especially from finance companies, much of whose lending activity has been concentrated at the higher-risk/lower-quality end of the market.
4. Declining and volatile values of collateral: Concurrent with the recent Asian and Russian debt crises, experience in well-developed countries such as Switzerland and Japan has shown that property and real asset values are very hard to predict and to realize through liquidation. The weaker (and more uncertain) collateral values are, the riskier the lending is likely to be. Indeed, current concerns about deflation worldwide have accentuated concerns about the value of real assets such as property and other physical assets.
5. The growth of off-balance-sheet derivatives: In many of the very large U.S. banks, the notional value of the off-balance-sheet exposure to instruments such as over-the-counter (OTC) swaps and forwards is more than 10 times the size of their loan books. Indeed, the growth in credit risk off the balance sheet was one of the main reasons for the introduction, by the Bank for International Settlements (BIS), of risk-based capital requirements in 1993. Under the BIS system, banks have to hold a capital requirement based on the marked-to-market current values of each OTC derivative contract plus an add-on for potential future exposure.
6. Technology: Advances in computer systems and related advances in information technology have given banks and FIs the opportunity to test high-powered modeling techniques. A survey conducted by the International Swaps and Derivatives Association and the Institute of International Finance in 2000 found that survey participants (25 commercial banks from 10 countries, of varying sizes and specialties) used commercial and internal databases to assess the credit risk on rated and unrated commercial, retail and mortgage loans.
7. The BIS risk-based capital requirements: Despite the importance of the six reasons above, probably the greatest incentive for banks to develop new credit risk models has been dissatisfaction with the BIS and central banks' post-1992 imposition of capital requirements on loans. The current BIS approach has been described as a 'one size fits all' policy, applied irrespective of the size of the loan, its maturity and, most importantly, the credit quality of the borrowing party. Much of the current interest in fine-tuning credit risk measurement models has been fueled by the proposed BIS New Capital Accord (so-called BIS II), which would more closely link capital charges to the credit risk exposure of retail, commercial, sovereign and interbank credits.

CHAPTER 3: CREDIT RISK APPROACHES AND PRICING

3.1 CREDIT RISK MEASUREMENT APPROACHES

1. CREDIT SCORING MODELS
Credit scoring models use data on observed borrower characteristics to calculate the probability of default or to sort borrowers into different default risk classes. By selecting and combining different economic and financial borrower characteristics, a bank manager may be able to establish numerically which factors are important in explaining default risk, evaluate the relative importance of these factors, improve the pricing of default risk, screen out bad loan applicants more effectively, and be in a better position to calculate any reserve needed to meet expected future loan losses.
To employ a credit scoring model in this manner, the manager must identify objective economic and financial measures of risk for any particular class of borrower. For consumer debt, the objective characteristics in a credit scoring model might include income, assets, age, occupation and location. For corporate debt, financial ratios such as the debt-equity ratio are usually key factors. After the data are identified, a statistical technique quantifies or scores the default risk probability or default risk classification. Credit scoring models include three broad types: (1) linear probability models, (2) the logit model and (3) the linear discriminant model.

LINEAR PROBABILITY MODEL
The linear probability model uses past data, such as accounting ratios, as inputs to explain repayment experience on old loans. The relative importance of the factors used in explaining past repayment performance is then used to forecast default probabilities on new loans. Briefly, we divide old loans (i) into two observational groups: those that defaulted (Zi = 1) and those that did not default (Zi = 0). We then relate these observations by linear regression to a set of j causal variables (Xij) that reflect quantitative information about the ith borrower, such as leverage or earnings. We estimate the model by linear regression:

Zi = Σj βj Xij + error

where βj is the estimated importance of the jth variable in explaining past repayment experience. If we then take these estimated βj's and multiply them by the observed Xij for a prospective borrower, we can derive an expected value of Zi, which can be interpreted as the probability of default on the loan.

LOGIT MODEL
The objective of the typical credit or loan review model is to replicate judgments made by loan officers, credit managers or bank examiners. If an accurate model could be developed, it could be used as a tool for reviewing and classifying future credit risks. Chesser (1974) developed a model to predict noncompliance with the customer's original loan arrangement, where noncompliance is defined to include not only default but any workout that may have been arranged resulting in a settlement of the loan less favorable to the lender than the original agreement. Chesser's model, based on a technique called logit analysis, consisted of the following six variables:

X1 = (Cash + Marketable Securities) / Total Assets
X2 = Net Sales / (Cash + Marketable Securities)
X3 = EBIT / Total Assets
X4 = Total Debt / Total Assets
X5 = Total Assets / Net Worth
X6 = Working Capital / Net Sales

The estimated coefficients, including an intercept term, are:

Y = -2.0434 - 5.24X1 + 0.0053X2 - 6.6507X3 + 4.4009X4 - 0.0791X5 - 0.1020X6

The probability of noncompliance is then obtained from the logistic transformation P = 1 / (1 + e^(-Y)). Chesser's classification rule is: if P > 0.50, assign to the noncompliance group; if P ≤ 0.50, assign to the compliance group.
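The mechanics of Chesser's model take only a few lines of Python. The coefficients are the ones quoted above; the borrower's six ratios are hypothetical, chosen purely to show how the score and the 0.50 classification cutoff fit together.

```python
import math

# Chesser (1974) logit model as given in the text; borrower ratios are hypothetical.
INTERCEPT = -2.0434
COEFFS = [-5.24, 0.0053, -6.6507, 4.4009, -0.0791, -0.1020]

def chesser_probability(x):
    """P(noncompliance) = 1 / (1 + e^-Y) for ratios x = [X1, ..., X6]."""
    y = INTERCEPT + sum(b * xi for b, xi in zip(COEFFS, x))
    return 1.0 / (1.0 + math.exp(-y))

borrower = [0.12, 8.5, 0.09, 0.55, 2.8, 0.15]  # X1..X6, invented for illustration
p = chesser_probability(borrower)
group = "noncompliance" if p > 0.50 else "compliance"
print(f"P = {p:.3f} -> assign to the {group} group")
```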
LINEAR DISCRIMINANT MODEL
While linear probability and logit models project a value for the expected probability of default if a loan is made, discriminant models divide borrowers into high and low default risk classes contingent on their observed characteristics (X). Altman's Z-score model is an application of multivariate discriminant analysis in credit risk modeling. Financial ratios measuring profitability, liquidity and solvency appeared to have significant discriminating power to separate firms that fail to service their debt from firms that do not. These ratios are weighted to produce a measure (a credit risk score) that can be used as a metric to differentiate the bad firms from the set of good ones. Discriminant analysis is a multivariate statistical technique that analyzes a set of variables in order to differentiate two or more groups by simultaneously minimizing the within-group variance and maximizing the between-group variance. The variables taken were:

X1 = Working Capital / Total Assets
X2 = Retained Earnings / Total Assets
X3 = Earnings Before Interest and Taxes / Total Assets
X4 = Market Value of Equity / Book Value of Total Liabilities
X5 = Sales / Total Assets

The original Z-score model was revised and modified several times to make the scoring model more specific to particular classes of firm. This resulted in the private firms' Z-score model, the non-manufacturers' Z-score model and the Emerging Market Scoring (EMS) model.

3.2 New Approaches

TERM STRUCTURE DERIVATION OF CREDIT RISK
One market-based method of assessing credit risk exposure and default probabilities is to analyze the risk premium inherent in the current structure of yields on corporate debt or loans to similarly risk-rated borrowers. Rating agencies categorize corporate bond issuers into at least seven major classes according to perceived credit quality. The first four ratings (AAA, AA, A and BBB) indicate investment-quality borrowers.

MORTALITY RATE APPROACH
Rather than extracting expected default rates from the current term structure of interest rates, the FI manager may analyze the historic or past default experience, the mortality rates, of bonds and loans of similar quality. Here p1 is the probability of a grade B bond surviving the first year of its issue; thus 1 - p1 is the marginal mortality rate, the probability of the bond or loan defaulting in the first year. Likewise, p2 is the probability of the loan surviving the second year given that it has not defaulted in the first year, and 1 - p2 is the marginal mortality rate for the second year. Thus, for each grade of corporate borrower quality, a marginal mortality rate (MMR) curve can show the historical default rate of any specific quality class in each year after issue.

RAROC MODELS
Based on a bank's risk-bearing capacity and its risk strategy, it is necessary, bearing in mind the bank's strategic orientation, to find a method for the efficient allocation of capital to the bank's individual business areas, i.e. to define indicators that are suitable for balancing risk and return in a sensible manner. Indicators fulfilling this requirement are often referred to as risk-adjusted performance measures (RAPM). The most commonly found forms are RORAC (return on risk-adjusted capital), RAROC (risk-adjusted return on capital) and RARORAC (risk-adjusted return on risk-adjusted capital). Net income is taken to mean income minus refinancing cost, operating cost and expected losses. It should be the bank's goal to maximize a RAPM indicator for the bank as a whole, e.g. RORAC, taking into account the correlations between individual transactions. Certain constraints, such as volume restrictions due to a potential lack of liquidity and the maintenance of solvency based on economic and regulatory capital, have to be observed in reaching this goal. From an organizational point of view, value and risk management should therefore be linked as closely as possible at all organizational levels.
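As a rough numerical illustration of a RORAC-type RAPM, the sketch below divides net income, defined as in the text (income minus refinancing cost, operating cost and expected losses), by the economic capital allocated to a business area. Every figure is assumed for illustration.

```python
# Hypothetical figures for one business area; nothing here comes from a real bank.
income = 1_200_000.0
refinancing_cost = 400_000.0
operating_cost = 250_000.0
expected_losses = 150_000.0
economic_capital = 2_500_000.0  # risk-adjusted capital allocated to the area

# Net income per the definition in the text.
net_income = income - refinancing_cost - operating_cost - expected_losses
rorac = net_income / economic_capital
print(f"RORAC = {rorac:.1%}")  # 16.0% on these assumed figures
```

A business area earning a RORAC below the bank's hurdle rate destroys value even if its accounting profit is positive, which is the sense in which such indicators balance risk and return.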
OPTION MODELS OF DEFAULT RISK (THE KMV MODEL)
KMV Corporation has developed a credit risk model that uses information on the stock price and the capital structure of the firm to estimate its default probability. The starting point of the model is the proposition that a firm will default only if its asset value falls below a certain level, which is a function of its liabilities. It estimates the asset value of the firm and its asset volatility from the market value of equity and the debt structure in an option-theoretic framework. The resulting probability is called the Expected Default Frequency (EDF). In summary, EDF is calculated in the following three steps:
i) estimation of asset value and asset volatility from the equity value and the volatility of equity returns;
ii) calculation of the distance to default;
iii) calculation of the expected default frequency.

CREDITMETRICS
CreditMetrics provides a method for estimating the distribution of the value of the assets in a portfolio subject to changes in the credit quality of individual borrowers. A portfolio consists of different stand-alone assets, each defined by a stream of future cash flows. Each asset has a distribution over the possible range of future rating classes: starting from its initial rating, an asset may end up in any one of the possible rating categories. Each rating category has a different credit spread, which is used to discount the future cash flows. Moreover, the assets are correlated among themselves depending on the industry they belong to. It is assumed that asset returns are normally distributed and that changes in asset returns cause the changes in rating category in future. Finally, simulation is used to estimate the value distribution of the assets: a number of scenarios are generated from a multivariate normal distribution and, discounting at the credit spread appropriate to each scenario, the future value of each asset is estimated.

CREDITRISK+
CreditRisk+, introduced by Credit Suisse Financial Products (CSFP), is a model of default risk. Each asset has only two possible end-of-period states: default and non-default. In the event of default, the lender recovers a fixed proportion of the total exposure. The default rate is treated as a continuous random variable. The model does not try to estimate default correlation directly; instead, default correlation is assumed to be driven by a set of risk factors. Conditional on these risk factors, the default of each obligor follows a Bernoulli distribution. To obtain the unconditional probability generating function for the number of defaults, the risk factors are assumed to be independently gamma-distributed random variables. The final step in CreditRisk+ is to obtain the probability generating function for losses: conditional on the number of default events, the losses are entirely determined by the exposures and recovery rates. Thus, the loss distribution can be estimated from the following input data:
i) exposure of each individual asset
ii) expected default rates
iii) default rate volatilities
iv) recovery rates given default
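The KMV steps can be approximated with a simplified Merton-style calculation. The sketch below assumes step (i) is already done, i.e. the asset value and asset volatility are given rather than backed out of equity prices, and, since KMV maps distance to default to EDF through its proprietary default database, the normal CDF is substituted purely as an illustrative stand-in.

```python
import math
from statistics import NormalDist

# Hypothetical inputs; in KMV, V and sigma are estimated from equity data (step i).
V = 120_000_000.0             # market value of the firm's assets (assumed)
sigma = 0.25                  # annual asset volatility (assumed)
default_point = 80_000_000.0  # assumed, e.g. short-term debt + half of long-term debt
mu = 0.08                     # expected asset return (assumed)
T = 1.0                       # one-year horizon

# Step (ii): distance to default, in standard deviations of log asset value.
dd = (math.log(V / default_point) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

# Step (iii): EDF. The normal CDF is a stand-in for KMV's empirical mapping.
edf = NormalDist().cdf(-dd)
print(f"Distance to default: {dd:.2f}")
print(f"Illustrative EDF: {edf:.2%}")
```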
3.3 CREDIT PRICING
Pricing of credit is essential for the survival of enterprises relying on credit assets, because the benefits derived from extending credit should surpass the cost. With the introduction of capital adequacy norms, credit risk is linked to capital: a minimum 8% capital adequacy requirement. Consequently, more capital has to be deployed if more credit risk is underwritten. The decision (a) whether to maximize the returns on possible credit assets with the existing capital or (b) raise more capital to do more business invariably depends upon p
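The pricing logic this section sets up can be illustrated with a minimal risk-based pricing sketch: the lending rate must at least cover the funding cost, operating cost, expected loss and the hurdle return on the capital held against the exposure under the 8% norm. All numbers are assumptions for illustration, not recommendations.

```python
# Hypothetical loan pricing under the 8% minimum capital adequacy norm.
exposure = 10_000_000.0
risk_weight = 1.00          # assumed 100% risk weight for this borrower
capital_ratio = 0.08        # minimum capital adequacy mentioned above
funding_cost = 0.065        # assumed cost of funds
operating_cost = 0.010      # assumed operating cost, as a fraction of exposure
expected_loss_rate = 0.009  # assumed EL rate (PD x LGD)
hurdle_rate = 0.15          # assumed required return on capital

capital_required = exposure * risk_weight * capital_ratio
# Break-even rate: funding + operations + expected loss + return on capital.
required_rate = (funding_cost + operating_cost + expected_loss_rate
                 + hurdle_rate * capital_required / exposure)
print(f"Capital required: {capital_required:,.0f}")
print(f"Break-even lending rate: {required_rate:.2%}")  # 9.60% on these assumptions
```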