Wednesday, July 31, 2019

“A Brilliant Solution: Inventing the American Constitution” by Carol Berkin

In the book “A Brilliant Solution: Inventing the American Constitution,” Carol Berkin explains the Constitution from start to finish: how it all began, the debates inside the convention, and the finished product. Berkin puts the reader directly in the middle of the Convention of 1787; throughout the book you can feel the excitement, the frustration, the tensions between delegates, and the overall commitment to making a new government work for all.

The push for a new government came about in a time of fear. Many men, such as William Livingston, wondered “if the republic could even survive another decade,” and Henry Knox put the point bluntly in declaring, “Our present federal government is a name, a shadow, without power, or effect.” Meanwhile the relationship between the states was poor, and there was real doubt that they would even remain united, what with the debts, the economic turmoil, and the slow realization that without England they had no protection from the outside world. The question on everyone’s mind was whether anything could be done to save the country. Fifty-five delegates gathered in hopes of answering that question with a brilliant solution of their own.

The delegates who gathered in Philadelphia were among the most respected men of their time. They ranged from lawyers to politicians, from the aged Benjamin Franklin to the young Jonathan Dayton, and nearly all were men of inherited wealth; the rest, as Berkin puts it, were the minority who “had risen from obscurity to wealth by virtue of some combination of talent, luck, and well-made marriages.” Nobody present would ever be considered just a common man, and some, such as Thomas Jefferson, would see these men as “demi-gods” rather than the regular, flawed, yet brilliant men that they were.

It was during this convention that Edmund Randolph proposed the Virginia Plan, a proposal to write a new constitution instead of revising the Articles of Confederation as intended. After many debates between the larger and smaller states over fair representation, and still more adjustments to the plan itself, the convention created the Senate, a body of wise men made up of two members from each state. This satisfied the smaller states, while the larger states were given a House of Representatives, a larger body whose seats were apportioned to each state in proportion to its population.

During the convention there were many controversies among these men. Some delegates were focused on not upsetting their constituents back in their home states, the smaller states were constantly trying to protect themselves from the influence of the larger states, and the southern states feared that a national government would upset the slave trade. Many of the delegates, however, shared a bigger fear throughout the convention: putting too much power into a central government, and in particular giving the Senate and House of Representatives too much power. George Mason, an initial advocate of a strong central government, withdrew his support and refused to sign the finished Constitution, claiming that the new government would “produce a monarchy, or a corrupt, tyrannical aristocracy.” For the sake of finding a balance, the idea of giving the chief executive the power of veto over legislation was proposed.
Along with this power a check was needed, so the idea that a two-thirds vote of Congress could overrule a President’s veto was adopted. How to elect a President was the cause of some of the longest and most grueling debates in the convention; the question of who could be trusted to choose the president weighed heavily on all their minds. The delegates knew the choice could not be left to the legislature, and Gouverneur Morris even declared “it would be like the election of a pope by a conclave of cardinals.” To leave the choice to the people alone left the near certainty that the people would be led and lied to by “a few active and designing men,” as Charles Pinckney put it. Lost and befuddled about what to do, the convention turned the matter over to the Committee of Postponed Matters, which, after weeks of debate of its own, came up with something we know today as the Electoral College. Each state would appoint a number of electors equal to its combined number of representatives and senators. These electors were to meet in their respective states and vote by ballot for two people. From there their votes would be delivered, signed, certified, and in a sealed envelope, to Congress, where the results were to be counted in front of the congressmen and senators. In the event of a tie, the House of Representatives would select the president. It was through this process that our first President, who set the precedent for all presidents to come, was elected.

Even when the Constitution had been drawn up and was ready for ratification by the states, which would not happen for another year and only after many political battles between Federalists and Anti-Federalists, the delegates realized that though this document would suffice for now, they could not escape the indefinite future. So they included in the Constitution a capacity for change, designed so that later, when times had changed and change was needed, it could be incorporated into the Constitution as amendments. In closing, Carol Berkin does an excellent job of portraying the struggles and concerns that played out in Philadelphia’s Independence Hall; the framers fought through frustration, through pressure, and with each other. They knew what had to be done, and though some did not remain throughout the convention, we owe our law of the land to the determination of these fifty-five men.

Tuesday, July 30, 2019

Alcoholism and Domestic Violence

Alcoholism, also known as alcohol dependence, is unfortunately a widespread ailment which spans people of all age groups and socioeconomic levels. The health risks of this disease, and alcoholism is a disease, are as widespread as the individuals who contract it. In addition to these health risks, alcoholism is also an influencing factor in another problem plaguing societies, domestic violence. Thus, alcohol and anger create a sometimes fatal combination.

Alcoholism is a disease which can be described by degree. Alcohol dependence describes individuals who have developed a “maladaptive pattern” of alcohol consumption which is characterized by a developing alcohol tolerance, withdrawal symptoms or hangovers, and the inability to stop drinking. It does not stop there. People with alcohol dependence may progress to alcohol abuse, which can significantly interfere with their social lives, their work, or their interpersonal relationships. In addition, this abuse can also cause a host of related issues including “major depression, dysthymia, mania, hypomania, panic disorder, phobias, generalized anxiety disorder, personality disorders, any drug use disorder, schizophrenia, and suicide” (Cargiulo 2007). According to the National Institute on Alcohol Abuse and Alcoholism (NIAAA), drinking more than 14 drinks in a week for men or seven drinks per week for women could indicate alcohol dependence. In addition, the NIAAA estimates that nearly 18 million Americans could be considered alcoholics (Lauer 2006).

Despite the many mental and physiological problems that are associated with alcoholism, some of the most frightening are the health problems associated with the brain. Evidence exists that shows the damage that alcohol consumption does to the brain. Brain imaging studies have revealed that people with alcoholism have significant differences in parts of their brains compared with those without alcoholism. Brain volume is reduced in alcoholics, as is blood flow to the brain. The reduced blood flow has been linked to a lowering of inhibitions and memory, impaired cognitive function in general, and even damage to the corpus callosum (Cargiulo 2007). These problems can lead to long-term brain damage. Lesions form in the brains of those with long-term patterns of alcohol abuse. This can translate into Korsakoff’s disease, which is characterized by motor and thinking impairments that can affect a person’s ability to care for himself. In the end, the individual may have to be cared for institutionally.

Alcohol affects the neurotransmitters in the brain. As the disease progresses to chronic status, the brain cells begin to adapt to the alcohol that seems to reside permanently in the brain. As a result, the brain becomes reliant on the alcohol to work. If alcohol is removed, the symptoms of withdrawal take longer and longer to subside. Ultimately, the brain tissue will rebel, in a way, and the withdrawal symptoms can be severe, even fatal. Once the cells in the brain die, they cannot be regenerated (Shoemaker 2003). These effects seem to affect males to a greater degree than females. This fact can be explained by differences in drinking patterns, choice of alcoholic drinks, rate of alcohol metabolism, and the protective effects of hormones such as estrogen (de Bruin, 2005). As such, alcohol dependence and abuse are three times more prominent in men than in women, even though evidence suggests that for both genders the numbers are underreported (Cargiulo 2007).
As if the physical effects on the body were not bad enough, the behaviors of individuals who are addicted to alcohol are also quite dangerous. Drinkers find themselves less inhibited and more willing to engage in risky behaviors. Many of these behaviors can be characterized as aggressive and violent. One of the worst that researchers find among alcoholics is domestic violence, or intimate partner violence (IPV). The Academy of Domestic Violence has defined domestic violence as “a deliberate pattern of abusive tactics used by one partner in an intimate relationship to obtain and maintain power and control over the other person,” which includes physical, sexual, psychological, emotional, and economic abuse (Niolon 2004).

The types of domestic violence have been organized by Dr. Richard Niolon (2004). He identifies one type as common couple violence, which occurs in one or two isolated incidents over the course of the couple’s relationship. Though painful at the time, this type is not usually seen as a recurring pattern of abuse and control. The second type is identified by Niolon (2004) as intimate terrorism, in which violence is used as a means of manipulation and control on a relatively regular basis. Mutual violent control occurs when both the male and the female fight each other, and dysphoric-borderline violence is indicative of a dependent, emotionally fragile individual who resorts to violence as a last resort. This type of violence often occurs when the abused person in the relationship snaps and lashes out violently against the other partner, or when a new set of circumstances radically increases the frustration level of one of the partners and he or she lashes out as a result of this new situation (Niolon 2004).

These stages of violence typically follow a predictable cycle. The first stage of this cycle is a calm period in which tension slowly builds. Minor incidents may occur in this stage, which can continue for varying periods of time. The second stage is the one in which the abuser seems to explode and actually engages in the violence. Outside parties may have to intervene to stop the onslaught. The third stage is called the honeymoon stage because the abuser will show distinct remorse for his actions, apologize profusely, and even shower the abused with gifts, affection, and promises. Unfortunately, the abused is likely to forgive the abuser at this point (Niolon 2004).

Risk factors for IPV include lower educational levels, lower income and/or employment levels, and, of course, alcohol misuse (Jeyaseelan, 2004). Sadly, alcohol and IPV often do go hand in hand. Not surprisingly, the most common locations for IPV to occur are in the home and at bars. According to interviews with abused wives, men were much more likely to have been drinking during the attacks than not. When the abusive husbands were interviewed, they reported having had at least six drinks before the onset of the violence (Quigley and Leonard, 2004/2005). Thus the concurrence of alcoholism and IPV is shown. When drinking, a dangerous combination of increased aggression and reduced inhibition leads to these batterings. Many studies support this finding, which again seems to afflict more men than women.
Quigley and Leonard (2004/2005) recount a study by Kaufman, Kantor, and Straus in 1990 which found that a husband’s heavy drinking was associated with husband-on-wife violence. Further studies show that a husband who drinks early in marriage is more prone to IPV later in marriage, and husbands who drink heavily before marriage are more likely to be violent toward their wives in the very first year of marriage (Quigley and Leonard, 2004/2005). In addition, these authors cite Caetano in noting that there are racial differences involved in IPV. They note that “nineteen percent of European American husbands and 24 percent of Hispanic husbands who drank at least five drinks a week committed IPV, as opposed to 40 percent of African American husbands who drank” (Quigley and Leonard, 2004/2005). This has harrowing implications for women of all races, particularly African American women.

Galvani (2004) gives several possible reasons why this may be true. Physiological theories argue that ethanol, the drug in alcohol, increases aggression biologically. A theory known as Disinhibition Theory notes the earlier link between alcohol and cognitive function, specifically the portion of the brain mentioned above that regulates levels of inhibition. The Deviance Disavowal theory argues that abusers use alcohol as a reason for their behavior and consciously drink so that they can blame the alcohol for their actions. Social Learning theories explain that people will act in a way based on their experiences around others. Therefore, parents and societal expectations can lead to alcohol abuse and abusive behaviors (Galvani, 2004).

Both alcoholism and IPV are scourges upon society, creating physical and mental damage. When they are combined, their effects are even stronger and more widespread. With hope, individuals who find themselves in these situations will soon seek help to avoid permanent tragedy.

References

Cargiulo, T. (2007). Understanding the health impact of alcohol dependence. American Journal of Health-System Pharmacy 64: S1-S17.

De Bruin, EA. (2005). Does alcohol intake relate to brain volume loss? The Brown University Digest of Addiction Theory & Application 24 (7): 5-6.

Galvani, S. (2004). Responsible disinhibition: Alcohol, men and violence to women. Addiction Research & Theory 12 (4): 357-371.

Jeyaseelan, L et al. (2004). World studies of abuse in the family environment – risk factors for physical intimate partner violence. Injury Control & Safety Promotion 11 (2): 117-124.

Lauer, CS. (2006). When drinking turns serious. Modern Healthcare 36 (16): 22.

Niolon, R. (2004). Types and Cycles of Domestic Violence. Retrieved 1 May 2007 from http://www.psychpage.com/learning/index.html

Quigley, BM & Leonard, KE. (2004/2005). Alcohol Use and Violence Among Young Adults. Alcohol Research & Health 28 (4): 191-194.

Shoemaker, W. (2003). Alcohol’s Effects on the Brain. Nutritional Health Review: The Consumer’s Medical Journal 88: 3-8.

Monday, July 29, 2019

My role model Essay Example | Topics and Well Written Essays - 750 words

My role model - Essay Example

And this lady, born as Agnes Gonxha Bojaxhiu, of Albanian ethnicity, left her nation, kin, and known environment, initiated her mission in India, and served the poor, the sick, and orphans across the globe for more than forty-five years.

Thesis Statement

This essay intends to examine and explore the way my life was motivated by the great works of the great lady called Mother Teresa.

My Role Model

Since my childhood, the life of Mother Teresa has inspired me to a great extent. She had the desperation, courage, and strength to stand beside the poor and needy. Her life inspired me and captivated my spirit totally, and very soon I learned to stand by the side of people who are really in need and distress. Once my locality was hit by a storm, and after the calamity was over, by instinct I felt compelled to stand by the people who were the victims of that catastrophic calamity. While working for the people in distress, I recalled Mother again and again, and it gave me immense mental and physical strength to work relentlessly alongside the disaster management cell of my area.

We are all privileged with the love and care of our parents. We are blessed with a family that stays by our side through thick and thin. But Mother took the pain to metamorphose herself into a universal Mother. She had love for all the children of the world. She loved us as the son of God loved mankind. This profound love for the children, who are angels of heaven as Mother saw them, helped her to establish homes for the destitute. I was moved by the hardships and struggle Mother Teresa had to undergo to establish her homes in the city of Kolkata in India. I still do not have the courage to stand beside the orphans the way Mother stood by and dedicated her life to them. But the clarion call from inside led me to sponsor a child’s education in one such home. Whenever I meet her and spend some time with the child, I feel so blessed and happy inside. This feeling cannot be made parallel with any other material pursuit in the world.

Mother had immense potential. She worked for the sick across the globe. Not only did she go for relief at the outbreak of any epidemic, but she also laid the foundations of homes for people suffering from chronic diseases like tuberculosis and leprosy. She had an immense network that launched awareness campaigns for HIV/AIDS, tuberculosis, and leprosy. Her leprosy mission is outstanding, and she worked and contributed a great deal to eradicating these diseases and raising awareness of HIV and leprosy in South-East Asia in particular. Her mission for the eradication of these diseases spread across the globe very fast as well. In my high school days, when a community programme was launched to spread awareness of HIV infection, I took an active part in it. Under that programme, I got the opportunity to spread awareness in the red light area of the city and meet the people, especially the women and children, of those areas who live in darkness and easily get lost in it. Along with awareness, these people need the light of education and care, love, and empathy. The course of my life changed to a great extent after visiting this section of society, and in future I look forward to working for these underprivileged people on a greater scale and on a more serious note.

Conclusion

Blessed Teresa of Kolkata is an entity which

Sunday, July 28, 2019

Love and desire Article Example | Topics and Well Written Essays - 1500 words

Love and desire - Article Example

This is a topic of legislative debate across many countries, and it is unlikely that the contest will end soon. In Canada, there are groups calling for the striking out of segments of the Criminal Code, mainly sections 210 to 213, which criminalize prostitution and related activities, in order to protect sex workers and their clients (Betteridge, 2005, p. 11). On the other hand, there are others who are against the decriminalization of prostitution, as this will only expose prostitutes and the general public to greater risks. This weighty matter has left lawmakers and other stakeholders at a crossroads. The main aim of this paper is to contribute to the debate on whether or not these activities should be decriminalized in Canada by arguing against decriminalization. Decriminalization of prostitution poses a major threat to the life and security of women, promotes sex trafficking, increases child prostitution, and helps to expand the sex industry. Decriminalizing prostitution will thus prove costly in the long run.

Why Prostitution Should Not Be Decriminalized

In order to protect the lives and safety of the general public, it is important for prostitution to be viewed and treated as a form of sexual exploitation rather than as an occupation and a source of income. Decriminalizing prostitution has a number of consequences. First, decriminalization will lead to an increase in sex trafficking cases, both at local and international levels. As noted in the report by The Evangelical Fellowship of Canada (2010), it is likely that third-party business persons will want to profit from such activities by acting as middlemen or entrepreneurs. These people will be involved in the ‘marketing’ and ‘selling’ of women for sex. In the event that these middlemen lack adequate women to satisfy the market demand for sex, it is certain that they will engage in trafficking of women for sex. This will put the entire country at risk. It will lead to an increase in kidnap cases across the country and neighboring countries as well. Decriminalizing prostitution is one way of indirectly involving non-prostitutes in these activities, since every man or woman will be seen as a potential client by these middlemen. This affects people’s daily activities and movements due to the fear of being kidnapped. In addition, it will paint a negative image of the country to the outside world, thus affecting Canada’s relations with other countries.

Secondly, decriminalization will only help to expand the sex industry in the country. This will mean that at all times, women of different races and ages will be put on display for sale, and this may involve foreigners trafficked from other countries (Raymond, 2003, p. 318). Prostitution will be converted into a quick profit-earning business. This will significantly contribute to the expansion of the sex industry, since people will start to engage in different forms of sexual exploitation including phone sex, table dancing, and peep shows in order to satisfy their desires. In addition, decriminalization will further increase access to and consumption of pornographic material in the country. The major disadvantage is that when the sex industry is expanded, a majority of the population may be drawn into these activities, either voluntarily

Saturday, July 27, 2019

Healthy food Essay Example | Topics and Well Written Essays - 1750 words

Healthy food - Essay Example

This could translate to a mentally healthy disposition in which we can make sound choices in life that make us better as a person. Environmental health means the desirability of the physical world that facilitates the other dimensions of health. And physical health is the most popular dimension of health, involving the wellness of the physique that enables us to pursue our aspirations in life. In this project, the physical aspect of health will become the subject of interest because it is the most basic dimension of health. If we are not physically well, we cannot pursue anything, and being sick would defeat the other dimensions of health. In addition, physical health can easily be observed and measured, either by the improvement of physical stature or by the increased ability to engage in a physical activity. I personally selected this dimension of health because I had been sick before, and it not only felt horrible but also prevented me from engaging in any activity. As such, being physically healthy would be a positive pre-emptive measure against getting sick.

To keep myself interested and engaged with the training program, I have to devise the training to be not that difficult, so that I will be motivated to commit. As such, before engaging in a high-intensity program, I will gradually condition my body first so that it becomes more ready for an intense training program. There are two training programs selected. First is swimming, which serves as an introductory training to build endurance and stamina and as circuit training (whole-body training). Swimming is an ideal method to introduce the body to exercise because “it does not involve bearing of bodyweight, due to the buoyancy of water, compressive joint forces are lower and, as a consequence, adverse impact on the musculoskeletal system as well as injuries are

Discussion Essay Example | Topics and Well Written Essays - 250 words - 31

Discussion - Essay Example

This implied significant impacts on the practice, as the board has regulatory powers (Minnesota, n.d.). This decision was therefore expected to influence prescription and filling as long as it did not conflict with any other state or federal written law. At the centre of the debate, however, is the conscience legislation that accommodates religious beliefs in the professions and that has been used to support pharmacists’ refusal to prescribe or fill drugs whose application is against their religious beliefs. This has in turn led to legislative attempts to force pharmacists to prescribe and fill drugs at patients’ request. Pharmacies, however, still employ personnel who observe the conscience clause’s provisions together with their religious beliefs, and the courts have not been active in resolving cases of refusal to prescribe or fill drugs (Bergquist, 2006). The subject therefore seems to be regulated more by the conscience clause and pharmacists’ ethical regard, while regulatory agencies’ directives remain unenforced. Public health, in Minnesota and other states, is however a universal subject that should not be subject to sub-societal beliefs. Relevant healthcare agencies in Minnesota should therefore formulate laws that obligate pharmacists to prescribe and fill drugs that promote public health

Friday, July 26, 2019

American History after World War II Essay Example | Topics and Well Written Essays - 1750 words

American History after World War II - Essay Example

For the 1960s, this paper highlights the ways in which the American population was affected by the Vietnam War despite the fact that the war was not fought on their land. For the 1970s, this paper discusses Nixon’s Watergate scandal, the biggest scandal exposed in the history of America, and how it led to greater vigilance among regulating authorities, the mass media, and citizens. For the 1980s, this paper discusses the economic boom led by the economics of President Reagan and provides brief literature on its effects on American industries and the financial system. For the 1990s, this paper shows the ways in which America emerged as a superpower after the end of the Cold War and the effect this had on American foreign policy.

In the 1950s, the impact of McCarthyism on the American people was uniformly evident in the area of international affairs (Schrecker, 2002). Antagonism to the Cold War had been so thoroughly identified with socialism that it was no longer feasible to defy the fundamental postulations of American foreign policy without attracting suspicions of treachery. The uncertainty raised by Joseph McCarthy distressed the State Department for a very long time, particularly with reference to East Asia (Schrecker, 2002). Joseph McCarthy’s association with the unremitting ordeal that bears his name in American history commenced with a speech on Lincoln Day, February 9, 1950, to the Republican Women’s Club of Wheeling, in which he presented an alleged list of known Communists working for the State Department (Schrecker, 2002).

While there were other reasons why television presented a featureless menu of quiz shows and Westerns in the late 1950s, apprehensions in the McCarthy period undoubtedly played a key role. Correspondingly, the blacklist contributed to the disinclination of the silver screen industry to grapple with contentious social or political issues (Fried, 1991). The political repression of the McCarthy phase encouraged the growth of the national security state and assisted its expansion into the rest of civil society. For the sake of shielding the country from communist penetration, federal agents harassed individual liberties and extended state influence into film studios, academies, labor unions, and many other seemingly self-regulating institutions (Fried, 1991).

Countless Americans’ lives and jobs were lost owing to McCarthy and his allegations. Hollywood’s cream of the crop opposed politicians’ attempts to control their hiring practices, but subsequent to the HUAC hearings the blacklists embarked upon in Hollywood barred employers from hiring people who were identified as communists on the blacklist (Fried, 1991). The blacklists stayed in Hollywood, while in the government agencies over 2,000,000 employees were subjected to loyalty investigations no matter what their status was. Businesses such as “General Electric, General Motors, CBS, the New York Times, New York City Board of Education and the United Auto Workers” (Fried, 1991) were forced to pursue Hollywood’s

Thursday, July 25, 2019

Mobile Computing and Social Networks Essay Example | Topics and Well Written Essays - 2500 words - 1

Mobile Computing and Social Networks - Essay Example

Quite a few market research studies also forecast a new internet revolution on mobile phones. Some of the new smartphones are more powerful than computers and operate in the ‘intimate’ space, accompanying people throughout the day (Arora 2012: 1). The convenience and capability to access data or do certain tasks with the help of applications from home or any other place using mobile devices, even without a desktop computer, has ensured a considerable increase in convenience and efficiency for businesses and people on the move. Smartphone research and development has become very active in recent years and is improving day by day. Mobile networks are also creating phones that are increasingly better and tougher to intrude into or hack, thus correspondingly increasing the capability of mobile devices and their applications.

Effectiveness and Efficiency of Mobile Applications

A simple way to describe geolocation is to say that it is a technology that requires data from a computer or mobile phone to pinpoint a person’s actual physical position. A better and more succinct definition is as follows: “A geolocation system is an information technology solution that ascertains the location of an object in the physical (geo-spatial) or virtual (Internet) environment. Most often, the object is a person who wants to utilize a service based on location, while maintaining his/her privacy” (ISACA 2011: 5). This has caught on among today’s youth and on the social networking sites they commonly use, by providing the ability to track or let friends know where we are, to identify specific joints frequented by them, and to book tickets in cinema halls. These types of applications can also be accessed on a desktop system, but the experience is not the same as when they are available on a mobile device.

Most individuals have at some point used Google Maps to get directions from one place to another, but again the thrill of using such an application from a mobile is unique. As you move from one place to another, the data sent and received also changes. This is possible due to the GPS (Global Positioning System) chip found inside the device. This chip uses two methods to track your position. In the first method, the chip uses satellite data to calculate a person’s exact location, but if there are any snags like interference or unavailability of service, then the chip uses data from cell phone towers to estimate location. If the person is driving through rain, cloud cover, or even a canopy of trees, there could be a loss of communication, but on a clear day there should not be any problem. Here we should also note that if the software is very sophisticated, the accuracy provided will also be of high quality.

Light should also be shed on some concerns regarding this type of application. Sharing of location could lead to personal risk. Anti-social elements like stalkers, or even robbers who know that you are out, may take advantage of such situations. Although all these risks exist, many application developers are finding ways to counter such disadvantages by providing privacy preferences. If one chooses wisely, the benefits from such an application far outweigh the negative effects, though it necessitates a small sacrifice of privacy on our side.
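As a rough illustration of what can be done with the coordinates a GPS chip reports, the short Python sketch below computes the great-circle distance between two latitude/longitude readings using the haversine formula. The coordinates and place names are made-up values used only for the example; nothing here comes from the essay or any particular geolocation API.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two lat/lon points.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical readings: a phone's last two reported positions.
home = (44.98, -93.27)     # made-up coordinates
theater = (44.97, -93.24)  # made-up coordinates
print(f"Distance between readings: {haversine_km(*home, *theater):.2f} km")

An application like the check-in or "find my friends" features mentioned above would feed real GPS readings into a calculation of this kind to decide, for example, whether two users are close enough to notify each other.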

Wednesday, July 24, 2019

Production Chain and Sector Matrix Essay Example | Topics and Well Written Essays - 1500 words

Production Chain and Sector Matrix - Essay Example

13) in firms. In reaction to the low profits and high cost of capital in the 1980s and 1990s, several firms embarked on a wave of financialisation - creating, buying, or expanding financial subsidiaries to acquire financial assets - for the purpose of giving management greater flexibility in managing earnings, creating shareholder value, and satisfying the capital markets (p. 34). Most publicly listed firms, therefore, were “pressured” to show good results on a regular basis using the basic language familiar to capital markets: stock price reflects shareholder value, which is a function of operating efficiency, lower expenditures, growth in turnover and earnings, and a steady flow of dividends. The more consistent the numbers, the better, as Froud et al. (2005) pointed out in their study of the American company GE. The main challenge to managements in a financialised universe of firms was to make ambitious strategic plans and deliver consistently on their promises. Firms became slaves to a ruthless capital market that, with a single recommendation, can punish poor performers by depressing a firm’s stock price and raising its cost of capital. In a world obsessed with financial performance, managements searched for suitable analysis and planning tools. The production chain and the sector matrix were two of the many that, in this age of globalisation and management fads, were developed to help firms map out value-creation strategies. We explain each briefly, then compare and differentiate them with examples.

The Matrix

The matrix is a strategic tool that presents in a grid or table the strategic factors affecting the firm. The coordinates of the grid can vary, as shown in examples of two well-known models. The first is H. Igor Ansoff’s product/market expansion grid or matrix (Ansoff, 1957) that recommends four strategies (market penetration, market development, product development, and diversification) a firm can adopt to grow or increase its turnover depending on the life cycle of the firm’s new or current products and markets. The other is the Boston Consulting Group’s Market Growth-Share matrix (Henderson, 1970, 1976a, 1976b) designed to help the firm identify businesses/product types by market share (an indicator of the firm’s ability to compete) and market growth (an indicator of market attractiveness). Firms, in effect, can manage their businesses as a portfolio of investments, much like a bank or an investor would hold, buy, or sell financial instruments. Firms that want to grow should hold or buy stars (high growth and high share businesses) or cows (low growth, high share, cash generating businesses), sell dogs (low growth and share), and think of what to do with question marks (high growth, low share, needing cash injections, but risky).

Example of Matrix Use

A prime example of how the matrix was used for strategic management is recounted in the study (Froud, et al., 2005, pp. 8 and 38) of General Electric (GE), which, with the help of consulting firm McKinsey and Co., adapted the BCG matrix and developed its own Nine-Cell Industry Attractiveness-Competitive Strength Matrix (Thompson and Strickland, 2001, 327-330), a three-by-three grid that mapped out alternative business positions and attractiveness of markets, on which are superimposed several scaled circles representing different markets and their sizes and showing the firm’s market share within each market (See Figure 1).
[Insert Figure 1 here] GE claimed that the matrix provided at a glance

Tuesday, July 23, 2019

The History of the American Association of Adult and Continuing Research Paper

The History of the American Association of Adult and Continuing Education (1926) - Research Paper Example

History of AAACE

Adult education operations in the United States have been dependent on the interaction of five dimensions: “institutional, content, geographical, personnel, and activity” (Henschke). Numerous types of voluntary adult education institutions mainly included professional societies or associations. In the opinion of Knowles (as cited in Henschke), the adult educational role has two perspectives: (1) facilitating adult education by means of publications, conferences, and educational travel, and (2) motivating different associations and the general public by providing educational resources in their areas of interest through various channels, including mass media and publications.

In the United States, adult education had not obtained considerable importance before 1924. In 1921, the National Education Association established the Department of Immigrant Education (DIE) in order to extend its operations into the adult education field; the DIE was renamed the Department of Adult Education (NEA/DAE) after broadening its scope in 1924. Kessner and Rosenblum (1999) report that in 1923, Frederick P. Keppel, President of the Carnegie Corporation, envisioned an association that could work effectively to unify adult education programs in the country. The Carnegie Corporation called a series of regional conferences in 1925 and early 1926 with the intent of establishing a new national organization for adult education. As a result of these intense efforts, the American Association for Adult Education (AAAE) was established on 26 March 1926 at a national organizational meeting held in Chicago.

Since the purposes, programs, and memberships of the NEA/DAE and the AAAE overlapped extensively, a strong sentiment had developed by 1949 for merging the two associations, which resulted in the formation of the Adult Education Association of the United States (AEA/USA) on May 14, 1951. In 1952, the AEA/USA approved the operations of the National Association for Public School Adult Education (NAPSAE) with the intent of focusing on the educational requirements of adult educators serving in public schools. The NAPSAE became a department of the NEA in 1955. During the next thirty years, the NAPSAE grew into a separate organization, and its name was changed to the National Association for Public Continuing Adult Education (NAPCAE). On realizing that they shared many members and objectives but had only limited resources, the AEA/USA and NAPCAE decided to integrate their operations. Consequently, the AEA/USA and NAPCAE were amalgamated to form the American Association for Adult and Continuing Education (AAACE) at the national conference held in San Antonio, Texas, in 1982 (Adult education association).

Even though the AAACE continued to serve as the primary association for adult education, it restructured its goals and strategies to meet the different interests of a wide range of audiences in adult education. The Commission of Professors of Adult Education (CPAE) was formed in 1955 on the strength of financial assistance provided by the W. K. Kellogg Foundation. The CPAE worked very closely with the AAACE, and its main aim was to assist full-time professors in carefully evaluating their own work, framing decisions on common issues, and choosing the most preferable courses of action. As Kasworm, Rose and Ross-Gordon

Monday, July 22, 2019

Network Design Essay Example for Free

Network Design Essay

The objective at hand was to build a network from the ground up. This was accomplished by breaking down all of the sections and building upon all previous assignments. This was a good course, as I learned a lot about all of the different sections of building a network. The pros are that I now know how to design a network from the ground up. I learned quite a bit about many of the technologies associated with networking, and the course allowed me to learn quite a few new concepts. Some of the downfalls of this course are that I did not feel I accomplished much, as there is no hands-on training associated with it. Concepts and design ideas alone are not a great way to learn how to operate any of the systems, but they do give a pretty good idea.

Cabling Specifications

Ethernet is a Local Area Network (LAN) technology with a transmission rate of 10 Mbps and has a typical star topology. Computers and devices must wait and listen for transmission time on the network, as only one device can transmit at any one time. In order to operate with this network strategy, Ethernet incorporates CSMA/CD (Carrier Sense Multiple Access with Collision Detection). Each device on the network listens for the network to be clear before transmitting data. If more than one computer or device transmits data at the same time, then collisions occur. Once collisions are detected, all devices stop transmitting for a period of time until one of the devices senses the line is free and then gains control of the line to transmit its data. Receiving devices simply wait and listen for transmissions that are meant for them, which are determined by the frame’s destination (MAC) address.

The main advantage of Ethernet is that it is one of the cheapest networks to put into service. Compared to hardware for Token Ring, Ethernet equipment such as hubs, switches, network interface cards, and cable (Cat5 is common) is inexpensive. The main disadvantage of Ethernet is related to the collisions that occur on the network. Even though Ethernet cable (Cat5) is fairly inexpensive, it can become a cost issue when designing a large network, as each device or computer requires its own cable connection to the central hub. Another disadvantage is the distance limitation for node connections. The longest connection that can occur within an Ethernet network without a repeater is 100 meters. Today’s Ethernet standards, 100 Mbps and 1000 Mbps, incorporate switched technology, which, for the most part, eliminates collisions on the network. The IEEE (Institute of Electrical and Electronics Engineers) specification for Ethernet is 802.3, with three-part names designating the different types. For example, 10BASE-T is for 10 Mbps, and 100BASE-TX is for 100 Mbps.
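As a rough illustration of the listen-and-back-off behaviour just described, here is a minimal Python sketch of a CSMA/CD-style send loop. It is a simplified simulation only, not real NIC or driver code: the carrier-sense, transmit, and collision-detect steps are supplied as stand-in functions, and the binary exponential backoff shown is the textbook rule rather than anything taken from this essay.

import random
import time

def send_with_csma_cd(transmit, medium_busy, collided, max_attempts=16):
    # Simplified CSMA/CD loop: listen, transmit, back off on collision.
    # transmit, medium_busy, and collided are caller-supplied functions that
    # stand in for driving, sensing, and monitoring a real shared medium.
    slot_time = 51.2e-6  # 512 bit times at 10 Mbps
    for attempt in range(1, max_attempts + 1):
        while medium_busy():              # carrier sense: wait for a quiet line
            time.sleep(slot_time)
        transmit()                        # start putting bits on the wire
        if not collided():                # collision detected while sending?
            return True                   # frame went out cleanly
        # Binary exponential backoff: wait a random number of slot times,
        # with the range doubling after each collision (capped at 2**10 - 1).
        time.sleep(random.randint(0, 2 ** min(attempt, 10) - 1) * slot_time)
    return False                          # give up after too many collisions

# Toy demonstration with a quiet medium and a simulated 30% collision rate.
ok = send_with_csma_cd(
    transmit=lambda: None,
    medium_busy=lambda: False,
    collided=lambda: random.random() < 0.3,
)
print("frame delivered" if ok else "frame dropped")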
Token Ring

Token Ring was developed by IBM as an alternative to Ethernet. The network is physically wired in a star topology but is arranged in a logical ring. Instead of a hub or switch as in an Ethernet network, a MAU (Multistation Access Unit) is used. Access to the network is controlled by possession of a token that is passed around the ring from computer to computer, as data can only travel in one direction at a time. A computer that wishes to transmit data on the network takes possession of the token and replaces the token frame with data. The data goes around the ring and returns to the transmitting computer, which removes the data, creates a new token, and then forwards it to the next computer.

The IEEE specification for Token Ring is 802.5, and it comes in two different speeds: 4 Mbps and 16 Mbps. The main advantage of Token Ring is that there are never any collisions within the network, which makes it a highly reliable solution for high-traffic networks. The disadvantage of Token Ring is that the network cards and MAU are more expensive than equivalent Ethernet hardware.

FDDI

FDDI (Fiber Distributed Data Interface) is an architecture designed for high-speed backbones that operate at 100 Mbps and are used to connect and extend LANs. A ring topology is used with two fiber optic cable rings. It passes a token on both rings and in opposite directions. The specification for FDDI is designated by the American National Standards Institute as ANSI X3T9.5. The advantage of FDDI is that it uses two rings for protection in case one ring breaks. When a break occurs, data is rerouted in the opposite direction using the other ring. It is also considered reliable because it uses a token-passing strategy. The disadvantage of FDDI is the expensive network cards and fiber optic cable. In addition, the amount of fiber optic cable is doubled because it has redundant rings.

Wireless

Local Area Network (LAN) Topologies

A mesh topology has a point-to-point connection to every other device (node) within the topology. The point-to-point link is dedicated between each pair of devices, so it only carries traffic between the two devices connected by that link. The advantage of a mesh topology is that it works on the concept of routes, which means that traffic can take one of several paths between the source and destination. The network is also robust in that it will not be crippled if one path becomes unavailable or unstable, because each device is connected to every other device. The Internet uses a mesh topology to operate efficiently. The main disadvantage of a mesh topology is that it requires a large number of cables, which is very expensive; for example, a full mesh of ten devices needs 10 × 9 / 2 = 45 separate links, where a star of the same size needs only ten.

A bus topology is a multipoint topology in which each device is connected to a common link or path. The common link can be thought of as the backbone of the network. All devices typically connect to the backbone with a T-connector and coax cable. The main advantages of a bus topology are that it is easy to install and is not expensive (cost effective) because it uses very little cable. The main disadvantage is that if there is a problem with the single backbone cable, then the entire network loses the ability to communicate. These networks are also very difficult to troubleshoot, because any small problem such as a cable break, loose connector, or cable short can cause the outage. The entire length of cable and each connector must be inspected during troubleshooting. Another disadvantage is the lack of amplification of the signal, which results in a limited network size based on the characteristics of the cable and how far a signal can travel down it.

A ring topology means that each device is connected in a ring, or daisy-chain fashion, one after another. A dedicated connection only exists between a device and the device on each side of it. Data flows around the ring in one direction. Each device contains a repeater that regenerates the signal before passing it to the next device. The main advantage of a ring topology is that it is easy to install. One disadvantage is difficulty in troubleshooting, because data flows in one direction and it can take time to find the faulty device when there are problems.
The entire network could be taken offline if there is a faulty device or cable break within the ring.

The star topology has each device in the network connected to a central device called a hub, which can actually be a hub or a switch. All traffic must pass through the hub in order to reach any other device on the network. There is no direct communication between devices as in a mesh topology. One advantage of a star topology is that the failure of one cable or device connected to the hub will not bring the entire network down. Repairs can be done to individual nodes without disrupting traffic flow. Another advantage is the expandability of the network: additional devices can be added without disrupting any of the current users, and all that is required is an additional cable run from the device to the hub. One disadvantage is cable cost, because each device must have its own cable connected back to the hub. The other disadvantage is the hub itself. Since all traffic runs through one device, it becomes the single point of failure; if the hub goes down, so does the entire network.

Wide Area Network (WAN) Design

A WAN, also known as a Wide Area Network, is an essential part of bigger corporate networks, most government networks, and companies with multiple sites. A WAN, basically, is two or more LANs (Local Area Networks) joined together and running as one big network over a large geographical area. Although a WAN could cover very small distances, most WANs cover much larger geographical areas such as a country or possibly even the world. The largest WAN today would technically be the internet or the World Wide Web. The internet is, in short, one giant WAN because it consists of many smaller LANs and servers. Most WANs cover a fairly large geographical area, but some, such as the World Wide Web, cover the globe. The United States government has quite a big WAN, as a lot of its LANs are in other countries. It needs to get data from one place to another almost instantaneously, and this is one of the quickest and easiest ways to do so.

To get on the internet, a subscriber must go through an ISP (Internet Service Provider), which will give the subscriber access to the internet for a certain price every month. There are different ways to get access to the internet depending on the geographical location in which you live. A subscriber can go through dial-up, which is one of the slowest methods but also one of the most common. There is also DSL (Digital Subscriber Line) through most phone companies, if they have access in the area, and cable, which is usually one of the fastest and most expensive methods of accessing the internet. The last common method is using a satellite to obtain access. This is usually the most expensive way to access the internet because the equipment usually needs to be bought.

When talking about telephone lines, we start getting into analog versus digital signals and degradation over longer distances. A telephone system works on analog signals. A computer transmits a digital signal to the modem, which converts the signal into an analog signal (this is the beeping heard when a computer dials up to access the internet); a modem at the other end later converts it back into a digital signal. DSL is digital all the way, along with T1 and T3 lines.
When using DSL or T1/T3 lines, a filter of some sort is used to separate the digital and analog signals, so the phone and the computer receive different signals. Companies usually use faster lines to access the internet or to have access to their other sites. Smaller companies can use DSL or cable internet services, but when talking about larger corporations or the government, most use public systems such as telephone lines or satellites. Usually, when talking about larger companies going through a public system, we are talking about much faster speeds that can support many more users. T1 and T3 lines are usually used, satellites are commonly used, and fiber optic is becoming much more common.

When getting into many users on a WAN, we need to start talking about network latency. According to Javvin.com, network latency is defined as follows: “latency is a measure of how fast a network is running. The term refers to the time elapsed between the sending of a message to a router and the return of that message (even if the process only takes milliseconds, slowdowns can be very apparent over multi-user networks). Latency problems can signal network-wide slowdowns, and must be treated seriously, as latency issues cause not only slow service but data losses as well. At the user level, latency issues may come from software malfunctions; at the network level, such slowdowns may be a result of network overextension or bottlenecking, or DoS or DDoS activity.”

DoS and DDoS stand for Denial of Service and Distributed Denial of Service, respectively. These types of attacks are usually carried out by hackers or by someone who does not want others to access a certain service. There was a recent DoS threat on the CNN webpage, as some hackers wanted CNN to stop talking about a certain issue. This works by one or more people consuming all of the network’s bandwidth and thus preventing others from accessing the site or services.

There are other issues that may slow down a user’s PC as well. Not all issues revolve around hacker attacks. A lot of problems can be caused by malicious software such as spyware, malware, viruses, or other problematic programs. These can usually be taken care of by installing anti-virus software or a spyware removal tool. The catch is that instead of the malicious software causing slowdowns on a PC, there can be slowdowns due to the protective software running in the background. Sometimes a simple fix is to defragment the hard drive. This can tremendously speed up a PC, because the files will be closer together and easier and quicker to access.

On a network, a simple way to test latency is to use the trace route program. To do this, simply go to a command prompt and type tracert followed by an IP address if internal or a website if external. This will send out packets of information and measure how much time passes before a reply comes back. The time that passes is the latency. Usually it reports only a certain number of milliseconds, which does not seem like very much time, but it was only a tiny packet of information. The higher the milliseconds, the higher the latency, and the higher the latency, the longer it will take to do anything on the network. If a high latency is present, there is bound to be lag somewhere down the line.
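In the same spirit as the tracert check just described, the short Python sketch below estimates round-trip latency from a script by timing how long a TCP connection to a server takes to open. This is a rough stand-in for ping or tracert rather than a true ICMP measurement, and the host name shown is only an example; substitute any reachable server or internal address.

import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3.0):
    # Approximate round-trip latency as the time taken to open a TCP connection.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return (time.perf_counter() - start) * 1000.0

host = "example.com"  # example target only
samples = [tcp_rtt_ms(host) for _ in range(5)]
print(f"{host}: min {min(samples):.1f} ms, avg {sum(samples) / len(samples):.1f} ms")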
In a WAN, the equipment that will be used is as follows. In each LAN there will be PCs connected to a router somewhere (this is a ring topology example), and that router should be connected into a switch. There may be more, but this is a basic example. Each of these LANs then connects to a central hub somewhere which interconnects all of the LANs. All of the information travels to the central hub, which then passes it on to the correct switch, router, and then PC. There are usually central servers that can store and back up all of the data on the network as well, but this is an example of a crude network.

Most companies are also very repetitious and redundant with their WANs. This is because they do not want a central failure point to bring the entire company to its knees. There are usually multiple switches that can tie the entire system together. If a huge corporation’s WAN decided to fail, the company could lose a few million dollars in a matter of minutes. This is the main reason redundancy in this situation makes more than enough sense.

A lot of companies use VPN software. This software will let users log in from the outside into their computer inside the company. This is a very nice system because if an employee needs to do work from home, they have access to everything they were working on on-site. This is also helpful from an Information Technology perspective, as it allows a tech who is working on a remote problem to log in remotely, find out what the issue is, make any configuration changes, and fix most software-related issues without actually having to be on-site. This works well when on call from an offsite location. There are other software packages that work well too: a lot of companies use PCAnywhere to do this type of work, and Bomgar is another solution for logging in remotely.

A WAN is an imperative part of any corporation, government agency, or company with multiple locations, as it allows them to transfer data quickly, easily, and over great distances at the click of a button. There seems to be more and more need for employees in the networking field today, because more and more corporations need to transfer data more quickly and easily. There will soon be new technology, such as fiber optic, that will improve our current technology.

Network Protocols

There are many solutions to remote access, and the most common and one of the most cost-efficient methods is the VPN (Virtual Private Network). VPN technology is already built in to most operating systems and is very easy to implement. In bigger environments and corporations, dedicated VPN hardware should be considered because of the number of simultaneous users and the stress on the servers. There are a few different types of VPN, including IPsec, PPTP, and SSL. Once the remote access connection has been made, you need to make sure the files are readily accessible for the user logging in remotely. One way to do so is to use Samba, which is an open source file access system. There are other ways to allow access as well. Using a remote desktop connection, the user has the ability to log directly in to their PC and use it as if they were sitting at their desk rather than away from the company.
Network Remote Access
Most companies need to be able to access their work from many locations, including home and while traveling, and there are two common ways to reach the network. The first is a VPN (virtual private network), which lets the user log in remotely quickly and easily. The other is a dial-up remote connection, which is a bit easier to set up but can become very costly in the long run. The drawback of either approach is that it can be expensive and can eat up much of the IT department's time to set up, configure, and integrate into the existing hardware. The definition of a VPN from whatis.com is: "virtual private network (VPN) is a network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users with secure access to their organization's network. A virtual private network can be contrasted with an expensive system of owned or leased lines that can only be used by one organization. The goal of a VPN is to provide the organization with the same capabilities, but at a much lower cost. VPN works by using the shared public infrastructure while maintaining privacy through security procedures and tunneling protocols such as the Layer Two Tunneling Protocol (L2TP). In effect, the protocols, by encrypting data at the sending end and decrypting it at the receiving end, send the data through a tunnel that cannot be entered by data that is not properly encrypted. An additional level of security involves encrypting not only the data, but also the originating and receiving network addresses." In other words, a VPN is a tool that allows users of a specific domain to log in to their PC from anywhere in the world with the help of another PC. With it, they log in with a special piece of software, using their user name and password, to gain access to the full functionality of the PC they want to reach. This allows for a lot of comfortable arrangements: if an employee is sick, they may still be able to work from home, and if a user needs a document from their work PC, they can log in and download it, which makes the company schedule more flexible.

Network Business Applications
A second way to access one's computer from a different location is a dial-up service, with which users can dial in and reach all of the resources available on the server. This is a secure and straightforward route, and it gives users access to files they may desperately need; if the user is on a business trip, they can reach all of their needed documents easily and securely without much fuss.
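To make the quoted idea of "encrypting data at the sending end and decrypting it at the receiving end" concrete, here is a minimal Python sketch using symmetric encryption from the third-party cryptography package (its Fernet recipe). It only illustrates the tunneling principle; it is not an implementation of L2TP or of any particular VPN product.

```python
from cryptography.fernet import Fernet

# A real VPN negotiates keys through a tunneling protocol; here we simply share one key.
key = Fernet.generate_key()
sender = Fernet(key)
receiver = Fernet(key)

# "Sending end": encrypt the payload before it crosses the public network.
payload = b"quarterly-report.xlsx contents"
ciphertext = sender.encrypt(payload)

# "Receiving end": only a holder of the key can decrypt the tunneled data.
assert receiver.decrypt(ciphertext) == payload
print("tunneled", len(ciphertext), "encrypted bytes")
```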
The difference between these two technologies is explained as follows: "with dial-up remote access, a remote access client uses the telecommunications infrastructure to create a temporary physical circuit or a virtual circuit to a port on a remote access server. After the physical or virtual circuit is created, the rest of the connection parameters can be negotiated. With virtual private network remote access, a VPN client uses an IP internetwork to create a virtual point-to-point connection with a remote access server acting as the VPN server. After the virtual point-to-point connection is created, the rest of the connection parameters can be negotiated." There are advantages and disadvantages to using a dial-up remote connection rather than a VPN. The biggest advantage I have found is that dial-up is easier to set up and maintain, whereas a VPN requires setting up and maintaining individual accounts for both the VPN and the user's name and password on the system. Another advantage of dialing in is that no matter where users are, all they need to do is plug into a phone jack and they should be able to log in. The disadvantage is that, depending on where the user is, long-distance charges may apply and can add up to a pretty penny or two; so although dial-up is cheaper in the short term, it may be more expensive than a VPN in the long run. There are also other ways to run a VPN: certain ISPs (Internet Service Providers) and other third-party support companies will set up the VPN and support it without a great deal of time spent by the internal department. This may or may not be more cost-efficient than setting it up yourself, but it does remove a lot of the headaches that VPN errors can cause. Likewise, there are advantages and disadvantages to using a VPN over a dial-up system. One of the biggest advantages is that, in the long run, a VPN is much cheaper, and it is also a little faster. It is cheaper because with dial-up, long-distance fees may apply, while with a virtual private network the user simply calls a local internet service provider to gain access; any internet connection will get a user onto the company's network through a VPN. Through all of this, security measures still need to be in place to keep unwanted users off the system while allowing employees and other authorized users access without downtime. VPNs work well with firewalls; all the IT department needs to do is open the ports used by the VPN and the user should have full access. All in all, these are two very cost-effective solutions at a company's fingertips, and both are fairly easy to set up. The company needs to decide whether it wants to save money up front and avoid setting up multiple accounts per user, or invest in the better solution and save more money down the road; the choice also depends on how many users will be logging in at any given moment.

Backup and Disaster Recovery
Security, backups, and disaster recovery are all very important parts of every network in today's world.
The problem today is that information on how to hack, destroy, and program any type of malicious software (malware) is easily accessible via the Internet and other sources. Roughly 1.4 billion people are on the Internet, or at least have access to it, which is about 25% of the world's population. All of these people have very easy access to material on hacking networks, creating malware, and destroying any personal or private data a user may wish to keep. There is no real way to stop these people from the attacker's side, which is why users need to make sure they have security on their own side. Attacks are not the only danger, either: accidents can destroy data as well. Many things can harm a user's data, such as a fire, an earthquake, a power surge, or, in the worst case, some sort of electromagnetic pulse (EMP). This is where data backups and disaster recovery come in. Many companies specialize in helping a user or company back up their data and store it off site, such as SunGard (mostly used in bigger company settings). There are other ways to store data as well. One is to make a physical copy of everything needed on CDs, DVDs, a flash drive, or some other medium and store it at the house of a friend or another trusted person; this keeps a hard copy of all the data off site in case something happens and it needs to be restored. A few other companies offer online backups: the user downloads their software and it automatically backs data up to several different locations for redundancy, which gives the customer more safety and easier access to all of their files. One of the first steps for a business that wishes to be secure in all that it does is to set up a backup and disaster recovery plan. As stated earlier, there are many ways to do it. A larger company will probably want someone internal to make a physical backup of all the data and send it to an off-site company for storage. They should also keep another copy close by at all times, preferably away from where the live data resides, for example on the opposite side of the building from the file server. If anything happens to the servers, they can then quickly and easily restore all of the data from the backed-up copy onto the servers. Most companies keep two or three backup units on site for redundancy, so that if one of them fails there are still others from which all of the data can be restored. Although this is somewhat more expensive than a single backup system, it can be well worth it.
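As a small illustration of the "keep a dated copy somewhere else" idea above, the sketch below copies a folder into a timestamped backup directory. The paths are placeholders, and a real plan would also rotate old copies and ship them off site.

```python
import shutil
import time
from pathlib import Path

def backup_directory(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    destination = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, destination)
    return destination

if __name__ == "__main__":
    # Placeholder paths; point them at a real data folder and backup drive.
    copy = backup_directory(Path("C:/CompanyData"), Path("D:/Backups"))
    print(f"Backup written to {copy}")
```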
Network Security
According to devx.com, "the first step in drafting a disaster recovery plan is conducting a thorough risk analysis of your computer systems. List all the possible risks that threaten system uptime and evaluate how imminent they are in your particular IT shop. Anything that can cause a system outage is a threat, from relatively common man made threats like virus attacks and accidental data deletions to more rare natural threats like floods and fires. Determine which of your threats are the most likely to occur and prioritize them using a simple system: rank each threat in two important categories, probability and impact. In each category, rate the risks as low, medium, or high. For example, a small Internet company (less than 50 employees) located in California could rate an earthquake threat as medium probability and high impact, while the threat of utility failure due to a power outage could rate high probability and high impact. So in this company's risk analysis, a power outage would be a higher risk than an earthquake and would therefore be a higher priority in the disaster recovery plan." Another big part of developing any security system is that the company (or department) needs to look at its budget and decide how much it is willing to spend. A basic network security system (including a firewall) can be had fairly cheaply and may do most of what is needed, but larger companies will need to spend quite a bit more than a small company, usually because they have higher-priced clients they cannot afford to lose and data that is invaluable to the business. Some companies even employ their own information systems security staff to monitor the network for any type of attack and to make sure all of the anti-virus and anti-malware software keeps running and updating properly. Lastly, something most companies forget once the equipment and software are installed is that implementation alone will not save them: everything must keep running and keep updating against newer and bigger threats, and the company needs to keep testing and checking what must be done to maintain a network that cannot be broken into. There are people who can be hired to try to break into a company's network; they get paid and then tell the company what needs to be fixed so that others cannot break in the same way. In conclusion, a company can be brought to its knees, or reduced to nothing, without its network and servers. Many things can cripple a company even without human involvement, and the only way to avoid them is to have a proper disaster recovery plan and to make sure the network is not vulnerable in any way.

References
About, Inc. (2004). Network topologies: bus, ring, star, and all the rest. Retrieved October 12, 2004, from http://compnetworking.about.com/library/weekly/aa041601a.htm
Brain, M. (2004). How stuff works: how WiFi works. Retrieved October 12, 2004, from http://computer.howstuffworks.com/wireless-network.htm/printable
Network Latency. (n.d.). Retrieved April 27, 2008, from http://www.javvin.com/etraffic/network-latency.html
Broadband Internet. (n.d.). Retrieved April 27, 2008, from http://www.pcworld.idg.com.au/index.php/id;988596323
Wide Area Networks. (n.d.). Retrieved April 27, 2008, from http://www.erg.abdn.ac.uk/users/gorry/course/intro-pages/wan.html
Virtual Private Network. (n.d.). Retrieved May 11, 2008, from http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci213324,00.html
VPN vs. Dial up. (n.d.). Retrieved May 11, 2008, from http://technet2.microsoft.com/windowsserver/en/library/d85d2477-796d-41bd-83fb-17d78fb1cd951033.mspx?mfr=true
How to Create a Disaster Recovery Plan. (n.d.). Retrieved May 23, 2008, from http://www.devx.com/security/Article/16390/1954
World Internet Usage Statistics. (n.d.). Retrieved May 23, 2008, from http://www.internetworldstats.com/stats.htm

Entrepreneurial Organization Essay

Entrepreneurial Organization Essay
In business today, entrepreneurial organizations continue to grow, thrive, and change the way companies and people do business. The entrepreneurial changes that happen inside large organizations or at small start-up companies share many of the same traits and use some of the same tactics to create business opportunities. Among the traits they share in promoting their business are individual action and initiative, innovation, differentiation, and risk taking. Individual actions and initiatives taken by employees within the organization create new product offerings or enhance existing products. These actions do not always succeed in creating revenue for the business, but they might help start another product line that will create revenue in the future; the failure of such products is not treated as a negative within the organization but as growth, and is seen as a positive step. Innovation is a primary and necessary building block for the entrepreneurial organization. There are two types of innovation to consider and use: product innovation and process innovation. Each creates change in either a product or a process, but both are essential to innovating within the organization and are used to create new ideas, build new processes, and test new theories. Differentiation is another advantage that an entrepreneurial organization has to use and display. It shows customers and investors what unique good, service, talent, or innovation the organization has that makes customers willing to pay a premium for its services. Risk taking, in a large or small organization, requires some sort of investment on the part of the company, either in personnel or in financial resources. The level of risk the organization is willing to support shows employees that the organization is willing and able to make changes when the risks are worth the reward; risks must be taken in these types of organizations to create and discover new opportunities. In conclusion, these are four reasons why entrepreneurial organizations use innovation to create new opportunities, and they are some of the building blocks of many large and successful companies. With the landscape of business always changing, companies that are not willing to use innovation to create new opportunities may not be successful in the future.

Sunday, July 21, 2019

Handwritten Character Recognition Using Bayesian Decision Theory

Handwritten Character Recognition Using Bayesian Decision Theory

Abstract: Character recognition (CR) can solve complex problems in handwritten text and make recognition easier. Handwritten character recognition (HCR) has received extensive attention in academic and production fields. A recognition system can be either online or offline; offline handwritten character recognition is a subfield of optical character recognition (OCR). The stages of offline handwritten character recognition are preprocessing, segmentation, feature extraction, and recognition. Our aim is to improve the missing-character rate of an offline character recognition system using Bayesian decision theory.

Keywords: Character recognition, Optical character recognition, Off-line handwriting, Segmentation, Feature extraction, Bayesian decision theory.

1. Introduction
A recognition system can be either on-line or off-line. On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. That kind of data is known as digital ink and can be regarded as a dynamic representation of handwriting. Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes that are usable within computer and text-processing applications; the data obtained in this form is regarded as a static representation of handwriting. The aim of character recognition is to translate human-readable characters into machine-readable ones, and optical character recognition is the process of performing this translation on optically scanned and digitized text. Handwritten character recognition (HCR) has received extensive attention in academic and production fields. Bayesian decision theory is a fundamental statistical approach that quantifies the trade-offs between various decisions using the probabilities and costs that accompany such decisions. The decision process is divided into the following five steps: identification of the problem; obtaining the necessary information; production of possible solutions; evaluation of those solutions; and selection of a strategy for performance. A sixth stage, implementation of the decision, is also included. In the existing approach, missing data cannot be recognized, yet this capability is useful when recognizing historical documents. In our approach we recognize the missing words using a Bayesian classifier: it first classifies the missing words so as to minimize the error, and recovers as much of the error as possible.

2. Related Work
The history of CR can be traced back as early as 1900, when the Russian scientist Turing attempted to develop an aid for the visually handicapped [1]. The first character recognizers appeared in the middle of the 1940s with the development of digital computers. The early work on the automatic recognition of characters concentrated either on machine-printed text or on a small set of well-distinguished handwritten text or symbols. Machine-printed CR systems in this period generally used template matching, in which an image is compared to a library of images. For handwritten text, low-level image processing techniques were used on the binary image to extract feature vectors, which were then fed to statistical classifiers. Successful but constrained algorithms were implemented mostly for Latin characters and numerals.
However, some studies on Japanese, Chinese, Hebrew, Indian, Cyrillic, Greek, and Arabic characters and numerals, in both machine-printed and handwritten cases, were also initiated [2]. Commercial character recognizers became available in the 1950s, when electronic tablets capturing the x-y coordinates of pen-tip movement were first introduced. This innovation enabled researchers to work on the on-line handwriting recognition problem; a good source of references for on-line recognition up to 1980 can be found in [3]. Studies up until 1980 suffered from a lack of powerful computer hardware and data acquisition devices. With the explosion of information technology, the previously developed methodologies found a very fertile environment for rapid growth, in addition to the statistical methods. CR research at that time focused mainly on shape recognition techniques without using any semantic information, which imposed an upper limit on the recognition rate that was not sufficient for many practical applications. Historical reviews of CR research and development during this period can be found in [4] and [3] for the off-line and on-line cases, respectively. Real progress in CR systems was achieved in the following period, using new development tools and methodologies empowered by continuously growing information technologies. In the early 1990s, image processing and pattern recognition techniques were efficiently combined with artificial intelligence (AI) methodologies. Researchers developed complex CR algorithms, which receive high-resolution input data and require extensive number crunching in the implementation phase. Nowadays, in addition to more powerful computers and more accurate electronic equipment such as scanners, cameras, and electronic tablets, we have efficient, modern methodologies such as neural networks (NNs), hidden Markov models (HMMs), fuzzy set reasoning, and natural language processing. Recent systems for machine-printed off-line characters [2][5] and for limited-vocabulary, user-dependent on-line handwritten characters [2][12] are quite satisfactory for restricted applications, but there is still a long way to go to reach the ultimate goal of machine simulation of fluent human reading, especially for unconstrained on-line and off-line handwriting. Bayesian decision theory (BDT), one of the statistical techniques for pattern classification, has been used to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters of the English alphabet; the character images were based on 20 different fonts, and each letter within the 20 fonts was randomly distorted to produce a file of 20,000 unique instances [6].

3. Existing System
In this overview, character recognition (CR) is used as an umbrella term covering all types of machine recognition of characters in various application domains. The overview serves as an update on the state of the art in the CR field, emphasizing the methodologies required by the increasing needs of newly emerging areas such as electronic libraries, multimedia databases, and systems that require handwriting data entry. The study investigates the direction of CR research, analyzing the limitations of methodologies for systems that can be classified according to two major criteria: 1) the data acquisition process (on-line or off-line) and 2) the text type (machine-printed or handwritten).
No matter which class the problem belongs to, in general there are five major stages in the CR problem (Figure 1): 1) preprocessing, 2) segmentation, 3) feature extraction, 4) recognition, and 5) post-processing.

3.1 Preprocessing
The raw data, depending on the data acquisition type, is subjected to a number of preliminary processing steps to make it usable in the descriptive stages of character analysis. Preprocessing aims to produce data that are easy for the CR system to operate on accurately. The main objectives of preprocessing are: 1) noise reduction, 2) normalization of the data, and 3) compression of the amount of information to be retained. In order to achieve these objectives, the following techniques are used in the preprocessing stage.

Figure 1. Stages of character recognition: preprocessing, segmentation (splits words), feature extraction, recognition, post-processing.

3.1.1 Noise Reduction
The noise introduced by the optical scanning device or the writing instrument causes disconnected line segments, bumps and gaps in lines, filled loops, and so on. Distortion, including local variations, rounding of corners, dilation, and erosion, is also a problem. Prior to CR, it is necessary to eliminate these imperfections. The hundreds of available noise reduction techniques can be categorized into three major groups [7][8]: a) filtering, b) morphological operations, and c) noise modeling.

3.1.2 Normalization
Normalization methods aim to remove the variations of the writing and obtain standardized data. The basic methods for normalization are [4][10][16]: a) skew normalization and baseline extraction, b) slant normalization, and c) size normalization.

3.1.3 Compression
It is well known that classical image compression techniques transform the image from the space domain to domains that are not suitable for recognition. Compression for CR requires space-domain techniques that preserve the shape information.
a) Thresholding: In order to reduce storage requirements and to increase processing speed, it is often desirable to represent gray-scale or color images as binary images by picking a threshold value. Two categories of thresholding exist: global and local. Global thresholding picks one threshold value for the entire document image, often based on an estimate of the background level from the intensity histogram of the image. Local (adaptive) thresholding uses different values for each pixel according to local area information.
b) Thinning: While it provides a tremendous reduction in data size, thinning also extracts the shape information of the characters. Thinning can be considered a conversion of off-line handwriting to almost on-line-like data, with spurious branches and artifacts. The two basic approaches are 1) pixel-wise and 2) non-pixel-wise thinning [1]. Pixel-wise thinning methods locally and iteratively process the image until a one-pixel-wide skeleton remains; they are very sensitive to noise and may deform the shape of the character. Non-pixel-wise methods, on the other hand, use global information about the character during thinning and produce a median or centerline of the pattern directly, without examining all the individual pixels; a clustering-based thinning method, for example, defines the skeleton of the character as the cluster centers. Some thinning algorithms identify the singular points of the characters, such as end points, cross points, and loops; these points are a source of problems and, in non-pixel-wise thinning, they are handled with global approaches. A survey of pixel-wise and non-pixel-wise thinning approaches is available in [9].
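As an illustration of the global thresholding idea in 3.1.3 (a single cut-off derived from the intensity histogram), here is a minimal NumPy sketch. It uses a simple iterative two-class midpoint rather than any specific method from the literature, and the input array is a placeholder.

```python
import numpy as np

def global_threshold(gray: np.ndarray) -> np.ndarray:
    """Binarize a gray-scale image with one global threshold.

    The threshold is iterated toward the midpoint between the mean
    intensities of the darker and brighter pixels (a crude two-class split).
    """
    cut = gray.mean()
    for _ in range(10):  # iterate toward a stable two-class midpoint
        low, high = gray[gray <= cut], gray[gray > cut]
        new_cut = (low.mean() + high.mean()) / 2.0
        if abs(new_cut - cut) < 0.5:
            break
        cut = new_cut
    return (gray > cut).astype(np.uint8)  # 1 = background (paper), 0 = ink

if __name__ == "__main__":
    fake_page = np.random.randint(0, 256, size=(64, 64))  # placeholder image
    print("fraction labeled background:", global_threshold(fake_page).mean())
```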
3.2 Segmentation
The preprocessing stage yields a clean document in the sense that a sufficient amount of shape information, high compression, and low noise on a normalized image is obtained. The next stage is segmenting the document into its subcomponents. Segmentation is an important stage because the extent to which one can separate words, lines, or characters directly affects the recognition rate of the script. There are two types of segmentation: external segmentation, the isolation of larger writing units such as paragraphs, sentences, or words, and internal segmentation, the isolation of letters, especially in cursively written words.
1) External segmentation: This is the most critical part of document analysis and a necessary step prior to off-line CR. Although document analysis is a relatively distinct research area with its own methodologies and techniques, segmenting the document image into text and non-text regions is an integral part of OCR software, so anyone working in the CR field should have a general overview of document analysis techniques. Page layout analysis is accomplished in two stages: the first is structural analysis, which is concerned with segmenting the image into blocks of document components (paragraph, row, word, etc.), and the second is functional analysis, which uses location, size, and various layout rules to label the functional content of document components (title, abstract, etc.) [12].
2) Internal segmentation: Although methods have developed remarkably in the last decade and a variety of techniques have emerged, segmentation of cursive script into letters is still an unsolved problem. Character segmentation strategies are divided into three categories [13]: explicit segmentation, implicit segmentation, and mixed strategies.

3.3 Feature Extraction
Image representation plays one of the most important roles in a recognition system. In the simplest case, gray-level or binary images are fed to a recognizer. However, in most recognition systems, in order to avoid extra complexity and to increase the accuracy of the algorithms, a more compact and characteristic representation is required. For this purpose, a set of features is extracted for each class that helps distinguish it from other classes while remaining invariant to characteristic differences within the class [14]. A good survey of feature extraction methods for CR can be found in [15]. The hundreds of document image representation methods can be categorized into three major groups: global transformation and series expansion, statistical representation, and geometrical and topological representation.

3.4 Recognition Techniques
CR systems extensively use the methodologies of pattern recognition, which assigns an unknown sample to a predefined class. The numerous techniques for CR can be investigated within four general approaches of pattern recognition, as suggested in [16]: template matching, statistical techniques, structural techniques, and neural networks.

3.5 Post Processing
Up to this point, no semantic information has been considered during the stages of CR. It is well known that humans read by context, up to 60% for careless handwriting. While preprocessing tries to clean the document in a certain sense, it may remove important information, since the context information is not available at this stage.
The lack of context information during the segmentation stage may cause even more severe and irreversible errors, since it yields meaningless segmentation boundaries. It is clear that if the semantic information were available to a certain extent, it would contribute a great deal to the accuracy of the CR stages. On the other hand, the entire CR problem exists to determine the context of the document image, so utilizing context information in the CR problem creates a chicken-and-egg problem. The review of recent CR research indicates only minor improvements when shape recognition of the character alone is considered; therefore, the incorporation of context and shape information in all the stages of CR systems is necessary for meaningful improvements in recognition rates.

4. The Proposed System Architecture
The proposed research methodology for off-line cursive handwritten characters is described in this section, as shown in Figure 2.

Figure 2. Proposed system architecture: scanned document image → preprocessing (binarization, noise removal, skew correction) → segmentation (line, word, character) → feature extraction → training and recognition with Bayesian decision theory → recognized output.

4.1 Preprocessing
A whole set of tasks must be completed before the actual character recognition operation is commenced. These preceding tasks make certain the scanned document is in a suitable form, so that the input to the subsequent recognition operation is intact. Refining the scanned input image includes several steps: binarization, for transforming gray-scale images into black-and-white images; removing noise; and skew correction, performed to align the input with the coordinate system of the scanner. The preprocessing stage therefore comprises three steps: (1) binarization, (2) noise removal, and (3) skew correction.

4.1.1 Binarization
Extraction of the foreground (ink) from the background (paper) is called thresholding. Typically, two peaks comprise the histogram of gray-scale values of a document image: a high peak corresponding to the white background and a smaller peak corresponding to the foreground. Fixing the threshold value means determining the optimal value between the peaks of the gray-scale values [1]. Each candidate threshold is tried, and the one that maximizes the criterion separating the two classes, regarded as the foreground and background points, is chosen.

4.1.2 Noise Removal
The presence of noise can cost the character recognition system its efficiency; this topic has been dealt with extensively in document analysis for typed or machine-printed documents. Noise may be due to the poor quality of the document or may be accumulated while scanning, but whatever the cause of its presence, it should be removed before further processing. We have used median filtering and Wiener filtering to remove the noise from the image.

4.1.3 Skew Correction
Aligning the paper document with the coordinate system of the scanner is essential and is called skew correction. There is a myriad of approaches to skew correction, covering correlation, projection profiles, the Hough transform, and others. For skew angle detection, Cumulative Scalar Products (CSP) of windows of text blocks with Gabor filters at different orientations are calculated. The alignment of the text line is used as an important feature in estimating the skew angle. We calculate the CSP for all possible 50×50 windows on the scanned document image, and the median of all the angles obtained gives the skew angle.
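As a small sketch of the noise removal step in 4.1.2, the following uses SciPy's median filter on a binarized page. The Wiener filtering mentioned above and the Gabor-based skew estimation are not reproduced here, and the input array is a placeholder.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_speckle(binary_page: np.ndarray, window: int = 3) -> np.ndarray:
    """Suppress isolated salt-and-pepper pixels with a small median filter."""
    return median_filter(binary_page, size=window)

if __name__ == "__main__":
    # Placeholder page: mostly background (1) with a few random noise pixels (0).
    page = np.ones((32, 32), dtype=np.uint8)
    rng = np.random.default_rng(0)
    noise = rng.integers(0, 32, size=(20, 2))
    page[noise[:, 0], noise[:, 1]] = 0
    cleaned = remove_speckle(page)
    print("noisy pixels before:", int((page == 0).sum()),
          "after:", int((cleaned == 0).sum()))
```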
4.2 Segmentation
Segmentation is the process of distinguishing the lines, words, and even characters of a handwritten or machine-printed document, and it is a crucial step because it extracts the meaningful regions for analysis. There are many sophisticated approaches for segmenting the region of interest. Segmenting lines of text into words and characters may be straightforward for a machine-printed document, but it is quite difficult for a handwritten one. It can be approached by examining the horizontal histogram profile at a small range of skew angles. The details of line, word, and character segmentation are discussed below (a small projection-profile sketch follows this section).

4.2.1 Line Segmentation
The ascenders and descenders frequently intersect the adjacent lines above and below, and the lines of text themselves may flutter up and down. Each word of a line rests on an imaginary baseline that people implicitly assume while writing, and a method has been formulated based on this notion, as shown in Figure 3 (Figure 3. Line segmentation). The local minima points are calculated from each component to approximate this imaginary baseline, and clustering techniques are deployed to calculate and categorize the minima of all components and thereby recognize the different handwritten lines.

4.2.2 Word and Character Segmentation
The process of word segmentation follows line separation. Most word segmentation methods concentrate on discerning the gaps between characters to distinguish one word from another, a process that stems from the observation that the spaces between words are usually larger than the spaces between characters (Figure 4. Word segmentation). Not many approaches to word segmentation are dealt with in the literature, and despite this general assumption, exceptions are quite common due to flourishes in writing styles with leading and trailing ligatures. Alternative methods, not depending on the one-dimensional distance between components, incorporate cues that humans use: careful examination of the variation of spacing between adjacent characters, as a function of the characters themselves, helps reveal the author's writing style in terms of spacing. The segmentation scheme therefore expects greater spaces between characters with leading and trailing ligatures. Recognizing the words themselves in text lines can in turn help isolate the words. Segmentation of words into their constituent characters is required by most recognition methods; features like ligatures and concavity are used to determine the segmentation points.
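The horizontal histogram profile mentioned above can be sketched in a few lines of NumPy: rows whose ink count exceeds a small threshold are grouped into text lines. This is only a toy version of the local-minima and clustering method described in 4.2.1, and the input is a placeholder binary image (1 = ink).

```python
import numpy as np

def segment_lines(ink: np.ndarray, min_ink_per_row: int = 1) -> list[tuple[int, int]]:
    """Return (top, bottom) row ranges of text lines from a binary ink image."""
    profile = ink.sum(axis=1)            # horizontal projection: ink pixels per row
    rows_with_ink = profile >= min_ink_per_row
    lines, start = [], None
    for y, has_ink in enumerate(rows_with_ink):
        if has_ink and start is None:
            start = y                     # a new text line begins
        elif not has_ink and start is not None:
            lines.append((start, y - 1))  # the line ended on the previous row
            start = None
    if start is not None:
        lines.append((start, len(rows_with_ink) - 1))
    return lines

if __name__ == "__main__":
    page = np.zeros((12, 20), dtype=np.uint8)  # placeholder page
    page[2:4, 3:17] = 1                        # first "line" of ink
    page[7:9, 2:18] = 1                        # second "line" of ink
    print(segment_lines(page))                 # -> [(2, 3), (7, 8)]
```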
4.3 Feature Extraction
Since the database size is inevitably limited in practice, it becomes essential to make optimal use of the information stored in it for feature extraction. Representing character images as a sequence of straight lines, instead of as a set of pixels, is attractive in handwritten character recognition: while retaining the discriminative information needed to feed the classifier, a considerable reduction in the amount of data is achieved through a vector representation that stores only two pairs of coordinates in place of the information of several pixels. In off-line character recognition the vectorization process is performed only on the basis of the two-dimensional image of a character, as the dynamic level of the writing is not available. Reducing the thickness of the drawing to a single pixel requires thinning of the character images first (character before and after thinning). After streamlining the character to its skeleton, the vectorization process proceeds, relying on an oriented search over the pixels and on a criterion of quality of representation. The oriented search works by looking for new pixels, initially in the same direction and subsequently along the current line segment; the search direction deviates progressively from the present one when no pixels are found. The dynamic level of writing is thus retrieved, with moderate accuracy, and that is the object of the oriented search. Scanning the image from top to bottom and from left to right, the first pixel found becomes the starting point of the first line segment. According to the oriented search principle, the next pixel likely to be incorporated into the segment is then specified; horizontal is the default direction considered for the search. A line segment is concluded either when the distortion of representation exceeds a critical threshold or when a given number of pixels has been associated with the segment; computing the average distance between the line segment and the pixels associated with it yields the distortion of representation. Finally, the character image representation is reduced to a sequence of straight lines, each represented by the coordinates of its two extremities, and all coordinates are normalized with respect to the initial width and height of the character image to resolve scale variance.

4.4 Bayesian Decision Theory
Bayesian decision theory is a framework that minimizes the classification error. It plays the role of a prior: it applies when there is prior information about what we would like to classify. It is a fundamental statistical approach that quantifies the trade-offs between various decisions using the probabilities and costs that accompany such decisions. First, we assume that all probabilities are known; the cases where the probabilistic structure is not completely known are studied later. Suppose we know the priors P(wj) and the class-conditional densities p(x|wj) for j = 1, 2, ..., n, and we measure a feature value x (for instance, the lightness of a fish). Define P(wj|x) as the a posteriori probability, the probability of the state of nature being wj given the measured feature value x. We can use the Bayes formula to convert the prior probability into the posterior probability:

P(wj|x) = p(x|wj) P(wj) / p(x),  where p(x) = Σj p(x|wj) P(wj),

p(x|wj) is called the likelihood, and p(x) is called the evidence. For a two-class problem, the probability of error for a decision is P(error|x) = P(w1|x) if we decide w2, and P(w2|x) if we decide w1; the average probability of error is P(error) = ∫ P(error|x) p(x) dx. The Bayes decision rule minimizes this error because it guarantees P(error|x) = min{P(w1|x), P(w2|x)}. More generally, let {w1, ..., wc} be the finite set of c states of nature (classes, categories) and {α1, ..., αa} the finite set of a possible actions. Let λ(αi|wj) be the loss incurred for taking action αi when the state of nature is wj, and let x be the D-component vector-valued random variable called the feature vector. P(x|wj) is the class-conditional probability density function, and P(wj) is the prior probability that nature is in state wj; the posterior probability is again computed as P(wj|x) = p(x|wj) P(wj) / p(x). Suppose we observe x and take action αi: if the true state of nature is wj, we incur the loss λ(αi|wj). The expected loss of taking action αi is

R(αi|x) = Σj λ(αi|wj) P(wj|x),

which is also called the conditional risk. The general decision rule α(x) tells us which action to take for each observation x, and we want the decision rule that minimizes the overall risk R = ∫ R(α(x)|x) p(x) dx. The Bayes decision rule minimizes the overall risk by selecting, for every x, the action αi for which R(αi|x) is minimum; the resulting minimum overall risk is called the Bayes risk and is the best performance that can be achieved.
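The minimum-error rule above can be sketched in a few lines: pick the class whose prior times likelihood is largest (the evidence p(x) is the same for every class, so it can be ignored when comparing). The Gaussian class-conditional densities and the numbers below are illustrative assumptions, not the densities used in this paper.

```python
import math

def gaussian(x: float, mean: float, std: float) -> float:
    """Class-conditional density p(x | w) modeled as a 1-D Gaussian."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Illustrative two-class problem: priors P(w) and per-class (mean, std) for p(x | w).
classes = {
    "w1": {"prior": 0.6, "mean": 2.0, "std": 1.0},
    "w2": {"prior": 0.4, "mean": 5.0, "std": 1.5},
}

def bayes_decide(x: float) -> str:
    """Choose the class maximizing P(w) * p(x|w); equivalent to maximizing P(w|x)."""
    return max(classes,
               key=lambda w: classes[w]["prior"] * gaussian(x, classes[w]["mean"], classes[w]["std"]))

if __name__ == "__main__":
    for x in (1.0, 3.5, 6.0):
        print(f"x = {x}: decide {bayes_decide(x)}")
```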
4.5 Simulations
This section describes the implementation of the mapping and generation model. It is implemented using GUI (graphical user interface) components of the Java programming language under the Eclipse tool, with the data stored in a Microsoft Access database. A given handwritten character image is put through binarization, noise removal, and segmentation, as shown in Figure 5(a); feature extraction and recognition using Bayesian decision theory are then performed, as shown in Figure 5(b). (Figure 5(a): binarization, noise removal, and segmentation. Figure 5(b): recognition using Bayesian decision theory.)

5. Results and Discussion
The database contains 86,272 word instances from an 11,050-word dictionary written down in 13,040 text lines. We used the sets of the benchmark task with the closed vocabulary IAM-OnDB-t13, where the data is divided into four sets: one set for training; one set for validating the meta-parameters of the training; a second validation set which can be used, for example, for optimizing a language model; and an independent test set. No writer appears in more than one set, so a writer-independent recognition task is considered. The size of the vocabulary is about 11K. In our experiments we did not include a language model, so the second validation set was not used. Table 1 shows the results of the four individual recognition systems [17]. The word recognition rate is measured simply by dividing the number of correctly recognized words by the number of words in the transcription. We presented a new Bayesian decision theory approach for the recognition of handwritten notes written on a whiteboard. We combined two off-line and two on-line recognition systems; to combine the output sequences of the recognizers, we incrementally aligned the word sequences using a standard string matching algorithm. The evaluation of the proposed Bayesian decision theory against the existing recognition systems is shown in Figure 6.

Table 1. Results of the four individual recognition systems
System       | Method                   | Recognition rate | Accuracy
1st offline  | Hidden Markov model      | 66.90%           | 61.40%
1st online   | ANN                      | 73.40%           | 65.10%
2nd online   | HMM                      | 73.80%           | 65.20%
2nd offline  | Bayesian decision theory | 75.20%           | 66.10%

Figure 6. Evaluation of the Bayesian decision theory against the existing recognition systems.

For each output position, the word with the most occurrences is used as the final result. With the Bayesian decision theory, the accuracy could be increased in a statistically significant way.

6. Conclusion
We conclude that the proposed approach for offline character recognition fits the input character image with the appropriate features and classifier according to the input image quality. In the existing system, missing characters cannot be identified; our approach, using Bayesian decision theory, can classify missing data effectively, which decreases the error compared with the hidden Markov model. A significant increase in accuracy levels is found with our method for character recognition.