Monday, August 5, 2019

Health Effects of Occupational Exposure: Case Study

A newly recruited employee at a furniture manufacturing plant has recently been complaining of cough, chest tightness and shortness of breath. Symptoms start soon after commencing work and continue throughout the day and night. They improve on the weekends but return as soon as he starts work again. What are the possible diagnoses and which is the most likely? What work-related factors could be involved? Discuss the probable occupational condition in this employee, outlining pathogenesis, risk factors, clinical picture, diagnostic measures, preventative strategies and possible outcomes.

Exposure to wood dust can cause, or increase the risk of, cancer of the respiratory system and the gastrointestinal tract. A fourfold increase in risk for sinonasal cancer was found among men involved in the manufacture of wooden furniture, and a twofold increase in risk for gastric cancer was seen in all of the component industries of basic wood processing (Olsen, Moller and Jensen, 1988). Such a diagnosis is therefore not a recent phenomenon but the result of ongoing epidemiological research over the past decades. Prolonged or repeated exposure to air contaminants such as wood dust and to other chemicals used in wood furniture manufacturing, such as wood glue, wood stain and spray paint, can irritate the respiratory system and lead to occupational disease.

Diagnosis

In this case study, a newly recruited employee at a furniture manufacturing plant is complaining of cough, chest tightness and shortness of breath. Such symptoms can be diagnosed by attempting to identify what is causing this discomfort. Symptoms start soon after commencing work and continue throughout the day and night for five days, improve on the weekend when the employee is away from work, and re-start when he returns to work on Monday. To diagnose these symptoms, one must be aware of the hazards the employee is exposed to and have an indication of what could be causing the distress. Kuruppuge (1998) argues that the health effects of occupational exposure to wood dust can be summarized under five categories:

- toxicity (including dermatitis and allergic respiratory effects);
- non-allergic respiratory effects;
- sinonasal effects other than cancer (nasal mucociliary clearance and mucostasis);
- nasal and other types of cancer;
- lung fibrosis.

Medical diagnosis will show that these symptoms are work related, since they started straight after employment and were not felt previously, they improve when the employee is away from work, and they recur on returning to work. This can be confirmed clinically by objective testing, taking measurements of lung function before and during the work shift. Such testing uses the Peak Expiratory Flow (PEF) and helps determine whether the symptoms are caused by exposure to an occupational hazard at the place of work. The PEF rate measurement shows how much patients can blow out of their lungs in one breath, and it is especially useful when they are having a flare-up of a respiratory disease such as occupational asthma (OSCE Skills, 2013). The duties assigned to this employee are unknown. However, this is largely irrelevant, since it is the duration of exposure and the dose, rather than the job title itself, that produce the symptoms. The job could be a clerical one rather than a trade job, but if the employee is being exposed to chemicals or wood dust, then it is the working environment that is unhealthy.
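The cross-shift PEF comparison mentioned above can be illustrated with a small calculation. The sketch below is a minimal, purely illustrative Python example (all readings are hypothetical and it is not a clinical tool): it computes within-day PEF variability for a workday and for a day off, the kind of pattern an occupational physician would look for when judging whether symptoms are work related.

```python
# Minimal sketch: serial PEF readings are commonly summarised by their
# within-day variability, e.g. amplitude-percent-mean = (max - min) / mean * 100,
# compared between workdays and days off. All numbers below are hypothetical.

def pef_variability(readings):
    """Amplitude-percent-mean variability for one day's PEF readings (L/min)."""
    lo, hi = min(readings), max(readings)
    mean = sum(readings) / len(readings)
    return (hi - lo) / mean * 100.0

workday_pef = [410, 370, 340, 330]   # hypothetical readings across a shift
weekend_pef = [430, 425, 435, 428]   # hypothetical readings on a day off

print(f"Workday variability: {pef_variability(workday_pef):.1f}%")
print(f"Weekend variability: {pef_variability(weekend_pef):.1f}%")
# A clearly higher variability (or a consistent cross-shift fall) on workdays
# compared with days off is the kind of pattern that supports a work-related
# cause; interpretation belongs to an occupational physician.
```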
Work Related Factors

One of the most common toxic manifestations of inhaled agents in industrial exposures is irritation of the airways, resulting in breathing difficulties and even death for the exposed individual (Dallas, 2000). Exposure to wood dust and to the chemicals used in wood furniture manufacturing at all stages of wood processing can cause symptoms affecting both the upper and lower respiratory tract. For many years, wood dust was considered to be an irritant dust that irritated the nose, eyes or throat but did not cause permanent health problems (Work Place Alberta, 2009). However, epidemiological studies show that long-term exposure to wood dust may lead to allergies and cancer. Wood dust is a potential health hazard because wood particles from processes such as sanding and cutting become airborne. Breathing these particles for a long period of time may cause allergic respiratory symptoms, mucosal and non-allergic respiratory symptoms, and cancer. Toxic chemicals used in furniture manufacturing are also detrimental to occupational health. These chemicals can be absorbed into the body through the skin, lungs or digestive system and cause effects in other parts of the body. The major woodworking processes are debarking, sawing, sanding, milling, lathing, drilling, veneer cutting, chipping, mechanical defibrating and wood staining or spray painting. From the tree-felling stage onwards, through the various stages of woodworking and manufacturing, workers are exposed to airborne hazards. Many individuals develop asthma following workplace exposure, and some asthmatics suffer additional provocation following the inhalation of certain industrial toxins; the inhalation of wood dust, for instance, has been implicated in both situations (Dallas, 2000).

Risk Factors

Woodworking operations generate dusts of different particle sizes, concentrations and compositions. Particle-size distribution studies have shown that the major portion of airborne wood dust is contributed by particles larger than 10 µm, which can be trapped effectively in the nasal passages on inhalation and for which inhalable mass sampling is most appropriate. Inhalable Particulate Matter (IPM) sampling is the environmental measurement most closely predictive of the risk of developing nasal cancer (Hinds, 1988). According to the ISO (International Organization for Standardization), inhalable dust is defined as the mass fraction of total airborne particles which is inhaled through the nose and mouth (ISO, 1995).

Pathogenesis and Clinical Picture

The human respiratory system is a series of organs responsible for taking in oxygen and expelling carbon dioxide. In occupational health, diseases and conditions of the respiratory system can be caused by the inhalation of foreign material such as fine dust, chemicals, allergens and other irritants. The human respiratory system has natural defence mechanisms against airborne hazards. Dallas (2000) explains in detail that the nose has fine hairs acting as a front-line barrier filter for coarser dust, that is, particles larger than about 5 µm. The trachea, also called the windpipe, filters the air that is inhaled. It branches into the bronchi, two tubes that carry air into the lungs. This coarser dust is trapped in the nose, trachea and main bronchi, and it can be cleared by coughing and by specialised cells that destroy bacteria and viruses.
However, dust finer than 5 µm travels deeper into the lungs, reaching the bronchioles and the alveolar ducts and air sacs (alveoli), and settles there. This is likely to cause hypersensitivity reactions such as occupational asthma or hypersensitivity pneumonitis (inflammation of the walls of the air sacs and small airways), permanent obstructive disease and diffuse lung fibrosis, which may lead to occupational asthma or cancer of the respiratory tract. Hypersensitivity pneumonitis appears to be triggered when small particles penetrate deeply into the lungs, where they provoke an allergic response (Work Place Alberta, 2009). Both Kuruppuge (1998) and Dallas (2000) describe that initial effects can develop within hours or after several days following exposure and are often confused with flu or cold symptoms (headache, chills, sweating, nausea, breathlessness and other fever symptoms). Tightness of the chest and breathlessness often occur and can be severe. With exposure over a long period of time, this condition can worsen, causing permanent damage to the lungs: the walls of the air sacs thicken and stiffen, making breathing difficult.

Occupational asthma develops only after an initial symptom-free period of exposure and causes breathing difficulties due to inflammation of the bronchi and bronchioles, which restricts airflow into the alveoli. Two types of allergic reaction can take place in the lungs. Decreased lung capacity is caused by mechanical or chemical irritation of lung tissue by the dust. This irritation causes the airways to narrow, reducing the volume of air taken into the lungs and producing breathlessness. It usually takes a long time for a reduction in lung capacity to become apparent. Chronic Obstructive Pulmonary Disease (COPD) is the intersection of three related conditions, chronic bronchitis, chronic asthma and emphysema, and is a progressive disease that makes it very difficult to breathe (Zimmermann, 2012).

Prevention Measures to Improve Plants and Possible Outcomes

In practice, a distinction must be made between the different types of wood dust and the chemicals that are typically used. This is particularly the case for smaller craft businesses in Malta, in which the types of work, wood and working materials are constantly changing and many different activities take place in a small area. Hazards should be reduced as far as possible, and personal protective equipment should be the last resort, as outlined in the European framework directive (Directive 89/391, Art. 6). The employer must take all the necessary measures to achieve a general reduction of dust levels, as required by L.N. 36 of 2003, Articles 4, 5 and 6. This objective has to be pursued regardless of the potential cancer risks, since dust and chemicals carry a general risk to health and also influence the work flow and product quality. The employer must make a precise analysis of the existing risks and should record all the influencing factors, questioning the workers about their situation, their experiences and their proposals. On this basis, measures should be established for improving the working environment. Employees are obliged to follow all the occupational health and safety procedures, as outlined in L.N. 36 of 2003, Art. 15. The hierarchy of measures defined in Article 6 of EU Directive 89/391 is as follows:

a. Evaluating the risks which cannot be avoided;
b. Combating the risks at source;
c. Adapting to technical progress;
d. Developing a coherent overall prevention policy which covers technology, organization of work, working conditions, social relationships and the influence of factors related to the working environment;
e. Giving collective protective measures priority over individual protective measures;
f. Giving appropriate instructions to the workers.

The scope of these measures is to encourage and ensure improvements in the health and safety of workers at work through the prevention of risks, the promotion and safeguarding of occupational health and safety, and the elimination of those risks and factors which are likely to cause accidents at work, as outlined in L.N. 36 of 2003 under the Occupational Health and Safety Act.

References:

Dallas, C.E. (2000). Pulmonotoxicity: Toxic Effects in the Lung. In Williams, P.L., James, R.C. and Roberts, S.M. (Eds.), Principles of Toxicology: Environmental and Industrial Applications (2nd ed.). Canada: Wiley-Interscience.

Hinds, W.C. (1988). Basis for particle size-selective sampling for wood dust. University of California, USA.

ISO (1995). Air quality – Particle size fraction definitions for health-related sampling (1st ed.). ISO 7708:1995(E). Geneva: International Organization for Standardization.

Kuruppuge, U.A. (1998). Occupational Exposure to Wood Dust. Faculty of Medicine, University of Sydney, New South Wales, Australia. Retrieved December 19, 2013, from http://prijipati.library.usyd.edu.au/bitstream/2123/392/2/adt-NU1999.0018whole.pdf

Occupational Health and Safety Act 27 of 2000. L.N. 36 of 2003, General Provisions for Health and Safety at Work Places Regulations.

Olsen, J.H., Moller, H. and Jensen, O.M. (1988). Risks for respiratory and gastric cancer in wood-working occupations in Denmark. Retrieved December 21, 2013, from http://www.ncbi.nlm.nih.gov/pubmed/3410880

OSCE Skills (2013). Peak Expiratory Flow Rate (PEFR) Technique. Retrieved December 20, 2013, from http://www.osceskills.com/e-learning/subjects/explaining-the-peak-expiratory-flow-rate-technique/

Williams, P.L., James, R.C. and Roberts, S.M. (2000). Principles of Toxicology: Environmental and Industrial Applications (2nd ed.). Canada: Wiley-Interscience.

Zimmermann, K.A. (2012). Respiratory System: Facts, Function and Diseases. Retrieved December 21, 2013, from http://www.livescience.com/22616-respiratory-system.html

Paul Spiteri

Results Chapter: Memory Research

First of all, we have to determine the appropriate measure of product involvement. Referring to our meta-analysis, involvement is considered an endogenous variable moderating the effect of incidental advertising exposure and one of the consumer characteristics. Researchers have made great efforts to develop tools with which to measure involvement since the introduction of the concept to marketing by Krugman (1965), and although researchers agree that the study of low- versus high-involvement states is interesting and important, there is presently little agreement about how best to define, and hence measure, the construct of involvement. The reasons for the diverse definitions and measures of involvement are perhaps due to the different applications of the term. We are especially interested in involvement with products, which has been measured by numerous methods: rank-ordering products, appraising a series of products on an eight-point concentric scale as to their importance in the subject's life, and asking how important it is to get a particular brand (Zaichkowsky, 1985).
Zaichkowsky's scale is considered a valid measurement of product involvement (Goldsmith and Emmert, 1991), which is why previous research investigating the influence of product involvement has relied on this scale (Celsi and Olson, 1988; Chow et al., 1990). In his study published in 1992, McQuarrie confirms the strong performance of Zaichkowsky's Personal Involvement Inventory (PII) across a number of validation tests. He found that this measure is exceedingly reliable, that it is highly predictive of a broad range of behaviours, and that it successfully discriminates felt involvement across several products and a variety of situations. Zaichkowsky (1985) argued that the PII is context free, which makes it appropriate for measuring various types of involvement. In the conclusion to his study, McQuarrie (1992) indicated that Zaichkowsky's involvement measure can be a sufficient tool for researchers who need a short measure with high criterion validity and who can tolerate a slight decrease in reliability. Since involvement is proposed to be a variable in the decision process, the PII offers researchers a quickly administered tool, generalizable across product categories, that can be used as a covariate for other research questions (Zaichkowsky, 1985). These positive points do not remove several limitations of the measure: it is long and elaborate, and needlessly difficult to comprehend, which is why the scale was revised and shortened by Zaichkowsky in 1994. In our research we measure this construct with five nine-point semantic differential scales (important/unimportant, of no concern/of concern to me, irrelevant/relevant, interested/uninterested, and appealing/unappealing) (Zaichkowsky, 1994).

The Zaichkowsky (1994) five items were factor analysed using SPSS with principal component analysis and direct oblimin rotation. The rotated factor pattern consists of one factor for the preattentive processing data, and the explained variance of the initial solution was 27.45%. The factor loadings of the five items are greater than .600 and the qualities of representation (communalities) are greater than .400, which is the minimum required. Further, the results of the confirmatory factor analysis applied to this scale show that its internal reliability is α = .620, which is an acceptable value. The KMO index and Bartlett's test of sphericity were used to reveal the degree of correlation among the items considered: the KMO index (.859) and the Bartlett test significance (p = .000) are acceptable. The fit indices obtained from the confirmatory factor analysis indicated that the product involvement variable had acceptable fit on the key indices, with χ² = 142.00, a goodness-of-fit index of .936 and a root mean square of .040 (see Tables IV.3 and IV.4).
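The scale-purification steps just described (principal components extraction with oblimin rotation, the KMO and Bartlett checks, and Cronbach's alpha) were run in SPSS. As a rough, non-authoritative illustration, the sketch below shows how comparable statistics could be obtained in Python with the factor_analyzer and pingouin packages; the DataFrame `pii`, its file name and its column names are hypothetical placeholders, and the figures quoted above come from the authors' data, not from this code.

```python
# Sketch of the scale-purification steps described above, using the
# factor_analyzer and pingouin packages instead of SPSS.
# `pii_items.csv` and the column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

pii = pd.read_csv("pii_items.csv")   # five 9-point PII items, one column each

# Sampling adequacy and sphericity, analogous to the KMO/Bartlett checks above
chi2_bartlett, p_bartlett = calculate_bartlett_sphericity(pii)
kmo_per_item, kmo_total = calculate_kmo(pii)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2_bartlett:.2f} (p = {p_bartlett:.4f})")

# One-factor principal-components-style extraction with oblimin rotation
efa = FactorAnalyzer(n_factors=1, rotation="oblimin", method="principal")
efa.fit(pii)
print("Loadings:\n", pd.DataFrame(efa.loadings_, index=pii.columns))
print("Communalities:\n", pd.Series(efa.get_communalities(), index=pii.columns))
print("Proportion of variance explained:", efa.get_factor_variance()[1])

# Internal consistency (Cronbach's alpha)
alpha, ci = pg.cronbach_alpha(data=pii)
print(f"Cronbach's alpha = {alpha:.3f} (95% CI {ci})")
```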
In the third experiment of Janiszewski's (1993) preattentive ad processing study, 10 to 33 percent of the subjects claimed to recognise the target advertisements. Based on Janiszewski's (1993) study, our study set the threshold explicit memory rate at 25% as a condition for the successful manipulation of incidental advertising. To verify whether the manipulation scenario is feasible for the main experiment, we conducted a test with a different group of college students (N = 30). Subjects were assigned to either attentive or incidental processing conditions. In the incidental processing condition, subjects were told that there would be a test on the contents of the magazine pages, to assess how well they understood them. We explained to them that the goal of the experiment is to better understand consumer memory and how the different pieces of information on magazine pages are memorised (see Appendix D). As we noted from our meta-analysis in the second chapter, some researchers such as Janiszewski (1993) designed manipulation scenarios that gave subjects the opportunity to direct their attention to the ad content and process it attentively. We are particularly conscious that some attentive processing may be a natural part of typical consumer viewing, but we retain the following condition: if the number of subjects who remember seeing the target ads is below 25%, we consider the manipulation successful. To verify this, we instructed the subjects to read the content of three magazine pages. Once they finished reading the text, subjects were asked to complete recall and recognition tests. In a free recall test, subjects were asked to list all of the brand names from the banner ads they had been exposed to. Subjects produced a list of target brand names that were coded as a dichotomous variable (yes = 1 and no = 0). To be sure that the manipulation of incidental processing is successful, we must verify that the subjects' advertising recall rate is close to zero, or smaller than that for conscious processing, where subjects are asked to consciously evaluate a magazine page, and we must respect the condition that the subjects' advertising recognition rate in the incidental condition stays below the 25 percent threshold established by Janiszewski (1993). We ran chi-square tests on both recall and recognition rates. The results of our pretest showed that only three subjects in the incidental processing condition recalled the target advertising (3 of 15), compared with five subjects in the control processing group (χ² = 6.533, p = .05). Meanwhile, two out of fifteen subjects in the incidental processing condition recognised the target ad, compared with a 60 percent recognition rate (9/15) in the control processing group (χ² = 4.8, p = .05). For advertising recognition, the rate for the incidental processing condition (13.33 percent) is below the threshold we set in our study. Based on these results, we suggest that it is appropriate to use this scenario in the main experiment.
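The recall and recognition comparisons just reported were tested with chi-square statistics. As a rough illustration, the sketch below shows how such a 2x2 manipulation-check test could be run in Python with SciPy, using the cell counts quoted in the text; the statistic obtained depends on options such as the continuity correction, so the printed values are not claimed to reproduce the reported χ² figures.

```python
# Sketch of the manipulation-check tests described above: 2x2 tables of
# condition (incidental vs. control) by outcome (recalled / recognised or not).
# Counts follow the pretest figures reported in the text.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

recall = np.array([[3, 12],    # incidental: recalled, not recalled
                   [5, 10]])   # control:    recalled, not recalled
recognition = np.array([[2, 13],   # incidental: recognised, not recognised
                        [9,  6]])  # control:    recognised, not recognised

for name, table in [("recall", recall), ("recognition", recognition)]:
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    odds, p_exact = fisher_exact(table)   # exact test is safer with such small cell counts
    print(f"{name}: chi2 = {chi2:.3f}, p = {p:.3f}; Fisher exact p = {p_exact:.3f}")
```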
Conclusion

Based upon the results of our three pretests, «Cookies» and «Laptop computers» were selected as the target product categories to be used in the main experiment. The three magazine pages were developed by a professional designer using Adobe Dreamweaver 8.0, a tool well known for the development of web pages. For the word completion tests, a list of words was selected to be used in the main experiment. Finally, the feasibility of the incidental advertising processing manipulation scenario was tested, and the results showed that this scenario would successfully generate a condition for incidental processing of print advertising in the main experiment. We should mention that we first tried to run this experiment over the internet: we sent the questionnaire to 15 subjects and found that 12 of them had detected the presence of the advertising, which is why we finally decided to run the experiment in the laboratory, in order to control the duration of exposure and to give subjects no chance to detect the presence of the incidental banner advertising.

In the next chapter, the data analysis, including the exploratory and confirmatory factor analyses, was conducted on the data collected through the questionnaires. Initially we conducted an exploratory factor analysis; this stage enabled us to purify the items on the basis of their factor loadings from the principal components analysis and to estimate the reliability of the retained dimensions using Cronbach's alpha. To assess the measurement model, our research conducts a confirmatory factor analysis (CFA) with reliability and construct validity checks. To assess the overall fit of the proposed model, structural equation modelling was conducted as recommended by Anderson and Gerbing (1988). We describe in the next section the sample and the manipulation check measures. In the third section, we give the different measures of the model variables and the results of the purification of those measurements using principal component analysis (PCA). Principal component analyses with varimax rotation were conducted and factors with eigenvalues greater than 1 were extracted. The reliability of the scales used was checked with the Cronbach alpha coefficient (Nunnally, 1978). Additionally, in order to ensure the overall quality of our data, we took care to validate the metric character of the different measures: the Kaiser-Meyer-Olkin (KMO) measure evaluates the degree of intercorrelation between items, and Bartlett's test of sphericity has the advantage of providing indications on the maximum number of factors to be retained. Finally, we close with an evaluation of the dimensionality and validity of the measures and of the global and structural models.

V.2. Sample and manipulation check measures

The sample comprises 310 undergraduate students (150 males and 160 females) who participated in the experiment. The theoretical orientation of the present research motivated the choice of a non-random sampling method. The age of the subjects ranges from 19 to 30, with a mean of 24.4 years. In our research, students were recruited from the campus of the University of Engineers El Manar in Tunisia. No criterion was used other than each subject's willingness to participate in the study. We chose to work with 310 subjects mainly because the descriptive part of our research required a large number of participants to test the different hypotheses. Our goal is to have a sample that is as homogeneous as possible. The exploratory part of our research does not aim at any form of generalization; rather, it seeks to examine some theoretical links which are not sufficiently developed in the literature and to study decision-making processes in the context of incidental exposure to advertising. Experimental sessions were conducted in the laboratory on personal computers over a twenty-week period (from December 2010 to May 2011). The coding procedure took three months (from June 2011 to September 2011). To be sure that subjects experienced incidental versus attentive (or conscious) processing of the magazine advertising during the main experiment, four manipulation check measures were employed.

Objective knowledge: Eight objective knowledge questions were developed based on the contents of the magazine pages. With these questions we wanted to know how well subjects understood the contents of the magazine pages, by asking them to choose the appropriate answers from the alternatives in multiple choice questions (see Appendix F).
Advertising recall: To measure advertising recall, subjects were asked to cite all of the brand names from the banner ads they were exposed to during the experiment (free recall). The presence or absence of a brand name from the test (or target) ad on the subject's list was coded as a dichotomous variable (yes = 1 and no = 0).

Advertising recognition rate: Three banner advertisements, including two target advertisements and one distractor (filler), were presented one at a time, and subjects were required to indicate whether they remembered seeing the advertisement during the experiment. The design of the distractor is similar to that of the target advertisements. Advertising recognition was coded as a dichotomous variable. Recall and recognition were measured for the purposes of the manipulation check.

Familiarity and gender: Subjects' familiarity with the banner advertisements was assessed with a single nine-point item anchored by "very familiar" and "not at all familiar". We operationalise gender as a dichotomous variable; participants indicate whether they are (1) male or (2) female.

V.3. Evaluation of dimensionality and validity of scale measures

The suggested hypotheses involve four main dependent variables to be measured in the study, namely implicit memory, emotional responses, attitude toward the brand and consideration set, and two independent variables: product involvement and cognitive style.

V.3.1. Emotional response measure

Emotional responses were assessed with the SAM (Self-Assessment Manikin), a nonverbal measurement of emotional response. The SAM measures P (pleasure), A (arousal) and D (dominance), the three dimensions of emotional response of Mehrabian and Russell (1974). It is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal and dominance associated with a person's affective reaction to a wide variety of stimuli, and it represents a promising solution to the problems that have been associated with measuring emotional response to advertising (Morris et al., 1993). SAM depicts each PAD dimension with a graphic character displayed along a continuous nine-point scale. We chose to work with SAM because of its capacity to eliminate most of the problems associated with verbal measures, or with nonverbal measures based on human photographs. Bradley and Lang (1994) note that SAM was originally implemented as an interactive computer program and was later extended to include a paper-and-pencil version for use in groups and mass screenings; this version depicts nonverbal graphic drawings of differing points along each of the three major affective dimensions (Appendix F). SAM ranges from a smiling, happy figure to a frowning, unhappy figure when describing the pleasure dimension, and from an excited, wide-eyed figure to a relaxed, sleepy figure for the arousal dimension. The dominance dimension represents changes in control through changes in the size of SAM (Bradley and Lang, 1994). In our experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgements, with the Semantic Differential scale devised by Mehrabian and Russell (1974), which requires 18 different ratings. Subjects' reports were measured for a series of pictures that varied in both affective valence and intensity.
SAM is an economical, accessible method for quickly appraising reports of affective response in many contexts (Bradley and Lang, 1994). Further, we use this method because it is an easy-to-administer, non-verbal method for quickly assessing the pleasure, arousal and dominance associated with a person's emotional reaction to an event. Bradley and Lang (1994) state that SAM allows rapid assessment of what appear to be fundamental dimensions in the organisation of human emotional experience. Taken together, these data indicate that SAM is a useful method for measuring current feeling states, relating them to other indices of emotional response and to other processes affecting affective reactions to contextual stimuli. Before testing the hypotheses, however, the equivalent-forms reliability of the Self-Assessment Manikin measures of emotional response was first assessed by comparing them with the traditional 18 emotional measure items of Mehrabian and Russell (1974), since the SAM is a new measure that has never been applied to incidental processing.

First, the internal consistency reliability of the three emotional response dimensions showed that the pleasure factor had an alpha of .924 (eliminating the item satisfied/unsatisfied increases the internal reliability of this factor to .925), while the dominance factor had an alpha of .99 and the arousal factor an alpha of .99. All alpha coefficients were within acceptable standards (Nunnally, 1978). Then, the 18 Mehrabian and Russell (1974) items were factor analysed using SPSS with principal components analysis and direct oblimin rotation. The rotated factor pattern consists of three factors for the incidental processing data, and the explained variance for the initial solution is .385. The eigenvalues for all three factors were greater than 1, and no item cross-loaded on the extracted factors with loadings above .500. Items with loadings of .500 or higher were used to define the three factors: pleasure, arousal and dominance. Inter-factor correlations are small: .052 for pleasure and arousal, .115 for arousal and dominance, and .014 for pleasure and dominance. Therefore, a three-factor, seventeen-item solution revealed the most distinct and meaningful dimensions of the emotional responses resulting from the unconscious processing of incidental advertising. The Kaiser-Meyer-Olkin measure of sampling adequacy and the test of sphericity are excellent (.846 ≥ .800). The results of the exploratory factor analysis of the emotional response scale are presented in Table V.1.

V.3.2. Cognitive style measure

What we can notice is that there is a lack of established measures of cognitive style that can be used in a persuasion context. A third, somewhat popular measure of imagery is the VVQ, developed by Richardson (1977) to measure individual differences on a verbal-visual dimension of cognitive style. Among the multiplicity of proposed instruments, some, such as the Individual Differences Questionnaire (VVQ; Richardson, 1977) and the Style of Processing scale (SOP; Childers, Houston and Heckler, 1985), present severe limitations. In fact, Kozhevnikov (2009) reveals that the main problem of these questionnaires is their low internal reliability and poor predictive validity (Alesandrini, 1981; Boswell and Pickett, 1991). One of the main reasons for these problems was that many studies on cognitive style were rather descriptive and did not attempt to relate cognitive styles to contemporary cognitive science theories.
Blazhenkova and Kozhevnikov (2009) strongly criticised the fact that preceding instruments focused primarily on assessing verbal expression and fluency; there has been a demand to extend previous verbal assessment to other aspects of cognitive style. For this study, we use the OSIVQ scale developed by Blazhenkova and Kozhevnikov (2009). They developed a new scale based on a new theoretical model of visual-verbal cognitive style that distinguishes three separate dimensions, object imagery, spatial imagery and verbal, as opposed to the traditional bipolar visual-verbal cognitive style model that distinguishes between two opposing dimensions, visual and verbal. Blazhenkova and Kozhevnikov (2009) affirm, after a series of laboratory experiments, that the results of their confirmatory factor analysis showed that the overall fit of the new three-factor model is significantly better than that of the traditional visual-verbal two-factor model.

A pretest was conducted in which 30 participants were tested individually. They were administered the OSIVQ items with the following instructions: "This is a questionnaire about the way you think. Please read the following statements and rate each of them on a 5-point scale. Circle 5 to indicate that you absolutely agree that the statement describes you and circle 1 to indicate that you totally disagree with the statement. Circle 3 if you are not sure, but try to make a choice. It is very important that you answer all items in the questionnaire." There was no time limit for the completion of the questionnaire. With SPSS 16 we proceeded to an item analysis. The alpha score obtained is .602, an acceptable value for a research instrument. We notice that items 1, 3, 8, 9, 10, 15, 21, 24, 25, 28, 32, 37, 38, 41 and 42 are troublesome: they have low item-total correlations, and alpha would increase if we were to remove them. It is therefore necessary to delete the cited items to improve the reliability score of this scale. Those results are displayed in the next table.
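The item analysis just reported (overall alpha, item-total correlations and alpha-if-item-deleted) was performed in SPSS 16. A minimal Python sketch of the same kind of analysis is given below, assuming a hypothetical DataFrame `osivq` with one column per OSIVQ item; the item numbers flagged above come from the authors' data, not from this code.

```python
# Sketch of an item analysis: overall alpha, corrected item-total correlations,
# and alpha-if-item-deleted. `osivq_pretest.csv` is a hypothetical file with
# one column per OSIVQ item (1-5 ratings) and one row per participant.
import pandas as pd
import pingouin as pg

osivq = pd.read_csv("osivq_pretest.csv")

alpha_all, _ = pg.cronbach_alpha(data=osivq)
print(f"Overall Cronbach's alpha = {alpha_all:.3f}")

rows = []
for item in osivq.columns:
    rest = osivq.drop(columns=item)
    # corrected item-total correlation: item vs. sum of the remaining items
    r_it = osivq[item].corr(rest.sum(axis=1))
    alpha_wo, _ = pg.cronbach_alpha(data=rest)
    rows.append({"item": item, "item_total_r": r_it, "alpha_if_deleted": alpha_wo})

report = pd.DataFrame(rows).sort_values("item_total_r")
# Items with low item-total correlations whose removal raises alpha are the
# candidates for deletion, as described in the text.
print(report.head(15))
```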
V.3.3. Implicit memory measure

As discussed, implicit memory is defined as the automatic and nonconscious retrieval of stimuli. Since subjects failed to remember seeing the incidental advertisements in the preattentive processing condition, any enhanced performance of subjects' implicit memory is a function of unconscious priming effects involving spreading activation within a semantic network (Marcel, 1983). Theories of spreading activation (Anderson, 1983) suggest that the perception of a stimulus such as a priming word activates internal word representations associated with that prime in memory. This activation spreads to associated representations through a network of connections. Thus, for our subjects in the incidental processing condition, the target words presented in the word completion tests presumably received a portion of this spreading activation owing to the prior incidental ad exposure and, by virtue of being more active in memory, completion rates for the target words are greater than those for the control group. This process is believed to occur very quickly and to require no mental effort (Yoo, 2005).

One of the methods used to measure implicit memory effects is a word-fill task (Duke and Carlson, 1993). In such a task, participants are exposed to a target word in some form of media; for advertising research, this would likely be a brand name or logo in an advertisement. Any instructions given to the participant make no reference to the previously completed task. Often, target words are placed along with foil words on the test. A word is scored as correct if it matches the target word exactly in spelling. The goal of this type of experimental measure is to examine whether priming has occurred (Andrade, 2007). Holden and Vanhuele (1999) explored the possibility of dissociations between explicit, direct measures of memory (e.g. recognition) and implicit measures of memory (response facilitation in a lexical task). They argued that incidentally exposed information may result in learning effects that cannot be detected through direct measures of memory but can be uncovered with indirect measures (Pham, 1997).

Yoo (2005) notes that word fragment completion tests are known to be contaminated by conscious recollection of words during the test; that is, both implicit and explicit memory retrieval may contribute to overall performance on such a test (Jacoby, 1991). To address this issue, Jacoby (1991) proposed the process dissociation procedure (PDP) to separate out the effects due to explicit memory retrieval, providing an unbiased estimate of the amount of influence caused by implicit memory retrieval. This study employed a word fragment completion test with the PDP to estimate the effects of incidental processing on implicit memory performance. This attempt is a methodological advancement in the area of studying incidental advertising, and it is recommended that more studies employ this procedure. As Shapiro and Krishnan (2001) mentioned, this procedure has not yet been adopted in the area of marketing, and especially not in the case of incidental advertising; even though Yoo (2005) used this procedure in a marketing context, it was in the area of preattentive web banners rather than incidental advertising.

Jacoby (1991) developed a more elaborate process-dissociation procedure designed to quantify the strength of conscious and nonconscious forms of memory. The procedure involves combining results from an opposition (or exclusion) condition with those from an inclusion condition, in which subjects are told to use old words to complete test stems (Edel and Craik, 2000). The PDP uses two different tasks. In an exclusion task, subjects are instructed to complete word stems with words that were not presented in the advertisement; in an inclusion task, they are instructed to complete word stems with the words that were presented. Thus, in the exclusion task, an increased likelihood of completing word stems with exposed words would occur only if conscious memory retrieval failed (1 − C) and if memory retrieval by unconscious processing (U) led to a correct response. Jacoby translated this reasoning into simple equations that describe performance on the exclusion and inclusion tasks and provide a way to estimate the separate contributions of conscious and unconscious processing. Stated formally:

Exclusion task performance = (1 − C) × U   (1)

Inclusion task performance = C + (1 − C) × U   (2)

Using equations (1) and (2), C and U can be obtained easily by simple algebra:

C = Inclusion task performance − Exclusion task performance   (3)

U = Exclusion task performance / (1 − C)   (4)

where performance is measured by the proportion of correctly completed words in the word completion test. Yonelinas and Jacoby (1994, 1995) used a variation of the original procedure: instead of using two different instructions (exclusion/inclusion) in a test, subjects in this study were asked to determine whether each word presented was part of the incidental advertising shown during the experiment.
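Equations (1) to (4) above are simple enough to express directly in code. The sketch below is a small illustrative Python helper (not part of the original study's tooling) that applies the process dissociation computation; as an example it uses the same hypothetical proportions that are worked through in the next paragraph (inclusion = .60, exclusion = .40).

```python
# Process dissociation procedure (Jacoby, 1991), equations (1)-(4) above:
#   Exclusion = (1 - C) * U
#   Inclusion = C + (1 - C) * U
#   =>  C = Inclusion - Exclusion          (3)
#       U = Exclusion / (1 - C)            (4)

def process_dissociation(inclusion: float, exclusion: float) -> tuple[float, float]:
    """Return (C, U): estimated conscious and unconscious contributions."""
    c = inclusion - exclusion                      # equation (3)
    u = exclusion / (1.0 - c) if c < 1.0 else 0.0  # equation (4); guard against division by zero
    return c, u

# Illustration with the proportions used in the worked example below:
# 3/5 target words accepted in the inclusion task, 2/5 in the exclusion task.
c, u = process_dissociation(inclusion=0.60, exclusion=0.40)
print(f"Conscious component C = {c:.2f}, unconscious component U = {u:.2f}")  # 0.20 and 0.50
```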
Referring to the results of our second pretest, 13 words were chosen for the main experiment. Among them, 10 words appeared in the web magazine pages and the other words (distractors) did not appear in the target pages. Subjects were asked, "Did this word appear in the web advertising during the experiment?" when presented with the 13 words (target and distractor words). "Yes" responses were taken as measures of inclusion task performance and "No" responses as measures of exclusion task performance. From each subject's responses, the preattentive and conscious components were estimated. For instance, if a subject correctly identified three of the five target words (60%) in the inclusion task and two of the five target words (40%) in the exclusion task, the extent of the conscious advertising influence, as given in equation (3), would be .60 − .40 = .20, while the extent of the influence of preattentive processing, as given by equation (4), would be .40 / (1 − .20) = .50. In our study, in order not to prime the stimulus, implicit memory was assessed before the explicit memory measures.

V.3.4. Attitude toward the advertised brand measure

Attitude toward the advertised brand is one of the most frequently used measures of effectiveness. Traditionally, attitude toward the brand, as an affective response to ads, has been a popular indicator for measuring the effectiveness of advertising in traditional media contexts. Most researchers examining attitude toward the brand have agreed, implicitly or explicitly, on the importance of affective responses to the ad as an indicator of advertising effectiveness. Subjects were asked to evaluate each advertisement on three nine-point bipolar items: positive/negative, good/bad and favourable/unfavourable (Gardner, 1985; MacKenzie, Lutz and Belch, 1986; MacKenzie and Lutz, 1989). The attitude toward the brand measure therefore comprises three items. Exploratory factor analysis was then conducted to determine the dimensionality of the scale.

As Shapiro et al. (1997) did, we used a verbal checklist of brand names to measure the brand consideration set. This verbal checklist includes the brand names of ten product alternatives in each category. All ten brand names were real ones, to eliminate potential confounding effects from prior knowledge of or attitude toward the existing brands. We presented the brand names in an arbitrary order, and no information other than the brand names was provided. Two stimulus-based consideration set checklists were developed, based on the two product categories used in the experiment (see Appendix E). We posed the following question to each subject: "Check the names of the brands that you would be interested in trying. Please checkmark as many or as few names as you wish." This technique is similar to that used by Yoo (2005). The consideration set size was computed by counting the number of checked brand names, and the presence or absence of the target brand names (consideration set composition) was also recorded.

V.4. Estimation of the quality of the model using confirmatory analysis

Before testing the hypotheses, it is recommended, in addition to purifying the different measures, to verify the validity of the measuring instruments using factor analyses. This is possible with structural equation modelling. Evaluating a model consists of assessing the quality of fit of the theoretical model to the empirical data. This analysis is carried out in several stages, over the course of which fit is assessed successively for the global model, the measurement model and the structural model (Kline, 1998).
For a pragmatic picture of the underlying relationships among these variables to emerge, an investigation of the proposed model with the structural equation modelling approach is needed. This extension of the analysis is offered to add to the growing body of literature that specifies the interrelationships between these variables. The chief reason why we use this method is the greater recognition it gives to the efficacy and dependability of observed scores from measurement instruments. Specifically, measurement error has become a major issue in many disciplines, and structural equation modelling techniques explicitly take measurement error into account when statistically analysing data (Adelaar et al., 2003). Analysis of Moment Structures (AMOS v.19), a tool of SPSS (v.18), was used as the analytical means for testing the statistical assumptions, and the estimation of the measurement and structural equation models is described in the following sections of the study (Arbuckle, 2010). The conceptual model presented in Figure 3.1 was tested using structural equation modelling. The modelling was undertaken using the covariance matrix and the maximum likelihood estimation procedure. Structural equation modelling (SEM) was the most convenient of the analytic techniques available to test the theoretical model that was proposed a priori. Structural equation modelling, using the maximum likelihood estimation procedure, is a full-information technique, in that all model parameters are estimated simultaneously and a change in one parameter during the iteration process could result in a change in other parameters of the model (Diamantopoulos and Siguaw, 2005).

V.4.1. The fit of the global model

A parsimonious fit measure was used to diagnose whether model fit had been achieved by overfitting the data with many coefficients. Model fit was measured using the chi-square statistic, the root mean square error of approximation (RMSEA), the standardised root mean square residual (SRMR), the non-normed fit index (NNFI) and the comparative fit index (CFI). The root mean square error of approximation is usually regarded as one of the most informative fit indices: values less than .050 are indicative of good fit, and values between .050 and .080 of reasonable fit. Likewise, the smaller the standardised root mean square residual (SRMR), the better the model fit (Kelloway, 1998). There are several goodness-of-fit measures that can be used to assess the outcomes of a SEM analysis; those fit indices are provided by AMOS (v.19) (Golob, 2003). Frequently used measures include the root mean square error of approximation (RMSEA), which is based on chi-square values and measures the discrepancy between observed and predicted values per degree of freedom (a good model has an RMSEA value of less than .050); the comparative fit index (CFI), which compares the proposed model with a baseline model with no restrictions (a good model should exhibit a value greater than .90); and the consistent Akaike information criterion (CAIC), which compares the model fit with the d
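The model estimation and fit indices described in this section were obtained with AMOS. As a rough open-source analogue, the sketch below shows how a confirmatory factor/structural model could be fitted and its fit indices inspected in Python with the semopy package; the model specification is a deliberately simplified, hypothetical two-construct example with placeholder variable names and data file, not the authors' full model.

```python
# Hedged sketch: fitting a small CFA/SEM and inspecting fit indices with semopy,
# as a rough open-source stand-in for the AMOS analysis described above.
# The model below is a simplified, hypothetical example (two latent constructs
# with three indicators each and one structural path); it is NOT the authors' model.
import pandas as pd
import semopy

data = pd.read_csv("experiment_data.csv")   # hypothetical file of observed indicators

# Measurement part (=~) and structural part (~), lavaan-style syntax
model_desc = """
Involvement =~ inv1 + inv2 + inv3
AttitudeBrand =~ ab1 + ab2 + ab3
AttitudeBrand ~ Involvement
"""

model = semopy.Model(model_desc)
model.fit(data)                    # maximum likelihood estimation by default
stats = semopy.calc_stats(model)   # chi-square, CFI, TLI, RMSEA, etc. (columns vary by version)
print(stats.T)

# Conventional cut-offs mentioned above: RMSEA below about .05 (good) to .08
# (reasonable) and CFI above about .90 indicate acceptable fit.
```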
