Monday 28 December 2015

Issues affecting the reliability and validity of Schizophrenia diagnosis

Hello everyone, sorry it's been a while! Thought I'd finally finish off the last schizophrenia post - diagnostic criteria, and issues affecting diagnostic validity. As usual:

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Diagnostic Criteria


Schneider separated the diagnostic criteria for schizophrenia into two sets, "first-rank" and "second-rank" symptoms, and suggested that having even one first-rank symptom means a person is likely to have schizophrenia. The ICD (International Classification of Diseases) still focuses on first-rank symptoms, but the DSM (Diagnostic and Statistical Manual of Mental Disorders) has moved away from them.

First-rank symptoms include delusions (e.g. of control or persecution), auditory hallucinations, and thought disturbances (the belief that your thoughts are being broadcast for others to hear, or that others are inserting thoughts into your head).

Second-rank symptoms include disturbances of speech such as fragmentation, interruption, and incoherency, catatonic symptoms such as stupor, mutism and repeated movements, and "negative" symptoms such as apathy, avolition, a flattened range of emotional expression, and a lack of communication.

However, an issue with the concept of first-rank symptoms is that many other conditions, such as bipolar disorder, share these symptoms, threatening the validity of diagnosis - under the ICD's criteria, the presence of just one first-rank symptom, with no second-rank symptoms to help make a more specific diagnosis, could lead to a bipolar patient being incorrectly diagnosed with schizophrenia instead.


Diagnostic Issues


Because two different classification systems are used for diagnosis, criterion validity - the extent to which two different diagnostic systems agree - is an issue. If schizophrenia can be diagnosed using one system but not the other, at least one of the diagnoses must be invalid.

The DSM-5 no longer uses the DSM-IV's five diagnostic axes (I and II for clinical and personality disorders, III for medical conditions, IV for social conditions, V for level of functioning), and it no longer differentiates between subtypes of schizophrenia, as these proved unreliable due to symptom overlap between subtypes and shifts in which symptoms are most prominent. To be diagnosed, a patient must show two of the characteristic symptoms, at least one of which must be first-rank. The symptoms must have been present for the last 6 months, and active for at least one month.

The ICD-10 differentiates between 7 subtypes of schizophrenia, such as catatonic, paranoid, residual and undifferentiated, based on the most prominent symptoms. To be diagnosed under the ICD, a patient must display one first-rank symptom or two second-rank symptoms - so it places much more emphasis on first-rank symptoms, since the presence of just one of them is enough for a diagnosis.
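As a rough illustration of how the two thresholds differ, here is a minimal sketch of the decision rules described above (the symptom labels and function names are hypothetical simplifications for illustration, not taken from either manual):

```python
# Hypothetical sketch contrasting the two diagnostic thresholds described above.
FIRST_RANK = {"delusions", "auditory_hallucinations", "thought_disturbance"}
SECOND_RANK = {"disturbed_speech", "catatonia", "negative_symptoms"}

def meets_icd10_threshold(symptoms: set) -> bool:
    """ICD-10-style rule: one first-rank symptom OR two second-rank symptoms."""
    return len(symptoms & FIRST_RANK) >= 1 or len(symptoms & SECOND_RANK) >= 2

def meets_dsm5_threshold(symptoms: set, months_present: int, months_active: int) -> bool:
    """DSM-5-style rule: two symptoms, at least one first-rank,
    present for 6 months and active for at least 1 month."""
    first = len(symptoms & FIRST_RANK)
    total = len(symptoms & (FIRST_RANK | SECOND_RANK))
    return total >= 2 and first >= 1 and months_present >= 6 and months_active >= 1

# A patient with a single first-rank symptom illustrates the criterion validity problem:
patient = {"auditory_hallucinations"}
print(meets_icd10_threshold(patient))                                     # True
print(meets_dsm5_threshold(patient, months_present=7, months_active=2))   # False
```

On these simplified rules, the same patient meets one manual's threshold but not the other's, which is exactly the criterion validity problem described above.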

Comorbidity is another potential problem with schizophrenia diagnosis - the presence of one or more additional disorders occurring alongside schizophrenia makes it difficult to identify which disorder is the cause of a specific symptom, making it harder to diagnose the correct disorders and treat them accordingly, and reducing diagnostic validity.

Buckley et al (2009) analysed the medical records of their schizophrenic patients, and found that 50% had depression, 47% had substance abuse disorders, 29% had PTSD, and 15% had panic or anxiety disorders. Bottas (2009) carried out a similar analysis in the Psychiatric Times, and found that 26% of schizophrenic patients had OCD and 52% had obsessive-compulsive symptoms. These studies suggest that many schizophrenics have multiple disorders, and this should be taken into consideration when diagnosing patients in order to avoid an invalid diagnosis.

Contrastingly, symptom overlap can also invalidate diagnosis - many disorders share symptoms with schizophrenia, meaning that the wrong disorder could be diagnosed based on a specific, shared symptom.

Zorumski and Rubin (2013) found that bipolar disorder's most prominent symptom, severe episodes of mania, often includes delusions, hallucinations and catatonia - all symptoms that could lead to a bipolar sufferer being incorrectly diagnosed with schizophrenia.

Marsha et al (1995) found that 32% of bipolar sufferers showed 1st-rank symptoms of schizophrenia, and Carpenter (1974) found that 16% of depression sufferers showed them, suggesting that there is a clear symptom overlap between the conditions that could affect validity of diagnosis. 

Ross (1998) found that the more 1st-rank symptoms a patient displays, the more likely they are to be diagnosed with multiple personality disorder rather than schizophrenia, suggesting that 1st-rank symptoms are an indicator of MPD rather than schizophrenia specifically.


Reliability


Reliability is the consistency of diagnosis, measured by inter-rater reliability (whether different clinicians reach the same diagnosis), internal consistency (whether multiple patients with the same symptoms will be diagnosed in the same way) and test-retest consistency.
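Inter-rater reliability can be put on a numerical footing: the simplest measure is the percentage of cases on which two clinicians agree, and statistics such as Cohen's kappa also correct for chance agreement. A minimal sketch (the data are invented purely for illustration, and kappa is a standard statistic rather than one named in the studies discussed here):

```python
# Illustrative only: two clinicians' diagnoses of the same ten patients (invented data).
rater_a = ["SZ", "SZ", "BP", "SZ", "BP", "SZ", "SZ", "BP", "SZ", "BP"]
rater_b = ["SZ", "BP", "BP", "SZ", "BP", "SZ", "BP", "BP", "SZ", "BP"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # raw proportion of agreement

# Chance agreement: probability both raters would pick the same label independently.
labels = set(rater_a) | set(rater_b)
expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)

kappa = (observed - expected) / (1 - expected)   # Cohen's kappa: agreement beyond chance
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

Low agreement or low kappa between clinicians assessing the same patients would indicate poor inter-rater reliability of the kind discussed below.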

Several factors can reduce the reliability of schizophrenia diagnosis. The main diagnostic method is the clinical interview, and individual differences between clinicians - the age, personality and aptitude of the rater, as well as the bond of trust between the clinician and the patient - will produce different responses, all of which reduce inter-rater reliability. The severity of symptoms at the time of diagnosis can affect test-retest consistency, and deception can affect reliability in general, as evidenced by Rosenhan's 1973 study.

In Rosenhan's 1973 study, 8 sane pseudopatients presented themselves at mental hospitals claiming to have experienced auditory hallucinations. Once admitted with schizophrenia, they stopped presenting symptoms, yet were kept in for 7-52 days despite showing no further schizophrenic symptoms, and were given over 2000 doses of medication between them. A follow-up study told a mental hospital to expect pseudopatients over the next 3 months - 83 out of 193 patients were suspected of being pseudopatients by at least 1 medical professional, but all of them were actually genuine.

Rosenhan claimed from these results that psychiatrists could not make a consistent and accurate diagnosis. However, based on the DSM-II criteria of the time - under which a single reported auditory hallucination was sufficient - the diagnoses were arguably both reliable and valid; the admissions showed only an inability to recognise a lie. Rosenhan also suggested that patients' behaviour was viewed through the label of mental illness - the original pseudopatients' behaviour was pathologised, with their note-taking recorded as "writing behaviour".

Rosenhan's conclusions lack temporal validity - the modern version of the DSM has been revised to have more stringent diagnostic criteria. Symptoms must now be present for 6 months and active for 1 month in order for schizophrenia to be diagnosed.

Once institutionalised, the participants did not behave normally - they passively kept up the deception rather than admitting to their lie, and in light of this, the relative speed of their release could actually suggest competent and efficient mental health staff.

Rosenhan's suggestion that psychiatrists cannot reliably and accurately diagnose has been challenged by: 

Jakobsen (2005) who tested 100 Danish schizophrenic patients and assessed them based on their case notes, coming to a correct diagnosis for 98% of them.

Hollis (2000) who used the DSM-IV and case notes to correctly assess and diagnose 100% of a sample of schizophrenic patients. 

These results suggest that mental health professionals are much better at diagnosing schizophrenia nowadays, and this part of Rosenhan's conclusion lacks temporal validity.



Cultural bias in diagnosis


Emic constructs are behaviours or norms that apply only within particular cultures, whereas etic constructs apply globally. When an emic construct is treated as if it were a universal norm, this is an imposed etic. Eurocentrism leads to imposed etics in the diagnosis of schizophrenia: the DSM, a Western emic, is imposed as an etic, applying a Western, subjective idea of ideal mental health to non-Western cultures. If people from one culture are assessing people from another, behaviour can be misconstrued, leading to an invalid diagnosis.


Cultural difference could help to explain the higher rates of schizophrenia diagnosis in some ethnic minorities. If someone is uneasy talking to a psychologist of a different ethnicity to them, they may show withdrawal, alogia, and a lack of eye contact - all of which could be interpreted as symptoms of schizophrenia, reducing diagnostic validity.

Cochrane (1977) found that schizophrenia rates in the UK and in the West Indies are very similar, and close to 1%, but people in the UK of Afro-Caribbean origin are 7 times more likely to be diagnosed with schizophrenia than those of white European ethnicity. Migration stress and socioeconomic factors were ruled out, as other ethnic groups such as South Asian that migrated at a similar time are no more likely to be diagnosed, suggesting that there is a bias that leads to racial overdiagnosis of Afro-Caribbean people.

Harrison (1997) supports the temporal validity of Cochrane's earlier research, finding that 20 years later, the gap in racial diagnosis rates had widened - Afro-Caribbean patients were now 8 times more likely to be diagnosed with schizophrenia. 

Stowell-Smith and McKeown (1999) carried out a discourse analysis of psychiatrists' reports on 18 white and 18 black psychopaths, and found that with black psychopaths there was more emphasis on aggression and potential threat to society, compared to a greater emphasis on trauma and emotional state with white psychopaths. This suggests a racial bias in the area of psychopathology that leads to biased reporting of symptoms, calling diagnostic reliability into question. 

Read (1970) gave 194 UK and 134 US psychiatrists a case report and asked them to come to a diagnosis from it. 69% of US and 2% of UK psychiatrists diagnosed schizophrenia from it, suggesting large cultural differences in behaviour interpretation and diagnostic criteria. 

Neki (1973) studied the prevalence of catatonic schizophrenia among schizophrenics in the UK and India, and found the rate was 44% in India but only 4% in the UK, again suggesting large cultural variations in behaviour interpretation and classification.

Friday 11 December 2015

Theories of relationship maintenance

The investment model can be used as AO2 with which to evaluate either of the other theories, as it is long-term and looks at past and future commitments in a relationship, rather than focusing solely on short-term cost and reward.


Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Social Exchange Theory (SET)


Proposed by Thibaut and Kelley, this economic theory views relationship behaviour as a series of exchanges, and suggests that everyone is innately selfish, looking for the most profitable relationship that offers the most reward for the least cost. Rewards include emotional fulfilment, sex, and companionship. The theory suggests that people will only stay in a relationship if the rewards outweigh the costs in terms of time, effort and finances. Therefore, commitment to a relationship is dependent on its profitability - we assign behaviours a subconscious numerical value, either positive or negative, indicating their status and magnitude as either a cost or a reward.

Thibaut and Kelley also proposed that we have "comparison levels" (CL) that form the standards against which we judge our own relationships. Media, parents, family, peers and ex-partners all function as these points of comparison with which we weigh up the costs and benefits of our relationship, as well as looking to internal schemas of how a relationship should be in order to come to a judgement about the value of our relationship. Alternative comparison levels (CL Alts) are other prospective relationships with which we compare our own, evaluating the costs and benefits of leaving our partner and forming a new relationship - if the benefits of the alternative relationship are better than our current one, we are more likely to leave and start a new one. If the benefits of our CL Alts are not as good, we will stay in our current relationship.
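To make the cost/reward logic concrete, here is a minimal sketch of the comparison the theory describes (the values, variable names and thresholds are invented purely for illustration - SET does not specify how such quantities would actually be measured):

```python
# Illustrative sketch of Social Exchange Theory's cost/reward comparison (invented values).
rewards = {"companionship": 8, "emotional_support": 7, "shared_interests": 5}
costs = {"arguments": 4, "time_commitment": 3, "financial_strain": 2}

profit = sum(rewards.values()) - sum(costs.values())   # outcome of the current relationship

comparison_level = 9       # CL: standard built up from past relationships, media, peers
comparison_level_alt = 6   # CL Alt: expected profit from the best available alternative

satisfied = profit >= comparison_level       # satisfaction is judged against the CL
stays = profit >= comparison_level_alt       # staying or leaving is judged against the CL Alt

print(f"profit = {profit}, satisfied = {satisfied}, stays = {stays}")
# profit = 11 here, so on SET's account this person is both satisfied and likely to stay.
```

The numbers matter less than the structure: the theory's claim is simply that commitment tracks profit relative to the CL and CL Alt, which is exactly the point the evaluation below challenges.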

Mills and Clark (1980) provided conflicting evidence for social exchange theory with their identification of two types of romantic relationship - the "communal couple", giving out of altruism and concern for their partners, and the "exchange couple", where each keeps mental records of who is ahead and who is behind in terms of social exchange. Their suggestion that there are two types of couple challenges the degree to which SET can be applied to real-world relationships - SET only really explains the relationship dynamic of the exchange couple, not the communal couple.

Hatfield (1979) provided further evidence that challenged the validity of SET. Looking at people in romantic relationships who felt over- or under-benefitted, she found that those who gave more than they received felt angry and deprived, whereas those who received more than they gave felt guilty and uncomfortable. This challenges the idea that both partners in a relationship are intrinsically selfish and aiming for maximum reward - even though the over-benefitted were getting much more out of the relationship, they felt uncomfortable and unhappy because of it, and sought to equalise the balance.

However, research by Rusbult (1983) supports the central concepts of SET. Participants completed questionnaires over a 7-month period concerning rewards and costs associated with relationships. SET did not explain the early "honeymoon phase" of a relationship where balance of exchanges was ignored, but later on, relationship costs and benefits were significantly correlated with the degree of satisfaction, suggesting that this theory can help explain maintenance of long-term relationships quite well.

An issue with SET is that it could be considered overly reductionist, seeking to explain one of the most complex human behaviours as the result of a series of simple cost/reward analyses. It focuses only on the relationship in the present, ignoring past events and future rewards and commitments, oversimplifying the process of relationship maintenance in an attempt to numerically quantify different aspects of relationship behaviour. A more holistic explanation that takes into account individual differences, such as the degree to which someone desires a "profitable" relationship rather than an equal one, might better explain relationship maintenance in economic terms.

Another issue with SET is that it suffers cultural bias through ethnocentricity, seeking to globally apply the emic construct of desire for individual reward, imposing it as an etic. Western, individualist cultures such as those of the UK and the USA are likely to place more emphasis on the advancement of the individual in society than Eastern, collectivist cultures such as China, which are more likely to emphasise communal interest rather than individual gain. Therefore, this theory cannot necessarily be applied on a cross-cultural level, limiting its application.

A problem with SET is that it relies on two key assumptions. Firstly, that people constantly monitor their relationship's costs/rewards and compare them with alternative relationships; however, research has suggested that people only weigh up costs/rewards and compare them to CL Alts once they are already dissatisfied, so this theory may be more applicable to the breakdown of relationships than to their maintenance. Secondly, it assumes that everyone is intrinsically selfish, motivated purely by a desire for personal gain, when this may not be true - Sedikides (2005) suggested that most people are unselfish, doing things for others without expecting anything in return.


Equity Theory


Another economic theory, equity theory challenges the suggestion that each partner in the relationship is only aiming for personal reward, suggesting that fairness is more important than profit. It claims that the person who gets less in a relationship feels dissatisfied, and the person who gets more feels guilty and uncomfortable. CL and CL Alts still apply - the relationship is compared to schemas or to alternatives that might offer a fairer deal.

Walster et al suggested a 4 stage model of equity. 
  • People try to maximise their profit in the relationship.
  • Trading rewards occurs to bring about fairness - e.g. a favour or privilege is repaid by the partner.
  • Inequity occurs, producing dissatisfaction - the partner who receives less experiences a greater degree of dissatisfaction.
  • The disadvantaged partner endeavours to rectify the situation and bring about equity - the greater the perceived inequity, the greater the effort to equalise.
Stafford and Canary (2006) provide supporting evidence for equity theory. They asked 200 couples to complete measures of relationship equity and marital satisfaction. Satisfaction was highest in couples who perceived their relationships to be equitable, and lowest for partners who considered themselves to be relatively under-benefited. These findings are consistent with the key principle of equity theory - that people are most satisfied in a relationship where the balance of rewards and costs is fairly even and consistent.

However, research does not support the assumption that equity is equally important in all cultures. Aumer-Ryan et al interviewed men and women in Hawaiian (individualist) and Jamaican (collectivist) universities, and found equity to be less important in Jamaican relationships. This suggests that the theory is culturally biased and cannot be applied equally to both individualist and collectivist cultures, and seeks to impose desire for equity as an etic construct rather than the emic that it actually is.

This theory has real-world application to marital therapy. Attempts to resolve compatibility issues between spouses require issues associated with inequity dissatisfaction to be resolved first, because inequity indicates incompatibility in women's eyes. In research, wives reported lower levels of compatibility than husbands when the relationship was inequitable - suggesting that there are gender differences in how equity is perceived, and the theory is not equally applicable to either gender.

Another issue with this theory is that it assumes everybody wants equality. This is not always the case - as some partners may be perfectly happy to give more than they receive in a relationship without feeling dissatisfied, suggesting equity theory cannot fully explain every type of relationship.  


Investment Model


The final economic theory of relationship maintenance is based around long-term return on investments, looking for the best possible outcome. The number and importance of long-term investments decide whether a relationship will be maintained or break down. Investments such as houses, children, time, holidays, and assets serve as barriers to dissolution. Commitment to staying in a relationship is based on three factors:
  • Satisfaction: feeling that the rewards it provides are unique
  • A belief that the relationship offers better rewards than any CL Alts.
  • Substantial investments in the relationship.
Impett et al provide supporting evidence for the investment model. Testing the model using a prospective study of married couples over 18 months, they found that the commitment to the marriage by both partners predicted relationship stability and success, suggesting that substantial investment in relationships helps to maintain and steady them.

Jerstad provides further supporting evidence - he found that investments, most notably the time and effort put into the relationship, were the best predictor of whether or not somebody would stay with a violent partner. Those who had experienced the most violence were often the most committed.

The investment model is more long-term than the other economic theories of relationship maintenance, looking at past and future commitments in a relationship, rather than focusing solely on short-term cost and reward analysis.

However, an issue with the investment model is that it reflects an ethnocentric bias as an explanation of relationship maintenance. Cross-culturally, satisfaction, quality of CL Alts, and investment are not always the factors that influence commitment. There may be cultural or religious pressures to stay in an unsatisfactory relationship, and in some cultures relationship breakup, especially of a marriage, is not socially acceptable. Alternatively, some cultures may attach more stigma to one gender initiating a breakup than the other.

Monday 30 November 2015

Theories of relationship formation

Black - AO1 - Description
Blue - AO2 - Evaluation - studies
Red - AO2 – Evaluation - evaluative points/IDAs

Filter Theory


Kerckhoff and Davis proposed the filter theory, suggesting that relationship formation is based on systematic filtration of possible partners on three levels – starting from a "field of availables."

1 – Social demographic variables. Subconsciously, we filter down to a pool of people belonging to similar social demographics to us – same school, town, workplace etc. Individual characteristics play a very small role at this stage.

2 – Similarity of attitudes and values. Here, the pool is filtered based on the law of attraction – greater similarity brings better communication and a better chance of relationship developing further. Having similar hobbies, beliefs, and interests increases the chance that a relationship will develop further and more deeply.

3 – Complementarity of emotional needs. Once a couple is established in a fairly long term relationship, the relationship will develop for better or for worse depending on how well they fit together as a couple and mutually satisfy their needs. Similarities in the amount of emotional intimacy, sex, social interaction and physical proximity required increases the chance that the relationship will be successful in the long-term.

Kerckhoff and Davis provide supporting evidence for filter theory with their longitudinal study of student couples who had been together for either less or more than 18 months. Attitude similarity was the most important factor up until 18 months; after this, psychological compatibility and the ability to mutually meet needs was the most important factor in determining the quality of the relationship.

The theory can be considered to have a degree of face validity, as it is common sense to assume that similarities in demographic factors, attitudes and values systems would lead to a more happy and successful relationship and would thus be filters that we apply in the selection process.

Sprecher challenged this hypothesis, suggesting that social variables are not the only initial filter, and that couples matched in physical attractiveness, social background and interests were more likely to develop a successful relationship. This is supported by Murstein's matching hypothesis, which suggests that a significant factor in early attraction is the couple being of similar attractiveness levels - though people may desire the most physically attractive partner, they know in reality that they are unlikely to get or to keep them, so they look for people of a similar attractiveness to themselves.

Gruber-Baldini et al (1995) carried out a 21-year longitudinal study of couples and found that those who were similar in educational level and age at the start of the relationship were more likely to stay together and have a successful relationship, suggesting these are two factors ignored by Kerckhoff and Davis in their filter theory.

An issue with filter theory is that it could be considered to be overly deterministic, failing to capture the dynamic and fluid nature of human relationships by its division into three distinct stages, and failing to take into account the role of free will in partner selection.  Not all couples will have the same priorities in their relationships at exactly the same stages, and to suggest so is too nomothetic, ignoring individual differences between couples. 

Another issue with filter theory is that it could also be considered to be overly reductionist, seeking to explain the complex nature of relationship behaviour as a result of simple filtration processes, selecting a partner through a process of elimination from a “field of availables.” This is potentially an oversimplification of relationship formation, and cannot definitively explain the formation of homosexual romantic relationships. Homosexual couples may not necessarily have the same experiences that lead to their relationship being initiated as heterosexual couples, so the theory could be considered to have a heterosexist bias.


Reward/Need Satisfaction Theory


Reward/need satisfaction theory suggests that in order to progress from early attraction, the two people need sufficient motivation to want to continue getting to know each other. Long-term relationships are more likely to be formed if the partners meet each other's needs, providing rewards in the form of the fulfilment of a range of needs - including biologically based needs such as sex, and emotional needs such as giving and receiving emotional support and feeling a sense of belonging.

This theory works on two key principles of the behavioural approach: classical conditioning and operant conditioning. Through classical conditioning, repeatedly doing enjoyable activities with your partner means that your partner becomes a conditioned stimulus producing a conditioned response of happiness, leading to an intrinsic feeling of happiness when around them. Through operant conditioning, the sense of belonging and the fulfilment of emotional needs such as intimacy function as rewards in a positive reinforcement process, strengthening the relationship - the couple will like each other more and want to spend more time together.

Supporting evidence for this theory comes from Argyle's explanation that relationship formation works as a means to the satisfaction of motivational systems. Argyle (1994) outlined several key motivational systems underpinning social behaviour, and explained how relationship formation satisfies several social needs, namely: biological needs (collective eating, sex), dependency (being comforted), affiliation (a sense of belonging), and self-esteem. This supports the theory of attraction developing around need fulfilment, with partners acting as a means to the fulfilment of certain social needs.

Further supporting evidence for reward/need satisfaction theory comes from Aron et al (2005), who gave 17 participants who reported being "intensely in love" MRI scans, finding that dopamine-rich areas of the brain showed much more activation when the participant was shown a photo of the person with whom they had fallen in love, in contrast to someone they just liked. The amount of dopaminergic activity was positively correlated with the degree to which they felt in love. This supports the role of operant conditioning as a component of reward/need satisfaction theory - just seeing the person they loved stimulated the release of dopamine in the brain's reward circuits.

In some ways, this study treated psychology as a science by using MRI equipment to give objective, scientific results, measuring the activation of dopamine reward circuitry in the brain. This gives the study a high degree of internal validity, providing an objective measurement of the brain activity it claims to measure. However, in some aspects this study was less scientific - the self-reported description of "very much in love" cannot be scientifically and objectively verified, and is not falsifiable.

A study by May and Hamilton (1980) also supports reward/need satisfaction theory, providing supporting evidence for the role of classical conditioning. Female participants evaluated photographs of men while listening to rock music that stimulated a positive mood, music that stimulated a negative mood, or no music at all. The participants gave much more positive evaluations of personal character, physical attractiveness and general attraction in the positive-mood rock music condition than in the other two, suggesting that an association had formed between the positive feeling from the music and the men they were evaluating.

Attraction does not necessarily equal formation of relationships - this theory ignores matching and the opportunity to meet. These studies and the theory only explain how a relationship develops once there is already a degree of mutual attraction between two people, not how this initial attraction develops, so they do not fully explain the initiation of relationships.

An issue with May and Hamilton's study is that it was only carried out on female participants. Research and evolutionary theories of sexual selection suggest that attraction develops differently in males and females, so to generalise from females to males without taking these potential gender differences into account would be beta gender bias, and likely to produce an inaccurate generalisation. Research has suggested that males prioritise physical appearance in a partner, while females prioritise status and power - these differences in priorities must be taken into account.

Another issue with reward/satisfaction theory is that it is overly environmentally reductionist, explaining a complex human behaviour to be a result of simple behavioural learning based around reward mechanisms. The theory ignores social, cognitive and biological factors that could play a role in attraction, such as the social demographic variables described by Kerckhoff and Davis in their filter theory.

Thursday 12 November 2015

Disruption of biological rhythms

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs

Shift work


Normally, exogenous zeitgebers change gradually, such as the changing light levels over the course of the year. However, with shift work and jet lag, this change is rapid, and exogenous zeitgebers become desynchronised from endogenous pacemakers. For animals, this could lead to dangerous situations, such as an animal leaving its den at night when predators are around. In humans, the lack of synchrony may lead to health problems such as gastrointestinal disorders.

Shift workers are required to be alert at night and must sleep in the day, contrary to our natural diurnal lifestyle, and out of synchronisation with available cues from zeitgebers. Night workers experience a "circadian trough" - a period of decreased alertness and body temperature between 12 a.m. and 4 a.m. during their shifts, triggered by a decrease in the stress hormone cortisol. They may also experience sleep deprivation due to being unable to sleep during the day, as daytime sleep is shorter than natural night-time sleep, and more likely to be interrupted.

Czeisler (1982) studied workers at a Utah chemical plant as they adjusted from the traditional backwards shift rotation to a forwards shift rotation. Workers reported feeling less stressed, with fewer health problems and sleeping difficulties, along with higher productivity.  This was due to the workers undergoing "phase delay", where sleep was delayed to adjust to new EZs, rather than the traditional "phase advance", where sleep time was advanced by sleeping earlier than usual. These results suggest that phase delay is healthier than phase advance, as it is significantly easier to adjust to so carries less risk of circadian rhythm disruption.

Czeisler's findings have valuable real-world applications. For businesses employing shift workers, using a forwards rather than backward shift rotation will increase productivity and reduce the risk of employees making mistakes, as well as improve health due to phase delay being easier for the body's circadian clock to adjust to than phase advance.

Gordon et al (1986) found similar results to Czeisler that support the superiority of forward rotation over backwards rotation. Moving police officers from a backwards to a forwards rotation led to a 30% reduction in sleeping on the job, and a 40% reduction in accidents. Officers reported better sleep and less stress.

Studies suggest that there is a significant relationship between chronic circadian disruption resulting from shift work and organ disease. Knutsson (1996) found that individuals who worked shifts for more than 15 years were 3 times more likely to develop heart disease than non-shift workers. Martino et al (2008) found a link between shift work and kidney disease, and suggested that kidney disease is a potential hazard for long-term shift workers. However, the use of correlations in these studies means that direct cause and effect cannot be established, and there is not enough evidence to conclude that organ disease is a direct result of shift work - third, intervening variables cannot be ruled out.

The Chernobyl nuclear power plant and the Challenger space shuttle disasters both occurred during night shifts, when performance of workers was most impaired by the circadian trough. The catastrophic nature of these events emphasises the importance that should be placed on healthy shift rotations and the minimising of circadian disruption for workers in order to avoid further disasters.


There are four suggested approaches with which to deal with shift work and its circadian disruption.


  • Permanent non-rotating shift work allows the body clock to synchronise with the new exogenous zeitgebers and adapt to a specific rhythm. However, this is unpopular because not many people want permanent night work.
  • Planned napping during shifts has been shown to reduce tiredness and improve employee performance - but this is unpopular with both employees and employers.
  • Improved daysleep for night shift workers - keeping bedrooms quiet and dark, avoiding bright light and stimulants such as caffeine. However, this method can be disruptive of family life and lead to its own pressures.
  • Rapid rotation: rotating shift work patterns every two or three days avoids even trying to adjust to new exogenous zeitgebers. However, it also means that most of the time, rhythms are out of synchronisation, and there is controversy over the suggested effectiveness of this tactic.

The majority of shift work studies (Czeisler, Gordon et al, etc.) involve only male participants, so research into this topic is often gender biased, with the results often unrepresentative of females. Sex differences could mean differences in the levels of neurotransmitters such as orexin and serotonin that affect sleep cycles, so circadian disruption may affect males and females differently. This means that it would be beta gender bias to generalise results from men to women without taking these neurochemical differences into account.


Jet Lag


Jet lag is the disruption in circadian rhythms caused by travelling through multiple time zones very quickly by aeroplane, causing endogenous pacemakers to become desynchronised with local exogenous zeitgebers. This can result in a number of problems including fatigue, insomnia, anxiety, immune weakness and gastrointestinal disruption.

Flying west to east causes worse symptoms and a greater degree of circadian disruption than flying east to west, because phase advance is required in order to adjust to EZ changes when flying east, whereas phase delay is required in order to adjust to EZ changes when flying west. Studies into shift work demonstrate that phase delay is easier for the body's circadian clock than phase advance, causing a lesser degree of disruption and impairment.
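As a rough illustration of the phase advance/phase delay logic above, here is a minimal sketch (the function and example routes are invented for illustration, not drawn from any of the studies cited):

```python
# Illustrative sketch only: direction of circadian adjustment after crossing time zones.
def required_adjustment(zones_crossed: int, direction: str) -> str:
    """Flying east puts local clocks ahead of the body clock, so sleep must be moved
    earlier (phase advance - harder to adjust to); flying west puts local clocks
    behind, so sleep can simply be delayed (phase delay - easier to adjust to)."""
    if direction == "east":
        return f"phase advance of {zones_crossed} hours - worse jet lag expected"
    return f"phase delay of {zones_crossed} hours - milder jet lag expected"

print(required_adjustment(5, "east"))   # e.g. a New York to London style flight
print(required_adjustment(5, "west"))   # e.g. a London to New York style flight
```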

Three ways of coping with jet lag have been suggested. Melatonin supplements are widely used in the US to restore melatonin levels when jet lag has greatly disrupted circadian rhythms, in order to restore synchrony between the internal clock (EPs) and EZs. Planning sleep patterns beforehand has been shown to help adjustment - if arriving in the daytime, stay awake on the plane; if arriving at night, sleep on the plane. Splitting the travel over two days can also help, as each disruption is less severe and a less significant adjustment has to be made on the day of arrival.

Cho (2001) found that airline staff who regularly travelled across 7 time zones had a reduction in temporal lobe size and memory function, providing supporting evidence for the idea that chronic disruption of circadian rhythms due to jet-lag has long-term symptoms of cognitive impairment and neurological damage.

Recht, Lew and Schwartz (1995) provide supporting evidence for the idea that circadian disruption from jet lag is less severe when travelling from east to west, rather than west to east. They studied US baseball teams travelling between time zones for 3 years, and found that teams travelling east to west won 44% of games, whereas teams travelling west to east won only 37%. Although it could be that some teams were simply better than others, the length and sample size of the study means that this should even out. This suggests that phase delay (westward-bound) is easier for the circadian clock to adjust to than phase advance (eastward-bound.)

A significant issue with this study is gender bias, considering that it only studied male participants. Research has suggested that hormonal and neurological differences between males and females can influence sleep behaviour and, by extension, circadian rhythms, so results from this study may not apply to females too - they could be differently affected by circadian disruption resulting from jet lag. It is beta gender bias to generalise these results to both genders without taking potential hormonal differences between genders into account.  

Coren (1996) suggested several real-world applications of research into circadian disruption that can help reduce the severity of jet lag's circadian disruption. Firstly, sleep well before travelling - this will help avoid sleep deprivation. Secondly, avoid stimulants and depressants such as alcohol and caffeine - they will make the symptoms worse by further disrupting endogenous pacemakers. Thirdly, immediately adjust to local exogenous zeitgebers upon arrival - going out into the morning daylight as soon as possible helps resynchronise due to light's function as an EZ. Finally, adjust flight behaviour in anticipation; sleep if you're arriving at night, stay awake if you're arriving in the day.


Theories on the function of sleep

In the exam, you can be asked a 24-marker specifically on either restoration or evolutionary theories, so it is important to know both of these in equal depth and breadth.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Evolutionary theories of sleep


Evolutionary theories explain sleep as an adaptive behaviour - one that increases the chance of an organism's survival and reproduction, providing a selective advantage. Sleep has evolved as an essential behaviour due to this selective advantage it has provided over the course of our evolutionary history - animals who did not sleep were more likely to fall victim to predation, so could not go on to reproduce.

Meddis proposed the predator-prey status theory, claiming that sleep evolved to keep prey hidden and safe from predators when normal adaptive activities such as foraging are impossible - at night for diurnal animals, and in the day for nocturnal animals. Therefore, the hours of sleep required are related to an animal's need for and method of obtaining food, as well as its exposure to predators. Factors other than predator-prey status that can affect sleep behaviour include the sleeping environment and foraging requirements. Sleep evolved to ensure animals stay still and out of the way of predators when productive activities are impossible: the higher the vulnerability to predation, the safer the sleep site, and the less time that needs to be spent foraging, the more time an animal should spend sleeping.

This explanation is supported by the fact that animals are often inconspicuous when sleeping - taking the time beforehand to find themselves adequate shelter to keep them hidden from predators. This also explains the early stages of the sleep cycle, "light sleep", as a transitional phase from wake to sleep, allowing the animal to ensure their own safety in their immediate environment before completely losing their alertness.

A study by DeCoursey also supports this explanation. 30 chipmunks had their suprachiasmatic nuclei (a part of the brain involved in regulation of the sleep/wake cycle) lesioned, and were released into the wild. Within 80 days, significantly more of the lesioned chipmunks had been killed by predators than controls, suggesting that intact sleep patterns are vital in ensuring the safety of an animal in its natural habitat.

A strength of DeCoursey's study was the scientific rigour provided by the use of control groups, treating psychology with rigorous scientific methodology. Three groups of chipmunks were used: one with SCN damage, one which had brain surgery but no SCN damage (to control for the stress of brain surgery), and a healthy control group. The use of these controls means that cause and effect can be determined more easily - it can be reliably established that circadian disruption due to SCN damage increases the risk of death due to predation.

However, a study by Allison and Cicchetti challenges this explanation, finding that, on average, prey species sleep for fewer hours a night than predators - Meddis suggested the opposite trend, so his theory conflicts with these results.

The predator-prey status theory of sleep is holistic compared to Webb's hibernation theory. Rather than focusing on only one factor (status), Meddis suggested that several other factors can influence sleep behaviour, such as the sleep site (whether it is enclosed in a nest or a cave, or exposed on prairies or plains) and foraging requirements (whether feeding requires many hours grazing on nutrient-poor food sources, or relatively few hours gathering nutrient-rich foods such as nuts or insects). A holistic theory that takes multiple factors into account is likely to provide the best explanation for a behaviour as complex as sleep.

A problem with explaining sleep as a means to safety from predation is that many species may actually be far more vulnerable during sleep, and it would be safer to remain quiet and still yet alert. However, some species have adapted to this need for vigilance: porpoises only sleep one brain hemisphere at a time, while mallards sleep with one eye open to be able to see potential threats. The phenomenon of snoring also challenges this explanation, as it is likely to draw attention to the otherwise inconspicuous sleeping animal, and increase their risk of predation.

Webb proposed the hibernation theory, claiming that sleep evolved as a way of conserving energy when hunting or foraging were impossible. This theory suggests that animals should sleep for longer if they have a higher metabolic rate, as they burn up energy more quickly, so are in greater need of energy conservation.  Conservation of energy is best carried out by limiting the brain's sensory inputs, i.e. sleep.

Berger and Phillips found that sleep deprivation causes increased energy expenditure, even under bed-rest conditions. This suggests that sleep does conserve energy, and is especially useful when an animal is not carrying out its normal activities.

Studies have found a positive correlation between metabolic rate and required sleep duration - small animals such as mice generally sleep for longer than larger animals, supporting the idea that sleep is adaptive as a form of energy conservation.

In times of hardship, such as when food is scarce or the weather too cold, animals sleep for longer, suggesting that sleep helps them conserve all the energy they can when resources are scarce and every calorie is critical for survival.

However, not all organisms follow this general trend, and there are some extreme outliers that challenge this theory. The sloth, a relatively large animal with a slow metabolic rate, sleeps for approximately 20 hours a day, going against the general trend that Webb's theory predicts.

REM sleep, characterised by high levels of brain activity, actually uses the same amount of energy as waking. If REM sleep did not serve some other purpose, it would be maladaptive, as it does not help conserve energy due to the high levels of brain activity.


Overall evaluation of evolutionary theories of sleep


Mukhametov (1984) found that bottlenose dolphins sleep with one cerebral hemisphere asleep at a time, allowing them to be asleep yet alert and moving simultaneously. This adaptation supports the theory that sleep behaviour adapts to suit their environment and better resist selection pressures, lending credibility to the approach.

Generally, evolutionary theories of sleep are holistic, looking at the entire lifestyle of an animal rather than single factors in an attempt to explain and predict sleeping behaviour. Holistic, complex theories are likely to be able to provide the best full explanation for the complex behaviour that is sleep.

Much of the evidence in support of this approach is based on observation of captive animals rather than animals in the wild. It may not accurately reflect natural animal behaviour, so these studies may lack validity. Also, these theories are impossible to test through experiment or observation, as evolution happens over thousands of years, so the approach is not particularly scientific.

Finally, this approach is overly deterministic, seeing sleep behaviour in humans as well as animals as being entirely caused by our evolutionary past, with no role for free will. This is an oversimplification - there is evidence to suggest that free will can play a role in influencing biological processes such as sleep.


Restoration theories of sleep


Restoration theories explain the physiological patterns associated with sleep as produced by the body's natural recovery processes. Oswald explained NREM sleep as responsible for the body's regeneration, restoring skin cells due to the release of the body's growth hormone during deep sleep. He suggested that REM sleep restores the brain.

Oswald's theory is supported by the finding that newborn babies spend large amounts of time in proto-REM sleep (a third of every day). This is a time of massive brain growth, with the development of new synaptic connections requiring neuronal growth and neurotransmitter production. REM is a very active phase of sleep, with brain energy consumption similar to waking, so Oswald's theory can explain this phase and why it is so dominant in newborns.

Oswald also found that sufferers of severe brain trauma such as drug overdoses spend much more time in REM sleep. It was also known that new skin cells regenerate faster during sleep - Oswald used these results to conclude that REM sleep is for restoration of the brain, and NREM sleep is for restoration of the body.

Jouvet (1967) placed cats on upturned flowerpots surrounded by water, into which they would fall upon entering REM sleep (when muscle tone is lost). Over time, the cats became conditioned to wake up upon entering REM sleep, depriving them of this vital stage of sleep. The cats became mentally disturbed very quickly, and died after an average of 35 days. This supports Oswald's theory: the cats had NREM sleep and suffered no obvious physical ailments, but died from organ failure brought on by brain fatigue resulting from the lack of REM sleep.

Jouvet's use of non-human animals raises an important issue. As well as being potentially considered unethical due to the extreme cruelty inflicted upon the animals for relatively little in the way of socially important results, the use of cats is a problem due to physiological differences in the mechanisms controlling sleep in humans and cats, meaning that it is anthropomorphic to generalise the results to humans.

Horne's restoration theory suggests that REM and deep NREM sleep are essential for normal brain function, as the brain restores itself in these stages of "core sleep". Light NREM has no obvious function - Horne refers to it as optional sleep, which may have had a role in keeping the animal inconspicuous by ensuring safety before its progression to deep sleep. Entering NREM causes a surge in growth hormone release - but this is unlikely to be used for tissue growth and repair, as the nutrients required will already have been used up. He therefore theorises that bodily restoration takes place during hours of relaxed wakefulness in the day, when energy expenditure is low and nutrients are readily available.

Supporting evidence for Horne's theory comes from sleep-deprived participants given cognitive tasks to carry out. They can only maintain reasonable performance through significantly increased effort, suggesting that sleep deprivation causes cognitive impairment because the brain has not had the sleep necessary to maintain prime cognitive function.

Radio DJ Peter Tripp managed to stay awake for 8 days (200 hours). During this time he suffered delusions and hallucinations so severe that it was impossible to test his psychological functioning. It is thought that sleep deprivation caused these effects because the brain was unable to restore itself. This supports Horne's theory, as having no REM or NREM sleep led to cognitive disturbances rather than any physical impairment.

Randy Gardner remained awake for 11 days (264 hours), suffering from slurred speech, blurred vision and paranoia. He had fewer symptoms than Tripp despite being awake for longer, and soon managed to adjust back to his usual sleep pattern after the experiment. This again supports Horne's theory - slurred speech, paranoia and blurred vision are likely to be the result of neurological rather than physical impairment due to lack of core sleep.

Both Tripp and Gardner's accounts are case studies, meaning they lack generalisability to a wider population. The large differences between just two cases suggest that individual differences play a major role in how a person experiences sleep and how much sleep they need, making it difficult to draw valid conclusions about sleep from case studies alone.

Also, Tripp and Gardner were both male. Research has shown that hormonal differences can play a large role in determining how an individual experiences sleep, so, taking into account hormonal differences between the genders, it would be beta bias to generalise their results to females.

Finally, a methodological issue in Gardner's study comes from the observation of symptoms like blurred vision. It is difficult to establish whether this has a psychological or physiological cause, as it could be either the result of bodily impairment, such as a malfunction of the optic nerve, or of brain impairment, such as malfunction of the occipital lobe, the part of the brain responsible for visual processing. This makes it difficult to establish what damage was done by the sleep deprivation - physical and mental, as Oswald would suggest, or purely mental, as Horne would suggest?

Tuesday 10 November 2015

Lifespan changes in the nature of sleep

This is part of the "nature of sleep" topic, concerning stages of sleep as well as lifespan changes. However, questions have never specifically asked about stages, and there is a lot more to talk about as well as a lot more AO2 for lifespan changes than there is for the sleep cycle. If people want a specific post on stages, I can do that as well, but this is all you should need as far as I know.


Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Lifespan changes in the nature of sleep



The nature of sleep varies dramatically over the course of the human lifespan, both in terms of the required duration of sleep, and the proportion of the different stages of sleep. The proportion of REM (rapid eye movement) sleep shows an overall decrease throughout the years, whilst the proportion of NREM (non-rapid eye movement) sleep increases.


Floyd et al (2007) reviewed nearly 400 sleep studies, and found that REM sleep decreases by an average of 0.6% a decade. The proportion of REM sleep starts to increase again from about age 70, though this may be due to an overall decline in sleep duration.


Neonates (newborn babies) sleep for over 16 hours a day over several sleep periods. After birth, they display "active sleep" - an immature form of REM sleep showing high brain activity, and "quiet sleep"- an immature form of slow-wave (deep) sleep. The proportion of quiet sleep increases and the proportion of active sleep decreases as they grow from newborns to infants. Newborns in REM sleep are often restless, with arms, legs and facial muscles moving almost constantly. Newborns enter REM sleep immediately, and do not develop the NREM/REM sleep sequence until 3 months of age. Over the first few months, proportion of REM sleep declines rapidly.


Eaton-Evans and Dugdale (1988) found that the number of sleep periods for a baby decreases until about 6 months of age, then increases until 9 months, before slowly decreasing again. This disruption between 6 and 9 months could possibly be a result of teething problems - the pain from teething leading to restless and disrupted sleep, with frequent periods of wake.

Baird et al (2009) found an increased risk of waking between midnight and 6 a.m. in infants between 6 and 12 months old whose mothers had experienced depressive symptoms during or immediately preceding pregnancy. Regular night waking in the first year is associated with later sleep disruption, behavioural problems and learning difficulties.

A real-world application of this research is emphasising the importance of the early establishment of regular sleep patterns in infants, as well as the importance of the mother's mental health during pregnancy. In order to reduce the risk of behavioural problems and learning difficulties later on in childhood, parents should attempt to settle their baby into a regular sleep pattern as soon as possible after the disrupted sleep between 6 and 9 months has passed. Research has also suggested that nurturing a regular sleep cycle can reduce the incidence of SIDS (sudden infant death syndrome).

Puberty marks the onset of adolescence, when sexual and pituitary hormones are released in pulses during slow-wave (deep) sleep. Sleep quality and quantity do not change drastically, but external pressures and stress may lead to a less regular sleep cycle. Both sexes may begin to experience erotic dreams.


Crowley et al (2007) explained the change in the sleep patterns of adolescents as a result of changes in hormone levels, described as "delayed sleep phase syndrome" upsetting the circadian clock.

This study has a valuable real world application in education. Wolfson and Carskadon suggested that schools should begin later on in the day to accommodate for the poor concentration and attention spans of adolescents earlier in the morning. This change could have a positive effect on learning and productivity, thereby improving exam results and academic achievement.


A shallowing and shortening of sleep may occur in middle age, with increasing levels of fatigue. There is a decrease in the amount of stage 4 (deep) sleep, and it may be more difficult to stay awake. Weight issues may lead to respiratory problems such as snoring that can affect quality of sleep.


Van Cauter et al (2000) examined several sleep studies involving male participants, and found two periods of significant reduction in total amount of sleep: between 16 and 25, and between 35 and 50.


Only male participants were studied, so it is difficult to generalise to females, and it would be beta bias to attempt this generalisation. Other sleep studies have demonstrated the importance of hormones in controlling circadian rhythms and the sleep cycle, suggesting that hormonal differences between men and women may lead to different changes in sleep at different life stages. Also, environmental factors that affect the nature of sleep, such as stress, can affect men and women differently, meaning that the results cannot be generalised to both genders.


Conclusion



There is significant evidence to suggest that both the type and quantity of sleep vary tremendously over the course of the human lifespan, with neonates experiencing a different form of sleep to other age groups, and most people undergoing a steady decline in the proportion of REM sleep up until senescence.


Studies in this area use rigorous scientific methodology in their approach to studying sleep, often using electroencephalograph (EEG) machines to measure electrical activity in the brain over the course of a night's sleep. The use of sleep labs and EEGs provides a reliable and objective measurement of brain activity, but it can also undermine a study's validity. When participants sleep in a sleep lab, they are not exposed to external interruptions such as traffic or noisy neighbours that can reduce quality of sleep. Additionally, EEG equipment is bulky and uncomfortable to wear, which might also reduce sleep quality. These factors mean that results gathered may not have very high external validity, and lack real-world generalisability.


Research also suffers from cultural bias, as many of the studies take place in the UK or the USA, and thus are more likely to include American and British participants. Assuming that the results obtained apply globally minimises cultural differences (a beta bias) and is likely to be incorrect - for example, many Mediterranean countries take "siestas", daytime naps that split sleep into two blocks rather than one. Cultural practices such as these mean that conclusions drawn from studying American and British participants are unlikely to be cross-culturally applicable.

Monday 9 November 2015

Infradian Rhythms

This follows on nicely from the circadian rhythms topic - similar construction, use of IDAs, description etc. I'll be focusing on Seasonal Affective Disorder and the menstrual cycle here as my two examples of infradian rhythms. Again, I'm writing in the style of an exam question response, so will only include as much detail as we could be expected to write in half an hour - but if anyone is interested in more studies to use, post a comment or message me!

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Control of infradian rhythms - EPs and EZs


Infradian rhythms are biological rhythms with a cycle length of more than 24 hours - such as the menstrual cycle, with a cycle length of about a month, or Seasonal Affective Disorder (SAD), which fluctuates seasonally. SAD is characterised by depression experienced in the winter months, which then disappears during the summer. There are two main endogenous pacemakers that control SAD: the hormone melatonin and the neurotransmitter serotonin. These two chemicals are normally in a balanced equilibrium, but when an increase in melatonin occurs, leading to a fall in serotonin levels, depression can develop. Light is a key exogenous zeitgeber in this cycle. During the winter months, when there is much less sunlight, the pineal gland produces more melatonin, leading to a fall in serotonin levels that causes depression. This explains the winter onset and summer disappearance of SAD.

Evidence to support the serotonin-melatonin hypothesis of SAD comes from the successful real-world application of phototherapy in its treatment. Daily exposure to artificially heightened light levels using a light box has been shown to help alleviate symptoms of SAD. This, as well as being a valuable application of the theory, supports the idea of low light exposure causing SAD's melatonin rise and serotonin fall.

Further supporting evidence comes from a study by Eastman, who found that SAD patients were much more likely to show a reduction in symptom severity when exposed daily to bright morning light rather than dim evening light or a placebo. Again, this supports the concept of a sunlight deficiency being responsible for SAD - exposure to the bright lights triggered the release of serotonin, which would help restore a healthy serotonin/melatonin equilibrium.

However, Murphy measured the serotonin and melatonin levels of a sample of SAD sufferers and a control group of non-sufferers hourly over a few days, and found no significant differences between the groups for either chemical. This challenges the serotonin-melatonin hypothesis, suggesting that factors other than chemical levels are involved in the development of SAD.

Biological reductionism is a key issue with this theory. During winter, many people have much lower levels of social contact and physical activity - both of these factors could play a role in the winter onset of SAD. It is overly simplistic to suggest that the disorder is purely a result of biological factors, when social factors could play just as important a role in explaining its winter onset.

The menstrual cycle is another infradian rhythm, with an average cycle length of 28 days. It is regulated by the hormones progesterone and oestrogen, produced by the ovaries under the control of the pituitary gland. As the cycle begins, both hormone levels are low during menstruation, but a surge in oestrogen levels triggers ovulation. After this, progesterone levels steadily increase over two weeks to maintain the uterine lining in preparation for a pregnancy. If the egg is not fertilised after two weeks, a drop in both hormone levels triggers menstruation, restarting the cycle.

Research has demonstrated that the pheromones of other women are an exogenous zeitgeber that affects hormone levels, influencing the menstrual cycle. Russell used a single-blind trial, applying cotton pads containing traces of pheromones from an "odour donor" to participants, and found that the participants' menstrual cycles synchronised with that of the donor. This supports the concept of pheromones as an EZ that can influence this infradian rhythm.

Through its use of a single-blind trial and a control group to help establish cause and effect, this study uses fairly scientific methodology to gather data. This lends credibility to the results - due to these controls, it has high replicability, meaning it can be repeated with similar results on different sample groups, helping establish and generalise a conclusion.


However, a study by Yang and Schank challenged this conclusion. They studied 186 pairs of Chinese women who lived together, controlling for the methodological errors of earlier studies, and found no synchronisation of cycles beyond what would be expected by chance. This suggests that pheromones are not an EZ that affects this biological rhythm, and that the menstrual cycle is controlled only by hormonal EPs.

However, a key issue with this study is that, despite its large sample size, it lacks generalisability. Studying only Chinese women and then attempting to apply the results globally imposes an etic construct on all cultures - especially as high levels of environmental pollution in China could interfere with pheromone transmission, making Chinese women particularly unrepresentative of the global population. This attempted generalisation marginalises potential differences between ethnicities and cultures that could mean the results do not apply cross-culturally.


Conclusion


In conclusion, while there is evidence to support the role of exogenous zeitgebers in infradian rhythm control, both SAD and the menstrual cycle seem to be predominantly regulated by endogenous pacemakers. However, evidence has shown that free will can affect biological and biochemical systems (Born et al, 1999, found higher ACTH levels in the blood of people who chose to wake early), so suggesting that SAD is entirely a result of uncontrollable biological processes is overly deterministic. Similarly, the true mechanism that controls infradian rhythms is likely to involve both biological and environmental factors - it is too reductionist to single out either type of factor as being entirely responsible.

Thursday 5 November 2015

Schizophrenia - cognitive therapies

One more schizophrenia post after this one - diagnosis, reliability & validity. This post will cover Cognitive Behavioural Therapy and Cognitive Behavioural Family Therapy, two psychological treatments for schizophrenia. There's probably more here than you can expect to write in half an hour, so pick your favourite studies and relevant evaluative points and use those. The only one I would recommend definitely using is Falloon et al (1985), as it provides strong supporting evidence for the effectiveness of CBFT compared to CBT.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points

Cognitive Behavioural Therapy


CBT is not a "cure" for schizophrenia, as the cognitive distortions and disorganised thinking associated with schizophrenia are a result of biological processes that will not right themselves when the correct interpretation of reality is explained to the patient. The patient is not in control of their thought processes. The goal of CBT is to help the patient use information from the world to make adaptive coping decisions - improving their ability to manage problems, to function independently and to be free of extreme distress and other psychological symptoms. CBT teaches them the social skills that they never learned, as well as how to learn from experience and better assess cause and effect. Skills taught often address negative symptoms, such as alogia, social withdrawal and avolition, and can include social communication skills, the importance of taking antipsychotics routinely, and managing paranoia and delusions of persecution by challenging the evidence for these irrational beliefs.

Cognitive Behavioural Family Therapy (CBFT) is designed to delay relapse by helping the family of the schizophrenic to support the patient, by methods such as stress management training, relaxation techniques, communication and social skills, emphasis on the importance of antipsychotic drugs, and assessment of expressed emotion. High levels of expressed emotion on scales of hostility, emotional over-involvement and critical comments have been linked to rehospitalisation, so CBFT uses cognitive and behavioural methods to lower the emotional intensity of the patient’s home life. It has two general goals: To educate family members about schizophrenia, and to restructure family relationships to facilitate a healthier emotional environment.

Laing suggested the most important factor in the progression of schizophrenia is the family and how they treat the patient. A study by Brown (1972) supports this - he studied family communication patterns in schizophrenics returning home after hospitalisation.  Results showed that communication was a critical variable in whether patients would relapse into a psychotic state – patients returning to homes with a high level of expressed emotion were much more likely to relapse than those returning to homes with a low level. This supports the role of expressed emotion in determining long-term outcomes for schizophrenics.

Vaughn + Leff (1976) studied 128 schizophrenics discharged from hospital and returned to their families. Communication patterns between family members were rated for EE. The crucial finding was that families showing high levels of negative expressed emotion (hostility, over-involvement, criticism) were more likely to have their patient relapse than families showing low levels of negative EE. Relatives with high levels of negative EE responded fearfully to the patient, and were characterised by a lack of insight into and understanding of the condition.

Leff + Vaughn (1985) found that a high level of positive EE with communication patterns showing warmth and positive comments is associated with prevention of relapse. They concluded that not all expressed emotion is detrimental to the relapse prospects of the patient. 

Sarason + Sarason (1998) summarised key findings from research into EE and schizophrenia: 

  • Rates of EE in a family may change over time – during periods of lower symptom severity, rates of negative EE drop and vice versa. High rates of EE may only reflect periods of high symptom severity, and not be an overall reflection of the family dynamic. 
  • Cultural factors may play a role in EE. The association between high EE rates and relapse has been replicated in many cultures, but cultural factors may influence rate of EE and the way it is communicated. Cross-cultural studies have shown that Indian and Mexican-American families show lower levels of negative EE than Anglo-American families. 
  • EE is not limited to families. The association between EE and relapse has been demonstrated with patients living in community care - the significant factor could be communication patterns between patient and those they live with, rather than with family. 

Falloon et al (1985) found a markedly lower relapse rate among schizophrenic patients receiving CBFT than in those just receiving individual CBT. FT sessions took place in the patients’ homes, with family and patient participating. Importance of medication was emphasised, and the family was instructed in ways in which to express both positive and negative emotions in a constructive, empathetic manner. Symptoms were explained to the family, and both family and patient were instructed in adaptive coping mechanisms. Large differences in effectiveness were found - 50% of those in the individual-therapy group returned to hospital over the course of the study compared with only 11% of those in the family-therapy group. 

However, investigators were mindful that patients undergoing CBFT may have improved more than the controls due to taking their medication more routinely – family therapy subjects complied better with their medication regimens.

However, these results were challenged by a meta-analysis from the University of Hertfordshire covering over 50 studies on the use of CBT from around the world. It found only a small therapeutic effect on schizophrenic symptoms such as delusions and hallucinations, which disappeared when blind studies were used, suggesting that CBT has a negligible effect in treating schizophrenia, if any at all.

Jauhar et al (2014) conducted a systematic review and meta-analysis of the effectiveness of CBT for schizophrenia, examining potential sources of bias. They found that overall, CBT has a therapeutic effect on schizophrenic symptoms in the "small" range, and that this effect reduces further when sources of bias are controlled for.


Sarason and Sarason suggested two ways in which findings have been misinterpreted. “Expressed emotion” has been misinterpreted to mean that the expression of any emotion is harmful, rather than just negative emotions – in fact, the expression of warmth and positive emotions can play a role in reducing relapse rates.


Secondly, some family members feel guilty for their emotional expression – it is important to emphasise to them that it does not play a direct causal role, but rather, is a factor that may influence relapse. Allowing them to feel guilt can actually increase the family's levels of negative expressed emotion, harming the patient's long-term prospects.

Global cultural differences mean that in some cultures, high levels of expressed emotion are not a social norm. Research findings suggest that cultures with lower levels of expressed emotion should have lower schizophrenia rates, but they do not, challenging the role of EE in the development and maintenance of schizophrenia.

There are ethical issues with comparing a therapy group with a control. If the therapy group improves and the control group doesn't, the researcher has knowingly denied the control group an effective method of treatment.

Schizophrenics do tend to lack social skills, and many of the negative symptoms (disorganised speech, speaking very little, lack of emotion) do help isolate them socially, so therapy that emphasises social functioning is appropriate and important in helping treat this specific aspect of the disorder.

Research into CBFT has not shown a causal role for the family in schizophrenia development, as Bateson once thought, but has shown that the family can be a powerful factor in determining the patient’s risk of relapsing to a psychotic state. CBFT used to restructure family relationships and reduce levels of negative EE is a crucial tool in both increasing the patient’s quality of life, and helping the family detect early warning signs of a relapse.

Studies that compare effectiveness between different therapies often do not measure outcomes in the same way - some look at attrition rates, some look at symptom severity before and after, and some look at relapse rates. Also, the "hello-goodbye effect" refers to the bias caused by patients who tend to exaggerate their symptoms before therapy, and exaggerate their progress after therapy, leading to inaccurate conclusions being drawn. This makes holistic comparisons of effectiveness between therapies difficult, and results must be handled with care.

CBT/CBFT does significantly improve functioning and reduce the suffering of schizophrenic patients, but it is not a cure, and is unlikely to work on its own. When used in conjunction with appropriate drugs, it tackles symptoms such as poor social skills, alogia and avolition, but is often unsuccessful at treating the more serious positive symptoms. Even when social function is the focus of therapy, it is not possible to increase social functioning to the level of non-schizophrenics.


Wednesday 4 November 2015

Circadian Rhythms

Thought I'd try the circadian rhythms topic from the other half of the course next. There are still two schizophrenia topics left to write up - psychological therapies, and diagnosis, classification & validity - these should be finished sometime in the next fortnight. IDAs (issues, debates and approaches) are much more important on this side of the course, I'll try and clearly signpost them as I go. I will try and write this in the style of a response to an exam question into this topic, but if people are interested, I can write an additional post including more studies than necessary so you can pick and choose your favourites.

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points/IDAs


Control of circadian rhythms - EPs and EZs


Circadian rhythms are biological rhythms with a cycle length of roughly 24 hours. They are controlled by both external factors (exogenous zeitgebers, or EZs) and internal factors (endogenous pacemakers, or EPs.) The two main EZs that affect the sleep/wake cycle, the best example of a circadian rhythm, are light (either natural or artificial) and social cues such as the sleep/wake cycles and behaviour of those around us. The two main EPs that affect the sleep/wake cycle are the SCN (suprachiasmatic nucleus) and the pineal gland. The SCN is located in the hypothalamus, just behind the eyes, and detects changes in light levels. When a decrease is detected, it stimulates the pineal gland, also in the brain, to release melatonin, a hormone that induces tiredness. When an increase is detected, the SCN stimulates the pineal gland to cease melatonin release, which helps wake us up when the sun rises in the morning. Thus, the hormone melatonin helps us regulate the sleep/wake cycle.

Stephan and Zucker (1972) provide supporting evidence for the role of the SCN in controlling circadian rhythms. They compared a control group of healthy rats to a group of rats that had undergone surgery to destroy their suprachiasmatic nuclei. Whereas the rats had previously maintained constant and reliable patterns of eating and movement, these patterns disappeared in the group whose SCN had been destroyed, replaced with unpredictable, patternless behaviour - the rats were unable to maintain a steady circadian rhythm as a result of the brain damage. Even though the rats were kept in exactly the same environment, those with damaged SCNs lost their circadian rhythms, demonstrating that the SCN is a crucial EP in the control of circadian rhythms.

In some respects, this study was highly scientific, taking place in a controlled laboratory environment and using a control group to help establish cause and effect. This would normally mean that the study is highly generalisable, but despite its scientific replicability, care must be taken when generalising to humans, due to the nature of the sample. There are likely to be large physiological and neuroanatomical differences between the mechanisms that control the sleep/wake cycle in rats and in humans, so results from this study cannot be confidently extrapolated to humans.

Another important issue raised by the use of non-human animals in this study is one of ethics. The level of harm the rats were subjected to was far from insignificant - 14 of the 25 rats died during surgery or as a result of complications, making the study ethically questionable, even when factoring in the (small) scientific and social benefits of the results. Also, the severity of the surgery calls the experiment's validity into question - as over half of the sample died from it, it is highly possible that the surgery affected the rats' brains in some way unrelated to the SCN that could explain the elimination of circadian rhythms.

Further supporting evidence for the role of EPs in the control of circadian rhythms comes from Morgan (1995), who removed the SCN from hamsters. The hamsters without an SCN had their circadian rhythms eliminated, but the rhythms were restored when the SCN cells were transplanted back in. This study clearly demonstrates the role of the SCN as an EP in circadian rhythm control.

Generalisability to humans may be an issue, as there are definite physiological differences between the brains of humans and hamsters. Additionally, humans do not live in isolation as the hamsters did in this study, so other factors proven to have an influence on circadian control such as social cues affect the human sleep/wake cycle in ways that this study ignored.

Again, the validity of this study may be called into question, as the major operation may have contributed to the change in behaviour, weakening the supporting evidence for the role of EPs in controlling circadian rhythms.

However, a 1975 study by Siffre challenged the importance of EPs in the control of circadian rhythms. Siffre spent 179 days underground in a cave with no natural light or social cues to act as EZs, only artificial light on demand. During his stay, he recorded body temperature (another circadian rhythm), heart rate, blood pressure and sleep/wake pattern. His sleep/wake cycle extended from 24 to between 25 and 32 hours despite a fully functional SCN and pineal gland - his days became "longer." On the 179th day, by his own count it was only the 151st day. This study demonstrates the importance of EZs such as light and social cues in the control of circadian rhythms, showing that EPs such as the SCN alone are not enough - EZs are required as well in order to keep the sleep/wake cycle at a steady 24 hours. However, it does demonstrate that EPs can provide some regulation of circadian rhythms without light or social cues, as once the rhythms had lengthened to 25-32 hours, they remained fairly constant throughout.
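(As a rough check, using only the figures quoted above rather than anything from Siffre's own report: 179 external days is roughly 179 × 24 = 4,296 hours, and 4,296 ÷ 151 subjective days ≈ 28.4 hours per subjective "day" - comfortably within the 25-32 hour range recorded, so the two sets of numbers are consistent.)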

Being a case study, an issue with Siffre's research is that it lacks generalisability, as only one man was studied. This means that results are likely to be unrepresentative of the general population, and care should be taken when applying them as such. Studying only a male participant is also problematic, as hormonal or neurological sex differences could mean that the mechanisms that control circadian rhythms in males and females function slightly differently, so it would be beta bias to apply Siffre's results to women too.

Luce and Segal studied circadian rhythms in the indigenous population of the Arctic Circle, where it is light throughout the summer months and dark throughout the winter months, meaning light cannot function properly as an EZ. Despite this, people who live there maintain a steady sleep/wake cycle, sleeping roughly 7 hours per night, due to the function of social cues as an EZ. This study supports the role of social cues as an EZ, but suggests that light is not as crucial as otherwise thought.

Cultural or ethnic differences in the indigenous people of the Arctic Circle could mean that these results cannot be accurately generalised to the global population, and it would be imposing an etic to attempt this. Having grown up and lived all their lives accustomed to this annual pattern of light, it is possible that they have developed biological or social mechanisms that allow them to function normally using only social cues as an EZ rather than light, and we shouldn't assume that the sleep/wake cycle of non-natives would regulate itself in the same way if exposed to the same patterns.


Conclusion


In conclusion, it seems that both exogenous zeitgebers and endogenous pacemakers are required for the healthy regulation of circadian rhythms - it is overly reductionist to study one in isolation, or to claim that EPs such as the SCN and pineal gland alone are responsible for controlling such complicated processes as the sleep/wake cycle and body temperature. This view ignores cognitive and social factors that play a role in the control of human sleep behaviour. Humans are able to make appraisals of situations and of the behaviour of others, and these factors play an important role in the control of circadian rhythms. Sleep is not purely a biological behaviour, and other approaches should not be completely ignored.


Additionally, claiming that human sleep behaviour is only a result of EPs and EZs is too deterministic, as humans generally have the free will to decide when we want to sleep or choose to partake in activities that will influence our sleep/wake cycle, thus altering the rhythm ourselves. This is demonstrated by everyone who alters their own sleep/wake cycles in order to participate in shift work. Born et al found higher ACTH levels in the blood of people who made themselves get up earlier, suggesting that even biological processes and factors can be influenced by free will.

Monday 2 November 2015

Schizophrenia - the psychological explanation

I think this topic will follow on well from the biological approach to schizophrenia - it's a little more complex, bringing together the social, psychodynamic and cognitive approaches. Psychological treatments should follow on from this shortly - CBT and CBFT should be coming up soon. In the exam, it's highly unlikely that a question will ask about a specific explanation - questions into this area are much more likely to be "discuss 1/2/2+ psychological explanations for schizophrenia" rather than "discuss the psychodynamic/cognitive/social approach to schizophrenia."

Black: AO1 - Description
Blue: AO2 - Evaluation - studies
Red: AO2 - Evaluation - evaluative points


Psychodynamic explanations of schizophrenia


The psychodynamic approach rose to popularity in the mid-20th century as an environmental explanation, emphasising the causal role of the family in the development of schizophrenia. It explains schizophrenia as a regression to the Id-dominated oral stage, with little awareness of the outside world. "Primary narcissism" develops - delusions arise from the child feeling threatened and persecuted by the  outside world, but also feeling omnipotent over their internal world.

The schizophrenogenic mother


Fromm-Reichman and Kasanin were key psychologists in pioneering the concept of a "schizophrenogenic" mother, who was not schizophrenic herself, but would cause the development of schizophrenia in her children due to her treatment of them. The two central traits of the schizophrenogenic mother are being domineering (maternal overprotection) yet cold and uninvested (maternal rejection.)

Kasanin (1934) studied the parents of 45 schizophrenics and found maternal rejection in 2 patients, and maternal overprotection in 33. These results suggest that overprotection is the most significant quality of the archetypal schizophrenogenic mother, supporting the hypothesis of certain maternal behaviours inducing the development of schizophrenia in their children. 

Kasanin gathered data through interviews and case-report studies, which are prone to methodological problems such as researcher bias and subjectivity that reduce internal and external validity. The case reports were retrospective, so details may have been recalled incorrectly, biased towards reporting maternal overprotection due to leading questions, or false due to social desirability bias. Also, the patients' significant mental disturbances, possible substance abuse, and chronic use of antipsychotic medications all contribute towards a mental state in which information from early life may not be recalled correctly.

Schofield and Balian also studied the early lives of schizophrenic patients. The only significant difference they found between schizophrenics and non-schizophrenics was in the quality of maternal relationships - schizophrenics were more likely to have had less affectionate mothers.

Schofield and Balian's study was a retrospective, in-depth interview on the profoundly mentally ill, many of whom had histories of substance abuse and chronic use of antipsychotic medications, meaning that information was likely to be unreliable.

Mischler (1968) carried out an observation of mothers with schizophrenic children and found the mothers to be aloof, unresponsive and emotionally distant - but only towards their schizophrenic children, behaving normally towards their non-schizophrenic children. This raises an important issue of cause and effect - the coldness and distance attributed to the schizophrenogenic mother may be her response to psychological disturbances in the child.

Parker criticised this theory by suggesting that there is no archetypal schizophrenogenic mother - there is a parental type distinguished by hostility, criticism and intrusiveness, but this type is not particularly overrepresented among the parents of schizophrenics.

Hinsie and Campbell criticised this hypothesis for ignoring the fact that many mothers fulfil Kasanin's criteria for schizophrenogenesis, yet very few of them have children who develop schizophrenia, and not all schizophrenics have the archetypal schizophrenogenic mother - so the hypothesis is reductionist in suggesting that the disorder is only a result of maternal relationships. The hypothesis is also reductionist in its failure to take evidence for a biological basis into account, such as the evidence for a significant genetic component.

Marital schism, marital skew and double bind


Lidz proposed the concept of "marital skew" and "marital schism" being traits found in the relationship between the parents of a schizophrenic. Marital schism refers to the open hostility and criticism that occurs when the parents are unable to adopt role reciprocity (the ability to understand each other's goals, roles and motivations.) Marital skew refers to the tendency of one parent to dominate interaction - usually an intrusive and domineering mother and a distant, passive father.

A problem with this theory is the inability to isolate the direction of the cause and effect relationship between marital skew and schizophrenia. The early symptoms of schizophrenia in childhood and the resultant psychological vulnerability in the child could cause one parent to be more involved than the other.

However, Lidz did find supporting evidence for the role of the family in the development of schizophrenia - he found that 90% of schizophrenics have a family background that is disturbed in some way, while 60% have one or both parents suffering from a serious personality disorder.

Bateson proposed the idea of "double bind" scenarios being responsible for schizophrenia formation - receiving conflicting emotional messages from the parents in early childhood, for example, emotional warmth one day, withdrawal and hostility the next, leads to the child losing their grip on reality and seeing their own feelings as unreliable - contributing towards schizophrenia development.

Kennedy analysed letters sent between schizophrenics and their parents, and compared the results to a control group. The results showed evidence of double bind scenarios but, like Parker's criticism of the schizophrenogenic mother hypothesis, double bind scenarios were not particularly over-represented among schizophrenic patients. Double bind scenarios were also observed in the analysis of letters between non-schizophrenics and their parents - he concluded that the majority of people receive mixed messages, but most don't develop schizophrenia, so double bind theory cannot on its own explain the illness.

Overall evaluation of psychodynamic explanations


The proposed causal role of the family lacks reliable and objective empirical evidence, and any relationship established so far is only correlational.

However, much research suggests that an unstable family background may increase the risk of schizophrenia in children who already have a biological predisposition - Sorri et al's study of Swedish children of schizophrenics raised by adoptive families found that the quality of adoptive parenting was the most important factor in determining whether the children grew up to develop schizophrenia. Wahlberg (2000) examined earlier data and concluded that environmental factors such as family communication can strongly affect the chance of schizophrenia developing in children with a genetic predisposition to the disorder.

Neil suggested that political and cultural conditions post-WWII influenced psychodynamic theories of schizophrenia, as psychologists such as Bowlby placed greater emphasis on the role of the mother in the early development of the child. It is possible that the sudden focus on maternal roles led to the scientific popularity of the schizophrenogenic mother hypothesis, rather than the theory having any specific credibility.

Cause and effect is difficult to determine when carrying out research in this area - early schizophrenic symptoms in a child can put significant levels of stress upon a family, causing potential instability and disturbance.


Cognitive explanations of schizophrenia


The cognitive approach seeks to explain schizophrenia as the result of faulty information processing. Frith explains it as a result of faulty "metarepresentation", the cognitive process that allows us to reflect on our thoughts and behaviour, generate thoughts, ideas and intentions, and reflect on the thoughts and behaviour of others.

Metarepresentation takes place through the action of two systems - the "supervisory attention system" that is responsible for self-generated actions, and the "central monitoring system", that is responsible for recognising our thoughts as our own, and external voices as belonging to others. Problems with the supervisory attention system lead to the negative symptoms such as alogia, catatonia and apathy, while problems with the central monitoring system lead to the positive symptoms such as hallucinations, delusions and thought disturbances.

Frith (1980) carried out a card guessing game with groups of both schizophrenics and non-schizophrenics, guessing whether a drawn card would be black or red. Non-schizophrenics made logical choices, taking into account probabilities and cards already drawn. Schizophrenics made very rigid decisions, finding it difficult to take self-generated cognitive actions and ideas into account, as well as probabilities.

Frith and Done (1986) carried out a verbal fluency assessment of schizophrenics and non-schizophrenics, where they were given a category and asked to generate lists. Schizophrenics performed very poorly in this task compared to the control group. In a similar visual fluency task involving categorisation, they performed equally poorly.

Bentall (1991) had schizophrenics either generate words for a list, or read off the list. A week later, they were read the words used in the initial test, and asked if they'd generated them or read them a week ago. Compared to a control group, schizophrenics performed very poorly.

Lots of empirical evidence supports the cognitive explanation - the above studies support the concept of some very definite cognitive impairment in schizophrenics, such as an inability to recognise self-generated thoughts. However, the patients' history of strong antipsychotic medication could account for the cognitive impairment in these tests.

The cognitive explanation manages to explain both the positive and negative symptoms of schizophrenia, as opposed to the dopamine hypothesis, which only manages to explain the positive symptoms. However, it only explains the symptoms of schizophrenia, not the causes - what causes the metarepresentative faults in the first place?

Hemsley (1993) explained schizophrenic symptoms as a result of an inability to activate schemas. Schemas develop in early childhood as a way to categorise and process information from the outside world - if these fail to activate correctly, self-generated sensory information could be interpreted as external, causing an auditory hallucination.

This theory manages to explain the origins of auditory hallucinations, but has little empirical evidence to support it.


Social explanations of schizophrenia


Social causation theory seeks to explain the overrepresentation of schizophrenics in poor and deprived urban populations as a result of factors such as poor education, diet, healthcare, access to drugs, unemployment, overcrowding and stress.

Social drift theory suggests that schizophrenia development is not a consequence of deprivation but a cause of it - schizophrenic symptoms lead to economic and social hardship, as sufferers are unable to hold down jobs, mortgages and relationships, which leads to them moving into poorer urban areas.

Castle (1993) studied Camberwell, a deprived area of London, and found that the majority of schizophrenics were born locally, as opposed to having moved there after developing schizophrenia, supporting social causation theory and challenging social drift theory.

However, potential effects of drift cannot be ruled out - it is possible that the schizophrenics born there also had schizophrenic parents who moved there due to socioeconomic hardship - and evidence suggests a definite genetic basis to schizophrenia.

Overcrowding is a common issue affecting wellbeing in deprived areas, leading to greater exposure to viruses - if the viral explanation is true, that could explain how socioeconomic hardship increases the risk of schizophrenia development. Malnutrition is also a problem - poverty leads to malnutrition, which leads to illness and possibly abnormalities in the development of brain anatomy, another theorised explanation of schizophrenia.